Merge "Duplicate of elk_metrics_6x to elk_metrics_7x"

This commit is contained in:
Zuul 2019-07-30 17:29:09 +00:00 committed by Gerrit Code Review
commit b35f7f9286
194 changed files with 16967 additions and 0 deletions

elk_metrics_7x/README.rst Normal file

@@ -0,0 +1,630 @@
Install ELK with beats to gather metrics
########################################
:tags: openstack, ansible
..
About this repository
---------------------
This set of playbooks will deploy an Elastic Stack cluster (Elasticsearch,
Logstash, Kibana) with beats to gather metrics from hosts and store them in
the Elastic Stack.
**These playbooks require Ansible 2.5+.**
High-level overview of the Elastic-Stack infrastructure these playbooks will
build and operate against.
.. image:: assets/Elastic-Stack-Diagram.svg
:scale: 50 %
:alt: Elasticsearch Architecture Diagram
:align: center
OpenStack-Ansible Integration
-----------------------------
These playbooks can be used with a standalone inventory or as an integrated
part of an OpenStack-Ansible deployment. For a simple example of standalone
inventory, see `test-inventory.yml <tests/inventory/test-inventory.yml>`_.
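A minimal sketch of a standalone run, assuming the playbooks are executed from
this directory, would be:

.. code-block:: bash

   ansible-playbook -i tests/inventory/test-inventory.yml site.yml
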
Optional | Load balancer configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Configure the Elasticsearch endpoints:
While the Elastic stack cluster does not need a load balancer to scale, it is
useful when accessing the Elasticsearch cluster using external tooling. Tools
like OSProfiler, Grafana, etc will all benefit from being able to interact
with Elasticsearch using the load balancer. This provides better fault
tolerance especially when compared to connecting to a single node.
The following section can be added to the `haproxy_extra_services` list to
create an Elasticsearch backend. The ingress port used to connect to
Elasticsearch is **9201**. The backend port is **9200**. If this backend is
set up, make sure you set the `internal_lb_vip_address` on the CLI or within a
known variable file which will be sourced at runtime. If using HAProxy, edit
the `/etc/openstack_deploy/user_variables.yml` file and add the following
lines.
.. code-block:: yaml
haproxy_extra_services:
- service:
haproxy_service_name: elastic-logstash
haproxy_ssl: False
haproxy_backend_nodes: "{{ groups['kibana'] | default([]) }}" # Kibana nodes are also Elasticsearch coordination nodes
haproxy_port: 9201 # This is set using the "elastic_hap_port" variable
haproxy_check_port: 9200 # This is set using the "elastic_port" variable
haproxy_backend_port: 9200 # This is set using the "elastic_port" variable
haproxy_balance_type: tcp
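
Once the load balancer has been reconfigured, the Elasticsearch backend can be
checked through the ingress port (a quick sanity check, assuming the VIP is
reachable from the deployment host):

.. code-block:: bash

   curl http://<internal_lb_vip_address>:9201/_cluster/health?pretty
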
Configure the Kibana endpoints:
It is recommended to use a load balancer with Kibana. As with Elasticsearch, a
load balancer is not required; however, without one users will need to
connect directly to a single Kibana node to access the dashboard. If a load
balancer is present it can provide a highly available address for users to
access a pool of Kibana nodes, which provides a much better user experience.
If using HAProxy, edit the `/etc/openstack_deploy/user_variables.yml` file and
add the following lines.
.. code-block:: yaml
haproxy_extra_services:
- service:
haproxy_service_name: kibana
haproxy_ssl: False
haproxy_backend_nodes: "{{ groups['kibana'] | default([]) }}"
haproxy_port: 81 # This is set using the "kibana_nginx_port" variable
haproxy_balance_type: tcp
Configure the APM endpoints:
It is recommended to use a load balancer for submitting Application
Performance Monitoring data to the APM server. A load balancer will provide
a highly available address which APM clients can use to connect to a pool of
APM nodes. If using HAProxy, edit the `/etc/openstack_deploy/user_variables.yml`
file and add the following lines.
.. code-block:: yaml
haproxy_extra_services:
- service:
haproxy_service_name: apm-server
haproxy_ssl: False
haproxy_backend_nodes: "{{ groups['apm-server'] | default([]) }}"
haproxy_port: 8200 # this is set using the "apm_port" variable
haproxy_balance_type: tcp
Optional | add OSProfiler to an OpenStack-Ansible deployment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To initialize the `OSProfiler` module within OpenStack, the following overrides
can be applied in a user variables file. The HMAC key needs to be defined
consistently throughout the environment.
Full example to initialize the `OSProfiler` modules throughout an
OpenStack-Ansible deployment.
.. code-block:: yaml
profiler_overrides: &os_profiler
profiler:
enabled: true
trace_sqlalchemy: true
hmac_keys: "UNIQUE_HMACKEY" # This needs to be set consistently throughout the deployment
connection_string: "elasticsearch://{{ internal_lb_vip_address }}:9201"
es_doc_type: "notification"
es_scroll_time: "2m"
es_scroll_size: "10000"
filter_error_trace: "false"
aodh_aodh_conf_overrides: *os_profiler
barbican_config_overrides: *os_profiler
ceilometer_ceilometer_conf_overrides: *os_profiler
cinder_cinder_conf_overrides: *os_profiler
designate_designate_conf_overrides: *os_profiler
glance_glance_api_conf_overrides: *os_profiler
gnocchi_conf_overrides: *os_profiler
heat_heat_conf_overrides: *os_profiler
horizon_config_overrides: *os_profiler
ironic_ironic_conf_overrides: *os_profiler
keystone_keystone_conf_overrides: *os_profiler
magnum_config_overrides: *os_profiler
neutron_neutron_conf_overrides: *os_profiler
nova_nova_conf_overrides: *os_profiler
octavia_octavia_conf_overrides: *os_profiler
rally_config_overrides: *os_profiler
sahara_conf_overrides: *os_profiler
swift_swift_conf_overrides: *os_profiler
tacker_tacker_conf_overrides: *os_profiler
trove_config_overrides: *os_profiler
If a deployer wishes to use multiple keys, they can do so by providing a
comma-separated list.
.. code-block:: yaml
profiler_overrides: &os_profiler
profiler:
hmac_keys: "key1,key2"
To add the `OSProfiler` section to an existing set of overrides, the `yaml`
section can be added or dynamically appended to a given hash using `yaml` tags.
.. code-block:: yaml
profiler_overrides: &os_profiler
profiler:
enabled: true
hmac_keys: "UNIQUE_HMACKEY" # This needs to be set consistently throughout the deployment
connection_string: "elasticsearch://{{ internal_lb_vip_address }}:9201"
es_doc_type: "notification"
es_scroll_time: "2m"
es_scroll_size: "10000"
filter_error_trace: "false"
# Example to merge the os_profiler tag to into an existing override hash
nova_nova_conf_overrides:
section1_override:
key: "value"
<<: *os_profiler
While the `osprofiler` and `elasticsearch` libraries should be installed
within all virtual environments by default, it's possible they're missing
within a given deployment. To install these dependencies throughout the
cluster without having to invoke a *repo-build*, the following *adhoc*
Ansible command can be used.
The version of the `elasticsearch` python library should match the major
version of Elasticsearch being deployed within the environment.
.. code-block:: bash
ansible -m shell -a 'find /openstack/venvs/* -maxdepth 0 -type d -exec {}/bin/pip install osprofiler "elasticsearch>=7.0.0,<8.0.0" --isolated \;' all
Once the overrides are in place, the **openstack-ansible** playbooks will need
to be rerun. To simply inject these options into the system, a deployer can
use the `*-config` tags that are a part of all `os_*` roles. The following
example will run the **config** tag on **ALL** OpenStack playbooks.
.. code-block:: bash
openstack-ansible setup-openstack.yml --tags "$(cat setup-openstack.yml | grep -wo 'os-.*' | awk -F'-' '{print $2 "-config"}' | tr '\n' ',')"
Once the `OSProfiler` module has been initialized, tasks can be profiled on
demand by using the `--profile` or `--os-profile` switch in the various
OpenStack clients, along with one of the defined HMAC keys.
Legacy profile example command.
.. code-block:: bash
glance --profile key1 image-list
Modern profile example command, requires `python-openstackclient >= 3.4.1` and
the `osprofiler` library.
.. code-block:: bash
openstack --os-profile key2 image list
If the client library is not installed in the same path as the
`python-openstackclient` client, run the following command to install the
required library.
.. code-block:: bash
pip install osprofiler
Optional | run the haproxy-install playbook
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: bash
cd /opt/openstack-ansible/playbooks/
openstack-ansible haproxy-install.yml --tags=haproxy-service-config
Setup | system configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Clone the elk-osa repo
.. code-block:: bash
cd /opt
git clone https://github.com/openstack/openstack-ansible-ops
Copy the env.d file into place
.. code-block:: bash
cd /opt/openstack-ansible-ops/elk_metrics_7x
cp env.d/elk.yml /etc/openstack_deploy/env.d/
Copy the conf.d file into place
.. code-block:: bash
cp conf.d/elk.yml /etc/openstack_deploy/conf.d/
In **elk.yml**, list your logging hosts under `elastic-logstash_hosts` to
create the Elasticsearch cluster in multiple containers, and one logging host
under `kibana_hosts` to create the Kibana container.
.. code-block:: bash
vi /etc/openstack_deploy/conf.d/elk.yml
Create the containers
.. code-block:: bash
cd /opt/openstack-ansible/playbooks
openstack-ansible lxc-containers-create.yml --limit elk_all
Deploying | Installing with embedded Ansible
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If this is being executed on a system that already has Ansible installed but
is incompatible with these playbooks, the script `bootstrap-embedded-ansible.sh`
can be sourced to grab an embedded version of Ansible prior to executing the
playbooks.
.. code-block:: bash
source bootstrap-embedded-ansible.sh
Deploying | Manually resolving the dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This playbook has external role dependencies. If Ansible is not installed with
the `bootstrap-ansible.sh` script these dependencies can be resolved with the
``ansible-galaxy`` command and the ``ansible-role-requirements.yml`` file.
* Example galaxy execution
.. code-block:: bash
ansible-galaxy install -r ansible-role-requirements.yml
Once the dependencies are set, make sure to set the action plugin path to the
location of the `config_template` action directory. This can be done using the
environment variable `ANSIBLE_ACTION_PLUGINS` or through the use of an
`ansible.cfg` file.
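A minimal sketch, assuming ``ansible-galaxy`` installed the roles under
`/etc/ansible/roles`:

.. code-block:: bash

   export ANSIBLE_ACTION_PLUGINS=/etc/ansible/roles/config_template/action

The same path can instead be set as `action_plugins` under the `[defaults]`
section of an `ansible.cfg` file.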
Deploying | The environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Install master/data Elasticsearch nodes on the elastic-logstash containers,
deploy logstash, deploy Kibana, and then deploy all of the service beats.
.. code-block:: bash
cd /opt/openstack-ansible-ops/elk_metrics_7x
ansible-playbook site.yml $USER_VARS
* The `openstack-ansible` command can be used if the version of Ansible on the
  system is greater than **2.5**. This will automatically pick up the
  necessary group_vars for hosts in an OSA deployment.
* If required, add ``-e@/opt/openstack-ansible/inventory/group_vars/all/all.yml``
  to import sufficient OSA group variables to define the OpenStack release
  (see the combined example after this list). Journalbeat will then deploy
  onto all hosts/containers for releases prior to Rocky, and hosts only for
  Rocky onwards. If the variable ``openstack_release`` is undefined the
  default behaviour is to deploy Journalbeat to hosts only.
* Alternatively, if using the embedded Ansible, create a symlink to include
  all of the OSA group_vars. These are not available by default with the
  embedded Ansible and can be symlinked into the ops repo.
.. code-block:: bash
ln -s /opt/openstack-ansible/inventory/group_vars /opt/openstack-ansible-ops/elk_metrics_7x/group_vars
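
For example, a combined invocation that imports the OSA group variables (a
sketch assuming the default OSA checkout path):

.. code-block:: bash

   ansible-playbook site.yml $USER_VARS \
     -e @/opt/openstack-ansible/inventory/group_vars/all/all.yml
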
The individual playbooks found within this repository can be independently run
at any time.
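For instance, only the Kibana dashboard playbook referenced later in this
document could be run on its own:

.. code-block:: bash

   ansible-playbook setupKibanaDashboard.yml $USER_VARS
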
Architecture | Data flow
^^^^^^^^^^^^^^^^^^^^^^^^
This diagram outlines the data flow from within an Elastic-Stack deployment.
.. image:: assets/Elastic-dataflow.svg
:scale: 50 %
:alt: Elastic-Stack Data Flow Diagram
:align: center
Optional | Enable uwsgi stats
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Config overrides can be used to make uwsgi stats available on Unix domain
sockets. Any `/tmp/*uwsgi-stats.sock` file will be picked up by Metricbeat.
.. code-block:: yaml
keystone_uwsgi_ini_overrides:
uwsgi:
stats: "/tmp/keystone-uwsgi-stats.sock"
cinder_api_uwsgi_ini_overrides:
uwsgi:
stats: "/tmp/cinder-api-uwsgi-stats.sock"
glance_api_uwsgi_ini_overrides:
uwsgi:
stats: "/tmp/glance-api-uwsgi-stats.sock"
heat_api_uwsgi_ini_overrides:
uwsgi:
stats: "/tmp/heat-api-uwsgi-stats.sock"
heat_api_cfn_uwsgi_ini_overrides:
uwsgi:
stats: "/tmp/heat-api-cfn-uwsgi-stats.sock"
nova_api_metadata_uwsgi_ini_overrides:
uwsgi:
stats: "/tmp/nova-api-metadata-uwsgi-stats.sock"
nova_api_os_compute_uwsgi_ini_overrides:
uwsgi:
stats: "/tmp/nova-api-os-compute-uwsgi-stats.sock"
nova_placement_uwsgi_ini_overrides:
uwsgi:
stats: "/tmp/nova-placement-uwsgi-stats.sock"
octavia_api_uwsgi_ini_overrides:
uwsgi:
stats: "/tmp/octavia-api-uwsgi-stats.sock"
sahara_api_uwsgi_ini_overrides:
uwsgi:
stats: "/tmp/sahara-api-uwsgi-stats.sock"
ironic_api_uwsgi_ini_overrides:
uwsgi:
stats: "/tmp/ironic-api-uwsgi-stats.sock"
magnum_api_uwsgi_ini_overrides:
uwsgi:
stats: "/tmp/magnum-api-uwsgi-stats.sock"
Rerun all of the **openstack-ansible** playbooks to enable these stats. Use
the `${service_name}-config` tags on all of the `os_*` roles. It's possible to
auto-generate the tags list with the following command.
.. code-block:: bash
openstack-ansible setup-openstack.yml --tags "$(cat setup-openstack.yml | grep -wo 'os-.*' | awk -F'-' '{print $2 "-config"}' | tr '\n' ',')"
Optional | add Kafka Output format
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To send data from Logstash to Kafka, create the `logstash_kafka_options`
variable. This variable is used as a generator and creates a Kafka output
configuration file using the key/value pairs as options.
.. code-block:: yaml
logstash_kafka_options:
codec: json
topic_id: "elk_kafka"
ssl_key_password: "{{ logstash_kafka_ssl_key_password }}"
ssl_keystore_password: "{{ logstash_kafka_ssl_keystore_password }}"
ssl_keystore_location: "/var/lib/logstash/{{ logstash_kafka_ssl_keystore_location | basename }}"
ssl_truststore_location: "/var/lib/logstash/{{ logstash_kafka_ssl_truststore_location | basename }}"
ssl_truststore_password: "{{ logstash_kafka_ssl_truststore_password }}"
bootstrap_servers:
- server1.local:9092
- server2.local:9092
- server3.local:9092
client_id: "elk_metrics_7x"
compression_type: "gzip"
security_protocol: "SSL"
id: "UniqueOutputID"
For a complete list of all options available within the Logstash Kafka output
plugin please review the `following documentation <https://www.elastic.co/guide/en/logstash/current/plugins-outputs-kafka.html>`_.
Optional config:
The following variables are optional and correspond to the example
`logstash_kafka_options` variable.
.. code-block:: yaml
logstash_kafka_ssl_key_password: "secrete"
logstash_kafka_ssl_keystore_password: "secrete"
logstash_kafka_ssl_truststore_password: "secrete"
# SSL certificates in Java KeyStore format
logstash_kafka_ssl_keystore_location: "/root/kafka/keystore.jks"
logstash_kafka_ssl_truststore_location: "/root/kafka/truststore.jks"
When using the Kafka output plugin, the options
`logstash_kafka_ssl_keystore_location` and
`logstash_kafka_ssl_truststore_location` will automatically copy a local SSL
keystore and truststore to the Logstash nodes. These options are string values
and assume the deployment nodes have local access to the files.
Optional | add Grafana visualizations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
See the grafana directory for more information on how to deploy Grafana. When
deploying Grafana, source the variable file from ELK in order to
automatically connect Grafana to the Elasticsearch datastore and import
dashboards. Including the variable file is as simple as adding
``-e @../elk_metrics_7x/vars/variables.yml`` to the grafana playbook
run.
Included dashboards:
* https://grafana.com/dashboards/5569
* https://grafana.com/dashboards/5566
Example command using the embedded Ansible from within the grafana directory.
.. code-block:: bash
ansible-playbook ${USER_VARS} installGrafana.yml \
-e @../elk_metrics_7x/vars/variables.yml \
-e 'galera_root_user="root"' \
-e 'galera_address={{ internal_lb_vip_address }}'
Optional | add kibana custom dashboard
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to use a custom dashboard directly on your Kibana, you can run
the playbook below. The dashboard uses Filebeat to collect the logs of your
deployment.
.. code-block:: bash
ansible-playbook setupKibanaDashboard.yml $USER_VARS
Overview of kibana custom dashboard
.. image:: assets/openstack-kibana-custom-dashboard.png
:scale: 50 %
:alt: Kibana Custom Dashboard
:align: center
Optional | Customize Elasticsearch cluster configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Cluster configuration can be augmented using several variables which will force
a node to use a given role.
Available roles are *data*, *ingest*, and *master*.
* ``elasticsearch_node_data``: This variable will override the automatic node
  determination and set a given node to be a "data" node.
* ``elasticsearch_node_ingest``: This variable will override the automatic node
  determination and set a given node to be an "ingest" node.
* ``elasticsearch_node_master``: This variable will override the automatic node
  determination and set a given node to be a "master" node.
Example setting override options within inventory.
.. code-block:: yaml
hosts:
children:
elastic-logstash:
hosts:
elk1:
ansible_host: 10.0.0.1
ansible_user: root
elasticsearch_node_master: true
elasticsearch_node_data: false
elasticsearch_node_ingest: false
elk2:
ansible_host: 10.0.0.2
ansible_user: root
elasticsearch_node_master: false
elasticsearch_node_data: true
elasticsearch_node_ingest: false
elk3:
ansible_host: 10.0.0.3
ansible_user: root
elasticsearch_node_master: false
elasticsearch_node_data: false
elasticsearch_node_ingest: true
elk4:
ansible_host: 10.0.0.4
ansible_user: root
With the preceding inventory settings, **elk1** would be a master node, **elk2**
would be a data node, **elk3** would be an ingest node, and **elk4** would
auto-select a role.
Upgrading the cluster
---------------------
To upgrade the packages throughout the Elasticsearch cluster, set the package
state variable, `elk_package_state`, to "latest".
.. code-block:: bash
cd /opt/openstack-ansible-ops/elk_metrics_7x
ansible-playbook site.yml $USER_VARS -e 'elk_package_state="latest"'
Forcing the Elasticsearch cluster retention policy to refresh
-------------------------------------------------------------
To force the cluster retention policy to refresh, set `elastic_retention_refresh`
to "yes". When `elastic_retention_refresh` is set to "yes", the retention policy
will be forcibly refreshed across all hosts. This option should only be used when
the Elasticsearch storage array is modified on an existing cluster. Should the
Elasticsearch cluster size change (nodes added or removed), the retention policy
will automatically be refreshed on playbook execution.
.. code-block:: bash
cd /opt/openstack-ansible-ops/elk_metrics_7x
ansible-playbook site.yml $USER_VARS -e 'elastic_retention_refresh="yes"'
Troubleshooting
----------------
If everything goes bad, you can clean up with the following commands:
.. code-block:: bash
openstack-ansible /opt/openstack-ansible-ops/elk_metrics_7x/site.yml -e 'elk_package_state="absent"' --tags package_install
openstack-ansible /opt/openstack-ansible/playbooks/lxc-containers-destroy.yml --limit elk_all
Local testing
-------------
To test these playbooks within a local environment you will need a single
server with at least 8GiB of RAM and 40GiB of storage on root. Running an
`m1.medium` (OpenStack) flavor size is generally enough to get an environment
online.
To run the local functional tests, execute the `run-tests.sh` script from the
tests directory. This will create a 4-node Elasticsearch cluster, 1 Kibana node
with an Elasticsearch coordination process, and 1 APM node. The beats will be
deployed to the environment as if this were a production installation.
.. code-block:: bash
CLUSTERED=yes tests/run-tests.sh
After the test build is completed, the cluster will test its layout and ensure
processes are functioning normally. Logs for the cluster can be found at
`/tmp/elk-metrics-7x-logs`.
To rerun the playbooks after a test build, source the `tests/manual-test.rc`
file and follow the onscreen instructions.
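A sketch of that flow (the exact variables exported by `manual-test.rc` are
environment specific):

.. code-block:: bash

   source tests/manual-test.rc
   ansible-playbook site.yml
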
To clean up a test environment and start again from a bare server, the
`run-cleanup.sh` script can be used. This script is destructive and will purge
all `elk_metrics_7x` related services within the local test environment.
.. code-block:: bash
tests/run-cleanup.sh


@@ -0,0 +1,13 @@
---
- name: systemd_service
scm: git
src: https://git.openstack.org/openstack/ansible-role-systemd_service
version: master
- name: systemd_mount
scm: git
src: https://git.openstack.org/openstack/ansible-role-systemd_mount
version: master
- name: config_template
scm: git
src: https://git.openstack.org/openstack/ansible-config_template
version: master

File diff suppressed because one or more lines are too long (image asset, 1.0 MiB)

File diff suppressed because one or more lines are too long (image asset, 270 KiB)

Binary file not shown (image asset, 119 KiB).


@@ -0,0 +1 @@
../bootstrap-embedded-ansible/bootstrap-embedded-ansible.sh


@@ -0,0 +1,27 @@
# For the purposes of this example, the kibana nodes have been added to
# different host machines than the logging nodes. The intention here
# is to show that the different components can scale independently of
# one another.
kibana_hosts:
infra01:
ip: 172.22.8.24
infra02:
ip: 172.22.8.25
infra03:
ip: 172.22.8.26
elastic-logstash_hosts:
logging01:
ip: 172.22.8.27
logging02:
ip: 172.22.8.28
logging03:
ip: 172.22.8.29
apm-server_hosts:
logging01:
ip: 172.22.8.27
logging02:
ip: 172.22.8.28
logging03:
ip: 172.22.8.29


@@ -0,0 +1,251 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Create/Setup known indexes in Elasticsearch
hosts: "elastic-logstash[0]"
become: true
vars:
_elastic_refresh_interval: "{{ (elasticsearch_number_of_replicas | int) * 5 }}"
elastic_refresh_interval: "{{ (_elastic_refresh_interval > 0) | ternary(30, _elastic_refresh_interval) }}"
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_retention
post_tasks:
- name: Create beat indexes
uri:
url: http://127.0.0.1:9200/{{ item.name }}
method: PUT
body: "{{ item.index_options | to_json }}"
status_code: 200,400
body_format: json
register: elk_indexes
until: elk_indexes is success
retries: 3
delay: 30
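# Build the list of beat indexes to create: only beats with registered
# hosts and "make_index" enabled are included, and each index is created
# with best_compression and a 10000 total-field limit.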
with_items: |-
{% set beat_indexes = [] %}
{% for key, value in elastic_beat_retention_policy_hosts.items() %}
{% if ((value.hosts | length) > 0) and (value.make_index | default(false) | bool) %}
{%
set _index = {
'name': key,
'index_options': {
'settings': {
'index': {
'codec': 'best_compression',
'mapping': {
'total_fields': {
'limit': '10000'
}
},
'refresh_interval': elastic_refresh_interval
}
}
}
}
%}
{% set _ = beat_indexes.append(_index) %}
{% endif %}
{% endfor %}
{{ beat_indexes }}
- name: Create basic indexes
uri:
url: http://127.0.0.1:9200/{{ item.name }}
method: PUT
body: "{{ item.index_options | to_json }}"
status_code: 200,400
body_format: json
register: elk_indexes
until: elk_indexes is success
retries: 3
delay: 30
with_items:
- name: "_all/_settings?preserve_existing=true"
index_options:
index.queries.cache.enabled: "true"
indices.queries.cache.size: "5%"
- name: "_all/_settings"
index_options:
index.number_of_replicas: "{{ elasticsearch_number_of_replicas | int }}"
index.translog.durability: "async"
index.refresh_interval: "{{ ((elastic_refresh_interval | int) > 30) | ternary(30, elastic_refresh_interval) }}s"
- name: Check for basic index template
uri:
url: http://127.0.0.1:9200/_template/basic-index-template
method: HEAD
failed_when: false
register: check_basicIndexTemplate
until: check_basicIndexTemplate is success
retries: 3
delay: 30
- name: Delete basic index template
uri:
url: http://127.0.0.1:9200/_template/basic-index-template
method: DELETE
status_code: 200
register: delete_basicIndexTemplate
until: delete_basicIndexTemplate is success
retries: 3
delay: 30
when:
- check_basicIndexTemplate.status == 200
- name: Create basic index template
uri:
url: http://127.0.0.1:9200/_template/basic-index-template
method: PUT
body: "{{ index_option | to_json }}"
status_code: 200
body_format: json
register: create_basicIndexTemplate
until: create_basicIndexTemplate is success
retries: 3
delay: 30
vars:
index_option:
index_patterns: >-
{{
(elastic_beat_retention_policy_hosts.keys() | list)
| map('regex_replace', '(.*)', '\1-' ~ '*')
| list
}}
settings:
number_of_replicas: "{{ elasticsearch_number_of_replicas | int }}"
index:
mapping:
total_fields:
limit: "3072"
- name: Create custom monitoring index template
uri:
url: http://127.0.0.1:9200/_template/custom_monitoring
method: PUT
body: "{{ index_option | to_json }}"
status_code: 200
body_format: json
register: create_basicIndexTemplate
until: create_basicIndexTemplate is success
retries: 3
delay: 30
vars:
index_option:
template: ".monitoring*"
order: 1
settings:
number_of_replicas: "{{ elasticsearch_number_of_replicas | int }}"
number_of_shards: "{{ ((elasticsearch_number_of_replicas | int) * 2) + 1 }}"
- name: Create custom skydive index template
uri:
url: http://127.0.0.1:9200/_template/skydive
method: PUT
body: "{{ index_option | to_json }}"
status_code: 200
body_format: json
register: create_basicIndexTemplate
until: create_basicIndexTemplate is success
retries: 3
delay: 30
vars:
index_option:
template: "skydive*"
order: 1
settings:
number_of_replicas: "{{ elasticsearch_number_of_replicas | int }}"
number_of_shards: "{{ ((elasticsearch_number_of_replicas | int) * 2) + 1 }}"
- name: Create/Setup known indexes in Kibana
hosts: kibana
become: true
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_retention
post_tasks:
- name: Create kibana index patterns
uri:
url: "http://127.0.0.1:5601/api/saved_objects/index-pattern/{{ item.name }}"
method: POST
body: "{{ item.index_options | to_json }}"
status_code: 200,409
body_format: json
headers:
Content-Type: "application/json"
kbn-xsrf: "{{ inventory_hostname | to_uuid }}"
with_items: |-
{% set beat_indexes = [] %}
{% for key, value in elastic_beat_retention_policy_hosts.items() %}
{% if (value.hosts | length) > 0 %}
{%
set _index = {
'name': key,
'index_options': {
'attributes': {}
}
}
%}
{% if 'beat' in key %}
{% set _ = _index.index_options.attributes.__setitem__('title', (key ~ '-*')) %}
{% else %}
{% set _ = _index.index_options.attributes.__setitem__('title', (key ~ '*')) %}
{% endif %}
{% if value.timeFieldName is defined %}
{% set _ = _index.index_options.attributes.__setitem__('timeFieldName', (value.timeFieldName | string)) %}
{% endif %}
{% set _ = beat_indexes.append(_index) %}
{% endif %}
{% endfor %}
{% set _ = beat_indexes.append({'name': 'default', 'index_options': {'attributes': {'title': '*'}}}) %}
{{ beat_indexes }}
register: kibana_indexes
until: kibana_indexes is success
retries: 6
delay: 30
run_once: true
- name: Create basic indexes
uri:
url: "http://127.0.0.1:5601/api/kibana/settings/defaultIndex"
method: POST
body: "{{ item.index_options | to_json }}"
status_code: 200
body_format: json
headers:
Content-Type: "application/json"
kbn-xsrf: "{{ inventory_hostname | to_uuid }}"
with_items:
- name: "default"
index_options:
value: "default"
register: kibana_indexes
until: kibana_indexes is success
retries: 6
delay: 30
run_once: true
tags:
- server-install


@@ -0,0 +1,53 @@
---
component_skel:
apm-server:
belongs_to:
- elk_all
- apm_all
elastic-logstash:
belongs_to:
- elk_all
- elasticsearch
- elasticsearch_all
- logstash
- logstash_all
kibana:
belongs_to:
- elk_all
container_skel:
apm-server_container:
belongs_to:
- apm-server_containers
contains:
- apm-server
elastic-logstash_container:
belongs_to:
- elastic-logstash_containers
contains:
- elastic-logstash
kibana_container:
belongs_to:
- kibana_containers
contains:
- kibana
physical_skel:
apm-server_containers:
belongs_to:
- all_containers
apm-server_hosts:
belongs_to:
- hosts
elastic-logstash_containers:
belongs_to:
- all_containers
elastic-logstash_hosts:
belongs_to:
- hosts
kibana_containers:
belongs_to:
- all_containers
kibana_hosts:
belongs_to:
- hosts


@@ -0,0 +1,101 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Refresh kibana index-pattern
hosts: "kibana[0]"
become: true
gather_facts: false
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
tasks:
- name: Get index fields
uri:
url: "http://127.0.0.1:{{ kibana_port }}/api/saved_objects/_bulk_get"
method: POST
body:
- id: "{{ index_pattern }}"
type: "index-pattern"
status_code: 200,404
body_format: json
return_content: true
headers:
Content-Type: "application/json"
kbn-xsrf: "{{ inventory_hostname | to_uuid }}"
register: index_fields_return
until: index_fields_return is success
retries: 6
delay: 30
run_once: true
- name: Get index fields format
uri:
url: >-
http://127.0.0.1:{{ kibana_port }}/api/index_patterns/_fields_for_wildcard?pattern={{ index_pattern }}&meta_fields=["_source","_id","_type","_index","_score"]
method: GET
status_code: 200,404
return_content: true
headers:
Content-Type: "application/json"
kbn-xsrf: "{{ inventory_hostname | to_uuid }}"
register: index_fields_format_return
until: index_fields_format_return is success
retries: 6
delay: 30
run_once: true
- name: Refresh fields block
block:
- name: Set index-pattern refresh fact attributes
set_fact:
attributes: "{{ index_fields_return['json']['saved_objects'][0]['attributes'] }}"
- name: Set index-refresh fact
set_fact:
index_refresh_fact:
attributes:
fieldFormatMap: "{{ attributes['fieldFormatMap'] | string }}"
timeFieldName: "{{ attributes['timeFieldName'] }}"
title: "{{ attributes['title'] }}"
fields: "{{ index_fields_format_return['content'] | string }}"
- name: Put index fields
uri:
url: "http://127.0.0.1:{{ kibana_port }}/api/saved_objects/index-pattern/{{ index_pattern }}"
method: PUT
body: "{{ index_refresh_fact }}"
status_code: 200
body_format: json
timeout: 120
headers:
Content-Type: "application/json"
kbn-xsrf: "{{ inventory_hostname | to_uuid }}"
register: index_fields_return
until: index_fields_return is success
retries: 6
delay: 30
run_once: true
rescue:
- name: Notify deployer
debug:
msg: >-
Index pattern refresh was not possible at this time. Either there are no dashboards
loaded or the index being refreshed does not exist. While the task failed, this is
not a fatal error, so the play has been rescued.
run_once: true
when:
- index_fields_return.status == 200
- index_fields_format_return.status == 200


@@ -0,0 +1,51 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Install apm-server
hosts: apm-server
become: true
vars:
haproxy_ssl: false
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_apm_server
- role: elastic_rollup
index_name: apm
when:
- elastic_create_rollup | bool
tags:
- apm-server
- name: Setup apm-server rollup
hosts: elastic-logstash[0]
become: true
vars:
haproxy_ssl: false
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_rollup
index_name: apm
when:
- elastic_create_rollup | bool
tags:
- apm-server


@@ -0,0 +1,52 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Install Auditbeat
hosts: hosts
become: true
vars:
haproxy_ssl: false
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_auditbeat
tags:
- beat-install
- name: Setup auditbeat rollup
hosts: elastic-logstash[0]
become: true
vars:
haproxy_ssl: false
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_rollup
index_name: auditbeat
when:
- elastic_create_rollup | bool
tags:
- auditbeat
- import_playbook: fieldRefresh.yml
vars:
index_pattern: auditbeat-*


@@ -0,0 +1,30 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Install Curator
hosts: "elastic-logstash"
become: true
gather_facts: true
vars:
haproxy_ssl: false
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_curator
tags:
- beat-install


@@ -0,0 +1,27 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Install Elasticsearch
hosts: "elastic-logstash:kibana"
become: true
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elasticsearch
tags:
- server-install


@@ -0,0 +1,52 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Install Filebeat
hosts: hosts
become: true
vars:
haproxy_ssl: false
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_filebeat
tags:
- beat-install
- name: Setup filebeat rollup
hosts: elastic-logstash[0]
become: true
vars:
haproxy_ssl: false
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_rollup
index_name: filebeat
when:
- elastic_create_rollup | bool
tags:
- filebeat
- import_playbook: fieldRefresh.yml
vars:
index_pattern: filebeat-*


@@ -0,0 +1,66 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Set heartbeat host deployment group
hosts: kibana
gather_facts: false
connection: local
tasks:
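# Limit Heartbeat deployment to (at most) the first three hosts in the
# "kibana" group; see the "when" condition below.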
- name: Add hosts to dynamic inventory group
group_by:
key: heartbeat_deployment_targets
parents: kibana
when:
- inventory_hostname in groups['kibana'][:3]
tags:
- always
- name: Install Heartbeat
hosts: heartbeat_deployment_targets
become: true
vars:
haproxy_ssl: false
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_heartbeat
tags:
- beat-install
- name: Setup heartbeat rollup
hosts: elastic-logstash[0]
become: true
vars:
haproxy_ssl: false
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_rollup
index_name: heartbeat
when:
- elastic_create_rollup | bool
tags:
- heartbeat
- import_playbook: fieldRefresh.yml
vars:
index_pattern: heartbeat-*


@@ -0,0 +1,102 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Detect journalbeat host deployment group(s)
hosts: all
gather_facts: false
connection: local
tasks:
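# Containers are targeted only on releases prior to Rocky (18.0.0);
# physical hosts, or hosts without a physical_host var, always qualify.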
- name: Add hosts to dynamic inventory group
group_by:
key: journalbeat_deployment_containers
parents: all_journalbeat_deployments
when:
- openstack_release is defined and
openstack_release is version('18.0.0', 'lt')
- physical_host is defined and
physical_host != inventory_hostname
- name: Add hosts to dynamic inventory group
group_by:
key: journalbeat_deployment_hosts
parents: all_journalbeat_deployments
when:
- physical_host is undefined or
physical_host == inventory_hostname
tags:
- always
- name: Install Journalbeat
hosts: all_journalbeat_deployments
become: true
vars:
haproxy_ssl: false
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
pre_tasks:
- name: Check for journal directory
stat:
path: /var/log/journal
register: journal_dir
tags:
- always
- name: Halt this playbook if no journal is found
meta: end_play
when:
- not (journal_dir.stat.exists | bool) or
(ansible_service_mgr != 'systemd')
roles:
- role: elastic_journalbeat
tags:
- beat-install
- name: Setup journalbeat rollup
hosts: elastic-logstash[0]
become: true
vars:
haproxy_ssl: false
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
pre_tasks:
- name: Check for journal directory
stat:
path: /var/log/journal
register: journal_dir
tags:
- always
- name: Halt this playbook if no journal is found
meta: end_play
when:
- not (journal_dir.stat.exists | bool) or
(ansible_service_mgr != 'systemd')
roles:
- role: elastic_rollup
index_name: journalbeat
when:
- elastic_create_rollup | bool
tags:
- journalbeat


@@ -0,0 +1,26 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Install Kibana
hosts: kibana
become: true
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_kibana
tags:
- server-install


@@ -0,0 +1,26 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Install Logstash
hosts: elastic-logstash
become: true
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_logstash
tags:
- server-install


@@ -0,0 +1,52 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Install Metricbeat
hosts: all
become: true
vars:
haproxy_ssl: false
environment: "{{ deployment_environment_variables | default({}) }}"
vars_files:
- vars/variables.yml
roles:
- role: elastic_metricbeat
tags:
- beat-install
- name: Setup metricbeat rollup
hosts: elastic-logstash[0]
become: true
vars:
haproxy_ssl: false
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_rollup
index_name: metricbeat
when:
- elastic_create_rollup | bool
tags:
- metricbeat
- import_playbook: fieldRefresh.yml
vars:
index_pattern: metricbeat-*


@@ -0,0 +1,297 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Detect monitorstack host deployment group(s)
hosts: "hosts:all_containers"
gather_facts: false
connection: local
tasks:
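# Target nova compute, utility, and memcached hosts that are managed
# by systemd.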
- name: Add hosts to dynamic inventory group
group_by:
key: monitorstack_deployment
parents: monitorstack_all
when:
- inventory_hostname in (
(groups['nova_compute'] | default([])) |
union(groups['utility_all'] | default([])) |
union(groups['memcached_all'] | default([]))
)
- ansible_service_mgr == 'systemd'
tags:
- always
- name: Install MonitorStack
hosts: monitorstack_all
become: true
gather_facts: true
vars:
haproxy_ssl: false
monitorstack_distro_packages:
ubuntu:
- gcc
- git
- python-dev
- pkg-config
redhat:
- gcc
- git
- python-devel
suse:
- gcc
- git
- python-devel
- pkg-config
monitorstack_config_enabled:
- check: kvm
options: ''
condition: >-
{{
inventory_hostname in (groups['nova_compute'] | default([]))
}}
- check: memcache
options: >-
--host {{ (monitorstack_memcached_access.stdout_lines[0] | default("127.0.0.1:11211")).split(":")[0] }}
--port {{ (monitorstack_memcached_access.stdout_lines[0] | default("127.0.0.1:11211")).split(":")[1] }}
condition: >-
{{
inventory_hostname in (groups['memcached_all'] | default([]))
}}
- check: os_block_pools_totals
options: ''
condition: >-
{{
(clouds_config.stat.exists | bool) and
(inventory_hostname in (groups['utility_all'] | default([]))) and
(inventory_hostname == (groups['utility_all'] | default([null]))[0])
}}
- check: os_block_pools_usage
options: ''
condition: >-
{{
(clouds_config.stat.exists | bool) and
(inventory_hostname in (groups['utility_all'] | default([]))) and
(inventory_hostname == (groups['utility_all'] | default([null]))[0])
}}
- check: os_vm_quota_cores
options: ''
condition: >-
{{
(clouds_config.stat.exists | bool) and
(inventory_hostname in (groups['utility_all'] | default([]))) and
(inventory_hostname == (groups['utility_all'] | default([null]))[0])
}}
- check: os_vm_quota_instance
options: ''
condition: >-
{{
(clouds_config.stat.exists | bool) and
(inventory_hostname in (groups['utility_all'] | default([]))) and
(inventory_hostname == (groups['utility_all'] | default([null]))[0])
}}
- check: os_vm_quota_ram
options: ''
condition: >-
{{
(clouds_config.stat.exists | bool) and
(inventory_hostname in (groups['utility_all'] | default([]))) and
(inventory_hostname == (groups['utility_all'] | default([null]))[0])
}}
- check: os_vm_used_cores
options: ''
condition: >-
{{
(clouds_config.stat.exists | bool) and
(inventory_hostname in (groups['utility_all'] | default([]))) and
(inventory_hostname == (groups['utility_all'] | default([null]))[0])
}}
- check: os_vm_used_disk
options: ''
condition: >-
{{
(clouds_config.stat.exists | bool) and
(inventory_hostname in (groups['utility_all'] | default([]))) and
(inventory_hostname == (groups['utility_all'] | default([null]))[0])
}}
- check: os_vm_used_instance
options: ''
condition: >-
{{
(clouds_config.stat.exists | bool) and
(inventory_hostname in (groups['utility_all'] | default([]))) and
(inventory_hostname == (groups['utility_all'] | default([null]))[0])
}}
- check: os_vm_used_ram
options: ''
condition: >-
{{
(clouds_config.stat.exists | bool) and
(inventory_hostname in (groups['utility_all'] | default([]))) and
(inventory_hostname == (groups['utility_all'] | default([null]))[0])
}}
- check: uptime
options: ''
condition: true
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_data_hosts
post_tasks:
- name: Find clouds config
stat:
path: "{{ ansible_env.HOME }}/.config/openstack/clouds.yaml"
register: clouds_config
- name: Find openstack release
stat:
path: "/etc/openstack-release"
register: openstack_release
- name: Find osp release
stat:
path: "/etc/rhosp-release"
register: rhosp_release
- name: MonitorStack block
when:
- (openstack_release.stat.exists | bool) or
(rhosp_release.stat.exists | bool)
block:
- name: Ensure distro packages are installed
package:
name: "{{ monitorstack_distro_packages[(ansible_distribution | lower)] }}"
state: "{{ monitorstack_package_state | default('present') }}"
update_cache: "{{ (ansible_pkg_mgr == 'apt') | ternary('yes', omit) }}"
- name: Refresh local facts
setup:
filter: ansible_local
gather_subset: "!all"
tags:
- always
- name: Create the system group
group:
name: "monitorstack"
state: "present"
system: "yes"
- name: Create the monitorstack system user
user:
name: "monitorstack"
group: "monitorstack"
comment: "monitorstack user"
shell: "/bin/false"
createhome: "yes"
home: "/var/lib/monitorstack"
- name: Create monitorstack data path
file:
path: "{{ item }}"
state: directory
owner: "monitorstack"
group: "monitorstack"
mode: "0750"
recurse: true
with_items:
- "/var/lib/monitorstack"
- "/var/lib/monitorstack/.config"
- "/var/lib/monitorstack/.config/openstack"
- "/var/lib/monitorstack/venv"
- "/var/log/monitorstack"
- "/etc/monitorstack"
- name: Copy the clouds config into monitorstack
copy:
src: "{{ ansible_env.HOME }}/.config/openstack/clouds.yaml"
dest: "/var/lib/monitorstack/.config/openstack/clouds.yaml"
remote_src: yes
when:
- clouds_config.stat.exists | bool
- name: Create the virtualenv (if it does not exist)
command: "virtualenv --no-setuptools --system-site-packages /var/lib/monitorstack/venv"
args:
creates: "/var/lib/monitorstack/venv/bin/activate"
- name: Setup venv
pip:
name:
- pip
- setuptools
virtualenv_site_packages: yes
extra_args: "-U"
virtualenv: "/var/lib/monitorstack/venv"
- name: Ensure monitorstack is installed
pip:
name: "git+https://github.com/openstack/monitorstack@{{ monitorstack_release | default('master') }}"
state: "{{ monitorstack_package_state | default('present') }}"
extra_args: --isolated
virtualenv: /var/lib/monitorstack/venv
register: _pip_task
until: _pip_task is success
retries: 3
delay: 2
tags:
- package_install
- name: Create monitorstack config
copy:
dest: "/etc/monitorstack/monitorstack.ini"
content: |
[elasticsearch]
hosts = {{ elasticsearch_data_hosts | join(',') }}
port = {{ elastic_port }}
- name: Run memcached port scan
shell: "ss -ntlp | awk '/11211/ {print $4}'"
register: monitorstack_memcached_access
changed_when: false
- name: Run the systemd service role
include_role:
name: systemd_service
vars:
systemd_user_name: monitorstack
systemd_group_name: monitorstack
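# Render one systemd service (driven by a 10-minute timer) per enabled check.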
systemd_services: |-
{% set services = [] %}
{% for item in monitorstack_config_enabled %}
{% if item.condition | bool %}
{%
set check = {
"service_name": ("monitorstack-" ~ item.check),
"execstarts": ("/var/lib/monitorstack/venv/bin/monitorstack --format elasticsearch --config-file /etc/monitorstack/monitorstack.ini " ~ item.check ~ ' ' ~ item.options),
"timer": {
"state": "started",
"options": {
"OnBootSec": "5min",
"OnUnitActiveSec": "10m",
"Persistent": true
}
}
}
%}
{% set _ = services.append(check) %}
{% endif %}
{% endfor %}
{{ services }}
tags:
- beat-install


@ -0,0 +1,52 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Install Packetbeat
hosts: hosts
become: true
vars:
haproxy_ssl: false
environment: "{{ deployment_environment_variables | default({}) }}"
vars_files:
- vars/variables.yml
roles:
- role: elastic_packetbeat
tags:
- beat-install
- name: Setup packetbeat rollup
hosts: elastic-logstash[0]
become: true
vars:
haproxy_ssl: false
vars_files:
- vars/variables.yml
environment: "{{ deployment_environment_variables | default({}) }}"
roles:
- role: elastic_rollup
index_name: packetbeat
when:
- elastic_create_rollup | bool
tags:
- packetbeat
- import_playbook: fieldRefresh.yml
vars:
index_pattern: packetbeat-*


@@ -0,0 +1,19 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# APM vars
apm_interface: 0.0.0.0
apm_port: 8200
apm_token: SuperSecrete


@@ -0,0 +1,33 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Enable and restart apm-server (systemd)
systemd:
name: "apm-server"
enabled: true
state: restarted
daemon_reload: true
when:
- ansible_service_mgr == 'systemd'
listen: Enable and restart apm server
- name: Enable and restart apm-server (upstart)
service:
name: "apm-server"
state: restarted
enabled: yes
when:
- ansible_service_mgr == 'upstart'
listen: Enable and restart apm server


@@ -0,0 +1,35 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
galaxy_info:
author: OpenStack
description: Elastic v6.x apm-server role
company: Rackspace
license: Apache2
min_ansible_version: 2.5
platforms:
- name: Ubuntu
versions:
- trusty
- xenial
- bionic
categories:
- cloud
- development
- elasticsearch
- elastic-stack
dependencies:
- role: elastic_data_hosts
- role: elastic_repositories


@@ -0,0 +1,81 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Gather variables for each operating system
include_vars: "{{ item }}"
with_first_found:
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_version | lower }}.yml"
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_distribution | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_version.split('.')[0] }}.yml"
- "{{ ansible_os_family | lower }}.yml"
tags:
- always
- name: Ensure apm-server is installed
package:
name: "{{ apm_server_distro_packages }}"
state: "{{ elk_package_state | default('present') }}"
update_cache: "{{ (ansible_pkg_mgr == 'apt') | ternary('yes', omit) }}"
register: _package_task
until: _package_task is success
retries: 3
delay: 2
notify:
- Enable and restart apm server
tags:
- package_install
- name: Create apm-server systemd service config dir
file:
path: "/etc/systemd/system/apm-server.service.d"
state: "directory"
group: "root"
owner: "root"
mode: "0755"
when:
- ansible_service_mgr == 'systemd'
- name: Apply systemd options
template:
src: "{{ item.src }}"
dest: "/etc/systemd/system/apm-server.service.d/{{ item.dest }}"
mode: "0644"
when:
- ansible_service_mgr == 'systemd'
with_items:
- src: "systemd.general-overrides.conf.j2"
dest: "apm-server-overrides.conf"
notify:
- Enable and restart apm server
- name: Drop apm-server conf file
template:
src: "apm-server.yml.j2"
dest: "/etc/apm-server/apm-server.yml"
notify:
- Enable and restart apm server
- name: Run the beat setup role
include_role:
name: elastic_beat_setup
when:
- (groups['kibana'] | length) > 0
vars:
elastic_beat_name: "apm-server"
- name: Force beat handlers
meta: flush_handlers
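The tasks above resolve `apm_interface`, `apm_port`, and `apm_token` when
templating the config file below. A minimal override sketch, assuming these
variables are set through the deployment's variable files (all values here
are illustrative, not defaults taken from this repository):

.. code-block:: yaml

  # Hypothetical user variables for the apm-server role.
  apm_interface: 0.0.0.0        # assumption: listen on all interfaces
  apm_port: 8200                # assumption: the usual apm-server port
  apm_token: "example-token"    # hypothetical secret, replace in real use
  elk_package_state: latest     # optional: upgrade packages on each run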

View File

@ -0,0 +1 @@
../../../templates/systemd.general-overrides.conf.j2

View File

@ -0,0 +1,145 @@
{% import 'templates/_macros.j2' as elk_macros %}
######################## APM Server Configuration #############################
############################# APM Server ######################################
apm-server:
# Defines the host and port the server is listening on
host: "{{ apm_interface }}:{{ apm_port }}"
# Maximum permitted size in bytes of an unzipped request accepted by the server to be processed.
#max_unzipped_size: 52428800
# Maximum permitted size in bytes of a request's header accepted by the server to be processed.
#max_header_size: 1048576
# Maximum permitted duration in seconds for reading an entire request.
#read_timeout: 2s
# Maximum permitted duration in seconds for writing a response.
#write_timeout: 2s
# Maximum duration in seconds before releasing resources when shutting down the server.
#shutdown_timeout: 5s
# Maximum number of requests permitted to be sent to the server concurrently.
#concurrent_requests: 40
# Authorization token to be checked. If a token is set here the agents must
# send their token in the following format: Authorization: Bearer <secret-token>.
# It is recommended to use an authorization token in combination with SSL enabled.
secret_token: {{ apm_token }}
#ssl.enabled: false
#ssl.certificate : "path/to/cert"
#ssl.key : "path/to/private_key"
# Please be aware that frontend support is an experimental feature at the moment!
frontend:
    # To enable experimental frontend support, set this to true.
enabled: true
# Rate limit per second and IP address for requests sent to the frontend endpoint.
#rate_limit: 10
# Comma separated list of permitted origins for frontend. User-agents will send
    # an origin header that will be validated against this list.
# An origin is made of a protocol scheme, host and port, without the url path.
    # Allowed origins in this setting can have * to match anything (e.g. http://*.example.com)
# If an item in the list is a single '*', everything will be allowed
#allow_origins : ['*']
# Regexp to be matched against a stacktrace frame's `file_name` and `abs_path` attributes.
# If the regexp matches, the stacktrace frame is considered to be a library frame.
#library_pattern: "node_modules|bower_components|~"
# Regexp to be matched against a stacktrace frame's `file_name`.
# If the regexp matches, the stacktrace frame is not used for calculating error groups.
# The default pattern excludes stacktrace frames that have
# - a filename starting with '/webpack'
#exclude_from_grouping: "^/webpack"
# If a source map has previously been uploaded, source mapping is automatically applied
# to all error and transaction documents sent to the frontend endpoint.
#source_mapping:
    # Source maps are fetched from Elasticsearch and then kept in an in-memory cache for a certain time.
# The `cache.expiration` determines how long a source map should be cached before fetching it again from Elasticsearch.
# Note that values configured without a time unit will be interpreted as seconds.
#cache:
#expiration: 5m
# Source maps are stored in the same index as transaction and error documents.
# If the default index pattern at 'outputs.elasticsearch.index' is changed,
# a matching index pattern needs to be specified here.
#index_pattern: "apm-*"
#================================ General ======================================
# Internal queue configuration for buffering events to be published.
#queue:
# Queue type by name (default 'mem')
# The memory queue will present all available events (up to the outputs
  # bulk_max_size) to the output, the moment the output is ready to serve
# another batch of events.
#mem:
# Max number of events the queue can buffer.
#events: 4096
# Hints the minimum number of events stored in the queue,
# before providing a batch of events to the outputs.
# A value of 0 (the default) ensures events are immediately available
# to be sent to the outputs.
#flush.min_events: 2048
# Maximum duration after which events are available to the outputs,
  # if the number of events stored in the queue is < flush.min_events.
#flush.timeout: 1s
# Sets the maximum number of CPUs that can be executing simultaneously. The
# default is the number of logical CPUs available in the system.
#max_procs:
#================================ Outputs ======================================
# Configure what output to use when sending the data collected by the beat.
#--------------------------- Elasticsearch output -------------------------------
{{ elk_macros.output_elasticsearch(inventory_hostname, elasticsearch_data_hosts) }}
#================================= Paths ======================================
# The home path for the apm-server installation. This is the default base path
# for all other path settings and for miscellaneous files that come with the
# distribution (for example, the sample dashboards).
# If not set by a CLI flag or in the configuration file, the default for the
# home path is the location of the binary.
#path.home:
# The configuration path for the apm-server installation. This is the default
# base path for configuration files, including the main YAML configuration file
# and the Elasticsearch template file. If not set by a CLI flag or in the
# configuration file, the default for the configuration path is the home path.
#path.config: ${path.home}
# The data path for the apm-server installation. This is the default base path
# for all the files in which apm-server needs to store its data. If not set by a
# CLI flag or in the configuration file, the default for the data path is a data
# subdirectory inside the home path.
#path.data: ${path.home}/data
# The logs path for an apm-server installation. This is the default location for
# the Beat's log files. If not set by a CLI flag or in the configuration file,
# the default for the logs path is a logs subdirectory inside the home path.
#path.logs: ${path.home}/logs
#============================== Dashboards =====================================
{{ elk_macros.setup_dashboards('apm') }}
#=============================== Template ======================================
{{ elk_macros.setup_template('apm', inventory_hostname, data_nodes, elasticsearch_number_of_replicas) }}
#============================== Kibana =====================================
{% if (groups['kibana'] | length) > 0 %}
{{ elk_macros.setup_kibana(hostvars[groups['kibana'][0]]['ansible_host'] ~ ':' ~ kibana_port) }}
{% endif %}
#================================ Logging ======================================
{{ elk_macros.beat_logging('apm-server') }}
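With variables like those sketched earlier, the top of the rendered file
would look roughly as follows (a hand-rendered approximation, not output
captured from a deployment):

.. code-block:: yaml

  apm-server:
    host: "0.0.0.0:8200"
    secret_token: example-token
    frontend:
      enabled: true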

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apm_server_distro_packages:
- apm-server

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apm_server_distro_packages:
- apm-server

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apm_server_distro_packages:
- apm-server

View File

@ -0,0 +1,16 @@
---
# Copyright 2018, Vexxhost, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
auditbeat_service_state: restarted

View File

@ -0,0 +1,33 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Enable and restart auditbeat (systemd)
systemd:
name: "auditbeat"
enabled: true
state: "{{ auditbeat_service_state }}"
daemon_reload: true
when:
- ansible_service_mgr == 'systemd'
listen: Enable and restart auditbeat
- name: Enable and restart auditbeat (upstart)
service:
name: "auditbeat"
state: "{{ auditbeat_service_state }}"
enabled: yes
when:
- ansible_service_mgr == 'upstart'
listen: Enable and restart auditbeat

View File

@ -0,0 +1,35 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
galaxy_info:
author: OpenStack
description: Elastic v6.x auditbeat role
company: Rackspace
license: Apache2
min_ansible_version: 2.5
platforms:
- name: Ubuntu
versions:
- trusty
- xenial
- bionic
categories:
- cloud
- development
- elasticsearch
- elastic-stack
dependencies:
- role: elastic_data_hosts
- role: elastic_repositories

View File

@ -0,0 +1,112 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Gather variables for each operating system
include_vars: "{{ item }}"
with_first_found:
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_version | lower }}.yml"
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_distribution | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_version.split('.')[0] }}.yml"
- "{{ ansible_os_family | lower }}.yml"
tags:
- always
- name: Ensure beat is installed (x86_64)
package:
name: "{{ auditbeat_distro_packages }}"
state: "{{ elk_package_state | default('present') }}"
update_cache: "{{ (ansible_pkg_mgr == 'apt') | ternary('yes', omit) }}"
register: _package_task
until: _package_task is success
retries: 3
delay: 2
when:
- ansible_architecture == 'x86_64'
notify:
- Enable and restart auditbeat
tags:
- package_install
- name: Ensure beat is installed (aarch64)
apt:
deb: 'https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/8709ca2640344a4ba85cba0a1d6eea69/aarch64/auditbeat-6.5.0-arm64.deb'
when:
- ansible_pkg_mgr == 'apt'
- ansible_architecture == 'aarch64'
notify:
- Enable and restart auditbeat
tags:
- package_install
- name: Create auditbeat systemd service config dir
file:
path: "/etc/systemd/system/auditbeat.service.d"
state: "directory"
group: "root"
owner: "root"
mode: "0755"
when:
- ansible_service_mgr == 'systemd'
- name: Apply systemd options
template:
src: "{{ item.src }}"
dest: "/etc/systemd/system/auditbeat.service.d/{{ item.dest }}"
mode: "0644"
when:
- ansible_service_mgr == 'systemd'
with_items:
- src: "systemd.general-overrides.conf.j2"
dest: "auditbeat-overrides.conf"
notify:
- Enable and restart auditbeat
- name: Drop auditbeat conf file
template:
src: templates/auditbeat.yml.j2
dest: /etc/auditbeat/auditbeat.yml
notify:
- Enable and restart auditbeat
- name: Run the beat setup role
include_role:
name: elastic_beat_setup
when:
- (groups['kibana'] | length) > 0
vars:
elastic_beat_name: "auditbeat"
- name: Force beat handlers
meta: flush_handlers
- name: Set auditbeat service state (upstart)
service:
name: "auditbeat"
state: "{{ auditbeat_service_state }}"
enabled: "{{ auditbeat_service_state in ['running', 'started', 'restarted'] }}"
when:
- ansible_service_mgr == 'upstart'
- auditbeat_service_state in ['started', 'stopped']
- name: Set auditbeat service state (systemd)
systemd:
name: "auditbeat"
state: "{{ auditbeat_service_state }}"
enabled: "{{ auditbeat_service_state in ['running', 'started', 'restarted'] }}"
when:
- ansible_service_mgr == 'systemd'
- auditbeat_service_state in ['started', 'stopped']
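Because the trailing service-state tasks only act when
`auditbeat_service_state` is `started` or `stopped`, a deployer can install
the beat without (re)starting it. A sketch of such a play, assuming the role
is consumed under the name it carries in this repository:

.. code-block:: yaml

  # Hypothetical play: install and configure auditbeat but leave the
  # service stopped (for example, on hosts under maintenance).
  - hosts: hosts
    become: true
    vars:
      auditbeat_service_state: stopped
    roles:
      - role: elastic_auditbeat  # assumption: role name as vendored here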

View File

@ -0,0 +1 @@
../../../templates/systemd.general-overrides.conf.j2

View File

@ -0,0 +1,845 @@
{% import 'templates/_macros.j2' as elk_macros %}
########################## Auditbeat Configuration #############################
# This is a reference configuration file documenting all non-deprecated options
# in comments. For a shorter configuration example that contains only the most
# common options, please see auditbeat.yml in the same directory.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/auditbeat/index.html
#============================ Config Reloading ================================
# Config reloading allows modules to be loaded dynamically. Each file which is
# monitored must contain one or multiple modules as a list.
auditbeat.config.modules:
# Glob pattern for configuration reloading
path: ${path.config}/conf.d/*.yml
# Period on which files under path should be checked for changes
reload.period: 60s
# Set to true to enable config reloading
reload.enabled: true
# Maximum amount of time to randomly delay the start of a metricset. Use 0 to
# disable startup delay.
auditbeat.max_start_delay: 10s
#========================== Modules configuration =============================
auditbeat.modules:
# The auditd module collects events from the audit framework in the Linux
# kernel. You need to specify audit rules for the events that you want to audit.
- module: auditd
{% if ansible_kernel is version_compare('4.4', '>=') %}
socket_type: {{ (apply_security_hardening | default(true) | bool) | ternary('multicast', 'unicast') }}
{% endif %}
resolve_ids: true
failure_mode: silent
backlog_limit: 8196
rate_limit: 0
include_raw_message: false
include_warnings: true
{% if not apply_security_hardening | default(true) | bool %}
audit_rule_files:
- '${path.config}/audit.rules.d/*.conf'
- '/etc/audit/rules.d/*.rules'
audit_rules: |
## Define audit rules here.
## Create file watches (-w) or syscall audits (-a or -A). Uncomment these
## examples or add your own rules.
## If you are on a 64 bit platform, everything should be running
## in 64 bit mode. This rule will detect any use of the 32 bit syscalls
## because this might be a sign of someone exploiting a hole in the 32
## bit API.
-a always,exit -F arch=b32 -S all -F key=32bit-abi
## Executions.
-a always,exit -F arch=b64 -S execve,execveat -k exec
# Things that affect identity.
-w /etc/group -p wa -k identity
-w /etc/passwd -p wa -k identity
-w /etc/gshadow -p wa -k identity
-w /etc/shadow -p wa -k identity
# Unauthorized access attempts to files (unsuccessful).
-a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -F key=access
-a always,exit -F arch=b32 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -F key=access
-a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -F key=access
-a always,exit -F arch=b64 -S open,truncate,ftruncate,creat,openat,open_by_handle_at -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -F key=access
{% endif %}
# The file integrity module sends events when files are changed (created,
# updated, deleted). The events contain file metadata and hashes.
- module: file_integrity
paths:
- /bin
- /etc/ansible/roles
- /etc/apt
- /etc/apache2
- /etc/httpd
- /etc/network
- /etc/nginx
- /etc/mysql
- /etc/openstack_deploy
- /etc/sysconfig
- /etc/systemd
- /etc/uwsgi
- /etc/yum
- /etc/zypp
- /openstack/venvs
- /opt/openstack-ansible
- /sbin
- /usr/bin
- /usr/local/bin
- /usr/sbin
- /var/lib/lxc
# List of regular expressions to filter out notifications for unwanted files.
  # Wrap in single quotes to work around YAML escaping rules. By default no files
# are ignored.
exclude_files:
- '(?i)\.sw[nop]$'
- '~$'
- '/\.git($|/)'
# Scan over the configured file paths at startup and send events for new or
# modified files since the last time Auditbeat was running.
scan_at_start: true
# Average scan rate. This throttles the amount of CPU and I/O that Auditbeat
# consumes at startup while scanning. Default is "50 MiB".
scan_rate_per_sec: 64 MiB
  # Limit on the size of files that will be hashed. Default is "100 MiB".
max_file_size: 128 MiB
# Hash types to compute when the file changes. Supported types are
# blake2b_256, blake2b_384, blake2b_512, md5, sha1, sha224, sha256, sha384,
# sha512, sha512_224, sha512_256, sha3_224, sha3_256, sha3_384 and sha3_512.
# Default is sha1.
hash_types: [sha1]
# Detect changes to files included in subdirectories. Disabled by default.
recursive: true
# The system module collects security related information about a host.
# All datasets send both periodic state information (e.g. all currently
# running processes) and real-time changes (e.g. when a new process starts
# or stops).
- module: system
datasets:
- host # General host information, e.g. uptime, IPs
- process # Started and stopped processes
- socket # Opened and closed sockets
- user # User information
# How often datasets send state updates with the
# current state of the system (e.g. all currently
# running processes, all open sockets).
state.period: 12h
# The state.period can be overridden for any dataset.
# host.state.period: 12h
# process.state.period: 12h
# socket.state.period: 12h
# user.state.period: 12h
# Enabled by default. Auditbeat will read password fields in
# /etc/passwd and /etc/shadow and store a hash locally to
# detect any changes.
user.detect_password_changes: true
#================================ General ======================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
# If this option is not defined, the hostname is used.
#name:
# The tags of the shipper are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output. Fields can be scalar values, arrays, dictionaries, or any nested
# combination of these.
#fields:
# env: staging
# If this option is set to true, the custom fields are stored as top-level
# fields in the output document instead of being grouped under a fields
# sub-dictionary. Default is false.
#fields_under_root: false
# Internal queue configuration for buffering events to be published.
#queue:
# Queue type by name (default 'mem')
# The memory queue will present all available events (up to the outputs
  # bulk_max_size) to the output, the moment the output is ready to serve
# another batch of events.
#mem:
# Max number of events the queue can buffer.
#events: 4096
# Hints the minimum number of events stored in the queue,
# before providing a batch of events to the outputs.
# The default value is set to 2048.
# A value of 0 ensures events are immediately available
# to be sent to the outputs.
#flush.min_events: 2048
# Maximum duration after which events are available to the outputs,
  # if the number of events stored in the queue is < flush.min_events.
#flush.timeout: 1s
# The spool queue will store events in a local spool file, before
# forwarding the events to the outputs.
#
# Beta: spooling to disk is currently a beta feature. Use with care.
#
# The spool file is a circular buffer, which blocks once the file/buffer is full.
# Events are put into a write buffer and flushed once the write buffer
# is full or the flush_timeout is triggered.
# Once ACKed by the output, events are removed immediately from the queue,
# making space for new events to be persisted.
#spool:
# The file namespace configures the file path and the file creation settings.
# Once the file exists, the `size`, `page_size` and `prealloc` settings
# will have no more effect.
#file:
# Location of spool file. The default value is ${path.data}/spool.dat.
#path: "${path.data}/spool.dat"
# Configure file permissions if file is created. The default value is 0600.
#permissions: 0600
# File size hint. The spool blocks, once this limit is reached. The default value is 100 MiB.
#size: 100MiB
# The files page size. A file is split into multiple pages of the same size. The default value is 4KiB.
#page_size: 4KiB
# If prealloc is set, the required space for the file is reserved using
# truncate. The default value is true.
#prealloc: true
# Spool writer settings
# Events are serialized into a write buffer. The write buffer is flushed if:
# - The buffer limit has been reached.
# - The configured limit of buffered events is reached.
# - The flush timeout is triggered.
#write:
# Sets the write buffer size.
#buffer_size: 1MiB
# Maximum duration after which events are flushed, if the write buffer
# is not full yet. The default value is 1s.
#flush.timeout: 1s
# Number of maximum buffered events. The write buffer is flushed once the
# limit is reached.
#flush.events: 16384
# Configure the on-disk event encoding. The encoding can be changed
# between restarts.
# Valid encodings are: json, ubjson, and cbor.
#codec: cbor
#read:
# Reader flush timeout, waiting for more events to become available, so
# to fill a complete batch, as required by the outputs.
# If flush_timeout is 0, all available events are forwarded to the
# outputs immediately.
# The default value is 0s.
#flush.timeout: 0s
# Sets the maximum number of CPUs that can be executing simultaneously. The
# default is the number of logical CPUs available in the system.
#max_procs:
#================================ Processors ===================================
# Processors are used to reduce the number of fields in the exported event or to
# enhance the event with external metadata. This section defines a list of
# processors that are applied one by one and the first one receives the initial
# event:
#
# event -> filter1 -> event1 -> filter2 ->event2 ...
#
# The supported processors are drop_fields, drop_event, include_fields,
# decode_json_fields, and add_cloud_metadata.
#
# For example, you can use the following processors to keep the fields that
# contain CPU load percentages, but remove the fields that contain CPU ticks
# values:
#
#processors:
#- include_fields:
# fields: ["cpu"]
#- drop_fields:
# fields: ["cpu.user", "cpu.system"]
#
# The following example drops the events that have the HTTP response code 200:
#
#processors:
#- drop_event:
# when:
# equals:
# http.code: 200
#
# The following example renames the field a to b:
#
#processors:
#- rename:
# fields:
# - from: "a"
# to: "b"
#
# The following example tokenizes the string into fields:
#
#processors:
#- dissect:
# tokenizer: "%{key1} - %{key2}"
# field: "message"
# target_prefix: "dissect"
#
# The following example enriches each event with metadata from the cloud
# provider about the host machine. It works on EC2, GCE, DigitalOcean,
# Tencent Cloud, and Alibaba Cloud.
#
#processors:
#- add_cloud_metadata: ~
#
# The following example enriches each event with the machine's local time zone
# offset from UTC.
#
#processors:
#- add_locale:
# format: offset
#
# The following example enriches each event with docker metadata; it matches
# given fields to an existing container id and adds info from that container:
#
#processors:
#- add_docker_metadata:
# host: "unix:///var/run/docker.sock"
# match_fields: ["system.process.cgroup.id"]
# match_pids: ["process.pid", "process.ppid"]
# match_source: true
# match_source_index: 4
# match_short_id: false
# cleanup_timeout: 60
# labels.dedot: false
# # To connect to Docker over TLS you must specify a client and CA certificate.
# #ssl:
# # certificate_authority: "/etc/pki/root/ca.pem"
# # certificate: "/etc/pki/client/cert.pem"
# # key: "/etc/pki/client/cert.key"
#
# The following example enriches each event with docker metadata; it matches
# the container id from the log path available in the `source` field (by default it expects
# it to be /var/lib/docker/containers/*/*.log).
#
#processors:
#- add_docker_metadata: ~
#
# The following example enriches each event with host metadata.
#
#processors:
#- add_host_metadata:
# netinfo.enabled: false
#
# The following example enriches each event with process metadata using
# process IDs included in the event.
#
#processors:
#- add_process_metadata:
# match_pids: ["system.process.ppid"]
# target: system.process.parent
#
# The following example decodes fields containing JSON strings
# and replaces the strings with valid JSON objects.
#
#processors:
#- decode_json_fields:
# fields: ["field1", "field2", ...]
# process_array: false
# max_depth: 1
# target: ""
# overwrite_keys: false
processors:
- add_host_metadata: ~
#============================= Elastic Cloud ==================================
# These settings simplify using auditbeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs ======================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output -------------------------------
#output.elasticsearch:
# Boolean flag to enable or disable the output module.
#enabled: true
# Array of hosts to connect to.
# Scheme and port can be left out and will be set to the default (http and 9200)
  # In case you specify an additional path, the scheme is required: http://localhost:9200/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
#hosts: ["localhost:9200"]
# Set gzip compression level.
#compression_level: 0
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
# Dictionary of HTTP parameters to pass within the url with index operations.
#parameters:
#param1: value1
#param2: value2
# Number of workers per Elasticsearch host.
#worker: 1
# Optional index name. The default is "auditbeat" plus date
# and generates [auditbeat-]YYYY.MM.DD keys.
# In case you modify this pattern you must update setup.template.name and setup.template.pattern accordingly.
#index: "auditbeat-%{[beat.version]}-%{+yyyy.MM.dd}"
# Optional ingest node pipeline. By default no pipeline will be used.
#pipeline: ""
# Optional HTTP Path
#path: "/elasticsearch"
# Custom HTTP headers to add to each request
#headers:
# X-My-Header: Contents of the header
# Proxy server url
#proxy_url: http://proxy:3128
# The number of times a particular Elasticsearch index operation is attempted. If
# the indexing operation doesn't succeed after this many retries, the events are
# dropped. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Elasticsearch bulk API index request.
# The default is 50.
#bulk_max_size: 50
  # Configure http request timeout before failing a request to Elasticsearch.
#timeout: 90
# Use SSL settings for HTTPS.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# SSL configuration. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#----------------------------- Logstash output ---------------------------------
{{ elk_macros.output_logstash(inventory_hostname, logstash_data_hosts, ansible_processor_count) }}
#------------------------------- Kafka output ----------------------------------
#output.kafka:
# Boolean flag to enable or disable the output module.
#enabled: true
# The list of Kafka broker addresses from where to fetch the cluster metadata.
# The cluster metadata contain the actual Kafka brokers events are published
# to.
#hosts: ["localhost:9092"]
# The Kafka topic used for produced events. The setting can be a format string
# using any event field. To set the topic from document type use `%{[type]}`.
#topic: beats
# The Kafka event key setting. Use format string to create unique event key.
# By default no event key will be generated.
#key: ''
# The Kafka event partitioning strategy. Default hashing strategy is `hash`
# using the `output.kafka.key` setting or randomly distributes events if
# `output.kafka.key` is not configured.
#partition.hash:
# If enabled, events will only be published to partitions with reachable
# leaders. Default is false.
#reachable_only: false
# Configure alternative event field names used to compute the hash value.
# If empty `output.kafka.key` setting will be used.
# Default value is empty list.
#hash: []
# Authentication details. Password is required if username is set.
#username: ''
#password: ''
# Kafka version auditbeat is assumed to run against. Defaults to the oldest
# supported stable version (currently version 0.8.2.0)
#version: 0.8.2
  # Metadata update configuration. The metadata contains leader information
  # used to decide which broker to publish to.
#metadata:
# Max metadata request retry attempts when cluster is in middle of leader
# election. Defaults to 3 retries.
#retry.max: 3
# Waiting time between retries during leader elections. Default is 250ms.
#retry.backoff: 250ms
# Refresh metadata interval. Defaults to every 10 minutes.
#refresh_frequency: 10m
# The number of concurrent load-balanced Kafka output workers.
#worker: 1
# The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, the events are typically dropped.
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until
# all events are published. Set max_retries to a value less than 0 to retry
# until all events are published. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Kafka request. The default
# is 2048.
#bulk_max_size: 2048
# The number of seconds to wait for responses from the Kafka brokers before
# timing out. The default is 30s.
#timeout: 30s
# The maximum duration a broker will wait for number of required ACKs. The
# default is 10s.
#broker_timeout: 10s
# The number of messages buffered for each Kafka broker. The default is 256.
#channel_buffer_size: 256
# The keep-alive period for an active network connection. If 0s, keep-alives
# are disabled. The default is 0 seconds.
#keep_alive: 0
  # Sets the output compression codec. Must be one of none, snappy, or gzip. The
# default is gzip.
#compression: gzip
# The maximum permitted size of JSON-encoded messages. Bigger messages will be
# dropped. The default value is 1000000 (bytes). This value should be equal to
# or less than the broker's message.max.bytes.
#max_message_bytes: 1000000
# The ACK reliability level required from broker. 0=no response, 1=wait for
# local commit, -1=wait for all replicas to commit. The default is 1. Note:
# If set to 0, no ACKs are returned by Kafka. Messages might be lost silently
# on error.
#required_acks: 1
# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
#ssl.enabled: true
# Optional SSL configuration options. SSL is off by default.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#------------------------------- Redis output ----------------------------------
#output.redis:
# Boolean flag to enable or disable the output module.
#enabled: true
# The list of Redis servers to connect to. If load balancing is enabled, the
# events are distributed to the servers in the list. If one server becomes
# unreachable, the events are distributed to the reachable servers only.
#hosts: ["localhost:6379"]
# The Redis port to use if hosts does not contain a port number. The default
# is 6379.
#port: 6379
# The name of the Redis list or channel the events are published to. The
# default is auditbeat.
#key: auditbeat
# The password to authenticate with. The default is no authentication.
#password:
# The Redis database number where the events are published. The default is 0.
#db: 0
# The Redis data type to use for publishing events. If the data type is list,
# the Redis RPUSH command is used. If the data type is channel, the Redis
# PUBLISH command is used. The default value is list.
#datatype: list
# The number of workers to use for each host configured to publish events to
# Redis. Use this setting along with the loadbalance option. For example, if
# you have 2 hosts and 3 workers, in total 6 workers are started (3 for each
# host).
#worker: 1
# If set to true and multiple hosts or workers are configured, the output
# plugin load balances published events onto all Redis hosts. If set to false,
# the output plugin sends all events to only one host (determined at random)
# and will switch to another host if the currently selected one becomes
# unreachable. The default value is true.
#loadbalance: true
# The Redis connection timeout in seconds. The default is 5 seconds.
#timeout: 5s
# The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, the events are typically dropped.
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until
# all events are published. Set max_retries to a value less than 0 to retry
# until all events are published. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Redis request or pipeline.
# The default is 2048.
#bulk_max_size: 2048
# The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The
# value must be a URL with a scheme of socks5://.
#proxy_url:
# This option determines whether Redis hostnames are resolved locally when
# using a proxy. The default value is false, which means that name resolution
# occurs on the proxy server.
#proxy_use_local_resolver: false
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# Optional SSL configuration options. SSL is off by default.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#------------------------------- File output -----------------------------------
#output.file:
# Boolean flag to enable or disable the output module.
#enabled: true
# Configure JSON encoding
#codec.json:
# Pretty print json event
#pretty: false
# Configure escaping html symbols in strings.
#escape_html: true
# Path to the directory where to save the generated files. The option is
# mandatory.
#path: "/tmp/auditbeat"
# Name of the generated files. The default is `auditbeat` and it generates
# files: `auditbeat`, `auditbeat.1`, `auditbeat.2`, etc.
#filename: auditbeat
# Maximum size in kilobytes of each file. When this size is reached, and on
# every auditbeat restart, the files are rotated. The default value is 10240
# kB.
#rotate_every_kb: 10000
# Maximum number of files under path. When this number of files is reached,
# the oldest file is deleted and the rest are shifted from last to first. The
# default is 7 files.
#number_of_files: 7
# Permissions to use for file creation. The default is 0600.
#permissions: 0600
#----------------------------- Console output ---------------------------------
#output.console:
# Boolean flag to enable or disable the output module.
#enabled: true
# Configure JSON encoding
#codec.json:
# Pretty print json event
#pretty: false
# Configure escaping html symbols in strings.
#escape_html: true
#================================= Paths ======================================
# The home path for the auditbeat installation. This is the default base path
# for all other path settings and for miscellaneous files that come with the
# distribution (for example, the sample dashboards).
# If not set by a CLI flag or in the configuration file, the default for the
# home path is the location of the binary.
#path.home:
# The configuration path for the auditbeat installation. This is the default
# base path for configuration files, including the main YAML configuration file
# and the Elasticsearch template file. If not set by a CLI flag or in the
# configuration file, the default for the configuration path is the home path.
#path.config: ${path.home}
# The data path for the auditbeat installation. This is the default base path
# for all the files in which auditbeat needs to store its data. If not set by a
# CLI flag or in the configuration file, the default for the data path is a data
# subdirectory inside the home path.
#path.data: ${path.home}/data
# The logs path for an auditbeat installation. This is the default location for
# the Beat's log files. If not set by a CLI flag or in the configuration file,
# the default for the logs path is a logs subdirectory inside the home path.
#path.logs: ${path.home}/logs
#================================ Keystore ==========================================
# Location of the Keystore containing the keys and their sensitive values.
#keystore.path: "${path.config}/beats.keystore"
#============================== Dashboards =====================================
{{ elk_macros.setup_dashboards('auditbeat') }}
#=============================== Template ======================================
{{ elk_macros.setup_template('auditbeat', inventory_hostname, data_nodes, elasticsearch_number_of_replicas) }}
#============================== Kibana =====================================
{% if (groups['kibana'] | length) > 0 %}
{{ elk_macros.setup_kibana(hostvars[groups['kibana'][0]]['ansible_host'] ~ ':' ~ kibana_port) }}
{% endif %}
#================================ Logging ======================================
{{ elk_macros.beat_logging('auditbeat') }}
#============================== Xpack Monitoring =====================================
{{ elk_macros.xpack_monitoring_elasticsearch(inventory_hostname, elasticsearch_data_hosts, ansible_processor_count) }}
#================================ HTTP Endpoint ======================================
# Each beat can expose internal metrics through an HTTP endpoint. For security
# reasons the endpoint is disabled by default. This feature is currently experimental.
# Stats can be accessed through http://localhost:5066/stats. For pretty JSON output
# append ?pretty to the URL.
# Defines if the HTTP endpoint is enabled.
#http.enabled: false
# The HTTP endpoint will bind to this hostname or IP address. It is recommended to use only localhost.
#http.host: localhost
# Port on which the HTTP endpoint will bind. Default is 5066.
#http.port: 5066
#============================= Process Security ================================
# Enable or disable seccomp system call filtering on Linux. Default is enabled.
#seccomp.enabled: true
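If the experimental stats endpoint described above is wanted, only the three
`http.*` keys need to be uncommented. A minimal sketch of that fragment
(values are the documented defaults from the comments above):

.. code-block:: yaml

  # Expose the beat's internal metrics on the loopback interface only.
  http.enabled: true
  http.host: localhost
  http.port: 5066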

View File

@ -0,0 +1,18 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
auditbeat_distro_packages:
- audispd-plugins
- auditbeat

View File

@ -0,0 +1,18 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
auditbeat_distro_packages:
- audit-audispd-plugins
- auditbeat

View File

@ -0,0 +1,18 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
auditbeat_distro_packages:
- audispd-plugins
- auditbeat

View File

@ -0,0 +1,30 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Each setup flag is run one at a time.
elastic_setup_flags:
- "--template"
- "--pipelines"
# - "--dashboards"
# Setup options are cast as a string, with one option per line.
elastic_beat_setup_options: >-
-E 'output.logstash.enabled=false'
-E 'output.elasticsearch.hosts={{ coordination_nodes | to_json }}'
-E 'setup.template.enabled=true'
-E 'setup.template.overwrite=true'
# The node defined here will be used with the "no_proxy" environment variable.
elastic_beat_kibana_host: "{{ hostvars[groups['kibana'][0]]['ansible_host'] }}"
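Since each flag in `elastic_setup_flags` is run as its own setup pass,
re-enabling dashboard loading is just a matter of overriding the list. A
sketch of such an override in a deployer variable file:

.. code-block:: yaml

  # Hypothetical override: also load the packaged Kibana dashboards.
  elastic_setup_flags:
    - "--template"
    - "--pipelines"
    - "--dashboards"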

View File

@ -0,0 +1,35 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
galaxy_info:
author: OpenStack
description: Elastic v6.x repositories role
company: Rackspace
license: Apache2
min_ansible_version: 2.5
platforms:
- name: Ubuntu
versions:
- trusty
- xenial
- bionic
categories:
- cloud
- development
- beats
- elastic-beats
- elasticsearch
- elastic-stack
dependencies: []

View File

@ -0,0 +1,72 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Check for coordination_nodes var
fail:
msg: >-
      To use this role, the variable `coordination_nodes` must be defined.
when:
- coordination_nodes is undefined
- name: Check for elastic_beat_name var
fail:
msg: >-
      To use this role, the variable `elastic_beat_name` must be defined.
when:
- elastic_beat_name is undefined
- name: Refresh local facts
setup:
filter: ansible_local
gather_subset: "!all"
tags:
- always
- name: Load templates
shell: >-
{% if item == '--dashboards' %}
sed -i 's@\\\"index\\\": \\\"{{ elastic_beat_name }}-\*\\\"@\\\"index\\\": \\\"{{ elastic_beat_name }}\\\"@g' /usr/share/{{ elastic_beat_name }}/kibana/6/dashboard/*.json
sed -i 's@"id": "{{ elastic_beat_name }}\-\*",@"id": "{{ elastic_beat_name }}",@g' /usr/share/{{ elastic_beat_name }}/kibana/6/index-pattern/*.json
{% endif %}
{{ elastic_beat_name }} setup
{{ item }}
{{ elastic_beat_setup_options }}
-e -v
with_items: "{{ elastic_setup_flags }}"
register: templates
environment:
no_proxy: "{{ elastic_beat_kibana_host }}"
until: templates is success
retries: 5
delay: 5
run_once: true
when:
- ((ansible_local['elastic']['setup'][elastic_beat_name + '_loaded_templates'] is undefined) or
(not (ansible_local['elastic']['setup'][elastic_beat_name + '_loaded_templates'] | bool))) or
((elk_package_state | default('present')) == "latest") or
(elk_beat_setup | default(false) | bool)
tags:
- setup
- name: Set template fact
ini_file:
dest: "/etc/ansible/facts.d/elastic.fact"
section: "setup"
option: "{{ elastic_beat_name + '_loaded_templates' }}"
value: true
when:
- templates is changed
tags:
- setup
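On later runs the recorded local fact short-circuits the "Load templates"
task. Per its `when` clause, a re-run can still be forced without touching
the fact file, for example:

.. code-block:: yaml

  # Either of these re-triggers beat setup on the next play:
  elk_beat_setup: true         # explicit opt-in flag from the when clause
  # elk_package_state: latest  # also re-runs setup, and upgrades packages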

View File

@ -0,0 +1,25 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Enable and restart curator.timer
systemd:
name: "curator.timer"
enabled: true
state: restarted
when:
- (elk_package_state | default('present')) != 'absent'
- ansible_service_mgr == 'systemd'
tags:
- config

View File

@ -0,0 +1,34 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
galaxy_info:
author: OpenStack
description: Elastic v6.x curator role
company: Rackspace
license: Apache2
min_ansible_version: 2.5
platforms:
- name: Ubuntu
versions:
- trusty
- xenial
- bionic
categories:
- cloud
- development
- elasticsearch
- elastic-stack
dependencies:
- role: elastic_retention

View File

@ -0,0 +1,46 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Run the systemd service role
include_role:
name: systemd_service
vars:
systemd_service_enabled: "{{ ((elk_package_state | default('present')) != 'absent') | ternary(true, false) }}"
systemd_service_restart_changed: false
systemd_user_name: curator
systemd_group_name: curator
systemd_services:
- service_name: "curator"
execstarts:
- /opt/elasticsearch-curator/bin/curator
--config /var/lib/curator/curator.yml
/var/lib/curator/actions-age.yml
timer:
state: "started"
options:
OnBootSec: 30min
OnUnitActiveSec: 12h
Persistent: true
- service_name: "curator-size"
execstarts:
- /opt/elasticsearch-curator/bin/curator
--config /var/lib/curator/curator.yml
/var/lib/curator/actions-size.yml
timer:
state: "started"
options:
OnBootSec: 30min
OnUnitActiveSec: 1h
Persistent: true

View File

@ -0,0 +1,32 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Create cron job for curator (age)
cron:
name: "Run curator"
minute: "0"
hour: "1"
user: "curator"
job: "/opt/elasticsearch-curator/bin/curator --config /var/lib/curator/curator.yml /var/lib/curator/actions-age.yml"
cron_file: "elasticsearch-curator"
- name: Create cron job for curator (size)
cron:
name: "Run curator"
minute: "0"
hour: "*/5"
user: "curator"
job: "/opt/elasticsearch-curator/bin/curator --config /var/lib/curator/curator.yml /var/lib/curator/actions-size.yml"
cron_file: "elasticsearch-curator"

View File

@ -0,0 +1,103 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Gather variables for each operating system
include_vars: "{{ item }}"
with_first_found:
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_version | lower }}.yml"
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_distribution | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_version.split('.')[0] }}.yml"
- "{{ ansible_os_family | lower }}.yml"
tags:
- always
- name: Refresh local facts
setup:
filter: ansible_local
gather_subset: "!all"
tags:
- always
- name: Ensure virtualenv is installed
package:
name: "{{ curator_distro_packages }}"
state: "{{ elk_package_state | default('present') }}"
update_cache: "{{ (ansible_pkg_mgr == 'apt') | ternary('yes', omit) }}"
tags:
- package_install
- name: Create the virtualenv (if it does not exist)
command: "virtualenv --never-download --no-site-packages /opt/elasticsearch-curator"
args:
creates: "/opt/elasticsearch-curator/bin/activate"
- name: Ensure curator is installed
pip:
name: "elasticsearch-curator<6"
state: "{{ elk_package_state | default('present') }}"
extra_args: --isolated
virtualenv: /opt/elasticsearch-curator
register: _pip_task
until: _pip_task is success
retries: 3
delay: 2
tags:
- package_install
- name: Create the system group
group:
name: "curator"
state: "present"
system: "yes"
- name: Create the curator system user
user:
name: "curator"
group: "curator"
comment: "curator user"
shell: "/bin/false"
createhome: "yes"
home: "/var/lib/curator"
- name: Create curator data path
file:
path: "{{ item }}"
state: directory
owner: "curator"
group: "curator"
mode: "0755"
recurse: true
with_items:
- "/var/lib/curator"
- "/var/log/curator"
- "/etc/curator"
- name: Drop curator conf file(s)
template:
src: "{{ item.src }}"
dest: "{{ item.dest }}"
with_items:
- src: "curator.yml.j2"
dest: /var/lib/curator/curator.yml
- src: "curator-actions-age.yml.j2"
dest: /var/lib/curator/actions-age.yml
- src: "curator-actions-size.yml.j2"
dest: /var/lib/curator/actions-size.yml
notify:
- Enable and restart curator.timer
- include_tasks: "curator_{{ ansible_service_mgr }}.yml"

View File

@ -0,0 +1,65 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
{% set action_items = [] -%}
{# Delete index loop #}
{% for key in (ansible_local['elastic']['retention']['elastic_beat_retention_policy_keys'] | from_yaml) -%}
{% set delete_indices = {} -%}
{# Total retention size in days #}
{% set _index_retention = ansible_local['elastic']['retention']['elastic_' + key + '_retention'] -%}
{% set index_retention = ((_index_retention | int) > 0) | ternary(_index_retention, 1) | int %}
{% set _ = delete_indices.update(
{
'action': 'delete_indices',
'description': 'Prune indices for ' + key + ' after ' ~ index_retention ~ ' days',
'options': {
'ignore_empty_list': true,
'disable_action': false
}
}
)
-%}
{% set filters = [] -%}
{% set _ = filters.append(
{
'filtertype': 'pattern',
'kind': 'prefix',
'value': key
}
)
-%}
{% set _ = filters.append(
{
'filtertype': 'age',
'source': 'name',
'direction': 'older',
'timestring': '%Y.%m.%d',
'unit': 'days',
'unit_count': index_retention
}
)
-%}
{% set _ = delete_indices.update({'filters': filters}) -%}
{% set _ = action_items.append(delete_indices) -%}
{% endfor -%}
{% set actions = {} -%}
{% for action_item in action_items -%}
{% set _ = actions.update({loop.index: action_item}) -%}
{% endfor -%}
{# Render all actions #}
{% set curator_actions = {'actions': actions} -%}
{{ curator_actions | to_nice_yaml(indent=2) }}
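{# Illustrative rendered output (an assumption for documentation only):
given a single retention key "filebeat" with a 14 day retention policy,
to_nice_yaml emits roughly:
actions:
  1:
    action: delete_indices
    description: Prune indices for filebeat after 14 days
    filters:
    - filtertype: pattern
      kind: prefix
      value: filebeat
    - direction: older
      filtertype: age
      source: name
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 14
    options:
      disable_action: false
      ignore_empty_list: true
#}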

View File

@ -0,0 +1,63 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
{% set action_items = [] -%}
{# Delete index loop #}
{% for key in (ansible_local['elastic']['retention']['elastic_beat_retention_policy_keys'] | from_yaml) -%}
{% set delete_indices = {} -%}
{# Total retention size in gigabytes #}
{% set _index_size = ((ansible_local['elastic']['retention']['elastic_' + key + '_size'] | int) // 1024) -%}
{% set index_size = ((_index_size | int) > 0) | ternary(_index_size, 1) | int %}
{% set _ = delete_indices.update(
{
'action': 'delete_indices',
'description': 'Prune indices for ' + key + ' after index is > ' ~ index_size ~ 'gb',
'options': {
'ignore_empty_list': true,
'disable_action': false
}
}
)
-%}
{% set filters = [] -%}
{% set _ = filters.append(
{
'filtertype': 'pattern',
'kind': 'prefix',
'value': key
}
)
-%}
{% set _ = filters.append(
{
'filtertype': 'space',
'disk_space': index_size,
'use_age': true,
'source': 'creation_date'
}
)
-%}
{% set _ = delete_indices.update({'filters': filters}) -%}
{% set _ = action_items.append(delete_indices) -%}
{% endfor -%}
{% set actions = {} -%}
{% for action_item in action_items -%}
{% set _ = actions.update({loop.index: action_item}) -%}
{% endfor -%}
{# Render all actions #}
{% set curator_actions = {'actions': actions} -%}
{{ curator_actions | to_nice_yaml(indent=2) }}
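{# Illustrative rendered filter (an assumption for documentation only):
given a key "metricbeat" whose stored size fact is 51200 (MB), the space
filter prunes the oldest metricbeat indices once they exceed
51200 // 1024 = 50 gigabytes:
- disk_space: 50
  filtertype: space
  source: creation_date
  use_age: true
#}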

View File

@ -0,0 +1,32 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
client:
hosts:
- {{ ansible_host }}
port: {{ elastic_port }}
url_prefix: ""
use_ssl: false
ssl_no_validate: true
http_auth: ""
timeout: 120
master_only: true
logging:
loglevel: INFO
logfile: /var/log/curator/curator
logformat: default
blacklist:
- elasticsearch
- urllib3

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
curator_distro_packages:
- python-virtualenv

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
curator_distro_packages:
- python-virtualenv

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
curator_distro_packages:
- python-virtualenv

View File

@ -0,0 +1,18 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
curator_distro_packages:
- python-virtualenv
- virtualenv

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This interface is used to determine cluster recovery speed.
elastic_data_interface: "{{ ansible_default_ipv4['alias'] }}"

View File

@ -0,0 +1,33 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
galaxy_info:
author: OpenStack
description: Elastic v6.x data hosts role
company: Rackspace
license: Apache2
min_ansible_version: 2.5
platforms:
- name: Ubuntu
versions:
- trusty
- xenial
- bionic
categories:
- cloud
- development
- elasticsearch
- elastic-stack
dependencies: []

View File

@ -0,0 +1,41 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Refresh minimal facts
setup:
gather_subset: '!all,!any,network,virtual'
tags:
- always
- name: Load data node variables
include_vars: "data-node-variables.yml"
tags:
- always
- name: Ensure local facts directory exists
file:
dest: "/etc/ansible/facts.d"
state: directory
group: "root"
owner: "root"
mode: "0755"
recurse: no
- name: Initialize local facts
ini_file:
dest: "/etc/ansible/facts.d/elastic.fact"
section: "setup"
option: cacheable
value: true

View File

@ -0,0 +1,204 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# storage node count is equal to the cluster size
storage_node_count: "{{ groups['elastic-logstash'] | length }}"
# the elasticsearch cluster elects one master from all those which are marked as master-eligible
# a 1 node cluster can only have one master
# 2 node clusters have 1 master-eligible node to avoid split-brain
# 3 node clusters have 3 master-eligible nodes
# >3 node clusters have (nodes // 2) eligible masters rounded up to the next odd number
elastic_master_node_count: |-
{% set masters = 0 %}
{% if (storage_node_count | int) < 3 %}
{% set masters = 1 %}
{% elif (storage_node_count | int) == 3 %}
{% set masters = 3 %}
{% else %}
{% set masters = (storage_node_count | int ) // 2 %}
{% if ((masters | int) % 2 == 0) %}
{% set masters = (masters | int) + 1 %}
{% endif %}
{% endif %}
{{ masters }}
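# Worked examples for the logic above: 1 or 2 nodes -> 1 master-eligible
# node; 3 nodes -> 3; 8 nodes -> 8 // 2 = 4, rounded up to the next odd
# number = 5; 10 nodes -> 10 // 2 = 5 (already odd, so unchanged).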
# Assign node roles
# the first 'elastic_master_node_count' hosts in groups['elastic-logstash'] become master-eligible nodes
# the first 'elastic_master_node_count' and subsequent alternate hosts in groups['elastic-logstash'] become data nodes
## While the data node group is dynamically chosen the override
## `elasticsearch_node_data` can be used to override the node type.
## Dynamic node inclusion will still work for all other nodes in the group.
_data_nodes: "{{ (groups['elastic-logstash'][:elastic_master_node_count | int] | union(groups['elastic-logstash'][elastic_master_node_count | int::2])) }}"
data_nodes: |-
{% set nodes = [] %}
{% for node in groups['elastic-logstash'] %}
{% if (hostvars[node]['elasticsearch_node_data'] is defined) and (hostvars[node]['elasticsearch_node_data'] | bool) %}
{% set _ = nodes.append(node) %}
{% endif %}
{% endfor %}
{% for node in groups['elastic-logstash'] %}
{% if (nodes | length) <= (_data_nodes | length) %}
{% if (node in _data_nodes) %}
{% set _ = nodes.append(node) %}
{% endif %}
{% endif %}
{% endfor %}
{{ nodes }}
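# Illustrative example (hypothetical five host group log1..log5 with
# elastic_master_node_count=3): _data_nodes is log1,log2,log3 (the master
# slice) union every second remaining host (log4), so log1-log4 hold data
# unless per-host `elasticsearch_node_data` overrides are set, which the
# loops above honour first.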
## While the logstash node group is dynamically chosen the override
## `elasticsearch_node_ingest` can be used to override the node type.
## Dynamic node inclusion will still work for all other nodes in the group.
_logstash_nodes: "{{ data_nodes }}"
logstash_nodes: |-
{% set nodes = [] %}
{% for node in groups['elastic-logstash'] %}
{% if (hostvars[node]['elasticsearch_node_ingest'] is defined) and (hostvars[node]['elasticsearch_node_ingest'] | bool) %}
{% set _ = nodes.append(node) %}
{% endif %}
{% endfor %}
{% for node in groups['elastic-logstash'] %}
{% if (nodes | length) <= (_logstash_nodes | length) %}
{% if (node in _logstash_nodes) %}
{% set _ = nodes.append(node) %}
{% endif %}
{% endif %}
{% endfor %}
{{ nodes }}
## While the ingest node group is dynamically chosen the override
## `elasticsearch_node_ingest` can be used to override the node type.
## Dynamic node inclusion will still work for all other nodes in the group.
_ingest_nodes: "{{ data_nodes }}"
ingest_nodes: |-
{% set nodes = [] %}
{% for node in groups['elastic-logstash'] %}
{% if (hostvars[node]['elasticsearch_node_ingest'] is defined) and (hostvars[node]['elasticsearch_node_ingest'] | bool) %}
{% set _ = nodes.append(node) %}
{% endif %}
{% endfor %}
{% for node in groups['elastic-logstash'] %}
{% if (nodes | length) <= (_ingest_nodes | length) %}
{% if (node in _ingest_nodes) %}
{% set _ = nodes.append(node) %}
{% endif %}
{% endif %}
{% endfor %}
{{ nodes }}
## While the master node group is dynamically chosen the override
## `elasticsearch_node_master` can be used to override the node type.
## Dynamic node inclusion will still work for all other nodes in the group.
_master_nodes: "{{ groups['elastic-logstash'][:elastic_master_node_count | int] }}"
master_nodes: |-
{% set nodes = [] %}
{% for node in groups['elastic-logstash'] %}
{% if (nodes | length) <= (elastic_master_node_count | int) %}
{% if (hostvars[node]['elasticsearch_node_master'] is defined) and (hostvars[node]['elasticsearch_node_master'] | bool) %}
{% set _ = nodes.append(node) %}
{% endif %}
{% endif %}
{% endfor %}
{% for node in groups['elastic-logstash'] %}
{% if (nodes | length) <= (elastic_master_node_count | int) %}
{% if (node in _master_nodes) %}
{% set _ = nodes.append(node) %}
{% endif %}
{% endif %}
{% endfor %}
{{ nodes }}
master_node_count: "{{ master_nodes | length }}"
coordination_nodes: |-
{% if (groups['kibana'] | length) > 0 %}
{% set c_nodes = groups['kibana'] %}
{% else %}
{% set c_nodes = groups['elastic-logstash'] %}
{% endif %}
{{
(c_nodes | map('extract', hostvars, 'ansible_host') | list)
| map('regex_replace', '(.*)' ,'\1:' ~ elastic_port)
| list
}}
zen_nodes: >-
{{
(groups['elastic-logstash'] | union(groups['kibana'])) | map('extract', hostvars, 'ansible_host') | list | shuffle(seed=inventory_hostname)
}}
elasticserch_interface_speed: |-
{% set default_interface_fact = hostvars[inventory_hostname]['ansible_' + (elastic_data_interface | replace('-', '_'))] %}
{% set speeds = [] %}
{% if default_interface_fact['type'] == 'bridge' %}
{% for interface in default_interface_fact['interfaces'] %}
{% set interface_fact = hostvars[inventory_hostname]['ansible_' + (interface | replace('-', '_'))] %}
{% if 'speed' in interface_fact %}
{% set speed = (interface_fact['speed'] | default(1000)) | string %}
{% if speed == "-1" %}
{% set _ = speeds.append(1000) %}
{% else %}
{% set _ = speeds.append(speed | int) %}
{% endif %}
{% if 'module' in interface_fact %}
{% set _ = speeds.append((interface_fact['speed'] | default(1000)) | int) %}
{% else %}
{% set _ = speeds.append(1000) %}
{% endif %}
{% endif %}
{% endfor %}
{% else %}
{% if ('module' in default_interface_fact) or (default_interface_fact['type'] == 'bond') %}
{% set speed = (default_interface_fact['speed'] | default(1000)) | string %}
{% if speed == "-1" %}
{% set _ = speeds.append(1000) %}
{% else %}
{% set _ = speeds.append(speed | int) %}
{% endif %}
{% else %}
{% set _ = speeds.append(1000) %}
{% endif %}
{% endif %}
{% set interface_speed = ((speeds | min) * 0.20) | int %}
{{ ((interface_speed | int) > 750) | ternary(750, interface_speed) }}
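# Worked example for the computation above (speeds are the Mbit/s values
# reported by Ansible interface facts): a bonded 10GbE interface reports
# 10000, and 20% of that is 2000, which is capped at the 750 ceiling; a
# single 1GbE NIC (speed 1000) yields 1000 * 0.20 = 200.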
elasticsearch_data_node_details: >-
{{
(data_nodes | map('extract', hostvars, 'ansible_host') | list) | map('regex_replace', '(.*)' ,'\1:' ~ elastic_port) | list
}}
logstash_data_node_details: >-
{{
(logstash_nodes | map('extract', hostvars, 'ansible_host') | list) | map('regex_replace', '(.*)' ,'\1:' ~ logstash_beat_input_port) | list
}}
# based on the assignment of roles to hosts, set per host booleans
master_node: "{{ (inventory_hostname in master_nodes) | ternary(true, false) }}"
data_node: "{{ (inventory_hostname in data_nodes) | ternary(true, false) }}"
elastic_processors_floor: "{{ ((ansible_processor_count | int) - 1) }}"
elastic_processors_floor_set: "{{ ((elastic_processors_floor | int) > 0) | ternary(elastic_processors_floor, 1) }}"
elastic_thread_pool_size: "{{ ((ansible_processor_count | int) >= 24) | ternary(23, elastic_processors_floor_set) }}"
# Set data node facts. The data nodes, in the case of elasticsearch, are also
# ingest nodes.
elasticsearch_number_of_replicas: "{{ ((data_nodes | length) > 2) | ternary('2', ((data_nodes | length) > 1) | ternary('1', '0')) }}"
elasticsearch_data_hosts: |-
{% set data_hosts = elasticsearch_data_node_details | shuffle(seed=inventory_hostname) %}
{% if inventory_hostname in data_nodes %}
{% set _ = data_hosts.insert(0, '127.0.0.1:' ~ elastic_port) %}
{% endif %}
{{ data_hosts }}
logstash_data_hosts: |-
{% set data_hosts = logstash_data_node_details | shuffle(seed=inventory_hostname) %}
{% if inventory_hostname in data_nodes %}
{% set _ = data_hosts.insert(0, '127.0.0.1:' ~ logstash_beat_input_port) %}
{% endif %}
{{ data_hosts }}
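# Illustrative result (hypothetical data node "log2" with the default
# elastic_port of 9200): the rendered elasticsearch_data_hosts list places
# the local node first, e.g.
# - "127.0.0.1:9200"
# - "10.0.0.3:9200"
# - "10.0.0.1:9200"
# The shuffle is seeded with the inventory hostname so each host gets a
# stable, but host-unique, ordering across runs.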

View File

@ -0,0 +1,46 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
## Adds option to set the UID/GID of a given service user.
# service_group_gid: 5000
# service_owner_uid: 5000
# Define this in host/group vars as needed to mount remote filesystems.
# Set the client address as appropriate; eth1 assumes the osa container mgmt network.
# Mountpoints and server paths are just examples.
#elastic_shared_fs_repos:
# - fstype: nfs4
# src: "<nfs-server-ip>:/esbackup"
# opts: clientaddr="{{ ansible_eth1['ipv4']['address'] }}"
# path: "/elastic-backup"
# state: mounted
# NOTE(cloudnull) - When the heap size for a given elastic node is greater
# than 6GiB the G1 garbage collector can be enabled.
elastic_g1gc_enabled: true
elastic_lxc_template_config:
3:
aa_profile: lxc.apparmor.profile
mount: lxc.mount.entry
2:
aa_profile: lxc.aa_profile
mount: lxc.mount.entry
# Set the elastic search heap size. If this option is undefined the value will
# be derived automatically using 1/4 of the available RAM for logstash and 1/2
# of the available RAM for elasticsearch. The value is expected to be in MiB.
# elastic_heap_size_default: 10240 # type `int`
# Set the friendly name of the version of java that will be used as the default.
elastic_java_version: java-8

View File

@ -0,0 +1,34 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
allow_duplicates: true
galaxy_info:
author: OpenStack
description: Elastic v6.x dependencies role
company: Rackspace
license: Apache2
min_ansible_version: 2.5
platforms:
- name: Ubuntu
versions:
- trusty
- xenial
- bionic
categories:
- cloud
- development
- elasticsearch
- elastic-stack
dependencies: []

View File

@ -0,0 +1,238 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Check for service_name var
fail:
msg: >-
The required variable [ service_name ] is undefined.
when:
- service_name is undefined
- name: Check for service_owner var
fail:
msg: >-
The required variable [ service_owner ] is undefined.
when:
- service_owner is undefined
- name: Check for service_group var
fail:
msg: >-
The required variable [ service_group ] is undefined.
when:
- service_group is undefined
- name: Load service variables
include_vars: "vars_{{ service_name }}.yml"
- name: Gather variables for each operating system
include_vars: "{{ item }}"
with_first_found:
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_version | lower }}.yml"
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_distribution | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_version.split('.')[0] }}.yml"
- "{{ ansible_os_family | lower }}.yml"
tags:
- always
- name: Set elastic log rotate path
set_fact:
elastic_log_rotate_path: "/var/log/{{ service_name }}"
- name: Configure sysctl vm.max_map_count=524288 on elastic hosts
sysctl:
name: "vm.max_map_count"
value: "524288"
state: "present"
reload: "yes"
sysctl_file: /etc/sysctl.d/99-elasticsearch.conf
delegate_to: "{{ physical_host }}"
tags:
- sysctl
- name: Configure sysctl fs.inotify.max_user_watches=1048576 on elastic hosts
sysctl:
name: "fs.inotify.max_user_watches"
value: "1048576"
state: "present"
reload: "yes"
sysctl_file: /etc/sysctl.d/99-elasticsearch.conf
delegate_to: "{{ physical_host }}"
tags:
- sysctl
- name: Create the system group
group:
name: "{{ service_group }}"
gid: "{{ service_group_gid | default(omit) }}"
state: "present"
system: "yes"
- name: Create the system user
block:
- name: Create the system user
user:
name: "{{ service_owner }}"
uid: "{{ service_owner_uid | default(omit) }}"
group: "{{ service_group }}"
shell: "/bin/false"
system: "yes"
createhome: "no"
home: "/var/lib/{{ service_name }}"
rescue:
- name: Check for system user
debug:
msg: >-
The general user creation task failed. This typically means that the
user already exists and something in the user configuration provided
is changing the system user in a way that is simply not possible at
this time. The playbooks will now simply ensure the user exists before
carrying on to the next task. While it's not required, it may be
beneficial to schedule a maintenance window during which the elastic
services are stopped.
- name: Ensure the system user exists
user:
name: "{{ service_owner }}"
group: "{{ service_group }}"
- name: Physical host block
block:
- name: Check for directory
stat:
path: "/var/lib/{{ service_name }}"
register: service_dir
- name: Check for data directory
debug:
msg: >-
The service data directory [ /var/lib/{{ service_name }} ] already
exists. To ensure no data is lost, the linked directory path to
[ /openstack/{{ inventory_hostname }}/{{ service_name }} ] will not be
created for this host.
when:
- service_dir.stat.isdir is defined and
service_dir.stat.isdir
- name: Ensure the service data-path exists
file:
path: "/openstack/{{ inventory_hostname }}/{{ service_name }}"
state: "directory"
owner: "{{ service_owner }}"
group: "{{ service_group }}"
when:
- not (service_dir.stat.exists | bool)
- name: Ensure data link exists
file:
src: "/openstack/{{ inventory_hostname }}/{{ service_name }}"
dest: "/var/lib/{{ service_name }}"
owner: "{{ service_owner }}"
group: "{{ service_group }}"
state: link
when:
- not (service_dir.stat.exists | bool)
when:
- physical_host == inventory_hostname
- name: Container block
block:
- name: Ensure the service data-path exists
file:
path: "/openstack/{{ inventory_hostname }}/{{ service_name }}"
state: "directory"
delegate_to: "{{ physical_host }}"
- name: Pull lxc version
command: "lxc-ls --version"
delegate_to: "{{ physical_host }}"
changed_when: false
register: lxc_version
tags:
- skip_ansible_lint
- name: Set lxc major version fact (selects lxc 2/3 config key syntax)
set_fact:
lxc_major_version: "{{ lxc_version.stdout.split('.')[0] }}"
- name: Elasticsearch datapath bind mount
lxc_container:
name: "{{ inventory_hostname }}"
container_command: |
[[ ! -d "/var/lib/{{ service_name }}" ]] && mkdir -p "/var/lib/{{ service_name }}"
container_config:
- "{{ elastic_lxc_template_config[(lxc_major_version | int)]['mount'] }}=/openstack/{{ inventory_hostname }}/{{ service_name }} var/lib/{{ service_name }} none bind 0 0"
- "{{ elastic_lxc_template_config[(lxc_major_version | int)]['aa_profile'] }}=unconfined"
delegate_to: "{{ physical_host }}"
when:
- container_tech | default('lxc') == 'lxc'
- physical_host != inventory_hostname
- name: Ensure Java is installed
package:
name: "{{ elastic_distro_packages }}"
state: "{{ elk_package_state | default('present') }}"
install_recommends: "{{ (ansible_pkg_mgr == 'apt') | ternary('yes', omit) }}"
update_cache: "{{ (ansible_pkg_mgr == 'apt') | ternary('yes', omit) }}"
register: _package_task
until: _package_task is success
retries: 3
delay: 2
tags:
- package_install
- name: Set java alternatives
block:
- name: Get java version alternative
shell: >-
update-alternatives --query java | awk -F':' '/{{ elastic_java_version }}/ && /Alternative/ {print $2}'
register: java_alternatives
changed_when: false
- name: Set java version alternative
alternatives:
name: java
path: "{{ java_alternatives.stdout.strip() }}"
when:
- (ansible_os_family | lower) == 'debian'
- name: Ensure service directory exists
file:
path: "/etc/{{ service_name }}"
state: "directory"
owner: "{{ service_owner }}"
group: "{{ service_group }}"
- name: Drop logrotate conf file(s)
template:
src: "{{ item.src }}"
dest: "{{ item.dest }}"
with_items:
- src: "templates/logrotate.j2"
dest: "/etc/logrotate.d/{{ service_name }}"
- name: Ensure host can resolve itself
lineinfile:
path: /etc/hosts
regexp: '^{{ item }}'
line: '{{ item }} {{ ansible_hostname }} {{ ansible_fqdn }}'
owner: root
group: root
mode: 0644
with_items:
- "127.0.2.1"
- "{{ ansible_host }}"

View File

@ -0,0 +1,12 @@
{{ elastic_log_rotate_path }}/*.log
{
copytruncate
daily
rotate 2
delaycompress
compress
dateext
notifempty
missingok
maxage 5
}

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
elastic_distro_packages:
- java-1.8.0-openjdk

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
elastic_distro_packages:
- java-1_8_0-openjdk

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
elastic_distro_packages:
- openjdk-8-jre

View File

@ -0,0 +1,17 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The heap size is set to half of the total memory available, with a cap
# of 30720 MiB (safely below the 32GiB compressed-oops threshold). If the
# computed value is below the cap, a 10% buffer is subtracted to ensure
# the underlying system is not starved of memory.
_elastic_heap_size_default: "{{ ((elastic_memory_upper_limit | int) > 30720) | ternary(30720, ((elastic_memory_upper_limit | int) - ((elastic_memory_upper_limit | int) * 0.1))) }}"
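# Worked example (assuming `elastic_memory_upper_limit` carries half of
# the system RAM in MiB, per the note above): a 64GiB host yields a limit
# of 32768, which exceeds 30720, so the heap is capped at 30720 MiB; a
# limit of 16384 yields 16384 - (16384 * 0.1) ~= 14746 MiB.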

View File

@ -0,0 +1,17 @@
---
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The heap size is set to a quarter of the total memory available, with a
# cap of 30720 MiB (safely below the 32GiB compressed-oops threshold). If
# the computed value is below the cap, a 10% buffer is subtracted to
# ensure the underlying system is not starved of memory.
_elastic_heap_size_default: "{{ ((elastic_memory_lower_limit | int) > 30720) | ternary(30720, ((elastic_memory_lower_limit | int) - ((elastic_memory_lower_limit | int) * 0.1))) }}"
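# Worked example (assuming `elastic_memory_lower_limit` carries a quarter
# of the system RAM in MiB): a 64GiB host yields a limit of 16384, so the
# heap is 16384 - 1638.4 ~= 14746 MiB; only hosts with roughly 120GiB of
# RAM or more reach the 30720 MiB cap.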

View File

@ -0,0 +1,284 @@
---
# Copyright 2018, Vexxhost, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
filebeat_service_state: restarted
filebeat_oslo_log_multiline_config:
pattern: '^[0-9-]{10} +[0-9:\.]+ +[0-9]+ +[A-Z]+ +[A-Za-z0-9\._]+ \[|Traceback'
negate: true
match: after
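# Illustrative behaviour (hedged): an oslo formatted line such as
#   2018-06-01 12:00:00.000 1234 INFO nova.compute.manager [req-...]
# matches the pattern above and begins a new event, while unmatched lines
# (for example the body of a traceback) are appended to the event before
# them (negate: true, match: after).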
filebeat_prospectors:
- type: log
enabled: "{{ filebeat_repo_enabled | default(true) }}"
paths:
- /openstack/log/*repo_container*/apt-cacher-ng/apt-cacher.*
- /openstack/log/*repo_container*/pypiserver/*.log
- /openstack/log/*repo_container*/rsyncd.log
tags:
- infrastructure
- repo-server
- type: log
enabled: "{{ filebeat_haproxy_enabled | default(true) }}"
paths:
- /var/log/haproxy/*.log
tags:
- infrastructure
- haproxy
- type: log
enabled: "{{ filebeat_rabbitmq_enabled | default(true) }}"
paths:
- /openstack/log/*rabbit*/rabbitmq/*.log
- /openstack/log/*rabbit*/rabbitmq/log/*.log
- /var/log/rabbitmq/*.log
- /var/log/rabbitmq/log/*.log
multiline:
pattern: '^='
negate: true
match: after
tags:
- infrastructure
- rabbitmq
- type: log
enabled: "{{ filebeat_ceph_enabled | default(true) }}"
paths:
- /openstack/log/*ceph*/ceph/ceph-mon.*.log
- /var/log/ceph/ceph-mon.*.log
tags:
- infrastructure
- ceph
- ceph-mon
- type: log
enabled: "{{ filebeat_ceph_enabled | default(true) }}"
paths:
- /openstack/log/*ceph*/ceph/ceph-mgr.*.log
- /var/log/ceph/ceph-mgr.*.log
tags:
- infrastructure
- ceph
- ceph-mgr
- type: log
enabled: "{{ filebeat_ceph_enabled | default(true) }}"
paths:
- /openstack/log/*ceph*/ceph/ceph-osd.*.log
- /var/log/ceph-osd.*.log
tags:
- infrastructure
- ceph
- ceph-osd
- type: log
enabled: "{{ filebeat_keystone_enabled | default(true) }}"
paths:
- /openstack/log/*keystone*/keystone/keystone.log
- /var/log/keystone/keystone.log
multiline: "{{ filebeat_oslo_log_multiline_config }}"
tags:
- openstack
- keystone
# NOTE(mnaser): Barbican ships to Journal
- type: log
enabled: "{{ filebeat_glance_enabled | default(true) }}"
paths:
- /openstack/log/*glance*/glance/*.log
- /var/log/glance/*.log
multiline: "{{ filebeat_oslo_log_multiline_config }}"
tags:
- openstack
- glance
# NOTE(mnaser): Cinder ships to journal
- type: log
enabled: "{{ filebeat_nova_enabled | default(true) }}"
paths:
- /openstack/log/*nova*/nova/*.log
- /var/log/nova/*.log
multiline: "{{ filebeat_oslo_log_multiline_config }}"
tags:
- openstack
- nova
- type: log
enabled: "{{ filebeat_neutron_enabled | default(true) }}"
paths:
- /openstack/log/*neutron*/neutron/*.log
- /var/log/neutron/*.log
multiline: "{{ filebeat_oslo_log_multiline_config }}"
tags:
- openstack
- neutron
- type: log
enabled: "{{ filebeat_heat_enabled | default(true) }}"
paths:
- /openstack/log/*heat*/heat/*.log
- /var/log/heat/*.log
multiline: "{{ filebeat_oslo_log_multiline_config }}"
tags:
- openstack
- heat
- type: log
enabled: "{{ filebeat_designate_enabled | default(true) }}"
paths:
- /openstack/log/*designate*/designate/*.log
- /var/log/designate/*.log
multiline: "{{ filebeat_oslo_log_multiline_config }}"
tags:
- openstack
- designate
- type: log
enabled: "{{ filebeat_swift_enabled | default(true) }}"
paths:
- /openstack/log/*swift*/account*.log
- /var/log/swift/account*.log
multiline:
pattern: '^[A-Za-z]+[[:space:]]* +[0-9]{1,2} +[0-9:\.]+ +[A-Za-z0-9-]+ container-replicator: +[A-Za-z0-9-\ ]+'
negate: false
match: after
tags:
- openstack
- swift
- swift-account
- type: log
enabled: "{{ filebeat_swift_enabled | default(true) }}"
paths:
- /openstack/log/*swift*/container*.log
- /var/log/swift/container*.log
multiline:
pattern: '^[A-Za-z]+[[:space:]]* +[0-9]{1,2} +[0-9:\.]+ +[A-Za-z0-9-]+ account-replicator: +[A-Za-z0-9-\ ]+'
negate: false
match: after
tags:
- openstack
- swift
- swift-container
- type: log
enabled: "{{ filebeat_swift_enabled | default(true) }}"
paths:
- /openstack/log/*swift*/object*.log
- /var/log/swift/object*.log
multiline:
pattern: '^[A-Za-z]+[[:space:]]* +[0-9]{1,2} +[0-9:\.]+ +[A-Za-z0-9-]+ object-replicator: +[A-Za-z0-9-\ ]+'
negate: false
match: after
tags:
- openstack
- swift
- swift-object
- type: log
enabled: "{{ filebeat_swift_enabled | default(true) }}"
paths:
- /openstack/log/*swift*/proxy*.log
- /var/log/swift/proxy*.log
tags:
- openstack
- swift
- swift-proxy
- type: log
enabled: "{{ filebeat_gnocchi_enabled | default(true) }}"
paths:
- /openstack/log/*gnocchi*/gnocchi/*.log
- /var/log/gnocchi/*.log
multiline: "{{ filebeat_oslo_log_multiline_config }}"
tags:
- openstack
- gnocchi
- type: log
enabled: "{{ filebeat_ceilometer_enabled | default(true) }}"
paths:
- /openstack/log/*ceilometer*/ceilometer/*.log
- /var/log/ceilometer/*.log
multiline: "{{ filebeat_oslo_log_multiline_config }}"
tags:
- openstack
- ceilometer
- type: log
enabled: "{{ filebeat_aodh_enabled | default(true) }}"
paths:
- /openstack/log/*aodh*/aodh/*.log
- /var/log/aodh/*.log
multiline: "{{ filebeat_oslo_log_multiline_config }}"
tags:
- openstack
- aodh
- type: log
enabled: "{{ filebeat_ironic_enabled | default(true) }}"
paths:
- /openstack/log/*ironic*/ironic/*.log
- /var/log/ironic/*.log
multiline: "{{ filebeat_oslo_log_multiline_config }}"
tags:
- openstack
- ironic
- type: log
enabled: "{{ filebeat_magnum_enabled | default(true) }}"
paths:
- /openstack/log/*magnum*/magnum/*.log
- /var/log/magnum/*.log
multiline: "{{ filebeat_oslo_log_multiline_config }}"
tags:
- openstack
- magnum
- type: log
enabled: "{{ filebeat_trove_enabled | default(true) }}"
paths:
- /openstack/log/*trove*/trove/*.log
- /var/log/trove/*.log
multiline: "{{ filebeat_oslo_log_multiline_config }}"
tags:
- openstack
- trove
- type: log
enabled: "{{ filebeat_sahara_enabled | default(true) }}"
paths:
- /openstack/log/*sahara*/sahara/*.log
- /var/log/sahara/*.log
multiline: "{{ filebeat_oslo_log_multiline_config }}"
tags:
- openstack
- sahara
- type: log
enabled: "{{ filebeat_octavia_enabled | default(true) }}"
paths:
- /openstack/log/*octavia*/octavia/*.log
- /var/log/octavia/*.log
multiline: "{{ filebeat_oslo_log_multiline_config }}"
tags:
- openstack
- octavia
- type: log
enabled: "{{ filebeat_tacker_enabled | default(true) }}"
paths:
- /openstack/log/*tacker*/tacker/*.log
- /var/log/tacker/*.log
multiline: "{{ filebeat_oslo_log_multiline_config }}"
tags:
- openstack
- tacker
- type: log
enabled: "{{ filebeat_system_enabled | default(true) }}"
paths:
- /openstack/log/ansible-logging/*.log
- /var/log/*.log
- /var/log/libvirt/*.log
- /var/log/libvirt/*/*.log
- /var/log/lxc/*.log
tags:
- system
- type: log
enabled: "{{ filebeat_logging_enabled | default(true) }}"
paths:
- /openstack/log/*/beats/*.log
- /openstack/log/*/curator/curator
- /openstack/log/*/elasticsearch/*.log
- /var/log/beats/*.log
- /var/log/curator/curator
- /var/log/elasticsearch/*.log
tags:
- beats

View File

@ -0,0 +1,33 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Enable and restart filebeat (systemd)
systemd:
name: "filebeat"
enabled: true
state: "{{ filebeat_service_state }}"
daemon_reload: true
when:
- ansible_service_mgr == 'systemd'
listen: Enable and restart filebeat
- name: Enable and restart filebeat (upstart)
service:
name: "filebeat"
state: "{{ filebeat_service_state }}"
enabled: yes
when:
- ansible_service_mgr == 'upstart'
listen: Enable and restart filebeat

View File

@ -0,0 +1,35 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
galaxy_info:
author: OpenStack
description: Elastic v6.x filebeat role
company: Rackspace
license: Apache2
min_ansible_version: 2.5
platforms:
- name: Ubuntu
versions:
- trusty
- xenial
- bionic
categories:
- cloud
- development
- elasticsearch
- elastic-stack
dependencies:
- role: elastic_data_hosts
- role: elastic_repositories

View File

@ -0,0 +1,112 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Gather variables for each operating system
include_vars: "{{ item }}"
with_first_found:
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_version | lower }}.yml"
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_distribution | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_version.split('.')[0] }}.yml"
- "{{ ansible_os_family | lower }}.yml"
tags:
- always
- name: Ensure beat is installed
package:
name: "{{ filebeat_distro_packages }}"
state: "{{ elk_package_state | default('present') }}"
update_cache: "{{ (ansible_pkg_mgr == 'apt') | ternary('yes', omit) }}"
register: _package_task
until: _package_task is success
retries: 3
delay: 2
when:
- ansible_architecture == 'x86_64'
notify:
- Enable and restart filebeat
tags:
- package_install
- name: Ensure beat is installed (aarch64)
apt:
deb: 'https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/8709ca2640344a4ba85cba0a1d6eea69/aarch64/filebeat-6.5.0-arm64.deb'
when:
- ansible_pkg_mgr == 'apt'
- ansible_architecture == 'aarch64'
notify:
- Enable and restart filebeat
tags:
- package_install
- name: Create filebeat systemd service config dir
file:
path: "/etc/systemd/system/filebeat.service.d"
state: "directory"
group: "root"
owner: "root"
mode: "0755"
when:
- ansible_service_mgr == 'systemd'
- name: Apply systemd options
template:
src: "{{ item.src }}"
dest: "/etc/systemd/system/filebeat.service.d/{{ item.dest }}"
mode: "0644"
when:
- ansible_service_mgr == 'systemd'
with_items:
- src: "systemd.general-overrides.conf.j2"
dest: "filebeat-overrides.conf"
notify:
- Enable and restart filebeat
- name: Drop Filebeat conf file
template:
src: "filebeat.yml.j2"
dest: "/etc/filebeat/filebeat.yml"
notify:
- Enable and restart filebeat
- name: Run the beat setup role
include_role:
name: elastic_beat_setup
when:
- (groups['kibana'] | length) > 0
vars:
elastic_beat_name: "filebeat"
- name: Force beat handlers
meta: flush_handlers
- name: Set filebeat service state (upstart)
service:
name: "filebeat"
state: "{{ filebeat_service_state }}"
enabled: "{{ filebeat_service_state in ['running', 'started', 'restarted'] }}"
when:
- ansible_service_mgr == 'upstart'
- filebeat_service_state in ['started', 'stopped']
- name: Set filebeat service state (systemd)
systemd:
name: "filebeat"
state: "{{ filebeat_service_state }}"
enabled: "{{ filebeat_service_state in ['running', 'started', 'restarted'] }}"
when:
- ansible_service_mgr == 'systemd'
- filebeat_service_state in ['started', 'stopped']

View File

@ -0,0 +1 @@
../../../templates/systemd.general-overrides.conf.j2

View File

@ -0,0 +1,938 @@
{% import 'templates/_macros.j2' as elk_macros %}
######################## Filebeat Configuration ############################
# This file is a full configuration example documenting all non-deprecated
# options in comments. For a shorter configuration example that contains
# only the most common options, please see filebeat.yml in the same directory.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
#========================== Modules configuration ============================
filebeat.modules:
#------------------------------- System Module -------------------------------
- module: system
# Syslog
syslog:
enabled: "{{ filebeat_syslog_enabled | default(true) }}"
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
#var.paths:
# Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
var.convert_timezone: false
# Prospector configuration (advanced). Any prospector configuration option
# can be added under this section.
#prospector:
# Authorization logs
auth:
enabled: true
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
#var.paths:
# Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
var.convert_timezone: false
# Prospector configuration (advanced). Any prospector configuration option
# can be added under this section.
#prospector:
#------------------------------- Apache2 Module ------------------------------
- module: apache2
access:
enabled: "{{ filebeat_httpd_enabled | default(true) }}"
var.paths:
- /openstack/log/*horizon*/horizon/*access.log
error:
enabled: "{{ filebeat_httpd_enabled | default(true) }}"
var.paths:
- /openstack/log/*horizon*/horizon/horizon-error.log
#------------------------------- Auditd Module -------------------------------
- module: auditd
log:
enabled: "{{ filebeat_auditd_enabled | default(true) }}"
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
#var.paths:
# Prospector configuration (advanced). Any prospector configuration option
# can be added under this section.
#prospector:
#------------------------------- Icinga Module -------------------------------
#- module: icinga
# Main logs
#main:
#enabled: true
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
#var.paths:
# Prospector configuration (advanced). Any prospector configuration option
# can be added under this section.
#prospector:
# Debug logs
#debug:
#enabled: true
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
#var.paths:
# Prospector configuration (advanced). Any prospector configuration option
# can be added under this section.
#prospector:
# Startup logs
#startup:
#enabled: true
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
#var.paths:
# Prospector configuration (advanced). Any prospector configuration option
# can be added under this section.
#prospector:
#--------------------------------- IIS Module --------------------------------
#- module: iis
# Access logs
#access:
#enabled: true
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
#var.paths:
# Prospector configuration (advanced). Any prospector configuration option
# can be added under this section.
#prospector:
# Error logs
#error:
#enabled: true
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
#var.paths:
# Prospector configuration (advanced). Any prospector configuration option
# can be added under this section.
#prospector:
#-------------------------------- Kafka Module -------------------------------
#- module: kafka
# All logs
#log:
#enabled: true
# Set custom paths for Kafka. If left empty,
# Filebeat will look under /opt.
#var.kafka_home:
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
#var.paths:
#------------------------------ logstash Module ------------------------------
- module: logstash
# logs
log:
enabled: true
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
# var.paths:
# Slow logs
slowlog:
enabled: true
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
#var.paths:
#------------------------------- mongodb Module ------------------------------
#- module: mongodb
# Logs
#log:
#enabled: true
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
#var.paths:
# Input configuration (advanced). Any input configuration option
# can be added under this section.
#input:
#-------------------------------- MySQL Module -------------------------------
- module: mysql
error:
enabled: "{{ filebeat_galera_enabled | default(true) }}"
var.paths:
- /openstack/log/*galera*/mysql_logs/galera_server_error.log
- /var/log/mysql_logs/galera_server_error.log
slowlog:
enabled: false
#-------------------------------- Nginx Module -------------------------------
- module: nginx
access:
enabled: "{{ filebeat_nginx_enabled | default(true) }}"
var.paths:
- /openstack/log/*repo_container*/nginx/*access.log
- /openstack/log/*keystone*/nginx/*access.log
error:
enabled: "{{ filebeat_nginx_enabled | default(true) }}"
var.paths:
- /openstack/log/*repo_container*/nginx/*error.log
- /openstack/log/*keystone*/nginx/*error.log
#------------------------------- Osquery Module ------------------------------
- module: osquery
result:
enabled: "{{ filebeat_osquery_enabled | default(true) }}"
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
#var.paths:
# If true, all fields created by this module are prefixed with
# `osquery.result`. Set to false to copy the fields in the root
# of the document. The default is true.
var.use_namespace: true
#----------------------------- PostgreSQL Module -----------------------------
#- module: postgresql
# Logs
#log:
#enabled: true
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
#var.paths:
# Prospector configuration (advanced). Any prospector configuration option
# can be added under this section.
#prospector:
#-------------------------------- Redis Module -------------------------------
#- module: redis
# Main logs
#log:
#enabled: true
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
#var.paths: ["/var/log/redis/redis-server.log*"]
# Slow logs, retrieved via the Redis API (SLOWLOG)
#slowlog:
#enabled: true
# The Redis hosts to connect to.
#var.hosts: ["localhost:6379"]
# Optional, the password to use when connecting to Redis.
#var.password:
#------------------------------- Traefik Module ------------------------------
#- module: traefik
# Access logs
#access:
#enabled: true
# Set custom paths for the log files. If left empty,
# Filebeat will choose the paths depending on your OS.
#var.paths:
# Prospector configuration (advanced). Any prospector configuration option
# can be added under this section.
#prospector:
#=========================== Filebeat prospectors =============================
# List of prospectors to fetch data.
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
# Type of the files. Based on this the way the file is read is decided.
# The different types cannot be mixed in one prospector
#
# Possible options are:
# * log: Reads every line of the log file (default)
# * stdin: Reads the standard in
#------------------------------ Log prospector --------------------------------
{% for p in filebeat_prospectors %}
- type: {{ p['type'] }}
enabled: {{ p['enabled'] }}
paths:
{% for path in p['paths'] %}
- {{ path }}
{% endfor %}
{% if 'multiline' in p %}
multiline.pattern: '{{ p['multiline']['pattern'] }}'
multiline.negate: {{ p['multiline']['negate'] }}
multiline.match: {{ p['multiline']['match'] }}
{% endif %}
tags:
{% for tag in p['tags'] %}
- {{ tag }}
{% endfor %}
{% endfor %}
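{# Illustrative rendering (hedged) of a single `filebeat_prospectors`
entry, e.g. the haproxy item from the role defaults:
- type: log
  enabled: True
  paths:
    - /var/log/haproxy/*.log
  tags:
    - infrastructure
    - haproxy
#}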
#----------------------------- Stdin prospector -------------------------------
# Configuration to use stdin input
#- type: stdin
#------------------------- Redis slowlog prospector ---------------------------
# Experimental: Config options for the redis slow log prospector
#- type: redis
#hosts: ["localhost:6379"]
#username:
#password:
#enabled: false
#scan_frequency: 10s
# Timeout after which time the prospector should return an error
#timeout: 1s
# Network type to be used for redis connection. Default: tcp
#network: tcp
# Max number of concurrent connections. Default: 10
#maxconn: 10
# Redis AUTH password. Empty by default.
#password: foobared
#------------------------------ Udp prospector --------------------------------
# Experimental: Config options for the udp prospector
#- type: udp
# Maximum size of the message received over UDP
#max_message_size: 10240
#========================== Filebeat autodiscover ==============================
# Autodiscover allows you to detect changes in the system and spawn new modules
# or prospectors as they happen.
#filebeat.autodiscover:
# List of enabled autodiscover providers
# providers:
# - type: docker
# templates:
# - condition:
# equals.docker.container.image: busybox
# config:
# - type: log
# paths:
# - /var/lib/docker/containers/${data.docker.container.id}/*.log
#========================= Filebeat global options ============================
# Name of the registry file. If a relative path is used, it is considered relative to the
# data path.
#filebeat.registry_file: ${path.data}/registry
# These config files must have the full filebeat config part inside, but only
# the prospector part is processed. All global options like spool_size are ignored.
# The config_dir MUST point to a different directory than the one containing the main filebeat config file.
#filebeat.config_dir:
# How long filebeat waits on shutdown for the publisher to finish.
# Default is 0, not waiting.
#filebeat.shutdown_timeout: 0
# Enable filebeat config reloading
#filebeat.config:
#prospectors:
#enabled: false
#path: prospectors.d/*.yml
#reload.enabled: true
#reload.period: 10s
#modules:
#enabled: false
#path: modules.d/*.yml
#reload.enabled: true
#reload.period: 10s
#================================ General ======================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
# If this option is not defined, the hostname is used.
#name:
# The tags of the shipper are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
tags:
- filebeat
# Optional fields that you can specify to add additional information to the
# output. Fields can be scalar values, arrays, dictionaries, or any nested
# combination of these.
#fields:
# env: staging
# If this option is set to true, the custom fields are stored as top-level
# fields in the output document instead of being grouped under a fields
# sub-dictionary. Default is false.
#fields_under_root: false
# Internal queue configuration for buffering events to be published.
#queue:
# Queue type by name (default 'mem')
# The memory queue will present all available events (up to the output's
# bulk_max_size) to the output, the moment the output is ready to serve
# another batch of events.
#mem:
# Max number of events the queue can buffer.
#events: 4096
# Hints the minimum number of events stored in the queue,
# before providing a batch of events to the outputs.
# A value of 0 (the default) ensures events are immediately available
# to be sent to the outputs.
#flush.min_events: 2048
# Maximum duration after which events are available to the outputs,
# if the number of events stored in the queue is < flush.min_events.
#flush.timeout: 1s
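# As an illustrative sketch only (values below are assumptions, not
# tuned recommendations), a larger memory queue that batches more events
# per flush could be configured as:
#queue:
#  mem:
#    events: 8192
#    flush.min_events: 1024
#    flush.timeout: 5s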
# Sets the maximum number of CPUs that can be executing simultaneously. The
# default is the number of logical CPUs available in the system.
#max_procs:
#================================ Processors ===================================
# Processors are used to reduce the number of fields in the exported event or to
# enhance the event with external metadata. This section defines a list of
# processors that are applied one by one and the first one receives the initial
# event:
#
# event -> filter1 -> event1 -> filter2 ->event2 ...
#
# The supported processors are drop_fields, drop_event, include_fields, and
# add_cloud_metadata.
#
# For example, you can use the following processors to keep the fields that
# contain CPU load percentages, but remove the fields that contain CPU ticks
# values:
#
#processors:
#- include_fields:
# fields: ["cpu"]
#- drop_fields:
# fields: ["cpu.user", "cpu.system"]
#
# The following example drops the events that have the HTTP response code 200:
#
#processors:
#- drop_event:
# when:
# equals:
# http.code: 200
#
# The following example enriches each event with metadata from the cloud
# provider about the host machine. It works on EC2, GCE, DigitalOcean,
# Tencent Cloud, and Alibaba Cloud.
#
#processors:
#- add_cloud_metadata: ~
#
# The following example enriches each event with the machine's local time zone
# offset from UTC.
#
#processors:
#- add_locale:
# format: offset
#
# The following example enriches each event with docker metadata; it matches
# the given fields to an existing container id and adds info from that container:
#
#processors:
#- add_docker_metadata:
# host: "unix:///var/run/docker.sock"
# match_fields: ["system.process.cgroup.id"]
# match_pids: ["process.pid", "process.ppid"]
# match_source: true
# match_source_index: 4
# cleanup_timeout: 60
# # To connect to Docker over TLS you must specify a client and CA certificate.
# #ssl:
# # certificate_authority: "/etc/pki/root/ca.pem"
# # certificate: "/etc/pki/client/cert.pem"
# # key: "/etc/pki/client/cert.key"
#
# The following example enriches each event with docker metadata; it matches
# the container id from the log path available in the `source` field (by default
# it expects the path to be /var/lib/docker/containers/*/*.log).
#
#processors:
#- add_docker_metadata: ~
processors:
- add_host_metadata: ~
#============================= Elastic Cloud ==================================
# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs ======================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output -------------------------------
#output.elasticsearch:
# Boolean flag to enable or disable the output module.
#enabled: true
# Array of hosts to connect to.
# Scheme and port can be left out and will be set to the default (http and 9200)
# In case you specify an additional path, the scheme is required: http://localhost:9200/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
#hosts: ["localhost:9200"]
# Set gzip compression level.
#compression_level: 0
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
# Dictionary of HTTP parameters to pass within the url with index operations.
#parameters:
#param1: value1
#param2: value2
# Number of workers per Elasticsearch host.
#worker: 1
# Optional index name. The default is "filebeat" plus date
# and generates [filebeat-]YYYY.MM.DD keys.
# In case you modify this pattern you must update setup.template.name and setup.template.pattern accordingly.
#index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"
# Optional ingest node pipeline. By default no pipeline will be used.
#pipeline: ""
# Optional HTTP Path
#path: "/elasticsearch"
# Custom HTTP headers to add to each request
#headers:
# X-My-Header: Contents of the header
# Proxy server url
#proxy_url: http://proxy:3128
# The number of times a particular Elasticsearch index operation is attempted. If
# the indexing operation doesn't succeed after this many retries, the events are
# dropped. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Elasticsearch bulk API index request.
# The default is 50.
#bulk_max_size: 50
# Configure http request timeout before failing a request to Elasticsearch.
#timeout: 90
# Use SSL settings for HTTPS.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# SSL configuration. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#----------------------------- Logstash output ---------------------------------
{{ elk_macros.output_logstash(inventory_hostname, logstash_data_hosts, ansible_processor_count) }}
#------------------------------- Kafka output ----------------------------------
#output.kafka:
# Boolean flag to enable or disable the output module.
#enabled: true
# The list of Kafka broker addresses from where to fetch the cluster metadata.
# The cluster metadata contain the actual Kafka brokers events are published
# to.
#hosts: ["localhost:9092"]
# The Kafka topic used for produced events. The setting can be a format string
# using any event field. To set the topic from document type use `%{[type]}`.
#topic: beats
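# For example, assuming events carry a custom `fields.log_topic` value,
# the topic can be derived per event:
#topic: '%{[fields.log_topic]}'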
# The Kafka event key setting. Use format string to create unique event key.
# By default no event key will be generated.
#key: ''
# The Kafka event partitioning strategy. Default hashing strategy is `hash`
# using the `output.kafka.key` setting or randomly distributes events if
# `output.kafka.key` is not configured.
#partition.hash:
# If enabled, events will only be published to partitions with reachable
# leaders. Default is false.
#reachable_only: false
# Configure alternative event field names used to compute the hash value.
# If empty `output.kafka.key` setting will be used.
# Default value is empty list.
#hash: []
# Authentication details. Password is required if username is set.
#username: ''
#password: ''
# Kafka version filebeat is assumed to run against. Defaults to the oldest
# supported stable version (currently version 0.8.2.0)
#version: 0.8.2
# Metadata update configuration. The metadata contains leader information
# used to decide which broker to publish to.
#metadata:
# Max metadata request retry attempts when cluster is in middle of leader
# election. Defaults to 3 retries.
#retry.max: 3
# Waiting time between retries during leader elections. Default is 250ms.
#retry.backoff: 250ms
# Refresh metadata interval. Defaults to every 10 minutes.
#refresh_frequency: 10m
# The number of concurrent load-balanced Kafka output workers.
#worker: 1
# The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, the events are typically dropped.
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until
# all events are published. Set max_retries to a value less than 0 to retry
# until all events are published. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Kafka request. The default
# is 2048.
#bulk_max_size: 2048
# The number of seconds to wait for responses from the Kafka brokers before
# timing out. The default is 30s.
#timeout: 30s
# The maximum duration a broker will wait for the number of required ACKs. The
# default is 10s.
#broker_timeout: 10s
# The number of messages buffered for each Kafka broker. The default is 256.
#channel_buffer_size: 256
# The keep-alive period for an active network connection. If 0s, keep-alives
# are disabled. The default is 0 seconds.
#keep_alive: 0
# Sets the output compression codec. Must be one of none, snappy and gzip. The
# default is gzip.
#compression: gzip
# The maximum permitted size of JSON-encoded messages. Bigger messages will be
# dropped. The default value is 1000000 (bytes). This value should be equal to
# or less than the broker's message.max.bytes.
#max_message_bytes: 1000000
# The ACK reliability level required from broker. 0=no response, 1=wait for
# local commit, -1=wait for all replicas to commit. The default is 1. Note:
# If set to 0, no ACKs are returned by Kafka. Messages might be lost silently
# on error.
#required_acks: 1
# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
#ssl.enabled: true
# Optional SSL configuration options. SSL is off by default.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#------------------------------- Redis output ----------------------------------
#output.redis:
# Boolean flag to enable or disable the output module.
#enabled: true
# The list of Redis servers to connect to. If load balancing is enabled, the
# events are distributed to the servers in the list. If one server becomes
# unreachable, the events are distributed to the reachable servers only.
#hosts: ["localhost:6379"]
# The Redis port to use if hosts does not contain a port number. The default
# is 6379.
#port: 6379
# The name of the Redis list or channel the events are published to. The
# default is filebeat.
#key: filebeat
# The password to authenticate with. The default is no authentication.
#password:
# The Redis database number where the events are published. The default is 0.
#db: 0
# The Redis data type to use for publishing events. If the data type is list,
# the Redis RPUSH command is used. If the data type is channel, the Redis
# PUBLISH command is used. The default value is list.
#datatype: list
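# For example (key name below is illustrative only), to publish events
# to a Redis channel instead of a list:
#datatype: channel
#key: "filebeat-events"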
# The number of workers to use for each host configured to publish events to
# Redis. Use this setting along with the loadbalance option. For example, if
# you have 2 hosts and 3 workers, in total 6 workers are started (3 for each
# host).
#worker: 1
# If set to true and multiple hosts or workers are configured, the output
# plugin load balances published events onto all Redis hosts. If set to false,
# the output plugin sends all events to only one host (determined at random)
# and will switch to another host if the currently selected one becomes
# unreachable. The default value is true.
#loadbalance: true
# The Redis connection timeout in seconds. The default is 5 seconds.
#timeout: 5s
# The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, the events are typically dropped.
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until
# all events are published. Set max_retries to a value less than 0 to retry
# until all events are published. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Redis request or pipeline.
# The default is 2048.
#bulk_max_size: 2048
# The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The
# value must be a URL with a scheme of socks5://.
#proxy_url:
# This option determines whether Redis hostnames are resolved locally when
# using a proxy. The default value is false, which means that name resolution
# occurs on the proxy server.
#proxy_use_local_resolver: false
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# Optional SSL configuration options. SSL is off by default.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#------------------------------- File output -----------------------------------
#output.file:
# Boolean flag to enable or disable the output module.
#enabled: true
# Path to the directory where to save the generated files. The option is
# mandatory.
#path: "/tmp/filebeat"
# Name of the generated files. The default is `filebeat` and it generates
# files: `filebeat`, `filebeat.1`, `filebeat.2`, etc.
#filename: filebeat
# Maximum size in kilobytes of each file. When this size is reached, and on
# every filebeat restart, the files are rotated. The default value is 10240
# kB.
#rotate_every_kb: 10000
# Maximum number of files under path. When this number of files is reached,
# the oldest file is deleted and the rest are shifted from last to first. The
# default is 7 files.
#number_of_files: 7
# Permissions to use for file creation. The default is 0600.
#permissions: 0600
#----------------------------- Console output ---------------------------------
#output.console:
# Boolean flag to enable or disable the output module.
#enabled: true
# Pretty print json event
#pretty: false
#================================= Paths ======================================
# The home path for the filebeat installation. This is the default base path
# for all other path settings and for miscellaneous files that come with the
# distribution (for example, the sample dashboards).
# If not set by a CLI flag or in the configuration file, the default for the
# home path is the location of the binary.
#path.home:
# The configuration path for the filebeat installation. This is the default
# base path for configuration files, including the main YAML configuration file
# and the Elasticsearch template file. If not set by a CLI flag or in the
# configuration file, the default for the configuration path is the home path.
#path.config: ${path.home}
# The data path for the filebeat installation. This is the default base path
# for all the files in which filebeat needs to store its data. If not set by a
# CLI flag or in the configuration file, the default for the data path is a data
# subdirectory inside the home path.
#path.data: ${path.home}/data
# The logs path for a filebeat installation. This is the default location for
# the Beat's log files. If not set by a CLI flag or in the configuration file,
# the default for the logs path is a logs subdirectory inside the home path.
#path.logs: ${path.home}/logs
#============================== Dashboards =====================================
{{ elk_macros.setup_dashboards('filebeat') }}
#=============================== Template ======================================
{{ elk_macros.setup_template('filebeat', inventory_hostname, data_nodes, elasticsearch_number_of_replicas) }}
#============================== Kibana =====================================
{% if (groups['kibana'] | length) > 0 %}
{{ elk_macros.setup_kibana(hostvars[groups['kibana'][0]]['ansible_host'] ~ ':' ~ kibana_port) }}
{% endif %}
#================================ Logging ======================================
{{ elk_macros.beat_logging('filebeat') }}
#============================== Xpack Monitoring =====================================
{{ elk_macros.xpack_monitoring_elasticsearch(inventory_hostname, elasticsearch_data_hosts, ansible_processor_count) }}
#================================ HTTP Endpoint ======================================
# Each beat can expose internal metrics through an HTTP endpoint. For security
# reasons the endpoint is disabled by default. This feature is currently experimental.
# Stats can be accessed at http://localhost:5066/stats. For pretty JSON output
# append ?pretty to the URL.
# Defines if the HTTP endpoint is enabled.
#http.enabled: false
# The HTTP endpoint will bind to this hostname or IP address. It is recommended to use only localhost.
#http.host: localhost
# Port on which the HTTP endpoint will bind. Default is 5066.
#http.port: 5066
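# As a usage sketch, assuming the endpoint is enabled with the defaults
# above, the stats can be queried with:
#   curl 'http://localhost:5066/stats?pretty'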

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
filebeat_distro_packages:
- filebeat

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
filebeat_distro_packages:
- filebeat

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
filebeat_distro_packages:
- filebeat

View File

@ -0,0 +1,16 @@
---
# Copyright 2018, Vexxhost, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
heartbeat_service_state: restarted

View File

@ -0,0 +1,33 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Enable and restart heartbeat (systemd)
systemd:
name: "heartbeat-elastic"
enabled: true
state: "{{ heartbeat_service_state }}"
daemon_reload: true
when:
- ansible_service_mgr == 'systemd'
listen: Enable and restart heartbeat
- name: Enable and restart heartbeat (upstart)
service:
name: "heartbeat-elastic"
state: "{{ heartbeat_service_state }}"
enabled: yes
when:
- ansible_service_mgr == 'upstart'
listen: Enable and restart heartbeat

View File

@ -0,0 +1,35 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
galaxy_info:
author: OpenStack
description: Elastic v7.x heartbeat role
company: Rackspace
license: Apache2
min_ansible_version: 2.5
platforms:
- name: Ubuntu
versions:
- trusty
- xenial
- bionic
categories:
- cloud
- development
- elasticsearch
- elastic-stack
dependencies:
- role: elastic_data_hosts
- role: elastic_repositories

View File

@ -0,0 +1,118 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Gather variables for each operating system
include_vars: "{{ item }}"
with_first_found:
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_version | lower }}.yml"
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_distribution | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_version.split('.')[0] }}.yml"
- "{{ ansible_os_family | lower }}.yml"
tags:
- always
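# As an illustration of the lookup order above: on an Ubuntu 18.04 host
# the first_found list resolves to "ubuntu-18.04.yml", then
# "ubuntu-18.yml", then "debian-18.yml", then "ubuntu.yml", before
# falling back to "debian.yml".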
- name: Ensure beat is installed
package:
name: "{{ heartbeat_distro_packages }}"
state: "{{ elk_package_state | default('present') }}"
update_cache: "{{ (ansible_pkg_mgr == 'apt') | ternary('yes', omit) }}"
register: _package_task
until: _package_task is success
retries: 3
delay: 2
when:
- ansible_architecture == 'x86_64'
notify:
- Enable and restart heartbeat
tags:
- package_install
- name: Ensure beat is installed (aarch64)
apt:
deb: 'https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/8709ca2640344a4ba85cba0a1d6eea69/aarch64/heartbeat-6.5.0-arm64.deb'
when:
- ansible_pkg_mgr == 'apt'
- ansible_architecture == 'aarch64'
notify:
- Enable and restart heartbeat
tags:
- package_install
- name: Create heartbeat systemd service config dir
file:
path: "/etc/systemd/system/heartbeat.service.d"
state: "directory"
group: "root"
owner: "root"
mode: "0755"
when:
- ansible_service_mgr == 'systemd'
- name: Apply systemd options
template:
src: "{{ item.src }}"
dest: "{{ item.dest }}"
mode: "0644"
when:
- ansible_service_mgr == 'systemd'
with_items:
- src: "systemd.general-overrides.conf.j2"
dest: "/etc/systemd/system/heartbeat.service.d/heartbeat-overrides.conf"
notify:
- Enable and restart heartbeat
- name: Create heartbeat configs
template:
src: "{{ item.src }}"
dest: "{{ item.dest }}"
mode: "0644"
when:
- ansible_service_mgr == 'systemd'
with_items:
- src: "heartbeat.yml.j2"
dest: "/etc/heartbeat/heartbeat.yml"
notify:
- Enable and restart heartbeat
- name: Run the beat setup role
include_role:
name: elastic_beat_setup
when:
- (groups['kibana'] | length) > 0
vars:
elastic_beat_name: "heartbeat"
- name: Force beat handlers
meta: flush_handlers
- name: Set heartbeat service state (upstart)
service:
name: "heartbeat-elastic"
state: "{{ heartbeat_service_state }}"
enabled: "{{ heartbeat_service_state in ['running', 'started', 'restarted'] }}"
when:
- ansible_service_mgr == 'upstart'
- heartbeat_service_state in ['started', 'stopped']
- name: Set heartbeat service state (systemd)
systemd:
name: "heartbeat-elastic"
state: "{{ heartbeat_service_state }}"
enabled: "{{ heartbeat_service_state in ['running', 'started', 'restarted'] }}"
when:
- ansible_service_mgr == 'systemd'
- heartbeat_service_state in ['started', 'stopped']

View File

@ -0,0 +1 @@
../../../templates/systemd.general-overrides.conf.j2

View File

@ -0,0 +1,951 @@
{% import 'templates/_macros.j2' as elk_macros %}
################### Heartbeat Configuration Example #########################
# This file is a full configuration example documenting all non-deprecated
# options in comments. For a shorter configuration example, that contains
# only some common options, please see heartbeat.yml in the same directory.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/heartbeat/index.html
############################# Heartbeat ######################################
{% set icmp_hosts = [] %}
{% for host_item in groups['all'] %}
{% if hostvars[host_item]['ansible_host'] is defined %}
{% set _ = icmp_hosts.extend([hostvars[host_item]['ansible_host']]) %}
{% endif %}
{% endfor %}
# Configure monitors
heartbeat.monitors:
- type: icmp # monitor type `icmp` (requires root) uses ICMP Echo Request to ping
# configured hosts
# Monitor name used for job name and document type.
name: icmp
# Enable/Disable monitor
enabled: true
# Configure task schedule using cron-like syntax
schedule: '@every 30s' # every 30 seconds from start of beat
# List of hosts to ping
hosts: {{ (icmp_hosts | default([])) | to_json }}
# Configure IP protocol types to ping on if hostnames are configured.
# Ping all resolvable IPs if `mode` is `all`, or only one IP if `mode` is `any`.
ipv4: true
ipv6: true
mode: any
# Configure a JSON file to be watched for changes to the monitor:
#watch.poll_file:
# Path to check for updates.
#path:
# Interval between file change checks.
#interval: 5s
# Total running time per ping test.
timeout: {{ icmp_hosts | length }}s
# Waiting duration until another ICMP Echo Request is emitted.
wait: 1s
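# As a worked example of the template expression above: with 20 pingable
# hosts in inventory, `timeout` renders as 20s while `wait` stays at 1s.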
# The tags of the monitors are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# monitor output. Fields can be scalar values, arrays, dictionaries, or any nested
# combination of these.
#fields:
# env: staging
# If this option is set to true, the custom fields are stored as top-level
# fields in the output document instead of being grouped under a fields
# sub-dictionary. Default is false.
#fields_under_root: false
# NOTE: THIS FEATURE IS DEPRECATED AND WILL BE REMOVED IN A FUTURE RELEASE
# Configure a JSON file to be watched for changes to the monitor:
#watch.poll_file:
# Path to check for updates.
#path:
# Interval between file change checks.
#interval: 5s
# Define a directory to load monitor definitions from. Definitions take the form
# of individual yaml files.
# heartbeat.config.monitors:
# Directory + glob pattern to search for configuration files
#path: /path/to/my/monitors.d/*.yml
# If enabled, heartbeat will periodically check the config.monitors path for changes
#reload.enabled: true
# How often to check for changes
#reload.period: 1s
{% for item in heartbeat_services %}
{% if item.type == 'tcp' %}
{% set hosts = [] %}
{% for port in item.ports | default([]) %}
{% for backend in item.group | default([]) %}
{% set backend_host = hostvars[backend]['ansible_host'] %}
{% set _ = hosts.extend([backend_host + ":" + (port | string)]) %}
{% endfor %}
{% endfor %}
{% if hosts | length > 0 %}
- type: tcp # monitor type `tcp`. Connect via TCP and optionally verify endpoint
# by sending/receiving a custom payload
# Monitor name used for job name and document type
name: "{{ item.name }}"
# Enable/Disable monitor
enabled: true
# Configure task schedule
schedule: '@every 45s' # every 45 seconds from start of beat
# configure hosts to ping.
# Entries can be:
# - plain host name or IP like `localhost`:
# Requires ports configs to be checked. If ssl is configured,
# a SSL/TLS based connection will be established. Otherwise plain tcp connection
# will be established
# - hostname + port like `localhost:12345`:
# Connect to port on given host. If ssl is configured,
# a SSL/TLS based connection will be established. Otherwise plain tcp connection
# will be established
# - full url syntax. `scheme://<host>:[port]`. The `<scheme>` can be one of
# `tcp`, `plain`, `ssl` and `tls`. If `tcp`, `plain` is configured, a plain
# tcp connection will be established, even if ssl is configured.
# Using `tls`/`ssl`, an SSL connection is established. If no ssl is configured,
# system defaults will be used (not supported on windows).
# If `port` is missing in url, the ports setting is required.
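# Illustrative entries only; this template always renders host:port
# pairs built from the service group above:
#   "localhost", "localhost:12345", "tls://example.internal:443"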
hosts: {{ (hosts | default([])) | to_json }}
# Configure IP protocol types to ping on if hostnames are configured.
# Ping all resolvable IPs if `mode` is `all`, or only one IP if `mode` is `any`.
ipv4: true
ipv6: true
mode: any
# Configure a JSON file to be watched for changes to the monitor:
#watch.poll_file:
# Path to check for updates.
#path:
# Interval between file change checks.
#interval: 5s
# List of ports to ping if host does not contain a port number
# ports: [80, 9200, 5044]
# Total test connection and data exchange timeout
#timeout: 16s
# Optional payload string to send to the remote endpoint and expected answer.
# If none is configured, the endpoint is expected to be up if the connection
# attempt was successful. If only `send_string` is configured, any response
# will be accepted as ok. If only `receive_string` is configured, no payload
# will be sent, but the client expects to receive the expected payload on
# connect.
#check:
#send: ''
#receive: ''
# SOCKS5 proxy url
# proxy_url: ''
# Resolve hostnames locally instead on SOCKS5 server:
#proxy_use_local_resolver: false
# TLS/SSL connection settings:
#ssl:
# Certificate Authorities
#certificate_authorities: ['']
# Required TLS protocols
#supported_protocols: ["TLSv1.0", "TLSv1.1", "TLSv1.2"]
{% endif %}
{% elif item.type == 'http' %}
{% set hosts = [] %}
{% for port in item.ports | default([]) %}
{% for backend in item.group | default([]) %}
{% set backend_host = hostvars[backend]['ansible_host'] %}
{% set _ = hosts.extend(["http://" + backend_host + ":" + (port | string) + item.path]) %}
{% endfor %}
{% endfor %}
{% if hosts | length > 0 %}
- type: http # monitor type `http`. Connect via HTTP and optionally verify the response
# Monitor name used for job name and document type
name: "{{ item.name }}"
# Enable/Disable monitor
enabled: true
# Configure task schedule
schedule: '@every 60s' # every 60 seconds from start of beat
# Configure URLs to ping
urls: {{ (hosts | default([])) | to_json }}
# Configure IP protocol types to ping on if hostnames are configured.
# Ping all resolvable IPs if `mode` is `all`, or only one IP if `mode` is `any`.
ipv4: true
ipv6: true
mode: "any"
# Configure a JSON file to be watched for changes to the monitor:
#watch.poll_file:
# Path to check for updates.
#path:
# Interval between file change checks.
#interval: 5s
# Optional HTTP proxy url. If not set HTTP_PROXY environment variable will be used.
#proxy_url: ''
# Total test connection and data exchange timeout
#timeout: 16s
# Optional Authentication Credentials
#username: ''
#password: ''
# TLS/SSL connection settings for use with HTTPS endpoint. If not configured
# system defaults will be used.
#ssl:
# Certificate Authorities
#certificate_authorities: ['']
# Required TLS protocols
#supported_protocols: ["TLSv1.0", "TLSv1.1", "TLSv1.2"]
# Request settings:
check.request:
# Configure HTTP method to use. Only 'HEAD', 'GET' and 'POST' methods are allowed.
method: "{{ item.method }}"
# Dictionary of additional HTTP headers to send:
headers:
User-agent: osa-heartbeat-healthcheck
# Optional request body content
#body:
# Expected response settings
{% if item.check_response is defined %}
check.response: {{ item.check_response }}
#check.response:
# Expected status code. If not configured or set to 0, any status code
# other than 404 is accepted.
#status: 0
# Required response headers.
#headers:
# Required response contents.
#body:
{% endif %}
{% endif %}
{% endif %}
{% endfor %}
heartbeat.scheduler:
# Limit the number of concurrent tasks executed by heartbeat. The task limit
# is disabled if set to 0. The default is 0.
limit: {{ icmp_hosts | length // 4 }}
# Set the scheduler's timezone
#location: ''
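# As a worked example of the expression above: with 40 ICMP target hosts
# the limit renders as 10; with fewer than 4 hosts it renders as 0,
# which disables the limit.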
#================================ General ======================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
# If this option is not defined, the hostname is used.
#name:
# The tags of the shipper are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output. Fields can be scalar values, arrays, dictionaries, or any nested
# combination of these.
#fields:
# env: staging
# If this option is set to true, the custom fields are stored as top-level
# fields in the output document instead of being grouped under a fields
# sub-dictionary. Default is false.
#fields_under_root: false
# Internal queue configuration for buffering events to be published.
#queue:
# Queue type by name (default 'mem')
# The memory queue will present all available events (up to the output's
# bulk_max_size) to the output, the moment the output is ready to serve
# another batch of events.
#mem:
# Max number of events the queue can buffer.
#events: 4096
# Hints the minimum number of events stored in the queue,
# before providing a batch of events to the outputs.
# The default value is set to 2048.
# A value of 0 ensures events are immediately available
# to be sent to the outputs.
#flush.min_events: 2048
# Maximum duration after which events are available to the outputs,
# if the number of events stored in the queue is < flush.min_events.
#flush.timeout: 1s
# The spool queue will store events in a local spool file, before
# forwarding the events to the outputs.
#
# Beta: spooling to disk is currently a beta feature. Use with care.
#
# The spool file is a circular buffer, which blocks once the file/buffer is full.
# Events are put into a write buffer and flushed once the write buffer
# is full or the flush_timeout is triggered.
# Once ACKed by the output, events are removed immediately from the queue,
# making space for new events to be persisted.
#spool:
# The file namespace configures the file path and the file creation settings.
# Once the file exists, the `size`, `page_size` and `prealloc` settings
# will have no more effect.
#file:
# Location of spool file. The default value is ${path.data}/spool.dat.
#path: "${path.data}/spool.dat"
# Configure file permissions if file is created. The default value is 0600.
#permissions: 0600
# File size hint. The spool blocks, once this limit is reached. The default value is 100 MiB.
#size: 100MiB
# The files page size. A file is split into multiple pages of the same size. The default value is 4KiB.
#page_size: 4KiB
# If prealloc is set, the required space for the file is reserved using
# truncate. The default value is true.
#prealloc: true
# Spool writer settings
# Events are serialized into a write buffer. The write buffer is flushed if:
# - The buffer limit has been reached.
# - The configured limit of buffered events is reached.
# - The flush timeout is triggered.
#write:
# Sets the write buffer size.
#buffer_size: 1MiB
# Maximum duration after which events are flushed, if the write buffer
# is not full yet. The default value is 1s.
#flush.timeout: 1s
# Number of maximum buffered events. The write buffer is flushed once the
# limit is reached.
#flush.events: 16384
# Configure the on-disk event encoding. The encoding can be changed
# between restarts.
# Valid encodings are: json, ubjson, and cbor.
#codec: cbor
#read:
# Reader flush timeout, waiting for more events to become available, so
# to fill a complete batch, as required by the outputs.
# If flush_timeout is 0, all available events are forwarded to the
# outputs immediately.
# The default value is 0s.
#flush.timeout: 0s
# Sets the maximum number of CPUs that can be executing simultaneously. The
# default is the number of logical CPUs available in the system.
#max_procs:
#================================ Processors ===================================
# Processors are used to reduce the number of fields in the exported event or to
# enhance the event with external metadata. This section defines a list of
# processors that are applied one by one and the first one receives the initial
# event:
#
# event -> filter1 -> event1 -> filter2 ->event2 ...
#
# The supported processors are drop_fields, drop_event, include_fields,
# decode_json_fields, and add_cloud_metadata.
#
# For example, you can use the following processors to keep the fields that
# contain CPU load percentages, but remove the fields that contain CPU ticks
# values:
#
#processors:
#- include_fields:
# fields: ["cpu"]
#- drop_fields:
# fields: ["cpu.user", "cpu.system"]
#
# The following example drops the events that have the HTTP response code 200:
#
#processors:
#- drop_event:
# when:
# equals:
# http.code: 200
#
# The following example renames the field a to b:
#
#processors:
#- rename:
# fields:
# - from: "a"
# to: "b"
#
# The following example tokenizes the string into fields:
#
#processors:
#- dissect:
# tokenizer: "%{key1} - %{key2}"
# field: "message"
# target_prefix: "dissect"
#
# The following example enriches each event with metadata from the cloud
# provider about the host machine. It works on EC2, GCE, DigitalOcean,
# Tencent Cloud, and Alibaba Cloud.
#
#processors:
#- add_cloud_metadata: ~
#
# The following example enriches each event with the machine's local time zone
# offset from UTC.
#
#processors:
#- add_locale:
# format: offset
#
# The following example enriches each event with docker metadata; it matches
# the given fields to an existing container id and adds info from that container:
#
#processors:
#- add_docker_metadata:
# host: "unix:///var/run/docker.sock"
# match_fields: ["system.process.cgroup.id"]
# match_pids: ["process.pid", "process.ppid"]
# match_source: true
# match_source_index: 4
# match_short_id: false
# cleanup_timeout: 60
# labels.dedot: false
# # To connect to Docker over TLS you must specify a client and CA certificate.
# #ssl:
# # certificate_authority: "/etc/pki/root/ca.pem"
# # certificate: "/etc/pki/client/cert.pem"
# # key: "/etc/pki/client/cert.key"
#
# The following example enriches each event with docker metadata; it matches
# the container id from the log path available in the `source` field (by default
# it expects the path to be /var/lib/docker/containers/*/*.log).
#
#processors:
#- add_docker_metadata: ~
#
# The following example enriches each event with host metadata.
#
#processors:
#- add_host_metadata:
# netinfo.enabled: false
#
# The following example enriches each event with process metadata using
# process IDs included in the event.
#
#processors:
#- add_process_metadata:
# match_pids: ["system.process.ppid"]
# target: system.process.parent
#
# The following example decodes fields containing JSON strings
# and replaces the strings with valid JSON objects.
#
#processors:
#- decode_json_fields:
# fields: ["field1", "field2", ...]
# process_array: false
# max_depth: 1
# target: ""
# overwrite_keys: false
processors:
- add_host_metadata: ~
#============================= Elastic Cloud ==================================
# These settings simplify using heartbeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs ======================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output -------------------------------
#output.elasticsearch:
# Boolean flag to enable or disable the output module.
#enabled: true
# Array of hosts to connect to.
# Scheme and port can be left out and will be set to the default (http and 9200)
# In case you specify an additional path, the scheme is required: http://localhost:9200/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
#hosts: ["localhost:9200"]
# Set gzip compression level.
#compression_level: 0
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
# Dictionary of HTTP parameters to pass within the url with index operations.
#parameters:
#param1: value1
#param2: value2
# Number of workers per Elasticsearch host.
#worker: 1
# Optional index name. The default is "heartbeat" plus date
# and generates [heartbeat-]YYYY.MM.DD keys.
# In case you modify this pattern you must update setup.template.name and setup.template.pattern accordingly.
#index: "heartbeat-%{[beat.version]}-%{+yyyy.MM.dd}"
# Optional ingest node pipeline. By default no pipeline will be used.
#pipeline: ""
# Optional HTTP Path
#path: "/elasticsearch"
# Custom HTTP headers to add to each request
#headers:
# X-My-Header: Contents of the header
# Proxy server url
#proxy_url: http://proxy:3128
# The number of times a particular Elasticsearch index operation is attempted. If
# the indexing operation doesn't succeed after this many retries, the events are
# dropped. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Elasticsearch bulk API index request.
# The default is 50.
#bulk_max_size: 50
# Configure http request timeout before failing a request to Elasticsearch.
#timeout: 90
# Use SSL settings for HTTPS.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# SSL configuration. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#----------------------------- Logstash output ---------------------------------
{{ elk_macros.output_logstash(inventory_hostname, logstash_data_hosts, ansible_processor_count) }}
#------------------------------- Kafka output ----------------------------------
#output.kafka:
# Boolean flag to enable or disable the output module.
#enabled: true
# The list of Kafka broker addresses from where to fetch the cluster metadata.
# The cluster metadata contain the actual Kafka brokers events are published
# to.
#hosts: ["localhost:9092"]
# The Kafka topic used for produced events. The setting can be a format string
# using any event field. To set the topic from document type use `%{[type]}`.
#topic: beats
# The Kafka event key setting. Use format string to create unique event key.
# By default no event key will be generated.
#key: ''
# The Kafka event partitioning strategy. Default hashing strategy is `hash`
# using the `output.kafka.key` setting or randomly distributes events if
# `output.kafka.key` is not configured.
#partition.hash:
# If enabled, events will only be published to partitions with reachable
# leaders. Default is false.
#reachable_only: false
# Configure alternative event field names used to compute the hash value.
# If empty `output.kafka.key` setting will be used.
# Default value is empty list.
#hash: []
# Authentication details. Password is required if username is set.
#username: ''
#password: ''
# Kafka version heartbeat is assumed to run against. Defaults to the oldest
# supported stable version (currently version 0.8.2.0)
#version: 0.8.2
# Metadata update configuration. The metadata contains leader information
# used to decide which broker to publish to.
#metadata:
# Max metadata request retry attempts when cluster is in middle of leader
# election. Defaults to 3 retries.
#retry.max: 3
# Waiting time between retries during leader elections. Default is 250ms.
#retry.backoff: 250ms
# Refresh metadata interval. Defaults to every 10 minutes.
#refresh_frequency: 10m
# The number of concurrent load-balanced Kafka output workers.
#worker: 1
# The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, the events are typically dropped.
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until
# all events are published. Set max_retries to a value less than 0 to retry
# until all events are published. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Kafka request. The default
# is 2048.
#bulk_max_size: 2048
# The number of seconds to wait for responses from the Kafka brokers before
# timing out. The default is 30s.
#timeout: 30s
# The maximum duration a broker will wait for the number of required ACKs. The
# default is 10s.
#broker_timeout: 10s
# The number of messages buffered for each Kafka broker. The default is 256.
#channel_buffer_size: 256
# The keep-alive period for an active network connection. If 0s, keep-alives
# are disabled. The default is 0 seconds.
#keep_alive: 0
# Sets the output compression codec. Must be one of none, snappy and gzip. The
# default is gzip.
#compression: gzip
# The maximum permitted size of JSON-encoded messages. Bigger messages will be
# dropped. The default value is 1000000 (bytes). This value should be equal to
# or less than the broker's message.max.bytes.
#max_message_bytes: 1000000
# The ACK reliability level required from broker. 0=no response, 1=wait for
# local commit, -1=wait for all replicas to commit. The default is 1. Note:
# If set to 0, no ACKs are returned by Kafka. Messages might be lost silently
# on error.
#required_acks: 1
# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
#ssl.enabled: true
# Optional SSL configuration options. SSL is off by default.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#------------------------------- Redis output ----------------------------------
#output.redis:
# Boolean flag to enable or disable the output module.
#enabled: true
# The list of Redis servers to connect to. If load balancing is enabled, the
# events are distributed to the servers in the list. If one server becomes
# unreachable, the events are distributed to the reachable servers only.
#hosts: ["localhost:6379"]
# The Redis port to use if hosts does not contain a port number. The default
# is 6379.
#port: 6379
# The name of the Redis list or channel the events are published to. The
# default is heartbeat.
#key: heartbeat
# The password to authenticate with. The default is no authentication.
#password:
# The Redis database number where the events are published. The default is 0.
#db: 0
# The Redis data type to use for publishing events. If the data type is list,
# the Redis RPUSH command is used. If the data type is channel, the Redis
# PUBLISH command is used. The default value is list.
#datatype: list
# The number of workers to use for each host configured to publish events to
# Redis. Use this setting along with the loadbalance option. For example, if
# you have 2 hosts and 3 workers, in total 6 workers are started (3 for each
# host).
#worker: 1
# If set to true and multiple hosts or workers are configured, the output
# plugin load balances published events onto all Redis hosts. If set to false,
# the output plugin sends all events to only one host (determined at random)
# and will switch to another host if the currently selected one becomes
# unreachable. The default value is true.
#loadbalance: true
# The Redis connection timeout in seconds. The default is 5 seconds.
#timeout: 5s
# The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, the events are typically dropped.
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until
# all events are published. Set max_retries to a value less than 0 to retry
# until all events are published. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Redis request or pipeline.
# The default is 2048.
#bulk_max_size: 2048
# The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The
# value must be a URL with a scheme of socks5://.
#proxy_url:
# This option determines whether Redis hostnames are resolved locally when
# using a proxy. The default value is false, which means that name resolution
# occurs on the proxy server.
#proxy_use_local_resolver: false
# Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# Optional SSL configuration options. SSL is off by default.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#------------------------------- File output -----------------------------------
#output.file:
# Boolean flag to enable or disable the output module.
#enabled: true
# Configure JSON encoding
#codec.json:
# Pretty print json event
#pretty: false
# Configure escaping html symbols in strings.
#escape_html: true
# Path to the directory where to save the generated files. The option is
# mandatory.
#path: "/tmp/heartbeat"
# Name of the generated files. The default is `heartbeat` and it generates
# files: `heartbeat`, `heartbeat.1`, `heartbeat.2`, etc.
#filename: heartbeat
# Maximum size in kilobytes of each file. When this size is reached, and on
# every heartbeat restart, the files are rotated. The default value is 10240
# kB.
#rotate_every_kb: 10000
# Maximum number of files under path. When this number of files is reached,
# the oldest file is deleted and the rest are shifted from last to first. The
# default is 7 files.
#number_of_files: 7
# Permissions to use for file creation. The default is 0600.
#permissions: 0600
#----------------------------- Console output ---------------------------------
#output.console:
# Boolean flag to enable or disable the output module.
#enabled: true
# Configure JSON encoding
#codec.json:
# Pretty print json event
#pretty: false
# Configure escaping html symbols in strings.
#escape_html: true
#================================= Paths ======================================
# The home path for the heartbeat installation. This is the default base path
# for all other path settings and for miscellaneous files that come with the
# distribution (for example, the sample dashboards).
# If not set by a CLI flag or in the configuration file, the default for the
# home path is the location of the binary.
#path.home:
# The configuration path for the heartbeat installation. This is the default
# base path for configuration files, including the main YAML configuration file
# and the Elasticsearch template file. If not set by a CLI flag or in the
# configuration file, the default for the configuration path is the home path.
#path.config: ${path.home}
# The data path for the heartbeat installation. This is the default base path
# for all the files in which heartbeat needs to store its data. If not set by a
# CLI flag or in the configuration file, the default for the data path is a data
# subdirectory inside the home path.
#path.data: ${path.home}/data
# The logs path for a heartbeat installation. This is the default location for
# the Beat's log files. If not set by a CLI flag or in the configuration file,
# the default for the logs path is a logs subdirectory inside the home path.
#path.logs: ${path.home}/logs
#================================ Keystore ==========================================
# Location of the Keystore containing the keys and their sensitive values.
#keystore.path: "${path.config}/beats.keystore"
#============================== Dashboards =====================================
{{ elk_macros.setup_dashboards('heartbeat') }}
#=============================== Template ======================================
{{ elk_macros.setup_template('heartbeat', inventory_hostname, data_nodes, elasticsearch_number_of_replicas) }}
#============================== Kibana =====================================
{% if (groups['kibana'] | length) > 0 %}
{{ elk_macros.setup_kibana(hostvars[groups['kibana'][0]]['ansible_host'] ~ ':' ~ kibana_port) }}
{% endif %}
#================================ Logging ======================================
{{ elk_macros.beat_logging('heartbeat') }}
#============================== Xpack Monitoring =====================================
{{ elk_macros.xpack_monitoring_elasticsearch(inventory_hostname, elasticsearch_data_hosts, ansible_processor_count) }}
#================================ HTTP Endpoint ======================================
# Each beat can expose internal metrics through an HTTP endpoint. For security
# reasons the endpoint is disabled by default. This feature is currently experimental.
# Stats can be accessed through http://localhost:5066/stats . For pretty JSON output
# append ?pretty to the URL.
# Defines if the HTTP endpoint is enabled.
#http.enabled: false
# The HTTP endpoint will bind to this hostname or IP address. It is recommended to use only localhost.
#http.host: localhost
# Port on which the HTTP endpoint will bind. Default is 5066.
#http.port: 5066
#============================= Process Security ================================
# Enable or disable seccomp system call filtering on Linux. Default is enabled.
#seccomp.enabled: true

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
heartbeat_distro_packages:
- heartbeat-elastic

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
heartbeat_distro_packages:
- heartbeat-elastic

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
heartbeat_distro_packages:
- heartbeat-elastic

View File

@ -0,0 +1,16 @@
---
# Copyright 2018, Vexxhost, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
journalbeat_service_state: restarted

View File

@ -0,0 +1,25 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Enable and restart journalbeat
systemd:
name: "journalbeat"
enabled: true
state: "{{ journalbeat_service_state }}"
daemon_reload: yes
when:
- (elk_package_state | default('present')) != 'absent'
tags:
- config

View File

@ -0,0 +1,35 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
galaxy_info:
author: OpenStack
description: Elastic v7.x journalbeat role
company: Rackspace
license: Apache2
min_ansible_version: 2.5
platforms:
- name: Ubuntu
versions:
- trusty
- xenial
- bionic
categories:
- cloud
- development
- elasticsearch
- elastic-stack
dependencies:
- role: elastic_data_hosts
- role: elastic_repositories

View File

@ -0,0 +1,118 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Gather variables for each operating system
include_vars: "{{ item }}"
with_first_found:
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_version | lower }}.yml"
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_distribution | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_version.split('.')[0] }}.yml"
- "{{ ansible_os_family | lower }}.yml"
tags:
- always
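# Earlier deployments shipped journalbeat as a standalone binary with a
# hand-written unit file; remove those legacy artifacts so they cannot shadow
# the packaged service installed below.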
- name: Uninstall legacy journalbeat
file:
path: "{{ item }}"
state: absent
with_items:
- /etc/systemd/system/journalbeat.service
- /usr/local/bin/journalbeat
- name: Ensure beat is installed
package:
name: "{{ journalbeat_distro_packages }}"
state: "{{ elk_package_state | default('present') }}"
update_cache: "{{ (ansible_pkg_mgr == 'apt') | ternary('yes', omit) }}"
register: _package_task
until: _package_task is success
retries: 3
delay: 2
notify:
- Enable and restart journalbeat
tags:
- package_install
- name: Ensure beat is installed (aarch64)
apt:
deb: 'https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/8709ca2640344a4ba85cba0a1d6eea69/aarch64/journalbeat-6.5.0-arm64.deb'
when:
- ansible_pkg_mgr == 'apt'
- ansible_architecture == 'aarch64'
notify:
- Enable and restart journalbeat
tags:
- package_install
- name: Create journalbeat systemd service config dir
file:
path: "/etc/systemd/system/journalbeat.service.d"
state: "directory"
group: "root"
owner: "root"
mode: "0755"
- name: Apply systemd options
template:
src: "{{ item.src }}"
dest: "{{ item.dest }}"
mode: "0644"
with_items:
- src: "systemd.general-overrides.conf.j2"
dest: "/etc/systemd/system/journalbeat.service.d/journalbeat-overrides.conf"
notify:
- Enable and restart journalbeat
- name: Drop journalbeat configs
template:
src: "{{ item.src }}"
dest: "{{ item.dest }}"
mode: "0644"
with_items:
- src: "journalbeat.yml.j2"
dest: "/etc/journalbeat/journalbeat.yml"
notify:
- Enable and restart journalbeat
- name: Run the beat setup role
include_role:
name: elastic_beat_setup
when:
- (groups['kibana'] | length) > 0
vars:
elastic_beat_name: "journalbeat"
- name: Force beat handlers
meta: flush_handlers
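# flush_handlers applies any pending enable/restart immediately; the two tasks
# below then honour an explicit started/stopped request, one per service manager.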
- name: Set journalbeat service state (upstart)
service:
name: "journalbeat"
state: "{{ journalbeat_service_state }}"
enabled: "{{ journalbeat_service_state in ['running', 'started', 'restarted'] }}"
when:
- ansible_service_mgr == 'upstart'
- journalbeat_service_state in ['started', 'stopped']
- name: Set journalbeat service state (systemd)
systemd:
name: "journalbeat"
state: "{{ journalbeat_service_state }}"
enabled: "{{ journalbeat_service_state in ['running', 'started', 'restarted'] }}"
when:
- ansible_service_mgr == 'systemd'
- journalbeat_service_state in ['started', 'stopped']

View File

@ -0,0 +1,796 @@
{% import 'templates/_macros.j2' as elk_macros %}
###################### Journalbeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The journalbeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/journalbeat/index.html
# For more available modules and options, please see the journalbeat.reference.yml sample
# configuration file.
#=========================== Journalbeat inputs =============================
journalbeat.inputs:
# Paths that should be crawled and fetched. Possible values: files and directories.
# When setting a directory, all journals under it are merged.
# When empty, reading starts from the local journal.
- paths: ["/var/log/journal"]
# The number of seconds to wait before trying to read again from journals.
#backoff: 1s
# The maximum number of seconds to wait before attempting to read again from journals.
#max_backoff: 60s
# Position to start reading from journal. Valid values: head, tail, cursor
seek: cursor
# Fallback position if no cursor data is available.
#cursor_seek_fallback: head
# Exact matching for field values of events.
# Matching for nginx entries: "systemd.unit=nginx"
#include_matches: []
# Optional fields that you can specify to add additional information to the
# output. Fields can be scalar values, arrays, dictionaries, or any nested
# combination of these.
#fields:
# env: staging
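# A hypothetical, commented-out example: to restrict collection to selected
# units, include_matches above could be set along these lines:
#include_matches:
#  - "systemd.unit=nginx"
#  - "systemd.unit=haproxy"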
#========================= Journalbeat global options ============================
journalbeat:
# Name of the registry file. If a relative path is used, it is considered relative to the
# data path.
registry_file: registry
# The number of seconds to wait before trying to read again from journals.
backoff: 10s
# The maximum number of seconds to wait before attempting to read again from journals.
max_backoff: 60s
# Position to start reading from all journals. Possible values: head, tail, cursor
seek: head
# Exact matching for field values of events.
# Matching for nginx entries: "systemd.unit=nginx"
#matches: []
#================================ General ======================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
# If this option is not defined, the hostname is used.
name: journalbeat
# The tags of the shipper are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
tags:
- journald
# Optional fields that you can specify to add additional information to the
# output. Fields can be scalar values, arrays, dictionaries, or any nested
# combination of these.
#fields:
# env: staging
# If this option is set to true, the custom fields are stored as top-level
# fields in the output document instead of being grouped under a fields
# sub-dictionary. Default is false.
#fields_under_root: false
# Internal queue configuration for buffering events to be published.
#queue:
# Queue type by name (default 'mem')
# The memory queue will present all available events (up to the output's
# bulk_max_size) to the output the moment the output is ready to serve
# another batch of events.
#mem:
# Max number of events the queue can buffer.
#events: 4096
# Hints the minimum number of events stored in the queue,
# before providing a batch of events to the outputs.
# The default value is set to 2048.
# A value of 0 ensures events are immediately available
# to be sent to the outputs.
#flush.min_events: 2048
# Maximum duration after which events are available to the outputs,
# if the number of events stored in the queue is < flush.min_events.
#flush.timeout: 1s
# The spool queue will store events in a local spool file, before
# forwarding the events to the outputs.
#
# Beta: spooling to disk is currently a beta feature. Use with care.
#
# The spool file is a circular buffer, which blocks once the file/buffer is full.
# Events are put into a write buffer and flushed once the write buffer
# is full or the flush_timeout is triggered.
# Once ACKed by the output, events are removed immediately from the queue,
# making space for new events to be persisted.
#spool:
# The file namespace configures the file path and the file creation settings.
# Once the file exists, the `size`, `page_size` and `prealloc` settings
# will have no more effect.
#file:
# Location of spool file. The default value is ${path.data}/spool.dat.
#path: "${path.data}/spool.dat"
# Configure file permissions if file is created. The default value is 0600.
#permissions: 0600
# File size hint. The spool blocks, once this limit is reached. The default value is 100 MiB.
#size: 100MiB
# The files page size. A file is split into multiple pages of the same size. The default value is 4KiB.
#page_size: 4KiB
# If prealloc is set, the required space for the file is reserved using
# truncate. The default value is true.
#prealloc: true
# Spool writer settings
# Events are serialized into a write buffer. The write buffer is flushed if:
# - The buffer limit has been reached.
# - The configured limit of buffered events is reached.
# - The flush timeout is triggered.
#write:
# Sets the write buffer size.
#buffer_size: 1MiB
# Maximum duration after which events are flushed, if the write buffer
# is not full yet. The default value is 1s.
#flush.timeout: 1s
# Number of maximum buffered events. The write buffer is flushed once the
# limit is reached.
#flush.events: 16384
# Configure the on-disk event encoding. The encoding can be changed
# between restarts.
# Valid encodings are: json, ubjson, and cbor.
#codec: cbor
#read:
# Reader flush timeout, waiting for more events to become available, so
# to fill a complete batch, as required by the outputs.
# If flush_timeout is 0, all available events are forwarded to the
# outputs immediately.
# The default value is 0s.
#flush.timeout: 0s
# Sets the maximum number of CPUs that can be executing simultaneously. The
# default is the number of logical CPUs available in the system.
#max_procs:
#================================ Processors ===================================
# Processors are used to reduce the number of fields in the exported event or to
# enhance the event with external metadata. This section defines a list of
# processors that are applied one by one and the first one receives the initial
# event:
#
# event -> filter1 -> event1 -> filter2 -> event2 ...
#
# The supported processors are drop_fields, drop_event, include_fields,
# decode_json_fields, and add_cloud_metadata.
#
# For example, you can use the following processors to keep the fields that
# contain CPU load percentages, but remove the fields that contain CPU ticks
# values:
#
#processors:
#- include_fields:
# fields: ["cpu"]
#- drop_fields:
# fields: ["cpu.user", "cpu.system"]
#
# The following example drops the events that have the HTTP response code 200:
#
#processors:
#- drop_event:
# when:
# equals:
# http.code: 200
#
# The following example renames the field a to b:
#
#processors:
#- rename:
# fields:
# - from: "a"
# to: "b"
#
# The following example tokenizes the string into fields:
#
#processors:
#- dissect:
# tokenizer: "%{key1} - %{key2}"
# field: "message"
# target_prefix: "dissect"
#
# The following example enriches each event with metadata from the cloud
# provider about the host machine. It works on EC2, GCE, DigitalOcean,
# Tencent Cloud, and Alibaba Cloud.
#
#processors:
#- add_cloud_metadata: ~
#
# The following example enriches each event with the machine's local time zone
# offset from UTC.
#
#processors:
#- add_locale:
# format: offset
#
# The following example enriches each event with docker metadata; it matches
# the given fields to an existing container id and adds info from that container:
#
#processors:
#- add_docker_metadata:
# host: "unix:///var/run/docker.sock"
# match_fields: ["system.process.cgroup.id"]
# match_pids: ["process.pid", "process.ppid"]
# match_source: true
# match_source_index: 4
# match_short_id: false
# cleanup_timeout: 60
# labels.dedot: false
# # To connect to Docker over TLS you must specify a client and CA certificate.
# #ssl:
# # certificate_authority: "/etc/pki/root/ca.pem"
# # certificate: "/etc/pki/client/cert.pem"
# # key: "/etc/pki/client/cert.key"
#
# The following example enriches each event with docker metadata; it matches
# the container id from the log path available in the `source` field (by default
# it expects it to be /var/lib/docker/containers/*/*.log).
#
#processors:
#- add_docker_metadata: ~
#
# The following example enriches each event with host metadata.
#
#processors:
#- add_host_metadata:
# netinfo.enabled: false
#
# The following example enriches each event with process metadata using
# process IDs included in the event.
#
#processors:
#- add_process_metadata:
# match_pids: ["system.process.ppid"]
# target: system.process.parent
#
# The following example decodes fields containing JSON strings
# and replaces the strings with valid JSON objects.
#
#processors:
#- decode_json_fields:
# fields: ["field1", "field2", ...]
# process_array: false
# max_depth: 1
# target: ""
# overwrite_keys: false
processors:
- add_host_metadata: ~
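# Additional processors can be appended to the list above. A commented,
# illustrative sketch (the syslog.priority field name is assumed from the
# journald field mapping) that would drop debug-priority entries:
#- drop_event:
#    when:
#      equals:
#        syslog.priority: 7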
#============================= Elastic Cloud ==================================
# These settings simplify using journalbeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs ======================================
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
#-------------------------- Elasticsearch output -------------------------------
#output.elasticsearch:
# Boolean flag to enable or disable the output module.
#enabled: true
# Array of hosts to connect to.
# Scheme and port can be left out and will be set to the default (http and 9200)
# In case you specify an additional path, the scheme is required: http://localhost:9200/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
#hosts: ["localhost:9200"]
# Enable ILM (beta) to use index lifecycle management instead of daily indices.
#ilm.enabled: false
#ilm.rollover_alias: "journalbeat"
#ilm.pattern: "{now/d}-000001"
# Set gzip compression level.
#compression_level: 0
# Configure escaping html symbols in strings.
#escape_html: true
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
# Dictionary of HTTP parameters to pass within the url with index operations.
#parameters:
#param1: value1
#param2: value2
# Number of workers per Elasticsearch host.
#worker: 1
# Optional index name. The default is "journalbeat" plus the date,
# which generates [journalbeat-]YYYY.MM.DD keys.
# In case you modify this pattern you must update setup.template.name and setup.template.pattern accordingly.
#index: "journalbeat-%{[beat.version]}-%{+yyyy.MM.dd}"
# Optional ingest node pipeline. By default no pipeline will be used.
#pipeline: ""
# Optional HTTP Path
#path: "/elasticsearch"
# Custom HTTP headers to add to each request
#headers:
# X-My-Header: Contents of the header
# Proxy server url
#proxy_url: http://proxy:3128
# The number of times a particular Elasticsearch index operation is attempted. If
# the indexing operation doesn't succeed after this many retries, the events are
# dropped. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Elasticsearch bulk API index request.
# The default is 50.
#bulk_max_size: 50
# The number of seconds to wait before trying to reconnect to Elasticsearch
# after a network error. After waiting backoff.init seconds, the Beat
# tries to reconnect. If the attempt fails, the backoff timer is increased
# exponentially up to backoff.max. After a successful connection, the backoff
# timer is reset. The default is 1s.
#backoff.init: 1s
# The maximum number of seconds to wait before attempting to connect to
# Elasticsearch after a network error. The default is 60s.
#backoff.max: 60s
# Configure http request timeout before failing a request to Elasticsearch.
#timeout: 90
# Use SSL settings for HTTPS.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# SSL configuration. By default it is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#----------------------------- Logstash output ---------------------------------
{{ elk_macros.output_logstash(inventory_hostname, logstash_data_hosts, ansible_processor_count, 'journalbeat') }}
#------------------------------- Kafka output ----------------------------------
#output.kafka:
# Boolean flag to enable or disable the output module.
#enabled: true
# The list of Kafka broker addresses from where to fetch the cluster metadata.
# The cluster metadata contain the actual Kafka brokers events are published
# to.
#hosts: ["localhost:9092"]
# The Kafka topic used for produced events. The setting can be a format string
# using any event field. To set the topic from document type use `%{[type]}`.
#topic: beats
# The Kafka event key setting. Use format string to create unique event key.
# By default no event key will be generated.
#key: ''
# The Kafka event partitioning strategy. Default hashing strategy is `hash`
# using the `output.kafka.key` setting or randomly distributes events if
# `output.kafka.key` is not configured.
#partition.hash:
# If enabled, events will only be published to partitions with reachable
# leaders. Default is false.
#reachable_only: false
# Configure alternative event field names used to compute the hash value.
# If empty `output.kafka.key` setting will be used.
# Default value is empty list.
#hash: []
# Authentication details. Password is required if username is set.
#username: ''
#password: ''
# Kafka version journalbeat is assumed to run against. Defaults to "1.0.0".
#version: '1.0.0'
# Configure JSON encoding
#codec.json:
# Pretty print json event
#pretty: false
# Configure escaping html symbols in strings.
#escape_html: true
# Metadata update configuration. Metadata contains the leader information
# used to decide which broker to publish to.
#metadata:
# Max metadata request retry attempts when cluster is in middle of leader
# election. Defaults to 3 retries.
#retry.max: 3
# Waiting time between retries during leader elections. Default is 250ms.
#retry.backoff: 250ms
# Refresh metadata interval. Defaults to every 10 minutes.
#refresh_frequency: 10m
# The number of concurrent load-balanced Kafka output workers.
#worker: 1
# The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, the events are typically dropped.
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until
# all events are published. Set max_retries to a value less than 0 to retry
# until all events are published. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Kafka request. The default
# is 2048.
#bulk_max_size: 2048
# The number of seconds to wait for responses from the Kafka brokers before
# timing out. The default is 30s.
#timeout: 30s
# The maximum duration a broker will wait for number of required ACKs. The
# default is 10s.
#broker_timeout: 10s
# The number of messages buffered for each Kafka broker. The default is 256.
#channel_buffer_size: 256
# The keep-alive period for an active network connection. If 0s, keep-alives
# are disabled. The default is 0 seconds.
#keep_alive: 0
# Sets the output compression codec. Must be one of none, snappy and gzip. The
# default is gzip.
#compression: gzip
# Set the compression level. Currently only gzip provides a compression level
# between 0 and 9. The default value is chosen by the compression algorithm.
#compression_level: 4
# The maximum permitted size of JSON-encoded messages. Bigger messages will be
# dropped. The default value is 1000000 (bytes). This value should be equal to
# or less than the broker's message.max.bytes.
#max_message_bytes: 1000000
# The ACK reliability level required from broker. 0=no response, 1=wait for
# local commit, -1=wait for all replicas to commit. The default is 1. Note:
# If set to 0, no ACKs are returned by Kafka. Messages might be lost silently
# on error.
#required_acks: 1
# The configurable ClientID used for logging, debugging, and auditing
# purposes. The default is "beats".
#client_id: beats
# Enable SSL support. SSL is automatically enabled if any SSL setting is set.
#ssl.enabled: true
# Optional SSL configuration options. SSL is off by default.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#------------------------------- Redis output ----------------------------------
#output.redis:
# Boolean flag to enable or disable the output module.
#enabled: true
# Configure JSON encoding
#codec.json:
# Pretty print json event
#pretty: false
# Configure escaping html symbols in strings.
#escape_html: true
# The list of Redis servers to connect to. If load balancing is enabled, the
# events are distributed to the servers in the list. If one server becomes
# unreachable, the events are distributed to the reachable servers only.
#hosts: ["localhost:6379"]
# The Redis port to use if hosts does not contain a port number. The default
# is 6379.
#port: 6379
# The name of the Redis list or channel the events are published to. The
# default is journalbeat.
#key: journalbeat
# The password to authenticate with. The default is no authentication.
#password:
# The Redis database number where the events are published. The default is 0.
#db: 0
# The Redis data type to use for publishing events. If the data type is list,
# the Redis RPUSH command is used. If the data type is channel, the Redis
# PUBLISH command is used. The default value is list.
#datatype: list
# The number of workers to use for each host configured to publish events to
# Redis. Use this setting along with the loadbalance option. For example, if
# you have 2 hosts and 3 workers, in total 6 workers are started (3 for each
# host).
#worker: 1
# If set to true and multiple hosts or workers are configured, the output
# plugin load balances published events onto all Redis hosts. If set to false,
# the output plugin sends all events to only one host (determined at random)
# and will switch to another host if the currently selected one becomes
# unreachable. The default value is true.
#loadbalance: true
# The Redis connection timeout in seconds. The default is 5 seconds.
#timeout: 5s
# The number of times to retry publishing an event after a publishing failure.
# After the specified number of retries, the events are typically dropped.
# Some Beats, such as Filebeat, ignore the max_retries setting and retry until
# all events are published. Set max_retries to a value less than 0 to retry
# until all events are published. The default is 3.
#max_retries: 3
# The number of seconds to wait before trying to reconnect to Redis
# after a network error. After waiting backoff.init seconds, the Beat
# tries to reconnect. If the attempt fails, the backoff timer is increased
# exponentially up to backoff.max. After a successful connection, the backoff
# timer is reset. The default is 1s.
#backoff.init: 1s
# The maximum number of seconds to wait before attempting to connect to
# Redis after a network error. The default is 60s.
#backoff.max: 60s
# The maximum number of events to bulk in a single Redis request or pipeline.
# The default is 2048.
#bulk_max_size: 2048
# The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The
# value must be a URL with a scheme of socks5://.
#proxy_url:
# This option determines whether Redis hostnames are resolved locally when
# using a proxy. The default value is false, which means that name resolution
# occurs on the proxy server.
#proxy_use_local_resolver: false
# Enable SSL support. SSL is automatically enabled if any SSL setting is set.
#ssl.enabled: true
# Configure SSL verification mode. If `none` is configured, all server hosts
# and certificates will be accepted. In this mode, SSL based connections are
# susceptible to man-in-the-middle attacks. Use only for testing. Default is
# `full`.
#ssl.verification_mode: full
# List of supported/valid TLS versions. By default all TLS versions 1.0 up to
# 1.2 are enabled.
#ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
# Optional SSL configuration options. SSL is off by default.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# Optional passphrase for decrypting the Certificate Key.
#ssl.key_passphrase: ''
# Configure cipher suites to be used for SSL connections
#ssl.cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#ssl.curve_types: []
# Configure what types of renegotiation are supported. Valid options are
# never, once, and freely. Default is never.
#ssl.renegotiation: never
#------------------------------- File output -----------------------------------
#output.file:
# Boolean flag to enable or disable the output module.
#enabled: true
# Configure JSON encoding
#codec.json:
# Pretty print json event
#pretty: false
# Configure escaping html symbols in strings.
#escape_html: true
# Path to the directory where the generated files are saved. This option is
# mandatory.
#path: "/tmp/journalbeat"
# Name of the generated files. The default is `journalbeat` and it generates
# files: `journalbeat`, `journalbeat.1`, `journalbeat.2`, etc.
#filename: journalbeat
# Maximum size in kilobytes of each file. When this size is reached, and on
# every journalbeat restart, the files are rotated. The default value is 10240
# kB.
#rotate_every_kb: 10000
# Maximum number of files under path. When this number of files is reached,
# the oldest file is deleted and the rest are shifted from last to first. The
# default is 7 files.
#number_of_files: 7
# Permissions to use for file creation. The default is 0600.
#permissions: 0600
#----------------------------- Console output ---------------------------------
#output.console:
# Boolean flag to enable or disable the output module.
#enabled: true
# Configure JSON encoding
#codec.json:
# Pretty print json event
#pretty: false
# Configure escaping html symbols in strings.
#escape_html: true
#================================= Paths ======================================
# The home path for the journalbeat installation. This is the default base path
# for all other path settings and for miscellaneous files that come with the
# distribution (for example, the sample dashboards).
# If not set by a CLI flag or in the configuration file, the default for the
# home path is the location of the binary.
#path.home:
# The configuration path for the journalbeat installation. This is the default
# base path for configuration files, including the main YAML configuration file
# and the Elasticsearch template file. If not set by a CLI flag or in the
# configuration file, the default for the configuration path is the home path.
#path.config: ${path.home}
# The data path for the journalbeat installation. This is the default base path
# for all the files in which journalbeat needs to store its data. If not set by a
# CLI flag or in the configuration file, the default for the data path is a data
# subdirectory inside the home path.
#path.data: ${path.home}/data
# The logs path for a journalbeat installation. This is the default location for
# the Beat's log files. If not set by a CLI flag or in the configuration file,
# the default for the logs path is a logs subdirectory inside the home path.
#path.logs: ${path.home}/logs
#================================ Keystore ==========================================
# Location of the Keystore containing the keys and their sensitive values.
#keystore.path: "${path.config}/beats.keystore"
#============================== Dashboards =====================================
{{ elk_macros.setup_dashboards('journalbeat') }}
#=============================== Template ======================================
{{ elk_macros.setup_template('journalbeat', inventory_hostname, data_nodes, elasticsearch_number_of_replicas) }}
#============================== Kibana =====================================
{% if (groups['kibana'] | length) > 0 %}
{{ elk_macros.setup_kibana(hostvars[groups['kibana'][0]]['ansible_host'] ~ ':' ~ kibana_port) }}
{% endif %}
#================================ Logging ======================================
{{ elk_macros.beat_logging('journalbeat') }}
#============================== Xpack Monitoring ===============================
{{ elk_macros.xpack_monitoring_elasticsearch(inventory_hostname, elasticsearch_data_hosts, ansible_processor_count) }}
#================================ HTTP Endpoint ======================================
# Each beat can expose internal metrics through an HTTP endpoint. For security
# reasons the endpoint is disabled by default. This feature is currently experimental.
# Stats can be accessed through http://localhost:5066/stats . For pretty JSON output
# append ?pretty to the URL.
# Defines if the HTTP endpoint is enabled.
#http.enabled: false
# The HTTP endpoint will bind to this hostname or IP address. It is recommended to use only localhost.
#http.host: localhost
# Port on which the HTTP endpoint will bind. Default is 5066.
#http.port: 5066
#============================= Process Security ================================
# Enable or disable seccomp system call filtering on Linux. Default is enabled.
#seccomp.enabled: true

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
journalbeat_distro_packages:
- journalbeat

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
journalbeat_distro_packages:
- journalbeat

View File

@ -0,0 +1,17 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
journalbeat_distro_packages:
- journalbeat

View File

@ -0,0 +1,26 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
kibana_enable_basic_auth: false
# kibana vars
kibana_interface: 0.0.0.0
kibana_port: 5601
kibana_username: admin
kibana_password: admin
kibana_nginx_port: 81
kibana_server_name: "{{ ansible_hostname }}"
kibana_index_on_elasticsearch: "http://{{ hostvars[groups['elastic-logstash'][0]]['ansible_host'] }}:{{ elastic_port }}/.kibana"
kibana_elastic_request_timeout: 1800000
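# Any of the above can be overridden at deploy time, for example (illustrative
# values only) via /etc/openstack_deploy/user_variables.yml:
# kibana_enable_basic_auth: true
# kibana_username: opsadmin
# kibana_password: changeme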

View File

@ -0,0 +1,39 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Enable and restart services (systemd)
systemd:
name: "{{ item }}"
enabled: true
state: restarted
daemon_reload: true
when:
- ansible_service_mgr == 'systemd'
with_items:
- nginx
- kibana
listen: Enable and restart services
- name: Enable and restart services (upstart)
service:
name: "{{ item }}"
state: restarted
enabled: yes
when:
- ansible_service_mgr == 'upstart'
with_items:
- nginx
- kibana
listen: Enable and restart services

View File

@ -0,0 +1,34 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
galaxy_info:
author: OpenStack
description: Elastic v7.x kibana role
company: Rackspace
license: Apache2
min_ansible_version: 2.5
platforms:
- name: Ubuntu
versions:
- trusty
- xenial
- bionic
categories:
- cloud
- development
- elasticsearch
- elastic-stack
dependencies:
- role: elastic_repositories

View File

@ -0,0 +1,88 @@
---
# Copyright 2018, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Gather variables for each operating system
include_vars: "{{ item }}"
with_first_found:
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_version | lower }}.yml"
- "{{ ansible_distribution | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_major_version | lower }}.yml"
- "{{ ansible_distribution | lower }}.yml"
- "{{ ansible_os_family | lower }}-{{ ansible_distribution_version.split('.')[0] }}.yml"
- "{{ ansible_os_family | lower }}.yml"
tags:
- always
- name: Ensure distro packages are installed
package:
name: "{{ kibana_distro_packages }}"
state: "{{ elk_package_state | default('present') }}"
update_cache: "{{ (ansible_pkg_mgr == 'apt') | ternary('yes', omit) }}"
register: _package_task
until: _package_task is success
retries: 3
delay: 2
notify:
- Enable and restart services
tags:
- package_install
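# The htpasswd entry below backs the optional nginx basic-auth front end; it is
# only created when kibana_enable_basic_auth is true (default: false).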
- name: Create kibana user to access web interface
htpasswd:
path: "/etc/nginx/htpasswd.users"
name: "{{ kibana_username }}"
password: "{{ kibana_password }}"
owner: root
mode: 0644
when:
- kibana_enable_basic_auth
- name: Drop Nginx default conf file
template:
src: "nginx_default.j2"
dest: "{{ kibana_nginx_vhost_path }}/default"
notify:
- Enable and restart services
- name: Create kibana systemd service config dir
file:
path: "/etc/systemd/system/kibana.service.d"
state: "directory"
group: "root"
owner: "root"
mode: "0755"
when:
- ansible_service_mgr == 'systemd'
- name: Apply systemd options
template:
src: "{{ item.src }}"
dest: "{{ item.dest }}"
mode: "0644"
when:
- ansible_service_mgr == 'systemd'
with_items:
- src: "systemd.general-overrides.conf.j2"
dest: "/etc/systemd/system/kibana.service.d/kibana-overrides.conf"
notify:
- Enable and restart services
- name: Drop kibana conf file
template:
src: "kibana.yml.j2"
dest: "/etc/kibana/kibana.yml"
mode: "0666"
notify:
- Enable and restart services

View File

@ -0,0 +1 @@
../../../templates/systemd.general-overrides.conf.j2

View File

@ -0,0 +1,92 @@
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: {{ kibana_port }}
# This setting specifies the IP address of the back end server.
server.host: {{ kibana_interface }}
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This setting
# cannot end in a slash.
# server.basePath: ""
# The maximum payload size in bytes for incoming server requests.
# server.maxPayloadBytes: 1048576
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://127.0.0.1:{{ elastic_port }}"
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
# elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
# kibana.index: ".kibana"
# The default application to load.
# kibana.defaultAppId: "discover"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
# elasticsearch.username: "user"
# elasticsearch.password: "pass"
# Paths to the PEM-format SSL certificate and SSL key files, respectively. These
# files enable SSL for outgoing requests from the Kibana server to the browser.
# server.ssl.cert: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
# elasticsearch.ssl.cert: /path/to/your/client.crt
# elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
# elasticsearch.ssl.ca: /path/to/your/CA.pem
# To disregard the validity of SSL certificates, change this setting's value to false.
# elasticsearch.ssl.verify: true
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
# elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
elasticsearch.requestTimeout: {{ kibana_elastic_request_timeout }}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
# elasticsearch.shardTimeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
# elasticsearch.startupTimeout: 5000
# Specifies the path where Kibana creates the process ID file.
# pid.file: /var/run/kibana.pid
# Enables you to specify a file where Kibana stores log output.
logging.dest: stdout
# Set the value of this setting to true to suppress all logging output.
# logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
# logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
# logging.verbose: false
# ---------------------------------- X-Pack ------------------------------------
# X-Pack Monitoring
# https://www.elastic.co/guide/en/kibana/6.3/monitoring-settings-kb.html
xpack.monitoring.enabled: true
xpack.xpack_main.telemetry.enabled: false
xpack.monitoring.kibana.collection.enabled: true
xpack.monitoring.kibana.collection.interval: 30000
xpack.monitoring.min_interval_seconds: 30
xpack.monitoring.ui.enabled: true
xpack.monitoring.ui.container.elasticsearch.enabled: true
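# Note: the 30000 ms Kibana collection interval above matches
# xpack.monitoring.min_interval_seconds (30 s), so self-monitoring samples are
# not written more often than the monitoring UI can bucket them.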

Some files were not shown because too many files have changed in this diff