Removing ELK Server Install

This was built to help users stand up infrastructure. However, we no
longer maintain these playbooks, and there are other solutions out there
to help users deploy ELK.

Change-Id: I001ca4ed75c55dce617b7efe9ac9e38f2f0b9060
Joe Talerico 2018-12-09 18:17:58 -05:00 committed by Aakarsh
parent d5fbcb1203
commit 2e3ec425ad
15 changed files with 7 additions and 1258 deletions

View File

@ -15,7 +15,7 @@ Ansible for Browbeat
Currently we support Ansible 1.9.4 within browbeat-venv and Ansible 2.0+ for installation.
Playbooks for:
* Installing Browbeat, collectd, ELK stack and clients, graphite, grafana, and grafana dashboards
* Installing Browbeat, collectd, ELK clients, graphite, grafana, and grafana dashboards
* Check overcloud for performance issues
* Tune overcloud for performance (Experimental)
* Adjust number of workers for cinder/keystone/neutron/nova
@ -90,38 +90,6 @@ To Install Kibana Visuals
# ansible-playbook -i hosts install/kibana-visuals.yml
Install Generic ELK Stack
'''''''''''''''''''''''''
Listening ports and other options can be changed in ``install/group_vars/all.yml``
as needed. You can also change the logging backend to use fluentd via the
``logging_backend:`` variable. For most uses, leaving the defaults in place is
acceptable; if left unchanged, the default is logstash.
You can also install the optional `curator <https://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html>`_ tool for managing
elasticsearch indexes. Set ``install_curator_tool: true`` to enable it.
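A minimal sketch of the relevant variables in ``install/group_vars/all.yml`` follows;
the values shown are illustrative defaults, not a verbatim copy of the file.
::
    logging_backend: logstash            # or 'fluentd'
    install_curator_tool: false          # set to true to also install curator
    nginx_kibana_port: 80
    elk_server_ssl_cert_port: 8080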
If all the variables look ok in ``install/group_vars/all.yml`` you can proceed with deployment.
::
ansible-playbook -i hosts install/elk.yml
Install ELK Stack (on an OpenStack Undercloud)
''''''''''''''''''''''''''''''''''''''''''''''
TripleO-based OpenStack deployments already have many ports listening on the
Undercloud node. You'll need to change ELK's default listening ports so it can
be deployed without conflict.
::
sed -i 's/nginx_kibana_port: 80/nginx_kibana_port: 8888/' install/group_vars/all.yml
sed -i 's/elk_server_ssl_cert_port: 8080/elk_server_ssl_cert_port: 9999/' install/group_vars/all.yml
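The net effect of those substitutions on ``install/group_vars/all.yml`` is the
following (shown here only for illustration)
::
    nginx_kibana_port: 8888
    elk_server_ssl_cert_port: 9999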
Now you can proceed with deployment.
::
ansible-playbook -i hosts install/elk.yml
Install Generic ELK Clients
'''''''''''''''''''''''''''

View File

@ -452,10 +452,6 @@ echo " the [graphite] and [grafana] hosts entries are updated with val
echo " You will need to have passwordless access to root on these hosts."
echo "---------------------------"
echo "" | tee -a ${ansible_inventory_file}
echo "[elk]" | tee -a ${ansible_inventory_file}
echo "## example host entry." | tee -a ${ansible_inventory_file}
echo "#host-01" | tee -a ${ansible_inventory_file}
echo "" | tee -a ${ansible_inventory_file}
echo "[elk-client]" | tee -a ${ansible_inventory_file}
echo "## example host entry." | tee -a ${ansible_inventory_file}
echo "#host-02" | tee -a ${ansible_inventory_file}
@ -464,8 +460,8 @@ echo "[stockpile]" | tee -a ${ansible_inventory_file}
echo "undercloud ansible_user=${user}" | tee -a ${ansible_inventory_file}
echo "---------------------------"
echo "IMPORTANT: If you plan on deploying ELK and ELK clients, update hosts and make sure"
echo " the [elk] and [elk-client] hosts entries are updated with valid hosts."
echo "IMPORTANT: If you plan on deploying ELK clients, update hosts and make sure"
echo " the [elk-client] hosts entries are updated with valid hosts."
echo " You will need to have passwordless access to root on these hosts."
echo "---------------------------"

View File

@ -1,22 +0,0 @@
---
#
# Playbook to install the ELK stack for browbeat
#
- hosts: elk
  remote_user: root
  roles:
    - { role: epel }
    - { role: elasticsearch }
    - { role: fluentd, when: (logging_backend == 'fluentd') }
    - { role: logstash, when: ((logging_backend is none) or (logging_backend == 'logstash')) }
    - { role: nginx }
    - { role: curator, when: install_curator_tool }
    - { role: kibana }
  environment: "{{proxy_env}}"

- hosts: localhost
  connection: local
  roles:
    - { role: es-template }
  environment: "{{proxy_env}}"

View File

@ -1,6 +0,0 @@
[elk-client]
name=Elastic FileBeat Repository
baseurl=https://packages.elastic.co/beats/yum/el/$basearch
enabled=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
gpgcheck=1

View File

@ -1,98 +0,0 @@
---
#
# install/run filebeat elk client for browbeat
#
- name: Copy filebeat yum repo file
copy:
src=filebeat.repo
dest=/etc/yum.repos.d/filebeat.repo
owner=root
group=root
mode=0644
when: (logging_backend != 'fluentd')
become: true
- name: Import Filebeat GPG Key
become: true
rpm_key: key=http://packages.elastic.co/GPG-KEY-elasticsearch
state=present
when: (logging_backend != 'fluentd')
- name: Install filebeat rpms
package:
name: "{{ item }}"
state: present
become: true
with_items:
- filebeat
when: (logging_backend != 'fluentd')
- name: Generate filebeat configuration template
template:
src=filebeat.yml.j2
dest=/etc/filebeat/filebeat.yml
owner=root
group=root
mode=0644
become: true
when: (logging_backend != 'fluentd')
register: filebeat_needs_restart
- name: Check ELK server SSL client certificate
stat: path=/etc/pki/tls/certs/filebeat-forwarder.crt
become: true
ignore_errors: true
register: elk_client_ssl_cert_exists
when: (logging_backend != 'fluentd')
- name: Install ELK server SSL client certificate
get_url:
url=http://{{ elk_server }}:{{ elk_server_ssl_cert_port }}/filebeat-forwarder.crt
dest=/etc/pki/tls/certs/filebeat-forwarder.crt
become: true
when: ((elk_client_ssl_cert_exists != 0) and (logging_backend != 'fluentd'))
- name: Start filebeat service
systemd:
name: filebeat.service
state: started
when: ((filebeat_needs_restart != 0) and (logging_backend != 'fluentd'))
- name: Setup filebeat service
service: name=filebeat state=started enabled=true
become: true
when: (logging_backend != 'fluentd')
- name: Install rsyslogd for fluentd
package:
name: "{{ item }}"
state: present
become: true
with_items:
- rsyslog
when: (logging_backend == 'fluentd')
- name: Setup rsyslogd for fluentd
become: true
lineinfile: dest=/etc/rsyslog.conf \
line="*.* @{{ elk_server }}:{{ fluentd_syslog_port }}"
when: (logging_backend == 'fluentd')
register: rsyslog_updated
- name: Setup common OpenStack rsyslog logging
template:
src=rsyslog-openstack.conf.j2
dest=/etc/rsyslog.d/openstack-logs.conf
owner=root
group=root
mode=0644
become: true
register: rsyslog_updated
when: (logging_backend == 'fluentd')
- name: Restarting rsyslog for fluentd
systemd:
name: rsyslog.service
state: restarted
when: rsyslog_updated != 0

View File

@ -1,383 +0,0 @@
################### Filebeat Configuration Example #########################
############################# Filebeat ######################################
filebeat:
# List of prospectors to fetch data.
prospectors:
# Each - is a prospector. Below are the prospector specific configurations
-
# Paths that should be crawled and fetched. Glob based paths.
# To fetch all ".log" files from a specific level of subdirectories
# /var/log/*/*.log can be used.
# For each file found under this path, a harvester is started.
# Make sure no file is defined twice as this can lead to unexpected behaviour.
paths:
- /var/log/*.log
- /var/log/messages
# foreman
- /var/log/foreman/*.log
- /var/log/foreman-proxy/*.log
# openstack
- /var/log/nova/*.log
- /var/log/neutron/*.log
- /var/log/cinder/*.log
- /var/log/keystone/*.log
- /var/log/horizon/*.log
- /var/log/glance/*.log
- /var/log/mariadb/*.log
- /var/log/rabbitmq/*.log
- /var/log/mongodb/*.log
- /var/log/ceilometer/*.log
- /var/log/ceph/*.log
- /var/log/heat/*.log
- /var/log/openvswitch/*.log
- /var/log/pcsd/*.log
- /var/log/puppet/*.log
- /var/log/redis/*.log
- /var/log/glusterfs/*.log
- /var/log/swift/*.log
# Configure the file encoding for reading files with international characters
# following the W3C recommendation for HTML5 (http://www.w3.org/TR/encoding).
# Some sample encodings:
# plain, utf-8, utf-16be-bom, utf-16be, utf-16le, big5, gb18030, gbk,
# hz-gb-2312, euc-kr, euc-jp, iso-2022-jp, shift-jis, ...
#encoding: plain
# Type of the files. Based on this the way the file is read is decided.
# The different types cannot be mixed in one prospector
#
# Possible options are:
# * log: Reads every line of the log file (default)
# * stdin: Reads the standard in
input_type: log
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
# Set to true to store the additional fields as top level fields instead
# of under the "fields" sub-dictionary. In case of name conflicts with the
# fields added by Filebeat itself, the custom fields overwrite the default
# fields.
#fields_under_root: false
# Ignore files which were modified more than the defined timespan in the past
# Time strings like 2h (2 hours), 5m (5 minutes) can be used.
#ignore_older: 24h
# Type to be published in the 'type' field. For Elasticsearch output,
# the type defines the document type these entries should be stored
# in. Default: log
document_type: syslog
# Scan frequency in seconds.
# How often these files should be checked for changes. In case it is set
# to 0s, it is done as often as possible. Default: 10s
#scan_frequency: 10s
# Defines the buffer size every harvester uses when fetching the file
#harvester_buffer_size: 16384
# Setting tail_files to true means filebeat starts reading new files at the end
# instead of the beginning. If this is used in combination with log rotation
# this can mean that the first entries of a new file are skipped.
#tail_files: false
# Backoff values define how aggressively filebeat crawls new files for updates
# The default values can be used in most cases. Backoff defines how long to wait
# before checking a file again after EOF is reached. Default is 1s, which means the file
# is checked every second for new lines. This leads to near real-time crawling.
# Every time a new line appears, backoff is reset to the initial value.
#backoff: 1s
# Max backoff defines what the maximum backoff time is. After having backed off multiple times
# from checking the files, the waiting time will never exceed max_backoff, independent of the
# backoff factor. With it set to 10s, the worst case is that a new line added to a log
# file after multiple backoffs takes a maximum of 10s to be read.
#max_backoff: 10s
# The backoff factor defines how fast the algorithm backs off. The bigger the backoff factor,
# the faster the max_backoff value is reached. If this value is set to 1, no backoff will happen.
# The backoff value will be multiplied each time with the backoff_factor until max_backoff is reached
#backoff_factor: 2
# This option closes a file, as soon as the file name changes.
# This config option is recommended on Windows only. Filebeat keeps the files it's reading open. This can cause
# issues when the file is removed, as the file will not be fully removed until Filebeat also closes
# the reading. Filebeat closes the file handler after ignore_older. During this time no new file with the
# same name can be created. Turning this feature on, on the other hand, can lead to loss of data
# on rotated files. It can happen that after file rotation the beginning of the new
# file is skipped, as the reading starts at the end. We recommend leaving this option set to false
# but lowering the ignore_older value to release files faster.
#force_close_files: false
#-
# paths:
# - /var/log/apache/*.log
# type: log
#
# # Ignore files which are older than 24 hours
# ignore_older: 24h
#
# # Additional fields which can be freely defined
# fields:
# type: apache
# server: localhost
#-
# type: stdin
# paths:
# - "-"
# General filebeat configuration options
#
# Event count spool threshold - forces network flush if exceeded
#spool_size: 1024
# Defines how often the spooler is flushed. After idle_timeout the spooler is
# flushed even though spool_size is not reached.
#idle_timeout: 5s
# Name of the registry file. Per default it is put in the current working
# directory. If the working directory is different the next time filebeat
# runs, indexing starts from the beginning again.
#registry_file: .filebeat
# Full Path to directory with additional prospector configuration files. Each file must end with .yml
# These config files must have the full filebeat config part inside, but only
# the prospector part is processed. All global options like spool_size are ignored.
# The config_dir MUST point to a different directory than the one the main filebeat config file is in.
#config_dir:
###############################################################################
############################# Libbeat Config ##################################
# Base config file used by all other beats for using libbeat features
############################# Output ##########################################
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
output:
### Elasticsearch as output
#elasticsearch:
logstash:
# Array of hosts to connect to.
# Scheme and port can be left out and will be set to the default (http and 9200)
# In case you specify an additional path, the scheme is required: http://localhost:9200/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
hosts: ["{{ elk_server }}:{{ logstash_syslog_port }}"]
bulk_max_size: 1024
# Optional protocol and basic auth credentials. These are deprecated.
#protocol: "https"
#username: "admin"
#password: "s3cr3t"
# Number of workers per Elasticsearch host.
#worker: 1
# Optional index name. The default is "filebeat" and generates
# [filebeat-]YYYY.MM.DD keys.
#index: "filebeat"
# Optional HTTP Path
#path: "/elasticsearch"
# Proxy server URL
# proxy_url: http://proxy:3128
# The number of times a particular Elasticsearch index operation is attempted. If
# the indexing operation doesn't succeed after this many retries, the events are
# dropped. The default is 3.
#max_retries: 3
# The maximum number of events to bulk in a single Elasticsearch bulk API index request.
# The default is 50.
#bulk_max_size: 50
# Configure http request timeout before failing a request to Elasticsearch.
#timeout: 90
# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, additional bulk index
# requests are made.
#flush_interval: 1
# Boolean that sets if the topology is kept in Elasticsearch. The default is
# false. This option makes sense only for Packetbeat.
#save_topology: false
# The time to live in seconds for the topology information that is stored in
# Elasticsearch. The default is 15 seconds.
#topology_expire: 15
# tls configuration. By default is off.
tls:
# List of root certificates for HTTPS server verifications
certificate_authorities: ["/etc/pki/tls/certs/filebeat-forwarder.crt"]
# Certificate for TLS client authentication
#certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#certificate_key: "/etc/pki/client/cert.key"
# Controls whether the client verifies server certificates and host name.
# If insecure is set to true, all server host names and certificates will be
# accepted. In this mode TLS based connections are susceptible to
# man-in-the-middle attacks. Use only for testing.
#insecure: true
# Configure cipher suites to be used for TLS connections
#cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#curve_types: []
# Configure minimum TLS version allowed for connection to logstash
#min_version: 1.0
# Configure maximum TLS version allowed for connection to logstash
#max_version: 1.2
### Logstash as output
#logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Number of workers per Logstash host.
#worker: 1
# Optional load balance the events between the Logstash hosts
#loadbalance: true
# Optional index name. The default index name depends on each beat.
# For Packetbeat, the default is set to packetbeat, for Topbeat
# to topbeat, and for Filebeat to filebeat.
#index: filebeat
# Optional TLS. By default is off.
#tls:
# List of root certificates for HTTPS server verifications
#certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for TLS client authentication
#certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#certificate_key: "/etc/pki/client/cert.key"
# Controls whether the client verifies server certificates and host name.
# If insecure is set to true, all server host names and certificates will be
# accepted. In this mode TLS based connections are susceptible to
# man-in-the-middle attacks. Use only for testing.
#insecure: true
# Configure cipher suites to be used for TLS connections
#cipher_suites: []
# Configure curve types for ECDHE based cipher suites
#curve_types: []
### File as output
#file:
# Path to the directory where to save the generated files. The option is mandatory.
#path: "/tmp/filebeat"
# Name of the generated files. The default is `filebeat` and it generates files: `filebeat`, `filebeat.1`, `filebeat.2`, etc.
#filename: filebeat
# Maximum size in kilobytes of each file. When this size is reached, the files are
# rotated. The default value is 10 MB.
#rotate_every_kb: 10000
# Maximum number of files under path. When this number of files is reached, the
# oldest file is deleted and the rest are shifted from last to first. The default
# is 7 files.
#number_of_files: 7
### Console output
# console:
# Pretty print json event
#pretty: false
############################# Shipper #########################################
shipper:
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
# If this option is not defined, the hostname is used.
#name:
# The tags of the shipper are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
#tags: ["service-X", "web-tier"]
# Uncomment the following if you want to ignore transactions created
# by the server on which the shipper is installed. This option is useful
# to remove duplicates if shippers are installed on multiple servers.
#ignore_outgoing: true
# How often (in seconds) shippers are publishing their IPs to the topology map.
# The default is 10 seconds.
#refresh_topology_freq: 10
# Expiration time (in seconds) of the IPs published by a shipper to the topology map.
# All the IPs will be deleted afterwards. Note, that the value must be higher than
# refresh_topology_freq. The default is 15 seconds.
#topology_expire: 15
# Configure local GeoIP database support.
# If no paths are configured, geoip is disabled.
#geoip:
#paths:
# - "/usr/share/GeoIP/GeoLiteCity.dat"
# - "/usr/local/var/GeoIP/GeoLiteCity.dat"
############################# Logging #########################################
# There are three options for the log output: syslog, file, stderr.
# Under Windows systems, the log files are by default sent to the file output;
# under all other systems, by default to syslog.
logging:
# Send all logging output to syslog. On Windows default is false, otherwise
# default is true.
#to_syslog: true
# Write all logging output to files. Beats automatically rotate files if rotateeverybytes
# limit is reached.
#to_files: false
# To enable logging to files, to_files option has to be set to true
files:
# The directory where the log files will be written to.
#path: /var/log/mybeat
# The name of the files where the logs are written to.
#name: mybeat
# Configure log file size limit. If limit is reached, log file will be
# automatically rotated
rotateeverybytes: 10485760 # = 10MB
# Number of rotated log files to keep. Oldest files will be deleted first.
#keepfiles: 7
# Enable debug output for selected components. To enable all selectors use ["*"]
# Other available selectors are beat, publish, service
# Multiple selectors can be chained.
#selectors: [ ]
# Sets log level. The default log level is error.
# Available log levels are: critical, error, warning, info, debug
#level: error

View File

@ -1,163 +0,0 @@
# This template aggregates common OpenStack logs
# via rsyslog. You should edit this file with any
# application logs that need to be captured that
# are not sent to rsyslog.
#
# If you enable the ForwardToSyslog option in
# journalctl you can send anything logged by systemd
# as well:
# file: /etc/systemd/journald.conf
# parameter: ForwardToSyslog=yes
# action: systemctl restart systemd-journald
$ModLoad imfile
# Neutron
$InputFileName /var/log/neutron/server.log
$InputFileTag neutron-server-errors
$InputFileStateFile neutron-server-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
# Nova
$InputFileName /var/log/nova/nova-api.log
$InputFileTag nova-api-errors
$InputFileStateFile nova-api-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /var/log/nova/nova-cert.log
$InputFileTag nova-cert-errors
$InputFileStateFile nova-cert-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /var/log/nova/nova-conductor.log
$InputFileTag nova-conductor-errors
$InputFileStateFile nova-conductor-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /var/log/nova/nova-consoleauth.log
$InputFileTag nova-consoleauth-errors
$InputFileStateFile nova-consoleauth-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /var/log/nova/nova-manage.log
$InputFileTag nova-manage-errors
$InputFileStateFile nova-manage-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /var/log/nova/nova-novncproxy.log
$InputFileTag nova-novncproxy-errors
$InputFileStateFile nova-novncproxy-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /var/log/nova/nova-scheduler.log
$InputFileTag nova-scheduler-errors
$InputFileStateFile nova-scheduler-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
# cinder
$InputFileName /var/log/cinder/api.log
$InputFileTag cinder-api-errors
$InputFileStateFile cinder-api-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /var/log/cinder/backup.log
$InputFileTag cinder-backup-errors
$InputFileStateFile cinder-backup-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /var/log/cinder/scheduler.log
$InputFileTag cinder-scheduler-errors
$InputFileStateFile cinder-scheduler-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /var/log/cinder/volume.log
$InputFileTag cinder-volume-errors
$InputFileStateFile cinder-volume-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
# glance
$InputFileName /var/log/glance/api.log
$InputFileTag glance-api-errors
$InputFileStateFile glance-api-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /var/log/glance/registry.log
$InputFileTag glance-registry-errors
$InputFileStateFile glance-registry-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /var/log/glance/scrubber.log
$InputFileTag glance-scrubber-errors
$InputFileStateFile glance-scrubber-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
# keystone
$InputFileName /var/log/keystone/keystone.log
$InputFileTag keystone-errors
$InputFileStateFile keystone-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
# horizon
$InputFileName /var/log/horizon/horizon.log
$InputFileTag horizon-errors
$InputFileStateFile horizon-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /var/log/httpd/horizon_error.log
$InputFileTag horizon-httpd-errors
$InputFileStateFile horizon-httpd-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
$InputFileName /var/log/httpd/horizon_ssl_error.log
$InputFileTag horizon-httpd_ssl-errors
$InputFileStateFile horizon-httpd_ssl-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
# mariadb
$InputFileName /var/log/mariadb/mariadb.log
$InputFileTag mariadb-errors
$InputFileStateFile mariadb-errors
$InputFileSeverity error
$InputFileFacility local7
$InputRunFileMonitor
# send to elk_server
*.* @{{ elk_server }}:{{ fluentd_syslog_port }}

View File

@ -1,6 +0,0 @@
[kibana-4.6]
name=Kibana repository for 4.6.x packages
baseurl=http://packages.elastic.co/kibana/4.6/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

View File

@ -1,6 +0,0 @@
[logstash-2.2]
name=logstash repository for 2.2 packages
baseurl=http://packages.elasticsearch.org/logstash/2.2/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

View File

@ -1,161 +0,0 @@
---
#
# Install/run kibana for browbeat
#
- name: Copy kibana yum repo file
copy:
src=kibana.repo
dest=/etc/yum.repos.d/kibana.repo
owner=root
group=root
mode=0644
become: true
# We need to insert data to create an initial index; query whether it already exists
- name: Check elasticsearch index for content
uri:
url=http://localhost:9200/_cat/indices
method=GET
return_content=yes
register: elasticsearch_index
# Populate elasticsearch with local logs if using logstash
- name: Populate elasticsearch index with local logs via logstash
shell: cat /var/log/messages | /opt/logstash/bin/logstash -f /etc/logstash/conf.d/10-syslog.conf
when: "'logstash-' not in elasticsearch_index.content"
ignore_errors: true
no_log: true
- name: Install local rsyslogd for fluentd
package:
name: "{{ item }}"
state: present
become: true
with_items:
- rsyslog
when: (logging_backend == 'fluentd')
- name: Setup local rsyslogd for fluentd
lineinfile: dest=/etc/rsyslog.conf \
line="*.* @localhost:{{ fluentd_syslog_port }}"
when: (logging_backend == 'fluentd')
register: rsyslog_updated
- name: Populate elasticsearch index with local logs via fluentd
systemd:
name: rsyslog.service
state: restarted
ignore_errors: true
when: rsyslog_updated != 0
- name: Install kibana rpms
package:
name: "{{ item }}"
state: present
become: true
with_items:
- kibana
- unzip
- name: Check kibana filebeat dashboards
stat: path=/tmp/filebeat-dashboards.zip
ignore_errors: true
register: kibana_dashboards_present
- name: Copy kibana filebeat dashboards
copy:
src=filebeat-dashboards.zip
dest=/tmp/filebeat-dashboards.zip
owner=root
group=root
mode=0644
become: true
ignore_errors: true
when: kibana_dashboards_present != 0
- name: Install kibana filebeat dashboards
unarchive: src=/tmp/filebeat-dashboards.zip dest=/tmp/ copy=no
ignore_errors: true
when: kibana_dashboards_present != 0
- name: Validate kibana load.sh script is available for use
stat:
path: /tmp/beats-dashboards-master/load.sh
ignore_errors: true
register: kibana_dashboards_load_sh_present
- name: Configure kibana filebeat dashboards
shell: sh /tmp/beats-dashboards-master/load.sh -url "http://localhost:9200" -user "{{kibana_user}}:{{kibana_password}}"
ignore_errors: true
when: kibana_dashboards_load_sh_present != 0
tags:
# Skip ANSIBLE0013 Use shell only when shell functionality is required
# Shell required here during script execution
- skip_ansible_lint
- name: Check kibana users
stat: path=/etc/nginx/htpasswd.users
ignore_errors: true
register: kibana_user_pwfile_exists
- name: Create kibana admin user
command: htpasswd -b -c /etc/nginx/htpasswd.users {{kibana_user}} {{kibana_password}}
ignore_errors: true
when: kibana_user_pwfile_exists != 0
- name: Setup kibana service
service: name=kibana state=started enabled=true
become: true
- name: Check Filebeat forwarder SSL certificate
stat: path=/etc/pki/tls/certs/filebeat-forwarder.crt
ignore_errors: true
register: filebeat_forwarder_ssl_exists
- name: Create client forwarder SSL certificate
command: openssl req -subj '/CN={{ ansible_fqdn }}/' -config /etc/pki/tls/openssl_extras.cnf \
-x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/filebeat-forwarder.key \
-out /etc/pki/tls/certs/filebeat-forwarder.crt
ignore_errors: true
when: filebeat_forwarder_ssl_exists != 0
- name: Check Filebeat forwarder SSL certificate copy
stat: path=/usr/share/nginx/html/filebeat-forwarder.crt
ignore_errors: true
register: filebeat_forwarder_ssl_client_copy_exists
- name: Copy Filebeat forwarder SSL certificate
command: cp /etc/pki/tls/certs/filebeat-forwarder.crt /usr/share/nginx/html/filebeat-forwarder.crt
ignore_errors: true
when: filebeat_forwarder_ssl_client_copy_exists != 0
- name: Refresh logstash service
systemd:
name: logstash.service
state: restarted
ignore_errors: true
when: (logging_backend != 'fluentd')
- name: Refresh fluentd service
systemd:
name: td-agent.service
state: restarted
when: (logging_backend == 'fluentd')
become: true
- name: Print SSL post-setup information
debug: msg="Filebeat SSL Certificate available at http://{{ ansible_fqdn }}:{{ elk_server_ssl_cert_port }}/filebeat-forwarder.crt"
when: (logging_backend != 'fluentd')
- name: Print post-setup URL
debug: msg="*** ELK Services available at http://{{ ansible_fqdn }}:{{ nginx_kibana_port }} ***"
- name: Print index creation instructions
debug: msg="** 1) Navigate to http://{{ ansible_fqdn }}:{{ nginx_kibana_port }} and login with admin/admin, click 'create' on the green index button ***"
- name: Print filebeat openstack client setup instructions
debug: msg="** 2) Run ansible-playbook -i hosts install/elk-openstack-client.yml --extra-vars 'elk_server={{ ansible_default_ipv4.address }}' to setup OpenStack clients ***"
- name: Print filebeat client setup instructions
debug: msg="** 2) Run ansible-playbook -i hosts install/elk-client.yml --extra-vars 'elk_server={{ ansible_default_ipv4.address }}' to setup clients ***"

View File

@ -1,199 +0,0 @@
---
#
# Install/run nginx for browbeat
#
- name: Install nginx, httpd-tools, httplib2, libsemanage-python
package:
name: "{{ item }}"
state: present
become: true
with_items:
- nginx
- httpd-tools
- python-httplib2
- libsemanage-python
# SELinux boolean for nginx
- name: Apply SELinux boolean httpd_can_network_connect
seboolean: name=httpd_can_network_connect state=yes persistent=yes
when: "ansible_selinux['status'] == 'enabled'"
# create /etc/nginx/conf.d/ directory
- name: Create nginx directory structure
file: path=/etc/nginx/conf.d/
state=directory
mode=0755
# deploy kibana.conf with FQDN
- name: Setup nginx reverse proxy for kibana
template:
src=kibana.conf.j2
dest=/etc/nginx/conf.d/kibana.conf
owner=root
group=root
mode=0644
become: true
register: nginx_needs_restart
# deploy basic nginx.conf 8080 vhost
- name: Setup nginx TCP/{{elk_server_ssl_cert_port}} for SSL certificate retrieval
template:
src=nginx.conf.j2
dest=/etc/nginx/nginx.conf
owner=root
group=root
mode=0644
become: true
# start nginx service
- name: Start nginx service
systemd:
name: nginx.service
state: restarted
ignore_errors: true
when: nginx_needs_restart != 0
- name: Check if nginx is in use
shell: systemctl is-enabled nginx.service | egrep -qv 'masked|disabled'
register: nginx_in_use
ignore_errors: yes
tags:
# Skip ANSIBLE0012 Commands should not change things if nothing needs doing
# Determine if nginx is enabled
- skip_ansible_lint
- name: Set nginx to start on boot
systemd:
name: nginx.service
enabled: yes
when: nginx_in_use.rc != 0
# we need TCP/80 and TCP/8080 open
# determine firewall status and take action
# 1) use firewall-cmd if firewalld is utilized
# 2) insert iptables rule if iptables is used
# Firewalld
- name: Determine if firewalld is in use
shell: systemctl is-enabled firewalld.service | egrep -qv 'masked|disabled'
ignore_errors: true
register: firewalld_in_use
no_log: true
tags:
# Skip ANSIBLE0012 Commands should not change things if nothing needs doing
# Need to check if firewall is active
- skip_ansible_lint
- name: Determine if firewalld is active
shell: systemctl is-active firewalld.service | grep -vq inactive
ignore_errors: true
register: firewalld_is_active
no_log: true
tags:
# Skip ANSIBLE0012 Commands should not change things if nothing needs doing
# Need to check if firewall is active
- skip_ansible_lint
- name: Determine if TCP/{{nginx_kibana_port}} is already active
shell: firewall-cmd --list-ports | egrep -q "^{{nginx_kibana_port}}/tcp"
ignore_errors: true
register: firewalld_nginx_kibana_port_exists
no_log: true
tags:
# Skip ANSIBLE0012 Commands should not change things if nothing needs doing
# Need to check if firewall rule already exists
- skip_ansible_lint
# add firewall rule via firewall-cmd
- name: Add firewall rule for TCP/{{nginx_kibana_port}} (firewalld)
command: "{{ item }}"
with_items:
- firewall-cmd --zone=public --add-port={{nginx_kibana_port}}/tcp --permanent
- firewall-cmd --reload
ignore_errors: true
become: true
when: firewalld_in_use.rc == 0 and firewalld_is_active.rc == 0 and firewalld_nginx_kibana_port_exists.rc != 0
# iptables-services
- name: check firewall rules for TCP/{{nginx_kibana_port}} (iptables-services)
shell: grep "dport {{nginx_kibana_port}} \-j ACCEPT" /etc/sysconfig/iptables | wc -l
ignore_errors: true
register: iptables_nginx_kibana_port_exists
failed_when: iptables_nginx_kibana_port_exists == 127
no_log: true
tags:
# Skip ANSIBLE0012 Commands should not change things if nothing needs doing
# Need to check if firewall rule already exists
- skip_ansible_lint
- name: Add firewall rule for TCP/{{nginx_kibana_port}} (iptables-services)
lineinfile:
dest: /etc/sysconfig/iptables
line: '-A INPUT -p tcp -m tcp --dport {{nginx_kibana_port}} -j ACCEPT'
regexp: '^INPUT -i lo -j ACCEPT'
insertbefore: '-A INPUT -i lo -j ACCEPT'
backup: yes
when: firewalld_in_use.rc != 0 and firewalld_is_active.rc != 0 and iptables_nginx_kibana_port_exists.stdout|int == 0
register: iptables_needs_restart
- name: Restart iptables-services for TCP/{{nginx_kibana_port}} (iptables-services)
shell: systemctl restart iptables.service
ignore_errors: true
when: iptables_needs_restart != 0 and firewalld_in_use.rc != 0 and firewalld_is_active.rc != 0
tags:
# Skip ANSIBLE0013 Use shell only when shell functionality is required
# No systemctl module available in current stable release (Ansible 2.1)
- skip_ansible_lint
# Firewalld
- name: Determine if TCP/{{elk_server_ssl_cert_port}} is already active
shell: firewall-cmd --list-ports | egrep -q "^{{elk_server_ssl_cert_port}}/tcp"
ignore_errors: true
register: firewalld_elk_server_ssl_port_exists
no_log: true
tags:
# Skip ANSIBLE0012 Commands should not change things if nothing needs doing
# Need to check if firewall rule already exists
- skip_ansible_lint
# add firewall rule via firewall-cmd
- name: Add firewall rule for TCP/{{elk_server_ssl_cert_port}} (firewalld)
command: "{{ item }}"
with_items:
- firewall-cmd --zone=public --add-port={{elk_server_ssl_cert_port}}/tcp --permanent
- firewall-cmd --reload
ignore_errors: true
become: true
when: firewalld_in_use.rc == 0 and firewalld_is_active.rc == 0 and firewalld_elk_server_ssl_port_exists.rc != 0
# iptables-services
- name: check firewall rules for TCP/{{elk_server_ssl_cert_port}} (iptables-services)
shell: grep "dport {{elk_server_ssl_cert_port}} \-j ACCEPT" /etc/sysconfig/iptables | wc -l
ignore_errors: true
register: iptables_elk_server_ssl_port_exists
failed_when: iptables_elk_server_ssl_port_exists == 127
no_log: true
tags:
# Skip ANSIBLE0012 Commands should not change things if nothing needs doing
# Need to check if firewall rule already exists
- skip_ansible_lint
- name: Add firewall rule for TCP/{{elk_server_ssl_cert_port}} (iptables-services)
lineinfile:
dest: /etc/sysconfig/iptables
line: '-A INPUT -p tcp -m tcp --dport {{elk_server_ssl_cert_port}} -j ACCEPT'
regexp: '^INPUT -i lo -j ACCEPT'
insertbefore: '-A INPUT -i lo -j ACCEPT'
backup: yes
when: firewalld_in_use.rc != 0 and firewalld_is_active.rc != 0 and iptables_elk_server_ssl_port_exists.stdout|int == 0
register: iptables_needs_restart
- name: Restart iptables-services for TCP/{{elk_server_ssl_cert_port}} (iptables-services)
shell: systemctl restart iptables.service
ignore_errors: true
when: iptables_needs_restart != 0 and firewalld_in_use.rc != 0 and firewalld_is_active.rc != 0
tags:
# Skip ANSIBLE0013 Use shell only when shell functionality is required
# No systemctl module available in current stable release (Ansible 2.1)
- skip_ansible_lint

View File

@ -1,17 +0,0 @@
server {
    listen {{nginx_kibana_port}};
    server_name {{ansible_hostname}};

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

View File

@ -1,55 +0,0 @@
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen {{elk_server_ssl_cert_port}} default_server;
        listen [::]:{{elk_server_ssl_cert_port}} default_server;
        server_name _;
        root /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}

View File

@ -603,110 +603,11 @@ monitoring host such that you can see if you hit resource issues with your
monitoring host.
Install ELK Host (ElasticSearch/LogStash/Kibana)
-------------------------------------------------
Install Kibana Visualizations
-------------------------------
An ELK server allows you to publish benchmark result data into ElasticSearch,
which allows you to build queries and dashboards to examine your benchmarking
results across various metadata points.
Prerequisites
~~~~~~~~~~~~~
Hardware
* Baremetal or Virtual Machine
Operating System
* RHEL 7
* CentOS 7
Repos
* Red Hat Enterprise Linux 7Server - x86_64 - Server
* Red Hat Enterprise Linux 7Server - x86_64 - Server Optional
RPM
* epel-release
* ansible
* git
Installation
~~~~~~~~~~~~
1. Deploy machine (RHEL7 is used in this example)
2. Install RPMS
::
[root@dhcp23-93 ~]# yum install -y https://download.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
...
[root@dhcp23-93 ~]# yum install -y ansible git
3. Clone Browbeat
::
[root@dhcp23-93 ~]# git clone https://github.com/openstack/browbeat.git
Cloning into 'browbeat'...
remote: Counting objects: 7533, done.
remote: Compressing objects: 100% (38/38), done.
remote: Total 7533 (delta 30), reused 36 (delta 23), pack-reused 7469
Receiving objects: 100% (7533/7533), 5.26 MiB | 5.79 MiB/s, done.
Resolving deltas: 100% (4330/4330), done.
4. Add a hosts file to the ansible directory
::
[root@dhcp23-93 ~]# cd browbeat/ansible/
[root@dhcp23-93 ansible]# vi hosts
The content of the hosts file should be the following
::
[elk]
localhost
5. Setup SSH config, SSH key and exchange for Ansible
::
[root@dhcp23-93 ansible]# touch ssh-config
[root@dhcp23-93 ansible]# ssh-keygen
Generating public/private rsa key pair.
...
[root@dhcp23-93 ansible]# ssh-copy-id root@localhost
...
6. Edit install variables
::
[root@dhcp23-93 ansible]# vi install/group_vars/all.yml
Depending on the environment, you may need to edit more than just the following
variable - es_ip.
If you are deploying from a machine that is not an OSP undercloud, be sure to edit
home_dir/browbeat_path to match the actual path.
.. note:: If you require a proxy to get outside your network, you must
configure the http_proxy, https_proxy, and no_proxy variables in the proxy_env
dictionary in install/group_vars/all.yml
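As a hedged sketch of this step (the IP, home directory, and proxy host below are
placeholders, not values taken from the repository), the edited variables in
install/group_vars/all.yml might look like
::
    es_ip: 10.0.0.10                      # IP or hostname of the host running ELK
    home_dir: /home/stack
    browbeat_path: /home/stack/browbeat   # adjust if not deploying from an undercloud
    proxy_env:
      http_proxy: http://proxy.example.com:3128
      https_proxy: http://proxy.example.com:3128
      no_proxy: localhost,127.0.0.1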
7. Install ELK via Ansible playbook
::
[root@dhcp23-93 ansible]# ansible-playbook -i hosts install/elk.yml
...
8. Install Kibana Visualizations via Ansible playbook
1. Update install/group_vars/all.yml (es_ip) to identify your ELK host.
2. Install Kibana Visualizations via Ansible playbook
::