Install ELK with beats to gather metrics
Tags: openstack, ansible
About this repository
This set of playbooks will deploy an ELK cluster (Elasticsearch, Logstash, Kibana) with Metricbeat to gather metrics from hosts and ship them to the ELK cluster.
These playbooks require Ansible 2.4+.
Before running these playbooks, the systemd_service role is required, as it
is used by the community roles. If these playbooks are being run in an
OpenStack-Ansible installation, the required role will be resolved for you.
If the installation is outside of OpenStack-Ansible, clone the role or add
it to an ansible role requirements file.
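If you prefer a requirements file over a direct clone, an entry along these lines could be used (the file name and `version` value are illustrative; adjust them for your deployment):

```yaml
# ansible-role-requirements.yml (illustrative entry)
- name: systemd_service
  src: https://github.com/openstack/ansible-role-systemd_service
  scm: git
  version: master  # assumption: pin to the branch or tag you actually need
```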
git clone https://github.com/openstack/ansible-role-systemd_service /etc/ansible/roles/systemd_service

OpenStack-Ansible Integration
These playbooks can be used with a standalone inventory or as an
integrated part of an OpenStack-Ansible deployment. For a simple example
of a standalone inventory, see inventory.example.yml.
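As a rough sketch, a minimal standalone inventory could take the following shape; the host names and addresses are placeholders, and inventory.example.yml in the repository remains the authoritative reference. The group names match those referenced elsewhere in this document (elastic-logstash, kibana):

```yaml
# Illustrative standalone inventory sketch; see inventory.example.yml
# in the repository for the real example.
all:
  children:
    elastic-logstash:
      hosts:
        logging01:
          ansible_host: 172.22.8.27  # placeholder address
    kibana:
      hosts:
        logging01:
          ansible_host: 172.22.8.27  # placeholder address
```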
Optional | Load balancer VIP address
In order to use multi-node Elasticsearch, a load balancer is required; HAProxy can provide the needed functionality. The option internal_lb_vip_address is used as the endpoint (virtual IP address) that services like Kibana will use when connecting to Elasticsearch. If this option is omitted, the first node in the Elasticsearch cluster will be used.
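Setting the VIP is a one-line addition to user_variables.yml; the address below is a placeholder for your load balancer's virtual IP:

```yaml
# /etc/openstack_deploy/user_variables.yml
internal_lb_vip_address: 172.22.8.10  # placeholder: your HAProxy VIP
```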
Optional | configure haproxy endpoints
Edit the /etc/openstack_deploy/user_variables.yml file and add the following lines:
haproxy_extra_services:
  - service:
      haproxy_service_name: kibana
      haproxy_ssl: False
      haproxy_backend_nodes: "{{ groups['kibana'] | default([]) }}"
      haproxy_port: 81  # This is set using the "kibana_nginx_port" variable
      haproxy_balance_type: tcp
  - service:
      haproxy_service_name: elastic-logstash
      haproxy_ssl: False
      haproxy_backend_nodes: "{{ groups['elastic-logstash'] | default([]) }}"
      haproxy_port: 5044  # This is set using the "logstash_beat_input_port" variable
      haproxy_balance_type: tcp
  - service:
      haproxy_service_name: elastic-logstash
      haproxy_ssl: False
      haproxy_backend_nodes: "{{ groups['elastic-logstash'] | default([]) }}"
      haproxy_port: 9201  # This is set using the "elastic_hap_port" variable
      haproxy_check_port: 9200  # This is set using the "elastic_port" variable
      haproxy_backend_port: 9200  # This is set using the "elastic_port" variable
      haproxy_balance_type: tcp

Optional | run the haproxy-install playbook
cd /opt/openstack-ansible/playbooks/
openstack-ansible haproxy-install.yml --tags=haproxy-service-config

Deployment Process
Clone the elk-osa repo
cd /opt
git clone https://github.com/openstack/openstack-ansible-ops

Copy the env.d file into place
cd /opt/openstack-ansible-ops/elk_metrics_6x
cp env.d/elk.yml /etc/openstack_deploy/env.d/

Copy the conf.d file into place
cp conf.d/elk.yml /etc/openstack_deploy/conf.d/

In elk.yml, list your logging hosts under elastic-logstash_hosts to create the Elasticsearch cluster in multiple containers, and one logging host under kibana_hosts to create the Kibana container.

vi /etc/openstack_deploy/conf.d/elk.yml

Create the containers
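The conf.d entry described above could look something like the following sketch; the host names and IP addresses are placeholders, and the copied conf.d/elk.yml from the repository is the authoritative template:

```yaml
# /etc/openstack_deploy/conf.d/elk.yml (illustrative)
elastic-logstash_hosts:
  logging01:
    ip: 172.22.8.27  # placeholder address
  logging02:
    ip: 172.22.8.28  # placeholder address
  logging03:
    ip: 172.22.8.29  # placeholder address
kibana_hosts:
  logging01:
    ip: 172.22.8.27  # placeholder address
```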
cd /opt/openstack-ansible/playbooks
openstack-ansible lxc-containers-create.yml -e 'container_group=elastic-logstash:kibana'

Install master/data Elasticsearch nodes on the elastic-logstash containers
cd /opt/openstack-ansible-ops/elk_metrics_6x
openstack-ansible installElastic.yml

Install Logstash on all the elastic containers
cd /opt/openstack-ansible-ops/elk_metrics_6x
openstack-ansible installLogstash.yml

Install Kibana, an nginx reverse proxy, and Metricbeat on the kibana container
cd /opt/openstack-ansible-ops/elk_metrics_6x
openstack-ansible installKibana.yml

Install Metricbeat everywhere to start shipping metrics to the logstash instances
cd /opt/openstack-ansible-ops/elk_metrics_6x
openstack-ansible installMetricbeat.yml

Adding Grafana visualizations
See the grafana directory for more information on how to deploy
Grafana. When deploying Grafana, source the variable file from ELK
in order to automatically connect Grafana to the Elasticsearch datastore
and import dashboards. Including the variable file is as simple as
adding -e @../elk_metrics_6x/vars/variables.yml to the
grafana playbook run.
Included dashboards
Troubleshooting
If everything goes bad, you can clean up with the following commands:
openstack-ansible /opt/openstack-ansible-ops/elk_metrics_6x/site.yml -e "elk_package_state=absent" --tags package_install
openstack-ansible /opt/openstack-ansible/playbooks/lxc-containers-destroy.yml --limit=kibana:elastic-logstash_all