Add two more elasticsearch nodes.

* manifests/site.pp: List the two new elasticsearch nodes in the
appropriate lists (the kind of change involved is sketched after this
file list).

* modules/logstash/manifests/elasticsearch.pp: Do not restart
elasticsearch when config files change. Service restarts are costly and
should be manually performed when necessary. Otherwise puppet should
simply update the config files.

* modules/logstash/templates/elasticsearch.yml.erb: Update elasticsearch
config with new cluster topology. Increase memory available for
indexing.

* modules/openstack_project/manifests/cacti.pp: Add new nodes to cacti
monitoring list.
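
For readers unfamiliar with this tree, the site.pp and cacti.pp changes
amount to appending the new hosts to existing host lists. A minimal
sketch, using hypothetical hostnames and a hypothetical variable name
(the real entries live in the files named above):

  # Illustrative only: the names below are placeholders, not the actual
  # contents of manifests/site.pp or cacti.pp.
  $elasticsearch_nodes = [
    'elasticsearch1.openstack.org',
    'elasticsearch2.openstack.org',
    'elasticsearch3.openstack.org',  # new node
    'elasticsearch4.openstack.org',  # new node
  ]

The same append-two-hosts pattern applies to the cacti monitoring list in
modules/openstack_project/manifests/cacti.pp.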

Adding two more elasticsearch nodes (growing the cluster from three to
five) relieves memory pressure: more nodes means fewer indexes per node,
and fewer indexes per node require less memory to manage. Two more nodes
also give us more disk to retain older indexes in. These new nodes should
allow us to retain at least 3 weeks of indexed logs.

Change-Id: I3a5a02311e939c8147e401110c7b96d085eb8274
Reviewed-on: https://review.openstack.org/36305
Reviewed-by: Jeremy Stanley <fungi@yuggoth.org>
Approved: Clark Boylan <clark.boylan@gmail.com>
Reviewed-by: Clark Boylan <clark.boylan@gmail.com>
Tested-by: Jenkins

modules/logstash/manifests/elasticsearch.pp

@@ -72,8 +72,8 @@ class logstash::elasticsearch (
   service { 'elasticsearch':
     ensure => running,
-    require => Package['elasticsearch'],
-    subscribe => [
+    require => [
+      Package['elasticsearch'],
       File['/etc/elasticsearch/elasticsearch.yml'],
       File['/etc/elasticsearch/default-mapping.json'],
       File['/etc/default/elasticsearch'],
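
In Puppet terms, the subscribe metaparameter removed here created a
refresh relationship: whenever one of the listed files changed, Puppet
restarted the elasticsearch service on the same run. Plain require only
orders resources. An abbreviated sketch of the two patterns (only one
would exist in the manifest at a time):

  # Old pattern: restart elasticsearch whenever a config file changes.
  service { 'elasticsearch':
    ensure    => running,
    require   => Package['elasticsearch'],
    subscribe => [
      File['/etc/elasticsearch/elasticsearch.yml'],
    ],
  }

  # New pattern: config files are managed and ordered before the service,
  # but restarts are left to an operator.
  service { 'elasticsearch':
    ensure  => running,
    require => [
      Package['elasticsearch'],
      File['/etc/elasticsearch/elasticsearch.yml'],
    ],
  }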

modules/logstash/templates/elasticsearch.yml.erb

@@ -139,7 +139,7 @@ node.name: "<%= scope.lookupvar("::hostname") %>"
 index.store.compress.stored: true
 index.store.compress.tv: true
-indices.memory.index_buffer_size: "33%"
+indices.memory.index_buffer_size: "40%"
 #################################### Paths ####################################
@@ -255,7 +255,7 @@ bootstrap.mlockall: true
 # Allow recovery process after N nodes in a cluster are up:
 #
-gateway.recover_after_nodes: 2
+gateway.recover_after_nodes: 4
 # Set the timeout to initiate the recovery process, once the N nodes
 # from previous setting are up (accepts time value):
@@ -266,7 +266,7 @@ gateway.recover_after_time: 5m
 # are up (and recover_after_nodes is met), begin recovery process immediately
 # (without waiting for recover_after_time to expire):
 #
-gateway.expected_nodes: 3
+gateway.expected_nodes: 5
 ############################# Recovery Throttling #############################