jkilpatr bb44cd830c Rsyslog -> Elasticsearch logging
This implements rsyslog -> Elasticsearch logging, as well as
rsyslog forwarder -> rsyslog aggregator -> Elasticsearch logging,
using the common logging template as a base and adding
dynamic detection of containerized services and their log
paths.

Services can be moved into and out of containers, and log files
can be added or removed; the log detector script will create a
template that reflects these changes dynamically.

Logging inherits the cloud name and Elasticsearch info from the
existing group_vars variables, so no additional setup should be
needed beyond setting logging_backend: rsyslog and running either
the install playbook or the rsyslog-logging playbook.
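As a minimal sketch, the group_vars entry enabling this feature would look like the following (only logging_backend is named in this commit; any surrounding keys are assumptions):

```yaml
# group_vars/all.yml (sketch; only logging_backend is taken from the
# commit text -- cloud name and Elasticsearch settings are inherited
# from the existing group_vars and are not shown here)
logging_backend: rsyslog
```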

Finally, additional variables can be passed into the deployment
with -e, or simply by being present in the Ansible namespace; this
way things like a unique build ID can be templated into the logs
automatically. I've added support for browbeat_uuid, dlrn_hash,
and rhos_puddle; others should be trivial to add.
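For example, a run might pass these identifiers on the command line (the playbook path and the sample values here are illustrative assumptions, not taken from the repository):

```shell
# Hypothetical invocation; playbook path and values are assumptions
ansible-playbook ansible/install/rsyslog-logging.yml \
  -e "browbeat_uuid=$(uuidgen)" \
  -e "dlrn_hash=abc123" \
  -e "rhos_puddle=2017-10-13.1"
```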

There are also additional tunables to configure whether logging
instances should be standalone (viable for small clouds) or rely
on a server-side aggregator service (more efficient for large
deployments). Disk-backed mode is another tunable; it creates a
variable disk load that may be undesirable in some deployments,
but if collecting every last log is important it can be turned on,
creating a one- or two-layer queueing structure (depending on
whether the aggregation server is in use) to buffer against
Elasticsearch downtime or overload.
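A sketch of the disk-backed tunable described above (disk_backed_rsyslog appears in the template below; its placement alongside logging_backend is an assumption):

```yaml
# Sketch: enable disk-assisted queueing so logs survive
# Elasticsearch downtime or overload (at the cost of disk load)
logging_backend: rsyslog
disk_backed_rsyslog: true
```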

If you want to see examples from both containerized and non
container clouds check out elk.browbeatproject.org's logstash
index.

Change-Id: I3e6652223a08ab8a716a40b7a0e21b7fcea6c000
2017-10-16 12:08:26 +00:00


#jinja2: lstrip_blocks: True
# This template aggregates common OpenStack logs
# via rsyslog, with dynamic detection of containerization
# and new log file locations. Any service in a container
# will be pulled from /var/log/containers; any service without
# a container will be pulled from /var/log.
# Credit: jkilpatr for the containers templating, portante for everything else
#### GLOBAL DIRECTIVES ####
global(
# Where to place auxiliary files
workDirectory="/var/lib/rsyslog"
# perf-dept: we want fully qualified domain names for common logging
preserveFQDN="on"
# Try to avoid any message truncation
maxMessageSize="65536")
{% if disk_backed_rsyslog %}
main_queue(
# Directory where the queue files on disk will be stored
queue.spoolDirectory="/srv/data/rsyslog"
# Prefix for the names of the queue files on disk
queue.filename="main-queue"
# In-memory linked-list queue, but because filename is defined it is disk-assisted
# See http://www.rsyslog.com/doc/v8-stable/concepts/queues.html?highlight=disk%20assisted
queue.type="linkedlist"
# Only store up to 2 GB of logs on disk
queue.maxdiskspace="2g"
# Use 100 MB queue files
queue.maxfilesize="100m"
# Update disk queue every 1,000 messages
queue.checkpointinterval="1000"
# Fsync when a checkpoint occurs
queue.syncqueuefiles="on"
# Allow up to 4 threads processing items in the queue
queue.workerthreads="4"
# Beef up the internal message queue
queue.size="131072"
# 75% of QueueSize, start persisting to disk
queue.highwatermark="98304"
# 90% of QueueSize, start discarding messages
queue.discardmark="117964"
# If we reach the discard mark, we'll throw out notice, info, and debug messages
queue.discardseverity="5")
{% else %}
main_queue(
# Allow up to 4 threads processing items in the queue
queue.workerthreads="4"
# Beef up the internal message queue
queue.size="131072"
# 90% of QueueSize
queue.discardmark="117964"
# If we reach the discard mark, we'll throw out notice, info, and debug messages
queue.discardseverity="5")
{% endif %}