This implements rsyslog -> Elasticsearch logging, as well as rsyslog forwarder -> rsyslog aggregator -> Elasticsearch logging, using the common logging template as a base and adding dynamic detection of containerized services and their log paths. Services can be moved into and out of containers, and log files can be added or removed; the log detector script creates a template that reflects these changes dynamically.

Logging inherits the cloud name and Elasticsearch connection info from the existing group_vars variables, so no additional setup should be required beyond setting logging_backend: rsyslog and running either the install playbook or the rsyslog-logging playbook. Additional variables can be passed into the deployment with -e, or simply by being present in the Ansible namespace, so values such as a unique build ID can be templated into the logs automatically. Support is included for browbeat_uuid, dlrn_hash, and rhos_puddle; others should be trivial to add.

There are also tunables to configure whether logging instances should be standalone (viable for small clouds) or rely on a server-side aggregator service (more efficient for large deployments). Disk-backed mode is another tunable: it creates a variable disk load that may be undesirable in some deployments, but if collecting every last log is important it can be turned on, creating a one- or two-layer queueing structure (depending on whether the aggregation server is in use) to buffer logs during Elasticsearch downtime or overload. For examples from both containerized and non-containerized clouds, see the logstash index on elk.browbeatproject.org.
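As a minimal sketch of that setup, the group_vars entry and playbook invocation below assume a playbook path of ansible/install/rsyslog-logging.yml and an inventory file named hosts; those names, and the commented-out tunable names, are illustrative assumptions rather than confirmed defaults:

    # group_vars/all.yml -- enable the rsyslog backend; the cloud name and
    # Elasticsearch connection info are inherited from existing group_vars.
    logging_backend: rsyslog
    # Hypothetical tunables for the behavior described above (exact variable
    # names may differ in your checkout):
    # disk_backed_rsyslog: true    # queue to disk during Elasticsearch outages
    # rsyslog_aggregator: true     # forward to a server-side aggregator

    # Template build identifiers into every log record via -e:
    ansible-playbook -i hosts ansible/install/rsyslog-logging.yml \
        -e "browbeat_uuid=$BROWBEAT_UUID dlrn_hash=$DLRN_HASH rhos_puddle=$RHOS_PUDDLE"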
Shaker Data Plane Performance Dashboards
Two dashboards have been provided with Browbeat for Shaker.
Browbeat Shaker Scenarios with Throughput vs Concurrency
This Shaker dashboard aims to present the data plane performance of OpenStack VMs connected in different network topologies in summarized form. Three distinct visualizations representing the L2, L3 E-W, and L3 N-S topologies, along with corresponding markdown to explain each visualization, make up the "Browbeat Shaker Scenarios with Throughput vs Concurrency" dashboard. For each network topology, the average throughput for TCP download and upload in Mbps is plotted against the VM concurrency (the number of pairs of VMs firing traffic at any given moment). For example, in the L2 scenario, if the average throughput is 4000 Mbps at a concurrency of 2, each pair of VMs involved averaged 4000 Mbps for the duration of the test, bringing the total throughput to 8000 Mbps (average throughput * concurrency).
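Written out, the arithmetic from that example is:

    total_throughput = average_per_pair_throughput * concurrency
                     = 4000 Mbps * 2
                     = 8000 Mbps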
Browbeat Shaker Cloud Performance Comparison
This Shaker dashboard lets you compare network performance results
from various clouds. This dashboard is ideal if you want to compare data
plane performance with different neutron configurations in different
clouds. For each topology, a visualization comparing
tcp_download and tcp_upload per cloud name and
a visualization comparing ping latency per cloud name are generated in
the dashboard, along with instructions in markdown for advanced filtering
and querying.
Note
You can filter based on browbeat_uuid and
shaker_uuid to view results from a specific run or Shaker
scenario only, and on record.concurrency and
record.accommodation to filter based on the subset of the
test results you want to view.
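For instance, a Kibana query-bar filter along the following lines would narrow a dashboard to a single run at one concurrency level; the UUID and the accommodation value are placeholders, not values from a real run:

    browbeat_uuid: "<your-browbeat-uuid>" AND record.concurrency: 2 AND record.accommodation: "single_room"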