browbeat/conf/browbeat-workloads.yml
jkilpatr bb44cd830c Rsyslog -> Elasticsearch logging
This implements rsyslog -> Elasticsearch logging, as well as
rsyslog forwarder -> rsyslog aggregator -> Elasticsearch logging,
using the common logging template as a base and adding dynamic
detection of containerized services and their log paths.

Services can be moved into and out of containers, and log files can
be added or removed; the log detector script will generate a template
that reflects these changes dynamically.

Logging inherits the cloud name and Elasticsearch connection info from
the existing group_vars variables, so there should be no additional
setup work beyond setting logging_backend: rsyslog and running either
the install playbook or the rsyslog-logging playbook.
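
For example, a minimal group_vars sketch (only logging_backend is named
by this change; the Elasticsearch host and port keys below are
illustrative placeholders):

  # group_vars (illustrative; key names other than logging_backend
  # are assumptions)
  es_ip: 1.1.1.1
  es_local_port: 9200
  logging_backend: rsyslog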

Finally, additional variables can be passed into the deployment with
-e or simply by being present in the Ansible namespace, so values like
a unique build ID can be templated into the logs automatically. I've
added support for browbeat_uuid, dlrn_hash, and rhos_puddle; others
should be trivial to add.
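
For example (the playbook path and invocation in the comment are
assumptions for illustration; only the three variable names come from
this change):

  # e.g. ansible-playbook -i hosts install/rsyslog-logging.yml \
  #        -e @extra-metadata.yml
  # extra-metadata.yml (illustrative values):
  browbeat_uuid: 00000000-0000-0000-0000-000000000000
  dlrn_hash: abc123
  rhos_puddle: 2017-10-10.1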

There are also additional tunables to configure whether logging
instances should be standalone (viable for small clouds) or rely on a
server-side aggregator service (more efficient for large deployments).
Disk-backed mode is another tunable; it creates a variable disk load
that may be undesirable in some deployments, but if collecting every
last log is important it can be turned on, creating a one- or two-layer
queueing structure (depending on whether the aggregation server is in
use) to ride out Elasticsearch downtime or overload.
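
A sketch of those tunables in group_vars (the key names here are
illustrative assumptions, not quoted from this change):

  # Forward to an rsyslog aggregator instead of writing to
  # Elasticsearch directly from each node (assumed key name)
  rsyslog_forwarding: true
  # Queue to disk so logs survive Elasticsearch downtime or overload,
  # at the cost of variable disk load (assumed key name)
  disk_backed_rsyslog: false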

If you want to see examples from both containerized and
non-containerized clouds, check out the logstash index on
elk.browbeatproject.org.

Change-Id: I3e6652223a08ab8a716a40b7a0e21b7fcea6c000
2017-10-16 12:08:26 +00:00

# Complete set of Workload Benchmarks
browbeat:
  results: results/
  rerun: 1
  cloud_name: openstack
  overcloud_credentials: /home/stack/overcloudrc
elasticsearch:
  enabled: false
  regather: false
  host: 1.1.1.1
  port: 9200
  metadata_files:
    - name: hardware-metadata
      file: metadata/hardware-metadata.json
    - name: environment-metadata
      file: metadata/environment-metadata.json
    - name: software-metadata
      file: metadata/software-metadata.json
    - name: version
      file: metadata/version.json
ansible:
  ssh_config: ansible/ssh-config
  hosts: ansible/hosts
  adjust:
    keystone_token: ansible/browbeat/adjustment-keystone-token.yml
    neutron_l3: ansible/browbeat/adjustment-l3.yml
    nova_db: ansible/browbeat/adjustment-db.yml
    workers: ansible/browbeat/adjustment-workers.yml
  metadata: ansible/gather/site.yml
grafana:
  enabled: true
  grafana_ip: 1.1.1.1
  grafana_port: 3000
  dashboards:
    - openstack-general-system-performance
rally:
  enabled: true
  sleep_before: 5
  sleep_after: 5
  plugins:
    - workloads: rally/rally-plugins/workloads
  benchmarks:
    - name: workloads
      enabled: true
      concurrency:
        - 1
      times: 1
      scenarios:
        - name: browbeat-linpack
          enabled: true
          image_name: browbeat-linpack
          flavor_name: m1.small
          external_network:
          net_id:
          file: rally/rally-plugins/workloads/linpack.yml
        - name: browbeat-pbench-uperf
          enabled: true
          user: root
          image_name: browbeat-uperf
          flavor_name: m1.small
          # hypervisor_server: "nova:overcloud-compute-1.localdomain"
          # hypervisor_client: "nova:overcloud-compute-0.localdomain"
          external_network:
          net_id:
          protocols: tcp,udp
          instances: 1
          num_pairs: 1
          samples: 1
          test_types: stream,rr
          message_sizes: 64,1024,16384
          test_name: browbeat-pbench-uperf
          send_results: true
          cloudname:
          elastic_host:
          elastic_port: 9200
          file: rally/rally-plugins/workloads/pbench-uperf.yml