Merge "Refactor Shaker"
commit 30f95ee2eb
@@ -278,6 +278,7 @@ shaker:
   sleep_after: 5
   venv: /home/stack/shaker-venv
   shaker_region: regionOne
+  external_host: 2.2.2.2
   scenarios:
     - name: l2-4-1
       enabled: true
@@ -67,6 +67,7 @@ shaker:
   sleep_after: 5
   venv: /home/stack/shaker-venv
   shaker_region: regionOne
+  external_host: 2.2.2.2
   scenarios:
     - name: l2
       enabled: true
@@ -43,9 +43,14 @@ From your local machine

    $ vi install/group_vars/all.yml # Make sure to edit the dns_server to the correct ip address
    $ ansible-playbook -i hosts install/browbeat.yml
    $ vi install/group_vars/all.yml # Edit Browbeat network settings
-   $ ansible-playbook -i hosts install/browbeat_network.yml
+   $ ansible-playbook -i hosts install/browbeat_network.yml # For external access (required to build the Shaker image)
    $ ansible-playbook -i hosts install/shaker_build.yml
+
+.. note:: ``browbeat-network.yml`` will more than likely not work for you
+   depending on your underlay/overlay network setup. In such cases, the user
+   needs to create appropriate networks so that instances can reach the
+   internet.

 (Optional) Install collectd
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -68,6 +68,82 @@ browbeat-config.yaml:

    (browbeat-venv)[stack@ospd browbeat]$ ./browbeat.py perfkit -s browbeat-config.yaml

+Running Shaker
+==============
+Running Shaker requires the shaker image to be built, which in turn requires
+instances to be able to access the internet. The playbooks for this setup have
+been described in the installation documentation, but for the sake of
+convenience they are mentioned here as well.
+
+::
+
+   $ ansible-playbook -i hosts install/browbeat_network.yml
+   $ ansible-playbook -i hosts install/shaker_build.yml
+
+.. note:: The playbook to set up networking is provided as an example only and
+   might not work for you based on your underlay/overlay network setup. In such
+   cases, the exercise of setting up networking for instances to be able to
+   access the internet is left to the user.
+
+Once the shaker image is built, you can run Shaker via Browbeat by filling in a
+few options in the configuration file. The meaning of each option is
+summarized below:
+
+**shaker:**
+
+:enabled: Boolean ``true`` or ``false``, whether to enable shaker
+:server: IP address of the shaker-server for the agents to talk to (undercloud
+   IP by default)
+:port: Port to connect to the shaker-server (port 5555 on the undercloud by
+   default)
+:flavor: OpenStack instance flavor you want to use
+:join_timeout: Timeout in seconds for agents to join
+:sleep_before: Time in seconds to sleep before executing a scenario
+:sleep_after: Time in seconds to sleep after executing a scenario
+:venv: venv to execute shaker commands in, ``/home/stack/shaker-venv`` by
+   default
+:shaker_region: OpenStack region you want to use
+:external_host: IP of a server for external tests (it should have
+   ``browbeat/utils/shaker-external.sh`` executed on it previously, and
+   iptables/firewalld/selinux must allow connections on the ports used by the
+   network testing tools netperf and iperf)
+
+**scenarios:** List of scenarios you want to run
+
+:\- name: Name for the scenario; it is used to create directories/files
+   accordingly
+:enabled: Boolean ``true`` or ``false`` depending on whether or not you want
+   to execute the scenario
+:density: Number of instances
+:compute: Number of compute nodes across which to spawn instances
+:placement: ``single_room`` means one instance per compute node;
+   ``double_room`` gives you two instances per compute node
+:progression: ``null`` means all agents are involved; ``linear`` means
+   execution starts with one agent and increases linearly; ``quadratic``
+   results in quadratic growth in the number of agents participating in the
+   test concurrently
+:time: Time in seconds you want each test in the scenario file to run
+:file: The base shaker scenario file whose options are overridden (this
+   depends on whether you want to run L2, L3 E-W or L3 N-S tests, and on the
+   class of tool you want to use, such as flent or iperf3)
+
+To analyze results sent to Elasticsearch (you must have Elasticsearch enabled
+and the IP of the Elasticsearch host provided in the browbeat configuration
+file), you can use the following playbook to set up some prebuilt dashboards
+for you:
+
+::
+
+   $ ansible-playbook -i hosts install/kibana-visuals.yml
+
+Alternatively you can create your own visualizations of specific shaker runs
+using some simple searches such as:
+
+::
+
+   shaker_uuid: 97092334-34e8-446c-87d6-6a0f361b9aa8 AND record.concurrency: 1 AND result.result_type: bandwidth
+   shaker_uuid: c918a263-3b0b-409b-8cf8-22dfaeeaf33e AND record.concurrency:1 AND record.test:Bi-Directional
+
 Working with Multiple Clouds
 ============================
lib/Shaker.py (115 lines changed)
@@ -56,6 +56,26 @@ class Shaker(WorkloadBase.WorkloadBase):
             "Current number of Shaker tests failed: {}".format(
                 self.error_count))

+    def accommodation_to_dict(self, accommodation):
+        accommodation_dict = {}
+        for item in accommodation:
+            if isinstance(item, dict):
+                accommodation_dict.update(item)
+            else:
+                accommodation_dict[item] = True
+        return accommodation_dict
+
+    def accommodation_to_list(self, accommodation):
+        accommodation_list = []
+        for key, value in accommodation.iteritems():
+            if value is True:
+                accommodation_list.append(key)
+            else:
+                temp_dict = {}
+                temp_dict[key] = value
+                accommodation_list.append(temp_dict)
+        return accommodation_list
+
     def final_stats(self, total):
         self.logger.info(
             "Total Shaker scenarios enabled by user: {}".format(total))
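The two helpers added above convert Shaker's mixed-form accommodation list (bare flag strings plus single-key dicts) into a flat dict and back. A standalone sketch of the same round trip, using `dict.items()` in place of the Python 2 `iteritems()` that appears in the diff:

```python
def accommodation_to_dict(accommodation):
    """Flatten e.g. ['pair', {'density': 1}] into {'pair': True, 'density': 1}."""
    accommodation_dict = {}
    for item in accommodation:
        if isinstance(item, dict):
            accommodation_dict.update(item)
        else:
            accommodation_dict[item] = True
    return accommodation_dict


def accommodation_to_list(accommodation):
    """Inverse: True-valued keys become bare flags, others single-key dicts."""
    accommodation_list = []
    for key, value in accommodation.items():
        if value is True:
            accommodation_list.append(key)
        else:
            accommodation_list.append({key: value})
    return accommodation_list


original = ['pair', 'single_room', {'density': 1}, {'compute_nodes': 2}]
flat = accommodation_to_dict(original)
print(flat['density'])                          # 1
print(accommodation_to_list(flat) == original)  # True (insertion order preserved)
```

The dict form makes the membership tests used later in the commit (`'density' in accommodation`) straightforward, and the list form is what gets written back to the scenario YAML.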
@@ -120,19 +140,29 @@ class Shaker(WorkloadBase.WorkloadBase):
                     'shaker_test_info']['execution']:
                 shaker_test_meta['shaker_test_info'][
                     'execution']['progression'] = "all"
-            var = data['scenarios'][scenario][
-                'deployment'].pop('accommodation')
+            accommodation = self.accommodation_to_dict(data['scenarios'][scenario][
+                'deployment'].pop('accommodation'))
             if 'deployment' not in shaker_test_meta:
                 shaker_test_meta['deployment'] = {}
                 shaker_test_meta['deployment']['accommodation'] = {}
-            shaker_test_meta['deployment'][
-                'accommodation']['distribution'] = var[0]
-            shaker_test_meta['deployment'][
-                'accommodation']['placement'] = var[1]
+            if 'single' in accommodation:
+                shaker_test_meta['deployment'][
+                    'accommodation']['distribution'] = 'single'
+            elif 'pair' in accommodation:
+                shaker_test_meta['deployment'][
+                    'accommodation']['distribution'] = 'pair'
+            if 'single_room' in accommodation:
+                shaker_test_meta['deployment'][
+                    'accommodation']['placement'] = 'single_room'
+            elif 'double_room' in accommodation:
+                shaker_test_meta['deployment'][
+                    'accommodation']['placement'] = 'double_room'
-            shaker_test_meta['deployment']['accommodation'][
-                'density'] = var[2]['density']
+            if 'density' in accommodation:
+                shaker_test_meta['deployment']['accommodation'][
+                    'density'] = accommodation['density']
-            shaker_test_meta['deployment']['accommodation'][
-                'compute_nodes'] = var[3]['compute_nodes']
+            if 'compute_nodes' in accommodation:
+                shaker_test_meta['deployment']['accommodation'][
+                    'compute_nodes'] = accommodation['compute_nodes']
             shaker_test_meta['deployment']['template'] = data[
                 'scenarios'][scenario]['deployment']['template']
             # Iterating through each record to get result values
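The rewritten metadata extraction keys off membership in the flattened accommodation dict instead of positional indexing (`var[0]`, `var[1]`, ...), which breaks when a scenario file omits a field. A condensed, standalone sketch of that mapping (the `accommodation_metadata` function name is illustrative, not part of the commit):

```python
def accommodation_metadata(accommodation):
    """Derive Elasticsearch metadata fields from a flattened accommodation dict.

    Each field is emitted only if the corresponding key exists, so partial
    accommodation specs no longer raise IndexError/KeyError.
    """
    meta = {}
    if 'single' in accommodation:
        meta['distribution'] = 'single'
    elif 'pair' in accommodation:
        meta['distribution'] = 'pair'
    if 'single_room' in accommodation:
        meta['placement'] = 'single_room'
    elif 'double_room' in accommodation:
        meta['placement'] = 'double_room'
    if 'density' in accommodation:
        meta['density'] = accommodation['density']
    if 'compute_nodes' in accommodation:
        meta['compute_nodes'] = accommodation['compute_nodes']
    return meta


print(accommodation_metadata({'pair': True, 'double_room': True, 'density': 2}))
```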
@@ -213,25 +243,32 @@ class Shaker(WorkloadBase.WorkloadBase):
         stream = open(fname, 'r')
         data = yaml.load(stream)
         stream.close()
-        default_placement = "double_room"
         default_density = 1
         default_compute = 1
         default_progression = "linear"
-        if "placement" in scenario:
-            data['deployment']['accommodation'][1] = scenario['placement']
+        accommodation = self.accommodation_to_dict(data['deployment']['accommodation'])
+        if 'placement' in scenario and any(k in accommodation for k in ('single_room',
+                                                                        'double_room')):
+            if 'single_room' in accommodation and scenario['placement'] == 'double_room':
+                accommodation.pop('single_room', None)
+                accommodation['double_room'] = True
+            elif 'double_room' in accommodation and scenario['placement'] == 'single_room':
+                accommodation['single_room'] = True
+                accommodation.pop('double_room', None)
         else:
-            data['deployment']['accommodation'][1] = default_placement
-        if "density" in scenario:
-            data['deployment']['accommodation'][
-                2]['density'] = scenario['density']
-        else:
-            data['deployment']['accommodation'][2]['density'] = default_density
-        if "compute" in scenario:
-            data['deployment']['accommodation'][3][
-                'compute_nodes'] = scenario['compute']
-        else:
-            data['deployment']['accommodation'][3][
-                'compute_nodes'] = default_compute
+            accommodation['double_room'] = True
+            accommodation.pop('single_room', None)
+        if 'density' in scenario and 'density' in accommodation:
+            accommodation['density'] = scenario['density']
+        elif 'density' in accommodation:
+            accommodation['density'] = default_density
+        if "compute" in scenario and 'compute_nodes' in accommodation:
+            accommodation['compute_nodes'] = scenario['compute']
+        elif 'compute_nodes' in accommodation:
+            accommodation['compute_nodes'] = default_compute
+        accommodation = self.accommodation_to_list(accommodation)
+        self.logger.debug("Using accommodation {}".format(accommodation))
+        data['deployment']['accommodation'] = accommodation
         if "progression" in scenario:
             if scenario['progression'] is None:
                 data['execution'].pop('progression', None)
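The override logic above only touches keys the base shaker scenario file already defines: a browbeat `placement`/`density`/`compute` setting is applied (or defaulted) only when the corresponding accommodation entry exists. A condensed, standalone sketch of those rules (the `apply_overrides` helper is illustrative, not part of the commit):

```python
def apply_overrides(accommodation, scenario, default_density=1, default_compute=1):
    """Apply browbeat scenario overrides to a flattened accommodation dict."""
    acc = dict(accommodation)  # work on a copy
    if 'placement' in scenario and ('single_room' in acc or 'double_room' in acc):
        # Swap the room flag only when the override differs from the file.
        if 'single_room' in acc and scenario['placement'] == 'double_room':
            acc.pop('single_room', None)
            acc['double_room'] = True
        elif 'double_room' in acc and scenario['placement'] == 'single_room':
            acc['single_room'] = True
            acc.pop('double_room', None)
    else:
        # No usable override: fall back to the double_room default.
        acc['double_room'] = True
        acc.pop('single_room', None)
    # density/compute are overridden only if the base file declares them.
    if 'density' in acc:
        acc['density'] = scenario.get('density', default_density)
    if 'compute_nodes' in acc:
        acc['compute_nodes'] = scenario.get('compute', default_compute)
    return acc


base = {'pair': True, 'single_room': True, 'density': 1, 'compute_nodes': 1}
print(apply_overrides(base, {'placement': 'double_room', 'density': 4}))
```

This mirrors why the commit drops `default_placement`: the double_room default is now expressed as flag manipulation on the dict rather than index 1 of the raw list.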
@@ -245,6 +282,7 @@ class Shaker(WorkloadBase.WorkloadBase):
             else:
                 for test in data['execution']['tests']:
                     test['time'] = default_time
+                    self.logger.debug("Execution time of each test set to {}".format(test['time']))
         with open(fname, 'w') as yaml_file:
             yaml_file.write(yaml.dump(data, default_flow_style=False))
@@ -297,7 +335,7 @@ class Shaker(WorkloadBase.WorkloadBase):
                        from_time, new_test_name, workload, index_status):
         self.logger.info("Completed Test: {}".format(scenario['name']))
         self.logger.info("Saved report to: {}.html".
-                         format(os.path.join(result_dir,test_name)))
+                         format(os.path.join(result_dir, test_name)))
         self.logger.info("saved log to: {}.log".format(os.path.join(result_dir,
                                                                     test_name)))
         self.update_pass_tests()
@@ -315,16 +353,29 @@ class Shaker(WorkloadBase.WorkloadBase):
         timeout = self.config['shaker']['join_timeout']
         self.logger.info(
             "The uuid for this shaker scenario is {}".format(shaker_uuid))
-        cmd_1 = (
+        cmd_env = (
             "source {}/bin/activate; source /home/stack/overcloudrc").format(venv)
-        cmd_2 = (
-            "shaker --server-endpoint {0}:{1} --flavor-name {2} --scenario {3}"
-            " --os-region-name {7} --agent-join-timeout {6}"
-            " --report {4}/{5}.html --output {4}/{5}.json"
-            " --book {4}/{5} --debug > {4}/{5}.log 2>&1").format(
-            server_endpoint, port_no, flavor, filename,
-            result_dir, test_name, timeout, shaker_region)
-        cmd = ("{}; {}").format(cmd_1, cmd_2)
+        if 'external' in filename and 'external_host' in self.config['shaker']:
+            external_host = self.config['shaker']['external_host']
+            cmd_shaker = (
+                'shaker --server-endpoint {0}:{1} --flavor-name {2} --scenario {3}'
+                ' --os-region-name {7} --agent-join-timeout {6}'
+                ' --report {4}/{5}.html --output {4}/{5}.json'
+                ' --book {4}/{5} --matrix "{{host: {8}}}" --debug'
+                ' > {4}/{5}.log 2>&1').format(server_endpoint,
+                                              port_no, flavor, filename, result_dir,
+                                              test_name, timeout, shaker_region,
+                                              external_host)
+        else:
+            cmd_shaker = (
+                'shaker --server-endpoint {0}:{1} --flavor-name {2} --scenario {3}'
+                ' --os-region-name {7} --agent-join-timeout {6}'
+                ' --report {4}/{5}.html --output {4}/{5}.json'
+                ' --book {4}/{5} --debug'
+                ' > {4}/{5}.log 2>&1').format(server_endpoint, port_no, flavor,
+                                              filename, result_dir, test_name,
+                                              timeout, shaker_region)
+        cmd = ("{}; {}").format(cmd_env, cmd_shaker)
         from_ts = int(time.time() * 1000)
         if 'sleep_before' in self.config['shaker']:
             time.sleep(self.config['shaker']['sleep_before'])
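The refactored run method builds the shaker CLI in two parts and appends `--matrix "{host: ...}"` only for external scenarios when `external_host` is configured. A simplified, standalone sketch of that construction (the `build_shaker_cmd` helper and all argument values are illustrative):

```python
def build_shaker_cmd(endpoint, port, flavor, scenario_file, result_dir,
                     test_name, timeout, region, external_host=None):
    """Build the shaker command line; add --matrix only for external tests."""
    # Arguments common to both branches in the diff.
    cmd = ('shaker --server-endpoint {0}:{1} --flavor-name {2}'
           ' --scenario {3} --os-region-name {4} --agent-join-timeout {5}'
           ' --report {6}/{7}.html --output {6}/{7}.json'
           ' --book {6}/{7}').format(endpoint, port, flavor, scenario_file,
                                     region, timeout, result_dir, test_name)
    # External N-S tests target a pre-provisioned host via shaker's --matrix;
    # the doubled braces in the format string produce literal { }.
    if external_host and 'external' in scenario_file:
        cmd += ' --matrix "{{host: {}}}"'.format(external_host)
    return cmd + ' --debug'


cmd = build_shaker_cmd('1.1.1.1', 5555, 'm1.small',
                       'l3_north_south_external.yaml', '/tmp/results',
                       'test-01', 600, 'regionOne', external_host='2.2.2.2')
print('--matrix "{host: 2.2.2.2}"' in cmd)  # True
```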
@@ -374,7 +425,7 @@ class Shaker(WorkloadBase.WorkloadBase):
         for interval in range(0, test_time + 9):
             es_list.append(
                 datetime.datetime.utcnow() +
-                datetime.timedelta(0,interval))
+                datetime.timedelta(0, interval))

         for run in range(self.config['browbeat']['rerun']):
             self.logger.info("Scenario: {}".format(scenario['name']))
|
@ -209,6 +209,10 @@ mapping:
|
|||||||
shaker_region:
|
shaker_region:
|
||||||
type: str
|
type: str
|
||||||
required: true
|
required: true
|
||||||
|
external_host:
|
||||||
|
type: str
|
||||||
|
required: False
|
||||||
|
pattern: ^([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9])(\.([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9]))*$
|
||||||
scenarios:
|
scenarios:
|
||||||
type: seq
|
type: seq
|
||||||
sequence:
|
sequence:
|
||||||
|
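The `pattern` added for `external_host` is an RFC 1123-style hostname regex (alphanumeric labels up to 63 characters, internal hyphens allowed, joined by dots), which also happens to accept dotted-quad IPs such as the `2.2.2.2` example used in the config hunks. A quick standalone check of the pattern:

```python
import re

# Pattern copied verbatim from the schema addition above.
PATTERN = (r'^([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9])'
           r'(\.([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]{0,61}[a-zA-Z0-9]))*$')

for candidate in ('2.2.2.2', 'host-1.example.com', '-bad.example.com'):
    # Labels may not begin or end with a hyphen, so the last one fails.
    print(candidate, bool(re.match(PATTERN, candidate)))  # True, True, False
```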
utils/shaker-external.sh (new executable file, 49 lines)
@@ -0,0 +1,49 @@
+#!/bin/bash
+# Run as root to setup a shaker-server to run external network tests with
+yum install -y epel-release
+yum install -y wget iperf iperf3 gcc gcc-c++ python-devel screen zeromq zeromq-devel
+wget ftp://ftp.netperf.org/netperf/netperf-2.7.0.tar.gz
+tar xvzf netperf-2.7.0.tar.gz
+pushd netperf-2.7.0
+./configure --enable-demo=yes
+make
+make install
+popd
+easy_install pip
+pip install pbr flent pyshaker-agent
+cat<<'EOF' >> /etc/systemd/system/iperf.service
+[Unit]
+Description=iperf Service
+After=network.target
+[Service]
+Type=simple
+ExecStart=/usr/bin/iperf -s
+[Install]
+WantedBy=multi-user.target
+EOF
+cat<<'EOF' >> /etc/systemd/system/iperf3.service
+[Unit]
+Description=iperf3 Service
+After=network.target
+[Service]
+Type=simple
+ExecStart=/usr/bin/iperf3 -s
+[Install]
+WantedBy=multi-user.target
+EOF
+cat<<'EOF' >> /etc/systemd/system/netperf.service
+[Unit]
+Description="Netperf netserver daemon"
+After=network.target
+[Service]
+ExecStart=/usr/local/bin/netserver -D
+[Install]
+WantedBy=multi-user.target
+EOF
+systemctl start iperf
+systemctl enable iperf
+systemctl start iperf3
+systemctl enable iperf3
+systemctl start netperf
+systemctl enable netperf