Initial commit for stress framework from tempest

This commit is contained in:
ghanshyam 2016-10-07 17:28:43 +09:00
parent df94348675
commit 3a3f7d7d9c
44 changed files with 1956 additions and 29 deletions

17
CONTRIBUTING.rst Normal file

@ -0,0 +1,17 @@
If you would like to contribute to the development of OpenStack, you must
follow the steps in this page:
http://docs.openstack.org/infra/manual/developers.html
If you already have a good understanding of how the system works and your
OpenStack accounts are set up, you can skip to the development workflow
section of this documentation to learn how changes to OpenStack should be
submitted for review via the Gerrit tool:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/tempest_stress

4
HACKING.rst Normal file

@ -0,0 +1,4 @@
tempest_stress Style Commandments
===============================================
Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/

28
LICENSE

@ -1,3 +1,4 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
@ -172,30 +173,3 @@
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

6
MANIFEST.in Normal file

@ -0,0 +1,6 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc

@ -1,2 +0,0 @@
# tempest_stress
Tempest Stress Tests

64
README.rst Normal file

@ -0,0 +1,64 @@
.. _stress_field_guide:
Tempest Field Guide to Stress Tests
===================================
OpenStack is a distributed, asynchronous system that is prone to race condition
bugs. These bugs are not easily found during functional testing but are
encountered by users in large deployments in ways that are hard to debug. The
stress tests try to cause these bugs to happen in a more controlled
environment.
Stress tests are designed to stress an OpenStack environment by running a high
workload against it and seeing what breaks. The stress test framework runs
several test jobs in parallel and can run any existing test in Tempest as a
stress job.
Environment
-----------
This particular framework assumes your working Nova cluster understands Nova
API 2.0. The stress tests can read the logs from the cluster. To enable this
you have to provide the hostname to call 'nova-manage' and
the private key and user name for ssh to the cluster in the
[stress] section of tempest.conf. You also need to provide the
location of the log files:
target_logfiles = "regexp to all log files to be checked for errors"
target_private_key_path = "private ssh key for controller and log file nodes"
target_ssh_user = "username for controller and log file nodes"
target_controller = "hostname or ip of controller node (for nova-manage)
log_check_interval = "time between checking logs for errors (default 60s)"
To activate logging on your console please make sure that you activate `use_stderr`
in tempest.conf or use the default `logging.conf.sample` file.
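For illustration, a hypothetical `[stress]` section of tempest.conf could look
like the following (the host, user, key path and log file pattern are
placeholder values for this example, not defaults shipped with the framework):
[stress]
target_controller = "controller.example.org"
target_ssh_user = "stack"
target_private_key_path = "/home/stack/.ssh/id_rsa"
target_logfiles = "(n-cpu|n-api).*\.log"
log_check_interval = 60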
Running default stress test set
-------------------------------
The stress test framework can automatically discover tests inside the tempest
test suite. All tests flagged with the `@stresstest` decorator will be executed.
In order to use this discovery you have to install the tempest CLI, be in the
tempest root directory and execute the following:
tempest run-stress -a -d 30
Running the sample test
-----------------------
To test installation, do the following:
tempest run-stress -t tempest/stress/etc/server-create-destroy-test.json -d 30
This sample test repeatedly creates a few VMs and then destroys them.
Additional Tools
----------------
Sometimes the tests don't finish, or there are failures. In these
cases, you may want to clean out the nova cluster. We have provided
some scripts to do this in the ``tools`` subdirectory.
You can use the following script to destroy any keypairs,
floating ips, and servers:
tempest/stress/tools/cleanup.py

1
babel.cfg Normal file

@ -0,0 +1 @@
[python: **.py]

75
doc/source/conf.py Normal file

@ -0,0 +1,75 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
#'sphinx.ext.intersphinx',
'oslosphinx'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'tempest_stress'
copyright = u'2016, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}


@ -0,0 +1,4 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst

24
doc/source/index.rst Normal file

@ -0,0 +1,24 @@
.. tempest_stress documentation master file, created by
sphinx-quickstart on Tue Jul 9 22:26:36 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to tempest_stress's documentation!
========================================================
Contents:
.. toctree::
:maxdepth: 2
readme
installation
usage
contributing
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`


@ -0,0 +1,12 @@
============
Installation
============
At the command line::
$ pip install tempest_stress
Or, if you have virtualenvwrapper installed::
$ mkvirtualenv tempest_stress
$ pip install tempest_stress

1
doc/source/readme.rst Normal file

@ -0,0 +1 @@
.. include:: ../../README.rst

7
doc/source/usage.rst Normal file

@ -0,0 +1,7 @@
========
Usage
========
To use tempest_stress in a project::
import tempest_stress

8
requirements.txt Normal file

@ -0,0 +1,8 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=1.6 # Apache-2.0
Babel>=1.3
oslo.log>=1.14.0 # Apache-2.0
tempest>=1333.0.0

54
setup.cfg Normal file

@ -0,0 +1,54 @@
[metadata]
name = tempest_stress
summary = Stress test framework for OpenStack, split out from Tempest
description-file =
README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.3
Programming Language :: Python :: 3.4
[files]
packages =
tempest_stress
[entry_points]
console_scripts =
run-tempest-stress = tempest_stress.cmd.run_stress:main
tempest.test_plugins =
tempest_stress = tempest_stress.plugin:TempestStressPlugin
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
[upload_sphinx]
upload-dir = doc/build/html
[compile_catalog]
directory = tempest_stress/locale
domain = tempest_stress
[update_catalog]
domain = tempest_stress
output_dir = tempest_stress/locale
input_file = tempest_stress/locale/tempest_stress.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = tempest_stress/locale/tempest_stress.pot

29
setup.py Normal file

@ -0,0 +1,29 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr'],
pbr=True)

@ -0,0 +1,42 @@
# Copyright 2013 Quanta Research Cambridge, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from tempest.common.utils import data_utils
from tempest.common import waiters
from tempest import config
import tempest.stress.stressaction as stressaction
CONF = config.CONF
class ServerCreateDestroyTest(stressaction.StressAction):
def setUp(self, **kwargs):
self.image = CONF.compute.image_ref
self.flavor = CONF.compute.flavor_ref
def run(self):
name = data_utils.rand_name(self.__class__.__name__ + "-instance")
self.logger.info("creating %s" % name)
server = self.manager.servers_client.create_server(
name=name, imageRef=self.image, flavorRef=self.flavor)['server']
server_id = server['id']
waiters.wait_for_server_status(self.manager.servers_client, server_id,
'ACTIVE')
self.logger.info("created %s" % server_id)
self.logger.info("deleting %s" % name)
self.manager.servers_client.delete_server(server_id)
waiters.wait_for_server_termination(self.manager.servers_client,
server_id)
self.logger.info("deleted %s" % server_id)


@ -0,0 +1,200 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import socket
import subprocess
from tempest.common.utils import data_utils
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import test_utils
import tempest.stress.stressaction as stressaction
CONF = config.CONF
class FloatingStress(stressaction.StressAction):
# from the scenario manager
def ping_ip_address(self, ip_address):
cmd = ['ping', '-c1', '-w1', ip_address]
proc = subprocess.Popen(cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
proc.communicate()
success = proc.returncode == 0
return success
def tcp_connect_scan(self, addr, port):
# like tcp
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
s.connect((addr, port))
except socket.error as exc:
self.logger.info("%s(%s): %s", self.server_id, self.floating['ip'],
str(exc))
return False
self.logger.info("%s(%s): Connected :)", self.server_id,
self.floating['ip'])
s.close()
return True
def check_port_ssh(self):
def func():
return self.tcp_connect_scan(self.floating['ip'], 22)
if not test_utils.call_until_true(func, self.check_timeout,
self.check_interval):
raise RuntimeError("Cannot connect to the ssh port.")
def check_icmp_echo(self):
self.logger.info("%s(%s): Pinging..",
self.server_id, self.floating['ip'])
def func():
return self.ping_ip_address(self.floating['ip'])
if not test_utils.call_until_true(func, self.check_timeout,
self.check_interval):
raise RuntimeError("%s(%s): Cannot ping the machine.",
self.server_id, self.floating['ip'])
self.logger.info("%s(%s): pong :)",
self.server_id, self.floating['ip'])
def _create_vm(self):
self.name = name = data_utils.rand_name(
self.__class__.__name__ + "-instance")
servers_client = self.manager.servers_client
self.logger.info("creating %s" % name)
vm_args = self.vm_extra_args.copy()
vm_args['security_groups'] = [self.sec_grp]
server = servers_client.create_server(name=name, imageRef=self.image,
flavorRef=self.flavor,
**vm_args)['server']
self.server_id = server['id']
if self.wait_after_vm_create:
waiters.wait_for_server_status(self.manager.servers_client,
self.server_id, 'ACTIVE')
def _destroy_vm(self):
self.logger.info("deleting %s" % self.server_id)
self.manager.servers_client.delete_server(self.server_id)
waiters.wait_for_server_termination(self.manager.servers_client,
self.server_id)
self.logger.info("deleted %s" % self.server_id)
def _create_sec_group(self):
sec_grp_cli = self.manager.compute_security_groups_client
s_name = data_utils.rand_name(self.__class__.__name__ + '-sec_grp')
s_description = data_utils.rand_name('desc')
self.sec_grp = sec_grp_cli.create_security_group(
name=s_name, description=s_description)['security_group']
create_rule = sec_grp_cli.create_security_group_rule
create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='tcp',
from_port=22, to_port=22)
create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='icmp',
from_port=-1, to_port=-1)
def _destroy_sec_grp(self):
sec_grp_cli = self.manager.compute_security_groups_client
sec_grp_cli.delete_security_group(self.sec_grp['id'])
def _create_floating_ip(self):
floating_cli = self.manager.compute_floating_ips_client
self.floating = (floating_cli.create_floating_ip(self.floating_pool)
['floating_ip'])
def _destroy_floating_ip(self):
cli = self.manager.compute_floating_ips_client
cli.delete_floating_ip(self.floating['id'])
cli.wait_for_resource_deletion(self.floating['id'])
self.logger.info("Deleted Floating IP %s", str(self.floating['ip']))
def setUp(self, **kwargs):
self.image = CONF.compute.image_ref
self.flavor = CONF.compute.flavor_ref
self.vm_extra_args = kwargs.get('vm_extra_args', {})
self.wait_after_vm_create = kwargs.get('wait_after_vm_create',
True)
self.new_vm = kwargs.get('new_vm', False)
self.new_sec_grp = kwargs.get('new_sec_group', False)
self.new_floating = kwargs.get('new_floating', False)
self.reboot = kwargs.get('reboot', False)
self.floating_pool = kwargs.get('floating_pool', None)
self.verify = kwargs.get('verify', ('check_port_ssh',
'check_icmp_echo'))
self.check_timeout = kwargs.get('check_timeout', 120)
self.check_interval = kwargs.get('check_interval', 1)
self.wait_for_disassociate = kwargs.get('wait_for_disassociate',
True)
# allocate floating
if not self.new_floating:
self._create_floating_ip()
# add security group
if not self.new_sec_grp:
self._create_sec_group()
# create vm
if not self.new_vm:
self._create_vm()
def wait_disassociate(self):
cli = self.manager.compute_floating_ips_client
def func():
floating = (cli.show_floating_ip(self.floating['id'])
['floating_ip'])
return floating['instance_id'] is None
if not test_utils.call_until_true(func, self.check_timeout,
self.check_interval):
raise RuntimeError("IP disassociate timeout!")
def run_core(self):
cli = self.manager.compute_floating_ips_client
cli.associate_floating_ip_to_server(self.floating['ip'],
self.server_id)
for method in self.verify:
m = getattr(self, method)
m()
cli.disassociate_floating_ip_from_server(self.floating['ip'],
self.server_id)
if self.wait_for_disassociate:
self.wait_disassociate()
def run(self):
if self.new_sec_grp:
self._create_sec_group()
if self.new_floating:
self._create_floating_ip()
if self.new_vm:
self._create_vm()
if self.reboot:
self.manager.servers_client.reboot(self.server_id, 'HARD')
waiters.wait_for_server_status(self.manager.servers_client,
self.server_id, 'ACTIVE')
self.run_core()
if self.new_vm:
self._destroy_vm()
if self.new_floating:
self._destroy_floating_ip()
if self.new_sec_grp:
self._destroy_sec_grp()
def tearDown(self):
if not self.new_vm:
self._destroy_vm()
if not self.new_floating:
self._destroy_floating_ip()
if not self.new_sec_grp:
self._destroy_sec_grp()


@ -0,0 +1,92 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from oslo_utils import importutils
from tempest import config
import tempest.stress.stressaction as stressaction
CONF = config.CONF
class SetUpClassRunTime(object):
process = 'process'
action = 'action'
application = 'application'
allowed = set((process, action, application))
@classmethod
def validate(cls, name):
if name not in cls.allowed:
raise KeyError("\'%s\' not a valid option" % name)
class UnitTest(stressaction.StressAction):
"""This is a special action for running existing unittests as stress test.
You need to pass ``test_method`` and ``class_setup_per``
using ``kwargs`` in the JSON descriptor;
``test_method`` should be the fully qualified name of a unittest,
``class_setup_per`` should be one from:
``application``: once in the stress job lifetime
``process``: once in the worker process lifetime
``action``: on each action
Not every combination works in every case.
"""
def setUp(self, **kwargs):
method = kwargs['test_method'].split('.')
self.test_method = method.pop()
self.klass = importutils.import_class('.'.join(method))
self.logger = logging.getLogger('.'.join(method))
# valid options are 'process', 'application' , 'action'
self.class_setup_per = kwargs.get('class_setup_per',
SetUpClassRunTime.process)
SetUpClassRunTime.validate(self.class_setup_per)
if self.class_setup_per == SetUpClassRunTime.application:
self.klass.setUpClass()
self.setupclass_called = False
@property
def action(self):
if self.test_method:
return self.test_method
return super(UnitTest, self).action
def run_core(self):
res = self.klass(self.test_method).run()
if res.errors:
raise RuntimeError(res.errors)
def run(self):
if self.class_setup_per != SetUpClassRunTime.application:
if (self.class_setup_per == SetUpClassRunTime.action
or self.setupclass_called is False):
self.klass.setUpClass()
self.setupclass_called = True
try:
self.run_core()
finally:
if (CONF.stress.leave_dirty_stack is False
and self.class_setup_per == SetUpClassRunTime.action):
self.klass.tearDownClass()
else:
self.run_core()
def tearDown(self):
if self.class_setup_per != SetUpClassRunTime.action:
self.klass.tearDownClass()


@ -0,0 +1,70 @@
# (c) 2013 Deutsche Telekom AG
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from tempest.common.utils import data_utils
from tempest.common import waiters
from tempest import config
import tempest.stress.stressaction as stressaction
CONF = config.CONF
class VolumeAttachDeleteTest(stressaction.StressAction):
def setUp(self, **kwargs):
self.image = CONF.compute.image_ref
self.flavor = CONF.compute.flavor_ref
def run(self):
# Step 1: create volume
name = data_utils.rand_name(self.__class__.__name__ + "-volume")
self.logger.info("creating volume: %s" % name)
volume = self.manager.volumes_client.create_volume(
display_name=name, size=CONF.volume.volume_size)['volume']
self.manager.volumes_client.wait_for_volume_status(volume['id'],
'available')
self.logger.info("created volume: %s" % volume['id'])
# Step 2: create vm instance
vm_name = data_utils.rand_name(self.__class__.__name__ + "-instance")
self.logger.info("creating vm: %s" % vm_name)
server = self.manager.servers_client.create_server(
name=vm_name, imageRef=self.image, flavorRef=self.flavor)['server']
server_id = server['id']
waiters.wait_for_server_status(self.manager.servers_client, server_id,
'ACTIVE')
self.logger.info("created vm %s" % server_id)
# Step 3: attach volume to vm
self.logger.info("attach volume (%s) to vm %s" %
(volume['id'], server_id))
self.manager.servers_client.attach_volume(server_id,
volumeId=volume['id'],
device='/dev/vdc')
self.manager.volumes_client.wait_for_volume_status(volume['id'],
'in-use')
self.logger.info("volume (%s) attached to vm %s" %
(volume['id'], server_id))
# Step 4: delete vm
self.logger.info("deleting vm: %s" % vm_name)
self.manager.servers_client.delete_server(server_id)
waiters.wait_for_server_termination(self.manager.servers_client,
server_id)
self.logger.info("deleted vm: %s" % server_id)
# Step 5: delete volume
self.logger.info("deleting volume: %s" % volume['id'])
self.manager.volumes_client.delete_volume(volume['id'])
self.manager.volumes_client.wait_for_resource_deletion(volume['id'])
self.logger.info("deleted volume: %s" % volume['id'])


@ -0,0 +1,233 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re
from tempest.common.utils import data_utils
from tempest.common.utils.linux import remote_client
from tempest.common import waiters
from tempest import config
from tempest.lib.common.utils import test_utils
import tempest.stress.stressaction as stressaction
CONF = config.CONF
class VolumeVerifyStress(stressaction.StressAction):
def _create_keypair(self):
keyname = data_utils.rand_name("key")
self.key = (self.manager.keypairs_client.create_keypair(name=keyname)
['keypair'])
def _delete_keypair(self):
self.manager.keypairs_client.delete_keypair(self.key['name'])
def _create_vm(self):
self.name = name = data_utils.rand_name(
self.__class__.__name__ + "-instance")
servers_client = self.manager.servers_client
self.logger.info("creating %s" % name)
vm_args = self.vm_extra_args.copy()
vm_args['security_groups'] = [self.sec_grp]
vm_args['key_name'] = self.key['name']
server = servers_client.create_server(name=name, imageRef=self.image,
flavorRef=self.flavor,
**vm_args)['server']
self.server_id = server['id']
waiters.wait_for_server_status(self.manager.servers_client,
self.server_id, 'ACTIVE')
def _destroy_vm(self):
self.logger.info("deleting server: %s" % self.server_id)
self.manager.servers_client.delete_server(self.server_id)
waiters.wait_for_server_termination(self.manager.servers_client,
self.server_id)
self.logger.info("deleted server: %s" % self.server_id)
def _create_sec_group(self):
sec_grp_cli = self.manager.compute_security_groups_client
s_name = data_utils.rand_name(self.__class__.__name__ + '-sec_grp')
s_description = data_utils.rand_name('desc')
self.sec_grp = sec_grp_cli.create_security_group(
name=s_name, description=s_description)['security_group']
create_rule = sec_grp_cli.create_security_group_rule
create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='tcp',
from_port=22, to_port=22)
create_rule(parent_group_id=self.sec_grp['id'], ip_protocol='icmp',
from_port=-1, to_port=-1)
def _destroy_sec_grp(self):
sec_grp_cli = self.manager.compute_security_groups_client
sec_grp_cli.delete_security_group(self.sec_grp['id'])
def _create_floating_ip(self):
floating_cli = self.manager.compute_floating_ips_client
self.floating = (floating_cli.create_floating_ip(self.floating_pool)
['floating_ip'])
def _destroy_floating_ip(self):
cli = self.manager.compute_floating_ips_client
cli.delete_floating_ip(self.floating['id'])
cli.wait_for_resource_deletion(self.floating['id'])
self.logger.info("Deleted Floating IP %s", str(self.floating['ip']))
def _create_volume(self):
name = data_utils.rand_name(self.__class__.__name__ + "-volume")
self.logger.info("creating volume: %s" % name)
volumes_client = self.manager.volumes_client
self.volume = volumes_client.create_volume(
display_name=name, size=CONF.volume.volume_size)['volume']
volumes_client.wait_for_volume_status(self.volume['id'],
'available')
self.logger.info("created volume: %s" % self.volume['id'])
def _delete_volume(self):
self.logger.info("deleting volume: %s" % self.volume['id'])
volumes_client = self.manager.volumes_client
volumes_client.delete_volume(self.volume['id'])
volumes_client.wait_for_resource_deletion(self.volume['id'])
self.logger.info("deleted volume: %s" % self.volume['id'])
def _wait_disassociate(self):
cli = self.manager.compute_floating_ips_client
def func():
floating = (cli.show_floating_ip(self.floating['id'])
['floating_ip'])
return floating['instance_id'] is None
if not test_utils.call_until_true(func, CONF.compute.build_timeout,
CONF.compute.build_interval):
raise RuntimeError("IP disassociate timeout!")
def new_server_ops(self):
self._create_vm()
cli = self.manager.compute_floating_ips_client
cli.associate_floating_ip_to_server(self.floating['ip'],
self.server_id)
if self.ssh_test_before_attach and self.enable_ssh_verify:
self.logger.info("Scanning for block devices via ssh on %s"
% self.server_id)
self.part_wait(self.detach_match_count)
def setUp(self, **kwargs):
"""Note able configuration combinations:
Closest options to the test_stamp_pattern:
new_server = True
new_volume = True
enable_ssh_verify = True
ssh_test_before_attach = False
Just attaching:
new_server = False
new_volume = False
enable_ssh_verify = True
ssh_test_before_attach = True
Mostly API load by repeated attachment:
new_server = False
new_volume = False
enable_ssh_verify = False
ssh_test_before_attach = False
Minimal Nova load, but cinder load not decreased:
new_server = False
new_volume = True
enable_ssh_verify = True
ssh_test_before_attach = True
"""
self.image = CONF.compute.image_ref
self.flavor = CONF.compute.flavor_ref
self.vm_extra_args = kwargs.get('vm_extra_args', {})
self.floating_pool = kwargs.get('floating_pool', None)
self.new_volume = kwargs.get('new_volume', True)
self.new_server = kwargs.get('new_server', False)
self.enable_ssh_verify = kwargs.get('enable_ssh_verify', True)
self.ssh_test_before_attach = kwargs.get('ssh_test_before_attach',
False)
self.part_line_re = re.compile(kwargs.get('part_line_re', '.*vd.*'))
self.detach_match_count = kwargs.get('detach_match_count', 1)
self.attach_match_count = kwargs.get('attach_match_count', 2)
self.part_name = kwargs.get('part_name', '/dev/vdc')
self._create_floating_ip()
self._create_sec_group()
self._create_keypair()
private_key = self.key['private_key']
username = CONF.validation.image_ssh_user
self.remote_client = remote_client.RemoteClient(self.floating['ip'],
username,
pkey=private_key)
if not self.new_volume:
self._create_volume()
if not self.new_server:
self.new_server_ops()
# now we just test that the number of partitions has increased or decreased
def part_wait(self, num_match):
def _part_state():
self.partitions = self.remote_client.get_partitions().split('\n')
matching = 0
for part_line in self.partitions[1:]:
if self.part_line_re.match(part_line):
matching += 1
return matching == num_match
if test_utils.call_until_true(_part_state,
CONF.compute.build_timeout,
CONF.compute.build_interval):
return
else:
raise RuntimeError("Unexpected partitions: %s",
str(self.partitions))
def run(self):
if self.new_server:
self.new_server_ops()
if self.new_volume:
self._create_volume()
servers_client = self.manager.servers_client
self.logger.info("attach volume (%s) to vm %s" %
(self.volume['id'], self.server_id))
servers_client.attach_volume(self.server_id,
volumeId=self.volume['id'],
device=self.part_name)
self.manager.volumes_client.wait_for_volume_status(self.volume['id'],
'in-use')
if self.enable_ssh_verify:
self.logger.info("Scanning for new block device on %s"
% self.server_id)
self.part_wait(self.attach_match_count)
servers_client.detach_volume(self.server_id,
self.volume['id'])
self.manager.volumes_client.wait_for_volume_status(self.volume['id'],
'available')
if self.enable_ssh_verify:
self.logger.info("Scanning for block device disappearance on %s"
% self.server_id)
self.part_wait(self.detach_match_count)
if self.new_volume:
self._delete_volume()
if self.new_server:
self._destroy_vm()
def tearDown(self):
cli = self.manager.compute_floating_ips_client
cli.disassociate_floating_ip_from_server(self.floating['ip'],
self.server_id)
self._wait_disassociate()
if not self.new_server:
self._destroy_vm()
self._delete_keypair()
self._destroy_floating_ip()
self._destroy_sec_grp()
if not self.new_volume:
self._delete_volume()


@ -0,0 +1,34 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from tempest.common.utils import data_utils
from tempest import config
import tempest.stress.stressaction as stressaction
CONF = config.CONF
class VolumeCreateDeleteTest(stressaction.StressAction):
def run(self):
name = data_utils.rand_name("volume")
self.logger.info("creating %s" % name)
volumes_client = self.manager.volumes_client
volume = volumes_client.create_volume(
display_name=name, size=CONF.volume.volume_size)['volume']
vol_id = volume['id']
volumes_client.wait_for_volume_status(vol_id, 'available')
self.logger.info("created %s" % volume['id'])
self.logger.info("deleting %s" % name)
volumes_client.delete_volume(vol_id)
volumes_client.wait_for_resource_deletion(vol_id)
self.logger.info("deleted %s" % vol_id)

118
tempest_stress/cleanup.py Normal file

@ -0,0 +1,118 @@
#!/usr/bin/env python
# Copyright 2013 Quanta Research Cambridge, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging
from tempest.common import credentials_factory as credentials
from tempest.common import waiters
LOG = logging.getLogger(__name__)
def cleanup():
admin_manager = credentials.AdminManager()
body = admin_manager.servers_client.list_servers(all_tenants=True)
LOG.info("Cleanup::remove %s servers" % len(body['servers']))
for s in body['servers']:
try:
admin_manager.servers_client.delete_server(s['id'])
except Exception:
pass
for s in body['servers']:
try:
waiters.wait_for_server_termination(admin_manager.servers_client,
s['id'])
except Exception:
pass
keypairs = admin_manager.keypairs_client.list_keypairs()['keypairs']
LOG.info("Cleanup::remove %s keypairs" % len(keypairs))
for k in keypairs:
try:
admin_manager.keypairs_client.delete_keypair(k['name'])
except Exception:
pass
secgrp_client = admin_manager.compute_security_groups_client
secgrp = (secgrp_client.list_security_groups(all_tenants=True)
['security_groups'])
secgrp_del = [grp for grp in secgrp if grp['name'] != 'default']
LOG.info("Cleanup::remove %s Security Group" % len(secgrp_del))
for g in secgrp_del:
try:
secgrp_client.delete_security_group(g['id'])
except Exception:
pass
admin_floating_ips_client = admin_manager.compute_floating_ips_client
floating_ips = (admin_floating_ips_client.list_floating_ips()
['floating_ips'])
LOG.info("Cleanup::remove %s floating ips" % len(floating_ips))
for f in floating_ips:
try:
admin_floating_ips_client.delete_floating_ip(f['id'])
except Exception:
pass
users = admin_manager.users_client.list_users()['users']
LOG.info("Cleanup::remove %s users" % len(users))
for user in users:
if user['name'].startswith("stress_user"):
admin_manager.users_client.delete_user(user['id'])
tenants = admin_manager.tenants_client.list_tenants()['tenants']
LOG.info("Cleanup::remove %s tenants" % len(tenants))
for tenant in tenants:
if tenant['name'].startswith("stress_tenant"):
admin_manager.tenants_client.delete_tenant(tenant['id'])
# We have to delete snapshots first or
# volume deletion may block
snaps = admin_manager.snapshots_client.list_snapshots(
all_tenants=True)['snapshots']
LOG.info("Cleanup::remove %s snapshots" % len(snaps))
for v in snaps:
try:
waiters.wait_for_snapshot_status(
admin_manager.snapshots_client, v['id'], 'available')
admin_manager.snapshots_client.delete_snapshot(v['id'])
except Exception:
pass
for v in snaps:
try:
admin_manager.snapshots_client.wait_for_resource_deletion(v['id'])
except Exception:
pass
vols = admin_manager.volumes_client.list_volumes(
params={"all_tenants": True})['volumes']
LOG.info("Cleanup::remove %s volumes" % len(vols))
for v in vols:
try:
waiters.wait_for_volume_status(
admin_manager.volumes_client, v['id'], 'available')
admin_manager.volumes_client.delete_volume(v['id'])
except Exception:
pass
for v in vols:
try:
admin_manager.volumes_client.wait_for_resource_deletion(v['id'])
except Exception:
pass

130
tempest_stress/cmd/run_stress.py Executable file

@ -0,0 +1,130 @@
#!/usr/bin/env python
# Copyright 2013 Quanta Research Cambridge, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import inspect
import json
import sys
try:
from unittest import loader
except ImportError:
# unittest in python 2.6 does not contain loader, so use unittest2
from unittest2 import loader
from oslo_log import log as logging
from testtools import testsuite
from tempest.stress import driver
LOG = logging.getLogger(__name__)
def discover_stress_tests(path="./", filter_attr=None, call_inherited=False):
"""Discovers all tempest tests and create action out of them
"""
LOG.info("Start test discovery")
tests = []
testloader = loader.TestLoader()
list = testloader.discover(path)
for func in (testsuite.iterate_tests(list)):
attrs = []
try:
method_name = getattr(func, '_testMethodName')
full_name = "%s.%s.%s" % (func.__module__,
func.__class__.__name__,
method_name)
test_func = getattr(func, method_name)
# NOTE(mkoderer): this contains a list of all type attributes
attrs = getattr(test_func, "__testtools_attrs")
except Exception:
next
if 'stress' in attrs:
if filter_attr is not None and filter_attr not in attrs:
continue
class_setup_per = getattr(test_func, "st_class_setup_per")
action = {'action':
"tempest.stress.actions.unit_test.UnitTest",
'kwargs': {"test_method": full_name,
"class_setup_per": class_setup_per
}
}
if (not call_inherited and
getattr(test_func, "st_allow_inheritance") is not True):
class_structure = inspect.getmro(test_func.im_class)
if test_func.__name__ not in class_structure[0].__dict__:
continue
tests.append(action)
return tests
parser = argparse.ArgumentParser(description='Run stress tests')
parser.add_argument('-d', '--duration', default=300, type=int,
help="Duration of test in secs")
parser.add_argument('-s', '--serial', action='store_true',
help="Trigger running tests serially")
parser.add_argument('-S', '--stop', action='store_true',
default=False, help="Stop on first error")
parser.add_argument('-n', '--number', type=int,
help="How often an action is executed for each process")
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('-a', '--all', action='store_true',
help="Execute all stress tests")
parser.add_argument('-T', '--type',
help="Filters tests of a certain type (e.g. gate)")
parser.add_argument('-i', '--call-inherited', action='store_true',
default=False,
help="Call also inherited function with stress attribute")
group.add_argument('-t', "--tests", nargs='?',
help="Name of the file with test description")
def main():
ns = parser.parse_args()
result = 0
if not ns.all:
tests = json.load(open(ns.tests, 'r'))
else:
tests = discover_stress_tests(filter_attr=ns.type,
call_inherited=ns.call_inherited)
if ns.serial:
# Duration is total time
duration = ns.duration / len(tests)
for test in tests:
step_result = driver.stress_openstack([test],
duration,
ns.number,
ns.stop)
# NOTE(mkoderer): we just save the last result code
if (step_result != 0):
result = step_result
if ns.stop:
return result
else:
result = driver.stress_openstack(tests,
ns.duration,
ns.number,
ns.stop)
return result
if __name__ == "__main__":
try:
sys.exit(main())
except Exception:
LOG.exception("Failure in the stress test framework")
sys.exit(1)

55
tempest_stress/config.py Normal file

@ -0,0 +1,55 @@
# Copyright 2015 Intel Corp
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from tempest import config # noqa
stress_group = cfg.OptGroup(name='stress', title='Stress Test Options')
StressGroup = [
cfg.StrOpt('nova_logdir',
help='Directory containing log files on the compute nodes'),
cfg.IntOpt('max_instances',
default=16,
help='Maximum number of instances to create during test.'),
cfg.StrOpt('controller',
help='Controller host.'),
# new stress options
cfg.StrOpt('target_controller',
help='Controller host.'),
cfg.StrOpt('target_ssh_user',
help='ssh user.'),
cfg.StrOpt('target_private_key_path',
help='Path to private key.'),
cfg.StrOpt('target_logfiles',
help='regexp for list of log files.'),
cfg.IntOpt('log_check_interval',
default=60,
help='time (in seconds) between log file error checks.'),
cfg.IntOpt('default_thread_number_per_action',
default=4,
help='Default number of threads to create per stress action.'),
cfg.BoolOpt('leave_dirty_stack',
default=False,
help='Prevent the cleaning (tearDownClass()) between'
' each stress test run if an exception occurs'
' during this run.'),
cfg.BoolOpt('full_clean_stack',
default=False,
help='Allows a full cleaning process after a stress test.'
' Caution: this cleanup will remove every object of'
' every project.')
]

264
tempest_stress/driver.py Normal file

@ -0,0 +1,264 @@
# Copyright 2013 Quanta Research Cambridge, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import multiprocessing
import os
import signal
import time
from oslo_log import log as logging
from oslo_utils import importutils
import six
from tempest import clients
from tempest.common import cred_client
from tempest.common import credentials_factory as credentials
from tempest.common.utils import data_utils
from tempest import config
from tempest import exceptions
from tempest.lib.common import ssh
from tempest.stress import cleanup
CONF = config.CONF
LOG = logging.getLogger(__name__)
processes = []
def do_ssh(command, host, ssh_user, ssh_key=None):
ssh_client = ssh.Client(host, ssh_user, key_filename=ssh_key)
try:
return ssh_client.exec_command(command)
except exceptions.SSHExecCommandFailed:
LOG.error('do_ssh raised an exception. command:%s, host:%s.'
% (command, host))
return None
def _get_compute_nodes(controller, ssh_user, ssh_key=None):
"""Returns a list of active compute nodes.
List is generated by running nova-manage on the controller.
"""
nodes = []
cmd = "nova-manage service list | grep ^nova-compute"
output = do_ssh(cmd, controller, ssh_user, ssh_key)
if not output:
return nodes
# For example: nova-compute xg11eth0 nova enabled :-) 2011-10-31 18:57:46
# This is fragile but there is, at present, no other way to get this info.
for line in output.split('\n'):
words = line.split()
if len(words) > 0 and words[4] == ":-)":
nodes.append(words[1])
return nodes
def _has_error_in_logs(logfiles, nodes, ssh_user, ssh_key=None,
stop_on_error=False):
"""Detect errors in nova log files on the controller and compute nodes."""
grep = 'egrep "ERROR|TRACE" %s' % logfiles
ret = False
for node in nodes:
errors = do_ssh(grep, node, ssh_user, ssh_key)
if len(errors) > 0:
LOG.error('%s: %s' % (node, errors))
ret = True
if stop_on_error:
break
return ret
def sigchld_handler(signalnum, frame):
"""Signal handler (only active if stop_on_error is True)."""
for process in processes:
if (not process['process'].is_alive() and
process['process'].exitcode != 0):
signal.signal(signalnum, signal.SIG_DFL)
terminate_all_processes()
break
def terminate_all_processes(check_interval=20):
"""Goes through the process list and terminates all child processes."""
LOG.info("Stopping all processes.")
for process in processes:
if process['process'].is_alive():
try:
process['process'].terminate()
except Exception:
pass
time.sleep(check_interval)
for process in processes:
if process['process'].is_alive():
try:
pid = process['process'].pid
LOG.warning("Process %d hangs. Send SIGKILL." % pid)
os.kill(pid, signal.SIGKILL)
except Exception:
pass
process['process'].join()
def stress_openstack(tests, duration, max_runs=None, stop_on_error=False):
"""Workload driver. Executes an action function against a nova-cluster."""
admin_manager = credentials.AdminManager()
ssh_user = CONF.stress.target_ssh_user
ssh_key = CONF.stress.target_private_key_path
logfiles = CONF.stress.target_logfiles
log_check_interval = int(CONF.stress.log_check_interval)
default_thread_num = int(CONF.stress.default_thread_number_per_action)
if logfiles:
controller = CONF.stress.target_controller
computes = _get_compute_nodes(controller, ssh_user, ssh_key)
for node in computes:
do_ssh("rm -f %s" % logfiles, node, ssh_user, ssh_key)
skip = False
for test in tests:
for service in test.get('required_services', []):
if not CONF.service_available.get(service):
skip = True
break
if skip:
break
# TODO(andreaf) This has to be reworked to use the credential
# provider interface. For now only tests marked as 'use_admin' will
# work.
if test.get('use_admin', False):
manager = admin_manager
else:
raise NotImplementedError('Non admin tests are not supported')
for p_number in range(test.get('threads', default_thread_num)):
if test.get('use_isolated_tenants', False):
username = data_utils.rand_name("stress_user")
tenant_name = data_utils.rand_name("stress_tenant")
password = "pass"
if CONF.identity.auth_version == 'v2':
identity_client = admin_manager.identity_client
projects_client = admin_manager.tenants_client
roles_client = admin_manager.roles_client
users_client = admin_manager.users_client
domains_client = None
else:
identity_client = admin_manager.identity_v3_client
projects_client = admin_manager.projects_client
roles_client = admin_manager.roles_v3_client
users_client = admin_manager.users_v3_client
domains_client = admin_manager.domains_client
domain = (identity_client.auth_provider.credentials.
get('project_domain_name', 'Default'))
credentials_client = cred_client.get_creds_client(
identity_client, projects_client, users_client,
roles_client, domains_client, project_domain_name=domain)
project = credentials_client.create_project(
name=tenant_name, description=tenant_name)
user = credentials_client.create_user(username, password,
project, "email")
# Add roles specified in config file
for conf_role in CONF.auth.tempest_roles:
credentials_client.assign_user_role(user, project,
conf_role)
creds = credentials_client.get_credentials(user, project,
password)
manager = clients.Manager(credentials=creds)
test_obj = importutils.import_class(test['action'])
test_run = test_obj(manager, max_runs, stop_on_error)
kwargs = test.get('kwargs', {})
test_run.setUp(**dict(six.iteritems(kwargs)))
LOG.debug("calling Target Object %s" %
test_run.__class__.__name__)
mp_manager = multiprocessing.Manager()
shared_statistic = mp_manager.dict()
shared_statistic['runs'] = 0
shared_statistic['fails'] = 0
p = multiprocessing.Process(target=test_run.execute,
args=(shared_statistic,))
process = {'process': p,
'p_number': p_number,
'action': test_run.action,
'statistic': shared_statistic}
processes.append(process)
p.start()
if stop_on_error:
# NOTE(mkoderer): only the parent should register the handler
signal.signal(signal.SIGCHLD, sigchld_handler)
end_time = time.time() + duration
had_errors = False
try:
while True:
if max_runs is None:
remaining = end_time - time.time()
if remaining <= 0:
break
else:
remaining = log_check_interval
all_proc_term = True
for process in processes:
if process['process'].is_alive():
all_proc_term = False
break
if all_proc_term:
break
time.sleep(min(remaining, log_check_interval))
if stop_on_error:
if any([True for proc in processes
if proc['statistic']['fails'] > 0]):
break
if not logfiles:
continue
if _has_error_in_logs(logfiles, computes, ssh_user, ssh_key,
stop_on_error):
had_errors = True
break
except KeyboardInterrupt:
LOG.warning("Interrupted, going to print statistics and exit ...")
if stop_on_error:
signal.signal(signal.SIGCHLD, signal.SIG_DFL)
terminate_all_processes()
sum_fails = 0
sum_runs = 0
LOG.info("Statistics (per process):")
for process in processes:
if process['statistic']['fails'] > 0:
had_errors = True
sum_runs += process['statistic']['runs']
sum_fails += process['statistic']['fails']
print("Process %d (%s): Run %d actions (%d failed)" % (
process['p_number'],
process['action'],
process['statistic']['runs'],
process['statistic']['fails']))
print("Summary:")
print("Run %d actions (%d failed)" % (sum_runs, sum_fails))
if not had_errors and CONF.stress.full_clean_stack:
LOG.info("cleaning up")
cleanup.cleanup()
if had_errors:
return 1
else:
return 0


@ -0,0 +1,8 @@
[{"action": "tempest.stress.actions.unit_test.UnitTest",
"threads": 8,
"use_admin": true,
"use_isolated_tenants": true,
"kwargs": {"test_method": "tempest.cli.simple_read_only.test_glance.SimpleReadOnlyGlanceClientTest.test_glance_fake_action",
"class_setup_per": "process"}
}
]


@ -0,0 +1,7 @@
[{"action": "tempest.stress.actions.server_create_destroy.ServerCreateDestroyTest",
"threads": 8,
"use_admin": true,
"use_isolated_tenants": true,
"kwargs": {}
}
]


@ -0,0 +1,16 @@
[{"action": "tempest.stress.actions.ssh_floating.FloatingStress",
"threads": 8,
"use_admin": true,
"use_isolated_tenants": true,
"kwargs": {"vm_extra_args": {},
"new_vm": true,
"new_sec_group": true,
"new_floating": true,
"verify": ["check_icmp_echo", "check_port_ssh"],
"check_timeout": 120,
"check_interval": 1,
"wait_after_vm_create": true,
"wait_for_disassociate": true,
"reboot": false}
}
]


@ -0,0 +1,28 @@
[{"action": "tempest.stress.actions.server_create_destroy.ServerCreateDestroyTest",
"threads": 8,
"use_admin": true,
"use_isolated_tenants": true,
"kwargs": {}
},
{"action": "tempest.stress.actions.volume_create_delete.VolumeCreateDeleteTest",
"threads": 4,
"use_admin": true,
"use_isolated_tenants": true,
"kwargs": {}
},
{"action": "tempest.stress.actions.volume_attach_delete.VolumeAttachDeleteTest",
"threads": 2,
"use_admin": true,
"use_isolated_tenants": true,
"kwargs": {}
},
{"action": "tempest.stress.actions.unit_test.UnitTest",
"threads": 4,
"use_admin": true,
"use_isolated_tenants": true,
"required_services": ["neutron"],
"kwargs": {"test_method": "tempest.scenario.test_network_advanced_server_ops.TestNetworkAdvancedServerOps.test_server_connectivity_stop_start",
"class_setup_per": "process"}
}
]


@ -0,0 +1,7 @@
[{"action": "tempest.stress.actions.volume_attach_delete.VolumeAttachDeleteTest",
"threads": 4,
"use_admin": true,
"use_isolated_tenants": true,
"kwargs": {}
}
]


@ -0,0 +1,11 @@
[{"action": "tempest.stress.actions.volume_attach_verify.VolumeVerifyStress",
"threads": 1,
"use_admin": true,
"use_isolated_tenants": true,
"kwargs": {"vm_extra_args": {},
"new_volume": true,
"new_server": false,
"ssh_test_before_attach": false,
"enable_ssh_verify": true}
}
]


@ -0,0 +1,7 @@
[{"action": "tempest.stress.actions.volume_create_delete.VolumeCreateDeleteTest",
"threads": 4,
"use_admin": true,
"use_isolated_tenants": true,
"kwargs": {}
}
]

38
tempest_stress/plugin.py Normal file

@ -0,0 +1,38 @@
# Copyright 2015 Intel
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from tempest import config
from tempest.test_discover import plugins
from tempest_stress import config as config_opts
class TempestStressPlugin(plugins.TempestPlugin):
def load_tests(self):
return {}
def register_opts(self, conf):
config.register_opt_group(conf,
config_opts.stress_group,
config_opts.StressGroup)
def get_opt_lists(self):
return [
(config_opts.stress_group.name,
config_opts.StressGroup)
]

@ -0,0 +1,54 @@
# Copyright 2013 Deutsche Telekom AG
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import shlex
import subprocess
from oslo_log import log as logging
from tempest.lib import exceptions
from tempest.tests import base
LOG = logging.getLogger(__name__)
class StressFrameworkTest(base.TestCase):
"""Basic test for the stress test framework."""
def _cmd(self, cmd, param):
"""Executes specified command."""
cmd = ' '.join([cmd, param])
LOG.info("running: '%s'" % cmd)
cmd_str = cmd
cmd = shlex.split(cmd)
result = ''
result_err = ''
try:
stdout = subprocess.PIPE
stderr = subprocess.PIPE
proc = subprocess.Popen(
cmd, stdout=stdout, stderr=stderr)
result, result_err = proc.communicate()
if proc.returncode != 0:
LOG.debug('error of %s:\n%s' % (cmd_str, result_err))
raise exceptions.CommandFailed(proc.returncode,
cmd,
result)
finally:
LOG.debug('output of %s:\n%s' % (cmd_str, result))
return proc.returncode
def test_help_function(self):
result = self._cmd("python", "-m tempest.cmd.run_stress -h")
self.assertEqual(0, result)


@ -0,0 +1,63 @@
# Copyright 2013 Deutsche Telekom AG
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import tempest.stress.stressaction as stressaction
import tempest.test
class FakeStressAction(stressaction.StressAction):
def __init__(self, manager, max_runs=None, stop_on_error=False):
super(self.__class__, self).__init__(manager, max_runs, stop_on_error)
self._run_called = False
def run(self):
self._run_called = True
@property
def run_called(self):
return self._run_called
class FakeStressActionFailing(stressaction.StressAction):
def run(self):
raise Exception('FakeStressActionFailing raise exception')
class TestStressAction(tempest.test.BaseTestCase):
def _bulid_stats_dict(self, runs=0, fails=0):
return {'runs': runs, 'fails': fails}
def testStressTestRun(self):
stressAction = FakeStressAction(manager=None, max_runs=1)
stats = self._bulid_stats_dict()
stressAction.execute(stats)
self.assertTrue(stressAction.run_called)
self.assertEqual(stats['runs'], 1)
self.assertEqual(stats['fails'], 0)
def testStressMaxTestRuns(self):
stressAction = FakeStressAction(manager=None, max_runs=500)
stats = self._bulid_stats_dict(runs=499)
stressAction.execute(stats)
self.assertTrue(stressAction.run_called)
self.assertEqual(stats['runs'], 500)
self.assertEqual(stats['fails'], 0)
def testStressTestRunWithException(self):
stressAction = FakeStressActionFailing(manager=None, max_runs=1)
stats = self._bulid_stats_dict()
stressAction.execute(stats)
self.assertEqual(stats['runs'], 1)
self.assertEqual(stats['fails'], 1)


@ -0,0 +1,96 @@
# (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import signal
import sys
import six
from oslo_log import log as logging
@six.add_metaclass(abc.ABCMeta)
class StressAction(object):
def __init__(self, manager, max_runs=None, stop_on_error=False):
full_cname = self.__module__ + "." + self.__class__.__name__
self.logger = logging.getLogger(full_cname)
self.manager = manager
self.max_runs = max_runs
self.stop_on_error = stop_on_error
def _shutdown_handler(self, signal, frame):
try:
self.tearDown()
except Exception:
self.logger.exception("Error while tearDown")
sys.exit(0)
@property
def action(self):
"""This methods returns the action.
Overload this if you create a stress test wrapper.
"""
return self.__class__.__name__
def setUp(self, **kwargs):
"""Initialize test structures/resources
This method is called before "run" method to help the test
initialize any structures. kwargs contains arguments passed
in from the configuration json file.
setUp doesn't count against the time duration.
"""
self.logger.debug("setUp")
def tearDown(self):
"""Cleanup test structures/resources
This method is called to do any cleanup after the test is complete.
"""
self.logger.debug("tearDown")
def execute(self, shared_statistic):
"""This is the main execution entry point called by the driver.
We register a signal handler to allow us to tearDown gracefully,
and then exit. We also keep track of how many runs we do.
"""
signal.signal(signal.SIGHUP, self._shutdown_handler)
signal.signal(signal.SIGTERM, self._shutdown_handler)
while self.max_runs is None or (shared_statistic['runs'] <
self.max_runs):
self.logger.debug("Trigger new run (run %d)" %
shared_statistic['runs'])
try:
self.run()
except Exception:
shared_statistic['fails'] += 1
self.logger.exception("Failure in run")
finally:
shared_statistic['runs'] += 1
if self.stop_on_error and (shared_statistic['fails'] > 1):
self.logger.warning("Stop process due to"
"\"stop-on-error\" argument")
self.tearDown()
sys.exit(1)
@abc.abstractmethod
def run(self):
"""This method is where the stress test code runs."""
return
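
As a usage illustration only (not part of this commit), a new workload can be
added by subclassing StressAction and implementing run(); setUp() receives the
"kwargs" object from the JSON descriptor. The sketch below is hypothetical: the
class name and the name_prefix option are invented for the example, while the
keypairs_client calls mirror those used in the volume_attach_verify action above.

# Hypothetical example of a custom stress action (not part of this commit).
from tempest.common.utils import data_utils

import tempest.stress.stressaction as stressaction


class KeypairCreateDeleteTest(stressaction.StressAction):
    """Create and immediately delete a keypair on every run."""

    def setUp(self, **kwargs):
        # 'name_prefix' is an invented option for this sketch; it would come
        # from the "kwargs" object of a JSON descriptor.
        self.name_prefix = kwargs.get('name_prefix', 'stress-key')

    def run(self):
        name = data_utils.rand_name(self.name_prefix)
        self.logger.info("creating keypair %s" % name)
        client = self.manager.keypairs_client
        client.create_keypair(name=name)
        client.delete_keypair(name)
        self.logger.info("deleted keypair %s" % name)

A JSON descriptor would then reference the dotted path of such a class in its
"action" field, the same way the sample descriptors in this commit reference
the shipped actions.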

19
tempest_stress/tools/cleanup.py Executable file

@ -0,0 +1,19 @@
#!/usr/bin/env python
# Copyright 2013 Quanta Research Cambridge, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from tempest.stress import cleanup
cleanup.cleanup()

14
test-requirements.txt Normal file

@ -0,0 +1,14 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking<0.12,>=0.11.0 # Apache-2.0
coverage>=3.6 # Apache-2.0
python-subunit>=0.0.18 # Apache-2.0/BSD
sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
oslotest>=1.10.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=1.4.0 # MIT

43
tox.ini Normal file

@ -0,0 +1,43 @@
[tox]
minversion = 2.0
envlist = py34,py27,pypy,pep8
skipsdist = True
[testenv]
usedevelop = True
setenv =
VIRTUAL_ENV={envdir}
PYTHONWARNINGS=default::DeprecationWarning
deps = -r{toxinidir}/test-requirements.txt
commands = python setup.py test --slowest --testr-args='{posargs}'
[testenv:stress]
envdir = .tox/tempest_stress
sitepackages = {[tempestenv]sitepackages}
setenv = {[tempestenv]setenv}
deps = {[tempestenv]deps}
commands =
run-tempest-stress {posargs}
[testenv:pep8]
commands = flake8 {posargs}
[testenv:venv]
commands = {posargs}
[testenv:cover]
commands = python setup.py test --coverage --testr-args='{posargs}'
[testenv:docs]
commands = python setup.py build_sphinx
[testenv:debug]
commands = oslo_debug_helper {posargs}
[flake8]
# E123, E125 skipped as they are invalid PEP-8.
show-source = True
ignore = E123,E125
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build