Restructure Rally docs

Modify Rally docs (for readthedocs), sticking to the following principles:
* Make the docs structure as simple as possible:
    1. Overview
    2. Install Rally
    3. Rally step-by-step
    4. User stories
    5. Plugins
    6. Contribute to Rally
    7. Rally OS Gates
    8. Request a New Feature
    9. Project Info
* Keep in mind what questions different target groups usually have about Rally.
  The new structure relates to these groups as follows:
    1 -> Managers
    2, 3, 4 -> QA
    5, 6, 7, 8, 9 -> QA & Developers
* Make each docs page to be easy to get through;
* Prefer pictures over text;
* Use hyperlinks to easily navigate from page to page;
* Fix incorrect English & typos.

Also add a sample for SLA plugins.

Change-Id: I720d87be851c273689a21aaba87fc67eacf0f161
This commit is contained in:
Mikhail Dubov 2014-09-09 10:28:04 +04:00
parent f882c096e2
commit 1ceb1555f3
39 changed files with 1911 additions and 832 deletions


@ -2,10 +2,8 @@
Feature requests
================
To request a new feature you should create a document similar to other feature
requests. And contribute it to this directory using next instruction_.
To request a new feature, you should create a document similar to other feature requests, and contribute it to this directory following the instruction_ below.
If you don't have time to contribute via gerrit,
please contact Boris Pavlovic (boris@pavlovic.me)
If you don't have time to contribute your feature request via gerrit, please contact Boris Pavlovic (boris@pavlovic.me)
.. _instruction: https://wiki.openstack.org/wiki/Rally/Develop#How_to_contribute
.. _instruction: http://rally.readthedocs.org/en/latest/contribute.html#how-to-contribute


@ -1,37 +0,0 @@
..
Copyright 2014 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _cmds:
Command Line Interface
======================
Represents command line operations.
The :mod:`rally.cmd.main` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: rally.cmd.main
:members:
:undoc-members:
:show-inheritance:
The :mod:`rally.cmd.manage` Module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: rally.cmd.manage
:members:
:undoc-members:
:show-inheritance:


@ -75,8 +75,8 @@ Note that inside each scenario configuration, the benchmark scenario is actually
.. _ScenariosDevelopment:
Developer's view
^^^^^^^^^^^^^^^^^
Developer's view
^^^^^^^^^^^^^^^^
From the developer's perspective, a benchmark scenario is a method marked by a **@scenario** decorator and placed in a class that inherits from the base `Scenario <https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/base.py#L40>`_ class and located in some subpackage of `rally.benchmark.scenarios <https://github.com/stackforge/rally/tree/master/rally/benchmark/scenarios>`_. There may be arbitrary many benchmark scenarios in a scenario class; each of them should be referenced to (in the task configuration file) as *ScenarioClassName.method_name*.
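Schematically, the pattern looks roughly like this (a minimal sketch with simplified stand-in classes and names, not actual Rally code):

```python
import time


class Scenario:
    """Simplified stand-in for rally.benchmark.scenarios.base.Scenario."""

    @staticmethod
    def scenario(func):
        # Stand-in for Rally's @scenario decorator: marks the method so
        # that the benchmark engine can discover it as a scenario.
        func.is_scenario = True
        return func


class DummyScenarios(Scenario):
    """In real Rally, this class would live in a subpackage of
    rally.benchmark.scenarios."""

    @Scenario.scenario
    def sleep_scenario(self, seconds=0.0):
        # The "load" of this illustrative scenario is just sleeping.
        time.sleep(seconds)
        return "done"
```

In a task configuration file, such a scenario would then be referenced as *DummyScenarios.sleep_scenario*.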
@ -162,8 +162,8 @@ Also, all scenario runners can be provided (again, through the **"runner"** sect
.. _RunnersDevelopment:
Developer's view
^^^^^^^^^^^^^^^^^
Developer's view
^^^^^^^^^^^^^^^^
It is possible to extend Rally with new Scenario Runner types, if needed. Basically, each scenario runner should be implemented as a subclass of the base `ScenarioRunner <https://github.com/stackforge/rally/blob/master/rally/benchmark/runners/base.py#L137>`_ class and located in the `rally.benchmark.runners package <https://github.com/stackforge/rally/tree/master/rally/benchmark/runners>`_. The interface each scenario runner class should support is fairly easy:
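As a schematic illustration of this interface (simplified stand-in classes with hypothetical names, not actual Rally code), a custom runner could look like this:

```python
class ScenarioRunner:
    """Simplified stand-in for rally.benchmark.runners.base.ScenarioRunner."""

    __execution_type__ = "base"

    def _run_scenario(self, cls, method_name, context, args):
        raise NotImplementedError


class RepeatScenarioRunner(ScenarioRunner):
    """Illustrative runner that calls the scenario a fixed number of times."""

    __execution_type__ = "repeat"  # hypothetical type name for task files

    def _run_scenario(self, cls, method_name, context, args):
        results = []
        # In real Rally the runner would also collect timings and errors;
        # here we only gather the raw return values.
        for _ in range(args.get("times", 1)):
            scenario = cls()
            results.append(getattr(scenario, method_name)())
        return results
```

The **"runner"** section of a task file would then select this runner by its execution type and pass it the run parameters (here, ``times``).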
@ -318,27 +318,3 @@ The *hidden* attribute defines whether the context should be a *hidden* one. **H
its configuration via the task file and break their cloud.
If you want to dive deeper, also see the context manager (:mod:`rally.benchmark.context.base`) class that actually implements the algorithm described above.
Plugins
-------
Rally provides an opportunity to create and use a custom benchmark scenario, runner or context as a plugin. The plugins mechanism can be used to simplify some experiments with new scenarios and to facilitate their creation by users who don't want to edit the actual Rally code.
Placement
^^^^^^^^^
Put the plugin into the **/opt/rally/plugins** or **~/.rally/plugins** directory (or their subdirectories) and it will be autoloaded. The corresponding module should have the ".py" extension. These directories are not created automatically: you should create them by hand, or use the **unpack_plugins_samples.sh** script from **doc/samples/plugins**, which will internally create the **~/.rally/plugins** directory (see more about this script in the **Samples** section).
Creation
^^^^^^^^
Inherit the class for your plugin from the base class for a scenario, runner or context, depending on what type of plugin you want to create.
See more information about `scenarios <ScenariosDevelopment>`_, `runners <RunnersDevelopment>`_ and `contexts <ContextDevelopment>`_ creation.
Usage
^^^^^
Specify your plugin's information in a task configuration file. See `how to work with task configuration file <https://github.com/stackforge/rally/blob/master/doc/samples/tasks/README.rst>`_. You can find samples of configuration files for different types of plugins in the corresponding folders `here <https://github.com/stackforge/rally/tree/master/doc/samples/plugins>`_.
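This commit also adds a sample for SLA plugins. Schematically, such a plugin boils down to something like the following sketch (simplified stand-in classes; the real base class, registration and option name live in Rally's SLA module):

```python
class SLA:
    """Simplified stand-in for Rally's SLA plugin base class."""

    OPTION_NAME = None  # key used in the "sla" section of a task file

    def __init__(self, criterion_value):
        # The value configured for this criterion in the task file.
        self.criterion_value = criterion_value


class MaxDurationRange(SLA):
    """Illustrative SLA criterion: passes iff the spread between the
    slowest and the fastest iteration stays within the configured limit."""

    OPTION_NAME = "max_duration_range"  # hypothetical option name

    def check(self, durations):
        # durations: per-iteration scenario run times, in seconds.
        return max(durations) - min(durations) <= self.criterion_value
```

In a task file, this criterion would then appear under the scenario's **sla** section, e.g. ``max_duration_range: 2.5``.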


@ -1,5 +1,5 @@
..
Copyright 2014 Mirantis Inc. All Rights Reserved.
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
@ -15,31 +15,19 @@
.. _improve_rally:
Improve Rally
=============
Main directions of work
-----------------------
* **Benchmarks**. Improvements in the benchmarking engine & developing new benchmark scenarios.
* **Deployments**. Making Rally able to support multiple cloud deployment facilities, e.g. Fuel.
* **CLI**. Enriching the command line interface for Rally.
* **API**. Work around making Rally to be a Benchmark-as-a-Service system & developing rally-pythonclient.
* **Incubation**. Efforts to make Rally an integrated project in OpenStack.
* **Share system**. Benchmark results visualization and paste.openstack.org-like sharing system.
* **Tempest**. Integration of Tempest tests in Rally for deployment verification.
.. _contribute:
Contribute to Rally
===================
Where to begin
--------------
It is extremely simple to participate in different Rally development lines mentioned above. The **Good for start** section of our `Trello board <https://trello.com/b/DoD8aeZy/rally>`_ contains a wide range of tasks perfectly suited for you to quickly and smoothly start contributing to Rally. As soon as you have chosen a task, just log in to Trello, join the corresponding card and move it to the **In progress** section.
Please take a look at `our Roadmap <https://docs.google.com/a/mirantis.com/spreadsheets/d/16DXpfbqvlzMFaqaXAcJsBzzpowb_XpymaK2aFY2gA2g/edit#gid=0>`_ to get information about our current work directions.
The most Trello cards contain basic descriptions of what is to be done there; in case you have questions or want to share your ideas, be sure to contanct us at the ``#openstack-rally`` IRC channel on **irc.freenode.net**.
In case you have questions or want to share your ideas, be sure to contact us at the ``#openstack-rally`` IRC channel on **irc.freenode.net**.
If you want to grasp a better understanding of several main design concepts used throughout the Rally code (such as **benchmark scenarios**, **contexts** etc.), please read this :ref:`article <main_concepts>`.
If you are going to contribute to Rally, you will probably need to grasp a better understanding of several main design concepts used throughout our project (such as **benchmark scenarios**, **contexts** etc.). To do so, please read :ref:`this article <main_concepts>`.
How to contribute
@ -78,15 +66,15 @@ Several Linux distributions (notably Fedora 16 and Ubuntu 12.04) are also starti
7. Start coding
8. Run the test suite locally to make sure nothing broke, e.g.:
8. Run the test suite locally to make sure nothing broke, e.g. (this will run py26/py27/pep8 tests):
.. code-block:: none
tox
**(NOTE you should have installed tox<=1.6.1 )**
**(NOTE: you should have installed tox<=1.6.1)**
If you extend Rally with new functionality, make sure you also have provided unit tests for it.
If you extend Rally with new functionality, make sure you have also provided unit and/or functional tests for it.
9. Commit your work using:
@ -111,3 +99,80 @@ That is the awesome tool we installed earlier that does a lot of hard work for y
(This tutorial is based on: http://www.linuxjedi.co.uk/2012/03/real-way-to-start-hacking-on-openstack.html)
Testing
-------
Please, don't hesitate to write tests ;)
Unit tests
^^^^^^^^^^
*Files: /tests/unit/**
The goal of unit tests is to ensure that internal parts of the code work properly.
All internal methods should be fully covered by unit tests with reasonable use of mocks.
About Rally unit tests:
- All `unit tests <http://en.wikipedia.org/wiki/Unit_testing>`_ are located inside /tests/unit/*
- Tests are written on top of: *testtools*, *fixtures* and *mock* libs
- `Tox <https://tox.readthedocs.org/en/latest/>`_ is used to run unit tests
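As a minimal, self-contained sketch of this style (using the stdlib ``unittest`` and ``mock`` rather than Rally's actual test base classes; the class under test and all names are illustrative):

```python
import unittest
from unittest import mock


class Deployment:
    """Illustrative class under test: delegates work to a deploy engine."""

    def __init__(self, engine):
        self.engine = engine

    def deploy(self):
        return self.engine.deploy()


class DeploymentTestCase(unittest.TestCase):
    def test_deploy_delegates_to_engine(self):
        # Mock out the engine so the test covers only Deployment itself.
        engine = mock.Mock()
        engine.deploy.return_value = "endpoints"
        self.assertEqual("endpoints", Deployment(engine).deploy())
        engine.deploy.assert_called_once_with()
```

Real Rally unit tests follow the same shape, but inherit from the project's common test base classes and use *testtools* and *fixtures* as noted above.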
To run unit tests locally::
$ pip install tox
$ tox
To run py26, py27 or pep8 only::
$ tox -e <name>
#NOTE: <name> is one of py26, py27 or pep8
To get test coverage::
$ tox -e cover
#NOTE: Results will be in /cover/index.html
To generate docs::
$ tox -e docs
#NOTE: Documentation will be in doc/source/_build/html/index.html
Functional tests
^^^^^^^^^^^^^^^^
*Files: /tests/functional/**
The goal of `functional tests <https://en.wikipedia.org/wiki/Functional_testing>`_ is to check that everything works well together.
Functional tests use only the Rally API and check responses without touching internal parts.
To run functional tests locally::
$ source openrc
$ rally deployment create --fromenv --name testing
$ tox -e cli
#NOTE: openrc file with OpenStack admin credentials
Rally CI scripts
^^^^^^^^^^^^^^^^
*Files: /tests/ci/**
This directory contains scripts and files related to the Rally CI system.
Rally Style Commandments
^^^^^^^^^^^^^^^^^^^^^^^^
*Files: /tests/hacking/*
This module contains Rally-specific hacking rules for checking commandments.
For more information about Style Commandments, read the `OpenStack Style Commandments manual <http://docs.openstack.org/developer/hacking/>`_.


@ -1,6 +1,31 @@
.. include:: feature_request/README.rst
..
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _feature_requests:
Request New Features
====================
To request a new feature, you should create a document similar to other feature requests and then contribute it to the **doc/feature_request** directory of the Rally repository (see the :ref:`How-to-contribute tutorial <contribute>`).
If you don't have time to contribute your feature request via gerrit, please contact Boris Pavlovic (boris@pavlovic.me)
Active feature requests:
.. toctree::
:glob:
:maxdepth: 1
feature_request/*

doc/source/gates.rst (new file, 169 lines)

@ -0,0 +1,169 @@
..
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _gates:
Rally OS Gates
==============
Gate jobs
---------
The **OpenStack CI system** uses the so-called **"Gate jobs"** to control merges of patches submitted for review on Gerrit. These **Gate jobs** usually just launch a set of tests -- unit, functional, integration, style -- that check that the proposed patch does not break the software and can be merged into the target branch, thus providing additional guarantees for the stability of the software.
Create a custom Rally Gate job
------------------------------
You can create a **Rally Gate job** for your project to run Rally benchmarks against the patchsets proposed to be merged into your project.
To create a rally-gate job, you should create a **rally-jobs/** directory at the root of your project.
As a rule, this directory contains only **{projectname}.yaml**, but more scenarios and jobs can be added as well. This yaml file is in fact an input Rally task file specifying benchmark scenarios that should be run in your gate job.
To make *{projectname}.yaml* run in gates, you need to add *"rally-jobs"* to the "jobs" section of *projects.yaml* in *openstack-infra/project-config*.
Example: Rally Gate job for Glance
----------------------------------
Let's take a look at an example for the `Glance <https://wiki.openstack.org/wiki/Glance>`_ project:
Edit *jenkins/jobs/projects.yaml:*
.. parsed-literal::
- project:
name: glance
node: 'bare-precise || bare-trusty'
tarball-site: tarballs.openstack.org
doc-publisher-site: docs.openstack.org
jobs:
- python-jobs
- python-icehouse-bitrot-jobs
- python-juno-bitrot-jobs
- openstack-publish-jobs
- translation-jobs
**- rally-jobs**
Also add *gate-rally-dsvm-{projectname}* to *zuul/layout.yaml*:
.. parsed-literal::
- name: openstack/glance
template:
- name: merge-check
- name: python26-jobs
- name: python-jobs
- name: openstack-server-publish-jobs
- name: openstack-server-release-jobs
- name: periodic-icehouse
- name: periodic-juno
- name: check-requirements
- name: integrated-gate
- name: translation-jobs
- name: large-ops
- name: experimental-tripleo-jobs
check:
- check-devstack-dsvm-cells
**- gate-rally-dsvm-glance**
gate:
- gate-devstack-dsvm-cells
experimental:
- gate-grenade-dsvm-forward
To add one more scenario and job, you need to add *{scenarioname}.yaml* file here, and *gate-rally-dsvm-{scenarioname}* to *projects.yaml*.
For example, you can add *myscenario.yaml* to *rally-jobs* directory in your project and then edit *jenkins/jobs/projects.yaml* in this way:
.. parsed-literal::
- project:
name: glance
github-org: openstack
node: bare-precise
tarball-site: tarballs.openstack.org
doc-publisher-site: docs.openstack.org
jobs:
- python-jobs
- python-havana-bitrot-jobs
- openstack-publish-jobs
- translation-jobs
- rally-jobs
**- 'gate-rally-dsvm-{name}':
name: myscenario**
Finally, add *gate-rally-dsvm-myscenario* to *zuul/layout.yaml*:
.. parsed-literal::
- name: openstack/glance
template:
- name: python-jobs
- name: openstack-server-publish-jobs
- name: periodic-havana
- name: check-requirements
- name: integrated-gate
check:
- check-devstack-dsvm-cells
- check-tempest-dsvm-postgres-full
- gate-tempest-dsvm-large-ops
- gate-tempest-dsvm-neutron-large-ops
**- gate-rally-dsvm-myscenario**
It is also possible to arrange your input task files as templates based on jinja2. Say, you want to set the image names used throughout the *myscenario.yaml* task file as a variable parameter. Then, replace concrete image names in this file with a variable:
.. parsed-literal::
...
NovaServers.boot_and_delete_server:
-
args:
image:
name: {{image_name}}
...
NovaServers.boot_and_list_server:
-
args:
image:
name: {{image_name}}
...
and create a file named *myscenario_args.yaml* that will define the parameter values:
.. parsed-literal::
---
image_name: "^cirros.*uec$"
This file will be automatically used by Rally to substitute the variables in *myscenario.yaml*.
Plugins & Extras in Rally Gate jobs
-----------------------------------
Along with scenario configs in yaml, the **rally-jobs** directory can also contain two subdirectories:
- **plugins**: :ref:`Plugins <plugins>` needed for your gate job;
- **extra**: auxiliary files like bash scripts or images.
Both subdirectories will be copied to *~/.rally/* before the job gets started.

(Binary image files changed: one image updated, eight images added.)


@ -1,5 +1,5 @@
..
Copyright 2014 Mirantis Inc. All Rights Reserved.
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
@ -14,45 +14,26 @@
under the License.
What is Rally?
=================================
If you are here, you are probably familiar with OpenStack and you also know that it's a really huge ecosystem of cooperative services. When something fails, performs slowly or doesn't scale, it's really hard to answer different questions on "what", "why" and "where" has happened. Another reason why you could be here is that you would like to build an OpenStack CI/CD system that will allow you to improve SLA, performance and stability of OpenStack continuously.
The OpenStack QA team mostly works on CI/CD that ensures that new patches don't break some specific single node installation of OpenStack. On the other hand it's clear that such CI/CD is only an indication and does not cover all cases (e.g. if a cloud works well on a single node installation it doesn't mean that it will continue to do so on a 1k servers installation under high load as well). Rally aims to fix this and help us to answer the question "How does OpenStack work at scale?". To make it possible, we are going to automate and unify all steps that are required for benchmarking OpenStack at scale: multi-node OS deployment, verification, benchmarking & profiling.
==============
**OpenStack** is, undoubtedly, a really *huge* ecosystem of cooperative services. **Rally** is a **benchmarking tool** that answers the question: **"How does OpenStack work at scale?"**. To make this possible, Rally **automates** and **unifies** multi-node OpenStack deployment, cloud verification, benchmarking & profiling. Rally does it in a **generic** way, making it possible to check whether OpenStack is going to work well on, say, a 1k-servers installation under high load. Thus it can be used as a basic tool for an *OpenStack CI/CD system* that would continuously improve its SLA, performance and stability.
.. image:: ./images/Rally-Actions.png
:width: 50%
:width: 100%
:align: center
* Deploy engine is not yet another deployer of OpenStack, but just a pluggable mechanism that allows to unify & simplify work with different deployers like: DevStack, Fuel, Anvil on hardware/VMs that you have.
* Verification - (work in progress) uses tempest to verify the functionality of a deployed OpenStack cloud. In future Rally will support other OS verifiers.
* Benchmark engine - allows to create parameterized load on the cloud based on a big repository of benchmarks.
Deeper in Rally:
----------------
Contents
--------
.. toctree::
:maxdepth: 2
overview
concepts
deploy_engines
server_providers
verify
installation
usage
testing
feature_requests
install
tutorial
user_stories
Development information:
------------------------
.. toctree::
:maxdepth: 2
cmds
implementation
improve_rally
rally_gatejob
plugins
contribute
gates
feature_requests
project_info

doc/source/install.rst (new file, 111 lines)

@ -0,0 +1,111 @@
..
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _install:
Installation
============
Automated installation
----------------------
.. code-block:: none
git clone https://git.openstack.org/stackforge/rally
./rally/install_rally.sh
**Note:** The installation script should be run as root or as a normal user using **sudo**. Rally requires Python 2.6 or 2.7.
**Alternatively**, you can install Rally in a **virtual environment**:
.. code-block:: none
git clone https://git.openstack.org/stackforge/rally
./rally/install_rally.sh -v
Rally with DevStack all-in-one installation
-------------------------------------------
It is also possible to install Rally with DevStack. First, clone the corresponding repositories:
.. code-block:: none
git clone https://git.openstack.org/openstack-dev/devstack
git clone https://github.com/stackforge/rally
Then, configure DevStack to run Rally:
.. code-block:: none
cp rally/contrib/devstack/lib/rally devstack/lib/
cp rally/contrib/devstack/extras.d/70-rally.sh devstack/extras.d/
cd devstack
echo "enable_service rally" >> localrc
Finally, run DevStack as usual:
.. code-block:: none
./stack.sh
Rally & Docker
--------------
There is an image on Docker Hub with Rally installed. To pull this image, just execute:
.. code-block:: none
docker pull rallyforge/rally
Or you may want to build the Rally image from source:
.. code-block:: none
# first cd to rally source root dir
docker build -t myrally .
Since Rally stores local settings in the user's home directory and the database in /var/lib/rally/database,
you may want to keep these directories outside of the container. This can be done with the following steps:
.. code-block:: none
cd ~ #go to your home directory
mkdir rally_home rally_db
docker run -t -i -v ~/rally_home:/home/rally -v ~/rally_db:/var/lib/rally/database rallyforge/rally
You may want to save the last command as an alias:
.. code-block:: none
echo 'alias dock_rally="docker run -t -i -v ~/rally_home:/home/rally -v ~/rally_db:/var/lib/rally/database rallyforge/rally"' >> ~/.bashrc
After executing the ``dock_rally`` alias, or ``docker run``, you get a bash shell running inside a container with
Rally installed. You may do anything with Rally there, but you need to create the database first:
.. code-block:: none
user@box:~/rally$ dock_rally
rally@1cc98e0b5941:~$ rally-manage db recreate
rally@1cc98e0b5941:~$ rally deployment list
There are no deployments. To create a new deployment, use:
rally deployment create
rally@1cc98e0b5941:~$
More about docker: `https://www.docker.com/ <https://www.docker.com/>`_


@ -1,212 +0,0 @@
..
Copyright 2014 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _installation:
Installation
============
Rally setup
-----------
The simplest way to start using Rally is to install it together with OpenStack using DevStack. If you already have an existing OpenStack installation and/or don't want to install DevStack, then the preferable way to set up Rally would be to install it manually. Both types of installation are described below in full detail.
**Note: Running Rally on OSX is not advised as some pip dependencies will fail to install**.
Automated installation
^^^^^^^^^^^^^^^^^^^^^^
**NOTE: Please ensure that you have installed either the Python 2.6 or the Python 2.7 version in the system that you are planning to install Rally**.
The installation script of Rally supports 2 installation methods:
* system-wide (default)
* in a virtual environment using the virtualenv tool
On the target system, get the source code of Rally:
.. code-block:: none
git clone https://git.openstack.org/stackforge/rally
**As root, or as a normal user using sudo**, execute the installation script. If you define the -v switch, Rally will be installed in a virtual environment, otherwise, it will be installed system-wide.
**Install system-wide**:
.. code-block:: none
./rally/install_rally.sh
**Or install in a virtual environment**:
.. code-block:: none
./rally/install_rally.sh -v
Now you are able to :ref:`use Rally <usage>`!
Rally with DevStack all in one installation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To install Rally with DevStack, you should first clone the corresponding repositories and copy the files necessary to integrate Rally with DevStack:
.. code-block:: none
git clone https://git.openstack.org/openstack-dev/devstack
git clone https://github.com/stackforge/rally
To configure DevStack to run Rally:
.. code-block:: none
cp rally/contrib/devstack/lib/rally devstack/lib/
cp rally/contrib/devstack/extras.d/70-rally.sh devstack/extras.d/
cd devstack
echo "enable_service rally" >> localrc
Finally, run DevStack as usual:
.. code-block:: none
./stack.sh
And finally you are able to :ref:`use Rally <usage>`!
Manual installation
^^^^^^^^^^^^^^^^^^^
Prerequisites
"""""""""""""
Start with installing some requirements that Rally needs to be set up correctly. The specific requirements depend on the environment you are going to install Rally in:
**Ubuntu**
.. code-block:: none
sudo apt-get update
sudo apt-get install libpq-dev git-core python-dev libevent-dev libssl-dev libffi-dev libsqlite3-dev
curl -o /tmp/get-pip.py https://raw.github.com/pypa/pip/master/contrib/get-pip.py
sudo python /tmp/get-pip.py
sudo pip install pbr
**CentOS**
.. code-block:: none
sudo yum install gcc git-core postgresql-libs python-devel libevent-devel openssl-devel libffi-devel sqlite
#install pip on centos:
curl -o /tmp/get-pip.py https://raw.github.com/pypa/pip/master/contrib/get-pip.py
sudo python /tmp/get-pip.py
sudo pip install pbr
**VirtualEnv**
Another option is to install Rally in virtualenv; you should then install this package, create a virtualenv and activate it:
.. code-block:: none
sudo pip install -U virtualenv
virtualenv .venv
. .venv/bin/activate # NOTE: Make sure that your current shell is either bash or zsh (otherwise it will fail)
sudo pip install pbr
Installing Rally
""""""""""""""""
The next step is to clone & install rally:
.. code-block:: none
git clone https://github.com/stackforge/rally.git && cd rally
sudo python setup.py install
Now you are ready to configure Rally (in order for it to be able to use the database):
.. code-block:: none
sudo mkdir /etc/rally
sudo cp etc/rally/rally.conf.sample /etc/rally/rally.conf
sudo vim /etc/rally/rally.conf
# Change the "connection" parameter, For example to this:
connection="sqlite:////a/path/here/rally.sqlite"
After the installation step has been completed, you need to create the Rally database:
.. code-block:: none
rally-manage db recreate
And finally you are able to :ref:`use Rally <usage>`!
Rally & Docker
^^^^^^^^^^^^^^
There is an image on Docker Hub with Rally installed. To pull this image, just execute:
.. code-block:: none
docker pull rallyforge/rally
Or you may want to build rally image from source:
.. code-block:: none
# first cd to rally source root dir
docker build -t myrally .
Since Rally stores local settings in the user's home directory and the database in /var/lib/rally/database,
you may want to keep these directories outside of the container. This can be done with the following steps:
.. code-block:: none
cd ~ #go to your home directory
mkdir rally_home rally_db
docker run -t -i -v ~/rally_home:/home/rally -v ~/rally_db:/var/lib/rally/database rallyforge/rally
You may want to save the last command as an alias:
.. code-block:: none
echo 'alias dock_rally="docker run -t -i -v ~/rally_home:/home/rally -v ~/rally_db:/var/lib/rally/database rallyforge/rally"' >> ~/.bashrc
After executing the ``dock_rally`` alias, or ``docker run``, you get a bash shell running inside a container with
Rally installed. You may do anything with Rally there, but you need to create the database first:
.. code-block:: none
user@box:~/rally$ dock_rally
rally@1cc98e0b5941:~$ rally-manage db recreate
rally@1cc98e0b5941:~$ rally deployment list
There are no deployments. To create a new deployment, use:
rally deployment create
rally@1cc98e0b5941:~$
More about docker: `https://www.docker.com/ <https://www.docker.com/>`_
Running Rally's Unit Tests
--------------------------
Rally should be tested with tox, but it is not compatible with the current version of tox, so install tox 1.6.1 and then run it:
.. code-block:: none
pip install 'tox<=1.6.1'
tox


@ -1,5 +1,5 @@
..
Copyright 2014 Mirantis Inc. All Rights Reserved.
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
@ -18,101 +18,75 @@
Overview
========
**Rally** is a **benchmarking tool** that **automates** and **unifies** multi-node OpenStack deployment, cloud verification, benchmarking & profiling. It can be used as a basic tool for an *OpenStack CI/CD system* that would continuously improve its SLA, performance and stability.
Use Cases
---------
Before diving deep in Rally architecture let's take a look at 3 major high level Rally Use Cases:
Let's take a look at 3 major high-level use cases of Rally:
.. image:: ./images/Rally-UseCases.png
:width: 50%
:width: 100%
:align: center
Typical cases where Rally aims to help are:
Generally, there are a few typical cases where Rally proves to be of great use:
1. Automate measuring & profiling focused on how new code changes affect the OS performance;
2. Using Rally profiler to detect scaling & performance issues;
3. Investigate how different deployments affect the OS performance:
* Find the set of suitable OpenStack deployment architectures;
* Create deployment specifications for different loads (amount of controllers, swift nodes, etc.);
4. Automate the search for hardware best suited for particular OpenStack cloud;
5. Automate the production cloud specification generation:
* Determine terminal loads for basic cloud operations: VM start & stop, Block Device create/destroy & various OpenStack API methods;
* Check performance of basic cloud operations in case of different loads.
Architecture
------------
Real-life examples
------------------
Usually OpenStack projects are as-a-Service, so Rally provides this approach and a CLI driven approach that does not require a daemon:
1. Rally as-a-Service: Run rally as a set of daemons that present Web UI (work in progress) so 1 RaaS could be used by whole team.
2. Rally as-an-App: Rally as a just lightweight CLI app (without any daemons), that makes it simple to develop & much more portable.
To be substantive, let's investigate a couple of real-life examples of Rally in action.
How is this possible? Take a look at diagram below:
How does amqp_rpc_single_reply_queue affect performance?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. image:: ./images/Rally_Architecture.png
:width: 50%
:align: center
So what is behind Rally?
Rally Components
^^^^^^^^^^^^^^^^
Rally consists of 4 main components:
1. **Server Providers** - provide servers (virtual servers), with ssh access, in one L3 network.
2. **Deploy Engines** - deploy OpenStack cloud on servers that are presented by Server Providers
3. **Verification** - component that runs tempest (or another specific set of tests) against a deployed cloud, collects results & presents them in human readable form.
4. **Benchmark engine** - allows to write parameterized benchmark scenarios & run them against the cloud.
But **why** does Rally need these components?
It becomes really clear if we try to imagine how one would benchmark a cloud at scale, if ...
.. image:: ./images/Rally_QA.png
:align: center
:width: 50%
Rally in action
---------------
How amqp_rpc_single_reply_queue affects performance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To show Rally's capabilities and potential we used NovaServers.boot_and_destroy scenario to see how amqp_rpc_single_reply_queue option affects VM bootup time. Some time ago it was `shown <https://docs.google.com/file/d/0B-droFdkDaVhVzhsN3RKRlFLODQ/edit?pli=1>`_ that cloud performance can be boosted by setting it on so naturally we decided to check this result. To make this test we issued requests for booting up and deleting VMs for different number of concurrent users ranging from one to 30 with and without this option set. For each group of users a total number of 200 requests was issued. Averaged time per request is shown below:
Rally allowed us to reveal a quite an interesting fact about **Nova**. We used *NovaServers.boot_and_delete* benchmark scenario to see how the *amqp_rpc_single_reply_queue* option affects VM bootup time (it turns on a kind of fast RPC). Some time ago it was `shown <https://docs.google.com/file/d/0B-droFdkDaVhVzhsN3RKRlFLODQ/edit?pli=1>`_ that cloud performance can be boosted by setting it on, so we naturally decided to check this result with Rally. To make this test, we issued requests for booting and deleting VMs for a number of concurrent users ranging from 1 to 30 with and without the investigated option. For each group of users, a total number of 200 requests was issued. Averaged time per request is shown below:
.. image:: ./images/Amqp_rpc_single_reply_queue.png
:width: 50%
:width: 100%
:align: center
So apparently this option affects cloud performance, but not in the way it was thought before.
**So Rally has unexpectedly indicated that setting the amqp_rpc_single_reply_queue option does affect the cloud performance, but in quite the opposite way to what was thought before.**
Performance of Nova instance list command
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Performance of Nova list command
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Context**: 1 OpenStack user
Another interesting result comes from the *NovaServers.boot_and_list_server* scenario, which enabled us to launch the following benchmark with Rally:
**Scenario**: 1) boot VM from this user 2) list VM
* **Benchmark environment** (which we also call **"Context"**): 1 temporary OpenStack user.
* **Benchmark scenario**: boot a single VM from this user & list all VMs.
* **Benchmark runner** setting: repeat this procedure 200 times in a continuous way.
**Runner**: Repeat 200 times.
As a result, on every next iteration user has more and more VMs and performance of VM list is degrading quite fast:
During the execution of this benchmark scenario, the user has more and more VMs on each iteration. Rally has shown that in this case, the performance of the **VM list** command in Nova is degrading much faster than one might expect:
.. image:: ./images/Rally_VM_list.png
:width: 50%
:width: 100%
:align: center
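An input task reproducing this setup could look roughly as follows (a sketch only: the flavor and image names here are placeholders, and the exact values used in the original experiment are not part of this document):

.. code-block:: none

    {
        "NovaServers.boot_and_list_server": [
            {
                "args": {
                    "flavor": {"name": "m1.nano"},
                    "image": {"name": "^cirros.*uec$"}
                },
                "runner": {
                    "type": "serial",
                    "times": 200
                },
                "context": {
                    "users": {
                        "tenants": 1,
                        "users_per_tenant": 1
                    }
                }
            }
        ]
    }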
Complex scenarios & detailed information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For example NovaServers.snapshot contains a lot of "atomic" actions:
Complex scenarios
^^^^^^^^^^^^^^^^^
In fact, the vast majority of Rally scenarios is expressed as a sequence of **"atomic" actions**. For example, *NovaServers.snapshot* is composed of 6 atomic actions:
1. boot VM
2. snapshot VM
@ -121,11 +95,36 @@ For example NovaServers.snapshot contains a lot of "atomic" actions:
5. delete VM
6. delete snapshot
Fortunately Rally collects information about the duration of all these operations for every iteration.
As a result, we can generate a beautiful graph (Rally_snapshot_vm.png).
Rally measures not only the performance of the benchmark scenario as a whole, but also that of single atomic actions. As a result, Rally also plots the atomic actions performance data for each benchmark iteration in a quite detailed way:
.. image:: ./images/Rally_snapshot_vm.png
:width: 50%
:width: 100%
:align: center
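The idea behind this per-action timing can be sketched with a minimal decorator. This is a simplified illustration only, not Rally's actual implementation; the class and action names are hypothetical:

```python
import time


def atomic_action_timer(name):
    """Record the duration of one "atomic" action under the given name."""
    def wrap(func):
        def timed(self, *args, **kwargs):
            start = time.time()
            try:
                return func(self, *args, **kwargs)
            finally:
                # store the measured duration on the scenario instance
                self.atomic_actions[name] = time.time() - start
        return timed
    return wrap


class SnapshotScenario(object):
    def __init__(self):
        self.atomic_actions = {}  # filled in by the decorator

    @atomic_action_timer("boot_server")
    def _boot_server(self):
        time.sleep(0.01)  # stands in for the real Nova call

    @atomic_action_timer("snapshot_server")
    def _snapshot_server(self):
        time.sleep(0.01)

    def run(self):
        self._boot_server()
        self._snapshot_server()


scenario = SnapshotScenario()
scenario.run()
print(sorted(scenario.atomic_actions))  # prints ['boot_server', 'snapshot_server']
```

Each iteration thus yields one duration per atomic action, which is exactly the data plotted in the graph above.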
Architecture
------------
Usually OpenStack projects are implemented *"as-a-Service"*, so Rally provides this approach. In addition, it implements a *CLI-driven* approach that does not require a daemon:
1. **Rally as-a-Service**: Run rally as a set of daemons that present Web UI *(work in progress)* so 1 RaaS could be used by a whole team.
2. **Rally as-an-App**: Rally as just a lightweight and portable CLI app (without any daemons) that makes it simple to use & develop.
The diagram below shows how this is possible:
.. image:: ./images/Rally_Architecture.png
:width: 100%
:align: center
The actual **Rally core** consists of 4 main components, listed below in the order they go into action:
1. **Server Providers** - provide a **unified interface** for interaction with different **virtualization technologies** (*LXC*, *Virsh* etc.) and **cloud suppliers** (like *Amazon*): they do so via *ssh* access and in one *L3 network*;
2. **Deploy Engines** - deploy some OpenStack distribution (like *DevStack* or *FUEL*) before any benchmarking procedures take place, using servers retrieved from Server Providers;
3. **Verification** - runs *Tempest* (or another specific set of tests) against the deployed cloud to check that it works correctly, collects results & presents them in human readable form;
4. **Benchmark Engine** - allows to write parameterized benchmark scenarios & run them against the cloud.
It should become fairly obvious why the Rally core needs to be split into these parts if you take a look at the following diagram, which visualizes a rough **algorithm for starting to benchmark OpenStack at scale**. Keep in mind that there might be lots of different ways to set up virtual servers, as well as to deploy OpenStack to them.
.. image:: ./images/Rally_QA.png
:align: center
:width: 100%

doc/source/plugins.rst Normal file

@ -0,0 +1,391 @@
..
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _plugins:
Rally Plugins
=============
How plugins work
----------------
Rally provides an opportunity to create and use a **custom benchmark scenario, runner or context** as a **plugin**:
.. image:: ./images/Rally-Plugins.png
:width: 100%
:align: center
Plugins can be quickly written and used, with no need to contribute them to the actual Rally code. Just place a python module with your plugin class into the **/opt/rally/plugins** or **~/.rally/plugins** directory (or its subdirectories), and it will be autoloaded.
Example: Benchmark scenario as a plugin
---------------------------------------
Let's create a simple plugin scenario that lists flavors.
Creation
^^^^^^^^
Inherit a class for your plugin from the base *Scenario* class and implement a scenario method inside it as usual. In our scenario, let us first list flavors as an ordinary user, and then repeat the same using admin clients:
.. code-block:: none
from rally.benchmark.scenarios import base
class ScenarioPlugin(base.Scenario):
"""Sample plugin which lists flavors."""
@base.atomic_action_timer("list_flavors")
def _list_flavors(self):
"""Sample of using clients - list flavors.

You can use self.context, self.admin_clients and self.clients, which
are initialized on scenario instance creation."""
self.clients("nova").flavors.list()
@base.atomic_action_timer("list_flavors_as_admin")
def _list_flavors_as_admin(self):
"""The same with admin clients"""
self.admin_clients("nova").flavors.list()
@base.scenario()
def list_flavors(self):
"""List flavors."""
self._list_flavors()
self._list_flavors_as_admin()
Placement
^^^^^^^^^
Put the python module with your plugin class into the **/opt/rally/plugins** or **~/.rally/plugins** directory (or its subdirectories), and it will be autoloaded. You can also use the **unpack_plugins_samples.sh** script from **samples/plugins**, which will automatically create the **~/.rally/plugins** directory.
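For instance, assuming the scenario module above was saved locally as *list_flavors_plugin.py* (a hypothetical filename), placement boils down to:

```shell
# create the auto-load directory and drop the plugin module into it
mkdir -p ~/.rally/plugins
cat > ~/.rally/plugins/list_flavors_plugin.py <<'EOF'
# the ScenarioPlugin class from the example above goes here
EOF
```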
Usage
^^^^^
You can refer to your plugin scenario in the benchmark task configuration files in just the same way as to any other scenario:
.. code-block:: none
{
"ScenarioPlugin.list_flavors": [
{
"runner": {
"type": "serial",
"times": 5
},
"context": {
"create_flavor": {
"ram": 512
}
}
}
]
}
This configuration file uses the *"create_flavor"* context which we'll create as a plugin below.
Example: Context as a plugin
----------------------------
Let's create a simple plugin context that adds a flavor to the environment before the benchmark task starts and deletes it after it finishes.
Creation
^^^^^^^^
Inherit a class for your plugin from the base *Context* class. Then, implement the Context API: the *setup()* method that creates a flavor and the *cleanup()* method that deletes it.
.. code-block:: none
from rally.benchmark.context import base
from rally.common import log as logging
from rally import consts
from rally import osclients
LOG = logging.getLogger(__name__)
@base.context(name="create_flavor", order=1000)
class CreateFlavorContext(base.Context):
"""This sample creates a flavor with the specified options before the task
starts and deletes it after the task completes.
To create your own context plugin, inherit it from
rally.benchmark.context.base.Context
"""
CONFIG_SCHEMA = {
"type": "object",
"$schema": consts.JSON_SCHEMA,
"additionalProperties": False,
"properties": {
"flavor_name": {
"type": "string",
},
"ram": {
"type": "integer",
"minimum": 1
},
"vcpus": {
"type": "integer",
"minimum": 1
},
"disk": {
"type": "integer",
"minimum": 1
}
}
}
def setup(self):
"""This method is called before the task starts"""
try:
# use rally.osclients to get the necessary client instance
nova = osclients.Clients(self.context["admin"]["endpoint"]).nova()
# and then do what you need with this client
self.context["flavor"] = nova.flavors.create(
# context settings are stored in self.config
name=self.config.get("flavor_name", "rally_test_flavor"),
ram=self.config.get("ram", 1),
vcpus=self.config.get("vcpus", 1),
disk=self.config.get("disk", 1)).to_dict()
LOG.debug("Flavor with id '%s' created" % self.context["flavor"]["id"])
except Exception as e:
msg = "Can't create flavor: %s" % e.message
if logging.is_debug():
LOG.exception(msg)
else:
LOG.warning(msg)
def cleanup(self):
"""This method is called after the task finishes"""
try:
nova = osclients.Clients(self.context["admin"]["endpoint"]).nova()
nova.flavors.delete(self.context["flavor"]["id"])
LOG.debug("Flavor '%s' deleted" % self.context["flavor"]["id"])
except Exception as e:
msg = "Can't delete flavor: %s" % e.message
if logging.is_debug():
LOG.exception(msg)
else:
LOG.warning(msg)
Placement
^^^^^^^^^
Put the python module with your plugin class into the **/opt/rally/plugins** or **~/.rally/plugins** directory (or its subdirectories), and it will be autoloaded. You can also use the **unpack_plugins_samples.sh** script from **samples/plugins**, which will automatically create the **~/.rally/plugins** directory.
Usage
^^^^^
You can refer to your plugin context in the benchmark task configuration files in just the same way as to any other context:
.. code-block:: none
{
"Dummy.dummy": [
{
"args": {
"sleep": 0.01
},
"runner": {
"type": "constant",
"times": 5,
"concurrency": 1
},
"context": {
"users": {
"tenants": 1,
"users_per_tenant": 1
},
"create_flavor": {
"ram": 1024
}
}
}
]
}
Example: SLA as a plugin
------------------------
Let's create an SLA plugin (a success criterion) that checks that the range of the observed performance measurements does not exceed an allowed maximum value.
Creation
^^^^^^^^
Inherit a class for your plugin from the base *SLA* class and implement its API (the *check()* method):
.. code-block:: none
from rally.benchmark.sla import base
class MaxDurationRange(base.SLA):
"""Maximum allowed duration range in seconds."""
OPTION_NAME = "max_duration_range"
CONFIG_SCHEMA = {"type": "number", "minimum": 0.0,
"exclusiveMinimum": True}
@staticmethod
def check(criterion_value, result):
durations = [r["duration"] for r in result if not r.get("error")]
durations_range = max(durations) - min(durations)
success = durations_range <= criterion_value
msg = ("Maximum duration range per iteration %ss, actual %ss"
% (criterion_value, durations_range))
return base.SLAResult(success, msg)
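The arithmetic of this criterion is easy to check standalone. The following is a simplified re-implementation for illustration (not the Rally class itself); *result* mimics the list of per-iteration dicts the real *check()* receives:

```python
def max_duration_range_check(criterion_value, result):
    """Succeed iff the spread of successful iteration durations
    (max - min) does not exceed criterion_value."""
    durations = [r["duration"] for r in result if not r.get("error")]
    durations_range = max(durations) - min(durations)
    return durations_range <= criterion_value


# two successful iterations (durations 1.0s and 3.0s => range 2.0s)
# plus one failed iteration, which the criterion skips
iterations = [{"duration": 1.0}, {"duration": 3.0},
              {"duration": 7.0, "error": ["simulated failure"]}]

print(max_duration_range_check(2.5, iterations))  # prints True: 2.0 <= 2.5
```

Had the allowed maximum been 1.5, the same data would fail the criterion, since the 2.0s range exceeds it.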
Placement
^^^^^^^^^
Put the python module with your plugin class into the **/opt/rally/plugins** or **~/.rally/plugins** directory (or its subdirectories), and it will be autoloaded. You can also use the **unpack_plugins_samples.sh** script from **samples/plugins**, which will automatically create the **~/.rally/plugins** directory.
Usage
^^^^^
You can refer to your SLA in the benchmark task configuration files in just the same way as to any other SLA:
.. code-block:: none
{
"Dummy.dummy": [
{
"args": {
"sleep": 0.01
},
"runner": {
"type": "constant",
"times": 5,
"concurrency": 1
},
"context": {
"users": {
"tenants": 1,
"users_per_tenant": 1
}
},
"sla": {
"max_duration_range": 2.5
}
}
]
}
Example: Scenario runner as a plugin
------------------------------------
Let's create a scenario runner plugin that runs a given benchmark scenario a random number of times (chosen from a given range).
Creation
^^^^^^^^
Inherit a class for your plugin from the base *ScenarioRunner* class and implement its API (the *_run_scenario()* method):
.. code-block:: none
import random
from rally.benchmark.runners import base
from rally import consts
class RandomTimesScenarioRunner(base.ScenarioRunner):
"""Sample of scenario runner plugin.
Run a scenario a random number of times, which is chosen between
min_times and max_times.
"""
__execution_type__ = "random_times"
CONFIG_SCHEMA = {
"type": "object",
"$schema": consts.JSON_SCHEMA,
"properties": {
"type": {
"type": "string"
},
"min_times": {
"type": "integer",
"minimum": 1
},
"max_times": {
"type": "integer",
"minimum": 1
}
},
"additionalProperties": True
}
def _run_scenario(self, cls, method_name, context, args):
# runners settings are stored in self.config
min_times = self.config.get('min_times', 1)
max_times = self.config.get('max_times', 1)
for i in range(random.randrange(min_times, max_times)):
run_args = (i, cls, method_name,
base._get_scenario_context(context), args)
result = base._run_scenario_once(run_args)
# use self._send_result for the result of each iteration
self._send_result(result)
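A detail worth noting: *random.randrange(min_times, max_times)* excludes the upper bound, so the scenario is run between *min_times* and *max_times - 1* times, and with equal defaults (min_times == max_times == 1) it raises ValueError on an empty range. The selection logic can be checked standalone (a simplified sketch; the function name is illustrative, not part of Rally):

```python
import random


def pick_iteration_count(config):
    # same selection logic as the sample runner above
    min_times = config.get("min_times", 1)
    max_times = config.get("max_times", 1)
    return random.randrange(min_times, max_times)


random.seed(0)
counts = [pick_iteration_count({"min_times": 10, "max_times": 20})
          for _ in range(1000)]
print(min(counts), max(counts))  # randrange's upper bound is exclusive
```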
Placement
^^^^^^^^^
Put the python module with your plugin class into the **/opt/rally/plugins** or **~/.rally/plugins** directory (or its subdirectories), and it will be autoloaded. You can also use the **unpack_plugins_samples.sh** script from **samples/plugins**, which will automatically create the **~/.rally/plugins** directory.
Usage
^^^^^
You can refer to your scenario runner in the benchmark task configuration files in just the same way as to any other runner. Don't forget to put your runner-specific parameters into the configuration as well (*"min_times"* and *"max_times"* in our example):
.. code-block:: none
{
"Dummy.dummy": [
{
"runner": {
"type": "random_times",
"min_times": 10,
"max_times": 20
},
"context": {
"users": {
"tenants": 1,
"users_per_tenant": 1
}
}
}
]
}
Different plugin samples are available `here <https://github.com/stackforge/rally/tree/master/samples/plugins>`_.


@ -0,0 +1,37 @@
..
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _project_info:
Project Info
============
Useful links
------------
- `Source code <https://github.com/stackforge/rally>`_
- `Rally road map <https://docs.google.com/a/mirantis.com/spreadsheets/d/16DXpfbqvlzMFaqaXAcJsBzzpowb_XpymaK2aFY2gA2g/edit#gid=0>`_
- `Project space <http://launchpad.net/rally>`_
- `Bugs <https://bugs.launchpad.net/rally>`_
- `Patches on review <https://review.openstack.org/#/q/status:open+rally,n,z>`_
- `Meeting logs <http://eavesdrop.openstack.org/meetings/rally/2015/>`_ (server: **irc.freenode.net**, channel: **#openstack-meeting**)
- `IRC logs <http://irclog.perlgeek.de/openstack-rally>`_ (server: **irc.freenode.net**, channel: **#openstack-rally**, each Tuesday at 17:00 UTC)
Where can I discuss and propose changes?
----------------------------------------
- Our IRC channel: **#openstack-rally** on **irc.freenode.net**;
- Weekly Rally team meeting (in IRC): **#openstack-meeting** on **irc.freenode.net**, held on Tuesdays at 17:00 UTC;
- Openstack mailing list: **openstack-dev@lists.openstack.org** (see `subscription and usage instructions <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>`_);
- `Rally team on Launchpad <https://launchpad.net/rally>`_: Answers/Bugs/Blueprints.


@ -1,113 +0,0 @@
..
Copyright 2014 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _rally_gatejob:
Rally gates
===========
How to create custom rally-gate job
-----------------------------------
To create a rally-gate job, you should create a rally-scenarios directory at the root of your project.
Normally this directory contains only {projectname}.yaml, but more scenarios and jobs can easily be added.
For {projectname}.yaml to be run in the gate, you need to add "rally-jobs" to the "jobs" section of projects.yaml in openstack-infra/config.
For example, in the glance project:
modules/openstack_project/files/jenkins_job_builder/config/projects.yaml:
.. code-block:: none
- project:
name: glance
github-org: openstack
node: bare-precise
tarball-site: tarballs.openstack.org
doc-publisher-site: docs.openstack.org
jobs:
- python-jobs
- python-havana-bitrot-jobs
- openstack-publish-jobs
- translation-jobs
- rally-jobs
and add check-rally-dsvm-{projectname} to modules/openstack_project/files/zuul/layout.yaml:
.. code-block:: none
- name: openstack/glance
template:
- name: python-jobs
- name: openstack-server-publish-jobs
- name: periodic-havana
- name: check-requirements
- name: integrated-gate
check:
- check-devstack-dsvm-cells
- check-tempest-dsvm-postgres-full
- gate-tempest-dsvm-large-ops
- gate-tempest-dsvm-neutron-large-ops
- check-rally-dsvm-glance
To add one more scenario and job, you need to add a {scenarioname}.yaml file here, and check-rally-dsvm-{scenarioname} to projects.yaml. For example:
add rally-scenarios/myscenario.yaml to the rally-scenarios directory in your project
and to modules/openstack_project/files/jenkins_job_builder/config/projects.yaml:
.. code-block:: none
- project:
name: glance
github-org: openstack
node: bare-precise
tarball-site: tarballs.openstack.org
doc-publisher-site: docs.openstack.org
jobs:
- python-jobs
- python-havana-bitrot-jobs
- openstack-publish-jobs
- translation-jobs
- rally-jobs
- 'check-rally-dsvm-{name}':
name: myscenario
and add check-rally-dsvm-myscenario to modules/openstack_project/files/zuul/layout.yaml:
.. code-block:: none
- name: openstack/glance
template:
- name: python-jobs
- name: openstack-server-publish-jobs
- name: periodic-havana
- name: check-requirements
- name: integrated-gate
check:
- check-devstack-dsvm-cells
- check-tempest-dsvm-postgres-full
- gate-tempest-dsvm-large-ops
- gate-tempest-dsvm-neutron-large-ops
- check-rally-dsvm-myscenario


@ -1,5 +1,5 @@
..
Copyright 2014 Mirantis Inc. All Rights Reserved.
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
@ -13,9 +13,16 @@
License for the specific language governing permissions and limitations
under the License.
.. _verify:
.. _tutorial:
Verify
======
Rally step-by-step
==================
Rally has a component that runs *Tempest* (or another specific set of tests) against a deployed OpenStack cloud, collects results & represents them in human-readable form.
In the following tutorial, we will guide you step-by-step through different use cases that might occur in Rally, starting with the easy ones and moving towards more complicated cases.
.. toctree::
:glob:
:maxdepth: 1
tutorial/**


@ -0,0 +1,32 @@
..
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _tutorial_step_0_installation:
Step 0. Installation
====================
Installing Rally is very simple. Just execute the following commands:
.. code-block:: none
git clone https://git.openstack.org/stackforge/rally
./rally/install_rally.sh
**Note:** The installation script should be run as root or as a normal user using **sudo**. Rally requires Python version 2.6 or 2.7.
There are also other installation options that you can find :ref:`here <install>`.
Now that you have rally installed, you are ready to start :ref:`benchmarking OpenStack with it <tutorial_step_1_setting_up_env_and_running_benchmark_from_samples>`!


@ -0,0 +1,237 @@
..
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _tutorial_step_1_setting_up_env_and_running_benchmark_from_samples:
Step 1. Setting up the environment and running a benchmark from samples
=======================================================================
In this demo, we will show how to perform the following basic operations in Rally:
.. toctree::
:maxdepth: 1
We assume that you have a :ref:`Rally installation <tutorial_step_0_installation>` and an already existing OpenStack deployment with Keystone available at *<KEYSTONE_AUTH_URL>*.
1. Registering an OpenStack deployment in Rally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
First, you have to provide Rally with an OpenStack deployment it is going to benchmark. This should be done either through `OpenRC files <http://docs.openstack.org/user-guide/content/cli_openrc.html>`_ or through deployment `configuration files <https://github.com/stackforge/rally/tree/master/samples/deployments>`_. In case you already have an *OpenRC* file, it is extremely simple to register a deployment with the *deployment create* command:
.. code-block:: none
$ . openrc admin admin
$ rally deployment create --fromenv --name=existing
+--------------------------------------+----------------------------+------------+------------------+--------+
| uuid | created_at | name | status | active |
+--------------------------------------+----------------------------+------------+------------------+--------+
| 28f90d74-d940-4874-a8ee-04fda59576da | 2015-01-18 00:11:38.059983 | devstack_2 | deploy->finished | |
+--------------------------------------+----------------------------+------------+------------------+--------+
Using deployment : <Deployment UUID>
...
Alternatively, you can put the information about your cloud credentials into a JSON configuration file (let's call it `existing.json <https://github.com/stackforge/rally/blob/master/samples/deployments/existing.json>`_). The *deployment create* command has a slightly different syntax in this case:
.. code-block:: none
$ rally deployment create --file=existing.json --name=existing
+--------------------------------------+----------------------------+------------+------------------+--------+
| uuid | created_at | name | status | active |
+--------------------------------------+----------------------------+------------+------------------+--------+
| 28f90d74-d940-4874-a8ee-04fda59576da | 2015-01-18 00:11:38.059983 | devstack_2 | deploy->finished | |
+--------------------------------------+----------------------------+------------+------------------+--------+
Using deployment : <Deployment UUID>
...
Note the last line in the output. It says that the newly created deployment is now used by Rally; that means that all the benchmarking operations from now on are going to be performed on this deployment. Later we will show how to switch between different deployments.
Finally, the *deployment check* command enables you to verify that your current deployment is healthy and ready to be benchmarked:
.. code-block:: none
$ rally deployment check
keystone endpoints are valid and following services are available:
+----------+----------------+-----------+
| services | type | status |
+----------+----------------+-----------+
| cinder | volume | Available |
| cinderv2 | volumev2 | Available |
| ec2 | ec2 | Available |
| glance | image | Available |
| heat | orchestration | Available |
| heat-cfn | cloudformation | Available |
| keystone | identity | Available |
| nova | compute | Available |
| novav21 | computev21 | Available |
| s3 | s3 | Available |
+----------+----------------+-----------+
2. Benchmarking
^^^^^^^^^^^^^^^
Now that we have a working and registered deployment, we can start benchmarking it. The sequence of benchmarks to be launched by Rally should be specified in a *benchmark task configuration file* (either in *JSON* or in *YAML* format). Let's try one of the sample benchmark tasks available in `samples/tasks/scenarios <https://github.com/stackforge/rally/tree/master/samples/tasks/scenarios>`_, say, the one that boots and deletes multiple servers (*samples/tasks/scenarios/nova/boot-and-delete.json*):
.. code-block:: none
{
"NovaServers.boot_and_delete_server": [
{
"args": {
"flavor": {
"name": "m1.nano"
},
"image": {
"name": "^cirros.*uec$"
},
"force_delete": false
},
"runner": {
"type": "constant",
"times": 10,
"concurrency": 2
},
"context": {
"users": {
"tenants": 3,
"users_per_tenant": 2
}
}
}
]
}
To start a benchmark task, run the *task start* command (you can also add the *-v* option to print more logging information):
.. code-block:: none
$ rally task start samples/tasks/scenarios/nova/boot-and-delete.json
--------------------------------------------------------------------------------
Preparing input task
--------------------------------------------------------------------------------
Input task is:
<Your task config here>
--------------------------------------------------------------------------------
Task 6fd9a19f-5cf8-4f76-ab72-2e34bb1d4996: started
--------------------------------------------------------------------------------
Benchmarking... This can take a while...
To track task status use:
rally task status
or
rally task detailed
--------------------------------------------------------------------------------
Task 6fd9a19f-5cf8-4f76-ab72-2e34bb1d4996: finished
--------------------------------------------------------------------------------
test scenario NovaServers.boot_and_delete_server
args position 0
args values:
{u'args': {u'flavor': {u'name': u'm1.nano'},
u'force_delete': False,
u'image': {u'name': u'^cirros.*uec$'}},
u'context': {u'users': {u'project_domain': u'default',
u'resource_management_workers': 30,
u'tenants': 3,
u'user_domain': u'default',
u'users_per_tenant': 2}},
u'runner': {u'concurrency': 2, u'times': 10, u'type': u'constant'}}
+--------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
+--------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| nova.boot_server | 7.99 | 9.047 | 11.862 | 9.747 | 10.805 | 100.0% | 10 |
| nova.delete_server | 4.427 | 4.574 | 4.772 | 4.677 | 4.725 | 100.0% | 10 |
| total | 12.556 | 13.621 | 16.37 | 14.252 | 15.311 | 100.0% | 10 |
+--------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
Load duration: 70.1310448647
Full duration: 87.545541048
HINTS:
* To plot HTML graphics with this data, run:
rally task plot2html 6fd9a19f-5cf8-4f76-ab72-2e34bb1d4996 --out output.html
* To get raw JSON output of task results, run:
rally task results 6fd9a19f-5cf8-4f76-ab72-2e34bb1d4996
Using task: 6fd9a19f-5cf8-4f76-ab72-2e34bb1d4996
Note that the Rally input task above uses *regular expressions* to specify the image and flavor name to be used for server creation, since concrete names might differ from installation to installation. If this benchmark task fails, the reason might be a non-existing image/flavor specified in the task. To check which images/flavors are available in the deployment you are currently benchmarking, you can use the *rally show* command:
.. code-block:: none
$ rally show images
+--------------------------------------+-----------------------+-----------+
| UUID | Name | Size (B) |
+--------------------------------------+-----------------------+-----------+
| 8dfd6098-0c26-4cb5-8e77-1ecb2db0b8ae | CentOS 6.5 (x86_64) | 344457216 |
| 2b8d119e-9461-48fc-885b-1477abe2edc5 | CirrOS 0.3.1 (x86_64) | 13147648 |
+--------------------------------------+-----------------------+-----------+
$ rally show flavors
+---------------------+-----------+-------+----------+-----------+-----------+
| ID | Name | vCPUs | RAM (MB) | Swap (MB) | Disk (GB) |
+---------------------+-----------+-------+----------+-----------+-----------+
| 1 | m1.tiny | 1 | 512 | | 1 |
| 2 | m1.small | 1 | 2048 | | 20 |
| 3 | m1.medium | 2 | 4096 | | 40 |
| 4 | m1.large | 4 | 8192 | | 80 |
| 5 | m1.xlarge | 8 | 16384 | | 160 |
+---------------------+-----------+-------+----------+-----------+-----------+
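Since the task selects the image by a regular expression rather than an exact name, you can quickly check which of your image names would actually match. A minimal Python sketch (the image names below are illustrative, not taken from any particular deployment):

```python
import re

# The task picks the image by regular expression, not by exact name.
pattern = re.compile("^cirros.*uec$")
images = [
    "cirros-0.3.1-x86_64-uec",  # matches: starts with "cirros", ends with "uec"
    "CentOS 6.5 (x86_64)",      # no match
    "CirrOS 0.3.1 (x86_64)",    # no match: the regex is case-sensitive
]
matching = [name for name in images if pattern.match(name)]
print(matching)  # -> ['cirros-0.3.1-x86_64-uec']
```

Note that a name like *CirrOS 0.3.1 (x86_64)* would not match this particular pattern because of the capital letters; this is exactly the kind of mismatch that can make a benchmark task fail.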
3. Report generation
^^^^^^^^^^^^^^^^^^^^
One of the most beautiful things in Rally is its task report generation mechanism. It enables you to create illustrative and comprehensive HTML reports based on the benchmarking data. To create such a report for the last task you have launched and open it at once, call:
.. code-block:: none
$ rally task report --out=report1.html --open
This will produce an HTML page with the overview of all the scenarios that you've included in the last benchmark task completed in Rally (in our case, this is just one scenario; we will cover the topic of multiple scenarios in one task in :ref:`the next step of our tutorial <tutorial_step_2_running_multple_benchmarks_in_a_single_task>`):
.. image:: ../images/Report-Overview.png
:width: 100%
:align: center
This aggregating table shows, for each scenario: the duration of the load it produced (*"Load duration"*); the overall execution time, including the duration of environment preparation with contexts (*"Full duration"*); the number of iterations (*"Iterations"*); the type of load used while running the scenario (*"Runner"*); the number of failed iterations (*"Errors"*); and finally whether the scenario has passed certain Success Criteria (*"SLA"*) that were set up by the user in the input configuration file (we will cover these criteria in :ref:`one of the next steps <tutorial_step_3_sla>`).
By navigating in the left panel, you can switch to the detailed view of the benchmark results for the only scenario we included in our task, namely **NovaServers.boot_and_delete_server**:
.. image:: ../images/Report-Scenario-Overview.png
:width: 100%
:align: center
This page, along with the description of the success criteria used to check the outcome of this scenario, shows some more detailed information and statistics about the duration of its iterations. Now, the *"Total durations"* table splits the duration of our scenario into the so-called **"atomic actions"**: in our case, the **"boot_and_delete_server"** scenario consists of two actions - **"boot_server"** and **"delete_server"**. You can also see how the scenario duration changed throughout its iterations in the *"Charts for the total duration"* section. Similar charts, but with a breakdown by atomic actions, appear if you switch to the *"Details"* tab of this page:
.. image:: ../images/Report-Scenario-Atomic.png
:width: 100%
:align: center
Note that all the charts on the report pages are dynamic: you can change their contents by clicking the switches above the graph and see more information about individual points by hovering the cursor over them.
Take some time to play around with these graphs
and then move on to :ref:`the next step of our tutorial <tutorial_step_2_running_multple_benchmarks_in_a_single_task>`.
..
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _tutorial_step_2_running_multple_benchmarks_in_a_single_task:
Step 2. Running multiple benchmarks in a single task
====================================================
1. Rally input task syntax
^^^^^^^^^^^^^^^^^^^^^^^^^^
Rally comes with a really great collection of :ref:`benchmark scenarios <tutorial_step_5_discovering_more_benchmark_scenarios>`, and in most real-world cases you will use multiple scenarios to test your OpenStack cloud. Rally makes it very easy to run **different benchmarks defined in a single benchmark task**. To do so, use the following syntax:
.. code-block:: none
{
"<ScenarioName1>": [<benchmark_config>, <benchmark_config2>, ...],
"<ScenarioName2>": [<benchmark_config>, ...]
}
where *<benchmark_config>*, as before, is a dictionary:
.. code-block:: none
{
"args": { scenario-specific arguments },
"runner": {"type": ..., }
...
}
2. Multiple benchmarks in a single task
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
As an example, let's edit our configuration file from :ref:`step 1 <tutorial_step_1_setting_up_env_and_running_benchmark_from_samples>` so that it tells Rally to launch not only the **NovaServers.boot_and_delete_server** scenario, but also the **KeystoneBasic.create_delete_user** scenario. All we have to do is append the configuration of the second scenario as yet another top-level key of our json file:
*multiple-scenarios.json*
.. code-block:: none
{
"NovaServers.boot_and_delete_server": [
{
"args": {
"flavor": {
"name": "m1.nano"
},
"image": {
"name": "^cirros.*uec$"
},
"force_delete": false
},
"runner": {
"type": "constant",
"times": 10,
"concurrency": 2
},
"context": {
"users": {
"tenants": 3,
"users_per_tenant": 2
}
}
}
],
"KeystoneBasic.create_delete_user": [
{
"args": {
"name_length": 10
},
"runner": {
"type": "constant",
"times": 10,
"concurrency": 3
}
}
]
}
Now you can start this benchmark task as usual:
.. code-block:: none
$ rally task start multiple-scenarios.json
...
+--------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
+--------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| nova.boot_server | 8.06 | 11.354 | 18.594 | 18.54 | 18.567 | 100.0% | 10 |
| nova.delete_server | 4.364 | 5.054 | 6.837 | 6.805 | 6.821 | 100.0% | 10 |
| total | 12.572 | 16.408 | 25.396 | 25.374 | 25.385 | 100.0% | 10 |
+--------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
Load duration: 84.1959171295
Full duration: 102.033041
--------------------------------------------------------------------------------
...
+----------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
+----------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| keystone.create_user | 0.676 | 0.875 | 1.03 | 1.02 | 1.025 | 100.0% | 10 |
| keystone.delete_user | 0.407 | 0.647 | 0.84 | 0.739 | 0.79 | 100.0% | 10 |
| total | 1.082 | 1.522 | 1.757 | 1.724 | 1.741 | 100.0% | 10 |
+----------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
Load duration: 5.72119688988
Full duration: 10.0808410645
...
Note that the HTML reports you can generate by typing **rally task report --out=report_name.html** after your benchmark task has completed will get richer as your benchmark task configuration file includes more benchmark scenarios. Let's take a look at the report overview page for a task that covers all the scenarios available in Rally:
.. code-block:: none
$ rally task report --out=report_multiple_scenarios.html --open
.. image:: ../images/Report-Multiple-Overview.png
:width: 100%
:align: center
3. Multiple configurations of the same scenario
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Yet another thing you can do in Rally is launch **the same benchmark scenario multiple times with different configurations**. That's why our configuration file stores a list for the key *"NovaServers.boot_and_delete_server"*: you can just append a different configuration of this benchmark scenario to this list. Let's say you want to run the **boot_and_delete_server** scenario twice: first using the *"m1.nano"* flavor and then using the *"m1.tiny"* flavor:
*multiple-configurations.json*
.. code-block:: none
{
"NovaServers.boot_and_delete_server": [
{
"args": {
"flavor": {
"name": "m1.nano"
},
"image": {
"name": "^cirros.*uec$"
},
"force_delete": false
},
"runner": {...},
"context": {...}
},
{
"args": {
"flavor": {
"name": "m1.tiny"
},
"image": {
"name": "^cirros.*uec$"
},
"force_delete": false
},
"runner": {...},
"context": {...}
}
]
}
That's it! You will again get the results for each configuration separately:
.. code-block:: none
$ rally task start --task=multiple-configurations.json
...
+--------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
+--------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| nova.boot_server | 7.896 | 9.433 | 13.14 | 11.329 | 12.234 | 100.0% | 10 |
| nova.delete_server | 4.435 | 4.898 | 6.975 | 5.144 | 6.059 | 100.0% | 10 |
| total | 12.404 | 14.331 | 17.979 | 16.72 | 17.349 | 100.0% | 10 |
+--------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
Load duration: 73.2339417934
Full duration: 91.1692159176
--------------------------------------------------------------------------------
...
+--------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
+--------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| nova.boot_server | 8.207 | 8.91 | 9.823 | 9.692 | 9.758 | 100.0% | 10 |
| nova.delete_server | 4.405 | 4.767 | 6.477 | 4.904 | 5.691 | 100.0% | 10 |
| total | 12.735 | 13.677 | 16.301 | 14.596 | 15.449 | 100.0% | 10 |
+--------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
Load duration: 71.029528141
Full duration: 88.0259010792
...
The HTML report will also look similar to what we have seen before:
.. code-block:: none
$ rally task report --out=report_multiple_configurations.html --open
.. image:: ../images/Report-Multiple-Configurations-Overview.png
:width: 100%
:align: center
..
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _tutorial_step_3_adding_success_criteria_for_benchmarks:
Step 3. Adding success criteria (SLA) for benchmarks
====================================================
1. SLA - Service-Level Agreement (Success Criteria)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Rally allows you to set success criteria (also called *SLA - Service-Level Agreement*) for every benchmark. Rally will automatically check them for you.
To configure the SLA, add the *"sla"* section to the configuration of the corresponding benchmark (the check name is a key associated with its target value). You can combine different success criteria:
.. code-block:: none
{
"NovaServers.boot_and_delete_server": [
{
"args": {
...
},
"runner": {
...
},
"context": {
...
},
"sla": {
"max_seconds_per_iteration": 10,
"max_failure_percent": 25
}
}
]
}
Such a configuration will mark the **NovaServers.boot_and_delete_server** benchmark scenario as not successful if some iteration took more than 10 seconds or if more than 25% of iterations failed.
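The logic behind these two criteria can be sketched as follows (a simplified illustration with made-up iteration data, not Rally's actual implementation):

```python
# Simplified illustration of the two SLA checks above (not Rally's
# actual code). Each iteration is represented by its duration in
# seconds and a flag saying whether it failed.

def check_sla(iterations, max_seconds_per_iteration, max_failure_percent):
    durations = [it["duration"] for it in iterations]
    failures = sum(1 for it in iterations if it["failed"])
    failure_percent = 100.0 * failures / len(iterations)
    return (max(durations) <= max_seconds_per_iteration and
            failure_percent <= max_failure_percent)

iterations = [
    {"duration": 9.0, "failed": False},
    {"duration": 12.5, "failed": False},  # exceeds the 10-second limit
    {"duration": 8.1, "failed": False},
    {"duration": 8.4, "failed": False},
]
print(check_sla(iterations, max_seconds_per_iteration=10,
                max_failure_percent=25))  # -> False
```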
2. Checking SLA
^^^^^^^^^^^^^^^
Let us show you how Rally SLA checks work using a simple example based on **Dummy benchmark scenarios**. These scenarios do not actually perform any OpenStack-related work but are very useful for testing the behaviour of Rally. Let us put 2 scenarios in a new task, *test-sla.json* -- one that does nothing and another that just throws an exception:
.. code-block:: none
{
"Dummy.dummy": [
{
"args": {},
"runner": {
"type": "constant",
"times": 5,
"concurrency": 2
},
"context": {
"users": {
"tenants": 3,
"users_per_tenant": 2
}
},
"sla": {
"failure_rate": {"max": 0.0}
}
}
],
"Dummy.dummy_exception": [
{
"args": {},
"runner": {
"type": "constant",
"times": 5,
"concurrency": 2
},
"context": {
"users": {
"tenants": 3,
"users_per_tenant": 2
}
},
"sla": {
"failure_rate": {"max": 0.0}
}
}
]
}
Note that both scenarios in this task have the **maximum failure rate of 0%** as their **success criterion**. We expect that the first scenario will pass this criterion while the second will fail it. Let's start the task:
.. code-block:: none
$ rally task start test-sla.json
...
After the task completes, run *rally task sla_check* to check the results against the success criteria you defined in the task:
.. code-block:: none
$ rally task sla_check
+-----------------------+-----+--------------+--------+-------------------------------------------------------------------------------------------------------+
| benchmark | pos | criterion | status | detail |
+-----------------------+-----+--------------+--------+-------------------------------------------------------------------------------------------------------+
| Dummy.dummy | 0 | failure_rate | PASS | Maximum failure rate percent 0.0% failures, minimum failure rate percent 0% failures, actually 0.0% |
| Dummy.dummy_exception | 0 | failure_rate | FAIL | Maximum failure rate percent 0.0% failures, minimum failure rate percent 0% failures, actually 100.0% |
+-----------------------+-----+--------------+--------+-------------------------------------------------------------------------------------------------------+
Exactly as expected.
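The *failure_rate* criterion above can be thought of as a simple bounds check on the percentage of failed iterations. A hedged sketch of that idea (simplified; not Rally's actual implementation):

```python
# Illustrative sketch of the failure_rate criterion (not Rally's actual
# code). It passes when the observed percentage of failed iterations
# stays within the [min, max] bounds.

def failure_rate_check(failed, total, min_percent=0.0, max_percent=0.0):
    actual = 100.0 * failed / total
    return min_percent <= actual <= max_percent

print(failure_rate_check(failed=0, total=5))  # Dummy.dummy: True (PASS)
print(failure_rate_check(failed=5, total=5))  # Dummy.dummy_exception: False (FAIL)
```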
3. SLA in task report
^^^^^^^^^^^^^^^^^^^^^
SLA checks are nicely visualized in task reports. Generate one:
.. code-block:: none
$ rally task report --out=report_sla.html --open
Benchmark scenarios that have passed SLA have a green check on the overview page:
.. image:: ../images/Report-SLA-Overview.png
:width: 100%
:align: center
Somewhat more detailed information about SLA is displayed on the scenario pages:
.. image:: ../images/Report-SLA-Scenario.png
:width: 100%
:align: center
..
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _tutorial_step_4_working_with_multple_openstack_clouds:
Step 4. Working with multiple OpenStack clouds
==============================================
1. Multiple OpenStack clouds in Rally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Rally is an awesome tool that allows you to work with multiple clouds and can itself deploy them. We already know how to work with :ref:`a single cloud <tutorial_step_1_setting_up_env_and_running_benchmark_from_samples>`. Let us now register 2 clouds in Rally: one that we have proper access to and another that we will deliberately register with wrong credentials.
.. code-block:: none
$ . openrc admin admin # openrc with correct credentials
$ rally deployment create --fromenv --name=cloud-1
+--------------------------------------+----------------------------+------------+------------------+--------+
| uuid | created_at | name | status | active |
+--------------------------------------+----------------------------+------------+------------------+--------+
| 4251b491-73b2-422a-aecb-695a94165b5e | 2015-01-18 00:11:14.757203 | cloud-1 | deploy->finished | |
+--------------------------------------+----------------------------+------------+------------------+--------+
Using deployment: 4251b491-73b2-422a-aecb-695a94165b5e
~/.rally/openrc was updated
...
$ . bad_openrc admin admin # openrc with wrong credentials
$ rally deployment create --fromenv --name=cloud-2
+--------------------------------------+----------------------------+------------+------------------+--------+
| uuid | created_at | name | status | active |
+--------------------------------------+----------------------------+------------+------------------+--------+
| 658b9bae-1f9c-4036-9400-9e71e88864fc | 2015-01-18 00:38:26.127171 | cloud-2 | deploy->finished | |
+--------------------------------------+----------------------------+------------+------------------+--------+
Using deployment: 658b9bae-1f9c-4036-9400-9e71e88864fc
~/.rally/openrc was updated
...
Let us now list the deployments we have created:
.. code-block:: none
$ rally deployment list
+--------------------------------------+----------------------------+------------+------------------+--------+
| uuid | created_at | name | status | active |
+--------------------------------------+----------------------------+------------+------------------+--------+
| 4251b491-73b2-422a-aecb-695a94165b5e | 2015-01-05 00:11:14.757203 | cloud-1 | deploy->finished | |
| 658b9bae-1f9c-4036-9400-9e71e88864fc | 2015-01-05 00:40:58.451435 | cloud-2 | deploy->finished | * |
+--------------------------------------+----------------------------+------------+------------------+--------+
Note that the second deployment is marked as **"active"** because it is the one we created most recently. This means that it will be used automatically by the commands that need a deployment, like *rally task start ...* or *rally deployment check* (unless its UUID or name is passed explicitly via the *--deployment* parameter):
.. code-block:: none
$ rally deployment check
Authentication Issues: wrong keystone credentials specified in your endpoint properties. (HTTP 401).
$ rally deployment check --deployment=cloud-1
keystone endpoints are valid and following services are available:
+----------+----------------+-----------+
| services | type | status |
+----------+----------------+-----------+
| cinder | volume | Available |
| cinderv2 | volumev2 | Available |
| ec2 | ec2 | Available |
| glance | image | Available |
| heat | orchestration | Available |
| heat-cfn | cloudformation | Available |
| keystone | identity | Available |
| nova | compute | Available |
| novav21 | computev21 | Available |
| s3 | s3 | Available |
+----------+----------------+-----------+
You can also switch the active deployment using the **rally use deployment** command:
.. code-block:: none
$ rally use deployment cloud-1
Using deployment: 4251b491-73b2-422a-aecb-695a94165b5e
~/.rally/openrc was updated
...
$ rally deployment check
keystone endpoints are valid and following services are available:
+----------+----------------+-----------+
| services | type | status |
+----------+----------------+-----------+
| cinder | volume | Available |
| cinderv2 | volumev2 | Available |
| ec2 | ec2 | Available |
| glance | image | Available |
| heat | orchestration | Available |
| heat-cfn | cloudformation | Available |
| keystone | identity | Available |
| nova | compute | Available |
| novav21 | computev21 | Available |
| s3 | s3 | Available |
+----------+----------------+-----------+
Note the first two lines of the CLI output for the *rally use deployment* command. They tell you the UUID of the new active deployment and also say that the *~/.rally/openrc* file was updated -- this is the place where the "active" UUID is actually stored by Rally.
One last detail about managing different deployments in Rally: the *rally task list* command outputs only those tasks that were run against the currently active deployment; you have to provide the *--all-deployments* parameter to list all the tasks:
.. code-block:: none
$ rally task list
+--------------------------------------+-----------------+----------------------------+----------------+----------+--------+-----+
| uuid | deployment_name | created_at | duration | status | failed | tag |
+--------------------------------------+-----------------+----------------------------+----------------+----------+--------+-----+
| c21a6ecb-57b2-43d6-bbbb-d7a827f1b420 | cloud-1 | 2015-01-05 01:00:42.099596 | 0:00:13.419226 | finished | False | |
| f6dad6ab-1a6d-450d-8981-f77062c6ef4f | cloud-1 | 2015-01-05 01:05:57.653253 | 0:00:14.160493 | finished | False | |
+--------------------------------------+-----------------+----------------------------+----------------+----------+--------+-----+
$ rally task list --all-deployments
+--------------------------------------+-----------------+----------------------------+----------------+----------+--------+-----+
| uuid | deployment_name | created_at | duration | status | failed | tag |
+--------------------------------------+-----------------+----------------------------+----------------+----------+--------+-----+
| c21a6ecb-57b2-43d6-bbbb-d7a827f1b420 | cloud-1 | 2015-01-05 01:00:42.099596 | 0:00:13.419226 | finished | False | |
| f6dad6ab-1a6d-450d-8981-f77062c6ef4f | cloud-1 | 2015-01-05 01:05:57.653253 | 0:00:14.160493 | finished | False | |
| 6fd9a19f-5cf8-4f76-ab72-2e34bb1d4996 | cloud-2 | 2015-01-05 01:14:51.428958 | 0:00:15.042265 | finished | False | |
+--------------------------------------+-----------------+----------------------------+----------------+----------+--------+-----+
2. Rally as a deployment engine
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Along with supporting already existing OpenStack deployments, Rally itself can **deploy OpenStack automatically** by using one of its *deployment engines*. Take a look at the `deployment configuration file samples <https://github.com/stackforge/rally/tree/master/samples/deployments>`_. For example, *devstack-in-existing-servers.json* is a deployment configuration file that tells Rally to deploy OpenStack with **DevStack** on a server with the given credentials:
.. code-block:: none
{
"type": "DevstackEngine",
"provider": {
"type": "ExistingServers",
"credentials": [{"user": "root", "host": "10.2.0.8"}]
}
}
You can try this out, say, with a virtual machine. Edit the configuration file with your IP address/user name and run, as usual:
.. code-block:: none
$ rally deployment create --file=samples/deployments/devstack-in-existing-servers.json --name=new-devstack
+-------------------+----------------------------+--------------+------------------+
| uuid              | created_at                 | name         | status           |
+-------------------+----------------------------+--------------+------------------+
| <Deployment UUID> | 2015-01-10 22:00:28.270941 | new-devstack | deploy->finished |
+-------------------+----------------------------+--------------+------------------+
Using deployment: <Deployment UUID>
..
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _tutorial_step_5_discovering_more_benchmark_scenarios:
Step 5. Discovering more benchmark scenarios in Rally
=====================================================
1. Scenarios in the Rally repository
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Rally currently comes with a great collection of benchmark scenarios that use the API of different OpenStack projects like **Keystone**, **Nova**, **Cinder**, **Glance** and so on. The good news is that you can combine multiple benchmark scenarios in one task to benchmark your cloud in a comprehensive way.
First, let's see what scenarios are available in Rally. One of the ways to discover these scenarios is just to inspect their `source code <https://github.com/stackforge/rally/tree/master/rally/benchmark/scenarios>`_.
2. Rally built-in search engine
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A much more convenient way to learn about different benchmark scenarios in Rally, however, is to use a special **search engine** embedded into its Command-Line Interface, which, for a given **search query**, prints documentation for the corresponding benchmark scenario (and also supports other Rally entities like SLA).
To search for some specific benchmark scenario by its name or by its group, use the **rally info find <query>** command:
.. code-block:: none
$ rally info find create_meter_and_get_stats
--------------------------------------------------------------------------------
CeilometerStats.create_meter_and_get_stats (benchmark scenario)
--------------------------------------------------------------------------------
Create a meter and fetch its statistics.
Meter is first created and then statistics is fetched for the same
using GET /v2/meters/(meter_name)/statistics.
Parameters:
- kwargs: contains optional arguments to create a meter
$ rally info find some_non_existing_benchmark
Failed to find any docs for query: 'some_non_existing_benchmark'
You can also get the list of different benchmark scenario groups available in Rally by typing the **rally info find BenchmarkScenarios** command:
.. code-block:: none
$ rally info find BenchmarkScenarios
--------------------------------------------------------------------------------
Rally - Benchmark scenarios
--------------------------------------------------------------------------------
Benchmark scenarios are what Rally actually uses to test the performance of an OpenStack deployment.
Each Benchmark scenario implements a sequence of atomic operations (server calls) to simulate
interesting user/operator/client activity in some typical use case, usually that of a specific OpenStack
project. Iterative execution of this sequence produces some kind of load on the target cloud.
Benchmark scenarios play the role of building blocks in benchmark task configuration files.
Scenarios in Rally are put together in groups. Each scenario group is concentrated on some specific
OpenStack functionality. For example, the "NovaServers" scenario group contains scenarios that employ
several basic operations available in Nova.
List of Benchmark scenario groups:
--------------------------------------------------------------------------------------------
Name Description
--------------------------------------------------------------------------------------------
Authenticate Benchmark scenarios for the authentication mechanism.
CeilometerAlarms Benchmark scenarios for Ceilometer Alarms API.
CeilometerMeters Benchmark scenarios for Ceilometer Meters API.
CeilometerQueries Benchmark scenarios for Ceilometer Queries API.
CeilometerResource Benchmark scenarios for Ceilometer Resource API.
CeilometerStats Benchmark scenarios for Ceilometer Stats API.
CinderVolumes Benchmark scenarios for Cinder Volumes.
DesignateBasic Basic benchmark scenarios for Designate.
Dummy Dummy benchmarks for testing Rally benchmark engine at scale.
GlanceImages Benchmark scenarios for Glance images.
HeatStacks Benchmark scenarios for Heat stacks.
KeystoneBasic Basic benchmark scenarios for Keystone.
NeutronNetworks Benchmark scenarios for Neutron.
NovaSecGroup Benchmark scenarios for Nova security groups.
NovaServers Benchmark scenarios for Nova servers.
Quotas Benchmark scenarios for quotas.
Requests Benchmark scenarios for HTTP requests.
SaharaClusters Benchmark scenarios for Sahara clusters.
SaharaJob Benchmark scenarios for Sahara jobs.
SaharaNodeGroupTemplates Benchmark scenarios for Sahara node group templates.
TempestScenario Benchmark scenarios that launch Tempest tests.
VMTasks Benchmark scenarios that are to be run inside VM instances.
ZaqarBasic Benchmark scenarios for Zaqar.
--------------------------------------------------------------------------------------------
To get information about benchmark scenarios inside each scenario group, run:
$ rally info find <ScenarioGroupName>
..
Copyright 2014 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _usage:
Usage
=====
Usage demo
----------
**NOTE**: Throughout this demo, we assume that you have a configured :ref:`Rally installation <installation>` and an already existing OpenStack deployment with Keystone available at <KEYSTONE_AUTH_URL>.
Step 1. Deployment initialization (use existing cloud)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
First, you have to provide Rally with an OpenStack deployment it is going to benchmark. This is done through deployment **configuration files**. The actual deployment can either be created by Rally (see /doc/samples for configuration examples) or, as in our example, be an already existing one. The configuration file (let's call it **existing.json**) should contain the deployment strategy (in our case, the deployment will be performed by the so-called **"ExistingCloud"**, since the deployment is ready to use) and some specific parameters (for ExistingCloud, an endpoint with administrator permissions):
.. code-block:: none
{
"type": "ExistingCloud",
"endpoint": {
"auth_url": <KEYSTONE_AUTH_URL>,
"username": <ADMIN_USER_NAME>,
"password": <ADMIN_PASSWORD>,
"tenant_name": <ADMIN_TENANT>
}
}
To register this deployment in Rally, use the **deployment create** command:
.. code-block:: none
$ rally deployment create --filename=existing.json --name=existing
+---------------------------+----------------------------+----------+------------------+
| uuid | created_at | name | status |
+---------------------------+----------------------------+----------+------------------+
| <Deployment UUID> | 2014-02-15 22:00:28.270941 | existing | deploy->finished |
+---------------------------+----------------------------+----------+------------------+
Using deployment : <Deployment UUID>
Note the last line in the output. It says that the just created deployment is now used by Rally; that means that all the benchmarking operations from now on are going to be performed on this deployment. In case you want to switch to another deployment, execute the **use deployment** command:
.. code-block:: none
$ rally use deployment <Another deployment name or UUID>
Using deployment : <Another deployment name or UUID>
Finally, the **deployment check** command enables you to verify that your current deployment is healthy and ready to be benchmarked:
.. code-block:: none
$ rally deployment check
+----------+-----------+-----------+
| services | type | status |
+----------+-----------+-----------+
| nova | compute | Available |
| cinderv2 | volumev2 | Available |
| novav3 | computev3 | Available |
| s3 | s3 | Available |
| glance | image | Available |
| cinder | volume | Available |
| ec2 | ec2 | Available |
| keystone | identity | Available |
+----------+-----------+-----------+
Step 2. Benchmarking
^^^^^^^^^^^^^^^^^^^^
Now that we have a working and registered deployment, we can start benchmarking it. Again, the sequence of benchmark scenarios to be launched by Rally should be specified in a **benchmark task configuration file**. Note that there is already a set of nice benchmark task examples in *doc/samples/tasks/* (assuming that you are in the Rally root directory). The natural thing to do would be to try one of these sample benchmark tasks, say, the one that boots and deletes multiple servers (*doc/samples/tasks/nova/boot-and-delete.json*). To start a benchmark task, run the **task start** command:
.. code-block:: none
ubuntu@tempeste-test:~$ rally -v task start rally/doc/samples/tasks/nova/boot-and-delete.json
=============================================================================================
Task 392c803b-37fd-4915-9732-3523f4252e9b is started
--------------------------------------------------------------------------------
2014-03-20 06:17:39.994 27502 INFO rally.benchmark.engine [-] Task 392c803b-37fd-4915-9732-3523f4252e9b | Starting: Check cloud.
2014-03-20 06:17:40.123 27502 INFO rally.benchmark.engine [-] Task 392c803b-37fd-4915-9732-3523f4252e9b | Completed: Check cloud.
2014-03-20 06:17:40.123 27502 INFO rally.benchmark.engine [-] Task 392c803b-37fd-4915-9732-3523f4252e9b | Starting: Task validation.
2014-03-20 06:17:40.133 27502 INFO rally.benchmark.engine [-] Task 392c803b-37fd-4915-9732-3523f4252e9b | Starting: Task validation of scenarios names.
2014-03-20 06:17:40.137 27502 INFO rally.benchmark.engine [-] Task 392c803b-37fd-4915-9732-3523f4252e9b | Completed: Task validation of scenarios names.
2014-03-20 06:17:40.138 27502 INFO rally.benchmark.engine [-] Task 392c803b-37fd-4915-9732-3523f4252e9b | Starting: Task validation of syntax.
2014-03-20 06:17:40.140 27502 INFO rally.benchmark.engine [-] Task 392c803b-37fd-4915-9732-3523f4252e9b | Completed: Task validation of syntax.
2014-03-20 06:17:40.140 27502 INFO rally.benchmark.engine [-] Task 392c803b-37fd-4915-9732-3523f4252e9b | Starting: Task validation of semantic.
2014-03-20 06:17:41.098 27502 ERROR glanceclient.common.http [-] Request returned failure status.
================================================================================
Task 392c803b-37fd-4915-9732-3523f4252e9b is failed.
--------------------------------------------------------------------------------
<class 'rally.exceptions.InvalidBenchmarkConfig'>
Task config is invalid.
Benchmark NovaServers.boot_and_delete_server has wrong configuration of args at position 0: {'image_id': '73257560-c59b-4275-a1ec-ab140e5b9979', 'flavor_id': 1}
Reason: Image with id '73257560-c59b-4275-a1ec-ab140e5b9979' not found
For more details run:
rally -vd task detailed 392c803b-37fd-4915-9732-3523f4252e9b
This attempt, however, will most likely fail because of an **input arguments validation error** (caused by a non-existing image name): the benchmark scenario that boots a server needs a concrete image available in the OpenStack deployment. In prior versions of Rally, such resources were referenced by UUID (through arguments like "flavor_id" and "image_id"); now, they are simply referenced by name.
To get started, make a local copy of the sample benchmark task:
.. code-block:: none
cp doc/samples/tasks/nova/boot-and-delete.json my-task.json
and then edit it with the resource names from your OpenStack installation:
.. code-block:: none
{
    "NovaServers.boot_and_delete_server": [
        {
            "args": {
                "flavor": {
                    "name": "m1.tiny"
                },
                "image": {
                    "name": "CirrOS 0.3.1 (x86_64)"
                }
            },
            "runner": {
                "type": "constant",
                "times": 10,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 3,
                    "users_per_tenant": 2
                }
            }
        }
    ]
}
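Conceptually, the "constant" runner in this task schedules the scenario a fixed number of times over a fixed pool of concurrent workers (10 iterations, at most 2 in flight at once). The following standalone sketch illustrates that scheduling model; it is a simplified illustration in plain Python, not Rally's actual runner code:

```python
import concurrent.futures
import time

def run_constant(scenario, times, concurrency):
    """Run `scenario` `times` times with at most `concurrency` in flight."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(scenario, i) for i in range(times)]
        return [f.result() for f in concurrent.futures.as_completed(futures)]

def scenario(i):
    time.sleep(0.01)  # stand-in for the real boot-and-delete work
    return i

results = run_constant(scenario, times=10, concurrency=2)
print(sorted(results))  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Every iteration runs to completion, but never more than `concurrency` of them at the same time, which is what produces the steady load pattern this runner type is named after.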
To obtain proper image and flavor names, you can use Rally's **show** subcommand. Let's get a proper image name:
.. code-block:: none
$ rally show images
+--------------------------------------+-----------------------+-----------+
| UUID | Name | Size (B) |
+--------------------------------------+-----------------------+-----------+
| 8dfd6098-0c26-4cb5-8e77-1ecb2db0b8ae | CentOS 6.5 (x86_64) | 344457216 |
| 2b8d119e-9461-48fc-885b-1477abe2edc5 | CirrOS 0.3.1 (x86_64) | 13147648 |
+--------------------------------------+-----------------------+-----------+
and a proper flavor name:
.. code-block:: none
$ rally show flavors
+---------------------+-----------+-------+----------+-----------+-----------+
| ID | Name | vCPUs | RAM (MB) | Swap (MB) | Disk (GB) |
+---------------------+-----------+-------+----------+-----------+-----------+
| 1 | m1.tiny | 1 | 512 | | 1 |
| 2 | m1.small | 1 | 2048 | | 20 |
| 3 | m1.medium | 2 | 4096 | | 40 |
| 4 | m1.large | 4 | 8192 | | 80 |
| 5 | m1.xlarge | 8 | 16384 | | 160 |
+---------------------+-----------+-------+----------+-----------+-----------+
After you've edited the **my-task.json** file, you can run this benchmark task again. This time, let's also use the **--verbose** parameter, which makes Rally produce more detailed logging while it performs benchmarking:
.. code-block:: none
$ rally -v task start my-task.json --tag my_task
================================================================================
Task my_task 87eb8ff3-07f9-4941-b1be-63e707aceb1e is started
--------------------------------------------------------------------------------
2014-03-20 06:26:36.431 27812 INFO rally.benchmark.engine [-] Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e | Starting: Check cloud.
2014-03-20 06:26:36.555 27812 INFO rally.benchmark.engine [-] Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e | Completed: Check cloud.
2014-03-20 06:26:36.555 27812 INFO rally.benchmark.engine [-] Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e | Starting: Task validation.
2014-03-20 06:26:36.564 27812 INFO rally.benchmark.engine [-] Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e | Starting: Task validation of scenarios names.
2014-03-20 06:26:36.568 27812 INFO rally.benchmark.engine [-] Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e | Completed: Task validation of scenarios names.
2014-03-20 06:26:36.568 27812 INFO rally.benchmark.engine [-] Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e | Starting: Task validation of syntax.
2014-03-20 06:26:36.571 27812 INFO rally.benchmark.engine [-] Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e | Completed: Task validation of syntax.
2014-03-20 06:26:36.571 27812 INFO rally.benchmark.engine [-] Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e | Starting: Task validation of semantic.
2014-03-20 06:26:37.316 27812 INFO rally.benchmark.engine [-] Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e | Completed: Task validation of semantic.
2014-03-20 06:26:37.316 27812 INFO rally.benchmark.engine [-] Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e | Completed: Task validation.
2014-03-20 06:26:37.316 27812 INFO rally.benchmark.engine [-] Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e | Starting: Benchmarking.
2014-03-20 06:26:41.596 27820 INFO rally.benchmark.runners.base [-] ITER: 0 START
2014-03-20 06:26:41.596 27821 INFO rally.benchmark.runners.base [-] ITER: 1 START
2014-03-20 06:26:46.105 27820 INFO rally.benchmark.runners.base [-] ITER: 0 END: Error <class 'rally.exceptions.GetResourceNotFound'>: Resource not found: `404`
2014-03-20 06:26:46.105 27820 INFO rally.benchmark.runners.base [-] ITER: 2 START
2014-03-20 06:26:46.451 27821 INFO rally.benchmark.runners.base [-] ITER: 1 END: Error <type 'exceptions.AttributeError'>: status
2014-03-20 06:26:46.452 27821 INFO rally.benchmark.runners.base [-] ITER: 3 START
2014-03-20 06:26:46.497 27820 INFO rally.benchmark.runners.base [-] ITER: 2 END: Error <class 'novaclient.exceptions.NotFound'>: Instance could not be found (HTTP 404) (Request-ID: req-dfd372e9-728d-49ca-87e1-54cbf593b2be)
2014-03-20 06:26:46.497 27820 INFO rally.benchmark.runners.base [-] ITER: 4 START
2014-03-20 06:26:53.274 27821 INFO rally.benchmark.runners.base [-] ITER: 3 END: OK
2014-03-20 06:26:53.275 27821 INFO rally.benchmark.runners.base [-] ITER: 5 START
2014-03-20 06:26:53.709 27820 INFO rally.benchmark.runners.base [-] ITER: 4 END: OK
2014-03-20 06:26:53.710 27820 INFO rally.benchmark.runners.base [-] ITER: 6 START
2014-03-20 06:26:59.942 27821 INFO rally.benchmark.runners.base [-] ITER: 5 END: OK
2014-03-20 06:26:59.943 27821 INFO rally.benchmark.runners.base [-] ITER: 7 START
2014-03-20 06:27:00.601 27820 INFO rally.benchmark.runners.base [-] ITER: 6 END: OK
2014-03-20 06:27:00.601 27820 INFO rally.benchmark.runners.base [-] ITER: 8 START
2014-03-20 06:27:06.635 27821 INFO rally.benchmark.runners.base [-] ITER: 7 END: OK
2014-03-20 06:27:06.635 27821 INFO rally.benchmark.runners.base [-] ITER: 9 START
2014-03-20 06:27:07.414 27820 INFO rally.benchmark.runners.base [-] ITER: 8 END: OK
2014-03-20 06:27:13.311 27821 INFO rally.benchmark.runners.base [-] ITER: 9 END: OK
2014-03-20 06:27:14.302 27812 WARNING rally.benchmark.context.secgroup [-] Unable to delete secgroup: 43
2014-03-20 06:27:14.336 27812 WARNING rally.benchmark.context.secgroup [-] Unable to delete secgroup: 45
2014-03-20 06:27:14.336 27812 INFO rally.benchmark.context.cleaner [-] Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e | Starting: Cleanup users resources.
2014-03-20 06:27:25.498 27812 INFO rally.benchmark.context.cleaner [-] Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e | Completed: Cleanup users resources.
2014-03-20 06:27:25.498 27812 INFO rally.benchmark.context.cleaner [-] Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e | Starting: Cleanup admin resources.
2014-03-20 06:27:25.689 27812 INFO rally.benchmark.context.cleaner [-] Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e | Completed: Cleanup admin resources.
2014-03-20 06:27:26.092 27812 INFO rally.benchmark.engine [-] Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e | Completed: Benchmarking.
================================================================================
Task 87eb8ff3-07f9-4941-b1be-63e707aceb1e is finished.
--------------------------------------------------------------------------------
test scenario NovaServers.boot_and_delete_server
args position 0
args values:
{u'args': {u'flavor_id': 1,
u'image_id': u'976dfd41-d8d5-4688-a8c1-8f196316d8b9'},
u'context': {u'users': {u'tenants': 3, u'users_per_tenant': 2}},
u'runner': {u'concurrency': 2, u'times': 10, u'type': u'continuous'}}
+---------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
+---------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| nova.boot_server | 0.480 | 0.501 | 0.521 | 0.521 | 0.521 | 100.0% | 10 |
| nova.delete_server | 0.185 | 0.189 | 0.195 | 0.194 | 0.194 | 70.0% | 10 |
| total | 0.666 | 0.690 | 0.715 | 0.715 | 0.715 | 70.0% | 10 |
+---------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
HINTS:
* To plot HTML graphics with this data, run:
rally task plot2html 87eb8ff3-07f9-4941-b1be-63e707aceb1e --out output.html
* To get raw JSON output of task results, run:
rally task results 87eb8ff3-07f9-4941-b1be-63e707aceb1e
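The min/avg/max and percentile columns in the summary table above are plain order statistics over the per-iteration durations. As a rough sketch, here is one common way such percentiles can be computed (hypothetical durations and a linear-interpolation method; Rally's exact implementation may differ):

```python
def percentile(values, percent):
    """Percentile with linear interpolation between closest ranks."""
    values = sorted(values)
    k = (len(values) - 1) * percent / 100.0
    f = int(k)                          # lower rank
    c = min(f + 1, len(values) - 1)     # upper rank
    return values[f] + (values[c] - values[f]) * (k - f)

# Hypothetical per-iteration durations, similar to the nova.boot_server row:
durations = [0.480, 0.485, 0.492, 0.498, 0.501,
             0.505, 0.510, 0.515, 0.518, 0.521]
print(min(durations), max(durations))
print(round(percentile(durations, 90), 3), round(percentile(durations, 95), 3))
```

The "success" column is independent of these statistics: it is simply the fraction of iterations that completed without an error.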
Available Rally facilities
--------------------------
To be able to run complex benchmark scenarios on somewhat more sophisticated OpenStack deployment types, you should familiarize yourself with more **deploy engines, server providers** and **benchmark scenarios** available in Rally.
..
List of available Deploy engines (including their description and usage examples): :ref:`Deploy engines <deploy_engines>`
..
List of available Server providers (including their description and usage examples): :ref:`Server providers <server_providers>`
You can also learn about different Rally entities without leaving the Command Line Interface. There is a special **search engine** embedded into Rally, which, for a given *search query*, prints documentation for the corresponding benchmark scenario/deploy engine/... as fetched from the source code. This is accomplished by the **rally info find** command:
.. code-block:: none
$ rally info find *create_meter_and_get_stats*
CeilometerStats.create_meter_and_get_stats (benchmark scenario).
Test creating a meter and fetching its statistics.
Meter is first created and then statistics is fetched for the same
using GET /v2/meters/(meter_name)/statistics.
Parameters:
- name_length: length of generated (random) part of meter name
- kwargs: contains optional arguments to create a meter
$ rally info find *Authenticate*
Authenticate (benchmark scenario group).
This class should contain authentication mechanism.
For different types of clients like Keystone.
$ rally info find *some_non_existing_benchmark*
Failed to find any docs for query: 'some_non_existing_benchmark'
View File
@ -1,8 +1,28 @@
..
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _user_stories:
User stories
============
Many users of Rally were able to make interesting discoveries concerning their OpenStack clouds using our benchmarking tool. Numerous user stories presented below show how Rally has made it possible to find performance bugs and validate improvements for different OpenStack installations.
.. toctree::
:glob:
:maxdepth: 1
stories/**
View File
@ -1,12 +1,15 @@
====================================================================================
4x performance increase in Keystone inside Apache using the token creation benchmark
====================================================================================
Keystone token creation benchmark
=================================
Authenticate users with keystone to get tokens.
*(Contributed by Neependra Khare, Red Hat)*
Below we describe how we were able to achieve and verify a 4x performance improvement of Keystone running inside Apache. To do that, we ran a Keystone token creation benchmark with Rally under different loads (this benchmark scenario essentially just authenticates users with Keystone to get tokens).
Goal
----
- To get data about performance of token creation under different load.
- To ensure that keystone with increased public_workers/admin_workers values
- Get the data about performance of token creation under different load.
- Ensure that keystone with increased public_workers/admin_workers values
and under Apache works better than the default setup.
Summary
View File
@ -1,8 +1,10 @@
=========================================================
Testing how 20 node HA cloud performs on creating 400 VMs
=========================================================
==========================================================================================
Finding a Keystone bug while benchmarking 20 node HA cloud performance at creating 400 VMs
==========================================================================================
Boot significant amount of servers on a cluster and ensure that we have reasonable performance and there are no errors.
*(Contributed by Alexander Maretskiy, Mirantis)*
Below we describe how we found a `bug in keystone <https://bugs.launchpad.net/keystone/+bug/1360446>`_ and achieved a 2x average performance increase at booting Nova servers after fixing that bug. Our initial goal was to benchmark the booting of a significant number of servers on a cluster (running on a custom build of `Mirantis OpenStack <https://software.mirantis.com/>`_ v5.1) and to ensure that this operation has reasonable performance and completes with no errors.
Goal
----
@ -37,7 +39,7 @@ Cluster
This cluster was created via Fuel Dashboard interface.
+----------------------+-----------------------------------------------------------------------------+
| Deployment | Custom build of `MirantisOpenStack <https://software.mirantis.com/>`_ v5.1 |
| Deployment | Custom build of `Mirantis OpenStack <https://software.mirantis.com/>`_ v5.1 |
+----------------------+-----------------------------------------------------------------------------+
| OpenStack release | Icehouse |
+----------------------+-----------------------------------------------------------------------------+
@ -65,7 +67,7 @@ https://review.openstack.org/#/c/96300/
**Deployment**
Rally was deployed for cluster using `ExistingCloud <https://github.com/stackforge/rally/blob/master/doc/samples/deployments/existing.json>`_ type of deployment.
Rally was deployed for cluster using `ExistingCloud <https://github.com/stackforge/rally/blob/master/samples/deployments/existing.json>`_ type of deployment.
**Server flavor** ::
@ -165,7 +167,7 @@ That is how a `bug in keystone <https://bugs.launchpad.net/keystone/+bug/1360446
**Second run, with bugfix:**
After a patch was applied (using RPC instead of neutron client in metadata agent), we got **100% success and 2x improved avg perfomance**:
After a patch was applied (using RPC instead of neutron client in metadata agent), we got **100% success and 2x improved average performance**:
+------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
View File
@ -55,7 +55,7 @@ class CreateFlavorContext(base.Context):
}
def setup(self):
"""This method is called before task start."""
"""This method is called before the task starts."""
try:
# use rally.osclients to get necessary client instance
nova = osclients.Clients(self.context["admin"]["endpoint"]).nova()
@ -75,7 +75,7 @@ class CreateFlavorContext(base.Context):
LOG.warning(msg)
def cleanup(self):
"""This method is called after task finish."""
"""This method is called after the task finishes."""
try:
nova = osclients.Clients(self.context["admin"]["endpoint"]).nova()
nova.flavors.delete(self.context["flavor"]["id"])
View File
@ -8,5 +8,8 @@
        times: 5
        concurrency: 1
      context:
        users:
          tenants: 1
          users_per_tenant: 1
        create_flavor:
          ram: 512
View File
@ -0,0 +1,32 @@
# Copyright 2015: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from rally.benchmark.sla import base
from rally.openstack.common.gettextutils import _  # assumed location of the _() helper used below


class MaxDurationRange(base.SLA):
    """Maximum allowed duration range in seconds."""
    OPTION_NAME = "max_duration_range"
    CONFIG_SCHEMA = {"type": "number", "minimum": 0.0,
                     "exclusiveMinimum": True}

    @staticmethod
    def check(criterion_value, result):
        durations = [r["duration"] for r in result if not r.get("error")]
        durations_range = max(durations) - min(durations)
        success = durations_range <= criterion_value
        msg = (_("Maximum duration range per iteration %ss, actual %ss")
               % (criterion_value, durations_range))
        return base.SLAResult(success, msg)
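To see how this criterion behaves, here is a standalone sketch of the same duration-range check applied to some fake iteration results (plain Python, without the Rally base classes; the result dicts are hypothetical):

```python
def max_duration_range_check(criterion_value, result):
    """Pass if (max - min) of successful iteration durations <= criterion."""
    durations = [r["duration"] for r in result if not r.get("error")]
    durations_range = max(durations) - min(durations)
    success = durations_range <= criterion_value
    msg = ("Maximum duration range per iteration %ss, actual %ss"
           % (criterion_value, durations_range))
    return success, msg

# Hypothetical iteration results; the failed one is excluded from the range.
fake_results = [
    {"duration": 1.0},
    {"duration": 2.2},
    {"duration": 9.9, "error": ["NovaException"]},
]
ok, msg = max_duration_range_check(2.5, fake_results)
print(ok)  # → True: the successful iterations span ~1.2s <= 2.5s
```

Note that failed iterations do not count against this SLA: only the spread between the fastest and slowest *successful* iterations is compared to the configured limit.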
View File
@ -0,0 +1,23 @@
{
    "Dummy.dummy": [
        {
            "args": {
                "sleep": 0.01
            },
            "runner": {
                "type": "constant",
                "times": 5,
                "concurrency": 1
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 1
                }
            },
            "sla": {
                "max_duration_range": 2.5
            }
        }
    ]
}
View File
@ -0,0 +1,15 @@
---
  Dummy.dummy:
    -
      args:
        sleep: 0.01
      runner:
        type: "constant"
        times: 5
        concurrency: 1
      context:
        users:
          tenants: 1
          users_per_tenant: 1
      sla:
        max_duration_range: 2.5