[docs][3] Re-design docs to cover all user-groups

First pack of changes in the upcoming chain to redesign the Rally docs.
All information related to the Rally step-by-step tutorial and its usage
in the gates has been separated and refactored. Modified files fit the
80-character margin where possible.

[TODO] continue with other parts of the docs:
       - Command Line Interface
       - Rally Task Component
       - Rally Verification Component
       - Rally Plugins, Rally Plugins Reference
       - Contribute to Rally
       - Request New Features
       - Project Info
[TODO] add an 80-character margin check similar to what the
       Performance Documentation has

Change-Id: I3dc17027a2bfa75214b960573ec8b036b1bc7bb0
This commit is contained in:
Dina Belova 2016-11-16 14:37:45 -08:00
parent 4292063876
commit 33aa9110ea
15 changed files with 368 additions and 105 deletions


@ -38,13 +38,12 @@ Contents
overview/index
install_and_upgrade/index
tutorial
quick_start/index
cli/cli_reference
reports
plugins
plugin/plugin_reference
contribute
gates
feature_requests
project_info
release_notes


@ -15,31 +15,42 @@
.. _gates:
Rally OpenStack Gates
=====================
Gate jobs
---------
The **OpenStack CI system** uses the so-called **"Gate jobs"** to control
merges of patches submitted for review on Gerrit. These **Gate jobs** usually
just launch a set of tests -- unit, functional, integration, style -- that
check that the proposed patch does not break the software and can be merged
into the target branch, thus providing additional guarantees for the stability
of the software.
Create a custom Rally Gate job
------------------------------
You can create a **Rally Gate job** for your project to run Rally benchmarks
against the patchsets proposed to be merged into your project.
To create a rally-gate job, you should create a **rally-jobs/** directory at
the root of your project.
As a rule, this directory contains only **{projectname}.yaml**, but more
scenarios and jobs can be added as well. This yaml file is in fact an input
Rally task file specifying benchmark scenarios that should be run in your gate
job.
To make *{projectname}.yaml* run in gates, you need to add *"rally-jobs"* to
the "jobs" section of *projects.yaml* in *openstack-infra/project-config*.
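The resulting entry in *projects.yaml* then looks roughly like this (the
project name below is just a placeholder, not taken from a real project):

.. code-block:: yaml

    - project:
        name: myproject
        jobs:
          - rally-jobs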
Example: Rally Gate job for Glance
----------------------------------
Let's take a look at an example for the `Glance`_ project:
Edit *jenkins/jobs/projects.yaml:*
@ -87,9 +98,11 @@ Also add *gate-rally-dsvm-{projectname}* to *zuul/layout.yaml*:
- gate-grenade-dsvm-forward
To add one more scenario and job, you need to add *{scenarioname}.yaml* file
here, and *gate-rally-dsvm-{scenarioname}* to *projects.yaml*.
For example, you can add *myscenario.yaml* to *rally-jobs* directory in your
project and then edit *jenkins/jobs/projects.yaml* in this way:
.. parsed-literal::
@ -127,7 +140,10 @@ Finally, add *gate-rally-dsvm-myscenario* to *zuul/layout.yaml*:
- gate-tempest-dsvm-neutron-large-ops
**- gate-rally-dsvm-myscenario**
It is also possible to arrange your input task files as templates based on
``Jinja2``. Say, you want to set the image names used throughout the
*myscenario.yaml* task file as a variable parameter. Then, replace concrete
image names in this file with a variable:
.. code-block:: yaml
@ -147,7 +163,8 @@ It is also possible to arrange your input task files as templates based on jinja
name: {{image_name}}
...
and create a file named *myscenario_args.yaml* that will define the parameter
values:
.. code-block:: yaml
@ -155,15 +172,21 @@ and create a file named *myscenario_args.yaml* that will define the parameter va
image_name: "^cirros.*uec$"
this file will be automatically used by Rally to substitute the variables in
*myscenario.yaml*.
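For instance, with the *myscenario_args.yaml* above, the templated fragment of
*myscenario.yaml* is rendered so that it effectively reads (surrounding keys
shown only for illustration):

.. code-block:: yaml

    image:
        name: "^cirros.*uec$"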
Plugins & Extras in Rally Gate jobs
-----------------------------------
Along with scenario configs in yaml, the **rally-jobs** directory can also
contain two subdirectories:
- **plugins**: :ref:`Plugins <plugins>` needed for your gate job;
- **extra**: auxiliary files like bash scripts or images.
Both subdirectories will be copied to *~/.rally/* before the job gets started.
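A typical *rally-jobs/* layout thus might look like this (the file names here
are purely illustrative):

.. code-block:: none

    rally-jobs/
        myproject.yaml
        plugins/
            my_plugin.py
        extra/
            my_script.sh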
.. references:
.. _Glance: https://wiki.openstack.org/wiki/Glance


@ -0,0 +1,30 @@
..
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
===========
Quick start
===========
This section will guide you through all steps of using Rally -- from
installation to its advanced usage in different use cases (including running
Rally in the OpenStack CI system gates to control merges of patches submitted
for review on the Gerrit code review system).
.. toctree::
:glob:
:maxdepth: 2
tutorial
gates


@ -18,7 +18,9 @@
Rally step-by-step
==================
In the following tutorial, we will guide you step-by-step through different use
cases that might occur in Rally, starting with the easy ones and moving towards
more complicated cases.
.. toctree::


@ -18,8 +18,7 @@
Step 0. Installation
====================
The easiest way to install Rally is by running its `installation script`_:
.. code-block:: bash
@ -35,4 +34,9 @@ please refer to the :ref:`installation <install>` page.
**Note:** Rally requires Python version 2.7 or 3.4.
Now that you have Rally installed, you are ready to start
:ref:`benchmarking OpenStack with it <tutorial_step_1_setting_up_env_and_running_benchmark_from_samples>`!
.. references:
.. _installation script: https://raw.githubusercontent.com/openstack/rally/master/install_rally.sh


@ -21,9 +21,9 @@ Step 10. Verifying cloud via Tempest
.. contents::
:local:
In this guide, we show how to use Tempest and Rally together. We assume that
you have a :ref:`Rally installation <tutorial_step_0_installation>` and have
:ref:`registered an OpenStack deployment <tutorial_step_1_setting_up_env_and_running_benchmark_from_samples>`
in Rally. So, let's get started!
@ -53,7 +53,8 @@ The command clones Tempest from the
a Python virtual environment for the current deployment by default. The
arguments below allow these default behaviors to be overridden.
Use the **--deployment** argument to specify any deployment registered in
Rally.
.. code-block:: console
@ -154,9 +155,9 @@ Use the **--version** argument to specify a Tempest commit ID or tag.
2016-05-09 13:50:42.903 23870 INFO rally.verification.tempest.tempest [-] Installing the virtual environment for Tempest.
2016-05-09 13:50:55.827 23870 INFO rally.verification.tempest.tempest [-] Tempest has been successfully installed!
Use the **--system-wide** argument to install Tempest in the system Python
path. In this case, it is assumed that all Tempest requirements are already
installed in the local environment.
.. code-block:: console
@ -795,7 +796,7 @@ a verification report we tell you below.
Details: {u'message': u'Cannot add host node-2.domain.tld in aggregate 450: host exists', u'code': 409}
...
.. image:: ../../images/Report-verify-xfail.png
:align: center
Finally, users can specify the **--system-wide** argument that will tell Rally


@ -21,15 +21,22 @@ Step 1. Setting up the environment and running a benchmark from samples
.. contents::
:local:
In this demo, we will show how to perform some basic operations in Rally, such
as registering an OpenStack cloud, benchmarking it and generating benchmark
reports.
We assume that you have gone through :ref:`tutorial_step_0_installation` and
have an already existing OpenStack deployment with Keystone available at
*<KEYSTONE_AUTH_URL>*.
Registering an OpenStack deployment in Rally
--------------------------------------------
First, you have to provide Rally with an OpenStack deployment it is going to
benchmark. This should be done either through `OpenRC files`_ or through
deployment `configuration files`_. In case you already have an *OpenRC*, it is
extremely simple to register a deployment with the *deployment create* command:
.. code-block:: console
@ -43,7 +50,9 @@ First, you have to provide Rally with an OpenStack deployment it is going to ben
Using deployment : <Deployment UUID>
...
Alternatively, you can put the information about your cloud credentials into a
JSON configuration file (let's call it `existing.json`_). The *deployment
create* command has a slightly different syntax in this case:
.. code-block:: console
@ -57,9 +66,13 @@ Alternatively, you can put the information about your cloud credentials into a J
...
Note the last line in the output. It says that the just created deployment is
now used by Rally; that means that all the benchmarking operations from now on
are going to be performed on this deployment. Later we will show how to switch
between different deployments.
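Switching between deployments is done with the *deployment use* command (a
sketch; it accepts a deployment UUID or name):

.. code-block:: bash

    rally deployment use <Deployment UUID or name>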
Finally, the *deployment check* command enables you to verify that your current
deployment is healthy and ready to be benchmarked:
.. code-block:: console
@ -84,7 +97,12 @@ Finally, the *deployment check* command enables you to verify that your current
Benchmarking
------------
Now that we have a working and registered deployment, we can start benchmarking
it. The sequence of benchmarks to be launched by Rally should be specified in a
*benchmark task configuration file* (either in *JSON* or in *YAML* format).
Let's try one of the sample benchmark tasks available in
`samples/tasks/scenarios`_, say, the one that boots and deletes multiple
servers (*samples/tasks/scenarios/nova/boot-and-delete.json*):
.. code-block:: json
@ -117,7 +135,8 @@ Now that we have a working and registered deployment, we can start benchmarking
}
To start a benchmark task, run the ``task start`` command (you can also add the
*-v* option to print more logging information):
.. code-block:: console
@ -178,7 +197,12 @@ To start a benchmark task, run the task start command (you can also add the *-v*
Using task: 6fd9a19f-5cf8-4f76-ab72-2e34bb1d4996
Note that the Rally input task above uses *regular expressions* to specify the
image and flavor name to be used for server creation, since concrete names
might differ from installation to installation. If this benchmark task fails,
then the reason might be a non-existing image/flavor specified in the task. To
check what images/flavors are available in the deployment you are currently
benchmarking, you might use the *rally show* command:
.. code-block:: console
@ -207,30 +231,64 @@ Note that the Rally input task above uses *regular expressions* to specify the i
Report generation
-----------------
One of the most beautiful things in Rally is its task report generation
mechanism. It enables you to create illustrative and comprehensive HTML reports
based on the benchmarking data. To create such a report for the last task you
have launched and open it at once, call:
.. code-block:: bash
rally task report --out=report1.html --open
This will produce an HTML page with the overview of all the scenarios that
you've included into the last benchmark task completed in Rally (in our case,
this is just one scenario, and we will cover the topic of multiple scenarios in
one task in
:ref:`the next step of our tutorial <tutorial_step_2_input_task_format>`):
.. image:: ../../images/Report-Overview.png
:align: center
This aggregating table shows the duration of the load produced by the
corresponding scenario (*"Load duration"*), the overall benchmark scenario
execution time, including the duration of environment preparation with contexts
(*"Full duration"*), the number of iterations of each scenario
(*"Iterations"*), the type of the load used while running the scenario
(*"Runner"*), the number of failed iterations (*"Errors"*) and finally whether
the scenario has passed certain Success Criteria (*"SLA"*) that were set up by
the user in the input configuration file (we will cover these criteria in
:ref:`one of the next steps <tutorial_step_4_adding_success_criteria_for_benchmarks>`).
By navigating in the left panel, you can switch to the detailed view of the
benchmark results for the only scenario we included into our task, namely
**NovaServers.boot_and_delete_server**:
.. image:: ../../images/Report-Scenario-Overview.png
:align: center
This page, along with the description of the success criteria used to check the
outcome of this scenario, shows more detailed information and statistics about
the duration of its iterations. Now, the *"Total durations"* table splits the
duration of our scenario into the so-called **"atomic actions"**: in our case,
the **"boot_and_delete_server"** scenario consists of two actions -
**"boot_server"** and **"delete_server"**. You can also see how the scenario
duration changed throughout its iterations in the *"Charts for the total
duration"* section. Similar charts, but with atomic actions detailed, are on
the *"Details"* tab of this page:
.. image:: ../../images/Report-Scenario-Atomic.png
:align: center
Note that all the charts on the report pages are very dynamic: you can change
their contents by clicking the switches above the graph and see more
information about its single points by hovering the cursor over these points.
Take some time to play around with these graphs
and then move on to :ref:`the next step of our tutorial <tutorial_step_2_input_task_format>`.
.. references:
.. _OpenRC files: http://docs.openstack.org/user-guide/content/cli_openrc.html
.. _configuration files: https://github.com/openstack/rally/tree/master/samples/deployments
.. _existing.json: https://github.com/openstack/rally/blob/master/samples/deployments/existing.json
.. _samples/tasks/scenarios: https://github.com/openstack/rally/tree/master/samples/tasks/scenarios


@ -27,8 +27,8 @@ Basic input task syntax
Rally comes with a really great collection of
:ref:`plugins <tutorial_step_8_discovering_more_plugins>` and in most
real-world cases you will use multiple plugins to test your OpenStack cloud.
Rally makes it very easy to run **different test cases defined in a single
task**. To do so, use the following syntax:
.. code-block:: json
@ -51,7 +51,13 @@ where *<benchmark_config>*, as before, is a dictionary:
Multiple benchmarks in a single task
------------------------------------
As an example, let's edit our configuration file from
:ref:`step 1 <tutorial_step_1_setting_up_env_and_running_benchmark_from_samples>`
so that it prescribes Rally to launch not only the
**NovaServers.boot_and_delete_server** scenario, but also the
**KeystoneBasic.create_delete_user** scenario. All we have to do is to append
the configuration of the second scenario as yet another top-level key of our
JSON file:
*multiple-scenarios.json*
@ -125,20 +131,30 @@ Now you can start this benchmark task as usually:
...
Note that the HTML reports you can generate by typing **rally task report
--out=report_name.html** after your benchmark task has completed will get
richer as your benchmark task configuration file includes more benchmark
scenarios. Let's take a look at the report overview page for a task that covers
all the scenarios available in Rally:
.. code-block:: bash
rally task report --out=report_multiple_scenarios.html --open
.. image:: ../../images/Report-Multiple-Overview.png
:align: center
Multiple configurations of the same scenario
--------------------------------------------
Yet another thing you can do in Rally is to launch **the same benchmark
scenario multiple times with different configurations**. That's why our
configuration file stores a list for the key
*"NovaServers.boot_and_delete_server"*: you can just append a different
configuration of this benchmark scenario to this list to get it. Let's say,
you want to run the **boot_and_delete_server** scenario twice: first using the
*"m1.tiny"* flavor and then using the *"m1.small"* flavor:
*multiple-configurations.json*
@ -211,5 +227,5 @@ The HTML report will also look similar to what we have seen before:
rally task report --out=report_multiple_configuraions.html --open
.. image:: ../../images/Report-Multiple-Configurations-Overview.png
:align: center


@ -24,17 +24,29 @@ Step 3. Benchmarking OpenStack with existing users
Motivation
----------
There are two very important reasons from the production world why it is
preferable to use some already existing users to benchmark your OpenStack
cloud:
1. *Read-only Keystone Backends:* creating temporary users for benchmark
scenarios in Rally is just impossible in case of r/o Keystone backends like
*LDAP* and *AD*.
2. *Safety:* Rally can be run from an isolated group of users, and if something
   goes wrong, this won't affect the rest of the cloud users.
Registering existing users in Rally
-----------------------------------
The information about existing users in your OpenStack cloud should be passed
to Rally at the
:ref:`deployment initialization step <tutorial_step_1_setting_up_env_and_running_benchmark_from_samples>`.
You have to use the **ExistingCloud** deployment plugin that just provides
Rally with credentials of an already existing cloud. The difference from the
deployment configuration we've seen previously is that you should set up the
*"users"* section with the credentials of already existing users. Let's call
this deployment configuration file *existing_users.json*:
.. code-block:: json
@ -62,7 +74,11 @@ The information about existing users in your OpenStack cloud should be passed to
]
}
This deployment configuration requires some basic information about the
OpenStack cloud like the region name, auth URL, admin user credentials, and any
number of users already existing in the system. Rally will use their
credentials to generate load against this deployment as soon as we register it
as usual:
.. code-block:: console
@ -76,7 +92,8 @@ This deployment configuration requires some basic information about the OpenStac
~/.rally/openrc was updated
After that, the **rally show** command lists the resources for each user
separately:
.. code-block:: console
@ -112,13 +129,18 @@ After that, the **rally show** command lists the resources for each user separat
| d82eaf7a-ff63-4826-9aa7-5fa105610e01 | cirros-0.3.4-x86_64-uec-kernel | 4979632 |
+--------------------------------------+---------------------------------+-----------+
With this new deployment being active, Rally will use the already existing
users *"b1"* and *"b2"* instead of creating the temporary ones when launching
benchmark tasks that do not specify the *"users"* context.
Running benchmark scenarios with existing users
-----------------------------------------------
After you have registered a deployment with existing users, don't forget to
remove the *"users"* context from your benchmark task configuration if you want
to use existing users, like in the following configuration file
(*boot-and-delete.json*):
.. code-block:: json
@ -145,13 +167,14 @@ After you have registered a deployment with existing users, don't forget to remo
]
}
When you start this task, it will use the existing users *"b1"* and *"b2"*
instead of creating the temporary ones:
.. code-block:: bash
rally task start samples/tasks/scenarios/nova/boot-and-delete.json
It goes without saying that support of benchmarking with predefined users
simplifies the usage of Rally for generating loads against production clouds.
(based on: http://boris-42.me/rally-can-generate-load-with-passed-users-now/)


@ -24,9 +24,12 @@ Step 4. Adding success criteria (SLA) for benchmarks
SLA - Service-Level Agreement (Success Criteria)
------------------------------------------------
Rally allows you to set success criteria (also called *SLA - Service-Level
Agreement*) for every benchmark. Rally will automatically check them for you.
To configure the SLA, add the *"sla"* section to the configuration of the
corresponding benchmark (the check name is a key associated with its target
value). You can combine different success criteria:
.. code-block:: json
@ -52,12 +55,18 @@ To configure the SLA, add the *"sla"* section to the configuration of the corres
]
}
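In sketch form, such an *"sla"* section combining the two criteria might look
like this (threshold values taken from the description of this example):

.. code-block:: json

    "sla": {
        "max_seconds_per_iteration": 10,
        "failure_rate": {"max": 25}
    }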
Such a configuration will mark the **NovaServers.boot_and_delete_server**
benchmark scenario as not successful if either some iteration took more than 10
seconds or more than 25% of iterations failed.
Checking SLA
------------
Let us show you how Rally SLAs work using a simple example based on **Dummy
benchmark scenarios**. These scenarios do not actually perform any
OpenStack-related actions but are very useful for testing the behavior of
Rally. Let us put 2 scenarios in a new task, *test-sla.json* -- one that does
nothing and another that just throws an exception:
.. code-block:: json
@ -102,14 +111,17 @@ Let us show you how Rally SLA work using a simple example based on **Dummy bench
]
}
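A condensed sketch of such a *test-sla.json* (the runner settings are
illustrative):

.. code-block:: json

    {
        "Dummy.dummy": [
            {
                "runner": {"type": "constant", "times": 5, "concurrency": 1},
                "sla": {"failure_rate": {"max": 0}}
            }
        ],
        "Dummy.dummy_exception": [
            {
                "runner": {"type": "constant", "times": 5, "concurrency": 1},
                "sla": {"failure_rate": {"max": 0}}
            }
        ]
    }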
Note that both scenarios in these tasks have the **maximum failure rate of 0%**
as their **success criterion**. We expect that the first scenario will pass
this criterion while the second will fail it. Let's start the task:
.. code-block:: bash
rally task start test-sla.json
After the task completes, run *rally task sla_check* to check the results
against the success criteria you defined in the task:
.. code-block:: console
@ -133,14 +145,20 @@ SLA checks are nicely visualized in task reports. Generate one:
rally task report --out=report_sla.html --open
Benchmark scenarios that have passed SLA have a green check on the overview
page:
.. image:: ../../images/Report-SLA-Overview.png
:align: center
Somewhat more detailed information about SLA is displayed on the scenario
pages:
.. image:: ../../images/Report-SLA-Scenario.png
:align: center
Success criteria are a very useful concept that enables you not only to analyze
the outcome of your benchmark tasks, but also to control their execution. In
:ref:`one of the next sections <tutorial_step_6_aborting_load_generation_on_sla_failure>`
of our tutorial, we will show how to use SLA to abort the load generation before
your OpenStack goes wrong.
@ -24,7 +24,11 @@ Step 5. Rally task templates
Basic template syntax
---------------------
A nice feature of the input task format used in Rally is that it supports the
**template syntax** based on `Jinja2`_. This turns out to be extremely useful
when, say, you have a fixed structure of your task but you want to parameterize
this task in some way. For example, imagine your input task file (*task.yaml*)
runs a set of Nova scenarios:
.. code-block:: yaml
@ -63,7 +67,12 @@ A nice feature of the input task format used in Rally is that it supports the **
tenants: 1
users_per_tenant: 1
In both scenarios above, the *"^cirros.*uec$"* image is passed to the scenario
as an argument (so that these scenarios use an appropriate image while booting
servers). Let's say you want to run the same set of scenarios with the same
runner/context/sla, but you want to try another image while booting servers to
compare the performance. The most elegant solution is then to turn the image
name into a template variable:
.. code-block:: yaml
@ -102,10 +111,13 @@ In both scenarios above, the *"^cirros.*uec$"* image is passed to the scenario a
tenants: 1
users_per_tenant: 1
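In sketch form, the parameterized part of such a task looks roughly like this
(the scenario and flavor names are illustrative; the other sections stay
unchanged):

.. code-block:: yaml

    NovaServers.boot_and_delete_server:
      -
        args:
          flavor:
            name: "m1.tiny"
          image:
            name: {{image_name}}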
and then pass the argument value for **{{image_name}}** when starting a task
with this configuration file. Rally provides you with different ways to do
that:
1. Pass the argument values directly in the command-line interface (with either
a JSON or YAML dictionary):
.. code-block:: bash
@ -136,7 +148,8 @@ where the files containing argument values should look as follows:
---
image_name: "^cirros.*uec$"
Passed in either way, these parameter values will be substituted by Rally when
starting a task:
.. code-block:: console
@ -192,7 +205,10 @@ Passed in either way, these parameter values will be substituted by Rally when s
Using the default values
------------------------
Note that the ``Jinja2`` template syntax allows you to set the default values
for your parameters. With default values set, your task file will work even if
you don't parameterize it explicitly while starting a task. The default values
should be set using the *{% set ... %}* clause (*task.yaml*):
.. code-block:: yaml
@ -217,7 +233,8 @@ Note that the Jinja2 template syntax allows you to set the default values for yo
...
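The default can be declared at the top of the task file with a one-liner like
the following sketch (the default value is illustrative):

.. code-block:: yaml

    {% set image_name = image_name or "^cirros.*uec$" %}
    ...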
If you don't pass the value for *{{image_name}}* while starting a task, the
default one will be used:
.. code-block:: console
@ -251,9 +268,13 @@ If you don't pass the value for *{{image_name}}* while starting a task, the defa
Advanced templates
------------------
Rally makes it possible to use all the power of ``Jinja2`` template syntax,
including the mechanism of **built-in functions**. This enables you to
construct elegant task files capable of generating complex load on your cloud.
As an example, let us make up a task file that will create new users with
increasing concurrency. The input task file (*task.yaml*) below uses the
``Jinja2`` **for-endfor** construct to accomplish that:
.. code-block:: yaml
@ -273,7 +294,9 @@ As an example, let us make up a task file that will create new users with increa
{% endfor %}
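A condensed sketch of such a looping template (the concurrency range and runner
settings are assumed for illustration):

.. code-block:: yaml

    ---
    KeystoneBasic.create_user:
    {% for i in range(2, 11, 2) %}
      -
        runner:
          type: "constant"
          times: 10
          concurrency: {{i}}
    {% endfor %}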
In this case, you don't need to pass any arguments via
*--task-args/--task-args-file*, but as soon as you start this task, Rally will
automatically unfold the for-loop for you:
.. code-block:: console
@ -344,4 +367,12 @@ In this case, you dont need to pass any arguments via *--task-args/--task-arg
Benchmarking... This can take a while...
As you can see, the Rally task template syntax is a simple but powerful
mechanism that not only enables you to write elegant task configurations, but
also makes them more readable for other people. When used appropriately, it can
really improve the understanding of your benchmarking procedures in Rally when
shared with others.
.. references:
.. _Jinja2: https://pypi.python.org/pypi/Jinja2
@ -18,11 +18,20 @@
Step 6. Aborting load generation on success criteria failure
============================================================
Benchmarking pre-production and production OpenStack clouds is not a trivial
task. From the one side it is important to reach the OpenStack clouds limits,
from the other side the cloud shouldn't be damaged. Rally aims to make this
task as simple as possible. Since the very beginning Rally was able to generate
enough load for any OpenStack cloud. Generating too big a load was the major
issue for production clouds, because Rally didn't know how to stop the load
until it was too late.
With the **"stop on SLA failure"** feature, however, things are much better.
This feature can easily be tested in real life by running one of the most
important and simplest benchmark scenarios, called *"Authenticate.keystone"*.
This scenario just tries to authenticate with users that were pre-created by
Rally. The Rally input task looks as follows (*auth.yaml*):
.. code-block:: yaml
@ -40,11 +49,20 @@ This feature can be easily tested in real life by running one of the most import
sla:
max_avg_duration: 5
In human-readable form, this input task means: *Create 5 tenants with 10 users
in each; after that, try to authenticate to Keystone 6000 times, performing 50
authentications per second (running a new authentication request every 20 ms).
Each time, we perform authentication as one of the Rally pre-created users.
This task passes only if the maximum average duration of authentication is less
than 5 seconds.*
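Assembled from this description, *auth.yaml* is roughly the following sketch:

.. code-block:: yaml

    ---
    Authenticate.keystone:
      -
        runner:
          type: "rps"
          times: 6000
          rps: 50
        context:
          users:
            tenants: 5
            users_per_tenant: 10
        sla:
          max_avg_duration: 5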
**Note that this test is quite dangerous because it can DDoS Keystone**. We run
more and more simultaneous authentication requests, and things may go wrong if
something is not set properly (like on my DevStack deployment in a small VM on
my laptop).
Let's run the Rally task with **an argument that prescribes Rally to stop the
load on SLA failure**:
.. code-block:: console
@ -68,10 +86,16 @@ To understand better what has happened lets generate HTML report:
rally task report --out auth_report.html
.. image:: ../../images/Report-Abort-on-SLA-task-1.png
:align: center
On the chart with durations, we can observe that the duration of an
authentication request reaches 65 seconds at the end of the load generation.
**Rally stopped the load at the very last moment, just before bad things
happened. The reason why it ran so many authentication attempts is that the
success criteria were not good enough.** We had to run a lot of iterations to
make the average duration bigger than 5 seconds. Let's choose better success
criteria for this task and run it one more time.
.. code-block:: yaml
@ -111,9 +135,15 @@ Lets run it!
| total | 0.082 | 5.411 | 22.081 | 10.848 | 14.595 | 100.0% | 1410 |
+--------+-----------+-----------+-----------+---------------+---------------+---------+-------+
.. image:: ../../images/Report-Abort-on-SLA-task-2.png
:align: center
This time the load stopped after 1410 iterations versus 2495, which is much
better. The interesting thing on this chart is that the first occurrence of a
"> 10 second" authentication happened on the 950th iteration. A reasonable
question: "Why did Rally run about 500 more authentication requests then?"
This follows from the math: during the execution of a **bad** authentication
(10 seconds), Rally issued about 50 requests/sec * 10 sec = 500 new requests;
as a result, we ran about 1400 iterations instead of 950.
(based on: http://boris-42.me/rally-tricks-stop-load-before-your-openstack-goes-wrong/)
@ -18,7 +18,11 @@
Step 7. Working with multiple OpenStack clouds
==============================================
Rally is an awesome tool that allows you to work with multiple clouds and can
itself deploy them. We already know how to work with
:ref:`a single cloud <tutorial_step_1_setting_up_env_and_running_benchmark_from_samples>`.
Let us now register 2 clouds in Rally: the one that we have access to and the
other that we know is registered with wrong credentials.
.. code-block:: console
@ -56,7 +60,11 @@ Let us now list the deployments we have created:
| 658b9bae-1f9c-4036-9400-9e71e88864fc | 2015-01-05 00:40:58.451435 | cloud-2 | deploy->finished | * |
+--------------------------------------+----------------------------+------------+------------------+--------+
Note that the second is marked as **"active"** because this is the deployment
we have created most recently. This means that it will be automatically (unless
its UUID or name is passed explicitly via the *--deployment* parameter) used by
the commands that need a deployment, like *rally task start ...* or *rally
deployment check*:
.. code-block:: console
@ -80,7 +88,8 @@ Note that the second is marked as **"active"** because this is the deployment we
| s3 | s3 | Available |
+----------+----------------+-----------+
You can also switch the active deployment using the **rally deployment use**
command:
.. code-block:: console
@ -106,9 +115,15 @@ You can also switch the active deployment using the **rally deployment use** com
| s3 | s3 | Available |
+----------+----------------+-----------+
Note the first two lines of the CLI output for the *rally deployment use*
command. They tell you the UUID of the new active deployment and also say that
the *~/.rally/openrc* file was updated -- this is the place where the "active"
UUID is actually stored by Rally.
One last detail about managing different deployments in Rally is that the
*rally task list* command outputs only those tasks that were run against the
currently active deployment, and you have to provide the *--all-deployments*
parameter to list all the tasks:
.. code-block:: console
@ -29,9 +29,8 @@ different OpenStack projects like **Keystone**, **Nova**, **Cinder**,
**Glance** and so on. The good news is that you can combine multiple plugins
in one task to test your cloud in a comprehensive way.
First, let's see what plugins are available in Rally. One of the ways to
discover these plugins is just to inspect their `source code`_;
another is to use the built-in ``rally plugin`` command.
CLI: rally plugin show
@ -110,3 +109,7 @@ This command can be used to list filtered by name list of plugins.
| KeystoneBasic.create_user_update_password | default | Create user and update password for that user. |
| KeystoneBasic.get_entities | default | Get instance of a tenant, user, role and service by id's. |
+--------------------------------------------------+-----------+-----------------------------------------------------------------+
.. references:
.. _source code: https://github.com/openstack/rally/tree/master/rally/plugins/
@ -18,7 +18,12 @@
Step 9. Deploying OpenStack from Rally
======================================
Along with supporting already existing OpenStack deployments, Rally itself can
**deploy OpenStack automatically** by using one of its *deployment engines*.
Take a look at other `deployment configuration file samples`_. For example,
*devstack-in-existing-servers.json* is a deployment configuration file that
tells Rally to deploy OpenStack with **Devstack** on the existing servers with
given credentials:
.. code-block:: json
@ -30,7 +35,8 @@ Along with supporting already existing OpenStack deployments, Rally itself can *
}
}
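A sketch of what *devstack-in-existing-servers.json* typically contains (the
engine and provider names are taken from Rally's samples; the address and user
are illustrative):

.. code-block:: json

    {
        "type": "DevstackEngine",
        "provider": {
            "type": "ExistingServers",
            "credentials": [{"user": "root", "host": "10.2.0.8"}]
        }
    }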
You can try to deploy OpenStack in your Virtual Machine using this script. Edit
the configuration file with your IP address/user name and run, as usual:
.. code-block:: console
@ -41,3 +47,7 @@ You can try to deploy OpenStack in your Virtual Machine using this script. Edit
| <Deployment UUID> | 2015-01-10 22:00:28.270941 | new-devstack | deploy->finished |
+---------------------------+----------------------------+--------------+------------------+
Using deployment : <Deployment UUID>
.. references:
.. _deployment configuration file samples: https://github.com/openstack/rally/tree/master/samples/deployments