Fix texts and images in docs

* Reuse the images from the repository in README
* Shrink the images to 800-1000px
* Fix incorrect sectioning, typos, missing info etc. on ReadTheDocs
* Add a tutorial step about the --abort-on-sla-failure feature
* Move Rally deployment engines to a separate tutorial step
* rally use deployment -> rally deployment use

Change-Id: Id5f492e40a041aa3308e9faa21b833220415323d
Mikhail Dubov
2015-02-16 03:37:58 +03:00
parent fe7570a158
commit 738d932aa5
29 changed files with 203 additions and 85 deletions

View File

@@ -17,8 +17,7 @@ The OpenStack QA team mostly works on CI/CD that ensures that new patches don't
**Rally** workflow can be visualized by the following diagram:
.. image:: https://wiki.openstack.org/w/images/e/ee/Rally-Actions.png
:width: 700px
.. image:: doc/source/images/Rally-Actions.png
:alt: Rally Architecture
@@ -43,8 +42,7 @@ Use Cases
There are 3 major high level Rally Use Cases:
.. image:: https://wiki.openstack.org/w/images/6/6e/Rally-UseCases.png
:width: 700px
.. image:: doc/source/images/Rally-UseCases.png
:alt: Rally Use Cases

(16 binary image files changed: 14 existing images resized and 2 new images added; previews not shown.)

View File

@@ -19,7 +19,6 @@ What is Rally?
**OpenStack** is, undoubtedly, a really *huge* ecosystem of cooperative services. **Rally** is a **benchmarking tool** that answers the question: **"How does OpenStack work at scale?"**. To make this possible, Rally **automates** and **unifies** multi-node OpenStack deployment, cloud verification, benchmarking & profiling. Rally does it in a **generic** way, making it possible to check whether OpenStack is going to work well on, say, a 1k-servers installation under high load. Thus it can be used as a basic tool for an *OpenStack CI/CD system* that would continuously improve its SLA, performance and stability.
.. image:: ./images/Rally-Actions.png
:width: 100%
:align: center

View File

@@ -37,6 +37,12 @@ Automated installation
./rally/install_rally.sh -v
You also have to set up the **Rally database** after the installation is complete:
.. code-block:: none
rally-manage db recreate
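For reference, a minimal first-time setup could then look roughly as follows (this assumes you clone from the repository linked elsewhere in these docs and run the commands from its parent directory; adjust paths to your environment):
.. code-block:: none
git clone https://github.com/stackforge/rally
./rally/install_rally.sh -v
rally-manage db recreate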
Rally with DevStack all-in-one installation
-------------------------------------------
@@ -64,7 +70,6 @@ Finally, run DevStack as usual:
./stack.sh
Rally & Docker
--------------

View File

@@ -26,7 +26,6 @@ Use Cases
Let's take a look at 3 major high level Use Cases of Rally:
.. image:: ./images/Rally-UseCases.png
:width: 100%
:align: center
@@ -61,7 +60,6 @@ How does amqp_rpc_single_reply_queue affect performance?
Rally allowed us to reveal quite an interesting fact about **Nova**. We used the *NovaServers.boot_and_delete* benchmark scenario to see how the *amqp_rpc_single_reply_queue* option affects VM bootup time (it turns on a kind of fast RPC). Some time ago it was `shown <https://docs.google.com/file/d/0B-droFdkDaVhVzhsN3RKRlFLODQ/edit?pli=1>`_ that cloud performance can be boosted by turning it on, so we naturally decided to check this result with Rally. To run this test, we issued requests for booting and deleting VMs for a number of concurrent users ranging from 1 to 30, with and without the investigated option. For each group of users, a total number of 200 requests was issued. The averaged time per request is shown below:
.. image:: ./images/Amqp_rpc_single_reply_queue.png
:width: 100%
:align: center
**So Rally has unexpectedly indicated that setting the *amqp_rpc_single_reply_queue* option apparently affects the cloud performance, but in quite the opposite way from what was thought before.**
@@ -79,7 +77,6 @@ Another interesting result comes from the *NovaServers.boot_and_list_server* sce
During the execution of this benchmark scenario, the user has more and more VMs on each iteration. Rally has shown that in this case, the performance of the **VM list** command in Nova is degrading much faster than one might expect:
.. image:: ./images/Rally_VM_list.png
:width: 100%
:align: center
@@ -98,7 +95,6 @@ In fact, the vast majority of Rally scenarios is expressed as a sequence of **"a
Rally measures not only the performance of the benchmark scenario as a whole, but also that of single atomic actions. As a result, Rally also plots the atomic actions performance data for each benchmark iteration in quite a detailed way:
.. image:: ./images/Rally_snapshot_vm.png
:width: 100%
:align: center
@@ -113,7 +109,6 @@ Usually OpenStack projects are implemented *"as-a-Service"*, so Rally provides t
The diagram below shows how this is possible:
.. image:: ./images/Rally_Architecture.png
:width: 100%
:align: center
The actual **Rally core** consists of 4 main components, listed below in the order they go into action:
@@ -127,4 +122,3 @@ It should become fairly obvious why Rally core needs to be split to these parts
.. image:: ./images/Rally_QA.png
:align: center
:width: 100%

View File

@@ -24,7 +24,6 @@ How plugins work
Rally provides an opportunity to create and use a **custom benchmark scenario, runner or context** as a **plugin**:
.. image:: ./images/Rally-Plugins.png
:width: 100%
:align: center
Plugins can be quickly written and used, with no need to contribute them to the actual Rally code. Just place a Python module with your plugin class into the **/opt/rally/plugins** or **~/.rally/plugins** directory (or its subdirectories), and it will be autoloaded.
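For example, assuming your plugin code lives in a file named *my_plugin.py* (a hypothetical name used here only for illustration), enabling it is just a matter of copying that file into one of these directories:
.. code-block:: none
mkdir -p ~/.rally/plugins
cp my_plugin.py ~/.rally/plugins/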

View File

@@ -26,7 +26,7 @@ Useful links
- `Bugs <https://bugs.launchpad.net/rally>`_
- `Patches on review <https://review.openstack.org/#/q/status:open+rally,n,z>`_
- `Meeting logs <http://eavesdrop.openstack.org/meetings/rally/2015/>`_ (server: **irc.freenode.net**, channel: **#openstack-meeting**)
- `IRC logs <http://irclog.perlgeek.de/openstack-rally>`_ (server: **irc.freenode.net**, channel: **#openstack-rally**, each Tuesday at 17:00 UTC)
- `IRC logs <http://irclog.perlgeek.de/openstack-rally>`_ (server: **irc.freenode.net**, channel: **#openstack-rally**)
Where can I discuss and propose changes?

View File

@@ -18,16 +18,13 @@
Step 1. Setting up the environment and running a benchmark from samples
=======================================================================
In this demo, we will show how to perform the following basic operations in Rally:
.. toctree::
:maxdepth: 1
In this demo, we will show how to perform some basic operations in Rally, such as registering an OpenStack cloud, benchmarking it and generating benchmark reports.
We assume that you have a :ref:`Rally installation <tutorial_step_0_installation>` and an already existing OpenStack deployment with Keystone available at *<KEYSTONE_AUTH_URL>*.
1. Registering an OpenStack deployment in Rally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-----------------------------------------------
First, you have to provide Rally with an OpenStack deployment it is going to benchmark. This should be done either through `OpenRC files <http://docs.openstack.org/user-guide/content/cli_openrc.html>`_ or through deployment `configuration files <https://github.com/stackforge/rally/tree/master/samples/deployments>`_. In case you already have an *OpenRC*, it is extremely simple to register a deployment with the *deployment create* command:
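For instance, after sourcing your *OpenRC* file, registering the deployment could look roughly like this (the *--fromenv* flag tells Rally to read the credentials from the environment variables set by OpenRC; the OpenRC file name and the deployment name below are illustrative):
.. code-block:: none
$ source openrc
$ rally deployment create --fromenv --name=existing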
@@ -82,7 +79,7 @@ Finally, the *deployment check* command enables you to verify that your current
2. Benchmarking
^^^^^^^^^^^^^^^
---------------
Now that we have a working and registered deployment, we can start benchmarking it. The sequence of benchmarks to be launched by Rally should be specified in a *benchmark task configuration file* (either in *JSON* or in *YAML* format). Let's try one of the sample benchmark tasks available in `samples/tasks/scenarios <https://github.com/stackforge/rally/tree/master/samples/tasks/scenarios>`_, say, the one that boots and deletes multiple servers (*samples/tasks/scenarios/nova/boot-and-delete.json*):
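Such a file is simply a JSON dictionary keyed by scenario name; its rough shape is sketched below (the argument values here are illustrative -- see the actual sample file for the exact contents):
.. code-block:: none
{
"NovaServers.boot_and_delete_server": [
{
"args": {"flavor": {"name": "m1.tiny"}, "image": {"name": "^cirros.*uec$"}},
"runner": {"type": "constant", "times": 10, "concurrency": 2},
"context": {"users": {"tenants": 3, "users_per_tenant": 2}}
}
]
}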
@@ -203,7 +200,7 @@ Note that the Rally input task above uses *regular expressions* to specify the i
3. Report generation
^^^^^^^^^^^^^^^^^^^^
--------------------
One of the most beautiful things in Rally is its task report generation mechanism. It enables you to create illustrative and comprehensive HTML reports based on the benchmarking data. To create such a report for the last task you have launched and open it at once, call:
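A typical invocation, following the same flags used later in this tutorial (the output file name here is illustrative), would be:
.. code-block:: none
$ rally task report --out=output.html --open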
@@ -214,7 +211,6 @@ One of the most beautiful things in Rally is its task report generation mechanis
This will produce an HTML page with the overview of all the scenarios that you've included into the last benchmark task completed in Rally (in our case, this is just one scenario, and we will cover the topic of multiple scenarios in one task in :ref:`the next step of our tutorial <tutorial_step_3_adding_success_criteria_for_benchmarks>`):
.. image:: ../images/Report-Overview.png
:width: 100%
:align: center
This aggregating table shows the duration of the load produced by the corresponding scenario (*"Load duration"*), the overall benchmark scenario execution time, including the duration of environment preparation with contexts (*"Full duration"*), the number of iterations of each scenario (*"Iterations"*), the type of the load used while running the scenario (*"Runner"*), the number of failed iterations (*"Errors"*) and finally whether the scenario has passed certain Success Criteria (*"SLA"*) that were set up by the user in the input configuration file (we will cover these criteria in :ref:`one of the next steps <tutorial_step_3_sla>`).
@@ -222,13 +218,11 @@ This aggregating table shows the duration of the load produced by the correspond
By navigating in the left panel, you can switch to the detailed view of the benchmark results for the only scenario we included into our task, namely **NovaServers.boot_and_delete_server**:
.. image:: ../images/Report-Scenario-Overview.png
:width: 100%
:align: center
This page, along with the description of the success criteria used to check the outcome of this scenario, shows some more detailed information and statistics about the duration of its iterations. Now, the *"Total durations"* table splits the duration of our scenario into the so-called **"atomic actions"**: in our case, the **"boot_and_delete_server"** scenario consists of two actions - **"boot_server"** and **"delete_server"**. You can also see how the scenario duration changed throughout its iterations in the *"Charts for the total duration"* section. Similar charts, but broken down by atomic actions, will appear if you switch to the *"Details"* tab of this page:
.. image:: ../images/Report-Scenario-Atomic.png
:width: 100%
:align: center
Note that all the charts on the report pages are very dynamic: you can change their contents by clicking the switches above the graph and see more information about individual points by hovering the cursor over them.

View File

@@ -19,9 +19,9 @@ Step 2. Running multiple benchmarks in a single task
====================================================
1. Rally input task syntax
^^^^^^^^^^^^^^^^^^^^^^^^^^
--------------------------
Rally comes with a really great collection of :ref:`benchmark scenarios <tutorial_step_5_discovering_more_benchmark_scenarios>` and in most real-world scenarios you will use multiple scenarios to test your OpenStack cloud. Rally makes it very easy to run **different benchmarks defined in a single benchmark task**. To do so, use the following syntax:
Rally comes with a really great collection of :ref:`benchmark scenarios <tutorial_step_6_discovering_more_benchmark_scenarios>` and in most real-world scenarios you will use multiple scenarios to test your OpenStack cloud. Rally makes it very easy to run **different benchmarks defined in a single benchmark task**. To do so, use the following syntax:
.. code-block:: none
@@ -41,7 +41,7 @@ where *<benchmark_config>*, as before, is a dictionary:
}
2. Multiple benchmarks in a single task
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
---------------------------------------
As an example, let's edit our configuration file from :ref:`step 1 <tutorial_step_1_setting_up_env_and_running_benchmark_from_samples>` so that it prescribes Rally to launch not only the **NovaServers.boot_and_delete_server** scenario, but also the **KeystoneBasic.create_delete_user** scenario. All we have to do is to append the configuration of the second scenario as yet another top-level key of our JSON file:
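Schematically, the resulting file then has two top-level keys (benchmark configurations abbreviated here as before):
.. code-block:: none
{
"NovaServers.boot_and_delete_server": [<benchmark_config>],
"KeystoneBasic.create_delete_user": [<benchmark_config>]
}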
@@ -126,12 +126,11 @@ Note that the HTML reports you can generate by typing **rally task report --out=
$ rally task report --out=report_multiple_scenarios.html --open
.. image:: ../images/Report-Multiple-Overview.png
:width: 100%
:align: center
3. Multiple configurations of the same scenario
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-----------------------------------------------
Yet another thing you can do in Rally is to launch **the same benchmark scenario multiple times with different configurations**. That's why our configuration file stores a list under the key *"NovaServers.boot_and_delete_server"*: you can just append a different configuration of this benchmark scenario to that list to achieve this. Let's say you want to run the **boot_and_delete_server** scenario twice: first using the *"m1.nano"* flavor and then using the *"m1.tiny"* flavor:
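Schematically, the value stored under that key then becomes a two-element list whose entries differ only in the flavor argument (other fields omitted here for brevity):
.. code-block:: none
"NovaServers.boot_and_delete_server": [
{"args": {"flavor": {"name": "m1.nano"}, ...}, ...},
{"args": {"flavor": {"name": "m1.tiny"}, ...}, ...}
]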
@@ -207,5 +206,4 @@ The HTML report will also look similar to what we have seen before:
$ rally task report --out=report_multiple_configuraions.html --open
.. image:: ../images/Report-Multiple-Configurations-Overview.png
:width: 100%
:align: center

View File

@@ -19,7 +19,7 @@ Step 3. Adding success criteria (SLA) for benchmarks
====================================================
1. SLA - Service-Level Agreement (Success Criteria)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
---------------------------------------------------
Rally allows you to set success criteria (also called *SLA - Service-Level Agreement*) for every benchmark. Rally will automatically check them for you.
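The criteria are defined under the *"sla"* key of a benchmark configuration; a minimal sketch (the threshold values here are illustrative, the criteria names are the ones used later in this tutorial) looks like this:
.. code-block:: none
"sla": {
"max_seconds_per_iteration": 10,
"failure_rate": {"max": 0}
}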
@@ -53,10 +53,10 @@ Such configuration will mark the **NovaServers.boot_and_delete_server** benchmar
2. Checking SLA
^^^^^^^^^^^^^^^
---------------
Let us show you how SLA in Rally works using a simple example based on **Dummy benchmark scenarios**. These scenarios do not actually perform any OpenStack-related work but are very useful for testing the behaviour of Rally. Let us put 2 scenarios in a new task, *test-sla.json* -- one that does nothing and another that just throws an exception:
.. code-block:: none
.. code-block:: none
{
"Dummy.dummy": [
@@ -123,7 +123,7 @@ Exactly as expected.
3. SLA in task report
^^^^^^^^^^^^^^^^^^^^^
---------------------
SLA checks are nicely visualized in task reports. Generate one:
@@ -135,11 +135,11 @@ SLA checks are nicely visualized in task reports. Generate one:
Benchmark scenarios that have passed SLA have a green check on the overview page:
.. image:: ../images/Report-SLA-Overview.png
:width: 100%
:align: center
Somewhat more detailed information about SLA is displayed on the scenario pages:
.. image:: ../images/Report-SLA-Scenario.png
:width: 100%
:align: center
Success criteria present a very useful concept that enables you not only to analyze the outcome of your benchmark tasks, but also to control their execution. In :ref:`the next section of our tutorial <tutorial_step_4_aborting_load_generation_on_sla_failure>`, we will show how to use SLA to abort the load generation before your OpenStack goes wrong.

View File

@@ -0,0 +1,119 @@
..
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _tutorial_step_4_aborting_load_generation_on_sla_failure:
Step 4. Aborting load generation on success criteria failure
============================================================
Benchmarking pre-production and production OpenStack clouds is not a trivial task. On the one hand, it is important to reach the limits of the OpenStack cloud; on the other hand, the cloud should not be damaged. Rally aims to make this task as simple as possible. Since the very beginning, Rally has been able to generate enough load for any OpenStack cloud; the major issue for production clouds, however, was that Rally could generate too big a load and did not know how to stop it until it was too late.
With the **"stop on SLA failure"** feature, this problem is now solved.
This feature can easily be tested in real life by running one of the most important and simple benchmark scenarios, called *"Authenticate.keystone"*. This scenario just tries to authenticate as users that were pre-created by Rally. The Rally input task looks as follows (*auth.yaml*):
.. code-block:: none
---
Authenticate.keystone:
-
runner:
type: "rps"
times: 6000
rps: 50
context:
users:
tenants: 5
users_per_tenant: 10
sla:
max_avg_duration: 5
In human-readable form this input task means: *Create 5 tenants with 10 users in each; after that, try to authenticate to Keystone 6000 times, performing 50 authentications per second (issuing a new authentication request every 20 ms). Each time, authenticate as one of the users pre-created by Rally. This task passes only if the maximum average duration of authentication is less than 5 seconds.*
**Note that this test is quite dangerous because it can DDoS Keystone**. We are running more and more simultaneous authentication requests, and things may go wrong if something is not set properly (like on my DevStack deployment in a small VM on my laptop).
Let's run the Rally task with **an argument that prescribes Rally to stop the load on SLA failure**:
.. code-block:: none
$ rally task start --abort-on-sla-failure auth.yaml
....
+--------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
+--------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| total | 0.108 | 8.58 | 65.97 | 19.782 | 26.125 | 100.0% | 2495 |
+--------+-----------+-----------+-----------+---------------+---------------+---------+-------+
In the resulting table there are 2 interesting things:
1. The average duration was 8.58 sec, which is more than 5 seconds
2. Rally performed only 2495 (instead of 6000) authentication requests
To better understand what has happened, let's generate an HTML report:
.. code-block:: none
$ rally task report --out auth_report.html
.. image:: Report-Abort-on-SLA-task-1.png
:align: center
On the chart with durations we can observe that the duration of authentication requests reaches 65 seconds at the end of the load generation. **Rally stopped the load at the very last moment, just before things went really wrong. The reason why it ran so many authentication attempts is that the success criteria were not good enough.** We had to run a lot of iterations to make the average duration exceed 5 seconds. Let's choose better success criteria for this task and run it one more time.
.. code-block:: none
---
Authenticate.keystone:
-
runner:
type: "rps"
times: 6000
rps: 50
context:
users:
tenants: 5
users_per_tenant: 10
sla:
max_avg_duration: 5
max_seconds_per_iteration: 10
failure_rate:
max: 0
Now our task is going to be successful if the following three conditions hold:
1. the maximum average duration of authentication should be less than 5 seconds
2. the maximum duration of any single authentication should be less than 10 seconds
3. there should be no failed authentications
Let's run it!
.. code-block:: none
$ rally task start --abort-on-sla-failure auth.yaml
...
+--------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
+--------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| total | 0.082 | 5.411 | 22.081 | 10.848 | 14.595 | 100.0% | 1410 |
+--------+-----------+-----------+-----------+---------------+---------------+---------+-------+
.. image:: Report-Abort-on-SLA-task-2.png
:align: center
This time the load stopped after 1410 iterations versus 2495, which is much better. The interesting thing on this chart is that the first occurrence of a "> 10 seconds" authentication happened around iteration 950. A reasonable question: "Why did Rally run about 500 more authentication requests after that?" This follows from the math: during the execution of a **bad** (10-second) authentication, Rally issued about 50 requests/sec * 10 sec = 500 new requests, which is why it ran about 1400 iterations instead of 950.
(based on: http://boris-42.me/rally-tricks-stop-load-before-your-openstack-goes-wrong/)

View File

@@ -13,14 +13,11 @@
License for the specific language governing permissions and limitations
under the License.
.. _tutorial_step_4_working_with_multple_openstack_clouds:
.. _tutorial_step_5_working_with_multple_openstack_clouds:
Step 4. Working with multiple OpenStack clouds
Step 5. Working with multiple OpenStack clouds
==============================================
1. Multiple OpenStack clouds in Rally
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Rally is an awesome tool that allows you to work with multiple clouds and can itself deploy them. We already know how to work with :ref:`a single cloud <tutorial_step_1_setting_up_env_and_running_benchmark_from_samples>`. Let us now register 2 clouds in Rally: one that we have access to and another that we know is registered with wrong credentials.
.. code-block:: none
@@ -30,7 +27,7 @@ Rally is an awesome tool that allows you to work with multiple clouds and can it
+--------------------------------------+----------------------------+------------+------------------+--------+
| uuid | created_at | name | status | active |
+--------------------------------------+----------------------------+------------+------------------+--------+
| 4251b491-73b2-422a-aecb-695a94165b5e | 2015-01-18 00:11:14.757203 | cloud-1 | deploy->finished | |
| 4251b491-73b2-422a-aecb-695a94165b5e | 2015-01-18 00:11:14.757203 | cloud-1 | deploy->finished | |
+--------------------------------------+----------------------------+------------+------------------+--------+
Using deployment: 4251b491-73b2-422a-aecb-695a94165b5e
~/.rally/openrc was updated
@@ -41,7 +38,7 @@ Rally is an awesome tool that allows you to work with multiple clouds and can it
+--------------------------------------+----------------------------+------------+------------------+--------+
| uuid | created_at | name | status | active |
+--------------------------------------+----------------------------+------------+------------------+--------+
| 658b9bae-1f9c-4036-9400-9e71e88864fc | 2015-01-18 00:38:26.127171 | cloud-2 | deploy->finished | |
| 658b9bae-1f9c-4036-9400-9e71e88864fc | 2015-01-18 00:38:26.127171 | cloud-2 | deploy->finished | |
+--------------------------------------+----------------------------+------------+------------------+--------+
Using deployment: 658b9bae-1f9c-4036-9400-9e71e88864fc
~/.rally/openrc was updated
@@ -55,8 +52,8 @@ Let us now list the deployments we have created:
+--------------------------------------+----------------------------+------------+------------------+--------+
| uuid | created_at | name | status | active |
+--------------------------------------+----------------------------+------------+------------------+--------+
| 4251b491-73b2-422a-aecb-695a94165b5e | 2015-01-05 00:11:14.757203 | cloud-1 | deploy->finished | |
| 658b9bae-1f9c-4036-9400-9e71e88864fc | 2015-01-05 00:40:58.451435 | cloud-2 | deploy->finished | * |
| 4251b491-73b2-422a-aecb-695a94165b5e | 2015-01-05 00:11:14.757203 | cloud-1 | deploy->finished | |
| 658b9bae-1f9c-4036-9400-9e71e88864fc | 2015-01-05 00:40:58.451435 | cloud-2 | deploy->finished | * |
+--------------------------------------+----------------------------+------------+------------------+--------+
Note that the second deployment is marked as **"active"** because it is the one we have created most recently. This means that it will be automatically (unless its UUID or name is passed explicitly via the *--deployment* parameter) used by the commands that need a deployment, like *rally task start ...* or *rally deployment check*:
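For example, to run a task against the non-active *cloud-1* without switching to it, you could pass its name explicitly via this parameter (the task file name here is illustrative):
.. code-block:: none
$ rally task start --deployment cloud-1 my-task.json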
@@ -83,11 +80,11 @@ Note that the second is marked as **"active"** because this is the deployment we
| s3 | s3 | Available |
+----------+----------------+-----------+
You can also switch the active deployment using the **rally use deployment** command:
You can also switch the active deployment using the **rally deployment use** command:
.. code-block:: none
$ rally use deployment cloud-1
$ rally deployment use cloud-1
Using deployment: 658b9bae-1f9c-4036-9400-9e71e88864fc
~/.rally/openrc was updated
...
@@ -109,7 +106,7 @@ You can also switch the active deployment using the **rally use deployment** com
| s3 | s3 | Available |
+----------+----------------+-----------+
Note the first two lines of the CLI output for the *rally use deployment* command. They tell you the UUID of the new active deployment and also say that the *~/.rally/openrc* file was updated -- this is the place where the "active" UUID is actually stored by Rally.
Note the first two lines of the CLI output for the *rally deployment use* command. They tell you the UUID of the new active deployment and also say that the *~/.rally/openrc* file was updated -- this is the place where the "active" UUID is actually stored by Rally.
One last detail about managing different deployments in Rally is that the *rally task list* command outputs only those tasks that were run against the currently active deployment, and you have to provide the *--all-deployments* parameter to list all the tasks:
@@ -119,42 +116,14 @@ One last detail about managing different deployments in Rally is that the *rally
+--------------------------------------+-----------------+----------------------------+----------------+----------+--------+-----+
| uuid | deployment_name | created_at | duration | status | failed | tag |
+--------------------------------------+-----------------+----------------------------+----------------+----------+--------+-----+
| c21a6ecb-57b2-43d6-bbbb-d7a827f1b420 | cloud-1 | 2015-01-05 01:00:42.099596 | 0:00:13.419226 | finished | False | |
| f6dad6ab-1a6d-450d-8981-f77062c6ef4f | cloud-1 | 2015-01-05 01:05:57.653253 | 0:00:14.160493 | finished | False | |
| c21a6ecb-57b2-43d6-bbbb-d7a827f1b420 | cloud-1 | 2015-01-05 01:00:42.099596 | 0:00:13.419226 | finished | False | |
| f6dad6ab-1a6d-450d-8981-f77062c6ef4f | cloud-1 | 2015-01-05 01:05:57.653253 | 0:00:14.160493 | finished | False | |
+--------------------------------------+-----------------+----------------------------+----------------+----------+--------+-----+
$ rally task list --all-deployments
+--------------------------------------+-----------------+----------------------------+----------------+----------+--------+-----+
| uuid | deployment_name | created_at | duration | status | failed | tag |
+--------------------------------------+-----------------+----------------------------+----------------+----------+--------+-----+
| c21a6ecb-57b2-43d6-bbbb-d7a827f1b420 | cloud-1 | 2015-01-05 01:00:42.099596 | 0:00:13.419226 | finished | False | |
| f6dad6ab-1a6d-450d-8981-f77062c6ef4f | cloud-1 | 2015-01-05 01:05:57.653253 | 0:00:14.160493 | finished | False | |
| 6fd9a19f-5cf8-4f76-ab72-2e34bb1d4996 | cloud-2 | 2015-01-05 01:14:51.428958 | 0:00:15.042265 | finished | False | |
| c21a6ecb-57b2-43d6-bbbb-d7a827f1b420 | cloud-1 | 2015-01-05 01:00:42.099596 | 0:00:13.419226 | finished | False | |
| f6dad6ab-1a6d-450d-8981-f77062c6ef4f | cloud-1 | 2015-01-05 01:05:57.653253 | 0:00:14.160493 | finished | False | |
| 6fd9a19f-5cf8-4f76-ab72-2e34bb1d4996 | cloud-2 | 2015-01-05 01:14:51.428958 | 0:00:15.042265 | finished | False | |
+--------------------------------------+-----------------+----------------------------+----------------+----------+--------+-----+
2. Rally as a deployment engine
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Along with supporting already existing OpenStack deployments, Rally itself can **deploy OpenStack automatically** by using one of its *deployment engines*. Take a look at other `deployment configuration file samples <https://github.com/stackforge/rally/tree/master/samples/deployments>`_. For example, *devstack-in-existing-servers.json* is a deployment configuration file that tells Rally to deploy OpenStack with **Devstack** on the server with given credentials:
.. code-block:: none
{
"type": "DevstackEngine",
"provider": {
"type": "ExistingServers",
"credentials": [{"user": "root", "host": "10.2.0.8"}]
}
}
You can try this out, say, with a virtual machine. Edit the configuration file with your IP address/user name and run, as usual:
.. code-block:: none
$ rally deployment create --file=samples/deployments/devstack-in-existing-servers.json.json --name=new-devstack
+---------------------------+----------------------------+----------+----------------------+
| uuid | created_at | name | status |
+---------------------------+----------------------------+----------+----------------------+
| <Deployment UUID> | 2015-01-10 22:00:28.270941 | new-devstack | deploy->finished |
+---------------------------+----------------------------+--------------+------------------+
Using deployment : <Deployment UUID>

View File

@@ -13,20 +13,20 @@
License for the specific language governing permissions and limitations
under the License.
.. _tutorial_step_5_discovering_more_benchmark_scenarios:
.. _tutorial_step_6_discovering_more_benchmark_scenarios:
Step 5. Discovering more benchmark scenarios in Rally
Step 6. Discovering more benchmark scenarios in Rally
=====================================================
1. Scenarios in the Rally repository
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
------------------------------------
Rally currently comes with a great collection of benchmark scenarios that use the API of different OpenStack projects like **Keystone**, **Nova**, **Cinder**, **Glance** and so on. The good news is that you can combine multiple benchmark scenarios in one task to benchmark your cloud in a comprehensive way.
First, let's see what scenarios are available in Rally. One of the ways to discover these scenarios is just to inspect their `source code <https://github.com/stackforge/rally/tree/master/rally/benchmark/scenarios>`_.
2. Rally built-in search engine
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-------------------------------
A much more convenient way to learn about different benchmark scenarios in Rally, however, is to use a special **search engine** embedded into its Command-Line Interface, which, for a given **search query**, prints documentation for the corresponding benchmark scenario (and also supports other Rally entities like SLA).
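For example, assuming your Rally version provides the *rally info find* subcommand (the query string below is just an example), a search for a scenario mentioned earlier in this tutorial could look like:
.. code-block:: none
$ rally info find boot_and_delete_server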

View File

@@ -0,0 +1,43 @@
..
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
.. _tutorial_step_7_working_with_multple_openstack_clouds:
Step 7. Deploying OpenStack from Rally
======================================
Along with supporting already existing OpenStack deployments, Rally itself can **deploy OpenStack automatically** by using one of its *deployment engines*. Take a look at the `deployment configuration file samples <https://github.com/stackforge/rally/tree/master/samples/deployments>`_. For example, *devstack-in-existing-servers.json* is a deployment configuration file that tells Rally to deploy OpenStack with **Devstack** on existing servers with the given credentials:
.. code-block:: none
{
"type": "DevstackEngine",
"provider": {
"type": "ExistingServers",
"credentials": [{"user": "root", "host": "10.2.0.8"}]
}
}
You can try to deploy OpenStack in a virtual machine using this configuration file. Edit it with your IP address/user name and run, as usual:
.. code-block:: none
$ rally deployment create --file=samples/deployments/devstack-in-existing-servers.json --name=new-devstack
+--------------------+----------------------------+--------------+------------------+
| uuid               | created_at                 | name         | status           |
+--------------------+----------------------------+--------------+------------------+
| <Deployment UUID>  | 2015-01-10 22:00:28.270941 | new-devstack | deploy->finished |
+--------------------+----------------------------+--------------+------------------+
Using deployment : <Deployment UUID>