[docs][1] Re-design docs to cover all user-groups

First pack of changes in an upcoming chain to redesign the Rally docs.
All information related to the project overview has been separated and
refactored. Modified files fit the 80-symbol margin where possible.

[TODO] continue with other parts of the docs:
       - Installation and upgrade
       - Quick start aka Rally step-by-step
       - Command Line Interface
       - Rally Task Component
       - Rally Verification Component
       - Rally Plugins, Rally Plugins Reference
       - Contribute to Rally
       - Request New Features
       - Project Info
[TODO] add 80 symbols margin check similar to what
       Performance Documentation has

Change-Id: Icc6c9665fe52a7d7c191f78a1fe3ad2a38b4231f
Dina Belova 2016-11-15 12:17:24 -08:00
parent 40d1000599
commit a4773e404e
9 changed files with 182 additions and 72 deletions


@ -13,27 +13,34 @@
License for the specific language governing permissions and limitations
under the License.
==============
What is Rally?
==============
**OpenStack** is, undoubtedly, a really *huge* ecosystem of cooperative
services. **Rally** is a **benchmarking tool** that answers the question:
**"How does OpenStack work at scale?"**. To make this possible, Rally
**automates** and **unifies** multi-node OpenStack deployment, cloud
verification, benchmarking & profiling. Rally does it in a **generic** way,
making it possible to check whether OpenStack is going to work well on, say, a
1k-servers installation under high load. Thus it can be used as a basic tool
for an *OpenStack CI/CD system* that would continuously improve its SLA,
performance and stability.
.. image:: ./images/Rally-Actions.png
:align: center
Contents
========
.. toctree::
:maxdepth: 2
overview/index
install
tutorial
cli/cli_reference
reports
user_stories
plugins
plugin/plugin_reference
db_migrations


@ -20,8 +20,8 @@ Common
Alembic
-------
A lightweight database migration tool which powers Rally migrations. Read more
at the `official Alembic documentation <http://alembic.readthedocs.io/en/latest/>`_.
DB Migrations
-------------


@ -0,0 +1,25 @@
..
Copyright 2015 Mirantis Inc. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
======================
Rally project overview
======================
.. toctree::
:glob:
overview
glossary
user_stories


@ -15,17 +15,24 @@
.. _overview:
.. contents::
:depth: 1
:local:
Overview
========
**Rally** is a **benchmarking tool** that **automates** and **unifies**
multi-node OpenStack deployment, cloud verification, benchmarking & profiling.
It can be used as a basic tool for an *OpenStack CI/CD system* that would
continuously improve its SLA, performance and stability.
Who Is Using Rally
------------------
Here is a small selection of the many companies using Rally:
.. image:: ../images/Rally_who_is_using.png
:align: center
Use Cases
@ -33,65 +40,89 @@ Use Cases
Let's take a look at 3 major high-level Use Cases of Rally:
.. image:: ../images/Rally-UseCases.png
:align: center
Generally, there are a few typical cases where Rally proves to be of great use:
1. Automate measuring & profiling focused on how new code changes affect
   OpenStack performance;
2. Use the Rally profiler to detect scaling & performance issues;
3. Investigate how different deployments affect OpenStack performance:
* Find the set of suitable OpenStack deployment architectures;
* Create deployment specifications for different loads (amount of
controllers, swift nodes, etc.);
4. Automate the search for hardware best suited for a particular OpenStack
   cloud;
5. Automate the production cloud specification generation:
* Determine terminal loads for basic cloud operations: VM start & stop,
Block Device create/destroy & various OpenStack API methods;
* Check performance of basic cloud operations in case of different
loads.
Real-life examples
------------------
To be substantive, let's investigate a couple of real-life examples of Rally in
action.
How does amqp_rpc_single_reply_queue affect performance?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Rally allowed us to reveal quite an interesting fact about **Nova**. We used
*NovaServers.boot_and_delete* benchmark scenario to see how the
*amqp_rpc_single_reply_queue* option affects VM bootup time (it turns on a kind
of fast RPC). Some time ago it was
`shown <https://docs.google.com/file/d/0B-droFdkDaVhVzhsN3RKRlFLODQ/edit?pli=1>`_
that cloud performance can be boosted by setting it on, so we naturally decided
to check this result with Rally. To make this test, we issued requests for
booting and deleting VMs for a number of concurrent users ranging from 1 to 30
with and without the investigated option. For each group of users, a total
number of 200 requests was issued. Averaged time per request is shown below:
.. image:: ../images/Amqp_rpc_single_reply_queue.png
:align: center
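The experiment described above can be sketched as a Rally task file. This is an
illustrative sketch only: the scenario name comes from the text, while the
flavor, image, and concurrency values are placeholders (the real runs swept
concurrency from 1 to 30, with 200 requests per group of users):

.. code-block:: json

    {
        "NovaServers.boot_and_delete": [
            {
                "args": {
                    "flavor": {"name": "m1.tiny"},
                    "image": {"name": "cirros-0.3.4"}
                },
                "runner": {
                    "type": "constant",
                    "times": 200,
                    "concurrency": 15
                }
            }
        ]
    }

Running the same file once per concurrency level, with and without
*amqp_rpc_single_reply_queue* enabled, reproduces the comparison above.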
**So Rally unexpectedly indicated that setting the
amqp_rpc_single_reply_queue option does affect cloud performance, though in
the opposite way from what was previously thought.**
Performance of Nova list command
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Another interesting result comes from the *NovaServers.boot_and_list_server*
scenario, which we used to launch the following benchmark with Rally:
* **Benchmark environment** (which we also call **"Context"**): 1 temporary
OpenStack user.
* **Benchmark scenario**: boot a single VM from this user & list all VMs.
* **Benchmark runner** setting: repeat this procedure 200 times in a
continuous way.
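The three bullets above map almost one-to-one onto a Rally task file. A hedged
sketch, assuming the *serial* runner for the "continuous" repetition (flavor
and image names are illustrative placeholders):

.. code-block:: json

    {
        "NovaServers.boot_and_list_server": [
            {
                "args": {
                    "flavor": {"name": "m1.tiny"},
                    "image": {"name": "cirros-0.3.4"}
                },
                "runner": {
                    "type": "serial",
                    "times": 200
                },
                "context": {
                    "users": {"tenants": 1, "users_per_tenant": 1}
                }
            }
        ]
    }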
During the execution of this benchmark scenario, the user has more and more VMs
on each iteration. Rally has shown that in this case, the performance of the
**VM list** command in Nova is degrading much faster than one might expect:
.. image:: ../images/Rally_VM_list.png
:align: center
Complex scenarios
^^^^^^^^^^^^^^^^^
In fact, the vast majority of Rally scenarios is expressed as a sequence of
**"atomic" actions**. For example, *NovaServers.snapshot* is composed of 6
atomic actions:
1. boot VM
2. snapshot VM
@ -100,33 +131,53 @@ In fact, the vast majority of Rally scenarios is expressed as a sequence of **"a
5. delete VM
6. delete snapshot
Rally measures not only the performance of the benchmark scenario as a whole,
but also that of single atomic actions. As a result, Rally also plots the
atomic actions performance data for each benchmark iteration in a quite
detailed way:
.. image:: ../images/Rally_snapshot_vm.png
:align: center
Architecture
------------
Usually OpenStack projects are implemented *"as-a-Service"*, and Rally follows
this approach. In addition, it implements a *CLI-driven* approach that does not
require a daemon:
1. **Rally as-a-Service**: Run Rally as a set of daemons that present a Web
   UI *(work in progress)*, so one RaaS could be used by a whole team.
2. **Rally as-an-App**: Rally as just a lightweight and portable CLI app
   (without any daemons) that is simple to use & develop.
The diagram below shows how this is possible:
.. image:: ../images/Rally_Architecture.png
:align: center
The actual **Rally core** consists of 4 main components, listed below in the
order they go into action:
1. **Server Providers** - provide a **unified interface** for interaction
   with different **virtualization technologies** (*LXC*, *Virsh* etc.) and
   **cloud suppliers** (like *Amazon*): they do so via *ssh* access and in
   one *L3 network*;
2. **Deploy Engines** - deploy some OpenStack distribution (like *DevStack*
or *FUEL*) before any benchmarking procedures take place, using servers
retrieved from Server Providers;
3. **Verification** - runs *Tempest* (or another specific set of tests)
against the deployed cloud to check that it works correctly, collects
results & presents them in human readable form;
4. **Benchmark Engine** - allows writing parameterized benchmark scenarios
   & running them against the cloud.
It should become fairly obvious why the Rally core needs to be split into these
parts if you take a look at the following diagram that visualizes a rough
**algorithm for starting benchmarking OpenStack at scale**. Keep in mind that
there might be lots of different ways to set up virtual servers, as well as to
deploy OpenStack to them.
.. image:: ../images/Rally_QA.png
:align: center

doc/source/overview/stories Symbolic link

@ -0,0 +1 @@
../../user_stories/


@ -18,7 +18,10 @@
User stories
============
Many users of Rally were able to make interesting discoveries concerning their
OpenStack clouds using our benchmarking tool. Numerous user stories presented
below show how Rally has made it possible to find performance bugs and validate
improvements for different OpenStack installations.
.. toctree::


@ -1 +0,0 @@
../user_stories


@ -4,20 +4,29 @@
*(Contributed by Neependra Khare, Red Hat)*
Below we describe how we were able to get and verify a 4x better performance of
Keystone inside Apache. To do that, we ran a Keystone token creation benchmark
with Rally under different loads (this benchmark scenario essentially just
authenticates users with Keystone to get tokens).
Goal
----
- Get the data about performance of token creation under different load.
- Ensure that Keystone with increased *public_workers*/*admin_workers* values
  and running under Apache works better than the default setup.
Summary
-------
- As the concurrency increases, the time to authenticate a user goes up.
- Keystone is a CPU-bound process, and by default only one thread of the
  *keystone-all* process gets started. We can increase the parallelism by:
1. increasing *public_workers/admin_workers* values in *keystone.conf* file
2. running Keystone inside Apache
- We configured Keystone with 4 *public_workers* and ran Keystone inside
  Apache. In both cases we got up to 4x better performance compared to the
  default Keystone configuration.
Setup
-----
@ -35,9 +44,11 @@ Keystone - Commit#455d50e8ae360c2a7598a61d87d9d341e5d9d3ed
Keystone API - 2
To increase *public_workers* - uncomment the line with *public_workers*, set it
to 4, and then restart the Keystone service.
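For illustration, the resulting fragment of *keystone.conf* would look roughly
like this (in Icehouse-era Keystone these options lived in the ``[DEFAULT]``
section; the section name may differ in other releases):

.. code-block:: ini

    [DEFAULT]
    # Number of worker processes for the public and admin WSGI servers
    public_workers = 4
    admin_workers = 4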
To run Keystone inside Apache - we added *APACHE_ENABLED_SERVICES=key* to the
*localrc* file while setting up the OpenStack environment with Devstack.
Results


@ -4,7 +4,11 @@ Finding a Keystone bug while benchmarking 20 node HA cloud performance at creati
*(Contributed by Alexander Maretskiy, Mirantis)*
Below we describe how we found a `bug in Keystone`_ and achieved a 2x average
performance increase at booting Nova servers after fixing that bug. Our initial
goal was to benchmark the booting of a significant number of servers on a
cluster (running on a custom build of `Mirantis OpenStack`_ v5.1) and to ensure
that this operation has reasonable performance and completes with no errors.
Goal
----
@ -38,36 +42,36 @@ Cluster
This cluster was created via Fuel Dashboard interface.
+----------------------+--------------------------------------------+
| Deployment | Custom build of `Mirantis OpenStack`_ v5.1 |
+----------------------+--------------------------------------------+
| OpenStack release | Icehouse |
+----------------------+--------------------------------------------+
| Operating System | Ubuntu 12.04.4 |
+----------------------+--------------------------------------------+
| Mode | High availability |
+----------------------+--------------------------------------------+
| Hypervisor | KVM |
+----------------------+--------------------------------------------+
| Networking | Neutron with GRE segmentation |
+----------------------+--------------------------------------------+
| Controller nodes | 3 |
+----------------------+--------------------------------------------+
| Compute nodes | 17 |
+----------------------+--------------------------------------------+
Rally
-----
**Version**
For this benchmark, we used a custom Rally with the following patch:
https://review.openstack.org/#/c/96300/
**Deployment**
Rally was deployed for the cluster using the `ExistingCloud`_ type of
deployment.
**Server flavor**
@ -153,16 +157,18 @@ Rally was deployed for cluster using `ExistingCloud <https://github.com/openstac
]
}
The only difference between the first and the second run is that *runner.times*
for the first run was set to 500.
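In other words, only the runner block of the task file changed between the two
runs. A sketch of the first run's runner section (the runner type shown here is
illustrative, since the full task file is elided above):

.. code-block:: json

    {
        "runner": {
            "type": "constant",
            "times": 500
        }
    }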
Results
-------
**First time - a bug was found:**
Starting from the 142nd server, we got an error from novaclient: **Error <class
'novaclient.exceptions.Unauthorized'>: Unauthorized (HTTP 401).**
That is how a `bug in Keystone`_ was found.
+------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
@ -173,7 +179,8 @@ That is how a `bug in keystone <https://bugs.launchpad.net/keystone/+bug/1360446
**Second run, with bugfix:**
After a patch was applied (using RPC instead of neutron client in metadata
agent), we got **100% success and 2x improved average performance**:
+------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
@ -181,3 +188,9 @@ After a patch was applied (using RPC instead of neutron client in metadata agent
| nova.boot_server | 5.031 | 8.008 | 14.093 | 9.616 | 9.716 | 100.0% | 400 |
| total | 5.031 | 8.008 | 14.093 | 9.616 | 9.716 | 100.0% | 400 |
+------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
.. references:
.. _bug in Keystone: https://bugs.launchpad.net/keystone/+bug/1360446
.. _Mirantis OpenStack: https://software.mirantis.com/
.. _ExistingCloud: https://github.com/openstack/rally/blob/master/samples/deployments/existing.json