Rephrase docs call things properly
In a lot of places we are using the word "benchmark", which can mean workload, subtask, or test case, and that is very confusing. This patch partially addresses the wrong usage of the word "benchmark".

Change-Id: Id3b2b7ae841a5243684c12cc51c96f005dbe7544
parent 2f45502170
commit 2f4555be27
@@ -9,7 +9,7 @@ Sometimes OpenStack services use common messaging system very prodigally. For
 example Neutron metering agent sending all database table data on new object
 creation i.e https://review.openstack.org/#/c/143672/. It cause to Neutron
 degradation and other obvious problems. It will be nice to have a way to track
-messages count and messages size in queue during tests/benchmarks.
+messages count and messages size in queue during tasks.

 Problem description
 -------------------
@@ -19,5 +19,5 @@ Heavy usage of queue isn’t checked.
 Possible solution
 -----------------

-* Before running tests/benchmarks start process which will connect to queue
+* Before running task start process which will connect to queue
   topics and measure messages count, size and other data which we need.
@@ -6,7 +6,7 @@ Use Case
 --------

 Some OpenStack projects (Marconi, MagnetoDB) require a real huge load,
-like 10-100k request per second for benchmarking.
+like 10-100k request per second for load testing.

 To generate such huge load Rally has to create load from different
 servers.
@@ -1,12 +1,13 @@
-===============================================
-Support benchmarking clouds that are using LDAP
-===============================================
+==========================================
+Support testing clouds that are using LDAP
+==========================================

 Use Case
 --------

 A lot of production clouds are using LDAP with read only access. It means
-that load can be generated only by existing in system users and there is no admin access.
+that load can be generated only by existing in system users and there is no
+admin access.


 Problem Description
@@ -25,6 +26,8 @@ Possible Solution
 Current Solution
 ----------------

-* Allow the user to specify existing users in the configuration of the *ExistingCloud* deployment plugin
-* When such an *ExistingCloud* deployment is active, and the benchmark task file does not specify the *"users"* context, use the existing users instead of creating the temporary ones.
-* Modify the *rally show ...* commands to list resources for each user separately.
+* Add ability to specify existing users in the *ExistingCloud* plugin config
+* When such an *ExistingCloud* deployment is active, and the task file does not
+  specify the *"users"* context, use the existing users instead of creating the
+  temporary ones.
+* Modify the *rally show* commands to list resources for each user separately.
@@ -1,22 +1,22 @@
-============================
-Launch Specific Benchmark(s)
-============================
+=======================
+Launch Specific SubTask
+=======================


 Use case
 --------

 A developer is working on a feature that is covered by one or more specific
-benchmarks/scenarios. He/she would like to execute a rally task with an
-existing task template file (YAML or JSON) indicating exactly which
-benchmark(s) will be executed.
+subtask. He/she would like to execute a rally task with an
+existing task template file (YAML or JSON) indicating exactly what subtask
+will be executed.


 Problem description
 -------------------

-When executing a task with a template file in Rally, all benchmarks are
-executed without the ability to specify one or a set of benchmarks the user
+When executing a task with a template file in Rally, all subtasks are
+executed without the ability to specify one or a set of subtasks the user
 would like to execute.


@@ -24,4 +24,4 @@ Possible solution
 -----------------

 * Add optional flag to rally task start command to specify one or more
-  benchmarks to execute as part of that test run.
+  subtasks to execute as part of that test run.
@@ -14,22 +14,22 @@ image and listing users.
 Problem Description
 -------------------

-At the moment Rally is able to run only 1 scenario per benchmark.
+At the moment Rally is able to run only 1 scenario per subtask.
 Scenario are quite specific (e.g. boot and delete VM for example) and can't
 actually generate real life load.

-Writing a lot of specific benchmark scenarios that will produce more real life
+Writing a lot of specific subtask scenarios that produces more real life
 load will produce mess and a lot of duplication of code.


 Possible solution
 -----------------

-* Extend Rally task benchmark configuration in such way to support passing
-  multiple benchmark scenarios in single benchmark context
+* Extend Rally subtask configuration in such way to support passing
+  multiple scenarios in single subtask context

 * Extend Rally task output format to support results of multiple scenarios in
-  single benchmark separately.
+  single subtask separately.

 * Extend rally task plot2html and rally task detailed to show results
   separately for every scenario.
@@ -1,30 +1,40 @@
-================================================
-Add support of persistence benchmark environment
-================================================
+===========================================
+Add support of persistence task environment
+===========================================

 Use Case
 --------

-To benchmark many of operations like show, list, detailed you need to have
-already these resource in cloud. So it will be nice to be able to create
-benchmark environment once before benchmarking. So run some amount of
-benchmarks that are using it and at the end just delete all created resources
-by benchmark environment.
+There are situations when same environment is used across different tasks.
+For example you would like to improve operation of listing objects.
+For example:
+
+- Create hundreds of objects
+- Collect baseline of list performance
+- Fix something in system
+- Repeat the performance test
+- Repeat fixing and testing until things are fixed.
+
+Current implementation of Rally will force you to recreate task context which
+is time consuming operation.


 Problem Description
 -------------------

-Fortunately Rally has already a mechanism for creating benchmark environment,
-that is used to create load. Unfortunately it's atomic operation:
-(create environment, make load, delete environment).
+Fortunately Rally has already a mechanism for creating task environment via
+contexts. Unfortunately it's atomic operation:
+- Create task context
+- Perform subtask scenario-runner pairs
+- Destroy task context

 This should be split to 3 separated steps.


 Possible solution
 -----------------

-* Add new CLI operations to work with benchmark environment:
+* Add new CLI operations to work with task environment:
   (show, create, delete, list)

-* Allow task to start against benchmark environment (instead of deployment)
+* Allow task to start against existing task context (instead of deployment)
@@ -5,7 +5,7 @@ Production read cleanups
 Use Case
 --------

-Rally should delete in any case all resources that it created during benchmark.
+Rally should delete in all cases all resources that it creates during tasks.


 Problem Description
@@ -22,7 +22,7 @@ Information
 Details
 -------

-Rally is awesome tool for testing verifying and benchmarking OpenStack clouds.
+Rally is awesome tool for generic testing of OpenStack clouds.

 A lot of people started using Rally in their CI/CD so Rally team should provide
 more stable product with clear strategy of deprecation and upgrades.
@@ -19,7 +19,7 @@ Information
 Details
 -------

-This release contains new features, new benchmark plugins, bug fixes,
+This release contains new features, new task plugins, bug fixes,
 various code and API improvements.


@@ -64,7 +64,7 @@ API changes
 Plugins
 ~~~~~~~

-* **Benchmark Scenario Runners**:
+* **Task Runners**:

   [improved] Improved algorithm of generation load in **constant runner**

@@ -80,7 +80,7 @@ Plugins
   New method **abort()** is used to immediately interrupt execution.


-* **Benchmark Scenarios**:
+* **Task Scenarios**:

   [new] DesignateBasic.create_and_delete_server

@@ -135,7 +135,7 @@ Plugins
   Add optional \*\*kwargs that are passed to boot server comment


-* **Benchmark Context**:
+* **Task Context**:

   [new] **stacks**

@@ -143,7 +143,7 @@ Plugins

   [new] **custom_image**

-    Prepares images for benchmarks in VMs.
+    Prepares images for internal VMs testing.

     To Support generating workloads in VMs by existing tools like: IPerf,
     Blogbench, HPCC and others we have to have prepared images, with
@@ -181,7 +181,7 @@ Plugins
   The Job Binaries data should be treated as a binary content


-* **Benchmark SLA**:
+* **Task SLA**:

   [interface] SLA calculations is done in additive way now

@@ -19,7 +19,7 @@ Information
 Details
 -------

-This release contains new features, new benchmark plugins, bug fixes,
+This release contains new features, new task plugins, bug fixes,
 various code and API improvements.


@@ -27,7 +27,7 @@ New Features & API changes
 ~~~~~~~~~~~~~~~~~~~~~~~~~~


-* Add the ability to specify versions for clients in benchmark scenarios
+* Add the ability to specify versions for clients in scenarios

   You can call self.clients("glance", "2") and get any client for
   specific version.
@@ -70,7 +70,7 @@ Specs
 Plugins
 ~~~~~~~

-* **Benchmark Scenario Runners**:
+* **Task Runners**:

   * Add a maximum concurrency option to rps runner

@@ -79,7 +79,7 @@ Plugins
   more parallel requests then 'concurrency' value.


-* **Benchmark Scenarios**:
+* **Task Scenarios**:

   [new] CeilometerAlarms.create_alarm_and_get_history

@@ -114,7 +114,7 @@ Plugins



-* **Benchmark SLA**:
+* **Task SLA**:

   * [new] aborted_on_sla

@@ -19,7 +19,7 @@ Information
 Details
 -------

-This release contains new features, new benchmark plugins, bug fixes, various code and API improvements.
+This release contains new features, new task plugins, bug fixes, various code and API improvements.


 New Features & API changes
@@ -27,11 +27,19 @@ New Features & API changes

 * Rally now can generate load with users that already exist

-  Now one can use Rally for benchmarking OpenStack clouds that are using LDAP, AD or any other read-only keystone backend where it is not possible to create any users. To do this, one should set up the "users" section of the deployment configuration of the ExistingCloud type. This feature also makes it safer to run Rally against production clouds: when run from an isolated group of users, Rally won’t affect rest of the cloud users if something goes wrong.
+  Now one can use Rally for testing OpenStack clouds that are using LDAP, AD or
+  any other read-only keystone backend where it is not possible to create any
+  users. To do this, one should set up the "users" section of the deployment
+  configuration of the ExistingCloud type. This feature also makes it safer to
+  run Rally against production clouds: when run from an isolated group of
+  users, Rally won’t affect rest of the cloud users if something goes wrong.

-* New decorator *@osclients.Clients.register* can add new OpenStack clients at runtime
+* New decorator *@osclients.Clients.register* can add new OpenStack clients
+  at runtime

-  It is now possible to add a new OpenStack client dynamically at runtime. The added client will be available from osclients.Clients at the module level and cached. Example:
+  It is now possible to add a new OpenStack client dynamically at runtime.
+  The added client will be available from osclients.Clients at the
+  module level and cached. Example:

   .. code-block:: none

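For illustration, the "users" section mentioned above lives in the *ExistingCloud* deployment configuration. A minimal sketch follows, rendered as YAML so the assumptions can be annotated inline (deployment files of this era were typically JSON, and every endpoint and credential value below is hypothetical):

.. code-block:: yaml

    type: "ExistingCloud"
    auth_url: "http://example.net:5000/v2.0/"  # hypothetical Keystone endpoint
    users:                                     # pre-created, possibly read-only users
      - username: "b1"                         # hypothetical credentials
        password: "1234"
        tenant_name: "testing"
      - username: "b2"
        password: "1234"
        tenant_name: "testing"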
@@ -48,11 +56,19 @@ New Features & API changes

 * Assert methods now available for scenarios and contexts

-  There is now a new *FunctionalMixin* class that implements basic unittest assert methods. The *base.Context* and *base.Scenario* classes inherit from this mixin, so now it is possible to use *base.assertX()* methods in scenarios and contexts.
+  There is now a new *FunctionalMixin* class that implements basic unittest
+  assert methods. The *base.Context* and *base.Scenario* classes inherit from
+  this mixin, so now it is possible to use *base.assertX()*
+  methods in scenarios and contexts.

 * Improved installation script

-  The installation script has been almost completely rewritten. After this change, it can be run from an unprivileged user, supports different database types, allows to specify a custom python binary, always asks confirmation before doing potentially dangerous actions, automatically install needed software if run as root, and also automatically cleans up the virtualenv and/or the downloaded repository if interrupted.
+  The installation script has been almost completely rewritten. After this
+  change, it can be run from an unprivileged user, supports different database
+  types, allows to specify a custom python binary, always asks confirmation
+  before doing potentially dangerous actions, automatically install needed
+  software if run as root, and also automatically cleans up the
+  virtualenv and/or the downloaded repository if interrupted.


 Specs & Feature requests
@@ -60,24 +76,30 @@ Specs & Feature requests

 * [Spec] Reorder plugins

-  The spec describes how to split Rally framework and plugins codebase to make it simpler for newbies to understand how Rally code is organized and how it works.
+  The spec describes how to split Rally framework and plugins codebase to make
+  it simpler for newbies to understand how Rally code is organized and
+  how it works.

-* [Feature request] Specify what benchmarks to execute in task
+* [Feature request] Specify what subtasks to execute in task

-  This feature request proposes to add the ability to specify benchmark(s) to be executed when the user runs the *rally task start* command. A possible solution would be to add a special flag to the *rally task start* command.
+  This feature request proposes to add the ability to specify subtask(s)
+  to be executed when the user runs the *rally task start* command. A possible
+  solution would be to add a special flag to the *rally task start* command.


 Plugins
 ~~~~~~~

-* **Benchmark Scenario Runners**:
+* **Task Runners**:

   * Add limits for maximum Core usage to constant and rps runners

-    The new 'max_cpu_usage' parameter can be used to avoid possible 100% usage of all available CPU cores by reducing the number of CPU cores available for processes started by the corresponding runner.
+    The new 'max_cpu_usage' parameter can be used to avoid possible 100%
+    usage of all available CPU cores by reducing the number of CPU cores
+    available for processes started by the corresponding runner.


-* **Benchmark Scenarios**:
+* **Task Scenarios**:

   * [new] KeystoneBasic.create_update_and_delete_tenant

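To make the runner limit above concrete, a runner section using it might look like the following sketch (YAML; the parameter name 'max_cpu_usage' is taken verbatim from the release note, while all numbers are hypothetical):

.. code-block:: yaml

    runner:
      type: "constant"
      times: 1000        # hypothetical total iterations
      concurrency: 20    # hypothetical parallelism
      max_cpu_usage: 4   # cap CPU cores used by the runner's worker processes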
@@ -107,21 +129,22 @@ Plugins

   * [new] HttpRequests.check_request

-  * [improved] NovaServers live migrate benchmarks
+  * [improved] NovaServers live migrate scenarios

-    add 'min_sleep' and 'max_sleep' parameters to simulate a pause between VM booting and running live migration
+    add 'min_sleep' and 'max_sleep' parameters to simulate a pause between
+    VM booting and running live migration

   * [improved] NovaServers.boot_and_live_migrate_server

     add a usage sample to samples/tasks

-  * [improved] CinderVolumes benchmarks
+  * [improved] CinderVolumes scenarios

     support size range to be passed to the 'size' argument as a dictionary
     *{"min": <minimum_size>, "max": <maximum_size>}*


-* **Benchmark Contexts**:
+* **Task Contexts**:

   * [new] MuranoPackage

@@ -129,14 +152,18 @@ Plugins

   * [new] CeilometerSampleGenerator

-    Context that can be used for creating samples and collecting resources for benchmarks in a list.
+    Context that can be used for creating samples and collecting resources
+    for testing of list operations.


-* **Benchmark SLA**:
+* **Task SLA**:

   * [new] outliers

-    This new SLA checks that the number of outliers (calculated from the mean and standard deviation of the iteration durations) does not exceed some maximum value. The SLA is highly configurable: the parameters used for outliers threshold calculation can be set by the user.
+    This new SLA checks that the number of outliers (calculated from the mean
+    and standard deviation of the iteration durations) does not exceed some
+    maximum value. The SLA is highly configurable: the parameters used for
+    outliers threshold calculation can be set by the user.


 Bug fixes
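A sketch of how the outliers criterion might appear in a task's SLA section (YAML; the parameter names below are assumptions inferred from the description above, not confirmed by this diff):

.. code-block:: yaml

    sla:
      outliers:
        max: 1             # allow at most one outlier iteration
        min_iterations: 10 # assumed parameter: skip the check on tiny samples
        sigmas: 1          # assumed parameter: threshold width in standard deviations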
@@ -144,13 +171,17 @@ Bug fixes

 **21 bugs were fixed, the most critical are**:

-* Make it possible to use relative imports for plugins that are outside of rally package.
+* Make it possible to use relative imports for plugins that are outside of
+  rally package.

-* Fix heat stacks cleanup by deleting them only 1 time per tenant (get rid of "stack not found" errors in logs).
+* Fix heat stacks cleanup by deleting them only 1 time per tenant
+  (get rid of "stack not found" errors in logs).

-* Fix the wrong behavior of 'rally task detailed --iterations-data' (it lacked the iteration info before).
+* Fix the wrong behavior of 'rally task detailed --iterations-data'
+  (it lacked the iteration info before).

-* Fix security groups cleanup: a security group called "default", created automatically by Neutron, did not get deleted for each tenant.
+* Fix security groups cleanup: a security group called "default", created
+  automatically by Neutron, did not get deleted for each tenant.


 Other changes
@@ -158,15 +189,25 @@ Other changes

 * Streaming algorithms that scale

-  This release introduces the common/streaming_algorithms.py module. This module is going to contain implementations of benchmark data processing algorithms that scale: these algorithms do not store exhaustive information about every single benchmark iteration duration processed. For now, the module contains implementations of algorithms for computation of mean & standard deviation.
+  This release introduces the common/streaming_algorithms.py module.
+  This module is going to contain implementations of task data processing
+  algorithms that scale: these algorithms do not store exhaustive information
+  about every single subtask iteration duration processed. For now, the module
+  contains implementations of algorithms for
+  computation of mean & standard deviation.

 * Coverage job to check that new patches come with unit tests

-  Rally now has a coverage job that checks that every patch submitted for review does not decrease the number of lines covered by unit tests (at least too much). This job allows to mark most patches with no unit tests with '-1'.
+  Rally now has a coverage job that checks that every patch submitted for
+  review does not decrease the number of lines covered by unit tests
+  (at least too much). This job allows to mark most patches with no
+  unit tests with '-1'.

 * Splitting the plugins code (Runners & SLA) into common/openstack plugins

-  According to the spec "Reorder plugins" (see above), the plugins code for runners and SLA has been moved to the *plugins/common/* directory. Only base classes now remain in the *benchmark/* directory.
+  According to the spec "Reorder plugins" (see above), the plugins code for
+  runners and SLA has been moved to the *plugins/common/* directory.
+  Only base classes now remain in the *benchmark/* directory.


 Documentation
@@ -174,7 +215,8 @@ Documentation

 * Various fixes

-  * Remove obsolete *.rst* files (*deploy_engines.rst* / *server_providers.rst* / ...)
+  * Remove obsolete *.rst* files
+    (*deploy_engines.rst* / *server_providers.rst* / ...)
   * Restructure the docs files to make them easier to navigate through
   * Move the chapter on task templates to the 4th step in the tutorial
-  * Update the information about meetings (new release meeting & time changes)
+  * Update the info about meetings (new release meeting & time changes)
@@ -75,7 +75,7 @@ Plugins

     Cloudera manager need master-node flavor

-* [added] Expand Nova API benchmark in Rally
+* [added] Add more Nova API scenarios

   Add support for listing nova hosts, agents, availability-zones
   and aggregates.
@@ -30,7 +30,7 @@ preferably, at the ``#openstack-rally`` IRC channel on **irc.freenode.net**).

 If you are going to contribute to Rally, you will probably need to grasp a
 better understanding of several main design concepts used throughout our
-project (such as **benchmark scenarios**, **contexts** etc.). To do so, please
+project (such as **scenarios**, **contexts** etc.). To do so, please
 read :ref:`this article <main_concepts>`.


@@ -18,10 +18,10 @@ What is Rally?
 ==============

 **OpenStack** is, undoubtedly, a really *huge* ecosystem of cooperative
-services. **Rally** is a **benchmarking tool** that answers the question:
+services. **Rally** is a **testing tool** that answers the question:
 **"How does OpenStack work at scale?"**. To make this possible, Rally
 **automates** and **unifies** multi-node OpenStack deployment, cloud
-verification, benchmarking & profiling. Rally does it in a **generic** way,
+verification, testing & profiling. Rally does it in a **generic** way,
 making it possible to check whether OpenStack is going to work well on, say, a
 1k-servers installation under high load. Thus it can be used as a basic tool
 for an *OpenStack CI/CD system* that would continuously improve its SLA,
@@ -22,8 +22,8 @@
 Overview
 ========

-**Rally** is a **benchmarking tool** that **automates** and **unifies**
-multi-node OpenStack deployment, cloud verification, benchmarking & profiling.
+**Rally** is a **generic testing tool** that **automates** and **unifies**
+multi-node OpenStack deployment, verification, testing & profiling.
 It can be used as a basic tool for an *OpenStack CI/CD system* that would
 continuously improve its SLA, performance and stability.

@@ -79,7 +79,7 @@ How does amqp_rpc_single_reply_queue affect performance?
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 Rally allowed us to reveal a quite an interesting fact about **Nova**. We used
-*NovaServers.boot_and_delete* benchmark scenario to see how the
+*NovaServers.boot_and_delete* scenario to see how the
 *amqp_rpc_single_reply_queue* option affects VM bootup time (it turns on a kind
 of fast RPC). Some time ago it was
 `shown <https://docs.google.com/file/d/0B-droFdkDaVhVzhsN3RKRlFLODQ/edit?pli=1>`_
@@ -101,16 +101,14 @@ Performance of Nova list command
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 Another interesting result comes from the *NovaServers.boot_and_list_server*
-scenario, which enabled us to launch the following benchmark with Rally:
+scenario, which enabled us to launch the following task with Rally:

-* **Benchmark environment** (which we also call **"Context"**): 1 temporary
-  OpenStack user.
-* **Benchmark scenario**: boot a single VM from this user & list all VMs.
-* **Benchmark runner** setting: repeat this procedure 200 times in a
-  continuous way.
+* **Task context**: 1 temporary OpenStack user.
+* **Task scenario**: boot a single VM from this user & list all VMs.
+* **Task runner**: repeat this procedure 200 times in a continuous way.

-During the execution of this benchmark scenario, the user has more and more VMs
-on each iteration. Rally has shown that in this case, the performance of the
+During the execution of this task, the user has more and more VMs on each
+iteration. Rally has shown that in this case, the performance of the
 **VM list** command in Nova is degrading much faster than one might expect:

 .. image:: ../images/Rally_VM_list.png
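The context/scenario/runner triple described in that hunk would be expressed in a task file roughly as follows (a sketch in YAML; the image regex and flavor name are placeholders, not values from this diff):

.. code-block:: yaml

    NovaServers.boot_and_list_server:
      - args:
          image:
            name: "^cirros.*$"   # placeholder image regex
          flavor:
            name: "m1.tiny"      # placeholder flavor
        runner:
          type: "constant"       # repeat the boot & list pair continuously
          times: 200
          concurrency: 1
        context:
          users:
            tenants: 1           # 1 temporary OpenStack user
            users_per_tenant: 1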
@@ -131,9 +129,9 @@ atomic actions:
 5. delete VM
 6. delete snapshot

-Rally measures not only the performance of the benchmark scenario as a whole,
-but also that of single atomic actions. As a result, Rally also plots the
-atomic actions performance data for each benchmark iteration in a quite
+Rally measures not only the performance of the scenario as a whole,
+but also that of single atomic actions. As a result, Rally also displays the
+atomic actions performance data for each scenario iteration in a quite
 detailed way:

 .. image:: ../images/Rally_snapshot_vm.png
@@ -157,27 +155,25 @@ The diagram below shows how this is possible:
 .. image:: ../images/Rally_Architecture.png
    :align: center

-The actual **Rally core** consists of 4 main components, listed below in the
+The actual **Rally core** consists of 3 main components, listed below in the
 order they go into action:

-1. **Server Providers** - provide a **unified interface** for interaction
-   with different **virtualization technologies** (*LXS*, *Virsh* etc.) and
-   **cloud suppliers** (like *Amazon*): it does so via *ssh* access and in
-   one *L3 network*;
-2. **Deploy Engines** - deploy some OpenStack distribution (like *DevStack*
-   or *FUEL*) before any benchmarking procedures take place, using servers
-   retrieved from Server Providers;
-3. **Verification** - runs *Tempest* (or another specific set of tests)
-   against the deployed cloud to check that it works correctly, collects
-   results & presents them in human readable form;
-4. **Benchmark Engine** - allows to write parameterized benchmark scenarios
-   & run them against the cloud.
+1. **Deploy** - store credentials about your deployments, credentials
+   are used by verify and task commands. It has plugable mechanism that
+   allows one to implement basic LCM for testing environment as well.
+
+2. **Verify** - wraps unittest based functional testing framework to
+   provide complete tool with result storage and reporting.
+   Currently has only plugin implemneted for OpenStack Tempest.
+
+3. **Task** - framework that allows to write parametrized plugins and
+   combine them in complex test cases using YAML. Framework allows to
+   produce all kinds of tests including functional, concurrency,
+   regression, load, scale, capacity and even chaos testing.

 It should become fairly obvious why Rally core needs to be split to these parts
 if you take a look at the following diagram that visualizes a rough **algorithm
-for starting benchmarking OpenStack at scale**. Keep in mind that there might
-be lots of different ways to set up virtual servers, as well as to deploy
-OpenStack to them.
+for starting testing clouds at scale**.

 .. image:: ../images/Rally_QA.png
    :align: center
@@ -18,10 +18,9 @@
 User stories
 ============

-Many users of Rally were able to make interesting discoveries concerning their
-OpenStack clouds using our benchmarking tool. Numerous user stories presented
-below show how Rally has made it possible to find performance bugs and validate
+Rally has made it possible to find performance bugs and validate
 improvements for different OpenStack installations.
+You can read some stories below:


 .. toctree::
@@ -25,7 +25,8 @@ resources (e.g., download 10 images) that will be used by the
 scenarios. All created objects must be put into the *self.context*
 dict, through which they will be available in the scenarios. Let's
 create a simple context plugin that adds a flavor to the environment
-before the benchmark task starts and deletes it after it finishes.
+before runner start first iteration and deletes it after runner finishes
+execution of all iterations.

 Creation
 ^^^^^^^^
@@ -46,12 +47,7 @@ implement the Context API: the *setup()* method that creates a flavor and the

     @context.configure(name="create_flavor", order=1000)
     class CreateFlavorContext(context.Context):
-        """This sample creates a flavor with specified options before task starts
-        and deletes it after task completion.
-
-        To create your own context plugin, inherit it from
-        rally.task.context.Context
-        """
+        """This sample creates a flavor with specified option."""

         CONFIG_SCHEMA = {
             "type": "object",
@@ -113,8 +109,7 @@ implement the Context API: the *setup()* method that creates a flavor and the
 Usage
 ^^^^^

-You can refer to your plugin context in the benchmark task configuration
-files in the same way as any other contexts:
+The new plugin can be used by specifying it in context section. Like below:

 .. code-block:: json

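The diff elides the JSON sample that follows; the general shape is a task file whose context section names the new plugin. A rough sketch in YAML (the keys under ``create_flavor`` depend on the plugin's *CONFIG_SCHEMA*; ``flavor_name`` and ``ram`` here are hypothetical options):

.. code-block:: yaml

    Dummy.dummy:
      - runner:
          type: "constant"
          times: 5
          concurrency: 1
        context:
          create_flavor:               # the context plugin registered above
            flavor_name: "ctx_flavor"  # hypothetical option
            ram: 512                   # hypothetical option, in MB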
@@ -18,9 +18,8 @@
 Scenario runner as a plugin
 ===========================

-Let's create a scenario runner plugin that runs a given benchmark
-scenario a random number of times (chosen at random from a given
-range).
+Let's create a runner plugin that runs a given scenario a random number of
+times (chosen at random from a given range).

 Creation
 ^^^^^^^^
@@ -40,8 +39,7 @@ and implement its API (the *_run_scenario()* method):
     class RandomTimesScenarioRunner(runner.ScenarioRunner):
         """Sample scenario runner plugin.

-        Run scenario random number of times, which is chosen between min_times and
-        max_times.
+        Run scenario random number of times (between min_times and max_times)
         """

         CONFIG_SCHEMA = {
@@ -78,10 +76,9 @@ and implement its API (the *_run_scenario()* method):
 Usage
 ^^^^^

-You can refer to your scenario runner in the benchmark task
-configuration files in the same way as any other runners. Don't forget
-to put your runner-specific parameters in the configuration as well
-(*"min_times"* and *"max_times"* in our example):
+You can refer to your scenario runner in the input task files in the same way
+as any other runners. Don't forget to put your runner-specific parameters
+in the configuration as well (*"min_times"* and *"max_times"* in our example):

 .. code-block:: json

@@ -106,4 +103,5 @@ to put your runner-specific parameters in the configuration as well



-Different plugin samples are available `here <https://github.com/openstack/rally/tree/master/samples/plugins>`_.
+Different plugin samples are available
+`here <https://github.com/openstack/rally/tree/master/samples/plugins>`_.
@@ -47,8 +47,8 @@ clients:
     def _list_flavors(self):
         """Sample of usage clients - list flavors

-        You can use self.context, self.admin_clients and self.clients which are
-        initialized on scenario instance creation"""
+        You can use self.context, self.admin_clients and self.clients
+        which are initialized on scenario instance creation"""
         self.clients("nova").flavors.list()

     @atomic.action_timer("list_flavors_as_admin")
@@ -65,8 +65,8 @@ clients:
 Usage
 ^^^^^

-You can refer to your plugin scenario in the benchmark task
-configuration files in the same way as any other scenarios:
+You can refer to your plugin scenario in the task input files in the same
+way as any other scenarios:

 .. code-block:: json

@@ -69,8 +69,7 @@ Inherit a class for your plugin from the base *SLA* class and implement its API
 Usage
 ^^^^^

-You can refer to your SLA in the benchmark task configuration files in
-the same way as any other SLA:
+The new plugin can be used by specifying it in SLA section. Like below:

 .. code-block:: json

@@ -32,7 +32,7 @@ plugins with detailed descriptions.
 How plugins work
 ----------------

-Rally provides an opportunity to create and use a **custom benchmark
+Rally provides an opportunity to create and use a **custom task
 scenario, runner, SLA, deployment or context** as a **plugin**:

 .. image:: ../images/Rally-Plugins.png
@@ -55,25 +55,25 @@ Project Core maintainers
 +------------------------------+------------------------------------------------+
 | | Boris Pavlovic | * Founder and ideological leader |
 | | boris-42 (irc) | * Architect |
-| | boris@pavlovic.me | * Rally task & benchmark |
+| | boris@pavlovic.me | * Rally task & plugins |
 +------------------------------+------------------------------------------------+
-| | Chris St. Pierre | * Rally task & benchmark |
+| | Chris St. Pierre | * Rally task & plugins |
 | | stpierre (irc) | * Bash guru ;) |
 | | cstpierr@cisco.com | |
 +------------------------------+------------------------------------------------+
-| | Illia Khudoshyn | * Rally task & benchmark |
+| | Illia Khudoshyn | * Rally task & plugins |
 | | ikhudoshyn (irc) | |
 | | ikhudoshyn@mirantis.com | |
 +------------------------------+------------------------------------------------+
-| | Kun Huang | * Rally task & benchmark |
+| | Kun Huang | * Rally task & plugins |
 | | kun_huang (irc) | |
 | | gareth.huang@huawei.com | |
 +------------------------------+------------------------------------------------+
-| | Li Yingjun | * Rally task & benchmark |
+| | Li Yingjun | * Rally task & plugins |
 | | liyingjun (irc) | |
 | | yingjun.li@kylin-cloud.com | |
 +------------------------------+------------------------------------------------+
-| | Roman Vasilets | * Rally task & benchmark |
+| | Roman Vasilets | * Rally task & plugins |
 | | rvasilets (irc) | |
 | | pomeo92@gmail.com | |
 +------------------------------+------------------------------------------------+
@@ -82,7 +82,7 @@ Project Core maintainers
 | | sskripnick@mirantis.com | * Automation of everything |
 +------------------------------+------------------------------------------------+
 | | Yair Fried | * Rally-Tempest integration |
-| | yfried (irc) | * Rally task & benchmark |
+| | yfried (irc) | * Rally task & plugins |
 | | yfried@redhat.com | |
 +------------------------------+------------------------------------------------+
 | | Yaroslav Lobankov | * Rally Verification |
@@ -32,7 +32,7 @@ of the software.
 Create a custom Rally Gate job
 ------------------------------

-You can create a **Rally Gate job** for your project to run Rally benchmarks
+You can create a **Rally Gate job** for your project to run Rally tasks
 against the patchsets proposed to be merged into your project.

 To create a rally-gate job, you should create a **rally-jobs/** directory at
@@ -40,8 +40,7 @@ the root of your project.

 As a rule, this directory contains only **{projectname}.yaml**, but more
 scenarios and jobs can be added as well. This yaml file is in fact an input
-Rally task file specifying benchmark scenarios that should be run in your gate
-job.
+Rally task file specifying scenarios that should be run in your gate job.

 To make *{projectname}.yaml* run in gates, you need to add *"rally-jobs"* to
 the "jobs" section of *projects.yaml* in *openstack-infra/project-config*.
@@ -35,7 +35,7 @@ please refer to the :ref:`installation <install>` page.
 **Note:** Rally requires Python version 2.7 or 3.4.

 Now that you have Rally installed, you are ready to start
-:ref:`benchmarking OpenStack with it <tutorial_step_1_setting_up_env_and_running_benchmark_from_samples>`!
+:ref:`testing OpenStack with Rally <tutorial_step_1_setting_up_env_and_running_benchmark_from_samples>`!

 .. references:

@@ -15,17 +15,16 @@

 .. _tutorial_step_1_setting_up_env_and_running_benchmark_from_samples:

-Step 1. Setting up the environment and running a benchmark from samples
-=======================================================================
+Step 1. Setting up the environment and running a task from samples
+==================================================================

 .. contents::
    :local:

-In this demo, we will show how to perform some basic operations in Rally, such
-as registering an OpenStack cloud, benchmarking it and generating benchmark
-reports.
+In this demo basic operations in Rally are performed, such as adding
+OpenStack cloud deployment, running task against it and generating report.

-We assume that you have gone through :ref:`tutorial_step_0_installation` and
+It's assumed that you have gone through :ref:`tutorial_step_0_installation` and
 have an already existing OpenStack deployment with Keystone available at
 *<KEYSTONE_AUTH_URL>*.

@@ -33,8 +32,8 @@ have an already existing OpenStack deployment with Keystone available at
 Registering an OpenStack deployment in Rally
 --------------------------------------------

-First, you have to provide Rally with an OpenStack deployment it is going to
-benchmark. This should be done either through `OpenRC files`_ or through
+First, you have to provide Rally with an OpenStack deployment that should be
+tested. This should be done either through `OpenRC files`_ or through
 deployment `configuration files`_. In case you already have an *OpenRC*, it is
 extremely simple to register a deployment with the *deployment create* command:

@@ -67,12 +66,11 @@ create* command has a slightly different syntax in this case:


 Note the last line in the output. It says that the just created deployment is
-now used by Rally; that means that all the benchmarking operations from now on
-are going to be performed on this deployment. Later we will show how to switch
-between different deployments.
+now used by Rally; that means that all tasks or verify commands are going to be
+run against it. Later in tutorial is described how to use multiple deployments.

 Finally, the *deployment check* command enables you to verify that your current
-deployment is healthy and ready to be benchmarked:
+deployment is healthy and ready to be tested:

 .. code-block:: console

@@ -94,13 +92,13 @@ deployment is healthy and ready to be benchmarked:
     +----------+----------------+-----------+


-Benchmarking
-------------
+Running Rally Tasks
+-------------------

-Now that we have a working and registered deployment, we can start benchmarking
-it. The sequence of benchmarks to be launched by Rally should be specified in a
-*benchmark task configuration file* (either in *JSON* or in *YAML* format).
-Let's try one of the sample benchmark tasks available in
+Now that we have a working and registered deployment, we can start testing
+it. The sequence of subtask to be launched by Rally should be specified in a
+*task input file* (either in *JSON* or in *YAML* format).
+Let's try one of the task sample available in
 `samples/tasks/scenarios`_, say, the one that boots and deletes multiple
 servers (*samples/tasks/scenarios/nova/boot-and-delete.json*):

@@ -135,7 +133,7 @@ servers (*samples/tasks/scenarios/nova/boot-and-delete.json*):
     }


-To start a benchmark task, run the ``task start`` command (you can also add the
+To start a task, run the ``task start`` command (you can also add the
 *-v* option to print more logging information):

 .. code-block:: console
@@ -152,7 +150,7 @@ To start a benchmark task, run the ``task start`` command (you can also add the
     Task 6fd9a19f-5cf8-4f76-ab72-2e34bb1d4996: started
     --------------------------------------------------------------------------------

-    Benchmarking... This can take a while...
+    Running Task... This can take a while...

     To track task status use:

@@ -199,10 +197,10 @@ To start a benchmark task, run the ``task start`` command (you can also add the

 Note that the Rally input task above uses *regular expressions* to specify the
 image and flavor name to be used for server creation, since concrete names
-might differ from installation to installation. If this benchmark task fails,
-then the reason for that might a non-existing image/flavor specified in the
-task. To check what images/flavors are available in the deployment you are
-currently benchmarking, you might use the the following commands:
+might differ from installation to installation. If this task fails, then the
+reason for that might a non-existing image/flavor specified in the task.
+To check what images/flavors are available in the deployment, you might use the
+the following commands:

 .. code-block:: console

@@ -235,16 +233,16 @@ Report generation

 One of the most beautiful things in Rally is its task report generation
 mechanism. It enables you to create illustrative and comprehensive HTML reports
-based on the benchmarking data. To create and open at once such a report for
-the last task you have launched, call:
+based on the task data. To create and open at once such a report for the last
+task you have launched, call:

 .. code-block:: bash

    rally task report --out=report1.html --open

-This will produce an HTML page with the overview of all the scenarios that
-you've included into the last benchmark task completed in Rally (in our case,
-this is just one scenario, and we will cover the topic of multiple scenarios in
+This is going produce an HTML page with the overview of all the scenarios that
+you've included into the last task completed in Rally (in our case, this is
+just one scenario, and we will cover the topic of multiple scenarios in
 one task in
 :ref:`the next step of our tutorial <tutorial_step_2_input_task_format>`):

@@ -252,17 +250,17 @@ one task in
    :align: center

 This aggregating table shows the duration of the load produced by the
-corresponding scenario (*"Load duration"*), the overall benchmark scenario
-execution time, including the duration of environment preparation with contexts
-(*"Full duration"*), the number of iterations of each scenario
-(*"Iterations"*), the type of the load used while running the scenario
-(*"Runner"*), the number of failed iterations (*"Errors"*) and finally whether
-the scenario has passed certain Success Criteria (*"SLA"*) that were set up by
-the user in the input configuration file (we will cover these criteria in
+corresponding scenario (*"Load duration"*), the overall subtask execution time,
+including the duration of context creation (*"Full duration"*), the number of
+iterations of each scenario (*"Iterations"*), the type of the load used while
+running the scenario (*"Runner"*), the number of failed iterations (*"Errors"*)
+and finally whether the scenario has passed certain Success Criteria (*"SLA"*)
+that were set up by the user in the input configuration file (we will cover
+these criteria in
 :ref:`one of the next steps <tutorial_step_4_adding_success_criteria_for_benchmarks>`).

 By navigating in the left panel, you can switch to the detailed view of the
-benchmark results for the only scenario we included into our task, namely
+task results for the only scenario we included into our task, namely
 **NovaServers.boot_and_delete_server**:

 .. image:: ../../images/Report-Scenario-Overview.png
@@ -33,11 +33,11 @@ task**. To do so, use the following syntax:
 .. code-block:: json

     {
-        "<ScenarioName1>": [<benchmark_config>, <benchmark_config2>, ...]
-        "<ScenarioName2>": [<benchmark_config>, ...]
+        "<ScenarioName1>": [<config>, <config2>, ...]
+        "<ScenarioName2>": [<config>, ...]
     }

-where *<benchmark_config>*, as before, is a dictionary:
+where *<config>*, as before, is a dictionary:

 .. code-block:: json

@@ -48,7 +48,7 @@ where *<benchmark_config>*, as before, is a dictionary:
         "sla": { <different SLA configs> }
     }

-Multiple benchmarks in a single task
+Multiple subtasks in a single task
 ------------------------------------

 As an example, let's edit our configuration file from
@@ -100,7 +100,7 @@ JSON file:
     ]
 }

-Now you can start this benchmark task as usually:
+Now you can start this task as usually:

 .. code-block:: console

@@ -131,11 +131,10 @@ Now you can start this benchmark task as usually:

     ...

-Note that the HTML reports you can generate by typing **rally task report
---out=report_name.html** after your benchmark task has completed will get
-richer as your benchmark task configuration file includes more benchmark
-scenarios. Let's take a look at the report overview page for a task that covers
-all the scenarios available in Rally:
+Note that the HTML task reports can be generate by typing **rally task report
+--out=report_name.html**. This command works even if not all subtask are done.
+
+Let's take a look at the report overview page for a task with multiple subtasks

 .. code-block:: bash

@@ -148,11 +147,10 @@ all the scenarios available in Rally:
 Multiple configurations of the same scenario
 --------------------------------------------

-Yet another thing you can do in Rally is to launch **the same benchmark
-scenario multiple times with different configurations**. That's why our
-configuration file stores a list for the key
-*"NovaServers.boot_and_delete_server"*: you can just append a different
-configuration of this benchmark scenario to this list to get it. Let's say,
+Yet another thing you can do in Rally is to launch **the same scenario multiple
+times with different configurations**. That's why our configuration file stores
+a list for the key *"NovaServers.boot_and_delete_server"*: you can just append
+a different configuration of this scenario to this list to get it. Let's say,
 you want to run the **boot_and_delete_server** scenario twice: first using the
 *"m1.tiny"* flavor and then using the *"m1.small"* flavor:

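In other words, the scenario key maps to a two-element list, one entry per flavor. A sketch of the shape (YAML; the image regex and runner numbers are placeholders, not the elided sample's actual values):

.. code-block:: yaml

    NovaServers.boot_and_delete_server:
      - args:
          flavor:
            name: "m1.tiny"
          image:
            name: "^cirros.*$"   # placeholder image regex
        runner:
          type: "constant"
          times: 10
          concurrency: 2
      - args:
          flavor:
            name: "m1.small"     # same scenario, second configuration
          image:
            name: "^cirros.*$"
        runner:
          type: "constant"
          times: 10
          concurrency: 2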
@@ -15,8 +15,8 @@

 .. _tutorial_step_3_benchmarking_with_existing_users:

-Step 3. Benchmarking OpenStack with existing users
-==================================================
+Step 3. Running Task against OpenStack with read only users
+===========================================================

 .. contents::
    :local:
@@ -25,10 +25,9 @@ Motivation
 ----------

 There are two very important reasons from the production world of why it is
-preferable to use some already existing users to benchmark your OpenStack
-cloud:
+preferable to use some already existing users to test your OpenStack cloud:

-1. *Read-only Keystone Backends:* creating temporary users for benchmark
+1. *Read-only Keystone Backends:* creating temporary users for running
 scenarios in Rally is just impossible in case of r/o Keystone backends like
 *LDAP* and *AD*.

@@ -36,8 +35,8 @@ scenarios in Rally is just impossible in case of r/o Keystone backends like
 goes wrong, this won’t affect the rest of the cloud users.


-Registering existing users in Rally
------------------------------------
+Registering deployment with existing users in Rally
+---------------------------------------------------

 The information about existing users in your OpenStack cloud should be passed
 to Rally at the
@@ -92,15 +91,15 @@ it as usual:
     ~/.rally/openrc was updated

 With this new deployment being active, Rally will use the already existing
-users instead of creating the temporary ones when launching benchmark task
-that do not specify the *"users"* context.
+users instead of creating the temporary ones when launching task that do not
+specify the *"users"* context.


-Running benchmark scenarios with existing users
------------------------------------------------
+Running tasks that uses existing users
+--------------------------------------

 After you have registered a deployment with existing users, don't forget to
-remove the *"users"* context from your benchmark task configuration if you want
+remove the *"users"* context from your task input file if you want
 to use existing users, like in the following configuration file
 (*boot-and-delete.json*):

@@ -129,14 +128,14 @@ to use existing users, like in the following configuration file
     ]
 }

-When you start this task, it will use the existing users *"b1"* and *"b2"*
-instead of creating the temporary ones:
+When you start this task, it is going to use *"b1"* and *"b2"* for running
+subtask instead of creating the temporary users:

 .. code-block:: bash

    rally task start samples/tasks/scenarios/nova/boot-and-delete.json

-It goes without saying that support of benchmarking with predefined users
+It goes without saying that support of running with predefined users
 simplifies the usage of Rally for generating loads against production clouds.

 (based on: http://boris-42.me/rally-can-generate-load-with-passed-users-now/)
@@ -15,8 +15,8 @@

 .. _tutorial_step_4_adding_success_criteria_for_benchmarks:

-Step 4. Adding success criteria (SLA) for benchmarks
-====================================================
+Step 4. Adding success criteria (SLA) for subtasks
+==================================================

 .. contents::
    :local:
@@ -25,10 +25,10 @@ SLA - Service-Level Agreement (Success Criteria)
 ------------------------------------------------

 Rally allows you to set success criteria (also called *SLA - Service-Level
-Agreement*) for every benchmark. Rally will automatically check them for you.
+Agreement*) for every subtask. Rally will automatically check them for you.

 To configure the SLA, add the *"sla"* section to the configuration of the
-corresponding benchmark (the check name is a key associated with its target
+corresponding subtask (the check name is a key associated with its target
 value). You can combine different success criteria:

 .. code-block:: json
@@ -56,14 +56,14 @@ value). You can combine different success criteria:
     }

 Such configuration will mark the **NovaServers.boot_and_delete_server**
-benchmark scenario as not successful if either some iteration took more than 10
+task scenario as not successful if either some iteration took more than 10
 seconds or more than 25% iterations failed.


 Checking SLA
 ------------
-Let us show you how Rally SLA work using a simple example based on **Dummy
-benchmark scenarios**. These scenarios actually do not perform any
+Let us show you how Rally SLA work using a simple example based on
+**Dummy scenarios**. These scenarios actually do not perform any
 OpenStack-related stuff but are very useful for testing the behaviors of Rally.
 Let us put in a new task, *test-sla.json*, 2 scenarios -- one that does nothing
 and another that just throws an exception:
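The *test-sla.json* body itself is elided by the diff; its shape is roughly the following sketch (in YAML rather than the file's JSON; the runner numbers are placeholders, while the two scenario names and the 0% *failure_rate* criterion come from the surrounding text):

.. code-block:: yaml

    Dummy.dummy:                 # does nothing, expected to pass the SLA
      - runner:
          type: "constant"
          times: 5
          concurrency: 1
        sla:
          failure_rate:
            max: 0
    Dummy.dummy_exception:       # always raises, expected to fail the SLA
      - runner:
          type: "constant"
          times: 5
          concurrency: 1
        sla:
          failure_rate:
            max: 0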
@@ -112,8 +112,8 @@ and another that just throws an exception:
     }

 Note that both scenarios in these tasks have the **maximum failure rate of 0%**
-as their **success criterion**. We expect that the first scenario will pass
-this criterion while the second will fail it. Let's start the task:
+as their **success criterion**. We expect that the first scenario is going
+to pass this criterion while the second will fail it. Let's start the task:


 .. code-block:: bash
@@ -127,7 +127,7 @@ the success criteria you defined in the task:

     $ rally task sla_check
     +-----------------------+-----+--------------+--------+-------------------------------------------------------------------------------------------------------+
-    | benchmark | pos | criterion | status | detail |
+    | subtask | pos | criterion | status | detail |
     +-----------------------+-----+--------------+--------+-------------------------------------------------------------------------------------------------------+
     | Dummy.dummy | 0 | failure_rate | PASS | Maximum failure rate percent 0.0% failures, minimum failure rate percent 0% failures, actually 0.0% |
     | Dummy.dummy_exception | 0 | failure_rate | FAIL | Maximum failure rate percent 0.0% failures, minimum failure rate percent 0% failures, actually 100.0% |
@ -145,20 +145,20 @@ SLA checks are nicely visualized in task reports. Generate one:
rally task report --out=report_sla.html --open

Benchmark scenarios that have passed SLA have a green check on the overview
Subtasks that have passed SLA have a green check on the overview
page:

.. image:: ../../images/Report-SLA-Overview.png
   :align: center

Somewhat more detailed information about SLA is displayed on the scenario
Somewhat more detailed information about SLA is displayed on the subtask
pages:

.. image:: ../../images/Report-SLA-Scenario.png
   :align: center

Success criteria present a very useful concept that enables not only to analyze
the outcome of your benchmark tasks, but also to control their execution. In
the outcome of your tasks, but also to control their execution. In
:ref:`one of the next sections <tutorial_step_6_aborting_load_generation_on_sla_failure>`
of our tutorial, we will show how to use SLA to abort the load generation
before your OpenStack goes wrong.
@ -199,7 +199,7 @@ starting a task:
Task cbf7eb97-0f1d-42d3-a1f1-3cc6f45ce23f: started
--------------------------------------------------------------------------------

Benchmarking... This can take a while...
Running Task... This can take a while...


Using the default values
@ -365,12 +365,12 @@ automatically unfold the for-loop for you:
Task ea7e97e3-dd98-4a81-868a-5bb5b42b8610: started
--------------------------------------------------------------------------------

Benchmarking... This can take a while...
Running Task... This can take a while...

As you can see, the Rally task template syntax is a simple but powerful
mechanism that not only enables you to write elegant task configurations, but
also makes them more readable for other people. When used appropriately, it can
really improve the understanding of your benchmarking procedures in Rally when
really improve the understanding of your testing procedures in Rally when
shared with others.
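
For illustration, the kind of for-loop template unfolded above might look like
this (a hedged sketch; the scenario and flavor names are examples rather than
the exact sample from this tutorial):

.. code-block:: yaml

    ---
      NovaServers.boot_and_delete_server:
      {% for flavor_name in ["m1.tiny", "m1.nano"] %}
        -
          args:
            flavor:
              name: "{{ flavor_name }}"
            image:
              name: "^cirros.*-disk$"
          runner:
            type: "constant"
            times: 2
            concurrency: 1
      {% endfor %}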

.. references:
@ -18,7 +18,7 @@
Step 6. Aborting load generation on success criteria failure
============================================================

Benchmarking pre-production and production OpenStack clouds is not a trivial
Testing pre-production and production OpenStack clouds is not a trivial
task. On the one hand it is important to reach the OpenStack cloud's limits,
on the other hand the cloud shouldn't be damaged. Rally aims to make this
task as simple as possible. Since the very beginning Rally was able to generate
@ -29,8 +29,8 @@ until it was too late.
With the **"stop on SLA failure"** feature, however, things are much better.

This feature can be easily tested in real life by running one of the most
important and plain benchmark scenario called *"Authenticate.keystone"*. This
scenario just tries to authenticate from users that were pre-created by Rally.
important and plain scenario called *"Authenticate.keystone"*. This scenario
just tries to authenticate from users that were pre-created by Rally.
Rally input task looks as follows (*auth.yaml*):

.. code-block:: yaml
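
    # A sketch reconstructed from the surrounding description -- the runner
    # numbers and the SLA threshold below are illustrative, not the verbatim
    # contents of auth.yaml:
    ---
      Authenticate.keystone:
        -
          runner:
            type: "rps"
            times: 6000
            rps: 50
          context:
            users:
              tenants: 5
              users_per_tenant: 10
          sla:
            max_avg_duration: 5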
@ -62,7 +62,7 @@ information about them:
+--------+------------------------------------------------+


In case if multiple found benchmarks found command list all matches elements:
If multiple plugins were found, all matched elements are listed:

.. code-block:: console
@ -26,7 +26,7 @@ by Rally (and thus should be cleaned up after the fact).

Random names are generated from a fairly limited set of digits and
ASCII letters. This should be configurable by each plugin, along with
all other parts of the random name, in order to support benchmarking
all other parts of the random name, in order to support testing
systems other than OpenStack, which may have different naming
restrictions.
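
A minimal sketch of what such per-plugin configurability could look like (the
class and attribute names below are hypothetical, not Rally's actual API):

.. code-block:: python

    import random
    import string


    class RandomNameGeneratorMixin(object):
        """Generate random names from a per-plugin configurable alphabet."""

        # Plugins would override these to satisfy the naming rules of the
        # system under test; every "X" is replaced by a random character.
        RESOURCE_NAME_FORMAT = "rally-XXXXXXXX"
        RESOURCE_NAME_ALLOWED_CHARACTERS = string.ascii_lowercase + string.digits

        def generate_random_name(self):
            rng = random.SystemRandom()
            return "".join(
                rng.choice(self.RESOURCE_NAME_ALLOWED_CHARACTERS)
                if char == "X" else char
                for char in self.RESOURCE_NAME_FORMAT)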
@ -18,7 +18,7 @@ There are 3 use cases that requires DB refactoring:

1. scalable task engine

   Run benchmarks with billions iterations
   Run load tests with billions of iterations
   Generate distributed load 10k-100k RPS
   Generate all reports/aggregates based on that data
@ -150,10 +150,10 @@ Task table
    # Duration of verification can be used to tune verification process.
    validation_duration : FLOAT

    # Duration of benchmarking part of task
    # Duration of load part of subtask
    task_duration : FLOAT

    # All workloads in the task are passed
    # All workloads in the subtask are passed
    pass_sla : BOOL

    # Current status of task
@ -19,14 +19,14 @@ Problem description

There are 5 use cases that require cleanup refactoring:

#. Benchmarking with existing tenants.
#. Running Task against existing tenants and users.

   Keep existing resources instead of deleting all resources in the tenants.

#. Persistence benchmark context.
#. Persistence Task context.

   Create benchmark environment once before benchmarking. After that run some
   amount of benchmarks that are using it and at the end just delete all
   Create testing environment once before running tasks. After that run some
   amount of tasks that are using it and at the end just delete all
   created resources by context cleanups.

#. Disaster cleanup.
@ -141,7 +141,7 @@ Alternatives
better place for this, and for the cleanup code in general. In this case,
we need to think about a case where a Rally scenario creates a tenant, and
then deletes it but some resources are left around. And also we need to think
about a case of benchmark on existing tenants.
about a case of testing using existing tenants.


Implementation
@ -188,10 +188,10 @@ Dependencies
* Add name pattern filter for resource cleanup:
  https://review.openstack.org/#/c/139643/

* Finish support of benchmarking with existing users:
* Finish support of running tasks using existing users:
  https://review.openstack.org/#/c/168524/

* Add support of persistence benchmark environment:
* Add support of persistence context environment:
  https://github.com/openstack/rally/blob/master/doc/feature_request/persistence_benchmark_env.rst

* Production ready cleanups:
@ -88,7 +88,7 @@ keystone scenarios use plugins/openstack/scenarios/keystone/utils.py
.. code-block:: python

    class KeystoneBasic(kutils.KeystoneScenario):
        """Basic benchmark scenarios for Keystone."""
        """Basic scenarios for Keystone."""

        @validation.number("name_length", minval=10)
        @validation.required_openstack(admin=True)
@ -155,8 +155,7 @@ Users context:
    @context.configure(name="users", order=100)
    class UserGenerator(UserContextMixin, context.Context):
        """Context class for generating temporary
        users/tenants for benchmarks."""
        """Context class for generating temporary users/tenants for testing."""

        def _create_tenants(self):
            cache["client"] = keystone.wrap(clients.keystone())
@ -307,7 +306,7 @@ of scenario.
    from rally.plugins.openstack.services.identity import keystone_v3

    class KeystoneBasic(scenario.OpenStackScenario):  # no more utils.py
        """Basic benchmark scenarios for Keystone."""
        """Basic scenarios for Keystone."""


        @validation.number("name_length", minval=10)
@ -1,14 +1,15 @@
==========================================================================================
Finding a Keystone bug while benchmarking 20 node HA cloud performance at creating 400 VMs
==========================================================================================
=====================================================================================
Finding a Keystone bug while testing 20 node HA cloud performance at creating 400 VMs
=====================================================================================

*(Contributed by Alexander Maretskiy, Mirantis)*

Below we describe how we found a `bug in Keystone`_ and achieved 2x average
performance increase at booting Nova servers after fixing that bug. Our initial
goal was to benchmark the booting of a significant amount of servers on a
cluster (running on a custom build of `Mirantis OpenStack`_ v5.1) and to ensure
that this operation has reasonable performance and completes with no errors.
goal was to measure the performance of booting a significant amount of servers
on a cluster (running on a custom build of `Mirantis OpenStack`_ v5.1) and to
ensure that this operation has reasonable performance and completes
with no errors.

Goal
----
@ -65,7 +66,7 @@ Rally
**Version**

For this benchmark, we use custom Rally with the following patch:
For this test case, we use custom Rally with the following patch:

https://review.openstack.org/#/c/96300/