ff9c4ac151
Class `verification.verifiers.tempest.Tempest` contains the following changes:

- Output from the installation of the virtual environment is not useful for users, so its visibility was changed: it is now printed only in debug mode.
- The verification status is set to 'FAILED' if something goes wrong in subprocess.call.
- Two variables contained the word 'tempest' in their names. This is redundant, so they were renamed: Tempest.tempest_base_path => Tempest.base_repo, Tempest.tempest_path => Tempest._path.
- The construction "os.path.join(Tempest.path, some_path)" was moved into the method `Tempest.path`, since it is used in many places.
- Method `Tempest.parse_results` should not be static, because it needs the instance attribute `Tempest.log_file_raw`; this was fixed.
- "git remote update" is not needed for Tempest installation, so this call was removed, which decreases installation time.

In the `rally.cmd.commands.verify.start` command, several issues were fixed:

- The first function argument was changed to "set_name" instead of "deploy_id". Reason: "deploy_id" has a default value, so it should not be the first argument. This simplifies the command for end users (launch 'rally verify start <set_name>' instead of 'rally verify start --set <set_name>').
- Task commands have a useful feature: the task_id is saved in global variables, so the results command can print the last task without its id being specified. This feature was ported to verification.

The verification tests are numerous, so they were split into separate classes (TempestVerifyTestCase, TempestInstallAndUninstallTestCase, etc.). New tests were also added.

Change-Id: I08a52a1e3ceb468ba619049573bcfe642aecbcaf
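As a rough illustration only (not the actual Rally source), a minimal sketch of what the reworked class could look like after these changes; the verification object, its set_failed() helper and the concrete paths and commands below are assumptions:

    import logging
    import os
    import subprocess

    LOG = logging.getLogger(__name__)


    class Tempest(object):
        """Illustrative sketch of the reworked verifier."""

        base_repo = "/var/lib/rally/tempest/base"          # was: tempest_base_path

        def __init__(self, deploy_id, verification=None):
            self.deploy_id = deploy_id
            self.verification = verification                # assumed status holder
            self._path = ("/var/lib/rally/tempest/"
                          "for-deployment-%s" % deploy_id)  # was: tempest_path
            self.log_file_raw = self.path("subunit.stream")

        def path(self, *inner_path):
            # Replaces the repeated "os.path.join(self._path, some_path)" calls.
            return os.path.join(self._path, *inner_path)

        def _install_venv(self):
            # Installation output is shown only in debug mode.
            quiet = not LOG.isEnabledFor(logging.DEBUG)
            cmd = ["python", self.path("tools/install_venv.py")]
            with open(os.devnull, "w") as devnull:
                out = devnull if quiet else None
                if subprocess.call(cmd, stdout=out, stderr=out) != 0:
                    if self.verification:
                        # Hypothetical helper; marks the run as 'FAILED'.
                        self.verification.set_failed()
                    raise RuntimeError("Tempest virtualenv installation failed")

        def parse_results(self):
            # No longer a @staticmethod: it needs self.log_file_raw.
            with open(self.log_file_raw) as raw:
                return raw.read()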
Rally
What is Rally
Rally is a Benchmark-as-a-Service project for OpenStack.
Rally is intended to provide the community with a benchmarking tool that is capable of performing specific, complicated and reproducible test cases on real deployment scenarios.
If you are here, you are probably familiar with OpenStack and you also know that it is a really huge ecosystem of cooperative services. When something fails, performs slowly or doesn't scale, it is really hard to answer what happened, why it happened and where. Another reason you could be here is that you would like to build an OpenStack CI/CD system that allows you to continuously improve the SLA, performance and stability of OpenStack.
The OpenStack QA team mostly works on CI/CD that ensures that new patches don't break a specific single-node installation of OpenStack. On the other hand, it is clear that such CI/CD is only an indication and does not cover all cases (e.g. if a cloud works well on a single-node installation, it doesn't mean that it will continue to do so on a 1000-node installation under high load). Rally aims to fix this and help us answer the question "How does OpenStack work at scale?". To make that possible, we are going to automate and unify all the steps required for benchmarking OpenStack at scale: multi-node OS deployment, verification, benchmarking & profiling.
Rally workflow can be visualized by the following diagram:
Architecture
In terms of software architecture, Rally consists of 4 main components:
- Server Providers - provide servers (virtual servers), with ssh access, in one L3 network.
- Deploy Engines - deploy an OpenStack cloud on servers that are provided by Server Providers
- Verification - component that runs tempest (or another specific set of tests) against a deployed cloud, collects results & presents them in human readable form.
- Benchmark engine - allows one to write parameterized benchmark scenarios & run them against the cloud (a rough sketch of such a scenario follows this list).
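For illustration only, a parameterized benchmark scenario is described by a small task configuration along these lines; the scenario name, arguments and load settings below are assumptions, not the exact task schema of this Rally version:

    # Hypothetical sketch of a parameterized benchmark scenario definition.
    task = {
        "NovaServers.boot_and_delete_server": [
            {
                "args": {
                    "flavor_id": 1,
                    "image_id": "<image uuid>",
                },
                "config": {
                    "times": 10,        # run the scenario 10 times ...
                    "active_users": 2,  # ... with 2 parallel users
                },
            }
        ]
    }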
Use Cases
There are 3 major high-level Rally use cases.
Typical cases where Rally aims to help are:
- Automate measuring & profiling focused on how new code changes affect OS performance;
- Use the Rally profiler to detect scaling & performance issues;
- Investigate how different deployments affect OS performance:
  - Find the set of suitable OpenStack deployment architectures;
  - Create deployment specifications for different loads (number of controllers, swift nodes, etc.);
- Automate the search for the hardware best suited to a particular OpenStack cloud;
- Automate the generation of production cloud specifications:
  - Determine terminal loads for basic cloud operations: VM start & stop, Block Device create/destroy & various OpenStack API methods;
  - Check the performance of basic cloud operations under different loads.
Links
Wiki page:
Rally/HowTo:
Launchpad page:
Code is hosted on github:
Trello board: