Commit 7652ac0e4b by Alexander Maretskiy: [Common] Add more streaming algorithms
Changes:

 * Refactor class names of the existing streaming algorithms
   (rally.common.streaming_algorithms), as they are too long

 * Add new streaming algorithm classes to the module
   rally.common.streaming_algorithms

   All of these classes are intended for further use in
   report generation - to make report generation simpler,
   unified and able to process arbitrary data with low
   memory usage.

   These classes are:

   MinComputation
     keeps the minimal value from a stream of numbers

   MaxComputation
     keeps the maximal value from a stream of numbers

   PercentileComputation
     calculates a percentile value from a stream of numbers,
     including the median (with percent=50 specified).

     This class will replace the following functions:
     rally.task.processing.utils.median
     rally.task.processing.utils.percentile

   ProgressComputation
     calculates the percentage of add() calls made so far
     with respect to the expected (total) number of calls

   IncrementComputation
     a simple counter of how many times add() has been called

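   For illustration, a minimal sketch of the shared add()-based
   interface described above (the class and parameter names mirror
   the list; the method bodies and the result() accessor are
   assumptions, not the actual Rally implementations):

     import bisect


     class MinComputation(object):
         """Keep the minimal value seen in a stream of numbers."""

         def __init__(self):
             self._value = None

         def add(self, value):
             # Keep only the smallest value seen so far (O(1) memory)
             if self._value is None or value < self._value:
                 self._value = value

         def result(self):
             return self._value


     class ProgressComputation(object):
         """Percentage of expected add() calls made so far."""

         def __init__(self, total):
             self._total = total
             self._count = 0

         def add(self, *args):
             self._count += 1

         def result(self):
             if not self._total:
                 return 0.0
             return 100.0 * self._count / self._total


     class PercentileComputation(object):
         """Percentile (percent=50 gives the median) of a stream of numbers.

         An exact percentile still needs the retained samples, so this
         sketch keeps a sorted list and interpolates on demand.
         """

         def __init__(self, percent):
             self._percent = percent
             self._values = []

         def add(self, value):
             bisect.insort(self._values, value)

         def result(self):
             if not self._values:
                 return None
             pos = (len(self._values) - 1) * self._percent / 100.0
             low = int(pos)
             high = min(low + 1, len(self._values) - 1)
             return (self._values[low] +
                     (self._values[high] - self._values[low]) * (pos - low))

   MaxComputation and IncrementComputation follow the same add()/result()
   pattern.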
Change-Id: Ib63d34f573a695edf86b309d0b7bb69571b2d95d

Rally

What is Rally

Rally is a Benchmark-as-a-Service project for OpenStack.

Rally is intended to provide the community with a benchmarking tool that is capable of performing specific, complicated and reproducible test cases on real deployment scenarios.

If you are here, you are probably familiar with OpenStack and you also know that it's a really huge ecosystem of cooperating services. When something fails, performs slowly or doesn't scale, it's really hard to answer what happened, why it happened and where. Another reason you could be here is that you would like to build an OpenStack CI/CD system that allows you to continuously improve the SLA, performance and stability of OpenStack.

The OpenStack QA team mostly works on CI/CD that ensures that new patches don't break a specific single-node installation of OpenStack. On the other hand, it's clear that such CI/CD is only an indication and does not cover all cases (e.g. if a cloud works well on a single-node installation, that doesn't mean it will continue to do so on a 1k-node installation under high load). Rally aims to fix this and help us answer the question "How does OpenStack work at scale?". To make this possible, we are going to automate and unify all the steps required for benchmarking OpenStack at scale: multi-node OS deployment, verification, benchmarking & profiling.

The Rally workflow can be visualized by the following diagram:

Rally Architecture

Documentation

Rally documentation on ReadTheDocs is a perfect place to start learning about Rally. It provides easy and illustrative guidance through this benchmarking tool. For example, check out the Rally step-by-step tutorial, which explains, in a series of lessons, how to explore the power of Rally for benchmarking your OpenStack clouds.

Architecture

In terms of software architecture, Rally consists of 4 main components:

  1. Server Providers - provide servers (virtual servers) with SSH access in one L3 network.
  2. Deploy Engines - deploy an OpenStack cloud on the servers provided by Server Providers.
  3. Verification - runs Tempest (or another specific set of tests) against a deployed cloud, collects the results & presents them in a human-readable form.
  4. Benchmark Engine - allows writing parameterized benchmark scenarios & running them against the cloud (a minimal conceptual sketch follows this list).
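As an illustration of what a parameterized benchmark scenario boils down to, here is a framework-free sketch; the function names and the trivial runner below are hypothetical stand-ins, not Rally's real plugin API or runners:

    import time


    def example_scenario(pause):
        # Hypothetical scenario body: a stand-in for real cloud operations
        # (e.g. boot a VM, then delete it).
        time.sleep(pause)


    def run_scenario(scenario, args, times):
        # Run the scenario the requested number of times with the given
        # arguments and collect the duration of each iteration.
        durations = []
        for _ in range(times):
            start = time.time()
            scenario(**args)
            durations.append(time.time() - start)
        return durations


    # Example: 3 iterations of the scenario with pause=0.1 seconds.
    print(run_scenario(example_scenario, {"pause": 0.1}, times=3))

In a real Rally run, the scenario, its arguments and the load pattern come from a task configuration rather than being hard-coded, and the collected durations feed report generation.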

Use Cases

There are 3 major high-level Rally Use Cases:

Rally Use Cases

Typical cases where Rally aims to help are:

  • Automate measuring & profiling focused on how new code changes affect OpenStack performance;
  • Using the Rally profiler to detect scaling & performance issues;
  • Investigate how different deployments affect OpenStack performance:
    • Find the set of suitable OpenStack deployment architectures;
    • Create deployment specifications for different loads (number of controllers, Swift nodes, etc.);
  • Automate the search for the hardware best suited for a particular OpenStack cloud;
  • Automate the production cloud specification generation:
    • Determine terminal loads for basic cloud operations: VM start & stop, Block Device create/destroy & various OpenStack API methods;
    • Check the performance of basic cloud operations under different loads.

Rally documentation:

http://rally.readthedocs.org/en/latest/

Rally step-by-step tutorial:

http://rally.readthedocs.org/en/latest/tutorial.html

RoadMap:

https://docs.google.com/a/mirantis.com/spreadsheets/d/16DXpfbqvlzMFaqaXAcJsBzzpowb_XpymaK2aFY2gA2g

Launchpad page:

https://launchpad.net/rally

Code is hosted on git.openstack.org:

http://git.openstack.org/cgit/openstack/rally

Code is mirrored on github:

https://github.com/openstack/rally

Description
Rally provides a framework for performance analysis and benchmarking of individual OpenStack components as well as full production OpenStack cloud deployments