Fixed some formatting and typos in the README

Change-Id: I19f5d8e253aab9dca9ff3eccc376cc0d6c07107f
This commit is contained in:
Brandon Clifford 2013-10-15 17:04:25 -06:00
parent 1edfbc8a95
commit bbf0369400
1 changed file with 7 additions and 7 deletions


@@ -7,29 +7,29 @@ Introduction
Rally is a Benchmark-as-a-Service project for OpenStack.
Rally is intended for providing the community with a benchmarking tool that is capable of performing **specific**, **complicated** and **reproducible** test cases on **real deployment** scenarios.
Rally is intended to provide the community with a benchmarking tool that is capable of performing **specific**, **complicated** and **reproducible** test cases on **real deployment** scenarios.
In the OpenStack ecosystem there are currently several tools that are helpful in carrying out the benchmarking process for an OpenStack deployment. To name a few, there are *DevStack* and *FUEL* which are intended for deploying and managing OpenStack clouds, the *Tempest* testing framework that validates OpenStack APIs, some tracing facilities like *Tomograph* with *Zipkin*, and so on. The challenge, however, is to compile all these tools together on a reproducible basis. That can be a rather difficult task since the number of compute nodes in a practical deployment can be really huge and also because one may be willing to use lots of different deployment strategies that pursue different goals (e.g., while benchmarking the Nova Scheduler, one usually does not care of virtualization details, but is more concerned with the infrastructure topologies; while in other specific cases it may be the virtualization technology that matters). Compiling a bunch of already existing benchmarking facilities into one project, making it flexible to user requirements and ensuring the reproducibility of test results, is exactly what Rally does.
In the OpenStack ecosystem there are currently several tools that are helpful in carrying out the benchmarking process for an OpenStack deployment. To name a few, there are *DevStack* and *FUEL*, which are intended for deploying and managing OpenStack clouds, the *Tempest* testing framework, which validates OpenStack APIs, and some tracing facilities like *Tomograph* with *Zipkin*. The challenge, however, is to compile all these tools together on a reproducible basis. That can be a rather difficult task, since the number of compute nodes in a practical deployment can easily be large, and also because one may want to use many different deployment strategies that pursue different goals (e.g., while benchmarking the Nova Scheduler, one usually does not care about virtualization details, but is more concerned with the infrastructure topologies; while in other specific cases it may be the virtualization technology that matters). What Rally aims to do is compile many already existing benchmarking facilities into one project, making it flexible to user requirements and ensuring the reproducibility of test results.
Architecture
------------
Rally is basically split into 4 main components:
Rally is split into 4 main components:
1. **Deployment Engine**, which is responsible for processing and deploying VM images (using DevStack or FUEL, according to the user's preferences). The engine can do one of the following:
+ deploying an OS on already existing VMs;
+ deploying an Operating System (OS) on already existing VMs;
+ starting VMs from a VM image with pre-installed OS and OpenStack;
+ delpoying multiply VMs inside each has OpenStack compute node based on a VM image.
+ deploying multiple VMs, each with an OpenStack compute node inside, based on a VM image.
2. **VM Provider**, which interacts with cloud provider-specific interfaces to load and destroy VM images;
3. **Benchmarking Tool**, which carries out the benchmarking process in several stages:
+ runs *Tempest* tests, reduced to a 5-minute length (to save the usually expensive computing time);
+ runs the used-defined test scenarios (using the Rally testing framework);
+ runs the user-defined test scenarios (using the Rally testing framework);
+ collects all the test results and processes them with the *Zipkin* tracer;
+ puts together a benchmarking report and stores it on the machine Rally was launched on.
4. **Orchestrator**, which is the central component of the system. It uses the Deployment Engine to run control and compute nodes and to launch an OpenStack distribution and, after that, calls the Benchmarking Tool to start the benchmarking process.
4. **Orchestrator**, which is the central component of the system. It uses the Deployment Engine to run control and compute nodes and to launch an OpenStack distribution. After that, it calls the Benchmarking Tool to start the benchmarking process.
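As an aside to the "user-defined test scenarios" stage described above: conceptually, a scenario is a timed callable that the Benchmarking Tool runs against a deployment and whose results are collected into the report. The sketch below is only an illustration of that idea, not Rally's actual API — the `FakeNovaClient` stub and the function names are invented here so the example is self-contained.

```python
import time


class FakeNovaClient:
    """Stand-in for a cloud client; a real scenario would use the
    OpenStack client that the benchmarking framework hands to it."""

    def boot_server(self, name):
        # Pretend to boot a server and return its record.
        return {"name": name, "status": "ACTIVE"}

    def delete_server(self, server):
        # Pretend to tear the server down again.
        server["status"] = "DELETED"


def boot_and_delete_server(client, name="rally-test-vm"):
    """A user-defined test scenario: boot a server, delete it,
    and report how long the whole round trip took."""
    start = time.time()
    server = client.boot_server(name)
    client.delete_server(server)
    return {
        "scenario": "boot_and_delete_server",
        "duration": time.time() - start,
        "final_status": server["status"],
    }


result = boot_and_delete_server(FakeNovaClient())
print(result["final_status"])  # prints "DELETED"
```

In a real run, the framework would repeat such a scenario many times against the deployed cloud and aggregate the per-iteration durations into the final benchmarking report.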
Links