Main concepts of Rally
Benchmark Scenarios
Concept
The concept of benchmark scenarios is a central one in Rally. Benchmark scenarios are what Rally actually uses to test the performance of an OpenStack deployment. They also serve as the main building blocks in the configurations of benchmark tasks. Each benchmark scenario performs a small set of atomic operations, thus testing some simple use case, usually that of a specific OpenStack project. For example, the "NovaServers" scenario group contains scenarios that use several basic operations available in nova. The "boot_and_delete_server" benchmark scenario from that group makes it possible to benchmark the performance of a sequence of only two simple operations: it first boots a server (with customizable parameters) and then deletes it.
User's view
From the user's point of view, Rally launches different benchmark scenarios while performing some benchmark task. A benchmark task is essentially a set of benchmark scenarios run against some OpenStack deployment in a specific (and customizable) manner, started by the CLI command:
rally task start --task=<task_config.json>
Accordingly, the user may specify the names and parameters of benchmark scenarios to be run in benchmark task configuration files. A typical configuration file would have the following contents:
{
    "NovaServers.boot_server": [
        {
            "args": {
                "flavor_id": 42,
                "image_id": "73257560-c59b-4275-a1ec-ab140e5b9979"
            },
            "runner": {"times": 3},
            "context": {...}
        },
        {
            "args": {
                "flavor_id": 1,
                "image_id": "3ba2b5f6-8d8d-4bbe-9ce5-4be01d912679"
            },
            "runner": {"times": 3},
            "context": {...}
        }
    ],
    "CinderVolumes.create_volume": [
        {
            "args": {
                "size": 42
            },
            "runner": {"times": 3},
            "context": {...}
        }
    ]
}
In this example, the task configuration file specifies two benchmarks to be run, namely "NovaServers.boot_server" and "CinderVolumes.create_volume" (benchmark name = ScenarioClassName.method_name). Each benchmark scenario may be started several times with different parameters. In our example, that's the case with "NovaServers.boot_server", which is used to test booting servers from different images & flavors.
Note that inside each scenario configuration, the benchmark scenario is actually launched 3 times (that is specified in the "runner" field). The "runner" field can also specify in more detail how exactly the benchmark scenario should be launched; we elaborate on that in the "Scenario Runners" section below.
Developer's view
From the developer's perspective, a benchmark scenario is a method marked with the @scenario decorator, placed in a class that inherits from the base Scenario class and located in some subpackage of rally.benchmark.scenarios. There may be arbitrarily many benchmark scenarios in a scenario class; each of them is referred to (in the task configuration file) as ScenarioClassName.method_name.
In the toy example below, we define a scenario class MyScenario with one benchmark scenario MyScenario.scenario. This benchmark scenario tests the performance of a sequence of 2 actions, implemented via private methods in the same class. Both methods are marked with the @atomic_action_timer decorator. This allows Rally to handle those actions in a special way and, after benchmarks complete, show runtime statistics not only for whole scenarios, but for separate actions as well.
from rally.benchmark.scenarios import base
from rally.benchmark.scenarios import utils


class MyScenario(base.Scenario):
    """My class that contains benchmark scenarios."""

    @base.atomic_action_timer("action_1")
    def _action_1(self, **kwargs):
        """Do something with the cloud."""

    @base.atomic_action_timer("action_2")
    def _action_2(self, **kwargs):
        """Do something with the cloud."""

    @base.scenario()
    def scenario(self, **kwargs):
        self._action_1()
        self._action_2()
Scenario runners
Concept
Scenario Runners in Rally are entities that control the execution type and order of benchmark scenarios. They support different running strategies for creating load on the cloud, including simulating concurrent requests from different users, periodic load, gradually growing load and so on.
User's view
The user can specify which type of load on the cloud he would like to have through the "runner" section in the task configuration file:
{
    "NovaServers.boot_server": [
        {
            "args": {
                "flavor_id": 42,
                "image_id": "73257560-c59b-4275-a1ec-ab140e5b9979"
            },
            "runner": {
                "type": "constant",
                "times": 15,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 3
                },
                "quotas": {
                    "nova": {
                        "instances": 20
                    }
                }
            }
        }
    ]
}
The scenario running strategy is specified by its type and also by some type-specific parameters. Available types include:
- constant, for creating a constant load by running the scenario a fixed number of times, possibly in parallel (controlled by the "concurrency" parameter).
- constant_for_duration, which works exactly like constant, but runs the benchmark scenario until a specified number of seconds elapses (the "duration" parameter).
- periodic, which executes benchmark scenarios with intervals between two consecutive runs, specified in the "period" field in seconds.
- serial, which is very useful for testing new scenarios since it just runs the benchmark scenario a fixed number of times in a single thread.
Also, all scenario runners can be provided (again, through the "runner" section in the config file) with an optional "timeout" parameter, which specifies the timeout for each single benchmark scenario run (in seconds).
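For instance, a "runner" section requesting periodic load together with the optional timeout might look like the sketch below (the parameter values are purely illustrative):

"runner": {
    "type": "periodic",
    "times": 10,
    "period": 3,
    "timeout": 600
}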
Developer's view
It is possible to extend Rally with new Scenario Runner types, if needed. Basically, each scenario runner should be implemented as a subclass of the base ScenarioRunner class and located in the rally.benchmark.runners package. The interface each scenario runner class should support is fairly simple:
from rally.benchmark.runners import base
from rally import utils


class MyScenarioRunner(base.ScenarioRunner):
    """My scenario runner."""

    # This string is what the user will have to specify in the task
    # configuration file (in "runner": {"type": ...})
    __execution_type__ = "my_scenario_runner"

    # CONFIG_SCHEMA is used to automatically validate the input
    # config of the scenario runner, passed by the user in the task
    # configuration file.
    CONFIG_SCHEMA = {
        "type": "object",
        "$schema": utils.JSON_SCHEMA,
        "properties": {
            "type": {
                "type": "string"
            },
            "some_specific_property": {...}
        }
    }

    def _run_scenario(self, cls, method_name, ctx, args):
        """Run the scenario 'method_name' from scenario class 'cls'
        with arguments 'args', given a context 'ctx'.

        This method should return the results dictionary wrapped in a
        base.ScenarioRunnerResult object (not plain JSON).
        """
        results = ...
        return base.ScenarioRunnerResult(results)
Benchmark contexts
Concept
The notion of contexts in Rally is essentially used to define different types of environments in which benchmark scenarios can be launched. Those environments are usually specified by such parameters as the number of tenants and users that should be present in an OpenStack project, the roles granted to those users, extended or narrowed quotas and so on.
User's view
From the user's perspective, contexts in Rally are manageable via the task configuration files. In a typical configuration file, each benchmark scenario to be run is supplied not only with information about its arguments and how many times it should be launched, but also with a special "context" section. In this section, the user may configure a number of contexts he needs his scenarios to be run within.
In the example below, the "users" context specifies that the "NovaServers.boot_server" scenario should be run from 1 tenant having 3 users in it. Bearing in mind that the default quota for the number of instances is 10 instances per tenant, it is also reasonable to extend it to, say, 20 instances in the "quotas" context. Otherwise the scenario would eventually fail, since it tries to boot a server 15 times from a single tenant.
{
    "NovaServers.boot_server": [
        {
            "args": {
                "flavor_id": 42,
                "image_id": "73257560-c59b-4275-a1ec-ab140e5b9979"
            },
            "runner": {
                "type": "constant",
                "times": 15,
                "concurrency": 2
            },
            "context": {
                "users": {
                    "tenants": 1,
                    "users_per_tenant": 3
                },
                "quotas": {
                    "nova": {
                        "instances": 20
                    }
                }
            }
        }
    ]
}
Developer's view
From the developer's point of view, context management is implemented via Context classes. Each context type that can be specified in the task configuration file corresponds to a certain subclass of the base [https://github.com/stackforge/rally/blob/master/rally/benchmark/context/base.py Context] class, located in the [https://github.com/stackforge/rally/tree/master/rally/benchmark/context rally.benchmark.context] module. Every context class should implement a fairly simple interface:
from rally.benchmark.context import base
from rally import utils


@base.context(name="your_context",  # Corresponds to the context field name in task configuration files
              order=100500,         # A number specifying the priority with which the context should be set up
              hidden=False)         # True if the context cannot be configured through the input task file
class YourContext(base.Context):
    """Yet another context class."""

    # The schema of the context configuration format
    CONFIG_SCHEMA = {
        "type": "object",
        "$schema": utils.JSON_SCHEMA,
        "additionalProperties": False,
        "properties": {
            "property_1": <SCHEMA>,
            "property_2": <SCHEMA>
        }
    }

    def __init__(self, context):
        super(YourContext, self).__init__(context)
        # Initialize the necessary stuff

    def setup(self):
        # Prepare the environment in the desired way
        pass

    def cleanup(self):
        # Cleanup the environment properly
        pass
Consequently, the algorithm for setting up contexts can be roughly seen as follows:
context1 = Context1(ctx)
context2 = Context2(ctx)
context3 = Context3(ctx)

context1.setup()
context2.setup()
context3.setup()

<Run benchmark scenarios in the prepared environment>

context3.cleanup()
context2.cleanup()
context1.cleanup()
The order in which contexts are set up depends on the value of their order attribute. Contexts with a lower order value have higher priority: order values in the 1xx range are reserved for user-related contexts (e.g. users/tenants creation, roles assignment etc.), 2xx for quotas, and so on.
The hidden attribute defines whether the context should be hidden. Hidden contexts cannot be configured by end-users through the task configuration file as shown above; instead, they are specified by a benchmark scenario developer through the special @base.scenario(context={...}) decorator. Hidden contexts are typically needed to satisfy scenario-specific needs that don't require the end-user's attention. For example, the hidden "cleanup" context (rally.benchmark.context.cleanup.context) is used to perform a generic cleanup after a benchmark run, so the user cannot change its configuration via the task file and accidentally break his cloud.
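As a rough sketch (the scenario name and the context configuration here are purely illustrative), a scenario developer might bind a hidden context to a scenario like this:

from rally.benchmark.scenarios import base


class MyScenario(base.Scenario):

    # The context is configured by the developer in the decorator;
    # end-users cannot override it from the task configuration file.
    @base.scenario(context={"cleanup": ["nova"]})
    def boot_and_delete_server(self, **kwargs):
        """Boot a server and delete it afterwards."""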
If you want to dive deeper, also see the context manager class (rally.benchmark.context.base) that actually implements the algorithm described above.
Plugins
Rally makes it possible to create and use custom benchmark scenarios, runners and contexts as plugins. The plugin mechanism simplifies experimenting with new scenarios and facilitates their creation by users who don't want to edit the actual Rally code.
Placement
Put the plugin into the /opt/rally/plugins or ~/.rally/plugins directory (or their subdirectories) and it will be loaded automatically. The corresponding module should have a ".py" extension. These directories are not created automatically: you should either create them by hand or use the unpack_plugins_samles.sh script from doc/samples/plugins, which creates the ~/.rally/plugins directory for you (see more about this script in the Samples section).
Creation
Inherit the class for your plugin from the base class for scenarios, runners or contexts, depending on which type of plugin you want to create. See the sections above for more information about creating scenarios, runners and contexts.
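For example, a minimal scenario plugin could look like the sketch below (the class name, method name and file name are purely illustrative); placed in, say, ~/.rally/plugins/my_plugin.py, it would be loaded automatically:

import time

from rally.benchmark.scenarios import base


class MyPluginScenario(base.Scenario):
    """A sample scenario plugin."""

    @base.scenario()
    def sleep_a_bit(self, sleep=1):
        # Referenced in the task configuration file as
        # "MyPluginScenario.sleep_a_bit"
        time.sleep(sleep)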
Usage
Specify your plugin's information in a task configuration file. See above for how to work with task configuration files. You can find sample configuration files for different types of plugins in the corresponding folders here.
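For instance, a task configuration referencing the hypothetical scenario plugin sketched above might look as follows:

{
    "MyPluginScenario.sleep_a_bit": [
        {
            "args": {
                "sleep": 2
            },
            "runner": {
                "type": "constant",
                "times": 3,
                "concurrency": 1
            }
        }
    ]
}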