Add feature_request mechanism
This will be the standard way for end users to ask the Rally team for new features. The idea is to make it really simple for end users to request coverage of their use cases. E.g. to add a request, users don't need to dive deep into the Rally community, the road map and how it all works. They just need to share a use case that is not covered by current Rally functionality. As a bonus, this adds some README.rst files to the doc/* dirs.

Change-Id: I4c615101012ac9d1668ed766ac831f7e5c8129a8
parent bd009a5c1c
commit ef583a3415
doc/README.rst (new file, 37 lines)
@@ -0,0 +1,37 @@
========================
Content of doc directory
========================

This directory contains everything that is related to documentation and
bureaucracy. You can find 4 subdirectories here:


feature_request
~~~~~~~~~~~~~~~

If some use case is not covered by Rally, this is the right place to request it.
To request a new feature you just need to explain the use case at a high level.
Technical details and writing code are not required at all.


samples
~~~~~~~

If you are looking for samples of deployment and benchmark configurations,
you are in the right place.


source
~~~~~~

The source of the documentation. The latest built version is available as
documentation_.

.. _documentation: http://rally.readthedocs.org/


user_stories
~~~~~~~~~~~~

A place where you can share any Rally user experience, e.g. fixing bugs,
measuring performance of different architectures or comparing different
hardware, and so on.
doc/feature_request/LDAP_support.rst (new file, 23 lines)
@@ -0,0 +1,23 @@
===============================================
Support benchmarking clouds that are using LDAP
===============================================

Use Case
--------

A lot of production clouds use LDAP with read-only access. This means that
load can only be generated by users that already exist in the system, and
there is no admin access.


Problem Description
-------------------

Rally uses admin access to create temporary users that are then used to
produce load.


Possible Solution
-----------------

* Drop the admin requirement
* Add a way to pass already existing users (a rough configuration sketch
  follows below)
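As an illustration of the second point, pre-created credentials could be passed
through the task input file. This is only a sketch: the ``existing_users``
context name and its fields are assumptions, not an agreed design, and the
scenario name is just an example.

.. code-block:: python

    # Hypothetical task input, shown as a Python dict for illustration.
    # "existing_users" and its fields are assumptions, not a real Rally context.
    task = {
        "NovaServers.boot_and_delete_server": [{
            "args": {"flavor_id": 1, "image_id": "some-image-uuid"},
            "runner": {"type": "constant", "times": 100, "concurrency": 10},
            "context": {
                # Instead of temporary users created with admin rights,
                # pass credentials that already exist in the cloud.
                "existing_users": [
                    {"username": "user1", "password": "secret1",
                     "tenant_name": "project1"},
                    {"username": "user2", "password": "secret2",
                     "tenant_name": "project2"},
                ],
            },
        }],
    }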
doc/feature_request/README.rst (new file, 11 lines)
@@ -0,0 +1,11 @@
================
Feature requests
================

To request a new feature you should create a document similar to the other
feature requests and contribute it to this directory following this
instruction_.

If you don't have time to contribute via Gerrit,
please contact Boris Pavlovic (boris@pavlovic.me).

.. _instruction: https://wiki.openstack.org/wiki/Rally/Develop#How_to_contribute
doc/feature_request/comparing_results_of_2_tasks.rst (new file, 24 lines)
@@ -0,0 +1,24 @@
========================================
Ability to compare results between tasks
========================================

Use case
--------

When working on performance it's essential to be able to compare the results
of similar tasks before and after a change in the system.


Problem description
-------------------

There is no command to compare two or more tasks and get tables and graphs.


Possible solution
-----------------

* Add a command that accepts 2 task UUIDs and prints graphs that compare the
  results (a sketch of such a comparison follows below)
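A minimal sketch of what such a comparison could do, assuming each task's
results were exported to a JSON file laid out as a list of
``{"key": ..., "result": [...]}`` entries. That layout is an assumption of
this sketch, not Rally's documented export format.

.. code-block:: python

    # Illustration only: compare mean durations of two exported task results.
    import json
    import statistics
    import sys


    def mean_durations(path):
        """Return {scenario name: mean duration} for one exported task."""
        with open(path) as f:
            data = json.load(f)
        return {
            entry["key"]["name"]: statistics.mean(
                it["duration"] for it in entry["result"] if not it.get("error")
            )
            for entry in data
        }


    def compare(path_a, path_b):
        before, after = mean_durations(path_a), mean_durations(path_b)
        for name in sorted(set(before) & set(after)):
            delta = (after[name] - before[name]) / before[name] * 100
            print("%-50s %8.3fs -> %8.3fs (%+.1f%%)"
                  % (name, before[name], after[name], delta))


    if __name__ == "__main__":
        compare(sys.argv[1], sys.argv[2])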
doc/feature_request/distributed_load_generation.rst (new file, 20 lines)
@@ -0,0 +1,20 @@
===========================
Distributed load generation
===========================

Use Case
--------

Some OpenStack projects (Marconi, MagnetoDB) require a really huge load,
like 10-100k requests per second, for benchmarking.

To generate such a huge load, Rally has to create it from multiple
servers.


Problem Description
-------------------

* Rally can't generate load from multiple servers
* Result processing can't handle a big amount of data
* There is no support for chunking results (see the sketch below)
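A minimal sketch of what "chunked" result handling could look like, assuming
per-iteration results arrive as an iterable. The function name, file naming
and chunk size are assumptions made for this sketch.

.. code-block:: python

    # Illustration only: instead of keeping every iteration in memory, a
    # worker could flush results to disk in fixed-size chunks.
    import itertools
    import json


    def write_in_chunks(results, prefix, chunk_size=10000):
        """Write an iterable of per-iteration results into numbered JSON files."""
        results = iter(results)
        for index in itertools.count():
            chunk = list(itertools.islice(results, chunk_size))
            if not chunk:
                break
            with open("%s-%05d.json" % (prefix, index), "w") as f:
                json.dump(chunk, f)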
doc/feature_request/historical_performance_data.rst (new file, 26 lines)
@@ -0,0 +1,26 @@
===========================
Historical performance data
===========================

Use case
--------

OpenStack is developed really rapidly. Hundreds of patches are merged daily,
and it's really hard to track how performance changes over time.
It would be nice to have a way to track the performance of major OpenStack
functionality by running a Rally task periodically and building graphs that
show how the performance of a specific method changes over time.


Problem description
-------------------

There is no way to bind tasks together.


Possible solution
-----------------

* Add grouping for tasks
* Add a command that creates historical graphs (a text-only sketch of the
  idea follows below)
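A minimal text-only sketch of the second idea, assuming the mean duration per
run for one group of tasks is already available as ``(timestamp, seconds)``
points; how tasks would be grouped and collected is left open, and the sample
numbers are made up purely for demonstration.

.. code-block:: python

    # Illustration only: print a simple textual trend for one group of tasks.
    import datetime


    def print_trend(points):
        """points: iterable of (datetime, mean duration in seconds), sorted by time."""
        previous = None
        for when, seconds in points:
            change = ""
            if previous is not None:
                change = " (%+.1f%%)" % ((seconds - previous) / previous * 100)
            print("%s  %7.3fs%s" % (when.date().isoformat(), seconds, change))
            previous = seconds


    if __name__ == "__main__":
        # made-up sample points, for demonstration only
        print_trend([
            (datetime.datetime(2014, 6, 1), 4.21),
            (datetime.datetime(2014, 6, 8), 4.05),
            (datetime.datetime(2014, 6, 15), 4.73),
        ])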
doc/feature_request/multi_scenarios_load_gen.rst (new file, 35 lines)
@@ -0,0 +1,35 @@
=========================================
Using multiple scenarios to generate load
=========================================

Use Case
--------

Rally should be able to generate a real-life load: simultaneously creating
load on different components of OpenStack, e.g. simultaneously booting a VM,
uploading an image and listing users.


Problem Description
-------------------

At the moment Rally is able to run only 1 scenario per benchmark.
Scenarios are quite specific (e.g. boot and delete a VM) and can't
actually generate a real-life load.

Writing a lot of specific benchmark scenarios to produce a more realistic
load would create a mess and a lot of code duplication.


Possible solution
-----------------

* Extend the Rally task benchmark configuration to support passing
  multiple benchmark scenarios in a single benchmark context (an illustrative
  layout follows below)

* Extend the Rally task output format to support results of multiple
  scenarios in a single benchmark separately.

* Extend "rally task plot2html" and "rally task detailed" to show results
  separately for every scenario.
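A minimal sketch of what a multi-scenario benchmark configuration might look
like, shown as a Python dict. The ``scenarios`` key and the overall layout are
assumptions, not an agreed format, and the scenario names are only examples.

.. code-block:: python

    # Illustration only: several scenarios sharing one context and runner.
    benchmark = {
        "context": {
            "users": {"tenants": 2, "users_per_tenant": 3},
        },
        "runner": {"type": "constant", "times": 100, "concurrency": 10},
        "scenarios": [
            {"name": "NovaServers.boot_and_delete_server",
             "args": {"flavor_id": 1, "image_id": "some-image-uuid"}},
            {"name": "GlanceImages.create_and_delete_image",
             "args": {"image_location": "http://example.com/cirros.img"}},
            {"name": "KeystoneBasic.create_and_list_users",
             "args": {}},
        ],
    }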
doc/feature_request/persistence_benchmark_env.rst (new file, 30 lines)
@@ -0,0 +1,30 @@
==================================================
Add support for a persistent benchmark environment
==================================================

Use Case
--------

To benchmark many operations, like show, list and detailed, you need to
already have these resources in the cloud. So it would be nice to be able to
create the benchmark environment once, before benchmarking, then run some
amount of benchmarks that use it, and at the end just delete all resources
created by the benchmark environment.


Problem Description
-------------------

Fortunately, Rally already has a mechanism for creating a benchmark
environment, which is used to create load. Unfortunately it is an atomic
operation: (create environment, make load, delete environment).
This should be split into 3 separate steps (as sketched below).


Possible solution
-----------------

* Add new CLI operations to work with the benchmark environment:
  (show, create, delete, list)

* Allow a task to start against a benchmark environment (instead of a
  deployment)
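A minimal sketch of splitting the atomic cycle into independently callable
create / run / delete steps. The class and method names are assumptions of
this sketch, not Rally code.

.. code-block:: python

    # Illustration only: the three steps decoupled from a single benchmark run.
    class PersistentBenchmarkEnvironment(object):
        """Holds resources that several benchmark runs can reuse."""

        def __init__(self):
            self.resources = []

        def create(self):
            # step 1: create shared resources once (images, networks, users, ...)
            self.resources.append("shared-resource")
            return self

        def run(self, load_callable):
            # step 2: run any number of benchmarks against the same environment
            return load_callable(self.resources)

        def delete(self):
            # step 3: explicit cleanup, decoupled from any single benchmark run
            self.resources = []


    if __name__ == "__main__":
        env = PersistentBenchmarkEnvironment().create()
        env.run(lambda resources: print("benchmark 1 using", resources))
        env.run(lambda resources: print("benchmark 2 using", resources))
        env.delete()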
doc/feature_request/production_ready_cleanup.rst (new file, 28 lines)
@@ -0,0 +1,28 @@
=========================
Production ready cleanups
=========================

Use Case
--------

Rally should, in any case, delete all resources that it created during a
benchmark.


Problem Description
-------------------

* Deletion rate limit

  You can kill a cloud by deleting too many objects simultaneously, so a
  deletion rate limit is required (see the sketch below)

* Retry on failures

  There should be a few attempts to delete a resource in case of failures

* Disaster recovery

  Rally should use special name patterns so that it is able to delete its
  resources even if something went wrong with the server that was running
  Rally and you only have a fresh Rally instance (without the old Rally DB)
  on a new server.
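A minimal sketch of rate-limited deletion with per-resource retries. The
delete callable, rate and retry values are assumptions of this sketch, not
Rally's actual cleanup implementation.

.. code-block:: python

    # Illustration only: a simple rate limit plus retries around deletion.
    import time


    def cleanup(resources, delete_one, max_per_second=5, retries=3, backoff=2.0):
        """Delete resources with a crude rate limit and per-resource retries."""
        failed = []
        interval = 1.0 / max_per_second
        for resource in resources:
            for attempt in range(retries):
                try:
                    delete_one(resource)
                    break
                except Exception:
                    time.sleep(backoff * (attempt + 1))
            else:
                failed.append(resource)  # all retries exhausted
            time.sleep(interval)  # crude rate limit between delete calls
        return failed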
doc/samples/README.rst (new file, 14 lines)
@@ -0,0 +1,14 @@
============================
Content of samples directory
============================

deployments
~~~~~~~~~~~

Input configurations for "rally deployment" command


tasks
~~~~~

Input configurations for "rally task" command