Fuel OSTF tests

After OpenStack is installed via Fuel, it is very important to understand whether the installation was successful and whether the environment is ready for work. Fuel-ostf provides a set of health checks that can be run from the Fuel console to verify that all system components operate properly under typical conditions.

Details of Fuel OSTF tests

Tests are included in Fuel, so they are available as soon as you install Fuel on your lab. The fuel-ostf architecture is quite simple; it consists of two main packages:

  • fuel_health, which contains the test set itself and related modules
  • fuel_plugin, which contains the OSTF adapter that forms the necessary test list in the context of cluster deployment options and transfers it to the UI using the REST API

In addition, some information is required for test execution itself. Several modules gather this information and parse it into objects that are used by the tests. All of this information is gathered from the Nailgun component.

Python REST API interface

The fuel-ostf module provides not only testing but also a RESTful interface for interacting with its components.

In terms of REST, all types of OSTF entities are managed by three HTTP verbs: GET, POST and PUT.

The following basic URL is used to make requests to OSTF:

{ostf_host}:{ostf_port}/v1/{requested_entity}/{cluster_id}

Currently, you can get information about testsets, tests, and testruns via GET requests on the corresponding URLs for ostf_plugin.

To get information about testsets, make a GET request to:

{ostf_host}:{ostf_port}/v1/testsets/{cluster_id}

To get information about tests, make a GET request to:

{ostf_host}:{ostf_port}/v1/tests/{cluster_id}
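
A minimal sketch of these two queries in Python, assuming the adapter is reachable at 127.0.0.1:8777 (the host, port, and cluster id below are placeholders, not values guaranteed by this document):

import requests

OSTF_URL = "http://127.0.0.1:8777/v1"  # placeholder host and port
CLUSTER_ID = 1                         # placeholder cluster id

# List the testsets available for the cluster.
testsets = requests.get("{0}/testsets/{1}".format(OSTF_URL, CLUSTER_ID)).json()

# List the individual tests available for the cluster.
tests = requests.get("{0}/tests/{1}".format(OSTF_URL, CLUSTER_ID)).json()

print(testsets)
print(tests)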

To get information about executed testruns, make the following GET requests:

for the whole set of testruns:

{ostf_host}:{ostf_port}/v1/testruns/

for a particular testrun:

{ostf_host}:{ostf_port}/v1/testruns/{testrun_id}

for the list of testruns executed on a particular cluster:

{ostf_host}:{ostf_port}/v1/testruns/last/{cluster_id}
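
The same pattern works for testruns; a short sketch, again with placeholder host, port, and ids:

import requests

OSTF_URL = "http://127.0.0.1:8777/v1"  # placeholder host and port

# The whole set of testruns known to the adapter.
all_runs = requests.get("{0}/testruns/".format(OSTF_URL)).json()

# A particular testrun (placeholder testrun id).
one_run = requests.get("{0}/testruns/{1}".format(OSTF_URL, 1)).json()

# Testruns executed on a particular cluster (placeholder cluster id).
last_runs = requests.get("{0}/testruns/last/{1}".format(OSTF_URL, 1)).json()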

To start test execution, send a POST request to the following URL:

{ostf_host}:{ostf_port}/v1/testruns/

The request body must be a JSON data structure containing the testsets and the list of tests belonging to them that must be executed. It should also include metadata with information about the cluster (the key named "cluster_id" stores the parameter's value):

[
    {
        "testset": "test_set_name",
        "tests": ["module.path.to.test.1", ..., "module.path.to.test.n"],
        "metadata": {"cluster_id": id}
    },

...,

{...}, # info for another testrun
{...},

...,

{...}
]
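
For example, a POST that starts the whole "sanity" testset on cluster 1 might look like the sketch below (host, port, cluster id, and testset name are illustrative):

import json
import requests

OSTF_URL = "http://127.0.0.1:8777/v1"  # placeholder host and port

# Run every test in the "sanity" testset on cluster 1; an empty "tests"
# list means "run all tests in the testset".
body = [
    {
        "testset": "sanity",
        "tests": [],
        "metadata": {"cluster_id": 1},
    }
]

response = requests.post(
    "{0}/testruns/".format(OSTF_URL),
    data=json.dumps(body),
    headers={"Content-Type": "application/json"},
)
print(response.json())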

On success, the OSTF adapter returns the attributes of the created testrun entities in JSON format. If you want to launch only one test, put its id in the list. To launch all tests, leave the list empty (the default). Example response:

[
{
    "status": "running",
    "testset": "sanity",
    "meta": null,
    "ended_at": "2014-12-12 15:31:54.528773",
    "started_at": "2014-12-12 15:31:41.481071",
    "cluster_id": 1,
    "id": 1,
    "tests": [.....info on tests.....]
},

....
]

You can also stop and restart testruns. To do that, send a PUT request to the testruns URL. The request body must contain the list of testruns and tests to be stopped or restarted. Example:

[
{
    "id": test_run_id,
    "status": ("stopped" | "restarted"),
    "tests": ["module.path.to.test.1", ..., "module.path.to.test.n"]
},

...,

{...}, # info for another testrun
{...},

...,

{...}
]
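
A sketch of stopping a running testrun (placeholder host, port, and testrun id; use "restarted" instead of "stopped" to re-run the listed tests):

import json
import requests

OSTF_URL = "http://127.0.0.1:8777/v1"  # placeholder host and port

# Stop testrun 1; to restart particular tests, set "status" to "restarted"
# and list their ids in "tests".
body = [
    {
        "id": 1,
        "status": "stopped",
        "tests": [],
    }
]

response = requests.put(
    "{0}/testruns/".format(OSTF_URL),
    data=json.dumps(body),
    headers={"Content-Type": "application/json"},
)
print(response.json())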

Testing

The following test targets can be run to validate the code:

  • tox -e pep8 - style guidelines enforcement
  • tox -e py27 - unit and integration testing