Tempest Field Guide to API tests
================================

What are these tests?
---------------------

One of Tempest's prime functions is to ensure that your OpenStack cloud works with the OpenStack API as documented. The largest portion of Tempest code is currently devoted to test cases that do exactly this.
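
For illustration, a minimal positive API test looks roughly like the sketch below; the compute base class and ``servers_client`` attribute follow the layout of this tree, but the test itself and its ``idempotent_id`` are made up for this guide:

.. code-block:: python

    from tempest.api.compute import base
    from tempest.lib import decorators


    class ServersListExampleTest(base.BaseV2ComputeTest):
        """Illustrative only: check a documented API call end to end."""

        @decorators.idempotent_id('00000000-0000-0000-0000-000000000000')
        def test_list_servers(self):
            # Call the compute API through Tempest's own service client and
            # verify the response carries the documented top-level key.
            body = self.servers_client.list_servers()
            self.assertIn('servers', body)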

It's also important to test not only the expected positive path of an API, but also to provide it with invalid data and ensure it fails in expected and documented ways. The latter type of test is referred to as a negative test in the Tempest source code. Over the course of the OpenStack project, Tempest has discovered many fundamental bugs by doing just this.
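
A sketch of such a negative test, using the exception and decorator helpers from ``tempest.lib`` (the scenario itself is illustrative):

.. code-block:: python

    from tempest.api.compute import base
    from tempest.lib import decorators
    from tempest.lib import exceptions as lib_exc


    class ServersNegativeExampleTest(base.BaseV2ComputeTest):
        """Illustrative only: feed the API invalid input on purpose."""

        @decorators.attr(type=['negative'])
        @decorators.idempotent_id('11111111-1111-1111-1111-111111111111')
        def test_show_nonexistent_server(self):
            # Asking for a server that does not exist must fail with the
            # documented 404, which tempest.lib raises as NotFound.
            self.assertRaises(lib_exc.NotFound,
                              self.servers_client.show_server,
                              'this-id-does-not-exist')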

In order for some APIs to return meaningful results, there must be enough data in the system. This means these tests might start by spinning up a server, an image, etc., and then operate on those resources.
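
For example, a test that needs a running server typically creates one during resource setup and only then exercises the call under test (a sketch, assuming the ``create_test_server`` helper from the compute base class):

.. code-block:: python

    from tempest.api.compute import base
    from tempest.lib import decorators


    class ServerShowExampleTest(base.BaseV2ComputeTest):
        """Illustrative only: build up state, then test the API call."""

        @classmethod
        def resource_setup(cls):
            super(ServerShowExampleTest, cls).resource_setup()
            # Spin up a server first so the call under test has something
            # meaningful to return.
            cls.server = cls.create_test_server(wait_until='ACTIVE')

        @decorators.idempotent_id('22222222-2222-2222-2222-222222222222')
        def test_show_server(self):
            body = self.servers_client.show_server(self.server['id'])
            self.assertEqual(self.server['id'], body['server']['id'])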

Why are these tests in Tempest?
-------------------------------

This is one of the core missions of the Tempest project, and where it started. Many people use this functionality in Tempest to ensure their clouds haven't broken the OpenStack API.

It could be argued that some of the negative testing could be done back in the projects themselves, and we might evolve in that direction over time, but currently the OpenStack gate makes this a fundamentally important place to keep these tests.

Scope of these tests
--------------------

API tests should always use the Tempest implementation of the OpenStack API, as we want to ensure that bugs aren't hidden by the official clients.
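
Because the service clients hand back the deserialized JSON body together with the raw HTTP response, a test can assert directly on the documented status code and body layout, with no official client library in between. A rough sketch (the response attributes are those exposed by ``tempest.lib``'s ``ResponseBody``):

.. code-block:: python

    from tempest.api.compute import base
    from tempest.lib import decorators


    class FlavorsListExampleTest(base.BaseV2ComputeTest):
        """Illustrative only: assert on the raw REST response."""

        @decorators.idempotent_id('33333333-3333-3333-3333-333333333333')
        def test_list_flavors_response(self):
            body = self.flavors_client.list_flavors()
            # body.response is the raw HTTP response object, so the status
            # code and the documented JSON shape are both visible here.
            self.assertEqual(200, body.response.status)
            self.assertIsInstance(body['flavors'], list)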

They should test specific API calls, and can build up complex state if it's needed for the API call to be meaningful.

They should send not only good data but also bad data to the API and look for the documented error codes.

They should all be able to run on their own, without depending on state created by a previous test.
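
One common way to keep a test self-contained is to create its resources inside the test (or in resource setup) and register cleanups immediately, as in this sketch (the keypairs client attribute is assumed from the compute base class):

.. code-block:: python

    from tempest.api.compute import base
    from tempest.lib.common.utils import data_utils
    from tempest.lib import decorators


    class KeypairExampleTest(base.BaseV2ComputeTest):
        """Illustrative only: each test creates and cleans up its own state."""

        @decorators.idempotent_id('44444444-4444-4444-4444-444444444444')
        def test_create_and_show_keypair(self):
            # Create the keypair inside the test and schedule its deletion,
            # so nothing here relies on or leaks state for other tests.
            name = data_utils.rand_name('example-keypair')
            self.keypairs_client.create_keypair(name=name)
            self.addCleanup(self.keypairs_client.delete_keypair, name)
            body = self.keypairs_client.show_keypair(name)
            self.assertEqual(name, body['keypair']['name'])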