tempest/tempest/api
Benny Kopilov f5e277c802 Add wait_for_resource_deletion for swift api clients
Currently there is no way in the Swift API clients to verify that a resource
has been deleted before moving on to the next command. The existing code
relied on a hardcoded two-second sleep instead of checking whether the
resource was really gone.

Added to the current cleanup:
* Implement is_resource_deleted for object_client and container_client
  (a sketch of the polling pattern follows this commit message)
* After a remove action, wait until the resource is really deleted
* Remove the hardcoded two-second sleep
* Stop ignoring NotFound during removal; if we hit it, something is wrong
  in our code

Change-Id: I32f37f8e874a3510bb1af6db45a1b9a8d2fed543
2021-02-08 16:22:38 +02:00
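
The change follows the polling pattern Tempest service clients use elsewhere: the rest_client base class exposes wait_for_resource_deletion(), which repeatedly calls is_resource_deleted() until it returns True or the client's build timeout expires. A minimal, hypothetical sketch of such an override for an object client might look like this (the class name and the use of a "<container>/<object>" path as the identifier are illustrative assumptions, not the actual patch)::

    # Hypothetical sketch only; not the code from this commit.
    # RestClient.wait_for_resource_deletion() polls is_resource_deleted()
    # until it returns True or build_timeout is exceeded, so a client only
    # has to define what "deleted" means for its resource.
    from tempest.lib import exceptions as lib_exc
    from tempest.lib.common import rest_client


    class ObjectClientSketch(rest_client.RestClient):
        """Illustrative client; the real ObjectClient has many more methods."""

        def is_resource_deleted(self, id):
            # ``id`` is assumed here to be the "<container>/<object>" path.
            try:
                # self.get() raises NotFound once Swift answers 404.
                self.get(id)
            except lib_exc.NotFound:
                return True
            return False

With an override like this in place, cleanup code can call wait_for_resource_deletion() after issuing the delete instead of sleeping for a fixed two seconds.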
compute Merge "compute: Skip AttachVolumeShelveTestJSON when cross_az_attach unavailable" 2020-12-08 16:59:33 +00:00
identity Add client methods and tests for system grants 2021-01-11 19:12:14 +00:00
image Fix memory explosion in multi-store image tests 2021-01-14 12:15:45 -08:00
network [Trivial]Remove unused variables and methods 2020-11-19 01:19:12 +00:00
object_storage Add wait_for_resource_deletion for swift api clients 2021-02-08 16:22:38 +02:00
volume Merge "Fix negative tests of update_volume for volume microversion 3.59" 2020-10-01 13:13:56 +00:00
README.rst Doc: fix markups, capitalization and add 2 REVIEWING advices 2017-07-11 20:26:32 +02:00
__init__.py Remove copyright from empty files 2014-01-14 03:02:04 +04:00

README.rst

Tempest Field Guide to API tests

What are these tests?

One of Tempest's prime functions is to ensure that your OpenStack cloud works with the OpenStack API as documented. The largest portion of Tempest code is currently devoted to test cases that do exactly this.

It's also important to test not only the expected positive path of an API, but also to feed it invalid data and ensure it fails in expected and documented ways. The latter kind of test is called a negative test in the Tempest source code. Over the course of the OpenStack project, Tempest has discovered many fundamental bugs by doing just this.

In order for some APIs to return meaningful results, there must be enough data in the system. This means these tests might start by spinning up a server, an image, etc., and then operate on those resources.
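
For example, here is a minimal sketch (with hypothetical class and test names) of a compute test that builds up the state it needs, an ACTIVE server, before exercising the API under test::

    from tempest.api.compute import base


    class ServerShowSketchTest(base.BaseV2ComputeTest):

        @classmethod
        def resource_setup(cls):
            super(ServerShowSketchTest, cls).resource_setup()
            # Spin up one server for the whole class; the helper waits for
            # it to become ACTIVE and registers cleanup automatically.
            cls.server = cls.create_test_server(wait_until='ACTIVE')

        def test_show_server(self):
            # With the server in place, GET /servers/{id} returns a
            # meaningful result that the test can assert on.
            body = self.servers_client.show_server(
                self.server['id'])['server']
            self.assertEqual('ACTIVE', body['status'])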

Why are these tests in Tempest?

This is one of the core missions of the Tempest project, and where it started. Many people rely on this functionality in Tempest to ensure their clouds haven't broken the OpenStack API.

It could be argued that some of the negative testing could be done in the projects themselves, and Tempest may evolve in that direction over time, but for now the OpenStack gate remains a fundamentally important place to keep these tests.

Scope of these tests

API tests should always use the Tempest implementation of the OpenStack API, as we want to ensure that bugs aren't hidden by the official clients.

They should test specific API calls, and can build up complex state if it's needed for the API call to be meaningful.

They should send not only good data but also bad data to the API and check for the expected error codes.
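
A minimal sketch of such a negative test (class and test names are hypothetical) calls the Tempest service client directly with bad data, a server id that cannot exist, and asserts the documented 404 response::

    from tempest.api.compute import base
    from tempest.lib.common.utils import data_utils
    from tempest.lib import decorators
    from tempest.lib import exceptions as lib_exc


    class ServersNegativeSketchTest(base.BaseV2ComputeTest):

        @decorators.attr(type=['negative'])
        def test_show_nonexistent_server(self):
            # A random UUID will not match any existing server, so the API
            # is expected to answer 404 Not Found.
            self.assertRaises(lib_exc.NotFound,
                              self.servers_client.show_server,
                              data_utils.rand_uuid())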

They should each be runnable on their own, without depending on state created by a previous test.
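
A sketch of such a self-contained test (again with hypothetical names) creates everything it needs in setUp and registers cleanup, so it never depends on what another test may have left behind::

    from tempest.api.object_storage import base
    from tempest.lib.common.utils import data_utils


    class SelfContainedObjectSketchTest(base.BaseObjectTest):

        def setUp(self):
            super(SelfContainedObjectSketchTest, self).setUp()
            # A fresh container per test, removed again afterwards.
            self.container_name = data_utils.rand_name('TestContainer')
            self.container_client.update_container(self.container_name)
            self.addCleanup(self.container_client.delete_container,
                            self.container_name)

        def test_create_and_delete_object(self):
            object_name = data_utils.rand_name('TestObject')
            resp, _ = self.object_client.create_object(
                self.container_name, object_name, data=b'data')
            self.assertHeaders(resp, 'Object', 'PUT')
            self.object_client.delete_object(self.container_name, object_name)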