Tempest Field Guide to Scenario tests
=====================================

What are these tests?
---------------------

Scenario tests are "through path" tests of OpenStack function: complicated setups in which one part may depend on the completion of a previous part. They ideally involve integration between multiple OpenStack services to exercise the touch points between them.
Any scenario test should have a real-life use case. An example would be:

- "As an operator I want to start with a blank environment":

  - upload a glance image
  - deploy a vm from it
  - ssh to the guest
  - create a snapshot of the vm
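The flow above can be sketched with the openstacksdk ``Connection`` API. This is a minimal illustration, not Tempest code: the image and flavor names are assumptions, the ssh step is elided, and real scenario tests add cleanup and error handling around each step.

```python
def snapshot_scenario(conn, image_name="cirros", flavor_name="m1.tiny"):
    """Boot a server from a glance image, then snapshot it.

    ``conn`` is an openstacksdk Connection (e.g. from openstack.connect()).
    Image and flavor names are illustrative assumptions.
    """
    image = conn.image.find_image(image_name)        # look up the glance image
    flavor = conn.compute.find_flavor(flavor_name)
    server = conn.compute.create_server(
        name="scenario-vm", image_id=image.id, flavor_id=flavor.id)
    conn.compute.wait_for_server(server)             # block until ACTIVE
    # ... an ssh check against the guest would go here ...
    return conn.compute.create_server_image(server, name="scenario-snapshot")
```

Each step depends on the one before it, which is exactly the "through path" property these tests are meant to exercise.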
Why are these tests in tempest?
-------------------------------

This is one of Tempest's core purposes: testing the integration between projects.
Scope of these tests
--------------------

Scenario tests should use the official python client libraries for OpenStack, as they provide a more realistic approach to how people will interact with the services.

Tests should be tagged with which services they exercise, as determined by which client libraries are used directly by the test.
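Tempest implements this tagging with a ``services`` decorator (its exact import location has moved between releases). A minimal self-contained sketch of the idea, not Tempest's actual implementation:

```python
def services(*service_names):
    """Tag a test with the OpenStack services it exercises.

    Simplified stand-in for Tempest's decorator: it records the service
    names on the test function so a runner could filter tests by service.
    """
    def decorator(func):
        func.services = frozenset(service_names)
        return func
    return decorator


@services('compute', 'image')
def test_snapshot_pattern():
    """Would boot a server (compute) from an image (image) and snapshot it."""
```

A runner can then select, for example, only the tests whose ``services`` set includes ``'image'``.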
Example of a good test
----------------------

While we are looking for interaction of two or more services, be specific in your interactions. A giant "this is my data center" smoke test is hard to debug when it goes wrong.

A flow of interactions between glance and nova, like in the introduction, is a good example, especially if it involves a repeated interaction in which a resource is set up, modified, detached, and then reused again later.