A Tempest plugin for whitebox testing: testing things that are not exposed through the REST APIs.
Whitebox Tempest plugin

This repo is a Tempest plugin that contains scenario tests run against TripleO/Director-based deployments.

Important

This is still a work in progress.

Requirements

The tests assume a TripleO/Director-based deployment with an undercloud and overcloud. The tests are run from the undercloud, so Tempest should be installed and configured on the undercloud node. It's assumed that the Unix user running the tests, generally stack, has SSH access to all the compute nodes running in the overcloud.

Most tests have specific hardware requirements. These are documented in the tests themselves and the tests should fast-fail if these hardware requirements are not met. You will require multiple nodes to run these tests and will need to manually specify which test to run on which node. For more information on our plans here, refer to roadmap.

For more information on TripleO/Director, refer to the Red Hat OpenStack Platform documentation.
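Since the tests depend on the stack user being able to SSH into the overcloud compute nodes, it can help to verify that access up front. The helper below is a hypothetical sketch (the function name, host name, and defaults are illustrative, not part of this plugin) that builds the ssh invocation you could run manually from the undercloud:

```python
import shlex


def build_ssh_command(host, user="heat-admin",
                      key_path="/home/stack/.ssh/id_rsa",
                      remote_command="hostname"):
    """Build the ssh command used to reach an overcloud compute node.

    Hypothetical helper for illustration only; the plugin's actual SSH
    handling may differ.
    """
    return ["ssh", "-i", key_path,
            "-o", "StrictHostKeyChecking=no",
            "{}@{}".format(user, host),
            remote_command]


# Example: the command the stack user could run to check access to one node.
cmd = build_ssh_command("overcloud-novacompute-0")
print(" ".join(shlex.quote(part) for part in cmd))
```

If this command fails for any compute node, the whitebox tests that inspect that node will fail as well.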

Install, configure and run

  1. Install the plugin.

    This should be done from source:

    WORKSPACE=/some/directory
    cd $WORKSPACE
    git clone https://github.com/redhat-openstack/whitebox-tempest-plugin
    sudo pip install whitebox-tempest-plugin
  2. Configure Tempest.

    Add the following lines at the end of your tempest.conf file. These determine how your undercloud node, which is running Tempest, should connect to the compute nodes in the overcloud and vice versa. For example:

    [whitebox]
    target_controller = <address of the nova controller>
    target_ssh_user = heat-admin
    target_private_key_path = /home/stack/.ssh/id_rsa
    containers = <true/false>
  3. Execute the tests:

    tempest run --regex whitebox_tempest_plugin.
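Inside the plugin, these options are exposed through Tempest's oslo.config machinery. As a rough, stdlib-only sketch of what the plugin ends up seeing, the [whitebox] section of tempest.conf can be parsed like this (the option names match the example above; the configparser code and sample address are illustrative, not the plugin's actual mechanism):

```python
import configparser
import io

# A tempest.conf fragment like the one shown above (address is made up).
TEMPEST_CONF = """
[whitebox]
target_controller = 192.0.2.10
target_ssh_user = heat-admin
target_private_key_path = /home/stack/.ssh/id_rsa
containers = true
"""

parser = configparser.ConfigParser()
parser.read_file(io.StringIO(TEMPEST_CONF))

controller = parser.get("whitebox", "target_controller")
ssh_user = parser.get("whitebox", "target_ssh_user")
key_path = parser.get("whitebox", "target_private_key_path")
containers = parser.getboolean("whitebox", "containers")

print(controller, ssh_user, containers)
```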

How to add a new test

New tests should be added to the whitebox_tempest_plugin/tests directory.

According to the plugin interface doc, you should mainly import "stable" APIs which usually are:

  • tempest.lib.*
  • tempest.config
  • tempest.test_discover.plugins
  • tempest.common.credentials_factory
  • tempest.clients
  • tempest.test

Importing classes from tempest.api.* is risky, since future versions of Tempest could break them.
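As a quick sanity check, a new test module can be scanned for imports that fall outside the stable list above. The checker below is a hypothetical helper, not part of this plugin; it uses only the standard ast module to flag tempest imports (such as tempest.api.*) that aren't on the stable list:

```python
import ast

# The stable Tempest APIs listed above.
STABLE_PREFIXES = ("tempest.lib", "tempest.config",
                   "tempest.test_discover.plugins",
                   "tempest.common.credentials_factory",
                   "tempest.clients", "tempest.test")


def unstable_tempest_imports(source):
    """Return tempest modules imported by `source` that aren't stable."""
    unstable = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            if name.startswith("tempest.") and not name.startswith(STABLE_PREFIXES):
                unstable.append(name)
    return unstable


# Hypothetical test-module snippet: one stable import, one unstable one.
sample = ("from tempest.lib import decorators\n"
          "from tempest.api.compute import base\n")
print(unstable_tempest_imports(sample))  # flags only tempest.api.compute
```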

Roadmap

The different tests found here all have different hardware requirements, and these requirements often conflict. For example, a host with HyperThreading disabled cannot be used for a test that requires HyperThreading to be enabled. As a result, it's not possible to have one "master configuration" that can be used to run all tests. Instead, different tests must be run on different nodes.

At present, this plugin exists in isolation, and running individual tests on nodes, along with configuring those nodes, remains a manual process. However, the end goal for this project is to be able to kick off a run of this test suite against N overcloud nodes, where each node has a different hardware configuration and N is the total number of different hardware configurations required (one for real-time, one for SR-IOV, etc.). Each node would have a different profile, and host aggregates would likely be used to ensure each test runs on its preferred hardware. To get there, we should probably provide a recipe along with hardware configuration steps.

That being said, the above is still a long way off. For now, we're focused on getting the tests in place so we can stop doing all this by hand.