Tempest plugin for whitebox testing: testing things that are not exposed through the REST APIs.


Whitebox Tempest plugin

This repo is a Tempest plugin that contains scenario tests run against TripleO/Director-based deployments.


This is still a work in progress.


The tests assume a TripleO/Director-based deployment with an undercloud and overcloud. The tests are run from the undercloud, so Tempest must be installed and configured on the undercloud node. It's assumed that the Unix user running the tests, generally stack, has SSH access to all the compute nodes running in the overcloud.

Most tests have specific hardware requirements. These are documented in the tests themselves, and the tests should fast-fail if these requirements are not met. You will need multiple nodes to run these tests and will have to manually specify which test to run on which node. For more information on our plans here, refer to the roadmap below.

For more information on TripleO/Director, refer to the Red Hat OpenStack Platform documentation.
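As an illustration of the kind of check a whitebox test performs, the sketch below parses a libvirt domain XML document (as `virsh dumpxml` would return on a compute node) and extracts its vCPU pinning, hypervisor-level detail that is not exposed in this form through the REST APIs. The XML snippet and the helper function are illustrative assumptions for this sketch, not code from the plugin:

```python
import xml.etree.ElementTree as ET

# Trimmed example of a libvirt domain XML; in a real whitebox test this
# would be fetched over SSH from a compute node (e.g. via "virsh dumpxml").
DOMAIN_XML = """
<domain type='kvm'>
  <name>instance-00000001</name>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
  </cputune>
</domain>
"""

def get_vcpu_pins(domain_xml):
    """Return a {vcpu: cpuset} mapping parsed from a libvirt domain XML."""
    root = ET.fromstring(domain_xml)
    return {pin.get('vcpu'): pin.get('cpuset')
            for pin in root.findall('./cputune/vcpupin')}

# A whitebox test would assert this pinning matches the flavor's
# expectations, e.g. get_vcpu_pins(DOMAIN_XML) -> {'0': '2', '1': '3'}
pins = get_vcpu_pins(DOMAIN_XML)
```
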

Install, configure and run

  1. Install the plugin.

    Currently, this must be done from source:

    git clone https://github.com/redhat-openstack/whitebox-tempest-plugin
    sudo pip install ./whitebox-tempest-plugin
  2. Configure Tempest.

    Add the following lines at the end of your tempest.conf file. These determine how your undercloud node, which is running Tempest, should connect to the compute nodes in the overcloud and vice versa. For example:

    [whitebox]
    hypervisors = compute-0.localdomain:,compute-1.localdomain:
    # Only set the following if different from the defaults listed
    # ctlplane_ssh_username = heat-admin
    # ctlplane_ssh_private_key_path = /home/stack/.ssh/id_rsa
    containers = true
    max_compute_nodes = 2 # Some tests depend on there being a single
                          # (available) compute node
  3. Execute the tests:

    tempest run --regex whitebox_tempest_plugin.
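The `hypervisors` option above maps each compute host name to its ctlplane address as a comma-separated list of `name:address` pairs. A minimal sketch of how such a value can be parsed, assuming that format (the helper below is a hypothetical illustration, not the plugin's actual parsing code):

```python
def parse_hypervisors(raw):
    """Return a {hostname: address} mapping from a "name:address,..." string.

    Hypothetical illustration of parsing a tempest.conf-style
    "hypervisors" value; not the plugin's actual code.
    """
    hosts = {}
    for entry in raw.split(','):
        name, _, address = entry.partition(':')
        hosts[name.strip()] = address.strip()
    return hosts

# Example (addresses are documentation-range placeholders):
mapping = parse_hypervisors(
    'compute-0.localdomain:192.0.2.10,compute-1.localdomain:192.0.2.11')
```
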

How to add a new test

New tests should be added to the whitebox_tempest_plugin/tests directory.

According to the plugin interface documentation, you should mainly import "stable" APIs, which usually are:

  • tempest.lib.*
  • tempest.config
  • tempest.test_discover.plugins
  • tempest.common.credentials_factory
  • tempest.clients
  • tempest.test

Importing classes from tempest.api.* is dangerous, since future versions of Tempest could break them.
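Tempest's base test classes follow stdlib unittest conventions, so the shape of a new whitebox test can be sketched with plain unittest (so that this snippet runs standalone); in the real plugin you would import from the stable APIs listed above instead. The class name, requirement constant, and node count below are hypothetical:

```python
import io
import unittest

# In the real plugin, use the stable Tempest APIs instead, e.g.:
#   from tempest import config, test
#   from tempest.lib import decorators

class ExampleWhiteboxTest(unittest.TestCase):

    REQUIRED_COMPUTE_NODES = 1  # hypothetical hardware requirement

    @classmethod
    def setUpClass(cls):
        # Mirrors the skip_checks() pattern of Tempest base classes:
        # fast-fail when the deployment cannot satisfy the test's
        # hardware requirements.
        available_nodes = 1  # in Tempest, this would come from CONF
        if available_nodes < cls.REQUIRED_COMPUTE_NODES:
            raise unittest.SkipTest('not enough compute nodes')

    def test_example(self):
        # The real test body would SSH to a compute node and inspect
        # hypervisor state here.
        self.assertTrue(True)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ExampleWhiteboxTest)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```
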


Roadmap

The different tests found here all have different hardware requirements, and these requirements often conflict. For example, a host configured without HyperThreading for one test cannot be used for a test that requires HyperThreading to be enabled. As a result, it's not possible to have one "master configuration" that can be used to run all tests. Instead, different tests must be run on different nodes.

At present, this plugin exists in isolation, and running individual tests on nodes, along with the configuration of said nodes, remains a manual process. However, the end goal for this project is to be able to run this test suite against N overcloud nodes, where each node has a different hardware configuration and N is the total number of different hardware configurations required (one for real-time, one for SR-IOV, etc.). Each node would have a different profile, and host aggregates would likely be used to ensure each test runs on its preferred hardware. To get there, we should probably provide a recipe along with hardware configuration steps.

That said, the above is still a long way off. For now, we're focused on getting the tests in place so we can stop doing all this by hand.