Whitebox Tempest plugin

This repo is a Tempest plugin that contains scenario tests run against TripleO/Director-based deployments.

Important

This is still a work in progress.

Requirements

The tests assume a TripleO/Director-based deployment with an undercloud and overcloud. The tests are run from the undercloud, so Tempest should be installed and configured on the undercloud node. It's assumed that the Unix user running the tests, generally stack, has SSH access to all the compute nodes running in the overcloud.
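
A quick way to check this SSH access from the undercloud (heat-admin is the TripleO default user; the hostname and IP below are illustrative and should match your deployment):

    # Each of these should log in without a password prompt and print
    # the compute node's hostname.
    ssh heat-admin@compute-0.localdomain hostname
    ssh heat-admin@192.168.24.6 hostname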

Most tests have specific hardware requirements. These are documented in the tests themselves, and the tests should fast-fail if the requirements are not met. Running the full suite requires multiple nodes, and you will need to manually specify which test to run on which node. For more information on our plans here, refer to the Roadmap section below.

For more information on TripleO/Director, refer to the Red Hat OpenStack Platform documentation.

Install, configure and run

  1. Install the plugin.

    This should be done from source:

    WORKSPACE=/some/directory
    cd $WORKSPACE
    git clone https://github.com/redhat-openstack/whitebox-tempest-plugin
    # Note the ./ -- without it, pip would try to resolve the name on
    # PyPI instead of installing from the local checkout.
    sudo pip install ./whitebox-tempest-plugin
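
    If Tempest is installed on the same node, you can check that it picked up the plugin (the exact output format varies by Tempest version):

    # whitebox_tempest_plugin should appear in the list of plugins.
    tempest list-plugins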
  2. Configure Tempest.

    Add the following lines at the end of your tempest.conf file. These determine how your undercloud node, which is running Tempest, should connect to the compute nodes in the overcloud and vice versa. For example:

    [whitebox]
    hypervisors = compute-0.localdomain:192.168.24.6,compute-1.localdomain:192.168.24.12
    # Only set the following if different from the defaults listed
    # ctlplane_ssh_username = heat-admin
    # ctlplane_ssh_private_key_path = /home/stack/.ssh/id_rsa
    containers = true
    max_compute_nodes = 2 # Some tests depend on there being a single
                          # (available) compute node
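
    With containers = true, the plugin assumes the compute services are containerized and runs its checks inside the service containers on the compute hosts. Conceptually, this amounts to something like the following; the container name and runtime here are purely illustrative and vary by release:

    ssh heat-admin@192.168.24.6 sudo podman exec nova_libvirt virsh list --all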
  3. Execute the tests:

    tempest run --regex whitebox_tempest_plugin.
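
    To preview which tests the regex matches without running them, tempest run accepts a --list-tests flag:

    tempest run --regex whitebox_tempest_plugin --list-tests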

How to add a new test

New tests should be added to the whitebox_tempest_plugin/tests directory.

According to the plugin interface doc, you should mainly import "stable" APIs, which usually are:

  • tempest.lib.*
  • tempest.config
  • tempest.test_discover.plugins
  • tempest.common.credentials_factory
  • tempest.clients
  • tempest.test

Importing classes from tempest.api.* could be dangerous, since future versions of Tempest could break them.
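
As a rough sketch, a new test that sticks to these stable interfaces might look like the following. The module path, class name, and skip logic are illustrative only, not the plugin's actual API:

    # whitebox_tempest_plugin/tests/test_example.py -- illustrative sketch
    from tempest import config
    from tempest.lib import decorators
    from tempest import test

    CONF = config.CONF


    class ExampleWhiteboxTest(test.BaseTestCase):
        """Skeleton only: a real test would also need helpers to SSH
        into the compute nodes configured in the [whitebox] section.
        """

        @classmethod
        def skip_checks(cls):
            super(ExampleWhiteboxTest, cls).skip_checks()
            # Fast-fail when the deployment doesn't meet this test's
            # hardware requirements (see the Requirements section).
            if CONF.whitebox.max_compute_nodes < 2:
                raise cls.skipException('Requires at least 2 compute nodes')

        @decorators.idempotent_id('00000000-0000-0000-0000-000000000000')
        def test_example(self):
            # Assertions against state not exposed through the REST APIs
            # (e.g. libvirt domain XML on the compute host) would go here.
            pass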

Roadmap

The different tests found here all have different hardware requirements, and these requirements often conflict. For example, a host configured without HyperThreading cannot run a test that requires HyperThreading. As a result, it's not possible to have one "master configuration" that can be used to run all tests. Instead, different tests must be run on different nodes.

At present, this plugin exists in isolation, and both the running of individual tests on nodes and the configuration of said nodes remain a manual process. However, the end goal for this project is to be able to kick off a run of this test suite against N overcloud nodes, where each node has a different hardware configuration and N is the total number of different hardware configurations required (one for real-time, one for SR-IOV, etc.). Each node would have a different profile, and host aggregates would likely be used to ensure each test runs on its preferred hardware. To get there, we should probably provide a recipe along with hardware configuration steps.

That said, this goal is still a way off. For now, we're focused on getting the tests in place so we can stop doing all this by hand.