Add job to test plans against the template

* template modified a bit for a cleaner layout
* tests written
* test environment py27 created

Change-Id: I9ad9483609ff63e44d46900004a3266620dc0078
Dina Belova 2016-01-15 20:08:09 +03:00
parent b1932a9245
commit c72c57f040
11 changed files with 523 additions and 271 deletions

.testr.conf (new file)

@ -0,0 +1,4 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list


@ -9,5 +9,5 @@ Test Plans
.. toctree::
:maxdepth: 2
mq/plan
provisioning/plan


@ -1,8 +0,0 @@
Message Queue Test Plan
=======================
.. toctree::
:maxdepth: 2
setup
test_cases


@ -1,5 +1,20 @@
=======================
Message Queue Test Plan
=======================
:status: ready
:version: 0
:Abstract:
This document describes a test plan for quantifying the performance of
message queues usually used as a message bus between OpenStack services.
Test Plan
=========
Test Environment
----------------
This section describes the setup for message queue testing. It can be either
a single (all-in-one) or a multi-node installation.
@ -15,14 +30,17 @@ A basic multi-node setup with RabbitMQ or ActiveMQ comprises 5 physical nodes:
is typical for OpenStack control plane services.
* Three nodes are allocated for the MQ cluster.
When using ZeroMQ, the basic multi-node setup can be reduced to two physical
nodes.
* One node for a compute node as above.
* One node for a controller node. This node also acts as a Redis host for
matchmaking purposes.
Preparation
^^^^^^^^^^^
**RabbitMQ Installation and Configuration**
* Install RabbitMQ server package:
``sudo apt-get install rabbitmq-server``
@ -51,8 +69,7 @@ RabbitMQ Installation and Configuration
``sudo rabbitmqctl set_permissions stackrabbit ".*" ".*" ".*"``
**ActiveMQ Installation and Configuration**
This section describes installation and configuration steps for an ActiveMQ
message queue implementation. ActiveMQ is based on Java technologies so it
@ -79,8 +96,8 @@ performed for an ActiveMQ installation:
.. note::
Here 10.4.1.x are the IP addresses of the ZooKeeper nodes where ZK is
installed. ZK will be run in cluster mode with majority voting, so at least
3 nodes are required.
.. code-block:: none
@ -96,8 +113,8 @@ performed for an ActiveMQ installation:
* create dataDir and dataLogDir directories
* for each MQ node create a myid file in dataDir with the id of the
server and nothing else. For node-1 the file will contain one line
with 1, node-2 with 2, and node-3 with 3.
* start ZooKeeper (on each node): ``./zkServer.sh start``
* check ZK status with: ``./zkServer.sh status``
* Configure ActiveMQ (apache-activemq-5.12.0/conf/activemq.xml file - set
@ -134,27 +151,28 @@ After ActiveMQ is installed and configured it can be started with the command:
``./activemq start`` or ``./activemq console`` for a foreground process.
**Oslo.messaging ActiveMQ Driver**
All OpenStack changes (in the oslo.messaging library) to support ActiveMQ are
already merged to the upstream repository. The relevant changes can be found in
the amqp10-driver-implementation topic.
To run ActiveMQ even on the most basic all-in-one topology deployment the
following requirements need to be satisfied:
* Java JRE must be installed in the system. The Java version can be checked
with the command ``java -version``. If java is not installed an error
message will appear. Java can be installed with the following command:
``sudo apt-get install default-jre``
* ActiveMQ binaries should be installed in the system. See
http://activemq.apache.org/getting-started.html for installation
instructions. The latest stable version is currently
http://apache-mirror.rbc.ru/pub/apache/activemq/5.12.0/apache-activemq-5.12.0-bin.tar.gz.
* To use the OpenStack oslo.messaging amqp 1.0 driver, the following Python
libraries need to be installed:
``pip install "pyngus>=1.0.0,<2.0.0"``
``pip install python-qpid-proton``
@ -162,19 +180,20 @@ following requirements need to be satisfied:
``rpc_backend = rabbit`` need to be modified to replace this line with
``rpc_backend = amqp``, and then all the services need to be restarted.
**ZeroMQ Installation**
This section describes installation steps for ZeroMQ. ZeroMQ (also ZMQ or 0MQ)
is an embeddable networking library but acts like a concurrency framework.
Unlike other AMQP-based drivers, such as RabbitMQ, ZeroMQ doesn't have any
central brokers in oslo.messaging. Instead, each host (running OpenStack
services) is both a ZeroMQ client and a server. As a result, each host needs to
listen to a certain TCP port for incoming connections and directly connect to
other hosts simultaneously.
To set up ZeroMQ, only one step needs to be performed.
* Install python bindings for ZeroMQ. All necessary packages will be
installed as dependencies:
``sudo apt-get install python-zmq``
.. note::
@ -191,11 +210,12 @@ To set up ZeroMQ, only one step needs to be performed.
Depends: libc6
Depends: libzmq3
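Once the package is installed, the binding can be smoke-tested from Python
(a quick hypothetical check, not part of the test plan itself):

.. code-block:: python

    # Verify that the python-zmq bindings and the underlying libzmq work.
    import zmq

    print(zmq.zmq_version())    # version of the native libzmq library
    print(zmq.pyzmq_version())  # version of the Python bindings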
**Oslo.messaging ZeroMQ Driver**
All OpenStack changes (in the oslo.messaging library) to support ZeroMQ are
already merged to the upstream repository. You can find the relevant changes
in the zmq-patterns-usage topic.
To run ZeroMQ on the most basic all-in-one topology deployment the
following requirements need to be satisfied:
@ -206,11 +226,12 @@ following requirements need to be satisfied:
.. note::
The following changes need to be applied to all OpenStack project
configuration files.
* To enable the driver, in the section [DEFAULT] of each configuration file,
the rpc_backend flag must be set to zmq and the rpc_zmq_host flag
must be set to the hostname of the node.
.. code-block:: none
@ -231,19 +252,20 @@ following requirements need to be satisfied:
port = 6379
password = None
**Running ZeroMQ on a multi-node setup**
The process of setting up oslo.messaging with ZeroMQ on a multi-node
environment is very similar to the all-in-one installation.
* On each node ``rpc_zmq_host`` should be set to its FQDN.
* Redis-server should be up and running on a controller node or a separate
host. Redis can be used with master-slave replication enabled, but
currently the oslo.messaging ZeroMQ driver does not support Redis Sentinel,
so it is not yet possible to achieve high availability, automatic failover,
and fault tolerance.
The ``host`` parameter in section ``[matchmaker_redis]`` should be set to
the IP address of a host which runs a master Redis instance, e.g.
.. code-block:: none
@ -251,3 +273,119 @@ to the all-in-one installation.
host = 10.0.0.3
port = 6379
password = None
Environment description
^^^^^^^^^^^^^^^^^^^^^^^
Test results must include a description of the environment used. This includes:
* Hardware used (servers, switches, storage, etc.)
* Network scheme
* Messaging bus specification and OpenStack version deployed (if any).
Test Case 1: Message Queue Throughput Test
------------------------------------------
Description
^^^^^^^^^^^
This test measures the aggregate throughput of a MQ layer by using the
oslo.messaging simulator tool. Either RabbitMQ, ActiveMQ, or ZeroMQ can be used
as the MQ layer. Throughput is calculated as the sum over the MQ clients of the
throughput for each client. For each test the number of clients/threads is
configured to one of the specific values defined in the test case parameters
section. The full set of tests will cover all the "Threads count" values shown,
plus additional values as needed to quantify the dependence of MQ throughput on
load, and to find the maximum throughput.
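For illustration, aggregate throughput is the plain sum of per-client rates;
a minimal sketch (not part of the simulator tool, the numbers are made up):

.. code-block:: python

    # Hypothetical illustration: aggregate MQ throughput is the sum of
    # per-client throughputs. Each client reports the number of messages
    # it processed and the elapsed wall-clock time.

    def aggregate_throughput(clients):
        """clients: iterable of (messages_processed, elapsed_seconds)."""
        return sum(msgs / elapsed for msgs, elapsed in clients)

    # Three clients measured over the same 60-second run:
    print(aggregate_throughput([(12000, 60.0), (11500, 60.0), (9800, 60.0)]))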
Parameters
^^^^^^^^^^
======================= ===========
Parameter name Value
======================= ===========
oslo.messaging version 2.5.0
simulator.py version 1.0
Threads count 50, 70, 100
======================= ===========
List of performance metrics
^^^^^^^^^^^^^^^^^^^^^^^^^^^
======== ========== ================= ===================================
Priority Value      Measurement Units Description
======== ========== ================= ===================================
1        Throughput msg/sec           Directly measured by simulator tool
======== ========== ================= ===================================
Result Type
^^^^^^^^^^^
================ ======================= =========================
Result type Measurement Units Description
================ ======================= =========================
Throughput Value msg/sec Table of numerical values
Throughput Graph msg/sec vs # of threads Graph
================ ======================= =========================
Additional Measurements
^^^^^^^^^^^^^^^^^^^^^^^
=========== ======= =============================
Measurement Units Description
=========== ======= =============================
Variance msg/sec Throughput variance over time
=========== ======= =============================
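Since variance is taken over time, one simple approach is to sample
throughput in fixed windows and compute the variance of those samples; a
sketch with made-up values:

.. code-block:: python

    # Illustrative only: variance of throughput over successive
    # fixed-length sampling windows (msg/sec values are made up).

    def variance(samples):
        mean = sum(samples) / float(len(samples))
        return sum((s - mean) ** 2 for s in samples) / len(samples)

    window_throughputs = [5400, 5650, 5210, 5980, 5500]
    print(variance(window_throughputs))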
Test Case 2: OMGBenchmark Rally test
------------------------------------
Description
^^^^^^^^^^^
OMGBenchmark is a Rally plugin for benchmarking oslo.messaging.
The plugin and installation instructions are available on GitHub:
https://github.com/Yulya/omgbenchmark
Parameters
^^^^^^^^^^
================================= =============== ===============
Parameter name Rally name Value
================================= =============== ===============
oslo.messaging version 2.5.0
Number of iterations times 50, 100, 500
Threads count concurrency 40, 70, 100
Number of RPC servers num_servers 10
Number of RPC clients num_clients 10
Number of topics num_topics 5
Number of messages per iteration num_messages 100
Message size msg_length_file 900-12000 bytes
================================= =============== ===============
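For reference, the table above maps onto a Rally task roughly as follows.
This snippet is a hypothetical sketch: the exact task schema and the message
length file name are assumptions, see the plugin repository for the real
format:

.. code-block:: python

    # Hypothetical Rally task pieces assembled from the parameters above.
    scenario_args = {
        "num_servers": 10,
        "num_clients": 10,
        "num_topics": 5,
        "num_messages": 100,
        # hypothetical file listing message sizes of 900-12000 bytes
        "msg_length_file": "msg_lengths.txt",
    }
    runner = {"type": "constant", "times": 50, "concurrency": 40}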
List of performance metrics
^^^^^^^^^^^^^^^^^^^^^^^^^^^
======= ================= ==========================================
Name Measurement Units Description
======= ================= ==========================================
min sec Minimal execution time of one iteration
median sec Median execution time
90%ile sec 90th percentile execution time
95%ile sec 95th percentile execution time
max sec Maximal execution time of one iteration
avg sec Average execution time
success none Number of successfully finished iterations
count none Number of executed iterations
======= ================= ==========================================
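These statistics can be derived from the raw per-iteration durations; a small
standalone sketch (nearest-rank percentiles, sample numbers are made up):

.. code-block:: python

    # Illustrative computation of the metrics above from raw iteration
    # execution times (seconds).
    import math

    def percentile(sorted_data, p):
        # nearest-rank percentile over pre-sorted data
        k = max(0, int(math.ceil(p / 100.0 * len(sorted_data))) - 1)
        return sorted_data[k]

    durations = sorted([0.8, 0.9, 1.1, 1.3, 1.7, 2.4])
    print("min    %.2f" % durations[0])
    print("median %.2f" % percentile(durations, 50))
    print("90%%ile %.2f" % percentile(durations, 90))
    print("95%%ile %.2f" % percentile(durations, 95))
    print("max    %.2f" % durations[-1])
    print("avg    %.2f" % (sum(durations) / len(durations)))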
Result Type
^^^^^^^^^^^
================= ======================= =========================
Result type Measurement Units Description
================= ======================= =========================
Throughput Graph msg size vs median Graph
Concurrency Graph concurrency vs median Graph
================= ======================= =========================


@ -1,99 +0,0 @@
Test Cases
==========
Test Case 1: Message Queue Throughput Test
------------------------------------------
**Description**
This test measures the aggregate throughput of a MQ layer by using the oslo.messaging
simulator tool. Either RabbitMQ, ActiveMQ, or ZeroMQ can be used as the MQ layer.
Throughput is calculated as the sum
over the MQ clients of the throughput for each client. For each test the number of
clients/threads is configured to one of the specific values defined in the test case
parameters section. The full set of tests will cover all the "Threads count" values shown,
plus additional values as needed to quantify the dependence of MQ throughput on load, and
to find the maximum throughput.
**Parameters**
======================= =====
Parameter name Value
======================= =====
oslo.messaging version 2.5.0
simulator.py version 1.0
Threads count 50, 70, 100
======================= =====
**Measurements**
========== ================= ===================================
Value      Measurement Units Description
========== ================= ===================================
Throughput msg/sec           Directly measured by simulator tool
========== ================= ===================================
**Result Type**
================ ======================= =========================
Result type Measurement Units Description
================ ======================= =========================
Throughput Value msg/sec Table of numerical values
Throughput Graph msg/sec vs # of threads Graph
================ ======================= =========================
**Additional Measurements**
=========== ======= =============================
Measurement Units Description
=========== ======= =============================
Variance msg/sec Throughput variance over time
=========== ======= =============================
Test Case 2: OMGBenchmark Rally test
------------------------------------
**Description**
OMGBenchmark is a rally plugin for benchmarking oslo.messaging.
The plugin and installation instructions are available on github:
https://github.com/Yulya/omgbenchmark
**Parameters**
================================= =============== =====
Parameter name Rally name Value
================================= =============== =====
oslo.messaging version 2.5.0
Number of iterations times 50, 100, 500
Threads count concurrency 40, 70, 100
Number of RPC servers num_servers 10
Number of RPC clients num_clients 10
Number of topics num_topics 5
Number of messages per iteration num_messages 100
Message size msg_length_file 900-12000 bytes
================================= =============== =====
**Measurements**
======= ================= ==========================================
Name Measurement Units Description
======= ================= ==========================================
min sec Minimal execution time of one iteration
median sec Median execution time
90%ile sec 90th percentile execution time
95%ile sec 95th percentile execution time
max sec Maximal execution time of one iteration
avg sec Average execution time
success none Number of successfully finished iterations
count none Number of executed iterations
======= ================= ==========================================
**Result Type**
================= ======================= =========================
Result type Measurement Units Description
================= ======================= =========================
Throughput Graph msg size vs median Graph
Concurrency Graph concurrency vs median Graph
================= ======================= =========================


@ -10,20 +10,20 @@ Measuring performance of provisioning systems
:Abstract:
This document describes a test plan for quantifying the performance of
provisioning systems as a function of the number of nodes to be provisioned.
The plan includes the collection of several resource utilization metrics,
which will be used to analyze and understand the overall performance of each
system. In particular, resource bottlenecks will either be fixed, or best
practices developed for system configuration and hardware requirements.
:Conventions:
- **Provisioning:** is the entire process of installing and configuring an
operating system.
- **Provisioning system:** is a service or a set of services which enables
the installation of an operating system and performs basic operations such
as configuring network interfaces and partitioning disks. A preliminary
`list of provisioning systems`_ can be found below in `Applications`_.
The provisioning system
can include configuration management systems like Puppet or Chef, but
@ -37,70 +37,38 @@ Measuring performance of provisioning systems
- **Nodes:** are servers which will be provisioned.
Test Plan
=========
This test plan aims to identify the best provisioning solution for cloud
deployment, using a specified list of performance measurements and tools.
Test Environment
----------------
Preparation
^^^^^^^^^^^
1.
The following package needs to be installed on the provisioning system
servers to collect performance metrics.
.. table:: Software to be installed
+--------------+---------+-----------------------------------+
| package name | version | source |
+==============+=========+===================================+
| `dstat`_ | 0.7.2 | Ubuntu trusty universe repository |
+--------------+---------+-----------------------------------+
Environment description
^^^^^^^^^^^^^^^^^^^^^^^
Test results MUST include a description of the environment used. The following
items should be included:
- **Hardware configuration of each server.** If virtual machines are used then
both physical and virtual hardware should be fully documented.
An example format is given below:
.. table:: Description of server hardware
@ -141,10 +109,10 @@ should be included:
| |size | | |
+-------+----------------+-------+-------+
- **Configuration of hardware network switches.** The configuration file from
the switch can be downloaded and attached.
- **Configuration of virtual machines and virtual networks (if used).**
The configuration files can be attached, along with the mapping of virtual
machines to host machines.
@ -166,22 +134,78 @@ should be included:
affect the amount of work to be performed by the provisioning system
and thus its performance.
Test Case
---------
Description
^^^^^^^^^^^
This test plan contains a single test case, which needs to be run step by
step on environments that differ in the parameters listed below.
Parameters
^^^^^^^^^^
=============== =========================================
Parameter name Value
=============== =========================================
number of nodes 10, 20, 40, 80, 160, 320, 640, 1280, 2000
=============== =========================================
List of performance metrics
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The table below shows the list of test metrics to be collected. The priority
is the relative ranking of the importance of each metric in evaluating the
performance of the system.
.. table:: List of performance metrics
+--------+-------------------+-------------------+------------------------------------------+
|Priority| Value             | Measurement Units | Description                              |
+========+===================+===================+==========================================+
| 1      | PROVISIONING_TIME | seconds           | The elapsed time to provision all nodes, |
|        |                   |                   | as a function of the number of nodes     |
+--------+-------------------+-------------------+------------------------------------------+
| 2      | INGRESS_NET       | Gbit/s            | Incoming network bandwidth usage as a    |
|        |                   |                   | function of the number of nodes. Average |
|        |                   |                   | during provisioning on the host where    |
|        |                   |                   | the provisioning system is installed.    |
+--------+-------------------+-------------------+------------------------------------------+
| 2      | EGRESS_NET        | Gbit/s            | Outgoing network bandwidth usage as a    |
|        |                   |                   | function of the number of nodes. Average |
|        |                   |                   | during provisioning on the host where    |
|        |                   |                   | the provisioning system is installed.    |
+--------+-------------------+-------------------+------------------------------------------+
| 3      | CPU               | percentage        | CPU utilization as a function of the     |
|        |                   |                   | number of nodes. Average during          |
|        |                   |                   | provisioning on the host where the       |
|        |                   |                   | provisioning system is installed.        |
+--------+-------------------+-------------------+------------------------------------------+
| 3      | RAM               | GB                | Active memory usage as a function of     |
|        |                   |                   | the number of nodes. Average during      |
|        |                   |                   | provisioning on the host where the       |
|        |                   |                   | provisioning system is installed.        |
+--------+-------------------+-------------------+------------------------------------------+
| 3      | WRITE_IO          | operations/second | Storage write IO bandwidth as a function |
|        |                   |                   | of the number of nodes. Average during   |
|        |                   |                   | provisioning on the host where the       |
|        |                   |                   | provisioning system is installed.        |
+--------+-------------------+-------------------+------------------------------------------+
| 3      | READ_IO           | operations/second | Storage read IO bandwidth as a function  |
|        |                   |                   | of the number of nodes. Average during   |
|        |                   |                   | provisioning on the host where the       |
|        |                   |                   | provisioning system is installed.        |
+--------+-------------------+-------------------+------------------------------------------+
Measuring performance values
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The script
`Full script for collecting performance metrics`_
can be used for the first five of the following steps.
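Step 3 below treats provisioning as finished once every node answers on the
ssh port; a minimal sketch of such a check (host names and intervals are
illustrative assumptions):

.. code-block:: python

    # Sketch: wait until TCP port 22 answers on every node and return the
    # elapsed wall time. Hosts and intervals are made-up examples.
    import socket
    import time

    def wait_for_ssh(hosts, port=22, poll_interval=10):
        start = time.time()
        pending = set(hosts)
        while pending:
            for host in list(pending):
                try:
                    socket.create_connection((host, port), timeout=5).close()
                    pending.discard(host)
                except socket.error:
                    pass
            if pending:
                time.sleep(poll_interval)
        return time.time() - start

    # elapsed = wait_for_ssh(["node-%d" % i for i in range(1, 11)])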
@ -197,9 +221,8 @@ can be used for the first five of the following steps.
2.
Start the provisioning process for the first node and record the wall time.
3.
Wait until the provisioning process has finished (when all nodes are
reachable via ssh) and record the wall time.
4.
Stop the dstat program.
5.
@ -233,32 +256,20 @@ can be used for the first five of the following steps.
These values will be graphed and maximum values reported.
Additional tests will be performed if some anomalous behaviour is found.
These may require the collection of additional performance metrics.
6.
The result of this part of the test will be:
* to provide the following graphs, one for each number of provisioned nodes:
#) Three dependencies on one graph.
* INGRESS_NET(TIME) - Dependence on time of incoming network bandwidth
usage.
* EGRESS_NET(TIME) - Dependence on time of outgoing network bandwidth
usage.
* ALL_NET(TIME) - Dependence on time of total network bandwidth usage.
#) One dependence on one graph.
@ -313,10 +324,10 @@ nodes.
+-------+--------------+---------+---------+---------+---------+
Applications
============
List of provisioning systems
----------------------------
.. table:: list of provisioning systems
@ -333,7 +344,7 @@ list of provisioning systems
+-----------------------------+---------+
Full script for collecting performance metrics
==============================================
.. literalinclude:: measure.sh
:language: bash


@ -2,8 +2,6 @@
Example Test Plan - The title of your plan
==========================================
Please include the following information in this primary section:
:status: test plan status - either **draft** or **ready**
:version: test plan version
@ -33,13 +31,15 @@ using sections, similar to the written below.
Test Environment
----------------
Preparation
^^^^^^^^^^^
Please specify here what needs to be done with the environment to run
this test plan. This can include specific tools installation,
specific OpenStack deployment, etc.
Environment description
^^^^^^^^^^^^^^^^^^^^^^^
Please describe here the environment used. You can use the scheme below for
this purpose or modify it to your needs:
@ -54,17 +54,20 @@ purpose or modify it due to your needs:
Test Case 1: Something very interesting #1
------------------------------------------
Description
^^^^^^^^^^^
Define test case #1. Every test case can contain at least the sections
defined below.
Parameters
^^^^^^^^^^
Optional section. Can be used if there are multiple test cases differing in
some input parameters - if so, these parameters need to be listed here.
List of performance metrics
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Mandatory section. Defines what measurements are actually performed during
the test. When multiple metrics are collected, it is good practice to
@ -78,7 +81,8 @@ Priority Value Measurement Units Description
3 - not that much important What's measured <units> <description>
=========================== =============== ================= =============
Some additional section
^^^^^^^^^^^^^^^^^^^^^^^
Depending on the nature of the test case, something else may need to be defined.
If so, additional sections with free form titles should be added.


@ -2,3 +2,5 @@ rst2pdf
sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
sphinxcontrib-httpdomain
sphinx_rtd_theme
testrepository>=0.0.18
testtools>=0.9.34

tests/__init__.py (new, empty file)

tests/test_titles.py (new file)

@ -0,0 +1,198 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import glob
import re

import docutils.core
import testtools

# Sections a test plan may omit without failing the checks.
OPTIONAL_SECTIONS = ("Upper level additional section",)
OPTIONAL_SUBSECTIONS = ("Some additional section",)
OPTIONAL_SUBSUBSECTIONS = ("Parameters", "Some additional section",)
OPTIONAL_FIELDS = ("Conventions",)


class TestTitles(testtools.TestCase):

    def _get_title(self, section_tree, depth=1):
        # Recursively collect a section's title and, up to two levels
        # deep, the titles of its subsections.
        section = {
            "subtitles": [],
        }
        for node in section_tree:
            if node.tagname == "title":
                section["name"] = node.rawsource
            elif node.tagname == "section":
                subsection = self._get_title(node, depth+1)
                if depth < 2:
                    if subsection["subtitles"]:
                        section["subtitles"].append(subsection)
                    else:
                        section["subtitles"].append(subsection["name"])
                elif depth == 2:
                    section["subtitles"].append(subsection["name"])
        return section

    def _get_titles(self, test_plan):
        titles = {}
        for node in test_plan:
            if node.tagname == "section":
                section = self._get_title(node)
                titles[section["name"]] = section["subtitles"]
        return titles

    @staticmethod
    def _get_docinfo(test_plan):
        # Collect bibliographic fields (:status:, :version:, the abstract,
        # etc.) from the parsed document.
        fields = []
        for node in test_plan:
            if node.tagname == "field_list":
                for field in node:
                    for f_opt in field:
                        if f_opt.tagname == "field_name":
                            fields.append(f_opt.rawsource)
            if node.tagname == "docinfo":
                for info in node:
                    fields.append(info.tagname)
            if node.tagname == "topic":
                fields.append("abstract")
        return fields

    def _check_fields(self, tmpl, test_plan):
        tmpl_fields = self._get_docinfo(tmpl)
        test_plan_fields = self._get_docinfo(test_plan)
        missing_fields = [f for f in tmpl_fields
                          if f not in test_plan_fields and
                          f not in OPTIONAL_FIELDS]
        if len(missing_fields) > 0:
            self.fail("While checking '%s':\n %s"
                      % (test_plan[0].rawsource,
                         "Missing fields: %s" % missing_fields))

    def _check_titles(self, filename, expect, actual):
        missing_sections = [x for x in expect.keys() if (
            x not in actual.keys()) and (x not in OPTIONAL_SECTIONS)]

        msgs = []
        if len(missing_sections) > 0:
            msgs.append("Missing sections: %s" % missing_sections)

        for section in expect.keys():
            missing_subsections = [x for x in expect[section]
                                   if x not in actual.get(section, {}) and
                                   (x not in OPTIONAL_SUBSECTIONS)]
            extra_subsections = [x for x in actual.get(section, {})
                                 if x not in expect[section]]

            for ex_s in extra_subsections:
                s_name = (ex_s if type(ex_s) is str or
                          type(ex_s) is unicode else ex_s["name"])
                if s_name.startswith("Test Case"):
                    # Extra "Test Case" subsections are allowed, so drop
                    # missing "Test Case" entries from the report.
                    new_missing_subsections = []
                    for m_s in missing_subsections:
                        m_s_name = (m_s if type(m_s) is str or
                                    type(m_s) is unicode
                                    else m_s["name"])
                        if not m_s_name.startswith("Test Case"):
                            new_missing_subsections.append(m_s)
                    missing_subsections = new_missing_subsections
                    break

            if len(missing_subsections) > 0:
                msgs.append("Section '%s' is missing subsections: %s"
                            % (section, missing_subsections))

            for subsection in expect[section]:
                if type(subsection) is dict:
                    missing_subsubsections = []
                    actual_section = actual.get(section, {})
                    matching_actual_subsections = [
                        s for s in actual_section
                        if type(s) is dict and (
                            s["name"] == subsection["name"] or
                            (s["name"].startswith("Test Case") and
                             subsection["name"].startswith("Test Case")))
                    ]
                    for actual_subsection in matching_actual_subsections:
                        for x in subsection["subtitles"]:
                            if (x not in actual_subsection["subtitles"] and
                                    x not in OPTIONAL_SUBSUBSECTIONS):
                                missing_subsubsections.append(x)
                        if len(missing_subsubsections) > 0:
                            msgs.append("Subsection '%s' is missing "
                                        "subsubsections: %s"
                                        % (actual_subsection,
                                           missing_subsubsections))
        if len(msgs) > 0:
            self.fail("While checking '%s':\n %s"
                      % (filename, "\n ".join(msgs)))

    def _check_lines_wrapping(self, tpl, raw):
        code_block = False
        for i, line in enumerate(raw.split("\n")):
            # NOTE(ndipanov): Allow code block lines to be longer than 79 ch
            if code_block:
                if not line or line.startswith(" "):
                    continue
                else:
                    code_block = False
            if "::" in line:
                code_block = True
            if "http://" in line or "https://" in line:
                continue
            # Allow lines which do not contain any whitespace
            if re.match("\s*[^\s]+$", line):
                continue
            self.assertTrue(
                len(line) < 80,
                msg="%s:%d: Line limited to a maximum of 79 characters." %
                (tpl, i + 1))

    def _check_no_cr(self, tpl, raw):
        matches = re.findall("\r", raw)
        self.assertEqual(
            len(matches), 0,
            "Found %s literal carriage returns in file %s" %
            (len(matches), tpl))

    def _check_trailing_spaces(self, tpl, raw):
        for i, line in enumerate(raw.split("\n")):
            trailing_spaces = re.findall("\s+$", line)
            self.assertEqual(
                len(trailing_spaces), 0,
                "Found trailing spaces on line %s of %s" % (i + 1, tpl))

    def test_template(self):
        # Parse the reference template, then check every test plan under
        # doc/source/test_plans against its structure.
        with open("doc/source/test_plans/template.rst") as f:
            template = f.read()
        test_plan_tmpl = docutils.core.publish_doctree(template)
        template_titles = self._get_titles(test_plan_tmpl)

        files = glob.glob("doc/source/test_plans/*/*.rst")
        for filename in files:
            with open(filename) as f:
                data = f.read()
            test_plan = docutils.core.publish_doctree(data)
            self._check_titles(filename,
                               template_titles,
                               self._get_titles(test_plan))
            self._check_fields(test_plan_tmpl, test_plan)
            self._check_lines_wrapping(filename, data)
            self._check_no_cr(filename, data)
            self._check_trailing_spaces(filename, data)


@ -1,5 +1,5 @@
[tox]
envlist = docs,py27
minversion = 1.6
skipsdist = True
@ -11,6 +11,8 @@ setenv = VIRTUAL_ENV={envdir}
LANGUAGE=en_US:en
LC_ALL=C
deps = -r{toxinidir}/requirements.txt
commands =
python setup.py test --slowest --testr-args='{posargs}'
[testenv:venv]
commands = {posargs}