Remove the deprecated ostestr command

The ostestr command was deprecated in June 2019
- I3a6084db9f86627e3e94abaa4fb4aec52a01126a

This command is replaced by stestr. The os_testr
repo, which also contains other utilities, is not deprecated
and will continue to be maintained.

QA meeting discussion:
https://meetings.opendev.org/irclogs/%23openstack-qa/%23openstack-qa.2022-03-22.log.html#t2022-03-22T15:45:36

Change-Id: Ic0cddcc226f092ac6df405e83b2e7660d71d0ba2
changes/24/835124/2
Ghanshyam Mann 10 months ago
parent 4506fcf719
commit 6f10535042

@@ -20,19 +20,6 @@ A testr wrapper to provide functionality for OpenStack projects.
Features
--------
.. warning::
The ``ostestr`` command is deprecated. Use the `stestr`_ command instead, as
follows:
0. Install `stestr`_ (This step is already done if you're using ostestr.)
1. You can use ``stestr run ...`` instead of ``ostestr ...``
2. You can use ``stestr list ...`` instead of ``ostestr --list ...``
For more subcommands and options, please refer to `stestr help` or the
`stestr`_ documentation.
* ``ostestr``: a testr wrapper that uses subunit-trace for output and builds
some helpful extra functionality around testr
* ``subunit-trace``: an output filter for a subunit stream which provides
useful information about the run
* ``subunit2html``: generates a test results html page from a subunit stream

@@ -7,7 +7,6 @@ This section contains the documentation for each of the tools packaged in os-testr
.. toctree::
:maxdepth: 2
ostestr
subunit_trace
subunit2html
generate_subunit

@@ -1,271 +0,0 @@
.. _ostestr:
ostestr
=======
.. warning::
The ``ostestr`` command is deprecated. Use the `stestr`_ command instead, as
follows.
0. Install `stestr`_ (This step is already done if you're using ostestr.)
1. You can use ``stestr run ...`` instead of ``ostestr ...``
2. You can use ``stestr list ...`` instead of ``ostestr --list ...``
For more subcommands and options, please refer to `stestr help` or the
`stestr`_ documentation.
.. _stestr: https://stestr.readthedocs.io/
The ostestr command provides a wrapper around the testr command included in
the testrepository package. It's designed to build on the functionality
included in testr and work around several UI bugs in the short term. By default
it also produces output that is much more useful for OpenStack's test suites,
which are lengthy in both runtime and number of tests. Please note that the CLI
semantics are still a work in progress as the project is quite young, so
default behavior might change in future versions.
Summary
-------
::
ostestr [-b|--blacklist-file <blacklist_file>] [-r|--regex REGEX]
[-w|--whitelist-file <whitelist_file>]
[-p|--pretty] [--no-pretty] [-s|--subunit] [-l|--list]
[-n|--no-discover <test_id>] [--slowest] [--no-slowest]
[--pdb <test_id>] [--parallel] [--serial]
[-c|--concurrency <workers>] [--until-failure] [--print-exclude]
Options
-------
--blacklist-file BLACKLIST_FILE, -b BLACKLIST_FILE
Path to a blacklist file; this file contains a
separate exclude regex on each line
--whitelist-file WHITELIST_FILE, -w WHITELIST_FILE
Path to a whitelist file; this file contains a
separate regex on each line
--regex REGEX, -r REGEX
A normal testr selection regex.
--black-regex BLACK_REGEX, -B BLACK_REGEX
Test rejection regex. If a test case matches during a
search operation, it will be removed from the
final test list.
--pretty, -p
Print pretty output from subunit-trace. This is
mutually exclusive with --subunit
--no-pretty
Disable the pretty output with subunit-trace
--subunit, -s
Output the raw subunit v2 stream from the test run.
This is mutually exclusive with --pretty
--list, -l
List all the tests which will be run.
--no-discover TEST_ID, -n TEST_ID
Takes in a single test to bypass test discovery and
just execute the specified test
--slowest
After the test run print the slowest tests
--no-slowest
After the test run don't print the slowest tests
--pdb TEST_ID
Run a single test that has pdb traces added
--parallel
Run tests in parallel (this is the default)
--serial
Run tests serially
--concurrency WORKERS, -c WORKERS
The number of workers to use when running in parallel.
By default this is the number of cpus
--until-failure
Run the tests in a loop until a failure is
encountered. Running with subunit or pretty output
enabled will force the loop to run tests serially
--print-exclude
If an exclude file is used this option will print the
comment from the same line and all skipped tests
before the test run
Running Tests
-------------
os-testr is primarily for running tests. At its most basic level, you just
invoke ostestr to run a test suite for a project (assuming it's already set up
to run tests using testr). For example::
$ ostestr
This will run tests in parallel (with the number of workers matching the number
of CPUs) and with subunit-trace output. If you need to run tests in serial you
can use the serial option::
$ ostestr --serial
Or if you need to adjust the concurrency but still run in parallel you can use
-c/--concurrency::
$ ostestr --concurrency 2
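The default worker count comes from the machine's CPU count; a minimal sketch of that fallback (``pick_workers`` is an illustrative name, not part of ostestr)::

```python
import multiprocessing

# Sketch: when no -c/--concurrency value is given (the parser default
# is 0), the effective worker count falls back to the number of CPUs.
def pick_workers(concurrency=0):
    return concurrency or multiprocessing.cpu_count()

print(pick_workers(2))   # with an explicit -c 2
print(pick_workers())    # default: CPU count of the machine
```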
If you only want to run an individual test module, or something more specific
(a single class or test), and parallel execution doesn't matter, you can use
-n/--no-discover to skip test discovery and directly call subunit.run on
the tests under the covers. Bypassing discovery is desirable when running a
small subset of tests in a larger test suite, because the discovery time can
often far exceed the total run time of the tests.
For example::
$ ostestr --no-discover test.test_thing.TestThing.test_thing_method
Additionally, if you need to run a single test module, class, or single test
with pdb enabled you can use --pdb to directly call testtools.run under the
covers which works with pdb. For example::
$ ostestr --pdb tests.test_thing.TestThing.test_thing_method
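Both --no-discover and --pdb also accept a file name in place of a test id; a minimal sketch of that path-to-test-id conversion (mirroring the path_to_regex helper removed later in this change)::

```python
import os

# A file path like 'tests/test_thing.py' becomes the dotted test id
# 'tests.test_thing': drop the extension, swap '/' for '.'.
def path_to_test_id(path):
    root, _ = os.path.splitext(path)
    return root.replace('/', '.')

print(path_to_test_id('tests/test_thing.py'))  # -> tests.test_thing
```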
Test Selection
--------------
ostestr was initially designed to build on top of the test selection in testr.
testr only exposed a regex option to select tests. This functionality is
exposed via the --regex option. For example::
$ ostestr --regex 'magic\.regex'
This will do a straight passthrough of the provided regex to testr.
When ostestr is asked to do more complex test selection than a single regex,
it will ask testr for the full list of tests and then pass the filtered test
list back to testr.
ostestr allows you to do simple test exclusion by passing a rejection/black regex::
$ ostestr --black-regex 'slow_tests|bad_tests'
ostestr also allows you to combine these arguments::
$ ostestr --regex ui\.interface --black-regex 'slow_tests|bad_tests'
Here we first select all tests matching 'ui\.interface',
then drop all tests matching
'slow_tests|bad_tests' from the final list.
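That two-step selection can be sketched as follows (the test names here are illustrative)::

```python
import re

# Step 1: select tests matching the --regex pattern.
# Step 2: drop any selected test whose name matches the --black-regex
# pattern (matching uses re.search, i.e. anywhere in the name).
tests = [
    'ui.interface.test_login',
    'ui.interface.slow_tests.test_report',
    'db.test_migrations',
]
selected = [t for t in tests if re.search(r'ui\.interface', t)]
kept = [t for t in selected if not re.search(r'slow_tests|bad_tests', t)]
print(kept)  # -> ['ui.interface.test_login']
```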
ostestr also allows you to specify a blacklist file to define a set
of regexes to exclude. You can specify a blacklist file with the
--blacklist_file/-b option, for example::
$ ostestr --blacklist_file $path_to_file
The format for the file is line separated regex, with '#' used to signify the
start of a comment on a line. For example::
# Blacklist File
^regex1 # Excludes these tests
.*regex2 # exclude those tests
The regexes used in the blacklist file, or passed as arguments, will be used
to drop tests from the initial selection list.
The example above will generate a list which excludes both any tests
matching '^regex1' and '.*regex2'. If a blacklist file is used in conjunction
with the --regex option, the regex specified with --regex will be used for the
initial test selection. It's also worth noting that the
regex test selection options cannot be used in conjunction with the
--no-discover or --pdb options described in the previous section. This is
because the regex selection requires using testr under the covers to actually
do the filtering, and those two options do not use testr.
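As a rough sketch, the blacklist entries end up as a negative lookahead prepended to the selection regex (modeled on the construct_regex helper elsewhere in this change; the real helper also strips '#' comments and handles whitelist files)::

```python
# Sketch: each blacklist line contributes to a negative lookahead that
# is prepended to the selection regex ('(?!...)' rejects matches).
def combine(black_regexes, select_regex):
    bregex = ''
    if black_regexes:
        bregex = '(?!%s)' % '|'.join(black_regexes)
    return '^%s.*(%s).*$' % (bregex, select_regex)

print(combine(['^regex1', '.*regex2'], 'regex3.*'))
```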
The dual of the blacklist file is the whitelist file, which alters the initial
test selection regex by joining the whitelist elements with '|'.
You can specify the path to the file with --whitelist_file/-w, for example::
$ ostestr --whitelist_file $path_to_file
The format for the file is more or less identical to the blacklist file::
# Whitelist File
^regex1 # Include these tests
.*regex2 # include those tests
However, instead of excluding the matches it will include them.
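A minimal sketch of the whitelist-file parsing described above (``whitelist_to_regex`` is an illustrative name; the packaged helper is get_regex_from_whitelist_file)::

```python
# Sketch: strip '#' comments from each line, keep non-empty regexes,
# and join them with '|' to form the inclusion regex.
def whitelist_to_regex(lines):
    parts = []
    for line in lines:
        regex = line.split('#')[0].strip()
        if regex:
            parts.append(regex)
    return '|'.join(parts)

lines = ['# Whitelist File',
         '^regex1 # Include these tests',
         '.*regex2 # include those tests']
print(whitelist_to_regex(lines))  # -> ^regex1|.*regex2
```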
It's also worth noting that you can use the test list option to dry run any
selection arguments you are using. You just need to use --list/-l with your
selection options to do this, for example::
$ ostestr --regex 'regex3.*' --blacklist_file blacklist.txt --list
This will list all the tests which will be run by ostestr using that combination
of arguments.
Please note that all of this selection functionality will be expanded on in the
future and a default grammar for selecting multiple tests will be chosen in a
future release. However, as of right now, all current arguments (which have
guarantees on always remaining in place) are still required to perform any
selection logic while this functionality is still under development.
Output Options
--------------
By default ostestr will use subunit-trace as the output filter on the test
run. It will also print the slowest tests from the run after the run is
concluded. You can disable printing the slowest tests with the --no-slowest
flag, for example::
$ ostestr --no-slowest
If you'd like to disable the subunit-trace output you can do this using
--no-pretty::
$ ostestr --no-pretty
ostestr also provides the option to just output the raw subunit stream on
STDOUT with --subunit/-s. Note if you want to use this you also have to
specify --no-pretty as the subunit-trace output and the raw subunit output
are mutually exclusive. For example, to get raw subunit output the arguments
would be::
$ ostestr --no-pretty --subunit
An additional option on top of the blacklist file is the --print-exclude
option. When it is specified along with a blacklist file, ostestr will print,
before the tests are run, all the tests it will be excluding based on the
blacklist file. If a line in the blacklist file has a comment, that comment
will be printed before listing the tests excluded by that line's regex. If no
comment is present on a line, the regex from that line will be printed instead.
For example, if you were using the example blacklist file from the previous
section, the output before the regular test run output would be::
$ ostestr -b blacklist.txt --print-exclude
Excludes these tests
regex1_match
regex1_exclude
exclude those tests
regex2_match
regex2_exclude
...
Notes for running with tox
--------------------------
If you use `tox`_ for running your tests and call ostestr as the test command,
it's recommended that you put {posargs} after ostestr in the commands
stanza. For example::
[testenv]
commands = ostestr {posargs}
.. _tox: https://tox.readthedocs.org/en/latest/
This will enable end users to pass args to configure the output, use the
selection logic, or any other options directly from the tox cli. This will let
tox take care of the venv management and the environment separation but enable
direct access to all of the ostestr options to easily customize your test run.
For example, assuming the above posargs usage, you would be able to do::
$ tox -epy34 -- --regex ^regex1
or to skip discovery::
$ tox -epy34 -- -n test.test_thing.TestThing.test_thing_method

@@ -1,284 +0,0 @@
#!/usr/bin/env python3
# Copyright 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import copy
import io
import os
import subprocess
import sys
import warnings
import pbr.version
import six.moves
from stestr import commands
from subunit import run as subunit_run
from testtools import run as testtools_run
from os_testr import regex_builder as rb
__version__ = pbr.version.VersionInfo('os_testr').version_string()
def get_parser(args):
parser = argparse.ArgumentParser(
description='Tool to run openstack tests')
parser.add_argument('--version', action='version',
version='%s' % __version__)
parser.add_argument('--blacklist-file', '-b', '--blacklist_file',
help='Path to a blacklist file, this file '
'contains a separate regex exclude on each '
'newline')
parser.add_argument('--whitelist-file', '-w', '--whitelist_file',
help='Path to a whitelist file, this file '
'contains a separate regex on each newline.')
group = parser.add_mutually_exclusive_group()
group.add_argument('--regex', '-r',
help='A normal testr selection regex.')
group.add_argument('--path', metavar='FILE_OR_DIRECTORY',
help='A file name or directory of tests to run.')
group.add_argument('--no-discover', '-n', metavar='TEST_ID',
help="Takes in a single test to bypass test "
"discovery and just execute the specified "
"test. A file name may be used in place of a "
"test name.")
parser.add_argument('--black-regex', '-B',
help='Test rejection regex. If a test case name '
'matches on a re.search() operation, '
'it will be removed from the final test list. '
'Effectively the black-regex is added to the '
'black regex list, but you do not need to edit a '
'file. The black filtering happens after the '
'initial white selection, which by default is '
'everything.')
pretty = parser.add_mutually_exclusive_group()
pretty.add_argument('--pretty', '-p', dest='pretty', action='store_true',
help='Print pretty output from subunit-trace. This is '
'mutually exclusive with --subunit')
pretty.add_argument('--no-pretty', dest='pretty', action='store_false',
help='Disable the pretty output with subunit-trace')
parser.add_argument('--subunit', '-s', action='store_true',
help='Output the raw subunit v2 stream from the test '
'run. This is mutually exclusive with --pretty')
parser.add_argument('--list', '-l', action='store_true',
help='List all the tests which will be run.')
parser.add_argument('--color', action='store_true',
help='Use color in the pretty output')
slowest = parser.add_mutually_exclusive_group()
slowest.add_argument('--slowest', dest='slowest', action='store_true',
help="after the test run print the slowest tests")
slowest.add_argument('--no-slowest', dest='slowest', action='store_false',
help="after the test run don't print the slowest "
"tests")
parser.add_argument('--pdb', metavar='TEST_ID',
help='Run a single test that has pdb traces added')
parallel = parser.add_mutually_exclusive_group()
parallel.add_argument('--parallel', dest='parallel', action='store_true',
help='Run tests in parallel (this is the default)')
parallel.add_argument('--serial', dest='parallel', action='store_false',
help='Run tests serially')
parser.add_argument('--concurrency', '-c', type=int, metavar='WORKERS',
default=0,
help='The number of workers to use when running in '
'parallel. By default this is the number of cpus')
parser.add_argument('--until-failure', action='store_true',
help='Run the tests in a loop until a failure is '
'encountered. Running with subunit or pretty '
'output enabled will force the loop to run tests '
'serially')
parser.add_argument('--print-exclude', action='store_true',
help='If an exclude file is used this option will '
'print the comment from the same line and all '
'skipped tests before the test run')
parser.set_defaults(pretty=True, slowest=True, parallel=True)
return parser.parse_known_args(args)
def _parse_testrconf():
# Parse the legacy .testr.conf file.
test_dir = None
top_dir = None
group_regex = None
with open('.testr.conf', 'r') as testr_conf_file:
config = six.moves.configparser.ConfigParser()
config.readfp(testr_conf_file)
test_command = config.get('DEFAULT', 'test_command')
group_regex = None
if config.has_option('DEFAULT', 'group_regex'):
group_regex = config.get('DEFAULT', 'group_regex')
for line in test_command.split('\n'):
if 'subunit.run discover' in line:
command_parts = line.split(' ')
top_dir_present = '-t' in line
for idx, val in enumerate(command_parts):
if top_dir_present:
if val == '-t':
top_dir = command_parts[idx + 1]
test_dir = command_parts[idx + 2]
else:
if val == 'discover':
test_dir = command_parts[idx + 1]
return (test_dir, top_dir, group_regex)
def call_testr(regex, subunit, pretty, list_tests, slowest, parallel, concur,
until_failure, color, others=None, blacklist_file=None,
whitelist_file=None, black_regex=None, load_list=None):
# Handle missing .stestr.conf from users from before stestr migration
test_dir = None
top_dir = None
group_regex = None
if not os.path.isfile('.stestr.conf') and os.path.isfile('.testr.conf'):
msg = ('No .stestr.conf file found in the CWD. Please create one to '
'replace the .testr.conf file. You can find a script to do '
'this in the stestr git repository.')
warnings.warn(msg)
test_dir, top_dir, group_regex = _parse_testrconf()
elif not os.path.isfile(
'.testr.conf') and not os.path.isfile('.stestr.conf'):
msg = ('No .stestr.conf found, please create one.')
print(msg)
sys.exit(1)
regexes = None
if regex:
regexes = regex.split()
serial = not parallel
if list_tests:
# TODO(mtreinish): remove init call after list command detects and
# autocreates the repository
if not os.path.isdir('.stestr'):
commands.init_command()
return commands.list_command(filters=regexes)
return_code = commands.run_command(filters=regexes, subunit_out=subunit,
concurrency=concur, test_path=test_dir,
top_dir=top_dir,
group_regex=group_regex,
until_failure=until_failure,
serial=serial, pretty_out=pretty,
load_list=load_list,
blacklist_file=blacklist_file,
whitelist_file=whitelist_file,
black_regex=black_regex)
if slowest:
sys.stdout.write("\nSlowest Tests:\n")
commands.slowest_command()
return return_code
def call_subunit_run(test_id, pretty, subunit):
env = copy.deepcopy(os.environ)
cmd_save_results = ['stestr', 'load', '--subunit']
if not os.path.isdir('.stestr'):
commands.init_command()
if pretty:
# Use subunit run module
cmd = ['python', '-m', 'subunit.run', test_id]
ps = subprocess.Popen(cmd, env=env, stdout=subprocess.PIPE)
# Save subunit results via testr
pfile = subprocess.Popen(cmd_save_results, env=env,
stdin=ps.stdout, stdout=subprocess.PIPE)
ps.stdout.close()
# Transform output via subunit-trace
proc = subprocess.Popen(['subunit-trace', '--no-failure-debug', '-f'],
env=env, stdin=pfile.stdout)
pfile.stdout.close()
proc.communicate()
return proc.returncode
elif subunit:
sstdout = io.BytesIO()
subunit_run.main([sys.argv[0], test_id], sstdout)
pfile = subprocess.Popen(cmd_save_results, env=env,
stdin=subprocess.PIPE)
pfile.communicate(input=sstdout.getvalue())
else:
testtools_run.main([sys.argv[0], test_id], sys.stdout)
def _select_and_call_runner(opts, exclude_regex, others):
ec = 1
if not opts.no_discover and not opts.pdb:
ec = call_testr(exclude_regex, opts.subunit, opts.pretty, opts.list,
opts.slowest, opts.parallel, opts.concurrency,
opts.until_failure, opts.color, others,
blacklist_file=opts.blacklist_file,
whitelist_file=opts.whitelist_file,
black_regex=opts.black_regex)
else:
if others:
print('Unexpected arguments: ' + ' '.join(others))
return 2
test_to_run = opts.no_discover or opts.pdb
if test_to_run.find('/') != -1:
test_to_run = rb.path_to_regex(test_to_run)
ec = call_subunit_run(test_to_run, opts.pretty, opts.subunit)
return ec
def ostestr(args):
msg = ('Deprecated: the ostestr command is deprecated. Use the stestr '
'command instead. For more information: '
'https://docs.openstack.org/os-testr/latest/user/ostestr.html')
warnings.warn(msg)
opts, others = get_parser(args)
if opts.pretty and opts.subunit:
msg = ('Subunit output and pretty output cannot be specified at the '
'same time')
print(msg)
return 2
if opts.list and opts.no_discover:
msg = ('you can not list tests when you are bypassing discovery to '
'run a single test')
print(msg)
return 3
if not opts.parallel and opts.concurrency:
msg = "You can't specify a concurrency to use when running serially"
print(msg)
return 4
if (opts.pdb or opts.no_discover) and opts.until_failure:
msg = "You can not use until_failure mode with pdb or no-discover"
print(msg)
return 5
if ((opts.pdb or opts.no_discover) and
(opts.blacklist_file or opts.whitelist_file)):
msg = "You can not use blacklist or whitelist with pdb or no-discover"
print(msg)
return 6
if ((opts.pdb or opts.no_discover) and (opts.black_regex)):
msg = "You can not use black-regex with pdb or no-discover"
print(msg)
return 7
if opts.path:
regex = rb.path_to_regex(opts.path)
else:
regex = opts.regex
return _select_and_call_runner(opts, regex, others)
def main():
exit(ostestr(sys.argv[1:]))
if __name__ == '__main__':
main()

@@ -1,116 +0,0 @@
# Copyright 2016 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import os
import subprocess
def _get_test_list(regex, env=None):
env = env or copy.deepcopy(os.environ)
testr_args = ['stestr', 'list']
if regex:
testr_args.append(regex)
proc = subprocess.Popen(testr_args, env=env,
stdout=subprocess.PIPE, universal_newlines=True)
out = proc.communicate()[0]
raw_test_list = out.split('\n')
bad = False
test_list = []
exclude_list = ['OS_', 'CAPTURE', 'TEST_TIMEOUT', 'PYTHON',
'subunit.run discover']
for line in raw_test_list:
for exclude in exclude_list:
if exclude in line or not line:
bad = True
break
if not bad:
test_list.append(line)
bad = False
return test_list
def print_skips(regex, message):
test_list = _get_test_list(regex)
if test_list:
if message:
print(message)
else:
print('Skipped because of regex %s:' % regex)
for test in test_list:
print(test)
# Extra whitespace to separate
print('\n')
def path_to_regex(path):
root, _ = os.path.splitext(path)
return root.replace('/', '.')
def get_regex_from_whitelist_file(file_path):
lines = []
with open(file_path) as white_file:
for line in white_file.read().splitlines():
split_line = line.strip().split('#')
# Before the # is the regex
line_regex = split_line[0].strip()
if line_regex:
lines.append(line_regex)
return '|'.join(lines)
def get_regex_from_blacklist_file(file_path, print_exclude=False):
exclude_regex = ''
with open(file_path, 'r') as black_file:
exclude_regex = ''
for line in black_file:
raw_line = line.strip()
split_line = raw_line.split('#')
# Before the # is the regex
line_regex = split_line[0].strip()
if len(split_line) > 1:
# After the # is a comment
comment = split_line[1].strip()
else:
comment = ''
if line_regex:
if print_exclude:
print_skips(line_regex, comment)
if exclude_regex:
exclude_regex = '|'.join([line_regex, exclude_regex])
else:
exclude_regex = line_regex
if exclude_regex:
exclude_regex = "(?!" + exclude_regex + ")"
return exclude_regex
def construct_regex(blacklist_file, whitelist_file, regex, print_exclude):
"""Deprecated, please use testlist_builder.construct_list instead."""
bregex = ''
wregex = ''
pregex = ''
if blacklist_file:
bregex = get_regex_from_blacklist_file(blacklist_file, print_exclude)
if whitelist_file:
wregex = get_regex_from_whitelist_file(whitelist_file)
if regex:
pregex = regex
combined_regex = '^%s.*(%s).*$' % (bregex, '|'.join(
filter(None, [pregex, wregex])
))
return combined_regex

@@ -1,107 +0,0 @@
# Copyright 2016 RedHat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from os_testr import regex_builder
import re
def black_reader(blacklist_file):
black_file = open(blacklist_file, 'r')
regex_comment_lst = []  # tuple of (compiled_regex, msg, skipped_lst)
for line in black_file:
raw_line = line.strip()
split_line = raw_line.split('#')
# Before the # is the regex
line_regex = split_line[0].strip()
if len(split_line) > 1:
# After the # is a comment
comment = ''.join(split_line[1:]).strip()
else:
comment = 'Skipped because of regex %s:' % line_regex
if not line_regex:
continue
regex_comment_lst.append((re.compile(line_regex), comment, []))
return regex_comment_lst
def print_skips(regex, message, test_list):
if message:
print(message)
else:
print('Skipped because of regex %s:' % regex)
for test in test_list:
print(test)
# Extra whitespace to separate
print('\n')
def construct_list(blacklist_file, whitelist_file, regex, black_regex,
print_exclude):
"""Filters the discovered test cases
:retrun: iterable of strings. The strings are full
test cases names, including tags like.:
"project.api.TestClass.test_case[positive]"
"""
if not regex:
regex = '' # handle the other false things
if whitelist_file:
white_re = regex_builder.get_regex_from_whitelist_file(whitelist_file)
else:
white_re = ''
if not regex and white_re:
regex = white_re
elif regex and white_re:
regex = '|'.join((regex, white_re))
if blacklist_file:
black_data = black_reader(blacklist_file)
else:
black_data = None
if black_regex:
msg = "Skipped because of regex provided as a command line argument:"
record = (re.compile(black_regex), msg, [])
if black_data:
black_data.append(record)
else:
black_data = [record]
search_filter = re.compile(regex)
# NOTE(afazekas): we do not want to pass a giant regex
# to an external application due to the arg length limitations
list_of_test_cases = [test_case for test_case in
regex_builder._get_test_list('')
if search_filter.search(test_case)]
set_of_test_cases = set(list_of_test_cases)
if not black_data:
return set_of_test_cases
# NOTE(afazekas): We might use a faster logic when the
# print option is not requested
for (rex, msg, s_list) in black_data:
for test_case in list_of_test_cases:
if rex.search(test_case):
# NOTE(mtreinish): In the case of overlapping regex the test
# case might have already been removed from the set of tests
if test_case in set_of_test_cases:
set_of_test_cases.remove(test_case)
s_list.append(test_case)
if print_exclude:
for (rex, msg, s_list) in black_data:
if s_list:
print_skips(rex, msg, s_list)
return set_of_test_cases

@@ -1,253 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
test_os_testr
----------------------------------
Tests for `os_testr` module.
"""
import io
from unittest import mock
from os_testr import ostestr as os_testr
from os_testr.tests import base
class TestGetParser(base.TestCase):
def test_pretty(self):
namespace = os_testr.get_parser(['--pretty'])
self.assertEqual(True, namespace[0].pretty)
namespace = os_testr.get_parser(['--no-pretty'])
self.assertEqual(False, namespace[0].pretty)
self.assertRaises(SystemExit, os_testr.get_parser,
['--no-pretty', '--pretty'])
def test_slowest(self):
namespace = os_testr.get_parser(['--slowest'])
self.assertEqual(True, namespace[0].slowest)
namespace = os_testr.get_parser(['--no-slowest'])
self.assertEqual(False, namespace[0].slowest)
self.assertRaises(SystemExit, os_testr.get_parser,
['--no-slowest', '--slowest'])
def test_parallel(self):
namespace = os_testr.get_parser(['--parallel'])
self.assertEqual(True, namespace[0].parallel)
namespace = os_testr.get_parser(['--serial'])
self.assertEqual(False, namespace[0].parallel)
self.assertRaises(SystemExit, os_testr.get_parser,
['--parallel', '--serial'])
class TestCallers(base.TestCase):
def test_no_discover(self):
namespace = os_testr.get_parser(['-n', 'project.tests.foo'])
def _fake_exit(arg):
self.assertTrue(arg)
def _fake_run(*args, **kwargs):
return 'project.tests.foo' in args
with mock.patch.object(os_testr, 'exit', side_effect=_fake_exit), \
mock.patch.object(os_testr,
'get_parser',
return_value=namespace), \
mock.patch.object(os_testr,
'call_subunit_run',
side_effect=_fake_run):
os_testr.main()
def test_no_discover_path(self):
namespace = os_testr.get_parser(['-n', 'project/tests/foo'])
def _fake_exit(arg):
self.assertTrue(arg)
def _fake_run(*args, **kwargs):
return 'project.tests.foo' in args
with mock.patch.object(os_testr, 'exit', side_effect=_fake_exit), \
mock.patch.object(os_testr,
'get_parser',
return_value=namespace), \
mock.patch.object(os_testr,
'call_subunit_run',
side_effect=_fake_run):
os_testr.main()
def test_pdb(self):
namespace = os_testr.get_parser(['--pdb', 'project.tests.foo'])
def _fake_exit(arg):
self.assertTrue(arg)
def _fake_run(*args, **kwargs):
return 'project.tests.foo' in args
with mock.patch.object(os_testr, 'exit', side_effect=_fake_exit), \
mock.patch.object(os_testr,
'get_parser',
return_value=namespace), \
mock.patch.object(os_testr,
'call_subunit_run',
side_effect=_fake_run):
os_testr.main()
def test_pdb_path(self):
namespace = os_testr.get_parser(['--pdb', 'project/tests/foo'])
def _fake_exit(arg):
self.assertTrue(arg)
def _fake_run(*args, **kwargs):
return 'project.tests.foo' in args
with mock.patch.object(os_testr, 'exit', side_effect=_fake_exit), \
mock.patch.object(os_testr,
'get_parser',
return_value=namespace), \
mock.patch.object(os_testr,
'call_subunit_run',
side_effect=_fake_run):
os_testr.main()
def test_call_subunit_run_pretty(self):
'''Test call_subunit_run
Test ostestr call_subunit_run function when:
Pretty is True
'''
pretty = True
subunit = False
with mock.patch('subprocess.Popen', autospec=True) as mock_popen:
mock_popen.return_value.returncode = 0
mock_popen.return_value.stdout = io.BytesIO()
os_testr.call_subunit_run('project.tests.foo', pretty, subunit)
# Validate Popen was called three times
self.assertTrue(mock_popen.called, 'Popen was never called')
count = mock_popen.call_count
self.assertEqual(3, count, 'Popen was called %s'
' instead of 3 times' % count)
# Validate Popen called the right functions
called = mock_popen.call_args_list
msg = "Function %s not called"
function = ['python', '-m', 'subunit.run', 'project.tests.foo']
self.assertIn(function, called[0][0], msg % 'subunit.run')
function = ['stestr', 'load', '--subunit']
self.assertIn(function, called[1][0], msg % 'testr load')
function = ['subunit-trace', '--no-failure-debug', '-f']
self.assertIn(function, called[2][0], msg % 'subunit-trace')
def test_call_subunit_run_sub(self):
'''Test call_subunit run
Test ostestr call_subunit_run function when:
Pretty is False and Subunit is True
'''
pretty = False
subunit = True
with mock.patch('subprocess.Popen', autospec=True) as mock_popen:
os_testr.call_subunit_run('project.tests.foo', pretty, subunit)
# Validate Popen was called once
self.assertTrue(mock_popen.called, 'Popen was never called')
count = mock_popen.call_count
self.assertEqual(1, count, 'Popen was called more than once')
# Validate Popen called the right function
called = mock_popen.call_args
function = ['stestr', 'load', '--subunit']
self.assertIn(function, called[0], "testr load not called")
def test_call_subunit_run_testtools(self):
'''Test call_subunit_run
Test ostestr call_subunit_run function when:
Pretty is False and Subunit is False
'''
pretty = False
subunit = False
with mock.patch('testtools.run.main', autospec=True) as mock_run:
os_testr.call_subunit_run('project.tests.foo', pretty, subunit)
            # Validate testtools.run was called once
self.assertTrue(mock_run.called, 'testtools.run was never called')
count = mock_run.call_count
self.assertEqual(1, count, 'testtools.run called more than once')
def test_parse_legacy_testrconf_discover(self):
'''Test _parse_testrconf
Test ostestr _parse_testrconf function when:
-t is not specified and discover is specified
'''
testrconf_data = u"""
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover mytestdir \
$LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
group_regex=([^\\.]+\\.)+
"""
with io.StringIO() as testrconf_data_file:
testrconf_data_file.write(testrconf_data)
testrconf_data_file.seek(0)
with mock.patch('six.moves.builtins.open',
return_value=testrconf_data_file, autospec=True):
parsed_values = os_testr._parse_testrconf()
# validate the discovery of the options from the legacy
# .testr.conf
self.assertEqual(parsed_values, ('mytestdir', None,
r'([^\.]+\.)+'))
def test_parse_legacy_testrconf_topdir(self):
'''Test parse_testrconf
Test ostestr _parse_testrconf function when:
-t is specified
'''
testrconf_data = u"""
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover -t .. mytestdir \
$LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
group_regex=([^\\.]+\\.)+
"""
with io.StringIO() as testrconf_data_file:
testrconf_data_file.write(testrconf_data)
testrconf_data_file.seek(0)
with mock.patch('six.moves.builtins.open',
return_value=testrconf_data_file, autospec=True):
parsed_values = os_testr._parse_testrconf()
# validate the discovery of the options from the legacy
# .testr.conf
self.assertEqual(parsed_values, ('mytestdir', '..',
r'([^\.]+\.)+'))
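The two `_parse_testrconf` tests above recover three values from a legacy `.testr.conf`: the discover directory, an optional `-t <topdir>`, and `group_regex`. A hand-rolled sketch of that extraction (the parsing regex here is illustrative, not the deleted implementation's own):

```python
import io
import re

# Legacy .testr.conf fragment matching the topdir test case above.
conf = """\
[DEFAULT]
test_command=${PYTHON:-python} -m subunit.run discover -t .. mytestdir $LISTOPT $IDOPTION
group_regex=([^\\.]+\\.)+
"""

test_dir = top_dir = group_regex = None
for line in io.StringIO(conf):
    # Pull the discover directory and optional -t topdir off test_command.
    m = re.search(r'discover\s+(?:-t\s+(\S+)\s+)?(\S+)', line)
    if m:
        top_dir, test_dir = m.group(1), m.group(2)
    if line.startswith('group_regex='):
        group_regex = line.split('=', 1)[1].strip()

print((test_dir, top_dir, group_regex))  # -> ('mytestdir', '..', '([^\\.]+\\.)+')
```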

@ -1,237 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import io
from unittest import mock
from os_testr import regex_builder as os_testr
from os_testr.tests import base
class TestPathToRegex(base.TestCase):
def test_file_name(self):
result = os_testr.path_to_regex("tests/network/v2/test_net.py")
self.assertEqual("tests.network.v2.test_net", result)
result = os_testr.path_to_regex("openstack/tests/network/v2")
self.assertEqual("openstack.tests.network.v2", result)
class TestConstructRegex(base.TestCase):
def test_regex_passthrough(self):
result = os_testr.construct_regex(None, None, 'fake_regex', False)
self.assertEqual(result, '^.*(fake_regex).*$')
def test_blacklist_regex_with_comments(self):
with io.StringIO() as blacklist_file:
for i in range(4):
blacklist_file.write(u'fake_regex_%s # A Comment\n' % i)
blacklist_file.seek(0)
with mock.patch('six.moves.builtins.open',
return_value=blacklist_file):
result = os_testr.construct_regex(
'fake_path', None, None, False)
self.assertEqual(result, "^(?!fake_regex_3|fake_regex_2|"
"fake_regex_1|fake_regex_0).*().*$")
def test_whitelist_regex_with_comments(self):
with io.StringIO() as whitelist_file:
for i in range(4):
whitelist_file.write(u'fake_regex_%s # A Comment\n' % i)
whitelist_file.seek(0)
with mock.patch('six.moves.builtins.open',
return_value=whitelist_file):
result = os_testr.construct_regex(
None, 'fake_path', None, False)
self.assertEqual(
result,
"^.*(fake_regex_0|fake_regex_1|fake_regex_2|fake_regex_3).*$")
def test_blacklist_regex_without_comments(self):
with io.StringIO() as blacklist_file:
for i in range(4):
blacklist_file.write(u'fake_regex_%s\n' % i)
blacklist_file.seek(0)
with mock.patch('six.moves.builtins.open',
return_value=blacklist_file):
result = os_testr.construct_regex(
'fake_path', None, None, False)
self.assertEqual(result, "^(?!fake_regex_3|fake_regex_2|"
"fake_regex_1|fake_regex_0).*().*$")
def test_blacklist_regex_with_comments_and_regex(self):
with io.StringIO() as blacklist_file:
for i in range(4):
blacklist_file.write(u'fake_regex_%s # Comments\n' % i)
blacklist_file.seek(0)
with mock.patch('six.moves.builtins.open',
return_value=blacklist_file):
result = os_testr.construct_regex('fake_path', None,
'fake_regex', False)
expected_regex = (
"^(?!fake_regex_3|fake_regex_2|fake_regex_1|"
"fake_regex_0).*(fake_regex).*$")
self.assertEqual(result, expected_regex)
def test_blacklist_regex_without_comments_and_regex(self):
with io.StringIO() as blacklist_file:
for i in range(4):
blacklist_file.write(u'fake_regex_%s\n' % i)
blacklist_file.seek(0)
with mock.patch('six.moves.builtins.open',
return_value=blacklist_file):
result = os_testr.construct_regex('fake_path', None,
'fake_regex', False)
expected_regex = (
"^(?!fake_regex_3|fake_regex_2|fake_regex_1|"
"fake_regex_0).*(fake_regex).*$")
self.assertEqual(result, expected_regex)
@mock.patch.object(os_testr, 'print_skips')
def test_blacklist_regex_with_comment_print_skips(self, print_mock):
with io.StringIO() as blacklist_file:
for i in range(4):
blacklist_file.write(u'fake_regex_%s # Comment\n' % i)
blacklist_file.seek(0)
with mock.patch('six.moves.builtins.open',
return_value=blacklist_file):
result = os_testr.construct_regex('fake_path', None,
None, True)
expected_regex = ("^(?!fake_regex_3|fake_regex_2|fake_regex_1|"
"fake_regex_0).*().*$")
self.assertEqual(result, expected_regex)
calls = print_mock.mock_calls
self.assertEqual(len(calls), 4)
args = list(map(lambda x: x[1], calls))
self.assertIn(('fake_regex_0', 'Comment'), args)
self.assertIn(('fake_regex_1', 'Comment'), args)
self.assertIn(('fake_regex_2', 'Comment'), args)
self.assertIn(('fake_regex_3', 'Comment'), args)
@mock.patch.object(os_testr, 'print_skips')
def test_blacklist_regex_without_comment_print_skips(self, print_mock):
with io.StringIO() as blacklist_file:
for i in range(4):
blacklist_file.write(u'fake_regex_%s\n' % i)
blacklist_file.seek(0)
with mock.patch('six.moves.builtins.open',
return_value=blacklist_file):
result = os_testr.construct_regex('fake_path', None,
None, True)
expected_regex = ("^(?!fake_regex_3|fake_regex_2|"
"fake_regex_1|fake_regex_0).*().*$")
self.assertEqual(result, expected_regex)
calls = print_mock.mock_calls
self.assertEqual(len(calls), 4)
args = list(map(lambda x: x[1], calls))
self.assertIn(('fake_regex_0', ''), args)
self.assertIn(('fake_regex_1', ''), args)
self.assertIn(('fake_regex_2', ''), args)
self.assertIn(('fake_regex_3', ''), args)
def test_whitelist_regex_without_comments_and_regex_passthrough(self):
file_contents = u"""regex_a
regex_b"""
with io.StringIO() as whitelist_file:
whitelist_file.write(file_contents)
whitelist_file.seek(0)
with mock.patch('six.moves.builtins.open',
return_value=whitelist_file):
result = os_testr.construct_regex(None, 'fake_path',
None, False)
expected_regex = '^.*(regex_a|regex_b).*$'
self.assertEqual(result, expected_regex)
def test_whitelist_regex_without_comments_with_regex_passthrough(self):
file_contents = u"""regex_a
regex_b"""
with io.StringIO() as whitelist_file:
whitelist_file.write(file_contents)
whitelist_file.seek(0)
with mock.patch('six.moves.builtins.open',
return_value=whitelist_file):
result = os_testr.construct_regex(None, 'fake_path',
'fake_regex', False)
expected_regex = '^.*(fake_regex|regex_a|regex_b).*$'
self.assertEqual(result, expected_regex)
def test_blacklist_whitelist_and_regex_passthrough_at_once(self):
with io.StringIO() as blacklist_file, io.StringIO() as whitelist_file:
for i in range(4):
blacklist_file.write(u'fake_regex_%s\n' % i)
blacklist_file.seek(0)
whitelist_file.write(u'regex_a\n')
whitelist_file.write(u'regex_b\n')
whitelist_file.seek(0)
with mock.patch('six.moves.builtins.open',
side_effect=[blacklist_file, whitelist_file]):
result = os_testr.construct_regex('fake_path_1', 'fake_path_2',
'fake_regex', False)
expected_regex = (
"^(?!fake_regex_3|fake_regex_2|fake_regex_1|"
"fake_regex_0).*(fake_regex|regex_a|regex_b).*$")
self.assertEqual(result, expected_regex)
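The `construct_regex` tests above all assert the same filter shape: blacklisted patterns land in a negative lookahead, whitelisted ones (plus any passthrough regex) in an alternation. A minimal approximation of that assembly, assuming pre-read pattern lists rather than the deleted file-reading code:

```python
import re

def build_filter(blacklist, whitelist, extra):
    # Excluded patterns become a leading negative lookahead; included
    # patterns (passthrough regex first) become the match alternation.
    exclude = '(?!%s)' % '|'.join(blacklist) if blacklist else ''
    include = '|'.join([extra] + whitelist if extra else whitelist)
    return '^%s.*(%s).*$' % (exclude, include)

regex = build_filter(['fake_regex_0', 'fake_regex_1'],
                     ['regex_a', 'regex_b'], 'fake_regex')
print(regex)
# -> ^(?!fake_regex_0|fake_regex_1).*(fake_regex|regex_a|regex_b).*$
assert re.match(regex, 'regex_a_test')
assert re.match(regex, 'fake_regex_0_test') is None
```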
class TestGetRegexFromListFile(base.TestCase):
def test_get_regex_from_whitelist_file(self):
file_contents = u"""regex_a
regex_b"""
with io.StringIO() as whitelist_file:
whitelist_file.write(file_contents)
whitelist_file.seek(0)
with mock.patch('six.moves.builtins.open',
return_value=whitelist_file):
regex = os_testr.get_regex_from_whitelist_file(
'/path/to/not_used')
self.assertEqual('regex_a|regex_b', regex)
def test_get_regex_from_blacklist_file(self):
with io.StringIO() as blacklist_file:
for i in range(4):
blacklist_file.write(u'fake_regex_%s\n' % i)
blacklist_file.seek(0)
with mock.patch('six.moves.builtins.open',
return_value=blacklist_file):
regex = os_testr.get_regex_from_blacklist_file(
'/path/to/not_used')
self.assertEqual('(?!fake_regex_3|fake_regex_2'
'|fake_regex_1|fake_regex_0)', regex)
class TestGetTestList(base.TestCase):
def test__get_test_list(self):
test_list = os_testr._get_test_list('test__get_test_list')
self.assertIn('test__get_test_list', test_list[0])
def test__get_test_list_regex_is_empty(self):
test_list = os_testr._get_test_list('')
self.assertIn('', test_list[0])
def test__get_test_list_regex_is_none(self):
test_list = os_testr._get_test_list(None)
# NOTE(masayukig): We should get all of the tests. So we should have
# more than one test case.
self.assertGreater(len(test_list), 1)
self.assertIn('os_testr.tests.test_regex_builder.'
'TestGetTestList.test__get_test_list_regex_is_none',
test_list)

@ -1,107 +0,0 @@
# Copyright 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import shutil
import subprocess
import tempfile
import testtools
from os_testr.tests import base
from six import StringIO
DEVNULL = open(os.devnull, 'wb')
class TestReturnCodes(base.TestCase):
def setUp(self):
super(TestReturnCodes, self).setUp()
# Setup test dirs
self.directory = tempfile.mkdtemp(prefix='ostestr-unit')
self.addCleanup(shutil.rmtree, self.directory)
self.test_dir = os.path.join(self.directory, 'tests')
os.mkdir(self.test_dir)
# Setup Test files
self.testr_conf_file = os.path.join(self.directory, '.stestr.conf')
self.setup_cfg_file = os.path.join(self.directory, 'setup.cfg')
self.passing_file = os.path.join(self.test_dir, 'test_passing.py')
self.failing_file = os.path.join(self.test_dir, 'test_failing.py')
self.init_file = os.path.join(self.test_dir, '__init__.py')
self.setup_py = os.path.join(self.directory, 'setup.py')
shutil.copy('os_testr/tests/files/stestr-conf', self.testr_conf_file)
shutil.copy('os_testr/tests/files/passing-tests', self.passing_file)
shutil.copy('os_testr/tests/files/failing-tests', self.failing_file)
shutil.copy('setup.py', self.setup_py)
shutil.copy('os_testr/tests/files/setup.cfg', self.setup_cfg_file)
shutil.copy('os_testr/tests/files/__init__.py', self.init_file)
self.stdout = StringIO()
self.stderr = StringIO()
# Change directory, run wrapper and check result
self.addCleanup(os.chdir, os.path.abspath(os.curdir))
os.chdir(self.directory)
def assertRunExit(self, cmd, expected, subunit=False):
p = subprocess.Popen(
"%s" % cmd, shell=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if not subunit:
self.assertEqual(
p.returncode, expected,
"Stdout: %s; Stderr: %s" % (out, err))
else:
self.assertEqual(p.returncode, expected,
"Expected return code: %s doesn't match actual "
"return code of: %s" % (expected, p.returncode))
def test_default_passing(self):
self.assertRunExit('ostestr --regex passing', 0)
def test_default_fails(self):
self.assertRunExit('ostestr', 1)
def test_default_passing_no_slowest(self):
self.assertRunExit('ostestr --no-slowest --regex passing', 0)
def test_default_fails_no_slowest(self):
self.assertRunExit('ostestr --no-slowest', 1)
def test_default_serial_passing(self):
self.assertRunExit('ostestr --serial --regex passing', 0)
def test_default_serial_fails(self):
self.assertRunExit('ostestr --serial', 1)
def test_testr_subunit_passing(self):
self.assertRunExit('ostestr --no-pretty --subunit --regex passing', 0,
subunit=True)
@testtools.skip('Skipped because of testrepository lp bug #1411804')
def test_testr_subunit_fails(self):
self.assertRunExit('ostestr --no-pretty --subunit', 1, subunit=True)
def test_testr_no_pretty_passing(self):
self.assertRunExit('ostestr --no-pretty --regex passing', 0)
def test_testr_no_pretty_fails(self):
self.assertRunExit('ostestr --no-pretty', 1)
def test_list(self):
self.assertRunExit('ostestr --list', 0)
def test_no_test(self):
self.assertRunExit('ostestr --regex a --black-regex a', 1)

@ -1,138 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re
from unittest import mock
import six
from os_testr import testlist_builder as list_builder
from os_testr.tests import base
class TestBlackReader(base.TestCase):
def test_black_reader(self):
blacklist_file = six.StringIO()
for i in range(4):
blacklist_file.write('fake_regex_%s\n' % i)
blacklist_file.write('fake_regex_with_note_%s # note\n' % i)
blacklist_file.seek(0)
with mock.patch('six.moves.builtins.open',
return_value=blacklist_file):
result = list_builder.black_reader('fake_path')
self.assertEqual(2 * 4, len(result))
note_cnt = 0
# not assuming ordering, mainly just testing the type
for r in result:
self.assertEqual(r[2], [])
if r[1] == 'note':
note_cnt += 1
self.assertIn('search', dir(r[0])) # like a compiled regex
self.assertEqual(note_cnt, 4)
class TestConstructList(base.TestCase):
def test_simple_re(self):
test_lists = ['fake_test(scen)[tag,bar])', 'fake_test(scen)[egg,foo])']
with mock.patch('os_testr.regex_builder._get_test_list',
return_value=test_lists):
result = list_builder.construct_list(None,
None,
'foo',
None,
False)
self.assertEqual(list(result), ['