Initial submission for StarlingX pytest framework.

Include:
- util modules, such as table_parser, ssh/localhost clients, cli module,
  exceptions, logger, etc. Util modules are mostly used by keywords.
- keywords modules. These are helper functions that are used directly by
  test functions.
- platform (with platform or platform_sanity marker) and stx-openstack
  (with sanity, sx_sanity, cpe_sanity, or storage_sanity marker) sanity
  test cases
- pytest config conftest, and test fixture modules
- test config file template/example

Required packages:
- python3.4 or python3.5
- pytest >=3.10,<4.0
- pexpect
- requests
- pyyaml
- selenium (firefox, ffmpeg, pyvirtualdisplay, Xvfb or Xephyr or Xvnc)

Limitations:
- Anything that requires copying from Test File Server will not work
  until a public share is configured to share test files. Tests are
  skipped for now.

Co-Authored-By: Maria Yousaf <maria.yousaf@windriver.com>
Co-Authored-By: Marvin Huang <marvin.huang@windriver.com>
Co-Authored-By: Yosief Gebremariam <yosief.gebremariam@windriver.com>
Co-Authored-By: Paul Warner <paul.warner@windriver.com>
Co-Authored-By: Xueguang Ma <Xueguang.Ma@windriver.com>
Co-Authored-By: Charles Chen <charles.chen@windriver.com>
Co-Authored-By: Daniel Graziano <Daniel.Graziano@windriver.com>
Co-Authored-By: Jordan Li <jordan.li@windriver.com>
Co-Authored-By: Nimalini Rasa <nimalini.rasa@windriver.com>
Co-Authored-By: Senthil Mukundakumar <senthil.mukundakumar@windriver.com>
Co-Authored-By: Anujeyan Manokeran <anujeyan.manokeran@windriver.com>
Co-Authored-By: Peng Peng <peng.peng@windriver.com>
Co-Authored-By: Chris Winnicki <chris.winnicki@windriver.com>
Co-Authored-By: Joe Vimar <Joe.Vimar@windriver.com>
Co-Authored-By: Alex Kozyrev <alex.kozyrev@windriver.com>
Co-Authored-By: Jack Ding <jack.ding@windriver.com>
Co-Authored-By: Ming Lei <ming.lei@windriver.com>
Co-Authored-By: Ankit Jain <ankit.jain@windriver.com>
Co-Authored-By: Eric Barrett <eric.barrett@windriver.com>
Co-Authored-By: William Jia <william.jia@windriver.com>
Co-Authored-By: Joseph Richard <Joseph.Richard@windriver.com>
Co-Authored-By: Aldo Mcfarlane <aldo.mcfarlane@windriver.com>

Story: 2005892
Task: 33750

Signed-off-by: Yang Liu <yang.liu@windriver.com>
Change-Id: I7a88a47e09733d39f024144530f5abb9aee8cad2
parent d999d831d9
commit 33756ac899
26  README.rst
@@ -1,5 +1,25 @@
========
stx-test
========

StarlingX Test repository for manual and automated test cases.


Contribute
----------

- Clone the repo
- Gerrit hook needs to be added for code review purposes.

.. code-block:: bash

    # Generate an ssh key if needed
    ssh-keygen -t rsa -C "<your email address>"
    ssh-add $private_keyfile_path

    # add ssh key to settings https://review.opendev.org/#/q/project:starlingx/test
    cd <stx-test repo>
    git remote add gerrit ssh://<your gerrit username>@review.opendev.org/starlingx/test.git
    git review -s

- When you are ready, create your commit with a detailed commit message, and submit for review (see the example below).
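
For example, a typical submission flow looks like this (illustrative only;
the files and commit message are up to you, and ``-s`` adds the
Signed-off-by line):

.. code-block:: bash

    git add <changed files>
    git commit -s
    git review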

@@ -0,0 +1,76 @@
====================================
StarlingX Integration Test Framework
====================================

The project contains integration test cases that can be executed on an
installed and configured StarlingX system.

Supported test cases:

- CLI tests over SSH connection to StarlingX system via OAM floating IP
- Platform RestAPI test cases via external endpoints
- Horizon test cases


Packages Required
-----------------
- python >= 3.4.3, < 3.7
- pytest >= 3.1.0, < 4.0
- pexpect
- pyyaml
- requests (used by RestAPI test cases only)
- selenium (used by Horizon test cases only)
- Firefox (used by Horizon test cases only)
- pyvirtualdisplay (used by Horizon test cases only)
- ffmpeg (used by Horizon test cases only)
- Xvfb or Xephyr or Xvnc (used by pyvirtualdisplay for Horizon test cases only)
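
The Python packages can be installed with pip, for example (a minimal
sketch; Firefox, ffmpeg, and Xvfb/Xephyr/Xvnc come from the OS package
manager instead of pip):

.. code-block:: bash

    # Assumes pip3 belongs to a python3.4+ interpreter
    pip3 install 'pytest>=3.1.0,<4.0' pexpect pyyaml requests selenium pyvirtualdisplay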

Setup Test Tool
---------------
This is an off-box test tool that needs to be set up once on a Linux server
that can reach the StarlingX system under test (such as SSH to the STX
system, send/receive RestAPI requests, open the Horizon page).

- Install the packages listed above
- Clone the stx-test repo
- Add the absolute path of automated-pytest-suite to the PYTHONPATH
  environment variable, as shown in the sketch below
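
For example (a sketch; the clone URL is the project's repository on
opendev.org):

.. code-block:: bash

    git clone https://opendev.org/starlingx/test.git
    export PYTHONPATH=${PYTHONPATH}:$(pwd)/test/automated-pytest-suite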

Execute Test Cases
------------------
Precondition: the STX system under test should be installed and configured.

- | Customized config can be provided via --testcase-config <config_file>.
  | A config template can be found at ${project_root}/stx-test_template.conf.
- Test cases can be selected by specifying -m <markers>
- | If stx-openstack is not deployed, a platform-specific marker should be
  | specified, e.g., -m "platform_sanity or platform"
- | Automation logs will be created in the ${HOME}/AUTOMATION_LOGS directory
  | by default. The log directory can also be specified with the
  | --resultlog=${LOG_DIR} commandline option
- Examples:

.. code-block:: bash

    export project_root=<automated-pytest-suite dir>

    # Include $project_root in PYTHONPATH if not already done
    export PYTHONPATH=${PYTHONPATH}:${project_root}

    cd $project_root

    # Example 1: Run all platform_sanity test cases under testcases/
    pytest -m platform_sanity --testcase-config=~/my_config.conf testcases/

    # Example 2: Run platform_sanity or sanity (requires stx-openstack) test
    # cases on a StarlingX virtual box system that is already saved in
    # consts/lab.py, and save automation logs to /tmp/AUTOMATION_LOGS
    pytest --resultlog=/tmp/ -m "platform_sanity or sanity" --lab=vbox --natbox=localhost testcases/

    # Example 3: List (not execute) the test cases with "migrate" in the name
    pytest --collect-only -k "migrate" --lab=<stx_oam_fip> testcases/


Contribute
----------

- Use python3.4 when developing, to avoid producing code that is
  incompatible with python3.4; see the sketch below.
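
For example, a virtualenv pinned to python3.4 catches incompatible code
early (a sketch, assuming python3.4 and virtualenv are installed):

.. code-block:: bash

    virtualenv -p python3.4 .venv
    source .venv/bin/activate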

@@ -0,0 +1,693 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import logging
import os
from time import strftime, gmtime
# import threading    # Used for formatting logger


import pytest  # Don't remove. Used in eval

import setups
from consts.proj_vars import ProjVar
from utils.tis_log import LOG
from utils import parse_log

tc_start_time = None
tc_end_time = None
has_fail = False
repeat_count = -1
stress_count = -1
count = -1
no_teardown = False
tracebacks = []
region = None
test_count = 0
console_log = True

################################
# Process and log test results #
################################


class MakeReport:
    nodeid = None
    instances = {}

    def __init__(self, item):
        MakeReport.nodeid = item.nodeid
        self.test_pass = None
        self.test_results = {}
        MakeReport.instances[item.nodeid] = self

    def update_results(self, call, report):
        if report.failed:
            global has_fail
            has_fail = True
            msg = "***Failure at test {}: {}".format(call.when, call.excinfo)
            print(msg)
            LOG.debug(msg + "\n***Details: {}".format(report.longrepr))
            global tracebacks
            tracebacks.append(str(report.longrepr))
            self.test_results[call.when] = ['Failed', call.excinfo]
        elif report.skipped:
            sep = 'Skipped: '
            skipreason_list = str(call.excinfo).split(sep=sep)[1:]
            skipreason_str = sep.join(skipreason_list)
            self.test_results[call.when] = ['Skipped', skipreason_str]
        elif report.passed:
            self.test_results[call.when] = ['Passed', '']

    def get_results(self):
        return self.test_results

    @classmethod
    def get_report(cls, item):
        if item.nodeid == cls.nodeid:
            return cls.instances[cls.nodeid]
        else:
            return cls(item)

class TestRes:
    PASSNUM = 0
    FAILNUM = 0
    SKIPNUM = 0
    TOTALNUM = 0


def _write_results(res_in_tests, test_name):
    global tc_start_time
    with open(ProjVar.get_var("TCLIST_PATH"), mode='a') as f:
        f.write('\n{}\t{}\t{}'.format(res_in_tests, tc_start_time, test_name))
    global test_count
    test_count += 1
    # reset tc_start and end time for next test case
    tc_start_time = None

def pytest_runtest_makereport(item, call, __multicall__):
    report = __multicall__.execute()
    my_rep = MakeReport.get_report(item)
    my_rep.update_results(call, report)

    test_name = item.nodeid.replace('::()::',
                                    '::')  # .replace('testcases/', '')
    res_in_tests = ''
    res = my_rep.get_results()

    # Write final result to test_results.log
    if report.when == 'teardown':
        res_in_log = 'Test Passed'
        fail_at = []
        for key, val in res.items():
            if val[0] == 'Failed':
                fail_at.append('test ' + key)
            elif val[0] == 'Skipped':
                res_in_log = 'Test Skipped\nReason: {}'.format(val[1])
                res_in_tests = 'SKIP'
                break
        if fail_at:
            fail_at = ', '.join(fail_at)
            res_in_log = 'Test Failed at {}'.format(fail_at)

        # Log test result
        testcase_log(msg=res_in_log, nodeid=test_name, log_type='tc_res')

        if 'Test Passed' in res_in_log:
            res_in_tests = 'PASS'
        elif 'Test Failed' in res_in_log:
            res_in_tests = 'FAIL'
            if ProjVar.get_var('PING_FAILURE'):
                setups.add_ping_failure(test_name=test_name)

        if not res_in_tests:
            res_in_tests = 'UNKNOWN'

        # count testcases by status
        TestRes.TOTALNUM += 1
        if res_in_tests == 'PASS':
            TestRes.PASSNUM += 1
        elif res_in_tests == 'FAIL':
            TestRes.FAILNUM += 1
        elif res_in_tests == 'SKIP':
            TestRes.SKIPNUM += 1

        _write_results(res_in_tests=res_in_tests, test_name=test_name)

    if repeat_count > 0:
        for key, val in res.items():
            if val[0] == 'Failed':
                global tc_end_time
                tc_end_time = strftime("%Y%m%d %H:%M:%S", gmtime())
                _write_results(res_in_tests='FAIL', test_name=test_name)
                TestRes.FAILNUM += 1
                if ProjVar.get_var('PING_FAILURE'):
                    setups.add_ping_failure(test_name=test_name)

                try:
                    parse_log.parse_test_steps(ProjVar.get_var('LOG_DIR'))
                except Exception as e:
                    LOG.warning(
                        "Unable to parse test steps. \nDetails: {}".format(
                            e.__str__()))

                pytest.exit(
                    "Skip rest of the iterations upon stress test failure")

    if no_teardown and report.when == 'call':
        for key, val in res.items():
            if val[0] == 'Skipped':
                break
        else:
            pytest.exit("No teardown and skip rest of the tests if any")

    return report

def pytest_runtest_setup(item):
    global tc_start_time
    # tc_start_time = setups.get_tis_timestamp(con_ssh)
    tc_start_time = strftime("%Y%m%d %H:%M:%S", gmtime())
    print('')
    message = "Setup started:"
    testcase_log(message, item.nodeid, log_type='tc_setup')
    # set test name for ping vm failure
    test_name = 'test_{}'.format(
        item.nodeid.rsplit('::test_', 1)[-1].replace('/', '_'))
    ProjVar.set_var(TEST_NAME=test_name)
    ProjVar.set_var(PING_FAILURE=False)


def pytest_runtest_call(item):
    separator = \
        '++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++'
    message = "Test steps started:"
    testcase_log(message, item.nodeid, separator=separator, log_type='tc_start')


def pytest_runtest_teardown(item):
    print('')
    message = 'Teardown started:'
    testcase_log(message, item.nodeid, log_type='tc_teardown')


def testcase_log(msg, nodeid, separator=None, log_type=None):
    if separator is None:
        separator = '-----------'

    print_msg = separator + '\n' + msg
    logging_msg = '\n{}{} {}'.format(separator, msg, nodeid)
    if console_log:
        print(print_msg)
    if log_type == 'tc_res':
        global tc_end_time
        tc_end_time = strftime("%Y%m%d %H:%M:%S", gmtime())
        LOG.tc_result(msg=msg, tc_name=nodeid)
    elif log_type == 'tc_start':
        LOG.tc_func_start(nodeid)
    elif log_type == 'tc_setup':
        LOG.tc_setup_start(nodeid)
    elif log_type == 'tc_teardown':
        LOG.tc_teardown_start(nodeid)
    else:
        LOG.debug(logging_msg)


########################
# Command line options #
########################
@pytest.mark.tryfirst
def pytest_configure(config):
    config.addinivalue_line("markers",
                            "features(feature_name1, feature_name2, "
                            "...): mark impacted feature(s) for a test case.")
    config.addinivalue_line("markers",
                            "priorities(, cpe_sanity, p2, ...): mark "
                            "priorities for a test case.")
    config.addinivalue_line("markers",
                            "known_issue(LP-xxxx): mark known issue with "
                            "LP ID or description if no LP needed.")

    if config.getoption('help'):
        return

    # Common reporting params
    collect_all = config.getoption('collectall')
    always_collect = config.getoption('alwayscollect')
    session_log_dir = config.getoption('sessiondir')
    resultlog = config.getoption('resultlog')

    # Test case params on installed system
    testcase_config = config.getoption('testcase_config')
    lab_arg = config.getoption('lab')
    natbox_arg = config.getoption('natbox')
    tenant_arg = config.getoption('tenant')
    horizon_visible = config.getoption('horizon_visible')
    is_vbox = config.getoption('is_vbox')

    global repeat_count
    repeat_count = config.getoption('repeat')
    global stress_count
    stress_count = config.getoption('stress')
    global count
    if repeat_count > 0:
        count = repeat_count
    elif stress_count > 0:
        count = stress_count

    global no_teardown
    no_teardown = config.getoption('noteardown')
    if repeat_count > 0 or no_teardown:
        ProjVar.set_var(NO_TEARDOWN=True)

    collect_netinfo = config.getoption('netinfo')

    # Determine lab value.
    lab = natbox = None
    if lab_arg:
        lab = setups.get_lab_dict(lab_arg)
    if natbox_arg:
        natbox = setups.get_natbox_dict(natbox_arg)

    lab, natbox = setups.setup_testcase_config(testcase_config, lab=lab,
                                               natbox=natbox)
    tenant = tenant_arg.upper() if tenant_arg else 'TENANT1'

    # Log collection params
    collect_all = True if collect_all else False
    always_collect = True if always_collect else False

    # If floating ip cannot be reached, whether to try to ping/ssh
    # controller-0 unit IP, etc.
    if collect_netinfo:
        ProjVar.set_var(COLLECT_SYS_NET_INFO=True)

    horizon_visible = True if horizon_visible else False

    if session_log_dir:
        log_dir = session_log_dir
    else:
        # compute directory for all logs based on resultlog arg, lab,
        # and timestamp on local machine
        resultlog = resultlog if resultlog else os.path.expanduser("~")
        if '/AUTOMATION_LOGS' in resultlog:
            resultlog = resultlog.split(sep='/AUTOMATION_LOGS')[0]
        resultlog = os.path.join(resultlog, 'AUTOMATION_LOGS')
        lab_name = lab['short_name']
        time_stamp = strftime('%Y%m%d%H%M')
        log_dir = '{}/{}/{}'.format(resultlog, lab_name, time_stamp)
    os.makedirs(log_dir, exist_ok=True)

    # set global constants, which will be used for the entire test session, etc
    ProjVar.init_vars(lab=lab, natbox=natbox, logdir=log_dir, tenant=tenant,
                      collect_all=collect_all,
                      always_collect=always_collect,
                      horizon_visible=horizon_visible)

    if lab.get('central_region'):
        ProjVar.set_var(IS_DC=True,
                        PRIMARY_SUBCLOUD=config.getoption('subcloud'))

    if is_vbox:
        ProjVar.set_var(IS_VBOX=True)

    config_logger(log_dir, console=console_log)

    # set resultlog save location
    config.option.resultlog = ProjVar.get_var("PYTESTLOG_PATH")

    # Repeat test params
    file_or_dir = config.getoption('file_or_dir')
    origin_file_dir = list(file_or_dir)
    if count > 1:
        print("Repeat following tests {} times: {}".format(count, file_or_dir))
        del file_or_dir[:]
        for f_or_d in origin_file_dir:
            for i in range(count):
                file_or_dir.append(f_or_d)

def pytest_addoption(parser):
    testconf_help = "Absolute path for testcase config file. Template can be " \
                    "found at automated-pytest-suite/stx-test_template.conf"
    lab_help = "STX system to connect to. Valid value: 1) short_name or name " \
               "of an existing dict entry in consts.Labs; Or 2) OAM floating " \
               "ip of the STX system under test"
    tenant_help = "Default tenant to use when unspecified. Valid values: " \
                  "tenant1, tenant2, or admin"
    natbox_help = "NatBox IP or name. If automated tests are executed from " \
                  "NatBox, --natbox=localhost can be used. " \
                  "If username/password are required to SSH to NatBox, " \
                  "please specify them in test config file."
    vbox_help = "Specify if StarlingX system is installed in virtual " \
                "environment."
    collect_all_help = "Run collect all on STX system at the end of test " \
                       "session if any test fails."
    logdir_help = "Directory to store test session logs. If this is " \
                  "specified, then --resultlog will be ignored."
    stress_help = "Number of iterations to run specified testcase(s). Abort " \
                  "rest of the test session on first failure"
    count_help = "Repeat tests x times - NO stop on failure"
    horizon_visible_help = "Display horizon on screen"
    no_console_log = 'Print minimal console logs'

    # Test session options on installed and configured STX system:
    parser.addoption('--testcase-config', action='store',
                     metavar='testcase_config', default=None,
                     help=testconf_help)
    parser.addoption('--lab', action='store', metavar='lab', default=None,
                     help=lab_help)
    parser.addoption('--tenant', action='store', metavar='tenantname',
                     default=None, help=tenant_help)
    parser.addoption('--natbox', action='store', metavar='natbox', default=None,
                     help=natbox_help)
    parser.addoption('--vm', '--vbox', action='store_true', dest='is_vbox',
                     help=vbox_help)

    # Debugging/Log collection options:
    parser.addoption('--sessiondir', '--session_dir', '--session-dir',
                     action='store', dest='sessiondir',
                     metavar='sessiondir', default=None, help=logdir_help)
    parser.addoption('--collectall', '--collect_all', '--collect-all',
                     dest='collectall', action='store_true',
                     help=collect_all_help)
    parser.addoption('--alwayscollect', '--always-collect', '--always_collect',
                     dest='alwayscollect',
                     action='store_true', help=collect_all_help)
    parser.addoption('--repeat', action='store', metavar='repeat', type=int,
                     default=-1, help=stress_help)
    parser.addoption('--stress', metavar='stress', action='store', type=int,
                     default=-1, help=count_help)
    parser.addoption('--no-teardown', '--no_teardown', '--noteardown',
                     dest='noteardown', action='store_true')
    parser.addoption('--netinfo', '--net-info', dest='netinfo',
                     action='store_true',
                     help="Collect system networking info if scp keyfile fails")
    parser.addoption('--horizon-visible', '--horizon_visible',
                     action='store_true', dest='horizon_visible',
                     help=horizon_visible_help)
    parser.addoption('--noconsolelog', '--noconsole', '--no-console-log',
                     '--no_console_log', '--no-console',
                     '--no_console', action='store_true', dest='noconsolelog',
                     help=no_console_log)

def config_logger(log_dir, console=True):
    # logger for log saved in file
    file_name = log_dir + '/TIS_AUTOMATION.log'
    logging.Formatter.converter = gmtime
    log_format = '[%(asctime)s] %(lineno)-5d%(levelname)-5s %(threadName)-8s ' \
                 '%(module)s.%(funcName)-8s:: %(message)s'
    tis_formatter = logging.Formatter(log_format)
    LOG.setLevel(logging.NOTSET)

    tmp_path = os.path.join(os.path.expanduser('~'), '.tmp_log')
    # clear the tmp log with best effort so it won't keep growing
    try:
        os.remove(tmp_path)
    except:
        pass
    logging.basicConfig(level=logging.NOTSET, format=log_format,
                        filename=tmp_path, filemode='w')

    # file handler:
    file_handler = logging.FileHandler(file_name)
    file_handler.setFormatter(tis_formatter)
    file_handler.setLevel(logging.DEBUG)
    LOG.addHandler(file_handler)

    # logger for stream output
    console_level = logging.INFO if console else logging.CRITICAL
    stream_handler = logging.StreamHandler()
    stream_handler.setFormatter(tis_formatter)
    stream_handler.setLevel(console_level)
    LOG.addHandler(stream_handler)

    print("LOG DIR: {}".format(log_dir))

def pytest_unconfigure(config):
    # collect all if needed
    if config.getoption('help'):
        return

    try:
        natbox_ssh = ProjVar.get_var('NATBOX_SSH')
        natbox_ssh.close()
    except:
        pass

    version_and_patch = ''
    try:
        version_and_patch = setups.get_version_and_patch_info()
    except Exception as e:
        LOG.debug(e)
        pass

    log_dir = ProjVar.get_var('LOG_DIR')
    if not log_dir:
        try:
            from utils.clients.ssh import ControllerClient
            ssh_list = ControllerClient.get_active_controllers(fail_ok=True)
            for con_ssh_ in ssh_list:
                con_ssh_.close()
        except:
            pass
        return

    try:
        tc_res_path = log_dir + '/test_results.log'
        build_info = ProjVar.get_var('BUILD_INFO')
        build_id = build_info.get('BUILD_ID', '')
        build_job = build_info.get('JOB', '')
        build_server = build_info.get('BUILD_HOST', '')
        system_config = ProjVar.get_var('SYS_TYPE')
        session_str = ''
        total_exec = TestRes.PASSNUM + TestRes.FAILNUM
        pass_rate = fail_rate = '0'  # defaults when nothing was executed
        if total_exec > 0:
            pass_rate = "{}%".format(
                round(TestRes.PASSNUM * 100 / total_exec, 2))
            fail_rate = "{}%".format(
                round(TestRes.FAILNUM * 100 / total_exec, 2))
        with open(tc_res_path, mode='a') as f:
            # Append general info to result log
            f.write('\n\nLab: {}\n'
                    'Build ID: {}\n'
                    'Job: {}\n'
                    'Build Server: {}\n'
                    'System Type: {}\n'
                    'Automation LOGs DIR: {}\n'
                    'Ends at: {}\n'
                    '{}'  # test session id and tag
                    '{}'.format(ProjVar.get_var('LAB_NAME'), build_id,
                                build_job, build_server, system_config,
                                ProjVar.get_var('LOG_DIR'), tc_end_time,
                                session_str, version_and_patch))
            # Append result summary to the end of the file
            f.write(
                '\nSummary:\nPassed: {} ({})\nFailed: {} ({})\nTotal '
                'Executed: {}\n'.
                format(TestRes.PASSNUM, pass_rate, TestRes.FAILNUM,
                       fail_rate, total_exec))
            if TestRes.SKIPNUM > 0:
                f.write('------------\nSkipped: {}'.format(TestRes.SKIPNUM))

        LOG.info("Test Results saved to: {}".format(tc_res_path))
        with open(tc_res_path, 'r') as fin:
            print(fin.read())
    except Exception as e:
        LOG.exception(
            "Failed to add session summary to test_results.log. "
            "\nDetails: {}".format(e.__str__()))
    # Below needs con_ssh to be initialized
    try:
        from utils.clients.ssh import ControllerClient
        con_ssh = ControllerClient.get_active_controller()
    except:
        LOG.warning("No con_ssh found")
        return

    try:
        parse_log.parse_test_steps(ProjVar.get_var('LOG_DIR'))
    except Exception as e:
        LOG.warning(
            "Unable to parse test steps. \nDetails: {}".format(e.__str__()))

    if test_count > 0 and (ProjVar.get_var('ALWAYS_COLLECT') or (
            has_fail and ProjVar.get_var('COLLECT_ALL'))):
        # Collect tis logs if collect all required upon test(s) failure.
        # Failure on collect all would not change the result of the last
        # test case.
        try:
            setups.collect_tis_logs(con_ssh)
        except Exception as e:
            LOG.warning("'collect all' failed. {}".format(e.__str__()))

    ssh_list = ControllerClient.get_active_controllers(fail_ok=True,
                                                       current_thread_only=True)
    for con_ssh_ in ssh_list:
        try:
            con_ssh_.close()
        except:
            pass

def pytest_collection_modifyitems(items):
    # print("Collection modify")
    move_to_last = []
    absolute_last = []

    for item in items:
        # re-order tests:
        trylast_marker = item.get_closest_marker('trylast')
        abslast_marker = item.get_closest_marker('abslast')

        if abslast_marker:
            absolute_last.append(item)
        elif trylast_marker:
            move_to_last.append(item)

        priority_marker = item.get_closest_marker('priorities')
        if priority_marker is not None:
            priorities = priority_marker.args
            for priority in priorities:
                item.add_marker(eval("pytest.mark.{}".format(priority)))

        feature_marker = item.get_closest_marker('features')
        if feature_marker is not None:
            features = feature_marker.args
            for feature in features:
                item.add_marker(eval("pytest.mark.{}".format(feature)))

        # known issue marker
        known_issue_mark = item.get_closest_marker('known_issue')
        if known_issue_mark is not None:
            issue = known_issue_mark.args[0]
            msg = "{} has a workaround due to {}".format(item.nodeid, issue)
            print(msg)
            LOG.debug(msg=msg)
            item.add_marker(eval("pytest.mark.known_issue"))

        # add dc marker to all tests whose names start with test_dc_
        dc_marker = item.get_marker('dc')
        if not dc_marker and 'test_dc_' in item.nodeid:
            item.add_marker(pytest.mark.dc)

    # add trylast tests to the end
    for item in move_to_last:
        items.remove(item)
        items.append(item)

    for i in absolute_last:
        items.remove(i)
        items.append(i)

def pytest_generate_tests(metafunc):
    # Prefix 'remote_cli' to test names so they are reported as a different
    # testcase
    if ProjVar.get_var('REMOTE_CLI'):
        metafunc.parametrize('prefix_remote_cli', ['remote_cli'])


##############################################################
# Manipulating fixture orders based on following pytest rules
# session > module > class > function
# autouse > non-autouse
# alphabetic after fulfilling above criteria
#
# Orders we want on fixtures of same scope:
# check_alarms > delete_resources > config_host
#############################################################

@pytest.fixture(scope='session')
def check_alarms():
    LOG.debug("Empty check alarms")
    return


@pytest.fixture(scope='session')
def config_host_class():
    LOG.debug("Empty config host class")
    return


@pytest.fixture(scope='session')
def config_host_module():
    LOG.debug("Empty config host module")


@pytest.fixture(autouse=True)
def a1_fixture(check_alarms):
    return


@pytest.fixture(scope='module', autouse=True)
def c1_fixture(config_host_module):
    return


@pytest.fixture(scope='class', autouse=True)
def c2_fixture(config_host_class):
    return


@pytest.fixture(scope='session', autouse=True)
def prefix_remote_cli():
    return


def __params_gen(index):
    return 'iter{}'.format(index)


@pytest.fixture(scope='session')
def global_setup():
    os.makedirs(ProjVar.get_var('TEMP_DIR'), exist_ok=True)
    os.makedirs(ProjVar.get_var('PING_FAILURE_DIR'), exist_ok=True)
    os.makedirs(ProjVar.get_var('GUEST_LOGS_DIR'), exist_ok=True)

    if region:
        setups.set_region(region=region)


#####################################
# End of fixture order manipulation #
#####################################


def pytest_sessionfinish():
    if ProjVar.get_var('TELNET_THREADS'):
        threads, end_event = ProjVar.get_var('TELNET_THREADS')
        end_event.set()
        for thread in threads:
            thread.join()

    if repeat_count > 0 and has_fail:
        # _thread.interrupt_main()
        print('Printing traceback: \n' + '\n'.join(tracebacks))
        pytest.exit("\n========== Test failed - "
                    "Test session aborted without teardown to leave the "
                    "system in state ==========")

    if no_teardown:
        pytest.exit(
            "\n========== Test session stopped without teardown after first "
            "test executed ==========")
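
# Illustrative invocation exercising the command line options defined above
# (lab/natbox values are hypothetical):
#   pytest --lab=my_server --natbox=localhost --collectall \
#       -m platform_sanity testcases/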

@@ -0,0 +1,348 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


class Tenant:
    __PASSWORD = 'St8rlingX*'
    __REGION = 'RegionOne'
    __URL_PLATFORM = 'http://192.168.204.2:5000/v3/'
    __URL_CONTAINERS = 'http://keystone.openstack.svc.cluster.local/v3'
    __DC_MAP = {'SystemController': {'region': 'SystemController',
                                     'auth_url': __URL_PLATFORM},
                'RegionOne': {'region': 'RegionOne',
                              'auth_url': __URL_PLATFORM}}

    # Platform openstack user - admin
    __ADMIN_PLATFORM = {
        'user': 'admin',
        'password': __PASSWORD,
        'tenant': 'admin',
        'domain': 'Default',
        'platform': True,
    }

    # Containerized openstack users - admin, and two test users/tenants
    __ADMIN = {
        'user': 'admin',
        'password': __PASSWORD,
        'tenant': 'admin',
        'domain': 'Default'
    }

    __TENANT1 = {
        'user': 'tenant1',
        'password': __PASSWORD,
        'tenant': 'tenant1',
        'domain': 'Default',
        'nova_keypair': 'keypair-tenant1'
    }

    __TENANT2 = {
        'user': 'tenant2',
        'password': __PASSWORD,
        'tenant': 'tenant2',
        'domain': 'Default',
        'nova_keypair': 'keypair-tenant2'
    }

    __tenants = {
        'ADMIN_PLATFORM': __ADMIN_PLATFORM,
        'ADMIN': __ADMIN,
        'TENANT1': __TENANT1,
        'TENANT2': __TENANT2}
    @classmethod
    def add_dc_region(cls, region_info):
        cls.__DC_MAP.update(region_info)

    @classmethod
    def set_platform_url(cls, url, central_region=False):
        """
        Set auth_url for platform keystone
        Args:
            url (str):
            central_region (bool)
        """
        if central_region:
            cls.__DC_MAP.get('SystemController')['auth_url'] = url
            cls.__DC_MAP.get('RegionOne')['auth_url'] = url
        else:
            cls.__URL_PLATFORM = url

    @classmethod
    def set_region(cls, region):
        """
        Set default region for all tenants
        Args:
            region (str): e.g., SystemController, subcloud-2

        """
        cls.__REGION = region

    @classmethod
    def add(cls, tenantname, dictname=None, username=None, password=None,
            region=None, auth_url=None, domain='Default'):
        tenant_dict = dict(tenant=tenantname)
        tenant_dict['user'] = username if username else tenantname
        tenant_dict['password'] = password if password else cls.__PASSWORD
        tenant_dict['domain'] = domain
        if region:
            tenant_dict['region'] = region
        if auth_url:
            tenant_dict['auth_url'] = auth_url

        dictname = dictname.upper() if dictname else \
            tenantname.upper().replace('-', '_')
        cls.__tenants[dictname] = tenant_dict
        return tenant_dict

    __primary = 'TENANT1'

    @classmethod
    def get(cls, tenant_dictname, dc_region=None):
        """
        Get tenant auth dict that can be passed to auth_info in cli cmd
        Args:
            tenant_dictname (str): e.g., tenant1, TENANT2, system_controller
            dc_region (None|str): key for dc_region added via add_dc_region.
                Used to update auth_url and region
                e.g., SystemController, RegionOne, subcloud-2

        Returns (dict): mutable dictionary. If changed, DC map or tenant dict
            will update as well.

        """
        tenant_dictname = tenant_dictname.upper().replace('-', '_')
        tenant_dict = cls.__tenants.get(tenant_dictname)
        if dc_region:
            region_dict = cls.__DC_MAP.get(dc_region, None)
            if not region_dict:
                raise ValueError(
                    'Distributed cloud region {} is not added to '
                    'DC_MAP yet. DC_MAP: {}'.format(dc_region, cls.__DC_MAP))
            tenant_dict.update({'region': region_dict['region']})
        else:
            tenant_dict.pop('region', None)

        return tenant_dict

    @classmethod
    def get_region_and_url(cls, platform=False, dc_region=None):
        auth_region_and_url = {
            'auth_url':
                cls.__URL_PLATFORM if platform else cls.__URL_CONTAINERS,
            'region': cls.__REGION
        }

        if dc_region:
            region_dict = cls.__DC_MAP.get(dc_region, None)
            if not region_dict:
                raise ValueError(
                    'Distributed cloud region {} is not added to DC_MAP yet. '
                    'DC_MAP: {}'.format(dc_region, cls.__DC_MAP))
            auth_region_and_url['region'] = region_dict.get('region')
            if platform:
                auth_region_and_url['auth_url'] = region_dict.get('auth_url')

        return auth_region_and_url

    @classmethod
    def set_primary(cls, tenant_dictname):
        """
        should be called after _set_region and _set_url
        Args:
            tenant_dictname (str): Tenant dict name

        """
        cls.__primary = tenant_dictname.upper()

    @classmethod
    def get_primary(cls):
        return cls.get(tenant_dictname=cls.__primary)

    @classmethod
    def get_secondary(cls):
        secondary = 'TENANT1' if cls.__primary != 'TENANT1' else 'TENANT2'
        return cls.get(tenant_dictname=secondary)

    @classmethod
    def update(cls, tenant_dictname, username=None, password=None, tenant=None,
               **kwargs):
        tenant_dict = cls.get(tenant_dictname)

        if not isinstance(tenant_dict, dict):
            raise ValueError("{} dictionary does not exist in "
                             "consts/auth.py".format(tenant_dictname))

        if not username and not password and not tenant and not kwargs:
            raise ValueError("Please specify username, password, tenant, "
                             "and/or domain to update for {} dict".
                             format(tenant_dictname))

        if username:
            kwargs['user'] = username
        if password:
            kwargs['password'] = password
        if tenant:
            kwargs['tenant'] = tenant
        tenant_dict.update(kwargs)
        cls.__tenants[tenant_dictname] = tenant_dict

    @classmethod
    def get_dc_map(cls):
        return cls.__DC_MAP

class HostLinuxUser:

    __SYSADMIN = {
        'user': 'sysadmin',
        'password': 'St8rlingX*'
    }

    @classmethod
    def get_user(cls):
        return cls.__SYSADMIN['user']

    @classmethod
    def get_password(cls):
        return cls.__SYSADMIN['password']

    @classmethod
    def get_home(cls):
        return cls.__SYSADMIN.get('home', '/home/{}'.format(cls.get_user()))

    @classmethod
    def set_user(cls, username):
        cls.__SYSADMIN['user'] = username

    @classmethod
    def set_password(cls, password):
        cls.__SYSADMIN['password'] = password

    @classmethod
    def set_home(cls, home):
        if home:
            cls.__SYSADMIN['home'] = home


class Guest:
    CREDS = {
        'tis-centos-guest': {
            'user': 'root',
            'password': 'root'
        },

        'cgcs-guest': {
            'user': 'root',
            'password': 'root'
        },

        'ubuntu': {
            'user': 'ubuntu',
            'password': None
        },

        'centos_6': {
            'user': 'centos',
            'password': None
        },

        'centos_7': {
            'user': 'centos',
            'password': None
        },

        # This image has some issue where it usually fails to boot
        'opensuse_13': {
            'user': 'root',
            'password': None
        },

        # OPV image has root/root enabled
        'rhel': {
            'user': 'root',
            'password': 'root'
        },

        'cirros': {
            'user': 'cirros',
            'password': 'cubswin:)'
        },

        'win_2012': {
            'user': 'Administrator',
            'password': 'Li69nux*'
        },

        'win_2016': {
            'user': 'Administrator',
            'password': 'Li69nux*'
        },

        'ge_edge': {
            'user': 'root',
            'password': 'root'
        },

        'vxworks': {
            'user': 'root',
            'password': 'root'
        },

    }

    @classmethod
    def set_user(cls, image_name, username):
        cls.CREDS[image_name]['user'] = username

    @classmethod
    def set_password(cls, image_name, password):
        cls.CREDS[image_name]['password'] = password


class TestFileServer:
    # Placeholder for a shared file server in the future.
    SERVER = 'server_name_or_ip_that_can_ssh_to'
    USER = 'username'
    PASSWORD = 'password'
    HOME = 'my_home'
    HOSTNAME = 'hostname'
    PROMPT = r'[\[]?.*@.*\$[ ]?'


class CliAuth:

    __var_dict = {
        'OS_AUTH_URL': 'http://192.168.204.2:5000/v3',
        'OS_ENDPOINT_TYPE': 'internalURL',
        'CINDER_ENDPOINT_TYPE': 'internalURL',
        'OS_USER_DOMAIN_NAME': 'Default',
        'OS_PROJECT_DOMAIN_NAME': 'Default',
        'OS_IDENTITY_API_VERSION': '3',
        'OS_REGION_NAME': 'RegionOne',
        'OS_INTERFACE': 'internal',
        'HTTPS': False,
        'OS_KEYSTONE_REGION_NAME': None,
    }

    @classmethod
    def set_vars(cls, **kwargs):
        for key in kwargs:
            cls.__var_dict[key.upper()] = kwargs[key]

    @classmethod
    def get_var(cls, var_name):
        var_name = var_name.upper()
        valid_vars = cls.__var_dict.keys()
        if var_name not in valid_vars:
            raise ValueError("Invalid var_name. Valid vars: {}".
                             format(valid_vars))

        return cls.__var_dict[var_name]
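
# Illustrative usage of the classes above (comments only, not executed):
#   auth_info = Tenant.get('tenant1')    # auth dict for CLI/API keywords
#   primary = Tenant.get_primary()       # defaults to the TENANT1 dict
#   ssh_user = HostLinuxUser.get_user()  # 'sysadmin' unless overridden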

@@ -0,0 +1,192 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


class VCPUSchedulerErr:
    CANNOT_SET_VCPU0 = "vcpu 0 cannot be specified"
    VCPU_VAL_OUT_OF_RANGE = "vcpu value out of range"
    INVALID_PRIORITY = "priority must be between 1-99"
    PRIORITY_NOT_INTEGER = "priority must be an integer"
    INVALID_FORMAT = "invalid format"
    UNSUPPORTED_POLICY = "not a supported policy"
    POLICY_MUST_SPECIFIED_LAST = "policy/priority for all vcpus must be " \
                                 "specified last"
    MISSING_PARAMETER = "missing required parameter"
    TOO_MANY_PARAMETERS = "too many parameters"
    VCPU_MULTIPLE_ASSIGNMENT = "specified multiple times, specification is " \
                               "ambiguous"
    CPU_MODEL_UNAVAIL = "No valid host was found.*Host VCPU model.*required.*"
    CPU_MODEL_CONFLICT = "Image vCPU model is not permitted to override " \
                         "configuration set against the flavor"


class NumaErr:
    GENERAL_ERR_PIKE = 'Requested instance NUMA topology cannot fit the ' \
                       'given host NUMA topology'
    # NUMA_AFFINITY_MISMATCH = " not match requested NUMA: {}"
    NUMA_VSWITCH_MISMATCH = 'vswitch not configured.* does not match ' \
                            'requested NUMA'
    NUMA_NODE_EXCLUDED = "NUMA: {} excluded"
    # UNINITIALIZED = '(NUMATopologyFilter) Uninitialized'
    TWO_NUMA_ONE_VSWITCH = 'vswitch not configured'
    FLV_UNDEVISIBLE = 'ERROR (Conflict): flavor vcpus not evenly divisible ' \
                      'by the specified hw:numa_nodes value'
    FLV_CPU_OR_MEM_UNSPECIFIED = 'ERROR (Conflict): CPU and memory ' \
                                 'allocation must be provided for all ' \
                                 'NUMA nodes'
    INSUFFICIENT_CORES = 'Not enough free cores to schedule the instance'


class MinCPUErr:
    VAL_LARGER_THAN_VCPUS = "min_vcpus must be less than or equal to " \
                            "the flavor vcpus value"
    VAL_LESS_THAN_1 = "min_vcpus must be greater than or equal to 1"
    CPU_POLICY_NOT_DEDICATED = "min_vcpus is only valid when hw:cpu_policy " \
                               "is dedicated"


class ScaleErr:
    SCALE_LIMIT_HIT = "When scaling, cannot scale beyond limits"


class CpuAssignment:
    VSWITCH_TOO_MANY_CORES = "The vswitch function can only be assigned up to" \
                             " 8 core"
    TOTAL_TOO_MANY_CORES = "More total logical cores requested than present " \
                           "on 'Processor {}'"
    NO_VM_CORE = "There must be at least one unused core for VMs."
    VSWITCH_INSUFFICIENT_CORES = "The vswitch function must have at least {} " \
                                 "core(s)"


class CPUThreadErr:
    INVALID_POLICY = "invalid hw:cpu_thread_policy '{}', must be one of " \
                     "prefer, isolate, require"
    DEDICATED_CPU_REQUIRED_FLAVOR = 'ERROR (Conflict): hw:cpu_thread_policy ' \
                                    'is only valid when hw:cpu_policy is ' \
                                    'dedicated. Either unset ' \
                                    'hw:cpu_thread_policy or set ' \
                                    'hw:cpu_policy to dedicated.'
    DEDICATED_CPU_REQUIRED_BOOT_VM = 'ERROR (BadRequest): Cannot set cpu ' \
                                     'thread pinning policy in a non ' \
                                     'dedicated ' \
                                     'cpu pinning policy'
    VCPU_NUM_UNDIVISIBLE = "(NUMATopologyFilter) Cannot use 'require' cpu " \
                           "threads policy as requested #VCPUs: {}, " \
                           "is not divisible by number of threads: 2"
    INSUFFICIENT_CORES_FOR_ISOLATE = "{}: (NUMATopologyFilter) Cannot use " \
                                     "isolate cpu thread policy as requested " \
                                     "VCPUS: {} is greater than available " \
                                     "CPUs with all siblings free"
    HT_HOST_UNAVAIL = "(NUMATopologyFilter) Host not useable. Requested " \
                      "threads policy: '{}'; from flavor or image " \
                      "is not allowed on non-hyperthreaded host"
    UNSET_SHARED_VCPU = "Cannot set hw:cpu_thread_policy to {} if " \
                        "hw:wrs:shared_vcpu is set. Either unset " \
                        "hw:cpu_thread_policy, set it to prefer, or unset " \
                        "hw:wrs:shared_vcpu"
    UNSET_MIN_VCPUS = "Cannot set hw:cpu_thread_policy to {} if " \
                      "hw:wrs:min_vcpus is set. Either unset " \
                      "hw:cpu_thread_policy, set it to another policy, " \
                      "or unset hw:wrs:min_vcpus"
    CONFLICT_FLV_IMG = "Image property 'hw_cpu_thread_policy' is not " \
                       "permitted to override CPU thread pinning policy " \
                       "set against the flavor"


class CPUPolicyErr:
    CONFLICT_FLV_IMG = "Image property 'hw_cpu_policy' is not permitted to " \
                       "override CPU pinning policy set against " \
                       "the flavor "


class SharedCPUErr:
    DEDICATED_CPU_REQUIRED = "hw:wrs:shared_vcpu is only valid when " \
                             "hw:cpu_policy is dedicated"
    INVALID_VCPU_ID = "hw:wrs:shared_vcpu must be greater than or equal to 0"
    MORE_THAN_FLAVOR = "hw:wrs:shared_vcpu value ({}) must be less than " \
                       "flavor vcpus ({})"


class ResizeVMErr:
    RESIZE_ERR = "Error resizing server"
    SHARED_NOT_ENABLED = 'Shared vCPU not enabled .*, required by instance ' \
                         'cell {}'


class ColdMigErr:
    HT_HOST_REQUIRED = "(NUMATopologyFilter) Host not useable. Requested " \
                       "threads policy: '[{}, {}]'; from flavor or " \
                       "image is not allowed on non-hyperthreaded host"


class LiveMigErr:
    BLOCK_MIG_UNSUPPORTED = "is not on local storage: Block migration can " \
                            "not be used with shared storage"
    GENERAL_NO_HOST = "No valid host was found. There are not enough hosts " \
                      "available."
    BLOCK_MIG_UNSUPPORTED_LVM = 'Block live migration is not supported for ' \
                                'hosts with LVM backed storage'
    LVM_PRECHECK_ERROR = 'Live migration can not be used with LVM backed ' \
                         'storage except a booted from volume VM ' \
                         'which does not have a local disk'


class NetworkingErr:
    INVALID_VXLAN_VNI_RANGE = "exceeds 16777215"
    INVALID_MULTICAST_IP_ADDRESS = "is not a valid multicast IP address."
    INVALID_VXLAN_PROVISION_PORTS = "Invalid input for port"
    VXLAN_TTL_RANGE_MISSING = "VXLAN time-to-live attribute missing"
    VXLAN_TTL_RANGE_TOO_LARGE = "is too large - must be no larger than '255'."
    VXLAN_TTL_RANGE_TOO_SMALL = "is too small - must be at least '1'."
    OVERLAP_SEGMENTATION_RANGE = "segmentation id range overlaps with"
    INVALID_MTU_VALUE = "requires an interface MTU value of at least"
    VXLAN_MISSING_IP_ON_INTERFACE = "requires an IP address"
    WRONG_IF_ADDR_MODE = "interface address mode must be 'static'"
    SET_IF_ADDR_MODE_WHEN_IP_EXIST = "addresses still exist on interfac"
    NULL_IP_ADDR = "Address must not be null"
    NULL_NETWORK_ADDR = "Network must not be null"
    NULL_GATEWAY_ADDR = "Gateway address must not be null"
    NULL_HOST_PARTION_ADDR = "Host bits must not be zero"
    NOT_UNICAST_ADDR = "Address must be a unicast address"
    NOT_BROADCAST_ADDR = "Address cannot be the network broadcast address"
    DUPLICATE_IP_ADDR = "already exists"
    INVALID_IP_OR_PREFIX = "Invalid IP address and prefix"
    INVALID_IP_NETWORK = "Invalid IP network"
    ROUTE_GATEWAY_UNREACHABLE = "not reachable"
    IP_VERSION_NOT_MATCH = "Network and gateway IP versions must match"
    GATEWAY_IP_IN_SUBNET = "Gateway address must not be within destination " \
                           "subnet"
    NETWORK_IP_EQUAL_TO_GATEWAY = "Network and gateway IP addresses must be " \
                                  "different"


class PciAddrErr:
    NONE_ZERO_DOMAIN = 'Only domain 0000 is supported'
    LARGER_THAN_MAX_BUS = 'PCI bus maximum value is 8'
    NONE_ZERO_FUNCTION = 'Only function 0 is supported'
    RESERVED_SLOTS_BUS0 = 'Slots 0,1 are reserved for PCI bus 0'
    RESERVED_SLOT_ANY_BUS = 'Slots 0 is reserved for any PCI bus'
    LARGER_THAN_MAX_SLOT = 'PCI slot maximum value is 31'
    BAD_FORMAT = 'Bad PCI address format'
    WRONG_BUS_VAL = 'Wrong bus value for PCI address'


class SrvGrpErr:
    EXCEEDS_GRP_SIZE = 'Action would result in server group {} exceeding the ' \
                       'group size of {}'
    HOST_UNAVAIL_ANTI_AFFINITY = '(ServerGroupAntiAffinityFilter) ' \
                                 'Anti-affinity server group specified, ' \
                                 'but this host is already used by that group'


class CpuRtErr:
    RT_AND_ORD_REQUIRED = 'Realtime policy needs vCPU.* mask configured with ' \
                          'at least 1 RT vCPU and 1 ordinary vCPU'
    DED_CPU_POL_REQUIRED = 'Cannot set realtime policy in a non dedicated cpu' \
                           ' pinning policy'
    RT_MASK_SHARED_VCPU_CONFLICT = 'hw:wrs:shared_vcpu .* is not a subset of ' \
                                   'non-realtime vCPUs'

@@ -0,0 +1,55 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


class StxPath:
    TIS_UBUNTU_PATH = '~/userdata/ubuntu_if_config.sh'
    TIS_CENTOS_PATH = '~/userdata/centos_if_config.sh'
    USERDATA = '~/userdata/'
    IMAGES = '~/images/'
    HEAT = '~/heat/'
    BACKUPS = '/opt/backups'
    CUSTOM_HEAT_TEMPLATES = '~/custom_heat_templates/'
    HELM_CHARTS_DIR = '/www/pages/helm_charts/'
    DOCKER_CONF = '/etc/docker-distribution/registry/config.yml'
    DOCKER_REPO = '/var/lib/docker-distribution/docker/registry/v2/repositories'


class VMPath:
    VM_IF_PATH_UBUNTU = '/etc/network/interfaces.d/'
    ETH_PATH_UBUNTU = '/etc/network/interfaces.d/{}.cfg'
    # Below two paths are common for CentOS, OpenSUSE, and RHEL
    VM_IF_PATH_CENTOS = '/etc/sysconfig/network-scripts/'
    ETH_PATH_CENTOS = '/etc/sysconfig/network-scripts/ifcfg-{}'

    # Centos paths for ipv4:
    RT_TABLES = '/etc/iproute2/rt_tables'
    ETH_RT_SCRIPT = '/etc/sysconfig/network-scripts/route-{}'
    ETH_RULE_SCRIPT = '/etc/sysconfig/network-scripts/rule-{}'
    ETH_ARP_ANNOUNCE = '/proc/sys/net/ipv4/conf/{}/arp_announce'
    ETH_ARP_FILTER = '/proc/sys/net/ipv4/conf/{}/arp_filter'


class UserData:
    ADDUSER_TO_GUEST = 'cloud_config_adduser.txt'
    DPDK_USER_DATA = 'dpdk_user_data.txt'


class TestServerPath:
    USER_DATA = '/home/svc-cgcsauto/userdata/'
    TEST_SCRIPT = '/home/svc-cgcsauto/test_scripts/'
    CUSTOM_HEAT_TEMPLATES = '/sandbox/custom_heat_templates/'
    CUSTOM_APPS = '/sandbox/custom_apps/'


class PrivKeyPath:
    OPT_PLATFORM = '/opt/platform/id_rsa'
    SYS_HOME = '~/.ssh/id_rsa'


class SysLogPath:
    DC_MANAGER = '/var/log/dcmanager/dcmanager.log'
    DC_ORCH = '/var/log/dcorch/dcorch.log'

@@ -0,0 +1,8 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


test_result = False

@@ -0,0 +1,162 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


class Labs:
    # Place for existing stx systems for convenience.
    # --lab <short_name> can be used in cmdline to specify an existing system

    EXAMPLE = {
        'short_name': 'my_server',
        'name': 'my_server.com',
        'floating ip': '10.10.10.2',
        'controller-0 ip': '10.10.10.3',
        'controller-1 ip': '10.10.10.4',
    }


def update_lab(lab_dict_name=None, lab_name=None, floating_ip=None, **kwargs):
    """
    Update/Add lab dict params for specified lab
    Args:
        lab_dict_name (str|None):
        lab_name (str|None): lab short_name. This is used only if
            lab_dict_name is not specified
        floating_ip (str|None):
        **kwargs: Some possible keys: subcloud-1, name, etc

    Returns (dict): updated lab dict

    """

    if not lab_name and not lab_dict_name:
        from consts.proj_vars import ProjVar
        lab_name = ProjVar.get_var('LAB').get('short_name', None)
        if not lab_name:
            raise ValueError("lab_dict_name or lab_name needs to be specified")

    if floating_ip:
        kwargs.update(**{'floating ip': floating_ip})

    if not kwargs:
        raise ValueError("Please specify floating_ip and/or kwargs")

    if not lab_dict_name:
        attr_names = [attr for attr in dir(Labs) if not attr.startswith('__')]
        lab_names = [getattr(Labs, attr).get('short_name') for attr in
                     attr_names]
        lab_index = lab_names.index(lab_name.lower().strip())
        lab_dict_name = attr_names[lab_index]
    else:
        lab_dict_name = lab_dict_name.upper().replace('-', '_')

    lab_dict = getattr(Labs, lab_dict_name)
    lab_dict.update(kwargs)
    return lab_dict
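
# Illustrative usage (hypothetical values; EXAMPLE is the Labs dict above):
#   update_lab(lab_dict_name='EXAMPLE', floating_ip='10.10.10.5')
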

def get_lab_dict(lab, key='short_name'):
    """

    Args:
        lab: lab name or fip
        key: unique identifier to locate a lab. Valid values: short_name,
            name, floating ip

    Returns (dict|None): lab dict or None if no matching lab found
    """
    __lab_attr_list = [attr for attr in dir(Labs) if not attr.startswith('__')]
    __lab_list = [getattr(Labs, attr) for attr in __lab_attr_list]
    __lab_list = [lab for lab in __lab_list if isinstance(lab, dict)]