[robot] Initial submission for robot test suite

Initial submission of a patch series with code of the robot test suite
used for Deployment + Provisioning + Sanity Test.

  - runner:  Starting point of the suite, exposing all the different
             options available in the suite as well as initializing all
             needed features on the host according to the configuration
  - README: Instructions to set up the suite and basic usage

Change-Id: I6ead335412150fb8d64a6abf7909cf702d0d248c
Signed-off-by: Jose Perez Carranza <jose.perez.carranza@intel.com>
Jose Perez Carranza 2019-08-12 08:49:14 -05:00
parent 299667bada
commit 3b98a48102
2 changed files with 1010 additions and 0 deletions

.. default-role:: code
StarlingX Test Suite
====================
.. image:: .images/starlingx.png
.. contents:: Table of contents:
   :depth: 4
This Test Suite provides an automated way to set up, provision and run a
sanity test of the 4 basic StarlingX deployment options described in the
`StarlingX Installation Guide`__. Currently the suite fully supports
deployment on virtual environments using Libvirt/Qemu to simulate the nodes;
installation on bare metal is only supported for a very specific infrastructure
that is described in `BareMetal`_, and complete documentation for this process
will be ready soon.
The suite is based on Robot Framework and Python; please follow the
instructions below to properly use the suite.
***NOTE***: Currently the suite is designed to run on a Python 2.7 environment;
migration to Python 3.5 is underway and will be ready soon.
__ https://docs.starlingx.io/deploy_install_guides/index.html
Quick Start
-----------
This guide is focused on a clean OS installation; any issue not documented
here must be solved by the user.
The recommended OS is **Ubuntu 16.04 LTS**, which you can download from the
following link:
`Download Ubuntu 16.04 LTS`__.
__ http://releases.ubuntu.com/16.04/ubuntu-16.04.5-desktop-amd64.iso
***NOTE***: The suite was also tested on `Debian 9` and `Fedora 27`.
Updating the system
-------------------
To get the system **up-to-date**, run the following commands:
.. code:: bash
$ sudo apt update
$ sudo apt upgrade
automated-robot-suite repository
--------------------------------

Installing Git
~~~~~~~~~~~~~~
Before you can clone the repository, Git is needed; install it by typing the
following command:
.. code:: bash
$ sudo apt install git
Cloning the repository
~~~~~~~~~~~~~~~~~~~~~~
The next step is to make a copy of this repository on your local machine:
.. code:: bash
$ git clone https://opendev.org/starlingx/test/src/branch/master/automated-robot-suite
Git configuration
~~~~~~~~~~~~~~~~~
Make sure that you have Git correctly configured:
.. code:: bash
$ git config --global user.name "your name here"
$ git config --global user.email "your email here"
$ git config --list
If you have any issues, please visit the `Troubleshooting`_ section.
Host package requirements
-------------------------
Please execute the steps below to enable Qemu/Libvirt on your host.
1. Add your Linux user at the end of the **/etc/sudoers** file:
.. code:: bash
<your_user> ALL = (root) NOPASSWD:ALL
2. Install the following packages
.. code:: bash
$ sudo apt-get install virt-manager libvirt-bin qemu-system
============  ===================================================
Package       Description
============  ===================================================
virt-manager  Display the virtual machine desktop management tool
libvirt-bin   Programs for the libvirt library
qemu-system   QEMU full system emulation binaries
============  ===================================================
3. Start the libvirt service daemon with the following command:
.. code:: bash
$ sudo service libvirt-bin restart
4. Make sure that the daemon is loaded and running:
.. code:: bash
$ service libvirt-bin status
● libvirt-bin.service - Virtualization daemon
Loaded: loaded (/lib/systemd/system/libvirt-bin.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2018-08-21 11:17:36 CDT; 3s ago
Docs: man:libvirtd(8)
Main PID: 5593 (libvirtd)
CGroup: /system.slice/libvirt-bin.service
├─5558 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr /lib/libvirt/libvirt_leaseshelper
├─5559 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
├─5593 /usr/sbin/libvirtd
└─5630 /usr/sbin/libvirtd
Aug 21 11:17:36 computing systemd[1]: Starting Virtualization daemon...
Aug 21 11:17:36 computing systemd[1]: Started Virtualization daemon.
5. Reboot the system so that the current user's membership in the
**libvirtd** group, needed to run the related services, takes effect.
.. code:: bash
$ sudo reboot
Project requirements
--------------------
Every Python project has requirement files; in this case the
**automated-robot-suite** repository has the following files:
- **requirements.txt**: which contains all the requirements
that the project needs.
- **test-requirements.txt**: which contains all the test
requirements that the project needs.
Python virtual environments
---------------------------
Python "virtual environments" allow Python packages to be installed in an
isolated location for a particular application, rather than being installed
globally.
Installation on Virtual Environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Make sure you have python **virtualenv** package installed in your host machine.
.. code:: bash
$ sudo apt install python-pip
$ sudo pip install virtualenv
You can manage your virtual environments with either of the two options
explained below.

Managing virtual environments with virtualenvwrapper
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
While virtual environments certainly solve some big problems with package
management, they're not perfect. After creating a few environments, you will
start to see that they create some problems of their own, most of which revolve
around managing the environments themselves. To help with this, the
virtualenvwrapper tool was created, which is just a set of wrapper scripts
around the main virtualenv tool.
A few of the more useful features of virtualenvwrapper are that it:

- Organizes all of your virtual environments in one location
- Provides methods to help you easily create, delete, and copy environments
- Provides a single command to switch between environments
To get started, you can download the wrapper with pip
.. code:: bash
$ sudo pip install virtualenvwrapper
Once installed, you will need to activate its shell functions, which can be
done by running ``source`` on the installed `virtualenvwrapper.sh` script:
.. code:: bash
$ which virtualenvwrapper.sh
Using that path, add the following lines to your shell's startup file,
which is your **~/.bashrc**:
.. code:: bash
export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/projects
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python
export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv
source /usr/local/bin/virtualenvwrapper.sh
Finally, reload your **bashrc** file
.. code:: bash
$ source ~/.bashrc
For help and examples on virtualenvwrapper please visit the `Help`_ section.

Managing virtual environments raw
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you want a more direct way to work with virtual environments in Python,
you can follow the steps below:
.. code:: bash
$ virtualenv my-venv
$ source my-venv/bin/activate
Install the project requirements in the virtual environment. Now that the
virtualenv is activated, you need to install the required packages:
.. code:: bash
$ cd <automated-robot-suite>
$ pip install -r requirements.txt
$ pip install -r test-requirements.txt
Augment the default search path for module files. The format is the same as
the shell's PATH: one or more directory pathnames separated by `os.pathsep`
(e.g. colons on Unix or semicolons on Windows). Non-existent directories are
silently ignored.
In addition to normal directories, individual **PYTHONPATH** entries may refer
to zip files containing pure Python modules (in either source or compiled
form). Extension modules cannot be imported from zip files.
The default search path is installation dependent, but generally begins with
**/prefix/lib/pythonversion**. It is always appended to **PYTHONPATH**.
**PYTHONPATH** environment variable is a pre requisite for this project.
Please setup **PYTHONPATH** in your local bashrc like below:[#]_
.. code:: bash
$ export PYTHONPATH="${PYTHONPATH}:../automated-robot-suite"
.. [#] where **../** indicates the absolute path to the project.
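Since **PYTHONPATH** drives module resolution for the suite, the setting can
be sanity-checked from Python itself. The snippet below is only an
illustrative sketch, not part of the suite:

.. code:: python

import os
import sys

# PYTHONPATH entries are separated by os.pathsep: ':' on Unix, ';' on Windows.
entries = os.environ.get('PYTHONPATH', '').split(os.pathsep)
print(entries)

# Directories listed in PYTHONPATH are added to sys.path at startup, which is
# what lets imports such as `import Utils.common` resolve from the project root.
print([e for e in entries if os.path.isdir(e) and e in sys.path])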
Using the suite
---------------
This section describes how to configure, interact with, and run tests
with the suite, which is based on Robot Framework. The suite supports two
different environments: `Virtual`_ and `BareMetal`_.
__ https://docs.starlingx.io/deploy_install_guides/upcoming/installation_libvirt_qemu.html
Virtual
~~~~~~~
Virtual deployment is based on **qemu/libvirt** to create the virtual
machines that will host the **StarlingX** deployment.
**NOTE**: There are minimum HW requirements to deploy on virtual environments;
please refer to `installation_libvirt_qemu`__ for more details.
Download Artifacts
~~~~~~~~~~~~~~~~~~
The suite needs an **ISO** to be installed and the associated **Helm chart**
to deploy the OpenStack services, so they should be downloaded and put inside
the **automated-robot-suite/** path.
There are daily builds under the **CENGN** infrastructure, so the above items
can be downloaded from:
`StarlingX Mirror`__
**ISO** = /*<release>*/outputs/iso/bootimage.iso
**HELM_CHART** = /*<release>*/outputs/helm-charts/stx-openstack-*<VERSION>*-centos-stable-versioned.tgz
__ http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/
Suite Configuration
~~~~~~~~~~~~~~~~~~~
- **config.ini**: This file contains information used directly by the suite
to set up a deployment; parameters that should be updated are:
1. **STX_ISO_FILE**: The name of the ISO; for automation purposes it is
recommended to leave it as **bootimage.iso** and create a symlink to the
required ISO.
.. code:: bash
ln -sfn stx-2018-10-19-29-r-2018.10.iso bootimage.iso
2. **CHART_MANIFEST**: The name of the Helm chart associated with the
ISO; here as well a symlink is recommended.
3. **STX_DEPLOY_USER_NAME**: The user name to be setup on the deployment.
4. **STX_DEPLOY_USER_PSWD**: The password to be setup on the deployment.
- **stx-<configuration>.yml**: The configuration file used to configure
the StarlingX deployment. There is a file for the **Simplex**, **Duplex** and
**Multinode** configurations. The structure of this file is out of the scope
of this document; please refer to the official `StarlingX documentation`__
for more information.
__ https://docs.starlingx.io/
- **VM's Resources Yaml**: Definition of the resources that will be used by
libvirt to create the VMs. Those files are stored at **Qemu/configs** and
are set with the minimum resources needed, hence values can only be increased
according to the host resources.
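Putting the parameters above together, the relevant **config.ini** fragment
could look like the following (all values here are illustrative examples, not
defaults shipped with the suite):

.. code:: bash

STX_ISO_FILE = bootimage.iso
CHART_MANIFEST = stx-openstack-1.0-centos-stable-versioned.tgz
STX_DEPLOY_USER_NAME = sysadmin
STX_DEPLOY_USER_PSWD = St4rlingX*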
Suite Execution
~~~~~~~~~~~~~~~
The suite is divided into 3 main stages, explained below.
In this stage all the virtual machines are created for the specific
configuration selected, with the attributes previously defined; the ISO is
installed on the master controller, which is then configured as a StarlingX
controller.
.. code:: bash
$ python runner.py --run-suite Setup --configuration <config_number> --environment virtual
In this stage all the other nodes are installed and the system is provisioned
following the steps defined in the `StarlingX Installation Guides`__
__ https://docs.starlingx.io/deploy_install_guides/
.. code:: bash
$ python runner.py --run-suite Provision
Test Execution
~~~~~~~~~~~~~~
In this stage the system is already provisioned and tests can be executed;
below are the steps to execute a **Sanity-Test** suite.
1. Download the required images.
External:

- `Cirros`__
- `Ubuntu`__
- `Centos`__
- `Windows`__
__ http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
__ http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
__ http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
__ https://cloudbase.it/windows-cloud-images/
2. Update **config.ini** with the name of the downloaded images.
.. code:: bash
CIRROS_FILE = cirros-0.4.0-x86_64-disk.qcow2
CENTOS_FILE = CentOS-7-x86_64-GenericCloud.qcow2
UBUNTU_FILE = xenial-server-cloudimg-amd64-disk1.qcow2
WINDOWS_FILE = windows_server_2012_r2.qcow2
3. Run Tests
.. code:: bash
$ python runner.py --run-suite Sanity-Test
BareMetal
~~~~~~~~~
**Infrastructure Diagram**
.. image:: .images/bm_diagram.png
**PXE client** - This is the main StarlingX controller (controller-0).
**PXE Server** - StarlingX test suite must be executed on this host. Also,
these services are running:
- *TFTP* - Used to serve uefi/shim.efi file. Indicating where the pxe client
is going to connect to download installation packages.
- *HTTP* - Serving the full content of an ISO to the pxe client.
- *DHCP* - This service assigns a temporary IP address to the pxe client; it
also tells the clients where to grab the boot shim file.
These services should be running on the OAM network. You need to ensure that
TFTP and DHCP are configured properly to serve the shim file. Also, the test
suite needs to identify the temporary IP address that the pxe client is going
to use.
The following is an example of a DHCP configuration file assigning a temporary
IP to a pxe client:
.. code:: bash

host standard_example {
hardware ethernet aa:bb:cc:dd:ee:ff;
}
Also, you need to have this option in the same DHCP configuration file:

.. code:: bash

filename "uefi/shim.efi";
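For reference, a fuller ``dhcpd.conf`` sketch combining the pieces above might
look like this; the subnet, addresses and MAC are illustrative assumptions,
not values required by the suite:

.. code:: bash

subnet 10.10.10.0 netmask 255.255.255.0 {
  range 10.10.10.100 10.10.10.200;       # pool for temporary client addresses
  next-server 10.10.10.1;                # TFTP server (the pxe server itself)
  filename "uefi/shim.efi";              # boot shim served over TFTP

  host standard_example {
    hardware ethernet aa:bb:cc:dd:ee:ff; # MAC of controller-0 (pxe client)
    fixed-address 10.10.10.50;           # temporary IP the suite will look for
  }
}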
The test suite will do the following steps to start an install:
1) Mount bootimage.iso and expose it over HTTP.
2) Take info from the mounted files to create a custom shim file. This file
will automatically set up the required boot options for the pxe client.
3) Use the BMC network to send a signal to the pxe client, telling it to
boot from the first network adapter (pxe boot).
4) Open a SOL connection to the host to monitor the progress of the install;
once completed, it will change the sysadmin password to the one defined in the
.yml file.
5) Copy the required rpms to install secondary nodes. This is done using scp
from the pxe server to the pxe client using the temporary IP address.
Results and Logs
----------------
Every execution of the suite generates a separate directory with logs; this is
placed under **Results/**, and a link to the most recent execution can be
accessed via the **latest-results/** symlink. The list of available logs is:
- **debug.log**: Showing the output from Robot Framework activity.
- **iso_setup_console.txt** : Showing the serial output of the ISO
installation and Configuration on virtual environments.
- **iso_setup.error.log**: Filtering only the errors on the serial console.
- **qemu_setup.error.log**: Showing the information related to
*Qemu* and *Libvirt*
- **log.html**: Showing the *debug.log* in *HTML* format
- **output.xml**: Showing the *debug.log* in *XML* format
- **report.html**: Showing the results on a visual and customizable format.
Troubleshooting
---------------
- TLS connection was non-properly terminated
Sometimes when trying to clone the repository you could get the following
error:
.. code:: bash
<git_url>: (35) gnutls_handshake() failed: The TLS connection was non-properly terminated.
This error message means that git is having trouble setting up a secure
connection. To solve this, please follow the next steps:
.. code:: bash
unset https_proxy
export http_proxy=http://<PROXY>:<PORT>
- AttributeError: 'module' object has no attribute 'SSL_ST_INIT'
This error occurs because the Python OpenSSL module that comes with the
distribution is incompatible with the pip version. Please do the following
steps to fix it:
.. code:: bash
$ sudo apt-get --auto-remove --yes remove python-openssl
$ pip install pyOpenSSL
This is a common issue and it means that your system date is out of date.
To fix it, please set the correct date on your system.
- Nodes not being installed
In some cases it was seen that during virtual deployment of Duplex or
Multi-node configurations the extra nodes (controller-1, computes and storage)
are not installed and keep waiting for the PXE image until the timeout
expires. For those cases the culprit causing the controllers not to PXE boot
is **docker**: for some reason (not yet discovered) docker sends packets to
the interfaces used by the VMs to be installed by PXE, and this unknown
traffic on the interface makes the PXE installation fail. The workaround for
now is to stop the docker daemon to avoid this issue.
.. code:: bash
$ sudo status docker
$ sudo stop docker
Help
----
This section shows different topics that could help with the suite usage.

Increase resources on virtual environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The suite sets the minimum requirements on the virtual machines to support a
**StarlingX** deployment, but it is also possible to increase those values if
the host machine has enough resources. Follow the steps below to increase
resources:
1. Go to **Qemu/configs/** and open *yaml* file of your configuration
2. Edit file with the values for:
- partition_a (in GB)
- partition_b (in GB)
- partition_d (in GB)
- memory_size (in MB)
- system_cores
3. Values can be increased for **Controllers**, **Computes** and **Storage**
Using proxies to download docker images
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
With the support of containers on a *StarlingX* deployment there is a need to
download docker images. If you are using a proxy, please follow the steps
below to successfully configure your deployment.
1. Open your configuration file at **Config/stx-<config>.ini** and add the
section below:
.. code:: bash
DOCKER_NO_PROXY=localhost,<IPs of the OAM network of all your nodes>
2. Save the file and run **Setup** to have a StarlingX deployment configured
with docker proxies.
Using local registry to download docker images
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
With the support of containers on a *StarlingX* deployment there is a need to
download docker images. If you don't have access to public repositories you
can point docker to use a local registry (how to set up a local registry is
out of the scope of this document); follow the steps below:
1. Open your configuration file at **Config/stx-<config>.ini** and delete
the **[DNS]** and **[DOCKER_PROXY]** sections if they exist.
2. Add the section below:
.. code:: bash
Virtualenvwrapper useful commands
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
==============  ===========================================
cmd             Description
==============  ===========================================
workon          List or change working virtual environments
deactivate      Deactivate the current virtual environment
rmvirtualenv    Remove an environment
mkvirtualenv    Create a new virtual environment
lsvirtualenv    List all of the environments
lssitepackages  Shows contents of site-packages directory
==============  ===========================================
Virtualenvwrapper Examples
~~~~~~~~~~~~~~~~~~~~~~~~~~
- Create a virtual environment: This will create and activate a new
environment in the directory located at $WORKON_HOME, where all
virtualenvwrapper environments are stored.
.. code:: bash
$ mkvirtualenv my-new-virtualenvironment
(my-new-virtualenvironment) $
- Stop an existing virtual environment: To stop using that environment,
you just need to deactivate it like before
.. code:: bash
(my-new-virtualenvironment) $ deactivate
- List virtual environments: If you have many environments to choose from,
you can list them all with the workon function
.. code:: bash
$ workon
- Activate an existing virtual environment
.. code:: bash
$ workon web-scraper
(web-scraper) $

automated-robot-suite/runner.py Executable file
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Runner for StarlingX test suite"""
from __future__ import print_function
import argparse
import getpass
import os
from shutil import copy
from Config import config
import sys
import robot
import Utils.common as common
from Libraries.common import update_config_ini, get_controllers_ip
# Global variables
CURRENT_USER = getpass.getuser()
SUITE_DIR = os.path.dirname(os.path.abspath(__file__))
MAIN_SUITE = os.path.join(SUITE_DIR, 'Tests')
LOG_NAME = 'debug.log'
# Set PYTHONPATH variable
os.environ["PYTHONPATH"] = SUITE_DIR
def update_general_config_file(configuration, config_type, env, config_file,
"""Update general configuration file with selected options
- configuration: The configuration to be setup, the possible options
1. for simplex configuration
2. for duplex configuration
3. for multinode controller storage configuration
4. for multinode dedicated storage configuration
- config_type: The type of configuration selected from the command
- env: The environment selected from the command line
- config_file: The stx-configuration.ini file to be setup in
the controller
config_path = os.path.join(SUITE_DIR, 'Config', 'config.ini')
# Get Controller(s) IPs from the stx specific config file
stx_config_path = os.path.join(SUITE_DIR, 'Config', config_file)
if env == 'baremetal':
lab_yaml = ('{}.yaml').format(config_type)
lab_config = os.path.join(SUITE_DIR, 'baremetal', 'configs', lab_yaml)
lab_config = os.path.join(SUITE_DIR, yaml_file)
unit_ips = get_controllers_ip(env, stx_config_path, config_type,
# Update configuration info
if env == 'baremetal':
update_config_ini(config_ini=config_path, KERNEL_OPTION=configuration,
update_config_ini(config_ini=config_path, KERNEL_OPTION=configuration,
if ARGS.update:
print('''Suite Updated !!!
Following values are now set on Config/config.ini file
ENV_YAML_FILE={}'''.format(configuration, config_type, env, config_file,
if env == 'baremetal':
MGMT={}'''.format(unit_ips['OAM_IF'], unit_ips['MGMT_IF']))
# Only update configuration hence exit
def update_yaml_file(config_opt, env):
"""Overwrite the yaml file for specific yaml file used
This function overwrite the current yaml file into environment folder
for a specific configuration file from environment/configs.
:param config_opt: the argument from the command line given by the user
:param env: environment argument from the command line given by the user
- conf_type: the type of configuration selected from the
command line
- conf_file: the configuration to be use during config controller
command in the node
conf_type = ''
conf_file = ''
if config_opt == '1':
conf_type = 'simplex'
conf_file = 'stx-simplex.yml'
elif config_opt == '2':
conf_type = 'duplex'
conf_file = 'stx-duplex.yml'
elif config_opt == '3':
conf_type = 'multinode_controller_storage'
conf_file = 'stx-multinode.yml'
elif config_opt == '4':
conf_type = 'multinode_dedicated_storage'
conf_file = 'stx-multinode.yml'
# Update yaml file of selected environment
if env == 'virtual':
env_dir = 'Qemu'
env_setup_file = 'qemu_setup.yaml'
env_dir = 'baremetal'
env_setup_file = 'baremetal_setup.yaml'
origin = os.path.join(SUITE_DIR, '{}/configs/{}.yaml'
.format(env_dir, conf_type))
destination = os.path.join(SUITE_DIR, '{}/{}'
.format(env_dir, env_setup_file))
copy(origin, destination)
return {'ctype': conf_type, 'cfile': conf_file,
'eyaml': '{}/{}'.format(env_dir, env_setup_file,)}
def kernel_option(configuration):
"""Return the correct kernel option
This function returns the kernel option to install the
configuration selected by the user
:param configuration: the argument from the command line given by the user
- kernel_opt: which is the kernel option to boot the controller-0
kernel_opt = ''
if configuration == '1' or configuration == '2':
kernel_opt = '3'
elif configuration == '3' or configuration == '4':
kernel_opt = '1'
return kernel_opt
def get_args():
"""Define and handle arguments with options to run the script
parser.parse_args(): list arguments as objects assigned
as attributes of a namespace
description = 'Script used to run the stx-test-suite'
parser = argparse.ArgumentParser(description=description)
# optional args
'--list-suites', dest='list_suite_name',
nargs='?', const=os.path.basename(MAIN_SUITE),
'List the suite and sub-suites including test cases of the '
'specified suite, if no value is given the entire suites tree '
'is displayed.'))
# groups args
group = parser.add_argument_group(
'Execution Suite', 'One of this arguments is mandatory - Suite(s) to '
'be run')
group.add_argument('--run-all', dest='run_all',
action='store_true', help='Run all available suites')
group.add_argument('--run-suite', dest='run_suite_name',
help='Run the specified suite')
group_configuration = parser.add_argument_group(
'Execution Environment and Configuration',
'Environment and Configuration to be run in the host'
'- This option is only required if `--run-suite` is equal to `Setup`')
'--environment', dest='environment', choices=['virtual', 'baremetal'],
help=('The environment where the suite will run'))
'--configuration', dest='configuration', choices=['1', '2', '3', '4'],
'{}: will deploy configurations for the host. '
'1=simplex, 2=duplex, 3=multinode-controller-storage, 4='
'--update-only', dest='update', action='store_true',
help=('Update execution parameters on the suite.'))
group_extras = parser.add_argument_group(
'Execution Extras', 'Extra options to be used on the suite execution.')
'--include', dest='tags',
'Executes only the test cases with specified tags.'
'Tags and patterns can also be combined together with `AND`, `OR`,'
'and `NOT` operators.'
'Examples: --include foo --include bar* --include foo AND bar*'))
'--test', dest='tests', nargs='+', default='*',
'Select test cases to run by name. '
'Name is case and space insensitive. '
'Test cases should be separated by a blank space, '
'if the test case has spaces in the name, send it between "". '
'Examples: --test "TEST 1" TEST_2 "Test 3"'))
return parser.parse_args()
def list_suites_option(suite_to_list):
"""Display the suite tree including test cases
suite_to_list: name of the suite to display on stdout
# Get suite details
suite = common.Suite(suite_to_list, MAIN_SUITE)
Suite is located at: {}
[S] = Suite
(T) = Test Case
=== SUITE TREE ====
common.list_suites(suite.data, '')
def get_config_tag(configuration):
"""Associate the selected configuration with a tag
configuration: Configuration selected
tag: Tag associated to the configuration
tags_dict = {
'simplex': 'Simplex',
'duplex': 'Duplex',
'multinode_controller_storage': 'MN-Local',
'multinode_dedicated_storage': 'MN-External'
return tags_dict.get(configuration)
def get_iso_name():
"""Check real name of the ISO used on the deployment
real_name: ISO real name
name = config.get('general', 'STX_ISO_FILE')
# Check if a symlink was used instead of updating the config file
real_name = os.readlink('{}/{}'.format(SUITE_DIR, name))
except OSError:
real_name = name
return real_name
def get_metadata():
"""Construct default metadata to be displayed on reports
metadata_list: List with names and values to be added as metadata
metadata_list = []
system = ('System:{}'.format(config.get('general', 'CONFIGURATION_TYPE')))
iso = ('ISO:{}'.format(get_iso_name()))
metadata_list.extend([system, iso])
return metadata_list
def run_suite_option(suite_name):
"""Run Specified Test Suite and creates the results structure
- suite_name: name of the suite that will be executed
# Get suite details
suite = common.Suite(suite_name, MAIN_SUITE)
# Create results directory if does not exist
results_dir = common.check_results_dir(SUITE_DIR)
# Create output directory to store execution results
output_dir = common.create_output_dir(results_dir, suite.name)
# Create a link pointing to the latest run
common.link_latest_run(SUITE_DIR, output_dir)
# Updating config.ini LOG_PATH variable with output_dir
config_path = os.path.join(SUITE_DIR, 'Config', 'config.ini')
update_config_ini(config_ini=config_path, LOG_PATH=output_dir)
# Get configuration and environment from general config file
config_type = config.get('general', 'CONFIGURATION_TYPE')
env = config.get('general', 'ENVIRONMENT')
env_yaml = config.get('general', 'ENV_YAML_FILE')
# Check configuration and add it as default to the tags
default_tag = get_config_tag(config_type)
# Select tags to be used, empty if not set to execute all
include_tags = ('{0}AND{1}'.format(default_tag, ARGS.tags)
if ARGS.tags else default_tag)
if ARGS.run_suite_name == 'Setup':
include_tags = ('{0}AND{1}'.format(include_tags, ARGS.environment))
metadata_list = get_metadata()
# Run stx-test-suite using robot framework
return robot.run(suite.path, outputdir=output_dir, debugfile=LOG_NAME,
variable=['CONFIGURATION_TYPE :{}'.format(default_tag),
'ENVIRONMENT :{}'.format(env),
'ENV_YAML :{}'.format(env_yaml)],
include=include_tags, tagstatinclude=include_tags,
if __name__ == '__main__':
if CURRENT_USER == 'root':
raise RuntimeError('DO NOT RUN AS ROOT')
# Validate if script is called with at least one argument
# Get args variables
ARGS = get_args()
if not (ARGS.run_suite_name or ARGS.run_all or ARGS.list_suite_name):
sys.exit('Execution Suite could not be empty')
env_configuration = (
False if not (ARGS.environment and ARGS.configuration) else True)
if ARGS.run_suite_name == 'Setup':
if not env_configuration:
sys.exit('Execution Environment arguments are required')
config_dict = update_yaml_file(ARGS.configuration,
configuration_type = config_dict['ctype']
configuration_file = config_dict['cfile']
environment_yaml = config_dict['eyaml']
# Update configuration file with values selected from command line
configuration_type, ARGS.environment,
configuration_file, environment_yaml)
# Check options selected
if ARGS.list_suite_name:
elif ARGS.run_all:
elif ARGS.run_suite_name: