Add reproducer script for OVB and multinode jobs

This review adds functionality to create a reproducer
script in the logs. The reproducer script will allow
users to recreate failing OVB and multinode jobs
in personal cloud tenants.

User documentation for the reproducer-quickstart script
is added.

Change-Id: I9fe8550a75c3ffb6d1271b01b1144bfbdc82c95d
Ronelle Landy 2017-12-05 15:09:03 -05:00 committed by Sagi Shnaidman
parent 7bc877db04
commit ed220f5b98
9 changed files with 368 additions and 1 deletion

View File

@ -50,7 +50,7 @@ artcl_exclude_list:
- /etc/pki/ca-trust/extracted
- /etc/alternatives
- /var/log/journal
artcl_collect_dir: "{{ local_working_dir }}/collected_files"
# artcl_collect_dir is defaulted in extras-common
artcl_gzip_only: true
artcl_tar_gz: false

View File

@ -47,6 +47,11 @@
/(\/var\/log\/|\/etc\/)[^ \/\.]+\.gz$/ { rename($0) }';
when: artcl_txt_rename|bool
- name: Create the reproducer script
include_role:
name: create-reproducer-script
when: lookup('env', 'TOCI_JOBTYPE') != ''
- name: upload to the artifact server using pubkey auth
shell: rsync -av --quiet -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" {{ artcl_collect_dir }}/ {{ artcl_rsync_path }}/{{ lookup('env', 'BUILD_TAG') }}
async: "{{ artcl_publish_timeout }}"

View File

@ -0,0 +1,82 @@
<!DOCTYPE HTML>
<html lang="en-US">
<head>
<title>README for Quickstart Job Reproducer Script</title>
</head>
<body>
<h1>How to reproduce a job result using the reproducer-quickstart.sh script</h1>
<p>Check the top-level logs directory for a file <b>reproducer-quickstart.sh</b>.
Running this file will set up a stack on a personal tenant where the job run
can be reproduced.
If no <b>reproducer-quickstart.sh</b> file is found, it usually means there was an
infra failure before Quickstart could start, or a problem collecting the logs.
Check on IRC Freenode channel <i>#tripleo</i> to see if there's an ongoing infra
issue.</p>
<h2>What is the reproducer-quickstart.sh script?</h2>
<p>A <b>reproducer-quickstart.sh</b> script is generated at the top-level log directory
for jobs that run TripleO-Quickstart and toci_gate_test-oooq.sh. Running the script
from a host machine will set up a stack to the point where the user can ssh to the
undercloud, source a variable file, and run <b>toci_gate_test-oooq.sh</b>. As such,
reproducing the complete job result requires some manual interaction.</p>
<p>Each <b>reproducer-quickstart.sh</b> script is generated to be specific to its
job; <b>reproducer-quickstart.sh</b> scripts from other jobs will differ.</p>
<h2>Running the reproducer-quickstart.sh script</h2>
<p>Use <code>cURL</code>, <code>wget</code> or a similar tool to copy
<b>reproducer-quickstart.sh</b> to a clean working directory on a host that can
access the cloud tenant you want to run the reproducer on.
Make sure that the copied script is executable (has +x permissions) before you
attempt to run it.
<b>Source the tenant credentials before running the script.</b>
Also, it is a good idea to check that the tenant has enough capacity
and resources to reproduce the job before you begin.</p>
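<p>A minimal sequence might look like the following sketch (the logs URL and the
<code>openrc</code> credentials file name are placeholders for your job's log
location and your tenant's credentials file):</p>
<pre><code>mkdir reproduce-job &amp;&amp; cd reproduce-job
# Download the script from the job's top-level logs directory (placeholder URL)
curl -O https://&lt;logs-server&gt;/&lt;job-logs-path&gt;/reproducer-quickstart.sh
chmod +x reproducer-quickstart.sh
# Source the tenant credentials (example file name)
source ~/openrc
./reproducer-quickstart.sh</code></pre>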
<p>Note that, by default, the <b>reproducer-quickstart.sh</b>
script is written to run on a personal RDO Cloud tenant. Instructions on how to
modify the script to run on another host cloud are included in a section below.</p>
<p>When the script completes, you will see a list of instructions printed out on how
to run <b>toci_gate_test-oooq.sh</b> from the undercloud:</p>
<ul>
<li><code>ssh to the undercloud: $ ssh zuul@$ansible_host</code></li>
<li><code>Source the environment settings file: $ source /home/zuul/env_vars_to_src.sh</code></li>
<li><code>Run the toci gate script: $ /opt/stack/tripleo-ci/toci_gate_test-oooq.sh</code></li>
</ul>
<h2>Input argument options in the reproducer-quickstart.sh script</h2>
<p>Running <code>reproducer-quickstart.sh --help</code> will show a list of the
input options. All the options have a default and none are required.
The options are listed below with explanations; an example invocation follows the list:</p>
<ul>
<li><code>-w, --workspace</code>: directory where the virtualenv, inventory files, etc.
are created. Defaults to creating a directory in /tmp.</li>
<li><code>-v, --create-virtualenv</code>: create a virtualenv to install Ansible and dependencies.
Defaults to true.</li>
<li><code>-r, --remove-stacks-keypairs</code>: delete all stacks in the tenant before deployment.
Will also delete associated keypairs if they exist.
Defaults to false.</li>
<li><code>-p, --nodestack-prefix</code>: add a unique prefix for multinode and singlenode stacks.
Defaults to empty.</li>
</ul>
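<p>As an example invocation (the values shown are placeholders), the options can be
combined in a single call:</p>
<pre><code># Reuse an existing workspace directory, skip the virtualenv,
# remove old stacks, and prefix the node stacks
./reproducer-quickstart.sh --workspace ~/reproduce-ws \
    --create-virtualenv false \
    --remove-stacks-keypairs true \
    --nodestack-prefix myjob-</code></pre>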
<h2>Using an alternative host cloud</h2>
<p>The <b>reproducer-quickstart.sh</b> script is written to run on a personal RDO Cloud tenant.
If you want to run on a different host cloud, modify <b>reproducer-quickstart.sh</b> as follows
(a sketch of these edits follows the list):</p>
<ul>
<li>Add a git clone of <code>tripleo-environments</code>, or any other repository containing
the cloud details, after cloning <code>tripleo-quickstart-extras</code>.</li>
<li>Add the repository to quickstart-extras-requirements.txt:
<code>echo "file://$WORKSPACE/tripleo-environments/#egg=tripleo-environments" >>
$WORKSPACE/tripleo-quickstart/quickstart-extras-requirements.txt</code></li>
<li>Use that host cloud's environment file in place of RDO Cloud's environment file.</li>
</ul>
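<p>A sketch of those edits (the repository URL and the <code>my-cloud.yml</code>
environment file path are hypothetical and depend on where your cloud details live):</p>
<pre><code># After cloning tripleo-quickstart-extras, also clone the repo with the cloud details
git clone &lt;tripleo-environments-repo-url&gt;

# Make the repository installable alongside the other extras
echo "file://$WORKSPACE/tripleo-environments/#egg=tripleo-environments" &gt;&gt; \
    $WORKSPACE/tripleo-quickstart/quickstart-extras-requirements.txt

# In the ansible-playbook call, replace the RDO Cloud environment file
#   -e @$WORKSPACE/tripleo-quickstart-extras/config/environments/rdocloud.yml
# with your cloud's environment file, e.g.
#   -e @$WORKSPACE/tripleo-environments/config/environments/my-cloud.yml</code></pre>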
</body>
</html>

View File

@ -0,0 +1,40 @@
create-reproducer-script
========================
This role creates a script to reproduce OVB and multinode jobs.
Role Variables
--------------
For the defaults of these variables, see the defaults/main.yml file in this role.
* env_vars_to_source_file: env_vars_to_src.sh
* reproducer_quickstart_script: reproducer-quickstart.sh.j2
From the extras-common role:
* artcl_collect_dir: "{{ local_working_dir }}/collected_files"
Dependencies
------------
The role is run within the collect-logs role.
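Within collect-logs the role is included via `include_role` and gated on the job
type environment variable, as in this change (minimal sketch of that task):
```yaml
- name: Create the reproducer script
  include_role:
    name: create-reproducer-script
  when: lookup('env', 'TOCI_JOBTYPE') != ''
```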
Example Playbook
----------------
```yaml
---
- name: Create a file to reproduce the job
hosts: localhost
roles:
- create-reproducer-script
```
License
-------
Apache 2.0
Author Information
------------------
OpenStack

View File

@ -0,0 +1,3 @@
env_vars_to_source_file: env_vars_to_src.sh
reproducer_quickstart_script: reproducer-quickstart.sh.j2
reproducer_quickstart_readme_file: "{{ artcl_collect_dir }}/README-reproducer-quickstart.html"

View File

@ -0,0 +1,2 @@
dependencies:
- extras-common

View File

@ -0,0 +1,21 @@
---
- name: Set fact for environment variables
set_fact:
zuul_changes: "{{ lookup('env', 'ZUUL_CHANGES') }}"
nodes_config: "{{ lookup('env', 'NODES_FILE') }}"
toci_jobtype: "{{ lookup('env', 'TOCI_JOBTYPE') }}"
- name: Set fact for stable branch
set_fact:
stable_release: "{{ lookup('env', 'STABLE_RELEASE') }}"
- name: Create the reproducer file from template
template:
src: "{{ reproducer_quickstart_script }}"
dest: "{{ artcl_collect_dir }}/reproducer-quickstart.sh"
mode: 0755
- name: Create reproducer script documentation from template
template:
src: README-reproducer-quickstart.html.j2
dest: "{{ reproducer_quickstart_readme_file }}"

View File

@ -0,0 +1,213 @@
#!/bin/bash
# See documentation for using the reproducer script:
# README-reproducer-quickstart.html
# (in the same top-level logs directory as this reproducer script).
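# Option defaults; each can be overridden via an environment variable
# or the corresponding command-line flag parsed below.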
: ${WORKSPACE:=$(mktemp -d -t reproduce-tmp.XXXXX)}
{% if 'ovb' in toci_jobtype %}
: ${CREATE_VIRTUALENV:=true}
{% else %}
: ${CREATE_VIRTUALENV:=false}
{% endif %}
: ${REMOVE_STACKS_KEYPAIRS:=false}
: ${NODESTACK_PREFIX:=""}
usage () {
echo "Usage: $0 [options]"
echo ""
echo "Options:"
echo " -w, --workspace <dir>"
echo " directory where the virtualenv, inventory files, etc."
echo " are created. Defaults to creating a directory in /tmp"
echo " -v, --create-virtualenv"
echo " create a virtualenv to install Ansible and dependencies."
echo " Options to pass true/false. Defaults to true for OVB. "
echo " Defaults to false for other deployment types."
echo " -r, --remove-stacks-keypairs"
echo " delete all Heat stacks (both Multinode and OVB created) "
echo " in the tenant before deployment."
echo " Will also delete associated keypairs if they exist."
echo " Options to pass true/false.Defaults to false."
echo " -p, --nodestack-prefix"
echo " add a unique prefix for multinode and singlenode stacks"
echo " Defaults to empty."
echo " -h, --help print this help and exit"
}
set -e
# Check that tenant credentials have been sourced
if [[ ! -v OS_TENANT_NAME ]]; then
echo "Tenant credentials are not sourced."
exit 1;
fi
# Input argument assignments
while [ "x$1" != "x" ]; do
case "$1" in
--workspace|-w)
WORKSPACE=$(realpath $2)
shift
;;
--create-virtualenv|-v)
CREATE_VIRTUALENV=$2
shift
;;
--remove-stacks-keypairs|-r)
REMOVE_STACKS_KEYPAIRS=$2
shift
;;
--nodestack-prefix|-p)
NODESTACK_PREFIX=$2
shift
;;
--help|-h)
usage
exit
;;
--) shift
break
;;
-*) echo "ERROR: unknown option: $1" >&2
usage >&2
exit 2
;;
*) break
;;
esac
shift
done
set -x
# Exit if running ovb-fakeha-caserver
# This test is not converted to run with tripleo-quickstart
export TOCI_JOBTYPE="{{ toci_jobtype }}"
if [[ "$TOCI_JOBTYPE" == *"ovb-fakeha-caserver"* ]]; then
echo "
ovb-fakeha-caserver is not run with tripleo-quickstart.
It cannot be reproduced using this script.
"
exit 1;
fi
# Start from a clean workspace
export WORKSPACE
cd $WORKSPACE
rm -rf tripleo-quickstart tripleo-quickstart-extras
# Clone quickstart and quickstart-extras
git clone https://github.com/openstack/tripleo-quickstart
git clone https://github.com/openstack/tripleo-quickstart-extras
# Set up a virtual env if requested
if [ "$CREATE_VIRTUALENV" = "true" ]; then
virtualenv --system-site-packages $WORKSPACE/venv_ansible
source $WORKSPACE/venv_ansible/bin/activate
pip install --upgrade setuptools pip
pip install -r $WORKSPACE/tripleo-quickstart/requirements.txt
fi
if [ "$REMOVE_STACKS_KEYPAIRS" = "true" ]; then
# The cleanup template expects a /bin dir in the workspace (from quickstart setup).
# Strip that path so the clients sourced from the venv are used instead.
sed -i "s#{.*/bin/##g" $WORKSPACE/tripleo-quickstart-extras/roles/ovb-manage-stack/templates/cleanup-stacks-keypairs.sh.j2
fi
# Export ZUUL_CHANGES on the Ansible host if there are changes in
# tripleo-quickstart, tripleo-quickstart-extras or tripleo-ci repos
# before running playbooks
export ZUUL_CHANGES="{{ zuul_changes }}"
# Export our roles path so that we can use the roles from our workspace
export ANSIBLE_ROLES_PATH=$ANSIBLE_ROLES_PATH:$WORKSPACE/tripleo-quickstart/roles:$WORKSPACE/tripleo-quickstart-extras/roles
# Export the node config for the topology you need:
export NODES_FILE="{{ nodes_config }}"
{% if 'ovb' in toci_jobtype %}
ansible-playbook tripleo-quickstart-extras/playbooks/ovb-create-stack.yml \
-e local_working_dir=$WORKSPACE \
-e virthost=localhost \
-e @$WORKSPACE/tripleo-quickstart-extras/config/environments/rdocloud.yml \
-e ssh_extra_args="" \
-e ovb_dump_hosts=true \
-e ovb_setup_user=true \
-e cleanup_stacks_keypairs=$REMOVE_STACKS_KEYPAIRS \
-e @$WORKSPACE/tripleo-quickstart/$NODES_FILE
# Run the playbook to setup the undercloud/subnodes to look like nodepool nodes
ansible-playbook -i $WORKSPACE/ovb_hosts $WORKSPACE/tripleo-quickstart-extras/playbooks/nodepool-setup.yml
# Get ansible_host from the OVB inventory and copy the nodes.json file to the undercloud
export $(awk '/subnode-0/ {print $2}' ovb_hosts)
scp $WORKSPACE/nodes.json zuul@$ansible_host:/home/zuul/
{% endif %}
{% if 'multinode' in toci_jobtype or 'singlenode' in toci_jobtype %}
# Calculate subnode_count
if [[ -z "$NODES_FILE" ]]; then
SUBNODE_COUNT=1
else
SUBNODE_COUNT=$(( $( awk '/node_count: / {print $2}' $WORKSPACE/tripleo-quickstart/$NODES_FILE ) +1 ))
fi
ansible-playbook tripleo-quickstart-extras/playbooks/provision_multinodes.yml \
-e local_working_dir=$WORKSPACE \
-e subnode_count=$SUBNODE_COUNT \
-e prefix=$NODESTACK_PREFIX
# Run the playbook to setup the undercloud/subnodes to look like nodepool nodes
ansible-playbook -i $WORKSPACE/multinode_hosts $WORKSPACE/tripleo-quickstart-extras/playbooks/nodepool-setup.yml
# Get ansible_host
export $(awk '/subnode-0/ {print $2}' multinode_hosts)
{% endif %}
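# If the job recorded a newest DLRN hash, pass it on via EXTRA_VARS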
{% if dlrn_hash_newest is defined %}
EXTRA_VARS="$EXTRA_VARS --extra-vars dlrn_hash_tag_newest={{ hostvars['undercloud'].dlrn_hash_newest }} "
{% endif %}
# Create the env_vars_to_source file and copy it to the undercloud
cat >"{{ env_vars_to_source_file }}" <<EOF
export ZUUL_CHANGES="{{ zuul_changes }}"
export NODES_FILE="{{ nodes_config }}"
export TOCI_JOBTYPE="{{ toci_jobtype }}"
export EXTRA_VARS="$EXTRA_VARS --extra-vars dlrn_hash_tag={{ hostvars['undercloud'].dlrn_hash }} "
EOF
{% if stable_release != '' %}
cat >>"{{ env_vars_to_source_file }}" <<EOF
export STABLE_RELEASE="{{ stable_release }}"
EOF
{% endif %}
{% if 'ovb' in toci_jobtype %}
cat >>"{{ env_vars_to_source_file }}" <<EOF
export TE_DATAFILE=/home/zuul/nodes.json
EOF
{% endif %}
scp "$WORKSPACE/{{ env_vars_to_source_file }}" zuul@$ansible_host:/home/zuul/
# Remove -x so that the instructions don't print twice
set +x
# Instruct the user to execute toci_gate_test-oooq.sh on the undercloud
echo "
Now complete the test execution on the undercloud:
- ssh to the undercloud: $ ssh zuul@$ansible_host
- Source the environment settings file: $ source /home/zuul/env_vars_to_src.sh
- Run the toci gate script: $ /opt/stack/tripleo-ci/toci_gate_test-oooq.sh
To avoid timeouts, you can start a screen session before executing the commands: $ screen -S ci
"

View File

@ -25,3 +25,4 @@ timestamper_cmd: >-
enable_libvirt_tripleo_ui: false
composable_scenario: ""
upgrade_composable_scenario: ""
artcl_collect_dir: "{{ local_working_dir }}/collected_files"