Sometimes we want to collect everything except specific files and
folders. Support an exclude list for this.
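A minimal sketch of how such an exclude list could be consumed; the
`artcl_exclude_list` variable name and the tar-based filtering are
assumptions for illustration, not the role's actual implementation:

    # artcl_exclude_list is a hypothetical defaults entry, e.g.
    # [/var/log/journal, /etc/selinux/targeted].
    - name: Archive logs, skipping excluded paths (sketch)
      shell: >
        tar czf /tmp/{{ inventory_hostname }}.tar.gz
        {% for excl in artcl_exclude_list|default([]) %}--exclude={{ excl }} {% endfor %}
        /var/log /etc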
Change-Id: I10302ee50c5539fbacd539371dde0e7d0d7c4f71
We need a simple, togglable way to verify that Sphinx has, at a
minimum, generated each of the files outlined within
`artcl_create_docs_payload.table_of_contents` and added them to the
resulting index.html.
- Add a togglable var to the collect-logs role,
  `artcl_verify_sphinx_build`, defaulting to false (see sketch below)
- Update artcl_collect_dir to pull from common/defaults
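A minimal sketch of the verification, assuming the generated docs end
up under {{ artcl_collect_dir }}/docs; the exact path and the grep
check are illustrative:

    - name: Verify each TOC entry made it into index.html (sketch)
      shell: >
        grep -q "{{ item }}" {{ artcl_collect_dir }}/docs/index.html
      with_items: "{{ artcl_create_docs_payload.table_of_contents }}"
      when: artcl_verify_sphinx_build|bool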
Change-Id: I76fef28a026730818d44274ab01741034f9fa40d
When publishing logs upstream we need an option to get all files
from a host together, in addition to getting them one by one.
Save and upload the whole tar.gz for the host together with the
flattened logs and configs.
Change-Id: Ib272841a9ddf15d9ccd36a5d42a9dd94bf934f8f
Upstream we collect delorean logs from package building and save
them in the logs directory. Don't clean them up unless configured
to, and save the logs.
Change-Id: I3ce37dfcac416ab58763260586a8b1bafdb1427b
In addition to files with known text extensions, also rename files
without an extension in the /var/log and /etc directories.
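A minimal sketch of the extra rename pass over the collected tree;
the find invocation illustrates the idea rather than the role's exact
command:

    - name: Rename extensionless files under var/log and etc (sketch)
      shell: >
        find {{ artcl_collect_dir }}/{{ item }} -type f ! -name '*.*'
        -exec mv {} {}.txt \;
      with_items:
        - var/log
        - etc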
Change-Id: Ia9898816831392951cd927b7661d4d8fdcb4d007
Collecting the console logs from internal jobs is failing due to
certificate issues. Adding the -k option to curl skips certificate
validation so the console log can be collected.
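A minimal sketch of the adjusted download; the console log URL
variable is hypothetical:

    - name: Fetch the console log, skipping certificate validation (sketch)
      shell: >
        curl -k -o {{ artcl_collect_dir }}/console.log "{{ console_log_url }}"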
Change-Id: Ic90a045c5cc848996dd23be3210347fb95319a13
Debugging issues related to doc generation is very difficult without
access to the sphinx logs. This review pushes sphinx_build.log into
{{ artcl_collect_dir }}/docs, where it is consumed with all other
log files by the publish step.
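A minimal sketch of the copy, assuming the build log is written as
sphinx_build.log under the working dir; the source path is an
assumption:

    - name: Push sphinx_build.log into the collected docs (sketch)
      copy:
        src: "{{ working_dir }}/sphinx_build.log"
        dest: "{{ artcl_collect_dir }}/docs/"
        remote_src: true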
Change-Id: I1c99187f952e92f6a1d89c5f7fadb99e21541e8e
The static job doc only describes how to launch quickstart in the
most basic way. The automatically created doc has several different
configurations and we should instruct users how to find and use the
appropriate configuration.
Change-Id: I089154c6d0177fff945f9c7afcfc01f560fc77be
If no docker service runs on the host but docker is installed, the
docker stats command hangs forever. Prevent running any docker
commands if no docker service is running.
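A minimal sketch of the guard; checking via systemctl and the
registered variable name are illustrative:

    - name: Check whether a docker service is running (sketch)
      shell: systemctl is-active docker
      register: docker_service_state
      ignore_errors: true

    - name: Run docker stats only when docker is active (sketch)
      shell: docker stats --all --no-stream
      when: docker_service_state.rc == 0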
Change-Id: I697ed2bde251513ece0a25f7d2b51074f11e9c3e
Add the artcl_txt_rename option. When enabled, the publishing step
renames files with known text extensions to end in txt.gz, which is
displayed directly by upstream log servers.
Also simplify the way we handle the stackviz and tempest results.
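A minimal sketch of the rename when `artcl_txt_rename` is enabled;
the extension list and the find command are illustrative assumptions:

    - name: Rename known text files so they end in txt.gz (sketch)
      shell: >
        find {{ artcl_collect_dir }} -type f -name '*.{{ item }}'
        -exec gzip {} \; -exec mv {}.gz {}.txt.gz \;
      with_items:
        - log
        - conf
        - yaml
      when: artcl_txt_rename|bool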
Change-Id: I793088995ca5a945738c5b04c1cefdd974e5f2d1
We need to differentiate local_working_dir from working_dir,
as well as decouple the stack user from the `ansible_user` var.
Both of these are causing issues as we begin to automate
deployments in more environments.
- Clean up duplicate variables that are consumed via extras-common
- Note: extras-common depends on the common role in OOOQ
- Clean up the redundant var and superfluous quotes from the
  overcloud-scale role
- Clean up redundant comments in <role>/defaults/main.yml
Closes-bug: 1654574
Change-Id: I9c7a3166ed1fc5042c11e420223134ea912b45c5
As more ansible variables are shared or reused across roles it is
important to define these variables in a role that is always
executed. In this case that role is extras-common.
Note: This review is a blocker for https://review.openstack.org/#/c/418998/
Change-Id: I31fd13d7bcb98d73e7f16048c57c027d95faeec5
Collect logs of containers on the compute host:
list of containers on the host, list of container images,
statistics, and info about the containers' host.
For each container collect its logs, docker top, linux top for
2 seconds, and docker inspect output.
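A minimal sketch of the commands involved, assuming output is staged
under /var/log/extra on the compute host before collection; the
staging path and the container_names loop variable are hypothetical:

    - name: Collect host-wide container information (sketch)
      shell: |
        mkdir -p /var/log/extra/docker
        docker ps --all > /var/log/extra/docker_ps.log
        docker images > /var/log/extra/docker_images.log
        docker stats --all --no-stream > /var/log/extra/docker_stats.log
        docker info > /var/log/extra/docker_info.log
        top -b -d 1 -n 2 > /var/log/extra/host_top.log

    - name: Collect per-container logs and inspect output (sketch)
      shell: |
        docker logs {{ item }} > /var/log/extra/docker/{{ item }}.log 2>&1
        docker top {{ item }} > /var/log/extra/docker/{{ item }}_top.log
        docker inspect {{ item }} > /var/log/extra/docker/{{ item }}_inspect.log
      with_items: "{{ container_names | default([]) }}"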
Change-Id: I934f12856f74594e1d8848f784b72a1d9698659c
ARA is being set up by quickstart.sh and the static report is
later generated by collect-logs.sh in the workspace.
The collect-logs role should look in that location and recover the
report if it is available.
Change-Id: I611a071bb839f3c402a6c1bc2db35951f75461e0
We are having issues with collect-logs and potentially infra. Use
shell so we get better output.
This was motivated by a failure in collect-logs.
https://ci.centos.org/job/tripleo-quickstart-promote-master-delorean-minimal/858/console
failed here:
https://github.com/openstack/tripleo-quickstart-extras/blame/master/roles/collect-logs/tasks/publish.yml#L23
The output from the command module looks like this:
01:24:17.955 fatal: [localhost]: FAILED! => {"ansible_job_id":
"897063162915.10850", "changed": false, "cmd":
"/home/rdo-ci/.ansible/tmp/ansible-tmp-1484081443.92-162620171328073/command.py",
"failed": 1, "finished": 1, "msg": "[Errno 2] No such file or directory",
"outdata": "", "stderr": ""}
The actual command is buried in {longpath}/command.py, and for
the CI jobs only a handful of folks can get ssh access to the
nodes, which would allow for setting ANSIBLE_KEEP_REMOTE_FILES=1
to debug this (by looking at the actual command.py after failure).
Switching to a shell block outputs the expanded shell command in the
task output by default.
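A minimal sketch of the switch; the publish command variable is
hypothetical, the point is that the shell form logs the expanded
command on failure:

    # Before: command module failures only reference a generated
    # command.py on the remote node, hiding the real command.
    # - name: Publish logs
    #   command: "{{ artcl_publish_cmd }} {{ artcl_collect_dir }}"

    # After: the shell block prints the fully expanded command in the
    # task output, so console logs are enough to debug a failure.
    - name: Publish logs (sketch)
      shell: >
        {{ artcl_publish_cmd }} {{ artcl_collect_dir }}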
Change-Id: Ie94492f023a2c7af8b6361f9538184c7de55cd7a
* soft link the tempest.html file to the base of the collected logs
  dir if the tempest results exist.
* fetch the xml results for jenkins to parse if the file exists
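A minimal sketch of the soft-link step, assuming the collected tempest
results sit under {{ artcl_collect_dir }}/tempest (an assumption); the
xml fetch follows the same stat-then-act pattern:

    - name: Check for collected tempest results (sketch)
      stat:
        path: "{{ artcl_collect_dir }}/tempest/tempest.html"
      register: tempest_html

    - name: Soft link tempest.html to the base of the collected logs (sketch)
      file:
        src: "{{ artcl_collect_dir }}/tempest/tempest.html"
        dest: "{{ artcl_collect_dir }}/tempest.html"
        state: link
      when: tempest_html.stat.exists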
Change-Id: I0eccf093f6ca12ca6ff1d4697b6d64f20dca2e69
This patch adds to the tempest role the generation of the report
created by stackviz from the tempest results. Basically, what it
does is:
- Download stackviz from its git repository
- Copy the static html and javascript stackviz files so we don't
  need to install npm, download the dependencies, and build the
  static html (which saves some minutes)
- Install stackviz (only the processor)
- Run stackviz-export on the tempest results to collect them
Also, add the code needed in the collect-logs role to collect these
static files generated by stackviz.
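A minimal sketch of the collect-logs side, assuming stackviz writes
its static report under the stack user's home directory; the source
path is an assumption:

    - name: Collect the static stackviz report (sketch)
      shell: >
        cp -r /home/stack/stackviz {{ artcl_collect_dir }}/stackviz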
Change-Id: Ia5ee717eaa9777bad265ecc338154e131021283f
Uploading the logs can take too much time, resulting in an ssh
connection timeout during the ansible task execution. Log collection
usually runs in post build tasks, making it immune to jenkins job
timeouts.
This change makes the upload tasks asynchronous and adds a timeout.
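A minimal sketch of the async pattern; the upload command variable,
timeout, and poll interval are illustrative:

    - name: Upload collected logs without holding the ssh session open (sketch)
      shell: >
        {{ artcl_publish_cmd }} {{ artcl_collect_dir }}
      async: 7200
      poll: 15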
Change-Id: I65cf017717775ac85b953fe554f84c79e4f808b5
We should run and publish a minimal set of files even without any
hosts, when the run failed before inventory generation.
This change separates the collection step, which runs on all hosts
except localhost, from the rest, which runs on localhost. Running on
localhost always succeeds, even with an empty inventory.
Also add a log environment file for local collect-logs.sh runs that
do not upload logs.
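A minimal sketch of the playbook split; the host patterns carry the
idea, while the included task files are assumptions:

    # Collection runs on every inventory host except localhost, so an
    # empty inventory simply yields zero collection tasks.
    - hosts: 'all:!localhost'
      tasks:
        - include_role:
            name: collect-logs
            tasks_from: collect

    # Publishing runs on localhost and therefore still succeeds when
    # the run failed before the inventory was generated.
    - hosts: localhost
      tasks:
        - include_role:
            name: collect-logs
            tasks_from: publish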
Change-Id: I48d07d42be879026fb80afd73835484770006f85
Zuul uses different variables and files to handle artifact uploading;
this change adds the required modifications to collect and publish
logs in the rdoproject.org Zuul environment.
Change-Id: I5d74392210e55be5f5ecd889a5017750a874d45a
Follow up on the rename of the role in the documentation and in
absolute paths. Also clean up the non-functional files within the
role.
Change-Id: Idc0d60134ddbe48b4237ea56c7993d14cc22f5b1