elastic-recheck

"Use ElasticSearch to classify OpenStack gate failures"

  • Open Source Software: Apache license

Idea

Identifying the specific bug that is causing a transient error in the gate is difficult. Just identifying which tempest test failed is not enough because a single tempest test can fail due to any number of underlying bugs. If we can find a fingerprint for a specific bug using logs, then we can use ElasticSearch to automatically detect any occurrences of the bug.

Using these fingerprints elastic-recheck can:

  • Search ElasticSearch for all occurrences of a bug.
  • Identify bug trends, such as when the bug started, whether it has been fixed, and whether it is getting worse.
  • Classify bug failures in real time and report back to Gerrit if we find a match, so a patch author knows why the test failed.

queries/

All queries are stored as separate YAML files in a queries directory at the top of the elastic-recheck code base. These files are named ######.yaml (where ###### is the Launchpad bug number), and each file must contain a query keyword whose value is the ElasticSearch query text.
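
For example, a minimal query file might look like the following (the bug number and log message here are hypothetical):

# queries/1234567.yaml (hypothetical example)
query: >-
  message:"Timed out waiting for server to become ACTIVE"
  AND tags:"screen-n-cpu.txt"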

Guidelines for good queries:

  • Queries should get as close as possible to fingerprinting the root cause. A screen log query (e.g. tags:"screen-n-net.txt") is typically better than a console one (tags:"console"), as that's matching a deep failure versus a surface symptom.

  • Queries should not return any hits for successful jobs; if they do, the query isn't specific enough. As a rule of thumb, if more than 10% of a query's hits are from successful jobs, it probably isn't good enough.

  • If it's impossible to build a query to target a bug, consider patching the upstream program to be explicit when it fails in a particular way.

  • Use the 'tags' field rather than the 'filename' field for filtering. This is primarily because of grenade jobs, where the same log file shows up on both the 'old' and 'new' sides of the job. For example, tags:"screen-n-cpu.txt" will query in logs/old/screen-n-cpu.txt and logs/new/screen-n-cpu.txt. The tags:"console" filter is also used to query in console.html as well as tempest and devstack logs.

  • Avoid the use of wildcards in queries since they can put an undue burden on the query engine. A common case where wildcards are used but shouldn't be is querying against a specific set of build_name values, e.g. gate-nova-python26 and gate-nova-python27. Rather than using build_name:gate-nova-python*, list the jobs with an OR. For example:

    (build_name:"gate-nova-python26" OR build_name:"gate-nova-python27")
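
Putting these guidelines together, a well-scoped query might look like this (the message, log file, and job names here are hypothetical):

message:"Failed to allocate the network(s), not rescheduling"
AND tags:"screen-n-cpu.txt"
AND (build_name:"gate-tempest-dsvm-full" OR build_name:"gate-tempest-dsvm-neutron")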

When adding queries, you can optionally suppress the creation of graphs and notifications by adding suppress-graph: true or suppress-notification: true to the yaml file. These can be used to make sure expected failures don't show up on the unclassified page.
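
For example, a query for an expected, persistent failure might be suppressed like this (the bug number and message are hypothetical):

# queries/2345678.yaml (hypothetical example)
query: >-
  message:"Known persistent infra failure"
suppress-graph: true
suppress-notification: true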

If the only signature available is overly broad and additional logging can't reasonably produce a good signature, you can also filter the results of a query based on the test_ids that failed for the run being checked. To do this, add a test_ids keyword to the query file, followed by a list of the test_ids to verify failed. Each test_id should exclude any attrs, i.e. the list of attributes appended to the test_id between '[]', such as 'smoke', 'slow', or any service tags; this is how subunit-trace prints test ids by default if you're using it. If any of the listed test_ids failed for the run being checked and the query also matched, it will return a match. Since this filtering leverages subunit2sql, which only receives tempest test results from the gate pipeline, the technique will only work on tempest or grenade jobs in the gate queue. For more information refer to the infra subunit2sql documentation. For example, if your query yaml file looked like:

query: >-
  message:"ExceptionA"
test_ids:
  - tempest.api.compute.servers.test_servers.test_update_server_name
  - tempest.api.compute.servers.test_servers_negative.test_server_set_empty_name

this will only match the bug if the logstash query had a hit for the run and either test_update_server_name or test_server_set_empty_name failed during the run.
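
For instance, a test id recorded with attrs such as the following (the bracketed attrs here are hypothetical):

tempest.api.compute.servers.test_servers.test_update_server_name[id-xyz123,smoke]

should be listed in test_ids with the bracketed portion stripped, i.e. tempest.api.compute.servers.test_servers.test_update_server_name, as shown in the example above.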

To support adding queries rapidly, it's considered socially acceptable for core reviewers to approve changes that only add one new bug query, and even to self-approve those changes.

Adding Bug Signatures

Most transient bugs seen in the gate are not bugs in tempest associated with a specific tempest test failure, but rather some sort of issue further down the stack that can cause many tempest tests to fail.

  1. Given a transient bug that is seen during the gate, go through the logs and try to find a log message that is associated with the failure. The closer to the root cause the better.

    • Note that queries can only be written against INFO level and higher log messages. This is by design to not overwhelm the search cluster.
    • Because non-voting jobs are not allowed in the gate queue and e-r is primarily used for tracking bugs in the gate queue, it doesn't spend time tracking race failures in non-voting jobs; they are considered unstable by definition, since they don't vote.
      • There is, however, a special 'allow-nonvoting' key that can be added to a query yaml file to allow tracking non-voting job failures in the graph. They won't show up in the bot (IRC or Gerrit comments), though.
  2. Go to logstash.openstack.org and create an ElasticSearch query to find the log message from step 1. To see the possible fields to search on, click on an entry. Lucene query syntax documentation is available at lucene.apache.org. (See the example query after these steps.)

  3. Tag your commit with a Related-Bug tag in the footer, or add a comment to the bug with the query you identified and a link to the logstash URL for that query search.

    Putting the logstash query link in the bug report is also valuable in the case of rare failures that fall outside the window of how far back log results are stored. In such cases the bug might be marked as Incomplete and the e-r query could be removed, only for the failure to re-surface later. If a link to the query is in the bug report someone can easily track when it started showing up again.

  4. Add the query to elastic-recheck/queries/BUGNUMBER.yaml (All queries can be found on git.openstack.org) and push the patch up for review.
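
As referenced in step 2, the fingerprint itself is a Lucene query string; a hypothetical example (the message and log file are made up):

message:"Timed out waiting for response from cell" AND tags:"screen-n-api.txt"

In step 4, this string becomes the query value in queries/BUGNUMBER.yaml.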

You can also help classify Unclassified failed jobs, which is an aggregation of all failed voting gate jobs that don't currently have elastic-recheck fingerprints.

Removing Bug Signatures

Old queries which are no longer hitting in logstash and are associated with fixed or incomplete bugs are routinely deleted. This keeps the load on the ElasticSearch engine as low as possible when checking a job failure. If a bug marked as Incomplete does show up again, the bug should be re-opened with a link to the failure and the e-r query should be restored.

Queries that have suppress-graph: true in them generally should not be removed; they track persistent infra issues that are not going away, so we want to keep them around.

Automated Cleanup

  1. Run the elastic-recheck-cleanup command:

    $ tox -e venv -- elastic-recheck-cleanup -h
    ...
    usage: elastic-recheck-cleanup [-h] [--bug <bug>] [--dry-run] [-v]
    
    Remove old queries where the affected projects list the bug status as one
    of: Fix Committed, Fix Released
    
    optional arguments:
         -h, --help   show this help message and exit
         --bug <bug>  Specific bug number/id to clean. Returns an exit code of
                      1 if no query is found for the bug.
         --dry-run    Print out old queries that would be removed but do not
                      actually remove them.
         -v           Print verbose information during execution.

    Note

    You may want to run with the --dry-run option first and sanity check the removed queries before committing them (see the example after these steps).

  2. Commit the changes and push them up for review:

    $ git commit -a -m "Remove old queries: `date +%F`"
    $ git review -t rm-old-queries
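
For example, combining the documented --bug and --dry-run options to preview the removal of a single bug's query without changing anything:

$ tox -e venv -- elastic-recheck-cleanup --bug 1331274 --dry-run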

Manual Cleanup

  1. Go to the All Pipelines page.
  2. Look for anything that is grayed out at the bottom, which means it has not had any hits in 10 days.
  3. From those, look for the ones whose status in Launchpad is Fixed/Incomplete/Invalid/Won't Fix - those are candidates for removal.

Note

Sometimes bugs are still New/Confirmed/Triaged/In Progress but have not had any hits in over 10 days. Those bugs should be re-assessed to see if they are now actually fixed or incomplete/invalid, marked as such, and the related query removed.

Running Queries Locally

You can execute an individual query locally and analyze the search results:

$ elastic-recheck-query queries/1331274.yaml
total hits: 133
build_status
  100% FAILURE
build_name
  48% check-grenade-dsvm
  15% check-grenade-dsvm-partial-ncpu
  13% gate-grenade-dsvm
  9% check-grenade-dsvm-icehouse
  9% check-grenade-dsvm-partial-ncpu-icehouse
build_branch
  95% master
  4% stable/icehouse

Notes

  • The html generation will generate links that work with Kibana3's logstash.json dashboard. If you want the links in these generated files to work properly, you will need to host a Kibana3 instance with that dashboard.
  • View the OpenStack ElasticSearch cluster health here.

Development

In addition to using tox, you can also run make to list the current container build and testing commands.

Future Work

  • Move config files into a separate directory
  • Make unit tests robust
  • Add debug mode flag
  • Expand gating testing
  • Cleanup and document code better
  • Add ability to check if any resolved bugs return
  • Move away from polling ElasticSearch to discover if it's ready or not
  • Add a nightly job to propose a patch to remove bug queries that return no hits (bug hasn't been seen in 2 weeks and must be closed)