ironic/tools/benchmark
Julia Kreger ffff76a682 Add basic tools for benchmarking
Adds a quickly hacked-together little tool to help provide sizing
insight into an ironic deployment's state and underlying performance.

Key data:
* Queries the list of nodes at a pure Python interface level with the
  database and reports the timing for the list of nodes to be returned.
  This information helps convey how long a periodic task spends in the
  database just for the query.
* Requests *all* nodes using the query pattern/structure of the nova
  resource tracker, using the marker to make any additional requests.
  The data is parsed and collected, and any identified vendors are
  counted.
* Collects basic data on conductors: how many are running, which
  conductor groups exist, and which drivers are currently loaded in
  the deployment.
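The marker-based paging pattern described above can be sketched roughly
as follows. This is a minimal, hypothetical sketch: `fake_query` stands
in for the real SQLAlchemy-backed node query, and the field names and
page size are illustrative, not ironic's actual schema.

```python
import time

def fetch_all_nodes(query_page, page_size=1000):
    """Fetch every node via marker-based pagination, timing each page.

    query_page is assumed to be a callable(marker, limit) returning a
    list of node dicts ordered by uuid; the names are hypothetical.
    """
    nodes = []
    marker = None
    while True:
        start = time.monotonic()
        page = query_page(marker=marker, limit=page_size)
        elapsed = time.monotonic() - start
        print(f"fetched {len(page)} nodes in {elapsed:.3f}s")
        nodes.extend(page)
        if len(page) < page_size:
            break  # short page means we have drained the table
        marker = page[-1]["uuid"]  # resume after the last row seen
    return nodes

# Stand-in backend: 2500 fake nodes keyed by a sortable uuid string.
_DB = [{"uuid": f"{i:08d}", "driver": "ipmi"} for i in range(2500)]

def fake_query(marker, limit):
    rows = [n for n in _DB if marker is None or n["uuid"] > marker]
    return rows[:limit]

all_nodes = fetch_all_nodes(fake_query, page_size=1000)
```

Timing each page separately, rather than only the total, is what lets
the tool show whether later pages get slower as the marker advances.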

All of this information provides operational insight into *what*
conditions exist within a deployment, allowing developers to identify
solutions based on the unique circumstances of larger deployments.

Also adds a utility to generate semi-randomized data so that we can
create a benchmark job in CI.
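A semi-randomized data generator might look something like this sketch.
The field names and driver list are illustrative assumptions rather
than ironic's actual schema; seeding the RNG keeps CI runs
reproducible.

```python
import random
import uuid

# Hypothetical driver choices; a real ironic deployment's set differs.
DRIVERS = ["ipmi", "redfish", "ilo", "idrac"]

def make_fake_node(rng):
    """Build one fake node record with semi-random field values."""
    return {
        "uuid": str(uuid.UUID(int=rng.getrandbits(128))),
        "driver": rng.choice(DRIVERS),
        "provision_state": rng.choice(["active", "available"]),
    }

rng = random.Random(42)  # fixed seed so repeated runs generate the same data
nodes = [make_fake_node(rng) for _ in range(100)]
```

Because the seed is fixed, two benchmark runs compare timings over
identical data rather than over two different random populations.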

Change-Id: Iae660aea82db8f1c4567ee2982595ccfdf434fe3
2021-05-24 16:36:02 +00:00

This folder contains two files:

* do_not_run_create_benchmark_data.py - This script will destroy your
  ironic database. DO NOT RUN IT. You have been warned!
  It is intended to generate a semi-random database of node data
  which can be used for benchmarks, instead of crafting a raw SQL file
  representing a test model.

* generate-statistics.py - This utility generates some statistics to
  both aid in basic benchmarking of ironic operations *and* provide
  developers with conceptual information regarding a deployment's size.
  It operates only by reading the data present, timing how long results
  take to return, and isolating some key details about the deployment.
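One of the details isolated above, the per-vendor node count, might be
implemented along these lines. This is a hedged sketch: reading a
`vendor` key out of `driver_info` is an assumption made for
illustration, not necessarily where the real tool looks.

```python
from collections import Counter

def count_vendors(nodes):
    """Tally vendor hints across node records, skipping nodes without one."""
    counts = Counter()
    for node in nodes:
        vendor = node.get("driver_info", {}).get("vendor")
        if vendor:
            counts[vendor] += 1
    return counts

# A handful of fake node records exercising the tally.
sample_nodes = [
    {"driver_info": {"vendor": "dell"}},
    {"driver_info": {"vendor": "hpe"}},
    {"driver_info": {"vendor": "dell"}},
    {"driver_info": {}},  # no vendor hint; should not be counted
]
```

Using a `Counter` keeps the summary a plain mapping that is easy to
print alongside the timing figures.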