This graph approximately answers the question "How many runs would
a third-party CI system be expected to handle for the integrated
gate?"
Change-Id: Iae74306ce3c922be3d82d61ca86e724f7e048dff
The header says check, and one of the jobs only exists in check,
so the references to the gate pipeline are an error.
Change-Id: Ie18c5b3ec6cceab1f5901bd6828c944fc42e4278
The OSIC grafyaml file was copied from vexxhost and the region name
carried over inappropriately. Fix that and refer to OSIC Cloud 1
instead.
Change-Id: Ib41bdb1e8d8b606a1ce0c2d9626a3b7b36c6bc6a
This will give the current build, ready, in-use, and delete totals
for all clouds.
Change-Id: Ib2636e4a0e94c0b8927c2128e297579605e47b3a
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
We recently added templating support to grafyaml, so let's start
using it and demo the functionality.
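As a rough sketch, a templated dashboard might look something like
the following (the field names and metric paths here are illustrative
only, not verified against the grafyaml schema):

```yaml
dashboard:
  title: Example Dashboard
  templating:
    - name: region            # template variable, referenced as $region
      type: query
      query: stats.example.*  # hypothetical metric path
  rows:
    - title: Node States
      panels:
        - title: Ready Nodes ($region)
          type: graph
          targets:
            - target: stats.example.$region.ready
```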
Change-Id: I2304c2403611698a98babbde967d217f27e3d9a7
Depends-On: Ib2f565e3d39523105b2c07d29d5257494a8bae67
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Possibly the previous race conditions that kicked this job out of the
gate have been addressed as part of the pymysql fixes. Let's start
getting some data, and see if we can find some bugs.
Change-Id: Id40fe990e61a09ebb9e99528f37402c19ccef3c2
During testing, an issue with Nova was identified and resolved
in change f2c1cfae. This addresses the persistent errors seen
on the Grenade job.
Now that we can get successful runs, the job is ready to move
to the check queue.
Depends-On: I22eb3a3fcd8e74a1d9085acde15c25a927ae12cb
Change-Id: Icfaaf075dda3ecc89a16f65962cc6c673fb7e3ae
Now that the dashboards are deployed, it is easier to tweak the
yaml file. Spotted a typo, and added a baseline to the integrated
and DVR/Multinode/Linuxbridge dashboards.
Change-Id: I0d4b3f6c89d1cda746ba56228c2a081b219d3a15
It is incredibly useful to see failure rates over time and
Grafana is an excellent tool for this.
This patch creates a dashboard that captures the failure rates
of Neutron check and gate jobs.
Change-Id: If7552e10bcafd17e245b3a5de839bcaa0ef12b97
I missed some tweaks on the previous Test Nodes graph change.
Also make the job runtimes wider like Paul suggested.
Change-Id: I5ac43909a679d273a557112ad8526a68de15f4f1
Add axis labels and units where appropriate.
Change the launch attempts graphs to summarize to 1m rather than
1h since grafana lets us zoom in. 1m is the lowest native unit
of time that will always show whole numbers for this metric (whose
lowest non-zero value is 1 event / 10 seconds).
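For example, a launch-attempts target summarized to 1m could be
written with Graphite's summarize() function roughly like this (the
metric path is a hypothetical placeholder):

```yaml
targets:
  # summarize to 1-minute buckets so zooming in still shows whole numbers
  - target: summarize(stats_counts.nodepool.launch.ready, '1m')
```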
Change the test nodes graph to stacked to match the way we normally
draw this graph, but change the tooltip to 'individual' so that
when hovering, individual values for the different states are
displayed, rather than cumulative (which does not make sense for
this application).
Also change the tooltip for the node graphs on the zuul dashboard
in the same manner.
Change-Id: I500aa486362476cff76a3d254093723f27021bed
Depends-On: Ie542dc4d0e151a00e84cc970c2cfa8c02377d7bf
This lets you see at a glance how many nodes are in each state
across all Rackspace regions.
Validating here, then will copy to other providers.
Change-Id: Id28ab4dc9228ab31fe2798840fb5eac92d701c95
These are per-region versions of the nodepool node state graph,
except that the values are not stacked in order to make the
individual values more accessible.
Change-Id: I8ec90758828484a9ffb7a90d2eacbcccc8b78bb4
There is no .error metric, but rather, errors are broken out by
cause. For this graph, simply display their sum.
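A sketch of summing the per-cause error metrics with Graphite's
sumSeries() and a wildcard (the metric path is illustrative, not the
exact one used here):

```yaml
targets:
  # no single .error metric exists; sum the per-cause breakdowns instead
  - target: sumSeries(stats_counts.nodepool.launch.error.*)
```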
Change-Id: Iae19e4e78098f3373c3195ff3ec52a11c5e92a3b
The multiple regions in Rackspace suggested reworking the launch
attempts graph, dropping the provider name from the legend for
brevity. Also, the graphs are larger. Make internap match.
Change-Id: Icba3293a4b09e5e022584f00f18647d7567363a9
Right now, the values we display are averages, which is confusing
to people. By setting valueName to current, we'll display the
current count for singlestats instead.
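For reference, a singlestat panel using valueName: current might be
sketched like this (panel title and metric path are hypothetical):

```yaml
panels:
  - title: Ready Nodes
    type: singlestat
    valueName: current   # show the latest value, not the average
    targets:
      - target: stats.gauges.nodepool.nodes.ready  # hypothetical path
```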
Change-Id: Icb1a62fb8b289165679ceec16e7d65dab98bf602
Depends-On: I4df8d130fce45cf58b01808997fc561cf8c4b42d
Signed-off-by: Paul Belanger <pabelanger@redhat.com>