The zuul launchers are gone so we can't have the jenkins logstash client
listen to them for 0mq events anymore. This is fine; we submit the jobs
directly from Zuulv3 now and just need this service to act as a geard.
Future cleanup should replace this service with a standalone geard
instead.
Change-Id: I132f065d570054081a05b27a430cfd3a5a2ad461
The devstack change in Pike to run nova-conductor in a tiered
"superconductor" mode by default creates a couple of different
log files for the top-level conductor service and the cell-level
conductor service, so we need to index those as well.
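For illustration, the new entries would be additions to the source-files
list in the log client config, roughly like the sketch below. The file
names and tags are assumptions for illustration (the real names come
from the devstack superconductor change), not copied from the actual
change:

  source-files:
    # hypothetical names for the two new conductor logs
    - name: logs/screen-n-super-cond.txt
      tags:
        - screen
        - oslofmt
    - name: logs/screen-n-cond-cell1.txt
      tags:
        - screen
        - oslofmt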
Depends-On: I44fc11f09bb7283be0b068f5e02a424f3e5dafe2
Change-Id: Ic828dd95520b21246d46a8087e8579fc3820f5c7
Over time the keystone log file location has evolved in part due to
keystone dropping support for eventlet and relying on wsgi servers to
host keystone. We have gone through the following iterations of log
locations:
logs/screen-key.txt # Eventlet based log file
logs/apache/keystone.txt # Apache + wsgi < pike location
logs/screen-keystone.txt # Apache + uwsgi + systemd >= pike location
Since eventlet is completely gone at this point we drop all support for
the screen-key.txt log name. We add in support for screen-keystone.txt,
which was missing, and update the comments around apache/keystone.txt to
indicate it is < pike (so it will only be present on older branches and
in grenade jobs).
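A sketch of what the updated source-files entries might look like, using
the same assumed name/tags format as above (comments and tags are
illustrative):

  source-files:
    # >= pike: Apache + uwsgi + systemd
    - name: logs/screen-keystone.txt
      tags:
        - screen
        - oslofmt
    # < pike only: older branches and grenade jobs
    - name: logs/apache/keystone.txt
      tags:
        - oslofmt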
Change-Id: I8b04a8ee6c6ab45ffac47119df307a86cd3c2c23
Bring online 2 more zuul-launchers to help run ansible-playbooks. This
should result in 150 jobs per launcher now.
Change-Id: I774dc8672a2370ff6b6bd0e6eaa8d50186c504a6
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
We apparently don't index this log. Index it, as devstack sometimes
fails on the subnode and we want to track that.
Change-Id: I68874c88d4e4b0ab84baa9a15485bde8f2b8ea94
This commit enables the mqtt notifications in the gearman worker. Most
of the setup was already there from our previous failed attempts to
leverage the logstash plugin directly. This just pivots that to use the
gearman worker instead.
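As a rough sketch of the shape of this configuration, the worker config
would grow a handful of MQTT settings. The key names and the broker host
below are illustrative assumptions, not the worker's confirmed schema:

  # hypothetical MQTT settings for the log gearman worker
  mqtt-host: firehose.openstack.org
  mqtt-port: 8883
  mqtt-topic: logstash/worker
  mqtt-ca-certs: /etc/ssl/certs/ca-certificates.crt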
Change-Id: I6becec12604c61fe50d3e6b9c7ed9f9e9be311ae
Depends-On: I0a17444cce18dd4b63f1f924e393483f6d8fe8eb
Depends-On: I43be3562780c61591ebede61f3a8929e8217f199
This commit enables the mqtt support in the subunit gearman worker so
the workers will report over mqtt when they process a subunit file.
Change-Id: Ifff2f57740f160e328c0548254e16b04e6ab6c4e
Depends-On: Ibd13b737eccf52863a69d20843cb7d50242f7bb9
Those files are generated by the neutron post_test_hook.sh script and
contain all log messages from all functional / fullstack test cases,
merged into a single file.
This will help with capturing particular failure patterns for those jobs.
Without this patch, only the console log is indexed.
Depends-On: I59c11085d1645283f329e5fb22cc2988e19e8298
Change-Id: I9ebbe249e8591b1cd84a8fa7c0314134c2898648
The octavia process logs were missing from the source-files list
in jenkins-log-client.yaml. This patch corrects the file.
Change-Id: I65eef061ea56bf4730049fb8b0d29c7552b70c55
Add undercloud installation and image building logs to logstash
tracking because they are not shown in the console log.
Change-Id: I751712a74ab0736257d4af73c60d82b10333e97e
With the switch to the zuul launcher instead of jenkins we now get the
zmq finished event after all logs are done copying (unlike with jenkins,
where the console log could show up later). As a result we don't need to
keep retrying console log downloads until some EOF marker string shows
up in the file; instead we can download what is there and know it is
complete.
Change-Id: I789c073a2fab8863de833684bc64b3e5cb405cf8
This adds a rule to collect 'karma.subunit' files generated from
JavaScript tests in the gate. This should allow JavaScript test results
to be collected by subunit2sql.
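A sketch of such a rule, assuming the client supports a subunit-files
list with name entries like the existing testrepository ones; the path
shown is illustrative only:

  subunit-files:
    # hypothetical entry for JavaScript (karma) test results
    - name: logs/karma.subunit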
Change-Id: I8f5c4997355b01049718799036796ddec1747e4c
This adds the static node zuul launcher, zlstatic01, into production
so that it can communicate with zuul, be graphed by cacti, and
send events to the logstash zmq listeners.
Change-Id: I3cc083c38d4e8c3dfb847f73d4edd0572c48f96b
The puppet-subunit2sql module updated where it puts its config file,
placing it in /etc/subunit2sql instead of /etc/logstash. But we missed
updating the worker yaml config file. This patch corrects the oversight.
Change-Id: Ibb4d2eee7deb7876fc17c49dade8d4ae1236ca3e
This commit adds back the job filter on the old-side subunit stream
from grenade runs. The subunit stream will only end up under logs/old
if it's a grenade run, so to be more efficient let's restrict looking
for a subunit stream there to grenade jobs only.
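For illustration, the old-side entry would look roughly like this,
assuming a per-file job-filter key is supported; the path and regex are
how I'd expect them to look, not copied from the change:

  subunit-files:
    # only grenade runs produce logs/old, so only look there for
    # grenade jobs
    - name: logs/old/testrepository.subunit
      job-filter: 'gate-grenade-dsvm.*'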
Change-Id: I5a6c9b17f923526745507a131658f581a9d94e0a
This commit removes the retry-get parameters from the subunit file
collection in the logstash gearman client. These were cargo-culted
from elsewhere in the yaml file and don't add anything useful in
this case. The param is only there to work around jenkins limitations
with console log uploading.
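Concretely, the change is roughly the following for one entry (format as
assumed above; retry-get only matters for the jenkins console log case):

  # before (cargo-culted):
  - name: logs/testrepository.subunit
    retry-get: true
  # after:
  - name: logs/testrepository.subunit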
Change-Id: Ic751d5e295e605b20386ad9cde6500bae44f34d8
This commit will enable collection of subunit results from all
subunit-emitting jobs in the gate and periodic pipelines. Now that we
prune the database to only 6 months of history we should (hopefully)
have plenty of headroom for the extra data generated by
non-tempest-dsvm jobs.
Depends-On: I58a640f804313e1e4b80680f0e39b86d76cb29da
Change-Id: If5691af792409f02352f25b1498dd78294a7cd74
This commit updates the selection regex to be a bit more general and
ensure that the subunit streams from all the gate tempest jobs are
being picked up. There are certain jobs that don't start with
gate-tempest-dsvm or gate-grenade-dsvm that we want to collect
results for. For example, the puppet jobs are named something like:
gate-puppet-openstack-integration-scenario001-tempest-dsvm-centos7
and those would not be picked up with the old regex.
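As an illustration of the kind of loosening described (the exact pattern
in the change may differ), dropping the gate- prefix anchor is enough to
let job names like the puppet one above match:

  subunit-files:
    - name: logs/testrepository.subunit
      # old, too narrow:  'gate-(tempest|grenade)-dsvm.*'
      # more general: anything with tempest-dsvm or grenade-dsvm in it
      job-filter: '.*-(tempest|grenade)-dsvm.*'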
Change-Id: If68978679a9ebb6252663691b1850918db913990
This commit fixes the regex on periodic job subunit collection filter
so it'll actually have matches. The jobs don't have a gate-* name on
the periodic queue, so we shouldn't assume they do.
Change-Id: I6f2d78b4b7d8cdb1d15dea0ad4b638c30b229f2b
This commit expands the test result collection to also include
periodic jobs. The data rate for this is relatively low and there
shouldn't be any issues with data purity because the jobs only
run with merged code, so the tests should pass.
Change-Id: I29f930c15fff124cb2c4934a84a999d3ce1a18cb
The elastic-recheck uncategorized bugs page has manila jobs in it
so we need to be able to index the manila service logs in logstash.
For example: http://goo.gl/k1bjUX
Change-Id: I3360858a5a704bc1304c2bc00792ea15f2b924f2
Related-Bug: #1491325
With the recent ceph gate blocker around backups it'd be good to be
able to query for failures in the cinder backup logs, so let's add them
to the logstash index.
Example log: http://goo.gl/4IIWai
Related-Bug: #1476416
Change-Id: Ie5f63e3a4acb5cfef97292cd7c8a009d86ee703a
Enable us to use logstash.openstack.org to help debug multinode failures
that occur on the subnode.
Change-Id: I3629f4827abec68b731ce47048dab553908662d2
Depends-On: Icf842ec12e87ccd208c551870312c8e4b62613bd
We've been trying to get the cells job in the check queue passing
consistently and it's been a game of whack-a-mole with regressions, so
when we're trying to debug new failures it'd be super helpful to
actually have logstash for the cells logs.
This adds the cells logs and only indexes them for the cells job(s),
which is the only time they should appear.
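A sketch of how that per-job restriction might look in the source-files
list, reusing the job-filter convention assumed earlier; the log file
name and regex are hypothetical:

  source-files:
    # only the cells job(s) produce this log, so restrict indexing
    - name: logs/screen-n-cell.txt
      job-filter: '.*-cells.*'
      tags:
        - screen
        - oslofmt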
Change-Id: Id4450b7cb5d3303f9cb031c3e77fc17cfff97890
Apache logs aren't getting indexed after changes to where they exist
in devstack-gate. This means the horizon logs are lost, and that
keystone logs aren't getting indexed at all on the main jobs.
Change-Id: I1ef5084d6bf4dc9f74f4e4b51e00e97573074e38