When Zuul updates its copy of data about a change, it protects itself
from infinite loops by detecting dependency cycles. However, this
only happens when updating a change. If a change depends on another
change already in Zuul's cache, it will not necessarily update the
cached change, and the dependency cycle detection code will not run.
This can later cause problems when Zuul attempts to work with these
changes.
Correct this by always performing a dependency cycle check, even
on cached changes which are not updated.
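A minimal sketch of the idea (hypothetical names, not Zuul's actual code): run cycle detection over the dependency graph unconditionally, even when the change is already in the cache:

```python
# Hypothetical sketch: detect_cycle/get_change are illustrative names.
def detect_cycle(change, deps, seen=None):
    """Return True if following 'deps' from 'change' revisits a change."""
    if seen is None:
        seen = set()
    if change in seen:
        return True
    seen.add(change)
    return any(detect_cycle(dep, deps, set(seen))
               for dep in deps.get(change, ()))

def get_change(change, cache, deps):
    # Previously the cycle check effectively ran only on a cache miss;
    # here it runs unconditionally, so cached changes are also checked.
    cache.setdefault(change, {"id": change})
    if detect_cycle(change, deps):
        raise Exception("Dependency cycle detected at change %s" % change)
    return cache[change]
```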
A test is added for this, and it also ensures that the situation can
still be corrected by the user by removing the dependency cycle.
Many debug log lines in the Gerrit source driver are updated to make
it clearer which change is being updated in the updateChange method,
since this method is recursive and the logs can otherwise be
confusing.
Change-Id: I6ab570f734d3abed2f71d547f130d9c392b976d6
This allows us to add arbitrary string 'tags' to jobs. These may
then be inspected inside of parameter functions for any purpose.
In OpenStack, we will likely pass these through untouched to
the build so that they can be picked up by the logstash worker
and we can record extra metadata about jobs.
Change-Id: Ibc00c6d30cdfe4678864adb13421a4d9f71f5128
Currently zuul-cloner falls back to the fallback branch when fetching
of the zuul ref fails. This is intended when the zuul ref is not found
on the remote. But in case the fetch fails due to infrastructure
reasons (e.g. zuul-merger is not reachable or certificate verification
failed) it should bail out with an error. Otherwise an already merged
and tested patch will be verified, which could lead to broken patches
being merged.
Change-Id: Iefc82603de279e36ad5972ce341b102c8d38f69e
When preparing a reference, we set the merge state to PENDING before
submitting the merge:merge job. If any exception occurs while
submitting that job, the buildset is left PENDING and is never
retried, because prepareRef() exits early in that case.
Move the merge_state change to after the job has been submitted. An
exception then leaves the state as-is (i.e. NEW) and thus indicates
that it should be retried.
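The reordering can be sketched as follows (a simplified illustration with hypothetical names, not the actual prepareRef implementation):

```python
NEW, PENDING = "NEW", "PENDING"

class BuildSet:
    def __init__(self):
        self.merge_state = NEW

def prepare_ref(build_set, submit_merge_job):
    if build_set.merge_state == NEW:
        try:
            submit_merge_job()           # may raise (e.g. Gearman error)
        except Exception:
            return False                 # state stays NEW; retried later
        build_set.merge_state = PENDING  # moved to after submission
    return True
```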
Closes-Bug: #1358517
Change-Id: I4d91a15aaae878ed231d50ab5f4f7a65f0d0e830
The logic to decide whether or not to use the cache attempted to
detect whether the repo had previously been cloned. It only did
that by checking whether the destination directory exists. However,
it's perfectly valid for the dest dir to exist while being empty.
Adjust the check to look for a .git dir within the dest dir to decide
if the repo has already been cloned.
Change-Id: I17926efcf0f38d6229f0e666e53e6730f455d8ef
Add a reconfigure test case. This test currently fails due to a
regression introduced with the connections changes.
Because multiple sources share a connection, a pipeline that does not
hold (and therefore does not require) any changes in the cache may
clear a connection's cache before a pipeline that does need a given
change has an opportunity to add it to the relevant list.
Allow connections to manage their cache directly rather than having
the source do it vicariously, ignorant of other pipelines/sources.
Collect the relevant changes from all pipelines and ask any connection
holding a cache for those items to keep them on reconfiguration.
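A rough sketch of the scheme (hypothetical names and structures, not the actual Zuul classes):

```python
# On reconfiguration, gather the changes every pipeline still holds,
# then let each connection prune its own cache down to that set.
class Connection:
    def __init__(self):
        self.cache = {}

    def maintain_cache(self, relevant):
        self.cache = {k: v for k, v in self.cache.items() if k in relevant}

def maintain_connection_caches(pipelines, connections):
    relevant = set()
    for pipeline in pipelines:
        relevant.update(pipeline["items"])  # changes the pipeline needs
    for conn in connections:
        conn.maintain_cache(relevant)
```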
Co-Authored-By: James E. Blair <jeblair@linux.vnet.ibm.com>
Change-Id: I2bf8ba6b9deda58114db9e9b96985a2a0e2a69cb
Commit 385d11e2ed moved the logic
that performs report formatting into the reporters themselves. But
it also contained a logic change to the formatting.
Previously, if an item was not mergeable, it was reported without
a job list, regardless of whether the merge-failure or standard
failure reporter was used. With that change, if a pipeline specified a
merge-failure message reporter, it would not format the job list,
but if no merge-failure reporter was supplied, and the standard
failure reporter was used, the standard failure reporter would not
check whether a merge-failure happened and instead always try to
format the job list.
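The restored behaviour can be sketched like this (illustrative function and field names, not Zuul's actual reporter API): the standard failure path checks for a merge failure first and omits the job list in that case.

```python
def format_report(item):
    # Merge failures are reported without a job list, whether or not a
    # dedicated merge-failure reporter is configured.
    if not item.get("mergeable", True):
        return "Merge failed for change %s." % item["id"]
    lines = ["Build failed for change %s." % item["id"]]
    lines += ["- %s: %s" % (job, result) for job, result in item["jobs"]]
    return "\n".join(lines)
```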
Change-Id: If65d4f64d6558a544d3d0c2cc0b32ad7786a6bcd
Reconfiguration should be synchronous, but just in case, wait
afterwards. The second wait may be more important -- we might
still have some outstanding timer events from the previous
configuration so we should make sure that everything really has
stopped before we release builds.
Change-Id: I52a3b0c309dc87fc6a553e690286d5dae093e085
Because the triggers are loaded to the scheduler and not an object
of a pipeline they aren't reset when reloading. They are reused
by the new pipeline, however any previously loaded triggers will
still have an old connection object that should no longer be used.
Instead, reset the triggers when reloading, causing new triggers to be
created by the pipeline configuration against the new connection
objects.
The connection objects that were hanging around for old triggers were
keeping their change cache and hence using up a lot of memory.
It appears that maintainTriggerCache is only called when reloading,
so the cache tends to grow out of hand; in particular, if you never
reload, it is never maintained. A follow-up to run the cache
maintenance at sensible times will come.
Change-Id: I81ee47524cda71a500c55a95a2280f491b1b63d9
The problem occurs when merge events for the periodic pipeline are
processed: first, a trivial mistake in type checking was introduced
in review 243493; second, merge events generated for NullChanges are
repo updates and don't provide a 'commit' attribute.
I don't see the necessity of enforcing that 'build_set.commit' be set
(it is passed to the job env as ZUUL_COMMIT if 'ref' or 'refspec' are
set, but that's not the case for NullChange).
Change-Id: Ib5da4ba987898d37d8e5082f4b8e2a5c31910323
argparse was an external package for Python 2.6 but is now part of
the standard library, so remove it from requirements.
This should help with pip 8.0, which gets confused in this situation.
Installation of the external argparse is not needed.
Change-Id: Ib7e74912b36c1b5ccb514e31fac35efeff57378d
Since zuul continues to function when it receives an unknown event,
let's reduce the log level to a warning.
Change-Id: If0f763f47b3d775410f608876babb5fa8f69ae96
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Looking at the library, and openstack requirements, there doesn't
appear to be a reason to cap webob. So, remove the cap so we can use
newer versions.
Change-Id: Id19c297b540e9081bcf733dff3a334a4b0f477d8
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
This patch upgrades zuul to support APScheduler 3.0. For the most
part, 3.0 was a rewrite but our changes seem to be limited.
Change-Id: I0c66b5998122c3f59ed06e3e7b3ab3199f94f478
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Make sure we update the referenced change object on a new gerrit
event rather than waiting to remake the queue item.
This was a performance regression in the connection changes.
Change-Id: I2a967f0347352a7674deb550e34fb94d1d903e89
Add an exception handler to updateBuildDescriptions to
handle exceptions from the launcher (Gearman).
This addresses the corner case seen in:
https://storyboard.openstack.org/#!/story/2000445
In the above mentioned corner case, Gearman throws a communication
error exception. Since updateBuildDescriptions is called after the
build status has been reported to Gerrit, we want to make sure that
we return back to caller so that the build is properly marked as
reported in reportItem to avoid an infinite loop.
Change-Id: I166d4a2e1d45e97a9ed0b6addb7270e1bf92d6f7
It has been set like this:
connections = {'_legacy_gerrit': Connection(name='gerrit')}
As a result, merger.py creates a ".ssh_wrapper_gerrit" file, basing
the name on the key from the connections dict, but tries to use
".ssh_wrapper__legacy_gerrit", basing the name on the connection name.
I propose to set it to "gerrit" everywhere (and the same for smtp).
Change-Id: I2c2d8bee7bcf98c5f12809f77b34e206d07b8438
This will allow a reporter to decide how to handle the results of
each item. It can use the common plain text formatter (as has been
the case, '_formatItemReport') or it may generate a report itself.
This will be useful for the MySQL reporter where it will want to
create an entry in a table for each build.
Action reporters are now configured with the action type, so they can
react differently to success, failure, etc.
Co-Authored-By: Jan Hruban <jan.hruban@gooddata.com>
Change-Id: Ib270334ff694fdff69a3028db8d6d7fed1f05176
We no longer need action reporters now that we load a new instance
of each reporter for each pipeline/action pair.
Change-Id: I3b6c6f9fd5402786dbc9916e1d18df34e348a7bd
Test reporting back to gerrit as a different user.
Test multiple gerrit instances.
Squashed with: Assign fake connections to test class dynamically
As multiple connections can be configured for testing purposes, it makes
more sense to be able to access them all comfortably.
A drawback of this change is that the dynamism makes the code less
readable and less obvious.
Previously: If50567dd5a087e9fe46cd4e30f9e0562cda794d7
Co-Authored-By: Jan Hruban <jan.hruban@gooddata.com>
Change-Id: I4652799c8b9a626bd2f7f6996d09c95da2953936
This is a general tidy up and squashed commit of 3 previous changes
to ease reviewing.
- Generalize event queue handling in tests
Store the event queue list in the base test class, so operation on the
queues can be delegated to separate method instead of operating on each
queue explicitly. Previously: I4c23143b4d49fa409fe378d1348bb6397b748cb9
- Don't pass fake_gerrit to FakeURLOpener
It's not used there whatsoever.
Previously: Ieefb09aaf60d4044c567b4d51fedf2068c3de1a6
- Pass upstream_root to fake gerrit explicitly
Previously: Ifb7d47caec430d0a209140e87ada98b8531c5d04
Co-Authored-By: Joshua Hesketh <josh@nitrotech.org>
Change-Id: I4c23143b4d49fa409fe378d1348bb6397b748cb9
This is a large refactor and as small as I could feasibly make it
while keeping the tests working. I'll do the documentation and
touch ups in the next commit to make digesting easier.
Change-Id: Iac5083996a183d1d8a9b6cb8f70836f7c39ee910
Move EventFilter configuration into the triggers to allow dynamic
inclusion of triggers rather than specifying each one in the scheduler.
Change-Id: I4ed345058ff1ffdd662fafa854e36782cc7f047b