ASCII art is fun but not very approachable for project managers and
directors. This patch slightly enhances the 'gating' documentation
with colored diagrams.
This is made possible via http://blockdiag.com/ by Takeshi KOMIYA,
who even took the time to write a Sphinx extension. The version
dependency is at least 0.5.5, but might need to be higher :/
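For reference, enabling the extension is a small conf.py change (a
sketch; module and option names per the sphinxcontrib-blockdiag docs,
defaults assumed):

  # doc/source/conf.py
  extensions = ['sphinxcontrib.blockdiag']
  blockdiag_html_image_format = 'SVG'  # assumption: PNG is the default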
Change-Id: Ibe3c2674a5dff2114c40a84ffdec8a8886b1b21b
When running the layout validation, I ended up spammed with a few
errors:
ERROR:zuul.Scheduler:Invalid reporter name gerrit
The issue is that my pipelines use 'gerrit' as a reporter, but it is
not registered when testing the config. I registered the 'smtp'
reporter as well, and the error is gone.
Side effect: the layout validation output now dumps actions for
start/success/failure.
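A rough sketch of the fix in the test-config path (names illustrative,
not the exact Zuul code):

  # register every reporter the layout may reference before validating
  sched.registerReporter(gerrit_reporter)
  sched.registerReporter(smtp_reporter)
  sched.testConfig(config_path)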
Change-Id: I271a2943fa3e846ae60d9b615cd3a1ac3815bb1b
Voluptuous 0.7.0 introduced a backward-incompatible API change in
upstream commit 475adebc:
https://github.com/alecthomas/voluptuous/commit/475adebc
Schemas are now precompiled and the validate_X methods have been
removed; voluptuous is now smart enough to detect the type of the
value being validated and calls an internal validation method
matching that type.
The commit is contained in every tag since 0.7.0:
$ git tag --contains 475adebc
0.7.0
0.7.1
0.7.2
0.8.1
0.8.2
$
I tested it using etc/layout.yaml-sample and added a nonexistent
pipeline to the project. The test yields:
voluptuous.MultipleInvalid: extra keys not allowed @
data['projects'][0]['nonpipe']
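For illustration, the new-style API looks roughly like this
(simplified schema, not Zuul's actual one):

  import voluptuous as v

  schema = v.Schema({'projects': [{'name': str}]})
  schema({'projects': [{'name': 'zuul'}]})      # passes
  schema({'projects': [{'nonpipe': 'check'}]})  # raises MultipleInvalid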
Debian has recently accepted voluptuous 0.8.2 into testing, so if we
want to package Zuul, we had better upgrade our voluptuous
requirement as well. Ref: http://packages.qa.debian.org/v/voluptuous.html
Change-Id: I117ea644863b2e4a4dc3429aa81e868573382877
The intersphinx extension lets one reserve a link namespace and map
it to a remote URL, for example to refer to the Python documentation.
A drawback is that, from time to time, Sphinx will attempt to
download an object inventory from the remote, which is not convenient
when running Sphinx offline, such as on a plane.
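For context, a typical mapping in conf.py looks like this (example
target):

  extensions = ['sphinx.ext.intersphinx']
  intersphinx_mapping = {'python': ('http://docs.python.org/', None)}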
Change-Id: Idef8e88545715a3f1cb9ace002960a7ad3e37b69
* tools/trigger-job.py: Add a logpath arg and make it required. This
allows us to upload logs to appropriate dirs even when manually
triggering jobs.
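Roughly (a sketch; the actual wiring in the script may differ):

  parser.add_argument('--logpath', required=True,
                      help='path for the uploaded logs, relative to '
                           'the log server root')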
Change-Id: I59144a2d5443fb6396af45f302f67bc8eec70780
Previously, if a job was listed more than once for a project, the
duplicate was ignored. That's pretty arbitrary (it silently dropped
the second without an error; and if there are two, which is the right
one anyway?). OTOH, it's potentially useful to run a job more than
once in order to increase the chance of encountering nondeterministic
behavior. And if listing a job twice really is an error, it is now
more likely to be noticed by the operator.
This removes the check for duplicate invocations of a job.
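E.g. the following now runs the job twice instead of silently
deduplicating it (illustrative layout snippet):

  projects:
    - name: example/project
      check:
        - example-job
        - example-job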
Change-Id: If8e2e8cc3fca855bd6b14eb3a957dadddfe143ed
So that the status display and logs read better.
Also, include the Zuul ref in the JSON output so that the status
screen can do something intelligent with unprocessed items (it's also
an important bit of info).
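Roughly, each queue item in the JSON now carries something like
(field name and values illustrative):

  {"id": "12345,2", "zuul_ref": "Z1234abcd..."}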
Change-Id: I1429344917856edfaeefa4e9655c2dad8081cae5
The launcher documentation about 'Starting Builds' mentions the
parameter-function from the zuul.conf documentation. One had to
scroll down a few pages to the 'Includes' subsection. I have updated
the references to point directly to Includes, saving the reader some
time.
Also fixed a typo: paremeter -> parameter
Change-Id: I560f857b40a7cb989a71161f94e1fb0c8fd69293
Where we're using the same libraries as OpenStack, sync with the
OpenStack versions. Just to be nice.
Change-Id: I8e90d2a8945d62e962b813c6396f0e7db4e14222
d2to1 pulls in setuptools, which trips the unhappy bugs with
setuptools self-updating. Move past that and just use the new pbr.
Change-Id: I2609eda10ed781116940c3607ff85a14fc4b7f58
If A is the head in A <- B <- C, and B failed, then C would be
correctly reparented to A. Then if A failed, B and C would be
restarted, but C would not be reparented back to B. This is
because the check around moving a change short-circuited if
there was no change ahead (which is the case if C is behind A
and A reports).
The solution to this is to still perform the move check even if
there is no change currently ahead (so that if there is a NNFI
change ahead, the current change will be moved behind it). This
effectively means we should remove the "not item ahead" part of
the conditional around the move.
This part of the conditional serves two additional purposes --
to make sure that we don't dereference an attribute on item_ahead
if it is None, and also to ensure that the NNFI algorithm is not
applied to independent queues.
So the fix moves that part of the conditional out so that we can
safely reference the needed attributes if there is a change ahead,
and also makes explicit that we ignore the situation if we are
working on an independent change queue.
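In pseudocode, the reworked check looks something like this
(illustrative, not the exact Zuul code):

  if change_queue.dependent and item.item_ahead is not nnfi:
      # always consider the move; nnfi (and item_ahead) may be None
      self.cancelJobs(item)
      self.moveItem(item, nnfi)
  if item.item_ahead:
      # only now is it safe to dereference item_ahead attributes
      ...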
This also adds a test that failed (at the indicated position) with
the previous code.
Change-Id: I4cf5e868af7cddb7e95ef378abb966613ac9701c
When moving an item, we correctly reparented the items behind
the item to the item that was previously ahead. But we did
not remove the references to the items behind from the item
that was being moved. This could result in that item
maintaining references to items that were previously behind it.
Generally, these would be the same and it would manifest as
double entries in items_behind.
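Sketch of the corrected bookkeeping when moving an item
(illustrative):

  for follower in item.items_behind:
      follower.item_ahead = item.item_ahead
  item.items_behind = []  # the fix: drop references to former followers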
Change-Id: Ibc1447867df4c6fc7b4fe954770a06c7c24fadc4
Update the scheduler algorithm to NNFI -- Nearest Non-Failing Item.
A stateless description of the algorithm is that jobs for every
item should always be run based on the repository state(s) set by
the nearest non-failing item ahead of it in its change queue.
This means that should an item fail (for any reason -- failure to
merge, a merge conflict, or job failure) changes after it will
have their builds canceled and restarted with the assumption that
the failed change will not merge but the nearest non-failing
change ahead will merge.
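In pseudocode (illustrative):

  nnfi = None
  for item in change_queue:  # walk from head to tail
      run_jobs_against(item, repo_state_of=nnfi)
      if not item.is_failing():
          nnfi = item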
This should mean that dependent queues will always be running
jobs and no longer need to wait for a failing change to merge or
not merge before restarting jobs.
This removes the dequeue-on-conflict behavior because there is
now no cost to keeping an item that cannot merge in the queue.
The documentation and associated test for this are removed.
This also removes the concept of severed heads because a failing
change at the head will not prevent other changes from proceeding
with their tests. If the jobs for the change at the head run
longer than following changes, it could still impact them while
it completes, but the reduction in code complexity is worth this
minor de-optimization.
The debugging representation of QueueItem is changed to make it
more useful.
Change-Id: I0d2d416fb0dd88647490ec06ed69deae71d39374
By default, tox passes the --pre option to pip install commands, so
prerelease packages, which may not be suitable for testing, get
installed, causing unforeseeable errors. For example:
http://logs.openstack.org/33/47233/1/check/gate-zuul-docs/7b794af/console.html
where the installed Sphinx package is a beta version that causes the
error in the log.
fungi's proposal https://review.openstack.org/#/c/47239/1 also applies here.
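I.e., override install_command so --pre is no longer passed, along
the lines of:

  [testenv]
  install_command = pip install -U {opts} {packages}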
Change-Id: I9108b534bb469211434a4abf22b25c983aa444ba
Without the Forge Author Identity privilege, Zuul will fail to push
refs/zuul/master/**** to Gerrit, because the Zuul git user is
different from the commit's author. Therefore, update the
documentation.
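For reference, the privilege is granted in Gerrit's project.config
along these lines (group name illustrative):

  [access "refs/zuul/*"]
    forgeAuthor = group CI Tools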
Fixes bug: 1226877
Change-Id: I3f59f03e28578e61a748e6f33a25a5707f433a1b
Utilises the new reporter plugin architecture to add support for
emailing success/failure messages based on layout.yaml.
This will assist in testing new gates: currently, after a job has
finished, if no report is sent back to Gerrit, only the workers' logs
can be consulted to see whether it was successful. This will allow
developers to see exactly what Zuul would return if they turned on
Gerrit reporting.
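A configuration sketch in layout.yaml (structure per this change;
addresses illustrative):

  pipelines:
    - name: check
      success:
        smtp:
          to: dev-list@example.org
          subject: 'Zuul would have reported: success'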
Change-Id: I47ac038bbdffb0a0c75f8e63ff6978fd4b4d0a52
Allows multiple reports per patchset to be sent to pluggable
destinations. These are configurable per pipeline and, if not
specified, default to the legacy behaviour of reporting back only to
Gerrit.
Having multiple reporting methods means only certain
success/failure/start parameters will apply to certain reporters.
Reporters are listed as keys under each of those actions.
This means that each key under success/failure/start is a reporter,
and the dictionaries under those are sent to the reporter to deal
with.
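For example, a pipeline could report to both Gerrit and email
(sketch; parameter names illustrative):

  success:
    gerrit:
      verified: 2
    smtp:
      to: qa-list@example.org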
Change-Id: I80d7539772e1485d5880132f22e55751b25ec198
The conditional that did a 'git remote update' for ref-updated
events (which otherwise don't affect the Merger) was wrong.
Change-Id: Icb2596df023279442613e10e13104a3621d867d9
Revert "Fix checkout when preparing a ref"
This reverts commit 6eeb24743a.
Revert "Don't reset the local repo"
This reverts commit 96ee718c4b.
Revert "Fetch specific refs on ref-updated events"
This reverts commit bfd5853957.
Change-Id: I50ae4535e3189350d3cc3a7527f89d5cb8eec01d
The new checkout method was relying on out of date information
stored in the remote which was not being updated by the fetch
command. Instead, just checkout FETCH_HEAD using git directly
so that the remote does not need to be kept up to date.
Also, reset and clean _before_ checking out, since that's supposed
to clean up from messy merges, etc.
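The resulting sequence is roughly (ref name illustrative):

  git fetch origin refs/zuul/master/Zxxxx
  git reset --hard
  git clean -x -f -d -q
  git checkout FETCH_HEAD  # detached; remote-tracking state not consulted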
Change-Id: Ie47b675512edc36e8aeb9b537ca945ad8d07b780
If an exception was raised during a report, _reportItem would
erroneously indicate that the item had been reported without error.
If a merge was expected, isMerged would be called which may then
raise a further exception which would stop queue processing.
Instead, set the default return value for _reportItem to True
because trigger.report returns a true value on error. This will
cause the change to be marked as reported (with a value of ERROR),
the merge check skipped, and the change will be quickly removed
from the pipeline.
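A sketch of the corrected flow (illustrative, not the exact Zuul
code):

  def _reportItem(self, item):
      ret = True  # default to error; trigger.report returns true on error
      try:
          ret = self.reportItem(item)
      except Exception:
          self.log.exception("Exception while reporting")
      return ret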
Change-Id: I08b7cee486111200ac9857644d478727c635908d
If a change is removed outside of the main process method (e.g. it
is superseded), stats were not reported. Report them in that case.
Change-Id: I9e753599dc3ecdf0d4bffc04f4515c04d882c8be
When the gearman server was added, the exit handler was not updated
correctly. It should tell the scheduler to exit, wait for the
scheduler to empty its pipelines, and then kill the gearman server
and exit.
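I.e., the handler should now do roughly (names illustrative):

  sched.exit()            # stop accepting new events and begin draining
  sched.join()            # wait for the pipelines to empty
  gear_server.shutdown()  # only now kill the gearman server
  sys.exit(0)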
Change-Id: Ie0532c2ea058ed56217e41641f8eec45080a9470
Instead of "resetting" the local repo (git remote update,
git checkout master, git reset --hard, git clean -xfdq) before
merging each change, just fetch the remote ref for the branch
and check that out (as a detached head). Or, if we are merging
a change that depends on another change in the queue, just check
that change out.
Change-Id: I0a9b839a0c75c04eca7393d7bb58cf89448b6494
The current behavior is to run 'git remote origin update' for every
event, which is quite a bit of overhead and doesn't match what the
comments say should be happening. The goal
is to ensure that when new tags arrive, we have them locally in
our repo. It's also not a bad idea for us to keep up with remote
branch movements as well.
This updates the event pre-processor to fetch the ref for each
ref-updated event as they are processed. This is much faster than
the git remote update that was happening before. It also adds
a git remote update to the repo initialization step so that when
Zuul starts, it will pick up any remote changes since it last ran.
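Sketch of the two pieces (method names illustrative):

  # event pre-processing: fetch only the ref that changed
  if event.type == 'ref-updated':
      merger.fetch(event.ref)

  # repo initialization at startup: catch up on anything missed
  repo.git.remote('update')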
Change-Id: I671bb43eddf41c7403de53bb4a223762101adc3c
We can more closely approximate Gerrit's behavior by using the
'resolve' git merge strategy. Make that the default, and leave
the previous behavior ('git merge') as an option. Also, finish
and correct the partially implemented plumbing for other merge
strategies (including cherry-pick).
(Note the previous unfinished implementation attempted to mimic
Gerrit's option names; the new implementation does not, but rather
documents the alignment. It's not a perfect translation anyway,
and this gives us more room to support other strategies not
currently supported by Gerrit).
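Per-project selection then looks something like this in layout.yaml
(option and value names as documented by this change; sketch):

  projects:
    - name: example/project
      merge-mode: cherry-pick  # the default uses git's 'resolve' strategy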
Change-Id: Ie1ce4fde5980adf99bba69a5aa1d4e81026db676
We have a graph on our status page showing all of the jobs Zuul
launched, but it's built from more than 1000 graphite keys which
is a little inefficient. Add a key for convenience that rolls
up all of the job completions in a pipeline, so that such a graph
can be built with only about 10 keys.
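E.g. a single counter per pipeline (key name illustrative):

  zuul.pipeline.gate.all_jobs  # incremented once per completed job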
Change-Id: Ie6dbcca68c8a118653effe90952c7921a9de9ad1