Finish circular dependency refactor

This change completes the circular dependency refactor.

The principal change is that queue items may now include
more than one change simultaneously in the case of circular
dependencies.
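
As a rough sketch of the data model change (a simplified,
hypothetical class; the real QueueItem carries much more state):

    class QueueItem:
        def __init__(self, changes):
            # Formerly a single 'change' attribute; now a list,
            # with more than one entry when the changes form a
            # dependency cycle.
            self.changes = list(changes)

    # A two-change cycle now occupies a single queue item:
    item = QueueItem(["1,1", "2,1"])
    assert item.changes[0] == "1,1"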

In dependent pipelines, the two-phase reporting process is
simplified because it happens during processing of a single
item.
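
A minimal sketch of that simplification (hypothetical helpers;
the real reporters receive phase1/phase2 flags, as the driver
diffs below show):

    def post_review(change):    # hypothetical phase-1 action
        print("phase1: review", change)

    def submit(change):         # hypothetical phase-2 action
        print("phase2: submit", change)

    def report(item_changes, phase1=True, phase2=True):
        # Both phases now run while processing one multi-change
        # item instead of being coordinated across bundle items.
        for change in item_changes:
            if phase1:
                post_review(change)
        for change in item_changes:
            if phase2:
                submit(change)

    report(["1,1", "2,1"])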

In independent pipelines, non-live items are still used for
linear dependencies, but multi-change items are used for
circular dependencies.

Previously, changes were enqueued recursively and then
bundles were made out of the resulting items.  Since we now
need to enqueue an entire cycle in a single queue item, the
dependency graph generation is performed at the start of
enqueuing the first change in a cycle.
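
A sketch of the up-front cycle computation (a hypothetical
helper; the real implementation builds a full dependency graph
in the pipeline manager):

    def find_cycle(change, get_dependencies):
        # Depth-first walk that returns the changes in the
        # dependency cycle containing 'change', or just
        # [change] if it is not part of a cycle.
        seen = set()
        stack = [(change, [change])]
        while stack:
            node, path = stack.pop()
            for dep in get_dependencies(node):
                if dep == change:
                    return path   # walked back to the start
                if dep not in seen:
                    seen.add(dep)
                    stack.append((dep, path + [dep]))
        return [change]

    deps = {"A": ["B"], "B": ["A"]}   # A <-> B
    cycle = find_cycle("A", lambda c: deps.get(c, []))
    assert sorted(cycle) == ["A", "B"]

All of the changes returned for a cycle can then be enqueued
together in a single queue item.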

Some tests exercise situations where Zuul is processing
events for old patchsets of changes.  The new change query
sequence mentioned in the previous paragraph requires more
accurate information about out-of-date patchsets than the
previous sequence did, so the Gerrit driver has been updated
to query and return more data about non-current patchsets.
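
For illustration, a sketch of what the richer query result
enables; the field names ('revisions', 'current_revision',
'_number') are Gerrit REST fields that also appear in the
driver diff below, while the function itself is hypothetical:

    def is_current_patchset(data, patchset):
        # 'data' is assumed to be a change query result that
        # includes all revisions (o=ALL_REVISIONS), not just
        # the current one.
        current = data['revisions'][data['current_revision']]
        return str(current['_number']) == str(patchset)

    data = {
        'current_revision': 'sha2',
        'revisions': {'sha1': {'_number': 1},
                      'sha2': {'_number': 2}},
    }
    assert not is_current_patchset(data, 1)  # patchset 1 is stale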

This change is not backwards compatible with the existing
ZK schema, and will require that Zuul systems delete all
pipeline state during the upgrade.  A later change will
implement a helper command for this.

All backwards compatibility handling for the last several
model_api versions, which was added to prepare for this
upgrade, has been removed.  In general, all model data
structures involving frozen jobs are now indexed by the
frozen job's uuid and no longer include the job name, since
a job name no longer uniquely identifies a job in a buildset
(either the uuid or the (job name, change) tuple must be
used to identify it).
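
Sketched with toy values (the real structures are Zuul model
classes, not plain dicts):

    import uuid

    # Before: keyed by job name -- ambiguous once a cycle puts
    # the same job name in a buildset once per change.
    jobs_by_name = {'project-merge': '<frozen job for which change?>'}

    # After: keyed by the frozen job's uuid; (job name, change)
    # is the other unique key.
    job_uuid = uuid.uuid4().hex
    jobs_by_uuid = {job_uuid: '<frozen project-merge for 1,1>'}
    jobs_by_name_and_change = {('project-merge', '1,1'): job_uuid}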

Job deduplication is simplified and now only needs to
consider jobs within the same buildset.
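
A toy sketch of the narrowed scope (the real comparison
considers the full frozen job, not just a name):

    def deduplicate(buildset_jobs):
        # Only jobs within a single buildset need comparing now,
        # since an entire cycle shares one buildset.
        unique = {}
        for job in buildset_jobs:
            unique.setdefault(job, job)
        return list(unique)

    assert deduplicate(['common-job', 'common-job', 'p1-job']) == \
        ['common-job', 'p1-job']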

The fake GitHub driver had a bug (fakegithub.py line 694) where
it did not correctly increment the check run counter, so our
tests that verified we closed out obsolete check runs when
re-enqueuing were not valid.  This has been corrected, and in
doing so has necessitated some changes around quiet dequeuing
when we re-enqueue a change.
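
The shape of the bug, as a runnable toy (the real code is in
the FakeGithubSession hunk below):

    commits = {}
    check_run_counter = 0

    def create_check_run(head_sha):
        global check_run_counter
        commit = commits.get(head_sha)
        if commit is None:
            commit = commits[head_sha] = object()
            # Bug: the increment used to sit here, so check runs
            # against an existing commit never bumped the counter.
        check_run_counter += 1   # fix: increment for every run
        return check_run_counter

    assert create_check_run('abc123') == 1
    assert create_check_run('abc123') == 2   # same commit, new run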

The reporting in several drivers has been updated to support
reporting information about multiple changes in a queue item.

Change-Id: I0b9e4d3f9936b1e66a08142fc36866269dc287f1
Depends-On: https://review.opendev.org/907627
James E. Blair 2024-01-08 15:28:03 -08:00
parent 4a7e86f7f6
commit 1f026bd49c
42 changed files with 2187 additions and 2650 deletions

View File

@ -192,3 +192,9 @@ Version 25
:Prior Zuul version: 9.3.0
:Description: Add job_uuid to BuildRequests and BuildResultEvents.
Affects schedulers and executors.
Version 26
----------
:Prior Zuul version: 9.5.0
:Description: Refactor circular dependencies.
Affects schedulers and executors.

View File

@ -0,0 +1,60 @@
---
prelude: >
This release includes a significant refactoring of the internal
handling of circular dependencies. This requires some changes for
consumers of Zuul output (via some reporters or the REST API) and
requires special care during upgrades. In the case of a
dependency cycle between changes, Zuul pipeline queue items will
now represent multiple changes rather than a single change. This
allows for more intuitive behavior and information display as well
as better handling of job deduplication.
upgrade:
- |
Zuul cannot be upgraded to this version while running. To upgrade:
* Stop all Zuul components running the previous version
(stopping Nodepool is optional).
* On a scheduler machine or image, with the scheduler stopped
and the new version of Zuul installed, run the command:
zuul-admin delete-state --keep-config-cache
This will delete all of the pipeline state from ZooKeeper, but
it will retain the configuration cache (which contains all of
the project configuration from zuul.yaml files). This will
speed up the startup process.
* Start all Zuul components on the new version.
- The MQTT reporter now includes a job_uuid field to correlate retry
builds with final builds.
deprecations:
- |
The syntax of string substitution in pipeline reporter messages
has changed. Since queue items may now represent more than one
change, the `{change}` substitution in messages is deprecated and
will be removed in a future version. To maintain backwards
compatibility, it currently refers to the arbitrary first change
in the list of changes for a queue item. Please upgrade your
usage to use the new `{changes}` substitution, which is a list
(see the sketch after these notes).
- |
The syntax of string substitution in SMTP reporter messages
has changed. Since queue items may now represent more than one
change, the `{change}` substitution in messages is deprecated and
will be removed in a future version. To maintain backwards
compatibility, it currently refers to the arbitrary first change
in the list of changes for a queue item. Please upgrade your
usage to use the new `{changes}` substitution, which is a list.
- |
The MQTT and Elasticsearch reporters now include a `changes` field
which is a list of dictionaries representing the changes included
in an item. The corresponding scalar fields describing what was
previously the only change associated with an item remain for
backwards compatibility and refer to the arbitrary first change in
the list of changes for a queue item. These scalar values will be
removed in a future version of Zuul. Please upgrade your usage to
use the new `changes` entries.
- |
The `zuul.bundle_id` variable is deprecated and will be removed in
a future version. For backwards compatibility, it currently
duplicates the item uuid.
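
To illustrate the deprecated and preferred message
substitutions above (a toy sketch; real messages are rendered
by Zuul's reporters with real change objects):

    # Deprecated: refers to the arbitrary first change of the item.
    old_template = 'Build succeeded ({change}).'
    # Preferred: substitutes the whole list of changes.
    new_template = 'Build succeeded ({changes}).'

    changes = ['1,1', '2,1']
    print(old_template.format(change=changes[0]))
    print(new_template.format(changes=changes))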

View File

@ -881,32 +881,32 @@ class FakeGerritChange(object):
if 'approved' not in label:
label['approved'] = app['by']
revisions = {}
rev = self.patchsets[-1]
num = len(self.patchsets)
files = {}
for f in rev['files']:
if f['file'] == '/COMMIT_MSG':
continue
files[f['file']] = {"status": f['type'][0]} # ADDED -> A
parent = '0000000000000000000000000000000000000000'
if self.depends_on_change:
parent = self.depends_on_change.patchsets[
self.depends_on_patchset - 1]['revision']
revisions[rev['revision']] = {
"kind": "REWORK",
"_number": num,
"created": rev['createdOn'],
"uploader": rev['uploader'],
"ref": rev['ref'],
"commit": {
"subject": self.subject,
"message": self.data['commitMessage'],
"parents": [{
"commit": parent,
}]
},
"files": files
}
for i, rev in enumerate(self.patchsets):
num = i + 1
files = {}
for f in rev['files']:
if f['file'] == '/COMMIT_MSG':
continue
files[f['file']] = {"status": f['type'][0]} # ADDED -> A
parent = '0000000000000000000000000000000000000000'
if self.depends_on_change:
parent = self.depends_on_change.patchsets[
self.depends_on_patchset - 1]['revision']
revisions[rev['revision']] = {
"kind": "REWORK",
"_number": num,
"created": rev['createdOn'],
"uploader": rev['uploader'],
"ref": rev['ref'],
"commit": {
"subject": self.subject,
"message": self.data['commitMessage'],
"parents": [{
"commit": parent,
}]
},
"files": files
}
data = {
"id": self.project + '~' + self.branch + '~' + self.data['id'],
"project": self.project,
@ -1462,13 +1462,14 @@ class FakeGerritConnection(gerritconnection.GerritConnection):
}
return event
def review(self, item, message, submit, labels, checks_api, file_comments,
phase1, phase2, zuul_event_id=None):
def review(self, item, change, message, submit, labels,
checks_api, file_comments, phase1, phase2,
zuul_event_id=None):
if self.web_server:
return super(FakeGerritConnection, self).review(
item, message, submit, labels, checks_api, file_comments,
phase1, phase2, zuul_event_id)
self._test_handle_review(int(item.change.number), message, submit,
item, change, message, submit, labels, checks_api,
file_comments, phase1, phase2, zuul_event_id)
self._test_handle_review(int(change.number), message, submit,
labels, phase1, phase2)
def _test_get_submitted_together(self, change):
@ -3577,9 +3578,11 @@ class TestingExecutorApi(HoldableExecutorApi):
self._test_build_request_job_map = {}
if build_request.uuid in self._test_build_request_job_map:
return self._test_build_request_job_map[build_request.uuid]
job_name = build_request.job_name
params = self.getParams(build_request)
job_name = params['zuul']['job']
self._test_build_request_job_map[build_request.uuid] = job_name
return build_request.job_name
return job_name
def release(self, what=None):
"""

View File

@ -691,7 +691,7 @@ class FakeGithubSession(object):
if commit is None:
commit = FakeCommit(head_sha)
repo._commits[head_sha] = commit
repo.check_run_counter += 1
repo.check_run_counter += 1
check_run = commit.set_check_run(
str(repo.check_run_counter),
json['name'],

File diff suppressed because it is too large

View File

@ -1165,7 +1165,7 @@ class TestExecutorFailure(ZuulTestCase):
self.waitUntilSettled()
job = items[0].current_build_set.job_graph.getJob(
'project-merge', items[0].change.cache_key)
'project-merge', items[0].changes[0].cache_key)
build_retries = items[0].current_build_set.getRetryBuildsForJob(job)
self.assertEqual(len(build_retries), 1)
self.assertIsNotNone(build_retries[0].error_detail)

View File

@ -232,7 +232,7 @@ class TestJob(BaseTestCase):
change = model.Change(self.project)
change.branch = 'master'
change.cache_stat = Dummy(key=Dummy(reference=uuid.uuid4().hex))
item = self.queue.enqueueChange(change, None)
item = self.queue.enqueueChanges([change], None)
self.assertTrue(base.changeMatchesBranch(change))
self.assertTrue(python27.changeMatchesBranch(change))
@ -249,7 +249,7 @@ class TestJob(BaseTestCase):
change.branch = 'stable/diablo'
change.cache_stat = Dummy(key=Dummy(reference=uuid.uuid4().hex))
item = self.queue.enqueueChange(change, None)
item = self.queue.enqueueChanges([change], None)
self.assertTrue(base.changeMatchesBranch(change))
self.assertTrue(python27.changeMatchesBranch(change))
@ -300,7 +300,7 @@ class TestJob(BaseTestCase):
change.branch = 'master'
change.cache_stat = Dummy(key=Dummy(reference=uuid.uuid4().hex))
change.files = ['/COMMIT_MSG', 'ignored-file']
item = self.queue.enqueueChange(change, None)
item = self.queue.enqueueChanges([change], None)
self.assertTrue(base.changeMatchesFiles(change))
self.assertFalse(python27.changeMatchesFiles(change))
@ -375,7 +375,7 @@ class TestJob(BaseTestCase):
# Test master
change.branch = 'master'
change.cache_stat = Dummy(key=Dummy(reference=uuid.uuid4().hex))
item = self.queue.enqueueChange(change, None)
item = self.queue.enqueueChanges([change], None)
with testtools.ExpectedException(
Exception,
"Pre-review pipeline gate does not allow post-review job"):
@ -453,7 +453,7 @@ class TestJob(BaseTestCase):
change = model.Change(self.project)
change.branch = 'master'
change.cache_stat = Dummy(key=Dummy(reference=uuid.uuid4().hex))
item = self.queue.enqueueChange(change, None)
item = self.queue.enqueueChanges([change], None)
self.assertTrue(base.changeMatchesBranch(change))
self.assertTrue(python27.changeMatchesBranch(change))
@ -488,6 +488,7 @@ class FakeFrozenJob(model.Job):
super().__init__(name)
self.uuid = uuid.uuid4().hex
self.ref = 'fake reference'
self.all_refs = [self.ref]
class TestGraph(BaseTestCase):

View File

@ -465,53 +465,6 @@ class TestGithubModelUpgrade(ZuulTestCase):
config_file = 'zuul-github-driver.conf'
scheduler_count = 1
@model_version(3)
@simple_layout('layouts/gate-github.yaml', driver='github')
def test_status_checks_removal(self):
# This tests the old behavior -- that changes are not dequeued
# once their required status checks are removed -- since the
# new behavior requires a flag in ZK.
# Contrast with test_status_checks_removal.
github = self.fake_github.getGithubClient()
repo = github.repo_from_project('org/project')
repo._set_branch_protection(
'master', contexts=['something/check', 'tenant-one/gate'])
A = self.fake_github.openFakePullRequest('org/project', 'master', 'A')
self.fake_github.emitEvent(A.getPullRequestOpenedEvent())
self.waitUntilSettled()
self.executor_server.hold_jobs_in_build = True
# Since the required status 'something/check' is not fulfilled,
# no job is expected
self.assertEqual(0, len(self.history))
# Set the required status 'something/check'
repo.create_status(A.head_sha, 'success', 'example.com', 'description',
'something/check')
self.fake_github.emitEvent(A.getPullRequestOpenedEvent())
self.waitUntilSettled()
# Remove it and verify the change is not dequeued (old behavior).
repo.create_status(A.head_sha, 'failed', 'example.com', 'description',
'something/check')
self.fake_github.emitEvent(A.getCommitStatusEvent('something/check',
state='failed',
user='foo'))
self.waitUntilSettled()
self.executor_server.hold_jobs_in_build = False
self.executor_server.release()
self.waitUntilSettled()
# the change should have entered the gate
self.assertHistory([
dict(name='project-test1', result='SUCCESS'),
dict(name='project-test2', result='SUCCESS'),
], ordered=False)
self.assertTrue(A.is_merged)
@model_version(10)
@simple_layout('layouts/github-merge-mode.yaml', driver='github')
def test_merge_method_syntax_check(self):
@ -703,48 +656,6 @@ class TestDefaultBranchUpgrade(ZuulTestCase):
self.assertEqual('foobar', md.default_branch)
class TestDeduplication(ZuulTestCase):
config_file = "zuul-gerrit-github.conf"
tenant_config_file = "config/circular-dependencies/main.yaml"
scheduler_count = 1
def _test_job_deduplication(self):
A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B')
# A <-> B
A.data["commitMessage"] = "{}\n\nDepends-On: {}\n".format(
A.subject, B.data["url"]
)
B.data["commitMessage"] = "{}\n\nDepends-On: {}\n".format(
B.subject, A.data["url"]
)
A.addApproval('Code-Review', 2)
B.addApproval('Code-Review', 2)
self.fake_gerrit.addEvent(A.addApproval('Approved', 1))
self.fake_gerrit.addEvent(B.addApproval('Approved', 1))
self.waitUntilSettled()
self.assertEqual(A.data['status'], 'MERGED')
self.assertEqual(B.data['status'], 'MERGED')
@simple_layout('layouts/job-dedup-auto-shared.yaml')
@model_version(7)
def test_job_deduplication_auto_shared(self):
self._test_job_deduplication()
self.assertHistory([
dict(name="project1-job", result="SUCCESS", changes="2,1 1,1"),
dict(name="common-job", result="SUCCESS", changes="2,1 1,1"),
dict(name="project2-job", result="SUCCESS", changes="2,1 1,1"),
# This would be deduplicated
dict(name="common-job", result="SUCCESS", changes="2,1 1,1"),
], ordered=False)
self.assertEqual(len(self.fake_nodepool.history), 4)
class TestDataReturn(AnsibleZuulTestCase):
tenant_config_file = 'config/data-return/main.yaml'

View File

@ -1107,7 +1107,7 @@ class TestScheduler(ZuulTestCase):
self.assertEqual(len(queue), 1)
self.assertEqual(queue[0].zone, None)
params = self.executor_server.executor_api.getParams(queue[0])
self.assertEqual(queue[0].job_name, 'project-merge')
self.assertEqual(params['zuul']['job'], 'project-merge')
self.assertEqual(params['items'][0]['number'], '%d' % A.number)
self.executor_api.release('.*-merge')
@ -1121,12 +1121,14 @@ class TestScheduler(ZuulTestCase):
self.assertEqual(len(self.builds), 0)
self.assertEqual(len(queue), 6)
self.assertEqual(queue[0].job_name, 'project-test1')
self.assertEqual(queue[1].job_name, 'project-test2')
self.assertEqual(queue[2].job_name, 'project-test1')
self.assertEqual(queue[3].job_name, 'project-test2')
self.assertEqual(queue[4].job_name, 'project-test1')
self.assertEqual(queue[5].job_name, 'project-test2')
params = [self.executor_server.executor_api.getParams(x)
for x in queue]
self.assertEqual(params[0]['zuul']['job'], 'project-test1')
self.assertEqual(params[1]['zuul']['job'], 'project-test2')
self.assertEqual(params[2]['zuul']['job'], 'project-test1')
self.assertEqual(params[3]['zuul']['job'], 'project-test2')
self.assertEqual(params[4]['zuul']['job'], 'project-test1')
self.assertEqual(params[5]['zuul']['job'], 'project-test2')
self.executor_api.release(queue[0])
self.waitUntilSettled()
@ -2935,16 +2937,16 @@ class TestScheduler(ZuulTestCase):
items = check_pipeline.getAllItems()
self.assertEqual(len(items), 3)
self.assertEqual(items[0].change.number, '1')
self.assertEqual(items[0].change.patchset, '1')
self.assertEqual(items[0].changes[0].number, '1')
self.assertEqual(items[0].changes[0].patchset, '1')
self.assertFalse(items[0].live)
self.assertEqual(items[1].change.number, '2')
self.assertEqual(items[1].change.patchset, '1')
self.assertEqual(items[1].changes[0].number, '2')
self.assertEqual(items[1].changes[0].patchset, '1')
self.assertTrue(items[1].live)
self.assertEqual(items[2].change.number, '1')
self.assertEqual(items[2].change.patchset, '1')
self.assertEqual(items[2].changes[0].number, '1')
self.assertEqual(items[2].changes[0].patchset, '1')
self.assertTrue(items[2].live)
# Add a new patchset to A
@ -2957,16 +2959,16 @@ class TestScheduler(ZuulTestCase):
items = check_pipeline.getAllItems()
self.assertEqual(len(items), 3)
self.assertEqual(items[0].change.number, '1')
self.assertEqual(items[0].change.patchset, '1')
self.assertEqual(items[0].changes[0].number, '1')
self.assertEqual(items[0].changes[0].patchset, '1')
self.assertFalse(items[0].live)
self.assertEqual(items[1].change.number, '2')
self.assertEqual(items[1].change.patchset, '1')
self.assertEqual(items[1].changes[0].number, '2')
self.assertEqual(items[1].changes[0].patchset, '1')
self.assertTrue(items[1].live)
self.assertEqual(items[2].change.number, '1')
self.assertEqual(items[2].change.patchset, '2')
self.assertEqual(items[2].changes[0].number, '1')
self.assertEqual(items[2].changes[0].patchset, '2')
self.assertTrue(items[2].live)
# Add a new patchset to B
@ -2979,16 +2981,16 @@ class TestScheduler(ZuulTestCase):
items = check_pipeline.getAllItems()
self.assertEqual(len(items), 3)
self.assertEqual(items[0].change.number, '1')
self.assertEqual(items[0].change.patchset, '2')
self.assertEqual(items[0].changes[0].number, '1')
self.assertEqual(items[0].changes[0].patchset, '2')
self.assertTrue(items[0].live)
self.assertEqual(items[1].change.number, '1')
self.assertEqual(items[1].change.patchset, '1')
self.assertEqual(items[1].changes[0].number, '1')
self.assertEqual(items[1].changes[0].patchset, '1')
self.assertFalse(items[1].live)
self.assertEqual(items[2].change.number, '2')
self.assertEqual(items[2].change.patchset, '2')
self.assertEqual(items[2].changes[0].number, '2')
self.assertEqual(items[2].changes[0].patchset, '2')
self.assertTrue(items[2].live)
self.builds[0].release()
@ -3055,13 +3057,13 @@ class TestScheduler(ZuulTestCase):
items = check_pipeline.getAllItems()
self.assertEqual(len(items), 3)
self.assertEqual(items[0].change.number, '1')
self.assertEqual(items[0].changes[0].number, '1')
self.assertFalse(items[0].live)
self.assertEqual(items[1].change.number, '2')
self.assertEqual(items[1].changes[0].number, '2')
self.assertTrue(items[1].live)
self.assertEqual(items[2].change.number, '1')
self.assertEqual(items[2].changes[0].number, '1')
self.assertTrue(items[2].live)
# Abandon A
@ -3073,10 +3075,10 @@ class TestScheduler(ZuulTestCase):
items = check_pipeline.getAllItems()
self.assertEqual(len(items), 2)
self.assertEqual(items[0].change.number, '1')
self.assertEqual(items[0].changes[0].number, '1')
self.assertFalse(items[0].live)
self.assertEqual(items[1].change.number, '2')
self.assertEqual(items[1].changes[0].number, '2')
self.assertTrue(items[1].live)
self.executor_server.hold_jobs_in_build = False
@ -4589,8 +4591,9 @@ class TestScheduler(ZuulTestCase):
first = pipeline_status['change_queues'][0]['heads'][0][0]
second = pipeline_status['change_queues'][1]['heads'][0][0]
self.assertIn(first['ref'], ['refs/heads/master', 'refs/heads/stable'])
self.assertIn(second['ref'],
self.assertIn(first['changes'][0]['ref'],
['refs/heads/master', 'refs/heads/stable'])
self.assertIn(second['changes'][0]['ref'],
['refs/heads/master', 'refs/heads/stable'])
self.executor_server.hold_jobs_in_build = False
@ -5799,7 +5802,6 @@ For CI problems and help debugging, contact ci@example.org"""
build_set = items[0].current_build_set
job = list(filter(lambda j: j.name == 'project-test1',
items[0].getJobs()))[0]
build_set.job_graph.getJobFromName(job)
for x in range(3):
# We should have x+1 retried builds for project-test1
@ -8311,8 +8313,8 @@ class TestSemaphore(ZuulTestCase):
1)
items = check_pipeline.getAllItems()
self.assertEqual(items[0].change.number, '1')
self.assertEqual(items[0].change.patchset, '2')
self.assertEqual(items[0].changes[0].number, '1')
self.assertEqual(items[0].changes[0].patchset, '2')
self.assertTrue(items[0].live)
self.executor_server.hold_jobs_in_build = False
@ -8389,7 +8391,8 @@ class TestSemaphore(ZuulTestCase):
# Save some variables for later use while the job is running
check_pipeline = tenant.layout.pipelines['check']
item = check_pipeline.getAllItems()[0]
job = item.getJob('semaphore-one-test1')
job = list(filter(lambda j: j.name == 'semaphore-one-test1',
item.getJobs()))[0]
tenant.semaphore_handler.cleanupLeaks()

View File

@ -717,7 +717,12 @@ class TestSOSCircularDependencies(ZuulTestCase):
self.assertEqual(len(self.builds), 4)
builds = self.builds[:]
self.executor_server.failJob('job1', A)
# Since it's one queue item for the two changes, all 4
# builds need to complete.
builds[0].release()
builds[1].release()
builds[2].release()
builds[3].release()
app.sched.wake_event.set()
self.waitUntilSettled(matcher=[app])
self.assertEqual(A.reported, 2)

View File

@ -79,7 +79,7 @@ class TestTimerAlwaysDynamicBranches(ZuulTestCase):
self.assertEqual(len(pipeline.queues), 2)
for queue in pipeline.queues:
item = queue.queue[0]
self.assertIn(item.change.branch, ['master', 'stable'])
self.assertIn(item.changes[0].branch, ['master', 'stable'])
self.executor_server.hold_jobs_in_build = False

View File

@ -23,7 +23,11 @@ from opentelemetry import trace
def attributes_to_dict(attrlist):
ret = {}
for attr in attrlist:
ret[attr.key] = attr.value.string_value
if attr.value.string_value:
ret[attr.key] = attr.value.string_value
else:
ret[attr.key] = [v.string_value
for v in attr.value.array_value.values]
return ret
@ -247,8 +251,8 @@ class TestTracing(ZuulTestCase):
jobexec.span_id)
item_attrs = attributes_to_dict(item.attributes)
self.assertTrue(item_attrs['ref_number'] == "1")
self.assertTrue(item_attrs['ref_patchset'] == "1")
self.assertTrue(item_attrs['ref_number'] == ["1"])
self.assertTrue(item_attrs['ref_patchset'] == ["1"])
self.assertTrue('zuul_event_id' in item_attrs)
def getSpan(self, name):

View File

@ -1730,8 +1730,8 @@ class TestInRepoConfig(ZuulTestCase):
self.waitUntilSettled()
items = check_pipeline.getAllItems()
self.assertEqual(items[0].change.number, '1')
self.assertEqual(items[0].change.patchset, '1')
self.assertEqual(items[0].changes[0].number, '1')
self.assertEqual(items[0].changes[0].patchset, '1')
self.assertTrue(items[0].live)
in_repo_conf = textwrap.dedent(
@ -1760,8 +1760,8 @@ class TestInRepoConfig(ZuulTestCase):
self.waitUntilSettled()
items = check_pipeline.getAllItems()
self.assertEqual(items[0].change.number, '1')
self.assertEqual(items[0].change.patchset, '2')
self.assertEqual(items[0].changes[0].number, '1')
self.assertEqual(items[0].changes[0].patchset, '2')
self.assertTrue(items[0].live)
self.executor_server.hold_jobs_in_build = False
@ -3438,9 +3438,9 @@ class TestExtraConfigInDependent(ZuulTestCase):
# Jobs in both changes should be success
self.assertHistory([
dict(name='project2-private-extra-file', result='SUCCESS',
changes='3,1 1,1 2,1'),
changes='3,1 2,1 1,1'),
dict(name='project2-private-extra-dir', result='SUCCESS',
changes='3,1 1,1 2,1'),
changes='3,1 2,1 1,1'),
dict(name='project-test1', result='SUCCESS',
changes='3,1 2,1 1,1'),
dict(name='project3-private-extra-file', result='SUCCESS',
@ -3987,8 +3987,8 @@ class TestInRepoJoin(ZuulTestCase):
self.waitUntilSettled()
items = gate_pipeline.getAllItems()
self.assertEqual(items[0].change.number, '1')
self.assertEqual(items[0].change.patchset, '1')
self.assertEqual(items[0].changes[0].number, '1')
self.assertEqual(items[0].changes[0].patchset, '1')
self.assertTrue(items[0].live)
self.executor_server.hold_jobs_in_build = False

View File

@ -173,13 +173,14 @@ class TestWeb(BaseTestWeb):
# information is missing.
self.assertIsNone(q['branch'])
for head in q['heads']:
for change in head:
for item in head:
self.assertIn(
'review.example.com/org/project',
change['project_canonical'])
self.assertTrue(change['active'])
item['changes'][0]['project_canonical'])
self.assertTrue(item['active'])
change = item['changes'][0]
self.assertIn(change['id'], ('1,1', '2,1', '3,1'))
for job in change['jobs']:
for job in item['jobs']:
status_jobs.append(job)
self.assertEqual('project-merge', status_jobs[0]['name'])
# TODO(mordred) pull uuids from self.builds
@ -334,12 +335,13 @@ class TestWeb(BaseTestWeb):
data = self.get_url("api/tenant/tenant-one/status/change/1,1").json()
self.assertEqual(1, len(data), data)
self.assertEqual("org/project", data[0]['project'])
self.assertEqual("org/project", data[0]['changes'][0]['project'])
data = self.get_url("api/tenant/tenant-one/status/change/2,1").json()
self.assertEqual(1, len(data), data)
self.assertEqual("org/project1", data[0]['project'], data)
self.assertEqual("org/project1", data[0]['changes'][0]['project'],
data)
@simple_layout('layouts/nodeset-alternatives.yaml')
def test_web_find_job_nodeset_alternatives(self):
@ -1966,7 +1968,10 @@ class TestBuildInfo(BaseTestWeb):
buildsets = self.get_url("api/tenant/tenant-one/buildsets").json()
self.assertEqual(2, len(buildsets))
project_bs = [x for x in buildsets if x["project"] == "org/project"][0]
project_bs = [
x for x in buildsets
if x["refs"][0]["project"] == "org/project"
][0]
buildset = self.get_url(
"api/tenant/tenant-one/buildset/%s" % project_bs['uuid']).json()
@ -2070,7 +2075,10 @@ class TestArtifacts(BaseTestWeb, AnsibleZuulTestCase):
self.waitUntilSettled()
buildsets = self.get_url("api/tenant/tenant-one/buildsets").json()
project_bs = [x for x in buildsets if x["project"] == "org/project"][0]
project_bs = [
x for x in buildsets
if x["refs"][0]["project"] == "org/project"
][0]
buildset = self.get_url(
"api/tenant/tenant-one/buildset/%s" % project_bs['uuid']).json()
self.assertEqual(3, len(buildset["builds"]))
@ -2672,7 +2680,7 @@ class TestTenantScopedWebApi(BaseTestWeb):
items = tenant.layout.pipelines['gate'].getAllItems()
enqueue_times = {}
for item in items:
enqueue_times[str(item.change)] = item.enqueue_time
enqueue_times[str(item.changes[0])] = item.enqueue_time
# REST API
args = {'pipeline': 'gate',
@ -2699,7 +2707,7 @@ class TestTenantScopedWebApi(BaseTestWeb):
items = tenant.layout.pipelines['gate'].getAllItems()
for item in items:
self.assertEqual(
enqueue_times[str(item.change)], item.enqueue_time)
enqueue_times[str(item.changes[0])], item.enqueue_time)
self.waitUntilSettled()
self.executor_server.release('.*-merge')
@ -2761,7 +2769,7 @@ class TestTenantScopedWebApi(BaseTestWeb):
items = tenant.layout.pipelines['gate'].getAllItems()
enqueue_times = {}
for item in items:
enqueue_times[str(item.change)] = item.enqueue_time
enqueue_times[str(item.changes[0])] = item.enqueue_time
# REST API
args = {'pipeline': 'gate',
@ -2788,7 +2796,7 @@ class TestTenantScopedWebApi(BaseTestWeb):
items = tenant.layout.pipelines['gate'].getAllItems()
for item in items:
self.assertEqual(
enqueue_times[str(item.change)], item.enqueue_time)
enqueue_times[str(item.changes[0])], item.enqueue_time)
self.waitUntilSettled()
self.executor_server.release('.*-merge')
@ -2853,7 +2861,7 @@ class TestTenantScopedWebApi(BaseTestWeb):
if i.live]
enqueue_times = {}
for item in items:
enqueue_times[str(item.change)] = item.enqueue_time
enqueue_times[str(item.changes[0])] = item.enqueue_time
# REST API
args = {'pipeline': 'check',
@ -2882,12 +2890,12 @@ class TestTenantScopedWebApi(BaseTestWeb):
if i.live]
for item in items:
self.assertEqual(
enqueue_times[str(item.change)], item.enqueue_time)
enqueue_times[str(item.changes[0])], item.enqueue_time)
# We can't reliably test for side effects in the check
# pipeline since the change queues are independent, so we
# directly examine the queues.
queue_items = [(item.change.number, item.live) for item in
queue_items = [(item.changes[0].number, item.live) for item in
tenant.layout.pipelines['check'].getAllItems()]
expected = [('1', False),
('2', True),
@ -3555,7 +3563,7 @@ class TestCLIViaWebApi(BaseTestWeb):
items = tenant.layout.pipelines['gate'].getAllItems()
enqueue_times = {}
for item in items:
enqueue_times[str(item.change)] = item.enqueue_time
enqueue_times[str(item.changes[0])] = item.enqueue_time
# Promote B and C using the cli
authz = {'iss': 'zuul_operator',
@ -3581,7 +3589,7 @@ class TestCLIViaWebApi(BaseTestWeb):
items = tenant.layout.pipelines['gate'].getAllItems()
for item in items:
self.assertEqual(
enqueue_times[str(item.change)], item.enqueue_time)
enqueue_times[str(item.changes[0])], item.enqueue_time)
self.waitUntilSettled()
self.executor_server.release('.*-merge')

View File

@ -356,7 +356,7 @@ class TestZuulClientAdmin(BaseTestWeb):
items = tenant.layout.pipelines['gate'].getAllItems()
enqueue_times = {}
for item in items:
enqueue_times[str(item.change)] = item.enqueue_time
enqueue_times[str(item.changes[0])] = item.enqueue_time
# Promote B and C using the cli
authz = {'iss': 'zuul_operator',
@ -382,7 +382,7 @@ class TestZuulClientAdmin(BaseTestWeb):
items = tenant.layout.pipelines['gate'].getAllItems()
for item in items:
self.assertEqual(
enqueue_times[str(item.change)], item.enqueue_time)
enqueue_times[str(item.changes[0])], item.enqueue_time)
self.waitUntilSettled()
self.executor_server.release('.*-merge')

View File

@ -1,4 +1,5 @@
# Copyright 2019 Red Hat, Inc.
# Copyright 2024 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@ -37,20 +38,34 @@ class ElasticsearchReporter(BaseReporter):
docs = []
index = '%s.%s-%s' % (self.index, item.pipeline.tenant.name,
time.strftime("%Y.%m.%d"))
changes = [
{
"project": change.project.name,
"change": getattr(change, 'number', None),
"patchset": getattr(change, 'patchset', None),
"ref": getattr(change, 'ref', ''),
"oldrev": getattr(change, 'oldrev', ''),
"newrev": getattr(change, 'newrev', ''),
"branch": getattr(change, 'branch', ''),
"ref_url": change.url,
}
for change in item.changes
]
buildset_doc = {
"uuid": item.current_build_set.uuid,
"build_type": "buildset",
"tenant": item.pipeline.tenant.name,
"pipeline": item.pipeline.name,
"project": item.change.project.name,
"change": getattr(item.change, 'number', None),
"patchset": getattr(item.change, 'patchset', None),
"ref": getattr(item.change, 'ref', ''),
"oldrev": getattr(item.change, 'oldrev', ''),
"newrev": getattr(item.change, 'newrev', ''),
"branch": getattr(item.change, 'branch', ''),
"changes": changes,
"project": item.changes[0].project.name,
"change": getattr(item.changes[0], 'number', None),
"patchset": getattr(item.changes[0], 'patchset', None),
"ref": getattr(item.changes[0], 'ref', ''),
"oldrev": getattr(item.changes[0], 'oldrev', ''),
"newrev": getattr(item.changes[0], 'newrev', ''),
"branch": getattr(item.changes[0], 'branch', ''),
"zuul_ref": item.current_build_set.ref,
"ref_url": item.change.url,
"ref_url": item.changes[0].url,
"result": item.current_build_set.result,
"message": self._formatItemReport(item, with_jobs=False)
}
@ -80,8 +95,21 @@ class ElasticsearchReporter(BaseReporter):
buildset_doc['duration'] = (
buildset_doc['end_time'] - buildset_doc['start_time'])
change = item.getChangeForJob(build.job)
change_doc = {
"project": change.project.name,
"change": getattr(change, 'number', None),
"patchset": getattr(change, 'patchset', None),
"ref": getattr(change, 'ref', ''),
"oldrev": getattr(change, 'oldrev', ''),
"newrev": getattr(change, 'newrev', ''),
"branch": getattr(change, 'branch', ''),
"ref_url": change.url,
}
build_doc = {
"uuid": build.uuid,
"change": change_doc,
"build_type": "build",
"buildset_uuid": buildset_doc['uuid'],
"job_name": build.job.name,

View File

@ -1,6 +1,6 @@
# Copyright 2011 OpenStack, LLC.
# Copyright 2012 Hewlett-Packard Development Company, L.P.
# Copyright 2023 Acme Gating, LLC
# Copyright 2023-2024 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@ -1165,24 +1165,23 @@ class GerritConnection(ZKChangeCacheMixin, ZKBranchCacheMixin, BaseConnection):
}
self.event_queue.put(event)
def review(self, item, message, submit, labels, checks_api,
def review(self, item, change, message, submit, labels, checks_api,
file_comments, phase1, phase2, zuul_event_id=None):
if self.session:
meth = self.review_http
else:
meth = self.review_ssh
return meth(item, message, submit, labels, checks_api,
return meth(item, change, message, submit, labels, checks_api,
file_comments, phase1, phase2,
zuul_event_id=zuul_event_id)
def review_ssh(self, item, message, submit, labels, checks_api,
def review_ssh(self, item, change, message, submit, labels, checks_api,
file_comments, phase1, phase2, zuul_event_id=None):
log = get_annotated_logger(self.log, zuul_event_id)
if checks_api:
log.error("Zuul is configured to report to the checks API, "
"but no HTTP password is present for the connection "
"in the configuration file.")
change = item.change
project = change.project.name
cmd = 'gerrit review --project %s' % project
if phase1:
@ -1208,8 +1207,7 @@ class GerritConnection(ZKChangeCacheMixin, ZKBranchCacheMixin, BaseConnection):
out, err = self._ssh(cmd, zuul_event_id=zuul_event_id)
return err
def report_checks(self, log, item, changeid, checkinfo):
change = item.change
def report_checks(self, log, item, change, changeid, checkinfo):
checkinfo = checkinfo.copy()
uuid = checkinfo.pop('uuid', None)
scheme = checkinfo.pop('scheme', None)
@ -1254,10 +1252,9 @@ class GerritConnection(ZKChangeCacheMixin, ZKBranchCacheMixin, BaseConnection):
"attempt %s: %s", x, e)
time.sleep(x * self.submit_retry_backoff)
def review_http(self, item, message, submit, labels,
def review_http(self, item, change, message, submit, labels,
checks_api, file_comments, phase1, phase2,
zuul_event_id=None):
change = item.change
changeid = "%s~%s~%s" % (
urllib.parse.quote(str(change.project), safe=''),
urllib.parse.quote(str(change.branch), safe=''),
@ -1293,7 +1290,7 @@ class GerritConnection(ZKChangeCacheMixin, ZKBranchCacheMixin, BaseConnection):
if self.version >= (2, 13, 0):
data['tag'] = 'autogenerated:zuul:%s' % (item.pipeline.name)
if checks_api:
self.report_checks(log, item, changeid, checks_api)
self.report_checks(log, item, change, changeid, checks_api)
if (message or data.get('labels') or data.get('comments')
or data.get('robot_comments')):
for x in range(1, 4):
@ -1356,7 +1353,7 @@ class GerritConnection(ZKChangeCacheMixin, ZKBranchCacheMixin, BaseConnection):
def queryChangeHTTP(self, number, event=None):
query = ('changes/%s?o=DETAILED_ACCOUNTS&o=CURRENT_REVISION&'
'o=CURRENT_COMMIT&o=CURRENT_FILES&o=LABELS&'
'o=DETAILED_LABELS' % (number,))
'o=DETAILED_LABELS&o=ALL_REVISIONS' % (number,))
if self.version >= (3, 5, 0):
query += '&o=SUBMIT_REQUIREMENTS'
data = self.get(query)

View File

@ -160,9 +160,12 @@ class GerritChange(Change):
'%s/c/%s/+/%s' % (baseurl, self.project.name, self.number),
]
for rev_commit, revision in data['revisions'].items():
if str(revision['_number']) == self.patchset:
self.ref = revision['ref']
self.commit = rev_commit
if str(current_revision['_number']) == self.patchset:
self.ref = current_revision['ref']
self.commit = data['current_revision']
self.is_current_patchset = True
else:
self.is_current_patchset = False

View File

@ -1,4 +1,5 @@
# Copyright 2013 Rackspace Australia
# Copyright 2024 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@ -43,44 +44,44 @@ class GerritReporter(BaseReporter):
"""Send a message to gerrit."""
log = get_annotated_logger(self.log, item.event)
ret = []
for change in item.changes:
err = self._reportChange(item, change, log, phase1, phase2)
if err:
ret.append(err)
return ret
def _reportChange(self, item, change, log, phase1=True, phase2=True):
"""Send a message to gerrit."""
# If the source is no GerritSource we cannot report anything here.
if not isinstance(item.change.project.source, GerritSource):
if not isinstance(change.project.source, GerritSource):
return
# We can only report changes, not plain branches
if not isinstance(item.change, Change):
if not isinstance(change, Change):
return
# For supporting several Gerrit connections we also must filter by
# the canonical hostname.
if item.change.project.source.connection.canonical_hostname != \
if change.project.source.connection.canonical_hostname != \
self.connection.canonical_hostname:
log.debug("Not reporting %s as this Gerrit reporter "
"is for %s and the change is from %s",
item, self.connection.canonical_hostname,
item.change.project.source.connection.canonical_hostname)
return
comments = self.getFileComments(item)
comments = self.getFileComments(item, change)
if self._create_comment:
message = self._formatItemReport(item)
else:
message = ''
log.debug("Report change %s, params %s, message: %s, comments: %s",
item.change, self.config, message, comments)
if phase2 and self._submit and not hasattr(item.change, '_ref_sha'):
change, self.config, message, comments)
if phase2 and self._submit and not hasattr(change, '_ref_sha'):
# If we're starting to submit a bundle, save the current
# ref sha for every item in the bundle.
changes = set([item.change])
if item.bundle:
for i in item.bundle.items:
changes.add(i.change)
# Store a dict of project,branch -> sha so that if we have
# duplicate project/branches, we only query once.
ref_shas = {}
for other_change in changes:
for other_change in item.changes:
if not isinstance(other_change, GerritChange):
continue
key = (other_change.project, other_change.branch)
@ -92,9 +93,10 @@ class GerritReporter(BaseReporter):
ref_shas[key] = ref_sha
other_change._ref_sha = ref_sha
return self.connection.review(item, message, self._submit,
self._labels, self._checks_api,
comments, phase1, phase2,
return self.connection.review(item, change, message,
self._submit, self._labels,
self._checks_api, comments,
phase1, phase2,
zuul_event_id=item.event)
def getSubmitAllowNeeds(self):

View File

@ -78,7 +78,7 @@ class GitConnection(ZKChangeCacheMixin, BaseConnection):
self.projects[project.name] = project
def getChangeFilesUpdated(self, project_name, branch, tosha):
job = self.sched.merger.getFilesChanges(
job = self.sched.merger.getFilesChangesRaw(
self.connection_name, project_name, branch, tosha,
needs_result=True)
self.log.debug("Waiting for fileschanges job %s" % job)
@ -86,8 +86,8 @@ class GitConnection(ZKChangeCacheMixin, BaseConnection):
if not job.updated:
raise Exception("Fileschanges job %s failed" % job)
self.log.debug("Fileschanges job %s got changes on files %s" %
(job, job.files))
return job.files
(job, job.files[0]))
return job.files[0]
def lsRemote(self, project):
refs = {}

View File

@ -1,4 +1,5 @@
# Copyright 2015 Puppet Labs
# Copyright 2024 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@ -58,37 +59,48 @@ class GithubReporter(BaseReporter):
self.context = "{}/{}".format(pipeline.tenant.name, pipeline.name)
def report(self, item, phase1=True, phase2=True):
"""Report on an event."""
log = get_annotated_logger(self.log, item.event)
ret = []
for change in item.changes:
err = self._reportChange(item, change, log, phase1, phase2)
if err:
ret.append(err)
return ret
def _reportChange(self, item, change, log, phase1=True, phase2=True):
"""Report on an event."""
# If the source is not GithubSource we cannot report anything here.
if not isinstance(item.change.project.source, GithubSource):
if not isinstance(change.project.source, GithubSource):
return
# For supporting several Github connections we also must filter by
# the canonical hostname.
if item.change.project.source.connection.canonical_hostname != \
if change.project.source.connection.canonical_hostname != \
self.connection.canonical_hostname:
return
# order is important for github branch protection.
# A status should be set before a merge attempt
if phase1 and self._commit_status is not None:
if (hasattr(item.change, 'patchset') and
item.change.patchset is not None):
self.setCommitStatus(item)
elif (hasattr(item.change, 'newrev') and
item.change.newrev is not None):
self.setCommitStatus(item)
if (hasattr(change, 'patchset') and
change.patchset is not None):
self.setCommitStatus(item, change)
elif (hasattr(change, 'newrev') and
change.newrev is not None):
self.setCommitStatus(item, change)
# Comments, labels, and merges can only be performed on pull requests.
# If the change is not a pull request (e.g. a push) skip them.
if hasattr(item.change, 'number'):
if hasattr(change, 'number'):
errors_received = False
if phase1:
if self._labels or self._unlabels:
self.setLabels(item)
self.setLabels(item, change)
if self._review:
self.addReview(item)
self.addReview(item, change)
if self._check:
check_errors = self.updateCheck(item)
check_errors = self.updateCheck(item, change)
# TODO (felix): We could use this mechanism to
# also report back errors from label and review
# actions
@ -98,12 +110,12 @@ class GithubReporter(BaseReporter):
)
errors_received = True
if self._create_comment or errors_received:
self.addPullComment(item)
self.addPullComment(item, change)
if phase2 and self._merge:
try:
self.mergePull(item)
self.mergePull(item, change)
except Exception as e:
self.addPullComment(item, str(e))
self.addPullComment(item, change, str(e))
def _formatJobResult(self, job_fields):
# We select different emojis to represents build results:
@ -145,24 +157,24 @@ class GithubReporter(BaseReporter):
ret += 'Skipped %i %s\n' % (skipped, jobtext)
return ret
def addPullComment(self, item, comment=None):
def addPullComment(self, item, change, comment=None):
log = get_annotated_logger(self.log, item.event)
message = comment or self._formatItemReport(item)
project = item.change.project.name
pr_number = item.change.number
project = change.project.name
pr_number = change.number
log.debug('Reporting change %s, params %s, message: %s',
item.change, self.config, message)
change, self.config, message)
self.connection.commentPull(project, pr_number, message,
zuul_event_id=item.event)
def setCommitStatus(self, item):
def setCommitStatus(self, item, change):
log = get_annotated_logger(self.log, item.event)
project = item.change.project.name
if hasattr(item.change, 'patchset'):
sha = item.change.patchset
elif hasattr(item.change, 'newrev'):
sha = item.change.newrev
project = change.project.name
if hasattr(change, 'patchset'):
sha = change.patchset
elif hasattr(change, 'newrev'):
sha = change.newrev
state = self._commit_status
url = item.formatStatusUrl()
@ -180,27 +192,27 @@ class GithubReporter(BaseReporter):
log.debug(
'Reporting change %s, params %s, '
'context: %s, state: %s, description: %s, url: %s',
item.change, self.config, self.context, state, description, url)
change, self.config, self.context, state, description, url)
self.connection.setCommitStatus(
project, sha, state, url, description, self.context,
zuul_event_id=item.event)
def mergePull(self, item):
def mergePull(self, item, change):
log = get_annotated_logger(self.log, item.event)
merge_mode = item.current_build_set.getMergeMode()
merge_mode = item.current_build_set.getMergeMode(change)
if merge_mode not in self.merge_modes:
mode = model.get_merge_mode_name(merge_mode)
self.log.warning('Merge mode %s not supported by Github', mode)
raise MergeFailure('Merge mode %s not supported by Github' % mode)
project = item.change.project.name
pr_number = item.change.number
sha = item.change.patchset
project = change.project.name
pr_number = change.number
sha = change.patchset
log.debug('Reporting change %s, params %s, merging via API',
item.change, self.config)
message = self._formatMergeMessage(item.change, merge_mode)
change, self.config)
message = self._formatMergeMessage(change, merge_mode)
merge_mode = self.merge_modes[merge_mode]
for i in [1, 2]:
@ -208,26 +220,26 @@ class GithubReporter(BaseReporter):
self.connection.mergePull(project, pr_number, message, sha=sha,
method=merge_mode,
zuul_event_id=item.event)
self.connection.updateChangeAttributes(item.change,
self.connection.updateChangeAttributes(change,
is_merged=True)
return
except MergeFailure as e:
log.exception('Merge attempt of change %s %s/2 failed.',
item.change, i, exc_info=True)
change, i, exc_info=True)
error_message = str(e)
if i == 1:
time.sleep(2)
log.warning('Merge of change %s failed after 2 attempts, giving up',
item.change)
change)
raise MergeFailure(error_message)
def addReview(self, item):
def addReview(self, item, change):
log = get_annotated_logger(self.log, item.event)
project = item.change.project.name
pr_number = item.change.number
sha = item.change.patchset
project = change.project.name
pr_number = change.number
sha = change.patchset
log.debug('Reporting change %s, params %s, review:\n%s',
item.change, self.config, self._review)
change, self.config, self._review)
self.connection.reviewPull(
project,
pr_number,
@ -239,12 +251,12 @@ class GithubReporter(BaseReporter):
self.connection.unlabelPull(project, pr_number, label,
zuul_event_id=item.event)
def updateCheck(self, item):
def updateCheck(self, item, change):
log = get_annotated_logger(self.log, item.event)
message = self._formatItemReport(item)
project = item.change.project.name
pr_number = item.change.number
sha = item.change.patchset
project = change.project.name
pr_number = change.number
sha = change.patchset
status = self._check
# We declare a item as completed if it either has a result
@ -260,13 +272,13 @@ class GithubReporter(BaseReporter):
log.debug(
"Updating check for change %s, params %s, context %s, message: %s",
item.change, self.config, self.context, message
change, self.config, self.context, message
)
details_url = item.formatStatusUrl()
# Check for inline comments that can be reported via checks API
file_comments = self.getFileComments(item)
file_comments = self.getFileComments(item, change)
# Github allows an external id to be added to a check run. We can use
# this to identify the check run in any custom actions we define.
@ -279,11 +291,13 @@ class GithubReporter(BaseReporter):
{
"tenant": item.pipeline.tenant.name,
"pipeline": item.pipeline.name,
"change": item.change.number,
"change": change.number,
}
)
state = item.dynamic_state[self.connection.connection_name]
check_run_ids = state.setdefault('check_run_ids', {})
check_run_id = check_run_ids.get(change.cache_key)
check_run_id, errors = self.connection.updateCheck(
project,
pr_number,
@ -296,27 +310,27 @@ class GithubReporter(BaseReporter):
file_comments,
external_id,
zuul_event_id=item.event,
check_run_id=state.get('check_run_id')
check_run_id=check_run_id,
)
if check_run_id:
state['check_run_id'] = check_run_id
check_run_ids[change.cache_key] = check_run_id
return errors
def setLabels(self, item):
def setLabels(self, item, change):
log = get_annotated_logger(self.log, item.event)
project = item.change.project.name
pr_number = item.change.number
project = change.project.name
pr_number = change.number
if self._labels:
log.debug('Reporting change %s, params %s, labels:\n%s',
item.change, self.config, self._labels)
change, self.config, self._labels)
for label in self._labels:
self.connection.labelPull(project, pr_number, label,
zuul_event_id=item.event)
if self._unlabels:
log.debug('Reporting change %s, params %s, unlabels:\n%s',
item.change, self.config, self._unlabels)
change, self.config, self._unlabels)
for label in self._unlabels:
self.connection.unlabelPull(project, pr_number, label,
zuul_event_id=item.event)

View File

@ -1,4 +1,5 @@
# Copyright 2019 Red Hat, Inc.
# Copyright 2024 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@ -51,62 +52,68 @@ class GitlabReporter(BaseReporter):
def report(self, item, phase1=True, phase2=True):
"""Report on an event."""
if not isinstance(item.change.project.source, GitlabSource):
for change in item.changes:
self._reportChange(item, change, phase1, phase2)
return []
def _reportChange(self, item, change, phase1=True, phase2=True):
"""Report on an event."""
if not isinstance(change.project.source, GitlabSource):
return
if item.change.project.source.connection.canonical_hostname != \
if change.project.source.connection.canonical_hostname != \
self.connection.canonical_hostname:
return
if hasattr(item.change, 'number'):
if hasattr(change, 'number'):
if phase1:
if self._create_comment:
self.addMRComment(item)
self.addMRComment(item, change)
if self._approval is not None:
self.setApproval(item)
self.setApproval(item, change)
if self._labels or self._unlabels:
self.setLabels(item)
self.setLabels(item, change)
if phase2 and self._merge:
self.mergeMR(item)
if not item.change.is_merged:
self.mergeMR(item, change)
if not change.is_merged:
msg = self._formatItemReportMergeConflict(item)
self.addMRComment(item, msg)
self.addMRComment(item, change, msg)
def addMRComment(self, item, comment=None):
def addMRComment(self, item, change, comment=None):
log = get_annotated_logger(self.log, item.event)
message = comment or self._formatItemReport(item)
project = item.change.project.name
mr_number = item.change.number
project = change.project.name
mr_number = change.number
log.debug('Reporting change %s, params %s, message: %s',
item.change, self.config, message)
change, self.config, message)
self.connection.commentMR(project, mr_number, message,
event=item.event)
def setApproval(self, item):
def setApproval(self, item, change):
log = get_annotated_logger(self.log, item.event)
project = item.change.project.name
mr_number = item.change.number
patchset = item.change.patchset
project = change.project.name
mr_number = change.number
patchset = change.patchset
log.debug('Reporting change %s, params %s, approval: %s',
item.change, self.config, self._approval)
change, self.config, self._approval)
self.connection.approveMR(project, mr_number, patchset,
self._approval, event=item.event)
def setLabels(self, item):
def setLabels(self, item, change):
log = get_annotated_logger(self.log, item.event)
project = item.change.project.name
mr_number = item.change.number
project = change.project.name
mr_number = change.number
log.debug('Reporting change %s, params %s, labels: %s, unlabels: %s',
item.change, self.config, self._labels, self._unlabels)
change, self.config, self._labels, self._unlabels)
self.connection.updateMRLabels(project, mr_number,
self._labels, self._unlabels,
zuul_event_id=item.event)
def mergeMR(self, item):
project = item.change.project.name
mr_number = item.change.number
def mergeMR(self, item, change):
project = change.project.name
mr_number = change.number
merge_mode = item.current_build_set.getMergeMode()
merge_mode = item.current_build_set.getMergeMode(change)
if merge_mode not in self.merge_modes:
mode = model.get_merge_mode_name(merge_mode)
@ -118,17 +125,17 @@ class GitlabReporter(BaseReporter):
for i in [1, 2]:
try:
self.connection.mergeMR(project, mr_number, merge_mode)
item.change.is_merged = True
change.is_merged = True
return
except MergeFailure:
self.log.exception(
'Merge attempt of change %s %s/2 failed.' %
(item.change, i), exc_info=True)
(change, i), exc_info=True)
if i == 1:
time.sleep(2)
self.log.warning(
'Merge of change %s failed after 2 attempts, giving up' %
item.change)
change)
def getSubmitAllowNeeds(self):
return []

View File

@ -1,4 +1,5 @@
# Copyright 2017 Red Hat, Inc.
# Copyright 2024 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@ -32,21 +33,35 @@ class MQTTReporter(BaseReporter):
return
include_returned_data = self.config.get('include-returned-data')
log = get_annotated_logger(self.log, item.event)
log.debug("Report change %s, params %s", item.change, self.config)
log.debug("Report %s, params %s", item, self.config)
changes = [
{
'project': change.project.name,
'branch': getattr(change, 'branch', ''),
'change_url': change.url,
'change': getattr(change, 'number', ''),
'patchset': getattr(change, 'patchset', ''),
'commit_id': getattr(change, 'commit_id', ''),
'owner': getattr(change, 'owner', ''),
'ref': getattr(change, 'ref', ''),
}
for change in item.changes
]
message = {
'timestamp': time.time(),
'action': self._action,
'tenant': item.pipeline.tenant.name,
'zuul_ref': item.current_build_set.ref,
'pipeline': item.pipeline.name,
'project': item.change.project.name,
'branch': getattr(item.change, 'branch', ''),
'change_url': item.change.url,
'change': getattr(item.change, 'number', ''),
'patchset': getattr(item.change, 'patchset', ''),
'commit_id': getattr(item.change, 'commit_id', ''),
'owner': getattr(item.change, 'owner', ''),
'ref': getattr(item.change, 'ref', ''),
'changes': changes,
'project': item.changes[0].project.name,
'branch': getattr(item.changes[0], 'branch', ''),
'change_url': item.changes[0].url,
'change': getattr(item.changes[0], 'number', ''),
'patchset': getattr(item.changes[0], 'patchset', ''),
'commit_id': getattr(item.changes[0], 'commit_id', ''),
'owner': getattr(item.changes[0], 'owner', ''),
'ref': getattr(item.changes[0], 'ref', ''),
'message': self._formatItemReport(
item, with_jobs=False),
'trigger_time': item.event.timestamp,
@ -63,13 +78,26 @@ class MQTTReporter(BaseReporter):
for job in item.getJobs():
job_informations = {
'job_name': job.name,
'job_uuid': job.uuid,
'voting': job.voting,
}
build = item.current_build_set.getBuild(job)
if build:
# Report build data if available
(result, web_url) = item.formatJobResult(job)
change = item.getChangeForJob(job)
change_info = {
'project': change.project.name,
'branch': getattr(change, 'branch', ''),
'change_url': change.url,
'change': getattr(change, 'number', ''),
'patchset': getattr(change, 'patchset', ''),
'commit_id': getattr(change, 'commit_id', ''),
'owner': getattr(change, 'owner', ''),
'ref': getattr(change, 'ref', ''),
}
job_informations.update({
'change': change_info,
'uuid': build.uuid,
'start_time': build.start_time,
'end_time': build.end_time,
@ -90,16 +118,17 @@ class MQTTReporter(BaseReporter):
# Report build data of retried builds if available
retry_builds = item.current_build_set.getRetryBuildsForJob(
job)
for build in retry_builds:
for retry_build in retry_builds:
(result, web_url) = item.formatJobResult(job, build)
retry_build_information = {
'job_name': job.name,
'job_uuid': job.uuid,
'voting': job.voting,
'uuid': build.uuid,
'start_time': build.start_time,
'end_time': build.end_time,
'execute_time': build.execute_time,
'log_url': build.log_url,
'uuid': retry_build.uuid,
'start_time': retry_build.start_time,
'end_time': retry_build.end_time,
'execute_time': retry_build.execute_time,
'log_url': retry_build.log_url,
'web_url': web_url,
'result': result,
}
@ -112,11 +141,12 @@ class MQTTReporter(BaseReporter):
topic = self.config['topic'].format(
tenant=item.pipeline.tenant.name,
pipeline=item.pipeline.name,
project=item.change.project.name,
branch=getattr(item.change, 'branch', None),
change=getattr(item.change, 'number', None),
patchset=getattr(item.change, 'patchset', None),
ref=getattr(item.change, 'ref', None))
changes=changes,
project=item.changes[0].project.name,
branch=getattr(item.changes[0], 'branch', None),
change=getattr(item.changes[0], 'number', None),
patchset=getattr(item.changes[0], 'patchset', None),
ref=getattr(item.changes[0], 'ref', None))
except Exception:
log.exception("Error while formatting MQTT topic %s:",
self.config['topic'])

View File

@ -1,4 +1,5 @@
# Copyright 2018 Red Hat, Inc.
# Copyright 2024 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@ -36,33 +37,39 @@ class PagureReporter(BaseReporter):
def report(self, item, phase1=True, phase2=True):
"""Report on an event."""
for change in item.changes:
self._reportChange(item, change, phase1, phase2)
return []
def _reportChange(self, item, change, phase1=True, phase2=True):
"""Report on an event."""
# If the source is not PagureSource we cannot report anything here.
if not isinstance(item.change.project.source, PagureSource):
if not isinstance(change.project.source, PagureSource):
return
# For supporting several Pagure connections we also must filter by
# the canonical hostname.
if item.change.project.source.connection.canonical_hostname != \
if change.project.source.connection.canonical_hostname != \
self.connection.canonical_hostname:
return
if phase1:
if self._commit_status is not None:
if (hasattr(item.change, 'patchset') and
item.change.patchset is not None):
self.setCommitStatus(item)
elif (hasattr(item.change, 'newrev') and
item.change.newrev is not None):
self.setCommitStatus(item)
if hasattr(item.change, 'number'):
if (hasattr(change, 'patchset') and
change.patchset is not None):
self.setCommitStatus(item, change)
elif (hasattr(change, 'newrev') and
change.newrev is not None):
self.setCommitStatus(item, change)
if hasattr(change, 'number'):
if self._create_comment:
self.addPullComment(item)
self.addPullComment(item, change)
if phase2 and self._merge:
self.mergePull(item)
if not item.change.is_merged:
self.mergePull(item, change)
if not change.is_merged:
msg = self._formatItemReportMergeConflict(item)
self.addPullComment(item, msg)
self.addPullComment(item, change, msg)
def _formatItemReportJobs(self, item):
# Return the list of jobs portion of the report
@ -75,23 +82,23 @@ class PagureReporter(BaseReporter):
ret += 'Skipped %i %s\n' % (skipped, jobtext)
return ret
def addPullComment(self, item, comment=None):
def addPullComment(self, item, change, comment=None):
message = comment or self._formatItemReport(item)
project = item.change.project.name
pr_number = item.change.number
project = change.project.name
pr_number = change.number
self.log.debug(
'Reporting change %s, params %s, message: %s' %
(item.change, self.config, message))
(change, self.config, message))
self.connection.commentPull(project, pr_number, message)
def setCommitStatus(self, item):
project = item.change.project.name
if hasattr(item.change, 'patchset'):
sha = item.change.patchset
elif hasattr(item.change, 'newrev'):
sha = item.change.newrev
def setCommitStatus(self, item, change):
project = change.project.name
if hasattr(change, 'patchset'):
sha = change.patchset
elif hasattr(change, 'newrev'):
sha = change.newrev
state = self._commit_status
change_number = item.change.number
change_number = change.number
url_pattern = self.config.get('status-url')
sched_config = self.connection.sched.config
@ -106,30 +113,30 @@ class PagureReporter(BaseReporter):
self.log.debug(
'Reporting change %s, params %s, '
'context: %s, state: %s, description: %s, url: %s' %
(item.change, self.config,
(change, self.config,
self.context, state, description, url))
self.connection.setCommitStatus(
project, change_number, state, url, description, self.context)
def mergePull(self, item):
project = item.change.project.name
pr_number = item.change.number
def mergePull(self, item, change):
project = change.project.name
pr_number = change.number
for i in [1, 2]:
try:
self.connection.mergePull(project, pr_number)
item.change.is_merged = True
change.is_merged = True
return
except MergeFailure:
self.log.exception(
'Merge attempt of change %s %s/2 failed.' %
(item.change, i), exc_info=True)
(change, i), exc_info=True)
if i == 1:
time.sleep(2)
self.log.warning(
'Merge of change %s failed after 2 attempts, giving up' %
item.change)
change)
def getSubmitAllowNeeds(self):
return []
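The Pagure reporter now follows the fan-out pattern used across drivers: report() iterates over item.changes and delegates to a per-change helper that filters on source type and canonical hostname. A minimal sketch of the pattern, using stand-in classes rather than the real Zuul model:

class ExampleReporter:
    """Sketch of the per-change fan-out; not the real base class."""

    def __init__(self, canonical_hostname):
        self.canonical_hostname = canonical_hostname

    def report(self, item, phase1=True, phase2=True):
        # One queue item may now carry a whole dependency cycle.
        for change in item.changes:
            self._reportChange(item, change, phase1, phase2)
        return []

    def _reportChange(self, item, change, phase1, phase2):
        # Skip changes belonging to another connection.
        if (change.project.source.connection.canonical_hostname
                != self.canonical_hostname):
            return
        print("would report", change)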


@ -1,4 +1,5 @@
# Copyright 2013 Rackspace Australia
# Copyright 2024 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@ -32,8 +33,8 @@ class SMTPReporter(BaseReporter):
log = get_annotated_logger(self.log, item.event)
message = self._formatItemReport(item)
log.debug("Report change %s, params %s, message: %s",
item.change, self.config, message)
log.debug("Report %s, params %s, message: %s",
item, self.config, message)
from_email = self.config['from'] \
if 'from' in self.config else None
@ -42,13 +43,17 @@ class SMTPReporter(BaseReporter):
if 'subject' in self.config:
subject = self.config['subject'].format(
change=item.change, pipeline=item.pipeline.getSafeAttributes())
change=item.changes[0],
changes=item.changes,
pipeline=item.pipeline.getSafeAttributes())
else:
subject = "Report for change {change} against {ref}".format(
change=item.change, ref=item.change.ref)
subject = "Report for changes {changes} against {ref}".format(
changes=' '.join([str(c) for c in item.changes]),
ref=' '.join([c.ref for c in item.changes]))
self.connection.sendMail(subject, message, from_email, to_email,
zuul_event_id=item.event)
return []
def getSchema():


@ -246,10 +246,13 @@ class DatabaseSession(object):
# joinedload).
q = self.session().query(self.connection.buildModel).\
join(self.connection.buildSetModel).\
join(self.connection.refModel).\
outerjoin(self.connection.providesModel).\
options(orm.contains_eager(self.connection.buildModel.buildset),
options(orm.contains_eager(self.connection.buildModel.buildset).
subqueryload(self.connection.buildSetModel.refs),
orm.selectinload(self.connection.buildModel.provides),
orm.selectinload(self.connection.buildModel.artifacts))
orm.selectinload(self.connection.buildModel.artifacts),
orm.selectinload(self.connection.buildModel.ref))
q = self.listFilter(q, buildset_table.c.tenant, tenant)
q = self.listFilter(q, build_table.c.uuid, uuid)
@ -428,7 +431,9 @@ class DatabaseSession(object):
options(orm.joinedload(self.connection.buildSetModel.builds).
subqueryload(self.connection.buildModel.artifacts)).\
options(orm.joinedload(self.connection.buildSetModel.builds).
subqueryload(self.connection.buildModel.provides))
subqueryload(self.connection.buildModel.provides)).\
options(orm.joinedload(self.connection.buildSetModel.builds).
subqueryload(self.connection.buildModel.ref))
q = self.listFilter(q, buildset_table.c.tenant, tenant)
q = self.listFilter(q, buildset_table.c.uuid, uuid)
@ -799,6 +804,11 @@ class SQLConnection(BaseConnection):
with self.getSession() as db:
return db.getBuilds(*args, **kw)
def getBuild(self, *args, **kw):
"""Return a Build object"""
with self.getSession() as db:
return db.getBuild(*args, **kw)
def getBuildsets(self, *args, **kw):
"""Return a list of BuildSet objects"""
with self.getSession() as db:


@ -1,4 +1,5 @@
# Copyright 2015 Rackspace Australia
# Copyright 2024 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@ -54,16 +55,6 @@ class SQLReporter(BaseReporter):
event_timestamp = datetime.datetime.fromtimestamp(
item.event.timestamp, tz=datetime.timezone.utc)
ref = db.getOrCreateRef(
project=item.change.project.name,
change=getattr(item.change, 'number', None),
patchset=getattr(item.change, 'patchset', None),
ref_url=item.change.url,
ref=getattr(item.change, 'ref', ''),
oldrev=getattr(item.change, 'oldrev', ''),
newrev=getattr(item.change, 'newrev', ''),
branch=getattr(item.change, 'branch', ''),
)
db_buildset = db.createBuildSet(
uuid=buildset.uuid,
tenant=item.pipeline.tenant.name,
@ -72,7 +63,18 @@ class SQLReporter(BaseReporter):
event_timestamp=event_timestamp,
updated=datetime.datetime.utcnow(),
)
db_buildset.refs.append(ref)
for change in item.changes:
ref = db.getOrCreateRef(
project=change.project.name,
change=getattr(change, 'number', None),
patchset=getattr(change, 'patchset', None),
ref_url=change.url,
ref=getattr(change, 'ref', ''),
oldrev=getattr(change, 'oldrev', ''),
newrev=getattr(change, 'newrev', ''),
branch=getattr(change, 'branch', ''),
)
db_buildset.refs.append(ref)
return db_buildset
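With this loop, a buildset row is linked to one ref row per change in the item rather than exactly one. A hypothetical illustration of the resulting one-to-many shape for a two-change cycle (column names follow the code above; the values are invented):

# Invented example rows: one buildset, two refs, linked via
# db_buildset.refs.append(ref) above.
buildset_row = {'uuid': 'abc123', 'tenant': 'example', 'pipeline': 'gate'}
ref_rows = [
    {'project': 'example/project-a', 'change': 1001, 'patchset': '2',
     'ref': 'refs/changes/01/1001/2', 'branch': 'master'},
    {'project': 'example/project-b', 'change': 1002, 'patchset': '1',
     'ref': 'refs/changes/02/1002/1', 'branch': 'master'},
]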
def reportBuildsetStart(self, buildset):
@ -200,15 +202,16 @@ class SQLReporter(BaseReporter):
if db_buildset.first_build_start_time is None:
db_buildset.first_build_start_time = start
item = buildset.item
change = item.getChangeForJob(build.job)
ref = db.getOrCreateRef(
project=item.change.project.name,
change=getattr(item.change, 'number', None),
patchset=getattr(item.change, 'patchset', None),
ref_url=item.change.url,
ref=getattr(item.change, 'ref', ''),
oldrev=getattr(item.change, 'oldrev', ''),
newrev=getattr(item.change, 'newrev', ''),
branch=getattr(item.change, 'branch', ''),
project=change.project.name,
change=getattr(change, 'number', None),
patchset=getattr(change, 'patchset', None),
ref_url=change.url,
ref=getattr(change, 'ref', ''),
oldrev=getattr(change, 'oldrev', ''),
newrev=getattr(change, 'newrev', ''),
branch=getattr(change, 'branch', ''),
)
db_build = db_buildset.createBuild(


@ -56,9 +56,9 @@ class ExecutorClient(object):
tracer = trace.get_tracer("zuul")
uuid = str(uuid4().hex)
log.info(
"Execute job %s (uuid: %s) on nodes %s for change %s "
"Execute job %s (uuid: %s) on nodes %s for %s "
"with dependent changes %s",
job, uuid, nodes, item.change, dependent_changes)
job, uuid, nodes, item, dependent_changes)
params = zuul.executor.common.construct_build_params(
uuid, self.sched.connections,
@ -93,7 +93,7 @@ class ExecutorClient(object):
if job.name == 'noop':
data = {"start_time": time.time()}
started_event = BuildStartedEvent(
build.uuid, build.build_set.uuid, job.name, job._job_id,
build.uuid, build.build_set.uuid, job.uuid,
None, data, zuul_event_id=build.zuul_event_id)
self.result_events[pipeline.tenant.name][pipeline.name].put(
started_event
@ -101,7 +101,7 @@ class ExecutorClient(object):
result = {"result": "SUCCESS", "end_time": time.time()}
completed_event = BuildCompletedEvent(
build.uuid, build.build_set.uuid, job.name, job._job_id,
build.uuid, build.build_set.uuid, job.uuid,
None, result, zuul_event_id=build.zuul_event_id)
self.result_events[pipeline.tenant.name][pipeline.name].put(
completed_event
@ -134,7 +134,7 @@ class ExecutorClient(object):
f"{req_id}")
data = {"start_time": time.time()}
started_event = BuildStartedEvent(
build.uuid, build.build_set.uuid, job.name, job._job_id,
build.uuid, build.build_set.uuid, job.uuid,
None, data, zuul_event_id=build.zuul_event_id)
self.result_events[pipeline.tenant.name][pipeline.name].put(
started_event
@ -142,7 +142,7 @@ class ExecutorClient(object):
result = {"result": None, "end_time": time.time()}
completed_event = BuildCompletedEvent(
build.uuid, build.build_set.uuid, job.name, job._job_id,
build.uuid, build.build_set.uuid, job.uuid,
None, result, zuul_event_id=build.zuul_event_id)
self.result_events[pipeline.tenant.name][pipeline.name].put(
completed_event
@ -173,8 +173,7 @@ class ExecutorClient(object):
request = BuildRequest(
uuid=uuid,
build_set_uuid=build.build_set.uuid,
job_name=job.name,
job_uuid=job._job_id,
job_uuid=job.uuid,
tenant_name=build.build_set.item.pipeline.tenant.name,
pipeline_name=build.build_set.item.pipeline.name,
zone=executor_zone,
@ -225,7 +224,7 @@ class ExecutorClient(object):
pipeline_name = build.build_set.item.pipeline.name
event = BuildCompletedEvent(
build_request.uuid, build_request.build_set_uuid,
build_request.job_name, build_request.job_uuid,
build_request.job_uuid,
build_request.path, result)
self.result_events[tenant_name][pipeline_name].put(event)
finally:
@ -312,7 +311,7 @@ class ExecutorClient(object):
event = BuildCompletedEvent(
build_request.uuid, build_request.build_set_uuid,
build_request.job_name, build_request.job_uuid,
build_request.job_uuid,
build_request.path, result)
self.result_events[build_request.tenant_name][
build_request.pipeline_name].put(event)


@ -30,22 +30,23 @@ def construct_build_params(uuid, connections, job, item, pipeline,
environment - for example, a local runner.
"""
tenant = pipeline.tenant
change = item.getChangeForJob(job)
project = dict(
name=item.change.project.name,
short_name=item.change.project.name.split('/')[-1],
canonical_hostname=item.change.project.canonical_hostname,
canonical_name=item.change.project.canonical_name,
name=change.project.name,
short_name=change.project.name.split('/')[-1],
canonical_hostname=change.project.canonical_hostname,
canonical_name=change.project.canonical_name,
src_dir=os.path.join('src',
strings.workspace_project_path(
item.change.project.canonical_hostname,
item.change.project.name,
change.project.canonical_hostname,
change.project.name,
job.workspace_scheme)),
)
zuul_params = dict(
build=uuid,
buildset=item.current_build_set.uuid,
ref=item.change.ref,
ref=change.ref,
pipeline=pipeline.name,
post_review=pipeline.post_review,
job=job.name,
@ -54,30 +55,30 @@ def construct_build_params(uuid, connections, job, item, pipeline,
event_id=item.event.zuul_event_id if item.event else None,
jobtags=sorted(job.tags),
)
if hasattr(item.change, 'branch'):
zuul_params['branch'] = item.change.branch
if hasattr(item.change, 'tag'):
zuul_params['tag'] = item.change.tag
if hasattr(item.change, 'number'):
zuul_params['change'] = str(item.change.number)
if hasattr(item.change, 'url'):
zuul_params['change_url'] = item.change.url
if hasattr(item.change, 'patchset'):
zuul_params['patchset'] = str(item.change.patchset)
if hasattr(item.change, 'message'):
zuul_params['message'] = strings.b64encode(item.change.message)
zuul_params['change_message'] = item.change.message
if hasattr(change, 'branch'):
zuul_params['branch'] = change.branch
if hasattr(change, 'tag'):
zuul_params['tag'] = change.tag
if hasattr(change, 'number'):
zuul_params['change'] = str(change.number)
if hasattr(change, 'url'):
zuul_params['change_url'] = change.url
if hasattr(change, 'patchset'):
zuul_params['patchset'] = str(change.patchset)
if hasattr(change, 'message'):
zuul_params['message'] = strings.b64encode(change.message)
zuul_params['change_message'] = change.message
commit_id = None
if (hasattr(item.change, 'oldrev') and item.change.oldrev
and item.change.oldrev != '0' * 40):
zuul_params['oldrev'] = item.change.oldrev
commit_id = item.change.oldrev
if (hasattr(item.change, 'newrev') and item.change.newrev
and item.change.newrev != '0' * 40):
zuul_params['newrev'] = item.change.newrev
commit_id = item.change.newrev
if hasattr(item.change, 'commit_id'):
commit_id = item.change.commit_id
if (hasattr(change, 'oldrev') and change.oldrev
and change.oldrev != '0' * 40):
zuul_params['oldrev'] = change.oldrev
commit_id = change.oldrev
if (hasattr(change, 'newrev') and change.newrev
and change.newrev != '0' * 40):
zuul_params['newrev'] = change.newrev
commit_id = change.newrev
if hasattr(change, 'commit_id'):
commit_id = change.commit_id
if commit_id:
zuul_params['commit_id'] = commit_id
@ -101,8 +102,8 @@ def construct_build_params(uuid, connections, job, item, pipeline,
params['job_ref'] = job.getPath()
params['items'] = merger_items
params['projects'] = []
if hasattr(item.change, 'branch'):
params['branch'] = item.change.branch
if hasattr(change, 'branch'):
params['branch'] = change.branch
else:
params['branch'] = None
merge_rs = item.current_build_set.merge_repo_state
@ -116,8 +117,8 @@ def construct_build_params(uuid, connections, job, item, pipeline,
params['ssh_keys'].append("REDACTED")
else:
params['ssh_keys'].append(dict(
connection_name=item.change.project.connection_name,
project_name=item.change.project.name))
connection_name=change.project.connection_name,
project_name=change.project.name))
params['zuul'] = zuul_params
projects = set()
required_projects = set()
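Since an item may now hold several changes, build parameters are derived from the change that owns the frozen job, obtained via item.getChangeForJob(job). A simplified, self-contained sketch of that lookup; the real model ties frozen jobs to changes through cache keys, reduced here to a single ref attribute:

class Change:
    def __init__(self, cache_key, ref):
        self.cache_key, self.ref = cache_key, ref

class Job:
    def __init__(self, name, ref):
        # 'ref' here is the cache key of the change the job runs for.
        self.name, self.ref = name, ref

class Item:
    def __init__(self, changes):
        self.changes = changes

    def getChangeForJob(self, job):
        # Each frozen job belongs to exactly one of the item's changes.
        for change in self.changes:
            if change.cache_key == job.ref:
                return change
        return None

item = Item([Change('key-a', 'refs/changes/01/1001/2'),
             Change('key-b', 'refs/changes/02/1002/1')])
job = Job('tox-py311', 'key-b')
assert item.getChangeForJob(job).ref == 'refs/changes/02/1002/1'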


@ -4196,7 +4196,7 @@ class ExecutorServer(BaseMergeServer):
event = BuildStartedEvent(
build_request.uuid, build_request.build_set_uuid,
build_request.job_name, build_request.job_uuid,
build_request.job_uuid,
build_request.path, data, build_request.event_id)
self.result_events[build_request.tenant_name][
build_request.pipeline_name].put(event)
@ -4204,7 +4204,7 @@ class ExecutorServer(BaseMergeServer):
def updateBuildStatus(self, build_request, data):
event = BuildStatusEvent(
build_request.uuid, build_request.build_set_uuid,
build_request.job_name, build_request.job_uuid,
build_request.job_uuid,
build_request.path, data, build_request.event_id)
self.result_events[build_request.tenant_name][
build_request.pipeline_name].put(event)
@ -4219,7 +4219,7 @@ class ExecutorServer(BaseMergeServer):
event = BuildPausedEvent(
build_request.uuid, build_request.build_set_uuid,
build_request.job_name, build_request.job_uuid,
build_request.job_uuid,
build_request.path, data, build_request.event_id)
self.result_events[build_request.tenant_name][
build_request.pipeline_name].put(event)
@ -4286,7 +4286,7 @@ class ExecutorServer(BaseMergeServer):
updater = self.executor_api.getRequestUpdater(build_request)
event = BuildCompletedEvent(
build_request.uuid, build_request.build_set_uuid,
build_request.job_name, build_request.job_uuid,
build_request.job_uuid,
build_request.path, result, build_request.event_id)
build_request.state = BuildRequest.COMPLETED
updated = False

File diff suppressed because it is too large.


@ -1,3 +1,5 @@
# Copyright 2024 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
@ -43,35 +45,34 @@ class DependentPipelineManager(SharedQueuePipelineManager):
window_decrease_factor=p.window_decrease_factor,
name=queue_name)
def getNodePriority(self, item):
with self.getChangeQueue(item.change, item.event) as change_queue:
items = change_queue.queue
return items.index(item)
def getNodePriority(self, item, change):
return item.queue.queue.index(item)
def isChangeReadyToBeEnqueued(self, change, event):
def areChangesReadyToBeEnqueued(self, changes, event):
log = get_annotated_logger(self.log, event)
source = change.project.source
if not source.canMerge(change, self.getSubmitAllowNeeds(),
event=event):
log.debug("Change %s can not merge", change)
return False
for change in changes:
source = change.project.source
if not source.canMerge(change, self.getSubmitAllowNeeds(),
event=event):
log.debug("Change %s can not merge", change)
return False
return True
def getNonMergeableCycleChanges(self, bundle):
def getNonMergeableCycleChanges(self, item):
"""Return changes in the cycle that do not fulfill
the pipeline's ready criteria."""
changes = []
for item in bundle.items:
source = item.change.project.source
for change in item.changes:
source = change.project.source
if not source.canMerge(
item.change,
change,
self.getSubmitAllowNeeds(),
event=item.event,
allow_refresh=True,
):
log = get_annotated_logger(self.log, item.event)
log.debug("Change %s can no longer be merged", item.change)
changes.append(item.change)
log.debug("Change %s can no longer be merged", change)
changes.append(change)
return changes
def enqueueChangesBehind(self, change, event, quiet, ignore_requirements,
@ -142,29 +143,26 @@ class DependentPipelineManager(SharedQueuePipelineManager):
change_queue=change_queue, history=history,
dependency_graph=dependency_graph)
def enqueueChangesAhead(self, change, event, quiet, ignore_requirements,
def enqueueChangesAhead(self, changes, event, quiet, ignore_requirements,
change_queue, history=None, dependency_graph=None,
warnings=None):
log = get_annotated_logger(self.log, event)
history = history if history is not None else []
if hasattr(change, 'number'):
history.append(change)
else:
# Don't enqueue dependencies ahead of a non-change ref.
return True
for change in changes:
if hasattr(change, 'number'):
history.append(change)
else:
# Don't enqueue dependencies ahead of a non-change ref.
return True
abort, needed_changes = self.getMissingNeededChanges(
change, change_queue, event,
changes, change_queue, event,
dependency_graph=dependency_graph,
warnings=warnings)
if abort:
return False
# Treat cycle dependencies as needed for the current change
needed_changes.extend(
self.getCycleDependencies(change, dependency_graph, event))
if not needed_changes:
return True
log.debug(" Changes %s must be merged ahead of %s",
@ -183,107 +181,93 @@ class DependentPipelineManager(SharedQueuePipelineManager):
return False
return True
def getMissingNeededChanges(self, change, change_queue, event,
def getMissingNeededChanges(self, changes, change_queue, event,
dependency_graph=None, warnings=None):
log = get_annotated_logger(self.log, event)
changes_needed = []
abort = False
# Return true if okay to proceed enqueuing this change,
# false if the change should not be enqueued.
log.debug("Checking for changes needed by %s:" % change)
if not isinstance(change, model.Change):
log.debug(" %s does not support dependencies", type(change))
return False, []
if not change.getNeedsChanges(
self.useDependenciesByTopic(change.project)):
log.debug(" No changes needed")
return False, []
changes_needed = []
abort = False
# Ignore supplied change_queue
with self.getChangeQueue(change, event) as change_queue:
for needed_change in self.resolveChangeReferences(
change.getNeedsChanges(
self.useDependenciesByTopic(change.project))):
log.debug(" Change %s needs change %s:" % (
change, needed_change))
if needed_change.is_merged:
log.debug(" Needed change is merged")
continue
if dependency_graph is not None:
log.debug(" Adding change %s to dependency graph for "
"change %s", needed_change, change)
node = dependency_graph.setdefault(change, [])
node.append(needed_change)
if (self.pipeline.tenant.max_dependencies is not None and
dependency_graph is not None and
(len(dependency_graph) >
self.pipeline.tenant.max_dependencies)):
log.debug(" Dependency graph for change %s is too large",
change)
return True, []
with self.getChangeQueue(needed_change,
event) as needed_change_queue:
if needed_change_queue != change_queue:
msg = ("Change %s in project %s does not "
"share a change queue with %s "
"in project %s" %
(needed_change.number,
needed_change.project,
change.number,
change.project))
log.debug(" " + msg)
if warnings is not None:
warnings.append(msg)
for change in changes:
log.debug("Checking for changes needed by %s:" % change)
if not isinstance(change, model.Change):
log.debug(" %s does not support dependencies", type(change))
continue
needed_changes = dependency_graph.get(change)
if not needed_changes:
log.debug(" No changes needed")
continue
# Ignore supplied change_queue
with self.getChangeQueue(change, event) as change_queue:
for needed_change in needed_changes:
log.debug(" Change %s needs change %s:" % (
change, needed_change))
if needed_change.is_merged:
log.debug(" Needed change is merged")
continue
with self.getChangeQueue(needed_change,
event) as needed_change_queue:
if needed_change_queue != change_queue:
msg = ("Change %s in project %s does not "
"share a change queue with %s "
"in project %s" %
(needed_change.number,
needed_change.project,
change.number,
change.project))
log.debug(" " + msg)
if warnings is not None:
warnings.append(msg)
changes_needed.append(needed_change)
abort = True
if not needed_change.is_current_patchset:
log.debug(" Needed change is not "
"the current patchset")
changes_needed.append(needed_change)
abort = True
if not needed_change.is_current_patchset:
log.debug(" Needed change is not the current patchset")
changes_needed.append(needed_change)
abort = True
if self.isChangeAlreadyInQueue(needed_change, change_queue):
log.debug(" Needed change is already ahead in the queue")
continue
if needed_change.project.source.canMerge(
needed_change, self.getSubmitAllowNeeds(),
event=event):
log.debug(" Change %s is needed", needed_change)
if needed_change not in changes_needed:
changes_needed.append(needed_change)
if needed_change in changes:
log.debug(" Needed change is in cycle")
continue
# The needed change can't be merged.
log.debug(" Change %s is needed but can not be merged",
needed_change)
changes_needed.append(needed_change)
abort = True
if self.isChangeAlreadyInQueue(
needed_change, change_queue):
log.debug(" Needed change is already "
"ahead in the queue")
continue
if needed_change.project.source.canMerge(
needed_change, self.getSubmitAllowNeeds(),
event=event):
log.debug(" Change %s is needed", needed_change)
if needed_change not in changes_needed:
changes_needed.append(needed_change)
continue
else:
# The needed change can't be merged.
log.debug(" Change %s is needed "
"but can not be merged",
needed_change)
changes_needed.append(needed_change)
abort = True
return abort, changes_needed
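The dependency_graph consulted here is built once, when the first change of a cycle is enqueued, and maps each change to the changes it directly needs. A toy illustration of how a two-change cycle looks in that structure and why cycle members are not reported as missing (strings stand in for change objects, and the merged/queued checks are omitted):

# change -> list of directly needed changes; A and B form a cycle,
# C depends on the cycle without being part of it.
dependency_graph = {
    'A': ['B'],
    'B': ['A'],
    'C': ['A'],
}

def missing_needed(changes, graph):
    """Needed changes outside 'changes' (merged/queued checks omitted)."""
    needed = []
    for change in changes:
        for needed_change in graph.get(change, []):
            if needed_change in changes:
                continue  # "Needed change is in cycle"
            if needed_change not in needed:
                needed.append(needed_change)
    return needed

assert missing_needed(['A', 'B'], dependency_graph) == []
assert missing_needed(['C'], dependency_graph) == ['A']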
def getFailingDependentItems(self, item, nnfi):
if not isinstance(item.change, model.Change):
return None
if not item.change.getNeedsChanges(
self.useDependenciesByTopic(item.change.project)):
return None
def getFailingDependentItems(self, item):
failing_items = set()
for needed_change in self.resolveChangeReferences(
item.change.getNeedsChanges(
self.useDependenciesByTopic(item.change.project))):
needed_item = self.getItemForChange(needed_change)
if not needed_item:
for change in item.changes:
if not isinstance(change, model.Change):
continue
if needed_item.current_build_set.failing_reasons:
failing_items.add(needed_item)
# Only look at the bundle if the item ahead is the nearest non-failing
# item. This is important in order to correctly reset the bundle items
# in case of a failure.
if item.item_ahead == nnfi and item.isBundleFailing():
failing_items.update(item.bundle.items)
failing_items.remove(item)
if failing_items:
return failing_items
return None
needs_changes = change.getNeedsChanges(
self.useDependenciesByTopic(change.project))
if not needs_changes:
continue
for needed_change in self.resolveChangeReferences(needs_changes):
needed_item = self.getItemForChange(needed_change)
if not needed_item:
continue
if needed_item is item:
continue
if needed_item.current_build_set.failing_reasons:
failing_items.add(needed_item)
return failing_items
def dequeueItem(self, item, quiet=False):
super(DependentPipelineManager, self).dequeueItem(item, quiet)


@ -1,3 +1,5 @@
# Copyright 2021-2024 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
@ -37,28 +39,25 @@ class IndependentPipelineManager(PipelineManager):
log.debug("Dynamically created queue %s", change_queue)
return DynamicChangeQueueContextManager(change_queue)
def enqueueChangesAhead(self, change, event, quiet, ignore_requirements,
def enqueueChangesAhead(self, changes, event, quiet, ignore_requirements,
change_queue, history=None, dependency_graph=None,
warnings=None):
log = get_annotated_logger(self.log, event)
history = history if history is not None else []
if hasattr(change, 'number'):
history.append(change)
else:
# Don't enqueue dependencies ahead of a non-change ref.
return True
for change in changes:
if hasattr(change, 'number'):
history.append(change)
else:
# Don't enqueue dependencies ahead of a non-change ref.
return True
abort, needed_changes = self.getMissingNeededChanges(
change, change_queue, event,
changes, change_queue, event,
dependency_graph=dependency_graph)
if abort:
return False
# Treat cycle dependencies as needed for the current change
needed_changes.extend(
self.getCycleDependencies(change, dependency_graph, event))
if not needed_changes:
return True
log.debug(" Changes %s must be merged ahead of %s" % (
@ -80,55 +79,43 @@ class IndependentPipelineManager(PipelineManager):
return False
return True
def getMissingNeededChanges(self, change, change_queue, event,
def getMissingNeededChanges(self, changes, change_queue, event,
dependency_graph=None):
log = get_annotated_logger(self.log, event)
if self.pipeline.ignore_dependencies:
return False, []
log.debug("Checking for changes needed by %s:" % change)
# Return true if okay to proceed enqueuing this change,
# false if the change should not be enqueued.
if not isinstance(change, model.Change):
log.debug(" %s does not support dependencies" % type(change))
return False, []
if not change.getNeedsChanges(
self.useDependenciesByTopic(change.project)):
log.debug(" No changes needed")
return False, []
changes_needed = []
abort = False
for needed_change in self.resolveChangeReferences(
change.getNeedsChanges(
self.useDependenciesByTopic(change.project))):
log.debug(" Change %s needs change %s:" % (
change, needed_change))
if needed_change.is_merged:
log.debug(" Needed change is merged")
for change in changes:
log.debug("Checking for changes needed by %s:" % change)
# Return true if okay to proceed enqueuing this change,
# false if the change should not be enqueued.
if not isinstance(change, model.Change):
log.debug(" %s does not support dependencies" % type(change))
continue
if dependency_graph is not None:
log.debug(" Adding change %s to dependency graph for "
"change %s", needed_change, change)
node = dependency_graph.setdefault(change, [])
node.append(needed_change)
if (self.pipeline.tenant.max_dependencies is not None and
dependency_graph is not None and
len(dependency_graph) > self.pipeline.tenant.max_dependencies):
log.debug(" Dependency graph for change %s is too large",
change)
return True, []
if self.isChangeAlreadyInQueue(needed_change, change_queue):
log.debug(" Needed change is already ahead in the queue")
needed_changes = dependency_graph.get(change)
if not needed_changes:
log.debug(" No changes needed")
continue
log.debug(" Change %s is needed" % needed_change)
if needed_change not in changes_needed:
changes_needed.append(needed_change)
continue
# This differs from the dependent pipeline check in not
# verifying that the dependent change is mergeable.
for needed_change in needed_changes:
log.debug(" Change %s needs change %s:" % (
change, needed_change))
if needed_change.is_merged:
log.debug(" Needed change is merged")
continue
if needed_change in changes:
log.debug(" Needed change is in cycle")
continue
if self.isChangeAlreadyInQueue(needed_change, change_queue):
log.debug(" Needed change is already ahead in the queue")
continue
log.debug(" Change %s is needed" % needed_change)
if needed_change not in changes_needed:
changes_needed.append(needed_change)
continue
# This differs from the dependent pipeline check in not
# verifying that the dependent change is mergeable.
return abort, changes_needed
def dequeueItem(self, item, quiet=False):


@ -1,3 +1,5 @@
# Copyright 2021, 2023-2024 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
@ -32,11 +34,11 @@ class SupercedentPipelineManager(PipelineManager):
# Don't use Pipeline.getQueue to find an existing queue
# because we're matching project and (branch or ref).
for queue in self.pipeline.queues:
if (queue.queue[-1].change.project == change.project and
if (queue.queue[-1].changes[0].project == change.project and
((hasattr(change, 'branch') and
hasattr(queue.queue[-1].change, 'branch') and
queue.queue[-1].change.branch == change.branch) or
queue.queue[-1].change.ref == change.ref)):
hasattr(queue.queue[-1].changes[0], 'branch') and
queue.queue[-1].changes[0].branch == change.branch) or
queue.queue[-1].changes[0].ref == change.ref)):
log.debug("Found existing queue %s", queue)
return DynamicChangeQueueContextManager(queue)
change_queue = model.ChangeQueue.new(
@ -66,6 +68,13 @@ class SupercedentPipelineManager(PipelineManager):
(item, queue.queue[-1]))
self.removeItem(item)
def cycleForChange(self, *args, **kw):
ret = super().cycleForChange(*args, **kw)
if len(ret) > 1:
raise Exception("Dependency cycles not supported "
"in supercedent pipelines")
return ret
def addChange(self, *args, **kw):
ret = super(SupercedentPipelineManager, self).addChange(
*args, **kw)


@ -1,4 +1,5 @@
# Copyright 2014 OpenStack Foundation
# Copyright 2021-2022, 2024 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@ -138,13 +139,43 @@ class MergeClient(object):
)
return job
def getFilesChanges(self, connection_name, project_name, branch,
tosha=None, precedence=PRECEDENCE_HIGH,
build_set=None, needs_result=False, event=None):
data = dict(connection=connection_name,
project=project_name,
branch=branch,
tosha=tosha)
def getFilesChanges(self, changes, precedence=PRECEDENCE_HIGH,
build_set=None, needs_result=False,
event=None):
changes_data = []
for change in changes:
# if base_sha is not available, fall back to the branch
tosha = getattr(change, "base_sha", None)
if tosha is None:
tosha = getattr(change, "branch", None)
changes_data.append(dict(
connection=change.project.connection_name,
project=change.project.name,
branch=change.ref,
tosha=tosha,
))
data = dict(changes=changes_data)
job = self.submitJob(
MergeRequest.FILES_CHANGES,
data,
build_set,
precedence,
needs_result=needs_result,
event=event,
)
return job
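Callers now hand over whole change objects, and a single fileschanges request covers the entire set, using base_sha when available and the change's ref otherwise. A hypothetical call payload built the same way as the loop above, with a stand-in change object:

class FakeProject:
    connection_name = 'gerrit'
    name = 'example/project-a'

class FakeChange:
    """Stand-in change; 'branch' is used when base_sha is absent."""
    project = FakeProject()
    ref = 'refs/changes/01/1001/2'
    base_sha = 'cafef00d'

# merge_client.getFilesChanges([FakeChange()], needs_result=True)
# would submit one FILES_CHANGES job whose payload looks like:
payload = {'changes': [{
    'connection': 'gerrit',
    'project': 'example/project-a',
    'branch': 'refs/changes/01/1001/2',
    'tosha': 'cafef00d',
}]}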
def getFilesChangesRaw(self, connection_name, project_name, branch, tosha,
precedence=PRECEDENCE_HIGH,
build_set=None, needs_result=False,
event=None):
changes_data = [dict(
connection=connection_name,
project=project_name,
branch=branch,
tosha=tosha,
)]
data = dict(changes=changes_data)
job = self.submitJob(
MergeRequest.FILES_CHANGES,
data,


@ -1,5 +1,5 @@
# Copyright 2014 OpenStack Foundation
# Copyright 2021-2022 Acme Gating, LLC
# Copyright 2021-2022, 2024 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@ -334,25 +334,36 @@ class BaseMergeServer(metaclass=ABCMeta):
self.log.debug("Got fileschanges job: %s", merge_request.uuid)
zuul_event_id = merge_request.event_id
connection_name = args['connection']
project_name = args['project']
# MODEL_API < 26:
changes = args.get('changes')
old_format = False
if changes is None:
changes = [args]
old_format = True
lock = self.repo_locks.getRepoLock(connection_name, project_name)
try:
self._update(connection_name, project_name,
zuul_event_id=zuul_event_id)
with lock:
files = self.merger.getFilesChanges(
connection_name, project_name,
args['branch'], args['tosha'],
zuul_event_id=zuul_event_id)
except Exception:
result = dict(updated=False)
results = []
for change in changes:
connection_name = change['connection']
project_name = change['project']
lock = self.repo_locks.getRepoLock(connection_name, project_name)
try:
self._update(connection_name, project_name,
zuul_event_id=zuul_event_id)
with lock:
files = self.merger.getFilesChanges(
connection_name, project_name,
change['branch'], change['tosha'],
zuul_event_id=zuul_event_id)
results.append(files)
except Exception:
return dict(updated=False)
if old_format:
# MODEL_API < 26:
return dict(updated=True, files=results[0])
else:
result = dict(updated=True, files=files)
result['zuul_event_id'] = zuul_event_id
return result
return dict(updated=True, files=results)
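The handler stays compatible with pre-model-API-26 schedulers by detecting the old flat payload, wrapping it as a one-element change list, and unwrapping the result. A sketch of the two request/result shapes, with invented values:

# MODEL_API < 26: one change per request, flat payload, flat result.
old_request = {'connection': 'gerrit', 'project': 'example/project-a',
               'branch': 'master', 'tosha': None}
old_result = {'updated': True, 'files': ['zuul.yaml']}

# MODEL_API >= 26: a list of changes, one file list per change.
new_request = {'changes': [
    old_request,
    {'connection': 'gerrit', 'project': 'example/project-b',
     'branch': 'master', 'tosha': None},
]}
new_result = {'updated': True,
              'files': [['zuul.yaml'], ['docs/index.rst']]}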
def completeMergeJob(self, merge_request, result):
log = get_annotated_logger(self.log, merge_request.event_id)

File diff suppressed because it is too large.


@ -14,4 +14,4 @@
# When making ZK schema changes, increment this and add a record to
# doc/source/developer/model-changelog.rst
MODEL_API = 25
MODEL_API = 26


@ -191,7 +191,7 @@ class Nodepool(object):
else:
event_id = None
req = model.NodeRequest(self.system_id, build_set_uuid, tenant_name,
pipeline_name, job.name, job._job_id, labels,
pipeline_name, job.uuid, labels,
provider, relative_priority, event_id)
if job.nodeset.nodes:


@ -1,4 +1,5 @@
# Copyright 2014 Rackspace Australia
# Copyright 2021-2024 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@ -60,13 +61,14 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
def postConfig(self):
"""Run tasks after configuration is reloaded"""
def addConfigurationErrorComments(self, item, comments):
def addConfigurationErrorComments(self, item, change, comments):
"""Add file comments for configuration errors.
Updates the comments dictionary with additional file comments
for any relevant configuration errors for this item's change.
for any relevant configuration errors for the specified change.
:arg QueueItem item: The queue item
:arg Ref change: One of the item's changes to check
:arg dict comments: a file comments dictionary
"""
@ -77,13 +79,13 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
if not (context and mark and err.short_error):
continue
if context.project_canonical_name != \
item.change.project.canonical_name:
change.project.canonical_name:
continue
if not hasattr(item.change, 'branch'):
if not hasattr(change, 'branch'):
continue
if context.branch != item.change.branch:
if context.branch != change.branch:
continue
if context.path not in item.change.files:
if context.path not in change.files:
continue
existing_comments = comments.setdefault(context.path, [])
existing_comments.append(dict(line=mark.end_line,
@ -94,36 +96,40 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
end_line=mark.end_line,
end_character=mark.end_column)))
def _getFileComments(self, item):
def _getFileComments(self, item, change):
"""Get the file comments from the zuul_return value"""
ret = {}
for build in item.current_build_set.getBuilds():
fc = build.result_data.get("zuul", {}).get("file_comments")
if not fc:
continue
# Only consider comments for this change
if change.cache_key not in build.job.all_refs:
continue
for fn, comments in fc.items():
existing_comments = ret.setdefault(fn, [])
existing_comments.extend(comments)
self.addConfigurationErrorComments(item, ret)
self.addConfigurationErrorComments(item, change, ret)
return ret
def getFileComments(self, item):
comments = self._getFileComments(item)
self.filterComments(item, comments)
def getFileComments(self, item, change):
comments = self._getFileComments(item, change)
self.filterComments(item, change, comments)
return comments
def filterComments(self, item, comments):
def filterComments(self, item, change, comments):
"""Filter comments for files in change
Remove any comments for files which do not appear in the
item's change. Leave warning messages if this happens.
specified change. Leave warning messages if this happens.
:arg QueueItem item: The queue item
:arg Change change: The change
:arg dict comments: a file comments dictionary (modified in place)
"""
for fn in list(comments.keys()):
if fn not in item.change.files:
if fn not in change.files:
del comments[fn]
item.warning("Comments left for invalid file %s" % (fn,))
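File comments are now assembled per change: only builds whose job references the change contribute, and comments on files the change does not touch are dropped with a warning. A condensed, runnable sketch of the filtering step:

def filter_comments(change_files, comments, warn):
    """Drop comments on files the change does not touch (sketch)."""
    for fn in list(comments.keys()):
        if fn not in change_files:
            del comments[fn]
            warn("Comments left for invalid file %s" % (fn,))

comments = {'src/a.py': [{'line': 3, 'message': 'nit'}],
            'src/b.py': [{'line': 1, 'message': 'stale'}]}
filter_comments({'src/a.py'}, comments, print)
assert list(comments) == ['src/a.py']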
@ -172,7 +178,8 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
return item.pipeline.enqueue_message.format(
pipeline=item.pipeline.getSafeAttributes(),
change=item.change.getSafeAttributes(),
change=item.changes[0].getSafeAttributes(),
changes=[c.getSafeAttributes() for c in item.changes],
status_url=status_url)
def _formatItemReportStart(self, item, with_jobs=True):
@ -182,7 +189,8 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
return item.pipeline.start_message.format(
pipeline=item.pipeline.getSafeAttributes(),
change=item.change.getSafeAttributes(),
change=item.changes[0].getSafeAttributes(),
changes=[c.getSafeAttributes() for c in item.changes],
status_url=status_url)
def _formatItemReportSuccess(self, item, with_jobs=True):
@ -195,23 +203,23 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
return msg
def _formatItemReportFailure(self, item, with_jobs=True):
if item.cannotMergeBundle():
msg = 'This change is part of a bundle that can not merge.\n'
if isinstance(item.bundle.cannot_merge, str):
msg += '\n' + item.bundle.cannot_merge + '\n'
elif item.dequeued_needing_change:
msg = 'This change depends on a change that failed to merge.\n'
if len(item.changes) > 1:
change_text = 'These changes'
else:
change_text = 'This change'
if item.dequeued_needing_change:
msg = f'{change_text} depends on a change that failed to merge.\n'
if isinstance(item.dequeued_needing_change, str):
msg += '\n' + item.dequeued_needing_change + '\n'
elif item.dequeued_missing_requirements:
msg = ('This change is unable to merge '
msg = (f'{change_text} is unable to merge '
'due to a missing merge requirement.\n')
elif item.isBundleFailing():
msg = 'This change is part of a bundle that failed.\n'
elif len(item.changes) > 1:
msg = f'{change_text} is part of a dependency cycle that failed.\n'
if with_jobs:
msg = '{}\n\n{}'.format(msg, self._formatItemReportJobs(item))
msg = "{}\n\n{}".format(
msg, self._formatItemReportOtherBundleItems(item))
msg, self._formatItemReportOtherChanges(item))
elif item.didMergerFail():
msg = item.pipeline.merge_conflict_message
elif item.current_build_set.has_blocking_errors:
@ -247,7 +255,8 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
return item.pipeline.no_jobs_message.format(
pipeline=item.pipeline.getSafeAttributes(),
change=item.change.getSafeAttributes(),
change=item.changes[0].getSafeAttributes(),
changes=[c.getSafeAttributes() for c in item.changes],
status_url=status_url)
def _formatItemReportDisabled(self, item, with_jobs=True):
@ -264,13 +273,9 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
msg += '\n\n' + self._formatItemReportJobs(item)
return msg
def _formatItemReportOtherBundleItems(self, item):
related_changes = item.pipeline.manager.resolveChangeReferences(
item.change.getNeedsChanges(
item.pipeline.manager.useDependenciesByTopic(
item.change.project)))
def _formatItemReportOtherChanges(self, item):
return "Related changes:\n{}\n".format("\n".join(
f' - {c.url}' for c in related_changes if c is not item.change))
f' - {c.url}' for c in item.changes))
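Report templates keep the singular change placeholder (bound to the first change) and gain a changes list, while the failure footer now simply enumerates every change in the item. The footer logic above can be exercised standalone with a stand-in change class:

class FakeChange:
    def __init__(self, url):
        self.url = url

item_changes = [FakeChange('https://review.example.com/1001'),
                FakeChange('https://review.example.com/1002')]
msg = "Related changes:\n{}\n".format("\n".join(
    f'  - {c.url}' for c in item_changes))
# Related changes:
#   - https://review.example.com/1001
#   - https://review.example.com/1002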
def _getItemReportJobsFields(self, item):
# Extract the report elements from an item


@ -905,13 +905,15 @@ class Scheduler(threading.Thread):
try:
if self.statsd and build.pipeline:
tenant = build.pipeline.tenant
jobname = build.job.name.replace('.', '_').replace('/', '_')
hostname = (build.build_set.item.change.project.
item = build.build_set.item
job = build.job
change = item.getChangeForJob(job)
jobname = job.name.replace('.', '_').replace('/', '_')
hostname = (change.project.
canonical_hostname.replace('.', '_'))
projectname = (build.build_set.item.change.project.name.
projectname = (change.project.name.
replace('.', '_').replace('/', '_'))
branchname = (getattr(build.build_set.item.change,
'branch', '').
branchname = (getattr(change, 'branch', '').
replace('.', '_').replace('/', '_'))
basekey = 'zuul.tenant.%s' % tenant.name
pipekey = '%s.pipeline.%s' % (basekey, build.pipeline.name)
@ -1611,8 +1613,8 @@ class Scheduler(threading.Thread):
log.info("Tenant reconfiguration complete for %s (duration: %s "
"seconds)", event.tenant_name, duration)
def _reenqueueGetProject(self, tenant, item):
project = item.change.project
def _reenqueueGetProject(self, tenant, item, change):
project = change.project
# Attempt to get the same project as the one passed in. If
# the project is now found on a different connection or if it
# is no longer available (due to a connection being removed),
@ -1644,12 +1646,13 @@ class Scheduler(threading.Thread):
if child is item:
return None
if child and child.live:
(child_trusted, child_project) = tenant.getProject(
child.change.project.canonical_name)
if child_project:
source = child_project.source
new_project = source.getProject(project.name)
return new_project
for child_change in child.changes:
(child_trusted, child_project) = tenant.getProject(
child_change.project.canonical_name)
if child_project:
source = child_project.source
new_project = source.getProject(project.name)
return new_project
return None
@ -1679,8 +1682,8 @@ class Scheduler(threading.Thread):
for item in shared_queue.queue:
# If the old item ahead made it in, re-enqueue
# this one behind it.
new_project = self._reenqueueGetProject(
tenant, item)
new_projects = [self._reenqueueGetProject(
tenant, item, change) for change in item.changes]
if item.item_ahead in items_to_remove:
old_item_ahead = None
item_ahead_valid = False
@ -1691,8 +1694,9 @@ class Scheduler(threading.Thread):
item.item_ahead = None
item.items_behind = []
reenqueued = False
if new_project:
item.change.project = new_project
if all(new_projects):
for change_index, change in enumerate(item.changes):
change.project = new_projects[change_index]
item.queue = None
if not old_item_ahead or not last_head:
last_head = item
@ -1945,12 +1949,13 @@ class Scheduler(threading.Thread):
for item in shared_queue.queue:
if not item.live:
continue
if (item.change.number == number and
item.change.patchset == patchset):
promote_operations.setdefault(
shared_queue, []).append(item)
found = True
break
for item_change in item.changes:
if (item_change.number == number and
item_change.patchset == patchset):
promote_operations.setdefault(
shared_queue, []).append(item)
found = True
break
if found:
break
if not found:
@ -1981,11 +1986,12 @@ class Scheduler(threading.Thread):
pipeline.manager.dequeueItem(item)
for item in items_to_enqueue:
pipeline.manager.addChange(
item.change, item.event,
enqueue_time=item.enqueue_time,
quiet=True,
ignore_requirements=True)
for item_change in item.changes:
pipeline.manager.addChange(
item_change, item.event,
enqueue_time=item.enqueue_time,
quiet=True,
ignore_requirements=True)
# Regardless, move this shared change queue to the head.
pipeline.promoteQueue(change_queue)
@ -2011,14 +2017,15 @@ class Scheduler(threading.Thread):
% (item, project.name))
for shared_queue in pipeline.queues:
for item in shared_queue.queue:
if item.change.project != change.project:
continue
if (isinstance(item.change, Change) and
item.change.number == change.number and
item.change.patchset == change.patchset) or\
(item.change.ref == change.ref):
pipeline.manager.removeItem(item)
return
for item_change in item.changes:
if item_change.project != change.project:
continue
if (isinstance(item_change, Change) and
item_change.number == change.number and
item_change.patchset == change.patchset) or\
(item_change.ref == change.ref):
pipeline.manager.removeItem(item)
return
raise Exception("Unable to find shared change queue for %s:%s" %
(event.project_name,
event.change or event.ref))
@ -2059,18 +2066,19 @@ class Scheduler(threading.Thread):
change = project.source.getChange(change_key, event=event)
for shared_queue in pipeline.queues:
for item in shared_queue.queue:
if item.change.project != change.project:
continue
if not item.live:
continue
if ((isinstance(item.change, Change)
and item.change.number == change.number
and item.change.patchset == change.patchset
) or (item.change.ref == change.ref)):
log = get_annotated_logger(self.log, item.event)
log.info("Item %s is superceded, dequeuing", item)
pipeline.manager.removeItem(item)
return
for item_change in item.changes:
if item_change.project != change.project:
continue
if ((isinstance(item_change, Change)
and item_change.number == change.number
and item_change.patchset == change.patchset
) or (item_change.ref == change.ref)):
log = get_annotated_logger(self.log, item.event)
log.info("Item %s is superceded, dequeuing", item)
pipeline.manager.removeItem(item)
return
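Throughout the scheduler, comparisons against a single item.change become scans over item.changes, acting on the item if any member matches the event. The matching rule, distilled into a runnable sketch:

from types import SimpleNamespace

def item_matches(item_changes, number, patchset, ref):
    """True if any of the item's changes matches the event (sketch)."""
    for c in item_changes:
        if (getattr(c, 'number', None) == number
                and getattr(c, 'patchset', None) == patchset):
            return True
        if c.ref == ref:
            return True
    return False

changes = [SimpleNamespace(number=1001, patchset=2,
                           ref='refs/changes/01/1001/2')]
assert item_matches(changes, 1001, 2, None)
assert not item_matches(changes, 2002, 1, 'refs/heads/master')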
def _doSemaphoreReleaseEvent(self, event, pipeline):
tenant = pipeline.tenant
@ -2791,7 +2799,7 @@ class Scheduler(threading.Thread):
if not build_set:
return
job = build_set.item.getJob(event._job_id)
job = build_set.item.getJob(event.job_uuid)
build = build_set.getBuild(job)
# Verify that the build uuid matches the one of the result
if not build:
@ -2824,12 +2832,14 @@ class Scheduler(threading.Thread):
log = get_annotated_logger(
self.log, build.zuul_event_id, build=build.uuid)
try:
change = build.build_set.item.change
item = build.build_set.item
job = build.job
change = item.getChangeForJob(job)
estimate = self.times.getEstimatedTime(
pipeline.tenant.name,
change.project.name,
getattr(change, 'branch', None),
build.job.name)
job.name)
if not estimate:
estimate = 0.0
build.estimated_time = estimate
@ -2884,11 +2894,8 @@ class Scheduler(threading.Thread):
# resources.
build = Build()
job = DummyFrozenJob()
job.name = event.job_name
job.uuid = event.job_uuid
job.provides = []
# MODEL_API < 25
job._job_id = job.uuid or job.name
build._set(
job=job,
uuid=event.build_uuid,
@ -2997,8 +3004,7 @@ class Scheduler(threading.Thread):
# In case the build didn't show up on any executor, the node
# request does still exist, so we have to make sure it is
# removed from ZK.
request_id = build.build_set.getJobNodeRequestID(
build.job, ignore_deduplicate=True)
request_id = build.build_set.getJobNodeRequestID(build.job)
if request_id:
self.nodepool.deleteNodeRequest(
request_id, event_id=build.zuul_event_id)
@ -3058,11 +3064,11 @@ class Scheduler(threading.Thread):
return
log = get_annotated_logger(self.log, request.event_id)
job = build_set.item.getJob(request._job_id)
job = build_set.item.getJob(request.job_uuid)
if job is None:
log.warning("Item %s does not contain job %s "
"for node request %s",
build_set.item, request._job_id, request)
build_set.item, request.job_uuid, request)
return
# If the request failed, we must directly delete it as the nodes will
@ -3073,7 +3079,7 @@ class Scheduler(threading.Thread):
nodeset = self.nodepool.getNodeSet(request, job.nodeset)
job = build_set.item.getJob(request._job_id)
job = build_set.item.getJob(request.job_uuid)
if build_set.getJobNodeSetInfo(job) is None:
pipeline.manager.onNodesProvisioned(request, nodeset, build_set)
else:
@ -3111,8 +3117,8 @@ class Scheduler(threading.Thread):
self.executor.cancel(build)
except Exception:
log.exception(
"Exception while canceling build %s for change %s",
build, item.change)
"Exception while canceling build %s for %s",
build, item)
# In the unlikely case that a build is removed and
# later added back, make sure we clear out the nodeset


@ -1,5 +1,5 @@
# Copyright (c) 2017 Red Hat
# Copyright 2021-2023 Acme Gating, LLC
# Copyright 2021-2024 Acme Gating, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@ -385,9 +385,14 @@ class ChangeFilter(object):
for pipeline in payload['pipelines']:
for change_queue in pipeline.get('change_queues', []):
for head in change_queue['heads']:
for change in head:
if self.wantChange(change):
status.append(copy.deepcopy(change))
for item in head:
want_item = False
for change in item['changes']:
if self.wantChange(change):
want_item = True
break
if want_item:
status.append(copy.deepcopy(item))
return status
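For status JSON consumers this means each queue head entry is now an item object carrying a changes list, and the filter keeps the whole item if any of its changes matches. A hypothetical fragment of the new payload shape (key names illustrative):

# Hypothetical status fragment: one queue item with a two-change cycle.
status_item = {
    'live': True,
    'changes': [
        {'project': 'example/project-a', 'id': '1001,2'},
        {'project': 'example/project-b', 'id': '1002,1'},
    ],
}
# The filter above keeps the whole item if any entry matches.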
def wantChange(self, change):
@ -1455,7 +1460,19 @@ class ZuulWebAPI(object):
return my_datetime.strftime('%Y-%m-%dT%H:%M:%S')
return None
def buildToDict(self, build, buildset=None):
def refToDict(self, ref):
return {
'project': ref.project,
'branch': ref.branch,
'change': ref.change,
'patchset': ref.patchset,
'ref': ref.ref,
'oldrev': ref.oldrev,
'newrev': ref.newrev,
'ref_url': ref.ref_url,
}
def buildToDict(self, build, buildset=None, skip_refs=False):
start_time = self._datetimeToString(build.start_time)
end_time = self._datetimeToString(build.end_time)
if build.start_time and build.end_time:
@ -1480,28 +1497,25 @@ class ZuulWebAPI(object):
'final': build.final,
'artifacts': [],
'provides': [],
'ref': self.refToDict(build.ref),
}
# TODO: This should not be conditional in the future, when we
# can have multiple refs for a buildset.
if buildset:
# We enter this branch if we're returning top-level build
# objects (ie, not builds under a buildset).
event_timestamp = self._datetimeToString(buildset.event_timestamp)
ret.update({
'project': build.ref.project,
'branch': build.ref.branch,
'pipeline': buildset.pipeline,
'change': build.ref.change,
'patchset': build.ref.patchset,
'ref': build.ref.ref,
'oldrev': build.ref.oldrev,
'newrev': build.ref.newrev,
'ref_url': build.ref.ref_url,
'event_id': buildset.event_id,
'event_timestamp': event_timestamp,
'buildset': {
'uuid': buildset.uuid,
},
})
if not skip_refs:
ret['buildset']['refs'] = [
self.refToDict(ref)
for ref in buildset.refs
]
for artifact in build.artifacts:
art = {
@ -1560,7 +1574,8 @@ class ZuulWebAPI(object):
idx_max=_idx_max, exclude_result=exclude_result,
query_timeout=self.query_timeout)
return [self.buildToDict(b, b.buildset) for b in builds]
return [self.buildToDict(b, b.buildset, skip_refs=True)
for b in builds]
@cherrypy.expose
@cherrypy.tools.save_params()
@ -1570,10 +1585,10 @@ class ZuulWebAPI(object):
def build(self, tenant_name, tenant, auth, uuid):
connection = self._get_connection()
data = connection.getBuilds(tenant=tenant_name, uuid=uuid, limit=1)
data = connection.getBuild(tenant_name, uuid)
if not data:
raise cherrypy.HTTPError(404, "Build not found")
data = self.buildToDict(data[0], data[0].buildset)
data = self.buildToDict(data, data.buildset)
return data
def buildTimeToDict(self, build):
@ -1646,19 +1661,15 @@ class ZuulWebAPI(object):
'uuid': buildset.uuid,
'result': buildset.result,
'message': buildset.message,
'project': buildset.refs[0].project,
'branch': buildset.refs[0].branch,
'pipeline': buildset.pipeline,
'change': buildset.refs[0].change,
'patchset': buildset.refs[0].patchset,
'ref': buildset.refs[0].ref,
'oldrev': buildset.refs[0].oldrev,
'newrev': buildset.refs[0].newrev,
'ref_url': buildset.refs[0].ref_url,
'event_id': buildset.event_id,
'event_timestamp': event_timestamp,
'first_build_start_time': start,
'last_build_end_time': end,
'refs': [
self.refToDict(ref)
for ref in buildset.refs
],
}
if builds:
ret['builds'] = []
@ -1798,7 +1809,7 @@ class ZuulWebAPI(object):
@cherrypy.tools.check_tenant_auth()
def project_freeze_jobs(self, tenant_name, tenant, auth,
pipeline_name, project_name, branch_name):
item = self._freeze_jobs(
item, change = self._freeze_jobs(
tenant, pipeline_name, project_name, branch_name)
output = []
@ -1822,9 +1833,10 @@ class ZuulWebAPI(object):
job_name):
# TODO(jhesketh): Allow a canonical change/item to be passed in which
# would return the job with any in-change modifications.
item = self._freeze_jobs(
item, change = self._freeze_jobs(
tenant, pipeline_name, project_name, branch_name)
job = item.current_build_set.job_graph.getJobFromName(job_name)
job = item.current_build_set.job_graph.getJob(
job_name, change.cache_key)
if not job:
raise cherrypy.HTTPError(404)
@ -1873,12 +1885,12 @@ class ZuulWebAPI(object):
change.cache_stat = FakeCacheKey()
with LocalZKContext(self.log) as context:
queue = ChangeQueue.new(context, pipeline=pipeline)
item = QueueItem.new(context, queue=queue, change=change)
item = QueueItem.new(context, queue=queue, changes=[change])
item.freezeJobGraph(tenant.layout, context,
skip_file_matcher=True,
redact_secrets_and_keys=True)
return item
return item, change
class StaticHandler(object):