Finish circular dependency refactor
This change completes the circular dependency refactor.

The principal change is that queue items may now include more than one change simultaneously in the case of circular dependencies.

In dependent pipelines, the two-phase reporting process is simplified because it happens during processing of a single item. In independent pipelines, non-live items are still used for linear dependencies, but multi-change items are used for circular dependencies. Previously, changes were enqueued recursively and then bundles were made out of the resulting items. Since we now need to enqueue entire cycles in one queue item, the dependency graph generation is performed at the start of enqueuing the first change in a cycle.

Some tests exercise situations where Zuul is processing events for old patchsets of changes. The new change query sequence mentioned in the previous paragraph requires more accurate information about out-of-date patchsets than the previous sequence, so the Gerrit driver has been updated to query and return more data about non-current patchsets.

This change is not backwards compatible with the existing ZK schema, and will require Zuul systems to delete all pipeline states during the upgrade. A later change will implement a helper command for this. All backwards compatibility handling for the last several model_api versions which were added to prepare for this upgrade has been removed.

In general, all model data structures involving frozen jobs are now indexed by the frozen job's uuid and no longer include the job name, since a job name no longer uniquely identifies a job in a buildset (either the uuid or the (job name, change) tuple must be used to identify it). Job deduplication is simplified and now only needs to consider jobs within the same buildset.
The fake github driver had a bug (fakegithub.py line 694) where it did not correctly increment the check run counter, so our tests that verified that we closed out obsolete check runs when re-enqueuing were not valid. This has been corrected, and in doing so, has necessitated some changes around quiet dequeuing when we re-enqueue a change.

The reporting in several drivers has been updated to support reporting information about multiple changes in a queue item.

Change-Id: I0b9e4d3f9936b1e66a08142fc36866269dc287f1
Depends-On: https://review.opendev.org/907627
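The uuid-indexed frozen-job model described above can be sketched as follows. This is a minimal illustration only, not Zuul's actual classes: `QueueItem`, `FrozenJob`, and every field and method name here are assumptions made for the sketch.

```python
import uuid


class FrozenJob:
    """A job frozen into a buildset; identified by uuid, not by name."""

    def __init__(self, name, change):
        # The uuid stays unique even when the same job name appears
        # once per change in a dependency cycle.
        self.uuid = uuid.uuid4().hex
        self.name = name
        self.change = change


class QueueItem:
    """A pipeline queue item holding a whole dependency cycle
    (a list of changes) rather than a single change."""

    def __init__(self, changes):
        self.changes = list(changes)
        # Frozen jobs are indexed by uuid, not name.
        self.jobs = {}

    def add_job(self, name, change):
        job = FrozenJob(name, change)
        self.jobs[job.uuid] = job
        return job

    def get_job(self, name, change_key):
        # A name alone is ambiguous; (name, change) or uuid is needed.
        for job in self.jobs.values():
            if job.name == name and job.change == change_key:
                return job


# Two changes in a cycle, both running a job named "check":
item = QueueItem(["A", "B"])
ja = item.add_job("check", "A")
jb = item.add_job("check", "B")
assert ja.uuid != jb.uuid
assert item.get_job("check", "A") is ja
assert item.get_job("check", "B") is jb
```

With both copies of "check" in one buildset, deduplication can be decided by comparing jobs within `item.jobs` alone, which is the simplification the message refers to.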
parent 4a7e86f7f6
commit 1f026bd49c
@@ -192,3 +192,9 @@ Version 25
 :Prior Zuul version: 9.3.0
 :Description: Add job_uuid to BuildRequests and BuildResultEvents.
               Affects schedulers and executors.
+
+Version 26
+----------
+:Prior Zuul version: 9.5.0
+:Description: Refactor circular dependencies.
+              Affects schedulers and executors.
@@ -0,0 +1,60 @@
+---
+prelude: >
+    This release includes a significant refactoring of the internal
+    handling of circular dependencies. This requires some changes for
+    consumers of Zuul output (via some reporters or the REST API) and
+    requires special care during upgrades. In the case of a
+    dependency cycle between changes, Zuul pipeline queue items will
+    now represent multiple changes rather than a single change. This
+    allows for more intuitive behavior and information display as well
+    as better handling of job deduplication.
+upgrade:
+  - |
+    Zuul can not be upgraded to this version while running. To upgrade:
+
+    * Stop all Zuul components running the previous version
+      (stopping Nodepool is optional).
+
+    * On a scheduler machine or image (with the scheduler stopped)
+      and the new version of Zuul, run the command:
+
+      zuul-admin delete-state --keep-config-cache
+
+      This will delete all of the pipeline state from ZooKeeper, but
+      it will retain the configuration cache (which contains all of
+      the project configuration from zuul.yaml files). This will
+      speed up the startup process.
+
+    * Start all Zuul components on the new version.
+  - The MQTT reporter now includes a job_uuid field to correlate retry
+    builds with final builds.
+deprecations:
+  - |
+    The syntax of string substitution in pipeline reporter messages
+    has changed. Since queue items may now represent more than one
+    change, the `{change}` substitution in messages is deprecated and
+    will be removed in a future version. To maintain backwards
+    compatibility, it currently refers to the arbitrary first change
+    in the list of changes for a queue item. Please upgrade your
+    usage to use the new `{changes}` substitution which is a list.
+  - |
+    The syntax of string substitution in SMTP reporter messages
+    has changed. Since queue items may now represent more than one
+    change, the `{change}` substitution in messages is deprecated and
+    will be removed in a future version. To maintain backwards
+    compatibility, it currently refers to the arbitrary first change
+    in the list of changes for a queue item. Please upgrade your
+    usage to use the new `{changes}` substitution which is a list.
+  - |
+    The MQTT and Elasticsearch reporters now include a `changes` field
+    which is a list of dictionaries representing the changes included
+    in an item. The corresponding scalar fields describing what was
+    previously the only change associated with an item remain for
+    backwards compatibility and refer to the arbitrary first change in
+    the list of changes for a queue item. These scalar values will be
+    removed in a future version of Zuul. Please upgrade your usage to
+    use the new `changes` entries.
+  - |
+    The `zuul.bundle_id` variable is deprecated and will be removed in
+    a future version. For backwards compatibility, it currently
+    duplicates the item uuid.
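The reporter-message deprecation above amounts to the following behavior. This is a rough sketch assuming a plain `str.format`-based reporter; the helper name `format_report` is illustrative and not Zuul's real code.

```python
def format_report(template, changes):
    """Render a reporter message for a queue item.

    `changes` is the item's list of changes.  The deprecated
    `{change}` substitution maps to the arbitrary first change for
    backwards compatibility; new templates should use `{changes}`.
    """
    return template.format(change=changes[0], changes=changes)


changes = ["1,1", "2,1"]
# Deprecated form: only reports the first change in the item.
old = format_report("Build succeeded for {change}.", changes)
# Preferred form: reports the whole list of changes.
new = format_report("Build succeeded for {changes}.", changes)
```

The same first-change fallback is what the MQTT and Elasticsearch notes describe for their scalar fields versus the new `changes` list.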
@@ -881,32 +881,32 @@ class FakeGerritChange(object):
                 if 'approved' not in label:
                     label['approved'] = app['by']
         revisions = {}
-        rev = self.patchsets[-1]
-        num = len(self.patchsets)
+        for i, rev in enumerate(self.patchsets):
+            num = i + 1
             files = {}
             for f in rev['files']:
                 if f['file'] == '/COMMIT_MSG':
                     continue
                 files[f['file']] = {"status": f['type'][0]}  # ADDED -> A
             parent = '0000000000000000000000000000000000000000'
             if self.depends_on_change:
                 parent = self.depends_on_change.patchsets[
                     self.depends_on_patchset - 1]['revision']
             revisions[rev['revision']] = {
                 "kind": "REWORK",
                 "_number": num,
                 "created": rev['createdOn'],
                 "uploader": rev['uploader'],
                 "ref": rev['ref'],
                 "commit": {
                     "subject": self.subject,
                     "message": self.data['commitMessage'],
                     "parents": [{
                         "commit": parent,
                     }]
                 },
                 "files": files
             }
         data = {
             "id": self.project + '~' + self.branch + '~' + self.data['id'],
             "project": self.project,
@@ -1462,13 +1462,14 @@ class FakeGerritConnection(gerritconnection.GerritConnection):
         }
         return event
 
-    def review(self, item, message, submit, labels, checks_api, file_comments,
-               phase1, phase2, zuul_event_id=None):
+    def review(self, item, change, message, submit, labels,
+               checks_api, file_comments, phase1, phase2,
+               zuul_event_id=None):
         if self.web_server:
             return super(FakeGerritConnection, self).review(
-                item, message, submit, labels, checks_api, file_comments,
-                phase1, phase2, zuul_event_id)
-        self._test_handle_review(int(item.change.number), message, submit,
+                item, change, message, submit, labels, checks_api,
+                file_comments, phase1, phase2, zuul_event_id)
+        self._test_handle_review(int(change.number), message, submit,
                                  labels, phase1, phase2)
 
     def _test_get_submitted_together(self, change):
@@ -3577,9 +3578,11 @@ class TestingExecutorApi(HoldableExecutorApi):
             self._test_build_request_job_map = {}
         if build_request.uuid in self._test_build_request_job_map:
             return self._test_build_request_job_map[build_request.uuid]
-        job_name = build_request.job_name
+        params = self.getParams(build_request)
+        job_name = params['zuul']['job']
         self._test_build_request_job_map[build_request.uuid] = job_name
-        return build_request.job_name
+        return job_name
 
     def release(self, what=None):
         """
@@ -691,7 +691,7 @@ class FakeGithubSession(object):
         if commit is None:
             commit = FakeCommit(head_sha)
             repo._commits[head_sha] = commit
-            repo.check_run_counter += 1
+        repo.check_run_counter += 1
         check_run = commit.set_check_run(
             str(repo.check_run_counter),
             json['name'],
File diff suppressed because it is too large
@@ -1165,7 +1165,7 @@ class TestExecutorFailure(ZuulTestCase):
         self.waitUntilSettled()
 
         job = items[0].current_build_set.job_graph.getJob(
-            'project-merge', items[0].change.cache_key)
+            'project-merge', items[0].changes[0].cache_key)
         build_retries = items[0].current_build_set.getRetryBuildsForJob(job)
         self.assertEqual(len(build_retries), 1)
         self.assertIsNotNone(build_retries[0].error_detail)
@@ -232,7 +232,7 @@ class TestJob(BaseTestCase):
         change = model.Change(self.project)
         change.branch = 'master'
         change.cache_stat = Dummy(key=Dummy(reference=uuid.uuid4().hex))
-        item = self.queue.enqueueChange(change, None)
+        item = self.queue.enqueueChanges([change], None)
 
         self.assertTrue(base.changeMatchesBranch(change))
         self.assertTrue(python27.changeMatchesBranch(change))
@@ -249,7 +249,7 @@ class TestJob(BaseTestCase):
 
         change.branch = 'stable/diablo'
         change.cache_stat = Dummy(key=Dummy(reference=uuid.uuid4().hex))
-        item = self.queue.enqueueChange(change, None)
+        item = self.queue.enqueueChanges([change], None)
 
         self.assertTrue(base.changeMatchesBranch(change))
         self.assertTrue(python27.changeMatchesBranch(change))
@@ -300,7 +300,7 @@ class TestJob(BaseTestCase):
         change.branch = 'master'
         change.cache_stat = Dummy(key=Dummy(reference=uuid.uuid4().hex))
         change.files = ['/COMMIT_MSG', 'ignored-file']
-        item = self.queue.enqueueChange(change, None)
+        item = self.queue.enqueueChanges([change], None)
 
         self.assertTrue(base.changeMatchesFiles(change))
         self.assertFalse(python27.changeMatchesFiles(change))
@@ -375,7 +375,7 @@ class TestJob(BaseTestCase):
         # Test master
         change.branch = 'master'
         change.cache_stat = Dummy(key=Dummy(reference=uuid.uuid4().hex))
-        item = self.queue.enqueueChange(change, None)
+        item = self.queue.enqueueChanges([change], None)
         with testtools.ExpectedException(
                 Exception,
                 "Pre-review pipeline gate does not allow post-review job"):
@@ -453,7 +453,7 @@ class TestJob(BaseTestCase):
         change = model.Change(self.project)
         change.branch = 'master'
         change.cache_stat = Dummy(key=Dummy(reference=uuid.uuid4().hex))
-        item = self.queue.enqueueChange(change, None)
+        item = self.queue.enqueueChanges([change], None)
 
         self.assertTrue(base.changeMatchesBranch(change))
         self.assertTrue(python27.changeMatchesBranch(change))
@@ -488,6 +488,7 @@ class FakeFrozenJob(model.Job):
         super().__init__(name)
         self.uuid = uuid.uuid4().hex
         self.ref = 'fake reference'
+        self.all_refs = [self.ref]
 
 
 class TestGraph(BaseTestCase):
@@ -465,53 +465,6 @@ class TestGithubModelUpgrade(ZuulTestCase):
     config_file = 'zuul-github-driver.conf'
     scheduler_count = 1
 
-    @model_version(3)
-    @simple_layout('layouts/gate-github.yaml', driver='github')
-    def test_status_checks_removal(self):
-        # This tests the old behavior -- that changes are not dequeued
-        # once their required status checks are removed -- since the
-        # new behavior requires a flag in ZK.
-        # Contrast with test_status_checks_removal.
-        github = self.fake_github.getGithubClient()
-        repo = github.repo_from_project('org/project')
-        repo._set_branch_protection(
-            'master', contexts=['something/check', 'tenant-one/gate'])
-
-        A = self.fake_github.openFakePullRequest('org/project', 'master', 'A')
-        self.fake_github.emitEvent(A.getPullRequestOpenedEvent())
-        self.waitUntilSettled()
-
-        self.executor_server.hold_jobs_in_build = True
-        # Since the required status 'something/check' is not fulfilled,
-        # no job is expected
-        self.assertEqual(0, len(self.history))
-
-        # Set the required status 'something/check'
-        repo.create_status(A.head_sha, 'success', 'example.com', 'description',
-                           'something/check')
-
-        self.fake_github.emitEvent(A.getPullRequestOpenedEvent())
-        self.waitUntilSettled()
-
-        # Remove it and verify the change is not dequeued (old behavior).
-        repo.create_status(A.head_sha, 'failed', 'example.com', 'description',
-                           'something/check')
-        self.fake_github.emitEvent(A.getCommitStatusEvent('something/check',
-                                                          state='failed',
-                                                          user='foo'))
-        self.waitUntilSettled()
-
-        self.executor_server.hold_jobs_in_build = False
-        self.executor_server.release()
-        self.waitUntilSettled()
-
-        # the change should have entered the gate
-        self.assertHistory([
-            dict(name='project-test1', result='SUCCESS'),
-            dict(name='project-test2', result='SUCCESS'),
-        ], ordered=False)
-        self.assertTrue(A.is_merged)
-
     @model_version(10)
     @simple_layout('layouts/github-merge-mode.yaml', driver='github')
     def test_merge_method_syntax_check(self):
@@ -703,48 +656,6 @@ class TestDefaultBranchUpgrade(ZuulTestCase):
         self.assertEqual('foobar', md.default_branch)
 
 
-class TestDeduplication(ZuulTestCase):
-    config_file = "zuul-gerrit-github.conf"
-    tenant_config_file = "config/circular-dependencies/main.yaml"
-    scheduler_count = 1
-
-    def _test_job_deduplication(self):
-        A = self.fake_gerrit.addFakeChange('org/project1', 'master', 'A')
-        B = self.fake_gerrit.addFakeChange('org/project2', 'master', 'B')
-
-        # A <-> B
-        A.data["commitMessage"] = "{}\n\nDepends-On: {}\n".format(
-            A.subject, B.data["url"]
-        )
-        B.data["commitMessage"] = "{}\n\nDepends-On: {}\n".format(
-            B.subject, A.data["url"]
-        )
-
-        A.addApproval('Code-Review', 2)
-        B.addApproval('Code-Review', 2)
-
-        self.fake_gerrit.addEvent(A.addApproval('Approved', 1))
-        self.fake_gerrit.addEvent(B.addApproval('Approved', 1))
-
-        self.waitUntilSettled()
-
-        self.assertEqual(A.data['status'], 'MERGED')
-        self.assertEqual(B.data['status'], 'MERGED')
-
-    @simple_layout('layouts/job-dedup-auto-shared.yaml')
-    @model_version(7)
-    def test_job_deduplication_auto_shared(self):
-        self._test_job_deduplication()
-        self.assertHistory([
-            dict(name="project1-job", result="SUCCESS", changes="2,1 1,1"),
-            dict(name="common-job", result="SUCCESS", changes="2,1 1,1"),
-            dict(name="project2-job", result="SUCCESS", changes="2,1 1,1"),
-            # This would be deduplicated
-            dict(name="common-job", result="SUCCESS", changes="2,1 1,1"),
-        ], ordered=False)
-        self.assertEqual(len(self.fake_nodepool.history), 4)
-
-
 class TestDataReturn(AnsibleZuulTestCase):
     tenant_config_file = 'config/data-return/main.yaml'
 
@@ -1107,7 +1107,7 @@ class TestScheduler(ZuulTestCase):
         self.assertEqual(len(queue), 1)
         self.assertEqual(queue[0].zone, None)
         params = self.executor_server.executor_api.getParams(queue[0])
-        self.assertEqual(queue[0].job_name, 'project-merge')
+        self.assertEqual(params['zuul']['job'], 'project-merge')
         self.assertEqual(params['items'][0]['number'], '%d' % A.number)
 
         self.executor_api.release('.*-merge')
@@ -1121,12 +1121,14 @@ class TestScheduler(ZuulTestCase):
         self.assertEqual(len(self.builds), 0)
         self.assertEqual(len(queue), 6)
 
-        self.assertEqual(queue[0].job_name, 'project-test1')
-        self.assertEqual(queue[1].job_name, 'project-test2')
-        self.assertEqual(queue[2].job_name, 'project-test1')
-        self.assertEqual(queue[3].job_name, 'project-test2')
-        self.assertEqual(queue[4].job_name, 'project-test1')
-        self.assertEqual(queue[5].job_name, 'project-test2')
+        params = [self.executor_server.executor_api.getParams(x)
+                  for x in queue]
+        self.assertEqual(params[0]['zuul']['job'], 'project-test1')
+        self.assertEqual(params[1]['zuul']['job'], 'project-test2')
+        self.assertEqual(params[2]['zuul']['job'], 'project-test1')
+        self.assertEqual(params[3]['zuul']['job'], 'project-test2')
+        self.assertEqual(params[4]['zuul']['job'], 'project-test1')
+        self.assertEqual(params[5]['zuul']['job'], 'project-test2')
 
         self.executor_api.release(queue[0])
         self.waitUntilSettled()
@@ -2935,16 +2937,16 @@ class TestScheduler(ZuulTestCase):
         items = check_pipeline.getAllItems()
         self.assertEqual(len(items), 3)
 
-        self.assertEqual(items[0].change.number, '1')
-        self.assertEqual(items[0].change.patchset, '1')
+        self.assertEqual(items[0].changes[0].number, '1')
+        self.assertEqual(items[0].changes[0].patchset, '1')
         self.assertFalse(items[0].live)
 
-        self.assertEqual(items[1].change.number, '2')
-        self.assertEqual(items[1].change.patchset, '1')
+        self.assertEqual(items[1].changes[0].number, '2')
+        self.assertEqual(items[1].changes[0].patchset, '1')
         self.assertTrue(items[1].live)
 
-        self.assertEqual(items[2].change.number, '1')
-        self.assertEqual(items[2].change.patchset, '1')
+        self.assertEqual(items[2].changes[0].number, '1')
+        self.assertEqual(items[2].changes[0].patchset, '1')
         self.assertTrue(items[2].live)
 
         # Add a new patchset to A
@@ -2957,16 +2959,16 @@ class TestScheduler(ZuulTestCase):
         items = check_pipeline.getAllItems()
         self.assertEqual(len(items), 3)
 
-        self.assertEqual(items[0].change.number, '1')
-        self.assertEqual(items[0].change.patchset, '1')
+        self.assertEqual(items[0].changes[0].number, '1')
+        self.assertEqual(items[0].changes[0].patchset, '1')
         self.assertFalse(items[0].live)
 
-        self.assertEqual(items[1].change.number, '2')
-        self.assertEqual(items[1].change.patchset, '1')
+        self.assertEqual(items[1].changes[0].number, '2')
+        self.assertEqual(items[1].changes[0].patchset, '1')
         self.assertTrue(items[1].live)
 
-        self.assertEqual(items[2].change.number, '1')
-        self.assertEqual(items[2].change.patchset, '2')
+        self.assertEqual(items[2].changes[0].number, '1')
+        self.assertEqual(items[2].changes[0].patchset, '2')
         self.assertTrue(items[2].live)
 
         # Add a new patchset to B
@@ -2979,16 +2981,16 @@ class TestScheduler(ZuulTestCase):
         items = check_pipeline.getAllItems()
         self.assertEqual(len(items), 3)
 
-        self.assertEqual(items[0].change.number, '1')
-        self.assertEqual(items[0].change.patchset, '2')
+        self.assertEqual(items[0].changes[0].number, '1')
+        self.assertEqual(items[0].changes[0].patchset, '2')
         self.assertTrue(items[0].live)
 
-        self.assertEqual(items[1].change.number, '1')
-        self.assertEqual(items[1].change.patchset, '1')
+        self.assertEqual(items[1].changes[0].number, '1')
+        self.assertEqual(items[1].changes[0].patchset, '1')
         self.assertFalse(items[1].live)
 
-        self.assertEqual(items[2].change.number, '2')
-        self.assertEqual(items[2].change.patchset, '2')
+        self.assertEqual(items[2].changes[0].number, '2')
+        self.assertEqual(items[2].changes[0].patchset, '2')
         self.assertTrue(items[2].live)
 
         self.builds[0].release()
@@ -3055,13 +3057,13 @@ class TestScheduler(ZuulTestCase):
         items = check_pipeline.getAllItems()
         self.assertEqual(len(items), 3)
 
-        self.assertEqual(items[0].change.number, '1')
+        self.assertEqual(items[0].changes[0].number, '1')
         self.assertFalse(items[0].live)
 
-        self.assertEqual(items[1].change.number, '2')
+        self.assertEqual(items[1].changes[0].number, '2')
         self.assertTrue(items[1].live)
 
-        self.assertEqual(items[2].change.number, '1')
+        self.assertEqual(items[2].changes[0].number, '1')
         self.assertTrue(items[2].live)
 
         # Abandon A
@@ -3073,10 +3075,10 @@ class TestScheduler(ZuulTestCase):
         items = check_pipeline.getAllItems()
         self.assertEqual(len(items), 2)
 
-        self.assertEqual(items[0].change.number, '1')
+        self.assertEqual(items[0].changes[0].number, '1')
         self.assertFalse(items[0].live)
 
-        self.assertEqual(items[1].change.number, '2')
+        self.assertEqual(items[1].changes[0].number, '2')
         self.assertTrue(items[1].live)
 
         self.executor_server.hold_jobs_in_build = False
|
||||||
|
|
||||||
first = pipeline_status['change_queues'][0]['heads'][0][0]
|
first = pipeline_status['change_queues'][0]['heads'][0][0]
|
||||||
second = pipeline_status['change_queues'][1]['heads'][0][0]
|
second = pipeline_status['change_queues'][1]['heads'][0][0]
|
||||||
self.assertIn(first['ref'], ['refs/heads/master', 'refs/heads/stable'])
|
self.assertIn(first['changes'][0]['ref'],
|
||||||
self.assertIn(second['ref'],
|
['refs/heads/master', 'refs/heads/stable'])
|
||||||
|
self.assertIn(second['changes'][0]['ref'],
|
||||||
['refs/heads/master', 'refs/heads/stable'])
|
['refs/heads/master', 'refs/heads/stable'])
|
||||||
|
|
||||||
self.executor_server.hold_jobs_in_build = False
|
self.executor_server.hold_jobs_in_build = False
|
||||||
|
@ -5799,7 +5802,6 @@ For CI problems and help debugging, contact ci@example.org"""
|
||||||
build_set = items[0].current_build_set
|
build_set = items[0].current_build_set
|
||||||
job = list(filter(lambda j: j.name == 'project-test1',
|
job = list(filter(lambda j: j.name == 'project-test1',
|
||||||
items[0].getJobs()))[0]
|
items[0].getJobs()))[0]
|
||||||
build_set.job_graph.getJobFromName(job)
|
|
||||||
|
|
||||||
for x in range(3):
|
for x in range(3):
|
||||||
# We should have x+1 retried builds for project-test1
|
# We should have x+1 retried builds for project-test1
|
||||||
|
@ -8311,8 +8313,8 @@ class TestSemaphore(ZuulTestCase):
|
||||||
1)
|
1)
|
||||||
|
|
||||||
items = check_pipeline.getAllItems()
|
items = check_pipeline.getAllItems()
|
||||||
self.assertEqual(items[0].change.number, '1')
|
self.assertEqual(items[0].changes[0].number, '1')
|
||||||
self.assertEqual(items[0].change.patchset, '2')
|
self.assertEqual(items[0].changes[0].patchset, '2')
|
||||||
self.assertTrue(items[0].live)
|
self.assertTrue(items[0].live)
|
||||||
|
|
||||||
self.executor_server.hold_jobs_in_build = False
|
self.executor_server.hold_jobs_in_build = False
|
||||||
|
@ -8389,7 +8391,8 @@ class TestSemaphore(ZuulTestCase):
|
||||||
# Save some variables for later use while the job is running
|
# Save some variables for later use while the job is running
|
||||||
check_pipeline = tenant.layout.pipelines['check']
|
check_pipeline = tenant.layout.pipelines['check']
|
||||||
item = check_pipeline.getAllItems()[0]
|
item = check_pipeline.getAllItems()[0]
|
||||||
job = item.getJob('semaphore-one-test1')
|
job = list(filter(lambda j: j.name == 'semaphore-one-test1',
|
||||||
|
item.getJobs()))[0]
|
||||||
|
|
||||||
tenant.semaphore_handler.cleanupLeaks()
|
tenant.semaphore_handler.cleanupLeaks()
|
||||||
|
|
||||||
|
|
|
@@ -717,7 +717,12 @@ class TestSOSCircularDependencies(ZuulTestCase):
             self.assertEqual(len(self.builds), 4)
             builds = self.builds[:]
             self.executor_server.failJob('job1', A)
+            # Since it's one queue item for the two changes, all 4
+            # builds need to complete.
             builds[0].release()
+            builds[1].release()
+            builds[2].release()
+            builds[3].release()
             app.sched.wake_event.set()
             self.waitUntilSettled(matcher=[app])
             self.assertEqual(A.reported, 2)
@@ -79,7 +79,7 @@ class TestTimerAlwaysDynamicBranches(ZuulTestCase):
         self.assertEqual(len(pipeline.queues), 2)
         for queue in pipeline.queues:
             item = queue.queue[0]
-            self.assertIn(item.change.branch, ['master', 'stable'])
+            self.assertIn(item.changes[0].branch, ['master', 'stable'])
 
         self.executor_server.hold_jobs_in_build = False
 
@@ -23,7 +23,11 @@ from opentelemetry import trace
 def attributes_to_dict(attrlist):
     ret = {}
     for attr in attrlist:
-        ret[attr.key] = attr.value.string_value
+        if attr.value.string_value:
+            ret[attr.key] = attr.value.string_value
+        else:
+            ret[attr.key] = [v.string_value
+                             for v in attr.value.array_value.values]
     return ret
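The updated helper above returns a list for array-valued span attributes instead of always reading `string_value`. A minimal runnable sketch of that logic, using stand-in classes in place of the OTLP protobuf types (the `_AnyValue`/`_ArrayValue`/`_Attr` names here are illustrative assumptions, not the real protobuf API):

```python
# Stand-ins for OTLP attribute protobuf messages (illustrative only).
class _ArrayValue:
    def __init__(self, values):
        self.values = values

class _AnyValue:
    def __init__(self, string_value="", array_value=None):
        self.string_value = string_value
        self.array_value = array_value or _ArrayValue([])

class _Attr:
    def __init__(self, key, value):
        self.key = key
        self.value = value

def attributes_to_dict(attrlist):
    # Scalar string attributes map to a str; array attributes map to a
    # list of the contained string values (mirroring the diff above).
    ret = {}
    for attr in attrlist:
        if attr.value.string_value:
            ret[attr.key] = attr.value.string_value
        else:
            ret[attr.key] = [v.string_value
                             for v in attr.value.array_value.values]
    return ret

attrs = [
    _Attr('tenant', _AnyValue(string_value='tenant-one')),
    _Attr('ref_number', _AnyValue(
        array_value=_ArrayValue([_AnyValue(string_value='1')]))),
]
result = attributes_to_dict(attrs)
# result == {'tenant': 'tenant-one', 'ref_number': ['1']}
```

This is why the tracing assertions below now compare against `["1"]` rather than `"1"`: multi-change items report per-change attributes as arrays.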
@@ -247,8 +251,8 @@ class TestTracing(ZuulTestCase):
                          jobexec.span_id)
 
         item_attrs = attributes_to_dict(item.attributes)
-        self.assertTrue(item_attrs['ref_number'] == "1")
-        self.assertTrue(item_attrs['ref_patchset'] == "1")
+        self.assertTrue(item_attrs['ref_number'] == ["1"])
+        self.assertTrue(item_attrs['ref_patchset'] == ["1"])
         self.assertTrue('zuul_event_id' in item_attrs)
 
     def getSpan(self, name):
@@ -1730,8 +1730,8 @@ class TestInRepoConfig(ZuulTestCase):
         self.waitUntilSettled()
 
         items = check_pipeline.getAllItems()
-        self.assertEqual(items[0].change.number, '1')
-        self.assertEqual(items[0].change.patchset, '1')
+        self.assertEqual(items[0].changes[0].number, '1')
+        self.assertEqual(items[0].changes[0].patchset, '1')
         self.assertTrue(items[0].live)
 
         in_repo_conf = textwrap.dedent(
@@ -1760,8 +1760,8 @@ class TestInRepoConfig(ZuulTestCase):
         self.waitUntilSettled()
 
         items = check_pipeline.getAllItems()
-        self.assertEqual(items[0].change.number, '1')
-        self.assertEqual(items[0].change.patchset, '2')
+        self.assertEqual(items[0].changes[0].number, '1')
+        self.assertEqual(items[0].changes[0].patchset, '2')
         self.assertTrue(items[0].live)
 
         self.executor_server.hold_jobs_in_build = False
@@ -3438,9 +3438,9 @@ class TestExtraConfigInDependent(ZuulTestCase):
         # Jobs in both changes should be success
         self.assertHistory([
             dict(name='project2-private-extra-file', result='SUCCESS',
-                 changes='3,1 1,1 2,1'),
+                 changes='3,1 2,1 1,1'),
             dict(name='project2-private-extra-dir', result='SUCCESS',
-                 changes='3,1 1,1 2,1'),
+                 changes='3,1 2,1 1,1'),
             dict(name='project-test1', result='SUCCESS',
                  changes='3,1 2,1 1,1'),
             dict(name='project3-private-extra-file', result='SUCCESS',
@@ -3987,8 +3987,8 @@ class TestInRepoJoin(ZuulTestCase):
         self.waitUntilSettled()
 
         items = gate_pipeline.getAllItems()
-        self.assertEqual(items[0].change.number, '1')
-        self.assertEqual(items[0].change.patchset, '1')
+        self.assertEqual(items[0].changes[0].number, '1')
+        self.assertEqual(items[0].changes[0].patchset, '1')
         self.assertTrue(items[0].live)
 
         self.executor_server.hold_jobs_in_build = False
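The test updates above all follow the same model migration described in the commit message: a queue item now carries a list of changes (one entry per change in a dependency cycle) instead of a single `change` attribute. A hedged sketch of that shape with hypothetical stand-in classes (not Zuul's real model code):

```python
from dataclasses import dataclass, field

@dataclass
class Change:
    # Minimal stand-in for a Zuul change; fields mirror the test assertions.
    number: str
    patchset: str
    branch: str = 'master'

@dataclass
class QueueItem:
    # A multi-change item holds every change of a circular-dependency
    # cycle; linear dependencies keep using one-change items.
    changes: list = field(default_factory=list)
    live: bool = True

item = QueueItem(changes=[Change('1', '2'), Change('2', '1')])
# Code that previously read item.change.number now reads:
first_number = item.changes[0].number
```

For single-change items `item.changes[0]` is the direct replacement for the old `item.change`, which is why most hunks are a mechanical one-token rewrite.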
@@ -173,13 +173,14 @@ class TestWeb(BaseTestWeb):
         # information is missing.
         self.assertIsNone(q['branch'])
         for head in q['heads']:
-            for change in head:
+            for item in head:
                 self.assertIn(
                     'review.example.com/org/project',
-                    change['project_canonical'])
-                self.assertTrue(change['active'])
+                    item['changes'][0]['project_canonical'])
+                self.assertTrue(item['active'])
+                change = item['changes'][0]
                 self.assertIn(change['id'], ('1,1', '2,1', '3,1'))
-                for job in change['jobs']:
+                for job in item['jobs']:
                     status_jobs.append(job)
         self.assertEqual('project-merge', status_jobs[0]['name'])
         # TODO(mordred) pull uuids from self.builds
@@ -334,12 +335,13 @@ class TestWeb(BaseTestWeb):
         data = self.get_url("api/tenant/tenant-one/status/change/1,1").json()
 
         self.assertEqual(1, len(data), data)
-        self.assertEqual("org/project", data[0]['project'])
+        self.assertEqual("org/project", data[0]['changes'][0]['project'])
 
         data = self.get_url("api/tenant/tenant-one/status/change/2,1").json()
 
         self.assertEqual(1, len(data), data)
-        self.assertEqual("org/project1", data[0]['project'], data)
+        self.assertEqual("org/project1", data[0]['changes'][0]['project'],
+                         data)
 
     @simple_layout('layouts/nodeset-alternatives.yaml')
     def test_web_find_job_nodeset_alternatives(self):
@@ -1966,7 +1968,10 @@ class TestBuildInfo(BaseTestWeb):
 
         buildsets = self.get_url("api/tenant/tenant-one/buildsets").json()
         self.assertEqual(2, len(buildsets))
-        project_bs = [x for x in buildsets if x["project"] == "org/project"][0]
+        project_bs = [
+            x for x in buildsets
+            if x["refs"][0]["project"] == "org/project"
+        ][0]
 
         buildset = self.get_url(
             "api/tenant/tenant-one/buildset/%s" % project_bs['uuid']).json()
@@ -2070,7 +2075,10 @@ class TestArtifacts(BaseTestWeb, AnsibleZuulTestCase):
         self.waitUntilSettled()
 
         buildsets = self.get_url("api/tenant/tenant-one/buildsets").json()
-        project_bs = [x for x in buildsets if x["project"] == "org/project"][0]
+        project_bs = [
+            x for x in buildsets
+            if x["refs"][0]["project"] == "org/project"
+        ][0]
         buildset = self.get_url(
             "api/tenant/tenant-one/buildset/%s" % project_bs['uuid']).json()
         self.assertEqual(3, len(buildset["builds"]))
@@ -2672,7 +2680,7 @@ class TestTenantScopedWebApi(BaseTestWeb):
         items = tenant.layout.pipelines['gate'].getAllItems()
         enqueue_times = {}
         for item in items:
-            enqueue_times[str(item.change)] = item.enqueue_time
+            enqueue_times[str(item.changes[0])] = item.enqueue_time
 
         # REST API
         args = {'pipeline': 'gate',
@@ -2699,7 +2707,7 @@ class TestTenantScopedWebApi(BaseTestWeb):
         items = tenant.layout.pipelines['gate'].getAllItems()
         for item in items:
             self.assertEqual(
-                enqueue_times[str(item.change)], item.enqueue_time)
+                enqueue_times[str(item.changes[0])], item.enqueue_time)
 
         self.waitUntilSettled()
         self.executor_server.release('.*-merge')
@@ -2761,7 +2769,7 @@ class TestTenantScopedWebApi(BaseTestWeb):
         items = tenant.layout.pipelines['gate'].getAllItems()
         enqueue_times = {}
         for item in items:
-            enqueue_times[str(item.change)] = item.enqueue_time
+            enqueue_times[str(item.changes[0])] = item.enqueue_time
 
         # REST API
         args = {'pipeline': 'gate',
@@ -2788,7 +2796,7 @@ class TestTenantScopedWebApi(BaseTestWeb):
         items = tenant.layout.pipelines['gate'].getAllItems()
         for item in items:
             self.assertEqual(
-                enqueue_times[str(item.change)], item.enqueue_time)
+                enqueue_times[str(item.changes[0])], item.enqueue_time)
 
         self.waitUntilSettled()
         self.executor_server.release('.*-merge')
@@ -2853,7 +2861,7 @@ class TestTenantScopedWebApi(BaseTestWeb):
                  if i.live]
         enqueue_times = {}
         for item in items:
-            enqueue_times[str(item.change)] = item.enqueue_time
+            enqueue_times[str(item.changes[0])] = item.enqueue_time
 
         # REST API
         args = {'pipeline': 'check',
@@ -2882,12 +2890,12 @@ class TestTenantScopedWebApi(BaseTestWeb):
                  if i.live]
         for item in items:
             self.assertEqual(
-                enqueue_times[str(item.change)], item.enqueue_time)
+                enqueue_times[str(item.changes[0])], item.enqueue_time)
 
         # We can't reliably test for side effects in the check
         # pipeline since the change queues are independent, so we
         # directly examine the queues.
-        queue_items = [(item.change.number, item.live) for item in
+        queue_items = [(item.changes[0].number, item.live) for item in
                        tenant.layout.pipelines['check'].getAllItems()]
         expected = [('1', False),
                     ('2', True),
@@ -3555,7 +3563,7 @@ class TestCLIViaWebApi(BaseTestWeb):
         items = tenant.layout.pipelines['gate'].getAllItems()
         enqueue_times = {}
         for item in items:
-            enqueue_times[str(item.change)] = item.enqueue_time
+            enqueue_times[str(item.changes[0])] = item.enqueue_time
 
         # Promote B and C using the cli
         authz = {'iss': 'zuul_operator',
@@ -3581,7 +3589,7 @@ class TestCLIViaWebApi(BaseTestWeb):
         items = tenant.layout.pipelines['gate'].getAllItems()
         for item in items:
             self.assertEqual(
-                enqueue_times[str(item.change)], item.enqueue_time)
+                enqueue_times[str(item.changes[0])], item.enqueue_time)
 
         self.waitUntilSettled()
         self.executor_server.release('.*-merge')
@@ -356,7 +356,7 @@ class TestZuulClientAdmin(BaseTestWeb):
         items = tenant.layout.pipelines['gate'].getAllItems()
         enqueue_times = {}
         for item in items:
-            enqueue_times[str(item.change)] = item.enqueue_time
+            enqueue_times[str(item.changes[0])] = item.enqueue_time
 
         # Promote B and C using the cli
         authz = {'iss': 'zuul_operator',
@@ -382,7 +382,7 @@ class TestZuulClientAdmin(BaseTestWeb):
         items = tenant.layout.pipelines['gate'].getAllItems()
         for item in items:
             self.assertEqual(
-                enqueue_times[str(item.change)], item.enqueue_time)
+                enqueue_times[str(item.changes[0])], item.enqueue_time)
 
         self.waitUntilSettled()
         self.executor_server.release('.*-merge')
@@ -1,4 +1,5 @@
 # Copyright 2019 Red Hat, Inc.
+# Copyright 2024 Acme Gating, LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain
@@ -37,20 +38,34 @@ class ElasticsearchReporter(BaseReporter):
         docs = []
         index = '%s.%s-%s' % (self.index, item.pipeline.tenant.name,
                               time.strftime("%Y.%m.%d"))
+        changes = [
+            {
+                "project": change.project.name,
+                "change": getattr(change, 'number', None),
+                "patchset": getattr(change, 'patchset', None),
+                "ref": getattr(change, 'ref', ''),
+                "oldrev": getattr(change, 'oldrev', ''),
+                "newrev": getattr(change, 'newrev', ''),
+                "branch": getattr(change, 'branch', ''),
+                "ref_url": change.url,
+            }
+            for change in item.changes
+        ]
         buildset_doc = {
             "uuid": item.current_build_set.uuid,
             "build_type": "buildset",
             "tenant": item.pipeline.tenant.name,
             "pipeline": item.pipeline.name,
-            "project": item.change.project.name,
-            "change": getattr(item.change, 'number', None),
-            "patchset": getattr(item.change, 'patchset', None),
-            "ref": getattr(item.change, 'ref', ''),
-            "oldrev": getattr(item.change, 'oldrev', ''),
-            "newrev": getattr(item.change, 'newrev', ''),
-            "branch": getattr(item.change, 'branch', ''),
+            "changes": changes,
+            "project": item.changes[0].project.name,
+            "change": getattr(item.changes[0], 'number', None),
+            "patchset": getattr(item.changes[0], 'patchset', None),
+            "ref": getattr(item.changes[0], 'ref', ''),
+            "oldrev": getattr(item.changes[0], 'oldrev', ''),
+            "newrev": getattr(item.changes[0], 'newrev', ''),
+            "branch": getattr(item.changes[0], 'branch', ''),
             "zuul_ref": item.current_build_set.ref,
-            "ref_url": item.change.url,
+            "ref_url": item.changes[0].url,
             "result": item.current_build_set.result,
             "message": self._formatItemReport(item, with_jobs=False)
         }
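The Elasticsearch hunk above adds a `changes` list to the buildset document, one entry per change in the item, while keeping the old top-level fields populated from the first change. A hedged sketch of that per-change document construction, using `types.SimpleNamespace` as a stand-in for Zuul change objects (field names follow the diff; everything else is illustrative):

```python
from types import SimpleNamespace

def build_changes_docs(item_changes):
    # One document per change; getattr with defaults mirrors the diff's
    # tolerance for ref-like objects that lack change/patchset numbers.
    return [
        {
            "project": change.project.name,
            "change": getattr(change, 'number', None),
            "patchset": getattr(change, 'patchset', None),
            "branch": getattr(change, 'branch', ''),
            "ref_url": change.url,
        }
        for change in item_changes
    ]

changes = [
    SimpleNamespace(project=SimpleNamespace(name='org/project'),
                    number='1', patchset='1', branch='master',
                    url='https://review.example.com/1'),
    SimpleNamespace(project=SimpleNamespace(name='org/project1'),
                    number='2', patchset='1', branch='master',
                    url='https://review.example.com/2'),
]
docs = build_changes_docs(changes)
# Two docs, one per change in the (possibly circular-dependency) item.
```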
@@ -80,8 +95,21 @@ class ElasticsearchReporter(BaseReporter):
             buildset_doc['duration'] = (
                 buildset_doc['end_time'] - buildset_doc['start_time'])
 
+            change = item.getChangeForJob(build.job)
+            change_doc = {
+                "project": change.project.name,
+                "change": getattr(change, 'number', None),
+                "patchset": getattr(change, 'patchset', None),
+                "ref": getattr(change, 'ref', ''),
+                "oldrev": getattr(change, 'oldrev', ''),
+                "newrev": getattr(change, 'newrev', ''),
+                "branch": getattr(change, 'branch', ''),
+                "ref_url": change.url,
+            }
+
             build_doc = {
                 "uuid": build.uuid,
+                "change": change_doc,
                 "build_type": "build",
                 "buildset_uuid": buildset_doc['uuid'],
                 "job_name": build.job.name,
@@ -1,6 +1,6 @@
 # Copyright 2011 OpenStack, LLC.
 # Copyright 2012 Hewlett-Packard Development Company, L.P.
-# Copyright 2023 Acme Gating, LLC
+# Copyright 2023-2024 Acme Gating, LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain
@@ -1165,24 +1165,23 @@ class GerritConnection(ZKChangeCacheMixin, ZKBranchCacheMixin, BaseConnection):
         }
         self.event_queue.put(event)
 
-    def review(self, item, message, submit, labels, checks_api,
+    def review(self, item, change, message, submit, labels, checks_api,
                file_comments, phase1, phase2, zuul_event_id=None):
         if self.session:
             meth = self.review_http
         else:
             meth = self.review_ssh
-        return meth(item, message, submit, labels, checks_api,
+        return meth(item, change, message, submit, labels, checks_api,
                     file_comments, phase1, phase2,
                     zuul_event_id=zuul_event_id)
 
-    def review_ssh(self, item, message, submit, labels, checks_api,
+    def review_ssh(self, item, change, message, submit, labels, checks_api,
                    file_comments, phase1, phase2, zuul_event_id=None):
         log = get_annotated_logger(self.log, zuul_event_id)
         if checks_api:
             log.error("Zuul is configured to report to the checks API, "
                       "but no HTTP password is present for the connection "
                       "in the configuration file.")
-        change = item.change
         project = change.project.name
         cmd = 'gerrit review --project %s' % project
         if phase1:
@@ -1208,8 +1207,7 @@ class GerritConnection(ZKChangeCacheMixin, ZKBranchCacheMixin, BaseConnection):
         out, err = self._ssh(cmd, zuul_event_id=zuul_event_id)
         return err
 
-    def report_checks(self, log, item, changeid, checkinfo):
-        change = item.change
+    def report_checks(self, log, item, change, changeid, checkinfo):
         checkinfo = checkinfo.copy()
         uuid = checkinfo.pop('uuid', None)
         scheme = checkinfo.pop('scheme', None)
@@ -1254,10 +1252,9 @@ class GerritConnection(ZKChangeCacheMixin, ZKBranchCacheMixin, BaseConnection):
                           "attempt %s: %s", x, e)
                 time.sleep(x * self.submit_retry_backoff)
 
-    def review_http(self, item, message, submit, labels,
+    def review_http(self, item, change, message, submit, labels,
                     checks_api, file_comments, phase1, phase2,
                     zuul_event_id=None):
-        change = item.change
         changeid = "%s~%s~%s" % (
             urllib.parse.quote(str(change.project), safe=''),
             urllib.parse.quote(str(change.branch), safe=''),
@@ -1293,7 +1290,7 @@ class GerritConnection(ZKChangeCacheMixin, ZKBranchCacheMixin, BaseConnection):
         if self.version >= (2, 13, 0):
             data['tag'] = 'autogenerated:zuul:%s' % (item.pipeline.name)
         if checks_api:
-            self.report_checks(log, item, changeid, checks_api)
+            self.report_checks(log, item, change, changeid, checks_api)
         if (message or data.get('labels') or data.get('comments')
                 or data.get('robot_comments')):
             for x in range(1, 4):
@@ -1356,7 +1353,7 @@ class GerritConnection(ZKChangeCacheMixin, ZKBranchCacheMixin, BaseConnection):
     def queryChangeHTTP(self, number, event=None):
         query = ('changes/%s?o=DETAILED_ACCOUNTS&o=CURRENT_REVISION&'
                  'o=CURRENT_COMMIT&o=CURRENT_FILES&o=LABELS&'
-                 'o=DETAILED_LABELS' % (number,))
+                 'o=DETAILED_LABELS&o=ALL_REVISIONS' % (number,))
         if self.version >= (3, 5, 0):
             query += '&o=SUBMIT_REQUIREMENTS'
         data = self.get(query)
@@ -160,9 +160,12 @@ class GerritChange(Change):
             '%s/c/%s/+/%s' % (baseurl, self.project.name, self.number),
         ]
 
+        for rev_commit, revision in data['revisions'].items():
+            if str(revision['_number']) == self.patchset:
+                self.ref = revision['ref']
+                self.commit = rev_commit
+
         if str(current_revision['_number']) == self.patchset:
-            self.ref = current_revision['ref']
-            self.commit = data['current_revision']
             self.is_current_patchset = True
         else:
             self.is_current_patchset = False
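With `o=ALL_REVISIONS` in the query above, the Gerrit response includes a `revisions` map keyed by commit SHA, so the driver can now resolve the ref and commit of an out-of-date patchset instead of only the current one. A small sketch of that scan over a fabricated response in the Gerrit REST shape (the SHAs and refs here are made up for illustration):

```python
# Fabricated ALL_REVISIONS response fragment in the Gerrit REST shape.
data = {
    'current_revision': 'def456',
    'revisions': {
        'abc123': {'_number': 1, 'ref': 'refs/changes/01/1/1'},
        'def456': {'_number': 2, 'ref': 'refs/changes/01/1/2'},
    },
}

patchset = '1'  # we are processing an event for the older patchset
ref = commit = None
for rev_commit, revision in data['revisions'].items():
    # Patchset numbers are integers in the response but strings on the
    # change object, hence the str() comparison (as in the diff above).
    if str(revision['_number']) == patchset:
        ref = revision['ref']
        commit = rev_commit
is_current = (commit == data['current_revision'])
# ref/commit now describe patchset 1 even though patchset 2 is current.
```

This is what lets tests that exercise events for old patchsets get accurate ref and commit data.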
@@ -1,4 +1,5 @@
 # Copyright 2013 Rackspace Australia
+# Copyright 2024 Acme Gating, LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain
@@ -43,44 +44,44 @@ class GerritReporter(BaseReporter):
         """Send a message to gerrit."""
         log = get_annotated_logger(self.log, item.event)
 
+        ret = []
+        for change in item.changes:
+            err = self._reportChange(item, change, log, phase1, phase2)
+            if err:
+                ret.append(err)
+        return ret
+
+    def _reportChange(self, item, change, log, phase1=True, phase2=True):
+        """Send a message to gerrit."""
         # If the source is no GerritSource we cannot report anything here.
-        if not isinstance(item.change.project.source, GerritSource):
+        if not isinstance(change.project.source, GerritSource):
             return
 
         # We can only report changes, not plain branches
-        if not isinstance(item.change, Change):
+        if not isinstance(change, Change):
             return
 
         # For supporting several Gerrit connections we also must filter by
         # the canonical hostname.
-        if item.change.project.source.connection.canonical_hostname != \
+        if change.project.source.connection.canonical_hostname != \
                 self.connection.canonical_hostname:
-            log.debug("Not reporting %s as this Gerrit reporter "
-                      "is for %s and the change is from %s",
-                      item, self.connection.canonical_hostname,
-                      item.change.project.source.connection.canonical_hostname)
             return
 
-        comments = self.getFileComments(item)
+        comments = self.getFileComments(item, change)
         if self._create_comment:
             message = self._formatItemReport(item)
         else:
             message = ''
 
         log.debug("Report change %s, params %s, message: %s, comments: %s",
-                  item.change, self.config, message, comments)
-        if phase2 and self._submit and not hasattr(item.change, '_ref_sha'):
+                  change, self.config, message, comments)
+        if phase2 and self._submit and not hasattr(change, '_ref_sha'):
             # If we're starting to submit a bundle, save the current
             # ref sha for every item in the bundle.
-            changes = set([item.change])
-            if item.bundle:
-                for i in item.bundle.items:
-                    changes.add(i.change)
-
             # Store a dict of project,branch -> sha so that if we have
             # duplicate project/branches, we only query once.
             ref_shas = {}
-            for other_change in changes:
+            for other_change in item.changes:
                 if not isinstance(other_change, GerritChange):
                     continue
                 key = (other_change.project, other_change.branch)
@@ -92,9 +93,10 @@ class GerritReporter(BaseReporter):
                 ref_shas[key] = ref_sha
                 other_change._ref_sha = ref_sha
 
-        return self.connection.review(item, message, self._submit,
-                                      self._labels, self._checks_api,
-                                      comments, phase1, phase2,
+        return self.connection.review(item, change, message,
+                                      self._submit, self._labels,
+                                      self._checks_api, comments,
+                                      phase1, phase2,
                                       zuul_event_id=item.event)
 
     def getSubmitAllowNeeds(self):
@@ -78,7 +78,7 @@ class GitConnection(ZKChangeCacheMixin, BaseConnection):
         self.projects[project.name] = project
 
     def getChangeFilesUpdated(self, project_name, branch, tosha):
-        job = self.sched.merger.getFilesChanges(
+        job = self.sched.merger.getFilesChangesRaw(
             self.connection_name, project_name, branch, tosha,
             needs_result=True)
         self.log.debug("Waiting for fileschanges job %s" % job)
@@ -86,8 +86,8 @@ class GitConnection(ZKChangeCacheMixin, BaseConnection):
         if not job.updated:
             raise Exception("Fileschanges job %s failed" % job)
         self.log.debug("Fileschanges job %s got changes on files %s" %
-                       (job, job.files))
-        return job.files
+                       (job, job.files[0]))
+        return job.files[0]
 
     def lsRemote(self, project):
         refs = {}
@@ -1,4 +1,5 @@
 # Copyright 2015 Puppet Labs
+# Copyright 2024 Acme Gating, LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain
@@ -58,37 +59,48 @@ class GithubReporter(BaseReporter):
         self.context = "{}/{}".format(pipeline.tenant.name, pipeline.name)
 
     def report(self, item, phase1=True, phase2=True):
+        """Report on an event."""
+        log = get_annotated_logger(self.log, item.event)
+
+        ret = []
+        for change in item.changes:
+            err = self._reportChange(item, change, log, phase1, phase2)
+            if err:
+                ret.append(err)
+        return ret
+
+    def _reportChange(self, item, change, log, phase1=True, phase2=True):
         """Report on an event."""
         # If the source is not GithubSource we cannot report anything here.
-        if not isinstance(item.change.project.source, GithubSource):
+        if not isinstance(change.project.source, GithubSource):
             return
 
         # For supporting several Github connections we also must filter by
         # the canonical hostname.
-        if item.change.project.source.connection.canonical_hostname != \
+        if change.project.source.connection.canonical_hostname != \
                 self.connection.canonical_hostname:
             return
 
        # order is important for github branch protection.
        # A status should be set before a merge attempt
         if phase1 and self._commit_status is not None:
-            if (hasattr(item.change, 'patchset') and
-                    item.change.patchset is not None):
-                self.setCommitStatus(item)
-            elif (hasattr(item.change, 'newrev') and
-                    item.change.newrev is not None):
-                self.setCommitStatus(item)
+            if (hasattr(change, 'patchset') and
+                    change.patchset is not None):
+                self.setCommitStatus(item, change)
+            elif (hasattr(change, 'newrev') and
+                    change.newrev is not None):
+                self.setCommitStatus(item, change)
         # Comments, labels, and merges can only be performed on pull requests.
         # If the change is not a pull request (e.g. a push) skip them.
-        if hasattr(item.change, 'number'):
+        if hasattr(change, 'number'):
             errors_received = False
             if phase1:
                 if self._labels or self._unlabels:
-                    self.setLabels(item)
+                    self.setLabels(item, change)
                 if self._review:
-                    self.addReview(item)
+                    self.addReview(item, change)
                 if self._check:
-                    check_errors = self.updateCheck(item)
+                    check_errors = self.updateCheck(item, change)
                     # TODO (felix): We could use this mechanism to
                     # also report back errors from label and review
                     # actions
@@ -98,12 +110,12 @@ class GithubReporter(BaseReporter):
                     )
                     errors_received = True
             if self._create_comment or errors_received:
-                self.addPullComment(item)
+                self.addPullComment(item, change)
             if phase2 and self._merge:
                 try:
-                    self.mergePull(item)
+                    self.mergePull(item, change)
                 except Exception as e:
-                    self.addPullComment(item, str(e))
+                    self.addPullComment(item, change, str(e))
 
     def _formatJobResult(self, job_fields):
         # We select different emojis to represents build results:
@@ -145,24 +157,24 @@ class GithubReporter(BaseReporter):
             ret += 'Skipped %i %s\n' % (skipped, jobtext)
         return ret
 
-    def addPullComment(self, item, comment=None):
+    def addPullComment(self, item, change, comment=None):
         log = get_annotated_logger(self.log, item.event)
         message = comment or self._formatItemReport(item)
-        project = item.change.project.name
-        pr_number = item.change.number
+        project = change.project.name
+        pr_number = change.number
         log.debug('Reporting change %s, params %s, message: %s',
-                  item.change, self.config, message)
+                  change, self.config, message)
         self.connection.commentPull(project, pr_number, message,
                                     zuul_event_id=item.event)
 
-    def setCommitStatus(self, item):
+    def setCommitStatus(self, item, change):
         log = get_annotated_logger(self.log, item.event)
 
-        project = item.change.project.name
-        if hasattr(item.change, 'patchset'):
-            sha = item.change.patchset
-        elif hasattr(item.change, 'newrev'):
-            sha = item.change.newrev
+        project = change.project.name
+        if hasattr(change, 'patchset'):
+            sha = change.patchset
+        elif hasattr(change, 'newrev'):
+            sha = change.newrev
         state = self._commit_status
 
         url = item.formatStatusUrl()
@@ -180,27 +192,27 @@ class GithubReporter(BaseReporter):
         log.debug(
             'Reporting change %s, params %s, '
            'context: %s, state: %s, description: %s, url: %s',
-            item.change, self.config, self.context, state, description, url)
+            change, self.config, self.context, state, description, url)
 
         self.connection.setCommitStatus(
             project, sha, state, url, description, self.context,
             zuul_event_id=item.event)
 
-    def mergePull(self, item):
+    def mergePull(self, item, change):
         log = get_annotated_logger(self.log, item.event)
-        merge_mode = item.current_build_set.getMergeMode()
+        merge_mode = item.current_build_set.getMergeMode(change)
 
         if merge_mode not in self.merge_modes:
             mode = model.get_merge_mode_name(merge_mode)
             self.log.warning('Merge mode %s not supported by Github', mode)
             raise MergeFailure('Merge mode %s not supported by Github' % mode)
 
-        project = item.change.project.name
-        pr_number = item.change.number
-        sha = item.change.patchset
+        project = change.project.name
+        pr_number = change.number
+        sha = change.patchset
         log.debug('Reporting change %s, params %s, merging via API',
-                  item.change, self.config)
-        message = self._formatMergeMessage(item.change, merge_mode)
+                  change, self.config)
+        message = self._formatMergeMessage(change, merge_mode)
         merge_mode = self.merge_modes[merge_mode]
 
         for i in [1, 2]:
@@ -208,26 +220,26 @@ class GithubReporter(BaseReporter):
                 self.connection.mergePull(project, pr_number, message, sha=sha,
                                           method=merge_mode,
                                           zuul_event_id=item.event)
-                self.connection.updateChangeAttributes(item.change,
+                self.connection.updateChangeAttributes(change,
                                                        is_merged=True)
                 return
             except MergeFailure as e:
                 log.exception('Merge attempt of change %s %s/2 failed.',
-                              item.change, i, exc_info=True)
+                              change, i, exc_info=True)
                 error_message = str(e)
                 if i == 1:
                     time.sleep(2)
         log.warning('Merge of change %s failed after 2 attempts, giving up',
-                    item.change)
+                    change)
         raise MergeFailure(error_message)
 
-    def addReview(self, item):
+    def addReview(self, item, change):
         log = get_annotated_logger(self.log, item.event)
-        project = item.change.project.name
-        pr_number = item.change.number
-        sha = item.change.patchset
+        project = change.project.name
+        pr_number = change.number
+        sha = change.patchset
         log.debug('Reporting change %s, params %s, review:\n%s',
-                  item.change, self.config, self._review)
+                  change, self.config, self._review)
         self.connection.reviewPull(
             project,
             pr_number,
@@ -239,12 +251,12 @@ class GithubReporter(BaseReporter):
             self.connection.unlabelPull(project, pr_number, label,
                                         zuul_event_id=item.event)
 
-    def updateCheck(self, item):
+    def updateCheck(self, item, change):
         log = get_annotated_logger(self.log, item.event)
         message = self._formatItemReport(item)
-        project = item.change.project.name
-        pr_number = item.change.number
-        sha = item.change.patchset
+        project = change.project.name
+        pr_number = change.number
+        sha = change.patchset
 
         status = self._check
         # We declare a item as completed if it either has a result
@@ -260,13 +272,13 @@ class GithubReporter(BaseReporter):
 
         log.debug(
             "Updating check for change %s, params %s, context %s, message: %s",
-            item.change, self.config, self.context, message
+            change, self.config, self.context, message
         )
 
         details_url = item.formatStatusUrl()
 
         # Check for inline comments that can be reported via checks API
-        file_comments = self.getFileComments(item)
+        file_comments = self.getFileComments(item, change)
 
         # Github allows an external id to be added to a check run. We can use
         # this to identify the check run in any custom actions we define.
@@ -279,11 +291,13 @@ class GithubReporter(BaseReporter):
             {
                 "tenant": item.pipeline.tenant.name,
                 "pipeline": item.pipeline.name,
-                "change": item.change.number,
+                "change": change.number,
             }
         )
 
         state = item.dynamic_state[self.connection.connection_name]
+        check_run_ids = state.setdefault('check_run_ids', {})
+        check_run_id = check_run_ids.get(change.cache_key)
         check_run_id, errors = self.connection.updateCheck(
             project,
             pr_number,
@@ -296,27 +310,27 @@ class GithubReporter(BaseReporter):
             file_comments,
             external_id,
             zuul_event_id=item.event,
-            check_run_id=state.get('check_run_id')
+            check_run_id=check_run_id,
         )
 
         if check_run_id:
-            state['check_run_id'] = check_run_id
+            check_run_ids[change.cache_key] = check_run_id
 
         return errors
 
-    def setLabels(self, item):
+    def setLabels(self, item, change):
         log = get_annotated_logger(self.log, item.event)
-        project = item.change.project.name
-        pr_number = item.change.number
+        project = change.project.name
+        pr_number = change.number
         if self._labels:
             log.debug('Reporting change %s, params %s, labels:\n%s',
-                      item.change, self.config, self._labels)
+                      change, self.config, self._labels)
             for label in self._labels:
                 self.connection.labelPull(project, pr_number, label,
                                           zuul_event_id=item.event)
         if self._unlabels:
             log.debug('Reporting change %s, params %s, unlabels:\n%s',
-                      item.change, self.config, self._unlabels)
+                      change, self.config, self._unlabels)
             for label in self._unlabels:
                 self.connection.unlabelPull(project, pr_number, label,
                                             zuul_event_id=item.event)
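The report()/_reportChange() split applied to the GitHub reporter above (and repeated for the GitLab and Pagure reporters below) follows one pattern: report() fans out over every change in the queue item and collects per-change errors. A minimal, self-contained sketch of that dispatch pattern, using stand-in classes rather than Zuul's real model objects:

```python
# Sketch of the per-change reporting fan-out. FakeChange/FakeItem are
# hypothetical stand-ins; only the loop structure mirrors the diff above.

class FakeChange:
    def __init__(self, number, ok=True):
        self.number = number
        self.ok = ok


class FakeItem:
    def __init__(self, changes):
        # A queue item may now carry several changes (a dependency cycle).
        self.changes = changes


class Reporter:
    def report(self, item, phase1=True, phase2=True):
        # Report each change in the item; collect per-change errors.
        ret = []
        for change in item.changes:
            err = self._reportChange(item, change, phase1, phase2)
            if err:
                ret.append(err)
        return ret

    def _reportChange(self, item, change, phase1=True, phase2=True):
        # Report a single change; return an error string on failure.
        if not change.ok:
            return 'failed to report change %s' % change.number
        return None


item = FakeItem([FakeChange(1), FakeChange(2, ok=False)])
errors = Reporter().report(item)
```

A driver that cannot act on a given change (wrong source, wrong canonical hostname) simply returns early from `_reportChange` without aborting the other changes in the item.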
diff --git a/zuul/driver/gitlab/gitlabreporter.py b/zuul/driver/gitlab/gitlabreporter.py
@@ -1,4 +1,5 @@
 # Copyright 2019 Red Hat, Inc.
+# Copyright 2024 Acme Gating, LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain
@@ -51,62 +52,68 @@ class GitlabReporter(BaseReporter):
 
     def report(self, item, phase1=True, phase2=True):
         """Report on an event."""
-        if not isinstance(item.change.project.source, GitlabSource):
+        for change in item.changes:
+            self._reportChange(item, change, phase1, phase2)
+        return []
+
+    def _reportChange(self, item, change, phase1=True, phase2=True):
+        """Report on an event."""
+        if not isinstance(change.project.source, GitlabSource):
             return
 
-        if item.change.project.source.connection.canonical_hostname != \
+        if change.project.source.connection.canonical_hostname != \
                 self.connection.canonical_hostname:
             return
 
-        if hasattr(item.change, 'number'):
+        if hasattr(change, 'number'):
             if phase1:
                 if self._create_comment:
-                    self.addMRComment(item)
+                    self.addMRComment(item, change)
                 if self._approval is not None:
-                    self.setApproval(item)
+                    self.setApproval(item, change)
                 if self._labels or self._unlabels:
-                    self.setLabels(item)
+                    self.setLabels(item, change)
             if phase2 and self._merge:
-                self.mergeMR(item)
-                if not item.change.is_merged:
+                self.mergeMR(item, change)
+                if not change.is_merged:
                     msg = self._formatItemReportMergeConflict(item)
-                    self.addMRComment(item, msg)
+                    self.addMRComment(item, change, msg)
 
-    def addMRComment(self, item, comment=None):
+    def addMRComment(self, item, change, comment=None):
         log = get_annotated_logger(self.log, item.event)
         message = comment or self._formatItemReport(item)
-        project = item.change.project.name
-        mr_number = item.change.number
+        project = change.project.name
+        mr_number = change.number
         log.debug('Reporting change %s, params %s, message: %s',
-                  item.change, self.config, message)
+                  change, self.config, message)
         self.connection.commentMR(project, mr_number, message,
                                   event=item.event)
 
-    def setApproval(self, item):
+    def setApproval(self, item, change):
         log = get_annotated_logger(self.log, item.event)
-        project = item.change.project.name
-        mr_number = item.change.number
-        patchset = item.change.patchset
+        project = change.project.name
+        mr_number = change.number
+        patchset = change.patchset
         log.debug('Reporting change %s, params %s, approval: %s',
-                  item.change, self.config, self._approval)
+                  change, self.config, self._approval)
         self.connection.approveMR(project, mr_number, patchset,
                                   self._approval, event=item.event)
 
-    def setLabels(self, item):
+    def setLabels(self, item, change):
         log = get_annotated_logger(self.log, item.event)
-        project = item.change.project.name
-        mr_number = item.change.number
+        project = change.project.name
+        mr_number = change.number
         log.debug('Reporting change %s, params %s, labels: %s, unlabels: %s',
-                  item.change, self.config, self._labels, self._unlabels)
+                  change, self.config, self._labels, self._unlabels)
         self.connection.updateMRLabels(project, mr_number,
                                        self._labels, self._unlabels,
                                        zuul_event_id=item.event)
 
-    def mergeMR(self, item):
-        project = item.change.project.name
-        mr_number = item.change.number
+    def mergeMR(self, item, change):
+        project = change.project.name
+        mr_number = change.number
 
-        merge_mode = item.current_build_set.getMergeMode()
+        merge_mode = item.current_build_set.getMergeMode(change)
 
         if merge_mode not in self.merge_modes:
             mode = model.get_merge_mode_name(merge_mode)
@@ -118,17 +125,17 @@ class GitlabReporter(BaseReporter):
         for i in [1, 2]:
             try:
                 self.connection.mergeMR(project, mr_number, merge_mode)
-                item.change.is_merged = True
+                change.is_merged = True
                 return
             except MergeFailure:
                 self.log.exception(
                     'Merge attempt of change %s %s/2 failed.' %
-                    (item.change, i), exc_info=True)
+                    (change, i), exc_info=True)
                 if i == 1:
                     time.sleep(2)
         self.log.warning(
             'Merge of change %s failed after 2 attempts, giving up' %
-            item.change)
+            change)
 
     def getSubmitAllowNeeds(self):
         return []
diff --git a/zuul/driver/mqtt/mqttreporter.py b/zuul/driver/mqtt/mqttreporter.py
@@ -1,4 +1,5 @@
 # Copyright 2017 Red Hat, Inc.
+# Copyright 2024 Acme Gating, LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain
@@ -32,21 +33,35 @@ class MQTTReporter(BaseReporter):
             return
         include_returned_data = self.config.get('include-returned-data')
         log = get_annotated_logger(self.log, item.event)
-        log.debug("Report change %s, params %s", item.change, self.config)
+        log.debug("Report %s, params %s", item, self.config)
+        changes = [
+            {
+                'project': change.project.name,
+                'branch': getattr(change, 'branch', ''),
+                'change_url': change.url,
+                'change': getattr(change, 'number', ''),
+                'patchset': getattr(change, 'patchset', ''),
+                'commit_id': getattr(change, 'commit_id', ''),
+                'owner': getattr(change, 'owner', ''),
+                'ref': getattr(change, 'ref', ''),
+            }
+            for change in item.changes
+        ]
         message = {
             'timestamp': time.time(),
             'action': self._action,
             'tenant': item.pipeline.tenant.name,
             'zuul_ref': item.current_build_set.ref,
             'pipeline': item.pipeline.name,
-            'project': item.change.project.name,
-            'branch': getattr(item.change, 'branch', ''),
-            'change_url': item.change.url,
-            'change': getattr(item.change, 'number', ''),
-            'patchset': getattr(item.change, 'patchset', ''),
-            'commit_id': getattr(item.change, 'commit_id', ''),
-            'owner': getattr(item.change, 'owner', ''),
-            'ref': getattr(item.change, 'ref', ''),
+            'changes': changes,
+            'project': item.changes[0].project.name,
+            'branch': getattr(item.changes[0], 'branch', ''),
+            'change_url': item.changes[0].url,
+            'change': getattr(item.changes[0], 'number', ''),
+            'patchset': getattr(item.changes[0], 'patchset', ''),
+            'commit_id': getattr(item.changes[0], 'commit_id', ''),
+            'owner': getattr(item.changes[0], 'owner', ''),
+            'ref': getattr(item.changes[0], 'ref', ''),
             'message': self._formatItemReport(
                 item, with_jobs=False),
             'trigger_time': item.event.timestamp,
@@ -63,13 +78,26 @@ class MQTTReporter(BaseReporter):
         for job in item.getJobs():
             job_informations = {
                 'job_name': job.name,
+                'job_uuid': job.uuid,
                 'voting': job.voting,
             }
             build = item.current_build_set.getBuild(job)
             if build:
                 # Report build data if available
                 (result, web_url) = item.formatJobResult(job)
+                change = item.getChangeForJob(job)
+                change_info = {
+                    'project': change.project.name,
+                    'branch': getattr(change, 'branch', ''),
+                    'change_url': change.url,
+                    'change': getattr(change, 'number', ''),
+                    'patchset': getattr(change, 'patchset', ''),
+                    'commit_id': getattr(change, 'commit_id', ''),
+                    'owner': getattr(change, 'owner', ''),
+                    'ref': getattr(change, 'ref', ''),
+                }
                 job_informations.update({
+                    'change': change_info,
                     'uuid': build.uuid,
                     'start_time': build.start_time,
                     'end_time': build.end_time,
@@ -90,16 +118,17 @@ class MQTTReporter(BaseReporter):
             # Report build data of retried builds if available
             retry_builds = item.current_build_set.getRetryBuildsForJob(
                 job)
-            for build in retry_builds:
+            for retry_build in retry_builds:
                 (result, web_url) = item.formatJobResult(job, build)
                 retry_build_information = {
                     'job_name': job.name,
+                    'job_uuid': job.uuid,
                     'voting': job.voting,
-                    'uuid': build.uuid,
-                    'start_time': build.start_time,
-                    'end_time': build.end_time,
-                    'execute_time': build.execute_time,
-                    'log_url': build.log_url,
+                    'uuid': retry_build.uuid,
+                    'start_time': retry_build.start_time,
+                    'end_time': retry_build.end_time,
+                    'execute_time': retry_build.execute_time,
+                    'log_url': retry_build.log_url,
                     'web_url': web_url,
                     'result': result,
                 }
@@ -112,11 +141,12 @@ class MQTTReporter(BaseReporter):
            topic = self.config['topic'].format(
                 tenant=item.pipeline.tenant.name,
                 pipeline=item.pipeline.name,
-                project=item.change.project.name,
-                branch=getattr(item.change, 'branch', None),
-                change=getattr(item.change, 'number', None),
-                patchset=getattr(item.change, 'patchset', None),
-                ref=getattr(item.change, 'ref', None))
+                changes=changes,
+                project=item.changes[0].project.name,
+                branch=getattr(item.changes[0], 'branch', None),
+                change=getattr(item.changes[0], 'number', None),
+                patchset=getattr(item.changes[0], 'patchset', None),
+                ref=getattr(item.changes[0], 'ref', None))
         except Exception:
             log.exception("Error while formatting MQTT topic %s:",
                           self.config['topic'])
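The MQTT payload above gains a `changes` list covering every change in the queue item, while the legacy single-change keys are kept pointing at the first change for backward compatibility. A self-contained sketch of that payload shape (the `Change` class here is a hypothetical stand-in, not Zuul's model):

```python
# Sketch of the multi-change MQTT message shape: a 'changes' list plus
# legacy top-level keys mirroring the first change.

class Change:
    def __init__(self, project, number, ref):
        self.project = project
        self.number = number
        self.ref = ref


def build_payload(item_changes):
    changes = [
        {
            'project': c.project,
            'change': getattr(c, 'number', ''),
            'ref': getattr(c, 'ref', ''),
        }
        for c in item_changes
    ]
    return {
        'changes': changes,
        # Legacy keys mirror the first change for old consumers.
        'project': item_changes[0].project,
        'change': getattr(item_changes[0], 'number', ''),
        'ref': getattr(item_changes[0], 'ref', ''),
    }


payload = build_payload([Change('org/a', 1, 'refs/changes/1'),
                         Change('org/b', 2, 'refs/changes/2')])
```

Consumers that only understand a single change keep working off the top-level keys; cycle-aware consumers iterate `payload['changes']`.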
diff --git a/zuul/driver/pagure/pagurereporter.py b/zuul/driver/pagure/pagurereporter.py
@@ -1,4 +1,5 @@
 # Copyright 2018 Red Hat, Inc.
+# Copyright 2024 Acme Gating, LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain
@@ -36,33 +37,39 @@ class PagureReporter(BaseReporter):
 
     def report(self, item, phase1=True, phase2=True):
         """Report on an event."""
+        for change in item.changes:
+            self._reportChange(item, change, phase1, phase2)
+        return []
+
+    def _reportChange(self, item, change, phase1=True, phase2=True):
+        """Report on an event."""
 
         # If the source is not PagureSource we cannot report anything here.
-        if not isinstance(item.change.project.source, PagureSource):
+        if not isinstance(change.project.source, PagureSource):
             return
 
         # For supporting several Pagure connections we also must filter by
         # the canonical hostname.
-        if item.change.project.source.connection.canonical_hostname != \
+        if change.project.source.connection.canonical_hostname != \
                 self.connection.canonical_hostname:
             return
 
         if phase1:
             if self._commit_status is not None:
-                if (hasattr(item.change, 'patchset') and
-                        item.change.patchset is not None):
-                    self.setCommitStatus(item)
-                elif (hasattr(item.change, 'newrev') and
-                        item.change.newrev is not None):
-                    self.setCommitStatus(item)
-            if hasattr(item.change, 'number'):
+                if (hasattr(change, 'patchset') and
+                        change.patchset is not None):
+                    self.setCommitStatus(item, change)
+                elif (hasattr(change, 'newrev') and
+                        change.newrev is not None):
+                    self.setCommitStatus(item, change)
+            if hasattr(change, 'number'):
                 if self._create_comment:
-                    self.addPullComment(item)
+                    self.addPullComment(item, change)
         if phase2 and self._merge:
-            self.mergePull(item)
-            if not item.change.is_merged:
+            self.mergePull(item, change)
+            if not change.is_merged:
                 msg = self._formatItemReportMergeConflict(item)
-                self.addPullComment(item, msg)
+                self.addPullComment(item, change, msg)
 
     def _formatItemReportJobs(self, item):
         # Return the list of jobs portion of the report
@@ -75,23 +82,23 @@ class PagureReporter(BaseReporter):
             ret += 'Skipped %i %s\n' % (skipped, jobtext)
         return ret
 
-    def addPullComment(self, item, comment=None):
+    def addPullComment(self, item, change, comment=None):
         message = comment or self._formatItemReport(item)
-        project = item.change.project.name
-        pr_number = item.change.number
+        project = change.project.name
+        pr_number = change.number
         self.log.debug(
             'Reporting change %s, params %s, message: %s' %
-            (item.change, self.config, message))
+            (change, self.config, message))
         self.connection.commentPull(project, pr_number, message)
 
-    def setCommitStatus(self, item):
-        project = item.change.project.name
-        if hasattr(item.change, 'patchset'):
-            sha = item.change.patchset
-        elif hasattr(item.change, 'newrev'):
-            sha = item.change.newrev
+    def setCommitStatus(self, item, change):
+        project = change.project.name
+        if hasattr(change, 'patchset'):
+            sha = change.patchset
+        elif hasattr(change, 'newrev'):
+            sha = change.newrev
         state = self._commit_status
-        change_number = item.change.number
+        change_number = change.number
 
         url_pattern = self.config.get('status-url')
         sched_config = self.connection.sched.config
@ -106,30 +113,30 @@ class PagureReporter(BaseReporter):
|
||||||
self.log.debug(
|
self.log.debug(
|
||||||
'Reporting change %s, params %s, '
|
'Reporting change %s, params %s, '
|
||||||
'context: %s, state: %s, description: %s, url: %s' %
|
'context: %s, state: %s, description: %s, url: %s' %
|
||||||
(item.change, self.config,
|
(change, self.config,
|
||||||
self.context, state, description, url))
|
self.context, state, description, url))
|
||||||
|
|
||||||
self.connection.setCommitStatus(
|
self.connection.setCommitStatus(
|
||||||
project, change_number, state, url, description, self.context)
|
project, change_number, state, url, description, self.context)
|
||||||
|
|
||||||
def mergePull(self, item):
|
def mergePull(self, item, change):
|
||||||
project = item.change.project.name
|
project = change.project.name
|
||||||
pr_number = item.change.number
|
pr_number = change.number
|
||||||
|
|
||||||
for i in [1, 2]:
|
for i in [1, 2]:
|
||||||
try:
|
try:
|
||||||
self.connection.mergePull(project, pr_number)
|
self.connection.mergePull(project, pr_number)
|
||||||
item.change.is_merged = True
|
change.is_merged = True
|
||||||
return
|
return
|
||||||
except MergeFailure:
|
except MergeFailure:
|
||||||
self.log.exception(
|
self.log.exception(
|
||||||
'Merge attempt of change %s %s/2 failed.' %
|
'Merge attempt of change %s %s/2 failed.' %
|
||||||
(item.change, i), exc_info=True)
|
(change, i), exc_info=True)
|
||||||
if i == 1:
|
if i == 1:
|
||||||
time.sleep(2)
|
time.sleep(2)
|
||||||
self.log.warning(
|
self.log.warning(
|
||||||
'Merge of change %s failed after 2 attempts, giving up' %
|
'Merge of change %s failed after 2 attempts, giving up' %
|
||||||
item.change)
|
change)
|
||||||
|
|
||||||
def getSubmitAllowNeeds(self):
|
def getSubmitAllowNeeds(self):
|
||||||
return []
|
return []
|
||||||
|
|
|
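The `for i in [1, 2]` loop in `mergePull` above is a bounded-retry pattern: attempt the merge a fixed number of times, sleep between attempts, and only report failure after the last one. A minimal standalone sketch of the same idea, with hypothetical names (`merge_with_retry`, a stand-in `MergeFailure`):

```python
import time


class MergeFailure(Exception):
    """Stand-in for the driver's merge failure exception."""


def merge_with_retry(do_merge, attempts=2, delay=0):
    # Try the merge a fixed number of times; sleep between
    # attempts and give up only after the last one.
    for i in range(1, attempts + 1):
        try:
            do_merge()
            return True
        except MergeFailure:
            if i < attempts:
                time.sleep(delay)
    return False


calls = []


def flaky_merge():
    # Fails on the first call, succeeds on the second.
    calls.append(1)
    if len(calls) < 2:
        raise MergeFailure()


print(merge_with_retry(flaky_merge))  # -> True, after two attempts
```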
@@ -1,4 +1,5 @@
 # Copyright 2013 Rackspace Australia
+# Copyright 2024 Acme Gating, LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain

@@ -32,8 +33,8 @@ class SMTPReporter(BaseReporter):
         log = get_annotated_logger(self.log, item.event)
         message = self._formatItemReport(item)

-        log.debug("Report change %s, params %s, message: %s",
-                  item.change, self.config, message)
+        log.debug("Report %s, params %s, message: %s",
+                  item, self.config, message)

         from_email = self.config['from'] \
             if 'from' in self.config else None

@@ -42,13 +43,17 @@ class SMTPReporter(BaseReporter):

         if 'subject' in self.config:
             subject = self.config['subject'].format(
-                change=item.change, pipeline=item.pipeline.getSafeAttributes())
+                change=item.changes[0],
+                changes=item.changes,
+                pipeline=item.pipeline.getSafeAttributes())
         else:
-            subject = "Report for change {change} against {ref}".format(
-                change=item.change, ref=item.change.ref)
+            subject = "Report for changes {changes} against {ref}".format(
+                changes=' '.join([str(c) for c in item.changes]),
+                ref=' '.join([c.ref for c in item.changes]))

         self.connection.sendMail(subject, message, from_email, to_email,
                                  zuul_event_id=item.event)
+        return []


 def getSchema():
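Passing both `change=item.changes[0]` and `changes=item.changes` to the subject template is safe because `str.format` ignores keyword arguments the template does not reference: an existing operator template using `{change}` keeps working, while a new one can use `{changes}`. A quick illustration (template strings here are made up):

```python
# Old-style template keeps working even with the extra kwarg supplied.
old_template = "Report for change {change} in {pipeline}"
new_template = "Report for changes {changes} in {pipeline}"

kwargs = dict(change="1234,5", changes="1234,5 1235,1", pipeline="gate")

print(old_template.format(**kwargs))  # only {change} and {pipeline} are used
print(new_template.format(**kwargs))  # only {changes} and {pipeline} are used
```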
@@ -246,10 +246,13 @@ class DatabaseSession(object):
         # joinedload).
         q = self.session().query(self.connection.buildModel).\
             join(self.connection.buildSetModel).\
+            join(self.connection.refModel).\
             outerjoin(self.connection.providesModel).\
-            options(orm.contains_eager(self.connection.buildModel.buildset),
+            options(orm.contains_eager(self.connection.buildModel.buildset).
+                    subqueryload(self.connection.buildSetModel.refs),
                     orm.selectinload(self.connection.buildModel.provides),
-                    orm.selectinload(self.connection.buildModel.artifacts))
+                    orm.selectinload(self.connection.buildModel.artifacts),
+                    orm.selectinload(self.connection.buildModel.ref))

         q = self.listFilter(q, buildset_table.c.tenant, tenant)
         q = self.listFilter(q, build_table.c.uuid, uuid)

@@ -428,7 +431,9 @@ class DatabaseSession(object):
             options(orm.joinedload(self.connection.buildSetModel.builds).
                     subqueryload(self.connection.buildModel.artifacts)).\
             options(orm.joinedload(self.connection.buildSetModel.builds).
-                    subqueryload(self.connection.buildModel.provides))
+                    subqueryload(self.connection.buildModel.provides)).\
+            options(orm.joinedload(self.connection.buildSetModel.builds).
+                    subqueryload(self.connection.buildModel.ref))

         q = self.listFilter(q, buildset_table.c.tenant, tenant)
         q = self.listFilter(q, buildset_table.c.uuid, uuid)

@@ -799,6 +804,11 @@ class SQLConnection(BaseConnection):
         with self.getSession() as db:
             return db.getBuilds(*args, **kw)

+    def getBuild(self, *args, **kw):
+        """Return a Build object"""
+        with self.getSession() as db:
+            return db.getBuild(*args, **kw)
+
     def getBuildsets(self, *args, **kw):
         """Return a list of BuildSet objects"""
         with self.getSession() as db:
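The new `getBuild` wrapper has the same shape as the surrounding connection methods: open a session via a context manager and delegate the query to it. A toy sketch of that delegation pattern (the classes and dict-backed "rows" here are illustrative stand-ins, not the real SQLAlchemy session):

```python
from contextlib import contextmanager


class DatabaseSession:
    def __init__(self, rows):
        self.rows = rows

    def getBuild(self, uuid):
        # The real session would run a SQL query here.
        return self.rows.get(uuid)


class Database:
    def __init__(self, rows):
        self.rows = rows

    @contextmanager
    def getSession(self):
        # A real implementation would begin a transaction here...
        session = DatabaseSession(self.rows)
        try:
            yield session
        finally:
            pass  # ...and commit/close it here.

    def getBuild(self, uuid):
        # Thin wrapper: open a session, delegate, return the result.
        with self.getSession() as db:
            return db.getBuild(uuid)


db = Database({"abc123": {"result": "SUCCESS"}})
print(db.getBuild("abc123"))  # {'result': 'SUCCESS'}
```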
@@ -1,4 +1,5 @@
 # Copyright 2015 Rackspace Australia
+# Copyright 2024 Acme Gating, LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain

@@ -54,16 +55,6 @@ class SQLReporter(BaseReporter):
             event_timestamp = datetime.datetime.fromtimestamp(
                 item.event.timestamp, tz=datetime.timezone.utc)

-            ref = db.getOrCreateRef(
-                project=item.change.project.name,
-                change=getattr(item.change, 'number', None),
-                patchset=getattr(item.change, 'patchset', None),
-                ref_url=item.change.url,
-                ref=getattr(item.change, 'ref', ''),
-                oldrev=getattr(item.change, 'oldrev', ''),
-                newrev=getattr(item.change, 'newrev', ''),
-                branch=getattr(item.change, 'branch', ''),
-            )
             db_buildset = db.createBuildSet(
                 uuid=buildset.uuid,
                 tenant=item.pipeline.tenant.name,

@@ -72,7 +63,18 @@ class SQLReporter(BaseReporter):
                 event_timestamp=event_timestamp,
                 updated=datetime.datetime.utcnow(),
             )
-            db_buildset.refs.append(ref)
+            for change in item.changes:
+                ref = db.getOrCreateRef(
+                    project=change.project.name,
+                    change=getattr(change, 'number', None),
+                    patchset=getattr(change, 'patchset', None),
+                    ref_url=change.url,
+                    ref=getattr(change, 'ref', ''),
+                    oldrev=getattr(change, 'oldrev', ''),
+                    newrev=getattr(change, 'newrev', ''),
+                    branch=getattr(change, 'branch', ''),
+                )
+                db_buildset.refs.append(ref)
             return db_buildset

     def reportBuildsetStart(self, buildset):

@@ -200,15 +202,16 @@ class SQLReporter(BaseReporter):
             if db_buildset.first_build_start_time is None:
                 db_buildset.first_build_start_time = start
             item = buildset.item
+            change = item.getChangeForJob(build.job)
             ref = db.getOrCreateRef(
-                project=item.change.project.name,
-                change=getattr(item.change, 'number', None),
-                patchset=getattr(item.change, 'patchset', None),
-                ref_url=item.change.url,
-                ref=getattr(item.change, 'ref', ''),
-                oldrev=getattr(item.change, 'oldrev', ''),
-                newrev=getattr(item.change, 'newrev', ''),
-                branch=getattr(item.change, 'branch', ''),
+                project=change.project.name,
+                change=getattr(change, 'number', None),
+                patchset=getattr(change, 'patchset', None),
+                ref_url=change.url,
+                ref=getattr(change, 'ref', ''),
+                oldrev=getattr(change, 'oldrev', ''),
+                newrev=getattr(change, 'newrev', ''),
+                branch=getattr(change, 'branch', ''),
             )

             db_build = db_buildset.createBuild(
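The `getattr(change, 'number', None)` calls exist because an item's changes may be review changes (with `number` and `patchset`) or plain refs such as tags and branch tips (with only `ref`, `oldrev`, `newrev`); `getattr` with a default lets one code path build the ref row for both shapes. A small sketch with hypothetical stand-in classes:

```python
class Change:
    """A review change: has a number and patchset, no static ref."""
    def __init__(self, number, patchset):
        self.number = number
        self.patchset = patchset


class Tag:
    """A plain ref: has a ref name, no number or patchset."""
    def __init__(self, ref):
        self.ref = ref


def ref_row(obj):
    # One code path handles both shapes via getattr defaults.
    return dict(
        change=getattr(obj, 'number', None),
        patchset=getattr(obj, 'patchset', None),
        ref=getattr(obj, 'ref', ''),
    )


print(ref_row(Change(1234, '5')))   # number/patchset set, ref defaulted
print(ref_row(Tag('refs/tags/1.0')))  # ref set, number/patchset defaulted
```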
@@ -56,9 +56,9 @@ class ExecutorClient(object):
         tracer = trace.get_tracer("zuul")
         uuid = str(uuid4().hex)
         log.info(
-            "Execute job %s (uuid: %s) on nodes %s for change %s "
+            "Execute job %s (uuid: %s) on nodes %s for %s "
             "with dependent changes %s",
-            job, uuid, nodes, item.change, dependent_changes)
+            job, uuid, nodes, item, dependent_changes)

         params = zuul.executor.common.construct_build_params(
             uuid, self.sched.connections,

@@ -93,7 +93,7 @@ class ExecutorClient(object):
         if job.name == 'noop':
             data = {"start_time": time.time()}
             started_event = BuildStartedEvent(
-                build.uuid, build.build_set.uuid, job.name, job._job_id,
+                build.uuid, build.build_set.uuid, job.uuid,
                 None, data, zuul_event_id=build.zuul_event_id)
             self.result_events[pipeline.tenant.name][pipeline.name].put(
                 started_event

@@ -101,7 +101,7 @@ class ExecutorClient(object):

             result = {"result": "SUCCESS", "end_time": time.time()}
             completed_event = BuildCompletedEvent(
-                build.uuid, build.build_set.uuid, job.name, job._job_id,
+                build.uuid, build.build_set.uuid, job.uuid,
                 None, result, zuul_event_id=build.zuul_event_id)
             self.result_events[pipeline.tenant.name][pipeline.name].put(
                 completed_event

@@ -134,7 +134,7 @@ class ExecutorClient(object):
                 f"{req_id}")
             data = {"start_time": time.time()}
             started_event = BuildStartedEvent(
-                build.uuid, build.build_set.uuid, job.name, job._job_id,
+                build.uuid, build.build_set.uuid, job.uuid,
                 None, data, zuul_event_id=build.zuul_event_id)
             self.result_events[pipeline.tenant.name][pipeline.name].put(
                 started_event

@@ -142,7 +142,7 @@ class ExecutorClient(object):

             result = {"result": None, "end_time": time.time()}
             completed_event = BuildCompletedEvent(
-                build.uuid, build.build_set.uuid, job.name, job._job_id,
+                build.uuid, build.build_set.uuid, job.uuid,
                 None, result, zuul_event_id=build.zuul_event_id)
             self.result_events[pipeline.tenant.name][pipeline.name].put(
                 completed_event

@@ -173,8 +173,7 @@ class ExecutorClient(object):
         request = BuildRequest(
             uuid=uuid,
             build_set_uuid=build.build_set.uuid,
-            job_name=job.name,
-            job_uuid=job._job_id,
+            job_uuid=job.uuid,
             tenant_name=build.build_set.item.pipeline.tenant.name,
             pipeline_name=build.build_set.item.pipeline.name,
             zone=executor_zone,

@@ -225,7 +224,7 @@ class ExecutorClient(object):
             pipeline_name = build.build_set.item.pipeline.name
             event = BuildCompletedEvent(
                 build_request.uuid, build_request.build_set_uuid,
-                build_request.job_name, build_request.job_uuid,
+                build_request.job_uuid,
                 build_request.path, result)
             self.result_events[tenant_name][pipeline_name].put(event)
         finally:

@@ -312,7 +311,7 @@ class ExecutorClient(object):

         event = BuildCompletedEvent(
             build_request.uuid, build_request.build_set_uuid,
-            build_request.job_name, build_request.job_uuid,
+            build_request.job_uuid,
             build_request.path, result)
         self.result_events[build_request.tenant_name][
             build_request.pipeline_name].put(event)
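Dropping `job_name` from the build events follows from the commit message: with multi-change queue items, one buildset can run two frozen jobs with the same name (one per change in a cycle), so only the frozen job's uuid is a safe key. A sketch of why name-keyed lookup breaks (the `FrozenJob` stand-in is hypothetical):

```python
import uuid


class FrozenJob:
    def __init__(self, name):
        self.name = name
        self.uuid = uuid.uuid4().hex


# Two changes in one circular-dependency item may each run "tox".
jobs = [FrozenJob("tox"), FrozenJob("tox")]

by_name = {j.name: j for j in jobs}
by_uuid = {j.uuid: j for j in jobs}

print(len(by_name))  # 1 -- the second "tox" clobbered the first
print(len(by_uuid))  # 2 -- uuids stay distinct
```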
@@ -30,22 +30,23 @@ def construct_build_params(uuid, connections, job, item, pipeline,
     environment - for example, a local runner.
     """
     tenant = pipeline.tenant
+    change = item.getChangeForJob(job)
     project = dict(
-        name=item.change.project.name,
-        short_name=item.change.project.name.split('/')[-1],
-        canonical_hostname=item.change.project.canonical_hostname,
-        canonical_name=item.change.project.canonical_name,
+        name=change.project.name,
+        short_name=change.project.name.split('/')[-1],
+        canonical_hostname=change.project.canonical_hostname,
+        canonical_name=change.project.canonical_name,
         src_dir=os.path.join('src',
                              strings.workspace_project_path(
-                                 item.change.project.canonical_hostname,
-                                 item.change.project.name,
+                                 change.project.canonical_hostname,
+                                 change.project.name,
                                  job.workspace_scheme)),
     )

     zuul_params = dict(
         build=uuid,
         buildset=item.current_build_set.uuid,
-        ref=item.change.ref,
+        ref=change.ref,
         pipeline=pipeline.name,
         post_review=pipeline.post_review,
         job=job.name,

@@ -54,30 +55,30 @@ def construct_build_params(uuid, connections, job, item, pipeline,
         event_id=item.event.zuul_event_id if item.event else None,
         jobtags=sorted(job.tags),
     )
-    if hasattr(item.change, 'branch'):
-        zuul_params['branch'] = item.change.branch
-    if hasattr(item.change, 'tag'):
-        zuul_params['tag'] = item.change.tag
-    if hasattr(item.change, 'number'):
-        zuul_params['change'] = str(item.change.number)
-    if hasattr(item.change, 'url'):
-        zuul_params['change_url'] = item.change.url
-    if hasattr(item.change, 'patchset'):
-        zuul_params['patchset'] = str(item.change.patchset)
-    if hasattr(item.change, 'message'):
-        zuul_params['message'] = strings.b64encode(item.change.message)
-        zuul_params['change_message'] = item.change.message
+    if hasattr(change, 'branch'):
+        zuul_params['branch'] = change.branch
+    if hasattr(change, 'tag'):
+        zuul_params['tag'] = change.tag
+    if hasattr(change, 'number'):
+        zuul_params['change'] = str(change.number)
+    if hasattr(change, 'url'):
+        zuul_params['change_url'] = change.url
+    if hasattr(change, 'patchset'):
+        zuul_params['patchset'] = str(change.patchset)
+    if hasattr(change, 'message'):
+        zuul_params['message'] = strings.b64encode(change.message)
+        zuul_params['change_message'] = change.message
     commit_id = None
-    if (hasattr(item.change, 'oldrev') and item.change.oldrev
-            and item.change.oldrev != '0' * 40):
-        zuul_params['oldrev'] = item.change.oldrev
-        commit_id = item.change.oldrev
-    if (hasattr(item.change, 'newrev') and item.change.newrev
-            and item.change.newrev != '0' * 40):
-        zuul_params['newrev'] = item.change.newrev
-        commit_id = item.change.newrev
-    if hasattr(item.change, 'commit_id'):
-        commit_id = item.change.commit_id
+    if (hasattr(change, 'oldrev') and change.oldrev
+            and change.oldrev != '0' * 40):
+        zuul_params['oldrev'] = change.oldrev
+        commit_id = change.oldrev
+    if (hasattr(change, 'newrev') and change.newrev
+            and change.newrev != '0' * 40):
+        zuul_params['newrev'] = change.newrev
+        commit_id = change.newrev
+    if hasattr(change, 'commit_id'):
+        commit_id = change.commit_id
     if commit_id:
         zuul_params['commit_id'] = commit_id

@@ -101,8 +102,8 @@ def construct_build_params(uuid, connections, job, item, pipeline,
     params['job_ref'] = job.getPath()
     params['items'] = merger_items
     params['projects'] = []
-    if hasattr(item.change, 'branch'):
-        params['branch'] = item.change.branch
+    if hasattr(change, 'branch'):
+        params['branch'] = change.branch
     else:
         params['branch'] = None
     merge_rs = item.current_build_set.merge_repo_state

@@ -116,8 +117,8 @@ def construct_build_params(uuid, connections, job, item, pipeline,
             params['ssh_keys'].append("REDACTED")
         else:
             params['ssh_keys'].append(dict(
-                connection_name=item.change.project.connection_name,
-                project_name=item.change.project.name))
+                connection_name=change.project.connection_name,
+                project_name=change.project.name))
     params['zuul'] = zuul_params
     projects = set()
     required_projects = set()
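The `'0' * 40` checks above guard against git's null SHA, which appears as the old revision of a newly created ref (or the new revision of a deleted one) and must not be advertised as a commit id. A condensed sketch of the selection order the code applies (oldrev, then newrev, then an explicit `commit_id` attribute); `pick_commit_id` and `Ref` are illustrative names, not the real API:

```python
NULL_SHA = '0' * 40


def pick_commit_id(change):
    # Later assignments win: newrev over oldrev, and an explicit
    # commit_id attribute over both.
    commit_id = None
    oldrev = getattr(change, 'oldrev', None)
    newrev = getattr(change, 'newrev', None)
    if oldrev and oldrev != NULL_SHA:
        commit_id = oldrev
    if newrev and newrev != NULL_SHA:
        commit_id = newrev
    if hasattr(change, 'commit_id'):
        commit_id = change.commit_id
    return commit_id


class Ref:
    pass


r = Ref()
r.oldrev = NULL_SHA   # branch creation: no meaningful old revision
r.newrev = 'a' * 40
print(pick_commit_id(r))  # the null old rev is ignored, newrev wins
```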
@@ -4196,7 +4196,7 @@ class ExecutorServer(BaseMergeServer):

         event = BuildStartedEvent(
             build_request.uuid, build_request.build_set_uuid,
-            build_request.job_name, build_request.job_uuid,
+            build_request.job_uuid,
             build_request.path, data, build_request.event_id)
         self.result_events[build_request.tenant_name][
             build_request.pipeline_name].put(event)

@@ -4204,7 +4204,7 @@ class ExecutorServer(BaseMergeServer):
     def updateBuildStatus(self, build_request, data):
         event = BuildStatusEvent(
             build_request.uuid, build_request.build_set_uuid,
-            build_request.job_name, build_request.job_uuid,
+            build_request.job_uuid,
             build_request.path, data, build_request.event_id)
         self.result_events[build_request.tenant_name][
             build_request.pipeline_name].put(event)

@@ -4219,7 +4219,7 @@ class ExecutorServer(BaseMergeServer):

         event = BuildPausedEvent(
             build_request.uuid, build_request.build_set_uuid,
-            build_request.job_name, build_request.job_uuid,
+            build_request.job_uuid,
             build_request.path, data, build_request.event_id)
         self.result_events[build_request.tenant_name][
             build_request.pipeline_name].put(event)

@@ -4286,7 +4286,7 @@ class ExecutorServer(BaseMergeServer):
         updater = self.executor_api.getRequestUpdater(build_request)
         event = BuildCompletedEvent(
             build_request.uuid, build_request.build_set_uuid,
-            build_request.job_name, build_request.job_uuid,
+            build_request.job_uuid,
             build_request.path, result, build_request.event_id)
         build_request.state = BuildRequest.COMPLETED
         updated = False
(file diff suppressed because it is too large)
@@ -1,3 +1,5 @@
+# Copyright 2024 Acme Gating, LLC
+#
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain
 # a copy of the License at

@@ -43,35 +45,34 @@ class DependentPipelineManager(SharedQueuePipelineManager):
             window_decrease_factor=p.window_decrease_factor,
             name=queue_name)

-    def getNodePriority(self, item):
-        with self.getChangeQueue(item.change, item.event) as change_queue:
-            items = change_queue.queue
-            return items.index(item)
+    def getNodePriority(self, item, change):
+        return item.queue.queue.index(item)

-    def isChangeReadyToBeEnqueued(self, change, event):
+    def areChangesReadyToBeEnqueued(self, changes, event):
         log = get_annotated_logger(self.log, event)
-        source = change.project.source
-        if not source.canMerge(change, self.getSubmitAllowNeeds(),
-                               event=event):
-            log.debug("Change %s can not merge", change)
-            return False
+        for change in changes:
+            source = change.project.source
+            if not source.canMerge(change, self.getSubmitAllowNeeds(),
+                                   event=event):
+                log.debug("Change %s can not merge", change)
+                return False
         return True

-    def getNonMergeableCycleChanges(self, bundle):
+    def getNonMergeableCycleChanges(self, item):
         """Return changes in the cycle that do not fulfill
         the pipeline's ready criteria."""
         changes = []
-        for item in bundle.items:
-            source = item.change.project.source
+        for change in item.changes:
+            source = change.project.source
             if not source.canMerge(
-                item.change,
+                change,
                 self.getSubmitAllowNeeds(),
                 event=item.event,
                 allow_refresh=True,
             ):
                 log = get_annotated_logger(self.log, item.event)
-                log.debug("Change %s can no longer be merged", item.change)
-                changes.append(item.change)
+                log.debug("Change %s can no longer be merged", change)
+                changes.append(change)
         return changes

     def enqueueChangesBehind(self, change, event, quiet, ignore_requirements,

@@ -142,29 +143,26 @@ class DependentPipelineManager(SharedQueuePipelineManager):
             change_queue=change_queue, history=history,
             dependency_graph=dependency_graph)

-    def enqueueChangesAhead(self, change, event, quiet, ignore_requirements,
+    def enqueueChangesAhead(self, changes, event, quiet, ignore_requirements,
                             change_queue, history=None, dependency_graph=None,
                             warnings=None):
         log = get_annotated_logger(self.log, event)

         history = history if history is not None else []
-        if hasattr(change, 'number'):
-            history.append(change)
-        else:
-            # Don't enqueue dependencies ahead of a non-change ref.
-            return True
+        for change in changes:
+            if hasattr(change, 'number'):
+                history.append(change)
+            else:
+                # Don't enqueue dependencies ahead of a non-change ref.
+                return True

         abort, needed_changes = self.getMissingNeededChanges(
-            change, change_queue, event,
+            changes, change_queue, event,
             dependency_graph=dependency_graph,
             warnings=warnings)
         if abort:
             return False

-        # Treat cycle dependencies as needed for the current change
-        needed_changes.extend(
-            self.getCycleDependencies(change, dependency_graph, event))
-
         if not needed_changes:
             return True
         log.debug("  Changes %s must be merged ahead of %s",
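With the dependency graph built up front, a needed change that is itself part of the item's cycle no longer has to be enqueued ahead; it travels in the same multi-change item. A toy sketch of that membership test over a dict-based graph (the data and the `missing_needed` helper are hypothetical, not the manager's API):

```python
# Dependency graph: change -> list of changes it needs,
# mirroring the dict-keyed graph the manager consults.
graph = {
    'A': ['B'],
    'B': ['A', 'C'],  # A and B form a cycle; C is a linear dependency
}


def missing_needed(changes, graph):
    # Collect dependencies that must be enqueued ahead of this item;
    # in-cycle dependencies are skipped because they are part of the
    # same multi-change item.
    needed = []
    for change in changes:
        for dep in graph.get(change, []):
            if dep in changes:
                continue  # needed change is in the cycle
            needed.append(dep)
    return needed


print(missing_needed(['A', 'B'], graph))  # only the linear dependency C
```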
@@ -183,107 +181,93 @@ class DependentPipelineManager(SharedQueuePipelineManager):
                 return False
         return True

-    def getMissingNeededChanges(self, change, change_queue, event,
+    def getMissingNeededChanges(self, changes, change_queue, event,
                                 dependency_graph=None, warnings=None):
         log = get_annotated_logger(self.log, event)
+        changes_needed = []
+        abort = False

         # Return true if okay to proceed enqueing this change,
         # false if the change should not be enqueued.
-        log.debug("Checking for changes needed by %s:" % change)
-        if not isinstance(change, model.Change):
-            log.debug("  %s does not support dependencies", type(change))
-            return False, []
-        if not change.getNeedsChanges(
-                self.useDependenciesByTopic(change.project)):
-            log.debug("  No changes needed")
-            return False, []
-        changes_needed = []
-        abort = False
-        # Ignore supplied change_queue
-        with self.getChangeQueue(change, event) as change_queue:
-            for needed_change in self.resolveChangeReferences(
-                    change.getNeedsChanges(
-                        self.useDependenciesByTopic(change.project))):
-                log.debug("  Change %s needs change %s:" % (
-                    change, needed_change))
-                if needed_change.is_merged:
-                    log.debug("  Needed change is merged")
-                    continue
-
-                if dependency_graph is not None:
-                    log.debug("  Adding change %s to dependency graph for "
-                              "change %s", needed_change, change)
-                    node = dependency_graph.setdefault(change, [])
-                    node.append(needed_change)
-
-                if (self.pipeline.tenant.max_dependencies is not None and
-                        dependency_graph is not None and
-                        (len(dependency_graph) >
-                         self.pipeline.tenant.max_dependencies)):
-                    log.debug("  Dependency graph for change %s is too large",
-                              change)
-                    return True, []
-
-                with self.getChangeQueue(needed_change,
-                                         event) as needed_change_queue:
-                    if needed_change_queue != change_queue:
-                        msg = ("Change %s in project %s does not "
-                               "share a change queue with %s "
-                               "in project %s" %
-                               (needed_change.number,
-                                needed_change.project,
-                                change.number,
-                                change.project))
-                        log.debug("  " + msg)
-                        if warnings is not None:
-                            warnings.append(msg)
-                        changes_needed.append(needed_change)
-                        abort = True
-                if not needed_change.is_current_patchset:
-                    log.debug("  Needed change is not the current patchset")
-                    changes_needed.append(needed_change)
-                    abort = True
-                if self.isChangeAlreadyInQueue(needed_change, change_queue):
-                    log.debug("  Needed change is already ahead in the queue")
-                    continue
-                if needed_change.project.source.canMerge(
-                        needed_change, self.getSubmitAllowNeeds(),
-                        event=event):
-                    log.debug("  Change %s is needed", needed_change)
-                    if needed_change not in changes_needed:
-                        changes_needed.append(needed_change)
-                        continue
-                # The needed change can't be merged.
-                log.debug("  Change %s is needed but can not be merged",
-                          needed_change)
-                changes_needed.append(needed_change)
-                abort = True
+        for change in changes:
+            log.debug("Checking for changes needed by %s:" % change)
+            if not isinstance(change, model.Change):
+                log.debug("  %s does not support dependencies", type(change))
+                continue
+            needed_changes = dependency_graph.get(change)
+            if not needed_changes:
+                log.debug("  No changes needed")
+                continue
+            # Ignore supplied change_queue
+            with self.getChangeQueue(change, event) as change_queue:
+                for needed_change in needed_changes:
+                    log.debug("  Change %s needs change %s:" % (
+                        change, needed_change))
+                    if needed_change.is_merged:
+                        log.debug("  Needed change is merged")
+                        continue
+                    with self.getChangeQueue(needed_change,
+                                             event) as needed_change_queue:
+                        if needed_change_queue != change_queue:
+                            msg = ("Change %s in project %s does not "
+                                   "share a change queue with %s "
+                                   "in project %s" %
+                                   (needed_change.number,
+                                    needed_change.project,
+                                    change.number,
+                                    change.project))
+                            log.debug("  " + msg)
+                            if warnings is not None:
+                                warnings.append(msg)
+                            changes_needed.append(needed_change)
+                            abort = True
+                    if not needed_change.is_current_patchset:
+                        log.debug("  Needed change is not "
+                                  "the current patchset")
+                        changes_needed.append(needed_change)
+                        abort = True
+                    if needed_change in changes:
+                        log.debug("  Needed change is in cycle")
+                        continue
+                    if self.isChangeAlreadyInQueue(
+                            needed_change, change_queue):
+                        log.debug("  Needed change is already "
+                                  "ahead in the queue")
+                        continue
+                    if needed_change.project.source.canMerge(
+                            needed_change, self.getSubmitAllowNeeds(),
+                            event=event):
+                        log.debug("  Change %s is needed", needed_change)
+                        if needed_change not in changes_needed:
+                            changes_needed.append(needed_change)
+                            continue
+                    else:
+                        # The needed change can't be merged.
+                        log.debug("  Change %s is needed "
                                  "but can not be merged",
+                                  needed_change)
+                        changes_needed.append(needed_change)
+                        abort = True
         return abort, changes_needed

-    def getFailingDependentItems(self, item, nnfi):
-        if not isinstance(item.change, model.Change):
-            return None
-        if not item.change.getNeedsChanges(
-                self.useDependenciesByTopic(item.change.project)):
-            return None
+    def getFailingDependentItems(self, item):
         failing_items = set()
-        for needed_change in self.resolveChangeReferences(
-                item.change.getNeedsChanges(
-                    self.useDependenciesByTopic(item.change.project))):
-            needed_item = self.getItemForChange(needed_change)
-            if not needed_item:
-                continue
-            if needed_item.current_build_set.failing_reasons:
-                failing_items.add(needed_item)
-        # Only look at the bundle if the item ahead is the nearest non-failing
-        # item. This is important in order to correctly reset the bundle items
-        # in case of a failure.
-        if item.item_ahead == nnfi and item.isBundleFailing():
+        for change in item.changes:
+            if not isinstance(change, model.Change):
+                continue
+            needs_changes = change.getNeedsChanges(
+                self.useDependenciesByTopic(change.project))
+            if not needs_changes:
+                continue
+            for needed_change in self.resolveChangeReferences(needs_changes):
|
needed_item = self.getItemForChange(needed_change)
|
||||||
failing_items.update(item.bundle.items)
|
if not needed_item:
|
||||||
failing_items.remove(item)
|
continue
|
||||||
if failing_items:
|
if needed_item is item:
|
||||||
return failing_items
|
continue
|
||||||
return None
|
if needed_item.current_build_set.failing_reasons:
|
||||||
|
failing_items.add(needed_item)
|
||||||
|
return failing_items
|
||||||
|
|
||||||
def dequeueItem(self, item, quiet=False):
|
def dequeueItem(self, item, quiet=False):
|
||||||
super(DependentPipelineManager, self).dequeueItem(item, quiet)
|
super(DependentPipelineManager, self).dequeueItem(item, quiet)
|
||||||
|
|
|
@@ -1,3 +1,5 @@
+# Copyright 2021-2024 Acme Gating, LLC
+#
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain
 # a copy of the License at

@@ -37,28 +39,25 @@ class IndependentPipelineManager(PipelineManager):
         log.debug("Dynamically created queue %s", change_queue)
         return DynamicChangeQueueContextManager(change_queue)
 
-    def enqueueChangesAhead(self, change, event, quiet, ignore_requirements,
+    def enqueueChangesAhead(self, changes, event, quiet, ignore_requirements,
                             change_queue, history=None, dependency_graph=None,
                             warnings=None):
         log = get_annotated_logger(self.log, event)
 
         history = history if history is not None else []
-        if hasattr(change, 'number'):
-            history.append(change)
-        else:
-            # Don't enqueue dependencies ahead of a non-change ref.
-            return True
+        for change in changes:
+            if hasattr(change, 'number'):
+                history.append(change)
+            else:
+                # Don't enqueue dependencies ahead of a non-change ref.
+                return True
 
         abort, needed_changes = self.getMissingNeededChanges(
-            change, change_queue, event,
+            changes, change_queue, event,
            dependency_graph=dependency_graph)
         if abort:
             return False
 
-        # Treat cycle dependencies as needed for the current change
-        needed_changes.extend(
-            self.getCycleDependencies(change, dependency_graph, event))
-
         if not needed_changes:
             return True
         log.debug("  Changes %s must be merged ahead of %s" % (

@@ -80,55 +79,43 @@ class IndependentPipelineManager(PipelineManager):
             return False
         return True
 
-    def getMissingNeededChanges(self, change, change_queue, event,
+    def getMissingNeededChanges(self, changes, change_queue, event,
                                 dependency_graph=None):
         log = get_annotated_logger(self.log, event)
 
         if self.pipeline.ignore_dependencies:
             return False, []
-        log.debug("Checking for changes needed by %s:" % change)
-        # Return true if okay to proceed enqueing this change,
-        # false if the change should not be enqueued.
-        if not isinstance(change, model.Change):
-            log.debug("  %s does not support dependencies" % type(change))
-            return False, []
-        if not change.getNeedsChanges(
-                self.useDependenciesByTopic(change.project)):
-            log.debug("  No changes needed")
-            return False, []
         changes_needed = []
         abort = False
-        for needed_change in self.resolveChangeReferences(
-                change.getNeedsChanges(
-                    self.useDependenciesByTopic(change.project))):
-            log.debug("  Change %s needs change %s:" % (
-                change, needed_change))
-            if needed_change.is_merged:
-                log.debug("  Needed change is merged")
-                continue
-
-            if dependency_graph is not None:
-                log.debug("  Adding change %s to dependency graph for "
-                          "change %s", needed_change, change)
-                node = dependency_graph.setdefault(change, [])
-                node.append(needed_change)
-
-            if (self.pipeline.tenant.max_dependencies is not None and
-                    dependency_graph is not None and
-                    len(dependency_graph) >
-                    self.pipeline.tenant.max_dependencies):
-                log.debug("  Dependency graph for change %s is too large",
-                          change)
-                return True, []
-
-            if self.isChangeAlreadyInQueue(needed_change, change_queue):
-                log.debug("  Needed change is already ahead in the queue")
-                continue
-            log.debug("  Change %s is needed" % needed_change)
-            if needed_change not in changes_needed:
-                changes_needed.append(needed_change)
-                continue
-            # This differs from the dependent pipeline check in not
-            # verifying that the dependent change is mergable.
+        for change in changes:
+            log.debug("Checking for changes needed by %s:" % change)
+            # Return true if okay to proceed enqueing this change,
+            # false if the change should not be enqueued.
+            if not isinstance(change, model.Change):
+                log.debug("  %s does not support dependencies" % type(change))
+                continue
+            needed_changes = dependency_graph.get(change)
+            if not needed_changes:
+                log.debug("  No changes needed")
+                continue
+            for needed_change in needed_changes:
+                log.debug("  Change %s needs change %s:" % (
+                    change, needed_change))
+                if needed_change.is_merged:
+                    log.debug("  Needed change is merged")
+                    continue
+                if needed_change in changes:
+                    log.debug("  Needed change is in cycle")
+                    continue
+                if self.isChangeAlreadyInQueue(needed_change, change_queue):
+                    log.debug("  Needed change is already ahead in the queue")
+                    continue
+                log.debug("  Change %s is needed" % needed_change)
+                if needed_change not in changes_needed:
+                    changes_needed.append(needed_change)
+                    continue
+                # This differs from the dependent pipeline check in not
+                # verifying that the dependent change is mergable.
         return abort, changes_needed
 
     def dequeueItem(self, item, quiet=False):
@@ -1,3 +1,5 @@
+# Copyright 2021, 2023-2024 Acme Gating, LLC
+#
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain
 # a copy of the License at

@@ -32,11 +34,11 @@ class SupercedentPipelineManager(PipelineManager):
         # Don't use Pipeline.getQueue to find an existing queue
         # because we're matching project and (branch or ref).
         for queue in self.pipeline.queues:
-            if (queue.queue[-1].change.project == change.project and
+            if (queue.queue[-1].changes[0].project == change.project and
                 ((hasattr(change, 'branch') and
-                  hasattr(queue.queue[-1].change, 'branch') and
-                  queue.queue[-1].change.branch == change.branch) or
-                 queue.queue[-1].change.ref == change.ref)):
+                  hasattr(queue.queue[-1].changes[0], 'branch') and
+                  queue.queue[-1].changes[0].branch == change.branch) or
+                 queue.queue[-1].changes[0].ref == change.ref)):
                 log.debug("Found existing queue %s", queue)
                 return DynamicChangeQueueContextManager(queue)
         change_queue = model.ChangeQueue.new(

@@ -66,6 +68,13 @@ class SupercedentPipelineManager(PipelineManager):
                           (item, queue.queue[-1]))
                 self.removeItem(item)
 
+    def cycleForChange(self, *args, **kw):
+        ret = super().cycleForChange(*args, **kw)
+        if len(ret) > 1:
+            raise Exception("Dependency cycles not supported "
+                            "in supercedent pipelines")
+        return ret
+
     def addChange(self, *args, **kw):
         ret = super(SupercedentPipelineManager, self).addChange(
             *args, **kw)
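The supercedent guard added above reduces to a small length check on the cycle returned for a change. A minimal sketch, with `check_supercedent_cycle` and the string change names as illustrative stand-ins rather than Zuul's actual API:

```python
# Illustrative sketch of the guard above: supercedent pipelines handle
# one change per ref, so any dependency cycle containing more than one
# change is rejected outright.
def check_supercedent_cycle(cycle):
    if len(cycle) > 1:
        raise Exception("Dependency cycles not supported "
                        "in supercedent pipelines")
    return cycle

# A single change passes through unchanged; a two-change cycle raises.
single = check_supercedent_cycle(["change-A"])
try:
    check_supercedent_cycle(["change-A", "change-B"])
    rejected = False
except Exception:
    rejected = True
```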
@@ -1,4 +1,5 @@
 # Copyright 2014 OpenStack Foundation
+# Copyright 2021-2022, 2024 Acme Gating, LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain

@@ -138,13 +139,43 @@ class MergeClient(object):
         )
         return job
 
-    def getFilesChanges(self, connection_name, project_name, branch,
-                        tosha=None, precedence=PRECEDENCE_HIGH,
-                        build_set=None, needs_result=False, event=None):
-        data = dict(connection=connection_name,
-                    project=project_name,
-                    branch=branch,
-                    tosha=tosha)
+    def getFilesChanges(self, changes, precedence=PRECEDENCE_HIGH,
+                        build_set=None, needs_result=False,
+                        event=None):
+        changes_data = []
+        for change in changes:
+            # if base_sha is not available, fallback to branch
+            tosha = getattr(change, "base_sha", None)
+            if tosha is None:
+                tosha = getattr(change, "branch", None)
+            changes_data.append(dict(
+                connection=change.project.connection_name,
+                project=change.project.name,
+                branch=change.ref,
+                tosha=tosha,
+            ))
+        data = dict(changes=changes_data)
+        job = self.submitJob(
+            MergeRequest.FILES_CHANGES,
+            data,
+            build_set,
+            precedence,
+            needs_result=needs_result,
+            event=event,
+        )
+        return job
+
+    def getFilesChangesRaw(self, connection_name, project_name, branch, tosha,
+                           precedence=PRECEDENCE_HIGH,
+                           build_set=None, needs_result=False,
+                           event=None):
+        changes_data = [dict(
+            connection=connection_name,
+            project=project_name,
+            branch=branch,
+            tosha=tosha,
+        )]
+        data = dict(changes=changes_data)
         job = self.submitJob(
             MergeRequest.FILES_CHANGES,
             data,
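The reworked `getFilesChanges` above now submits one merge job for a whole list of changes, building one payload entry per change and preferring `base_sha` over `branch` for `tosha`. A minimal sketch of that payload construction, with `SimpleNamespace` stand-ins in place of Zuul's change and project model objects:

```python
# Sketch of the request payload the new getFilesChanges builds: one dict
# per change. The SimpleNamespace objects below are illustrative
# stand-ins, not Zuul model classes.
from types import SimpleNamespace

def build_files_changes_data(changes):
    changes_data = []
    for change in changes:
        # If base_sha is not available, fall back to the branch.
        tosha = getattr(change, "base_sha", None)
        if tosha is None:
            tosha = getattr(change, "branch", None)
        changes_data.append(dict(
            connection=change.project.connection_name,
            project=change.project.name,
            branch=change.ref,
            tosha=tosha,
        ))
    return dict(changes=changes_data)

project = SimpleNamespace(connection_name="gerrit", name="org/project")
change = SimpleNamespace(project=project, ref="refs/changes/01/1/1",
                         branch="master")
payload = build_files_changes_data([change])
```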
@@ -1,5 +1,5 @@
 # Copyright 2014 OpenStack Foundation
-# Copyright 2021-2022 Acme Gating, LLC
+# Copyright 2021-2022, 2024 Acme Gating, LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain

@@ -334,25 +334,36 @@ class BaseMergeServer(metaclass=ABCMeta):
         self.log.debug("Got fileschanges job: %s", merge_request.uuid)
         zuul_event_id = merge_request.event_id
 
-        connection_name = args['connection']
-        project_name = args['project']
-
-        lock = self.repo_locks.getRepoLock(connection_name, project_name)
-        try:
-            self._update(connection_name, project_name,
-                         zuul_event_id=zuul_event_id)
-            with lock:
-                files = self.merger.getFilesChanges(
-                    connection_name, project_name,
-                    args['branch'], args['tosha'],
-                    zuul_event_id=zuul_event_id)
-        except Exception:
-            result = dict(update=False)
+        # MODEL_API < 26:
+        changes = args.get('changes')
+        old_format = False
+        if changes is None:
+            changes = [args]
+            old_format = True
+
+        results = []
+        for change in changes:
+            connection_name = change['connection']
+            project_name = change['project']
+
+            lock = self.repo_locks.getRepoLock(connection_name, project_name)
+            try:
+                self._update(connection_name, project_name,
+                             zuul_event_id=zuul_event_id)
+                with lock:
+                    files = self.merger.getFilesChanges(
+                        connection_name, project_name,
+                        change['branch'], change['tosha'],
+                        zuul_event_id=zuul_event_id)
+                results.append(files)
+            except Exception:
+                return dict(updated=False)
+
+        if old_format:
+            # MODEL_API < 26:
+            return dict(updated=True, files=results[0])
         else:
-            result = dict(updated=True, files=files)
-
-        result['zuul_event_id'] = zuul_event_id
-        return result
+            return dict(updated=True, files=results)
 
     def completeMergeJob(self, merge_request, result):
         log = get_annotated_logger(self.log, merge_request.event_id)
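The compatibility branch above distinguishes a new-format request (a `changes` list) from an old-format one (connection/project keys at the top level), and in the old case returns a single file list instead of a list of lists. A small sketch of just that dispatch, with `normalize_fileschanges_args` as a hypothetical helper name:

```python
# Sketch of the MODEL_API < 26 handling above: a new-format fileschanges
# request carries a "changes" list; an old-format request is a single
# flat dict and is wrapped into a one-element list for processing.
def normalize_fileschanges_args(args):
    changes = args.get('changes')
    old_format = changes is None
    if old_format:
        changes = [args]
    return changes, old_format

old_style = {'connection': 'gerrit', 'project': 'org/project',
             'branch': 'master', 'tosha': None}
new_style = {'changes': [old_style]}

old_changes, was_old = normalize_fileschanges_args(old_style)
new_changes, was_new_old = normalize_fileschanges_args(new_style)
```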
zuul/model.py: 1450 lines changed (diff too large to display)
@@ -14,4 +14,4 @@
 
 # When making ZK schema changes, increment this and add a record to
 # doc/source/developer/model-changelog.rst
-MODEL_API = 25
+MODEL_API = 26
@@ -191,7 +191,7 @@ class Nodepool(object):
         else:
             event_id = None
         req = model.NodeRequest(self.system_id, build_set_uuid, tenant_name,
-                                pipeline_name, job.name, job._job_id, labels,
+                                pipeline_name, job.uuid, labels,
                                 provider, relative_priority, event_id)
 
         if job.nodeset.nodes:
@@ -1,4 +1,5 @@
 # Copyright 2014 Rackspace Australia
+# Copyright 2021-2024 Acme Gating, LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain

@@ -60,13 +61,14 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
     def postConfig(self):
         """Run tasks after configuration is reloaded"""
 
-    def addConfigurationErrorComments(self, item, comments):
+    def addConfigurationErrorComments(self, item, change, comments):
         """Add file comments for configuration errors.
 
         Updates the comments dictionary with additional file comments
-        for any relevant configuration errors for this item's change.
+        for any relevant configuration errors for the specified change.
 
         :arg QueueItem item: The queue item
+        :arg Ref change: One of the item's changes to check
         :arg dict comments: a file comments dictionary
 
         """

@@ -77,13 +79,13 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
             if not (context and mark and err.short_error):
                 continue
             if context.project_canonical_name != \
-               item.change.project.canonical_name:
+               change.project.canonical_name:
                 continue
-            if not hasattr(item.change, 'branch'):
+            if not hasattr(change, 'branch'):
                 continue
-            if context.branch != item.change.branch:
+            if context.branch != change.branch:
                 continue
-            if context.path not in item.change.files:
+            if context.path not in change.files:
                 continue
             existing_comments = comments.setdefault(context.path, [])
             existing_comments.append(dict(line=mark.end_line,

@@ -94,36 +96,40 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
                                           end_line=mark.end_line,
                                           end_character=mark.end_column)))
 
-    def _getFileComments(self, item):
+    def _getFileComments(self, item, change):
         """Get the file comments from the zuul_return value"""
         ret = {}
         for build in item.current_build_set.getBuilds():
             fc = build.result_data.get("zuul", {}).get("file_comments")
             if not fc:
                 continue
+            # Only consider comments for this change
+            if change.cache_key not in build.job.all_refs:
+                continue
             for fn, comments in fc.items():
                 existing_comments = ret.setdefault(fn, [])
                 existing_comments.extend(comments)
-        self.addConfigurationErrorComments(item, ret)
+        self.addConfigurationErrorComments(item, change, ret)
         return ret
 
-    def getFileComments(self, item):
-        comments = self._getFileComments(item)
-        self.filterComments(item, comments)
+    def getFileComments(self, item, change):
+        comments = self._getFileComments(item, change)
+        self.filterComments(item, change, comments)
         return comments
 
-    def filterComments(self, item, comments):
+    def filterComments(self, item, change, comments):
         """Filter comments for files in change
 
         Remove any comments for files which do not appear in the
-        item's change. Leave warning messages if this happens.
+        specified change. Leave warning messages if this happens.
 
         :arg QueueItem item: The queue item
+        :arg Change change: The change
         :arg dict comments: a file comments dictionary (modified in place)
         """
 
         for fn in list(comments.keys()):
-            if fn not in item.change.files:
+            if fn not in change.files:
                 del comments[fn]
                 item.warning("Comments left for invalid file %s" % (fn,))
 

@@ -172,7 +178,8 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
 
         return item.pipeline.enqueue_message.format(
             pipeline=item.pipeline.getSafeAttributes(),
-            change=item.change.getSafeAttributes(),
+            change=item.changes[0].getSafeAttributes(),
+            changes=[c.getSafeAttributes() for c in item.changes],
             status_url=status_url)
 
     def _formatItemReportStart(self, item, with_jobs=True):

@@ -182,7 +189,8 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
 
         return item.pipeline.start_message.format(
             pipeline=item.pipeline.getSafeAttributes(),
-            change=item.change.getSafeAttributes(),
+            change=item.changes[0].getSafeAttributes(),
+            changes=[c.getSafeAttributes() for c in item.changes],
             status_url=status_url)
 
     def _formatItemReportSuccess(self, item, with_jobs=True):

@@ -195,23 +203,23 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
         return msg
 
     def _formatItemReportFailure(self, item, with_jobs=True):
-        if item.cannotMergeBundle():
-            msg = 'This change is part of a bundle that can not merge.\n'
-            if isinstance(item.bundle.cannot_merge, str):
-                msg += '\n' + item.bundle.cannot_merge + '\n'
-        elif item.dequeued_needing_change:
-            msg = 'This change depends on a change that failed to merge.\n'
+        if len(item.changes) > 1:
+            change_text = 'These changes'
+        else:
+            change_text = 'This change'
+        if item.dequeued_needing_change:
+            msg = f'{change_text} depends on a change that failed to merge.\n'
             if isinstance(item.dequeued_needing_change, str):
                 msg += '\n' + item.dequeued_needing_change + '\n'
         elif item.dequeued_missing_requirements:
-            msg = ('This change is unable to merge '
+            msg = (f'{change_text} is unable to merge '
                    'due to a missing merge requirement.\n')
-        elif item.isBundleFailing():
-            msg = 'This change is part of a bundle that failed.\n'
+        elif len(item.changes) > 1:
+            msg = f'{change_text} is part of a dependency cycle that failed.\n'
             if with_jobs:
                 msg = '{}\n\n{}'.format(msg, self._formatItemReportJobs(item))
             msg = "{}\n\n{}".format(
-                msg, self._formatItemReportOtherBundleItems(item))
+                msg, self._formatItemReportOtherChanges(item))
         elif item.didMergerFail():
             msg = item.pipeline.merge_conflict_message
         elif item.current_build_set.has_blocking_errors:

@@ -247,7 +255,8 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
 
         return item.pipeline.no_jobs_message.format(
             pipeline=item.pipeline.getSafeAttributes(),
-            change=item.change.getSafeAttributes(),
+            change=item.changes[0].getSafeAttributes(),
+            changes=[c.getSafeAttributes() for c in item.changes],
             status_url=status_url)
 
     def _formatItemReportDisabled(self, item, with_jobs=True):

@@ -264,13 +273,9 @@ class BaseReporter(object, metaclass=abc.ABCMeta):
             msg += '\n\n' + self._formatItemReportJobs(item)
         return msg
 
-    def _formatItemReportOtherBundleItems(self, item):
-        related_changes = item.pipeline.manager.resolveChangeReferences(
-            item.change.getNeedsChanges(
-                item.pipeline.manager.useDependenciesByTopic(
-                    item.change.project)))
+    def _formatItemReportOtherChanges(self, item):
         return "Related changes:\n{}\n".format("\n".join(
-            f' - {c.url}' for c in related_changes if c is not item.change))
+            f' - {c.url}' for c in item.changes))
 
     def _getItemReportJobsFields(self, item):
         # Extract the report elements from an item
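The reporter changes above pluralize the failure text for multi-change items and list every change URL in the item's "Related changes" footer. A minimal sketch of those two pieces of formatting, with `failure_prefix` and `format_related_changes` as illustrative stand-ins for the reporter methods:

```python
# Sketch of the pluralized failure report above: items carrying a
# dependency cycle (more than one change) are described in the plural,
# and the related-changes footer lists every change URL in the item.
def failure_prefix(num_changes):
    return 'These changes' if num_changes > 1 else 'This change'

def format_related_changes(urls):
    return "Related changes:\n{}\n".format("\n".join(
        f' - {u}' for u in urls))

footer = format_related_changes(['https://review.example.org/1',
                                 'https://review.example.org/2'])
```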
@@ -905,13 +905,15 @@ class Scheduler(threading.Thread):
         try:
             if self.statsd and build.pipeline:
                 tenant = build.pipeline.tenant
-                jobname = build.job.name.replace('.', '_').replace('/', '_')
-                hostname = (build.build_set.item.change.project.
+                item = build.build_set.item
+                job = build.job
+                change = item.getChangeForJob(job)
+                jobname = job.name.replace('.', '_').replace('/', '_')
+                hostname = (change.project.
                             canonical_hostname.replace('.', '_'))
-                projectname = (build.build_set.item.change.project.name.
+                projectname = (change.project.name.
                                replace('.', '_').replace('/', '_'))
-                branchname = (getattr(build.build_set.item.change,
-                              'branch', '').
+                branchname = (getattr(change, 'branch', '').
                               replace('.', '_').replace('/', '_'))
                 basekey = 'zuul.tenant.%s' % tenant.name
                 pipekey = '%s.pipeline.%s' % (basekey, build.pipeline.name)

@@ -1611,8 +1613,8 @@ class Scheduler(threading.Thread):
         log.info("Tenant reconfiguration complete for %s (duration: %s "
                  "seconds)", event.tenant_name, duration)
 
-    def _reenqueueGetProject(self, tenant, item):
-        project = item.change.project
+    def _reenqueueGetProject(self, tenant, item, change):
+        project = change.project
         # Attempt to get the same project as the one passed in. If
         # the project is now found on a different connection or if it
         # is no longer available (due to a connection being removed),

@@ -1644,12 +1646,13 @@ class Scheduler(threading.Thread):
                 if child is item:
                     return None
                 if child and child.live:
-                    (child_trusted, child_project) = tenant.getProject(
-                        child.change.project.canonical_name)
-                    if child_project:
-                        source = child_project.source
-                        new_project = source.getProject(project.name)
-                        return new_project
+                    for child_change in child.changes:
+                        (child_trusted, child_project) = tenant.getProject(
+                            child_change.project.canonical_name)
+                        if child_project:
+                            source = child_project.source
+                            new_project = source.getProject(project.name)
+                            return new_project
 
         return None
 

@@ -1679,8 +1682,8 @@ class Scheduler(threading.Thread):
                 for item in shared_queue.queue:
                     # If the old item ahead made it in, re-enqueue
                     # this one behind it.
-                    new_project = self._reenqueueGetProject(
-                        tenant, item)
+                    new_projects = [self._reenqueueGetProject(
+                        tenant, item, change) for change in item.changes]
                     if item.item_ahead in items_to_remove:
                         old_item_ahead = None
                         item_ahead_valid = False

@@ -1691,8 +1694,9 @@ class Scheduler(threading.Thread):
                     item.item_ahead = None
                     item.items_behind = []
                     reenqueued = False
-                    if new_project:
-                        item.change.project = new_project
+                    if all(new_projects):
+                        for change_index, change in enumerate(item.changes):
+                            change.project = new_projects[change_index]
                         item.queue = None
                         if not old_item_ahead or not last_head:
                             last_head = item

@@ -1945,12 +1949,13 @@ class Scheduler(threading.Thread):
             for item in shared_queue.queue:
                 if not item.live:
                     continue
-                if (item.change.number == number and
-                        item.change.patchset == patchset):
-                    promote_operations.setdefault(
-                        shared_queue, []).append(item)
-                    found = True
-                    break
+                for item_change in item.changes:
+                    if (item_change.number == number and
+                            item_change.patchset == patchset):
+                        promote_operations.setdefault(
+                            shared_queue, []).append(item)
+                        found = True
+                        break
                 if found:
                     break
             if not found:

@@ -1981,11 +1986,12 @@ class Scheduler(threading.Thread):
                 pipeline.manager.dequeueItem(item)
 
             for item in items_to_enqueue:
-                pipeline.manager.addChange(
-                    item.change, item.event,
-                    enqueue_time=item.enqueue_time,
-                    quiet=True,
-                    ignore_requirements=True)
+                for item_change in item.changes:
+                    pipeline.manager.addChange(
+                        item_change, item.event,
+                        enqueue_time=item.enqueue_time,
+                        quiet=True,
+                        ignore_requirements=True)
             # Regardless, move this shared change queue to the head.
             pipeline.promoteQueue(change_queue)
 

@@ -2011,14 +2017,15 @@ class Scheduler(threading.Thread):
                       % (item, project.name))
         for shared_queue in pipeline.queues:
             for item in shared_queue.queue:
-                if item.change.project != change.project:
-                    continue
-                if (isinstance(item.change, Change) and
+                for item_change in item.changes:
+                    if item_change.project != change.project:
+                        continue
|
continue
|
||||||
item.change.number == change.number and
|
if (isinstance(item_change, Change) and
|
||||||
item.change.patchset == change.patchset) or\
|
item_change.number == change.number and
|
||||||
(item.change.ref == change.ref):
|
item_change.patchset == change.patchset) or\
|
||||||
pipeline.manager.removeItem(item)
|
(item_change.ref == change.ref):
|
||||||
return
|
pipeline.manager.removeItem(item)
|
||||||
|
return
|
||||||
raise Exception("Unable to find shared change queue for %s:%s" %
|
raise Exception("Unable to find shared change queue for %s:%s" %
|
||||||
(event.project_name,
|
(event.project_name,
|
||||||
event.change or event.ref))
|
event.change or event.ref))
|
||||||
|
@ -2059,18 +2066,19 @@ class Scheduler(threading.Thread):
|
||||||
change = project.source.getChange(change_key, event=event)
|
change = project.source.getChange(change_key, event=event)
|
||||||
for shared_queue in pipeline.queues:
|
for shared_queue in pipeline.queues:
|
||||||
for item in shared_queue.queue:
|
for item in shared_queue.queue:
|
||||||
if item.change.project != change.project:
|
|
||||||
continue
|
|
||||||
if not item.live:
|
if not item.live:
|
||||||
continue
|
continue
|
||||||
if ((isinstance(item.change, Change)
|
for item_change in item.changes:
|
||||||
and item.change.number == change.number
|
if item_change.project != change.project:
|
||||||
and item.change.patchset == change.patchset
|
continue
|
||||||
) or (item.change.ref == change.ref)):
|
if ((isinstance(item_change, Change)
|
||||||
log = get_annotated_logger(self.log, item.event)
|
and item_change.number == change.number
|
||||||
log.info("Item %s is superceded, dequeuing", item)
|
and item_change.patchset == change.patchset
|
||||||
pipeline.manager.removeItem(item)
|
) or (item_change.ref == change.ref)):
|
||||||
return
|
log = get_annotated_logger(self.log, item.event)
|
||||||
|
log.info("Item %s is superceded, dequeuing", item)
|
||||||
|
pipeline.manager.removeItem(item)
|
||||||
|
return
|
||||||
|
|
||||||
def _doSemaphoreReleaseEvent(self, event, pipeline):
|
def _doSemaphoreReleaseEvent(self, event, pipeline):
|
||||||
tenant = pipeline.tenant
|
tenant = pipeline.tenant
|
||||||
|
@ -2791,7 +2799,7 @@ class Scheduler(threading.Thread):
|
||||||
if not build_set:
|
if not build_set:
|
||||||
return
|
return
|
||||||
|
|
||||||
job = build_set.item.getJob(event._job_id)
|
job = build_set.item.getJob(event.job_uuid)
|
||||||
build = build_set.getBuild(job)
|
build = build_set.getBuild(job)
|
||||||
# Verify that the build uuid matches the one of the result
|
# Verify that the build uuid matches the one of the result
|
||||||
if not build:
|
if not build:
|
||||||
|
@ -2824,12 +2832,14 @@ class Scheduler(threading.Thread):
|
||||||
log = get_annotated_logger(
|
log = get_annotated_logger(
|
||||||
self.log, build.zuul_event_id, build=build.uuid)
|
self.log, build.zuul_event_id, build=build.uuid)
|
||||||
try:
|
try:
|
||||||
change = build.build_set.item.change
|
item = build.build_set.item
|
||||||
|
job = build.job
|
||||||
|
change = item.getChangeForJob(job)
|
||||||
estimate = self.times.getEstimatedTime(
|
estimate = self.times.getEstimatedTime(
|
||||||
pipeline.tenant.name,
|
pipeline.tenant.name,
|
||||||
change.project.name,
|
change.project.name,
|
||||||
getattr(change, 'branch', None),
|
getattr(change, 'branch', None),
|
||||||
build.job.name)
|
job.name)
|
||||||
if not estimate:
|
if not estimate:
|
||||||
estimate = 0.0
|
estimate = 0.0
|
||||||
build.estimated_time = estimate
|
build.estimated_time = estimate
|
||||||
|
@ -2884,11 +2894,8 @@ class Scheduler(threading.Thread):
|
||||||
# resources.
|
# resources.
|
||||||
build = Build()
|
build = Build()
|
||||||
job = DummyFrozenJob()
|
job = DummyFrozenJob()
|
||||||
job.name = event.job_name
|
|
||||||
job.uuid = event.job_uuid
|
job.uuid = event.job_uuid
|
||||||
job.provides = []
|
job.provides = []
|
||||||
# MODEL_API < 25
|
|
||||||
job._job_id = job.uuid or job.name
|
|
||||||
build._set(
|
build._set(
|
||||||
job=job,
|
job=job,
|
||||||
uuid=event.build_uuid,
|
uuid=event.build_uuid,
|
||||||
|
@ -2997,8 +3004,7 @@ class Scheduler(threading.Thread):
|
||||||
# In case the build didn't show up on any executor, the node
|
# In case the build didn't show up on any executor, the node
|
||||||
# request does still exist, so we have to make sure it is
|
# request does still exist, so we have to make sure it is
|
||||||
# removed from ZK.
|
# removed from ZK.
|
||||||
request_id = build.build_set.getJobNodeRequestID(
|
request_id = build.build_set.getJobNodeRequestID(build.job)
|
||||||
build.job, ignore_deduplicate=True)
|
|
||||||
if request_id:
|
if request_id:
|
||||||
self.nodepool.deleteNodeRequest(
|
self.nodepool.deleteNodeRequest(
|
||||||
request_id, event_id=build.zuul_event_id)
|
request_id, event_id=build.zuul_event_id)
|
||||||
|
@ -3058,11 +3064,11 @@ class Scheduler(threading.Thread):
|
||||||
return
|
return
|
||||||
|
|
||||||
log = get_annotated_logger(self.log, request.event_id)
|
log = get_annotated_logger(self.log, request.event_id)
|
||||||
job = build_set.item.getJob(request._job_id)
|
job = build_set.item.getJob(request.job_uuid)
|
||||||
if job is None:
|
if job is None:
|
||||||
log.warning("Item %s does not contain job %s "
|
log.warning("Item %s does not contain job %s "
|
||||||
"for node request %s",
|
"for node request %s",
|
||||||
build_set.item, request._job_id, request)
|
build_set.item, request.job_uuid, request)
|
||||||
return
|
return
|
||||||
|
|
||||||
# If the request failed, we must directly delete it as the nodes will
|
# If the request failed, we must directly delete it as the nodes will
|
||||||
|
@ -3073,7 +3079,7 @@ class Scheduler(threading.Thread):
|
||||||
|
|
||||||
nodeset = self.nodepool.getNodeSet(request, job.nodeset)
|
nodeset = self.nodepool.getNodeSet(request, job.nodeset)
|
||||||
|
|
||||||
job = build_set.item.getJob(request._job_id)
|
job = build_set.item.getJob(request.job_uuid)
|
||||||
if build_set.getJobNodeSetInfo(job) is None:
|
if build_set.getJobNodeSetInfo(job) is None:
|
||||||
pipeline.manager.onNodesProvisioned(request, nodeset, build_set)
|
pipeline.manager.onNodesProvisioned(request, nodeset, build_set)
|
||||||
else:
|
else:
|
||||||
|
@ -3111,8 +3117,8 @@ class Scheduler(threading.Thread):
|
||||||
self.executor.cancel(build)
|
self.executor.cancel(build)
|
||||||
except Exception:
|
except Exception:
|
||||||
log.exception(
|
log.exception(
|
||||||
"Exception while canceling build %s for change %s",
|
"Exception while canceling build %s for %s",
|
||||||
build, item.change)
|
build, item)
|
||||||
|
|
||||||
# In the unlikely case that a build is removed and
|
# In the unlikely case that a build is removed and
|
||||||
# later added back, make sure we clear out the nodeset
|
# later added back, make sure we clear out the nodeset
|
||||||
|
|
|
@@ -1,5 +1,5 @@
 # Copyright (c) 2017 Red Hat
-# Copyright 2021-2023 Acme Gating, LLC
+# Copyright 2021-2024 Acme Gating, LLC
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -385,9 +385,14 @@ class ChangeFilter(object):
         for pipeline in payload['pipelines']:
             for change_queue in pipeline.get('change_queues', []):
                 for head in change_queue['heads']:
-                    for change in head:
-                        if self.wantChange(change):
-                            status.append(copy.deepcopy(change))
+                    for item in head:
+                        want_item = False
+                        for change in item['changes']:
+                            if self.wantChange(change):
+                                want_item = True
+                                break
+                        if want_item:
+                            status.append(copy.deepcopy(item))
         return status

     def wantChange(self, change):
@@ -1455,7 +1460,19 @@ class ZuulWebAPI(object):
             return my_datetime.strftime('%Y-%m-%dT%H:%M:%S')
         return None

-    def buildToDict(self, build, buildset=None):
+    def refToDict(self, ref):
+        return {
+            'project': ref.project,
+            'branch': ref.branch,
+            'change': ref.change,
+            'patchset': ref.patchset,
+            'ref': ref.ref,
+            'oldrev': ref.oldrev,
+            'newrev': ref.newrev,
+            'ref_url': ref.ref_url,
+        }
+
+    def buildToDict(self, build, buildset=None, skip_refs=False):
         start_time = self._datetimeToString(build.start_time)
         end_time = self._datetimeToString(build.end_time)
         if build.start_time and build.end_time:
@@ -1480,28 +1497,25 @@ class ZuulWebAPI(object):
             'final': build.final,
             'artifacts': [],
             'provides': [],
+            'ref': self.refToDict(build.ref),
         }

-        # TODO: This should not be conditional in the future, when we
-        # can have multiple refs for a buildset.
         if buildset:
+            # We enter this branch if we're returning top-level build
+            # objects (ie, not builds under a buildset).
             event_timestamp = self._datetimeToString(buildset.event_timestamp)
             ret.update({
-                'project': build.ref.project,
-                'branch': build.ref.branch,
                 'pipeline': buildset.pipeline,
-                'change': build.ref.change,
-                'patchset': build.ref.patchset,
-                'ref': build.ref.ref,
-                'oldrev': build.ref.oldrev,
-                'newrev': build.ref.newrev,
-                'ref_url': build.ref.ref_url,
                 'event_id': buildset.event_id,
                 'event_timestamp': event_timestamp,
                 'buildset': {
                     'uuid': buildset.uuid,
                 },
             })
+            if not skip_refs:
+                ret['buildset']['refs'] = [
+                    self.refToDict(ref)
+                    for ref in buildset.refs
+                ]

         for artifact in build.artifacts:
             art = {
@@ -1560,7 +1574,8 @@ class ZuulWebAPI(object):
             idx_max=_idx_max, exclude_result=exclude_result,
             query_timeout=self.query_timeout)

-        return [self.buildToDict(b, b.buildset) for b in builds]
+        return [self.buildToDict(b, b.buildset, skip_refs=True)
+                for b in builds]

     @cherrypy.expose
     @cherrypy.tools.save_params()
@@ -1570,10 +1585,10 @@ class ZuulWebAPI(object):
     def build(self, tenant_name, tenant, auth, uuid):
         connection = self._get_connection()

-        data = connection.getBuilds(tenant=tenant_name, uuid=uuid, limit=1)
+        data = connection.getBuild(tenant_name, uuid)
         if not data:
             raise cherrypy.HTTPError(404, "Build not found")
-        data = self.buildToDict(data[0], data[0].buildset)
+        data = self.buildToDict(data, data.buildset)
         return data

     def buildTimeToDict(self, build):
@@ -1646,19 +1661,15 @@ class ZuulWebAPI(object):
             'uuid': buildset.uuid,
             'result': buildset.result,
             'message': buildset.message,
-            'project': buildset.refs[0].project,
-            'branch': buildset.refs[0].branch,
             'pipeline': buildset.pipeline,
-            'change': buildset.refs[0].change,
-            'patchset': buildset.refs[0].patchset,
-            'ref': buildset.refs[0].ref,
-            'oldrev': buildset.refs[0].oldrev,
-            'newrev': buildset.refs[0].newrev,
-            'ref_url': buildset.refs[0].ref_url,
             'event_id': buildset.event_id,
             'event_timestamp': event_timestamp,
             'first_build_start_time': start,
             'last_build_end_time': end,
+            'refs': [
+                self.refToDict(ref)
+                for ref in buildset.refs
+            ],
         }
         if builds:
             ret['builds'] = []
@@ -1798,7 +1809,7 @@ class ZuulWebAPI(object):
     @cherrypy.tools.check_tenant_auth()
     def project_freeze_jobs(self, tenant_name, tenant, auth,
                             pipeline_name, project_name, branch_name):
-        item = self._freeze_jobs(
+        item, change = self._freeze_jobs(
             tenant, pipeline_name, project_name, branch_name)

         output = []
@@ -1822,9 +1833,10 @@ class ZuulWebAPI(object):
                            job_name):
         # TODO(jhesketh): Allow a canonical change/item to be passed in which
         # would return the job with any in-change modifications.
-        item = self._freeze_jobs(
+        item, change = self._freeze_jobs(
             tenant, pipeline_name, project_name, branch_name)
-        job = item.current_build_set.job_graph.getJobFromName(job_name)
+        job = item.current_build_set.job_graph.getJob(
+            job_name, change.cache_key)
         if not job:
             raise cherrypy.HTTPError(404)

@@ -1873,12 +1885,12 @@ class ZuulWebAPI(object):
         change.cache_stat = FakeCacheKey()
         with LocalZKContext(self.log) as context:
             queue = ChangeQueue.new(context, pipeline=pipeline)
-            item = QueueItem.new(context, queue=queue, change=change)
+            item = QueueItem.new(context, queue=queue, changes=[change])
             item.freezeJobGraph(tenant.layout, context,
                                 skip_file_matcher=True,
                                 redact_secrets_and_keys=True)

-        return item
+        return item, change


 class StaticHandler(object):
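The recurring pattern in the scheduler hunks above — matching an incoming change against every change carried by a queue item, instead of comparing a single `item.change` — can be sketched in isolation. This is a minimal illustration with hypothetical stand-in classes, not Zuul's real model objects:

```python
# Sketch of the multi-change queue item lookup introduced by this refactor.
# The Change and QueueItem classes here are simplified stand-ins.

class Change:
    def __init__(self, project, number, patchset):
        self.project = project
        self.number = number
        self.patchset = patchset


class QueueItem:
    def __init__(self, changes):
        # An item now holds a list of changes; a dependency cycle is
        # enqueued as one item containing every change in the cycle.
        self.changes = changes


def find_item_for_change(queue, change):
    """Return the first item containing the given change, or None."""
    for item in queue:
        for item_change in item.changes:
            if (item_change.project == change.project and
                    item_change.number == change.number and
                    item_change.patchset == change.patchset):
                return item
    return None


queue = [
    QueueItem([Change('org/a', 101, 1)]),
    # Two mutually-dependent changes share a single item:
    QueueItem([Change('org/b', 102, 1), Change('org/c', 103, 2)]),
]
match = find_item_for_change(queue, Change('org/c', 103, 2))
```

Because either change of the cycle resolves to the same item, operations such as promote, dequeue, and supersede act on the whole cycle at once, which is why each of those event handlers gained an inner `for item_change in item.changes:` loop.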