The Zookeeper docker 'latest' image was updated from 3.6.2 to 3.7.0,
after which some java.nio.file.NoSuchFileException exceptions appeared
for the following files:
- /var/certs/keystores/examples_zk_1.examples_default.pem
- /var/certs/certs/cacert.pem
This change adds a check for the last file created by tools/zk-ca.sh
before starting the zookeeper service.
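For illustration only, such a check could look like a wait loop in the
compose service command (the service name, image tag and startup
command below are assumptions, not the actual change):

  services:
    zk:
      image: zookeeper:3.7.0
      command: >
        sh -c 'while [ ! -f /var/certs/keystores/examples_zk_1.examples_default.pem ];
        do echo "waiting for certs"; sleep 1; done;
        exec zkServer.sh start-foreground'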
Change-Id: I15b67977a8b14bb83474390786ab47000e7be07c
This updates the new metric added in
I453ff6dfe479a48a32f4f581f5d07f4ee6b4d804 to:
* Rename it to event_enqueue_processing_time to make it clearer
  what is being measured.
* Move it out of the pipeline hierarchy since it's not pipeline-
specific.
* Add a companion event_enqueue_time which also includes the
driver pre-processing time.
* Rename the internal variable from "trigger_timestamp" to
"arrived_at_scheduler_timestamp" because we have a lot of timestamps
and they are confusing and the extra verbosity helps keep them
straight.
* Switch from monotonic to time.time() to match what the drivers
  are setting for the initial event timestamp.
Change-Id: Ie283c949bd5f4e133152050fa9c027d4882a8c0e
At present, the components page[1] of the Zuul documentation suggests
that a database is required only when an SQL connection is specified
in zuul.conf. However, a database is required even if no SQL
connection is specified in zuul.conf.
To avoid this confusion, this commit updates the components page of
the documentation.
[1] https://zuul-ci.org/docs/zuul/discussion/components.html
Story: 2008725
Task: 42069
Change-Id: Id1d68bd5cff43e9479dab851018799691c8e3f57
The zuul.artifacts example fails to parse with an Ansible error:
ERROR! Syntax Error while loading YAML. mapping values are not
allowed in this context
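For reference, a parseable artifact return looks roughly like this
(a minimal sketch; the artifact name and url are placeholders):

  - hosts: localhost
    tasks:
      - zuul_return:
          data:
            zuul:
              artifacts:
                - name: "Tarball"
                  url: "dist/example.tar.gz"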
Change-Id: I7d344e96de74c3b55ee7430ae391d344b2485c2d
When running a deployment using the zone feature it can be troublesome
to run nodeless jobs, because zoned executors don't run jobs that have
no zone at all. This can be solved either by running an additional set
of unzoned executors or, with this change, by allowing some executors
to also process unzoned jobs. As running additional unzoned executors
just for nodeless jobs can increase operational overhead, this optional
setting can be very useful for such setups.
Change-Id: I9c025cdbe17a55f47f0cbd97b8b593727fc97e2d
This gives us 3 sets of executor stats: zoned, unzoned, and all.
The zoned and unzoned stats are moved to a hierarchy that avoids
name collisions.
The only way to detect whether an executor in a zone is online is via
a function registered with its zone name. Since the only function
currently registered with a zone name is "execute:execute", and that
is unregistered when an executor is online but not accepting jobs, we
need to add a new dummy function in order to do this accounting.
This also updates the docs and adds a release note about the (minor)
change in stats meaning.
Change-Id: Ie28963426024f2d54275426794549f31ace9d998
Change Ic121b2d8d057a7dc4448ae70045853347f265c6c omitted support
for enqueuing changes behind a cycle in order to avoid having them
show up in the middle of the cycle. However, we can process changes
behind a cycle if we wait until we have completely enqueued the
cycle before we embark on enqueuing changes behind it.
This also removes unused pipelines which cause extra processing and
output in the unit tests.
Also add a release note about circular deps, and describe additional
caveats about using circular dependencies in the documentation.
Change-Id: I64c0d4e8c20e4638bbafb18409cd28c062369738
Allow Zuul to process circular dependencies between changes. Gating of
circular dependencies must be explicitly enabled on a per-tenant or
per-project basis.
In case Zuul detects a dependency cycle it will make sure that every
change also includes all other changes that are part of the cycle.
However, each change will still be a normal item in the queue with its
own jobs.
When it comes to reporting, all items in the cycle are treated as one
unit that determines the success/failure of those changes.
Changes with cross-repo circular dependencies are required to share the
same change queue.
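As an illustrative sketch of opting in (the exact option name and its
placement in the tenant configuration are assumptions; consult the
tenant configuration documentation):

  - tenant:
      name: example-tenant
      allow-circular-dependencies: true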
Depends-On: https://review.opendev.org/#/c/643309/
Change-Id: Ic121b2d8d057a7dc4448ae70045853347f265c6c
This adds documentation for zuul-executor graceful. In the process we
tweak the docs for pause as well.
Change-Id: Ieec5740f9a3d1857036d5d6e2af7cb30f9d77f94
This change updates the config documentation to indicate that
the config project branch can be configured.
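For example, something along these lines in the tenant configuration
(project and branch names are placeholders, and the attribute shown is
an assumption based on the tenant configuration docs):

  - tenant:
      name: example-tenant
      source:
        gerrit:
          config-projects:
            - example/zuul-config:
                load-branch: zuul-config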
Change-Id: Ib7e77ac289b2bc97b6c851744f0999773e091951
Part of point 5 in https://etherpad.openstack.org/p/zuulv4
The connection is idle for now.
Also update component documentation.
Change-Id: I97a97f61940fab2a555c3651e78fa7a929e8ebfb
This change is a common root for other ZooKeeper-related changes
regarding the scale-out scheduler. ZooKeeper becoming a central
component requires increasing "maxClientCnxns".
Since the ZooKeeper class is expected to grow significantly (ZooKeeper
is becoming a central part of Zuul), the ZooKeeper class (zk.py) is
split here into a zk module to avoid the current god class.
Also the zookeeper log is copied to the "zuul_output_dir".
Change-Id: I714c06052b5e17269a6964892ad53b48cf65db19
Story: 2007192
On the way towards a fully scale-out scheduler we need to move the
times database from the local filesystem into the SQL database.
Therefore we need to make at least one SQL connection mandatory.
SQL reporting is now required: an implied SQL reporter is added to
every pipeline, and explicit SQL reporters are ignored.
Change-Id: I30723f9b320b9f2937cc1d7ff3267519161bc380
Depends-On: https://review.opendev.org/621479
Story: 2007192
Task: 38329
Referring to a queue makes more sense on the project, where it has
more uses, such as allowing circular dependencies later. This makes it
possible to refer to the queue at the project level, which takes
precedence over referring to queues in the pipeline config. We also
deprecate referring to the queue in the pipeline config and will
remove this option in a later release.
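For illustration, the queue is now referenced on the project (project,
queue and job names are placeholders):

  - project:
      name: example/project
      queue: integrated
      gate:
        jobs:
          - example-job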
Change-Id: I14533ae57cfe688b55c26a1e17ab38e133180b28
We have several large projects that most of the time have long gate
queues. Those projects typically work on master and a few release
branches, where the changes in the release branches are more important
than the changes for master. Currently all of those changes are queued
up in a shared gate queue, which makes the process of getting changes
into the release branches very slow, especially if occasional gate
resets are involved. In order to improve this, allow specifying the
change queues per branch so we can queue up the changes for each
release branch in a separate queue.
This is done by adding a new config element 'queue' which can be
configured to work on a per-branch level.
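A sketch of the new config element (the queue name is a placeholder):

  - queue:
      name: integrated
      per-branch: true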
Change-Id: Ie5c1a2b8f413fd595dbaaeba67251da14c6b4b36
The timestamp field helps to prepare visualizations in Kibana.
The duration field was set to integer in the Kibana object, but the
value was a string, so Kibana was producing incorrect visualizations.
Also this commit fixes the rst block in the elasticsearch driver
documentation.
Change-Id: I92a034d78f9193476eccecc7efb4a818d4b4a658
The Elasticsearch driver has the capability to index build and
buildset results. With the help of tools like Kibana, advanced
analytics dashboards can be built on top of the Zuul Elasticsearch
index.
Optionally, job variables and zuul_return data can be exported along
with build results under the job_vars and job_returned_vars fields.
Change-Id: I5315483c55c10de63a3cd995ef681d0b64b98513
- quick-start steps are modified to fit more closely what a reader
  would do
- quick-start test code is mainly split into 2 files: one is the setup
  part as a role, and the second one starts with cloning the test
  repository, just like all following tutorials will do
- some elementary steps for manipulating or checking gerrit are added
  as roles
- tutorial ssh config: the test ssh configuration has been modified to
  allow using a known_hosts file both for someone executing the test
  locally and for opendev.org's zuul. A reader executing the tutorial
  would still have to accept the fingerprint. To achieve this, the
  commit-msg hook is fetched manually; otherwise it would be downloaded
  by git-review through scp, and git-review doesn't allow passing
  options to scp to provide a new known_hosts file.
- the user's ssh key is used if ~/.ssh/id_rsa.pub is available,
  otherwise a generated one is used.
- "to_json | from_json | json_query" in the test is due to an issue
  between ansible and jmespath [1]
[1] https://github.com/ansible-collections/community.general/issues/320
Change-Id: Id5c669537ff5afc7468352139980ebade167d534
The latest release of Gerrit has changed the name of the
Non-Interactive Users group to Service Users.
Change-Id: Ia6d33df50498164e4df2ef6b062cb3807151ca9f
Co-Authored-By: Guillaume Chauvel <guillaume.chauvel@gmail.com>
Now that zuul-client's encrypt subcommand covers the same
functionality as encrypt_secret.py, add a deprecation message when
running the script. Document the zuul-client encrypt command in the
doc section about secrets.
Change-Id: Id5437ffbb688cb80b2744db3beeaa28c97080d90
Depends-On: https://review.opendev.org/765313
The HEADER_* argument format for the uri module has been deprecated
for some time and has now been removed. Update to use the new format.
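For example, the removed and current forms of passing headers to the
uri module (URL and header values are placeholders):

  # Removed HEADER_* form:
  - uri:
      url: https://example.com/api
      HEADER_Accept: application/json

  # Current form using the headers dictionary:
  - uri:
      url: https://example.com/api
      headers:
        Accept: application/json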
Change-Id: I4691642213344f2516e6f146da669141db39772a
Emphasize that the protected attribute also applies to
execution-related attributes defined in a project pipeline, as
mentioned in the docs of the final attribute.
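For reference, the attribute is set on a job like this (the job name
is a placeholder; see the job.protected documentation for the exact
semantics):

  - job:
      name: base-deploy
      protected: true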
Change-Id: I7031a40972e89a1a14e9af56d995eac38b82d3ee
The entrypoint script in the Gerrit image has changed. This is the
new way to specify the canonical URL. The other entries we were
setting are either no longer necessary or not critical.
Change-Id: I6d8979a5fdb204b9a3a12acf05a56a13413379c1
The HKPS proxy on 443/tcp at sks-keyservers.net hasn't been operable
for many months (consistently returning a 502 Bad Gateway error).
Switch to a direct HKP URL on 11371/tcp at pool.sks-keyservers.net
instead, which returns the same content (unfortunately not over an
encrypted connection). The next best alternatives would be to use a
lookup on keyserver.ubuntu.com which misses a lot of the
cross-signing key info, or keys.openpgp.net which only provides a
link to download key material with no additional information and no
signatures.
Change-Id: I65ba5cfdd75583bc67633eb7c677b296778d5979
I recently had an issue using our nodepool container jobs from the dib
repository. Nodepool jobs define a base container job and then two
variants; one for installing with released tools and one for
installing with siblings from git. In this case, you never really
want to inherit from the base job; you want to choose one of the two
variants. Although the base job was marked abstract, it didn't really
stop me from accidentally inheriting from it directly.
This proposes an "intermediate" flag that would be applied to such a
base job. In a case such as above, it would have raised an error and
alerted me to use one of the two other jobs.
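A sketch of how such a base job could be marked (job names are
placeholders; per the proposed semantics, only abstract jobs may
inherit directly from an intermediate job):

  # Base job that should never be used directly.
  - job:
      name: build-image-base
      abstract: true
      intermediate: true

  # Abstract variant that concrete jobs are expected to inherit from.
  - job:
      name: build-image-release
      parent: build-image-base
      abstract: true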
Change-Id: Ifbb9ffa65f86a6b86b63a38e3234a12b564ba3c1
We already have the infrastructure in place for adding warnings to the
reporting. Plumb that through to zuul_return so jobs can do that on
purpose as well. An example could be a post playbook that analyzes
performance statistics and emits a warning about inefficient usage of
the build node resources.
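A sketch of what that could look like in a post playbook (the key name
zuul.warnings and the warning text are assumptions):

  - hosts: localhost
    tasks:
      - zuul_return:
          data:
            zuul:
              warnings:
                - "Only 20% of the requested CPU was used; consider a smaller node."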
Change-Id: I4c3b85dc8f4c69c55cbc6168b8a66afce8b50a97
When multiple tasks (or roles) return file_comments for the same file
via zuul_return, only the last value is taken into account.
This uses a similar approach as for artifacts to merge the file
comments of multiple zuul_return calls. With that, all file comments
are merged additively into a final data structure. File comments for
the same file and/or the same line are kept distinct and don't
override each other.
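For reference, file comments are returned like this (path, line
numbers and messages are placeholders); with this change, both
comments below end up in the report instead of the second call
overwriting the first:

  - hosts: localhost
    tasks:
      - zuul_return:
          data:
            zuul:
              file_comments:
                path/to/file.py:
                  - line: 42
                    message: "consider refactoring this loop"
      - zuul_return:
          data:
            zuul:
              file_comments:
                path/to/file.py:
                  - line: 120
                    message: "this call looks slow"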
Change-Id: Ib7300c243deaeb7922182a94cfa41e617fc4e6e6