Add additional sections to the testing instructions; these are based
on the same sections in the Zuul repo. This brings the nodepool repo
to parity.
Change-Id: I7a1b2c62963a815a1ab864e3685be1653e5734fc
Reuse the CA certificate if one is available. The CA certificate can
be defined in the kube/config file used by the Nodepool service.
Fix the following error:
DEBUG zuul.AnsibleJob.output: [...]
Ansible output: b'fatal: [molecule]: FAILED! => {
[...]
"failed_modules": {
"setup": {
"failed": true,
"module_stderr": "Unable to connect to the server: x509: certificate signed by unknown authority",
"module_stdout": "",
"msg": "MODULE FAILURE See stdout/stderr for the exact error",
"rc": 1,
}
},
"msg": "The following modules failed to execute: setup"
}
Change-Id: Ic2b764e88d966a5c501e72ba3dfb46436979072c
After Ia94f20b15ab9cf2ae4969891eedccec8d5291d36 the hostname fields
are just used for informational purposes. Use the FQDN so something
like 'nb01.opendev.org' and 'nb01.openstack.org' are differentiated in
the output of something like 'nodepool dib-image-list'.
Change-Id: I1f061958ff271f707fddbe9ef74fd2e2a228e4ca
If you use an IPv6 literal as the host: parameter in the
zookeeper-servers section, it gets incorrectly concatenated with the
port when building the host list. Detect IPv6 literals and quote them
with RFC 2732 []'s when building the host string.
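The quoting logic could be sketched as a small helper (a minimal sketch using Python's ipaddress module; format_host and its signature are illustrative, not nodepool's actual code):

```python
import ipaddress


def format_host(host, port):
    """Quote bare IPv6 literals with RFC 2732 brackets before
    appending the port, so '::1' becomes '[::1]:2181' instead of
    the ambiguous '::1:2181'."""
    try:
        if ipaddress.ip_address(host).version == 6:
            host = "[%s]" % host
    except ValueError:
        pass  # hostname or IPv4 dotted quad; leave as-is
    return "%s:%s" % (host, port)
```

Hostnames and IPv4 addresses fall through the ValueError branch untouched, so only bare IPv6 literals gain brackets.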
Change-Id: I49a749142e8031be37a08960bf645faca4e97149
So that we can run nodepool-builder in containers on arm
hosts to better build arm images, update our zuul config
to build multi-arch docker images.
Change-Id: I98ec4cef2ff35c707ff43d0b8e554b969d720250
We see timeouts trying to get this key fairly frequently in the gate.
Store it locally and use that in the container build.
Change-Id: Ifd706849f1fad88c8ec4afc79090df4afb88abb4
This allows setting build-log-retention to -1 to disable automatic
reaping of logs, which facilitates managing these logs with an
external tool like logrotate. It also helps when builds fail very
quickly -- say, one of the builds has destroyed the container and so
subsequent builds fail to even exec dib correctly. In that case it is
difficult to get to the root cause of the problem because the first
build's logs (the one that destroyed the container) have been reaped
just seconds after the failure.
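The disabled-reaping case might look like this in the builder configuration (a hedged fragment; only the -1 value comes from this change, the surrounding file is assumed):

```yaml
# Keep all build logs; let an external tool such as logrotate
# manage them instead of the builder's automatic reaping.
build-log-retention: -1
```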
Change-Id: I259c78e6a0e30b4c0a8d2f4c12a6941a2d227c38
The most important change here is fixes for building SuSE with
pip-and-virtualenv, something that has been failing for some time.
Change-Id: I90328693f3ad45f44bec72fb9e72f45ac3be6790
When used outside the nodepool repository, we need to ensure the
nodepool project is included for sibling image build.
Change-Id: Ie696abe98620ff1036fe12069f0df89eaab47ef7
We added two packages to extras so that they'd end up in the
container images, but we never told anything to install them.
It became clear that that's confusing, so we added an api
to python-builder to allow specifying a list of extras to
install.
Depends-On: https://review.opendev.org/722125
Change-Id: I27e10822744863560febcdad8bab9a4f3cf8fc8e
These tests are used generically to build a range of images, not just
debuntu-based ones. Remove these platform-specific flags.
In the jobs based on this for the nodepool gate we are building
centos images, so these don't apply there.
Change-Id: Ia4dde1fb01da284a5e681237ab88c68fb9afcbef
Set the working directory to the nodepool checkout so that when we use
this job from other repos, it finds and builds the right Dockerfile.
Change-Id: I8578dd612d58387ad20c43404444df43f41a6723
As described in the updated comment section, this debootstrap from the
openstack-ci PPA works around some issues building inside a container.
Change-Id: I0887a801bb6dd4ce992c39d9e332a18f8194a7b9
We need to set pool info on leaked instances so that they are properly
accounted for against quota. When a znode has provider details but not
pool details, we can't count it against quota used, but we also don't
count it as unmanaged quota, so it ends up in limbo.
To fix this we set pool info metadata so that when an instance leaks we
can recover the pool info and set it on the phony instance znode records
used to delete those instances.
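The approach above could be sketched as follows (a minimal sketch; the metadata key names and both helper functions are illustrative, not nodepool's actual schema):

```python
def launch_metadata(provider_name, pool_name):
    """Metadata attached to an instance at launch time so that a
    leaked instance can later be tied back to its pool."""
    return {
        "nodepool_provider_name": provider_name,
        "nodepool_pool_name": pool_name,
    }


def znode_for_leak(instance_metadata):
    """Build a phony znode record for a leaked instance, recovering
    the pool info from the instance metadata so the node is counted
    against the right quota while it is deleted."""
    return {
        "provider": instance_metadata.get("nodepool_provider_name"),
        "pool": instance_metadata.get("nodepool_pool_name"),
        "state": "deleting",
    }
```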
Change-Id: Iba51655f7bf86987f9f88bb45059464f9f211ee9
This only happens if the user has enabled console-log on the label,
so log it at info level so it appears in the standard logs.
Change-Id: I74dbc82bbbd310ba788250d864869681603babd2
We currently register static nodes serially on startup of the static
node provider. This is not a problem as long as all static nodes are
reachable. However, if multiple static nodes are unreachable and fail
with a timeout, startup of the provider can take very long, since it
must wait for many timeouts one after another. To handle this better,
parallelize the initial sync of static nodes.
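The parallel sync could be sketched with a thread pool (a minimal sketch; register_all and the callable-based interface are illustrative, not the provider's real API):

```python
import concurrent.futures


def register_all(nodes, register, max_workers=8):
    """Register static nodes in parallel so one unreachable host's
    connection timeout does not delay the rest of provider startup.
    `register` is a caller-supplied callable taking a node name."""
    results = {}
    with concurrent.futures.ThreadPoolExecutor(
            max_workers=max_workers) as pool:
        futures = {pool.submit(register, n): n for n in nodes}
        for fut in concurrent.futures.as_completed(futures):
            node = futures[fut]
            try:
                results[node] = fut.result()
            except Exception as e:
                results[node] = e  # record the failure, keep going
    return results
```

Failures are recorded per node instead of aborting the sync, so a single timed-out host cannot block the others.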
Change-Id: I4e9b12de277ef0c3140815fb61fed612be2d9396
To avoid a situation where nodepool deletes an image build but
continues to indicate that it is "ready" in the UI, make sure that
we set the state to "deleting" before we start deleting.
Change-Id: I0f87dd93262ba46f42931d83d123244dfe85cd2f
As a follow-on to I81b57d6f6142e64dd0ebf31531ca6489d6c46583, bring
consistency to the resource leakage cleanup statistics provided by
nodepool.
New stats for cleanup of leaked instances and floating ips are added
and documented. For consistency, the downPorts stat is renamed to
leaked.ports.
The documentation is re-organised slightly to group common stats
together. The nodepool.task.<provider>.<task> stat is removed because
it is covered by the section on API stats below.
Change-Id: I9773181a81db245c5d1819fc7621b5182fbe5f59
As a follow-on to I615e530decbee6a46167a40748342d2193851c02, switch
one of the usernames in testing away from the default to test setting
the username from config.
Change-Id: Id05561b942ed96bc3cc011df6906d706f12d80bf