Only a single test actually depends on having more than one upload
thread active, so the higher default is just wasteful. Reduce the default
to 1 and add an option to useBuilder() that tests may use to alter
the value.
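A sketch of how a test might opt in (the keyword name here is
hypothetical; the change adds some such option to useBuilder(), not
necessarily this exact spelling):

  # Default: a single upload thread.
  self.useBuilder(configfile)

  # Hypothetical opt-in for the one test that needs parallel uploads:
  self.useBuilder(configfile, num_uploaders=4)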
Change-Id: I07ec96000a81153b51b79bfb0daee1586491bcc5
The Kubernetes driver can optionally use the credentials inside
the pod, which means that the context setting should be optional.
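With that, a provider entry along these lines (names and image
illustrative) would be valid without a context:

  providers:
    - name: kube-cluster
      driver: kubernetes
      pools:
        - name: main
          labels:
            - name: pod-fedora
              type: pod
              image: docker.io/fedora:29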
Change-Id: Iaf42cd03bd4a92e36133fd2ec7157869f8747d6b
The missing newline results in the entire section being rendered
in a monospace font and the attr not resolving properly.
Change-Id: I657c62d572526d8cd06299c6d349806c4581677b
The labels section was mis-indented, so the documentation suggested
that labels was a root configuration option for the provider, when in
reality it lives inside the pools section.
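A minimal sketch of the corrected nesting (an openstack provider with
illustrative values):

  providers:
    - name: cloud
      driver: openstack
      pools:
        - name: main
          max-servers: 10
          labels:              # labels belongs here, under the pool,
            - name: fedora-29  # not at the provider's root
              diskimage: fedora-29
              min-ram: 8192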
Change-Id: If757cc23c694927bb6402d42eed171ad23b1e87e
When creating instances with boot-from-volume we don't get quota
related information in the exception raised by wait_for_server. The
fault information is also missing from the server munch that is
returned. This causes node failures when we run into the volume
quota. This can be fixed by explicitly fetching the server, if we got
one, and inspecting the fault information, which says more about the
fault reason [1].
[1] Example fault reason:
Build of instance 4628f079-26a9-4a1d-aaa0-881ba4c7b9cb aborted:
VolumeSizeExceedsAvailableQuota: Requested volume or snapshot exceeds
allowed gigabytes quota. Requested 500G, quota is 10240G and 10050G
has been consumed.
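A rough sketch of the idea against the openstacksdk cloud layer (not
the literal nodepool patch; names and arguments may differ):

  import openstack

  conn = openstack.connect(cloud='mycloud')  # illustrative cloud name
  server = conn.create_server(
      name='node', image='fedora-29', flavor='m1.large',
      boot_from_volume=True, volume_size=500, wait=False)
  try:
      server = conn.wait_for_server(server, auto_ip=False)
  except Exception:
      # The exception carries no quota details; re-fetch the server
      # and inspect its fault, which explains the failure.
      server = conn.get_server_by_id(server.id)
      fault = (server.get('fault') or {}).get('message', '')
      if 'quota' in fault.lower():
          raise Exception("Quota exceeded: %s" % fault)
      raise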
Change-Id: I6d832d4dbe348646cd4fb49ee7cb5f6a6ad343cf
We are very close to being able to remove our fedora-28 images as this
release is now EOL. Use fedora-29 in the openshift job so that we can
remove fedora-28 without affecting nodepool's testing.
Change-Id: I1983d9408d038a9431159d75f1d6e2dc47a0456f
We don't have image management for AWS, so set the manage_images
property on aws providers to false.
This corrects a problem with min-ready calculations, which check that
the image is ready by checking for the pool_label diskimage
attribute.
Change-Id: I698f8cb1c6ac2969ce94830510fead8918f8a55e
This change reports which network cannot be found, and prevents
this exception from occurring:
File "nodepool/driver/utils.py", line 70, in run
self.launch()
File "nodepool/driver/openstack/handler.py", line 249, in launch
self._launchNode()
File "nodepool/driver/openstack/handler.py", line 145, in _launchNode
userdata=self.label.userdata)
File "nodepool/driver/openstack/provider.py", line 316, in createServer
net_id = self.findNetwork(network)['id']
TypeError: 'NoneType' object is not subscriptable
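The guard amounts to something like this (a sketch using the names
from the traceback; the exact message may differ):

  def resolve_network_id(provider, network_name):
      # findNetwork() returns None for unknown networks, which
      # previously caused the TypeError above when subscripted.
      network = provider.findNetwork(network_name)
      if network is None:
          raise RuntimeError("Couldn't find network '%s'" % network_name)
      return network['id']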
Change-Id: Ic5fbce7c0c7ea2fc35c866f1e5dbec22b4cc0ef6
This change allows you to specify a dib-cmd parameter for disk images,
which overrides the default call to "disk-image-create". This
effectively lets you choose the disk-image-create binary to be called
for each configured disk image.
It is inspired by a couple of things:
The "--fake" argument to nodepool-builder has always been a bit of a
wart; a case of testing-only functionality leaking across into the
production code. It would be clearer if the tests used exposed
methods to configure themselves to use the fake builder.
Because disk-image-create is called from the $PATH, it is more
difficult to use nodepool from a virtualenv. You cannot just run
"nodepool-builder"; you have to ". activate" the virtualenv before
running the daemon so that the path is set to find the virtualenv
disk-image-create.
When Ie0e24fa67b948a294aa46f8164b077c8670b4025 addressed these
activation issues by automatically choosing the in-virtualenv binary,
it was pointed out that others are already using wrappers in various
ways where preferring the co-installed virtualenv version would break.
With this, such users can ensure they call the "disk-image-create"
binary they want. We can then make a change to prefer the
co-installed version without fear of breaking.
In theory, there's no reason why a totally separate
"/custom/venv/bin/disk-image-create" would not be valid if you
required a customised dib for some reason for just one image. This is
not currently possible, even modulo PATH hacks, etc.; all images will
use the same binary to build. It is for this flexibility that I think
this is best placed at the diskimage level, rather than as, say, a
global setting for the whole builder instance.
Thus add a dib-cmd option for diskimages. In the testing case, this
points to the fake-image-create script, and the --fake command-line
option and related bits are removed.
It should have no backwards-compatibility effects; documentation and
a release note are added.
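For example (path and elements illustrative), a diskimage can name
whatever binary it needs:

  diskimages:
    - name: fedora-29
      dib-cmd: /custom/venv/bin/disk-image-create
      elements:
        - fedora-minimal
        - vm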
Change-Id: I6677e11823df72f8c69973c83039a987b67eb2af
This is logged everywhere else except in these locations, and having
this would have helped identify an image leak.
Change-Id: If19c06ee7723b30a4957c7ff610431c2c0965f5a
I think we are beyond the point where logging this all the time is
useful. It greatly increases the log size and makes searching logs
difficult.
Change-Id: I26133473a00363eb003477ad3be1969c3137208a
When using the tasks API, openstacksdk was unable to delete an image
from the provider. This is fixed in 0.32.0.
Change-Id: I97ebc2bc4ad458926b400c1ccf12e5853fc18761
This element redirects the systemd journal to the console. It can be
helpful if the inner VM isn't booting; when it times out nodepool will
dump the console into the logs. With this enabled, you'll be able to
see various useful output such as NetworkManager getting addresses,
etc.
Depends-On: https://review.opendev.org/669784
Change-Id: I608b8cd9e467351c996fc5f51f15ee67f4570ba3
As a follow on to If1434378b325d6115b45e66b3c42c824e083100e, enable
the debug flag for these tests to get DEBUG level output from nodepool
launcher/executor.
This is particularly helpful for booting tests because the console of
any nodes that fail to boot is dumped at DEBUG level. Otherwise they
disappear without a trace.
Change-Id: Ib626b0417dd155dc4e404a7e474253e8dcb67cf9
Nodepool container fails with error message:
whoami: extra operand ‘/dev/null’
Try 'whoami --help' for more information.
Change-Id: I7ef5b6527eb08d00b9b27e37b5d5b5dce69bb4ef
These are fairly stable and no longer take exceptionally long
(compared to other lengths we endure). Since we no longer have clean
check, it's more important to gate on these now.
Change-Id: I888e7a5839fcd4ac62068578df418d55c29372da
Nodes in a static pool can potentially have multiple labels. The
current implementation pauses any further request processing when a
label cannot be served immediately.
With the proposed change we re-use the existing launch logic to
create a placeholder node in BUILDING state. As soon as a used node is
re-registered, this placeholder node will be used and the pending
request fulfilled.
This allows having a single pool for all static nodes, which prevents
orphaned ZooKeeper entries. Those can happen when, e.g., a
label-specific pool is removed or renamed.
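With this in place, a single pool can carry multi-label nodes, along
these lines (hostname and labels illustrative):

  providers:
    - name: static-provider
      driver: static
      pools:
        - name: main
          nodes:
            - name: host-1.example.com
              labels:
                - small-node
                - large-node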
Change-Id: If69cbeb33b8abb4f8e36146b9d849403dccb7d32
This flag when set to true will enable debug logging in the nodepool
services (builder and launcher). It is optional; when not set, logs
are captured at INFO level and higher (the existing behavior).
Change-Id: If1434378b325d6115b45e66b3c42c824e083100e
Co-Authored-By: Ian Wienand <iwienand@redhat.com>
Co-Authored-By: James E. Blair <jeblair@redhat.com>
When providing multiple static labels, users should create multiple
pools to avoid requests being queued.
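For example (hostnames and labels illustrative), one pool per label:

  pools:
    - name: pool-small
      nodes:
        - name: host-1.example.com
          labels:
            - small-node
    - name: pool-large
      nodes:
        - name: host-2.example.com
          labels:
            - large-node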
Change-Id: I0fdca7f1409d80d82f00094c7eb889f7d92a8d59
When running within a Kubernetes cluster, it's easier to configure a
service account and mount it in the nodepool pod than to create a
kubeconfig file and make it available to nodepool.
For this reason, this commit adds the ability to load configs from the
in-cluster service account paths. It does this as a fallback when the
kubeconfig path doesn't exist.
This commit also makes `context` a non-required configuration option,
since it's not needed when a service account is used.
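A minimal sketch of the fallback using the standard kubernetes Python
client (not the exact nodepool code):

  import os
  from kubernetes import config

  def load_kube_settings(kubeconfig_path, context=None):
      if os.path.exists(kubeconfig_path):
          config.load_kube_config(config_file=kubeconfig_path,
                                  context=context)
      else:
          # Inside a pod, fall back to the mounted service account;
          # no context is needed in this case.
          config.load_incluster_config()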
Change-Id: I7762940993c185c17d7468df72dff22e99d7f8c2
Rather than implement functional OpenStack testing as a devstack
plugin, use devstack as a black-box OpenStack. This allows us
to move nodepool to a new Zuul tenant without depending on the
devstack job and all its associated projects. It will also let
us replace devstack with something potentially lighter weight in
the future.
The job permits customized settings for what images to build, so
that the DIB project can inherit from it and make child jobs for
each of the operating systems it cares about.
Change-Id: Ie6bc891cebd32b3d1bb646109f13ac2fd383bba5
If we set the interval too short (currently 3m), we may delete
a port which simply hasn't been attached to an instance yet if
instance creation is proceeding slowly.
Change-Id: I372e45f2442003369ab9057e1e5d468249e23dad
It looks like 0.9.0 has been released, and it seems to require
creating a dynamic client now. We're seeing jobs using nodepool fail
with:
File "... /nodepool/driver/openshift/provider.py", line 21, in <module>
from openshift import client as os_client
ImportError: cannot import name 'client' from 'openshift'
This does break things like the project-config gate. Pin to 0.8.9 for
now until it is fixed.
Change-Id: Ie9ccb56f987b4d05c6fe9d4938ca74a110f55653
This change modifies the static driver's registration logic to take
the username and connection-port into account when identifying a
unique node, so that a single hostname can be registered multiple
times with different users or ports.
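So a pool like this (values illustrative) now registers two distinct
nodes:

  pools:
    - name: main
      nodes:
        - name: host.example.com
          username: zuul
          connection-port: 22
          labels:
            - small-node
        - name: host.example.com
          username: admin
          connection-port: 2222
          labels:
            - large-node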
Change-Id: I8f770e13c0d00d65094abccee30e54ec0e6371b6