The default ZooKeeper session timeout is 10 seconds, which is not enough
on a highly loaded nodepool. As in Zuul, make this configurable so we
can avoid session losses.
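A sketch of the resulting configuration, assuming the option is exposed
in nodepool.yaml as zookeeper-timeout (seconds; the name is an assumption
here, the servers and value are illustrative):
  zookeeper-timeout: 60
  zookeeper-servers:
    - host: zk1.example.com
      port: 2181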
Change-Id: Id7087141174c84c6cdcbb3933c233f5fa0e7d569
This change adds the option to put quota on resources on a per-tenant
basis (i.e. Zuul tenants).
It adds a new top-level config structure ``tenant-resource-limits``
under which one can specify a number of tenants, each with
``max-servers``, ``max-cores``, and ``max-ram`` limits. These limits
apply globally, i.e., across all providers. This is in contrast to
the existing provider and pool quotas, which are only considered
for nodes of the same provider.
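For example (limit values are illustrative, and the per-entry key is
assumed here to be tenant-name):
  tenant-resource-limits:
    - tenant-name: example-tenant
      max-servers: 10
      max-cores: 200
      max-ram: 800000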
Change-Id: I0c0154db7d5edaa91a9fe21ebf6936e14cef4db7
This change enables setting configuration values through
environment variables. This is useful for managing user-defined
configuration, such as user passwords, in Kubernetes deployments.
Change-Id: Iafbb63ebbb388ef3038f45fd3a929c3e7e2dc343
While YAML does have inbuilt support for anchors to greatly reduce
duplicated sections, anchors have no support for merging values. For
diskimages, this can result in a lot of duplicated values for each
image that cannot otherwise be avoided.
This provides two new values for diskimages: "parent" and
"abstract".
Specifying a parent means you inherit all the configuration values
from that image. Anything specified within the child image overwrites
the parent values as you would expect; caveats, as described in the
documentation, are that the elements field appends and the env-vars
field has update() semantics.
An "abstract" diskimage is not instantiated into a real image, it is
only used for configuration inheritance. This way you can make a
abstrat "base" image with common values and inherit that everywhere
without having to worry about bringing in values you don't want.
You can also chain parents together and the inheritance flows through.
Documentation is updated, and several tests are added to ensure the
correct parenting, merging and override behaviour of the new values.
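An illustrative sketch of the new values (image names, elements and
variables are made up):
  diskimages:
    - name: base
      abstract: true
      elements:
        - ubuntu-minimal
        - nodepool-base
      env-vars:
        TMPDIR: /opt/dib_tmp
    - name: ubuntu-focal
      parent: base
      elements:
        - growroot          # appended to the parent's elements
      env-vars:
        DIB_RELEASE: focal  # merged into the parent's env-vars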
Change-Id: I170016ef7d8443b9830912b9b0667370e6afcde7
Ensure 'name' is a primary key for diskimages.
Change the constructor to take the name as an argument. Update the
config validator to ensure there is a name, and that it is unique.
Add tests for both these cases.
Change-Id: I3931dc1457c023154cde0df2bb7b0a41cc6f20d3
We broke nodepool configuration with
I3795fee1530045363e3f629f0793cbe6e95c23ca by not having the labels
defined in the OpenStack provider in the top-level label list.
The added check here would have found such a case.
The validate() function is reworked slightly; previously it would
return various exceptions from the tools it was calling (YAML,
voluptuous, etc.). Now that we have more testing (and I'd imagine we
could add even more, similar validations), we would have to keep adding
exception types. Just make the function return a value; this also
ensures the regular exit paths are taken from the caller in
nodepoolcmd.py, rather than dying with an exception at an arbitrary point.
A unit test is added.
Co-Authored-By: Mohammed Naser <mnaser@vexxhost.com>
Change-Id: I5455f5d7eb07abea34c11a3026d630dee62f2185
This change allows you to specify a dib-cmd parameter for disk images,
which overrides the default call to "disk-image-create". This allows
you to choose which disk-image-create binary is called for each
configured disk image.
It is inspired by a couple of things:
The "--fake" argument to nodepool-builder has always been a bit of a
wart; a case of testing-only functionality leaking across into the
production code. It would be clearer if the tests used exposed
methods to configure themselves to use the fake builder.
Because disk-image-create is called from the $PATH, it is more
difficult to use nodepool from a virtualenv. You cannot just run
"nodepool-builder"; you have to ". activate" the virtualenv before
running the daemon so that the PATH is set to find the virtualenv's
disk-image-create.
In addressing activation issues by automatically choosing the
in-virtualenv binary in Ie0e24fa67b948a294aa46f8164b077c8670b4025, it
was pointed out that others are already using wrappers in various ways
where preferring the co-installed virtualenv version would break.
With this, such users can ensure they call the "disk-image-create"
binary they want. We can then make a change to prefer the
co-installed version without fear of breaking.
In theory, there's no reason why a totally separate
"/custom/venv/bin/disk-image-create" would not be valid if you
required a customised dib for just one image. This is not currently
possible; even modulo PATH hacks, all images will use the same binary
to build. It is for this flexibility that I think this is best at the
diskimage level, rather than as, say, a global setting for the whole
builder instance.
Thus add a dib-cmd option for diskimages. In the testing case, this
points to the fake-image-create script, and the --fake command-line
option and related bits are removed.
It should have no backwards-compatibility effects; documentation and a
release note are added.
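A sketch of the new option (the image name and wrapper path are
illustrative):
  diskimages:
    - name: ubuntu-focal
      dib-cmd: /usr/local/bin/my-disk-image-create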
Change-Id: I6677e11823df72f8c69973c83039a987b67eb2af
This change adds a new python_path Node attribute so that the Zuul
executor can remove the default hard-coded ansible_python_interpreter.
Change-Id: Iddf2cc6b2df579636ec39b091edcfe85a4a4ed10
Change Ie14935f604f23b0928eed0dd8e28dff49699a2d1 altered one use of
this method, but this one was missed.
Change-Id: I299a12d73a6524f5097712f97342aed640786eea
This reverts commit ccf40a462a.
The previous version would not work properly when daemonized
because there was no stdout. This version maintains stdout and
uses select/poll with non-blocking stdout to capture the output
to a log file.
Depends-On: https://review.openstack.org/634266
Change-Id: I7f0617b91e071294fe6051d14475ead1d7df56b7
A builder thread can wedge if the build process wedges. Add a timeout
to the subprocess. Since it was the call to readline() that would block,
we change the process to have DIB write directly to the log. This allows
us to set a timeout in the Popen.wait() call. We also kill the dib
subprocess.
The timeout value can be controlled in the diskimage configuration and
defaults to 8 hours.
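A sketch, assuming the option is exposed on the diskimage as
build-timeout and takes seconds (the name is an assumption here):
  diskimages:
    - name: ubuntu-focal
      build-timeout: 28800   # 8 hours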
Change-Id: I188e8a74dc39b55a4b50ade5c1a96832fea76a7d
Adds a ProviderConfig class method that can be called to get
the config schema for the common config options in a Provider.
Drivers are modified to call this method.
Change-Id: Ib67256dddc06d13eb7683226edaa8c8c10a73326
Introduce a new configuration setting, "max_hold_age", that specifies
the maximum uptime of held instances. If set to 0, held instances
are kept until manually deleted. A custom value can be provided
at the rpcclient level.
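A sketch, assuming the setting is exposed in nodepool.yaml as
max-hold-age (seconds; the YAML spelling is an assumption here):
  max-hold-age: 86400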
Change-Id: I9a09728e5728c537ee44721f5d5e774dc0dcefa7
This updates the builder to store individual build logs in dedicated
files, one per build, named for the image and build id. Old logs are
automatically pruned. By default, they are stored in
/var/log/nodepool/builds, but this can be changed.
This removes the need to specially configure a logging handler for the
image build logs.
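A sketch, assuming the options are exposed as build-log-dir and
build-log-retention (both names are assumptions here; the retention
value is illustrative):
  build-log-dir: /var/log/nodepool/builds
  build-log-retention: 7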
Change-Id: Ia7415d2fbbb320f8eddc4e46c3a055414df5f997
This change adds a new ProviderConfig driver interface so that drivers
can load and validate their config.
This change also adds a new provider abstract method 'cleanupLeakedResources'
that the openstack driver implements to clean up floating IPs. This removes
the need for a shared clean-floating-ip provider config option.
Change-Id: I20319aa660ebf5fbe8df5d6af1d77028e1b18350
When we deploy nodepool and zuul instances in virtual machines of a
cloud provider, the provisioned nodes may be on the same internal
network as the nodepool and zuul instances. In that case we don't
have to allocate floating IPs for the nodes; zuul can talk to them
via the fixed IPs of the virtual machines. Being able to customize
this behavior and save floating IP quota is therefore useful.
Note: Although the "floating_ip_source: None" option in clouds.yaml can
control whether floating IPs are used for a given cloud, it affects
all the SDKs and tools that use that clouds.yaml; we should be able to
control nodepool's behavior flexibly and independently.
This patch adds a boolean option, "auto-floating-ip", to each pool in
the "provider" section of nodepool.conf.
Change-Id: Ia9a1bed6dd4f6e39015bde660f52e4cd6addb26e
The username should be included in the stored information so that when
this is passed over to zuul, it can ssh as the correct user.
Change-Id: Ife0daa79f319aea04ed32513f99c73c460156941
The cloud-image name is currently used both to specify the image
in the cloud, and also as a cross-referencing key within the
nodepool config. As such, it ends up being repeated within the config
(possibly quite often in large configurations).
Separate these functions so that an image can be identified once in
a cloud provider, and referenced from multiple labels with the internal
key. This makes for improved readability in some cases (such as long
cloud image names, or specifying images by uuid), and reduces churn
when cloud image identifiers change.
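A sketch of the separated form, assuming a provider-level cloud-images
list with an image-id (or image-name) identifier that pool labels
reference via cloud-image (names and UUID are illustrative):
  providers:
    - name: example-cloud
      cloud-images:
        - name: focal
          image-id: 8f3943c4-2c4a-4b9d-8f2e-7e0d0e9a1b2c
      pools:
        - name: main
          labels:
            - name: ubuntu-focal
              cloud-image: focal
              min-ram: 8192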
Change-Id: I83f2902be4b9b73a949461b7f14da548066b9562
This change adds a webapp section to nodepool.yaml to enable custom
settings for port and listen_address.
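For example (values are illustrative):
  webapp:
    port: 8005
    listen_address: '0.0.0.0'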
Change-Id: I0f41a0b131bc2a09c47a448c65471e052c0a9e88
For example, a cloud may get better performance from a cinder volume
than the local compute drive. As a result, give nodepool the option to
choose whether the server should boot from a volume or not.
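A sketch, assuming the options are exposed as boot-from-volume and
volume-size (GB) on a pool label (names and values are illustrative):
  labels:
    - name: ubuntu-focal
      boot-from-volume: true
      volume-size: 80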
Change-Id: I3faefe99096fef1fe28816ac0a4b28c05ff7f0ec
Depends-On: If58cd96b0b9ce4569120d60fbceb2c23b2f7641d
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Sadly, I missed this in our previous commit. Also update coverage from
the nodepool dsvm job.
Change-Id: I6966957ac8162a588531c38bd69a93fb58a15258
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
This adds the max-ready-age setting to the label config. With this one
can specify how long nodes should live unused in READY state. This
enables the following use cases:
- When switching nodepool between a 'working-hours' and a
  'non-working-hours' configuration with high or low min-ready
  settings, this can trigger a (delayed) scale-down of unused
  resources. This can be important when using a cloud provider with
  an on-demand billing model.
- Renewing old nodes without having to run a job on them. This can be
  useful for capping the age of the cached data inside the nodes.
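For example (label name and values are illustrative):
  labels:
    - name: ubuntu-focal
      min-ready: 2
      max-ready-age: 3600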
Change-Id: Id705f0a5e478ab658ed3a396f92d6eb6694c1c8f
In order to support putting fewer things into images via puppet in Infra,
we'd like to be able to pre-populate our clouds with keypairs for the
infra-root accounts and have nova add those at boot time.
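A purely illustrative sketch, assuming the option is exposed as
key-name on a pool label (both the option name and its placement are
assumptions here):
  pools:
    - name: main
      labels:
        - name: ubuntu-xenial
          key-name: infra-root-keys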
Change-Id: I9e2c990040342de722f68de09f273005f57a699f
It may be easier for a nodepool user to just specify the name or id
of a flavor in their config instead of the combination of min-ram
and name-filter.
In order not to have two name-related items, and also not to have the
pure flavor-name case use a term called "name-filter", change
name-filter to flavor-name and introduce the following semantics: if
flavor-name is given by itself, it will look for an exact match on
flavor name or id; if it is given along with min-ram, it will behave
as name-filter already did.
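A sketch of both forms, shown here on pool labels (label and flavor
names are illustrative):
  labels:
    - name: small-node
      flavor-name: m1.small      # exact match on flavor name or id
    - name: big-node
      min-ram: 8192
      flavor-name: performance   # filters like the old name-filter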
Change-Id: I8b98314958d03818ceca5abf4e3b537c8998f248
We require clouds.yaml files now. It's just the way it is. If we don't
have one, os-client-config will become unpleased - but it will do so in
a hard-to-understand error message (that's the best we can do there for
$reasons) ... so make sure that we present a config validation error and
not "keystoneauth1.exceptions.auth_plugins.MissingRequiredOptions: Auth
plugin requires parameters which were not given: auth_url"
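For reference, a minimal clouds.yaml sketch (all values are
illustrative):
  clouds:
    mycloud:
      auth:
        auth_url: https://keystone.example.com/v3
        username: nodepool
        password: secret
        project_name: nodepool
      region_name: RegionOne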
Change-Id: I84e36400f38eecd5d798b772c09d768002f535f5
The nodepool_id feature may need to be removed. I've kept it to simplify
merging both now and if we do it again later.
A couple of the tests are disabled and need reworking in a subsequent
commit.
Change-Id: I948f9f69ad911778fabb1c498aebd23acce8c89c
shade/occ have a force-ipv4 setting which can be used to change
autodetected behavior, but also have detection for ipv6 viability.
This makes us aggressively use IPv6 and only use v4 if v6 is not
available or has been explicitly disabled. Yay us.
Incidentally, this should also help people use zuul in places that are
completely non-public: a zuul running in a cloud with a private
network, spinning up nodes that only have private networks, means
public_v4 won't really have anything in it - but clouds.yaml supports
a private=True setting which will cause the private IP to be listed as
the desired IP.
Change-Id: I2b4d992e3b21c00cefe98023267347c02dd961dc
This was an unused setting which was left over from when we supported
snapshots.
Change-Id: I940eaa57f5dad8761752d767c0dfa80f2a25c787
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Before os-client-config and shade, we would include cloud credentials
in nodepool.yaml. But now comes the time when we can remove these
settings in favor of using a local clouds.yaml file.
Change-Id: Ie7af6dcd56dc48787f280816de939d07800e9d11
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
As we move forward with zuulv3, we no longer need the ability to SSH
into a node from nodepool-launcher. This means we can remove SSH
private keys from production servers. Now we only keyscan the node and
pass the info to zuul to do SSH operations.
We also create our own socket now for paramiko, so we can better
control the exception handling.
Change-Id: I123631aa41fd3db374ef78cf97a8b8afde93f699
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
It no longer makes sense to have nodepool execute 'ready-scripts' on a
remote node. With zuulv3, we have ansible and are able to convert our
ready-scripts into ansible-playbooks.
Change-Id: I07b63a16a668bb9a37fb3f763ac29f307f6c3a65
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
Even though there is nothing to read from secure.conf anymore, it
is kept around intact since we may want to use this for ZooKeeper
credentials at some point.
Change-Id: Ieb3a93b09c889f74da3463494957335aaaa9f40f
Remove files and fakes related to Jenkins. Since the 'targets'
config section was for mapping to Jenkins, this isn't needed either.
Change-Id: Ib5c615a95fcdce5234b3c63957171d77b8fbc65d