Configuration
Nodepool reads its configuration from
/etc/nodepool/nodepool.yaml
by default. The configuration
file follows the standard YAML syntax with a number of sections defined
with top level keys. For example, a full configuration file may have the
diskimages, labels, and providers sections:
diskimages:
  ...
labels:
  ...
providers:
  ...
The following sections are available. All are required unless otherwise indicated.
Options
webapp
Define the webapp endpoint port and listen address
port
The port to provide basic status information
listen_address
Listen address for web app
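For example, a webapp section might look like the following; the port and listen address shown here are illustrative values, not documented defaults:
webapp:
  # illustrative values; adjust for your deployment
  port: 8005
  listen_address: '0.0.0.0'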
elements-dir
If an image is configured to use diskimage-builder and glance to
locally create and upload images, then a collection of diskimage-builder
elements must be present. The elements-dir parameter indicates a directory that holds one or more elements.
images-dir
When nodepool builds images using diskimage-builder, they must be written somewhere. The images-dir parameter specifies the directory in which to write them.
Note
The builder daemon creates a UUID to uniquely identify itself and to mark the image builds in ZooKeeper that it owns. The UUID is stored in a file named builder_id.txt in the directory named by the images-dir option. If this file does not exist, it will be created on builder startup and a UUID will be generated automatically.
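For example, the builder directories described above might be configured as follows; the paths are illustrative:
# illustrative paths
elements-dir: /etc/nodepool/elements
images-dir: /opt/nodepool_dib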
build-log-dir
The builder will store build logs in this directory. It will create
one file for each build, named <image>-<build-id>.log; for
example, fedora-0000000004.log. It defaults to /var/log/nodepool/builds.
build-log-retention
At the start of each build, the builder will remove old build logs if they exceed this value. This option specifies how many will be kept (usually you will see one more, as deletion happens before starting a new build). By default, the last 7 old build logs are kept.
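For example, to keep the documented default log location while retaining more old build logs (the retention value here is illustrative):
build-log-dir: /var/log/nodepool/builds
build-log-retention: 14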
zookeeper-servers
Lists the ZooKeeper servers used for coordinating information between nodepool workers.
zookeeper-servers:
  - host: zk1.example.com
    port: 2181
    chroot: /nodepool
Each entry is a dictionary with the following keys
host
A zookeeper host
port
Port to talk to zookeeper
chroot
The chroot
key, used for interpreting ZooKeeper paths
relative to the supplied root path, is also optional and has no
default.
labels
Defines the types of nodes that should be created. Jobs should be written to run on nodes of a certain label. Example:
labels:
  - name: my-precise
    max-ready-age: 3600
    min-ready: 2
  - name: multi-precise
    min-ready: 2
Each entry is a dictionary with the following keys
name
Unique name used to tie jobs to those instances.
max-ready-age
Maximum number of seconds the node shall be in ready state. If this is exceeded the node will be deleted. A value of 0 disables this.
min-ready
Minimum number of instances that should be in a ready state. Nodepool always creates more nodes as necessary in response to demand, but setting min-ready can speed processing by attempting to keep nodes on-hand and ready for immediate use. min-ready is best-effort based on available capacity and is not a guaranteed allocation. The default of 0 means that nodepool will only create nodes of this label when there is demand. Set to -1 to have the label considered disabled, so that no nodes will be created at all (see the example after this list of keys).
max-hold-age
Maximum number of seconds a node shall be in "hold" state. If this is exceeded the node will be deleted. A value of 0 disables this.
This setting is applied to all nodes, regardless of label or provider.
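For example, the following sketch keeps two my-precise nodes ready at all times while disabling a hypothetical old-precise label entirely:
labels:
  - name: my-precise
    max-ready-age: 3600
    min-ready: 2
  - name: old-precise
    # -1 marks the label disabled; no nodes will be created for it
    min-ready: -1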
diskimages
This section lists the images to be built using diskimage-builder.
The name of the diskimage is mapped to the providers.[openstack].diskimages section of the provider, to determine which providers should receive uploads of each image. The diskimage will be built in every format required by the providers with which it is associated. Because Nodepool needs to know which formats to build, the diskimage will only be built if it appears in at least one provider.
To remove a diskimage from the system entirely, remove all associated
entries in providers.[openstack].diskimages
and remove its entry
from diskimages. All uploads will be
deleted as well as the files on disk.
diskimages:
  - name: ubuntu-precise
    pause: False
    rebuild-age: 86400
    elements:
      - ubuntu-minimal
      - vm
      - simple-init
      - openstack-repos
      - nodepool-base
      - cache-devstack
      - cache-bindep
      - growroot
      - infra-package-needs
    release: precise
    username: zuul
    env-vars:
      TMPDIR: /opt/dib_tmp
      DIB_CHECKSUM: '1'
      DIB_IMAGE_CACHE: /opt/dib_cache
      DIB_APT_LOCAL_CACHE: '0'
      DIB_DISABLE_APT_CLEANUP: '1'
      FS_TYPE: ext3
  - name: ubuntu-xenial
    pause: True
    rebuild-age: 86400
    formats:
      - raw
      - tar
    elements:
      - ubuntu-minimal
      - vm
      - simple-init
      - openstack-repos
      - nodepool-base
      - cache-devstack
      - cache-bindep
      - growroot
      - infra-package-needs
    release: xenial
    username: ubuntu
    env-vars:
      TMPDIR: /opt/dib_tmp
      DIB_CHECKSUM: '1'
      DIB_IMAGE_CACHE: /opt/dib_cache
      DIB_APT_LOCAL_CACHE: '0'
      DIB_DISABLE_APT_CLEANUP: '1'
      FS_TYPE: ext3
Each entry is a dictionary with the following keys
name
Identifier to reference the disk image in providers.[openstack].diskimages and labels.
formats
The list of formats to build is normally automatically created based on the needs of the providers to which the image is uploaded. To build images even when no providers are configured or to build additional formats which you know you may need in the future, list those formats here.
rebuild-age
If the current diskimage is older than this value (in seconds), then nodepool will attempt to rebuild it. Defaults to 86400 (24 hours).
release
Specifies the distro to be used as a base image to build the image using diskimage-builder.
elements
Enumerates all the elements that will be included when building the image; these elements are looked up under the elements-dir path referenced in the same config file.
env-vars
Arbitrary environment variables that will be available in the spawned diskimage-builder child process.
pause
When set to True, nodepool-builder
will not build the
diskimage.
username
The username that a consumer should use when connecting to the node.
providers
Lists the providers Nodepool should use. Each provider is associated with a driver listed below.
Each entry is a dictionary with the following keys
name
Name of the provider
max-concurrency
Maximum number of node requests that this provider is allowed to handle concurrently. The default, if not specified, is to have no maximum. Since each node request is handled by a separate thread, this can be useful for limiting the number of threads used by the nodepool-launcher daemon.
driver
The driver type.
openstack
For details on the extra options required and provided by the
OpenStack driver, see the separate section providers.[openstack]
static
For details on the extra options required and provided by the static
driver, see the separate section providers.[static]
kubernetes
For details on the extra options required and provided by the
kubernetes driver, see the separate section providers.[kubernetes]
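For example, the provider-level keys described above could be combined as follows; the provider name and concurrency limit are illustrative, and the driver-specific keys are covered in the driver sections below:
providers:
  - name: provider1
    driver: openstack
    # illustrative: handle at most 10 node requests at once
    max-concurrency: 10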
OpenStack Driver
Selecting the OpenStack driver adds the following options to the
providers
section of
the configuration.
providers.[openstack]
Specifying the openstack
driver for a provider adds the
following keys to the providers
configuration.
Note
For documentation purposes the option names are prefixed
providers.[openstack]
to disambiguate from other drivers,
but [openstack]
is not required in the configuration (e.g.
below providers.[openstack].cloud
refers to the
cloud
key in the providers
section when the
openstack
driver is selected).
An OpenStack provider's resources are partitioned into groups called
"pools" (see providers.[openstack].pools
for details), and within
a pool, the node types which are to be made available are listed (see
providers.[openstack].pools.labels
for details).
Within each OpenStack provider the available Nodepool image types are
defined (see providers.[openstack].diskimages
).
providers:
  - name: provider1
    driver: openstack
    cloud: example
    region-name: 'region1'
    rate: 1.0
    boot-timeout: 120
    launch-timeout: 900
    launch-retries: 3
    image-name-format: '{image_name}-{timestamp}'
    hostname-format: '{label.name}-{provider.name}-{node.id}'
    diskimages:
      - name: trusty
        meta:
          key: value
          key2: value
      - name: precise
      - name: devstack-trusty
    pools:
      - name: main
        max-servers: 96
        availability-zones:
          - az1
        networks:
          - some-network-name
        security-groups:
          - zuul-security-group
        labels:
          - name: trusty
            min-ram: 8192
            diskimage: trusty
            console-log: True
          - name: precise
            min-ram: 8192
            diskimage: precise
          - name: devstack-trusty
            min-ram: 8192
            diskimage: devstack-trusty
  - name: provider2
    driver: openstack
    cloud: example2
    region-name: 'region1'
    rate: 1.0
    image-name-format: '{image_name}-{timestamp}'
    hostname-format: '{label.name}-{provider.name}-{node.id}'
    diskimages:
      - name: precise
        meta:
          key: value
          key2: value
    pools:
      - name: main
        max-servers: 96
        labels:
          - name: trusty
            min-ram: 8192
            diskimage: trusty
          - name: precise
            min-ram: 8192
            diskimage: precise
          - name: devstack-trusty
            min-ram: 8192
            diskimage: devstack-trusty
cloud
Name of a cloud configured in clouds.yaml.
The instances spawned by nodepool will inherit the default security group of the project specified in the cloud definition in clouds.yaml (unless security groups are specified in the pool configuration). This means that when working with Zuul, for example, SSH traffic (TCP/22) must be allowed in the project's default security group for Zuul to be able to reach instances.
More information about the contents of clouds.yaml can be found in the os-client-config documentation (a minimal clouds.yaml sketch appears after this list of keys).
boot-timeout
Once an instance is active, how long to try connecting to the image via SSH. If the timeout is exceeded, the node launch is aborted and the instance deleted.
launch-timeout
The time to wait from issuing the command to create a new instance until that instance is reported as "active". If the timeout is exceeded, the node launch is aborted and the instance deleted.
nodepool-id
Deprecated
A unique string to identify which nodepool instances is using a provider. This is useful if you want to configure production and development instances of nodepool but share the same provider.
launch-retries
The number of times to retry launching a server before considering the job failed.
region-name
The region name if the provider cloud has multiple regions.
hostname-format
Hostname template to use for the spawned instance.
image-name-format
Format for image names that are uploaded to providers.
rate
In seconds, amount to wait between operations on the provider.
clean-floating-ips
If it is set to True, nodepool will assume it is the only user of the OpenStack project and will attempt to clean unattached floating ips that may have leaked around restarts.
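The clouds.yaml file itself is not part of the Nodepool configuration, but a minimal sketch may help show how the cloud option maps to it. The cloud name referenced by cloud must match a key under clouds; all values below are illustrative:
clouds:
  example:
    auth:
      auth_url: https://cloud.example.com:5000
      username: nodepool
      password: secret
      project_name: nodepool-project
    region_name: region1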
diskimages
Each entry in a provider's diskimages
section must correspond to an entry in diskimages
. Such an entry indicates that the
corresponding diskimage should be uploaded for use in this provider.
Additionally, any nodes that are created using the uploaded image will
have the associated attributes (such as flavor or metadata).
If an image is removed from this section, any previously uploaded images will be deleted from the provider.
diskimages:
  - name: precise
    pause: False
    meta:
      key: value
      key2: value
  - name: windows
    connection-type: winrm
    connection-port: 5986
Each entry is a dictionary with the following keys
name
Identifier used to refer to this image from the providers.[openstack].pools.labels and diskimages sections.
pause
When set to True, nodepool-builder will not upload the image to the provider.
config-drive
Whether config drive should be used for the image. Defaults to unset which will use the cloud's default behavior.
meta
Arbitrary key/value metadata to store for this server using the Nova metadata service. A maximum of five entries is allowed, and both keys and values must be 255 characters or less.
connection-type
The connection type that a consumer should use when connecting to the node. For most diskimages this is not necessary. However, when creating Windows images this could be winrm to enable access via Ansible.
connection-port
The port that a consumer should use when connecting to the node. For most diskimages this is not necessary. This defaults to 22 for ssh and 5986 for winrm.
cloud-images
Each entry in this section must refer to an entry in the labels
section.
cloud-images:
  - name: trusty-external
    config-drive: False
  - name: windows-external
    connection-type: winrm
    connection-port: 5986
Each entry is a dictionary with the following keys
name
Identifier used to refer to this cloud-image from the labels section. Since this name appears elsewhere in the nodepool configuration file, you may want to use your own descriptive name here and use one of image-id or image-name to specify the cloud image so that if the image name or id changes on the cloud, the impact to your Nodepool configuration will be minimal. However, if neither of those attributes is provided, this is also assumed to be the image name or ID in the cloud (see the additional example after this list of keys).
config-drive
Whether config drive should be used for the cloud image. Defaults to unset which will use the cloud's default behavior.
image-id
If this is provided, it is used to select the image from the cloud
provider by ID, rather than name. Mutually exclusive with providers.[openstack].cloud-images.image-name
image-name
If this is provided, it is used to select the image from the cloud
provider by this name or ID. Mutually exclusive with providers.[openstack].cloud-images.image-id
username
The username that a consumer should use when connecting to the node.
connection-type
The connection type that a consumer should use when connecting to the node. For most diskimages this is not necessary. However, when creating Windows images this could be 'winrm' to enable access via Ansible.
connection-port
The port that a consumer should use when connecting to the node. For most diskimages this is not necessary. This defaults to 22 for ssh and 5986 for winrm.
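As noted under name above, a descriptive local name can be paired with image-name so that the Nodepool configuration is insulated from image renames in the cloud. The image name and username below are illustrative:
cloud-images:
  - name: trusty-external
    # the cloud-side image is selected by its name; only this mapping
    # needs updating if that name changes
    image-name: 'Ubuntu 14.04 LTS (official)'
    username: ubuntu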
pools
A pool defines a group of resources from an OpenStack provider. Each pool has a maximum number of nodes which can be launched from it, along with a number of cloud-related attributes used when launching nodes.
pools:
  - name: main
    max-servers: 96
    availability-zones:
      - az1
    networks:
      - some-network-name
    security-groups:
      - zuul-security-group
    auto-floating-ip: False
    host-key-checking: True
    node-attributes:
      key1: value1
      key2: value2
    labels:
      - name: trusty
        min-ram: 8192
        diskimage: trusty
        console-log: True
      - name: precise
        min-ram: 8192
        diskimage: precise
      - name: devstack-trusty
        min-ram: 8192
        diskimage: devstack-trusty
Each entry is a dictionary with the following keys
name
Pool name
node-attributes
A dictionary of key-value pairs that will be stored with the node data in ZooKeeper. The keys and values can be any arbitrary string.
max-cores
Maximum number of cores usable from this pool. This can be used to limit usage of the tenant. If not defined nodepool can use all cores up to the quota of the tenant.
max-servers
Maximum number of servers spawnable from this pool. This can be used to limit the number of servers. If not defined nodepool can create as many servers as the tenant allows.
max-ram
Maximum ram usable from this pool. This can be used to limit the amount of ram allocated by nodepool. If not defined nodepool can use as much ram as the tenant allows (the quota-related limits are combined in the example after this list of keys).
ignore-provider-quota
Ignore the provider quota for this pool. Instead, only check against the configured max values for this pool and the current usage based on stored data. This may be useful in circumstances where the provider is incorrectly calculating quota.
availability-zones
A list of availability zones to use.
If this setting is omitted, nodepool will fetch the list of all availability zones from nova. To restrict nodepool to a subset of availability zones, supply a list of availability zone names in this setting.
Nodepool chooses an availability zone from the list at random when creating nodes but ensures that all nodes for a given request are placed in the same availability zone.
networks
Specify custom Neutron networks that get attached to each node. Specify the name or id of the network as a string.
security-groups
Specify custom Neutron security groups that get attached to each node. Specify the name or id of the security_group as a string.
auto-floating-ip
Specify custom behavior for allocating a floating IP to each node. When set to False, nodepool-launcher will not attach a floating IP to nodes. When zuul instances and nodes are deployed in the same internal private network, set this option to False to conserve the cloud provider's floating IPs.
host-key-checking
Specify custom behavior of validation of SSH host keys. When set to False, nodepool-launcher will not ssh-keyscan nodes after they are booted. This might be needed if nodepool-launcher and the nodes it launches are on different networks. The default value is True.
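For example, a pool can be capped by server count, cores, and ram at the same time; the limits shown are illustrative:
pools:
  - name: main
    # illustrative quota caps for this pool
    max-servers: 96
    max-cores: 200
    max-ram: 524288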
labels
Each entry in a pool's labels section indicates that the corresponding label is available for use in this pool. When creating nodes for a label, the flavor-related attributes in that label's section will be used.
labels:
  - name: precise
    min-ram: 8192
    flavor-name: 'something to match'
    console-log: True
Each entry is a dictionary with the following keys
name
Identifier used to refer to this label; it corresponds to entries in the labels and diskimages sections.
diskimage
Refers to provider's diskimages, see providers.[openstack].diskimages. Mutually exclusive with providers.[openstack].pools.labels.cloud-image
cloud-image
Refers to the name of an externally managed image in the cloud that already exists on the provider. The value of cloud-image should match the name of a previously configured entry from the cloud-images section of the provider. See providers.[openstack].cloud-images. Mutually exclusive with providers.[openstack].pools.labels.diskimage
flavor-name
Name or id of the flavor to use. If providers.[openstack].pools.labels.min-ram is omitted, it must be an exact match. If providers.[openstack].pools.labels.min-ram is given, flavor-name will be used to find flavor names that meet providers.[openstack].pools.labels.min-ram and also contain flavor-name.
One of providers.[openstack].pools.labels.min-ram or providers.[openstack].pools.labels.flavor-name must be specified.
min-ram
Determine the flavor to use (e.g. m1.medium, m1.large, etc). The smallest flavor that meets the min-ram requirements will be chosen.
One of providers.[openstack].pools.labels.min-ram or providers.[openstack].pools.labels.flavor-name must be specified (see the example after this list of keys).
boot-from-volume
If given, the label for use in this pool will create a volume from the image and boot the node from it.
key-name
If given, is the name of a keypair that will be used when booting each server.
console-log
On the failure of the ssh ready check, download the server console log to aid in debugging the problem.
volume-size
When booting an image from volume, how large the created volume should be.
instance-properties
A dictionary of key/value properties to set when booting each server. These properties become available via the meta-data on the active server (e.g. within config-drive:openstack/latest/meta_data.json).
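For example, the following pool labels sketch the two flavor-selection styles described above, together with boot-from-volume, key-name and instance-properties; all names and values are illustrative:
labels:
  - name: precise
    # min-ram omitted, so flavor-name must be an exact match
    flavor-name: 'm1.medium'
    diskimage: precise
    key-name: zuul-key
  - name: trusty
    # smallest flavor meeting min-ram whose name contains 'performance'
    min-ram: 8192
    flavor-name: 'performance'
    diskimage: trusty
    # boot from a volume created from the image (size value is illustrative)
    boot-from-volume: True
    volume-size: 80
    instance-properties:
      cost-center: ci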
Static Driver
Selecting the static driver adds the following options to the providers
section of the
configuration.
providers.[static]
The static provider driver is used to define static nodes.
Note
For documentation purposes the option names are prefixed
providers.[static]
to disambiguate from other drivers, but
[static]
is not required in the configuration (e.g. below
providers.[static].pools
refers to the pools
key in the providers
section when the static
driver is selected).
Example:
providers:
  - name: static-rack
    driver: static
    pools:
      - name: main
        nodes:
          - name: trusty.example.com
            labels: trusty-static
            host-key: fake-key
            timeout: 13
            connection-port: 22022
            username: zuul
            max-parallel-jobs: 1
pools
Each entry in a pool's nodes section indicates a static node and its corresponding label.
Note
Although you may define more than one pool, it is essentially useless
to do so since a node's name
must be unique across all
pools.
Each entry is a dictionary with entries as follows
name
The hostname or ip address of the static node. This must be unique across all nodes defined within the configuration file.
labels
The list of labels associated with the node.
username
The username nodepool will use to validate it can connect to the node.
timeout
The timeout in seconds before the ssh ping is considered failed.
host-key
The ssh host key of the node.
connection-type
The connection type that a consumer should use when connecting to the node. Supported values are winrm and ssh.
connection-port
The port that a consumer should use when connecting to the node. For
most nodes this is not necessary. This defaults to 22 when
connection-type
is 'ssh' and 5986 when it is 'winrm'.
max-parallel-jobs
The number of jobs that can run in parallel on this node.
Kubernetes Driver
Selecting the kubernetes driver adds the following options to the
providers
section of
the configuration.
providers.[kubernetes]
A Kubernetes provider's resources are partitioned into groups called
pools (see providers.[kubernetes].pools
for details), and within
a pool, the node types which are to be made available are listed (see
providers.[kubernetes].labels
for details).
Note
For documentation purposes the option names are prefixed
providers.[kubernetes]
to disambiguate from other drivers,
but [kubernetes]
is not required in the configuration (e.g.
below providers.[kubernetes].pools
refers to the
pools
key in the providers
section when the
kubernetes
driver is selected).
Example:
providers:
  - name: kubespray
    driver: kubernetes
    context: admin-cluster.local
    pools:
      - name: main
        labels:
          - name: kubernetes-namespace
            type: namespace
          - name: pod-fedora
            type: pod
            image: docker.io/fedora:28
context
Name of the context configured in kube/config.
Before using the driver, Nodepool services need a
kube/config
file manually installed with cluster admin
context.
launch-retries
The number of times to retry launching a node before considering the job failed.
pools
A pool defines a group of resources from a Kubernetes provider.
name
Namespaces are prefixed with the pool's name.
labels
Each entry in a pool's labels section indicates that the corresponding label is available for use in this pool.
Each entry is a dictionary with the following keys
name
Identifier for this label; references an entry in the labels
section.
type
The Kubernetes provider supports two types of labels:
namespace
Namespace labels provide an empty namespace configured with a service account that can create pods, services, configmaps, etc.
pod
Pod labels provide a dedicated namespace with a single pod created
using the providers.[kubernetes].labels.image
parameter and it
is configured with a service account that can exec and get the logs of
the pod.
image
Only used by the providers.[kubernetes].labels.type.pod
label type;
specifies the image name used by the pod.