Configuration
Nodepool reads its secure configuration from
/etc/nodepool/secure.conf by default. The secure file is a
standard ini config file, with one section for the database and one
section of Jenkins secrets for each target:
[database]
dburi={dburi}
[jenkins "{target_name}"]
user={user}
apikey={apikey}
credentials={credentials}
url={url}
The following settings are available:

required

dburi
  Indicates the URI for the database connection. See the SQLAlchemy documentation for the syntax. Example: dburi='mysql+pymysql://nodepool@localhost/nodepool'
optional

While it is possible to run Nodepool without any Jenkins targets, if Jenkins is used, the target_name and url are required. The user, apikey and credentials may also be needed depending on the Jenkins security settings.

target_name
  Name of the Jenkins target. It needs to match a target specified in nodepool.yaml in order to retrieve its settings.

url
  URL of the Jenkins REST API.

user
  Jenkins username.

apikey
  API key generated by Jenkins (not the user password).

credentials
  If provided, Nodepool will configure the Jenkins slave to use the Jenkins credential identified by that ID; otherwise it will use the username and ssh keys configured in the image.
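For example, a filled-in secure.conf for a single Jenkins target might look like the following sketch; the target name jenkins1 must match a target defined in nodepool.yaml, and all credential values shown are illustrative:

[database]
dburi=mysql+pymysql://nodepool@localhost/nodepool

# illustrative values; target name must match a target in nodepool.yaml
[jenkins "jenkins1"]
user=nodepool
apikey=1234567890abcdef1234567890abcdef
credentials=nodepool-ssh-credentials
url=https://jenkins1.example.com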
Nodepool reads its configuration from
/etc/nodepool/nodepool.yaml by default. The configuration
file follows the standard YAML syntax with a number of sections defined
with top level keys. For example, a full configuration file may have the
labels, providers, and targets
sections. If building images using diskimage-builder, the
diskimages section is also required:
labels:
...
diskimages:
...
providers:
...
targets:
...
The following sections are available. All are required unless otherwise indicated.
script-dir
When creating an image to use when launching new nodes, Nodepool will
run a script that is expected to prepare the machine before the snapshot
image is created. The script-dir parameter indicates a
directory that holds all of the scripts needed to accomplish this.
Nodepool will copy the entire directory to the machine before invoking
the appropriate script for the image being created.
Example:
script-dir: /path/to/script/dir
elements-dir
If an image is configured to use diskimage-builder and glance to
locally create and upload images, then a collection of diskimage-builder
elements must be present. The elements-dir parameter
indicates a directory that holds one or more elements.
Example:
elements-dir: /path/to/elements/dir
images-dir
Images generated with diskimage-builder need to be written
somewhere. The images-dir parameter specifies the directory
in which to write them.
Example:
images-dir: /path/to/images/dir
cron
This section is optional.
Nodepool runs several periodic tasks. The cleanup task
deletes old images and servers which may have encountered errors during
their initial deletion. The check task attempts to log into
each node that is waiting to be used to make sure that it is still
operational. The following illustrates how to change the schedule for
these tasks and also indicates their default values:
cron:
cleanup: '27 */6 * * *'
check: '*/15 * * * *'
zmq-publishers
Lists the ZeroMQ endpoints for the Jenkins masters. Nodepool uses this to receive real-time notification that jobs are running on nodes or are complete and nodes may be deleted. Example:
zmq-publishers:
- tcp://jenkins1.example.com:8888
- tcp://jenkins2.example.com:8888
gearman-servers
Lists the Zuul Gearman servers that should be consulted for real-time demand. Nodepool will use information from these servers to determine if additional nodes should be created to satisfy current demand. Example:
gearman-servers:
- host: zuul.example.com
port: 4730
The port key is optional (default: 4730).
zookeeper-servers
Lists the ZooKeeper servers used for coordinating information between nodepool workers. Example:
zookeeper-servers:
- host: zk1.example.com
port: 2181
chroot: /nodepool
The port key is optional (default: 2181).
The chroot key, used for interpreting ZooKeeper paths
relative to the supplied root path, is also optional and has no
default.
labels
Defines the types of nodes that should be created. Maps node types to the images that are used to back them and the providers that are used to supply them. Jobs should be written to run on nodes of a certain label (so targets such as Jenkins don't need to know what providers or images are used to create them). Example:
labels:
- name: my-precise
image: precise
min-ready: 2
providers:
- name: provider1
- name: provider2
- name: multi-precise
image: precise
subnodes: 2
min-ready: 2
ready-script: setup_multinode.sh
providers:
- name: provider1
required
name
  Unique name used to tie jobs to those instances.

image
  Refers to provider images; see images.

providers (list)
  Required if any nodes should actually be created (e.g., the label is not currently disabled; see min-ready below).

optional

min-ready (default: 2)
  Minimum number of instances that should be in a ready state. Set to -1 to have the label considered disabled. min-ready is best-effort based on available capacity and is not a guaranteed allocation.

subnodes
  Used to configure multi-node support. If a subnodes key is supplied to an image, it indicates that the specified number of additional nodes of the same image type should be created and associated with each node for that image. Only one node from each such group will be added to the target; the subnodes are expected to communicate directly with each other. In the example above, for each Precise node added to the target system, two additional nodes will be created and associated with it.

ready-script
  A script to be used to perform any last minute changes to a node after it has been launched but before it is put in the READY state to receive jobs. For more information, see scripts.
diskimages
Lists the images that will be built using diskimage-builder. The image keyword defined in the labels section is mapped to the images listed in diskimages. If an entry matching the image is found, the image will be built using diskimage-builder and the settings in this section. If no matching entry is found, the image will be built using the provider snapshot approach:
diskimages:
  - name: devstack-precise
    rebuild-age: 86400
    elements:
      - ubuntu
      - vm
      - puppet
      - nodepool-base
      - node-devstack
    release: precise
    env-vars:
      DIB_DISTRIBUTION_MIRROR: http://archive.ubuntu.com
      DIB_IMAGE_CACHE: /opt/dib_cache
required

name
  Identifier used to reference the disk image in images and labels.

optional

rebuild-age
  If the current diskimage is older than this value (in seconds), then nodepool will attempt to rebuild it. Defaults to 86400 (24 hours).

release
  Specifies the distro to be used as a base image to build the image using diskimage-builder.

elements (list)
  Enumerates all the elements that will be included when building the image; these point to the elements-dir path referenced in the same config file.

env-vars (dict)
  Arbitrary environment variables that will be available in the spawned diskimage-builder child process.
providers
Lists the OpenStack cloud providers Nodepool should use. Within each
provider, the Nodepool image types are also defined (see images for details).
Example:
providers:
- name: provider1
cloud: example
region-name: 'region1'
max-servers: 96
rate: 1.0
availability-zones:
- az1
boot-timeout: 120
launch-timeout: 900
template-hostname: 'template-{image.name}-{timestamp}'
pool: 'public'
ipv6-preferred: False
networks:
- name: 'some-network-name'
public: True
images:
- name: trusty
base-image: 'Trusty'
min-ram: 8192
name-filter: 'something to match'
setup: prepare_node.sh
username: jenkins
user-home: '/home/jenkins'
private-key: /var/lib/jenkins/.ssh/id_rsa
meta:
key: value
key2: value
- name: precise
base-image: 'Precise'
min-ram: 8192
setup: prepare_node.sh
username: jenkins
user-home: '/home/jenkins'
private-key: /var/lib/jenkins/.ssh/id_rsa
- name: devstack-trusty
min-ram: 30720
diskimage: devstack-trusty
username: jenkins
private-key: /home/nodepool/.ssh/id_rsa
- name: provider2
username: 'username'
password: 'password'
auth-url: 'http://auth.provider2.example.com/'
project-name: 'project'
service-type: 'compute'
service-name: 'compute'
region-name: 'region1'
max-servers: 96
rate: 1.0
template-hostname: '{image.name}-{timestamp}-nodepool-template'
images:
- name: precise
base-image: 'Fake Precise'
min-ram: 8192
setup: prepare_node.sh
username: jenkins
user-home: '/home/jenkins'
private-key: /var/lib/jenkins/.ssh/id_rsa
meta:
key: value
key2: value
cloud configuration

preferred

cloud
  There are two methods supported for configuring cloud entries. The preferred method is to create a ~/.config/openstack/clouds.yaml file containing your cloud configuration information. Then, use cloud to refer to a named entry in that file. More information about the contents of clouds.yaml can be found in the os-client-config documentation.
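For reference, a minimal clouds.yaml entry that the cloud setting above could refer to might look like this sketch; the cloud name example and all credential values are illustrative:

# ~/.config/openstack/clouds.yaml (illustrative values)
clouds:
  example:
    auth:
      auth_url: https://auth.example.com/
      username: nodepool
      password: secret
      project_name: project
    region_name: region1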
compatibility

For backwards compatibility reasons, you can also include portions of the cloud configuration directly in nodepool.yaml. Not all of the options settable via clouds.yaml are available.
username

password

project-id OR project-name
  Some clouds may refer to the project-id as tenant-id. Some clouds may refer to the project-name as tenant-name.

auth-url
  Keystone URL.

image-type
  Specifies the image type supported by this provider. The disk images built by diskimage-builder will output an image for each image-type specified by a provider using that particular diskimage. By default, image-type is set to the value returned from os-client-config and can be omitted in most cases.
required
name
max-servers
  Maximum number of servers spawnable on this provider.
optional
availability-zones (list)
  Without this, nodepool will rely on nova to schedule an availability zone. If it is provided, the value should be a list of availability zone names. Nodepool will select one at random and provide that to nova. This should give a good distribution of availability zones being used. If you need more control over the distribution, you can use multiple logical providers, each providing a different list of availability zones.

boot-timeout
  Once an instance is active, how long to try connecting to the image via SSH. If the timeout is exceeded, the node launch is aborted and the instance deleted. In seconds. Default 60.

launch-timeout
  The time to wait from issuing the command to create a new instance until that instance is reported as "active". If the timeout is exceeded, the node launch is aborted and the instance deleted. In seconds. Default 3600.

keypair
  Default None.

networks (dict)
  Specify custom Neutron networks that get attached to each node. Specify the name of the network (a string) and, if the network routes to the Internet, set the boolean public to true.

ipv6-preferred
  If set to True, nodepool will first look for an IPv6 address on the public net to use as the IP address for the SSH connection when building snapshot images and creating the Jenkins slave definition. If no IPv6 address is found, or the key is not specified or set to False, the IPv4 address will be used.

pool
  Specify a floating IP pool in cases where the 'public' pool is unavailable or undesirable.

api-timeout (compatibility)
  Timeout for the OpenStack API client, in seconds. Prefer setting this in clouds.yaml.

service-type (compatibility)
  Prefer setting this in clouds.yaml.

service-name (compatibility)
  Prefer setting this in clouds.yaml.

region-name

template-hostname
  Hostname template to use for the spawned instance. Default template-{image.name}-{timestamp}

rate
  In seconds. Default 1.0.

clean-floating-ips
  If set to True, nodepool will assume it is the only user of the OpenStack project and will attempt to clean up unattached floating IPs that may have leaked around restarts.
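Tying this together, when the preferred cloud method is used and the rest of the cloud configuration lives in clouds.yaml, a provider entry can be quite small. A minimal sketch, assuming the example entry from the clouds.yaml shown earlier:

providers:
  - name: provider1
    cloud: example       # named entry in clouds.yaml
    max-servers: 96
    images:
      - name: precise
        base-image: 'Precise'
        min-ram: 8192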
images
Example:
images:
- name: precise
base-image: 'Precise'
min-ram: 8192
name-filter: 'something to match'
setup: prepare_node.sh
username: jenkins
private-key: /var/lib/jenkins/.ssh/id_rsa
meta:
key: value
key2: value
required
name
  Identifier used to refer to this image from the labels and provider sections. If the resulting images from different provider base-images should be equivalent, give them the same name; e.g. if one provider has a Fedora 20 image and another has an equivalent Fedora 20 (Heisenbug) image, they should use a common name. Otherwise select a unique name.

base-image
  UUID or string-name of the image to boot, as specified by the provider.

min-ram
  Determines the flavor of base-image to use (e.g. m1.medium, m1.large, etc.). The smallest flavor that meets the min-ram requirements will be chosen. To further filter by flavor name, see the optional name-filter below.
optional
name-filter
  Additional filter complementing min-ram; the chosen flavor name will be required to match it (e.g. Rackspace offers a "Performance" flavor; setting name-filter to Performance will ensure the chosen flavor contains this string as well as meeting the min-ram requirements).

setup
  Script to run to prepare the instance. Used only when not building images using diskimage-builder; in that case, settings defined in the diskimages section will be used instead. See scripts for setup script details.

diskimage
  See diskimages.

username
  Nodepool expects this user to exist after running the script indicated by setup. Default jenkins.

private-key
  Default /var/lib/jenkins/.ssh/id_rsa.

config-drive (boolean)
  Whether a config drive should be used for the image.

meta (dict)
  Arbitrary key/value metadata to store for this server using the Nova metadata service. A maximum of five entries is allowed, and both keys and values must be 255 characters or less.
targets
Lists the Jenkins masters to which Nodepool should attach nodes after they are created. Nodes of each label will be evenly distributed across all of the targets which are on-line:
targets:
- name: jenkins1
hostname: '{label.name}-{provider.name}-{node_id}'
subnode-hostname: '{label.name}-{provider.name}-{node_id}-{subnode_id}'
- name: jenkins2
hostname: '{label.name}-{provider.name}-{node_id}'
subnode-hostname: '{label.name}-{provider.name}-{node_id}-{subnode_id}'
required
name
  Identifier for the system an instance is attached to.

optional

hostname
  Default {label.name}-{provider.name}-{node_id}

subnode-hostname
  Default {label.name}-{provider.name}-{node_id}-{subnode_id}

rate
  In seconds. Default 1.0.

jenkins (dict)
  test-job (optional)
    Setting this causes a newly created instance to be placed in the TEST state. The job name given will then be executed with the node name as a parameter.
    If the job succeeds, the node is moved into the READY state and relabeled with the appropriate label (from the image name).
    If it fails, the node is immediately deleted.
    If the job never runs, the node will eventually be cleaned up by the periodic cleanup task.
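As a sketch, a target that runs a test job against each new node might look like the following; the job name check-node-ready is hypothetical and must exist on the Jenkins master:

targets:
  - name: jenkins1
    jenkins:
      # hypothetical job name; must exist on the Jenkins master
      test-job: check-node-ready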