Configuration

Nodepool reads its secure configuration from /etc/nodepool/secure.conf by default. The secure file is a standard INI config file, with one section for the database and one section of Jenkins secrets for each target:

[database]
dburi={dburi}

[jenkins "{target_name}"]
user={user}
apikey={apikey}
credentials={credentials}
url={url}

The following settings are available:

required

dburi Indicates the URI for the database connection. See the SQLAlchemy documentation for the syntax. Example:

dburi='mysql+pymysql://nodepool@localhost/nodepool'

optional

While it is possible to run Nodepool without any Jenkins targets, if Jenkins is used, the target_name and url are required. The user, apikey, and credentials may also be needed depending on the Jenkins security settings.

target_name Name of the Jenkins target. It must match a target specified in nodepool.yaml so that its settings can be retrieved.

url URL of the Jenkins REST API.

user Jenkins username.

apikey API key generated by Jenkins (not the user password).

credentials If provided, Nodepool will configure the Jenkins slave to use the Jenkins credential identified by that ID; otherwise it will use the username and SSH keys configured in the image.

Nodepool reads its configuration from /etc/nodepool/nodepool.yaml by default. The configuration file follows the standard YAML syntax with a number of sections defined with top level keys. For example, a full configuration file may have the labels, providers, and targets sections. If building images using diskimage-builder, the diskimages section is also required:

labels:
  ...
diskimages:
  ...
providers:
  ...
targets:
  ...

The following sections are available. All are required unless otherwise indicated.

script-dir

When creating an image to use when launching new nodes, Nodepool will run a script that is expected to prepare the machine before the snapshot image is created. The script-dir parameter indicates a directory that holds all of the scripts needed to accomplish this. Nodepool will copy the entire directory to the machine before invoking the appropriate script for the image being created.

Example:

script-dir: /path/to/script/dir

elements-dir

If an image is configured to use diskimage-builder and glance to locally create and upload images, then a collection of diskimage-builder elements must be present. The elements-dir parameter indicates a directory that holds one or more elements.

Example:

elements-dir: /path/to/elements/dir

images-dir

When images are generated using diskimage-builder, they need to be written somewhere. The images-dir parameter specifies the directory in which to write them.

Example:

images-dir: /path/to/images/dir

cron

This section is optional.

Nodepool runs several periodic tasks. The image-update task creates a new image for each of the defined images, typically used to keep the data cached on the images up to date. The cleanup task deletes old images and servers which may have encountered errors during their initial deletion. The check task attempts to log into each node that is waiting to be used to make sure that it is still operational. The following illustrates how to change the schedule for these tasks and also indicates their default values:

cron:
  image-update: '14 2 * * *'
  cleanup: '27 */6 * * *'
  check: '*/15 * * * *'

zmq-publishers

Lists the ZeroMQ endpoints for the Jenkins masters. Nodepool uses this to receive real-time notification that jobs are running on nodes or are complete and nodes may be deleted. Example:

zmq-publishers:
  - tcp://jenkins1.example.com:8888
  - tcp://jenkins2.example.com:8888

gearman-servers

Lists the Zuul Gearman servers that should be consulted for real-time demand. Nodepool will use information from these servers to determine if additional nodes should be created to satisfy current demand. Example:

gearman-servers:
  - host: zuul.example.com
    port: 4730

The port key is optional (default: 4730).

labels

Defines the types of nodes that should be created. Maps node types to the images that back them and the providers that supply them. Jobs should be written to run on nodes of a certain label, so that targets such as Jenkins do not need to know which providers or images are used to create them. Example:

labels:
  - name: my-precise
    image: precise
    min-ready: 2
    providers:
      - name: provider1
      - name: provider2
  - name: multi-precise
    image: precise
    subnodes: 2
    min-ready: 2
    ready-script: setup_multinode.sh
    providers:
      - name: provider1

required

name

Unique name used to tie jobs to those instances.

image

Refers to an image in the provider's images section; see images.

providers (list)

Required if any nodes should actually be created (e.g., the label is not currently disabled, see min-ready below).

optional

min-ready (default: 2)

Minimum instances that should be in a ready state. Set to -1 to have the label considered disabled. min-ready is best-effort based on available capacity and is not a guaranteed allocation.
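
As a sketch, a label can be kept defined but disabled by setting min-ready to -1 (the label, image, and provider names here are illustrative):

```yaml
labels:
  - name: my-precise
    image: precise
    min-ready: -1    # label stays defined, but no nodes are created
    providers:
      - name: provider1
```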

subnodes

Used to configure multi-node support. If a subnodes key is supplied to an image, it indicates that the specified number of additional nodes of the same image type should be created and associated with each node for that image.

Only one node from each such group will be added to the target; the subnodes are expected to communicate directly with each other. In the example above, for each Precise node added to the target system, two additional nodes will be created and associated with it.

ready-script

A script to be used to perform any last minute changes to a node after it has been launched but before it is put in the READY state to receive jobs. For more information, see scripts.

diskimages

Lists the images that are going to be built using diskimage-builder. The image keyword defined in the labels section is mapped to the images listed in diskimages. If an entry matching the image is found, it will be built using diskimage-builder with the settings in this section. If no matching entry is found, the image will be built using the provider snapshot approach:

diskimages:
  - name: devstack-precise
    elements:
      - ubuntu
      - vm
      - puppet
      - nodepool-base
      - node-devstack
    release: precise
    env-vars:
      DIB_DISTRIBUTION_MIRROR: http://archive.ubuntu.com
      DIB_IMAGE_CACHE: /opt/dib_cache

required

name

Identifier to reference the disk image in images and labels.

optional

release

Specifies the distro to be used as a base image to build the image using diskimage-builder.

elements (list)

Enumerates all the elements that will be included when building the image. These elements are looked up in the elements-dir path referenced in the same config file.

env-vars (dict)

Arbitrary environment variables that will be available in the spawned diskimage-builder child process.

providers

Lists the OpenStack cloud providers Nodepool should use. Within each provider, the Nodepool image types are also defined (see images for details). Example:

providers:
  - name: provider1
    cloud: example
    region-name: 'region1'
    max-servers: 96
    rate: 1.0
    availability-zones:
      - az1
    boot-timeout: 120
    launch-timeout: 900
    template-hostname: 'template-{image.name}-{timestamp}'
    pool: 'public'
    ipv6-preferred: False
    networks:
      - name: 'some-network-name'
        public: True
    images:
      - name: trusty
        base-image: 'Trusty'
        min-ram: 8192
        name-filter: 'something to match'
        setup: prepare_node.sh
        username: jenkins
        user-home: '/home/jenkins'
        private-key: /var/lib/jenkins/.ssh/id_rsa
        meta:
            key: value
            key2: value
      - name: precise
        base-image: 'Precise'
        min-ram: 8192
        setup: prepare_node.sh
        username: jenkins
        user-home: '/home/jenkins'
        private-key: /var/lib/jenkins/.ssh/id_rsa
      - name: devstack-trusty
        min-ram: 30720
        diskimage: devstack-trusty
        username: jenkins
        private-key: /home/nodepool/.ssh/id_rsa
  - name: provider2
    username: 'username'
    password: 'password'
    auth-url: 'http://auth.provider2.example.com/'
    project-name: 'project'
    service-type: 'compute'
    service-name: 'compute'
    region-name: 'region1'
    max-servers: 96
    rate: 1.0
    template-hostname: '{image.name}-{timestamp}-nodepool-template'
    images:
      - name: precise
        base-image: 'Fake Precise'
        min-ram: 8192
        setup: prepare_node.sh
        username: jenkins
        user-home: '/home/jenkins'
        private-key: /var/lib/jenkins/.ssh/id_rsa
        meta:
            key: value
            key2: value

cloud configuration

preferred

cloud There are two methods supported for configuring cloud entries. The preferred method is to create a ~/.config/openstack/clouds.yaml file containing your cloud configuration information, then use cloud to refer to a named entry in that file.

More information about the contents of clouds.yaml can be found in the os-client-config documentation.

compatibility

For backwards compatibility reasons, you can also include portions of the cloud configuration directly in nodepool.yaml. Not all of the options settable via clouds.yaml are available.

username

password

project-id OR project-name

Some clouds may refer to the project-id as tenant-id. Some clouds may refer to the project-name as tenant-name.

auth-url

Keystone URL.

image-type

Specifies the image type supported by this provider. The disk images built by diskimage-builder will output an image for each image-type specified by a provider using that particular diskimage.

By default, image-type is set to the value returned from os-client-config and can be omitted in most cases.
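
As a sketch, a provider uploading to a cloud that expects qcow2 images might set the following (the value shown is an assumption; use whatever format your cloud accepts):

```yaml
providers:
  - name: provider1
    image-type: qcow2    # e.g. 'vhd' or 'raw' for other clouds
```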

required

name

max-servers

Maximum number of servers spawnable on this provider.

optional

availability-zones (list)

Without this setting, Nodepool will rely on Nova to schedule an availability zone.

If provided, the value should be a list of availability zone names. Nodepool will select one at random and provide it to Nova, which should give a good distribution of availability zones in use. If you need more control over the distribution, you can use multiple logical providers, each providing a different list of availability zones.

boot-timeout

Once an instance is active, how long to try connecting to it via SSH. If the timeout is exceeded, the node launch is aborted and the instance deleted.

In seconds. Default 60.

launch-timeout

The time to wait from issuing the command to create a new instance until that instance is reported as "active". If the timeout is exceeded, the node launch is aborted and the instance deleted.

In seconds. Default 3600.

keypair

Name of a keypair to use when booting servers. Default None.

networks (dict)

Specify custom Neutron networks that get attached to each node. Give the name of the network (a string), and if the network routes to the Internet, set the boolean public to true. If the network should be the target of floating IP NAT, set nat_destination to true.
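
As a sketch of both flags (the network names are hypothetical):

```yaml
networks:
  - name: 'internal-net'
    nat_destination: true    # floating IPs are NATed to this network
  - name: 'external-net'
    public: true             # routes to the Internet
```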

ipv6-preferred

If set to True, Nodepool will prefer an IPv6 address on the public network as the IP address for the SSH connection used to build snapshot images and to create the Jenkins slave definition. If no IPv6 address is found, or the key is unspecified or set to False, the IPv4 address will be used.

pool

Specify a floating IP pool in cases where the 'public' pool is unavailable or undesirable.

api-timeout (compatibility)

Timeout for OpenStack API calls, in seconds. Prefer setting this in clouds.yaml.

service-type (compatibility)

Prefer setting this in clouds.yaml.

service-name (compatibility)

Prefer setting this in clouds.yaml.

region-name

template-hostname

Hostname template to use for the spawned instance. Default template-{image.name}-{timestamp}

rate

In seconds. Default 1.0.

images

Example:

images:
  - name: precise
    base-image: 'Precise'
    min-ram: 8192
    name-filter: 'something to match'
    setup: prepare_node.sh
    username: jenkins
    private-key: /var/lib/jenkins/.ssh/id_rsa
    meta:
        key: value
        key2: value

required

name

Identifier used to refer to this image from the labels and providers sections.

If the images built from different providers' base-images are equivalent, give them the same name; e.g., if one provider has a Fedora 20 image and another has an equivalent Fedora 20 (Heisenbug) image, they should share a common name. Otherwise, select a unique name.

base-image

UUID or string-name of the image to boot as specified by the provider.

min-ram

Determines the flavor to use to boot base-image (e.g. m1.medium, m1.large, etc.). The smallest flavor that meets the min-ram requirement will be chosen. To further filter by flavor name, see the optional name-filter below.

optional

name-filter

Additional filter complementing min-ram; the flavor name will be required to match it (e.g. Rackspace offers a "Performance" flavor; setting name-filter to Performance ensures the chosen flavor contains this string as well as meeting the min-ram requirement).

setup

Script to run to prepare the instance.

Used only when not building images using diskimage-builder; in that case, the settings defined in the diskimages section are used instead. See scripts for setup script details.

diskimage

See diskimages.

username

Nodepool expects this user to exist after running the script indicated by setup. Default jenkins.

private-key

Default /var/lib/jenkins/.ssh/id_rsa

config-drive (boolean)

Whether config drive should be used for the image.

meta (dict)

Arbitrary key/value metadata to store for this server using the Nova metadata service. A maximum of five entries is allowed, and both keys and values must be 255 characters or less.

targets

Lists the Jenkins masters to which Nodepool should attach nodes after they are created. Nodes of each label will be evenly distributed across all of the targets which are on-line:

targets:
  - name: jenkins1
    hostname: '{label.name}-{provider.name}-{node_id}'
    subnode-hostname: '{label.name}-{provider.name}-{node_id}-{subnode_id}'
  - name: jenkins2
    hostname: '{label.name}-{provider.name}-{node_id}'
    subnode-hostname: '{label.name}-{provider.name}-{node_id}-{subnode_id}'

required

name Identifier for the system an instance is attached to.

optional

hostname

Default {label.name}-{provider.name}-{node_id}

subnode-hostname

Default {label.name}-{provider.name}-{node_id}-{subnode_id}

rate

In seconds. Default 1.0

jenkins (dict)

test-job (optional)

Setting this causes a newly created instance to be placed in the TEST state. The job name given will then be executed with the node name as a parameter.

If the job succeeds, the node is moved into the READY state and relabeled with the appropriate label (from the image name).

If it fails, the node is deleted immediately.

If the job never runs, the node will eventually be cleaned up by the periodic cleanup task.
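
Putting it together, a target using a test job might look like this (the job name check-node is hypothetical):

```yaml
targets:
  - name: jenkins1
    jenkins:
      test-job: check-node    # run this job before marking the node READY
```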