Manage a pool of nodes for a distributed test infrastructure
David Shrewsbury 284e0b52bf Provider wedge fix
If a provider is going to pause due to insufficient quota, we attempt
to delete the oldest unused node (one that is READY and unlocked) to
guarantee that some quota will be freed.

To prevent us from issuing unnecessary delete requests, if we find a
node already being deleted within a recent timeframe (here, 5 minutes),
we do not issue another delete.

NOTE: This invalidates some assumptions in tests that depend on the
current pause behavior (no proactive deletion), so those tests take
steps to prevent the proactive delete from happening.

Change-Id: I8d94c60fe1ab184503592a02d6ca458f94a2ea3d
2018-02-02 16:42:13 -05:00
devstack               Drop python2 virtualenv for devstack             2018-01-21 14:19:01 -05:00
doc                    Merge "Remove the hold command"                   2018-02-01 17:49:29 +00:00
nodepool               Provider wedge fix                                2018-02-02 16:42:13 -05:00
playbooks              Migrate legacy jobs for feature/zuulv3 branch     2017-10-24 11:42:56 -04:00
tools                  Move the fakeprovider module to the fake driver   2017-07-28 11:35:07 +00:00
.gitignore             Add files for zuul-nodepool integration test      2017-01-24 09:46:08 -08:00
.gitreview             Replace master with feature/zuulv3                2018-01-18 10:13:57 -08:00
.testr.conf            Add a test for subnodes                           2014-03-31 09:22:00 -07:00
.zuul.yaml             Remove name from project stanza                   2018-01-03 09:03:51 -08:00
LICENSE                license: remove dos line break                    2018-01-19 00:30:22 +00:00
README.rst             Rename nodepoold to nodepool-launcher             2017-03-29 09:28:33 -04:00
bindep.txt             Add libffi development headers to bindep          2017-03-21 08:53:12 -04:00
requirements.txt       requirements: remove paramiko <2.0 cap            2017-12-19 13:54:22 -05:00
setup.cfg              update supported python version in setup.cfg      2018-02-02 04:30:06 +00:00
setup.py               Bump pbr requirements to >=1.3                    2015-09-14 16:19:13 -04:00
test-requirements.txt  Block sphinx 1.6                                  2017-05-16 16:35:51 -05:00
tox.ini                Use same flake8 config as in zuul                 2018-01-17 02:20:40 +00:00


Nodepool

Nodepool is a service used by the OpenStack CI team to deploy and manage a pool of devstack images on a cloud server for use in OpenStack project testing.

Developer setup

Make sure you have pip installed:

wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py
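
You can verify pip is now available:

pip --version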

Install dependencies:

sudo pip install bindep
sudo apt-get install $(bindep -b nodepool)

mkdir ~/src
cd ~/src
git clone git://git.openstack.org/openstack-infra/system-config
git clone git://git.openstack.org/openstack-infra/nodepool
cd nodepool
sudo pip install -U -r requirements.txt
sudo pip install -e .
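
If the install succeeded, the console scripts should now be on your PATH; a quick smoke test:

nodepool --help
nodepool-launcher --help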

If you're testing a specific patch that is already in Gerrit, you will also want to install git-review and apply that patch while in the nodepool directory, e.g.:

git review -x XXXXX
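
git-review itself can be installed with pip if it isn't already present:

sudo pip install git-review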

Create or adapt a nodepool yaml file. You can adapt an infra/system-config one, or fake.yaml, as desired. Note that fake.yaml's settings won't work out of the box; consult ./modules/openstack_project/templates/nodepool/nodepool.yaml.erb in the infra/system-config tree to see a production config.

If the cloud being used has no default_floating_pool defined in nova.conf, you will need to define a pool name in the nodepool yaml file in order to use floating IPs, as in the sketch below.
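
As a starting point, a skeleton along these lines can be filled in from tools/fake.yaml. This is only a rough sketch: the exact schema (section and key names) depends on your nodepool version, tools/fake.yaml is the authoritative reference, and the commented pool line is a hypothetical illustration of a floating-IP pool setting, not a verified option name.

cat > ~/nodepool.yaml <<'EOF'
# Rough sketch only; copy the real structure from tools/fake.yaml.
zookeeper-servers:
  - host: localhost
    port: 2181

labels:
  - name: fake-label

providers:
  - name: fake-provider
    driver: fake
    # pool: public   # hypothetical floating-IP pool name; see the note
    #                # above about default_floating_pool in nova.conf
    pools:
      - name: main
        max-servers: 2
        labels:
          - name: fake-label
            diskimage: fake-image
            min-ram: 8192

diskimages:
  - name: fake-image
EOF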

Export a variable with your public SSH key so you can log into the created instances:

export NODEPOOL_SSH_KEY=`cat ~/.ssh/id_rsa.pub | awk '{print $2}'`

Start nodepool with a demo config file (copy or edit fake.yaml to contain your data):

export STATSD_HOST=127.0.0.1
export STATSD_PORT=8125
nodepool-launcher -d -c tools/fake.yaml

All logging ends up in stdout.
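
The STATSD_HOST and STATSD_PORT variables tell nodepool where to send metrics. If no statsd daemon is running, you can watch the raw UDP metrics with netcat instead (flag syntax varies between netcat implementations; this is the OpenBSD form):

nc -u -l 8125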

Use the following tool to check on progress:

nodepool image-list
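
Other subcommands report on other resources; for example, to list the nodes themselves:

nodepool list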