Manage a pool of nodes for a distributed test infrastructure
Tobias Henkel 64487baef0
Asynchronously update node statistics
We currently update the node statistics on every node launch or
delete. This cannot use caching at the moment because the statistics
update might push slightly outdated data; if there is then no further
update for a longer time, we end up with broken gauges. We already get
update events from the node cache, so we can use those to centrally
trigger node statistics updates.

This is combined with leader election so that only a single launcher
keeps the statistics up to date. This ensures that the statistics are
not cluttered by several launchers pushing their own slightly
different views into the stats.

As a side effect this reduces the runtime of a test that creates 200
nodes from 100s to 70s on my local machine.

Change-Id: I77c6edc1db45b5b45be1812cf19eea66fdfab014
2018-11-29 16:48:30 +01:00
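
As a rough illustration of the approach described in this commit message
(not the actual Nodepool implementation), the sketch below assumes a
ZooKeeper connection via the kazoo library: node cache events merely
enqueue an update request, and whichever launcher wins the leader
election is the only one that reports statistics. All names here
(NodeStatsReporter, run_stats_leader, the election path) are
hypothetical.

import queue
import threading

from kazoo.client import KazooClient


class NodeStatsReporter:
    """Hypothetical reporter; update_stats() would push gauges to statsd."""

    def update_stats(self):
        print("pushing node statistics")


def run_stats_leader(zk_hosts="localhost:2181", identifier="launcher-1"):
    updates = queue.Queue()

    zk = KazooClient(hosts=zk_hosts)
    zk.start()

    def on_cache_event(event):
        # Called for every node cache update; just schedule a stats run
        # instead of pushing statistics on each launch or delete.
        updates.put(event)

    def leader_loop():
        # Only the elected leader drains the queue and pushes gauges,
        # so several launchers never report slightly different views.
        reporter = NodeStatsReporter()
        while True:
            updates.get()               # block until something changed
            while not updates.empty():  # coalesce bursts of events
                updates.get_nowait()
            reporter.update_stats()

    # Election.run() blocks until this contender is elected, then calls
    # leader_loop(); if the leader disappears, another launcher takes over.
    election = zk.Election("/nodepool/stats-election", identifier)
    threading.Thread(target=lambda: election.run(leader_loop),
                     daemon=True).start()
    return on_cache_event
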
devstack Merge "move 01-nodepool-setup to a later phase" 2018-11-21 19:29:26 +00:00
doc Normalize sidebar in docs 2018-11-08 10:00:44 -08:00
etc Merge "Add systemd drop-in file for CentOS 7" 2018-05-09 18:09:54 +00:00
nodepool Asynchronously update node statistics 2018-11-29 16:48:30 +01:00
playbooks Move k8s install to pre playbook 2018-11-13 16:39:06 +01:00
releasenotes/notes Use openstacksdk submit_task 2018-11-09 07:28:38 +11:00
roles/nodepool-zuul-functional Remove nodepool-k8s-functional and install-nodepool roles 2018-11-14 10:13:02 +00:00
tools Only setup zNode caches in launcher 2018-11-26 20:13:39 +01:00
.coveragerc Switch to stestr 2018-04-26 11:52:17 -05:00
.gitignore Ignore files produced by tox-cover 2018-07-23 13:44:06 +02:00
.gitreview Replace master with feature/zuulv3 2018-01-18 10:13:57 -08:00
.stestr.conf Switch to stestr 2018-04-26 11:52:17 -05:00
.zuul.yaml Implement a Kubernetes driver 2018-10-25 10:24:45 +00:00
LICENSE license: remove dos line break 2018-01-19 00:30:22 +00:00
README.rst Switch storyboard url to be by name 2018-08-03 10:19:44 -05:00
TESTING.rst Update README and add TESTING similar to Zuul repo 2018-07-11 11:15:56 +01:00
bindep.txt Build container images using pbrx 2018-07-20 12:03:35 -05:00
requirements.txt Implement a Kubernetes driver 2018-10-25 10:24:45 +00:00
setup.cfg Update pypi metadata 2018-10-13 12:07:38 -04:00
setup.py Bump pbr requirements to >=1.3 2015-09-14 16:19:13 -04:00
test-requirements.txt Move sphinx + deps to doc/requirements.txt 2018-08-16 09:57:52 +02:00
tox.ini Only set basepython once 2018-11-06 10:48:37 -06:00

README.rst

Nodepool

Nodepool is a system for managing test node resources. It supports launching single-use test nodes from cloud providers as well as managing access to pre-defined, pre-existing nodes. Nodepool is part of a suite of tools that form a comprehensive test system, including Zuul.

The latest documentation for Nodepool is published at: https://zuul-ci.org/docs/nodepool/

The latest documentation for Zuul is published at: https://zuul-ci.org/docs/zuul/

Getting Help

There are two Zuul-related mailing lists:

zuul-announce

A low-traffic announcement-only list to which every Zuul operator or power-user should subscribe.

zuul-discuss

General discussion about Zuul, including questions about how to use it, and future development.

You will also find Zuul developers in the #zuul channel on Freenode IRC.

Contributing

To browse the latest code, see: https://git.zuul-ci.org/cgit/nodepool/tree/

To clone the latest code, use git clone https://git.zuul-ci.org/nodepool

Bugs are handled at: https://storyboard.openstack.org/#!/project/openstack-infra/nodepool

Code reviews are handled by Gerrit at https://review.openstack.org

After creating a Gerrit account, use git review to submit patches. Example:

# Set up the Gerrit remote once per clone (requires git-review)
$ git review -s
# Do your commits
$ git review
# Enter your username if prompted

Join #zuul on Freenode to discuss development or usage.

License

Nodepool is free software, licensed under the Apache License, version 2.0.

Python Version Support

Nodepool requires Python 3. It does not support Python 2.