James E. Blair 711104d3b4 Optimize order of operation in cleanupNodes
We have some quick local checks that let us skip expensive operations
in the cleanupNodes method, but we run them after a code block that
could be expensive.  Run them before it instead.

The block in question is responsible for finding ready nodes that were
allocated to a request which has since been deleted.  This ran before
the quick local checks because technically it could be executed despite
them failing, but in practice, it doesn't make much sense.  There are
two checks:

1) That this provider is the node's provider.

   This isn't necessary -- any provider *could* deallocate the request,
   which could speed up the recovery of ready-but-not-allocated nodes.

   In practice, the node's provider is going to be the next thing to do
   something with the node anyway, so why have every provider in the
   system trying to lock the node when we can just let its own provider
   do it?

2) That the node is (probably) not locked.

   We have a weak check for locking here, in that the cache could be
   slightly out of sync with reality.  But it's good enough to generally
   prevent us from locking nodes that are likely already locked.  By
   skipping the lock attempt, we can save some time.  Especially since
   every successful node is going to have a window where the node is ready
   and locked but the request is deleted.  This happens between the time
   an executor starts a job and it actually touches the nodes.  During
   that window, the block would execute and fail to lock the nodes.
   Let's just skip it in that case.
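The reordering described above can be sketched as follows. This is a simplified, hypothetical illustration, not Nodepool's actual code: the names (cleanup_nodes, Node, try_lock, probably_locked, request_deleted) are illustrative stand-ins for the real ZooKeeper-backed node records and locks.

```python
from dataclasses import dataclass


@dataclass
class Node:
    # Illustrative stand-in for a Nodepool node record.
    id: str
    provider: str
    probably_locked: bool   # weak, cache-based view of the lock state
    request_deleted: bool   # the request this node was allocated to is gone


def try_lock(node):
    # Stand-in for a real (expensive) lock attempt.
    return not node.probably_locked


def unlock(node):
    pass


def cleanup_nodes(provider_name, nodes):
    reclaimed = []
    for node in nodes:
        # Quick local checks first: they are cheap and usually let us
        # skip the expensive block entirely.
        if node.provider != provider_name:
            # Check 1: let the node's own provider handle it.
            continue
        if node.probably_locked:
            # Check 2: the cache may be slightly stale, but this avoids
            # most futile lock attempts (e.g. the ready-and-locked window
            # between an executor starting a job and touching the nodes).
            continue
        # Only now run the potentially expensive part: lock the node and
        # reclaim it if its request has since been deleted.
        if try_lock(node):
            if node.request_deleted:
                reclaimed.append(node.id)
            unlock(node)
    return reclaimed
```

Running the checks first means the common cases (another provider's node, or a node that is likely locked) never reach the lock attempt at all.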

Change-Id: Id1814f194e987032a2e797fe25ab91cfca47693c
2023-09-05 15:08:45 -07:00

Nodepool

Nodepool is a system for managing test node resources. It supports launching single-use test nodes from cloud providers as well as managing access to pre-defined pre-existing nodes. Nodepool is part of a suite of tools that form a comprehensive test system, including Zuul.

The latest documentation for Nodepool is published at: https://zuul-ci.org/docs/nodepool/

The latest documentation for Zuul is published at: https://zuul-ci.org/docs/zuul/

Getting Help

There are two Zuul-related mailing lists:

zuul-announce

A low-traffic announcement-only list to which every Zuul operator or power-user should subscribe.

zuul-discuss

General discussion about Zuul, including questions about how to use it, and future development.

You will also find Zuul developers in the #zuul channel on Freenode IRC.

Contributing

To browse the latest code, see: https://opendev.org/zuul/nodepool

To clone the latest code, use: git clone https://opendev.org/zuul/nodepool

Bugs are handled at: https://storyboard.openstack.org/#!/project/zuul/nodepool

Code reviews are handled by gerrit at https://review.opendev.org

After creating a Gerrit account, use git review to submit patches. Example:

# Do your commits
$ git review
# Enter your username if prompted

Join #zuul on Freenode to discuss development or usage.

License

Nodepool is free software, licensed under the Apache License, version 2.0.

Python Version Support

Nodepool requires Python 3. It does not support Python 2.
