And fix a bug in it that caused too-small allocations in some cases.
The main demonstration of this failure was:
Which allocated 1,2 instead of 2,2. But the following tests also
Write information about the node group to /etc/nodepool, along
with an ssh key generated specifically for the node group.
Add an optional script that is run on each node (and sub-node) for
a label right before a node is placed in the ready state. This
script can use the data in /etc/nodepool to set up access between
the nodes in the group.
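As a rough illustration only, a ready-script might look something like
the sketch below; the file names under /etc/nodepool are assumptions
made for the example, not a documented layout.

    # Hedged sketch of a ready-script; the file names are illustrative
    # assumptions, not nodepool's documented layout.
    import subprocess


    def setup_group_access(nodepool_dir='/etc/nodepool'):
        # Addresses of the other nodes in the group, as written by nodepool.
        with open('%s/sub_nodes' % nodepool_dir) as f:
            sub_nodes = [line.strip() for line in f if line.strip()]
        # The ssh key generated specifically for this node group.
        key = '%s/id_rsa' % nodepool_dir
        for address in sub_nodes:
            # For example, verify connectivity to each sub-node.
            subprocess.check_call(
                ['ssh', '-i', key, '-o', 'StrictHostKeyChecking=no',
                 'root@%s' % address, 'true'])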
The new simpler method for calculating the weight of the targets
is a little too simple and can miss allocating nodes. Make the
weight change as the algorithm walks through the target list
to ensure that everything is allocated somewhere.
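A minimal sketch of the idea (illustrative names, not the actual
allocator code): recompute each target's share against what is still
unallocated as the list is walked, so rounding can never drop a node.

    # Illustrative sketch only, not the real allocator.
    def allocate(total, target_weights):
        """Distribute 'total' nodes across targets in proportion to weight,
        adjusting the remaining total and weight at each step so that
        rounding never leaves nodes unallocated."""
        allocations = {}
        remaining = total
        remaining_weight = sum(target_weights.values())
        for name, weight in target_weights.items():
            if remaining_weight <= 0:
                share = remaining  # degenerate case: dump the rest here
            else:
                # Round against what is actually left, not the original
                # total; the last target always absorbs the remainder.
                share = int(round(remaining * weight / remaining_weight))
            share = min(share, remaining)
            allocations[name] = share
            remaining -= share
            remaining_weight -= weight
        return allocations


    print(allocate(4, {'target1': 1.0, 'target2': 1.0}))  # {'target1': 2, 'target2': 2}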
Labels replace images as the basic identity for nodes. Rather than
having nodes of a particular image, we now have nodes of a particular
label. A label describes how a node is created from an image, which
providers can supply such nodes, and how many should be kept ready.
This makes configuration simpler (by not specifying which images
are associated with which targets and simply assuming an even
distribution, the target section is _much_ smaller and _much_ less
repetitive). It also facilitates describing how nodes of
potentially different configurations (e.g., number of subnodes) can
be created from the same image.
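Roughly, the parsed configuration ends up shaped like the sketch
below (field names are illustrative, not the exact schema):

    # Illustrative shape of a label-based configuration after parsing.
    labels = [
        {
            'name': 'example-precise',                # what jobs request
            'image': 'example-precise',               # how the node is created
            'min-ready': 2,                           # how many to keep ready
            'subnodes': 0,
            'providers': ['provider1', 'provider2'],  # who can supply it
        },
        {
            # A second label built from the same image but with a
            # different configuration (one subnode per primary node).
            'name': 'example-precise-2node',
            'image': 'example-precise',
            'min-ready': 1,
            'subnodes': 1,
            'providers': ['provider1'],
        },
    ]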
An image can specify that it should be created with a number of
subnodes. That number of nodes of the same image type will also
be created and associated with each primary node of that image.
Adjust the allocator to accommodate the expected loss of capacity
associated with subnodes.
If a node has subnodes, wait until they are all in the ready state
before declaring a node ready.
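A minimal sketch of that readiness check, assuming a helper that
returns the current subnode states (not the actual launcher code):

    # Illustrative sketch: a primary node is only declared ready once
    # every one of its subnodes has reached the ready state.
    import time

    READY = 'ready'


    def wait_for_subnodes(get_subnode_states, timeout=3600, interval=5):
        """Poll until all subnode states are READY or the timeout expires."""
        start = time.time()
        while time.time() - start < timeout:
            if all(state == READY for state in get_subnode_states()):
                return
            time.sleep(interval)
        raise Exception('Timed out waiting for subnodes to become ready')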
There's no way to create subnodes yet. But this change introduces
the class/table and a method to cleanly delete them if they exist.
The actual server deletion and the waiting for that to complete
are separated out in the provider manager, so that we can kick
off a bunch of subnode deletes and then wait for them all to
complete in one thread.
All existing calls to cleanupServer are augmented with a new
waitForServerDeletion call to handle the separation.
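The calling pattern this enables looks roughly like the sketch below;
the manager internals and exact signatures are assumed for
illustration.

    # Illustrative: issue every delete first, then wait for all of them
    # in a single thread.
    def delete_node_and_subnodes(manager, node, subnodes):
        server_ids = [node.external_id] + [sn.external_id for sn in subnodes]

        # Kick off the deletes without waiting on each one ...
        for server_id in server_ids:
            manager.cleanupServer(server_id)

        # ... then wait for all of them to complete.
        for server_id in server_ids:
            manager.waitForServerDeletion(server_id)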
flake8 does not pin its pep8 and pyflakes dependencies which makes it
possible for new sets of rules to suddenly apply whenever either of
those projects pushes a new release. Fix this by depending on hacking
instead of flake8, pep8, and pyflakes directly. This will keep nodepool
in sync with the rest of openstack even if it doesn't use the hacking
checks.
The paramiko version packaged in Ubuntu 12.04 predates 1.10.0 and lacks
the additional arguments for exec_command:
TypeError: exec_command() got an unexpected keyword argument 'get_pty'
1.10.0 was the first version to add the get_pty flag.
Signed-off-by: Paul Belanger <firstname.lastname@example.org>
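For reference, the newer call pattern that old paramiko releases
reject looks roughly like this; host and credentials are placeholders.

    # Requires paramiko >= 1.10.0; earlier releases raise the TypeError above.
    import paramiko


    def run_with_pty(host, username, command):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=username)
        try:
            # The get_pty keyword is what 1.10.0 introduced.
            stdin, stdout, stderr = client.exec_command(command, get_pty=True)
            return stdout.read()
        finally:
            client.close()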
If the server fails to boot (perhaps due to quota issues) during
an image update and nodepool created a key for that server, the key
won't be deleted because the existing keypair delete is done as part
of deleting the server. Handle the special case of never having
actually booted a server and delete the keypair explicitly in that
case.
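A hedged sketch of that special case; the manager method names are
assumptions for illustration, not the actual API.

    # Illustrative: if the server never booted, nothing will cascade the
    # keypair delete, so remove the keypair explicitly.
    def launch_for_image_update(manager, server_name, key_name=None):
        try:
            server_id = manager.createServer(server_name, key_name=key_name)
            manager.waitForServer(server_id)
            return server_id
        except Exception:
            if key_name:
                # No booted server means no server delete to clean this up.
                manager.deleteKeypair(key_name)
            raise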
As the number of providers and targets grows, the number of stats
that graphite has to sum in order to produce the summary graphs that
we use grows. Instead of asking graphite to summarize something like
400 metrics (takes about 3 seconds), have nodepool directly produce
the metrics that we are going to use.
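A rough sketch of the idea using the statsd client; the metric names
are illustrative, not the exact keys nodepool emits.

    # Illustrative: emit an already-summed series alongside the detailed
    # per-provider series so graphite does not have to sum ~400 metrics.
    import statsd

    client = statsd.StatsClient('localhost', 8125)


    def report_node_states(nodes):
        totals = {}
        for node in nodes:
            key = 'nodepool.nodes.%s' % node['state']
            totals[key] = totals.get(key, 0) + 1
            # Detailed series kept for per-provider drill-down.
            client.incr('nodepool.provider.%s.nodes.%s'
                        % (node['provider'], node['state']))
        # The pre-summed series used by the summary graphs.
        for key, value in totals.items():
            client.gauge(key, value)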
The previous logic around how to keep images was not accomplishing
anything particularly useful.
* Delete images that are not configured or have no corresponding
provider.
* Keep the current and previous READY images.
* Otherwise, delete any images that have been in their current
state for more than 8 hours.
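The rules above, roughly in code (a hedged sketch with assumed field
names, not the actual cleanup routine):

    import time

    EIGHT_HOURS = 8 * 60 * 60


    def images_to_delete(images, configured_names, now=None):
        """'images' is assumed to be dicts with 'name', 'state' and
        'state_time' (epoch seconds), newest first for each name."""
        now = now if now is not None else time.time()
        doomed = []
        ready_count = {}
        for image in images:
            name = image['name']
            if name not in configured_names:
                doomed.append(image)        # no longer configured
                continue
            if image['state'] == 'ready':
                ready_count[name] = ready_count.get(name, 0) + 1
                if ready_count[name] <= 2:  # keep current + previous READY
                    continue
            if now - image['state_time'] > EIGHT_HOURS:
                doomed.append(image)        # stuck in a state too long
        return doomed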
Also, correct the image-update command which no longer needs to
join a thread. Also, fix up some poorly exercised parts of the
Previously, when a job started on a node, nodepool always changed that
node to state USED. Now preserve the old state value if it was HOLD.
Since we're caching lists, we would actually expect a server not
to appear within the first two iterations of the loop, so remove
a log message that says it isn't there (which now shows up quite
often).
Nova might return int or str ids for various objects. Our db
stores them all as strings (since that supports the superset).
Since we convert all novaclient info into simple list/dict
data structures anyway, convert all the ids to str at the same
time to make comparisons elsewhere easier.
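Illustratively, the flattening step looks something like this (field
set trimmed for the example):

    # Coerce ids to str while converting novaclient objects to plain
    # dicts, so comparisons against the string db columns just work.
    def make_server_dict(server):
        return {
            'id': str(server.id),      # nova may return int or str here
            'name': server.name,
            'status': server.status,
        }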
* Less verbose logging (these messages show up a lot).
* Handle the case where a db record disappears during cleanup but
before the individual node cleanup has started.
* Fix a missed API cleanup where deleteNode was still being
called with a node object instead of the id.
Since these calls can now come from any thread, these should now
be run through the task manager to serialize them and make sure
that we don't run into novaclient thread-safety issues.
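A minimal sketch of the serialization idea (not the actual
TaskManager): calls from any thread are queued and executed one at a
time by a single worker.

    import queue
    import threading


    class SerialTaskManager(threading.Thread):
        """Run submitted calls one at a time in a single thread."""

        def __init__(self):
            super(SerialTaskManager, self).__init__(daemon=True)
            self._queue = queue.Queue()

        def run(self):
            while True:
                func, args, done, result = self._queue.get()
                result['value'] = func(*args)
                done.set()

        def submit(self, func, *args):
            # Safe to call from any thread; blocks until the worker has
            # executed the call.
            done = threading.Event()
            result = {}
            self._queue.put((func, args, done, result))
            done.wait()
            return result['value']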
Nodepool cannot (currently) resume building a node if the daemon
is interrupted while that is happening. At least have it clean
up nodes that are in the 'building' state when it starts.
Have all node deletions (like node launches) handled by a thread,
including ones started by the periodic cleanup. This will make
the system more responsive when providers are unable to delete
nodes quickly, as well as when large numbers of nodes are deleted
by an operator, or when the system is restarted while many node
deletions are in progress.
Additionally, make the 'nodepool delete' command merely update the
db with the expectation that the next run of the cleanup cron will
spawn deletes. Add a '--now' option so that an operator may still
delete nodes synchronously, for instance when the daemon is not
running.
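Sketched out, the two paths look roughly like this (names are
illustrative, not the actual command code):

    # Illustrative only: '--now' deletes synchronously, the default just
    # flags the record for the cleanup cron's delete threads.
    def delete_command(pool, node_id, now=False):
        if now:
            pool.deleteNode(node_id)        # talk to the provider and wait
        else:
            node = pool.getNode(node_id)
            node.state = 'delete'           # picked up on the next cleanup run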
And cache the results for 5 seconds. This should remove a huge
number of GETs for individual server status, instead replacing
them with a single request that fetches the status of all servers.
All individual build or delete threads will use the cached result
from the most recent server list, and if it is out of date, they
will trigger an update.
The main benefit of using the individual requests was that they
also provided progress information. Since that just goes into
the logs and no one cares, we can certainly do without it.
Also includes a minor constant replacement.
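A hedged sketch of the caching behaviour described above (not the
actual provider manager code):

    import threading
    import time


    class CachedServerList(object):
        """Serve a recently fetched server list, refreshing it at most
        once per 'max_age' seconds."""

        def __init__(self, list_servers, max_age=5.0):
            self._list_servers = list_servers   # e.g. one 'list all servers' call
            self._max_age = max_age
            self._lock = threading.Lock()
            self._servers = []
            self._fetched = 0.0

        def get(self):
            with self._lock:
                if time.time() - self._fetched > self._max_age:
                    # One list call replaces many per-server status GETs.
                    self._servers = self._list_servers()
                    self._fetched = time.time()
                return self._servers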
The previous loop iterating over usernames doesn't work if another user
is added: we move on to the second username as soon as ssh comes up and
root fails to run a command, then hit a timeout on the second user and
never attempt the third.
Instead, take an approach where we continue trying root until either it
works or ssh succeeds but a command can't be run. At that stage we can
try any other users that may be configured on the image, with a short
timeout (since we know ssh has come up; if it hadn't, ssh'ing as root
would have timed out).
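The retry order described above, as a rough sketch; 'connect' and
'run_command' stand in for the real ssh helpers and are assumptions of
this example.

    import time


    def wait_for_login(connect, run_command, users, boot_timeout=600,
                       extra_user_timeout=60):
        # Keep trying root until a command runs, or until ssh is clearly
        # up but root is refused.
        start = time.time()
        while time.time() - start < boot_timeout:
            if not connect('root'):
                time.sleep(5)
                continue
            if run_command('root', 'echo ping'):
                return 'root'
            break  # ssh answers but root can't run commands; try other users
        # The host is reachable now, so the remaining users only get a
        # short timeout each.
        for user in users:
            start = time.time()
            while time.time() - start < extra_user_timeout:
                if connect(user) and run_command(user, 'echo ping'):
                    return user
                time.sleep(5)
        raise Exception('Unable to log in with any configured user')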
When a cloud is offline we cannot query its flavors or extensions,
and without those we cannot use a provider manager. Making these
attributes lazily-initialized properties fixes the problem (we may
make multiple queries, but they are idempotent so locking is not
needed).
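The pattern is roughly the one sketched below; the client attribute
names are assumptions for the example.

    # Illustrative lazy-initialization sketch: the cloud is only queried
    # the first time the attribute is used, so constructing the manager
    # for an offline cloud no longer fails.
    class ProviderManager(object):
        def __init__(self, client):
            self._client = client
            self._flavors = None
            self._extensions = None

        @property
        def flavors(self):
            if self._flavors is None:
                self._flavors = self._client.flavors.list()
            return self._flavors

        @property
        def extensions(self):
            if self._extensions is None:
                self._extensions = self._client.list_extensions()
            return self._extensions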
Callers that trigger flavor or extension lookups have to be able to
cope with a failure propagating up - I've manually found all the
places, I think.
The catchall in _getFlavors would mask the problem and lead to
the manager being incorrectly initialized, so I have removed that.
Startup will no longer trigger cloud connections in the main thread,
it will all be deferred to worker threads such as ImageUpdate,
periodic check etc.
Additionally I've added some belt-and-braces catches to the two
key methods, launchImage and updateImage, which, while they don't
directly interact with a provider manager, do access the provider
definition; I think that can lead to occasional skew between the
DB and the configuration. I'm not /sure/ these catches are needed,
but I'd rather be safe.
Some cloud instance types (Fedora for example) create
the ssh user after sshd comes online. This allows
our ssh connection retry loop to handle this scenario.
This was causing the target distribution for all images to be
calculated according to the current values of the last image
processed by the previous loop.
This is called within the main loop which means if there are a lot
of jenkins tasks pending, the main loop waits until they are all
finished. Instead, make this synchronous. It should be lightweight
so the additional load on jenkins should be negligible.
Also increase the sleep in the main loop to 10 seconds to mitigate
Revert "Log state names not numbers"
This reverts commit 1c7c954aab.
Revert "Also log provider name when debugging deletes."
This reverts commit 3c32477bc7.
Revert "Run per-provider cleanup threads."
This reverts commit 3f2100cf6f.
Revert "Decouple cron names from config file names."
This reverts commit 9505671e66.
Revert "Move cron definition out of the inner loop."
This reverts commit f91940bf31.
Revert "Move cron loading below provider loading."
This reverts commit 84061a038c.
Revert "Teach periodicCleanup how to do one provider."
This reverts commit e856646bd7.
Revert "Use the nonblocking cleanupServer."
This reverts commit c2f854c99f.
Revert "Split out the logic for deleting a nodedb node."
This reverts commit 5a696ca992.
Revert "Make cleanupServer optionally nonblocking."
This reverts commit 423bed124e.
Revert "Consolidate duplicate logging messages."
This reverts commit b8a8736ac5.
Revert "Log how long nodes have been in DELETE state."
This reverts commit daf427f463.
Revert "Cleanup nodes in state DELETE immediately."
This reverts commit a84540f332.