Currently, if min-ready is 0 a snapshot will not be created (nodepool
considers the image to be disabled). Now, if min-ready is greater than
or equal to 0, nodepool will create the snapshot. The reason for the
change is to allow jenkins slaves to be offline while waiting for a new
job to be submitted. Once a new job is submitted, nodepool will properly
launch a slave node from the snapshot.
Additionally, min-ready is now optional and defaults to 2. If min-ready is -1
the snapshot will not be created (label becomes disabled).
Closes-Bug: #1299172
Change-Id: I7094a76b09266c00c0290d84ae0a39b6c2d16215
Signed-off-by: Paul Belanger <paul.belanger@polybeacon.com>
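A minimal sketch of the new behaviour described above, assuming the
parsed label configuration is available as a dict (the helper name and
default constant are illustrative, not nodepool code):

    # Sketch only: illustrates the min-ready semantics described above,
    # not the actual nodepool implementation.
    DEFAULT_MIN_READY = 2

    def should_build_snapshot(label_config):
        # min-ready is optional and defaults to 2.
        min_ready = label_config.get('min-ready', DEFAULT_MIN_READY)
        # -1 disables the label entirely; 0 or more still builds the
        # snapshot, so idle labels can serve jobs submitted later.
        return min_ready >= 0

    print(should_build_snapshot({'min-ready': 0}))   # True (previously False)
    print(should_build_snapshot({}))                 # True (defaults to 2)
    print(should_build_snapshot({'min-ready': -1}))  # False (disabled)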
If statsd was enabled, we would hit an undefined attribute error
because subnodes have no target. Instead, pass through the target
name of the parent node and use that for statsd reporting.
Change-Id: Ic7a04a85775a23f954ea565e8c82976b52b218c7
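A sketch of that fallback, with attribute names that are assumptions
rather than nodepool's actual data model:

    def statsd_target_name(node, parent=None):
        # Subnodes carry no target of their own, so reuse the parent
        # node's target name when building the statsd key.
        target = getattr(node, 'target_name', None)
        if target is None and parent is not None:
            target = parent.target_name
        return target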
Some misc changes related to running this:
* Set log/stdout/err capture vars as in ZUUL
* Give the main loop a configurable sleep value so tests can run faster
* Fix a confusing typo in the node.yaml config
Additionally, a better method for waiting for test completion is added
which permits us to use assert statements in the tests.
Change-Id: Icddd2afcd816dbd5ab955fa4ab5011ac8def8faf
It starts the daemon with a simple config file and ensures that
it spins up a node.
A timeout is added to the zmq listener so that its run loop can
be stopped by the 'stopped' flag. And the shutdown procedure
for nodepool is altered so that it sets those flags and waits
for those threads to join before proceeding. The previous method
could occasionally cause assertion errors (from C, therefore
core dumps) due to zmq concurrency issues.
Change-Id: I7019a80c9dbf0396c8ddc874a3f4f0c2e977dcfa
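A rough sketch of the timeout-driven listener loop using pyzmq; the
endpoint, flag name and message handling are assumptions for
illustration:

    import threading
    import zmq

    class ZMQListener(threading.Thread):
        def __init__(self, addr='tcp://localhost:8888'):  # illustrative endpoint
            super(ZMQListener, self).__init__()
            self.addr = addr
            self._stopped = False

        def stop(self):
            self._stopped = True

        def run(self):
            context = zmq.Context()
            socket = context.socket(zmq.SUB)
            socket.setsockopt(zmq.SUBSCRIBE, b'')
            socket.connect(self.addr)
            poller = zmq.Poller()
            poller.register(socket, zmq.POLLIN)
            while not self._stopped:
                # Poll with a timeout (in ms) so the stopped flag is
                # re-checked regularly instead of blocking in recv().
                if poller.poll(1000):
                    socket.recv()  # handle the event here
            socket.close()
            context.term()

    # Shutdown order: set the flag, then join the thread before tearing
    # down any other zmq state:
    #   listener.stop(); listener.join()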
Finish the initial sections defined in the documentation index.
Add sphinxcontrib-programoutput to document command line utils.
Add py27 to the list of default tox targets.
Change-Id: I254534032e0706e410647b023249fe3af4f3a35f
And fix a bug in it that caused too-small allocations in some
circumstances.
The main demonstration of this failure was:
nodepool.tests.test_allocator.TwoLabels.test_allocator(two_nodes)
Which allocated 1,2 instead of 2,2. But the following tests also
failed:
nodepool.tests.test_allocator.TwoProvidersTwoLabels.test_allocator(four_nodes_over_quota)
nodepool.tests.test_allocator.TwoProvidersTwoLabels.test_allocator(four_nodes)
nodepool.tests.test_allocator.TwoProvidersTwoLabels.test_allocator(three_nodes)
nodepool.tests.test_allocator.TwoProvidersTwoLabels.test_allocator(four_nodes_at_quota)
nodepool.tests.test_allocator.TwoProvidersTwoLabels.test_allocator(one_node)
Change-Id: Idba0e52b2775132f52386785b3d5f0974c5e0f8e
Write information about the node group to /etc/nodepool, along
with an ssh key generated specifically for the node group.
Add an optional script that is run on each node (and sub-node) for
a label right before a node is placed in the ready state. This
script can use the data in /etc/nodepool to set up access between
the nodes in the group.
Change-Id: Id0771c62095cccf383229780d1c4ddcf0ab42c1b
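A sketch of what such a ready script might look like; the file names
under /etc/nodepool are assumptions used only for illustration:

    #!/usr/bin/env python
    # Illustrative ready script: read the group information nodepool
    # wrote to /etc/nodepool and report the members of the node group.
    # The file names below are assumptions, not a documented interface.

    def read_lines(path):
        try:
            with open(path) as f:
                return [line.strip() for line in f if line.strip()]
        except IOError:
            return []

    def main():
        primary = read_lines('/etc/nodepool/primary_node')  # assumed file
        subs = read_lines('/etc/nodepool/sub_nodes')         # assumed file
        print('primary: %s' % (primary[0] if primary else 'unknown'))
        for addr in subs:
            print('sub-node: %s' % addr)
            # A real script would use the generated key in /etc/nodepool
            # here to set up ssh access between the nodes in the group.

    if __name__ == '__main__':
        main()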
The new simpler method for calculating the weight of the targets
is a little too simple and can miss allocating nodes. Make the
weight change as the algorithm walks through the target list
to ensure that everything is allocated somewhere.
Change-Id: I98f72c69cf2793aa012f330219cd850a5d4ceab2
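A sketch of the idea, assuming a plain list of (target, weight) pairs;
this illustrates the adjustment, it is not the allocator's actual code:

    def distribute(total, targets):
        """Distribute 'total' nodes over (name, weight) targets.

        Recomputing the weight against what remains as we walk the list
        guarantees every node is allocated somewhere, instead of being
        lost to rounding by a single up-front weight calculation.
        """
        allocations = {}
        remaining_nodes = total
        remaining_weight = float(sum(w for _, w in targets))
        for name, weight in targets:
            if remaining_weight <= 0:
                grant = 0
            else:
                grant = int(round(remaining_nodes *
                                  (weight / remaining_weight)))
            grant = min(grant, remaining_nodes)
            allocations[name] = grant
            remaining_nodes -= grant
            remaining_weight -= weight
        return allocations

    # 3 nodes over two equally weighted targets -> 2 + 1, nothing lost.
    print(distribute(3, [('t1', 1.0), ('t2', 1.0)]))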
Labels replace images as the basic identity for nodes. Rather than
having nodes of a particular image, we now have nodes of a particular
label. A label describes how a node is created from an image, which
providers can supply such nodes, and how many should be kept ready.
This makes configuration simpler (by not specifying which images
are associated with which targets and simply assuming an even
distribution, the target section is _much_ smaller and _much_ less
repetitive). It also facilitates describing how nodes of
potentially different configurations (e.g., number of subnodes) can
be created from the same image.
Change-Id: I35b80d6420676d405439cbeca49f4b0a6f8d3822
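A rough sketch of the resulting configuration shape, shown as the
parsed data structure; the option names follow the description above
but are illustrative, not a schema reference:

    # Illustrative parsed config: a label names the image it is built
    # from, how many ready nodes to keep, optional subnodes, and which
    # providers can supply it.  Targets no longer list images at all.
    labels = [
        {
            'name': 'example-precise',       # assumed label name
            'image': 'example-base',         # image the label is built from
            'min-ready': 2,
            'subnodes': 0,
            'providers': ['example-cloud'],  # providers able to supply it
        },
    ]
    targets = [
        {'name': 'example-jenkins'},         # no per-image lists any more
    ]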
As soon as a resource changes to the ERROR state, stop waiting for
it. Return it to the caller, where it will be deleted.
Change-Id: I128bc4344b238b96e5696cce87f608fb2cdffa6e
An image can specify that it should be created with a number of
subnodes. That number of nodes of the same image type will also
be created and associated with each primary node of that image
type.
Adjust the allocator to accommodate the expected loss of capacity
associated with subnodes.
If a node has subnodes, wait until they are all in the ready state
before declaring a node ready.
Change-Id: Ia4b315b1ed2999da96aab60c5c02ea2ce7667494
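Two small sketches of the rules above, using hypothetical node objects
with a state attribute; neither is nodepool's actual code:

    READY = 'ready'  # illustrative state constant

    def instances_needed(node_count, subnodes_per_node):
        # Each primary node also consumes 'subnodes_per_node' extra
        # instances of the same image, reducing effective capacity.
        return node_count * (1 + subnodes_per_node)

    def node_is_ready(node, subnodes):
        # A node with subnodes is only declared ready once the node
        # itself and every one of its subnodes is in the ready state.
        if node.state != READY:
            return False
        return all(sub.state == READY for sub in subnodes)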
There's no way to create subnodes yet. But this change introduces
the class/table and a method to cleanly delete them if they exist.
The actual server deletion and the waiting for that to complete
are separated out in the provider manager, so that we can kick
off a bunch of subnode deletes and then wait for them all to
complete in one thread.
All existing calls to cleanupServer are augmented with a new
waitForServerDeletion call to handle the separation.
Change-Id: Iba9d5a0a61cccc07d914e60a24777c6451dca7ea
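A sketch of how the split allows parallel subnode deletion; the method
names come from the text above, but their signatures here are
assumptions:

    def delete_subnodes(manager, server_ids):
        # Kick off every delete first (cleanupServer no longer blocks)...
        for server_id in server_ids:
            manager.cleanupServer(server_id)
        # ...then wait for all of them to finish from a single thread.
        for server_id in server_ids:
            manager.waitForServerDeletion(server_id)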
flake8 does not pin its pep8 and pyflakes dependencies which makes it
possible for new sets of rules to suddenly apply whenever either of
those projects pushes a new release. Fix this by depending on hacking
instead of flake8, pep8, and pyflakes directly. This will keep nodepool
in sync with the rest of openstack even if it doesn't use the hacking
rules (H*).
Change-Id: Ice9198e9439ebcac15e76832835e78f72344425c
Ubuntu 12.04 package version for paramiko is 1.7.7.1, which lacks
the additional arguments for exec_command:
TypeError: exec_command() got an unexpected keyword argument 'get_pty'
1.10.0 was the first version to add get_pty flag.
Change-Id: I3b4d8a6d8a1d10ab002a79824feab8937d160244
Signed-off-by: Paul Belanger <paul.belanger@polybeacon.com>
If the server fails to boot (perhaps due to quota issues) during
an image update and nodepool created a key for that server, the key
won't be deleted, because the existing keypair delete is done as part
of deleting the server. Handle the special case of never having
actually booted a server and delete the keypair explicitly in that
case.
Change-Id: I0607b77ef2d52cbb8a81feb5e9c502b080a51dbe
Use six.moves.urllib instead of urllib and
six.moves.urllib.request instead of urllib2.
Partial-Bug: #1280105
Change-Id: Id122f7be5aa3e0dd213bfa86f9be86d10d72b4a6
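For reference, the resulting import pattern (six.moves is the real
compatibility layer; the URL is only an example):

    from six.moves import urllib

    # Works under both Python 2 and 3; urllib2.urlopen() becomes:
    resp = urllib.request.urlopen('http://example.com/')
    data = resp.read()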
As the number of providers and targets grows, the number of stats
that graphite has to sum in order to produce the summary graphs that
we use grows. Instead of asking graphite to summarize something like
400 metrics (takes about 3 seconds), have nodepool directly produce
the metrics that we are going to use.
Change-Id: I2a7403af2512ace0cbe795f2ec17ebcd9b90dd09
The previous logic around how to keep images was not accomplishing
anything particularly useful.
Instead:
* Delete images that are not configured or have no corresponding
base image.
* Keep the current and previous READY images.
* Otherwise, delete any images that have been in their current
state for more than 8 hours.
Also, correct the image-update command which no longer needs to
join a thread. Also, fix up some poorly exercised parts of the
fake provider.
Change-Id: Iba921f26d971e56692b9104f9d7c531d955d17b4
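A sketch of the retention rule, assuming image records with a name, a
state and the time of their last state change; the record shape and
helper are illustrative, not nodepool's data model:

    import time

    KEEP_READY = 2         # current and previous READY images
    MAX_AGE = 8 * 60 * 60  # 8 hours, per the policy above

    def images_to_delete(images, configured_names, now=None):
        now = now or time.time()
        doomed = []
        kept_ready = {}
        newest_first = sorted(images, key=lambda i: i['state_time'],
                              reverse=True)
        for image in newest_first:
            if image['name'] not in configured_names:
                # Not configured (or no corresponding base image).
                doomed.append(image)
            elif image['state'] == 'ready':
                # Keep the newest two READY images per name.
                if kept_ready.get(image['name'], 0) < KEEP_READY:
                    kept_ready[image['name']] = \
                        kept_ready.get(image['name'], 0) + 1
                else:
                    doomed.append(image)
            elif now - image['state_time'] > MAX_AGE:
                # Stuck in some other state for more than 8 hours.
                doomed.append(image)
        return doomed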
Previously when a job started on a node nodepool always changed that
node to state USED. Now preserve the old state value if it was
previously HOLD.
Change-Id: Ia4f736ae20b8b24ec079aa024fad404019725bcb
When we return ids after creating servers or images, coerce them
to strings so they compare correctly when we wait for them.
Change-Id: I6d4575f9a392b6028bcec4ad57299b7f467cb764
Since we're caching lists, we would actually expect a server not
to appear within the first two iterations of the loop, so remove
a log message that says it isn't there (which now shows up quite
often).
Change-Id: Ifbede6a141809e9fa40b910de2aabbd44f252fe5
Nova might return int or str ids for various objects. Our db
stores them all as strings (since that supports the superset).
Since we convert all novaclient info into simple list/dict
data structures anyway, convert all the ids to str at the same
time to make comparisons elsewhere easier.
Change-Id: Ic90c07ec906865e975decee190c2e5a27ef7ef6d
* Less verbose logging (these messages show up a lot).
* Handle the case where a db record disappears during cleanup but
before the individual node cleanup has started.
* Fix a missed API cleanup where deleteNode was still being
called with a node object instead of the id.
Change-Id: I2025ff19a51cfacff64dd8345eaf120bf3473ac2
Since these calls can now come from any thread, these should now
be run through the task manager to serialize them and make sure
that we don't run into novaclient thread-safety issues.
Change-Id: I46ab44b93d56ad1ce289bf837511b9373d3284ee
Nodepool can not (currently) resume building a node if the daemon
is interrupted while that is happening. At least have it clean
up nodes that are in the 'building' state when it starts.
Change-Id: I66124c598b01919d3fd8b6158c482d65508c6dae
Have all node deletions (like node launches) handled by a thread,
including ones started by the periodic cleanup. This will make
the system more responsive when providers are unable to delete
nodes quickly, as well as when large numbers of nodes are deleted
by an operator, or when the system is restarted while many node
deletions are in progress.
Additionally, make the 'nodepool delete' command merely update the
db with the expectation that the next run of the cleanup cron will
spawn deletes. Add a '--now' option so that an operator may still
delete nodes synchronously, for instance when the daemon is not
running.
Change-Id: I20ce2873172cb1906e7c5832ed2100e23f86e74e
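A sketch of the resulting command behaviour; the db and manager calls
here are placeholders, only cleanupServer is named elsewhere in the
text:

    def delete_command(db_session, manager, node_id, now=False):
        node = db_session.getNode(node_id)  # placeholder db API
        if now:
            # --now: synchronous delete, e.g. when the daemon is down.
            manager.cleanupServer(node.external_id)
            db_session.deleteNode(node_id)  # placeholder db API
        else:
            # Default: just mark the record; the next cleanup cron run
            # spawns the actual delete in its own thread.
            node.state = 'delete'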
And cache the results for 5 seconds. This should remove a huge
amount of GETs for individual server status, instead replacing
them with a single request that fetches the status of all servers.
All individual build or delete threads will use the cached result
from the most recent server list, and if it is out of date, they
will trigger an update.
The main benefit of using the individual requests was that they
also provided progress information. Since that just goes into
the logs and no one cares, we can certainly do without it.
Also includes a minor constant replacement.
Change-Id: I995c3f39e5c3cddc6f1b2ce6b91bcd178ef2fbb0
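A sketch of the caching behaviour, with a caller-supplied list_servers
callable standing in for the real API request:

    import time

    class ServerListCache(object):
        """Cache the full server list briefly (sketch only).

        Build/delete threads read the cached list; whoever finds it
        stale triggers a single refresh instead of issuing per-server
        status GETs.
        """
        TTL = 5.0  # seconds

        def __init__(self, list_servers):
            self._list_servers = list_servers  # callable doing the request
            self._servers = []
            self._fetched_at = 0.0

        def servers(self):
            if time.time() - self._fetched_at > self.TTL:
                self._servers = self._list_servers()
                self._fetched_at = time.time()
            return self._servers

        def get(self, server_id):
            for server in self.servers():
                if server['id'] == server_id:
                    return server
            return None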
The previous loop iterating over usernames doesn't work if another user
is added: we move on to the second username once ssh comes up and root
fails to run a command, then hit a timeout on the second user and never
attempt the third.
Instead, take an approach where we continue trying root until it either
works, or ssh succeeds but a command can't be run. At that stage we can
try any other users that may be configured on the image, with a short
timeout (as we know ssh has come up; if it hadn't, ssh'ing as root would
have timed out).
Change-Id: Id05aa186e8496d19f41d9c723260e2151deb45c9
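A sketch of the retry strategy; connect() and the timeout values are
placeholders, not the real ssh helper:

    import time

    class CommandFailed(Exception):
        """ssh connected but the test command could not be run."""

    def find_working_user(host, users, connect,
                          boot_timeout=600, fallback_timeout=60):
        # 'connect(host, user)' is a placeholder: it raises CommandFailed
        # when ssh is up but the command fails, and any other exception
        # while ssh is still unreachable.
        primary, others = users[0], users[1:]
        deadline = time.time() + boot_timeout
        # Keep trying the first user (root) until it works, or until ssh
        # is clearly up but the command fails -- only then move on.
        while time.time() < deadline:
            try:
                connect(host, primary)
                return primary
            except CommandFailed:
                break          # ssh is up; root just isn't usable
            except Exception:
                time.sleep(2)  # ssh not up yet; keep waiting on root
        # ssh is known to be up now, so later users get a short timeout.
        for user in others:
            deadline = time.time() + fallback_timeout
            while time.time() < deadline:
                try:
                    connect(host, user)
                    return user
                except Exception:
                    time.sleep(2)
        return None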
When a cloud is offline we cannot query its flavors or extensions,
and without those we cannot use a provider manager. For these
attributes, making them lazily-initialized properties will fix the
problem (we may make multiple queries, but the query is idempotent so
locking is not needed).
Callers that trigger flavor or extension lookups have to be able to
cope with a failure propagating up - I've manually found all the
places, I think.
The catchall in _getFlavors would mask the problem and lead to
the manager being incorrectly initialized, so I have removed that.
Startup will no longer trigger cloud connections in the main thread,
it will all be deferred to worker threads such as ImageUpdate,
periodic check etc.
Additionally I've added some belts-and-braces catches to the two
key methods - launchImage and updateImage - which, while they don't
directly interact with a provider manager, do access the provider
definition; I think that can lead to occasional skew between the
DB and the configuration. I'm not /sure/ they are needed, but
I'd rather be safe.
Change-Id: I7e8e16d5d4266c9424e4c27ebcc36ed7738bc86f
Fixes-Bug: #1281319
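A sketch of the lazy-initialization pattern described above;
fetch_flavors and fetch_extensions stand in for the actual cloud
queries:

    class LazyProviderManager(object):
        """Sketch only; not the real provider manager class."""

        def __init__(self, fetch_flavors, fetch_extensions):
            self._fetch_flavors = fetch_flavors
            self._fetch_extensions = fetch_extensions
            self._flavors = None
            self._extensions = None

        @property
        def flavors(self):
            # Queried on first use instead of at startup, so an offline
            # cloud no longer breaks manager construction.  The query is
            # idempotent, so racing threads at worst repeat it; no
            # locking is needed.
            if self._flavors is None:
                self._flavors = self._fetch_flavors()
            return self._flavors

        @property
        def extensions(self):
            if self._extensions is None:
                self._extensions = self._fetch_extensions()
            return self._extensions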
Some cloud instance types (Fedora for example) create
the ssh user after sshd comes online. This allows
our ssh connection retry loop to handle this scenario
gracefully.
Change-Id: Ie345dea50fc2983112cd2e72826a708583d2712a
This was causing the target distribution for all images to be
calculated according to the current values of the last image
through the previous loop.
Change-Id: I3d5190c78849b77933e18c5cf6a9c7443945b6cd
This is called within the main loop which means if there are a lot
of jenkins tasks pending, the main loop waits until they are all
finished. Instead, make this synchronous. It should be lightweight
so the additional load on jenkins should be negligible.
Also increase the sleep in the main loop to 10 seconds to mitigate
this somewhat.
Change-Id: I16e27fbd9bfef0617e35df08f4fd17bc2ead67b0
RAX have some hidden images useful for building a xenserver host.
Ensure nodepool can use these by referring to them by image UUID.
Change-Id: Idfefc60e762740f4cffa437933007942a0920970
Revert "Log state names not numbers"
This reverts commit 1c7c954aab8022e786b74beb4013c6d402fddedb.
Revert "Also log provider name when debugging deletes."
This reverts commit 3c32477bc73839b946403adf0750f2e9b09ba855.
Revert "Run per-provider cleanup threads."
This reverts commit 3f2100cf6fab5c51e22f806060440182c24c50eb.
Revert "Decouple cron names from config file names."
This reverts commit 9505671e66a976e9b0ee8d13a9fb677a0409f39a.
Revert "Move cron definition out of the inner loop."
This reverts commit f91940bf31b9cf4c187980a76d49bd9955e2c53e.
Revert "Move cron loading below provider loading."
This reverts commit 84061a038c9f1684ba654d81ce675b0eb3b70957.
Revert "Teach periodicCleanup how to do one provider."
This reverts commit e856646bd7df2acf2883ba181b0d685b87249f37.
Revert "Use the nonblocking cleanupServer."
This reverts commit c2f854c99f9a983bcd6f1390f6c20838cf67d525.
Revert "Split out the logic for deleting a nodedb node."
This reverts commit 5a696ca99231bf72827503b06f08f0c3e91e8ae1.
Revert "Make cleanupServer optionally nonblocking."
This reverts commit 423bed124e90bc7773017a7884cc23c428f58265.
Revert "Consolidate duplicate logging messages."
This reverts commit b8a8736ac54dac25d89fd9e0f01eb64d1b035b78.
Revert "Log how long nodes have been in DELETE state."
This reverts commit daf427f463706e99a90e7d29c48c19566cc710f9.
Revert "Cleanup nodes in state DELETE immediately."
This reverts commit a84540f33214eb31228e964bb193402903568754.
Change-Id: Iaf1eae8724f0a9fe1c14e70896fa699629624d28