This reverts commit bc0bf489323e103381eec4007e9a40a513531f45.
With this in place, it should be safe to use Internap for 2-node
labels again. Further improvements to devstack-gate should handle
this for us automatically in the future.
Change-Id: Ie82be01b5bd4e9cd75b1ff7a049fde2ef22cfbb2
Depends-On: I6c8c1101771d1c5496884be7a405285472ae803a
We are seeing a race condition in the Nova resource tracker when reaching
maximum compute capacity. Our hypothesis is that some instances are getting
scheduled to a hypervisor which does not have enough free resources
according to the resource tracker, which then rejects the instance creation.
We suggest reducing max-servers to avoid hitting this race condition and to
reduce the number of capacity alerts sent to our operations team.
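For illustration only (the provider name and numbers below are placeholders,
not the values changed here), the knob being tuned is the provider-level
max-servers setting in the nodepool configuration:
    providers:
      - name: example-provider    # placeholder provider name
        # Keep a margin below real capacity so the resource tracker is less
        # likely to race near the limit.
        max-servers: 30           # illustrative value, below raw capacity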
Change-Id: Iff197a94ad988fb070efaeb6bb393bfaba6e01fb
The rax-iad region quite often fails to provide a reliable subnode for
a multinode job. It has failed in the gate 20 times in the last week, 12
times in the last 24 hours, and is currently our top elastic-recheck
bug.
The root cause is unknown; however, it is heavily contributing to the
gate backlog.
Change-Id: I6b67dc236307efabed3039bc7dd695d3942585bf
Related-Bug: #1531187
The CPU flags are too different from those of other providers; updates to
devstack-gate will fix this, and then we can add it back.
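For reference, a quick way to see how the flags differ (a diagnostic aid
only, not part of this change) is to compare the flags line from
/proc/cpuinfo on nodes from each provider:
    grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > provider-a-flags
    # run the same on a node from another provider, then:
    diff provider-a-flags provider-b-flags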
Change-Id: I09281c7920db0048ba190c375c1cfbfbfd552064
In another confusing addendum to
I41e81d6bac98875eecde2376e0865784626e11a8 (which was already a
confusing addendum to Ieb6a6e9f55bd93f63c3d0a71828c276c2d02e1b9), we
have decided that the refspec used to fetch here is not sufficient to
clear out stale remote branches when updating.
"+refs/heads/*:refs/heads/*" says to replicate everything from the
remote refs/heads into our local refs/heads, but leaves out
refs/remotes/*
The upshot of this is that I41e81d6bac98875eecde2376e0865784626e11a8
will remove the local branches (refs/heads/stable/icehouse, say) but
not remove the remote branches (refs/remotes/origin/stable/icehouse).
The devstack caching script keeps picking up these remote branches,
checking them out, and consequently trying to download old images.
*Nothing* ever removes these branches. In the main dib cache git
update, we also have --prune, but our refspec there is even more
limited (+master:master). This explains why these branches never seem
to die.
Note, an even better mirror would be "+refs/*:refs/*" (in fact, if you
do git clone --mirror, this is what the repo would be set up to fetch
by default). However, this drags in "refs/changes/*" and all sorts of
other gerrit refs. We don't really need them, so we keep the more
limited refspec.
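To make the distinction concrete, here is a sketch (remote name and
invocation are illustrative) of how the refspec bounds what --prune may
clean up:
    # Mirrors only refs/heads/*; --prune removes stale local heads, but
    # refs/remotes/origin/* is never touched, so stale remote branches such
    # as refs/remotes/origin/stable/icehouse survive.
    git fetch --prune origin '+refs/heads/*:refs/heads/*'
    # A full mirror refspec would prune everything under refs/, but it also
    # drags in refs/changes/* and other gerrit refs we don't need:
    #   git fetch --prune origin '+refs/*:refs/*'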
Change-Id: Ia9c3ffdb2b5f72a45d629961338b415308d6dd21
In a rather confusing addendum to
Ieb6a6e9f55bd93f63c3d0a71828c276c2d02e1b9, we have actually mirrored a
version of the source-repo script from dib and munged it to cache
devstack early so we can use it to find the VM images to download and
cache.
However, we are not ensuring that we remove old branches in this
clone, which leads to us picking up images from old branches that no
longer exist.
Change-Id: I41e81d6bac98875eecde2376e0865784626e11a8
Bluebox has some maintenance coming up. They would like us to not be
creating VMs during their maintenance. This patch reduces the Bluebox
max_servers value from 39 to 0.
Change-Id: I410d48c3ee97c22e70afca8b32dfa91f83906b5d
We are gradually increasing the number of servers nodepool
creates on internap in order to test load. This patch increases
the max_servers on internap from 24 to 36.
Change-Id: I5be7785f5f814269491fdc838e6b7c766e1df0a2
This is the second step in Fedora 21 EOL cleanup. This can be merged
once no more jobs are configured to use Fedora 21 and we have cleaned up
our Fedora 21 images/nodes from nodepool.
Change-Id: I5a1005f3d0568aadb12816d147c3dc4bd5fce260
Fedora 21 reached end of life on December 1, 2015. We should not be
testing on end-of-life distro releases, so remove tests that run on
Fedora 21. A follow-up change should completely remove the configuration
from nodepool once we have deleted all existing images and slaves.
Change-Id: Iab7e4676d534e59848835a1874843705ae301955
Currently internap nodes are performing without error. This
patch increases the number of nodes nodepool will create on
internap to a maximum of 24.
Change-Id: I7f5eb4d8b0b519fd39b51c697ed15785e4d07fab
This uses a new feature just added to our running nodepool that
allows us to identify which network is public so that nodepool
knows how to find a public IP address for servers.
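A minimal sketch of what this looks like in the nodepool provider config
(the provider and network names are placeholders, and the exact option name
should be checked against the nodepool docs of the time):
    providers:
      - name: example-provider        # placeholder provider name
        networks:
          - name: example-public-net  # placeholder network name
            public: true              # mark this as the public network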
Change-Id: I1427d22933a315346386718a65cf16262d45d008
The Bluebox cloud got new CPUs and initial jobs run on it with a
max-servers value of 1 look good. Bump that value back up to 39 to use
the full IP address capacity available to us.
Change-Id: I69cdf6e08c09e01a4f48844aa448978e79625ec7
This reverts commit 684bb51cbdc253d5ac20fdf892b1badee1fcbd19.
Maintenance is complete; bump max servers back to 1 to see how things
are doing now.
Change-Id: Ie33155c32bedd8cb94de952d8e8823a5e9dc19a2
Bluebox is performing maintenance soon. It would be friendliest if
nodepool wasn't asking for VMs on their cloud whilst they do their
maintenance. This patch sets Bluebox max_servers to 0.
Change-Id: I6e147b3e142dc185b2752f25007fca8c26d03e2d
simple-init (i.e. glean) has some unreleased changes that are required
for enabling the network correctly on F23. Install from source before
we cut a release so we can test things out.
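A rough sketch of the idea, assuming the diskimage-builder simple-init
element can be told to install glean from a git checkout rather than a
release (the variable name and image name below are assumptions, check the
element's README):
    diskimages:
      - name: fedora-23                      # illustrative image name
        env-vars:
          # Assumed knob: install glean from its git repo instead of a
          # released package.
          DIB_INSTALLTYPE_simple_init: repo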
Change-Id: Icffcede14853a7cdc988c3358c660416b30138ae
On OVH and Bluebox, the memory layout is such that there is still a
significant amount (~900M) of memory above the 8192M address. Increase the
limit to encompass that, which will bring these providers up to
approximately 8G, while increasing hpcloud (which is the actual
target of this restriction) to about 8.5G.
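The limit in question works by physical address rather than by amount of
RAM, which is why memory sitting above the 8192M address was being cut off.
As a rough sketch of that kind of setting (the file and value below are
illustrative assumptions, not the exact change):
    # /etc/default/grub fragment: the kernel ignores memory above the given
    # physical address, so holes below it reduce the usable total.
    GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX mem=9023M"   # illustrative value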
Change-Id: I5c121be55cadad13ad5807968f33b492f9b1e215
This reverts commit e474929518a11337a87432a67abd5f0144833551.
Approve once https://bitbucket.org/hpk42/tox/issues/294 is fixed in
a new tox release.
Change-Id: I73846af205efb49e14c41793e0327b54a9f2f7f9
Until the regression in tox 2.3.0 is resolved through a new release,
only update nodepool images on Tuesdays so we don't need to delete
them every day (hopefully it will be fixed by Tuesday!).
https://bitbucket.org/hpk42/tox/issues/294
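As a sketch of the intended cadence (the actual job lives in our
puppet-managed cron; the command and time below are placeholders), a
Tuesday-only crontab entry looks like:
    # min hour dom month dow (2 = Tuesday)
    14 14 * * 2  /path/to/nodepool-image-update-wrapper   # placeholder command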
Change-Id: I300857d50c1dc010dba569c5978e45e76a90162c
The non-HA job has been tested on F22, and
F21 is going EOL. Also change the number of
F21 and F22 nodes accordingly.
Depends-On: I7835ec90d8bcbf1183b4217b9bc6d7714833d243
Change-Id: Ib42dbfc8e8ae1cf5e85159562bd6c112bc0d3220