On OVH and Bluebox, the memory layout is such that there is still a
significant amount of memory (~900M) above the 8192M address. Increase the
limit to encompass that, which will bring these providers up to
approximately 8G, while increasing hpcloud (which is the actual
target of this restriction) to about 8.5G.
Change-Id: I5c121be55cadad13ad5807968f33b492f9b1e215
Since https://review.openstack.org/#/c/196252/ merged, nodepool
injects a sync;sleep 5 after running the scripts, so having it in the
scripts is a double sleep.
Change-Id: Ie7a9315e6de67625eca15b45f1662e4c63f8bc10
The release has happened, the migration worked, all's right with the
world.
This reverts commit 04eb36588d80dd77a9c692f53470b7d727567604.
Change-Id: I28a3e9cf3b420a6ccca20c52c4e67adc3f5710c5
This commit adds a temporary version cap on subunit2sql to be < 1.0.0.
The 1.0.0 release includes a very large database migration which will
be slow to execute. The Python DB API from >=1.0.0 will not work with
a database that doesn't have the updated schema. So while the migration
is running (which might take days), cap the version we install to
prevent everything from breaking in the meantime.
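However the cap ends up being expressed (requirements file or install
command), it boils down to a pip version specifier along these lines:
pip install 'subunit2sql<1.0.0'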
Change-Id: Iab89beb5c7aba8b744a62f5063e513b72cab0ec2
The gate-neutron-dsvm-functional-py34 job needs Python.h, which is
currently not available when running Python 3.
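The header comes from the distro's Python 3 development package; on
Ubuntu that is roughly:
sudo apt-get install -y python3-dev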
Change-Id: I00ce26864db682a4577dc5360519918007aa5499
Partial-Bug: #1500400
This commit creates a venv for installing os-testr, which will give
all test jobs access to the subunit2html utility now that it lives
inside the os-testr package instead of being shipped as a slave
script.
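As a rough sketch (the venv path is illustrative, not necessarily what
the script uses):
---
virtualenv /usr/os-testr-env
/usr/os-testr-env/bin/pip install os-testr
# jobs then call the venv's copy of the tool, e.g.:
/usr/os-testr-env/bin/subunit2html testrepository.subunit results.html
---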
Change-Id: I2050b54eb2def10438764f3eeb55ecf9caa874dc
This helps when reading log files from nodepool. Otherwise we see the
following in the log files:
[1;31mWarning: Config file /etc/puppet/hiera.yaml not found, using
Hiera defaults[0m
Change-Id: I3a865e5107e2749ed44c144539af49e311e0125f
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
This reverts commit 71d10ea626f5a62d6b48133e19f469b19c9a7f14.
It became unnecessary once the offending wheel was removed from PyPI,
and it never worked anyway. It was preventing new nodes from booting,
failing with:
./configure_mirror.sh: line 25: /tmp/pip.conf: Permission denied
Deleting the images to which the change had been applied allowed
nodepool to boot new nodes again.
Change-Id: I0841c6a5a26cf5be22e2d8fea861bdceb0393842
Remove jobs which test stable/icehouse branches of repos tagged
release:managed as the branch has reached end-of-life and is being
removed from those repos.
Change-Id: I88a44cfa84597012af7da0bd22de02dc2349b1fa
If we have booted with cloud-init, then this status directory is
populated. Remove it so when we boot snapshot images, they behave as
if on a fresh system.
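A minimal sketch of the cleanup, assuming the standard cloud-init
state location:
sudo rm -rf /var/lib/cloud/*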
Change-Id: Idc9ce01290b659e3239d30be847221447a8e5e84
hpcloud has started sending metadata to cloud-init to mount ephemeral
disks. This ends up writing an fstab entry for /dev/vdb.
---
$ curl http://169.254.169.254/2009-04-04/meta-data/block-device-mapping/ephemeral0
/dev/vdb
---
It's unclear why this just started happening, but it did.
devstack-gate later attempts to repartition this into swap & disk
space and mount it elsewhere. So remove this mount from fstab -- it
shouldn't come back because the next thing we do here is tell
cloud-init to not use the metadata source.
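Roughly, the removal amounts to something like:
sudo sed -i '/\/dev\/vdb/d' /etc/fstab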
Change-Id: I3787d0f7e5139e891686ffbb2970e65d09f112b1
Clean up every use of `` for command substitution in the nodepool and
tools directories, replacing it with $() and making the scripts
consistent.
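Illustrative example of the change:
---
# before
HOSTNAME=`hostname`
# after
HOSTNAME=$(hostname)
---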
Change-Id: I2b05cd20f9c9a30ab88f8db235aa81da93b1fad3
Some packages may depend on python-setuptools, which is not
installed and cannot be reinstalled on CentOS 6.x once yum has
erased it, so use --skip-broken to avoid aborting. Also on this
platform --downloadonly causes yum to return nonzero even when it
succeeds, so ignore its exit code.
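Illustrative invocation ($PACKAGES is a placeholder):
sudo yum install -y --downloadonly --skip-broken $PACKAGES || true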
Change-Id: Iaada39ae81e1e47fe9d0bedba80fd19e4e0e6f38
During scripted snapshot image builds we remove python-setuptools on
rhel/centos 6. This uninstalls cloud-init on hpcloud centos6 builds
because cloud-init depends on setuptools. We don't actually need
cloud-init in hpcloud because they use DHCP and we have hard-coded ssh
keys, so just don't bother setting the cloud-init config if the config
dir in /etc does not exist.
One could argue that we might want cloud-init to regenerate host keys
for us, but CentOS's sshd service should do that if the host keys do
not exist.
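Sketch of the guard (the override file name is illustrative):
---
if [ -d /etc/cloud ]; then
    # only then drop in our cloud-init overrides
    sudo cp 99-nodepool.cfg /etc/cloud/cloud.cfg.d/
fi
---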
Change-Id: I96621b0ab1574eb8db0f4394877d3c1fc8208576
The --downloadonly option to yum is provided by the
yum-plugin-downloadonly package. This is merely a virtual package
satisfied by yum itself in newer releases, but an optional package
in older ones such as CentOS/RHEL 6.x. Install it just to be sure it
will work, since we use this to pre-cache RPMs on nodepool images.
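On CentOS 6 that amounts to:
sudo yum install -y yum-plugin-downloadonly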
Change-Id: I9e2e1605f3721c410180aa46a81b7b731d08503a
On CentOS 6.x the rpm executable is in /bin instead of /usr/bin, but
cache_devstack.py ends up running yum if it wants to download RPM
packages anyway. Look for yum instead of rpm as an indication of
which packages to install.
Change-Id: Iad79c0fcf66d2bd457195f007009c76f6e6aa2d2
We don't need it for anything other than detecting whether or not we
should update the sources.list file, and that's obviously not something
we should do on systems with yum.
Remove the call and defer it to install_puppet, where it's also
installed.
Change-Id: Ie22aeac1e4c731e7ab61514cb4982c8c35c482e6
In the DevStack caching script/element for Nodepool images, use
run_local instead of subprocess.check_output in the _find_images
function. The latter wasn't introduced until Python 2.7 and so won't
work on CentOS 6. The script called from this function returns
quickly and doesn't benefit from non-blocking I/O anyway.
Change-Id: I3129f1f5b3fece321ae132ea1a52b0e156e58365
Some CentOS 6 images do not come with redhat-lsb-core preinstalled,
so the lsb_release command is not available when prepare_node.sh
wants it. Make sure we install it first if it's not already there.
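A minimal sketch of the check:
type lsb_release >/dev/null 2>&1 || sudo yum install -y redhat-lsb-core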
Change-Id: Ieeae21538c237e069acbc4df051474071b81ba4a
technical_debt = technical_debt - 1
This function is now available from the subunit2sql 0.4.2 API,
so let's use it and clean up this TODO.
Change-Id: I5acf279b2e78dddaeb59489d01d92c00ee996f8d
Rather than deleting cloud-init, which would take longer, just
disable the ec2 metadata source. This will be a no-op on rackspace,
which already does this.
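One way to express that with cloud-init (a sketch; the drop-in file
name is illustrative) is to restrict its datasource list so it never
queries the EC2 metadata service:
---
cat <<EOF | sudo tee /etc/cloud/cloud.cfg.d/95-no-ec2.cfg
datasource_list: [ None ]
EOF
---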
Change-Id: I5e8baee50800f7aae474288a914333c21466855a
Once the migration has been run manually we need to use
subunit2sql>=0.4.0 because that is the version of the DB schema running
on the infra db.
This reverts commit aa87c57ca7fe95f9785719acb6b626f12171e240.
Change-Id: I747083bc15ecda7e1926a87c2adcdaf29e565e23
Subunit2sql 0.4.0 was recently released and includes a schema change;
however, the infra DB migration takes too long and can't (yet?) be
automated. Until we run the migration we can't use 0.4.0 in any of the
tooling, so this caps the version to be less than 0.4.0.
Change-Id: I08f113fa904fa962e8f2dc04187ff44e764de47e
This reverts commit 1f84dbb2ce2ae7ed50401299db499adb6e836ec0.
Reverting because this runs a significant portion of devstack during
image builds, installing the packages necessary to build the wheels. Instead
we should be building wheels and hosting them on our mirror. Then image
builds and jobs themselves can both use this mirror to either cache
wheels or install wheels.
Conflicts:
nodepool/scripts/cache_devstack.py
Change-Id: I70ebc57076845995c539a42e4f87e241569211b4
This commit adds a print statement with the stdout from the run_local()
call which executes build_wheels.sh in devstack. Right now, when the
script is executed, all it does is print that it's running; without
the output from build_wheels we are unable to debug anything that
could go wrong during its execution.
Change-Id: I7433f054127a42fa73598db219c6e351efc98fb7
This commit adds a call to tools/build_wheels.sh in the cache devstack
step of the nodepool scripts. This will generate and cache the wheels
before the devstack run, which is normally a lengthy process involving
C compilations.
Change-Id: I9064d7d9b9b879a05d665c3b002a631fcc953f52
Allow supplying filenames and paths with '**' recursive glob matches
to zuul-swift-upload. Since bash (or any other shell) will expand glob
patterns in the arguments, these need to be quoted.
Usage example:
./zuul_swift_upload.py my_results.txt '**/sdist/*.zip' output.log
The hierarchy is always flattened, meaning the supplied list is
placed in the topmost generated index.html. Sub-folders still keep
their hierarchy.
Change-Id: I9ba04f7e46b579dcf3f8ad0bd188f41fa5dbcad9
When we build our images we attempt to checkout all of the valid
devstack branches so that we can list all of the images necessary to be
cached for each branch. Unfortunately this devstack repo is owned by
root during dib and snapshot image builds. This means we must use sudo
when updating the repo (either changing branches or updating the
current branch).
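The update step then looks roughly like this (path and branch are
illustrative):
---
cd /opt/git/openstack-dev/devstack
sudo git remote update
sudo git checkout stable/kilo
---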
Change-Id: I8cd09cfed4d586648dcbd34fa04bfc030c31ee45
There doesn't appear to be anything in TripleO that relies
on these clones. Also, this is a bit confusing since people
may expect that things copied into ~workspace-cache would
eventually end up at /opt/git/ (this is not the case).
The correct way to add projects to a job is to export a
PROJECTS variable like devstack-gate does.
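For example (the project name is illustrative), a job would do:
export PROJECTS="openstack/tripleo-heat-templates $PROJECTS"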
Change-Id: I0a96855ed9a38bcade6e506b37621b6b7b348b49