8b3532c562
We are currently trying to debug persistent wheel corruption. So far we
have been completely unable to replicate the corrupt output outside the
periodic jobs. A first thought was that it was AFS corruption from
multiple writes (I4f8a2f2c6c8164e7ea207f8e4b286e06df0b13dd); however,
this does not appear to be the case: experimentally running the builds
manually on infra nodes, under python2 and python3 in parallel just as
done here, does not replicate the problem.

This patched version of pip will output the sha256 hash of the final
build output for each wheel. The plan is to correlate that against any
corrupt file that appears. If the corrupt hash matches a file produced
by pip, then we know the problem is inside pip (and we will have the
exact build situation it occurred in); if the corrupt file does not
match, then we must have some sort of issue copying the files, or
similar.

Change-Id: I81943ed459bf4e2c77cae42e50af5fc5979682b4
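
For reference, a minimal sketch of the planned correlation step,
assuming the corrupt wheel has been copied somewhere local and the
captured build logs (the build-logs.tar.gz produced by the script) have
been extracted; all paths below are illustrative:

    # Hash the corrupt wheel and look for the same digest anywhere in
    # the captured per-package pip output.
    BAD_HASH=$(sha256sum /path/to/corrupt.whl | awk '{print $1}')
    if grep -rq "${BAD_HASH}" logs/build/; then
        echo "digest was produced by pip -- corruption happened during the build"
    else
        echo "digest not found -- wheel was corrupted after pip wrote it"
    fi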
78 lines
2.8 KiB
Bash
Executable File
#!/bin/bash -xe

# Working variables
WHEELHOUSE_DIR=$1
WORKING_DIR=$(pwd)/src/git.openstack.org/openstack/requirements
PYTHON_VERSION=$2
LOGS=$(pwd)/logs

FAIL_LOG=${LOGS}/failed.txt

# preclean logs
mkdir -p ${LOGS}
rm -rf ${LOGS}/*

# Extract and iterate over all the branch names.
BRANCHES=`git --git-dir=$WORKING_DIR/.git branch -a | grep '^ stable' | \
              grep -Ev '(newton)'`
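# At this point BRANCHES holds the stable branch names of the
# requirements repo, e.g. (illustrative; the real list depends on the
# checkout):
#   stable/ocata
#   stable/pike
#   stable/queens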
for BRANCH in master $BRANCHES; do
    git --git-dir=$WORKING_DIR/.git show $BRANCH:upper-constraints.txt \
        2>/dev/null > /tmp/upper-constraints.txt || true

    # Set up the build virtualenv.  We want to freshen this for each
    # branch.
    rm -rf build_env
    virtualenv -p $PYTHON_VERSION build_env

    # NOTE(ianw) 2018-10-22 This is a temporary hack to get some more
    # info into the logs to debug corrupt wheels.  We should see pip
    # stamping sha256 hashes into the logs for each wheel, so we can
    # see if the bad output is coming from pip, or somewhere else.
    build_env/bin/pip install -e 'git+https://github.com/ianw/pip.git@path-and-hash#egg=pip'
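    # (Per the commit message, the digest of each wheel that pip writes
    # should now show up in the per-package output captured below; that
    # digest is what gets correlated against any corrupt file that
    # appears.)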

    # SHORT_BRANCH is just "master", "newton", "kilo", etc., because
    # this keeps the output log hierarchy much simpler.
    SHORT_BRANCH=${BRANCH##origin/}
    SHORT_BRANCH=${SHORT_BRANCH##stable/}
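    # e.g. "origin/stable/queens" -> "stable/queens" -> "queens", while
    # "master" passes through unchanged (branch names illustrative).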

    # Failed parallel jobs don't fail the whole job; we just report
    # the issues for investigation.
    set +e

    # This runs all the jobs under "parallel".  The stdout, stderr and
    # exit status for each pip invocation will be captured into files
    # kept in ${LOGS}/build/${SHORT_BRANCH}/1/[package].  The --joblog
    # file keeps an overview of all run jobs, which we can probe to
    # find failed jobs.
    cat /tmp/upper-constraints.txt | \
        parallel --files --progress --joblog ${LOGS}/$SHORT_BRANCH-job.log \
            --results ${LOGS}/build/$SHORT_BRANCH \
            build_env/bin/pip --verbose --exists-action=i wheel \
                -c /tmp/upper-constraints.txt \
                -w $WHEELHOUSE_DIR {}
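    # Each line of upper-constraints.txt above is substituted for {}, so
    # an individual job is roughly (package name illustrative):
    #   build_env/bin/pip --verbose --exists-action=i wheel \
    #       -c /tmp/upper-constraints.txt -w $WHEELHOUSE_DIR foo===1.2.3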
    set -e
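
    # For reference, the --joblog columns are roughly: Seq Host
    # Starttime JobRuntime Send Receive Exitval Signal Command; an entry
    # looks something like (illustrative):
    #   5  :  1540166400.123  125.2  0  0  1  0  build_env/bin/pip ... foo===1.2.3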
    # Column $7 is the exit status of the job; $NF is the last
    # argument to pip, which is our package.
    FAILED=$(awk -e '$7!=0 {print $NF}' ${LOGS}/$SHORT_BRANCH-job.log)
    if [ -n "${FAILED}" ]; then
        echo "*** FAILED BUILDS FOR BRANCH ${BRANCH}" >> ${FAIL_LOG}
        echo "${FAILED}" >> ${FAIL_LOG}
        echo -e "***\n\n" >> ${FAIL_LOG}
    fi
done

if [ -f ${FAIL_LOG} ]; then
    cat ${FAIL_LOG}
fi

# XXX This does make a lot of log files; about 80MB after compression.
# In theory we could correlate the failure log above against the build
# logs and keep only those for the failed builds.  This is currently
# (2017-01) left as an exercise for when the job is stable :)  bz2 gave
# about 20% improvement over gzip in testing.
pushd ${LOGS}
tar zcvf build-logs.tar.gz ./build
rm -rf ./build
popd