Gates: Remove legacy zuulv2 scripts

This PS removes the legacy Zuul v2 gate scripts from OSH.

Change-Id: I02ee7654765d90b71632b0042930f4a8d71f648b
portdirect
2018-01-14 02:16:05 -05:00
parent 1809164e05
commit 9d40323eb1
58 changed files with 1 addition and 3900 deletions


@@ -34,9 +34,7 @@ Installation and Development
Please review our `documentation <https://docs.openstack.org/openstack-helm>`_.
For quick installation, evaluation, and convenience, we have a kubeadm
based all-in-one solution that runs in a Docker container. The Kubeadm-AIO set
up can be found `here, <https://docs.openstack.org/openstack-helm/latest/install/developer/all-in-one.html>`_
and the `gate scripts, <https://docs.openstack.org/openstack-helm/latest/install/developer/gates.html>`_
use are supported on any fresh Ubuntu, CentOS or Fedora machine.
up can be found `here, <https://docs.openstack.org/openstack-helm/latest/install/developer/all-in-one.html>`_.
This project is under active development. We encourage anyone interested in
OpenStack-Helm to review our `Installation <https://docs.openstack.org/openstack-helm/latest/install/index.html>`_


@@ -1 +0,0 @@
.. include:: ../../../../tools/gate/README.rst


@@ -7,5 +7,3 @@ Contents:
:maxdepth: 2
all-in-one
gates
vagrant


@@ -1 +0,0 @@
.. include:: ../../../../tools/vagrant/README.rst


@@ -1,85 +0,0 @@
Openstack-Helm Gate Scripts
===========================

.. warning:: These scripts are out of date. For all development and single node
   evaluation purposes, please reference the All-in-One installation_ docs instead.

.. _installation: https://docs.openstack.org/openstack-helm/latest/install/developer/all-in-one.html

These scripts are used in the OpenStack-Helm gates and can also be run
locally to aid development and for demonstration purposes. Please note
that they assume full control of a machine and may be destructive in
nature, so they should only be run on a dedicated host.

Supported Platforms
~~~~~~~~~~~~~~~~~~~

Currently supported host platforms are:

* Ubuntu 16.04
* CentOS 7
* Fedora 25

Usage (Single Node)
-------------------

The gate scripts use ``setup_gate.sh`` as their entrypoint and are
controlled by environment variables. An example run of the basic
integration test is below:

.. code:: bash

  export INTEGRATION=aio
  export INTEGRATION_TYPE=basic
  export PVC_BACKEND=ceph
  ./tools/gate/setup_gate.sh

Usage (Multi Node)
------------------

To use for a multinode deployment, you simply need to set a few extra
environment variables:

.. code:: bash

  export INTEGRATION=multi
  export INTEGRATION_TYPE=basic
  export PVC_BACKEND=ceph
  # IP of the primary node:
  export PRIMARY_NODE_IP=1.2.3.4
  # IPs of the subnodes:
  export SUB_NODE_IPS="1.2.3.5 1.2.3.6 1.2.3.7"
  # Location of the SSH private key to use with the subnodes:
  export SSH_PRIVATE_KEY=/etc/nodepool/id_rsa
  ./tools/gate/setup_gate.sh

Options
-------

You can also export some additional environment variables prior to running
``./tools/gate/setup_gate.sh`` to tweak aspects of the deployment.

Rather than Ceph, you may use an NFS-based backend. This option is especially
useful on old or low-spec machines, though it is not currently supported with
Linux kernels >=4.10:

.. code:: bash

  export PVC_BACKEND=nfs
  export GLANCE=pvc

It is also possible to customise the CNI used in the deployment:

.. code:: bash

  export KUBE_CNI=calico # or "canal" "weave" "flannel"
  export CNI_POD_CIDR=192.168.0.0/16

If you wish to deploy using Armada, you just need to export the following
variable:

.. code:: bash

  export INTEGRATION_TYPE=armada


@@ -1,137 +0,0 @@
#!/bin/bash
set +xe
# If we can't find kubectl, bail immediately: it is likely that the whitespace
# linter failed, so there is no point collecting logs.
if ! type "kubectl" &> /dev/null; then
exit $1
fi
# Make sure there are no helm processes hanging around when we're done,
# as they can cause some test runs to hang.
pkill -x helm
echo "Capturing logs from environment."
mkdir -p ${LOGS_DIR}/k8s/etc
sudo cp -a /etc/kubernetes ${LOGS_DIR}/k8s/etc
sudo chmod 777 --recursive ${LOGS_DIR}/*
mkdir -p ${LOGS_DIR}/k8s
for OBJECT_TYPE in nodes \
namespace \
storageclass; do
kubectl get ${OBJECT_TYPE} -o yaml > ${LOGS_DIR}/k8s/${OBJECT_TYPE}.yaml
done
kubectl describe nodes > ${LOGS_DIR}/k8s/nodes.txt
for OBJECT_TYPE in svc \
pods \
jobs \
deployments \
daemonsets \
statefulsets \
configmaps \
secrets; do
kubectl get --all-namespaces ${OBJECT_TYPE} -o yaml > \
${LOGS_DIR}/k8s/${OBJECT_TYPE}.yaml
done
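# Collect the logs of every container in every pod, across all namespaces.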
mkdir -p ${LOGS_DIR}/k8s/pods
kubectl get pods -a --all-namespaces -o json | jq -r \
'.items[].metadata | .namespace + " " + .name' | while read line; do
NAMESPACE=$(echo $line | awk '{print $1}')
NAME=$(echo $line | awk '{print $2}')
kubectl get --namespace $NAMESPACE pod $NAME -o json | jq -r \
'.spec.containers[].name' | while read line; do
CONTAINER=$(echo $line | awk '{print $1}')
kubectl logs $NAME --namespace $NAMESPACE -c $CONTAINER > \
${LOGS_DIR}/k8s/pods/$NAMESPACE-$NAME-$CONTAINER.txt
done
done
mkdir -p ${LOGS_DIR}/k8s/svc
kubectl get svc -o json --all-namespaces | jq -r \
'.items[].metadata | .namespace + " " + .name' | while read line; do
NAMESPACE=$(echo $line | awk '{print $1}')
NAME=$(echo $line | awk '{print $2}')
kubectl describe svc $NAME --namespace $NAMESPACE > \
${LOGS_DIR}/k8s/svc/$NAMESPACE-$NAME.txt
done
mkdir -p ${LOGS_DIR}/k8s/pvc
kubectl get pvc -o json --all-namespaces | jq -r \
'.items[].metadata | .namespace + " " + .name' | while read line; do
NAMESPACE=$(echo $line | awk '{print $1}')
NAME=$(echo $line | awk '{print $2}')
kubectl describe pvc $NAME --namespace $NAMESPACE > \
${LOGS_DIR}/k8s/pvc/$NAMESPACE-$NAME.txt
done
mkdir -p ${LOGS_DIR}/k8s/rbac
for OBJECT_TYPE in clusterroles \
roles \
clusterrolebindings \
rolebindings; do
kubectl get ${OBJECT_TYPE} -o yaml > ${LOGS_DIR}/k8s/rbac/${OBJECT_TYPE}.yaml
done
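# Capture `kubectl describe` output for everything returned by `kubectl get all` in each namespace.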
mkdir -p ${LOGS_DIR}/k8s/descriptions
for NAMESPACE in $(kubectl get namespaces -o name | awk -F '/' '{ print $NF }') ; do
for OBJECT in $(kubectl get all --show-all -n $NAMESPACE -o name) ; do
OBJECT_TYPE=$(echo $OBJECT | awk -F '/' '{ print $1 }')
OBJECT_NAME=$(echo $OBJECT | awk -F '/' '{ print $2 }')
mkdir -p ${LOGS_DIR}/k8s/descriptions/${NAMESPACE}/${OBJECT_TYPE}
kubectl describe -n $NAMESPACE $OBJECT > ${LOGS_DIR}/k8s/descriptions/${NAMESPACE}/$OBJECT_TYPE/$OBJECT_NAME.txt
done
done
NODE_NAME=$(hostname)
mkdir -p ${LOGS_DIR}/nodes/${NODE_NAME}
echo "${NODE_NAME}" > ${LOGS_DIR}/nodes/master.txt
sudo docker logs kubelet 2> ${LOGS_DIR}/nodes/${NODE_NAME}/kubelet.txt
sudo docker logs kubeadm-aio 2>&1 > ${LOGS_DIR}/nodes/${NODE_NAME}/kubeadm-aio.txt
sudo docker images --digests --no-trunc --all > ${LOGS_DIR}/nodes/${NODE_NAME}/images.txt
sudo du -h --max-depth=1 /var/lib/docker | sort -hr > ${LOGS_DIR}/nodes/${NODE_NAME}/docker-size.txt
sudo iptables-save > ${LOGS_DIR}/nodes/${NODE_NAME}/iptables.txt
sudo ip a > ${LOGS_DIR}/nodes/${NODE_NAME}/ip.txt
sudo route -n > ${LOGS_DIR}/nodes/${NODE_NAME}/routes.txt
sudo arp -a > ${LOGS_DIR}/nodes/${NODE_NAME}/arp.txt
cat /etc/resolv.conf > ${LOGS_DIR}/nodes/${NODE_NAME}/resolv.conf
sudo lshw > ${LOGS_DIR}/nodes/${NODE_NAME}/hardware.txt
if [ "x$INTEGRATION" == "xmulti" ]; then
: ${SSH_PRIVATE_KEY:="/etc/nodepool/id_rsa"}
: ${SUB_NODE_IPS:="$(cat /etc/nodepool/sub_nodes_private)"}
for NODE_IP in $SUB_NODE_IPS ; do
ssh-keyscan "${NODE_IP}" >> ~/.ssh/known_hosts
NODE_NAME=$(ssh -i ${SSH_PRIVATE_KEY} $(whoami)@${NODE_IP} hostname)
mkdir -p ${LOGS_DIR}/nodes/${NODE_NAME}
ssh -i ${SSH_PRIVATE_KEY} $(whoami)@${NODE_IP} sudo docker logs kubelet 2> ${LOGS_DIR}/nodes/${NODE_NAME}/kubelet.txt
ssh -i ${SSH_PRIVATE_KEY} $(whoami)@${NODE_IP} sudo docker logs kubeadm-aio 2>&1 > ${LOGS_DIR}/nodes/${NODE_NAME}/kubeadm-aio.txt
ssh -i ${SSH_PRIVATE_KEY} $(whoami)@${NODE_IP} sudo docker images --digests --no-trunc --all > ${LOGS_DIR}/nodes/${NODE_NAME}/images.txt
ssh -i ${SSH_PRIVATE_KEY} $(whoami)@${NODE_IP} sudo du -h --max-depth=1 /var/lib/docker | sort -hr > ${LOGS_DIR}/nodes/${NODE_NAME}/docker-size.txt
ssh -i ${SSH_PRIVATE_KEY} $(whoami)@${NODE_IP} sudo iptables-save > ${LOGS_DIR}/nodes/${NODE_NAME}/iptables.txt
ssh -i ${SSH_PRIVATE_KEY} $(whoami)@${NODE_IP} sudo ip a > ${LOGS_DIR}/nodes/${NODE_NAME}/ip.txt
ssh -i ${SSH_PRIVATE_KEY} $(whoami)@${NODE_IP} sudo route -n > ${LOGS_DIR}/nodes/${NODE_NAME}/routes.txt
ssh -i ${SSH_PRIVATE_KEY} $(whoami)@${NODE_IP} sudo arp -a > ${LOGS_DIR}/nodes/${NODE_NAME}/arp.txt
ssh -i ${SSH_PRIVATE_KEY} $(whoami)@${NODE_IP} cat /etc/resolv.conf > ${LOGS_DIR}/nodes/${NODE_NAME}/resolv.conf
ssh -i ${SSH_PRIVATE_KEY} $(whoami)@${NODE_IP} sudo lshw > ${LOGS_DIR}/nodes/${NODE_NAME}/hardware.txt
done
fi
source ./tools/gate/funcs/openstack.sh
mkdir -p ${LOGS_DIR}/openstack
$OPENSTACK service list > ${LOGS_DIR}/openstack/service.txt
$OPENSTACK endpoint list > ${LOGS_DIR}/openstack/endpoint.txt
$OPENSTACK extension list > ${LOGS_DIR}/openstack/extension.txt
$OPENSTACK compute service list > ${LOGS_DIR}/openstack/compute_service.txt
$OPENSTACK compute agent list > ${LOGS_DIR}/openstack/compute_agent.txt
$OPENSTACK host list > ${LOGS_DIR}/openstack/host.txt
$OPENSTACK hypervisor list > ${LOGS_DIR}/openstack/hypervisor.txt
$OPENSTACK hypervisor show $(hostname) > ${LOGS_DIR}/openstack/hypervisor-$(hostname).txt
$OPENSTACK network agent list > ${LOGS_DIR}/openstack/network_agent.txt
if [ "x$RALLY_CHART_ENABLED" == "xtrue" ]; then
mkdir -p ${LOGS_DIR}/openstack/rally
kubectl -n openstack logs $(kubectl -n openstack get pods -l job-name=rally-run-task --no-headers --output=name --show-all | awk -F '/' '{ print $NF; exit 0 }') > ${LOGS_DIR}/openstack/rally/rally_results.log
fi
exit $1


@@ -1,228 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
function sdn_lb_support_install {
if [ "x$HOST_OS" == "xubuntu" ]; then
sudo apt-get update -y
sudo apt-get install -y --no-install-recommends \
bridge-utils
elif [ "x$HOST_OS" == "xcentos" ]; then
sudo yum install -y \
bridge-utils
elif [ "x$HOST_OS" == "xfedora" ]; then
sudo dnf install -y \
bridge-utils
fi
}
function base_install {
if [ "x$HOST_OS" == "xubuntu" ]; then
sudo apt-get update -y
sudo apt-get install -y --no-install-recommends \
iproute2 \
iptables \
ipcalc \
nmap \
lshw \
jq \
python-pip
elif [ "x$HOST_OS" == "xcentos" ]; then
sudo yum install -y \
epel-release
# ipcalc is in the initscripts package
sudo yum install -y \
iproute \
iptables \
initscripts \
nmap \
lshw \
python-pip
# We need JQ 1.5 which is not currently in the CentOS or EPEL repos
sudo curl -L -o /usr/bin/jq https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64
sudo chmod +x /usr/bin/jq
elif [ "x$HOST_OS" == "xfedora" ]; then
sudo dnf install -y \
iproute \
iptables \
ipcalc \
nmap \
lshw \
jq \
python-pip
fi
sudo -H -E pip install --upgrade pip
sudo -H -E pip install --upgrade setuptools
sudo -H -E pip install pyyaml
sudo -H -E pip install yq
if [ "x$SDN_PLUGIN" == "xlinuxbridge" ]; then
sdn_lb_support_install
fi
}
function json_to_yaml {
python -c 'import sys, yaml, json; yaml.safe_dump(json.load(sys.stdin), sys.stdout, default_flow_style=False)'
}
function yaml_to_json {
python -c 'import sys, yaml, json; json.dump(yaml.safe_load(sys.stdin), sys.stdout)'
}
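# Example usage: `kubectl get nodes -o json | json_to_yaml` emits the node list as YAML;
# `yaml_to_json` performs the reverse conversion on a YAML stream.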
function loopback_support_install {
if [ "x$HOST_OS" == "xubuntu" ]; then
sudo apt-get update -y
sudo apt-get install -y --no-install-recommends \
targetcli \
open-iscsi \
lshw
sudo systemctl restart iscsid
elif [ "x$HOST_OS" == "xcentos" ]; then
sudo yum install -y \
targetcli \
iscsi-initiator-utils \
lshw
elif [ "x$HOST_OS" == "xfedora" ]; then
sudo dnf install -y \
targetcli \
iscsi-initiator-utils \
lshw
fi
}
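# loopback_setup creates file-backed iSCSI targets with targetcli, logs in to them
# locally via iscsiadm so they appear as block devices on the host, then diffs the
# lshw disk inventory to discover the new devices and records them as JSON in
# ${LOOPBACK_LOCAL_DISC_INFO} for later consumption (e.g. node labelling).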
function loopback_setup {
ORIGINAL_DISCS=$(mktemp --suffix=.txt)
ALL_DISCS=$(mktemp --suffix=.txt)
NEW_DISCS=$(mktemp --directory)
sudo rm -rf ${LOOPBACK_DIR} || true
sudo mkdir -p ${LOOPBACK_DIR}
COUNT=0
IFS=','; for LOOPBACK_NAME in ${LOOPBACK_NAMES}; do
sudo lshw -class disk > ${ORIGINAL_DISCS}
IFS=' '
let COUNT=COUNT+1
LOOPBACK_DEVS=$(echo ${LOOPBACK_DEV_COUNT} | awk -F ',' "{ print \$${COUNT}}")
LOOPBACK_SIZE=$(echo ${LOOPBACK_SIZES} | awk -F ',' "{ print \$${COUNT}}")
for ((LOOPBACK_DEV=1;LOOPBACK_DEV<=${LOOPBACK_DEVS};LOOPBACK_DEV++)); do
if [ "x$HOST_OS" == "xubuntu" ]; then
sudo targetcli backstores/fileio create loopback-${LOOPBACK_NAME}-${LOOPBACK_DEV} ${LOOPBACK_DIR}/fileio-${LOOPBACK_NAME}-${LOOPBACK_DEV} ${LOOPBACK_SIZE}
else
sudo targetcli backstores/fileio create loopback-${LOOPBACK_NAME}-${LOOPBACK_DEV} ${LOOPBACK_DIR}/fileio-${LOOPBACK_NAME}-${LOOPBACK_DEV} ${LOOPBACK_SIZE} write_back=false
fi
done
sudo targetcli iscsi/ create iqn.2016-01.com.example:${LOOPBACK_NAME}
if ! [ "x$HOST_OS" == "xubuntu" ]; then
sudo targetcli iscsi/iqn.2016-01.com.example:${LOOPBACK_NAME}/tpg1/portals delete 0.0.0.0 3260 || true
sudo targetcli iscsi/iqn.2016-01.com.example:${LOOPBACK_NAME}/tpg1/portals create 127.0.0.1 3260
else
#NOTE (Portdirect): Frustratingly, it appears that Ubuntu's targetcli won't
# let you bind to localhost.
sudo targetcli iscsi/iqn.2016-01.com.example:${LOOPBACK_NAME}/tpg1/portals create 0.0.0.0 3260
fi
for ((LOOPBACK_DEV=1;LOOPBACK_DEV<=${LOOPBACK_DEVS};LOOPBACK_DEV++)); do
sudo targetcli iscsi/iqn.2016-01.com.example:${LOOPBACK_NAME}/tpg1/luns/ create /backstores/fileio/loopback-${LOOPBACK_NAME}-${LOOPBACK_DEV}
done
sudo targetcli iscsi/iqn.2016-01.com.example:${LOOPBACK_NAME}/tpg1/acls/ create $(sudo cat /etc/iscsi/initiatorname.iscsi | awk -F '=' '/^InitiatorName/ { print $NF}')
if [ "x$HOST_OS" == "xubuntu" ]; then
sudo targetcli iscsi/iqn.2016-01.com.example:${LOOPBACK_NAME}/tpg1 set attribute authentication=0
fi
sudo iscsiadm --mode discovery --type sendtargets --portal 127.0.0.1 3260
sudo iscsiadm -m node -T iqn.2016-01.com.example:${LOOPBACK_NAME} -p 127.0.0.1:3260 -l
sudo lshw -class disk > ${ALL_DISCS}
# NOTE (Portdirect): Ugly subshell hack to suppress diff's exit code
(diff --changed-group-format="%>" --unchanged-group-format="" ${ORIGINAL_DISCS} ${ALL_DISCS} > ${NEW_DISCS}/${LOOPBACK_NAME}.raw) || true
jq -n -c -M \
--arg devclass "${LOOPBACK_NAME}" \
--arg device "$(awk '/bus info:/ { print $NF }' ${NEW_DISCS}/${LOOPBACK_NAME}.raw)" \
'{($devclass): ($device|split("\n"))}' > ${NEW_DISCS}/${LOOPBACK_NAME}
rm -f ${NEW_DISCS}/${LOOPBACK_NAME}.raw
done
unset IFS
jq -c -s add ${NEW_DISCS}/* | jq --arg hostname "$(hostname)" -s -M '{block_devices:{($hostname):.[]}}' > ${LOOPBACK_LOCAL_DISC_INFO}
cat ${LOOPBACK_LOCAL_DISC_INFO}
}
function loopback_dev_info_collect {
DEV_INFO_DIR=$(mktemp --dir)
cat ${LOOPBACK_LOCAL_DISC_INFO} > ${DEV_INFO_DIR}/$(hostname)
if [ "x$INTEGRATION" == "xmulti" ]; then
for SUB_NODE in $SUB_NODE_IPS ; do
ssh-keyscan "${SUB_NODE}" >> ~/.ssh/known_hosts
ssh -i ${SSH_PRIVATE_KEY} $(whoami)@${SUB_NODE} cat ${LOOPBACK_LOCAL_DISC_INFO} > ${DEV_INFO_DIR}/${SUB_NODE}
done
fi
touch ${LOOPBACK_DEV_INFO}
JQ_OPT='.[0]'
COUNT=1
let ITERATIONS=$(ls -1q $DEV_INFO_DIR | wc -l)
while [ $COUNT -lt "$ITERATIONS" ]; do
JQ_OPT="$JQ_OPT * .[$COUNT]"
COUNT=$[$COUNT+1]
done
(cd $DEV_INFO_DIR; jq -s "$JQ_OPT" *) | json_to_yaml >> ${LOOPBACK_DEV_INFO}
cat ${LOOPBACK_DEV_INFO}
}
function ceph_support_install {
if [ "x$HOST_OS" == "xubuntu" ]; then
sudo apt-get update -y
sudo apt-get install -y --no-install-recommends -qq \
ceph-common
elif [ "x$HOST_OS" == "xcentos" ]; then
sudo yum install -y \
ceph
elif [ "x$HOST_OS" == "xfedora" ]; then
sudo dnf install -y \
ceph
fi
sudo modprobe rbd
}
function nfs_support_install {
if [ "x$HOST_OS" == "xubuntu" ]; then
sudo apt-get update -y
sudo apt-get install -y --no-install-recommends -qq \
nfs-common
elif [ "x$HOST_OS" == "xcentos" ]; then
sudo yum install -y \
nfs-utils
elif [ "x$HOST_OS" == "xfedora" ]; then
sudo dnf install -y \
nfs-utils
fi
}
function gate_base_setup {
# Install base requirements
base_install
# Install and setup iscsi loopback devices if required.
if [ "x$LOOPBACK_CREATE" == "xtrue" ]; then
loopback_support_install
loopback_setup
fi
# Install support packages for pvc backends
if [ "x$PVC_BACKEND" == "xceph" ]; then
ceph_support_install
elif [ "x$PVC_BACKEND" == "xnfs" ]; then
nfs_support_install
fi
}


@@ -1,110 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
function helm_install {
if [ "x$HOST_OS" == "xubuntu" ]; then
sudo apt-get update -y
sudo apt-get install -y --no-install-recommends -qq \
git \
make \
curl \
ca-certificates
elif [ "x$HOST_OS" == "xcentos" ]; then
sudo yum install -y \
git \
make \
curl
elif [ "x$HOST_OS" == "xfedora" ]; then
sudo dnf install -y \
git \
make \
curl
fi
# install helm
if CURRENT_HELM_LOC=$(type -p helm); then
CURRENT_HELM_VERSION=$(${CURRENT_HELM_LOC} version --client --short | awk '{ print $NF }' | awk -F '+' '{ print $1 }')
fi
[ "x$HELM_VERSION" == "x$CURRENT_HELM_VERSION" ] || ( \
TMP_DIR=$(mktemp -d)
curl -sSL https://storage.googleapis.com/kubernetes-helm/helm-${HELM_VERSION}-linux-amd64.tar.gz | tar -zxv --strip-components=1 -C ${TMP_DIR}
sudo mv ${TMP_DIR}/helm /usr/local/bin/helm
rm -rf ${TMP_DIR} )
}
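# helm_serve initialises the helm client if needed, starts a local chart repository
# on 127.0.0.1:8879 unless one is already answering, removes the default 'stable'
# repo and registers the local repo in its place.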
function helm_serve {
if [[ -d "$HOME/.helm" ]]; then
echo ".helm directory found"
else
helm init --client-only
fi
if [[ -z $(curl -s 127.0.0.1:8879 | grep 'Helm Repository') ]]; then
helm serve > /dev/null &
while [[ -z $(curl -s 127.0.0.1:8879 | grep 'Helm Repository') ]]; do
sleep 1
echo "Waiting for Helm Repository"
done
else
echo "Helm serve already running"
fi
if helm repo list | grep -q "^stable" ; then
helm repo remove stable
fi
helm repo add local http://localhost:8879/charts
}
function helm_lint {
make build-helm-toolkit -C ${WORK_DIR}
make TASK=lint -C ${WORK_DIR}
}
function helm_build {
make TASK=build -C ${WORK_DIR}
}
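# helm_test_deployment runs `helm test` against the named release (an optional second
# argument overrides the default 300 second timeout), saves the test pod's logs under
# ${LOGS_DIR}/helm-tests and then deletes the test pod.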
function helm_test_deployment {
DEPLOYMENT=$1
if [ x$2 == "x" ]; then
TIMEOUT=300
else
TIMEOUT=$2
fi
NAME="${DEPLOYMENT}-test"
# Get the namespace of the chart via the Helm release
NAMESPACE=$(helm status ${DEPLOYMENT} | awk '/^NAMESPACE/ { print $NF }')
helm test --timeout ${TIMEOUT} ${DEPLOYMENT}
mkdir -p ${LOGS_DIR}/helm-tests
kubectl logs -n ${NAMESPACE} ${NAME} > ${LOGS_DIR}/helm-tests/${DEPLOYMENT}
kubectl delete -n ${NAMESPACE} pod ${NAME}
}
function helm_plugin_template_install {
helm plugin install https://github.com/technosophos/helm-template
}
function helm_template_run {
mkdir -p ${LOGS_DIR}/templates
set +x
for CHART in $(helm search | tail -n +2 | awk '{ print $1 }' | awk -F '/' '{ print $NF }'); do
echo "Running Helm template plugin on chart: $CHART"
helm template --verbose $CHART > ${LOGS_DIR}/templates/$CHART
done
set -x
}


@@ -1,157 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
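# kube_wait_for_pods polls the given namespace until no pod is Pending, every running
# pod's containers report ready and all jobs have completed, or until the timeout
# (default 180s, override via the second argument) expires.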
function kube_wait_for_pods {
# From Kolla-Kubernetes, original authors Kevin Fox & Serguei Bezverkhi
# Default wait timeout is 180 seconds
set +x
end=$(date +%s)
if ! [ -z $2 ]; then
end=$((end + $2))
else
end=$((end + 180))
fi
while true; do
kubectl get pods --namespace=$1 -o json | jq -r \
'.items[].status.phase' | grep Pending > /dev/null && \
PENDING=True || PENDING=False
query='.items[]|select(.status.phase=="Running")'
query="$query|.status.containerStatuses[].ready"
kubectl get pods --namespace=$1 -o json | jq -r "$query" | \
grep false > /dev/null && READY="False" || READY="True"
kubectl get jobs -o json --namespace=$1 | jq -r \
'.items[] | .spec.completions == .status.succeeded' | \
grep false > /dev/null && JOBR="False" || JOBR="True"
[ $PENDING == "False" -a $READY == "True" -a $JOBR == "True" ] && \
break || true
sleep 1
now=$(date +%s)
[ $now -gt $end ] && echo containers failed to start. && \
kubectl get pods --namespace $1 -o wide && exit -1
done
set -x
}
function kube_wait_for_nodes {
# Default wait timeout is 180 seconds
set +x
end=$(date +%s)
if ! [ -z $2 ]; then
end=$((end + $2))
else
end=$((end + 180))
fi
while true; do
NUMBER_OF_NODES_EXPECTED=$1
NUMBER_OF_NODES=$(kubectl get nodes --no-headers -o name | wc -l)
[ $NUMBER_OF_NODES -eq $NUMBER_OF_NODES_EXPECTED ] && \
NODES_ONLINE="True" || NODES_ONLINE="False"
while read SUB_NODE; do
echo $SUB_NODE | grep -q ^Ready && NODES_READY="True" || NODES_READY="False"
done < <(kubectl get nodes --no-headers | awk '{ print $2 }')
[ $NODES_ONLINE == "True" -a $NODES_READY == "True" ] && \
break || true
sleep 5
now=$(date +%s)
[ $now -gt $end ] && echo "Nodes Failed to be ready in time." && \
kubectl get nodes -o wide && exit -1
done
set -x
}
function kubeadm_aio_reqs_install {
if [ "x$HOST_OS" == "xubuntu" ]; then
sudo apt-get update -y
sudo apt-get install -y --no-install-recommends -qq \
docker.io \
jq
elif [ "x$HOST_OS" == "xcentos" ]; then
sudo yum install -y \
epel-release
sudo yum install -y \
docker-latest
# We need JQ 1.5 which is not currently in the CentOS or EPEL repos
sudo curl -L -o /usr/bin/jq https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64
sudo chmod +x /usr/bin/jq
sudo cp -f /usr/lib/systemd/system/docker-latest.service /etc/systemd/system/docker.service
sudo sed -i "s|/var/lib/docker-latest|/var/lib/docker|g" /etc/systemd/system/docker.service
sudo sed -i 's/^OPTIONS/#OPTIONS/g' /etc/sysconfig/docker-latest
sudo sed -i "s|^MountFlags=slave|MountFlags=share|g" /etc/systemd/system/docker.service
sudo sed -i "/--seccomp-profile/,+1 d" /etc/systemd/system/docker.service
echo "DOCKER_STORAGE_OPTIONS=--storage-driver=overlay" | sudo tee /etc/sysconfig/docker-latest-storage
sudo setenforce 0 || true
sudo systemctl daemon-reload
sudo systemctl restart docker
elif [ "x$HOST_OS" == "xfedora" ]; then
sudo dnf install -y \
docker-latest \
jq
sudo cp -f /usr/lib/systemd/system/docker-latest.service /etc/systemd/system/docker.service
sudo sed -i "s|/var/lib/docker-latest|/var/lib/docker|g" /etc/systemd/system/docker.service
echo "DOCKER_STORAGE_OPTIONS=--storage-driver=overlay2" | sudo tee /etc/sysconfig/docker-latest-storage
sudo setenforce 0 || true
sudo systemctl daemon-reload
sudo systemctl restart docker
fi
if CURRENT_KUBECTL_LOC=$(type -p kubectl); then
CURRENT_KUBECTL_VERSION=$(${CURRENT_KUBECTL_LOC} version --client --short | awk '{ print $NF }' | awk -F '+' '{ print $1 }')
fi
[ "x$KUBE_VERSION" == "x$CURRENT_KUBECTL_VERSION" ] || ( \
TMP_DIR=$(mktemp -d)
curl -sSL https://storage.googleapis.com/kubernetes-release/release/${KUBE_VERSION}/bin/linux/amd64/kubectl -o ${TMP_DIR}/kubectl
chmod +x ${TMP_DIR}/kubectl
sudo mv ${TMP_DIR}/kubectl /usr/local/bin/kubectl
rm -rf ${TMP_DIR} )
}
function kubeadm_aio_build {
sudo docker build --pull -t ${KUBEADM_IMAGE} --build-arg KUBE_VERSION=$KUBE_VERSION tools/kubeadm-aio
}
function kubeadm_aio_launch {
${WORK_DIR}/tools/kubeadm-aio/kubeadm-aio-launcher.sh
mkdir -p ${HOME}/.kube
cat ${KUBECONFIG} > ${HOME}/.kube/config
kube_wait_for_pods kube-system ${POD_START_TIMEOUT_SYSTEM}
kube_wait_for_pods default ${POD_START_TIMEOUT_DEFAULT}
}
function kubeadm_aio_clean {
sudo docker rm -f kubeadm-aio || true
sudo docker rm -f kubelet || true
sudo docker ps -aq | xargs -r -l1 -P16 sudo docker rm -f
sudo rm -rfv \
/etc/cni/net.d \
/etc/kubernetes \
/var/lib/etcd \
/var/etcd \
/var/lib/kubelet/* \
/run/openvswitch \
/var/lib/nova \
${HOME}/.kubeadm-aio/admin.conf \
/var/lib/openstack-helm \
/var/lib/nfs-provisioner || true
}
function kube_label_node_block_devs {
for HOST in $(cat $LOOPBACK_DEV_INFO | yaml_to_json | jq -r ".block_devices | keys? | .[]"); do
for DEV_TYPE in $(cat $LOOPBACK_DEV_INFO | yaml_to_json | jq -r ".block_devices.\"$HOST\" | keys? | .[]"); do
DEV_ADDRS=$(cat $LOOPBACK_DEV_INFO | yaml_to_json | jq -r ".block_devices.\"$HOST\".\"$DEV_TYPE\" | .[]")
for DEV_ADDR in $(cat $LOOPBACK_DEV_INFO | yaml_to_json | jq -r ".block_devices.\"$HOST\".\"$DEV_TYPE\" | .[]"); do
kubectl label node $HOST device-$DEV_TYPE-$(echo $DEV_ADDR | tr '@' '_' | tr ':' '-' )=enabled
done
done
done
}


@@ -1,85 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
function net_default_iface {
sudo ip -4 route list 0/0 | awk '{ print $5; exit }'
}
function net_default_host_addr {
sudo ip addr | awk "/inet / && /$(net_default_iface)/{print \$2; exit }" | sed 's/\/32/\/24/'
}
function net_default_host_ip {
echo $(net_default_host_addr) | awk -F '/' '{ print $1; exit }'
}
function net_resolv_pre_kube {
sudo cp -f /etc/resolv.conf /etc/resolv-pre-kube.conf
sudo rm -f /etc/resolv.conf
cat << EOF | sudo tee /etc/resolv.conf
nameserver ${UPSTREAM_DNS1}
nameserver ${UPSTREAM_DNS2}
EOF
}
function net_resolv_post_kube {
sudo cp -f /etc/resolv-pre-kube.conf /etc/resolv.conf
}
function net_hosts_pre_kube {
sudo cp -f /etc/hosts /etc/hosts-pre-kube
sudo sed -i "/$(hostname)/d" /etc/hosts
echo "$(net_default_host_ip) $(hostname)" | sudo tee -a /etc/hosts
}
function net_hosts_post_kube {
sudo cp -f /etc/hosts-pre-kube /etc/hosts
}
function find_subnet_range {
if [ "x$HOST_OS" == "xubuntu" ]; then
ipcalc $(net_default_host_addr) | awk '/^Network/ { print $2 }'
else
eval $(ipcalc --network --prefix $(net_default_host_addr))
echo "$NETWORK/$PREFIX"
fi
}
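# find_multi_subnet_range derives a CIDR covering all gate nodes: it sorts the node
# IPs, takes the /24 networks of the lowest and highest addresses, and keeps only the
# octets before the first one that differs, falling back to /24 when the two networks
# match.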
function find_multi_subnet_range {
: ${PRIMARY_NODE_IP:="$(cat /etc/nodepool/primary_node | tail -1)"}
: ${SUB_NODE_IPS:="$(cat /etc/nodepool/sub_nodes)"}
NODE_IPS="${PRIMARY_NODE_IP} ${SUB_NODE_IPS}"
NODE_IP_UNSORTED=$(mktemp --suffix=.txt)
for NODE_IP in $NODE_IPS; do
echo $NODE_IP >> ${NODE_IP_UNSORTED}
done
NODE_IP_SORTED=$(mktemp --suffix=.txt)
sort -V ${NODE_IP_UNSORTED} > ${NODE_IP_SORTED}
rm -f ${NODE_IP_UNSORTED}
FIRST_IP_SUBNET=$(ipcalc "$(head -n 1 ${NODE_IP_SORTED})/24" | awk '/^Network/ { print $2 }')
LAST_IP_SUBNET=$(ipcalc "$(tail -n 1 ${NODE_IP_SORTED})/24" | awk '/^Network/ { print $2 }')
rm -f ${NODE_IP_SORTED}
function ip_diff {
echo $(($(echo $LAST_IP_SUBNET | awk -F '.' "{ print \$$1}") - $(echo $FIRST_IP_SUBNET | awk -F '.' "{ print \$$1}")))
}
for X in {1..4}; do
if ! [ "$(ip_diff $X)" -eq "0" ]; then
SUBMASK=$(((($X - 1 )) * 8))
break
elif [ $X -eq "4" ]; then
SUBMASK=24
fi
done
echo ${FIRST_IP_SUBNET%/*}/${SUBMASK}
}


@@ -1,141 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
: ${KS_USER:="admin"}
: ${KS_PROJECT:="admin"}
: ${KS_PASSWORD:="password"}
: ${KS_USER_DOMAIN:="default"}
: ${KS_PROJECT_DOMAIN:="default"}
: ${KS_URL:="http://keystone.openstack/v3"}
# Setup openstack clients
KEYSTONE_CREDS="--os-username ${KS_USER} \
--os-project-name ${KS_PROJECT} \
--os-auth-url ${KS_URL} \
--os-project-domain-name ${KS_PROJECT_DOMAIN} \
--os-user-domain-name ${KS_USER_DOMAIN} \
--os-password ${KS_PASSWORD}"
HEAT_POD=$(kubectl get -n openstack pods -l application=heat,component=engine --no-headers -o name | awk -F '/' '{ print $NF; exit }')
HEAT="kubectl exec -n openstack ${HEAT_POD} -- heat ${KEYSTONE_CREDS}"
NEUTRON_POD=$(kubectl get -n openstack pods -l application=heat,component=engine --no-headers -o name | awk -F '/' '{ print $NF; exit }')
NEUTRON="kubectl exec -n openstack ${NEUTRON_POD} -- neutron ${KEYSTONE_CREDS}"
NOVA_POD=$(kubectl get -n openstack pods -l application=heat,component=engine --no-headers -o name | awk -F '/' '{ print $NF; exit }')
NOVA="kubectl exec -n openstack ${NOVA_POD} -- nova ${KEYSTONE_CREDS}"
OPENSTACK_POD=$(kubectl get -n openstack pods -l application=heat,component=engine --no-headers -o name | awk -F '/' '{ print $NF; exit }')
OPENSTACK="kubectl exec -n openstack ${OPENSTACK_POD} -- openstack ${KEYSTONE_CREDS} --os-identity-api-version 3 --os-image-api-version 2"
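# NOTE: each of the client wrappers above execs into the heat engine pod, which is
# assumed to have the heat, neutron, nova and openstack CLI clients installed.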
function wait_for_ping {
# Default wait timeout is 180 seconds
set +x
PING_CMD="ping -q -c 1 -W 1"
end=$(date +%s)
if ! [ -z $2 ]; then
end=$((end + $2))
else
end=$((end + 180))
fi
while true; do
$PING_CMD $1 > /dev/null && \
break || true
sleep 1
now=$(date +%s)
[ $now -gt $end ] && echo "Could not ping $1 in time" && exit -1
done
set -x
$PING_CMD $1
}
function openstack_wait_for_vm {
# Default wait timeout is 180 seconds
set +x
end=$(date +%s)
if ! [ -z $2 ]; then
end=$((end + $2))
else
end=$((end + 180))
fi
while true; do
STATUS=$($OPENSTACK server show $1 -f value -c status)
[ $STATUS == "ACTIVE" ] && \
break || true
sleep 1
now=$(date +%s)
[ $now -gt $end ] && echo VM failed to start. && \
$OPENSTACK server show $1 && exit -1
done
set -x
}
function wait_for_ssh_port {
# Default wait timeout is 180 seconds
set +x
end=$(date +%s)
if ! [ -z $2 ]; then
end=$((end + $2))
else
end=$((end + 180))
fi
while true; do
# Use Nmap as its the same on Ubuntu and RHEL family distros
nmap -Pn -p22 $1 | awk '$1 ~ /22/ {print $2}' | grep -q 'open' && \
break || true
sleep 1
now=$(date +%s)
[ $now -gt $end ] && echo "Could not connect to $1 port 22 in time" && exit -1
done
set -x
}
function openstack_wait_for_stack {
# Default wait timeout is 180 seconds
set +x
end=$(date +%s)
if ! [ -z $2 ]; then
end=$((end + $2))
else
end=$((end + 180))
fi
while true; do
STATUS=$($OPENSTACK stack show $1 -f value -c stack_status)
[ $STATUS == "CREATE_COMPLETE" ] && \
break || true
sleep 1
now=$(date +%s)
[ $now -gt $end ] && echo Stack failed to start. && \
$OPENSTACK stack show $1 && exit -1
done
set -x
}
function openstack_wait_for_volume {
# Default wait timeout is 180 seconds
set +x
end=$(date +%s)
if ! [ -z $3 ]; then
end=$((end + $3))
else
end=$((end + 180))
fi
while true; do
STATUS=$($OPENSTACK volume show $1 -f value -c status)
[ $STATUS == "$2" ] && \
break || true
sleep 1
now=$(date +%s)
[ $now -gt $end ] && echo "Volume did not become $2 in time." && \
$OPENSTACK volume show $1 && exit -1
done
set -x
}


@@ -1,17 +0,0 @@
#!/usr/bin/env python
import json
import sys
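# Reads a Python literal from stdin (or from the files given as arguments) and
# re-emits it as JSON; the gate uses this to parse CLI output such as the volume
# attachment data returned by `openstack volume show`.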
def dump(s):
print json.dumps(eval(s))
def main(args):
if not args:
dump(''.join(sys.stdin.readlines()))
else:
for arg in args:
dump(''.join(open(arg, 'r').readlines()))
return 0
if __name__ == "__main__":
sys.exit(main(sys.argv[1:]))


@@ -1,25 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ex
: ${WORK_DIR:="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/../.."}
source ${WORK_DIR}/tools/gate/vars.sh
source ${WORK_DIR}/tools/gate/funcs/helm.sh
helm_build
mkdir -p ${LOGS_DIR}/dry-runs
for CHART in $(helm search | awk '{ print $1 }' | tail -n +2 | awk -F '/' '{ print $NF }'); do
echo "Dry Running chart: $CHART"
helm install --dry-run --debug local/$CHART --name="${CHART}-dry-run" --namespace=openstack > ${LOGS_DIR}/dry-runs/$CHART
done


@@ -1,23 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ex
: ${WORK_DIR:="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"}
source ${WORK_DIR}/tools/gate/vars.sh
source ${WORK_DIR}/tools/gate/funcs/network.sh
source ${WORK_DIR}/tools/gate/funcs/kube.sh
kubeadm_aio_reqs_install
sudo docker pull ${KUBEADM_IMAGE} || kubeadm_aio_build
kubeadm_aio_launch


@@ -1,55 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ex
: ${WORK_DIR:="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/.."}
source ${WORK_DIR}/tools/gate/vars.sh
source ${WORK_DIR}/tools/gate/funcs/helm.sh
source ${WORK_DIR}/tools/gate/funcs/kube.sh
source ${WORK_DIR}/tools/gate/funcs/network.sh
ARMADA_MANIFEST=$(mktemp --suffix=.yaml)
if [ "x$INTEGRATION" == "xaio" ]; then
SUBNET_RANGE=$(find_subnet_range)
ARMADA_MANIFEST_TEMPLATE=${WORK_DIR}/tools/deployment/armada/openstack-master-aio.yaml
else
SUBNET_RANGE="$(find_multi_subnet_range)"
ARMADA_MANIFEST_TEMPLATE=${WORK_DIR}/tools/deployment/armada/openstack-master.yaml
fi
sed "s|192.168.0.0/16|${SUBNET_RANGE}|g" ${ARMADA_MANIFEST_TEMPLATE} > ${ARMADA_MANIFEST}
sudo docker build https://github.com/att-comdev/armada.git#master -t openstackhelm/armada:latest
sudo docker run -d \
--net=host \
--name armada \
-v ${HOME}/.kube/config:/armada/.kube/config \
-v ${ARMADA_MANIFEST}:${ARMADA_MANIFEST}:ro \
-v ${WORK_DIR}:/opt/openstack-helm/charts:ro \
openstackhelm/armada:latest
sudo docker exec armada armada tiller --status
sudo docker exec armada armada apply ${ARMADA_MANIFEST}
sudo docker rm -f armada
kube_wait_for_pods ceph ${POD_START_TIMEOUT_CEPH}
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
MON_POD=$(kubectl get pods -l application=ceph -l component=mon -n ceph --no-headers | awk '{ print $1; exit }')
kubectl exec -n ceph ${MON_POD} -- ceph -s
if [ "x$INTEGRATION" == "xmulti" ]; then
helm_test_deployment osh-keystone ${SERVICE_TEST_TIMEOUT}
helm_test_deployment osh-glance ${SERVICE_TEST_TIMEOUT}
helm_test_deployment osh-cinder ${SERVICE_TEST_TIMEOUT}
helm_test_deployment osh-neutron ${SERVICE_TEST_TIMEOUT}
helm_test_deployment osh-nova ${SERVICE_TEST_TIMEOUT}
fi


@@ -1,228 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ex
: ${WORK_DIR:="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/.."}
source ${WORK_DIR}/tools/gate/vars.sh
source ${WORK_DIR}/tools/gate/funcs/helm.sh
source ${WORK_DIR}/tools/gate/funcs/kube.sh
source ${WORK_DIR}/tools/gate/funcs/network.sh
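# This script walks through the gate deployment: Ceph first when PVC_BACKEND=ceph,
# then the supporting charts (ingress, mariadb, ldap, memcached, rabbitmq), Keystone
# and the remaining OpenStack services, waiting for the pods in each namespace to
# become ready between steps.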
helm_build
helm search
if [ "x$PVC_BACKEND" == "xceph" ]; then
if [ "x$INTEGRATION" == "xmulti" ]; then
SUBNET_RANGE="$(find_multi_subnet_range)"
else
SUBNET_RANGE=$(find_subnet_range)
fi
if [ "x$INTEGRATION" == "xaio" ]; then
helm install --namespace=ceph ${WORK_DIR}/ceph --name=ceph \
--set endpoints.identity.namespace=openstack \
--set endpoints.object_store.namespace=ceph \
--set endpoints.ceph_mon.namespace=ceph \
--set ceph.rgw_keystone_auth=${CEPH_RGW_KEYSTONE_ENABLED} \
--set network.public=${SUBNET_RANGE} \
--set network.cluster=${SUBNET_RANGE} \
--set deployment.storage_secrets=true \
--set deployment.ceph=true \
--set deployment.rbd_provisioner=true \
--set deployment.cephfs_provisioner=true \
--set deployment.client_secrets=false \
--set deployment.rgw_keystone_user_and_endpoints=false \
--set bootstrap.enabled=true \
--values=${WORK_DIR}/tools/overrides/mvp/ceph.yaml
else
helm install --namespace=ceph ${WORK_DIR}/ceph --name=ceph \
--set endpoints.identity.namespace=openstack \
--set endpoints.object_store.namespace=ceph \
--set endpoints.ceph_mon.namespace=ceph \
--set ceph.rgw_keystone_auth=${CEPH_RGW_KEYSTONE_ENABLED} \
--set network.public=${SUBNET_RANGE} \
--set network.cluster=${SUBNET_RANGE} \
--set deployment.storage_secrets=true \
--set deployment.ceph=true \
--set deployment.rbd_provisioner=true \
--set deployment.cephfs_provisioner=true \
--set deployment.client_secrets=false \
--set deployment.rgw_keystone_user_and_endpoints=false \
--set bootstrap.enabled=true
fi
kube_wait_for_pods ceph ${POD_START_TIMEOUT_CEPH}
MON_POD=$(kubectl get pods \
--namespace=ceph \
--selector="application=ceph" \
--selector="component=mon" \
--no-headers | awk '{ print $1; exit }')
kubectl exec -n ceph ${MON_POD} -- ceph -s
helm install --namespace=openstack ${WORK_DIR}/ceph --name=ceph-openstack-config \
--set endpoints.identity.namespace=openstack \
--set endpoints.object_store.namespace=ceph \
--set endpoints.ceph_mon.namespace=ceph \
--set ceph.rgw_keystone_auth=${CEPH_RGW_KEYSTONE_ENABLED} \
--set network.public=${SUBNET_RANGE} \
--set network.cluster=${SUBNET_RANGE} \
--set deployment.storage_secrets=false \
--set deployment.ceph=false \
--set deployment.rbd_provisioner=false \
--set deployment.cephfs_provisioner=false \
--set deployment.client_secrets=true \
--set deployment.rgw_keystone_user_and_endpoints=false
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
fi
helm install --namespace=openstack ${WORK_DIR}/ingress --name=ingress
if [ "x$INTEGRATION" == "xmulti" ]; then
helm install --namespace=openstack ${WORK_DIR}/mariadb --name=mariadb
else
helm install --namespace=openstack ${WORK_DIR}/mariadb --name=mariadb \
--set pod.replicas.server=1
fi
helm install --namespace=openstack ${WORK_DIR}/ldap --name=ldap
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
helm install --namespace=openstack ${WORK_DIR}/memcached --name=memcached
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
helm install --namespace=openstack ${WORK_DIR}/keystone --name=keystone
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
if [ "x$OPENSTACK_OBJECT_STORAGE" == "xradosgw" ]; then
helm install --namespace=openstack ${WORK_DIR}/ceph --name=radosgw-openstack \
--set endpoints.identity.namespace=openstack \
--set endpoints.object_store.namespace=ceph \
--set endpoints.ceph_mon.namespace=ceph \
--set ceph.rgw_keystone_auth=${CEPH_RGW_KEYSTONE_ENABLED} \
--set network.public=${SUBNET_RANGE} \
--set network.cluster=${SUBNET_RANGE} \
--set deployment.storage_secrets=false \
--set deployment.ceph=false \
--set deployment.rbd_provisioner=false \
--set deployment.cephfs_provisioner=false \
--set deployment.client_secrets=false \
--set deployment.rgw_keystone_user_and_endpoints=true
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
fi
helm install --namespace=openstack ${WORK_DIR}/rabbitmq --name=rabbitmq
if [ "x$INTEGRATION" == "xmulti" ]; then
if [ "x$PVC_BACKEND" == "xceph" ]; then
#NOTE(portdirect): Deploy Telemetry components here to enable ingestion
# of data from other services as they come online.
helm install --namespace=openstack ${WORK_DIR}/postgresql --name=postgresql
helm install --namespace=openstack ${WORK_DIR}/gnocchi --name=gnocchi
helm install --namespace=openstack ${WORK_DIR}/mongodb --name=mongodb
helm install --namespace=openstack ${WORK_DIR}/ceilometer --name=ceilometer
fi
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
fi
if [[ "x${PVC_BACKEND}" != "xceph" ]] && [[ "x${GLANCE}" != "xpvc" ]] ; then
echo "Gate only supports glance with pvc backend when not using ceph"
exit 1
fi
helm install --namespace=openstack ${WORK_DIR}/glance --name=glance \
--set storage=${GLANCE}
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
if [ "x${PVC_BACKEND}" == "xceph" ]; then
helm install --namespace=openstack ${WORK_DIR}/libvirt --name=libvirt
else
helm install --namespace=openstack ${WORK_DIR}/libvirt --name=libvirt \
--set ceph.enabled="false"
fi
if [ "x$SDN_PLUGIN" == "xovs" ]; then
helm install --namespace=openstack ${WORK_DIR}/openvswitch --name=openvswitch
fi
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
if [ "x$INTEGRATION" == "xmulti" ] || [ "x$RALLY_CHART_ENABLED" == "xtrue" ]; then
if [ "x$PVC_BACKEND" != "xceph" ]; then
helm install --namespace=openstack ${WORK_DIR}/cinder --name=cinder \
--values=${WORK_DIR}/tools/overrides/mvp/cinder.yaml
else
helm install --namespace=openstack ${WORK_DIR}/cinder --name=cinder
fi
fi
NOVA_INSTALL="helm install --namespace=openstack ${WORK_DIR}/nova --name=nova \
  --set conf.nova.libvirt.virt_type=qemu"
if [ "x$PVC_BACKEND" == "x" ] || [ "x$PVC_BACKEND" == "xnfs" ]; then
NOVA_INSTALL+=" --values=${WORK_DIR}/tools/overrides/mvp/nova.yaml"
fi
if [ "x$SDN_PLUGIN" == "xlinuxbridge" ]; then
NOVA_INSTALL+=" --set dependencies.compute.daemonset={neutron-lb-agent}"
fi
$NOVA_INSTALL
if [ "x$SDN_PLUGIN" == "xovs" ]; then
helm install --namespace=openstack ${WORK_DIR}/neutron --name=neutron \
--values=${WORK_DIR}/tools/overrides/mvp/neutron-ovs.yaml
elif [ "x$SDN_PLUGIN" == "xlinuxbridge" ]; then
helm install --namespace=openstack ${WORK_DIR}/neutron --name=neutron \
--values=${WORK_DIR}/tools/overrides/mvp/neutron-linuxbridge.yaml
fi
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
helm install --namespace=openstack ${WORK_DIR}/heat --name=heat
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
helm install --namespace=openstack ${WORK_DIR}/congress --name=congress
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
if [ "x$INTEGRATION" == "xmulti" ]; then
helm install --namespace=openstack ${WORK_DIR}/horizon --name=horizon
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
helm install --namespace=openstack ${WORK_DIR}/barbican --name=barbican
helm install --namespace=openstack ${WORK_DIR}/magnum --name=magnum
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
helm install --namespace=openstack ${WORK_DIR}/mistral --name=mistral
helm install --namespace=openstack ${WORK_DIR}/senlin --name=senlin
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
helm_test_deployment keystone ${SERVICE_TEST_TIMEOUT}
helm_test_deployment gnocchi ${SERVICE_TEST_TIMEOUT}
helm_test_deployment ceilometer ${SERVICE_TEST_TIMEOUT}
helm_test_deployment glance ${SERVICE_TEST_TIMEOUT}
helm_test_deployment cinder ${SERVICE_TEST_TIMEOUT}
helm_test_deployment neutron ${SERVICE_TEST_TIMEOUT}
helm_test_deployment nova ${SERVICE_TEST_TIMEOUT}
helm_test_deployment barbican ${SERVICE_TEST_TIMEOUT}
fi
if [ "x$RALLY_CHART_ENABLED" == "xtrue" ]; then
helm install --namespace=openstack ${WORK_DIR}/magnum --name=magnum
helm install --namespace=openstack ${WORK_DIR}/senlin --name=senlin
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
helm install --namespace=openstack ${WORK_DIR}/rally --name=rally
kube_wait_for_pods openstack 28800
fi


@@ -1,43 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ex
: ${WORK_DIR:="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/.."}
source ${WORK_DIR}/tools/gate/vars.sh
source ${WORK_DIR}/tools/gate/funcs/helm.sh
source ${WORK_DIR}/tools/gate/funcs/kube.sh
source ${WORK_DIR}/tools/gate/funcs/network.sh
if [ "x$PVC_BACKEND" == "xceph" ]; then
kubectl label nodes ceph-mon=enabled --all --overwrite
kubectl label nodes ceph-osd=enabled --all --overwrite
kubectl label nodes ceph-mds=enabled --all --overwrite
kubectl label nodes ceph-rgw=enabled --all --overwrite
kubectl label nodes ceph-mgr=enabled --all --overwrite
fi
if [ "x$SDN_PLUGIN" == "xovs" ]; then
kubectl label nodes openvswitch=enabled --all --overwrite
elif [ "x$SDN_PLUGIN" == "xlinuxbridge" ]; then
# first unlabel nodes with 'openvswitch' tag, which is applied by default
# by kubeadm-aio docker image
kubectl label nodes openvswitch- --all --overwrite
kubectl label nodes linuxbridge=enabled --all --overwrite
fi
#FIXME(portdirect): Ensure RBAC rules are essentially open until support added
# to all charts and helm-toolkit.
kubectl replace -f ${WORK_DIR}/tools/kubeadm-aio/assets/opt/rbac/dev.yaml
helm install --namespace=openstack ${WORK_DIR}/dns-helper --name=dns-helper
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}


@@ -1,80 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -xe
: ${WORK_DIR:="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/.."}
source ${WORK_DIR}/tools/gate/vars.sh
source ${WORK_DIR}/tools/gate/funcs/network.sh
source ${WORK_DIR}/tools/gate/funcs/openstack.sh
# Turn on ip forwarding if it's not already enabled
if [ $(cat /proc/sys/net/ipv4/ip_forward) -eq 0 ]; then
sudo bash -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
fi
# Assign IP address to br-ex
sudo ip addr add ${OSH_BR_EX_ADDR} dev br-ex
sudo ip link set br-ex up
# Setup masquerading on default route dev to public subnet
sudo iptables -t nat -A POSTROUTING -o $(net_default_iface) -s ${OSH_EXT_SUBNET} -j MASQUERADE
if [ "x$SDN_PLUGIN" == "xovs" ]; then
# Disable In-Band rules on br-ex bridge to ease debugging
OVS_VSWITCHD_POD=$(kubectl get -n openstack pods -l application=openvswitch,component=openvswitch-vswitchd --no-headers -o name | head -1 | awk -F '/' '{ print $NF }')
kubectl exec -n openstack ${OVS_VSWITCHD_POD} -- ovs-vsctl set Bridge br-ex other_config:disable-in-band=true
fi
if ! $OPENSTACK service list -f value -c Type | grep -q orchestration; then
echo "No orchestration service active: creating public network via CLI"
$NEUTRON net-create ${OSH_EXT_NET_NAME} -- --is-default \
--router:external \
--provider:network_type=flat \
--provider:physical_network=public
$NEUTRON subnet-create \
--name ${OSH_EXT_SUBNET_NAME} \
--ip-version 4 \
$($NEUTRON net-show ${OSH_EXT_NET_NAME} -f value -c id) ${OSH_EXT_SUBNET} -- \
--enable_dhcp=False
# Create default subnet pool
$NEUTRON subnetpool-create \
${OSH_PRIVATE_SUBNET_POOL_NAME} \
--default-prefixlen ${OSH_PRIVATE_SUBNET_POOL_DEF_PREFIX} \
--pool-prefix ${OSH_PRIVATE_SUBNET_POOL} \
--shared \
--is-default=True
else
echo "Orchestration service active: creating public network via Heat"
HEAT_TEMPLATE=$(cat ${WORK_DIR}/tools/gate/files/${OSH_PUB_NET_STACK}.yaml | base64 -w 0)
kubectl exec -n openstack ${OPENSTACK_POD} -- bash -c "echo $HEAT_TEMPLATE | base64 -d > /tmp/${OSH_PUB_NET_STACK}.yaml"
$OPENSTACK stack create \
--parameter network_name=${OSH_EXT_NET_NAME} \
--parameter physical_network_name=public \
--parameter subnet_name=${OSH_EXT_SUBNET_NAME} \
--parameter subnet_cidr=${OSH_EXT_SUBNET} \
--parameter subnet_gateway=${OSH_BR_EX_ADDR%/*} \
-t /tmp/${OSH_PUB_NET_STACK}.yaml \
${OSH_PUB_NET_STACK}
openstack_wait_for_stack ${OSH_PUB_NET_STACK}
HEAT_TEMPLATE=$(cat ${WORK_DIR}/tools/gate/files/${OSH_SUBNET_POOL_STACK}.yaml | base64 -w 0)
kubectl exec -n openstack ${OPENSTACK_POD} -- bash -c "echo $HEAT_TEMPLATE | base64 -d > /tmp/${OSH_SUBNET_POOL_STACK}.yaml"
$OPENSTACK stack create \
--parameter subnet_pool_name=${OSH_PRIVATE_SUBNET_POOL_NAME} \
--parameter subnet_pool_prefixes=${OSH_PRIVATE_SUBNET_POOL} \
--parameter subnet_pool_default_prefix_length=${OSH_PRIVATE_SUBNET_POOL_DEF_PREFIX} \
-t /tmp/${OSH_SUBNET_POOL_STACK}.yaml \
${OSH_SUBNET_POOL_STACK}
openstack_wait_for_stack ${OSH_SUBNET_POOL_STACK}
fi


@@ -1,114 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -xe
: ${WORK_DIR:="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/.."}
source ${WORK_DIR}/tools/gate/vars.sh
source ${WORK_DIR}/tools/gate/funcs/network.sh
source ${WORK_DIR}/tools/gate/funcs/openstack.sh
# Create default private network
$NEUTRON net-create ${OSH_PRIVATE_NET_NAME}
$NEUTRON subnet-create \
--name ${OSH_PRIVATE_SUBNET_NAME} \
--ip-version 4 \
--dns-nameserver ${OSH_EXT_DNS} \
$($NEUTRON net-show private -f value -c id) \
${OSH_PRIVATE_SUBNET}
# Create default router and link networks
$NEUTRON router-create ${OSH_ROUTER}
$NEUTRON router-interface-add \
$($NEUTRON router-show ${OSH_ROUTER} -f value -c id) \
$($NEUTRON subnet-show private-subnet -f value -c id)
$NEUTRON router-gateway-set \
$($NEUTRON router-show ${OSH_ROUTER} -f value -c id) \
$($NEUTRON net-show ${OSH_EXT_NET_NAME} -f value -c id)
ROUTER_PUBLIC_IP=$($NEUTRON router-show ${OSH_ROUTER} -f value -c external_gateway_info | jq -r '.external_fixed_ips[].ip_address')
wait_for_ping ${ROUTER_PUBLIC_IP}
# Loosen up security group to allow access to the VM
PROJECT=$($OPENSTACK project show admin -f value -c id)
SECURITY_GROUP=$($OPENSTACK security group list -f csv | grep ${PROJECT} | grep "default" | awk -F "," '{ print $1 }' | tr -d '"')
$OPENSTACK security group rule create ${SECURITY_GROUP} \
--protocol icmp \
--src-ip 0.0.0.0/0
$OPENSTACK security group rule create ${SECURITY_GROUP} \
--protocol tcp \
--dst-port 22:22 \
--src-ip 0.0.0.0/0
# Setup SSH Keypair in Nova
KEYPAIR_LOC="$(mktemp).pem"
$OPENSTACK keypair create ${OSH_VM_KEY_CLI} > ${KEYPAIR_LOC}
chmod 600 ${KEYPAIR_LOC}
# Boot a vm and wait for it to become active
FLAVOR=$($OPENSTACK flavor show "${OSH_VM_FLAVOR}" -f value -c id)
IMAGE=$($OPENSTACK image list -f csv | awk -F ',' '{ print $2 "," $1 }' | grep "^\"Cirros" | head -1 | awk -F ',' '{ print $2 }' | tr -d '"')
NETWORK=$($NEUTRON net-show ${OSH_PRIVATE_NET_NAME} -f value -c id)
$NOVA boot \
--nic net-id=${NETWORK} \
--flavor=${FLAVOR} \
--image=${IMAGE} \
--key-name=${OSH_VM_KEY_CLI} \
--security-groups="default" \
${OSH_VM_NAME_CLI}
openstack_wait_for_vm ${OSH_VM_NAME_CLI}
# Assign a floating IP to the VM
FLOATING_IP=$($OPENSTACK floating ip create ${OSH_EXT_NET_NAME} -f value -c floating_ip_address)
$OPENSTACK server add floating ip ${OSH_VM_NAME_CLI} ${FLOATING_IP}
# Ping our VM
wait_for_ping ${FLOATING_IP} ${SERVICE_TEST_TIMEOUT}
# Wait for SSH to come up
wait_for_ssh_port ${FLOATING_IP} ${SERVICE_TEST_TIMEOUT}
# SSH into the VM and check it can reach the outside world
ssh-keyscan "$FLOATING_IP" >> ~/.ssh/known_hosts
ssh -i ${KEYPAIR_LOC} cirros@${FLOATING_IP} ping -q -c 1 -W 2 ${OSH_BR_EX_ADDR%/*}
# SSH into the VM and check it can reach the metadata server
ssh -i ${KEYPAIR_LOC} cirros@${FLOATING_IP} curl -sSL 169.254.169.254
# Bonus round - display a Unicorn
ssh -i ${KEYPAIR_LOC} cirros@${FLOATING_IP} curl http://artscene.textfiles.com/asciiart/unicorn || true
if $OPENSTACK service list -f value -c Type | grep -q volume; then
$OPENSTACK volume create \
--size ${OSH_VOL_SIZE_CLI} \
--type ${OSH_VOL_TYPE_CLI} \
${OSH_VOL_NAME_CLI}
openstack_wait_for_volume ${OSH_VOL_NAME_CLI} available ${SERVICE_TEST_TIMEOUT}
$OPENSTACK server add volume ${OSH_VM_NAME_CLI} ${OSH_VOL_NAME_CLI}
openstack_wait_for_volume ${OSH_VOL_NAME_CLI} in-use ${SERVICE_TEST_TIMEOUT}
VOL_DEV=$($OPENSTACK volume show ${OSH_VOL_NAME_CLI} \
-f value -c attachments | \
${WORK_DIR}/tools/gate/funcs/python-data-to-json.py | \
jq -r '.[] | .device')
ssh -i ${KEYPAIR_LOC} cirros@${FLOATING_IP} sudo /usr/sbin/mkfs.ext4 ${VOL_DEV}
$OPENSTACK server remove volume ${OSH_VM_NAME_CLI} ${OSH_VOL_NAME_CLI}
openstack_wait_for_volume ${OSH_VOL_NAME_CLI} available ${SERVICE_TEST_TIMEOUT}
$OPENSTACK volume delete ${OSH_VOL_NAME_CLI}
fi
# Remove the test vm
$NOVA delete ${OSH_VM_NAME_CLI}


@@ -1,71 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -xe
: ${WORK_DIR:="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )/.."}
source ${WORK_DIR}/tools/gate/vars.sh
source ${WORK_DIR}/tools/gate/funcs/network.sh
source ${WORK_DIR}/tools/gate/funcs/openstack.sh
# Setup SSH Keypair in Nova
KEYPAIR_LOC="$(mktemp).pem"
$OPENSTACK keypair create ${OSH_VM_KEY_STACK} > ${KEYPAIR_LOC}
chmod 600 ${KEYPAIR_LOC}
# NOTE(portdirect): We do this fancy, and seemingly pointless, footwork to get
# the full image name for the cirros Image without having to be explicit.
IMAGE_NAME=$($OPENSTACK image show -f value -c name \
$($OPENSTACK image list -f csv | awk -F ',' '{ print $2 "," $1 }' | \
grep "^\"Cirros" | head -1 | awk -F ',' '{ print $2 }' | tr -d '"'))
HEAT_TEMPLATE=$(cat ${WORK_DIR}/tools/gate/files/${OSH_BASIC_VM_STACK}.yaml | base64 -w 0)
kubectl exec -n openstack ${OPENSTACK_POD} -- bash -c "echo $HEAT_TEMPLATE | base64 -d > /tmp/${OSH_BASIC_VM_STACK}.yaml"
$OPENSTACK stack create \
--parameter public_net=${OSH_EXT_NET_NAME} \
--parameter image="${IMAGE_NAME}" \
--parameter flavor=${OSH_VM_FLAVOR} \
--parameter ssh_key=${OSH_VM_KEY_STACK} \
--parameter cidr=${OSH_PRIVATE_SUBNET} \
-t /tmp/${OSH_BASIC_VM_STACK}.yaml \
${OSH_BASIC_VM_STACK}
openstack_wait_for_stack ${OSH_BASIC_VM_STACK} ${SERVICE_TEST_TIMEOUT}
FLOATING_IP=$($OPENSTACK floating ip show \
$($OPENSTACK stack resource show \
${OSH_BASIC_VM_STACK} \
server_floating_ip \
-f value -c physical_resource_id) \
-f value -c floating_ip_address)
# Ping our VM
wait_for_ping ${FLOATING_IP} ${SERVICE_TEST_TIMEOUT}
# Wait for SSH to come up
wait_for_ssh_port ${FLOATING_IP} ${SERVICE_TEST_TIMEOUT}
# SSH into the VM and check it can reach the outside world
ssh-keyscan "$FLOATING_IP" >> ~/.ssh/known_hosts
ssh -i ${KEYPAIR_LOC} cirros@${FLOATING_IP} ping -q -c 1 -W 2 ${OSH_BR_EX_ADDR%/*}
# SSH into the VM and check it can reach the metadata server
ssh -i ${KEYPAIR_LOC} cirros@${FLOATING_IP} curl -sSL 169.254.169.254
# Bonus round - display a Unicorn
ssh -i ${KEYPAIR_LOC} cirros@${FLOATING_IP} curl http://artscene.textfiles.com/asciiart/unicorn || true
# Remove the test stack
$OPENSTACK stack delete ${OSH_BASIC_VM_STACK}
# Remove keypair
$OPENSTACK keypair delete ${OSH_VM_KEY_STACK}
rm ${KEYPAIR_LOC}

View File

@@ -1,31 +0,0 @@
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: primary
vars:
logs_dir: "/tmp/logs"
environment:
LOGS_DIR: "{{ logs_dir }}"
tasks:
- name: Capture logs from environment
shell: ./tools/gate/dump_logs.sh 0
args:
chdir: "{{ zuul.project.src_dir }}"
ignore_errors: yes
- name: Download logs to executor
synchronize:
src: "{{ logs_dir }}/"
dest: "{{ zuul.executor.log_root }}/{{ inventory_hostname }}"
mode: pull
ignore_errors: yes

View File

@@ -1,66 +0,0 @@
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: primary
tasks:
- name: Create nodepool directory
become: true
become_user: root
file:
path: /etc/nodepool
state: directory
mode: 0777
- name: Create nodepool sub_nodes file
copy:
dest: /etc/nodepool/sub_nodes
content: ""
- name: Create nodepool sub_nodes_private file
copy:
dest: /etc/nodepool/sub_nodes_private
content: ""
- name: Populate nodepool sub_nodes file
lineinfile:
path: /etc/nodepool/sub_nodes
line: "{{ hostvars[item]['nodepool']['private_ipv4'] }}"
with_items: "{{ groups['nodes'] }}"
when: groups['nodes'] is defined
- name: Populate nodepool sub_nodes_private file
lineinfile:
path: /etc/nodepool/sub_nodes_private
line: "{{ hostvars[item]['nodepool']['private_ipv4'] }}"
with_items: "{{ groups['nodes'] }}"
when: groups['nodes'] is defined
- name: Create nodepool primary file
copy:
dest: /etc/nodepool/primary_node
content: "{{ hostvars['primary']['nodepool']['private_ipv4'] }}"
when: hostvars['primary'] is defined
- name: Create nodepool node_private for this node
copy:
dest: /etc/nodepool/node_private
content: "{{ nodepool.private_ipv4 }}"
- name: Run OSH Deploy
shell: |
set -xe;
export INTEGRATION=multi
export INTEGRATION_TYPE=basic
export PVC_BACKEND=ceph
export ZUUL_VERSION=v3
export KUBECONFIG=${HOME}/.kube/config
export SDN_PLUGIN="{{ sdn_plugin }}"
export GLANCE="{{ glance_backend }}"
kubectl get nodes -o wide
./tools/gate/setup_gate.sh
args:
chdir: "{{ zuul.project.src_dir }}"

View File

@@ -1,54 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ex
: ${WORK_DIR:="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"}
source ${WORK_DIR}/tools/gate/vars.sh
cd ${WORK_DIR}
source ${WORK_DIR}/tools/gate/funcs/common.sh
source ${WORK_DIR}/tools/gate/funcs/network.sh
source ${WORK_DIR}/tools/gate/funcs/kube.sh
# Do the basic node setup for running the gate
gate_base_setup
# Install KubeadmAIO requirements and get image
kubeadm_aio_reqs_install
sudo docker pull ${KUBEADM_IMAGE} || kubeadm_aio_build
# Setup shared mounts for kubelet
sudo mkdir -p /var/lib/kubelet
sudo mount --bind /var/lib/kubelet /var/lib/kubelet
sudo mount --make-shared /var/lib/kubelet
# Clean up any old install
kubeadm_aio_clean
# Launch Container
sudo docker run \
-dt \
--name=kubeadm-aio \
--net=host \
--security-opt=seccomp:unconfined \
--cap-add=SYS_ADMIN \
--tmpfs=/run \
--tmpfs=/run/lock \
--volume=/etc/machine-id:/etc/machine-id:ro \
--volume=${HOME}:${HOME}:rw \
--volume=/etc/kubernetes:/etc/kubernetes:rw \
--volume=/sys/fs/cgroup:/sys/fs/cgroup:ro \
--volume=/var/run/docker.sock:/run/docker.sock \
--env KUBE_ROLE="worker" \
--env KUBELET_CONTAINER="${KUBEADM_IMAGE}" \
--env KUBEADM_JOIN_ARGS="--token=${KUBEADM_TOKEN} ${PRIMARY_NODE_IP}:6443" \
${KUBEADM_IMAGE}

View File

@@ -1,92 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ex
# Exit if run as root
if [[ $EUID -eq 0 ]]; then
echo "This script cannot be run as root" 1>&2
exit 1
fi
export WORK_DIR=$(pwd)
source ${WORK_DIR}/tools/gate/vars.sh
source ${WORK_DIR}/tools/gate/funcs/common.sh
source ${WORK_DIR}/tools/gate/funcs/network.sh
source ${WORK_DIR}/tools/gate/funcs/helm.sh
source ${WORK_DIR}/tools/gate/funcs/kube.sh
# Setup the logging location: by default use the working dir as the root.
rm -rf ${LOGS_DIR} || true
mkdir -p ${LOGS_DIR}
# Moving the ws-linter here to avoid it blocking all the jobs just for ws
if [ "x$INTEGRATION_TYPE" == "xlinter" ]; then
bash ${WORK_DIR}/tools/gate/whitespace.sh
fi
# Do the basic node setup for running the gate
gate_base_setup
# We setup the network for pre kube here, to enable cluster restarts on
# development machines
net_resolv_pre_kube
net_hosts_pre_kube
# Setup helm
helm_install
helm_serve
helm_lint
# In the linter, we also run the helm template plugin to get a sanity check
# of the chart without verifying against the k8s API
if [ "x$INTEGRATION_TYPE" == "xlinter" ]; then
helm_build > ${LOGS_DIR}/helm_build
helm_plugin_template_install
helm_template_run
else
# Setup the K8s Cluster
if ! [ "x$ZUUL_VERSION" == "xv3" ]; then
if [ "x$INTEGRATION" == "xaio" ]; then
bash ${WORK_DIR}/tools/gate/kubeadm_aio.sh
elif [ "x$INTEGRATION" == "xmulti" ]; then
bash ${WORK_DIR}/tools/gate/kubeadm_aio.sh
bash ${WORK_DIR}/tools/gate/setup_gate_worker_nodes.sh
fi
fi
# Pull all required images
cd ${WORK_DIR}; make pull-all-images
if [ "x$LOOPBACK_CREATE" == "xtrue" ]; then
loopback_dev_info_collect
kube_label_node_block_devs
fi
# Deploy OpenStack-Helm
if ! [ "x$INTEGRATION_TYPE" == "x" ]; then
bash ${WORK_DIR}/tools/gate/helm_dry_run.sh
bash ${WORK_DIR}/tools/gate/launch-osh/common.sh
if [ "x$INTEGRATION_TYPE" == "xbasic" ]; then
bash ${WORK_DIR}/tools/gate/launch-osh/basic.sh
elif [ "x$INTEGRATION_TYPE" == "xarmada" ]; then
bash ${WORK_DIR}/tools/gate/launch-osh/armada.sh
fi
fi
if ! [ "x$INTEGRATION_TYPE" == "x" ]; then
# Run Basic Full Stack Tests
if [ "x$INTEGRATION" == "xaio" ] && [ "x$RALLY_CHART_ENABLED" == "xfalse" ]; then
bash ${WORK_DIR}/tools/gate/openstack/network_launch.sh
bash ${WORK_DIR}/tools/gate/openstack/vm_cli_launch.sh
bash ${WORK_DIR}/tools/gate/openstack/vm_heat_launch.sh
fi
fi
fi

View File

@@ -1,50 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ex
: ${WORK_DIR:="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"}
source ${WORK_DIR}/tools/gate/vars.sh
sudo chown $(whoami) ${SSH_PRIVATE_KEY}
sudo chmod 600 ${SSH_PRIVATE_KEY}
KUBEADM_TOKEN=$(sudo docker exec kubeadm-aio kubeadm token list | awk '/The default bootstrap token/ { print $1 ; exit }')
SUB_NODE_PROVISION_SCRIPT=$(mktemp --suffix=.sh)
for SUB_NODE in $SUB_NODE_IPS ; do
cat >> ${SUB_NODE_PROVISION_SCRIPT} <<EOS
ssh-keyscan "${SUB_NODE}" >> ~/.ssh/known_hosts
ssh -i ${SSH_PRIVATE_KEY} $(whoami)@${SUB_NODE} mkdir -p ${WORK_DIR%/*}
scp -i ${SSH_PRIVATE_KEY} -r ${WORK_DIR} $(whoami)@${SUB_NODE}:${WORK_DIR%/*}
ssh -i ${SSH_PRIVATE_KEY} $(whoami)@${SUB_NODE} "export WORK_DIR=${WORK_DIR}; \
export KUBEADM_TOKEN=${KUBEADM_TOKEN}; \
export PRIMARY_NODE_IP=${PRIMARY_NODE_IP}; \
export KUBEADM_IMAGE=${KUBEADM_IMAGE}; \
export PVC_BACKEND=${PVC_BACKEND}; \
export LOOPBACK_CREATE=${LOOPBACK_CREATE}; \
export LOOPBACK_DEVS=${LOOPBACK_DEVS}; \
export LOOPBACK_SIZE=${LOOPBACK_SIZE}; \
export LOOPBACK_DIR=${LOOPBACK_DIR}; \
bash ${WORK_DIR}/tools/gate/provision_gate_worker_node.sh"
EOS
done
bash ${SUB_NODE_PROVISION_SCRIPT}
rm -rf ${SUB_NODE_PROVISION_SCRIPT}
source ${WORK_DIR}/tools/gate/funcs/kube.sh
kube_wait_for_nodes ${SUB_NODE_COUNT} ${NODE_START_TIMEOUT}
kube_wait_for_pods kube-system ${POD_START_TIMEOUT_SYSTEM}
kube_wait_for_pods openstack ${POD_START_TIMEOUT_OPENSTACK}
kubectl get nodes --show-all
kubectl get --all-namespaces all --show-all
sudo docker exec kubeadm-aio openstack-helm-dev-prep

View File

@@ -1,101 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Set work dir if not already done
: ${WORK_DIR:="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"}
# Set logs directory
export LOGS_DIR=${LOGS_DIR:-"/tmp/logs"}
# Get Host OS
source /etc/os-release
export HOST_OS=${HOST_OS:="${ID}"}
# Set versions of K8s and Helm to use
export HELM_VERSION=${HELM_VERSION:-"v2.7.2"}
export KUBE_VERSION=${KUBE_VERSION:-"v1.9.0"}
# Set K8s-AIO options
export KUBECONFIG=${KUBECONFIG:="${HOME}/.kubeadm-aio/admin.conf"}
export KUBEADM_IMAGE=${KUBEADM_IMAGE:="openstackhelm/kubeadm-aio:${KUBE_VERSION}"}
# Set K8s network options
export CNI_POD_CIDR=${CNI_POD_CIDR:="192.168.0.0/16"}
export KUBE_CNI=${KUBE_CNI:="calico"}
# Set PVC Backend
export PVC_BACKEND=${PVC_BACKEND:-"ceph"}
# Set Object Storage options
export CEPH_RGW_KEYSTONE_ENABLED=${CEPH_RGW_KEYSTONE_ENABLED:-"true"}
export OPENSTACK_OBJECT_STORAGE=${OPENSTACK_OBJECT_STORAGE:-"radosgw"}
# Set Glance Backend options
export GLANCE=${GLANCE:-"radosgw"}
# Set SDN Plugin
# possible values: ovs, linuxbridge
export SDN_PLUGIN=${SDN_PLUGIN:-"ovs"}
# Set Upstream DNS
export UPSTREAM_DNS1=${UPSTREAM_DNS1:-"8.8.8.8"}
export UPSTREAM_DNS2=${UPSTREAM_DNS2:-"8.8.4.4"}
# Set gate script timeouts
export NODE_START_TIMEOUT=${NODE_START_TIMEOUT:="480"}
export POD_START_TIMEOUT_SYSTEM=${POD_START_TIMEOUT_SYSTEM:="480"}
export POD_START_TIMEOUT_OPENSTACK=${POD_START_TIMEOUT_OPENSTACK:="900"}
export POD_START_TIMEOUT_DEFAULT=${POD_START_TIMEOUT_DEFAULT:="480"}
export POD_START_TIMEOUT_CEPH=${POD_START_TIMEOUT_CEPH:="600"}
export SERVICE_TEST_TIMEOUT=${SERVICE_TEST_TIMEOUT:="600"}
# Setup Loopback device options
export LOOPBACK_CREATE=${LOOPBACK_CREATE:="false"}
export LOOPBACK_DEV_COUNT=${LOOPBACK_DEV_COUNT:="3,3,3"}
export LOOPBACK_SIZES=${LOOPBACK_SIZES:="8192M,1024M,1024M"}
export LOOPBACK_NAMES=${LOOPBACK_NAMES:="cephosd,cephjournal,swift"}
export LOOPBACK_DIR=${LOOPBACK_DIR:="/var/lib/iscsi-loopback"}
export LOOPBACK_LOCAL_DISC_INFO=${LOOPBACK_LOCAL_DISC_INFO:="/tmp/loopback-local-disc-info"}
export LOOPBACK_DEV_INFO=${LOOPBACK_DEV_INFO:="/tmp/loopback-dev-info"}
# Setup Multinode params
export SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY:="/etc/nodepool/id_rsa"}
export PRIMARY_NODE_IP=${PRIMARY_NODE_IP:="$(cat /etc/nodepool/primary_node | tail -1)"}
export SUB_NODE_IPS=${SUB_NODE_IPS:="$(cat /etc/nodepool/sub_nodes)"}
export SUB_NODE_COUNT="$(($(echo ${SUB_NODE_IPS} | wc -w) + 1))"
# Define OpenStack Test Params
export OSH_BR_EX_ADDR=${OSH_BR_EX_ADDR:="172.24.4.1/24"}
export OSH_EXT_SUBNET=${OSH_EXT_SUBNET:="172.24.4.0/24"}
export OSH_EXT_DNS=${OSH_EXT_DNS:="8.8.8.8"}
export OSH_EXT_NET_NAME=${OSH_EXT_NET_NAME:="public"}
export OSH_EXT_SUBNET_NAME=${OSH_EXT_SUBNET_NAME:="public-subnet"}
export OSH_ROUTER=${OSH_ROUTER:="router1"}
export OSH_PRIVATE_NET_NAME=${OSH_PRIVATE_NET_NAME:="private"}
export OSH_PRIVATE_SUBNET=${OSH_PRIVATE_SUBNET:="10.0.0.0/24"}
export OSH_PRIVATE_SUBNET_NAME=${OSH_PRIVATE_SUBNET_NAME:="private-subnet"}
export OSH_PRIVATE_SUBNET_POOL=${OSH_PRIVATE_SUBNET_POOL:="10.0.0.0/8"}
export OSH_PRIVATE_SUBNET_POOL_NAME=${OSH_PRIVATE_SUBNET_POOL_NAME:="shared-default-subnetpool"}
export OSH_PRIVATE_SUBNET_POOL_DEF_PREFIX=${OSH_PRIVATE_SUBNET_POOL_DEF_PREFIX:="24"}
export OSH_VM_FLAVOR=${OSH_VM_FLAVOR:="m1.tiny"}
export OSH_VM_NAME_CLI=${OSH_VM_NAME_CLI:="osh-smoketest"}
export OSH_VM_KEY_CLI=${OSH_VM_KEY_CLI:="osh-smoketest-key"}
export OSH_VOL_NAME_CLI=${OSH_VOL_NAME_CLI:="osh-volume"}
export OSH_VOL_SIZE_CLI=${OSH_VOL_SIZE_CLI:="1"}
export OSH_VOL_TYPE_CLI=${OSH_VOL_TYPE_CLI:="rbd1"}
export OSH_PUB_NET_STACK=${OSH_PUB_NET_STACK:="heat-public-net-deployment"}
export OSH_SUBNET_POOL_STACK=${OSH_SUBNET_POOL_STACK:="heat-subnet-pool-deployment"}
export OSH_BASIC_VM_STACK=${OSH_BASIC_VM_STACK:="heat-basic-vm-deployment"}
export OSH_VM_KEY_STACK=${OSH_VM_KEY_STACK:="heat-vm-key"}
export RALLY_CHART_ENABLED=${RALLY_CHART_ENABLED:="false"}

View File

@@ -1,32 +0,0 @@
#!/bin/bash
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ex
folder='.'
if [[ $# -gt 0 ]] ; then
folder="$1";
fi
res=$(find $folder \
-not -path "*/\.*" \
-not -path "*/doc/build/*" \
-not -name "*.tgz" \
-type f -exec egrep -l " +$" {} \;)
if [[ -z $res ]] ; then
exit 0
else
echo 'Trailing space(s) found.'
exit 1
fi

View File

@@ -1,103 +0,0 @@
FROM ubuntu:16.04
MAINTAINER pete.birley@att.com
# You can specify a particular version using
# --build-arg KUBE_VERSION=$KUBE_VERSION
# or put 'xlatest' and it will download the latest available k8s packages.
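# An illustrative build command (the image tag is an assumption, mirroring the
# openstackhelm/kubeadm-aio:${KUBE_VERSION} naming used by the gate scripts):
#   docker build --build-arg KUBE_VERSION=v1.9.0 -t openstackhelm/kubeadm-aio:v1.9.0 .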
ARG KUBE_VERSION=v1.7.5
ENV HELM_VERSION=v2.6.1 \
KUBE_VERSION=${KUBE_VERSION} \
CNI_VERSION=v0.6.0-rc2 \
container="docker" \
DEBIAN_FRONTEND="noninteractive"
RUN set -x \
&& TMP_DIR=$(mktemp --directory) \
&& cd ${TMP_DIR} \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
apt-transport-https \
ca-certificates \
curl \
dbus \
# Add Kubernetes repo
&& curl -sSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - \
&& echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list \
&& apt-get update \
&& if [ "$KUBE_VERSION" = "xlatest" ]; \
then version=`apt-cache policy kubeadm| grep Candidate: | awk '{print $2}' | sed 's/-[0-9]*//'`; \
APT_KUBE_VERSION="=${version}-*"; \
API_KUBE_VERSION="v${version}"; \
else APT_KUBE_VERSION="=$(echo $KUBE_VERSION | sed 's/v//g')-*";\
API_KUBE_VERSION=${KUBE_VERSION}; fi; \
apt-get install -y --no-install-recommends \
docker.io \
iptables \
kubectl"$APT_KUBE_VERSION" \
kubelet"$APT_KUBE_VERSION" \
kubernetes-cni \
# Install Kubeadm without running postinstall script as it expects systemd to be running.
&& apt-get download kubeadm"$APT_KUBE_VERSION" \
&& dpkg --unpack kubeadm*.deb \
&& mv /var/lib/dpkg/info/kubeadm.postinst /opt/kubeadm.postinst \
&& dpkg --configure kubeadm \
&& apt-get install -yf kubeadm"$APT_KUBE_VERSION" \
&& mkdir -p /etc/kubernetes/manifests \
# Install kubectl:
&& curl -sSL https://dl.k8s.io/$API_KUBE_VERSION/kubernetes-client-linux-amd64.tar.gz | tar -zxv --strip-components=1 \
&& mv ${TMP_DIR}/client/bin/kubectl /usr/bin/kubectl \
&& chmod +x /usr/bin/kubectl \
# Install kubelet & kubeadm binaries:
# (portdirect) We do things in this weird way to let us use the deps and systemd
# units from the packages in the .deb repo.
&& curl -sSL https://dl.k8s.io/${API_KUBE_VERSION}/kubernetes-server-linux-amd64.tar.gz | tar -zxv --strip-components=1 \
&& mv ${TMP_DIR}/server/bin/kubelet /usr/bin/kubelet \
&& chmod +x /usr/bin/kubelet \
&& mv ${TMP_DIR}/server/bin/kubeadm /usr/bin/kubeadm \
&& chmod +x /usr/bin/kubeadm \
# Install CNI:
&& CNI_BIN_DIR=/opt/cni/bin \
&& mkdir -p ${CNI_BIN_DIR} \
&& cd ${CNI_BIN_DIR} \
&& curl -sSL https://github.com/containernetworking/plugins/releases/download/$CNI_VERSION/cni-plugins-amd64-$CNI_VERSION.tgz | tar -zxv --strip-components=1 \
&& cd ${TMP_DIR} \
# Move kubelet binary as we will run containerised
&& mv /usr/bin/kubelet /usr/bin/kubelet-real \
# Install helm binary
&& curl -sSL https://storage.googleapis.com/kubernetes-helm/helm-${HELM_VERSION}-linux-amd64.tar.gz | tar -zxv --strip-components=1 \
&& mv ${TMP_DIR}/helm /usr/bin/helm \
# Install openstack-helm dev utils
&& apt-get install -y --no-install-recommends \
make \
git \
vim \
jq \
# Install utils for PVC provisioners
nfs-common \
ceph-common \
kmod \
# Tweak Systemd units and targets for running in a container
&& find /lib/systemd/system/sysinit.target.wants/ ! -name 'systemd-tmpfiles-setup.service' -type l -exec rm -fv {} + \
&& rm -fv \
/lib/systemd/system/multi-user.target.wants/* \
/etc/systemd/system/*.wants/* \
/lib/systemd/system/local-fs.target.wants/* \
/lib/systemd/system/sockets.target.wants/*udev* \
/lib/systemd/system/sockets.target.wants/*initctl* \
/lib/systemd/system/basic.target.wants/* \
# Clean up apt cache
&& rm -rf /var/lib/apt/lists/* \
# Clean up tmp dir
&& cd / \
&& rm -rf ${TMP_DIR}
# Load assets into place, setup startup target & units
COPY ./assets/ /
RUN set -x \
&& ln -s /usr/lib/systemd/system/container-up.target /etc/systemd/system/default.target \
&& mkdir -p /etc/systemd/system/container-up.target.wants \
&& ln -s /usr/lib/systemd/system/kubeadm-aio.service /etc/systemd/system/container-up.target.wants/kubeadm-aio.service
VOLUME /sys/fs/cgroup
CMD /kubeadm-aio

View File

@@ -1,6 +0,0 @@
Kubeadm AIO Container
=====================
Documentation for Kubeadm AIO is now consolidated here_.
.. _here: https://github.com/openstack/openstack-helm/blob/master/doc/source/install/developer/all-in-one.rst

View File

@@ -1,2 +0,0 @@
KUBE_CNI=calico
CNI_POD_CIDR=192.168.0.0/16

View File

@@ -1,3 +0,0 @@
# If KUBE_ROLE is set to 'master', kubeadm-aio will set this node up to be a master
# node; if set to 'worker', it will join an existing cluster.
KUBE_ROLE=master
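# For example, a node joining an existing cluster rather than bootstrapping one
# would use (illustrative value):
#   KUBE_ROLE=worker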

View File

@@ -1,3 +0,0 @@
# If KUBE_VERSION is set to 'default', kubeadm will use its default version of K8s;
# otherwise the version specified here will be used.
KUBE_VERSION=default
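# For example, to pin a specific release rather than the kubeadm default
# (illustrative value):
#   KUBE_VERSION=v1.9.0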

View File

@@ -1 +0,0 @@
KUBEADM_JOIN_ARGS="no_command_supplied"
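# When a node joins a cluster this is overridden with the bootstrap token and the
# primary node's API address, e.g. (illustrative placeholders):
#   KUBEADM_JOIN_ARGS="--token=<bootstrap-token> <primary-node-ip>:6443"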

View File

@@ -1,6 +0,0 @@
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.7.5
apiServerExtraArgs:
runtime-config: "batch/v2alpha1=true"
service-node-port-range: "1024-65535"

View File

@@ -1,3 +0,0 @@
# If KUBE_BIND_DEV is set to 'autodetect', we will use kubeadm's autodetect logic;
# otherwise the specified device is used to find the IP address to bind to.
KUBE_BIND_DEV=autodetect
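# For example, to bind the API server to a specific interface (the device name is
# an assumption; adjust for the host):
#   KUBE_BIND_DEV=ens3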

View File

@@ -1,3 +0,0 @@
# If KUBELET_CONTAINER is set to 'this_one', we will not attempt to launch a new
# container for the kubelet process; otherwise the specified image tag is used.
KUBELET_CONTAINER=this_one
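# For example, to run the kubelet from a published image instead (illustrative tag):
#   KUBELET_CONTAINER=openstackhelm/kubeadm-aio:v1.9.0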

View File

@@ -1,54 +0,0 @@
#!/bin/bash
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -xe
echo 'Checking cgroups'
if ls -dZ /sys/fs/cgroup | grep -q :svirt_sandbox_file_t: ; then
echo 'Invocation error: use -v /sys/fs/cgroup:/sys/fs/cgroup:ro parameter to docker run.'
exit 1
fi
echo 'Setting up K8s version to deploy'
: ${KUBE_VERSION:="default"}
sed -i "s|KUBE_VERSION=.*|KUBE_VERSION=${KUBE_VERSION}|g" /etc/kube-version
echo 'Setting up device to use for kube-api'
: ${KUBE_BIND_DEV:="autodetect"}
sed -i "s|KUBE_BIND_DEV=.*|KUBE_BIND_DEV=${KUBE_BIND_DEV}|g" /etc/kubeapi-device
echo 'Setting up container image to use for kubelet'
: ${KUBELET_CONTAINER:="this_one"}
sed -i "s|KUBELET_CONTAINER=.*|KUBELET_CONTAINER=${KUBELET_CONTAINER}|g" /etc/kubelet-container
echo 'Setting whether this node is a master or worker K8s node'
: ${KUBE_ROLE:="master"}
sed -i "s|KUBE_ROLE=.*|KUBE_ROLE=${KUBE_ROLE}|g" /etc/kube-role
echo 'Setting any kubeadm join commands'
: ${KUBEADM_JOIN_ARGS:="no_command_supplied"}
sed -i "s|KUBEADM_JOIN_ARGS=.*|KUBEADM_JOIN_ARGS=\"${KUBEADM_JOIN_ARGS}\"|g" /etc/kubeadm-join-command-args
echo 'Setting CNI pod CIDR'
: ${CNI_POD_CIDR:="192.168.0.0/16"}
sed -i "s|192.168.0.0/16|${CNI_POD_CIDR}|g" /opt/cni-manifests/*.yaml
sed -i "s|CNI_POD_CIDR=.*|CNI_POD_CIDR=\"${CNI_POD_CIDR}\"|g" /etc/kube-cni
echo 'Setting CNI'
: ${KUBE_CNI:="calico"}
sed -i "s|KUBE_CNI=.*|KUBE_CNI=\"${KUBE_CNI}\"|g" /etc/kube-cni
echo 'Starting Systemd'
exec /bin/systemd --system

View File

@@ -1,365 +0,0 @@
# Calico Version v2.1.4
# http://docs.projectcalico.org/v2.1/releases#v2.1.4
# This manifest includes the following component versions:
# calico/node:v1.1.3
# calico/cni:v1.7.0
# calico/kube-policy-controller:v0.5.4
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# The location of your etcd cluster. This uses the Service clusterIP
# defined below.
etcd_endpoints: "http://10.96.232.136:6666"
# Configure the Calico backend to use.
calico_backend: "bird"
# The CNI network configuration to install on each node.
cni_network_config: |-
{
"name": "k8s-pod-network",
"type": "calico",
"etcd_endpoints": "__ETCD_ENDPOINTS__",
"log_level": "info",
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s",
"k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
"k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
},
"kubernetes": {
"kubeconfig": "/etc/cni/net.d/__KUBECONFIG_FILENAME__"
}
}
---
# This manifest installs the Calico etcd on the kubeadm master. This uses a DaemonSet
# to force it to run on the master even when the master isn't schedulable, and uses
# nodeSelector to ensure it only runs on the master.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: calico-etcd
namespace: kube-system
labels:
k8s-app: calico-etcd
spec:
template:
metadata:
labels:
k8s-app: calico-etcd
annotations:
# Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
# reserves resources for critical add-on pods so that they can be rescheduled after
# a failure. This annotation works in tandem with the toleration below.
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
# Only run this pod on the master.
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
# Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
# This, along with the annotation above marks this pod as a critical add-on.
- key: CriticalAddonsOnly
operator: Exists
nodeSelector:
node-role.kubernetes.io/master: ""
hostNetwork: true
containers:
- name: calico-etcd
image: gcr.io/google_containers/etcd:2.2.1
env:
- name: CALICO_ETCD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
command: ["/bin/sh","-c"]
args: ["/usr/local/bin/etcd --name=calico --data-dir=/var/etcd/calico-data --advertise-client-urls=http://$CALICO_ETCD_IP:6666 --listen-client-urls=http://0.0.0.0:6666 --listen-peer-urls=http://0.0.0.0:6667"]
volumeMounts:
- name: var-etcd
mountPath: /var/etcd
volumes:
- name: var-etcd
hostPath:
path: /var/etcd
---
# This manifest installs the Service which gets traffic to the Calico
# etcd.
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: calico-etcd
name: calico-etcd
namespace: kube-system
spec:
# Select the calico-etcd pod running on the master.
selector:
k8s-app: calico-etcd
# This ClusterIP needs to be known in advance, since we cannot rely
# on DNS to get access to etcd.
clusterIP: 10.96.232.136
ports:
- port: 6666
---
# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
template:
metadata:
labels:
k8s-app: calico-node
annotations:
# Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
# reserves resources for critical add-on pods so that they can be rescheduled after
# a failure. This annotation works in tandem with the toleration below.
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
hostNetwork: true
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
# Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
# This, along with the annotation above marks this pod as a critical add-on.
- key: CriticalAddonsOnly
operator: Exists
serviceAccountName: calico-cni-plugin
containers:
# Runs calico/node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: quay.io/calico/node:v1.1.3
env:
# The location of the Calico etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# Enable BGP. Disable to enforce policy only.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Configure the IP Pool from which Pod IPs will be chosen.
- name: CALICO_IPV4POOL_CIDR
value: "192.168.0.0/16"
- name: CALICO_IPV4POOL_IPIP
value: "always"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
# Auto-detect the BGP IP address.
- name: IP
value: ""
securityContext:
privileged: true
resources:
requests:
cpu: 250m
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
# This container installs the Calico CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: quay.io/calico/cni:v1.7.0
command: ["/install-cni.sh"]
env:
# The location of the Calico etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
volumes:
# Used by calico/node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
---
# This manifest deploys the Calico policy controller on Kubernetes.
# See https://github.com/projectcalico/k8s-policy
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: calico-policy-controller
namespace: kube-system
labels:
k8s-app: calico-policy
spec:
# The policy controller can only have a single active instance.
replicas: 1
strategy:
type: Recreate
template:
metadata:
name: calico-policy-controller
namespace: kube-system
labels:
k8s-app: calico-policy-controller
annotations:
# Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
# reserves resources for critical add-on pods so that they can be rescheduled after
# a failure. This annotation works in tandem with the toleration below.
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
# The policy controller must run in the host network namespace so that
# it isn't governed by policy that would prevent it from working.
hostNetwork: true
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
# Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
# This, along with the annotation above marks this pod as a critical add-on.
- key: CriticalAddonsOnly
operator: Exists
serviceAccountName: calico-policy-controller
containers:
- name: calico-policy-controller
image: quay.io/calico/kube-policy-controller:v0.5.4
env:
# The location of the Calico etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# The location of the Kubernetes API. Use the default Kubernetes
# service for API access.
- name: K8S_API
value: "https://kubernetes.default:443"
# Since we're running in the host namespace and might not have KubeDNS
# access, configure the container's /etc/hosts to resolve
# kubernetes.default to the correct service clusterIP.
- name: CONFIGURE_ETC_HOSTS
value: "true"
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: calico-cni-plugin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-cni-plugin
subjects:
- kind: ServiceAccount
name: calico-cni-plugin
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: calico-cni-plugin
namespace: kube-system
rules:
- apiGroups: [""]
resources:
- pods
- nodes
verbs:
- get
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-cni-plugin
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: calico-policy-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-policy-controller
subjects:
- kind: ServiceAccount
name: calico-policy-controller
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: calico-policy-controller
namespace: kube-system
rules:
- apiGroups:
- ""
- extensions
resources:
- pods
- namespaces
- networkpolicies
verbs:
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-policy-controller
namespace: kube-system

View File

@@ -1,329 +0,0 @@
# Calico Roles
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: canal
namespace: kube-system
rules:
- apiGroups: [""]
resources:
- namespaces
verbs:
- get
- list
- watch
- apiGroups: [""]
resources:
- pods/status
verbs:
- update
- apiGroups: [""]
resources:
- pods
verbs:
- get
- list
- watch
- apiGroups: [""]
resources:
- nodes
verbs:
- get
- list
- update
- watch
- apiGroups: ["extensions"]
resources:
- thirdpartyresources
verbs:
- create
- get
- list
- watch
- apiGroups: ["extensions"]
resources:
- networkpolicies
verbs:
- get
- list
- watch
- apiGroups: ["projectcalico.org"]
resources:
- globalconfigs
verbs:
- create
- get
- list
- update
- watch
- apiGroups: ["projectcalico.org"]
resources:
- ippools
verbs:
- create
- delete
- get
- list
- update
- watch
---
# Flannel roles
# Pulled from https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel-rbac.yml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: canal
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: canal
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: canal
subjects:
- kind: ServiceAccount
name: canal
namespace: kube-system
---
# This ConfigMap can be used to configure a self-hosted Canal installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: canal-config
namespace: kube-system
data:
# The interface used by canal for host <-> host communication.
# If left blank, then the interface is chosen using the node's
# default route.
canal_iface: ""
# Whether or not to masquerade traffic to destinations not within
# the pod network.
masquerade: "true"
# The CNI network configuration to install on each node.
cni_network_config: |-
{
"name": "k8s-pod-network",
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"hostname": "__KUBERNETES_NODE_NAME__",
"ipam": {
"type": "host-local",
"subnet": "usePodCidr"
},
"policy": {
"type": "k8s",
"k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
},
"kubernetes": {
"k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
}
# Flannel network configuration. Mounted into the flannel container.
net-conf.json: |
{
"Network": "192.168.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: canal
namespace: kube-system
labels:
k8s-app: canal
spec:
selector:
matchLabels:
k8s-app: canal
template:
metadata:
labels:
k8s-app: canal
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
hostNetwork: true
serviceAccountName: canal
tolerations:
# Allow the pod to run on the master. This is required for
# the master to communicate with pods.
- key: node-role.kubernetes.io/master
effect: NoSchedule
# Mark the pod as a critical add-on for rescheduling.
- key: "CriticalAddonsOnly"
operator: "Exists"
containers:
# Runs calico/node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: quay.io/calico/node:v1.2.1
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Enable felix logging.
- name: FELIX_LOGSEVERITYSYS
value: "info"
# Period, in seconds, at which felix re-applies all iptables state
- name: FELIX_IPTABLESREFRESHINTERVAL
value: "60"
# Disable IPV6 support in Felix.
- name: FELIX_IPV6SUPPORT
value: "false"
# Don't enable BGP.
- name: CALICO_NETWORKING_BACKEND
value: "none"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
- name: WAIT_FOR_DATASTORE
value: "true"
# No IP address needed.
- name: IP
value: ""
- name: HOSTNAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
# This container installs the Calico CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: quay.io/calico/cni:v1.8.3
command: ["/install-cni.sh"]
env:
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: canal-config
key: cni_network_config
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
# This container runs flannel using the kube-subnet-mgr backend
# for allocating subnets.
- name: kube-flannel
image: quay.io/coreos/flannel:v0.8.0
command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
securityContext:
privileged: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: FLANNELD_IFACE
valueFrom:
configMapKeyRef:
name: canal-config
key: canal_iface
- name: FLANNELD_IP_MASQ
valueFrom:
configMapKeyRef:
name: canal-config
key: masquerade
volumeMounts:
- name: run
mountPath: /run
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
# Used by calico/node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
# Used by flannel.
- name: run
hostPath:
path: /run
- name: flannel-cfg
configMap:
name: canal-config
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: canal
namespace: kube-system

View File

@@ -1,94 +0,0 @@
#https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"type": "flannel",
"delegate": {
"isDefaultGateway": true
}
}
net-conf.json: |
{
"Network": "192.168.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-system
labels:
tier: node
app: flannel
spec:
template:
metadata:
labels:
tier: node
app: flannel
spec:
hostNetwork: true
nodeSelector:
beta.kubernetes.io/arch: amd64
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
serviceAccountName: flannel
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.8.0-amd64
command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
securityContext:
privileged: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run
- name: flannel-cfg
mountPath: /etc/kube-flannel/
- name: install-cni
image: quay.io/coreos/flannel:v0.8.0-amd64
command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg

View File

@@ -1,187 +0,0 @@
# curl --location "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.WEAVE_MTU=1337&env.IPALLOC_RANGE=192.168.0.0/16"
apiVersion: v1
kind: List
items:
- apiVersion: v1
kind: ServiceAccount
metadata:
name: weave-net
annotations:
cloud.weave.works/launcher-info: |-
{
"server-version": "master-c3b4969",
"original-request": {
"url": "/k8s/v1.6/net.yaml?k8s-version=Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiI2IiwgR2l0VmVyc2lvbjoidjEuNi43IiwgR2l0Q29tbWl0OiIwOTUxMzZjMzA3OGNjZjg4N2I5MDM0YjdjZTU5OGEwYTFmYWZmNzY5IiwgR2l0VHJlZVN0YXRlOiJjbGVhbiIsIEJ1aWxkRGF0ZToiMjAxNy0wNy0wNVQxNjo1MTo1NloiLCBHb1ZlcnNpb246ImdvMS43LjYiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToibGludXgvYW1kNjQifQpTZXJ2ZXIgVmVyc2lvbjogdmVyc2lvbi5JbmZve01ham9yOiIxIiwgTWlub3I6IjYiLCBHaXRWZXJzaW9uOiJ2MS42LjciLCBHaXRDb21taXQ6IjA5NTEzNmMzMDc4Y2NmODg3YjkwMzRiN2NlNTk4YTBhMWZhZmY3NjkiLCBHaXRUcmVlU3RhdGU6ImNsZWFuIiwgQnVpbGREYXRlOiIyMDE3LTA3LTA1VDE2OjQwOjQyWiIsIEdvVmVyc2lvbjoiZ28xLjcuNiIsIENvbXBpbGVyOiJnYyIsIFBsYXRmb3JtOiJsaW51eC9hbWQ2NCJ9Cg==&env.WEAVE_MTU=1337&env.IPALLOC_RANGE=192.168.0.0/16",
"date": "Sun Jul 30 2017 02:48:47 GMT+0000 (UTC)"
},
"email-address": "support@weave.works"
}
labels:
name: weave-net
namespace: kube-system
- apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: weave-net
annotations:
cloud.weave.works/launcher-info: |-
{
"server-version": "master-c3b4969",
"original-request": {
"url": "/k8s/v1.6/net.yaml?k8s-version=Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiI2IiwgR2l0VmVyc2lvbjoidjEuNi43IiwgR2l0Q29tbWl0OiIwOTUxMzZjMzA3OGNjZjg4N2I5MDM0YjdjZTU5OGEwYTFmYWZmNzY5IiwgR2l0VHJlZVN0YXRlOiJjbGVhbiIsIEJ1aWxkRGF0ZToiMjAxNy0wNy0wNVQxNjo1MTo1NloiLCBHb1ZlcnNpb246ImdvMS43LjYiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToibGludXgvYW1kNjQifQpTZXJ2ZXIgVmVyc2lvbjogdmVyc2lvbi5JbmZve01ham9yOiIxIiwgTWlub3I6IjYiLCBHaXRWZXJzaW9uOiJ2MS42LjciLCBHaXRDb21taXQ6IjA5NTEzNmMzMDc4Y2NmODg3YjkwMzRiN2NlNTk4YTBhMWZhZmY3NjkiLCBHaXRUcmVlU3RhdGU6ImNsZWFuIiwgQnVpbGREYXRlOiIyMDE3LTA3LTA1VDE2OjQwOjQyWiIsIEdvVmVyc2lvbjoiZ28xLjcuNiIsIENvbXBpbGVyOiJnYyIsIFBsYXRmb3JtOiJsaW51eC9hbWQ2NCJ9Cg==&env.WEAVE_MTU=1337&env.IPALLOC_RANGE=192.168.0.0/16",
"date": "Sun Jul 30 2017 02:48:47 GMT+0000 (UTC)"
},
"email-address": "support@weave.works"
}
labels:
name: weave-net
rules:
- apiGroups:
- ''
resources:
- pods
- namespaces
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- networkpolicies
verbs:
- get
- list
- watch
- apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: weave-net
annotations:
cloud.weave.works/launcher-info: |-
{
"server-version": "master-c3b4969",
"original-request": {
"url": "/k8s/v1.6/net.yaml?k8s-version=Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiI2IiwgR2l0VmVyc2lvbjoidjEuNi43IiwgR2l0Q29tbWl0OiIwOTUxMzZjMzA3OGNjZjg4N2I5MDM0YjdjZTU5OGEwYTFmYWZmNzY5IiwgR2l0VHJlZVN0YXRlOiJjbGVhbiIsIEJ1aWxkRGF0ZToiMjAxNy0wNy0wNVQxNjo1MTo1NloiLCBHb1ZlcnNpb246ImdvMS43LjYiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToibGludXgvYW1kNjQifQpTZXJ2ZXIgVmVyc2lvbjogdmVyc2lvbi5JbmZve01ham9yOiIxIiwgTWlub3I6IjYiLCBHaXRWZXJzaW9uOiJ2MS42LjciLCBHaXRDb21taXQ6IjA5NTEzNmMzMDc4Y2NmODg3YjkwMzRiN2NlNTk4YTBhMWZhZmY3NjkiLCBHaXRUcmVlU3RhdGU6ImNsZWFuIiwgQnVpbGREYXRlOiIyMDE3LTA3LTA1VDE2OjQwOjQyWiIsIEdvVmVyc2lvbjoiZ28xLjcuNiIsIENvbXBpbGVyOiJnYyIsIFBsYXRmb3JtOiJsaW51eC9hbWQ2NCJ9Cg==&env.WEAVE_MTU=1337&env.IPALLOC_RANGE=192.168.0.0/16",
"date": "Sun Jul 30 2017 02:48:47 GMT+0000 (UTC)"
},
"email-address": "support@weave.works"
}
labels:
name: weave-net
roleRef:
kind: ClusterRole
name: weave-net
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: weave-net
namespace: kube-system
- apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: weave-net
annotations:
cloud.weave.works/launcher-info: |-
{
"server-version": "master-c3b4969",
"original-request": {
"url": "/k8s/v1.6/net.yaml?k8s-version=Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiI2IiwgR2l0VmVyc2lvbjoidjEuNi43IiwgR2l0Q29tbWl0OiIwOTUxMzZjMzA3OGNjZjg4N2I5MDM0YjdjZTU5OGEwYTFmYWZmNzY5IiwgR2l0VHJlZVN0YXRlOiJjbGVhbiIsIEJ1aWxkRGF0ZToiMjAxNy0wNy0wNVQxNjo1MTo1NloiLCBHb1ZlcnNpb246ImdvMS43LjYiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToibGludXgvYW1kNjQifQpTZXJ2ZXIgVmVyc2lvbjogdmVyc2lvbi5JbmZve01ham9yOiIxIiwgTWlub3I6IjYiLCBHaXRWZXJzaW9uOiJ2MS42LjciLCBHaXRDb21taXQ6IjA5NTEzNmMzMDc4Y2NmODg3YjkwMzRiN2NlNTk4YTBhMWZhZmY3NjkiLCBHaXRUcmVlU3RhdGU6ImNsZWFuIiwgQnVpbGREYXRlOiIyMDE3LTA3LTA1VDE2OjQwOjQyWiIsIEdvVmVyc2lvbjoiZ28xLjcuNiIsIENvbXBpbGVyOiJnYyIsIFBsYXRmb3JtOiJsaW51eC9hbWQ2NCJ9Cg==&env.WEAVE_MTU=1337&env.IPALLOC_RANGE=192.168.0.0/16",
"date": "Sun Jul 30 2017 02:48:47 GMT+0000 (UTC)"
},
"email-address": "support@weave.works"
}
labels:
name: weave-net
namespace: kube-system
spec:
template:
metadata:
labels:
name: weave-net
spec:
containers:
- name: weave
command:
- /home/weave/launch.sh
env:
- name: WEAVE_MTU
value: '1337'
- name: IPALLOC_RANGE
value: 192.168.0.0/16
- name: HOSTNAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
image: 'weaveworks/weave-kube:2.0.1'
imagePullPolicy: Always
livenessProbe:
httpGet:
host: 127.0.0.1
path: /status
port: 6784
initialDelaySeconds: 30
resources:
requests:
cpu: 10m
securityContext:
privileged: true
volumeMounts:
- name: weavedb
mountPath: /weavedb
- name: cni-bin
mountPath: /host/opt
- name: cni-bin2
mountPath: /host/home
- name: cni-conf
mountPath: /host/etc
- name: dbus
mountPath: /host/var/lib/dbus
- name: lib-modules
mountPath: /lib/modules
- name: weave-npc
env:
- name: HOSTNAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
image: 'weaveworks/weave-npc:2.0.1'
imagePullPolicy: Always
resources:
requests:
cpu: 10m
securityContext:
privileged: true
hostNetwork: true
hostPID: true
restartPolicy: Always
securityContext:
seLinuxOptions: {}
serviceAccountName: weave-net
tolerations:
- effect: NoSchedule
operator: Exists
volumes:
- name: weavedb
hostPath:
path: /var/lib/weave
- name: cni-bin
hostPath:
path: /opt
- name: cni-bin2
hostPath:
path: /home
- name: cni-conf
hostPath:
path: /etc
- name: dbus
hostPath:
path: /var/lib/dbus
- name: lib-modules
hostPath:
path: /lib/modules
updateStrategy:
type: RollingUpdate

View File

@@ -1,73 +0,0 @@
kind: Service
apiVersion: v1
metadata:
name: nfs-provisioner
labels:
app: nfs-provisioner
spec:
ports:
- name: nfs
port: 2049
- name: mountd
port: 20048
- name: rpcbind
port: 111
- name: rpcbind-udp
port: 111
protocol: UDP
selector:
app: nfs-provisioner
---
kind: Deployment
apiVersion: apps/v1beta1
metadata:
name: nfs-provisioner
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: nfs-provisioner
spec:
containers:
- name: nfs-provisioner
image: quay.io/kubernetes_incubator/nfs-provisioner:v1.0.7
ports:
- name: nfs
containerPort: 2049
- name: mountd
containerPort: 20048
- name: rpcbind
containerPort: 111
- name: rpcbind-udp
containerPort: 111
protocol: UDP
securityContext:
capabilities:
add:
- DAC_READ_SEARCH
- SYS_RESOURCE
args:
- "-provisioner=example.com/nfs"
- "-grace-period=10"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: SERVICE_NAME
value: nfs-provisioner
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: export-volume
mountPath: /export
volumes:
- name: export-volume
hostPath:
path: /var/lib/nfs-provisioner

View File

@@ -1,5 +0,0 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: general
provisioner: example.com/nfs

View File

@@ -1,15 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: cluster-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: Group
name: system:masters
- kind: Group
name: system:authenticated
- kind: Group
name: system:unauthenticated

View File

@@ -1,63 +0,0 @@
#!/bin/bash
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -xe
echo 'Starting the kubelet'
systemctl start kubelet
source /etc/kube-role
if [[ "${KUBE_ROLE}" == "master" ]]; then
# Source network vars
source /etc/kube-cni
# Define k8s version
source /etc/kube-version
if ! [[ "${KUBE_VERSION}" == "default" ]]; then
echo "We will use K8s ${KUBE_VERSION}"
sed -i "s|^kubernetesVersion:.*|kubernetesVersion: ${KUBE_VERSION}|g" /etc/kubeadm.conf
fi
echo 'Setting up K8s'
source /etc/kubeapi-device
if [[ "$KUBE_BIND_DEV" != "autodetect" ]]; then
KUBE_BIND_IP=$(ip addr list ${KUBE_BIND_DEV} |grep "inet " |cut -d' ' -f6|cut -d/ -f1)
echo "We are going to bind the K8s API to: ${KUBE_BIND_IP}"
kubeadm init \
--skip-preflight-checks \
--api-advertise-addresses ${KUBE_BIND_IP} \
--config /etc/kubeadm.conf
else
kubeadm init \
--skip-preflight-checks \
--config /etc/kubeadm.conf
fi
echo 'Setting up K8s client'
cp /etc/kubernetes/admin.conf /root/
export KUBECONFIG=/root/admin.conf
echo 'Marking master node as schedulable'
kubectl taint nodes --all node-role.kubernetes.io/master-
echo 'Installing Calico CNI'
kubectl apply -f /opt/cni-manifests/${KUBE_CNI}.yaml
echo 'Setting Up Cluster for OpenStack-Helm dev use'
/usr/bin/openstack-helm-dev-prep
elif [[ "${KUBE_ROLE}" == "worker" ]]; then
source /etc/kubeadm-join-command-args
kubeadm join --skip-preflight-checks ${KUBEADM_JOIN_ARGS}
fi

View File

@@ -1,60 +0,0 @@
#!/bin/bash
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -xe
# Set the KUBELET_CONTAINER env var
source /etc/kubelet-container
# Determine the cgroup driver in use by Docker
CGROUP_DRIVER=$(docker info | awk '/^Cgroup Driver:/ { print $NF }')
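# CGROUP_DRIVER will typically be either "cgroupfs" or "systemd", depending on how
# Docker is configured on the host.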
if [[ "${KUBELET_CONTAINER}" == "this_one" ]]; then
exec kubelet-real \
--containerized=true \
--cgroup-driver=${CGROUP_DRIVER} "${@}"
else
# Let's remove any old containers
docker rm -f kubelet || true
# Launch the container
exec docker run \
--name kubelet \
--restart=always \
--volume=/sys/fs/cgroup:/sys/fs/cgroup:ro \
--volume=/:/rootfs:ro \
--volume=/dev:/dev:rshared \
--volume=/lib/modules:/lib/modules:ro \
--volume=/var/run/netns:/var/run/netns:rw \
--volume=/sys:/sys:ro \
--volume=/etc/machine-id:/etc/machine-id:ro \
--volume=/opt/cni:/opt/cni:rw \
--volume=/etc/cni/net.d:/etc/cni/net.d:rw \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rshared \
--volume=/var/run:/var/run:rw \
--volume=/var/log/containers:/var/log/containers:rw \
--volume=/etc/kubernetes:/etc/kubernetes:rw \
--volume=/etc/hosts:/etc/hosts:rw \
--volume=/etc/resolv.conf:/etc/resolv.conf:rw \
--net=host \
--privileged=true \
--pid=host \
--ipc=host \
${KUBELET_CONTAINER} \
kubelet \
--containerized=true \
--cgroup-driver=${CGROUP_DRIVER} "${@}"
fi

View File

@@ -1,22 +0,0 @@
#!/bin/bash
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -xe
echo 'Setting up virtual network devices'
ip link add neutron-ext type dummy || true
ip link set neutron-ext up
ip link add neutron-phys type dummy || true
ip link set neutron-phys up

View File

@@ -1,34 +0,0 @@
#!/bin/bash
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -xe
echo 'Setting Kubecfg Location'
export KUBECONFIG=/root/admin.conf
echo 'Cloning OpenStack-Helm'
git clone --depth 1 https://github.com/openstack/openstack-helm.git /opt/openstack-helm
echo 'Starting helm local repo'
helm serve &
until curl -sSL --connect-timeout 1 http://localhost:8879 > /dev/null; do
echo 'Waiting for helm serve to start'
sleep 2
done
helm repo add local http://localhost:8879/charts
echo 'Building OpenStack-Helm'
cd /opt/openstack-helm
make

View File

@@ -1,27 +0,0 @@
#!/bin/bash
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -xe
echo 'Setting Kubecfg Location'
export KUBECONFIG=/root/admin.conf
echo 'Labeling the nodes for Openstack-Helm deployment'
kubectl label nodes openstack-control-plane=enabled --all --namespace=openstack --overwrite
kubectl label nodes openvswitch=enabled --all --namespace=openstack --overwrite
kubectl label nodes openstack-compute-node=enabled --all --namespace=openstack --overwrite
echo 'RBAC: applying development rules (totally open!)'
kubectl replace -f /opt/rbac/dev.yaml

View File

@@ -1,22 +0,0 @@
#!/bin/bash
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -xe
echo 'Setting Kubecfg Location'
export KUBECONFIG=/root/admin.conf
echo 'Deploying NFS Provisioner'
kubectl create -R -f /opt/nfs-provisioner/

View File

@@ -1,42 +0,0 @@
#!/bin/bash
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -e
# Default wait timeout is 180 seconds
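# Usage (assumed invocation): the first positional argument is unused here; the
# second is an optional timeout in seconds, defaulting to 180.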
: ${KUBECONFIG:="/etc/kubernetes/admin.conf"}
export KUBECONFIG=${KUBECONFIG}
end=$(date +%s)
if ! [ -z $2 ]; then
  end=$((end + $2))
else
  end=$((end + 180))
fi
while true; do
  NUMBER_OF_NODES=$(kubectl get nodes --no-headers -o name | wc -l)
  NUMBER_OF_NODES_EXPECTED=$(($(cat /etc/nodepool/sub_nodes_private | wc -l) + 1))
  [ $NUMBER_OF_NODES -eq $NUMBER_OF_NODES_EXPECTED ] && \
    NODES_ONLINE="True" || NODES_ONLINE="False"
  # Treat the cluster as ready only if every node reports Ready,
  # not just the last one returned by kubectl.
  NODES_READY="True"
  while read SUB_NODE; do
    echo $SUB_NODE | grep -q ^Ready || NODES_READY="False"
  done < <(kubectl get nodes --no-headers | awk '{ print $2 }')
  [ $NODES_ONLINE == "True" -a $NODES_READY == "True" ] && \
    break || true
  sleep 5
  now=$(date +%s)
  [ $now -gt $end ] && echo "Nodes failed to be ready in time." && \
    kubectl get nodes -o wide && exit -1
done
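
The script reads an optional timeout from its second positional argument, never references the first, and expects ``/etc/nodepool/sub_nodes_private`` to exist; the file name below is a placeholder, since the diff does not show it.

.. code:: bash

  # Wait up to 600 seconds for every expected node to register and report Ready
  ./wait-for-kube-nodes.sh "" 600
  # With no arguments the default 180 second timeout applies
  ./wait-for-kube-nodes.sh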


@@ -1,46 +0,0 @@
#!/bin/bash
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -e
# From Kolla-Kubernetes, original authors Kevin Fox & Serguei Bezverkhi
# Default wait timeout is 180 seconds
: ${KUBECONFIG:="/etc/kubernetes/admin.conf"}
export KUBECONFIG=${KUBECONFIG}
end=$(date +%s)
if ! [ -z $2 ]; then
  end=$((end + $2))
else
  end=$((end + 180))
fi
while true; do
  kubectl get pods --namespace=$1 -o json | jq -r \
    '.items[].status.phase' | grep Pending > /dev/null && \
    PENDING=True || PENDING=False
  query='.items[]|select(.status.phase=="Running")'
  query="$query|.status.containerStatuses[].ready"
  kubectl get pods --namespace=$1 -o json | jq -r "$query" | \
    grep false > /dev/null && READY="False" || READY="True"
  kubectl get jobs -o json --namespace=$1 | jq -r \
    '.items[] | .spec.completions == .status.succeeded' | \
    grep false > /dev/null && JOBR="False" || JOBR="True"
  [ $PENDING == "False" -a $READY == "True" -a $JOBR == "True" ] && \
    break || true
  sleep 1
  now=$(date +%s)
  [ $now -gt $end ] && echo containers failed to start. && \
    kubectl get pods --namespace $1 -o wide && exit -1
done
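
This helper is invoked later in this change as ``wait-for-kube-pods`` inside the kubeadm-aio container. Its first argument is the namespace to watch and the optional second argument overrides the 180 second default; both ``kubectl`` and ``jq`` must be on the ``PATH``. For example:

.. code:: bash

  # Block until all pods and jobs in the openstack namespace are ready,
  # allowing up to 900 seconds
  wait-for-kube-pods openstack 900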


@@ -1,20 +0,0 @@
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
[Unit]
Description=Minimal target for containerized applications
DefaultDependencies=false
AllowIsolate=yes
Requires=systemd-tmpfiles-setup.service systemd-journald.service dbus.service
After=systemd-tmpfiles-setup.service systemd-journald.service dbus.service


@@ -1,26 +0,0 @@
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
[Unit]
Description=Start Kubeadm AIO Cluster
[Service]
Type=oneshot
ExecStart=/usr/bin/kubeadm-aio
FailureAction=poweroff
StandardOutput=tty
TimeoutStartSec=0
[Install]
WantedBy=container-up.target
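
This unit is pulled in by the ``container-up.target`` defined in the previous file. A speculative sketch of how the two could be wired together inside the container image, assuming systemd runs as PID 1 there:

.. code:: bash

  # Enable the service so the target pulls it in
  systemctl enable kubeadm-aio.service
  # Switching to the minimal container target then starts kubeadm-aio
  systemctl isolate container-up.target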


@@ -1,112 +0,0 @@
#!/bin/bash
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -xe
# Setup shared mounts for kubelet
sudo mkdir -p /var/lib/kubelet
sudo mount --bind /var/lib/kubelet /var/lib/kubelet
sudo mount --make-shared /var/lib/kubelet
# Cleanup any old deployment
sudo docker rm -f kubeadm-aio || true
sudo docker rm -f kubelet || true
sudo docker ps -aq | xargs -r -l1 -P16 sudo docker rm -f
sudo rm -rfv \
/etc/cni/net.d \
/etc/kubernetes \
/var/lib/etcd \
/var/etcd \
/var/lib/kubelet/* \
/run/openvswitch \
/var/lib/nova \
${HOME}/.kubeadm-aio/admin.conf \
/var/lib/openstack-helm \
/var/lib/nfs-provisioner || true
: ${KUBE_CNI:="calico"}
: ${CNI_POD_CIDR:="192.168.0.0/16"}
# Launch Container
sudo docker run \
-dt \
--name=kubeadm-aio \
--net=host \
--security-opt=seccomp:unconfined \
--cap-add=SYS_ADMIN \
--tmpfs=/run \
--tmpfs=/run/lock \
--volume=/etc/machine-id:/etc/machine-id:ro \
--volume=${HOME}:${HOME}:rw \
--volume=${HOME}/.kubeadm-aio:/root:rw \
--volume=/etc/kubernetes:/etc/kubernetes:rw \
--volume=/sys/fs/cgroup:/sys/fs/cgroup:ro \
--volume=/var/run/docker.sock:/run/docker.sock \
--env KUBELET_CONTAINER=${KUBEADM_IMAGE} \
--env KUBE_VERSION=${KUBE_VERSION} \
--env KUBE_CNI=${KUBE_CNI} \
--env CNI_POD_CIDR=${CNI_POD_CIDR} \
${KUBEADM_IMAGE}
echo "Waiting for kubeconfig"
set +x
end=$(($(date +%s) + 480))
READY="False"
while true; do
  if [ -f ${HOME}/.kubeadm-aio/admin.conf ]; then
    READY="True"
  fi
  [ "$READY" == "True" ] && break || true
  sleep 1
  now=$(date +%s)
  [ $now -gt $end ] && \
    echo "KubeADM did not generate kubectl config in time" && \
    sudo docker logs kubeadm-aio && exit -1
done
set -x
# Set perms of kubeconfig and set env-var
sudo chown $(id -u):$(id -g) ${HOME}/.kubeadm-aio/admin.conf
export KUBECONFIG=${HOME}/.kubeadm-aio/admin.conf
echo "Waiting for node to be ready before continuing"
set +x
end=$(($(date +%s) + 480))
READY="False"
while true; do
  READY=$(kubectl get nodes --no-headers=true | awk "{ print \$2 }" | head -1)
  [ $READY == "Ready" ] && break || true
  sleep 1
  now=$(date +%s)
  [ $now -gt $end ] && \
    echo "Kube node did not register as ready in time" && \
    sudo docker logs kubeadm-aio && exit -1
done
set -x
# Waiting for kube-system pods to be ready before continuing
sudo docker exec kubeadm-aio wait-for-kube-pods kube-system 480
# Initialize Helm
helm init
# Initialize Environment for Development
sudo docker exec kubeadm-aio openstack-helm-dev-prep
: ${PVC_BACKEND:="nfs"}
if [ "$PVC_BACKEND" == "nfs" ]; then
  # Deploy NFS provisioner into environment
  sudo docker exec kubeadm-aio openstack-helm-nfs-prep
fi
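
The launcher is driven entirely by environment variables. Everything below is a hedged example: the image tag, Kubernetes version, and script path are illustrative assumptions rather than values taken from this diff, while the CNI and backend settings match the defaults in the script itself.

.. code:: bash

  # Assumed image and version; substitute whatever your checkout builds
  export KUBEADM_IMAGE=docker.io/openstackhelm/kubeadm-aio:dev
  export KUBE_VERSION=v1.8.5
  # Defaults from the script, shown here for completeness
  export KUBE_CNI=calico
  export CNI_POD_CIDR=192.168.0.0/16
  export PVC_BACKEND=nfs
  # Hypothetical path to the script shown above
  ./tools/kubeadm-aio/kubeadm-aio-launcher.sh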


@@ -1,42 +0,0 @@
==================
Vagrant Deployment
==================

A Vagrantfile has been provided in the ``tools/`` directory. This
Vagrant installation will prep a single VM with specs from the ``config.rb``
file, and will run OpenStack-Helm gate scripts to set up Kubernetes.

Requirements
------------

* Hardware:

  * 16GB RAM
  * 32GB HDD Space

* Software:

  * Vagrant >= 1.8.0
  * VirtualBox >= 5.1.0
  * Git

Deploy
------

Make sure you are in the directory containing the Vagrantfile before
running the following commands.

Create VM
---------

.. code:: bash

  vagrant up --provider virtualbox

Validate helm charts are successfully deployed
----------------------------------------------

.. code:: bash

  vagrant ssh
  helm list
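
Once testing is finished, a typical cleanup (not part of the original document) is:

.. code:: bash

  # Stop the VM without destroying it
  vagrant halt
  # Remove the VM; the extra .vdi disks under .vagrant/ may need manual cleanup
  vagrant destroy -f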


@@ -1,94 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
# NOTE: Variable overrides are in ./config.rb
require "yaml"
require "fileutils"
# Use a variable file for overrides:
CONFIG = File.expand_path("config.rb")
if File.exist?(CONFIG)
require CONFIG
end
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://atlas.hashicorp.com/search.
config.vm.box = $vm_image
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
# config.vm.box_check_update = false
# Create a private network, which allows host-only access to the machine
# using a specific IP.
config.vm.network "private_network", ip: "192.168.33.10"
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
config.vm.synced_folder "../", "/opt/openstack-helm"
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
config.vm.provider "virtualbox" do |vb|
# Display the VirtualBox GUI when booting the machine
vb.gui = false
# Customize the amount of memory on the VM:
vb.memory = $ram
# Customize the number of vCPUs in the VM:
vb.cpus = $vcpu_cores
# Set the size of the VM's docker disk:
unless File.exist?('.vagrant/machines/default/openstack-helm-storage.vdi')
vb.customize ['createhd', '--filename', '.vagrant/machines/default/openstack-helm-storage', '--size', $docker_disk]
end
vb.customize ['storageattach', :id, '--storagectl', 'SCSI', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', '.vagrant/machines/default/openstack-helm-storage.vdi']
# Set the size of the VM's PVC disk:
unless File.exist?('.vagrant/machines/default/openstack-helm-storage-kube-pvc.vdi')
vb.customize ['createhd', '--filename', '.vagrant/machines/default/openstack-helm-storage-kube-pvc', '--size', $pvc_disk]
end
vb.customize ['storageattach', :id, '--storagectl', 'SCSI', '--port', 3, '--device', 0, '--type', 'hdd', '--medium', '.vagrant/machines/default/openstack-helm-storage-kube-pvc.vdi']
end
# Enable provisioning with a shell script.
config.vm.provision "shell", inline: <<-SHELL
# Setup docker storage
mkfs.xfs /dev/disk/by-path/pci-0000\:00\:14.0-scsi-0\:0\:2\:0 -f -L docker-srg
mkdir -p /var/lib/docker
echo "LABEL=docker-srg /var/lib/docker xfs defaults 0 0" >> /etc/fstab
# Setup kubelet pvc storage
mkfs.xfs /dev/disk/by-path/pci-0000\:00\:14.0-scsi-0\:0\:3\:0 -f -L kube-srg
mkdir -p /var/lib/nfs-provisioner
echo "LABEL=kube-srg /var/lib/nfs-provisioner xfs defaults 0 0" >> /etc/fstab
# Mount Storage
mount -a
apt-get update
apt-get install -y git
SHELL
config.vm.provision "shell", privileged: false, inline: <<-SHELL
#Cloning repo
git clone https://git.openstack.org/openstack/openstack-helm
cd openstack-helm
export INTEGRATION='aio'
export INTEGRATION_TYPE='basic'
./tools/gate/setup_gate.sh
SHELL
end


@@ -1,6 +0,0 @@
# VM Specs
$vm_image = "ubuntu/xenial64"
# Disk sizes are in MB (passed to "VBoxManage createhd --size" by the Vagrantfile)
$docker_disk = 20480
$pvc_disk = 10240
$vcpu_cores = 4
# RAM in MB (passed to vb.memory)
$ram = 8192
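
These defaults are read by the Vagrantfile above through ``config.rb``. One hedged way to raise the VM resources before provisioning, run from the directory containing the Vagrantfile, with example values only:

.. code:: bash

  # Bump RAM and vCPU count in the override file, then bring the VM up
  sed -i 's/^\$ram = .*/$ram = 16384/' config.rb
  sed -i 's/^\$vcpu_cores = .*/$vcpu_cores = 8/' config.rb
  vagrant up --provider virtualbox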