Move deployment scripts from AIAB to Treasuremap

* Last AIAB commit ID: d7d345f
* Also moving from Quagga to FRRouting for bgp router script.

Change-Id: If5e4e030dacaa7fcf525f9767f50c82b07516e27
Alexander Noskov 2019-10-03 21:15:25 +00:00 committed by Sreejith Punnapuzha
parent 35733a4151
commit 2800b5489f
73 changed files with 4423 additions and 0 deletions

.gitignore

@@ -3,3 +3,4 @@ peggles/
# Unit test / coverage reports
.tox/
config-ssh

@@ -0,0 +1,221 @@
..
  Copyright 2018 AT&T Intellectual Property.
  All Rights Reserved.

  Licensed under the Apache License, Version 2.0 (the "License"); you may
  not use this file except in compliance with the License. You may obtain
  a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
  WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
  License for the specific language governing permissions and limitations
  under the License.

.. _gates:

Gates
=====

Treasuremap contains scripts to aid developers and the automation of Airship.
These tools are found in ``./treasuremap/tools/deployment``.
Setup and Use
-------------

1. The first time, and only once per node, run ``./setup_gate.sh`` to prepare
   the node for use by setting up the necessary users, virsh, and some
   dependencies.
2. ``gate.sh`` is the starting point to run each of the named gates, found in
   ``./airship_gate/manifests``, e.g.::

     $ ./gate.sh multinode_deploy

   where the argument to the ``gate.sh`` script is the filename of the JSON
   file in ``./airship_gate/manifests``, without the ``.json`` extension.

   If you run the script without any arguments::

     $ ./gate.sh

   then it will by default stand up a four-node Airship cluster.

If you'd like to bring your own manifest to drive the framework, set and
export the ``GATE_MANIFEST`` environment variable prior to running
``gate.sh``.

Each of the defined manifests used for the gate defines a virtual machine
configuration and the steps to run as part of that gate. Each manifest also
contains a configuration that targets a particular set of Airship site
configurations, which in some of the provided manifests are found in the
``deployment_files/site`` directory.
Other Utilities
---------------

Several useful utilities are found in ``./airship_gate/bin`` to facilitate
interaction with the VMs created. These commands are wrapper scripts that
provide the functionality of the utility they wrap, while also incorporating
the identifying information needed for a particular run of a gate, e.g.::

  $ ./airship_gate/bin/ssh.sh n0
Writing Manifests
-----------------

Custom manifests can be used to drive this framework for testing outside the
default virtual site deployment scenario. The sections below describe how to
create a manifest that defines a custom network or VM configuration or runs
a custom stage pipeline. Manifest files are written in JSON, and the
documentation below uses dotted JSON paths when describing structure. Unless
otherwise noted, paths are relative to the document root.
Network Configuration
#####################

The ``.networking`` key defines the network topology of the site. Each
subkey is the name of a network. Under each network name is a semi-recursive
stanza defining the layer 2 and layer 3 attributes of the network:

.. code-block:: json

    {
      "roles": ["string"],
      "layer2": {
        "mtu": integer,
        "address": "mac_address"
      },
      "layer3": {
        "cidr": "CIDR",
        "address": "ip_address",
        "gateway": "ip_address",
        "routing": {
          "mode": "nat"
        }
      }
    }

or

.. code-block:: json

    {
      "layer2": {
        "mtu": integer,
        "vlans": {
          "integer": {
            "layer2": ...,
            "layer3": ...
          },
          "integer": {
            "layer2": ...,
            "layer3": ...
          }
        }
      }
    }

* roles - Strings used to select the correct network for internal gate
  functions. Supported roles: "ssh", "dns", "bgp".
* layer2 - Defines Layer 2 attributes.
* layer3 - Defines Layer 3 attributes; valid only when the ``layer2``
  attribute does NOT define VLANs.
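The framework's helpers (e.g. ``config_netspec_for_role`` in
``airship_gate/lib/config.sh``) resolve these roles with ``jq`` queries
against the manifest. A minimal sketch of the same kind of lookup, assuming
``jq`` is installed and using a small hypothetical ``networking`` stanza:

```shell
# Find the name of the network carrying the "ssh" role, the way the
# framework selects networks for internal gate functions.
cat > /tmp/networking.json <<'EOF'
{
  "networking": {
    "pxe": {"roles": ["ssh", "dns"], "layer3": {"cidr": "172.24.1.0/24"}},
    "public": {"roles": ["bgp"], "layer3": {"cidr": "172.24.8.0/24"}}
  }
}
EOF
SSH_NET=$(jq -cr '.networking | to_entries[]
  | select(.value.roles[]? == "ssh") | .key' < /tmp/networking.json)
echo "ssh role is carried by network: ${SSH_NET}"
```

The same pattern applies when the gate wires up its "dns" and "bgp"
internals.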
Disk Layouts
############

The ``.disk_layouts`` key defines the various disk layouts that can be
assigned to the VMs being built. Each named layout key defines one or more
block devices that will be created as file-backed volumes.

.. code-block:: json

    {
      "simple": {
        "vda": {
          "size": 30,
          "io_profile": "fast",
          "bootstrap": true
        }
      },
      "multi": {
        "vda": {
          "size": 15,
          "io_profile": "fast",
          "bootstrap": true
        },
        "vdb": {
          "size": 15,
          "io_profile": "fast",
          "format": {"type": "ext4", "mountpoint": "/var"}
        }
      }
    }

* size - Size of the volume in gigabytes.
* io_profile - One of the following I/O configurations:

  * fast - In the case of a VM disruption, synchronous I/O may be lost;
    better throughput.
  * safe - Synchronous I/O is fully written to disk; slower throughput.

* bootstrap - Marks the disk to use for VMs that are bootstrapped by the
  framework rather than by Airship.
* format - For VMs that are bootstrapped by the framework, describes how the
  disk should be formatted and mounted, when desired:

  * type - Filesystem type (e.g. "xfs" or "ext4")
  * mountpoint - Path to the mountpoint
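These fields are read the same way the framework's ``config_disk_*`` helpers
in ``airship_gate/lib/config.sh`` read them. A sketch, assuming ``jq`` is
installed and using the ``multi`` layout from the example above:

```shell
# Which device in the "multi" layout boots, and how should vdb be formatted?
cat > /tmp/disks.json <<'EOF'
{
  "disk_layouts": {
    "multi": {
      "vda": {"size": 15, "io_profile": "fast", "bootstrap": true},
      "vdb": {"size": 15, "io_profile": "fast",
              "format": {"type": "ext4", "mountpoint": "/var"}}
    }
  }
}
EOF
BOOT_DISK=$(jq -cr '.disk_layouts.multi | to_entries[]
  | select(.value.bootstrap == true) | .key' < /tmp/disks.json)
VDB_FORMAT=$(jq -cr '.disk_layouts.multi.vdb.format.type' < /tmp/disks.json)
echo "bootstrap disk: ${BOOT_DISK}; vdb formatted as ${VDB_FORMAT}"
```

Note that, as in ``config_disk_format``, the bootstrap disk itself is never
formatted by the framework.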
VM Configuration
################

Under the ``.vm`` key is a mapping of all the VMs that will be created via
virt-install. This can be a mix of VMs that are bootstrapped via
virsh/cloud-init and those deployed via Airship. Each key is the name of a
VM, and each value is a JSON object:

.. code-block:: json

    {
      "memory": integer,
      "vcpus": integer,
      "disk_layout": "simple",
      "networking": {
        "ens3": {
          "mac": "52:54:00:00:be:31",
          "pci": {
            "slot": 3,
            "port": 0
          },
          "attachment": {
            "network": "pxe"
          }
        },
        "addresses": {
          "pxe": {
            "ip": "172.24.1.9"
          }
        }
      },
      "bootstrap": true,
      "userdata": "packages: [docker.io]"
    }

* memory - VM RAM in megabytes.
* vcpus - Number of VM CPUs.
* disk_layout - A disk profile for the VM, matching one defined under
  ``.disk_layouts``.
* bootstrap - Whether the framework (rather than Airship) should bootstrap
  the VM's OS.
* userdata - Cloud-init userdata fed to the VM when it is bootstrapped, for
  further customization.
* networking - Network attachment and addressing configuration. Every key
  except ``addresses`` is assumed to be a desired NIC on the VM. For each
  NIC stanza, the following fields are respected:

  * mac - A MAC address for the NIC.
  * pci - A JSON object with ``slot`` and ``port`` keys specifying the PCI
    address for the NIC.
  * attachment - The network from ``.networking`` attached to this NIC.

  The ``addresses`` key specifies the IP address of the VM on each layer 3
  network it is attached to.
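Helpers such as ``config_vm_memory``, ``config_vm_iface_list``, and
``config_vm_net_ip`` in ``airship_gate/lib/config.sh`` read these fields
with ``jq``. A minimal sketch (assuming ``jq`` is installed; the VM name
``n0`` and its values are illustrative):

```shell
# Read VM sizing, NIC list, and per-network addressing from a manifest,
# mirroring the jq queries used by airship_gate/lib/config.sh.
cat > /tmp/vm.json <<'EOF'
{
  "vm": {
    "n0": {
      "memory": 4096,
      "vcpus": 2,
      "disk_layout": "simple",
      "networking": {
        "ens3": {"mac": "52:54:00:00:be:31", "attachment": {"network": "pxe"}},
        "addresses": {"pxe": {"ip": "172.24.1.9"}}
      }
    }
  }
}
EOF
MEMORY=$(jq -cr '.vm.n0.memory' < /tmp/vm.json)
PXE_IP=$(jq -cr '.vm.n0.networking.addresses.pxe.ip' < /tmp/vm.json)
# NICs are all networking keys except "addresses".
NICS=$(jq -cr '.vm.n0.networking | del(.addresses) | keys | join(" ")' < /tmp/vm.json)
echo "n0: ${MEMORY}MB, NICs: ${NICS}, pxe IP: ${PXE_IP}"
```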
Stage Pipeline
##############

TODO

External Access
###############

TODO

@@ -0,0 +1,50 @@
#!/usr/bin/env bash
# Copyright 2017 The Openstack-Helm Authors.
# Copyright 2019 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
export CLUSTER_TYPE="${CLUSTER_TYPE:="node,clusterrole,clusterrolebinding,storageclass,namespace"}"
export PARALLELISM_FACTOR="${PARALLELISM_FACTOR:=2}"
function list_objects () {
printf "%s" ${CLUSTER_TYPE} | xargs -d ',' -I {} -P1 -n1 bash -c 'echo "$@"' _ {}
}
export -f list_objects
function name_objects () {
export OBJECT=$1
kubectl get "${OBJECT}" -o name | xargs -L1 -I {} -P1 -n1 bash -c 'echo "${OBJECT}" "${1#*/}"' _ {}
}
export -f name_objects
function get_objects () {
input=($1)
export OBJECT=${input[0]}
export NAME=${input[1]#*/}
echo "${OBJECT}/${NAME}"
export BASE_DIR="${BASE_DIR:="/tmp"}"
DIR="${BASE_DIR}/objects/cluster/${OBJECT}"
mkdir -p "${DIR}"
kubectl get "${OBJECT}" "${NAME}" -o yaml > "${DIR}/${NAME}.yaml"
kubectl describe "${OBJECT}" "${NAME}" > "${DIR}/${NAME}.txt"
}
export -f get_objects
list_objects | \
xargs -r -n 1 -P ${PARALLELISM_FACTOR} -I {} bash -c 'name_objects "$@"' _ {} | \
xargs -r -n 1 -P ${PARALLELISM_FACTOR} -I {} bash -c 'get_objects "$@"' _ {}

@@ -0,0 +1,81 @@
#!/usr/bin/env bash
# Copyright 2019 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ux
SUBDIR_NAME=debug-$(hostname)
# NOTE(mark-burnett): This should add calicoctl to the path.
export PATH=${PATH}:/opt/cni/bin
TEMP_DIR=$(mktemp -d)
export TEMP_DIR
export BASE_DIR="${TEMP_DIR}/${SUBDIR_NAME}"
export HELM_DIR="${BASE_DIR}/helm"
export CALICO_DIR="${BASE_DIR}/calico"
mkdir -p "${BASE_DIR}"
export OBJECT_TYPE="${OBJECT_TYPE:="pods"}"
export CLUSTER_TYPE="${CLUSTER_TYPE:="namespace"}"
export PARALLELISM_FACTOR="${PARALLELISM_FACTOR:=2}"
function get_releases () {
helm list --all --short
}
function get_release () {
input=($1)
RELEASE=${input[0]}
helm status "${RELEASE}" > "${HELM_DIR}/${RELEASE}.txt"
}
export -f get_release
if which helm; then
mkdir -p "${HELM_DIR}"
helm list --all > "${HELM_DIR}/list"
get_releases | \
xargs -r -n 1 -P "${PARALLELISM_FACTOR}" -I {} bash -c 'get_release "$@"' _ {}
fi
kubectl get --all-namespaces -o wide pods > "${BASE_DIR}/pods.txt"
kubectl get pods --all-namespaces -o yaml > "${BASE_DIR}/pods_long.yaml"
kubectl describe pods --all-namespaces > "${BASE_DIR}/pods_describe.txt"
./tools/deployment/seaworthy-virt/airship_gate/bin/namespace-objects.sh
./tools/deployment/seaworthy-virt/airship_gate/bin/cluster-objects.sh
iptables-save > "${BASE_DIR}/iptables"
cat /var/log/syslog > "${BASE_DIR}/syslog"
cat /var/log/armada/bootstrap-armada.log > "${BASE_DIR}/bootstrap-armada.log"
ip addr show > "${BASE_DIR}/ifconfig"
ip route show > "${BASE_DIR}/ip-route"
cp -p /etc/resolv.conf "${BASE_DIR}/"
env | sort --ignore-case > "${BASE_DIR}/environment"
docker images > "${BASE_DIR}/docker-images"
if which calicoctl; then
mkdir -p "${CALICO_DIR}"
for kind in bgpPeer hostEndpoint ipPool nodes policy profile workloadEndpoint; do
calicoctl get "${kind}" -o yaml > "${CALICO_DIR}/${kind}.yaml"
done
fi
tar zcf "${SUBDIR_NAME}.tgz" -C "${TEMP_DIR}" "${SUBDIR_NAME}"

@@ -0,0 +1,24 @@
#!/bin/bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
SCRIPT_DIR="$(realpath "$(dirname "$0")")"
WORKSPACE="$(realpath "${SCRIPT_DIR}/../../..")"
GATE_UTILS="${WORKSPACE}/seaworthy-virt/airship_gate/lib/all.sh"
source "${GATE_UTILS}"
drydock_cmd "$@"

@@ -0,0 +1,72 @@
#!/usr/bin/env bash
# Copyright 2017 The Openstack-Helm Authors.
# Copyright 2019 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
export OBJECT_TYPE="${OBJECT_TYPE:="configmaps,cronjobs,daemonsets,deployment,endpoints,ingresses,jobs,networkpolicies,pods,podsecuritypolicies,persistentvolumeclaims,rolebindings,roles,secrets,serviceaccounts,services,statefulsets"}"
export PARALLELISM_FACTOR="${PARALLELISM_FACTOR:=2}"
function get_namespaces () {
kubectl get namespaces -o name | awk -F '/' '{ print $NF }'
}
function list_namespaced_objects () {
export NAMESPACE=$1
printf "%s" "${OBJECT_TYPE}" | xargs -d ',' -I {} -P1 -n1 bash -c 'echo "${NAMESPACE}" "$1"' _ {}
}
export -f list_namespaced_objects
function name_objects () {
input=($1)
export NAMESPACE=${input[0]}
export OBJECT=${input[1]}
kubectl get -n "${NAMESPACE}" "${OBJECT}" -o name | xargs -L1 -I {} -P1 -n1 bash -c 'echo "${NAMESPACE}" "${OBJECT}" "${1#*/}"' _ {}
}
export -f name_objects
function get_objects () {
input=($1)
export NAMESPACE=${input[0]}
export OBJECT=${input[1]}
export NAME=${input[2]#*/}
echo "${NAMESPACE}/${OBJECT}/${NAME}"
export BASE_DIR="${BASE_DIR:="/tmp"}"
DIR="${BASE_DIR}/namespaces/${NAMESPACE}/${OBJECT}"
mkdir -p "${DIR}"
kubectl get -n "${NAMESPACE}" "${OBJECT}" "${NAME}" -o yaml > "${DIR}/${NAME}.yaml"
kubectl describe -n "${NAMESPACE}" "${OBJECT}" "${NAME}" > "${DIR}/${NAME}.txt"
LOG_DIR="${BASE_DIR}/pod-logs"
mkdir -p "${LOG_DIR}"
if [ "${OBJECT}" = "pods" ]; then
POD_DIR="${LOG_DIR}/${NAME}"
mkdir -p "${POD_DIR}"
CONTAINERS=$(kubectl get pod "${NAME}" -n "${NAMESPACE}" -o json | jq -r '.spec.containers[].name')
for CONTAINER in ${CONTAINERS}; do
kubectl logs -n "${NAMESPACE}" "${NAME}" -c "${CONTAINER}" > "${POD_DIR}/${CONTAINER}.txt"
done
fi
}
export -f get_objects
get_namespaces | \
xargs -r -n 1 -P ${PARALLELISM_FACTOR} -I {} bash -c 'list_namespaced_objects "$@"' _ {} | \
xargs -r -n 1 -P ${PARALLELISM_FACTOR} -I {} bash -c 'name_objects "$@"' _ {} | \
xargs -r -n 1 -P ${PARALLELISM_FACTOR} -I {} bash -c 'get_objects "$@"' _ {}

@@ -0,0 +1,24 @@
#!/bin/bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
SCRIPT_DIR="$(realpath "$(dirname "$0")")"
WORKSPACE="$(realpath "${SCRIPT_DIR}/../../..")"
GATE_UTILS="${WORKSPACE}/seaworthy-virt/airship_gate/lib/all.sh"
source "${GATE_UTILS}"
exec rsync -e "ssh -F ${SSH_CONFIG_DIR}/config" "$@"

@@ -0,0 +1,24 @@
#!/bin/bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
SCRIPT_DIR="$(realpath "$(dirname "$0")")"
WORKSPACE="$(realpath "${SCRIPT_DIR}/../../..")"
GATE_UTILS="${WORKSPACE}/seaworthy-virt/airship_gate/lib/all.sh"
source "${GATE_UTILS}"
exec scp -F "${SSH_CONFIG_DIR}/config" "$@"

@@ -0,0 +1,24 @@
#!/bin/bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -x
SCRIPT_DIR="$(realpath "$(dirname "$0")")"
WORKSPACE="$(realpath "${SCRIPT_DIR}/../../..")"
GATE_UTILS="${WORKSPACE}/seaworthy-virt/airship_gate/lib/all.sh"
source "${GATE_UTILS}"
shipyard_cmd "$@"

@@ -0,0 +1,24 @@
#!/bin/bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
SCRIPT_DIR="$(realpath "$(dirname "$0")")"
WORKSPACE="$(realpath "${SCRIPT_DIR}/../../..")"
GATE_UTILS="${WORKSPACE}/seaworthy-virt/airship_gate/lib/all.sh"
source "${GATE_UTILS}"
exec ssh -F "${SSH_CONFIG_DIR}/config" "$@"

@@ -0,0 +1,166 @@
#!/bin/bash
install_ingress_ca() {
ingress_ca=$(config_ingress_ca)
if [[ -z "$ingress_ca" ]]
then
echo "Not installing ingress root CA."
return
fi
local_file="${TEMP_DIR}/ingress_ca.pem"
remote_file="${BUILD_WORK_DIR}/ingress_ca.pem"
cat <<< "$ingress_ca" > "$local_file"
rsync_cmd "$local_file" "${BUILD_NAME}":"$remote_file"
}
shipyard_cmd_stdout() {
# needed to reach airship endpoints
dns_netspec="$(config_netspec_for_role "dns")"
dns_server=$(config_vm_net_ip "${BUILD_NAME}" "$dns_netspec")
install_ingress_ca
ssh_cmd "${BUILD_NAME}" \
docker run -t --network=host \
--dns "${dns_server}" \
-v "${BUILD_WORK_DIR}:/work" \
-e OS_AUTH_URL="${AIRSHIP_KEYSTONE_URL}" \
-e OS_USERNAME=shipyard \
-e OS_USER_DOMAIN_NAME=default \
-e OS_PASSWORD="${SHIPYARD_PASSWORD}" \
-e OS_PROJECT_DOMAIN_NAME=default \
-e OS_PROJECT_NAME=service \
-e REQUESTS_CA_BUNDLE=/work/ingress_ca.pem \
--entrypoint /usr/local/bin/shipyard "${IMAGE_SHIPYARD_CLI}" "$@" 2>&1
}
shipyard_cmd() {
if [[ ! -z "${LOG_FILE}" ]]
then
set -o pipefail
shipyard_cmd_stdout "$@" | tee -a "${LOG_FILE}"
set +o pipefail
else
shipyard_cmd_stdout "$@"
fi
}
drydock_cmd_stdout() {
dns_netspec="$(config_netspec_for_role "dns")"
dns_server="$(config_vm_net_ip "${BUILD_NAME}" "$dns_netspec")"
install_ingress_ca
ssh_cmd "${BUILD_NAME}" \
docker run -t --network=host \
--dns "${dns_server}" \
-v "${BUILD_WORK_DIR}:/work" \
-e DD_URL=http://drydock-api.ucp.svc.cluster.local:9000 \
-e OS_AUTH_URL="${AIRSHIP_KEYSTONE_URL}" \
-e OS_USERNAME=shipyard \
-e OS_USER_DOMAIN_NAME=default \
-e OS_PASSWORD="${SHIPYARD_PASSWORD}" \
-e OS_PROJECT_DOMAIN_NAME=default \
-e OS_PROJECT_NAME=service \
-e REQUESTS_CA_BUNDLE=/work/ingress_ca.pem \
--entrypoint /usr/local/bin/drydock "${IMAGE_DRYDOCK_CLI}" "$@" 2>&1
}
drydock_cmd() {
if [[ ! -z "${LOG_FILE}" ]]
then
set -o pipefail
drydock_cmd_stdout "$@" | tee -a "${LOG_FILE}"
set +o pipefail
else
drydock_cmd_stdout "$@"
fi
}
# Create a shipyard action
# and poll until completion
shipyard_action_wait() {
action="$1"
timeout="${2:-3600}"
poll_time="${3:-60}"
if [[ $action == "update_site" ]]
then
options="--allow-intermediate-commits"
else
options=""
fi
end_time=$(date -d "+${timeout} seconds" +%s)
log "Starting Shipyard action ${action}, will timeout in ${timeout} seconds."
ACTION_ID=$(shipyard_cmd create action ${options} "${action}")
ACTION_ID=$(echo "${ACTION_ID}" | grep -oE 'action/[0-9A-Z]+')
echo "Action ${ACTION_ID} has been created."
while true;
do
if [[ $(date +%s) -ge ${end_time} ]]
then
log "Shipyard action ${action} did not complete in ${timeout} seconds."
return 2
fi
ACTION_STATUS=$(shipyard_cmd describe "${ACTION_ID}" | grep -i "Lifecycle" | \
awk '{print $2}')
ACTION_STEPS=$(shipyard_cmd describe "${ACTION_ID}" | grep -i "step/" | \
awk '{print $3}')
# Verify lifecycle status
if [ "${ACTION_STATUS}" == "Failed" ]; then
echo -e "\n${ACTION_ID} FAILED\n"
shipyard_cmd describe "${ACTION_ID}"
exit 1
fi
if [ "${ACTION_STATUS}" == "Complete" ]; then
# Verify status of each action step
for step in ${ACTION_STEPS}; do
if [ "${step}" == "failed" ]; then
echo -e "\n${ACTION_ID} FAILED\n"
shipyard_cmd describe "${ACTION_ID}"
exit 1
fi
done
echo -e "\n${ACTION_ID} completed SUCCESSFULLY\n"
shipyard_cmd describe "${ACTION_ID}"
exit 0
fi
sleep "${poll_time}"
done
}
# Re-use the ssh key from ssh-config
# for MAAS-deployed nodes
collect_ssh_key() {
mkdir -p "${GATE_DEPOT}"
if [[ ! -r ${SSH_CONFIG_DIR}/id_rsa.pub ]]
then
ssh_keypair_declare
fi
if [[ -n "${USE_EXISTING_SECRETS}" ]]; then
log "Using existing manifests for secrets"
return 0
fi
cat << EOF > "${GATE_DEPOT}/airship_ubuntu_ssh_key.yaml"
---
schema: deckhand/Certificate/v1
metadata:
schema: metadata/Document/v1
name: ubuntu_ssh_key
layeringDefinition:
layer: site
abstract: false
storagePolicy: cleartext
data: |-
EOF
sed -e 's/^/ /' >> "${GATE_DEPOT}/airship_ubuntu_ssh_key.yaml" < "${SSH_CONFIG_DIR}/id_rsa.pub"
}

@@ -0,0 +1,25 @@
#!/bin/bash
set -e
LIB_DIR=$(realpath "$(dirname "${BASH_SOURCE[0]}")")
export LIB_DIR
REPO_ROOT=$(realpath "$(dirname "${BASH_SOURCE[0]}")/../../../../..")
export REPO_ROOT
source "$LIB_DIR"/config.sh
source "$LIB_DIR"/const.sh
source "$LIB_DIR"/docker.sh
source "$LIB_DIR"/kube.sh
source "$LIB_DIR"/log.sh
source "$LIB_DIR"/nginx.sh
source "$LIB_DIR"/promenade.sh
source "$LIB_DIR"/registry.sh
source "$LIB_DIR"/ssh.sh
source "$LIB_DIR"/virsh.sh
source "$LIB_DIR"/airship.sh
source "$LIB_DIR"/ingress.sh
source "$LIB_DIR"/bgp.sh
if [[ -v GATE_DEBUG && ${GATE_DEBUG} = "1" ]]; then
set -x
fi

@@ -0,0 +1,37 @@
#!/bin/bash
FRR_DAEMONS="${TEMP_DIR}/daemons"
FRR_DEBIAN_CONF="${TEMP_DIR}/debian.conf"
FRR_BGPD_CONF="${TEMP_DIR}/bgpd.conf"
bgp_router_config() {
frr_as_number="$(config_bgp_as "frr_as")"
calico_as_number="$(config_bgp_as "calico_as")"
bgp_net="$(config_netspec_for_role "bgp")"
frr_ip="$(config_vm_net_ip "build" "$bgp_net")"
# shellcheck disable=SC2016
FRR_AS=${frr_as_number} CALICO_AS=${calico_as_number} FRR_IP=${frr_ip} envsubst '${FRR_AS} ${CALICO_AS} ${FRR_IP}' < "${TEMPLATE_DIR}/bgpd_conf.sub" > "${FRR_BGPD_CONF}"
cp "${TEMPLATE_DIR}/daemons.sub" "${FRR_DAEMONS}"
cp "${TEMPLATE_DIR}/debian_conf.sub" "${FRR_DEBIAN_CONF}"
}
bgp_router_start() {
# nodename where BGP router should run
nodename=$1
remote_work_dir="/var/tmp/frr"
remote_daemons_file="${remote_work_dir}/$(basename "$FRR_DAEMONS")"
remote_debian_conf_file="${remote_work_dir}/$(basename "$FRR_DEBIAN_CONF")"
remote_bgpd_conf_file="${remote_work_dir}/$(basename "$FRR_BGPD_CONF")"
ssh_cmd "${nodename}" mkdir -p "${remote_work_dir}"
rsync_cmd "$FRR_DAEMONS" "${nodename}:${remote_daemons_file}"
rsync_cmd "$FRR_DEBIAN_CONF" "${nodename}:${remote_debian_conf_file}"
rsync_cmd "$FRR_BGPD_CONF" "${nodename}:${remote_bgpd_conf_file}"
ssh_cmd "${nodename}" docker run -ti -d --net=host --privileged -v /var/tmp/frr:/etc/frr --restart always --name FRRouting "$IMAGE_BGP"
}

@@ -0,0 +1,173 @@
#!/bin/bash
#
# Copyright 2019 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###############################################################################
# Helper functions
###############################################################################
# Key/value lookups from manifests
manifests_lookup(){
local file="$1"
local schema="$2"
local mdata_name="$3"
local key_path="$4"
local oper="$5"
local allow_fail="$6"
FAIL=false
RESULT=$(python3 -c "
import yaml,sys
y = yaml.safe_load_all(open('$file'))
for x in y:
if x.get('schema') == '$schema':
if x['metadata']['name'] == '$mdata_name':
if isinstance(x$key_path,list):
if '$oper' == 'get_size':
print(len(x$key_path))
break
else:
for i in x$key_path:
print(i)
break
else:
if '$oper' == 'dict_keys':
print(' '.join(x$key_path.keys()))
break
else:
print(x$key_path)
break
else:
sys.exit(1)" 2>&1) || FAIL=true
if [[ $FAIL = true ]] && [[ $allow_fail != true ]]; then
echo "Lookup failed for schema '$schema', metadata.name '$mdata_name', key path '$key_path'"
exit 1
fi
}
install_file(){
local path="$1"
local content="$2"
local permissions="$3"
local dirname
dirname=$(dirname "$path")
if [[ ! -d $dirname ]]; then
mkdir -p "$dirname"
fi
if [[ ! -f $path ]] || [ "$(cat "$path")" != "$content" ]; then
echo "$content" > "$path"
chmod "$permissions" "$path"
export FILE_UPDATED=true
else
export FILE_UPDATED=false
fi
}
###############################################################################
# Script inputs and validations
###############################################################################
if [[ $EUID -ne 0 ]]; then
echo "This script must be run as sudo/root"
exit 1
fi
if ([[ -z $1 ]] && [[ -z $RENDERED ]]) || [[ $1 =~ .*[hH][eE][lL][pP].* ]]; then
echo "Missing required script argument"
echo "Usage: ./$(basename "${BASH_SOURCE[0]}") /path/to/rendered/site/manifest.yaml"
exit 1
fi
if [[ -n $1 ]]; then
rendered_file="$1"
else
rendered_file="$RENDERED"
fi
if [[ ! -f $rendered_file ]]; then
echo "Specified rendered manifests file '$rendered_file' does not exist"
exit 1
fi
echo "Using rendered manifests file '$rendered_file'"
# env vars which can be set if you want to disable
: "${DISABLE_SECCOMP_PROFILE:=}"
: "${DISABLE_APPARMOR_PROFILES:=}"
###############################################################################
# bootaction: seccomp-profiles
###############################################################################
if [[ ! $DISABLE_SECCOMP_PROFILE ]]; then
# Fetch seccomp profile data
manifests_lookup "$rendered_file" "drydock/BootAction/v1" \
"seccomp-profiles" "['data']['assets'][0]['path']"
path="$RESULT"
echo "seccomp profiles asset[0] path located: '$path'"
manifests_lookup "$rendered_file" "drydock/BootAction/v1" \
"seccomp-profiles" "['data']['assets'][0]['permissions']"
permissions="$RESULT"
echo "seccomp profiles asset[0] permissions located: '$permissions'"
manifests_lookup "$rendered_file" "drydock/BootAction/v1" \
"seccomp-profiles" "['data']['assets'][0]['data']"
content="$RESULT"
echo "seccomp profiles assets[0] data located: '$content'"
# seccomp_default
install_file "$path" "$content" "$permissions"
fi
###############################################################################
# bootaction: apparmor-profiles
###############################################################################
if [[ ! $DISABLE_APPARMOR_PROFILES ]]; then
manifests_lookup "$rendered_file" "drydock/BootAction/v1" \
"apparmor-profiles" "['data']['assets']" "get_size" "true"
if [[ -n "$RESULT" ]] && [[ $RESULT -gt 0 ]]; then
# Fetch apparmor profile data
LAST=$(( RESULT - 1 ))
for i in $(seq 0 $LAST); do
manifests_lookup "$rendered_file" "drydock/BootAction/v1" \
"apparmor-profiles" "['data']['assets'][$i]['path']"
path="$RESULT"
echo "apparmor profiles asset[$i] path located: '$path'"
manifests_lookup "$rendered_file" "drydock/BootAction/v1" \
"apparmor-profiles" "['data']['assets'][$i]['permissions']"
permissions="$RESULT"
echo "apparmor profiles asset[$i] permissions located: '$permissions'"
manifests_lookup "$rendered_file" "drydock/BootAction/v1" \
"apparmor-profiles" "['data']['assets'][$i]['data']"
content="$RESULT"
echo "apparmor profiles assets[$i] data located: '$content'"
install_file "$path" "$content" "$permissions"
done
# reload all apparmor profiles
systemctl reload apparmor.service
fi
fi

@@ -0,0 +1,514 @@
#!/bin/bash
export TEMP_DIR=${TEMP_DIR:-$(mktemp -d -p /var/tmp)}
export NAMEKEY_FILE=${NAMEKEY_FILE:-"$HOME/.airship_key"}
export DEFINITION_DEPOT="${TEMP_DIR}/site_yaml/"
export RENDERED_DEPOT="${TEMP_DIR}/rendered_yaml/"
export CERT_DEPOT="${TEMP_DIR}/cert_yaml/"
export GATE_DEPOT="${TEMP_DIR}/gate_yaml/"
export SCRIPT_DEPOT="${TEMP_DIR}/scripts/"
export BUILD_WORK_DIR=${BUILD_WORK_DIR:-/work}
export BASE_IMAGE_SIZE=${BASE_IMAGE_SIZE:-68719476736}
export BASE_IMAGE_URL=${BASE_IMAGE_URL:-https://cloud-images.ubuntu.com/releases/xenial/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img}
export IMAGE_PROMENADE_CLI=${IMAGE_PROMENADE_CLI:-quay.io/airshipit/promenade:cfb8aa498c294c2adbc369ba5aaee19b49550d22}
export IMAGE_PEGLEG_CLI=${IMAGE_PEGLEG_CLI:-quay.io/airshipit/pegleg:50ce7a02e08a0a5277c2fbda96ece6eb5782407a}
export IMAGE_SHIPYARD_CLI=${IMAGE_SHIPYARD_CLI:-quay.io/airshipit/shipyard:4dd6b484d11e86ad51da733841b9ef137421d461}
export IMAGE_COREDNS=${IMAGE_COREDNS:-docker.io/coredns/coredns:1.2.2}
export IMAGE_BGP=${IMAGE_BGP:-docker.io/frrouting/frr:latest}
export IMAGE_DRYDOCK_CLI=${IMAGE_DRYDOCK_CLI:-quay.io/airshipit/drydock:d93d6d5a0a370ced536180612d1ade708e29cd47}
export IMAGE_DOCKER_REGISTRY=${IMAGE_DOCKER_REGISTRY:-"docker.io/registry:2"}
export IMAGE_HYPERKUBE=${IMAGE_HYPERKUBE:-gcr.io/google_containers/hyperkube-amd64:v1.12.9}
export PROMENADE_DEBUG=${PROMENADE_DEBUG:-0}
export PROMENADE_TMP_LOCAL=${PROMENADE_TMP_LOCAL:-cache}
export REGISTRY_DATA_DIR=${REGISTRY_DATA_DIR:-/mnt/registry}
export VIRSH_POOL=${VIRSH_POOL:-airship}
export VIRSH_POOL_PATH=${VIRSH_POOL_PATH:-/var/lib/libvirt/airship}
export VIRSH_CPU_OPTS=${VIRSH_CPU_OPTS:-host}
export UPSTREAM_DNS=${UPSTREAM_DNS:-"8.8.8.8 8.8.4.4"}
export NTP_POOLS=${NTP_POOLS:-"0.ubuntu.pool.ntp.org 1.ubuntu.pool.ntp.org"}
export NTP_SERVERS=${NTP_SERVERS:-""}
export PROMENADE_ENCRYPTION_KEY=${PROMENADE_ENCRYPTION_KEY:-MjI1N2ZiMjMzYjI0ZmVkZDU4}
# key-pair used for drydock/maas auth towards libvirt and access to
# the virtual nodes; auto-generated if no value provided
export GATE_SSH_KEY=${GATE_SSH_KEY:-""}
# skip generation of certificates, and other security manifests
# auto-generated by default
export USE_EXISTING_SECRETS=${USE_EXISTING_SECRETS:-""}
export SHIPYARD_PASSWORD=${SHIPYARD_OS_PASSWORD:-'password18'}
export AIRSHIP_KEYSTONE_URL=${AIRSHIP_KEYSTONE_URL:-'http://keystone.gate.local:80/v3'}
config_vm_memory() {
nodename=${1}
jq -cr ".vm.${nodename}.memory" < "${GATE_MANIFEST}"
}
config_vm_names() {
jq -cr '.vm | keys | join(" ")' < "${GATE_MANIFEST}"
}
config_vm_iface_list() {
nodename="$1"
jq -cr ".vm.${nodename}.networking | del(.addresses) | keys | .[]" < "${GATE_MANIFEST}"
}
config_vm_iface_mac() {
nodename="$1"
interface="$2"
jq -cr ".vm.${nodename}.networking.${interface}.mac" < "${GATE_MANIFEST}"
}
# What network this VM interface should be attached to
config_vm_iface_network() {
nodename="$1"
interface="$2"
jq -cr ".vm.${nodename}.networking.${interface}.attachment.network" < "${GATE_MANIFEST}"
}
# What VLANs on a network should be attached to this node
config_vm_iface_vlans() {
nodename="$1"
interface="$2"
jq -cr ".vm.${nodename}.networking.${interface}.attachment.vlans | select(.!=null)" < "${GATE_MANIFEST}"
}
# PCI slot for this VM interface
config_vm_iface_slot() {
nodename="$1"
interface="$2"
jq -cr ".vm.${nodename}.networking.${interface}.pci.slot" < "${GATE_MANIFEST}"
}
# PCI card port for this VM interface
config_vm_iface_port() {
nodename="$1"
interface="$2"
jq -cr ".vm.${nodename}.networking.${interface}.pci.port" < "${GATE_MANIFEST}"
}
# The IP address for the VM on a network. If a VLAN is given, the address
# on that VLAN of the network.
config_vm_net_ip() {
nodename="$1"
network="$2"
vlan="$3"
if is_netspec "$network"
then
vlan=$(netspec_vlan "$network")
network=$(netspec_netname "$network")
fi
if [[ -z "$vlan" ]]
then
query=".vm.${nodename}.networking.addresses.${network}.ip"
else
query=".vm.${nodename}.networking.addresses.${network}.vlans.${vlan}.ip"
fi
jq -cr "$query" < "${GATE_MANIFEST}"
}
config_vm_vcpus() {
nodename=${1}
jq -cr ".vm.${nodename}.vcpus" < "${GATE_MANIFEST}"
}
config_vm_bootstrap() {
nodename=${1}
val=$(jq -cr ".vm.${nodename}.bootstrap" < "${GATE_MANIFEST}")
if [[ "${val}" == "true" ]]
then
echo "true"
else
echo "false"
fi
}
config_disk_list() {
layout_name="${1}"
jq -cr ".disk_layouts.${layout_name} | keys | join(\" \")" < "${GATE_MANIFEST}"
}
config_disk_details() {
layout_name="${1}"
disk_device="${2}"
jq -cr ".disk_layouts.${layout_name}.${disk_device}" < "${GATE_MANIFEST}"
}
config_disk_size() {
layout_name="$1"
disk_device="$2"
jq -cr ".disk_layouts.${layout_name}.${disk_device}.size" < "${GATE_MANIFEST}"
}
config_disk_format() {
layout_name="$1"
disk_device="$2"
do_format=$(jq -cr ".disk_layouts.${layout_name}.${disk_device} | has(\"format\")")
if [[ "$do_format" == "true" && "$disk_device" != "$(config_layout_bootstrap "$layout_name")" ]]
then
jq -cr ".disk_layouts.${layout_name}.${disk_device}.format" < "${GATE_MANIFEST}"
else
echo ""
fi
}
config_format_type() {
JSON="$1"
echo "$JSON" | jq -cr '.type'
}
config_format_mount() {
JSON="$1"
echo "$JSON" | jq -cr '.mountpoint'
}
config_disk_ioprofile() {
layout_name="$1"
disk_device="$2"
jq -cr ".disk_layouts.${layout_name}.${disk_device}.io_profile" < "${GATE_MANIFEST}"
}
# Find which disk in a layout should
# get the bootstrap image
config_layout_bootstrap() {
layout_name="${1}"
jq -cr ".disk_layouts.${layout_name} | keys[] as \$k | {device: (\$k)} + (.[\$k]) | select(.bootstrap) | .device " < "${GATE_MANIFEST}"
}
config_vm_disk_layout() {
nodename=${1}
jq -cr ".vm.${nodename}.disk_layout" < "${GATE_MANIFEST}"
}
config_vm_userdata() {
nodename=${1}
val=$(jq -cr ".vm.${nodename}.userdata" < "${GATE_MANIFEST}")
if [[ "${val}" != "null" ]]
then
echo "${val}"
fi
}
config_bgp_as() {
as_number=${1}
jq -cr ".bgp.${as_number}" < "${GATE_MANIFEST}"
}
config_net_list() {
jq -cr '.networking | keys | .[]' < "${GATE_MANIFEST}"
}
config_net_vlan_list() {
network="$1"
jq -cr ".networking.${network}.layer2.vlans // {} | keys | .[]" < "${GATE_MANIFEST}"
}
config_net_cidr() {
network="$1"
vlan="$2"
if is_netspec "$network"
then
vlan=$(netspec_vlan "$network")
network=$(netspec_netname "$network")
fi
if [[ -z "$vlan" ]]
then
query=".networking.${network}.layer3.cidr"
else
query=".networking.${network}.vlans.${vlan}.layer3.cidr"
fi
jq -cr "$query" < "${GATE_MANIFEST}"
}
config_net_is_layer3() {
network="$1"
vlan="$2"
if is_netspec "$network"
then
vlan=$(netspec_vlan "$network")
network=$(netspec_netname "$network")
fi
if [[ -z "$vlan" ]]
then
query=".networking.${network} | has(\"layer3\")"
else
query=".networking.${network}.vlans.${vlan} | has(\"layer3\")"
fi
jq -cr "$query" < "${GATE_MANIFEST}"
}
# Find the layer 3 network tagged for a particular
# role - this can be either a native or vlan network
# If multiple networks have the role, the result is
# undefined.
config_netspec_for_role() {
role="$1"
for net in $(config_net_list)
do
if config_net_has_role "$net" "$role"
then
netspec="$net"
fi
for vlan in $(config_net_vlan_list "$net")
do
if config_vlan_has_role "$net" "$vlan" "$role"
then
netspec="${vlan}@${net}"
fi
done
done
echo -n "$netspec"
}
config_net_has_role() {
netname="$1"
role="$2"
value="$(jq -cr ".networking.${netname}.roles | contains([\"${role}\"])" < "$GATE_MANIFEST")"
if [ "$value" == "true" ]
then
return 0
else
return 1
fi
}
config_vlan_has_role() {
netname="$1"
vlan="$2"
role="$3"
value="$(jq -cr " .networking.${netname}.vlans.${vlan}.roles | contains([\"${role}\"])" < "$GATE_MANIFEST")"
if [ "$value" == "true" ]
then
return 0
else
return 1
fi
}
config_net_selfip() {
network="$1"
vlan="$2"
if [[ -z "$vlan" ]]
then
query=".networking.${network}.layer3.address"
else
query=".networking.${network}.vlans.${vlan}.layer3.address"
fi
jq -cr "$query" < "${GATE_MANIFEST}"
}
config_net_selfip_cidr() {
network="$1"
vlan="$2"
if is_netspec "$network"
then
vlan=$(netspec_vlan "$network")
network=$(netspec_netname "$network")
fi
selfip=$(config_net_selfip "$network" "$vlan")
netcidr=$(config_net_cidr "$network" "$vlan")
netbits=$(echo "$netcidr" | awk -F '/' '{print $2}')
printf "%s/%s" "$selfip" "$netbits"
}
config_net_gateway() {
network="$1"
vlan="$2"
if is_netspec "$network"
then
vlan=$(netspec_vlan "$network")
network=$(netspec_netname "$network")
fi
if [[ -z "$vlan" ]]
then
query=".networking.${network}.layer3.gateway"
else
query=".networking.${network}.vlans.${vlan}.layer3.gateway"
fi
jq -cr "$query" < "${GATE_MANIFEST}"
}
config_net_routemode() {
network="$1"
vlan="$2"
if is_netspec "$network"
then
vlan=$(netspec_vlan "$network")
network=$(netspec_netname "$network")
fi
if [[ -z "$vlan" ]]
then
query=".networking.${network}.layer3.routing.mode"
else
query=".networking.${network}.vlans.${vlan}.layer3.routing.mode"
fi
jq -cr "$query" < "${GATE_MANIFEST}"
}
config_net_mtu() {
network="$1"
vlan="$2"
if is_netspec "$network"
then
vlan=$(netspec_vlan "$network")
network=$(netspec_netname "$network")
fi
if [[ -z "$vlan" ]]
then
query=".networking.${network}.layer2.mtu // 1500"
else
query=".networking.${network}.vlans.${vlan}.layer2.mtu // 1500"
fi
jq -cr "$query" < "${GATE_MANIFEST}"
}
config_net_mac() {
network="$1"
vlan="$2"
if is_netspec "$network"
then
vlan=$(netspec_vlan "$network")
network=$(netspec_netname "$network")
fi
if [[ -z "$vlan" ]]
then
query=".networking.${network}.layer2.address"
else
query=".networking.${network}.vlans.${vlan}.layer2.address"
fi
jq -cr "$query" < "${GATE_MANIFEST}"
}
config_ingress_domain() {
jq -cr '.ingress.domain' < "${GATE_MANIFEST}"
}
config_ingress_ca() {
if [[ ! -z "$GATE_MANIFEST" ]]
then
jq -cr '.ingress.ca' < "${GATE_MANIFEST}"
fi
}
config_ingress_ips() {
jq -cr '.ingress | keys | map(select(test("([0-9]{1,3}[.]?){4}"))) | join(" ")' < "${GATE_MANIFEST}"
}
config_ingress_entries() {
IP=$1
jq -cr ".ingress[\"${IP}\"] | join(\" \")" < "${GATE_MANIFEST}"
}
config_pegleg_primary_repo() {
jq -cr ".configuration.primary_repo" < "${GATE_MANIFEST}"
}
config_pegleg_sitename() {
jq -cr ".configuration.site" < "${GATE_MANIFEST}"
}
config_pegleg_aux_repos() {
jq -cr '.configuration.aux_repos | join(" ")' < "${GATE_MANIFEST}"
}
join_array() {
local IFS=$1
shift
echo "$*"
}
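A usage sketch for `join_array` above: the first argument becomes the separator for the rest (the NTP pool names are illustrative):

```shell
#!/bin/bash
# Copy of join_array for a self-contained demonstration.
join_array() {
  local IFS=$1
  shift
  echo "$*"
}
pools=$(join_array ',' 0.ubuntu.pool.ntp.org 1.ubuntu.pool.ntp.org)
echo "$pools"   # prints: 0.ubuntu.pool.ntp.org,1.ubuntu.pool.ntp.org
```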
besteffort() {
set +e
"$@"
set -e
}
get_namekey() {
if [[ -r "$NAMEKEY_FILE" ]]
then
key=$(cat "$NAMEKEY_FILE")
else
key=$(openssl rand -hex 4)
echo -n "$key" > "$NAMEKEY_FILE"
fi
echo -n "$key"
}
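A minimal sketch of the generate-once/cache pattern used by `get_namekey`: the first call writes the key to a file, and later calls read it back. `NAMEKEY_FILE` and the key source here are stand-ins for the real script's values:

```shell
#!/bin/bash
# Stand-in cache file and key source (the real script uses openssl rand -hex 4).
NAMEKEY_FILE="$(mktemp -u)"
get_namekey() {
  local key
  if [ -r "$NAMEKEY_FILE" ]; then
    key=$(cat "$NAMEKEY_FILE")       # reuse the cached key
  else
    key=$(printf '%08x' "$$")        # generate once, then persist
    echo -n "$key" > "$NAMEKEY_FILE"
  fi
  echo -n "$key"
}
first=$(get_namekey)
second=$(get_namekey)
rm -f "$NAMEKEY_FILE"
```

Repeated calls return the same key, which is what keeps libvirt network and VM names stable across gate stages.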
is_netspec(){
value="$1"
if echo -n "$value" | grep -qE '^[0-9]+@.+'
then
return 0
else
return 1
fi
}
netspec_netname(){
netspec="$1"
echo -n "$netspec" | awk -F'@' '{print $2}'
}
netspec_vlan(){
netspec="$1"
echo -n "$netspec" | awk -F'@' '{print $1}'
}
# We'll just add the conversions as needed
cidr_to_netmask() {
cidr="$1"
netbits="$(echo "$cidr" | awk -F'/' '{print $2}')"
case "$netbits" in
32)
netmask="255.255.255.255"
;;
24)
netmask="255.255.255.0"
;;
esac
echo "$netmask"
}
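A standalone mirror of `cidr_to_netmask` above; note that only /24 and /32 are mapped today, so any other prefix length yields an empty string:

```shell
#!/bin/bash
# Same case table as cidr_to_netmask; strips everything up to the slash.
cidr_to_netmask() {
  netbits="${1#*/}"
  case "$netbits" in
    32) echo "255.255.255.255" ;;
    24) echo "255.255.255.0" ;;
  esac
}
mask=$(cidr_to_netmask "172.24.1.0/24")
echo "$mask"   # prints: 255.255.255.0
```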


@@ -0,0 +1,6 @@
#!/bin/bash
export GENESIS_NAME=n0
export BUILD_NAME=build
export SSH_CONFIG_DIR=${WORKSPACE}/seaworthy-virt/airship_gate/config-ssh
export TEMPLATE_DIR=${WORKSPACE}/seaworthy-virt/airship_gate/templates
export XML_DIR=${WORKSPACE}/seaworthy-virt/airship_gate/xml


@@ -0,0 +1,27 @@
#!/bin/bash
docker_ps() {
VIA="${1}"
ssh_cmd "${VIA}" docker ps -a
}
docker_info() {
VIA="${1}"
ssh_cmd "${VIA}" docker info 2>&1
}
docker_exited_containers() {
VIA="${1}"
ssh_cmd "${VIA}" docker ps -q --filter "status=exited"
}
docker_inspect() {
VIA="${1}"
CONTAINER_ID="${2}"
ssh_cmd "${VIA}" docker inspect "${CONTAINER_ID}"
}
docker_logs() {
VIA="${1}"
CONTAINER_ID="${2}"
ssh_cmd "${VIA}" docker logs "${CONTAINER_ID}"
}


@@ -0,0 +1,39 @@
#!/bin/bash
DNS_ZONE_FILE="${TEMP_DIR}/ingress.dns"
COREFILE="${TEMP_DIR}/ingress.corefile"
ingress_dns_config() {
ingress_domain="$(config_ingress_domain)"
#shellcheck disable=SC2016
INGRESS_DOMAIN="${ingress_domain}" envsubst '${INGRESS_DOMAIN}' < "${TEMPLATE_DIR}/ingress_header.sub" > "${DNS_ZONE_FILE}"
read -r -a ingress_ip_list <<< "$(config_ingress_ips)"
for ip in "${ingress_ip_list[@]}"
do
# TODO(sthussey) shift config_ingress_entries to printf w/ quotes
# shellcheck disable=SC2046
read -r -a ip_entries <<< $(config_ingress_entries "$ip")
for entry in "${ip_entries[@]}"
do
HOSTNAME="${entry}" HOSTIP="${ip}" envsubst < "${TEMPLATE_DIR}/ingress_entry.sub" >> "${DNS_ZONE_FILE}"
done
done
DNS_DOMAIN="${ingress_domain}" ZONE_FILE="$(basename "$DNS_ZONE_FILE")" DNS_SERVERS="$UPSTREAM_DNS" envsubst < "${TEMPLATE_DIR}/ingress_corefile.sub" > "${COREFILE}"
}
ingress_dns_start() {
# nodename where DNS should run
nodename="$1"
remote_work_dir="/var/tmp/coredns"
remote_zone_file="${remote_work_dir}/$(basename "$DNS_ZONE_FILE")"
remote_corefile="${remote_work_dir}/$(basename "$COREFILE")"
ssh_cmd "${nodename}" mkdir -p "${remote_work_dir}"
rsync_cmd "$DNS_ZONE_FILE" "${nodename}:${remote_zone_file}"
rsync_cmd "$COREFILE" "${nodename}:${remote_corefile}"
ssh_cmd "${nodename}" docker run -d -v /var/tmp/coredns:/data -w /data --network host --restart always -P "$IMAGE_COREDNS" -conf "$(basename "$remote_corefile")"
}


@@ -0,0 +1,47 @@
#!/bin/bash
kubectl_apply() {
VIA=${1}
FILE=${2}
ssh_cmd_raw "${VIA}" "KUBECONFIG=${KUBECONFIG}" "cat ${FILE} | kubectl apply -f -"
}
kubectl_cmd() {
VIA=${1}
shift
ssh_cmd_raw "${VIA}" "KUBECONFIG=${KUBECONFIG}" kubectl "${@}"
}
kubectl_wait_for_pod() {
VIA=${1}
NAMESPACE=${2}
POD_NAME=${3}
SEC=${4:-600}
log Waiting "${SEC}" seconds for termination of pod "${POD_NAME}"
POD_PHASE_JSONPATH='{.status.phase}'
end=$(($(date +%s) + SEC))
while true; do
POD_PHASE=$(kubectl_cmd "${VIA}" --request-timeout 10s --namespace "${NAMESPACE}" get -o jsonpath="${POD_PHASE_JSONPATH}" pod "${POD_NAME}")
if [[ ${POD_PHASE} = "Succeeded" ]]; then
log Pod "${POD_NAME}" succeeded.
break
elif [[ $POD_PHASE = "Failed" ]]; then
log Pod "${POD_NAME}" failed.
kubectl_cmd "${VIA}" --request-timeout 10s --namespace "${NAMESPACE}" get -o yaml pod "${POD_NAME}" 1>&2
exit 1
else
now=$(date +%s)
if [[ $now -gt $end ]]; then
log Pod did not terminate before timeout.
kubectl_cmd "${VIA}" --request-timeout 10s --namespace "${NAMESPACE}" get -o yaml pod "${POD_NAME}" 1>&2
exit 1
fi
sleep 1
fi
done
}


@@ -0,0 +1,84 @@
#!/bin/bash
if [[ -v GATE_COLOR && ${GATE_COLOR} = "1" ]]; then
C_CLEAR="\e[0m"
C_ERROR="\e[38;5;160m"
C_HEADER="\e[38;5;164m"
C_HILIGHT="\e[38;5;27m"
C_MUTE="\e[38;5;238m"
C_SUCCESS="\e[38;5;46m"
C_TEMP="\e[38;5;226m"
else
C_CLEAR=""
C_ERROR=""
C_HEADER=""
C_HILIGHT=""
C_MUTE=""
C_SUCCESS=""
C_TEMP=""
fi
log() {
d=$(date --utc)
echo -e "${C_MUTE}${d}${C_CLEAR} ${*}" 1>&2
echo -e "${d} ${*}" >> "${LOG_FILE}"
}
log_warn() {
d=$(date --utc)
echo -e "${C_MUTE}${d}${C_CLEAR} ${C_HILIGHT}WARN${C_CLEAR} ${*}" 1>&2
echo -e "${d} ${*}" >> "${LOG_FILE}"
}
log_error() {
d=$(date --utc)
echo -e "${C_MUTE}${d}${C_CLEAR} ${C_ERROR}ERROR${C_CLEAR} ${*}" 1>&2
echo -e "${d} ${*}" >> "${LOG_FILE}"
}
log_stage_diagnostic_header() {
echo -e " ${C_ERROR}= Diagnostic Report =${C_CLEAR}"
}
log_color_reset() {
echo -e "${C_CLEAR}"
}
log_huge_success() {
echo -e "${C_SUCCESS}=== HUGE SUCCESS ===${C_CLEAR}"
}
log_note() {
echo -e "${C_HILIGHT}NOTE:${C_CLEAR} ${*}"
}
log_stage_error() {
NAME=${1}
echo -e " ${C_ERROR}== Error in stage ${C_HILIGHT}${NAME}${C_ERROR} ( ${C_TEMP}${LOG_FILE}${C_ERROR} ) ==${C_CLEAR}"
}
log_stage_footer() {
NAME=${1}
echo -e "${C_HEADER}=== Finished stage ${C_HILIGHT}${NAME}${C_HEADER} ===${C_CLEAR}"
}
log_stage_header() {
NAME=${1}
echo -e "${C_HEADER}=== Executing stage ${C_HILIGHT}${NAME}${C_HEADER} ===${C_CLEAR}"
}
log_stage_success() {
echo -e " ${C_SUCCESS}== Stage Success ==${C_CLEAR}"
}
log_temp_dir() {
echo -e "Working in ${C_TEMP}${TEMP_DIR}${C_CLEAR}"
}
if [[ -v GATE_DEBUG && ${GATE_DEBUG} = "1" ]]; then
export LOG_FILE=/dev/stderr
elif [[ -v TEMP_DIR ]]; then
export LOG_FILE=${TEMP_DIR}/gate.log
else
export LOG_FILE=/dev/null
fi


@@ -0,0 +1,37 @@
#!/bin/bash
nginx_down() {
REGISTRY_ID=$(docker ps -qa -f name=promenade-nginx)
if [[ -n "${REGISTRY_ID}" ]]; then
log Removing nginx server
docker rm -fv "${REGISTRY_ID}" &>> "${LOG_FILE}"
fi
}
nginx_up() {
log Starting nginx server to serve configuration files
mkdir -p "${NGINX_DIR}"
docker run -d \
-p 7777:80 \
--restart=always \
--name promenade-nginx \
-v "${TEMP_DIR}/nginx:/usr/share/nginx/html:ro" \
nginx:stable &>> "${LOG_FILE}"
}
nginx_cache_and_replace_tar_urls() {
log "Finding tar_url options to cache.."
TAR_NUM=0
mkdir -p "${NGINX_DIR}"
for file in "$@"; do
grep -Po "^ +tar_url: \K.+$" "${file}" | while read -r tar_url ; do
# NOTE(mark-burnet): Does not yet ignore repeated files.
DEST_PATH="${NGINX_DIR}/cached-tar-${TAR_NUM}.tgz"
log "Caching ${tar_url} in file: ${DEST_PATH}"
REPLACEMENT_URL="${NGINX_URL}/cached-tar-${TAR_NUM}.tgz"
curl -Lo "${DEST_PATH}" "${tar_url}"
sed -i "s;${tar_url};${REPLACEMENT_URL};" "${file}"
TAR_NUM=$((TAR_NUM + 1))
done
done
}


@@ -0,0 +1,17 @@
#!/bin/bash
promenade_health_check() {
VIA=${1}
log "Checking Promenade API health"
MAX_HEALTH_ATTEMPTS=6
for attempt in $(seq ${MAX_HEALTH_ATTEMPTS}); do
if ssh_cmd "${VIA}" curl -v --fail "${PROMENADE_BASE_URL}/api/v1.0/health"; then
log "Promenade API healthy"
break
elif [[ $attempt == "${MAX_HEALTH_ATTEMPTS}" ]]; then
log "Promenade health check failed, max retries (${MAX_HEALTH_ATTEMPTS}) exceeded."
exit 1
fi
sleep 10
done
}


@@ -0,0 +1,76 @@
#!/bin/bash
registry_down() {
REGISTRY_ID=$(docker ps -qa -f name=registry)
if [[ ! -z ${REGISTRY_ID} ]]; then
log Removing docker registry
docker rm -fv "${REGISTRY_ID}" &>> "${LOG_FILE}"
fi
}
registry_list_images() {
FILES=($(find "${DEFINITION_DEPOT}" -type f -name '*.yaml'))
HOSTNAME_REGEX='[a-zA-Z0-9][a-zA-Z0-9_-]{0,62}'
DOMAIN_NAME_REGEX="${HOSTNAME_REGEX}(\.${HOSTNAME_REGEX})*"
PORT_REGEX='[0-9]+'
NETLOC_REGEX="${DOMAIN_NAME_REGEX}(:${PORT_REGEX})?"
REPO_COMPONENT_REGEX='[a-zA-Z0-9][a-zA-Z0-9_-]{0,62}'
REPO_REGEX="${REPO_COMPONENT_REGEX}(/${REPO_COMPONENT_REGEX})*"
TAG_REGEX='[a-zA-Z0-9][a-zA-Z0-9.-]{0,127}'
cat "${FILES[@]}" \
| grep -v '^ *#' \
| tr ' \t' '\n' | tr -s '\n' \
| grep -E "^(${NETLOC_REGEX}/)?${REPO_REGEX}:${TAG_REGEX}$" \
| sort -u \
| grep -v 'registry:5000'
}
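The regex pipeline in `registry_list_images` can be sketched over an inline sample instead of real manifest files; the image references below are hypothetical:

```shell
#!/bin/bash
# Same regex fragments as registry_list_images.
HOSTNAME_REGEX='[a-zA-Z0-9][a-zA-Z0-9_-]{0,62}'
DOMAIN_NAME_REGEX="${HOSTNAME_REGEX}(\.${HOSTNAME_REGEX})*"
PORT_REGEX='[0-9]+'
NETLOC_REGEX="${DOMAIN_NAME_REGEX}(:${PORT_REGEX})?"
REPO_COMPONENT_REGEX='[a-zA-Z0-9][a-zA-Z0-9_-]{0,62}'
REPO_REGEX="${REPO_COMPONENT_REGEX}(/${REPO_COMPONENT_REGEX})*"
TAG_REGEX='[a-zA-Z0-9][a-zA-Z0-9.-]{0,127}'
# Comments are dropped, the text is split into tokens, and only tokens shaped
# like [netloc/]repo:tag survive.
images=$(printf '%s\n' \
  'image: quay.io/airshipit/armada:latest' \
  '# comment mentioning quay.io/ignored/image:tag' \
  'plain text' \
  | grep -v '^ *#' \
  | tr ' \t' '\n' | tr -s '\n' \
  | grep -E "^(${NETLOC_REGEX}/)?${REPO_REGEX}:${TAG_REGEX}$" \
  | sort -u \
  | grep -v 'registry:5000')
echo "$images"   # prints: quay.io/airshipit/armada:latest
```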
registry_populate() {
log Validating local registry is populated
for image in $(registry_list_images); do
if [[ ${image} =~ promenade ]]; then
continue
fi
if [[ ${image} =~ .*:(latest|master) ]] || ! docker pull "localhost:5000/${image}" &> /dev/null; then
log Loading image "${image}" into local registry
{
docker pull "${image}"
docker tag "${image}" "localhost:5000/${image}"
docker push "localhost:5000/${image}"
} &>> "${LOG_FILE}" || echo "Failed to cache ${image}"
fi
done
}
registry_replace_references() {
FILES=("$@")
for image in $(registry_list_images); do
sed -i "s;${image}\$;registry:5000/${image};g" "${FILES[@]}"
done
}
registry_up() {
log Validating local registry is up
REGISTRY_ID=$(docker ps -qa -f name=registry)
RUNNING_REGISTRY_ID=$(docker ps -q -f name=registry)
if [[ -z ${RUNNING_REGISTRY_ID} && ! -z ${REGISTRY_ID} ]]; then
log Removing stopped docker registry
docker rm -fv "${REGISTRY_ID}" &>> "${LOG_FILE}"
fi
if [[ -z ${RUNNING_REGISTRY_ID} ]]; then
log Starting docker registry
docker run -d \
-p 5000:5000 \
-e REGISTRY_HTTP_ADDR=0.0.0.0:5000 \
--restart=always \
--name registry \
-v "${REGISTRY_DATA_DIR}:/var/lib/registry" \
"${IMAGE_DOCKER_REGISTRY}" &>> "${LOG_FILE}"
fi
}


@@ -0,0 +1,78 @@
#!/bin/bash
rsync_cmd() {
rsync -e "ssh -F ${SSH_CONFIG_DIR}/config" "${@}"
}
ssh_cmd_raw() {
# shellcheck disable=SC2068
ssh -F "${SSH_CONFIG_DIR}/config" $@
}
ssh_cmd() {
HOST=${1}
shift
args=$(shell-quote -- "${@}")
if [[ -v GATE_DEBUG && ${GATE_DEBUG} = "1" ]]; then
# shellcheck disable=SC2029
ssh -F "${SSH_CONFIG_DIR}/config" -v "${HOST}" "${args}"
else
# shellcheck disable=SC2029
ssh -F "${SSH_CONFIG_DIR}/config" "${HOST}" "${args}"
fi
}
ssh_config_declare() {
log Creating SSH config
env -i \
"SSH_CONFIG_DIR=${SSH_CONFIG_DIR}" \
envsubst < "${TEMPLATE_DIR}/ssh-config-global.sub" > "${SSH_CONFIG_DIR}/config"
for n in $(config_vm_names)
do
ssh_net="$(config_netspec_for_role "ssh")"
env -i \
"SSH_CONFIG_DIR=${SSH_CONFIG_DIR}" \
"SSH_NODE_HOSTNAME=${n}" \
"SSH_NODE_IP=$(config_vm_net_ip "${n}" "$ssh_net")" \
envsubst < "${TEMPLATE_DIR}/ssh-config-node.sub" >> "${SSH_CONFIG_DIR}/config"
if [[ "$(config_vm_bootstrap "${n}")" == "true" ]]
then
echo " User root" >> "${SSH_CONFIG_DIR}/config"
else
echo " User ubuntu" >> "${SSH_CONFIG_DIR}/config"
fi
done
}
ssh_keypair_declare() {
log Validating SSH keypair exists
if [ ! -s "${SSH_CONFIG_DIR}/id_rsa" ]; then
if [[ "${GATE_SSH_KEY}" ]]; then
log "Using existing SSH keys for VMs"
cp "${GATE_SSH_KEY}" "${SSH_CONFIG_DIR}/id_rsa"
chmod 600 "${SSH_CONFIG_DIR}/id_rsa"
cp "${GATE_SSH_KEY}.pub" "${SSH_CONFIG_DIR}/id_rsa.pub"
else
log Generating SSH keypair
ssh-keygen -N '' -f "${SSH_CONFIG_DIR}/id_rsa" &>> "${LOG_FILE}"
fi
fi
}
ssh_load_pubkey() {
cat "${SSH_CONFIG_DIR}/id_rsa.pub"
}
ssh_setup_declare() {
mkdir -p "${SSH_CONFIG_DIR}"
ssh_keypair_declare
ssh_config_declare
}
ssh_wait() {
NAME=${1}
while ! ssh_cmd "${NAME}" /bin/true; do
sleep 0.5
done
}


@@ -0,0 +1,664 @@
#!/bin/bash
img_base_declare() {
log Validating base image exists
if ! virsh vol-key --pool "${VIRSH_POOL}" --vol airship-gate-base.img > /dev/null; then
log Installing base image from "${BASE_IMAGE_URL}"
cd "${TEMP_DIR}"
curl -q -L -o base.img "${BASE_IMAGE_URL}"
{
virsh vol-create-as \
--pool "${VIRSH_POOL}" \
--name airship-gate-base.img \
--format qcow2 \
--capacity "${BASE_IMAGE_SIZE}" \
--prealloc-metadata
virsh vol-upload \
--vol airship-gate-base.img \
--file base.img \
--pool "${VIRSH_POOL}"
} &>> "${LOG_FILE}"
fi
}
netconfig_gen_mtu() {
MTU="$1"
set +e
IFS= read -r -d '' MTU_TMP <<'EOF'
mtu: ${MTU}
EOF
set -e
MTU="$MTU" envsubst <<< "$MTU_TMP" >> network-config
}
netconfig_gen_physical() {
IFACE_NAME="$1"
MTU="$2"
set +e
IFS= read -r -d '' PHYS_TMP <<'EOF'
- type: physical
name: ${IFACE_NAME}
EOF
set -e
IFACE_NAME="$IFACE_NAME" envsubst <<< "$PHYS_TMP" >> network-config
if [ ! -z "$MTU" ]
then
netconfig_gen_mtu "$MTU"
fi
}
netconfig_gen_subnet() {
IP_ADDR="$1"
IP_MASK="$2"
GW_ADDR="$3"
set +e
IFS= read -r -d '' SUBNET_TMP <<'EOF'
subnets:
- type: static
address: ${IP_ADDR}
netmask: ${IP_MASK}
EOF
IFS= read -r -d '' SUBNET_GW_TMP <<'EOF'
gateway: ${GW_ADDR}
EOF
set -e
IP_ADDR="$IP_ADDR" IP_MASK="$IP_MASK" envsubst <<< "$SUBNET_TMP" >> network-config
if [ ! -z "$GW_ADDR" ]
then
GW_ADDR="$GW_ADDR" envsubst <<< "$SUBNET_GW_TMP" >> network-config
fi
}
netconfig_gen_vlan() {
IFACE_NAME="$1"
VLAN_TAG="$2"
MTU="$3"
set +e
IFS= read -r -d '' VLAN_TMP <<'EOF'
- type: vlan
name: ${IFACE_NAME}.${VLAN_TAG}
vlan_link: ${IFACE_NAME}
vlan_id: ${VLAN_TAG}
EOF
set -e
IFACE_NAME="$IFACE_NAME" VLAN_TAG="$VLAN_TAG" envsubst <<< "$VLAN_TMP" >> network-config
if [ ! -z "$MTU" ]
then
netconfig_gen_mtu "$MTU"
fi
}
netconfig_gen_nameservers() {
NAMESERVERS="$1"
set +e
IFS= read -r -d '' NS_TMP <<'EOF'
- type: nameserver
address: [${DNS_SERVERS}]
EOF
set -e
DNS_SERVERS="$NAMESERVERS" envsubst <<< "$NS_TMP" >> network-config
}
netconfig_gen() {
NAME="$1"
IFS= cat << 'EOF' > network-config
version: 1
config:
EOF
# Generate physical interfaces
for iface in $(config_vm_iface_list "$NAME")
do
iface_network=$(config_vm_iface_network "$NAME" "$iface")
netconfig_gen_physical "$iface" "$(config_net_mtu "$iface_network")"
if [ "$(config_net_is_layer3 "$iface_network")" == "true" ]
then
iface_ip="$(config_vm_net_ip "$NAME" "$iface_network")"
netmask="$(cidr_to_netmask "$(config_net_cidr "$iface_network")")"
net_gw="$(config_net_gateway "$iface_network")"
netconfig_gen_subnet "$iface_ip" "$netmask" "$net_gw"
else
if [ ! -z "$(config_vm_iface_vlans "$NAME" "$iface")" ]
then
for vlan in $(config_vm_iface_vlans "$NAME" "$iface")
do
netconfig_gen_vlan "$iface" "$vlan" "$(config_net_mtu "$iface_network" "$vlan")"
if [ "$(config_net_is_layer3 "$iface_network" "$vlan")" == "true" ]
then
iface_ip="$(config_vm_net_ip "$NAME" "$iface_network" "$vlan")"
netmask="$(cidr_to_netmask "$(config_net_cidr "$iface_network" "$vlan")")"
net_gw="$(config_net_gateway "$iface_network" "$vlan")"
netconfig_gen_subnet "$iface_ip" "$netmask" "$net_gw"
fi
done
fi
fi
done
DNS_SERVERS=$(echo "$UPSTREAM_DNS" | tr ' ' ',')
netconfig_gen_nameservers "$DNS_SERVERS"
sed -i -e '/^$/d' network-config
}
iso_gen() {
NAME=${1}
ADDL_USERDATA="${2}"
disk_layout="$(config_vm_disk_layout "$NAME")"
vm_disks="$(config_disk_list "$disk_layout")"
if virsh vol-key --pool "${VIRSH_POOL}" --vol "cloud-init-${NAME}.iso" &> /dev/null; then
log Removing existing cloud-init ISO for "${NAME}"
virsh vol-delete \
--pool "${VIRSH_POOL}" \
--vol "cloud-init-${NAME}.iso" &>> "${LOG_FILE}"
fi
log "Creating cloud-init ISO for ${NAME}"
ISO_DIR=${TEMP_DIR}/iso/${NAME}
mkdir -p "${ISO_DIR}"
cd "${ISO_DIR}"
netconfig_gen "$NAME"
SSH_PUBLIC_KEY=$(ssh_load_pubkey)
export NAME
export SSH_PUBLIC_KEY
NTP_POOLS="$(join_array ',' "$NTP_POOLS")"
export NTP_POOLS
NTP_SERVERS="$(join_array ',' "$NTP_SERVERS")"
export NTP_SERVERS
envsubst < "${TEMPLATE_DIR}/user-data.sub" > user-data
fs_header="false"
for disk in $vm_disks
do
disk_format="$(config_disk_format "$disk_layout" "$disk")"
if [[ ! -z "$disk_format" ]]
then
if [[ "$fs_header" = "false" ]]
then
echo "fs_header:" >> user-data
fs_header="true"
fi
FS_TYPE="$(config_format_type "$disk_format")" DISK_DEVICE="$disk" envsubst < "${TEMPLATE_DIR}/disk-data.sub" >> user-data
fi
done
echo >> user-data
mount_header="false"
for disk in $vm_disks
do
disk_format="$(config_disk_format "$disk_layout" "$disk")"
if [[ ! -z "$disk_format" ]]
then
if [[ "$mount_header" = "false" ]]
then
echo "mounts:" >> user-data
mount_header="true"
fi
MOUNTPOINT="$(config_format_mount "$disk_format")" DISK_DEVICE="$disk" envsubst < "${TEMPLATE_DIR}/mount-data.sub" >> user-data
fi
done
echo >> user-data
if [[ ! -z "${ADDL_USERDATA}" ]]
then
echo -e "${ADDL_USERDATA}" >> user-data
fi
envsubst < "${TEMPLATE_DIR}/meta-data.sub" > meta-data
{
genisoimage \
-V cidata \
-input-charset utf-8 \
-joliet \
-rock \
-o cidata.iso \
meta-data \
network-config \
user-data
virsh vol-create-as \
--pool "${VIRSH_POOL}" \
--name "cloud-init-${NAME}.iso" \
--capacity "$(stat -c %s "${ISO_DIR}/cidata.iso")" \
--format raw
virsh vol-upload \
--pool "${VIRSH_POOL}" \
--vol "cloud-init-${NAME}.iso" \
--file "${ISO_DIR}/cidata.iso"
} &>> "${LOG_FILE}"
}
iso_path() {
NAME=${1}
echo "${TEMP_DIR}/iso/${NAME}/cidata.iso"
}
net_list() {
namekey="$1"
if [[ -z "$namekey" ]]
then
grepargs=("-v" '^$')
else
grepargs=("^${namekey}")
fi
virsh net-list --name | grep "${grepargs[@]}"
}
nets_clean() {
namekey=$(get_namekey)
for netname in $(net_list "$namekey")
do
log Destroying Airship gate "$netname"
for iface in $(ip -oneline l show type vlan | grep "$netname" | awk -F ' ' '{print $2}' | tr -d ':' | awk -F '@' '{print $1}')
do
# shellcheck disable=SC2024
sudo ip l del dev "$iface" &>> "$LOG_FILE"
done
virsh net-destroy "$netname" &>> "${LOG_FILE}"
virsh net-undefine "$netname" &>> "${LOG_FILE}"
done
}
net_create() {
netname="$1"
namekey=$(get_namekey)
virsh_netname="${namekey}"_"${netname}"
if [[ $(config_net_is_layer3 "$netname") == "true" ]]; then
net_template="${TEMPLATE_DIR}/l3network-definition.sub"
NETNAME="${virsh_netname}" NETIP="$(config_net_selfip "$netname")" NETMASK="$(cidr_to_netmask "$(config_net_cidr "$netname")")" NETMAC="$(config_net_mac "$netname")" envsubst < "$net_template" > "${TEMP_DIR}/net-${netname}.xml"
else
net_template="${TEMPLATE_DIR}/l2network-definition.sub"
NETNAME="${virsh_netname}" envsubst < "$net_template" > "${TEMP_DIR}/net-${netname}.xml"
fi
log Creating network "${namekey}"_"${netname}"
virsh net-define "${TEMP_DIR}/net-${netname}.xml" &>> "${LOG_FILE}"
virsh net-start "${virsh_netname}"
virsh net-autostart "${virsh_netname}"
for vlan in $(config_net_vlan_list "$netname")
do
if [[ $(config_net_is_layer3 "$netname") == "true" ]]
then
iface_name="${virsh_netname}-${vlan}"
iface_mtu=$(config_net_mtu "$netname" "$vlan")
if [[ -z "$iface_mtu" ]]
then
iface_mtu="1500"
fi
iface_mac="$(config_net_mac "$netname" "$vlan")"
sudo ip link add link "${virsh_netname}" name "${iface_name}" type vlan id "${vlan}" mtu "$iface_mtu" address "$iface_mac"
sudo ip addr add "$(config_net_selfip_cidr "$netname" "$vlan")" dev "${iface_name}"
sudo ip link set dev "${iface_name}" up
fi
done
}
nets_declare() {
nets_clean
for net in $(config_net_list); do
net_create "$net"
done
}
pool_declare() {
log Validating virsh pool setup
if ! virsh pool-uuid "${VIRSH_POOL}" &> /dev/null; then
log Creating pool "${VIRSH_POOL}"
virsh pool-define-as --name "${VIRSH_POOL}" --type dir --target "${VIRSH_POOL_PATH}" &>> "${LOG_FILE}"
virsh pool-start "${VIRSH_POOL}"
virsh pool-autostart "${VIRSH_POOL}"
fi
}
vm_clean() {
NAME=${1}
if virsh list --name | grep -qx "${NAME}"; then
virsh destroy "${NAME}" &>> "${LOG_FILE}"
fi
if virsh list --name --all | grep -qx "${NAME}"; then
log Removing VM "${NAME}"
virsh undefine --remove-all-storage --domain "${NAME}" &>> "${LOG_FILE}"
fi
}
vm_clean_all() {
log Removing all VMs
VM_NAMES=($(config_vm_names))
for NAME in ${VM_NAMES[*]}
do
vm_clean "${NAME}"
done
wait
}
# TODO(sh8121att) - Sort out how to ensure the proper NIC
# is used for PXE boot
vm_render_interface() {
vm="$1"
iface="$2"
namekey="$(get_namekey)"
mac="$(config_vm_iface_mac "$vm" "$iface")"
network="$(config_vm_iface_network "$vm" "$iface")"
network="${namekey}_${network}"
slot="$(config_vm_iface_slot "$vm" "$iface")"
port="$(config_vm_iface_port "$vm" "$iface")"
config_string="model=virtio,network=${network}"
if [[ ! -z "$mac" ]]
then
config_string="${config_string},mac=${mac}"
fi
if [[ ! -z "$slot" ]]
then
config_string="${config_string},address.type=pci,address.slot=${slot}"
if [[ ! -z "$port" ]]
then
config_string="${config_string},address.function=${port}"
fi
fi
echo -n "$config_string"
}
vm_create_interfaces() {
vm="$1"
network_opts=""
for interface in $(config_vm_iface_list "$vm")
do
nic_opts="$(vm_render_interface "$vm" "$interface")"
network_opts="$network_opts --network ${nic_opts}"
done
echo "$network_opts"
}
vm_create_vols(){
NAME="$1"
disk_layout="$(config_vm_disk_layout "$NAME")"
vm_disks="$(config_disk_list "$disk_layout")"
bs_disk="$(config_layout_bootstrap "$disk_layout")"
bs_vm="$(config_vm_bootstrap "${NAME}")"
vols=()
for disk in $vm_disks
do
io_prof=$(config_disk_ioprofile "${disk_layout}" "${disk}")
size=$(config_disk_size "${disk_layout}" "${disk}")
if [[ "$bs_vm" = "true" && "$bs_disk" = "$disk" ]]
then
vol_create_disk "$NAME" "$disk" "$size" "true"
else
vol_create_disk "$NAME" "$disk" "$size"
fi
if [[ "$io_prof" == "fast" ]]
then
DISK_OPTS="bus=virtio,cache=none,format=qcow2,io=native"
elif [[ "$io_prof" == "safe" ]]
then
DISK_OPTS="bus=virtio,cache=directsync,discard=unmap,format=qcow2,io=native"
else
DISK_OPTS="bus=virtio,format=qcow2"
fi
vol_cmd="--disk vol=${VIRSH_POOL}/airship-gate-${NAME}-${disk}.img,target=${disk},size=${size},${DISK_OPTS}"
vols+=($vol_cmd)
done
echo "${vols[@]}"
}
vol_create_disk() {
NAME=${1}
DISK=${2}
SIZE=${3}
BS=${4}
if virsh vol-list --pool "${VIRSH_POOL}" | grep "airship-gate-${NAME}-${DISK}.img" &> /dev/null; then
log Deleting previous volume "airship-gate-${NAME}-${DISK}.img"
virsh vol-delete --pool "${VIRSH_POOL}" "airship-gate-${NAME}-${DISK}.img" &>> "${LOG_FILE}"
fi
log Creating volume "${DISK}" for "${NAME}"
if [[ "$BS" == "true" ]]; then
virsh vol-create-as \
--pool "${VIRSH_POOL}" \
--name "airship-gate-${NAME}-${DISK}.img" \
--capacity "${SIZE}"G \
--format qcow2 \
--backing-vol 'airship-gate-base.img' \
--backing-vol-format qcow2 &>> "${LOG_FILE}"
else
virsh vol-create-as \
--pool "${VIRSH_POOL}" \
--name "airship-gate-${NAME}-${DISK}.img" \
--capacity "${SIZE}"G \
--format qcow2 &>> "${LOG_FILE}"
fi
}
vm_create() {
NAME=${1}
DISK_OPTS="$(vm_create_vols "${NAME}")"
NETWORK_OPTS="$(vm_create_interfaces "${NAME}")"
if [[ "$(config_vm_bootstrap "${NAME}")" == "true" ]]; then
iso_gen "${NAME}" "$(config_vm_userdata "${NAME}")"
wait
log Creating VM "${NAME}" and bootstrapping the boot drive
# shellcheck disable=SC2086
virt-install \
--name "${NAME}" \
--os-variant ubuntu16.04 \
--virt-type kvm \
--cpu "${VIRSH_CPU_OPTS}" \
--serial "file,path=${TEMP_DIR}/console/${NAME}.log" \
--graphics none \
--noautoconsole \
$NETWORK_OPTS \
--vcpus "$(config_vm_vcpus "${NAME}")" \
--memory "$(config_vm_memory "${NAME}")" \
--import \
$DISK_OPTS \
--disk "vol=${VIRSH_POOL}/cloud-init-${NAME}.iso,device=cdrom" &>> "${LOG_FILE}"
ssh_wait "${NAME}"
ssh_cmd "${NAME}" cloud-init status --wait
ssh_cmd "${NAME}" sync
else
log Creating VM "${NAME}"
# shellcheck disable=SC2086
virt-install \
--name "${NAME}" \
--os-variant ubuntu16.04 \
--virt-type kvm \
--cpu "${VIRSH_CPU_OPTS}" \
--graphics none \
--serial file,path="${TEMP_DIR}/console/${NAME}.log" \
--noautoconsole \
$NETWORK_OPTS \
--vcpus "$(config_vm_vcpus "${NAME}")" \
--memory "$(config_vm_memory "${NAME}")" \
--import \
$DISK_OPTS &>> "${LOG_FILE}"
fi
virsh autostart "${NAME}"
}
vm_create_validate() {
NAME=${1}
vm_create "${NAME}"
if [[ "$(config_vm_bootstrap "${NAME}")" == "true" ]]
then
vm_validate "${NAME}"
fi
}
vm_create_all() {
log Starting all VMs
VM_NAMES=($(config_vm_names))
for name in ${VM_NAMES[*]}
do
vm_create_validate "${name}" &
done
wait
}
vm_start() {
NAME=${1}
log Starting VM "${NAME}"
virsh start "${NAME}" &>> "${LOG_FILE}"
ssh_wait "${NAME}"
}
vm_stop() {
NAME=${1}
log Stopping VM "${NAME}"
virsh destroy "${NAME}" &>> "${LOG_FILE}"
}
vm_stop_non_genesis() {
log Stopping all non-genesis VMs in parallel
for NAME in $(config_non_genesis_vms); do
vm_stop "${NAME}" &
done
wait
}
vm_restart_all() {
for NAME in $(config_vm_names); do
vm_stop "${NAME}" &
done
wait
for NAME in $(config_vm_names); do
vm_start "${NAME}" &
done
wait
}
vm_validate() {
NAME=${1}
if ! virsh list --name | grep -qx "${NAME}"; then
log VM "${NAME}" did not start correctly.
exit 1
fi
}
# Find the correct group name for libvirt access
get_libvirt_group() {
grep -oE '^libvirtd?:' /etc/group | tr -d ':'
}
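The group match in `get_libvirt_group` can be illustrated against inline `/etc/group`-style lines (the real function reads `/etc/group`; the GIDs here are made up):

```shell
#!/bin/bash
# Sample lines covering both spellings of the libvirt group name.
sample='libvirt:x:117:
libvirtd:x:118:
users:x:100:'
# Same expression as get_libvirt_group: match either "libvirt" or "libvirtd"
# at the start of a line, then strip the trailing colon.
libvirt_groups=$(printf '%s\n' "$sample" | grep -oE '^libvirtd?:' | tr -d ':')
echo "$libvirt_groups"   # prints libvirt and libvirtd, one per line
```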
# Make a user 'virtmgr' if it does not exist and add it to the libvirt group
make_virtmgr_account() {
for libvirt_group in $(get_libvirt_group)
do
if ! grep -qE '^virtmgr:' /etc/passwd
then
sudo useradd -m -s /bin/sh -g "${libvirt_group}" virtmgr
else
sudo usermod -g "${libvirt_group}" virtmgr
fi
done
}
# Generate a new keypair
gen_libvirt_key() {
log Removing any existing virtmgr SSH keys
sudo rm -rf ~virtmgr/.ssh
sudo mkdir -p ~virtmgr/.ssh
if [[ "${GATE_SSH_KEY}" ]]; then
log "Using existing SSH keys for virtmgr"
sudo cp "${GATE_SSH_KEY}" ~virtmgr/.ssh/airship_gate
sudo cp "${GATE_SSH_KEY}.pub" ~virtmgr/.ssh/airship_gate.pub
else
log "Generating new SSH keypair for virtmgr"
#shellcheck disable=SC2024
sudo ssh-keygen -N '' -b 2048 -t rsa -f ~virtmgr/.ssh/airship_gate &>> "${LOG_FILE}"
fi
}
# Install private key into site definition
install_libvirt_key() {
PUB_KEY=$(sudo cat ~virtmgr/.ssh/airship_gate.pub)
export PUB_KEY
mkdir -p "${TEMP_DIR}/tmp"
envsubst < "${TEMPLATE_DIR}/authorized_keys.sub" > "${TEMP_DIR}/tmp/virtmgr.authorized_keys"
sudo cp "${TEMP_DIR}/tmp/virtmgr.authorized_keys" ~virtmgr/.ssh/authorized_keys
sudo chown -R virtmgr ~virtmgr/.ssh
sudo chmod 700 ~virtmgr/.ssh
sudo chmod 600 ~virtmgr/.ssh/authorized_keys
if [[ -n "${USE_EXISTING_SECRETS}" ]]; then
log "Using existing manifests for secrets"
return 0
fi
mkdir -p "${GATE_DEPOT}"
cat << EOF > "${GATE_DEPOT}/airship_drydock_kvm_ssh_key.yaml"
---
schema: deckhand/CertificateKey/v1
metadata:
schema: metadata/Document/v1
name: airship_drydock_kvm_ssh_key
layeringDefinition:
layer: site
abstract: false
storagePolicy: cleartext
data: |-
EOF
sudo cat ~virtmgr/.ssh/airship_gate | sed -e 's/^/ /' >> "${GATE_DEPOT}/airship_drydock_kvm_ssh_key.yaml"
}

@@ -0,0 +1,130 @@
{
"$schema": "http://json-schema.org/schema#",
"definitions": {
"publish": {
"type": "object",
"properties": {
"junit": {
"type": "array",
"items": {
"$ref": "#/definitions/relativePath"
}
}
},
"additionalProperties": false
},
"relativePath": {
"type": "string",
"pattern": "^[A-Za-z0-9][A-Za-z0-9_\\.-]*(/[A-Za-z0-9_\\.-]+)*[A-Za-z0-9_-]$"
}
},
"type": "object",
"properties": {
"configuration": {
"type": "array",
"items": {
"$ref": "#/definitions/relativePath"
}
},
"publish": {
"$ref": "#/definitions/publish"
},
"ingress": {
"type": "object",
"properties": {
"domain": {
"type": "string"
},
"additionalProperties": {
"type": "array",
"items": {
"type": "string"
}
},
"required": ["domain"]
},
"stages": {
"type": "array",
"items": {
"type": "object",
"properties": {
"arguments": {
"type": "array",
"items": {
"type": "string"
}
},
"name": {
"type": "string"
},
"on_error": {
"$ref": "#/definitions/relativePath"
},
"publish": {
"$ref": "#/definitions/publish"
},
"script": {
"$ref": "#/definitions/relativePath"
}
},
"required": [
"name",
"script"
],
"additionalProperties": false
},
"minItems": 1
},
"vm": {
"type": "object",
"properties": {
"memory": {
"type": "integer",
"minimum": 1024
},
"names": {
"type": "array",
"items": {
"type": "string",
"enum": [
"n0",
"n1",
"n2",
"n3"
]
},
"uniqueItems": true
},
"non_genesis": {
"type": "array",
"items": {
"type": "string",
"enum": [
"n1",
"n2",
"n3"
]
},
"uniqueItems": true
},
"vcpus": {
"type": "integer",
"minimum": 1,
"maximum": 8
}
},
"required": [
"memory",
"names",
"vcpus"
],
"additionalProperties": false
}
},
"required": [
"stages"
],
"additionalProperties": false
}
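The `relativePath` pattern above constrains `script`, `configuration`, and `publish` entries to repo-relative paths (no leading slash, no trailing dot). A quick shell check of the anchoring, purely illustrative:

```shell
# Exercise the schema's relativePath pattern against sample paths.
pat='^[A-Za-z0-9][A-Za-z0-9_.-]*(/[A-Za-z0-9_.-]+)*[A-Za-z0-9_-]$'
for p in "shipyard-deploy-site.sh" "/etc/passwd" "genesis."; do
    if printf '%s\n' "$p" | grep -qE "$pat"; then
        echo "valid:   $p"
    else
        echo "invalid: $p"
    fi
done
```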

@@ -0,0 +1,61 @@
{
"configuration": {
"site": "seaworthy-virt",
"primary_repo": "./",
"aux_repos": []
},
"ingress": {
"domain": "gate.local",
"ca": "-----BEGIN CERTIFICATE-----\nMIIDIDCCAgigAwIBAgIUfikFVpFSQKVjACP9i8P4tUMnQbcwDQYJKoZIhvcNAQEL\nBQAwKDERMA8GA1UEChMIU25ha2VvaWwxEzARBgNVBAMTCmluZ3Jlc3MtY2EwHhcN\nMTgxMjAzMjEzOTAwWhcNMjMxMjAyMjEzOTAwWjAoMREwDwYDVQQKEwhTbmFrZW9p\nbDETMBEGA1UEAxMKaW5ncmVzcy1jYTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCC\nAQoCggEBAOR6+3dCF5mtKvu2TlaYNHc6/v8VPvw3I0+EI+jRskXVQHZxF0kcLAVH\n/LM2maTMzNc1sZnxCnj8YYHxfhdIco+zwzCbG1YGolSPrPaslYmMmDjR0eVl1+tb\nmLnEHDZ88ds5rXNlUXDhAURzYPJivG2aYBVImvaS4GHztndaFFNE0Q7HQpldCs1Q\n5+xbFlKWHBt/xPM4QjoD/ReLEE5m5HhkT4WN0hWC0NC1OwW6bBhVkrk4D2kDTq8d\n/b5MH4FG2HHJYHXKR4caasrCHUrmuq7m6WoicwF7z53FvlM782EsNx6vSoBKYs39\n/AC4meM/9D8rjUlWaG3AjP0KFrFCLYECAwEAAaNCMEAwDgYDVR0PAQH/BAQDAgEG\nMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFJFfhFd1reBWgmrWe6PBV2z5W/Ee\nMA0GCSqGSIb3DQEBCwUAA4IBAQAZygjSCRSJrvgPllyWDpyKN1fg2r7P2ioI0WR9\nWkSrPKzdhi2hR8VdJxkMvRpEmWRhkQT7jNGEIWgy2jtyWiYKnKYobbY/kMU86QgL\nZazh2DiIeJim+Vt3RREyfOcNDwGMX7NpfwMTz7Dzl+jvtlBwKLFN0L15d0X+4J9V\ndRp5ZkooVjiOJb6vNcozDWxBrRPAowrvzLlJkFMaKgJQmGigEpgEygnCRH++NCle\n/ivGbdFuCsYzUTlR77xf9kGXMh3socMXcdu5SOtaDS7sl52DAJnAPxo9S6l0270G\na0989is2yCgDNmld5lpphVPaQSusGa8/XTaXR7YH+oc7qn1l\n-----END CERTIFICATE-----",
"172.24.1.5": ["maas"],
"172.24.1.6": ["drydock","shipyard","keystone"]
},
"stages": [
{
"name": "Load Site Design",
"script": "shipyard-load-design.sh"
},
{
"name": "Deploy Site",
"script": "shipyard-deploy-site.sh"
}
],
"vm": {
"build": {
"memory": 3072,
"vcpus": 2,
"mac": "52:54:00:00:be:31",
"ip": "172.24.1.9",
"bootstrap": true,
"userdata": "packages: [docker.io]"
},
"n0" : {
"memory": 32768,
"vcpus": 8,
"mac": "52:54:00:00:a4:31",
"ip": "172.24.1.10",
"bootstrap": true
},
"n1" : {
"memory": 3072,
"vcpus": 2,
"mac": "52:54:00:00:a3:31",
"ip": "172.24.1.11",
"bootstrap": false
},
"n2" : {
"memory": 3072,
"vcpus": 2,
"mac": "52:54:00:1a:95:0d",
"ip": "172.24.1.12",
"bootstrap": false
},
"n3" : {
"memory": 3072,
"vcpus": 2,
"mac": "52:54:00:31:c2:36",
"ip": "172.24.1.13",
"bootstrap": false
}
}
}

@@ -0,0 +1,236 @@
{
"configuration": {
"site": "seaworthy-virt",
"primary_repo": "./",
"aux_repos": []
},
"ingress": {
"domain": "gate.local",
"ca": "-----BEGIN CERTIFICATE-----\nMIIDIDCCAgigAwIBAgIUfikFVpFSQKVjACP9i8P4tUMnQbcwDQYJKoZIhvcNAQEL\nBQAwKDERMA8GA1UEChMIU25ha2VvaWwxEzARBgNVBAMTCmluZ3Jlc3MtY2EwHhcN\nMTgxMjAzMjEzOTAwWhcNMjMxMjAyMjEzOTAwWjAoMREwDwYDVQQKEwhTbmFrZW9p\nbDETMBEGA1UEAxMKaW5ncmVzcy1jYTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCC\nAQoCggEBAOR6+3dCF5mtKvu2TlaYNHc6/v8VPvw3I0+EI+jRskXVQHZxF0kcLAVH\n/LM2maTMzNc1sZnxCnj8YYHxfhdIco+zwzCbG1YGolSPrPaslYmMmDjR0eVl1+tb\nmLnEHDZ88ds5rXNlUXDhAURzYPJivG2aYBVImvaS4GHztndaFFNE0Q7HQpldCs1Q\n5+xbFlKWHBt/xPM4QjoD/ReLEE5m5HhkT4WN0hWC0NC1OwW6bBhVkrk4D2kDTq8d\n/b5MH4FG2HHJYHXKR4caasrCHUrmuq7m6WoicwF7z53FvlM782EsNx6vSoBKYs39\n/AC4meM/9D8rjUlWaG3AjP0KFrFCLYECAwEAAaNCMEAwDgYDVR0PAQH/BAQDAgEG\nMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFJFfhFd1reBWgmrWe6PBV2z5W/Ee\nMA0GCSqGSIb3DQEBCwUAA4IBAQAZygjSCRSJrvgPllyWDpyKN1fg2r7P2ioI0WR9\nWkSrPKzdhi2hR8VdJxkMvRpEmWRhkQT7jNGEIWgy2jtyWiYKnKYobbY/kMU86QgL\nZazh2DiIeJim+Vt3RREyfOcNDwGMX7NpfwMTz7Dzl+jvtlBwKLFN0L15d0X+4J9V\ndRp5ZkooVjiOJb6vNcozDWxBrRPAowrvzLlJkFMaKgJQmGigEpgEygnCRH++NCle\n/ivGbdFuCsYzUTlR77xf9kGXMh3socMXcdu5SOtaDS7sl52DAJnAPxo9S6l0270G\na0989is2yCgDNmld5lpphVPaQSusGa8/XTaXR7YH+oc7qn1l\n-----END CERTIFICATE-----",
"172.24.1.5": ["maas"],
"172.24.1.6": ["drydock","shipyard","keystone"]
},
"disk_layouts":{
"simple": {
"vda": {
"size": 64,
"io_profile": "fast",
"bootstrap": true
}
},
"multi": {
"vda": {
"size": 48,
"io_profile": "fast",
"bootstrap": true
},
"vdb": {
"size": 16,
"io_profile": "fast",
"format": {"type": "ext4", "mountpoint": "/var"}
}
}
},
"networking":{
"pxe": {
"roles":["ssh","bgp", "dns"],
"layer2": {
"mtu": 1500,
"address": "52:54:00:00:dd:31"
},
"layer3": {
"cidr": "172.24.1.0/24",
"address": "172.24.1.1",
"gateway": "172.24.1.1",
"routing": {
"mode": "nat"
}
}
}
},
"stages": [
{
"name": "Gate Setup",
"script": "gate-setup.sh"
},
{
"name": "Pegleg Collection",
"script": "pegleg-collect.sh"
},
{
"name": "Pegleg Render",
"script": "pegleg-render.sh"
},
{
"name": "Generate Certificates",
"script": "generate-certificates.sh"
},
{
"name": "Build Scripts",
"script": "build-scripts.sh"
},
{
"name": "Create VMs",
"script": "create-vms.sh"
},
{
"name": "Register Ingress",
"script": "ingress-dns.sh",
"arguments": ["build"]
},
{
"name": "Create BGP router",
"script": "bgp-router.sh",
"arguments": ["build"]
},
{
"name": "Pre Genesis Setup",
"script": "genesis-setup.sh"
},
{
"name": "Genesis",
"script": "genesis.sh",
"on_error": "collect_genesis_info.sh"
},
{
"name": "Validate Genesis",
"script": "validate-genesis.sh",
"on_error": "collect_genesis_info.sh"
},
{
"name": "Load Site Design",
"script": "shipyard-load-design.sh"
},
{
"name": "Deploy Site",
"script": "shipyard-deploy-site.sh"
},
{
"name": "Validate Kube",
"script": "validate-kube.sh",
"on_error": "collect_genesis_info.sh"
}
],
"vm": {
"build": {
"memory": 3072,
"vcpus": 2,
"disk_layout": "simple",
"networking": {
"ens3": {
"mac": "52:54:00:00:be:31",
"pci": {
"slot": 3,
"port": 0
},
"attachment": {
"network": "pxe"
}
},
"addresses": {
"pxe": {
"ip": "172.24.1.9"
}
}
},
"bootstrap": true,
"userdata": "packages: [docker.io]"
},
"n0" : {
"memory": 24576,
"vcpus": 16,
"disk_layout": "simple",
"networking": {
"ens3": {
"mac": "52:54:00:00:a4:31",
"pci": {
"slot": 3,
"port": 0
},
"attachment": {
"network": "pxe"
}
},
"addresses": {
"pxe": {
"ip": "172.24.1.10"
}
}
},
"bootstrap": true
},
"n1" : {
"memory": 3072,
"vcpus": 2,
"disk_layout": "simple",
"networking": {
"ens3": {
"mac": "52:54:00:00:a3:31",
"pci": {
"slot": 3,
"port": 0
},
"attachment": {
"network": "pxe"
}
},
"addresses": {
"pxe": {
"ip": "172.24.1.11"
}
}
},
"bootstrap": false
},
"n2" : {
"memory": 3072,
"vcpus": 2,
"disk_layout": "simple",
"networking": {
"ens3": {
"mac": "52:54:00:1a:95:0d",
"pci": {
"slot": 3,
"port": 0
},
"attachment": {
"network": "pxe"
}
},
"addresses": {
"pxe": {
"ip": "172.24.1.12"
}
}
},
"bootstrap": false
},
"n3" : {
"memory": 3072,
"vcpus": 2,
"disk_layout": "simple",
"networking": {
"ens3": {
"mac": "52:54:00:31:c2:36",
"pci": {
"slot": 3,
"port": 0
},
"attachment": {
"network": "pxe"
}
},
"addresses": {
"pxe": {
"ip": "172.24.1.13"
}
}
},
"bootstrap": false
}
},
"bgp" : {
"frr_as": 64688,
"calico_as": 64671
}
}
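Per the commit's move from Quagga to FRRouting, the `frr_as`/`calico_as` pair above drives the BGP peering between the gate router and the Calico nodes. A minimal illustrative `frr.conf` stanza of that shape (the router ID and neighbor address are placeholders taken from the pxe network, not rendered gate output):

```
router bgp 64688
 bgp router-id 172.24.1.1
 neighbor 172.24.1.10 remote-as 64671
```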

@@ -0,0 +1,105 @@
{
"configuration": {
"site": "seaworthy-virt",
"primary_repo": "./",
"aux_repos": []
},
"ingress": {
"domain": "gate.local",
"ca": "-----BEGIN CERTIFICATE-----\nMIIDIDCCAgigAwIBAgIUfikFVpFSQKVjACP9i8P4tUMnQbcwDQYJKoZIhvcNAQEL\nBQAwKDERMA8GA1UEChMIU25ha2VvaWwxEzARBgNVBAMTCmluZ3Jlc3MtY2EwHhcN\nMTgxMjAzMjEzOTAwWhcNMjMxMjAyMjEzOTAwWjAoMREwDwYDVQQKEwhTbmFrZW9p\nbDETMBEGA1UEAxMKaW5ncmVzcy1jYTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCC\nAQoCggEBAOR6+3dCF5mtKvu2TlaYNHc6/v8VPvw3I0+EI+jRskXVQHZxF0kcLAVH\n/LM2maTMzNc1sZnxCnj8YYHxfhdIco+zwzCbG1YGolSPrPaslYmMmDjR0eVl1+tb\nmLnEHDZ88ds5rXNlUXDhAURzYPJivG2aYBVImvaS4GHztndaFFNE0Q7HQpldCs1Q\n5+xbFlKWHBt/xPM4QjoD/ReLEE5m5HhkT4WN0hWC0NC1OwW6bBhVkrk4D2kDTq8d\n/b5MH4FG2HHJYHXKR4caasrCHUrmuq7m6WoicwF7z53FvlM782EsNx6vSoBKYs39\n/AC4meM/9D8rjUlWaG3AjP0KFrFCLYECAwEAAaNCMEAwDgYDVR0PAQH/BAQDAgEG\nMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFJFfhFd1reBWgmrWe6PBV2z5W/Ee\nMA0GCSqGSIb3DQEBCwUAA4IBAQAZygjSCRSJrvgPllyWDpyKN1fg2r7P2ioI0WR9\nWkSrPKzdhi2hR8VdJxkMvRpEmWRhkQT7jNGEIWgy2jtyWiYKnKYobbY/kMU86QgL\nZazh2DiIeJim+Vt3RREyfOcNDwGMX7NpfwMTz7Dzl+jvtlBwKLFN0L15d0X+4J9V\ndRp5ZkooVjiOJb6vNcozDWxBrRPAowrvzLlJkFMaKgJQmGigEpgEygnCRH++NCle\n/ivGbdFuCsYzUTlR77xf9kGXMh3socMXcdu5SOtaDS7sl52DAJnAPxo9S6l0270G\na0989is2yCgDNmld5lpphVPaQSusGa8/XTaXR7YH+oc7qn1l\n-----END CERTIFICATE-----",
"172.24.1.5": ["maas"],
"172.24.1.6": ["drydock","shipyard","iam"]
},
"stages": [
{
"name": "Gate Setup",
"script": "gate-setup.sh"
},
{
"name": "Pegleg Collection",
"script": "pegleg-collect.sh"
},
{
"name": "Pegleg Render",
"script": "pegleg-render.sh"
},
{
"name": "Generate Certificates",
"script": "generate-certificates.sh"
},
{
"name": "Build Scripts",
"script": "build-scripts.sh"
},
{
"name": "Create VMs",
"script": "create-vms.sh"
},
{
"name": "Register Ingress",
"script": "ingress-dns.sh",
"arguments": ["build"]
},
{
"name": "Create BGP router",
"script": "bgp-router.sh",
"arguments": ["build"]
},
{
"name": "Pre Genesis Setup",
"script": "genesis-setup.sh"
},
{
"name": "Genesis",
"script": "genesis.sh",
"on_error": "collect_genesis_info.sh"
},
{
"name": "Validate Genesis",
"script": "validate-genesis.sh",
"on_error": "collect_genesis_info.sh"
}
],
"vm": {
"build": {
"memory": 2048,
"vcpus": 2,
"mac": "52:54:00:00:be:31",
"ip": "172.24.1.9",
"bootstrap": true,
"userdata": "packages: [docker.io]"
},
"n0" : {
"memory": 16384,
"vcpus": 12,
"mac": "52:54:00:00:a4:31",
"ip": "172.24.1.10",
"bootstrap": true
},
"n1" : {
"memory": 3072,
"vcpus": 2,
"mac": "52:54:00:00:a3:31",
"ip": "172.24.1.11",
"bootstrap": false
},
"n2" : {
"memory": 3072,
"vcpus": 2,
"mac": "52:54:00:1a:95:0d",
"ip": "172.24.1.12",
"bootstrap": false
},
"n3" : {
"memory": 3072,
"vcpus": 2,
"mac": "52:54:00:31:c2:36",
"ip": "172.24.1.13",
"bootstrap": false
}
},
"bgp" : {
"frr_as": 64688,
"calico_as": 64671
}
}

@@ -0,0 +1,22 @@
{
"configuration": {},
"stages": [
{
"name": "Gate Setup",
"script": "gate-setup.sh"
},
{
"name": "Create VMs",
"script": "create-vms.sh"
}
],
"vm": {
"n0" : {
"memory": 20000,
"vcpus": 8,
"mac": "52:54:00:00:a4:31",
"ip": "172.24.1.10",
"bootstrap": true
}
}
}

@@ -0,0 +1,67 @@
{
"configuration": {
"site": "seaworthy-virt",
"primary_repo": "./",
"aux_repos": []
},
"ingress": {
"domain": "gate.local",
"ca": "-----BEGIN CERTIFICATE-----\nMIIDIDCCAgigAwIBAgIUfikFVpFSQKVjACP9i8P4tUMnQbcwDQYJKoZIhvcNAQEL\nBQAwKDERMA8GA1UEChMIU25ha2VvaWwxEzARBgNVBAMTCmluZ3Jlc3MtY2EwHhcN\nMTgxMjAzMjEzOTAwWhcNMjMxMjAyMjEzOTAwWjAoMREwDwYDVQQKEwhTbmFrZW9p\nbDETMBEGA1UEAxMKaW5ncmVzcy1jYTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCC\nAQoCggEBAOR6+3dCF5mtKvu2TlaYNHc6/v8VPvw3I0+EI+jRskXVQHZxF0kcLAVH\n/LM2maTMzNc1sZnxCnj8YYHxfhdIco+zwzCbG1YGolSPrPaslYmMmDjR0eVl1+tb\nmLnEHDZ88ds5rXNlUXDhAURzYPJivG2aYBVImvaS4GHztndaFFNE0Q7HQpldCs1Q\n5+xbFlKWHBt/xPM4QjoD/ReLEE5m5HhkT4WN0hWC0NC1OwW6bBhVkrk4D2kDTq8d\n/b5MH4FG2HHJYHXKR4caasrCHUrmuq7m6WoicwF7z53FvlM782EsNx6vSoBKYs39\n/AC4meM/9D8rjUlWaG3AjP0KFrFCLYECAwEAAaNCMEAwDgYDVR0PAQH/BAQDAgEG\nMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFJFfhFd1reBWgmrWe6PBV2z5W/Ee\nMA0GCSqGSIb3DQEBCwUAA4IBAQAZygjSCRSJrvgPllyWDpyKN1fg2r7P2ioI0WR9\nWkSrPKzdhi2hR8VdJxkMvRpEmWRhkQT7jNGEIWgy2jtyWiYKnKYobbY/kMU86QgL\nZazh2DiIeJim+Vt3RREyfOcNDwGMX7NpfwMTz7Dzl+jvtlBwKLFN0L15d0X+4J9V\ndRp5ZkooVjiOJb6vNcozDWxBrRPAowrvzLlJkFMaKgJQmGigEpgEygnCRH++NCle\n/ivGbdFuCsYzUTlR77xf9kGXMh3socMXcdu5SOtaDS7sl52DAJnAPxo9S6l0270G\na0989is2yCgDNmld5lpphVPaQSusGa8/XTaXR7YH+oc7qn1l\n-----END CERTIFICATE-----",
"172.24.1.5": ["maas"],
"172.24.1.6": ["drydock","shipyard","keystone"]
},
"stages": [
{
"name": "Pegleg Collection",
"script": "pegleg-collect.sh",
"arguments": ["update"]
},
{
"name": "Load Site Design",
"script": "shipyard-load-design.sh",
"arguments": ["-g", "-o"]
},
{
"name": "Deploy Site",
"script": "shipyard-update-site.sh"
}
],
"vm": {
"build": {
"memory": 3072,
"vcpus": 2,
"mac": "52:54:00:00:be:31",
"ip": "172.24.1.9",
"bootstrap": true,
"userdata": "packages: [docker.io]"
},
"n0" : {
"memory": 32768,
"vcpus": 8,
"mac": "52:54:00:00:a4:31",
"ip": "172.24.1.10",
"bootstrap": true
},
"n1" : {
"memory": 3072,
"vcpus": 2,
"mac": "52:54:00:00:a3:31",
"ip": "172.24.1.11",
"bootstrap": false
},
"n2" : {
"memory": 3072,
"vcpus": 2,
"mac": "52:54:00:1a:95:0d",
"ip": "172.24.1.12",
"bootstrap": false
},
"n3" : {
"memory": 3072,
"vcpus": 2,
"mac": "52:54:00:31:c2:36",
"ip": "172.24.1.13",
"bootstrap": false
}
}
}

@@ -0,0 +1,43 @@
#!/usr/bin/env bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# NOTE(mark-burnett): Keep trying to collect info even if there's an error
set +e
set -x
KUBECONFIG="${KUBECONFIG:-/etc/kubernetes/admin/kubeconfig.yaml}"
source "${GATE_UTILS}"
ERROR_DIR="${TEMP_DIR}/errors"
VIA=n0
mkdir -p "${ERROR_DIR}"
log "Gathering info from failed genesis server (n0) in ${ERROR_DIR}"
log "Gathering docker info for exited containers"
mkdir -p "${ERROR_DIR}/docker"
docker_ps "${VIA}" | tee "${ERROR_DIR}/docker/ps"
docker_info "${VIA}" | tee "${ERROR_DIR}/docker/info"
for container_id in $(docker_exited_containers "${VIA}"); do
docker_inspect "${VIA}" "${container_id}" | tee "${ERROR_DIR}/docker/${container_id}"
echo "=== Begin logs ===" | tee -a "${ERROR_DIR}/docker/${container_id}"
docker_logs "${VIA}" "${container_id}" | tee -a "${ERROR_DIR}/docker/${container_id}"
done
log "Gathering kubectl output"
mkdir -p "${ERROR_DIR}/kube"
kubectl_cmd "${VIA}" describe nodes n0 | tee "${ERROR_DIR}/kube/n0"
kubectl_cmd "${VIA}" get --all-namespaces -o wide pod | tee "${ERROR_DIR}/kube/pods"

@@ -0,0 +1,23 @@
#!/bin/bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
source "${GATE_UTILS}"
BGP_ROUTER="$1"
bgp_router_config
bgp_router_start "${BGP_ROUTER}"

@@ -0,0 +1,57 @@
#!/bin/bash
set -e
source "${GATE_UTILS}"
mkdir -p "${SCRIPT_DEPOT}"
chmod 777 "${SCRIPT_DEPOT}"
DOCKER_RUN_OPTS=("-e" "PROMENADE_DEBUG=${PROMENADE_DEBUG}")
for v in HTTPS_PROXY HTTP_PROXY NO_PROXY https_proxy http_proxy no_proxy
do
if [[ -v "${v}" ]]
then
DOCKER_RUN_OPTS+=("-e" "${v}=${!v}")
fi
done
CERTS_PATH="/certs/*.yaml"
KEYS_PATH="/gate/*.yaml"
if [[ -n "${USE_EXISTING_SECRETS}" ]]
then
CERTS_PATH=""
KEYS_PATH=""
fi
PROMENADE_TMP_LOCAL="$(basename "$PROMENADE_TMP_LOCAL")"
PROMENADE_TMP="${TEMP_DIR}/${PROMENADE_TMP_LOCAL}"
mkdir -p "$PROMENADE_TMP"
chmod 777 "$PROMENADE_TMP"
log Prepare hyperkube
docker run --rm -t \
--network host \
-v "${PROMENADE_TMP}:/tmp/${PROMENADE_TMP_LOCAL}" \
"${DOCKER_RUN_OPTS[@]}" \
"${IMAGE_HYPERKUBE}" \
cp /hyperkube "/tmp/${PROMENADE_TMP_LOCAL}"
log Building scripts
docker run --rm -t \
-w /config \
--network host \
-v "${DEFINITION_DEPOT}:/config" \
-v "${GATE_DEPOT}:/gate" \
-v "${CERT_DEPOT}:/certs" \
-v "${SCRIPT_DEPOT}:/scripts" \
-v "${PROMENADE_TMP}:/tmp/${PROMENADE_TMP_LOCAL}" \
-e "PROMENADE_ENCRYPTION_KEY=${PROMENADE_ENCRYPTION_KEY}" \
"${DOCKER_RUN_OPTS[@]}" \
"${IMAGE_PROMENADE_CLI}" \
promenade \
build-all \
--validators \
-o /scripts \
/config/*.yaml "${CERTS_PATH}" "${KEYS_PATH}"
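The proxy loop above forwards only the variables that are actually set, using `[[ -v ]]` plus bash indirect expansion (`${!v}`). The same idiom in isolation (the proxy URL is a placeholder):

```shell
# Build docker-style "-e NAME=value" options for only the variables
# that are set, via [[ -v ]] and indirect expansion (${!v}).
unset HTTPS_PROXY NO_PROXY
HTTP_PROXY="http://proxy.example:3128"
opts=()
for v in HTTPS_PROXY HTTP_PROXY NO_PROXY; do
    if [[ -v "${v}" ]]; then
        opts+=("-e" "${v}=${!v}")
    fi
done
printf '%s\n' "${opts[@]}"
```

Unset variables contribute nothing, so the container only sees proxy settings that exist on the host.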

@@ -0,0 +1,22 @@
#!/usr/bin/env bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
source "${GATE_UTILS}"
nets_declare
vm_clean_all
vm_create_all

@@ -0,0 +1,33 @@
#!/usr/bin/env bash
# Copyright 2019 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ex
source "${GATE_UTILS}"
export KUBECONFIG="${KUBECONFIG:-/etc/kubernetes/admin/kubeconfig.yaml}"
ssh_cmd "${GENESIS_NAME}" apt-get install -y jq
# Copy the debug report script to the genesis VM
ssh_cmd "${GENESIS_NAME}" mkdir -p /root/airship
rsync_cmd "${REPO_ROOT}/tools/deployment/seaworthy-virt/airship_gate/bin/debug-report-lite.sh" "${GENESIS_NAME}:/root/airship/"
set -o pipefail
ssh_cmd_raw "${GENESIS_NAME}" "KUBECONFIG=${KUBECONFIG} /root/airship/debug-report-lite.sh" 2>&1 | tee -a "${LOG_FILE}"
set +o pipefail
rsync_cmd "${GENESIS_NAME}:/root/debug-${GENESIS_NAME}.tgz" .

@@ -0,0 +1,36 @@
#!/bin/bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
source "${GATE_UTILS}"
# Docker registry (cache) setup
# note: currently not used
# registry_up
# Create temp_dir structure
mkdir -p "${TEMP_DIR}/console"
# SSH setup
ssh_setup_declare
# Virsh setup
pool_declare
img_base_declare
# Make libvirtd available via SSH
make_virtmgr_account
gen_libvirt_key

@@ -0,0 +1,29 @@
#!/bin/bash
set -e
source "${GATE_UTILS}"
DESIGN_FILES=($(find "${DEFINITION_DEPOT}" -name '*.yaml' -print0 | xargs -0 -n 1 basename | xargs -n 1 printf "/tmp/design/%s\n"))
GATE_FILES=($(find "${GATE_DEPOT}" -name '*.yaml' -print0 | xargs -0 -n 1 basename | xargs -n 1 printf "/tmp/gate/%s\n"))
mkdir -p "${CERT_DEPOT}"
chmod 777 "${CERT_DEPOT}"
if [[ -n "${USE_EXISTING_SECRETS}" ]]
then
log Certificates already provided by manifests
exit 0
fi
log Generating certificates
docker run --rm -t \
-w /tmp \
-v "${DEFINITION_DEPOT}:/tmp/design" \
-v "${GATE_DEPOT}:/tmp/gate" \
-v "${CERT_DEPOT}:/certs" \
-e "PROMENADE_DEBUG=${PROMENADE_DEBUG}" \
"${IMAGE_PROMENADE_CLI}" \
promenade \
generate-certs \
-o /certs \
"${DESIGN_FILES[@]}" "${GATE_FILES[@]}"

@@ -0,0 +1,26 @@
#!/usr/bin/env bash
# Copyright 2019 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
source "${GATE_UTILS}"
# Copy the bootaction runner and rendered site documents to the genesis VM
rsync_cmd "${REPO_ROOT}/tools/deployment/seaworthy-virt/airship_gate/lib/bootaction-runner.sh" "${GENESIS_NAME}:/root/airship/"
rsync_cmd "${RENDERED_DEPOT}/rendered.yaml" "${GENESIS_NAME}:/root/airship/"
set -o pipefail
ssh_cmd "${GENESIS_NAME}" /root/airship/bootaction-runner.sh /root/airship/rendered.yaml 2>&1 | tee -a "${LOG_FILE}"
set +o pipefail

@@ -0,0 +1,30 @@
#!/usr/bin/env bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
source "${GATE_UTILS}"
# Copy the genesis script to the genesis VM
rsync_cmd "${SCRIPT_DEPOT}/genesis.sh" "${GENESIS_NAME}:/root/airship/"
set -o pipefail
ssh_cmd_raw "${GENESIS_NAME}" "PROMENADE_ENCRYPTION_KEY=${PROMENADE_ENCRYPTION_KEY} /root/airship/genesis.sh" 2>&1 | tee -a "${LOG_FILE}"
set +o pipefail
if ssh_cmd "${GENESIS_NAME}" docker images | tail -n +2 | grep -qv registry:5000 ; then
log_warn "Using some non-cached docker images. This will slow testing."
ssh_cmd "${GENESIS_NAME}" docker images | tail -n +2 | grep -v registry:5000 | tee -a "${LOG_FILE}"
fi
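The cache check drops the `docker images` header with `tail -n +2` and flags any row that was not pulled through the local `registry:5000` mirror. The filter against illustrative output (not a real image list):

```shell
# Filter sample "docker images" output the way genesis.sh checks for
# non-cached images: drop the header, keep rows not from registry:5000.
sample='REPOSITORY                       TAG     IMAGE ID  CREATED  SIZE
registry:5000/airship/shipyard   latest  aaa       now      1MB
quay.io/airshipit/armada         latest  bbb       now      1MB'
echo "${sample}" | tail -n +2 | grep -v registry:5000
```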

@@ -0,0 +1,23 @@
#!/bin/bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
source "${GATE_UTILS}"
DNS_SERVER="$1"
ingress_dns_config
ingress_dns_start "${DNS_SERVER}"

@@ -0,0 +1,90 @@
#!/usr/bin/env bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -xe
source "${GATE_UTILS}"
mkdir -p "${DEFINITION_DEPOT}"
chmod 777 "${DEFINITION_DEPOT}"
render_pegleg_cli() {
cli_string=("pegleg" "-v" "site")
if [[ "${GERRIT_SSH_USER}" ]]
then
cli_string+=("-u" "${GERRIT_SSH_USER}")
fi
if [[ "${GERRIT_SSH_KEY}" ]]
then
cli_string+=("-k" "/workspace/${GERRIT_SSH_KEY}")
fi
primary_repo="$(config_pegleg_primary_repo)"
if [[ -d "${REPO_ROOT}/${primary_repo}" ]]
then
cli_string+=("-r" "/workspace/${primary_repo}")
else
log "${primary_repo} not a valid primary repository"
return 1
fi
aux_repos=($(config_pegleg_aux_repos))
if [[ ${#aux_repos[@]} -gt 0 ]]
then
for r in ${aux_repos[*]}
do
cli_string+=("-e" "${r}=/workspace/${r}")
done
fi
cli_string+=("collect" "-s" "/collect")
cli_string+=("$(config_pegleg_sitename)")
printf " %s " "${cli_string[@]}"
}
collect_design_docs() {
# shellcheck disable=SC2091
# shellcheck disable=SC2046
docker run \
--rm -t \
--network host \
-v "${HOME}/.ssh":/root/.ssh \
-v "${REPO_ROOT}":/workspace \
-v "${DEFINITION_DEPOT}":/collect \
"${IMAGE_PEGLEG_CLI}" \
$(render_pegleg_cli)
}
collect_initial_docs() {
collect_design_docs
log "Generating virtmgr key documents"
gen_libvirt_key && install_libvirt_key
collect_ssh_key
}
log "Collecting site definition to ${DEFINITION_DEPOT}"
if [[ "$1" != "update" ]]
then
collect_initial_docs
else
collect_design_docs
fi
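`render_pegleg_cli` emits its argument array via `printf " %s "`, so the unquoted `$(render_pegleg_cli)` inside `collect_design_docs` re-splits it into words for `docker run`. The round trip in isolation:

```shell
# Serialize an argv array with printf, then let word splitting rebuild
# it -- the handoff between render_pegleg_cli and the docker run call.
cli_string=("pegleg" "-v" "site" "collect" "-s" "/collect")
serialized=$(printf " %s " "${cli_string[@]}")
# shellcheck disable=SC2086
set -- ${serialized}
echo "$# args: $*"
```

This only works because none of the arguments contain whitespace or glob characters, which the repo-relative path schema already guarantees.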

@@ -0,0 +1,77 @@
#!/usr/bin/env bash
# Copyright 2019 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -xe
source "${GATE_UTILS}"
mkdir -p "${RENDERED_DEPOT}"
chmod 777 "${RENDERED_DEPOT}"
render_pegleg_cli() {
cli_string=("pegleg" "-v" "site")
if [[ "${GERRIT_SSH_USER}" ]]
then
cli_string+=("-u" "${GERRIT_SSH_USER}")
fi
if [[ "${GERRIT_SSH_KEY}" ]]
then
cli_string+=("-k" "/workspace/${GERRIT_SSH_KEY}")
fi
primary_repo="$(config_pegleg_primary_repo)"
if [[ -d "${REPO_ROOT}/${primary_repo}" ]]
then
cli_string+=("-r" "/workspace/${primary_repo}")
else
log "${primary_repo} not a valid primary repository"
return 1
fi
aux_repos=($(config_pegleg_aux_repos))
if [[ ${#aux_repos[@]} -gt 0 ]]
then
for r in ${aux_repos[*]}
do
cli_string+=("-e" "${r}=/workspace/${r}")
done
fi
cli_string+=("render" "-o" "/collect/rendered.yaml")
cli_string+=("$(config_pegleg_sitename)")
printf " %s " "${cli_string[@]}"
}
collect_rendered_doc() {
# shellcheck disable=SC2091
# shellcheck disable=SC2046
docker run \
--rm -t \
--network host \
-v "${HOME}/.ssh":/root/.ssh \
-v "${REPO_ROOT}":/workspace \
-v "${RENDERED_DEPOT}":/collect \
"${IMAGE_PEGLEG_CLI}" \
$(render_pegleg_cli)
}
log "Collecting rendered document to ${RENDERED_DEPOT}"
collect_rendered_doc

@@ -0,0 +1,21 @@
#!/usr/bin/env bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
source "${GATE_UTILS}"
# Docker registry (cache) setup
registry_populate

@@ -0,0 +1,33 @@
#!/usr/bin/env bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -eu
source "${GATE_UTILS}"
log Testing disk IO
fio \
--randrepeat=1 \
--ioengine=libaio \
--direct=1 \
--gtod_reduce=1 \
--name=test \
--filename=.fiotest \
--bs=4k \
--iodepth=64 \
--size=1G \
--readwrite=randrw \
--rwmixread=50

@@ -0,0 +1,22 @@
#!/usr/bin/env bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
source "${GATE_UTILS}"
cd "${TEMP_DIR}"
shipyard_action_wait deploy_site 7200

@@ -0,0 +1,83 @@
#!/bin/bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ex
source "${GATE_UTILS}"
cd "${TEMP_DIR}"
# Omit the cert_yaml bundle
OMIT_CERTS=0
# Omit the gate_yaml bundle
OMIT_GATE=0
while getopts "og" opt; do
case "${opt}" in
o)
OMIT_CERTS=1
;;
g)
OMIT_GATE=1
;;
*)
echo "Unknown option"
exit 1
;;
esac
done
shift $((OPTIND-1))
check_configdocs_result(){
RESULT=$1
ERROR_CNT=$(echo "${RESULT}" | grep -oE 'Errors: [0-9]+')
if [[ "${ERROR_CNT}" != "Errors: 0" ]]
then
log "Shipyard create configdocs did not pass validation."
echo "${RESULT}" >> "${LOG_FILE}"
return 1
fi
}
# Copy site design to genesis node
ssh_cmd "${BUILD_NAME}" mkdir -p "${BUILD_WORK_DIR}/site"
rsync_cmd "${DEFINITION_DEPOT}"/*.yaml "${BUILD_NAME}:${BUILD_WORK_DIR}/site/"
sleep 120
check_configdocs_result "$(shipyard_cmd create configdocs design "--directory=${BUILD_WORK_DIR}/site" --replace)"
# Skip certs/gate if already part of site manifests
if [[ -n "${USE_EXISTING_SECRETS}" ]]
then
    OMIT_CERTS=1
    OMIT_GATE=1
fi
if [[ "${OMIT_CERTS}" == "0" ]]
then
    ssh_cmd "${BUILD_NAME}" mkdir -p "${BUILD_WORK_DIR}/certs"
    rsync_cmd "${CERT_DEPOT}"/*.yaml "${BUILD_NAME}:${BUILD_WORK_DIR}/certs/"
    check_configdocs_result "$(shipyard_cmd create configdocs certs "--directory=${BUILD_WORK_DIR}/certs" --append)"
fi
if [[ "${OMIT_GATE}" == "0" ]]
then
    ssh_cmd "${BUILD_NAME}" mkdir -p "${BUILD_WORK_DIR}/gate"
    rsync_cmd "${GATE_DEPOT}"/*.yaml "${BUILD_NAME}:${BUILD_WORK_DIR}/gate/"
    check_configdocs_result "$(shipyard_cmd create configdocs gate "--directory=${BUILD_WORK_DIR}/gate" --append)"
fi
check_configdocs_result "$(shipyard_cmd commit configdocs)"

View File

@ -0,0 +1,22 @@
#!/usr/bin/env bash
# Copyright 2019, AT&T Intellectual Property
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
source "${GATE_UTILS}"
cd "${TEMP_DIR}"
shipyard_cmd create action test_site "$@"

View File

@ -0,0 +1,22 @@
#!/usr/bin/env bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
source "${GATE_UTILS}"
cd "${TEMP_DIR}"
shipyard_action_wait update_site 1800

View File

@ -0,0 +1,20 @@
#!/usr/bin/env bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
source "${GATE_UTILS}"
vm_stop_non_genesis

View File

@ -0,0 +1,30 @@
#!/usr/bin/env bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
source "${GATE_UTILS}"
# Copies the validation script to the genesis VM
rsync_cmd "${SCRIPT_DEPOT}/validate-genesis.sh" "${GENESIS_NAME}:/root/airship/"
set -o pipefail
ssh_cmd "${GENESIS_NAME}" /root/airship/validate-genesis.sh 2>&1 | tee -a "${LOG_FILE}"
set +o pipefail
if ssh_cmd n0 docker images | tail -n +2 | grep -v registry:5000 > /dev/null; then
    log_warn "Using some non-cached docker images. This will slow testing."
    ssh_cmd n0 docker images | tail -n +2 | grep -v registry:5000 | tee -a "${LOG_FILE}"
fi

View File

@ -0,0 +1,121 @@
#!/bin/bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
if [[ -n "$GATE_DEBUG" && "$GATE_DEBUG" = "1" ]]; then
    set -x
fi
set -e
# Define NUM_NODES to appropriate values to check this number of k8s nodes.
function upload_script() {
    source "$GATE_UTILS"
    BASENAME="$(basename "${BASH_SOURCE[0]}")"
    # Copies script to genesis VM
    rsync_cmd "${BASH_SOURCE[0]}" "$GENESIS_NAME:/root/airship/"
    set -o pipefail
    ssh_cmd_raw "$GENESIS_NAME" "KUBECONFIG=${KUBECONFIG} GATE_DEBUG=${GATE_DEBUG} NUM_NODES=$1 /root/airship/${BASENAME}" 2>&1 | tee -a "$LOG_FILE"
    set +o pipefail
}
function kubectl_retry() {
    cnt=0
    while true; do
        "$KUBECTL" "$@"
        ret=$?
        cnt=$((cnt+1))
        if [[ "$ret" -ne "0" ]]; then
            if [[ "$cnt" -lt "$MAX_TRIES" ]]; then
                sleep "$PAUSE"
            else
                return 1
            fi
        else
            break
        fi
    done
}
function check_kube_nodes() {
    try=0
    while true; do
        # Capture the real exit status; a trailing "|| true" here would
        # always report success and defeat the retry logic.
        nodes_list="$(kubectl_retry get nodes --no-headers)"
        ret="$?"
        try="$((try+1))"
        if [ "$ret" -ne "0" ]; then
            if [[ "$try" -lt "$MAX_TRIES" ]]; then
                sleep "$PAUSE"
                continue
            else
                echo "Error: can't get nodes"
                return 1
            fi
        fi
        nodes_list="${nodes_list}\n"
        all_nodes=$(echo -en "$nodes_list" | wc -l)
        ok_nodes=$(echo -en "$nodes_list" | grep -c -v "NotReady" || true)
        if [[ "$all_nodes" == "$1" && "$ok_nodes" == "$1" ]]; then
            echo "Nodes: ${all_nodes}"
            break
        else
            echo "Error: not enough nodes (found ${all_nodes}, ready ${ok_nodes}, expected $1)"
            return 1
        fi
    done
}
function check_kube_components() {
    try=0
    while true; do
        res=$(kubectl_retry get cs -o jsonpath="{.items[*].conditions[?(@.type == \"Healthy\")].status}") || true
        try=$((try+1))
        if echo "$res" | grep -q False; then
            if [[ "$try" -lt "$MAX_TRIES" ]]; then
                sleep "$PAUSE"
            else
                echo "Error: kubernetes components are not working properly"
                kubectl_retry get cs
                exit 1
            fi
        else
            break
        fi
    done
}
if [[ -n "$GATE_UTILS" ]]; then
    upload_script "$NUM_NODES"
else
    set +e
    KUBECONFIG="${KUBECONFIG:-/etc/kubernetes/admin/kubeconfig.yaml}"
    KUBECTL="${KUBECTL:-/usr/local/bin/kubectl}"
    NUM_NODES="${NUM_NODES:-4}"
    PAUSE="${PAUSE:-1}"
    MAX_TRIES="${MAX_TRIES:-3}"
    if [[ ! -f "$KUBECTL" ]]; then
        echo "Error: ${KUBECTL} not found"
        exit 1
    fi
    check_kube_nodes "$NUM_NODES"
    nodes_status=$?
    check_kube_components
    components_status=$?
    if [[ "$nodes_status" -ne "0" || "$components_status" -ne "0" ]]; then
        echo "Kubernetes validation failed"
        exit 1
    else
        echo "Kubernetes validation succeeded"
    fi
fi
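The `kubectl_retry` and `check_*` helpers above all share one poll-until-success shape. A minimal, self-contained sketch of that pattern (the generic `retry` helper and the demo probe are illustrative, not part of the gate):

```shell
#!/usr/bin/env bash
# Generic poll-until-success loop mirroring kubectl_retry's shape:
# run a command up to MAX tries, sleeping PAUSE seconds between failures.
retry() {
    local max="$1" pause="$2"; shift 2
    local cnt=0
    while true; do
        if "$@"; then
            return 0
        fi
        cnt=$((cnt+1))
        if [[ "$cnt" -ge "$max" ]]; then
            return 1
        fi
        sleep "$pause"
    done
}

# Demo: a probe that only succeeds on its third invocation.
marker="$(mktemp)"
probe() {
    echo x >> "$marker"
    [[ "$(wc -l < "$marker")" -ge 3 ]]
}
retry 5 0 probe && echo "succeeded after $(wc -l < "$marker") tries"  # -> succeeded after 3 tries
rm -f "$marker"
```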

View File

@ -0,0 +1 @@
from="172.24.1.0/24" ${PUB_KEY}

View File

@ -0,0 +1,20 @@
log file /var/log/frr/bgpd.log
!
!
router bgp ${FRR_AS}
  bgp router-id ${FRR_IP}
  neighbor calico peer-group
  neighbor calico remote-as ${CALICO_AS}
  bgp listen range 0.0.0.0/0 peer-group calico
  !
  address-family ipv4 unicast
    neighbor calico route-map calico-node-fix-same-as out
  exit-address-family
!
route-map calico-node-fix-same-as permit 100
  set as-path exclude ${CALICO_AS}
  set as-path prepend ${FRR_AS}
!
line vty
!
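The route-map in this template strips the shared Calico AS from advertised paths and substitutes the router's own AS, so Calico peers do not reject routes for containing their own AS. The `${FRR_AS}`/`${CALICO_AS}` placeholders are filled in before the file is installed; a minimal bash sketch of that substitution (the sample AS numbers are hypothetical, not values the gate uses):

```shell
# Hypothetical AS numbers; the gate supplies the real values at render time.
FRR_AS=64688
CALICO_AS=64671
# Bash expands the same ${...} placeholders used in the template above.
stanza="route-map calico-node-fix-same-as permit 100
  set as-path exclude ${CALICO_AS}
  set as-path prepend ${FRR_AS}"
echo "$stanza"
```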

View File

@ -0,0 +1,7 @@
zebra=yes
bgpd=yes
ospfd=no
ospf6d=no
ripd=no
ripngd=no
isisd=no

View File

@ -0,0 +1,19 @@
#
# If this option is set the frr script automatically loads
# the config via "vtysh -b" when the servers are started.
# Check /etc/pam.d/frr if you intend to use "vtysh"!
#
vtysh_enable=yes
zebra_options=" -s 90000000 --daemon -A 0.0.0.0"
bgpd_options=" --daemon -A 0.0.0.0 -p 179"
ospfd_options=" --daemon -A 127.0.0.1"
ospf6d_options=" --daemon -A ::1"
ripd_options=" --daemon -A 127.0.0.1"
ripngd_options=" --daemon -A ::1"
isisd_options=" --daemon -A 127.0.0.1"
pimd_options=" --daemon -A 127.0.0.1"
ldpd_options=" --daemon -A 127.0.0.1"
# The list of daemons to watch is automatically generated by the init script.
watchfrr_enable=yes
watchfrr_options=(-adz -r /usr/sbin/service\ frr\ restart\ %s -s /usr/sbin/service\ frr\ start\ %s -k /usr/sbin/service\ frr\ stop\ %s -b \  -t 90)

View File

@ -0,0 +1,4 @@
- label: 'None'
  filesystem: ${FS_TYPE}
  device: /dev/${DISK_DEVICE}
  overwrite: true

View File

@ -0,0 +1,9 @@
${DNS_DOMAIN} {
    file ${ZONE_FILE}
    log
}
. {
    forward . ${DNS_SERVERS}
    log
}

View File

@ -0,0 +1 @@
${HOSTNAME} IN A ${HOSTIP}

View File

@ -0,0 +1,4 @@
$ORIGIN ${INGRESS_DOMAIN}.
${INGRESS_DOMAIN}. IN SOA localhost. root.localhost. ( 2007120710 1d 2h 4w 1h )

View File

@ -0,0 +1,4 @@
<network>
<name>${NETNAME}</name>
<bridge name='${NETNAME}' stp='on' delay='0'/>
</network>

View File

@ -0,0 +1,7 @@
<network>
<name>${NETNAME}</name>
<bridge name='${NETNAME}' stp='on' delay='0'/>
<ip address="${NETIP}" netmask="${NETMASK}" />
<mac address='${NETMAC}'/>
<forward mode="nat"/>
</network>

View File

@ -0,0 +1,3 @@
#cloud-config
instance-id: ucp-${NAME}
local-hostname: ${NAME}

View File

@ -0,0 +1 @@
- [${DISK_DEVICE}, ${MOUNTPOINT}, auto, "defaults", 0, 2]

View File

@ -0,0 +1,5 @@
IdentityFile ${SSH_CONFIG_DIR}/id_rsa
LogLevel QUIET
StrictHostKeyChecking no
UserKnownHostsFile /dev/null

View File

@ -0,0 +1,2 @@
Host ${SSH_NODE_HOSTNAME}
    HostName ${SSH_NODE_IP}

View File

@ -0,0 +1,20 @@
#cloud-config
disable_root: false
hostname: ${NAME}
manage_etc_hosts: false
package_update: true
apt_preserve_sources_list: true
ssh_authorized_keys:
  - ${SSH_PUBLIC_KEY}
chpasswd:
  list: |
    root:password
  expire: false
ntp:
  pools: [${NTP_POOLS}]
  servers: [${NTP_SERVERS}]

View File

@ -0,0 +1,97 @@
#!/usr/bin/env bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
SCRIPT_DIR=$(realpath "$(dirname "${0}")")
WORKSPACE=$(realpath "${SCRIPT_DIR}/..")
GATE_UTILS=${WORKSPACE}/seaworthy-virt/airship_gate/lib/all.sh
GATE_COLOR=${GATE_COLOR:-1}
MANIFEST_ARG=${1:-"multinode_deploy"}
if [ -z "$GATE_MANIFEST" ]
then
    GATE_MANIFEST=${WORKSPACE}/seaworthy-virt/airship_gate/manifests/${MANIFEST_ARG}.json
fi
export GATE_COLOR
export GATE_MANIFEST
export GATE_UTILS
export WORKSPACE
source "${GATE_UTILS}"
sudo chmod -R 755 "${TEMP_DIR}"
STAGES_DIRS=()
while read -r libdir; do
    if [[ -d "$libdir" && -r "$libdir" && -x "$libdir" ]]; then
        STAGES_DIRS+=( "$libdir" )
    else
        log_warn "Could not find stage library $libdir, skipping..."
    fi
done <<< "$(jq -c ".configuration.stage_libraries // [] | .[]" < "$GATE_MANIFEST")"
STAGES_DIRS+=( "${WORKSPACE}/seaworthy-virt/airship_gate/stages" )
log_temp_dir
echo
STAGES=$(mktemp)
jq -cr '.stages | .[]' "${GATE_MANIFEST}" > "${STAGES}"
# NOTE(mark-burnett): It is necessary to use a non-stdin file descriptor for
# the read below, since we will be calling SSH, which will consume the
# remaining data on STDIN.
exec 3< "$STAGES"
while read -r -u 3 stage; do
    NAME=$(echo "${stage}" | jq -r .name)
    STAGE_SCRIPT="$(echo "${stage}" | jq -r .script)"
    STAGE_CMD=""
    for dir in "${STAGES_DIRS[@]}"; do
        if [ -x "${dir}/${STAGE_SCRIPT}" ]; then
            STAGE_CMD="${dir}/${STAGE_SCRIPT}"
            break
        fi
    done
    if [ -z "$STAGE_CMD" ]; then
        log_error "$STAGE_SCRIPT not found!"
        exit 1
    fi
    log_stage_header "${NAME}"
    if echo "${stage}" | jq -r '.arguments | @sh' | xargs "${STAGE_CMD}"; then
        log_stage_success
    else
        log_color_reset
        log_stage_error "${NAME}" "${LOG_FILE}"
        if echo "${stage}" | jq -e .on_error > /dev/null; then
            log_stage_diagnostic_header
            ON_ERROR=${WORKSPACE}/seaworthy-virt/airship_gate/on_error/$(echo "${stage}" | jq -r .on_error)
            set +e
            $ON_ERROR
        fi
        log_stage_error "${NAME}" "${TEMP_DIR}"
        exit 1
    fi
    log_stage_footer "${NAME}"
    echo
done
log_note "Site Definition YAMLs found in ${DEFINITION_DEPOT}"
echo
log_huge_success

View File

@ -0,0 +1,13 @@
# source_name, tag, cache_name
coredns/coredns,0.9.9,coredns
gcr.io/google_containers/hyperkube-amd64,v1.10.2,hyperkube
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64,1.14.4,k8s-dns-dnsmasq-nanny-amd64
gcr.io/google_containers/k8s-dns-kube-dns-amd64,1.14.4,k8s-dns-kube-dns-amd64
gcr.io/google_containers/k8s-dns-sidecar-amd64,1.14.4,k8s-dns-sidecar-amd64
gcr.io/kubernetes-helm/tiller,v2.7.2,tiller
lachlanevenson/k8s-helm,v2.7.2,helm
quay.io/airshipit/armada,latest,armada
quay.io/calico/cni,v1.11.0,calico-cni
quay.io/calico/ctl,v1.6.1,calico-ctl
quay.io/calico/kube-controllers,v1.0.0,calico-kube-controllers
quay.io/calico/node,v2.6.1,calico-node
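Each row above is `source_name,tag,cache_name`: the cache scripts pull the source image and retag it under the local registry. A minimal sketch of how one row resolves to the registry-local name (parsing mirrors this file's comma-separated format; the registry address is the `localhost:5000` used by the cache scripts):

```shell
# Resolve one IMAGES row (source_name,tag,cache_name) into the
# localhost:5000 name that the caching registry serves.
row="coredns/coredns,0.9.9,coredns"
IFS=, read -r src tag dst <<< "$row"
full_dst="localhost:5000/${dst}:${tag}"
echo "$full_dst"  # -> localhost:5000/coredns:0.9.9
```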

View File

@ -0,0 +1,26 @@
#!/usr/bin/env bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ex
REGISTRY_DATA_DIR=${REGISTRY_DATA_DIR:-/mnt/registry}
docker run -d \
    -p 5000:5000 \
    -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 \
    --restart=always \
    --name registry \
    -v "$REGISTRY_DATA_DIR:/var/lib/registry" \
    registry:2

View File

@ -0,0 +1,18 @@
#!/usr/bin/env bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ex
docker rm -fv registry

View File

@ -0,0 +1,29 @@
#!/usr/bin/env bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -ex
IMAGES_FILE="$(dirname "$0")/IMAGES"
IFS=,
grep -v '^#.*' "$IMAGES_FILE" | while read -r src tag dst; do
    echo "src=$src tag=$tag dst=$dst"
    sudo docker pull "$src:$tag"
    full_dst="localhost:5000/$dst:$tag"
    sudo docker tag "$src:$tag" "$full_dst"
    sudo docker push "$full_dst"
done

View File

@ -0,0 +1,114 @@
#!/usr/bin/env bash
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
SCRIPT_DIR=$(realpath "$(dirname "${0}")")
WORKSPACE=$(realpath "${SCRIPT_DIR}/..")
GATE_UTILS=${WORKSPACE}/seaworthy-virt/airship_gate/lib/all.sh
GATE_COLOR=${GATE_COLOR:-1}
export GATE_COLOR
export GATE_UTILS
export WORKSPACE
source "${GATE_UTILS}"
REQUIRE_RELOG=0
log_stage_header "Installing Packages"
export DEBIAN_FRONTEND=noninteractive
sudo -E apt-get update -qq
sudo -E apt-get install -q -y --no-install-recommends \
    curl \
    docker.io \
    fio \
    genisoimage \
    jq \
    libstring-shellquote-perl \
    libvirt-bin \
    qemu-kvm \
    qemu-utils \
    virtinst
log_stage_header "Joining User Groups"
for grp in docker libvirtd libvirt; do
    # Match the whole group name; a bare grep would let "libvirtd"
    # satisfy the check for "libvirt".
    if ! groups | grep -qw "$grp"; then
        sudo adduser "$(id -un)" "$grp" || echo "Group $grp not found, not added to user"
        REQUIRE_RELOG=1
    fi
done
make_virtmgr_account
HTTPS_PROXY=${HTTPS_PROXY:-${https_proxy}}
HTTP_PROXY=${HTTP_PROXY:-${http_proxy}}
if [[ -n "${HTTPS_PROXY}" ]]
then
    log_stage_header "Configuring Apt Proxy"
    cat << EOF | sudo tee /etc/apt/apt.conf.d/50proxyconf
Acquire::https::proxy "${HTTPS_PROXY}";
Acquire::http::proxy "${HTTPS_PROXY}";
EOF
    log_stage_header "Configuring Docker Proxy"
    sudo mkdir -p /etc/systemd/system/docker.service.d/
    cat << EOF | sudo tee /etc/systemd/system/docker.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=${HTTP_PROXY}"
Environment="HTTPS_PROXY=${HTTPS_PROXY}"
Environment="NO_PROXY=${NO_PROXY}"
EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker
fi
log_stage_header "Setting Kernel Parameters"
if [ "xY" != "x$(cat /sys/module/kvm_intel/parameters/nested)" ]; then
    log_note Enabling nested virtualization.
    sudo modprobe -r kvm_intel
    sudo modprobe kvm_intel nested=1 || log_error Nested Virtualization not supported
    echo "options kvm-intel nested=1" | sudo tee /etc/modprobe.d/kvm-intel.conf
fi
if ! sudo virt-host-validate qemu &> /dev/null; then
    log_note Host did not validate virtualization check:
    sudo virt-host-validate qemu || true
fi
if [[ ! -d ${VIRSH_POOL_PATH} ]]; then
    sudo mkdir -p "${VIRSH_POOL_PATH}"
fi
log_stage_header "Disabling br_netfilter"
cat << EOF | sudo tee /etc/sysctl.d/60-bridge.conf
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
EOF
cat << EOF | sudo tee /etc/udev/rules.d/99-bridge.rules
ACTION=="add", SUBSYSTEM=="module", KERNEL=="br_netfilter", \
RUN+="/lib/systemd/systemd-sysctl --prefix=/net/bridge"
EOF
besteffort sudo sysctl -p /etc/sysctl.d/60-bridge.conf
if [[ ${REQUIRE_RELOG} -eq 1 ]]; then
    echo
    log_note "You must ${C_HEADER}log out${C_CLEAR} and back in before the gate is ready to run."
fi
log_huge_success