Use overrides (values) to describe VMs

Previously the VM configuration was embedded in the chart's templates;
this change moves the VM definitions into values so the templates
iterate over them, creating resources as required.

Change-Id: I75e60bd31ee5debda90a703775215e92480d572c
changes/10/569210/1
Chris Wedgwood 6 years ago
parent a821aeff00
commit 92a678bb5c

@ -0,0 +1,11 @@
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y qemu-kvm dnsmasq bridge-utils mkisofs curl jq wget iptables
RUN apt-get clean
RUN rm -f /var/lib/apt/lists/* || true
ENTRYPOINT ["/bin/sleep", "infinity"]
VOLUME "/image"
EXPOSE 22
CMD []

@ -9,22 +9,23 @@ test: install
install: build
@echo
-helm delete --purge berth >>helm.log 2>&1
-helm delete --purge berth
@echo
@[ -f override.yaml ] || touch override.yaml
helm install ./berth-0.1.0.tgz --values=override.yaml --name=berth >>helm.log 2>&1
@sleep 5.0 # give k8s a chance to see the IP
helm install --name=berth --debug ./berth
helm upgrade --debug berth ./berth \
--values examples/cirros-test.yaml \
--values examples/demo-ub14-apache.yaml \
--values examples/ub16-smp-test.yaml
@sleep 5 # give k8s a chance to see the IP
@echo
kubectl get pods -o wide
build:
@echo
helm lint berth
@echo
helm package berth
clean:
rm -f berth-0.1.0.tgz helm.log
rm -f *~ */*~ */*/*~ berth-0.1.0.tgz
.PHONY: all default build clean

@ -0,0 +1,105 @@
Berth is a deliberately minimalist VM runner for Kubernetes.
## TL;DR Installation Guide
Install the bare Berth chart:
```
# Have (recent) Helm and Kubernetes (2.5.x and 1.6.x or later)
#
helm install --name=berth ./berth
kubectl get pods -o wide
```
This should complete quickly and error-free. At this stage,
install one of the examples:
```
# helm upgrade berth ./berth --values=examples/cirros-test.yaml
```
You should be able to SSH to your VM at the Kubernetes IP for the
container, which you can retrieve with `kubectl get all -o wide`. VNC
access is available on port 5900.
Additional examples may be added with additional `--values` arguments.
For example:
```
# helm upgrade berth ./berth --values=examples/cirros-test.yaml --values=examples/demo-ub14-apache.yaml
```
You can use a similar method to purge all VMs:
```
# helm upgrade berth ./berth --set 'vmlist={}'
```
<!-- https://mostsecure.pw/ -->
### Example
[Quick installation / sample](https://asciinema.org/a/4VazbwsokL3zpnGPf27eyFIfe)
### Why this?
The requirements are very narrow right now, and the existing
alternatives don't align well at present. This will likely change in
time, at which point we can realign the internal implementation.
#### Minimalist requirements
* Run VMs from inside of Kubernetes
* Work with Calico
* Have VM life-cycle match that of pods
* Have VMs benefit from Kubernetes resiliency
* Allow for persistent storage
* Allow for state injection/access from ConfigMaps
## Requirements:
* Helm 2.5.x
* Kubernetes 1.6.x
This does not need to be installed as part of the OpenStack chart
collection.
## How it works:
At a high level, it works like this:
* Create a SNAT/DNAT enabled linux bridge.
* Assign the bridge a private IP address from a small /30 subnet
(controlled with `VM_IP` and `VM_GW`)
* Plug the VM network interface into the bridge.
* Run a dnsmasq process to hand the VM the right name-servers and
  DNS search strings, extracted from the parent container; it also
  assigns the private IP address to the VM and has it use the
  bridge's IP as its default gateway.
* Set up SNAT/DNAT on the parent container to 1:1 map all ports and
  protocols to the VM, except TCP:5900, which is kept aside for VNC
  access (controlled with the NO_VNC environment variable).
* At this point, the VM effectively assumes the pod's assigned IP.
* Feed any meta-data or user-data down into the VM by turning the
  like-named ConfigMap mounts into an ISO presented to the guest.
The startvm entry-point supports several environment variables:
* `IMG_SOURCE` which is an http or https URL that contains a qcow2
image. It can also be a full path to a local file baked into the
container image, e.g. "/image.qcow"
* `IMG_TARGET` the name to save the image above as in the shared
volume.
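A minimal sketch of how these two variables drive the image fetch (the URL is just a placeholder, and the curl line is commented out since it needs network access):
```
#!/bin/bash
# Hypothetical values; IMG_SOURCE may also be a local path baked into the image.
IMG_SOURCE=${IMG_SOURCE:-http://example.com/images/cirros.qcow2}
IMG_TARGET=${IMG_TARGET:-cirros.qcow2}

if [ ! -e "/image/${IMG_TARGET}" ]; then
    echo "Fetching missing image target"
    # curl "${IMG_SOURCE}" > "/image/${IMG_TARGET}"
fi
BASE="/image/${IMG_TARGET}"
echo "base image: ${BASE}"
```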
It also supports two files at `/userdata` and `/metadata`, which
should be mounted as ConfigMaps when using Kubernetes; they hold the
user-data and meta-data (as YAML) that will be fed to the VM as a
config-drive ISO.
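The config-drive layout that gets wrapped into the ISO looks like this (a sketch; the mkisofs line is commented out since it needs the mkisofs/genisoimage tool):
```
#!/bin/bash
# Build the OpenStack config-drive directory tree, as startvm does.
TMPDIR=$(mktemp -d -t berthcfg.XXXXXX)
mkdir -p "${TMPDIR}/openstack/2012-08-10"
echo '{ "uuid": "example-01-vm.mydomain.com" }' \
    > "${TMPDIR}/openstack/2012-08-10/meta_data.json"
printf '#cloud-config\nfqdn: example-01-vm.mydomain.com\n' \
    > "${TMPDIR}/openstack/2012-08-10/user_data"
# The volume label "config-2" is what cloud-init looks for:
# mkisofs -R -V config-2 -o cloud-drive.iso "${TMPDIR}"
ls "${TMPDIR}/openstack/2012-08-10"
```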
The "pet" version of the image, created with `qemu-img create -b` to
base it on the source, is stored in a separate volume dedicated to the
VM itself and named after the container hostname.
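That naming can be sketched as follows (the hostname and base path are placeholders; the qemu-img line is commented out since it needs qemu-utils and a real base image):
```
#!/bin/bash
HOSTNAME=example-01-vm                  # normally inherited from the container
BASE=/image/ubuntu-14.04-amd64.img      # hypothetical base image fetched earlier
KVM_IMAGE="/image/${HOSTNAME}.qcow2"
# Thin copy-on-write overlay on the shared base image:
# qemu-img create -f qcow2 -b "${BASE}" "${KVM_IMAGE}"
echo "pet image: ${KVM_IMAGE}"
```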
There are a few other parameters you can control as an operator:
* `VM_IP` is the IP address the VM should be allocated by DHCP. The
container will 1:1 NAT except for port 5900 for VNC access (defaults
to 192.168.254.2)
* `VM_GW` is the gateway IP address the VM should use for its default
route (defaults to 192.168.254.1)
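For reference, startvm's atoi helper converts such dotted-quad addresses to integers for the subnet math; a standalone version:
```
#!/bin/bash
# Integer representation of a dotted-quad IP (same logic as startvm's atoi).
atoi() {
    local IP=$1 IPNUM=0
    for ((i=0; i<4; ++i)); do
        ((IPNUM += ${IP%%.*} * 256**(3-i)))
        IP=${IP#*.}
    done
    echo $IPNUM
}

atoi 192.168.254.2   # the default VM_IP
```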

TODO

@ -1,23 +0,0 @@
[Put these in Jira]
*Require* ssh key, throw an error if missing.
Chart definition vs values.yaml:
Move image source/target details from chart to values
(IMG_SOURCE/IMG_TARGET)
Move the VM definitions entirely into the values.yaml file; have
helm deployment process iterate over that creating multiple pods,
one for each VM.
Ideally this will allow incremental updates of pods without having
to know about the chart internals.
XXX Get examples of where we've done this before from Alan XXX
Add Dockerfile; update the container image we use to Ubuntu 16.04 with
a suitable KVM/Qemu.
Consider moving to stateful sets for cleaner PVC associations. See
OSH MariaDB chart for an example of this.

@ -1,16 +1,17 @@
#!/bin/bash
set -x
set -ex
echo "VER-0.1.0-1.2"
# FIXME: right now this doesn't work; need to work out why
#set -o pipefail
# Returns the integer representation of an IP arg, passed in ascii
# dotted-decimal notation (x.x.x.x)
atoi() {
IP=$1; IPNUM=0
for (( i=0 ; i<4 ; ++i )); do
((IPNUM+=${IP%%.*}*$((256**$((3-${i}))))))
IP=${IP#*.}
((IPNUM+=${IP%%.*}*$((256**$((3-${i}))))))
IP=${IP#*.}
done
echo $IPNUM
}
@ -27,23 +28,23 @@ itoa() {
generate_cloud_drive() {
metadata=/metadata
if [ ! -f $metadata ]; then
metadata=""
metadata=""
fi
userdata=/userdata
if [ ! -f $userdata ]; then
userdata=""
userdata=""
fi
if [ "$metadata" == "" -a "$userdata" == "" ]; then
return
return
fi
TMPDIR=`mktemp -d -t aicvm.XXXXXX`
if [ $? -ne 0 ]; then
echo "Failed to create temporary directory"
exit 1
echo "Failed to create temporary directory"
exit 1
fi
# create the config-drive directory skeleton
@ -54,17 +55,17 @@ generate_cloud_drive() {
cd $OLD_PWD
if [ -f $metadata ]; then
cp $metadata ${TMPDIR}/openstack/2012-08-10/meta_data.json
cp $metadata ${TMPDIR}/openstack/2012-08-10/meta_data.json
fi
if [ -f $userdata ]; then
cp $userdata ${TMPDIR}/openstack/2012-08-10/user_data
cp $userdata ${TMPDIR}/openstack/2012-08-10/user_data
fi
iso="cloud-drive.iso"
mkisofs -R -V config-2 -o $iso ${TMPDIR}
if [ $? -ne 0 ]; then
echo "Failed to create cloud-drive ISO image for cloud-init"
exit 1
echo "Failed to create cloud-drive ISO image for cloud-init"
exit 1
fi
echo $iso
}
@ -110,38 +111,38 @@ if [ -e /dev/vm/root ]; then
else
if [ -e "${IMG_TARGET}" ]; then
BASE=${IMG_TARGET}
BASE=${IMG_TARGET}
else
if [ ! -d "/image" ]; then
echo "/image directory does not exist, failed to mount volume?"
exit 2
fi
if [ ! -d "/image" ]; then
echo "/image directory does not exist, failed to mount volume?"
exit 2
fi
if [ ! -e "/image/${IMG_TARGET}" ]; then
echo "Fetching missing image target"
curl ${IMG_SOURCE} > /image/${IMG_TARGET}
fi
if [ ! -e "/image/${IMG_TARGET}" ]; then
echo "Fetching missing image target"
curl ${IMG_SOURCE} > /image/${IMG_TARGET}
fi
BASE=/image/${IMG_TARGET}
BASE=/image/${IMG_TARGET}
fi
if [ ! -d "/image" ]; then
echo "/image directory does not exist, failed to mount volume /image?"
exit 2
echo "/image directory does not exist, failed to mount volume /image?"
exit 2
fi
if [ -z "${HOSTNAME}" ]; then
echo "Could not find HOSTNAME var. Did you specify a HOSTNAME environment variable?"
echo "Could not find HOSTNAME var. Did you specify a HOSTNAME environment variable?"
fi
KVM_IMAGE=/image/${HOSTNAME}.qcow2
if [ -e "${KVM_IMAGE}" ]; then
echo "Image ${KVM_IMAGE} already exists. Not recreating"
echo "Image ${KVM_IMAGE} already exists. Not recreating"
else
qemu-img create -f qcow2 -b ${BASE} \
$KVM_IMAGE > /dev/null
$KVM_IMAGE > /dev/null
if [[ $? -ne 0 ]]; then
echo "Failed to create qcow2 image"
exit 3
@ -173,14 +174,14 @@ cidr2mask() {
local partial_octet=$(($1%8))
for ((i=0;i<4;i+=1)); do
if [ $i -lt $full_octets ]; then
mask+=255
elif [ $i -eq $full_octets ]; then
mask+=$((256 - 2**(8-$partial_octet)))
else
mask+=0
fi
test $i -lt 3 && mask+=.
if [ $i -lt $full_octets ]; then
mask+=255
elif [ $i -eq $full_octets ]; then
mask+=$((256 - 2**(8-$partial_octet)))
else
mask+=0
fi
test $i -lt 3 && mask+=.
done
echo $mask
@ -196,6 +197,24 @@ setup_bridge_networking() {
NAMESERVER=( `grep nameserver /etc/resolv.conf | grep -v "#" | cut -f2 -d ' '` )
NAMESERVERS=`echo ${NAMESERVER[*]} | sed "s/ /,/"`
SEARCH=( `grep -E ^search /etc/resolv.conf | grep -v "#" | cut -f2- -d ' ' | tr ' ' ','` )
# MAC=$(ip addr show $IFACE | grep ether | sed -e 's/^[[:space:]]*//g' -e 's/[[:space:]]*\$//g' | cut -f2 -d ' ')
# HOST_IP=$(ip addr show dev $IFACE | grep "inet $IP" | awk '{print $2}' | cut -f1 -d/)
# HOST_CIDR=$(ip addr show dev $IFACE | grep "inet $IP" | awk '{print $2}' | cut -f2 -d/)
# HOST_NETMASK=$(cidr2mask $HOST_CIDR)
# HOST_GATEWAY=$(ip route get 8.8.8.8 | grep via | cut -f3 -d ' ')
# NAMESERVER=$(grep nameserver /etc/resolv.conf | grep -v "#" | cut -f2 -d ' ') )
# NAMESERVERS=$(echo ${NAMESERVER[*]} | sed "s/ /,/")
# SEARCH=$(grep -E ^search /etc/resolv.conf | grep -v "#" | cut -f2- -d ' ' | tr ' ' ',')
# fail early if any of the above are empty
[ -n "$MAC" ]
[ -n "$HOST_IP" ]
[ -n "$HOST_CIDR" ]
[ -n "$HOST_NETMASK" ]
[ -n "$HOST_GATEWAY" ]
[ -n "$NAMESERVER" ]
[ -n "$NAMESERVERS" ]
[ -n "$SEARCH" ]
# we must enable forwarding inside the container
echo 1 > /proc/sys/net/ipv4/ip_forward
@ -205,11 +224,11 @@ setup_bridge_networking() {
# specify NO_VNC as an environment variable to disable this
# functionality
if [ -z $NO_VNC ]; then
iptables -t nat -A PREROUTING -p tcp \! --dport 5900 -d $HOST_IP -j DNAT --to-destination $VM_IP
iptables -t nat -A POSTROUTING -s $VM_IP -j SNAT --to-source $HOST_IP
iptables -t nat -A PREROUTING -p tcp \! --dport 5900 -d $HOST_IP -j DNAT --to-destination $VM_IP
iptables -t nat -A POSTROUTING -s $VM_IP -j SNAT --to-source $HOST_IP
else
iptables -t nat -A PREROUTING -d $HOST_IP -j DNAT --to-destination $VM_IP
iptables -t nat -A POSTROUTING -s $VM_IP -j SNAT --to-source $HOST_IP
iptables -t nat -A PREROUTING -d $HOST_IP -j DNAT --to-destination $VM_IP
iptables -t nat -A POSTROUTING -s $VM_IP -j SNAT --to-source $HOST_IP
fi
# generate VM specifics
@ -258,4 +277,14 @@ setup_bridge_networking
HOST_IP=`ip addr show dev $IFACE | grep "inet $IP" | awk '{print $2}' | cut -f1 -d/`
VNC="-vnc $HOST_IP:0"
exec $LAUNCHER qemu-system-x86_64 -enable-kvm $VNC `eval echo $KVM_BLK_OPTS` `eval echo $KVM_NET_OPTS` -usbdevice tablet -nographic $KVM_ARGS
exec $LAUNCHER qemu-system-x86_64 \
-smp "$IMG_VCPU" \
-m "$IMG_RAM_MB" \
-machine q35 \
-cpu host,+x2apic \
-vga vmware \
-enable-kvm \
$VNC \
`eval echo $KVM_BLK_OPTS` \
`eval echo $KVM_NET_OPTS` \
-usbdevice tablet -nographic $KVM_ARGS

@ -1,132 +0,0 @@
# FIXME(cw); refactor into multiple per-function files
# FIXME(cw) consider using OSH helm-toolkit.utils.template
{{- define "template" -}}
{{- $name := index . 0 -}}
{{- $context := index . 1 -}}
{{- $last := base $context.Template.Name }}
{{- $wtf := $context.Template.Name | replace $last $name -}}
{{ include $wtf $context }}
{{- end -}}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: cloudinit
data:
metadata: |
{ "uuid": "example-01-vm.mydomain.com" }
userdata: |
#cloud-config
fqdn: example-01-vm.mydomain.com
users:
- name: root
ssh-authorized-keys:
- {{ .Values.auth.ssh_key }}
ssh_pwauth: True
runcmd:
- [ apt-get, update ]
- [ apt-get, install, -y, --force-yes, apache2 ]
---
apiVersion: v1
kind: ConfigMap
metadata:
name: configmap-startvm
data:
startvm: |
#!/bin/bash
#
# start of startvm
{{ tuple "bin/_startvm.txt" . | include "template" | indent 4 }}
# end of startvm
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: example-01-vm
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: {{ .Values.volume.size }}
{{ if not .Values.volume.class_name }}
storageClassName: {{ .Values.volume.class_name }}
{{ end }}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: berth
spec:
replicas: 1
template:
metadata:
labels:
app: berth
annotations:
pod.beta.kubernetes.io/hostname: example-01-vm
spec:
nodeSelector:
{{ .Values.labels.node_selector_key }}: {{ .Values.labels.node_selector_value }}
hostNetwork: false
hostPID: false
securityContext:
runAsUser: 0
containers:
- name: example-01-vm
imagePullPolicy: IfNotPresent
image: {{ .Values.images.entrypoint }}
env:
- name: IMG_SOURCE
value: http://stupidest.org/vm/ubuntu-14.04-amd64.img
- name: IMG_TARGET
value: ubuntu-14.04-amd64.img
securityContext:
privileged: true
command:
- /usr/local/bin/startvm
ports:
- containerPort: {{ .Values.network.port }}
- containerPort: {{ .Values.network.vnc }}
readinessProbe:
tcpSocket:
port: {{ .Values.network.vnc }}
volumeMounts:
- name: volume-startvm
mountPath: /usr/local/bin/startvm
subPath: startvm
- name: image
mountPath: /image
- name: dev
mountPath: /dev
- name: sys
mountPath: /sys
- name: cloudinit
mountPath: /userdata
subPath: userdata
- name: cloudinit
mountPath: /metadata
subPath: metadata
volumes:
- name: volume-startvm
configMap:
name: configmap-startvm
defaultMode: 0755
- name: image
persistentVolumeClaim:
claimName: example-01-vm
- name: dev
hostPath:
path: /dev
- name: sys
hostPath:
path: /sys
- name: cloudinit
configMap:
name: cloudinit
items:
- key: userdata
path: userdata
- key: metadata
path: metadata

@ -0,0 +1,33 @@
{{- define "template" -}}
{{- $name := index . 0 -}}
{{- $context := index . 1 -}}
{{- $last := base $context.Template.Name }}
{{- $wtf := $context.Template.Name | replace $last $name -}}
{{ include $wtf $context }}
{{- end -}}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: configmap-generic-startvm
data:
startvm: |
{{ tuple "bin/_startvm.txt" . | include "template" | indent 4 }}
---
{{- range $name, $vm := .Values.vmlist }}
{{- if $vm.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: configmap-cloudconfig-{{ $name }}
data:
metadata: {{ toYaml $vm.cloudconfig.metadata | indent 4 }}
userdata: {{ toYaml $vm.cloudconfig.userdata | indent 4 }}
---
{{- end }}
{{- end }}

@ -0,0 +1,98 @@
{{- $envAll := . }}
{{ range $name, $vm := .Values.vmlist }}
# id: {{- $name }}
{{- if $vm.enabled }}
# vm enabled
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: berth-{{ $name }}
spec:
replicas: 1
template:
metadata:
labels:
app: berth
annotations:
pod.beta.kubernetes.io/hostname: {{ $name }}
spec:
nodeSelector:
{{ $envAll.Values.labels.node_selector_key }}: {{ $envAll.Values.labels.node_selector_value }}
hostNetwork: false
hostPID: false
securityContext:
runAsUser: 0
containers:
- name: {{ $name }}
imagePullPolicy: IfNotPresent
image: {{ $envAll.Values.images.vmrunner }}
env:
- name: IMG_SOURCE
value: {{ $vm.vmconfig.rootfs.sourceurl }}
- name: IMG_TARGET
value: {{ $vm.vmconfig.rootfs.localtarget }}
- name: IMG_VCPU
value: "{{ $vm.vmconfig.cpu.vcpu }}"
- name: IMG_RAM_MB
value: "{{ $vm.vmconfig.cpu.ram_mb }}"
securityContext:
privileged: true
command:
- /usr/local/bin/startvm
{{- if $vm.netconfig.ports }}
ports:
{{- range $for, $port := $vm.netconfig.ports }}
- containerPort: {{ $port }}
{{- end }}
{{- end }}
{{- if $vm.netconfig.readinessTcpProbe }}
readinessProbe:
tcpSocket:
port: {{ $vm.netconfig.readinessTcpProbe }}
{{- end }}
volumeMounts:
- name: volume-startvm
mountPath: /usr/local/bin/startvm
subPath: startvm
- name: image
mountPath: /image
- name: dev
mountPath: /dev
- name: sys
mountPath: /sys
- name: volume-cloudinit
mountPath: /userdata
subPath: userdata
- name: volume-cloudinit
mountPath: /metadata
subPath: metadata
volumes:
- name: volume-startvm
configMap:
name: configmap-generic-startvm
defaultMode: 0755
- name: image
persistentVolumeClaim:
claimName: {{ $name }}
- name: dev
hostPath:
path: /dev
- name: sys
hostPath:
path: /sys
- name: volume-cloudinit
configMap:
name: configmap-cloudconfig-{{ $name }}
items:
- key: userdata
path: userdata
- key: metadata
path: metadata
{{- else }}
# {{ $name }} not enabled!
{{- end }}
---
{{- end }}

@ -0,0 +1,23 @@
{{- range $name, $vm := .Values.vmlist }}
# id: {{- $name }}
{{- if $vm.enabled }}
# vm enabled
# {{ $vm.vmconfig.rootfs.pvc_size }} {{ $vm.vmconfig.rootfs.pvc_class }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ $name }}
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: {{ $vm.vmconfig.rootfs.pvc_size }}
{{- if $vm.vmconfig.rootfs.pvc_class }}
storageClassName: {{ $vm.vmconfig.rootfs.pvc_class }}
{{- end }}
{{- else }}
# vm disabled - skipping
{{- end }}
---
{{- end }}

@ -1,18 +1,12 @@
auth:
ssh_key:
images:
entrypoint: quay.io/attcomdev/kvm-manager:latest
vmrunner: quay.io/attcomdev/kvm-manager:latest
labels:
node_selector_key: openstack-control-plane
node_selector_value: enabled
volume:
class_name:
size: 25Gi
# by default in the chart's values.yaml this is empty; use an
# override file and helm --values=... for your specific values
vmlist: { }
network:
port: 22
vnc: 5900

@ -0,0 +1,10 @@
# this vm isn't fully defined; leaving it disabled is enough to have
# the templates skip over the details and not barf
vmlist:
- bogus-vm:
enabled: false
extra: "thang"
vmconfig: "this is wrong"
cloudconfig:
metadata: "{ "
userdata: false

@ -0,0 +1,22 @@
vmlist:
cirros-test:
enabled: true
vmconfig:
cpu:
vcpu: 1
ram_mb: 256
rootfs:
sourceurl: http://stupidest.org/vm/cirros-0.3.5-x86_64-disk.img
localtarget: cirros-vm.qcow2
pvc_size: 128Mi
netconfig:
ports:
ssh: 22
vnc: 5900
cloudconfig:
metadata: |
{ "uuid": "093772fe-d6a3-4eea-84bc-5966661a0c3e" }
userdata: |
#cloud-config
fqdn: cirros.example.com
bogus_not_used: "this won't hurt anything but shouldn't be here"

@ -0,0 +1,33 @@
vmlist:
demo-ub14-apache:
enabled: true
vmconfig:
cpu:
vcpu: 1
ram_mb: 1024
rootfs:
sourceurl: http://stupidest.org/vm/ubuntu-14.04-amd64.img
localtarget: ubuntu-14.04-amd64.qcow2
pvc_size: 5Gi
pvc_class: nfs
netconfig:
ports:
ssh: 22
vnc: 5900
readinessTcpProbe: 22
cloudconfig:
metadata: |
{ "uuid": "apache-demo.example.com" }
userdata: |
#cloud-config
fqdn: apache-demo.example.com
users:
- name: root
ssh-authorized-keys:
- "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII+k1AYpUX7Y6+pVzkw3JPbRPNpoh7m1rZBP4Qa37Wz2 user@host"
- "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPE/zGmNx3W52ztb/2vvTcgUN7RGbq172QXGcXKAagU1 user@host"
- "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKVeMUpC6Ba0UwyLeCCWexr/sc4kYwKW6mTSTpcQHDm+F5UqoW3pnbGPn3WSJB2AbCMP9oG0qQrLP2zHmE7fyog= otheruser@otherhost"
ssh_pwauth: True
runcmd:
- [ apt-get, update ]
- [ apt-get, install, -y, --force-yes, apache2 ]

@ -0,0 +1,36 @@
vmlist:
ub16-smp-test:
enabled: true
vmconfig:
cpu:
vcpu: 2
ram_mb: 2048
rootfs:
sourceurl: http://stupidest.org/vm/cw-ub16-test-3a.qcow2
localtarget: cw-ub16-test.qcow2
pvc_size: 7Gi
netconfig:
ports:
ssh: 22
vnc: 5900
cloudconfig:
metadata: |
{
"uuid": "093772fe-d6a3-4eea-84bc-5966661a0c3e",
"name": "my-name",
"instance-id": "my-instance-id",
"availability-zone": "my-az",
"hostname": "my-hostname",
"local-hostname": "my-local-hostname",
"launch-index": "123"
}
userdata: |
#cloud-config
fqdn: ub16-smp-test.example.com
users:
- name: root
ssh-authorized-keys:
- "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII+k1AYpUX7Y6+pVzkw3JPbRPNpoh7m1rZBP4Qa37Wz2 user@host"
- "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPE/zGmNx3W52ztb/2vvTcgUN7RGbq172QXGcXKAagU1 user@host"
- "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKVeMUpC6Ba0UwyLeCCWexr/sc4kYwKW6mTSTpcQHDm+F5UqoW3pnbGPn3WSJB2AbCMP9oG0qQrLP2zHmE7fyog= otheruser@otherhost"
ssh_pwauth: True

@ -0,0 +1,13 @@
#!/bin/sh
set -ex
. /etc/os-release
type=${ID_LIKE:-$ID}
if [ "$type" = "debian" ] ; then
apt-get update
apt-get install -y netcat-openbsd jq
else
yum install -y netcat jq
fi

@ -0,0 +1,41 @@
#!/bin/bash
set -ex
NS=berth
helm install --name=berth --debug ./berth --values=examples/cirros-test.yaml --namespace="${NS}"
# wait until we get a PODIP
while : ; do
PODIP=$(kubectl -n "${NS}" get pods -o wide -o json | jq -r '.items[].status.podIP')
if [ -n "$PODIP" -a "null" != "$PODIP" ] ; then
break
fi
echo "waiting for PODIP"
# XXX
kubectl get pods --all-namespaces
sleep 2
done
kubectl -n "${NS}" get pods
# wait for the pod to come up and say something on SSH
timeout=60
t=0
while : ; do
if echo "bye" | nc "${PODIP}" 22 | grep --quiet ^SSH ; then
echo "VM up"
break
fi
if [ $t -gt $timeout ] ; then
exit 2
fi
t=$(($t + 5))
sleep 5
done
# verify we can cleanup
helm upgrade berth ./berth --values=berth/values.yaml
helm delete --purge berth