Sample script for deploying Kubernetes cluster
This patch implements the spec "Support deploying Kubernetes cluster with
MgmtDriver" in blueprint bp/cnf-support-with-etsi-nfv-specs. It includes
sample mgmt_driver scripts that deploy a Kubernetes cluster on VMs created
by the openstack_driver, supporting the instantiate/terminate/scale/heal
operations. It also includes a shell script that actually installs the
Kubernetes cluster on the VMs, and a user guide that shows how to use this
sample script.

Implements: bp/cnf-support-with-etsi-nfv-specs
Change-Id: I4d0085ffa3b4c90741ebb169b96f69113e2bb6d7
parent 7fc77c1654 · commit a43e5434b6
@@ -67,6 +67,7 @@ Use cases
   vnffg_usage_guide_advanced.rst
   vnfm_usage_guide.rst
   placement_policy_usage_guide.rst
+  mgmt_driver_deploy_k8s_usage_guide.rst

 Feature Documentation
 ---------------------
doc/source/user/mgmt_driver_deploy_k8s_usage_guide.rst (new file, 2319 lines)
File diff suppressed because it is too large.
@@ -0,0 +1,32 @@
---
features:
  - |
    MgmtDriver function configures applications provided by VNF vendors.
    VNF vendors can customize configuration methods for applications via
    MgmtDriver. These customizations are specified by the "interface"
    definition in ETSI NFV-SOL001 v2.6.1. We provide a sample MgmtDriver
    and scripts which can be used to deploy a Kubernetes cluster. The
    sample script for deploying a Kubernetes cluster can be used in two
    cases. One is to deploy one master node with worker nodes; in this
    case, scaling and healing of worker nodes are supported. The other is
    to deploy a high-availability cluster with three (or more) master
    nodes and worker nodes; in this case, scaling of worker nodes and
    healing of both worker and master nodes are supported. In all the
    above cases, kubeadm is used for deploying Kubernetes in the sample
    script. We also provide a user guide to help users understand how to
    use this feature.

    Instantiate single-master-node Kubernetes cluster:
      The Kubernetes cluster can be instantiated with the VNF Lifecycle
      Management Interface in ETSI NFV-SOL 003 v2.6.1.

    Instantiate multi-master-node Kubernetes cluster:
      A Kubernetes cluster with a high-availability (HA) configuration
      can be deployed.

    Scale Kubernetes worker node:
      Scaling operations on the Worker-nodes of a VNF that includes a
      Kubernetes cluster are supported with MgmtDriver.

    Heal Kubernetes master and worker nodes:
      Healing operations on the Master-nodes and Worker-nodes of a VNF
      that includes a Kubernetes cluster are supported with MgmtDriver.
samples/mgmt_driver/create_admin_token.yaml (new file, 23 lines)
@@ -0,0 +1,23 @@
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
samples/mgmt_driver/install_k8s_cluster.sh (new file, 782 lines)
@@ -0,0 +1,782 @@
#!/bin/bash
set -o xtrace

###############################################################################
#
# This script will install and set up the Kubernetes Cluster on Ubuntu.
# Operation has been confirmed on the following Ubuntu environment.
#
# * OS type          : Ubuntu(64 bit)
# * OS version       : 20.04 LTS
# * OS architecture  : amd64 (x86_64)
# * Disk/Ram size    : 15GB/2GB
# * Pre setup user   : ubuntu
#
###############################################################################

#==============================================================================
# Usage Definition
#==============================================================================
function usage {
    sudo cat <<_EOT_
$(basename ${0}) is a script to construct a Kubernetes cluster.

Usage:
  $(basename ${0}) [-d] [-o] [-m <master ip address>]
    [-w <worker ip address>] [-i <master cluster ip address>]
    [-a <k8s api cluster cidr>] [-p <k8s pod network cidr>]
    [-t <token name>] [-s <token hash>] [-k <certificate key>]

Description:
  This script constructs a Kubernetes cluster on a virtual machine.
  It can install and configure a master node or a worker node,
  as specified by the arguments.

Options:
  -m              Install and set up all master nodes (use "," to separate;
                  the first master IP is the main master IP)
  -w              Install and set up a worker node
  -i              Master cluster IP address (e.g. 192.168.120.100)
  -a              Kubernetes API cluster CIDR (e.g. 10.96.0.0/12)
  -p              Kubernetes pod network CIDR (e.g. 192.168.0.0/16)
  -d              Display the execution result in debug mode
  -o              Output the execution result to a log file
  -t              The first master's token name
  -s              The first master's token hash
  -k              The first master's certificate key
  --help, -h      Print this help

_EOT_
    exit 1
}

declare -g INSTALL_MODE=""
declare -g DEBUG_MODE="False"
declare -g OUTPUT_LOGFILE="False"
# master/worker ip
declare -g MASTER_IPADDRS=${MASTER_IPADDRS:-}
declare -a -g MASTER_IPS=${MASTER_IPS:-}
declare -g MASTER_IP=${MASTER_IP:-}
declare -g WORKER_IPADDR=${WORKER_IPADDR:-}
declare -g TOKEN_NAME=${TOKEN_NAME:-}
declare -g TOKEN_HASH=${TOKEN_HASH:-}
declare -g CERT_KEY=${CERT_KEY:-}
declare -g K8S_API_CLUSTER_CIDR=${K8S_API_CLUSTER_CIDR:-10.96.0.0/12}
declare -g K8S_POD_CIDR=${K8S_POD_CIDR:-192.168.0.0/16}

if [ "$OPTIND" = 1 ]; then
    while getopts dom:w:i:a:p:t:s:k:h OPT; do
        case $OPT in
        m)
            MASTER_IPADDRS=$OPTARG    # e.g. 192.168.120.17,192.168.120.18,192.168.120.19
            INSTALL_MODE="master"
            MASTER_IPS=(${MASTER_IPADDRS//,/ })
            MASTER_IP=${MASTER_IPS[0]}
            ;;
        w)
            WORKER_IPADDR=$OPTARG     # e.g. 192.168.120.2
            INSTALL_MODE="worker"
            ;;
        i)
            MASTER_CLUSTER_IP=$OPTARG # master cluster ip: 192.168.120.100
            ;;
        a)
            K8S_API_CLUSTER_CIDR=$OPTARG # cluster cidr: 10.96.0.0/12
            ;;
        p)
            K8S_POD_CIDR=$OPTARG      # pod network cidr: 192.168.0.0/16
            ;;
        d)
            DEBUG_MODE="True"         # start debug
            ;;
        o)
            OUTPUT_LOGFILE="True"     # output log file
            ;;
        t)
            TOKEN_NAME=$OPTARG        # token name
            ;;
        s)
            TOKEN_HASH=$OPTARG        # token hash
            ;;
        k)
            CERT_KEY=$OPTARG          # certificate key
            ;;
        h)
            echo "h option. display help"
            usage
            ;;
        \?)
            echo "Try to enter the h option." 1>&2
            ;;
        esac
    done
else
    echo "No installed getopts-command." 1>&2
    exit 1
fi
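The -m option accepts a comma-separated list of master IPs and treats the first one as the main master. A standalone sketch of that parsing, using illustrative IP values:

```shell
# Split the comma-separated -m argument into an array, as the script does.
MASTER_IPADDRS="192.168.120.17,192.168.120.18,192.168.120.19"
MASTER_IPS=(${MASTER_IPADDRS//,/ })   # replace commas with spaces, then word-split
MASTER_IP=${MASTER_IPS[0]}            # the first entry is the main master
echo "$MASTER_IP"                     # → 192.168.120.17
echo "${#MASTER_IPS[@]}"              # → 3
```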

# check parameters entered by the user
if [ "$DEBUG_MODE" == "True" ]; then
    echo "*** DEBUG MODE ***"
    set -x
fi

if [ "$OUTPUT_LOGFILE" == "True" ]; then
    echo "*** OUTPUT LOGFILE MODE ***"
    exec > /tmp/k8s_install_`date +%Y%m%d%H%M%S`.log 2>&1
fi

# Application Variables
#----------------------
# haproxy
declare -g CURRENT_HOST_IP=${CURRENT_HOST_IP:-}
declare -g MASTER_CLUSTER_PORT=16443
# kubeadm join
declare -g KUBEADM_JOIN_WORKER_RESULT=${KUBEADM_JOIN_WORKER_RESULT:-}


# Functions
#==========

# Set OS common functions
#------------------------

# Set public DNS
function set_public_dns {
    sudo sed -i -e 's/^#DNS=/DNS=8.8.8.8 8.8.4.4/g' /etc/systemd/resolved.conf
    sudo systemctl restart systemd-resolved.service
}

function set_hostname {
    tmp_master_ipaddr3=`echo ${MASTER_IP} | sudo sed -e "s/.[0-9]\{1,3\}$//"`
    local tmp_result=""
    if [[ "$INSTALL_MODE" =~ "master" ]]; then
        for _ip in `ip -4 addr | grep -oP '(?<=inet\s)\d+(\.\d+){3}'`; do
            _tmp_ip=`echo ${_ip} | sudo sed -e "s/.[0-9]\{1,3\}$//"`
            if [[ $_tmp_ip == $tmp_master_ipaddr3 ]]; then
                CURRENT_HOST_IP=$_ip
                tmp_result=`echo $_ip | cut -d"." -f4`
                break
            fi
        done
        sudo /usr/bin/hostnamectl set-hostname master$tmp_result
    elif [[ "$INSTALL_MODE" == "worker" ]]; then
        CURRENT_HOST_IP=$WORKER_IPADDR
        tmp_result=`echo $CURRENT_HOST_IP | cut -d"." -f4`
        sudo /usr/bin/hostnamectl set-hostname worker$tmp_result
    else
        echo "error. please execute sh install_k8s_cluster.sh -h."
        exit 0
    fi
}
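set_hostname derives each node's hostname from the last octet of its IP address (masterNN or workerNN). A standalone sketch of that derivation, with an illustrative IP:

```shell
# Derive a node hostname from the last octet of its IP, as set_hostname does.
ip="192.168.120.18"
last_octet=$(echo "$ip" | cut -d"." -f4)
hostname="master${last_octet}"
echo "$hostname"   # → master18
```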

function set_sudoers {
    echo "ubuntu ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/ubuntu
}

function set_hosts {
    hostname=`hostname`
    sudo sed -i -e 's/127.0.0.1 localhost/127.0.0.1 localhost master/g' \
        /etc/hosts
    sudo sed -i -e "s/127.0.1.1 $hostname/127.0.1.1 $hostname master/g" \
        /etc/hosts
}

function invalidate_swap {
    sudo sed -i -e '/swap/s/^/#/' /etc/fstab
    swapoff -a
}


# Install Haproxy
#----------------
function install_haproxy {
    REPOS_UPDATED=False apt_get_update
    apt_get install haproxy
}

function modify_haproxy_conf {
    cat <<EOF | sudo tee /etc/haproxy/haproxy.cfg >/dev/null
global
    log /dev/log  local0
    log /dev/log  local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    # An alternative list with additional directives can be obtained from
    #  https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend kubernetes-apiserver
    mode tcp
    bind *:$MASTER_CLUSTER_PORT
    option tcplog
    default_backend kubernetes-apiserver

backend kubernetes-apiserver
    mode tcp
    balance roundrobin
EOF
    for master_ip in ${MASTER_IPS[@]}; do
        split_ips=(${master_ip//./ })
        cat <<EOF | sudo tee -a /etc/haproxy/haproxy.cfg >/dev/null
    server master${split_ips[3]} $master_ip:6443 check
EOF
    done
    cat <<EOF | sudo tee -a /etc/haproxy/haproxy.cfg >/dev/null
listen stats
    bind *:1080
    stats auth admin:awesomePassword
    stats refresh 5s
    stats realm HAProxy\ Statistics
    stats uri /admin?stats
EOF
}
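Inside modify_haproxy_conf, each master is registered as a backend server named after the last octet of its IP. A standalone sketch of how those server lines are generated (IPs are illustrative):

```shell
# Generate haproxy backend server lines the way modify_haproxy_conf does.
MASTER_IPS=(192.168.120.17 192.168.120.18 192.168.120.19)
backend_lines=""
for master_ip in "${MASTER_IPS[@]}"; do
    split_ips=(${master_ip//./ })     # split the IP into its four octets
    backend_lines+="server master${split_ips[3]} $master_ip:6443 check"$'\n'
done
echo "$backend_lines"
```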

function start_haproxy {
    sudo systemctl enable haproxy
    sudo systemctl start haproxy
    sudo systemctl status haproxy | grep Active
    result=$(ss -lnt | grep -E "16443|1080")
    if [[ -z $result ]]; then
        sudo systemctl restart haproxy
    fi
}


# Install Keepalived
#-------------------
function install_keepalived {
    REPOS_UPDATED=False apt_get_update
    apt_get install keepalived
}

function modify_keepalived_conf {
    local priority
    local ip_name
    local index=0
    for master_ip in ${MASTER_IPS[@]}; do
        if [[ "$CURRENT_HOST_IP" == "$master_ip" ]]; then
            priority=$(expr 103 - $index)
        fi
        index=$(expr $index + 1)
    done

    ip_name=$(ip a s | grep $CURRENT_HOST_IP | awk '{print $NF}')

    cat <<EOF | sudo tee /etc/keepalived/keepalived.conf >/dev/null
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 3
    fall 3
}
vrrp_instance VRRP1 {
    state MASTER
    interface $ip_name
    virtual_router_id 51
    priority $priority
    advert_int 1
    virtual_ipaddress {
        $MASTER_CLUSTER_IP/24
    }
    track_script {
        chk_haproxy
    }
}
EOF
}

function start_keepalived {
    sudo systemctl enable keepalived.service
    sudo systemctl start keepalived.service
    sudo systemctl status keepalived.service | grep Active
    result=$(sudo systemctl status keepalived.service | \
        grep Active | grep "running")
    if [[ "$result" == "" ]]; then
        exit 0
    fi
}

# Install Docker
#---------------
function install_docker {
    arch=$(sudo dpkg --print-architecture)
    REPOS_UPDATED=False apt_get_update
    DEBIAN_FRONTEND=noninteractive sudo apt-get install -y \
        apt-transport-https ca-certificates curl gnupg-agent \
        software-properties-common
    result=`curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
        sudo apt-key add -`
    if [[ $result != "OK" ]]; then
        exit 0
    fi
    sudo add-apt-repository \
        "deb [arch=${arch}] \
        https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    apt_get update
    DEBIAN_FRONTEND=noninteractive sudo apt-get install \
        docker-ce=5:19.03.11~3-0~ubuntu-focal \
        docker-ce-cli containerd.io << EOF
y
EOF
}

function set_docker_proxy {
    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo touch /etc/systemd/system/docker.service.d/https-proxy.conf

    cat <<EOF | sudo tee \
        /etc/systemd/system/docker.service.d/https-proxy.conf >/dev/null
[Service]
Environment="HTTP_PROXY=${http_proxy//%40/@}" "HTTPS_PROXY=${https_proxy//%40/@}" "NO_PROXY=$no_proxy"
EOF
    cat <<EOF | sudo tee /etc/docker/daemon.json >/dev/null
{
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker
    sleep 3
    result=$(sudo systemctl status docker | grep Active | grep "running")
    if [[ -z "$result" ]]; then
        exit 0
    fi
    sleep 7
    sudo docker run hello-world
}


# Install Kubernetes
#-------------------
function set_k8s_components {
    REPOS_UPDATED=False apt_get_update
    sudo apt-get install -y apt-transport-https curl
    result=`curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | \
        sudo apt-key add -`
    if [[ $result != "OK" ]]; then
        exit 0
    fi
    echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
        sudo tee -a /etc/apt/sources.list.d/kubernetes.list
    apt_get update
    apt_get install -y kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl
    echo "starting kubelet, wait 30s ..."
    sleep 30
    sudo systemctl status kubelet | grep Active
}

function init_master {
    if [[ "$MASTER_IPADDRS" =~ "," ]]; then
        sudo kubeadm init --pod-network-cidr=$K8S_POD_CIDR \
            --service-cidr=$K8S_API_CLUSTER_CIDR \
            --control-plane-endpoint "$MASTER_CLUSTER_IP:16443" --upload-certs
    else
        sudo kubeadm init --pod-network-cidr=$K8S_POD_CIDR \
            --service-cidr=$K8S_API_CLUSTER_CIDR \
            --control-plane-endpoint "$MASTER_CLUSTER_IP:6443" --upload-certs
    fi
    sleep 3
    sudo mkdir -p $HOME/.kube
    sudo /bin/cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    sleep 20
}

function install_pod_network {
    curl https://docs.projectcalico.org/manifests/calico.yaml -O
    echo "waiting install pod network..."
    while true; do
        result=$(kubectl apply -f calico.yaml)
        if [[ "$result" =~ "created" ]] || \
           [[ "$result" =~ "unchanged" ]]; then
            echo "$result"
            break
        fi
        sudo rm -rf $HOME/.kube
        sudo mkdir -p $HOME/.kube
        sudo /bin/cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
        sleep 10
    done
}

function add_master_node {
    sudo kubeadm join $MASTER_CLUSTER_IP:16443 \
        --token $TOKEN_NAME \
        --discovery-token-ca-cert-hash sha256:$TOKEN_HASH \
        --control-plane --certificate-key $CERT_KEY
    sudo mkdir -p $HOME/.kube
    sudo /bin/cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    echo "add node ..."
    sleep 10
    kubectl get nodes -o wide
    echo "add node successfully"
}

function init_worker {
    sudo kubeadm init --pod-network-cidr=$K8S_POD_CIDR \
        --service-cidr=$K8S_API_CLUSTER_CIDR
    sleep 5
    sudo mkdir -p $HOME/.kube
    sudo /bin/cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    sleep 10
}

function add_worker_node {
    if [[ "$ha_flag" != "False" ]]; then
        KUBEADM_JOIN_WORKER_RESULT=$(sudo kubeadm join \
            $MASTER_CLUSTER_IP:16443 --token $TOKEN_NAME \
            --discovery-token-ca-cert-hash sha256:$TOKEN_HASH)
    else
        KUBEADM_JOIN_WORKER_RESULT=$(sudo kubeadm join \
            $MASTER_CLUSTER_IP:6443 --token $TOKEN_NAME \
            --discovery-token-ca-cert-hash sha256:$TOKEN_HASH)
    fi
}

function check_k8s_resource {
    cat <<EOF | sudo tee "test-nginx-deployment.yaml" >/dev/null
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF
    cat <<EOF | sudo tee test-nginx-service.yaml >/dev/null
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  sessionAffinity: ClientIP
  selector:
    app: nginx
  ports:
  - port: 80
    nodePort: 30080
EOF
    kubectl apply -f test-nginx-deployment.yaml
    kubectl apply -f test-nginx-service.yaml
    echo "please wait 1m to create resources..."
    sleep 60
    kubectl get pod,deployment,service -o wide
    pod_name=`kubectl get pod | grep nginx-deployment | \
        head -1 | awk '{print $1}'`
    result=`kubectl describe pod $pod_name | grep Warning`
    echo $result
    if [[ "$result" =~ "FailedScheduling" ]]; then
        local node_role
        for role in ${result[@]}; do
            if [[ "$role" =~ "master" ]]; then
                index=${#role}-2
                node_role=${role: 1:$index}
            fi
        done
        split_ips=(${CURRENT_HOST_IP//./ })
        kubectl taint node master${split_ips[3]} $node_role:NoSchedule-
        echo "please wait 500s to create resources successfully..."
        sleep 500
        kubectl get pod,deployment,service -o wide
    else
        echo "please wait 500s to create resources successfully..."
        sleep 500
        kubectl get pod,deployment,service -o wide
    fi
}
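The FailedScheduling branch above strips the surrounding parentheses from a token such as `(node-role.kubernetes.io/master)` via bash substring expansion before using it as a taint key. A standalone sketch:

```shell
# Strip surrounding parentheses the way check_k8s_resource extracts the taint key.
role="(node-role.kubernetes.io/master)"
index=$(( ${#role} - 2 ))      # length minus the two parentheses
node_role=${role:1:$index}     # skip the leading "(" and drop the trailing ")"
echo "$node_role"              # → node-role.kubernetes.io/master
```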

# Set common functions
#
# Refer: devstack project functions-common
#-----------------------------------------
function apt_get_update {
    if [[ "$REPOS_UPDATED" == "True" ]]; then
        return
    fi

    local sudo="sudo"
    [[ "$(id -u)" = "0" ]] && sudo="env"

    # time all the apt operations
    time_start "apt-get-update"

    local update_cmd="sudo apt-get update"
    if ! timeout 300 sh -c "while ! $update_cmd; do sleep 30; done"; then
        die $LINENO "Failed to update apt repos, we're dead now"
    fi

    REPOS_UPDATED=True
    # stop the clock
    time_stop "apt-get-update"
}
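apt_get_update wraps the update in a retry loop bounded by timeout: keep retrying until the command succeeds or the deadline expires. A minimal sketch with a command that succeeds immediately (`true` stands in for `sudo apt-get update`):

```shell
# Retry a command until it succeeds, giving up after a deadline,
# as apt_get_update does with a 300-second budget.
update_cmd="true"   # stand-in for "sudo apt-get update"
if ! timeout 10 sh -c "while ! $update_cmd; do sleep 1; done"; then
    status="failed"
else
    status="updated"
fi
echo "$status"   # → updated
```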

function time_start {
    local name=$1
    local start_time=${_TIME_START[$name]}
    if [[ -n "$start_time" ]]; then
        die $LINENO \
            "Trying to start the clock on $name, but it's already been started"
    fi

    _TIME_START[$name]=$(date +%s%3N)
}

function time_stop {
    local name
    local end_time
    local elapsed_time
    local total
    local start_time

    name=$1
    start_time=${_TIME_START[$name]}

    if [[ -z "$start_time" ]]; then
        die $LINENO \
            "Trying to stop the clock on $name, but it was never started"
    fi
    end_time=$(date +%s%3N)
    elapsed_time=$(($end_time - $start_time))
    total=${_TIME_TOTAL[$name]:-0}
    # reset the clock so we can start it in the future
    _TIME_START[$name]=""
    _TIME_TOTAL[$name]=$(($total + $elapsed_time))
}
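time_start/time_stop accumulate per-name elapsed milliseconds in the `_TIME_START`/`_TIME_TOTAL` arrays. A standalone sketch of that bookkeeping, using fixed timestamps in place of `date +%s%3N`:

```shell
# Accumulate elapsed time per operation name, as time_start/time_stop do.
declare -A _TIME_START _TIME_TOTAL
name="apt-get"
_TIME_START[$name]=1000                       # pretend the clock started at t=1000 ms
end_time=1250                                 # pretend it stopped at t=1250 ms
elapsed=$(( end_time - ${_TIME_START[$name]} ))
_TIME_TOTAL[$name]=$(( ${_TIME_TOTAL[$name]:-0} + elapsed ))
_TIME_START[$name]=""                         # reset so the clock can be restarted
echo "${_TIME_TOTAL[$name]}"   # → 250
```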

function apt_get {
    local xtrace result
    xtrace=$(set +o | grep xtrace)    # set +o xtrace
    set +o xtrace

    [[ "$OFFLINE" = "True" || -z "$@" ]] && return
    local sudo="sudo"
    [[ "$(id -u)" = "0" ]] && sudo="env"

    # time all the apt operations
    time_start "apt-get"

    $xtrace

    $sudo DEBIAN_FRONTEND=noninteractive \
        http_proxy=${http_proxy:-} https_proxy=${https_proxy:-} \
        no_proxy=${no_proxy:-} \
        apt-get --option "Dpkg::Options::=--force-confold" \
        --assume-yes "$@" < /dev/null
    result=$?

    # stop the clock
    time_stop "apt-get"
    return $result
}

# Choose install function based on install mode
#----------------------------------------------
function main_master {
    # prepare
    set_public_dns
    set_hostname
    set_sudoers
    set_hosts
    invalidate_swap
    if [[ "$MASTER_IPADDRS" =~ "," ]]; then
        # haproxy
        install_haproxy
        modify_haproxy_conf
        start_haproxy

        # keepalived
        install_keepalived
        modify_keepalived_conf
        start_keepalived
    fi

    # Docker
    install_docker
    set_docker_proxy

    # kubernetes
    set_k8s_components
    init_master
    install_pod_network

    # check_k8s_resource

    clear
    token=$(sudo kubeadm token create)
    echo "token:$token"
    server=$(kubectl cluster-info | \
        sed 's,\x1B\[[0-9;]*[a-zA-Z],,g' | \
        grep 'Kubernetes' | awk '{print $7}')
    echo "server:$server"
    cat /etc/kubernetes/pki/ca.crt
    ssl_ca_cert_hash=$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
        openssl rsa -pubin -outform der 2>/dev/null | \
        openssl dgst -sha256 -hex | sudo sed 's/^.* //')
    echo "ssl_ca_cert_hash:$ssl_ca_cert_hash"
    cert_key=$(sudo kubeadm init phase upload-certs --upload-certs)
    echo "certificate_key:$cert_key"
}

function normal_master {
    # prepare
    set_public_dns
    set_hostname
    set_sudoers
    set_hosts
    invalidate_swap

    # haproxy
    install_haproxy
    modify_haproxy_conf
    start_haproxy

    # keepalived
    install_keepalived
    modify_keepalived_conf
    start_keepalived

    # Docker
    install_docker
    set_docker_proxy

    # kubernetes
    set_k8s_components
    add_master_node
}

function main_worker {
    # prepare
    set_public_dns
    set_hostname
    set_sudoers
    set_hosts
    invalidate_swap

    # Docker
    install_docker
    set_docker_proxy

    # kubernetes
    set_k8s_components
    add_worker_node
}

# Pre preparations
# ________________

function check_OS {
    . /etc/os-release
    if [[ $PRETTY_NAME =~ "Ubuntu 20.04" ]]; then
        os_architecture=`uname -a | grep 'x86_64'`
        if [[ $os_architecture == "" ]]; then
            echo "Your OS is not supported at present."
            echo "Only x86_64 is supported."
        fi
    else
        echo "Your OS is not supported at present."
        echo "Only Ubuntu 20.04.1 LTS is supported."
    fi
}

function set_apt-conf_proxy {
    sudo touch /etc/apt/apt.conf.d/proxy.conf

    cat <<EOF | sudo tee /etc/apt/apt.conf.d/proxy.conf >/dev/null
Acquire::http::Proxy "${http_proxy}";
Acquire::https::Proxy "${https_proxy}";
EOF
}

# Main
# ____

flag="False"
set_apt-conf_proxy
check_OS
if [[ "$INSTALL_MODE" =~ "master" ]]; then
    echo "Start install to main master node"
    for _ip in `ip -4 addr | grep -oP '(?<=inet\s)\d+(\.\d+){3}'`; do
        if [[ $_ip == $MASTER_IP ]]; then
            flag="True"
            break
        fi
    done
    if [[ "$flag" == "True" ]]; then
        INSTALL_MODE="main_master"
        main_master
    else
        INSTALL_MODE="normal_master"
        normal_master
    fi
elif [ "$INSTALL_MODE" == "worker" ]; then
    echo "Start install to worker node"
    main_worker
else
    echo "The install mode is not supported at present!"
    exit 255
fi

if [[ "$INSTALL_MODE" =~ "master" ]]; then
    result=$(kubectl get nodes -o wide | grep $CURRENT_HOST_IP)
    if [[ -z "$result" ]]; then
        echo "Install Failed! The node is not in the Kubernetes cluster."
        exit 255
    else
        echo "Install Success!"
    fi
else
    if [[ "$KUBEADM_JOIN_WORKER_RESULT" =~ \
        "This node has joined the cluster" ]]; then
        echo "Install Success!"
    else
        echo "Install Failed! The node is not in the Kubernetes cluster."
        exit 255
    fi
fi
exit 0
samples/mgmt_driver/kubernetes_mgmt.py (new file, 1936 lines)
File diff suppressed because it is too large.
@@ -0,0 +1,73 @@
heat_template_version: 2013-05-23
description: 'Simple Base HOT for Sample VNF'

parameters:
  nfv:
    type: json

resources:
  master_instance_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 3
      max_size: 5
      desired_capacity: 3
      resource:
        type: complex_nested_master.yaml
        properties:
          flavor: { get_param: [ nfv, VDU, masterNode, flavor ] }
          image: { get_param: [ nfv, VDU, masterNode, image ] }
          net1: { get_param: [ nfv, CP, masterNode_CP1, network ] }
          vip_port_ip: { get_attr: [vip_CP, fixed_ips, 0, ip_address] }

  master_instance_scale_out:
    type: OS::Heat::ScalingPolicy
    properties:
      scaling_adjustment: 1
      auto_scaling_group_id:
        get_resource: master_instance_group
      adjustment_type: change_in_capacity

  master_instance_scale_in:
    type: OS::Heat::ScalingPolicy
    properties:
      scaling_adjustment: -1
      auto_scaling_group_id:
        get_resource: master_instance_group
      adjustment_type: change_in_capacity

  worker_instance_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 2
      max_size: 4
      desired_capacity: 2
      resource:
        type: complex_nested_worker.yaml
        properties:
          flavor: { get_param: [ nfv, VDU, workerNode, flavor ] }
          image: { get_param: [ nfv, VDU, workerNode, image ] }
          net1: { get_param: [ nfv, CP, workerNode_CP2, network ] }

  worker_instance_scale_out:
    type: OS::Heat::ScalingPolicy
    properties:
      scaling_adjustment: 1
      auto_scaling_group_id:
        get_resource: worker_instance_group
      adjustment_type: change_in_capacity

  worker_instance_scale_in:
    type: OS::Heat::ScalingPolicy
    properties:
      scaling_adjustment: -1
      auto_scaling_group_id:
        get_resource: worker_instance_group
      adjustment_type: change_in_capacity

  vip_CP:
    type: OS::Neutron::Port
    properties:
      network: net0

outputs: {}
@@ -0,0 +1,30 @@
heat_template_version: 2013-05-23
description: 'masterNode HOT for Sample VNF'

parameters:
  flavor:
    type: string
  image:
    type: string
  net1:
    type: string
  vip_port_ip:
    type: string

resources:
  masterNode:
    type: OS::Nova::Server
    properties:
      flavor: { get_param: flavor }
      name: masterNode
      image: { get_param: image }
      networks:
        - port:
            get_resource: masterNode_CP1

  masterNode_CP1:
    type: OS::Neutron::Port
    properties:
      network: { get_param: net1 }
      allowed_address_pairs:
        - ip_address: { get_param: vip_port_ip }
@@ -0,0 +1,26 @@
heat_template_version: 2013-05-23
description: 'workerNode HOT for Sample VNF'

parameters:
  flavor:
    type: string
  image:
    type: string
  net1:
    type: string

resources:
  workerNode:
    type: OS::Nova::Server
    properties:
      flavor: { get_param: flavor }
      name: workerNode
      image: { get_param: image }
      networks:
        - port:
            get_resource: workerNode_CP2

  workerNode_CP2:
    type: OS::Neutron::Port
    properties:
      network: { get_param: net1 }
@@ -0,0 +1,26 @@
heat_template_version: 2013-05-23
description: 'masterNode HOT for Sample VNF'

parameters:
  flavor:
    type: string
  image:
    type: string
  net1:
    type: string

resources:
  masterNode:
    type: OS::Nova::Server
    properties:
      flavor: { get_param: flavor }
      name: masterNode
      image: { get_param: image }
      networks:
        - port:
            get_resource: masterNode_CP1

  masterNode_CP1:
    type: OS::Neutron::Port
    properties:
      network: { get_param: net1 }
@@ -0,0 +1,26 @@
heat_template_version: 2013-05-23
description: 'workerNode HOT for Sample VNF'

parameters:
  flavor:
    type: string
  image:
    type: string
  net1:
    type: string

resources:
  workerNode:
    type: OS::Nova::Server
    properties:
      flavor: { get_param: flavor }
      name: workerNode
      image: { get_param: image }
      networks:
        - port:
            get_resource: workerNode_CP2

  workerNode_CP2:
    type: OS::Neutron::Port
    properties:
      network: { get_param: net1 }
@@ -0,0 +1,67 @@
heat_template_version: 2013-05-23
description: 'Simple Base HOT for Sample VNF'

parameters:
  nfv:
    type: json

resources:
  master_instance_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      desired_capacity: 1
      resource:
        type: simple_nested_master.yaml
        properties:
          flavor: { get_param: [ nfv, VDU, masterNode, flavor ] }
          image: { get_param: [ nfv, VDU, masterNode, image ] }
          net1: { get_param: [ nfv, CP, masterNode_CP1, network ] }

  master_instance_scale_out:
    type: OS::Heat::ScalingPolicy
    properties:
      scaling_adjustment: 1
      auto_scaling_group_id:
        get_resource: master_instance_group
      adjustment_type: change_in_capacity

  master_instance_scale_in:
    type: OS::Heat::ScalingPolicy
    properties:
      scaling_adjustment: -1
      auto_scaling_group_id:
        get_resource: master_instance_group
      adjustment_type: change_in_capacity

  worker_instance_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 2
      max_size: 4
      desired_capacity: 2
      resource:
        type: simple_nested_worker.yaml
        properties:
          flavor: { get_param: [ nfv, VDU, workerNode, flavor ] }
          image: { get_param: [ nfv, VDU, workerNode, image ] }
          net1: { get_param: [ nfv, CP, workerNode_CP2, network ] }

  worker_instance_scale_out:
    type: OS::Heat::ScalingPolicy
    properties:
      scaling_adjustment: 1
      auto_scaling_group_id:
        get_resource: worker_instance_group
      adjustment_type: change_in_capacity

  worker_instance_scale_in:
    type: OS::Heat::ScalingPolicy
    properties:
      scaling_adjustment: -1
      auto_scaling_group_id:
        get_resource: worker_instance_group
      adjustment_type: change_in_capacity

outputs: {}
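The `OS::Heat::ScalingPolicy` resources above use `adjustment_type: change_in_capacity`, so each scale action adds `scaling_adjustment` to the current group size, bounded by the group's `min_size`/`max_size`. A minimal sketch of that arithmetic (my own illustration, not Heat code):

```python
def scaled_capacity(current, adjustment, min_size, max_size):
    """New AutoScalingGroup size after a change_in_capacity adjustment,
    clamped to the [min_size, max_size] range declared on the group."""
    return max(min_size, min(max_size, current + adjustment))
```

For the worker group above (`min_size: 2`, `max_size: 4`, `desired_capacity: 2`), one scale-out step takes the group from 2 to 3 workers, and a scale-in at the minimum leaves it at 2.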
@@ -0,0 +1,254 @@
tosca_definitions_version: tosca_simple_yaml_1_2

description: Simple deployment flavour for Sample VNF

imports:
  - etsi_nfv_sol001_common_types.yaml
  - etsi_nfv_sol001_vnfd_types.yaml
  - sample_kubernetes_types.yaml

topology_template:
  inputs:
    id:
      type: string
    vendor:
      type: string
    version:
      type: version
    descriptor_id:
      type: string
    descriptor_version:
      type: string
    provider:
      type: string
    product_name:
      type: string
    software_version:
      type: string
    vnfm_info:
      type: list
      entry_schema:
        type: string
    flavour_id:
      type: string
    flavour_description:
      type: string

  substitution_mappings:
    node_type: company.provider.VNF
    properties:
      flavour_id: complex
    requirements:
      virtual_link_external1_1: [ masterNode_CP1, virtual_link ]
      virtual_link_external1_2: [ workerNode_CP2, virtual_link ]

  node_templates:
    VNF:
      type: company.provider.VNF
      properties:
        flavour_description: A complex flavour
      interfaces:
        Vnflcm:
          instantiate_end:
            implementation: mgmt-drivers-kubernetes
          terminate_end:
            implementation: mgmt-drivers-kubernetes
          heal_start:
            implementation: mgmt-drivers-kubernetes
          heal_end:
            implementation: mgmt-drivers-kubernetes
          scale_start:
            implementation: mgmt-drivers-kubernetes
          scale_end:
            implementation: mgmt-drivers-kubernetes
      artifacts:
        mgmt-drivers-kubernetes:
          description: Management driver for kubernetes cluster
          type: tosca.artifacts.Implementation.Python
          file: Scripts/kubernetes_mgmt.py

    masterNode:
      type: tosca.nodes.nfv.Vdu.Compute
      properties:
        name: masterNode
        description: masterNode compute node
        vdu_profile:
          min_number_of_instances: 3
          max_number_of_instances: 5
        sw_image_data:
          name: Image for masterNode HA kubernetes
          version: '20.04'
          checksum:
            algorithm: sha-512
            hash: fb1a1e50f9af2df6ab18a69b6bc5df07ebe8ef962b37e556ce95350ffc8f4a1118617d486e2018d1b3586aceaeda799e6cc073f330a7ad8f0ec0416cbd825452
          container_format: bare
          disk_format: qcow2
          min_disk: 0 GB
          size: 2 GB

      artifacts:
        sw_image:
          type: tosca.artifacts.nfv.SwImage
          file: ../Files/images/ubuntu-20.04-server-cloudimg-amd64.img

      capabilities:
        virtual_compute:
          properties:
            requested_additional_capabilities:
              properties:
                requested_additional_capability_name: m1.medium
                support_mandatory: true
                target_performance_parameters:
                  entry_schema: test
            virtual_memory:
              virtual_mem_size: 4 GB
            virtual_cpu:
              num_virtual_cpu: 2
            virtual_local_storage:
              - size_of_storage: 45 GB

    workerNode:
      type: tosca.nodes.nfv.Vdu.Compute
      properties:
        name: workerNode
        description: workerNode compute node
        vdu_profile:
          min_number_of_instances: 2
          max_number_of_instances: 4
        sw_image_data:
          name: Image for workerNode HA kubernetes
          version: '20.04'
          checksum:
            algorithm: sha-512
            hash: fb1a1e50f9af2df6ab18a69b6bc5df07ebe8ef962b37e556ce95350ffc8f4a1118617d486e2018d1b3586aceaeda799e6cc073f330a7ad8f0ec0416cbd825452
          container_format: bare
          disk_format: qcow2
          min_disk: 0 GB
          size: 2 GB

      artifacts:
        sw_image:
          type: tosca.artifacts.nfv.SwImage
          file: ../Files/images/ubuntu-20.04-server-cloudimg-amd64.img

      capabilities:
        virtual_compute:
          properties:
            requested_additional_capabilities:
              properties:
                requested_additional_capability_name: m1.medium
                support_mandatory: true
                target_performance_parameters:
                  entry_schema: test
            virtual_memory:
              virtual_mem_size: 4 GB
            virtual_cpu:
              num_virtual_cpu: 2
            virtual_local_storage:
              - size_of_storage: 45 GB

    masterNode_CP1:
      type: tosca.nodes.nfv.VduCp
      properties:
        layer_protocols: [ ipv4 ]
        order: 0
      requirements:
        - virtual_binding: masterNode

    workerNode_CP2:
      type: tosca.nodes.nfv.VduCp
      properties:
        layer_protocols: [ ipv4 ]
        order: 0
      requirements:
        - virtual_binding: workerNode

  policies:
    - scaling_aspects:
        type: tosca.policies.nfv.ScalingAspects
        properties:
          aspects:
            master_instance:
              name: master_instance
              description: master_instance scaling aspect
              max_scale_level: 2
              step_deltas:
                - delta_1
            worker_instance:
              name: worker_instance
              description: worker_instance scaling aspect
              max_scale_level: 2
              step_deltas:
                - delta_1

    - masterNode_initial_delta:
        type: tosca.policies.nfv.VduInitialDelta
        properties:
          initial_delta:
            number_of_instances: 3
        targets: [ masterNode ]

    - workerNode_initial_delta:
        type: tosca.policies.nfv.VduInitialDelta
        properties:
          initial_delta:
            number_of_instances: 2
        targets: [ workerNode ]

    - masterNode_scaling_deltas:
        type: tosca.policies.nfv.VduScalingAspectDeltas
        properties:
          aspect: master_instance
          deltas:
            delta_1:
              number_of_instances: 1
        targets: [ masterNode ]

    - workerNode_scaling_deltas:
        type: tosca.policies.nfv.VduScalingAspectDeltas
        properties:
          aspect: worker_instance
          deltas:
            delta_1:
              number_of_instances: 1
        targets: [ workerNode ]

    - instantiation_levels:
        type: tosca.policies.nfv.InstantiationLevels
        properties:
          levels:
            instantiation_level_1:
              description: Smallest size
              scale_info:
                master_instance:
                  scale_level: 0
                worker_instance:
                  scale_level: 0
            instantiation_level_2:
              description: Largest size
              scale_info:
                master_instance:
                  scale_level: 2
                worker_instance:
                  scale_level: 2
          default_level: instantiation_level_1

    - masterNode_instantiation_levels:
        type: tosca.policies.nfv.VduInstantiationLevels
        properties:
          levels:
            instantiation_level_1:
              number_of_instances: 3
            instantiation_level_2:
              number_of_instances: 5
        targets: [ masterNode ]

    - workerNode_instantiation_levels:
        type: tosca.policies.nfv.VduInstantiationLevels
        properties:
          levels:
            instantiation_level_1:
              number_of_instances: 2
            instantiation_level_2:
              number_of_instances: 4
        targets: [ workerNode ]
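The scaling policies in this flavour fit together as follows: `VduInitialDelta` fixes the instance count at scale level 0, and each step of the aspect adds the `delta_1` count from `VduScalingAspectDeltas`, up to `max_scale_level`. A sketch of that relationship (my own illustration of the SOL001 semantics, not Tacker code):

```python
def instances_at_level(initial_delta, step_delta, scale_level):
    """Instance count for a VDU at a given scale level: the initial delta
    plus scale_level steps of the per-step delta (delta_1)."""
    return initial_delta + step_delta * scale_level
```

With the values above, masterNode goes from 3 instances at level 0 to 5 at the maximum level 2, matching `instantiation_level_2` and the VDU profile's `max_number_of_instances`.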
@@ -0,0 +1,254 @@
tosca_definitions_version: tosca_simple_yaml_1_2

description: Simple deployment flavour for Sample VNF

imports:
  - etsi_nfv_sol001_common_types.yaml
  - etsi_nfv_sol001_vnfd_types.yaml
  - sample_kubernetes_types.yaml

topology_template:
  inputs:
    id:
      type: string
    vendor:
      type: string
    version:
      type: version
    descriptor_id:
      type: string
    descriptor_version:
      type: string
    provider:
      type: string
    product_name:
      type: string
    software_version:
      type: string
    vnfm_info:
      type: list
      entry_schema:
        type: string
    flavour_id:
      type: string
    flavour_description:
      type: string

  substitution_mappings:
    node_type: company.provider.VNF
    properties:
      flavour_id: simple
    requirements:
      virtual_link_external1_1: [ masterNode_CP1, virtual_link ]
      virtual_link_external1_2: [ workerNode_CP2, virtual_link ]

  node_templates:
    VNF:
      type: company.provider.VNF
      properties:
        flavour_description: A simple flavour
      interfaces:
        Vnflcm:
          instantiate_end:
            implementation: mgmt-drivers-kubernetes
          terminate_end:
            implementation: mgmt-drivers-kubernetes
          heal_start:
            implementation: mgmt-drivers-kubernetes
          heal_end:
            implementation: mgmt-drivers-kubernetes
          scale_start:
            implementation: mgmt-drivers-kubernetes
          scale_end:
            implementation: mgmt-drivers-kubernetes
      artifacts:
        mgmt-drivers-kubernetes:
          description: Management driver for kubernetes cluster
          type: tosca.artifacts.Implementation.Python
          file: Scripts/kubernetes_mgmt.py

    masterNode:
      type: tosca.nodes.nfv.Vdu.Compute
      properties:
        name: masterNode
        description: masterNode compute node
        vdu_profile:
          min_number_of_instances: 1
          max_number_of_instances: 3
        sw_image_data:
          name: Image for masterNode kubernetes
          version: '20.04'
          checksum:
            algorithm: sha-512
            hash: fb1a1e50f9af2df6ab18a69b6bc5df07ebe8ef962b37e556ce95350ffc8f4a1118617d486e2018d1b3586aceaeda799e6cc073f330a7ad8f0ec0416cbd825452
          container_format: bare
          disk_format: qcow2
          min_disk: 0 GB
          size: 2 GB

      artifacts:
        sw_image:
          type: tosca.artifacts.nfv.SwImage
          file: ../Files/images/ubuntu-20.04-server-cloudimg-amd64.img

      capabilities:
        virtual_compute:
          properties:
            requested_additional_capabilities:
              properties:
                requested_additional_capability_name: m1.medium
                support_mandatory: true
                target_performance_parameters:
                  entry_schema: test
            virtual_memory:
              virtual_mem_size: 4 GB
            virtual_cpu:
              num_virtual_cpu: 2
            virtual_local_storage:
              - size_of_storage: 45 GB

    workerNode:
      type: tosca.nodes.nfv.Vdu.Compute
      properties:
        name: workerNode
        description: workerNode compute node
        vdu_profile:
          min_number_of_instances: 2
          max_number_of_instances: 4
        sw_image_data:
          name: Image for workerNode kubernetes
          version: '20.04'
          checksum:
            algorithm: sha-512
            hash: fb1a1e50f9af2df6ab18a69b6bc5df07ebe8ef962b37e556ce95350ffc8f4a1118617d486e2018d1b3586aceaeda799e6cc073f330a7ad8f0ec0416cbd825452
          container_format: bare
          disk_format: qcow2
          min_disk: 0 GB
          size: 2 GB

      artifacts:
        sw_image:
          type: tosca.artifacts.nfv.SwImage
          file: ../Files/images/ubuntu-20.04-server-cloudimg-amd64.img

      capabilities:
        virtual_compute:
          properties:
            requested_additional_capabilities:
              properties:
                requested_additional_capability_name: m1.medium
                support_mandatory: true
                target_performance_parameters:
                  entry_schema: test
            virtual_memory:
              virtual_mem_size: 4 GB
            virtual_cpu:
              num_virtual_cpu: 2
            virtual_local_storage:
              - size_of_storage: 45 GB

    masterNode_CP1:
      type: tosca.nodes.nfv.VduCp
      properties:
        layer_protocols: [ ipv4 ]
        order: 0
      requirements:
        - virtual_binding: masterNode

    workerNode_CP2:
      type: tosca.nodes.nfv.VduCp
      properties:
        layer_protocols: [ ipv4 ]
        order: 0
      requirements:
        - virtual_binding: workerNode

  policies:
    - scaling_aspects:
        type: tosca.policies.nfv.ScalingAspects
        properties:
          aspects:
            master_instance:
              name: master_instance
              description: master_instance scaling aspect
              max_scale_level: 2
              step_deltas:
                - delta_1
            worker_instance:
              name: worker_instance
              description: worker_instance scaling aspect
              max_scale_level: 2
              step_deltas:
                - delta_1

    - masterNode_initial_delta:
        type: tosca.policies.nfv.VduInitialDelta
        properties:
          initial_delta:
            number_of_instances: 1
        targets: [ masterNode ]

    - workerNode_initial_delta:
        type: tosca.policies.nfv.VduInitialDelta
        properties:
          initial_delta:
            number_of_instances: 2
        targets: [ workerNode ]

    - masterNode_scaling_deltas:
        type: tosca.policies.nfv.VduScalingAspectDeltas
        properties:
          aspect: master_instance
          deltas:
            delta_1:
              number_of_instances: 1
        targets: [ masterNode ]

    - workerNode_scaling_deltas:
        type: tosca.policies.nfv.VduScalingAspectDeltas
        properties:
          aspect: worker_instance
          deltas:
            delta_1:
              number_of_instances: 1
        targets: [ workerNode ]

    - instantiation_levels:
        type: tosca.policies.nfv.InstantiationLevels
        properties:
          levels:
            instantiation_level_1:
              description: Smallest size
              scale_info:
                master_instance:
                  scale_level: 0
                worker_instance:
                  scale_level: 0
            instantiation_level_2:
              description: Largest size
              scale_info:
                master_instance:
                  scale_level: 2
                worker_instance:
                  scale_level: 2
          default_level: instantiation_level_1

    - masterNode_instantiation_levels:
        type: tosca.policies.nfv.VduInstantiationLevels
        properties:
          levels:
            instantiation_level_1:
              number_of_instances: 1
            instantiation_level_2:
              number_of_instances: 3
        targets: [ masterNode ]

    - workerNode_instantiation_levels:
        type: tosca.policies.nfv.VduInstantiationLevels
        properties:
          levels:
            instantiation_level_1:
              number_of_instances: 2
            instantiation_level_2:
              number_of_instances: 4
        targets: [ workerNode ]
@@ -0,0 +1,32 @@
tosca_definitions_version: tosca_simple_yaml_1_2

description: Sample VNF.

imports:
  - etsi_nfv_sol001_common_types.yaml
  - etsi_nfv_sol001_vnfd_types.yaml
  - sample_kubernetes_types.yaml
  - sample_kubernetes_df_simple.yaml
  - sample_kubernetes_df_complex.yaml

topology_template:
  inputs:
    selected_flavour:
      type: string
      description: VNF deployment flavour selected by the consumer. It is provided in the API

  node_templates:
    VNF:
      type: company.provider.VNF
      properties:
        flavour_id: { get_input: selected_flavour }
        descriptor_id: b1db0ce7-ebca-1fb7-95ed-4840d70a1163
        provider: Company
        product_name: Sample VNF
        software_version: '1.0'
        descriptor_version: '1.0'
        vnfm_info:
          - Tacker
      requirements:
        #- virtual_link_external # mapped in lower-level templates
        #- virtual_link_internal # mapped in lower-level templates
@@ -0,0 +1,63 @@
tosca_definitions_version: tosca_simple_yaml_1_2

description: VNF type definition

imports:
  - etsi_nfv_sol001_common_types.yaml
  - etsi_nfv_sol001_vnfd_types.yaml

node_types:
  company.provider.VNF:
    derived_from: tosca.nodes.nfv.VNF
    properties:
      id:
        type: string
        description: ID of this VNF
        default: vnf_id
      vendor:
        type: string
        description: name of the vendor who generates this VNF
        default: vendor
      version:
        type: version
        description: version of the software for this VNF
        default: 1.0
      descriptor_id:
        type: string
        constraints: [ valid_values: [ b1db0ce7-ebca-1fb7-95ed-4840d70a1163 ] ]
        default: b1db0ce7-ebca-1fb7-95ed-4840d70a1163
      descriptor_version:
        type: string
        constraints: [ valid_values: [ '1.0' ] ]
        default: '1.0'
      provider:
        type: string
        constraints: [ valid_values: [ 'Company' ] ]
        default: 'Company'
      product_name:
        type: string
        constraints: [ valid_values: [ 'Sample VNF' ] ]
        default: 'Sample VNF'
      software_version:
        type: string
        constraints: [ valid_values: [ '1.0' ] ]
        default: '1.0'
      vnfm_info:
        type: list
        entry_schema:
          type: string
          constraints: [ valid_values: [ Tacker ] ]
        default: [ Tacker ]
      flavour_id:
        type: string
        constraints: [ valid_values: [ simple, complex ] ]
        default: simple
      flavour_description:
        type: string
        default: "This is the default flavour description"
    requirements:
      - virtual_link_internal:
          capability: tosca.capabilities.nfv.VirtualLinkable
    interfaces:
      Vnflcm:
        type: tosca.interfaces.nfv.Vnflcm
@@ -0,0 +1,17 @@
TOSCA-Meta-File-Version: 1.0
Created-by: Dummy User
CSAR-Version: 1.1
Entry-Definitions: Definitions/sample_kubernetes_top.vnfd.yaml

Name: Files/images/ubuntu-20.04-server-cloudimg-amd64.img
Content-Type: application/x-iso9066-image

Name: Scripts/install_k8s_cluster.sh
Content-Type: application/sh
Algorithm: SHA-256
Hash: 2489f6162817cce794a6f19a88c3c76ce83fa19cfcb75ad1204d76aaba4a9d1c

Name: Scripts/kubernetes_mgmt.py
Content-Type: text/x-python
Algorithm: SHA-256
Hash: b292bc47d4c28a62b1261e6481498118e5dd93aa988c498568560f67c510003b
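The `Hash` entries in the TOSCA.meta above are hex digests of the artifact files computed with the declared `Algorithm` (SHA-256). A sketch of how such a digest can be recomputed to verify an artifact (the payload here is an in-memory stand-in; the real input would be the bytes of, e.g., `Scripts/kubernetes_mgmt.py`):

```python
import hashlib

# Stand-in for an artifact file's bytes; in a real CSAR you would read the
# file named by the `Name:` entry and compare against the `Hash:` value.
payload = b"sample artifact content"
digest = hashlib.sha256(payload).hexdigest()
```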
@@ -0,0 +1,35 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from tacker.vnfm.lcm_user_data.abstract_user_data import AbstractUserData
import tacker.vnfm.lcm_user_data.utils as UserDataUtil


class KubernetesClusterUserData(AbstractUserData):
    @staticmethod
    def instantiate(base_hot_dict=None,
                    vnfd_dict=None,
                    inst_req_info=None,
                    grant_info=None):
        api_param = UserDataUtil.get_diff_base_hot_param_from_api(
            base_hot_dict, inst_req_info)
        initial_param_dict = \
            UserDataUtil.create_initial_param_server_port_dict(
                base_hot_dict)
        vdu_flavor_dict = \
            UserDataUtil.create_vdu_flavor_capability_name_dict(vnfd_dict)
        vdu_image_dict = UserDataUtil.create_sw_image_dict(vnfd_dict)
        cpd_vl_dict = UserDataUtil.create_network_dict(
            inst_req_info, initial_param_dict)
        final_param_dict = UserDataUtil.create_final_param_dict(
            initial_param_dict, vdu_flavor_dict, vdu_image_dict, cpd_vl_dict)
        return {**final_param_dict, **api_param}
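The user-data class's return statement merges two dicts with `{**final_param_dict, **api_param}`, so any key supplied via the API overrides the value computed from the VNFD and base HOT. A small sketch of that merge behavior (the keys are illustrative, not the real parameter names):

```python
# Computed HOT parameters (illustrative keys only).
final_param_dict = {"nfv": {"VDU": {"masterNode": {}}}, "some_param": "computed"}
# API-supplied parameters; later unpacking wins on key conflicts.
api_param = {"some_param": "from_api"}

merged = {**final_param_dict, **api_param}
```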