Merge "Add automation script for Helm-based Tacker installation"

This commit is contained in:
Zuul
2025-09-12 01:43:45 +00:00
committed by Gerrit Code Review
12 changed files with 818 additions and 0 deletions


@@ -41,6 +41,7 @@ Refer following installation procedures for both of these systems:
Manual Installation <manual_installation.rst>
Install via Kolla Ansible <kolla.rst>
Install via Openstack-helm <openstack_helm.rst>
Install via Automation Script <tacker_helm_deployment_automation.rst>
#. Target VIM Installation


@@ -0,0 +1,222 @@
..
Copyright (C) 2025 NEC, Corp.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
=================================
Tacker Helm Deployment Automation
=================================
Overview
--------
Installing Tacker via OpenStack-Helm involves multiple manual steps, including
the setup of system prerequisites, environment configuration, and deployment
of various components. This process can be error-prone and time-consuming,
especially for new users.
To simplify this, a Python-based automation script is introduced that
streamlines the installation of Tacker using OpenStack-Helm by handling
the setup of all prerequisites and deployment steps.
The automation script is available in the Tacker repository:
.. code-block:: console
tacker/tools/tacker_helm_deployment_automation_script
This document provides the steps and guidelines for using the automation
script for a Helm-based installation of Tacker.
Prerequisites
-------------
#. All nodes must be able to access the OpenStack repositories.
The following repositories are downloaded automatically during the
execution of the script:
.. code-block:: console
$ git clone https://opendev.org/openstack/openstack-helm.git
$ git clone https://opendev.org/zuul/zuul-jobs.git
#. All participating nodes should have connectivity with each other, as can be verified with the check below.
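A simple connectivity check from the primary node (an illustrative command; user and host are placeholders) is:
.. code-block:: console
$ ssh -o BatchMode=yes <USERNAME>@<NODE_IP> hostname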
Configuration changes
---------------------
#. Edit ``k8s_env/inventory.yaml`` for k8s deployment
1. Add the following for running Ansible:
.. code-block:: console
# ansible user for running the playbook
ansible_user: <USERNAME>
ansible_ssh_private_key_file: <PATH_TO_SSH_KEY_FOR_USER>
ansible_ssh_extra_args: -o StrictHostKeyChecking=no
2. Add user and group for running the kubectl and helm commands.
.. code-block:: console
# The user and group that will be used to run Kubectl and Helm commands.
kubectl:
  user: <USERNAME>
  group: <USERGROUP>
# The user and group that will be used to run Docker commands.
docker_users:
  - <USERNAME>
3. Add the user that will be used for communication between the master
and worker nodes. This user must be configured with passwordless SSH on
all the nodes (see the example after the snippet below).
.. code-block:: console
# By default the deploy-env role sets up an ssh key to make it possible
# to connect to the k8s master node via ssh without a password.
client_ssh_user: <USERNAME>
cluster_ssh_user: <USERNAME>
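If passwordless SSH is not yet set up for this user, it can typically be configured as follows (an illustrative sketch; key type and target hosts are placeholders):
.. code-block:: console
$ ssh-keygen -t rsa
$ ssh-copy-id <USERNAME>@<NODE_IP>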
4. Enable MetalLB if using bare-metal servers for load balancing.
.. code-block:: console
# The MetalLB controller is used as a bare-metal load balancer.
metallb_setup: true
5. If deploying a Ceph cluster, enable the loopback device configuration to be used by Ceph.
.. code-block:: console
# Loopback devices will be created on all cluster nodes which then can be used
# to deploy a Ceph cluster which requires block devices to be provided.
# Please use loopback devices only for testing purposes. They are not suitable
# for production due to performance reasons.
loopback_setup: true
loopback_device: /dev/loop100
loopback_image: /var/lib/openstack-helm/ceph-loop.img
loopback_image_size: 12G
6. Add the primary node where kubectl and Helm will be installed.
.. code-block:: console
children:
  # The primary node where Kubectl and Helm will be installed. If it is
  # the only node then it must be a member of the groups k8s_cluster and
  # k8s_control_plane. If there are more nodes then the wireguard tunnel
  # will be established between the primary node and the k8s_control_plane node.
  primary:
    hosts:
      primary:
        ansible_host: <PRIMARY_NODE_IP>
7. Add the nodes where the Kubernetes cluster will be deployed. If there
is only one node, mention it here.
.. code-block:: console
# The nodes where the Kubernetes components will be installed.
k8s_cluster:
  hosts:
    primary:
      ansible_host: <IP_ADDRESS>
    node-2:
      ansible_host: <IP_ADDRESS>
    node-3:
      ansible_host: <IP_ADDRESS>
8. Add the control plane node in this section.
.. code-block:: console
# The control plane node where the Kubernetes control plane components will be installed.
# It must be the only node in the group k8s_control_plane.
k8s_control_plane:
  hosts:
    primary:
      ansible_host: <IP_ADDRESS>
9. Add the worker nodes in the k8s_nodes section. For a single-node
installation, leave this section empty.
.. code-block:: console
# These are Kubernetes worker nodes. There could be zero such nodes.
# In this case the Openstack workloads will be deployed on the control plane node.
k8s_nodes:
  hosts:
    node-2:
      ansible_host: <IP_ADDRESS>
    node-3:
      ansible_host: <IP_ADDRESS>
You can find a complete example of ``inventory.yaml`` in [1]_.
#. Edit ``TACKER_NODE`` in ``config/config.yaml`` with the
hostname of the master node so that it can be labeled as the
control plane node.
.. code-block:: console
NODES:
  TACKER_NODE: <CONTROL-PLANE_NODE_HOSTNAME>
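The hostname to use can be checked directly on the master node:
.. code-block:: console
$ hostname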
Script execution
----------------
#. Ensure that the user has permission to execute the script
.. code-block:: console
$ ls -la Tacker_install.py
-rwxr-xr-x 1 root root 21923 Jul 22 10:00 Tacker_install.py
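If the execute permission is missing, it can be added with:
.. code-block:: console
$ chmod +x Tacker_install.py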
#. Execute the command to run the script
.. code-block:: console
$ sudo python3 Tacker_install.py
#. Execute the command to verify that the Tacker pods are running
.. code-block:: console
$ kubectl get pods -n openstack | grep -i Tacker
tacker-conductor-d7595d756-6k8wp 1/1 Running 0 24h
tacker-db-init-mxwwf 0/1 Completed 0 24h
tacker-db-sync-4xnhx 0/1 Completed 0 24h
tacker-ks-endpoints-4nbqb 0/3 Completed 0 24h
tacker-ks-service-c8s2m 0/1 Completed 0 24h
tacker-ks-user-z2cq7 0/1 Completed 0 24h
tacker-rabbit-init-fxggv 0/1 Completed 0 24h
tacker-server-6f578bcf6c-z7z2c 1/1 Running 0 24h
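The Helm release can be checked as well, mirroring the verification the script itself performs:
.. code-block:: console
$ helm ls -A | grep tacker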
For details, refer to the document in [2]_.
References
----------
.. [1] https://docs.openstack.org/openstack-helm/latest/install/kubernetes.html
.. [2] https://docs.openstack.org/tacker/latest/install/openstack_helm.html


@@ -0,0 +1,404 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# -----------------------------------------------------------------------------
# File Name : Tacker_install.py
# Description : This script automates the installation of Tacker with all
# its dependencies over a Kubernetes environment.
# Author : NEC Corp.
# Created Date : 2025-07-18
# Last Modified : 2025-07-18
# Version : 1.0
# Python Version: 3.10+
# -----------------------------------------------------------------------------
import logging
import subprocess
import sys
import json
import time
import yaml
import os
import pwd
import shutil
## Configuration variable
config = None
## Configure logging; create the log directory first so the file handler can be opened
os.makedirs("logs", exist_ok=True)
logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(levelname)s %(message)s", filename="./logs/script.log", filemode="a")
# -----------------------------------------------------------------------------
# Function Name : clear_repo
# Description : deletes the repositories downloaded earlier, verifies the
# Python version, and ensures that the required user exists.
# Arguments : none
# Returns : none
# -----------------------------------------------------------------------------
def clear_repo():
    print("Inside function clear_repo")
    if not os.path.exists("logs"):
        os.makedirs("logs")
    os.system('touch logs/script.log')
    if sys.version_info.major < 3:
        logging.error("The current python version is less than 3, update the version")
        print("current python version is older")
        result = myshell("sudo apt update && sudo apt install python3 python3-pip -y", capture=True)
        logging.debug("python3 installed %s", result)
        print("python3 installed \n")
    # Remove repositories downloaded by a previous run, if any.
    paths = ["./osh", "./openstack-helm"]
    for path in paths:
        if os.path.isdir(path):
            shutil.rmtree(path)
            print("repo removed: " + path + "\n")
    try:
        if pwd.getpwnam('ubuntu'):
            print("user ubuntu exists\n")
    except KeyError:
        print("creating user ubuntu")
        result = myshell("sudo useradd -m ubuntu", capture=True)
        logging.debug("ubuntu user created %s", result)
# -----------------------------------------------------------------------------
# Function Name : load_config
# Description : Loads the config data kept in config.yaml
# Arguments : none
# Returns : none
# -----------------------------------------------------------------------------
def load_config(filepath="./config/config.yaml"):
    global config
    with open(filepath, 'r') as file:
        config = yaml.safe_load(file)
# -----------------------------------------------------------------------------
# Function Name : myshell
# Description : executes the requested command in a subshell.
# Arguments : CMD - command string
# check - boolean, if true raises CalledProcessError when the
# command returns non zero
# capture - boolean, captures the output of the command
# **kw - dictionary, for any additional argument to pass.
# Returns : result| None
# -----------------------------------------------------------------------------
def myshell(CMD: str, check=True, capture=False, **kw):
    logging.debug("CMD > %s", CMD)
    if capture:
        result = subprocess.run(CMD, shell=True, check=check, text=True, capture_output=capture, **kw)
        return result.stdout.strip()
    else:
        subprocess.run(CMD, shell=True, check=check, text=True, stdout=None, stderr=None, **kw)
        return None
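# Illustrative usage (mirrors calls made elsewhere in this script):
#   version = myshell("helm version --short", capture=True)  # returns stdout
#   myshell("mkdir -p osh")  # runs the command, returns None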
# -----------------------------------------------------------------------------
# Function Name : handelexceptionshell
# Description : executes the requested command in a subshell and logs a
# warning instead of raising when the command fails.
# Arguments : CMD - command string
# check - boolean, if true raises CalledProcessError when the
# command returns non zero
# capture - boolean, captures the output of the command
# **kw - dictionary, for any additional argument to pass.
# Returns : result| None
# -----------------------------------------------------------------------------
def handelexceptionshell(CMD: str, check=True, capture=False, **kw):
    logging.debug("CMD > %s", CMD)
    try:
        if capture:
            result = subprocess.run(CMD, shell=True, check=check, text=True, capture_output=capture, **kw)
            return result.stdout.strip()
        else:
            subprocess.run(CMD, shell=True, check=check, text=True, stdout=None, stderr=None, **kw)
            return None
    except subprocess.CalledProcessError as err:
        logging.warning("Command %s faced issue: %s", CMD, err)
        print("Issue while executing the command: " + CMD + "\n")
# -----------------------------------------------------------------------------
# Function Name : pod_status
# Description : Check the pod status to ensure that all pods are running.
# Arguments : none
# Returns : none
# -----------------------------------------------------------------------------
def pod_status():
    print("Inside the pod_status function \n")
    command = "kubectl get pods -n " + config['K8S']['NAMESPACE'] + " -o json"
    # Compute the deadline once, before polling starts, so the timeout can fire.
    endtime = time.time() + config['DURATION']['TIMEOUT']
    while True:
        Jstatus = myshell(config['K8S']['SET_ENVIRONMENT'] + " && " + command, capture=True)
        Lstatus = json.loads(Jstatus)
        not_ready = []
        for pod in Lstatus.get("items", []):
            condition = {condition["type"]: condition["status"] for condition in pod["status"].get("conditions", [])}
            if condition.get("Ready") != "True":
                not_ready.append(pod["metadata"]["name"])
        if not not_ready:
            print("All pods are ready in namespace " + config['K8S']['NAMESPACE'])
            logging.info("kube-system pods in ready state")
            return
        if time.time() > endtime:
            print("Error: pod status timed out after " + str(config['DURATION']['TIMEOUT']) + "s\n")
            raise TimeoutError(f"Timed out after {config['DURATION']['TIMEOUT']}s, still not ready: {', '.join(not_ready)}")
        time.sleep(config['DURATION']['POLL_INTERVAL'])
# -----------------------------------------------------------------------------
# Function Name : install_cluster
# Description : Install kubernetes cluster and all its dependencies
# Arguments : none
# Returns : none
# -----------------------------------------------------------------------------
def install_cluster():
    print("Inside the install_cluster function\n ")
    try:
        result = myshell("mkdir -p osh", capture=True)
        print("osh folder created \n")
        result = myshell("cd ./osh && git clone " + config['REPOS']['OPENSTACK_REPO'] + " && git clone " + config['REPOS']['ZUUL_REPO'], capture=True)
        logging.info("git repo cloned for k8s cluster: %s", result)
        print("git repo cloned for cluster: " + str(result) + "\n")
    except subprocess.CalledProcessError as err:
        print("Repo did not download \n")
        logging.debug("Repo did not download %s", err)
    result = myshell("cd ./osh && pip install ansible netaddr", capture=True)
    result = myshell("cp ./k8s_env/inventory.yaml ./osh/ && cp ./k8s_env/deploy-env.yaml ./osh/", capture=True)
    logging.debug("inventory file copied %s", result)
    result = myshell("sudo apt install containerd python3-openstackclient -y", capture=True)
    print("result is: " + str(result))
    logging.debug("containerd installed for k8s: %s", result)
    ### Here need to check how to create ssh-keygen for user ubuntu
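    # A possible sketch for this TODO (an assumption only, not executed here):
    #   sudo -u ubuntu ssh-keygen -t rsa -N '' -f /home/ubuntu/.ssh/id_rsa
    #   sudo -u ubuntu ssh-copy-id ubuntu@<NODE_IP>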
result = myshell("cd ./osh && export ANSIBLE_ROLES_PATH=~/osh/openstack-helm/roles:~/osh/zuul-jobs/roles && ansible-playbook -i inventory.yaml deploy-env.yaml", capture=False)
print("kubernetes deployment successful ", result)
logging.debug("kubernetes deployment successful result= %s", result)
logging.info("kubernetes deployment successful result= %s", result)
### need to add the kubeconfig to be able to run kubectl command export KUBECONFIG=/etc/kubernetes/admin.conf && kubectl get pods -A
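    # Note: config['K8S']['SET_ENVIRONMENT'] already carries
    # "export KUBECONFIG=/etc/kubernetes/admin.conf" and is prepended to the
    # kubectl commands run by pod_status() and the later deployment functions.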
print("Checking pod status .. ")
pod_status()
# -----------------------------------------------------------------------------
# Function Name : install_dependencies
# Description : verify system prerequisites and trigger the k8s
# installation in case an existing k8s installation is not available.
# Arguments : none
# Returns : none
# -----------------------------------------------------------------------------
def install_dependencies():
    print("Inside install_dependencies function \n")
    verify_ubuntu = myshell("lsb_release -a", capture=True)
    logging.debug("ubuntu version details: %s", verify_ubuntu)
    print("ubuntu version details: " + verify_ubuntu)
    first_line = verify_ubuntu.strip().split("\n")[0]
    if str(first_line.split(":", -1)[1]).strip() != "Ubuntu":
        logging.error("the OS is not Ubuntu")
        print("the OS is not Ubuntu")
        sys.exit()
    output = ""
    """
    ## disabled the below part for starlingx
    try:
        output = myshell("kubectl version --short", capture=True)
        print("kubernetes connection successful: " + output)
        logging.debug("kubernetes connection successful. %s ", output)
        logging.info("kubernetes connection successful")
    except subprocess.CalledProcessError:
        logging.error("Kubernetes unreachable via the API\n %s", output)
        logging.info("Installing Kubernetes %s", output)
        install_cluster()
    """
    try:
        result = myshell("helm version --short", capture=True)
        logging.debug("helm already available %s", result)
    except subprocess.CalledProcessError:
        logging.info("Installing helm on the system \n")
        print("Installing helm on the system \n")
        result = myshell("curl -fsSL -o get_helm.sh " + config['HELM']['HELM_SCRIPT'], capture=True)
        logging.debug("helm script downloaded %s", result)
        result = myshell("chmod 700 get_helm.sh && ./get_helm.sh", capture=True)
        logging.info("helm installed: %s", result)
        print("helm installed: " + str(result))
# -----------------------------------------------------------------------------
# Function Name : clone_code
# Description : Ensure the openstack repo availability for deployment
# Arguments : none
# Returns : none
# -----------------------------------------------------------------------------
def clone_code():
    print("Inside the clone_code function \n")
    try:
        git_clone = myshell("git clone " + config['REPOS']['OPENSTACK_REPO'], capture=True)
        logging.debug("clone for the tacker repo done: %s", git_clone)
        print("clone for the tacker repo done: " + str(git_clone))
    except subprocess.CalledProcessError:
        print("could not clone the repo " + config['REPOS']['OPENSTACK_REPO'] + " to the node \n")
        logging.debug("could not clone the repo %s to the node \n", config['REPOS']['OPENSTACK_REPO'])
# -----------------------------------------------------------------------------
# Function Name : create_namespace
# Description : Ensure that the prerequisites of the openstack installation
# are fulfilled.
# Arguments : none
# Returns : none
# -----------------------------------------------------------------------------
def create_namespace():
    print("Inside create_namespace function \n")
    try:
        result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && kubectl get ns openstack", capture=True)
        logging.debug("namespace already existing ")
        print("namespace already existing ")
    except subprocess.CalledProcessError:
        logging.debug("namespace openstack not available")
        result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && kubectl create ns openstack", capture=True)
        logging.debug("namespace openstack created %s", result)
        print("namespace openstack created " + str(result))
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && kubectl label node " + config['NODES']['TACKER_NODE'] + " openstack-control-plane=enabled", capture=True)
    logging.debug("master node labeling done: %s", result)
    result = myshell("helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx", capture=True)
    logging.debug("ingress repo added: %s", result)
    print("ingress repo added " + str(result) + "\n")
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + ' && helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --version="4.8.3" --namespace=openstack --set controller.kind=Deployment --set controller.admissionWebhooks.enabled="false" --set controller.scope.enabled="true" --set controller.service.enabled="false" --set controller.ingressClassResource.name=nginx --set controller.ingressClassResource.controllerValue="k8s.io/ingress-nginx" --set controller.ingressClassResource.default="false" --set controller.ingressClass=nginx --set controller.labels.app=ingress-api', capture=True)
    logging.debug("ingress created: %s", result)
    print("ingress created: " + str(result))
    time.sleep(config['DURATION']['CHECK_INTERVAL'])
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && kubectl get pods -A | grep ingress && helm ls -A | grep ingress", capture=True)
    logging.debug("Ingress pod state: %s", result)
    print("Ingress pod state: " + str(result) + " \n")
# -----------------------------------------------------------------------------
# Function Name : deploy_openstack
# Description : Install openstack services.
# Arguments : none
# Returns : none
# -----------------------------------------------------------------------------
def deploy_openstack():
    ## install rabbitmq
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && cd openstack-helm/rabbitmq && helm dependency build", capture=True)
    logging.debug("rabbitmq dependency installed %s", result)
    print("rabbitmq dependency installed " + str(result) + "\n")
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && cd openstack-helm && ./tools/deployment/component/common/rabbitmq.sh", capture=True)
    logging.debug("rabbitmq installed: %s", result)
    time.sleep(config['DURATION']['CHECK_INTERVAL'])
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && kubectl get pods -A | grep rabbitmq && helm ls -A | grep rabbitmq", capture=True)
    logging.debug("rabbitmq pod status: %s", result)
    print("rabbitmq pod status: " + str(result) + " \n")
    ## install mariadb
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && cd openstack-helm/mariadb && helm dependency build", capture=True)
    logging.debug("mariadb dependency installed %s", result)
    print("mariadb dependency installed " + str(result) + "\n")
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && cd openstack-helm && ./tools/deployment/db/mariadb.sh", capture=True)
    logging.debug("mariadb installed: %s", result)
    print("mariadb installed: " + str(result) + "\n")
    time.sleep(config['DURATION']['CHECK_INTERVAL'])
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && kubectl get pods -A | grep mariadb && helm ls -A | grep mariadb", capture=True)
    logging.debug("mariadb pod status: %s", result)
    print("mariadb pod status: " + str(result) + " \n")
    ## install memcached
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && cd openstack-helm/memcached && helm dependency build", capture=True)
    logging.debug("memcached dependency installed %s", result)
    print("memcached dependency installed " + str(result) + "\n")
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && cd openstack-helm && ./tools/deployment/component/common/memcached.sh", capture=True)
    logging.debug("memcached installed: %s", result)
    print("memcached installed: " + str(result) + "\n")
    time.sleep(config['DURATION']['CHECK_INTERVAL'])
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && kubectl get pods -A | grep memcached && helm ls -A | grep memcached", capture=True)
    logging.debug("memcached pod status: %s", result)
    print("memcached pod status: " + str(result) + " \n")
    ## install keystone
    result = myshell("cd openstack-helm/keystone && helm dependency build", capture=True)
    logging.debug("keystone dependency installed %s", result)
    print("keystone dependency installed " + str(result) + "\n")
    result = handelexceptionshell(config['K8S']['SET_ENVIRONMENT'] + " && cd openstack-helm && ./tools/deployment/component/keystone/keystone.sh", capture=True)
    logging.debug("keystone installed: %s", result)
    print("keystone installed: " + str(result) + " \n")
    time.sleep(config['DURATION']['CHECK_INTERVAL'])
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && kubectl get pods -A | grep keystone && helm ls -A | grep keystone", capture=True)
    logging.debug("keystone pod status: %s", result)
    print("keystone pod status: " + str(result) + " \n")
    ## install glance
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && cd openstack-helm/glance && helm dependency build", capture=True)
    logging.debug("glance dependency installed %s", result)
    print("glance dependency installed " + str(result) + "\n")
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && kubectl apply -f ./volumes/local-storage-class.yaml", capture=True)
    logging.debug("Glance Storage class created %s", result)
    print("Glance Storage class created " + str(result) + "\n")
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && kubectl apply -f ./volumes/local-pv-tempate.yaml", capture=True)
    logging.debug("Glance pv created %s", result)
    print("Glance pv created " + str(result) + "\n")
    result = handelexceptionshell(config['K8S']['SET_ENVIRONMENT'] + " && cd openstack-helm && ./tools/deployment/component/glance/glance.sh", capture=True)
    logging.debug("glance installed: %s", result)
    print("glance installed: " + str(result) + "\n")
    time.sleep(config['DURATION']['CHECK_INTERVAL'])
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && kubectl get pods -A | grep glance && helm ls -A | grep glance", capture=True)
    logging.debug("glance pod status: %s", result)
    print("glance pod status: " + str(result) + "\n")
    ## mariadb (second pass before barbican)
    result = myshell("cd openstack-helm/mariadb && helm dependency build", capture=True)
    logging.debug("mariadb dependency installed %s", result)
    print("mariadb dependency installed " + str(result) + " \n")
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && cd openstack-helm && ./tools/deployment/db/mariadb.sh", capture=True)
    logging.debug("mariadb installed: %s", result)
    print("mariadb installed: " + str(result) + "\n")
    time.sleep(config['DURATION']['CHECK_INTERVAL'])
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && kubectl get pods -A | grep mariadb && helm ls -A | grep mariadb", capture=True)
    logging.debug("mariadb pod status: %s", result)
    print("mariadb pod status: " + str(result) + " \n")
    ## install barbican
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && cd openstack-helm/barbican && helm dependency build", capture=True)
    logging.debug("barbican dependency installed %s", result)
    print("barbican dependency installed " + str(result) + "\n")
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && cd openstack-helm && ./tools/deployment/component/barbican/barbican.sh", capture=True)
    logging.debug("barbican installed: %s", result)
    print("barbican installed: " + str(result) + "\n")
    time.sleep(config['DURATION']['CHECK_INTERVAL'])
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && kubectl get pods -A | grep barbican && helm ls -A | grep barbican", capture=True)
    logging.debug("barbican pod status: %s", result)
    print("barbican pod status: " + str(result) + "\n")
# -----------------------------------------------------------------------------
# Function Name : deploy_tacker
# Description : Deploy tacker services
# Arguments : none
#
# Returns : none
# -----------------------------------------------------------------------------
def deploy_tacker():
    print("Inside deploy_tacker function \n")
    ## check PV here as well
    try:
        result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && kubectl apply -f ./volumes/task1-tacker-pv.yaml && kubectl apply -f ./volumes/task2-tacker-pv.yaml && kubectl apply -f ./volumes/task3-tacker-pv.yaml ", capture=True)
        logging.debug("Tacker pv created %s", result)
        print("Tacker pv created " + str(result) + "\n")
    except subprocess.CalledProcessError as err:
        logging.error("PV creation failed for Tacker: %s", err)
        print("PV creation failed for Tacker \n")
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && cd openstack-helm/tacker && helm dependency build", capture=True)
    logging.debug("Tacker dependency installed %s", result)
    print("Tacker dependency installed " + str(result) + "\n")
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && cd openstack-helm && ./tools/deployment/component/tacker/tacker.sh ", capture=True)
    logging.debug("tacker deployed: %s", result)
    print("tacker deployed: " + str(result) + "\n")
    result = myshell(config['K8S']['SET_ENVIRONMENT'] + " && kubectl get pods -A | grep tacker && helm ls -A | grep tacker", capture=True)
    logging.debug("tacker pod status: %s", result)
    print("tacker pod status: " + str(result) + "\n")
# -----------------------------------------------------------------------------
# Function Name : main
# Description : main function calls the other functions for deployment of tacker.
# Arguments : none
#
# Returns : none
# -----------------------------------------------------------------------------
def main():
    clear_repo()
    load_config()
    install_dependencies()
    clone_code()
    create_namespace()
    deploy_openstack()
    deploy_tacker()

if __name__ == "__main__":
    main()


@@ -0,0 +1,15 @@
REPOS:
  OPENSTACK_REPO: "https://opendev.org/openstack/openstack-helm.git"
  TACKER_INFRA_REPO: "https://opendev.org/openstack/openstack-helm-infra.git"
  ZUUL_REPO: "https://opendev.org/zuul/zuul-jobs.git"
HELM:
  HELM_SCRIPT: "https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3"
DURATION:
  CHECK_INTERVAL: 30
  TIMEOUT: 600
  POLL_INTERVAL: 5
K8S:
  NAMESPACE: "kube-system"
  SET_ENVIRONMENT: "export KUBECONFIG=/etc/kubernetes/admin.conf"
NODES:
  TACKER_NODE: "<NODE_NAME>"


@@ -0,0 +1,9 @@
---
- hosts: all
  become: true
  gather_facts: true
  roles:
    - ensure-python
    - ensure-pip
    - clear-firewall
    - deploy-env


@@ -0,0 +1,62 @@
---
all:
  vars:
    ansible_port: 22
    # ansible user for running the playbook
    ansible_user: ubuntu
    ansible_ssh_private_key_file: /home/ubuntu/.ssh/id_rsa
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    # The user and group that will be used to run Kubectl and Helm commands.
    kubectl:
      user: ubuntu
      group: ubuntu
    # The user and group that will be used to run Docker commands.
    docker_users:
      - ubuntu
    # By default the deploy-env role sets up ssh key to make it possible
    # to connect to the k8s master node via ssh without a password.
    client_ssh_user: ubuntu
    cluster_ssh_user: ubuntu
    # The MetalLB controller will be installed on the Kubernetes cluster.
    # The MetalLB controller is used as a bare-metal load balancer.
    #metallb_setup: true
    # Loopback devices will be created on all cluster nodes which then can be used
    # to deploy a Ceph cluster which requires block devices to be provided.
    # Please use loopback devices only for testing purposes. They are not suitable
    # for production due to performance reasons.
    #loopback_setup: true
    #loopback_device: /dev/loop100
    #loopback_image: /var/lib/openstack-helm/ceph-loop.img
    #loopback_image_size: 12G
  children:
    # The primary node where Kubectl and Helm will be installed. If it is
    # the only node then it must be a member of the groups k8s_cluster and
    # k8s_control_plane. If there are more nodes then the wireguard tunnel
    # will be established between the primary node and the k8s_control_plane node.
    primary:
      hosts:
        primary:
          ansible_host: 10.0.0.217
    # The nodes where the Kubernetes components will be installed.
    k8s_cluster:
      hosts:
        primary:
          ###ansible_host: 10.0.0.217
        #node-2:
          ###ansible_host: 10.0.0.217
        #node-3:
          ###ansible_host: 10.0.0.217
    # The control plane node where the Kubernetes control plane components will be installed.
    # It must be the only node in the group k8s_control_plane.
    k8s_control_plane:
      hosts:
        primary:
          #ansible_host: 10.0.0.217
    # These are Kubernetes worker nodes. There could be zero such nodes.
    # In this case the Openstack workloads will be deployed on the control plane node.
    k8s_nodes:
      hosts:
        #primary:
          #ansible_host: 10.0.0.217
        #node-3:
          #ansible_host: 10.0.0.217


@@ -0,0 +1,22 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-template
  finalizers:
    - kubernetes.io/pv-protection
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
  hostPath:
    path: /mnt/vol1
    type: ""  # Leave empty or specify "DirectoryOrCreate", "Directory", etc.
  claimRef:
    namespace: openstack
    name: glance-images


@@ -0,0 +1,8 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true


@@ -0,0 +1,8 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/nfs-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true


@@ -0,0 +1,22 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-1-tacker
  finalizers:
    - kubernetes.io/pv-protection
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  storageClassName: general
  hostPath:
    path: /mnt/vol1
    type: ""  # Optional: set to "DirectoryOrCreate" if you want K8s to create it
  claimRef:
    namespace: openstack
    name: tacker-logs


@@ -0,0 +1,22 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-2-tacker
  finalizers:
    - kubernetes.io/pv-protection
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  storageClassName: general
  hostPath:
    path: /mnt/vol1
    type: ""  # You can set to "DirectoryOrCreate" if you want K8s to auto-create
  claimRef:
    namespace: openstack
    name: tacker-csar-files


@@ -0,0 +1,23 @@
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-3-tacker
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
  finalizers:
    - kubernetes.io/pv-protection
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  storageClassName: general
  hostPath:
    path: /mnt/vol1
    type: ""  # Optional: Set to "DirectoryOrCreate" if you want it auto-created
  claimRef:
    namespace: openstack
    name: tacker-vnfpackages