Tacker Helm Deployment Automation
Overview
Installing Tacker via OpenStack-Helm involves multiple manual steps, including the setup of system prerequisites, environment configuration, and deployment of various components. This process can be error-prone and time-consuming, especially for new users.
To address this, a Python-based automation script is introduced that streamlines the installation of Tacker using OpenStack-Helm by handling the setup of all prerequisites and the deployment steps.
The automation script is available in the Tacker repository:
tacker/tools/tacker_helm_deployment_automation_script
This document provides the steps and guidelines for using the automation script for a Helm-based installation of Tacker.
Prerequisites
All nodes must be able to access the OpenStack repositories. The repositories will be downloaded automatically during the execution of the script.
    $ git clone https://opendev.org/openstack/openstack-helm.git
    $ git clone https://opendev.org/zuul/zuul-jobs.git

All participating nodes should have connectivity with each other.
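For illustration, the following is a minimal Python sketch of how these prerequisite steps could be automated. It is not the actual logic of Tacker_Install.py; the function names and the ping-based connectivity check are assumptions.

    # Illustrative sketch only; not the actual implementation of Tacker_Install.py.
    import subprocess

    REPOS = [
        "https://opendev.org/openstack/openstack-helm.git",
        "https://opendev.org/zuul/zuul-jobs.git",
    ]

    def clone_prerequisites(dest="."):
        """Clone the repositories required for the deployment."""
        for url in REPOS:
            subprocess.run(["git", "clone", url], cwd=dest, check=False)

    def check_connectivity(hosts):
        """Report whether each participating node answers a single ping."""
        for host in hosts:
            ok = subprocess.run(["ping", "-c", "1", host],
                                stdout=subprocess.DEVNULL).returncode == 0
            print(f"{host}: {'reachable' if ok else 'UNREACHABLE'}")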
Configuration changes
- Edit k8s_env/inventory.yaml for k8s deployment.

  Add the following for running Ansible:
    # ansible user for running the playbook
    ansible_user: <USERNAME>
    ansible_ssh_private_key_file: <PATH_TO_SSH_KEY_FOR_USER>
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no

  Add the user and group for running the kubectl and helm commands.
    # The user and group that will be used to run Kubectl and Helm commands.
    kubectl:
      user: <USERNAME>
      group: <USERGROUP>
    # The user and group that will be used to run Docker commands.
    docker_users:
      - <USERNAME>

  Add the user details which will be used for communication between the master and worker nodes. This user must be configured with passwordless SSH on all the nodes.
    # to connect to the k8s master node via ssh without a password.
    client_ssh_user: <USERNAME>
    cluster_ssh_user: <USER_GROUP>

  Enable MetalLB if using bare-metal servers for load balancing.
    # MetalLB controller is used for bare-metal loadbalancer.
    metallb_setup: true

  If deploying a Ceph cluster, enable the loopback device config to be used by Ceph.
    # Loopback devices will be created on all cluster nodes which then can be used
    # to deploy a Ceph cluster which requires block devices to be provided.
    # Please use loopback devices only for testing purposes. They are not suitable
    # for production due to performance reasons.
    loopback_setup: true
    loopback_device: /dev/loop100
    loopback_image: /var/lib/openstack-helm/ceph-loop.img
    loopback_image_size: 12G

  Add the primary node where Kubectl and Helm will be installed.
    children:
      # The primary node where Kubectl and Helm will be installed. If it is
      # the only node then it must be a member of the groups k8s_cluster and
      # k8s_control_plane. If there are more nodes then the wireguard tunnel
      # will be established between the primary node and the k8s_control_plane node.
      primary:
        hosts:
          primary:
            ansible_host: <PRIMARY_NODE_IP>
  Add the nodes where the Kubernetes cluster will be deployed. If there is only one node, list only that node here.
      # The nodes where the Kubernetes components will be installed.
      k8s_cluster:
        hosts:
          primary:
            ansible_host: <IP_ADDRESS>
          node-2:
            ansible_host: <IP_ADDRESS>
          node-3:
            ansible_host: <IP_ADDRESS>

  Add the control-plane node in the k8s_control_plane section.
      # The control plane node where the Kubernetes control plane components will be installed.
      # It must be the only node in the group k8s_control_plane.
      k8s_control_plane:
        hosts:
          primary:
            ansible_host: <IP_ADDRESS>

  Add the worker nodes in the k8s_nodes section. If it is a single-node installation, leave the section empty.
      # These are Kubernetes worker nodes. There could be zero such nodes.
      # In this case the Openstack workloads will be deployed on the control plane node.
      k8s_nodes:
        hosts:
          node-2:
            ansible_host: <IP_ADDRESS>
          node-3:
            ansible_host: <IP_ADDRESS>

  You can find a complete example of inventory.yaml in [1].
- Edit TACKER_NODE in config/config.yaml with the hostname of the master node to label it as the control-plane node (a sanity-check sketch for both configuration files follows this list):

    NODES:
      TACKER_NODE: <CONTROL-PLANE_NODE_HOSTNAME>
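Before running the automation, a small illustrative Python snippet can verify that the placeholders in both files were replaced. This is not part of the provided script; it assumes PyYAML is installed and the helper names are hypothetical.

    # Illustrative sanity check only; not part of the provided script.
    import re
    import yaml  # PyYAML

    def unreplaced_placeholders(path="k8s_env/inventory.yaml"):
        """Return any <PLACEHOLDER> tokens still left in the inventory file."""
        with open(path) as f:
            text = f.read()
        yaml.safe_load(text)  # fails early if the edited YAML is malformed
        return sorted(set(re.findall(r"<[A-Z_-]+>", text)))

    def tacker_node(path="config/config.yaml"):
        """Read the hostname configured as TACKER_NODE."""
        with open(path) as f:
            return yaml.safe_load(f)["NODES"]["TACKER_NODE"]

    if __name__ == "__main__":
        leftover = unreplaced_placeholders()
        if leftover:
            print("Placeholders still present in inventory.yaml:", ", ".join(leftover))
        print("Node to be labelled as control-plane:", tacker_node())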
Script execution
Ensure that the user has permission to execute the script:

    $ ls -la Tacker_Install.py
    -rwxr-xr-x 1 root root 21923 Jul 22 10:00 Tacker_Install.py

Execute the command to run the script:
    $ sudo python3 Tacker_Install.py

Verify that the Tacker pods are up and running:
    $ kubectl get pods -n openstack | grep -i Tacker
    tacker-conductor-d7595d756-6k8wp   1/1   Running     0   24h
    tacker-db-init-mxwwf               0/1   Completed   0   24h
    tacker-db-sync-4xnhx               0/1   Completed   0   24h
    tacker-ks-endpoints-4nbqb          0/3   Completed   0   24h
    tacker-ks-service-c8s2m            0/1   Completed   0   24h
    tacker-ks-user-z2cq7               0/1   Completed   0   24h
    tacker-rabbit-init-fxggv           0/1   Completed   0   24h
    tacker-server-6f578bcf6c-z7z2c     1/1   Running     0   24h

For details, refer to the document in [2].
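If a scripted check is preferred over inspecting the output manually, the following illustrative helper polls until the Tacker pods are Running or Completed. It is not part of the provided script; the function name, timeout, and interval values are assumptions.

    # Illustrative helper only; not part of Tacker_Install.py.
    import subprocess
    import time

    def wait_for_tacker_pods(namespace="openstack", timeout=1800, interval=30):
        """Poll kubectl until every pod whose name contains 'tacker' is Running or Completed."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.run(
                ["kubectl", "get", "pods", "-n", namespace, "--no-headers"],
                capture_output=True, text=True, check=True).stdout
            pods = [line for line in out.splitlines() if "tacker" in line]
            unfinished = [line for line in pods
                          if "Running" not in line and "Completed" not in line]
            if pods and not unfinished:
                return True
            time.sleep(interval)
        return False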