
This role deploys a test environment, which includes:

  • installing the necessary prerequisites, including Helm
  • deploying Containerd as the container runtime for Kubernetes
  • deploying Kubernetes using Kubeadm with a single control plane node
  • installing Calico as the Kubernetes networking plugin
  • establishing a tunnel between the primary node and the K8s control plane node

The role works for both single-node and multi-node inventories and relies entirely on inventory groups. The primary and k8s_control_plane groups must each include exactly one node, and it can be the same node for both groups.

The primary group is where we install the kubectl and helm CLI tools. You can think of this group as the deployer's machine.

The k8s_control_plane is where we deploy the K8s control plane.

The k8s_cluster group must include all the K8s nodes, both control plane and worker nodes.

When running tests in a single-node environment, the k8s_nodes group must be empty. In that case the K8s cluster consists of a single control plane node where all the workloads run.
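The group constraints above can be sketched as a small check. This is not part of the role itself, just an illustration of the rules: primary and k8s_control_plane must each hold exactly one host, and k8s_cluster must be the union of the control plane and worker nodes.

```python
# Illustrative sketch of the inventory group constraints described above
# (not part of the deploy-env role).

def validate_groups(groups: dict) -> None:
    """Raise ValueError if the inventory groups violate the role's rules."""
    for name in ("primary", "k8s_control_plane"):
        if len(groups.get(name, [])) != 1:
            raise ValueError(f"group {name!r} must contain exactly one node")
    expected = set(groups["k8s_control_plane"]) | set(groups.get("k8s_nodes", []))
    if set(groups.get("k8s_cluster", [])) != expected:
        raise ValueError("k8s_cluster must include all control plane and worker nodes")

# Multi-node layout: a dedicated deployer plus three cluster nodes.
validate_groups({
    "primary": ["primary"],
    "k8s_control_plane": ["node-1"],
    "k8s_nodes": ["node-2", "node-3"],
    "k8s_cluster": ["node-1", "node-2", "node-3"],
})

# Single-node layout: k8s_nodes is empty, the control plane runs all workloads.
validate_groups({
    "primary": ["node-1"],
    "k8s_control_plane": ["node-1"],
    "k8s_nodes": [],
    "k8s_cluster": ["node-1"],
})
print("inventory group constraints satisfied")
```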

See, for example, the following inventory:

```yaml
all:
  vars:
    ansible_port: 22
    ansible_user: ubuntu
    ansible_ssh_private_key_file: /home/ubuntu/.ssh/id_rsa
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no
  hosts:
    primary:
      ansible_host: 10.10.10.10
    node-1:
      ansible_host: 10.10.10.11
    node-2:
      ansible_host: 10.10.10.12
    node-3:
      ansible_host: 10.10.10.13
  children:
    primary:
      hosts:
        primary:
    k8s_cluster:
      hosts:
        node-1:
        node-2:
        node-3:
    k8s_control_plane:
      hosts:
        node-1:
    k8s_nodes:
      hosts:
        node-2:
        node-3:
```
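For a single-node environment, the same layout collapses onto one host: the primary and control plane are the same node and k8s_nodes is empty. A sketch (host name and address are assumptions, not requirements):

```yaml
all:
  vars:
    ansible_port: 22
    ansible_user: ubuntu
    ansible_ssh_private_key_file: /home/ubuntu/.ssh/id_rsa
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no
  hosts:
    node-1:
      ansible_host: 10.10.10.11
  children:
    primary:
      hosts:
        node-1:
    k8s_cluster:
      hosts:
        node-1:
    k8s_control_plane:
      hosts:
        node-1:
    k8s_nodes:
      hosts: {}
```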