This role is used to deploy a test environment, which includes the following steps (a playbook sketch applying the role follows the list):

  • install necessary prerequisites, including Helm
  • deploy Containerd as a container runtime for Kubernetes
  • deploy Kubernetes using Kubeadm with a single control plane node
  • install Calico as the Kubernetes networking plugin
  • establish a tunnel between the primary node and the K8s control plane node
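
Below is a minimal sketch of a playbook applying this role to the inventory groups described further down. The play layout (targeting all hosts with become enabled) is an assumption for illustration, not necessarily the exact playbook shipped with the project.

# Illustrative playbook: apply the deploy-env role to every host in the inventory.
- hosts: all
  become: true
  roles:
    - deploy-env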

The role works for both single-node and multi-node inventories and relies entirely on inventory groups. The primary and k8s_control_plane groups must each contain exactly one node, and this can be the same node for both groups.

The primary group is where we install the kubectl and helm CLI tools. You can think of this group as the deployer's machine.
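
For example, once the role has finished, a quick sanity check could be run against the primary group. This snippet is hypothetical; it assumes kubectl was installed by the role and that a working kubeconfig is in place for the deployer user.

# Hypothetical post-deployment check run from the primary (deployer) node.
- hosts: primary
  tasks:
    - name: Verify the cluster is reachable from the deployer machine
      ansible.builtin.command: kubectl get nodes
      changed_when: false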

The k8s_control_plane group is where we deploy the K8s control plane.

The k8s_cluster group must include all K8s nodes, both control plane and worker nodes.

When running tests in a single-node environment, the k8s_nodes group must be empty. This means the K8s cluster will consist of a single control plane node where all the workloads will run. A single-node inventory sketch is given after the multi-node example below.

See, for example, the following multi-node inventory:

all:
  vars:
    ansible_port: 22
    ansible_user: ubuntu
    ansible_ssh_private_key_file: /home/ubuntu/.ssh/id_rsa
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no
  hosts:
    primary:
      ansible_host: 10.10.10.10
    node-1:
      ansible_host: 10.10.10.11
    node-2:
      ansible_host: 10.10.10.12
    node-3:
      ansible_host: 10.10.10.13
  children:
    primary:
      hosts:
        primary:
    k8s_cluster:
      hosts:
        node-1:
        node-2:
        node-3:
    k8s_control_plane:
      hosts:
        node-1:
    k8s_nodes:
      hosts:
        node-2:
        node-3:
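
By contrast, a single-node inventory could look like the sketch below. The host name and address are illustrative; the same node is placed in the primary, k8s_cluster and k8s_control_plane groups, and k8s_nodes is left empty as described above.

all:
  vars:
    ansible_port: 22
    ansible_user: ubuntu
    ansible_ssh_private_key_file: /home/ubuntu/.ssh/id_rsa
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no
  hosts:
    node-1:
      ansible_host: 10.10.10.11
  children:
    primary:
      hosts:
        node-1:
    k8s_cluster:
      hosts:
        node-1:
    k8s_control_plane:
      hosts:
        node-1:
    k8s_nodes:
      # empty group for a single-node environment
      hosts: {}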