We recently introduced the extraObjects section in all charts' values. This makes it possible to deploy arbitrary K8s objects as part of a Helm release. The Openstack-Helm legacy ingress implementation is very opinionated, and we don't want to do the same for Gateway API objects. Instead, users are supposed to add Gateway API objects (Gateway, HTTPRoute, etc.) manually using this extraObjects value. The range of infrastructure use cases is very wide, and it is better to let users choose on their own how they are going to provide access to the deployed Openstack services.

Regarding this PS:
* Add tasks to the deploy-env role to deploy Envoy Gateway on the test env.
* Add gateway.yaml overrides for Keystone, Nova, Neutron, Placement, Glance, and Heat. These overrides disable rendering Ingress-related objects and add an HTTPRoute for public endpoints.

Later, gateway overrides will be added for other charts, and the deployment scripts will be updated appropriately.

Signed-off-by: Vladimir Kozhukalov <kozhukalov@gmail.com>
Change-Id: I8043206136bf6513e2ba2b978510662b655a368f
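As an illustrative sketch of the approach the commit describes, chart overrides could disable the legacy ingress objects and publish a public endpoint through an HTTPRoute via extraObjects. All specific names below (the manifests keys, the `envoy-gateway` Gateway name and namespace, the hostname, and the service/port) are assumptions for illustration, not taken from the actual gateway.yaml files:

```yaml
# Hypothetical overrides for the Keystone chart (names are assumptions).
manifests:
  ingress_api: false           # stop rendering the legacy Ingress object
  service_ingress_api: false   # and its companion Service
extraObjects:
  - apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: keystone
    spec:
      parentRefs:
        - name: envoy-gateway        # assumed Gateway name
          namespace: envoy-gateway   # assumed namespace
      hostnames:
        - keystone.openstack.example.com   # assumed public hostname
      rules:
        - backendRefs:
            - name: keystone-api     # assumed public Service of the chart
              port: 5000
```

Because extraObjects renders arbitrary manifests verbatim, the same pattern works for any Gateway API object (Gateway, GRPCRoute, etc.), which is exactly why the charts stay unopinionated here.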
This role is used to deploy a test environment, which includes the following steps:
- install the necessary prerequisites, including Helm
- deploy Containerd as the container runtime for Kubernetes
- deploy Kubernetes using Kubeadm with a single control plane node
- install Calico as the Kubernetes networking plugin
- establish a tunnel between the primary node and the K8s control plane node
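The steps above are executed by applying the role from a playbook. A minimal sketch, assuming the role is named deploy-env and the inventory groups described below exist (the playbook file name and any variables are assumptions):

```yaml
# Hypothetical playbook applying the deploy-env role to all hosts
# in the inventory; the role itself targets the proper groups.
- hosts: all
  become: true
  roles:
    - deploy-env
```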
The role works for both single-node and multi-node inventories and relies
entirely on inventory groups. The primary and k8s_control_plane groups must
each include exactly one node, and it can be the same node in both groups.
The primary group is where we install kubectl and helm CLI tools.
You can consider this group as a deployer's machine.
The k8s_control_plane is where we deploy the K8s control plane.
The k8s_cluster group must include all the K8s nodes including control plane
and worker nodes.
When running tests in a single-node environment, the k8s_nodes group must be
empty. This means the K8s cluster will consist of a single control plane node
where all the workloads will be running.
See for example:
```yaml
all:
  vars:
    ansible_port: 22
    ansible_user: ubuntu
    ansible_ssh_private_key_file: /home/ubuntu/.ssh/id_rsa
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no
  hosts:
    primary:
      ansible_host: 10.10.10.10
    node-1:
      ansible_host: 10.10.10.11
    node-2:
      ansible_host: 10.10.10.12
    node-3:
      ansible_host: 10.10.10.13
  children:
    primary:
      hosts:
        primary:
    k8s_cluster:
      hosts:
        node-1:
        node-2:
        node-3:
    k8s_control_plane:
      hosts:
        node-1:
    k8s_nodes:
      hosts:
        node-2:
        node-3:
```
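For a single-node environment, the same node appears in primary, k8s_cluster, and k8s_control_plane, while k8s_nodes is left empty. A minimal sketch (the host name and address are assumptions):

```yaml
all:
  vars:
    ansible_port: 22
    ansible_user: ubuntu
  hosts:
    node-1:
      ansible_host: 10.10.10.11   # assumed address
  children:
    primary:
      hosts:
        node-1:
    k8s_cluster:
      hosts:
        node-1:
    k8s_control_plane:
      hosts:
        node-1:
    k8s_nodes:
      hosts: {}   # empty: all workloads run on the control plane node
```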