[WIP] Update docs (chart READMEs)

Change-Id: I2b44a6992f8bcc79a8f6f8426427d335ece32789
This commit is contained in:
Vladimir Kozhukalov 2024-04-29 19:49:09 -05:00
parent 1403dccc3f
commit 7ef54f2f21
22 changed files with 534 additions and 297 deletions


@ -1,41 +0,0 @@
Before deployment
=================
Before proceeding with the steps outlined in the following
sections, you must clone the Git repositories containing all
the required Helm charts, deployment scripts, and Ansible roles.
This preliminary step ensures that you have access to the
assets needed for a seamless deployment process.
.. code-block:: bash
mkdir ~/osh
cd ~/osh
git clone https://opendev.org/openstack/openstack-helm.git
git clone https://opendev.org/openstack/openstack-helm-infra.git
All further steps assume these two repositories are cloned into the
`~/osh` directory.
Next, you need to update the dependencies for all the charts in both OpenStack-Helm
repositories. This can be done by running the following commands:
.. code-block:: bash
cd ~/osh/openstack-helm
./tools/deployment/common/prepare-charts.sh
Also, before deploying the OpenStack cluster, you have to specify the
OpenStack version and the operating system version that you would like to use
for the deployment. To do this, export the following environment variables
.. code-block:: bash
export OPENSTACK_RELEASE=2024.1
export FEATURES="${OPENSTACK_RELEASE} ubuntu_jammy"
.. note::
The list of supported versions can be found :doc:`here </readme>`.


@ -1,51 +0,0 @@
Deploy ingress controller
=========================
Deploying an ingress controller when deploying OpenStack on Kubernetes
is essential to ensure proper external access and SSL termination
for your OpenStack services.
In the OpenStack-Helm project, we usually deploy multiple `ingress-nginx`_
controller instances to optimize traffic routing:
* In the `kube-system` namespace, we deploy an ingress controller that
monitors ingress objects across all namespaces, primarily focusing on
routing external traffic into the OpenStack environment.
* In the `openstack` namespace, we deploy an ingress controller that
handles traffic exclusively within the OpenStack namespace. This instance
plays a crucial role in SSL termination for enhanced security between
OpenStack services.
* In the `ceph` namespace, we deploy an ingress controller that is dedicated
to routing traffic specifically to the Ceph Rados Gateway service, ensuring
efficient communication with Ceph storage resources.
You can utilize any other ingress controller implementation that suits your
needs best. See for example the list of available `ingress controllers`_.
Ensure that the ingress controller pods are deployed with the `app: ingress-api`
label which is used by OpenStack-Helm as a selector for the Kubernetes
services that are exposed as OpenStack endpoints.
For example, the OpenStack-Helm `keystone` chart by default deploys a service
that routes traffic to the ingress controller pods selected using the
`app: ingress-api` label. Then it also deploys an ingress object that references
the **IngressClass** named `nginx`. This ingress object corresponds to the HTTP
virtual host routing the traffic to the Keystone API service which works as an
endpoint for Keystone pods.
.. image:: deploy_ingress_controller.jpg
:width: 100%
:align: center
:alt: deploy-ingress-controller
To deploy these three ingress controller instances you can use the script `ingress.sh`_
.. code-block:: bash
cd ~/osh/openstack-helm
./tools/deployment/common/ingress.sh
.. _ingress.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/common/ingress.sh
.. _ingress-nginx: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/README.md
.. _ingress controllers: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/


@ -1,143 +0,0 @@
Deploy Kubernetes
=================
OpenStack-Helm provides charts that can be deployed on any Kubernetes cluster if it meets
the supported version requirements. However, deploying the Kubernetes cluster itself is beyond
the scope of OpenStack-Helm.
You can use any Kubernetes deployment tool for this purpose. In this guide, we detail how to set up
a Kubernetes cluster using Kubeadm and Ansible. While not production-ready, this cluster is ideal
as a starting point for lab or proof-of-concept environments.
All OpenStack projects test their code through an infrastructure managed by the CI
tool, Zuul, which executes Ansible playbooks on one or more test nodes. Therefore, we employ Ansible
roles/playbooks to install required packages, deploy Kubernetes, and then execute tests on it.
To establish a test environment, the Ansible role deploy-env_ is employed. This role establishes
a basic single/multi-node Kubernetes cluster, ensuring the functionality of commonly used
deployment configurations. The role is compatible with Ubuntu Focal and Ubuntu Jammy distributions.
Install Ansible
---------------
.. code-block:: bash
pip install ansible
Prepare Ansible roles
---------------------
Here is the Ansible `playbook`_ that is used to deploy Kubernetes. The roles used in this playbook
are defined in different repositories. So, in addition to the OpenStack-Helm repositories,
which we assume have already been cloned to the `~/osh` directory, you have to clone
yet another one
.. code-block:: bash
cd ~/osh
git clone https://opendev.org/zuul/zuul-jobs.git
Now let's set the environment variable ``ANSIBLE_ROLES_PATH`` which specifies
where Ansible will look up roles
.. code-block:: bash
export ANSIBLE_ROLES_PATH=~/osh/openstack-helm-infra/roles:~/osh/zuul-jobs/roles
To avoid setting it every time you start a new terminal instance, you can define it
in the Ansible configuration file. Please see the Ansible documentation.
Prepare Ansible inventory
-------------------------
We assume you have three nodes, usually VMs. Those nodes must be available via
SSH using public key authentication, and an SSH user (let's say `ubuntu`)
must have passwordless sudo on the nodes.
Create the Ansible inventory file using the following command
.. code-block:: bash
cat > ~/osh/inventory.yaml <<EOF
all:
vars:
kubectl:
user: ubuntu
group: ubuntu
calico_version: "v3.25"
crictl_version: "v1.26.1"
helm_version: "v3.6.3"
kube_version: "1.26.3-00"
yq_version: "v4.6.0"
children:
primary:
hosts:
primary:
ansible_port: 22
ansible_host: 10.10.10.10
ansible_user: ubuntu
ansible_ssh_private_key_file: ~/.ssh/id_rsa
ansible_ssh_extra_args: -o StrictHostKeyChecking=no
nodes:
hosts:
node-1:
ansible_port: 22
ansible_host: 10.10.10.11
ansible_user: ubuntu
ansible_ssh_private_key_file: ~/.ssh/id_rsa
ansible_ssh_extra_args: -o StrictHostKeyChecking=no
node-2:
ansible_port: 22
ansible_host: 10.10.10.12
ansible_user: ubuntu
ansible_ssh_private_key_file: ~/.ssh/id_rsa
ansible_ssh_extra_args: -o StrictHostKeyChecking=no
EOF
If you have just one node then it must be `primary` in the file above.
.. note::
If you would like to set up a Kubernetes cluster on the local host,
configure the Ansible inventory to designate the `primary` node as the local host.
For further guidance, please refer to the Ansible documentation.
Deploy Kubernetes
-----------------
.. code-block:: bash
cd ~/osh
ansible-playbook -i inventory.yaml ~/osh/openstack-helm/tools/gate/playbooks/deploy-env.yaml
The playbook only changes the state of the nodes listed in the Ansible inventory.
It installs necessary packages, deploys and configures Containerd and Kubernetes. For
details please refer to the role `deploy-env`_ and other roles (`ensure-python`_, `ensure-pip`_, `clear-firewall`_)
used in the playbook.
.. note::
By default, the role `deploy-env`_ uses the Google DNS servers 8.8.8.8 and 8.8.4.4
and updates `/etc/resolv.conf` on the nodes. These DNS nameserver entries can be changed by
updating the file ``~/osh/openstack-helm-infra/roles/deploy-env/files/resolv.conf``.
The role also configures the internal Kubernetes DNS server (CoreDNS) to work as a recursive
DNS server and adds its IP address (10.96.0.10 by default) to the `/etc/resolv.conf` file.
Programs running on those nodes will then be able to resolve names in the
default Kubernetes domain `.svc.cluster.local`. For example, if you run the OpenStack
command-line client on one of those nodes, it will be able to access the OpenStack API
services via these names.
.. note::
The role `deploy-env`_ installs and configures Kubectl and Helm on the `primary` node.
You can log in to it via SSH, clone the `openstack-helm`_ and `openstack-helm-infra`_ repositories,
and then run the OpenStack-Helm deployment scripts which employ Kubectl and Helm to deploy
OpenStack.
.. _deploy-env: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/roles/deploy-env
.. _ensure-python: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-python
.. _ensure-pip: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-pip
.. _clear-firewall: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/clear-firewall
.. _openstack-helm: https://opendev.org/openstack/openstack-helm.git
.. _openstack-helm-infra: https://opendev.org/openstack/openstack-helm-infra.git
.. _playbook: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/gate/playbooks/deploy-env.yaml


@ -1,17 +1,11 @@
Installation
============
Contents:
The list of supported OpenStack/Platform versions can be found :doc:`here </readme>`.
.. toctree::
:maxdepth: 2
before_deployment
deploy_kubernetes
prepare_kubernetes
deploy_ceph
setup_openstack_client
deploy_ingress_controller
deploy_openstack_backend
deploy_openstack
kubernetes
prerequisites/index
openstack/index


@ -0,0 +1,178 @@
Deploy Kubernetes
=================
OpenStack-Helm provides charts that can be deployed on any Kubernetes cluster if it meets
the supported version requirements. However, deploying the Kubernetes cluster itself is beyond
the scope of OpenStack-Helm.
You can use any Kubernetes deployment tool for this purpose. In this guide, we detail how to set up
a Kubernetes cluster using Kubeadm and Ansible. While not production-ready, this cluster is ideal
as a starting point for lab or proof-of-concept environments.
All OpenStack projects test their code through an infrastructure managed by the CI
tool, Zuul, which executes Ansible playbooks on one or more test nodes. Therefore, we employ Ansible
roles/playbooks to install required packages, deploy Kubernetes, and then execute tests on it.
To establish a test environment, the Ansible role deploy-env_ is employed. This role establishes
a basic single/multi-node Kubernetes cluster, ensuring the functionality of commonly used
deployment configurations. The role is compatible with Ubuntu Focal and Ubuntu Jammy distributions.
Clone git repositories
----------------------
Before proceeding with the steps outlined in the following sections, it is
imperative that you clone the git repositories containing the required Ansible roles.
.. code-block:: bash
mkdir ~/osh
cd ~/osh
git clone https://opendev.org/openstack/openstack-helm-infra.git
git clone https://opendev.org/zuul/zuul-jobs.git
Install Ansible
---------------
.. code-block:: bash
pip install ansible
Ansible roles lookup path
-------------------------
Now let's set the environment variable ``ANSIBLE_ROLES_PATH`` which specifies
where Ansible will look up roles
.. code-block:: bash
export ANSIBLE_ROLES_PATH=~/osh/openstack-helm-infra/roles:~/osh/zuul-jobs/roles
To avoid setting it every time you start a new terminal instance, you can define it
in the Ansible configuration file. Please see the Ansible documentation.
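For example, you can persist this setting in a user-level Ansible configuration file. This is a
minimal sketch; ``~/.ansible.cfg`` and the ``roles_path`` option are standard Ansible configuration,
and the paths below assume the repositories are cloned into ``~/osh``:
.. code-block:: bash
cat > ~/.ansible.cfg <<EOF
[defaults]
roles_path = ~/osh/openstack-helm-infra/roles:~/osh/zuul-jobs/roles
EOF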
Ansible inventory
-----------------
The example below assumes that there are four nodes which must be available via
SSH using public key authentication, and an SSH user (let's say `ubuntu`)
must have passwordless sudo on the nodes.
.. code-block:: bash
cat > ~/osh/inventory.yaml <<EOF
---
all:
vars:
ansible_port: 22
ansible_user: ubuntu
ansible_ssh_private_key_file: /home/ubuntu/.ssh/id_rsa
ansible_ssh_extra_args: -o StrictHostKeyChecking=no
# The user and group that will be used to run Kubectl and Helm commands.
kubectl:
user: ubuntu
group: ubuntu
# The user and group that will be used to run Docker commands.
docker_users:
- ubuntu
# The MetalLB controller will be installed on the Kubernetes cluster.
metallb_setup: true
# Loopback devices will be created on all the nodes. They can then be used
# for Ceph storage, which requires block devices to be provided.
# Please use loopback devices only for testing purposes. They are not suitable
# for production due to performance reasons.
loopback_setup: true
loopback_device: /dev/loop100
loopback_image: /var/lib/openstack-helm/ceph-loop.img
loopback_image_size: 12G
children:
# The primary node where Kubectl and Helm will be installed. If it is
# the only node then it must be a member of the groups `k8s_cluster` and
# `k8s_control_plane`. If there are more nodes then the wireguard tunnel
# will be established between the primary node and the `k8s_control_plane` node.
primary:
hosts:
primary:
ansible_host: 10.10.10.10
# The nodes where the Kubernetes components will be installed.
k8s_cluster:
hosts:
node-1:
ansible_host: 10.10.10.11
node-2:
ansible_host: 10.10.10.12
node-3:
ansible_host: 10.10.10.13
# The control plane node where the Kubernetes control plane components will be installed.
# It must be the only node in the group `k8s_control_plane`.
k8s_control_plane:
hosts:
node-1:
ansible_host: 10.10.10.11
# These are Kubernetes worker nodes. There could be more than one node here.
k8s_nodes:
hosts:
node-2:
ansible_host: 10.10.10.12
node-3:
ansible_host: 10.10.10.13
EOF
.. note::
If you would like to set up a Kubernetes cluster on the local host,
configure the Ansible inventory to designate the `primary` node as the local host.
For further guidance, please refer to the Ansible documentation.
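For instance, a minimal single-node inventory for the local host could look like the sketch
below. ``ansible_connection: local`` is a standard Ansible setting, and, as noted in the comments
of the example above, a single node must also be a member of the ``k8s_cluster`` and
``k8s_control_plane`` groups. Adjust the ``kubectl`` user and group to your environment:
.. code-block:: bash
cat > ~/osh/inventory.yaml <<EOF
---
all:
  vars:
    kubectl:
      user: ubuntu
      group: ubuntu
    docker_users:
      - ubuntu
  children:
    primary:
      hosts:
        primary:
          ansible_connection: local
    k8s_cluster:
      hosts:
        primary:
    k8s_control_plane:
      hosts:
        primary:
EOF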
.. note::
The full list of variables that you can define in the inventory file can be found in the
file `deploy-env/defaults/main.yaml`_.
Ansible playbook
----------------
Create an Ansible playbook that will deploy the environment
.. code-block:: bash
cat > ~/osh/deploy-env.yaml <<EOF
---
- hosts: all
become: true
gather_facts: true
roles:
- ensure-python
- ensure-pip
- clear-firewall
- deploy-env
EOF
Run the playbook
-----------------
.. code-block:: bash
cd ~/osh
ansible-playbook -i inventory.yaml deploy-env.yaml
The playbook only changes the state of the nodes listed in the inventory file.
It installs necessary packages, deploys and configures Containerd and Kubernetes. For
details please refer to the role `deploy-env`_ and other roles (`ensure-python`_,
`ensure-pip`_, `clear-firewall`_) used in the playbook.
.. note::
The role `deploy-env`_ configures the cluster nodes to use Google DNS servers (8.8.8.8).
By default, it also configures the internal Kubernetes DNS server (CoreDNS) to work
as a recursive DNS server and adds its IP address (10.96.0.10 by default) to the
`/etc/resolv.conf` file.
Processes running on the cluster nodes will then be able to resolve internal
Kubernetes domain names such as `*.svc.cluster.local`.
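To sanity-check the result, you can log in to the `primary` node and verify that the cluster
is up and that internal Kubernetes names resolve. The commands below are plain `kubectl` and
`nslookup` invocations, not part of the playbook itself:
.. code-block:: bash
kubectl get nodes -o wide
nslookup kubernetes.default.svc.cluster.local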
.. _deploy-env: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/roles/deploy-env
.. _deploy-env/defaults/main.yaml: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/roles/deploy-env/defaults/main.yaml
.. _zuul-jobs: https://opendev.org/zuul/zuul-jobs.git
.. _ensure-python: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-python
.. _ensure-pip: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-pip
.. _clear-firewall: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/clear-firewall
.. _openstack-helm-infra: https://opendev.org/openstack/openstack-helm-infra.git


@ -0,0 +1,33 @@
OpenStack
=========
At this point we assume all the prerequisites are met and we can start
deploying OpenStack. The charts are published in the `openstack-helm` and
`openstack-helm-infra` repositories. Let's add them to the list of Helm repositories:
.. code-block:: bash
helm repo add openstack-helm https://tarballs.opendev.org/openstack/openstack-helm
helm repo add openstack-helm-infra https://tarballs.opendev.org/openstack/openstack-helm-infra
Also, the OpenStack-Helm plugin provides some helper commands that are used in
the following sections. To install the plugin, run the following command:
.. code-block:: bash
helm plugin install https://opendev.org/openstack/openstack-helm-plugin
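You can check that the plugin has been installed by listing Helm plugins. The helper
commands are then invoked as `helm osh <command>`; for example, the deployment scripts in
this repository use `helm osh wait-for-pods`:
.. code-block:: bash
helm plugin list
# example of a helper command used later in this guide
# helm osh wait-for-pods openstack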
Now let's set the environment variables that we will be using in the
subsequent commands:
.. code-block:: bash
export OPENSTACK_RELEASE=2024.1
export FEATURES="${OPENSTACK_RELEASE} ubuntu_jammy"
.. toctree::
:maxdepth: 2
backend
openstack
openstack_client


@ -1,28 +0,0 @@
Prepare Kubernetes
==================
In this section we assume you have a working Kubernetes cluster and
Kubectl and Helm properly configured to interact with the cluster.
Before deploying OpenStack components using OpenStack-Helm you have to set
labels on Kubernetes worker nodes which are used as node selectors.
Also, the necessary namespaces must be created.
You can use the `prepare-k8s.sh`_ script as an example of how to prepare
the Kubernetes cluster for OpenStack deployment. The script is assumed to be run
from the openstack-helm repository
.. code-block:: bash
cd ~/osh/openstack-helm
./tools/deployment/common/prepare-k8s.sh
.. note::
Note that the above script sets labels on all Kubernetes nodes, including the
control plane nodes, which are usually not meant to run workload pods
(OpenStack in our case). So you have to either untaint the control plane nodes or modify the
`prepare-k8s.sh`_ script so that it sets labels only on the worker nodes.
.. _prepare-k8s.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/common/prepare-k8s.sh


@ -0,0 +1,16 @@
Prerequisites
=============
In this section we assume you have a working Kubernetes cluster and
Kubectl and Helm properly configured to interact with the cluster.
Before deploying OpenStack components using the OpenStack-Helm charts please make
sure the following prerequisites are satisfied:
.. toctree::
:maxdepth: 2
ingress
metallb
ceph
labels

(binary image file changed; 108 KiB before and after)


@ -0,0 +1,53 @@
Ingress controller
==================
Having an ingress controller when deploying OpenStack on Kubernetes
is essential to ensure proper external access for the OpenStack services.
We recommend using the `ingress-nginx`_ controller. It utilizes Nginx as a
reverse proxy backend. Here is how to deploy it:
.. code-block:: bash
tee > /tmp/openstack_namespace.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:
name: openstack
EOF
kubectl apply -f /tmp/openstack_namespace.yaml
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
--version="4.8.3" \
--namespace=openstack \
--set controller.kind=Deployment \
--set controller.admissionWebhooks.enabled="false" \
--set controller.scope.enabled="true" \
--set controller.service.enabled="false" \
--set controller.ingressClassResource.name=nginx \
--set controller.ingressClassResource.controllerValue="k8s.io/ingress-nginx" \
--set controller.ingressClassResource.default="false" \
--set controller.ingressClass=nginx \
--set controller.labels.app=ingress-api
You can deploy any other ingress controller that suits your needs best.
See for example the list of available `ingress controllers`_.
Ensure that the ingress controller pods are deployed with the `app: ingress-api`
label which is used by OpenStack-Helm as a selector for the Kubernetes
services that are exposed as OpenStack endpoints.
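You can quickly verify that the label is in place with a plain `kubectl` query (assuming the
controller was deployed into the `openstack` namespace as shown above):
.. code-block:: bash
kubectl -n openstack get pods -l app=ingress-api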
For example, the OpenStack-Helm `keystone` chart by default creates a service
that routes traffic to the ingress controller pods selected using the
`app: ingress-api` label. Then it also deploys an ingress object that references
the **IngressClass** named `nginx`. This ingress object corresponds to the HTTP
virtual host in the Nginx configuration that routes the traffic to the Keystone API
service, which works as an endpoint for the Keystone pods.
.. image:: ingress.jpg
:width: 100%
:align: center
:alt: ingress scheme
.. _ingress-nginx: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/README.md
.. _ingress controllers: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/


@ -0,0 +1,28 @@
Node labels
===========
OpenStack-Helm charts rely on Kubernetes node labels to determine which nodes
are suitable for running specific OpenStack components.
The following commands set labels on all the Kubernetes nodes in the cluster
including control plane nodes:
.. code-block:: bash
kubectl label --overwrite nodes --all openstack-control-plane=enabled
kubectl label --overwrite nodes --all openstack-compute-node=enabled
kubectl label --overwrite nodes --all openvswitch=enabled
kubectl label --overwrite nodes --all linuxbridge=enabled
# used by the neutron chart to determine which nodes run the l3-agent
kubectl label --overwrite nodes --all l3-agent=enabled
# used by the ovn chart to determine which nodes are used as l3 gateways
kubectl label --overwrite nodes --all openstack-network-node=enabled
You can choose to label only a subset of nodes where you want to run OpenStack.
If you use the commands above, please make sure the control plane nodes are
untainted so that workload pods can be scheduled on them:
.. code-block:: bash
kubectl taint nodes -l 'node-role.kubernetes.io/control-plane' node-role.kubernetes.io/control-plane-
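To double-check which labels ended up on which nodes, you can display them as columns. This is
a plain `kubectl` invocation, not part of the OpenStack-Helm tooling:
.. code-block:: bash
kubectl get nodes -L openstack-control-plane,openstack-compute-node,openvswitch,openstack-network-node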


@ -0,0 +1,113 @@
MetalLB
=======
MetalLB is a load-balancer for bare-metal Kubernetes clusters leveraging
L2/L3 protocols. It is the de facto standard for exposing web
applications running in Kubernetes.
The following commands can be used to deploy MetalLB:
.. code-block:: bash
tee > /tmp/metallb_system_namespace.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:
name: metallb-system
EOF
kubectl apply -f /tmp/metallb_system_namespace.yaml
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb -n metallb-system
Now it is necessary to configure the MetalLB IP address pool and the way
the IP addresses are advertised. The following example creates two MetalLB
custom resources:
.. code-block:: bash
tee > /tmp/metallb_ipaddresspool.yaml <<EOF
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: public
namespace: metallb-system
spec:
addresses:
- "172.24.128.0/24"
EOF
kubectl apply -f /tmp/metallb_ipaddresspool.yaml
tee > /tmp/metallb_l2advertisement.yaml <<EOF
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: public
namespace: metallb-system
spec:
ipAddressPools:
- public
EOF
kubectl apply -f /tmp/metallb_l2advertisement.yaml
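At this point you may want to verify that the MetalLB pods are running and that the custom
resources have been created. These are plain `kubectl` checks; the resource names use the
`metallb.io` API group:
.. code-block:: bash
kubectl -n metallb-system get pods
kubectl -n metallb-system get ipaddresspools.metallb.io,l2advertisements.metallb.io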
Next, we can create a Kubernetes service of type `LoadBalancer`, and MetalLB will assign
an IP address from the IP pool to this service (the address can also be assigned manually
via an annotation, as in the example below):
.. code-block:: bash
tee > /tmp/openstack_endpoint_service.yaml <<EOF
kind: Service
apiVersion: v1
metadata:
name: public-openstack
namespace: openstack
annotations:
metallb.universe.tf/loadBalancerIPs: "172.24.128.100"
spec:
externalTrafficPolicy: Cluster
type: LoadBalancer
selector:
app: ingress-api
ports:
- name: http
port: 80
- name: https
port: 443
EOF
kubectl apply -f /tmp/openstack_endpoint_service.yaml
This service redirects the traffic to the ingress controller pods
(see the `app: ingress-api` selector). At the same time, OpenStack-Helm charts
by default deploy `Ingress` resources, so the ingress controller backend will be
configured to reverse-proxy the traffic to particular OpenStack API pods.
By default, the `openstack.svc.cluster.local` domain is used for the OpenStack public endpoints.
So to be able to use the above MetalLB balanced service, you need to resolve `*.openstack.svc.cluster.local`
names into the IP address of the `LoadBalancer` service (172.24.128.100). For this purpose, you can
use, for example, Dnsmasq:
.. code-block:: bash
docker run -d --name dnsmasq --restart always \
--cap-add=NET_ADMIN \
--network=host \
--entrypoint dnsmasq \
docker.io/openstackhelm/neutron:2024.1-ubuntu_jammy \
--keep-in-foreground \
--no-hosts \
--bind-interfaces \
--address="/openstack.svc.cluster.local/172.24.128.100" \
--listen-address="172.17.0.1" \
--no-resolv \
--server=8.8.8.8
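Once the service and Dnsmasq are in place, you can check that the load balancer IP has been
assigned and that wildcard names resolve to it. The service name `public-openstack` and the
addresses below come from the snippets above:
.. code-block:: bash
kubectl -n openstack get service public-openstack
nslookup keystone.openstack.svc.cluster.local 172.17.0.1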
If you choose to use a different domain name for the OpenStack endpoints, all OpenStack-Helm charts
allow you to specify the domain name via Helm values. In this case the charts will update the
Keystone endpoint catalog accordingly, and you have to update the Dnsmasq configuration as well.
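For illustration, here is a hedged sketch of such an override for the Keystone public endpoint.
The `host_fqdn_override` layout is an assumption and should be verified against the chart's
`values.yaml` before use:
.. code-block:: bash
tee > /tmp/keystone_fqdn_overrides.yaml <<EOF
endpoints:
  identity:
    host_fqdn_override:
      public: keystone.example.com
EOF
# pass the overrides to the keystone chart, e.g.:
# helm upgrade --install keystone openstack-helm/keystone --namespace=openstack --values=/tmp/keystone_fqdn_overrides.yaml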
.. _deploy-env: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/roles/deploy-env
.. _chart: https://metallb.github.io/metallb
.. _roles/deploy-env/defaults/main.yml: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/roles/deploy-env/defaults/main.yml


@ -0,0 +1,2 @@
Create namespaces
=================

tools/debug_sleep.sh Executable file

@ -0,0 +1,3 @@
#!/bin/bash
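# Keep the node alive for 24 hours (86400 seconds) so it can be inspected while debugging gate jobs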
sleep 86400


@ -0,0 +1,36 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -xe
: ${HELM_INGRESS_NGINX_VERSION:="4.8.3"}
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
#NOTE: Deploy namespace ingress
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
--version ${HELM_INGRESS_NGINX_VERSION} \
--namespace=openstack \
--set controller.kind=Deployment \
--set controller.admissionWebhooks.enabled="false" \
--set controller.scope.enabled="true" \
--set controller.service.enabled="false" \
--set controller.ingressClassResource.name=nginx \
--set controller.ingressClassResource.controllerValue="k8s.io/ingress-nginx" \
--set controller.ingressClassResource.default="false" \
--set controller.ingressClass=nginx \
--set controller.labels.app=ingress-api
#NOTE: Wait for deploy
helm osh wait-for-pods openstack


@ -0,0 +1,9 @@
- hosts: all
tasks:
- name: Put keys to .ssh/authorized_keys
lineinfile:
path: /home/zuul/.ssh/authorized_keys
state: present
line: "{{ item }}"
loop:
- "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMyM6sgu/Xgg+VaLJX5c6gy6ynYX7pO7XNobnKotYRulcEkmiLprvLSg+WP25VDAcSoif3rek3qiVnEYh6R2/Go= vlad@russell"

zuul.d/debug.yaml Normal file

@ -0,0 +1,34 @@
- job:
name: openstack-helm-compute-kit-metallb-2024-1-ubuntu_jammy-debug
parent: openstack-helm-compute-kit
nodeset: openstack-helm-1node-2nodes-ubuntu_jammy
timeout: 10800
pre-run:
- tools/gate/playbooks/prepare-hosts.yaml
- tools/gate/playbooks/inject-keys.yaml
vars:
metallb_setup: true
osh_params:
openstack_release: "2024.1"
container_distro_name: ubuntu
container_distro_version: jammy
feature_gates: metallb
gate_scripts:
- ./tools/deployment/common/prepare-k8s.sh
- ./tools/deployment/common/prepare-charts.sh
- ./tools/deployment/common/setup-client.sh
# - ./tools/debug_sleep.sh
- ./tools/deployment/common/ingress2.sh
# - ./tools/debug_sleep.sh
- ./tools/deployment/component/common/rabbitmq.sh
- ./tools/deployment/component/common/mariadb.sh
- ./tools/deployment/component/common/memcached.sh
# - ./tools/debug_sleep.sh
- ./tools/deployment/component/keystone/keystone.sh
- ./tools/deployment/component/heat/heat.sh
- export GLANCE_BACKEND=memory; ./tools/deployment/component/glance/glance.sh
- ./tools/deployment/component/compute-kit/openvswitch.sh
- ./tools/deployment/component/compute-kit/libvirt.sh
- ./tools/deployment/component/compute-kit/compute-kit.sh
- ./tools/deployment/common/use-it.sh
- ./tools/deployment/common/force-cronjob-run.sh


@ -21,30 +21,31 @@
- release-notes-jobs-python3
check:
jobs:
- openstack-helm-lint
- openstack-helm-bandit
# Zed
- openstack-helm-cinder-zed-ubuntu_jammy
- openstack-helm-compute-kit-zed-ubuntu_jammy
# 2023.1
- openstack-helm-cinder-2023-1-ubuntu_focal # 3 nodes
- openstack-helm-compute-kit-2023-1-ubuntu_focal # 3 nodes
- openstack-helm-compute-kit-2023-1-ubuntu_jammy # 3 nodes
# the job is faling with the error 'Node request 300-0024195009 failed'
# - openstack-helm-tls-2023-1-ubuntu_focal # 1 node 32GB
# 2023.2
- openstack-helm-horizon-2023-2-ubuntu_jammy # 1 node
- openstack-helm-keystone-ldap-2023-2-ubuntu_jammy # 1 node
- openstack-helm-cinder-2023-2-ubuntu_jammy # 3 nodes rook
- openstack-helm-compute-kit-2023-2-ubuntu_jammy # 3 nodes
- openstack-helm-umbrella-2023-2-ubuntu_jammy # 1 node 32GB
- openstack-helm-compute-kit-ovn-2023-2-ubuntu_jammy # 3 nodes
# 2024.1
- openstack-helm-tls-2024-1-ubuntu_jammy # 1 node 32GB
- openstack-helm-cinder-2024-1-ubuntu_jammy # 3 nodes rook
- openstack-helm-compute-kit-2024-1-ubuntu_jammy # 3 nodes
- openstack-helm-compute-kit-metallb-2024-1-ubuntu_jammy # 1 node + 2 nodes
- openstack-helm-compute-kit-helm-repo-local-2024-1-ubuntu_jammy # 1 node + 2 nodes
# - openstack-helm-lint
# - openstack-helm-bandit
# # Zed
# - openstack-helm-cinder-zed-ubuntu_jammy
# - openstack-helm-compute-kit-zed-ubuntu_jammy
# # 2023.1
# - openstack-helm-cinder-2023-1-ubuntu_focal # 3 nodes
# - openstack-helm-compute-kit-2023-1-ubuntu_focal # 3 nodes
# - openstack-helm-compute-kit-2023-1-ubuntu_jammy # 3 nodes
# # the job is faling with the error 'Node request 300-0024195009 failed'
# # - openstack-helm-tls-2023-1-ubuntu_focal # 1 node 32GB
# # 2023.2
# - openstack-helm-horizon-2023-2-ubuntu_jammy # 1 node
# - openstack-helm-keystone-ldap-2023-2-ubuntu_jammy # 1 node
# - openstack-helm-cinder-2023-2-ubuntu_jammy # 3 nodes rook
# - openstack-helm-compute-kit-2023-2-ubuntu_jammy # 3 nodes
# - openstack-helm-umbrella-2023-2-ubuntu_jammy # 1 node 32GB
# - openstack-helm-compute-kit-ovn-2023-2-ubuntu_jammy # 3 nodes
# # 2024.1
# - openstack-helm-tls-2024-1-ubuntu_jammy # 1 node 32GB
# - openstack-helm-cinder-2024-1-ubuntu_jammy # 3 nodes rook
# - openstack-helm-compute-kit-2024-1-ubuntu_jammy # 3 nodes
# - openstack-helm-compute-kit-metallb-2024-1-ubuntu_jammy # 1 node + 2 nodes
# - openstack-helm-compute-kit-helm-repo-local-2024-1-ubuntu_jammy # 1 node + 2 nodes
- openstack-helm-compute-kit-metallb-2024-1-ubuntu_jammy-debug # 1 node + 2 nodes
gate:
jobs:
- openstack-helm-lint