[WIP] Update docs (chart READMEs)

Change-Id: I2b44a6992f8bcc79a8f6f8426427d335ece32789
Vladimir Kozhukalov 2024-04-29 19:49:09 -05:00
parent 1403dccc3f
commit c0c353cfae
23 changed files with 755 additions and 353 deletions


@ -1,41 +0,0 @@
Before deployment
=================
Before proceeding with the steps outlined in the following
sections, you must clone the Git repositories containing all
the required Helm charts, deployment scripts, and Ansible roles.
This preliminary step ensures that you have access to all the
assets needed for a smooth deployment.
.. code-block:: bash
mkdir ~/osh
cd ~/osh
git clone https://opendev.org/openstack/openstack-helm.git
git clone https://opendev.org/openstack/openstack-helm-infra.git
All further steps assume these two repositories are cloned into the
`~/osh` directory.
Next, you need to update the dependencies for all the charts in both OpenStack-Helm
repositories. This can be done by running the following commands:
.. code-block:: bash
cd ~/osh/openstack-helm
./tools/deployment/common/prepare-charts.sh
Also, before deploying the OpenStack cluster, you have to specify the
OpenStack and operating system versions that you would like to use
for deployment. To do this, export the following environment variables:
.. code-block:: bash
export OPENSTACK_RELEASE=2024.1
export FEATURES="${OPENSTACK_RELEASE} ubuntu_jammy"
.. note::
The list of supported versions can be found :doc:`here </readme>`.


@ -1,52 +0,0 @@
Deploy Ceph
===========
Ceph is a highly scalable and fault-tolerant distributed storage
system designed to store vast amounts of data across a cluster of
commodity hardware. It offers object storage, block storage, and
file storage capabilities, making it a versatile solution for
various storage needs. Ceph's architecture is based on a distributed
object store, where data is divided into objects, each with its
unique identifier, and distributed across multiple storage nodes.
It uses a CRUSH algorithm to ensure data resilience and efficient
data placement, even as the cluster scales. Ceph is widely used
in cloud computing environments and provides a cost-effective and
flexible storage solution for organizations managing large volumes of data.
Kubernetes introduced the CSI standard to allow storage providers
like Ceph to implement their drivers as plugins. Kubernetes can
use the CSI driver for Ceph to provision and manage volumes
directly. By means of CSI stateful applications deployed on top
of Kubernetes can use Ceph to store their data.
At the same time, Ceph provides the RBD API, which applications
can utilize to create and mount block devices distributed across
the Ceph cluster. The OpenStack Cinder service utilizes this Ceph
capability to offer persistent block devices to virtual machines
managed by OpenStack Nova.
The recommended way to deploy Ceph on top of Kubernetes is by means
of the `Rook`_ operator. Rook provides Helm charts to deploy the operator
itself, which extends the Kubernetes API by adding CRDs that make it possible
to manage Ceph clusters via Kubernetes custom objects. For details please
refer to the `Rook`_ documentation.
To deploy the Rook Ceph operator and a Ceph cluster you can use the script
`ceph-rook.sh`_. Then, to generate the client secrets used to interface with the
Ceph RBD API, use the `ceph-adapter-rook.sh`_ script:
.. code-block:: bash
cd ~/osh/openstack-helm-infra
./tools/deployment/ceph/ceph-rook.sh
./tools/deployment/ceph/ceph-adapter-rook.sh
.. note::
Please keep in mind that these are the deployment scripts that we
use for testing. For example, we place Ceph OSD data objects on loop devices,
which are slow and not recommended for production use.
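After the scripts finish, you can check the state of the Rook operator and the Ceph
cluster. This is a quick sanity check only; the commands assume the Ceph cluster lives
in the `ceph` namespace as in the scripts above:
.. code-block:: bash
kubectl -n ceph get pods
kubectl -n ceph get cephclusters.ceph.rook.io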
.. _Rook: https://rook.io/
.. _ceph-rook.sh: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/tools/deployment/ceph/ceph-rook.sh
.. _ceph-adapter-rook.sh: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/tools/deployment/ceph/ceph-adapter-rook.sh


@ -1,51 +0,0 @@
Deploy ingress controller
=========================
Deploying an ingress controller when deploying OpenStack on Kubernetes
is essential to ensure proper external access and SSL termination
for your OpenStack services.
In the OpenStack-Helm project, we usually deploy multiple `ingress-nginx`_
controller instances to optimize traffic routing:
* In the `kube-system` namespace, we deploy an ingress controller that
monitors ingress objects across all namespaces, primarily focusing on
routing external traffic into the OpenStack environment.
* In the `openstack` namespace, we deploy an ingress controller that
handles traffic exclusively within the OpenStack namespace. This instance
plays a crucial role in SSL termination for enhanced security between
OpenStack services.
* In the `ceph` namespace, we deploy an ingress controller that is dedicated
to routing traffic specifically to the Ceph Rados Gateway service, ensuring
efficient communication with Ceph storage resources.
You can utilize any other ingress controller implementation that suits your
needs best. See for example the list of available `ingress controllers`_.
Ensure that the ingress controller pods are deployed with the `app: ingress-api`
label, which is used by OpenStack-Helm as a selector for the Kubernetes
services that are exposed as OpenStack endpoints.
For example, the OpenStack-Helm `keystone` chart by default deploys a service
that routes traffic to the ingress controller pods selected using the
`app: ingress-api` label. Then it also deploys an ingress object that references
the **IngressClass** named `nginx`. This ingress object corresponds to the HTTP
virtual host routing the traffic to the Keystone API service which works as an
endpoint for Keystone pods.
.. image:: deploy_ingress_controller.jpg
:width: 100%
:align: center
:alt: deploy-ingress-controller
To deploy these three ingress controller instances you can use the `ingress.sh`_ script:
.. code-block:: bash
cd ~/osh/openstack-helm
./tools/deployment/common/ingress.sh
.. _ingress.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/common/ingress.sh
.. _ingress-nginx: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/README.md
.. _ingress controllers: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/


@ -1,143 +0,0 @@
Deploy Kubernetes
=================
OpenStack-Helm provides charts that can be deployed on any Kubernetes cluster if it meets
the supported version requirements. However, deploying the Kubernetes cluster itself is beyond
the scope of OpenStack-Helm.
You can use any Kubernetes deployment tool for this purpose. In this guide, we detail how to set up
a Kubernetes cluster using Kubeadm and Ansible. While not production-ready, this cluster is ideal
as a starting point for lab or proof-of-concept environments.
All OpenStack projects test their code through an infrastructure managed by the CI
tool, Zuul, which executes Ansible playbooks on one or more test nodes. Therefore, we employ Ansible
roles/playbooks to install required packages, deploy Kubernetes, and then execute tests on it.
To establish a test environment, the Ansible role deploy-env_ is employed. This role establishes
a basic single/multi-node Kubernetes cluster, ensuring the functionality of commonly used
deployment configurations. The role is compatible with Ubuntu Focal and Ubuntu Jammy distributions.
Install Ansible
---------------
.. code-block:: bash
pip install ansible
Prepare Ansible roles
---------------------
Here is the Ansible `playbook`_ that is used to deploy Kubernetes. The roles used in this playbook
are defined in different repositories, so in addition to the OpenStack-Helm repositories
that we assume have already been cloned into the `~/osh` directory, you have to clone
one more:
.. code-block:: bash
cd ~/osh
git clone https://opendev.org/zuul/zuul-jobs.git
Now let's set the environment variable ``ANSIBLE_ROLES_PATH``, which specifies
where Ansible will look up roles:
.. code-block:: bash
export ANSIBLE_ROLES_PATH=~/osh/openstack-helm-infra/roles:~/osh/zuul-jobs/roles
To avoid setting it every time you start a new terminal session, you can define it
in the Ansible configuration file. Please see the Ansible documentation.
Prepare Ansible inventory
-------------------------
We assume you have three nodes, usually VMs. Those nodes must be accessible via
SSH using public key authentication, and an SSH user (let's say `ubuntu`)
must have passwordless sudo on the nodes.
Create the Ansible inventory file using the following command
.. code-block:: bash
cat > ~/osh/inventory.yaml <<EOF
all:
  vars:
    kubectl:
      user: ubuntu
      group: ubuntu
    calico_version: "v3.25"
    crictl_version: "v1.26.1"
    helm_version: "v3.6.3"
    kube_version: "1.26.3-00"
    yq_version: "v4.6.0"
  children:
    primary:
      hosts:
        primary:
          ansible_port: 22
          ansible_host: 10.10.10.10
          ansible_user: ubuntu
          ansible_ssh_private_key_file: ~/.ssh/id_rsa
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    nodes:
      hosts:
        node-1:
          ansible_port: 22
          ansible_host: 10.10.10.11
          ansible_user: ubuntu
          ansible_ssh_private_key_file: ~/.ssh/id_rsa
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
        node-2:
          ansible_port: 22
          ansible_host: 10.10.10.12
          ansible_user: ubuntu
          ansible_ssh_private_key_file: ~/.ssh/id_rsa
          ansible_ssh_extra_args: -o StrictHostKeyChecking=no
EOF
If you have just one node then it must be `primary` in the file above.
.. note::
If you would like to set up a Kubernetes cluster on the local host,
configure the Ansible inventory to designate the `primary` node as the local host.
For further guidance, please refer to the Ansible documentation.
Deploy Kubernetes
-----------------
.. code-block:: bash
cd ~/osh
ansible-playbook -i inventory.yaml ~/osh/openstack-helm/tools/gate/playbooks/deploy-env.yaml
The playbook only changes the state of the nodes listed in the Ansible inventory.
It installs necessary packages, deploys and configures Containerd and Kubernetes. For
details please refer to the role `deploy-env`_ and other roles (`ensure-python`_, `ensure-pip`_, `clear-firewall`_)
used in the playbook.
.. note::
The role `deploy-env`_ by default uses the Google DNS servers 8.8.8.8 and 8.8.4.4
and updates `/etc/resolv.conf` on the nodes. These DNS nameserver entries can be changed by
updating the file ``~/osh/openstack-helm-infra/roles/deploy-env/files/resolv.conf``.
It also configures the internal Kubernetes DNS server (CoreDNS) to work as a recursive DNS server
and adds its IP address (10.96.0.10 by default) to the `/etc/resolv.conf` file.
Programs running on those nodes will be able to resolve names in the
default Kubernetes domain `.svc.cluster.local`. E.g., if you run the OpenStack command-line
client on one of those nodes, it will be able to access the OpenStack API services via
these names.
.. note::
The role `deploy-env`_ installs and configures Kubectl and Helm on the `primary` node.
You can log in to it via SSH, clone the `openstack-helm`_ and `openstack-helm-infra`_ repositories,
and then run the OpenStack-Helm deployment scripts, which employ Kubectl and Helm to deploy
OpenStack.
.. _deploy-env: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/roles/deploy-env
.. _ensure-python: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-python
.. _ensure-pip: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-pip
.. _clear-firewall: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/clear-firewall
.. _openstack-helm: https://opendev.org/openstack/openstack-helm.git
.. _openstack-helm-infra: https://opendev.org/openstack/openstack-helm-infra.git
.. _playbook: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/gate/playbooks/deploy-env.yaml


@ -1,17 +1,33 @@
Installation
============
Contents:
The OpenStack-Helm charts are published in the `openstack-helm` and
`openstack-helm-infra` helm repositories. Let's enable them:
.. code-block:: bash
helm repo add openstack-helm https://tarballs.opendev.org/openstack/openstack-helm
helm repo add openstack-helm-infra https://tarballs.opendev.org/openstack/openstack-helm-infra
The OpenStack-Helm plugin provides some helper commands that are used in
the following sections. So, let's install it:
.. code-block:: bash
helm plugin install https://opendev.org/openstack/openstack-helm-plugin
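To verify that the repositories and the plugin are available, you can run the
following quick sanity check (the exact list of charts will vary):
.. code-block:: bash
helm repo update
helm search repo openstack-helm | head
helm plugin list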
And now let's set environment variables that will then be used in the
subsequent sections:
.. code-block:: bash
export OPENSTACK_RELEASE=2024.1
export FEATURES="${OPENSTACK_RELEASE} ubuntu_jammy"
.. toctree::
:maxdepth: 2
before_deployment
deploy_kubernetes
prepare_kubernetes
deploy_ceph
setup_openstack_client
deploy_ingress_controller
deploy_openstack_backend
deploy_openstack
kubernetes
prerequisites/index
openstack/index


@ -0,0 +1,178 @@
Deploy Kubernetes
=================
OpenStack-Helm provides charts that can be deployed on any Kubernetes cluster if it meets
the supported version requirements. However, deploying the Kubernetes cluster itself is beyond
the scope of OpenStack-Helm.
You can use any Kubernetes deployment tool for this purpose. In this guide, we detail how to set up
a Kubernetes cluster using Kubeadm and Ansible. While not production-ready, this cluster is ideal
as a starting point for lab or proof-of-concept environments.
All OpenStack projects test their code through an infrastructure managed by the CI
tool, Zuul, which executes Ansible playbooks on one or more test nodes. Therefore, we employ Ansible
roles/playbooks to install required packages, deploy Kubernetes, and then execute tests on it.
To establish a test environment, the Ansible role deploy-env_ is employed. This role establishes
a basic single/multi-node Kubernetes cluster, ensuring the functionality of commonly used
deployment configurations. The role is compatible with Ubuntu Focal and Ubuntu Jammy distributions.
Clone git repositories
----------------------
Before proceeding with the steps outlined in the following sections, you must
clone the Git repositories containing the required Ansible roles.
.. code-block:: bash
mkdir ~/osh
cd ~/osh
git clone https://opendev.org/openstack/openstack-helm-infra.git
git clone https://opendev.org/zuul/zuul-jobs.git
Install Ansible
---------------
.. code-block:: bash
pip install ansible
Ansible roles lookup path
-------------------------
Now let's set the environment variable ``ANSIBLE_ROLES_PATH``, which specifies
where Ansible will look up roles:
.. code-block:: bash
export ANSIBLE_ROLES_PATH=~/osh/openstack-helm-infra/roles:~/osh/zuul-jobs/roles
To avoid setting it every time you start a new terminal session, you can define it
in the Ansible configuration file, as shown below. Please see the Ansible documentation for details.
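A minimal Ansible configuration file could look like this (a sketch only; the
`~/.ansible.cfg` path is just one of the locations Ansible searches):
.. code-block:: bash
cat > ~/.ansible.cfg <<EOF
[defaults]
roles_path = ~/osh/openstack-helm-infra/roles:~/osh/zuul-jobs/roles
EOF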
Ansible inventory
-----------------
The example below assumes that there are four nodes, which must be accessible via
SSH using public key authentication, and an SSH user (let's say `ubuntu`)
must have passwordless sudo on the nodes.
.. code-block:: bash
cat > ~/osh/inventory.yaml <<EOF
---
all:
  vars:
    ansible_port: 22
    ansible_user: ubuntu
    ansible_ssh_private_key_file: /home/ubuntu/.ssh/id_rsa
    ansible_ssh_extra_args: -o StrictHostKeyChecking=no
    # The user and group that will be used to run Kubectl and Helm commands.
    kubectl:
      user: ubuntu
      group: ubuntu
    # The user and group that will be used to run Docker commands.
    docker_users:
      - ubuntu
    # The MetalLB controller will be installed on the Kubernetes cluster.
    metallb_setup: true
    # Loopback devices will be created on all the nodes which then can be used
    # for Ceph storage which requires block devices to be provided.
    # Please use loopback devices only for testing purposes. They are not suitable
    # for production due to performance reasons.
    loopback_setup: true
    loopback_device: /dev/loop100
    loopback_image: /var/lib/openstack-helm/ceph-loop.img
    loopback_image_size: 12G
  children:
    # The primary node where Kubectl and Helm will be installed. If it is
    # the only node then it must be a member of the groups `k8s_cluster` and
    # `k8s_control_plane`. If there are more nodes then the wireguard tunnel
    # will be established between the primary node and the `k8s_control_plane` node.
    primary:
      hosts:
        primary:
          ansible_host: 10.10.10.10
    # The nodes where the Kubernetes components will be installed.
    k8s_cluster:
      hosts:
        node-1:
          ansible_host: 10.10.10.11
        node-2:
          ansible_host: 10.10.10.12
        node-3:
          ansible_host: 10.10.10.13
    # The control plane node where the Kubernetes control plane components will be installed.
    # It must be the only node in the group `k8s_control_plane`.
    k8s_control_plane:
      hosts:
        node-1:
          ansible_host: 10.10.10.11
    # These are Kubernetes worker nodes. There could be more than one node here.
    k8s_nodes:
      hosts:
        node-2:
          ansible_host: 10.10.10.12
        node-3:
          ansible_host: 10.10.10.13
EOF
.. note::
If you would like to set up a Kubernetes cluster on the local host,
configure the Ansible inventory to designate the `primary` node as the local host.
For further guidance, please refer to the Ansible documentation.
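For instance, a minimal single-node inventory targeting the local machine might look
like the sketch below (values are illustrative; as noted above, the single node must be
a member of the `k8s_cluster` and `k8s_control_plane` groups):
.. code-block:: bash
cat > ~/osh/inventory.yaml <<EOF
---
all:
  vars:
    ansible_connection: local
    kubectl:
      user: ubuntu
      group: ubuntu
  children:
    primary:
      hosts:
        primary:
          ansible_host: localhost
    k8s_cluster:
      hosts:
        primary:
    k8s_control_plane:
      hosts:
        primary:
EOF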
.. note::
The full list of variables that you can define in the inventory file can be found in the
file `deploy-env/defaults/main.yaml`_.
Ansible playbook
----------------
Create an Ansible playbook that will deploy the environment
.. code-block:: bash
cat > ~/osh/deploy-env.yaml <<EOF
---
- hosts: all
  become: true
  gather_facts: true
  roles:
    - ensure-python
    - ensure-pip
    - clear-firewall
    - deploy-env
EOF
Run the playbook
-----------------
.. code-block:: bash
cd ~/osh
ansible-playbook -i inventory.yaml deploy-env.yaml
The playbook only changes the state of the nodes listed in the inventory file.
It installs necessary packages, deploys and configures Containerd and Kubernetes. For
details please refer to the role `deploy-env`_ and other roles (`ensure-python`_,
`ensure-pip`_, `clear-firewall`_) used in the playbook.
.. note::
The role `deploy-env`_ configures the cluster nodes to use the Google DNS servers (8.8.8.8).
By default, it also configures the internal Kubernetes DNS server (CoreDNS) to work
as a recursive DNS server and adds its IP address (10.96.0.10 by default) to the
`/etc/resolv.conf` file.
Processes running on the cluster nodes will then be able to resolve internal
Kubernetes domain names such as `*.svc.cluster.local`.
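Once the playbook finishes, you can log in to the `primary` node and run a quick
sanity check (node and pod names will differ in your environment):
.. code-block:: bash
kubectl get nodes -o wide
kubectl get pods --all-namespaces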
.. _deploy-env: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/roles/deploy-env
.. _deploy-env/defaults/main.yaml: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/roles/deploy-env/defaults/main.yaml
.. _zuul-jobs: https://opendev.org/zuul/zuul-jobs.git
.. _ensure-python: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-python
.. _ensure-pip: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-pip
.. _clear-firewall: https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/clear-firewall
.. _openstack-helm-infra: https://opendev.org/openstack/openstack-helm-infra.git


@ -44,10 +44,6 @@ deploy the backend services.
./tools/deployment/component/common/mariadb.sh
./tools/deployment/component/common/memcached.sh
.. note::
These scripts use Helm charts from the `openstack-helm-infra`_ repository. We assume
this repo is cloned to the `~/osh` directory. See this :doc:`section </install/before_deployment>`.
.. _rabbitmq.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/common/rabbitmq.sh
.. _mariadb.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/common/mariadb.sh
.. _memcached.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/component/common/memcached.sh


@ -0,0 +1,25 @@
OpenStack
=========
At this point we assume all the prerequisites are met:
- Kubernetes cluster is up and running.
- `kubectl` and `helm` are configured to access the cluster.
- The OpenStack-Helm repositories are enabled, the OpenStack-Helm
plugin is installed, and the necessary environment variables are set.
See the :doc:`section </install/index>`.
- Ceph is deployed and enabled for use by OpenStack-Helm.
See the :doc:`section </install/prerequisites/ceph>`.
- Ingress controller is deployed in the `openstack` namespace. See the
:doc:`section </install/prerequisites/ingress>`.
- MetalLB is deployed and configured. The service of type
`LoadBalancer` is created and DNS is configured to resolve the
OpenStack endpoint names to the IP address of the service.
See the :doc:`section </install/prerequisites/metallb>`.
.. toctree::
:maxdepth: 2
backend
openstack
client


@ -1,28 +0,0 @@
Prepare Kubernetes
==================
In this section we assume you have a working Kubernetes cluster and
Kubectl and Helm properly configured to interact with the cluster.
Before deploying OpenStack components using OpenStack-Helm you have to set
labels on Kubernetes worker nodes which are used as node selectors.
Also necessary namespaces must be created.
You can use the `prepare-k8s.sh`_ script as an example of how to prepare
the Kubernetes cluster for OpenStack deployment. The script is assumed to be run
from the openstack-helm repository
.. code-block:: bash
cd ~/osh/openstack-helm
./tools/deployment/common/prepare-k8s.sh
.. note::
Note that the above script sets labels on all Kubernetes nodes, including the
Kubernetes control plane nodes, which are usually not meant to run workload pods
(OpenStack in our case). So you have to either untaint the control plane nodes or modify the
`prepare-k8s.sh`_ script so that it sets labels only on the worker nodes.
.. _prepare-k8s.sh: https://opendev.org/openstack/openstack-helm/src/branch/master/tools/deployment/common/prepare-k8s.sh


@ -0,0 +1,88 @@
Ceph
====
Ceph is a highly scalable and fault-tolerant distributed storage
system. It offers object storage, block storage, and
file storage capabilities, making it a versatile solution for
various storage needs.
Kubernetes CSI (Container Storage Interface) allows storage providers
like Ceph to implement their drivers, so that Kubernetes can
use the CSI driver to provision and manage volumes which can be
used by stateful applications deployed on top of Kubernetes
to store their data. In the context of OpenStack running in Kubernetes,
Ceph is used as a storage backend for MariaDB, RabbitMQ and
other services that require persistent storage.
At the same time, Ceph provides the RBD API, which applications
can utilize directly to create and mount block devices distributed across
the Ceph cluster. For example, OpenStack Cinder utilizes this Ceph
capability to offer persistent block devices to virtual machines
managed by OpenStack Nova.
The recommended way to manage Ceph on top of Kubernetes is by means
of the `Rook`_ operator. The Rook project provides a Helm chart
to deploy the Rook operator, which extends the Kubernetes API by
adding CRDs that make it possible to manage Ceph clusters via Kubernetes custom objects.
There is also another Helm chart that facilitates deploying Ceph clusters
using Rook custom resources.
For details please refer to the `Rook`_ documentation and the `charts`_.
.. note::
The following script `ceph-rook.sh`_ (recommended for testing only) can be used as
an example of how to deploy the Rook Ceph operator and a Ceph cluster using the
Rook `charts`_. Please note that the script places Ceph OSDs on loopback devices
which is **not recommended** for production. The loopback devices must exist before
using this script.
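If the loopback devices are not managed for you (for example via the `loopback_setup`
variable of the `deploy-env` Ansible role), here is a minimal sketch of how one could be
created manually on each storage node; the device name, image path, and size are illustrative:
.. code-block:: bash
sudo mkdir -p /var/lib/openstack-helm
sudo truncate -s 10G /var/lib/openstack-helm/ceph-loop.img
# /dev/loop100 is just an example; `losetup -f --show <image>` picks the first free device
sudo losetup /dev/loop100 /var/lib/openstack-helm/ceph-loop.img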
Once the Ceph cluster is deployed, the next step is to enable its use by
the services deployed by OpenStack-Helm charts. The `ceph-adapter-rook` chart
provides the necessary functionality to do this. The chart
prepares Kubernetes secret resources containing Ceph client keys/configs
that are later used to interface with the Ceph cluster.
Here we assume the Ceph cluster is deployed in the `ceph` namespace.
The procedure consists of two steps: 1) gather the necessary entities from the Ceph cluster
and 2) copy them to the `openstack` namespace:
.. code-block:: bash
tee > /tmp/ceph-adapter-rook-ceph.yaml <<EOF
manifests:
  configmap_bin: true
  configmap_templates: true
  configmap_etc: false
  job_storage_admin_keys: true
  job_namespace_client_key: false
  job_namespace_client_ceph_config: false
  service_mon_discovery: true
EOF
helm upgrade --install ceph-adapter-rook openstack-helm-infra/ceph-adapter-rook \
--namespace=ceph \
--values=/tmp/ceph-adapter-rook-ceph.yaml
helm osh wait-for-pods ceph
tee > /tmp/ceph-adapter-rook-openstack.yaml <<EOF
manifests:
  configmap_bin: true
  configmap_templates: false
  configmap_etc: true
  job_storage_admin_keys: false
  job_namespace_client_key: true
  job_namespace_client_ceph_config: true
  service_mon_discovery: false
EOF
helm upgrade --install ceph-adapter-rook openstack-helm-infra/ceph-adapter-rook \
--namespace=openstack \
--values=/tmp/ceph-adapter-rook-openstack.yaml
helm osh wait-for-pods openstack
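After both releases are deployed, you can check that the client artifacts have appeared
in the `openstack` namespace (a sanity check only; the exact resource names depend on the
chart version):
.. code-block:: bash
kubectl -n ceph get pods
kubectl -n openstack get secrets,configmaps | grep -i ceph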
.. _Rook: https://rook.io/
.. _charts: https://rook.io/docs/rook/latest-release/Helm-Charts/helm-charts/
.. _ceph-rook.sh: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/tools/deployment/ceph/ceph-rook.sh


@ -0,0 +1,16 @@
Prerequisites
=============
In this section we assume you have a working Kubernetes cluster and
Kubectl and Helm properly configured to interact with the cluster.
Before deploying OpenStack components using the OpenStack-Helm charts please make
sure the following prerequisites are satisfied:
.. toctree::
:maxdepth: 2
ingress
metallb
ceph
labels




@ -0,0 +1,68 @@
Ingress controller
==================
An ingress controller is essential when deploying OpenStack on Kubernetes
to ensure proper external access for the OpenStack services.
We recommend using `ingress-nginx`_ because it is simple and provides
all the necessary features. It utilizes Nginx as a reverse proxy backend.
Here is how to deploy it.
First, let's create a namespace for the OpenStack workloads. The ingress
controller must be deployed in the same namespace because OpenStack-Helm charts
create service resources pointing to the ingress controller pods, which
in turn redirect traffic to particular OpenStack API pods.
.. code-block:: bash
tee > /tmp/openstack_namespace.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: openstack
EOF
kubectl apply -f /tmp/openstack_namespace.yaml
Next, deploy the ingress controller in the `openstack` namespace:
.. code-block:: bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
--version="4.8.3" \
--namespace=openstack \
--set controller.kind=Deployment \
--set controller.admissionWebhooks.enabled="false" \
--set controller.scope.enabled="true" \
--set controller.service.enabled="false" \
--set controller.ingressClassResource.name=nginx \
--set controller.ingressClassResource.controllerValue="k8s.io/ingress-nginx" \
--set controller.ingressClassResource.default="false" \
--set controller.ingressClass=nginx \
--set controller.labels.app=ingress-api
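To verify the deployment, you can check that the controller pods carry the expected
label and that the `nginx` ingress class exists:
.. code-block:: bash
kubectl -n openstack get pods -l app=ingress-api
kubectl get ingressclass nginx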
You can deploy any other ingress controller that suits your needs best.
See, for example, the list of available `ingress controllers`_.
Ensure that the ingress controller pods are deployed with the `app: ingress-api`
label, which is used by OpenStack-Helm as a selector for the Kubernetes
service resources.
For example, the OpenStack-Helm `keystone` chart by default creates a service
that redirects traffic to the ingress controller pods selected using the
`app: ingress-api` label. It also creates an `Ingress` resource, which
the ingress controller uses to configure its reverse proxy
backend (Nginx), eventually routing the traffic to the Keystone API
service that works as an endpoint for the Keystone API pods.
.. image:: ingress.jpg
:width: 100%
:align: center
:alt: ingress scheme
.. note::
For exposing the OpenStack services to the external world, we can create a
service of type `LoadBalancer` or `NodePort` with the selector pointing to
the ingress controller pods.
.. _ingress-nginx: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/README.md
.. _ingress controllers: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/


@ -0,0 +1,28 @@
Node labels
===========
OpenStack-Helm charts rely on Kubernetes node labels to determine which nodes
are suitable for running specific OpenStack components.
The following commands set labels on all the Kubernetes nodes in the cluster,
including the control plane nodes, but you can choose to label only the subset of nodes
where you want to run OpenStack:
.. code-block:: bash
kubectl label --overwrite nodes --all openstack-control-plane=enabled
kubectl label --overwrite nodes --all openstack-compute-node=enabled
kubectl label --overwrite nodes --all openvswitch=enabled
kubectl label --overwrite nodes --all linuxbridge=enabled
# used by the neutron chart to determine which nodes run the l3-agent
kubectl label --overwrite nodes --all l3-agent=enabled
# used by the ovn chart to determine which nodes are used as L3 gateways
kubectl label --overwrite nodes --all openstack-network-node=enabled
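To verify the labels, you can list the nodes together with the relevant label
columns (the output will vary with your cluster):
.. code-block:: bash
kubectl get nodes -L openstack-control-plane -L openstack-compute-node -L openvswitch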
.. note::
The control plane nodes are tainted by default to prevent scheduling
of pods on them. You can untaint the control plane nodes using the following command:
.. code-block:: bash
kubectl taint nodes -l 'node-role.kubernetes.io/control-plane' node-role.kubernetes.io/control-plane-


@ -0,0 +1,141 @@
MetalLB
=======
MetalLB is a load-balancer for bare metal Kubernetes clusters leveraging
L2/L3 protocols. It is a popular way of exposing web
applications running in Kubernetes to the external world.
The following commands can be used to deploy MetalLB:
.. code-block:: bash
tee > /tmp/metallb_system_namespace.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
EOF
kubectl apply -f /tmp/metallb_system_namespace.yaml
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb -n metallb-system
Now it is necessary to configure the MetalLB IP address pool and the IP address
advertisement. The MetalLB custom resources are used for this:
.. code-block:: bash
tee > /tmp/metallb_ipaddresspool.yaml <<EOF
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public
  namespace: metallb-system
spec:
  addresses:
    - "172.24.128.0/24"
EOF
kubectl apply -f /tmp/metallb_ipaddresspool.yaml
tee > /tmp/metallb_l2advertisement.yaml <<EOF
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: public
  namespace: metallb-system
spec:
  ipAddressPools:
    - public
EOF
kubectl apply -f /tmp/metallb_l2advertisement.yaml
Next, let's create a service of type `LoadBalancer` which will be the
public endpoint for all OpenStack services that we will deploy later.
MetalLB will assign an IP address to it (we can assign a dedicated
IP using annotations):
.. code-block:: bash
tee > /tmp/openstack_endpoint_service.yaml <<EOF
---
kind: Service
apiVersion: v1
metadata:
  name: public-openstack-endpoint
  namespace: openstack
  annotations:
    metallb.universe.tf/loadBalancerIPs: "172.24.128.100"
spec:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  selector:
    app: ingress-api
  ports:
    - name: http
      port: 80
    - name: https
      port: 443
EOF
kubectl apply -f /tmp/openstack_endpoint_service.yaml
This service will redirect the traffic to the ingress controller pods
(see the **app: ingress-api** selector). OpenStack-Helm charts create
`Ingress` resources which are used by the ingress controller to configure the
reverse proxy backend so that the traffic eventually goes to particular
OpenStack API pods.
By default, the `Ingress` objects will only contain rules for the
`openstack.svc.cluster.local` DNS domain. This is the internal Kubernetes domain
and it is not supposed to be used outside the cluster. However, we can use
Dnsmasq to resolve the `*.openstack.svc.cluster.local` names to the
`LoadBalancer` service IP address.
The following command will start the Dnsmasq container with the necessary configuration:
.. code-block:: bash
docker run -d --name dnsmasq --restart always \
--cap-add=NET_ADMIN \
--network=host \
--entrypoint dnsmasq \
docker.io/openstackhelm/neutron:2024.1-ubuntu_jammy \
--keep-in-foreground \
--no-hosts \
--bind-interfaces \
--address="/openstack.svc.cluster.local/172.24.128.100" \
--listen-address="172.17.0.1" \
--no-resolv \
--server=8.8.8.8
The `--network=host` option is used to start the Dnsmasq container in the
host network namespace, and the `--listen-address` option is used to bind
Dnsmasq to a specific IP. Please use the configuration that suits your environment.
Now we can add the Dnsmasq IP to the `/etc/resolv.conf` file
.. code-block:: bash
echo "nameserver 172.17.0.1" > /etc/resolv.conf
or alternatively the `resolvectl` command can be used to configure systemd-resolved.
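For example, with systemd-resolved the following commands point lookups for the
OpenStack domain at the Dnsmasq instance (the interface name `eth0` is illustrative):
.. code-block:: bash
sudo resolvectl dns eth0 172.17.0.1
sudo resolvectl domain eth0 '~openstack.svc.cluster.local'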
.. note::
In production environments you will probably choose to use a different DNS
domain for the public OpenStack endpoints. This is easy to achieve by setting
the necessary chart values. All OpenStack-Helm chart values have an
`endpoints` section where you can specify the `host_fqdn_override`.
In this case a chart will create additional `Ingress` resources to
handle the external domain name, and the Keystone endpoint catalog
will also be updated.
Here is an example of how to set the `host_fqdn_override` for the Keystone chart:
.. code-block:: yaml
endpoints:
  identity:
    host_fqdn_override:
      default: "keystone.example.com"

tools/debug_sleep.sh (new executable file)

@ -0,0 +1,3 @@
#!/bin/bash
# Keep the node alive for 24 hours so that developers can SSH in and debug the gate environment.
sleep 86400


@ -0,0 +1,36 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -xe
: ${HELM_INGRESS_NGINX_VERSION:="4.8.3"}
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
#NOTE: Deploy namespace ingress
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
--version ${HELM_INGRESS_NGINX_VERSION} \
--namespace=openstack \
--set controller.kind=Deployment \
--set controller.admissionWebhooks.enabled="false" \
--set controller.scope.enabled="true" \
--set controller.service.enabled="false" \
--set controller.ingressClassResource.name=nginx \
--set controller.ingressClassResource.controllerValue="k8s.io/ingress-nginx" \
--set controller.ingressClassResource.default="false" \
--set controller.ingressClass=nginx \
--set controller.labels.app=ingress-api
#NOTE: Wait for deploy
helm osh wait-for-pods openstack


@ -0,0 +1,77 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -xe
#NOTE: Define variables
: ${OSH_HELM_REPO:="../openstack-helm"}
: ${OSH_PATH:="../openstack-helm"}
: ${OSH_EXTRA_HELM_ARGS_KEYSTONE:="$(helm osh get-values-overrides ${DOWNLOAD_OVERRIDES:-} -p ${OSH_PATH} -c keystone ${FEATURES})"}
: ${RUN_HELM_TESTS:="no"}
tee > /tmp/keystone_overrides.yaml <<EOF
endpoints:
  identity:
    host_fqdn_override:
      default: keystone.default.tld
      public: keystone.public.tld
EOF
#NOTE: Deploy command
helm upgrade --install keystone ${OSH_HELM_REPO}/keystone \
--namespace=openstack \
--values=/tmp/keystone_overrides.yaml \
${OSH_EXTRA_HELM_ARGS:=} \
${OSH_EXTRA_HELM_ARGS_KEYSTONE}
#NOTE: Wait for deploy
helm osh wait-for-pods openstack
export OS_CLOUD=openstack_helm
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack endpoint list
#NOTE: Validate feature gate options if required
FEATURE_GATE="ldap"; if [[ ${FEATURE_GATES//,/ } =~ (^|[[:space:]])${FEATURE_GATE}($|[[:space:]]) ]]; then
#NOTE: Do some additional queries here for LDAP
openstack domain list
openstack user list
openstack user list --domain ldapdomain
openstack group list --domain ldapdomain
openstack role add --user bob --project admin --user-domain ldapdomain --project-domain default admin
domain="ldapdomain"
domainId=$(openstack domain show ${domain} -f value -c id)
token=$(openstack token issue -f value -c id)
#NOTE: Testing we can auth against the LDAP user
unset OS_CLOUD
openstack --os-auth-url http://keystone.openstack.svc.cluster.local/v3 --os-username bob --os-password password --os-user-domain-name ${domain} --os-identity-api-version 3 token issue
#NOTE: Test the domain specific thing works
curl --verbose -X GET \
-H "Content-Type: application/json" \
-H "X-Auth-Token: $token" \
http://keystone.openstack.svc.cluster.local/v3/domains/${domainId}/config
fi
if [ "x${RUN_HELM_TESTS}" != "xno" ]; then
./tools/deployment/common/run-helm-tests.sh keystone
fi
FEATURE_GATE="tls"; if [[ ${FEATURE_GATES//,/ } =~ (^|[[:space:]])${FEATURE_GATE}($|[[:space:]]) ]]; then
curl --cacert /etc/openstack-helm/certs/ca/ca.pem -L https://keystone.openstack.svc.cluster.local
fi


@ -0,0 +1,9 @@
- hosts: all
  tasks:
    - name: Put keys to .ssh/authorized_keys
      lineinfile:
        path: /home/zuul/.ssh/authorized_keys
        state: present
        line: "{{ item }}"
      loop:
        - "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMyM6sgu/Xgg+VaLJX5c6gy6ynYX7pO7XNobnKotYRulcEkmiLprvLSg+WP25VDAcSoif3rek3qiVnEYh6R2/Go= vlad@russell"

zuul.d/debug.yaml (new file)

@ -0,0 +1,35 @@
- job:
    name: openstack-helm-compute-kit-metallb-2024-1-ubuntu_jammy-debug
    parent: openstack-helm-compute-kit
    nodeset: openstack-helm-1node-2nodes-ubuntu_jammy
    timeout: 10800
    pre-run:
      - tools/gate/playbooks/prepare-hosts.yaml
      - tools/gate/playbooks/inject-keys.yaml
    vars:
      metallb_setup: true
      osh_params:
        openstack_release: "2024.1"
        container_distro_name: ubuntu
        container_distro_version: jammy
        feature_gates: metallb
      gate_scripts:
        - ./tools/deployment/common/prepare-k8s.sh
        - ./tools/deployment/common/prepare-charts.sh
        - ./tools/deployment/common/setup-client.sh
        # - ./tools/debug_sleep.sh
        - ./tools/deployment/common/ingress2.sh
        # - ./tools/debug_sleep.sh
        - ./tools/deployment/component/common/rabbitmq.sh
        - ./tools/deployment/component/common/mariadb.sh
        - ./tools/deployment/component/common/memcached.sh
        # - ./tools/debug_sleep.sh
        - ./tools/deployment/component/keystone/keystone2.sh
        - ./tools/debug_sleep.sh
        - ./tools/deployment/component/heat/heat.sh
        - export GLANCE_BACKEND=memory; ./tools/deployment/component/glance/glance.sh
        - ./tools/deployment/component/compute-kit/openvswitch.sh
        - ./tools/deployment/component/compute-kit/libvirt.sh
        - ./tools/deployment/component/compute-kit/compute-kit.sh
        - ./tools/deployment/common/use-it.sh
        - ./tools/deployment/common/force-cronjob-run.sh


@ -21,30 +21,31 @@
- release-notes-jobs-python3
check:
jobs:
- openstack-helm-lint
- openstack-helm-bandit
# Zed
- openstack-helm-cinder-zed-ubuntu_jammy
- openstack-helm-compute-kit-zed-ubuntu_jammy
# 2023.1
- openstack-helm-cinder-2023-1-ubuntu_focal # 3 nodes
- openstack-helm-compute-kit-2023-1-ubuntu_focal # 3 nodes
- openstack-helm-compute-kit-2023-1-ubuntu_jammy # 3 nodes
# the job is faling with the error 'Node request 300-0024195009 failed'
# - openstack-helm-tls-2023-1-ubuntu_focal # 1 node 32GB
# 2023.2
- openstack-helm-horizon-2023-2-ubuntu_jammy # 1 node
- openstack-helm-keystone-ldap-2023-2-ubuntu_jammy # 1 node
- openstack-helm-cinder-2023-2-ubuntu_jammy # 3 nodes rook
- openstack-helm-compute-kit-2023-2-ubuntu_jammy # 3 nodes
- openstack-helm-umbrella-2023-2-ubuntu_jammy # 1 node 32GB
- openstack-helm-compute-kit-ovn-2023-2-ubuntu_jammy # 3 nodes
# 2024.1
- openstack-helm-tls-2024-1-ubuntu_jammy # 1 node 32GB
- openstack-helm-cinder-2024-1-ubuntu_jammy # 3 nodes rook
- openstack-helm-compute-kit-2024-1-ubuntu_jammy # 3 nodes
- openstack-helm-compute-kit-metallb-2024-1-ubuntu_jammy # 1 node + 2 nodes
- openstack-helm-compute-kit-helm-repo-local-2024-1-ubuntu_jammy # 1 node + 2 nodes
# - openstack-helm-lint
# - openstack-helm-bandit
# # Zed
# - openstack-helm-cinder-zed-ubuntu_jammy
# - openstack-helm-compute-kit-zed-ubuntu_jammy
# # 2023.1
# - openstack-helm-cinder-2023-1-ubuntu_focal # 3 nodes
# - openstack-helm-compute-kit-2023-1-ubuntu_focal # 3 nodes
# - openstack-helm-compute-kit-2023-1-ubuntu_jammy # 3 nodes
# # the job is faling with the error 'Node request 300-0024195009 failed'
# # - openstack-helm-tls-2023-1-ubuntu_focal # 1 node 32GB
# # 2023.2
# - openstack-helm-horizon-2023-2-ubuntu_jammy # 1 node
# - openstack-helm-keystone-ldap-2023-2-ubuntu_jammy # 1 node
# - openstack-helm-cinder-2023-2-ubuntu_jammy # 3 nodes rook
# - openstack-helm-compute-kit-2023-2-ubuntu_jammy # 3 nodes
# - openstack-helm-umbrella-2023-2-ubuntu_jammy # 1 node 32GB
# - openstack-helm-compute-kit-ovn-2023-2-ubuntu_jammy # 3 nodes
# # 2024.1
# - openstack-helm-tls-2024-1-ubuntu_jammy # 1 node 32GB
# - openstack-helm-cinder-2024-1-ubuntu_jammy # 3 nodes rook
# - openstack-helm-compute-kit-2024-1-ubuntu_jammy # 3 nodes
# - openstack-helm-compute-kit-metallb-2024-1-ubuntu_jammy # 1 node + 2 nodes
# - openstack-helm-compute-kit-helm-repo-local-2024-1-ubuntu_jammy # 1 node + 2 nodes
- openstack-helm-compute-kit-metallb-2024-1-ubuntu_jammy-debug # 1 node + 2 nodes
gate:
jobs:
- openstack-helm-lint