Kubernetes prerequisites
========================

Ingress controller
------------------

An ingress controller is essential when deploying OpenStack on Kubernetes,
as it provides external access to the OpenStack services.

We recommend using `ingress-nginx`_ because it is simple and provides all
the necessary features. It utilizes Nginx as a reverse proxy backend.
Here is how to deploy it.

First, let's create a namespace for the OpenStack workloads. The ingress
controller must be deployed in the same namespace because OpenStack-Helm
charts create service resources pointing to the ingress controller pods,
which in turn redirect traffic to particular OpenStack API pods.

.. code-block:: bash

    tee > /tmp/openstack_namespace.yaml <<EOF
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openstack
    EOF
    kubectl apply -f /tmp/openstack_namespace.yaml

Next, deploy the ingress controller in the ``openstack`` namespace. The
command below is an example; pin the chart version and adjust the values
to your environment. The ``app: ingress-api`` label assigned to the
controller pods is used later as a selector for the service that exposes
the OpenStack public endpoints.

.. code-block:: bash

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
        --namespace=openstack \
        --set controller.admissionWebhooks.enabled=false \
        --set controller.scope.enabled=true \
        --set controller.service.enabled=false \
        --set controller.ingressClass=nginx \
        --set controller.labels.app=ingress-api

.. _ingress-nginx: https://github.com/kubernetes/ingress-nginx

MetalLB
-------

MetalLB is a load balancer implementation for bare metal Kubernetes
clusters. We use it to assign an external IP address to the service that
forwards traffic to the ingress controller pods, so that the OpenStack
public endpoints are reachable from outside the cluster.

First, create a namespace for MetalLB and deploy it there:

.. code-block:: bash

    tee > /tmp/metallb_system_namespace.yaml <<EOF
    apiVersion: v1
    kind: Namespace
    metadata:
      name: metallb-system
    EOF
    kubectl apply -f /tmp/metallb_system_namespace.yaml

    helm repo add metallb https://metallb.github.io/metallb
    helm upgrade --install metallb metallb/metallb \
        --namespace=metallb-system

Next, define the pool of IP addresses that MetalLB can assign to
``LoadBalancer`` services and advertise it on the local network. The
``172.24.128.0/24`` range used below is an example and must be replaced
with a range that is routable in your environment:

.. code-block:: bash

    tee > /tmp/metallb_ipaddresspool.yaml <<EOF
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: public
      namespace: metallb-system
    spec:
      addresses:
        # Example address range; adjust to your environment
        - "172.24.128.0/24"
    EOF
    kubectl apply -f /tmp/metallb_ipaddresspool.yaml

    tee > /tmp/metallb_l2advertisement.yaml <<EOF
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: public
      namespace: metallb-system
    spec:
      ipAddressPools:
        - public
    EOF
    kubectl apply -f /tmp/metallb_l2advertisement.yaml

Now create a service of type ``LoadBalancer`` in the ``openstack``
namespace that redirects traffic to the ingress controller pods. MetalLB
assigns an external IP address to this service from the pool defined
above (here we request ``172.24.128.100``):

.. code-block:: bash

    tee > /tmp/openstack_endpoint_service.yaml <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: public-openstack
      namespace: openstack
      annotations:
        # Example IP address; it must belong to the MetalLB pool defined above
        metallb.universe.tf/loadBalancerIPs: "172.24.128.100"
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Cluster
      selector:
        app: ingress-api
      ports:
        - name: http
          port: 80
        - name: https
          port: 443
    EOF
    kubectl apply -f /tmp/openstack_endpoint_service.yaml

To access the OpenStack public endpoints you need DNS names that resolve
to this external IP address. In test environments the `sslip.io`_ service
is convenient: it resolves any name like
``<service_name>.172-24-128-100.sslip.io`` to ``172.24.128.100`` without
managing any DNS records.

Here is an example of how to set the ``host_fqdn_override`` for the
Keystone chart:

.. code-block:: yaml

    endpoints:
      identity:
        host_fqdn_override:
          public:
            host: "keystone.172-24-128-100.sslip.io"

.. note::
    In production environments you will probably choose to use a different
    DNS domain for the public OpenStack endpoints. This is easy to achieve
    by setting the necessary chart values. All OpenStack-Helm chart values
    have the ``endpoints`` section where you can specify the
    ``host_fqdn_override``. In this case a chart will create additional
    ``Ingress`` resources to handle the external domain name, and the
    Keystone endpoint catalog will be updated accordingly.

.. _sslip.io: https://sslip.io/

Ceph
----

Ceph is a highly scalable and fault-tolerant distributed storage system.
It offers object storage, block storage, and file storage capabilities,
making it a versatile solution for various storage needs.

Kubernetes CSI (Container Storage Interface) allows storage providers
like Ceph to implement drivers that Kubernetes uses to provision and
manage volumes. Stateful applications deployed on top of Kubernetes can
then use these volumes to store their data.

In the context of OpenStack running on Kubernetes, Ceph is used as a
storage backend for MariaDB, RabbitMQ and other services that require
persistent storage. By default, OpenStack-Helm stateful sets expect to
find a storage class named **general**.

At the same time, Ceph provides the RBD API, which applications can
utilize directly to create and mount block devices distributed across
the Ceph cluster. For example, OpenStack Cinder utilizes this capability
to offer persistent block devices to virtual machines managed by
OpenStack Nova.

The recommended way to manage Ceph on top of Kubernetes is by means of
the `Rook`_ operator. The Rook project provides a Helm chart to deploy
the Rook operator, which extends the Kubernetes API with CRDs so that
Ceph clusters can be managed via Kubernetes custom objects. There is also
another Helm chart that facilitates deploying Ceph clusters using the
Rook custom resources. For details, please refer to the `Rook`_
documentation and the `charts`_.

.. note::
    The following script `ceph-rook.sh`_ (recommended for testing only)
    can be used as an example of how to deploy the Rook Ceph operator and
    a Ceph cluster using the Rook `charts`_. Please note that the script
    places Ceph OSDs on loopback devices, which is **not recommended**
    for production. The loopback devices must exist before using this
    script.
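Once the Rook operator and the Ceph cluster are up, it is worth checking
that the storage class expected by the OpenStack-Helm stateful sets exists
and that dynamic provisioning works. The following is a minimal sketch of
such a check, assuming the storage class is named ``general`` (as the
charts expect by default) and using a throwaway PVC named ``general-test``:

.. code-block:: bash

    # List the storage classes; one named "general" should be present
    kubectl get storageclass

    # Optionally create a short-lived PVC to verify dynamic provisioning.
    # Depending on the volume binding mode the PVC may stay Pending until
    # a pod consumes it.
    tee > /tmp/test_pvc.yaml <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: general-test
      namespace: openstack
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: general
      resources:
        requests:
          storage: 1Gi
    EOF
    kubectl apply -f /tmp/test_pvc.yaml
    kubectl -n openstack get pvc general-test
    kubectl delete -f /tmp/test_pvc.yaml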
Once the Ceph cluster is deployed, the next step is to enable the services
deployed by OpenStack-Helm charts to use it. The ``ceph-adapter-rook``
chart provides the necessary functionality for this. The chart prepares
Kubernetes secret resources containing the Ceph client keys/configs that
are later used to interface with the Ceph cluster.

Here we assume the Ceph cluster is deployed in the ``ceph`` namespace.

.. code-block:: bash

    helm upgrade --install ceph-adapter-rook openstack-helm-infra/ceph-adapter-rook \
        --namespace=openstack

    helm osh wait-for-pods openstack

.. _Rook: https://rook.io/
.. _charts: https://rook.io/docs/rook/latest-release/Helm-Charts/helm-charts/
.. _ceph-rook.sh: https://opendev.org/openstack/openstack-helm-infra/src/branch/master/tools/deployment/ceph/ceph-rook.sh

Node labels
-----------

OpenStack-Helm charts rely on Kubernetes node labels to determine which
nodes are suitable for running specific OpenStack components.

The following sets the labels on all the Kubernetes nodes in the cluster,
including the control plane nodes, but you can choose to label only the
subset of nodes where you want to run OpenStack:

.. code-block:: bash

    kubectl label --overwrite nodes --all openstack-control-plane=enabled
    kubectl label --overwrite nodes --all openstack-compute-node=enabled
    kubectl label --overwrite nodes --all openvswitch=enabled
    kubectl label --overwrite nodes --all linuxbridge=enabled

.. note::
    The control plane nodes are tainted by default to prevent scheduling
    of pods on them. You can untaint the control plane nodes using the
    following command:

    .. code-block:: bash

        kubectl taint nodes -l 'node-role.kubernetes.io/control-plane' node-role.kubernetes.io/control-plane-
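As an optional sanity check, the following commands list the nodes
together with the labels applied above and show any remaining taints, so
you can confirm which nodes the OpenStack-Helm charts will schedule
workloads on:

.. code-block:: bash

    # Show the OpenStack-related labels for every node
    kubectl get nodes -L openstack-control-plane,openstack-compute-node,openvswitch,linuxbridge

    # Show remaining taints; nodes that should run workloads must have none
    kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'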