StarlingX OpenStack Armada App
Commit e40fe2d3c5 by Lucas de Ataides: Enable NetApp as a volume backend for Cinder
This commit introduces support for using NetApp as a volume backend for
Cinder in STX-OpenStack Helm, adding compatibility with both NFS and
iSCSI protocols. The motivation for this enhancement is to provide
greater flexibility for deployments that leverage NetApp storage
solutions, allowing users to configure and manage their storage backends
more effectively. The implementation is designed to be modular, ensuring
that the NetApp backend is not enabled by default but can be activated
through user-defined overrides.

The code introduces configuration templates for the NetApp drivers in
the `values.yaml` file for the Cinder Helm chart. Two backends are
defined: `netapp-nfs`, which uses the NFS protocol, and `netapp-iscsi`,
which relies on iSCSI. These configurations include key parameters like
the NetApp storage protocol, server hostname, and authentication
credentials described in [1] and [2]. However, by default, these backends
are not included in the `enabled_backends` list, ensuring that they
remain inactive unless explicitly enabled by the user or the plugins.
The NetApp backends are automatically added to the Cinder deployment if
the plugin verifies that they are available in the cluster. If a user has
NetApp but does not want to use it in STX-Openstack, an override can be
defined for `conf.cinder.DEFAULT.enabled_backends`, which takes priority
over the system overrides generated by the plugin.
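
For illustration only, the netapp-nfs template and the corresponding user
override could look roughly like the values fragment below. This is a sketch
assuming the usual openstack-helm layout, where backend sections live under
conf.backends; all host, credential, and backend names other than netapp-nfs
are placeholders, and the driver options follow [1] and [2]:

conf:
  backends:
    # Inactive template for the NetApp NFS backend (netapp-iscsi is analogous).
    netapp-nfs:
      volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
      netapp_storage_family: ontap_cluster
      netapp_storage_protocol: nfs
      netapp_server_hostname: 10.10.10.10        # placeholder
      netapp_login: admin                        # placeholder
      netapp_password: secret                    # placeholder
      nfs_shares_config: /etc/cinder/nfs.shares  # assumed mount path of the shares file
  cinder:
    DEFAULT:
      # A user override of this key takes priority over the plugin-generated
      # system overrides, so NetApp stays out even if Trident detects it.
      enabled_backends: ceph-store               # assumed name of the existing Ceph backend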

To support the NFS backend specifically, a new `nfs.shares` file was
introduced in the Cinder ConfigMap, providing a location to define NFS
shares for the NetApp driver. The Cinder volume deployment is updated to
mount this file, ensuring that it is accessible to the Cinder service.
The deployment templates now include logic to handle the mounting of
this file dynamically.
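
The shares file follows the standard Cinder NFS shares format, one export per
line. A hypothetical ConfigMap fragment carrying it (the address and export
path are placeholders) could look like:

data:
  nfs.shares: |
    10.10.10.10:/cinder_nfs_share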

Utility functions are enhanced to improve the detection of available
NetApp backends. A new function, `check_netapp_backends`, executes the
`tridentctl` command to determine whether the required NetApp drivers
(`ontap-nas` for NFS or `ontap-san` for iSCSI) are available. This
function returns a dictionary indicating the availability of these
backends, which is then used to dynamically adjust the Cinder Helm
overrides. The existing `is_netapp_available` function now uses this
enhanced logic to provide a simplified check for whether any NetApp
backend is available.
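
For context, the ontap-nas and ontap-san names come from the Trident backend
definitions, which declare a storageDriverName; a minimal, hypothetical
ontap-nas backend definition (all values are placeholders) looks roughly like:

version: 1
storageDriverName: ontap-nas     # ontap-san would map to the netapp-iscsi backend
managementLIF: 10.10.10.10       # placeholder
svm: svm_cinder                  # placeholder
username: admin                  # placeholder
password: secret                 # placeholder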

In the Helm configuration logic, overrides for the NetApp backends are
added dynamically based on the results of these utility checks. If either
the NFS or iSCSI backend is available, the necessary parameters are
appended to the backend overrides, ensuring that they are included in the
final deployment configuration. This approach avoids hardcoding and
allows the deployment to adapt to the specific NetApp backends detected
in the environment.
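
If, for instance, only the ontap-nas driver is detected, the plugin-generated
system overrides would end up roughly equivalent to the following fragment
(backend and option names as in the sketch above):

conf:
  cinder:
    DEFAULT:
      # netapp-iscsi would be appended as well if ontap-san were also detected.
      enabled_backends: ceph-store,netapp-nfs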

The implementation ensures that NetApp backends are disabled by default,
preventing unintended configurations while enabling users to activate
them seamlessly when needed.

[1] https://netapp.github.io/openstack-deploy-ops-guide/juno/content/cinder.examples.cinder_conf.html
[2] https://wiki.openstack.org/wiki/How_to_deploy_cinder_with_NetApp

Test Plan:
PASS: Build / Upload / Apply STX-Openstack

Not using NetApp:
PASS: Verify that the NetApp backends are not enabled in cinder.conf
PASS: Create a volume and verify that it was stored in Ceph

Using NetApp NFS:
PASS: Apply Helm overrides with the configuration pointing to the
      NetApp cluster using an NFS backend
PASS: Verify that the netapp-nfs backend is enabled in cinder.conf
FAIL: Create a volume and verify that it was stored in NetApp by
      accessing the ONTAP dashboard

Using NetApp iSCSI:
PASS: Apply Helm overrides with the configuration pointing to the
      NetApp cluster using an iSCSI backend
PASS: Verify that the netapp-iscsi backend is enabled in cinder.conf

The failed test case is due to bug #2090845, which is already being worked
on.

Story: 2011281
Task: 51411
Task: 51412

Change-Id: I46cb68c5950e3343a3ed3a644981f558542e53d1
Signed-off-by: Lucas de Ataides <lucas.deataidesbarreto@windriver.com>

StarlingX Openstack App

This repository contains the files for STX-Openstack, a set of containerized OpenStack services delivered as a StarlingX application.

To learn more about Openstack, you can visit the project's official website.

This repository is divided into the following sections:

  • Openstack services (upstream/openstack)
    • Python packages
    • Service clients
    • Docker images
  • Helm charts (openstack-helm, openstack-helm-infra and stx-openstack-helm-fluxcd)
    • Openstack Helm (openstack-helm)
    • Openstack Helm Infra (openstack-helm-infra)
    • STX-Openstack specific helm charts (stx-openstack-helm-fluxcd)
  • FluxCD manifests (stx-openstack-helm-fluxcd)
  • StarlingX app lifecycle plugins (python3-k8sapp-openstack)
  • Enhanced app policies (enhanced-policies)

These folders hold all the necessary parts for the application tarball to be assembled, including:

Openstack Services

Delivered via Docker images built using LOCI, a project designed to quickly build Lightweight OCI compatible images of OpenStack services.

When the OpenStack service is supposed to be delivered without patches, its OCI image file must mainly contain the BUILDER (i.e., loci), the PROJECT (i.e., OpenStack service name) and the PROJECT_REF (i.e., OpenStack service release) information.

Example stx-cinder:

BUILDER=loci
LABEL=stx-cinder
PROJECT=cinder
PROJECT_REF=stable/2023.1

Whenever a patch (for Debian build files and/or source code) is needed for a given package, a reference to the original Debian package is created, and the proper Debian package build structure is set up (identical to any other StarlingX Debian package) to enable the patching process.

In this case it is also possible to control the base version of the package (e.g., to match the delivered OpenStack release); the internally built and patched package MUST later be referenced in the image build instructions.

To learn more, see the StarlingX Debian package build structure documentation.

For example, the python-openstackclient package requires patches to the source code and also to the Debian build file structure. Therefore, the package's meta_data.yaml file is created pointing to a fixed base version (e.g., 6.2.0-1) to be downloaded from Debian's Salsa repository and later patched:

---
debname: python-openstackclient
debver: 6.2.0-1
dl_path:
  name: python-openstackclient-debian-6.2.0-1.tar.gz
  url: https://salsa.debian.org/openstack-team/clients/python-openstackclient/-/archive/debian/6.2.0-1/python-openstackclient-debian-6.2.0-1.tar.gz
  md5sum: db8235ad534de91738f1dab8e3865a8a
  sha256sum: 91e35f3e6bc8113cdd228709360b383e0c4e7c7f884bb3ec47e61f37c4110da3
revision:
  dist: $STX_DIST
  GITREVCOUNT:
    BASE_SRCREV: 27acda9a6b4885a50064cebc0858892e71aa37ce
    SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/python-openstackclient

Inside the same Debian package folder, directories are created to hold StarlingX-specific patches. Each directory has a "series" file listing the patch files and the order in which they will be applied during the build process:

  • deb_patches: contains the debian build files patches
  • patches: contains the package source code patches

Finally, any image that requires "python-openstackclient" must set PROJECT_REPO to "nil" and install the package like any other DIST_PACKAGE.

Example stx-openstackclients:

BUILDER=loci
LABEL=stx-openstackclients
PROJECT=infra
PROJECT_REPO=nil
DIST_REPOS="OS +openstack"
PIP_PACKAGES="
  httplib2 \
  ndg-httpsclient \
  pyasn1 \
  pycryptodomex \
  pyopenssl
"
DIST_PACKAGES="
  bash-completion \
  libffi-dev \
  openssl \
  python3-dev \
  python3-aodhclient \
  python3-barbicanclient \
  python3-cinderclient \
  python3-glanceclient \
  python3-heatclient \
  python3-keystoneclient \
  python3-neutronclient \
  python3-novaclient \
  python3-openstackclient \
  python3-osc-placement \
  python3-swiftclient
"

Helm Charts

The OpenStack community provides two upstream repositories delivering helm-charts for its services (openstack-helm) and for its required infrastructure (openstack-helm-infra).

Both repositories are used by STX-Openstack. Since it may be necessary to control the version of the Helm charts in use and/or apply specific patches to the Helm chart sources, both repositories point to a fixed base commit SHA and are delivered like any other StarlingX Debian package.

The common approach when developing a patch for such Helm charts is to first understand whether it is a StarlingX-specific patch (i.e., for the STX-Openstack use case only) or a "generic" code enhancement. The process of creating a Debian patch is described in the StarlingX Debian package build structure docs.

Whenever it is a generic code enhancement, the approach is to create the patch to quickly fix the STX-Openstack issue/feature but also propose it upstream to the openstack-helm and/or openstack-helm-infra community. If the change is accepted, it will later be available in a newer base commit SHA, and when STX-Openstack uprevs its base version for these packages, the patch can be deleted.

There are also cases where the issue can be solved by simply changing the Helm override values for the chart; in that case, you can use the static overrides route described in the "FluxCD Manifests" section below.

Additionally, not all the Helm charts used by STX-Openstack are delivered by the OpenStack community as part of openstack-helm and openstack-helm-infra repositories. Some charts are custom to the application and are therefore developed/maintained by the StarlingX community itself. Such helm-charts can be found under the stx-openstack-helm-fluxcd folder. Currently the list contains the following charts:

  • Clients
  • Dcdbsync
  • Garbd
  • Keystone-api-proxy
  • Nginx-ports-control
  • Nova-api-proxy
  • Pci-irq-affinity-agent

FluxCD Manifests

Like any other StarlingX application, STX-Openstack uses FluxCD to manage the dependencies between multiple Helm charts, express the relationships between charts, and provide static and default configuration attributes (i.e., values.yaml overrides).

The application's main metadata.yaml is placed in the stx-openstack-helm-fluxcd folder and holds the "app_name" and "app_version" values (although these are overwritten later in the app build process), along with directives regarding disabled Helm charts, upgrade behavior, and automatic re-apply behavior.

The application's main kustomization.yaml file is also placed in the stx-openstack-helm-fluxcd folder and describes the kustomization resources, including the application namespace and the resources available for this application. Each resource matches a directory under the same stx-openstack-helm-fluxcd folder.
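
A trimmed-down, illustrative sketch of such a top-level kustomization.yaml (the resource list here is only a subset):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: openstack
resources:
  # Each entry matches a chart manifest directory under stx-openstack-helm-fluxcd.
  - cinder
  - glance
  - keystone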

Each application manifest is usually specific to a given Helm chart, since it will contain:

  • The Helm release resource description
  • The Helm release system and static helm override files
  • A specific kustomization.yaml file listing the resources and describing how the system and static override files are used to generate secrets (see the sketch below).
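
A hypothetical per-chart kustomization.yaml along those lines (resource and secret names are illustrative):

namespace: openstack
resources:
  - helmrelease.yaml
secretGenerator:
  - name: cinder-static-overrides       # illustrative name
    files:
      - cinder-static-overrides.yaml
  - name: cinder-system-overrides       # illustrative name
    files:
      - cinder-system-overrides.yaml
generatorOptions:
  disableNameSuffixHash: true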

As described in the section above, some issues can be solved by modifying the static overrides for the specific Helm chart. As a simple example, all images used by the STX-Openstack application are updated by changing their values in the static overrides of each chart (e.g., the Cinder chart static overrides).
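
For instance, pointing the Cinder chart at a different image is just a change to the images section of its static overrides; the tag keys come from the upstream chart, and the image references below are placeholders:

images:
  tags:
    cinder_api: docker.io/starlingx/stx-cinder:master-debian-stable-latest     # placeholder
    cinder_volume: docker.io/starlingx/stx-cinder:master-debian-stable-latest  # placeholder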

Lifecycle plugins

StarlingX applications are managed by the platform sysinv service, a Python package that enables customization of its functionality via plugins. Whenever an application requires lifecycle plugins to customize the actions/configurations necessary for it to work properly, it can use systemconfig entrypoints to "plug in its own Python code" to be executed as part of that application's lifecycle.

The STX-Openstack Python plugins are delivered as Debian packages containing the Python code, with the built version shipped as wheels. All of these plugins are required to integrate the STX-Openstack application into the StarlingX application framework and to support the various StarlingX deployments.

All plugin entrypoints are listed in the "setup.cfg" file, placed under the python3-k8sapp-openstack folder. Such plugins might be general to the whole application (e.g., OpenstackBaseHelm, OpenstackAppLifecycleOperator and OpenstackFluxCDKustomizeOperator) or specific to a given Helm release (e.g., CinderHelm, NeutronHelm). Usually, specific Helm release plugins extend the OpenstackBaseHelm base class.

  • OpenstackAppLifecycleOperator: class containing methods used to describe lifecycle actions for an application, including:
    • Pre-apply actions
    • Pre-remove actions
    • Post-remove actions
  • OpenstackFluxCDKustomizeOperator: class containing methods used to update the application top-level kustomization resource list, including actions like:
    • Enabling or disabling Helm releases on a given namespace
    • Enabling or disabling charts in a chart group.
  • OpenstackBaseHelm: base class used to encapsulate OpenStack services operations for helm. This class is later extended for each OpenStack service or infrastructure component helm release that requires a plugin.
  • Helm Release plugins: child class of OpenstackBaseHelm, used to encapsulate Helm operations for a specific Helm release.

Enhanced Policies

This directory contains a series of example YAML overrides used to customize OpenStack RBAC policies.