Add check for csi-snapshotter container creation

This commit introduces a check to determine whether or not the
csi-snapshotter container will be created in each provisioner's
pod.

The 'provisioner.snapshotter.enabled' field will only be
automatically set to 'true', and the csi-snapshotter container
created during apply, when both of the following are present:
- The snapshot CRDs: volumesnapshotclasses, volumesnapshots and
  volumesnapshotcontents.
- A running snapshot-controller pod using the correct image
  according to the K8s version.

In all other scenarios, 'provisioner.snapshotter.enabled' will be
set to 'false' during apply and the container will not be created.
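
For reference, a condensed and purely illustrative sketch of the
decision applied at apply time (the helper and parameter names below
are invented for illustration; the real check is implemented by the
check_snapshot_support() utility added in this change):

SNAPSHOT_CRDS = [
    'volumesnapshotclasses.snapshot.storage.k8s.io',
    'volumesnapshots.snapshot.storage.k8s.io',
    'volumesnapshotcontents.snapshot.storage.k8s.io',
]

def snapshotter_enabled(existing_crds, controller_image, expected_image):
    # 'provisioner.snapshotter.enabled' ends up 'true' only when every
    # snapshot CRD exists and the running snapshot-controller pod uses
    # the image expected for the installed Kubernetes version.
    crds_ok = all(crd in existing_crds for crd in SNAPSHOT_CRDS)
    image_ok = bool(controller_image) and expected_image in controller_image
    return crds_ok and image_ok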

Note that the user can still enable or disable the container
manually via helm override if desired.

Depends-on: https://review.opendev.org/c/starlingx/platform-armada-app/+/904875

Test Plan:
 PASS: Build a new app package with the code changes
 PASS: Successfully execute upload, apply, remove, and delete
       operations for platform-integ-apps
 PASS: In an environment with CRDs and snapshot-controller using
       the correct image version according to Kubernetes, apply the
       app and verify that the snapshotter field is 'enabled: true'
       and the container has been created in both provisioner pods
 PASS: In an environment with CRDs but with snapshot-controller
       using the incorrect image version according to Kubernetes,
       apply the app and verify that the snapshotter field is
       'enabled: false' and the container has not been created
 PASS: In an environment without CRDs and snapshot-controller, apply
       the app and verify that the snapshotter field is 'enabled: false'
       and the container has not been created
 PASS: Manually create/remove csi-snapshotter container via
       helm-override

Partial-Bug: 2045897

Change-Id: I7dbcbba520c9758de84e8dae5a553ec1fee69518
Signed-off-by: Gabriel de Araújo Cabral <gabriel.cabral@windriver.com>
Gabriel de Araújo Cabral 2024-01-23 10:15:33 -03:00
parent fbee8e217a
commit 855c7b1da9
6 changed files with 183 additions and 21 deletions


@@ -0,0 +1,50 @@
From ecdb1fa261cb6e1c48e4ae4ad7e36b4a2d9ae385 Mon Sep 17 00:00:00 2001
From: Gabriel de Araújo Cabral <gabriel.cabral@windriver.com>
Date: Mon, 15 Jan 2024 10:07:55 -0300
Subject: [PATCH] ceph-csi-cephfs: Add conditional for csi-snapshotter container
 creation

The creation of the csi-snapshotter container will only occur if the
'provisioner.snapshotter.enabled' field is 'true', as a result of the
presence of the CRDs and the snapshot-controller needed to support PVC
snapshots.

Signed-off-by: Gabriel de Araújo Cabral <gabriel.cabral@windriver.com>
---
charts/ceph-csi-cephfs/templates/provisioner-deployment.yaml | 2 ++
charts/ceph-csi-cephfs/values.yaml | 1 +
2 files changed, 3 insertions(+)
diff --git a/charts/ceph-csi-cephfs/templates/provisioner-deployment.yaml b/charts/ceph-csi-cephfs/templates/provisioner-deployment.yaml
index 2d7c8158e..6e6d013d5 100644
--- a/charts/ceph-csi-cephfs/templates/provisioner-deployment.yaml
+++ b/charts/ceph-csi-cephfs/templates/provisioner-deployment.yaml
@@ -93,6 +93,7 @@ spec:
               mountPath: /csi
           resources:
 {{ toYaml .Values.provisioner.provisioner.resources | indent 12 }}
+{{ if .Values.provisioner.snapshotter.enabled }}
         - name: csi-snapshotter
           image: {{ .Values.provisioner.snapshotter.image.repository }}:{{ .Values.provisioner.snapshotter.image.tag }}
           imagePullPolicy: {{ .Values.provisioner.snapshotter.image.pullPolicy }}
@@ -113,6 +114,7 @@ spec:
               mountPath: /csi
           resources:
 {{ toYaml .Values.provisioner.snapshotter.resources | indent 12 }}
+{{ end }}
 {{- if .Values.provisioner.resizer.enabled }}
         - name: csi-resizer
           image: "{{ .Values.provisioner.resizer.image.repository }}:{{ .Values.provisioner.resizer.image.tag }}"
diff --git a/charts/ceph-csi-cephfs/values.yaml b/charts/ceph-csi-cephfs/values.yaml
index 76726f980..41276836f 100644
--- a/charts/ceph-csi-cephfs/values.yaml
+++ b/charts/ceph-csi-cephfs/values.yaml
@@ -207,6 +207,7 @@ provisioner:
     extraArgs: []
   snapshotter:
+    enabled: true
     image:
       repository: registry.k8s.io/sig-storage/csi-snapshotter
       tag: v6.3.2
--
2.25.1


@@ -0,0 +1,50 @@
From 35d5d47bd6a1736fa2945346dc55b40c85744c14 Mon Sep 17 00:00:00 2001
From: Gabriel de Araújo Cabral <gabriel.cabral@windriver.com>
Date: Mon, 15 Jan 2024 10:17:58 -0300
Subject: [PATCH] ceph-csi-rbd: Add conditional for csi-snapshotter container
 creation

The creation of the csi-snapshotter container will only occur if the
'provisioner.snapshotter.enabled' field is 'true', as a result of the
presence of the CRDs and the snapshot-controller needed to support PVC
snapshots.

Signed-off-by: Gabriel de Araújo Cabral <gabriel.cabral@windriver.com>
---
charts/ceph-csi-rbd/templates/provisioner-deployment.yaml | 2 ++
charts/ceph-csi-rbd/values.yaml | 1 +
2 files changed, 3 insertions(+)
diff --git a/charts/ceph-csi-rbd/templates/provisioner-deployment.yaml b/charts/ceph-csi-rbd/templates/provisioner-deployment.yaml
index f31b5d654..982be155e 100644
--- a/charts/ceph-csi-rbd/templates/provisioner-deployment.yaml
+++ b/charts/ceph-csi-rbd/templates/provisioner-deployment.yaml
@@ -121,6 +121,7 @@ spec:
           resources:
 {{ toYaml .Values.provisioner.resizer.resources | indent 12 }}
 {{- end }}
+{{ if .Values.provisioner.snapshotter.enabled }}
         - name: csi-snapshotter
           image: {{ .Values.provisioner.snapshotter.image.repository }}:{{ .Values.provisioner.snapshotter.image.tag }}
           imagePullPolicy: {{ .Values.provisioner.snapshotter.image.pullPolicy }}
@@ -141,6 +142,7 @@ spec:
               mountPath: /csi
           resources:
 {{ toYaml .Values.provisioner.snapshotter.resources | indent 12 }}
+{{ end }}
 {{- if .Values.provisioner.attacher.enabled }}
         - name: csi-attacher
           image: "{{ .Values.provisioner.attacher.image.repository }}:{{ .Values.provisioner.attacher.image.tag }}"
diff --git a/charts/ceph-csi-rbd/values.yaml b/charts/ceph-csi-rbd/values.yaml
index 7527d0fad..70987d311 100644
--- a/charts/ceph-csi-rbd/values.yaml
+++ b/charts/ceph-csi-rbd/values.yaml
@@ -257,6 +257,7 @@ provisioner:
     extraArgs: []
   snapshotter:
+    enabled: true
     image:
       repository: registry.k8s.io/sig-storage/csi-snapshotter
       tag: v6.3.2
--
2.25.1


@@ -16,3 +16,5 @@
 0016-ceph-csi-rbd-add-snapshotclass.patch
 0017-ceph-csi-cephfs-remove-mountoptions-debug-on-pvs.patch
 0018-Fix-chart-label-for-rbd-storage-init-Job.patch
+0019-ceph-csi-cephfs-add-conditional-for-csi-snapshotter-creation.patch
+0020-ceph-csi-rbd-add-conditional-for-csi-snapshotter-creation.patch


@@ -0,0 +1,66 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

from oslo_log import log
from sysinv.common import kubernetes

import subprocess

LOG = log.getLogger(__name__)


def get_ceph_fsid():
    process = subprocess.Popen(['timeout', '30', 'ceph', 'fsid'],
                               stdout=subprocess.PIPE)
    stdout, stderr = process.communicate()
    return stdout.strip()


def check_snapshot_support(chart):
    # This method checks if the necessary resources for the csi-snapshotter
    # are present to create the container inside each provisioner pod
    crds = [
        'volumesnapshotclasses.snapshot.storage.k8s.io',
        'volumesnapshots.snapshot.storage.k8s.io',
        'volumesnapshotcontents.snapshot.storage.k8s.io'
    ]

    for crd in crds:
        # Use subprocess to check if the CRDs exist
        crd_result = subprocess.run(["kubectl", "--kubeconfig", kubernetes.KUBERNETES_ADMIN_CONF,
                                     "get", "crd", crd], check=False)

        # Check the CRD command status
        if crd_result.returncode == 0:
            LOG.debug("The CRD '{}' exists".format(crd))
        else:
            LOG.debug("The CRD '{}' does not exist. As a result, the csi-snapshotter container inside the {}"
                      " pod will be disabled".format(crd, chart))
            return False

    # Use subprocess to check the pod's health and its image version
    running_pod = subprocess.run(
        ["kubectl", "--kubeconfig", kubernetes.KUBERNETES_ADMIN_CONF, "get", "pods", "-A", "--selector",
         "app=volume-snapshot-controller", "-o=jsonpath='{.items[0].spec.containers[0].image}'"],
        check=False, stdout=subprocess.PIPE, text=True
    )

    kube = kubernetes.KubeOperator()
    k8s_version = kube.kube_get_kubernetes_version()

    # Use subprocess to get the correct version of snapshot-controller image according to the k8s version
    path = '/usr/share/ansible/stx-ansible/playbooks/roles/common/load-images-information/vars/k8s-' + \
           k8s_version + '/system-images.yml'
    command = "grep 'snapshot_controller_img:' " + path + " | cut -d ':' -f2,3"
    snapshot_version = subprocess.run(command, check=False, shell=True, stdout=subprocess.PIPE, text=True)

    # Check if the image version of the running pod is the same as the expected for the K8s version
    if running_pod.returncode == 0 and snapshot_version.stdout.strip() in running_pod.stdout:
        LOG.debug("A running snapshot-controller pod with the correct snapshot-controller image exists.")
        return True
    else:
        LOG.debug("No running snapshot-controller pod with the correct snapshot-controller image. "
                  "As a result, the csi-snapshotter container inside the {} pod will be "
                  "disabled".format(chart))
        return False
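
For orientation, a simplified, hypothetical sketch of how the
provisioner Helm plugins in the diffs below consume this helper when
building their chart overrides (the _build_provisioner_overrides()
wrapper does not exist in the code; only the snapshotter-related part
is shown):

from k8sapp_platform.common import constants as app_constants
from k8sapp_platform.common import utils as cutils


def _build_provisioner_overrides(num_replicas):
    # True only when the snapshot CRDs exist and a running
    # snapshot-controller pod uses the image expected for this
    # Kubernetes version; otherwise the container is disabled.
    snapshot_support = cutils.check_snapshot_support(
        app_constants.HELM_CHART_CEPH_FS_PROVISIONER)

    return {
        "replicaCount": num_replicas,
        "snapshotter": {
            "enabled": snapshot_support
        }
    }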


@@ -5,8 +5,7 @@
 #
 from k8sapp_platform.common import constants as app_constants
+from k8sapp_platform.common import utils as cutils
-import subprocess
 from sysinv.common import constants
 from sysinv.common import exception
@@ -141,12 +140,6 @@ class CephFSProvisionerHelm(base.FluxCDBaseHelm):
         def _skip_ceph_mon_2(name):
             return name != constants.CEPH_MON_2
-        def _get_ceph_fsid():
-            process = subprocess.Popen(['timeout', '30', 'ceph', 'fsid'],
-                                       stdout=subprocess.PIPE)
-            stdout, stderr = process.communicate()
-            return stdout.strip()
         class_defaults = {
             "monitors": self._get_formatted_ceph_monitor_ips(
                 name_filter=_skip_ceph_mon_2),
@@ -154,9 +147,10 @@ class CephFSProvisionerHelm(base.FluxCDBaseHelm):
             "adminSecretName": app_constants.K8S_CEPHFS_PROVISIONER_ADMIN_SECRET_NAME
         }
+        snapshot_support = cutils.check_snapshot_support(app_constants.HELM_CHART_CEPH_FS_PROVISIONER)
         # Get tier info.
         tiers = self.dbapi.storage_tier_get_list()
-        cluster_id = _get_ceph_fsid()
+        cluster_id = cutils.get_ceph_fsid()
         storage_classes = []
         for bk in ceph_bks:
@@ -198,7 +192,10 @@ class CephFSProvisionerHelm(base.FluxCDBaseHelm):
         }
         provisioner = {
-            "replicaCount": self._num_replicas_for_platform_app()
+            "replicaCount": self._num_replicas_for_platform_app(),
+            "snapshotter": {
+                "enabled": snapshot_support
+            }
         }
         monitors = self._get_formatted_ceph_monitor_ips(


@@ -1,12 +1,11 @@
 #
-# Copyright (c) 2020-2023 Wind River Systems, Inc.
+# Copyright (c) 2020-2024 Wind River Systems, Inc.
 #
 # SPDX-License-Identifier: Apache-2.0
 #
 from k8sapp_platform.common import constants as app_constants
+from k8sapp_platform.common import utils as cutils
-import subprocess
 from sysinv.common import constants
 from sysinv.common import exception
@@ -58,12 +57,6 @@ class RbdProvisionerHelm(base.FluxCDBaseHelm):
         def _skip_ceph_mon_2(name):
             return name != constants.CEPH_MON_2
-        def _get_ceph_fsid():
-            process = subprocess.Popen(['timeout', '30', 'ceph', 'fsid'],
-                                       stdout=subprocess.PIPE)
-            stdout, stderr = process.communicate()
-            return stdout.strip()
         class_defaults = {
             "monitors": self._get_formatted_ceph_monitor_ips(
                 name_filter=_skip_ceph_mon_2),
@@ -72,9 +65,10 @@ class RbdProvisionerHelm(base.FluxCDBaseHelm):
             "storageClass": constants.K8S_RBD_PROV_STOR_CLASS_NAME
         }
+        snapshot_support = cutils.check_snapshot_support(app_constants.HELM_CHART_RBD_PROVISIONER)
         # Get tier info.
         tiers = self.dbapi.storage_tier_get_list()
-        cluster_id = _get_ceph_fsid()
+        cluster_id = cutils.get_ceph_fsid()
         storage_classes = []
         for bk in ceph_bks:
@@ -113,7 +107,10 @@ class RbdProvisionerHelm(base.FluxCDBaseHelm):
         }
         provisioner = {
-            "replicaCount": self._num_replicas_for_platform_app()
+            "replicaCount": self._num_replicas_for_platform_app(),
+            "snapshotter": {
+                "enabled": snapshot_support
+            }
         }
         csi_config = [{