Migration to ceph-csi for RBD/CephFS provisioners

Remove the old RBD/CephFS provisioners and replace them with a currently
supported and actively evolving set of provisioners based on
https://github.com/ceph/ceph-csi version 3.6.2.

Test Plan:
PASS: AIO-SX app upload/apply/remove/delete/update
PASS: AIO-DX app upload/apply/remove/delete
PASS: Storage 2+2+2 app upload/apply/remove/delete
PASS: Create pvc using storageclass general (rbd) on SX/DX/Storage
PASS: Create pod using rbd pvc on SX/DX/Storage
PASS: Create pvc using storageclass cephfs on SX/DX/Storage
PASS: Create pod using cephfs pvc on SX/DX/Storage
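
For reference, the PVC/pod checks above can be reproduced with manifests along these lines (object names, image, and size are illustrative, not taken from the test runs; only the `general` storage class name comes from the test plan):

```yaml
# Hypothetical PVC bound via the "general" (rbd) storage class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: general
---
# Hypothetical pod consuming the PVC
apiVersion: v1
kind: Pod
metadata:
  name: test-rbd-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-rbd-pvc
```

The cephfs variant of the test swaps `storageClassName: general` for `storageClassName: cephfs`.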

Story: 2009987
Task: 45050

Signed-off-by: Hediberto Cavalcante da Silva <hediberto.cavalcantedasilva@windriver.com>
Change-Id: Iffcd56f689aa70788c4c2abbbf2c9a02b5a797cf
Author: Hediberto Cavalcante da Silva, 2022-09-21 13:50:17 +00:00
Parent: baccc223f0
Commit: 69c37e9978
79 changed files with 1497 additions and 1805 deletions

@@ -1 +0,0 @@
flock

@@ -1,2 +0,0 @@
stx-platform-helm

@@ -1,2 +0,0 @@
stx-platform-helm
python-k8sapp-platform

@@ -1,2 +1,3 @@
python-k8sapp-platform
stx-platform-helm
platform-helm

@@ -0,0 +1,5 @@
platform-helm (1.0-1) unstable; urgency=medium

  * Initial release.

 -- Hediberto Cavalcante da Silva <hediberto.cavalcantedasilva@windriver.com>  Wed, 31 Aug 2022 10:45:00 +0000

@@ -0,0 +1,15 @@
Source: platform-helm
Section: libs
Priority: optional
Maintainer: StarlingX Developers <starlingx-discuss@lists.starlingx.io>
Build-Depends: debhelper-compat (= 13),
helm
Standards-Version: 4.5.1
Homepage: https://www.starlingx.io

Package: platform-helm
Section: libs
Architecture: any
Depends: ${misc:Depends}
Description: StarlingX Ceph CSI Helm Charts
This package contains helm charts for the Ceph CSI application.

@@ -0,0 +1,41 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: platform-helm
Source: https://opendev.org/starlingx/platform-armada-app/
Files: *
Copyright: (c) 2022 Wind River Systems, Inc
License: Apache-2
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
.
https://www.apache.org/licenses/LICENSE-2.0
.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
.
On Debian-based systems the full text of the Apache version 2.0 license
can be found in `/usr/share/common-licenses/Apache-2.0'.
# If you want to use GPL v2 or later for the /debian/* files use
# the following clauses, or change it to suit. Delete these two lines
Files: debian/*
Copyright: 2022 Wind River Systems, Inc
License: Apache-2
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
.
https://www.apache.org/licenses/LICENSE-2.0
.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
.
On Debian-based systems the full text of the Apache version 2.0 license
can be found in `/usr/share/common-licenses/Apache-2.0'.

@@ -0,0 +1,30 @@
From ae9dc263c28c1820446d3680f3fcc712fc6558b2 Mon Sep 17 00:00:00 2001
From: Hediberto Cavalcante da Silva
<hediberto.cavalcantedasilva@windriver.com>
Date: Thu, 3 Nov 2022 19:41:04 -0300
Subject: [PATCH] ceph-csi-cephfs: replace appVersion/version
Signed-off-by: Hediberto Cavalcante da Silva <hediberto.cavalcantedasilva@windriver.com>
---
charts/ceph-csi-cephfs/Chart.yaml | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/charts/ceph-csi-cephfs/Chart.yaml b/charts/ceph-csi-cephfs/Chart.yaml
index 9238c26..2b3f6a0 100644
--- a/charts/ceph-csi-cephfs/Chart.yaml
+++ b/charts/ceph-csi-cephfs/Chart.yaml
@@ -1,10 +1,10 @@
---
apiVersion: v1
-appVersion: canary
+appVersion: 3.6.2
description: "Container Storage Interface (CSI) driver,
provisioner, snapshotter and attacher for Ceph cephfs"
name: ceph-csi-cephfs
-version: 3-canary
+version: 3.6.2
keywords:
- ceph
- cephfs
--
2.17.1

@@ -0,0 +1,79 @@
From 068b81a7103994dfa0b7e7d14eead3d191733070 Mon Sep 17 00:00:00 2001
From: Hediberto Cavalcante da Silva
<hediberto.cavalcantedasilva@windriver.com>
Date: Thu, 3 Nov 2022 20:03:05 -0300
Subject: [PATCH] ceph-csi-cephfs: add default fields to values.yaml
Signed-off-by: Hediberto Cavalcante da Silva <hediberto.cavalcantedasilva@windriver.com>
---
charts/ceph-csi-cephfs/values.yaml | 51 ++++++++++++++++++++++++++++++
1 file changed, 51 insertions(+)
diff --git a/charts/ceph-csi-cephfs/values.yaml b/charts/ceph-csi-cephfs/values.yaml
index 7375ea6..9507ffd 100644
--- a/charts/ceph-csi-cephfs/values.yaml
+++ b/charts/ceph-csi-cephfs/values.yaml
@@ -276,6 +276,24 @@ storageClass:
# mountOptions:
# - discard
+ # Ceph user name to access this pool
+ userId: kube
+ # K8 secret name with key for accessing the Ceph pool
+ userSecretName: ceph-secret-kube
+ # Pool replication
+ replication: 1
+ # Pool crush rule name
+ crush_rule_name: storage_tier_ruleset
+ # Pool chunk size / PG_NUM
+ chunk_size: 8
+ # Additional namespace to allow storage class access (other than where
+ # installed)
+ additionalNamespaces:
+ - default
+ - kube-public
+ # Ceph pools name
+ metadata_pool: kube-cephfs-metadata
+
secret:
# Specifies whether the secret should be created
create: false
@@ -326,3 +344,36 @@ configMapName: ceph-csi-config
externallyManagedConfigmap: false
# Name of the configmap used for ceph.conf
cephConfConfigMapName: ceph-config
+
+#
+# Defaults for storage classes.
+#
+classDefaults:
+ # Define ip addresses of Ceph Monitors
+ monitors:
+ - 192.168.204.2:6789
+ # K8 secret name for the admin context
+ adminId: admin
+ adminSecretName: ceph-secret-admin
+ cephFSNamespace: kube-system
+
+#
+# Defines:
+# - Provisioner's image name including container registry.
+# - CEPH helper image
+#
+images:
+ tags:
+ csi_provisioner: k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
+ csi_snapshotter: k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.0
+ csi_attacher: k8s.gcr.io/sig-storage/csi-attacher:v3.4.0
+ csi_resizer: k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
+ csi_cephcsi: quay.io/cephcsi/cephcsi:v3.6.2
+ csi_registrar: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.4.0
+ cephfs_provisioner_storage_init: docker.io/openstackhelm/ceph-config-helper:ubuntu_bionic-20220802
+ pull_policy: "IfNotPresent"
+ local_registry:
+ active: false
+ exclude:
+ - dep_check
+ - image_repo_sync
--
2.17.1

@@ -0,0 +1,274 @@
From 30a69b72f9367802b4ebeb2667db921420328de0 Mon Sep 17 00:00:00 2001
From: Hediberto Cavalcante da Silva
<hediberto.cavalcantedasilva@windriver.com>
Date: Thu, 3 Nov 2022 19:56:35 -0300
Subject: [PATCH] ceph-csi-cephfs: add storage-init.yaml
Signed-off-by: Hediberto Cavalcante da Silva <hediberto.cavalcantedasilva@windriver.com>
---
.../templates/storage-init.yaml | 254 ++++++++++++++++++
1 file changed, 254 insertions(+)
create mode 100644 charts/ceph-csi-cephfs/templates/storage-init.yaml
diff --git a/charts/ceph-csi-cephfs/templates/storage-init.yaml b/charts/ceph-csi-cephfs/templates/storage-init.yaml
new file mode 100644
index 0000000..5c0f00d
--- /dev/null
+++ b/charts/ceph-csi-cephfs/templates/storage-init.yaml
@@ -0,0 +1,254 @@
+{{/*
+#
+# Copyright (c) 2020-2022 Wind River Systems, Inc.
+#
+# SPDX-License-Identifier: Apache-2.0
+#
+*/}}
+
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: cephfs-rbac-secrets-namespaces
+ labels:
+ app: {{ include "ceph-csi-cephfs.name" . }}
+ chart: {{ include "ceph-csi-cephfs.chart" . }}
+ component: {{ .Values.provisioner.name }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ annotations:
+ "meta.helm.sh/release-name": {{ .Release.Name }}
+ "meta.helm.sh/release-namespace": {{ .Release.Namespace }}
+ "helm.sh/hook": "pre-upgrade, pre-install"
+ "helm.sh/hook-delete-policy": "before-hook-creation"
+rules:
+ - apiGroups: [""]
+ resources: ["secrets"]
+ verbs: ["get", "list", "watch", "create", "delete"]
+ - apiGroups: [""]
+ resources: ["namespaces"]
+ verbs: ["get", "create", "list", "update"]
+
+---
+
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: cephfs-rbac-secrets-namespaces
+ labels:
+ app: {{ include "ceph-csi-cephfs.name" . }}
+ chart: {{ include "ceph-csi-cephfs.chart" . }}
+ component: {{ .Values.provisioner.name }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ annotations:
+ "meta.helm.sh/release-name": {{ .Release.Name }}
+ "meta.helm.sh/release-namespace": {{ .Release.Namespace }}
+ "helm.sh/hook": "pre-upgrade, pre-install"
+ "helm.sh/hook-delete-policy": "before-hook-creation"
+subjects:
+ - kind: ServiceAccount
+ name: {{ include "ceph-csi-cephfs.serviceAccountName.provisioner" . }}
+ namespace: {{ .Values.classDefaults.cephFSNamespace }}
+roleRef:
+ kind: ClusterRole
+ name: cephfs-rbac-secrets-namespaces
+ apiGroup: rbac.authorization.k8s.io
+
+---
+
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: cephfs-storage-init
+ namespace: {{ .Values.classDefaults.cephFSNamespace }}
+ labels:
+ app: {{ include "ceph-csi-cephfs.name" . }}
+ chart: {{ include "ceph-csi-cephfs.chart" . }}
+ component: {{ .Values.provisioner.name }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ annotations:
+ "meta.helm.sh/release-name": {{ .Release.Name }}
+ "meta.helm.sh/release-namespace": {{ .Release.Namespace }}
+ "helm.sh/hook": "pre-upgrade, pre-install"
+ "helm.sh/hook-delete-policy": "before-hook-creation"
+data:
+ ceph.conf: |
+ #
+ # Copyright (c) 2020-2022 Wind River Systems, Inc.
+ #
+ # SPDX-License-Identifier: Apache-2.0
+ #
+
+ [global]
+ # For version 0.55 and beyond, you must explicitly enable
+ # or disable authentication with "auth" entries in [global].
+ auth_cluster_required = none
+ auth_service_required = none
+ auth_client_required = none
+
+ {{ $monitors := .Values.classDefaults.monitors }}
+ {{ range $index, $monitor := $monitors}}
+ [mon.{{- $index }}]
+ mon_addr = {{ $monitor }}
+ {{- end }}
+
+ storage-init.sh: |
+ #
+ # Copyright (c) 2020-2022 Wind River Systems, Inc.
+ #
+ # SPDX-License-Identifier: Apache-2.0
+ #
+
+ #! /bin/bash
+
+ # Copy from read only mount to Ceph config folder
+ cp /tmp/ceph.conf /etc/ceph/
+
+ set -x
+
+ touch /etc/ceph/ceph.client.admin.keyring
+
+ # Check if ceph is accessible
+ echo "===================================="
+ ceph -s
+ if [ $? -ne 0 ]; then
+ echo "Error: Ceph cluster is not accessible, check Pod logs for details."
+ exit 1
+ fi
+
+ set -ex
+ KEYRING=$(ceph auth get-or-create client.${USER_ID} mon "allow r" osd "allow rwx pool=${POOL_NAME}" | sed -n 's/^[[:blank:]]*key[[:blank:]]\+=[[:blank:]]\(.*\)/\1/p')
+ # Set up pool key in Ceph format
+ CEPH_USER_KEYRING=/etc/ceph/ceph.client.${USER_ID}.keyring
+ echo $KEYRING > $CEPH_USER_KEYRING
+ set +ex
+
+ if [ -n "${CEPH_USER_SECRET}" ]; then
+ kubectl get secret -n ${NAMESPACE} ${CEPH_USER_SECRET} 2>/dev/null
+ if [ $? -ne 0 ]; then
+ echo "Create ${CEPH_USER_SECRET} secret"
+ kubectl create secret generic -n ${NAMESPACE} ${CEPH_USER_SECRET} --type="kubernetes.io/cephfs" --from-literal=adminKey=$KEYRING --from-literal=adminID=${ADMIN_ID}
+ if [ $? -ne 0 ]; then
+ echo "Error creating secret ${CEPH_USER_SECRET} in ${NAMESPACE}, exit"
+ exit 1
+ fi
+ else
+ echo "Secret ${CEPH_USER_SECRET} already exists"
+ fi
+
+ # Support creating namespaces and Ceph user secrets for additional
+ # namespaces other than that which the provisioner is installed. This
+ # allows the provisioner to set up and provide PVs for multiple
+ # applications across many namespaces.
+ if [ -n "${ADDITIONAL_NAMESPACES}" ]; then
+ for ns in $(
+ IFS=,
+ echo ${ADDITIONAL_NAMESPACES}
+ ); do
+ kubectl get namespace $ns 2>/dev/null
+ if [ $? -ne 0 ]; then
+ kubectl create namespace $ns
+ if [ $? -ne 0 ]; then
+ echo "Error creating namespace $ns, exit"
+ continue
+ fi
+ fi
+
+ kubectl get secret -n $ns ${CEPH_USER_SECRET} 2>/dev/null
+ if [ $? -ne 0 ]; then
+ echo "Creating secret ${CEPH_USER_SECRET} for namespace $ns"
+ kubectl create secret generic -n $ns ${CEPH_USER_SECRET} --type="kubernetes.io/cephfs" --from-literal=adminKey=$KEYRING --from-literal=adminID=${ADMIN_ID}
+ if [ $? -ne 0 ]; then
+ echo "Error creating secret ${CEPH_USER_SECRET} in $ns, exit"
+ fi
+ else
+ echo "Secret ${CEPH_USER_SECRET} for namespace $ns already exists"
+ fi
+ done
+ fi
+ fi
+
+ ceph osd pool stats ${POOL_NAME} || ceph osd pool create ${POOL_NAME} ${CHUNK_SIZE}
+ ceph osd pool application enable ${POOL_NAME} cephfs
+ ceph osd pool set ${POOL_NAME} size ${POOL_REPLICATION}
+ ceph osd pool set ${POOL_NAME} crush_rule ${POOL_CRUSH_RULE_NAME}
+
+ ceph osd pool stats ${METADATA_POOL_NAME} || ceph osd pool create ${METADATA_POOL_NAME} ${CHUNK_SIZE}
+ ceph osd pool application enable ${METADATA_POOL_NAME} cephfs
+ ceph osd pool set ${METADATA_POOL_NAME} size ${POOL_REPLICATION}
+ ceph osd pool set ${METADATA_POOL_NAME} crush_rule ${POOL_CRUSH_RULE_NAME}
+
+ ceph fs ls | grep ${FS_NAME} || ceph fs new ${FS_NAME} ${METADATA_POOL_NAME} ${POOL_NAME}
+
+ ceph -s
+
+
+---
+
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: cephfs-storage-init
+ namespace: {{ .Values.classDefaults.cephFSNamespace }}
+ labels:
+ app: {{ include "ceph-csi-cephfs.name" . }}
+ chart: {{ include "ceph-csi-cephfs.chart" . }}
+ component: {{ .Values.provisioner.name }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ annotations:
+ "meta.helm.sh/release-name": {{ .Release.Name }}
+ "meta.helm.sh/release-namespace": {{ .Release.Namespace }}
+ "helm.sh/hook": "post-install, pre-upgrade, pre-rollback"
+ "helm.sh/hook-delete-policy": "before-hook-creation"
+spec:
+ backoffLimit: 5
+ template:
+ spec:
+ serviceAccountName: {{ include "ceph-csi-cephfs.serviceAccountName.provisioner" . }}
+ volumes:
+ - name: cephfs-storage-init-configmap-volume
+ configMap:
+ name: cephfs-storage-init
+ defaultMode: 0555
+ containers:
+ - name: storage-init-{{- .Values.storageClass.name }}
+ image: {{ .Values.images.tags.cephfs_provisioner_storage_init | quote }}
+ command: ["/bin/bash", "/tmp/storage-init.sh"]
+ env:
+ - name: NAMESPACE
+ value: {{ .Values.classDefaults.cephFSNamespace }}
+ - name: ADDITIONAL_NAMESPACES
+ value: {{ join "," .Values.storageClass.additionalNamespaces | quote }}
+ - name: CEPH_USER_SECRET
+ value: {{ .Values.storageClass.userSecretName }}
+ - name: USER_ID
+ value: {{ .Values.storageClass.userId }}
+ - name: ADMIN_ID
+ value: {{ .Values.classDefaults.adminId }}
+ - name: POOL_NAME
+ value: {{ .Values.storageClass.pool }}
+ - name: METADATA_POOL_NAME
+ value: {{ .Values.storageClass.metadata_pool }}
+ - name: FS_NAME
+ value: {{ .Values.storageClass.fsName }}
+ - name: CHUNK_SIZE
+ value: {{ .Values.storageClass.chunk_size | quote }}
+ - name: POOL_REPLICATION
+ value: {{ .Values.storageClass.replication | quote }}
+ - name: POOL_CRUSH_RULE_NAME
+ value: {{ .Values.storageClass.crush_rule_name | quote }}
+ volumeMounts:
+ - name: cephfs-storage-init-configmap-volume
+ mountPath: /tmp
+ restartPolicy: OnFailure
+{{- if .Values.provisioner.nodeSelector }}
+ nodeSelector:
+{{ .Values.provisioner.nodeSelector | toYaml | trim | indent 8 }}
+{{- end }}
+{{- with .Values.provisioner.tolerations }}
+ tolerations:
+{{ toYaml . | indent 8 }}
+{{- end}}
--
2.17.1
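
The keyring extraction used by the storage-init scripts can be exercised in isolation. A minimal sketch, assuming GNU sed and the keyring layout that `ceph auth get-or-create` prints (the key value here is made up for illustration):

```shell
#!/bin/bash
# Sample input in the format "ceph auth get-or-create client.<user> ..." emits.
sample=$(printf '[client.kube]\n\tkey = AQBexampleKey0123456789==\n\tcaps mon = "allow r"\n')

# Same sed expression as in storage-init.sh: keep only the value of the
# "key = ..." line, dropping the section header and caps lines.
KEYRING=$(printf '%s\n' "$sample" | sed -n 's/^[[:blank:]]*key[[:blank:]]\+=[[:blank:]]\(.*\)/\1/p')

echo "$KEYRING"
```

The `caps mon` line is not matched because the pattern requires the literal word `key` immediately after the leading blanks.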

@@ -0,0 +1,37 @@
From 1b00f927ef2f3a279ede03d8971d0cdc306fd43a Mon Sep 17 00:00:00 2001
From: Hediberto Cavalcante da Silva
<hediberto.cavalcantedasilva@windriver.com>
Date: Sun, 6 Nov 2022 18:28:54 -0300
Subject: [PATCH] ceph-csi-cephfs: add imagePullSecrets to ServiceAccount
Signed-off-by: Hediberto Cavalcante da Silva <hediberto.cavalcantedasilva@windriver.com>
---
charts/ceph-csi-cephfs/templates/nodeplugin-serviceaccount.yaml | 2 ++
.../ceph-csi-cephfs/templates/provisioner-serviceaccount.yaml | 2 ++
2 files changed, 4 insertions(+)
diff --git a/charts/ceph-csi-cephfs/templates/nodeplugin-serviceaccount.yaml b/charts/ceph-csi-cephfs/templates/nodeplugin-serviceaccount.yaml
index 5dedaf4..7c93f52 100644
--- a/charts/ceph-csi-cephfs/templates/nodeplugin-serviceaccount.yaml
+++ b/charts/ceph-csi-cephfs/templates/nodeplugin-serviceaccount.yaml
@@ -10,4 +10,6 @@ metadata:
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
+imagePullSecrets:
+ - name: default-registry-key
{{- end -}}
diff --git a/charts/ceph-csi-cephfs/templates/provisioner-serviceaccount.yaml b/charts/ceph-csi-cephfs/templates/provisioner-serviceaccount.yaml
index c4ba5c1..3d85b0f 100644
--- a/charts/ceph-csi-cephfs/templates/provisioner-serviceaccount.yaml
+++ b/charts/ceph-csi-cephfs/templates/provisioner-serviceaccount.yaml
@@ -10,4 +10,6 @@ metadata:
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
+imagePullSecrets:
+ - name: default-registry-key
{{- end -}}
--
2.17.1

@@ -0,0 +1,29 @@
From 727a0bd641df4e6e750242341a9a5b3223b4347a Mon Sep 17 00:00:00 2001
From: Hediberto Cavalcante da Silva
<hediberto.cavalcantedasilva@windriver.com>
Date: Wed, 9 Nov 2022 16:21:04 -0300
Subject: [PATCH] ceph-csi-cephfs: add annotations to
provisioner-deployment.yaml
Signed-off-by: Hediberto Cavalcante da Silva <hediberto.cavalcantedasilva@windriver.com>
---
charts/ceph-csi-cephfs/templates/provisioner-deployment.yaml | 3 +++
1 file changed, 3 insertions(+)
diff --git a/charts/ceph-csi-cephfs/templates/provisioner-deployment.yaml b/charts/ceph-csi-cephfs/templates/provisioner-deployment.yaml
index c455b86..91b7042 100644
--- a/charts/ceph-csi-cephfs/templates/provisioner-deployment.yaml
+++ b/charts/ceph-csi-cephfs/templates/provisioner-deployment.yaml
@@ -9,6 +9,9 @@ metadata:
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
+ annotations:
+ "helm.sh/hook": "post-upgrade, post-install"
+ "helm.sh/hook-delete-policy": "before-hook-creation"
spec:
replicas: {{ .Values.provisioner.replicaCount }}
strategy:
--
2.17.1

@@ -0,0 +1,30 @@
From 90be61a690e99dd5702551164d8d80faa4d2eb54 Mon Sep 17 00:00:00 2001
From: Hediberto Cavalcante da Silva
<hediberto.cavalcantedasilva@windriver.com>
Date: Thu, 3 Nov 2022 16:26:38 -0300
Subject: [PATCH] ceph-csi-rbd: replace appVersion/version
Signed-off-by: Hediberto Cavalcante da Silva <hediberto.cavalcantedasilva@windriver.com>
---
charts/ceph-csi-rbd/Chart.yaml | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/charts/ceph-csi-rbd/Chart.yaml b/charts/ceph-csi-rbd/Chart.yaml
index 107647b..c141529 100644
--- a/charts/ceph-csi-rbd/Chart.yaml
+++ b/charts/ceph-csi-rbd/Chart.yaml
@@ -1,10 +1,10 @@
---
apiVersion: v1
-appVersion: canary
+appVersion: 3.6.2
description: "Container Storage Interface (CSI) driver,
provisioner, snapshotter, and attacher for Ceph RBD"
name: ceph-csi-rbd
-version: 3-canary
+version: 3.6.2
keywords:
- ceph
- rbd
--
2.17.1

@@ -0,0 +1,78 @@
From 6c0d74c0347ec9cff833f9bdf3ea14677e61ecc0 Mon Sep 17 00:00:00 2001
From: Hediberto Cavalcante da Silva
<hediberto.cavalcantedasilva@windriver.com>
Date: Thu, 3 Nov 2022 20:01:13 -0300
Subject: [PATCH] ceph-csi-rbd: add default fields to values.yaml
Signed-off-by: Hediberto Cavalcante da Silva <hediberto.cavalcantedasilva@windriver.com>
---
charts/ceph-csi-rbd/values.yaml | 50 +++++++++++++++++++++++++++++++++
1 file changed, 50 insertions(+)
diff --git a/charts/ceph-csi-rbd/values.yaml b/charts/ceph-csi-rbd/values.yaml
index 42a06c4..2d9072b 100644
--- a/charts/ceph-csi-rbd/values.yaml
+++ b/charts/ceph-csi-rbd/values.yaml
@@ -406,6 +406,22 @@ storageClass:
# mountOptions:
# - discard
+ # Ceph user name to access this pool
+ userId: kube
+ # K8 secret name with key for accessing the Ceph pool
+ userSecretName: ceph-secret-kube
+ # Pool replication
+ replication: 1
+ # Pool crush rule name
+ crush_rule_name: storage_tier_ruleset
+ # Pool chunk size / PG_NUM
+ chunk_size: 8
+ # Additional namespace to allow storage class access (other than where
+ # installed)
+ additionalNamespaces:
+ - default
+ - kube-public
+
# Mount the host /etc/selinux inside pods to support
# selinux-enabled filesystems
selinuxMount: true
@@ -458,3 +474,37 @@ externallyManagedConfigmap: false
cephConfConfigMapName: ceph-config
# Name of the configmap used for encryption kms configuration
kmsConfigMapName: ceph-csi-encryption-kms-config
+
+#
+# Defaults for storage classes.
+#
+classDefaults:
+ # Define ip addresses of Ceph Monitors
+ monitors:
+ - 192.168.204.3:6789
+ - 192.168.204.150:6789
+ - 192.168.204.4:6789
+ # K8 secret name for the admin context
+ adminId: admin
+ adminSecretName: ceph-secret
+
+#
+# Defines:
+# - Provisioner's image name including container registry.
+# - CEPH helper image
+#
+images:
+ tags:
+ csi_provisioner: k8s.gcr.io/sig-storage/csi-provisioner:v3.1.0
+ csi_snapshotter: k8s.gcr.io/sig-storage/csi-snapshotter:v4.2.0
+ csi_attacher: k8s.gcr.io/sig-storage/csi-attacher:v3.4.0
+ csi_resizer: k8s.gcr.io/sig-storage/csi-resizer:v1.4.0
+ csi_cephcsi: quay.io/cephcsi/cephcsi:v3.6.2
+ csi_registrar: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.4.0
+ rbd_provisioner_storage_init: docker.io/openstackhelm/ceph-config-helper:ubuntu_bionic-20220802
+ pull_policy: "IfNotPresent"
+ local_registry:
+ active: false
+ exclude:
+ - dep_check
+ - image_repo_sync
--
2.17.1
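
For orientation, the `classDefaults.monitors` list above is what the storage-init template's `range` loop turns into per-monitor sections. With these default addresses, the rendered `ceph.conf` would look roughly like this (a sketch of the template output, not captured from a live system):

```ini
[global]
auth_cluster_required = none
auth_service_required = none
auth_client_required = none

[mon.0]
mon_addr = 192.168.204.3:6789

[mon.1]
mon_addr = 192.168.204.150:6789

[mon.2]
mon_addr = 192.168.204.4:6789
```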

@@ -0,0 +1,299 @@
From d58e048aea5ec70f830f1703245b811d1ee54a7b Mon Sep 17 00:00:00 2001
From: Hediberto Cavalcante da Silva
<hediberto.cavalcantedasilva@windriver.com>
Date: Thu, 3 Nov 2022 19:54:49 -0300
Subject: [PATCH] ceph-csi-rbd: add storage-init.yaml
Signed-off-by: Hediberto Cavalcante da Silva <hediberto.cavalcantedasilva@windriver.com>
---
.../ceph-csi-rbd/templates/storage-init.yaml | 279 ++++++++++++++++++
1 file changed, 279 insertions(+)
create mode 100644 charts/ceph-csi-rbd/templates/storage-init.yaml
diff --git a/charts/ceph-csi-rbd/templates/storage-init.yaml b/charts/ceph-csi-rbd/templates/storage-init.yaml
new file mode 100644
index 0000000..8e8c4de
--- /dev/null
+++ b/charts/ceph-csi-rbd/templates/storage-init.yaml
@@ -0,0 +1,279 @@
+{{/*
+#
+# Copyright (c) 2020-2022 Wind River Systems, Inc.
+#
+# SPDX-License-Identifier: Apache-2.0
+#
+*/}}
+
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rbd-rbac-secrets-namespaces
+ labels:
+ app: {{ include "ceph-csi-rbd.name" . }}
+ chart: {{ include "ceph-csi-rbd.chart" . }}
+ component: {{ .Values.provisioner.name }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ annotations:
+ "meta.helm.sh/release-name": {{ .Release.Name }}
+ "meta.helm.sh/release-namespace": {{ .Release.Namespace }}
+ "helm.sh/hook": "pre-upgrade, pre-install"
+ "helm.sh/hook-delete-policy": "before-hook-creation"
+rules:
+ - apiGroups: [""]
+ resources: ["secrets"]
+ verbs: ["get", "list", "watch", "create", "delete"]
+ - apiGroups: [""]
+ resources: ["namespaces"]
+ verbs: ["get", "create", "list", "update"]
+
+---
+
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: rbd-rbac-secrets-namespaces
+ labels:
+ app: {{ include "ceph-csi-rbd.name" . }}
+ chart: {{ include "ceph-csi-rbd.chart" . }}
+ component: {{ .Values.provisioner.name }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ annotations:
+ "meta.helm.sh/release-name": {{ .Release.Name }}
+ "meta.helm.sh/release-namespace": {{ .Release.Namespace }}
+ "helm.sh/hook": "pre-upgrade, pre-install"
+ "helm.sh/hook-delete-policy": "before-hook-creation"
+subjects:
+ - kind: ServiceAccount
+ name: {{ include "ceph-csi-rbd.serviceAccountName.provisioner" . }}
+ namespace: {{ .Release.Namespace }}
+roleRef:
+ kind: ClusterRole
+ name: rbd-rbac-secrets-namespaces
+ apiGroup: rbac.authorization.k8s.io
+
+---
+
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: rbd-storage-init
+ namespace: {{ .Release.Namespace }}
+ labels:
+ app: {{ include "ceph-csi-rbd.name" . }}
+ chart: {{ include "ceph-csi-rbd.chart" . }}
+ component: {{ .Values.provisioner.name }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ annotations:
+ "meta.helm.sh/release-name": {{ .Release.Name }}
+ "meta.helm.sh/release-namespace": {{ .Release.Namespace }}
+ "helm.sh/hook": "pre-upgrade, pre-install"
+ "helm.sh/hook-delete-policy": "before-hook-creation"
+data:
+ ceph.conf: |
+ #
+ # Copyright (c) 2020-2022 Wind River Systems, Inc.
+ #
+ # SPDX-License-Identifier: Apache-2.0
+ #
+
+ [global]
+ # For version 0.55 and beyond, you must explicitly enable
+ # or disable authentication with "auth" entries in [global].
+ auth_cluster_required = none
+ auth_service_required = none
+ auth_client_required = none
+
+ {{ $monitors := .Values.classDefaults.monitors }}
+ {{ range $index, $monitor := $monitors}}
+ [mon.{{- $index }}]
+ mon_addr = {{ $monitor }}
+ {{- end }}
+
+ storage-init.sh: |
+ #
+ # Copyright (c) 2020-2022 Wind River Systems, Inc.
+ #
+ # SPDX-License-Identifier: Apache-2.0
+ #
+
+ #! /bin/bash
+
+ # Copy from read only mount to Ceph config folder
+ cp /tmp/ceph.conf /etc/ceph/
+
+ if [ -n "${CEPH_ADMIN_SECRET}" ]; then
+ kubectl get secret -n ${NAMESPACE} | grep ${CEPH_ADMIN_SECRET}
+ if [ $? -ne 0 ]; then
+ echo "Create ${CEPH_ADMIN_SECRET} secret"
+ kubectl create secret generic ${CEPH_ADMIN_SECRET} --type="kubernetes.io/rbd" --from-literal=key= --namespace=${NAMESPACE}
+ if [ $? -ne 0 ]; then
+ echo "Error creating secret ${CEPH_ADMIN_SECRET}, exit"
+ exit 1
+ fi
+ fi
+ fi
+
+ touch /etc/ceph/ceph.client.admin.keyring
+
+ # Check if ceph is accessible
+ echo "===================================="
+ ceph -s
+ if [ $? -ne 0 ]; then
+ echo "Error: Ceph cluster is not accessible, check Pod logs for details."
+ exit 1
+ fi
+
+ set -ex
+ # Make sure the pool exists.
+ ceph osd pool stats ${POOL_NAME} || ceph osd pool create ${POOL_NAME} ${POOL_CHUNK_SIZE}
+ # Set pool configuration.
+ ceph osd pool application enable ${POOL_NAME} rbd
+ ceph osd pool set ${POOL_NAME} size ${POOL_REPLICATION}
+ ceph osd pool set ${POOL_NAME} crush_rule ${POOL_CRUSH_RULE_NAME}
+ set +ex
+
+ if [[ -z "${USER_ID}" && -z "${CEPH_USER_SECRET}" ]]; then
+ echo "No need to create secrets for pool ${POOL_NAME}"
+ exit 0
+ fi
+
+ set -ex
+ KEYRING=$(ceph auth get-or-create client.${USER_ID} mon "allow r" osd "allow rwx pool=${POOL_NAME}" | sed -n 's/^[[:blank:]]*key[[:blank:]]\+=[[:blank:]]\(.*\)/\1/p')
+ # Set up pool key in Ceph format
+ CEPH_USER_KEYRING=/etc/ceph/ceph.client.${USER_ID}.keyring
+ echo $KEYRING > $CEPH_USER_KEYRING
+ set +ex
+
+ if [ -n "${CEPH_USER_SECRET}" ]; then
+ kubectl get secret -n ${NAMESPACE} ${CEPH_USER_SECRET} 2>/dev/null
+ if [ $? -ne 0 ]; then
+ echo "Create ${CEPH_USER_SECRET} secret"
+ kubectl create secret generic -n ${NAMESPACE} ${CEPH_USER_SECRET} --type="kubernetes.io/rbd" --from-literal=key=$KEYRING
+ if [ $? -ne 0 ]; then
+ echo "Error creating secret ${CEPH_USER_SECRET} in ${NAMESPACE}, exit"
+ exit 1
+ fi
+ else
+ echo "Secret ${CEPH_USER_SECRET} already exists"
+ fi
+
+ # Support creating namespaces and Ceph user secrets for additional
+ # namespaces other than that which the provisioner is installed. This
+ # allows the provisioner to set up and provide PVs for multiple
+ # applications across many namespaces.
+ if [ -n "${ADDITIONAL_NAMESPACES}" ]; then
+ for ns in $(IFS=,; echo ${ADDITIONAL_NAMESPACES}); do
+ kubectl get namespace $ns 2>/dev/null
+ if [ $? -ne 0 ]; then
+ kubectl create namespace $ns
+ if [ $? -ne 0 ]; then
+ echo "Error creating namespace $ns, exit"
+ continue
+ fi
+ fi
+
+ kubectl get secret -n $ns ${CEPH_USER_SECRET} 2>/dev/null
+ if [ $? -ne 0 ]; then
+ echo "Creating secret ${CEPH_USER_SECRET} for namespace $ns"
+ kubectl create secret generic -n $ns ${CEPH_USER_SECRET} --type="kubernetes.io/rbd" --from-literal=key=$KEYRING
+ if [ $? -ne 0 ]; then
+ echo "Error creating secret ${CEPH_USER_SECRET} in $ns, exit"
+ fi
+ else
+ echo "Secret ${CEPH_USER_SECRET} for namespace $ns already exists"
+ fi
+ done
+ fi
+ fi
+
+ # Check if pool is accessible using provided credentials
+ echo "====================================="
+ timeout --preserve-status 10 rbd -p ${POOL_NAME} --user ${USER_ID} ls -K $CEPH_USER_KEYRING
+ if [ $? -ne 143 ]; then
+ if [ $? -ne 0 ]; then
+ echo "Error: Ceph pool ${POOL_NAME} is not accessible using credentials for user ${USER_ID}, check Pod logs for details."
+ exit 1
+ else
+ echo "Pool ${POOL_NAME} accessible"
+ fi
+ else
+ echo "rbd command timed out and was sent a SIGTERM. Make sure OSDs have been provisioned."
+ fi
+
+ ceph -s
+
+---
+
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: rbd-storage-init
+ namespace: {{ .Release.Namespace }}
+ labels:
+ app: {{ include "ceph-csi-rbd.name" . }}
+ chart: {{ include "ceph-csi-rbd.chart" . }}
+ component: {{ .Values.provisioner.name }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ annotations:
+ "meta.helm.sh/release-name": {{ .Release.Name }}
+ "meta.helm.sh/release-namespace": {{ .Release.Namespace }}
+ "helm.sh/hook": "post-install, pre-upgrade, pre-rollback"
+ "helm.sh/hook-delete-policy": "before-hook-creation"
+spec:
+ backoffLimit: 5
+ activeDeadlineSeconds: 360
+ template:
+ metadata:
+ name: "{{ .Release.Name }}"
+ namespace: {{ .Release.Namespace }}
+ labels:
+ heritage: {{ .Release.Service | quote }}
+ release: {{ .Release.Name | quote }}
+ chart: "{{ .Chart.Name }}-{{- .Chart.Version }}"
+ spec:
+ serviceAccountName: {{ include "ceph-csi-rbd.serviceAccountName.provisioner" . }}
+ restartPolicy: OnFailure
+ volumes:
+ - name: rbd-storage-init-configmap-volume
+ configMap:
+ name: rbd-storage-init
+ containers:
+ - name: storage-init-{{- .Values.storageClass.name }}
+ image: {{ .Values.images.tags.rbd_provisioner_storage_init | quote }}
+ command: [ "/bin/bash", "/tmp/storage-init.sh" ]
+ env:
+ - name: NAMESPACE
+ value: {{ .Release.Namespace }}
+ - name: ADDITIONAL_NAMESPACES
+ value: {{ join "," .Values.storageClass.additionalNamespaces | quote }}
+ - name: CEPH_ADMIN_SECRET
+ value: {{ .Values.classDefaults.adminSecretName }}
+ - name: CEPH_USER_SECRET
+ value: {{ .Values.storageClass.userSecretName }}
+ - name: USER_ID
+ value: {{ .Values.storageClass.userId }}
+ - name: POOL_NAME
+ value: {{ .Values.storageClass.pool }}
+ - name: POOL_REPLICATION
+ value: {{ .Values.storageClass.replication | quote }}
+ - name: POOL_CRUSH_RULE_NAME
+ value: {{ .Values.storageClass.crush_rule_name | quote }}
+ - name: POOL_CHUNK_SIZE
+ value: {{ .Values.storageClass.chunk_size | quote }}
+ volumeMounts:
+ - name: rbd-storage-init-configmap-volume
+ mountPath: /tmp
+{{- if .Values.provisioner.nodeSelector }}
+ nodeSelector:
+{{ .Values.provisioner.nodeSelector | toYaml | trim | indent 8 }}
+{{- end }}
+{{- with .Values.provisioner.tolerations }}
+ tolerations:
+{{ toYaml . | indent 8 }}
+{{- end }}
--
2.17.1
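
The `ADDITIONAL_NAMESPACES` handling in both storage-init scripts relies on a small IFS trick to split the comma-separated list; it can be sanity-checked outside the cluster (the namespace names below are illustrative):

```shell
#!/bin/bash
ADDITIONAL_NAMESPACES="default,kube-public,monitoring"

# Inside the subshell, IFS=, makes the unquoted expansion split on commas;
# echo then re-joins the words with spaces, which the outer for-loop
# splits again on the default whitespace IFS.
for ns in $(IFS=,; echo ${ADDITIONAL_NAMESPACES}); do
    echo "would ensure namespace: $ns"
done
```

One implication of this idiom is that namespace names must not contain whitespace or commas, which Kubernetes already guarantees.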

@@ -0,0 +1,37 @@
From 72e79f8c37dd5509a2cfdd6157ea505f0b15b8d4 Mon Sep 17 00:00:00 2001
From: Hediberto Cavalcante da Silva
<hediberto.cavalcantedasilva@windriver.com>
Date: Sun, 6 Nov 2022 18:25:44 -0300
Subject: [PATCH] ceph-csi-rbd: add imagePullSecrets to ServiceAccount
Signed-off-by: Hediberto Cavalcante da Silva <hediberto.cavalcantedasilva@windriver.com>
---
charts/ceph-csi-rbd/templates/nodeplugin-serviceaccount.yaml | 2 ++
charts/ceph-csi-rbd/templates/provisioner-serviceaccount.yaml | 2 ++
2 files changed, 4 insertions(+)
diff --git a/charts/ceph-csi-rbd/templates/nodeplugin-serviceaccount.yaml b/charts/ceph-csi-rbd/templates/nodeplugin-serviceaccount.yaml
index 36e1ee7..30080ad 100644
--- a/charts/ceph-csi-rbd/templates/nodeplugin-serviceaccount.yaml
+++ b/charts/ceph-csi-rbd/templates/nodeplugin-serviceaccount.yaml
@@ -10,4 +10,6 @@ metadata:
component: {{ .Values.nodeplugin.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
+imagePullSecrets:
+ - name: default-registry-key
{{- end -}}
diff --git a/charts/ceph-csi-rbd/templates/provisioner-serviceaccount.yaml b/charts/ceph-csi-rbd/templates/provisioner-serviceaccount.yaml
index 893b43a..cebb2e7 100644
--- a/charts/ceph-csi-rbd/templates/provisioner-serviceaccount.yaml
+++ b/charts/ceph-csi-rbd/templates/provisioner-serviceaccount.yaml
@@ -10,4 +10,6 @@ metadata:
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
+imagePullSecrets:
+ - name: default-registry-key
{{- end -}}
--
2.17.1


@@ -0,0 +1,28 @@
From c5d76ee99c1728e341a8631d1c06708a63dc6304 Mon Sep 17 00:00:00 2001
From: Hediberto Cavalcante da Silva <hediberto.cavalcantedasilva@windriver.com>
Date: Wed, 9 Nov 2022 09:20:34 -0300
Subject: [PATCH] ceph-csi-rbd: add annotations to provisioner-deployment.yaml
Signed-off-by: Hediberto Cavalcante da Silva <hediberto.cavalcantedasilva@windriver.com>
---
charts/ceph-csi-rbd/templates/provisioner-deployment.yaml | 3 +++
1 file changed, 3 insertions(+)
diff --git a/charts/ceph-csi-rbd/templates/provisioner-deployment.yaml b/charts/ceph-csi-rbd/templates/provisioner-deployment.yaml
index b3b0916..0aab501 100644
--- a/charts/ceph-csi-rbd/templates/provisioner-deployment.yaml
+++ b/charts/ceph-csi-rbd/templates/provisioner-deployment.yaml
@@ -9,6 +9,9 @@ metadata:
component: {{ .Values.provisioner.name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
+ annotations:
+ "helm.sh/hook": "post-upgrade, post-install"
+ "helm.sh/hook-delete-policy": "before-hook-creation"
spec:
replicas: {{ .Values.provisioner.replicaCount }}
strategy:
--
2.17.1


@@ -0,0 +1,10 @@
0001-ceph-csi-cephfs-replace-appVersion-version.patch
0002-ceph-csi-cephfs-add-default-fields-to-values.yaml.patch
0003-ceph-csi-cephfs-add-storage-init.yaml.patch
0004-ceph-csi-cephfs-add-imagePullSecrets-to-ServiceAccount.patch
0005-ceph-csi-cephfs-add-annotations-to-provisioner-deployment.patch
0006-ceph-csi-rbd-replace-appVersion-version.patch
0007-ceph-csi-rbd-add-default-fields-to-values.yaml.patch
0008-ceph-csi-rbd-add-storage-init.yaml.patch
0009-ceph-csi-rbd-add-imagePullSecrets-to-ServiceAccount.patch
0010-ceph-csi-rbd-add-annotations-to-provisioner-deployment.patch


@@ -0,0 +1 @@
usr/lib/helm/*


@@ -0,0 +1,28 @@
#!/usr/bin/make -f
export DH_VERBOSE = 1
export ROOT = debian/tmp
export APP_FOLDER = $(ROOT)/usr/lib/helm
%:
dh $@
override_dh_auto_build:
mkdir -p ceph-csi
# Copy ceph-csi charts
cp -r charts/* ceph-csi
cp Makefile ceph-csi
cd ceph-csi && make ceph-csi-rbd
cd ceph-csi && make ceph-csi-cephfs
override_dh_auto_install:
# Install the app tar file.
install -d -m 755 $(APP_FOLDER)
install -p -D -m 755 ceph-csi/ceph-csi-rbd*.tgz $(APP_FOLDER)
install -p -D -m 755 ceph-csi/ceph-csi-cephfs*.tgz $(APP_FOLDER)
override_dh_auto_test:


@@ -0,0 +1 @@
3.0 (quilt)


@@ -0,0 +1,12 @@
---
debname: platform-helm
debver: 1.0-1
dl_path:
name: ceph-csi-3.6.2.tar.gz
url: https://github.com/ceph/ceph-csi/archive/v3.6.2.tar.gz
md5sum: a5fd6785c521faf0cb7df008a1012381
src_files:
- platform-helm/files/Makefile
revision:
dist: $STX_DIST
PKG_GITREVCOUNT: true
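The build pulls the upstream ceph-csi 3.6.2 tarball and pins it by md5sum. To sanity-check a downloaded tarball against the checksum recorded above, a small helper like the following can be used (the local filename is an assumption):

```python
import hashlib

def md5_of_file(path, chunk=8192):
    """Compute the md5 hex digest of a file, reading in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Hypothetical local download; compare against the md5sum in meta_data.yaml:
# md5_of_file("ceph-csi-3.6.2.tar.gz") == "a5fd6785c521faf0cb7df008a1012381"
```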


@@ -0,0 +1,5 @@
This directory contains all StarlingX charts that need to be built for this
application. Some charts are common across applications. These common charts
reside in the stx-config/kubernetes/helm-charts directory. To include these in
this application update the build_srpm.data file and use the COPY_LIST_TO_TAR
mechanism to populate these common charts.


@@ -0,0 +1,45 @@
#
# Copyright 2017 The Openstack-Helm Authors.
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# It's necessary to set this because some environments don't link sh -> bash.
SHELL := /bin/bash
TASK := build
EXCLUDES := helm-toolkit doc tests tools logs tmp
CHARTS := helm-toolkit $(filter-out $(EXCLUDES), $(patsubst %/.,%,$(wildcard */.)))
.PHONY: $(EXCLUDES) $(CHARTS)
all: $(CHARTS)
$(CHARTS):
@if [ -d $@ ]; then \
echo; \
echo "===== Processing [$@] chart ====="; \
make $(TASK)-$@; \
fi
init-%:
if [ -f $*/Makefile ]; then make -C $*; fi
if [ -f $*/requirements.yaml ]; then helm dep up $*; fi
lint-%: init-%
if [ -d $* ]; then helm lint $*; fi
@echo "Clobber dependencies from packaging"
rm -v -f $*/requirements.lock $*/requirements.yaml
build-%: lint-%
if [ -d $* ]; then helm package $*; fi
clean:
@echo "Clean all build artifacts"
rm -f */templates/_partials.tpl */templates/_globals.tpl
rm -f *tgz */charts/*tgz */requirements.lock
rm -rf */charts */tmpcharts
%:
@:
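The chart list in this Makefile is `helm-toolkit` first, followed by every subdirectory not in `EXCLUDES`. The same selection logic, expressed as a small Python sketch (directory names are hypothetical):

```python
# Mirrors: CHARTS := helm-toolkit $(filter-out $(EXCLUDES), $(wildcard */.))
EXCLUDES = {"helm-toolkit", "doc", "tests", "tools", "logs", "tmp"}

def charts_to_build(subdirs):
    """helm-toolkit is always built first; excluded dirs are filtered out."""
    return ["helm-toolkit"] + [d for d in subdirs if d not in EXCLUDES]

# Example with made-up directory listing:
print(charts_to_build(["ceph-csi-rbd", "doc", "ceph-csi-cephfs"]))
```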


@@ -1,10 +0,0 @@
SRC_DIR="k8sapp_platform"
OPT_DEP_LIST="$STX_BASE/platform-armada-app/stx-platform-helm"
# Bump The version to be one less that what the version was prior to decoupling
# as this will align the GITREVCOUNT value to increment the version by one.
# Remove this (i.e. reset to 0) on then next major version changes when
# TIS_BASE_SRCREV changes. This version should align with the version of the
# helm charts in stx-platform-helm
TIS_BASE_SRCREV=c608f2aaa92064b712e7076e4141a162b78fe995
TIS_PATCH_VER=GITREVCOUNT+7


@@ -1,57 +0,0 @@
%global app_name platform-integ-apps
%global pypi_name k8sapp-platform
%global sname k8sapp_platform
Name: python-%{pypi_name}
Version: 1.0
Release: %{tis_patch_ver}%{?_tis_dist}
Summary: StarlingX sysinv extensions: Platform Integration K8S app
License: Apache-2.0
Source0: %{name}-%{version}.tar.gz
BuildArch: noarch
BuildRequires: python-setuptools
BuildRequires: python-pbr
BuildRequires: python2-pip
BuildRequires: python2-wheel
%description
StarlingX sysinv extensions: Platform Integration K8S app
%prep
%setup
# Remove bundled egg-info
rm -rf %{pypi_name}.egg-info
%build
export PBR_VERSION=%{version}
%{__python2} setup.py build
%py2_build_wheel
%install
export PBR_VERSION=%{version}.%{tis_patch_ver}
export SKIP_PIP_INSTALL=1
%{__python2} setup.py install --skip-build --root %{buildroot}
mkdir -p ${RPM_BUILD_ROOT}/plugins/%{app_name}
install -m 644 dist/*.whl ${RPM_BUILD_ROOT}/plugins/%{app_name}/
%files
%{python2_sitelib}/%{sname}
%{python2_sitelib}/%{sname}-*.egg-info
%package wheels
Summary: %{name} wheels
%description wheels
Contains python wheels for %{name}
%files wheels
/plugins/*
%changelog
* Mon May 11 2020 Robert Church <robert.church@windriver.com>
- Initial version


@@ -16,13 +16,13 @@ Section: libs
Architecture: any
Depends: ${misc:Depends}, ${python3:Depends}
Description: StarlingX Sysinv Platform Extensions
This package contains sysinv plugins for the platform armada
K8S app.
This package contains sysinv plugins for the platform K8S
apps.
Package: python3-k8sapp-platform-wheels
Section: libs
Architecture: any
Depends: ${misc:Depends}, ${python3:Depends}, python3-wheel
Description: StarlingX Sysinv Platform Extension Wheels
This package contains python wheels for the platform armada
K8S app plugins.
This package contains python wheels for the platform K8S
app plugins.


@@ -1,2 +1,2 @@
usr/lib/python3/dist-packages/k8sapp_platform-1.0.0.egg-info/*
usr/lib/python3/dist-packages/k8sapp_platform-1.0.*.egg-info/*
usr/lib/python3/dist-packages/k8sapp_platform/*


@@ -2,7 +2,12 @@
# export DH_VERBOSE = 1
export APP_NAME=platform-integ-apps
export PBR_VERSION=1.0.0
export DEB_VERSION = $(shell dpkg-parsechangelog | egrep '^Version:' | cut -f 2 -d ' ')
export MAJOR = $(shell echo $(DEB_VERSION) | cut -f 1 -d '-')
export MINOR_PATCH = $(shell echo $(DEB_VERSION) | cut -f 4 -d '.')
export PBR_VERSION=$(MAJOR).$(MINOR_PATCH)
export PYBUILD_NAME=k8sapp-platform
export SKIP_PIP_INSTALL=1
export ROOT=debian/tmp


@@ -4,4 +4,6 @@ debver: 1.0-1
src_path: k8sapp_platform
revision:
dist: $STX_DIST
PKG_GITREVCOUNT: true
GITREVCOUNT:
BASE_SRCREV: c608f2aaa92064b712e7076e4141a162b78fe995
SRC_DIR: ${MY_REPO}/stx/platform-armada-app


@@ -1,19 +0,0 @@
#
# Copyright (c) 2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import yaml
class quoted_str(str):
pass
# force strings to be single-quoted to avoid interpretation as numeric values
def quoted_presenter(dumper, data):
return dumper.represent_scalar(u'tag:yaml.org,2002:str', data, style="'")
yaml.add_representer(quoted_str, quoted_presenter)
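The deleted helper above forced strings to be emitted single-quoted so YAML consumers would not re-interpret them as numbers. A minimal sketch of its effect (class name changed for illustration; requires PyYAML):

```python
import yaml

class QuotedStr(str):
    """Marker type: instances are always dumped with single-quote style."""
    pass

def quoted_presenter(dumper, data):
    return dumper.represent_scalar(u'tag:yaml.org,2002:str', data, style="'")

yaml.add_representer(QuotedStr, quoted_presenter)

# "64" is emitted as '64', so it stays a string rather than becoming an int:
print(yaml.dump({"chunk_size": QuotedStr("64")}))
```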


@@ -1,43 +0,0 @@
#
# Copyright (c) 2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# All Rights Reserved.
#
""" System inventory Armada manifest operator."""
from k8sapp_platform.helm.ceph_pools_audit import CephPoolsAuditHelm
from k8sapp_platform.helm.rbd_provisioner import RbdProvisionerHelm
from k8sapp_platform.helm.ceph_fs_provisioner import CephFSProvisionerHelm
from sysinv.common import constants
from sysinv.helm import manifest_base as base
class PlatformArmadaManifestOperator(base.ArmadaManifestOperator):
APP = constants.HELM_APP_PLATFORM
ARMADA_MANIFEST = 'platform-integration-manifest'
CHART_GROUP_CEPH = 'starlingx-ceph-charts'
CHART_GROUPS_LUT = {
CephPoolsAuditHelm.CHART: CHART_GROUP_CEPH,
RbdProvisionerHelm.CHART: CHART_GROUP_CEPH,
CephFSProvisionerHelm.CHART: CHART_GROUP_CEPH
}
CHARTS_LUT = {
CephPoolsAuditHelm.CHART: 'kube-system-ceph-pools-audit',
RbdProvisionerHelm.CHART: 'kube-system-rbd-provisioner',
CephFSProvisionerHelm.CHART: 'kube-system-cephfs-provisioner'
}
def platform_mode_manifest_updates(self, dbapi, mode):
""" Update the application manifest based on the platform
:param dbapi: DB api object
:param mode: mode to control how to apply the application manifest
"""
pass


@@ -1,5 +1,5 @@
#
# Copyright (c) 2020 Wind River Systems, Inc.
# Copyright (c) 2020-2022 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -10,14 +10,13 @@ from sysinv.helm import common
HELM_CHART_RBD_PROVISIONER = 'rbd-provisioner'
HELM_CHART_CEPH_POOLS_AUDIT = 'ceph-pools-audit'
HELM_CHART_HELM_TOOLKIT = 'helm-toolkit'
HELM_CHART_CEPH_FS_PROVISIONER = 'cephfs-provisioner'
HELM_NS_CEPH_FS_PROVISIONER = common.HELM_NS_KUBE_SYSTEM
FLUXCD_HELMRELEASE_RBD_PROVISIONER = 'rbd-provisioner'
FLUXCD_HELMRELEASE_CEPH_POOLS_AUDIT = 'ceph-pools-audit'
FLUXCD_HELMRELEASE_CEPH_FS_PROVISIONER = 'cephfs-provisioner'
HELM_CEPH_FS_PROVISIONER_CLAIM_ROOT = '/pvc-volumes'
HELM_CEPH_FS_PROVISIONER_VOLUME_NAME_PREFIX = 'pvc-volumes-'
HELM_CHART_CEPH_FS_PROVISIONER_NAME = 'ceph.com/cephfs'
K8S_CEPHFS_PROVISIONER_ADMIN_SECRET_NAME = 'ceph-secret-admin'
K8S_CEPHFS_PROVISIONER_ADMIN_SECRET_NAMESPACE = 'kube-system'


@@ -6,6 +6,8 @@
from k8sapp_platform.common import constants as app_constants
import subprocess
from sysinv.common import constants
from sysinv.common import exception
@@ -139,53 +141,73 @@ class CephFSProvisionerHelm(base.FluxCDBaseHelm):
def _skip_ceph_mon_2(name):
return name != constants.CEPH_MON_2
classdefaults = {
"monitors": self._get_formatted_ceph_monitor_ips(
name_filter=_skip_ceph_mon_2),
"adminId": app_constants.K8S_CEPHFS_PROVISIONER_USER_NAME,
"adminSecretName": app_constants.K8S_CEPHFS_PROVISIONER_ADMIN_SECRET_NAME
}
def _get_ceph_fsid():
process = subprocess.Popen(['timeout', '30', 'ceph', 'fsid'],
stdout=subprocess.PIPE)
stdout, stderr = process.communicate()
return stdout.strip()
bk = ceph_bks[0]
# Get tier info.
tiers = self.dbapi.storage_tier_get_list()
classes = []
for bk in ceph_bks:
# Get the ruleset for the new kube-cephfs pools.
tier = next((t for t in tiers if t.forbackendid == bk.id), None)
if not tier:
raise Exception("No tier present for backend %s" % bk.name)
# Get the ruleset for the new kube-rbd pool.
tier = next((t for t in tiers if t.forbackendid == bk.id), None)
if not tier:
raise Exception("No tier present for backend %s" % bk.name)
rule_name = "{0}{1}{2}".format(
tier.name,
constants.CEPH_CRUSH_TIER_SUFFIX,
"-ruleset").replace('-', '_')
rule_name = "{0}{1}{2}".format(
tier.name,
constants.CEPH_CRUSH_TIER_SUFFIX,
"-ruleset").replace('-', '_')
cls = {
"name": K8CephFSProvisioner.get_storage_class_name(bk),
"data_pool_name": K8CephFSProvisioner.get_data_pool(bk),
"metadata_pool_name": K8CephFSProvisioner.get_metadata_pool(bk),
"fs_name": K8CephFSProvisioner.get_fs(bk),
"replication": int(bk.capabilities.get("replication")),
"crush_rule_name": rule_name,
"chunk_size": 64,
"userId": K8CephFSProvisioner.get_user_id(bk),
"userSecretName": K8CephFSProvisioner.get_user_secret_name(bk),
"claim_root": app_constants.HELM_CEPH_FS_PROVISIONER_CLAIM_ROOT,
"additionalNamespaces": ['default', 'kube-public']
}
cluster_id = _get_ceph_fsid()
user_secret_name = K8CephFSProvisioner.get_user_secret_name(bk)
classes.append(cls)
global_settings = {
"replicas": self._num_replicas_for_platform_app(),
class_defaults = {
"monitors": self._get_formatted_ceph_monitor_ips(
name_filter=_skip_ceph_mon_2),
"adminId": app_constants.K8S_CEPHFS_PROVISIONER_USER_NAME,
"adminSecretName": constants.K8S_RBD_PROV_ADMIN_SECRET_NAME
}
storage_class = {
"clusterID": cluster_id,
"name": K8CephFSProvisioner.get_storage_class_name(bk),
"fsName": K8CephFSProvisioner.get_fs(bk),
"pool": K8CephFSProvisioner.get_data_pool(bk),
"metadata_pool": K8CephFSProvisioner.get_metadata_pool(bk),
"volumeNamePrefix": app_constants.HELM_CEPH_FS_PROVISIONER_VOLUME_NAME_PREFIX,
"provisionerSecret": user_secret_name,
"controllerExpandSecret": user_secret_name,
"nodeStageSecret": user_secret_name,
"userId": K8CephFSProvisioner.get_user_id(bk),
"userSecretName": user_secret_name or class_defaults["adminSecretName"],
"chunk_size": 64,
"replication": int(bk.capabilities.get("replication")),
"crush_rule_name": rule_name,
"additionalNamespaces": ['default', 'kube-public']
}
provisioner = {
"replicaCount": self._num_replicas_for_platform_app()
}
monitors = self._get_formatted_ceph_monitor_ips(
name_filter=_skip_ceph_mon_2)
csi_config = [{
"clusterID": cluster_id,
"monitors": [monitor for monitor in monitors]
}]
overrides = {
app_constants.HELM_NS_CEPH_FS_PROVISIONER: {
"classdefaults": classdefaults,
"classes": classes,
"global": global_settings
"storageClass": storage_class,
"provisioner": provisioner,
"csiConfig": csi_config,
"classDefaults": class_defaults
}
}
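The crush rule name used in the overrides above is derived purely from the tier name. Assuming `constants.CEPH_CRUSH_TIER_SUFFIX` is `"-tier"` (an assumption based on StarlingX naming conventions, not confirmed by this diff), the derivation works out as follows:

```python
CEPH_CRUSH_TIER_SUFFIX = "-tier"  # assumed value of sysinv's constant

def crush_rule_name(tier_name):
    # Mirrors: "{0}{1}{2}".format(tier.name, SUFFIX, "-ruleset").replace('-', '_')
    return "{0}{1}{2}".format(
        tier_name, CEPH_CRUSH_TIER_SUFFIX, "-ruleset").replace('-', '_')

print(crush_rule_name("storage"))  # → storage_tier_ruleset
```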


@@ -1,36 +0,0 @@
#
# Copyright (c) 2020 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_platform.common import constants as app_constants
from sysinv.common import exception
from sysinv.helm import common
from sysinv.helm import base
class HelmToolkitHelm(base.BaseHelm):
"""Class to encapsulate helm operations for the helm toolkit"""
CHART = app_constants.HELM_CHART_HELM_TOOLKIT
SUPPORTED_NAMESPACES = [
common.HELM_NS_HELM_TOOLKIT,
]
def get_namespaces(self):
return self.SUPPORTED_NAMESPACES
def get_overrides(self, namespace=None):
overrides = {
common.HELM_NS_HELM_TOOLKIT: {}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides


@@ -6,6 +6,8 @@
from k8sapp_platform.common import constants as app_constants
import subprocess
from sysinv.common import constants
from sysinv.common import exception
from sysinv.common.storage_backend_conf import K8RbdProvisioner
@@ -56,50 +58,67 @@ class RbdProvisionerHelm(base.FluxCDBaseHelm):
def _skip_ceph_mon_2(name):
return name != constants.CEPH_MON_2
classdefaults = {
def _get_ceph_fsid():
process = subprocess.Popen(['timeout', '30', 'ceph', 'fsid'],
stdout=subprocess.PIPE)
stdout, stderr = process.communicate()
return stdout.strip()
bk = ceph_bks[0]
# Get tier info.
tiers = self.dbapi.storage_tier_get_list()
# Get the ruleset for the new kube-rbd pool.
tier = next((t for t in tiers if t.forbackendid == bk.id), None)
if not tier:
raise Exception("No tier present for backend %s" % bk.name)
rule_name = "{0}{1}{2}".format(
tier.name,
constants.CEPH_CRUSH_TIER_SUFFIX,
"-ruleset").replace('-', '_')
cluster_id = _get_ceph_fsid()
user_secret_name = K8RbdProvisioner.get_user_secret_name(bk)
class_defaults = {
"monitors": self._get_formatted_ceph_monitor_ips(
name_filter=_skip_ceph_mon_2),
"adminId": constants.K8S_RBD_PROV_USER_NAME,
"adminSecretName": constants.K8S_RBD_PROV_ADMIN_SECRET_NAME
}