Initial commit for app-rook-ceph

The app is based on the old StarlingX Rook Ceph application.

This provides support for the latest versions of Rook Ceph
storage and packs it as a StarlingX Application.

Auto-incrementing of Helm chart versions is already present in this
initial commit.

Dual-stack networking is supported.

Partial IPv6 support was added: there is a known bug with the DX IPv6
configuration involving the floating monitor.

Remove/delete is successful through FluxCD; however, some residual
Kubernetes assets remain on the system after the remove.

Rook Ceph version: 1.13.7

Test Plan:
    PASS: build all app-rook-ceph packages successfully.
    PASS: app-rook-ceph upload/apply/remove/delete on
          SX/DX/DX+/Standard platforms.
    PASS: create a volume using PVC through cephfs and rbd
          storageClasses and test read/write on the corresponding
          pools at SX/DX/DX+/Standard platforms.

Story: 2011066
Task: 49846

Change-Id: I7aa6b08a30676095c86a974eaca79084b2f06859
Signed-off-by: Caio Correa <caio.correa@windriver.com>
This commit is contained in:
Caio Correa 2024-04-10 09:54:47 -03:00
parent c6c693d51c
commit 326f833d3e
102 changed files with 4954 additions and 0 deletions


@@ -1,8 +1,116 @@
---
- project:
vars:
ensure_tox_version: '<4'
check:
jobs:
- openstack-tox-linters
- k8sapp-app-rook-ceph-tox-py39
- k8sapp-app-rook-ceph-tox-flake8
- k8sapp-app-rook-ceph-tox-pylint
- k8sapp-app-rook-ceph-tox-metadata
- k8sapp-app-rook-ceph-tox-bandit
gate:
jobs:
- openstack-tox-linters
- k8sapp-app-rook-ceph-tox-py39
- k8sapp-app-rook-ceph-tox-flake8
- k8sapp-app-rook-ceph-tox-pylint
- k8sapp-app-rook-ceph-tox-metadata
- k8sapp-app-rook-ceph-tox-bandit
- job:
name: k8sapp-app-rook-ceph-tox-py39
parent: openstack-tox-py39
description: |
Run py39 test for k8sapp_rook_ceph
nodeset: debian-bullseye
required-projects:
- starlingx/config
- starlingx/fault
- starlingx/update
- starlingx/utilities
- starlingx/root
files:
- python3-k8sapp-rook-ceph/*
vars:
python_version: 3.9
tox_envlist: py39
tox_extra_args: -c python3-k8sapp-rook-ceph/k8sapp_rook_ceph/tox.ini
tox_constraints_file: '{{ ansible_user_dir }}/src/opendev.org/starlingx/root/build-tools/requirements/debian/upper-constraints.txt'
- job:
name: k8sapp-app-rook-ceph-tox-flake8
parent: tox
description: |
Run flake8 test for k8sapp_rook_ceph
nodeset: debian-bullseye
required-projects:
- starlingx/config
- starlingx/fault
- starlingx/update
- starlingx/utilities
- starlingx/root
files:
- python3-k8sapp-rook-ceph/*
vars:
tox_envlist: flake8
tox_extra_args: -c python3-k8sapp-rook-ceph/k8sapp_rook_ceph/tox.ini
tox_constraints_file: '{{ ansible_user_dir }}/src/opendev.org/starlingx/root/build-tools/requirements/debian/upper-constraints.txt'
- job:
name: k8sapp-app-rook-ceph-tox-pylint
parent: tox
description: |
Run pylint test for k8sapp_rook_ceph
nodeset: debian-bullseye
required-projects:
- starlingx/config
- starlingx/fault
- starlingx/update
- starlingx/utilities
- starlingx/root
files:
- python3-k8sapp-rook-ceph/*
vars:
tox_envlist: pylint
tox_extra_args: -c python3-k8sapp-rook-ceph/k8sapp_rook_ceph/tox.ini
tox_constraints_file: '{{ ansible_user_dir }}/src/opendev.org/starlingx/root/build-tools/requirements/debian/upper-constraints.txt'
- job:
name: k8sapp-app-rook-ceph-tox-metadata
parent: tox
description: |
Run metadata test for k8sapp_rook_ceph
nodeset: debian-bullseye
required-projects:
- starlingx/config
- starlingx/fault
- starlingx/update
- starlingx/utilities
- starlingx/root
files:
- python3-k8sapp-rook-ceph/*
vars:
tox_envlist: metadata
tox_extra_args: -c python3-k8sapp-rook-ceph/k8sapp_rook_ceph/tox.ini
tox_constraints_file: '{{ ansible_user_dir }}/src/opendev.org/starlingx/root/build-tools/requirements/debian/upper-constraints.txt'
- job:
name: k8sapp-app-rook-ceph-tox-bandit
parent: tox
description: |
Run bandit test for k8sapp_rook_ceph
nodeset: debian-bullseye
required-projects:
- starlingx/config
- starlingx/fault
- starlingx/update
- starlingx/utilities
- starlingx/root
files:
- python3-k8sapp-rook-ceph/*
vars:
tox_envlist: bandit
tox_extra_args: -c python3-k8sapp-rook-ceph/k8sapp_rook_ceph/tox.ini
tox_constraints_file: '{{ ansible_user_dir }}/src/opendev.org/starlingx/root/build-tools/requirements/debian/upper-constraints.txt'


@@ -1,6 +1,22 @@
# app-rook-ceph
App-rook-ceph FluxCD app
#### Top Level Directory Structure
```bash
├── app-rook-ceph # Root Folder
│ ├── bindep.txt
│ ├── debian_build_layer.cfg
│ ├── debian_iso_image.inc
│ ├── debian_pkg_dirs
│ ├── python3-k8sapp-rook-ceph # lifecycle management code to support FluxCD apps
│ ├── README.md
│ ├── rook-ceph-helm # importing of upstream rook-ceph helm packages
│ ├── requirements.txt
│ ├── stx-rook-ceph-helm # helm Package manager for the app
│ ├── test-requirements.txt
│ └── tox.ini
```
### About app-rook-ceph
Rook is a Ceph orchestrator providing a containerized solution for Ceph Storage. This application tracks the latest compatible upstream version of Rook and packs it targeting StarlingX platforms on fresh installations. For systems that already have a Ceph backend installed, there's a [migration app](https://opendev.org/starlingx/rook-ceph) available.

bindep.txt

@@ -0,0 +1,10 @@
# This is a cross-platform list tracking distribution packages needed for install and tests;
# see https://docs.openstack.org/infra/bindep/ for additional information.
libffi-dev [platform:dpkg]
libldap2-dev [platform:dpkg]
libxml2-dev [platform:dpkg]
libxslt1-dev [platform:dpkg]
libsasl2-dev [platform:dpkg]
libffi-devel [platform:rpm]
python3-all-dev [platform:dpkg]

debian_build_layer.cfg

@@ -0,0 +1 @@
flock

debian_iso_image.inc

@@ -0,0 +1 @@
stx-rook-ceph-helm

debian_pkg_dirs

@@ -0,0 +1,4 @@
helm-charts/upstream/rook-ceph-helm
helm-charts/custom/rook-ceph-provisioner-helm
python3-k8sapp-rook-ceph
stx-rook-ceph-helm


@@ -0,0 +1,5 @@
rook-ceph-provisioner-helm (2.0-0) unstable; urgency=medium
* Initial release.
-- Caio Correa <caio.correa@windriver.com> Tue, 11 Apr 2024 10:45:00 +0000


@@ -0,0 +1,15 @@
Source: rook-ceph-provisioner-helm
Section: libs
Priority: optional
Maintainer: StarlingX Developers <starlingx-discuss@lists.starlingx.io>
Build-Depends: debhelper-compat (= 13),
helm,
Standards-Version: 4.5.1
Homepage: https://www.starlingx.io
Package: rook-ceph-provisioner-helm
Section: libs
Architecture: any
Depends: ${misc:Depends}
Description: StarlingX Platform Rook Ceph provisioner helm chart
This package contains integrations and audits for Rook Ceph StarlingX app.


@@ -0,0 +1,41 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: rook-ceph-provisioner-helm
Source: https://opendev.org/starlingx/platform-armada-app/
Files: *
Copyright: (c) 2024 Wind River Systems, Inc
License: Apache-2
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
.
https://www.apache.org/licenses/LICENSE-2.0
.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
.
On Debian-based systems the full text of the Apache version 2.0 license
can be found in `/usr/share/common-licenses/Apache-2.0'.
Files: debian/*
Copyright: 2024 Wind River Systems, Inc
License: Apache-2
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
.
https://www.apache.org/licenses/LICENSE-2.0
.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
.
On Debian-based systems the full text of the Apache version 2.0 license
can be found in `/usr/share/common-licenses/Apache-2.0'.


@@ -0,0 +1,28 @@
#!/usr/bin/make -f
# export DH_VERBOSE = 1
export ROOT = debian/tmp
export APP_FOLDER = $(ROOT)/usr/lib/helm
export DEB_VERSION = $(shell dpkg-parsechangelog | egrep '^Version:' | cut -f 2 -d ' ')
export PATCH_VERSION = $(shell echo $(DEB_VERSION) | cut -f 4 -d '.')
export CHART_BASE_VERSION = $(shell echo $(DEB_VERSION) | sed 's/-/./' | cut -d '.' -f 1-3)
export CHART_VERSION = $(CHART_BASE_VERSION)
%:
dh $@
override_dh_auto_build:
# Stage the chart for building
mkdir -p build
mv Makefile rook-ceph-provisioner build
# Build the chart
cd build && make CHART_VERSION=$(CHART_VERSION) rook-ceph-provisioner
override_dh_auto_install:
install -d -m 755 $(APP_FOLDER)
install -p -D -m 755 build/rook-ceph-provisioner*.tgz $(APP_FOLDER)
override_dh_auto_test:
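The version manipulation in the rules above can be sketched in Python; `chart_version_from_deb` is a hypothetical helper mirroring the `sed`/`cut` pipeline, not part of the package:

```python
def chart_version_from_deb(deb_version):
    """Mimic debian/rules: sed 's/-/./' followed by cut -d '.' -f 1-3.

    A Debian version such as "2.0-0" becomes chart version "2.0.0".
    """
    # sed 's/-/./' replaces only the first '-' on the line.
    dotted = deb_version.replace("-", ".", 1)
    # cut -d '.' -f 1-3 keeps the first three dot-separated fields.
    return ".".join(dotted.split(".")[:3])
```

With the `debver: 2.0-0` used by this package, this yields chart version `2.0.0`, matching the chart's own `version: 2.0.0`.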


@@ -0,0 +1 @@
3.0 (quilt)


@@ -0,0 +1,9 @@
---
debname: rook-ceph-provisioner-helm
debver: 2.0-0
src_path: rook-ceph-provisioner-helm
revision:
dist: $STX_DIST
GITREVCOUNT:
SRC_DIR: ${MY_REPO}/stx/app-rook-ceph/helm-charts/custom/rook-ceph-provisioner-helm/rook-ceph-provisioner-helm/rook-ceph-provisioner
BASE_SRCREV: c6c693d51cdc6daa4eafe34ccab5ce35496bf516


@@ -0,0 +1,41 @@
#
# Copyright 2017 The Openstack-Helm Authors.
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# It's necessary to set this because some environments don't link sh -> bash.
SHELL := /bin/bash
TASK := build
EXCLUDES := helm-toolkit doc tests tools logs tmp
CHARTS := helm-toolkit $(filter-out $(EXCLUDES), $(patsubst %/.,%,$(wildcard */.)))
.PHONY: $(EXCLUDES) $(CHARTS)
all: $(CHARTS)
$(CHARTS):
@if [ -d $@ ]; then \
echo; \
echo "===== Processing [$@] chart ====="; \
make $(TASK)-$@; \
fi
init-%:
if [ -f $*/Makefile ]; then make -C $*; fi
lint-%: init-%
if [ -d $* ]; then helm lint $*; fi
build-%: lint-%
if [ -d $* ]; then helm package --version $(CHART_VERSION) $*; fi
clean:
@echo "Clean all build artifacts"
rm -f */templates/_partials.tpl */templates/_globals.tpl
rm -rf */charts */tmpcharts
%:
@:


@@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -0,0 +1,10 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
apiVersion: v1
appVersion: "1.1"
description: A Helm chart for Kubernetes
name: rook-ceph-provisioner
version: 2.0.0


@@ -0,0 +1,201 @@
{{- define "script.osd_audit" -}}
#!/usr/bin/env python
import os
import subprocess
from kubernetes import __version__ as K8S_MODULE_VERSION
from kubernetes import config
from kubernetes import client
from kubernetes.client import Configuration
from kubernetes.client.rest import ApiException
from six.moves import http_client as httplib
from cephclient import wrapper
K8S_MODULE_MAJOR_VERSION = int(K8S_MODULE_VERSION.split('.')[0])
# Kubernetes Files
KUBERNETES_ADMIN_CONF = '/etc/kubernetes/admin.conf'
CEPH_MGR_PORT = 7999
def is_k8s_configured():
"""Check to see if the k8s admin config file exists."""
if os.path.isfile(KUBERNETES_ADMIN_CONF):
return True
return False
class KubeOperator(object):
def __init__(self):
self._kube_client_batch = None
self._kube_client_core = None
self._kube_client_custom_objects = None
def _load_kube_config(self):
if not is_k8s_configured():
raise RuntimeError("Kubernetes is not configured: %s not found" % KUBERNETES_ADMIN_CONF)
config.load_kube_config(KUBERNETES_ADMIN_CONF)
if K8S_MODULE_MAJOR_VERSION < 12:
c = Configuration()
else:
c = Configuration().get_default_copy()
# Workaround: Turn off SSL/TLS verification
c.verify_ssl = False
Configuration.set_default(c)
def _get_kubernetesclient_core(self):
if not self._kube_client_core:
self._load_kube_config()
self._kube_client_core = client.CoreV1Api()
return self._kube_client_core
def _get_kubernetesclient_custom_objects(self):
if not self._kube_client_custom_objects:
self._load_kube_config()
self._kube_client_custom_objects = client.CustomObjectsApi()
return self._kube_client_custom_objects
def kube_get_nodes(self):
try:
api_response = self._get_kubernetesclient_core().list_node()
return api_response.items
except ApiException as e:
print("Kubernetes exception in kube_get_nodes: %s" % e)
raise
def kube_get_pods_by_selector(self, namespace, label_selector,
field_selector):
c = self._get_kubernetesclient_core()
try:
api_response = c.list_namespaced_pod(namespace,
label_selector="%s" % label_selector,
field_selector="%s" % field_selector)
return api_response.items
except ApiException as e:
print("Kubernetes exception in "
"kube_get_pods_by_selector %s/%s/%s: %s",
namespace, label_selector, field_selector, e)
return None
def kube_delete_pod(self, name, namespace, **kwargs):
body = {}
if kwargs:
body.update(kwargs)
c = self._get_kubernetesclient_core()
try:
api_response = c.delete_namespaced_pod(name, namespace, body)
return True
except ApiException as e:
if e.status == httplib.NOT_FOUND:
print("Pod %s/%s not found." % (namespace, name))
return False
else:
print("Failed to delete Pod %s/%s: " "%s" % (namespace, name, e.body))
raise
def get_custom_resource(self, group, version, namespace, plural, name):
c = self._get_kubernetesclient_custom_objects()
try:
api_response = c.list_namespaced_custom_object(group, version, namespace,
plural)
return api_response
except ApiException as ex:
if ex.reason == "Not Found":
print("Failed to get custom object, Namespace %s: %s" % (namespace, str(ex.body).replace('\n', ' ')))
pass
return None
def osd_audit():
kube = KubeOperator()
group = "ceph.rook.io"
version = "v1"
namespace = "rook-ceph"
plural = "cephclusters"
name = "cephclusters.ceph.rook.io.ceph-cluster"
try:
ceph_api = wrapper.CephWrapper(endpoint='http://localhost:{}'.format(CEPH_MGR_PORT))
response, body = ceph_api.health(body='text', timeout=30)
if body == "HEALTH_OK":
print("Cluster reports HEALTH_OK")
return
print(body)
except IOError as e:
print("Accessing Ceph API failed. Cluster health unknown. Proceeding.")
pass
cluster = {}
try:
cephcluster = kube.get_custom_resource(group, version, namespace, plural, name)
if 'items' in cephcluster:
cluster = cephcluster['items'][0]
except ApiException as ex:
if ex.reason == "Not Found":
print("Failed to get custom object, Namespace %s: %s" % (namespace, str(ex.body).replace('\n', ' ')))
pass
health = ""
if cluster and "status" in cluster and "ceph" in cluster["status"] and "health" in cluster["status"]["ceph"]:
health = cluster['status']['ceph']['health']
else:
print("Failed to get cluster['status']['ceph']['health']")
return
if health != "HEALTH_OK":
delete_operator = False
osd_nodes = cluster['spec']['storage']['nodes']
nodes = {}
node_list = kube.kube_get_nodes()
for item in node_list:
nodes[item.metadata.name] = item.spec.taints
for n in osd_nodes:
# get osd info declare in ceph cluster
node_name = n['name']
osd_devices = n['devices']
# check whether there is osd pod running described in cephcluster osd_nodes
label = "app=rook-ceph-osd,failure-domain=%s" % node_name
pods = kube.kube_get_pods_by_selector(namespace, label, "")
osd_pods = []
for pod in pods:
if pod.status.phase == 'Running':
osd_pods.append(pod)
if len(osd_devices) != len(osd_pods):
# assume when osd pod number is not equal with this node osd device
# operator should reset
delete_operator = True
# if osd pod is not running, as this node is tainted
# unnecessary to delete operator pod
taints = nodes[node_name]
if taints:
for taint in taints:
if taint.key.startswith("node.kubernetes.io"):
# pod not running for taint
delete_operator = False
if delete_operator == True:
break
if delete_operator == True:
operator_pod = kube.kube_get_pods_by_selector(namespace, "app=rook-ceph-operator", "")
if operator_pod and operator_pod[0] and operator_pod[0].status.phase == 'Running':
print("delete operator pod")
kube.kube_delete_pod(operator_pod[0].metadata.name, namespace, grace_period_seconds=0)
if __name__ == '__main__':
osd_audit()
{{- end -}}
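The audit's core decision can be reduced to: restart the operator when any node runs fewer OSD pods than it has devices configured, unless a `node.kubernetes.io/*` taint explains the gap. A simplified sketch with hypothetical data shapes, not the shipped script:

```python
def should_restart_operator(osd_nodes, running_pods_by_node, taints_by_node):
    """Decide whether the rook-ceph-operator pod should be restarted.

    osd_nodes: list of {"name": str, "devices": [...]} as in the CephCluster spec.
    running_pods_by_node: node name -> number of Running OSD pods.
    taints_by_node: node name -> list of taint keys (may be empty).
    """
    for node in osd_nodes:
        name = node["name"]
        expected = len(node["devices"])
        running = running_pods_by_node.get(name, 0)
        if running == expected:
            continue
        # A node.kubernetes.io/* taint explains the missing pods, so
        # restarting the operator would not help on this node.
        if any(k.startswith("node.kubernetes.io")
               for k in taints_by_node.get(name, [])):
            continue
        return True
    return False
```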


@@ -0,0 +1,52 @@
{{/*
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
*/}}
{{- if .Values.global.rbac }}
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Values.rbac.clusterRole }}
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch", "patch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: [""]
resources: ["services"]
resourceNames: ["kube-dns"]
verbs: ["list", "get"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "create", "list", "update", "delete"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "list", "update", "delete", "patch"]
- apiGroups: ["extensions", "apps"]
resources: ["deployments"]
verbs: ["get", "list", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "update", "delete"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "create", "list", "update"]
{{- end}}


@@ -0,0 +1,22 @@
{{/*
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
*/}}
{{- if .Values.global.rbac }}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Values.rbac.clusterRoleBinding }}
subjects:
- kind: ServiceAccount
name: {{ .Values.rbac.serviceAccount }}
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: {{ .Values.rbac.clusterRole }}
apiGroup: rbac.authorization.k8s.io
{{- end}}


@@ -0,0 +1,22 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Values.global.configmap_key_init | quote }}
namespace: {{ .Release.Namespace }}
data:
provision.sh: |-
#!/bin/bash
if [ "${MON_HOST}"x == ""x ]; then
MON_HOST=$(echo ${ROOK_MONS} | sed 's/[a-z]\+=//g')
fi
cat > /etc/ceph/ceph.conf << EOF
[global]
mon_host = $MON_HOST
EOF
admin_keyring=$(echo $ADMIN_KEYRING | cut -f4 -d' ')
cat > /etc/ceph/ceph.client.admin.keyring << EOF
[client.admin]
key = $admin_keyring
EOF
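The two shell substitutions above are compact; as a rough Python equivalent (hypothetical helpers, for illustration): `ROOK_MONS` arrives as e.g. `a=10.0.0.1:6789,b=10.0.0.2:6789` and the `name=` prefixes are stripped, while the admin key is the fourth whitespace-separated field of the keyring text.

```python
import re

def mon_host_from_rook_mons(rook_mons):
    # sed 's/[a-z]\+=//g' strips every "name=" prefix from the endpoint list.
    return re.sub(r"[a-z]+=", "", rook_mons)

def key_from_admin_keyring(keyring):
    # echo normalizes whitespace, then cut -f4 -d' ' takes the fourth field
    # of a line like "[client.admin] key = AQB...".
    return keyring.split()[3]
```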


@@ -0,0 +1,98 @@
{{- if .Values.global.deployment_stx_ceph_manager }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: stx-ceph-manager
namespace: {{ .Release.Namespace }}
labels:
app: stx-ceph-manager
spec:
replicas: 1
selector:
matchLabels:
app: stx-ceph-manager
template:
metadata:
labels:
app: stx-ceph-manager
spec:
tolerations:
- effect: NoSchedule
operator: Exists
key: node-role.kubernetes.io/master
- effect: NoSchedule
operator: Exists
key: node-role.kubernetes.io/control-plane
dnsPolicy: ClusterFirstWithHostNet
serviceAccountName: {{ .Values.rbac.serviceAccount }}
volumes:
- name: config-key-provision
configMap:
name: {{ .Values.global.configmap_key_init }}
- name: ceph-config
emptyDir: {}
- name: sysinv-conf
hostPath:
path: /etc/sysinv/sysinv.conf
initContainers:
- name: init
image: {{ .Values.images.tags.k8s_entrypoint | quote }}
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INTERFACE_NAME
value: eth0
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/
- name: DEPENDENCY_SERVICE
value: ""
- name: DEPENDENCY_JOBS
value: "ceph-mgr-provision"
- name: DEPENDENCY_DAEMONSET
value: ""
- name: DEPENDENCY_CONTAINER
value: ""
- name: DEPENDENCY_POD_JSON
value: ""
- name: DEPENDENCY_CUSTOM_RESOURCE
value: ""
command:
- kubernetes-entrypoint
- name: keyring
image: {{ .Values.images.tags.ceph_config_helper | quote }}
command: [ "/bin/bash", "/tmp/mount/provision.sh" ]
env:
- name: ADMIN_KEYRING
valueFrom:
secretKeyRef:
name: rook-ceph-admin-keyring
key: keyring
- name: ROOK_MONS
valueFrom:
configMapKeyRef:
name: rook-ceph-mon-endpoints
key: data
volumeMounts:
- mountPath: /etc/ceph
name: ceph-config
- name: config-key-provision
mountPath: /tmp/mount
containers:
- name: check
image: {{ .Values.images.tags.stx_ceph_manager | quote }}
args: ["python", "/usr/bin/ceph-manager", "--config-file=/etc/sysinv/sysinv.conf"]
volumeMounts:
- name: sysinv-conf
mountPath: /etc/sysinv/sysinv.conf
readOnly: true
- name: ceph-config
mountPath: /etc/ceph/
{{- end }}


@@ -0,0 +1,119 @@
{{/*
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
*/}}
{{- if .Values.global.job_ceph_mon_audit }}
apiVersion: v1
kind: ConfigMap
metadata:
name: ceph-mon-audit-bin
namespace: {{ .Release.Namespace }}
data:
audit.sh: |-
#!/bin/bash
source /etc/build.info
node=$(hostname)
stat /opt/platform/.keyring/${SW_VERSION}/.CREDENTIAL > /dev/null 2>&1
if [ $? -ne 0 ]; then
if [ x"$node" = x"controller-0" ]; then
active="controller-1"
else
active="controller-0"
fi
else
active=$node
fi
controller_node=$(kubectl get pods -n rook-ceph --selector=app="rook-ceph-mon,ceph_daemon_id=a" -o wide | awk '/Running.*controller/ {print $7}')
if [ x"$active" = x"$controller_node" ]; then
echo "mon-a pod is running on active controller"
exit 0
fi
# update configmap
cat > endpoint.yaml << EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: rook-ceph-mon-endpoints
namespace: $NAMESPACE
data:
data: a=$FLOAT_IP:6789
mapping: '{"node":{"a":{"Name":"$active","Hostname":"$active","Address":"$FLOAT_IP"}}}'
maxMonId: "0"
EOF
kubectl apply -f endpoint.yaml --overwrite=true
rm -f endpoint.yaml
# delete mon-a deployment and pod
kubectl delete deployments.apps -n rook-ceph rook-ceph-mon-a
kubectl delete pods -n rook-ceph --selector="app=rook-ceph-mon,ceph_daemon_id=a"
kubectl delete po -n rook-ceph --selector="app=rook-ceph-operator"
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: stx-ceph-mon-audit
spec:
schedule: {{ .Values.ceph_audit_jobs.audit.cron | quote }}
startingDeadlineSeconds: {{ .Values.ceph_audit_jobs.audit.deadline }}
successfulJobsHistoryLimit: {{ .Values.ceph_audit_jobs.audit.history.success }}
failedJobsHistoryLimit: {{ .Values.ceph_audit_jobs.audit.history.failed }}
concurrencyPolicy: Forbid
jobTemplate:
metadata:
name: stx-ceph-mon-audit
namespace: {{ .Release.Namespace }}
labels:
app: ceph-mon-audit
spec:
template:
metadata:
labels:
app: ceph-mon-audit
spec:
serviceAccountName: {{ .Values.rbac.serviceAccount }}
restartPolicy: OnFailure
hostNetwork: true
{{- if .Values.global.nodeSelector }}
nodeSelector:
{{ .Values.global.nodeSelector | toYaml | trim | indent 10 }}
{{- end }}
volumes:
- name: ceph-mon-audit-bin
configMap:
name: ceph-mon-audit-bin
defaultMode: 0555
- name: platform
hostPath:
path: /opt/platform
- name: buildinfo
hostPath:
path: /etc/build.info
containers:
- name: ceph-mon-audit
image: {{ .Values.images.tags.ceph_config_helper | quote }}
command: [ "/bin/bash", "/tmp/mount/audit.sh" ]
env:
- name: NAMESPACE
value: {{ .Release.Namespace }}
- name: FLOAT_IP
value: {{ .Values.ceph_audit_jobs.floatIP | quote }}
volumeMounts:
- name: platform
mountPath: /opt/platform
readOnly: true
- name: ceph-mon-audit-bin
mountPath: /tmp/mount
- name: buildinfo
mountPath: /etc/build.info
readOnly: true
{{- end }}
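The active-controller selection at the top of the audit script boils down to: if the floating keyring credential is not visible from this node, the peer controller must be active. A minimal sketch (hypothetical helper):

```python
def pick_active_controller(hostname, credential_visible):
    # The floating credential under /opt/platform/.keyring is only
    # readable on the active controller.
    if credential_visible:
        return hostname
    return "controller-1" if hostname == "controller-0" else "controller-0"
```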


@@ -0,0 +1,104 @@
{{/*
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
*/}}
{{- if .Values.global.job_ceph_osd_audit }}
apiVersion: v1
kind: ConfigMap
metadata:
name: ceph-osd-audit-bin
namespace: {{ .Release.Namespace }}
data:
osd_audit.py: |-
{{- include "script.osd_audit" . | indent 4 }}
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: stx-ceph-osd-audit
spec:
schedule: {{ .Values.ceph_audit_jobs.audit.cron | quote }}
startingDeadlineSeconds: {{ .Values.ceph_audit_jobs.audit.deadline }}
successfulJobsHistoryLimit: {{ .Values.ceph_audit_jobs.audit.history.success }}
failedJobsHistoryLimit: {{ .Values.ceph_audit_jobs.audit.history.failed }}
concurrencyPolicy: Forbid
jobTemplate:
metadata:
name: stx-ceph-osd-audit
namespace: {{ .Release.Namespace }}
labels:
app: ceph-osd-audit
spec:
template:
metadata:
labels:
app: ceph-osd-audit
spec:
tolerations:
- effect: NoSchedule
operator: Exists
key: node-role.kubernetes.io/master
- effect: NoSchedule
operator: Exists
key: node-role.kubernetes.io/control-plane
serviceAccountName: {{ .Values.rbac.serviceAccount }}
restartPolicy: OnFailure
hostNetwork: true
{{- if .Values.global.nodeSelector }}
nodeSelector:
{{ .Values.global.nodeSelector | toYaml | trim | indent 12 }}
{{- end }}
volumes:
- name: ceph-osd-audit-bin
configMap:
name: ceph-osd-audit-bin
defaultMode: 0555
- name: kube-config
hostPath:
path: /etc/kubernetes/admin.conf
- name: config-key-provision
configMap:
name: {{ .Values.global.configmap_key_init }}
- name: ceph-config
emptyDir: {}
initContainers:
- name: init
image: {{ .Values.images.tags.ceph_config_helper | quote }}
command: [ "/bin/bash", "/tmp/mount/provision.sh" ]
env:
- name: ADMIN_KEYRING
valueFrom:
secretKeyRef:
name: rook-ceph-admin-keyring
key: keyring
- name: ROOK_MONS
valueFrom:
configMapKeyRef:
name: rook-ceph-mon-endpoints
key: data
volumeMounts:
- name: ceph-config
mountPath: /etc/ceph
- name: config-key-provision
mountPath: /tmp/mount
containers:
- name: ceph-osd-audit
image: {{ .Values.images.tags.stx_ceph_manager | quote }}
command: [ "python", "/tmp/mount/osd_audit.py" ]
env:
- name: NAMESPACE
value: {{ .Release.Namespace }}
volumeMounts:
- name: ceph-osd-audit-bin
mountPath: /tmp/mount
- name: ceph-config
mountPath: /etc/ceph
readOnly: true
- name: kube-config
mountPath: /etc/kubernetes/admin.conf
readOnly: true
{{- end }}


@@ -0,0 +1,176 @@
{{/*
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
*/}}
{{- if .Values.global.job_ceph_mgr_provision }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: ceph-mgr-provision-bin
namespace: {{ .Release.Namespace }}
data:
provision.sh: |-
#!/bin/bash
# Check if ceph is accessible
echo "===================================="
ceph -s
if [ $? -ne 0 ]; then
echo "Error: Ceph cluster is not accessible, check Pod logs for details."
exit 1
fi
# Exec_retry - wait for the cluster to create osd pools
retries=50 # 8 minutes
retry_count=1
cmd="ceph osd pool ls | wc -l"
while [ $retry_count -le $retries ]; do
ret_stdout=$(eval $cmd)
echo "ret_stdout = " $ret_stdout
[ $ret_stdout -gt 1 ] && break
echo "Retry #" $retry_count
sleep 10
let retry_count++
done
if [ $retry_count -gt $retries ]; then
echo "Error: Ceph cluster pools not correctly initialized."
exit 1
fi
cat > /tmp/controller << EOF
[req]
req_extensions = v3_ca
distinguished_name = req_distinguished_name
[v3_ca]
subjectAltName= @alt_names
basicConstraints = CA:true
[req_distinguished_name]
0.organizationName = IT
commonName = ceph-restful
[alt_names]
DNS.1 = controller-0
DNS.2 = controller-1
EOF
openssl req -new -nodes -x509 -subj /O=IT/CN=controller -days 3650 -config /tmp/controller -out /tmp/controller.crt -keyout /tmp/controller.key -extensions v3_ca
# Exec_retry - wait for the restful certificate and key files to be generated
retries=25 # 4 minutes
retry_count=1
cmd="ls -1 /tmp/controller.key | wc -l"
while [ $retry_count -le $retries ]; do
ret_stdout=$(eval $cmd)
echo "ret_stdout = " $ret_stdout
[ $ret_stdout -eq 1 ] && break
echo "Retry #" $retry_count
sleep 1
let retry_count++
done
if [ $retry_count -gt $retries ]; then
echo "Error: File /tmp/controller.key was not created."
exit 1
fi
for i in "a" "controller-0" "controller-1"
do
ceph config-key set mgr/restful/$i/crt -i /tmp/controller.crt
ceph config-key set mgr/restful/$i/key -i /tmp/controller.key
done
ceph config set mgr mgr/restful/server_port 7999
ceph mgr module disable restful
echo "Disable restful"
ceph mgr module enable restful
echo "Enable restful"
ceph restful create-key admin
echo "Ceph Mgr Provision Complete"
---
apiVersion: batch/v1
kind: Job
metadata:
name: ceph-mgr-provision
namespace: {{ .Release.Namespace }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{.Chart.Name}}"
spec:
backoffLimit: 5 # Limit the number of job restarts in case of failure: ~5 minutes.
template:
metadata:
name: ceph-mgr-provision
namespace: {{ .Release.Namespace }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{.Chart.Name}}"
spec:
tolerations:
- effect: NoSchedule
operator: Exists
key: node-role.kubernetes.io/master
- effect: NoSchedule
operator: Exists
key: node-role.kubernetes.io/control-plane
restartPolicy: OnFailure
volumes:
- name: ceph-mgr-provision-bin
configMap:
name: ceph-mgr-provision-bin
- name: config-key-provision
configMap:
name: {{ .Values.global.configmap_key_init }}
- name: ceph-config
emptyDir: {}
initContainers:
- name: init
image: {{ .Values.images.tags.ceph_config_helper | quote }}
command: [ "/bin/bash", "/tmp/mount/provision.sh" ]
env:
- name: ADMIN_KEYRING
valueFrom:
secretKeyRef:
name: rook-ceph-admin-keyring
key: keyring
- name: ROOK_MONS
valueFrom:
configMapKeyRef:
name: rook-ceph-mon-endpoints
key: data
volumeMounts:
- mountPath: /etc/ceph
name: ceph-config
- name: config-key-provision
mountPath: /tmp/mount
containers:
- name: provision
image: {{ .Values.images.tags.ceph_config_helper | quote }}
command: [ "/bin/bash", "/tmp/mount/provision.sh" ]
env:
- name: NAMESPACE
value: {{ .Release.Namespace }}
volumeMounts:
- mountPath: /etc/ceph
name: ceph-config
- name: ceph-mgr-provision-bin
mountPath: /tmp/mount/
{{- if .Values.global.nodeSelector }}
nodeSelector:
{{ .Values.global.nodeSelector | toYaml | trim | indent 8 }}
{{- end }}
{{- end }}
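Both retry loops in the provision script follow the same poll-until-ready pattern; a generic Python sketch (assumed names, for illustration only):

```python
import time

def wait_for(check, retries=50, interval=10, sleep=time.sleep):
    """Poll check() up to `retries` times, `interval` seconds apart,
    mirroring the shell loop (50 tries x 10 s is roughly 8 minutes)."""
    for _ in range(retries):
        if check():
            return True
        sleep(interval)
    return False
```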


@@ -0,0 +1,67 @@
{{/*
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
*/}}
{{- if .Values.global.job_host_provision }}
{{ $root := . }}
{{- range $controller_host := $root.Values.host_provision.controller_hosts }}
---
apiVersion: batch/v1
kind: Job
metadata:
name: "rook-ceph-host-provision-{{ $controller_host }}"
namespace: {{ $root.Release.Namespace }}
labels:
heritage: {{ $root.Release.Service | quote }}
release: {{ $root.Release.Name | quote }}
chart: "{{$root.Chart.Name}}"
annotations:
"helm.sh/hook": "post-install"
"helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
spec:
template:
metadata:
name: "rook-ceph-host-provision-{{ $controller_host }}"
namespace: {{ $root.Release.Namespace }}
labels:
heritage: {{ $root.Release.Service | quote }}
release: {{ $root.Release.Name | quote }}
chart: "{{$root.Chart.Name}}"
spec:
serviceAccountName: {{ $root.Values.rbac.serviceAccount }}
restartPolicy: OnFailure
volumes:
- name: rook-conf
hostPath:
path: /etc/ceph/
- name: config-key-provision
configMap:
name: {{ $root.Values.global.configmap_key_init }}
containers:
- name: host-provision
image: {{ $root.Values.images.tags.ceph_config_helper | quote }}
command: [ "/bin/bash", "/tmp/mount/provision.sh" ]
env:
- name: ADMIN_KEYRING
valueFrom:
secretKeyRef:
name: rook-ceph-admin-keyring
key: keyring
- name: ROOK_MONS
valueFrom:
configMapKeyRef:
name: rook-ceph-mon-endpoints
key: data
volumeMounts:
- name: rook-conf
mountPath: /etc/ceph/
- name: config-key-provision
mountPath: /tmp/mount
nodeName: {{ $controller_host }}
{{- end }}
{{- end }}

@ -0,0 +1,199 @@
{{/*
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
*/}}
{{- if .Values.global.provision_storage }}
{{ $root := . }}
{{ $defaults := .Values.provisionStorage.classdefaults}}
{{ $mount := "/tmp/mount" }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: config-rook-ceph-provisioner
namespace: {{ $root.Release.Namespace }}
data:
provision.sh: |-
#!/bin/bash
# Check if ceph is accessible
echo "===================================="
ceph -s
if [ $? -ne 0 ]; then
echo "Error: Ceph cluster is not accessible, check Pod logs for details."
exit 1
fi
if [[ -z "${USER_ID}" && -z "${CEPH_USER_SECRET}" ]]; then
echo "No need to create secrets for pool ${POOL_NAME}"
exit 0
fi
set -ex
# Make sure the pool exists; under `set -e` a bare status check after the
# command would never run, so test the command directly.
if ! ceph osd pool stats ${POOL_NAME}; then
echo "Error: no pool for storage class"
exit 1
fi
ceph osd pool set ${POOL_NAME} size ${POOL_REPLICATION} --yes-i-really-mean-it
ceph osd pool set ${POOL_NAME} pg_num ${POOL_CHUNK_SIZE}
# Make sure crush rule exists.
ceph osd crush rule create-replicated ${POOL_CRUSH_RULE_NAME} default host
if ! ceph osd pool set ${POOL_NAME} crush_rule ${POOL_CRUSH_RULE_NAME}; then
echo "Error: set pool crush rule failed"
fi
set +ex
if kubectl get configmap ceph-etc -n ${NAMESPACE} > /dev/null 2>&1; then
echo "Delete out-of-date configmap ceph-etc"
kubectl delete configmap -n ${NAMESPACE} ceph-etc
fi
kubectl create configmap ceph-etc --from-file=/etc/ceph/ceph.conf -n ${NAMESPACE}
if [ $? -ne 0 ]; then
echo "Error creating configmap ceph-etc, exit"
exit 1
fi
if [ -n "${CEPH_ADMIN_SECRET}" ]; then
if kubectl get secret ${CEPH_ADMIN_SECRET} -n ${NAMESPACE} > /dev/null 2>&1; then
echo "Delete out-of-date ${CEPH_ADMIN_SECRET} secret"
kubectl delete secret -n ${NAMESPACE} ${CEPH_ADMIN_SECRET}
fi
echo "Create ${CEPH_ADMIN_SECRET} secret"
admin_keyring=$(echo $ADMIN_KEYRING | cut -f4 -d' ')
kubectl create secret generic ${CEPH_ADMIN_SECRET} --type="kubernetes.io/rbd" --from-literal=key=$admin_keyring --namespace=${NAMESPACE}
if [ $? -ne 0 ]; then
echo "Error creating secret ${CEPH_ADMIN_SECRET}, exit"
exit 1
fi
fi
KEYRING=$(ceph auth get-or-create client.${USER_ID} mon "allow r" osd "allow rwx pool=${POOL_NAME}" | sed -n 's/^[[:blank:]]*key[[:blank:]]\+=[[:blank:]]\(.*\)/\1/p')
if [ -n "${CEPH_USER_SECRET}" ]; then
if kubectl get secret -n ${NAMESPACE} ${CEPH_USER_SECRET} > /dev/null 2>&1; then
echo "Delete out-of-date ${CEPH_USER_SECRET} secret"
kubectl delete secret -n ${NAMESPACE} ${CEPH_USER_SECRET}
fi
echo "Create ${CEPH_USER_SECRET} secret"
kubectl create secret generic -n ${NAMESPACE} ${CEPH_USER_SECRET} --type="kubernetes.io/rbd" --from-literal=key=$KEYRING
if [ $? -ne 0 ]; then
echo "Error creating secret ${CEPH_USER_SECRET} in ${NAMESPACE}, exit"
exit 1
fi
fi
---
apiVersion: batch/v1
kind: Job
metadata:
name: "rook-ceph-provision"
namespace: {{ $root.Release.Namespace }}
labels:
heritage: {{$root.Release.Service | quote }}
release: {{$root.Release.Name | quote }}
chart: "{{$root.Chart.Name}}"
annotations:
"helm.sh/hook": "post-install, pre-upgrade, pre-rollback"
"helm.sh/hook-delete-policy": "before-hook-creation"
spec:
backoffLimit: 10 # Limit the number of job restarts in case of failure: ~10 minutes.
template:
metadata:
name: "rook-ceph-provision"
namespace: {{ $root.Release.Namespace }}
labels:
heritage: {{$root.Release.Service | quote }}
release: {{$root.Release.Name | quote }}
chart: "{{$root.Chart.Name}}"
spec:
tolerations:
- effect: NoSchedule
operator: Exists
key: node-role.kubernetes.io/master
- effect: NoSchedule
operator: Exists
key: node-role.kubernetes.io/control-plane
serviceAccountName: {{ $root.Values.rbac.serviceAccount }}
restartPolicy: OnFailure
volumes:
- name: config-volume-rook-ceph-provisioner
configMap:
name: config-rook-ceph-provisioner
- name: config-key-provision
configMap:
name: {{ .Values.global.configmap_key_init }}
- name: ceph-config
emptyDir: {}
initContainers:
- name: init
image: {{ $root.Values.images.tags.ceph_config_helper | quote }}
command: [ "/bin/bash", "{{ $mount }}/provision.sh" ]
env:
- name: MON_HOST
value: "{{ $defaults.monitors }}"
- name: ADMIN_KEYRING
valueFrom:
secretKeyRef:
name: rook-ceph-admin-keyring
key: keyring
- name: ROOK_MONS
valueFrom:
configMapKeyRef:
name: rook-ceph-mon-endpoints
key: data
volumeMounts:
- mountPath: /etc/ceph
name: ceph-config
- name: config-key-provision
mountPath: /tmp/mount
containers:
{{ $classConfig := $root.Values.provisionStorage.classes }}
- name: storage-init-{{- $classConfig.name }}
image: {{ $root.Values.images.tags.ceph_config_helper | quote }}
command: [ "/bin/bash", "{{ $mount }}/provision.sh" ]
env:
- name: NAMESPACE
value: {{ $root.Release.Namespace }}
- name: CEPH_ADMIN_SECRET
value: {{ $defaults.adminSecretName }}
- name: CEPH_USER_SECRET
value: {{ $classConfig.secret.userSecretName }}
- name: USER_ID
value: {{ $classConfig.secret.userId }}
- name: POOL_NAME
value: {{ $classConfig.pool.pool_name }}
- name: POOL_REPLICATION
value: {{ $classConfig.pool.replication | quote }}
- name: POOL_CRUSH_RULE_NAME
value: {{ $classConfig.pool.crush_rule_name | quote }}
- name: POOL_CHUNK_SIZE
value: {{ $classConfig.pool.chunk_size | quote }}
- name: ADMIN_KEYRING
valueFrom:
secretKeyRef:
name: rook-ceph-admin-keyring
key: keyring
volumeMounts:
- name: config-volume-rook-ceph-provisioner
mountPath: {{ $mount }}
- name: ceph-config
mountPath: /etc/ceph
readOnly: true
{{- if .Values.global.nodeSelector }}
nodeSelector:
{{ .Values.global.nodeSelector | toYaml | trim | indent 8 }}
{{- end }}
{{- end }}

@ -0,0 +1,72 @@
{{/*
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
*/}}
{{- if .Values.global.job_cleanup }}
{{ $root := . }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: config-rook-provisioner-cleanup
namespace: {{ .Release.Namespace }}
data:
rook_clean_up.sh: |-
#!/bin/bash
kubectl delete configmap -n ${NAMESPACE} ceph-etc
kubectl delete configmap -n ${NAMESPACE} rook-ceph-mon-endpoints
kubectl delete secret -n ${NAMESPACE} ${CEPH_ADMIN_SECRET}
kubectl delete secret -n ${NAMESPACE} ${CEPH_USER_SECRET}
kubectl delete secret -n ${NAMESPACE} rook-ceph-mon
kubectl delete pods -n ${NAMESPACE} -l job-name=rook-ceph-provision
kubectl delete jobs.batch -n ${NAMESPACE} -l release=rook-ceph-provisioner
echo "rook ceph provisioner cleanup complete"
---
apiVersion: batch/v1
kind: Job
metadata:
name: rook-provisioner-cleanup
namespace: {{ .Release.Namespace }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{$root.Chart.Name}}"
annotations:
"helm.sh/hook": "pre-delete"
"helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
spec:
template:
metadata:
name: rook-provisioner-cleanup
namespace: {{ .Release.Namespace }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{$root.Chart.Name}}"
spec:
restartPolicy: OnFailure
serviceAccountName: {{ .Values.rbac.serviceAccount }}
volumes:
- name: config-rook-provisioner-cleanup
configMap:
name: config-rook-provisioner-cleanup
containers:
- name: rook-provisioner-cleanup
image: {{ .Values.images.tags.ceph_config_helper | quote }}
command: [ "/bin/bash", "/tmp/mount/rook_clean_up.sh" ]
env:
- name: NAMESPACE
value: {{ .Release.Namespace }}
- name: CEPH_ADMIN_SECRET
value: {{ .Values.provisionStorage.classdefaults.adminSecretName }}
- name: CEPH_USER_SECRET
value: {{ .Values.provisionStorage.classes.secret.userSecretName }}
volumeMounts:
- name: config-rook-provisioner-cleanup
mountPath: /tmp/mount
{{- end }}

@ -0,0 +1,28 @@
{{/*
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
*/}}
{{- if .Values.global.rbac }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ .Values.rbac.role }}
namespace: {{ .Release.Namespace }}
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "create", "list", "update"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "list", "update"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "create", "list", "update"]
- apiGroups: ["ceph.rook.io"]
resources: ["*"]
verbs: [ "get", "list", "patch" ]
{{- end}}

@ -0,0 +1,23 @@
{{/*
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
*/}}
{{- if .Values.global.rbac }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ .Values.rbac.roleBinding }}
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ .Values.rbac.role }}
subjects:
- kind: ServiceAccount
name: {{ .Values.rbac.serviceAccount }}
namespace: {{ .Release.Namespace }}
{{- end}}

@ -0,0 +1,17 @@
{{/*
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
*/}}
{{- if .Values.global.rbac }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ .Values.rbac.serviceAccount }}
namespace: {{ .Release.Namespace }}
imagePullSecrets:
- name: default-registry-key
{{- end }}

@ -0,0 +1,106 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
global:
configmap_key_init: ceph-key-init-bin
#
provision_storage: true
cephfs_storage: true
job_ceph_mgr_provision: true
job_ceph_mon_audit: false
job_ceph_osd_audit: true
job_host_provision: true
job_cleanup: true
deployment_stx_ceph_manager: true
# Defines whether to generate service account and role bindings.
rbac: true
# Node Selector
nodeSelector: { node-role.kubernetes.io/control-plane: "" }
#
# RBAC options.
# Defaults should be fine in most cases.
rbac:
clusterRole: rook-ceph-provisioner
clusterRoleBinding: rook-ceph-provisioner
role: rook-ceph-provisioner
roleBinding: rook-ceph-provisioner
serviceAccount: rook-ceph-provisioner
images:
tags:
ceph_config_helper: docker.io/openstackhelm/ceph-config-helper:ubuntu_jammy_18.2.2-1-20240312
stx_ceph_manager: docker.io/starlingx/stx-ceph-manager:stx.10.0-v1.7.11
k8s_entrypoint: quay.io/airshipit/kubernetes-entrypoint:v1.0.0
provisionStorage:
# Defines the name of the provisioner associated with a set of storage classes
provisioner_name: rook-ceph.rbd.csi.ceph.com
# Name of the storage class to set as the system default.
defaultStorageClass: rook-ceph
# Configure storage classes.
# Defaults for storage classes. Update this if you have a single Ceph storage cluster.
# No need to add them to each class.
classdefaults:
# Define IP addresses of the Ceph monitors
monitors: 192.168.204.3:6789,192.168.204.4:6789,192.168.204.1:6789
# Ceph admin account
adminId: admin
# K8s secret name for the admin context
adminSecretName: ceph-secret
# Configure storage classes.
# This section should be tailored to your setup. It allows you to define multiple storage
# classes for the same cluster (e.g. if you have tiers of drives with different speeds).
# If you have multiple Ceph clusters take attributes from classdefaults and add them here.
classes:
name: rook-ceph # Name of storage class.
secret:
# K8s secret name with the key for accessing the Ceph pool
userSecretName: ceph-secret-kube
# Ceph user name to access this pool
userId: kube
pool:
pool_name: kube
replication: 1
crush_rule_name: storage_tier_ruleset
chunk_size: 8
cephfsStorage:
provisioner_name: rook-ceph.cephfs.csi.ceph.com
fs_name: kube-cephfs
pool_name: kube-cephfs-data
host_provision:
controller_hosts:
- controller-0
ceph_audit_jobs:
floatIP: 192.168.204.2
audit:
cron: "*/3 * * * *"
deadline: 200
history:
success: 1
failed: 1
hook:
image: docker.io/openstackhelm/ceph-config-helper:ubuntu_jammy_18.2.2-1-20240312
cleanup:
enable: true
cluster_cleanup: rook-ceph
rbac:
clusterRole: rook-ceph-cleanup
clusterRoleBinding: rook-ceph-cleanup
role: rook-ceph-cleanup
roleBinding: rook-ceph-cleanup
serviceAccount: rook-ceph-cleanup
mon_hosts:
- controller-0

@ -0,0 +1,5 @@
rook-ceph-helm (1.13-7) unstable; urgency=medium

  * Initial release.

 -- Caio Correa <caio.correa@windriver.com>  Wed, 11 Oct 2023 10:45:00 +0000

@ -0,0 +1,15 @@
Source: rook-ceph-helm
Section: libs
Priority: optional
Maintainer: StarlingX Developers <starlingx-discuss@lists.starlingx.io>
Build-Depends: debhelper-compat (= 13),
helm
Standards-Version: 4.5.1
Homepage: https://www.starlingx.io
Package: rook-ceph-helm
Section: libs
Architecture: any
Depends: ${misc:Depends}
Description: StarlingX Rook-Ceph Helm Charts
This package contains helm charts for the Rook-Ceph application.

@ -0,0 +1,41 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: rook-ceph-helm
Source: https://opendev.org/starlingx/rook-ceph/
Files: *
Copyright: (c) 2024 Wind River Systems, Inc
License: Apache-2
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
.
https://www.apache.org/licenses/LICENSE-2.0
.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
.
On Debian-based systems the full text of the Apache version 2.0 license
can be found in `/usr/share/common-licenses/Apache-2.0'.
# If you want to use GPL v2 or later for the /debian/* files use
# the following clauses, or change it to suit. Delete these two lines
Files: debian/*
Copyright: 2024 Wind River Systems, Inc
License: Apache-2
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
.
https://www.apache.org/licenses/LICENSE-2.0
.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
.
On Debian-based systems the full text of the Apache version 2.0 license
can be found in `/usr/share/common-licenses/Apache-2.0'.

@ -0,0 +1,133 @@
From e225331b54bbeb1c027840fd27e22fd5c2d7bbd8 Mon Sep 17 00:00:00 2001
From: Caio Correa <caio.correa@windriver.com>
Date: Fri, 5 Apr 2024 08:01:17 -0300
Subject: [PATCH] Add chart for duplex preparation
This patch adds a pre-install hook that edits the endpoint for
rook-ceph-mon. On a duplex system this endpoint should be the floating
IP to accomplish the roaming mon strategy.
Signed-off-by: Caio Correa <caio.correa@windriver.com>
---
.../pre-install-duplex-preparation.yaml | 82 +++++++++++++++++++
deploy/charts/rook-ceph-cluster/values.yaml | 18 ++++
2 files changed, 100 insertions(+)
create mode 100644 deploy/charts/rook-ceph-cluster/templates/pre-install-duplex-preparation.yaml
diff --git a/deploy/charts/rook-ceph-cluster/templates/pre-install-duplex-preparation.yaml b/deploy/charts/rook-ceph-cluster/templates/pre-install-duplex-preparation.yaml
new file mode 100644
index 000000000..61e64c87b
--- /dev/null
+++ b/deploy/charts/rook-ceph-cluster/templates/pre-install-duplex-preparation.yaml
@@ -0,0 +1,82 @@
+{{/*
+#
+# Copyright (c) 2020 Intel Corporation, Inc.
+#
+# SPDX-License-Identifier: Apache-2.0
+#
+*/}}
+
+{{- if .Values.hook.duplexPreparation.enable }}
+{{ $root := . }}
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: config-rook-ceph-duplex-preparation
+ namespace: {{ $root.Release.Namespace }}
+ annotations:
+ "helm.sh/hook": "pre-install"
+ "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
+data:
+ rook_duplex_preparation.sh: |-
+ #!/bin/bash
+
+ cat > endpoint.yaml << EOF
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: rook-ceph-mon-endpoints
+ namespace: $NAMESPACE
+ data:
+ data: a=$FLOAT_IP:6789
+ mapping: '{"node":{"a":{"Name":"$ACTIVE_CONTROLLER","Hostname":"$ACTIVE_CONTROLLER","Address":"$FLOAT_IP"}}}'
+ maxMonId: "0"
+ EOF
+
+ kubectl apply -f endpoint.yaml
+
+ rm -f endpoint.yaml
+---
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: rook-ceph-duplex-preparation
+ namespace: {{ $root.Release.Namespace }}
+ labels:
+ heritage: {{$root.Release.Service | quote }}
+ release: {{$root.Release.Name | quote }}
+ chart: "{{$root.Chart.Name}}"
+ annotations:
+ "helm.sh/hook": "pre-install"
+ "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
+spec:
+ template:
+ metadata:
+ name: rook-ceph-duplex-preparation
+ namespace: {{ $root.Release.Namespace }}
+ labels:
+ heritage: {{$root.Release.Service | quote }}
+ release: {{$root.Release.Name | quote }}
+ chart: "{{$root.Chart.Name}}"
+ spec:
+ serviceAccountName: rook-ceph-system
+ restartPolicy: OnFailure
+ volumes:
+ - name: config-rook-ceph-duplex-preparation
+ configMap:
+ name: config-rook-ceph-duplex-preparation
+ containers:
+ - name: duplex-preparation
+ image: {{ .Values.hook.image }}
+ command: [ "/bin/bash", "/tmp/mount/rook_duplex_preparation.sh" ]
+ env:
+ - name: NAMESPACE
+ value: {{ $root.Release.Namespace }}
+ - name: ACTIVE_CONTROLLER
+ value: {{ $root.Values.hook.duplexPreparation.activeController }}
+ - name: FLOAT_IP
+ value: {{ $root.Values.hook.duplexPreparation.floatIP | quote }}
+ volumeMounts:
+ - name: config-rook-ceph-duplex-preparation
+ mountPath: /tmp/mount
+{{- end }}
diff --git a/deploy/charts/rook-ceph-cluster/values.yaml b/deploy/charts/rook-ceph-cluster/values.yaml
index 36a79d063..ebd262496 100644
--- a/deploy/charts/rook-ceph-cluster/values.yaml
+++ b/deploy/charts/rook-ceph-cluster/values.yaml
@@ -678,3 +678,21 @@ cephObjectStores:
# -- CSI driver name prefix for cephfs, rbd and nfs.
# @default -- `namespace name where rook-ceph operator is deployed`
csiDriverNamePrefix:
+
+hook:
+ image: docker.io/openstackhelm/ceph-config-helper:ubuntu_bionic-20220802
+ duplexPreparation:
+ enable: false
+ activeController: controller-0
+ floatIP: 192.188.204.1
+ cleanup:
+ enable: true
+ cluster_cleanup: rook-ceph
+ rbac:
+ clusterRole: rook-ceph-cleanup
+ clusterRoleBinding: rook-ceph-cleanup
+ role: rook-ceph-cleanup
+ roleBinding: rook-ceph-cleanup
+ serviceAccount: rook-ceph-cleanup
+ mon_hosts:
+ - controller-0
--
2.34.1

@ -0,0 +1 @@
0001-Add-chart-for-duplex-preparation.patch

@ -0,0 +1 @@
usr/lib/helm/*

@ -0,0 +1,37 @@
#!/usr/bin/make -f
export DH_VERBOSE = 1
export DEB_VERSION = $(shell dpkg-parsechangelog | egrep '^Version:' | cut -f 2 -d ' ')
export PATCH_VERSION = $(shell echo $(DEB_VERSION) | cut -f 4 -d '.')
export CHART_BASE_VERSION = $(shell echo $(DEB_VERSION) | sed 's/-/./' | cut -d '.' -f 1-3)
export CHART_VERSION = $(CHART_BASE_VERSION)+STX.$(PATCH_VERSION)
export ROOT = debian/tmp
export APP_FOLDER = $(ROOT)/usr/lib/helm
%:
dh $@
override_dh_auto_build:
mkdir -p rook-ceph-helm
# Copy rook-ceph-helm charts
cp -r deploy/charts/* rook-ceph-helm
cp Makefile rook-ceph-helm
cd rook-ceph-helm && make rook-ceph
cd rook-ceph-helm && make rook-ceph-cluster
override_dh_auto_install:
# Install the app tar file.
install -d -m 755 $(APP_FOLDER)
install -p -D -m 755 rook-ceph-helm/rook-ceph-cluster*.tgz $(APP_FOLDER)
install -p -D -m 755 rook-ceph-helm/rook-ceph-[!c]*.tgz $(APP_FOLDER)
override_dh_auto_test:
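As a side note on the version machinery in the rules file above, the DEB_VERSION to CHART_VERSION derivation can be traced with a small shell sketch. The version string used here is a hypothetical example; the real value comes from debian/changelog plus the $STX_DIST revision.

```shell
# Mirror the DEB_VERSION -> CHART_VERSION pipeline from debian/rules,
# using an assumed Debian version string for illustration.
DEB_VERSION="1.13-7.stx.3"

# Field 4 when splitting on '.' is the distro patch revision.
PATCH_VERSION=$(echo "$DEB_VERSION" | cut -f 4 -d '.')

# Turn "1.13-7..." into "1.13.7..." and keep the first three fields.
CHART_BASE_VERSION=$(echo "$DEB_VERSION" | sed 's/-/./' | cut -d '.' -f 1-3)

CHART_VERSION="${CHART_BASE_VERSION}+STX.${PATCH_VERSION}"
echo "$CHART_VERSION"   # prints 1.13.7+STX.3
```

So a package version of 1.13-7.stx.3 yields chart version 1.13.7+STX.3, which is the value helm package receives via --version. The two separate install lines, in turn, rely on the glob rook-ceph-[!c]*.tgz matching the rook-ceph chart tarball while excluding rook-ceph-cluster*.tgz.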

@ -0,0 +1 @@
3.0 (quilt)

@ -0,0 +1,15 @@
---
debname: rook-ceph-helm
debver: 1.13-7
dl_path:
name: rook-ceph-1.13.7.tar.gz
url: https://github.com/rook/rook/archive/refs/tags/v1.13.7.tar.gz
sha256sum: 8595c8029240ad451a845bf3a45d26af4797909009f104191969577fd45ac1fc
src_files:
- rook-ceph-helm/files/Makefile
revision:
dist: $STX_DIST
stx_patch: 0
GITREVCOUNT:
BASE_SRCREV: c6c693d51cdc6daa4eafe34ccab5ce35496bf516
SRC_DIR: ${MY_REPO}/stx/app-rook-ceph/helm-charts/upstream/rook-ceph-helm

@ -0,0 +1,41 @@
#
# Copyright 2017 The Openstack-Helm Authors.
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# It's necessary to set this because some environments don't link sh -> bash.
SHELL := /bin/bash
TASK := build
EXCLUDES := doc tests tools logs tmp
CHARTS := $(filter-out $(EXCLUDES), $(patsubst %/.,%,$(wildcard */.)))
.PHONY: $(EXCLUDES) $(CHARTS)
all: $(CHARTS)
$(CHARTS):
@if [ -d $@ ]; then \
echo; \
echo "===== Processing [$@] chart ====="; \
make $(TASK)-$@; \
fi
init-%:
if [ -f $*/Makefile ]; then make -C $*; fi
lint-%: init-%
if [ -d $* ]; then helm lint $*; fi
build-%: lint-%
if [ -d $* ]; then helm package --version $(CHART_VERSION) $*; fi
clean:
@echo "Clean all build artifacts"
rm -f */templates/_partials.tpl */templates/_globals.tpl
rm -rf */charts */tmpcharts
%:
@:

@ -0,0 +1,5 @@
python3-k8sapp-rook-ceph (1.0-1) unstable; urgency=medium

  * Initial release.

 -- Caio Correa <caio.correa@windriver.com>  Wed, 11 Oct 2023 10:45:00 +0000

@ -0,0 +1,27 @@
Source: python3-k8sapp-rook-ceph
Section: libs
Priority: optional
Maintainer: StarlingX Developers <starlingx-discuss@lists.starlingx.io>
Build-Depends: debhelper-compat (= 13),
dh-python,
build-info,
python3-all,
python3-pbr,
python3-setuptools,
python3-wheel
Standards-Version: 4.5.1
Homepage: https://www.starlingx.io
Package: python3-k8sapp-rook-ceph
Section: libs
Architecture: any
Depends: ${misc:Depends}, ${python3:Depends}
Description: StarlingX Sysinv Rook Ceph Extensions
Sysinv plugins for the Rook Ceph K8S app.
Package: python3-k8sapp-rook-ceph-wheels
Section: libs
Architecture: any
Depends: ${misc:Depends}, ${python3:Depends}, python3-wheel
Description: StarlingX Sysinv Rook Ceph Extension Wheels
Python wheels for the Rook Ceph K8S app plugins.

@ -0,0 +1,41 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: python3-k8sapp-rook-ceph
Source: https://opendev.org/starlingx/rook-ceph/
Files: *
Copyright: (c) 2024 Wind River Systems, Inc
License: Apache-2
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
.
https://www.apache.org/licenses/LICENSE-2.0
.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
.
On Debian-based systems the full text of the Apache version 2.0 license
can be found in `/usr/share/common-licenses/Apache-2.0'.
# If you want to use GPL v2 or later for the /debian/* files use
# the following clauses, or change it to suit. Delete these two lines
Files: debian/*
Copyright: 2024 Wind River Systems, Inc
License: Apache-2
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
.
https://www.apache.org/licenses/LICENSE-2.0
.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
.
On Debian-based systems the full text of the Apache version 2.0 license
can be found in `/usr/share/common-licenses/Apache-2.0'.

@ -0,0 +1 @@
usr/lib/python3/dist-packages/k8sapp_*

@ -0,0 +1,33 @@
#!/usr/bin/make -f
# export DH_VERBOSE = 1
export APP_NAME = rook-ceph
export PYBUILD_NAME = k8sapp-rook-ceph
export DEB_VERSION = $(shell dpkg-parsechangelog | egrep '^Version:' | cut -f 2 -d ' ')
export MAJOR = $(shell cat /etc/build.info | grep SW_VERSION | cut -d'"' -f2)
export MINOR_PATCH = $(shell echo $(DEB_VERSION) | cut -f 4 -d '.')
export PBR_VERSION = $(MAJOR).$(MINOR_PATCH)
export ROOT = $(CURDIR)/debian/tmp
export SKIP_PIP_INSTALL = 1
%:
dh $@ --with=python3 --buildsystem=pybuild
override_dh_auto_install:
env | sort
python3 setup.py install \
--install-layout=deb \
--root $(ROOT)
python3 setup.py bdist_wheel \
--universal \
-d $(ROOT)/plugins
override_dh_python3:
dh_python3 --shebang=/usr/bin/python3
override_dh_auto_test:
PYTHONDIR=$(CURDIR) stestr run

@ -0,0 +1 @@
3.0 (quilt)

@ -0,0 +1,9 @@
---
debname: python3-k8sapp-rook-ceph
debver: 1.0-1
src_path: k8sapp_rook_ceph
revision:
dist: $STX_DIST
GITREVCOUNT:
SRC_DIR: ${MY_REPO}/stx/app-rook-ceph
BASE_SRCREV: c6c693d51cdc6daa4eafe34ccab5ce35496bf516

@ -0,0 +1,7 @@
[run]
branch = True
source = k8sapp_rook_ceph
omit = k8sapp_rook_ceph/tests/*
[report]
ignore_errors = True

@ -0,0 +1,35 @@
# Compiled files
*.py[co]
*.a
*.o
*.so
# Sphinx
_build
doc/source/api/
# Packages/installer info
*.egg
*.egg-info
dist
build
eggs
parts
var
sdist
develop-eggs
.installed.cfg
# Other
*.DS_Store
.stestr
.testrepository
.tox
.venv
.*.swp
.coverage
bandit.xml
cover
AUTHORS
ChangeLog
*.sqlite

@ -0,0 +1,4 @@
[DEFAULT]
test_path=./k8sapp_rook_ceph/tests
top_dir=./k8sapp_rook_ceph
#parallel_class=True

@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2020 Intel Corporation, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
@ -0,0 +1,7 @@
k8sapp_rook_ceph
================
This project contains StarlingX Kubernetes application-specific Python plugins
for the Rook Ceph application. These plugins are required to integrate the
application into the StarlingX application framework and to support the
various StarlingX deployments.
@ -0,0 +1,34 @@
#
# Copyright (c) 2020 Intel Corporation, Inc.
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# Application Name
HELM_NS_ROOK_CEPH = 'rook-ceph'
HELM_APP_ROOK_CEPH = 'rook-ceph'
# Helm: Supported charts:
# These values match the names in the chart package's Chart.yaml
HELM_CHART_ROOK_CEPH = 'rook-ceph'
HELM_CHART_ROOK_CEPH_CLUSTER = 'rook-ceph-cluster'
HELM_CHART_ROOK_CEPH_PROVISIONER = 'rook-ceph-provisioner'
# FluxCD
FLUXCD_HELMRELEASE_ROOK_CEPH = 'rook-ceph'
FLUXCD_HELMRELEASE_ROOK_CEPH_CLUSTER = 'rook-ceph-cluster'
FLUXCD_HELMRELEASE_ROOK_CEPH_PROVISIONER = 'rook-ceph-provisioner'
ROOK_CEPH_CLUSTER_SECRET_NAMESPACE = 'rook-ceph'
ROOK_CEPH_RDB_SECRET_NAME = 'rook-csi-rbd-provisioner'
ROOK_CEPH_RDB_NODE_SECRET_NAME = 'rook-csi-rbd-node'
ROOK_CEPH_FS_SECRET_NAME = 'rook-csi-cephfs-provisioner'
ROOK_CEPH_FS_NODE_SECRET_NAME = 'rook-csi-cephfs-node'
ROOK_CEPH_CLUSTER_RDB_STORAGE_CLASS_NAME = 'general'
ROOK_CEPH_CLUSTER_CEPHFS_STORAGE_CLASS_NAME = 'cephfs'
ROOK_CEPH_CLUSTER_CEPHFS_FILE_SYSTEM_NAME = 'kube-cephfs'
@ -0,0 +1,32 @@
#
# Copyright (c) 2021 Intel Corporation, Inc.
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_rook_ceph.common import constants as app_constants
from k8sapp_rook_ceph.helm import storage
from sysinv.common import exception
class RookCephHelm(storage.StorageBaseHelm):
"""Class to encapsulate helm operations for the rook-operator chart"""
CHART = app_constants.HELM_CHART_ROOK_CEPH
HELM_RELEASE = app_constants.FLUXCD_HELMRELEASE_ROOK_CEPH
def get_overrides(self, namespace=None):
secrets = [{"name": "default-registry-key"}]
overrides = {
app_constants.HELM_NS_ROOK_CEPH: {
'imagePullSecrets': secrets,
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
@ -0,0 +1,214 @@
#
# Copyright (c) 2018 Intel Corporation, Inc.
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_rook_ceph.common import constants as app_constants
from k8sapp_rook_ceph.helm import storage
from sysinv.common import constants
from sysinv.common import exception
from sysinv.common import utils as cutils
import socket
class RookCephClusterHelm(storage.StorageBaseHelm):
"""Class to encapsulate helm operations for the rook-ceph-cluster chart"""
CHART = app_constants.HELM_CHART_ROOK_CEPH_CLUSTER
HELM_RELEASE = app_constants.FLUXCD_HELMRELEASE_ROOK_CEPH_CLUSTER
def get_overrides(self, namespace=None):
overrides = {
app_constants.HELM_NS_ROOK_CEPH: {
'cephClusterSpec': self._get_cluster_override(),
'cephFileSystems': self._get_cephfs_override(),
'cephBlockPools': self._get_rdb_override(),
'mds': self._get_mds_override(),
'hook': self._get_hook_override(),
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_cephfs_override(self):
if cutils.is_aio_simplex_system(self.dbapi):
replica = 1
else:
replica = 2
parameters = {
'csi.storage.k8s.io/provisioner-secret-name': app_constants.ROOK_CEPH_FS_SECRET_NAME,
'csi.storage.k8s.io/provisioner-secret-namespace': app_constants.ROOK_CEPH_CLUSTER_SECRET_NAMESPACE,
'csi.storage.k8s.io/controller-expand-secret-name': app_constants.ROOK_CEPH_FS_SECRET_NAME,
'csi.storage.k8s.io/controller-expand-secret-namespace': app_constants.ROOK_CEPH_CLUSTER_SECRET_NAMESPACE,
'csi.storage.k8s.io/node-stage-secret-name': app_constants.ROOK_CEPH_FS_NODE_SECRET_NAME,
'csi.storage.k8s.io/node-stage-secret-namespace': app_constants.ROOK_CEPH_CLUSTER_SECRET_NAMESPACE,
'csi.storage.k8s.io/fstype': 'ext4'
}
storage_class = {
'enabled': True,
'name': app_constants.ROOK_CEPH_CLUSTER_CEPHFS_STORAGE_CLASS_NAME,
'isDefault': False,
'pool': 'data',
'allowVolumeExpansion': True,
'reclaimPolicy': 'Delete',
'parameters': parameters
}
ceph_fs_config = [{
'name': app_constants.ROOK_CEPH_CLUSTER_CEPHFS_FILE_SYSTEM_NAME,
'spec': {
'metadataPool': {
'replicated':
{'size': replica}},
'metadataServer': {
'activeCount': 1,
'activeStandby': True,
'resources': {
'limits':
{'memory': '4Gi'},
'requests': {
'memory': '0',
'cpu': '0'}},
'priorityClassName': 'system-cluster-critical'},
'dataPools': [{
'failureDomain': 'host',
'name': 'data',
'replicated':
{'size': replica}}],
},
'storageClass': storage_class
}]
return ceph_fs_config
def _get_rdb_override(self):
if cutils.is_aio_simplex_system(self.dbapi):
replica = 1
else:
replica = 2
parameters = {
'imageFormat': '2',
'imageFeatures': 'layering',
'csi.storage.k8s.io/provisioner-secret-name': app_constants.ROOK_CEPH_RDB_SECRET_NAME,
'csi.storage.k8s.io/provisioner-secret-namespace': app_constants.ROOK_CEPH_CLUSTER_SECRET_NAMESPACE,
'csi.storage.k8s.io/controller-expand-secret-name': app_constants.ROOK_CEPH_RDB_SECRET_NAME,
'csi.storage.k8s.io/controller-expand-secret-namespace': app_constants.ROOK_CEPH_CLUSTER_SECRET_NAMESPACE,
'csi.storage.k8s.io/node-stage-secret-name': app_constants.ROOK_CEPH_RDB_NODE_SECRET_NAME,
'csi.storage.k8s.io/node-stage-secret-namespace': app_constants.ROOK_CEPH_CLUSTER_SECRET_NAMESPACE,
'csi.storage.k8s.io/fstype': 'ext4'
}
storage_class = {
'enabled': True,
'name': app_constants.ROOK_CEPH_CLUSTER_RDB_STORAGE_CLASS_NAME,
'isDefault': True,
'allowVolumeExpansion': True,
'reclaimPolicy': 'Delete',
'mountOptions': [],
'parameters': parameters
}
rdb_config = [{
'name': 'kube-rbd',
'spec': {
'failureDomain': 'host',
'replicated': {'size': replica}
},
'storageClass': storage_class
}]
return rdb_config
def _get_cluster_override(self):
cluster_host_addr_name = cutils.format_address_name(constants.CONTROLLER_HOSTNAME,
constants.NETWORK_TYPE_CLUSTER_HOST)
address = cutils.get_primary_address_by_name(self.dbapi, cluster_host_addr_name,
constants.NETWORK_TYPE_CLUSTER_HOST, True)
cluster = {
'mon': {
'count': self._get_mon_count(),
},
'network': {
'ipFamily': 'IPv' + str(address.family)
},
}
return cluster
def _get_mon_count(self):
# Monitor count depends on the deployment configuration:
# AIO simplex/duplex systems run 1 mon; multi-node systems run 3 mons
# (2 controllers plus the first mon host, and cannot be reconfigured).
if cutils.is_aio_system(self.dbapi):
return 1
else:
return 3
def _get_mds_override(self):
if cutils.is_aio_simplex_system(self.dbapi):
replica = 1
else:
replica = 2
mds = {
'replica': replica,
}
return mds
def _get_hook_override(self):
hook = {
'cleanup': {
'mon_hosts': self._get_mon_hosts(),
},
'duplexPreparation': self._get_duplex_preparation(),
}
return hook
def _get_mon_hosts(self):
ceph_mon_label = "ceph-mon-placement=enabled"
mon_hosts = []
hosts = self.dbapi.ihost_get_list()
for h in hosts:
labels = self.dbapi.label_get_by_host(h.uuid)
for label in labels:
if (ceph_mon_label == str(label.label_key) + '=' + str(label.label_value)):
mon_hosts.append(h.hostname.encode('utf8', 'strict'))
return mon_hosts
def _get_duplex_preparation(self):
duplex = {
'enable': cutils.is_aio_duplex_system(self.dbapi)
}
if cutils.is_aio_duplex_system(self.dbapi):
hosts = self.dbapi.ihost_get_by_personality(
constants.CONTROLLER)
for host in hosts:
if host['hostname'] == socket.gethostname():
duplex.update({'activeController': host['hostname'].encode('utf8', 'strict')})
cluster_host_addr_name = cutils.format_address_name(constants.CONTROLLER_HOSTNAME,
constants.NETWORK_TYPE_CLUSTER_HOST)
address = cutils.get_primary_address_by_name(self.dbapi, cluster_host_addr_name,
constants.NETWORK_TYPE_CLUSTER_HOST, True)
duplex.update({'floatIP': cutils.format_url_address(address.address)})
return duplex
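The replica and monitor sizing rules embedded in the overrides above reduce to two small decisions. This standalone sketch mirrors that logic for clarity; the helper names are illustrative and not part of the plugin:

```python
def pool_replica_size(is_aio_simplex: bool) -> int:
    # Replication factor used for both the cephfs and rbd pool overrides:
    # a single-host simplex system can only hold one replica.
    return 1 if is_aio_simplex else 2


def mon_count(is_aio: bool) -> int:
    # AIO simplex/duplex systems run a single monitor; multi-node
    # systems run three.
    return 1 if is_aio else 3


print(pool_replica_size(True), mon_count(False))
```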
@ -0,0 +1,142 @@
#
# Copyright (c) 2018 Wind River Systems, Inc.
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_rook_ceph.common import constants as app_constants
from k8sapp_rook_ceph.helm import storage
from kubernetes.client.rest import ApiException
from oslo_log import log as logging
from sysinv.common import constants
from sysinv.common import exception
from sysinv.common import kubernetes
from sysinv.common import utils as cutils
LOG = logging.getLogger(__name__)
class RookCephClusterProvisionerHelm(storage.StorageBaseHelm):
"""Class to encapsulate helm operations for the rook-ceph-provisioner chart"""
CHART = app_constants.HELM_CHART_ROOK_CEPH_PROVISIONER
HELM_RELEASE = app_constants.FLUXCD_HELMRELEASE_ROOK_CEPH_PROVISIONER
def get_overrides(self, namespace=None):
base_name = 'ceph-pool'
secret_name = base_name + '-' + constants.CEPH_POOL_KUBE_NAME
if cutils.is_aio_simplex_system(self.dbapi):
replica = 1
else:
replica = 2
audit = cutils.is_aio_duplex_system(self.dbapi)
overrides = {
app_constants.HELM_NS_ROOK_CEPH: {
"global": {
"job_ceph_mon_audit": audit,
},
"provisionStorage": {
"defaultStorageClass": constants.K8S_RBD_PROV_STOR_CLASS_NAME,
"classdefaults": {
"monitors": self._get_monitors(),
"adminId": constants.K8S_RBD_PROV_USER_NAME,
"adminSecretName": constants.K8S_RBD_PROV_ADMIN_SECRET_NAME,
},
"classes": {
"name": constants.K8S_RBD_PROV_STOR_CLASS_NAME,
"pool": {
"pool_name": constants.CEPH_POOL_KUBE_NAME,
"replication": replica,
"crush_rule_name": "storage_tier_ruleset",
"chunk_size": 64,
},
"secret": {
"userId": constants.CEPH_POOL_KUBE_NAME,
"userSecretName": secret_name,
}
},
},
"host_provision": {
"controller_hosts": self._get_controller_hosts(),
},
"ceph_audit_jobs": self._get_ceph_audit(),
}
}
if namespace in self.SUPPORTED_NAMESPACES:
return overrides[namespace]
elif namespace:
raise exception.InvalidHelmNamespace(chart=self.CHART,
namespace=namespace)
else:
return overrides
def _get_rook_mon_ip(self):
try:
kube = kubernetes.KubeOperator()
mon_ip_name = 'rook-ceph-mon-endpoints'
configmap = kube.kube_read_config_map(mon_ip_name,
app_constants.HELM_NS_ROOK_CEPH)
if configmap is not None:
data = configmap.data['data']
LOG.info('rook configmap data is %s' % data)
mons = data.split(',')
lists = []
for mon in mons:
mon = mon.split('=')
lists.append(mon[1])
ip_str = ','.join(lists)
LOG.info('rook mon ip is %s' % ip_str)
return ip_str
except Exception as e:
LOG.error("Kubernetes exception in rook mon ip: %s" % e)
raise
return ''
def _is_rook_ceph(self):
try:
label = "mon_cluster=" + app_constants.HELM_NS_ROOK_CEPH
kube = kubernetes.KubeOperator()
pods = kube.kube_get_pods_by_selector(app_constants.HELM_NS_ROOK_CEPH, label, "")
if len(pods) > 0:
return True
except ApiException as ae:
LOG.error("get monitor pod exception: %s" % ae)
except exception.SysinvException as se:
LOG.error("get sysinv exception: %s" % se)
return False
def _get_monitors(self):
if self._is_rook_ceph():
return self._get_rook_mon_ip()
else:
return ''
def _get_controller_hosts(self):
controller_hosts = []
hosts = self.dbapi.ihost_get_by_personality(constants.CONTROLLER)
for h in hosts:
controller_hosts.append(h.hostname.encode('utf8', 'strict'))
return controller_hosts
def _get_ceph_audit(self):
audit = {}
if cutils.is_aio_duplex_system(self.dbapi):
cluster_host_addr_name = cutils.format_address_name(constants.CONTROLLER_HOSTNAME,
constants.NETWORK_TYPE_CLUSTER_HOST)
address = cutils.get_primary_address_by_name(self.dbapi, cluster_host_addr_name,
constants.NETWORK_TYPE_CLUSTER_HOST, True)
audit.update({'floatIP': cutils.format_url_address(address.address)})
return audit
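`_get_rook_mon_ip` above reduces the `data` field of the `rook-ceph-mon-endpoints` ConfigMap, a comma-separated list of `name=address:port` pairs, to the bare endpoints. The same parsing can be sketched standalone; the sample values here are hypothetical:

```python
def parse_mon_endpoints(data: str) -> str:
    # Keep only the address:port part of each "name=address:port" entry.
    return ",".join(entry.split("=")[1] for entry in data.split(","))


# Hypothetical ConfigMap payload for a two-monitor cluster.
sample = "a=192.168.204.2:6789,b=192.168.204.3:6789"
print(parse_mon_endpoints(sample))
```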
@ -0,0 +1,53 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from sysinv.helm import base
from k8sapp_rook_ceph.common import constants as app_constants
class BaseHelm(base.FluxCDBaseHelm):
"""Class to encapsulate storage-related service operations for helm"""
SUPPORTED_NAMESPACES = base.BaseHelm.SUPPORTED_NAMESPACES + \
[app_constants.HELM_NS_ROOK_CEPH]
SUPPORTED_APP_NAMESPACES = {
app_constants.HELM_APP_ROOK_CEPH: SUPPORTED_NAMESPACES,
}
class StorageBaseHelm(BaseHelm):
"""Class to encapsulate storage service operations for helm"""
def _is_enabled(self, app_name, chart_name, namespace):
"""
Check if the chart is enabled at a system level
:param app_name: Application name
:param chart_name: Chart supplied with the application
:param namespace: Namespace where the chart will be executed
Returns true by default if an exception occurs as most charts are
enabled.
"""
return super(StorageBaseHelm, self)._is_enabled(
app_name, chart_name, namespace)
def execute_kustomize_updates(self, operator):
"""
Update the elements of FluxCD kustomize manifests.
This allows a helm chart plugin to use the FluxCDKustomizeOperator to
make dynamic structural changes to the application manifest based on the
current conditions in the platform
Changes currently include updates to the top level kustomize manifest to
disable helm releases.
:param operator: an instance of the FluxCDKustomizeOperator
"""
if not self._is_enabled(operator.APP, self.CHART,
app_constants.HELM_NS_ROOK_CEPH):
operator.helm_release_resource_delete(self.HELM_RELEASE)
@ -0,0 +1,19 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import yaml
class quoted_str(str):
pass
# force strings to be single-quoted to avoid interpretation as numeric values
def quoted_presenter(dumper, data):
return dumper.represent_scalar(u'tag:yaml.org,2002:str', data, style="'")
yaml.add_representer(quoted_str, quoted_presenter)
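The effect of the representer can be seen by dumping the same value with and without the `quoted_str` wrapper. This is a self-contained repeat of the module above (assuming PyYAML is available): a plain `str` is emitted unquoted, while a `quoted_str` is always single-quoted, so chart values cannot be re-read as numbers or booleans.

```python
import yaml


class quoted_str(str):
    pass


def quoted_presenter(dumper, data):
    # Emit the string as a single-quoted YAML scalar.
    return dumper.represent_scalar(u'tag:yaml.org,2002:str', data, style="'")


yaml.add_representer(quoted_str, quoted_presenter)

print(yaml.dump({'name': 'general'}))              # name: general
print(yaml.dump({'name': quoted_str('general')}))  # name: 'general'
```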
@ -0,0 +1,28 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# All Rights Reserved.
#
""" System inventory Kustomization resource operator."""
from k8sapp_rook_ceph.common import constants as app_constants
from sysinv.helm import kustomize_base as base
class RookCephFluxCDKustomizeOperator(base.FluxCDKustomizeOperator):
APP = app_constants.HELM_APP_ROOK_CEPH
def platform_mode_kustomize_updates(self, dbapi, mode):
""" Update the top-level kustomization resource list
Make changes to the top-level kustomization resource list based on the
platform mode
:param dbapi: DB api object
:param mode: mode to control when to update the resource list
"""
pass
@ -0,0 +1,5 @@
#
# Copyright (c) 2021 Intel Corporation, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
@ -0,0 +1,109 @@
#
# Copyright (c) 2021 Intel Corporation, Inc.
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# All Rights Reserved.
#
""" System inventory App lifecycle operator."""
from oslo_log import log as logging
from sysinv.common import constants
from sysinv.common import exception
from sysinv.common import kubernetes
from sysinv.common import utils as cutils
from sysinv.helm import lifecycle_base as base
from sysinv.helm.lifecycle_constants import LifecycleConstants
from sysinv.helm import lifecycle_utils as lifecycle_utils
LOG = logging.getLogger(__name__)
class RookCephAppLifecycleOperator(base.AppLifecycleOperator):
def app_lifecycle_actions(self, context, conductor_obj, app_op, app, hook_info):
""" Perform lifecycle actions for an operation
:param context: request context
:param conductor_obj: conductor object
:param app_op: AppOperator object
:param app: AppOperator.Application object
:param hook_info: LifecycleHookInfo object
"""
# Fluxcd request
if hook_info.lifecycle_type == constants.APP_LIFECYCLE_TYPE_FLUXCD_REQUEST:
if (hook_info.operation == constants.APP_REMOVE_OP and
hook_info.relative_timing == constants.APP_LIFECYCLE_TIMING_PRE):
return self.remove_finalizers_crd()
# Resources
if hook_info.lifecycle_type == constants.APP_LIFECYCLE_TYPE_RESOURCE:
if hook_info.operation == constants.APP_APPLY_OP:
if hook_info.relative_timing == constants.APP_LIFECYCLE_TIMING_PRE:
return lifecycle_utils.create_local_registry_secrets(app_op, app, hook_info)
elif (hook_info.operation == constants.APP_REMOVE_OP and
hook_info.relative_timing == constants.APP_LIFECYCLE_TIMING_POST):
return lifecycle_utils.delete_local_registry_secrets(app_op, app, hook_info)
# Operation
elif hook_info.lifecycle_type == constants.APP_LIFECYCLE_TYPE_OPERATION:
if (hook_info.operation == constants.APP_APPLY_OP and
hook_info.relative_timing == constants.APP_LIFECYCLE_TIMING_POST):
return self.post_apply(context, conductor_obj, app, hook_info)
# Use the default behaviour for other hooks
super(RookCephAppLifecycleOperator, self).app_lifecycle_actions(context, conductor_obj, app_op, app, hook_info)
def post_apply(self, context, conductor_obj, app, hook_info):
""" Post apply actions
:param context: request context
:param conductor_obj: conductor object
:param app: AppOperator.Application object
:param hook_info: LifecycleHookInfo object
"""
if LifecycleConstants.EXTRA not in hook_info:
raise exception.LifecycleMissingInfo("Missing {}".format(LifecycleConstants.EXTRA))
if LifecycleConstants.APP_APPLIED not in hook_info[LifecycleConstants.EXTRA]:
raise exception.LifecycleMissingInfo(
"Missing {} {}".format(LifecycleConstants.EXTRA, LifecycleConstants.APP_APPLIED))
if hook_info[LifecycleConstants.EXTRA][LifecycleConstants.APP_APPLIED]:
# apply any runtime configurations that are needed by the
# rook-ceph application
conductor_obj._update_config_for_rook_ceph(context)
def remove_finalizers_crd(self):
""" Remove finalizers from CustomResourceDefinitions (CRDs)
This function removes the finalizers from rook-ceph custom resources so
that the application remove operation can delete them cleanly
"""
# Get all CRDs related to rook-ceph
cmd_crds = ["kubectl", "--kubeconfig", kubernetes.KUBERNETES_ADMIN_CONF, "get", "crd",
"-o=jsonpath='{.items[?(@.spec.group==\"ceph.rook.io\")].metadata.name}'"]
stdout, stderr = cutils.trycmd(*cmd_crds)
if not stderr:
crds = [crd for crd in stdout.replace("'", "").strip().split(" ") if crd]
for crd_name in crds:
# Get custom resources based on each rook-ceph CRD
cmd_instances = ["kubectl", "--kubeconfig", kubernetes.KUBERNETES_ADMIN_CONF,
"get", "-n", "rook-ceph", crd_name, "-o", "name"]
stdout, stderr = cutils.trycmd(*cmd_instances)
crd_instances = stdout.strip().split("\n")
if not stderr and crd_instances:
for crd_instance in crd_instances:
if crd_instance:
# Patch each custom resource to remove finalizers
patch_cmd = ["kubectl", "--kubeconfig", kubernetes.KUBERNETES_ADMIN_CONF,
"patch", "-n", "rook-ceph", crd_instance, "-p",
"{\"metadata\":{\"finalizers\":null}}", "--type=merge"]
stdout, stderr = cutils.trycmd(*patch_cmd)
LOG.debug("{} \n stdout: {} \n stderr: {}".format(crd_instance, stdout, stderr))
else:
LOG.error("Error removing finalizers: %s" % stderr)
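The `-p` payload used with `--type=merge` above is a standard JSON merge patch (RFC 7386). Setting `finalizers` to `null` clears the finalizer list on a custom resource, which is what otherwise blocks Kubernetes from garbage-collecting it during removal. A minimal sketch of what the payload deserializes to:

```python
import json

# The exact merge-patch string passed to "kubectl patch" above:
# null removes the "finalizers" key, unblocking deletion.
patch = json.loads('{"metadata":{"finalizers":null}}')
print(patch)
```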
@ -0,0 +1,42 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_rook_ceph.common import constants as app_constants
from sysinv.tests.db import base as dbbase
class K8SAppRookAppMixin(object):
app_name = app_constants.HELM_APP_ROOK_CEPH
path_name = app_name + '.tgz'
def setUp(self): # pylint: disable=useless-super-delegation
super(K8SAppRookAppMixin, self).setUp()
# Test Configuration:
# - Controller
# - IPv6
class K8SAppRookControllerTestCase(K8SAppRookAppMixin,
dbbase.BaseIPv6Mixin,
dbbase.ControllerHostTestCase):
pass
# Test Configuration:
# - AIO
# - IPv4
class K8SAppRookAIOTestCase(K8SAppRookAppMixin,
dbbase.AIOSimplexHostTestCase):
pass
# Test Configuration:
# - Controller
# - Dual-Stack Primary IPv4
class K8SAppRookDualStackControllerIPv4TestCase(K8SAppRookAppMixin,
dbbase.BaseDualStackPrimaryIPv4Mixin,
dbbase.ControllerHostTestCase):
pass
@ -0,0 +1,121 @@
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from k8sapp_rook_ceph.common import constants as app_constants
from k8sapp_rook_ceph.tests import test_plugins
from sysinv.db import api as dbapi
from sysinv.tests.db import base as dbbase
from sysinv.tests.db import utils as dbutils
from sysinv.tests.helm import base
class RookTestCase(test_plugins.K8SAppRookAppMixin,
base.HelmTestCaseMixin):
def setUp(self):
super(RookTestCase, self).setUp()
self.app = dbutils.create_test_app(name=app_constants.HELM_APP_ROOK_CEPH)
self.dbapi = dbapi.get_instance()
class RookIPv4ControllerHostTestCase(RookTestCase,
dbbase.ProvisionedControllerHostTestCase):
def test_rook_ceph_overrides(self):
d_overrides = self.operator.get_helm_chart_overrides(
app_constants.HELM_CHART_ROOK_CEPH,
cnamespace=app_constants.HELM_NS_ROOK_CEPH)
self.assertOverridesParameters(d_overrides, {
'imagePullSecrets': [
{'name': 'default-registry-key'}
]
})
def test_rook_ceph_cluster_overrides(self):
e_overrides = self.operator.get_helm_chart_overrides(
app_constants.HELM_CHART_ROOK_CEPH_CLUSTER,
cnamespace=app_constants.HELM_NS_ROOK_CEPH)
self.assertOverridesParameters(e_overrides.get('cephFileSystems')[0].get('spec').
get('metadataPool').get('replicated').get('size'), 2)
def test_rook_ceph_provisioner_overrides(self):
f_overrides = self.operator.get_helm_chart_overrides(
app_constants.HELM_CHART_ROOK_CEPH_PROVISIONER,
cnamespace=app_constants.HELM_NS_ROOK_CEPH)
self.assertOverridesParameters(f_overrides.get('global').get('job_ceph_mon_audit'),
False)
self.assertOverridesParameters(f_overrides.get('host_provision').get('controller_hosts'),
[b'controller-0'])
self.assertOverridesParameters(f_overrides.get('ceph_audit_jobs').get('floatIP'),
{})
class RookIPv6AIODuplexSystemTestCase(RookTestCase,
dbbase.BaseIPv6Mixin,
dbbase.ProvisionedAIODuplexSystemTestCase):
def test_rook_ceph_overrides(self):
a_overrides = self.operator.get_helm_chart_overrides(
app_constants.HELM_CHART_ROOK_CEPH,
cnamespace=app_constants.HELM_NS_ROOK_CEPH)
self.assertOverridesParameters(a_overrides, {
'imagePullSecrets': [{'name': 'default-registry-key'}],
})
def test_rook_ceph_cluster_overrides(self):
b_overrides = self.operator.get_helm_chart_overrides(
app_constants.HELM_CHART_ROOK_CEPH_CLUSTER,
cnamespace=app_constants.HELM_NS_ROOK_CEPH)
self.assertOverridesParameters(b_overrides.get('cephFileSystems')[0].get('spec').
get('metadataPool').get('replicated').get('size'), 2)
def test_rook_ceph_provisioner_overrides(self):
c_overrides = self.operator.get_helm_chart_overrides(
app_constants.HELM_CHART_ROOK_CEPH_PROVISIONER,
cnamespace=app_constants.HELM_NS_ROOK_CEPH)
self.assertOverridesParameters(c_overrides.get('global').get('job_ceph_mon_audit'),
True)
self.assertOverridesParameters(c_overrides.get('host_provision').get('controller_hosts'),
[b'controller-0', b'controller-1'])
self.assertOverridesParameters(c_overrides.get('ceph_audit_jobs').get('floatIP'),
'[fd02::2]')
class RookDualStackControllerIPv4TestCase(RookTestCase,
dbbase.BaseDualStackPrimaryIPv4Mixin,
dbbase.ProvisionedAIODuplexSystemTestCase):
def test_rook_ceph_overrides(self):
g_overrides = self.operator.get_helm_chart_overrides(
app_constants.HELM_CHART_ROOK_CEPH,
cnamespace=app_constants.HELM_NS_ROOK_CEPH)
self.assertOverridesParameters(g_overrides, {
'imagePullSecrets': [{'name': 'default-registry-key'}],
})
def test_rook_ceph_cluster_overrides(self):
h_overrides = self.operator.get_helm_chart_overrides(
app_constants.HELM_CHART_ROOK_CEPH_CLUSTER,
cnamespace=app_constants.HELM_NS_ROOK_CEPH)
self.assertOverridesParameters(h_overrides.get('cephFileSystems')[0].get('spec').
get('metadataPool').get('replicated').get('size'), 2)
def test_rook_ceph_provisioner_overrides(self):
i_overrides = self.operator.get_helm_chart_overrides(
app_constants.HELM_CHART_ROOK_CEPH_PROVISIONER,
cnamespace=app_constants.HELM_NS_ROOK_CEPH)
self.assertOverridesParameters(i_overrides.get('global').get('job_ceph_mon_audit'),
True)
self.assertOverridesParameters(i_overrides.get('host_provision').get('controller_hosts'),
[b'controller-0', b'controller-1'])
self.assertOverridesParameters(i_overrides.get('ceph_audit_jobs').get('floatIP'),
'192.168.206.2')
@ -0,0 +1,336 @@
[MASTER]
# Specify a configuration file.
rcfile=pylint.rc
# Python code to execute, usually for sys.path manipulation such as
# pygtk.require().
#init-hook=
# Add files or directories to the blacklist. Should be base names, not paths.
ignore=
# Pickle collected data for later comparisons.
persistent=yes
# List of plugins (as comma separated values of python modules names) to load,
# usually to register additional checkers.
load-plugins=
# Use multiple processes to speed up Pylint.
jobs=4
# Allow loading of arbitrary C extensions. Extensions are imported into the
# active Python interpreter and may run arbitrary code.
unsafe-load-any-extension=no
# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loading into the active Python interpreter and may
# run arbitrary code
extension-pkg-whitelist=lxml.etree,greenlet
[MESSAGES CONTROL]
# Disable the message, report, category or checker with the given id(s). You
# can either give multiple identifier separated by comma (,) or put this option
# multiple time (only on the command line, not in the configuration file where
# it should appear only once).
# See "Messages Control" section of
# https://pylint.readthedocs.io/en/latest/user_guide
disable=
# C codes refer to Convention
C0103, # invalid-name
C0104, # disallowed-name
C0112, # empty-docstring
C0114, # missing-module-docstring
C0115, # missing-class-docstring
C0116, # missing-function-docstring
C0123, # unidiomatic-typecheck !!!
C0201, # consider-iterating-dictionary
C0202, # bad-classmethod-argument
C0206, # consider-using-dict-items
C0207, # use-maxsplit-arg
C0209, # consider-using-f-string
C0301, # line-too-long
C0302, # too-many-lines
C0325, # superfluous-parens
C0411, # wrong-import-order
C0412, # ungrouped-imports
C0413, # wrong-import-position
C0414, # useless-import-alias !!!
C0415, # import-outside-toplevel
C1802, # use-implicit-booleaness-not-len !!!
C2801, # unnecessary-dunder-call !!!
C3002, # unnecessary-direct-lambda-call !!!
# R codes refer to refactoring
R0022, # useless-option-value !!!
R0205, # useless-object-inheritance
R0402, # consider-using-from-import
R0901, # too-many-ancestors
R0902, # too-many-instance-attributes
R0903, # too-few-public-methods
R0904, # too-many-public-methods
R0911, # too-many-return-statements
R0912, # too-many-branches
R0913, # too-many-arguments
R0914, # too-many-locals
R0915, # too-many-statements
R0916, # too-many-boolean-expressions
R1702, # too-many-nested-blocks
R1703, # simplifiable-if-statement
R1704, # redefined-argument-from-local !!!
R1705, # no-else-return
R1707, # trailing-comma-tuple !!!
R1708, # stop-iteration-return !!!
R1710, # inconsistent-return-statements
R1711, # useless-return
R1714, # consider-using-in
R1717, # consider-using-dict-comprehension !!!
R1718, # consider-using-set-comprehension
R1719, # simplifiable-if-expression
R1720, # no-else-raise
R1721, # unnecessary-comprehension
R1722, # consider-using-sys-exit !!!
R1723, # no-else-break
R1724, # no-else-continue
R1725, # super-with-arguments
R1726, # simplifiable-condition !!!
R1728, # consider-using-generator
R1729, # use-a-generator
R1730, # consider-using-min-builtin !!!
R1731, # consider-using-max-builtin !!!
R1732, # consider-using-with
R1733, # unnecessary-dict-index-lookup !!
R1734, # use-list-literal
R1735, # use-dict-literal
# W codes are warnings
W0101, # unreachable
W0105, # pointless-string-statement
W0106, # expression-not-assigned
W0107, # unnecessary-pass
W0108, # unnecessary-lambda
W0109, # duplicate-key !!!
W0123, # eval-used
W0125, # using-constant-test !!!
W0133, # pointless-exception-statement !!!
W0143, # comparison-with-callable !!!
W0150, # lost-exception
W0201, # attribute-defined-outside-init
W0211, # bad-staticmethod-argument
W0212, # protected-access
W0221, # arguments-differ
W0223, # abstract-method
W0231, # super-init-not-called
W0235, # useless-super-delegation
W0237, # arguments-renamed !!!
W0311, # bad-indentation
W0402, # deprecated-module
W0404, # reimported
W0511, # fixme
W0602, # global-variable-not-assigned !!!
W0603, # global-statement
W0612, # unused-variable
W0613, # unused-argument
W0621, # redefined-outer-name
W0622, # redefined-builtin
W0631, # undefined-loop-variable
W0703, # broad-except (pylint 2.16 renamed to broad-except-caught)
W0706, # try-except-raise
W0707, # raise-missing-from
W0719, # broad-exception-raised
W1113, # keyword-arg-before-vararg
W1310, # format-string-without-interpolation !!!
W1401, # anomalous-backslash-in-string
W1406, # redundant-u-string-prefix
W1505, # deprecated-method
W1514, # unspecified-encoding
W3101, # missing-timeout
E0601, # used-before-assignment !!!
E0605, # invalid-all-format !!!
E1101, # no-member
E1111, # assignment-from-no-return
E1121, # too-many-function-args !!!
E1123, # unexpected-keyword-arg !!!
E1136, # unsubscriptable-object !!!
[REPORTS]
# Set the output format. Available formats are text, parseable, colorized, msvs
# (visual studio) and html
output-format=text
# Tells whether to display a full report or only the messages
reports=no
# Python expression which should return a note less than 10 (10 is the highest
# note). You have access to the variables errors warning, statement which
# respectively contain the number of errors / warnings messages and the total
# number of statements analyzed. This is used by the global evaluation report
# (RP0004).
evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
[SIMILARITIES]
# Minimum lines number of a similarity.
min-similarity-lines=4
# Ignore comments when computing similarities.
ignore-comments=yes
# Ignore docstrings when computing similarities.
ignore-docstrings=yes
[FORMAT]
# Maximum number of characters on a single line.
max-line-length=85
# Maximum number of lines in a module
max-module-lines=1000
# String used as indentation unit. This is usually 4 spaces or "\t" (1 tab).
indent-string=' '
[TYPECHECK]
# Tells whether missing members accessed in mixin class should be ignored. A
# mixin class is detected if its name ends with "mixin" (case insensitive).
ignore-mixin-members=yes
# List of module names for which member attributes should not be checked
# (useful for modules/projects where namespaces are manipulated during runtime
# and thus existing member attributes cannot be deduced by static analysis
ignored-modules=distutils,eventlet.green.subprocess,six,six.moves
# List of classes names for which member attributes should not be checked
# (useful for classes with attributes dynamically set).
# pylint is confused by sqlalchemy Table, as well as sqlalchemy Enum types
# ie: (unprovisioned, identity)
# LookupDict in requests library confuses pylint
ignored-classes=SQLObject, optparse.Values, thread._local, _thread._local,
Table, unprovisioned, identity, LookupDict
# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E0201 when accessed. Python regular
# expressions are accepted.
generated-members=REQUEST,acl_users,aq_parent
[BASIC]
# Regular expression which should only match correct module names
module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$
# Regular expression which should only match correct module level names
const-rgx=(([A-Z_][A-Z0-9_]*)|(__.*__))$
# Regular expression which should only match correct class names
class-rgx=[A-Z_][a-zA-Z0-9]+$
# Regular expression which should only match correct function names
function-rgx=[a-z_][a-z0-9_]{2,30}$
# Regular expression which should only match correct method names
method-rgx=[a-z_][a-z0-9_]{2,30}$
# Regular expression which should only match correct instance attribute names
attr-rgx=[a-z_][a-z0-9_]{2,30}$
# Regular expression which should only match correct argument names
argument-rgx=[a-z_][a-z0-9_]{2,30}$
# Regular expression which should only match correct variable names
variable-rgx=[a-z_][a-z0-9_]{2,30}$
# Regular expression which should only match correct list comprehension /
# generator expression variable names
inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$
# Good variable names which should always be accepted, separated by a comma
good-names=i,j,k,ex,Run,_
# Bad variable names which should always be refused, separated by a comma
bad-names=foo,bar,baz,toto,tutu,tata
# Regular expression which should only match functions or classes name which do
# not require a docstring
no-docstring-rgx=__.*__
[MISCELLANEOUS]
# List of note tags to take in consideration, separated by a comma.
notes=FIXME,XXX,TODO
[VARIABLES]
# Tells whether we should check for unused import in __init__ files.
init-import=no
# A regular expression matching the beginning of the name of dummy variables
# (i.e. not used).
dummy-variables-rgx=_|dummy
# List of additional names supposed to be defined in builtins. Remember that
# you should avoid to define new builtins when possible.
additional-builtins=
[IMPORTS]
# Deprecated modules which should not be used, separated by a comma
deprecated-modules=regsub,string,TERMIOS,Bastion,rexec
# Create a graph of every (i.e. internal and external) dependencies in the
# given file (report RP0402 must not be disabled)
import-graph=
# Create a graph of external dependencies in the given file (report RP0402 must
# not be disabled)
ext-import-graph=
# Create a graph of internal dependencies in the given file (report RP0402 must
# not be disabled)
int-import-graph=
[DESIGN]
# Maximum number of arguments for function / method
max-args=5
# Argument names that match this expression will be ignored. Default to name
# with leading underscore
ignored-argument-names=_.*
# Maximum number of locals for function / method body
max-locals=15
# Maximum number of return / yield for function / method body
max-returns=6
# Maximum number of branch for function / method body
max-branches=12
# Maximum number of statements in function / method body
max-statements=50
# Maximum number of parents for a class (see R0901).
max-parents=7
# Maximum number of attributes for a class (see R0902).
max-attributes=7
# Minimum number of public methods for a class (see R0903).
min-public-methods=2
# Maximum number of public methods for a class (see R0904).
max-public-methods=20
[CLASSES]
# List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods=__init__,__new__,setUp
# List of valid names for the first argument in a class method.
valid-classmethod-first-arg=cls
[EXCEPTIONS]
# Exceptions that will emit a warning when caught.
overgeneral-exceptions=builtins.BaseException,builtins.Exception

View File

@ -0,0 +1,3 @@
pbr>=2.0.0
PyYAML>=3.10.0
pycryptodome

View File

@ -0,0 +1,44 @@
[metadata]
name = k8sapp_rook_ceph
summary = StarlingX sysinv extensions for rook-ceph
long_description = file: README.rst
long_description_content_type = text/x-rst
license = Apache 2.0
author = StarlingX
author-email = starlingx-discuss@lists.starlingx.io
home-page = https://www.starlingx.io/
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 3
Programming Language :: Python :: 3.9
[files]
packages =
k8sapp_rook_ceph
[global]
setup-hooks =
pbr.hooks.setup_hook
[entry_points]
systemconfig.helm_applications =
rook-ceph = systemconfig.helm_plugins.rook_ceph_apps
systemconfig.helm_plugins.rook_ceph_apps =
001_rook-ceph = k8sapp_rook_ceph.helm.rook_ceph:RookCephHelm
002_rook-ceph-cluster = k8sapp_rook_ceph.helm.rook_ceph_cluster:RookCephClusterHelm
003_rook-ceph-provisioner = k8sapp_rook_ceph.helm.rook_ceph_provisioner:RookCephClusterProvisionerHelm
systemconfig.fluxcd.kustomize_ops =
rook-ceph = k8sapp_rook_ceph.kustomize.kustomize_rook_ceph:RookCephFluxCDKustomizeOperator
systemconfig.app_lifecycle =
rook-ceph = k8sapp_rook_ceph.lifecycle.lifecycle_rook_ceph:RookCephAppLifecycleOperator
[wheel]
universal = 1

View File

@ -0,0 +1,12 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import setuptools
setuptools.setup(
setup_requires=['pbr>=0.5'],
pbr=True)

View File

@ -0,0 +1,20 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking>=1.1.0,<=2.0.0 # Apache-2.0
astroid
bandit<1.7.2;python_version>="3.0"
coverage>=3.6
fixtures>=3.0.0 # Apache-2.0/BSD
mock>=2.0.0 # BSD
python-subunit>=0.0.18
requests-mock>=0.6.0 # Apache-2.0
sphinx
oslosphinx
oslotest>=3.2.0 # Apache-2.0
stestr>=1.0.0 # Apache-2.0
testrepository>=0.0.18
testtools!=1.2.0,>=0.9.36
isort<5;python_version>="3.0"
pylint
pycryptodomex

View File

@ -0,0 +1,188 @@
[tox]
envlist = flake8,py39,pylint,metadata,bandit
minversion = 1.6
skipsdist = True
# tox does not work if the path to the workdir is too long, so move it to /tmp
# tox 3.1.0 adds TOX_LIMITED_SHEBANG
toxworkdir = /tmp/{env:USER}_k8srooktox
stxdir = {toxinidir}/../../..
distshare={toxworkdir}/.tox/distshare
[testenv]
basepython = python3.9
usedevelop = True
# tox is silly... these need to be separated by a newline....
allowlist_externals = bash
find
echo
install_command = pip install -v -v -v \
-c{env:UPPER_CONSTRAINTS_FILE:https://opendev.org/starlingx/root/raw/branch/master/build-tools/requirements/debian/upper-constraints.txt} \
{opts} {packages}
# Note the hash seed is set to 0 until can be tested with a
# random hash seed successfully.
setenv = VIRTUAL_ENV={envdir}
PYTHONHASHSEED=0
PIP_RESOLVER_DEBUG=1
PYTHONDONTWRITEBYTECODE=1
OS_TEST_PATH=./k8sapp_rook_ceph/tests
LANG=en_US.UTF-8
LANGUAGE=en_US:en
LC_ALL=C
EVENTS_YAML=./k8sapp_rook_ceph/tests/events_for_testing.yaml
SYSINV_TEST_ENV=True
TOX_WORK_DIR={toxworkdir}
PYLINTHOME={toxworkdir}
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
-e{[tox]stxdir}/config/sysinv/sysinv/sysinv
-e{[tox]stxdir}/config/tsconfig/tsconfig
-e{[tox]stxdir}/fault/fm-api/source
-e{[tox]stxdir}/fault/python-fmclient/fmclient
-e{[tox]stxdir}/update/sw-patch/cgcs-patch
-e{[tox]stxdir}/utilities/ceph/python-cephclient/python-cephclient
commands =
find . -type f -name "*.pyc" -delete
[flake8]
# H series are hacking
# H101 is TODO
# H102 is apache license
# H104 file contains only comments (ie: license)
# H105 author tags
# H306 imports not in alphabetical order
# H401 docstring should not start with a space
# H403 multi line docstrings should end on a new line
# H404 multi line docstring should start without a leading new line
# H405 multi line docstring summary not separated with an empty line
# H701 Empty localization string
# H702 Formatting operation should be outside of localization method call
# H703 Multiple positional placeholders
# B series are bugbear
# B006 Do not use mutable data structures for argument defaults. Needs to be FIXED.
# B007 Loop control variable not used within the loop body.
# B009 Do not call getattr with a constant attribute value
# B010 Do not call setattr with a constant attribute value
# B012 return/continue/break inside finally blocks cause exceptions to be silenced
# B014 Redundant exception types
# B301 Python 3 does not include `.iter*` methods on dictionaries. (this should be suppressed on a per line basis)
# W series are warnings
# W503 line break before binary operator
# W504 line break after binary operator
# W605 invalid escape sequence
# E series are pep8
# E117 over-indented
# E126 continuation line over-indented for hanging indent
# E127 continuation line over-indented for visual indent
# E128 continuation line under-indented for visual indent
# E402 module level import not at top of file
# E741 ambiguous variable name
ignore = H101,H102,H104,H105,H306,H401,H403,H404,H405,H701,H702,H703,
B006,B007,B009,B010,B012,B014,B301
W503,W504,W605,
E117,E126,E127,E128,E402,E741
exclude = build,dist,tools,.eggs
max-line-length=120
[testenv:flake8]
deps = -r{toxinidir}/test-requirements.txt
commands =
flake8 {posargs} ./k8sapp_rook_ceph
[testenv:py39]
commands =
stestr run {posargs}
stestr slowest
[testenv:pep8]
# testenv:flake8 clone
deps = -r{toxinidir}/test-requirements.txt
commands = {[testenv:flake8]commands}
[testenv:venv]
commands = {posargs}
[bandit]
# The following bandit tests are being skipped:
# B101: Test for use of assert
# B103: Test for setting permissive file permissions
# B104: Test for binding to all interfaces
# B105: Test for use of hard-coded password strings
# B108: Test for insecure usage of tmp file/directory
# B110: Try, Except, Pass detected.
# B303: Use of insecure MD2, MD4, MD5, or SHA1 hash function.
# B307: Blacklisted call to eval.
# B310: Audit url open for permitted schemes
# B311: Standard pseudo-random generators are not suitable for security/cryptographic purposes
# B314: Blacklisted calls to xml.etree.ElementTree
# B318: Blacklisted calls to xml.dom.minidom
# B320: Blacklisted calls to lxml.etree
# B404: Import of subprocess module
# B405: import xml.etree
# B408: import xml.minidom
# B410: import lxml
# B506: Test for use of yaml load
# B602: Test for use of popen with shell equals true
# B603: Test for use of subprocess without shell equals true
# B604: Test for any function with shell equals true
# B605: Test for starting a process with a shell
# B607: Test for starting a process with a partial path
# B608: Possible SQL injection vector through string-based query
#
# Note: 'skips' entry cannot be split across multiple lines
#
skips = B101,B103,B104,B105,B108,B110,B303,B307,B310,B311,B314,B318,B320,B404,B405,B408,B410,B506,B602,B603,B604,B605,B607,B608
exclude = tests
[testenv:bandit]
deps = -r{toxinidir}/test-requirements.txt
commands = bandit --ini tox.ini -n 5 -r k8sapp_rook_ceph
[testenv:pylint]
install_command = pip install -v -v -v \
-c{env:UPPER_CONSTRAINTS_FILE:https://opendev.org/starlingx/root/raw/branch/master/build-tools/requirements/debian/upper-constraints.txt} \
{opts} {packages}
commands =
pylint {posargs} k8sapp_rook_ceph --rcfile=./pylint.rc
[testenv:cover]
# not sure is passenv is still needed
passenv = CURL_CA_BUNDLE
deps = {[testenv]deps}
setenv = {[testenv]setenv}
PYTHON=coverage run --parallel-mode
commands =
{[testenv]commands}
coverage erase
stestr run {posargs}
coverage combine
coverage html -d cover
coverage xml -o cover/coverage.xml
coverage report
[testenv:pip-missing-reqs]
# do not install test-requirements as that will pollute the virtualenv for
# determining missing packages
# this also means that pip-missing-reqs must be installed separately, outside
# of the requirements.txt files
deps = pip_missing_reqs
-rrequirements.txt
commands=pip-missing-reqs -d --ignore-file=/k8sapp_rook_ceph/tests k8sapp_rook_ceph
[testenv:metadata]
install_command = pip install -v -v -v \
-c{env:UPPER_CONSTRAINTS_FILE:https://opendev.org/starlingx/root/raw/branch/master/build-tools/requirements/debian/upper-constraints.txt} \
{opts} {packages}
# Pass top level app folder to 'sysinv-app tox' command.
commands =
bash -c "echo $(dirname $(dirname $(pwd))) | xargs -n 1 sysinv-app tox"

View File

@ -0,0 +1 @@
# Override upstream constraints based on StarlingX load

View File

@ -0,0 +1,5 @@
stx-rook-ceph-helm (2.0-0) unstable; urgency=medium
* Initial release.
-- Caio Cesar Correa <caio.correa@windriver.com> Tue, 09 Apr 2024 15:00:00 -0300

View File

@ -0,0 +1,17 @@
Source: stx-rook-ceph-helm
Section: admin
Priority: optional
Maintainer: StarlingX Developers <starlingx-discuss@lists.starlingx.io>
Build-Depends: debhelper-compat (= 13),
rook-ceph-helm,
build-info,
rook-ceph-provisioner-helm,
python3-k8sapp-rook-ceph-wheels,
Standards-Version: 4.1.2
Homepage: https://www.starlingx.io
Package: stx-rook-ceph-helm
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: StarlingX K8S application: App Rook Ceph
The StarlingX K8S application for Rook Ceph

View File

@ -0,0 +1,43 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: stx-rook-ceph-helm
Source: https://opendev.org/starlingx/rook-ceph/
Files: *
Copyright:
(c) 2024 Wind River Systems, Inc
(c) Others (See individual files for more details)
License: Apache-2
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
.
https://www.apache.org/licenses/LICENSE-2.0
.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
.
On Debian-based systems the full text of the Apache version 2.0 license
can be found in `/usr/share/common-licenses/Apache-2.0'.
# If you want to use GPL v2 or later for the /debian/* files use
# the following clauses, or change it to suit. Delete these two lines
Files: debian/*
Copyright: 2024 Wind River Systems, Inc
License: Apache-2
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
.
https://www.apache.org/licenses/LICENSE-2.0
.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
.
On Debian-based systems the full text of the Apache version 2.0 license
can be found in `/usr/share/common-licenses/Apache-2.0'.

View File

@ -0,0 +1,73 @@
#!/usr/bin/make -f
export DH_VERBOSE = 1
export ROOT = debian/tmp
export APP_FOLDER = $(ROOT)/usr/local/share/applications/helm
export INITRD_DIR = $(ROOT)/etc/init.d
export DEB_VERSION = $(shell dpkg-parsechangelog | egrep '^Version:' | cut -f 2 -d ' ')
export RELEASE = $(shell cat /etc/build.info | grep SW_VERSION | cut -d'"' -f2)
export REVISION = $(shell echo $(DEB_VERSION) | cut -f 4 -d '.')
export APP_NAME = rook-ceph
export APP_VERSION = $(RELEASE)-$(REVISION)
export APP_TARBALL = $(APP_NAME)-$(APP_VERSION).tgz
export HELM_REPO = stx-platform
export HELM_FOLDER = /usr/lib/helm
export STAGING = staging
%:
dh $@
override_dh_auto_build:
# Setup staging
mkdir -p $(STAGING)
cp files/metadata.yaml $(STAGING)
cp -Rv fluxcd-manifests $(STAGING)
mkdir -p $(STAGING)/charts
cp $(HELM_FOLDER)/rook-ceph*.tgz $(STAGING)/charts
# Adjust the helmrelease yamls based on the chart versions
for c in $(STAGING)/charts/*; do \
chart=$$(basename $$c .tgz); \
chart_name=$${chart%-*}; \
chart_version=$${chart##*-}; \
echo "Found $$chart; name: $$chart_name, version: $$chart_version"; \
chart_manifest=$$(find $(STAGING)/fluxcd-manifests/$$chart_name -name helmrelease.yaml -exec grep -q $$chart_name {} \; -print); \
echo "Updating manifest: $$chart_manifest"; \
sed -i "s/REPLACE_HELM_CHART_VERSION/$$chart_version/g" $$chart_manifest; \
grep version $$chart_manifest; \
done
# Populate metadata
sed -i 's/APP_REPLACE_NAME/$(APP_NAME)/g' $(STAGING)/metadata.yaml
sed -i 's/APP_REPLACE_VERSION/$(APP_VERSION)/g' $(STAGING)/metadata.yaml
sed -i 's/HELM_REPLACE_REPO/$(HELM_REPO)/g' $(STAGING)/metadata.yaml
# Copy the plugins: installed in the buildroot
mkdir -p $(STAGING)/plugins
cp /plugins/*.whl $(STAGING)/plugins
# Package it up
cd $(STAGING)
find . -type f ! -name '*.md5' -print0 | xargs -0 md5sum > checksum.md5
tar -zcf $(APP_TARBALL) -C $(STAGING)/ .
# Cleanup staging
rm -fr $(STAGING)
override_dh_auto_install:
# Install the app tar file
install -d -m 755 $(APP_FOLDER)
install -d -m 755 $(INITRD_DIR)
install -p -D -m 755 $(APP_TARBALL) $(APP_FOLDER)
install -m 750 files/rook-mon-exit.sh $(INITRD_DIR)/rook-mon-exit
# Prevents dh_fixperms from changing the permissions defined in this file
override_dh_fixperms:
dh_fixperms --exclude etc/init.d/rook-mon-exit
override_dh_usrlocal:

View File

@ -0,0 +1 @@
3.0 (quilt)

View File

@ -0,0 +1,2 @@
usr/local/share/applications/helm/*
etc/init.d/rook-mon-exit

View File

@ -0,0 +1,11 @@
---
debname: stx-rook-ceph-helm
debver: 2.0-0
src_path: stx-rook-ceph-helm
src_files:
- files
revision:
dist: $STX_DIST
GITREVCOUNT:
SRC_DIR: ${MY_REPO}/stx/app-rook-ceph
BASE_SRCREV: c6c693d51cdc6daa4eafe34ccab5ce35496bf516

View File

@ -0,0 +1,80 @@
#!/bin/bash
#
# Copyright (c) 2020 Intel Corporation, Inc.
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
RETVAL=0
################################################################################
# Start Action
################################################################################
function start {
return
}
################################################################################
# Stop Action
################################################################################
function stop {
pgrep ceph-mon
if [ x"$?" = x"0" ]; then
kubectl --kubeconfig=/etc/kubernetes/admin.conf delete \
deployments.apps -n rook-ceph rook-ceph-mon-a
kubectl --kubeconfig=/etc/kubernetes/admin.conf delete po \
-n rook-ceph --selector="app=rook-ceph-mon,mon=a"
fi
pgrep ceph-osd
if [ x"$?" = x"0" ]; then
kubectl --kubeconfig=/etc/kubernetes/admin.conf delete \
deployments.apps -n rook-ceph \
--selector="app=rook-ceph-osd,failure-domain=$(hostname)"
kubectl --kubeconfig=/etc/kubernetes/admin.conf delete po \
--selector="app=rook-ceph-osd,failure-domain=$(hostname)" \
-n rook-ceph
fi
}
################################################################################
# Status Action
################################################################################
function status {
pgrep sysinv-api
RETVAL=$?
return
}
################################################################################
# Main Entry
################################################################################
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
stop
start
;;
status)
status
;;
*)
echo "usage: $0 { start | stop | status | restart }"
exit 1
;;
esac
exit $RETVAL

View File

@ -0,0 +1,26 @@
app_name: APP_REPLACE_NAME
app_version: APP_REPLACE_VERSION
helm_repo: HELM_REPLACE_REPO
helm_toolkit_required: false
maintain_user_overrides: true
maintain_attributes: true
upgrades:
auto_update: false
supported_k8s_version:
minimum: 1.24.4
behavior:
platform_managed_app: no
evaluate_reapply:
triggers:
- type: runtime-apply-puppet
- type: host-availability-updated
- type: kube-upgrade-complete
filters:
- availability: services-enabled
- type: host-delete
filters:
- personality: controller

View File

@ -0,0 +1,13 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
name: stx-platform
spec:
url: http://192.168.206.1:8080/helm_charts/stx-platform
interval: 60m

View File

@ -0,0 +1,8 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
resources:
- helmrepository.yaml

View File

@ -0,0 +1,10 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
apiVersion: v1
kind: Namespace
metadata:
name: rook-ceph

View File

@ -0,0 +1,14 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: rook-ceph
resources:
- base
- rook-ceph
- rook-ceph-cluster
- rook-ceph-provisioner

View File

@ -0,0 +1,40 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
apiVersion: "helm.toolkit.fluxcd.io/v2beta1"
kind: HelmRelease
metadata:
name: rook-ceph-cluster
labels:
chart_group: starlingx-rook-charts
spec:
releaseName: rook-ceph-cluster
chart:
spec:
chart: rook-ceph-cluster
version: REPLACE_HELM_CHART_VERSION
sourceRef:
kind: HelmRepository
name: stx-platform
interval: 5m
timeout: 30m
dependsOn:
- name: rook-ceph
test:
enable: false
install:
disableHooks: false
upgrade:
disableHooks: false
uninstall:
disableHooks: false
valuesFrom:
- kind: Secret
name: rook-ceph-cluster-static-overrides
valuesKey: rook-ceph-cluster-static-overrides.yaml
- kind: Secret
name: rook-ceph-cluster-system-overrides
valuesKey: rook-ceph-cluster-system-overrides.yaml

View File

@ -0,0 +1,18 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
namespace: rook-ceph
resources:
- helmrelease.yaml
secretGenerator:
- name: rook-ceph-cluster-static-overrides
files:
- rook-ceph-cluster-static-overrides.yaml
- name: rook-ceph-cluster-system-overrides
files:
- rook-ceph-cluster-system-overrides.yaml
generatorOptions:
disableNameSuffixHash: true

View File

@ -0,0 +1,358 @@
# #
# # Copyright (c) 2024 Wind River Systems, Inc.
# #
# # SPDX-License-Identifier: Apache-2.0
# #
# # Default values for ceph-cluster
# # This is a YAML-formatted file.
# # Declare variables to be passed into your templates.
configOverride: |
[global]
osd_pool_default_size = 1
osd_pool_default_min_size = 1
[osd]
osd_mkfs_type = xfs
osd_mkfs_options_xfs = "-f"
osd_mount_options_xfs = "rw,noatime,inode64,logbufs=8,logbsize=256k"
[mon]
mon warn on legacy crush tunables = false
mon pg warn max per osd = 2048
mon pg warn max object skew = 0
mon clock drift allowed = .1
mon warn on pool no redundancy = false
operatorNamespace: rook-ceph
cephClusterSpec:
dataDirHostPath: /var/lib/ceph
cephVersion:
image: quay.io/ceph/ceph:v18.2.2
allowUnsupported: true
network:
provider: host
#ipFamily: "IPv6"
# Whether or not continue if PGs are not clean during an upgrade
continueUpgradeAfterChecksEvenIfNotHealthy: false
labels:
all:
app.starlingx.io/component: "platform"
resources:
mgr:
limits:
memory: "1Gi"
requests:
cpu: 0
memory: 0
mon:
limits:
memory: "2Gi"
requests:
cpu: 0
memory: 0
osd:
limits:
memory: "4Gi"
requests:
cpu: 0
memory: 0
prepareosd:
# limits: It is not recommended to set limits on the OSD prepare job
# since it's a one-time burst for memory that must be allowed to
# complete without an OOM kill. Note however that if a k8s
# limitRange guardrail is defined external to Rook, the lack of
# a limit here may result in a sync failure, in which case a
# limit should be added. 1200Mi may suffice for up to 15Ti
# OSDs ; for larger devices 2Gi may be required.
# cf. https://github.com/rook/rook/pull/11103
requests:
cpu: 0
memory: 0
mgr-sidecar:
limits:
memory: "100Mi"
requests:
cpu: 0
memory: 0
crashcollector:
limits:
memory: "60Mi"
requests:
cpu: 0
memory: 0
logcollector:
limits:
memory: "1Gi"
requests:
cpu: 0
memory: 0
cleanup:
limits:
memory: "1Gi"
requests:
cpu: 0
memory: 0
exporter:
limits:
memory: "128Mi"
requests:
cpu: 0
memory: 0
mon:
count: 1
allowMultiplePerNode: false
mgr:
count: 1
allowMultiplePerNode: false
modules:
# Several modules should not need to be included in this list. The "dashboard" and "monitoring" modules
# are already enabled by other settings in the cluster CR.
- name: pg_autoscaler
enabled: false
dashboard:
enabled: false
crashCollector:
disable: false
# #deviceFilter:
# healthCheck:
# daemonHealth:
# mon:
# interval: 45s
# timeout: 600s
# disruptionManagement:
# managePodBudgets: true
storage:
useAllNodes: false
useAllDevices: false
# priority classes to apply to ceph resources
priorityClassNames:
mon: system-node-critical
osd: system-node-critical
mgr: system-cluster-critical
placement:
all:
tolerations:
- effect: NoSchedule
operator: Exists
key: node-role.kubernetes.io/master
- effect: NoSchedule
operator: Exists
key: node-role.kubernetes.io/control-plane
mon:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: ceph-mon-placement
operator: In
values:
- enabled
mgr:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: ceph-mgr-placement
operator: In
values:
- enabled
toolbox:
enabled: true
image: quay.io/ceph/ceph:v18.2.2
tolerations:
- effect: NoSchedule
operator: Exists
key: node-role.kubernetes.io/master
- effect: NoSchedule
operator: Exists
key: node-role.kubernetes.io/control-plane
resources:
limits:
memory: "1Gi"
requests:
cpu: 0
memory: 0
# pspEnable: false
monitoring:
enabled: false
# # requires Prometheus to be pre-installed
# # enabling will also create RBAC rules to allow Operator to create ServiceMonitors
cephFileSystems:
- name: cephfs
# see https://github.com/rook/rook/blob/master/Documentation/ceph-filesystem-crd.md#filesystem-settings for available configuration
spec:
metadataPool:
replicated:
size: 1
dataPools:
- failureDomain: osd # TODO
name: data
replicated:
size: 1
metadataServer:
activeCount: 1
activeStandby: true
resources:
limits:
memory: "4Gi"
requests:
cpu: 0
memory: 0
priorityClassName: system-cluster-critical
storageClass:
enabled: true
isDefault: false
name: cephfs
pool: data
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: "Immediate"
mountOptions: []
# see https://github.com/rook/rook/blob/master/Documentation/ceph-filesystem.md#provision-storage for available configuration
parameters:
# The secrets contain Ceph admin credentials.
csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
# Specify the filesystem type of the volume. If not specified, csi-provisioner
# will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
# in hyperconverged settings where the volume is mounted on the same node as the osds.
csi.storage.k8s.io/fstype: ext4
cephBlockPools:
- name: kube-rbd
# see https://github.com/rook/rook/blob/master/Documentation/ceph-pool-crd.md#spec for available configuration
spec:
failureDomain: osd
replicated:
size: 1
storageClass:
enabled: true
name: general
isDefault: true
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: "Immediate"
mountOptions: []
allowedTopologies: []
# see https://github.com/rook/rook/blob/master/Documentation/ceph-block.md#provision-storage for available configuration
parameters:
# (optional) mapOptions is a comma-separated list of map options.
# For krbd options refer
# https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
# For nbd options refer
# https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
# mapOptions: lock_on_read,queue_depth=1024
# (optional) unmapOptions is a comma-separated list of unmap options.
# For krbd options refer
# https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options
# For nbd options refer
# https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options
# unmapOptions: force
# RBD image format. Defaults to "2".
imageFormat: "2"
# RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
imageFeatures: layering
# The secrets contain Ceph admin credentials.
csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
# Specify the filesystem type of the volume. If not specified, csi-provisioner
# will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock
# in hyperconverged settings where the volume is mounted on the same node as the osds.
csi.storage.k8s.io/fstype: ext4
# -- A list of CephObjectStore configurations to deploy
# @default -- See [below](#ceph-object-stores)
cephObjectStores:
- name: ceph-objectstore
# see https://github.com/rook/rook/blob/master/Documentation/CRDs/Object-Storage/ceph-object-store-crd.md#object-store-settings for available configuration
spec:
metadataPool:
failureDomain: osd
replicated:
size: 0
dataPool:
failureDomain: osd
erasureCoded:
dataChunks: 0
codingChunks: 0
preservePoolsOnDelete: true
gateway:
port: 80
resources:
limits:
memory: "4Gi"
requests:
cpu: 0
memory: 0
# securePort: 443
# sslCertificateRef:
instances: 1
priorityClassName: system-cluster-critical
storageClass:
enabled: false
name: ceph-bucket
reclaimPolicy: Delete
volumeBindingMode: "Immediate"
# see https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/Object-Storage-RGW/ceph-object-bucket-claim.md#storageclass for available configuration
parameters:
# note: objectStoreNamespace and objectStoreName are configured by the chart
region: us-east-1
ingress:
# Enable an ingress for the ceph-objectstore
enabled: false
# annotations: {}
# host:
# name: objectstore.example.com
# path: /
# tls:
# - hosts:
# - objectstore.example.com
# secretName: ceph-objectstore-tls
# ingressClassName: nginx
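When the `ceph-bucket` StorageClass above is enabled, object buckets are requested through Rook's ObjectBucketClaim resource. A hypothetical example (not part of this chart; note the class is `enabled: false` by default here):

```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: example-bucket
  namespace: default
spec:
  generateBucketName: example-bucket
  storageClassName: ceph-bucket  # the bucket class defined above
```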
imagePullSecrets:
- name: default-registry-key
hook:
image: docker.io/openstackhelm/ceph-config-helper:ubuntu_jammy_18.2.2-1-20240312
duplexPreparation:
enable: false
activeController: controller-0
floatIP: 192.168.206.1
cleanup:
enable: true
cluster_cleanup: rook-ceph
rbac:
clusterRole: rook-ceph-cleanup
clusterRoleBinding: rook-ceph-cleanup
role: rook-ceph-cleanup
roleBinding: rook-ceph-cleanup
serviceAccount: rook-ceph-cleanup
mon_hosts:
- controller-0


@ -0,0 +1,6 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


@ -0,0 +1,40 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
apiVersion: "helm.toolkit.fluxcd.io/v2beta1"
kind: HelmRelease
metadata:
name: rook-ceph-provisioner
labels:
chart_group: starlingx-rook-charts
spec:
releaseName: rook-ceph-provisioner
chart:
spec:
chart: rook-ceph-provisioner
version: REPLACE_HELM_CHART_VERSION
sourceRef:
kind: HelmRepository
name: stx-platform
interval: 5m
timeout: 30m
dependsOn:
- name: rook-ceph-cluster
test:
enable: false
install:
disableHooks: false
upgrade:
disableHooks: false
uninstall:
disableHooks: false
valuesFrom:
- kind: Secret
name: rook-ceph-provisioner-static-overrides
valuesKey: rook-ceph-provisioner-static-overrides.yaml
- kind: Secret
name: rook-ceph-provisioner-system-overrides
valuesKey: rook-ceph-provisioner-system-overrides.yaml


@ -0,0 +1,18 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
namespace: rook-ceph
resources:
- helmrelease.yaml
secretGenerator:
- name: rook-ceph-provisioner-static-overrides
files:
- rook-ceph-provisioner-static-overrides.yaml
- name: rook-ceph-provisioner-system-overrides
files:
- rook-ceph-provisioner-system-overrides.yaml
generatorOptions:
disableNameSuffixHash: true


@ -0,0 +1,106 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
global:
configmap_key_init: ceph-key-init-bin
#
provision_storage: true
cephfs_storage: true
job_ceph_mgr_provision: true
job_ceph_mon_audit: false
job_ceph_osd_audit: true
job_host_provision: true
job_cleanup: true
deployment_stx_ceph_manager: true
# Defines whether to generate service account and role bindings.
rbac: true
# Node Selector
nodeSelector: { node-role.kubernetes.io/control-plane: "" }
#
# RBAC options.
# Defaults should be fine in most cases.
rbac:
clusterRole: rook-ceph-provisioner
clusterRoleBinding: rook-ceph-provisioner
role: rook-ceph-provisioner
roleBinding: rook-ceph-provisioner
serviceAccount: rook-ceph-provisioner
images:
tags:
ceph_config_helper: docker.io/openstackhelm/ceph-config-helper:ubuntu_jammy_18.2.2-1-20240312
stx_ceph_manager: docker.io/starlingx/stx-ceph-manager:stx.10.0-v1.7.11
k8s_entrypoint: quay.io/airshipit/kubernetes-entrypoint:v1.0.0
provisionStorage:
# Defines the name of the provisioner associated with a set of storage classes
provisioner_name: rook-ceph.rbd.csi.ceph.com
# Enable this storage class as the system default storage class
defaultStorageClass: rook-ceph
# Configure storage classes.
# Defaults for storage classes. Update this if you have a single Ceph storage cluster.
# No need to add them to each class.
classdefaults:
# Define ip addresses of Ceph Monitors
monitors: 192.168.204.3:6789,192.168.204.4:6789,192.168.204.1:6789
# Ceph admin account
adminId: admin
# Kubernetes secret name for the admin context
adminSecretName: ceph-secret
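The `monitors` value is a single comma-separated `host:port` string. A minimal sketch (not code from the chart) of how a consumer might split it into endpoints:

```python
def parse_monitors(monitors: str) -> list[tuple[str, int]]:
    """Split a comma-separated host:port list into (host, port) pairs.

    rpartition on ':' keeps bracketed IPv6 hosts like '[fd00::1]' intact.
    """
    pairs = []
    for entry in monitors.split(","):
        host, _, port = entry.strip().rpartition(":")
        pairs.append((host, int(port)))
    return pairs

endpoints = parse_monitors(
    "192.168.204.3:6789,192.168.204.4:6789,192.168.204.1:6789"
)
```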
# Configure storage classes.
# This section should be tailored to your setup. It allows you to define multiple storage
# classes for the same cluster (e.g. if you have tiers of drives with different speeds).
# If you have multiple Ceph clusters take attributes from classdefaults and add them here.
classes:
name: rook-ceph # Name of storage class.
secret:
# Kubernetes secret name with the key for accessing the Ceph pool
userSecretName: ceph-secret-kube
# Ceph user name to access this pool
userId: kube
pool:
pool_name: kube-rbd
replication: 1
crush_rule_name: storage_tier_ruleset
chunk_size: 8
cephfsStorage:
provisioner_name: rook-ceph.cephfs.csi.ceph.com
fs_name: kube-cephfs
pool_name: kube-cephfs-data
host_provision:
controller_hosts:
- controller-0
imagePullSecrets:
- name: default-registry-key
ceph_audit_jobs:
floatIP: 192.168.204.2
audit:
cron: "*/3 * * * *"
deadline: 200
history:
success: 1
failed: 1
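The audit schedule `"*/3 * * * *"` fires on every minute divisible by 3, i.e. 20 runs per hour. A minimal sketch of standard cron `*/N` step semantics for the minute field (illustrative, not chart code):

```python
def step_minutes(expr: str) -> list[int]:
    """Expand a '*/N' cron minute field into the matching minutes of an hour."""
    if not expr.startswith("*/"):
        raise ValueError("only '*/N' step expressions are handled here")
    step = int(expr[2:])
    return [m for m in range(60) if m % step == 0]

minutes = step_minutes("*/3")  # 0, 3, 6, ..., 57
```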
hook:
image: docker.io/openstackhelm/ceph-config-helper:ubuntu_jammy_18.2.2-1-20240312
cleanup:
enable: true
rbac:
clusterRole: rook-ceph-cleanup
clusterRoleBinding: rook-ceph-cleanup
role: rook-ceph-cleanup
roleBinding: rook-ceph-cleanup
serviceAccount: rook-ceph-cleanup
mon_hosts:
- controller-0


@ -0,0 +1,6 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


@ -0,0 +1,38 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
apiVersion: "helm.toolkit.fluxcd.io/v2beta1"
kind: HelmRelease
metadata:
name: rook-ceph
labels:
chart_group: starlingx-rook-charts
spec:
releaseName: rook-ceph
chart:
spec:
chart: rook-ceph
version: REPLACE_HELM_CHART_VERSION
sourceRef:
kind: HelmRepository
name: stx-platform
interval: 5m
timeout: 30m
test:
enable: false
install:
disableHooks: false
upgrade:
disableHooks: false
uninstall:
disableHooks: true
valuesFrom:
- kind: Secret
name: rook-ceph-static-overrides
valuesKey: rook-ceph-static-overrides.yaml
- kind: Secret
name: rook-ceph-system-overrides
valuesKey: rook-ceph-system-overrides.yaml


@ -0,0 +1,19 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
namespace: rook-ceph
resources:
- helmrelease.yaml
- service-account-default.yaml
secretGenerator:
- name: rook-ceph-static-overrides
files:
- rook-ceph-static-overrides.yaml
- name: rook-ceph-system-overrides
files:
- rook-ceph-system-overrides.yaml
generatorOptions:
disableNameSuffixHash: true


@ -0,0 +1,298 @@
#
# Copyright (c) 2024 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
image:
prefix: rook
repository: docker.io/rook/ceph
tag: v1.13.7
pullPolicy: IfNotPresent
app.starlingx.io/component: platform
nodeSelector: {node-role.kubernetes.io/control-plane : ""}
# In some situations SELinux relabelling breaks (times out) on large filesystems, and doesn't work with cephfs ReadWriteMany volumes (last relabel wins).
# Disable it here if you have similar issues.
# For more details see https://github.com/rook/rook/issues/2417
enableSelinuxRelabeling: true
# Writing to the hostPath is required for the Ceph mon and osd pods. Given the restricted permissions in OpenShift with SELinux,
# the pod must run privileged in order to write to the hostPath volume; in that case this must be set to true.
hostpathRequiresPrivileged: false
# Disable automatic orchestration when new devices are discovered.
disableDeviceHotplug: false
# Blacklist certain disks according to the regex provided.
discoverDaemonUdev:
enableDiscoveryDaemon: false
allowLoopDevices: false
pspEnable: false
# Tolerations for the rook-ceph-operator to allow it to run on nodes with particular taints
tolerations:
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
crds:
# Whether the helm chart should create and update the CRDs. If false, the CRDs must be
# managed independently with cluster/examples/kubernetes/ceph/crds.yaml.
# **WARNING** Only set during first deployment. If later disabled the cluster may be DESTROYED.
# If the CRDs are deleted in this case, see the disaster recovery guide to restore them.
# https://rook.github.io/docs/rook/master/ceph-disaster-recovery.html#restoring-crds-after-deletion
enabled: true
currentNamespaceOnly: false
resources:
limits:
memory: 512Mi
requests:
cpu: 0
memory: 0
# The imagePullSecrets option allows pulling Docker images from a private registry. It is passed to all service accounts.
imagePullSecrets:
- name: default-registry-key
csi:
cephcsi:
# -- Ceph CSI image
image: quay.io/cephcsi/cephcsi:v3.10.2
registrar:
# -- Kubernetes CSI registrar image
image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0
provisioner:
# -- Kubernetes CSI provisioner image
image: registry.k8s.io/sig-storage/csi-provisioner:v4.0.0
snapshotter:
# -- Kubernetes CSI snapshotter image
image: registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1
attacher:
# -- Kubernetes CSI Attacher image
image: registry.k8s.io/sig-storage/csi-attacher:v4.5.0
resizer:
# -- Kubernetes CSI resizer image
image: registry.k8s.io/sig-storage/csi-resizer:v1.10.0
# -- Labels to add to the CSI CephFS Deployments and DaemonSets Pods
cephfsPodLabels: "app.starlingx.io/component=platform"
rbdPodLabels: "app.starlingx.io/component=platform"
kubeletDirPath: /var/lib/kubelet
pluginTolerations:
- operator: "Exists"
# -- Enable Ceph CSI RBD driver
enableRbdDriver: true
# -- Enable Ceph CSI CephFS driver
enableCephfsDriver: true
# -- Enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary
# in some network configurations where the SDN does not provide access to an external cluster or
# there is a significant drop in read/write performance.
enableCSIHostNetwork: true
# -- Enable Snapshotter in CephFS provisioner pod
enableCephfsSnapshotter: true
# -- Enable Snapshotter in NFS provisioner pod
enableNFSSnapshotter: false
# -- Enable Snapshotter in RBD provisioner pod
enableRBDSnapshotter: true
# -- Enable Host mount for `/etc/selinux` directory for Ceph CSI nodeplugins
enablePluginSelinuxHostMount: false
# -- Enable Ceph CSI PVC encryption support
enableCSIEncryption: false
# -- PriorityClassName to be set on csi driver plugin pods
pluginPriorityClassName: system-node-critical
# -- PriorityClassName to be set on csi driver provisioner pods
provisionerPriorityClassName: system-cluster-critical
# -- Policy for modifying a volume's ownership or permissions when the RBD PVC is being mounted.
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
rbdFSGroupPolicy: "File"
# -- Policy for modifying a volume's ownership or permissions when the CephFS PVC is being mounted.
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
cephFSFSGroupPolicy: "File"
# -- Enable adding volume metadata on the CephFS subvolumes and RBD images.
# Not all users might be interested in getting volume/snapshot details as metadata on CephFS subvolumes and RBD images.
# Hence `enableMetadata` is false by default.
enableMetadata: false
provisionerReplicas: 1
# -- CEPH CSI RBD provisioner resource requirement list
# csi-omap-generator resources will be applied only if `enableOMAPGenerator` is set to `true`
# @default -- see values.yaml
csiRBDProvisionerResource: |
- name : csi-provisioner
resource:
requests:
memory: 0
cpu: 0
limits:
memory: 256Mi
- name : csi-resizer
resource:
requests:
memory: 0
cpu: 0
limits:
memory: 256Mi
- name : csi-attacher
resource:
requests:
memory: 0
cpu: 0
limits:
memory: 256Mi
- name : csi-snapshotter
resource:
requests:
memory: 0
cpu: 0
limits:
memory: 256Mi
- name : csi-rbdplugin
resource:
requests:
memory: 0
limits:
memory: 1Gi
# -- CEPH CSI RBD plugin resource requirement list
# @default -- see values.yaml
csiRBDPluginResource: |
- name : driver-registrar
resource:
requests:
memory: 0
cpu: 0
limits:
memory: 256Mi
- name : csi-rbdplugin
resource:
requests:
memory: 0
cpu: 0
limits:
memory: 1Gi
- name : liveness-prometheus
resource:
requests:
memory: 0
cpu: 0
limits:
memory: 256Mi
# -- CEPH CSI CephFS provisioner resource requirement list
# @default -- see values.yaml
csiCephFSProvisionerResource: |
- name : csi-provisioner
resource:
requests:
memory: 0
cpu: 0
limits:
memory: 256Mi
- name : csi-resizer
resource:
requests:
memory: 0
cpu: 0
limits:
memory: 256Mi
- name : csi-attacher
resource:
requests:
memory: 0
cpu: 0
limits:
memory: 256Mi
- name : csi-snapshotter
resource:
requests:
memory: 0
cpu: 0
limits:
memory: 256Mi
- name : csi-cephfsplugin
resource:
requests:
memory: 0
cpu: 0
limits:
memory: 1Gi
# -- CEPH CSI CephFS plugin resource requirement list
# @default -- see values.yaml
csiCephFSPluginResource: |
- name : driver-registrar
resource:
requests:
memory: 0
cpu: 0
limits:
memory: 256Mi
- name : csi-cephfsplugin
resource:
requests:
memory: 0
cpu: 0
limits:
memory: 1Gi
# -- Enable Ceph Kernel clients on kernel < 4.17. If your kernel does not support quotas for CephFS
# you may want to disable this setting. However, this will cause an issue during upgrades
# with the FUSE client. See the [upgrade guide](https://rook.io/docs/rook/v1.2/ceph-upgrade.html)
forceCephFSKernelClient: true
# -- Whether to skip any attach operation altogether for CephFS PVCs. See more details
# [here](https://kubernetes-csi.github.io/docs/skip-attach.html#skip-attach-with-csi-driver-object).
# If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation
# of pods using the CephFS PVC fast. **WARNING** It's highly discouraged to use this for
# CephFS RWO volumes. Refer to this [issue](https://github.com/kubernetes/kubernetes/issues/103305) for more details.
cephFSAttachRequired: true
# -- Whether to skip any attach operation altogether for RBD PVCs. See more details
# [here](https://kubernetes-csi.github.io/docs/skip-attach.html#skip-attach-with-csi-driver-object).
# If set to false it skips the volume attachments and makes the creation of pods using the RBD PVC fast.
# **WARNING** It's highly discouraged to use this for RWO volumes as it can cause data corruption.
# csi-addons operations like Reclaimspace and PVC Keyrotation will also not be supported if set
# to false since we'll have no VolumeAttachments to determine which node the PVC is mounted on.
# Refer to this [issue](https://github.com/kubernetes/kubernetes/issues/103305) for more details.
rbdAttachRequired: true
provisionerTolerations:
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
admissionController:
# Set tolerations and nodeAffinity for admission controller pod.
# The admission controller would be best to start on the same nodes as other ceph daemons.
tolerations:
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule

Some files were not shown because too many files have changed in this diff.