Network Resources Cleanup before OpenStack Removal

A new job is introduced to clean up network resources
before OpenStack removal.
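
The job ships with the Neutron chart and is registered as a Helm
pre-delete hook (see job-resources-cleanup.yaml below), so it runs
automatically when the Neutron release is removed. A minimal sketch for
observing it, assuming kubectl access on the controller; the "openstack"
namespace is an assumption for a default stx-openstack deployment, and
the hook-delete-policy removes the job object once it finishes:

    # Watch the cleanup job while the OpenStack removal is in progress.
    # Namespace "openstack" is an assumption for a default deployment.
    kubectl -n openstack get jobs --watch | grep neutron-resources-cleanup
    # Tail its output before the hook-delete-policy garbage-collects it:
    kubectl -n openstack logs -f job/neutron-resources-cleanup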

TESTS                                                       STATUS

- After a fresh deployment of the existing stx-openstack    PASSED
  app, check that VMs can be launched successfully.

- Delete VMs and remove OpenStack, without a cleanup of     PASSED
  network resources. After re-deploying OpenStack, verify
  that an error scenario arises where new VMs cannot be
  launched, with the following error message (see the
  reproduction sketch after this list):
  "nova.exception.VirtualInterfaceCreateException ...
  Failed to allocate the network(s), not rescheduling".

- Load the updated charts in a fresh OpenStack re-install.  PASSED
  Assert that OpenStack is deployed successfully, and that
  VMs can be launched successfully.

- Delete VMs and remove OpenStack, without any manual       PASSED
  cleanup of network resources. After re-deploying
  OpenStack, verify that the previous error scenario no
  longer occurs and that new VMs can be launched
  successfully.

- Verify that the OpenStack charts' deployment sequence     PASSED
  is unchanged compared to the existing codebase.

- Verify that, for all OpenStack charts except the ones     PASSED
  under the compute-kit group, the removal sequence is
  the reverse of the deployment sequence.

- Verify that the charts under the compute-kit group are    PASSED
  deployed in parallel but removed in sequence. The
  removal sequence is the reverse of the deployment
  sequence defined in the OpenStack Armada App manifest.

- Verify that, within the OpenStack compute-kit group,      PASSED
  the Neutron chart is the first one to be removed.

Partial-Bug: 1892659

Explanatory text: this is a system improvement that aims
to prevent bugs similar to the one above.

Signed-off-by: rferraz <RogerioOliveira.Ferraz@windriver.com>
Change-Id: I268ab75a849734874646b5f23b0bcdbe5faae1ef
rferraz 2022-05-09 13:51:57 -03:00
parent 97be23078f
commit b5b4cc562a
8 changed files with 1237 additions and 92 deletions

.gitignore

@@ -1 +1,2 @@
.tox
.idea/


@@ -34,6 +34,7 @@ Patch12: 0012-Replace-deprecated-Nova-VNC-configurations.patch
Patch13: 0013-Remove-TLS-from-openstack-services.patch
Patch14: 0014-Remove-mariadb-and-rabbit-tls.patch
Patch15: 0015-Decrease-terminationGracePeriodSeconds-on-glance-api.patch
Patch16: 0016-Network-Resources-Cleanup-before-OpenStack-Removal.patch
BuildRequires: helm
BuildRequires: openstack-helm-infra
@@ -60,6 +61,7 @@ Openstack Helm charts
%patch13 -p1
%patch14 -p1
%patch15 -p1
%patch16 -p1
%build
# Stage helm-toolkit in the local repo


@@ -0,0 +1,431 @@
From 26035d478bc2e70182446658f3677b079818305e Mon Sep 17 00:00:00 2001
From: rferraz <RogerioOliveira.Ferraz@windriver.com>
Date: Wed, 25 May 2022 05:49:04 -0300
Subject: [PATCH] Network Resources Cleanup before OpenStack Removal
This patch introduces a new job to clean up network
resources before OpenStack removal.
Changes:
- new file: neutron/templates/bin/_neutron-resources-cleanup.sh.tpl
- new file: neutron/templates/job-resources-cleanup.yaml
- modified: neutron/templates/configmap-bin.yaml
- modified: neutron/values.yaml
Signed-off-by: rferraz <RogerioOliveira.Ferraz@windriver.com>
---
.../bin/_neutron-resources-cleanup.sh.tpl | 220 ++++++++++++++++++
neutron/templates/configmap-bin.yaml | 2 +
neutron/templates/job-resources-cleanup.yaml | 81 +++++++
neutron/values.yaml | 31 +++
4 files changed, 334 insertions(+)
create mode 100644 neutron/templates/bin/_neutron-resources-cleanup.sh.tpl
create mode 100644 neutron/templates/job-resources-cleanup.yaml
diff --git a/neutron/templates/bin/_neutron-resources-cleanup.sh.tpl b/neutron/templates/bin/_neutron-resources-cleanup.sh.tpl
new file mode 100644
index 00000000..8d38373d
--- /dev/null
+++ b/neutron/templates/bin/_neutron-resources-cleanup.sh.tpl
@@ -0,0 +1,220 @@
+#!/bin/bash
+
+{{/*
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/}}
+
+set -ex
+
+function cleanup_network_trunks()
+{
+ TRUNKS=$(openstack network trunk list -c ID -f value)
+ PORTS=$(openstack network trunk list -c "Parent Port" -f value)
+
+ for TRUNK in ${TRUNKS}; do
+ openstack network trunk delete ${TRUNK}
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to delete trunk ${TRUNK}"
+ return ${RET}
+ fi
+ done
+
+ for PORT in ${PORTS}; do
+ openstack port delete ${PORT}
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to delete port ${PORT}"
+ return ${RET}
+ fi
+ done
+ return 0
+}
+
+function cleanup_vm_instances()
+{
+ local VMLIST=""
+ local ID=""
+ local RETRY=0
+
+ VMLIST=$(openstack server list --all-projects -c ID -f value)
+ for VM in ${VMLIST}; do
+ openstack server delete ${VM} --wait
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to delete VM ${ID}"
+ return ${RET}
+ fi
+ done
+
+ return 0
+}
+
+function cleanup_floating_ips()
+{
+ local IPLIST=""
+ local IP=""
+
+ IPLIST=$(openstack floating ip list | grep -E "[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+" | awk '{ print $2; }')
+ for IP in ${IPLIST}; do
+ openstack floating ip delete ${IP}
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to delete floating ip ${IP}"
+ return 1
+ fi
+ done
+
+ return 0
+}
+
+function cleanup_manual_ports()
+{
+ PORTS=$(openstack port list --device-owner=compute:manual | grep -E "^\|\s\w{8}-\w{4}-\w{4}-\w{4}-\w{12}\s\|" | awk '{ print $2; }')
+ for PORT in ${PORTS}; do
+ openstack port delete ${PORT}
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to delete manual port ${PORT}"
+ return 1
+ fi
+ done
+
+ return 0
+}
+
+function cleanup_routers()
+{
+ local ROUTERLIST=""
+ local ID=""
+
+ ROUTERLIST=$(openstack router list -c ID -f value)
+ for ID in ${ROUTERLIST}; do
+ openstack router set ${ID} --no-route
+ openstack router unset --external-gateway ${ID}
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to clear gateway on router ${ID}"
+ return 1
+ fi
+
+ PORTS=$(openstack port list --router ${ID} -c ID -f value)
+ for PORT in ${PORTS}; do
+ openstack router remove port ${ID} ${PORT}
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to delete interface ${PORT} from router ${ID}"
+ return ${RET}
+ fi
+ done
+
+ openstack router delete ${ID}
+ if [ $? -ne 0 ]; then
+ echo "Failed to delete router ${ID}"
+ return 1
+ fi
+ done
+
+ return 0
+}
+
+function cleanup_application_ports()
+{
+ NETS=$(openstack network list -c ID -f value)
+ for NET in $NETS; do
+ NET_PORTS=$(openstack port list --network $NET -c ID -f value)
+ for NET_PORT in $NET_PORTS; do
+ openstack port delete $NET_PORT
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to delete port ${NET_PORT}"
+ return 1
+ fi
+ done
+ done
+
+ return 0
+}
+
+function cleanup_networks()
+{
+ local ID=""
+ NETLIST=$(openstack network list -c ID -f value)
+ for ID in ${NETLIST}; do
+ openstack network delete ${ID}
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to delete network ${ID}"
+ return 1
+ fi
+ done
+
+ return 0
+}
+
+date
+echo "Cleaning up network resources..."
+
+echo "Cleaning up network trunks"
+cleanup_network_trunks
+RET=$?
+if [ ${RET} -ne 0 ]; then
+ echo "Failed to cleanup network trunks"
+fi
+
+echo "Cleaning up VM instances"
+cleanup_vm_instances
+RET=$?
+if [ ${RET} -ne 0 ]; then
+ echo "Failed to cleanup VM instances"
+fi
+
+echo "Cleaning up floating IP addresses"
+cleanup_floating_ips
+RET=$?
+if [ ${RET} -ne 0 ]; then
+ echo "Failed to cleanup floating IP addresses"
+fi
+
+echo "Cleaning up manual ports"
+cleanup_manual_ports
+RET=$?
+if [ ${RET} -ne 0 ]; then
+ echo "Failed to cleanup manual ports"
+fi
+
+echo "Cleaning up routers"
+cleanup_routers
+RET=$?
+if [ ${RET} -ne 0 ]; then
+ echo "Failed to cleanup routers"
+fi
+
+echo "Cleaning up application ports"
+cleanup_application_ports
+RET=$?
+if [ ${RET} -ne 0 ]; then
+ echo "Failed to cleanup shared networks"
+fi
+
+echo "Cleaning up networks"
+cleanup_networks
+RET=$?
+if [ ${RET} -ne 0 ]; then
+ echo "Failed to cleanup networks"
+fi
+
+date
+echo "Cleanup finished"
+
+exit 0
diff --git a/neutron/templates/configmap-bin.yaml b/neutron/templates/configmap-bin.yaml
index 2a6b9cff..647762c4 100644
--- a/neutron/templates/configmap-bin.yaml
+++ b/neutron/templates/configmap-bin.yaml
@@ -95,6 +95,8 @@ data:
{{- include "helm-toolkit.scripts.rabbit_init" . | indent 4 }}
neutron-test-force-cleanup.sh: |
{{ tuple "bin/_neutron-test-force-cleanup.sh.tpl" . | include "helm-toolkit.utils.template" | indent 4 }}
+ neutron-resources-cleanup.sh: |
+{{ tuple "bin/_neutron-resources-cleanup.sh.tpl" . | include "helm-toolkit.utils.template" | indent 4 }}
{{- if ( has "tungstenfabric" .Values.network.backend ) }}
tf-plugin.pth: |
/opt/plugin/site-packages
diff --git a/neutron/templates/job-resources-cleanup.yaml b/neutron/templates/job-resources-cleanup.yaml
new file mode 100644
index 00000000..9870305f
--- /dev/null
+++ b/neutron/templates/job-resources-cleanup.yaml
@@ -0,0 +1,81 @@
+{{/*
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/}}
+
+{{- if .Values.manifests.job_resources_cleanup }}
+{{- $envAll := . }}
+
+{{- $serviceAccountName := "neutron-resources-cleanup" }}
+{{ tuple $envAll "resources_cleanup" $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }}
+---
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: {{ $serviceAccountName }}
+ labels:
+{{ tuple $envAll "neutron" "resources_cleanup" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 4 }}
+ annotations:
+{{- if .Values.helm3_hook }}
+ "helm.sh/hook": pre-delete
+ "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
+{{- end }}
+{{- if .Values.helm2_hook }}
+ "helm.sh/hook": pre-delete
+ "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
+{{- end }}
+ {{ tuple $envAll | include "helm-toolkit.snippets.release_uuid" }}
+spec:
+ backoffLimit: 2
+ activeDeadlineSeconds: 1500
+ template:
+ metadata:
+ labels:
+{{ tuple $envAll "neutron" "resources_cleanup" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }}
+ spec:
+ serviceAccountName: {{ $serviceAccountName }}
+{{ dict "envAll" $envAll "application" "neutron_resources_cleanup" | include "helm-toolkit.snippets.kubernetes_pod_security_context" | indent 6 }}
+ restartPolicy: OnFailure
+{{ if .Values.pod.tolerations.neutron.enabled }}
+{{ tuple $envAll "neutron" | include "helm-toolkit.snippets.kubernetes_tolerations" | indent 6 }}
+{{ end }}
+ nodeSelector:
+ {{ .Values.labels.job.node_selector_key }}: {{ .Values.labels.job.node_selector_value }}
+ initContainers:
+{{ tuple $envAll "resources_cleanup" list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }}
+ containers:
+ - name: {{ $serviceAccountName }}
+{{ tuple $envAll "neutron_resources_cleanup" | include "helm-toolkit.snippets.image" | indent 10 }}
+{{ tuple $envAll .Values.pod.resources.jobs.resources_cleanup | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }}
+{{ dict "envAll" $envAll "application" "neutron_resources_cleanup" "container" "neutron_resources_cleanup" | include "helm-toolkit.snippets.kubernetes_container_security_context" | indent 10 }}
+ env:
+{{- with $env := dict "ksUserSecret" .Values.secrets.identity.admin "useCA" .Values.manifests.certificates}}
+{{- include "helm-toolkit.snippets.keystone_openrc_env_vars" $env | indent 12 }}
+{{- end }}
+ command:
+ - /tmp/{{ $serviceAccountName }}.sh
+ volumeMounts:
+ - name: pod-tmp
+ mountPath: /tmp
+ - name: neutron-bin
+ mountPath: /tmp/{{ $serviceAccountName }}.sh
+ subPath: {{ $serviceAccountName }}.sh
+{{- dict "enabled" .Values.manifests.certificates "name" .Values.secrets.tls.network.server.public | include "helm-toolkit.snippets.tls_volume_mount" | indent 12 }}
+ volumes:
+ - name: pod-tmp
+ emptyDir: {}
+ - name: neutron-bin
+ configMap:
+ name: neutron-bin
+ defaultMode: 0555
+{{- dict "enabled" .Values.manifests.certificates "name" .Values.secrets.tls.network.server.public | include "helm-toolkit.snippets.tls_volume" | indent 8 }}
+{{- end }}
diff --git a/neutron/values.yaml b/neutron/values.yaml
index 29917a59..2dc95d43 100644
--- a/neutron/values.yaml
+++ b/neutron/values.yaml
@@ -42,6 +42,7 @@ images:
neutron_bagpipe_bgp: docker.io/openstackhelm/neutron:stein-ubuntu_bionic
neutron_ironic_agent: docker.io/openstackhelm/neutron:stein-ubuntu_bionic
neutron_netns_cleanup_cron: docker.io/openstackhelm/neutron:stein-ubuntu_bionic
+ neutron_resources_cleanup: docker.io/openstackhelm/heat:stein-ubuntu_bionic
dep_check: quay.io/airshipit/kubernetes-entrypoint:v1.0.0
image_repo_sync: docker.io/docker:17.07.0
pull_policy: "IfNotPresent"
@@ -326,6 +327,21 @@ dependencies:
service: oslo_cache
- endpoint: internal
service: identity
+ resources_cleanup:
+ jobs:
+ - neutron-db-sync
+ - neutron-rabbit-init
+ services:
+ - endpoint: internal
+ service: oslo_messaging
+ - endpoint: internal
+ service: oslo_db
+ - endpoint: internal
+ service: identity
+ - endpoint: internal
+ service: compute
+ - endpoint: internal
+ service: network
tests:
services:
- endpoint: internal
@@ -547,6 +563,12 @@ pod:
neutron_netns_cleanup_cron:
readOnlyRootFilesystem: true
privileged: true
+ neutron_resources_cleanup:
+ pod:
+ runAsUser: 42424
+ container:
+ neutron_resources_cleanup:
+ readOnlyRootFilesystem: true
affinity:
anti:
type:
@@ -836,6 +858,13 @@ pod:
limits:
memory: "1024Mi"
cpu: "2000m"
+ resources_cleanup:
+ requests:
+ memory: "128Mi"
+ cpu: "100m"
+ limits:
+ memory: "1024Mi"
+ cpu: "2000m"
conf:
rally_tests:
@@ -2522,6 +2551,7 @@ network_policy:
egress:
- {}
+helm2_hook: true
helm3_hook: false
manifests:
@@ -2549,6 +2579,7 @@ manifests:
job_ks_service: true
job_ks_user: true
job_rabbit_init: true
+ job_resources_cleanup: true
pdb_server: true
pod_rally_test: true
network_policy: false
--
2.17.1


@@ -13,3 +13,4 @@
0013-Remove-TLS-from-openstack-services.patch
0014-Remove-mariadb-and-rabbit-tls.patch
0015-Decrease-terminationGracePeriodSeconds-on-glance-api.patch
0016-Network-Resources-Cleanup-before-OpenStack-Removal.patch


@@ -0,0 +1,431 @@
From 26035d478bc2e70182446658f3677b079818305e Mon Sep 17 00:00:00 2001
From: rferraz <RogerioOliveira.Ferraz@windriver.com>
Date: Wed, 25 May 2022 05:49:04 -0300
Subject: [PATCH] Network Resources Cleanup before OpenStack Removal
This patch introduces a new job to clean up network
resources before OpenStack removal.
Changes:
- new file: neutron/templates/bin/_neutron-resources-cleanup.sh.tpl
- new file: neutron/templates/job-resources-cleanup.yaml
- modified: neutron/templates/configmap-bin.yaml
- modified: neutron/values.yaml
Signed-off-by: rferraz <RogerioOliveira.Ferraz@windriver.com>
---
.../bin/_neutron-resources-cleanup.sh.tpl | 220 ++++++++++++++++++
neutron/templates/configmap-bin.yaml | 2 +
neutron/templates/job-resources-cleanup.yaml | 81 +++++++
neutron/values.yaml | 31 +++
4 files changed, 334 insertions(+)
create mode 100644 neutron/templates/bin/_neutron-resources-cleanup.sh.tpl
create mode 100644 neutron/templates/job-resources-cleanup.yaml
diff --git a/neutron/templates/bin/_neutron-resources-cleanup.sh.tpl b/neutron/templates/bin/_neutron-resources-cleanup.sh.tpl
new file mode 100644
index 00000000..8d38373d
--- /dev/null
+++ b/neutron/templates/bin/_neutron-resources-cleanup.sh.tpl
@@ -0,0 +1,220 @@
+#!/bin/bash
+
+{{/*
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/}}
+
+set -ex
+
+function cleanup_network_trunks()
+{
+ TRUNKS=$(openstack network trunk list -c ID -f value)
+ PORTS=$(openstack network trunk list -c "Parent Port" -f value)
+
+ for TRUNK in ${TRUNKS}; do
+ openstack network trunk delete ${TRUNK}
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to delete trunk ${TRUNK}"
+ return ${RET}
+ fi
+ done
+
+ for PORT in ${PORTS}; do
+ openstack port delete ${PORT}
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to delete port ${PORT}"
+ return ${RET}
+ fi
+ done
+ return 0
+}
+
+function cleanup_vm_instances()
+{
+ local VMLIST=""
+ local ID=""
+ local RETRY=0
+
+ VMLIST=$(openstack server list --all-projects -c ID -f value)
+ for VM in ${VMLIST}; do
+ openstack server delete ${VM} --wait
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to delete VM ${ID}"
+ return ${RET}
+ fi
+ done
+
+ return 0
+}
+
+function cleanup_floating_ips()
+{
+ local IPLIST=""
+ local IP=""
+
+ IPLIST=$(openstack floating ip list | grep -E "[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+" | awk '{ print $2; }')
+ for IP in ${IPLIST}; do
+ openstack floating ip delete ${IP}
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to delete floating ip ${IP}"
+ return 1
+ fi
+ done
+
+ return 0
+}
+
+function cleanup_manual_ports()
+{
+ PORTS=$(openstack port list --device-owner=compute:manual | grep -E "^\|\s\w{8}-\w{4}-\w{4}-\w{4}-\w{12}\s\|" | awk '{ print $2; }')
+ for PORT in ${PORTS}; do
+ openstack port delete ${PORT}
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to delete manual port ${PORT}"
+ return 1
+ fi
+ done
+
+ return 0
+}
+
+function cleanup_routers()
+{
+ local ROUTERLIST=""
+ local ID=""
+
+ ROUTERLIST=$(openstack router list -c ID -f value)
+ for ID in ${ROUTERLIST}; do
+ openstack router set ${ID} --no-route
+ openstack router unset --external-gateway ${ID}
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to clear gateway on router ${ID}"
+ return 1
+ fi
+
+ PORTS=$(openstack port list --router ${ID} -c ID -f value)
+ for PORT in ${PORTS}; do
+ openstack router remove port ${ID} ${PORT}
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to delete interface ${PORT} from router ${ID}"
+ return ${RET}
+ fi
+ done
+
+ openstack router delete ${ID}
+ if [ $? -ne 0 ]; then
+ echo "Failed to delete router ${ID}"
+ return 1
+ fi
+ done
+
+ return 0
+}
+
+function cleanup_application_ports()
+{
+ NETS=$(openstack network list -c ID -f value)
+ for NET in $NETS; do
+ NET_PORTS=$(openstack port list --network $NET -c ID -f value)
+ for NET_PORT in $NET_PORTS; do
+ openstack port delete $NET_PORT
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to delete port ${NET_PORT}"
+ return 1
+ fi
+ done
+ done
+
+ return 0
+}
+
+function cleanup_networks()
+{
+ local ID=""
+ NETLIST=$(openstack network list -c ID -f value)
+ for ID in ${NETLIST}; do
+ openstack network delete ${ID}
+ RET=$?
+ if [ ${RET} -ne 0 ]; then
+ echo "Failed to delete network ${ID}"
+ return 1
+ fi
+ done
+
+ return 0
+}
+
+date
+echo "Cleaning up network resources..."
+
+echo "Cleaning up network trunks"
+cleanup_network_trunks
+RET=$?
+if [ ${RET} -ne 0 ]; then
+ echo "Failed to cleanup network trunks"
+fi
+
+echo "Cleaning up VM instances"
+cleanup_vm_instances
+RET=$?
+if [ ${RET} -ne 0 ]; then
+ echo "Failed to cleanup VM instances"
+fi
+
+echo "Cleaning up floating IP addresses"
+cleanup_floating_ips
+RET=$?
+if [ ${RET} -ne 0 ]; then
+ echo "Failed to cleanup floating IP addresses"
+fi
+
+echo "Cleaning up manual ports"
+cleanup_manual_ports
+RET=$?
+if [ ${RET} -ne 0 ]; then
+ echo "Failed to cleanup manual ports"
+fi
+
+echo "Cleaning up routers"
+cleanup_routers
+RET=$?
+if [ ${RET} -ne 0 ]; then
+ echo "Failed to cleanup routers"
+fi
+
+echo "Cleaning up application ports"
+cleanup_application_ports
+RET=$?
+if [ ${RET} -ne 0 ]; then
+ echo "Failed to cleanup shared networks"
+fi
+
+echo "Cleaning up networks"
+cleanup_networks
+RET=$?
+if [ ${RET} -ne 0 ]; then
+ echo "Failed to cleanup networks"
+fi
+
+date
+echo "Cleanup finished"
+
+exit 0
diff --git a/neutron/templates/configmap-bin.yaml b/neutron/templates/configmap-bin.yaml
index 2a6b9cff..647762c4 100644
--- a/neutron/templates/configmap-bin.yaml
+++ b/neutron/templates/configmap-bin.yaml
@@ -95,6 +95,8 @@ data:
{{- include "helm-toolkit.scripts.rabbit_init" . | indent 4 }}
neutron-test-force-cleanup.sh: |
{{ tuple "bin/_neutron-test-force-cleanup.sh.tpl" . | include "helm-toolkit.utils.template" | indent 4 }}
+ neutron-resources-cleanup.sh: |
+{{ tuple "bin/_neutron-resources-cleanup.sh.tpl" . | include "helm-toolkit.utils.template" | indent 4 }}
{{- if ( has "tungstenfabric" .Values.network.backend ) }}
tf-plugin.pth: |
/opt/plugin/site-packages
diff --git a/neutron/templates/job-resources-cleanup.yaml b/neutron/templates/job-resources-cleanup.yaml
new file mode 100644
index 00000000..9870305f
--- /dev/null
+++ b/neutron/templates/job-resources-cleanup.yaml
@@ -0,0 +1,81 @@
+{{/*
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/}}
+
+{{- if .Values.manifests.job_resources_cleanup }}
+{{- $envAll := . }}
+
+{{- $serviceAccountName := "neutron-resources-cleanup" }}
+{{ tuple $envAll "resources_cleanup" $serviceAccountName | include "helm-toolkit.snippets.kubernetes_pod_rbac_serviceaccount" }}
+---
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: {{ $serviceAccountName }}
+ labels:
+{{ tuple $envAll "neutron" "resources_cleanup" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 4 }}
+ annotations:
+{{- if .Values.helm3_hook }}
+ "helm.sh/hook": pre-delete
+ "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
+{{- end }}
+{{- if .Values.helm2_hook }}
+ "helm.sh/hook": pre-delete
+ "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
+{{- end }}
+ {{ tuple $envAll | include "helm-toolkit.snippets.release_uuid" }}
+spec:
+ backoffLimit: 2
+ activeDeadlineSeconds: 1500
+ template:
+ metadata:
+ labels:
+{{ tuple $envAll "neutron" "resources_cleanup" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }}
+ spec:
+ serviceAccountName: {{ $serviceAccountName }}
+{{ dict "envAll" $envAll "application" "neutron_resources_cleanup" | include "helm-toolkit.snippets.kubernetes_pod_security_context" | indent 6 }}
+ restartPolicy: OnFailure
+{{ if .Values.pod.tolerations.neutron.enabled }}
+{{ tuple $envAll "neutron" | include "helm-toolkit.snippets.kubernetes_tolerations" | indent 6 }}
+{{ end }}
+ nodeSelector:
+ {{ .Values.labels.job.node_selector_key }}: {{ .Values.labels.job.node_selector_value }}
+ initContainers:
+{{ tuple $envAll "resources_cleanup" list | include "helm-toolkit.snippets.kubernetes_entrypoint_init_container" | indent 8 }}
+ containers:
+ - name: {{ $serviceAccountName }}
+{{ tuple $envAll "neutron_resources_cleanup" | include "helm-toolkit.snippets.image" | indent 10 }}
+{{ tuple $envAll .Values.pod.resources.jobs.resources_cleanup | include "helm-toolkit.snippets.kubernetes_resources" | indent 10 }}
+{{ dict "envAll" $envAll "application" "neutron_resources_cleanup" "container" "neutron_resources_cleanup" | include "helm-toolkit.snippets.kubernetes_container_security_context" | indent 10 }}
+ env:
+{{- with $env := dict "ksUserSecret" .Values.secrets.identity.admin "useCA" .Values.manifests.certificates}}
+{{- include "helm-toolkit.snippets.keystone_openrc_env_vars" $env | indent 12 }}
+{{- end }}
+ command:
+ - /tmp/{{ $serviceAccountName }}.sh
+ volumeMounts:
+ - name: pod-tmp
+ mountPath: /tmp
+ - name: neutron-bin
+ mountPath: /tmp/{{ $serviceAccountName }}.sh
+ subPath: {{ $serviceAccountName }}.sh
+{{- dict "enabled" .Values.manifests.certificates "name" .Values.secrets.tls.network.server.public | include "helm-toolkit.snippets.tls_volume_mount" | indent 12 }}
+ volumes:
+ - name: pod-tmp
+ emptyDir: {}
+ - name: neutron-bin
+ configMap:
+ name: neutron-bin
+ defaultMode: 0555
+{{- dict "enabled" .Values.manifests.certificates "name" .Values.secrets.tls.network.server.public | include "helm-toolkit.snippets.tls_volume" | indent 8 }}
+{{- end }}
diff --git a/neutron/values.yaml b/neutron/values.yaml
index 29917a59..2dc95d43 100644
--- a/neutron/values.yaml
+++ b/neutron/values.yaml
@@ -42,6 +42,7 @@ images:
neutron_bagpipe_bgp: docker.io/openstackhelm/neutron:stein-ubuntu_bionic
neutron_ironic_agent: docker.io/openstackhelm/neutron:stein-ubuntu_bionic
neutron_netns_cleanup_cron: docker.io/openstackhelm/neutron:stein-ubuntu_bionic
+ neutron_resources_cleanup: docker.io/openstackhelm/heat:stein-ubuntu_bionic
dep_check: quay.io/airshipit/kubernetes-entrypoint:v1.0.0
image_repo_sync: docker.io/docker:17.07.0
pull_policy: "IfNotPresent"
@@ -326,6 +327,21 @@ dependencies:
service: oslo_cache
- endpoint: internal
service: identity
+ resources_cleanup:
+ jobs:
+ - neutron-db-sync
+ - neutron-rabbit-init
+ services:
+ - endpoint: internal
+ service: oslo_messaging
+ - endpoint: internal
+ service: oslo_db
+ - endpoint: internal
+ service: identity
+ - endpoint: internal
+ service: compute
+ - endpoint: internal
+ service: network
tests:
services:
- endpoint: internal
@@ -547,6 +563,12 @@ pod:
neutron_netns_cleanup_cron:
readOnlyRootFilesystem: true
privileged: true
+ neutron_resources_cleanup:
+ pod:
+ runAsUser: 42424
+ container:
+ neutron_resources_cleanup:
+ readOnlyRootFilesystem: true
affinity:
anti:
type:
@@ -836,6 +858,13 @@ pod:
limits:
memory: "1024Mi"
cpu: "2000m"
+ resources_cleanup:
+ requests:
+ memory: "128Mi"
+ cpu: "100m"
+ limits:
+ memory: "1024Mi"
+ cpu: "2000m"
conf:
rally_tests:
@@ -2522,6 +2551,7 @@ network_policy:
egress:
- {}
+helm2_hook: true
helm3_hook: false
manifests:
@@ -2549,6 +2579,7 @@ manifests:
job_ks_service: true
job_ks_user: true
job_rabbit_init: true
+ job_resources_cleanup: true
pdb_server: true
pod_rally_test: true
network_policy: false
--
2.17.1


@@ -9,7 +9,11 @@
""" System inventory Armada manifest operator."""
from oslo_log import log as logging
# fmt:off
import os
from copy import deepcopy
import ruamel.yaml as yaml
from k8sapp_openstack.common import constants as app_constants
from k8sapp_openstack.helm.aodh import AodhHelm
from k8sapp_openstack.helm.barbican import BarbicanHelm
@@ -34,43 +38,54 @@ from k8sapp_openstack.helm.neutron import NeutronHelm
from k8sapp_openstack.helm.nginx_ports_control import NginxPortsControlHelm
from k8sapp_openstack.helm.nova import NovaHelm
from k8sapp_openstack.helm.nova_api_proxy import NovaApiProxyHelm
from k8sapp_openstack.helm.pci_irq_affinity_agent import PciIrqAffinityAgentHelm
from k8sapp_openstack.helm.openvswitch import OpenvswitchHelm
from k8sapp_openstack.helm.pci_irq_affinity_agent import \
PciIrqAffinityAgentHelm
from k8sapp_openstack.helm.placement import PlacementHelm
from k8sapp_openstack.helm.psp_rolebinding import PSPRolebindingHelm
from k8sapp_openstack.helm.rabbitmq import RabbitmqHelm
from k8sapp_openstack.helm.swift import SwiftHelm
from k8sapp_openstack.helm.psp_rolebinding import PSPRolebindingHelm
from sysinv.common import constants
from sysinv.common import exception
from oslo_log import log as logging
from sysinv.common import constants, exception
from sysinv.helm import manifest_base as base
# fmt:on
KEY_SCHEMA = "schema"
VAL_SCHEMA_CHART_GROUP = "armada/ChartGroup/v"
VAL_SCHEMA_MANIFEST = "armada/Manifest/v"
KEY_METADATA = "metadata"
KEY_METADATA_NAME = "name"
KEY_DATA = "data"
KEY_DATA_CHART_GROUP = "chart_group" # for chart group doc updates
KEY_DATA_CHART_GROUPS = "chart_groups" # for manifest doc updates
KEY_DATA_SEQUENCED = "sequenced"
LOG = logging.getLogger(__name__)
class OpenstackArmadaManifestOperator(base.ArmadaManifestOperator):
APP = app_constants.HELM_APP_OPENSTACK
ARMADA_MANIFEST = 'openstack-manifest'
ARMADA_MANIFEST = "openstack-manifest"
CHART_GROUP_PSP_ROLEBINDING = 'openstack-psp-rolebinding'
CHART_GROUP_INGRESS_OS = 'openstack-ingress'
CHART_GROUP_MAGNUM = 'openstack-magnum'
CHART_GROUP_MARIADB = 'openstack-mariadb'
CHART_GROUP_MEMCACHED = 'openstack-memcached'
CHART_GROUP_RABBITMQ = 'openstack-rabbitmq'
CHART_GROUP_KEYSTONE = 'openstack-keystone'
CHART_GROUP_KS_API_PROXY = 'openstack-keystone-api-proxy'
CHART_GROUP_BARBICAN = 'openstack-barbican'
CHART_GROUP_GLANCE = 'openstack-glance'
CHART_GROUP_SWIFT = 'openstack-ceph-rgw'
CHART_GROUP_CINDER = 'openstack-cinder'
CHART_GROUP_FM_REST_API = 'openstack-fm-rest-api'
CHART_GROUP_COMPUTE_KIT = 'openstack-compute-kit'
CHART_GROUP_HEAT = 'openstack-heat'
CHART_GROUP_HORIZON = 'openstack-horizon'
CHART_GROUP_TELEMETRY = 'openstack-telemetry'
CHART_GROUP_DCDBSYNC = 'openstack-dcdbsync'
CHART_GROUP_PSP_ROLEBINDING = "openstack-psp-rolebinding"
CHART_GROUP_INGRESS_OS = "openstack-ingress"
CHART_GROUP_MAGNUM = "openstack-magnum"
CHART_GROUP_MARIADB = "openstack-mariadb"
CHART_GROUP_MEMCACHED = "openstack-memcached"
CHART_GROUP_RABBITMQ = "openstack-rabbitmq"
CHART_GROUP_KEYSTONE = "openstack-keystone"
CHART_GROUP_KS_API_PROXY = "openstack-keystone-api-proxy"
CHART_GROUP_BARBICAN = "openstack-barbican"
CHART_GROUP_GLANCE = "openstack-glance"
CHART_GROUP_SWIFT = "openstack-ceph-rgw"
CHART_GROUP_CINDER = "openstack-cinder"
CHART_GROUP_FM_REST_API = "openstack-fm-rest-api"
CHART_GROUP_COMPUTE_KIT = "openstack-compute-kit"
CHART_GROUP_HEAT = "openstack-heat"
CHART_GROUP_HORIZON = "openstack-horizon"
CHART_GROUP_TELEMETRY = "openstack-telemetry"
CHART_GROUP_DCDBSYNC = "openstack-dcdbsync"
CHART_GROUPS_LUT = {
AodhHelm.CHART: CHART_GROUP_TELEMETRY,
@@ -105,39 +120,44 @@ class OpenstackArmadaManifestOperator(base.ArmadaManifestOperator):
}
CHARTS_LUT = {
AodhHelm.CHART: 'openstack-aodh',
BarbicanHelm.CHART: 'openstack-barbican',
CeilometerHelm.CHART: 'openstack-ceilometer',
CinderHelm.CHART: 'openstack-cinder',
GarbdHelm.CHART: 'openstack-garbd',
FmRestApiHelm.CHART: 'openstack-fm-rest-api',
GlanceHelm.CHART: 'openstack-glance',
GnocchiHelm.CHART: 'openstack-gnocchi',
HeatHelm.CHART: 'openstack-heat',
HorizonHelm.CHART: 'openstack-horizon',
IngressHelm.CHART: 'openstack-ingress',
IronicHelm.CHART: 'openstack-ironic',
KeystoneHelm.CHART: 'openstack-keystone',
KeystoneApiProxyHelm.CHART: 'openstack-keystone-api-proxy',
LibvirtHelm.CHART: 'openstack-libvirt',
MagnumHelm.CHART: 'openstack-magnum',
MariadbHelm.CHART: 'openstack-mariadb',
MemcachedHelm.CHART: 'openstack-memcached',
NeutronHelm.CHART: 'openstack-neutron',
NginxPortsControlHelm.CHART: 'openstack-nginx-ports-control',
NovaHelm.CHART: 'openstack-nova',
NovaApiProxyHelm.CHART: 'openstack-nova-api-proxy',
PciIrqAffinityAgentHelm.CHART: 'openstack-pci-irq-affinity-agent',
OpenvswitchHelm.CHART: 'openstack-openvswitch',
PSPRolebindingHelm.CHART: 'openstack-psp-rolebinding',
PlacementHelm.CHART: 'openstack-placement',
RabbitmqHelm.CHART: 'openstack-rabbitmq',
SwiftHelm.CHART: 'openstack-ceph-rgw',
DcdbsyncHelm.CHART: 'openstack-dcdbsync',
AodhHelm.CHART: "openstack-aodh",
BarbicanHelm.CHART: "openstack-barbican",
CeilometerHelm.CHART: "openstack-ceilometer",
CinderHelm.CHART: "openstack-cinder",
GarbdHelm.CHART: "openstack-garbd",
FmRestApiHelm.CHART: "openstack-fm-rest-api",
GlanceHelm.CHART: "openstack-glance",
GnocchiHelm.CHART: "openstack-gnocchi",
HeatHelm.CHART: "openstack-heat",
HorizonHelm.CHART: "openstack-horizon",
IngressHelm.CHART: "openstack-ingress",
IronicHelm.CHART: "openstack-ironic",
KeystoneHelm.CHART: "openstack-keystone",
KeystoneApiProxyHelm.CHART: "openstack-keystone-api-proxy",
LibvirtHelm.CHART: "openstack-libvirt",
MagnumHelm.CHART: "openstack-magnum",
MariadbHelm.CHART: "openstack-mariadb",
MemcachedHelm.CHART: "openstack-memcached",
NeutronHelm.CHART: "openstack-neutron",
NginxPortsControlHelm.CHART: "openstack-nginx-ports-control",
NovaHelm.CHART: "openstack-nova",
NovaApiProxyHelm.CHART: "openstack-nova-api-proxy",
PciIrqAffinityAgentHelm.CHART: "openstack-pci-irq-affinity-agent",
OpenvswitchHelm.CHART: "openstack-openvswitch",
PSPRolebindingHelm.CHART: "openstack-psp-rolebinding",
PlacementHelm.CHART: "openstack-placement",
RabbitmqHelm.CHART: "openstack-rabbitmq",
SwiftHelm.CHART: "openstack-ceph-rgw",
DcdbsyncHelm.CHART: "openstack-dcdbsync",
}
def __init__(self, *args, **kwargs):
super(OpenstackArmadaManifestOperator, self).__init__(*args, **kwargs)
self.delete_manifest_contents = [] # OS Armada app delete manifest
def platform_mode_manifest_updates(self, dbapi, mode):
""" Update the application manifest based on the platform
"""Update the application manifest based on the platform
This is used for
@@ -150,21 +170,24 @@ class OpenstackArmadaManifestOperator(base.ArmadaManifestOperator):
# MariaDB service.
self.manifest_chart_groups_set(
self.ARMADA_MANIFEST,
[self.CHART_GROUP_INGRESS_OS,
self.CHART_GROUP_MARIADB])
[self.CHART_GROUP_INGRESS_OS, self.CHART_GROUP_MARIADB],
)
elif mode == constants.OPENSTACK_RESTORE_STORAGE:
# After MariaDB data is restored, restore Keystone,
# Glance and Cinder.
self.manifest_chart_groups_set(
self.ARMADA_MANIFEST,
[self.CHART_GROUP_INGRESS_OS,
self.CHART_GROUP_MARIADB,
self.CHART_GROUP_MEMCACHED,
self.CHART_GROUP_RABBITMQ,
self.CHART_GROUP_KEYSTONE,
self.CHART_GROUP_GLANCE,
self.CHART_GROUP_CINDER])
[
self.CHART_GROUP_INGRESS_OS,
self.CHART_GROUP_MARIADB,
self.CHART_GROUP_MEMCACHED,
self.CHART_GROUP_RABBITMQ,
self.CHART_GROUP_KEYSTONE,
self.CHART_GROUP_GLANCE,
self.CHART_GROUP_CINDER,
],
)
else:
# When mode is OPENSTACK_RESTORE_NORMAL or None,
@@ -175,14 +198,105 @@ class OpenstackArmadaManifestOperator(base.ArmadaManifestOperator):
LOG.exception("System %s not found.")
raise
if (system.distributed_cloud_role ==
constants.DISTRIBUTED_CLOUD_ROLE_SYSTEMCONTROLLER):
if (
system.distributed_cloud_role
== constants.DISTRIBUTED_CLOUD_ROLE_SYSTEMCONTROLLER
):
# remove the chart_groups not needed in this configuration
self.manifest_chart_groups_delete(
self.ARMADA_MANIFEST, self.CHART_GROUP_SWIFT)
self.ARMADA_MANIFEST, self.CHART_GROUP_SWIFT
)
self.manifest_chart_groups_delete(
self.ARMADA_MANIFEST, self.CHART_GROUP_COMPUTE_KIT)
self.ARMADA_MANIFEST, self.CHART_GROUP_COMPUTE_KIT
)
self.manifest_chart_groups_delete(
self.ARMADA_MANIFEST, self.CHART_GROUP_HEAT)
self.ARMADA_MANIFEST, self.CHART_GROUP_HEAT
)
self.manifest_chart_groups_delete(
self.ARMADA_MANIFEST, self.CHART_GROUP_TELEMETRY)
self.ARMADA_MANIFEST, self.CHART_GROUP_TELEMETRY
)
def save_delete_manifest(self):
"""Save an updated manifest for deletion
This is an override method to reverse the OpenStack remove sequence,
compared to the deployment sequence in the OpenStack manifest.
armada delete doesn't support --values files as does the apply. To
handle proper deletion of the conditional charts/chart groups that end
up in the overrides files, create a unified file for use when deleting.
NOTE #1: If we want to abandon using manifest overrides files,
this generated file could probably be used on apply and delete.
NOTE #2: Diffing the original manifest and this manifest provides a
clear view of the conditional changes that were enforced by the system
in the plugins
"""
if os.path.exists(self.manifest_path):
self.delete_manifest_contents = deepcopy(self.content)
# Reverse the OpenStack remove sequence
for i in self.delete_manifest_contents:
if VAL_SCHEMA_MANIFEST in i[KEY_SCHEMA]:
i[KEY_DATA][KEY_DATA_CHART_GROUPS].reverse()
if VAL_SCHEMA_CHART_GROUP in i[KEY_SCHEMA]:
# Neutron shall be the first chart deleted in the
# (reversed) compute-kit group
if (
i[KEY_METADATA][KEY_METADATA_NAME]
== self.CHART_GROUP_COMPUTE_KIT
):
try:
lst = i[KEY_DATA][KEY_DATA_CHART_GROUP]
lst.append(
lst.pop(
lst.index(
self.CHARTS_LUT[NeutronHelm.CHART]))
)
# Compute-kit group shall be deleted sequentially
i[KEY_DATA][KEY_DATA_SEQUENCED] = "true"
except Exception as e:
LOG.error(
"Failed compute-kit delete manifest. %s" % e)
# Removal sequence is the reverse of deployment sequence
# (for all groups)
i[KEY_DATA][KEY_DATA_CHART_GROUP].reverse()
# cleanup existing delete manifest
self._cleanup_deletion_manifest()
# Save overrides
if self.delete_manifest:
with open(self.delete_manifest, "w") as f:
try:
yaml.dump_all(
self.delete_manifest_contents,
f,
Dumper=yaml.RoundTripDumper,
explicit_start=True,
default_flow_style=False,
)
LOG.info(
"Delete manifest file %s is generated"
% self.delete_manifest
)
except Exception as e:
LOG.error(
"Failed to generate delete manifest file %s. %s"
% (self.delete_manifest, e)
)
else:
LOG.error("Delete manifest file does not exist.")
else:
LOG.error(
"Manifest directory %s does not exist." % self.manifest_path)


@@ -2,33 +2,57 @@
# SPDX-License-Identifier: Apache-2.0
#
# fmt:off
import mock
from k8sapp_openstack.armada.manifest_openstack import \
OpenstackArmadaManifestOperator
from k8sapp_openstack.common import constants as app_constants
from sysinv.common import constants
from sysinv.helm import common
from sysinv.tests.db import base as dbbase
from sysinv.tests.db import utils as dbutils
from sysinv.tests.helm import base
from sysinv.tests.helm.test_helm import HelmOperatorTestSuiteMixin
from k8sapp_openstack.common import constants as app_constants
# fmt:on
KEY_SCHEMA = "schema"
KEY_METADATA = "metadata"
KEY_METADATA_NAME = "name"
class K8SAppOpenstackAppMixin(object):
class K8SAppOpenstackAppBaseMixin(object):
app_name = app_constants.HELM_APP_OPENSTACK
path_name = app_name + '.tgz'
path_name = app_name + ".tgz"
def setUp(self):
super(K8SAppOpenstackAppMixin, self).setUp()
super(K8SAppOpenstackAppBaseMixin, self).setUp()
# Label hosts with appropriate labels
for host in self.hosts:
if host.personality == constants.CONTROLLER:
dbutils.create_test_label(
host_id=host.id,
label_key=common.LABEL_CONTROLLER,
label_value=common.LABEL_VALUE_ENABLED)
label_value=common.LABEL_VALUE_ENABLED,
)
elif host.personality == constants.WORKER:
dbutils.create_test_label(
host_id=host.id,
label_key=common.LABEL_COMPUTE_LABEL,
label_value=common.LABEL_VALUE_ENABLED)
label_value=common.LABEL_VALUE_ENABLED,
)
class K8SAppOpenstackAppMixin(K8SAppOpenstackAppBaseMixin):
def setUp(self):
super(K8SAppOpenstackAppMixin, self).setUp()
save_delete_manifest = mock.patch.object(
OpenstackArmadaManifestOperator, "save_delete_manifest"
)
save_delete_manifest.start()
self.addCleanup(save_delete_manifest.stop)
# Test Configuration:
@@ -36,11 +60,13 @@ class K8SAppOpenstackAppMixin(object):
# - IPv6
# - Ceph Storage
# - stx-openstack app
class K8SAppOpenstackControllerTestCase(K8SAppOpenstackAppMixin,
dbbase.BaseIPv6Mixin,
dbbase.BaseCephStorageBackendMixin,
HelmOperatorTestSuiteMixin,
dbbase.ControllerHostTestCase):
class K8SAppOpenstackControllerTestCase(
K8SAppOpenstackAppMixin,
HelmOperatorTestSuiteMixin,
dbbase.BaseIPv6Mixin,
dbbase.BaseCephStorageBackendMixin,
dbbase.ControllerHostTestCase,
):
pass
@@ -49,8 +75,142 @@ class K8SAppOpenstackControllerTestCase(K8SAppOpenstackAppMixin,
# - IPv4
# - Ceph Storage
# - stx-openstack app
class K8SAppOpenstackAIOTestCase(K8SAppOpenstackAppMixin,
dbbase.BaseCephStorageBackendMixin,
HelmOperatorTestSuiteMixin,
dbbase.AIOSimplexHostTestCase):
class K8SAppOpenstackAIOTestCase(
K8SAppOpenstackAppMixin,
HelmOperatorTestSuiteMixin,
dbbase.BaseCephStorageBackendMixin,
dbbase.AIOSimplexHostTestCase,
):
pass
# Test Configuration:
# - Controller
# - stx-openstack app
class SaveDeleteManifestTestCase(
K8SAppOpenstackAppBaseMixin,
base.HelmTestCaseMixin,
dbbase.ControllerHostTestCase
):
@mock.patch("os.path.exists", return_value=True)
@mock.patch(
"k8sapp_openstack.armada.manifest_openstack.deepcopy",
return_value=[
{
"schema": "armada/ChartGroup/v1",
"metadata": {
"name": "openstack-compute-kit",
},
"data": {
"sequenced": "false",
"chart_group": [
"openstack-libvirt",
"openstack-placement",
"openstack-nova",
"openstack-nova-api-proxy",
"openstack-pci-irq-affinity-agent",
"openstack-neutron",
],
},
},
{
"schema": "armada/Manifest/v1",
"metadata": {
"name": "openstack-manifest",
},
"data": {
"release_prefix": "osh",
"chart_groups": [
"openstack-psp-rolebinding",
"openstack-ingress",
"openstack-mariadb",
"openstack-memcached",
"openstack-rabbitmq",
"openstack-keystone",
"openstack-barbican",
"openstack-glance",
"openstack-cinder",
"openstack-ceph-rgw",
"openstack-compute-kit",
"openstack-heat",
"openstack-fm-rest-api",
"openstack-horizon",
"openstack-telemetry",
],
},
},
],
)
@mock.patch("six.moves.builtins.open", mock.mock_open(read_data="fake"))
@mock.patch(
"k8sapp_openstack.armada.manifest_openstack"
".OpenstackArmadaManifestOperator._cleanup_deletion_manifest"
)
def test_save_delete_manifest(self, *_):
def assert_manifest_overrides(manifest, parameters):
"""Validate the manifest contains the supplied parameters"""
if not isinstance(manifest, list) \
or not isinstance(parameters, list):
self.assertOverridesParameters(manifest, parameters)
else:
for i in parameters:
for j in manifest:
if (
i[KEY_SCHEMA] == j[KEY_SCHEMA]
and i[KEY_METADATA][KEY_METADATA_NAME]
== j[KEY_METADATA][KEY_METADATA_NAME]
):
self.assertOverridesParameters(j, i)
break
armada_op = OpenstackArmadaManifestOperator()
armada_op.save_delete_manifest()
assert_manifest_overrides(
armada_op.delete_manifest_contents,
[
{
"schema": "armada/ChartGroup/v1",
"metadata": {
"name": "openstack-compute-kit",
},
"data": {
"sequenced": "true",
"chart_group": [
"openstack-neutron",
"openstack-pci-irq-affinity-agent",
"openstack-nova-api-proxy",
"openstack-nova",
"openstack-placement",
"openstack-libvirt",
],
},
},
{
"schema": "armada/Manifest/v1",
"metadata": {
"name": "openstack-manifest",
},
"data": {
"release_prefix": "osh",
"chart_groups": [
"openstack-telemetry",
"openstack-horizon",
"openstack-fm-rest-api",
"openstack-heat",
"openstack-compute-kit",
"openstack-ceph-rgw",
"openstack-cinder",
"openstack-glance",
"openstack-barbican",
"openstack-keystone",
"openstack-rabbitmq",
"openstack-memcached",
"openstack-mariadb",
"openstack-ingress",
"openstack-psp-rolebinding",
],
},
},
],
)


@@ -1636,6 +1636,8 @@ data:
enabled: false
install:
no_hooks: false
delete:
timeout: 1800
upgrade:
no_hooks: false
pre:
@@ -1791,6 +1793,7 @@ data:
neutron_bagpipe_bgp: docker.io/starlingx/stx-neutron:master-centos-stable-latest
neutron_ironic_agent: docker.io/starlingx/stx-neutron:master-centos-stable-latest
neutron_netns_cleanup_cron: docker.io/starlingx/stx-neutron:master-centos-stable-latest
neutron_resources_cleanup: docker.io/starlingx/stx-heat:master-centos-stable-latest
network:
interface:
tunnel: docker0
@@ -4090,12 +4093,14 @@ data:
description: "Deploy nova and neutron, as well as supporting services"
sequenced: false
chart_group:
- openstack-libvirt
- openstack-nova
- openstack-nova-api-proxy
- openstack-pci-irq-affinity-agent
- openstack-neutron
- openstack-placement
# Keep this sequence: OpenStack is deleted in reverse deployment order,
# so the Neutron chart, which carries the resources cleanup job, is the first one to be deleted.
- openstack-libvirt
- openstack-placement
- openstack-nova
- openstack-nova-api-proxy
- openstack-pci-irq-affinity-agent
- openstack-neutron
---
schema: armada/ChartGroup/v1
metadata: