Remove the umbrella (aka openstack) chart

About half a year ago we agreed to remove the umbrella chart
to reduce the code base we need to maintain.

It seems there are very few users of this chart, if any.
There has also been no interest in this chart in the Slack
chat.

Change-Id: Ie0ae67d47077b15eba44555bebad42116e451f85
Signed-off-by: Vladimir Kozhukalov <kozhukalov@gmail.com>
Vladimir Kozhukalov
2025-11-26 11:54:18 -06:00
parent 5e894227c7
commit 91d42dd36d
113 changed files with 0 additions and 3554 deletions

View File

@@ -26,7 +26,6 @@ OpenStack charts options
neutron
nova
octavia
openstack
placement
rally
skyline

View File

@@ -1,21 +0,0 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj

View File

@@ -1,64 +0,0 @@
---
apiVersion: v2
appVersion: 1.16.0
dependencies:
- name: helm-toolkit
repository: file://../helm-toolkit
version: ">0.1.0"
condition: helm-toolkit.enabled
- name: mariadb
repository: file://../mariadb
version: ">0.1.0"
condition: mariadb.enabled
- name: rabbitmq
repository: file://../rabbitmq
version: ">0.1.0"
condition: rabbitmq.enabled
- name: memcached
repository: file://../memcached
version: ">0.1.0"
condition: memcached.enabled
- name: keystone
repository: file://../keystone
version: ">0.1.0"
condition: keystone.enabled
- name: heat
repository: file://../heat
version: ">0.1.0"
condition: heat.enabled
- name: glance
repository: file://../glance
version: ">0.1.0"
condition: glance.enabled
- name: openvswitch
repository: file://../openvswitch
version: ">0.1.0"
condition: openvswitch.enabled
- name: libvirt
repository: file://../libvirt
version: ">0.1.0"
condition: libvirt.enabled
- name: nova
repository: file://../nova
version: ">0.1.0"
condition: nova.enabled
- name: placement
repository: file://../placement
version: ">0.1.0"
condition: placement.enabled
- name: neutron
repository: file://../neutron
version: ">0.1.0"
condition: neutron.enabled
- name: horizon
repository: file://../horizon
version: ">0.1.0"
condition: horizon.enabled
description: A chart for openstack helm common deployment items
name: openstack
type: application
version: 2025.2.0
maintainers:
- name: OpenStack-Helm Authors
...
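
Each dependency above is gated by a condition key that maps to an <subchart>.enabled boolean in the umbrella values, so individual services could be toggled at install time. A minimal sketch (release name, namespace and chart path are illustrative assumptions, not taken from this repo):

# build the local file:// dependencies declared in Chart.yaml
helm dependency build ./openstack
# install the umbrella release with horizon switched on and rabbitmq switched off
helm install openstack ./openstack \
  --namespace openstack --create-namespace \
  --set horizon.enabled=true \
  --set rabbitmq.enabled=false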

View File

@@ -1 +0,0 @@
../../glance/

View File

@@ -1 +0,0 @@
../../heat

View File

@@ -1 +0,0 @@
../../helm-toolkit

View File

@@ -1 +0,0 @@
../../horizon

View File

@@ -1 +0,0 @@
../../keystone/

View File

@@ -1 +0,0 @@
../../libvirt

View File

@@ -1 +0,0 @@
../../mariadb

View File

@@ -1 +0,0 @@
../../memcached

View File

@@ -1 +0,0 @@
../../neutron/

View File

@@ -1 +0,0 @@
../../nova/

View File

@@ -1 +0,0 @@
../../openvswitch

View File

@@ -1 +0,0 @@
../../placement/

View File

@@ -1 +0,0 @@
../../rabbitmq

View File

@@ -1,5 +0,0 @@
The OpenStack chart (a.k.a. the umbrella chart) is deprecated and will be deleted after the 2025.2 release.
For details see the discussion [1].
[1] https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/LAFZHXWIEM5MIT2KY2SXBE77NIOG7GK2/

View File

@@ -1,80 +0,0 @@
# default values for openstack umbrella chart
# Global overrides for subcharts
# note(v-dspecker): helm3_hook must be disabled.
# There is a cyclic dependency otherwise. For example, libvirt-default ->
# neutron-ovs-agent-default -> neutron-server -> neutron-ks-user.
# Since libvirt-default is deployed during the install phase, neutron-ks-user must also
# be installed during the install phase instead of the post-install phase.
---
global:
subchart_release_name: true
helm-toolkit:
enabled: true
rabbitmq:
release_group: rabbitmq
enabled: true
pod:
replicas:
server: 1
mariadb:
release_group: mariadb
enabled: true
pod:
replicas:
server: 1
memcached:
release_group: memcached
enabled: true
keystone:
release_group: keystone
enabled: true
heat:
release_group: heat
enabled: true
helm3_hook: false
glance:
release_group: glance
enabled: true
helm3_hook: false
openvswitch:
release_group: openvswitch
enabled: true
libvirt:
release_group: libvirt
enabled: true
nova:
release_group: nova
enabled: true
helm3_hook: false
placement:
release_group: placement
enabled: true
helm3_hook: false
horizon:
release_group: horizon
enabled: false
helm3_hook: false
neutron:
release_group: neutron
enabled: true
helm3_hook: false
conf:
auto_bridge_add:
# for reasons unclear, null values passed to sub-charts get omitted entirely, so the string "null" is used instead
br-ex: "null"
...
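
Since each subchart is addressed by its top-level key, per-service settings are passed straight through the umbrella values. A minimal sketch of an upgrade with an override file, assuming an existing release named openstack (the file name local-overrides.yaml is hypothetical):

# local-overrides.yaml (hypothetical)
horizon:
  enabled: true        # disabled by default above
neutron:
  conf:
    auto_bridge_add:
      br-ex: "null"    # keep the string-"null" workaround noted above

helm upgrade openstack ./openstack --namespace openstack -f local-overrides.yaml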

View File

@@ -1,75 +0,0 @@
#!/bin/bash
set -ex
set -o pipefail
: ${OSH_HELM_REPO:="../openstack-helm"}
# This test case aims to prove that updating a subchart's configuration for
# the OpenStack umbrella Helm chart results in no other subcharts' components
# being updated.
# This test case is proven by:
# 1. getting the list of DaemonSets, Deployments, and StatefulSets after an installation
# 2. performing a helm upgrade that modifies a config specific to one subchart
# 3. getting the list of DaemonSets, Deployments, and StatefulSets after the upgrade
# 4. verifying the expected subchart application changes
# 5. verifying that no other applications are changed
validate_only_expected_application_changes () {
local app_name="$1"
local config_change="$2"
before_apps_list="$(mktemp)"
after_apps_list="$(mktemp)"
kubectl get daemonsets,deployments,statefulsets \
--namespace openstack \
--no-headers \
--output custom-columns=Kind:.kind,Name:.metadata.name,Generation:.status.observedGeneration \
> "$before_apps_list"
kubectl delete jobs \
--namespace openstack \
-l "application=$app_name" \
--wait
helm upgrade openstack ${OSH_HELM_REPO}/openstack \
--namespace openstack \
--reuse-values \
${config_change} \
--timeout=600s \
--wait
helm osh wait-for-pods openstack
kubectl get daemonsets,deployments,statefulsets \
--namespace openstack \
--no-headers \
--output custom-columns=Kind:.kind,Name:.metadata.name,Generation:.status.observedGeneration \
> "$after_apps_list"
# get the list of apps that exist in the after list but not in the before list
changed_apps="$(comm -13 "$before_apps_list" "$after_apps_list")"
if ! echo "$changed_apps" | grep "$app_name" ; then
echo "Expected $app_name application to update"
exit 1
fi
# use awk to find applications not matching app_name and pretty-print them as Kind/Name
unexpected_changed_apps="$(echo "$changed_apps" | awk -v appname="$app_name" '$0 !~ appname { print $1 "/" $2 }')"
if [ "x$unexpected_changed_apps" != "x" ]; then
echo "Applications changed unexpectedly: $unexpected_changed_apps"
exit 1
fi
}
validate_only_expected_application_changes "glance" "--set glance.conf.logging.logger_glance.level=WARN"
validate_only_expected_application_changes "heat" "--set heat.conf.logging.logger_heat.level=WARN"
validate_only_expected_application_changes "keystone" "--set keystone.conf.logging.logger_keystone.level=WARN"
validate_only_expected_application_changes "libvirt" "--set libvirt.conf.libvirt.log_level=2"
validate_only_expected_application_changes "memcached" "--set memcached.conf.memcached.stats_cachedump.enabled=false"
validate_only_expected_application_changes "neutron" "--set neutron.conf.logging.logger_neutron.level=WARN"
validate_only_expected_application_changes "nova" "--set nova.conf.logging.logger_nova.level=WARN"
validate_only_expected_application_changes "openvswitch" "--set openvswitch.pod.user.nova.uid=42425"
validate_only_expected_application_changes "placement" "--set placement.conf.logging.logger_placement.level=WARN"
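
The change detection above hinges on comm -13, which prints only the lines unique to the second file, so any workload whose observedGeneration changed after the upgrade ends up in changed_apps. A minimal illustration of that behaviour with throwaway files (not part of the script):

printf 'a\nb\n' > before
printf 'a\nb\nc\n' > after
comm -13 before after   # prints "c", the line present only in the second file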

View File

@@ -1,46 +0,0 @@
#!/bin/bash
set -ex
: ${OSH_HELM_REPO:="../openstack-helm"}
# This test confirms that upgrading an OpenStack umbrella Helm release using
# --reuse-values does not result in any unexpected pods being recreated.
# Ideally, no pods would be recreated if the upgrade has no configuration changes.
# Unfortunately, some jobs have hooks defined such that every Helm release operation
# deletes and recreates them. These jobs are ignored in this test.
# This test aims to validate that no Deployment, DaemonSet, or StatefulSet pods are
# changed by verifying that the observed generation remains the same.
# This test case is proven by:
# 1. getting the list of DaemonSets, Deployments, and StatefulSets after an installation
# 2. performing a helm upgrade with --reuse-values
# 3. getting the list of DaemonSets, Deployments, and StatefulSets after the upgrade
# 4. verifying that the list of changes is empty since no applications should have changed
before_apps_list="$(mktemp)"
after_apps_list="$(mktemp)"
kubectl get daemonsets,deployments,statefulsets \
--namespace openstack \
--no-headers \
--output custom-columns=Kind:.kind,Name:.metadata.name,Generation:.status.observedGeneration \
> "$before_apps_list"
helm upgrade openstack ${OSH_HELM_REPO}/openstack \
--namespace openstack \
--reuse-values \
--wait
kubectl get daemonsets,deployments,statefulsets \
--namespace openstack \
--no-headers \
--output custom-columns=Kind:.kind,Name:.metadata.name,Generation:.status.observedGeneration \
> "$after_apps_list"
# get the list of apps that exist in the after list but not in the before list
changed_apps="$(comm -13 "$before_apps_list" "$after_apps_list")"
if [ "x$changed_apps" != "x" ]; then
echo "Applications changed unexpectedly: $changed_apps"
exit 1
fi
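
Both scripts key off .status.observedGeneration, which only moves when the controller observes a new spec generation, so a truly no-op upgrade leaves every line of the custom-columns output identical. A hedged way to inspect the same data by hand (the sample output line is illustrative):

kubectl get deployments --namespace openstack --no-headers \
  --output custom-columns=Kind:.kind,Name:.metadata.name,Generation:.status.observedGeneration
# e.g. "Deployment   keystone-api   2" -- Generation should be unchanged after a no-op upgrade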

View File

@@ -1,15 +0,0 @@
---
glance:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
db_init: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
db_drop: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
ks_user: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
ks_service: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
ks_endpoints: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
glance_db_sync: "quay.io/airshipit/glance:2024.1-ubuntu_jammy"
glance_api: "quay.io/airshipit/glance:2024.1-ubuntu_jammy"
glance_metadefs_load: "quay.io/airshipit/glance:2024.1-ubuntu_jammy"
glance_storage_init: "docker.io/openstackhelm/ceph-config-helper:latest-ubuntu_jammy"
...

View File

@@ -1,15 +0,0 @@
---
glance:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
db_init: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
db_drop: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
ks_user: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
ks_service: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
ks_endpoints: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
glance_db_sync: "quay.io/airshipit/glance:2024.2-ubuntu_jammy"
glance_api: "quay.io/airshipit/glance:2024.2-ubuntu_jammy"
glance_metadefs_load: "quay.io/airshipit/glance:2024.2-ubuntu_jammy"
glance_storage_init: "docker.io/openstackhelm/ceph-config-helper:latest-ubuntu_jammy"
...

View File

@@ -1,15 +0,0 @@
---
glance:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
db_init: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
db_drop: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
ks_user: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
ks_service: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
ks_endpoints: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
glance_db_sync: "quay.io/airshipit/glance:2025.1-ubuntu_jammy"
glance_api: "quay.io/airshipit/glance:2025.1-ubuntu_jammy"
glance_metadefs_load: "quay.io/airshipit/glance:2025.1-ubuntu_jammy"
glance_storage_init: "docker.io/openstackhelm/ceph-config-helper:latest-ubuntu_jammy"
...

View File

@@ -1,15 +0,0 @@
---
glance:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
db_init: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
db_drop: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
ks_user: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
ks_service: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
ks_endpoints: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
glance_db_sync: "quay.io/airshipit/glance:2025.1-ubuntu_noble"
glance_api: "quay.io/airshipit/glance:2025.1-ubuntu_noble"
glance_metadefs_load: "quay.io/airshipit/glance:2025.1-ubuntu_noble"
glance_storage_init: "docker.io/openstackhelm/ceph-config-helper:latest-ubuntu_jammy"
...

View File

@@ -1,15 +0,0 @@
---
glance:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
db_init: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
db_drop: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
ks_user: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
ks_service: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
ks_endpoints: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
glance_db_sync: "quay.io/airshipit/glance:2025.2-ubuntu_noble"
glance_api: "quay.io/airshipit/glance:2025.2-ubuntu_noble"
glance_metadefs_load: "quay.io/airshipit/glance:2025.2-ubuntu_noble"
glance_storage_init: "docker.io/openstackhelm/ceph-config-helper:latest-ubuntu_jammy"
...

View File

@@ -1,38 +0,0 @@
---
pod:
security_context:
glance:
container:
glance_api:
appArmorProfile:
type: RuntimeDefault
glance_perms:
appArmorProfile:
type: RuntimeDefault
nginx:
appArmorProfile:
type: RuntimeDefault
metadefs_load:
container:
glance_metadefs_load:
appArmorProfile:
type: RuntimeDefault
storage_init:
container:
glance_storage_init:
appArmorProfile:
type: RuntimeDefault
test:
container:
glance_test_ks_user:
appArmorProfile:
type: RuntimeDefault
glance_test:
appArmorProfile:
type: RuntimeDefault
kubernetes_entrypoint:
container:
kubernetes_entrypoint:
appArmorProfile:
type: RuntimeDefault
...

View File

@@ -1,46 +0,0 @@
---
glance:
manifests:
network_policy: true
network_policy:
glance:
ingress:
- from:
- podSelector:
matchLabels:
application: glance
- podSelector:
matchLabels:
application: nova
- podSelector:
matchLabels:
application: horizon
- podSelector:
matchLabels:
application: ingress
- podSelector:
matchLabels:
application: heat
- podSelector:
matchLabels:
application: ironic
- podSelector:
matchLabels:
application: cinder
ports:
- protocol: TCP
port: 9292
egress:
- to:
ports:
- protocol: TCP
port: 80
- protocol: TCP
port: 443
- to:
- ipBlock:
cidr: %%%REPLACE_API_ADDR%%%/32
ports:
- protocol: TCP
port: %%%REPLACE_API_PORT%%%
...

View File

@@ -1,128 +0,0 @@
---
glance:
images:
tags:
nginx: docker.io/nginx:1.18.0
conf:
glance:
DEFAULT:
bind_host: 127.0.0.1
keystone_authtoken:
cafile: /etc/glance/certs/ca.crt
glance_store:
https_ca_certificates_file: /etc/glance/certs/ca.crt
swift_store_cacert: /etc/glance/certs/ca.crt
oslo_messaging_rabbit:
ssl: true
ssl_ca_file: /etc/rabbitmq/certs/ca.crt
ssl_cert_file: /etc/rabbitmq/certs/tls.crt
ssl_key_file: /etc/rabbitmq/certs/tls.key
nginx: |
worker_processes 1;
daemon off;
user nginx;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65s;
tcp_nodelay on;
log_format main '[nginx] method=$request_method path=$request_uri '
'status=$status upstream_status=$upstream_status duration=$request_time size=$body_bytes_sent '
'"$remote_user" "$http_referer" "$http_user_agent"';
access_log /dev/stdout main;
upstream websocket {
server 127.0.0.1:$PORT;
}
server {
server_name {{ printf "%s.%s.svc.%s" "${SHORTNAME}" .Release.Namespace .Values.endpoints.cluster_domain_suffix }};
listen $POD_IP:$PORT ssl;
client_max_body_size 0;
ssl_certificate /etc/nginx/certs/tls.crt;
ssl_certificate_key /etc/nginx/certs/tls.key;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;
location / {
proxy_pass_request_headers on;
proxy_http_version 1.1;
proxy_pass http://websocket;
proxy_read_timeout 90;
}
}
}
network:
api:
ingress:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "https"
endpoints:
identity:
name: keystone
auth:
admin:
cacert: /etc/ssl/certs/openstack-helm.crt
glance:
cacert: /etc/ssl/certs/openstack-helm.crt
test:
cacert: /etc/ssl/certs/openstack-helm.crt
scheme:
default: https
port:
api:
default: 443
image:
host_fqdn_override:
default:
tls:
secretName: glance-tls-api
issuerRef:
name: ca-issuer
kind: ClusterIssuer
scheme:
default: https
public: https
port:
api:
public: 443
dashboard:
scheme:
default: https
public: https
port:
web:
default: 80
public: 443
oslo_messaging:
port:
https:
default: 15680
pod:
security_context:
glance:
pod:
runAsUser: 0
resources:
nginx:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "1024Mi"
cpu: "2000m"
manifests:
certificates: true
...

View File

@@ -1,17 +0,0 @@
---
heat:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
db_init: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
db_drop: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
ks_user: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
ks_service: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
ks_endpoints: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
heat_db_sync: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
heat_api: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
heat_cfn: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
heat_engine: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
heat_engine_cleaner: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
heat_purge_deleted: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
...

View File

@@ -1,17 +0,0 @@
---
heat:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
db_init: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
db_drop: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
ks_user: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
ks_service: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
ks_endpoints: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
heat_db_sync: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
heat_api: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
heat_cfn: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
heat_engine: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
heat_engine_cleaner: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
heat_purge_deleted: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
...

View File

@@ -1,17 +0,0 @@
---
heat:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
db_init: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
db_drop: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
ks_user: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
ks_service: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
ks_endpoints: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
heat_db_sync: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
heat_api: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
heat_cfn: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
heat_engine: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
heat_engine_cleaner: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
heat_purge_deleted: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
...

View File

@@ -1,17 +0,0 @@
---
heat:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
db_init: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
db_drop: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
ks_user: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
ks_service: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
ks_endpoints: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
heat_db_sync: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
heat_api: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
heat_cfn: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
heat_engine: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
heat_engine_cleaner: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
heat_purge_deleted: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
...

View File

@@ -1,17 +0,0 @@
---
heat:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
db_init: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
db_drop: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
ks_user: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
ks_service: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
ks_endpoints: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
heat_db_sync: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
heat_api: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
heat_cfn: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
heat_engine: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
heat_engine_cleaner: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
heat_purge_deleted: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
...

View File

@@ -1,35 +0,0 @@
---
pod:
security_context:
heat:
container:
heat_api:
appArmorProfile:
type: RuntimeDefault
heat_cfn:
appArmorProfile:
type: RuntimeDefault
heat_engine:
appArmorProfile:
type: RuntimeDefault
engine_cleaner:
container:
heat_engine_cleaner:
appArmorProfile:
type: RuntimeDefault
ks_user:
container:
heat_ks_domain_user:
appArmorProfile:
type: RuntimeDefault
trusts:
container:
heat_trusts:
appArmorProfile:
type: RuntimeDefault
kubernetes_entrypoint:
container:
kubernetes_entrypoint:
appArmorProfile:
type: RuntimeDefault
...

View File

@@ -1,48 +0,0 @@
---
heat:
manifests:
network_policy: true
network_policy:
heat:
ingress:
- from:
- podSelector:
matchLabels:
application: heat
- podSelector:
matchLabels:
application: ingress
- podSelector:
matchLabels:
application: horizon
ports:
- protocol: TCP
port: 8000
- protocol: TCP
port: 8003
- protocol: TCP
port: 8004
egress:
- to:
- podSelector:
matchLabels:
application: neutron
- to:
- podSelector:
matchLabels:
application: nova
- to:
- podSelector:
matchLabels:
application: glance
- to:
- podSelector:
matchLabels:
application: cinder
- to:
- ipBlock:
cidr: %%%REPLACE_API_ADDR%%%/32
ports:
- protocol: TCP
port: %%%REPLACE_API_PORT%%%
...

View File

@@ -1,174 +0,0 @@
---
heat:
conf:
software:
apache2:
binary: apache2
start_parameters: -DFOREGROUND
site_dir: /etc/apache2/sites-enabled
conf_dir: /etc/apache2/conf-enabled
mods_dir: /etc/apache2/mods-available
a2enmod:
- ssl
a2dismod: null
mpm_event: |
<IfModule mpm_event_module>
ServerLimit 1024
StartServers 32
MinSpareThreads 32
MaxSpareThreads 256
ThreadsPerChild 25
MaxRequestsPerChild 128
ThreadLimit 720
</IfModule>
wsgi_heat: |
{{- $portInt := tuple "orchestration" "internal" "api" $ | include "helm-toolkit.endpoints.endpoint_port_lookup" }}
Listen {{ $portInt }}
<VirtualHost *:{{ $portInt }}>
ServerName {{ printf "%s.%s.svc.%s" "heat-api" .Release.Namespace .Values.endpoints.cluster_domain_suffix }}
WSGIDaemonProcess heat-api processes=1 threads=1 user=heat display-name=%{GROUP}
WSGIProcessGroup heat-api
WSGIScriptAlias / /var/www/cgi-bin/heat/heat-wsgi-api
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
AllowEncodedSlashes On
SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
ErrorLogFormat "%{cu}t %M"
ErrorLog /dev/stdout
CustomLog /dev/stdout combined env=!forwarded
CustomLog /dev/stdout proxy env=forwarded
SSLEngine on
SSLCertificateFile /etc/heat/certs/tls.crt
SSLCertificateKeyFile /etc/heat/certs/tls.key
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
SSLHonorCipherOrder on
</VirtualHost>
wsgi_cfn: |
{{- $portInt := tuple "cloudformation" "internal" "api" $ | include "helm-toolkit.endpoints.endpoint_port_lookup" }}
Listen {{ $portInt }}
<VirtualHost *:{{ $portInt }}>
ServerName {{ printf "%s.%s.svc.%s" "heat-api-cfn" .Release.Namespace .Values.endpoints.cluster_domain_suffix }}
WSGIDaemonProcess heat-api-cfn processes=1 threads=1 user=heat display-name=%{GROUP}
WSGIProcessGroup heat-api-cfn
WSGIScriptAlias / /var/www/cgi-bin/heat/heat-wsgi-api-cfn
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
AllowEncodedSlashes On
SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
ErrorLogFormat "%{cu}t %M"
ErrorLog /dev/stdout
CustomLog /dev/stdout combined env=!forwarded
CustomLog /dev/stdout proxy env=forwarded
SSLEngine on
SSLCertificateFile /etc/heat/certs/tls.crt
SSLCertificateKeyFile /etc/heat/certs/tls.key
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
SSLHonorCipherOrder on
</VirtualHost>
heat:
clients_neutron:
ca_file: /etc/heat/certs/ca.crt
clients_cinder:
ca_file: /etc/heat/certs/ca.crt
clients_glance:
ca_file: /etc/heat/certs/ca.crt
clients_nova:
ca_file: /etc/heat/certs/ca.crt
clients_swift:
ca_file: /etc/heat/certs/ca.crt
ssl:
ca_file: /etc/heat/certs/ca.crt
keystone_authtoken:
cafile: /etc/heat/certs/ca.crt
clients:
ca_file: /etc/heat/certs/ca.crt
clients_keystone:
ca_file: /etc/heat/certs/ca.crt
oslo_messaging_rabbit:
ssl: true
ssl_ca_file: /etc/rabbitmq/certs/ca.crt
ssl_cert_file: /etc/rabbitmq/certs/tls.crt
ssl_key_file: /etc/rabbitmq/certs/tls.key
network:
api:
ingress:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "https"
cfn:
ingress:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "https"
pod:
security_context:
heat:
container:
heat_api:
readOnlyRootFilesystem: false
runAsUser: 0
heat_cfn:
readOnlyRootFilesystem: false
runAsUser: 0
endpoints:
identity:
auth:
admin:
cacert: /etc/ssl/certs/openstack-helm.crt
heat:
cacert: /etc/ssl/certs/openstack-helm.crt
heat_trustee:
cacert: /etc/ssl/certs/openstack-helm.crt
heat_stack_user:
cacert: /etc/ssl/certs/openstack-helm.crt
test:
cacert: /etc/ssl/certs/openstack-helm.crt
scheme:
default: https
port:
api:
default: 443
orchestration:
host_fqdn_override:
default:
tls:
secretName: heat-tls-api
issuerRef:
name: ca-issuer
kind: ClusterIssuer
scheme:
default: https
port:
api:
public: 443
cloudformation:
host_fqdn_override:
default:
tls:
secretName: heat-tls-cfn
issuerRef:
name: ca-issuer
kind: ClusterIssuer
scheme:
default: https
port:
api:
public: 443
ingress:
port:
ingress:
default: 443
oslo_messaging:
port:
https:
default: 15680
manifests:
certificates: true
...

View File

@@ -1,9 +0,0 @@
---
horizon:
images:
tags:
db_init: quay.io/airshipit/heat:2024.1-ubuntu_jammy
db_drop: quay.io/airshipit/heat:2024.1-ubuntu_jammy
horizon_db_sync: quay.io/airshipit/horizon:2024.1-ubuntu_jammy
horizon: quay.io/airshipit/horizon:2024.1-ubuntu_jammy
...

View File

@@ -1,9 +0,0 @@
---
horizon:
images:
tags:
db_init: quay.io/airshipit/heat:2024.2-ubuntu_jammy
db_drop: quay.io/airshipit/heat:2024.2-ubuntu_jammy
horizon_db_sync: quay.io/airshipit/horizon:2024.2-ubuntu_jammy
horizon: quay.io/airshipit/horizon:2024.2-ubuntu_jammy
...

View File

@@ -1,9 +0,0 @@
---
horizon:
images:
tags:
db_init: quay.io/airshipit/heat:2025.1-ubuntu_jammy
db_drop: quay.io/airshipit/heat:2025.1-ubuntu_jammy
horizon_db_sync: quay.io/airshipit/horizon:2025.1-ubuntu_jammy
horizon: quay.io/airshipit/horizon:2025.1-ubuntu_jammy
...

View File

@@ -1,9 +0,0 @@
---
horizon:
images:
tags:
db_init: quay.io/airshipit/heat:2025.1-ubuntu_noble
db_drop: quay.io/airshipit/heat:2025.1-ubuntu_noble
horizon_db_sync: quay.io/airshipit/horizon:2025.1-ubuntu_noble
horizon: quay.io/airshipit/horizon:2025.1-ubuntu_noble
...

View File

@@ -1,9 +0,0 @@
---
horizon:
images:
tags:
db_init: quay.io/airshipit/heat:2025.2-ubuntu_noble
db_drop: quay.io/airshipit/heat:2025.2-ubuntu_noble
horizon_db_sync: quay.io/airshipit/horizon:2025.2-ubuntu_noble
horizon: quay.io/airshipit/horizon:2025.2-ubuntu_noble
...

View File

@@ -1,24 +0,0 @@
---
pod:
security_context:
horizon:
container:
horizon:
appArmorProfile:
type: RuntimeDefault
db_sync:
container:
horizon_db_sync:
appArmorProfile:
type: RuntimeDefault
test:
container:
horizon_test:
appArmorProfile:
type: RuntimeDefault
kubernetes_entrypoint:
container:
kubernetes_entrypoint:
appArmorProfile:
type: RuntimeDefault
...

View File

@@ -1,56 +0,0 @@
---
horizon:
manifests:
network_policy: true
network_policy:
horizon:
ingress:
- from:
- podSelector:
matchLabels:
application: horizon
- from:
- podSelector:
matchLabels:
application: prometheus-openstack-exporter
- from:
- podSelector:
matchLabels:
application: ingress
ports:
- port: 80
protocol: TCP
- port: 443
protocol: TCP
egress:
- to:
- podSelector:
matchLabels:
application: neutron
- to:
- podSelector:
matchLabels:
application: nova
- to:
- podSelector:
matchLabels:
application: glance
- to:
- podSelector:
matchLabels:
application: cinder
- to:
- podSelector:
matchLabels:
application: keystone
- to:
- podSelector:
matchLabels:
application: heat
- to:
- ipBlock:
cidr: %%%REPLACE_API_ADDR%%%/32
ports:
- protocol: TCP
port: %%%REPLACE_API_PORT%%%
...

View File

@@ -1,107 +0,0 @@
---
horizon:
network:
dashboard:
ingress:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "https"
conf:
software:
apache2:
a2enmod:
- headers
- rewrite
- ssl
horizon:
apache: |
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" proxy
SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
CustomLog /dev/stdout combined env=!forwarded
CustomLog /dev/stdout proxy env=forwarded
<VirtualHost *:80>
ServerName horizon-int.openstack.svc.cluster.local
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R,L]
</Virtualhost>
<VirtualHost *:{{ tuple "dashboard" "internal" "web" . | include "helm-toolkit.endpoints.endpoint_port_lookup" }}>
ServerName horizon-int.openstack.svc.cluster.local
WSGIScriptReloading On
WSGIDaemonProcess horizon-http processes=5 threads=1 user=horizon group=horizon display-name=%{GROUP} python-path=/var/lib/kolla/venv/lib/python2.7/site-packages
WSGIProcessGroup horizon-http
WSGIScriptAlias / /var/www/cgi-bin/horizon/django.wsgi
WSGIPassAuthorization On
RewriteEngine On
RewriteCond %{REQUEST_METHOD} !^(POST|PUT|GET|DELETE|PATCH)
RewriteRule .* - [F]
<Location "/">
Require all granted
</Location>
Alias /static /var/www/html/horizon
<Location "/static">
SetHandler static
</Location>
ErrorLogFormat "%{cu}t %M"
ErrorLog /dev/stdout
TransferLog /dev/stdout
SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
CustomLog /dev/stdout combined env=!forwarded
CustomLog /dev/stdout proxy env=forwarded
ErrorLog /dev/stdout
SSLEngine on
SSLCertificateFile /etc/openstack-dashboard/certs/tls.crt
SSLCertificateKeyFile /etc/openstack-dashboard/certs/tls.key
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
SSLHonorCipherOrder on
</VirtualHost>
local_settings:
config:
use_ssl: "True"
csrf_cookie_secure: "True"
csrf_cookie_httponly: "True"
enforce_password_check: "True"
session_cookie_secure: "True"
session_cookie_httponly: "True"
endpoints:
identity:
auth:
admin:
cacert: /etc/ssl/certs/openstack-helm.crt
scheme:
default: https
port:
api:
default: 443
dashboard:
host_fqdn_override:
default:
tls:
secretName: horizon-tls-web
issuerRef:
name: ca-issuer
kind: ClusterIssuer
scheme:
default: https
public: https
port:
web:
default: 443
public: 443
ingress:
port:
ingress:
default: 443
manifests:
certificates: true
...

View File

@@ -1,17 +0,0 @@
---
keystone:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
db_init: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
db_drop: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
keystone_api: "quay.io/airshipit/keystone:2024.1-ubuntu_jammy"
keystone_bootstrap: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
keystone_credential_rotate: "quay.io/airshipit/keystone:2024.1-ubuntu_jammy"
keystone_credential_setup: "quay.io/airshipit/keystone:2024.1-ubuntu_jammy"
keystone_db_sync: "quay.io/airshipit/keystone:2024.1-ubuntu_jammy"
keystone_domain_manage: "quay.io/airshipit/keystone:2024.1-ubuntu_jammy"
keystone_fernet_rotate: "quay.io/airshipit/keystone:2024.1-ubuntu_jammy"
keystone_fernet_setup: "quay.io/airshipit/keystone:2024.1-ubuntu_jammy"
ks_user: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
...

View File

@@ -1,17 +0,0 @@
---
keystone:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
db_init: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
db_drop: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
keystone_api: "quay.io/airshipit/keystone:2024.2-ubuntu_jammy"
keystone_bootstrap: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
keystone_credential_rotate: "quay.io/airshipit/keystone:2024.2-ubuntu_jammy"
keystone_credential_setup: "quay.io/airshipit/keystone:2024.2-ubuntu_jammy"
keystone_db_sync: "quay.io/airshipit/keystone:2024.2-ubuntu_jammy"
keystone_domain_manage: "quay.io/airshipit/keystone:2024.2-ubuntu_jammy"
keystone_fernet_rotate: "quay.io/airshipit/keystone:2024.2-ubuntu_jammy"
keystone_fernet_setup: "quay.io/airshipit/keystone:2024.2-ubuntu_jammy"
ks_user: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
...

View File

@@ -1,17 +0,0 @@
---
keystone:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
db_init: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
db_drop: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
keystone_api: "quay.io/airshipit/keystone:2025.1-ubuntu_jammy"
keystone_bootstrap: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
keystone_credential_rotate: "quay.io/airshipit/keystone:2025.1-ubuntu_jammy"
keystone_credential_setup: "quay.io/airshipit/keystone:2025.1-ubuntu_jammy"
keystone_db_sync: "quay.io/airshipit/keystone:2025.1-ubuntu_jammy"
keystone_domain_manage: "quay.io/airshipit/keystone:2025.1-ubuntu_jammy"
keystone_fernet_rotate: "quay.io/airshipit/keystone:2025.1-ubuntu_jammy"
keystone_fernet_setup: "quay.io/airshipit/keystone:2025.1-ubuntu_jammy"
ks_user: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
...

View File

@@ -1,17 +0,0 @@
---
keystone:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
db_init: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
db_drop: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
keystone_api: "quay.io/airshipit/keystone:2025.1-ubuntu_noble"
keystone_bootstrap: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
keystone_credential_rotate: "quay.io/airshipit/keystone:2025.1-ubuntu_noble"
keystone_credential_setup: "quay.io/airshipit/keystone:2025.1-ubuntu_noble"
keystone_db_sync: "quay.io/airshipit/keystone:2025.1-ubuntu_noble"
keystone_domain_manage: "quay.io/airshipit/keystone:2025.1-ubuntu_noble"
keystone_fernet_rotate: "quay.io/airshipit/keystone:2025.1-ubuntu_noble"
keystone_fernet_setup: "quay.io/airshipit/keystone:2025.1-ubuntu_noble"
ks_user: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
...

View File

@@ -1,17 +0,0 @@
---
keystone:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
db_init: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
db_drop: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
keystone_api: "quay.io/airshipit/keystone:2025.2-ubuntu_noble"
keystone_bootstrap: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
keystone_credential_rotate: "quay.io/airshipit/keystone:2025.2-ubuntu_noble"
keystone_credential_setup: "quay.io/airshipit/keystone:2025.2-ubuntu_noble"
keystone_db_sync: "quay.io/airshipit/keystone:2025.2-ubuntu_noble"
keystone_domain_manage: "quay.io/airshipit/keystone:2025.2-ubuntu_noble"
keystone_fernet_rotate: "quay.io/airshipit/keystone:2025.2-ubuntu_noble"
keystone_fernet_setup: "quay.io/airshipit/keystone:2025.2-ubuntu_noble"
ks_user: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
...

View File

@@ -1,40 +0,0 @@
---
pod:
security_context:
keystone:
container:
keystone_api:
appArmorProfile:
type: RuntimeDefault
credential_setup:
container:
keystone_credential_setup:
appArmorProfile:
type: RuntimeDefault
fernet_setup:
container:
keystone_fernet_setup:
appArmorProfile:
type: RuntimeDefault
domain_manage:
container:
keystone_domain_manage:
appArmorProfile:
type: RuntimeDefault
keystone_domain_manage_init:
appArmorProfile:
type: RuntimeDefault
test:
container:
keystone_test:
appArmorProfile:
type: RuntimeDefault
keystone_test_ks_user:
appArmorProfile:
type: RuntimeDefault
kubernetes_entrypoint:
container:
kubernetes_entrypoint:
appArmorProfile:
type: RuntimeDefault
...

View File

@@ -1,59 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
keystone:
conf:
keystone:
identity:
driver: sql
default_domain_id: default
domain_specific_drivers_enabled: True
domain_configurations_from_database: True
domain_config_dir: /etc/keystone/domains
ks_domains:
ldapdomain:
identity:
driver: ldap
ldap:
url: "ldap://ldap.openstack.svc.cluster.local:389"
user: "cn=admin,dc=cluster,dc=local"
password: password
suffix: "dc=cluster,dc=local"
user_attribute_ignore: "enabled,email,tenants,default_project_id"
query_scope: sub
user_enabled_emulation: True
user_enabled_emulation_dn: "cn=overwatch,ou=Groups,dc=cluster,dc=local"
user_tree_dn: "ou=People,dc=cluster,dc=local"
user_enabled_mask: 2
user_enabled_default: 512
user_name_attribute: cn
user_id_attribute: sn
user_mail_attribute: mail
user_pass_attribute: userPassword
group_tree_dn: "ou=Groups,dc=cluster,dc=local"
group_filter: ""
group_objectclass: posixGroup
group_id_attribute: cn
group_name_attribute: cn
group_desc_attribute: description
group_member_attribute: memberUID
use_pool: true
pool_size: 27
pool_retry_max: 3
pool_retry_delay: 0.1
pool_connection_timeout: 15
pool_connection_lifetime: 600
use_auth_pool: true
auth_pool_size: 100
auth_pool_connection_lifetime: 60
...

View File

@@ -1,67 +0,0 @@
---
keystone:
manifests:
network_policy: true
network_policy:
keystone:
ingress:
- from:
- podSelector:
matchLabels:
application: ceph
- podSelector:
matchLabels:
application: ingress
- podSelector:
matchLabels:
application: keystone
- podSelector:
matchLabels:
application: heat
- podSelector:
matchLabels:
application: glance
- podSelector:
matchLabels:
application: cinder
- podSelector:
matchLabels:
application: barbican
- podSelector:
matchLabels:
application: ceilometer
- podSelector:
matchLabels:
application: horizon
- podSelector:
matchLabels:
application: ironic
- podSelector:
matchLabels:
application: magnum
- podSelector:
matchLabels:
application: mistral
- podSelector:
matchLabels:
application: nova
- podSelector:
matchLabels:
application: neutron
- podSelector:
matchLabels:
application: placement
- podSelector:
matchLabels:
application: prometheus-openstack-exporter
ports:
- protocol: TCP
port: 5000
egress:
- to:
- ipBlock:
cidr: %%%REPLACE_API_ADDR%%%/32
ports:
- protocol: TCP
port: %%%REPLACE_API_PORT%%%
...

View File

@@ -1,89 +0,0 @@
---
keystone:
network:
api:
ingress:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: null
nginx.ingress.kubernetes.io/backend-protocol: "https"
pod:
security_context:
keystone:
pod:
runAsUser: 0
container:
keystone_api:
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
conf:
software:
apache2:
a2enmod:
- ssl
keystone:
oslo_messaging_rabbit:
ssl: true
ssl_ca_file: /etc/rabbitmq/certs/ca.crt
ssl_cert_file: /etc/rabbitmq/certs/tls.crt
ssl_key_file: /etc/rabbitmq/certs/tls.key
wsgi_keystone: |
{{- $portInt := tuple "identity" "internal" "api" $ | include "helm-toolkit.endpoints.endpoint_port_lookup" }}
{{- $vh := tuple "identity" "internal" . | include "helm-toolkit.endpoints.hostname_short_endpoint_lookup" }}
Listen 0.0.0.0:{{ $portInt }}
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" proxy
SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
CustomLog /dev/stdout combined env=!forwarded
CustomLog /dev/stdout proxy env=forwarded
<VirtualHost *:{{ tuple "identity" "internal" "api" $ | include "helm-toolkit.endpoints.endpoint_port_lookup" }}>
ServerName {{ printf "%s.%s.svc.%s" "keystone-api" .Release.Namespace .Values.endpoints.cluster_domain_suffix }}
WSGIDaemonProcess keystone-public processes=1 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /var/www/cgi-bin/keystone/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /dev/stdout
SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
CustomLog /dev/stdout combined env=!forwarded
CustomLog /dev/stdout proxy env=forwarded
SSLEngine on
SSLCertificateFile /etc/keystone/certs/tls.crt
SSLCertificateKeyFile /etc/keystone/certs/tls.key
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
SSLHonorCipherOrder on
</VirtualHost>
endpoints:
identity:
auth:
admin:
cacert: /etc/ssl/certs/openstack-helm.crt
test:
cacert: /etc/ssl/certs/openstack-helm.crt
host_fqdn_override:
default:
tls:
secretName: keystone-tls-api
issuerRef:
name: ca-issuer
kind: ClusterIssuer
scheme:
default: https
public: https
port:
api:
default: 443
oslo_messaging:
port:
https:
default: 15680
manifests:
certificates: true
...

View File

@@ -1,6 +0,0 @@
---
libvirt:
images:
tags:
libvirt: docker.io/openstackhelm/libvirt:2024.1-ubuntu_jammy
...

View File

@@ -1,6 +0,0 @@
---
libvirt:
images:
tags:
libvirt: docker.io/openstackhelm/libvirt:2024.2-ubuntu_jammy
...

View File

@@ -1,6 +0,0 @@
---
libvirt:
images:
tags:
libvirt: docker.io/openstackhelm/libvirt:2025.1-ubuntu_jammy
...

View File

@@ -1,6 +0,0 @@
---
libvirt:
images:
tags:
libvirt: docker.io/openstackhelm/libvirt:2025.1-ubuntu_noble
...

View File

@@ -1,6 +0,0 @@
---
libvirt:
images:
tags:
libvirt: docker.io/openstackhelm/libvirt:2025.2-ubuntu_noble
...

View File

@@ -1,9 +0,0 @@
---
pod:
security_context:
libvirt:
container:
libvirt:
appArmorProfile:
type: RuntimeDefault
...

View File

@@ -1,17 +0,0 @@
# Note: This yaml file serves as an example of overriding the manifest
# to enable an additional externally managed Ceph Cinder backend. When such
# a backend is provisioned as shown in
# cinder/values_overrides/external-ceph-backend.yaml of the openstack-helm repo,
# the override below is needed to store the secret key of the cinder user in
# libvirt.
---
libvirt:
conf:
ceph:
cinder:
external_ceph:
enabled: true
user: cinder2
secret_uuid: 3f0133e4-8384-4743-9473-fecacc095c74
user_secret_name: cinder-volume-external-rbd-keyring
...

View File

@@ -1,5 +0,0 @@
---
libvirt:
manifests:
network_policy: true
...

View File

@@ -1,8 +0,0 @@
---
libvirt:
conf:
libvirt:
listen_tcp: "0"
listen_tls: "1"
listen_addr: 0.0.0.0
...

View File

@@ -1,36 +0,0 @@
---
pod:
security_context:
server:
container:
mariadb:
appArmorProfile:
type: RuntimeDefault
exporter:
appArmorProfile:
type: RuntimeDefault
perms:
appArmorProfile:
type: RuntimeDefault
mariadb_backup:
container:
mariadb_backup:
appArmorProfile:
type: RuntimeDefault
verify_perms:
appArmorProfile:
type: RuntimeDefault
backup_perms:
appArmorProfile:
type: RuntimeDefault
tests:
container:
test:
appArmorProfile:
type: RuntimeDefault
kubernetes_entrypoint:
container:
kubernetes_entrypoint:
appArmorProfile:
type: RuntimeDefault
...

View File

@@ -1,12 +0,0 @@
---
mariadb:
pod:
replicas:
server: 1
volume:
size: 1Gi
class_name: local-storage
monitoring:
prometheus:
enabled: false
...

View File

@@ -1,82 +0,0 @@
---
mariadb:
manifests:
network_policy: true
network_policy:
mariadb:
egress:
- to:
- ipBlock:
cidr: %%%REPLACE_API_ADDR%%%/32
ports:
- protocol: TCP
port: %%%REPLACE_API_PORT%%%
ingress:
- from:
- podSelector:
matchLabels:
application: keystone
- podSelector:
matchLabels:
application: heat
- podSelector:
matchLabels:
application: glance
- podSelector:
matchLabels:
application: cinder
- podSelector:
matchLabels:
application: aodh
- podSelector:
matchLabels:
application: barbican
- podSelector:
matchLabels:
application: ceilometer
- podSelector:
matchLabels:
application: designate
- podSelector:
matchLabels:
application: horizon
- podSelector:
matchLabels:
application: ironic
- podSelector:
matchLabels:
application: magnum
- podSelector:
matchLabels:
application: mistral
- podSelector:
matchLabels:
application: nova
- podSelector:
matchLabels:
application: neutron
- podSelector:
matchLabels:
application: rally
- podSelector:
matchLabels:
application: placement
- podSelector:
matchLabels:
application: prometheus-mysql-exporter
- podSelector:
matchLabels:
application: mariadb
- podSelector:
matchLabels:
application: mariadb-backup
ports:
- protocol: TCP
port: 3306
- protocol: TCP
port: 4567
- protocol: TCP
port: 80
- protocol: TCP
port: 8080
...

View File

@@ -1,24 +0,0 @@
---
mariadb:
pod:
security_context:
server:
container:
perms:
readOnlyRootFilesystem: false
mariadb:
runAsUser: 0
allowPrivilegeEscalation: true
readOnlyRootFilesystem: false
endpoints:
oslo_db:
host_fqdn_override:
default:
tls:
secretName: mariadb-tls-direct
issuerRef:
name: ca-issuer
kind: ClusterIssuer
manifests:
certificates: true
...

View File

@@ -1,17 +0,0 @@
---
pod:
security_context:
server:
container:
memcached:
appArmorProfile:
type: RuntimeDefault
memcached_exporter:
appArmorProfile:
type: RuntimeDefault
kubernetes_entrypoint:
container:
kubernetes_entrypoint:
appArmorProfile:
type: RuntimeDefault
...

View File

@@ -1,78 +0,0 @@
---
memcached:
manifests:
network_policy: true
network_policy:
memcached:
ingress:
- from:
- podSelector:
matchLabels:
application: ingress
- podSelector:
matchLabels:
application: keystone
- podSelector:
matchLabels:
application: heat
- podSelector:
matchLabels:
application: glance
- podSelector:
matchLabels:
application: cinder
- podSelector:
matchLabels:
application: barbican
- podSelector:
matchLabels:
application: ceilometer
- podSelector:
matchLabels:
application: horizon
- podSelector:
matchLabels:
application: ironic
- podSelector:
matchLabels:
application: magnum
- podSelector:
matchLabels:
application: mistral
- podSelector:
matchLabels:
application: nova
- podSelector:
matchLabels:
application: neutron
- podSelector:
matchLabels:
application: placement
- podSelector:
matchLabels:
application: prometheus_memcached_exporter
- podSelector:
matchLabels:
application: aodh
- podSelector:
matchLabels:
application: rally
- podSelector:
matchLabels:
application: memcached
- podSelector:
matchLabels:
application: gnocchi
ports:
- port: 11211
protocol: TCP
- port: 9150
protocol: TCP
egress:
- to:
- ipBlock:
cidr: %%%REPLACE_API_ADDR%%%/32
ports:
- protocol: TCP
port: %%%REPLACE_API_PORT%%%
...

View File

@@ -1,22 +0,0 @@
---
neutron:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
db_init: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
db_drop: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
ks_user: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
ks_service: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
ks_endpoints: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
neutron_db_sync: "quay.io/airshipit/neutron:2024.1-ubuntu_jammy"
neutron_dhcp: "quay.io/airshipit/neutron:2024.1-ubuntu_jammy"
neutron_l3: "quay.io/airshipit/neutron:2024.1-ubuntu_jammy"
neutron_l2gw: "quay.io/airshipit/neutron:2024.1-ubuntu_jammy"
neutron_linuxbridge_agent: "quay.io/airshipit/neutron:2024.1-ubuntu_jammy"
neutron_metadata: "quay.io/airshipit/neutron:2024.1-ubuntu_jammy"
neutron_openvswitch_agent: "quay.io/airshipit/neutron:2024.1-ubuntu_jammy"
neutron_server: "quay.io/airshipit/neutron:2024.1-ubuntu_jammy"
neutron_rpc_server: "quay.io/airshipit/neutron:2024.1-ubuntu_jammy"
neutron_bagpipe_bgp: "quay.io/airshipit/neutron:2024.1-ubuntu_jammy"
neutron_netns_cleanup_cron: "quay.io/airshipit/neutron:2024.1-ubuntu_jammy"
...

View File

@@ -1,22 +0,0 @@
---
neutron:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
db_init: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
db_drop: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
ks_user: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
ks_service: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
ks_endpoints: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
neutron_db_sync: "quay.io/airshipit/neutron:2024.2-ubuntu_jammy"
neutron_dhcp: "quay.io/airshipit/neutron:2024.2-ubuntu_jammy"
neutron_l3: "quay.io/airshipit/neutron:2024.2-ubuntu_jammy"
neutron_l2gw: "quay.io/airshipit/neutron:2024.2-ubuntu_jammy"
neutron_linuxbridge_agent: "quay.io/airshipit/neutron:2024.2-ubuntu_jammy"
neutron_metadata: "quay.io/airshipit/neutron:2024.2-ubuntu_jammy"
neutron_openvswitch_agent: "quay.io/airshipit/neutron:2024.2-ubuntu_jammy"
neutron_server: "quay.io/airshipit/neutron:2024.2-ubuntu_jammy"
neutron_rpc_server: "quay.io/airshipit/neutron:2024.2-ubuntu_jammy"
neutron_bagpipe_bgp: "quay.io/airshipit/neutron:2024.2-ubuntu_jammy"
neutron_netns_cleanup_cron: "quay.io/airshipit/neutron:2024.2-ubuntu_jammy"
...

View File

@@ -1,22 +0,0 @@
---
neutron:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
db_init: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
db_drop: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
ks_user: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
ks_service: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
ks_endpoints: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
neutron_db_sync: "quay.io/airshipit/neutron:2025.1-ubuntu_jammy"
neutron_dhcp: "quay.io/airshipit/neutron:2025.1-ubuntu_jammy"
neutron_l3: "quay.io/airshipit/neutron:2025.1-ubuntu_jammy"
neutron_l2gw: "quay.io/airshipit/neutron:2025.1-ubuntu_jammy"
neutron_linuxbridge_agent: "quay.io/airshipit/neutron:2025.1-ubuntu_jammy"
neutron_metadata: "quay.io/airshipit/neutron:2025.1-ubuntu_jammy"
neutron_openvswitch_agent: "quay.io/airshipit/neutron:2025.1-ubuntu_jammy"
neutron_server: "quay.io/airshipit/neutron:2025.1-ubuntu_jammy"
neutron_rpc_server: "quay.io/airshipit/neutron:2025.1-ubuntu_jammy"
neutron_bagpipe_bgp: "quay.io/airshipit/neutron:2025.1-ubuntu_jammy"
neutron_netns_cleanup_cron: "quay.io/airshipit/neutron:2025.1-ubuntu_jammy"
...

View File

@@ -1,22 +0,0 @@
---
neutron:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
db_init: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
db_drop: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
ks_user: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
ks_service: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
ks_endpoints: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
neutron_db_sync: "quay.io/airshipit/neutron:2025.1-ubuntu_noble"
neutron_dhcp: "quay.io/airshipit/neutron:2025.1-ubuntu_noble"
neutron_l3: "quay.io/airshipit/neutron:2025.1-ubuntu_noble"
neutron_l2gw: "quay.io/airshipit/neutron:2025.1-ubuntu_noble"
neutron_linuxbridge_agent: "quay.io/airshipit/neutron:2025.1-ubuntu_noble"
neutron_metadata: "quay.io/airshipit/neutron:2025.1-ubuntu_noble"
neutron_openvswitch_agent: "quay.io/airshipit/neutron:2025.1-ubuntu_noble"
neutron_server: "quay.io/airshipit/neutron:2025.1-ubuntu_noble"
neutron_rpc_server: "quay.io/airshipit/neutron:2025.1-ubuntu_noble"
neutron_bagpipe_bgp: "quay.io/airshipit/neutron:2025.1-ubuntu_noble"
neutron_netns_cleanup_cron: "quay.io/airshipit/neutron:2025.1-ubuntu_noble"
...

View File

@@ -1,22 +0,0 @@
---
neutron:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
db_init: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
db_drop: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
ks_user: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
ks_service: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
ks_endpoints: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
neutron_db_sync: "quay.io/airshipit/neutron:2025.2-ubuntu_noble"
neutron_dhcp: "quay.io/airshipit/neutron:2025.2-ubuntu_noble"
neutron_l3: "quay.io/airshipit/neutron:2025.2-ubuntu_noble"
neutron_l2gw: "quay.io/airshipit/neutron:2025.2-ubuntu_noble"
neutron_linuxbridge_agent: "quay.io/airshipit/neutron:2025.2-ubuntu_noble"
neutron_metadata: "quay.io/airshipit/neutron:2025.2-ubuntu_noble"
neutron_openvswitch_agent: "quay.io/airshipit/neutron:2025.2-ubuntu_noble"
neutron_server: "quay.io/airshipit/neutron:2025.2-ubuntu_noble"
neutron_rpc_server: "quay.io/airshipit/neutron:2025.2-ubuntu_noble"
neutron_bagpipe_bgp: "quay.io/airshipit/neutron:2025.2-ubuntu_noble"
neutron_netns_cleanup_cron: "quay.io/airshipit/neutron:2025.2-ubuntu_noble"
...

View File

@@ -1,81 +0,0 @@
---
pod:
security_context:
neutron_dhcp_agent:
container:
neutron_dhcp_agent:
appArmorProfile:
type: RuntimeDefault
neutron_dhcp_agent_init:
appArmorProfile:
type: RuntimeDefault
neutron_l3_agent:
container:
neutron_l3_agent:
appArmorProfile:
type: RuntimeDefault
neutron_l3_agent_init:
appArmorProfile:
type: RuntimeDefault
neutron_lb_agent:
container:
neutron_lb_agent:
appArmorProfile:
type: RuntimeDefault
neutron_lb_agent_init:
appArmorProfile:
type: RuntimeDefault
neutron_lb_agent_kernel_modules:
appArmorProfile:
type: RuntimeDefault
neutron_metadata_agent:
container:
neutron_metadata_agent_init:
appArmorProfile:
type: RuntimeDefault
neutron_ovs_agent:
container:
neutron_ovs_agent:
appArmorProfile:
type: RuntimeDefault
neutron_openvswitch_agent_kernel_modules:
appArmorProfile:
type: RuntimeDefault
neutron_ovs_agent_init:
appArmorProfile:
type: RuntimeDefault
netoffload:
appArmorProfile:
type: RuntimeDefault
neutron_sriov_agent:
container:
neutron_sriov_agent:
appArmorProfile:
type: RuntimeDefault
neutron_sriov_agent_init:
appArmorProfile:
type: RuntimeDefault
neutron_netns_cleanup_cron:
container:
neutron_netns_cleanup_cron:
appArmorProfile:
type: RuntimeDefault
neutron_server:
container:
neutron_server:
appArmorProfile:
type: RuntimeDefault
nginx:
appArmorProfile:
type: RuntimeDefault
neutron_rpc_server:
container:
neutron_rpc_server:
appArmorProfile:
type: RuntimeDefault
kubernetes_entrypoint:
container:
kubernetes_entrypoint:
appArmorProfile:
type: RuntimeDefault
...

View File

@@ -1,33 +0,0 @@
---
neutron:
network:
interface:
tunnel: br-phy-bond0
conf:
plugins:
openvswitch_agent:
agent:
tunnel_types: vxlan
ovs:
bridge_mappings: public:br-ex
datapath_type: netdev
vhostuser_socket_dir: /var/run/openvswitch/vhostuser
ovs_dpdk:
enabled: true
driver: uio_pci_generic
nics: []
bonds:
# CHANGE-ME: modify the parameters below according to your hardware
- name: dpdkbond0
bridge: br-phy-bond0
# The IP from the first NIC in the nics list will be used
migrate_ip: true
ovs_options: "bond_mode=active-backup"
nics:
- name: dpdk_b0s0
pci_id: '0000:00:05.0'
- name: dpdk_b0s1
pci_id: '0000:00:06.0'
bridges:
- name: br-phy-bond0
...

View File

@@ -1,27 +0,0 @@
---
neutron:
network:
interface:
tunnel: br-phy
conf:
plugins:
openvswitch_agent:
agent:
tunnel_types: vxlan
ovs:
bridge_mappings: public:br-ex
datapath_type: netdev
vhostuser_socket_dir: /var/run/openvswitch/vhostuser
ovs_dpdk:
enabled: true
driver: uio_pci_generic
nics:
# CHANGE-ME: modify pci_id according to your hardware
- name: dpdk0
pci_id: '0000:05:00.0'
bridge: br-phy
migrate_ip: true
bridges:
- name: br-phy
bonds: []
...

View File

@@ -1,25 +0,0 @@
---
neutron:
network:
interface:
tunnel: docker0
conf:
neutron:
DEFAULT:
l3_ha: False
max_l3_agents_per_router: 1
l3_ha_network_type: vxlan
dhcp_agents_per_network: 1
plugins:
ml2_conf:
ml2_type_flat:
flat_networks: public
openvswitch_agent:
agent:
tunnel_types: vxlan
ovs:
bridge_mappings: public:br-ex
linuxbridge_agent:
linux_bridge:
bridge_mappings: public:br-ex
...

View File

@@ -1,14 +0,0 @@
---
neutron:
manifests:
network_policy: true
network_policy:
neutron:
egress:
- to:
- ipBlock:
cidr: %%%REPLACE_API_ADDR%%%/32
ports:
- protocol: TCP
port: %%%REPLACE_API_PORT%%%
...

View File

@@ -1,97 +0,0 @@
---
neutron:
network:
interface:
sriov:
- device: enp3s0f0
num_vfs: 32
promisc: false
- device: enp66s0f1
num_vfs: 32
promisc: false
tunnel: br-phy-bond0
backend:
- openvswitch
- sriov
conf:
auto_bridge_add:
br-ex: null
neutron:
DEFAULT:
l3_ha: False
max_l3_agents_per_router: 1
l3_ha_network_type: vxlan
dhcp_agents_per_network: 1
service_plugins: router
plugins:
ml2_conf:
ml2:
mechanism_drivers: l2population,openvswitch,sriovnicswitch
type_drivers: vlan,flat,vxlan
tenant_network_types: vxlan
ml2_type_flat:
flat_networks: public
ml2_type_vlan:
network_vlan_ranges: ovsnet:2:4094,sriovnet1:100:4000,sriovnet2:100:4000
openvswitch_agent:
default:
ovs_vsctl_timeout: 30
agent:
tunnel_types: vxlan
securitygroup:
enable_security_group: False
firewall_driver: neutron.agent.firewall.NoopFirewallDriver
ovs:
bridge_mappings: public:br-ex,ovsnet:br-phy-bond0
datapath_type: netdev
vhostuser_socket_dir: /var/run/openvswitch/vhostuser
of_connect_timeout: 60
of_request_timeout: 30
sriov_agent:
securitygroup:
firewall_driver: neutron.agent.firewall.NoopFirewallDriver
sriov_nic:
physical_device_mappings: sriovnet1:enp3s0f0,sriovnet2:enp66s0f1
exclude_devices: enp3s0f0:0000:00:05.1,enp66s0f1:0000:00:06.1
ovs_dpdk:
enabled: true
driver: uio_pci_generic
nics: []
bonds:
# CHANGE-ME: modify below parameters according to your hardware
- name: dpdkbond0
bridge: br-phy-bond0
mtu: 9000
# The IP from the first nic in nics list shall be used
migrate_ip: true
n_rxq: 2
n_rxq_size: 1024
n_txq_size: 1024
ovs_options: "bond_mode=active-backup"
nics:
- name: dpdk_b0s0
pci_id: '0000:00:05.0'
vf_index: 0
- name: dpdk_b0s1
pci_id: '0000:00:06.0'
vf_index: 0
bridges:
- name: br-phy-bond0
modules:
- name: dpdk
log_level: info
# In case of shared profile (sriov + ovs-dpdk), sriov agent should finish
# first so as to let it configure the SRIOV VFs before ovs-agent tries to
# bind it with DPDK driver.
dependencies:
dynamic:
targeted:
openvswitch:
ovs_agent:
pod:
- requireSameNode: true
labels:
application: neutron
component: neutron-sriov-agent
...


@@ -1,71 +0,0 @@
---
neutron:
images:
tags:
tf_neutron_init: opencontrailnightly/contrail-openstack-neutron-init:master-latest
labels:
job:
node_selector_key: openstack-control-plane
node_selector_value: enabled
server:
node_selector_key: openstack-control-plane
node_selector_value: enabled
test:
node_selector_key: openstack-control-plane
node_selector_value: enabled
network:
backend:
- tungstenfabric
dependencies:
dynamic:
targeted:
tungstenfabric:
server:
daemonset: []
conf:
openstack_version: queens
neutron:
DEFAULT:
core_plugin: neutron_plugin_contrail.plugins.opencontrail.contrail_plugin.NeutronPluginContrailCoreV2
service_plugins: neutron_plugin_contrail.plugins.opencontrail.loadbalancer.v2.plugin.LoadBalancerPluginV2
l3_ha: False
api_extensions_path: /opt/plugin/site-packages/neutron_plugin_contrail/extensions:/opt/plugin/site-packages/neutron_lbaas/extensions
interface_driver: null
quotas:
quota_driver: neutron_plugin_contrail.plugins.opencontrail.quota.driver.QuotaDriver
plugins:
tungstenfabric:
APISERVER:
api_server_ip: config-api-server.tungsten-fabric.svc.cluster.local
api_server_port: 8082
contrail_extensions: "ipam:neutron_plugin_contrail.plugins.opencontrail.contrail_plugin_ipam.NeutronPluginContrailIpam,policy:neutron_plugin_contrail.plugins.opencontrail.contrail_plugin_policy.NeutronPluginContrailPolicy,route-table:neutron_plugin_contrail.plugins.opencontrail.contrail_plugin_vpc.NeutronPluginContrailVpc,contrail:None,service-interface:None,vf-binding:None"
multi_tenancy: True
KEYSTONE:
insecure: True
tf_vnc_api_lib:
global:
WEB_SERVER: config-api-server.tungsten-fabric.svc.cluster.local
WEB_PORT: 8082
auth:
AUTHN_TYPE: keystone
AUTHN_PROTOCOL: http
AUTHN_URL: /v3/auth/tokens
manifests:
daemonset_dhcp_agent: false
daemonset_l3_agent: false
daemonset_lb_agent: false
daemonset_metadata_agent: false
daemonset_ovs_agent: false
daemonset_sriov_agent: false
pod_rally_test: false
pod:
mounts:
neutron_db_sync:
neutron_db_sync:
volumeMounts:
- name: db-sync-conf
mountPath: /etc/neutron/plugins/tungstenfabric/tf_plugin.ini
subPath: tf_plugin.ini
readOnly: true
volumes:
...


@@ -1,142 +0,0 @@
---
neutron:
images:
tags:
nginx: docker.io/nginx:1.18.0
network:
server:
ingress:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "https"
pod:
security_context:
neutron_server:
pod:
runAsUser: 0
container:
neutron_server:
readOnlyRootFilesystem: false
resources:
nginx:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "1024Mi"
cpu: "2000m"
conf:
nginx: |
worker_processes 1;
daemon off;
user nginx;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65s;
tcp_nodelay on;
log_format main '[nginx] method=$request_method path=$request_uri '
'status=$status upstream_status=$upstream_status duration=$request_time size=$body_bytes_sent '
'"$remote_user" "$http_referer" "$http_user_agent"';
access_log /dev/stdout main;
upstream websocket {
server 127.0.0.1:$PORT;
}
server {
server_name {{ printf "%s.%s.svc.%s" "${SHORTNAME}" .Release.Namespace .Values.endpoints.cluster_domain_suffix }};
listen $POD_IP:$PORT ssl;
client_max_body_size 0;
ssl_certificate /etc/nginx/certs/tls.crt;
ssl_certificate_key /etc/nginx/certs/tls.key;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;
location / {
proxy_pass_request_headers on;
proxy_http_version 1.1;
proxy_pass http://websocket;
proxy_read_timeout 90;
}
}
}
neutron:
DEFAULT:
bind_host: 127.0.0.1
nova:
cafile: /etc/neutron/certs/ca.crt
keystone_authtoken:
cafile: /etc/neutron/certs/ca.crt
oslo_messaging_rabbit:
ssl: true
ssl_ca_file: /etc/rabbitmq/certs/ca.crt
ssl_cert_file: /etc/rabbitmq/certs/tls.crt
ssl_key_file: /etc/rabbitmq/certs/tls.key
metadata_agent:
DEFAULT:
auth_ca_cert: /etc/ssl/certs/openstack-helm.crt
nova_metadata_port: 443
nova_metadata_protocol: https
endpoints:
compute:
scheme:
default: https
port:
api:
public: 443
compute_metadata:
scheme:
default: https
port:
metadata:
public: 443
identity:
auth:
admin:
cacert: /etc/ssl/certs/openstack-helm.crt
neutron:
cacert: /etc/ssl/certs/openstack-helm.crt
nova:
cacert: /etc/ssl/certs/openstack-helm.crt
test:
cacert: /etc/ssl/certs/openstack-helm.crt
scheme:
default: https
port:
api:
default: 443
network:
host_fqdn_override:
default:
tls:
secretName: neutron-tls-server
issuerRef:
name: ca-issuer
kind: ClusterIssuer
scheme:
default: https
port:
api:
public: 443
ingress:
port:
ingress:
default: 443
oslo_messaging:
port:
https:
default: 15680
manifests:
certificates: true
...


@@ -1,24 +0,0 @@
---
nova:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
db_drop: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
db_init: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
ks_user: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
ks_service: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
ks_endpoints: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
nova_api: "quay.io/airshipit/nova:2024.1-ubuntu_jammy"
nova_cell_setup: "quay.io/airshipit/nova:2024.1-ubuntu_jammy"
nova_cell_setup_init: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
nova_compute: "quay.io/airshipit/nova:2024.1-ubuntu_jammy"
nova_compute_ssh: "quay.io/airshipit/nova:2024.1-ubuntu_jammy"
nova_conductor: "quay.io/airshipit/nova:2024.1-ubuntu_jammy"
nova_db_sync: "quay.io/airshipit/nova:2024.1-ubuntu_jammy"
nova_novncproxy: "quay.io/airshipit/nova:2024.1-ubuntu_jammy"
nova_novncproxy_assets: "quay.io/airshipit/nova:2024.1-ubuntu_jammy"
nova_scheduler: "quay.io/airshipit/nova:2024.1-ubuntu_jammy"
nova_spiceproxy: "quay.io/airshipit/nova:2024.1-ubuntu_jammy"
nova_spiceproxy_assets: "quay.io/airshipit/nova:2024.1-ubuntu_jammy"
nova_service_cleaner: "docker.io/openstackhelm/ceph-config-helper:latest-ubuntu_jammy"
...


@@ -1,24 +0,0 @@
---
nova:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
db_drop: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
db_init: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
ks_user: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
ks_service: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
ks_endpoints: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
nova_api: "quay.io/airshipit/nova:2024.2-ubuntu_jammy"
nova_cell_setup: "quay.io/airshipit/nova:2024.2-ubuntu_jammy"
nova_cell_setup_init: "quay.io/airshipit/heat:2024.2-ubuntu_jammy"
nova_compute: "quay.io/airshipit/nova:2024.2-ubuntu_jammy"
nova_compute_ssh: "quay.io/airshipit/nova:2024.2-ubuntu_jammy"
nova_conductor: "quay.io/airshipit/nova:2024.2-ubuntu_jammy"
nova_db_sync: "quay.io/airshipit/nova:2024.2-ubuntu_jammy"
nova_novncproxy: "quay.io/airshipit/nova:2024.2-ubuntu_jammy"
nova_novncproxy_assets: "quay.io/airshipit/nova:2024.2-ubuntu_jammy"
nova_scheduler: "quay.io/airshipit/nova:2024.2-ubuntu_jammy"
nova_spiceproxy: "quay.io/airshipit/nova:2024.2-ubuntu_jammy"
nova_spiceproxy_assets: "quay.io/airshipit/nova:2024.2-ubuntu_jammy"
nova_service_cleaner: "docker.io/openstackhelm/ceph-config-helper:latest-ubuntu_jammy"
...


@@ -1,24 +0,0 @@
---
nova:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
db_drop: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
db_init: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
ks_user: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
ks_service: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
ks_endpoints: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
nova_api: "quay.io/airshipit/nova:2025.1-ubuntu_jammy"
nova_cell_setup: "quay.io/airshipit/nova:2025.1-ubuntu_jammy"
nova_cell_setup_init: "quay.io/airshipit/heat:2025.1-ubuntu_jammy"
nova_compute: "quay.io/airshipit/nova:2025.1-ubuntu_jammy"
nova_compute_ssh: "quay.io/airshipit/nova:2025.1-ubuntu_jammy"
nova_conductor: "quay.io/airshipit/nova:2025.1-ubuntu_jammy"
nova_db_sync: "quay.io/airshipit/nova:2025.1-ubuntu_jammy"
nova_novncproxy: "quay.io/airshipit/nova:2025.1-ubuntu_jammy"
nova_novncproxy_assets: "quay.io/airshipit/nova:2025.1-ubuntu_jammy"
nova_scheduler: "quay.io/airshipit/nova:2025.1-ubuntu_jammy"
nova_spiceproxy: "quay.io/airshipit/nova:2025.1-ubuntu_jammy"
nova_spiceproxy_assets: "quay.io/airshipit/nova:2025.1-ubuntu_jammy"
nova_service_cleaner: "docker.io/openstackhelm/ceph-config-helper:latest-ubuntu_jammy"
...


@@ -1,24 +0,0 @@
---
nova:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
db_drop: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
db_init: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
ks_user: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
ks_service: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
ks_endpoints: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
nova_api: "quay.io/airshipit/nova:2025.1-ubuntu_noble"
nova_cell_setup: "quay.io/airshipit/nova:2025.1-ubuntu_noble"
nova_cell_setup_init: "quay.io/airshipit/heat:2025.1-ubuntu_noble"
nova_compute: "quay.io/airshipit/nova:2025.1-ubuntu_noble"
nova_compute_ssh: "quay.io/airshipit/nova:2025.1-ubuntu_noble"
nova_conductor: "quay.io/airshipit/nova:2025.1-ubuntu_noble"
nova_db_sync: "quay.io/airshipit/nova:2025.1-ubuntu_noble"
nova_novncproxy: "quay.io/airshipit/nova:2025.1-ubuntu_noble"
nova_novncproxy_assets: "quay.io/airshipit/nova:2025.1-ubuntu_noble"
nova_scheduler: "quay.io/airshipit/nova:2025.1-ubuntu_noble"
nova_spiceproxy: "quay.io/airshipit/nova:2025.1-ubuntu_noble"
nova_spiceproxy_assets: "quay.io/airshipit/nova:2025.1-ubuntu_noble"
nova_service_cleaner: "docker.io/openstackhelm/ceph-config-helper:latest-ubuntu_jammy"
...


@@ -1,24 +0,0 @@
---
nova:
images:
tags:
bootstrap: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
db_drop: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
db_init: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
ks_user: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
ks_service: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
ks_endpoints: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
nova_api: "quay.io/airshipit/nova:2025.2-ubuntu_noble"
nova_cell_setup: "quay.io/airshipit/nova:2025.2-ubuntu_noble"
nova_cell_setup_init: "quay.io/airshipit/heat:2025.2-ubuntu_noble"
nova_compute: "quay.io/airshipit/nova:2025.2-ubuntu_noble"
nova_compute_ssh: "quay.io/airshipit/nova:2025.2-ubuntu_noble"
nova_conductor: "quay.io/airshipit/nova:2025.2-ubuntu_noble"
nova_db_sync: "quay.io/airshipit/nova:2025.2-ubuntu_noble"
nova_novncproxy: "quay.io/airshipit/nova:2025.2-ubuntu_noble"
nova_novncproxy_assets: "quay.io/airshipit/nova:2025.2-ubuntu_noble"
nova_scheduler: "quay.io/airshipit/nova:2025.2-ubuntu_noble"
nova_spiceproxy: "quay.io/airshipit/nova:2025.2-ubuntu_noble"
nova_spiceproxy_assets: "quay.io/airshipit/nova:2025.2-ubuntu_noble"
nova_service_cleaner: "docker.io/openstackhelm/ceph-config-helper:latest-ubuntu_jammy"
...


@@ -1,52 +0,0 @@
---
pod:
security_context:
nova:
container:
nova_compute:
appArmorProfile:
type: RuntimeDefault
nova_compute_init:
appArmorProfile:
type: RuntimeDefault
nova_compute_vnc_init:
appArmorProfile:
type: RuntimeDefault
nova_api:
appArmorProfile:
type: RuntimeDefault
nova_api_metadata_init:
appArmorProfile:
type: RuntimeDefault
nova_osapi:
appArmorProfile:
type: RuntimeDefault
nova_conductor:
appArmorProfile:
type: RuntimeDefault
nova_novncproxy:
appArmorProfile:
type: RuntimeDefault
nova_novncproxy_init_assets:
appArmorProfile:
type: RuntimeDefault
nova_novncproxy_init:
appArmorProfile:
type: RuntimeDefault
nova_scheduler:
appArmorProfile:
type: RuntimeDefault
nova_cell_setup:
container:
nova_cell_setup:
appArmorProfile:
type: RuntimeDefault
nova_cell_setup_init:
appArmorProfile:
type: RuntimeDefault
kubernetes_entrypoint:
container:
kubernetes_entrypoint:
appArmorProfile:
type: RuntimeDefault
...


@@ -1,23 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
---
nova:
conf:
nova:
DEFAULT:
reserved_huge_pages:
type: multistring
values:
- node:0,size:1GB,count:4
- node:1,size:1GB,count:4
reserved_host_memory_mb: 512
...
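For reference, an override of this shape is expected to land in the rendered nova.conf roughly as follows, assuming the helm-toolkit oslo config generator expands a type: multistring entry into one line per value (a sketch of the expected output, not copied from this change):

[DEFAULT]
reserved_huge_pages = node:0,size:1GB,count:4
reserved_huge_pages = node:1,size:1GB,count:4
reserved_host_memory_mb = 512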


@@ -1,18 +0,0 @@
---
nova:
manifests:
network_policy: true
network_policy:
nova:
egress:
- to:
- podSelector:
matchLabels:
application: nova
- to:
- ipBlock:
cidr: %%%REPLACE_API_ADDR%%%/32
ports:
- protocol: TCP
port: %%%REPLACE_API_PORT%%%
...


@@ -1,27 +0,0 @@
---
nova:
conf:
software:
apache2:
binary: apache2ctl
start_parameters: -DFOREGROUND -k start
site_dir: /etc/apache2/vhosts.d
conf_dir: /etc/apache2/conf.d
a2enmod:
- version
security: |
<Directory "/var/www">
Options Indexes FollowSymLinks
AllowOverride All
<IfModule !mod_access_compat.c>
Require all granted
</IfModule>
<IfModule mod_access_compat.c>
Order allow,deny
Allow from all
</IfModule>
</Directory>
nova:
DEFAULT:
mkisofs_cmd: mkisofs
...


@@ -1,36 +0,0 @@
---
nova:
network:
ssh:
enabled: true
public_key: |
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDfgGkoPxu6jVqyBTGDlhGqoFFaTymMOH3pDRzrzXCVodqrtv1heBAyi7L63+MZ+m/facDDo43hWzhFLmmMgD00AS7L+VH+oeEwKVCfq0HN3asKLadpweBQVAkGX7PzjRKF25qj6J7iVpKAf1NcnJCsWL3b+wC9mwK7TmupOmWra8BrfP7Fvek1RLx3lwk+ZZ9lUlm6o+jwXn/9rCEFa7ywkGpdrPRBNHQshGjDlJPi15boXIKxOmoZ/DszkJq7iLYQnwa4Kdb0dJ9OE/l2LLBiEpkMlTnwXA7QCS5jEHXwW78b4BOZvqrFflga+YldhDmkyRRfnhcF5Ok2zQmx9Q+t root@openstack-helm
private_key: |
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA34BpKD8buo1asgUxg5YRqqBRWk8pjDh96Q0c681wlaHaq7b9
YXgQMouy+t/jGfpv32nAw6ON4Vs4RS5pjIA9NAEuy/lR/qHhMClQn6tBzd2rCi2n
acHgUFQJBl+z840Shduao+ie4laSgH9TXJyQrFi92/sAvZsCu05rqTplq2vAa3z+
xb3pNUS8d5cJPmWfZVJZuqPo8F5//awhBWu8sJBqXaz0QTR0LIRow5ST4teW6FyC
sTpqGfw7M5Cau4i2EJ8GuCnW9HSfThP5diywYhKZDJU58FwO0AkuYxB18Fu/G+AT
mb6qxX5YGvmJXYQ5pMkUX54XBeTpNs0JsfUPrQIDAQABAoIBAFkEFd3XtL2KSxMY
Cm50OLkSfRRQ7yVP4qYNePVZr3uJKUS27xgA78KR7UkKHrNcEW6T+hhxbbLR2AmF
wLga40VxKyhGNqgJ5Vx/OAM//Ed4AAVfxYvTkfmsXqPRPiTEjRoPKvoZTh6riFHx
ZExAd0aNWaDhyZu6v03GoA6YmaG53CLhUpDjIEpAHT8Q5fiukvpvFNAkSpSU3wWW
YD14S5BTXx8Z7v5mNgbxzDIST9P6oGm9jOoMJJCxu3KVF5Xh6k23DP1wukiWNypJ
b7dzfE8/NZUZ15Du4g1ZXHZyOATwN+4GQi1tV+oB1o6wI6829lpIMlsmqHhrw867
942SmakCgYEA9R1xFEEVRavBGIUeg/NMbFP+Ssl2DljAdnmcOASCxAFqCx6y3WSK
P2xWTD/MCG/uz627EVp+lfbapZimm171rUMpVCqTa5tH+LZ+Lbl+rjoLwSWVqySK
MGyIEzpPLq5PrpGdUghZNsGAG7kgTarJM5SYyA+Esqr8AADjDrZdmzcCgYEA6W1C
h9nU5i04UogndbkOiDVDWn0LnjUnVDTmhgGhbJDLtx4/hte/zGK7+mKl561q3Qmm
xY0s8cSQCX1ULHyrgzS9rc0k42uvuRWgpKKKT5IrjiA91HtfcVM1r9hxa2/dw4wk
WbAoaqpadjQAKoB4PNYzRfvITkv/9O+JSyK5BjsCgYEA5p9C68momBrX3Zgyc/gQ
qcQFeJxAxZLf0xjs0Q/9cSnbeobxx7h3EuF9+NP1xuJ6EVDmt5crjzHp2vDboUgh
Y1nToutENXSurOYXpjHnbUoUETCpt5LzqkgTZ/Pu2H8NXbSIDszoE8rQHEV8jVbp
Y+ymK2XedrTF0cMD363aONUCgYEAy5J4+kdUL+VyADAz0awxa0KgWdNCBZivkvWL
sYTMhgUFVM7xciTIZXQaIjRUIeeQkfKv2gvUDYlyYIRHm4Cih4vAfEmziQ7KMm0V
K1+BpgGBMLMXmS57PzblVFU8HQlzau3Wac2CgfvNZtbU6jweIFhiYP9DYl1PfQpG
PxuqJy8CgYBERsjdYfnyGMnFg3DVwgv/W/JspX201jMhQW2EW1OGDf7RQV+qTUnU
2NRGN9QbVYUvdwuRPd7C9wXQfLzXf0/E67oYg6fHHGTBNMjSq56qhZ2dSZnyQCxI
UZu0B4/1A5493Mypxp8c2fPhBdfzjTA5latsr75U26OMPxCxgFxm1A==
-----END RSA PRIVATE KEY-----
...


@@ -1,79 +0,0 @@
---
nova:
images:
tags:
tf_compute_init: opencontrailnightly/contrail-openstack-compute-init:master-latest
conf:
nova:
libvirt:
virt_type: qemu
cpu_mode: host-model
agent:
compute:
node_selector_key: openstack-compute-node
node_selector_value: enabled
compute_ironic:
node_selector_key: openstack-compute-node
node_selector_value: enabled
api_metadata:
node_selector_key: openstack-control-plane
node_selector_value: enabled
conductor:
node_selector_key: openstack-control-plane
node_selector_value: enabled
job:
node_selector_key: openstack-control-plane
node_selector_value: enabled
novncproxy:
node_selector_key: openstack-control-plane
node_selector_value: enabled
osapi:
node_selector_key: openstack-control-plane
node_selector_value: enabled
scheduler:
node_selector_key: openstack-control-plane
node_selector_value: enabled
spiceproxy:
node_selector_key: openstack-control-plane
node_selector_value: enabled
test:
node_selector_key: openstack-control-plane
node_selector_value: enabled
rootwrap: |
# Configuration for nova-rootwrap
# This file should be owned by (and only-writeable by) the root user
[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap
# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin,/usr/local/sbin,/var/lib/openstack/bin,/var/lib/kolla/venv/bin,/opt/plugin/bin
# Enable logging to syslog
# Default value is False
use_syslog=False
# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, local0, local1...
# Default value is 'syslog'
syslog_log_facility=syslog
# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR
network:
backend:
- tungstenfabric
dependencies:
dynamic:
targeted:
tungstenfabric:
compute:
daemonset: []
...


@@ -1,15 +0,0 @@
---
nova:
endpoints:
identity:
auth:
admin:
cacert: /etc/ssl/certs/openstack-helm.crt
nova:
cacert: /etc/ssl/certs/openstack-helm.crt
test:
cacert: /etc/ssl/certs/openstack-helm.crt
tls:
identity: true
...


@@ -1,209 +0,0 @@
---
nova:
network:
osapi:
ingress:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "https"
metadata:
ingress:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "https"
novncproxy:
ingress:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "https"
conf:
mpm_event: |
<IfModule mpm_event_module>
ServerLimit 1024
StartServers 32
MinSpareThreads 32
MaxSpareThreads 256
ThreadsPerChild 25
MaxRequestsPerChild 128
ThreadLimit 720
</IfModule>
wsgi_nova_api: |
{{- $portInt := tuple "compute" "internal" "api" $ | include "helm-toolkit.endpoints.endpoint_port_lookup" }}
Listen {{ $portInt }}
<VirtualHost *:{{ $portInt }}>
ServerName {{ printf "%s.%s.svc.%s" "nova-api" .Release.Namespace .Values.endpoints.cluster_domain_suffix }}
WSGIDaemonProcess nova-api processes=1 threads=1 user=nova display-name=%{GROUP}
WSGIProcessGroup nova-api
WSGIScriptAlias / /var/www/cgi-bin/nova/nova-api-wsgi
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
AllowEncodedSlashes On
SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
ErrorLogFormat "%{cu}t %M"
ErrorLog /dev/stdout
CustomLog /dev/stdout combined env=!forwarded
CustomLog /dev/stdout proxy env=forwarded
SSLEngine on
SSLCertificateFile /etc/nova/certs/tls.crt
SSLCertificateKeyFile /etc/nova/certs/tls.key
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
SSLHonorCipherOrder on
</VirtualHost>
wsgi_nova_metadata: |
{{- $portInt := tuple "compute_metadata" "internal" "metadata" $ | include "helm-toolkit.endpoints.endpoint_port_lookup" }}
Listen {{ $portInt }}
<VirtualHost *:{{ $portInt }}>
ServerName {{ printf "%s.%s.svc.%s" "nova-metadata" .Release.Namespace .Values.endpoints.cluster_domain_suffix }}
WSGIDaemonProcess nova-metadata processes=1 threads=1 user=nova display-name=%{GROUP}
WSGIProcessGroup nova-metadata
WSGIScriptAlias / /var/www/cgi-bin/nova/nova-metadata-wsgi
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
AllowEncodedSlashes On
SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded
ErrorLogFormat "%{cu}t %M"
ErrorLog /dev/stdout
CustomLog /dev/stdout combined env=!forwarded
CustomLog /dev/stdout proxy env=forwarded
SSLEngine on
SSLCertificateFile /etc/nova/certs/tls.crt
SSLCertificateKeyFile /etc/nova/certs/tls.key
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
SSLHonorCipherOrder on
</VirtualHost>
software:
apache2:
a2enmod:
- ssl
nova:
console:
ssl_minimum_version: tlsv1_2
glance:
cafile: /etc/nova/certs/ca.crt
ironic:
cafile: /etc/nova/certs/ca.crt
neutron:
cafile: /etc/nova/certs/ca.crt
keystone_authtoken:
cafile: /etc/nova/certs/ca.crt
cinder:
cafile: /etc/nova/certs/ca.crt
placement:
cafile: /etc/nova/certs/ca.crt
keystone:
cafile: /etc/nova/certs/ca.crt
oslo_messaging_rabbit:
ssl: true
ssl_ca_file: /etc/rabbitmq/certs/ca.crt
ssl_cert_file: /etc/rabbitmq/certs/tls.crt
ssl_key_file: /etc/rabbitmq/certs/tls.key
endpoints:
identity:
auth:
admin:
cacert: /etc/ssl/certs/openstack-helm.crt
nova:
cacert: /etc/ssl/certs/openstack-helm.crt
neutron:
cacert: /etc/ssl/certs/openstack-helm.crt
placement:
cacert: /etc/ssl/certs/openstack-helm.crt
test:
cacert: /etc/ssl/certs/openstack-helm.crt
scheme:
default: https
port:
api:
default: 443
image:
scheme:
default: https
port:
api:
public: 443
compute:
host_fqdn_override:
default:
tls:
secretName: nova-tls-api
issuerRef:
name: ca-issuer
kind: ClusterIssuer
scheme:
default: 'https'
port:
api:
public: 443
compute_metadata:
host_fqdn_override:
default:
tls:
secretName: metadata-tls-metadata
issuerRef:
name: ca-issuer
kind: ClusterIssuer
scheme:
default: https
port:
metadata:
public: 443
compute_novnc_proxy:
host_fqdn_override:
default:
tls:
secretName: nova-novncproxy-tls-proxy
issuerRef:
name: ca-issuer
kind: ClusterIssuer
scheme:
default: https
port:
novnc_proxy:
public: 443
compute_spice_proxy:
host_fqdn_override:
default:
tls:
secretName: nova-tls-spiceproxy
issuerRef:
name: ca-issuer
kind: ClusterIssuer
scheme:
default: https
placement:
host_fqdn_override:
default:
tls:
secretName: placement-tls-api
issuerRef:
name: ca-issuer
kind: ClusterIssuer
scheme:
default: https
port:
api:
public: 443
network:
scheme:
default: https
port:
api:
public: 443
oslo_messaging:
port:
https:
default: 15680
pod:
security_context:
nova:
container:
nova_api:
runAsUser: 0
readOnlyRootFilesystem: false
nova_osapi:
runAsUser: 0
readOnlyRootFilesystem: false
manifests:
certificates: true
...


@@ -1,23 +0,0 @@
---
pod:
security_context:
ovs:
container:
vswitchd:
appArmorProfile:
type: RuntimeDefault
server:
appArmorProfile:
type: RuntimeDefault
modules:
appArmorProfile:
type: RuntimeDefault
perms:
appArmorProfile:
type: RuntimeDefault
kubernetes_entrypoint:
container:
kubernetes_entrypoint:
appArmorProfile:
type: RuntimeDefault
...


@@ -1,25 +0,0 @@
---
openvswitch:
images:
tags:
openvswitch_db_server: docker.io/openstackhelm/openvswitch:latest-opensuse_15-dpdk
openvswitch_vswitchd: docker.io/openstackhelm/openvswitch:latest-opensuse_15-dpdk
pod:
resources:
enabled: true
ovs:
vswitchd:
requests:
memory: "2Gi"
cpu: "2"
limits:
memory: "2Gi"
cpu: "2"
hugepages-1Gi: "1Gi"
conf:
ovs_dpdk:
enabled: true
hugepages_mountpath: /dev/hugepages
vhostuser_socket_dir: vhostuser
socket_memory: 1024
...


@@ -1,25 +0,0 @@
---
openvswitch:
images:
tags:
openvswitch_db_server: docker.io/openstackhelm/openvswitch:latest-ubuntu_bionic-dpdk
openvswitch_vswitchd: docker.io/openstackhelm/openvswitch:latest-ubuntu_bionic-dpdk
pod:
resources:
enabled: true
ovs:
vswitchd:
requests:
memory: "2Gi"
cpu: "2"
limits:
memory: "2Gi"
cpu: "2"
hugepages-1Gi: "1Gi"
conf:
ovs_dpdk:
enabled: true
hugepages_mountpath: /dev/hugepages
vhostuser_socket_dir: vhostuser
socket_memory: 1024
...


@@ -1,5 +0,0 @@
---
openvswitch:
manifests:
network_policy: true
...


@@ -1,12 +0,0 @@
---
openvswitch:
pod:
probes:
ovs_vswitch:
ovs_vswitch:
liveness:
exec:
- /bin/bash
- -c
- '/usr/bin/ovs-appctl bond/list; C1=$?; ovs-vsctl --column statistics list interface dpdk_b0s0 | grep -q -E "rx_|tx_"; C2=$?; ovs-vsctl --column statistics list interface dpdk_b0s1 | grep -q -E "rx_|tx_"; C3=$?; exit $(($C1+$C2+$C3))'
...


@@ -1,20 +0,0 @@
---
placement:
images:
pull_policy: IfNotPresent
tags:
placement: "quay.io/airshipit/placement:2024.1-ubuntu_jammy"
ks_user: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
ks_service: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
ks_endpoints: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
db_init: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
db_drop: "quay.io/airshipit/heat:2024.1-ubuntu_jammy"
placement_db_sync: "quay.io/airshipit/placement:2024.1-ubuntu_jammy"
dep_check: "quay.io/airshipit/kubernetes-entrypoint:latest-ubuntu_jammy"
image_repo_sync: "docker.io/docker:17.07.0"
dependencies:
static:
db_sync:
jobs:
- placement-db-init
...

Some files were not shown because too many files have changed in this diff.