[k8s] Use Helm v3 by default

- Refactor the helm installer to use a single meta chart install job and
  config which use the Helm v3 client.
- Use the upstream helm client binary instead of the helm-client container
  maintained by us. To verify its checksum, a helm_client_sha256 label is
  introduced for the binary matching helm_client_tag (or, alternatively, for
  a URL specified with the new helm_client_url label); see the example after
  this list.
- Default helm_client_tag=v3.2.1.
- Default tiller_tag=v2.16.7, tiller_enabled=false.
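
For illustration, a cluster template that pins the client with these labels
might be created as follows (a sketch: the template name, image and external
network are hypothetical placeholders; the checksum shown is the default
shipped for the v3.2.1 tarball):

    # Hypothetical template; adjust image/network names to your cloud.
    openstack coe cluster template create k8s-helm-v3 \
        --coe kubernetes \
        --image fedora-coreos-latest \
        --external-network public \
        --labels helm_client_tag=v3.2.1,helm_client_url=https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz,helm_client_sha256=018f9908cb950701a5d59e757653a790c66d8eda288625dbb185354ca6f41f6b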

Story: 2007514
Task: 39295

Change-Id: I9b9633c81afb08b91576a9a4d3c5a0c445e0cee4
Bharat Kunwar  2020-04-15 16:03:31 +00:00
commit a79f8f52f9 (parent 7ab504d1e2)
16 changed files with 621 additions and 781 deletions


@@ -420,13 +420,19 @@ the table are linked to more details elsewhere in the user guide.
 +---------------------------------------+--------------------+---------------+
 | `k8s_keystone_auth_tag`_              | see below          | see below     |
 +---------------------------------------+--------------------+---------------+
-| `tiller_enabled`_                     | - true             | true          |
+| `tiller_enabled`_                     | - true             | false         |
 |                                       | - false            |               |
 +---------------------------------------+--------------------+---------------+
 | `tiller_tag`_                         | see below          | ""            |
 +---------------------------------------+--------------------+---------------+
 | `tiller_namespace`_                   | see below          | see below     |
 +---------------------------------------+--------------------+---------------+
+| `helm_client_url`_                    | see below          | see below     |
++---------------------------------------+--------------------+---------------+
+| `helm_client_sha256`_                 | see below          | see below     |
++---------------------------------------+--------------------+---------------+
+| `helm_client_tag`_                    | see below          | see below     |
++---------------------------------------+--------------------+---------------+
 | `master_lb_floating_ip_enabled`_      | - true             | see below     |
 |                                       | - false            |               |
 +---------------------------------------+--------------------+---------------+
@@ -1273,7 +1279,9 @@ _`metrics_server_chart_tag`
 
 _`metrics_server_enabled`
     metrics_server_enabled is used to enable disable the installation of
-    the metrics server. To use this service tiller_enabled must be true.
+    the metrics server.
+    To use this service tiller_enabled must be true when using
+    helm_client_tag<v3.0.0.
     Train default: true
     Stein default: true
@@ -1447,6 +1455,8 @@ _`k8s_keystone_auth_tag`
 _`monitoring_enabled`
     Enable installation of cluster monitoring solution provided by the
     stable/prometheus-operator helm chart.
+    To use this service tiller_enabled must be true when using
+    helm_client_tag<v3.0.0.
     Default: false
 
 _`prometheus_adapter_enabled`
@@ -1473,24 +1483,34 @@ _`prometheus_operator_chart_tag`
 
 _`tiller_enabled`
     If set to true, tiller will be deployed in the kube-system namespace.
-    Ussuri default: true
+    Ussuri default: false
     Train default: false
 
 _`tiller_tag`
-    Add tiller_tag label to select the version of tiller. If the tag is not set
-    the tag that matches the helm client version in the heat-agent will be
-    picked. The tiller image can be stored in a private registry and the
-    cluster can pull it using the container_infra_prefix label.
+    This label allows users to override the default container tag for Tiller.
+    For additional tags, `refer to Tiller page
+    <https://github.com/helm/helm/tags>`_ and look for tags<v3.0.0.
+    Train default: v2.12.3
+    Ussuri default: v2.16.7
 
 _`tiller_namespace`
-    Configure in which namespace tiller is going to be installed.
+    The namespace in which Tiller and Helm v2 chart install jobs are installed.
     Default: magnum-tiller
 
+_`helm_client_url`
+    URL of the helm client binary.
+    Default: ''
+
+_`helm_client_sha256`
+    SHA256 checksum of the helm client binary.
+    Ussuri default: 018f9908cb950701a5d59e757653a790c66d8eda288625dbb185354ca6f41f6b
+
 _`helm_client_tag`
-    The version of the helm client to use.
-    The image can be stored in a private registry and the
-    cluster can pull it using the container_infra_prefix label.
-    Default: dev
+    This label allows users to override the default container tag for Helm
+    client. For additional tags, `refer to Helm client page
+    <https://github.com/helm/helm/tags>`_. You must use identical tiller_tag if
+    you wish to use Tiller (for helm_client_tag<v3.0.0).
+    Ussuri default: v3.2.1
 
 _`master_lb_floating_ip_enabled`
     Controls if Magnum allocates floating IP for the load balancer of master
@@ -1664,6 +1684,8 @@ _`ingress_controller`
     Controller is configured. For more details about octavia-ingress-controller
     please refer to `cloud-provider-openstack document
     <https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-octavia-ingress-controller.md>`_
+    To use 'nginx' ingress controller, tiller_enabled must be true when using
+    helm_client_tag<v3.0.0.
 
 _`ingress_controller_role`
     This label defines the role nodes should have to run an instance of the
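
For the helm_client_url and helm_client_sha256 labels documented above, an
operator targeting a non-default client can derive the checksum locally; a
minimal sketch, assuming the same upstream URL scheme the installer uses:

    HELM_CLIENT_TAG=v3.2.1    # any release tag from https://github.com/helm/helm/releases
    curl -sLO "https://get.helm.sh/helm-${HELM_CLIENT_TAG}-linux-amd64.tar.gz"
    sha256sum "helm-${HELM_CLIENT_TAG}-linux-amd64.tar.gz"    # value for helm_client_sha256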


@@ -350,7 +350,7 @@ spec:
         name: grafana
         env:
         - name: GF_SECURITY_ADMIN_PASSWORD
-          value: $ADMIN_PASSWD
+          value: ${GRAFANA_ADMIN_PASSWD}
         - name: GF_DASHBOARDS_JSON_ENABLED
          value: "true"
         - name: GF_DASHBOARDS_JSON_PATH


@@ -1,32 +1,81 @@
 #!/bin/bash
 
-step="install-helm-modules.sh"
-printf "Starting to run ${step}\n"
+step="install-helm-modules"
+echo "START: ${step}"
 
+set +x
 . /etc/sysconfig/heat-params
-
 set -ex
 
+ssh_cmd="ssh -F /srv/magnum/.ssh/config root@localhost"
+
 echo "Waiting for Kubernetes API..."
-until  [ "ok" = "$(curl --silent http://127.0.0.1:8080/healthz)" ]
-do
+until  [ "ok" = "$(curl --silent http://127.0.0.1:8080/healthz)" ]; do
     sleep 5
 done
 
-if [ "$(echo ${TILLER_ENABLED} | tr '[:upper:]' '[:lower:]')" != "true" ]; then
-    echo "Use --labels tiller_enabled=True to allow for tiller dependent resources to be installed"
+if [[ "$(echo ${TILLER_ENABLED} | tr '[:upper:]' '[:lower:]')" != "true" && "${HELM_CLIENT_TAG}" == v2.* ]]; then
+    echo "Use --labels tiller_enabled=True for helm_client_tag<v3.0.0 to allow for tiller dependent resources to be installed."
 else
-    HELM_MODULES_PATH="/srv/magnum/kubernetes/helm"
-    mkdir -p ${HELM_MODULES_PATH}
-    helm_modules=(${HELM_MODULES_PATH}/*)
-
-    # Only run kubectl if we have modules to install
-    if [ "${helm_modules}" != "${HELM_MODULES_PATH}/*" ]; then
-        for module in "${helm_modules[@]}"; do
-            echo "Applying ${module}."
-            kubectl apply -f ${module}
-        done
+    if [ -z "${HELM_CLIENT_URL}" ] ; then
+        HELM_CLIENT_URL="https://get.helm.sh/helm-$HELM_CLIENT_TAG-linux-amd64.tar.gz"
+    fi
+    i=0
+    until curl -o /srv/magnum/helm-client.tar.gz "${HELM_CLIENT_URL}"; do
+        i=$((i + 1))
+        [ $i -lt 5 ] || break;
+        sleep 5
+    done
+
+    if ! echo "${HELM_CLIENT_SHA256} /srv/magnum/helm-client.tar.gz" | sha256sum -c - ; then
+        echo "ERROR helm-client.tar.gz computed checksum did NOT match, exiting."
+        exit 1
+    fi
+
+    source /etc/bashrc
+    $ssh_cmd tar xzvf /srv/magnum/helm-client.tar.gz linux-amd64/helm -O > /srv/magnum/bin/helm
+    $ssh_cmd chmod +x /srv/magnum/bin/helm
+
+    helm_install_cmd="helm install magnum . --namespace kube-system --values values.yaml --render-subchart-notes"
+    helm_history_cmd="helm history magnum --namespace kube-system"
+
+    if [[ "${HELM_CLIENT_TAG}" == v2.* ]]; then
+        CERTS_DIR="/etc/kubernetes/helm/certs"
+        export HELM_HOME="/srv/magnum/kubernetes/helm/home"
+        export HELM_TLS_ENABLE="true"
+        export TILLER_NAMESPACE
+        mkdir -p "${HELM_HOME}"
+        ln -s ${CERTS_DIR}/helm.cert.pem ${HELM_HOME}/cert.pem
+        ln -s ${CERTS_DIR}/helm.key.pem ${HELM_HOME}/key.pem
+        ln -s ${CERTS_DIR}/ca.cert.pem ${HELM_HOME}/ca.pem
+
+        # HACK - Force wait because of bug https://github.com/helm/helm/issues/5170
+        until helm init --client-only --wait; do
+            sleep 5s
+        done
+        helm_install_cmd="helm install --name magnum . --namespace kube-system --values values.yaml --render-subchart-notes"
+        helm_history_cmd="helm history magnum"
+    fi
+
+    HELM_CHART_DIR="/srv/magnum/kubernetes/helm/magnum"
+    if [[ -d "${HELM_CHART_DIR}" ]]; then
+        pushd ${HELM_CHART_DIR}
+        cat << EOF > Chart.yaml
+apiVersion: v1
+name: magnum
+version: metachart
+appVersion: metachart
+description: Magnum Helm Charts
+EOF
+        sed -i '1i\dependencies:' requirements.yaml
+
+        i=0
+        until ($helm_history_cmd | grep magnum) || (helm dep update && $helm_install_cmd); do
+            i=$((i + 1))
+            [ $i -lt 60 ] || break;
+            sleep 5
+        done
+        popd
     fi
 fi
 
-printf "Finished running ${step}\n"
+echo "END: ${step}"


@@ -117,6 +117,8 @@ EXTERNAL_NETWORK_ID="$EXTERNAL_NETWORK_ID"
 TILLER_ENABLED="$TILLER_ENABLED"
 TILLER_TAG="$TILLER_TAG"
 TILLER_NAMESPACE="$TILLER_NAMESPACE"
+HELM_CLIENT_URL="$HELM_CLIENT_URL"
+HELM_CLIENT_SHA256="$HELM_CLIENT_SHA256"
 HELM_CLIENT_TAG="$HELM_CLIENT_TAG"
 NODE_PROBLEM_DETECTOR_TAG="$NODE_PROBLEM_DETECTOR_TAG"
 NGINX_INGRESS_CONTROLLER_TAG="$NGINX_INGRESS_CONTROLLER_TAG"


@@ -2,54 +2,24 @@
 set +x
 . /etc/sysconfig/heat-params
 set -ex
 
-step="nginx-ingress"
-printf "Starting to run ${step}\n"
-
-### Configuration
-###############################################################################
 CHART_NAME="nginx-ingress"
 
 if [ "$(echo ${INGRESS_CONTROLLER} | tr '[:upper:]' '[:lower:]')" = "nginx" ]; then
-    echo "Writing ${CHART_NAME} config"
 
-    HELM_MODULE_CONFIG_FILE="/srv/magnum/kubernetes/helm/${CHART_NAME}.yaml"
-    [ -f ${HELM_MODULE_CONFIG_FILE} ] || {
-        echo "Writing File: ${HELM_MODULE_CONFIG_FILE}"
-        mkdir -p $(dirname ${HELM_MODULE_CONFIG_FILE})
-        cat << EOF > ${HELM_MODULE_CONFIG_FILE}
----
-kind: ConfigMap
-apiVersion: v1
-metadata:
-  name: ${CHART_NAME}-config
-  namespace: magnum-tiller
-  labels:
-    app: helm
-data:
-  install-${CHART_NAME}.sh: |
-    #!/bin/bash
-    set -e
-    set -x
-    mkdir -p \${HELM_HOME}
-    cp /etc/helm/* \${HELM_HOME}
-
-    # HACK - Force wait because of bug https://github.com/helm/helm/issues/5170
-    until helm init --client-only --wait
-    do
-        sleep 5s
-    done
-    helm repo update
-
-    if [[ \$(helm history ${CHART_NAME} | grep ${CHART_NAME}) ]]; then
-        echo "${CHART_NAME} already installed on server. Continue..."
-        exit 0
-    else
-        helm install stable/${CHART_NAME} --namespace kube-system --name ${CHART_NAME} --version ${NGINX_INGRESS_CONTROLLER_CHART_TAG} --values /opt/magnum/install-${CHART_NAME}-values.yaml
-    fi
-
-  install-${CHART_NAME}-values.yaml:  |
+    HELM_CHART_DIR="/srv/magnum/kubernetes/helm/magnum"
+    mkdir -p ${HELM_CHART_DIR}
+
+    cat << EOF >> ${HELM_CHART_DIR}/requirements.yaml
+- name: ${CHART_NAME}
+  version: ${NGINX_INGRESS_CONTROLLER_CHART_TAG}
+  repository: https://kubernetes-charts.storage.googleapis.com/
+EOF
+
+    cat << EOF >> ${HELM_CHART_DIR}/values.yaml
+nginx-ingress:
   controller:
     name: controller
     image:
@@ -156,8 +126,6 @@ data:
       type: ClusterIP
     serviceMonitor:
       enabled: ${MONITORING_ENABLED}
-      additionalLabels:
-        release: prometheus-operator
       namespace: kube-system
     lifecycle: {}
     priorityClassName: "system-node-critical"
@@ -201,46 +169,5 @@ data:
   imagePullSecrets: []
   tcp: {}
   udp: {}
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: install-${CHART_NAME}-job
-  namespace: magnum-tiller
-spec:
-  backoffLimit: 10
-  template:
-    spec:
-      serviceAccountName: tiller
-      containers:
-      - name: config-helm
-        image: ${CONTAINER_INFRA_PREFIX:-docker.io/openstackmagnum/}helm-client:${HELM_CLIENT_TAG}
-        command:
-        - bash
-        args:
-        - /opt/magnum/install-${CHART_NAME}.sh
-        env:
-        - name: HELM_HOME
-          value: /helm_home
-        - name: TILLER_NAMESPACE
-          value: magnum-tiller
-        - name: HELM_TLS_ENABLE
-          value: "true"
-        volumeMounts:
-        - name: install-${CHART_NAME}-config
-          mountPath: /opt/magnum/
-        - mountPath: /etc/helm
-          name: helm-client-certs
-      restartPolicy: Never
-      volumes:
-      - name: install-${CHART_NAME}-config
-        configMap:
-          name: ${CHART_NAME}-config
-      - name: helm-client-certs
-        secret:
-          secretName: helm-client-secret
 EOF
-    }
 fi
-
-printf "Finished running ${step}\n"


@@ -1,100 +1,28 @@
 #!/bin/bash
 
+set +x
 . /etc/sysconfig/heat-params
 set -ex
 
-step="metrics-server"
-printf "Starting to run ${step}\n"
-
-### Configuration
-###############################################################################
 CHART_NAME="metrics-server"
 
 if [ "$(echo ${METRICS_SERVER_ENABLED} | tr '[:upper:]' '[:lower:]')" = "true" ]; then
-    echo "Writing ${CHART_NAME} config"
 
-    HELM_MODULE_CONFIG_FILE="/srv/magnum/kubernetes/helm/${CHART_NAME}.yaml"
-    [ -f ${HELM_MODULE_CONFIG_FILE} ] || {
-        echo "Writing File: ${HELM_MODULE_CONFIG_FILE}"
-        mkdir -p $(dirname ${HELM_MODULE_CONFIG_FILE})
-        cat << EOF > ${HELM_MODULE_CONFIG_FILE}
----
-kind: ConfigMap
-apiVersion: v1
-metadata:
-  name: ${CHART_NAME}-config
-  namespace: magnum-tiller
-  labels:
-    app: helm
-data:
-  install-${CHART_NAME}.sh: |
-    #!/bin/bash
-    set -e
-    set -x
-    mkdir -p \${HELM_HOME}
-    cp /etc/helm/* \${HELM_HOME}
-
-    # HACK - Force wait because of bug https://github.com/helm/helm/issues/5170
-    until helm init --client-only --wait
-    do
-        sleep 5s
-    done
-    helm repo update
-
-    if [[ \$(helm history metrics-server | grep metrics-server) ]]; then
-        echo "${CHART_NAME} already installed on server. Continue..."
-        exit 0
-    else
-        helm install stable/${CHART_NAME} --namespace kube-system --name ${CHART_NAME} --version ${METRICS_SERVER_CHART_TAG} --values /opt/magnum/install-${CHART_NAME}-values.yaml
-    fi
-
-  install-${CHART_NAME}-values.yaml: |
+    HELM_CHART_DIR="/srv/magnum/kubernetes/helm/magnum"
+    mkdir -p ${HELM_CHART_DIR}
+
+    cat << EOF >> ${HELM_CHART_DIR}/requirements.yaml
+- name: ${CHART_NAME}
+  version: ${METRICS_SERVER_CHART_TAG}
+  repository: https://kubernetes-charts.storage.googleapis.com/
+EOF
+
+    cat << EOF >> ${HELM_CHART_DIR}/values.yaml
+metrics-server:
   image:
     repository: ${CONTAINER_INFRA_PREFIX:-gcr.io/google_containers/}metrics-server-${ARCH}
   args:
   - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: install-${CHART_NAME}-job
-  namespace: magnum-tiller
-spec:
-  backoffLimit: 10
-  template:
-    spec:
-      serviceAccountName: tiller
-      containers:
-      - name: config-helm
-        image: ${CONTAINER_INFRA_PREFIX:-docker.io/openstackmagnum/}helm-client:${HELM_CLIENT_TAG}
-        command:
-        - bash
-        args:
-        - /opt/magnum/install-${CHART_NAME}.sh
-        env:
-        - name: HELM_HOME
-          value: /helm_home
-        - name: TILLER_NAMESPACE
-          value: magnum-tiller
-        - name: HELM_TLS_ENABLE
-          value: "true"
-        volumeMounts:
-        - name: install-${CHART_NAME}-config
-          mountPath: /opt/magnum/
-        - mountPath: /etc/helm
-          name: helm-client-certs
-      restartPolicy: Never
-      volumes:
-      - name: install-${CHART_NAME}-config
-        configMap:
-          name: ${CHART_NAME}-config
-      - name: helm-client-certs
-        secret:
-          secretName: helm-client-secret
 EOF
-    }
 fi
-
-printf "Finished running ${step}\n"


@@ -1,73 +1,38 @@
 #!/bin/bash
 
+set +x
 . /etc/sysconfig/heat-params
 set -ex
 
-step="prometheus-adapter"
-printf "Starting to run ${step}\n"
-
-### Configuration
-# This configuration is dependent on the helm installed prometheus-operator.
-###############################################################################
+# This configuration depends on helm installed prometheus-operator.
 CHART_NAME="prometheus-adapter"
 
 if [ "$(echo ${MONITORING_ENABLED} | tr '[:upper:]' '[:lower:]')" = "true" ] && \
     [ "$(echo ${PROMETHEUS_ADAPTER_ENABLED} | tr '[:upper:]' '[:lower:]')" = "true" ]; then
-    echo "Writing ${CHART_NAME} config"
 
-    HELM_MODULE_CONFIG_FILE="/srv/magnum/kubernetes/helm/${CHART_NAME}.yaml"
-    [ -f ${HELM_MODULE_CONFIG_FILE} ] || {
-        echo "Writing File: ${HELM_MODULE_CONFIG_FILE}"
-        mkdir -p $(dirname ${HELM_MODULE_CONFIG_FILE})
-        cat << EOF > ${HELM_MODULE_CONFIG_FILE}
----
-kind: ConfigMap
-apiVersion: v1
-metadata:
-  name: ${CHART_NAME}-config
-  namespace: magnum-tiller
-  labels:
-    app: helm
-data:
-  install-${CHART_NAME}.sh: |
-    #!/bin/bash
-    set -ex
-    mkdir -p \${HELM_HOME}
-    cp /etc/helm/* \${HELM_HOME}
-
-    # HACK - Force wait because of bug https://github.com/helm/helm/issues/5170
-    until helm init --client-only --wait
-    do
-        sleep 5s
-    done
-    helm repo update
-
-    if [[ \$(helm history ${CHART_NAME} | grep ${CHART_NAME}) ]]; then
-        echo "${CHART_NAME} already installed on server. Continue..."
-        exit 0
-    else
-        # TODO: Set namespace to monitoring. This is needed as the Kubernetes default priorityClass can only be used in NS kube-system
-        helm install stable/${CHART_NAME} --namespace kube-system --name ${CHART_NAME} --version ${PROMETHEUS_ADAPTER_CHART_TAG} --values /opt/magnum/install-${CHART_NAME}-values.yaml
-    fi
-
-  install-${CHART_NAME}-values.yaml: |
+    HELM_CHART_DIR="/srv/magnum/kubernetes/helm/magnum"
+    mkdir -p ${HELM_CHART_DIR}
+
+    cat << EOF >> ${HELM_CHART_DIR}/requirements.yaml
+- name: ${CHART_NAME}
+  version: ${PROMETHEUS_ADAPTER_CHART_TAG}
+  repository: https://kubernetes-charts.storage.googleapis.com/
+EOF
+
+    cat << EOF >> ${HELM_CHART_DIR}/values.yaml
+prometheus-adapter:
   image:
     repository: ${CONTAINER_INFRA_PREFIX:-docker.io/directxman12/}k8s-prometheus-adapter-${ARCH}
   priorityClassName: "system-cluster-critical"
   prometheus:
     url: http://web.tcp.prometheus-prometheus.kube-system.svc.cluster.local
   resources:
     requests:
       cpu: 150m
       memory: 400Mi
   rules:
     existing: ${PROMETHEUS_ADAPTER_CONFIGMAP}
   # tls:
   #   enable: true
   #   ca: |-
@@ -76,48 +41,5 @@ data:
   #     # Private key of the APIService
   #   certificate: |-
   #     # Public key of the APIService
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: install-${CHART_NAME}-job
-  namespace: magnum-tiller
-spec:
-  backoffLimit: 10
-  template:
-    spec:
-      serviceAccountName: tiller
-      containers:
-      - name: config-helm
-        image: ${CONTAINER_INFRA_PREFIX:-docker.io/openstackmagnum/}helm-client:${HELM_CLIENT_TAG}
-        command:
-        - bash
-        args:
-        - /opt/magnum/install-${CHART_NAME}.sh
-        env:
-        - name: HELM_HOME
-          value: /helm_home
-        - name: TILLER_NAMESPACE
-          value: magnum-tiller
-        - name: HELM_TLS_ENABLE
-          value: "true"
-        volumeMounts:
-        - name: install-${CHART_NAME}-config
-          mountPath: /opt/magnum/
-        - mountPath: /etc/helm
-          name: helm-client-certs
-      restartPolicy: Never
-      volumes:
-      - name: install-${CHART_NAME}-config
-        configMap:
-          name: ${CHART_NAME}-config
-      - name: helm-client-certs
-        secret:
-          secretName: helm-client-secret
 EOF
-    }
 fi
-
-printf "Finished running ${step}\n"


@@ -1,19 +1,23 @@
 #!/bin/bash
 
+set +x
 . /etc/sysconfig/heat-params
 set -ex
 
-step="prometheus-operator"
-printf "Starting to run ${step}\n"
-
-### Configuration
-###############################################################################
 CHART_NAME="prometheus-operator"
 
 if [ "$(echo ${MONITORING_ENABLED} | tr '[:upper:]' '[:lower:]')" = "true" ]; then
-    echo "Writing ${CHART_NAME} config"
+    HELM_CHART_DIR="/srv/magnum/kubernetes/helm/magnum"
+    mkdir -p ${HELM_CHART_DIR}
+
+    cat << EOF >> ${HELM_CHART_DIR}/requirements.yaml
+- name: ${CHART_NAME}
+  version: ${PROMETHEUS_OPERATOR_CHART_TAG}
+  repository: https://kubernetes-charts.storage.googleapis.com/
+EOF
 
-    #######################
     # Calculate resources needed to run the Prometheus Monitoring Solution
     # MAX_NODE_COUNT so we can have metrics even if cluster scales
     PROMETHEUS_SERVER_CPU=$(expr 128 + 7 \* ${MAX_NODE_COUNT} )
@@ -33,86 +37,8 @@ if [ "$(echo ${MONITORING_ENABLED} | tr '[:upper:]' '[:lower:]')" = "true" ]; then
         INSECURE_SKIP_VERIFY="True"
     fi
 
-    #######################
-    # Set up definitions for ingress objects
-    # Ensure name conformity
-    INGRESS_CONTROLLER=$(echo ${INGRESS_CONTROLLER} | tr '[:upper:]' '[:lower:]')
-    if [ "${INGRESS_CONTROLLER}" == "nginx" ]; then
-        :
-    elif [ "${INGRESS_CONTROLLER}" == "traefik" ]; then
-        APP_ADDITIONAL_SERVICE_MONITORS=$(cat << EOF
-additionalServiceMonitors:
-- name: prometheus-traefik-metrics
-  selector:
-    matchLabels:
-      k8s-app: traefik
-  namespaceSelector:
-    matchNames:
-    - kube-system
-  endpoints:
-  - path: /metrics
-    port: metrics
-EOF
-)
-    fi #END INGRESS
-
-    if [ "$(echo ${AUTO_SCALING_ENABLED} | tr '[:upper:]' '[:lower:]')" == "true" ]; then
-        APP_ADDITIONAL_POD_MONITORS=$(cat << EOF
-additionalPodMonitors:
-- name: prometheus-cluster-autoscaler
-  podMetricsEndpoints:
-  - port: metrics
-    scheme: http
-  namespaceSelector:
-    matchNames:
-    - kube-system
-  selector:
-    matchLabels:
-      app: cluster-autoscaler
-EOF
-)
-    fi #END AUTOSCALING
-
-    HELM_MODULE_CONFIG_FILE="/srv/magnum/kubernetes/helm/${CHART_NAME}.yaml"
-    [ -f ${HELM_MODULE_CONFIG_FILE} ] || {
-        echo "Writing File: ${HELM_MODULE_CONFIG_FILE}"
-        mkdir -p $(dirname ${HELM_MODULE_CONFIG_FILE})
-        cat << EOF > ${HELM_MODULE_CONFIG_FILE}
----
-kind: ConfigMap
-apiVersion: v1
-metadata:
-  name: ${CHART_NAME}-config
-  namespace: magnum-tiller
-  labels:
-    app: helm
-data:
-  install-${CHART_NAME}.sh: |
-    #!/bin/bash
-    set -ex
-    mkdir -p \${HELM_HOME}
-    cp /etc/helm/* \${HELM_HOME}
-
-    # HACK - Force wait because of bug https://github.com/helm/helm/issues/5170
-    until helm init --client-only --wait
-    do
-        sleep 5s
-    done
-    helm repo update
-
-    if [[ \$(helm history ${CHART_NAME} | grep ${CHART_NAME}) ]]; then
-        echo "${CHART_NAME} already installed on server. Continue..."
-        exit 0
-    else
-        # TODO: Set namespace to monitoring. This is needed as the Kubernetes default priorityClass can only be used in NS kube-system
-        helm install stable/${CHART_NAME} --namespace kube-system --name ${CHART_NAME} --version ${PROMETHEUS_OPERATOR_CHART_TAG} --values /opt/magnum/install-${CHART_NAME}-values.yaml
-    fi
-
-  install-${CHART_NAME}-values.yaml: |
-    ---
+    cat << EOF >> ${HELM_CHART_DIR}/values.yaml
+prometheus-operator:
     nameOverride: prometheus
     fullnameOverride: prometheus
 
     defaultRules:
       rules:
@@ -138,7 +64,7 @@ data:
       requests:
         cpu: 100m
         memory: 128Mi
-    adminPassword: ${ADMIN_PASSWD}
+    adminPassword: ${GRAFANA_ADMIN_PASSWD}
 
     kubeApiServer:
       tlsConfig:
@@ -287,50 +213,44 @@ data:
         cpu: ${PROMETHEUS_SERVER_CPU}m
         memory: ${PROMETHEUS_SERVER_RAM}M
       priorityClassName: "system-cluster-critical"
-    ${APP_ADDITIONAL_SERVICE_MONITORS}
-    ${APP_ADDITIONAL_POD_MONITORS}
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: install-${CHART_NAME}-job
-  namespace: magnum-tiller
-spec:
-  backoffLimit: 10
-  template:
-    spec:
-      serviceAccountName: tiller
-      containers:
-      - name: config-helm
-        image: ${CONTAINER_INFRA_PREFIX:-docker.io/openstackmagnum/}helm-client:${HELM_CLIENT_TAG}
-        command:
-        - bash
-        args:
-        - /opt/magnum/install-${CHART_NAME}.sh
-        env:
-        - name: HELM_HOME
-          value: /helm_home
-        - name: TILLER_NAMESPACE
-          value: magnum-tiller
-        - name: HELM_TLS_ENABLE
-          value: "true"
-        volumeMounts:
-        - name: install-${CHART_NAME}-config
-          mountPath: /opt/magnum/
-        - mountPath: /etc/helm
-          name: helm-client-certs
-      restartPolicy: Never
-      volumes:
-      - name: install-${CHART_NAME}-config
-        configMap:
-          name: ${CHART_NAME}-config
-      - name: helm-client-certs
-        secret:
-          secretName: helm-client-secret
 EOF
-    }
+
+    #######################
+    # Set up definitions for ingress objects
+    # Ensure name conformity
+    INGRESS_CONTROLLER=$(echo ${INGRESS_CONTROLLER} | tr '[:upper:]' '[:lower:]')
+    if [ "${INGRESS_CONTROLLER}" == "nginx" ]; then
+        :
+    elif [ "${INGRESS_CONTROLLER}" == "traefik" ]; then
+        cat << EOF >> ${HELM_CHART_DIR}/values.yaml
+  additionalServiceMonitors:
+  - name: prometheus-traefik-metrics
+    selector:
+      matchLabels:
+        k8s-app: traefik
+    namespaceSelector:
+      matchNames:
+      - kube-system
+    endpoints:
+    - path: /metrics
+      port: metrics
+EOF
+    fi #END INGRESS
+
+    if [ "$(echo ${AUTO_SCALING_ENABLED} | tr '[:upper:]' '[:lower:]')" == "true" ]; then
+        cat << EOF >> ${HELM_CHART_DIR}/values.yaml
+  additionalPodMonitors:
+  - name: prometheus-cluster-autoscaler
+    podMetricsEndpoints:
+    - port: metrics
+      scheme: http
+    namespaceSelector:
+      matchNames:
+      - kube-system
+    selector:
+      matchLabels:
+        app: cluster-autoscaler
+EOF
+    fi #END AUTOSCALING
 fi
-
-printf "Finished running ${step}\n"


@@ -106,6 +106,7 @@ class K8sFedoraTemplateDefinition(k8s_template_def.K8sTemplateDefinition):
                       'tiller_enabled',
                       'tiller_tag',
                       'tiller_namespace',
+                      'helm_client_url', 'helm_client_sha256',
                       'helm_client_tag',
                       'traefik_ingress_controller_tag',
                       'node_problem_detector_tag',


@@ -727,17 +727,29 @@ parameters:
   tiller_tag:
     type: string
     description: tag of tiller container
-    default: "v2.12.3"
+    default: "v2.16.7"
 
   tiller_namespace:
     type: string
     description: namespace where tiller will be installed.
     default: "magnum-tiller"
 
+  helm_client_url:
+    type: string
+    description: url of helm client tarball
+    default: ""
+
+  helm_client_sha256:
+    type: string
+    description: sha256 of helm client tarball
+    default: "018f9908cb950701a5d59e757653a790c66d8eda288625dbb185354ca6f41f6b"
+
   helm_client_tag:
     type: string
-    description: tag of helm container
-    default: "dev"
+    description: >
+      release tag of helm client
+      https://github.com/helm/helm/releases
+    default: "v3.2.1"
 
   auto_healing_enabled:
     type: boolean
@@ -1119,7 +1131,6 @@ resources:
           metrics_server_enabled: {get_param: metrics_server_enabled}
           metrics_server_chart_tag: {get_param: metrics_server_chart_tag}
           prometheus_monitoring: {get_param: prometheus_monitoring}
-          grafana_admin_passwd: {get_param: grafana_admin_passwd}
           api_public_address: {get_attr: [api_lb, floating_address]}
           api_private_address: {get_attr: [api_lb, address]}
           ssh_key_name: {get_param: ssh_key_name}
@@ -1218,6 +1229,8 @@ resources:
           tiller_enabled: {get_param: tiller_enabled}
           tiller_tag: {get_param: tiller_tag}
           tiller_namespace: {get_param: tiller_namespace}
+          helm_client_url: {get_param: helm_client_url}
+          helm_client_sha256: {get_param: helm_client_sha256}
           helm_client_tag: {get_param: helm_client_tag}
           node_problem_detector_tag: {get_param: node_problem_detector_tag}
           nginx_ingress_controller_tag: {get_param: nginx_ingress_controller_tag}
@@ -1269,7 +1282,7 @@ resources:
           - str_replace:
               template: {get_file: ../../common/templates/kubernetes/fragments/enable-prometheus-monitoring.sh}
               params:
-                "$ADMIN_PASSWD": {get_param: grafana_admin_passwd}
+                "${GRAFANA_ADMIN_PASSWD}": {get_param: grafana_admin_passwd}
          - str_replace:
              params:
                $enable-ingress-traefik: {get_file: ../../common/templates/kubernetes/fragments/enable-ingress-traefik.sh}
@@ -1288,7 +1301,7 @@ resources:
           - str_replace:
               template: {get_file: ../../common/templates/kubernetes/helm/prometheus-operator.sh}
               params:
-                "${ADMIN_PASSWD}": {get_param: grafana_admin_passwd}
+                "${GRAFANA_ADMIN_PASSWD}": {get_param: grafana_admin_passwd}
                 "${KUBE_MASTERS_PRIVATE}": {get_attr: [kube_masters, kube_master_external_ip]}
           - get_file: ../../common/templates/kubernetes/helm/prometheus-adapter.sh
           - get_file: ../../common/templates/kubernetes/helm/ingress-nginx.sh


@@ -190,12 +190,6 @@ parameters:
     description: >
       whether or not to have prometheus and grafana deployed
 
-  grafana_admin_passwd:
-    type: string
-    hidden: true
-    description: >
-      admin user password for the Grafana monitoring interface
-
   api_public_address:
     type: string
     description: Public IP address of the Kubernetes master server.
@@ -502,9 +496,19 @@ parameters:
     type: string
     description: namespace where tiller will be installed
 
+  helm_client_url:
+    type: string
+    description: url of helm client tarball
+
+  helm_client_sha256:
+    type: string
+    description: sha256 of helm client tarball
+
   helm_client_tag:
     type: string
-    description: tag of helm container
+    description: >
+      release tag of helm client
+      https://github.com/helm/helm/releases
 
   auto_healing_enabled:
     type: boolean
@@ -799,6 +803,8 @@ resources:
             "$TILLER_ENABLED": {get_param: tiller_enabled}
             "$TILLER_TAG": {get_param: tiller_tag}
             "$TILLER_NAMESPACE": {get_param: tiller_namespace}
+            "$HELM_CLIENT_URL": {get_param: helm_client_url}
+            "$HELM_CLIENT_SHA256": {get_param: helm_client_sha256}
             "$HELM_CLIENT_TAG": {get_param: helm_client_tag}
             "$NODE_PROBLEM_DETECTOR_TAG": {get_param: node_problem_detector_tag}
             "$NGINX_INGRESS_CONTROLLER_TAG": {get_param: nginx_ingress_controller_tag}


@@ -732,22 +732,34 @@ parameters:
   tiller_enabled:
     type: boolean
     description: Choose whether to install tiller or not.
-    default: true
+    default: false
 
   tiller_tag:
     type: string
     description: tag of tiller container
-    default: "v2.12.3"
+    default: "v2.16.7"
 
   tiller_namespace:
     type: string
     description: namespace where tiller will be installed.
     default: "magnum-tiller"
 
+  helm_client_url:
+    type: string
+    description: url of helm client tarball
+    default: ""
+
+  helm_client_sha256:
+    type: string
+    description: sha256 of helm client tarball
+    default: "018f9908cb950701a5d59e757653a790c66d8eda288625dbb185354ca6f41f6b"
+
   helm_client_tag:
     type: string
-    description: tag of helm container
-    default: "dev"
+    description: >
+      release tag of helm client
+      https://github.com/helm/helm/releases
+    default: "v3.2.1"
 
   auto_healing_enabled:
     type: boolean
@@ -1153,7 +1165,6 @@ resources:
           metrics_server_enabled: {get_param: metrics_server_enabled}
           metrics_server_chart_tag: {get_param: metrics_server_chart_tag}
           prometheus_monitoring: {get_param: prometheus_monitoring}
-          grafana_admin_passwd: {get_param: grafana_admin_passwd}
           api_public_address: {get_attr: [api_lb, floating_address]}
           api_private_address: {get_attr: [api_lb, address]}
           ssh_key_name: {get_param: ssh_key_name}
@@ -1253,6 +1264,8 @@ resources:
           tiller_enabled: {get_param: tiller_enabled}
           tiller_tag: {get_param: tiller_tag}
           tiller_namespace: {get_param: tiller_namespace}
+          helm_client_url: {get_param: helm_client_url}
+          helm_client_sha256: {get_param: helm_client_sha256}
           helm_client_tag: {get_param: helm_client_tag}
           node_problem_detector_tag: {get_param: node_problem_detector_tag}
           nginx_ingress_controller_tag: {get_param: nginx_ingress_controller_tag}
@@ -1305,7 +1318,7 @@ resources:
           - str_replace:
               template: {get_file: ../../common/templates/kubernetes/fragments/enable-prometheus-monitoring.sh}
               params:
-                "$ADMIN_PASSWD": {get_param: grafana_admin_passwd}
+                "${GRAFANA_ADMIN_PASSWD}": {get_param: grafana_admin_passwd}
          - str_replace:
              params:
                $enable-ingress-traefik: {get_file: ../../common/templates/kubernetes/fragments/enable-ingress-traefik.sh}
@@ -1324,7 +1337,7 @@ resources:
           - str_replace:
               template: {get_file: ../../common/templates/kubernetes/helm/prometheus-operator.sh}
               params:
-                "${ADMIN_PASSWD}": {get_param: grafana_admin_passwd}
+                "${GRAFANA_ADMIN_PASSWD}": {get_param: grafana_admin_passwd}
                 "${KUBE_MASTERS_PRIVATE}": {get_attr: [kube_masters, kube_master_external_ip]}
           - get_file: ../../common/templates/kubernetes/helm/prometheus-adapter.sh
           - get_file: ../../common/templates/kubernetes/helm/ingress-nginx.sh


@@ -194,12 +194,6 @@ parameters:
     description: >
       whether or not to have prometheus and grafana deployed
 
-  grafana_admin_passwd:
-    type: string
-    hidden: true
-    description: >
-      admin user password for the Grafana monitoring interface
-
   api_public_address:
     type: string
     description: Public IP address of the Kubernetes master server.
@@ -506,9 +500,19 @@ parameters:
     type: string
     description: namespace where tiller will be installed
 
+  helm_client_url:
+    type: string
+    description: url of helm client tarball
+
+  helm_client_sha256:
+    type: string
+    description: sha256 of helm client tarball
+
   helm_client_tag:
     type: string
-    description: tag of helm container
+    description: >
+      release tag of helm client
+      https://github.com/helm/helm/releases
 
   auto_healing_enabled:
     type: boolean
@@ -810,6 +814,8 @@ resources:
             "$TILLER_ENABLED": {get_param: tiller_enabled}
             "$TILLER_TAG": {get_param: tiller_tag}
             "$TILLER_NAMESPACE": {get_param: tiller_namespace}
+            "$HELM_CLIENT_URL": {get_param: helm_client_url}
+            "$HELM_CLIENT_SHA256": {get_param: helm_client_sha256}
             "$HELM_CLIENT_TAG": {get_param: helm_client_tag}
             "$NODE_PROBLEM_DETECTOR_TAG": {get_param: node_problem_detector_tag}
             "$NGINX_INGRESS_CONTROLLER_TAG": {get_param: nginx_ingress_controller_tag}


@@ -594,6 +594,10 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
             'tiller_tag')
         tiller_namespace = mock_cluster.labels.get(
             'tiller_namespace')
+        helm_client_url = mock_cluster.labels.get(
+            'helm_client_url')
+        helm_client_sha256 = mock_cluster.labels.get(
+            'helm_client_sha256')
         helm_client_tag = mock_cluster.labels.get(
             'helm_client_tag')
         npd_tag = mock_cluster.labels.get('node_problem_detector_tag')
@@ -719,6 +723,8 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
             'tiller_enabled': tiller_enabled,
             'tiller_tag': tiller_tag,
             'tiller_namespace': tiller_namespace,
+            'helm_client_url': helm_client_url,
+            'helm_client_sha256': helm_client_sha256,
             'helm_client_tag': helm_client_tag,
             'node_problem_detector_tag': npd_tag,
             'auto_healing_enabled': auto_healing_enabled,
@@ -1108,6 +1114,10 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
             'tiller_tag')
         tiller_namespace = mock_cluster.labels.get(
             'tiller_namespace')
+        helm_client_url = mock_cluster.labels.get(
+            'helm_client_url')
+        helm_client_sha256 = mock_cluster.labels.get(
+            'helm_client_sha256')
         helm_client_tag = mock_cluster.labels.get(
             'helm_client_tag')
         npd_tag = mock_cluster.labels.get('node_problem_detector_tag')
@@ -1236,6 +1246,8 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
             'tiller_enabled': tiller_enabled,
             'tiller_tag': tiller_tag,
             'tiller_namespace': tiller_namespace,
+            'helm_client_url': helm_client_url,
+            'helm_client_sha256': helm_client_sha256,
             'helm_client_tag': helm_client_tag,
             'node_problem_detector_tag': npd_tag,
             'auto_healing_enabled': auto_healing_enabled,


@ -0,0 +1,19 @@
---
features:
- |
Support Helm v3 client to install helm charts. To use this feature, users
will need to use helm_client_tag>=v3.0.0 (default helm_client_tag=v3.2.1).
All the existing chart used to depend on Helm v2, e.g. nginx ingress
controller, metrics server, prometheus operator and prometheus adapter are
now also installable using v3 client. Also introduce helm_client_sha256 and
helm_client_url that users can specify to install non-default helm client
version (https://github.com/helm/helm/releases).
upgrade:
- |
Default tiller_tag is set to v2.16.7. The charts remain compatible but
helm_client_tag will also need to be set to the same value as tiller_tag,
i.e. v2.16.7. In this case, the user will also need to provide
helm_client_sha256 for the helm client binary intended for use.
deprecations:
- |
Support for Helm v2 client will be removed in X release.
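
For the v2 upgrade path described above, a template pinning matching client
and Tiller versions might look like this sketch (names are placeholders;
compute the checksum yourself rather than copying one from elsewhere):

    curl -sLO https://get.helm.sh/helm-v2.16.7-linux-amd64.tar.gz
    sha256sum helm-v2.16.7-linux-amd64.tar.gz    # supply this value below
    openstack coe cluster template create k8s-helm-v2 \
        --coe kubernetes \
        --image fedora-coreos-latest \
        --external-network public \
        --labels tiller_enabled=true,tiller_tag=v2.16.7,helm_client_tag=v2.16.7,helm_client_sha256=<computed-checksum>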