Merge "[k8s] Use Helm v3 by default"
Commit fba8cb9cba
@@ -420,13 +420,19 @@ the table are linked to more details elsewhere in the user guide.
 +---------------------------------------+--------------------+---------------+
 | `k8s_keystone_auth_tag`_              | see below          | see below     |
 +---------------------------------------+--------------------+---------------+
-| `tiller_enabled`_                     | - true             | true          |
+| `tiller_enabled`_                     | - true             | false         |
 |                                       | - false            |               |
 +---------------------------------------+--------------------+---------------+
 | `tiller_tag`_                         | see below          | ""            |
 +---------------------------------------+--------------------+---------------+
 | `tiller_namespace`_                   | see below          | see below     |
 +---------------------------------------+--------------------+---------------+
+| `helm_client_url`_                    | see below          | see below     |
++---------------------------------------+--------------------+---------------+
+| `helm_client_sha256`_                 | see below          | see below     |
++---------------------------------------+--------------------+---------------+
+| `helm_client_tag`_                    | see below          | see below     |
++---------------------------------------+--------------------+---------------+
 | `master_lb_floating_ip_enabled`_      | - true             | see below     |
 |                                       | - false            |               |
 +---------------------------------------+--------------------+---------------+
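For readers of this change, a minimal sketch of how the labels in the table above are supplied at cluster-template creation time (the template name, image, and network are illustrative placeholders):

    # Hypothetical example: select the Helm v3 client and skip Tiller entirely.
    openstack coe cluster template create k8s-helm-v3 \
        --coe kubernetes \
        --image fedora-coreos-latest \
        --external-network public \
        --labels helm_client_tag=v3.2.1,tiller_enabled=false,monitoring_enabled=true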
@@ -1273,7 +1279,9 @@ _`metrics_server_chart_tag`

 _`metrics_server_enabled`
   metrics_server_enabled is used to enable or disable the installation of
-  the metrics server. To use this service tiller_enabled must be true.
+  the metrics server.
+  To use this service tiller_enabled must be true when using
+  helm_client_tag<v3.0.0.
   Train default: true
   Stein default: true

@@ -1447,6 +1455,8 @@ _`k8s_keystone_auth_tag`
 _`monitoring_enabled`
   Enable installation of cluster monitoring solution provided by the
   stable/prometheus-operator helm chart.
+  To use this service tiller_enabled must be true when using
+  helm_client_tag<v3.0.0.
   Default: false

 _`prometheus_adapter_enabled`
@@ -1473,24 +1483,34 @@ _`prometheus_operator_chart_tag`

 _`tiller_enabled`
   If set to true, tiller will be deployed in the kube-system namespace.
-  Ussuri default: true
+  Ussuri default: false
   Train default: false

 _`tiller_tag`
-  Add tiller_tag label to select the version of tiller. If the tag is not set
-  the tag that matches the helm client version in the heat-agent will be
-  picked. The tiller image can be stored in a private registry and the
-  cluster can pull it using the container_infra_prefix label.
+  This label allows users to override the default container tag for Tiller.
+  For additional tags, `refer to Tiller page
+  <https://github.com/helm/helm/tags>`_ and look for tags<v3.0.0.
   Train default: v2.12.3
+  Ussuri default: v2.16.7

 _`tiller_namespace`
-  Configure in which namespace tiller is going to be installed.
+  The namespace in which Tiller and Helm v2 chart install jobs are installed.
   Default: magnum-tiller

+_`helm_client_url`
+  URL of the helm client binary.
+  Default: ''
+
+_`helm_client_sha256`
+  SHA256 checksum of the helm client binary.
+  Ussuri default: 018f9908cb950701a5d59e757653a790c66d8eda288625dbb185354ca6f41f6b
+
 _`helm_client_tag`
   The version of the helm client to use.
-  The image can be stored in a private registry and the
-  cluster can pull it using the container_infra_prefix label.
-  Default: dev
+  This label allows users to override the default container tag for Helm
+  client. For additional tags, `refer to Helm client page
+  <https://github.com/helm/helm/tags>`_. You must use identical tiller_tag if
+  you wish to use Tiller (for helm_client_tag<v3.0.0).
+  Ussuri default: v3.2.1

 _`master_lb_floating_ip_enabled`
   Controls if Magnum allocates floating IP for the load balancer of master
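A minimal sketch of how the three helm_client_* labels above combine. The URL and checksum shown are the v3.2.1 defaults quoted in this change; the template name, image, and network are illustrative:

    # Pin the helm client explicitly instead of relying on defaults.
    # helm_client_url may point at a private mirror; helm_client_sha256
    # must match the tarball that the URL serves.
    openstack coe cluster template create k8s-pinned-helm \
        --coe kubernetes \
        --image fedora-coreos-latest \
        --external-network public \
        --labels helm_client_tag=v3.2.1,helm_client_url=https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz,helm_client_sha256=018f9908cb950701a5d59e757653a790c66d8eda288625dbb185354ca6f41f6b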
@@ -1664,6 +1684,8 @@ _`ingress_controller`
   Controller is configured. For more details about octavia-ingress-controller
   please refer to `cloud-provider-openstack document
   <https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-octavia-ingress-controller.md>`_
+  To use 'nginx' ingress controller, tiller_enabled must be true when using
+  helm_client_tag<v3.0.0.

 _`ingress_controller_role`
   This label defines the role nodes should have to run an instance of the
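An illustrative sketch of the nginx ingress case described above (role and other values are placeholders; with helm_client_tag<v3.0.0, tiller_enabled=true would also be required):

    openstack coe cluster template create k8s-nginx-ingress \
        --coe kubernetes \
        --image fedora-coreos-latest \
        --external-network public \
        --labels ingress_controller=nginx,ingress_controller_role=worker,helm_client_tag=v3.2.1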
magnum/drivers/common/templates/kubernetes/fragments/enable-helm-tiller.sh — 0 changes (Executable file → Normal file)
@@ -350,7 +350,7 @@ spec:
         name: grafana
         env:
         - name: GF_SECURITY_ADMIN_PASSWORD
-          value: $ADMIN_PASSWD
+          value: ${GRAFANA_ADMIN_PASSWD}
         - name: GF_DASHBOARDS_JSON_ENABLED
           value: "true"
         - name: GF_DASHBOARDS_JSON_PATH
@@ -1,32 +1,81 @@
 #!/bin/bash

-step="install-helm-modules.sh"
-printf "Starting to run ${step}\n"
+step="install-helm-modules"
+echo "START: ${step}"

 set +x
 . /etc/sysconfig/heat-params

 set -ex

+ssh_cmd="ssh -F /srv/magnum/.ssh/config root@localhost"
+
 echo "Waiting for Kubernetes API..."
-until [ "ok" = "$(curl --silent http://127.0.0.1:8080/healthz)" ]
-do
+until [ "ok" = "$(curl --silent http://127.0.0.1:8080/healthz)" ]; do
     sleep 5
 done

-if [ "$(echo ${TILLER_ENABLED} | tr '[:upper:]' '[:lower:]')" != "true" ]; then
-    echo "Use --labels tiller_enabled=True to allow for tiller dependent resources to be installed"
+if [[ "$(echo ${TILLER_ENABLED} | tr '[:upper:]' '[:lower:]')" != "true" && "${HELM_CLIENT_TAG}" == v2.* ]]; then
+    echo "Use --labels tiller_enabled=True for helm_client_tag<v3.0.0 to allow for tiller dependent resources to be installed."
 else
-    HELM_MODULES_PATH="/srv/magnum/kubernetes/helm"
-    mkdir -p ${HELM_MODULES_PATH}
-    helm_modules=(${HELM_MODULES_PATH}/*)
+    if [ -z "${HELM_CLIENT_URL}" ] ; then
+        HELM_CLIENT_URL="https://get.helm.sh/helm-$HELM_CLIENT_TAG-linux-amd64.tar.gz"
+    fi
+    i=0
+    until curl -o /srv/magnum/helm-client.tar.gz "${HELM_CLIENT_URL}"; do
+        i=$((i + 1))
+        [ $i -lt 5 ] || break;
+        sleep 5
+    done

-    # Only run kubectl if we have modules to install
-    if [ "${helm_modules}" != "${HELM_MODULES_PATH}/*" ]; then
-        for module in "${helm_modules[@]}"; do
-            echo "Applying ${module}."
-            kubectl apply -f ${module}
+    if ! echo "${HELM_CLIENT_SHA256} /srv/magnum/helm-client.tar.gz" | sha256sum -c - ; then
+        echo "ERROR helm-client.tar.gz computed checksum did NOT match, exiting."
+        exit 1
+    fi
+
+    source /etc/bashrc
+    $ssh_cmd tar xzvf /srv/magnum/helm-client.tar.gz linux-amd64/helm -O > /srv/magnum/bin/helm
+    $ssh_cmd chmod +x /srv/magnum/bin/helm
+
+    helm_install_cmd="helm install magnum . --namespace kube-system --values values.yaml --render-subchart-notes"
+    helm_history_cmd="helm history magnum --namespace kube-system"
+    if [[ "${HELM_CLIENT_TAG}" == v2.* ]]; then
+        CERTS_DIR="/etc/kubernetes/helm/certs"
+        export HELM_HOME="/srv/magnum/kubernetes/helm/home"
+        export HELM_TLS_ENABLE="true"
+        export TILLER_NAMESPACE
+        mkdir -p "${HELM_HOME}"
+        ln -s ${CERTS_DIR}/helm.cert.pem ${HELM_HOME}/cert.pem
+        ln -s ${CERTS_DIR}/helm.key.pem ${HELM_HOME}/key.pem
+        ln -s ${CERTS_DIR}/ca.cert.pem ${HELM_HOME}/ca.pem
+
+        # HACK - Force wait because of bug https://github.com/helm/helm/issues/5170
+        until helm init --client-only --wait; do
+            sleep 5s
         done
+        helm_install_cmd="helm install --name magnum . --namespace kube-system --values values.yaml --render-subchart-notes"
+        helm_history_cmd="helm history magnum"
     fi
+
+    HELM_CHART_DIR="/srv/magnum/kubernetes/helm/magnum"
+    if [[ -d "${HELM_CHART_DIR}" ]]; then
+        pushd ${HELM_CHART_DIR}
+        cat << EOF > Chart.yaml
+apiVersion: v1
+name: magnum
+version: metachart
+appVersion: metachart
+description: Magnum Helm Charts
+EOF
+        sed -i '1i\dependencies:' requirements.yaml
+
+        i=0
+        until ($helm_history_cmd | grep magnum) || (helm dep update && $helm_install_cmd); do
+            i=$((i + 1))
+            [ $i -lt 60 ] || break;
+            sleep 5
+        done
+        popd
+    fi
 fi

-printf "Finished running ${step}\n"
+echo "END: ${step}"
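To make the new control flow concrete, here is a sketch of the files the script above assembles under /srv/magnum/kubernetes/helm/magnum before running helm install. It assumes the nginx-ingress and metrics-server addons were enabled; the chart versions shown are illustrative, not taken from this change:

    # Chart.yaml (written verbatim by the script)
    apiVersion: v1
    name: magnum
    version: metachart
    appVersion: metachart
    description: Magnum Helm Charts

    # requirements.yaml (the leading "dependencies:" line is inserted by sed;
    # each enabled addon script appended its own entry)
    dependencies:
    - name: nginx-ingress
      version: 1.36.3           # illustrative chart version
      repository: https://kubernetes-charts.storage.googleapis.com/
    - name: metrics-server
      version: 2.11.1           # illustrative chart version
      repository: https://kubernetes-charts.storage.googleapis.com/

    # values.yaml (per-addon values, namespaced by chart name)
    nginx-ingress:
      controller:
        ingressClass: nginx
    metrics-server:
      image:
        repository: gcr.io/google_containers/metrics-server-amd64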
@@ -117,6 +117,8 @@ EXTERNAL_NETWORK_ID="$EXTERNAL_NETWORK_ID"
 TILLER_ENABLED="$TILLER_ENABLED"
 TILLER_TAG="$TILLER_TAG"
 TILLER_NAMESPACE="$TILLER_NAMESPACE"
+HELM_CLIENT_URL="$HELM_CLIENT_URL"
+HELM_CLIENT_SHA256="$HELM_CLIENT_SHA256"
 HELM_CLIENT_TAG="$HELM_CLIENT_TAG"
 NODE_PROBLEM_DETECTOR_TAG="$NODE_PROBLEM_DETECTOR_TAG"
 NGINX_INGRESS_CONTROLLER_TAG="$NGINX_INGRESS_CONTROLLER_TAG"
@@ -2,245 +2,172 @@

 set +x
 . /etc/sysconfig/heat-params

 set -ex

 step="nginx-ingress"
 printf "Starting to run ${step}\n"

 ### Configuration
 ###############################################################################
 CHART_NAME="nginx-ingress"

 if [ "$(echo ${INGRESS_CONTROLLER} | tr '[:upper:]' '[:lower:]')" = "nginx" ]; then
     echo "Writing ${CHART_NAME} config"

-    HELM_MODULE_CONFIG_FILE="/srv/magnum/kubernetes/helm/${CHART_NAME}.yaml"
-    [ -f ${HELM_MODULE_CONFIG_FILE} ] || {
-        echo "Writing File: ${HELM_MODULE_CONFIG_FILE}"
-        mkdir -p $(dirname ${HELM_MODULE_CONFIG_FILE})
-        cat << EOF > ${HELM_MODULE_CONFIG_FILE}
----
-kind: ConfigMap
-apiVersion: v1
-metadata:
-  name: ${CHART_NAME}-config
-  namespace: magnum-tiller
-  labels:
-    app: helm
-data:
-  install-${CHART_NAME}.sh: |
-    #!/bin/bash
-    set -e
-    set -x
-    mkdir -p \${HELM_HOME}
-    cp /etc/helm/* \${HELM_HOME}
+    HELM_CHART_DIR="/srv/magnum/kubernetes/helm/magnum"
+    mkdir -p ${HELM_CHART_DIR}

-    # HACK - Force wait because of bug https://github.com/helm/helm/issues/5170
-    until helm init --client-only --wait
-    do
-        sleep 5s
-    done
-    helm repo update

-    if [[ \$(helm history ${CHART_NAME} | grep ${CHART_NAME}) ]]; then
-        echo "${CHART_NAME} already installed on server. Continue..."
-        exit 0
-    else
-        helm install stable/${CHART_NAME} --namespace kube-system --name ${CHART_NAME} --version ${NGINX_INGRESS_CONTROLLER_CHART_TAG} --values /opt/magnum/install-${CHART_NAME}-values.yaml
-    fi

-  install-${CHART_NAME}-values.yaml: |
-    controller:
-      name: controller
-      image:
-        repository: ${CONTAINER_INFRA_PREFIX:-quay.io/kubernetes-ingress-controller/}nginx-ingress-controller
-        tag: ${NGINX_INGRESS_CONTROLLER_TAG}
-        pullPolicy: IfNotPresent
-      config: {}
-      headers: {}
-      hostNetwork: true
-      dnsPolicy: ClusterFirst
-      daemonset:
-        useHostPort: true
-        hostPorts:
-          http: 80
-          https: 443
-          stats: 18080
-      defaultBackendService: ""
-      electionID: ingress-controller-leader
-      ingressClass: nginx
-      podLabels: {}
-      publishService:
-        enabled: false
-        pathOverride: ""
-      scope:
-        enabled: false
-        namespace: "" # defaults to .Release.Namespace
-      extraArgs:
-        enable-ssl-passthrough: ""
-      extraEnvs: []
-      kind: DaemonSet
-      updateStrategy: {}
-      minReadySeconds: 0
-      tolerations: []
-      affinity: {}
-      nodeSelector:
-        role: ${INGRESS_CONTROLLER_ROLE}
-      livenessProbe:
-        failureThreshold: 3
-        initialDelaySeconds: 10
-        periodSeconds: 10
-        successThreshold: 1
-        timeoutSeconds: 1
-        port: 10254
-      readinessProbe:
-        failureThreshold: 3
-        initialDelaySeconds: 10
-        periodSeconds: 10
-        successThreshold: 1
-        timeoutSeconds: 1
-        port: 10254
-      podAnnotations: {}
-      replicaCount: 1
-      minAvailable: 1
-      resources:
-        requests:
-          cpu: 200m
-          memory: 256Mi
-      autoscaling:
-        enabled: false
-      customTemplate:
-        configMapName: ""
-        configMapKey: ""
-      service:
-        annotations: {}
-        labels: {}
-        clusterIP: ""
-        externalIPs: []
-        loadBalancerIP: ""
-        loadBalancerSourceRanges: []
-        enableHttp: true
-        enableHttps: true
-        externalTrafficPolicy: ""
-        healthCheckNodePort: 0
-        targetPorts:
-          http: http
-          https: https
-        type: NodePort
-        nodePorts:
-          http: "32080"
-          https: "32443"
-      extraContainers: []
-      extraVolumeMounts: []
-      extraVolumes: []
-      extraInitContainers: []
-      stats:
-        enabled: false
-        service:
-          annotations: {}
-          clusterIP: ""
-          externalIPs: []
-          loadBalancerIP: ""
-          loadBalancerSourceRanges: []
-          servicePort: 18080
-          type: ClusterIP
-      metrics:
-        enabled: ${MONITORING_ENABLED}
-        service:
-          annotations: {}
-          clusterIP: ""
-          externalIPs: []
-          loadBalancerIP: ""
-          loadBalancerSourceRanges: []
-          servicePort: 9913
-          type: ClusterIP
-        serviceMonitor:
-          enabled: ${MONITORING_ENABLED}
-          additionalLabels:
-            release: prometheus-operator
-          namespace: kube-system
-      lifecycle: {}
-      priorityClassName: "system-node-critical"
-    revisionHistoryLimit: 10
-    defaultBackend:
-      enabled: true
-      name: default-backend
-      image:
-        repository: ${CONTAINER_INFRA_PREFIX:-k8s.gcr.io/}defaultbackend
-        tag: "1.4"
-        pullPolicy: IfNotPresent
-      extraArgs: {}
-      port: 8080
-      tolerations: []
-      affinity: {}
-      podLabels: {}
-      nodeSelector: {}
-      podAnnotations: {}
-      replicaCount: 1
-      minAvailable: 1
-      resources:
-        requests:
-          cpu: 10m
-          memory: 20Mi
-      service:
-        annotations: {}
-        clusterIP: ""
-        externalIPs: []
-        loadBalancerIP: ""
-        loadBalancerSourceRanges: []
-        servicePort: 80
-        type: ClusterIP
-      priorityClassName: "system-cluster-critical"
-    rbac:
-      create: true
-    podSecurityPolicy:
-      enabled: false
-    serviceAccount:
-      create: true
-      name:
-    imagePullSecrets: []
-    tcp: {}
-    udp: {}
----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: install-${CHART_NAME}-job
-  namespace: magnum-tiller
-spec:
-  backoffLimit: 10
-  template:
-    spec:
-      serviceAccountName: tiller
-      containers:
-      - name: config-helm
-        image: ${CONTAINER_INFRA_PREFIX:-docker.io/openstackmagnum/}helm-client:${HELM_CLIENT_TAG}
-        command:
-        - bash
-        args:
-        - /opt/magnum/install-${CHART_NAME}.sh
-        env:
-        - name: HELM_HOME
-          value: /helm_home
-        - name: TILLER_NAMESPACE
-          value: magnum-tiller
-        - name: HELM_TLS_ENABLE
-          value: "true"
-        volumeMounts:
-        - name: install-${CHART_NAME}-config
-          mountPath: /opt/magnum/
-        - mountPath: /etc/helm
-          name: helm-client-certs
-      restartPolicy: Never
-      volumes:
-      - name: install-${CHART_NAME}-config
-        configMap:
-          name: ${CHART_NAME}-config
-      - name: helm-client-certs
-        secret:
-          secretName: helm-client-secret
+    cat << EOF >> ${HELM_CHART_DIR}/requirements.yaml
+- name: ${CHART_NAME}
+  version: ${NGINX_INGRESS_CONTROLLER_CHART_TAG}
+  repository: https://kubernetes-charts.storage.googleapis.com/
+EOF
-    }
 fi

 printf "Finished running ${step}\n"
+    cat << EOF >> ${HELM_CHART_DIR}/values.yaml
+nginx-ingress:
+  controller:
+    name: controller
+    image:
+      repository: ${CONTAINER_INFRA_PREFIX:-quay.io/kubernetes-ingress-controller/}nginx-ingress-controller
+      tag: ${NGINX_INGRESS_CONTROLLER_TAG}
+      pullPolicy: IfNotPresent
+    config: {}
+    headers: {}
+    hostNetwork: true
+    dnsPolicy: ClusterFirst
+    daemonset:
+      useHostPort: true
+      hostPorts:
+        http: 80
+        https: 443
+        stats: 18080
+    defaultBackendService: ""
+    electionID: ingress-controller-leader
+    ingressClass: nginx
+    podLabels: {}
+    publishService:
+      enabled: false
+      pathOverride: ""
+    scope:
+      enabled: false
+      namespace: "" # defaults to .Release.Namespace
+    extraArgs:
+      enable-ssl-passthrough: ""
+    extraEnvs: []
+    kind: DaemonSet
+    updateStrategy: {}
+    minReadySeconds: 0
+    tolerations: []
+    affinity: {}
+    nodeSelector:
+      role: ${INGRESS_CONTROLLER_ROLE}
+    livenessProbe:
+      failureThreshold: 3
+      initialDelaySeconds: 10
+      periodSeconds: 10
+      successThreshold: 1
+      timeoutSeconds: 1
+      port: 10254
+    readinessProbe:
+      failureThreshold: 3
+      initialDelaySeconds: 10
+      periodSeconds: 10
+      successThreshold: 1
+      timeoutSeconds: 1
+      port: 10254
+    podAnnotations: {}
+    replicaCount: 1
+    minAvailable: 1
+    resources:
+      requests:
+        cpu: 200m
+        memory: 256Mi
+    autoscaling:
+      enabled: false
+    customTemplate:
+      configMapName: ""
+      configMapKey: ""
+    service:
+      annotations: {}
+      labels: {}
+      clusterIP: ""
+      externalIPs: []
+      loadBalancerIP: ""
+      loadBalancerSourceRanges: []
+      enableHttp: true
+      enableHttps: true
+      externalTrafficPolicy: ""
+      healthCheckNodePort: 0
+      targetPorts:
+        http: http
+        https: https
+      type: NodePort
+      nodePorts:
+        http: "32080"
+        https: "32443"
+    extraContainers: []
+    extraVolumeMounts: []
+    extraVolumes: []
+    extraInitContainers: []
+    stats:
+      enabled: false
+      service:
+        annotations: {}
+        clusterIP: ""
+        externalIPs: []
+        loadBalancerIP: ""
+        loadBalancerSourceRanges: []
+        servicePort: 18080
+        type: ClusterIP
+    metrics:
+      enabled: ${MONITORING_ENABLED}
+      service:
+        annotations: {}
+        clusterIP: ""
+        externalIPs: []
+        loadBalancerIP: ""
+        loadBalancerSourceRanges: []
+        servicePort: 9913
+        type: ClusterIP
+      serviceMonitor:
+        enabled: ${MONITORING_ENABLED}
+        namespace: kube-system
+    lifecycle: {}
+    priorityClassName: "system-node-critical"
+  revisionHistoryLimit: 10
+  defaultBackend:
+    enabled: true
+    name: default-backend
+    image:
+      repository: ${CONTAINER_INFRA_PREFIX:-k8s.gcr.io/}defaultbackend
+      tag: "1.4"
+      pullPolicy: IfNotPresent
+    extraArgs: {}
+    port: 8080
+    tolerations: []
+    affinity: {}
+    podLabels: {}
+    nodeSelector: {}
+    podAnnotations: {}
+    replicaCount: 1
+    minAvailable: 1
+    resources:
+      requests:
+        cpu: 10m
+        memory: 20Mi
+    service:
+      annotations: {}
+      clusterIP: ""
+      externalIPs: []
+      loadBalancerIP: ""
+      loadBalancerSourceRanges: []
+      servicePort: 80
+      type: ClusterIP
+    priorityClassName: "system-cluster-critical"
+  rbac:
+    create: true
+  podSecurityPolicy:
+    enabled: false
+  serviceAccount:
+    create: true
+    name:
+  imagePullSecrets: []
+  tcp: {}
+  udp: {}
+EOF
+fi
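The per-addon pattern above (append a requirements.yaml entry plus a chart-name-keyed values block) is what install-helm-modules.sh later assembles into the metachart. A minimal sketch of the pattern for a hypothetical addon, with placeholder name, version, and repository:

    CHART_NAME="my-addon"                      # hypothetical chart name
    HELM_CHART_DIR="/srv/magnum/kubernetes/helm/magnum"
    mkdir -p ${HELM_CHART_DIR}

    cat << EOF >> ${HELM_CHART_DIR}/requirements.yaml
    - name: ${CHART_NAME}
      version: 1.0.0                           # illustrative version
      repository: https://example.org/charts/  # illustrative repository
    EOF

    # Values for each subchart are namespaced under the chart name.
    cat << EOF >> ${HELM_CHART_DIR}/values.yaml
    ${CHART_NAME}:
      replicaCount: 1
    EOF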
@@ -1,100 +1,28 @@
 #!/bin/bash

 set +x
 . /etc/sysconfig/heat-params

 set -ex

 step="metrics-server"
 printf "Starting to run ${step}\n"

 ### Configuration
 ###############################################################################
 CHART_NAME="metrics-server"

 if [ "$(echo ${METRICS_SERVER_ENABLED} | tr '[:upper:]' '[:lower:]')" = "true" ]; then
     echo "Writing ${CHART_NAME} config"

-    HELM_MODULE_CONFIG_FILE="/srv/magnum/kubernetes/helm/${CHART_NAME}.yaml"
-    [ -f ${HELM_MODULE_CONFIG_FILE} ] || {
-        echo "Writing File: ${HELM_MODULE_CONFIG_FILE}"
-        mkdir -p $(dirname ${HELM_MODULE_CONFIG_FILE})
-        cat << EOF > ${HELM_MODULE_CONFIG_FILE}
----
-kind: ConfigMap
-apiVersion: v1
-metadata:
-  name: ${CHART_NAME}-config
-  namespace: magnum-tiller
-  labels:
-    app: helm
-data:
-  install-${CHART_NAME}.sh: |
-    #!/bin/bash
-    set -e
-    set -x
-    mkdir -p \${HELM_HOME}
-    cp /etc/helm/* \${HELM_HOME}
+    HELM_CHART_DIR="/srv/magnum/kubernetes/helm/magnum"
+    mkdir -p ${HELM_CHART_DIR}

-    # HACK - Force wait because of bug https://github.com/helm/helm/issues/5170
-    until helm init --client-only --wait
-    do
-        sleep 5s
-    done
-    helm repo update

-    if [[ \$(helm history metrics-server | grep metrics-server) ]]; then
-        echo "${CHART_NAME} already installed on server. Continue..."
-        exit 0
-    else
-        helm install stable/${CHART_NAME} --namespace kube-system --name ${CHART_NAME} --version ${METRICS_SERVER_CHART_TAG} --values /opt/magnum/install-${CHART_NAME}-values.yaml
-    fi

-  install-${CHART_NAME}-values.yaml: |
-    image:
-      repository: ${CONTAINER_INFRA_PREFIX:-gcr.io/google_containers/}metrics-server-${ARCH}
-    args:
-    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
----

-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: install-${CHART_NAME}-job
-  namespace: magnum-tiller
-spec:
-  backoffLimit: 10
-  template:
-    spec:
-      serviceAccountName: tiller
-      containers:
-      - name: config-helm
-        image: ${CONTAINER_INFRA_PREFIX:-docker.io/openstackmagnum/}helm-client:${HELM_CLIENT_TAG}
-        command:
-        - bash
-        args:
-        - /opt/magnum/install-${CHART_NAME}.sh
-        env:
-        - name: HELM_HOME
-          value: /helm_home
-        - name: TILLER_NAMESPACE
-          value: magnum-tiller
-        - name: HELM_TLS_ENABLE
-          value: "true"
-        volumeMounts:
-        - name: install-${CHART_NAME}-config
-          mountPath: /opt/magnum/
-        - mountPath: /etc/helm
-          name: helm-client-certs
-      restartPolicy: Never
-      volumes:
-      - name: install-${CHART_NAME}-config
-        configMap:
-          name: ${CHART_NAME}-config
-      - name: helm-client-certs
-        secret:
-          secretName: helm-client-secret
+    cat << EOF >> ${HELM_CHART_DIR}/requirements.yaml
+- name: ${CHART_NAME}
+  version: ${METRICS_SERVER_CHART_TAG}
+  repository: https://kubernetes-charts.storage.googleapis.com/
+EOF
-    }

+    cat << EOF >> ${HELM_CHART_DIR}/values.yaml
+metrics-server:
+  image:
+    repository: ${CONTAINER_INFRA_PREFIX:-gcr.io/google_containers/}metrics-server-${ARCH}
+  args:
+  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
+EOF
 fi

 printf "Finished running ${step}\n"
140
magnum/drivers/common/templates/kubernetes/helm/prometheus-adapter.sh
Normal file → Executable file
140
magnum/drivers/common/templates/kubernetes/helm/prometheus-adapter.sh
Normal file → Executable file
@@ -1,123 +1,45 @@
 #!/bin/bash

 set +x
 . /etc/sysconfig/heat-params

 set -ex

 step="prometheus-adapter"
 printf "Starting to run ${step}\n"

 ### Configuration
-# This configuration is dependent on the helm installed prometheus-operator.
 ###############################################################################
+# This configuration depends on helm installed prometheus-operator.
 CHART_NAME="prometheus-adapter"


 if [ "$(echo ${MONITORING_ENABLED} | tr '[:upper:]' '[:lower:]')" = "true" ] && \
         [ "$(echo ${PROMETHEUS_ADAPTER_ENABLED} | tr '[:upper:]' '[:lower:]')" = "true" ]; then
     echo "Writing ${CHART_NAME} config"

-    HELM_MODULE_CONFIG_FILE="/srv/magnum/kubernetes/helm/${CHART_NAME}.yaml"
-    [ -f ${HELM_MODULE_CONFIG_FILE} ] || {
-        echo "Writing File: ${HELM_MODULE_CONFIG_FILE}"
-        mkdir -p $(dirname ${HELM_MODULE_CONFIG_FILE})
-        cat << EOF > ${HELM_MODULE_CONFIG_FILE}
----
-kind: ConfigMap
-apiVersion: v1
-metadata:
-  name: ${CHART_NAME}-config
-  namespace: magnum-tiller
-  labels:
-    app: helm
-data:
-  install-${CHART_NAME}.sh: |
-    #!/bin/bash
-    set -ex
-    mkdir -p \${HELM_HOME}
-    cp /etc/helm/* \${HELM_HOME}
+    HELM_CHART_DIR="/srv/magnum/kubernetes/helm/magnum"
+    mkdir -p ${HELM_CHART_DIR}

-    # HACK - Force wait because of bug https://github.com/helm/helm/issues/5170
-    until helm init --client-only --wait
-    do
-        sleep 5s
-    done
-    helm repo update

-    if [[ \$(helm history ${CHART_NAME} | grep ${CHART_NAME}) ]]; then
-        echo "${CHART_NAME} already installed on server. Continue..."
-        exit 0
-    else
-        # TODO: Set namespace to monitoring. This is needed as the Kubernetes default priorityClass can only be used in NS kube-system
-        helm install stable/${CHART_NAME} --namespace kube-system --name ${CHART_NAME} --version ${PROMETHEUS_ADAPTER_CHART_TAG} --values /opt/magnum/install-${CHART_NAME}-values.yaml
-    fi

-  install-${CHART_NAME}-values.yaml: |
-    image:
-      repository: ${CONTAINER_INFRA_PREFIX:-docker.io/directxman12/}k8s-prometheus-adapter-${ARCH}

-    priorityClassName: "system-cluster-critical"

-    prometheus:
-      url: http://web.tcp.prometheus-prometheus.kube-system.svc.cluster.local

-    resources:
-      requests:
-        cpu: 150m
-        memory: 400Mi

-    rules:
-      existing: ${PROMETHEUS_ADAPTER_CONFIGMAP}

-    # tls:
-    #   enable: true
-    #   ca: |-
-    #     # Public CA file that signed the APIService
-    #   key: |-
-    #     # Private key of the APIService
-    #   certificate: |-
-    #     # Public key of the APIService

----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: install-${CHART_NAME}-job
-  namespace: magnum-tiller
-spec:
-  backoffLimit: 10
-  template:
-    spec:
-      serviceAccountName: tiller
-      containers:
-      - name: config-helm
-        image: ${CONTAINER_INFRA_PREFIX:-docker.io/openstackmagnum/}helm-client:${HELM_CLIENT_TAG}
-        command:
-        - bash
-        args:
-        - /opt/magnum/install-${CHART_NAME}.sh
-        env:
-        - name: HELM_HOME
-          value: /helm_home
-        - name: TILLER_NAMESPACE
-          value: magnum-tiller
-        - name: HELM_TLS_ENABLE
-          value: "true"
-        volumeMounts:
-        - name: install-${CHART_NAME}-config
-          mountPath: /opt/magnum/
-        - mountPath: /etc/helm
-          name: helm-client-certs
-      restartPolicy: Never
-      volumes:
-      - name: install-${CHART_NAME}-config
-        configMap:
-          name: ${CHART_NAME}-config
-      - name: helm-client-certs
-        secret:
-          secretName: helm-client-secret
+    cat << EOF >> ${HELM_CHART_DIR}/requirements.yaml
+- name: ${CHART_NAME}
+  version: ${PROMETHEUS_ADAPTER_CHART_TAG}
+  repository: https://kubernetes-charts.storage.googleapis.com/
+EOF
-    }

+    cat << EOF >> ${HELM_CHART_DIR}/values.yaml
+prometheus-adapter:
+  image:
+    repository: ${CONTAINER_INFRA_PREFIX:-docker.io/directxman12/}k8s-prometheus-adapter-${ARCH}
+  priorityClassName: "system-cluster-critical"
+  prometheus:
+    url: http://web.tcp.prometheus-prometheus.kube-system.svc.cluster.local
+  resources:
+    requests:
+      cpu: 150m
+      memory: 400Mi
+  rules:
+    existing: ${PROMETHEUS_ADAPTER_CONFIGMAP}
+  # tls:
+  #   enable: true
+  #   ca: |-
+  #     # Public CA file that signed the APIService
+  #   key: |-
+  #     # Private key of the APIService
+  #   certificate: |-
+  #     # Public key of the APIService
+EOF
 fi

 printf "Finished running ${step}\n"
magnum/drivers/common/templates/kubernetes/helm/prometheus-operator.sh — 508 changes (Normal file → Executable file)
@@ -1,19 +1,23 @@
 #!/bin/bash

 set +x
 . /etc/sysconfig/heat-params

 set -ex

 step="prometheus-operator"
 printf "Starting to run ${step}\n"

 ### Configuration
 ###############################################################################
 CHART_NAME="prometheus-operator"

 if [ "$(echo ${MONITORING_ENABLED} | tr '[:upper:]' '[:lower:]')" = "true" ]; then
     echo "Writing ${CHART_NAME} config"

+    HELM_CHART_DIR="/srv/magnum/kubernetes/helm/magnum"
+    mkdir -p ${HELM_CHART_DIR}
+
+    cat << EOF >> ${HELM_CHART_DIR}/requirements.yaml
+- name: ${CHART_NAME}
+  version: ${PROMETHEUS_OPERATOR_CHART_TAG}
+  repository: https://kubernetes-charts.storage.googleapis.com/
+EOF

     #######################
     # Calculate resources needed to run the Prometheus Monitoring Solution
     # MAX_NODE_COUNT so we can have metrics even if cluster scales
     PROMETHEUS_SERVER_CPU=$(expr 128 + 7 \* ${MAX_NODE_COUNT} )
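A quick worked example of the sizing formula above, assuming MAX_NODE_COUNT=10 (the node count is illustrative):

    expr 128 + 7 \* 10   # prints 198, i.e. a 198m CPU request for the Prometheus server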
@@ -33,6 +37,184 @@ if [ "$(echo ${MONITORING_ENABLED} | tr '[:upper:]' '[:lower:]')" = "true" ]; then
         INSECURE_SKIP_VERIFY="True"
     fi

+    cat << EOF >> ${HELM_CHART_DIR}/values.yaml
+prometheus-operator:
+
+  defaultRules:
+    rules:
+      #TODO: To enable this we need firstly take care of exposing certs
+      etcd: false
+
+  alertmanager:
+    alertmanagerSpec:
+      image:
+        repository: ${CONTAINER_INFRA_PREFIX:-quay.io/prometheus/}alertmanager
+      # # Needs testing
+      # resources:
+      #   requests:
+      #     cpu: 100m
+      #     memory: 256Mi
+      priorityClassName: "system-cluster-critical"
+
+
+  # Dashboard
+  grafana:
+    #enabled: ${ENABLE_GRAFANA}
+    resources:
+      requests:
+        cpu: 100m
+        memory: 128Mi
+    adminPassword: ${GRAFANA_ADMIN_PASSWD}
+
+  kubeApiServer:
+    tlsConfig:
+      insecureSkipVerify: "False"
+
+  kubelet:
+    serviceMonitor:
+      https: "True"
+
+  kubeControllerManager:
+    ## If your kube controller manager is not deployed as a pod, specify IPs it can be found on
+    endpoints: ${KUBE_MASTERS_PRIVATE}
+    ## If using kubeControllerManager.endpoints only the port and targetPort are used
+    service:
+      port: 10252
+      targetPort: 10252
+      # selector:
+      #   component: kube-controller-manager
+    serviceMonitor:
+      ## Enable scraping kube-controller-manager over https.
+      ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
+      https: ${USE_HTTPS}
+      # Skip TLS certificate validation when scraping
+      insecureSkipVerify: null
+      # Name of the server to use when validating TLS certificate
+      serverName: null
+
+  coreDns:
+    enabled: true
+    service:
+      port: 9153
+      targetPort: 9153
+      selector:
+        k8s-app: kube-dns
+
+  kubeEtcd:
+    ## If your etcd is not deployed as a pod, specify IPs it can be found on
+    endpoints: ${KUBE_MASTERS_PRIVATE}
+    ## Etcd service. If using kubeEtcd.endpoints only the port and targetPort are used
+    service:
+      port: 2379
+      targetPort: 2379
+      # selector:
+      #   component: etcd
+    ## Configure secure access to the etcd cluster by loading a secret into prometheus and
+    ## specifying security configuration below. For example, with a secret named etcd-client-cert
+    serviceMonitor:
+      scheme: https
+      insecureSkipVerify: true
+      caFile: /etc/prometheus/secrets/etcd-certificates/ca.crt
+      certFile: /etc/prometheus/secrets/etcd-certificates/kubelet.crt
+      keyFile: /etc/prometheus/secrets/etcd-certificates/kubelet.key
+
+  kubeScheduler:
+    ## If your kube scheduler is not deployed as a pod, specify IPs it can be found on
+    endpoints: ${KUBE_MASTERS_PRIVATE}
+    ## If using kubeScheduler.endpoints only the port and targetPort are used
+    service:
+      port: 10251
+      targetPort: 10251
+      # selector:
+      #   component: kube-scheduler
+    serviceMonitor:
+      ## Enable scraping kube-scheduler over https.
+      ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
+      https: ${USE_HTTPS}
+      ## Skip TLS certificate validation when scraping
+      insecureSkipVerify: null
+      ## Name of the server to use when validating TLS certificate
+      serverName: null
+
+  # kubeProxy:
+  #   ## If your kube proxy is not deployed as a pod, specify IPs it can be found on
+  #   endpoints: [] # masters + minions
+  #   serviceMonitor:
+  #     ## Enable scraping kube-proxy over https.
+  #     ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
+  #     https: ${USE_HTTPS}
+
+  kube-state-metrics:
+    priorityClassName: "system-cluster-critical"
+    resources:
+      #Guaranteed
+      limits:
+        cpu: 50m
+        memory: 64M
+
+  prometheus-node-exporter:
+    priorityClassName: "system-node-critical"
+    resources:
+      #Guaranteed
+      limits:
+        cpu: 20m
+        memory: 20M
+    extraArgs:
+    - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
+    - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
+    sidecars: []
+    ## - name: nvidia-dcgm-exporter
+    ##   image: nvidia/dcgm-exporter:1.4.3
+
+  prometheusOperator:
+    priorityClassName: "system-cluster-critical"
+    tlsProxy:
+      image:
+        repository: ${CONTAINER_INFRA_PREFIX:-squareup/}ghostunnel
+    admissionWebhooks:
+      patch:
+        image:
+          repository: ${CONTAINER_INFRA_PREFIX:-jettech/}kube-webhook-certgen
+        priorityClassName: "system-cluster-critical"
+
+    resources: {}
+    # requests:
+    #   cpu: 5m
+    #   memory: 10Mi
+    image:
+      repository: ${CONTAINER_INFRA_PREFIX:-quay.io/coreos/}prometheus-operator
+    configmapReloadImage:
+      repository: ${CONTAINER_INFRA_PREFIX:-quay.io/coreos/}configmap-reload
+    prometheusConfigReloaderImage:
+      repository: ${CONTAINER_INFRA_PREFIX:-quay.io/coreos/}prometheus-config-reloader
+    hyperkubeImage:
+      repository: ${CONTAINER_INFRA_PREFIX:-k8s.gcr.io/}hyperkube
+
+  prometheus:
+    prometheusSpec:
+      scrapeInterval: 30s
+      evaluationInterval: 30s
+      image:
+        repository: ${CONTAINER_INFRA_PREFIX:-quay.io/prometheus/}prometheus
+      retention: 14d
+      externalLabels:
+        cluster_uuid: ${CLUSTER_UUID}
+      ## Secrets is a list of Secrets in the same namespace as the Prometheus object, which shall be mounted into the Prometheus Pods.
+      ## The Secrets are mounted into /etc/prometheus/secrets/. Secrets changes after initial creation of a Prometheus object are not
+      ## reflected in the running Pods. To change the secrets mounted into the Prometheus Pods, the object must be deleted and recreated
+      ## with the new list of secrets.
+      # secrets:
+      # - etcd-certificates
+      # - kube-controller-manager-certificates
+      # - kube-scheduler-certificates
+      # - kube-proxy-manager-certificates
+      resources:
+        requests:
+          cpu: ${PROMETHEUS_SERVER_CPU}m
+          memory: ${PROMETHEUS_SERVER_RAM}M
+      priorityClassName: "system-cluster-critical"
+EOF

     #######################
     # Set up definitions for ingress objects

@@ -41,296 +223,34 @@ if [ "$(echo ${MONITORING_ENABLED} | tr '[:upper:]' '[:lower:]')" = "true" ]; then
     if [ "${INGRESS_CONTROLLER}" == "nginx" ]; then
         :
     elif [ "${INGRESS_CONTROLLER}" == "traefik" ]; then
-        APP_ADDITIONAL_SERVICE_MONITORS=$(cat << EOF
-additionalServiceMonitors:
-- name: prometheus-traefik-metrics
-  selector:
-    matchLabels:
-      k8s-app: traefik
-  namespaceSelector:
-    matchNames:
-    - kube-system
-  endpoints:
-  - path: /metrics
-    port: metrics
+        cat << EOF >> ${HELM_CHART_DIR}/values.yaml
+  additionalServiceMonitors:
+  - name: prometheus-traefik-metrics
+    selector:
+      matchLabels:
+        k8s-app: traefik
+    namespaceSelector:
+      matchNames:
+      - kube-system
+    endpoints:
+    - path: /metrics
+      port: metrics
 EOF
-)
     fi #END INGRESS

-    if [ "$(echo ${AUTO_SCALING_ENABLED } | tr '[:upper:]' '[:lower:]')" == "true" ]; then
-        APP_ADDITIONAL_POD_MONITORS=$(cat << EOF
-additionalPodMonitors:
-- name: prometheus-cluster-autoscaler
-  podMetricsEndpoints:
-  - port: metrics
-    scheme: http
-  namespaceSelector:
-    matchNames:
-    - kube-system
-  selector:
-    matchLabels:
-      app: cluster-autoscaler
+    if [ "$(echo ${AUTO_SCALING_ENABLED} | tr '[:upper:]' '[:lower:]')" == "true" ]; then
+        cat << EOF >> ${HELM_CHART_DIR}/values.yaml
+  additionalPodMonitors:
+  - name: prometheus-cluster-autoscaler
+    podMetricsEndpoints:
+    - port: metrics
+      scheme: http
+    namespaceSelector:
+      matchNames:
+      - kube-system
+    selector:
+      matchLabels:
+        app: cluster-autoscaler
 EOF
-)
     fi #END AUTOSCALING

-    HELM_MODULE_CONFIG_FILE="/srv/magnum/kubernetes/helm/${CHART_NAME}.yaml"
-    [ -f ${HELM_MODULE_CONFIG_FILE} ] || {
-        echo "Writing File: ${HELM_MODULE_CONFIG_FILE}"
-        mkdir -p $(dirname ${HELM_MODULE_CONFIG_FILE})
-        cat << EOF > ${HELM_MODULE_CONFIG_FILE}
----
-kind: ConfigMap
-apiVersion: v1
-metadata:
-  name: ${CHART_NAME}-config
-  namespace: magnum-tiller
-  labels:
-    app: helm
-data:
-  install-${CHART_NAME}.sh: |
-    #!/bin/bash
-    set -ex
-    mkdir -p \${HELM_HOME}
-    cp /etc/helm/* \${HELM_HOME}

-    # HACK - Force wait because of bug https://github.com/helm/helm/issues/5170
-    until helm init --client-only --wait
-    do
-        sleep 5s
-    done
-    helm repo update

-    if [[ \$(helm history ${CHART_NAME} | grep ${CHART_NAME}) ]]; then
-        echo "${CHART_NAME} already installed on server. Continue..."
-        exit 0
-    else
-        # TODO: Set namespace to monitoring. This is needed as the Kubernetes default priorityClass can only be used in NS kube-system
-        helm install stable/${CHART_NAME} --namespace kube-system --name ${CHART_NAME} --version ${PROMETHEUS_OPERATOR_CHART_TAG} --values /opt/magnum/install-${CHART_NAME}-values.yaml
-    fi

-  install-${CHART_NAME}-values.yaml: |
-    ---
-    nameOverride: prometheus
-    fullnameOverride: prometheus

-    defaultRules:
-      rules:
-        #TODO: To enable this we need firstly take care of exposing certs
-        etcd: false

-    alertmanager:
-      alertmanagerSpec:
-        image:
-          repository: ${CONTAINER_INFRA_PREFIX:-quay.io/prometheus/}alertmanager
-        # # Needs testing
-        # resources:
-        #   requests:
-        #     cpu: 100m
-        #     memory: 256Mi
-        priorityClassName: "system-cluster-critical"


-    # Dashboard
-    grafana:
-      #enabled: ${ENABLE_GRAFANA}
-      resources:
-        requests:
-          cpu: 100m
-          memory: 128Mi
-      adminPassword: ${ADMIN_PASSWD}

-    kubeApiServer:
-      tlsConfig:
-        insecureSkipVerify: "False"

-    kubelet:
-      serviceMonitor:
-        https: "True"

-    kubeControllerManager:
-      ## If your kube controller manager is not deployed as a pod, specify IPs it can be found on
-      endpoints: ${KUBE_MASTERS_PRIVATE}
-      ## If using kubeControllerManager.endpoints only the port and targetPort are used
-      service:
-        port: 10252
-        targetPort: 10252
-        # selector:
-        #   component: kube-controller-manager
-      serviceMonitor:
-        ## Enable scraping kube-controller-manager over https.
-        ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
-        https: ${USE_HTTPS}
-        # Skip TLS certificate validation when scraping
-        insecureSkipVerify: null
-        # Name of the server to use when validating TLS certificate
-        serverName: null

-    coreDns:
-      enabled: true
-      service:
-        port: 9153
-        targetPort: 9153
-        selector:
-          k8s-app: kube-dns

-    kubeEtcd:
-      ## If your etcd is not deployed as a pod, specify IPs it can be found on
-      endpoints: ${KUBE_MASTERS_PRIVATE}
-      ## Etcd service. If using kubeEtcd.endpoints only the port and targetPort are used
-      service:
-        port: 2379
-        targetPort: 2379
-        # selector:
-        #   component: etcd
-      ## Configure secure access to the etcd cluster by loading a secret into prometheus and
-      ## specifying security configuration below. For example, with a secret named etcd-client-cert
-      serviceMonitor:
-        scheme: https
-        insecureSkipVerify: true
-        caFile: /etc/prometheus/secrets/etcd-certificates/ca.crt
-        certFile: /etc/prometheus/secrets/etcd-certificates/kubelet.crt
-        keyFile: /etc/prometheus/secrets/etcd-certificates/kubelet.key

-    kubeScheduler:
-      ## If your kube scheduler is not deployed as a pod, specify IPs it can be found on
-      endpoints: ${KUBE_MASTERS_PRIVATE}
-      ## If using kubeScheduler.endpoints only the port and targetPort are used
-      service:
-        port: 10251
-        targetPort: 10251
-        # selector:
-        #   component: kube-scheduler
-      serviceMonitor:
-        ## Enable scraping kube-scheduler over https.
-        ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
-        https: ${USE_HTTPS}
-        ## Skip TLS certificate validation when scraping
-        insecureSkipVerify: null
-        ## Name of the server to use when validating TLS certificate
-        serverName: null

-    # kubeProxy:
-    #   ## If your kube proxy is not deployed as a pod, specify IPs it can be found on
-    #   endpoints: [] # masters + minions
-    #   serviceMonitor:
-    #     ## Enable scraping kube-proxy over https.
-    #     ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
-    #     https: ${USE_HTTPS}

-    kube-state-metrics:
-      priorityClassName: "system-cluster-critical"
-      resources:
-        #Guaranteed
-        limits:
-          cpu: 50m
-          memory: 64M

-    prometheus-node-exporter:
-      priorityClassName: "system-node-critical"
-      resources:
-        #Guaranteed
-        limits:
-          cpu: 20m
-          memory: 20M
-      extraArgs:
-      - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
-      - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
-      sidecars: []
-      ## - name: nvidia-dcgm-exporter
-      ##   image: nvidia/dcgm-exporter:1.4.3

-    prometheusOperator:
-      priorityClassName: "system-cluster-critical"
-      tlsProxy:
-        image:
-          repository: ${CONTAINER_INFRA_PREFIX:-squareup/}ghostunnel
-      admissionWebhooks:
-        patch:
-          image:
-            repository: ${CONTAINER_INFRA_PREFIX:-jettech/}kube-webhook-certgen
-          priorityClassName: "system-cluster-critical"

-      resources: {}
-      # requests:
-      #   cpu: 5m
-      #   memory: 10Mi
-      image:
-        repository: ${CONTAINER_INFRA_PREFIX:-quay.io/coreos/}prometheus-operator
-      configmapReloadImage:
-        repository: ${CONTAINER_INFRA_PREFIX:-quay.io/coreos/}configmap-reload
-      prometheusConfigReloaderImage:
-        repository: ${CONTAINER_INFRA_PREFIX:-quay.io/coreos/}prometheus-config-reloader
-      hyperkubeImage:
-        repository: ${CONTAINER_INFRA_PREFIX:-k8s.gcr.io/}hyperkube

-    prometheus:
-      prometheusSpec:
-        scrapeInterval: 30s
-        evaluationInterval: 30s
-        image:
-          repository: ${CONTAINER_INFRA_PREFIX:-quay.io/prometheus/}prometheus
-        retention: 14d
-        externalLabels:
-          cluster_uuid: ${CLUSTER_UUID}
-        ## Secrets is a list of Secrets in the same namespace as the Prometheus object, which shall be mounted into the Prometheus Pods.
-        ## The Secrets are mounted into /etc/prometheus/secrets/. Secrets changes after initial creation of a Prometheus object are not
-        ## reflected in the running Pods. To change the secrets mounted into the Prometheus Pods, the object must be deleted and recreated
-        ## with the new list of secrets.
-        # secrets:
-        # - etcd-certificates
-        # - kube-controller-manager-certificates
-        # - kube-scheduler-certificates
-        # - kube-proxy-manager-certificates
-        resources:
-          requests:
-            cpu: ${PROMETHEUS_SERVER_CPU}m
-            memory: ${PROMETHEUS_SERVER_RAM}M
-        priorityClassName: "system-cluster-critical"
-    ${APP_ADDITIONAL_SERVICE_MONITORS}
-    ${APP_ADDITIONAL_POD_MONITORS}

----
-apiVersion: batch/v1
-kind: Job
-metadata:
-  name: install-${CHART_NAME}-job
-  namespace: magnum-tiller
-spec:
-  backoffLimit: 10
-  template:
-    spec:
-      serviceAccountName: tiller
-      containers:
-      - name: config-helm
-        image: ${CONTAINER_INFRA_PREFIX:-docker.io/openstackmagnum/}helm-client:${HELM_CLIENT_TAG}
-        command:
-        - bash
-        args:
-        - /opt/magnum/install-${CHART_NAME}.sh
-        env:
-        - name: HELM_HOME
-          value: /helm_home
-        - name: TILLER_NAMESPACE
-          value: magnum-tiller
-        - name: HELM_TLS_ENABLE
-          value: "true"
-        volumeMounts:
-        - name: install-${CHART_NAME}-config
-          mountPath: /opt/magnum/
-        - mountPath: /etc/helm
-          name: helm-client-certs
-      restartPolicy: Never
-      volumes:
-      - name: install-${CHART_NAME}-config
-        configMap:
-          name: ${CHART_NAME}-config
-      - name: helm-client-certs
-        secret:
-          secretName: helm-client-secret
-EOF
-    }

 fi

 printf "Finished running ${step}\n"
@@ -106,6 +106,7 @@ class K8sFedoraTemplateDefinition(k8s_template_def.K8sTemplateDefinition):
                            'tiller_enabled',
                            'tiller_tag',
                            'tiller_namespace',
+                           'helm_client_url', 'helm_client_sha256',
                            'helm_client_tag',
                            'traefik_ingress_controller_tag',
                            'node_problem_detector_tag',
@@ -727,17 +727,29 @@ parameters:
   tiller_tag:
     type: string
     description: tag of tiller container
-    default: "v2.12.3"
+    default: "v2.16.7"

   tiller_namespace:
     type: string
     description: namespace where tiller will be installed.
     default: "magnum-tiller"

+  helm_client_url:
+    type: string
+    description: url of helm client tarball
+    default: ""
+
+  helm_client_sha256:
+    type: string
+    description: sha256 of helm client tarball
+    default: "018f9908cb950701a5d59e757653a790c66d8eda288625dbb185354ca6f41f6b"
+
   helm_client_tag:
     type: string
-    description: tag of helm container
-    default: "dev"
+    description: >
+      release tag of helm client
+      https://github.com/helm/helm/releases
+    default: "v3.2.1"

   auto_healing_enabled:
     type: boolean
@@ -1119,7 +1131,6 @@ resources:
           metrics_server_enabled: {get_param: metrics_server_enabled}
           metrics_server_chart_tag: {get_param: metrics_server_chart_tag}
           prometheus_monitoring: {get_param: prometheus_monitoring}
-          grafana_admin_passwd: {get_param: grafana_admin_passwd}
           api_public_address: {get_attr: [api_lb, floating_address]}
           api_private_address: {get_attr: [api_lb, address]}
           ssh_key_name: {get_param: ssh_key_name}
@@ -1218,6 +1229,8 @@ resources:
           tiller_enabled: {get_param: tiller_enabled}
           tiller_tag: {get_param: tiller_tag}
           tiller_namespace: {get_param: tiller_namespace}
+          helm_client_url: {get_param: helm_client_url}
+          helm_client_sha256: {get_param: helm_client_sha256}
           helm_client_tag: {get_param: helm_client_tag}
           node_problem_detector_tag: {get_param: node_problem_detector_tag}
           nginx_ingress_controller_tag: {get_param: nginx_ingress_controller_tag}
@@ -1269,7 +1282,7 @@ resources:
           - str_replace:
               template: {get_file: ../../common/templates/kubernetes/fragments/enable-prometheus-monitoring.sh}
               params:
-                "$ADMIN_PASSWD": {get_param: grafana_admin_passwd}
+                "${GRAFANA_ADMIN_PASSWD}": {get_param: grafana_admin_passwd}
          - str_replace:
              params:
                $enable-ingress-traefik: {get_file: ../../common/templates/kubernetes/fragments/enable-ingress-traefik.sh}
@@ -1286,9 +1299,9 @@ resources:
           # Helm Based Installation Configuration Scripts
           - get_file: ../../common/templates/kubernetes/helm/metrics-server.sh
           - str_replace:
-             template: {get_file: ../../common/templates/kubernetes/helm/prometheus-operator.sh}
+              template: {get_file: ../../common/templates/kubernetes/helm/prometheus-operator.sh}
               params:
-                "${ADMIN_PASSWD}": {get_param: grafana_admin_passwd}
+                "${GRAFANA_ADMIN_PASSWD}": {get_param: grafana_admin_passwd}
                 "${KUBE_MASTERS_PRIVATE}": {get_attr: [kube_masters, kube_master_external_ip]}
           - get_file: ../../common/templates/kubernetes/helm/prometheus-adapter.sh
           - get_file: ../../common/templates/kubernetes/helm/ingress-nginx.sh
@@ -190,12 +190,6 @@ parameters:
     description: >
       whether or not to have prometheus and grafana deployed

-  grafana_admin_passwd:
-    type: string
-    hidden: true
-    description: >
-      admin user password for the Grafana monitoring interface
-
   api_public_address:
     type: string
     description: Public IP address of the Kubernetes master server.
@@ -502,9 +496,19 @@ parameters:
     type: string
     description: namespace where tiller will be installed

+  helm_client_url:
+    type: string
+    description: url of helm client tarball
+
+  helm_client_sha256:
+    type: string
+    description: sha256 of helm client tarball
+
   helm_client_tag:
     type: string
-    description: tag of helm container
+    description: >
+      release tag of helm client
+      https://github.com/helm/helm/releases

   auto_healing_enabled:
     type: boolean
@@ -799,6 +803,8 @@ resources:
             "$TILLER_ENABLED": {get_param: tiller_enabled}
             "$TILLER_TAG": {get_param: tiller_tag}
             "$TILLER_NAMESPACE": {get_param: tiller_namespace}
+            "$HELM_CLIENT_URL": {get_param: helm_client_url}
+            "$HELM_CLIENT_SHA256": {get_param: helm_client_sha256}
             "$HELM_CLIENT_TAG": {get_param: helm_client_tag}
             "$NODE_PROBLEM_DETECTOR_TAG": {get_param: node_problem_detector_tag}
             "$NGINX_INGRESS_CONTROLLER_TAG": {get_param: nginx_ingress_controller_tag}
@@ -732,22 +732,34 @@ parameters:
   tiller_enabled:
     type: boolean
     description: Choose whether to install tiller or not.
-    default: true
+    default: false

   tiller_tag:
     type: string
     description: tag of tiller container
-    default: "v2.12.3"
+    default: "v2.16.7"

   tiller_namespace:
     type: string
     description: namespace where tiller will be installed.
     default: "magnum-tiller"

+  helm_client_url:
+    type: string
+    description: url of helm client tarball
+    default: ""
+
+  helm_client_sha256:
+    type: string
+    description: sha256 of helm client tarball
+    default: "018f9908cb950701a5d59e757653a790c66d8eda288625dbb185354ca6f41f6b"
+
   helm_client_tag:
     type: string
-    description: tag of helm container
-    default: "dev"
+    description: >
+      release tag of helm client
+      https://github.com/helm/helm/releases
+    default: "v3.2.1"

   auto_healing_enabled:
     type: boolean
@@ -1153,7 +1165,6 @@ resources:
           metrics_server_enabled: {get_param: metrics_server_enabled}
           metrics_server_chart_tag: {get_param: metrics_server_chart_tag}
           prometheus_monitoring: {get_param: prometheus_monitoring}
-          grafana_admin_passwd: {get_param: grafana_admin_passwd}
           api_public_address: {get_attr: [api_lb, floating_address]}
           api_private_address: {get_attr: [api_lb, address]}
           ssh_key_name: {get_param: ssh_key_name}
@@ -1253,6 +1264,8 @@ resources:
           tiller_enabled: {get_param: tiller_enabled}
           tiller_tag: {get_param: tiller_tag}
           tiller_namespace: {get_param: tiller_namespace}
+          helm_client_url: {get_param: helm_client_url}
+          helm_client_sha256: {get_param: helm_client_sha256}
           helm_client_tag: {get_param: helm_client_tag}
           node_problem_detector_tag: {get_param: node_problem_detector_tag}
           nginx_ingress_controller_tag: {get_param: nginx_ingress_controller_tag}
@@ -1305,7 +1318,7 @@ resources:
           - str_replace:
               template: {get_file: ../../common/templates/kubernetes/fragments/enable-prometheus-monitoring.sh}
               params:
-                "$ADMIN_PASSWD": {get_param: grafana_admin_passwd}
+                "${GRAFANA_ADMIN_PASSWD}": {get_param: grafana_admin_passwd}
          - str_replace:
              params:
                $enable-ingress-traefik: {get_file: ../../common/templates/kubernetes/fragments/enable-ingress-traefik.sh}
@@ -1322,9 +1335,9 @@ resources:
           # Helm Based Installation Configuration Scripts
           - get_file: ../../common/templates/kubernetes/helm/metrics-server.sh
           - str_replace:
-             template: {get_file: ../../common/templates/kubernetes/helm/prometheus-operator.sh}
+              template: {get_file: ../../common/templates/kubernetes/helm/prometheus-operator.sh}
               params:
-                "${ADMIN_PASSWD}": {get_param: grafana_admin_passwd}
+                "${GRAFANA_ADMIN_PASSWD}": {get_param: grafana_admin_passwd}
                 "${KUBE_MASTERS_PRIVATE}": {get_attr: [kube_masters, kube_master_external_ip]}
           - get_file: ../../common/templates/kubernetes/helm/prometheus-adapter.sh
           - get_file: ../../common/templates/kubernetes/helm/ingress-nginx.sh
@@ -194,12 +194,6 @@ parameters:
     description: >
       whether or not to have prometheus and grafana deployed

-  grafana_admin_passwd:
-    type: string
-    hidden: true
-    description: >
-      admin user password for the Grafana monitoring interface
-
   api_public_address:
     type: string
     description: Public IP address of the Kubernetes master server.
@@ -506,9 +500,19 @@ parameters:
     type: string
     description: namespace where tiller will be installed

+  helm_client_url:
+    type: string
+    description: url of helm client tarball
+
+  helm_client_sha256:
+    type: string
+    description: sha256 of helm client tarball
+
   helm_client_tag:
     type: string
-    description: tag of helm container
+    description: >
+      release tag of helm client
+      https://github.com/helm/helm/releases

   auto_healing_enabled:
     type: boolean
@@ -810,6 +814,8 @@ resources:
             "$TILLER_ENABLED": {get_param: tiller_enabled}
             "$TILLER_TAG": {get_param: tiller_tag}
             "$TILLER_NAMESPACE": {get_param: tiller_namespace}
+            "$HELM_CLIENT_URL": {get_param: helm_client_url}
+            "$HELM_CLIENT_SHA256": {get_param: helm_client_sha256}
             "$HELM_CLIENT_TAG": {get_param: helm_client_tag}
             "$NODE_PROBLEM_DETECTOR_TAG": {get_param: node_problem_detector_tag}
             "$NGINX_INGRESS_CONTROLLER_TAG": {get_param: nginx_ingress_controller_tag}
@@ -594,6 +594,10 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
             'tiller_tag')
         tiller_namespace = mock_cluster.labels.get(
             'tiller_namespace')
+        helm_client_url = mock_cluster.labels.get(
+            'helm_client_url')
+        helm_client_sha256 = mock_cluster.labels.get(
+            'helm_client_sha256')
         helm_client_tag = mock_cluster.labels.get(
             'helm_client_tag')
         npd_tag = mock_cluster.labels.get('node_problem_detector_tag')
@@ -719,6 +723,8 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
             'tiller_enabled': tiller_enabled,
             'tiller_tag': tiller_tag,
             'tiller_namespace': tiller_namespace,
+            'helm_client_url': helm_client_url,
+            'helm_client_sha256': helm_client_sha256,
             'helm_client_tag': helm_client_tag,
             'node_problem_detector_tag': npd_tag,
             'auto_healing_enabled': auto_healing_enabled,
@@ -1108,6 +1114,10 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
             'tiller_tag')
         tiller_namespace = mock_cluster.labels.get(
             'tiller_namespace')
+        helm_client_url = mock_cluster.labels.get(
+            'helm_client_url')
+        helm_client_sha256 = mock_cluster.labels.get(
+            'helm_client_sha256')
         helm_client_tag = mock_cluster.labels.get(
             'helm_client_tag')
         npd_tag = mock_cluster.labels.get('node_problem_detector_tag')
@@ -1236,6 +1246,8 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
             'tiller_enabled': tiller_enabled,
             'tiller_tag': tiller_tag,
             'tiller_namespace': tiller_namespace,
+            'helm_client_url': helm_client_url,
+            'helm_client_sha256': helm_client_sha256,
             'helm_client_tag': helm_client_tag,
             'node_problem_detector_tag': npd_tag,
             'auto_healing_enabled': auto_healing_enabled,
releasenotes/notes/support-helm-v3-5c68eca89fc9446b.yaml — 19 additions (New file)
@@ -0,0 +1,19 @@
+---
+features:
+  - |
+    Support Helm v3 client to install helm charts. To use this feature, users
+    will need to use helm_client_tag>=v3.0.0 (default helm_client_tag=v3.2.1).
+    All the existing charts that used to depend on Helm v2, e.g. the nginx
+    ingress controller, metrics server, prometheus operator and prometheus
+    adapter, are now also installable using the v3 client. Also introduce
+    helm_client_sha256 and helm_client_url that users can specify to install a
+    non-default helm client version (https://github.com/helm/helm/releases).
+upgrade:
+  - |
+    Default tiller_tag is set to v2.16.7. The charts remain compatible but
+    helm_client_tag will also need to be set to the same value as tiller_tag,
+    i.e. v2.16.7. In this case, the user will also need to provide
+    helm_client_sha256 for the helm client binary intended for use.
+deprecations:
+  - |
+    Support for the Helm v2 client will be removed in the X release.
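A sketch of the pinning described in the upgrade note, for operators staying on Helm v2. The template name, image, and network are placeholders, and the sha256 value stands in for the checksum of the v2.16.7 client tarball, which the operator must supply:

    openstack coe cluster template create k8s-helm-v2 \
        --coe kubernetes \
        --image fedora-coreos-latest \
        --external-network public \
        --labels tiller_enabled=true,tiller_tag=v2.16.7,helm_client_tag=v2.16.7,helm_client_sha256=<sha256-of-helm-v2.16.7-linux-amd64.tar.gz>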