[k8s] Add nginx based ingress controller

Add an nginx-based Ingress controller for Kubernetes.

The use case is to better support scenarios that require either
L4 access or SSL passthrough, which lack proper support in Traefik.

Selection is done via the same label 'ingress_controller' with value
'nginx'. Deployment relies on the upstream nginx-ingress helm chart.

Change-Id: I1db2074fce9d43c03f479a6aaeb4f238d7101555
Story: 2005327
Task: 30255
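
For illustration only (the template name, image and network below are hypothetical
examples, not part of this change), a cluster template could opt in to the new
controller via labels:

    openstack coe cluster template create k8s-nginx-ingress \
        --coe kubernetes \
        --image fedora-atomic-latest \
        --external-network public \
        --labels ingress_controller=nginx,nginx_ingress_controller_tag=0.23.0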
Ricardo Rocha 2019-03-29 10:41:32 +01:00
parent cce93efa0e
commit 375fbccf58
8 changed files with 288 additions and 5 deletions


@ -354,6 +354,8 @@ the table are linked to more details elsewhere in the user guide.
+---------------------------------------+--------------------+---------------+
| `octavia_ingress_controller_tag`_ | see below | see below |
+---------------------------------------+--------------------+---------------+
| `nginx_ingress_controller_tag`_ | see below | see below |
+---------------------------------------+--------------------+---------------+
| `kubelet_options`_ | extra kubelet args | "" |
+---------------------------------------+--------------------+---------------+
| `kubeapi_options`_ | extra kubeapi args | "" |
@ -1306,10 +1308,10 @@ worker nodes security group. For example::
--dst-port 443:443
_`ingress_controller`
This label sets the Ingress Controller to be used. Currently 'traefik',
'nginx' and 'octavia' are supported. The default is '', meaning no Ingress
Controller is configured. For more details about octavia-ingress-controller
please refer to `cloud-provider-openstack document
<https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-octavia-ingress-controller.md>`_
_`ingress_controller_role`
@ -1326,6 +1328,9 @@ _`ingress_controller_role`
_`octavia_ingress_controller_tag`
The image tag for octavia-ingress-controller. Stein-default: 1.13.2-alpha
_`nginx_ingress_controller_tag`
The image tag for nginx-ingress-controller. Stein-default: 0.23.0
DNS
---


@ -89,3 +89,4 @@ write_files:
TILLER_TAG="$TILLER_TAG"
TILLER_NAMESPACE="$TILLER_NAMESPACE"
NODE_PROBLEM_DETECTOR_TAG="$NODE_PROBLEM_DETECTOR_TAG"
NGINX_INGRESS_CONTROLLER_TAG="$NGINX_INGRESS_CONTROLLER_TAG"


@ -0,0 +1,252 @@
#!/bin/bash

. /etc/sysconfig/heat-params

set -ex

step="nginx-ingress"
printf "Starting to run ${step}\n"

### Configuration
###############################################################################
CHART_NAME="nginx-ingress"
CHART_VERSION="1.4.0"

if [ "$(echo ${INGRESS_CONTROLLER} | tr '[:upper:]' '[:lower:]')" = "nginx" ]; then

HELM_MODULE_CONFIG_FILE="/srv/magnum/kubernetes/helm/${CHART_NAME}.yaml"
[ -f ${HELM_MODULE_CONFIG_FILE} ] || {
    echo "Writing File: ${HELM_MODULE_CONFIG_FILE}"
    mkdir -p $(dirname ${HELM_MODULE_CONFIG_FILE})
    cat << EOF > ${HELM_MODULE_CONFIG_FILE}
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: ${CHART_NAME}-config
  namespace: magnum-tiller
  labels:
    app: helm
data:
  install-${CHART_NAME}.sh: |
    #!/bin/bash
    set -e
    set -x
    mkdir -p \${HELM_HOME}
    cp /etc/helm/* \${HELM_HOME}

    # HACK - Force wait because of bug https://github.com/helm/helm/issues/5170
    until helm init --client-only --wait
    do
        sleep 5s
    done
    helm repo update

    if [[ \$(helm history ${CHART_NAME} | grep ${CHART_NAME}) ]]; then
        echo "${CHART_NAME} already installed on server. Continue..."
        exit 0
    else
        helm install stable/${CHART_NAME} --namespace kube-system --name ${CHART_NAME} --version v${CHART_VERSION} --values /opt/magnum/install-${CHART_NAME}-values.yaml
    fi

  install-${CHART_NAME}-values.yaml: |
    controller:
      name: controller
      image:
        repository: ${CONTAINER_INFRA_PREFIX:-quay.io/kubernetes-ingress-controller/}nginx-ingress-controller
        tag: ${NGINX_INGRESS_CONTROLLER_TAG}
        pullPolicy: IfNotPresent
        runAsUser: 33
      config: {}
      headers: {}
      hostNetwork: true
      dnsPolicy: ClusterFirst
      daemonset:
        useHostPort: true
        hostPorts:
          http: 80
          https: 443
          stats: 18080
      defaultBackendService: ""
      electionID: ingress-controller-leader
      ingressClass: nginx
      podLabels: {}
      publishService:
        enabled: false
        pathOverride: ""
      scope:
        enabled: false
        namespace: "" # defaults to .Release.Namespace
      extraArgs:
        enable-ssl-passthrough: ""
      extraEnvs: []
      kind: DaemonSet
      updateStrategy: {}
      minReadySeconds: 0
      tolerations: []
      affinity: {}
      nodeSelector:
        role: ${INGRESS_CONTROLLER_ROLE}
      livenessProbe:
        failureThreshold: 3
        initialDelaySeconds: 10
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
        port: 10254
      readinessProbe:
        failureThreshold: 3
        initialDelaySeconds: 10
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
        port: 10254
      podAnnotations: {}
      replicaCount: 1
      minAvailable: 1
      resources:
        limits:
          cpu: 100m
          memory: 64Mi
        requests:
          cpu: 100m
          memory: 64Mi
      autoscaling:
        enabled: false
      customTemplate:
        configMapName: ""
        configMapKey: ""
      service:
        annotations: {}
        labels: {}
        clusterIP: ""
        externalIPs: []
        loadBalancerIP: ""
        loadBalancerSourceRanges: []
        enableHttp: true
        enableHttps: true
        externalTrafficPolicy: ""
        healthCheckNodePort: 0
        targetPorts:
          http: http
          https: https
        type: NodePort
        nodePorts:
          http: "32080"
          https: "32443"
      extraContainers: []
      extraVolumeMounts: []
      extraVolumes: []
      extraInitContainers: []
      stats:
        enabled: false
        service:
          annotations: {}
          clusterIP: ""
          externalIPs: []
          loadBalancerIP: ""
          loadBalancerSourceRanges: []
          servicePort: 18080
          type: ClusterIP
      metrics:
        enabled: false
        service:
          annotations: {}
          clusterIP: ""
          externalIPs: []
          loadBalancerIP: ""
          loadBalancerSourceRanges: []
          servicePort: 9913
          type: ClusterIP
        serviceMonitor:
          enabled: false
          additionalLabels: {}
          namespace: ""
      lifecycle: {}
      priorityClassName: ""
    revisionHistoryLimit: 10
    defaultBackend:
      enabled: true
      name: default-backend
      image:
        repository: ${CONTAINER_INFRA_PREFIX:-k8s.gcr.io/}defaultbackend
        tag: "1.4"
        pullPolicy: IfNotPresent
      extraArgs: {}
      port: 8080
      tolerations: []
      affinity: {}
      podLabels: {}
      nodeSelector: {}
      podAnnotations: {}
      replicaCount: 1
      minAvailable: 1
      resources:
        limits:
          cpu: 10m
          memory: 20Mi
        requests:
          cpu: 10m
          memory: 20Mi
      service:
        annotations: {}
        clusterIP: ""
        externalIPs: []
        loadBalancerIP: ""
        loadBalancerSourceRanges: []
        servicePort: 80
        type: ClusterIP
      priorityClassName: ""
    rbac:
      create: true
    podSecurityPolicy:
      enabled: false
    serviceAccount:
      create: true
      name:
    imagePullSecrets: []
    tcp: {}
    udp: {}
---
apiVersion: batch/v1
kind: Job
metadata:
  name: install-${CHART_NAME}-job
  namespace: magnum-tiller
spec:
  backoffLimit: 5
  template:
    spec:
      serviceAccountName: tiller
      containers:
      - name: config-helm
        image: ${CONTAINER_INFRA_PREFIX:-docker.io/openstackmagnum/}helm-client:dev
        command:
        - bash
        args:
        - /opt/magnum/install-${CHART_NAME}.sh
        env:
        - name: HELM_HOME
          value: /helm_home
        - name: TILLER_NAMESPACE
          value: magnum-tiller
        - name: HELM_TLS_ENABLE
          value: "true"
        volumeMounts:
        - name: install-${CHART_NAME}-config
          mountPath: /opt/magnum/
        - mountPath: /etc/helm
          name: helm-client-certs
      restartPolicy: Never
      volumes:
      - name: install-${CHART_NAME}-config
        configMap:
          name: ${CHART_NAME}-config
      - name: helm-client-certs
        secret:
          secretName: helm-client-secret
EOF
}

fi

printf "Finished running ${step}\n"


@ -131,7 +131,8 @@ class K8sFedoraTemplateDefinition(k8s_template_def.K8sTemplateDefinition):
'tiller_enabled',
'tiller_tag',
'tiller_namespace',
'node_problem_detector_tag',
'nginx_ingress_controller_tag']
for label in label_list:
label_value = cluster.labels.get(label)


@ -570,6 +570,11 @@ parameters:
description: tag of the node problem detector container
default: v0.6.2
nginx_ingress_controller_tag:
type: string
description: nginx ingress controller docker image tag
default: 0.23.0
resources:
######################################################################
@ -846,6 +851,7 @@ resources:
tiller_tag: {get_param: tiller_tag}
tiller_namespace: {get_param: tiller_namespace}
node_problem_detector_tag: {get_param: node_problem_detector_tag}
nginx_ingress_controller_tag: {get_param: nginx_ingress_controller_tag}
kube_cluster_config:
type: OS::Heat::SoftwareConfig
@ -882,6 +888,7 @@ resources:
template: {get_file: ../../common/templates/kubernetes/helm/prometheus-operator.sh}
params:
"${ADMIN_PASSWD}": {get_param: grafana_admin_passwd}
- get_file: ../../common/templates/kubernetes/helm/ingress-nginx.sh
- get_file: ../../common/templates/kubernetes/fragments/install-helm-modules.sh
kube_cluster_deploy:


@ -443,6 +443,10 @@ parameters:
type: string
description: tag of the node problem detector container
nginx_ingress_controller_tag:
type: string
description: nginx ingress controller docker image tag
resources:
######################################################################
#
@ -555,6 +559,7 @@ resources:
"$TILLER_TAG": {get_param: tiller_tag} "$TILLER_TAG": {get_param: tiller_tag}
"$TILLER_NAMESPACE": {get_param: tiller_namespace} "$TILLER_NAMESPACE": {get_param: tiller_namespace}
"$NODE_PROBLEM_DETECTOR_TAG": {get_param: node_problem_detector_tag} "$NODE_PROBLEM_DETECTOR_TAG": {get_param: node_problem_detector_tag}
"$NGINX_INGRESS_CONTROLLER_TAG": {get_param: nginx_ingress_controller_tag}
install_openstack_ca:
type: OS::Heat::SoftwareConfig


@ -479,6 +479,8 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
'ingress_controller_role')
octavia_ingress_controller_tag = mock_cluster.labels.get(
'octavia_ingress_controller_tag')
nginx_ingress_controller_tag = mock_cluster.labels.get(
'nginx_ingress_controller_tag')
kubelet_options = mock_cluster.labels.get(
'kubelet_options')
kubeapi_options = mock_cluster.labels.get(
@ -562,6 +564,7 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
'ingress_controller': ingress_controller,
'ingress_controller_role': ingress_controller_role,
'octavia_ingress_controller_tag': octavia_ingress_controller_tag,
'nginx_ingress_controller_tag': nginx_ingress_controller_tag,
'octavia_enabled': False,
'kube_service_account_key': 'public_key',
'kube_service_account_private_key': 'private_key',
@ -852,6 +855,8 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
'ingress_controller_role')
octavia_ingress_controller_tag = mock_cluster.labels.get(
'octavia_ingress_controller_tag')
nginx_ingress_controller_tag = mock_cluster.labels.get(
'nginx_ingress_controller_tag')
kubelet_options = mock_cluster.labels.get(
'kubelet_options')
kubeapi_options = mock_cluster.labels.get(
@ -937,6 +942,7 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
'ingress_controller': ingress_controller,
'ingress_controller_role': ingress_controller_role,
'octavia_ingress_controller_tag': octavia_ingress_controller_tag,
'nginx_ingress_controller_tag': nginx_ingress_controller_tag,
'octavia_enabled': False,
'kube_service_account_key': 'public_key',
'kube_service_account_private_key': 'private_key',


@ -0,0 +1,6 @@
---
features:
  - |
    Add nginx as an additional Ingress controller option for Kubernetes.
    Installation is done via the upstream nginx-ingress helm chart, and
    selection can be done via label ingress_controller=nginx.
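
Once a cluster is created with ingress_controller=nginx, a quick sanity check
could look like the following (a sketch; the label selector follows the upstream
chart's defaults and may vary between chart versions):

    # Controller runs as a DaemonSet with host networking on nodes whose
    # 'role' matches the ingress_controller_role label (default: ingress).
    kubectl -n kube-system get daemonset -l app=nginx-ingress
    kubectl -n kube-system get pods -l app=nginx-ingress -o wide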