k8s_fedora: Deploy tiller

Add tiller_enabled label to install tiller in k8s_fedora_atomic
clusters. Defaults to false.

Add tiller_tag label to select the version of tiller. If the
tag is not set, the tag that matches the helm client version in
the heat-agent will be picked. The tiller image can be stored
in a private registry and the cluster can pull it using the
container_infra_prefix label.

Install tiller securely using a helper container.

TODO:

* add instructions on how RBAC is designed
  https://docs.helm.sh/using_helm/#example-deploy-tiller-in-a-namespace-restricted-to-deploying-resources-in-another-namespace
* add docs on how to install addons in the cluster using this tiller
* add docs on how users can get the creds to talk to tiller

NOTE:
This tiller deployment is intended for internal usage only.
Users can still deploy other tillers in other namespaces.

story: 2003902
task: 26780

Change-Id: I99d3a78085ba10030200f12bbfe58a72964e2326
Signed-off-by: dioguerra <dy090.guerra@gmail.com>
This commit is contained in:
Spyros Trigazis 2018-10-22 12:12:41 +02:00
parent 53e4b51e71
commit 0b5f4260d9
13 changed files with 359 additions and 4 deletions


@@ -375,6 +375,13 @@ the table are linked to more details elsewhere in the user guide.
+---------------------------------------+--------------------+---------------+
| `k8s_keystone_auth_tag`_ | see below | see below |
+---------------------------------------+--------------------+---------------+
| `tiller_enabled`_ | - true | false |
| | - false | |
+---------------------------------------+--------------------+---------------+
| `tiller_tag`_ | see below | "" |
+---------------------------------------+--------------------+---------------+
| `tiller_namespace`_ | see below | see below |
+---------------------------------------+--------------------+---------------+
Cluster
-------
@@ -1199,6 +1206,20 @@ _`k8s_keystone_auth_tag`
<https://hub.docker.com/r/k8scloudprovider/k8s-keystone-auth/tags/>`_.
Stein-default: 1.13.0
_`tiller_enabled`
If set to true, tiller will be deployed in the namespace set by the
tiller_namespace label (magnum-tiller by default). Defaults to false.
_`tiller_tag`
Add tiller_tag label to select the version of tiller. If the tag is not set,
the tag that matches the helm client version in the heat-agent will be
picked. The tiller image can be stored in a private registry and the
cluster can pull it using the container_infra_prefix label.
_`tiller_namespace`
Configure in which namespace tiller is going to be installed.
Default: magnum-tiller
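As a usage sketch (the cluster and template names here are placeholders; the label names are the ones documented above), the labels can be passed at cluster creation:

```shell
# Hypothetical invocation: "mycluster" and "k8s-atomic" are placeholder
# names, not part of this change.
openstack coe cluster create mycluster \
    --cluster-template k8s-atomic \
    --labels tiller_enabled=true,tiller_tag=v2.12.3,tiller_namespace=magnum-tiller
```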
External load balancer for services
-----------------------------------


@@ -17,6 +17,7 @@ RUN dnf -y --setopt=tsflags=nodocs install \
gcc \
kubernetes-client \
openssh-clients \
openssl \
python-devel \
python-pip \
python-psutil \


@@ -0,0 +1,15 @@
ARG HELM_VERSION=v2.12.0
FROM debian:sid-slim
ARG HELM_VERSION
RUN apt-get update \
&& apt-get install -y \
curl \
bash \
&& curl -o helm.tar.gz https://storage.googleapis.com/kubernetes-helm/helm-${HELM_VERSION}-linux-amd64.tar.gz \
&& mkdir -p helm \
&& tar zxvf helm.tar.gz -C helm \
&& cp helm/linux-amd64/helm /usr/local/bin \
&& chmod +x /usr/local/bin/helm \
&& rm -rf helm*


@@ -0,0 +1,231 @@
#!/bin/bash
. /etc/sysconfig/heat-params
set -x
if [ "$(echo ${TILLER_ENABLED} | tr '[:upper:]' '[:lower:]')" != "true" ]; then
exit 0
fi
CERTS_DIR="/etc/kubernetes/helm/certs/"
mkdir -p "${CERTS_DIR}"
# Private CA key
openssl genrsa -out "${CERTS_DIR}/ca.key.pem" 4096
# CA public cert
openssl req -key "${CERTS_DIR}/ca.key.pem" -new -x509 -days 7300 -sha256 -out "${CERTS_DIR}/ca.cert.pem" -extensions v3_ca -subj "/C=US/ST=Texas/L=Austin/O=OpenStack/OU=Magnum/CN=tiller"
# Private tiller-server key
openssl genrsa -out "${CERTS_DIR}/tiller.key.pem" 4096
# Private helm-client key
openssl genrsa -out "${CERTS_DIR}/helm.key.pem" 4096
# Request for tiller-server cert
openssl req -key "${CERTS_DIR}/tiller.key.pem" -new -sha256 -out "${CERTS_DIR}/tiller.csr.pem" -subj "/C=US/ST=Texas/L=Austin/O=OpenStack/OU=Magnum/CN=tiller-server"
# Request for helm-client cert
openssl req -key "${CERTS_DIR}/helm.key.pem" -new -sha256 -out "${CERTS_DIR}/helm.csr.pem" -subj "/C=US/ST=Texas/L=Austin/O=OpenStack/OU=Magnum/CN=helm-client"
# Sign tiller-server cert
openssl x509 -req -CA "${CERTS_DIR}/ca.cert.pem" -CAkey "${CERTS_DIR}/ca.key.pem" -CAcreateserial -in "${CERTS_DIR}/tiller.csr.pem" -out "${CERTS_DIR}/tiller.cert.pem" -days 365
# Sign helm-client cert
openssl x509 -req -CA "${CERTS_DIR}/ca.cert.pem" -CAkey "${CERTS_DIR}/ca.key.pem" -CAcreateserial -in "${CERTS_DIR}/helm.csr.pem" -out "${CERTS_DIR}/helm.cert.pem" -days 365
_tiller_prefix=${CONTAINER_INFRA_PREFIX:-gcr.io/kubernetes-helm/}
TILLER_RBAC=/srv/magnum/kubernetes/manifests/tiller-rbac.yaml
TILLER_DEPLOYER=/srv/magnum/kubernetes/manifests/deploy-tiller.yaml
TILLER_IMAGE="${_tiller_prefix}tiller:${TILLER_TAG}"
[ -f ${TILLER_RBAC} ] || {
echo "Writing File: $TILLER_RBAC"
mkdir -p $(dirname ${TILLER_RBAC})
cat << EOF > ${TILLER_RBAC}
---
apiVersion: v1
kind: Namespace
metadata:
name: ${TILLER_NAMESPACE}
---
# Tiller service account
apiVersion: v1
kind: ServiceAccount
metadata:
name: tiller
namespace: ${TILLER_NAMESPACE}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: tiller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: tiller
namespace: ${TILLER_NAMESPACE}
EOF
}
[ -f ${TILLER_DEPLOYER} ] || {
echo "Writing File: $TILLER_DEPLOYER"
mkdir -p $(dirname ${TILLER_DEPLOYER})
cat << EOF > ${TILLER_DEPLOYER}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: helm
name: tiller
name: tiller-deploy
namespace: ${TILLER_NAMESPACE}
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: helm
name: tiller
spec:
automountServiceAccountToken: true
containers:
- env:
- name: TILLER_NAMESPACE
value: ${TILLER_NAMESPACE}
- name: TILLER_HISTORY_MAX
value: "0"
- name: TILLER_TLS_VERIFY
value: "1"
- name: TILLER_TLS_ENABLE
value: "1"
- name: TILLER_TLS_CERTS
value: /etc/certs
image: ${TILLER_IMAGE}
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
path: /liveness
port: 44135
initialDelaySeconds: 1
timeoutSeconds: 1
name: tiller
ports:
- containerPort: 44134
name: tiller
- containerPort: 44135
name: http
readinessProbe:
httpGet:
path: /readiness
port: 44135
initialDelaySeconds: 1
timeoutSeconds: 1
resources: {}
volumeMounts:
- mountPath: /etc/certs
name: tiller-certs
readOnly: true
serviceAccountName: tiller
tolerations:
# make runnable on master nodes
- key: dedicated
value: master
effect: NoSchedule
- key: CriticalAddonsOnly
value: "True"
effect: NoSchedule
# run only on master nodes
nodeSelector:
node-role.kubernetes.io/master: ""
volumes:
- name: tiller-certs
secret:
secretName: tiller-secret
status: {}
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: helm
name: tiller
name: tiller-deploy
namespace: ${TILLER_NAMESPACE}
spec:
ports:
- name: tiller
port: 44134
targetPort: tiller
selector:
app: helm
name: tiller
type: ClusterIP
status:
loadBalancer: {}
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
creationTimestamp: null
labels:
app: helm
name: tiller
name: tiller-secret
namespace: ${TILLER_NAMESPACE}
data:
ca.crt: $(cat "${CERTS_DIR}/ca.cert.pem" | base64 --wrap=0)
tls.crt: $(cat "${CERTS_DIR}/tiller.cert.pem" | base64 --wrap=0)
tls.key: $(cat "${CERTS_DIR}/tiller.key.pem" | base64 --wrap=0)
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
creationTimestamp: null
labels:
app: helm
name: tiller-ca-key
name: tiller-ca-key
namespace: ${TILLER_NAMESPACE}
data:
ca.key.pem: $(cat "${CERTS_DIR}/ca.key.pem" | base64 --wrap=0)
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
creationTimestamp: null
labels:
app: helm
name: helm-client
name: helm-client-secret
namespace: ${TILLER_NAMESPACE}
data:
ca.pem: $(cat "${CERTS_DIR}/ca.cert.pem" | base64 --wrap=0)
cert.pem: $(cat "${CERTS_DIR}/helm.cert.pem" | base64 --wrap=0)
key.pem: $(cat "${CERTS_DIR}/helm.key.pem" | base64 --wrap=0)
EOF
}
echo "Waiting for Kubernetes API..."
until [ "ok" = "$(curl --silent http://127.0.0.1:8080/healthz)" ]
do
sleep 5
done
kubectl apply -f ${TILLER_RBAC}
kubectl apply -f ${TILLER_DEPLOYER}
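The certificate handling above can be exercised on its own. Below is a minimal sketch of the same generate/sign flow in a scratch directory (smaller keys, short lifetimes, and trimmed subjects are assumptions for brevity), checking that a cert signed this way validates against the private CA:

```shell
#!/bin/bash
# Sketch only: mirrors the CA -> CSR -> signed-cert flow from
# enable-helm-tiller.sh with throwaway parameters.
set -e
SCRATCH=$(mktemp -d)
# Private CA key and self-signed CA cert
openssl genrsa -out "${SCRATCH}/ca.key.pem" 2048
openssl req -key "${SCRATCH}/ca.key.pem" -new -x509 -days 1 -sha256 \
    -out "${SCRATCH}/ca.cert.pem" -subj "/CN=tiller"
# Server key, CSR, and CA-signed cert
openssl genrsa -out "${SCRATCH}/tiller.key.pem" 2048
openssl req -key "${SCRATCH}/tiller.key.pem" -new -sha256 \
    -out "${SCRATCH}/tiller.csr.pem" -subj "/CN=tiller-server"
openssl x509 -req -CA "${SCRATCH}/ca.cert.pem" -CAkey "${SCRATCH}/ca.key.pem" \
    -CAcreateserial -in "${SCRATCH}/tiller.csr.pem" \
    -out "${SCRATCH}/tiller.cert.pem" -days 1
# The signed server cert must chain back to the private CA
openssl verify -CAfile "${SCRATCH}/ca.cert.pem" "${SCRATCH}/tiller.cert.pem"
```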


@@ -83,3 +83,6 @@ write_files:
K8S_KEYSTONE_AUTH_TAG="$K8S_KEYSTONE_AUTH_TAG"
PROJECT_ID="$PROJECT_ID"
EXTERNAL_NETWORK_ID="$EXTERNAL_NETWORK_ID"
TILLER_ENABLED="$TILLER_ENABLED"
TILLER_TAG="$TILLER_TAG"
TILLER_NAMESPACE="$TILLER_NAMESPACE"


@@ -115,7 +115,10 @@ class K8sFedoraTemplateDefinition(k8s_template_def.K8sTemplateDefinition):
'prometheus_tag',
'grafana_tag',
'heat_container_agent_tag',
'keystone_auth_enabled', 'k8s_keystone_auth_tag']
'keystone_auth_enabled', 'k8s_keystone_auth_tag',
'tiller_enabled',
'tiller_tag',
'tiller_namespace']
for label in label_list:
label_value = cluster.labels.get(label)


@@ -540,6 +540,21 @@ parameters:
description: >
project id of current project
tiller_enabled:
type: boolean
description: Choose whether to install tiller or not.
default: false
tiller_tag:
type: string
description: tag of tiller container
default: "v2.12.3"
tiller_namespace:
type: string
description: namespace where tiller will be installed.
default: "magnum-tiller"
resources:
######################################################################
@@ -773,6 +788,9 @@ resources:
keystone_auth_enabled: {get_param: keystone_auth_enabled}
k8s_keystone_auth_tag: {get_param: k8s_keystone_auth_tag}
project_id: {get_param: project_id}
tiller_enabled: {get_param: tiller_enabled}
tiller_tag: {get_param: tiller_tag}
tiller_namespace: {get_param: tiller_namespace}
kube_cluster_config:
type: OS::Heat::SoftwareConfig
@@ -788,11 +806,12 @@ resources:
params:
"$CA_KEY": {get_param: ca_key}
- get_file: ../../common/templates/kubernetes/fragments/core-dns-service.sh
- get_file: ../../common/templates/kubernetes/fragments/calico-service.sh
- get_file: ../../common/templates/kubernetes/fragments/enable-helm-tiller.sh
- str_replace:
template: {get_file: ../../common/templates/kubernetes/fragments/enable-prometheus-monitoring.sh}
params:
"$ADMIN_PASSWD": {get_param: grafana_admin_passwd}
- get_file: ../../common/templates/kubernetes/fragments/calico-service.sh
- str_replace:
params:
$enable-ingress-traefik: {get_file: ../../common/templates/kubernetes/fragments/enable-ingress-traefik.sh}


@@ -418,6 +418,18 @@ parameters:
description: >
project id of current project
tiller_enabled:
type: string
description: Whether to enable tiller or not
tiller_tag:
type: string
description: tag of tiller container
tiller_namespace:
type: string
description: namespace where tiller will be installed
resources:
######################################################################
#
@@ -524,6 +536,9 @@ resources:
"$K8S_KEYSTONE_AUTH_TAG": {get_param: k8s_keystone_auth_tag}
"$PROJECT_ID": {get_param: project_id}
"$EXTERNAL_NETWORK_ID": {get_param: external_network}
"$TILLER_ENABLED": {get_param: tiller_enabled}
"$TILLER_TAG": {get_param: tiller_tag}
"$TILLER_NAMESPACE": {get_param: tiller_namespace}
install_openstack_ca:
type: OS::Heat::SoftwareConfig


@@ -418,6 +418,12 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
k8s_keystone_auth_tag = mock_cluster.labels.get(
'k8s_keystone_auth_tag')
project_id = mock_cluster.project_id
tiller_enabled = mock_cluster.labels.get(
'tiller_enabled')
tiller_tag = mock_cluster.labels.get(
'tiller_tag')
tiller_namespace = mock_cluster.labels.get(
'tiller_namespace')
k8s_def = k8sa_tdef.AtomicK8sTemplateDefinition()
@@ -474,7 +480,10 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
'keystone_auth_enabled': keystone_auth_enabled,
'k8s_keystone_auth_tag': k8s_keystone_auth_tag,
'project_id': project_id,
'external_network': external_network_id
'external_network': external_network_id,
'tiller_enabled': tiller_enabled,
'tiller_tag': tiller_tag,
'tiller_namespace': tiller_namespace,
}}
mock_get_params.assert_called_once_with(mock_context,
mock_cluster_template,
@@ -775,6 +784,12 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
k8s_keystone_auth_tag = mock_cluster.labels.get(
'k8s_keystone_auth_tag')
project_id = mock_cluster.project_id
tiller_enabled = mock_cluster.labels.get(
'tiller_enabled')
tiller_tag = mock_cluster.labels.get(
'tiller_tag')
tiller_namespace = mock_cluster.labels.get(
'tiller_namespace')
k8s_def = k8sa_tdef.AtomicK8sTemplateDefinition()
@@ -833,7 +848,10 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
'keystone_auth_enabled': keystone_auth_enabled,
'k8s_keystone_auth_tag': k8s_keystone_auth_tag,
'project_id': project_id,
'external_network': external_network_id
'external_network': external_network_id,
'tiller_enabled': tiller_enabled,
'tiller_tag': tiller_tag,
'tiller_namespace': tiller_namespace,
}}
mock_get_params.assert_called_once_with(mock_context,
mock_cluster_template,


@@ -14,3 +14,5 @@ kubernetes_images:
magnum_images:
- name: heat-container-agent
tag: stein-dev
helm_version: v2.12.3


@@ -51,3 +51,14 @@
push: no
with_items: "{{ kubernetes_images }}"
retries: 10
- name: "Build helm-client image"
block:
- docker_image:
path: "{{ magnum_src_dir }}/dockerfiles/helm-client"
name: "{{ magnum_repository }}/helm-client"
tag: "{{ helm_version }}"
buildargs:
HELM_VERSION: "{{ helm_version }}"
push: no
retries: 10


@@ -19,3 +19,5 @@
- command: docker push {{ magnum_repository }}/{{ item.name }}:{{ kubernetes_version_v1_13 }}
with_items: "{{ kubernetes_images }}"
retries: 10
- command: docker push {{ magnum_repository }}/helm-client:{{ helm_version }}
retries: 10


@@ -0,0 +1,14 @@
---
features:
- |
Add tiller_enabled label to install tiller in k8s_fedora_atomic
clusters. Defaults to false. Add tiller_tag label to select the
version of tiller. If the tag is not set, the tag that matches the helm
client version in the heat-agent will be picked. The tiller image can
be stored in a private registry and the cluster can pull it using the
container_infra_prefix label. Add tiller_namespace label to select in
which namespace to install tiller. Tiller is installed with a Kubernetes
job. This job runs a container that includes the helm client. This
image is maintained by the magnum team and lives in
docker.io/openstackmagnum/helm-client. This container follows the same
versioning as helm and tiller.
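As a usage sketch for talking to this tiller (assuming kubectl access to the cluster, a helm v2 client, and the default magnum-tiller namespace), the client credentials written to helm-client-secret by enable-helm-tiller.sh can be extracted and passed to helm's TLS flags:

```shell
# Sketch only: pull the helm client TLS material created by
# enable-helm-tiller.sh and verify connectivity to the internal tiller.
NS=magnum-tiller
DEST=$(mktemp -d)
kubectl -n "${NS}" get secret helm-client-secret \
    -o jsonpath='{.data.ca\.pem}'   | base64 -d > "${DEST}/ca.pem"
kubectl -n "${NS}" get secret helm-client-secret \
    -o jsonpath='{.data.cert\.pem}' | base64 -d > "${DEST}/cert.pem"
kubectl -n "${NS}" get secret helm-client-secret \
    -o jsonpath='{.data.key\.pem}'  | base64 -d > "${DEST}/key.pem"
# helm v2 prints both client and server versions only if TLS succeeds
helm version --tiller-namespace "${NS}" --tls \
    --tls-ca-cert "${DEST}/ca.pem" \
    --tls-cert "${DEST}/cert.pem" \
    --tls-key "${DEST}/key.pem"
```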