Support calico as network driver

Add calico as a Kubernetes network driver to support the Kubernetes
network policy feature, which is important for production k8s use.
See [1] and [2] for more information about k8s network policy; for
Calico, refer to [3] and [4].

[1] https://kubernetes.io/docs/concepts/services-networking/network-policies/
[2] http://blog.kubernetes.io/2017/10/enforcing-network-policies-in-kubernetes.html
[3] https://www.projectcalico.org/calico-network-policy-comes-to-kubernetes/
[4] https://cloudplatform.googleblog.com/2017/09/network-policy-support-for-kubernetes-with-calico.html
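To illustrate the feature this enables (not part of this change), a minimal Kubernetes NetworkPolicy that an operator could apply once Calico is the network driver might look like the following; the policy name and pod labels are hypothetical:

```yaml
# Hypothetical example: only pods labelled role=frontend may reach pods
# in the "default" namespace. Without a policy-capable driver such as
# Calico, NetworkPolicy objects are accepted but not enforced.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
```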

Closes-Bug: #1746379

Change-Id: I135a46cd32a67d73d8e64ac5bbc4debfae6c4568
Feilong Wang 2018-02-03 00:57:27 +13:00 committed by Fei Long Wang
parent 5a34d7d830
commit 838b8daf6e
12 changed files with 629 additions and 16 deletions


@ -176,7 +176,7 @@ They are loosely grouped as: mandatory, infrastructure, COE specific.
=========== ================= ========
COE         Network-Driver    Default
=========== ================= ========
Kubernetes  flannel, calico   flannel
Swarm       docker, flannel   flannel
Mesos       docker            docker
=========== ================= ========
@ -2018,13 +2018,15 @@ network-driver
The network driver name for instantiating container networks.
Currently, the following network drivers are supported:

+--------+-------------+-------------+-------------+
| Driver | Kubernetes  | Swarm       | Mesos       |
+========+=============+=============+=============+
| Flannel| supported   | supported   | unsupported |
+--------+-------------+-------------+-------------+
| Docker | unsupported | supported   | supported   |
+--------+-------------+-------------+-------------+
| Calico | supported   | unsupported | unsupported |
+--------+-------------+-------------+-------------+

If not specified, the default driver is Flannel for Kubernetes, and
Docker for Swarm and Mesos.
@ -2060,6 +2062,26 @@ _`flannel_backend`
is not specified in the ClusterTemplate, *host-gw* is the best choice for
the Flannel backend.
When Calico is specified as the network driver, the following
optional labels can be added:

_`calico_ipv4pool`
  IPv4 network in CIDR format which is the IP pool from which Pod IPs
  will be chosen. If not specified, the default is 192.168.0.0/16.

_`calico_tag`
  Tag of the calico containers used to provision the calico node.

_`calico_cni_tag`
  Tag of the cni used to provision the calico node.

_`calico_kube_controllers_tag`
  Tag of the kube_controllers used to provision the calico node.

In addition, the Calico network driver requires kube_tag v1.9.3 or later,
because Calico needs extra mounts for the kubelet container. See `commit
<https://github.com/projectatomic/atomic-system-containers/commit/54ab8abc7fa1bfb6fa674f55cd0c2fa0c812fd36>`_
of atomic-system-containers for more information.
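For example, a ClusterTemplate using Calico could be created as below; the label names match this change, while the image, keypair, network and flavor values are placeholders for an existing deployment:

```
openstack coe cluster template create k8s-calico-template \
    --image fedora-atomic-latest \
    --keypair testkey \
    --external-network public \
    --dns-nameserver 8.8.8.8 \
    --flavor m1.small \
    --network-driver calico \
    --coe kubernetes \
    --labels kube_tag=v1.9.3,calico_ipv4pool=192.168.0.0/16
```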
High Availability
=================


@ -285,7 +285,7 @@ class Validator(object):
class K8sValidator(Validator):
    supported_network_drivers = ['flannel', 'calico']
    supported_server_types = ['vm', 'bm']
    allowed_network_drivers = (
        CONF.cluster_template.kubernetes_allowed_network_drivers)
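The validator change above can be summarized by a small standalone sketch (hypothetical names, not Magnum's actual classes): a requested driver must be in the COE's supported list and, if the operator configured an allow-list, in that list too:

```python
# Sketch of the two-level network-driver check performed by K8sValidator.
SUPPORTED_NETWORK_DRIVERS = ['flannel', 'calico']

def is_allowed_driver(driver, allowed_drivers=None):
    """Return True if `driver` may be used for a Kubernetes cluster."""
    if driver not in SUPPORTED_NETWORK_DRIVERS:
        return False
    # An empty/unset operator allow-list means "no extra restriction".
    return not allowed_drivers or driver in allowed_drivers
```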


@ -0,0 +1,450 @@
#!/bin/sh
. /etc/sysconfig/heat-params
if [ "$NETWORK_DRIVER" != "calico" ]; then
    exit 0
fi
_prefix=${CONTAINER_INFRA_PREFIX:-quay.io/calico/}
ETCD_SERVER_IP=${ETCD_LB_VIP:-$KUBE_NODE_IP}
CERT_DIR=/etc/kubernetes/certs
ETCD_CA=$(base64 < "${CERT_DIR}/ca.crt" | tr -d '\n')
ETCD_CERT=$(base64 < "${CERT_DIR}/server.crt" | tr -d '\n')
ETCD_KEY=$(base64 < "${CERT_DIR}/server.key" | tr -d '\n')
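These variables hold single-line base64 copies of the etcd TLS files, which is the encoding Kubernetes expects for values in a Secret's `data` map. A Python equivalent of the `base64 | tr -d '\n'` pipeline, for illustration:

```python
import base64

def encode_secret_value(pem_bytes):
    # b64encode emits no newlines, matching `base64 | tr -d '\n'` above.
    return base64.b64encode(pem_bytes).decode('ascii')
```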
CALICO_DEPLOY=/srv/magnum/kubernetes/manifests/calico-deploy.yaml
[ -f ${CALICO_DEPLOY} ] || {
    echo "Writing File: $CALICO_DEPLOY"
    mkdir -p $(dirname ${CALICO_DEPLOY})
    cat << EOF > ${CALICO_DEPLOY}
# Calico Version v2.6.7
# https://docs.projectcalico.org/v2.6/releases#v2.6.7
# This manifest includes the following component versions:
#   calico/node:v2.6.7
#   calico/cni:v1.11.2
#   calico/kube-controllers:v1.0.3

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://${ETCD_SERVER_IP}:2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
        "name": "k8s-pod-network",
        "cniVersion": "0.1.0",
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "etcd_key_file": "__ETCD_KEY_FILE__",
        "etcd_cert_file": "__ETCD_CERT_FILE__",
        "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
        "log_level": "info",
        "mtu": 1500,
        "ipam": {
            "type": "calico-ipam"
        },
        "policy": {
            "type": "k8s",
            "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
            "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
        },
        "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
        }
    }

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"

---

# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following files with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # This self-hosted install expects three files with the following names. The values
  # should be base64 encoded strings of the entire contents of each file.
  etcd-key: ${ETCD_KEY}
  etcd-cert: ${ETCD_CERT}
  etcd-ca: ${ETCD_CA}

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      hostNetwork: true
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      containers:
        # Runs calico/node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: ${_prefix}node:${CALICO_TAG}
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Disable file logging so 'kubectl logs' works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Configure the IP Pool from which Pod IPs will be chosen.
            - name: CALICO_IPV4POOL_CIDR
              value: ${CALICO_IPV4POOL}
            - name: CALICO_IPV4POOL_IPIP
              value: "off"
            - name: CALICO_IPV4POOL_NAT_OUTGOING
              value: "true"
            # Set noderef for node controller.
            - name: CALICO_K8S_NODE_REF
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              value: "1440"
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Auto-detect the BGP IP address.
            - name: IP
              value: ""
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /readiness
              port: 9099
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: ${_prefix}cni:${CALICO_CNI_TAG}
          command: ["/install-cni.sh"]
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets

---

# This manifest deploys the Calico Kubernetes controllers.
# See https://github.com/projectcalico/kube-controllers
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
    scheduler.alpha.kubernetes.io/tolerations: |
      [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
       {"key":"CriticalAddonsOnly", "operator":"Exists"}]
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      # The controllers must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      serviceAccountName: calico-kube-controllers
      containers:
        - name: calico-kube-controllers
          image: ${_prefix}kube-controllers:${CALICO_KUBE_CONTROLLERS_TAG}
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: policy,profile,workloadendpoint,node
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets

---

# This deployment turns off the old "policy-controller". It should remain at 0 replicas, and then
# be removed entirely once the new kube-controllers deployment has been deployed above.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-policy-controller
  namespace: kube-system
  labels:
    k8s-app: calico-policy
spec:
  # Turn this deployment off in favor of the kube-controllers deployment above.
  replicas: 0
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy
    spec:
      hostNetwork: true
      serviceAccountName: calico-kube-controllers
      containers:
        - name: calico-policy-controller
          image: ${_prefix}kube-controllers:${CALICO_KUBE_CONTROLLERS_TAG}
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

# Calico Version v2.6.7
# https://docs.projectcalico.org/v2.6/releases#v2.6.7

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-kube-controllers
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - pods
      - namespaces
      - networkpolicies
      - nodes
    verbs:
      - watch
      - list

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
  - kind: ServiceAccount
    name: calico-kube-controllers
    namespace: kube-system

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
rules:
  - apiGroups: [""]
    resources:
      - pods
      - nodes
    verbs:
      - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
  - kind: ServiceAccount
    name: calico-node
    namespace: kube-system
EOF
}
until curl -sf "http://127.0.0.1:8080/healthz"
do
    echo "Waiting for Kubernetes API..."
    sleep 5
done

/usr/bin/kubectl apply -f ${CALICO_DEPLOY} --namespace=kube-system


@ -5,7 +5,14 @@
echo "configuring kubernetes (minion)"

_prefix=${CONTAINER_INFRA_PREFIX:-docker.io/openstackmagnum/}

_addtl_mounts=''
if [ "$NETWORK_DRIVER" = "calico" ]; then
    mkdir -p /opt/cni
    _addtl_mounts=',{"type":"bind","source":"/opt/cni","destination":"/opt/cni","options":["bind","rw","slave","mode=777"]}'
fi

atomic install --storage ostree --system --system-package=no --set=ADDTL_MOUNTS=${_addtl_mounts} --name=kubelet ${_prefix}kubernetes-kubelet:${KUBE_TAG}
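The `_addtl_mounts` value is a JSON fragment, comma-prefixed so it can be spliced into an existing JSON array of mounts in the kubelet system container's configuration. A quick sanity check of that splicing (the first mount entry here is invented for illustration):

```python
import json

# The comma-prefixed fragment from the script above.
addtl = (',{"type":"bind","source":"/opt/cni","destination":"/opt/cni",'
         '"options":["bind","rw","slave","mode=777"]}')

# Splice it into a hypothetical one-element mounts array, as the container
# config template would, and confirm the result is still valid JSON.
mounts = json.loads(
    '[{"type":"bind","source":"/srv","destination":"/srv","options":["bind"]}'
    + addtl + ']')
```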
atomic install --storage ostree --system --system-package=no --name=kube-proxy ${_prefix}kubernetes-proxy:${KUBE_TAG}

CERT_DIR=/etc/kubernetes/certs
@ -139,6 +146,10 @@ fi
EOF
chmod +x /etc/kubernetes/get_require_kubeconfig.sh

if [ "$NETWORK_DRIVER" = "calico" ]; then
    KUBELET_ARGS="${KUBELET_ARGS} --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
fi

sed -i '
    /^KUBELET_ADDRESS=/ s/=.*/="--address=0.0.0.0"/
    /^KUBELET_HOSTNAME=/ s/=.*/=""/


@ -60,7 +60,7 @@ data:
    errors
    log stdout
    health
    kubernetes ${DNS_CLUSTER_DOMAIN} ${PORTAL_NETWORK_CIDR} ${PODS_NETWORK_CIDR} {
        pods verified
    }
    proxy . /etc/resolv.conf


@ -22,6 +22,7 @@ write_files:
      FLANNEL_NETWORK_CIDR="$FLANNEL_NETWORK_CIDR"
      FLANNEL_NETWORK_SUBNETLEN="$FLANNEL_NETWORK_SUBNETLEN"
      FLANNEL_BACKEND="$FLANNEL_BACKEND"
      PODS_NETWORK_CIDR="$PODS_NETWORK_CIDR"
      PORTAL_NETWORK_CIDR="$PORTAL_NETWORK_CIDR"
      ADMISSION_CONTROL_LIST="$ADMISSION_CONTROL_LIST"
      ETCD_DISCOVERY_URL="$ETCD_DISCOVERY_URL"
@ -54,3 +55,7 @@ write_files:
      DNS_CLUSTER_DOMAIN="$DNS_CLUSTER_DOMAIN"
      CERT_MANAGER_API="$CERT_MANAGER_API"
      CA_KEY="$CA_KEY"
      CALICO_TAG="$CALICO_TAG"
      CALICO_CNI_TAG="$CALICO_CNI_TAG"
      CALICO_KUBE_CONTROLLERS_TAG="$CALICO_KUBE_CONTROLLERS_TAG"
      CALICO_IPV4POOL="$CALICO_IPV4POOL"


@ -84,8 +84,17 @@ class K8sFedoraTemplateDefinition(k8s_template_def.K8sTemplateDefinition):
        extra_params['nodes_affinity_policy'] = \
            CONF.cluster.nodes_affinity_policy
        if cluster_template.network_driver == 'flannel':
            extra_params["pods_network_cidr"] = \
                cluster.labels.get('flannel_network_cidr', '10.100.0.0/16')
        if cluster_template.network_driver == 'calico':
            extra_params["pods_network_cidr"] = \
                cluster.labels.get('calico_ipv4pool', '192.168.0.0/16')
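The branch above picks the pod network CIDR from a driver-specific label; as a standalone sketch (hypothetical helper name, same defaults as the patch):

```python
def pods_network_cidr(network_driver, labels):
    """Driver-specific pod CIDR with the defaults used by the patch."""
    if network_driver == 'flannel':
        return labels.get('flannel_network_cidr', '10.100.0.0/16')
    if network_driver == 'calico':
        return labels.get('calico_ipv4pool', '192.168.0.0/16')
    return None  # other drivers do not set a pod CIDR here
```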
        label_list = ['kube_tag', 'container_infra_prefix',
                      'availability_zone',
                      'calico_tag', 'calico_cni_tag',
                      'calico_kube_controllers_tag', 'calico_ipv4pool']
        for label in label_list:
            label_value = cluster.labels.get(label)
            if label_value:


@ -381,6 +381,30 @@ parameters:
    default: ""
    hidden: true

  calico_tag:
    type: string
    description: tag of the calico containers used to provision the calico node
    default: v2.6.7

  calico_cni_tag:
    type: string
    description: tag of the cni used to provision the calico node
    default: v1.11.2

  calico_kube_controllers_tag:
    type: string
    description: tag of the kube_controllers used to provision the calico node
    default: v1.0.3

  calico_ipv4pool:
    type: string
    description: Configure the IP pool from which Pod IPs will be chosen
    default: "192.168.0.0/16"

  pods_network_cidr:
    type: string
    description: Configure the IP pool/range from which pod IPs will be chosen
resources:

  ######################################################################
@ -577,6 +601,11 @@ resources:
          availability_zone: {get_param: availability_zone}
          ca_key: {get_param: ca_key}
          cert_manager_api: {get_param: cert_manager_api}
          calico_tag: {get_param: calico_tag}
          calico_cni_tag: {get_param: calico_cni_tag}
          calico_kube_controllers_tag: {get_param: calico_kube_controllers_tag}
          calico_ipv4pool: {get_param: calico_ipv4pool}
          pods_network_cidr: {get_param: pods_network_cidr}
  ######################################################################
  #
@ -648,6 +677,7 @@ resources:
          openstack_ca: {get_param: openstack_ca}
          nodes_server_group_id: {get_resource: nodes_server_group}
          availability_zone: {get_param: availability_zone}
          pods_network_cidr: {get_param: pods_network_cidr}

outputs:


@ -283,6 +283,26 @@ parameters:
    description: true if the kubernetes cert api manager should be enabled
    default: false

  calico_tag:
    type: string
    description: tag of the calico containers used to provision the calico node

  calico_cni_tag:
    type: string
    description: tag of the cni used to provision the calico node

  calico_kube_controllers_tag:
    type: string
    description: tag of the kube_controllers used to provision the calico node

  calico_ipv4pool:
    type: string
    description: Configure the IP pool from which Pod IPs will be chosen

  pods_network_cidr:
    type: string
    description: Configure the IP pool/range from which pod IPs will be chosen
resources:

  master_wait_handle:
@ -341,6 +361,7 @@ resources:
            "$FLANNEL_BACKEND": {get_param: flannel_backend}
            "$SYSTEM_PODS_INITIAL_DELAY": {get_param: system_pods_initial_delay}
            "$SYSTEM_PODS_TIMEOUT": {get_param: system_pods_timeout}
            "$PODS_NETWORK_CIDR": {get_param: pods_network_cidr}
            "$PORTAL_NETWORK_CIDR": {get_param: portal_network_cidr}
            "$ADMISSION_CONTROL_LIST": {get_param: admission_control_list}
            "$ETCD_DISCOVERY_URL": {get_param: discovery_url}
@ -371,6 +392,10 @@ resources:
            "$DNS_CLUSTER_DOMAIN": {get_param: dns_cluster_domain}
            "$CERT_MANAGER_API": {get_param: cert_manager_api}
            "$CA_KEY": {get_param: ca_key}
            "$CALICO_TAG": {get_param: calico_tag}
            "$CALICO_CNI_TAG": {get_param: calico_cni_tag}
            "$CALICO_KUBE_CONTROLLERS_TAG": {get_param: calico_kube_controllers_tag}
            "$CALICO_IPV4POOL": {get_param: calico_ipv4pool}

  install_openstack_ca:
    type: OS::Heat::SoftwareConfig
@ -541,6 +566,20 @@ resources:
      server: {get_resource: kube-master}
      actions: ['CREATE']

  calico_service:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: {get_file: ../../common/templates/kubernetes/fragments/calico-service.sh}

  calico_service_deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      signal_transport: HEAT_SIGNAL
      config: {get_resource: calico_service}
      server: {get_resource: kube-master}
      actions: ['CREATE']
  ######################################################################
  #
  # a single kubernetes master.
@ -573,7 +612,7 @@ resources:
      fixed_ips:
        - subnet: {get_param: fixed_subnet}
      allowed_address_pairs:
        - ip_address: {get_param: pods_network_cidr}
      replacement_policy: AUTO

  kube_master_floating:


@ -241,6 +241,10 @@ parameters:
      availability zone for master and nodes
    default: ""

  pods_network_cidr:
    type: string
    description: Configure the IP pool/range from which pod IPs will be chosen

resources:
  minion_wait_handle:
@ -455,7 +459,7 @@ resources:
      fixed_ips:
        - subnet: {get_param: fixed_subnet}
      allowed_address_pairs:
        - ip_address: {get_param: pods_network_cidr}
      replacement_policy: AUTO

  kube_minion_floating:


@ -233,6 +233,7 @@ class AtomicK8sTemplateDefinitionTestCase(BaseTemplateDefinitionTestCase):
        mock_cluster_template = mock.MagicMock()
        mock_cluster_template.tls_disabled = False
        mock_cluster_template.registry_enabled = False
        mock_cluster_template.network_driver = 'flannel'
        mock_cluster = mock.MagicMock()
        mock_cluster.uuid = '5d12f6fd-a196-4bf0-ae4c-1f639a523a52'
        del mock_cluster.stack_id
@ -275,6 +276,18 @@ class AtomicK8sTemplateDefinitionTestCase(BaseTemplateDefinitionTestCase):
        availability_zone = mock_cluster.labels.get(
            'availability_zone')
        cert_manager_api = mock_cluster.labels.get('cert_manager_api')
        calico_tag = mock_cluster.labels.get(
            'calico_tag')
        calico_cni_tag = mock_cluster.labels.get(
            'calico_cni_tag')
        calico_kube_controllers_tag = mock_cluster.labels.get(
            'calico_kube_controllers_tag')
        calico_ipv4pool = mock_cluster.labels.get(
            'calico_ipv4pool')
        if mock_cluster_template.network_driver == 'flannel':
            pods_network_cidr = flannel_cidr
        elif mock_cluster_template.network_driver == 'calico':
            pods_network_cidr = calico_ipv4pool

        k8s_def = k8sa_tdef.AtomicK8sTemplateDefinition()
@ -302,7 +315,13 @@ class AtomicK8sTemplateDefinitionTestCase(BaseTemplateDefinitionTestCase):
            'container_infra_prefix': container_infra_prefix,
            'nodes_affinity_policy': 'soft-anti-affinity',
            'availability_zone': availability_zone,
            'cert_manager_api': cert_manager_api,
            'calico_tag': calico_tag,
            'calico_cni_tag': calico_cni_tag,
            'calico_kube_controllers_tag': calico_kube_controllers_tag,
            'calico_ipv4pool': calico_ipv4pool,
            'pods_network_cidr': pods_network_cidr}}

        mock_get_params.assert_called_once_with(mock_context,
                                                mock_cluster_template,
                                                mock_cluster,
@ -322,6 +341,7 @@ class AtomicK8sTemplateDefinitionTestCase(BaseTemplateDefinitionTestCase):
        mock_cluster_template = mock.MagicMock()
        mock_cluster_template.tls_disabled = True
        mock_cluster_template.registry_enabled = False
        mock_cluster_template.network_driver = 'calico'
        mock_cluster = mock.MagicMock()
        mock_cluster.uuid = '5d12f6fd-a196-4bf0-ae4c-1f639a523a52'
        del mock_cluster.stack_id
@ -364,6 +384,18 @@ class AtomicK8sTemplateDefinitionTestCase(BaseTemplateDefinitionTestCase):
        availability_zone = mock_cluster.labels.get(
            'availability_zone')
        cert_manager_api = mock_cluster.labels.get('cert_manager_api')
        calico_tag = mock_cluster.labels.get(
            'calico_tag')
        calico_cni_tag = mock_cluster.labels.get(
            'calico_cni_tag')
        calico_kube_controllers_tag = mock_cluster.labels.get(
            'calico_kube_controllers_tag')
        calico_ipv4pool = mock_cluster.labels.get(
            'calico_ipv4pool')
        if mock_cluster_template.network_driver == 'flannel':
            pods_network_cidr = flannel_cidr
        elif mock_cluster_template.network_driver == 'calico':
            pods_network_cidr = calico_ipv4pool

        k8s_def = k8sa_tdef.AtomicK8sTemplateDefinition()
@ -393,7 +425,13 @@ class AtomicK8sTemplateDefinitionTestCase(BaseTemplateDefinitionTestCase):
            'container_infra_prefix': container_infra_prefix,
            'nodes_affinity_policy': 'soft-anti-affinity',
            'availability_zone': availability_zone,
            'cert_manager_api': cert_manager_api,
            'calico_tag': calico_tag,
            'calico_cni_tag': calico_cni_tag,
            'calico_kube_controllers_tag': calico_kube_controllers_tag,
            'calico_ipv4pool': calico_ipv4pool,
            'pods_network_cidr': pods_network_cidr}}

        mock_get_params.assert_called_once_with(mock_context,
                                                mock_cluster_template,
                                                mock_cluster,


@ -0,0 +1,5 @@
---
issues:
  - |
    Add 'calico' as a network driver for Kubernetes in order to support
    network isolation between namespaces via Kubernetes network policy.