
Add initial deployment guide

Change-Id: If86858123fa4dd18d9de01077065b458fe58efee
changes/79/744379/1
Author: Mohammed Naser
Commit: cda425bcdd
10 changed files with 1108 additions and 0 deletions:

 1. doc/requirements.txt                                   (+2, -0)
 2. doc/source/conf.py                                     (+11, -0)
 3. doc/source/deployment-guide.rst                        (+19, -0)
 4. doc/source/deployment-guide/install-cni.rst            (+31, -0)
 5. doc/source/deployment-guide/install-docker.rst         (+20, -0)
 6. doc/source/deployment-guide/install-kubernetes.rst     (+96, -0)
 7. doc/source/deployment-guide/setup-load-balancer.rst    (+49, -0)
 8. doc/source/deployment-guide/setup-virtual-ip.rst       (+37, -0)
 9. doc/source/index.rst                                   (+1, -0)
10. doc/source/manifests/calico.yaml                       (+842, -0)

doc/requirements.txt  (+2, -0)

@@ -1,2 +1,4 @@
doc8
sphinx
sphinx-copybutton
sphinx-tabs

doc/source/conf.py  (+11, -0)

@@ -1,3 +1,14 @@
project = 'OpenStack Operator'
copyright = '2020, VEXXHOST, Inc.'
author = 'VEXXHOST, Inc.'
html_extra_path = [
    'manifests/calico.yaml',
]

extensions = [
    'sphinx_copybutton',
    'sphinx_tabs.tabs'
]
copybutton_prompt_text = "$ "
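With these requirements and extensions in place, the documentation can be built
locally. The exact invocation below is an assumption (the project may drive the
build through ``tox`` instead), but a plain ``sphinx-build`` works as a sketch::

$ pip install -r doc/requirements.txt
$ sphinx-build -W -b html doc/source doc/build/html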

doc/source/deployment-guide.rst  (+19, -0)

@@ -0,0 +1,19 @@
Deployment Guide
================

The OpenStack operator requires that you have a functional Kubernetes cluster
in order to be able to deploy it. The following steps outline the installation
of such a cluster and of the operator itself.

The deployment of the OpenStack operator is highly containerised, even for the
components not managed by the operator. The steps to get started involve
deploying Docker to run the underlying infrastructure, setting up a virtual IP
and a load balancer to access the Kubernetes API, deploying Kubernetes itself
and finally the operator, which starts the OpenStack deployment.

.. highlight:: console

.. include:: deployment-guide/install-docker.rst
.. include:: deployment-guide/setup-virtual-ip.rst
.. include:: deployment-guide/setup-load-balancer.rst
.. include:: deployment-guide/install-kubernetes.rst
.. include:: deployment-guide/install-cni.rst

doc/source/deployment-guide/install-cni.rst  (+31, -0)

@@ -0,0 +1,31 @@
Install CNI
-----------

The tested and supported CNI for the OpenStack operator is Calico, due to its
high performance and support for ``NetworkPolicy``. You can deploy it onto
the cluster by running the following::

$ iptables -I DOCKER-USER -j ACCEPT
$ kubectl apply -f https://docs.opendev.org/vexxhost/openstack-operator/calico.yaml

.. note::

The first command overrides Docker's behaviour of disabling all traffic
forwarding when it is enabled, as this is necessary for the functioning of
the Kubernetes cluster.

Once the CNI is deployed, you'll have to make sure that Calico detected the
correct interface to build your BGP mesh. You can run this command and make
sure that all systems are on the right network::

$ kubectl describe nodes | grep IPv4Address

If they are not on the right IP range or interface, you can run the following
command and edit the ``calico-node`` DaemonSet::

$ kubectl -n kube-system edit ds/calico-node

You'll need to add an environment variable to the container definition which
skips the interfaces you don't want, something similar to this::

- name: IP_AUTODETECTION_METHOD
  value: skip-interface=bond0
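As a final check once Calico is up, you can confirm that every node is running
a healthy ``calico-node`` pod (the ``k8s-app=calico-node`` label comes from the
bundled manifest) and that the nodes have moved to ``Ready``::

$ kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
$ kubectl get nodes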

doc/source/deployment-guide/install-docker.rst  (+20, -0)

@@ -0,0 +1,20 @@
Install Docker
--------------

Docker is used by many different components of the underlying infrastructure,
so it must be installed first in order to bootstrap the system. It must be
installed on all the machines that you intend to manage using the OpenStack
operator. It will also be used to deploy infrastructure components such as
the virtual IP and Ceph.

.. tabs::

.. code-tab:: console Debian

$ apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb https://download.docker.com/linux/debian $(lsb_release -cs) stable"
$ apt-get update
$ apt-get install -y docker-ce
$ apt-mark hold docker-ce
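Once the packages are installed, it's a good idea to confirm that the Docker
daemon is actually up on each machine before moving on; a quick check such as
the following should succeed::

$ docker info --format '{{.ServerVersion}}'
$ docker run --rm hello-world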

doc/source/deployment-guide/install-kubernetes.rst  (+96, -0)

@@ -0,0 +1,96 @@
Install Kubernetes
------------------

The recommended container runtime for the operator is ``containerd``, which is
also what is used in production. This document outlines the installation of
Kubernetes using ``kubeadm``. You'll need to start by installing the
Kubernetes components on all of the systems.

.. tabs::

.. code-tab:: console Debian

$ curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo add-apt-repository "deb https://apt.kubernetes.io/ kubernetes-xenial main"
$ apt-get update
$ apt-get install -y kubelet kubeadm kubectl
$ apt-mark hold containerd.io kubelet kubeadm kubectl
$ containerd config default > /etc/containerd/config.toml
$ systemctl restart containerd
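Before continuing, it's worth verifying that ``containerd`` restarted cleanly
with the generated configuration; for example::

$ systemctl is-active containerd
$ ctr version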
Once this is done, you'll need to start off by preparing the configuration
file for ``kubeadm``, which should look something like this::

$ cat <<EOF | tee /etc/kubernetes/kubeadm.conf
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  bindPort: 16443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: "cloud.vexxhost.net:6443"
apiServer:
  extraArgs:
    oidc-issuer-url: https://accounts.google.com
    oidc-username-claim: email
    oidc-client-id: 1075333334902-iqnm5nbme0c36eir9gub5m62e6pbkbqe.apps.googleusercontent.com
networking:
  podSubnet: 10.244.0.0/16
EOF
.. note::

The ``cloud.vexxhost.net`` address should be replaced by the DNS record that
you created in the previous step.

The options inside ``extraArgs`` are there to allow for OIDC authentication
via Google Suite. You should remove them or replace them with your own
OIDC provider.

The pod subnet listed there is the one recommended for usage with Calico,
which is the supported and tested CNI.
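Optionally, you can pre-pull the control plane images using the same
configuration file before initializing the cluster; this makes the
``kubeadm init`` step faster and surfaces any registry problems early::

$ kubeadm config images pull --config /etc/kubernetes/kubeadm.conf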
At this point, you should be ready to bring up your first control plane node.
You can execute the following on any of the controllers::

$ kubeadm init --config /etc/kubernetes/kubeadm.conf --upload-certs

At that point, the cluster will be up and it's best to add the
``cluster-admin`` credentials into the ``root`` user for future management::

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
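At this stage ``kubectl`` should be able to reach the API server through the
load balancer; a quick check such as the following confirms it::

$ kubectl cluster-info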
.. warning::

For all of the ``kubeadm`` commands that you run on the other nodes, you'll
need to make sure that you include the following flag or they will use Docker
instead of the recommended ``containerd``::

--cri-socket /run/containerd/containerd.sock
You will also need to join the other controllers to this cluster by using the
command provided which includes the ``--control-plane`` flag. You'll also need
to make sure you add the ``--apiserver-bind-port 16443`` flag, otherwise it
will refuse to join (due to port 6443 being used by the load balancer).

Once that is done, you can proceed to join the remainder of the nodes using
the ``kubeadm join`` command that was provided when initializing the cluster.
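For illustration only, a control plane join using the flags described above
would look something like the following, where the token, discovery hash and
certificate key are the values printed by ``kubeadm init`` on your own cluster;
worker nodes use the same command without the ``--control-plane``,
``--certificate-key`` and ``--apiserver-bind-port`` flags::

$ kubeadm join cloud.vexxhost.net:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane \
    --certificate-key <certificate-key> \
    --apiserver-bind-port 16443 \
    --cri-socket /run/containerd/containerd.sock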
After you've completed the installation of Kubernetes on all of the nodes,
you can verify that all of them are present. It's normal for nodes to be in
the ``NotReady`` status, since the CNI is not present yet::
$ kubectl get nodes
NAME   STATUS     ROLES    AGE     VERSION
ctl1   NotReady   master   17m     v1.18.6
ctl2   NotReady   master   6m27s   v1.18.6
ctl3   NotReady   master   5m29s   v1.18.6
kvm1   NotReady   <none>   18s     v1.18.6
kvm2   NotReady   <none>   10s     v1.18.6
kvm3   NotReady   <none>   2s      v1.18.6

doc/source/deployment-guide/setup-load-balancer.rst  (+49, -0)

@@ -0,0 +1,49 @@
Setup Load Balancer
-------------------

The load balancer that distributes requests across all of the Kubernetes API
servers will be HAProxy.

.. note::

We do not suggest using HAProxy to distribute load across all of the ingress
controllers. The primary reason is that it introduces an extra hop in the
network for no large benefit. The ingress should be bound directly on the
virtual IP.

The following example assumes that you have 3 controllers, with their IP
addresses being ``10.0.0.1``, ``10.0.0.2`` and ``10.0.0.3``. It also assumes
that all of the Kubernetes API servers will be listening on port ``16443``
and that the load balancer itself will be listening on port ``6443``.
You'll have to create a configuration file on the local system first::
$ mkdir /etc/haproxy
$ cat <<EOF | tee /etc/haproxy/haproxy.cfg
listen kubernetes
  mode tcp
  bind 0.0.0.0:6443
  timeout connect 30s
  timeout client 4h
  timeout server 4h
  server ctl1 10.0.0.1:16443 check
  server ctl2 10.0.0.2:16443 check
  server ctl3 10.0.0.3:16443 check
EOF
Once you've set up the configuration file, you can start up the containerised
instance of HAProxy::

$ docker run --net=host \
    --volume=/etc/haproxy:/usr/local/etc/haproxy:ro \
    --detach \
    --restart always \
    --name=haproxy \
    haproxy:2.2
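Once the container is running, you can confirm that HAProxy came up cleanly
and is listening on the frontend port, for example::

$ docker logs haproxy
$ ss -tlnp | grep 6443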
You'll also need to make sure that you have a DNS record pointing towards your
virtual IP address. It is recommended that you create a wildcard DNS record as
well, to allow multiple hosts for the ingress without needing extra changes in
your DNS, something like this::

cloud.vexxhost.net.   86400 IN A 10.0.0.200
*.cloud.vexxhost.net. 86400 IN A 10.0.0.200
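To confirm that the records resolve before continuing, a lookup against both
the bare name and an arbitrary host under the wildcard (``test`` here is just
an example) should return the virtual IP::

$ dig +short cloud.vexxhost.net
10.0.0.200
$ dig +short test.cloud.vexxhost.net
10.0.0.200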

doc/source/deployment-guide/setup-virtual-ip.rst  (+37, -0)

@@ -0,0 +1,37 @@
Setup Virtual IP
----------------

The virtual IP runs across all controllers in order to allow the Kubernetes
API server to be highly available and load balanced. It also becomes one of
the addresses that the Kubernetes ingress listens on, where all the OpenStack
API endpoints will be exposed.

The recommended way of deploying a virtual IP address is using ``keepalived``
running inside Docker, in order to make sure that your environment remains
clean and easily reproducible.

You should use the following command in order to start up ``keepalived`` to
host the virtual IP address. These commands should be run on all your
controllers, and they assume that you have 3 controllers with IP addresses
``10.0.0.1``, ``10.0.0.2`` and ``10.0.0.3``. The following example is what you
would run on the ``10.0.0.1`` machine with a VIP of ``10.0.0.200`` running
on the interface ``eth0``::
$ docker run --cap-add=NET_ADMIN \
    --cap-add=NET_BROADCAST \
    --cap-add=NET_RAW \
    --net=host \
    --env KEEPALIVED_INTERFACE=eth0 \
    --env KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['10.0.0.2', '10.0.0.3']" \
    --env KEEPALIVED_VIRTUAL_IPS="#PYTHON2BASH:['10.0.0.200']" \
    --detach \
    --restart always \
    --name keepalived \
    osixia/keepalived:2.0.20
.. note::

You'll have to edit the ``KEEPALIVED_UNICAST_PEERS`` environment variable
depending on the host you're running this on; it should always point at the
other hosts.
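After starting ``keepalived`` on all controllers, the virtual IP should be
active on exactly one of them at a time; you can check which host currently
holds it with something like::

$ ip -4 addr show dev eth0 | grep 10.0.0.200
$ docker logs keepalived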

doc/source/index.rst  (+1, -0)

@@ -4,4 +4,5 @@ OpenStack Operator
.. toctree::
   :maxdepth: 2

   deployment-guide
   custom-resources

doc/source/manifests/calico.yaml  (+842, -0)

@@ -0,0 +1,842 @@
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# Typha is disabled.
typha_service_name: "none"
# Configure the backend to use.
calico_backend: "bird"
# Configure the MTU to use for workload interfaces and the
# tunnels. For IPIP, set to your network MTU - 20; for VXLAN
# set to your network MTU - 50.
veth_mtu: "1440"
# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "__KUBERNETES_NODE_NAME__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
},
{
"type": "bandwidth",
"capabilities": {"bandwidth": true}
}
]
}
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPConfiguration
plural: bgpconfigurations
singular: bgpconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPPeer
plural: bgppeers
singular: bgppeer
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: blockaffinities.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BlockAffinity
plural: blockaffinities
singular: blockaffinity
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: ClusterInformation
plural: clusterinformations
singular: clusterinformation
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: FelixConfiguration
plural: felixconfigurations
singular: felixconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkPolicy
plural: globalnetworkpolicies
singular: globalnetworkpolicy
shortNames:
- gnp
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworksets.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkSet
plural: globalnetworksets
singular: globalnetworkset
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: hostendpoints.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: HostEndpoint
plural: hostendpoints
singular: hostendpoint
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamblocks.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMBlock
plural: ipamblocks
singular: ipamblock
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamconfigs.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMConfig
plural: ipamconfigs
singular: ipamconfig
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamhandles.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMHandle
plural: ipamhandles
singular: ipamhandle
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ippools.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPPool
plural: ippools
singular: ippool
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: kubecontrollersconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: KubeControllersConfiguration
plural: kubecontrollersconfigurations
singular: kubecontrollersconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networkpolicies.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkPolicy
plural: networkpolicies
singular: networkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networksets.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkSet
plural: networksets
singular: networkset
---
---
# Source: calico/templates/rbac.yaml
# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
rules:
# Nodes are watched to monitor for deletions.
- apiGroups: [""]
resources:
- nodes
verbs:
- watch
- list
- get
# Pods are queried to check for existence.
- apiGroups: [""]
resources:
- pods
verbs:
- get
# IPAM resources are manipulated when nodes are deleted.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
verbs:
- list
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
# kube-controllers manages hostendpoints.
- apiGroups: ["crd.projectcalico.org"]
resources:
- hostendpoints
verbs:
- get
- list
- create
- update
- delete
# Needs access to update clusterinformations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- clusterinformations
verbs:
- get
- create
- update
# KubeControllersConfiguration is where it gets its config
- apiGroups: ["crd.projectcalico.org"]
resources:
- kubecontrollersconfigurations
verbs:
# read its own config
- get
# create a default if none exists
- create
# update status
- update
# watch for changes
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-kube-controllers
subjects:
- kind: ServiceAccount
name: calico-kube-controllers
namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-node
rules:
# The CNI plugin needs to get pods, nodes, and namespaces.
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
- apiGroups: [""]
resources:
- endpoints
- services
verbs:
# Used to discover service IPs for advertisement.
- watch
- list
# Used to discover Typhas.
- get
# Pod CIDR auto-detection on kubeadm needs access to config maps.
- apiGroups: [""]
resources:
- configmaps
verbs:
- get
- apiGroups: [""]
resources:
- nodes/status
verbs:
# Needed for clearing NodeNetworkUnavailable flag.
- patch
# Calico stores some configuration information in node annotations.
- update
# Watch for changes to Kubernetes NetworkPolicies.
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs:
- watch
- list
# Used by Calico for policy information.
- apiGroups: [""]
resources:
- pods
- namespaces
- serviceaccounts
verbs:
- list
- watch
# The CNI plugin patches pods/status.
- apiGroups: [""]
resources:
- pods/status
verbs:
- patch
# Calico monitors various CRDs for config.
- apiGroups: ["crd.projectcalico.org"]
resources:
- globalfelixconfigs
- felixconfigurations
- bgppeers
- globalbgpconfigs
- bgpconfigurations
- ippools
- ipamblocks
- globalnetworkpolicies
- globalnetworksets
- networkpolicies
- networksets
- clusterinformations
- hostendpoints
- blockaffinities
verbs:
- get
- list
- watch
# Calico must create and update some CRDs on startup.
- apiGroups: ["crd.projectcalico.org"]
resources:
- ippools
- felixconfigurations
- clusterinformations
verbs:
- create
- update
# Calico stores some configuration information on the node.
- apiGroups: [""]
resources:
- nodes
verbs:
- get
- list
- watch
# These permissions are only required for upgrade from v2.6, and can
# be removed after upgrade or on fresh installations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- bgpconfigurations
- bgppeers
verbs:
- create
- update
# These permissions are required for Calico CNI to perform IPAM allocations.
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
- ipamblocks
- ipamhandles
verbs:
- get
- list
- create
- update
- delete
- apiGroups: ["crd.projectcalico.org"]
resources:
- ipamconfigs
verbs:
- get
# Block affinities must also be watchable by confd for route aggregation.
- apiGroups: ["crd.projectcalico.org"]
resources:
- blockaffinities
verbs:
- watch
# The Calico IPAM migration needs to get daemonsets. These permissions can be
# removed if not upgrading from an installation using host-local IPAM.
- apiGroups: ["apps"]
resources:
- daemonsets
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
annotations:
# This, along with the CriticalAddonsOnly toleration below,
# marks the pod as a critical add-on, ensuring it gets
# priority scheduling and that its resources are reserved
# if it ever gets evicted.
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
kubernetes.io/os: linux
hostNetwork: true
tolerations:
# Make sure calico-node gets scheduled on all nodes.
- effect: NoSchedule
operator: Exists
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
serviceAccountName: calico-node
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
priorityClassName: system-node-critical
initContainers:
# This container performs upgrade from host-local IPAM to calico-ipam.
# It can be deleted if this is a fresh installation, or if you have already
# upgraded to use calico-ipam.
- name: upgrade-ipam
image: calico/cni:v3.14.2
command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
env:
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
volumeMounts:
- mountPath: /var/lib/cni/networks
name: host-local-net-dir
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
securityContext:
privileged: true
# This container installs the CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: calico/cni:v3.14.2
command: ["/install-cni.sh"]
env:
# Name of the CNI config file to create.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
# Set the hostname based on the k8s node name.
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# CNI MTU Config variable
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Prevents the container from sleeping forever.
- name: SLEEP
value: "false"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
securityContext:
privileged: true
# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
# to communicate with Felix over the Policy Sync API.
- name: flexvol-driver
image: calico/pod2daemon-flexvol:v3.14.2
volumeMounts:
- name: flexvol-driver-host
mountPath: /host/driver
securityContext:
privileged: true
containers:
# Runs calico-node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: calico/node:v3.14.2
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Wait for the datastore.
- name: WAIT_FOR_DATASTORE
value: "true"
# Set based on the k8s node name.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Choose the backend to use.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "Always"
# Enable or Disable VXLAN on the default IP pool.
- name: CALICO_IPV4POOL_VXLAN
value: "Never"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Set MTU for the VXLAN tunnel device.
- name: FELIX_VXLANMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
value: "10.244.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
exec:
command:
- /bin/calico-node
- -felix-live
- -bird-live
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/calico-node
- -felix-ready
- -bird-ready
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
readOnly: false
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
- name: policysync
mountPath: /var/run/nodeagent
volumes:
# Used by calico-node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
# Mount in the directory for host-local IPAM allocations. This is
# used when upgrading from host-local to calico-ipam, and can be removed
# if not using the upgrade-ipam init container.
- name: host-local-net-dir
hostPath:
path: /var/lib/cni/networks
# Used to create per-pod Unix Domain Sockets
- name: policysync
hostPath:
type: DirectoryOrCreate
path: /var/run/nodeagent
# Used to install Flex Volume Driver
- name: flexvol-driver-host
hostPath:
type: DirectoryOrCreate
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
# The controllers can only have a single active instance.
replicas: 1
selector:
matchLabels:
k8s-app: calico-kube-controllers
strategy:
type: Recreate
template:
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
kubernetes.io/os: linux
tolerations:
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: calico-kube-controllers
priorityClassName: system-cluster-critical
containers:
- name: calico-kube-controllers
image: calico/kube-controllers:v3.14.2
env:
# Choose which controllers to run.
- name: ENABLED_CONTROLLERS
value: node
- name: DATASTORE_TYPE
value: kubernetes
readinessProbe:
exec:
command:
- /usr/bin/check-status
- -r
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-kube-controllers
namespace: kube-system
---
# Source: calico/templates/calico-etcd-secrets.yaml
---
# Source: calico/templates/calico-typha.yaml
---
# Source: calico/templates/configure-canal.yaml
