Integrate gcp provider (capg) with airshipctl

* add documentation for gcp provider (capg)
  * add manifests for gcp provider (capg)
  * add cluster templates for control plane and workers
  * add site definition to use gcp provider (capg) with
    control plane and workers

Zuul Gate Scripts: https://review.opendev.org/#/c/749165/
Change-Id: I3ea07fcafe49fc2698d9d961222550c45b40cd66
Author: Rishabh Kumar Jain
Date: 2020-08-25 18:10:55 -07:00
parent cb8be36fdb
commit 219b29f1a6
64 changed files with 2388 additions and 0 deletions

@ -34,6 +34,7 @@ Welcome to airshipctl's Documentation!
Commands <cli/airshipctl>
providers/cluster_api_docker
providers/cluster_api_openstack
providers/cluster_api_gcp
.. toctree::
:caption: Airship Project Documentation

@ -0,0 +1,786 @@
# Airshipctl and Cluster API GCP Integration
## Overview
The Airshipctl and Cluster API GCP integration facilitates the use of
`airshipctl` to create Cluster API management and workload clusters with GCP as
the infrastructure provider.
![Machines](https://i.imgur.com/UfxDtNO.jpg)
## Workflow
A simple workflow that can be tested involves the following
operations:
**Initialize the management cluster with cluster api and gcp provider
components**
> airshipctl phase run clusterctl-init-ephemeral
**Create a workload cluster, with control plane and worker nodes**
> airshipctl phase run controlplane-target
> airshipctl phase run workers-target
## GCP Prerequisites
### Create Service Account
To create and manage clusters, this infrastructure provider uses a service
account to authenticate with GCP's APIs. From your cloud console, follow [these
instructions](https://cloud.google.com/iam/docs/creating-managing-service-accounts#creating)
to create a new service account with Editor permissions. Afterwards, generate a
JSON Key and store it somewhere safe. Use cloud shell to install ansible,
packer, and build the CAPI compliant vm image.
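If you prefer the CLI over the console, the same service-account steps can be sketched with `gcloud`. The project id, service-account name `capg-sa`, and key path below are hypothetical placeholders, and the `command -v` guard simply lets the sketch no-op on machines without the SDK:

```shell
export GCP_PROJECT_ID="my-project-id"    # placeholder: use your own project id
SA_NAME="capg-sa"                        # hypothetical service-account name
SA_EMAIL="${SA_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com"

if command -v gcloud >/dev/null 2>&1; then
    # Create the service account, grant Editor, and generate a JSON key
    gcloud iam service-accounts create "${SA_NAME}" --project "${GCP_PROJECT_ID}"
    gcloud projects add-iam-policy-binding "${GCP_PROJECT_ID}" \
        --member "serviceAccount:${SA_EMAIL}" --role roles/editor
    gcloud iam service-accounts keys create ~/serviceaccount-key.json \
        --iam-account "${SA_EMAIL}"
fi
echo "service account: ${SA_EMAIL}"
```

Keep the generated key file safe; it is the credential referenced by `GOOGLE_APPLICATION_CREDENTIALS` below.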
### Build Cluster API Compliant VM Image
#### Install Ansible
Start by launching cloud shell.
$ export GCP_PROJECT_ID=<project-id>
$ export GOOGLE_APPLICATION_CREDENTIALS=</path/to/serviceaccount-key.json>
$ sudo apt-get update
$ sudo apt-get install ansible -y
#### Install Packer
$ mkdir packer
$ cd packer
$ wget https://releases.hashicorp.com/packer/1.6.0/packer_1.6.0_linux_amd64.zip
$ unzip packer_1.6.0_linux_amd64.zip
$ sudo mv packer /usr/local/bin/
#### Build GCP Compliant CAPI-Ubuntu Image
$ git clone https://sigs.k8s.io/image-builder.git
$ cd image-builder/images/capi/
$ make build-gce-default
$ gcloud compute images list --project ${GCP_PROJECT_ID} --no-standard-images
```
NAME PROJECT FAMILY DEPRECATED STATUS
cluster-api-ubuntu-1804-v1-16-14-1599066516 virtual-anchor-281401 capi-ubuntu-1804-k8s-v1-16 READY
```
### Create Cloud NAT Router
Kubernetes nodes need NAT access or a public IP in order to communicate with
the control plane and to pull container images from registries (e.g. gcr.io or
Docker Hub). By default, the provider creates Machines without a public IP.
To make sure your cluster can communicate with the outside world, and the load
balancer, you can create a Cloud NAT in the region you'd like your Kubernetes
cluster to live in by following [these
instructions](https://cloud.google.com/nat/docs/using-nat#specify_ip_addresses_for_nat).
For reference, use the below images. You can create two Cloud NAT routers, one
each for regions us-west1 and us-east1.
![us-west1](https://i.imgur.com/Q5DRxtV.jpg)
![us-east1](https://i.imgur.com/94qeAch.jpg)
![nat-routers](https://i.imgur.com/wbeBSyF.jpg)
## Other Common Pre-requisites
These prerequisites are required on the VM that will be used to create the
workload cluster on gcp:
* Install [Docker](https://www.docker.com/)
* Install [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
* Install [Kind](https://kind.sigs.k8s.io/)
* Install
[Kustomize](https://kubernetes-sigs.github.io/kustomize/installation/binaries/)
* Install [Airshipctl](https://docs.airshipit.org/airshipctl/developers.html)
Also, check [Software Version Information](#Software-Version-Information),
[Special Instructions](#Special-Instructions) and [Virtual Machine
Specification](#Virtual-Machine-Specification).
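Before continuing, it can help to confirm that the tools above are on PATH. A small sketch (the `check_tools` helper name is just illustrative):

```shell
# Report any required tool that is not installed; returns non-zero if one is missing.
check_tools() {
    missing=0
    for tool in "$@"; do
        if ! command -v "$tool" >/dev/null 2>&1; then
            echo "missing: $tool"
            missing=1
        fi
    done
    return "$missing"
}

check_tools docker kubectl kind kustomize airshipctl || echo "install the missing tools before proceeding"
```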
## Getting Started
Kind will be used to set up a Kubernetes cluster that will later be transformed
into a management cluster using airshipctl. The kind Kubernetes cluster will be
initialized with Cluster API and Cluster API gcp provider components.
$ export KIND_EXPERIMENTAL_DOCKER_NETWORK=bridge
$ kind create cluster --name capi-gcp
```
Creating cluster "capi-gcp" ...
WARNING: Overriding docker network due to KIND_EXPERIMENTAL_DOCKER_NETWORK
WARNING: Here be dragons! This is not supported currently.
✓ Ensuring node image (kindest/node:v1.18.2) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-capi-gcp"
You can now use your cluster with:
kubectl cluster-info --context kind-capi-gcp
```
$ kubectl get pods -A
```
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66bff467f8-kmg7c 1/1 Running 0 82s
kube-system coredns-66bff467f8-lg8qc 1/1 Running 0 82s
kube-system etcd-capi-gcp-control-plane 1/1 Running 0 91s
kube-system kindnet-dzp8v 1/1 Running 0 82s
kube-system kube-apiserver-capi-gcp-control-plane 1/1 Running 0 91s
kube-system kube-controller-manager-capi-gcp-control-plane 1/1 Running 0 90s
kube-system kube-proxy-zvdh8 1/1 Running 0 82s
kube-system kube-scheduler-capi-gcp-control-plane 1/1 Running 0 83s
local-path-storage local-path-provisioner-bd4bb6b75-6drnt 1/1 Running 0 82s
```
## Create airshipctl configuration files
$ mkdir ~/.airship
$ airshipctl config init
Run the below command to configure the gcp manifest and add it to the airship config:
```
$ airshipctl config set-manifest gcp_manifest --repo primary \
--url https://opendev.org/airship/airshipctl --branch master \
--primary --sub-path manifests/site/gcp-test-site --target-path /tmp/airship
```
$ airshipctl config set-context kind-capi-gcp --manifest gcp_manifest
```
Context "kind-capi-gcp" modified.
```
$ cp ~/.kube/config ~/.airship/kubeconfig
$ airshipctl config get-context
```
Context: kind-capi-gcp
contextKubeconf: kind-capi-gcp_target
manifest: gcp_manifest
LocationOfOrigin: /home/rishabh/.airship/kubeconfig
cluster: kind-capi-gcp_target
user: kind-capi-gcp
```
$ airshipctl config use-context kind-capi-gcp
```
Manifest "gcp_manifest" created.
```
$ airshipctl document pull --debug
```
[airshipctl] 2020/08/12 14:07:13 Reading current context manifest information from /home/rishabh/.airship/config
[airshipctl] 2020/08/12 14:07:13 Downloading primary repository airshipctl from https://review.opendev.org/airship/airshipctl into /tmp/airship
[airshipctl] 2020/08/12 14:07:13 Attempting to download the repository airshipctl
[airshipctl] 2020/08/12 14:07:13 Attempting to clone the repository airshipctl from https://review.opendev.org/airship/airshipctl
[airshipctl] 2020/08/12 14:07:23 Attempting to checkout the repository airshipctl from branch refs/heads/master
```
$ airshipctl config set-manifest gcp_manifest --target-path /tmp/airship/airshipctl
## Configure gcp site variables
Configure `project_id`:
$ cat /tmp/airship/airshipctl/manifests/site/gcp-test-site/target/controlplane/project_name.json
```
[
{ "op": "replace","path": "/spec/project","value": "<project_id>"}
]
```
Include the gcp variables in `clusterctl.yaml`.
The original values for the below variables are as follows:
```
GCP_CONTROL_PLANE_MACHINE_TYPE="n1-standard-4"
GCP_NODE_MACHINE_TYPE="n1-standard-4"
GCP_REGION="us-west1"
GCP_NETWORK_NAME="default"
GCP_PROJECT="<your_project_id>"
GCP_CREDENTIALS="$( cat ~/</path/to/serviceaccount-key.json>)"
```
Edit `airshipctl/manifests/site/gcp-test-site/shared/clusterctl/clusterctl.yaml`
to include gcp variables and their values in base64 encoded format. Use
https://www.base64decode.org/ if required.
To get the GCP_CREDENTIALS in base64 format, use the below command.
$ export GCP_B64ENCODED_CREDENTIALS=$( cat ~/</path/to/serviceaccount-key.json> | base64 | tr -d '\n' )
$ echo $GCP_B64ENCODED_CREDENTIALS
The `clusterctl.yaml` shown below has base64-encoded values for all variables
except GCP_PROJECT and GCP_B64ENCODED_CREDENTIALS, which are placeholders you
must fill in with base64-encoded values from your own project. The original
values of the remaining encoded variables are as follows:
```
GCP_CONTROL_PLANE_MACHINE_TYPE="n1-standard-4"
GCP_NODE_MACHINE_TYPE="n1-standard-4"
GCP_REGION="us-west1"
GCP_NETWORK_NAME="default"
```
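As a cross-check, those encoded values can be reproduced directly in the shell. `printf` is used instead of `echo` so that no trailing newline is included in the encoding:

```shell
# Each command prints the base64 form used in clusterctl.yaml
printf '%s' "n1-standard-4" | base64   # bjEtc3RhbmRhcmQtNA==
printf '%s' "us-west1"      | base64   # dXMtd2VzdDE=
printf '%s' "default"       | base64   # ZGVmYXVsdA==
```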
$ cat /tmp/airship/airshipctl/manifests/site/gcp-test-site/shared/clusterctl/clusterctl.yaml
```
apiVersion: airshipit.org/v1alpha1
kind: Clusterctl
metadata:
  labels:
    airshipit.org/deploy-k8s: "false"
  name: clusterctl-v1
init-options:
  core-provider: "cluster-api:v0.3.3"
  bootstrap-providers:
    - "kubeadm:v0.3.3"
  infrastructure-providers:
    - "gcp:v0.3.0"
  control-plane-providers:
    - "kubeadm:v0.3.3"
providers:
  - name: "gcp"
    type: "InfrastructureProvider"
    variable-substitution: true
    versions:
      v0.3.0: manifests/function/capg/v0.3.0
  - name: "kubeadm"
    type: "BootstrapProvider"
    versions:
      v0.3.3: manifests/function/cabpk/v0.3.3
  - name: "cluster-api"
    type: "CoreProvider"
    versions:
      v0.3.3: manifests/function/capi/v0.3.3
  - name: "kubeadm"
    type: "ControlPlaneProvider"
    versions:
      v0.3.3: manifests/function/cacpk/v0.3.3
additional-vars:
  GCP_CONTROL_PLANE_MACHINE_TYPE: "bjEtc3RhbmRhcmQtNA=="
  GCP_NODE_MACHINE_TYPE: "bjEtc3RhbmRhcmQtNA=="
  GCP_PROJECT: "<B64ENCODED_GCP_PROJECT_ID>"
  GCP_REGION: "dXMtd2VzdDE="
  GCP_NETWORK_NAME: "ZGVmYXVsdA=="
  GCP_B64ENCODED_CREDENTIALS: "<GCP_B64ENCODED_CREDENTIALS>"
```
## Initialize Management Cluster
$ airshipctl phase run clusterctl-init-ephemeral
```
[airshipctl] 2020/09/02 11:14:15 Verifying that variable GCP_REGION is allowed to be appended
[airshipctl] 2020/09/02 11:14:15 Verifying that variable GCP_B64ENCODED_CREDENTIALS is allowed to be appended
[airshipctl] 2020/09/02 11:14:15 Verifying that variable GCP_CONTROL_PLANE_MACHINE_TYPE is allowed to be appended
[airshipctl] 2020/09/02 11:14:15 Verifying that variable GCP_NETWORK_NAME is allowed to be appended
[airshipctl] 2020/09/02 11:14:15 Verifying that variable GCP_NODE_MACHINE_TYPE is allowed to be appended
.
.
.
Patching Secret="capg-manager-bootstrap-credentials" Namespace="capg-system"
Creating Service="capg-controller-manager-metrics-service" Namespace="capg-system"
Creating Deployment="capg-controller-manager" Namespace="capg-system"
Creating inventory entry Provider="infrastructure-gcp" Version="v0.3.0" TargetNamespace="capg-system"
```
$ kubectl get pods -A
```
NAMESPACE NAME READY STATUS RESTARTS AGE
capg-system capg-controller-manager-b8655ddb4-swwzk 2/2 Running 0 54s
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-66c6b6857b-22hg4 2/2 Running 0 73s
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-688f7ccc56-7g676 2/2 Running 0 65s
capi-system capi-controller-manager-549c757797-6vscq 2/2 Running 0 84s
capi-webhook-system capg-controller-manager-d5f85c48d-74gj6 2/2 Running 0 61s
capi-webhook-system capi-controller-manager-5f8fc485bb-stflj 2/2 Running 0 88s
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-6b645d9d4c-2crk7 2/2 Running 0 81s
capi-webhook-system capi-kubeadm-control-plane-controller-manager-65dbd6f999-cghmx 2/2 Running 0 70s
cert-manager cert-manager-77d8f4d85f-cqp7m 1/1 Running 0 115s
cert-manager cert-manager-cainjector-75f88c9f56-qh9m8 1/1 Running 0 115s
cert-manager cert-manager-webhook-56669d7fcb-6zddl 1/1 Running 0 115s
kube-system coredns-66bff467f8-kmg7c 1/1 Running 0 3m55s
kube-system coredns-66bff467f8-lg8qc 1/1 Running 0 3m55s
kube-system etcd-capi-gcp-control-plane 1/1 Running 0 4m4s
kube-system kindnet-dzp8v 1/1 Running 0 3m55s
kube-system kube-apiserver-capi-gcp-control-plane 1/1 Running 0 4m4s
kube-system kube-controller-manager-capi-gcp-control-plane 1/1 Running 0 4m3s
kube-system kube-proxy-zvdh8 1/1 Running 0 3m55s
kube-system kube-scheduler-capi-gcp-control-plane 1/1 Running 0 3m56s
local-path-storage local-path-provisioner-bd4bb6b75-6drnt 1/1 Running 0 3m55s
```
## Create control plane and worker nodes
$ airshipctl phase run controlplane-target --debug
```
[airshipctl] 2020/09/02 11:21:08 building bundle from kustomize path /tmp/airship/airshipctl/manifests/site/gcp-test-site/target/controlplane
[airshipctl] 2020/09/02 11:21:08 Applying bundle, inventory id: kind-capi-gcp-target-controlplane
[airshipctl] 2020/09/02 11:21:08 Inventory Object config Map not found, auto generating Invetory object
[airshipctl] 2020/09/02 11:21:08 Injecting Invetory Object: {"apiVersion":"v1","kind":"ConfigMap","metadata":{"creationTimestamp":null,"labels":{"cli-utils.sigs.k8s.io/inventory-id":"kind-capi-gcp-target-controlplane"},"name":"airshipit-kind-capi-gcp-target-controlplane","namespace":"airshipit"}}{nsfx:false,beh:unspecified} into bundle
[airshipctl] 2020/09/02 11:21:08 Making sure that inventory object namespace airshipit exists
configmap/airshipit-kind-capi-gcp-target-controlplane-5ab3466f created
cluster.cluster.x-k8s.io/gtc created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/gtc-control-plane created
gcpcluster.infrastructure.cluster.x-k8s.io/gtc created
gcpmachinetemplate.infrastructure.cluster.x-k8s.io/gtc-control-plane created
5 resource(s) applied. 5 created, 0 unchanged, 0 configured
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/gtc-control-plane is NotFound: Resource not found
gcpcluster.infrastructure.cluster.x-k8s.io/gtc is NotFound: Resource not found
gcpmachinetemplate.infrastructure.cluster.x-k8s.io/gtc-control-plane is NotFound: Resource not found
configmap/airshipit-kind-capi-gcp-target-controlplane-5ab3466f is NotFound: Resource not found
cluster.cluster.x-k8s.io/gtc is NotFound: Resource not found
configmap/airshipit-kind-capi-gcp-target-controlplane-5ab3466f is Current: Resource is always ready
cluster.cluster.x-k8s.io/gtc is Current: Resource is current
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/gtc-control-plane is Current: Resource is current
gcpcluster.infrastructure.cluster.x-k8s.io/gtc is Current: Resource is current
gcpmachinetemplate.infrastructure.cluster.x-k8s.io/gtc-control-plane is Current: Resource is current
all resources has reached the Current status
```
$ airshipctl phase run workers-target --debug
```
[airshipctl] 2020/09/02 11:21:20 building bundle from kustomize path /tmp/airship/airshipctl/manifests/site/gcp-test-site/target/workers
[airshipctl] 2020/09/02 11:21:20 Applying bundle, inventory id: kind-capi-gcp-target-workers
[airshipctl] 2020/09/02 11:21:20 Inventory Object config Map not found, auto generating Invetory object
[airshipctl] 2020/09/02 11:21:20 Injecting Invetory Object: {"apiVersion":"v1","kind":"ConfigMap","metadata":{"creationTimestamp":null,"labels":{"cli-utils.sigs.k8s.io/inventory-id":"kind-capi-gcp-target-workers"},"name":"airshipit-kind-capi-gcp-target-workers","namespace":"airshipit"}}{nsfx:false,beh:unspecified} into bundle
[airshipctl] 2020/09/02 11:21:20 Making sure that inventory object namespace airshipit exists
configmap/airshipit-kind-capi-gcp-target-workers-1a36e40a created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/gtc-md-0 created
machinedeployment.cluster.x-k8s.io/gtc-md-0 created
gcpmachinetemplate.infrastructure.cluster.x-k8s.io/gtc-md-0 created
4 resource(s) applied. 4 created, 0 unchanged, 0 configured
configmap/airshipit-kind-capi-gcp-target-workers-1a36e40a is NotFound: Resource not found
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/gtc-md-0 is NotFound: Resource not found
machinedeployment.cluster.x-k8s.io/gtc-md-0 is NotFound: Resource not found
gcpmachinetemplate.infrastructure.cluster.x-k8s.io/gtc-md-0 is NotFound: Resource not found
configmap/airshipit-kind-capi-gcp-target-workers-1a36e40a is Current: Resource is always ready
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/gtc-md-0 is Current: Resource is current
machinedeployment.cluster.x-k8s.io/gtc-md-0 is Current: Resource is current
gcpmachinetemplate.infrastructure.cluster.x-k8s.io/gtc-md-0 is Current: Resource is current
```
$ kubectl get pods -A
```
NAMESPACE NAME READY STATUS RESTARTS AGE
capg-system capg-controller-manager-b8655ddb4-swwzk 2/2 Running 0 6m9s
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-66c6b6857b-22hg4 2/2 Running 0 6m28s
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-688f7ccc56-7g676 2/2 Running 0 6m20s
capi-system capi-controller-manager-549c757797-6vscq 2/2 Running 0 6m39s
capi-webhook-system capg-controller-manager-d5f85c48d-74gj6 2/2 Running 0 6m16s
capi-webhook-system capi-controller-manager-5f8fc485bb-stflj 2/2 Running 0 6m43s
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-6b645d9d4c-2crk7 2/2 Running 0 6m36s
capi-webhook-system capi-kubeadm-control-plane-controller-manager-65dbd6f999-cghmx 2/2 Running 0 6m25s
cert-manager cert-manager-77d8f4d85f-cqp7m 1/1 Running 0 7m10s
cert-manager cert-manager-cainjector-75f88c9f56-qh9m8 1/1 Running 0 7m10s
cert-manager cert-manager-webhook-56669d7fcb-6zddl 1/1 Running 0 7m10s
kube-system coredns-66bff467f8-kmg7c 1/1 Running 0 9m10s
kube-system coredns-66bff467f8-lg8qc 1/1 Running 0 9m10s
kube-system etcd-capi-gcp-control-plane 1/1 Running 0 9m19s
kube-system kindnet-dzp8v 1/1 Running 0 9m10s
kube-system kube-apiserver-capi-gcp-control-plane 1/1 Running 0 9m19s
kube-system kube-controller-manager-capi-gcp-control-plane 1/1 Running 0 9m18s
kube-system kube-proxy-zvdh8 1/1 Running 0 9m10s
kube-system kube-scheduler-capi-gcp-control-plane 1/1 Running 0 9m11s
local-path-storage local-path-provisioner-bd4bb6b75-6drnt 1/1 Running 0 9m10s
```
To check logs, run the below command:
$ kubectl logs capg-controller-manager-b8655ddb4-swwzk -n capg-system --all-containers=true -f
```
I0902 18:15:30.884391 1 main.go:213] Generating self signed cert as no cert is provided
I0902 18:15:35.135060 1 main.go:243] Starting TCP socket on 0.0.0.0:8443
I0902 18:15:35.175185 1 main.go:250] Listening securely on 0.0.0.0:8443
I0902 18:15:51.111202 1 listener.go:44] controller-runtime/metrics "msg"="metrics server is starting to listen" "addr"="127.0.0.1:8080"
I0902 18:15:51.113054 1 main.go:205] setup "msg"="starting manager"
I0902 18:15:51.113917 1 leaderelection.go:242] attempting to acquire leader lease capg-system/controller-leader-election-capg...
I0902 18:15:51.114691 1 internal.go:356] controller-runtime/manager "msg"="starting metrics server" "path"="/metrics"
I0902 18:15:51.142032 1 leaderelection.go:252] successfully acquired lease capg-system/controller-leader-election-capg
I0902 18:15:51.145165 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "c
```
$ kubectl get machines
```
NAME PROVIDERID PHASE
gtc-control-plane-cxcd4 gce://virtual-anchor-281401/us-west1-a/gtc-control-plane-vmplz Running
gtc-md-0-6cf7474cff-zpbxv gce://virtual-anchor-281401/us-west1-a/gtc-md-0-7mccx Running
```
$ kubectl --namespace=default get secret/gtc-kubeconfig -o jsonpath={.data.value} | base64 --decode > ./gtc.kubeconfig
$ kubectl get pods -A --kubeconfig ./gtc.kubeconfig
```
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-6d4fbb6df9-8lf4f 1/1 Running 0 5m18s
kube-system calico-node-6lmqw 1/1 Running 0 73s
kube-system calico-node-qtgzj 1/1 Running 1 5m18s
kube-system coredns-5644d7b6d9-dqd75 1/1 Running 0 5m18s
kube-system coredns-5644d7b6d9-ls2q9 1/1 Running 0 5m18s
kube-system etcd-gtc-control-plane-vmplz 1/1 Running 0 4m53s
kube-system kube-apiserver-gtc-control-plane-vmplz 1/1 Running 0 4m42s
kube-system kube-controller-manager-gtc-control-plane-vmplz 1/1 Running 0 4m59s
kube-system kube-proxy-6hk8c 1/1 Running 0 5m18s
kube-system kube-proxy-b8mqw 1/1 Running 0 73s
kube-system kube-scheduler-gtc-control-plane-vmplz 1/1 Running 0 4m47s
```
Now the control plane and worker nodes are created on Google Cloud.
## Tear Down Clusters
If you would like to delete the cluster, run the below commands. This will
delete the control plane, workers, machine health check and all other resources
associated with the cluster on gcp.
$ airshipctl phase render controlplane -k Cluster
```
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: gtc
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    name: gtc-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: GCPCluster
    name: gtc
...
```
$ airshipctl phase render controlplane -k Cluster | kubectl delete -f -
```
cluster.cluster.x-k8s.io "gtc" deleted
```
$ kind delete cluster --name capi-gcp
```
Deleting cluster "capi-gcp" ...
```
## Reference
### Provider Manifests
Provider configuration is referenced from https://github.com/kubernetes-sigs/cluster-api-provider-gcp/tree/master/config
Cluster API does not support the gcp provider out of the box, so the metadata information is added using files in
airshipctl/manifests/function/capg/data
$ tree airshipctl/manifests/function/capg
```
airshipctl/manifests/function/capg
└── v0.3.0
├── certmanager
│ ├── certificate.yaml
│ ├── kustomization.yaml
│ └── kustomizeconfig.yaml
├── crd
│ ├── bases
│ │ ├── infrastructure.cluster.x-k8s.io_gcpclusters.yaml
│ │ ├── infrastructure.cluster.x-k8s.io_gcpmachines.yaml
│ │ └── infrastructure.cluster.x-k8s.io_gcpmachinetemplates.yaml
│ ├── kustomization.yaml
│ ├── kustomizeconfig.yaml
│ └── patches
│ ├── cainjection_in_gcpclusters.yaml
│ ├── cainjection_in_gcpmachines.yaml
│ ├── cainjection_in_gcpmachinetemplates.yaml
│ ├── webhook_in_gcpclusters.yaml
│ ├── webhook_in_gcpmachines.yaml
│ └── webhook_in_gcpmachinetemplates.yaml
├── data
│ ├── capg-resources.yaml
│ ├── kustomization.yaml
│ └── metadata.yaml
├── default
│ ├── credentials.yaml
│ ├── kustomization.yaml
│ ├── manager_credentials_patch.yaml
│ ├── manager_prometheus_metrics_patch.yaml
│ ├── manager_role_aggregation_patch.yaml
│ └── namespace.yaml
├── kustomization.yaml
├── manager
│ ├── kustomization.yaml
│ ├── manager_auth_proxy_patch.yaml
│ ├── manager_image_patch.yaml
│ ├── manager_pull_policy.yaml
│ └── manager.yaml
├── patch_crd_webhook_namespace.yaml
├── rbac
│ ├── auth_proxy_role_binding.yaml
│ ├── auth_proxy_role.yaml
│ ├── auth_proxy_service.yaml
│ ├── kustomization.yaml
│ ├── leader_election_role_binding.yaml
│ ├── leader_election_role.yaml
│ ├── role_binding.yaml
│ └── role.yaml
└── webhook
├── kustomization.yaml
├── kustomizeconfig.yaml
├── manager_webhook_patch.yaml
├── manifests.yaml
├── service.yaml
└── webhookcainjection_patch.yaml
```
#### CAPG Specific Variables
capg-resources.yaml consists of the `gcp provider specific` variables required
to initialize the management cluster. The values for these variables can be
exported before running `airshipctl phase run clusterctl-init-ephemeral`, or
they can be defined explicitly in clusterctl.yaml.
$ cat airshipctl/manifests/function/capg/v0.3.0/data/capg-resources.yaml
```
apiVersion: v1
kind: Secret
metadata:
  name: manager-bootstrap-credentials
  namespace: system
type: Opaque
data:
  GCP_CONTROL_PLANE_MACHINE_TYPE: ${GCP_CONTROL_PLANE_MACHINE_TYPE}
  GCP_NODE_MACHINE_TYPE: ${GCP_NODE_MACHINE_TYPE}
  GCP_PROJECT: ${GCP_PROJECT}
  GCP_REGION: ${GCP_REGION}
  GCP_NETWORK_NAME: ${GCP_NETWORK_NAME}
  GCP_B64ENCODED_CREDENTIALS: ${GCP_B64ENCODED_CREDENTIALS}
```
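For the export route, the variables can be set in the shell before running the init phase. A sketch; the project id and key path are placeholders you must replace:

```shell
export GCP_CONTROL_PLANE_MACHINE_TYPE="n1-standard-4"
export GCP_NODE_MACHINE_TYPE="n1-standard-4"
export GCP_REGION="us-west1"
export GCP_NETWORK_NAME="default"
export GCP_PROJECT="my-project-id"   # placeholder: use your own project id
# Placeholder key path; see "Create Service Account" above.
if [ -f "$HOME/serviceaccount-key.json" ]; then
    export GCP_B64ENCODED_CREDENTIALS="$(base64 < "$HOME/serviceaccount-key.json" | tr -d '\n')"
fi
```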
### Cluster Templates
manifests/function/k8scontrol-capg contains cluster.yaml, controlplane.yaml templates referenced from
[cluster-template](https://github.com/kubernetes-sigs/cluster-api-provider-gcp/blob/master/templates/cluster-template.yaml)
| Template Name | CRDs |
| ----------------- | ---- |
| cluster.yaml | Cluster, GCPCluster |
| controlplane.yaml | KubeadmControlPlane, GCPMachineTemplate |
$ tree airshipctl/manifests/function/k8scontrol-capg
```
airshipctl/manifests/function/k8scontrol-capg
├── cluster.yaml
├── controlplane.yaml
└── kustomization.yaml
```
airshipctl/manifests/function/workers-capg contains workers.yaml referenced from
[cluster-template](https://github.com/kubernetes-sigs/cluster-api-provider-gcp/blob/master/templates/cluster-template.yaml)
| Template Name | CRDs |
| ----------------- | ---- |
| workers.yaml | GCPMachineTemplate, MachineDeployment, KubeadmConfigTemplate |
$ tree airshipctl/manifests/function/workers-capg
```
airshipctl/manifests/function/workers-capg
├── kustomization.yaml
└── workers.yaml
```
### Test Site Manifests
#### gcp-test-site/shared
`airshipctl phase run clusterctl-init-ephemeral` uses
airshipctl/manifests/site/gcp-test-site/shared/clusterctl to initialize the
management cluster with the defined provider components and versions.
$ tree airshipctl/manifests/site/gcp-test-site/shared
```
airshipctl/manifests/site/gcp-test-site/shared
└── clusterctl
├── clusterctl.yaml
└── kustomization.yaml
```
#### gcp-test-site/target
There are 3 phases currently available in gcp-test-site/target:
|Phase Name | Purpose |
|-----------|---------|
| controlplane | Patches templates in manifests/function/k8scontrol-capg |
| workers | Patches templates in manifests/function/workers-capg |
| initinfra | Simply calls `gcp-test-site/shared/clusterctl` |
Note: `airshipctl phase run clusterctl-init-ephemeral` initializes all the provider components
including the gcp infrastructure provider component.
#### Patch Merge Strategy
Json patches are applied on templates in `manifests/function/k8scontrol-capg`
from `airshipctl/manifests/site/gcp-test-site/target/controlplane` when
`airshipctl phase run controlplane-target` is executed.
Json patches are applied on templates in `manifests/function/workers-capg` from
`airshipctl/manifests/site/gcp-test-site/target/workers` when `airshipctl phase
run workers-target` is executed.
| Patch Name | Purpose |
| ------------------------------- | ------------------------------------------------------------------ |
| controlplane/machine_count.json | patches control plane machine count in template function/k8scontrol-capg |
| controlplane/machine_type.json | patches control plane machine type in template function/k8scontrol-capg |
| controlplane/network_name.json | patches control plane network name in template function/k8scontrol-capg |
| controlplane/project_name.json | patches project id in template function/k8scontrol-capg |
| controlplane/region_name.json | patches region name in template function/k8scontrol-capg |
| workers/machine_count.json | patches worker machine count in template function/workers-capg |
| workers/machine_type.json | patches worker machine type in template function/workers-capg |
| workers/failure_domain.json | patches failure_domain in template function/workers-capg |
$ tree airshipctl/manifests/site/gcp-test-site/target/
```
airshipctl/manifests/site/gcp-test-site/target/
├── controlplane
│ ├── kustomization.yaml
│ ├── machine_count.json
│ ├── machine_type.json
│ ├── network_name.json
│ ├── project_name.json
│ └── region_name.json
├── initinfra
│ └── kustomization.yaml
└── workers
├── failure_domain.json
├── kustomization.yaml
├── machine_count.json
└── machine_type.json
3 directories, 11 files
```
### Software Version Information
All the instructions provided in the document have been tested using the
software versions listed in this section.
#### Virtual Machine Specification
All the instructions in the document were performed on an Oracle VirtualBox
(6.1) VM running Ubuntu 18.04.4 LTS (Bionic Beaver) with 16 GB of memory and 4
vCPUs.
#### Docker
$ docker version
```
Client: Docker Engine - Community
Version: 19.03.9
API version: 1.40
Go version: go1.13.10
Git commit: 9d988398e7
Built: Fri May 15 00:25:18 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.9
API version: 1.40 (minimum version 1.12)
Go version: go1.13.10
Git commit: 9d988398e7
Built: Fri May 15 00:23:50 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
```
#### Kind
$ kind version
```
kind v0.8.1 go1.14.2 linux/amd64
```
#### Kubectl
$ kubectl version
```
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2020-01-14T00:09:19Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
```
#### Go
$ go version
```
go version go1.14.1 linux/amd64
```
#### Kustomize
$ kustomize version
```
{Version:kustomize/v3.8.0 GitCommit:6a50372dd5686df22750b0c729adaf369fbf193c BuildDate:2020-07-05T14:08:42Z GoOs:linux GoArch:amd64}
```
#### OS
$ cat /etc/os-release
```
NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.4 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
```


@ -0,0 +1,24 @@
# The following manifests contain a self-signed issuer CR and a certificate CR.
# More document can be found at https://docs.cert-manager.io
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: system
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: serving-cert  # this name should match the one appeared in kustomizeconfig.yaml
  namespace: system
spec:
  # $(SERVICE_NAME) and $(SERVICE_NAMESPACE) will be substituted by kustomize
  dnsNames:
    - $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc
    - $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc.cluster.local
  issuerRef:
    kind: Issuer
    name: selfsigned-issuer
  secretName: $(SERVICE_NAME)-cert  # this secret will not be prefixed, since it's not managed by kustomize


@ -0,0 +1,7 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- certificate.yaml
configurations:
- kustomizeconfig.yaml


@ -0,0 +1,19 @@
# This configuration is for teaching kustomize how to update name ref and var substitution
nameReference:
  - kind: Issuer
    group: cert-manager.io
    fieldSpecs:
      - kind: Certificate
        group: cert-manager.io
        path: spec/issuerRef/name
varReference:
  - kind: Certificate
    group: cert-manager.io
    path: spec/commonName
  - kind: Certificate
    group: cert-manager.io
    path: spec/dnsNames
  - kind: Certificate
    group: cert-manager.io
    path: spec/secretName


@ -0,0 +1,236 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.2.6
creationTimestamp: null
name: gcpclusters.infrastructure.cluster.x-k8s.io
spec:
group: infrastructure.cluster.x-k8s.io
names:
categories:
- cluster-api
kind: GCPCluster
listKind: GCPClusterList
plural: gcpclusters
singular: gcpcluster
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: Cluster to which this GCPCluster belongs
jsonPath: .metadata.labels.cluster\.x-k8s\.io/cluster-name
name: Cluster
type: string
- description: Cluster infrastructure is ready for GCE instances
jsonPath: .status.ready
name: Ready
type: string
- description: GCP network the cluster is using
jsonPath: .spec.network.name
name: Network
type: string
- description: API Endpoint
jsonPath: .status.apiEndpoints[0]
name: Endpoint
priority: 1
type: string
name: v1alpha3
schema:
openAPIV3Schema:
description: GCPCluster is the Schema for the gcpclusters API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: GCPClusterSpec defines the desired state of GCPCluster
properties:
additionalLabels:
additionalProperties:
type: string
description: AdditionalLabels is an optional set of tags to add to
GCP resources managed by the GCP provider, in addition to the ones
added by default.
type: object
controlPlaneEndpoint:
description: ControlPlaneEndpoint represents the endpoint used to
communicate with the control plane.
properties:
host:
description: The hostname on which the API server is serving.
type: string
port:
description: The port on which the API server is serving.
format: int32
type: integer
required:
- host
- port
type: object
network:
description: NetworkSpec encapsulates all things related to GCP network.
properties:
autoCreateSubnetworks:
description: "AutoCreateSubnetworks: When set to true, the VPC
network is created in \"auto\" mode. When set to false, the
VPC network is created in \"custom\" mode. \n An auto mode VPC
network starts with one subnet per region. Each subnet has a
predetermined range as described in Auto mode VPC network IP
ranges. \n Defaults to true."
type: boolean
loadBalancerBackendPort:
description: Allow for configuration of load balancer backend
(useful for changing apiserver port)
format: int32
type: integer
name:
description: Name is the name of the network to be used.
type: string
subnets:
description: Subnets configuration.
items:
                      description: SubnetSpec configures a GCP Subnet.
properties:
cidrBlock:
description: CidrBlock is the range of internal addresses
that are owned by this subnetwork. Provide this property
when you create the subnetwork. For example, 10.0.0.0/8
or 192.168.0.0/16. Ranges must be unique and non-overlapping
within a network. Only IPv4 is supported. This field can
be set only at resource creation time.
type: string
description:
description: Description is an optional description associated
with the resource.
type: string
name:
description: Name defines a unique identifier to reference
this resource.
type: string
privateGoogleAccess:
description: PrivateGoogleAccess defines whether VMs in
this subnet can access Google services without assigning
external IP addresses
type: boolean
region:
description: Region is the name of the region where the
Subnetwork resides.
type: string
                        enableFlowLogs:
description: 'EnableFlowLogs: Whether to enable flow logging
for this subnetwork. If this field is not explicitly set,
it will not appear in get listings. If not set the default
behavior is to disable flow logging.'
type: boolean
secondaryCidrBlocks:
additionalProperties:
type: string
description: SecondaryCidrBlocks defines secondary CIDR
ranges, from which secondary IP ranges of a VM may be
allocated
type: object
type: object
type: array
type: object
project:
description: Project is the name of the project to deploy the cluster
to.
type: string
region:
description: The GCP Region the cluster lives in.
type: string
required:
- project
- region
type: object
status:
description: GCPClusterStatus defines the observed state of GCPCluster
properties:
failureDomains:
additionalProperties:
description: FailureDomainSpec is the Schema for Cluster API failure
domains. It allows controllers to understand how many failure
domains a cluster can optionally span across.
properties:
attributes:
additionalProperties:
type: string
description: Attributes is a free form map of attributes an
infrastructure provider might use or require.
type: object
controlPlane:
description: ControlPlane determines if this failure domain
is suitable for use by control plane machines.
type: boolean
type: object
description: FailureDomains is a slice of FailureDomains.
type: object
network:
description: Network encapsulates GCP networking resources.
properties:
apiServerBackendService:
description: APIServerBackendService is the full reference to
the backend service created for the API Server.
type: string
apiServerForwardingRule:
description: APIServerForwardingRule is the full reference to
the forwarding rule created for the API Server.
type: string
apiServerHealthCheck:
description: APIServerHealthCheck is the full reference to the
health check created for the API Server.
type: string
apiServerInstanceGroups:
additionalProperties:
type: string
description: APIServerInstanceGroups is a map from zone to the
full reference to the instance groups created for the control
plane nodes created in the same zone.
type: object
apiServerIpAddress:
description: APIServerAddress is the IPV4 global address assigned
to the load balancer created for the API Server.
type: string
apiServerTargetProxy:
description: APIServerTargetProxy is the full reference to the
target proxy created for the API Server.
type: string
firewallRules:
additionalProperties:
type: string
description: FirewallRules is a map from the name of the rule
to its full reference.
type: object
selfLink:
description: SelfLink is the link to the Network used for this
cluster.
type: string
type: object
ready:
                description: Ready denotes that the cluster infrastructure is ready.
type: boolean
required:
- ready
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
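
For reference, a manifest satisfying this schema only needs the required `spec.project` and `spec.region` fields. A minimal sketch (the name, project, and region values are illustrative assumptions, not part of this change):

```shell
# Write out a minimal GCPCluster example conforming to the CRD above.
cat > /tmp/gcpcluster-example.yaml <<'EOF'
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: GCPCluster
metadata:
  name: example-cluster
  namespace: default
spec:
  project: my-gcp-project   # required
  region: us-west1          # required
EOF
```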

View File

@ -0,0 +1,195 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.2.6
creationTimestamp: null
name: gcpmachines.infrastructure.cluster.x-k8s.io
spec:
group: infrastructure.cluster.x-k8s.io
names:
categories:
- cluster-api
kind: GCPMachine
listKind: GCPMachineList
plural: gcpmachines
singular: gcpmachine
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: Cluster to which this GCPMachine belongs
jsonPath: .metadata.labels.cluster\.x-k8s\.io/cluster-name
name: Cluster
type: string
- description: GCE instance state
jsonPath: .status.instanceState
name: State
type: string
- description: Machine ready status
jsonPath: .status.ready
name: Ready
type: string
- description: GCE instance ID
jsonPath: .spec.providerID
name: InstanceID
type: string
    - description: Machine object which owns this GCPMachine
jsonPath: .metadata.ownerReferences[?(@.kind=="Machine")].name
name: Machine
type: string
name: v1alpha3
schema:
openAPIV3Schema:
description: GCPMachine is the Schema for the gcpmachines API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: GCPMachineSpec defines the desired state of GCPMachine
properties:
additionalLabels:
additionalProperties:
type: string
description: AdditionalLabels is an optional set of tags to add to
an instance, in addition to the ones added by default by the GCP
provider. If both the GCPCluster and the GCPMachine specify the
same tag name with different values, the GCPMachine's value takes
precedence.
type: object
additionalNetworkTags:
description: AdditionalNetworkTags is a list of network tags that
should be applied to the instance. These tags are set in addition
to any network tags defined at the cluster level or in the actuator.
items:
type: string
type: array
image:
description: Image is the full reference to a valid image to be used
for this machine. Takes precedence over ImageFamily.
type: string
imageFamily:
description: ImageFamily is the full reference to a valid image family
to be used for this machine.
type: string
instanceType:
description: 'InstanceType is the type of instance to create. Example:
                  n1-standard-2'
type: string
providerID:
description: ProviderID is the unique identifier as specified by the
cloud provider.
type: string
publicIP:
description: PublicIP specifies whether the instance should get a
                  public IP. Set this to true if you don't have a NAT instance or
                  Cloud NAT set up.
type: boolean
rootDeviceSize:
description: RootDeviceSize is the size of the root volume in GB.
Defaults to 30.
format: int64
type: integer
serviceAccounts:
description: 'ServiceAccount specifies the service account email and
which scopes to assign to the machine. Defaults to: email: "default",
scope: []{compute.CloudPlatformScope}'
properties:
email:
description: 'Email: Email address of the service account.'
type: string
scopes:
description: 'Scopes: The list of scopes to be made available
for this service account.'
items:
type: string
type: array
type: object
subnet:
description: Subnet is a reference to the subnetwork to use for this
instance. If not specified, the first subnetwork retrieved from
the Cluster Region and Network is picked.
type: string
required:
- instanceType
type: object
status:
description: GCPMachineStatus defines the observed state of GCPMachine
properties:
addresses:
description: Addresses contains the GCP instance associated addresses.
items:
description: NodeAddress contains information for the node's address.
properties:
address:
description: The node address.
type: string
type:
description: Node address type, one of Hostname, ExternalIP
or InternalIP.
type: string
required:
- address
- type
type: object
type: array
failureMessage:
description: "FailureMessage will be set in the event that there is
a terminal problem reconciling the Machine and will contain a more
verbose string suitable for logging and human consumption. \n This
field should not be set for transitive errors that a controller
faces that are expected to be fixed automatically over time (like
service outages), but instead indicate that something is fundamentally
wrong with the Machine's spec or the configuration of the controller,
and that manual intervention is required. Examples of terminal errors
would be invalid combinations of settings in the spec, values that
are unsupported by the controller, or the responsible controller
itself being critically misconfigured. \n Any transient errors that
occur during the reconciliation of Machines can be added as events
to the Machine object and/or logged in the controller's output."
type: string
failureReason:
description: "FailureReason will be set in the event that there is
a terminal problem reconciling the Machine and will contain a succinct
value suitable for machine interpretation. \n This field should
not be set for transitive errors that a controller faces that are
expected to be fixed automatically over time (like service outages),
but instead indicate that something is fundamentally wrong with
the Machine's spec or the configuration of the controller, and that
manual intervention is required. Examples of terminal errors would
be invalid combinations of settings in the spec, values that are
unsupported by the controller, or the responsible controller itself
being critically misconfigured. \n Any transient errors that occur
during the reconciliation of Machines can be added as events to
the Machine object and/or logged in the controller's output."
type: string
instanceState:
description: InstanceStatus is the status of the GCP instance for
this machine.
type: string
ready:
description: Ready is true when the provider resource is ready.
type: boolean
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@ -0,0 +1,130 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.2.6
creationTimestamp: null
name: gcpmachinetemplates.infrastructure.cluster.x-k8s.io
spec:
group: infrastructure.cluster.x-k8s.io
names:
categories:
- cluster-api
kind: GCPMachineTemplate
listKind: GCPMachineTemplateList
plural: gcpmachinetemplates
singular: gcpmachinetemplate
scope: Namespaced
versions:
- name: v1alpha3
schema:
openAPIV3Schema:
description: GCPMachineTemplate is the Schema for the gcpmachinetemplates
API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: GCPMachineTemplateSpec defines the desired state of GCPMachineTemplate
properties:
template:
description: GCPMachineTemplateResource describes the data needed
              to create a GCPMachine from a template
properties:
spec:
description: Spec is the specification of the desired behavior
of the machine.
properties:
additionalLabels:
additionalProperties:
type: string
description: AdditionalLabels is an optional set of tags to
add to an instance, in addition to the ones added by default
by the GCP provider. If both the GCPCluster and the GCPMachine
specify the same tag name with different values, the GCPMachine's
value takes precedence.
type: object
additionalNetworkTags:
description: AdditionalNetworkTags is a list of network tags
that should be applied to the instance. These tags are set
in addition to any network tags defined at the cluster level
or in the actuator.
items:
type: string
type: array
image:
description: Image is the full reference to a valid image
to be used for this machine. Takes precedence over ImageFamily.
type: string
imageFamily:
description: ImageFamily is the full reference to a valid
image family to be used for this machine.
type: string
instanceType:
description: 'InstanceType is the type of instance to create.
                      Example: n1-standard-2'
type: string
providerID:
description: ProviderID is the unique identifier as specified
by the cloud provider.
type: string
publicIP:
description: PublicIP specifies whether the instance should
                      get a public IP. Set this to true if you don't have a NAT
                      instance or Cloud NAT set up.
type: boolean
rootDeviceSize:
description: RootDeviceSize is the size of the root volume
in GB. Defaults to 30.
format: int64
type: integer
serviceAccounts:
description: 'ServiceAccount specifies the service account
email and which scopes to assign to the machine. Defaults
to: email: "default", scope: []{compute.CloudPlatformScope}'
properties:
email:
description: 'Email: Email address of the service account.'
type: string
scopes:
description: 'Scopes: The list of scopes to be made available
for this service account.'
items:
type: string
type: array
type: object
subnet:
description: Subnet is a reference to the subnetwork to use
for this instance. If not specified, the first subnetwork
retrieved from the Cluster Region and Network is picked.
type: string
required:
- instanceType
type: object
required:
- spec
type: object
required:
- template
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@ -0,0 +1,30 @@
commonLabels:
cluster.x-k8s.io/v1alpha3: v1alpha3
# This kustomization.yaml is not intended to be run by itself,
# since it depends on service name and namespace that are out of this kustomize package.
# It should be run by config/default
resources:
- bases/infrastructure.cluster.x-k8s.io_gcpmachines.yaml
- bases/infrastructure.cluster.x-k8s.io_gcpclusters.yaml
- bases/infrastructure.cluster.x-k8s.io_gcpmachinetemplates.yaml
# +kubebuilder:scaffold:crdkustomizeresource
#patches:
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix.
# patches here are for enabling the conversion webhook for each CRD
#- patches/webhook_in_gcpmachines.yaml
#- patches/webhook_in_gcpclusters.yaml
#- patches/webhook_in_gcpmachinetemplates.yaml
# +kubebuilder:scaffold:crdkustomizewebhookpatch
# [CERTMANAGER] To enable webhook, uncomment all the sections with [CERTMANAGER] prefix.
# patches here are for enabling the CA injection for each CRD
#- patches/cainjection_in_gcpmachines.yaml
#- patches/cainjection_in_gcpclusters.yaml
#- patches/cainjection_in_gcpmachinetemplates.yaml
# +kubebuilder:scaffold:crdkustomizecainjectionpatch
# the following config is for teaching kustomize how to do kustomization for CRDs.
configurations:
- kustomizeconfig.yaml

View File

@ -0,0 +1,17 @@
# This file is for teaching kustomize how to substitute name and namespace references in CRDs
nameReference:
- kind: Service
version: v1
fieldSpecs:
- kind: CustomResourceDefinition
group: apiextensions.k8s.io
path: spec/conversion/webhook/clientConfig/service/name
namespace:
- kind: CustomResourceDefinition
group: apiextensions.k8s.io
path: spec/conversion/webhook/clientConfig/service/namespace
create: false
varReference:
- path: metadata/annotations

View File

@ -0,0 +1,8 @@
# The following patch adds a directive for certmanager to inject CA into the CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
name: gcpclusters.infrastructure.cluster.x-k8s.io

View File

@ -0,0 +1,8 @@
# The following patch adds a directive for certmanager to inject CA into the CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
name: gcpmachines.infrastructure.cluster.x-k8s.io

View File

@ -0,0 +1,8 @@
# The following patch adds a directive for certmanager to inject CA into the CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
name: gcpmachinetemplates.infrastructure.cluster.x-k8s.io

View File

@ -0,0 +1,19 @@
# The following patch enables conversion webhook for CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: gcpclusters.infrastructure.cluster.x-k8s.io
spec:
conversion:
strategy: Webhook
webhook:
conversionReviewVersions: ["v1", "v1beta1"]
clientConfig:
# this is "\n" used as a placeholder, otherwise it will be rejected by the apiserver for being blank,
# but we're going to set it later using the cert-manager (or potentially a patch if not using cert-manager)
caBundle: Cg==
service:
namespace: system
name: webhook-service
path: /convert

View File

@ -0,0 +1,19 @@
# The following patch enables conversion webhook for CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: gcpmachines.infrastructure.cluster.x-k8s.io
spec:
conversion:
strategy: Webhook
webhook:
conversionReviewVersions: ["v1", "v1beta1"]
clientConfig:
# this is "\n" used as a placeholder, otherwise it will be rejected by the apiserver for being blank,
# but we're going to set it later using the cert-manager (or potentially a patch if not using cert-manager)
caBundle: Cg==
service:
namespace: system
name: webhook-service
path: /convert

View File

@ -0,0 +1,19 @@
# The following patch enables conversion webhook for CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: gcpmachinetemplates.infrastructure.cluster.x-k8s.io
spec:
conversion:
strategy: Webhook
webhook:
conversionReviewVersions: ["v1", "v1beta1"]
clientConfig:
# this is "\n" used as a placeholder, otherwise it will be rejected by the apiserver for being blank,
# but we're going to set it later using the cert-manager (or potentially a patch if not using cert-manager)
caBundle: Cg==
service:
namespace: system
name: webhook-service
path: /convert

View File

@ -0,0 +1,14 @@
---
apiVersion: v1
kind: Secret
metadata:
name: manager-bootstrap-credentials
namespace: system
type: Opaque
data:
GCP_CONTROL_PLANE_MACHINE_TYPE: ${GCP_CONTROL_PLANE_MACHINE_TYPE}
GCP_NODE_MACHINE_TYPE: ${GCP_NODE_MACHINE_TYPE}
GCP_PROJECT: ${GCP_PROJECT}
GCP_REGION: ${GCP_REGION}
GCP_NETWORK_NAME: ${GCP_NETWORK_NAME}
GCP_B64ENCODED_CREDENTIALS: ${GCP_B64ENCODED_CREDENTIALS}
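
The placeholders in this Secret are filled in from environment variables before the manifest is applied; in particular, `GCP_B64ENCODED_CREDENTIALS` carries the base64-encoded service-account JSON key created earlier. A minimal sketch of producing these variables (the key path and project/region values are assumptions for illustration):

```shell
# Stand-in for the real service-account key JSON downloaded earlier (path is illustrative).
printf '%s' '{"type":"service_account"}' > /tmp/capg-sa-key.json

# Variables consumed by the Secret above.
export GCP_PROJECT="my-gcp-project"
export GCP_REGION="us-west1"
# GNU coreutils base64; on macOS use `base64 -b 0` instead of `-w0`.
export GCP_B64ENCODED_CREDENTIALS="$(base64 -w0 < /tmp/capg-sa-key.json)"
```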

View File

@ -0,0 +1,3 @@
resources:
- metadata.yaml
- capg-resources.yaml

View File

@ -0,0 +1,14 @@
---
apiVersion: clusterctl.cluster.x-k8s.io/v1alpha3
kind: Metadata
metadata:
name: repository-metadata
labels:
airshipit.org/deploy-k8s: "false"
releaseSeries:
- major: 0
minor: 3
contract: v1alpha3
- major: 0
minor: 2
contract: v1alpha2

View File

@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
name: manager-bootstrap-credentials
namespace: system
type: Opaque
data:
credentials.json: ${GCP_B64ENCODED_CREDENTIALS}

View File

@ -0,0 +1,12 @@
namespace: capg-system
resources:
- namespace.yaml
- credentials.yaml
bases:
- ../rbac
- ../manager
patchesStrategicMerge:
- manager_credentials_patch.yaml

View File

@ -0,0 +1,20 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
spec:
template:
spec:
containers:
- name: manager
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /home/.gcp/credentials.json
volumeMounts:
- name: credentials
mountPath: /home/.gcp
volumes:
- name: credentials
secret:
secretName: manager-bootstrap-credentials

View File

@ -0,0 +1,19 @@
# This patch enables Prometheus scraping for the manager pod.
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
spec:
template:
metadata:
annotations:
prometheus.io/scrape: 'true'
spec:
containers:
# Expose the prometheus metrics on default port
- name: manager
ports:
- containerPort: 8080
name: metrics
protocol: TCP

View File

@ -0,0 +1,15 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: manager-role
labels:
cluster.x-k8s.io/aggregate-to-manager: "true"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: manager-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: aggregated-manager-role

View File

@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: system

View File

@ -0,0 +1,31 @@
namePrefix: capg-
commonLabels:
cluster.x-k8s.io/provider: "infrastructure-gcp"
bases:
- crd
- webhook # Disable this if you're not using the webhook functionality.
- default
- data
# Enable this when conversion webhooks are implemented
#patchesJson6902:
#- target:
# group: apiextensions.k8s.io
# version: v1
# kind: CustomResourceDefinition
# name: gcpclusters.infrastructure.cluster.x-k8s.io
# path: patch_crd_webhook_namespace.yaml
#- target:
# group: apiextensions.k8s.io
# version: v1
# kind: CustomResourceDefinition
# name: gcpmachines.infrastructure.cluster.x-k8s.io
# path: patch_crd_webhook_namespace.yaml
#- target:
# group: apiextensions.k8s.io
# version: v1
# kind: CustomResourceDefinition
# name: gcpmachinetemplates.infrastructure.cluster.x-k8s.io
# path: patch_crd_webhook_namespace.yaml

View File

@ -0,0 +1,7 @@
resources:
- manager.yaml
patchesStrategicMerge:
- manager_image_patch.yaml
- manager_pull_policy.yaml
- manager_auth_proxy_patch.yaml

View File

@ -0,0 +1,40 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
labels:
control-plane: capg-controller-manager
spec:
selector:
matchLabels:
control-plane: capg-controller-manager
replicas: 1
template:
metadata:
labels:
control-plane: capg-controller-manager
spec:
containers:
- args:
- --enable-leader-election
image: controller:latest
imagePullPolicy: Always
name: manager
ports:
- containerPort: 9440
name: healthz
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: healthz
livenessProbe:
httpGet:
path: /healthz
port: healthz
terminationGracePeriodSeconds: 10
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master

View File

@ -0,0 +1,25 @@
# This patch injects a sidecar container which is an HTTP proxy for the controller manager;
# it performs RBAC authorization against the Kubernetes API using SubjectAccessReviews.
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
spec:
template:
spec:
containers:
- name: kube-rbac-proxy
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.4.1
args:
- "--secure-listen-address=0.0.0.0:8443"
- "--upstream=http://127.0.0.1:8080/"
- "--logtostderr=true"
- "--v=10"
ports:
- containerPort: 8443
name: https
- name: manager
args:
- "--metrics-addr=127.0.0.1:8080"
- "--enable-leader-election"

View File

@ -0,0 +1,12 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
spec:
template:
spec:
containers:
# Change the value of image field below to your controller image URL
- image: gcr.io/k8s-staging-cluster-api-gcp/cluster-api-gcp-controller:master
name: manager

View File

@ -0,0 +1,11 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
spec:
template:
spec:
containers:
- name: manager
imagePullPolicy: Always

View File

@ -0,0 +1,3 @@
- op: replace
path: "/spec/conversion/webhook/clientConfig/service/namespace"
value: capi-webhook-system

View File

@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: proxy-role
rules:
- apiGroups: ["authentication.k8s.io"]
resources:
- tokenreviews
verbs: ["create"]
- apiGroups: ["authorization.k8s.io"]
resources:
- subjectaccessreviews
verbs: ["create"]

View File

@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: proxy-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: proxy-role
subjects:
- kind: ServiceAccount
name: default
namespace: system

View File

@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/port: "8443"
prometheus.io/scheme: https
prometheus.io/scrape: "true"
labels:
control-plane: capg-controller-manager
name: controller-manager-metrics-service
namespace: system
spec:
ports:
- name: https
port: 8443
targetPort: https
selector:
control-plane: capg-controller-manager

View File

@ -0,0 +1,10 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- role.yaml
- role_binding.yaml
- leader_election_role.yaml
- leader_election_role_binding.yaml
- auth_proxy_service.yaml
- auth_proxy_role.yaml
- auth_proxy_role_binding.yaml

View File

@ -0,0 +1,26 @@
# permissions to do leader election.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: leader-election-role
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ""
resources:
- configmaps/status
verbs:
- get
- update
- patch

View File

@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: leader-election-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: leader-election-role
subjects:
- kind: ServiceAccount
name: default
namespace: system

View File

@ -0,0 +1,85 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: manager-role
rules:
- apiGroups:
- ""
resources:
- events
verbs:
- create
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- cluster.x-k8s.io
resources:
- clusters
- clusters/status
verbs:
- get
- list
- watch
- apiGroups:
- cluster.x-k8s.io
resources:
- machines
- machines/status
verbs:
- get
- list
- watch
- apiGroups:
- infrastructure.cluster.x-k8s.io
resources:
- gcpclusters
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- infrastructure.cluster.x-k8s.io
resources:
- gcpclusters/status
verbs:
- get
- patch
- update
- apiGroups:
- infrastructure.cluster.x-k8s.io
resources:
- gcpmachines
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- infrastructure.cluster.x-k8s.io
resources:
- gcpmachines/status
verbs:
- get
- patch
- update

View File

@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: manager-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: manager-role
subjects:
- kind: ServiceAccount
name: default
namespace: system

View File

@ -0,0 +1,42 @@
namespace: capi-webhook-system
resources:
- manifests.yaml
- service.yaml
- ../certmanager
- ../manager
configurations:
- kustomizeconfig.yaml
patchesStrategicMerge:
- manager_webhook_patch.yaml
- webhookcainjection_patch.yaml # Disable this value if you don't have any defaulting or validation webhook. If you don't know, you can check if the manifests.yaml file in the same directory has any contents.
vars:
- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
objref:
kind: Certificate
group: cert-manager.io
version: v1alpha2
name: serving-cert # this name should match the one in certificate.yaml
fieldref:
fieldpath: metadata.namespace
- name: CERTIFICATE_NAME
objref:
kind: Certificate
group: cert-manager.io
version: v1alpha2
name: serving-cert # this name should match the one in certificate.yaml
- name: SERVICE_NAMESPACE # namespace of the service
objref:
kind: Service
version: v1
name: webhook-service
fieldref:
fieldpath: metadata.namespace
- name: SERVICE_NAME
objref:
kind: Service
version: v1
name: webhook-service

View File

@ -0,0 +1,27 @@
# the following config is for teaching kustomize where to look when substituting vars.
# It requires kustomize v2.1.0 or newer to work properly.
nameReference:
- kind: Service
version: v1
fieldSpecs:
- kind: MutatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/name
- kind: ValidatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/name
namespace:
- kind: MutatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/namespace
create: true
- kind: ValidatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/namespace
create: true
varReference:
- path: metadata/annotations
- kind: Deployment
path: spec/template/spec/volumes/secret/secretName

View File

@ -0,0 +1,26 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
spec:
template:
spec:
containers:
- name: manager
args:
- "--metrics-addr=127.0.0.1:8080"
- "--webhook-port=9443"
ports:
- containerPort: 9443
name: webhook-server
protocol: TCP
volumeMounts:
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: cert
readOnly: true
volumes:
- name: cert
secret:
defaultMode: 420
secretName: $(SERVICE_NAME)-cert

View File

@ -0,0 +1,46 @@
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
creationTimestamp: null
name: validating-webhook-configuration
webhooks:
- clientConfig:
caBundle: Cg==
service:
name: webhook-service
namespace: system
path: /validate-infrastructure-cluster-x-k8s-io-v1alpha3-gcpmachine
failurePolicy: Fail
matchPolicy: Equivalent
name: validation.gcpmachine.infrastructure.cluster.x-k8s.io
rules:
- apiGroups:
- infrastructure.cluster.x-k8s.io
apiVersions:
- v1alpha3
operations:
- CREATE
- UPDATE
resources:
- gcpmachines
- clientConfig:
caBundle: Cg==
service:
name: webhook-service
namespace: system
path: /validate-infrastructure-cluster-x-k8s-io-v1alpha3-gcpmachinetemplate
failurePolicy: Fail
matchPolicy: Equivalent
name: validation.gcpmachinetemplate.infrastructure.x-k8s.io
rules:
- apiGroups:
- infrastructure.cluster.x-k8s.io
apiVersions:
- v1alpha3
operations:
- CREATE
- UPDATE
resources:
- gcpmachinetemplates


@ -0,0 +1,9 @@
apiVersion: v1
kind: Service
metadata:
name: webhook-service
namespace: system
spec:
ports:
- port: 443
targetPort: webhook-server


@ -0,0 +1,16 @@
# This patch adds an annotation to the admission webhook config;
# the variables $(CERTIFICATE_NAMESPACE) and $(CERTIFICATE_NAME) are substituted by kustomize.
# Uncomment the following lines to enable the mutating webhook.
#apiVersion: admissionregistration.k8s.io/v1beta1
#kind: MutatingWebhookConfiguration
#metadata:
# name: mutating-webhook-configuration
# annotations:
# cert-manager.k8s.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
name: validating-webhook-configuration
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)


@ -0,0 +1,29 @@
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
name: "gtc"
namespace: default
spec:
clusterNetwork:
pods:
cidrBlocks: ["192.168.0.0/16"]
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: GCPCluster
name: "gtc"
controlPlaneRef:
kind: KubeadmControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
name: "gtc-control-plane"
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: GCPCluster
metadata:
name: "gtc"
namespace: default
spec:
project: "${GCP_PROJECT}"
region: "${GCP_REGION}"
network:
name: "${GCP_NETWORK_NAME}"


@ -0,0 +1,44 @@
kind: KubeadmControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
metadata:
name: "gtc-control-plane"
namespace: default
spec:
replicas: "${CONTROL_PLANE_MACHINE_COUNT}"
infrastructureTemplate:
kind: GCPMachineTemplate
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
name: "gtc-control-plane"
kubeadmConfigSpec:
initConfiguration:
nodeRegistration:
name: '{{ ds.meta_data.local_hostname.split(".")[0] }}'
kubeletExtraArgs:
cloud-provider: gce
clusterConfiguration:
apiServer:
timeoutForControlPlane: 20m
extraArgs:
cloud-provider: gce
controllerManager:
extraArgs:
cloud-provider: gce
allocate-node-cidrs: "false"
joinConfiguration:
nodeRegistration:
name: '{{ ds.meta_data.local_hostname.split(".")[0] }}'
kubeletExtraArgs:
cloud-provider: gce
postKubeadmCommands:
- sudo kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f https://docs.projectcalico.org/v3.15/manifests/calico.yaml
version: "v1.16.14"
---
kind: GCPMachineTemplate
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
metadata:
name: "gtc-control-plane"
namespace: default
spec:
template:
spec:
instanceType: "${GCP_CONTROL_PLANE_MACHINE_TYPE}"
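The `nodeRegistration` name used in the control plane template above is a cloud-init Jinja expression that trims the GCE internal FQDN down to the short hostname. A quick sketch of what the expression evaluates to, with a hypothetical FQDN:

```python
# The '{{ ds.meta_data.local_hostname.split(".")[0] }}' expression above keeps
# only the short hostname from the GCE internal FQDN (this FQDN is an example).
fqdn = "gtc-control-plane-abc12.c.my-gcp-project.internal"
short_hostname = fqdn.split(".")[0]
print(short_hostname)  # -> gtc-control-plane-abc12
```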


@ -0,0 +1,5 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- cluster.yaml
- controlplane.yaml


@ -0,0 +1,4 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- workers.yaml


@ -0,0 +1,48 @@
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
name: "gtc-md-0"
namespace: default
spec:
clusterName: "gtc"
replicas: "${WORKER_MACHINE_COUNT}"
selector:
matchLabels:
template:
spec:
clusterName: "gtc"
failureDomain: "${GCP_REGION}"
version: "v1.16.14"
bootstrap:
configRef:
name: "gtc-md-0"
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
infrastructureRef:
name: "gtc-md-0"
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: GCPMachineTemplate
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: GCPMachineTemplate
metadata:
name: "gtc-md-0"
namespace: default
spec:
template:
spec:
instanceType: "${GCP_NODE_MACHINE_TYPE}"
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
metadata:
name: "gtc-md-0"
namespace: default
spec:
template:
spec:
joinConfiguration:
nodeRegistration:
name: '{{ ds.meta_data.local_hostname.split(".")[0] }}'
kubeletExtraArgs:
cloud-provider: gce


@ -0,0 +1,39 @@
apiVersion: airshipit.org/v1alpha1
kind: Clusterctl
metadata:
labels:
airshipit.org/deploy-k8s: "false"
name: clusterctl-v1
init-options:
core-provider: "cluster-api:v0.3.3"
bootstrap-providers:
- "kubeadm:v0.3.3"
infrastructure-providers:
- "gcp:v0.3.0"
control-plane-providers:
- "kubeadm:v0.3.3"
providers:
- name: "gcp"
type: "InfrastructureProvider"
variable-substitution: true
versions:
v0.3.0: manifests/function/capg/v0.3.0
- name: "kubeadm"
type: "BootstrapProvider"
versions:
v0.3.3: manifests/function/cabpk/v0.3.3
- name: "cluster-api"
type: "CoreProvider"
versions:
v0.3.3: manifests/function/capi/v0.3.3
- name: "kubeadm"
type: "ControlPlaneProvider"
versions:
v0.3.3: manifests/function/cacpk/v0.3.3
additional-vars:
GCP_CONTROL_PLANE_MACHINE_TYPE: "YmFzZTY0X2VuY29kZWRfdmFsdWU="
GCP_NODE_MACHINE_TYPE: "YmFzZTY0X2VuY29kZWRfdmFsdWU="
GCP_PROJECT: "YmFzZTY0X2VuY29kZWRfdmFsdWU="
GCP_REGION: "YmFzZTY0X2VuY29kZWRfdmFsdWU="
GCP_NETWORK_NAME: "YmFzZTY0X2VuY29kZWRfdmFsdWU="
GCP_B64ENCODED_CREDENTIALS: "YmFzZTY0X2VuY29kZWRfdmFsdWU="
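The `additional-vars` above are base64-encoded placeholders (`YmFzZTY0X2VuY29kZWRfdmFsdWU=` decodes to `base64_encoded_value`) that must be replaced with real encoded values; `GCP_B64ENCODED_CREDENTIALS` is the base64 of the service account JSON key created earlier. A sketch of producing such values — the project, region, and machine type here are hypothetical:

```python
import base64

# The shipped placeholder decodes to a literal marker string:
assert base64.b64decode("YmFzZTY0X2VuY29kZWRfdmFsdWU=") == b"base64_encoded_value"

# Encoding real values (these particular values are assumptions, not defaults):
plain = {
    "GCP_PROJECT": "my-gcp-project",
    "GCP_REGION": "us-west1",
    "GCP_NETWORK_NAME": "default",
    "GCP_CONTROL_PLANE_MACHINE_TYPE": "n1-standard-4",
}
encoded = {k: base64.b64encode(v.encode()).decode() for k, v in plain.items()}
print(encoded["GCP_REGION"])  # -> dXMtd2VzdDE=
```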


@ -0,0 +1,2 @@
resources:
- clusterctl.yaml


@ -0,0 +1,43 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../../../function/k8scontrol-capg
patchesJson6902:
- target:
group: infrastructure.cluster.x-k8s.io
version: v1alpha3
kind: GCPCluster
name: "gtc"
path: project_name.json
- target:
group: infrastructure.cluster.x-k8s.io
version: v1alpha3
kind: GCPCluster
name: "gtc"
path: region_name.json
- target:
group: infrastructure.cluster.x-k8s.io
version: v1alpha3
kind: GCPCluster
name: "gtc"
path: network_name.json
- target:
group: controlplane.cluster.x-k8s.io
version: v1alpha3
kind: KubeadmControlPlane
name: "gtc-control-plane"
path: machine_count.json
- target:
group: infrastructure.cluster.x-k8s.io
version: v1alpha3
kind: GCPMachineTemplate
name: "gtc-control-plane"
path: machine_type.json


@ -0,0 +1,3 @@
[
{ "op": "replace","path": "/spec/replicas","value": 1 }
]
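The site-level JSON patches in this change all use a single RFC 6902 `replace` operation, applied by kustomize via `patchesJson6902`. An illustrative mini-applier (kustomize does this for real; the document below is a pared-down KubeadmControlPlane spec):

```python
import json

# Illustrative RFC 6902 "replace" applier for the machine_count.json patch above.
doc = {"spec": {"replicas": "${CONTROL_PLANE_MACHINE_COUNT}"}}
patch = json.loads('[{ "op": "replace","path": "/spec/replicas","value": 1 }]')

for op in patch:
    # Walk to the parent of the target field, then overwrite the leaf.
    *parents, leaf = op["path"].lstrip("/").split("/")
    target = doc
    for key in parents:
        target = target[key]
    target[leaf] = op["value"]

print(doc)  # -> {'spec': {'replicas': 1}}
```

This is why the quoted `"${CONTROL_PLANE_MACHINE_COUNT}"` placeholder in the function manifest is harmless: the site patch replaces it with a concrete integer before the resource is applied.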


@ -0,0 +1,3 @@
[
{ "op": "replace","path": "/spec/template/spec/instanceType","value": "n1-standard-4" }
]


@ -0,0 +1,3 @@
[
{ "op": "replace","path": "/spec/network/name","value": "default" }
]


@ -0,0 +1,3 @@
[
{ "op": "replace","path": "/spec/project","value": "<project_id>"}
]


@ -0,0 +1,3 @@
[
{ "op": "replace","path": "/spec/region","value": "us-west1" }
]


@ -0,0 +1,4 @@
resources:
- ../../shared/clusterctl
commonLabels:
airshipit.org/stage: initinfra


@ -0,0 +1,3 @@
[
{ "op": "replace","path": "/spec/template/spec/failureDomain","value": "us-west1-a" }
]


@ -0,0 +1,28 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../../../function/workers-capg
patchesJson6902:
# Worker
- target:
group: cluster.x-k8s.io
version: v1alpha3
kind: MachineDeployment
name: "gtc-md-0"
path: failure_domain.json
- target:
group: cluster.x-k8s.io
version: v1alpha3
kind: MachineDeployment
name: "gtc-md-0"
path: machine_count.json
- target:
group: infrastructure.cluster.x-k8s.io
version: v1alpha3
kind: GCPMachineTemplate
name: "gtc-md-0"
path: machine_type.json

View File

@ -0,0 +1,3 @@
[
{ "op": "replace","path": "/spec/replicas","value": 1 }
]


@ -0,0 +1,3 @@
[
{ "op": "replace","path": "/spec/template/spec/instanceType","value": "n1-standard-4" }
]