The lifecycle of the Virtual Machines and their relationship to the Kubernetes cluster will be managed using two operators: the vNode Operator (ViNO) and the Support Infra Provider Operator (SIP).
ViNO is responsible for setting up VM infrastructure, such as:

- virtual machine node labels, e.g. `node: rdm8r008c002`, which should follow the Kubernetes semi-standard node labels
- virtual networks, e.g. `networks: [foo, bar]`

Further details for ViNO can be found here.
The Cluster Support Infrastructure Provider, or SIP, is responsible for the lifecycle of:

- `BareMetalHost` resources to label (or unlabel) based on scheduling constraints
- `BareMetalHost` objects to use in the creation of supporting infrastructure
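To make the ViNO/cluster relationship concrete, here is a minimal sketch of what a ViNO custom resource might look like. The `apiVersion` and the fields under `spec` are illustrative assumptions only; the authoritative sample shipped with the repository is `config/samples/vino_cr.yaml`.

```yaml
# Hypothetical ViNO CR sketch -- the apiVersion and spec fields are assumptions;
# see config/samples/vino_cr.yaml in the repository for the real sample.
apiVersion: airship.airshipit.org/v1
kind: Vino
metadata:
  name: vino-test-cr
spec:
  # Assumed shape: VM node definitions plus the virtual networks to attach.
  nodes:
    - name: worker
      count: 2
  networks:
    - name: management
```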
ViNO is a project written in Go, and the make targets used to deploy ViNO leverage both Go and Kustomize commands, which require Go to be installed.
For detailed installation instructions, please see the Golang installation guide.
In order to apply manifests to your cluster via Make targets, we suggest the use of Kustomize.
For detailed installation instructions, please see the Kustomize installation guide.
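Since both toolchains are required by the Make targets, it can help to confirm that they are on your `PATH` before continuing. A small sketch in plain POSIX shell (`go` and `kustomize` are the standard binary names; nothing here is specific to ViNO):

```shell
# Check that each prerequisite binary is installed and reachable on PATH.
for tool in go kustomize; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    # Missing tools are reported on stderr; see the installation guides above.
    echo "$tool: NOT FOUND" >&2
  fi
done
```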
If your organization requires development behind a proxy server, you will need to define the following environment variables with your organization's information:

```
HTTP_PROXY=http://username:password@host:port
HTTPS_PROXY=http://username:password@host:port
NO_PROXY="localhost,127.0.0.1,10.96.0.0/12"
PROXY=http://username:password@host:port
USE_PROXY=true
```

`10.96.0.0/12` is the Kubernetes service CIDR.
Airship projects often have to deploy Kubernetes, with common requirements such as supporting network policies or working behind corporate proxies. To that end, the community maintains a Kubernetes deployment script, which is the suggested way of deploying your Kubernetes cluster for development purposes.
```
# curl -Lo deploy-k8s.sh https://opendev.org/airship/charts/raw/branch/master/tools/gate/deploy-k8s.sh
# chmod +x deploy-k8s.sh
# sudo ./deploy-k8s.sh
```
Once your cluster is up and running, you'll need to build the ViNO image and deploy the operator on your cluster:
```
# make docker-build-controller
# make deploy
```
Once these steps are completed, you should have a working cluster with ViNO deployed on top of it:
```
# kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7985fc4dd6-6q5l4    1/1     Running   0          3h7m
kube-system   calico-node-lqzxp                           1/1     Running   0          3h7m
kube-system   coredns-f9fd979d6-gbdzl                     1/1     Running   0          3h7m
kube-system   etcd-ubuntu-virtualbox                      1/1     Running   0          3h8m
kube-system   kube-apiserver-ubuntu-virtualbox            1/1     Running   0          3h8m
kube-system   kube-controller-manager-ubuntu-virtualbox   1/1     Running   0          3h8m
kube-system   kube-proxy-ml4gd                            1/1     Running   0          3h7m
kube-system   kube-scheduler-ubuntu-virtualbox            1/1     Running   0          3h8m
kube-system   storage-provisioner                         1/1     Running   0          3h8m
vino-system   vino-controller-manager-788b994c74-sbf26    2/2     Running   0          25m
```
To exercise the operator, apply the sample ViNO CR and verify that its pods and DaemonSet are created:

```
# kubectl apply -f config/samples/vino_cr.yaml
# kubectl get pods
# kubectl get ds
```
Delete the ViNO CR and make sure the DaemonSet is deleted as well:
```
# kubectl delete vino vino-test-cr
# kubectl get ds
# kubectl get cm
```
For any questions on ViNO, or other Airship projects, we encourage you to join the community on Slack/IRC or participate in the mailing list. Please see this Wiki for contact information and community meeting schedules.