Added Automation scripts

sirishag 2020-06-11 19:58:53 +05:30
parent 06fd808041
commit a55cd3a399
19 changed files with 301 additions and 113 deletions

README.md Normal file

@@ -0,0 +1,109 @@
# Airship HostConfig Using Ansible Operator
This repo contains the code for the Airship HostConfig application built using the Ansible Operator.
## How to Run
### Approach 1
If a Kubernetes setup is not available, please refer to the README.md in the k8s folder to bring one up. It uses Vagrant and VirtualBox to create 1 master and 2 worker node VMs.
After the VMs are up and running, connect to the master node
```
vagrant ssh k8-master
```
Navigate to the airship-host-config folder
```
cd airship-host-config/airship-host-config/
```
Execute the create_labels.sh script so that the Kubernetes nodes are labelled as master and worker nodes. It also attaches some sample zones and regions to the Kubernetes nodes.
```
./create_labels.sh
```
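To verify that the labels were applied, you can list them back (a quick sanity check; the label keys below are the ones set by create_labels.sh):
```
kubectl get nodes -L kubernetes.io/role -L topology.kubernetes.io/region -L topology.kubernetes.io/zone
```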
Execute the setup.sh script to build the Airship HostConfig Ansible Operator image and copy it to the worker nodes. It also deploys the application on the Kubernetes setup as a Deployment. The script below uses "vagrant" as both username and password, and executes the Ansible hostconfig role on the Kubernetes nodes using these credentials when we create the HostConfig Kubernetes CR objects.
```
./setup.sh
```
If you want to execute the Ansible playbook in the hostconfig example as a different user, you can pass the username and password of the Kubernetes nodes when executing setup.sh. The <username> and <password> passed are then used to execute the hostconfig role on the Kubernetes nodes when we create the HostConfig Kubernetes CR objects.
```
./setup.sh <username> <password>
```
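Once setup.sh completes, a quick way to confirm that the operator came up (assuming the default names used by this repo's deploy manifests) is:
```
kubectl get deployments
kubectl get pods | grep airship-host-config
```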
### Approach 2
If a Kubernetes setup is already available, please follow the procedure below.
**Pre-requisites: Access to the Kubernetes setup using kubectl**
Set the KUBECONFIG variable
```
export KUBECONFIG=~/.kube/config
```
Clone the repository
```
git clone https://github.com/SirishaGopigiri/airship-host-config.git
```
Navigate to the airship-host-config folder
```
cd airship-host-config/airship-host-config/
```
Execute the setup.sh script to build the Airship HostConfig Ansible Operator image and copy it to the worker nodes. It also deploys the application on the Kubernetes setup as a Deployment. The script below uses "vagrant" as both username and password, and executes the Ansible hostconfig role on the Kubernetes nodes using these credentials when we create the HostConfig Kubernetes CR objects.
```
./setup.sh
```
If you want to execute the Ansible playbook in the hostconfig example as a different user, you can pass the username and password of the Kubernetes nodes when executing setup.sh. The <username> and <password> passed are then used to execute the hostconfig role on the Kubernetes nodes when we create the HostConfig Kubernetes CR objects.
```
./setup.sh <username> <password>
```
## Run Examples
After setup.sh has executed successfully, navigate to the demo_examples folder and execute the desired examples.
Before executing the examples, keep tailing the logs of the airship-host-config pod to see the Ansible playbook getting executed while the examples run
```
kubectl get pods
kubectl logs -f <airship-host-config-pod-name>
```
Execute the examples
```
cd demo_examples
kubectl apply -f example.yaml
kubectl apply -f example1.yaml
kubectl apply -f example2.yaml
```
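The examples create HostConfig custom resources (the CRD is registered by setup.sh), so they can be listed like any other Kubernetes resource. A quick check, assuming the CR defined in example1.yaml is named example1:
```
kubectl get hostconfigs
kubectl describe hostconfig example1
```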
Apart from the pod logs, executing the hostconfig role creates a "testing" file on the Kubernetes nodes. Please check the contents of that file; it records the time at which the hostconfig role was executed by the HostConfig Ansible Operator pod.
Execute the command below on the Kubernetes hosts to get the execution time.
```
cat /home/vagrant/testing
```
If the setup is configured with a different user, check using the command below
```
cat /home/<username>/testing
```
## Licensing
[Apache License, Version 2.0](http://opensource.org/licenses/Apache-2.0).


@@ -1,5 +0,0 @@
[defaults]
roles_path = /opt/ansible/roles
library = /usr/share/ansible/openshift
remote_tmp = /tmp/ansible
host_key_checking = False


@@ -1,27 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEA07lBYNQ41IEUF7hnxndAY0Xf6/Dfghqgqvrufc9RmuVg4+1e
Ab7KD5fQacBn5jI1e0muoONUKPOw2ig3vTrIAI4SeaiqIwGGDUPqE8HJWuoRCu9z
fC9EzbG8rxgohgjqLCWhKm0LJpaOcmrAecEas9Qq74VMECDPgMYTgtZ7VHzCjnBe
LSsivKcc9G+xR7kvSOhg1TOZUMhuzuZ+zFAcWtRzrqPLEns+kF3abpa+L/3NvsD6
UKpY+qJr6aRNsw/iwqtW4Xn2O2dbqJ3/ROhxdvl0nk2yyqVtJej5ZrbXDgSBnVWO
0TgByl/sgO0kFPQ/0eclB+j9KMd8Wl9BGb/NMwIDAQABAoIBAC0eV25ZC1tNvohn
hcXnd9Mv+s3+4MKLBh4pp1UsLwnBQ+qOlO/uRoUYJxPCKuIFZRxG0W37w92OQOvc
kjRDKIflvs4qQUeAdZ6yEFnxfAVhyAv6hzO5pwHmlH0Duu8FS1HpGvU9k5i/kM+V
LDtGCXi1CAlO8KynMVER5OqG5nVUVZ7Et1hAveoM4ayloG6RjQ9w+vpWuqsvpVwx
jePNSJ3qCbjtxtAHlJSrXIiKbgqgkoffK85G01wQvRWTk5FnDFLH8EsPeqKhvTn6
AreFSE5kN/HKigGCKlz4N09YJyX+m1uy6qXT8wZlQkyQwLAORxnqY414ShU+uCoP
ldxgSVECgYEA8XfL56wsEZBxwktYSNzFWmwRHrQBGVpJArahdkSz4ZuJAY8q/Ddi
H2KlYr4aQ5bkdrR7mDn+MpRomZh2GkOic/XdxZvxi5nXM2W7+B1wqqpl+Il1+6CJ
++2nyms75/ot8U+pUoGz3SIOfwAOc6VKwWsQYgpmqSrgck0EIXDieTcCgYEA4Hcz
LhTrc18dfkQ28+hCG77s1/k0wRwtzlkILOihBjo97LwFcTqEor+X9t3H6ls91STa
9rhexDTORn0Dp1/V4ixyAbDg/mhK6IOQ4rjmYmRgoWbOexO9vZALujoRyFyA78xd
xp3sBfdjtWXU84KPqeCiYjWQ2pS1fDLT2RbFGeUCgYA9Ed5JLptKqeyLhkDC1Ms5
DkHaMQ5iGhqDDCuT3NZdxdeFxG7LsToo0+seKRQ9aelIOGdV3bzzj+NQjWW5SMfK
ajF3q/QQKY1q210J6HA5SbVWgXWMeVLMm5OnNy3EgtqhwFMDofgagmWGKz58cx6Q
AoL3OMg0Grr/TYkw5/rvSwKBgCBndN8BLCBiqcpRpLE/ZVPGE0D2e/Qo0kAIwFJj
XuOcQtZLKmn3LbClAhYkXDjr5RhBEs8tPJkMmn64i299OU5GZkryMvjnK3E3lRH1
6WRo4z5JriM8bVbRVbATs/99wytbEGqc37bYyO8l/UEOJxk6EZcl7nxvnWeJmuWr
ENc1AoGBAKxdXkratZYAElMdmJu243VxGNbpE0ABWEEtGMoWDa4G9Z3GZPZK27aL
wTlh0EBn6JxLdruT4nhfpo4/soWOirMEh6Sh1hH4keOuss/Jd/1EYUHY5az7Eiy+
umjY6NBiDNBE4Od0csJYsUeI3wYIp8ijT58C3JCjlXeGprVCS/fE
-----END RSA PRIVATE KEY-----


@@ -1 +0,0 @@
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTuUFg1DjUgRQXuGfGd0BjRd/r8N+CGqCq+u59z1Ga5WDj7V4BvsoPl9BpwGfmMjV7Sa6g41Qo87DaKDe9OsgAjhJ5qKojAYYNQ+oTwcla6hEK73N8L0TNsbyvGCiGCOosJaEqbQsmlo5yasB5wRqz1CrvhUwQIM+AxhOC1ntUfMKOcF4tKyK8pxz0b7FHuS9I6GDVM5lQyG7O5n7MUBxa1HOuo8sSez6QXdpulr4v/c2+wPpQqlj6omvppE2zD+LCq1bhefY7Z1uonf9E6HF2+XSeTbLKpW0l6PlmttcOBIGdVY7ROAHKX+yA7SQU9D/R5yUH6P0ox3xaX0EZv80z sirisha@dumbledore


@@ -0,0 +1,13 @@
#!/bin/bash
kubectl label node k8s-master kubernetes.io/role=master
kubectl label node k8s-node-1 kubernetes.io/role=worker
kubectl label node k8s-node-2 kubernetes.io/role=worker
kubectl label node k8s-master topology.kubernetes.io/region=us-east
kubectl label node k8s-node-1 topology.kubernetes.io/region=us-west
kubectl label node k8s-node-2 topology.kubernetes.io/region=us-east
kubectl label node k8s-master topology.kubernetes.io/zone=us-east-1a
kubectl label node k8s-node-1 topology.kubernetes.io/zone=us-east-1b
kubectl label node k8s-node-2 topology.kubernetes.io/zone=us-east-1c


@@ -5,7 +5,7 @@ metadata:
 spec:
   # Add fields here
   message: "Its a big world"
-  execution_groups:
-    - "k8-worker-1"
-    - "k8-worker-2"
-  execution_order: true
+  host_groups:
+    - "master"
+    - "worker"
+  execution_order: false


@@ -5,7 +5,7 @@ metadata:
 spec:
   # Add fields here
   message: "Its a big world"
-  execution_groups:
-    - "us-east"
-    - "us-west"
+  host_groups:
+    - "master"
+    - "worker"
   execution_order: true


@@ -5,8 +5,7 @@ metadata:
 spec:
   # Add fields here
   message: "Its a big world"
-  execution_groups:
-    - "worker"
-    - "master"
-  execution_strategy: 2
+  host_groups:
+    - "us-east"
+    - "us-west"
   execution_order: true


@@ -5,8 +5,8 @@ metadata:
 spec:
   # Add fields here
   message: "Its a big world"
-  execution_groups:
-    - "us-east-1a"
-    - "us-east-1c"
-    - "us-east-1b"
+  host_groups:
+    - "worker"
+    - "master"
+  execution_strategy: 1
   execution_order: true


@@ -0,0 +1,12 @@
apiVersion: hostconfig.airshipit.org/v1alpha1
kind: HostConfig
metadata:
  name: example5
spec:
  # Add fields here
  message: "Its a big world"
  host_groups:
    - "us-east-1a"
    - "us-east-1c"
    - "us-east-1b"
  execution_order: true


@@ -17,17 +17,13 @@ spec:
       containers:
         - name: airship-host-config
           # Replace this with the built image name
-          #image: "host-config:v1"
-          image: "airship-host-config:v2"
-          imagePullPolicy: "IfNotPresent"
+          image: "AIRSHIP_HOSTCONFIG_IMAGE"
+          imagePullPolicy: "PULL_POLICY"
           securityContext:
             privileged: true
           volumeMounts:
             - mountPath: /tmp/ansible-operator/runner
               name: runner
-          # add inventory in here, and its used.
-          #- mountPath: /opt/ansible/inventory
-          #  name: inventory
           env:
             - name: WATCH_NAMESPACE
               valueFrom:
@@ -44,13 +40,9 @@ spec:
             - name: ANSIBLE_INVENTORY
               value: /opt/ansible/inventory
             - name: USER
-              value: "sirisha"
+              value: "USERNAME"
             - name: PASS
-              value: "testing"
+              value: "PASSWORD"
       volumes:
         - name: runner
           emptyDir: {}
-        #- name: inventory
-        #  hostPath:
-        #    type: Directory
-        #    path: /home/sirisha/airship-host-config/airship-host-config/inventory


@@ -20,7 +20,6 @@ class KubeInventory(object):
         self.api_instance = kubernetes.client.CoreV1Api(kubernetes.config.load_incluster_config())
         if self.args.list:
-            # self.inventory = self.kube_inventory()
             self.kube_inventory()
         elif self.args.host:
             # Not implemented, since we return _meta info `--list`.
@@ -37,6 +36,7 @@ class KubeInventory(object):
         self.set_ssh_keys()
         self.get_nodes()

+    # Sets the ssh username and password using the pod environment variables
     def set_ssh_keys(self):
         self.inventory["group"]["vars"]["ansible_ssh_user"] = os.environ.get("USER") if "USER" in os.environ else "kubernetes"
         if "PASS" in os.environ:
@@ -47,6 +47,8 @@ class KubeInventory(object):
             ] = "~/.ssh/id_rsa"
         return

+    # Gets the Kubernetes nodes' labels and annotations and builds the inventory
+    # Also groups the kubernetes nodes based on the labels and annotations
     def get_nodes(self):
         #label_selector = "kubernetes.io/role="+role
         try:
@@ -54,9 +56,6 @@
             nodes = self.api_instance.list_node().to_dict()[
                 "items"
             ]
-            #nodes = self.api_instance.list_node(label_selector=label_selector).to_dict()[
-            #    "items"
-            #]
         except ApiException as e:
             print("Exception when calling CoreV1Api->list_node: %s\n" % e)
@@ -87,14 +86,10 @@ class KubeInventory(object):
                     self.inventory[value] = {"hosts": [], "vars": {}}
                 if node_internalip not in self.inventory[value]["hosts"]:
                     self.inventory[value]["hosts"].append(node_internalip)
-                #self.inventory["_meta"]["hostvars"][node_internalip] = {}
                 self.inventory["_meta"]["hostvars"][node_internalip][
                     "kube_node_name"
                 ] = node["metadata"]["name"]
-                #self.inventory["_meta"]["hostvars"][node_internalip]["architecture"] = node['status']['node_info']['architecture']
-                #self.inventory["_meta"]["hostvars"][node_internalip]["kernel_version"] = node['status']['node_info']['kernel_version']
-        # return self.inventory
         return

     def empty_inventory(self):
         return {"_meta": {"hostvars": {}}}


@@ -1,32 +1,60 @@
 ---
-#playbook.yaml
+# Ansible play to initialize custom variables
+# The below block of tasks executes only when the execution order is set to true,
+# which tells Ansible to execute the host_groups sequentially
 - name: DISPLAY THE INVENTORY VARS
   hosts: localhost
   gather_facts: no
   tasks:
     - name: Set Serial variable
       block:
+        ## Calculates the serial variable based on the host_groups defined in the Kubernetes HostConfig CR object
+        ## Uses the custom host_config_serial plugin and returns a list of integers
+        ## These integer values correspond to the number of hosts in each host group given in the Kubernetes HostConfig CR object
+        ## If we have a 3 master and 5 worker nodes setup, and in the Kubernetes HostConfig CR object we pass the
+        ## host_groups as master and worker, then using the host_config_serial plugin the variable returned
+        ## would be the list [3, 5], so that all 3 masters execute in the first iteration and
+        ## the 5 workers execute in the second iteration
+        ## This takes the groups parameter set by dynamic_inventory.py as argument
         - set_fact:
-            serial_var: "{{ execution_groups|serial(groups) }}"
+            host_config_serial_variable: "{{ host_groups|host_config_serial(groups) }}"
+        ## This custom filter plugin is used to further break the host_config_serial variable into equal lengths
+        ## as specified in the Kubernetes HostConfig CR object
+        ## If we have a 3 master and 5 worker nodes setup, and in the Kubernetes HostConfig CR object we pass the
+        ## host_groups as master and worker, and the serial_strategy is set to 2, then this custom filter returns
+        ## the following list of integers, where the [3, 5] list is further split based on the
+        ## serial_strategy, which here is 2:
+        ## host_config_serial_variable is [2, 1, 2, 2, 1]
+        ## This task is executed only when the execution_strategy is defined in the HostConfig CR object
+        ## When executed it overrides the host_config_serial_variable value set by the previous task
+        ## This takes host_groups and groups as parameters
         - set_fact:
-            serial_var: "{{ execution_strategy|serial_strategy(execution_groups, groups) }}"
+            host_config_serial_variable: "{{ execution_strategy|host_config_serial_strategy(host_groups, groups) }}"
          when: execution_strategy is defined
        - debug:
-            msg: "Serial Variable {{ serial_var }}"
-      when: execution_order is true and execution_groups is defined
+            msg: "Serial Variable {{ host_config_serial_variable }}"
+      when: execution_order is true and host_groups is defined

+# The tasks below get executed when execution_order is set to true and the order of execution should be
+# considered while executing
+# They use the host_config_serial_variable value set by the previous block and execute
+# the number of hosts set in host_config_serial_variable at every iteration
 - name: Execute Roles based on hosts
-  hosts: "{{ execution_groups | default('all')}}"
-  serial: "{{ hostvars['localhost']['serial_var'] | default('100%') }}"
+  hosts: "{{ host_groups | default('all')}}"
+  serial: "{{ hostvars['localhost']['host_config_serial_variable'] | default('100%') }}"
   gather_facts: no
   tasks:
     - import_role:
         name: hostconfig
       when: execution_order is true

+# Executed when the execution_order is set to false or not set
+# This is the default execution flow where Ansible gets all the hosts available in host_groups
+# and executes them in parallel
 - name: Execute Roles based on hosts
-  hosts: "{{ execution_groups | default('all')}}"
+  hosts: "{{ host_groups | default('all')}}"
   gather_facts: no
   tasks:
     - import_role:


@@ -5,20 +5,23 @@ __metaclass__ = type

 import json

-def serial(hosts, groups):
+# Calculates the number of hosts in each of the groups
+# The groups of interest are defined using host_groups
+# Returns a list of integers
+def host_config_serial(host_groups, groups):
     serial_list = list()
-    if type(hosts) != list:
+    if type(host_groups) != list:
         return ''
-    for i in hosts:
+    for i in host_groups:
         if i in groups.keys():
             serial_list.append(str(len(groups[i])))
     return str(serial_list)

 class FilterModule(object):
-    ''' Fake test plugin for ansible-operator '''
+    ''' HostConfig Serial plugin for ansible-operator '''

     def filters(self):
         return {
-            'serial': serial
+            'host_config_serial': host_config_serial
         }


@@ -0,0 +1,30 @@
#!/usr/bin/python3

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import json

# Further divides the host_config_serial variable into a new list
# so that in each iteration there will be no more than the
# strategy (int variable) number of hosts executing
def host_config_serial_strategy(strategy, host_groups, groups):
    serial_list = list()
    if type(strategy) != int and type(host_groups) != list:
        return ''
    for i in host_groups:
        if i in groups.keys():
            length = len(groups[i])
            serial_list += int(length/strategy) * [strategy]
            if length%strategy != 0:
                serial_list.append(length%strategy)
    return str(serial_list)

class FilterModule(object):
    ''' HostConfig Serial Strategy plugin for ansible-operator to calculate the serial variable '''

    def filters(self):
        return {
            'host_config_serial_strategy': host_config_serial_strategy
        }


@@ -1,27 +0,0 @@
#!/usr/bin/python3

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import json

def serial_strategy(strategy, hosts, groups):
    serial_list = list()
    if type(strategy) != int and type(hosts) != list:
        return ''
    for i in hosts:
        if i in groups.keys():
            length = len(groups[i])
            serial_list += int(length/strategy) * [strategy]
            if length%strategy != 0:
                serial_list.append(length%strategy)
    return str(serial_list)

class FilterModule(object):
    ''' Fake test plugin for ansible-operator '''

    def filters(self):
        return {
            'serial_strategy': serial_strategy
        }


@@ -8,16 +8,12 @@
     debug:
       msg: "Message: {{ message }}"

-  #- name: DISPLAY THE HOST VARS
-  #  debug:
-  #    msg: "The host variable is {{ hostvars }}."
   - name: DISPLAY HOST DETAILS
     debug:
       msg: "And the kubernetes node name is {{ kube_node_name }}, architecture is {{ architecture }} and kernel version is {{ kernel_version }}"

   - name: CREATING A FILE
-    shell: "hostname > /home/sirisha/testing; date >> /home/sirisha/testing;cat /home/sirisha/testing;sleep 5"
+    shell: "hostname > ~/testing; date >> ~/testing;cat ~/testing;sleep 5"
     register: output

   - debug: msg={{ output.stdout }}

airship-host-config/setup.sh Executable file

@@ -0,0 +1,70 @@
#!/bin/bash

RELEASE_VERSION=v0.8.0
AIRSHIP_PATH=airship-host-config/airship-host-config
IMAGE_NAME=airship-host-config

install_operator_sdk(){
    echo "Installing Operator-SDK to build image"
    curl -OJL https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu
    sudo mv operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu /usr/local/bin/operator-sdk
}

build_host_config_image(){
    echo "Building Airship host-config Ansible operator Image"
    cd $HOME/$AIRSHIP_PATH
    operator-sdk build $IMAGE_NAME
}

get_worker_ips(){
    echo >&2 "Getting other master and worker node IPs to copy Airship host-config Ansible Operator Image"
    IP_ADDR=`ifconfig enp0s8 | grep Mask | awk '{print $2}'| cut -f2 -d:`
    worker_node_ips=`kubectl get nodes -o wide | grep -v $IP_ADDR | awk '{print $6}' | sed -e '1d'`
    echo $worker_node_ips
}

save_and_load_docker_image(){
    cd $HOME/$AIRSHIP_PATH
    echo "Saving Airship host-config Ansible Operator Image so that it would be copied to other worker nodes"
    docker save $IMAGE_NAME -o $IMAGE_NAME
    worker_node_ips=$(get_worker_ips)
    echo "Copying Image to following worker Nodes"
    echo $worker_node_ips
    for i in $worker_node_ips
    do
        sshpass -p "vagrant" scp -o StrictHostKeyChecking=no $IMAGE_NAME vagrant@$i:~/.
        sshpass -p "vagrant" ssh vagrant@$i docker load -i $IMAGE_NAME
    done
}

get_username_password(){
    if [ -z "$1" ]; then USERNAME="vagrant"; else USERNAME=$1; fi
    if [ -z "$2" ]; then PASSWORD="vagrant"; else PASSWORD=$2; fi
    echo $USERNAME $PASSWORD
}

deploy_airship_ansible_operator(){
    read USERNAME PASSWORD < <(get_username_password $1 $2)
    echo "Setting up Airship host-config Ansible operator"
    echo "Using Username: $USERNAME and Password: $PASSWORD of K8 nodes for host-config pod setup"
    sed -i "s/AIRSHIP_HOSTCONFIG_IMAGE/$IMAGE_NAME/g" $HOME/$AIRSHIP_PATH/deploy/operator.yaml
    sed -i "s/PULL_POLICY/IfNotPresent/g" $HOME/$AIRSHIP_PATH/deploy/operator.yaml
    sed -i "s/USERNAME/$USERNAME/g" $HOME/$AIRSHIP_PATH/deploy/operator.yaml
    sed -i "s/PASSWORD/$PASSWORD/g" $HOME/$AIRSHIP_PATH/deploy/operator.yaml
    kubectl apply -f $HOME/$AIRSHIP_PATH/deploy/crds/hostconfig.airshipit.org_hostconfigs_crd.yaml
    kubectl apply -f $HOME/$AIRSHIP_PATH/deploy/role.yaml
    kubectl apply -f $HOME/$AIRSHIP_PATH/deploy/service_account.yaml
    kubectl apply -f $HOME/$AIRSHIP_PATH/deploy/role_binding.yaml
    kubectl apply -f $HOME/$AIRSHIP_PATH/deploy/cluster_role_binding.yaml
    kubectl apply -f $HOME/$AIRSHIP_PATH/deploy/operator.yaml
}

configure_github_host_config_setup(){
    install_operator_sdk
    build_host_config_image
    save_and_load_docker_image
    deploy_airship_ansible_operator $1 $2
}

configure_github_host_config_setup $1 $2

k8s Submodule

@@ -0,0 +1 @@
Subproject commit a89d5dabfb596b6ace88baf9e83ce7332c736f12