
Remove hardcoded volume device and tenant network issues

This patch set fixes the following problems:

  1. Hardcoded device names such as eth0 and /dev/vdb were used.
     These names are now configurable.
  2. Tenant networks were required to be available.
     They are no longer required.
  3. The Terraform multiple-endpoints issue could not be overcome.
     The Docker Swarm workload test was recreated using Ansible.

Change-Id: Ic4bcd07caa7f7a27f7cb520fb3302fb547f085f0
changes/84/366784/7
Tong Li 4 years ago
parent
commit
22e9aad745
37 changed files with 690 additions and 37 deletions
  1. +4 -0 ansible/dockerswarm/.gitignore
  2. +128 -0 ansible/dockerswarm/README.md
  3. +2 -0 ansible/dockerswarm/ansible.cfg
  4. +1 -0 ansible/dockerswarm/hosts
  5. +11 -0 ansible/dockerswarm/roles/post_apply/tasks/main.yml
  6. +23 -0 ansible/dockerswarm/roles/post_destroy/tasks/main.yml
  7. +88 -0 ansible/dockerswarm/roles/prep_apply/tasks/main.yml
  8. +47 -0 ansible/dockerswarm/roles/prep_apply/templates/cloudinit.j2
  9. +13 -0 ansible/dockerswarm/roles/prep_destroy/tasks/main.yml
  10. +34 -0 ansible/dockerswarm/roles/prov_apply/tasks/main.yml
  11. +31 -0 ansible/dockerswarm/roles/prov_apply/templates/bootstrap1.j2
  12. +32 -0 ansible/dockerswarm/roles/prov_apply/templates/bootstrap2.j2
  13. +2 -0 ansible/dockerswarm/roles/prov_apply/templates/dockerservice.j2
  14. +8 -0 ansible/dockerswarm/roles/prov_apply/templates/openssl.cnf
  15. +12 -0 ansible/dockerswarm/roles/prov_destroy/tasks/main.yml
  16. +19 -0 ansible/dockerswarm/roles/vm_apply/tasks/main.yml
  17. +33 -0 ansible/dockerswarm/site.yml
  18. +21 -0 ansible/dockerswarm/vars/bluebox.yml
  19. +22 -0 ansible/dockerswarm/vars/leap.yml
  20. +22 -0 ansible/dockerswarm/vars/osic.yml
  21. +21 -0 ansible/dockerswarm/vars/ovh.yml
  22. +6 -0 ansible/lampstack/.gitignore
  23. +1 -0 ansible/lampstack/README.md
  24. +15 -1 ansible/lampstack/roles/apply/tasks/main.yml
  25. +2 -2 ansible/lampstack/roles/apply/templates/userdata.j2
  26. +6 -2 ansible/lampstack/roles/balancer/tasks/main.yml
  27. +6 -0 ansible/lampstack/roles/cleaner/tasks/apply.yml
  28. +17 -24 ansible/lampstack/roles/database/tasks/main.yml
  29. +9 -0 ansible/lampstack/roles/destroy/tasks/main.yml
  30. +27 -4 ansible/lampstack/roles/webserver/tasks/main.yml
  31. +7 -1 ansible/lampstack/site.yml
  32. +5 -1 ansible/lampstack/vars/bluebox.yml
  33. +5 -1 ansible/lampstack/vars/leap.yml
  34. +1 -0 terraform/dockerswarm-coreos/README.md
  35. +3 -0 terraform/dockerswarm-coreos/swarm.tf
  36. +1 -1 terraform/dockerswarm-coreos/templates/10-docker-service.conf
  37. +5 -0 terraform/dockerswarm-coreos/vars-openstack.tf

+4 -0 ansible/dockerswarm/.gitignore

@@ -0,0 +1,4 @@
*.out
*/**/*.log
*/**/.DS_Store
*/**/._

+128 -0 ansible/dockerswarm/README.md

@@ -0,0 +1,128 @@
# Docker Swarm Ansible deployments on OpenStack Cloud

## Status

This will install a 3 node Docker Swarm cluster. Once the script finishes,
a set of environment variables will be displayed. Export these environment
variables, then you can run docker commands against the swarm.

## Requirements

- [Install Ansible](http://docs.ansible.com/ansible/intro_installation.html)
- [Install OpenStack Shade](http://docs.openstack.org/infra/shade/installation.html)
- Make sure there is a CoreOS image available on your cloud.
- Clone this project into a directory.
- To run docker commands, you will need to install the docker client. Follow
the steps below if you are using Ubuntu to run the script; if you are using
some other environment to run the script, the steps for setting up the
docker client may differ::

apt-get update
apt-get -y install docker.io
ln -sf /usr/bin/docker.io /usr/local/bin/docker

## Ansible

Ansible and OpenStack Shade are used to provision all of the OpenStack
resources.

### Prep

#### Deal with ssh keys for Openstack Authentication

If you do not have an ssh key, create one with a tool such as ssh-keygen;
an example command is provided below. Once you have a key pair, ensure your
local ssh-agent is running and your ssh key has been added. This step is
required; if you skip it, you will have to enter the passphrase manually
while the script runs, and the script can fail. If you really do not want
to deal with a passphrase, you can create a key pair without one::

ssh-keygen -t rsa
eval $(ssh-agent -s)
ssh-add ~/.ssh/id_rsa

#### General Openstack Settings

Ansible's OpenStack cloud modules are used to provision compute resources
against an OpenStack cloud. Before you run the script, the cloud environment
must be specified. Sample files are provided in the vars directory; you may
create one such file per cloud for your tests. The following is an
example::

auth: {
auth_url: "http://x.x.x.x:5000/v3",
username: "demo",
password: "{{ password }}",
domain_name: "default",
project_name: "demo"
}

app_env: {
image_name: "coreos",
region_name: "",
private_net_name: "",
net_device: "eth0",
flavor_name: "m1.small",
swarm_version: "latest",
swarm_size: 3,
fqdn: "swarm.example.com",
public_key_file: "/home/tong/.ssh/id_rsa.pub"
}


The values of these variables should be provided by your cloud provider. When
using the keystone v2.0 API, you will not need to set up the domain name. If
your account has more than one region available, specify the region_name to
be used. If there is only one, you can leave it blank or use the correct
name. If your cloud does not expose tenant networks, leave private_net_name
blank as well. However, if your cloud supports tenant networks and you have
more than one tenant network in your account, you will need to specify which
tenant network to use; otherwise the script will error out. To create a
larger docker swarm, change swarm_size to a larger value such as 20, and the
script will create a docker swarm with 20 coreos nodes.


## Run the script

With your cloud environment set, you should be able to run the script::

ansible-playbook -e "action=apply env=leap password=XXXXX" site.yml

The command will stand up the nodes using a cloud named leap (vars/leap.yml).
To run the test against another cloud, create a new file with the same
structure and specify that cloud's attributes such as auth_url, then simply
replace the word leap with that file name. Replace XXXXX with your own cloud
account password; you can also simply put your password in the configuration
file (vars/leap.yml in this case) and avoid specifying it on the command
line.

If everything goes well, it will accomplish the following::

1. Provision 3 coreos nodes on your cloud
2. Create a security group
3. Add security rules to allow ping, ssh, and docker access
4. Set up ssl keys and certificates
5. Display a set of environment variables that you can use to run docker
commands


## Next Steps

### Check it's up

If there are no errors, you can export the environment variables shown by
the script at the end. Then you can start running docker commands; here are
a few examples::

docker info
docker images
docker pull ubuntu:vivid


## Cleanup

Once you're done with the swarm, don't forget to nuke the whole thing::

ansible-playbook -e "action=destroy env=leap password=XXXXX" site.yml

The above command will destroy all the resources created by the script.

+2 -0 ansible/dockerswarm/ansible.cfg

@@ -0,0 +1,2 @@
[defaults]
inventory = ./hosts

+1 -0 ansible/dockerswarm/hosts

@@ -0,0 +1 @@
cloud ansible_host=127.0.0.1

+11 -0 ansible/dockerswarm/roles/post_apply/tasks/main.yml

@@ -0,0 +1,11 @@
---
- debug:
msg: >-
export DOCKER_HOST=tcp://{{ hostvars.swarmnode1.swarmnode.openstack.public_v4 }}:2375;
export DOCKER_TLS_VERIFY=1;
export DOCKER_CERT_PATH=/tmp/{{ env }}/keys

- debug:
msg: >-
The work load test started at {{ starttime.time }},
ended at {{ ansible_date_time.time }}

+23 -0 ansible/dockerswarm/roles/post_destroy/tasks/main.yml

@@ -0,0 +1,23 @@
---
- name: Remove security group
os_security_group:
state: absent
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: dockerswarm_sg
description: security group for dockerswarm

- name: Delete discovery url directory
file: path="/tmp/{{ env }}" state=absent

- name: Delete a key-pair
os_keypair:
state: absent
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: "dockerswarm"

- debug:
msg: >-
The work load test started at {{ starttime.time }},
ended at {{ ansible_date_time.time }}

+88 -0 ansible/dockerswarm/roles/prep_apply/tasks/main.yml

@@ -0,0 +1,88 @@
---
- name: Get start timestamp
set_fact: starttime="{{ ansible_date_time }}"

- name: Create certificate directory
file: path="/tmp/{{ env }}/keys" state=directory

- stat: path="/tmp/{{ env }}/discovery_url"
register: discovery_url_flag

- name: Get docker discovery url
get_url:
url: "https://discovery.etcd.io/new?size={{ app_env.swarm_size }}"
dest: "/tmp/{{ env }}/discovery_url"
when: discovery_url_flag.stat.exists == false

- shell: openssl genrsa -out "/tmp/{{ env }}/keys/ca-key.pem" 2048
- shell: openssl genrsa -out "/tmp/{{ env }}/keys/key.pem" 2048

- shell: >-
openssl req -x509 -new -nodes -key /tmp/{{ env }}/keys/ca-key.pem
-days 10000 -out /tmp/{{ env }}/keys/ca.pem -subj '/CN=docker-CA'

- shell: >-
openssl req -new -key /tmp/{{ env }}/keys/key.pem
-out /tmp/{{ env }}/keys/cert.csr
-subj '/CN=docker-client' -config ./roles/prov_apply/templates/openssl.cnf

- shell: >-
openssl x509 -req -in /tmp/{{ env }}/keys/cert.csr
-CA /tmp/{{ env }}/keys/ca.pem -CAkey /tmp/{{ env }}/keys/ca-key.pem
-CAcreateserial -out /tmp/{{ env }}/keys/cert.pem -days 365
-extensions v3_req -extfile ./roles/prov_apply/templates/openssl.cnf

- name: Retrieve specified flavor
os_flavor_facts:
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: "{{ app_env.flavor_name }}"

- name: Create a key-pair
os_keypair:
state: "present"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: "dockerswarm"
public_key_file: "{{ app_env.public_key_file }}"

- name: Create security group
os_security_group:
state: present
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: dockerswarm_sg
description: security group for dockerswarm

- name: Add security rules
os_security_group_rule:
state: present
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
security_group: dockerswarm_sg
protocol: "{{ item.protocol }}"
direction: "{{ item.dir }}"
port_range_min: "{{ item.p_min }}"
port_range_max: "{{ item.p_max }}"
remote_ip_prefix: 0.0.0.0/0
with_items:
- { p_min: 22, p_max: 22, dir: ingress, protocol: tcp }
- { p_min: 2375, p_max: 2376, dir: ingress, protocol: tcp }
- { p_min: 2379, p_max: 2380, dir: ingress, protocol: tcp }
- { p_min: 2379, p_max: 2380, dir: egress, protocol: tcp }
- { p_min: -1, p_max: -1, dir: ingress, protocol: icmp }
- { p_min: -1, p_max: -1, dir: egress, protocol: icmp }

- name: Create cloudinit file for all nodes
template:
src: templates/cloudinit.j2
dest: "/tmp/{{ env }}/cloudinit"

- name: Add nodes to host group
add_host:
name: "swarmnode{{ item }}"
hostname: "127.0.0.1"
groups: dockerswarm
host_no: "{{ item }}"
with_sequence: count={{ app_env.swarm_size }}
no_log: True
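The certificate tasks above boil down to a standard openssl CA flow: generate a CA key and a client key, self-sign a CA certificate, then sign a client CSR with it. A minimal standalone sketch, with a temp directory standing in for /tmp/{{ env }}/keys and the openssl.cnf extensions omitted:

```shell
# Sketch of the CA/client-certificate flow used by the tasks above
# (illustrative paths; the real tasks pass -config/-extfile openssl.cnf)
dir=$(mktemp -d)
openssl genrsa -out "$dir/ca-key.pem" 2048
openssl genrsa -out "$dir/key.pem" 2048
# Self-signed CA certificate
openssl req -x509 -new -nodes -key "$dir/ca-key.pem" -days 10000 \
    -out "$dir/ca.pem" -subj '/CN=docker-CA'
# Client CSR, then sign it with the CA
openssl req -new -key "$dir/key.pem" -out "$dir/cert.csr" -subj '/CN=docker-client'
openssl x509 -req -in "$dir/cert.csr" -CA "$dir/ca.pem" -CAkey "$dir/ca-key.pem" \
    -CAcreateserial -out "$dir/cert.pem" -days 365
# The signed client certificate should verify against the CA
openssl verify -CAfile "$dir/ca.pem" "$dir/cert.pem"
```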

+47 -0 ansible/dockerswarm/roles/prep_apply/templates/cloudinit.j2

@@ -0,0 +1,47 @@
#cloud-config
coreos:
units:
- name: etcd.service
mask: true
- name: etcd2.service
command: start
- name: docker.service
command: start
- name: swarm-agent.service
content: |
[Unit]
Description=swarm agent
Requires=docker.service
After=docker.service

[Service]
EnvironmentFile=/etc/environment
TimeoutStartSec=20m
ExecStartPre=/usr/bin/docker pull swarm:latest
ExecStartPre=-/usr/bin/docker rm -f swarm-agent
ExecStart=/bin/sh -c "/usr/bin/docker run --rm --name swarm-agent swarm:latest join --addr=$COREOS_PRIVATE_IPV4:2376 etcd://$COREOS_PRIVATE_IPV4:2379/docker"
ExecStop=/usr/bin/docker stop swarm-agent
- name: swarm-manager.service
content: |
[Unit]
Description=swarm manager
Requires=docker.service
After=docker.service

[Service]
EnvironmentFile=/etc/environment
TimeoutStartSec=20m
ExecStartPre=/usr/bin/docker pull swarm:latest
ExecStartPre=-/usr/bin/docker rm -f swarm-manager
ExecStart=/bin/sh -c "/usr/bin/docker run --rm --name swarm-manager -v /etc/docker/ssl:/etc/docker/ssl --net=host swarm:latest manage --tlsverify --tlscacert=/etc/docker/ssl/ca.pem --tlscert=/etc/docker/ssl/cert.pem --tlskey=/etc/docker/ssl/key.pem etcd://$COREOS_PRIVATE_IPV4:2379/docker"
ExecStop=/usr/bin/docker stop swarm-manager
etcd2:
discovery: {{ lookup('file', '/tmp/'+env+'/discovery_url') }}
advertise-client-urls: http://$private_ipv4:2379
initial-advertise-peer-urls: http://$private_ipv4:2380
listen-client-urls: http://0.0.0.0:2379
listen-peer-urls: http://$private_ipv4:2380
data-dir: /var/lib/etcd2
initial-cluster-token: openstackinterop
update:
reboot-strategy: "off"

+13 -0 ansible/dockerswarm/roles/prep_destroy/tasks/main.yml

@@ -0,0 +1,13 @@
---
- name: Get start timestamp
set_fact: starttime="{{ ansible_date_time }}"

- name: Add web servers to webservers host group
add_host:
name: "swarmnode{{ item }}"
hostname: "127.0.0.1"
groups: dockerswarm
host_no: "{{ item }}"
with_sequence: count={{ app_env.swarm_size }}
no_log: True

+34 -0 ansible/dockerswarm/roles/prov_apply/tasks/main.yml

@@ -0,0 +1,34 @@
---
- name: Get public IP
set_fact: node_ip="{{ swarmnode.openstack.public_v4 }}"

- name: Make certificate configuration file
copy:
src: templates/openssl.cnf
dest: "/tmp/{{ env }}/{{ node_ip }}/keys/"

- name: Make service file
template:
src: templates/dockerservice.j2
dest: "/tmp/{{ env }}/{{ node_ip }}/keys/dockerservice.cnf"

- name: Create bootstrap file
template:
src: templates/bootstrap1.j2
dest: "/tmp/{{ env }}/{{ node_ip }}/keys/bootstrap.sh"
when: swarmnode.openstack.private_v4 == ""

- name: Create bootstrap file
template:
src: templates/bootstrap2.j2
dest: "/tmp/{{ env }}/{{ node_ip }}/keys/bootstrap.sh"
when: swarmnode.openstack.private_v4 != ""

- name: Transfer configuration
shell: scp -r "/tmp/{{ env }}/{{ node_ip }}/keys" "core@{{ node_ip }}:/home/core"

- name: Transfer certificate file over to the nodes
shell: scp -r "/tmp/{{ env }}/keys" "core@{{ node_ip }}:/home/core"

- name: Start services
shell: ssh "core@{{ node_ip }}" "sh keys/bootstrap.sh"

+31 -0 ansible/dockerswarm/roles/prov_apply/templates/bootstrap1.j2

@@ -0,0 +1,31 @@
mkdir -p /home/core/.docker
cp /home/core/keys/ca.pem /home/core/.docker/
cp /home/core/keys/cert.pem /home/core/.docker/
cp /home/core/keys/key.pem /home/core/.docker/

echo 'subjectAltName = @alt_names' >> /home/core/keys/openssl.cnf
echo '[alt_names]' >> /home/core/keys/openssl.cnf

cd /home/core/keys

echo 'IP.1 = {{ swarmnode.openstack.public_v4 }}' >> openssl.cnf
echo 'DNS.1 = {{ app_env.fqdn }}' >> openssl.cnf
echo 'DNS.2 = {{ swarmnode.openstack.public_v4 }}.xip.io' >> openssl.cnf

openssl req -new -key key.pem -out cert.csr -subj '/CN=docker-client' -config openssl.cnf
openssl x509 -req -in cert.csr -CA ca.pem -CAkey ca-key.pem \
-CAcreateserial -out cert.pem -days 365 -extensions v3_req -extfile openssl.cnf

sudo mkdir -p /etc/docker/ssl
sudo cp ca.pem /etc/docker/ssl/
sudo cp cert.pem /etc/docker/ssl/
sudo cp key.pem /etc/docker/ssl/

# Apply localized settings to services
sudo mkdir -p /etc/systemd/system/{docker,swarm-agent,swarm-manager}.service.d

sudo mv /home/core/keys/dockerservice.cnf /etc/systemd/system/docker.service.d/10-docker-service.conf
sudo systemctl daemon-reload
sudo systemctl restart docker.service
sudo systemctl start swarm-agent.service
sudo systemctl start swarm-manager.service

+32 -0 ansible/dockerswarm/roles/prov_apply/templates/bootstrap2.j2

@@ -0,0 +1,32 @@
mkdir -p /home/core/.docker
cp /home/core/keys/ca.pem /home/core/.docker/
cp /home/core/keys/cert.pem /home/core/.docker/
cp /home/core/keys/key.pem /home/core/.docker/

echo 'subjectAltName = @alt_names' >> /home/core/keys/openssl.cnf
echo '[alt_names]' >> /home/core/keys/openssl.cnf

cd /home/core/keys

echo 'IP.1 = {{ swarmnode.openstack.private_v4 }}' >> openssl.cnf
echo 'IP.2 = {{ swarmnode.openstack.public_v4 }}' >> openssl.cnf
echo 'DNS.1 = {{ app_env.fqdn }}' >> openssl.cnf
echo 'DNS.2 = {{ swarmnode.openstack.public_v4 }}.xip.io' >> openssl.cnf

openssl req -new -key key.pem -out cert.csr -subj '/CN=docker-client' -config openssl.cnf
openssl x509 -req -in cert.csr -CA ca.pem -CAkey ca-key.pem \
-CAcreateserial -out cert.pem -days 365 -extensions v3_req -extfile openssl.cnf

sudo mkdir -p /etc/docker/ssl
sudo cp ca.pem /etc/docker/ssl/
sudo cp cert.pem /etc/docker/ssl/
sudo cp key.pem /etc/docker/ssl/

# Apply localized settings to services
sudo mkdir -p /etc/systemd/system/{docker,swarm-agent,swarm-manager}.service.d

sudo mv /home/core/keys/dockerservice.cnf /etc/systemd/system/docker.service.d/10-docker-service.conf
sudo systemctl daemon-reload
sudo systemctl restart docker.service
sudo systemctl start swarm-agent.service
sudo systemctl start swarm-manager.service

+2 -0 ansible/dockerswarm/roles/prov_apply/templates/dockerservice.j2

@@ -0,0 +1,2 @@
[Service]
Environment="DOCKER_OPTS=-H=0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert=/etc/docker/ssl/ca.pem --tlscert=/etc/docker/ssl/cert.pem --tlskey=/etc/docker/ssl/key.pem --cluster-advertise {{app_env.net_device}}:2376 --cluster-store etcd://127.0.0.1:2379/docker"
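The only change from the fixed Terraform template is that `--cluster-advertise` now takes its interface from `app_env.net_device`. A hypothetical rendering step, with sed standing in for the Jinja2 substitution and `ens3` as an assumed device name:

```shell
# Simulate Jinja2 substituting app_env.net_device (assumed value: ens3)
tmpl='--cluster-advertise {{app_env.net_device}}:2376'
rendered=$(printf '%s\n' "$tmpl" | sed 's/{{app_env\.net_device}}/ens3/')
printf '%s\n' "$rendered"
```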

+8 -0 ansible/dockerswarm/roles/prov_apply/templates/openssl.cnf

@@ -0,0 +1,8 @@
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth

+12 -0 ansible/dockerswarm/roles/prov_destroy/tasks/main.yml

@@ -0,0 +1,12 @@
---
- name: Remove docker swarm nodes
os_server:
state: "absent"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: docker-swarm-{{ host_no }}
key_name: "dockerswarm"
timeout: 200
security_groups: dockerswarm_sg
meta:
hostname: docker-swarm-{{ host_no }}

+19 -0 ansible/dockerswarm/roles/vm_apply/tasks/main.yml

@@ -0,0 +1,19 @@
---
- name: Create docker swarm nodes
os_server:
state: "present"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: docker-swarm-{{ host_no }}
image: "{{ app_env.image_name }}"
key_name: "dockerswarm"
timeout: 200
flavor: "{{ hostvars.cloud.openstack_flavors[0].id }}"
network: "{{ app_env.private_net_name }}"
auto_ip: yes
userdata: "{{ lookup('file', '/tmp/' +env+ '/cloudinit') }}"
security_groups: dockerswarm_sg
meta:
hostname: docker-swarm-{{ host_no }}
register: swarmnode


+33 -0 ansible/dockerswarm/site.yml

@@ -0,0 +1,33 @@
---
- name: prepare for provision
hosts: cloud
connection: local
vars_files:
- "vars/{{ env }}.yml"
roles:
- "prep_{{ action }}"

- name: provision swarm nodes
hosts: dockerswarm
serial: 1
connection: local
vars_files:
- "vars/{{ env }}.yml"
roles:
- "vm_{{ action }}"

- name: setup swarm nodes
hosts: dockerswarm
connection: local
vars_files:
- "vars/{{ env }}.yml"
roles:
- "prov_{{ action }}"

- name: post provisioning
hosts: cloud
connection: local
vars_files:
- "vars/{{ env }}.yml"
roles:
- "post_{{ action }}"

+21 -0 ansible/dockerswarm/vars/bluebox.yml

@@ -0,0 +1,21 @@
---
horizon_url: "https://salesdemo-sjc.openstack.blueboxgrid.com"

auth: {
auth_url: "https://salesdemo-sjc.openstack.blueboxgrid.com:5000/v2.0",
username: "litong01",
password: "{{ password }}",
project_name: "Interop"
}

app_env: {
image_name: "coreos",
region_name: "",
private_net_name: "interopnet",
net_device: "eth0",
flavor_name: "m1.small",
swarm_version: "latest",
swarm_size: 3,
fqdn: "swarm.example.com",
public_key_file: "/home/tong/.ssh/id_rsa.pub"
}

+22 -0 ansible/dockerswarm/vars/leap.yml

@@ -0,0 +1,22 @@
---
horizon_url: "http://9.30.217.9"

auth: {
auth_url: "http://9.30.217.9:5000/v3",
username: "demo",
password: "{{ password }}",
domain_name: "default",
project_name: "demo"
}

app_env: {
image_name: "CoreOS",
region_name: "",
private_net_name: "Bluebox",
net_device: "eth0",
flavor_name: "m1.small",
swarm_version: "latest",
swarm_size: 3,
fqdn: "swarm.example.com",
public_key_file: "/home/tong/.ssh/id_rsa.pub"
}

+22 -0 ansible/dockerswarm/vars/osic.yml

@@ -0,0 +1,22 @@
---
horizon_url: "https://cloud1.osic.org"

auth: {
auth_url: "https://cloud1.osic.org:5000/v3",
username: "litong01",
password: "{{ password }}",
domain_name: "default",
project_name: "interop_challenge"
}

app_env: {
image_name: "coreos",
region_name: "",
private_net_name: "interopnet",
net_device: "eth0",
flavor_name: "m1.small",
swarm_version: "latest",
swarm_size: 3,
fqdn: "swarm.example.com",
public_key_file: "/home/tong/.ssh/id_rsa.pub"
}

+21 -0 ansible/dockerswarm/vars/ovh.yml

@@ -0,0 +1,21 @@
---
horizon_url: "https://horizon.cloud.ovh.net"

auth: {
auth_url: "https://auth.cloud.ovh.net/v2.0",
username: "SXYbmFhC4aqQ",
password: "{{ password }}",
project_name: "2487610196015734"
}

app_env: {
image_name: "coreos",
region_name: "BHS1",
private_net_name: "",
net_device: "eth0",
flavor_name: "eg-15-ssd",
swarm_version: "latest",
swarm_size: 3,
fqdn: "swarm.example.com",
public_key_file: "/home/tong/.ssh/id_rsa.pub"
}

+6 -0 ansible/lampstack/.gitignore

@@ -0,0 +1,6 @@
*.out
vars/*
*/**/*.log
*/**/.DS_Store
*/**/._
*/**/*.tfstate*

+1 -0 ansible/lampstack/README.md

@@ -59,6 +59,7 @@ You may create one such file per cloud for your tests.
public_key_file: "/home/ubuntu/.ssh/id_rsa.pub",
stack_size: 4,
volume_size: 2,
block_device_name: "/dev/vdb",
wp_theme: "https://downloads.wordpress.org/theme/iribbon.2.0.65.zip",
wp_posts: "http://wpcandy.s3.amazonaws.com/resources/postsxml.zip"
}


+15 -1 ansible/lampstack/roles/apply/tasks/main.yml

@@ -1,13 +1,18 @@
---
- name: Get start timestamp
set_fact: starttime="{{ ansible_date_time }}"

- name: Retrieve specified flavor
os_flavor_facts:
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: "{{ app_env.flavor_name }}"

- name: Create a key-pair
os_keypair:
state: "present"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: "lampstack"
public_key_file: "{{ app_env.public_key_file }}"

@@ -15,6 +20,7 @@
os_volume:
state: present
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
size: "{{ app_env.volume_size }}"
wait: yes
display_name: db_volume
@@ -23,6 +29,7 @@
os_security_group:
state: present
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: lampstack_sg
description: security group for lampstack

@@ -30,6 +37,7 @@
os_security_group_rule:
state: present
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
security_group: lampstack_sg
protocol: "{{ item.protocol }}"
direction: "{{ item.dir }}"
@@ -49,6 +57,7 @@
os_server:
state: "present"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: database
image: "{{ app_env.image_name }}"
key_name: "lampstack"
@@ -71,6 +80,7 @@
os_server:
state: "present"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: balancer
image: "{{ app_env.image_name }}"
key_name: "lampstack"
@@ -93,14 +103,16 @@
os_server_volume:
state: present
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
server: database
volume: db_volume
device: /dev/vdb
device: "{{ app_env.block_device_name }}"

- name: Create web server nodes to host application
os_server:
state: "present"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: apache-{{ item }}
image: "{{ app_env.image_name }}"
key_name: "lampstack"
@@ -120,8 +132,10 @@
name: "{{ item.openstack.public_v4 }}"
groups: webservers
with_items: "{{ webserver.results }}"
no_log: True

- name: Add one web servers to wps host group
add_host:
name: "{{ webserver.results[0].openstack.public_v4 }}"
groups: wps
no_log: True

+2 -2 ansible/lampstack/roles/apply/templates/userdata.j2

@@ -1,4 +1,4 @@
#cloud-config
runcmd:
- ip=$(ifconfig eth0 | grep "inet addr" | cut -d ':' -f 2 | cut -d ' ' -f 1)
- echo $ip `hostname` >> /etc/hosts
- addr=$(ip -4 -o addr | grep -v '127.0.0.1' | awk 'NR==1{print $4}' | cut -d '/' -f 1)
- echo $addr `hostname` >> /etc/hosts
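The replaced `ifconfig` line assumed the interface was named eth0; the new pipeline picks the first non-loopback IPv4 address whatever the device is called. A sketch against canned `ip -4 -o addr` output (the sample lines and addresses are illustrative):

```shell
# Canned `ip -4 -o addr` output; in this format field 4 is the CIDR address
sample='1: lo    inet 127.0.0.1/8 scope host lo
2: ens3    inet 10.0.0.5/24 brd 10.0.0.255 scope global ens3'
# Drop loopback, take the first remaining line, strip the prefix length
addr=$(printf '%s\n' "$sample" | grep -v '127.0.0.1' | awk 'NR==1{print $4}' | cut -d '/' -f 1)
printf '%s\n' "$addr"
```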

+6 -2 ansible/lampstack/roles/balancer/tasks/main.yml

@@ -24,8 +24,12 @@
- name: Add web servers to the haproxy
lineinfile:
dest: /etc/haproxy/haproxy.cfg
line: " server ws{{ item.openstack.private_v4 }} {{ item.openstack.private_v4 }}:80 check"
with_items: "{{ hostvars.cloud.webserver.results }}"
line: " server ws{{ item[0].openstack[item[1]] }} {{ item[0].openstack[item[1]] }}:80 check"
with_nested:
- "{{ hostvars.cloud.webserver.results }}"
- ["private_v4", "public_v4"]
when: item[0].openstack[item[1]] != ''
no_log: True

- service: name=haproxy state=restarted enabled=yes


+6 -0 ansible/lampstack/roles/cleaner/tasks/apply.yml

@@ -1,13 +1,19 @@
---
- os_floating_ip:
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
state: absent
floating_ip_address: "{{ database.openstack.public_v4 }}"
server: "{{ database.openstack.name }}"
when: database.openstack.private_v4 != ""
no_log: True

- os_floating_ip:
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
state: absent
floating_ip_address: "{{ item.openstack.public_v4 }}"
server: "{{ item.openstack.name }}"
with_items: "{{ webserver.results }}"
when: item.openstack.private_v4 != ""
no_log: True

+17 -24 ansible/lampstack/roles/database/tasks/main.yml

@@ -2,32 +2,20 @@
- stat: path=/tmp/diskflag
register: diskflag

- shell: parted -s /dev/vdb mklabel msdos
- shell: parted -s "{{ app_env.block_device_name }}" mklabel msdos
when: diskflag.stat.exists == false

- shell: parted -s /dev/vdb mkpart primary ext4 1049kb 100%
- shell: parted -s "{{ app_env.block_device_name }}" mkpart primary ext4 1049kb 100%
when: diskflag.stat.exists == false

- lineinfile: dest=/tmp/diskflag line="disk is now partitioned!" create=yes

- filesystem: fstype=ext4 dev=/dev/vdb1
- mount: name=/storage src=/dev/vdb1 fstype=ext4 state=mounted
- filesystem: fstype=ext4 dev="{{ app_env.block_device_name }}1"
- mount: name=/storage src="{{ app_env.block_device_name }}1" fstype=ext4 state=mounted

- shell: ifconfig eth0 | grep 'inet addr:' | cut -d ':' -f 2 | cut -d ' ' -f 1
- shell: ip -4 -o addr | grep -v '127.0.0.1' | awk 'NR==1{print $4}' | cut -d '/' -f 1
register: local_ip

- name: Install sipcalc
apt:
name=sipcalc
state=latest
update_cache=yes

- shell: sipcalc eth0 | grep 'Network address' | cut -d "-" -f 2 | xargs
register: net_addr

- shell: sipcalc eth0 | grep 'Network mask (bits)' | cut -d "-" -f 2 | xargs
register: net_bit

- name: Creates share directory for database
file: path=/storage/sqldatabase state=directory

@@ -43,15 +31,20 @@
state=latest
update_cache=yes

- name: Setup NFS shares
- name: Setup NFS database access
lineinfile:
dest: /etc/exports
line: "{{ item.name }} {{ item.net }}(rw,sync,no_root_squash,no_subtree_check)"
with_items:
- { name: "/storage/wpcontent",
net: "{{ net_addr.stdout }}/{{ net_bit.stdout }}" }
- { name: "/storage/sqldatabase",
net: "{{ net_addr.stdout }}/{{ net_bit.stdout }}" }
line: "/storage/sqldatabase {{ local_ip.stdout }}/32(rw,sync,no_root_squash,no_subtree_check)"

- name: Setup NFS webserver access
lineinfile:
dest: /etc/exports
line: "/storage/wpcontent {{ item[0].openstack[item[1]] }}/32(rw,sync,no_root_squash,no_subtree_check)"
with_nested:
- "{{ hostvars.cloud.webserver.results }}"
- ["private_v4", "public_v4"]
when: item[0].openstack[item[1]] != ''
no_log: True

- name: nfs export
shell: exportfs -a
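The nested loop above now emits one `/etc/exports` entry per non-empty web server address instead of a single subnet-wide export. A sketch of the lines it produces, with hypothetical web server IPs:

```shell
# Hypothetical web server addresses; one /32 export line per reachable host
ips="10.0.0.11 10.0.0.12"
exports=$(for ip in $ips; do
    printf '/storage/wpcontent %s/32(rw,sync,no_root_squash,no_subtree_check)\n' "$ip"
done)
printf '%s\n' "$exports"
```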


+9 -0 ansible/lampstack/roles/destroy/tasks/main.yml

@@ -1,8 +1,12 @@
---
- name: Get start timestamp
set_fact: starttime="{{ ansible_date_time }}"

- name: Delete key pairs
os_keypair:
state: "absent"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: "lampstack"
public_key_file: "{{ app_env.public_key_file }}"

@@ -10,6 +14,7 @@
os_server:
state: "absent"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: database
image: "{{ app_env.image_name }}"
key_name: "lampstack"
@@ -22,6 +27,7 @@
os_server:
state: "absent"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: balancer
image: "{{ app_env.image_name }}"
key_name: "lampstack"
@@ -34,6 +40,7 @@
os_server:
state: "absent"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: apache-{{ item }}
image: "{{ app_env.image_name }}"
key_name: "lampstack"
@@ -47,6 +54,7 @@
os_security_group:
state: absent
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
name: lampstack_sg
description: security group for lampstack

@@ -54,5 +62,6 @@
os_volume:
state: absent
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
wait: yes
display_name: db_volume

+27 -4 ansible/lampstack/roles/webserver/tasks/main.yml

@@ -23,12 +23,21 @@
owner: www-data
group: www-data

- name: Mount the directory
- name: Mount the directory using private IP
mount:
name: /var/www/html/wp-content/uploads
src: "{{ hostvars.cloud.database.openstack.private_v4 }}:/storage/wpcontent"
state: mounted
fstype: nfs
when: hostvars.cloud.database.openstack.private_v4 != ""

- name: Mount the directory using public IP
mount:
name: /var/www/html/wp-content/uploads
src: "{{ hostvars.cloud.database.openstack.public_v4 }}:/storage/wpcontent"
state: mounted
fstype: nfs
when: hostvars.cloud.database.openstack.private_v4 == ""

- lineinfile: dest=/etc/apache2/apache2.conf line="ServerName localhost"

@@ -47,7 +56,7 @@
args:
warn: no

- name: Configure wordpress
- name: Configure wordpress database, username and password
replace:
dest: /var/www/html/wp-config.php
regexp: "'{{ item.then }}'"
@@ -57,8 +66,22 @@
- { then: 'database_name_here', now: 'decision2016' }
- { then: 'username_here', now: "{{ db_user }}" }
- { then: 'password_here', now: "{{ db_pass }}" }
- { then: 'localhost',
now: "{{ hostvars.cloud.database.openstack.private_v4 }}"}

- name: Configure wordpress network access using private IP
replace:
dest: /var/www/html/wp-config.php
regexp: "'localhost'"
replace: "'{{ hostvars.cloud.database.openstack.private_v4 }}'"
backup: no
when: hostvars.cloud.database.openstack.private_v4 != ""

- name: Configure wordpress network access using public IP
replace:
dest: /var/www/html/wp-config.php
regexp: "'localhost'"
replace: "'{{ hostvars.cloud.database.openstack.public_v4 }}'"
backup: no
when: hostvars.cloud.database.openstack.private_v4 == ""

- name: Change ownership of wordpress
shell: chown -R www-data:www-data /var/www/html


+7 -1 ansible/lampstack/site.yml

@@ -12,6 +12,8 @@
user: ubuntu
become: true
become_user: root
vars_files:
- "vars/{{ env }}.yml"
roles:
- database

@@ -60,4 +62,8 @@
Access wordpress at
http://{{ hostvars.cloud.balancer.openstack.public_v4 }}.
wordpress userid is wpuser, password is {{ db_pass }}
when: hostvars.cloud.balancer is defined
when: hostvars.cloud.balancer is defined
- debug:
msg: >-
The work load test started at {{ hostvars.cloud.starttime.time }},
ended at {{ ansible_date_time.time }}

+5 -1 ansible/lampstack/vars/bluebox.yml

@@ -1,4 +1,6 @@
---
horizon_url: "https://salesdemo-sjc.openstack.blueboxgrid.com"

auth: {
auth_url: "https://salesdemo-sjc.openstack.blueboxgrid.com:5000/v2.0",
username: "litong01",
@@ -8,12 +10,14 @@ auth: {

app_env: {
image_name: "ubuntu-15.04",
region_name: "",
private_net_name: "interopnet",
public_net_name: "external",
flavor_name: "m1.small",
public_key_file: "/home/tong/.ssh/id_rsa.pub",
stack_size: 4,
volume_size: 10,
volume_size: 2,
block_device_name: "/dev/vdb",
wp_theme: "https://downloads.wordpress.org/theme/iribbon.2.0.65.zip",
wp_posts: "http://wpcandy.s3.amazonaws.com/resources/postsxml.zip"
}

+5 -1 ansible/lampstack/vars/leap.yml

@@ -1,4 +1,6 @@
---
horizon_url: "http://9.30.217.9"

auth: {
auth_url: "http://9.30.217.9:5000/v3",
username: "demo",
@@ -8,13 +10,15 @@ auth: {
}

app_env: {
image_name: "vivid 1504",
image_name: "ubuntu-15.04",
region_name: "",
private_net_name: "Bluebox",
public_net_name: "internet",
flavor_name: "m1.small",
public_key_file: "/home/tong/.ssh/id_rsa.pub",
stack_size: 4,
volume_size: 2,
block_device_name: "/dev/vdb",
wp_theme: "https://downloads.wordpress.org/theme/iribbon.2.0.65.zip",
wp_posts: "http://wpcandy.s3.amazonaws.com/resources/postsxml.zip"
}

+1 -0 terraform/dockerswarm-coreos/README.md

@@ -53,6 +53,7 @@ You also want to specify the name of your CoreOS `glance` image as well as flavo
```
image_name = "coreos-alpha-884-0-0"
network_name = "internal"
net_device = "eth0"
floatingip_pool = "external"
flavor = "m1.medium"
public_key_path = "~/.ssh/id_rsa.pub"


+3 -0 terraform/dockerswarm-coreos/swarm.tf

@@ -30,6 +30,9 @@ resource "template_file" "cloud_init" {

resource "template_file" "10_docker_service" {
template = "templates/10-docker-service.conf"
vars {
net_device = "${ var.net_device }"
}
}

resource "openstack_networking_floatingip_v2" "coreos" {


+1 -1 terraform/dockerswarm-coreos/templates/10-docker-service.conf

@@ -1,2 +1,2 @@
[Service]
Environment="DOCKER_OPTS=-H=0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert=/etc/docker/ssl/ca.pem --tlscert=/etc/docker/ssl/cert.pem --tlskey=/etc/docker/ssl/key.pem --cluster-advertise eth0:2376 --cluster-store etcd://127.0.0.1:2379/docker"
Environment="DOCKER_OPTS=-H=0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert=/etc/docker/ssl/ca.pem --tlscert=/etc/docker/ssl/cert.pem --tlskey=/etc/docker/ssl/key.pem --cluster-advertise ${net_device}:2376 --cluster-store etcd://127.0.0.1:2379/docker"

+5 -0 terraform/dockerswarm-coreos/vars-openstack.tf

@@ -10,6 +10,11 @@ variable "floatingip_pool" {
default = "external"
}

variable "net_device" {
description = "Network interface device in the system"
default = "eth0"
}

variable "flavor" {
default = "m1.medium"
}

