Vagrantfile and docs
With Vagrant it becomes really easy to create a dev environment to test and check out code. The focus is specifically on setting up an environment to play around in, develop Kolla and show what it is capable of in a clean virtualised environment. When done, the environment can be destroyed and re-created at will.

Change-Id: I440d004e76c337f298cad2397cf4c13f2cc35ddb
Implements: blueprint vagrant-devenv
parent 2e6bb0a885
commit 37561cc1f7
docs/vagrant.md (new file, 81 lines)
@@ -0,0 +1,81 @@
Vagrant up!
============================

This guide describes how to use [Vagrant][] to assist in developing for Kolla.

Vagrant is a tool to assist in the scripted creation of virtual machines. It
will take care of setting up a CentOS-based cluster of virtual machines, each
with proper hardware like memory amount and number of network interfaces.

[Vagrant]: http://vagrantup.com


Getting Started
---------------

The vagrant setup will build a cluster with the following nodes:

- 3 support nodes
- 1 compute node
- 1 operator node

Kolla runs from the operator node to deploy OpenStack on the other nodes.

All nodes are connected with each other on the secondary NIC; the primary NIC
is behind a NAT interface for connecting with the internet. A third NIC is
connected without IP configuration to a public bridge interface. This may be
used for Neutron/Nova to connect to instances.
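
Once the cluster is running (see Vagrant Up below), you can eyeball this
layout from inside any node. The command below simply lists all adapters so
you can spot the NAT, host-only and bridged interfaces; which names they get
depends on the box:

    vagrant ssh operator -c "ip addr show"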

Start by downloading and installing the Vagrant package for your distro of
choice. Various downloads can be found [here][]. Afterwards, install the
hostmanager plugin so all hosts are recorded in /etc/hosts (inside each VM):

    vagrant plugin install vagrant-hostmanager

Vagrant supports a wide range of virtualization technologies, of which we will
use VirtualBox for now.
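
VirtualBox is the default provider on most installations, but if you have more
than one provider installed you can pin it explicitly when bringing nodes up:

    vagrant up --provider=virtualbox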

Find some place in your homedir and check out the Kolla repo:

    git clone https://github.com/stackforge/kolla.git ~/dev/kolla

You can now tweak the Vagrantfile or start a CentOS7-based cluster right away:

    cd ~/dev/kolla/vagrant && vagrant up

The command `vagrant up` will build your cluster; `vagrant status` will give
you a quick overview once done.

[here]: https://www.vagrantup.com/downloads.html

Vagrant Up
----------

Once Vagrant has completed deploying all nodes, we can focus on launching
Kolla. First, connect with the _operator_ node:

    vagrant ssh operator

Once connected you can run a simple Ansible-style ping to verify that the
cluster is operable:

    ansible -i kolla/ansible/inventory/multinode all -m ping -e ansible_ssh_user=root

Congratulations, your cluster is usable and you can start deploying OpenStack
using Ansible!
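
From here, deployment is a matter of pointing `ansible-playbook` at the same
inventory. The playbook path below is only an illustration; check the
checked-out repo under `~/kolla/ansible/` for the actual entry point:

    ansible-playbook -i kolla/ansible/inventory/multinode -e ansible_ssh_user=root kolla/ansible/site.yml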

To speed things up, there is a local registry running on the operator node.
All nodes are configured so they can pull from this insecure registry, and
they will use it as a mirror. Ansible may use this registry to pull images
from.
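
To check that the registry is reachable from a node, you can poke its ping
endpoint (this assumes the v1 API of the `registry:0.9.1` container that
`bootstrap.sh` starts):

    curl http://operator.local:5000/v1/_ping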

All nodes have a local folder shared between their group and the hypervisor,
and a folder shared between _all_ nodes and the hypervisor. This mapping is
lost after reboots, so make sure you use the command `vagrant reload <node>`
when reboots are required. Through these shared folders you have a method to
supply a different docker binary to the cluster. The shared folder is also
used to store the docker-registry files, so they are safe from destructive
operations like `vagrant destroy`.
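
A rough sketch of that workflow (bootstrap.sh looks for `/data/docker` inside
the VM, so verify how that path maps onto the `storage/` folders before
relying on this):

    cp ./docker ~/dev/kolla/vagrant/storage/shared/docker
    vagrant reload operator --provision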

Further Reading
---------------

All Vagrant documentation can be found on their [website][].

[website]: http://docs.vagrantup.com
vagrant/Vagrantfile (vendored, new file, 91 lines)
@@ -0,0 +1,91 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Configure a new SSH key and config so the operator is able to connect with
# the other cluster nodes.
if not File.file?("./vagrantkey")
  system("ssh-keygen -f ./vagrantkey -N '' -C this-is-vagrant")
end

Vagrant.configure(2) do |config|
  # The base image to use
  # TODO (harmw): something closer to vanilla would be nice, someday.
  config.vm.box = "puppetlabs/centos-7.0-64-puppet"

  # Next to the hostonly NAT-network there is a host-only network with all
  # nodes attached. Plus, each node receives a 3rd adapter connected to the
  # outside public network.
  # TODO (harmw): see if there is a way to automate the selection of the bridge
  # interface.
  config.vm.network "private_network", type: "dhcp"
  config.vm.network "public_network", ip: "0.0.0.0", bridge: "wlp3s0b1"

  my_privatekey = File.read(File.join(File.dirname(__FILE__), "vagrantkey"))
  my_publickey = File.read(File.join(File.dirname(__FILE__), "vagrantkey.pub"))

  # TODO (harmw): This is slightly difficult to read.
  config.vm.provision :shell, :inline => "mkdir -p /root/.ssh && echo '#{my_privatekey}' > /root/.ssh/id_rsa && chmod 600 /root/.ssh/id_rsa"
  config.vm.provision :shell, :inline => "echo '#{my_publickey}' > /root/.ssh/authorized_keys && chmod 600 /root/.ssh/authorized_keys"
  config.vm.provision :shell, :inline => "mkdir -p /home/vagrant/.ssh && echo '#{my_privatekey}' >> /home/vagrant/.ssh/id_rsa && chmod 600 /home/vagrant/.ssh/*"
  config.vm.provision :shell, :inline => "echo 'Host *' > ~vagrant/.ssh/config"
  config.vm.provision :shell, :inline => "echo StrictHostKeyChecking no >> ~vagrant/.ssh/config"
  config.vm.provision :shell, :inline => "chown -R vagrant: /home/vagrant/.ssh"

  # Resolve each node's /etc/hosts entry via VirtualBox guest properties,
  # using the IP of the second (host-only) adapter.
  config.hostmanager.enabled = true
  config.hostmanager.ip_resolver = proc do |vm, resolving_vm|
    if vm.id
      `VBoxManage guestproperty get #{vm.id} "/VirtualBox/GuestInfo/Net/1/V4/IP"`.split()[1]
    end
  end

  # The operator controls the deployment
  config.vm.define "operator" do |admin|
    admin.vm.hostname = "operator.local"
    admin.vm.provision :shell, path: "bootstrap.sh", args: "operator"
    admin.vm.synced_folder "storage/operator/", "/data/host", create: true
    admin.vm.synced_folder "storage/shared/", "/data/shared", create: true
    admin.vm.synced_folder ".", "/vagrant", disabled: true
    admin.vm.provider "virtualbox" do |vb|
      vb.memory = 1024
    end
    admin.hostmanager.aliases = "operator"
  end

  # Build compute nodes
  (1..1).each do |i|
    config.vm.define "compute0#{i}" do |compute|
      compute.vm.hostname = "compute0#{i}.local"
      compute.vm.provision :shell, path: "bootstrap.sh"
      compute.vm.synced_folder "storage/compute/", "/data/host", create: true
      compute.vm.synced_folder "storage/shared/", "/data/shared", create: true
      compute.vm.synced_folder ".", "/vagrant", disabled: true
      compute.vm.provider "virtualbox" do |vb|
        vb.memory = 1024
      end
      compute.hostmanager.aliases = "compute0#{i}"
    end
  end

  # Build support nodes
  (1..3).each do |i|
    config.vm.define "support0#{i}" do |support|
      support.vm.hostname = "support0#{i}.local"
      support.vm.provision :shell, path: "bootstrap.sh"
      support.vm.synced_folder "storage/support/", "/data/host", create: true
      support.vm.synced_folder "storage/shared/", "/data/shared", create: true
      support.vm.synced_folder ".", "/vagrant", disabled: true
      support.vm.provider "virtualbox" do |vb|
        vb.memory = 2048
      end
      support.hostmanager.aliases = "support0#{i}"

      # TODO: Here we bind local port 8080 to Horizon on support01 only.
      # TODO: Once we implement Horizon behind a VIP, this obviously needs to
      # be changed.
      #if i < 2 then
      #  config.vm.network "forwarded_port", guest: 80, host: 8080
      #end
    end
  end

end
vagrant/bootstrap.sh (new file, 124 lines)
@@ -0,0 +1,124 @@
#!/usr/bin/env bash
#
# Bootstrap script to configure all nodes.
#

export http_proxy=
export https_proxy=

# Install common packages and do some prepwork.
function prepwork {
    systemctl stop firewalld
    systemctl disable firewalld

    # This removes the fqdn from the 127.0.0.1 line in /etc/hosts. This way
    # name.local resolves to the public IP instead of localhost.
    sed -i -r "s/^(127\.0\.0\.1\s+)(.*) `hostname` (.+)/\1 \3/" /etc/hosts

    yum install -y http://mirror.nl.leaseweb.net/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
    yum install -y MySQL-python vim-enhanced python-pip python-devel gcc openssl-devel libffi-devel libxml2-devel libxslt-devel && yum clean all
    pip install --upgrade docker-py shade
}

# Install and configure a quick&dirty docker daemon.
function installdocker {
    # Allow for an externally supplied docker binary.
    if [ -f "/data/docker" ]; then
        cp /data/docker /usr/bin/docker
        chmod +x /usr/bin/docker
    else
        cat >/etc/yum.repos.d/docker.repo <<-EOF
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
        # Also upgrade device-mapper here because of:
        # https://github.com/docker/docker/issues/12108
        yum install -y docker-engine device-mapper

        # Despite it shipping with /etc/sysconfig/docker, Docker is not
        # configured to load it from its service file.
        sed -i -r 's,(ExecStart)=(.+),\1=\2 --insecure-registry operator.local:5000 --registry-mirror=http://operator.local:5000,' /usr/lib/systemd/system/docker.service

        systemctl daemon-reload
        systemctl enable docker
        systemctl start docker
    fi

    usermod -aG docker vagrant
}

# Configure the operator node and install some additional packages.
function configureoperator {
    yum install -y git mariadb && yum clean all
    pip install --upgrade ansible python-openstackclient

    if [ ! -d ~vagrant/kolla ]; then
        su - vagrant sh -c "https_proxy=$https_proxy git clone https://github.com/stackforge/kolla.git ~/kolla"
        pip install -r ~vagrant/kolla/requirements.txt
    fi

    # Note: this trickery requires a patched docker binary.
    if [ "$http_proxy" != "" ]; then
        su - vagrant sh -c "echo BUILDFLAGS=\\\"--build-env=http_proxy=$http_proxy --build-env=https_proxy=$https_proxy\\\" > ~/kolla/.buildconf"
    fi

    ln -sf ~vagrant/kolla/etc/kolla/ /etc/kolla
    ln -sf ~vagrant/kolla/etc/kolla/ /usr/share/kolla

    # Make sure Ansible uses scp.
    cat > ~vagrant/.ansible.cfg <<EOF
[defaults]
forks=100

[ssh_connection]
scp_if_ssh=True
EOF
    chown vagrant: ~vagrant/.ansible.cfg

    # The openrc file.
    cat > ~vagrant/openrc <<EOF
export OS_AUTH_URL="http://support01.local:35357/v2.0"
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_VOLUME_API_VERSION=2
EOF
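
    # Usage note (once OpenStack has been deployed): source this file on the
    # operator node and the openstack CLI can talk to the cluster, e.g.
    #   source ~/openrc
    #   openstack endpoint list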

    # Quick&dirty helper script to push images to the local registry's lokolla
    # namespace.
    cat > ~vagrant/tag-and-push.sh <<EOF
for image in \$(docker images|awk '/^kollaglue/ {print \$1}'); do
    docker tag \$image operator.local:5000/lokolla/\${image#kollaglue/}:latest
    docker push operator.local:5000/lokolla/\${image#kollaglue/}:latest
done
EOF
    chmod +x ~vagrant/tag-and-push.sh
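
    # Usage sketch (assumes kollaglue/* images have already been built locally
    # on the operator node): run ~/tag-and-push.sh as the vagrant user to tag
    # and push them into the registry's lokolla namespace.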

    chown vagrant: ~vagrant/openrc ~vagrant/tag-and-push.sh

    # Launch a local registry (and mirror) to speed up pulling images.
    # 0.9.1 is actually the _latest_ tag.
    if [[ ! $(docker ps -a -q -f name=registry) ]]; then
        docker run -d \
            --name registry \
            --restart=always \
            -p 5000:5000 \
            -e STANDALONE=True \
            -e MIRROR_SOURCE=https://registry-1.docker.io \
            -e MIRROR_SOURCE_INDEX=https://index.docker.io \
            -e STORAGE_PATH=/var/lib/registry \
            -v /data/host/registry-storage:/var/lib/registry \
            registry:0.9.1
    fi
}

prepwork
installdocker

if [ "$1" = "operator" ]; then
    configureoperator
fi