Rewrite data model to be pure hiera

This (enormous) patch adds experimental support
for a distribution that uses pure hiera in place
of the scenario_node_terminus.

A Python script, contrib/aptira/build/convert.py, converts
the SNT data model to the pure hiera one. It requires PyYAML
and backs up the old data directory to data.old. It is not
even remotely idempotent.

Data mappings are expressed using hiera interpolation,
except for non-string values, which use YAML anchors
instead. YAML anchors do not resolve across files, which
places some restrictions on their use.
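
For example (both forms appear in the sample data added by
this patch):

  openstacklib::openstack::regions::nova_user_pw: "%{hiera('nova_service_password')}"
  cluster_names: &cluster_names [ 'control1.private', 'control2.private', 'control3.private' ]
  nova::rabbit_hosts: *cluster_names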

Scenarios are now part of the hiera_data folder, and each
scenario yaml file contains its mappings, globals, and, in
future, probably its 'user.scenario.yaml' data.

Roles live under scenario/%{scenario}/%{role}, and a hiera
lookup of 'classes' and 'class_groups' determines which
classes to include.
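
A converted role file looks roughly like this (class and
group names are illustrative):

  hiera_data/scenario/stacktira/control.yaml:
    classes:
      - "openstacklib::firewall"
    class_groups:
      - "control_ha"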

Class groups are a new yaml file in hiera_data, and their
entries are now string quoted, since an unquoted YAML value
cannot begin with %.
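
For example (group name and the interpolated entry are
illustrative; the quoting is the point):

  some_group:
    - "openstacklib::firewall"
    - "nova::%{compute_type}"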

Globals are not handled correctly at this point. They need to be
converted into a new structure based around hiera_data/contrib, which
will allow arbitrary class includes, but is not quite finished.

A new install script has been added at
contrib/aptira/installer/bootstrap.sh; it is an idempotent bash
script. It installs Ruby 2.0.0 from a prebuilt RPM and then
installs Puppet as a gem. It then configures hiera and uses the
data model to set up pre-puppet config without needing to run
puppet.
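
For example, to prepare a node through an http proxy and pin
its role (flags per the script's getopts block; 'control'
assumes the scenario defines such a role):

  bash contrib/aptira/installer/bootstrap.sh -p http://my_proxy:8000 -o control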

A new Vagrant 1.5 compatible Vagrantfile has been added under
contrib/aptira/build; it uses the new bootstrap script.

In addition, a new scenario called stacktira has been added. It
uses a new galera module and a new module for general openstack
functionality called openstacklib. This scenario is written in
the node_terminus style of yaml, but works once converted with
the aforementioned convert.py.

Change-Id: Iaa00b165e2933cdb8c5149343b3cfe40bcc8b3db
Michael Chapman
2014-02-11 02:46:16 +11:00
parent 4208cef25d
commit 627e7c01fe
24 changed files with 2409 additions and 0 deletions

contrib/aptira/README.md

@@ -0,0 +1,98 @@
Openstack by Aptira
===================
## Overview
This is a revision of the data model with the following goals:
- Remove dependency on the scenario_node_terminus
- Implement data model in pure hiera
- Support CentOS/RHEL targets
- Support masterless deployment
- Simplify node bootstrapping
- Make HA a core feature rather than an add-on
- Move all modules to master branch
While providing a clean migration path from the current method.
## Requirements
Currently, this distribution assumes it has been provided with already-provisioned
CentOS 6 servers, each with more than one network interface. For production
deployments it is recommended to have additional interfaces, as the data model can
distinguish between the following network functions and assign an interface to each
(see the example assignment after this list):
- deployment network
- public API network
- private network
- external floating IP network
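In the sample data shipped with this patch, the default assignment is:
- eth1: deployment network
- eth2: public API network
- eth3: private service network (and GRE tunnels)
- eth4: external floating IP network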
## Installation
Before installing the distribution, review the following available options:
Set an http proxy to use for installation (default: not set)
export proxy='http://my_proxy:8000'
Set the network interface to use for deployment (default: eth1)
export network='eth0'
Set the install destination for the distribution (default: $HOME)
export destination='/var/lib/stacktira'
Once you have set the appropriate customisations, install the aptira distribution
by running the following command:
\curl -sSL https://raw.github.com/michaeltchapman/puppet_openstack_builder/stacktira/contrib/aptira/installer/bootstrap.sh | bash
## Configuration
The distribution is most easily customised by editing the file
/etc/puppet/data/hiera_data/user.yaml. A sample will be placed there during
installation if one doesn't already exist, and it should be reviewed before
continuing. In particular, make sure all the IP addresses and interfaces
are correct for your deployment.
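For example, the sample user.yaml sets (among other things):
scenario: stacktira
public_vip: 10.2.3.5
private_vip: 10.3.3.5
deploy_bind_ip: "%{ipaddress_eth1}"
public_bind_ip: "%{ipaddress_eth2}"
private_bind_ip: "%{ipaddress_eth3}"
external_interface: eth4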
## Deployment
To deploy a control node, run the following command:
puppet apply /etc/puppet/manifests/site.pp --certname control-`hostname`
To deploy a compute node, run the following command:
puppet apply /etc/puppet/manifests/site.pp --certname compute-`hostname`
## Development Environment Installation
First, clone the repo and check out the experimental stacktira branch
git clone https://github.com/michaeltchapman/puppet_openstack_builder
git checkout stacktira
The conversion from scenario_node_terminus yaml to pure hiera is done by
a script which requires PyYAML. Install this library either via your distro's
package manager or using pip.
pip install PyYAML
Run the conversion script. This will replace the Puppetfile, Vagrantfile,
manifests and data directories with the stacktira version:
python contrib/aptira/build/convert.py
Install the modules:
mkdir -p vendor
export GEM_HOME=vendor
gem install librarian-puppet
vendor/bin/librarian-puppet install
Now you can boot using the control* and compute* VMs, or use rawbox to test
out the public tarball available from Aptira.
## Authors
Michael Chapman


@@ -0,0 +1,238 @@
git_protocol = ENV['git_protocol'] || 'https'
reposource = ENV['reposource'] || 'downstream'
git_protocol = 'https'
if reposource == 'downstream'
author = 'aptira'
ref = 'stacktira'
else
ref = 'master'
end
# apache
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/apache', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-apache.git", :ref => ref
# apt
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/apt', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-apt.git", :ref => ref
# ceilometer
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/ceilometer', :git => "#{git_protocol}://github.com/#{author}/puppet-ceilometer.git", :ref => ref
# cinder
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/cinder', :git => "#{git_protocol}://github.com/#{author}/puppet-cinder.git", :ref => ref
# concat
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/concat', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-concat.git", :ref => ref
# devtools
if reposource != 'downstream'
author = 'Spredzy'
end
mod 'Spredzy/devtools', :git => "#{git_protocol}://github.com/#{author}/puppet-devtools.git", :ref => ref
# dnsmasq
if reposource != 'downstream'
author = 'netmanagers'
end
mod 'netmanagers/dnsmasq', :git => "#{git_protocol}://github.com/#{author}/puppet-dnsmasq.git", :ref => ref
# edeploy
if reposource != 'downstream'
author = 'michaeltchapman'
end
mod 'michaeltchapman/edeploy', :git => "#{git_protocol}://github.com/#{author}/puppet-edeploy.git", :ref => ref
# firewall
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/firewall', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-firewall.git", :ref => ref
# galera
if reposource != 'downstream'
author = 'michaeltchapman'
end
mod 'michaeltchapman/galera', :git => "#{git_protocol}://github.com/#{author}/puppet-galera.git", :ref => ref
# glance
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/glance', :git => "#{git_protocol}://github.com/#{author}/puppet-glance.git", :ref => ref
# haproxy
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/haproxy', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-haproxy.git", :ref => ref
# heat
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/heat', :git => "#{git_protocol}://github.com/#{author}/puppet-heat.git", :ref => ref
# horizon
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/horizon', :git => "#{git_protocol}://github.com/#{author}/puppet-horizon.git", :ref => ref
# inifile
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/inifile', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-inifile.git", :ref => ref
# keepalived
if reposource != 'downstream'
author = 'arioch'
end
mod 'arioch/keepalived', :git => "#{git_protocol}://github.com/#{author}/puppet-keepalived.git", :ref => ref
# keystone
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/keystone', :git => "#{git_protocol}://github.com/#{author}/puppet-keystone.git", :ref => ref
# memcached
if reposource != 'downstream'
author = 'saz'
end
mod 'saz/memcached', :git => "#{git_protocol}://github.com/#{author}/puppet-memcached.git", :ref => ref
# mysql
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/mysql', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-mysql.git", :ref => ref
# neutron
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/neutron', :git => "#{git_protocol}://github.com/#{author}/puppet-neutron.git", :ref => ref
# nova
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/nova', :git => "#{git_protocol}://github.com/#{author}/puppet-nova.git", :ref => ref
# openstack
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/openstack', :git => "#{git_protocol}://github.com/#{author}/puppet-openstack.git", :ref => ref
# openstacklib
if reposource != 'downstream'
author = 'michaeltchapman'
end
mod 'michaeltchapman/openstacklib', :git => "#{git_protocol}://github.com/#{author}/puppet-openstacklib.git", :ref => ref
# postgresql
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/postgresql', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-postgresql.git", :ref => ref
# puppet
if reposource != 'downstream'
author = 'stephenrjohnson'
end
mod 'stephenrjohnson/puppet', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-puppet.git", :ref => ref
# puppetdb
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/puppetdb', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-puppetdb.git", :ref => ref
# rabbitmq
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/rabbitmq', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-rabbitmq.git", :ref => ref
# rsync
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/rsync', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-rsync.git", :ref => ref
# ruby-puppetdb
if reposource != 'downstream'
author = 'ripienaar'
end
mod 'ripienaar/ruby-puppetdb', :git => "#{git_protocol}://github.com/#{author}/ruby-puppetdb.git", :ref => ref
# staging
if reposource != 'downstream'
author = 'nanliu'
end
mod 'nanliu/staging', :git => "#{git_protocol}://github.com/#{author}/puppet-staging.git", :ref => ref
# stdlib
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/stdlib', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-stdlib.git", :ref => ref
# swift
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/swift', :git => "#{git_protocol}://github.com/#{author}/puppet-swift.git", :ref => ref
# sysctl
if reposource != 'downstream'
author = 'thias'
end
mod 'thias/sysctl', :git => "#{git_protocol}://github.com/#{author}/puppet-sysctl.git", :ref => ref
# tempest
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/tempest', :git => "#{git_protocol}://github.com/#{author}/puppet-tempest.git", :ref => ref
# tftp
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/tftp', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-tftp.git", :ref => ref
# vcsrepo
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/vcsrepo', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-vcsrepo.git", :ref => ref
# vswitch
if reposource != 'downstream'
author = 'stackforge'
end
mod 'stackforge/vswitch', :git => "#{git_protocol}://github.com/#{author}/puppet-vswitch.git", :ref => ref
# xinetd
if reposource != 'downstream'
author = 'puppetlabs'
end
mod 'puppetlabs/xinetd', :git => "#{git_protocol}://github.com/#{author}/puppetlabs-xinetd.git", :ref => ref

contrib/aptira/build/Vagrantfile

@@ -0,0 +1,208 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
require 'yaml'
require 'fileutils'
# Four networks:
# 0 - VM host NAT
# 1 - COE build/deploy
# 2 - COE openstack internal
# 3 - COE openstack external (public)
def parse_vagrant_config(
config_file=File.expand_path(File.join(File.dirname(__FILE__), 'data', 'config.yaml'))
)
config = {
'gui_mode' => false,
'operatingsystem' => 'redhat',
'verbose' => false,
'update_repos' => true,
'scenario' => 'stacktira'
}
if File.exists?(config_file)
overrides = YAML.load_file(config_file)
config.merge!(overrides)
end
config
end
#
# process the node group that is used to determine the
# nodes that should be provisioned. The group of nodes
# can be set with the node_group param from config.yaml
# and maps to its corresponding file in the nodes directory.
#
def process_nodes(config)
v_config = parse_vagrant_config
node_group = v_config['scenario']
node_group_file = File.expand_path(File.join(File.dirname(__FILE__), 'data', 'nodes', "#{node_group}.yaml"))
abort('node_group must be specified in config') unless node_group
abort('file must exist for node group') unless File.exists?(node_group_file)
(YAML.load_file(node_group_file)['nodes'] || {}).each do |name, options|
config.vm.define(options['vagrant_name'] || name) do |config|
configure_openstack_node(
config,
name,
options['memory'],
options['image_name'] || v_config['operatingsystem'],
options['ip_number'],
options['puppet_type'] || 'agent',
v_config,
options['environment'],
options['role'],
options['network'],
options['post_config']
)
end
end
end
# get the correct box based on the specified type
# currently this supports precise64/ubuntu and centos/redhat boxes
def get_box(config, box_type)
if box_type == 'precise64' || box_type == 'ubuntu'
config.vm.box = 'precise64'
config.vm.box_url = 'http://files.vagrantup.com/precise64.box'
elsif box_type == 'centos' || box_type == 'redhat'
config.vm.box = 'centos64'
config.vm.box_url = 'http://developer.nrel.gov/downloads/vagrant-boxes/CentOS-6.4-x86_64-v20130427.box'
else
abort("Box type: #{box_type} is no good.")
end
end
#
# setup networks for openstack. Currently, this just sets up
# 4 virtual interfaces as follows:
#
# * eth1 => 192.168.242.0/24
# this is the network that the openstack services use to communicate with each other
# * eth2 => 10.2.3.0/24
# * eth3 => 10.3.3.0/24
#
# == Parameters
# config - vm config object
# number - the lowest octet in a /24 network
# options - additional options
# eth1_mac - mac address to set for eth1 (used for PXE booting)
#
def setup_networks(config, number, network)
config.vm.network "private_network", :ip => "192.168.242.#{number}"
config.vm.network "private_network", ip: "#{network}.2.3.#{number}"
config.vm.network "private_network", ip: "#{network}.3.3.#{number}"
# set eth3 in promiscuous mode
config.vm.provider "virtualbox" do |vconfig|
vconfig.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
# set the boot priority to use eth1
vconfig.customize(['modifyvm', :id ,'--nicbootprio2','1'])
end
end
#
# setup the hostname of our box
#
def setup_hostname(config, hostname)
config.vm.provider "virtualbox" do |vconfig|
vconfig.customize ['modifyvm', :id, '--name', hostname]
end
config.vm.host_name = hostname
end
#
# methods that performs all openstack config
#
def configure_openstack_node(
config,
node_name,
memory,
box_name,
net_id,
puppet_type,
v_config,
environment = false,
role = false,
network = false,
post_config = false
)
cert_name = node_name
get_box(config, box_name)
setup_hostname(config, node_name)
config.vm.provider "virtualbox" do |vconfig|
vconfig.customize ["modifyvm", :id, "--memory", memory]
end
network ||= '10'
setup_networks(config, net_id, network)
config.vm.synced_folder "./modules", "/etc/puppet/modules"
config.vm.synced_folder "./", "/root/stacktira"
options = ''
if v_config['proxy']
options += " -p " + v_config['proxy']
end
if role
options += " -o " + role
end
if environment
options += " -e " + environment
end
config.vm.provision :shell do |shell|
shell.inline = '/root/stacktira/contrib/aptira/installer/bootstrap.sh' + options
end
config.vm.provision :shell do |shell|
shell.inline = 'puppet apply /etc/puppet/manifests/site.pp'
end
if post_config
Array(post_config).each do |shell_command|
config.vm.provision :shell do |shell|
shell.inline = shell_command
end
end
end
end
Vagrant.configure("2") do |config|
process_nodes(config)
end
Vagrant.configure("2") do |config|
# A 'blank' node that will pxeboot on the first private network
# use this to test deployment tools like cobbler
config.vm.define "target" do |target|
target.vm.box = "blank"
# This IP won't actually come up - you'll need to run a dhcp
# server on another node
target.vm.network "private_network", ip: "192.168.242.55"
target.vm.provider "virtualbox" do |vconfig|
vconfig.customize ['modifyvm', :id ,'--nicbootprio2','1']
vconfig.customize ['modifyvm', :id ,'--memory','1024']
vconfig.gui = true
end
end
# a node with no mounts, that will test a web install
# hostname is also not set to force --certname usage
config.vm.define "rawbox" do |target|
target.vm.box = "centos64"
setup_networks(target, 150, '10')
config.vm.provision :shell do |shell|
shell.inline = '\curl -sSL https://raw.github.com/michaeltchapman/puppet_openstack_builder/stacktira/contrib/aptira/installer/bootstrap.sh | bash'
end
config.vm.provision :shell do |shell|
shell.inline = 'puppet apply /etc/puppet/manifests/site.pp --certname control1'
end
end
end


@@ -0,0 +1,303 @@
import os
import shutil
import yaml
import re
dpath = './data'
def prepare_target():
print "=============================="
print "= Preparing target directory ="
print "=============================="
dirs = os.listdir('.')
if 'data.new' not in dirs:
os.mkdir('./data.new')
print 'created data.new'
dirs = os.listdir('./data.new')
if 'hiera_data' not in dirs:
shutil.copytree(dpath + '/hiera_data', './data.new/hiera_data')
print 'copied tree from ' + dpath + '/hiera_data to /data.new/hiera_data'
# Nodes used for vagrant info
shutil.copytree(dpath + '/nodes', './data.new/nodes')
print 'copied tree from ' + dpath + '/nodes to /data.new/nodes'
shutil.copyfile('./contrib/aptira/build/Vagrantfile', './Vagrantfile')
shutil.copyfile('./contrib/aptira/build/Puppetfile', './Puppetfile')
shutil.copyfile('./contrib/aptira/puppet/config.yaml', './data.new/config.yaml')
shutil.copyfile('./contrib/aptira/puppet/site.pp', './manifests/site.pp')
shutil.copyfile('./contrib/aptira/puppet/user.yaml', './data.new/hiera_data/user.yaml')
dirs = os.listdir('./data.new/hiera_data')
if 'roles' not in dirs:
os.mkdir('./data.new/hiera_data/roles')
print 'made role dir'
if 'contrib' not in dirs:
os.mkdir('./data.new/hiera_data/contrib')
print 'made contrib dir'
def hierafy_mapping(mapping):
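# Convert a scenario_node_terminus data mapping into hiera interpolation, e.g.
#   hierafy_mapping('foo')        -> "%{hiera('foo')}"
#   hierafy_mapping('%{bar}/baz') -> "%{hiera('bar')}/baz"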
new_mapping = []
if '{' in mapping:
for c in mapping:
if c == '}':
new_mapping.append("')}")
elif c == '{':
new_mapping.append("{hiera('")
else:
new_mapping.append(c)
return "".join(new_mapping)
else:
return "".join(['%{hiera(\'', mapping, '\')}'])
def scenarios():
print "=============================="
print "===== Handling Scenarios ====="
print "=============================="
scenarios = {}
# This will be a mapping with scenario as key, to
# a mapping of roles to a list of classes
scenarios_as_hiera = {}
for root,dirs,files in os.walk(dpath + '/scenarios'):
for name in files:
print os.path.join(root,name)
with open(os.path.join(root,name)) as yf:
scenarios[name[:-5]] = yaml.load(yf.read())
for scenario, yaml_data in scenarios.items():
if not os.path.exists('./data.new/hiera_data/scenario/' + scenario):
os.makedirs('./data.new/hiera_data/scenario/' + scenario)
for description in yaml_data.values():
for role, values in description.items():
if os.path.isfile('./data.new/hiera_data/scenario/' + scenario + '/' + role + '.yaml'):
with open('./data.new/hiera_data/scenario/' + scenario + '/' + role + '.yaml', 'a') as yf:
if 'classes' in values:
yf.write('classes:\n')
for c in values['classes']:
yf.write(' - \"' + c + '\"\n')
if 'class_groups' in values:
yf.write('class_groups:\n')
for c in values['class_groups']:
yf.write(' - \"' + c + '\"\n')
else:
with open('./data.new/hiera_data/scenario/' + scenario + '/' + role + '.yaml', 'w') as yf:
if 'classes' in values:
yf.write('classes:\n')
for c in values['classes']:
yf.write(' - \"' + c + '\"\n')
if 'class_groups' in values:
yf.write('class_groups:\n')
for c in values['class_groups']:
yf.write(' - \"' + c + '\"\n')
def class_groups():
print "=============================="
print "=== Handling Class Groups ===="
print "=============================="
# Classes and class groups can contain interpolation, which
# should be handled
with open('./data.new/hiera_data/class_groups.yaml', 'w') as class_groups:
for root,dirs,files in os.walk(dpath + '/class_groups'):
for name in files:
if 'README' not in name:
print os.path.join(root,name)
with open(os.path.join(root,name)) as yf:
cg_yaml = yaml.load(yf.read())
class_groups.write(name[:-5] + ':\n')
if 'classes' in cg_yaml:
for clss in cg_yaml['classes']:
class_groups.write(' - \"' + clss + '\"\n')
class_groups.write('\n')
with open('./data.new/hiera_data/class_groups.yaml', 'r') as class_groups:
s = class_groups.read()
os.remove('./data.new/hiera_data/class_groups.yaml')
s = s.replace('%{', "%{hiera(\'").replace('}', "\')}")
with open('./data.new/hiera_data/class_groups.yaml', 'w') as class_groups:
class_groups.write(s)
def global_hiera():
print "=============================="
print "=== Handling Global Hiera ===="
print "=============================="
scenarios = {}
globals_as_hiera = {}
for root,dirs,files in os.walk(dpath + '/global_hiera_params'):
for name in files:
print os.path.join(root,name)
with open(os.path.join(root,name)) as yf:
path = os.path.join(root,name).replace(dpath,'./data.new') \
.replace('global_hiera_params', 'hiera_data')
scenarios[path] = yaml.load(yf.read())
for key in scenarios.keys():
print key
for scenario, yaml_data in scenarios.items():
if not os.path.exists(scenario):
with open(scenario, 'w') as yf:
yf.write('# Global Hiera Params:\n')
for key, value in yaml_data.items():
if value == False or value == True:
yf.write(key + ': ' + str(value).lower() + '\n')
else:
yf.write(key + ': ' + str(value) + '\n')
else:
with open(scenario, 'a') as yf:
yf.write('# Global Hiera Params:\n')
for key, value in yaml_data.items():
if value == False or value == True:
yf.write(key + ': ' + str(value).lower() + '\n')
else:
yf.write(key + ': ' + str(value) + '\n')
def find_array_mappings():
print "=============================="
print "=== Array Data Mappings ======"
print "=============================="
print "Hiera will flatten arrays when"
print "using introspection, so arrays"
print "and hashes are handled using "
print "YAML anchors. This means they "
print "must be within a single file."
print "=============================="
array_mappings = {}
# File path : [lines to change]
lines = {}
for root,dirs,files in os.walk(dpath + '/hiera_data'):
for name in files:
path = os.path.join(root,name)
with open(path) as yf:
y = yaml.load(yf.read())
for key, value in y.items():
# Numbers and strings interpolate reasonably well, and things
# that aren't mappings will be for passing variables, and thus
# should contain the double colon for scope in most cases.
# This method is certainly fallible.
if (not isinstance(value, str) and ('::' not in key)):
print key + ' IS NON STRING MAPPING: ' + str(value)
if path.replace('/data/', '/data.new/') not in lines:
lines[path.replace('/data/', '/data.new/')] = {}
for nroot,ndirs,nfiles in os.walk(dpath + '/data_mappings'):
for nname in nfiles:
with open(os.path.join(nroot,nname)) as nyf:
ny = yaml.load(nyf.read())
if key in ny.keys():
print key + ' is found, maps to: ' + str(ny[key]) + ' in ' + path
for m in ny[key]:
if key not in lines[path.replace('/data/', '/data.new/')]:
lines[path.replace('/data/', '/data.new/')][key] = []
lines[path.replace('/data/', '/data.new/')][key].append(m)
# Inform data_mappings it can ignore these values
array_mappings[key] = value
# modify the files that contain the problem mappings
# to contain anchor sources
for source, mappings in lines.items():
print 'handling non-string mapping in ' + str(source)
# read original file and replace mappings
# with yaml anchor sources
with open(source, 'r') as rf:
ofile = rf.read()
for map_from in mappings.keys():
if ('\n' + map_from + ':') not in ofile:
print 'WARNING: mapping ' + map_from + ' not found in file ' + source
ofile = ofile.replace('\n' + map_from + ':','\n' + map_from + ': &' + map_from + ' ')
with open(source, 'w') as wf:
wf.write(ofile)
# append anchor references to files
for source, mappings in lines.items():
with open(source, 'a') as wf:
wf.write('\n')
wf.write("#########################################\n")
wf.write('# Anchor mappings for non-string elements\n')
wf.write("#########################################\n\n")
for map_from, map_to in mappings.items():
for param in map_to:
wf.write(param + ': *' + map_from + '\n')
return array_mappings
def data_mappings():
""" Take everything from common.yaml and put
it in data_mappings.yaml in hiera_data, and everything
else try to append to its appropriate switch in the
hierarchy """
array_mappings = find_array_mappings()
print "=============================="
print "=== Handling Data Mappings ==="
print "=============================="
data_mappings = {}
mappings_as_hiera = {}
for root,dirs,files in os.walk(dpath + '/data_mappings'):
for name in files:
print os.path.join(root,name)
with open(os.path.join(root,name)) as yf:
path = os.path.join(root,name).replace(dpath,'data.new/') \
.replace('data_mappings', 'hiera_data')
data_mappings[path] = yaml.load(yf.read())
mappings_as_hiera[path] = []
# create a list of things to append for each file
for source, yaml_mapping in data_mappings.items():
for mapping, list_of_values in yaml_mapping.items():
if mapping in array_mappings.keys():
print mapping + ' found in ' + source + ', skipping non-string mapping'
else:
mappings_as_hiera[source].append('# ' + mapping)
for entry in list_of_values:
mappings_as_hiera[source].append(entry + ": \"" + hierafy_mapping(mapping) + '\"')
mappings_as_hiera[source].append('')
for key, values in mappings_as_hiera.items():
folder = os.path.dirname(key)
if not os.path.exists(folder):
os.makedirs(folder)
if os.path.isfile(key):
print "appending to path "+ key
with open(key, 'a') as map_file:
map_file.write("#################\n")
map_file.write("# Data Mappings #\n")
map_file.write("#################\n\n")
map_file.write("\n".join(values))
else:
print "writing to new path "+ key
with open(key, 'w') as map_file:
map_file.write("#################\n")
map_file.write("# Data Mappings #\n")
map_file.write("#################\n\n")
map_file.write('\n'.join(values))
def move_dirs():
shutil.move(dpath, './data.old')
shutil.move('./data.new', './data')
if __name__ == "__main__":
prepare_target()
data_mappings()
scenarios()
class_groups()
global_hiera()
move_dirs()


@@ -0,0 +1,17 @@
if [ ! -d stacktira ] ; then
mkdir stacktira
else
rm -rf stacktira/*
fi
cd stacktira
cp -r ../modules .
cp -r ../contrib .
cp -r ../data .
find . | grep .git | xargs rm -rf
cd ..
tar -cvf stacktira.tar stacktira
rm -rf stacktira


@@ -0,0 +1,38 @@
apache
apt
ceilometer
cinder
concat
devtools
dnsmasq
edeploy
firewall
galera
glance
haproxy
heat
horizon
inifile
keepalived
keystone
memcached
mysql
neutron
nova
openstack
openstacklib
postgresql
puppet
puppetdb
rabbitmq
rsync
ruby-puppetdb
staging
stdlib
swift
sysctl
tempest
tftp
vcsrepo
vswitch
xinetd


@@ -0,0 +1,19 @@
# convert data model to pure hiera
python contrib/aptira/build/convert.py
# install puppet modules
mkdir -p vendor
mkdir -p modules
export GEM_HOME=vendor
gem install librarian-puppet-simple
vendor/bin/librarian-puppet install
# get package caches
rm -rf stacktira
rm -rf stacktira.tar
wget https://bitbucket.org/michaeltchapman/puppet_openstack_builder/downloads/stacktira.tar
tar -xvf stacktira.tar
cp -r stacktira/contrib/aptira/gemcache contrib/aptira
cp -r stacktira/contrib/aptira/packages contrib/aptira
vagrant up control1


@@ -0,0 +1,293 @@
#!/usr/bin/env bash
# Parameters can be set via env vars or passed as
# arguments. Arguments take priority over
# env vars.
proxy="${proxy:-}"
desired_ruby="${desired_ruby:-2.0.0p353}"
desired_puppet="${desired_puppet:-3.4.3}"
network="${network:-eth1}"
dest="${destination:-$HOME}"
environment="${environment:-}"
role="${role:-}"
tarball_source="${tarball_source:-https://bitbucket.org/michaeltchapman/puppet_openstack_builder/downloads/stacktira.tar}"
while getopts "h?p:r:o:t:u:n:e:d:" opt; do
case "$opt" in
h|\?)
echo "Not helpful help message"
exit 0
;;
p) proxy=$OPTARG
;;
r) desired_ruby=$OPTARG
;;
o) role=$OPTARG
;;
t) tarball_source=$OPTARG
;;
u) desired_puppet=$OPTARG
;;
n) network=$OPTARG
;;
e) environment=$OPTARG
;;
d) dest=$OPTARG
;;
esac
done
# Set wgetrc and either yum or apt to use an http proxy.
if [ $proxy ] ; then
echo 'setting proxy'
export http_proxy=$proxy
if [ -f /etc/redhat-release ] ; then
if [ ! $(cat /etc/yum.conf | grep '^proxy=') ] ; then
echo "proxy=$proxy" >> /etc/yum.conf
fi
elif [ -f /etc/debian_version ] ; then
if [ ! -f /etc/apt/apt.conf.d/01apt-cacher-ng-proxy ] ; then
echo "Acquire::http { Proxy \"$proxy\"; };" > /etc/apt/apt.conf.d/01apt-cacher-ng-proxy;
apt-get update -q
fi
else
echo "OS not detected! Weirdness inbound!"
fi
if [ ! $(cat /etc/wgetrc | grep '^http_proxy =') ] ; then
echo "http_proxy = $proxy" >> /etc/wgetrc
fi
else
echo 'not setting proxy'
fi
cd $dest
# Download the data model tarball
if [ ! -d $dest/stacktira ] ; then
echo 'downloading data model'
wget $tarball_source
tar -xvf stacktira.tar
rm -rf stacktira.tar
else
echo "data model installed in $dest/stacktira"
fi
# Ensure both puppet and ruby are
# installed, the correct version, and ready to run.
#
# It will install from $dest/stacktira/contrib/aptira/packages
# if possible, otherwise it will wget from the
# internet. If this machine is unable to run yum
# or apt install, and unable to wget, this script
# will fail.
ruby_version=$(ruby --version | cut -d ' ' -f 2)
# Ruby 1.8.7 (standard on rhel 6) can give segfaults, so
# purge and install ruby 2.0.0
if [ "${ruby_version}" != "${desired_ruby}" ] ; then
echo "installing ruby version $desired_ruby"
if [ -f /etc/redhat-release ] ; then
# Purge current ruby
yum remove ruby puppet ruby-augeas ruby-shadow -y -q
# enable epel to get libyaml, which is required by ruby
wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6*
yum install -y libyaml -q
rm epel-release-6*
# Install ruby 2.0.0
if [ -f $dest/stacktira/contrib/aptira/packages/ruby-2.0.0p353-1.el6.x86_64.rpm ] ; then
yum localinstall -y $dest/stacktira/contrib/aptira/packages/ruby-2.0.0p353-1.el6.x86_64.rpm
else
echo 'downloading ruby 2.0.0 rpm'
# wget_rpm_from_somewhere
yum localinstall ruby-2.0.0p353-1.el6.x86_64.rpm -y -q
fi
yum install augeas-devel -y -q
elif [ -f /etc/debian_version ] ; then
apt-get remove puppet ruby -y
apt-get install ruby -y
fi
else
echo "ruby version $desired_ruby already installed"
fi
# Install puppet from gem. This is not best practice, but avoids
# repackaging large numbers of rpms and debs for ruby 2.0.0
hash puppet 2>/dev/null || {
puppet_version=0
}
if [ "${puppet_version}" != '0' ] ; then
puppet_version=$(puppet --version)
fi
if [ "${puppet_version}" != "${desired_puppet}" ] ; then
echo "installing puppet version $desired_puppet"
if [ -f $dest/stacktira/contrib/aptira/gemcache/puppet-$desired_puppet.gem ] ; then
echo "installing from local gem cache"
cd $dest/stacktira/contrib/aptira/gemcache
gem install --force --local *.gem
cd -
else
echo "no local gem cache found, installing puppet gem from internet"
gem install puppet ruby-augeas --no-ri --no-rdoc
fi
else
echo "puppet version $desired_puppet already installed"
fi
# Ensure puppet user and group are configured
if ! grep puppet /etc/group; then
echo 'adding puppet group'
groupadd puppet
fi
if ! grep puppet /etc/passwd; then
echo 'adding puppet user'
useradd puppet -g puppet -d /var/lib/puppet -s /sbin/nologin
fi
# Set up minimal puppet directory structure
if [ ! -d /etc/puppet ]; then
echo 'creating /etc/puppet'
mkdir /etc/puppet
fi
if [ ! -d /etc/puppet/manifests ]; then
echo 'creating /etc/puppet/manifests'
mkdir /etc/puppet/manifests
fi
if [ ! -d /etc/puppet/modules ]; then
echo 'creating /etc/puppet/modules'
mkdir /etc/puppet/modules
fi
# Don't overwrite the one vagrant places there
if [ ! -f /etc/puppet/manifests/site.pp ]; then
echo 'copying site.pp'
cp $dest/stacktira/contrib/aptira/puppet/site.pp /etc/puppet/manifests
fi
# Create links for all modules, but if a dir is already there,
# ignore it (for dev envs)
for i in $(cat $dest/stacktira/contrib/aptira/build/modules.list); do
if [ ! -L /etc/puppet/modules/$i ] && [ ! -d /etc/puppet/modules/$i ] ; then
echo "Installing module $i"
ln -s $dest/stacktira/modules/$i /etc/puppet/modules/$i
fi
done
echo 'all modules installed'
if [ ! -d /etc/puppet/data ]; then
echo 'creating /etc/puppet/data'
mkdir /etc/puppet/data
fi
if [ ! -d /etc/puppet/data/hiera_data ]; then
echo 'linking /etc/puppet/data/hiera_data'
ln -s $dest/stacktira/data/hiera_data /etc/puppet/data/hiera_data
fi
echo 'hiera data ready'
# copy hiera.yaml to etc, so that we can query without
# running puppet just yet
if [ ! -f /etc/hiera.yaml ] ; then
echo 'setting /etc/hiera.yaml'
cp $dest/stacktira/contrib/aptira/puppet/hiera.yaml /etc/hiera.yaml
fi
# copy hiera.yaml to puppet
if [ ! -f /etc/puppet/hiera.yaml ] ; then
echo 'setting /etc/puppet/hiera.yaml'
cp $dest/stacktira/contrib/aptira/puppet/hiera.yaml /etc/puppet/hiera.yaml
fi
# Copy site data if any. This will not be overwritten by sample configs
if [ -d $dest/stacktira/contrib/aptira/site ] ; then
echo "Installing user config"
cp -r $dest/stacktira/contrib/aptira/site/* /etc/puppet/data/hiera_data
fi
mkdir -p /etc/facter/facts.d
# set environment external fact
# Requires facter > 1.7
if [ -n "$environment" ] ; then
if [ ! -f /etc/facter/facts.d/environment.yaml ] ; then
echo "environment: $environment" > /etc/facter/facts.d/environment.yaml
elif ! grep -q "environment" /etc/facter/facts.d/environment.yaml ; then
echo "environment: $environment" >> /etc/facter/facts.d/environment.yaml
fi
if [ ! -d $dest/stacktira/contrib/aptira/site ] ; then
if [ ! -f /etc/puppet/data/hiera_data/user.$environment.yaml ] ; then
if [ -f $dest/stacktira/contrib/aptira/puppet/user.$environment.yaml ] ; then
cp $dest/stacktira/contrib/aptira/puppet/user.$environment.yaml /etc/puppet/data/hiera_data/user.$environment.yaml
fi
fi
fi
fi
# set role external fact
# Requires facter > 1.7
if [ -n "$role" ] ; then
if [ ! -f /etc/facter/facts.d/role.yaml ] ; then
echo "role: $role" > /etc/facter/facts.d/role.yaml
elif ! grep -q "role" /etc/facter/facts.d/role.yaml ; then
echo "role: $role" >> /etc/facter/facts.d/role.yaml
fi
fi
# Ensure puppet isn't going to sign a cert with the wrong time or
# name
ipaddress=$(facter ipaddress_$network)
fqdn=$(hostname).$(hiera domain_name)
# If it doesn't match what puppet will be setting for fqdn, just redo
# to the point where we can see the master and have fqdn
if ! grep -q "$ipaddress\s$fqdn" /etc/hosts ; then
echo 'configuring /etc/hosts for fqdn'
if [ -f /etc/redhat-release ] ; then
echo "$ipaddress $fqdn $(hostname)" > /etc/hosts
echo "127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4" >> /etc/hosts
echo "::1 localhost localhost.localdomain localhost6 localhost6.localdomain6" >> /etc/hosts
echo "$(hiera build_server_ip) $(hiera build_server_name) $(hiera build_server_name).$(hiera domain_name)" >> /etc/hosts
elif [ -f /etc/debian_version ] ; then
echo "$ipaddress $fqdn $(hostname)" > /etc/hosts
echo "127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4" >> /etc/hosts
echo "::1 localhost localhost.localdomain localhost6 localhost6.localdomain6" >> /etc/hosts
echo "$(hiera build_server_ip) $(hiera build_server_name) $(hiera build_server_name).$(hiera domain_name)" >> /etc/hosts
fi
fi
# install ntpdate if necessary
hash ntpdate 2>/dev/null || {
echo 'installing ntpdate'
if [ -f /etc/redhat-release ] ; then
yum install -y ntpdate -q
elif [ -f /etc/debian_version ] ; then
apt-get install ntpdate -y
fi
}
# this may be a list, so just take the first one
ntpdate $(hiera ntp_servers | cut -d '"' -f 2)
if [ ! -d $dest/stacktira/contrib/aptira/site ] ; then
if [ ! -f /etc/puppet/data/hiera_data/user.yaml ] ; then
echo 'No user.yaml found: installing sample'
cp $dest/stacktira/contrib/aptira/puppet/user.yaml /etc/puppet/data/hiera_data/user.yaml
fi
fi
echo 'This server has been successfully prepared to run puppet using'
echo 'the Openstack data model. Please take a moment to review your'
echo 'configuration in /etc/puppet/data/hiera_data/user.yaml'
echo
echo "When you\'re ready, run puppet apply /etc/puppet/manifests/site.pp"


@@ -0,0 +1,48 @@
# To deploy experimental support for Centos6, change os to
# redhat and scenario to stacktira
os: redhat
scenario: stacktira
#proxy: 'http://192.168.0.18:8000'
# Additional Config available for use by scenariobuilder during
# the bootstrap process.
# [*initial_ntp*]
# This needs to be set before puppet runs, otherwise the certs
# may have the wrong timestamps and agent won't connect to master
# [*installer_repo*]
# These determine which github account+branch to get for the
# puppet_openstack_builder repo when it is cloned onto the
# test VMs as part of the bootstrap script in cloud-init.
# installer_repo: stackforge
# [*installer_branch*]
# installer_branch: master
# [*openstack_version*]
# The release of openstack to install. Note that grizzly will require switching back to Quantum
# Options: havana, grizzly
# [*git_protocol*]
# (optional) Git protocol to use when cloning modules on testing VMs
# Defaults to https
# Options: git, https.
# [*apt_mirror_ip*]
# (optional) Sets the apt mirror IP by doing a sed on the image
# [*apt_proxy_host*]
# (optional) Sets apt-get installs and git clones to go via a proxy
# [*apt_proxy_port*]
# (optional) Sets the port for the apt_proxy_host if used
# [*custom_module*]
# (optional) The name of a module to take from a different source
# [*custom_branch*]
# (optional) The branch to use for the custom module
# [*custom_repo*]
# (optional) The github account the custom module is hosted under


@@ -0,0 +1,30 @@
---
:backends:
- yaml
:yaml:
:datadir: /etc/puppet/data/hiera_data
:hierarchy:
- "hostname/%{hostname}"
- "client/%{clientcert}"
- "user.%{role}"
- "user.%{environment}"
- user
- "user.%{scenario}"
- user.common
- "osfamily/%{osfamily}"
- "cinder_backend/%{cinder_backend}"
- "glance_backend/%{glance_backend}"
- "rpc_type/%{rpc_type}"
- "db_type/%{db_type}"
- "tenant_network_type/%{tenant_network_type}"
- "network_type/%{network_type}"
- "network_plugin/%{network_plugin}"
- "password_management/%{password_management}"
- "contrib/networking/%{networking}"
- "contrib/storage/%{storage}"
- "contrib/monitoring/%{monitoring}"
- "scenario/%{scenario}"
- "scenario/%{scenario}/%{role}"
- common
- class_groups


@@ -0,0 +1,73 @@
# Globals
# Role may be set by using external facts, or can
# fall back to using the first word in the clientcert
if ! $::role {
$role = regsubst($::clientcert, '([a-zA-Z]+)[^a-zA-Z].*', '\1')
}
$scenario = hiera('scenario', "")
$cinder_backend = hiera('cinder_backend', "")
$glance_backend = hiera('glance_backend', "")
$rpc_type = hiera('rpc_type', "")
$db_type = hiera('db_type', "")
$tenant_network_type = hiera('tenant_network_type', "")
$network_type = hiera('network_type', "")
$network_plugin = hiera('network_plugin', "")
$network_service = hiera('network_service', "")
$storage = hiera('storage', "")
$networking = hiera('networking', "")
$monitoring = hiera('monitoring', "")
$password_management = hiera('password_management', "")
$compute_type = hiera('compute_type', "")
node default {
notice("my scenario is ${scenario}")
notice("my role is ${role}")
# Should be defined in scenario/[name_of_scenario]/[name_of_role].yaml
$node_class_groups = hiera('class_groups', undef)
notice("class groups: ${node_class_groups}")
if $node_class_groups {
class_group { $node_class_groups: }
}
$node_classes = hiera('classes', undef)
if $node_classes {
include $node_classes
notify { " Including node classes : ${node_classes}": }
}
# get a list of contribs to include.
$stg = hiera("${role}_storage", [])
notice("storage includes ${stg}")
if (size($stg) > 0) {
contrib_group { $stg: }
}
# get a list of contribs to include.
$networking = hiera("${role}_networking", [])
notice("networking includes ${networking}")
if (size($networking) > 0) {
contrib_group { $networking: }
}
# get a list of contribs to include.
$monitoring = hiera("${role}_monitoring", [])
notice("monitoring includes ${monitoring}")
if (size($monitoring) > 0) {
contrib_group { $monitoring: }
}
}
define class_group {
include hiera($name)
notice($name)
$x = hiera($name)
notice( "including ${x}" )
}
define contrib_group {
include hiera("${name}_classes")
notice($name)
$x = hiera("${name}_classes")
notice( "including ${x}" )
}


@@ -0,0 +1,129 @@
# This is the sample user.yaml for the stacktira scenario
# For additional things that can be configured, look at
# user.stacktira.yaml, or user.common.
#
# Warning:
# When working with non-string types, remember to keep yaml
# anchors within a single file - hiera cannot look them
# up across files. For this reason, editing the lower section
# of this file is not recommended.
enabled_services: &enabled_services
- nova
- neutron
- cinder
- heat
scenario: stacktira
networking: none
storage: none
monitoring: none
# The default network config is as follows:
# eth0: vagrant network in testing
# eth1: deploy network
# eth2: public api network
# eth3: private service network + GRE
# eth4: external data network
build_server_name: build-server
build_server_ip: 192.168.242.100
# These are legacy mappings, and should have no effect
controller_public_address: 10.2.3.105
controller_internal_address: 10.3.3.105
controller_admin_address: 10.3.3.105
# Interface that will be stolen by the l3 router on
# the control node.
external_interface: eth2
# for a provider network on this interface instead of
# an l3 agent use these options
openstacklib::openstack::provider::interface: eth2
neutron::plugins::ovs::network_vlan_ranges: default
# Gre tunnel address for each node
internal_ip: "%{ipaddress_eth3}"
# This is the interface that each node will be binding
# various services on.
deploy_bind_ip: "%{ipaddress_eth1}"
public_bind_ip: "%{ipaddress_eth2}"
private_bind_ip: "%{ipaddress_eth3}"
# The public VIP, where all API services are exposed to users.
public_vip: 10.2.3.105
# The private VIP, where internal services are exposed to openstack services.
private_vip: 10.3.3.105
# List of IP addresses for controllers on the public network
control_servers_public: &control_servers_public [ '10.2.3.110', '10.2.3.111', '10.2.3.112']
# List of IP addresses for controllers on the private network
control_servers_private: &control_servers_private [ '10.3.3.110', '10.3.3.111', '10.3.3.112']
# A hash of hostnames to private network IPs. Used for rabbitmq hosts
# resolution
openstacklib::hosts::cluster_hash:
regsubr1.private: '10.3.3.110'
regsubr2.private: '10.3.3.111'
regsubr3.private: '10.3.3.112'
# List of controller hostnames. Used for rabbitmq hosts list
cluster_names: &cluster_names [ 'regsubr1.private', 'regsubr2.private', 'regsubr3.private' ]
# Virtual router IDs for the VIPs in this cluster. If you are
# running multiple VIPs on one network these need to be different
# for each VIP
openstacklib::loadbalance::haproxy::public_vrid: 60
openstacklib::loadbalance::haproxy::private_vrid: 61
#Libvirt type
nova::compute::libvirt::libvirt_virt_type: qemu
horizon::wsgi::apache::bind_address: "%{ipaddress_eth2}"
# Use these to set an apt proxy if running on a Debian-like
apt::proxy_host: 192.168.0.18
apt::proxy_port: 8000
# This node will be used to bootstrap the cluster on initial deployment
# or if there is a total failure of the control cluster
galera::galera_master: 'regsubr1.domain.name'
# Proxy configuration of either apt or yum
openstacklib::repo::apt_proxy_host: '192.168.0.18'
openstacklib::repo::apt_proxy_port: '8000'
openstacklib::repo::yum_http_proxy: 'http://192.168.0.18:8000'
openstacklib::repo::yum_epel_mirror: 'http://mirror.aarnet.edu.au'
openstacklib::repo::yum_base_mirror: 'http://mirror.aarnet.edu.au'
#########################################
# Anchor mappings for non-string elements
#########################################
neutron::rabbit_hosts: *cluster_names
nova::rabbit_hosts: *cluster_names
cinder::rabbit_hosts: *cluster_names
rabbitmq::cluster_nodes: *cluster_names
openstacklib::loadbalance::haproxy::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::ceilometer::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::cinder::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::heat::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::mysql::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::neutron::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::nova::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::rabbitmq::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::ceilometer::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::cinder::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::heat::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::neutron::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::nova::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::mysql::cluster_addresses: *control_servers_private
openstacklib::loadbalance::haproxy::rabbitmq::cluster_addresses: *control_servers_private
galera::galera_servers: *control_servers_private
openstacklib::openstack::databases::enabled_services: *enabled_services


@@ -0,0 +1,128 @@
# An example where two regions share keystone and glance
openstacklib::openstack::regions::regions_hash:
RegionOne:
public_ip: 10.2.3.105
private_ip: 10.3.3.105
services:
- heat
- nova
- neutron
- cinder
- ec2
RegionTwo:
public_ip: 10.2.3.205
private_ip: 10.3.3.205
services:
- heat
- nova
- neutron
- cinder
- ec2
shared:
public_ip: 10.2.3.5
private_ip: 10.3.3.5
services:
- keystone
- glance
# This will create the correct databases for the region controller
# normally this would also make endpoints, but that is covered
# by the above region hash in multi-region environments
enabled_services: &enabled_services
- glance
- keystone
openstacklib::openstack::regions::nova_user_pw: "%{hiera('nova_service_password')}"
openstacklib::openstack::regions::neutron_user_pw: "%{hiera('network_service_password')}"
openstacklib::openstack::regions::glance_user_pw: "%{hiera('glance_service_password')}"
openstacklib::openstack::regions::heat_user_pw: "%{hiera('heat_service_password')}"
openstacklib::openstack::regions::cinder_user_pw: "%{hiera('cinder_service_password')}"
openstacklib::openstack::regions::ceilometer_user_pw: "%{hiera('ceilometer_service_password')}"
# The default network config is as follows:
# eth0: vagrant network in testing
# eth1: deploy network
# eth2: public api network
# eth3: private service network + GRE
# eth4: external data network
build_server_name: build-server
build_server_ip: 192.168.242.100
# These are legacy mappings, and should have no effect
controller_public_address: 10.2.3.5
controller_internal_address: 10.3.3.5
controller_admin_address: 10.3.3.5
# This is the interface that each node will be binding
# various services on.
deploy_bind_ip: "%{ipaddress_eth1}"
public_bind_ip: "%{ipaddress_eth2}"
private_bind_ip: "%{ipaddress_eth3}"
# The public VIP, where all API services are exposed to users.
public_vip: 10.2.3.5
# The private VIP, where internal services are exposed to openstack services.
private_vip: 10.3.3.5
# List of IP addresses for controllers on the public network
control_servers_public: &control_servers_public [ '10.2.3.10', '10.2.3.11', '10.2.3.12']
# List of IP addresses for controllers on the private network
control_servers_private: &control_servers_private [ '10.3.3.10', '10.3.3.11', '10.3.3.12']
# A hash of hostnames to private network IPs. Used for rabbitmq hosts
# resolution
openstacklib::hosts::cluster_hash:
regcon1.private: '10.3.3.10'
regcon2.private: '10.3.3.11'
regcon3.private: '10.3.3.12'
# List of controller hostnames. Used for rabbitmq hosts list
cluster_names: &cluster_names [ 'regcon1.private', 'regcon2.private', 'regcon3.private' ]
horizon::wsgi::apache::bind_address: "%{ipaddress_eth2}"
# Use these to set an apt proxy if running on a Debian-like
apt::proxy_host: 192.168.0.18
apt::proxy_port: 8000
# This node will be used to bootstrap the cluster on initial deployment
# or if there is a total failure of the control cluster
galera::galera_master: 'regcon1.domain.name'
# Database allowed hosts
allowed_hosts: 10.3.3.%
# Allowed cidrs for the different interfaces. Only
# Ports used by openstack will be allowed
deploy_control_firewall_source: '192.168.242.0/24'
public_control_firewall_source: '10.2.3.0/24'
private_control_firewall_source: '10.3.3.0/24'
# Proxy configuration of either apt or yum
openstacklib::repo::apt_proxy_host: '192.168.0.18'
openstacklib::repo::apt_proxy_port: '8000'
openstacklib::repo::yum_http_proxy: 'http://192.168.0.18:8000'
openstacklib::repo::yum_epel_mirror: 'http://mirror.aarnet.edu.au'
openstacklib::repo::yum_base_mirror: 'http://mirror.aarnet.edu.au'
#########################################
# Anchor mappings for non-string elements
#########################################
openstacklib::loadbalance::haproxy::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::dashboard::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::glance::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::keystone::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::mysql::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::dashboard::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::glance::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::keystone::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::mysql::cluster_addresses: *control_servers_private
galera::galera_servers: *control_servers_private
openstacklib::openstack::databases::enabled_services: *enabled_services


@@ -0,0 +1,132 @@
# This is the sample user.yaml for the stacktira scenario
# For additional things that can be configured, look at
# user.stacktira.yaml, or user.common.
#
# Warning:
# When working with non-string types, remember to keep yaml
# anchors within a single file - hiera cannot look them
# up across files. For this reason, editing the lower section
# of this file is not recommended.
scenario: stacktira
networking: none
storage: none
monitoring: none
# The default network config is as follows:
# eth0: vagrant network in testing
# eth1: deploy network
# eth2: public api network
# eth3: private service network + GRE
# eth4: external data network
build_server_name: build-server
build_server_ip: 192.168.242.100
# These are legacy mappings, and should have no effect
controller_public_address: 10.2.3.5
controller_internal_address: 10.3.3.5
controller_admin_address: 10.3.3.5
# Interface that will be stolen by the l3 router on
# the control node.
external_interface: eth4
# for a provider network on this interface instead of
# an l3 agent use these options
#openstacklib::openstack::provider::interface: eth4
#neutron::plugins::ovs::network_vlan_ranges: default
# Gre tunnel address for each node
internal_ip: "%{ipaddress_eth3}"
# This is the interface that each node will be binding
# various services on.
deploy_bind_ip: "%{ipaddress_eth1}"
public_bind_ip: "%{ipaddress_eth2}"
private_bind_ip: "%{ipaddress_eth3}"
# The public VIP, where all API services are exposed to users.
public_vip: 10.2.3.5
# The private VIP, where internal services are exposed to openstack services.
private_vip: 10.3.3.5
# List of IP addresses for controllers on the public network
control_servers_public: &control_servers_public [ '10.2.3.10', '10.2.3.11', '10.2.3.12']
# List of IP addresses for controllers on the private network
control_servers_private: &control_servers_private [ '10.3.3.10', '10.3.3.11', '10.3.3.12']
# A hash of hostnames to private network IPs. Used for rabbitmq hosts
# resolution
openstacklib::hosts::cluster_hash:
control1.private: '10.3.3.10'
control2.private: '10.3.3.11'
control3.private: '10.3.3.12'
# List of controller hostnames. Used for rabbitmq hosts list
cluster_names: &cluster_names [ 'control1.private', 'control2.private', 'control3.private' ]
#Libvirt type
nova::compute::libvirt::libvirt_virt_type: qemu
horizon::wsgi::apache::bind_address: "%{ipaddress_eth2}"
# Use these to set an apt proxy if running on a Debian-like
apt::proxy_host: 192.168.0.18
apt::proxy_port: 8000
# CIDRs for the three networks.
deploy_control_firewall_source: '192.168.242.0/24'
public_control_firewall_source: '10.2.3.0/24'
private_control_firewall_source: '10.3.3.0/24'
# Proxy configuration of either apt or yum
openstacklib::repo::apt_proxy_host: '192.168.0.18'
openstacklib::repo::apt_proxy_port: '8000'
openstacklib::repo::yum_http_proxy: 'http://192.168.0.18:8000'
openstacklib::repo::yum_epel_mirror: 'http://mirror.aarnet.edu.au'
openstacklib::repo::yum_base_mirror: 'http://mirror.aarnet.edu.au'
enabled_services: &enabled_services
- keystone
- glance
- nova
- neutron
- cinder
#########################################
# Anchor mappings for non-string elements
#########################################
neutron::rabbit_hosts: *cluster_names
nova::rabbit_hosts: *cluster_names
cinder::rabbit_hosts: *cluster_names
rabbitmq::cluster_nodes: *cluster_names
openstacklib::loadbalance::haproxy::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::ceilometer::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::cinder::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::dashboard::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::glance::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::heat::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::keystone::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::mysql::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::neutron::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::nova::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::rabbitmq::cluster_names: *cluster_names
openstacklib::loadbalance::haproxy::ceilometer::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::cinder::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::dashboard::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::glance::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::heat::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::keystone::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::neutron::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::nova::cluster_addresses: *control_servers_public
openstacklib::loadbalance::haproxy::mysql::cluster_addresses: *control_servers_private
openstacklib::loadbalance::haproxy::rabbitmq::cluster_addresses: *control_servers_private
galera::galera_servers: *control_servers_private
openstacklib::openstack::databases::enabled_services: *enabled_services
openstacklib::openstack::endpoints::enabled_services: *enabled_services


@@ -0,0 +1,11 @@
# Bring up the control node and then reboot it to ensure
# it has an ip netns capable kernel
vagrant up control1
vagrant halt control1
vagrant up control1
vagrant provision control1
# Bring up compute node
vagrant up compute1
vagrant ssh -c "bash /vagrant/contrib/aptira/tests/$1/test.sh"


@@ -0,0 +1,36 @@
#!/bin/bash
#
# assumes that openstack credentials are set in this file
source /root/openrc
# Grab an image. Cirros is a nice small Linux that's easy to deploy
wget --quiet http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
# Add it to glance so that we can use it in Openstack
glance add name='cirros' is_public=true container_format=bare disk_format=qcow2 < cirros-0.3.2-x86_64-disk.img
# Capture the Image ID so that we can call the right UUID for this image
IMAGE_ID=`glance index | grep 'cirros' | head -1 | awk -F' ' '{print $1}'`
# Create a shared flat provider network and a subnet for it
neutron net-create --provider:physical_network=default --shared --provider:network_type=flat public
neutron subnet-create --name publicsub --allocation-pool start=10.2.3.100,end=10.2.3.200 --router:external=True public 10.2.3.0/24
# Capture the UUID of the 'public' network created above
neutron_net=`neutron net-list | grep ' public ' | awk -F' ' '{print $2}'`
# For access to the instance
nova keypair-add test > /tmp/test.private
chmod 0600 /tmp/test.private
# Allow ping and ssh
neutron security-group-rule-create --protocol icmp --direction ingress default
neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default
# Boot instance
nova boot --flavor 1 --image cirros --key-name test --nic net-id=$neutron_net providervm
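# Give the instance time to boot and acquire an address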
sleep 15
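# Look up the instance's address and verify ssh connectivity from the dhcp namespace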
address=$(nova show providervm | grep public | cut -d '|' -f '3')
ip netns exec qdhcp-$neutron_net ssh -i /tmp/test.private $address -lcirros -o StrictHostKeyChecking=no hostname


@@ -0,0 +1,12 @@
classes:
- openstacklib::firewall
- openstacklib::firewall::nova
- openstacklib::firewall::keystone
- openstacklib::firewall::glance
- openstacklib::firewall::heat
- openstacklib::firewall::neutron
- openstacklib::firewall::cinder
- openstacklib::firewall::rabbitmq
- openstacklib::firewall::dashboard
- openstacklib::firewall::keepalived
- galera::firewall


@@ -0,0 +1,11 @@
classes:
- openstacklib::loadbalance::haproxy
- openstacklib::loadbalance::haproxy::mysql
- openstacklib::loadbalance::haproxy::nova
- openstacklib::loadbalance::haproxy::keystone
- openstacklib::loadbalance::haproxy::glance
- openstacklib::loadbalance::haproxy::heat
- openstacklib::loadbalance::haproxy::neutron
- openstacklib::loadbalance::haproxy::cinder
- openstacklib::loadbalance::haproxy::rabbitmq
- openstacklib::loadbalance::haproxy::dashboard


@@ -0,0 +1,187 @@
cluster_names:
- quantum::rabbit_hosts
- neutron::rabbit_hosts
- nova::rabbit_hosts
- cinder::rabbit_hosts
- rabbitmq::cluster_nodes
- openstacklib::loadbalance::haproxy::cluster_names
- openstacklib::loadbalance::haproxy::ceilometer::cluster_names
- openstacklib::loadbalance::haproxy::cinder::cluster_names
- openstacklib::loadbalance::haproxy::dashboard::cluster_names
- openstacklib::loadbalance::haproxy::glance::cluster_names
- openstacklib::loadbalance::haproxy::heat::cluster_names
- openstacklib::loadbalance::haproxy::keystone::cluster_names
- openstacklib::loadbalance::haproxy::mysql::cluster_names
- openstacklib::loadbalance::haproxy::neutron::cluster_names
- openstacklib::loadbalance::haproxy::nova::cluster_names
- openstacklib::loadbalance::haproxy::rabbitmq::cluster_names
mysql_module:
- ceilometer::db::mysql_module
- ceilometer::db::mysql::mysql_module
- cinder::db::mysql::mysql_module
- glance::db::mysql::mysql_module
- glance::api::mysql_module
- glance::registry::mysql_module
- heat::db::mysql::mysql_module
- heat::mysql_module
- keystone::db::mysql::mysql_module
- keystone::mysql_module
- neutron::db::mysql::mysql_module
- neutron::server::mysql_module
- nova::mysql_module
- nova::db::mysql::mysql_module
control_servers_private:
- galera::galera_servers
- openstacklib::loadbalance::haproxy::mysql::cluster_addresses
- openstacklib::loadbalance::haproxy::rabbitmq::cluster_addresses
control_servers_public:
- openstacklib::loadbalance::haproxy::cluster_addresses
- openstacklib::loadbalance::haproxy::ceilometer::cluster_addresses
- openstacklib::loadbalance::haproxy::cinder::cluster_addresses
- openstacklib::loadbalance::haproxy::dashboard::cluster_addresses
- openstacklib::loadbalance::haproxy::glance::cluster_addresses
- openstacklib::loadbalance::haproxy::heat::cluster_addresses
- openstacklib::loadbalance::haproxy::keystone::cluster_addresses
- openstacklib::loadbalance::haproxy::neutron::cluster_addresses
- openstacklib::loadbalance::haproxy::nova::cluster_addresses
domain_name:
- openstacklib::hosts::domain
deploy_control_firewall_source:
- openstacklib::firewall::edeploy::source
- openstacklib::firewall::puppet::source
public_control_firewall_source:
- openstacklib::firewall::cinder::source
- openstacklib::firewall::ceilometer::source
- openstacklib::firewall::dashboard::source
- openstacklib::firewall::glance::source
- openstacklib::firewall::heat::source
- openstacklib::firewall::keystone::source
- openstacklib::firewall::nova::source
- openstacklib::firewall::neutron::source
private_control_firewall_source:
- openstacklib::firewall::rabbitmq::source
- galera::firewall::source
- openstacklib::firewall::cinder::internal_source
- openstacklib::firewall::ceilometer::internal_source
- openstacklib::firewall::dashboard::internal_source
- openstacklib::firewall::glance::internal_source
- openstacklib::firewall::heat::internal_source
- openstacklib::firewall::keystone::internal_source
- openstacklib::firewall::nova::internal_source
- openstacklib::firewall::neutron::internal_source
public_bind_ip:
- cinder::api::bind_host
- glance::api::bind_host
- glance::registry::bind_host
- heat::api_cfn::bind_host
- heat::api_cloudwatch::bind_host
- heat::api::bind_host
- keystone::public_bind_host
- neutron::bind_host
- nova::api::api_bind_address
- nova::api::metadata_listen
- nova::objectstore::bind_address
- nova::vncproxy::host
- horizon::wsgi::apache::bind_address
- horizon::bind_address
private_bind_ip:
- galera::bind_address
- galera::local_ip
- rabbitmq::node_ip_address
- keystone::admin_bind_host
public_vip:
- glance::api::registry_host
- openstacklib::loadbalance::haproxy::cluster_public_vip
- openstacklib::loadbalance::haproxy::ceilometer::vip
- openstacklib::loadbalance::haproxy::cinder::vip
- openstacklib::loadbalance::haproxy::dashboard::vip
- openstacklib::loadbalance::haproxy::glance::vip
- openstacklib::loadbalance::haproxy::heat::vip
- openstacklib::loadbalance::haproxy::keystone::vip
- openstacklib::loadbalance::haproxy::nova::vip
- openstacklib::loadbalance::haproxy::neutron::vip
private_vip:
- openstacklib::loadbalance::haproxy::cluster_private_vip
- openstacklib::loadbalance::haproxy::mysql::vip
- openstacklib::loadbalance::haproxy::rabbitmq::vip
- openstacklib::loadbalance::haproxy::keystone::internal_vip
- openstacklib::loadbalance::haproxy::ceilometer::internal_vip
- openstacklib::loadbalance::haproxy::cinder::internal_vip
- openstacklib::loadbalance::haproxy::dashboard::internal_vip
- openstacklib::loadbalance::haproxy::glance::internal_vip
- openstacklib::loadbalance::haproxy::heat::internal_vip
- openstacklib::loadbalance::haproxy::nova::internal_vip
- openstacklib::loadbalance::haproxy::neutron::internal_vip
- glance::notify::rabbitmq::rabbit_host
- cinder::qpid_hostname
- cinder::rabbit_host
- nova::rabbit_host
- nova::qpid_hostname
- heat::qpid_hostname
- heat::rabbit_host
- quantum::rabbit_host
- quantum::qpid_hostname
- neutron::qpid_hostname
- neutron::rabbit_host
- ceilometer::db::mysql::host
- ceilometer::rabbit_host
- ceilometer::qpid_hostname
- cinder::db::mysql::host
- glance::db::mysql::host
- keystone::db::mysql::host
- nova::db::mysql::host
- quantum::db::mysql::host
- neutron::db::mysql::host
- cinder::keystone::auth::internal_address
- glance::keystone::auth::internal_address
- nova::keystone::auth::internal_address
- heat::keystone::auth::internal_address
- heat::keystone::auth_cfn::internal_address
- cinder::api::keystone_auth_host
- keystone::endpoint::internal_address
- glance::api::auth_host
- glance::registry::auth_host
- horizon::keystone_host
- nova::api::auth_host
- quantum::server::auth_host
- neutron::server::auth_host
- quantum::keystone::auth::internal_address
- neutron::keystone::auth::internal_address
- openstack::auth_file::controller_node
- quantum::agents::metadata::metadata_ip
- neutron::agents::metadata::metadata_ip
- openstack::swift::proxy::keystone_host
- swift::keystone::auth::internal_address
- ceilometer::keystone::auth::internal_address
- ceilometer::api::keystone_host
- heat::keystone_host
- heat::db::mysql::host
- cinder::keystone::auth::admin_address
- glance::keystone::auth::admin_address
- nova::keystone::auth::admin_address
- heat::keystone::auth::admin_address
- heat::keystone::auth_cfn::admin_address
- keystone::endpoint::admin_address
- quantum::keystone::auth::admin_address
- neutron::keystone::auth::admin_address
- swift::keystone::auth::admin_address
- ceilometer::keystone::auth::admin_address
openstack_release:
- openstacklib::compat::openstack_release
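Each top-level key above names a single value that fans out to every class parameter listed under it. In the pure hiera layout the same fan-out is achieved with interpolation, so that setting one value drives all of its consumers; a minimal sketch for the public_vip mapping, using the value from the scenario data below (illustrative only):

    public_vip: 10.2.3.5
    glance::api::registry_host: "%{hiera('public_vip')}"
    openstacklib::loadbalance::haproxy::cluster_public_vip: "%{hiera('public_vip')}"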


@@ -0,0 +1,17 @@
---
db_type: mysql
rpc_type: rabbitmq
cinder_backend: iscsi
glance_backend: file
compute_type: libvirt
# networking options
network_service: neutron
# supports linuxbridge and ovs
network_plugin: ovs
# supports single-flat, provider-router, and per-tenant-router
network_type: provider-router
# supports gre or vlan
tenant_network_type: gre
password_management: individual
install_tempest: false
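Several of these values are interpolated directly into class names by the role definitions later in this commit, so a single global selects the implementation class. A short sketch of how the values above resolve:

    classes:
      - "nova::%{rpc_type}"                   # resolves to nova::rabbitmq
      - "glance::backend::%{glance_backend}"  # resolves to glance::backend::file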


@@ -0,0 +1,207 @@
# eth0: vagrant network in testing
# eth1: deploy network
# eth2: public api network
# eth3: private service network + GRE
# eth4: external data network
# The IP address to be used to connect to Horizon and external
# services on the control node. In the compressed_ha or full_ha scenarios,
# this will be an address to be configured as a VIP on the HAProxy
# load balancers, not the address of the control node itself.
controller_public_address: 10.2.3.5
# The IP address used for internal communication with the control node.
# In the compressed_ha or full_ha scenarios, this will be an address
# to be configured as a VIP on the HAProxy load balancers, not the address
# of the control node itself.
controller_internal_address: 10.3.3.5
# This is the address of the admin endpoints for Openstack
# services. In this scenario the admin address is the same as
# the internal one.
controller_admin_address: 10.3.3.5
# Interface that will be taken over by the l3 router on
# the control node. Its IP will become unreachable, so don't
# set this to an interface you are otherwise using.
external_interface: eth4
# Gre tunnel address for each node
internal_ip: "%{ipaddress_eth3}"
# This is the interface that each node will be binding
# various services on.
deploy_bind_ip: "%{ipaddress_eth1}"
public_bind_ip: "%{ipaddress_eth2}"
private_bind_ip: "%{ipaddress_eth3}"
# The public VIP, where all API services are exposed to users.
public_vip: 10.2.3.5
# The private VIP, where services are exposed to openstack services.
private_vip: 10.3.3.5
# List of IP addresses for controllers on the public network
control_servers_public: [ '10.2.3.10', '10.2.3.11', '10.2.3.12']
# List of IP addresses for controllers on the private network
control_servers_private: [ '10.3.3.10', '10.3.3.11', '10.3.3.12']
# A hash of hostnames to private network IPs. Used for rabbitmq hosts
# resolution
openstacklib::hosts::cluster_hash:
control1.private: '10.3.3.10'
control2.private: '10.3.3.11'
control3.private: '10.3.3.12'
# List of controller hostnames. Used for rabbitmq hosts list
cluster_names: [ 'control1.private', 'control2.private', 'control3.private' ]
# Allowed hosts for mysql users
allowed_hosts: 10.3.3.%
# Galera status checking
galera::status::status_allow: "%{hiera('allowed_hosts')}"
galera::status::status_password: clustercheck
galera::status::status_host: "%{hiera('private_vip')}"
# Edeploy is a tool from eNovance for provisioning servers based on
# chroots created on the build node.
edeploy::serv: '%{ipaddress_eth1}'
edeploy::hserv: '%{ipaddress_eth1}'
edeploy::rserv: '%{ipaddress_eth1}'
edeploy::hserv_port: 8082
edeploy::http_install_port: 8082
edeploy::install_apache: false
edeploy::giturl: 'https://github.com/michaeltchapman/edeploy.git'
edeploy::rsync_exports:
'install':
'path': '/var/lib/debootstrap/install'
'comment': 'The Install Path'
'metadata':
'path': '/var/lib/edeploy/metadata'
'comment': 'The Metadata Path'
# Dnsmasq is used by edeploy to provide dhcp on the deploy
# network.
dnsmasq::domain_needed: false
dnsmasq::interface: 'eth1'
dnsmasq::dhcp_range: ['192.168.242.3, 192.168.242.50']
dnsmasq::dhcp_boot: ['pxelinux.0']
apache::default_vhost: false
#apache::ip: "%{ipaddress_eth2}"
horizon::wsgi::apache::bind_address: "%{ipaddress_eth2}"
# Use these to set an apt proxy if running on a Debian-like system
apt::proxy_host: 192.168.0.18
apt::proxy_port: 8000
# We are using the new version of puppetlabs-mysql, which
# requires this parameter for compatibility.
mysql_module: '2.2'
# Install the python mysql bindings on all hosts
# that include mysql::bindings
mysql::bindings::python_enable: true
# This node will be used to bootstrap the cluster on initial deployment
# or if there is a total failure of the control cluster
galera::galera_master: 'control1.domain.name'
# This can be either percona or mariadb, depending on preference
galera::vendor_type: 'mariadb'
# epel is included by openstack::repo::rdo, so we
# don't need it from other modules
devtools::manage_epel: false
galera::repo::epel_needed: false
# We are using the new rabbitmq module, which removes
# the rabbitmq::server class in favor of ::rabbitmq
nova::rabbitmq::rabbitmq_class: '::rabbitmq'
# We don't want to get Rabbit from the upstream, instead
# preferring the RDO/UCA version.
rabbitmq::manage_repos: false
rabbitmq::package_source: false
# Change this to apt on Debian-based systems
rabbitmq::package_provider: yum
# The rabbit module expects the upstream rabbit package, which
# includes plugins that the distro packages do not.
rabbitmq::admin_enable: false
# Rabbit clustering configuration
rabbitmq::config_cluster: true
rabbitmq::config_mirrored_queues: true
rabbitmq::cluster_node_type: 'disc'
rabbitmq::wipe_db_on_cookie_change: true
# This is the port range for rabbit clustering
rabbitmq::config_kernel_variables:
inet_dist_listen_min: 9100
inet_dist_listen_max: 9105
# Openstack version to install
openstack_release: havana
openstack::repo::uca::release: 'havana'
openstack::repo::rdo::release: 'havana'
# Proxy configuration of either apt or yum
openstacklib::repo::apt_proxy_host: '192.168.0.18'
openstacklib::repo::apt_proxy_port: '8000'
openstacklib::repo::yum_http_proxy: 'http://192.168.0.18:8000'
openstacklib::repo::yum_epel_mirror: 'http://mirror.aarnet.edu.au'
openstacklib::repo::yum_base_mirror: 'http://mirror.aarnet.edu.au'
openstacklib::hosts::build_server_ip: '192.168.242.100'
openstacklib::hosts::build_server_name: 'build-server'
openstacklib::hosts::domain: 'domain.name'
openstacklib::hosts::mgmt_ip: "%{ipaddress_eth1}"
# Loadbalancer configuration
openstacklib::loadbalance::haproxy::vip_secret: 'vip_password'
openstacklib::loadbalance::haproxy::public_iface: 'eth2'
openstacklib::loadbalance::haproxy::private_iface: 'eth3'
openstacklib::loadbalance::haproxy::cluster_master: 'control1.domain.name'
# CIDRs for the three networks.
deploy_control_firewall_source: '192.168.242.0/24'
public_control_firewall_source: '10.2.3.0/24'
private_control_firewall_source: '10.3.3.0/24'
# Store reports in puppetdb
puppet::master::reports: 'store,puppetdb'
# This purges config files to remove entries not set by puppet.
# This is essential on RDO where qpid is the default
glance::api::purge_config: true
# PKI will cause issues when using load balancing because each
# keystone will be a different CA, so use uuid.
keystone::token_provider: 'keystone.token.providers.uuid.Provider'
# Validate keystone connection via VIP before
# evaluating custom types
keystone::validate_service: true
# HAProxy is installed via puppetlabs-haproxy, so we don't need to install it
# via the LBaaS agent
neutron::agents::lbaas::manage_haproxy_package: false
neutron::agents::vpnaas::enabled: false
neutron::agents::lbaas::enabled: false
neutron::agents::fwaas::enabled: false
neutron::agents::metadata::shared_secret: "%{hiera('metadata_shared_secret')}"
# Multi-region mappings. See contrib/aptira/puppet/user.regcon.yaml for a sample
# of configuring multiple regions
openstacklib::openstack::regions::nova_user_pw: "%{hiera('nova_service_password')}"
openstacklib::openstack::regions::neutron_user_pw: "%{hiera('network_service_password')}"
openstacklib::openstack::regions::glance_user_pw: "%{hiera('glance_service_password')}"
openstacklib::openstack::regions::heat_user_pw: "%{hiera('heat_service_password')}"
openstacklib::openstack::regions::cinder_user_pw: "%{hiera('cinder_service_password')}"
openstacklib::openstack::regions::ceilometer_user_pw: "%{hiera('ceilometer_service_password')}"
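The %{hiera('...')} lookups above resolve against keys defined elsewhere in the hierarchy, so the multi-region user passwords track the per-service passwords wherever those are set. A minimal sketch, assuming hypothetical user-supplied values:

    # hypothetical values from a user-level data file
    nova_service_password: 'nova_secret'
    metadata_shared_secret: 'shared_secret'
    # with these defined, openstacklib::openstack::regions::nova_user_pw resolves to 'nova_secret'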

data/nodes/stacktira.yaml Normal file

@@ -0,0 +1,66 @@
nodes:
build-server:
vagrant_name: build-server
memory: 3000
ip_number: 100
puppet_type: apply
post_config:
- 'puppet plugin download --server build-server.domain.name'
#- 'service apache2 restart'
- 'service httpd restart'
# - 'bash /vagrant/contrib/aptira/build.sh'
control1:
vagrant_name: control1
memory: 3000
ip_number: 10
control2:
vagrant_name: control2
memory: 3000
ip_number: 11
control3:
vagrant_name: control3
memory: 3000
ip_number: 12
compute1:
vagrant_name: compute1
memory: 2512
ip_number: 21
compute2:
vagrant_name: compute2
memory: 2512
ip_number: 22
regcon1:
vagrant_name: regcon1
environment: regcon
role: regcon
memory: 3000
ip_number: 10
network: 10
regsubr1:
vagrant_name: regsubr1
environment: RegionOne
role: regsub
memory: 3000
ip_number: 110
network: 10
regsubr2:
vagrant_name: regsubr2
environment: RegionTwo
role: regsub
memory: 3000
ip_number: 210
network: 10
computer1:
vagrant_name: computer1
environment: RegionOne
role: compute
memory: 2512
ip_number: 121
network: 10
computer2:
vagrant_name: computer2
environment: RegionTwo
role: compute
memory: 2512
ip_number: 221
network: 10


@@ -23,3 +23,8 @@ swift-proxy02: swift_proxy
swift-storage01: swift_storage
swift-storage02: swift_storage
swift-storage03: swift_storage
# Stacktira roles
build.+: build
compute.+: compute
control.+: control


@@ -0,0 +1,103 @@
#
# Role definitions for this scenario
#
roles:
build:
classes:
- openstacklib::repo
- openstacklib::hosts
- openstacklib::puppet::master
- edeploy
- dnsmasq
- openstacklib::firewall::edeploy
- openstacklib::firewall::puppet
# Control and compute are for a standard single region deployment
control:
classes:
- galera
- mysql::bindings
- "nova::%{rpc_type}"
- openstacklib::openstack::databases
- openstacklib::openstack::endpoints
- openstacklib::openstack::provider
- openstacklib::repo
- openstacklib::hosts
- openstacklib::compat
- neutron::server
- keystone
- keystone::roles::admin
class_groups:
- network_controller
- glance_all
- cinder_controller
- nova_controller
- horizon
- heat_all
- firewall_control
- loadbalance_control
- test_file
compute:
class_groups:
- nova_compute
- cinder_volume
- ceilometer_compute
classes:
- openstacklib::repo
- openstacklib::hosts
- mysql::bindings
# This role is used by multi-region installations
# to share horizon, keystone and glance
regcon:
class_groups:
- horizon
classes:
- "galera"
- "mysql::bindings"
- "openstacklib::openstack::databases"
- "openstacklib::openstack::regions"
- "openstacklib::repo"
- "openstacklib::hosts"
- "openstacklib::compat"
- "keystone"
- "keystone::roles::admin"
- "glance"
- "glance::api"
- "glance::registry"
- "glance::backend::%{glance_backend}"
- "glance::cache::pruner"
- "glance::cache::cleaner"
- "openstacklib::loadbalance::haproxy"
- "openstacklib::loadbalance::haproxy::keystone"
- "openstacklib::loadbalance::haproxy::glance"
- "openstacklib::loadbalance::haproxy::dashboard"
- "openstacklib::loadbalance::haproxy::mysql"
- "openstacklib::firewall"
- "openstacklib::firewall::keystone"
- "openstacklib::firewall::glance"
- "openstacklib::firewall::dashboard"
- "openstacklib::firewall::keepalived"
- "galera::firewall"
# This is a child region controller that uses the top region for keystone
# and glance, but has its own cinder, nova, heat and neutron
regsub:
class_groups:
- "network_controller"
- "cinder_controller"
- "nova_controller"
- "heat_all"
- "firewall_control"
- "loadbalance_control"
- "test_file"
classes:
- "galera"
- "mysql::bindings"
- "nova::%{rpc_type}"
- "openstacklib::openstack::databases"
- "openstacklib::repo"
- "openstacklib::hosts"
- "openstacklib::compat"
- "openstacklib::openstack::provider"
- "neutron::server"