Total rewrite of the openstack project

Converts the entire project from submodules to a single
module with a rake task to manage its dependencies.

This was done because there needs to be a module called
openstack to contain all of the higher-level openstack
specifications.
Dan Bode
2012-04-30 17:31:29 -07:00
parent 350385b518
commit 4d76d701c9
44 changed files with 720 additions and 726 deletions

7
.gitignore vendored

@@ -1,7 +0,0 @@
*.swp
images/*
facter
puppet
bashrc
.vagrant
graphs/*

51
.gitmodules vendored

@@ -1,51 +0,0 @@
[submodule "modules/apt"]
path = modules/apt
url = git://github.com/puppetlabs/puppet-apt.git
[submodule "modules/mysql"]
path = modules/mysql
url = git://github.com/bodepd/puppetlabs-mysql.git
[submodule "modules/rabbitmq"]
path = modules/rabbitmq
url = git://github.com/puppetlabs/puppetlabs-rabbitmq.git
[submodule "modules/stdlib"]
path = modules/stdlib
url = git://github.com/puppetlabs/puppetlabs-stdlib.git
[submodule "modules/glance"]
path = modules/glance
url = git://github.com/puppetlabs/puppetlabs-glance.git
[submodule "modules/nova"]
path = modules/nova
url = git://github.com/puppetlabs/puppetlabs-nova.git
[submodule "modules/create_resources"]
path = modules/create_resources
url = git://github.com/puppetlabs/puppetlabs-create_resources.git
[submodule "modules/concat"]
path = modules/concat
url = git://github.com/puppetlabs/puppet-concat
[submodule "modules/keystone"]
path = modules/keystone
url = git://github.com/puppetlabs/puppetlabs-keystone
[submodule "modules/rsync"]
path = modules/rsync
url = git://github.com/puppetlabs/puppetlabs-rsync
[submodule "modules/ssh"]
path = modules/ssh
url = git://github.com/saz/puppet-ssh
[submodule "modules/memcached"]
path = modules/memcached
url = git://github.com/saz/puppet-memcached
[submodule "modules/xinetd"]
path = modules/xinetd
url = git://github.com/ghoneycutt/puppet-xinetd
[submodule "modules/swift"]
path = modules/swift
url = git://github.com/puppetlabs/puppetlabs-swift
[submodule "puppetlabs-keystone"]
path = puppetlabs-keystone
url = git://github.com/puppetlabs/puppetlabs-keystone
[submodule "modules/vcsrepo"]
path = modules/vcsrepo
url = git://github.com/puppetlabs/puppet-vcsrepo
[submodule "modules/horizon"]
path = modules/horizon
url = https://github.com/puppetlabs/puppetlabs-horizon.git

15
NOTES

@@ -1,15 +0,0 @@
To install on a single node:
# configure with the all test manifest
> puppet apply nova/tests/all.pp
# download the image to test with:
> mkdir /vagrant/images/lucid_ami && cd /vagrant/images/lucid_ami
> wget -q -O - http://173.203.107.207/ubuntu-lucid.tar | tar xSv
# now run the test code:
> ./ext/tests.sh
# this will verify that you can insert an image in the glance db
# and use it to boot an instance

130
README

@@ -1,130 +0,0 @@
Puppet Labs OpenStack
=====================
A collection of modules to install a single node OpenStack server.
Thanks to Canonical and Rackspace for their assistance in developing these modules.
Requirements
------------
* Currently only works on Ubuntu Natty
* Puppet 2.6.8 or later
* Nova PPA
Installation
------------
1. Install Python software properties
$ sudo apt-get install -y python-software-properties
2. Install the Nova PPA
$ sudo add-apt-repository ppa:nova-core/trunk
3. Update APT
$ sudo apt-get update
4. Install Puppet -- On Oneiric Ocelot (11.10) the Ubuntu repos have the correct version of Puppet, but on Natty you have to manually install the 11.10 packages
$ sudo apt-get install libxmlrpc-ruby libopenssl-ruby libshadow-ruby1.8 libaugeas-ruby1.8
$ wget http://mirror.pnl.gov/ubuntu//pool/main/p/puppet/puppet_2.6.8-1ubuntu1_all.deb \
http://mirror.pnl.gov/ubuntu//pool/main/p/puppet/puppet-common_2.6.8-1ubuntu1_all.deb \
http://mirror.pnl.gov/ubuntu//pool/main/f/facter/facter_1.5.9-1ubuntu1_all.deb
$ sudo dpkg -i *.deb
Make sure you have version 2.6.8:
$ sudo puppet --version
2.6.8
5. Download the Puppet OpenStack module
$ cd ~ && git clone --recurse git://github.com/puppetlabs/puppetlabs-openstack.git
6. Copy the modules into the Puppet modulepath
$ sudo cp -R ~/puppetlabs-openstack/modules/* /etc/puppet/modules/
7. Run Puppet
$ sudo puppet apply --verbose ~/puppetlabs-openstack/manifests/all.pp
Usage
-----
1. Add images using glance
2. Extract credentials
$ cd ~
$ sudo nova-manage project zipfile nova novaadmin
$ unzip nova.zip
$ source novarc
$ euca-add-keypair openstack > ~/cert.pem
3. List the available images and flavors
$ nova flavor-list
$ nova image-list
4. Run an instance
$ euca-run-instances ami-00000003 -k openstack -t m1.tiny
5. List the running instances
$ euca-describe-instances
Installing With Vagrant
------------------------
These examples assume that you have a suitable image that you can
use for testing.
1. Export the environment variable:
Export an environment variable that points at the vagrant box to use.
All of my testing has been done using a Natty image.
2. Download glance images for testing:
$ mkdir images
$ cd images
$ curl -g -o ttylinux-uec-i686-12.1_2.6.35-22_1.tar.gz http://smoser.brickies.net/ubuntu/ttylinux-uec/ttylinux-uec-i686-12.1_2.6.35-22_1.tar.gz
3. Test the single node installation:
This will install all of the currently supported openstack components onto a single node.
Run the rake task:
$ rake build:single
Author
------
Puppet Labs, Canonical & Rackspace!
License
-------
Author:: Puppet Labs (<info@puppetlabs.com>)
Copyright:: Copyright (c) 2011 Puppet Labs
License:: Apache License, Version 2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

3
README.dev.md Normal file

@@ -0,0 +1,3 @@
This is a high level project that wraps the other openstack projects.
Use `rake modules:clone_all` to clone all projects.

68
README.md Normal file

@@ -0,0 +1,68 @@
# NOTE
This project has been completely rewritten to manage
all of the dependent modules based on a rake task.
If you are looking for the old project that managed the openstack
modules based on submodules, it has been moved here:
https://github.com/puppetlabs/puppetlabs-openstack_project
# Puppet Module for Openstack
This module wraps the various other openstack modules and
provides higher level classes that can be used to deploy
openstack environments.
## Supported Versions
These modules are currently specific to the Essex release of OpenStack.
They have been tested and are known to work on Ubuntu 12.04 (Precise).
They are also in the process of being verified against Fedora 17.
## Installation
1. Install Puppet
$ apt-get install puppet
2. Install other project dependencies:
$ apt-get install rake git
3. Download the Puppet OpenStack module
$ cd ~ && git clone git://github.com/puppetlabs/puppetlabs-openstack.git
4. Copy the module into the modulepath
$ sudo cp -R ~/puppetlabs-openstack/modules/* /etc/puppet/modules/
5. Use the rake task to install all other module dependencies:
<pre>
rake modules:clone_all
</pre>
This rake task is driven by the following configuration file:
<pre>
other_repos.yaml
</pre>
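The contents of that file are not shown in this commit; a hypothetical example, with the key layout inferred from what the Rakefile reads (`repos`, `repo_paths`, `checkout_branches`) and with illustrative URLs, paths, and branch names, might look like:

```yaml
# Hypothetical other_repos.yaml -- structure inferred from the Rakefile:
#   repos/repo_paths maps a remote URL to a local clone path,
#   repos/checkout_branches maps a local path to a branch to check out.
# The entries below are examples only.
repos:
  repo_paths:
    git://github.com/puppetlabs/puppetlabs-nova.git: modules/nova
    git://github.com/puppetlabs/puppetlabs-glance.git: modules/glance
    git://github.com/puppetlabs/puppetlabs-keystone.git: modules/keystone
  checkout_branches:
    modules/nova: master
```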
## Classes
This module currently provides 3 classes that can be used to deploy openstack:
* `openstack::all` - can be used to deploy a single node all-in-one environment
* `openstack::controller` - can be used to deploy an openstack controller
* `openstack::compute` - can be used to deploy an openstack compute node
## Example Usage
coming soon...
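In the meantime, a minimal sketch of using these classes, with node patterns and parameter values borrowed from examples/site.pp in this commit (the address facts are whatever facter reports on your node), might be:

```puppet
# Minimal sketch; assumes the dependent modules have been cloned with the
# rake task and copied into the Puppet modulepath.
node /openstack_all/ {
  class { 'openstack::all':
    public_address => $ipaddress_eth0,
  }
}

node /openstack_compute/ {
  class { 'openstack::compute':
    internal_address => $ipaddress,
    libvirt_type     => 'qemu',
  }
}
```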

105
Rakefile

@@ -1,77 +1,46 @@
require 'vagrant'
#
# Rakefile to make management of the module easier (I hope :) )
#
# I did not do this in puppet because it requires the vcsrepo module!!
#
#
env=Vagrant::Environment.new(:cwd => File.dirname(__FILE__))
# this captures the regular output to stdout
env.ui = Vagrant::UI::Shell.new(env, Thor::Base.shell.new)
env.load!
require 'puppet'
# all of the instance to build out for multi-node
instances = [
:db,
:rabbitmq,
:glance,
:controller,
:compute
]
repo_file = 'other_repos.yaml'
namespace :build do
desc 'build out 5 node openstack cluster'
task :multi do
instances.each do |instance|
build(instance, env)
namespace :modules do
desc 'clone all required modules'
task :clone do
repo_hash = YAML.load_file(File.join(File.dirname(__FILE__), repo_file))
repos = (repo_hash['repos'] || {})
repos_to_clone = (repos['repo_paths'] || {})
branches_to_checkout = (repos['checkout_branches'] || {})
repos_to_clone.each do |remote, local|
# I should check to see if the file is there?
output = `git clone #{remote} #{local}`
Puppet.debug(output)
end
branches_to_checkout.each do |local, branch|
Dir.chdir(local) do
output = `git checkout #{branch}`
end
# Puppet.debug(output)
end
end
desc 'build out openstack on one node'
task :single do
build(:all, env)
end
end
# bring vagrant vm with image name up
def build(instance, env)
unless vm = env.vms[instance]
puts "invalid VM: #{instance}"
else
if vm.created?
puts "VM: #{instance} was already created"
else
# be very fault tolerant :)
begin
# this will always fail
vm.up(:provision => true)
rescue Exception => e
puts e.class
puts e
desc 'see if any of the modules are not up-to-date'
task 'status' do
repo_hash = YAML.load_file(File.join(File.dirname(__FILE__), repo_file))
repos = (repo_hash['repos'] || {})
repos_to_clone = (repos['repo_paths'] || {})
branches_to_checkout = (repos['checkout_branches'] || {})
repos_to_clone.each do |remote, local|
# I should check to see if the file is there?
Dir.chdir(local) do
puts "Checking status of #{local}"
puts `git status`
end
end
end
end
namespace :test do
desc 'test multi-node installation'
task :multi do
{:glance => ['sudo /vagrant/ext/glance.sh'],
:controller => ['sudo /vagrant/ext/nova.sh'],
}.each do |instance, commands|
test(instance, commands, env)
end
end
desc 'test single node installation'
task :single do
test(:all, ['sudo /vagrant/ext/glance.sh', 'sudo /vagrant/ext/nova.sh'], env)
end
end
def test(instance, commands, env)
unless vm = env.vms[instance]
puts "invalid VM: #{instance}"
else
puts "testing :#{instance}"
vm.ssh.execute do |ssh|
commands.each do |c|
#puts ssh.methods - Object.methods
puts ssh.exec!(c)
end
end
end
end
end

70
Vagrantfile vendored

@@ -1,70 +0,0 @@
Vagrant::Config.run do |config|
#vagrant config file for building out multi-node with Puppet :)
box = 'natty_openstack'
remote_url_base = ENV['REMOTE_VAGRANT_STORE']
config.vm.box = "#{box}"
config.ssh.forwarded_port_key = "ssh"
ssh_forward = 2222
config.vm.box = "#{box}"
config.vm.box_url = "http://faro.puppetlabs.lan/vagrant/#{box}.vbox"
config.vm.customize do |vm|
vm.memory_size = 768
vm.cpu_count = 1
end
net_base = "172.21.0"
# the master runs apply to configure itself
config.vm.define :puppetmaster do |pm|
ssh_forward = ssh_forward + 1
pm.vm.forward_port('ssh', 22, ssh_forward, :auto => true)
pm.vm.network("#{net_base}.10")
pm.vm.provision :shell, :path => 'scripts/run-puppetmaster.sh'
end
config.vm.define :all do |all|
ssh_forward = ssh_forward + 1
all.vm.forward_port('ssh', 22, ssh_forward, :auto => true)
all.vm.network("#{net_base}.11")
all.vm.provision :shell, :path => 'scripts/run-all.sh'
end
config.vm.define :db do |mysql|
ssh_forward = ssh_forward + 1
mysql.vm.forward_port('ssh', 22, ssh_forward, :auto => true)
mysql.vm.network("#{net_base}.12")
mysql.vm.provision :shell, :path => 'scripts/run-db.sh'
end
config.vm.define :rabbitmq do |rabbit|
ssh_forward = ssh_forward + 1
rabbit.vm.forward_port('ssh', 22, ssh_forward, :auto => true)
rabbit.vm.network("#{net_base}.13")
rabbit.vm.provision :shell, :path => 'scripts/run-rabbitmq.sh'
end
config.vm.define :controller do |controller|
ssh_forward = ssh_forward + 1
controller.vm.forward_port('ssh', 22, ssh_forward, :auto => true)
controller.vm.network("#{net_base}.14")
controller.vm.provision :shell, :path => 'scripts/run-controller.sh'
end
config.vm.define :compute do |compute|
ssh_forward = ssh_forward + 1
compute.vm.forward_port('ssh', 22, ssh_forward, :auto => true)
compute.vm.network("#{net_base}.15")
compute.vm.provision :shell, :path => 'scripts/run-compute.sh'
end
config.vm.define :glance do |glance|
ssh_forward = ssh_forward + 1
glance.vm.forward_port('ssh', 22, ssh_forward, :auto => true)
glance.vm.network("#{net_base}.16")
glance.vm.provision :shell, :path => 'scripts/run-glance.sh'
end
end
# vim:ft=ruby

71
examples/site.pp Normal file

@@ -0,0 +1,71 @@
#
# any nodes whose certname matches nova_all should
# become an openstack all-in-one node
#
#
Exec {
logoutput => true,
}
resources { 'nova_config':
purge => true,
}
node /openstack_all/ {
class { 'openstack::all':
public_address => $ipaddress_eth0
}
class { 'openstack_controller': }
}
node /openstack_controller/ {
class { 'openstack::controller':
public_address => $public_hostname,
internal_address => $ipaddress,
}
class { 'openstack_controller': }
}
node /openstack_compute/ {
class { 'openstack::compute':
# setting to qemu b/c I still test in ec2 :(
internal_address => $ipaddress,
libvirt_type => 'qemu',
}
}
# this shows an example of the code needed to perform
# an all in one installation
#
# sets up a few things that I use for testing
#
class openstack_controller {
#
# set up auth credentials so that we can authenticate easily
#
file { '/root/auth':
content =>
'
export OS_TENANT_NAME=openstack
export OS_USERNAME=admin
export OS_PASSWORD=ChangeMe
export OS_AUTH_URL="http://localhost:5000/v2.0/"
'
}
# this is a hack that I have to do b/c openstack nova
# sets up a route to reroute calls to the metadata server
# to its own server which fails
file { '/usr/lib/ruby/1.8/facter/ec2.rb':
ensure => absent,
}
}

30
files/nova_test.sh Executable file

@@ -0,0 +1,30 @@
#!/bin/bash
#
# assumes that reasonable credentials have been stored at
# /root/auth
source /root/auth
# get an image to test with
#wget http://uec-images.ubuntu.com/releases/11.10/release/ubuntu-11.10-server-cloudimg-amd64-disk1.img
# import that image into glance
#glance add name="Ubuntu 11.10 cloudimg amd64" is_public=true container_format=ovf disk_format=qcow2 < ubuntu-11.10-server-cloudimg-amd64-disk1.img
#IMAGE_ID=`glance index | grep 'Ubuntu 11.10 cloudimg amd64' | head -1 | awk -F' ' '{print $1}'`
wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
glance add name='cirros image' is_public=true container_format=bare disk_format=qcow2 < cirros-0.3.0-x86_64-disk.img
IMAGE_ID=`glance index | grep 'cirros image' | head -1 | awk -F' ' '{print $1}'`
# create a pub key
ssh-keygen -f /tmp/id_rsa -t rsa -N ''
nova keypair-add --pub_key /tmp/id_rsa.pub key1
nova boot --flavor 1 --image ${IMAGE_ID} --key_name key1 dans_vm
nova show dans_vm
# create ec2 credentials
keystone ec2-credentials-create


@@ -1,26 +1,212 @@
#
# This manifest installs all of the nova
# components on one node.
resources { 'nova_config':
purge => true,
}
class { 'mysql::server': }
class { 'nova::all':
db_password => 'password',
db_name => 'nova',
db_user => 'nova',
db_host => 'localhost',
rabbit_password => 'rabbitpassword',
rabbit_port => '5672',
rabbit_userid => 'rabbit_user',
rabbit_virtual_host => '/',
rabbit_host => 'localhost',
image_service => 'nova.image.glance.GlanceImageService',
glance_host => 'localhost',
glance_port => '9292',
libvirt_type => 'qemu',
#
# This class can be used to perform
# an openstack all-in-one installation.
#
class openstack::all(
# passing in the public ipaddress is required
$public_address,
# middleware credentials
$mysql_root_password = 'sql_pass',
$rabbit_password = 'rabbit_pw',
$rabbit_user = 'nova',
# openstack credentials
$admin_email = 'someuser@some_fake_email_address.foo',
$admin_user_password = 'ChangeMe',
$keystone_db_password = 'keystone_pass',
$keystone_admin_token = 'keystone_admin_token',
$nova_db_password = 'nova_pass',
$nova_user_password = 'nova_pass',
$glance_db_password = 'glance_pass',
$glance_user_password = 'glance_pass',
# config
$verbose = true,
$purge_nova_config = true,
) {
#
# indicates that all nova config entries that we did
# not specify in Puppet should be purged from the file
#
if ($purge_nova_config) {
resources { 'nova_config':
purge => true,
}
}
# set up mysql server
class { 'mysql::server':
config_hash => {
# the priv grant fails on precise if I set a root password
# 'root_password' => $mysql_root_password,
'bind_address' => '127.0.0.1'
}
}
####### KEYSTONE ###########
# set up keystone database
class { 'keystone::db::mysql':
password => $keystone_db_password,
}
# set up the keystone config for mysql
class { 'keystone::config::mysql':
password => $keystone_db_password,
}
# set up keystone
class { 'keystone':
admin_token => $keystone_admin_token,
bind_host => '127.0.0.1',
log_verbose => $verbose,
log_debug => $verbose,
catalog_type => 'sql',
}
# set up keystone admin users
class { 'keystone::roles::admin':
email => $admin_email,
password => $admin_user_password,
}
# set up the keystone service and endpoint
class { 'keystone::endpoint': }
######## END KEYSTONE ##########
######## BEGIN GLANCE ##########
# set up keystone user, endpoint, service
class { 'glance::keystone::auth':
password => $glance_user_password,
}
# create glance db/user/grants
class { 'glance::db::mysql':
host => '127.0.0.1',
password => $glance_db_password,
}
# configure glance api
class { 'glance::api':
log_verbose => $verbose,
log_debug => $verbose,
auth_type => 'keystone',
auth_host => '127.0.0.1',
auth_port => '35357',
keystone_tenant => 'services',
keystone_user => 'glance',
keystone_password => $glance_user_password,
}
# configure glance to store images to disk
class { 'glance::backend::file': }
class { 'glance::registry':
log_verbose => $verbose,
log_debug => $verbose,
auth_type => 'keystone',
auth_host => '127.0.0.1',
auth_port => '35357',
keystone_tenant => 'services',
keystone_user => 'glance',
keystone_password => $glance_user_password,
sql_connection => "mysql://glance:${glance_db_password}@127.0.0.1/glance",
}
######## END GLANCE ###########
######## BEGIN NOVA ###########
class { 'nova::keystone::auth':
password => $nova_user_password,
}
class { 'nova::rabbitmq':
userid => $rabbit_user,
password => $rabbit_password,
}
class { 'nova::db::mysql':
password => $nova_db_password,
host => 'localhost',
}
class { 'nova':
sql_connection => "mysql://nova:${nova_db_password}@localhost/nova",
rabbit_userid => $rabbit_user,
rabbit_password => $rabbit_password,
image_service => 'nova.image.glance.GlanceImageService',
glance_api_servers => '127.0.0.1:9292',
network_manager => 'nova.network.manager.FlatDHCPManager',
}
class { 'nova::api':
enabled => true,
admin_password => $nova_user_password,
}
class { 'nova::scheduler':
enabled => true
}
class { 'nova::network':
enabled => true
}
nova::manage::network { "nova-vm-net":
network => '11.0.0.0/24',
available_ips => 128,
}
nova::manage::floating { "nova-vm-floating":
network => '10.128.0.0/24',
}
class { 'nova::objectstore':
enabled => true
}
class { 'nova::volume':
enabled => true
}
class { 'nova::cert':
enabled => true
}
class { 'nova::consoleauth':
enabled => true
}
class { 'nova::vncproxy':
host => $public_hostname,
}
class { 'nova::compute':
enabled => true,
vnc_enabled => true,
vncserver_proxyclient_address => '127.0.0.1',
vncproxy_host => $public_address,
}
class { 'nova::compute::libvirt':
libvirt_type => 'qemu',
vncserver_listen => '127.0.0.1',
}
nova::network::bridge { 'br100':
ip => '11.0.0.1',
netmask => '255.255.255.0',
}
######## Horizon ########
class { 'memcached':
listen_ip => '127.0.0.1',
}
class { 'horizon': }
######## End Horizon #####
}

56
manifests/compute.pp Normal file

@@ -0,0 +1,56 @@
#
# This class is intended to serve as
# a way of deploying compute nodes.
#
# This currently makes the following assumptions:
# - libvirt is used to manage the hypervisors
# - flatdhcp networking is used
# - glance is used as the backend for the image service
#
# TODO - I need to make the choice of networking configurable
#
class openstack::compute(
# my address
$internal_address,
# connection information
$sql_connection = false,
$rabbit_host = false,
$rabbit_password = 'rabbit_pw',
$rabbit_user = 'nova',
$glance_api_servers = false,
$vncproxy_host = false,
# nova compute configuration parameters
$libvirt_type = 'kvm',
$vnc_enabled = 'true',
$bridge_ip = '11.0.0.1',
$bridge_netmask = '255.255.255.0',
) {
class { 'nova':
sql_connection => $sql_connection,
rabbit_host => $rabbit_host,
rabbit_userid => $rabbit_user,
rabbit_password => $rabbit_password,
image_service => 'nova.image.glance.GlanceImageService',
glance_api_servers => $glance_api_servers,
network_manager => 'nova.network.manager.FlatDHCPManager',
}
class { 'nova::compute':
enabled => true,
vnc_enabled => $vnc_enabled,
vncserver_proxyclient_address => $internal_address,
vncproxy_host => $vncproxy_host,
}
class { 'nova::compute::libvirt':
libvirt_type => $libvirt_type,
vncserver_listen => $internal_address,
}
nova::network::bridge { 'br100':
ip => $bridge_ip,
netmask => $bridge_netmask,
}
}

220
manifests/controller.pp Normal file

@@ -0,0 +1,220 @@
#
# This can be used to build out the simplest openstack controller
#
#
# $export_resources - whether resources should be exported
#
class openstack::controller(
# my address
$public_address,
$internal_address,
$admin_address = $internal_address,
# connection information
$mysql_root_password = 'sql_pass',
$admin_email = 'some_user@some_fake_email_address.foo',
$admin_password = 'ChangeMe',
$keystone_db_password = 'keystone_pass',
$keystone_admin_token = 'keystone_admin_token',
$glance_db_password = 'glance_pass',
$glance_service_password = 'glance_pass',
$nova_db_password = 'nova_pass',
$nova_service_password = 'nova_pass',
$rabbit_password = 'rabbit_pw',
$rabbit_user = 'nova',
# network configuration
# this assumes that it is a flat network manager
$network_manager = 'nova.network.manager.FlatDHCPManager',
# I do not think that this needs a bridge?
$bridge_ip = '192.168.188.1',
$bridge_netmask = '255.255.255.0',
$verbose = false,
$export_resources = false
) {
$glance_api_servers = "${internal_address}:9292"
$nova_db = "mysql://nova:${nova_db_password}@${internal_address}/nova"
if ($export_resources) {
# export all of the things that will be needed by the clients
@@nova_config { 'rabbit_host': value => $internal_address }
Nova_config <| title == 'rabbit_host' |>
@@nova_config { 'sql_connection': value => $nova_db }
Nova_config <| title == 'sql_connection' |>
@@nova_config { 'glance_api_servers': value => $glance_api_servers }
Nova_config <| title == 'glance_api_servers' |>
@@nova_config { 'novncproxy_base_url': value => "http://${public_address}:6080/vnc_auto.html" }
$sql_connection = false
$glance_connection = false
$rabbit_connection = false
} else {
$sql_connection = $nova_db
$glance_connection = $glance_api_servers
$rabbit_connection = $rabbit_host
}
####### DATABASE SETUP ######
# set up mysql server
class { 'mysql::server':
config_hash => {
# the priv grant fails on precise if I set a root password
# TODO I should make sure that this works
# 'root_password' => $mysql_root_password,
'bind_address' => '0.0.0.0'
}
}
# set up all openstack databases, users, grants
class { 'keystone::db::mysql':
password => $keystone_db_password,
}
class { 'glance::db::mysql':
host => '127.0.0.1',
password => $glance_db_password,
}
# TODO should I allow all hosts to connect?
class { 'nova::db::mysql':
password => $nova_db_password,
host => $internal_address,
allowed_hosts => '%',
}
####### KEYSTONE ###########
# set up keystone
class { 'keystone':
admin_token => $keystone_admin_token,
bind_host => '127.0.0.1',
log_verbose => $verbose,
log_debug => $verbose,
catalog_type => 'sql',
}
# set up keystone database
# set up the keystone config for mysql
class { 'keystone::config::mysql':
password => $keystone_db_password,
}
# set up keystone admin users
class { 'keystone::roles::admin':
email => $admin_email,
password => $admin_password,
}
# set up the keystone service and endpoint
class { 'keystone::endpoint':
public_address => $public_address,
internal_address => $internal_address,
admin_address => $admin_address
}
# set up glance service, user, endpoint
class { 'glance::keystone::auth':
password => $glance_service_password,
public_address => $public_address,
internal_address => $internal_address,
admin_address => $admin_address
}
# set up nova service, user, endpoint
class { 'nova::keystone::auth':
password => $nova_service_password,
public_address => $public_address,
internal_address => $internal_address,
admin_address => $admin_address
}
######## END KEYSTONE ##########
######## BEGIN GLANCE ##########
class { 'glance::api':
log_verbose => $verbose,
log_debug => $verbose,
auth_type => 'keystone',
auth_host => '127.0.0.1',
auth_port => '35357',
keystone_tenant => 'services',
keystone_user => 'glance',
keystone_password => $glance_service_password,
require => Keystone_user_role["glance@services"],
}
class { 'glance::backend::file': }
class { 'glance::registry':
log_verbose => $verbose,
log_debug => $verbose,
auth_type => 'keystone',
auth_host => '127.0.0.1',
auth_port => '35357',
keystone_tenant => 'services',
keystone_user => 'glance',
keystone_password => $glance_service_password,
sql_connection => "mysql://glance:${glance_db_password}@127.0.0.1/glance",
require => [Class['Glance::Db::Mysql'], Keystone_user_role['glance@services']]
}
######## END GLANCE ###########
######## BEGIN NOVA ###########
class { 'nova::rabbitmq':
userid => $rabbit_user,
password => $rabbit_password,
}
# TODO I may need to figure out if I need to set the connection information
# or if I should collect it
class { 'nova':
sql_connection => $sql_connection,
# this is false b/c we are exporting
rabbit_host => $rabbit_connection,
rabbit_userid => $rabbit_user,
rabbit_password => $rabbit_password,
image_service => 'nova.image.glance.GlanceImageService',
glance_api_servers => $glance_connection,
network_manager => 'nova.network.manager.FlatDHCPManager',
}
class { 'nova::api':
enabled => true,
# TODO this should be the nova service credentials
#admin_tenant_name => 'openstack',
#admin_user => 'admin',
#admin_password => $admin_service_password,
admin_tenant_name => 'services',
admin_user => 'nova',
admin_password => $nova_service_password,
require => Keystone_user_role["nova@services"],
}
class { [
'nova::cert',
'nova::consoleauth',
'nova::scheduler',
'nova::network',
'nova::objectstore',
'nova::vncproxy'
]:
enabled => true,
}
nova::manage::network { 'nova-vm-net':
network => '11.0.0.0/24',
available_ips => 128,
}
nova::manage::floating { 'nova-vm-floating':
network => '10.128.0.0/24',
}
######## Horizon ########
class { 'memcached':
listen_ip => '127.0.0.1',
}
class { 'horizon': }
######## End Horizon #####
}


@@ -1,24 +0,0 @@
Host { ensure => present }
host { 'puppetmaster':
ip => '172.21.0.10',
}
host { 'all':
ip => '172.21.0.11',
}
host { 'db':
ip => '172.21.0.12'
}
host { 'rabbitmq':
ip => '172.21.0.13',
}
host { 'controller':
ip => '172.21.0.14',
}
host { 'compute':
ip => '172.21.0.15',
}
host { 'glance':
ip => '172.21.0.16',
}
class { 'apt': }
class { 'openstack::repo::diablo': }


@@ -1,168 +0,0 @@
$db_host = 'db'
$db_username = 'nova'
$db_name = 'nova'
$db_password = 'password'
$rabbit_user = 'nova'
$rabbit_password = 'nova'
$rabbit_vhost = '/'
$rabbit_host = 'rabbitmq'
$rabbit_port = '5672'
$glance_api_servers = 'glance:9292'
$glance_host = 'glance'
$glance_port = '9292'
$api_server = 'controller'
resources { 'nova_config':
purge => true,
}
node db {
class { 'mysql::server':
config_hash => {
'bind_address' => '0.0.0.0'
#'root_password' => 'foo',
#'etc_root_password' => true
}
}
class { 'mysql::ruby': }
class { 'nova::db':
password => $db_password,
dbname => $db_name,
user => $db_username,
host => $clientcert,
# does glance need access?
allowed_hosts => ['controller', 'glance', 'compute'],
}
}
node controller {
class { 'nova::controller':
db_password => $db_password,
db_name => $db_name,
db_user => $db_username,
db_host => $db_host,
rabbit_password => $rabbit_password,
rabbit_port => $rabbit_port,
rabbit_userid => $rabbit_user,
rabbit_virtual_host => $rabbit_vhost,
rabbit_host => $rabbit_host,
image_service => 'nova.image.glance.GlanceImageService',
glance_api_servers => $glance_api_servers,
glance_host => $glance_host,
glance_port => $glance_port,
libvirt_type => 'qemu',
}
}
node compute {
class { 'nova::compute':
api_server => $api_server,
enabled => true,
api_port => 8773,
aws_address => '169.254.169.254',
}
class { 'nova::compute::libvirt':
libvirt_type => 'qemu',
flat_network_bridge => 'br100',
flat_network_bridge_ip => '11.0.0.1',
flat_network_bridge_netmask => '255.255.255.0',
}
class { "nova":
verbose => $verbose,
sql_connection => "mysql://${db_username}:${db_password}@${db_host}/${db_name}",
image_service => 'nova.image.glance.GlanceImageService',
glance_api_servers => $glance_api_servers,
glance_host => $glance_host,
glance_port => $glance_port,
rabbit_host => $rabbit_host,
rabbit_port => $rabbit_port,
rabbit_userid => $rabbit_user,
rabbit_password => $rabbit_password,
rabbit_virtual_host => $rabbit_virtual_host,
}
}
node glance {
# set up glance server
class { 'glance::api':
swift_store_user => 'foo_user',
swift_store_key => 'foo_pass',
}
class { 'glance::registry': }
}
node rabbitmq {
class { 'nova::rabbitmq':
userid => $rabbit_user,
password => $rabbit_password,
port => $rabbit_port,
virtual_host => $rabbit_vhost,
}
}
node puppetmaster {
class { 'concat::setup': }
class { 'mysql::server':
config_hash => {'bind_address' => '127.0.0.1'}
}
class { 'mysql::ruby': }
package { 'activerecord':
ensure => '2.3.5',
provider => 'gem',
}
class { 'puppet::master':
modulepath => '/vagrant/modules',
manifest => '/vagrant/manifests/site.pp',
storeconfigs => true,
storeconfigs_dbuser => 'dan',
storeconfigs_dbpassword => 'foo',
storeconfigs_dbadapter => 'mysql',
storeconfigs_dbserver => 'localhost',
storeconfigs_dbsocket => '/var/run/mysqld/mysqld.sock',
version => installed,
puppet_master_package => 'puppet',
package_provider => 'gem',
autosign => 'true',
certname => $clientcert,
}
}
node all {
#
# This manifest installs all of the nova
# components on one node.
class { 'mysql::server': }
class { 'nova::all':
db_password => 'password',
db_name => 'nova',
db_user => 'nova',
db_host => 'localhost',
rabbit_password => 'rabbitpassword',
rabbit_port => '5672',
rabbit_userid => 'rabbit_user',
rabbit_virtual_host => '/',
rabbit_host => 'localhost',
image_service => 'nova.image.glance.GlanceImageService',
glance_host => 'localhost',
glance_port => '9292',
libvirt_type => 'qemu',
}
}
node default {
fail("could not find a matching node entry for ${clientcert}")
}


@@ -1,121 +0,0 @@
#
# This manifest installs all of the nova
# components on one node.
#
resources { 'nova_config':
purge => true,
}
# db settings
$db_password = 'password',
$db_name = 'nova',
$db_user = 'nova',
# this needs to be determined magically
$db_host = 'localhost',
# rabbit settings
$rabbit_password = 'rabbitpassword',
$rabbit_port = '5672',
$rabbit_userid = 'rabbit_user',
$rabbit_virtual_host = '/',
# this needs to be determined magically
$rabbit_host = 'localhost',
# glance settings
$image_service = 'nova.image.glance.GlanceImageService',
# this needs to be determined magically
$glance_host = 'localhost',
$glance_port = '9292',
# this is required for vagrant
$libvirt_type = 'qemu'
# bridge information
$flat_network_bridge = 'br100',
$flat_network_bridge_ip = '11.0.0.1',
$flat_network_bridge_netmask = '255.255.255.0',
$admin_user = 'nova_admin'
$project_name = 'nova_project'
# we need to be able to search for the following hosts:
# rabbit_host
# glance_host
# db_host
# api server
# initially going to install nova on one machine
node /nova/ {
class { "nova":
verbose => $verbose,
sql_connection => "mysql://${db_user}:${db_password}@${db_host}/${db_name}",
image_service => $image_service,
glance_host => $glance_host,
glance_port => $glance_port,
rabbit_host => $rabbit_host,
rabbit_port => $rabbit_port,
rabbit_userid => $rabbit_userid,
rabbit_password => $rabbit_password,
rabbit_virtual_host => $rabbit_virtual_host,
}
class { "nova::api": enabled => true }
class { "nova::compute":
api_server => $ipaddress,
libvirt_type => $libvirt_type,
enabled => true,
}
class { "nova::network::flat":
enabled => true,
flat_network_bridge => $flat_network_bridge,
flat_network_bridge_ip => $flat_network_bridge_ip,
flat_network_bridge_netmask => $flat_network_bridge_netmask,
}
nova::manage::admin { $admin_user: }
nova::manage::project { $project_name:
owner => $admin_user,
}
nova::manage::network { "${project_name}-net-${network}":
network => $nova_network,
available_ips => $available_ips,
require => Nova::Manage::Project[$project_name],
}
}
node /puppetmaster/ {
}
node /db/ {
class { 'mysql::server': }
class { 'nova::db':
# pass in db config as params
password => $db_password,
name => $db_name,
user => $db_user,
host => $db_host,
}
}
node /rabbit/ {
class { 'nova::rabbitmq':
port => $rabbit_port,
userid => $rabbit_userid,
password => $rabbit_password,
virtual_host => $rabbit_virtual_host,
require => Host[$hostname],
}
}
node /glance/ {
# set up glance server
class { 'glance::api':
swift_store_user => 'foo_user',
swift_store_key => 'foo_pass',
}
class { 'glance::registry': }
}

Submodule modules/apt deleted from 482609fa39

Submodule modules/concat deleted from 031bf26128

Submodule modules/glance deleted from 827b302824

Submodule modules/horizon deleted from 9c1c8275cd

Submodule modules/keystone deleted from 1cac5d0001

Submodule modules/mysql deleted from 12a4410e63

Submodule modules/nova deleted from 8298c3f3a5

Submodule modules/rabbitmq deleted from 57fe77731b

Submodule modules/rsync deleted from 139fb4c7d3

Submodule modules/ssh deleted from bc4eda65af

Submodule modules/stdlib deleted from b9a33851d2

Submodule modules/swift deleted from aaf2784c73

Submodule modules/vcsrepo deleted from 462b1d69bb

Submodule modules/xinetd deleted from f4f32fbb4a

other_repos.yaml Normal file

@@ -0,0 +1,25 @@
repos:
repo_paths:
# openstack git repos
git://github.com/bodepd/puppetlabs-nova: /etc/puppet/modules/nova
git://github.com/puppetlabs/puppetlabs-glance: /etc/puppet/modules/glance
git://github.com/puppetlabs/puppetlabs-swift: /etc/puppet/modules/swift
git://github.com/puppetlabs/puppetlabs-keystone: /etc/puppet/modules/keystone
git://github.com/puppetlabs/puppetlabs-horizon: /etc/puppet/modules/horizon
# openstack middleware
git://github.com/puppetlabs/puppetlabs-rabbitmq: /etc/puppet/modules/rabbitmq
git://github.com/puppetlabs/puppetlabs-mysql: /etc/puppet/modules/mysql
git://github.com/puppetlabs/puppetlabs-git: /etc/puppet/modules/git
git://github.com/puppetlabs/puppet-vcsrepo: /etc/puppet/modules/vcsrepo
git://github.com/saz/puppet-memcached: /etc/puppet/modules/memcached
git://github.com/puppetlabs/puppetlabs-rsync: /etc/puppet/modules/rsync
# other deps
git://github.com/ghoneycutt/puppet-xinetd: /etc/puppet/modules/xinetd
git://github.com/saz/puppet-ssh: /etc/puppet/modules/ssh
git://github.com/puppetlabs/puppetlabs-stdlib: /etc/puppet/modules/stdlib
git://github.com/puppetlabs/puppet-apt: /etc/puppet/modules/apt
git://github.com/puppetlabs/puppet-concat: /etc/puppet/modules/concat
checkout_branches:
# /etc/puppet/modules/keystone: dev
# /etc/puppet/modules/glance: dev
/etc/puppet/modules/nova: dev
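The commit message says a rake task now manages these dependencies from this file. The actual Rakefile is not shown here; the following is a minimal sketch, assuming the key names above (`repos`, `repo_paths`, `checkout_branches`) and illustrative method and command strings, of how such a task might turn the YAML into clone/checkout commands:

```ruby
require 'yaml'

# Sketch: turn the parsed contents of other_repos.yaml into the shell
# commands a dependency-management rake task might run. The method name
# and exact command strings are assumptions, not the repo's real Rakefile.
def module_commands(config)
  repos = config['repos']
  # one clone per url => install-path pair
  cmds = repos['repo_paths'].map do |url, path|
    "git clone #{url} #{path}"
  end
  # optional non-default branches, keyed by install path
  (repos['checkout_branches'] || {}).each do |path, branch|
    cmds << "cd #{path} && git checkout #{branch}"
  end
  cmds
end

# abbreviated inline copy of the file above, for illustration
config = YAML.load(<<~YML)
  repos:
    repo_paths:
      git://github.com/bodepd/puppetlabs-nova: /etc/puppet/modules/nova
    checkout_branches:
      /etc/puppet/modules/nova: dev
YML

cmds = module_commands(config)
puts cmds
```

A real task would shell out to run each command (e.g. via `sh` inside a rake task) rather than just printing them; error handling and re-clone behavior are left to the actual implementation.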

@@ -1 +0,0 @@
#!/bin/bash

@@ -1,3 +0,0 @@
#!/bin/bash
puppet apply /vagrant/manifests/hosts.pp --modulepath /vagrant/modules --debug
curl -O https://s3.amazonaws.com/pe-builds/released/2.0.1/puppet-enterprise-2.0.1-el-6-i386.tar.gz

@@ -1,2 +0,0 @@
#!/bin/bash
bash /vagrant/scripts/run.sh all

@@ -1,2 +0,0 @@
#!/bin/bash
bash /vagrant/scripts/run.sh compute

@@ -1,3 +0,0 @@
#!/bin/bash
bash /vagrant/scripts/run.sh controller

@@ -1,2 +0,0 @@
#!/bin/bash
bash /vagrant/scripts/run.sh db

@@ -1,2 +0,0 @@
#!/bin/bash
bash /vagrant/scripts/run.sh glance

@@ -1,3 +0,0 @@
#!/bin/bash
puppet apply /vagrant/manifests/setup_agent.pp --modulepath /vagrant/modules --debug
puppet apply /vagrant/manifests/site.pp --modulepath /vagrant/modules --graph --certname $* --graphdir /vagrant/graphs --debug --trace

@@ -1,3 +0,0 @@
#!/bin/bash
bash /vagrant/scripts/run.sh rabbitmq

@@ -1,5 +0,0 @@
#!/bin/bash
# TODO fix this, the image that I am using is broken
apt-get update
puppet apply /vagrant/manifests/hosts.pp --modulepath /vagrant/modules --debug
puppet apply /vagrant/modules/swift/examples/all.pp --modulepath /vagrant/modules --graph --certname $* --graphdir /vagrant/graphs --debug --trace

@@ -1,4 +0,0 @@
#!/bin/bash
apt-get update
puppet apply /vagrant/manifests/hosts.pp --modulepath /vagrant/modules --debug
puppet agent --server puppetmaster --certname $* --debug --trace --test --pluginsync true

@@ -1,3 +0,0 @@
#!/bin/bash
puppet apply /vagrant/manifests/hosts.pp --modulepath /vagrant/modules
puppet apply /vagrant/manifests/site.pp --modulepath /vagrant/modules --graph --certname $* --graphdir /vagrant/graphs