Merge pull request #155 from CiscoSystems/aio_install_script

Update README and master.sh script to support collapsed build
This commit is contained in:
Dan Bode
2013-10-24 17:48:49 -07:00
5 changed files with 109 additions and 36 deletions


@@ -123,57 +123,71 @@ To create a basic 2 role cluster with a build, compute and control node outside
More information about this tool can be found under the stack-builder directory.
## basic install against already provisioned nodes:
# Basic install against already provisioned nodes (Ubuntu 12.04.3 LTS):
### install your build server
### install your All-in-one Build, Control, Network, Compute, and Cinder node:
first, log into your build server, and run the following script to bootstrap it as a puppet master:
These instructions assume you will be building against a machine that has two interfaces:
If you want to use the havana version of the packages, set the following ENV var:
'eth0' for management and API access, also used for the GRE/VXLAN tunnel via OVS
'eth1' for 'external' network access (in single provider router mode). This interface
is expected to provide an external router and IP address range, and will leverage the
l3_agent functionality to provide outbound overloaded NAT for the VMs and 1:1 NAT with
Floating IPs. The current default setup also assumes a very small "generic" Cinder
setup, unless you create an LVM volume group called cinder-volume with free space
for persistent block volumes to be deployed against.
export openstack_version=havana
Log in to your all_in_one node, and bootstrap it into production:
bash <(curl -fsS https://raw.github.com/CiscoSystems/openstack-installer/master/install-scripts/master.sh)
### set up your data
You can override the default parameters, such as ethernet interface names, hostname, and default IP address if you choose:
on your build server, all of the data you may need to override can be found in:
scenario : change this to a scenario defined in data/scenarios, defaults to all_in_one
build_server : Hostname for your build-server, defaults to `` `hostname` ``
domain_name : Domain name for your system, defaults to `` `hostname -d` ``
default_interface : This is the interface name for your management and API interfaces (and tunnel endpoints), defaults to eth0
external_interface : This is the interface name for your "l3_agent provider router external network", defaults to eth1
build_server_ip : This is the IP that any additional devices can reach your build server on, defaults to the default_interface IP address
ntp_server : This is needed to keep puppet in sync across multiple nodes, defaults to ntp.esl.cisco.com
puppet_run_mode : Defaults to apply; for AIO there is no puppetmaster yet.
/etc/puppet/data/hiera_data/user.common.yaml
To change these parameters, do something like:
At the very least, you may need to update the controller IP addresses and set the
interfaces to use.
scenario=2_role bash <(curl.....master.sh)
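Each of these settings is resolved in master.sh with bash's `${var:-default}` expansion, so anything exported before the run overrides the built-in default. A minimal sketch of that pattern (the values shown are the documented defaults):

```shell
# Resolve settings the same way master.sh does: an exported value wins,
# otherwise the built-in default is used.
export default_interface="${default_interface:-eth0}"
export external_interface="${external_interface:-eth1}"
export scenario="${scenario:-all_in_one}"
echo "interface=${default_interface} external=${external_interface} scenario=${scenario}"
```

Exporting, say, `scenario=2_role` before invoking master.sh (as in the example above) makes the exported value win over the `all_in_one` default.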
Look at the puppet certnames that map to roles in:
### add additional nodes
/etc/puppet/data/role_mappings.yaml
Adding additional nodes is fairly straightforward (for the all_in_one scenario, compute nodes can be added; other roles require a bit of additional effort by expanding the all_in_one scenario)
You may also find a need to change the default scenario in:
1) On the All-in-one node, add a role mapping for the new node:
/etc/puppet/data/config.yaml
echo "compute_node_name: compute" >> /etc/puppet/data/role_mappings.yaml
Choices are in:
2) Build the physical or virtual compute node
3) Configure the system to point to the all_in_one node for puppet deployment and set up the right version of puppet on the node:
export build_server_ip=X.X.X.X ; bash <(curl -fsS https://raw.github.com/CiscoSystems/openstack-installer/master/install-scripts/setup.sh)
After which you may still have to run puppet in "agent" mode to actually deploy the OpenStack elements:
``
puppet agent -td --server build-server.`hostname -d` --certname `hostname -f`
``
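The role-mapping step above just appends one `hostname: role` line to role_mappings.yaml. A sketch against a scratch file (the real target is /etc/puppet/data/role_mappings.yaml; `compute-01` is a placeholder hostname for illustration):

```shell
# Append a role mapping for a new compute node, then confirm it took.
mappings=$(mktemp)    # stand-in for /etc/puppet/data/role_mappings.yaml
echo "compute-01: compute" >> "${mappings}"
grep '^compute-01: compute$' "${mappings}"
```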
### If other role types are desired
At the scenario level, choices are in:
/etc/puppet/data/scenarios
And you can extend the all_in_one scenario, or leverage a different variant altogether.
Defaults for end user data should be located in one of the following files:
/etc/puppet/data/hiera_data/user.yaml
/etc/puppet/data/hiera_data/user.common.yaml
/etc/puppet/data/hiera_data/user.<scenario>.yaml
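Overrides in these files use the same flat `key: value` YAML that master.sh generates into user.yaml. A sketch that writes two overrides to a scratch file (the address is a placeholder; the key names come from the generated user.yaml; in a real deployment the target would be one of the user*.yaml paths above):

```shell
# Write a couple of user-data overrides to a scratch file.
user_yaml=$(mktemp)
cat > "${user_yaml}" <<EOF
controller_internal_address: "192.0.2.10"
external_interface: eth1
EOF
grep 'controller_internal_address' "${user_yaml}"
```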
### install each of your components
First, set up each node (unless you're doing the all\_in\_one scenario, in which case you'll already have done this in the previous step):
export build_server_ip=X.X.X.X ; bash <(curl -fsS https://raw.github.com/CiscoSystems/openstack-installer/master/install-scripts/setup.sh)
Then log into each server and run:
``
puppet agent -td --server build-server.`hostname -d` --certname `hostname -f`
``
where build-server is the fully qualified name of the build server (or `` `hostname -f` `` on an all-in-one node), or the IP address that was set in user.common.yaml, and the certname is the fully qualified name of the local machine (`` `hostname -f` `` should return exactly that)
*NOTE: you'll want to run the puppet agent command on any control class nodes (or the all-in-one node) first, before running it on any compute or storage nodes.*
### Additional information on the data model being leveraged is available in the data directory of this repository.


@@ -48,3 +48,7 @@ openstack::swift::storage-node::storage_devices:
- 3
apache::mpm_module: prefork
quantum::agents::ovs::local_ip: "%{ipaddress}"
neutron::agents::ovs::local_ip: "%{ipaddress}"


@@ -4,14 +4,65 @@
# it runs on into a puppetmaster/build-server
#
export build_server_ip="${build_server_ip:-127.0.0.1}"
# All in one defaults to the local host name for the puppet master
export build_server="${build_server:-`hostname`}"
# It'd be good to know our domain name as well
export domain_name=`hostname -d`
# We need to know the IP address as well, so either tell me
# or I will assume it's the address associated with eth0
export default_interface="${default_interface:-eth0}"
# So let's grab that address
export build_server_ip="${build_server_ip:-`ip addr show ${default_interface} | grep 'inet ' | tr '/' ' ' | awk -F' ' '{print $2}'`}"
# Our default mode also assumes at least one other interface for OpenStack network
export external_interface="${external_interface:-eth1}"
# For good puppet hygiene, we'll want NTP setup. Let's borrow one from Cisco
export ntp_server="${ntp_server:-ntp.esl.cisco.com}"
# Since this is the master script, we'll run in apply mode
export puppet_run_mode="apply"
# scenarios will map to /etc/puppet/data/scenarios/*.yaml
export scenario="${scenario:-all_in_one}"
sed -e "s/2_role:*/$scenario/" -i /root/openstack-installer/data/config.yaml
if [ "${scenario}" == "all_in_one" ] ; then
echo `hostname`: all_in_one >> /root/openstack-installer/data/role_mappings.yaml
export FACTER_build_server_ip=${build_server_ip}
export FACTER_build_server=${build_server}
cat > /root/openstack-installer/data/hiera_data/user.yaml<<EOF
domain_name: "${domain_name}"
ntp_servers:
- ${ntp_server}
# node addresses
build_node_name: ${build_server}
controller_internal_address: "${build_server_ip}"
controller_public_address: "${build_server_ip}"
controller_admin_address: "${build_server_ip}"
swift_internal_address: "${build_server_ip}"
swift_public_address: "${build_server_ip}"
swift_admin_address: "${build_server_ip}"
# physical interface definitions
external_interface: ${external_interface}
public_interface: ${default_interface}
private_interface: ${default_interface}
internal_ip: "%{ipaddress}"
nova::compute::vncserver_proxyclient_address: "%{ipaddress}"
swift_local_net_ip: "%{ipaddress}"
nova::compute::vncserver_proxyclient_address: "0.0.0.0"
EOF
fi
bash <(curl -fsS https://raw.github.com/CiscoSystems/openstack-installer/master/install-scripts/setup.sh)
cp -Rv /root/openstack-installer/modules /etc/puppet/
cp -Rv /root/openstack-installer/data /etc/puppet/
cp -Rv /root/openstack-installer/manifests /etc/puppet/
cp -R /root/openstack-installer/modules /etc/puppet/
cp -R /root/openstack-installer/data /etc/puppet/
cp -R /root/openstack-installer/manifests /etc/puppet/
puppet apply /etc/puppet/manifests/site.pp --certname build-server --debug
puppet apply /etc/puppet/manifests/site.pp --certname ${build_server} --debug
puppet plugin download --server `hostname -f`; service apache2 restart


@@ -21,7 +21,7 @@ else
fi
# puppet's fqdn fact explodes if the domain is not setup
if grep 127.0.1.1 /etc/hosts ; then
sed -i -e "s/127.0.1.1.*/127.0.1.1 $(hostname).$domain $(hostname)/" /etc/hosts\n
sed -i -e "s/127.0.1.1.*/127.0.1.1 $(hostname).$domain $(hostname)/" /etc/hosts
else
echo "127.0.1.1 $(hostname).$domain $(hostname)" >> /etc/hosts
fi;


@@ -53,6 +53,10 @@ if $::puppet_run_mode != 'agent' {
before => Package['puppet'],
require => Package['puppet-common']
}
package { 'puppetmaster-passenger':
ensure => $puppet_version,
require => Package['puppet'],
}
}
# set up our hiera-store!