Merge "Prepare repository for a Puppet Forge release"

This commit is contained in:
Jenkins
2013-06-18 21:26:54 +00:00
committed by Gerrit Code Review
5 changed files with 208 additions and 357 deletions


@@ -1,24 +0,0 @@
* 2012-11-02 1.0.0
- add basic quantum support
- replace nova-volumes with cinder
- improve travisci support
- fail explicitly for unsupported db types
- style updates
- Add live migration support
- stop exporting/collecting resources.
- make nova-network conditional
- unset default parameters for passwords
- Remove collection code from openstack::nova::compute
- Expand tests for compute and controller
- add additional roles for multi_node deployments
* 2012-10-08 0.1.0
- testing improvements
- compute.pp's admin_password should sync with $nova_user_password of controller.pp
- Add missing send_arp_for_ha for multi_host mode
- add enable parameter
- fix rabbitmq connection
- enable floating ip for all in one
- add secret key for horizon
- add swift manifests
* 2012-06-08 0.1.0
- initial release


@@ -1,17 +1,15 @@
-name 'puppet-openstack'
+name 'puppetlabs-openstack'
-version '1.0.3'
+version '2.0.0'
 source 'https://github.com/stackforge/puppet-openstack'
 author 'Puppet Labs'
 license 'Apache License 2.0'
-summary 'Puppet Labs Openstack Module for folsom'
+summary 'Puppet Labs Openstack Module targeted for Grizzly'
-description 'Module to install common openstack-essex configurations using puppet'
+description 'Puppet module that pulls together all the individual components of Openstack, resulting in a complete and functional stack.'
 project_page 'https://github.com/stackforge/puppet-openstack'
-dependency 'puppetlabs/glance', '>= 1.0.0'
+dependency 'puppetlabs/glance', '>= 2.0.0'
-dependency 'puppetlabs/horizon', '>= 1.0.0'
+dependency 'puppetlabs/horizon', '>= 2.0.0'
-dependency 'puppetlabs/keystone', '>= 1.0.1'
+dependency 'puppetlabs/keystone', '>= 2.0.0'
-dependency 'saz/memcached', '>= 2.0.2'
-dependency 'puppetlabs/mysql', '>= 0.5.0'
-dependency 'puppetlabs/nova', '>= 1.0.1'
+dependency 'puppetlabs/nova', '>= 2.0.0'
-dependency 'puppetlabs/cinder', '>= 1.0.1'
+dependency 'puppetlabs/cinder', '>= 2.0.0'
+dependency 'puppetlabs/swift', '>= 2.0.0'
-#dependency 'ekarlso/quantum', '>= 0.2.2'


@@ -1,3 +0,0 @@
This is a high-level project that wraps the other Openstack projects.
Use rake module:clone_all to clone all projects

README.md

@@ -1,169 +1,100 @@
openstack
=========

#### Table of Contents

1. [Overview - What is the openstack module?](#overview)
2. [Module Description - What does the module do?](#module-description)
3. [Setup - The basics of getting started with openstack](#setup)
4. [Implementation - An under-the-hood peek at what the module is doing](#implementation)
5. [Limitations - OS compatibility, etc.](#limitations)
6. [Getting Involved - How to go deeper](#involved)
7. [Development - Guide for contributing to the module](#development)
8. [Contributors - Those with commits](#contributors)
9. [Release Notes - Notes on the most recent updates to the module](#release-notes)
Overview
--------
The Openstack Puppet Modules are a flexible Puppet implementation capable of configuring the core [Openstack](http://docs.openstack.org/) services:

* [nova](http://nova.openstack.org/) (compute service)
* [glance](http://glance.openstack.org/) (image database)
* [swift](http://swift.openstack.org/) (object store)
* [keystone](http://keystone.openstack.org/) (authentication/authorization)
* [horizon](http://horizon.openstack.org/) (web front end)
* [cinder](http://cinder.openstack.org/) (block storage exporting)
These modules are based on the [openstack documentation](http://docs.openstack.org/).
Module Description
------------------
There are a lot of moving pieces in Openstack; consequently, several Puppet modules are needed to cover all of them. Each module is in turn made up of several class definitions, resource declarations, defined resources, and custom types/providers. A common pattern for reducing this complexity in Puppet is to create a composite module that bundles all of these component modules into a common set of configurations. The openstack module does this compositing and exposes the set of variables needed to get a functional stack up and running. Multiple companies and individuals contributed to this module with the goal of producing a quick way to build single- and multi-node installations based on documented Openstack best practices.
**Dependencies**
* [Puppet](http://docs.puppetlabs.com/puppet/) 2.7.12 or greater
* [Facter](http://www.puppetlabs.com/puppet/related-projects/facter/) 1.6.1 or greater (versions that support the osfamily fact)
**Platforms**

* These modules have been fully tested on Ubuntu Precise, Debian Wheezy, and RHEL 6.
* The instructions in this document have only been verified on Ubuntu Precise. For instructions on how to use these modules on Debian, check out this excellent [link](http://wiki.debian.org/OpenStackPuppetHowto).
Setup
-----
**What the openstack module affects**

* The entirety of Openstack!
### Installing openstack

    example% puppet module install puppetlabs/openstack

### Installing latest unstable openstack module from source

    example% cd /etc/puppetlabs/puppet/modules    (usually /etc/puppet/modules on FOSS Puppet)
    example% git clone git://github.com/stackforge/puppet-openstack.git openstack
    example% cd openstack
    example% rake modules:clone
**Pre-puppet setup**

The following steps can be handled by Puppet, but they are out of scope for this document and are not included in the openstack module.
### Networking

* Each of the machines running the Openstack services should have a minimum of 2 NICs.
* One for the public/internal network
    - This NIC should be assigned an IP address.
* One for the virtual machine network
    - This NIC should not have an IP address assigned.
* If machines only have one NIC, it is necessary to manually create a bridge called br100 that bridges into the IP address specified on that NIC.
* All interfaces that are used to bridge traffic for the internal network need to have promiscuous mode set.
* Below is an example of setting promiscuous mode on an interface on Ubuntu:
        # /etc/network/interfaces
        auto eth1
        iface eth1 inet manual
        up ifconfig $IFACE 0.0.0.0 up
        up ifconfig $IFACE promisc
### Volumes

Every node that is configured to be a cinder volume service must have a volume group called `cinder-volumes`.
### Compute nodes

* Compute nodes should be deployed onto physical hardware.
* If compute nodes are deployed on virtual machines for testing, the libvirt_type parameter for the openstack::compute class should probably be configured as 'qemu'. This is because most virtualization technologies do not pass through the virtualization CPU extensions to their virtual machines.
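For example, the override looks like this (adapted from the 1.x README examples; only the relevant parameter is shown, and the remaining required parameters are omitted):

```puppet
class { 'openstack::compute':
  # ... other required parameters ...
  libvirt_type => 'qemu',
}
```

The same parameter can be set on the openstack::all class when testing an all-in-one install on a virtual machine.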
### Beginning with openstack

Utilization of this module can come in many forms. It was designed to be capable of deploying all services to a single node or distributed across several. This is not an exhaustive list; we recommend you consult and understand all the manifests included in this module and the [core openstack](http://docs.openstack.org) documentation.
## Installation
### Install Puppet
* Puppet should be installed on all nodes:
`apt-get install puppet`
* A Puppet master is not required for all-in-one installations. It is,
however, recommended for multi-node installations.
* To install the puppetmaster:
`apt-get install puppetmaster`
* Rake and Git should also be installed on the Puppet Master:
`apt-get install rake git`
* Some features of the modules require
[storeconfigs](http://projects.puppetlabs.com/projects/1/wiki/Using_Stored_Configuration)
to be enabled on the Puppet Master.
* Create a site manifest site.pp on the master:
        cat > /etc/puppet/manifests/site.pp << EOT
        node default {
          notify { 'I can connect!': }
        }
        EOT
* Restart the puppetmaster service:
`service puppetmaster restart`
* Configure each client to connect to the master and enable pluginsync. This
can be done by adding the following lines to /etc/puppet/puppet.conf:
        [agent]
        pluginsync = true
        server = <CONTROLLER_HOSTNAME>
* Register each client with the puppetmaster:
`puppet agent -t --waitforcert 60`
* On the puppetmaster, sign the client certificates:
`puppet cert sign <CERTNAME>`
### Install the Openstack modules
* The Openstack modules should be installed into the module path of your
master or on each node (if you are running puppet apply).
Modulepath:
* open source puppet - /etc/puppet/modules
* Puppet Enterprise - /etc/puppetlabs/puppet/modules
* To install the released versions from the forge:
`puppet module install puppetlabs-openstack`
* To install the latest revision of the modules from git (for developers/
contributors):
        cd <module_path>
        git clone git://github.com/stackforge/puppet-openstack openstack
        cd openstack
        rake modules:clone
**Defining an all in one configuration**
The openstack::all class provides a single configuration interface that can be used to deploy all Openstack services on a single host.
@@ -171,34 +102,28 @@ used to deploy all Openstack services on a single host.
This is a great starting place for people who are just kicking the tires with Openstack or with Puppet-deployed Openstack environments.
```puppet
class { 'openstack::all':
  public_address       => '192.168.1.12',
  public_interface     => 'eth0',
  private_interface    => 'eth1',
  admin_email          => 'some_admin@some_company',
  admin_password       => 'admin_password',
  keystone_admin_token => 'keystone_admin_token',
  nova_user_password   => 'nova_user_password',
  glance_user_password => 'glance_user_password',
  rabbit_password      => 'rabbit_password',
  rabbit_user          => 'rabbit_user',
  libvirt_type         => 'kvm',
  fixed_range          => '10.0.0.0/24',
}
```
For more information on the parameters, check out the inline documentation in the [manifest](https://github.com/stackforge/puppet-openstack/blob/master/manifests/all.pp).
**Defining a controller configuration**
The openstack::controller class is intended to provide basic support for multi-node Openstack deployments.
There are two roles in this basic multi-node Openstack deployment:

* controller - deploys all of the central management services
@@ -213,88 +138,73 @@ The openstack::controller class deploys the following Openstack services:
* mysql
* rabbitmq
```puppet
class { 'openstack::controller':
  public_address       => '192.168.101.10',
  public_interface     => 'eth0',
  private_interface    => 'eth1',
  internal_address     => '192.168.101.10',
  floating_range       => '192.168.101.64/28',
  fixed_range          => '10.0.0.0/24',
  multi_host           => false,
  network_manager      => 'nova.network.manager.FlatDHCPManager',
  admin_email          => 'admin_email',
  admin_password       => 'admin_password',
  keystone_admin_token => 'keystone_admin_token',
  glance_user_password => 'glance_user_password',
  nova_user_password   => 'nova_user_password',
  rabbit_password      => 'rabbit_password',
  rabbit_user          => 'rabbit_user',
}
```
For more information on the parameters, check out the inline documentation in the [manifest](https://github.com/stackforge/puppet-openstack/blob/master/manifests/controller.pp).
**Defining a compute configuration**
The openstack::compute class is used to manage the underlying hypervisor. A typical multi-host Openstack installation would consist of a single openstack::controller node and multiple openstack::compute nodes (based on the amount of resources being virtualized).
The openstack::compute class deploys the following services:

* nova
    - compute service (libvirt backend)
    - optionally, the nova network service (if multi_host is enabled)
    - optionally, the nova api service (if multi_host is enabled)
    - optionally, the nova volume service if it is enabled
```puppet
class { 'openstack::compute':
  private_interface  => 'eth1',
  internal_address   => $ipaddress_eth0,
  libvirt_type       => 'kvm',
  fixed_range        => '10.0.0.0/24',
  network_manager    => 'nova.network.manager.FlatDHCPManager',
  multi_host         => false,
  sql_connection     => 'mysql://nova:nova_db_passwd@192.168.101.10/nova',
  rabbit_host        => '192.168.101.10',
  glance_api_servers => '192.168.101.10:9292',
  vncproxy_host      => '192.168.101.10',
  vnc_enabled        => true,
  manage_volumes     => true,
}
```
For more information on the parameters, check out the inline documentation in the [manifest](https://github.com/stackforge/puppet-openstack/blob/master/manifests/compute.pp).
Implementation
--------------
### Creating your deployment scenario
So far, classes have been discussed as configuration interfaces used to deploy the openstack roles. This section explains how to apply these roles to actual nodes using a puppet site manifest.
The default file name for the site manifest is site.pp. This file should be contained in the puppetmaster's manifestdir:
* open source puppet - /etc/puppet/manifests/site.pp
* Puppet Enterprise - /etc/puppetlabs/puppet/manifests/site.pp
Node blocks are used to map a node's certificate name to the classes that should be assigned to it.
[Node blocks](http://docs.puppetlabs.com/guides/language_guide.html#nodes) can match specific hosts:
    node my_explicit_host {...}
@@ -302,15 +212,13 @@ Or they can use regular expression to match sets of hosts
    node /my_similar_hosts/ {...}
Inside the site.pp file, Puppet resources declared within node blocks are applied to those specified nodes. Resources specified at top-scope are applied to all nodes.
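Putting the pieces together, a multi-node site.pp can be sketched as follows (the certname patterns are placeholders, and the parameter lists are abbreviated; see the usage examples above for the full set):

```puppet
node /openstack_controller/ {
  class { 'openstack::controller':
    public_address => '192.168.101.10',
    # ... remaining parameters as in the openstack::controller example above ...
  }
}

node /openstack_compute/ {
  class { 'openstack::compute':
    rabbit_host => '192.168.101.10',
    # ... remaining parameters as in the openstack::compute example above ...
  }
}
```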
### Deploying an Openstack all-in-one environment
The easiest way to get started with the openstack::all class is to use the file

    <module_dir>/openstack/tests/site.pp
There is a node entry for
@@ -318,13 +226,11 @@ There is a node entry for
that can be used to deploy a simple nova all-in-one environment.
You can explicitly target this node entry by specifying a matching certname and targeting the manifest explicitly with:
    puppet apply /etc/puppet/modules/openstack/tests/site.pp --certname openstack_all
You could also update site.pp with the hostname of the node on which you wish to perform an all-in-one installation:
    node /<my_node>/ {...}
@@ -346,11 +252,9 @@ This file contains entries for:
Which can be used to assign the respective roles.
(As above, you can replace these default certificate names with the hostnames of your nodes.)
The first step for building out a multi-node deployment scenario is to choose the IP address of the controller node.
Both nodes will need this configuration parameter.
@@ -360,16 +264,13 @@ In the example site.pp, replace the following line:
with the IP address of your controller.
It is also possible to use store configs in order for the compute hosts to automatically discover the address of the controller host. Documentation for this may not be available until a later release of the openstack modules.
Once everything is configured on the master, you can configure the nodes using:
    puppet agent -t <--certname ROLE_CERTNAME>
It is recommended that you first configure the controller before configuring your compute nodes:
    openstack_controller> puppet agent -t --certname openstack_controller
    openstack_compute1> puppet agent -t --certname openstack_compute1
@@ -377,8 +278,7 @@ your compute nodes:
## Verifying an OpenStack deployment
Once you have installed openstack using Puppet (and assuming you experience no errors), the next step is to verify the installation:
### openstack::auth_file
@@ -386,8 +286,7 @@ The openstack::auth_file class creates the file:
    /root/openrc
which stores environment variables that can be used for authentication of openstack command line utilities.
#### Usage Example:
@@ -399,9 +298,7 @@ openstack command line utilities.
### Verification Process
1. Ensure that your authentication information is stored in /root/openrc. This assumes that the class openstack::auth_file had been applied to this node.
2. Ensure that your authentication information is in the user's environment.
        source /root/openrc
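The openstack::auth_file class can be declared like the other classes in this module; a minimal sketch (the parameter name shown is an assumption based on the parameters used elsewhere in this README — consult the manifest for the exact interface):

```puppet
class { 'openstack::auth_file':
  admin_password => 'admin_password',
}
```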
@@ -427,11 +324,9 @@ openstack command line utilities.
    bash /tmp/test_nova.sh
This script will verify that an image can be inserted into glance, and that the image can be used to fire up a virtual machine instance.
6. Log into horizon on port 80 of your controller node and walk through a few operations:
    - fire up a VM
    - create a volume
@@ -443,22 +338,15 @@ openstack command line utilities.
## Building your own custom deployment scenario for Openstack
The classes included in the Openstack module are implemented using a number of other modules. These modules can be used directly to create a customized openstack deployment.
A list of the modules used by puppetlabs-openstack and the source locations for those modules can be found in `other_repos.yaml` in the openstack module folder:
    other_repos.yaml
These building block modules have been written to support a wide variety of specific configuration and deployment use cases. They also provide a lot of configuration options not available with the more constrained puppetlabs-openstack modules.
The manifests in the Openstack module can serve as an example of how to use these base building blocks to compose custom deployments:
    <module_path>/openstack/manifests/{all,controller,compute}.pp
@@ -480,16 +368,13 @@ These files contain examples of how to deploy the following services:
* message queue
    * examples currently only exist for rabbitmq
Once you have selected which services need to be combined on which nodes, you should review the modules for all of these services and figure out how you can configure things like the pipelines and back-ends for these individual services.
This information should then be used to compose your own custom site.pp.
## Deploying swift
In order to deploy swift, you should use the example manifest that comes with the swift modules (tests/site.pp).
In this example, the following nodes are specified:
@@ -504,37 +389,37 @@ In this example, the following nodes are specified:
This swift configuration requires a puppetmaster with storeconfigs enabled.
To fully configure a Swift environment, the nodes must be configured in the following order:
* First, the storage nodes need to be configured. This creates the storage services (object, container, account) and exports all of the storage endpoints for the ring builder into storeconfigs. (The replicator service fails to start in this initial configuration.)
* Next, the ringbuilder and swift proxy must be configured. The ringbuilder needs to collect the storage endpoints and create the ring database before the proxy can be installed. It also sets up an rsync server which is used to host the ring database. Resources are exported that are used to rsync the ring database from this server.
* Finally, the storage nodes should be run again so that they can rsync the ring databases.
This configuration of rsync creates two loopback devices on every node. For more realistic scenarios, users should deploy their own volumes in combination with the other classes.
Better examples of this will be provided in a future version of the module.
Limitations
-----------
* Deploys only with rabbitmq and mysql RPC/data backends.
* Not backwards compatible with pre-2.x release of the openstack modules.
### Upgrade warning
The current version of the code is intended for the 2.x series of the openstack modules and has the following known backwards-incompatible breaking changes from 1.x:

* The cinder parameter has been removed (because support for nova-volumes has been removed). The manage_volumes parameter indicates whether cinder volumes should be managed.
* The sql connection settings of the openstack::compute class have changed from a single sql_connection parameter to individual parameters for the db user, name, password, and host.
Getting Involved
----------------
Need a feature? Found a bug? Let me know!
We are extremely interested in growing a community of OpenStack experts and users around these modules so they can serve as an example of consolidated best practices of how to deploy openstack.
The best way to get help with this set of modules is to email the group associated with this project:
    puppet-openstack@puppetlabs.com
@@ -548,32 +433,27 @@ The process for contributing code is as follows:
* Please visit http://wiki.openstack.org/GerritWorkflow and follow the instructions there to upload your change to Gerrit.
* Please add rspec tests for your code if applicable.
Development
-----------
Developer documentation for the entire puppet-openstack project:
* https://wiki.openstack.org/wiki/Puppet-openstack#Developer_documentation
Contributors
------------

* https://github.com/stackforge/puppet-openstack/graphs/contributors

### Future features

Efforts are underway to implement the following additional features:

* Validate the module on Fedora 17 and RHEL.
* Monitoring (basic system and Openstack application monitoring support with Nagios/Ganglia and/or sensu).
* Redundancy/HA - implementation of modules to support highly available and redundant Openstack deployments.
* These modules are currently intended to be classified and data-fied in a site.pp. Starting in version 3.0, it is possible to populate class parameters explicitly using puppet data bindings (which use hiera as the back-end). The decision not to use hiera was primarily based on the fact that it requires explicit function calls in 2.7.x.
* Implement provisioning automation that can be used to fully provision an entire environment from scratch.
* Integrate with PuppetDB to allow service auto-discovery to simplify the configuration of service association.

Release Notes
-------------
**2.0.0**
* Upstream is now part of stackforge.
* Initial support for the utilization of the quantum module.
* Ability to set vncproxy host.
* Refactors of db connections for compute.
* Refactor of glance and cinder related classes.
* Nova-conductor added.
* Various cleanups and bug fixes.