Cleanup documentation in prep for release

* Make sure all paragraphs are flowed to the same width
* Fix capitalization/grammar errors
* Unify terminology (e.g. puppetmaster vs. Puppet Master)
* Rephrase some sections that seemed unclear to me
Branan Purvine-Riley
2012-06-12 11:31:58 -07:00
parent eaa3b0b8f7
commit f7066470e1

README.md

@@ -2,8 +2,8 @@
## Introduction

The Openstack Puppet Modules are a flexible Puppet implementation capable of
configuring the core [Openstack](http://docs.openstack.org/) services:

* [nova](http://nova.openstack.org/) (compute service)
* [glance](http://glance.openstack.org/) (image database)
@@ -11,22 +11,25 @@ capable of configuring the core [Openstack](http://docs.openstack.org/) services
* [keystone](http://keystone.openstack.org/) (authentication/authorization)
* [horizon](http://horizon.openstack.org/) (web front end)

A ['Puppet Module'](http://docs.puppetlabs.com/learning/modules1.html#modules)
is a collection of related content that can be used to model the configuration
of a discrete service.

These modules are based on the administrative guides for Openstack
[compute](http://docs.openstack.org/essex/openstack-compute/admin/content/) and
[object store](http://docs.openstack.org/essex/openstack-object-storage/admin/content/)

## Dependencies:

### Puppet:

* [Puppet](http://docs.puppetlabs.com/puppet/) 2.7.12 or greater
* [Facter](http://www.puppetlabs.com/puppet/related-projects/facter/) 1.6.1 or
  greater (versions that support the osfamily fact)

### Platforms:

These modules have been fully tested on Ubuntu Precise and Debian Wheezy.
For instructions on how to use these modules on Debian, check out this
excellent [link](http://wiki.debian.org/OpenStackPuppetHowto).
@@ -35,7 +38,8 @@ and [object store](http://docs.openstack.org/essex/openstack-object-storage/admi
### Network:

Each of the machines running the Openstack services should have a minimum of 2
NICs:

* One for the public/internal network
  - This NIC should be assigned an IP address
@@ -47,15 +51,15 @@ and [object store](http://docs.openstack.org/essex/openstack-object-storage/admi
### Volumes:

Every node that is configured to be a nova volume service must have a volume
group called `nova-volumes`.
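For example (a sketch, not part of the modules; `/dev/sdb` is an assumed spare
disk, and the commands must be run as root on the volume node):

```shell
# Turn the spare disk into an LVM physical volume
pvcreate /dev/sdb
# Create the volume group under the name that nova-volume expects
vgcreate nova-volumes /dev/sdb
```

Any unused block device will do in place of `/dev/sdb`; only the volume group
name `nova-volumes` matters to the modules.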

### Compute nodes

Compute nodes should be deployed onto physical hardware.

If compute nodes are deployed on virtual machines for testing, the
libvirt_type must be configured as 'qemu':

    class { 'openstack::compute':
      ...
@@ -63,6 +67,12 @@ and [object store](http://docs.openstack.org/essex/openstack-object-storage/admi
      ...
    }

    class { 'openstack::all':
      ...
      libvirt_type => 'qemu'
      ...
    }
## Installation

### Install Puppet

@@ -78,13 +88,15 @@ and [object store](http://docs.openstack.org/essex/openstack-object-storage/admi
  `apt-get install puppetmaster`
* Rake and Git should also be installed on the Puppet Master:
  `apt-get install rake git`
* Some features of the modules require
  [storeconfigs](http://projects.puppetlabs.com/projects/1/wiki/Using_Stored_Configuration)
  to be enabled on the Puppet Master.
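As a hedged sketch (the database settings below are assumptions and vary by
site), storeconfigs on a 2.7.x Puppet Master is typically enabled in the
`[master]` section of /etc/puppet/puppet.conf:

```ini
[master]
    storeconfigs = true
    # The settings below assume a MySQL back-end; adjust for your site
    dbadapter  = mysql
    dbname     = puppet
    dbuser     = puppet
    dbpassword = changeme
    dbserver   = localhost
```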
* Create a site manifest site.pp on the master:

        cat > /etc/puppet/manifests/site.pp << EOT
        node default {
@@ -92,30 +104,29 @@ and [object store](http://docs.openstack.org/essex/openstack-object-storage/admi
        }
        EOT

* Restart the puppetmaster service:
  `service puppetmaster restart`
* Configure each client to connect to the master and enable pluginsync. This
  can be done by adding the following lines to /etc/puppet/puppet.conf:

        [agent]
          pluginsync = true
          server     = <CONTROLLER_HOSTNAME>

* Register each client with the Puppet Master:
  `puppet agent -t --waitforcert 60`
* On the Puppet Master, sign the client certificates:
  `puppet cert sign <CERTNAME>`
### Install the Openstack modules

* The Openstack modules should be installed into the module path of your
  master or on each node (if you are running puppet apply).

Modulepath:

* open source puppet - /etc/puppet/modules
@@ -125,7 +136,8 @@ and [object store](http://docs.openstack.org/essex/openstack-object-storage/admi
  `puppet module install puppetlabs-openstack`
* To install the latest revision of the modules from git (for
  developers/contributors):

        cd <module_path>
        git clone git://github.com/puppetlabs/puppetlabs-openstack openstack
@@ -135,17 +147,16 @@ and [object store](http://docs.openstack.org/essex/openstack-object-storage/admi
## puppetlabs-openstack

The 'puppetlabs-openstack' module was written for those who want to get up and
running with a single or multi-node Openstack deployment as quickly as possible.
It provides a simple way of deploying Openstack that is based on best practices
shaped by companies that contributed to the design of these modules.

### Classes

#### openstack::all

The openstack::all class provides a single configuration interface that can be
used to deploy all Openstack services on a single host.

This is a great starting place for people who are just kicking the tires with
Openstack or with Puppet-deployed Openstack environments.
@@ -169,8 +180,8 @@ Openstack or with Puppet deployed OpenStack environments.
      fixed_range => '10.0.0.0/24',
    }

For more information on the parameters, check out the inline documentation in
the manifest:

    <module_path>/openstack/manifests/all.pp
@@ -187,7 +198,8 @@ The openstack::controller class deploys the following Openstack services:
* keystone
* horizon
* glance
* nova (omitting the nova compute service and, when multi_host is enabled,
  the nova network service)
* mysql
* rabbitmq
@@ -213,17 +225,17 @@ The openstack::controller class deploys the following Openstack services:
      rabbit_user => 'rabbit_user',
    }

For more information on the parameters, check out the inline documentation in
the manifest:

    <module_path>/openstack/manifests/controller.pp

#### openstack::compute

The Openstack compute class is used to manage the underlying hypervisor. A
typical multi-host Openstack installation would consist of a single
openstack::controller node and multiple openstack::compute nodes (based on the
amount of resources being virtualized).

The openstack::compute class deploys the following services:

* nova
@@ -251,24 +263,22 @@ The openstack::compute class deploys the following services:
      manage_volumes => true,
    }

For more information on the parameters, check out the inline documentation in
the manifest:

    <module_path>/openstack/manifests/compute.pp

### Creating your deployment scenario

So far, classes have been discussed as configuration interfaces used to deploy
the Openstack roles. This section explains how to apply these roles to actual
nodes using a Puppet site manifest.

The default file name for the site manifest is site.pp. This file should be
contained in the Puppet Master's manifestdir:

* open source puppet - /etc/puppet/manifests/site.pp
* Puppet Enterprise - /etc/puppetlabs/puppet/manifests/site.pp

Node blocks are used to map a node's certificate name to the classes
that should be assigned to it.
@@ -282,11 +292,11 @@ Or they can use regular expression to match sets of hosts
    node /my_similar_hosts/ {...}

Inside the site.pp file, Puppet resources declared within node blocks are
applied to those specified nodes. Resources specified at top-scope are applied
to all nodes.
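To make this concrete, a minimal site.pp sketch (the node regex and the class
parameter shown are illustrative assumptions, not a complete configuration):

```puppet
# Top-scope resources apply to every node
notify { 'managed by puppet-openstack': }

# Node blocks match on certificate name; this regex matches any
# certname containing "compute" (the hostname pattern is an assumption)
node /compute/ {
  class { 'openstack::compute':
    internal_address => $ipaddress_eth0, # illustrative parameter
  }
}
```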

### Deploying an Openstack all-in-one environment

The easiest way to get started with the openstack::all class is to use the file
@@ -298,22 +308,21 @@ There is a node entry for
that can be used to deploy a simple nova all-in-one environment.

You can explicitly target this node entry by specifying a matching certname and
targeting the manifest explicitly with:

    puppet apply /etc/puppet/modules/openstack/examples/site.pp --certname openstack_all

You could also update site.pp with the hostname of the node on which you wish
to perform an all-in-one installation:

    node /<my_node>/ {...}

If you wish to provision an all-in-one host from a remote Puppet Master, you
can run the following command:

    puppet agent -td

### Deploying an Openstack multi-node environment

A Puppet Master should be used when deploying multi-node environments.
@@ -327,8 +336,8 @@ This file contains entries for:
These entries can be used to assign the respective roles.

(As above, you can replace these default certificate names with the hostnames
of your nodes.)

The first step for building out a multi-node deployment scenario is to choose
the IP address of the controller node.
@@ -342,8 +351,8 @@ In the example site.pp, replace the following line:
with the IP address of your controller.

It is also possible to use storeconfigs in order for the compute hosts to
automatically discover the address of the controller host. Documentation for
this may not be available until a later release of the Openstack modules.

Once everything is configured on the master, you can configure the nodes using:
@@ -358,8 +367,8 @@ your compute nodes:
## Verifying an Openstack deployment

Once you have installed Openstack using Puppet (and assuming you experience no
errors), the next step is to verify the installation:
### openstack::auth_file

@@ -367,8 +376,8 @@ The openstack::auth_file class creates the file:
    /root/openrc

which stores environment variables that can be used for authentication of
Openstack command line utilities.
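The exact contents are generated from the class parameters; a hypothetical
openrc might look like this (every value below is an assumption):

```shell
# Hypothetical example values -- the class writes your real credentials
export OS_TENANT_NAME=openstack
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_AUTH_URL="http://localhost:5000/v2.0/"
export OS_AUTH_STRATEGY=keystone
```

After `source /root/openrc`, these variables are picked up by command line
tools such as `nova` and `glance`.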

#### Usage Example:

@@ -411,8 +420,8 @@ of openstack command line utilities.
This script will verify that an image can be inserted into glance, and that
that image can be used to fire up a virtual machine instance.

6. Log into horizon on port 80 of your controller node and walk through a few
   operations:
   - fire up a VM
   - create a volume
@@ -424,18 +433,19 @@ of openstack command line utilities.
## Building your own custom deployment scenario for Openstack

The classes included in the Openstack module are implemented using a number of
other modules. These modules can be used directly to create a customized
Openstack deployment.

A list of the modules used by puppetlabs-openstack and the source locations for
those modules can be found in `other_repos.yaml` in the openstack module folder.

These building block modules have been written to support a wide variety of
specific configuration and deployment use cases. They also provide a lot of
configuration options not available with the more constrained
puppetlabs-openstack modules.

The manifests in the Openstack module can serve as an example of how to use
these base building blocks to compose custom deployments.
@@ -460,16 +470,16 @@ These files contain examples of how to deploy the following services:
* message queue
  * examples currently only exist for rabbitmq

Once you have selected which services need to be combined on which nodes, you
should review the modules for all of these services and figure out how you can
configure things like the pipelines and back-ends for these individual services.

This information should then be used to compose your own custom site.pp.

## Deploying swift

In order to deploy swift, you should use the example manifest that comes with
the swift modules (examples/site.pp).

In this example, the following nodes are specified:
@@ -482,37 +492,39 @@ In this example, the following nodes are specified:
* swift_storage_3
  - used as a storage node

This swift configuration requires a Puppet Master with storeconfigs enabled.

To fully configure a swift environment, the nodes must be configured in the
following order:

* First the storage nodes need to be configured. This creates the storage
  services (object, container, account) and exports all of the storage endpoints
  for the ring builder into storeconfigs. (The replicator service fails to start
  in this initial configuration.)
* Next, the ringbuilder and swift proxy must be configured. The ringbuilder
  needs to collect the storage endpoints and create the ring database before
  the proxy can be installed. It also sets up an rsync server which is used to
  host the ring database. Resources are exported that are used to rsync the
  ring database from this server.
* Finally, the storage nodes should be run again so that they can rsync the ring
  databases.

This configuration creates two loopback devices on every node. For more
realistic scenarios, users should deploy their own volumes in combination with
the other classes.

Better examples of this will be provided in a future version of the module.
## Participating

Need a feature? Found a bug? Let me know!

We are extremely interested in growing a community of Openstack experts and
users around these modules so they can serve as an example of consolidated best
practices of how to deploy Openstack.

The best way to get help with this set of modules is to email the group
associated with this project:

    puppet-openstack@puppetlabs.com
@@ -534,16 +546,15 @@ The process for contributing code is as follows:
* Validate module on Fedora 17 and RHEL
* Monitoring (basic system and Openstack application monitoring support
  with Nagios/Ganglia and/or sensu)
* Redundancy/HA - implementation of modules to support highly available and
  redundant Openstack deployments.
* These modules are currently intended to be classified and data-fied in a
  site.pp. Starting in version 3.0, it is possible to populate class
  parameters explicitly using puppet data bindings (which use hiera as the
  back-end). The decision not to use hiera was primarily based on the fact
  that it requires explicit function calls in 2.7.x.
* Implement provisioning automation that can be used to fully provision
  an entire environment from scratch
* Integrate with PuppetDB to allow service auto-discovery to simplify the
  configuration of service association