This change implements the blueprint to convert all roles and plays into a more generic setup, following upstream Ansible best practices.

Items Changed:
* All tasks have tags.
* All roles use namespaced variables.
* All redundant tasks within a given play and role have been removed.
* All of the repetitive plays have been removed in favor of a simpler approach. This change duplicates code within the roles but ensures that the roles only ever run within their own scope.
* All roles have been built using Ansible Galaxy syntax.
* The `*requirement.txt` files have been reformatted to follow upstream OpenStack practices.
* The dynamically generated inventory is now more organized; this should assist anyone who wants or needs to dive into the JSON blob that is created. In the inventory, a properties field is used for items that customize containers within the inventory.
* The environment map has been modified to support additional host groups to enable the separation of infrastructure pieces. While the old infra_hosts group will still work, this change allows groups to be divided into separate chunks, e.g. deployment of a Swift-only stack.
* The LXC logic now exists within the plays.
* etc/openstack_deploy/user_variables.yml has all password/token variables extracted into the separate file etc/openstack_deploy/user_secrets.yml in order to allow separate security settings on that file.

Items Excised:
* All of the roles have had the LXC logic removed from within them, which should allow the roles to be consumed outside of the `os-ansible-deployment` reference architecture.

Notes:
* The directory rpc_deployment still exists and is presently pointed at plays containing a deprecation warning instructing the user to move to the standard playbooks directory.
* While all of the Rackspace-specific components and variables have been removed and/or refactored, the repository still relies on an upstream mirror of OpenStack-built Python files and container images. This upstream mirror is hosted at Rackspace at "http://rpc-repo.rackspace.com", though it is not locked to or tied to Rackspace-specific installations. This repository contains all of the code needed to create and/or clone your own mirror.

DocImpact
Co-Authored-By: Jesse Pretorius <jesse.pretorius@rackspace.co.uk>
Closes-Bug: #1403676
Implements: blueprint galaxy-roles
Change-Id: I03df3328b7655f0cc9e43ba83b02623d038d214e
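One practical consequence of splitting the secrets into their own file is that you can lock it down more tightly than the rest of the deployment configuration. A minimal sketch, assuming the default /etc/openstack_deploy paths mentioned above and a root-driven deployment; the permission values are only a suggested example, not something the tooling enforces:
# Restrict the secrets file so only root can read it, while the ordinary
# user variables stay readable by everyone.
chmod 0600 /etc/openstack_deploy/user_secrets.yml
chmod 0644 /etc/openstack_deploy/user_variables.yml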
OpenStack Ansible Deployment
- date: 2015-02-02 22:00
- tags: lxc, openstack, cloud, ansible
- category: *nix
Building a development stack
If you want to build a development stack for testing or to otherwise contribute to this repository, you can do so using the cloudserver-aio.sh script in the scripts directory. Execute the cloudserver-aio.sh script from the os-ansible-deployment directory that was created when you cloned the repository.
Example AIO build process:
# Clone the source code
git clone https://github.com/stackforge/os-ansible-deployment /opt/os-ansible-deployment
# Change your directory
cd /opt/os-ansible-deployment
# Check out your desired branch.
git checkout master
# Run the script from the root directory of the cloned repository.
./scripts/run-aio-build.sh
To use this script successfully, please make sure that you have the following:
- At least 60GB of available storage on "/" when using local file system containers. Containers are built into /var/lib/lxc and will consume up to 40GB on their own.
- If you would like to test building containers using LVM, simply create an lxc volume group before executing the script (a short sketch follows this list). Be aware that each container will be built with a minimum of 5GB of storage.
- A 2.4GHz quad-core processor that is KVM capable.
- At least 4GB of available RAM.
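For the LVM-backed container path, the volume group needs to exist before the script runs. A minimal sketch, assuming a spare block device (/dev/xvdb here is just an example) and that the volume group is expected to be named "lxc" as described above:
# Initialise the spare disk as an LVM physical volume and create the volume group.
pvcreate /dev/xvdb
vgcreate lxc /dev/xvdb
# Confirm the volume group is visible before running the AIO script.
vgs lxc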
This may seem like a lot to run the stack, which is partially true; however, consider that this simple "All in One" deployment builds a 35-node infrastructure and mimics our reference architecture. Additionally, components like RabbitMQ, MariaDB with Galera, the repository servers, and Keystone will all be clustered. Lastly, the "All in One" deployment uses HAProxy for test purposes only; at this time we do not recommend running HAProxy in production. You should NEVER use the AIO script on a box that you care about. Cloud servers such as Rackspace Cloud Servers of the general1-8 flavor work well as development machines, as do VirtualBox or KVM instances.
Using Heat:
If you would like to use Heat to deploy an All-in-One node, there is a Heat template which you can use. Simply get and/or source the raw template as found here: "https://raw.githubusercontent.com/stackforge/os-ansible-deployment/master/scripts/osad-aio-heat-template.yml"
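A rough sketch of launching that template with the legacy heat CLI; the stack name is arbitrary and the parameter shown is an assumption, so check the template itself for the real parameter names:
# Fetch the template and create a stack from it.
wget https://raw.githubusercontent.com/stackforge/os-ansible-deployment/master/scripts/osad-aio-heat-template.yml
heat stack-create osad-aio -f osad-aio-heat-template.yml -P key_name=my-keypair
# Watch the build progress.
heat stack-list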
Rebuilding the stack
Once you have completed your testing and/or dev work, if you'd like to tear down the stack and restart from a new build, there is a play that will assist you in doing just that. Simply change to your playbooks directory and execute the lxc-containers-destroy.yml play.
Example:
# Move to the playbooks directory.
cd /opt/os-ansible-deployment/playbooks
# Destroy all of the running containers.
openstack-ansible lxc-containers-destroy.yml
# On the host stop all of the services that run locally and not within a container.
for i in $(ls /etc/init | grep -e nova -e swift -e neutron | awk -F'.' '{print $1}'); do service $i stop; done
# Uninstall the core services that were installed.
for i in $(pip freeze | grep -e nova -e neutron -e keystone -e swift); do pip uninstall -y $i; done
# Remove crusty directories.
rm -rf /openstack /etc/neutron /etc/nova /etc/swift /var/log/neutron /var/log/nova /var/log/swift
Using the teardown script:
The teardown.sh script will destroy everything known within an environment. You should be aware that this script will destroy whole environments and should be used WITH CAUTION.
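A minimal usage sketch, assuming teardown.sh lives alongside the other helper scripts in the scripts directory of the cloned repository:
# Run from the root of the cloned repository. This is destructive; read the script first.
cd /opt/os-ansible-deployment
./scripts/teardown.sh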
Notice
The system uses a number of variables. You should look at the scripts for a full explanation and description of all of the available variables that you can set. At a minimum, you should be aware of the default public interface variable, as you may be deploying on a box that does not have an eth0 interface. To set the default public interface, run the following.
export PUBLIC_INTERFACE="<<REPLACE WITH THE NAME OF THE INTERFACE>>" # This is only required if you don't have eth0
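If you are not sure what your public interface is called, you can list the host's interfaces first; the interface name below is only an example:
# List interface names using the standard iproute2 tooling, then export the one you want.
ip -o link show | awk -F': ' '{print $2}'
export PUBLIC_INTERFACE="bond0"   # example value; substitute the interface from the listing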
This play will destroy all of your running containers and remove items within the /openstack directory for the containers. After the completion of this play you can rerun the cloudserver-aio.sh script, or you can run the plays manually to rebuild the stack (see the sketch below).
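For example, to rebuild you could either rerun the AIO script or drive the plays yourself; the individual play names are not listed here, so check the playbooks directory rather than guessing:
# Option 1: rerun the all-in-one build script.
/opt/os-ansible-deployment/scripts/cloudserver-aio.sh
# Option 2: run the plays by hand from the playbooks directory.
cd /opt/os-ansible-deployment/playbooks
ls *.yml   # see which plays are available, then run them with openstack-ansible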
Diagram of stack
Here is a basic diagram that attempts to illustrate what the AIO installation job is doing. NOTICE: this diagram is not to scale and is not even 100% accurate; it was built for informational purposes only and should ONLY be used as such.
Diagram:
====== ASCII Diagram for AIO infrastructure ======
------->[ ETH0 == Public Network ]
|
V [ * ] Socket Connections
[ HOST MACHINE ] [ <>v^ ] Network Connections
* ^ *
| | |-----------------------------------------------------
| | |
| |---------------->[ HAProxy ] |
| ^ |
| | |
| V |
| (BR-Interfaces)<----- |
| ^ * | |
*-[ LXC ]*--*--------------------|-----|------|----| |
| | | | | | | |
| * | | | | | |
| --->[ Logstash ]<-----------|-- | | | | |
| | [ Kibana ]<-------------| | | V * | |
| --->[ Elastic search ]<-----| | | [ Galera x3 ] |
| [ Memcached ]<----------| | | | |
*-------*[ Rsyslog ]<------------|-- | * |
| [ Repos Server x3 ]<----| ---|-->[ RabbitMQ x3 ] |
| [ Horizon ]<------------| | | |
| [ Nova api ec2 ]<-------|--| | |
| [ Nova api os ]<--------|->| | |
| [ Nova spice console ]<-| | | |
| [ Nova Cert ]<----------|->| | |
| [ Cinder api ]<---------|->| | |
| [ Glance api ]<---------|->| | |
| [ Heat apis ]<----------|->| | [ Loop back devices ]*-*
| [ Heat engine ]<--------|->| | \ \ |
| ------>[ Nova api metadata ] | | | { LVM } { XFS x3 } |
| | [ Nova conductor ]<-----| | | * * |
| |----->[ Nova scheduler ]------|->| | | | |
| | [ Keystone x3 ]<--------|->| | | | |
| | |--->[ Neutron agents ]*-----|--|---------------------------*
| | | [ Neutron server ]<-----|->| | | |
| | | |->[ Swift proxy ]<--------- | | | |
*-|-|-|-*[ Cinder volume ]*--------------------* | |
| | | | | | |
| | | --------------------------------------- | |
| | --------------------------------------- | | |
| | -----------------------| | | | |
| | | | | | |
| | V | | * |
---->[ Compute ]*[ Neutron linuxbridge ]<-| |->[ Swift storage ]-
====== ASCII Diagram for AIO infrastructure ======