Initial Commit

d34dh0r53 2014-08-26 18:08:15 -05:00
commit 6f6e75f549
482 changed files with 35002 additions and 0 deletions

49
.gitignore vendored Normal file

@@ -0,0 +1,49 @@
# Override Files #
rpc_deployment/playbooks/lab_plays
rpc_deployment/vars/overrides/*.yml
# Compiled source #
###################
*.com
*.class
*.dll
*.exe
*.o
*.so
*.pyc
build/
dist/
# Packages #
############
# it's better to unpack these files and commit the raw source
# git has its own built in compression methods
*.7z
*.dmg
*.gz
*.iso
*.jar
*.rar
*.tar
*.zip
# Logs and databases #
######################
*.log
*.sql
*.sqlite
# OS generated files #
######################
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
.idea
.tox
*.sublime*
*.egg-info
Icon?
ehthumbs.db
Thumbs.db

5
Changelog.md Normal file

@@ -0,0 +1,5 @@
# Changelog
## 9.0.0rc3 - 2014-08-xx
- Added Changelog

202
LICENSE Normal file

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

159
README.rst Normal file

@@ -0,0 +1,159 @@
Ansible Openstack LXC Playbook
##############################
:date: 2013-09-05 09:51
:tags: rackspace, lxc, openstack, cloud, ansible
:category: \*nix
Deploy Openstack in Containers
==============================
First pass at an Ansible playbook for LXC (OpenStack) containers.
Make sure that you have the custom Ansible module installed on
your local system prior to running the playbook.
Expect bugs, general unexplainable issues, and the ever so popular
API change due to general messing about with bits.
Playbook Support
----------------
OpenStack:
* keystone
* glance-api
* glance-registry
* cinder-api
* cinder-scheduler
* cinder-volume
* nova-api
* nova-api-ec2
* nova-api-metadata
* nova-api-os-compute
* nova-compute
* nova-conductor
* nova-scheduler
* heat-api
* heat-api-cfn
* heat-api-cloudwatch
* heat-engine
* horizon
* neutron-server
* neutron-dhcp-agent
* neutron-metadata-agent
* neutron-linuxbridge-agent
Infra:
* haproxy
* galera
* rabbitmq
* Deploy-Containers
* Destroy-Containers
* Clone-Container
* Archive-Container
* Archive-all-containers
* Deploy-archived-container
Assumptions
-----------
This repo assumes that you have set up the host server that will be running the OpenStack infrastructure with three
bridged network devices named ``br-mgmt``, ``br-vmnet``, and ``br-ext``. These bridges will be used throughout
the OpenStack infrastructure.
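Before running the playbooks it can be worth confirming that those bridges actually exist on the target host. This is a minimal sketch, assuming the bridge names above and a host with ``iproute2`` available:

```shell
# List the bridges the playbooks expect; report which ones are missing.
# Run this on the host that will carry the OpenStack infrastructure.
for br in br-mgmt br-vmnet br-ext; do
  if ip link show "$br" >/dev/null 2>&1; then
    echo "$br present"
  else
    echo "$br MISSING"
  fi
done
```

Any bridge reported as missing needs to be created in the host's network configuration before continuing.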
The repo also relies on configuration files found in the `/etc` directory of this repo.
If you are running Ansible from an unprivileged host, you can place the contents of the /etc/ directory in your
home folder; this would be in a directory similar to `/home/kevin/rpc_deploy/`. Once you have the files in place, you
will have to input the details of your environment in the `rpc_user_config.yml` file; please see the file for how
this should look. After you have a bridged network and the files/directory in place, continue on to `Base Usage`_.
Base Usage
----------
All commands must be executed from the `rpc_deployment` directory. From this directory you will have access to all
of the playbooks, roles, and variables. It is recommended that you create an override file to contain any and all
variables that you wish to override for the deployment. While the override file is not required, it will make life
a bit easier.
All of the variables that you may wish to update are in the `vars/` directory; however, you should also be aware that
services will pull in base group variables as found in `inventory/group_vars`.
All playbooks exist in the ``playbooks/`` directory and are grouped in different sub-directories.
All of the keys, tokens, and passwords are in the `user_variables.yml` file. This file contains no
preset passwords. To set up your keys, passwords, and tokens you will need to either edit this file
manually or use the script ``pw-token-gen.py``. Example:
.. code-block:: bash
# Generate the tokens
scripts/pw-token-gen.py --file /etc/rpc_deploy/user_variables.yml
Example usage from the `rpc_deployment` directory in the `ansible-rpc-lxc` repository:
.. code-block:: bash
# Run setup on all hosts:
ansible-playbook -e @vars/user_variables.yml playbooks/setup/host-setup.yml
# Run infrastructure on all hosts
ansible-playbook -e @vars/user_variables.yml playbooks/infrastructure/infrastructure-setup.yml
# Setup and configure openstack within your spec'd containers
ansible-playbook -e @vars/user_variables.yml playbooks/openstack/openstack-setup.yml
About Inventory
---------------
In Ansible, everything that Ansible cares about lives in inventory. In the Rackspace Private Cloud all
inventory is dynamically generated using the previously mentioned configuration files. While the inventory is dynamically
generated, it is not regenerated from scratch on every run. The inventory is saved in a file named
`rpc_inventory.json`, located in the directory where you've placed your user configuration files. On every
run a backup of the inventory JSON file is created in both the current working directory and the location where
the user configuration files exist. The inventory JSON file is a living document and is intended to grow as the environment
scales. This means that the inventory file will be appended to as you add more nodes or change the
container affinity from within the `rpc_user_config.yml` file. It is recommended that the base inventory file be backed
up to a safe location upon the completion of a deployment operation. While the dynamic inventory processor has guards
to ensure that the built inventory is not adversely affected by programmatic operations, this does not guard against
user error or catastrophic failure.
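A small helper along these lines could automate that recommended backup. This is a sketch, not part of the playbooks; the paths are assumptions to be adjusted to wherever your configuration files live:

```python
# Sketch: copy the generated inventory aside with a timestamp suffix
# around a deployment run. Point config_dir at the directory holding
# your rpc_deploy configuration files (an assumption, not a fixed path).
import json
import shutil
import time
from pathlib import Path


def backup_inventory(config_dir: str, backup_dir: str) -> Path:
    """Copy rpc_inventory.json into backup_dir with a timestamp suffix."""
    src = Path(config_dir) / "rpc_inventory.json"
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    # Parse first so an already-corrupt inventory fails loudly instead
    # of silently becoming the "good" backup.
    json.loads(src.read_text())
    dest = dest_dir / f"rpc_inventory.json.{time.strftime('%Y%m%d%H%M%S')}"
    shutil.copy2(src, dest)
    return dest
```

Running it after each successful deployment keeps a dated trail of known-good inventories.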
Scaling
-------
If you are scaling the environment using the dynamically generated inventory, you should know that the inventory was designed
to generate new entries, not to remove them. These playbooks build an environment to spec, so if container affinity is
changed, or a node is added to or removed from the environment, the user configuration file will need to be modified as
well as the inventory JSON. For this reason, should a physical node need replacing, it is recommended that the replacement
be given the same name as the previous node; this will make things easier when rebuilding the environment. Additionally,
if a container needs to be replaced, it is better to simply remove the misbehaving container and rebuild it using the
existing inventory. The reasons that bursting up and down is less than ideal for the infrastructure nodes are outside the
scope of this document, though it's safe to say that the sheer volume of moving parts within OpenStack makes this a precarious process.
Notes
-----
* The library has an experimental `Keystone` module which adds ``keystone:`` support to Ansible.
* The library has an experimental `Swift` module which adds ``swift:`` support to Ansible.
* The library has an experimental `LXC` module which adds ``lxc:`` support to Ansible.
License
-------
Copyright 2014, Rackspace US, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at:
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

149
etc/README.rst Normal file

@@ -0,0 +1,149 @@
Ansible Openstack LXC Configuration
===================================
:date: 2013-09-05 09:51
:tags: rackspace, lxc, rpc, openstack, cloud, ansible
:category: \*nix
This directory contains the files needed to make the rpc_deployment process work.
The inventory is generated from a user configuration file named ``rpc_user_config.yml``.
To load inventory you MUST copy the directory ``rpc_deploy`` to either ``$HOME/`` or ``/etc/``.
With this folder in place, you will need to enter the folder and edit the file ``rpc_user_config.yml``.
The file will contain all of the IP addresses/hostnames that your infrastructure will exist on,
as well as a CIDR from which your containers will be assigned IP addresses. This allows for easy
scaling, as new nodes and container affinity are all set within this file.
Please see the ``rpc_user_config.yml`` file in the provided ``/etc`` directory for more details on how
that file is set up.
If you need some assistance defining the CIDR for a given IP address range, check out http://www.ipaddressguide.com/cidr
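The Python standard library can do the same CIDR arithmetic locally; a minimal sketch, using an example network rather than anything from your deployment:

```python
# Sanity-check a CIDR before putting it into a configuration file.
# The network below is only an example.
import ipaddress

net = ipaddress.ip_network("172.29.236.0/22")
print(net.network_address)    # first address in the range: 172.29.236.0
print(net.broadcast_address)  # last address in the range: 172.29.239.255
print(net.num_addresses)      # total addresses covered: 1024
```

`ip_network` also raises a `ValueError` if the address has host bits set, which catches a mistyped prefix early.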
Words on rpc_user_config.yml
############################
While the ``rpc_user_config.yml`` file is fairly heavily annotated with examples and information regarding the options, here's some more information on what the file consists of and how to use it.
Global options
--------------
The user configuration file has three globally available options. These options allow you to set the CIDR for all of your containers' IP addresses; a list of used IP addresses that you may not want the inventory system to collide with; and global overrides, which are added to inventory outside of the "group_vars" and "var_files" files.
----
Global Options:
* cidr:
* used_ips:
* global_overrides:
Here's the syntax for ``cidr``.
.. code-block:: yaml
cidr: <string>/<prefix>
----
To tell the inventory not to consume IP addresses that are already in use within the defined CIDR, write all known consumed IP addresses as a list in YAML format.
Here's the ``used_ips`` syntax:
.. code-block:: yaml
used_ips:
- 10.0.0.250
- 10.0.0.251
- 10.0.0.252
- 10.0.0.253
----
If you want to specify globally available options and do not want to place them in ``var_files`` or within the ``group_vars/all.yml`` file, you can set them as ``key: value`` pairs within the ``global_overrides`` hash.
Here's the ``global_overrides`` syntax:
.. code-block:: yaml
global_overrides:
debug: True
git_install_branch: master
Predefined host groups
----------------------
The user configuration file has four predefined groups whose mappings are found within the ``rpc_environment.yml`` file.
The predefined groups are:
* infra_hosts:
* compute_hosts:
* storage_hosts:
* log_hosts:
Any host specified within these groups will have containers built on it automatically. The containers that will be built are all mapped out within the rpc_environment.json file.
When specifying hosts inside any of the known groups, the syntax is as follows:
.. code-block:: yaml
infra_hosts:
infra_host1:
ip: 10.0.0.1
Here, the top-level key is the host name, and ``ip`` sets the known IP address of that host. Even if the host names resolve within your environment, via either the ``hosts`` file or a resolver, you must still specify the ``ip``.
If you want to use a host that is not in a predefined group, and it is used in some custom out-of-band Ansible play, you can add a top-level key for the host type containing the host name and its ``ip`` key. The syntax is exactly the same as for the predefined host groups.
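As an illustration, a hypothetical custom top-level group would follow the same shape; the ``haproxy_hosts`` name and address here are invented for this example, not part of the predefined groups:

```yaml
# Hypothetical out-of-band group; syntax mirrors the predefined groups.
haproxy_hosts:
  lb1:
    ip: 10.0.0.5
```

Such a group is ignored by the predefined container mappings and is only useful to plays that explicitly target it.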
Adding options to containers within targeted hosts
--------------------------------------------------
Within the host variables, options can be added that will be appended to the ``host_vars`` of a given set of containers. This allows you to add "special" configuration to containers on a targeted host, which may come in handy when scaling out or planning a deployment of services. To add these options to all containers within the host, simply add ``container_vars`` under the host name and use ``key: value`` pairs for all of the desired options. All ``key: value`` pairs will be set as ``host_vars`` on all containers found under that host name.
Here is an example of turning on debug mode for all containers on infra1:
.. code-block:: yaml
infra_hosts:
infra1:
ip: 10.0.0.10
container_vars:
debug: True
infra2:
...
In this example you can see that we are setting ``container_vars`` under the host name ``infra1`` and that debug was set to True.
Limiting the container types:
When developing the inventory, it may be useful to further limit which containers will receive the provided options. In this case, use the ``limit_container_types`` option followed by the type of container you wish to limit the options to. When the ``limit_container_types`` option is used, the inventory script performs a string match on the container name; if a match is found, even a partial match, the options are appended to the container.
Here is an example of adding cinder_backends to containers on a host named cinder1 under the ``storage_hosts`` group. The options will be limited to containers matching the type "cinder_volume".
.. code-block:: yaml
storage_hosts:
cinder1:
ip: 10.0.0.10
container_vars:
cinder_backends:
limit_container_types: cinder_volume
lvm:
volume_group: cinder-volumes
driver: cinder.volume.drivers.lvm.LVMISCSIDriver
backend_name: LVM_iSCSI
cinder2:
...

22
etc/network/README.rst Normal file

@@ -0,0 +1,22 @@
Ansible Openstack Networking
============================
:date: 2013-09-05 09:51
:tags: rackspace, rpc, openstack, cloud, ansible, networking, bond, interfaces
:category: \*nix
This directory contains some base interface files that will let you see what
the networking setup might look like in your environment. Three **basic** example
configurations have been provided in the ``interfaces.d`` directory. These
files should cover most cases in terms of host setup, though they should **NEVER** be
taken literally. These files serve only as examples and **WILL** need to
be edited to fit your unique network needs. All provided files contain different configurations
to suit very different use cases. It should also be noted that udev rules may
change your network setup between boxes and may require tweaking. If you have questions on
how Debian networking is built out, please review the following documentation.
On-line Resources:
* Ubuntu Bonding: https://help.ubuntu.com/community/UbuntuBonding
* Ubuntu Networking: https://help.ubuntu.com/14.04/serverguide/network-configuration.html
* Debian Bonding: https://wiki.debian.org/Bonding
* Debian Networking: https://wiki.debian.org/NetworkConfiguration

7
etc/network/interfaces Normal file

@@ -0,0 +1,7 @@
# Used by ifup(8) and ifdown(8). See the interfaces(5) manpage or
# /usr/share/doc/ifupdown/examples for more information.
# The loopback network interface
auto lo
iface lo inet loopback
source /etc/network/interfaces.d/*.cfg


@@ -0,0 +1,132 @@
#EXAMPLE INTERFACE FILE
#
#1293 - HOST_NET (Ignore This. It's the native VLAN.)
#2176 - CONTAINER_NET
#1998 - OVERLAY_NET
#2144 - STORAGE_NET
#2146 - GATEWAY_NET (VM Provider Network. Ignore this. Openstack will tag for us.)
## Physical interface, could be bond. This only needs to be set once for the physical device
auto eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0
auto eth1
iface eth1 inet manual
bond-master bond1
bond-primary eth1
auto eth2
iface eth2 inet manual
bond-master bond0
auto eth3
iface eth3 inet manual
bond-master bond1
auto eth4
iface eth4 inet manual
## Create a bonded interface. Note that the "bond-slaves" is set to none. This is because the
# bond-master has already been set in the raw interfaces for the new bond0.
auto bond0
iface bond0 inet static
bond-slaves none
bond-mode active-backup
bond-miimon 100
bond-downdelay 200
bond-updelay 200
address 10.240.0.100
netmask 255.255.252.0
gateway 10.240.0.1
dns-nameservers 69.20.0.164 69.20.0.196
auto bond1
iface bond1 inet manual
bond-slaves none
bond-mode active-backup
bond-miimon 100
bond-downdelay 250
bond-updelay 250
## VLAN tagged interfaces: each is the physical interface plus the VLAN tag.
# The VLAN tag numbers should reflect your already-configured VLANs.
#STORAGE_NET
iface bond0.2144 inet manual
vlan-raw-device bond0
#CONTAINER_NET
iface bond0.2176 inet manual
vlan-raw-device bond0
#OVERLAY_NET
iface bond1.1998 inet manual
vlan-raw-device bond1
## Required network bridges; br-vlan, br-vxlan, br-mgmt.
# Bridge for management network
auto br-mgmt
iface br-mgmt inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Notice the bridge port is the vlan tagged interface
bridge_ports bond0.2176
address 172.29.236.100
netmask 255.255.252.0
dns-nameservers 69.20.0.164 69.20.0.196
# Bridge for vxlan network
# Only the COMPUTE nodes will have an IP on this bridge!
# When used by infra nodes, IPs exist in the containers and inet should be set to manual.
auto br-vxlan
iface br-vxlan inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports bond1.1998
address 172.29.240.100
netmask 255.255.252.0
# Bridge for vlan network
auto br-vlan
iface br-vlan inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Notice this bridge port is an Untagged host interface
bridge_ports bond1
# Bridge for storage network
# Only the COMPUTE nodes will have an IP on this bridge!
# When used by infra nodes, IPs exist in the containers and inet should be set to manual.
auto br-storage
iface br-storage inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports bond0.2144
address 172.29.244.100
netmask 255.255.252.0
# Bridge for servicenet network
# ALL nodes will have an IP on this bridge. In fact, it's the same IP.
# !! DO NOT PUT A PHYSICAL INTERFACE IN THIS BRIDGE ON THE HOST !!
# We will use an iptables MASQUERADE rule to NAT traffic
auto br-snet
iface br-snet inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Notice there is NO physical interface in this bridge!
address 172.29.248.1
netmask 255.255.252.0


@@ -0,0 +1,282 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
component_skel:
cinder_api:
belongs_to:
- cinder_all
cinder_scheduler:
belongs_to:
- cinder_all
cinder_volume:
belongs_to:
- cinder_all
elasticsearch:
belongs_to:
- elasticsearch_all
galera:
belongs_to:
- galera_all
glance_api:
belongs_to:
- glance_all
glance_registry:
belongs_to:
- glance_all
heat_api:
belongs_to:
- heat_all
heat_api_cfn:
belongs_to:
- heat_all
heat_api_cloudwatch:
belongs_to:
- heat_all
heat_engine:
belongs_to:
- heat_all
horizon:
belongs_to:
- horizon_all
keystone:
belongs_to:
- keystone_all
kibana:
belongs_to:
- kibana_all
logstash:
belongs_to:
- logstash_all
memcached:
belongs_to:
- memcached_all
neutron_agent:
belongs_to:
- neutron_all
neutron_dhcp_agent:
belongs_to:
- neutron_all
neutron_linuxbridge_agent:
belongs_to:
- neutron_all
neutron_metering_agent:
belongs_to:
- neutron_all
neutron_l3_agent:
belongs_to:
- neutron_all
neutron_metadata_agent:
belongs_to:
- neutron_all
neutron_server:
belongs_to:
- neutron_all
nova_api_ec2:
belongs_to:
- nova_all
nova_api_metadata:
belongs_to:
- nova_all
nova_api_os_compute:
belongs_to:
- nova_all
nova_compute:
belongs_to:
- nova_all
nova_conductor:
belongs_to:
- nova_all
nova_scheduler:
belongs_to:
- nova_all
nova_spice_console:
belongs_to:
- nova_all
rabbit:
belongs_to:
- rabbit_all
rsyslog:
belongs_to:
- rsyslog_all
utility:
belongs_to:
- utility_all
container_skel:
cinder_api_container:
belongs_to:
- infra_containers
contains:
- cinder_api
cinder_volumes_container:
belongs_to:
- storage_containers
contains:
- cinder_scheduler
- cinder_volume
elasticsearch_container:
belongs_to:
- log_containers
contains:
- elasticsearch
galera_container:
belongs_to:
- infra_containers
contains:
- galera
glance_container:
belongs_to:
- infra_containers
contains:
- glance_api
- glance_registry
heat_apis_container:
belongs_to:
- infra_containers
contains:
- heat_api_cloudwatch
- heat_api_cfn
- heat_api
heat_engine_container:
belongs_to:
- infra_containers
contains:
- heat_engine
horizon_container:
belongs_to:
- infra_containers
contains:
- horizon
keystone_container:
belongs_to:
- infra_containers
contains:
- keystone
kibana_container:
belongs_to:
- log_containers
contains:
- kibana
logstash_container:
belongs_to:
- log_containers
contains:
- logstash
memcached_container:
belongs_to:
- infra_containers
contains:
- memcached
neutron_agents_container:
belongs_to:
- network_containers
contains:
- neutron_agent
- neutron_metadata_agent
- neutron_metering_agent
- neutron_linuxbridge_agent
- neutron_l3_agent
- neutron_dhcp_agent
neutron_server_container:
belongs_to:
- network_containers
contains:
- neutron_server
nova_api_ec2_container:
belongs_to:
- infra_containers
contains:
- nova_api_ec2
nova_api_metadata_container:
belongs_to:
- infra_containers
contains:
- nova_api_metadata
nova_api_os_compute_container:
belongs_to:
- infra_containers
contains:
- nova_api_os_compute
nova_compute_container:
is_metal: true
belongs_to:
- compute_containers
contains:
- neutron_linuxbridge_agent
- nova_compute
nova_conductor_container:
belongs_to:
- infra_containers
contains:
- nova_conductor
nova_scheduler_container:
belongs_to:
- infra_containers
contains:
- nova_scheduler
nova_spice_console_container:
belongs_to:
- infra_containers
contains:
- nova_spice_console
rabbit_mq_container:
belongs_to:
- infra_containers
contains:
- rabbit
rsyslog_container:
belongs_to:
- infra_containers
- compute_containers
- storage_containers
- log_containers
- network_containers
contains:
- rsyslog
utility_container:
belongs_to:
- infra_containers
contains:
- utility
physical_skel:
network_containers:
belongs_to:
- all_containers
network_hosts:
belongs_to:
- hosts
compute_containers:
belongs_to:
- all_containers
compute_hosts:
belongs_to:
- hosts
infra_containers:
belongs_to:
- all_containers
infra_hosts:
belongs_to:
- hosts
log_containers:
belongs_to:
- all_containers
log_hosts:
belongs_to:
- hosts
storage_containers:
belongs_to:
- all_containers
storage_hosts:
belongs_to:
- hosts


@ -0,0 +1,159 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This is the md5 of the environment file
# this will ensure consistency when deploying.
environment_version: 5e7155d022462c5a82384c1b2ed8b946
# User defined CIDR used for containers
# Global cidr/s used for everything.
cidr_networks:
# Cidr used in the Management network
container: 172.29.236.0/22
# Cidr used in the Service network
snet: 172.29.248.0/22
# Cidr used in the VM network
tunnel: 172.29.240.0/22
# Cidr used in the Storage network
storage: 172.29.244.0/22
# User defined list of consumed IP addresses that may intersect
# with the provided CIDR.
used_ips:
- 172.29.236.1,172.29.236.50
- 172.29.244.1,172.29.244.50
# As a user you can define anything that you may wish to "globally"
# override from within the rpc_deploy configuration file. Anything
# specified here takes precedence over anything defined anywhere else.
global_overrides:
# Internal Management vip address
internal_lb_vip_address: 172.29.236.1
# External DMZ VIP address
external_lb_vip_address: 192.168.1.1
# Bridged interface to use with tunnel type networks
tunnel_bridge: "br-vxlan"
# Bridged interface to build containers with
management_bridge: "br-mgmt"
  # Define your add-on container networks.
# group_binds: bind a provided network to a particular group
# container_bridge: instructs inventory where a bridge is plugged
# into on the host side of a veth pair
# container_interface: interface name within a container
# ip_from_q: name of a cidr to pull an IP address from
# type: Networks must have a type. types are: ["raw", "vxlan", "flat", "vlan"]
# range: Optional value used in "vxlan" and "vlan" type networks
# net_name: Optional value used in mapping network names used in neutron ml2
# You must have a management network.
provider_networks:
- network:
group_binds:
- all_containers
- hosts
type: "raw"
container_bridge: "br-mgmt"
container_interface: "eth1"
ip_from_q: "container"
- network:
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
type: "raw"
container_bridge: "br-storage"
container_interface: "eth2"
ip_from_q: "storage"
- network:
group_binds:
- glance_api
- nova_compute
- neutron_linuxbridge_agent
type: "raw"
container_bridge: "br-snet"
container_interface: "eth3"
ip_from_q: "snet"
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vxlan"
container_interface: "eth10"
ip_from_q: "tunnel"
type: "vxlan"
range: "1:1000"
net_name: "vxlan"
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vlan"
container_interface: "eth11"
type: "flat"
net_name: "vlan"
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vlan"
container_interface: "eth11"
type: "vlan"
range: "1:1"
net_name: "vlan"
# Name of load balancer
lb_name: lb_name_in_core
# User defined Infrastructure Hosts, this should be a required group
infra_hosts:
infra1:
ip: 172.29.236.100
infra2:
ip: 172.29.236.101
infra3:
ip: 172.29.236.102
# User defined Compute Hosts, this should be a required group
compute_hosts:
compute1:
ip: 172.29.236.103
# User defined Storage Hosts, this should be a required group
storage_hosts:
cinder1:
ip: 172.29.236.104
# "container_vars" can be set outside of all other options as
# host specific optional variables.
container_vars:
# In this example we are defining what cinder volumes are
# on a given host.
cinder_backends:
      # If the "limit_container_types" argument is set within
      # the top-level key of the provided option, the inventory
      # process performs a string match of the container name against
      # the value of the "limit_container_types" argument.
      # If the value is found anywhere within the container
      # name, the options are appended as host_vars inside of inventory.
limit_container_types: cinder_volume
lvm:
volume_group: cinder-volumes
driver: cinder.volume.drivers.lvm.LVMISCSIDriver
backend_name: LVM_iSCSI
# User defined Logging Hosts, this should be a required group
log_hosts:
logger1:
ip: 172.29.236.107
# User defined Networking Hosts, this should be a required group
network_hosts:
network1:
ip: 172.29.236.108
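A quick way to sanity-check the configuration above is to verify that every `used_ips` range falls inside one of the declared `cidr_networks`. The sketch below uses the stdlib `ipaddress` module (the inventory script itself uses `netaddr`); the `expand` helper name and the "start,end" parsing are assumptions taken from the format shown above.

```python
import ipaddress

# "start,end" strings in the same format as the used_ips list above.
used_ranges = ["172.29.236.1,172.29.236.50", "172.29.244.1,172.29.244.50"]
cidrs = [ipaddress.ip_network(c) for c in ("172.29.236.0/22", "172.29.244.0/22")]

def expand(entry):
    """Expand a 'start,end' string into individual addresses (inclusive)."""
    start, end = (ipaddress.ip_address(p) for p in entry.split(','))
    return [ipaddress.ip_address(i) for i in range(int(start), int(end) + 1)]

for entry in used_ranges:
    for ip in expand(entry):
        # Every reserved address should land inside one of the declared CIDRs.
        assert any(ip in net for net in cidrs), ip
```

Addresses reserved this way are skipped by the IP queue in the dynamic inventory, so overlapping them with a declared CIDR is expected and harmless.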


@ -0,0 +1,220 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This is the md5 of the environment file
# this will ensure consistency when deploying.
environment_version: 5e7155d022462c5a82384c1b2ed8b946
# User defined CIDR used for containers
# Global cidr/s used for everything.
cidr_networks:
# Cidr used in the Management network
container: 172.29.236.0/22
# Cidr used in the Service network
snet: 172.29.248.0/22
# Cidr used in the VM network
tunnel: 172.29.240.0/22
# Cidr used in the Storage network
storage: 172.29.244.0/22
# User defined list of consumed IP addresses that may intersect
# with the provided CIDR.
used_ips:
- 172.29.236.1,172.29.236.50
- 172.29.244.1,172.29.244.50
# As a user you can define anything that you may wish to "globally"
# override from within the rpc_deploy configuration file. Anything
# specified here takes precedence over anything defined anywhere else.
global_overrides:
# Internal Management vip address
internal_lb_vip_address: 172.29.236.1
# External DMZ VIP address
external_lb_vip_address: 192.168.1.1
# Bridged interface to use with tunnel type networks
tunnel_bridge: "br-vxlan"
# Bridged interface to build containers with
management_bridge: "br-mgmt"
  # Define your add-on container networks.
# group_binds: bind a provided network to a particular group
# container_bridge: instructs inventory where a bridge is plugged
# into on the host side of a veth pair
# container_interface: interface name within a container
# ip_from_q: name of a cidr to pull an IP address from
# type: Networks must have a type. types are: ["raw", "vxlan", "flat", "vlan"]
# range: Optional value used in "vxlan" and "vlan" type networks
# net_name: Optional value used in mapping network names used in neutron ml2
# You must have a management network.
provider_networks:
- network:
group_binds:
- all_containers
- hosts
type: "raw"
container_bridge: "br-mgmt"
container_interface: "eth1"
ip_from_q: "container"
- network:
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
type: "raw"
container_bridge: "br-storage"
container_interface: "eth2"
ip_from_q: "storage"
- network:
group_binds:
- glance_api
- nova_compute
- neutron_linuxbridge_agent
type: "raw"
container_bridge: "br-snet"
container_interface: "eth3"
ip_from_q: "snet"
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vxlan"
container_interface: "eth10"
ip_from_q: "tunnel"
type: "vxlan"
range: "1:1000"
net_name: "vxlan"
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vlan"
container_interface: "eth11"
type: "flat"
net_name: "vlan"
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vlan"
container_interface: "eth11"
type: "vlan"
range: "1:1"
net_name: "vlan"
# Name of load balancer
lb_name: lb_name_in_core
# Other options you may want
debug: True
### Cinder default volume type option
# # This can be set to use a specific volume type. This is
# # an optional variable because you may have different volume
# # types on different hosts named different things. For this
# # Reason if you choose to set this variable please set it
# # to the name of one of your setup volume types
# cinder_default_volume_type: lvm
### Cinder default volume type option
# User defined Infrastructure Hosts, this should be a required group
infra_hosts:
infra1:
ip: 172.29.236.100
infra2:
ip: 172.29.236.101
infra3:
ip: 172.29.236.102
# User defined Compute Hosts, this should be a required group
compute_hosts:
compute1:
ip: 172.29.236.103
host_vars:
host_networks:
- { type: raw, device_name: eth0, bond_master: bond0, bond_primary: true }
- { type: raw, device_name: eth4, bond_master: bond0, bond_primary: false }
- { type: vlan_tagged, device_name: bond0, tagged_device_name: bond0.2176 }
        - { type: vlan_tagged, device_name: bond1, tagged_device_name: bond1.1998 }
- { type: bonded, device_name: bond0 }
- { type: bridged, device_name: br-mgmt, bridge_ports: ["bond0.2176"], address: "172.29.236.103", netmask: "255.255.255.0", gateway: "172.29.236.1", dns_nameservers: ["69.20.0.164", "69.20.0.196"] }
- { type: bridged, device_name: br-vxlan, bridge_ports: ["bond1.1998"], address: "172.29.240.103", netmask: "255.255.255.0" }
- { type: bridged, device_name: br-vlan, bridge_ports: ["bond1"] }
# User defined Storage Hosts, this should be a required group
storage_hosts:
cinder1:
ip: 172.29.236.104
# "container_vars" can be set outside of all other options as
# host specific optional variables.
container_vars:
# In this example we are defining what cinder volumes are
# on a given host.
cinder_backends:
      # If the "limit_container_types" argument is set within
      # the top-level key of the provided option, the inventory
      # process performs a string match of the container name against
      # the value of the "limit_container_types" argument.
      # If the value is found anywhere within the container
      # name, the options are appended as host_vars inside of inventory.
limit_container_types: cinder_volume
lvm:
volume_group: cinder-volumes
driver: cinder.volume.drivers.lvm.LVMISCSIDriver
backend_name: LVM_iSCSI
cinder2:
ip: 172.29.236.105
container_vars:
cinder_backends:
limit_container_types: cinder_volume
lvm_ssd:
volume_group: cinder-volumes
driver: cinder.volume.drivers.lvm.LVMISCSIDriver
backend_name: LVM_SSD_iSCSI
cinder3:
ip: 172.29.236.106
container_vars:
cinder_backends:
        limit_container_types: cinder_volume
netapp:
netapp_storage_family: ontap_7mode
netapp_storage_protocol: iscsi
netapp_server_hostname: "{{ cinder_netapp_hostname }}"
netapp_server_port: 80
netapp_login: "{{ cinder_netapp_username }}"
netapp_password: "{{ cinder_netapp_password }}"
driver: cinder.volume.drivers.netapp.common.NetAppDriver
backend_name: NETAPP_iSCSI
# User defined Logging Hosts, this should be a required group
log_hosts:
logger1:
ip: 172.29.236.107
# User defined Networking Hosts, this should be a required group
network_hosts:
network1:
ip: 172.29.236.108
host_vars:
host_networks:
- { type: raw, device_name: eth0, bond_master: bond0, bond_primary: true }
- { type: raw, device_name: eth4, bond_master: bond0, bond_primary: false }
- { type: vlan_tagged, device_name: bond0, tagged_device_name: bond0.2176 }
        - { type: vlan_tagged, device_name: bond1, tagged_device_name: bond1.1998 }
- { type: bonded, device_name: bond0 }
- { type: bridged, device_name: br-mgmt, bridge_ports: ["bond0.2176"], address: "172.29.236.108", netmask: "255.255.255.0", gateway: "172.29.236.1", dns_nameservers: ["69.20.0.164", "69.20.0.196"] }
- { type: bridged, device_name: br-vxlan, bridge_ports: ["bond1.1998"], address: "172.29.240.108", netmask: "255.255.255.0" }
- { type: bridged, device_name: br-vlan, bridge_ports: ["bond1"] }
# Other hosts can be added whenever needed. Note that containers will not be
# assigned to "other" hosts by default. If you would like to have containers
# assigned to hosts that are outside of the predefined groups, you will need to
# make an edit to the rpc_environment.yml file.
# haproxy_hosts:
# haproxy1:
# ip: 10.0.0.12
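The `limit_container_types` behavior documented in the cinder_backends comments above can be sketched as a simple substring filter. The helper name below is hypothetical; it only mirrors the documented rule (options apply when the limit string appears anywhere in the container name), not the exact inventory-script implementation.

```python
def filter_backend_options(container_name, option_block):
    """Illustrative sketch of the documented limit_container_types rule.

    option_block is a dict shaped like the cinder_backends blocks above:
    an optional 'limit_container_types' key plus named backend definitions.
    """
    opts = dict(option_block)
    limit = opts.pop('limit_container_types', None)
    if limit is None or limit in container_name:
        return opts  # options would be appended as host_vars
    return {}        # container name did not match; nothing applied

backends = {
    'limit_container_types': 'cinder_volume',
    'lvm': {'volume_group': 'cinder-volumes',
            'backend_name': 'LVM_iSCSI'},
}
# A cinder_volume container matches; a cinder_api container does not.
assert filter_backend_options('cinder1_cinder_volume_container-abc123', backends)
assert not filter_backend_options('cinder1_cinder_api_container-abc123', backends)
```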


@ -0,0 +1,136 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
## Rackspace Cloud Details
# UK accounts: https://lon.identity.api.rackspacecloud.com/v2.0
rackspace_cloud_auth_url: https://identity.api.rackspacecloud.com/v2.0
rackspace_cloud_tenant_id: SomeTenantID
rackspace_cloud_username: SomeUserName
rackspace_cloud_password: SomeUsersPassword
rackspace_cloud_api_key: SomeAPIKey
## Rabbit Options
rabbitmq_password:
rabbitmq_cookie_token:
## Tokens
memcached_encryption_key:
## Container default user
container_openstack_password:
## Galera Options
mysql_root_password:
mysql_debian_sys_maint_password:
## Keystone Options
keystone_container_mysql_password:
keystone_auth_admin_token:
keystone_auth_admin_password:
keystone_service_password:
## Cinder Options
cinder_container_mysql_password:
cinder_service_password:
cinder_v2_service_password:
## Glance Options
glance_default_store: file
glance_container_mysql_password:
glance_service_password:
glance_swift_store_auth_address: "{{ rackspace_cloud_auth_url }}"
glance_swift_store_user: "{{ rackspace_cloud_tenant_id }}:{{ rackspace_cloud_username }}"
glance_swift_store_key: "{{ rackspace_cloud_password }}"
glance_swift_store_container: SomeContainerName
glance_swift_store_region: SomeRegion
glance_swift_enable_snet: True
## Heat Options
heat_stack_domain_admin_password:
heat_container_mysql_password:
### THE HEAT AUTH KEY NEEDS TO BE 32 CHARACTERS LONG ##
heat_auth_encryption_key:
### THE HEAT AUTH KEY NEEDS TO BE 32 CHARACTERS LONG ##
heat_service_password:
heat_cfn_service_password:
## Horizon Options
horizon_container_mysql_password:
## MaaS Options
# Set maas_auth_method to 'token' to use maas_auth_token/maas_api_url
# instead of maas_username/maas_api_key
maas_auth_method: password
maas_auth_url: "{{ rackspace_cloud_auth_url }}"
maas_username: "{{ rackspace_cloud_username }}"
maas_api_key: "{{ rackspace_cloud_api_key }}"
maas_auth_token: some_token
maas_api_url: https://monitoring.api.rackspacecloud.com/v1.0/{{ rackspace_cloud_tenant_id }}
maas_notification_plan: npTechnicalContactsEmail
maas_agent_token: some_token
maas_target_alias: public0_v4
maas_scheme: https
# Override scheme for specific service remote monitor by specifying here: E.g.
# maas_nova_scheme: http
maas_keystone_user: maas
maas_keystone_password:
# Check this number of times before registering state change
maas_alarm_local_consecutive_count: 3
maas_alarm_remote_consecutive_count: 1
# Period and timeout times (seconds) for a check
# Timeout must be less than period
maas_check_period: 60
maas_check_timeout: 30
maas_monitoring_zones:
- mzdfw
- mziad
- mzord
- mzlon
- mzhkg
maas_repo_version: v9.0.0
## Neutron Options
neutron_container_mysql_password:
neutron_service_password:
## Nova Options
nova_virt_type: qemu
nova_container_mysql_password:
nova_metadata_proxy_secret:
nova_ec2_service_password:
nova_service_password:
nova_v3_service_password:
nova_s3_service_password:
## RPC Support
rpc_support_holland_password:
# rpc_support_holland_branch: defaults to release tag: v1.0.10
## Kibana Options
kibana_password:
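The warning above says `heat_auth_encryption_key` must be exactly 32 characters long. One easy way to satisfy that (an assumption on our part, not a project-mandated method) is to hex-encode 16 random bytes; Python 3 shown here:

```python
import os

# 16 random bytes, hex-encoded, yield exactly the 32 characters Heat requires.
heat_auth_encryption_key = os.urandom(16).hex()
assert len(heat_auth_encryption_key) == 32
```

The same approach works for the other blank secrets in this file, which have no length requirement.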

requirements.txt

@ -0,0 +1,13 @@
Jinja2==2.7.3
MarkupSafe==0.23
PyYAML==3.11
ansible==1.6.6
click==2.5
colorize==1.0.2
ecdsa==0.11
netaddr==0.7.12
paramiko==1.14.0
prettytable==0.7.2
pycrypto==2.6.1
wsgiref==0.1.2
pexpect==3.3


@ -0,0 +1,8 @@
[defaults]
gathering = smart
hostfile = inventory
host_key_checking = False
[ssh_connection]
pipelining = True


@ -0,0 +1,24 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Restart os service
service: name={{ item }} state=restarted pattern={{ item }}
register: service_restart
failed_when: "'msg' in service_restart and 'FAIL' in service_restart.msg|upper"
with_items: service_names
notify: Ensure os service running
- name: Ensure os service running
service: name={{ program_name }} state=started pattern={{ program_name }}


@ -0,0 +1,797 @@
#!/usr/bin/env python
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# (c) 2014, Kevin Carter <kevin.carter@rackspace.com>
import argparse
import datetime
import hashlib
import json
import os
import Queue
import random
import tarfile
import uuid
try:
    import yaml
except ImportError:
    raise SystemExit('Missing Dependency, "PyYAML"')

try:
    import netaddr
except ImportError:
    raise SystemExit('Missing Dependency, "netaddr"')
USED_IPS = []
INVENTORY_SKEL = {
'_meta': {
'hostvars': {}
}
}
# This is a list of items that all hosts should have at all times.
# Any new item added to inventory that will used as a default argument in the
# inventory setup should be added to this list.
REQUIRED_HOSTVARS = [
'is_metal',
'ansible_ssh_host',
'container_address',
'container_name',
'physical_host',
'component'
]
def args():
"""Setup argument Parsing."""
parser = argparse.ArgumentParser(
usage='%(prog)s',
description='Rackspace Openstack, Inventory Generator',
epilog='Inventory Generator Licensed "Apache 2.0"')
parser.add_argument(
'--file',
help='User defined configuration file',
required=False,
default=None
)
parser.add_argument(
'--list',
help='List all entries',
action='store_true'
)
return vars(parser.parse_args())
def get_ip_address(name, ip_q):
"""Return an IP address from our IP Address queue."""
try:
ip_addr = ip_q.get(timeout=1)
while ip_addr in USED_IPS:
ip_addr = ip_q.get(timeout=1)
else:
append_if(array=USED_IPS, item=ip_addr)
return str(ip_addr)
except Queue.Empty:
raise SystemExit(
'Cannot retrieve requested amount of IP addresses. Increase the %s'
' range in your rpc_user_config.yml.' % name
)
def _load_ip_q(cidr, ip_q):
"""Load the IP queue with all IP address from a given cidr.
:param cidr: ``str`` IP address with cidr notation
"""
_all_ips = [str(i) for i in list(netaddr.IPNetwork(cidr))]
base_exclude = [
str(netaddr.IPNetwork(cidr).network),
str(netaddr.IPNetwork(cidr).broadcast)
]
USED_IPS.extend(base_exclude)
for ip in random.sample(_all_ips, len(_all_ips)):
if ip not in USED_IPS:
ip_q.put(ip)
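The exclusion logic in `_load_ip_q` above can be illustrated with a minimal stdlib sketch: the network and broadcast addresses are reserved up front, and the remaining addresses are shuffled into a queue. This substitutes `ipaddress` and `queue` for the script's `netaddr` and Python 2 `Queue`, so it is an approximation rather than the actual implementation.

```python
import ipaddress
import queue
import random

def load_ip_q(cidr, used_ips):
    """Sketch of _load_ip_q using only the stdlib (the script uses netaddr)."""
    net = ipaddress.ip_network(cidr)
    # Reserve the network and broadcast addresses before filling the queue.
    used_ips.update({str(net.network_address), str(net.broadcast_address)})
    ip_q = queue.Queue()
    candidates = [str(ip) for ip in net]
    # Shuffle so address assignment order is not predictable.
    for ip in random.sample(candidates, len(candidates)):
        if ip not in used_ips:
            ip_q.put(ip)
    return ip_q

q = load_ip_q('192.0.2.0/29', set())
# A /29 holds 8 addresses; minus network and broadcast leaves 6 usable.
assert q.qsize() == 6
```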
def _parse_belongs_to(key, belongs_to, inventory):
"""Parse all items in a `belongs_to` list.
:param key: ``str`` Name of key to append to a given entry
:param belongs_to: ``list`` List of items to iterate over
:param inventory: ``dict`` Living dictionary of inventory
"""
for item in belongs_to:
if key not in inventory[item]['children']:
append_if(array=inventory[item]['children'], item=key)
def _build_container_hosts(container_affinity, container_hosts, type_and_name,
inventory, host_type, container_type,
container_host_type, physical_host_type, config,
is_metal, assignment):
    """Add in all of the host associations into inventory.

    This will add in all of the hosts into the inventory based on the given
    affinity for a container component and its subsequent type groups.

    :param container_affinity: ``int`` Set the number of a given container
    :param container_hosts: ``list`` List of containers on a host
    :param type_and_name: ``str`` Combined name of host and container name
    :param inventory: ``dict`` Living dictionary of inventory
    :param host_type: ``str`` Name of the host type
    :param container_type: ``str`` Type of container
    :param container_host_type: ``str`` Type of host
    :param physical_host_type: ``str`` Name of physical host group
    :param config: ``dict`` User defined information
    :param is_metal: ``bool`` If true, a container entry will not be built
    :param assignment: ``str`` Name of container component target
    """
container_list = []
for make_container in range(container_affinity):
for i in container_hosts:
if '%s-' % type_and_name in i:
append_if(array=container_list, item=i)
existing_count = len(list(set(container_list)))
if existing_count < container_affinity:
hostvars = inventory['_meta']['hostvars']
container_mapping = inventory[container_type]['children']
address = None
if is_metal is False:
cuuid = '%s' % uuid.uuid4()
cuuid = cuuid.split('-')[0]
container_host_name = '%s-%s' % (type_and_name, cuuid)
hostvars_options = hostvars[container_host_name] = {}
if container_host_type not in inventory:
inventory[container_host_type] = {
"hosts": [],
}
append_if(
array=inventory[container_host_type]["hosts"],
item=container_host_name
)
append_if(array=container_hosts, item=container_host_name)
else:
if host_type not in hostvars:
hostvars[host_type] = {}
hostvars_options = hostvars[host_type]
container_host_name = host_type
host_type_config = config[physical_host_type][host_type]
address = host_type_config.get('ip')
# Create a host types containers group and append it to inventory
host_type_containers = '%s_containers' % host_type
append_if(array=container_mapping, item=host_type_containers)
hostvars_options.update({
'is_metal': is_metal,
'ansible_ssh_host': address,
'container_address': address,
'container_name': container_host_name,
'physical_host': host_type,
'component': assignment
})
def _append_container_types(inventory, host_type):
"""Append the "physical_host" type to all containers.
:param inventory: ``dict`` Living dictionary of inventory
:param host_type: ``str`` Name of the host type
"""
for _host in inventory['_meta']['hostvars'].keys():
hdata = inventory['_meta']['hostvars'][_host]
if 'container_name' in hdata:
if hdata['container_name'].startswith(host_type):
if 'physical_host' not in hdata:
hdata['physical_host'] = host_type
def _append_to_host_groups(inventory, container_type, assignment, host_type,
type_and_name, host_options):
    """Append all containers to physical (logical) groups based on host types.

    :param inventory: ``dict`` Living dictionary of inventory
    :param container_type: ``str`` Type of container
    :param assignment: ``str`` Name of container component target
    :param host_type: ``str`` Name of the host type
    :param type_and_name: ``str`` Combined name of host and container name
    :param host_options: ``dict`` Host options from the user configuration
    """
physical_group_type = '%s_all' % container_type.split('_')[0]
if physical_group_type not in inventory:
inventory[physical_group_type] = {'hosts': []}
iph = inventory[physical_group_type]['hosts']
iah = inventory[assignment]['hosts']
for hname, hdata in inventory['_meta']['hostvars'].iteritems():
if 'container_types' in hdata or 'container_name' in hdata:
if 'container_name' not in hdata:
container = hdata['container_name'] = hname
else:
container = hdata['container_name']
component = hdata.get('component')
if container.startswith(host_type):
if 'physical_host' not in hdata:
hdata['physical_host'] = host_type
if container.startswith('%s-' % type_and_name):
append_if(array=iah, item=container)
elif hdata.get('is_metal') is True:
if component == assignment:
append_if(array=iah, item=container)
if container.startswith('%s-' % type_and_name):
append_if(array=iph, item=container)
elif hdata.get('is_metal') is True:
if container.startswith(host_type):
append_if(array=iph, item=container)
# Append any options in config to the host_vars of a container
container_vars = host_options.get('container_vars')
if isinstance(container_vars, dict):
for _keys, _vars in container_vars.items():
# Copy the options dictionary for manipulation
options = _vars.copy()
for _k, _v in options.items():
limit = None
# If a limit is set use the limit string as a filter
# for the container name and see if it matches.
if 'limit_container_types' in _v:
limit = _v.pop(
'limit_container_types', None
)
if limit is None or limit in container:
hdata[_keys] = {_k: _v}
def _add_container_hosts(assignment, config, container_name, container_type,
inventory, is_metal):
    """Add a given container name and type to the hosts.

    :param assignment: ``str`` Name of container component target
    :param config: ``dict`` User defined information
    :param container_name: ``str`` Name of container
    :param container_type: ``str`` Type of container
    :param inventory: ``dict`` Living dictionary of inventory
    :param is_metal: ``bool`` If true, a container entry will not be built
    """
physical_host_type = '%s_hosts' % container_type.split('_')[0]
# If the physical host type is not in config return
if physical_host_type not in config:
return
for host_type in inventory[physical_host_type]['hosts']:
container_hosts = inventory[container_name]['hosts']
# If host_type is not in config do not append containers to it
if host_type not in config[physical_host_type]:
continue
# Get any set host options
host_options = config[physical_host_type][host_type]
affinity = host_options.get('affinity', {})
container_affinity = affinity.get(container_name, 1)
        # Trim the host name to 25 characters so that generated container
        # names (host + component + uuid fragment) stay under 64 characters.
name_length = len(host_type)
if name_length > 25:
name_diff = name_length - 25
host_name = host_type[:-name_diff]
else:
host_name = host_type
type_and_name = '%s_%s' % (host_name, container_name)
physical_host = inventory['_meta']['hostvars'][host_type]
container_host_type = '%s_containers' % host_type
if 'container_types' not in physical_host:
physical_host['container_types'] = container_host_type
elif physical_host['container_types'] != container_host_type:
physical_host['container_types'] = container_host_type
# Add all of the containers into the inventory
_build_container_hosts(
container_affinity,
container_hosts,
type_and_name,
inventory,
host_type,
container_type,
container_host_type,
physical_host_type,
config,
is_metal,
assignment
)
# Add the physical host type to all containers from the built inventory
_append_container_types(inventory, host_type)
_append_to_host_groups(
inventory,
container_type,
assignment,
host_type,
type_and_name,
host_options
)
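The name shaping in `_add_container_hosts` can be condensed into a short sketch; the helper name is hypothetical, but the 25-character trim and the 8-character uuid fragment come directly from the code above.

```python
import uuid

def container_name_for(host_type, container_name):
    """Reproduce the container name shape built by _add_container_hosts."""
    host_name = host_type[:25]  # same effect as the name_length/name_diff math
    type_and_name = '%s_%s' % (host_name, container_name)
    cuuid = str(uuid.uuid4()).split('-')[0]  # first uuid4 group: 8 hex chars
    return '%s-%s' % (type_and_name, cuuid)

name = container_name_for('a' * 40, 'galera_container')
# 25 (trimmed host) + 1 + 16 (component) + 1 + 8 (uuid) = 51 characters
assert len(name) == 51
```

Keeping the total well under 64 characters matters because generated names become LXC container and hostname values downstream.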
def user_defined_setup(config, inventory, is_metal):
    """Apply user defined entries from config into inventory.

    :param config: ``dict`` User defined information
    :param inventory: ``dict`` Living dictionary of inventory
    :param is_metal: ``bool`` If true, a container entry will not be built
    """
for key, value in config.iteritems():
if key.endswith('hosts'):
if key not in inventory:
inventory[key] = {'hosts': []}
            if value is None:
                continue
for _key, _value in value.iteritems():
if _key not in inventory['_meta']['hostvars']:
inventory['_meta']['hostvars'][_key] = {}
inventory['_meta']['hostvars'][_key].update({
'ansible_ssh_host': _value['ip'],
'container_address': _value['ip'],
'is_metal': is_metal,
})
if 'host_vars' in _value:
for _k, _v in _value['host_vars'].items():
inventory['_meta']['hostvars'][_key][_k] = _v
append_if(array=USED_IPS, item=_value['ip'])
append_if(array=inventory[key]['hosts'], item=_key)
def skel_setup(environment_file, inventory):
"""Build out the main inventory skeleton as needed.
:param environment_file: ``dict`` Known environment information
:param inventory: ``dict`` Living dictionary of inventory
"""
for key, value in environment_file.iteritems():
if key == 'version':
continue
for _key, _value in value.iteritems():
if _key not in inventory:
inventory[_key] = {}
if _key.endswith('container'):
if 'hosts' not in inventory[_key]:
inventory[_key]['hosts'] = []
else:
if 'children' not in inventory[_key]:
inventory[_key]['children'] = []
if 'hosts' not in inventory[_key]:
inventory[_key]['hosts'] = []
if 'belongs_to' in _value:
for assignment in _value['belongs_to']:
if assignment not in inventory:
inventory[assignment] = {}
if 'children' not in inventory[assignment]:
inventory[assignment]['children'] = []
if 'hosts' not in inventory[assignment]:
inventory[assignment]['hosts'] = []
def skel_load(skeleton, inventory):
    """Build out data as provided from the defined `skel` dictionary.

    :param skeleton: ``dict`` Defined group relationships from the environment
    :param inventory: ``dict`` Living dictionary of inventory
    """
for key, value in skeleton.iteritems():
_parse_belongs_to(
key,
belongs_to=value['belongs_to'],
inventory=inventory
)
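The `skel_load`/`_parse_belongs_to` pairing above simply registers each skeleton key as a child of every group it belongs to. A minimal standalone sketch (the `parse_belongs_to` name mirrors the private helper; the sample data is illustrative):

```python
def parse_belongs_to(key, belongs_to, inventory):
    """Sketch of _parse_belongs_to: register key as a child of each parent."""
    for parent in belongs_to:
        children = inventory.setdefault(parent, {}).setdefault('children', [])
        if key not in children:  # append_if: add only once
            children.append(key)

inventory = {'infra_containers': {'children': [], 'hosts': []}}
parse_belongs_to('glance_container', ['infra_containers'], inventory)
assert inventory['infra_containers']['children'] == ['glance_container']
```

Running it twice with the same key is a no-op, which is what the `append_if` guard in the real script provides.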
def _add_additional_networks(key, inventory, ip_q, k_name, netmask):
    """Process additional IP addresses and append them to hosts as needed.

    If the host is found to be "is_metal" it will be marked as "on_metal"
    and will not have an additionally assigned IP address.

    :param key: ``str`` Component key name
    :param inventory: ``dict`` Living dictionary of inventory
    :param ip_q: ``object`` build queue of IP addresses
    :param k_name: ``str`` key to use in host vars for storage
    :param netmask: ``str`` netmask to assign for the named network
    """
base_hosts = inventory['_meta']['hostvars']
addr_name = '%s_address' % k_name
lookup = inventory[key]
if 'children' in lookup and lookup['children']:
for group in lookup['children']:
_add_additional_networks(group, inventory, ip_q, k_name, netmask)
if 'hosts' in lookup and lookup['hosts']:
for chost in lookup['hosts']:
container = base_hosts[chost]
if not container.get(addr_name):
if ip_q is None:
container[addr_name] = None
else:
container[addr_name] = get_ip_address(
name=k_name, ip_q=ip_q
)
netmask_name = '%s_netmask' % k_name
if netmask_name not in container:
container[netmask_name] = netmask
def _load_optional_q(config, cidr_name):
"""Load optional queue with ip addresses.
:param config: ``dict`` User defined information
:param cidr_name: ``str`` Name of the cidr name
"""
cidr = config.get(cidr_name)
ip_q = None
if cidr is not None:
ip_q = Queue.Queue()
_load_ip_q(cidr=cidr, ip_q=ip_q)
return ip_q
def container_skel_load(container_skel, inventory, config):
"""Build out all containers as defined in the environment file.
:param container_skel: ``dict`` container skeleton for all known containers
:param inventory: ``dict`` Living dictionary of inventory
:param config: ``dict`` User defined information
"""
for key, value in container_skel.iteritems():
for assignment in value['contains']:
for container_type in value['belongs_to']:
_add_container_hosts(
assignment,
config,
key,
container_type,
inventory,
value.get('is_metal', False)
)
else:
cidr_networks = config.get('cidr_networks')
provider_queues = {}
for net_name in cidr_networks:
ip_q = _load_optional_q(
cidr_networks, cidr_name=net_name
)
provider_queues[net_name] = ip_q
if ip_q is not None:
net = netaddr.IPNetwork(cidr_networks.get(net_name))
provider_queues['%s_netmask' % net_name] = str(net.netmask)
overrides = config['global_overrides']
mgmt_bridge = overrides['management_bridge']
mgmt_dict = {}
if cidr_networks:
for pn in overrides['provider_networks']:
network = pn['network']
if 'ip_from_q' in network and 'group_binds' in network:
q_name = network['ip_from_q']
for group in network['group_binds']:
_add_additional_networks(
key=group,
inventory=inventory,
ip_q=provider_queues[q_name],
k_name=q_name,
netmask=provider_queues['%s_netmask' % q_name]
)
if mgmt_bridge == network['container_bridge']:
nci = network['container_interface']
ncb = network['container_bridge']
ncn = network.get('ip_from_q')
mgmt_dict['container_interface'] = nci
mgmt_dict['container_bridge'] = ncb
if ncn:
cidr_net = netaddr.IPNetwork(cidr_networks.get(ncn))
mgmt_dict['container_netmask'] = str(cidr_net.netmask)
for host, hostvars in inventory['_meta']['hostvars'].iteritems():
base_hosts = inventory['_meta']['hostvars'][host]
if 'container_network' not in base_hosts:
base_hosts['container_network'] = mgmt_dict
for _key, _value in hostvars.iteritems():
if _key == 'ansible_ssh_host' and _value is None:
ca = base_hosts['container_address']
base_hosts['ansible_ssh_host'] = ca
def file_find(filename, user_file=None, pass_exception=False):
"""Return the path to a file.
If no file is found the system will exit unless ``pass_exception`` is True.
The file lookup will be done in the following directories:
/etc/rpc_deploy/
$HOME/rpc_deploy/
$(pwd)/rpc_deploy/
:param filename: ``str`` Name of the file to find
:param user_file: ``str`` Additional location to look in FIRST for a file
:param pass_exception: ``bool`` If True, return False instead of exiting
when no file is found
"""
file_check = [
os.path.join(
'/etc', 'rpc_deploy', filename
),
os.path.join(
os.environ.get('HOME'), 'rpc_deploy', filename
),
os.path.join(
os.getcwd(), filename
)
]
if user_file is not None:
file_check.insert(0, os.path.expanduser(user_file))
for f in file_check:
if os.path.isfile(f):
return f
else:
if pass_exception is False:
raise SystemExit('No file found at: %s' % file_check)
else:
return False
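The same precedence logic can be condensed into a sketch that takes the search directories as a parameter (directory names here are illustrative, not the deployment's real paths):

```python
import os
import tempfile

def find_file(filename, search_dirs, user_file=None):
    """Return the first existing candidate path, checking user_file
    first when given; None when nothing is found."""
    candidates = [os.path.join(d, filename) for d in search_dirs]
    if user_file is not None:
        candidates.insert(0, os.path.expanduser(user_file))
    for path in candidates:
        if os.path.isfile(path):
            return path
    return None

# Create a config file in a temp dir standing in for $(pwd)/rpc_deploy.
tmp_dir = tempfile.mkdtemp()
cfg = os.path.join(tmp_dir, "rpc_user_config.yml")
open(cfg, "w").close()
found = find_file("rpc_user_config.yml", ["/etc/rpc_deploy", tmp_dir])
print(found == cfg)  # → True
```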
def _set_used_ips(user_defined_config, inventory):
"""Set all of the used ips into a global list.
:param user_defined_config: ``dict`` User defined configuration
:param inventory: ``dict`` Living inventory of containers and hosts
"""
used_ips = user_defined_config.get('used_ips')
if isinstance(used_ips, list):
for ip in used_ips:
split_ip = ip.split(',')
if len(split_ip) >= 2:
ip_range = list(
netaddr.iter_iprange(
split_ip[0],
split_ip[-1]
)
)
USED_IPS.extend([str(i) for i in ip_range])
else:
append_if(array=USED_IPS, item=split_ip[0])
# Find all used IP addresses and ensure that they are not used again
for host_entry in inventory['_meta']['hostvars'].values():
if 'ansible_ssh_host' in host_entry:
append_if(array=USED_IPS, item=host_entry['ansible_ssh_host'])
for key, value in host_entry.iteritems():
if key.endswith('address'):
append_if(array=USED_IPS, item=value)
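The `used_ips` entries accepted here are either a bare address or a `'first,last'` pair that `netaddr.iter_iprange` expands inclusively. A stdlib-only sketch of that expansion (using `ipaddress` rather than `netaddr`):

```python
import ipaddress

def expand_used_ips(entries):
    """Expand used_ips entries: a bare IP is kept as-is, while a
    'first,last' pair expands to every address in the inclusive range."""
    used = []
    for entry in entries:
        parts = entry.split(",")
        if len(parts) >= 2:
            first = int(ipaddress.ip_address(parts[0]))
            last = int(ipaddress.ip_address(parts[-1]))
            used.extend(str(ipaddress.ip_address(i))
                        for i in range(first, last + 1))
        else:
            used.append(parts[0])
    return used

print(expand_used_ips(["10.0.0.1", "10.0.0.5,10.0.0.7"]))
# → ['10.0.0.1', '10.0.0.5', '10.0.0.6', '10.0.0.7']
```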
def _ensure_inventory_uptodate(inventory):
"""Update inventory if needed.
Inspect the current inventory and ensure that all host items have all of
the required entries.
:param inventory: ``dict`` Living inventory of containers and hosts
"""
for key, value in inventory['_meta']['hostvars'].iteritems():
if 'container_name' not in value:
value['container_name'] = key
for rh in REQUIRED_HOSTVARS:
if rh not in value:
value[rh] = None
def _parse_global_variables(user_cidr, inventory, user_defined_config):
"""Add any extra variables that may have been set in config.
:param user_cidr: ``str`` IP address range in CIDR notation
:param inventory: ``dict`` Living inventory of containers and hosts
:param user_defined_config: ``dict`` User defined variables
"""
if 'all' not in inventory:
inventory['all'] = {}
if 'vars' not in inventory['all']:
inventory['all']['vars'] = {}
# Write the users defined cidr into global variables.
inventory['all']['vars']['container_cidr'] = user_cidr
if 'global_overrides' in user_defined_config:
if isinstance(user_defined_config['global_overrides'], dict):
inventory['all']['vars'].update(
user_defined_config['global_overrides']
)
def append_if(array, item):
"""Append an ``item`` to an ``array`` if it is not already present.
:param array: ``list`` List object to append to
:param item: ``object`` Object to append to the list
:returns array: returns the amended list.
"""
if item not in array:
array.append(item)
return array
def md5_checker(localfile):
"""Return the MD5 checksum of a local file.
Exits if ``localfile`` is not a regular file.
:param localfile: ``str`` Path of the file to checksum
:return: ``str`` Hex digest of the file's MD5 sum
"""
def calc_hash():
"""Read the hash.
:return data_hash.read():
"""
return data_hash.read(128 * md5.block_size)
if os.path.isfile(localfile) is True:
md5 = hashlib.md5()
with open(localfile, 'rb') as data_hash:
for chk in iter(calc_hash, ''):
md5.update(chk)
return md5.hexdigest()
else:
raise SystemExit('This [ %s ] is not a file.' % localfile)
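The checksum loop reads the file in `128 * block_size` chunks, so even a large environment file is never held in memory at once. An equivalent Python 3 sketch (the byte-mode sentinel `b""` replaces the Python 2 `''`):

```python
import hashlib
import tempfile

def md5_of_file(path, chunk_size=128 * hashlib.md5().block_size):
    """Stream a file through MD5 in fixed-size chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"environment file contents")
print(md5_of_file(tmp.name))
```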
def main():
"""Run the main application."""
all_args = args()
# Get the contents of the user config yaml and load it as an object
user_config_file = file_find(
filename='rpc_user_config.yml', user_file=all_args.get('file')
)
local_path = os.path.dirname(user_config_file)
# Load the user defined configuration file
with open(user_config_file, 'rb') as f:
user_defined_config = yaml.safe_load(f.read())
# Get the contents of the system environment yaml
environment_file = file_find(filename='rpc_environment.yml')
# Load the existing rpc environment yaml
with open(environment_file, 'rb') as f:
environment = yaml.safe_load(f.read())
# Check the version of the environment file
env_version = md5_checker(localfile=environment_file)
version = user_defined_config.get('environment_version')
if env_version != version:
raise SystemExit(
'The MD5 sum of the environment file does not match the expected'
' value. To ensure that you are using the proper environment'
' please repull the correct environment file from the upstream'
' repository. Found MD5: [ %s ] expected MD5 [ %s ]'
% (env_version, version)
)
# Load existing inventory file if found
dynamic_inventory_file = os.path.join(local_path, 'rpc_inventory.json')
if os.path.isfile(dynamic_inventory_file):
with open(dynamic_inventory_file, 'rb') as f:
dynamic_inventory = json.loads(f.read())
# Create a backup of all previous inventory files as a tar archive
inventory_backup_file = os.path.join(
local_path,
'backup_rpc_inventory.tar'
)
with tarfile.open(inventory_backup_file, 'a') as tar:
basename = os.path.basename(dynamic_inventory_file)
# Time stamp the inventory file in UTC
utctime = datetime.datetime.utcnow()
utctime = utctime.strftime("%Y%m%d_%H%M%S")
backup_name = '%s-%s.json' % (basename, utctime)
tar.add(dynamic_inventory_file, arcname=backup_name)
else:
dynamic_inventory = INVENTORY_SKEL
# Save the users container cidr as a group variable
if 'container' in user_defined_config['cidr_networks']:
user_cidr = user_defined_config['cidr_networks']['container']
else:
raise SystemExit('No CIDR specified in user config')
# Add the container_cidr into the all global ansible group_vars
_parse_global_variables(user_cidr, dynamic_inventory, user_defined_config)
# Load all of the IP addresses that we know are used and set the queue
_set_used_ips(user_defined_config, dynamic_inventory)
user_defined_setup(user_defined_config, dynamic_inventory, is_metal=True)
skel_setup(environment, dynamic_inventory)
skel_load(
environment.get('physical_skel'),
dynamic_inventory
)
skel_load(
environment.get('component_skel'), dynamic_inventory
)
container_skel_load(
environment.get('container_skel'),
dynamic_inventory,
user_defined_config
)
# Look at inventory and ensure all entries have all required values.
_ensure_inventory_uptodate(inventory=dynamic_inventory)
# Serialize the inventory to json
dynamic_inventory_json = json.dumps(dynamic_inventory, indent=4)
# Generate a list of all hosts and their used IP addresses
hostnames_ips = {}
for _host, _vars in dynamic_inventory['_meta']['hostvars'].iteritems():
host_hash = hostnames_ips[_host] = {}
for _key, _value in _vars.iteritems():
if _key.endswith('address') or _key == 'ansible_ssh_host':
host_hash[_key] = _value
# Save a list of all hosts and their given IP addresses
with open(os.path.join(local_path, 'rpc_hostnames_ips.yml'), 'wb') as f:
f.write(
json.dumps(
hostnames_ips,
indent=4
)
)
# Save new dynamic inventory
with open(dynamic_inventory_file, 'wb') as f:
f.write(dynamic_inventory_json)
# Print out our inventory
print(dynamic_inventory_json)
if __name__ == '__main__':
main()

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the dbservers group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
# Define the required kernel. Presently 3.13.0-34-generic
required_kernel: 3.13.0-34-generic
## Container Template Config
container_template: rpc
container_release: trusty
# Parameters the container will be built with
container_config: /etc/lxc/lxc-rpc.conf
## Base Ansible config for all plays
ansible_ssh_port: 22
## Virtual IP address
# Internal Management vip address
internal_vip_address: "{{ internal_lb_vip_address }}"
# External DMZ VIP address
external_vip_address: "{{ external_lb_vip_address }}"
## URL for the frozen rpc repo
rpc_repo_url: "http://dc0e2a2ef0676c3453b1-31bb9324d3aeab0d08fa434012c1e64d.r5.cf1.rackcdn.com"
## GPG Keys
gpg_keys:
- { key_name: 'mariadb', keyserver: 'hkp://keyserver.ubuntu.com:80', hash_id: '0xcbcb082a1bb943db' }
## Repositories
apt_common_repos:
- { repo: "deb http://mirror.jmu.edu/pub/mariadb/repo/5.5/ubuntu {{ ansible_distribution_release }} main", state: "present" }
apt_lxc_common_repos:
- { repo: "ppa:ubuntu-lxc/stable", state: "present" }
get_pip_url: "https://bootstrap.pypa.io/get-pip.py"
## Users that will not be created via container_common
excluded_user_create:
- mysql
- rabbitmq
## Base Packages
apt_common_packages:
- vlan
- python-software-properties
- python-dev
- build-essential
- git-core
- rsyslog
- lvm2
- libssl-dev
- bridge-utils
- cgroup-lite
- sqlite3
- iptables
- sshpass
- libffi-dev
- libxml2-dev
- libxslt1-dev
- mariadb-client
- libmariadbclient-dev
# Util packages that are installed when repos are put in place
common_util_packages:
- curl
- wget
- time
- rsync
## MySQL Information
mysql_port: 3306
mysql_user: root
mysql_password: "{{ mysql_root_password }}"
mysql_address: "{{ internal_vip_address }}"
## RPC Backend
rpc_thread_pool_size: 64
rpc_conn_pool_size: 30
rpc_response_timeout: 60
rpc_cast_timeout: 30
rpc_backend: rabbit
## RabbitMQ
rabbit_port: 5672
rabbit_hosts: "{% for host in groups['rabbit'] %}{{ hostvars[host]['container_address'] }}:{{ rabbit_port }}{% if not loop.last %},{% endif %}{% endfor %}"
rabbit_use_ssl: false
rabbit_virtual_host: /
rabbit_retry_interval: 1
rabbit_retry_backoff: 2
rabbit_max_retries: 0
rabbit_ha_queues: false
rabbit_userid: openstack
rabbit_password: "{{ rabbitmq_password }}"
## Auth
auth_admin_username: admin
auth_admin_password: "{{ keystone_auth_admin_password }}"
auth_admin_token: "{{ keystone_auth_admin_password }}"
auth_admin_tenant: admin
auth_identity_uri: "http://{{ internal_vip_address }}:5000/v2.0"
auth_admin_uri: "http://{{ internal_vip_address }}:35357/v2.0"
auth_host: "{{ internal_vip_address }}"
auth_port: 35357
auth_public_port: 5000
auth_protocol: http
## Openstack Region
service_region: RegionOne
## Container User
container_username: openstack
container_password: "{{ container_openstack_password }}"
## Memcached
memcached_memory: 8192
memcached_port: 11211
memcached_user: memcache
memcached_secret_key: "{{ memcached_encryption_key }}"
## Haproxy Configuration
hap_rise: 3
hap_fall: 3
hap_interval: 12000
# Default haproxy backup nodes to empty list so this doesn't have to be
# defined for each service.
hap_backup_nodes: []
## Swift credentials for Swift Container image store
swift_archive_store:
creds_file: /root/swiftcreds
section: default
container: poc_lxc_containers
## Remote logging common configuration
elasticsearch_http_port: 9200
elasticsearch_tcp_port: 9300
elasticsearch_mode: transport
elasticsearch_cluster: openstack
elasticsearch_vip: "{{ external_vip_address }}"
logstash_port: 5544
# Directory where serverspec is installed to on utility container
serverspec_install_dir: /opt/serverspec
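The `rabbit_hosts` Jinja expression earlier in this file joins each host in the `rabbit` group as `address:port`, comma-separated. The rendered result can be sketched in plain Python (the addresses are hypothetical):

```python
# Hypothetical container addresses for two hosts in the 'rabbit' group.
rabbit_port = 5672
container_addresses = ["172.29.236.10", "172.29.236.11"]
rabbit_hosts = ",".join(
    "%s:%s" % (addr, rabbit_port) for addr in container_addresses
)
print(rabbit_hosts)  # → 172.29.236.10:5672,172.29.236.11:5672
```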

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the Cinder-api group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
## Service Name
service_name: cinder
# Verbosity Options
debug: False
verbose: True
# only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
## General configuration
## Set this in rpc_user_config.yml UNLESS you want all hosts to use the same
## Cinder backends. See the rpc_user_config example for more on how this is done.
# cinder_backends:
# lvm:
# volume_group: cinder-volumes
# driver: cinder.volume.drivers.lvm.LVMISCSIDriver
# backend_name: LVM_iSCSI
cinder_service_port: "{{ cinder_port|default('8776') }}"
## DB
container_mysql_user: cinder
container_mysql_password: "{{ cinder_container_mysql_password }}"
container_database: cinder
## Cinder Auth
service_admin_tenant_name: "service"
service_admin_username: "cinder"
service_admin_password: "{{ cinder_service_password }}"
## Cinder User / Group
system_user: cinder
system_group: cinder
## Service Names
service_names:
- cinder-api
- cinder-scheduler
- cinder-volume
## Git Source
git_repo: https://git.openstack.org/openstack/cinder
git_fallback_repo: https://github.com/openstack/cinder
git_etc_example: etc/cinder/
git_install_branch: stable/icehouse
service_pip_dependencies:
- pywbem
- ecdsa
- MySQL-python
- python-memcached
- pycrypto
- python-cinderclient
- python-keystoneclient
- keystonemiddleware
container_directories:
- /var/log/cinder
- /var/lib/cinder
- /var/lib/cinder/volumes
- /etc/cinder
- /etc/cinder/rootwrap.d
- /var/cache/cinder
- /var/lock/cinder
- /var/run/cinder
container_packages:
- libpq-dev
- libkmod-dev
- libkmod2
- dmeventd

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Variables for the logstash containers
service_name: elasticsearch
debug: False
verbose: True
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
apt_container_keys:
- { url: "http://packages.elasticsearch.org/GPG-KEY-elasticsearch", state: "present" }
apt_container_repos:
- { repo: "deb http://packages.elasticsearch.org/elasticsearch/1.2/debian stable main", state: "present"}
container_packages:
- elasticsearch
service_pip_dependencies:
- requests

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
service_name: mysql
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
debian_sys_maint_password: "{{ mysql_debian_sys_maint_password }}"
mariadb_server_package: mariadb-galera-server-5.5
galera_packages:
- mariadb-client
- "{{ mariadb_server_package }}"
- galera
- python-software-properties
- software-properties-common
- debconf-utils
- rsync
- xtrabackup
# Size of the galera cache
galera_gcache_size: 1G
service_pip_dependencies:
- MySQL-python

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the Glance-api group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
## Service Name
service_name: glance
service_publicurl: "http://{{ external_vip_address }}:9292"
service_adminurl: "http://{{ internal_vip_address }}:9292"
service_internalurl: "http://{{ internal_vip_address }}:9292"
# Verbosity Options
debug: False
verbose: True
# only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
# General configuration
registry_host: "{{ internal_vip_address }}"
## DB
container_mysql_user: glance
container_mysql_password: "{{ glance_container_mysql_password }}"
container_database: glance
## RPC
rpc_backend: glance.openstack.common.rpc.impl_kombu
## Backend
default_store: "{{ glance_default_store|default('file') }}"
## Swift Options
swift_store_auth_address: "{{ glance_swift_store_auth_address | default('NoAuthAddress') }}"
swift_store_user: "{{ glance_swift_store_user | default('NoUser') }}"
swift_store_key: "{{ glance_swift_store_key | default('NoKey') }}"
swift_store_region: "{{ glance_swift_store_region | default('NoRegion') }}"
swift_store_container: "{{ glance_swift_store_container | default('NoContainer')}}"
## Auth
service_admin_tenant_name: "service"
service_admin_username: "glance"
service_admin_password: "{{ glance_service_password }}"
## Glance User / Group
system_user: glance
system_group: glance
## Service Names
service_names:
- glance-api
- glance-registry
flavor: "keystone+cachemanagement"
## Git Source
git_repo: https://git.openstack.org/openstack/glance
git_fallback_repo: https://github.com/openstack/glance
git_etc_example: etc/
git_install_branch: stable/icehouse
service_pip_dependencies:
- warlock
- MySQL-python
- python-memcached
- pycrypto
- python-glanceclient
- python-keystoneclient
- keystonemiddleware
container_directories:
- /var/log/glance
- /var/lib/glance
- /var/lib/glance/cache
- /var/lib/glance/cache/api
- /var/lib/glance/cache/registry
- /var/lib/glance/scrubber
- /etc/glance
- /var/cache/glance
container_packages:
- rsync
cf_snet_endpoints:
DFW:
hostname: storage101.dfw1.clouddrive.com
ip: 10.191.208.30
HKG:
hostname: storage101.hkg1.clouddrive.com
ip: 10.191.208.34
IAD:
hostname: storage101.iad3.clouddrive.com
ip: 10.191.208.34
LON:
hostname: storage101.lon3.clouddrive.com
ip: 10.191.209.30
ORD:
hostname: storage101.ord1.clouddrive.com
ip: 10.191.208.30
SYD:
hostname: storage101.syd2.clouddrive.com
ip: 10.191.208.34

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the Heat-api group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
## Service Name
service_name: heat
# Verbosity Options
debug: False
verbose: True
# only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
## DB
container_mysql_user: heat
container_mysql_password: "{{ heat_container_mysql_password }}"
container_database: heat
## RPC
rpc_backend: heat.openstack.common.rpc.impl_kombu
## Auth
service_admin_tenant_name: "service"
service_admin_username: "heat"
service_admin_password: "{{ heat_service_password }}"
## Heat User / Group
system_user: heat
system_group: heat
## Service Names
service_names:
- heat-api
- heat-api-cfn
- heat-api-cloudwatch
- heat-engine
## Stack
stack_domain_admin_password: "{{ heat_stack_domain_admin_password }}"
stack_domain_admin: heat_domain_admin
deferred_auth_method: trusts
auth_encryption_key: "{{ heat_auth_encryption_key }}"
heat_watch_server_url: "http://{{ external_vip_address }}:8003"
heat_waitcondition_server_url: "http://{{ internal_vip_address }}:8000/v1/waitcondition"
heat_metadata_server_url: "http://{{ internal_vip_address }}:8000"
## Git Source
git_repo: https://git.openstack.org/openstack/heat
git_fallback_repo: https://github.com/openstack/heat
git_etc_example: etc/heat
git_install_branch: stable/icehouse
service_pip_dependencies:
- MySQL-python
- python-memcached
- pycrypto
- python-heatclient
- python-keystoneclient
- python-troveclient
- python-ceilometerclient
- keystonemiddleware
container_directories:
- /etc/heat
- /etc/heat/environment.d
- /etc/heat/templates
- /var/cache/heat
- /var/lib/heat
- /var/log/heat
container_packages:
- rsync
- libxslt1.1

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the Horizon group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
# Enable containerization of services
containerize: true
## Service Name
service_name: apache2
# Verbosity Options
debug: False
# only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
## DB
container_mysql_user: dash
container_mysql_password: "{{ horizon_container_mysql_password }}"
container_database: dash
## Horizon User / Group
system_user: www-data
system_group: www-data
## Git Source
git_repo: https://git.openstack.org/openstack/horizon
git_fallback_repo: https://github.com/openstack/horizon
git_install_branch: stable/icehouse
# Installation directories
install_root_dir: /opt/horizon
install_lib_dir: /opt/horizon/lib/python2.7/site-packages
service_pip_dependencies:
- oslo.config
- MySQL-python
- python-memcached
- django-appconf
- pycrypto
- ply
- greenlet
container_directories:
- "{{ install_root_dir }}"
- "{{ install_lib_dir }}"
container_packages:
- apache2
- apache2-utils
- libapache2-mod-wsgi
- libssl-dev
- libxslt1.1
- openssl
horizon_fqdn: "{{ external_vip_address }}"
horizon_server_name: "{{ container_name }}"
horizon_self_signed: true
pip_install_options: "--install-option='--prefix={{ install_root_dir }}'"
service_name: horizon
## Optional certification options
# horizon_cacert_pem: /path/to/cacert.pem
# horizon_ssl_cert: /etc/ssl/certs/apache.cert
# horizon_ssl_key: /etc/ssl/private/apache.key
# horizon_ssl_cert_path: /etc/ssl/certs

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
raxmon_agent_key:
- "https://monitoring.api.rackspacecloud.com/pki/agent/linux.asc"
raxmon_agent_repos:
- { repo: "deb http://stable.packages.cloudmonitoring.rackspace.com/ubuntu-14.04-x86_64 cloudmonitoring main", state: "present" }

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the Keystone-api group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
## Service Name
service_name: keystone
## Service ports
service_port: 5000
admin_port: 35357
## Verbosity Options
debug: False
verbose: True
# only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
## DB
container_mysql_user: keystone
container_mysql_password: "{{ keystone_container_mysql_password }}"
container_database: keystone
## AUTH
auth_methods: "password,token"
token_provider: "keystone.token.providers.uuid.Provider"
# If the "token_provider" is set to PKI set this to True
keystone_use_pki: False
## Keystone User / Group
system_user: keystone
system_group: keystone
## Git Source
git_repo: https://git.openstack.org/openstack/keystone
git_fallback_repo: https://github.com/openstack/keystone
git_etc_example: etc/
git_install_branch: stable/icehouse
# Common PIP Packages
service_pip_dependencies:
- repoze.lru
- pbr
- MySQL-python
- pycrypto
- python-memcached
- python-keystoneclient
## Enable SSL
keystone_ssl: false
## Optional SSL vars
# keystone_ssl_cert: /etc/ssl/certs/apache.cert
# keystone_ssl_key: /etc/ssl/certs/apache.key
# keystone_ssl_cert_path: /etc/ssl/certs
container_directories:
- /etc/keystone
- /etc/keystone/ssl
- /var/lib/keystone
- /var/log/keystone
- /var/www/cgi-bin/keystone
container_packages:
- libsasl2-dev
- debhelper
- dh-apparmor
- docutils-common
- libjs-sphinxdoc
- libjs-underscore
- libxslt1.1
- libldap2-dev
- apache2
- apache2-utils
- libapache2-mod-wsgi

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Variables for the logstash containers
service_name: kibana
debug: False
verbose: True
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
kibana_url: https://download.elasticsearch.org/kibana/kibana/kibana-3.1.0.tar.gz
kibana_root: /opt/kibana
container_packages:
- apache2
- python-passlib
system_user: www-user
system_group: www-data
kibana_fqdn: "{{ external_vip_address }}"
kibana_server_name: "{{ container_name }}"
kibana_self_signed: true
kibana_ssl_port: 8443

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Variables for the logstash containers
service_name: logstash
debug: False
verbose: True
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
# Apt repos for ELK
apt_container_keys:
- { url: "http://packages.elasticsearch.org/GPG-KEY-elasticsearch", state: "present" }
apt_container_repos:
- { repo: "deb http://packages.elasticsearch.org/logstash/1.4/debian stable main", state: "present"}
container_packages:
- logstash
- logstash-contrib

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
service_name: memcached
# only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 5GB

---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the neutron group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
## Service Name
service_name: neutron
# Verbosity Options
debug: False
verbose: True
## only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
## General configuration
core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin
interface_driver: neutron.agent.linux.interface.BridgeInterfaceDriver
metering_driver: neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
service_plugins:
- neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
- neutron.services.loadbalancer.plugin.LoadBalancerPlugin
- neutron.services.firewall.fwaas_plugin.FirewallPlugin
- neutron.services.vpn.plugin.VPNDriverPlugin
- neutron.services.metering.metering_plugin.MeteringPlugin
dhcp_driver: neutron.agent.linux.dhcp.Dnsmasq
neutron_config: /etc/neutron/neutron.conf
neutron_plugin: /etc/neutron/plugins/ml2/ml2_conf.ini
neutron_revision: icehouse
## Neutron downtime
neutron_agent_down_time: 120
neutron_report_interval: "{{ neutron_agent_down_time|int / 2 }}"
neutron_agent_polling_interval: 5
## DB
container_mysql_user: neutron
container_mysql_password: "{{ neutron_container_mysql_password }}"
container_database: neutron
## RPC
rpc_backend: rabbit
## Nova Auth
service_admin_tenant_name: "service"
service_admin_username: "neutron"
service_admin_password: "{{ neutron_service_password }}"
## Nova User / Group
system_user: neutron
system_group: neutron
## Service Names
service_names:
- neutron-agent
- neutron-dhcp-agent
- neutron-linuxbridge-agent
- neutron-metadata-agent
- neutron-metering-agent
- neutron-l3-agent
- neutron-server
## Git Source
git_repo: https://git.openstack.org/openstack/neutron
git_fallback_repo: https://github.com/openstack/neutron
git_etc_example: etc/
git_install_branch: stable/icehouse
service_pip_dependencies:
- MySQL-python
- python-memcached
- pycrypto
- repoze.lru
- configobj
- cliff
- python-novaclient
- python-neutronclient
- python-keystoneclient
- keystonemiddleware
container_directories:
- /etc/neutron
- /etc/neutron/plugins
- /etc/neutron/plugins/ml2
- /etc/neutron/rootwrap.d
- /var/cache/neutron
- /var/lib/neutron
- /var/lock/neutron
- /var/log/neutron
- /var/run/neutron
container_packages:
- libpq-dev
- dnsmasq-base
- dnsmasq-utils


@@ -0,0 +1,103 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the nova group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
## Service Name
service_name: nova
# Verbosity Options
debug: False
verbose: True
# only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
# General configuration
volume_driver: cinder.volume.drivers.lvm.LVMISCSIDriver
## DB
container_mysql_user: nova
container_mysql_password: "{{ nova_container_mysql_password }}"
container_database: nova
## RPC
rpc_backend: nova.openstack.common.rpc.impl_kombu
## Nova virtualization Type, set to KVM if supported
virt_type: "{{ nova_virt_type|default('qemu') }}"
## Nova Auth
service_admin_tenant_name: "service"
service_admin_username: "nova"
service_admin_password: "{{ nova_service_password }}"
## Nova User / Group
system_user: nova
system_group: nova
## Service Names
service_names:
- nova-api-metadata
- nova-api-os-compute
- nova-api-ec2
- nova-compute
- nova-conductor
- nova-scheduler
# Endpoint used throughout various services, including nova
nova_metadata_ip: "{{ internal_vip_address }}"
nova_metadata_proxy_shared_secret: "{{ nova_metadata_proxy_secret }}"
## Nova global config
nova_cpu_mode: host-model
nova_linuxnet_interface_driver: nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
nova_libvirt_vif_driver: nova.virt.libvirt.vif.NeutronLinuxBridgeVIFDriver
nova_firewall_driver: nova.virt.firewall.NoopFirewallDriver
nova_compute_driver: libvirt.LibvirtDriver
nova_max_age: 0
## Git Source
git_repo: https://git.openstack.org/openstack/nova
git_fallback_repo: https://github.com/openstack/nova
git_etc_example: etc/nova/
git_install_branch: stable/icehouse
service_pip_dependencies:
- MySQL-python
- python-memcached
- pycrypto
- python-keystoneclient
- python-novaclient
- keystonemiddleware
container_directories:
- /var/log/nova
- /var/lib/nova
- /var/lib/nova/cache/api
- /etc/nova
- /etc/nova/rootwrap.d
- /var/cache/nova
- /var/lock/nova
- /var/run/nova
container_packages:
- libpq-dev
- open-iscsi
- vlan
- kpartx


@@ -0,0 +1,33 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
service_name: rabbitmq
rabbitmq_key:
- "http://www.rabbitmq.com/rabbitmq-signing-key-public.asc"
rabbit_repos:
- { repo: "deb http://www.rabbitmq.com/debian/ testing main", state: "present" }
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
container_config_options:
- "lxc.aa_profile=lxc-openstack"
rabbit_cookie: "{{ rabbitmq_cookie_token }}"
enable_management_plugin: true
rabbit_cluster_name: rpc


@@ -0,0 +1,27 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Variables for the rsyslog containers
#
service_name: rsyslog
debug: False
verbose: True
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
apt_container_repos:
- { repo: "ppa:adiscon/v8-stable", state: "present" }


@@ -0,0 +1,37 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the utility group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
## Service Name
service_name: utility
# Only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
service_pip_dependencies:
- python-openstackclient
- python-cinderclient
- python-glanceclient
- python-heatclient
- python-keystoneclient
- python-neutronclient
- python-novaclient
- python-swiftclient
container_packages:
- ruby1.9.1


@@ -0,0 +1,2 @@
[local]
localhost


@@ -0,0 +1,832 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2014, Kevin Carter <kevin.carter@rackspace.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Based on Jimmy Tang's implementation
DOCUMENTATION = """
---
module: keystone
version_added: "1.6.2"
short_description:
- Manage OpenStack Identity (keystone) users, tenants, roles, and endpoints.
description:
- Manage OpenStack Identity (keystone) users, tenants, roles, and endpoints.
options:
return_code:
description:
- Allow for return codes other than 0 when executing commands.
- This is a comma separated list of acceptable return codes.
default: 0
login_user:
description:
- login username to authenticate to keystone
required: false
default: admin
login_password:
description:
- Password of login user
required: false
default: 'yes'
login_tenant_name:
description:
- The tenant login_user belongs to
required: false
default: None
token:
description:
- The token to be used when the password is not specified
required: false
default: None
endpoint:
description:
- The keystone url for authentication
required: false
password:
description:
- The password to be assigned to the user
required: false
default: None
user_name:
description:
- The name of the user to be added/removed from OpenStack
required: false
default: None
tenant_name:
description:
- The name of the tenant to be added/removed
required: false
default: None
role_name:
description:
- The name of the role to be assigned or created
required: false
service_name:
description:
- Name of the service.
required: false
default: None
region_name:
description:
- Name of the region.
required: false
default: None
description:
description:
- A description for the tenant
required: false
default: None
email:
description:
- Email address for the user; only used by "ensure_user"
required: false
default: None
service_type:
description:
- Type of service.
required: false
default: None
publicurl:
description:
- Public URL.
required: false
default: None
adminurl:
description:
- Admin URL.
required: false
default: None
internalurl:
description:
- Internal URL.
required: false
default: None
command:
description:
- Indicate desired state of the resource
choices: ['get_tenant', 'get_user', 'get_role', 'ensure_service',
'ensure_endpoint', 'ensure_role', 'ensure_user',
'ensure_user_role', 'ensure_tenant']
required: true
requirements: [ python-keystoneclient ]
author: Kevin Carter
"""
EXAMPLES = """
# Create an admin tenant
- keystone: >
command=ensure_tenant
tenant_name=admin
description="Admin Tenant"
# Create a service tenant
- keystone: >
command=ensure_tenant
tenant_name=service
description="Service Tenant"
# Create an admin user
- keystone: >
command=ensure_user
user_name=admin
tenant_name=admin
password=secrete
email="admin@some-domain.com"
# Create an admin role
- keystone: >
command=ensure_role
role_name=admin
# Create a user
- keystone: >
command=ensure_user
user_name=glance
tenant_name=service
password=secrete
email="glance@some-domain.com"
# Add a role to a user
- keystone: >
command=ensure_user_role
user_name=glance
tenant_name=service
role_name=admin
# Create a service
- keystone: >
command=ensure_service
service_name=glance
service_type=image
description="Glance Image Service"
# Create an endpoint
- keystone: >
command=ensure_endpoint
region_name=RegionOne
service_name=glance
service_type=image
publicurl=http://127.0.0.1:9292
adminurl=http://127.0.0.1:9292
internalurl=http://127.0.0.1:9292
# Get tenant id
- keystone: >
command=get_tenant
tenant_name=admin
# Get user id
- keystone: >
command=get_user
user_name=admin
# Get role id
- keystone: >
command=get_role
role_name=admin
"""
COMMAND_MAP = {
'get_tenant': {
'variables': [
'tenant_name'
]
},
'get_user': {
'variables': [
'user_name'
]
},
'get_role': {
'variables': [
'role_name',
'tenant_name',
'user_name'
]
},
'ensure_service': {
'variables': [
'service_name',
'service_type',
'description'
]
},
'ensure_endpoint': {
'variables': [
'region_name',
'service_name',
'service_type',
'publicurl',
'adminurl',
'internalurl'
]
},
'ensure_role': {
'variables': [
'role_name'
]
},
'ensure_user': {
'variables': [
'tenant_name',
'user_name',
'password',
'email'
]
},
'ensure_user_role': {
'variables': [
'user_name',
'tenant_name',
'role_name'
]
},
'ensure_tenant': {
'variables': [
'tenant_name',
'description'
]
}
}
try:
from keystoneclient.v2_0 import client
except ImportError:
keystoneclient_found = False
else:
keystoneclient_found = True
class ManageKeystone(object):
def __init__(self, module):
"""Manage Keystone via Ansible."""
self.state_change = False
self.keystone = None
# Load AnsibleModule
self.module = module
def command_router(self):
"""Run the command as its provided to the module."""
command_name = self.module.params['command']
if command_name not in COMMAND_MAP:
self.failure(
error='No Command Found',
rc=2,
msg='Command [ %s ] was not found.' % command_name
)
action_command = COMMAND_MAP[command_name]
if hasattr(self, '%s' % command_name):
action = getattr(self, '%s' % command_name)
facts = action(variables=action_command['variables'])
if facts is None:
self.module.exit_json(changed=self.state_change)
else:
self.module.exit_json(
changed=self.state_change,
ansible_facts=facts
)
else:
self.failure(
error='Command not in ManageKeystone class',
rc=2,
msg='Method [ %s ] was not found.' % command_name
)
@staticmethod
def _facts(facts):
"""Return a dict for our Ansible facts.
:param facts: ``dict`` Dict with data to return
"""
return {'keystone_facts': facts}
def _get_vars(self, variables, required=None):
"""Return a dict of all variables as found within the module.
:param variables: ``list`` List of all variables that are available to
use within the Keystone Command.
:param required: ``list`` Name of variables that are required.
"""
return_dict = {}
for variable in variables:
return_dict[variable] = self.module.params.get(variable)
else:
if isinstance(required, list):
for var_name in required:
check = return_dict.get(var_name)
if check is None:
self.failure(
error='Missing [ %s ] from Task or found a None'
' value' % var_name,
rc=0,
msg='variables %s - available params [ %s ]'
% (variables, self.module.params)
)
return return_dict
def failure(self, error, rc, msg):
"""Return a Failure when running an Ansible command.
:param error: ``str`` Error that occurred.
:param rc: ``int`` Return code while executing an Ansible command.
:param msg: ``str`` Message to report.
"""
self.module.fail_json(msg=msg, rc=rc, err=error)
def _authenticate(self):
"""Return a keystone client object."""
required_vars = ['endpoint']
variables = [
'endpoint',
'login_user',
'login_password',
'login_tenant_name',
'token'
]
variables_dict = self._get_vars(variables, required=required_vars)
endpoint = variables_dict.pop('endpoint')
login_user = variables_dict.pop('login_user')
login_password = variables_dict.pop('login_password')
login_tenant_name = variables_dict.pop('login_tenant_name')
token = variables_dict.pop('token')
if token is None:
if login_tenant_name is None:
self.failure(
error='Missing Tenant Name',
rc=2,
msg='If you do not specify a token you must use a tenant'
' name for authentication. Try adding'
' [ login_tenant_name ] to the task'
)
if login_password is None:
self.failure(
error='Missing Password',
rc=2,
msg='If you do not specify a token you must use a'
' password for authentication. Try adding'
' [ login_password ] to the task'
)
if token:
self.keystone = client.Client(endpoint=endpoint, token=token)
else:
self.keystone = client.Client(
auth_url=endpoint,
username=login_user,
password=login_password,
tenant_name=login_tenant_name
)
def _get_tenant(self, name):
"""Return tenant information.
:param name: ``str`` Name of the tenant.
"""
for entry in self.keystone.tenants.list():
if entry.name == name:
return entry
else:
return None
def get_tenant(self, variables):
"""Return a tenant id.
This will return `None` if the ``name`` is not found.
:param variables: ``list`` List of all variables that are available to
use within the Keystone Command.
"""
self._authenticate()
variables_dict = self._get_vars(variables, required=['tenant_name'])
tenant_name = variables_dict.pop('tenant_name')
tenant = self._get_tenant(name=tenant_name)
if tenant is None:
self.failure(
error='tenant [ %s ] was not found.' % tenant_name,
rc=2,
msg='tenant was not found, does it exist?'
)
return self._facts(facts={'id': tenant.id})
def ensure_tenant(self, variables):
"""Create a new tenant within Keystone if it does not exist.
Returns the tenant ID on a successful run.
:param variables: ``list`` List of all variables that are available to
use within the Keystone Command.
"""
self._authenticate()
variables_dict = self._get_vars(variables, required=['tenant_name'])
tenant_name = variables_dict.pop('tenant_name')
tenant_description = variables_dict.pop('description')
if tenant_description is None:
tenant_description = 'Tenant %s' % tenant_name
tenant = self._get_tenant(name=tenant_name)
if tenant is None:
self.state_change = True
tenant = self.keystone.tenants.create(
tenant_name=tenant_name,
description=tenant_description,
enabled=True
)
return self._facts(facts={'id': tenant.id})
def _get_user(self, name):
"""Return a user information.
This will return `None` if the ``name`` is not found.
:param name: ``str`` Name of the user.
"""
for entry in self.keystone.users.list():
if entry.name == name:
return entry
else:
return None
def get_user(self, variables):
"""Return a tenant id.
This will return `None` if the ``name`` is not found.
:param variables: ``list`` List of all variables that are available to
use within the Keystone Command.
"""
self._authenticate()
variables_dict = self._get_vars(variables, required=['user_name'])
user_name = variables_dict.pop('user_name')
user = self._get_user(name=user_name)
if user is None:
self.failure(
error='user [ %s ] was not found.' % user_name,
rc=2,
msg='user was not found, does it exist?'
)
return self._facts(facts={'id': user.id})
def ensure_user(self, variables):
"""Create a new user within Keystone if it does not exist.
Returns the user ID on a successful run.
:param variables: ``list`` List of all variables that are available to
use within the Keystone Command.
"""
self._authenticate()
required_vars = ['tenant_name', 'user_name', 'password']
variables_dict = self._get_vars(variables, required=required_vars)
tenant_name = variables_dict.pop('tenant_name')
password = variables_dict.pop('password')
user_name = variables_dict.pop('user_name')
email = variables_dict.pop('email')
tenant = self._get_tenant(name=tenant_name)
if tenant is None:
self.failure(
error='tenant [ %s ] was not found.' % tenant_name,
rc=2,
msg='tenant was not found, does it exist?'
)
user = self._get_user(name=user_name)
if user is None:
self.state_change = True
user = self.keystone.users.create(
name=user_name,
password=password,
email=email,
tenant_id=tenant.id
)
return self._facts(facts={'id': user.id})
def _get_role(self, name):
"""Return a tenant by name.
This will return `None` if the ``name`` is not found.
:param name: ``str`` Name of the role.
"""
for entry in self.keystone.roles.list():
if entry.name == name:
return entry
else:
return None
def get_role(self, variables):
"""Return a tenant by name.
This will return `None` if the ``name`` is not found.
:param variables: ``list`` List of all variables that are available to
use within the Keystone Command.
"""
self._authenticate()
variables_dict = self._get_vars(variables, required=['role_name'])
role_name = variables_dict.pop('role_name')
role_data = self._get_role(name=role_name)
if role_data is None:
self.failure(
error='role [ %s ] was not found.' % role_name,
rc=2,
msg='role was not found, does it exist?'
)
return self._facts(facts={'id': role_data.id})
def _get_role_data(self, user_name, tenant_name, role_name):
user = self._get_user(name=user_name)
if user is None:
self.failure(
error='user [ %s ] was not found.' % user_name,
rc=2,
msg='User was not found, does it exist?'
)
tenant = self._get_tenant(name=tenant_name)
if tenant is None:
self.failure(
error='tenant [ %s ] was not found.' % tenant_name,
rc=2,
msg='tenant was not found, does it exist?'
)
role = self._get_role(name=role_name)
if role is None:
self.failure(
error='role [ %s ] was not found.' % role_name,
rc=2,
msg='role was not found, does it exist?'
)
return user, tenant, role
def ensure_role(self, variables):
"""Create a new role within Keystone if it does not exist.
Returns the role ID on a successful run.
:param variables: ``list`` List of all variables that are available to
use within the Keystone Command.
"""
self._authenticate()
variables_dict = self._get_vars(variables, required=['role_name'])
role_name = variables_dict.pop('role_name')
role = self._get_role(name=role_name)
if role is None:
self.state_change = True
role = self.keystone.roles.create(role_name)
return self._facts(facts={'id': role.id})
def _get_user_roles(self, name, user, tenant):
for entry in self.keystone.users.list_roles(user, tenant.id):
if entry.name == name:
return entry
else:
return None
def ensure_user_role(self, variables):
self._authenticate()
required_vars = ['user_name', 'tenant_name', 'role_name']
variables_dict = self._get_vars(variables, required=required_vars)
user_name = variables_dict.pop('user_name')
tenant_name = variables_dict.pop('tenant_name')
role_name = variables_dict.pop('role_name')
user, tenant, role = self._get_role_data(
user_name=user_name, tenant_name=tenant_name, role_name=role_name
)
user_role = self._get_user_roles(
name=role_name, user=user, tenant=tenant
)
if user_role is None:
self.keystone.roles.add_user_role(
user=user, role=role, tenant=tenant
)
user_role = self._get_user_roles(
name=role_name, user=user, tenant=tenant
)
return self._facts(facts={'id': user_role.id})
def _get_service(self, name, srv_type=None):
for entry in self.keystone.services.list():
if srv_type is not None:
if entry.type == srv_type and name == entry.name:
return entry
elif entry.name == name:
return entry
else:
return None
def ensure_service(self, variables):
"""Create a new service within Keystone if it does not exist.
Returns the service ID on a successful run.
:param variables: ``list`` List of all variables that are available to
use within the Keystone Command.
"""
self._authenticate()
required_vars = ['service_name', 'service_type']
variables_dict = self._get_vars(variables, required=required_vars)
service_name = variables_dict.pop('service_name')
description = variables_dict.pop('description')
service_type = variables_dict.pop('service_type')
service = self._get_service(name=service_name, srv_type=service_type)
if service is None or service.type != service_type:
self.state_change = True
service = self.keystone.services.create(
name=service_name,
service_type=service_type,
description=description
)
return self._facts(facts={'id': service.id})
def _get_endpoint(self, region, publicurl, adminurl, internalurl):
for entry in self.keystone.endpoints.list():
check = [
entry.region == region,
entry.publicurl == publicurl,
entry.adminurl == adminurl,
entry.internalurl == internalurl
]
if all(check):
return entry
else:
return None
def ensure_endpoint(self, variables):
"""Create a new endpoint within Keystone if it does not exist.
Returns the endpoint ID on a successful run.
:param variables: ``list`` List of all variables that are available to
use within the Keystone Command.
"""
self._authenticate()
required_vars = [
'region_name',
'service_name',
'service_type',
'publicurl',
'adminurl',
'internalurl'
]
variables_dict = self._get_vars(variables, required=required_vars)
service_name = variables_dict.pop('service_name')
service_type = variables_dict.pop('service_type')
region = variables_dict.pop('region_name')
publicurl = variables_dict.pop('publicurl')
adminurl = variables_dict.pop('adminurl')
internalurl = variables_dict.pop('internalurl')
service = self._get_service(name=service_name, srv_type=service_type)
if service is None:
self.failure(
error='service [ %s ] was not found.' % service_name,
rc=2,
msg='Service was not found, does it exist?'
)
endpoint = self._get_endpoint(
region=region,
publicurl=publicurl,
adminurl=adminurl,
internalurl=internalurl
)
if endpoint is None:
self.state_change = True
endpoint = self.keystone.endpoints.create(
region=region,
service_id=service.id,
publicurl=publicurl,
adminurl=adminurl,
internalurl=internalurl
)
return self._facts(facts={'id': endpoint.id})
def main():
module = AnsibleModule(
argument_spec=dict(
login_user=dict(
required=False
),
login_password=dict(
required=False
),
login_tenant_name=dict(
required=False
),
token=dict(
required=False
),
password=dict(
required=False
),
endpoint=dict(
required=True,
),
user_name=dict(
required=False
),
tenant_name=dict(
required=False
),
role_name=dict(
required=False
),
service_name=dict(
required=False
),
region_name=dict(
required=False
),
description=dict(
required=False
),
email=dict(
required=False
),
service_type=dict(
required=False
),
publicurl=dict(
required=False
),
adminurl=dict(
required=False
),
internalurl=dict(
required=False
),
command=dict(
required=True,
choices=COMMAND_MAP.keys()
),
return_code=dict(
type='str',
default='0'
)
),
supports_check_mode=False,
mutually_exclusive=[
['token', 'login_user'],
['token', 'login_password'],
['token', 'login_tenant_name']
]
)
km = ManageKeystone(module=module)
if not keystoneclient_found:
km.failure(
error='python-keystoneclient is missing',
rc=2,
msg='keystone client was not importable, is it installed?'
)
return_code = module.params.get('return_code', '').split(',')
module.params['return_code'] = return_code
km.command_router()
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

rpc_deployment/library/lxc Executable file

File diff suppressed because it is too large


@@ -0,0 +1,80 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2014, Kevin Carter <kevin.carter@rackspace.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = """
---
module: name2int
version_added: "1.6.6"
short_description:
- hash a host name and return an integer
description:
- hash a host name and return an integer
options:
name:
description:
- host name to hash
required: true
author: Kevin Carter
"""
EXAMPLES = """
# Hash a host name
- name2int:
name: "Some-hostname.com"
"""
import hashlib
import platform
class HashHostname(object):
def __init__(self, module):
"""Generate an integer from a name."""
self.module = module
def return_hashed_host(self, name):
hashed_name = hashlib.md5(name).hexdigest()
hash_int = int(hashed_name, 32)
real_int = int(hash_int % 300)
return real_int
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(
required=True
)
),
supports_check_mode=False
)
try:
sm = HashHostname(module=module)
int_value = sm.return_hashed_host(platform.node())
resp = {'int_value': int_value}
module.exit_json(changed=True, **resp)
except Exception as exp:
resp = {'stderr': exp}
module.fail_json(msg='Failed Process', **resp)
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()


@@ -0,0 +1,633 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2014, Kevin Carter <kevin.carter@rackspace.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = """
---
module: swift
version_added: "1.6.2"
short_description:
- Manage objects stored in swift
description:
- Manage objects stored in swift
options:
login_user:
description:
- login username
required: true
login_password:
description:
- Password of login user
required: true
login_tenant_name:
description:
- The tenant login_user belongs to
required: false
default: None
login_url:
description:
- Authentication URL
required: true
region:
description:
- Name of the region
required: false
container:
description:
- Name of container
required: true
src:
description:
- Path to the object. Only used in the 'upload' and 'download' commands
required: false
object:
description:
- Name of object
required: false
config_file:
description:
- Path to credential file
required: false
section:
description:
- Section within ``config_file`` to load
required: false
default: default
auth_version:
description:
- Swift authentication version
default: 2.0
required: false
snet:
description:
- Enable ServiceNet. This may not be supported by all providers.
Set true or false.
default: false
marker:
description:
- Set beginning marker. Only used in 'list' command.
default: false
end_marker:
description:
- Set ending marker. Only used in 'list' command.
default: false
limit:
description:
- Set limit. Only used in 'list' command.
default: false
prefix:
description:
- Set prefix filter. Only used in 'list' command.
default: false
command:
description:
- Indicate desired state of the resource
choices: ['upload', 'download', 'delete', 'create', 'list']
required: true
notes:
- Environment variables can be set for all auth credentials which allows
for seamless access. The available environment variables are
OS_USERNAME, OS_PASSWORD, OS_TENANT_ID, OS_AUTH_URL
- A configuration file can be used to load credentials, use ``config_file``
to source the file. If you have multiple sections within the
configuration file use the ``section`` argument to define the section,
however the default is set to "default".
requirements: [ python-swiftclient ]
author: Kevin Carter
"""
EXAMPLES = """
# Create a new container
- swift: >
login_user="SomeUser"
login_password="SomePassword"
login_url="https://identity.somedomain.com/v2.0/"
command=create
container=MyNewContainer
# Upload a new object
- swift: >
login_user="SomeUser"
login_password="SomePassword"
login_url="https://identity.somedomain.com/v2.0/"
command=upload
container=MyNewContainer
src=/path/to/file
object=MyNewObjectName
# Download an object
- swift: >
login_user="SomeUser"
login_password="SomePassword"
login_url="https://identity.somedomain.com/v2.0/"
command=download
container=MyNewContainer
src=/path/to/file
object=MyOldObjectName
# list up-to 10K objects
- swift: >
login_user="SomeUser"
login_password="SomePassword"
login_url="https://identity.somedomain.com/v2.0/"
command=list
container=MyNewContainer
# Delete an Object
- swift: >
login_user="SomeUser"
login_password="SomePassword"
login_url="https://identity.somedomain.com/v2.0/"
command=delete
container=MyNewContainer
object=MyOldObjectName
# Delete a container
- swift: >
login_user="SomeUser"
login_password="SomePassword"
login_url="https://identity.somedomain.com/v2.0/"
command=delete
container=MyNewContainer
"""
COMMAND_MAP = {
'upload': {
'variables': [
'login_user',
'login_password',
'login_tenant_name',
'login_url',
'region',
'container',
'src',
'object',
'auth_version'
]
},
'download': {
'variables': [
'login_user',
'login_password',
'login_tenant_name',
'login_url',
'region',
'container',
'src',
'object',
'auth_version'
]
},
'delete': {
'variables': [
'login_user',
'login_password',
'login_tenant_name',
'login_url',
'region',
'container',
'object',
'auth_version'
]
},
'create': {
'variables': [
'login_user',
'login_password',
'login_tenant_name',
'login_url',
'region',
'container',
'auth_version'
]
},
'list': {
'variables': [
'login_user',
'login_password',
'login_tenant_name',
'login_url',
'region',
'container',
'auth_version',
'marker',
'limit',
'prefix',
'end_marker'
]
}
}
import ConfigParser
import os
try:
from swiftclient import client
except ImportError:
swiftclient_found = False
else:
swiftclient_found = True
class ManageSwift(object):
def __init__(self, module):
"""Manage Swift via Ansible."""
self.state_change = False
self.swift = None
# Load AnsibleModule
self.module = module
def command_router(self):
"""Run the command as its provided to the module."""
command_name = self.module.params['command']
if command_name not in COMMAND_MAP:
self.failure(
error='No Command Found',
rc=2,
msg='Command [ %s ] was not found.' % command_name
)
action_command = COMMAND_MAP[command_name]
if hasattr(self, '_%s' % command_name):
action = getattr(self, '_%s' % command_name)
self._authenticate()
facts = action(variables=action_command['variables'])
if facts is None:
self.module.exit_json(changed=self.state_change)
else:
self.module.exit_json(
changed=self.state_change,
ansible_facts=facts
)
else:
self.failure(
error='Command not in ManageSwift class',
rc=2,
msg='Method [ %s ] was not found.' % command_name
)
@staticmethod
def _facts(facts):
"""Return a dict for our Ansible facts.
:param facts: ``dict`` Dict with data to return
"""
return {'swift_facts': facts}
def _get_vars(self, variables, required=None):
"""Return a dict of all variables as found within the module.
:param variables: ``list`` List of all variables that are available to
use within the Swift Command.
:param required: ``list`` Name of variables that are required.
"""
return_dict = {}
        for variable in variables:
            return_dict[variable] = self.module.params.get(variable)

        if isinstance(required, list):
            for var_name in required:
                check = return_dict.get(var_name)
                if check is None:
                    self.failure(
                        error='Missing [ %s ] from Task or found a None'
                              ' value' % var_name,
                        rc=2,
                        msg='variables %s - available params [ %s ]'
                            % (variables, self.module.params)
                    )
        return return_dict
def failure(self, error, rc, msg):
"""Return a Failure when running an Ansible command.
:param error: ``str`` Error that occurred.
:param rc: ``int`` Return code while executing an Ansible command.
:param msg: ``str`` Message to report.
"""
self.module.fail_json(msg=msg, rc=rc, err=error)
def _env_vars(self, cred_file=None, section='default'):
"""Load environment or sourced credentials.
        If the credentials are specified in either environment variables
        or in a credential file, the sourced variables will be loaded if
        they are not already set within ``module.params``.
:param cred_file: ``str`` Path to credentials file.
:param section: ``str`` Section within creds file to load.
"""
if cred_file:
parser = ConfigParser.SafeConfigParser()
parser.optionxform = str
parser.read(os.path.expanduser(cred_file))
for name, value in parser.items(section):
if name == 'OS_AUTH_URL':
if not self.module.params.get('login_url'):
self.module.params['login_url'] = value
if name == 'OS_USERNAME':
if not self.module.params.get('login_user'):
self.module.params['login_user'] = value
if name == 'OS_PASSWORD':
if not self.module.params.get('login_password'):
self.module.params['login_password'] = value
if name == 'OS_TENANT_ID':
if not self.module.params.get('login_tenant_name'):
self.module.params['login_tenant_name'] = value
else:
if not self.module.params.get('login_url'):
authurl = os.getenv('OS_AUTH_URL')
self.module.params['login_url'] = authurl
if not self.module.params.get('login_user'):
username = os.getenv('OS_USERNAME')
self.module.params['login_user'] = username
if not self.module.params.get('login_password'):
password = os.getenv('OS_PASSWORD')
self.module.params['login_password'] = password
if not self.module.params.get('login_tenant_name'):
tenant = os.getenv('OS_TENANT_ID')
self.module.params['login_tenant_name'] = tenant
def _authenticate(self):
"""Return a swift client object."""
cred_file = self.module.params.pop('config_file', None)
section = self.module.params.pop('section')
self._env_vars(cred_file=cred_file, section=section)
required_vars = ['login_url', 'login_user', 'login_password']
variables = [
'login_url',
'login_user',
'login_password',
'login_tenant_name',
'region',
'auth_version',
'snet'
]
variables_dict = self._get_vars(variables, required=required_vars)
login_url = variables_dict.pop('login_url')
        login_user = variables_dict.pop(
            'login_user', os.getenv('OS_USERNAME')
        )
        login_password = variables_dict.pop(
            'login_password', os.getenv('OS_PASSWORD')
        )
login_tenant_name = variables_dict.pop(
'login_tenant_name', os.getenv('OS_TENANT_ID')
)
region = variables_dict.pop('region', None)
auth_version = variables_dict.pop('auth_version')
snet = variables_dict.pop('snet', None)
if snet in BOOLEANS_TRUE:
snet = True
else:
snet = None
if login_password is None:
self.failure(
error='Missing Password',
rc=2,
msg='A Password is required for authentication. Try adding'
' [ login_password ] to the task'
)
if login_tenant_name is None:
login_tenant_name = ' '
creds_dict = {
'user': login_user,
'key': login_password,
'authurl': login_url,
'tenant_name': login_tenant_name,
'os_options': {
'region': region
},
'snet': snet,
'auth_version': auth_version
}
self.swift = client.Connection(**creds_dict)
def _upload(self, variables):
"""Upload an object to a swift object store.
:param variables: ``list`` List of all variables that are available to
                          use within the Swift Command.
"""
required_vars = ['container', 'src', 'object']
variables_dict = self._get_vars(variables, required=required_vars)
container_name = variables_dict.pop('container')
object_name = variables_dict.pop('object')
src_path = variables_dict.pop('src')
self._create_container(container_name=container_name)
with open(src_path, 'rb') as f:
self.swift.put_object(container_name, object_name, contents=f)
object_data = self.swift.head_object(container_name, object_name)
self.state_change = True
return self._facts(facts=[object_data])
def _download(self, variables):
        """Download an object from a swift object store.

        :param variables: ``list`` List of all variables that are available to
                          use within the Swift Command.
"""
required_vars = ['container', 'src', 'object']
variables_dict = self._get_vars(variables, required=required_vars)
container_name = variables_dict.pop('container')
object_name = variables_dict.pop('object')
        dest_path = variables_dict.pop('src')
        # get_object returns a (headers, body) tuple; with resp_chunk_size
        # set the body is an iterator of chunks, so stream it to disk.
        _, contents = self.swift.get_object(
            container_name, object_name, resp_chunk_size=204800
        )
        with open(dest_path, 'wb') as f:
            for chunk in contents:
                f.write(chunk)
self.state_change = True
def _delete(self, variables):
        """Delete an object or container from a swift object store.
If the ``object`` variable is not used the container will be deleted.
This assumes that the container is empty.
:param variables: ``list`` List of all variables that are available to
                          use within the Swift Command.
"""
required_vars = ['container']
variables_dict = self._get_vars(variables, required=required_vars)
container_name = variables_dict.pop('container')
object_name = variables_dict.pop('object', None)
if object_name:
self.swift.delete_object(container_name, object_name)
else:
self.swift.delete_container(container_name)
self.state_change = True
def _create_container(self, container_name):
"""Ensure a container exists. If it does not, it will be created.
:param container_name: ``str`` Name of the container.
"""
try:
container = self.swift.head_container(container_name)
except client.ClientException:
self.swift.put_container(container_name)
else:
return container
def _create(self, variables):
"""Create a new container in swift.
:param variables: ``list`` List of all variables that are available to
                          use within the Swift Command.
"""
required_vars = ['container']
variables_dict = self._get_vars(variables, required=required_vars)
container_name = variables_dict.pop('container')
container_data = self._create_container(container_name=container_name)
if not container_data:
container_data = self.swift.head_container(container_name)
return self._facts(facts=[container_data])
def _list(self, variables):
"""Return a list of objects or containers.
If the ``container`` variable is not used this will return a list of
containers in the region.
:param variables: ``list`` List of all variables that are available to
                          use within the Swift Command.
"""
variables_dict = self._get_vars(variables)
container_name = variables_dict.pop('container', None)
filters = {
'marker': variables_dict.pop('marker', None),
'limit': variables_dict.pop('limit', None),
'prefix': variables_dict.pop('prefix', None),
'end_marker': variables_dict.pop('end_marker', None)
}
if container_name:
list_data = self.swift.get_container(container_name, **filters)[1]
else:
list_data = self.swift.get_account(**filters)[1]
return self._facts(facts=list_data)
def main():
module = AnsibleModule(
argument_spec=dict(
login_user=dict(
required=False
),
login_password=dict(
required=False
),
login_tenant_name=dict(
required=False
),
login_url=dict(
required=False
),
config_file=dict(
required=False
),
section=dict(
required=False,
default='default'
),
command=dict(
required=True,
choices=COMMAND_MAP.keys()
),
region=dict(
required=False
),
container=dict(
required=False
),
src=dict(
required=False
),
object=dict(
required=False
),
marker=dict(
required=False
),
limit=dict(
required=False
),
prefix=dict(
required=False
),
end_marker=dict(
required=False
),
auth_version=dict(
required=False,
default='2.0'
),
snet=dict(
required=False,
default='false',
choices=BOOLEANS
)
),
supports_check_mode=False,
)
sm = ManageSwift(module=module)
if not swiftclient_found:
sm.failure(
error='python-swiftclient is missing',
rc=2,
msg='Swift client was not importable, is it installed?'
)
sm.command_router()
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()

View File

@ -0,0 +1,18 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: setup/host-setup.yml
- include: setup/build-containers.yml

View File

@ -0,0 +1,30 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Setup supporting services
- hosts: elasticsearch
user: root
roles:
- container_extra_setup
vars_files:
- vars/config_vars/container_config_elasticsearch.yml
- hosts: elasticsearch
user: root
roles:
- common
- container_common
- logging_common
- elasticsearch

View File

@ -0,0 +1,21 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Setup supporting services
- hosts: utility_all
user: root
roles:
- logging_common
- utility_logging

View File

@ -0,0 +1,21 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Restart each daemon in turn
- hosts: galera:!galera[0]
user: root
serial: 1
roles:
- galera_restart

View File

@ -0,0 +1,19 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: galera[0]
user: root
roles:
- galera_bootstrap

View File

@ -0,0 +1,27 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: galera
user: root
roles:
- container_extra_setup
- common
- common_sudoers
- container_common
- galera_common
- galera_client_cnf
- galera_config
vars_files:
- vars/config_vars/container_config_galera.yml

View File

@ -0,0 +1,18 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: galera-config.yml
- include: galera-startup.yml
- include: galera-post-config.yml

View File

@ -0,0 +1,24 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: galera[0]
user: root
roles:
- galera_setup
- hosts: galera
user: root
roles:
- galera_post_config

View File

@ -0,0 +1,19 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: galera
user: root
roles:
- galera_remove

View File

@ -0,0 +1,17 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: galera-bootstrap.yml
- include: galera-add-node.yml

View File

@ -0,0 +1,19 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: galera
user: root
roles:
- galera_stop

View File

@ -0,0 +1,24 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Setup supporting services
- hosts: haproxy_hosts
user: root
roles:
- common
- haproxy_common
- haproxy_service
vars_files:
- vars/config_vars/haproxy_config.yml

View File

@ -0,0 +1,24 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: memcached-install.yml
- include: galera-install.yml
- include: rabbit-install.yml
- include: rsyslog-install.yml
- include: elasticsearch-install.yml
- include: logstash-install.yml
- include: kibana-install.yml
- include: es2unix-install.yml
- include: rsyslog-config.yml

View File

@ -0,0 +1 @@
../../inventory/

View File

@ -0,0 +1,22 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Setup supporting services
- hosts: kibana
user: root
roles:
- common
- container_common
- kibana

View File

@ -0,0 +1 @@
../../library/

View File

@ -0,0 +1,20 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Setup supporting services
- hosts: logstash
user: root
roles:
- logstash

View File

@ -0,0 +1,31 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Setup supporting services
- hosts: logstash
user: root
roles:
- container_extra_setup
vars_files:
- vars/config_vars/container_config_logstash.yml
- hosts: logstash
user: root
roles:
- common
- container_common
- logging_common
- logstash

View File

@ -0,0 +1,24 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: memcached
user: root
roles:
- container_extra_setup
- common
- container_common
- memcached
vars_files:
- vars/config_vars/container_config_memcached.yml

View File

@ -0,0 +1,21 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: rabbit:!rabbit[0]
user: root
serial: 1
roles:
- rabbit_user
- rabbit_join_cluster

View File

@ -0,0 +1,20 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: rabbit[0]
user: root
roles:
- rabbit_user
- rabbit_create_cluster

View File

@ -0,0 +1,22 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: rabbit
user: root
roles:
- common
- container_common
- container_extra_setup
- rabbit_common

View File

@ -0,0 +1,17 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: rabbit-config.yml
- include: rabbit-startup.yml

View File

@ -0,0 +1,19 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: rabbit
user: root
roles:
- rabbit_remove

View File

@ -0,0 +1,17 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: rabbit-bootstrap.yml
- include: rabbit-add-node.yml

View File

@ -0,0 +1 @@
../../roles/

View File

@ -0,0 +1,20 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Setup supporting services
- hosts: rsyslog
user: root
roles:
- rsyslog_config

View File

@ -0,0 +1,25 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: rsyslog
user: root
roles:
- container_extra_setup
- common
- container_common
- safe_upgrade
- rsyslog
vars_files:
- vars/config_vars/container_config_rsyslog.yml

View File

@ -0,0 +1,20 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Setup supporting services
- hosts: rsyslog
user: root
roles:
- rsyslog_stop

View File

@ -0,0 +1 @@
../../vars/

View File

@ -0,0 +1 @@
../inventory/

View File

@ -0,0 +1 @@
../library/

View File

@ -0,0 +1 @@
../../handlers

View File

@ -0,0 +1 @@
../../inventory

View File

@ -0,0 +1 @@
../../library

View File

@ -0,0 +1,53 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: hosts
vars:
check_name: openmanage-memory
file_name: openmanage
check_details: file={{ file_name }}.py,args=chassis,args=memory
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
alarms:
- { 'name': 'hardware_memory_status', 'criteria': ':set consecutiveCount={{ maas_alarm_local_consecutive_count }} if (metric["hardware_memory_status"] != 1) { return new AlarmStatus(CRITICAL, "Physical Memory Error"); }' }
user: root
roles:
- maas_dell_hardware
- hosts: hosts
vars:
check_name: openmanage-processors
file_name: openmanage
check_details: file={{ file_name }}.py,args=chassis,args=processors
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
alarms:
- { 'name': 'hardware_processors_status', 'criteria': ':set consecutiveCount={{ maas_alarm_local_consecutive_count }} if (metric["hardware_processors_status"] != 1) { return new AlarmStatus(CRITICAL, "Physical Processor Error"); }' }
user: root
roles:
- maas_dell_hardware
- hosts: hosts
vars:
check_name: openmanage-vdisk
file_name: openmanage
check_details: file={{ file_name }}.py,args=storage,args=vdisk
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
alarms:
- { 'name': 'hardware_vdisk_status', 'criteria': ':set consecutiveCount={{ maas_alarm_local_consecutive_count }} if (metric["hardware_vdisk_status"] != 1) { return new AlarmStatus(CRITICAL, "Physical Disk Error"); }' }
user: root
roles:
- maas_dell_hardware

View File

@ -0,0 +1,169 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: cinder_api_container
vars:
check_name: cinder_api_local_check
check_details: file={{ check_name }}.py,args={{ ansible_ssh_host }}
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
alarms:
- { 'name': 'cinder_api_local_status', 'criteria': ':set consecutiveCount={{ maas_alarm_local_consecutive_count }} if (metric["cinder_api_local_status"] != 1) { return new AlarmStatus(CRITICAL, "API unavailable"); }' }
user: root
roles:
- maas_local
- hosts: glance_api
vars:
check_name: glance_api_local_check
check_details: file={{ check_name }}.py,args={{ ansible_ssh_host }}
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
alarms:
- { 'name': 'glance_api_local_status', 'criteria': ':set consecutiveCount={{ maas_alarm_local_consecutive_count }} if (metric["glance_api_local_status"] != 1) { return new AlarmStatus(CRITICAL, "API unavailable"); }' }
user: root
roles:
- maas_local
- hosts: glance_registry
vars:
check_name: glance_registry_local_check
check_details: file={{ check_name }}.py,args={{ ansible_ssh_host }}
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
alarms:
- { 'name': 'glance_registry_local_status', 'criteria': ':set consecutiveCount={{ maas_alarm_local_consecutive_count }} if (metric["glance_registry_local_status"] != 1) { return new AlarmStatus(CRITICAL, "API unavailable"); }' }
user: root
roles:
- maas_local
- hosts: heat_apis_container
vars:
check_name: heat_api_local_check
check_details: file={{ check_name }}.py,args={{ ansible_ssh_host }}
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
alarms:
- { 'name': 'heat_api_local_status', 'criteria': ':set consecutiveCount={{ maas_alarm_local_consecutive_count }} if (metric["heat_api_local_status"] != 1) { return new AlarmStatus(CRITICAL, "API unavailable"); }' }
user: root
roles:
- maas_local
- hosts: heat_apis_container
vars:
check_name: heat_cfn_api_check
check_details: file=service_api_local_check.py,args=heat_cfn,args={{ ansible_ssh_host }},args=8000
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
alarms:
      - { 'name': 'heat_cfn_api_local_status', 'criteria': ':set consecutiveCount={{ maas_alarm_local_consecutive_count }} if (metric["heat_cfn_api_local_status"] != 1) { return new AlarmStatus(CRITICAL, "API unavailable"); }' }
user: root
roles:
- maas_local
- hosts: heat_apis_container
vars:
check_name: heat_cw_api_check
check_details: file=service_api_local_check.py,args=heat_cw,args={{ ansible_ssh_host }},args=8003
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
alarms:
      - { 'name': 'heat_cw_api_local_status', 'criteria': ':set consecutiveCount={{ maas_alarm_local_consecutive_count }} if (metric["heat_cw_api_local_status"] != 1) { return new AlarmStatus(CRITICAL, "API unavailable"); }' }
user: root
roles:
- maas_local
- hosts: keystone
vars:
check_name: keystone_api_local_check
check_details: file={{ check_name }}.py,args={{ ansible_ssh_host }}
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
alarms:
- { 'name': 'keystone_api_local_status', 'criteria': ':set consecutiveCount={{ maas_alarm_local_consecutive_count }} if (metric["keystone_api_local_status"] != 1) { return new AlarmStatus(CRITICAL, "API unavailable"); }' }
user: root
roles:
- maas_local
- hosts: neutron_server
vars:
check_name: neutron_api_local_check
check_details: file={{ check_name }}.py,args={{ ansible_ssh_host }}
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
alarms:
- { 'name': 'neutron_api_local_status', 'criteria': ':set consecutiveCount={{ maas_alarm_local_consecutive_count }} if (metric["neutron_api_local_status"] != 1) { return new AlarmStatus(CRITICAL, "API unavailable"); }' }
user: root
roles:
- maas_local
- hosts: nova_api_os_compute
vars:
check_name: nova_api_local_check
check_details: file={{ check_name }}.py,args={{ ansible_ssh_host }}
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
alarms:
- { 'name': 'nova_api_local_status', 'criteria': ':set consecutiveCount={{ maas_alarm_local_consecutive_count }} if (metric["nova_api_local_status"] != 1) { return new AlarmStatus(CRITICAL, "API unavailable"); }' }
user: root
roles:
- maas_local
- hosts: nova_spice_console
vars:
check_name: nova_spice_console_check
check_details: file=service_api_local_check.py,args=nova_spice,args={{ ansible_ssh_host }},args=6082
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
alarms:
- { 'name': 'nova_spice_api_local_status', 'criteria': ':set consecutiveCount={{ maas_alarm_local_consecutive_count }} if (metric["nova_spice_api_local_status"] != 1) { return new AlarmStatus(CRITICAL, "API unavailable"); }' }
user: root
roles:
- maas_local
- hosts: rabbit
vars:
check_name: rabbitmq_status
check_details: file={{ check_name }}.py,args=-H,args={{ ansible_ssh_host }},args=-n,args={{ ansible_hostname }}
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
alarms:
- { 'name': 'disk_free_alarm', 'criteria': ':set consecutiveCount={{ maas_alarm_local_consecutive_count }} if (metric["disk_free_alarm"] != 1) { return new AlarmStatus(CRITICAL, "disk_free_alarm triggered"); }' }
- { 'name': 'mem_alarm', 'criteria': ':set consecutiveCount={{ maas_alarm_local_consecutive_count }} if (metric["mem_alarm"] != 1) { return new AlarmStatus(CRITICAL, "mem_alarm triggered"); }' }
user: root
roles:
- maas_local
- hosts: galera
vars:
check_name: galera_check
check_details: file={{ check_name }}.py,args=-H,args={{ ansible_ssh_host }}
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
user: root
roles:
- maas_local
- hosts: memcached
vars:
check_name: memcached_status
check_details: file={{ check_name }}.py,args={{ ansible_ssh_host }}
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
alarms:
- { 'name': 'memcache_api_local_status', 'criteria': ':set consecutiveCount={{ maas_alarm_local_consecutive_count }} if (metric["memcache_api_local_status"] != 1) { return new AlarmStatus(CRITICAL, "memcached unavailable"); }' }
user: root
roles:
- maas_local
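# The alarm criteria above use the Rackspace Cloud Monitoring alarm language.
# As a sketch, assuming maas_alarm_local_consecutive_count is 3, the keystone
# criterion renders to:
#   :set consecutiveCount=3
#   if (metric["keystone_api_local_status"] != 1) {
#     return new AlarmStatus(CRITICAL, "API unavailable");
#   }
# consecutiveCount requires the condition to hold for three consecutive check
# runs before the alarm transitions, filtering out one-off failures.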

@@ -0,0 +1,204 @@
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: cinder_api[0]
user: root
roles:
- maas_remote
vars:
entity_name: "{{ lb_name }}"
target_alias: "{{ maas_target_alias }}"
check_type: remote.http
check_name: lb_api_check_cinder
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
monitoring_zones: "{{ maas_monitoring_zones }}"
notification_plan: "{{ maas_notification_plan }}"
scheme: "{{ maas_cinder_scheme | default(maas_scheme)}}"
ip_address: "{{ external_vip_address }}"
port: "{{ cinder_service_port }}"
path: ""
url: "{{ scheme }}://{{ ip_address }}:{{ port }}{{ path }}"
alarm_name: lb_api_alarm_cinder
criteria: ":set consecutiveCount={{ maas_alarm_remote_consecutive_count }} if (metric['code'] != '200') { return new AlarmStatus(CRITICAL, 'API unavailable.'); }"
- hosts: glance_api[0]
user: root
roles:
- maas_remote
vars:
entity_name: "{{ lb_name }}"
target_alias: "{{ maas_target_alias }}"
check_type: remote.http
check_name: lb_api_check_glance
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
monitoring_zones: "{{ maas_monitoring_zones }}"
notification_plan: "{{ maas_notification_plan }}"
scheme: "{{ maas_glance_scheme | default(maas_scheme)}}"
ip_address: "{{ external_vip_address }}"
port: 9292
path: ""
url: "{{ scheme }}://{{ ip_address }}:{{ port }}{{ path }}"
alarm_name: lb_api_alarm_glance
criteria: ":set consecutiveCount={{ maas_alarm_remote_consecutive_count }} if (metric['code'] != '300') { return new AlarmStatus(CRITICAL, 'API unavailable.'); }"
- hosts: keystone[0]
user: root
roles:
- maas_remote
vars:
entity_name: "{{ lb_name }}"
target_alias: "{{ maas_target_alias }}"
check_type: remote.http
check_name: lb_api_check_keystone
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
monitoring_zones: "{{ maas_monitoring_zones }}"
notification_plan: "{{ maas_notification_plan }}"
scheme: "{{ maas_keystone_scheme | default(maas_scheme)}}"
ip_address: "{{ external_vip_address }}"
port: "{{ auth_public_port }}"
path: ""
url: "{{ scheme }}://{{ ip_address }}:{{ port }}{{ path }}"
alarm_name: lb_api_alarm_keystone
criteria: ":set consecutiveCount={{ maas_alarm_remote_consecutive_count }} if (metric['code'] != '300') { return new AlarmStatus(CRITICAL, 'API unavailable.'); }"
- hosts: neutron_server[0]
user: root
roles:
- maas_remote
vars:
entity_name: "{{ lb_name }}"
target_alias: "{{ maas_target_alias }}"
check_type: remote.http
check_name: lb_api_check_neutron
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
monitoring_zones: "{{ maas_monitoring_zones }}"
notification_plan: "{{ maas_notification_plan }}"
scheme: "{{ maas_neutron_scheme | default(maas_scheme)}}"
ip_address: "{{ external_vip_address }}"
port: 9696
path: "/"
url: "{{ scheme }}://{{ ip_address }}:{{ port }}{{ path }}"
alarm_name: lb_api_alarm_neutron
criteria: ":set consecutiveCount={{ maas_alarm_remote_consecutive_count }} if (metric['code'] != '200') { return new AlarmStatus(CRITICAL, 'API unavailable.'); }"
- hosts: nova_api_os_compute[0]
user: root
roles:
- maas_remote
vars:
entity_name: "{{ lb_name }}"
target_alias: "{{ maas_target_alias }}"
check_type: remote.http
check_name: lb_api_check_nova
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
monitoring_zones: "{{ maas_monitoring_zones }}"
notification_plan: "{{ maas_notification_plan }}"
scheme: "{{ maas_nova_scheme | default(maas_scheme)}}"
ip_address: "{{ external_vip_address }}"
port: 8774
path: ""
url: "{{ scheme }}://{{ ip_address }}:{{ port }}{{ path }}"
alarm_name: lb_api_alarm_nova
criteria: ":set consecutiveCount={{ maas_alarm_remote_consecutive_count }} if (metric['code'] != '200') { return new AlarmStatus(CRITICAL, 'API unavailable.'); }"
- hosts: horizon[0]
user: root
roles:
- maas_remote
vars:
entity_name: "{{ lb_name }}"
target_alias: "{{ maas_target_alias }}"
check_type: remote.http
check_name: lb_api_check_horizon
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
monitoring_zones: "{{ maas_monitoring_zones }}"
notification_plan: "{{ maas_notification_plan }}"
scheme: "{{ maas_horizon_scheme | default(maas_scheme)}}"
ip_address: "{{ external_vip_address }}"
port: 443
path: ""
url: "{{ scheme }}://{{ ip_address }}:{{ port }}{{ path }}"
alarm_name: lb_api_alarm_horizon
criteria: ":set consecutiveCount={{ maas_alarm_remote_consecutive_count }} if (metric['code'] != '200') { return new AlarmStatus(CRITICAL, 'API unavailable.'); }"
- hosts: heat_api[0]
user: root
roles:
- maas_remote
vars:
entity_name: "{{ lb_name }}"
target_alias: "{{ maas_target_alias }}"
check_type: remote.http
check_name: lb_api_check_heat_api
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
monitoring_zones: "{{ maas_monitoring_zones }}"
notification_plan: "{{ maas_notification_plan }}"
scheme: "{{ maas_heat_api_scheme | default(maas_scheme)}}"
ip_address: "{{ external_vip_address }}"
port: 8004
path: ""
url: "{{ scheme }}://{{ ip_address }}:{{ port }}{{ path }}"
alarm_name: lb_api_alarm_heat_api
criteria: ":set consecutiveCount={{ maas_alarm_remote_consecutive_count }} if (metric['code'] != '300') { return new AlarmStatus(CRITICAL, 'API unavailable.'); }"
- hosts: heat_api_cfn[0]
user: root
roles:
- maas_remote
vars:
entity_name: "{{ lb_name }}"
target_alias: "{{ maas_target_alias }}"
check_type: remote.http
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
check_name: lb_api_check_heat_cfn
monitoring_zones: "{{ maas_monitoring_zones }}"
notification_plan: "{{ maas_notification_plan }}"
scheme: "{{ maas_heat_cfn_scheme | default(maas_scheme)}}"
ip_address: "{{ external_vip_address }}"
port: 8000
path: ""
url: "{{ scheme }}://{{ ip_address }}:{{ port }}{{ path }}"
alarm_name: lb_api_alarm_heat_cfn
criteria: ":set consecutiveCount={{ maas_alarm_remote_consecutive_count }} if (metric['code'] != '300') { return new AlarmStatus(CRITICAL, 'API unavailable.'); }"
- hosts: heat_api_cloudwatch[0]
user: root
roles:
- maas_remote
vars:
entity_name: "{{ lb_name }}"
target_alias: "{{ maas_target_alias }}"
check_type: remote.http
check_name: lb_api_check_heat_cloudwatch
check_period: "{{ maas_check_period }}"
check_timeout: "{{ maas_check_timeout }}"
monitoring_zones: "{{ maas_monitoring_zones }}"
notification_plan: "{{ maas_notification_plan }}"
scheme: "{{ maas_heat_cloudwatch_scheme | default(maas_scheme)}}"
ip_address: "{{ external_vip_address }}"
port: 8003
path: ""
url: "{{ scheme }}://{{ ip_address }}:{{ port }}{{ path }}"
alarm_name: lb_api_alarm_heat_cloudwatch
criteria: ":set consecutiveCount={{ maas_alarm_remote_consecutive_count }} if (metric['code'] != '300') { return new AlarmStatus(CRITICAL, 'API unavailable.'); }"
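# Note on status codes: the checks that alarm on metric['code'] != '300'
# (glance, keystone, heat) probe the unversioned API root, which answers
# "300 Multiple Choices" with a version list when healthy; the remaining
# services are expected to return a plain 200. Per-service schemes can be
# set without editing this file, e.g. via an override var (illustrative):
#   maas_keystone_scheme: https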

@@ -0,0 +1,47 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: hosts
user: root
roles:
- galera_client_cnf
- raxmon_cli
- raxmon_agent_install
vars:
entity_name: "{{ inventory_hostname }}"
- hosts: keystone[0]
user: root
tasks:
# TODO (mattt): Modify openstack_openrc role to allow us to pass in arbitrary
# details. This is a refactor which will need to wait until a
# later date.
- name: Create keystone user for monitoring
keystone: >
command=ensure_user
token="{{ auth_admin_token }}"
endpoint="{{ auth_admin_uri }}"
user_name="{{ maas_keystone_user }}"
tenant_name="{{ auth_admin_tenant }}"
password="{{ maas_keystone_password }}"
- name: Add monitoring keystone user to admin role
keystone: >
command=ensure_user_role
token="{{ auth_admin_token }}"
endpoint="{{ auth_admin_uri }}"
user_name="{{ maas_keystone_user }}"
tenant_name=admin
role_name=admin

@@ -0,0 +1 @@
../../roles

@@ -0,0 +1 @@
../../vars

@@ -0,0 +1,18 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: cinder-api.yml
- include: cinder-scheduler.yml
- include: cinder-volume.yml

@@ -0,0 +1,62 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: cinder_all
user: root
roles:
- common
- common_sudoers
- container_common
- openstack_common
- openstack_openrc
- galera_client_cnf
vars_files:
- vars/openstack_service_vars/cinder_api.yml
- hosts: cinder_api[0]
user: root
roles:
- keystone_add_service
vars_files:
- vars/openstack_service_vars/cinder_api_endpoint.yml
- hosts: cinder_api[0]
user: root
roles:
- keystone_add_service
vars_files:
- vars/openstack_service_vars/cinder_apiv2_endpoint.yml
- hosts: cinder_api[0]
user: root
roles:
- cinder_common
- galera_db_setup
- cinder_setup
- init_script
vars_files:
- vars/openstack_service_vars/cinder_api.yml
handlers:
- include: handlers/services.yml
- hosts: cinder_api:!cinder_api[0]
user: root
roles:
- cinder_common
- init_script
vars_files:
- vars/openstack_service_vars/cinder_api.yml
handlers:
- include: handlers/services.yml

@@ -0,0 +1,30 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: cinder_scheduler
user: root
roles:
- common
- common_sudoers
- container_common
- openstack_common
- openstack_openrc
- cinder_common
- galera_client_cnf
- init_script
vars_files:
- vars/openstack_service_vars/cinder_scheduler.yml
handlers:
- include: handlers/services.yml

@@ -0,0 +1,35 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: cinder_volume
user: root
roles:
- common
- common_sudoers
- container_extra_setup
- container_common
- openstack_common
- openstack_openrc
- cinder_common
- cinder_volume
- cinder_device_add
- cinder_backend_types
- galera_client_cnf
- init_script
vars_files:
- vars/config_vars/container_config_cinder_volume.yml
- vars/openstack_service_vars/cinder_volume.yml
handlers:
- include: handlers/services.yml

@@ -0,0 +1,17 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: glance-api.yml
- include: glance-registry.yml

@@ -0,0 +1,61 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: glance_all
user: root
roles:
- common
- common_sudoers
- container_common
- openstack_common
- openstack_openrc
- galera_client_cnf
- glance_snet_override
vars_files:
- vars/openstack_service_vars/glance_api.yml
- hosts: glance_api[0]
user: root
roles:
- keystone_add_service
vars_files:
- vars/openstack_service_vars/glance_api_endpoint.yml
- hosts: glance_api[0]
user: root
roles:
- glance_common
- galera_db_setup
- glance_setup
- init_script
- glance_cache_crons
vars_files:
- vars/config_vars/glance_config.yml
- vars/openstack_service_vars/glance_api.yml
handlers:
- include: handlers/services.yml
- hosts: glance_api:!glance_api[0]
user: root
roles:
- glance_common
- init_script
- glance_cache_crons
vars_files:
- vars/config_vars/glance_config.yml
- vars/openstack_service_vars/glance_api.yml
handlers:
- include: handlers/services.yml
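# Pattern: the one-time roles (galera_db_setup, glance_setup) run only on the
# first API host so database migrations and initial setup execute exactly
# once; the remaining glance_api hosts receive configuration, init scripts
# and cache crons only. The cinder and heat plays follow the same split.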

@@ -0,0 +1,33 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This playbook deploys Glance-Registry.
- hosts: glance_registry
user: root
roles:
- common
- common_sudoers
- container_common
- glance_common
- openstack_common
- openstack_openrc
- galera_client_cnf
- init_script
- glance_snet_override
vars_files:
- vars/config_vars/glance_config.yml
- vars/openstack_service_vars/glance_registry.yml
handlers:
- include: handlers/services.yml

@@ -0,0 +1 @@
../../handlers/

@@ -0,0 +1,19 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: heat-api.yml
- include: heat-api-cfn.yml
- include: heat-api-cloudwatch.yml
- include: heat-engine.yml

@@ -0,0 +1,37 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: heat_api_cfn
user: root
roles:
- common
- common_sudoers
- container_common
- heat_common
- openstack_common
- openstack_openrc
- galera_client_cnf
- init_script
vars_files:
- vars/openstack_service_vars/heat_api_cfn.yml
handlers:
- include: handlers/services.yml
- hosts: heat_api_cfn[0]
user: root
roles:
- keystone_add_service
vars_files:
- vars/openstack_service_vars/heat_api_cfn_endpoint.yml

@@ -0,0 +1,30 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: heat_api_cloudwatch
user: root
roles:
- common
- common_sudoers
- container_common
- heat_common
- openstack_common
- openstack_openrc
- galera_client_cnf
- init_script
vars_files:
- vars/openstack_service_vars/heat_api_cloudwatch.yml
handlers:
- include: handlers/services.yml

@@ -0,0 +1,51 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: heat_all
user: root
roles:
- common
- common_sudoers
- container_common
- openstack_common
- openstack_openrc
- galera_client_cnf
vars_files:
- vars/openstack_service_vars/heat_api.yml
- hosts: heat_api[0]
user: root
roles:
- keystone_add_service
- heat_domain_user
- heat_common
- galera_db_setup
- heat_setup
- init_script
vars_files:
- vars/openstack_service_vars/heat_api.yml
- vars/openstack_service_vars/heat_api_endpoint.yml
handlers:
- include: handlers/services.yml
- hosts: heat_api:!heat_api[0]
user: root
roles:
- heat_common
- init_script
vars_files:
- vars/openstack_service_vars/heat_api.yml
handlers:
- include: handlers/services.yml

@@ -0,0 +1,30 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: heat_engine
user: root
roles:
- common
- common_sudoers
- container_common
- heat_common
- openstack_common
- openstack_openrc
- galera_client_cnf
- init_script
vars_files:
- vars/openstack_service_vars/heat_engine.yml
handlers:
- include: handlers/services.yml

@@ -0,0 +1,39 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: horizon_all
user: root
roles:
- common
- container_common
- galera_client_cnf
- hosts: horizon_all
user: root
roles:
- openstack_common
- openstack_openrc
- horizon_common
- hosts: horizon_all[0]
user: root
roles:
- galera_db_setup
- horizon_setup
- hosts: horizon_all
user: root
roles:
- horizon_apache

@@ -0,0 +1 @@
../../inventory/

@@ -0,0 +1,113 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
## Keystone
- hosts: keystone[0]
user: root
roles:
- keystone_add_service
vars_files:
- inventory/group_vars/keystone_all.yml
- vars/openstack_service_vars/keystone_endpoint.yml
## Cinder
- hosts: keystone[0]
user: root
roles:
- keystone_add_service
vars_files:
- inventory/group_vars/cinder_all.yml
- vars/openstack_service_vars/cinder_api_endpoint.yml
- hosts: keystone[0]
user: root
roles:
- keystone_add_service
vars_files:
- inventory/group_vars/cinder_all.yml
- vars/openstack_service_vars/cinder_apiv2_endpoint.yml
## Glance
- hosts: keystone[0]
user: root
roles:
- keystone_add_service
vars_files:
- inventory/group_vars/glance_all.yml
- vars/openstack_service_vars/glance_api_endpoint.yml
## Heat
- hosts: keystone[0]
user: root
roles:
- keystone_add_service
vars_files:
- inventory/group_vars/heat_all.yml
- vars/openstack_service_vars/heat_api_endpoint.yml
- hosts: keystone[0]
user: root
roles:
- keystone_add_service
vars_files:
- inventory/group_vars/heat_all.yml
- vars/openstack_service_vars/heat_api_cfn_endpoint.yml
## Neutron
- hosts: keystone[0]
user: root
roles:
- keystone_add_service
vars_files:
- inventory/group_vars/neutron_all.yml
- vars/openstack_service_vars/neutron_server_endpoint.yml
## Nova
- hosts: keystone[0]
user: root
roles:
- keystone_add_service
vars_files:
- inventory/group_vars/nova_all.yml
- vars/openstack_service_vars/nova_api_os_compute_endpoint.yml
- hosts: keystone[0]
user: root
roles:
- keystone_add_service
vars_files:
- inventory/group_vars/nova_all.yml
- vars/openstack_service_vars/nova_api_os_computev3_endpoint.yml
- hosts: keystone[0]
user: root
roles:
- keystone_add_service
vars_files:
- inventory/group_vars/nova_all.yml
- vars/openstack_service_vars/nova_api_ec2_endpoint.yml
- hosts: keystone[0]
user: root
roles:
- keystone_add_service
vars_files:
- inventory/group_vars/nova_all.yml
- vars/openstack_service_vars/nova_api_s3_endpoint.yml

@@ -0,0 +1,20 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Add additional users to keystone if needed.
- hosts: keystone[0]
user: root
roles:
- keystone_add_user

@@ -0,0 +1,53 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Setup Keystone
- hosts: keystone[0]
user: root
tasks:
- name: Perform a Keystone PKI Setup
command: >
keystone-manage pki_setup --keystone-user "{{ system_user }}" --keystone-group "{{ system_group }}"
creates=/etc/keystone/ssl/private/signing_key.pem
- name: Create Key directory
file: >
path=/tmp/keystone/ssl/
state=directory
group="{{ ansible_ssh_user }}"
owner="{{ ansible_ssh_user }}"
recurse=true
delegate_to: localhost
- name: Sync keys from keystone
command: "rsync -az root@{{ ansible_ssh_host }}:/etc/keystone/ssl/ /tmp/keystone/ssl/"
delegate_to: localhost
# Setup all keystone nodes
- hosts: keystone:!keystone[0]
user: root
tasks:
- name: Sync keys to keystone
command: "rsync -az /tmp/keystone/ssl/ root@{{ ansible_ssh_host }}:/etc/keystone/ssl/"
delegate_to: localhost
# Remove temp Key Directory
- hosts: local
gather_facts: false
user: root
tasks:
- name: Remove Key directory
file: >
path=/tmp/keystone/
state=absent
delegate_to: localhost
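The three plays above implement a fan-out key distribution: `pki_setup` creates the signing key on `keystone[0]`, the whole `/etc/keystone/ssl/` tree is pulled down to a staging directory on the deploy host with rsync, pushed back out to every other keystone host, and the staging directory is removed. A local sketch of that flow, with `shutil` standing in for rsync and directories standing in for hosts (all paths and the `fake-key` content are illustrative assumptions):

```python
import shutil
import tempfile
from pathlib import Path

work = Path(tempfile.mkdtemp())
k0_ssl = work / "keystone0/etc/keystone/ssl"      # stands in for keystone[0]
k1_ssl = work / "keystone1/etc/keystone/ssl"      # stands in for another keystone host
staging = work / "tmp/keystone/ssl"               # stands in for /tmp/keystone/ssl/
(k0_ssl / "private").mkdir(parents=True)

# 1. "pki_setup" creates the signing key on the first node (stubbed here).
(k0_ssl / "private/signing_key.pem").write_text("fake-key\n")

# 2. Pull the whole ssl/ tree down to the deploy host ("Sync keys from keystone").
shutil.copytree(k0_ssl, staging)

# 3. Push the staged tree out to every other keystone host ("Sync keys to keystone").
shutil.copytree(staging, k1_ssl)

# 4. Remove the staging directory, as the final "Remove Key directory" play does.
shutil.rmtree(staging.parent)

synced = (k1_ssl / "private/signing_key.pem").read_text().strip()
shutil.rmtree(work)
```

Copying through a staging directory lets the deploy host distribute identical PKI material to nodes that cannot reach each other directly over SSH.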


@@ -0,0 +1,51 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This playbook deploys Keystone-API.
- hosts: keystone
user: root
roles:
- common
- common_sudoers
- container_common
- keystone_common
- openstack_common
- openstack_openrc
- galera_client_cnf
vars_files:
- vars/config_vars/keystone_config.yml
- vars/openstack_service_vars/keystone.yml
# Setup Keystone
- hosts: keystone[0]
user: root
roles:
- galera_db_setup
- keystone_apache
- keystone_setup
- keystone_add_service
vars:
auth_admin_uri: "{{ auth_protocol }}://{{ container_address }}:{{ auth_port }}/v2.0"
vars_files:
- vars/openstack_service_vars/keystone.yml
- vars/openstack_service_vars/keystone_endpoint.yml
# Setup all keystone nodes
- hosts: keystone:!keystone[0]
user: root
roles:
- keystone_apache
vars_files:
- vars/openstack_service_vars/keystone.yml
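The `auth_admin_uri` var above is assembled from three other deployment vars by Ansible's Jinja2 templating. A minimal sketch of that substitution, using `str.format` as a stand-in for Jinja2 and illustrative values that are not taken from the repo:

```python
# Assumed sample values for the three referenced vars.
sample_vars = {
    "auth_protocol": "http",
    "container_address": "10.0.3.10",
    "auth_port": 35357,            # keystone admin API port
}

# Equivalent of: "{{ auth_protocol }}://{{ container_address }}:{{ auth_port }}/v2.0"
auth_admin_uri = "{auth_protocol}://{container_address}:{auth_port}/v2.0".format(**sample_vars)
```

Because the play runs on `keystone[0]`, `container_address` resolves per-host, so the URI points at that node's own admin endpoint.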


@@ -0,0 +1 @@
../../library/


@@ -0,0 +1,21 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: neutron-server.yml
- include: neutron-metadata-agent.yml
- include: neutron-dhcp-agent.yml
- include: neutron-linuxbridge-agent.yml
- include: neutron-l3-agent.yml
- include: neutron-metering-agent.yml

Some files were not shown because too many files have changed in this diff.