Convert existing roles into galaxy roles

This change implements the blueprint to convert all roles and plays into
a more generic setup, following upstream Ansible best practices.

Items Changed:
* All tasks have tags.
* All roles use namespaced variables.
* All redundant tasks within a given play and role have been removed.
* All of the repetitive plays have been removed in favor of a simpler
  approach. This change duplicates some code within the roles but
  ensures that the roles only ever run within their own scope.
* All roles have been built using Ansible Galaxy syntax.
* The `*requirement.txt` files have been reformatted to follow upstream
  OpenStack practices.
* Dynamically generated inventory is now more organized; this should assist
  anyone who wants or needs to dive into the JSON blob that is created.
  Within the inventory, a properties field is used for items that customize
  containers.
* The environment map has been modified to support additional host groups to
  enable the separation of infrastructure pieces. While the old infra_hosts
  group will still work, this change allows groups to be divided into separate
  chunks; e.g., deployment of a Swift-only stack.
* The LXC logic now exists within the plays.
* etc/openstack_deploy/user_variables.yml has had all password/token
  variables extracted into the separate file
  etc/openstack_deploy/user_secrets.yml in order to allow separate
  security settings on that file.
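
For illustration, a task in the converted roles now uses a role-namespaced
variable and carries tags. This is a sketch only; the variable and tag
names below are hypothetical, not taken from the actual roles:

    - name: Install service packages
      apt:
        pkg: "{{ item }}"
        state: "present"
      # "example_role_packages" is a hypothetical role-namespaced variable
      with_items: example_role_packages
      tags:
        - example-role-install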

Items Excised:
* All of the roles have had the LXC logic removed from within them which
  should allow roles to be consumed outside of the `os-ansible-deployment`
  reference architecture.

Note:
* The directory rpc_deployment still exists and presently points at plays
  containing a deprecation warning instructing the user to move to the standard
  playbooks directory.
* While all of the Rackspace-specific components and variables have been removed
  or refactored, the repository still relies on an upstream mirror of
  OpenStack-built Python files and container images. This upstream mirror is
  hosted by Rackspace at "http://rpc-repo.rackspace.com", though it is not
  locked or tied to Rackspace-specific installations. This repository contains
  all of the code needed to create and/or clone your own mirror.

DocImpact
Co-Authored-By: Jesse Pretorius <jesse.pretorius@rackspace.co.uk>
Closes-Bug: #1403676
Implements: blueprint galaxy-roles
Change-Id: I03df3328b7655f0cc9e43ba83b02623d038d214e
Authored by Kevin Carter on 2015-02-14 10:06:50 -06:00; committed by Jesse Pretorius
parent 81c4ab04f7
commit 8e6dbd01c9
824 changed files with 25647 additions and 18545 deletions

@@ -1,30 +0,0 @@
### Contributor guidelines
**Filing Bugs**
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/openstack-ansible
When submitting a bug, or working on a bug, please ensure the following criteria are met:
* The description clearly states or describes the original problem or root cause of the problem.
* Include historical information on how the problem was identified.
* Any relevant logs are included.
* The provided information should be totally self-contained. External access to web services/sites should not be needed.
* Steps to reproduce the problem if possible.
**Submitting Code**
Changes to the project should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
**Extra**
***Tags***: If it's a bug that needs fixing in a branch in addition to Master, add a '\<release\>-backport-potential' tag (eg ```juno-backport-potential```). There are predefined tags that will autocomplete
***Status***: Please leave this alone, it should be New till someone triages the issue.
***Importance***: Should only be touched if it is a Blocker/Gating issue. If it is, please set to High, and only use Critical if you have found a bug that can take down whole infrastructures.

CONTRIBUTING.rst (new file)
@@ -0,0 +1,90 @@
OpenStack Ansible Deployment
############################
:tags: openstack, cloud, ansible
:category: \*nix
contributor guidelines
^^^^^^^^^^^^^^^^^^^^^^
Filing Bugs
-----------
Bugs should be filed on Launchpad, not GitHub: "https://bugs.launchpad.net/openstack-ansible".
When submitting a bug, or working on a bug, please ensure the following criteria are met:
* The description clearly states or describes the original problem or root cause of the problem.
* Include historical information on how the problem was identified.
* Any relevant logs are included.
* The provided information should be totally self-contained. External access to web services/sites should not be needed.
* Steps to reproduce the problem if possible.
Submitting Code
---------------
Changes to the project should be submitted for review via the Gerrit tool, following
the workflow documented at: "http://docs.openstack.org/infra/manual/developers.html#development-workflow"
Pull requests submitted through GitHub will be ignored and closed.
Extra
-----
Tags:
If it's a bug that needs fixing in a branch in addition to Master, add a '\<release\>-backport-potential' tag (eg ``juno-backport-potential``). There are predefined tags that will auto-complete.
Status:
Please leave this alone; it should be New until someone triages the issue.
Importance:
Should only be touched if it is a Blocker/Gating issue. If it is, please set to High, and only use Critical if you have found a bug that can take down whole infrastructures.
Style guide
-----------
When creating tasks and other roles for use in Ansible, please create them using the YAML dictionary format.
Example YAML dictionary format:
.. code-block:: yaml

    - name: The name of the tasks
      module_name:
        thing1: "some-stuff"
        thing2: "some-other-stuff"
      tags:
        - some-tag
        - some-other-tag
Example of what **NOT** to do:
.. code-block:: yaml

    - name: The name of the tasks
      module_name: thing1="some-stuff" thing2="some-other-stuff"
      tags: some-tag

.. code-block:: yaml

    - name: The name of the tasks
      module_name: >
        thing1="some-stuff"
        thing2="some-other-stuff"
      tags: some-tag
Usage of the ">" and "|" operators should be limited to Ansible conditionals and command modules such as the Ansible ``shell`` or ``command`` modules.
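As an illustrative sketch of an acceptable use of the ``>`` operator with the ``shell`` module (the command, path, and tag below are hypothetical, not taken from this repository):

.. code-block:: yaml

    - name: Remove stale example logs
      shell: >
        find /var/log/example -name '*.log' -mtime +7
        -delete
      tags:
        - example-cleanup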
Issues
------
When submitting an issue, or working on an issue please ensure the following criteria are met:
* The description clearly states or describes the original problem or root cause of the problem.
* Include historical information on how the problem was identified.
* Any relevant logs are included.
* If the issue is a bug that needs fixing in a branch other than Master, please note the associated branch within the launchpad issue.
* The provided information should be totally self-contained. External access to web services/sites should not be needed.
* Steps to reproduce the problem if possible.

LICENSE.txt (new file)
@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@@ -1,120 +1,60 @@
OpenStack Deployment with Ansible
#################################
:date: 2014-09-25 09:00
OpenStack Ansible Deployment
############################
:date: 2015-02-02 22:00
:tags: lxc, openstack, cloud, ansible
:category: \*nix
Official Documentation
----------------------
Comprehensive installation guides, including FAQs and release notes, can be found at http://docs.rackspace.com
Playbooks
---------
Bug tracking and release management can be found in Launchpad_
There are several playbooks within this repository that will set up hosts for use in an OpenStack cloud. The playbooks enable LXC on hosts and provide the ability to deploy LXC containers for use within OpenStack.
.. _launchpad: https://launchpad.net/openstack-ansible
Plays:
* ``setup-hosts.yml`` Performs host setup for use with LXC in the OpenStack hosts.
* ``setup-infrastructure.yml`` Performs all of the setup for all infrastructure components.
* ``setup-openstack.yml`` Performs all of the setup for all of the OpenStack components.
Code reviews will be managed in Gerrit_
* If you don't want to run plays individually you can simply run ``setup-everything.yml``, which will perform all of the setup and installation for you.
.. _gerrit: https://review.openstack.org/#/q/os-ansible-deployment,n,z
Basic Setup:
1. If you have any roles that you'd like to have pulled in that are outside the scope of, or that replace, roles within this repository, please add them to the ``ansible-role-requirements.yml`` file. In this file, fill in the details for the role you want to pull in using standard Ansible Galaxy format.
Playbook Support
----------------
.. code-block:: yaml
OpenStack:
* keystone
* glance-api
* glance-registry
* cinder-api
* cinder-scheduler
* cinder-volume
* nova-api
* nova-api-ec2
* nova-api-metadata
* nova-api-os-compute
* nova-compute
* nova-conductor
* nova-scheduler
* heat-api
* heat-api-cfn
* heat-api-cloudwatch
* heat-engine
* horizon
* neutron-server
* neutron-dhcp-agent
* neutron-metadata-agent
* neutron-linuxbridge-agent
    - name: SuperAwesomeModule
      src: https://github.com/super-user/SuperAwesomeModule
      version: master
2. Run the ``./scripts/os-ansible-bootstrap.sh`` script, which will install pip, Ansible 1.8.x, and all of the required Python packages, and bring in any third-party Ansible roles that you may want to add to the deployment.
3. Copy the ``etc/openstack_deploy`` directory to ``/etc/openstack_deploy``. If you are executing all of this as an unprivileged user, you can instead add the ``openstack_deploy`` bits to your home directory as ``${HOME}/.openstack_deploy``.
4. Fill in your ``openstack_deploy/openstack_user_config.yml``, ``openstack_deploy/user_secrets.yml``, and ``openstack_deploy/user_variables.yml`` files, which you've just copied to your ``/etc/`` directory or your ``${HOME}`` folder.
5. Generate all of your random passwords by executing ``scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml``.
6. Set up all of the host networking that you want to use within the deployment. See the ``etc/network`` directory in this repository for an example network setup.
7. When ready, change to the ``playbooks/`` directory and execute your desired plays, e.g.:
Infrastructure:
* galera
* rabbitmq
* logstash
* elastic-search
* kibana
.. code-block:: bash
Assumptions
-----------
This repo assumes that you have set up the host servers that will be running the OpenStack infrastructure with three bridged network devices named ``br-mgmt``, ``br-vxlan``, and ``br-vlan``. These bridges will be used throughout the OpenStack infrastructure.
The repo also relies on configuration files found in the `/etc` directory of this repo.
If you are running Ansible from an "unprivileged" host, you can place the contents of the /etc/ directory in your home folder; this would be in a directory similar to `/home/<myusername>/openstack_deploy/`. Once you have the files in place, you will have to enter the details of your environment in the `openstack_user_config.yml` file; please see the file for how this should look. After you have a bridged network and the files/directory in place, continue on to `Base Usage`_.
Base Usage
----------
All commands must be executed from the ``playbooks`` directory. From this directory you will have access to all of the playbooks, roles, and variables. It is recommended that you create an override file to contain any and all variables that you wish to override for the deployment. While the override file is not required, it will make life a bit easier. The default override file for the environment is the ``user_variables.yml`` file.
All of the variables that you may wish to update are in the ``vars/`` directory, however you should also be aware that services will pull in base group variables as found in ``inventory/group_vars``.
All playbooks exist in the ``playbooks/`` directory and are grouped in different sub-directories.
All of the keys, tokens, and passwords are in the ``user_variables.yml`` file. This file contains no
preset passwords. To set up your keys, passwords, and tokens you will need to either edit this file
manually or use the script ``pw-token-gen.py``. Example:
.. code-block:: bash

    # Generate the tokens
    scripts/pw-token-gen.py --file /etc/openstack_deploy/user_variables.yml
Example usage from the ``playbooks`` directory in the ``os-ansible-deployment`` repository:
.. code-block:: bash

    # Run setup on all hosts:
    ansible-playbook -e @vars/user_variables.yml playbooks/host-setup.yml

    # Run infrastructure on all hosts
    ansible-playbook -e @vars/user_variables.yml playbooks/infrastructure-setup.yml

    # Setup and configure openstack within your spec'd containers
    ansible-playbook -e @vars/user_variables.yml playbooks/openstack-setup.yml
About Inventory
---------------
All things that Ansible cares about are located in inventory. The whole inventory is dynamically generated using the previously mentioned configuration files. While this is a dynamically generated inventory, it is not 100% regenerated on every run. The inventory is saved in a file named `openstack_inventory.json` and is located in the directory where you've placed your user configuration files. On every run a backup of the inventory JSON file is created in both the current working directory as well as the location where the user configuration files exist. The inventory JSON file is a living document and is intended to grow as the environment scales in infrastructure. This means that the inventory file will be appended to as you add more nodes and/or change the container affinity from within the `openstack_user_config.yml` file. It is recommended that the base inventory file be backed up to a safe location upon the completion of a deployment operation. While the dynamic inventory processor has guards in it to ensure that the built inventory is not adversely affected by programmatic operations, this does not guard against user error and/or catastrophic failure.
Scaling
-------
If you are scaling the environment using the dynamically generated inventory, you should know that the inventory was designed to add new entries, not to remove entries. These playbooks build an environment to spec, so if container affinity is changed and/or a node is added to or removed from an environment, the user configuration file will need to be modified, as will the inventory JSON. For this reason it is recommended that, should a physical node need replacing, it be given the same name as the previous one. This will make things easier when rebuilding the environment. Additionally, if a container needs to be replaced, it is better to simply remove the misbehaving container and rebuild it using the existing inventory.
openstack-ansible setup-everything.yml
Notes
-----
* Library has an experimental `keystone` module which adds ``keystone:`` support to Ansible.
* Library has an experimental `swift` module which adds ``swift:`` support to Ansible.
* Library has an experimental `neutron` module which adds ``neutron:`` support to Ansible.
* Library has an experimental `glance` module which adds ``glance:`` support to Ansible.
* Library has an experimental `lxc` module which adds ``lxc:`` support to Ansible.
* Library has an experimental `memcached` module which adds ``memcached:`` support to Ansible.
* Library has an experimental `name2int` module which adds ``name2int:`` support to Ansible.
* If you run the ``./scripts/bootstrap-ansible.sh`` script, a wrapper script will be added to your system that wraps the ansible-playbook command to simplify the arguments required to run OpenStack Ansible plays. The name of the wrapper script is **openstack-ansible**.
* The LXC network is created within the *lxcbr0* interface. This supports both NAT networks as well as more traditional networking. If NAT is enabled (the default), the iptables rules will be created along with the interface as a post-up process. If you ever need to recreate the rules and/or restart the dnsmasq process, you can bounce the interface, e.g. ``ifdown lxcbr0; ifup lxcbr0``, or you can use the ``lxc-system-manage`` command.
* The ``lxc-system-manage`` tool is available on all LXC hosts and can assist in recreating parts of the LXC system whenever it is needed.
* Our repository uses a custom `LXC` module which adds ``lxc:`` support to Ansible. The module within this repository is presently pending review in upstream Ansible at "https://github.com/ansible/ansible-modules-extras/pull/123".
* Inventory is generated by executing the ``playbooks/inventory/dynamic_inventory.py`` script. This is configured in the ``playbooks/ansible.cfg`` file.
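To illustrate the notes above, an entry in the generated ``openstack_inventory.json`` might be shaped roughly as follows. This is a hypothetical sketch rendered as YAML for readability; the group, host, address, and property values are invented, not taken from a real deployment:

.. code-block:: yaml

    galera_container:
      hosts:
        - infra1_galera_container-3f1d6a
    _meta:
      hostvars:
        infra1_galera_container-3f1d6a:
          ansible_ssh_host: 172.29.236.101
          # the properties field holds items that customize the container
          properties:
            container_release: trusty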
Bugs and Blueprints
-------------------
Everything we do is in Launchpad and Gerrit. If you'd like to raise a bug or feature request, or are looking for ways to contribute, please go to "https://launchpad.net/openstack-ansible".
Official Documentation
----------------------
Comprehensive installation guides, including FAQs and release notes, can be found at "http://docs.rackspace.com/rpc/api/v9/bk-rpc-installation/content/rpc-common-front.html". Note that these docs may not be up to date with the current release of this repository; however, they are still a good source of documentation.

@@ -0,0 +1,6 @@
# Use this file to fill in your third party roles that you'd like to have added to the list of available roles.
# Example:
# - github_api: https://api.github.com/repos/os-cloud/opc_role-galera_client
# name: galera_client
# src: https://github.com/os-cloud/opc_role-galera_client
# version: master

dev-requirements.txt (new file)
@@ -0,0 +1 @@
ansible-lint>=2.0.3

development-stack.rst (new file)
@@ -0,0 +1,143 @@
OpenStack Ansible Deployment
############################
:date: 2015-02-02 22:00
:tags: lxc, openstack, cloud, ansible
:category: \*nix
Building a development stack
----------------------------
If you want to build a development stack for testing or to otherwise contribute to this repository, you can do so using the
``cloudserver-aio.sh`` script in the ``scripts`` directory. Execute the script from the ``os-ansible-deployment`` directory that was created when you cloned the repository.
Example AIO build process:
.. code-block:: bash

    # Clone the source code
    git clone https://github.com/stackforge/os-ansible-deployment /opt/os-ansible-deployment

    # Change your directory
    cd /opt/os-ansible-deployment

    # Checkout your desired branch.
    git checkout master

    # Run the script from the root directory of the cloned repository.
    ./scripts/run-aio-build.sh
To use this script successfully please make sure that you have the following:
* At least **60GB** of available storage on "/" when using local file system containers. Containers are built into ``/var/lib/lxc`` and will consume up to 40GB on their own.
* If you would like to test building containers using LVM simply create an **lxc** volume group before executing the script. Be aware that each container will be built with a minimum of 5GB of storage.
* A KVM-capable 2.4GHz quad-core processor is required.
* You must have at least 4GB of available RAM.
This may seem like a lot to run the stack, which is partially true; however, consider that this simple "All in One" deployment builds a 35-node infrastructure and mimics our reference architecture. Additionally, components like RabbitMQ, MariaDB with Galera, the repository servers, and Keystone will all be clustered. Lastly, the "All in One" deployment uses HAProxy for test purposes only. **At this time we do not recommend running HAProxy in production**. You should **NEVER** use the AIO script on a box that you care about. Cloud servers such as Rackspace Cloud servers of the *general1-8* flavor work well as development machines, as do VirtualBox or KVM instances.
Using Heat:
If you would like to use Heat to deploy an all-in-one node, there is a Heat script which you can use. Simply get and/or source the raw script found here: "https://raw.githubusercontent.com/stackforge/os-ansible-deployment/master/scripts/osad-aio-heat-template.yml"
Rebuilding the stack
^^^^^^^^^^^^^^^^^^^^
Once you have completed your testing and/or dev work, if you'd like to tear down the stack and restart from a new build, there is a play that will assist you in doing just that. Simply change to your playbooks directory and execute the ``lxc-containers-destroy.yml`` play.
Example:
.. code-block:: bash

    # Move to the playbooks directory.
    cd /opt/os-ansible-deployment/playbooks

    # Destroy all of the running containers.
    openstack-ansible lxc-containers-destroy.yml

    # On the host stop all of the services that run locally and not within a container.
    for i in $(ls /etc/init | grep -e nova -e swift -e neutron | awk -F'.' '{print $1}'); do service $i stop; done

    # Uninstall the core services that were installed.
    for i in $(pip freeze | grep -e nova -e neutron -e keystone -e swift); do pip uninstall -y $i; done

    # Remove crusty directories.
    rm -rf /openstack /etc/neutron /etc/nova /etc/swift /var/log/neutron /var/log/nova /var/log/swift
Using the teardown script:
The ``teardown.sh`` script will destroy everything known within an environment. You should be aware that this script destroys whole environments and should be used **WITH CAUTION**.
Notice
^^^^^^
The system uses a number of variables. You should look at the scripts for a full explanation and description of all of the available variables that you can set. At a minimum you should be aware of the default public interface variable, as you may be building on a box that does not have an ``eth0`` interface. To set the default public interface, run the following:
.. code-block:: bash

    # This is only required if you don't have an eth0 interface.
    export PUBLIC_INTERFACE="<<REPLACE WITH THE NAME OF THE INTERFACE>>"
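If you are unsure which interface is your public one, a best-effort detection is to take the interface carrying the default route. That heuristic is an assumption, not something the scripts do themselves:

```shell
# Best-effort guess of the public interface: the device used to reach
# an external address (8.8.8.8 here is an arbitrary probe target).
# Fall back to eth0 if detection fails, e.g. when `ip` is unavailable.
detected="$(ip -o route get 8.8.8.8 2>/dev/null \
  | awk '{for (i = 1; i < NF; i++) if ($i == "dev") {print $(i + 1); exit}}' || true)"
export PUBLIC_INTERFACE="${detected:-eth0}"
echo "$PUBLIC_INTERFACE"
```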
This play will destroy all of your running containers and remove items within the ``/openstack`` directory for each container. After the completion of this play you can rerun the ``cloudserver-aio.sh`` script, or you can run the plays manually to rebuild the stack.
Diagram of stack
^^^^^^^^^^^^^^^^
Here is a basic diagram that attempts to illustrate what the AIO installation job is doing. **NOTICE**: This diagram is not to scale and is not 100% accurate; it was built for informational purposes only and should **ONLY** be used as such.
Diagram::
====== ASCII Diagram for AIO infrastructure ======
------->[ ETH0 == Public Network ]
|
V [ * ] Socket Connections
[ HOST MACHINE ] [ <>v^ ] Network Connections
* ^ *
| | |-----------------------------------------------------
| | |
| |---------------->[ HAProxy ] |
| ^ |
| | |
| V |
| (BR-Interfaces)<----- |
| ^ * | |
*-[ LXC ]*--*--------------------|-----|------|----| |
| | | | | | | |
| * | | | | | |
| --->[ Logstash ]<-----------|-- | | | | |
| | [ Kibana ]<-------------| | | V * | |
| --->[ Elastic search ]<-----| | | [ Galera x3 ] |
| [ Memcached ]<----------| | | | |
*-------*[ Rsyslog ]<------------|-- | * |
| [ Repos Server x3 ]<----| ---|-->[ RabbitMQ x3 ] |
| [ Horizon ]<------------| | | |
| [ Nova api ec2 ]<-------|--| | |
| [ Nova api os ]<--------|->| | |
| [ Nova spice console ]<-| | | |
| [ Nova Cert ]<----------|->| | |
| [ Cinder api ]<---------|->| | |
| [ Glance api ]<---------|->| | |
| [ Heat apis ]<----------|->| | [ Loop back devices ]*-*
| [ Heat engine ]<--------|->| | \ \ |
| ------>[ Nova api metadata ] | | | { LVM } { XFS x3 } |
| | [ Nova conductor ]<-----| | | * * |
| |----->[ Nova scheduler ]------|->| | | | |
| | [ Keystone x3 ]<--------|->| | | | |
| | |--->[ Neutron agents ]*-----|--|---------------------------*
| | | [ Neutron server ]<-----|->| | | |
| | | |->[ Swift proxy ]<--------- | | | |
*-|-|-|-*[ Cinder volume ]*--------------------* | |
| | | | | | |
| | | --------------------------------------- | |
| | --------------------------------------- | | |
| | -----------------------| | | | |
| | | | | | |
| | V | | * |
---->[ Compute ]*[ Neutron linuxbridge ]<-| |->[ Swift storage ]-
====== ASCII Diagram for AIO infrastructure ======

View File

@ -1,4 +1,9 @@
## Required network bridges; br-vlan, br-vxlan, br-mgmt.
## The default networking requires several bridges. These bridges were named to be informative;
## however, they can be named whatever you like and adapted to any network infrastructure
## environment. This file serves as an example of how to set up basic networking and was ONLY
## built for the purpose of being an example, used expressly in the building of an ALL IN
## ONE development environment.
auto br-mgmt
iface br-mgmt inet static
bridge_stp off
@ -20,21 +25,10 @@ iface br-vxlan inet static
# To ensure ssh checksum is correct
up /sbin/iptables -A POSTROUTING -t mangle -p tcp --dport 22 -j CHECKSUM --checksum-fill
down /sbin/iptables -D POSTROUTING -t mangle -p tcp --dport 22 -j CHECKSUM --checksum-fill
# To ensure dhcp checksum is correct
up /sbin/iptables -A POSTROUTING -t mangle -p udp --dport 68 -j CHECKSUM --checksum-fill
down /sbin/iptables -D POSTROUTING -t mangle -p udp --dport 68 -j CHECKSUM --checksum-fill
# To provide internet connectivity to instances
up /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
down /sbin/iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
auto br-vlan
iface br-vlan inet manual
bridge_stp off
bridge_waitport 0
bridge_fd 0
# Notice this bridge port is an Untagged host interface
bridge_ports none
auto br-storage
iface br-storage inet static
bridge_stp off
@ -43,3 +37,20 @@ iface br-storage inet static
bridge_ports none
address 172.29.244.100
netmask 255.255.252.0
auto br-vlan
iface br-vlan inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
address 172.29.248.100
netmask 255.255.252.0
# Create veth pair, don't bomb if already exists
pre-up ip link add br-vlan-veth type veth peer name eth12 || true
# Set both ends UP
pre-up ip link set br-vlan-veth up
pre-up ip link set eth12 up
# Delete veth pair on DOWN
post-down ip link del br-vlan-veth || true
bridge_ports br-vlan-veth
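The ``|| true`` guards on the ``pre-up``/``post-down`` lines above keep ``ifup``/``ifdown`` from aborting when the veth pair already exists or is already gone. The same create-if-missing pattern can be demonstrated with any command that fails on a second run; ``mkdir`` below is a stand-in for ``ip link add``, which would need root:

```shell
# Stand-in for `ip link add ... || true`: the second mkdir fails
# because the directory already exists, but `|| true` swallows the
# error so the script continues, exactly as the pre-up lines rely on.
demo="$(mktemp -d)/veth-demo"
mkdir "$demo" 2>/dev/null || true   # first run: directory created
mkdir "$demo" 2>/dev/null || true   # second run: fails, but ignored
echo "continued past the duplicate create"
```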

View File

@ -1,10 +1,14 @@
#EXAMPLE INTERFACE FILE
#
#1293 - HOST_NET (Ignore This. It's the native VLAN.)
#2176 - CONTAINER_NET
#1998 - OVERLAY_NET
#2144 - STORAGE_NET
#2146 - GATEWAY_NET (VM Provider Network. Ignore this. OpenStack will tag for us.)
## The default networking requires several bridges. These bridges were named to be informative;
## however, they can be named whatever you like and adapted to any network infrastructure
## environment. This file serves as an example of how to set up basic networking and was ONLY
## built for the purpose of being an example.
# EXAMPLE INTERFACE FILE
# 1293 - HOST_NET (Ignore This. It's the native VLAN.)
# 2176 - CONTAINER_NET
# 1998 - OVERLAY_NET
# 2144 - STORAGE_NET
# 2146 - GATEWAY_NET (VM Provider Network. Ignore this. OpenStack will tag for us.)
## Physical interface, could be bond. This only needs to be set once for the physical device
auto eth0

View File

@ -16,322 +16,475 @@
component_skel:
cinder_api:
belongs_to:
- cinder_all
- cinder_all
cinder_scheduler:
belongs_to:
- cinder_all
- cinder_all
cinder_volume:
belongs_to:
- cinder_all
- cinder_all
elasticsearch:
belongs_to:
- elasticsearch_all
- elasticsearch_all
galera:
belongs_to:
- galera_all
- galera_all
glance_api:
belongs_to:
- glance_all
- glance_all
glance_registry:
belongs_to:
- glance_all
- glance_all
heat_api:
belongs_to:
- heat_all
- heat_all
heat_api_cfn:
belongs_to:
- heat_all
- heat_all
heat_api_cloudwatch:
belongs_to:
- heat_all
- heat_all
heat_engine:
belongs_to:
- heat_all
- heat_all
horizon:
belongs_to:
- horizon_all
- horizon_all
keystone:
belongs_to:
- keystone_all
- keystone_all
kibana:
belongs_to:
- kibana_all
- kibana_all
logstash:
belongs_to:
- logstash_all
- logstash_all
memcached:
belongs_to:
- memcached_all
- memcached_all
neutron_agent:
belongs_to:
- neutron_all
- neutron_all
neutron_dhcp_agent:
belongs_to:
- neutron_all
- neutron_all
neutron_linuxbridge_agent:
belongs_to:
- neutron_all
- neutron_all
neutron_metering_agent:
belongs_to:
- neutron_all
- neutron_all
neutron_l3_agent:
belongs_to:
- neutron_all
- neutron_all
neutron_metadata_agent:
belongs_to:
- neutron_all
- neutron_all
neutron_server:
belongs_to:
- neutron_all
- neutron_all
nova_api_ec2:
belongs_to:
- nova_all
- nova_all
nova_api_metadata:
belongs_to:
- nova_all
- nova_all
nova_api_os_compute:
belongs_to:
- nova_all
- nova_all
nova_cert:
belongs_to:
- nova_all
- nova_all
nova_compute:
belongs_to:
- nova_all
- nova_all
nova_conductor:
belongs_to:
- nova_all
- nova_all
nova_scheduler:
belongs_to:
- nova_all
- nova_all
nova_spice_console:
belongs_to:
- nova_all
rabbit:
- nova_all
pkg_repo:
belongs_to:
- rabbit_all
- repo_all
rabbitmq:
belongs_to:
- rabbitmq_all
rsyslog:
belongs_to:
- rsyslog_all
utility:
belongs_to:
- utility_all
- rsyslog_all
swift_proxy:
belongs_to:
- swift_all
- swift_all
swift_acc:
belongs_to:
- swift_all
- swift_all
swift_obj:
belongs_to:
- swift_all
- swift_all
swift_cont:
belongs_to:
- swift_all
- swift_all
utility:
belongs_to:
- utility_all
container_skel:
cinder_api_container:
belongs_to:
- infra_containers
- infra_containers
- storage-infra_containers
contains:
- cinder_api
- cinder_api
properties:
service_name: cinder
container_release: trusty
cinder_volumes_container:
belongs_to:
- storage_containers
- storage_containers
contains:
- cinder_scheduler
- cinder_volume
- cinder_scheduler
- cinder_volume
properties:
service_name: cinder
container_release: trusty
elasticsearch_container:
belongs_to:
- log_containers
- log_containers
contains:
- elasticsearch
- elasticsearch
properties:
service_name: elasticsearch
container_release: trusty
galera_container:
belongs_to:
- infra_containers
- infra_containers
- shared-infra_containers
contains:
- galera
- galera
properties:
service_name: galera
container_release: trusty
glance_container:
belongs_to:
- infra_containers
- infra_containers
- os-infra_containers
contains:
- glance_api
- glance_registry
- glance_api
- glance_registry
properties:
service_name: glance
container_release: trusty
heat_apis_container:
belongs_to:
- infra_containers
- infra_containers
- os-infra_containers
contains:
- heat_api_cloudwatch
- heat_api_cfn
- heat_api
- heat_api_cloudwatch
- heat_api_cfn
- heat_api
properties:
service_name: heat
container_release: trusty
heat_engine_container:
belongs_to:
- infra_containers
- infra_containers
- os-infra_containers
contains:
- heat_engine
- heat_engine
properties:
service_name: heat
container_release: trusty
horizon_container:
belongs_to:
- infra_containers
- infra_containers
- os-infra_containers
contains:
- horizon
- horizon
properties:
service_name: horizon
container_release: trusty
keystone_container:
belongs_to:
- infra_containers
- infra_containers
- identity_containers
contains:
- keystone
- keystone
properties:
service_name: keystone
container_release: trusty
kibana_container:
belongs_to:
- log_containers
- log_containers
contains:
- kibana
- kibana
properties:
service_name: kibana
container_release: trusty
logstash_container:
belongs_to:
- log_containers
- log_containers
contains:
- logstash
- logstash
properties:
service_name: logstash
container_release: trusty
memcached_container:
belongs_to:
- infra_containers
- infra_containers
- shared-infra_containers
contains:
- memcached
- memcached
properties:
service_name: memcached
container_release: trusty
neutron_agents_container:
belongs_to:
- network_containers
- network_containers
contains:
- neutron_agent
- neutron_metadata_agent
- neutron_metering_agent
- neutron_linuxbridge_agent
- neutron_l3_agent
- neutron_dhcp_agent
- neutron_agent
- neutron_metadata_agent
- neutron_metering_agent
- neutron_linuxbridge_agent
- neutron_l3_agent
- neutron_dhcp_agent
properties:
service_name: neutron
container_release: trusty
neutron_server_container:
belongs_to:
- network_containers
- network_containers
contains:
- neutron_server
- neutron_server
properties:
service_name: neutron
container_release: trusty
nova_api_ec2_container:
belongs_to:
- infra_containers
- infra_containers
- os-infra_containers
contains:
- nova_api_ec2
- nova_api_ec2
properties:
service_name: nova
container_release: trusty
nova_api_metadata_container:
belongs_to:
- infra_containers
- infra_containers
- os-infra_containers
contains:
- nova_api_metadata
- nova_api_metadata
properties:
service_name: nova
container_release: trusty
nova_api_os_compute_container:
belongs_to:
- infra_containers
- infra_containers
- os-infra_containers
contains:
- nova_api_os_compute
- nova_api_os_compute
properties:
service_name: nova
container_release: trusty
nova_cert_container:
belongs_to:
- infra_containers
- infra_containers
- os-infra_containers
contains:
- nova_cert
- nova_cert
properties:
service_name: nova
container_release: trusty
nova_compute_container:
is_metal: true
belongs_to:
- compute_containers
- compute_containers
contains:
- neutron_linuxbridge_agent
- nova_compute
- neutron_linuxbridge_agent
- nova_compute
properties:
is_metal: true
service_name: nova
container_release: trusty
nova_conductor_container:
belongs_to:
- infra_containers
- infra_containers
- os-infra_containers
contains:
- nova_conductor
- nova_conductor
properties:
service_name: nova
container_release: trusty
nova_scheduler_container:
belongs_to:
- infra_containers
- infra_containers
- os-infra_containers
contains:
- nova_scheduler
- nova_scheduler
properties:
service_name: nova
container_release: trusty
nova_spice_console_container:
belongs_to:
- infra_containers
- infra_containers
- os-infra_containers
contains:
- nova_spice_console
- nova_spice_console
properties:
service_name: nova
container_release: trusty
rabbit_mq_container:
belongs_to:
- infra_containers
- infra_containers
- shared-infra_containers
contains:
- rabbit
- rabbitmq
properties:
service_name: rabbitmq
container_release: trusty
repo_container:
belongs_to:
- repo-infra_containers
contains:
- pkg_repo
properties:
service_name: repo
container_release: trusty
rsyslog_container:
belongs_to:
- infra_containers
- compute_containers
- storage_containers
- log_containers
- network_containers
- infra_containers
- os-infra_containers
- shared-infra_containers
- identity_containers
- compute_containers
- storage_containers
- log_containers
- network_containers
- repo-infra_containers
contains:
- rsyslog
utility_container:
belongs_to:
- infra_containers
contains:
- utility
- rsyslog
properties:
service_name: rsyslog
container_release: trusty
swift_proxy_container:
belongs_to:
- swift-proxy_containers
- swift-proxy_containers
contains:
- swift_proxy
- swift_proxy
properties:
service_name: swift
container_release: trusty
swift_acc_container:
is_metal: true
belongs_to:
- swift_containers
belongs_to:
- swift_containers
contains:
- swift_acc
- swift_acc
properties:
is_metal: true
service_name: swift
container_release: trusty
swift_obj_container:
is_metal: true
belongs_to:
- swift_containers
- swift_containers
contains:
- swift_obj
- swift_obj
properties:
is_metal: true
service_name: swift
container_release: trusty
swift_cont_container:
is_metal: true
belongs_to:
- swift_containers
- swift_containers
contains:
- swift_cont
- swift_cont
properties:
is_metal: true
service_name: swift
container_release: trusty
utility_container:
belongs_to:
- infra_containers
- shared-infra_containers
contains:
- utility
properties:
service_name: utility
container_release: trusty
physical_skel:
network_containers:
belongs_to:
- all_containers
network_hosts:
belongs_to:
- hosts
compute_containers:
belongs_to:
- all_containers
- all_containers
compute_hosts:
belongs_to:
- hosts
- hosts
infra_containers:
belongs_to:
- all_containers
- all_containers
infra_hosts:
belongs_to:
- hosts
- hosts
identity_containers:
belongs_to:
- all_containers
identity_hosts:
belongs_to:
- hosts
log_containers:
belongs_to:
- all_containers
- all_containers
log_hosts:
belongs_to:
- hosts
- hosts
network_containers:
belongs_to:
- all_containers
network_hosts:
belongs_to:
- hosts
os-infra_containers:
belongs_to:
- all_containers
os-infra_hosts:
belongs_to:
- hosts
repo-infra_hosts:
belongs_to:
- hosts
repo-infra_containers:
belongs_to:
- all_containers
shared-infra_containers:
belongs_to:
- all_containers
shared-infra_hosts:
belongs_to:
- hosts
storage-infra_containers:
belongs_to:
- all_containers
storage-infra_hosts:
belongs_to:
- hosts
storage_containers:
belongs_to:
- all_containers
- all_containers
storage_hosts:
belongs_to:
- hosts
- hosts
swift_containers:
belongs_to:
- all_containers
- all_containers
swift_hosts:
belongs_to:
- hosts
- hosts
swift-proxy_containers:
belongs_to:
- all_containers
- all_containers
swift-proxy_hosts:
belongs_to:
- hosts
- hosts

View File

@ -1,13 +1,16 @@
---
environment_version: 3511a43b8e4cc39af4beaaa852b5f917
environment_version: 58339ffafde4614abb7021482cc6604b
cidr_networks:
container: 172.29.236.0/22
tunnel: 172.29.240.0/22
storage: 172.29.244.0/22
used_ips:
- 172.29.236.1,172.29.236.50
- 172.29.244.1,172.29.244.50
- "172.29.236.1,172.29.236.50"
- "172.29.240.1,172.29.240.50"
- "172.29.244.1,172.29.244.50"
- "172.29.248.1,172.29.248.50"
global_overrides:
internal_lb_vip_address: 172.29.236.100
@ -17,30 +20,37 @@ global_overrides:
provider_networks:
- network:
container_bridge: "br-mgmt"
container_type: "veth"
container_interface: "eth1"
type: "raw"
ip_from_q: "container"
type: "raw"
group_binds:
- all_containers
- hosts
is_container_address: true
is_ssh_address: true
- network:
container_bridge: "br-vxlan"
container_type: "veth"
container_interface: "eth10"
type: "vxlan"
ip_from_q: "tunnel"
type: "vxlan"
range: "1:1000"
net_name: "vxlan"
group_binds:
- neutron_linuxbridge_agent
- network:
container_bridge: "br-vlan"
container_interface: "eth11"
container_type: "veth"
container_interface: "eth12"
host_bind_override: "eth12"
type: "flat"
net_name: "flat"
group_binds:
- neutron_linuxbridge_agent
- network:
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth11"
type: "vlan"
range: "1:1"
@ -49,18 +59,48 @@ global_overrides:
- neutron_linuxbridge_agent
- network:
container_bridge: "br-storage"
container_type: "veth"
container_interface: "eth2"
type: "raw"
ip_from_q: "storage"
type: "raw"
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
# - swift_proxy
- swift_proxy
infra_hosts:
shared-infra_hosts:
aio1:
# RabbitMQ and Galera are set to multiples to test clustering.
affinity:
galera_container: 3
rabbit_mq_container: 3
ip: 172.29.236.100
os-infra_hosts:
aio1:
# Horizon is set to multiple to test clustering. This test only requires x2.
affinity:
horizon_container: 2
ip: 172.29.236.100
storage-infra_hosts:
aio1:
ip: 172.29.236.100
repo-infra_hosts:
aio1:
# Repo is set to multiple to test clustering. This test only requires x2.
affinity:
repo_container: 2
ip: 172.29.236.100
identity_hosts:
aio1:
# Keystone is set to multiple to test clustering. This test only requires x2.
affinity:
keystone_container: 2
ip: 172.29.236.100
compute_hosts:

View File

@ -15,14 +15,14 @@
# This is the md5 of the environment file
# this will ensure consistency when deploying.
environment_version: 5e7155d022462c5a82384c1b2ed8b946
environment_version: 35946eced47eb8461f1eea62fa01bcf0
# User defined container networks in CIDR notation. The inventory generator
# assigns IP addresses to network interfaces inside containers from these
# ranges.
cidr_networks:
# Management (same range as br-mgmt on the target hosts)
container: 172.29.236.0/22
management: 172.29.236.0/22
# Service (optional, same range as br-snet on the target hosts)
snet: 172.29.248.0/22
# Tunnel endpoints for VXLAN tenant networks
@ -31,22 +31,22 @@ cidr_networks:
# Storage (same range as br-storage on the target hosts)
storage: 172.29.244.0/22
# User defined list of consumed IP addresses that may intersect
# with the provided CIDR.
# User defined list of consumed IP addresses that may intersect
# with the provided CIDR. If you want to use a range, split the
# desired range with the lower and upper IP address in the range
# using a comma, e.g. "10.0.0.1,10.0.0.100".
used_ips:
- 172.29.236.1,172.29.236.50
- 10.240.0.1,10.240.0.50
- 172.29.244.1,172.29.244.50
# As a user you can define anything that you may wish to "globally"
# override from within the openstack_deploy configuration file. Anything
# override from within the openstack_deploy configuration file. Anything
# specified here will take precedence over anything else anywhere.
global_overrides:
# Internal Management vip address
internal_lb_vip_address: 172.29.236.10
internal_lb_vip_address: 10.240.0.1
# External DMZ VIP address
external_lb_vip_address: 192.168.1.1
# Name of load balancer
lb_name: lb_name_in_core
# Bridged interface to use with tunnel type networks
tunnel_bridge: "br-vxlan"
# Bridged interface to build containers with
@ -69,17 +69,20 @@ global_overrides:
type: "raw"
container_bridge: "br-mgmt"
container_interface: "eth1"
ip_from_q: "container"
container_type: "veth"
ip_from_q: "management"
is_container_address: true
is_ssh_address: true
- network:
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute
# If you are using the storage network for swift_proxy add it to the group_binds
# - swift_proxy
# - swift_proxy ## If you are using the storage network for swift_proxy add it to the group_binds
type: "raw"
container_bridge: "br-storage"
container_type: "veth"
container_interface: "eth2"
ip_from_q: "storage"
- network:
@ -89,12 +92,14 @@ global_overrides:
- neutron_linuxbridge_agent
type: "raw"
container_bridge: "br-snet"
container_type: "veth"
container_interface: "eth3"
ip_from_q: "snet"
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vxlan"
container_type: "veth"
container_interface: "eth10"
ip_from_q: "tunnel"
type: "vxlan"
@ -104,30 +109,50 @@ global_overrides:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth11"
type: "flat"
type: "vlan"
range: "1:1"
net_name: "vlan"
- network:
group_binds:
- neutron_linuxbridge_agent
container_bridge: "br-vlan"
container_interface: "eth11"
type: "vlan"
range: "1:1"
net_name: "vlan"
# Other options you may want
debug: True
### Cinder default volume type option
# # This can be set to use a specific volume type. This is
# # an optional variable because you may have different volume
# # types on different hosts named different things. For this
# # Reason if you choose to set this variable please set it
# # to the name of one of your setup volume types
# cinder_default_volume_type: lvm
### Cinder default volume type option
container_type: "veth"
container_interface: "eth12"
host_bind_override: "eth12"
type: "flat"
net_name: "flat"
# User defined Infrastructure Hosts, this should be a required group
infra_hosts:
# Shared infrastructure parts
shared-infra_hosts:
infra1:
ip: 10.240.0.100
infra2:
ip: 10.240.0.101
infra3:
ip: 10.240.0.102
# OpenStack Compute infrastructure parts
os-infra_hosts:
infra1:
ip: 10.240.0.100
infra2:
ip: 10.240.0.101
infra3:
ip: 10.240.0.102
# OpenStack Storage infrastructure parts
storage-infra_hosts:
infra1:
ip: 10.240.0.100
infra2:
ip: 10.240.0.101
infra3:
ip: 10.240.0.102
# Keystone Identity infrastructure parts
identity_hosts:
infra1:
ip: 10.240.0.100
infra2:
@ -139,32 +164,43 @@ infra_hosts:
compute_hosts:
compute1:
ip: 10.240.0.103
host_vars:
host_networks:
- { type: raw, device_name: eth0, bond_master: bond0, bond_primary: true }
- { type: raw, device_name: eth4, bond_master: bond0, bond_primary: false }
- { type: vlan_tagged, device_name: bond0, tagged_device_name: bond0.2176 }
- { type: vlan_tagged, device_name: bond0, tagged_device_name: bond1.1998 }
- { type: bonded, device_name: bond0 }
- { type: bridged, device_name: br-mgmt, bridge_ports: ["bond0.2176"], address: "172.29.236.103", netmask: "255.255.255.0", gateway: "172.29.236.1", dns_nameservers: ["69.20.0.164", "69.20.0.196"] }
- { type: bridged, device_name: br-vxlan, bridge_ports: ["bond1.1998"], address: "172.29.240.103", netmask: "255.255.255.0" }
- { type: bridged, device_name: br-vlan, bridge_ports: ["bond1"] }
# User defined Storage Hosts, this should be a required group
storage_hosts:
cinder1:
ip: 172.29.236.104
ip: 10.240.0.104
# "container_vars" can be set outside of all other options as
# host specific optional variables.
container_vars:
# If you would like to define a cinder availability zone this can
# be done with the name spaced variable.
cinder_storage_availability_zone: cinderAZ_1
# When creating more than ONE availability zone you should define a
# sane default for the system to use when scheduling volume creation.
cinder_default_availability_zone: cinderAZ_1
# In this example we are defining what cinder volumes are
# on a given host.
cinder_backends:
# if the "limit_container_types" argument is set, within
# the top level key of the provided option the inventory
# process will perform a string match on the container name with
# the value found within the "limit_container_types" argument.
# If any part of the string is found within the container
# name, the options are appended as host_vars inside of inventory.
limit_container_types: cinder_volume
lvm:
volume_group: cinder-volumes
volume_driver: cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name: LVM_iSCSI
# The ``cinder_nfs_client`` value is an optional component available
# when configuring cinder.
cinder_nfs_client:
nfs_shares_config: /etc/cinder/nfs_shares
shares:
- { ip: "{{ ip_nfs_server }}", share: "/vol/cinder" }
cinder2:
ip: 172.29.236.105
ip: 10.240.0.105
container_vars:
cinder_storage_availability_zone: cinderAZ_2
cinder_default_availability_zone: cinderAZ_1
@ -174,26 +210,6 @@ storage_hosts:
volume_group: cinder-volumes
volume_driver: cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name: LVM_iSCSI_SSD
cinder3:
ip: 10.240.0.106
container_vars:
cinder_storage_availability_zone: cinderAZ_3
cinder_default_availability_zone: cinderAZ_1
cinder_backends:
limit_container_types: cinder_volume
netapp:
netapp_storage_family: ontap_7mode
netapp_storage_protocol: iscsi
netapp_server_hostname: "{{ cinder_netapp_hostname }}"
netapp_server_port: 80
netapp_login: "{{ cinder_netapp_username }}"
netapp_password: "{{ cinder_netapp_password }}"
volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name: NETAPP_iSCSI
nfs_client:
nfs_shares_config: /etc/cinder/nfs_shares
shares:
- { ip: "{{ cinder_netapp_hostname }}", share: "/vol/cinder" }
# User defined Logging Hosts, this should be a required group
log_hosts:
@ -204,21 +220,13 @@ log_hosts:
network_hosts:
network1:
ip: 10.240.0.108
host_vars:
host_networks:
- { type: raw, device_name: eth0, bond_master: bond0, bond_primary: true }
- { type: raw, device_name: eth4, bond_master: bond0, bond_primary: false }
- { type: vlan_tagged, device_name: bond0, tagged_device_name: bond0.2176 }
- { type: vlan_tagged, device_name: bond0, tagged_device_name: bond1.1998 }
- { type: bonded, device_name: bond0 }
- { type: bridged, device_name: br-mgmt, bridge_ports: ["bond0.2176"], address: "172.29.236.108", netmask: "255.255.255.0", gateway: "172.29.236.1", dns_nameservers: ["69.20.0.164", "69.20.0.196"] }
- { type: bridged, device_name: br-vxlan, bridge_ports: ["bond1.1998"], address: "172.29.240.108", netmask: "255.255.255.0" }
- { type: bridged, device_name: br-vlan, bridge_ports: ["bond1"] }
# Other hosts can be added whenever needed. Note that containers will not be
# assigned to "other" hosts by default. If you would like to have containers
# assigned to hosts that are outside of the predefined groups, you will need to
# make an edit to the openstack_environment.yml file.
# haproxy_hosts:
# haproxy1:
# ip: 10.0.0.12
# User defined Repository Hosts, this is an optional group
repo_hosts:
infra1:
ip: 10.240.0.100
infra2:
ip: 10.240.0.101
infra3:
ip: 10.240.0.102

View File

@ -0,0 +1,82 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
## Rabbitmq Options
rabbitmq_password:
rabbitmq_cookie_token:
## Tokens
memcached_encryption_key:
## Container default user
container_openstack_password:
## Galera Options
galera_root_password:
## Keystone Options
keystone_container_mysql_password:
keystone_auth_admin_token:
keystone_auth_admin_password:
keystone_service_password:
## Cinder Options
cinder_container_mysql_password:
cinder_service_password:
cinder_v2_service_password:
## Glance Options
glance_container_mysql_password:
glance_service_password:
### Extra options when configuring swift as a glance back-end.
glance_swift_store_auth_address: "https://some.auth.url.com"
glance_swift_store_user: "OPENSTACK_TENANT_ID:OPENSTACK_USER_NAME"
glance_swift_store_key: "OPENSTACK_USER_PASSWORD"
glance_swift_store_container: "NAME_OF_SWIFT_CONTAINER"
glance_swift_store_region: "NAME_OF_REGION"
## Heat Options
heat_stack_domain_admin_password:
heat_container_mysql_password:
### THE HEAT AUTH KEY NEEDS TO BE 32 CHARACTERS LONG ##
heat_auth_encryption_key:
### THE HEAT AUTH KEY NEEDS TO BE 32 CHARACTERS LONG ##
heat_service_password:
heat_cfn_service_password:
## Horizon Options
horizon_container_mysql_password:
horizon_secret_key:
## Neutron Options
neutron_container_mysql_password:
neutron_service_password:
## Nova Options
nova_container_mysql_password:
nova_metadata_proxy_secret:
nova_ec2_service_password:
nova_service_password:
nova_v3_service_password:
nova_s3_service_password:
## Kibana Options
kibana_password:
## Swift Options:
swift_service_password:
swift_container_mysql_password:
swift_dispersion_password:

View File

@ -13,110 +13,31 @@
# See the License for the specific language governing permissions and
# limitations under the License.
## Rabbit Options
rabbitmq_password:
rabbitmq_cookie_token:
## Tokens
memcached_encryption_key:
## Container default user
container_openstack_password:
## Galera Options
mysql_root_password:
# Defined in group_vars/galera, but can be overridden here.
# galera_wait_timeout: 3600
## Keystone Options
keystone_container_mysql_password:
keystone_auth_admin_token:
keystone_auth_admin_password:
keystone_service_password:
## Cinder Options
cinder_container_mysql_password:
cinder_service_password:
cinder_v2_service_password:
## Glance Options
# Set default_store to "swift" if using Cloud Files or swift backend
glance_default_store: file
glance_container_mysql_password:
glance_service_password:
#glance_swift_store_auth_address:
#glance_swift_store_user:
#glance_swift_store_key:
#glance_swift_store_container: SomeContainerName
#glance_swift_store_region: SomeRegion
glance_notification_driver: noop
# `internalURL` will cause glance to speak to swift via ServiceNet, use
# `publicURL` to communicate with swift over the public network
glance_swift_store_endpoint_type: internalURL
glance_notification_driver: noop
# Set glance cache size in bytes, should be less than container size. Defaults to 10GiB
#glance_image_cache_max_size: 4294967296
## Heat Options
heat_stack_domain_admin_password:
heat_container_mysql_password:
### THE HEAT AUTH KEY NEEDS TO BE 32 CHARACTERS LONG ##
heat_auth_encryption_key:
### THE HEAT AUTH KEY NEEDS TO BE 32 CHARACTERS LONG ##
heat_service_password:
heat_cfn_service_password:
## Horizon Options
horizon_container_mysql_password:
horizon_secret_key:
## Nova
# Uncomment "nova_console_endpoint" to define a specific nova console URI or
# IP address this will construct the specific proxy endpoint for the console.
# nova_console_endpoint: console.company_domain.name
## Neutron Options
neutron_container_mysql_password:
neutron_service_password:
## Nova Options
# This defaults to KVM, if you are deploying on a host that is not KVM capable
# change this to your hypervisor type, e.g. "qemu" or "lxc".
# nova_virt_type: kvm
# nova_cpu_allocation_ratio: 2.0
# nova_ram_allocation_ratio: 1.0
nova_container_mysql_password:
nova_metadata_proxy_secret:
nova_ec2_service_password:
nova_service_password:
nova_v3_service_password:
nova_s3_service_password:
# Uncomment "nova_console_endpoint" to define a specific nova console URI or
# IP address; this will construct the specific proxy endpoint for the console.
# nova_console_endpoint: console.company_domain.name
## Kibana Options
kibana_password:
# Swift Options:
swift_service_password:
swift_container_mysql_password:
## Swift
# Once the swift cluster has been setup DO NOT change these hash values!
swift_hash_path_suffix:
swift_hash_path_prefix:
# This will allow all users to create containers and upload to swift if set to True
swift_allow_all_users: False
# The dispersion user is for swift-dispersion-report
swift_dispersion_user: dispersion
swift_dispersion_password:
# This variable is used to set haproxy's timeout client and timeout server
# values, they are set in the main config file and are only used by services
# that don't set their own values (default: 90s)
#haproxy_timeout: 90s

View File

@ -1,13 +1,19 @@
[defaults]
# Additional plugins
lookup_plugins = plugins/lookups
gathering = smart
hostfile = inventory
host_key_checking = False
# Setting forks should be based on your system. The ansible defaults to 5,
# the ansible-rpc-lxc assumes that you have a system that can support
# openstack, thus it has been conservitivly been set to 15
# Setting forks should be based on your system. Ansible defaults to 5;
# os-lxc-hosts assumes that you have a system that can support
# OpenStack, thus forks has been conservatively set to 15.
forks = 15
# Set color options
nocolor = 0
# SSH timeout
timeout = 120

@ -1,39 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Example usage:
# ansible-playbook -i inventory/dynamic_inventory.py -e "host_group=infra1,container_name=horizon_container" archive-container.yml
# This will create a new archive of an existing container and then retrieve
# the archive, storing it on the local system. Once the archive has been
# retrieved, it is removed from the source system.
- hosts: "{{ host_group|default('hosts') }}"
user: root
tasks:
# Set facts on containers
- name: Get info on a given container
lxc:
command: "info"
name: "{{ container_name }}"
- name: Print information on all containers
debug: var=lxc_facts
- hosts: "{{ host_group|default('hosts') }}"
user: root
roles:
- container_archive
vars:
local_store_path: /tmp
remote_store_path: /tmp

@ -1,50 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: cinder_api[0]
user: root
roles:
- keystone_add_service
vars_files:
- vars/openstack_service_vars/cinder_api_endpoint.yml
- hosts: cinder_api[0]
user: root
roles:
- keystone_add_service
vars_files:
- vars/openstack_service_vars/cinder_apiv2_endpoint.yml
- hosts: cinder_api[0]
user: root
roles:
- cinder_common
- galera_db_setup
- cinder_setup
- init_script
vars_files:
- vars/openstack_service_vars/cinder_api.yml
handlers:
- include: handlers/services.yml
- hosts: cinder_api:!cinder_api[0]
user: root
roles:
- cinder_common
- init_script
vars_files:
- vars/openstack_service_vars/cinder_api.yml
handlers:
- include: handlers/services.yml

@ -1,32 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: cinder_volume
user: root
roles:
- container_common
- container_extra_setup
- cinder_common
- cinder_volume
- cinder_device_add
- cinder_backend_types
- nfs_client
- init_script
vars_files:
- vars/config_vars/container_config_cinder_volume.yml
- vars/openstack_service_vars/cinder_volume.yml
- vars/repo_packages/cinder.yml
handlers:
- include: handlers/services.yml

@ -1,162 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Example usage:
# ansible-playbook -i inventory/hosts -M library/lxc -e "group=infra1-keystone name=keystone address=192.168.18.120 archive_name=keystone.tar.bz2" deploy-archived-container.yml
# This will create a new container from an archive of an existing container.
- hosts: "{{ host_group|default('hosts') }}"
user: root
tasks:
# Create container directory
- name: Create container directory
file:
path: "{{ lxcpath }}/{{ name }}"
state: "directory"
group: "root"
owner: "root"
recurse: "true"
# Check for the lxc VG
- name: Check for lxc volume group
shell: "(which vgs > /dev/null && vgs | grep -o {{ vg_name }}) || false"
register: vg_result
ignore_errors: True
# If lxc vg create new lv
- name: Create new LV
lvol:
vg: "{{ vg_name }}"
lv: "{{ name }}"
size: "{{ lv_size }}"
when: vg_result.rc == 0
# If lxc vg format new lv
- name: Format the new LV
filesystem:
fstype: "{{ fstype }}"
dev: "/dev/{{ vg_name }}/{{ name }}"
when: vg_result.rc == 0
# If lxc vg mount new lv at $container/rootfs
- name: Mount Container LV
mount:
name: "{{ lxcpath }}/{{ name }}/rootfs"
src: "/dev/{{ vg_name }}/{{ name }}"
fstype: "{{ fstype }}"
state: "mounted"
when: vg_result.rc == 0
# upload new archive to host
- name: Upload Archive to host
synchronize:
src: "{{ local_store_path }}/{{ archive_name }}"
dest: "{{ remote_store_path }}/{{ archive_name }}"
archive: "yes"
mode: "push"
# Unarchive container
- name: Unarchive a container
unarchive:
src: "{{ remote_store_path }}/{{ archive_name }}"
dest: "{{ lxcpath }}/{{ name }}"
register: result
# If lxc vg unmount new lv
- name: Unmount Container LV
mount:
name: "{{ lxcpath }}/{{ name }}/rootfs"
src: "/dev/{{ vg_name }}/{{ name }}"
fstype: "{{ fstype }}"
state: "unmounted"
when: vg_result.rc == 0
# Delete archive directory
- name: Cleanup archive
file:
path: "{{ remote_store_path }}/{{ archive_name }}"
state: "absent"
when: result | changed
# Ensure config is without old cruft
- name: Ensure clean config
lineinfile:
dest: "{{ lxcpath }}/{{ name }}/config"
regexp: "{{ item.regexp }}"
state: "absent"
backup: "yes"
with_items:
- { regexp: "^lxc.network.hwaddr" }
- { regexp: "^lxc.mount.entry" }
# If not lxc vg set the rootfs
- name: Set rootfs to localfs
lineinfile:
dest: "{{ lxcpath }}/{{ name }}/config"
regexp: "^lxc.rootfs"
line: "lxc.rootfs = {{ lxcpath }}/{{ name }}/rootfs"
state: "present"
when: vg_result.rc != 0
# If lxc vg set the rootfs
- name: Set rootfs to lvm
lineinfile:
dest: "{{ lxcpath }}/{{ name }}/config"
regexp: "^lxc.rootfs"
line: "lxc.rootfs = /dev/{{ vg_name }}/{{ name }}"
state: "present"
when: vg_result.rc == 0
# Ensure the configuration is complete
- name: Ensure config updated
lineinfile:
dest: "{{ lxcpath }}/{{ name }}/config"
regexp: "^lxc.utsname"
line: "lxc.utsname = {{ name }}"
state: "present"
# Ensure the mount point is correct
- name: Ensure mount point updated
lineinfile:
dest: "{{ lxcpath }}/{{ name }}/config"
regexp: "^lxc.mount"
line: "lxc.mount = /var/lib/lxc/{{ name }}/fstab"
state: "present"
# Start the new container
- name: Start new Container
lxc:
command: "start"
name: "{{ name }}"
# If address is set update it in the network script
- name: Update networking
lxc:
command: "attach"
name: "{{ name }}"
container_command: "sed -i 's/address.*/address {{ address }}/g' /etc/network/interfaces"
when: address is defined
# Restart the new container
- name: Restart new container
lxc:
command: "restart"
name: "{{ name }}"
vars:
local_store_path: /tmp
remote_store_path: /tmp
lv_size: 5g
vg_name: lxc
fstype: ext4
lxcpath: /var/lib/lxc

@ -1,21 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This playbook destroys all known containers.
- hosts: "{{ host_group|default('all_containers') }}"
user: root
gather_facts: false
roles:
- container_destroy

@ -1,21 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Setup supporting services
- hosts: utility_all
user: root
roles:
- logging_common
- utility_logging

@ -1,21 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Restart each daemon in turn
- hosts: galera:!galera[0]
user: root
serial: 1
roles:
- galera_restart

@ -1,19 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: galera[0]
user: root
roles:
- galera_bootstrap

@ -1,28 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: galera
user: root
roles:
- container_extra_setup
- common
- common_sudoers
- container_common
- galera_common
- galera_client_cnf
- galera_config
vars_files:
- vars/repo_packages/galera.yml
- vars/config_vars/container_config_galera.yml

@ -13,6 +13,39 @@
# See the License for the specific language governing permissions and
# limitations under the License.
- include: galera-config.yml
- include: galera-startup.yml
- include: galera-post-config.yml
- name: Install galera server
hosts: galera_all
max_fail_percentage: 20
user: root
pre_tasks:
- name: Galera extra lxc config
lxc-container:
name: "{{ container_name }}"
container_command: |
[[ ! -d "/var/lib/mysql" ]] && mkdir -p "/var/lib/mysql"
container_config:
- "lxc.mount.entry=/openstack/{{ container_name }} var/lib/mysql none bind 0 0"
delegate_to: "{{ physical_host }}"
when: is_metal == false or is_metal == "False"
tags:
- galera-mysql-dir
- name: Flush net cache
command: /usr/local/bin/lxc-system-manage flush-net-cache
delegate_to: "{{ physical_host }}"
tags:
- flush-net-cache
- name: Wait for container ssh
wait_for:
port: "22"
delay: 5
host: "{{ ansible_ssh_host }}"
delegate_to: "{{ physical_host }}"
tags:
- galera-ssh-wait
roles:
- { role: "galera_server", tags: [ "galera-server" ] }
vars:
galera_wsrep_node_name: "{{ container_name }}"
ansible_hostname: "{{ container_name }}"
ansible_ssh_host: "{{ container_address }}"
is_metal: "{{ properties.is_metal|default(false) }}"

@ -1,19 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: galera
user: root
roles:
- galera_remove

@ -1,17 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: galera-bootstrap.yml
- include: galera-add-node.yml

@ -1,19 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: galera
user: root
roles:
- galera_stop

@ -1,18 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: glance-common.yml
- include: glance-api.yml
- include: glance-registry.yml

@ -1,46 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: glance_api[0]
user: root
roles:
- keystone_add_service
vars_files:
- vars/openstack_service_vars/glance_api_endpoint.yml
- hosts: glance_api[0]
user: root
roles:
- glance_common
- galera_db_setup
- glance_setup
- init_script
- glance_cache_crons
vars_files:
- vars/openstack_service_vars/glance_api.yml
handlers:
- include: handlers/services.yml
- hosts: glance_api:!glance_api[0]
user: root
roles:
- glance_common
- init_script
- glance_cache_crons
vars_files:
- vars/openstack_service_vars/glance_api.yml
handlers:
- include: handlers/services.yml

@ -1,27 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: glance_all
user: root
roles:
- common
- common_sudoers
- container_common
- openstack_common
- openstack_openrc
- galera_client_cnf
vars_files:
- vars/repo_packages/glance.yml
- vars/openstack_service_vars/glance_api.yml

@ -1,25 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This playbook deploys Glance-Registry.
- hosts: glance_registry
user: root
roles:
- glance_common
- init_script
vars_files:
- vars/openstack_service_vars/glance_registry.yml
handlers:
- include: handlers/services.yml

@ -1,24 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Restart os service
service: name={{ item }} state=restarted pattern={{ item }}
register: service_restart
failed_when: "'msg' in service_restart and 'FAIL' in service_restart.msg|upper"
with_items: service_names
notify: Ensure os service running
- name: Ensure os service running
service: name={{ program_name }} state=started pattern={{ program_name }}

@ -1,31 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Restart swift service
service: name={{ item }} state=restarted pattern={{ item }}
register: service_restart
with_items: program_names
notify: Fail if swift restart fails
- name: Fail if swift restart fails
fail:
msg: 'Service {{ item.cmd }} Failed'
when: "'msg' in item and 'FAIL' in item.msg|upper"
with_items: service_restart.results
notify: Ensure swift service running
- name: Ensure swift service running
service: name={{ item }} state=started pattern={{ item }}
with_items: service_names

@ -13,12 +13,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Setup supporting services
- hosts: haproxy_hosts
- name: Install haproxy
hosts: haproxy_hosts
max_fail_percentage: 20
user: root
roles:
- common
- haproxy_common
- haproxy_service
- { role: "haproxy_server", tags: [ "haproxy-server" ] }
vars_files:
- vars/config_vars/haproxy_config.yml
- vars/configs/haproxy_config.yml
vars:
is_metal: "{{ properties.is_metal|default(false) }}"

@ -1,20 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: heat-common.yml
- include: heat-api.yml
- include: heat-api-cfn.yml
- include: heat-api-cloudwatch.yml
- include: heat-engine.yml

@ -1,24 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: heat_api_cloudwatch
user: root
roles:
- heat_common
- init_script
vars_files:
- vars/openstack_service_vars/heat_api_cloudwatch.yml
handlers:
- include: handlers/services.yml

@ -1,39 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: heat_api[0]
user: root
roles:
- keystone_add_service
- heat_domain_user
- heat_common
- galera_db_setup
- heat_setup
- init_script
vars_files:
- vars/openstack_service_vars/heat_api.yml
- vars/openstack_service_vars/heat_api_endpoint.yml
handlers:
- include: handlers/services.yml
- hosts: heat_api:!heat_api[0]
user: root
roles:
- heat_common
- init_script
vars_files:
- vars/openstack_service_vars/heat_api.yml
handlers:
- include: handlers/services.yml

@ -1,27 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: heat_all
user: root
roles:
- common
- common_sudoers
- container_common
- openstack_common
- openstack_openrc
- galera_client_cnf
vars_files:
- vars/openstack_service_vars/heat_api.yml
- vars/repo_packages/heat.yml

@ -1,24 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: heat_engine
user: root
roles:
- heat_common
- init_script
vars_files:
- vars/openstack_service_vars/heat_engine.yml
handlers:
- include: handlers/services.yml

@ -1,18 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: horizon-common.yml
- include: horizon-ssl.yml
- include: horizon.yml

@ -1,25 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: horizon_all
user: root
roles:
- common
- container_common
- openstack_common
- openstack_openrc
- galera_client_cnf
vars_files:
- vars/repo_packages/horizon.yml

@ -1,53 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: horizon_all[0]
user: root
roles:
- horizon_ssl
vars_files:
- vars/repo_packages/horizon.yml
- hosts: horizon_all[0]
user: root
gather_facts: false
tasks:
- name: Distribute apache keys for cluster consumption
memcached:
name: "{{ item.name }}"
file_path: "{{ item.src }}"
state: "present"
server: "{{ hostvars[groups['memcached'][0]]['ansible_ssh_host'] }}:11211"
encrypt_string: "{{ memcached_encryption_key }}"
with_items:
- { src: "/etc/ssl/private/apache.key", name: "apache_key" }
- { src: "/etc/ssl/certs/apache.cert", name: "apache_cert" }
- hosts: horizon_all:!horizon_all[0]
user: root
gather_facts: false
tasks:
- name: Retrieve apache keys
memcached:
name: "{{ item.name }}"
file_path: "{{ item.src }}"
state: "retrieve"
file_mode: "{{ item.file_mode }}"
dir_mode: "{{ item.dir_mode }}"
server: "{{ hostvars[groups['memcached'][0]]['ansible_ssh_host'] }}:11211"
encrypt_string: "{{ memcached_encryption_key }}"
with_items:
- { src: "/etc/ssl/private/apache.key", name: "apache_key", file_mode: "0640", dir_mode: "0750" }
- { src: "/etc/ssl/certs/apache.cert", name: "apache_cert", file_mode: "0644", dir_mode: "0755" }

@ -1,34 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: horizon_all[0]
user: root
roles:
- horizon_common
- galera_db_setup
- horizon_setup
- horizon_apache
vars_files:
- vars/openstack_service_vars/horizon.yml
- vars/repo_packages/horizon.yml
- hosts: horizon_all:!horizon_all[0]
user: root
roles:
- horizon_common
- horizon_apache
vars_files:
- vars/openstack_service_vars/horizon.yml
- vars/repo_packages/horizon.yml

@ -1,19 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: setup-common.yml
- include: build-containers.yml
- include: restart-containers.yml
- include: containers-common.yml

@ -39,10 +39,12 @@ INVENTORY_SKEL = {
# Any new item added to inventory that will used as a default argument in the
# inventory setup should be added to this list.
REQUIRED_HOSTVARS = [
'is_metal',
'properties',
'ansible_ssh_host',
'physical_host_group',
'container_address',
'container_name',
'container_networks',
'physical_host',
'component'
]
@ -79,6 +81,8 @@ def get_ip_address(name, ip_q):
else:
append_if(array=USED_IPS, item=ip_addr)
return str(ip_addr)
except AttributeError:
return None
except Queue.Empty:
raise SystemExit(
'Cannot retrieve requested amount of IP addresses. Increase the %s'
@ -117,7 +121,7 @@ def _parse_belongs_to(key, belongs_to, inventory):
def _build_container_hosts(container_affinity, container_hosts, type_and_name,
inventory, host_type, container_type,
container_host_type, physical_host_type, config,
is_metal, assignment):
properties, assignment):
"""Add in all of the host associations into inventory.
This will add in all of the hosts into the inventory based on the given
@ -132,10 +136,14 @@ def _build_container_hosts(container_affinity, container_hosts, type_and_name,
:param container_host_type: ``str`` Type of host
:param physical_host_type: ``str`` Name of physical host group
:param config: ``dict`` User defined information
:param is_metal: ``bol`` If true, a container entry will not be built
:param properties: ``dict`` Container properties
:param assignment: ``str`` Name of container component target
"""
container_list = []
is_metal = False
if properties:
is_metal = properties.get('is_metal', False)
for make_container in range(container_affinity):
for i in container_hosts:
if '%s-' % type_and_name in i:
@ -176,11 +184,12 @@ def _build_container_hosts(container_affinity, container_hosts, type_and_name,
append_if(array=container_mapping, item=host_type_containers)
hostvars_options.update({
'is_metal': is_metal,
'properties': properties,
'ansible_ssh_host': address,
'container_address': address,
'container_name': container_host_name,
'physical_host': host_type,
'physical_host_group': physical_host_type,
'component': assignment
})
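The `properties`-to-`is_metal` fallback used in this hunk can be sketched as a small helper (hypothetical name; the script inlines this logic rather than calling a function):

```python
def is_metal_from_properties(properties):
    """Return the is_metal flag, defaulting to False.

    A missing or empty properties dict means the host is treated as a
    container rather than a bare-metal deployment target.
    """
    if properties:
        return properties.get('is_metal', False)
    return False
```

This keeps old inventories (which lack a `properties` key entirely) working without special-casing `None`.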
@ -217,6 +226,11 @@ def _append_to_host_groups(inventory, container_type, assignment, host_type,
iph = inventory[physical_group_type]['hosts']
iah = inventory[assignment]['hosts']
for hname, hdata in inventory['_meta']['hostvars'].iteritems():
is_metal = False
properties = hdata.get('properties')
if properties:
is_metal = properties.get('is_metal', False)
if 'container_types' in hdata or 'container_name' in hdata:
if 'container_name' not in hdata:
container = hdata['container_name'] = hname
@ -230,13 +244,13 @@ def _append_to_host_groups(inventory, container_type, assignment, host_type,
if container.startswith('%s-' % type_and_name):
append_if(array=iah, item=container)
elif hdata.get('is_metal') is True:
elif is_metal is True:
if component == assignment:
append_if(array=iah, item=container)
if container.startswith('%s-' % type_and_name):
append_if(array=iph, item=container)
elif hdata.get('is_metal') is True:
elif is_metal is True:
if container.startswith(host_type):
append_if(array=iph, item=container)
@ -264,7 +278,7 @@ def _append_to_host_groups(inventory, container_type, assignment, host_type,
def _add_container_hosts(assignment, config, container_name, container_type,
inventory, is_metal):
inventory, properties):
"""Add a given container name and type to the hosts.
:param assignment: ``str`` Name of container component target
@ -272,7 +286,7 @@ def _add_container_hosts(assignment, config, container_name, container_type,
:param container_name: ``str`` Name of container
:param container_type: ``str`` Type of container
:param inventory: ``dict`` Living dictionary of inventory
:param is_metal: ``bol`` If true, a container entry will not be built
:param properties: ``dict`` Dict of container properties
"""
physical_host_type = '%s_hosts' % container_type.split('_')[0]
# If the physical host type is not in config return
@ -302,9 +316,9 @@ def _add_container_hosts(assignment, config, container_name, container_type,
' 52 characters. This combination will result in a container'
' name that is longer than the maximum allowable hostname of'
' 63 characters. Before this process can continue please'
' adjust the host entries in your "openstack_user_config.yml" to use'
' a short hostname. The recommended hostname length is < 20'
' characters long.' % (host_type, container_name)
' adjust the host entries in your "openstack_user_config.yml"'
' to use a short hostname. The recommended hostname length is'
' < 20 characters long.' % (host_type, container_name)
)
physical_host = inventory['_meta']['hostvars'][host_type]
@ -325,7 +339,7 @@ def _add_container_hosts(assignment, config, container_name, container_type,
container_host_type,
physical_host_type,
config,
is_metal,
properties,
assignment,
)
@ -348,6 +362,7 @@ def user_defined_setup(config, inventory, is_metal):
:param inventory: ``dict`` Living dictionary of inventory
:param is_metal: ``bool`` If true, a container entry will not be built
"""
hvs = inventory['_meta']['hostvars']
for key, value in config.iteritems():
if key.endswith('hosts'):
if key not in inventory:
@ -360,15 +375,23 @@ def user_defined_setup(config, inventory, is_metal):
if _key not in inventory['_meta']['hostvars']:
inventory['_meta']['hostvars'][_key] = {}
inventory['_meta']['hostvars'][_key].update({
hvs[_key].update({
'ansible_ssh_host': _value['ip'],
'container_address': _value['ip'],
'is_metal': is_metal,
'physical_host_group': key
})
# If the entry is missing the properties key add it.
properties = hvs[_key].get('properties')
if not properties or not isinstance(properties, dict):
hvs[_key]['properties'] = dict()
hvs[_key]['properties'].update({'is_metal': is_metal})
if 'host_vars' in _value:
for _k, _v in _value['host_vars'].items():
inventory['_meta']['hostvars'][_key][_k] = _v
hvs[_key][_k] = _v
append_if(array=USED_IPS, item=_value['ip'])
append_if(array=inventory[key]['hosts'], item=_key)
@ -419,41 +442,6 @@ def skel_load(skeleton, inventory):
)
def _add_additional_networks(key, inventory, ip_q, k_name, netmask):
"""Process additional ip adds and append then to hosts as needed.
If the host is found to be "is_metal" it will be marked as "on_metal"
and will not have an additionally assigned IP address.
:param key: ``str`` Component key name
:param inventory: ``dict`` Living dictionary of inventory
:param ip_q: ``object`` build queue of IP addresses
:param k_name: ``str`` key to use in host vars for storage
"""
base_hosts = inventory['_meta']['hostvars']
addr_name = '%s_address' % k_name
lookup = inventory.get(key, list())
if 'children' in lookup and lookup['children']:
for group in lookup['children']:
_add_additional_networks(group, inventory, ip_q, k_name, netmask)
if 'hosts' in lookup and lookup['hosts']:
for chost in lookup['hosts']:
container = base_hosts[chost]
if not container.get(addr_name):
if ip_q is None:
container[addr_name] = None
else:
container[addr_name] = get_ip_address(
name=k_name, ip_q=ip_q
)
netmask_name = '%s_netmask' % k_name
if netmask_name not in container:
container[netmask_name] = netmask
def _load_optional_q(config, cidr_name):
"""Load optional queue with ip addresses.
@ -468,6 +456,167 @@ def _load_optional_q(config, cidr_name):
return ip_q
def _add_additional_networks(key, inventory, ip_q, q_name, netmask, interface,
bridge, net_type, user_config, is_ssh_address,
is_container_address):
"""Process additional ip adds and append then to hosts as needed.
If the host is found to be "is_metal" it will be marked as "on_metal"
and will not have an additionally assigned IP address.
:param key: ``str`` Component key name.
:param inventory: ``dict`` Living dictionary of inventory.
:param ip_q: ``object`` build queue of IP addresses.
:param q_name: ``str`` key to use in host vars for storage.
:param netmask: ``str`` netmask to use.
:param interface: ``str`` interface name to set for the network.
:param bridge: ``str`` bridge name to set for the network.
:param net_type: ``str`` network type to set for the network.
:param user_config: ``dict`` user defined configuration details.
:param is_ssh_address: ``bool`` set this address as ansible_ssh_host.
:param is_container_address: ``bool`` set this address to container_address.
"""
def network_entry():
"""Return a network entry for a container."""
# TODO(cloudnull) After a few releases this conditional should be
# simplified. The container address checking that is ssh address
# is only being done to support old inventory.
if is_metal:
_network = dict()
else:
_network = {'interface': interface}
if bridge:
_network['bridge'] = bridge
if net_type:
_network['type'] = net_type
return _network
def return_netmask():
"""Return the netmask for a container."""
# TODO(cloudnull) After a few releases this conditional should be
# simplified. The container address checking that is ssh address
# is only being done to support old inventory.
_old_netmask = container.get(old_netmask)
if _old_netmask:
return container.pop(old_netmask)
elif netmask:
return netmask
base_hosts = inventory['_meta']['hostvars']
lookup = inventory.get(key, list())
if 'children' in lookup and lookup['children']:
for group in lookup['children']:
_add_additional_networks(
group,
inventory,
ip_q,
q_name,
netmask,
interface,
bridge,
net_type,
user_config,
is_ssh_address,
is_container_address
)
# Make sure the lookup object has a value.
if lookup:
hosts = lookup.get('hosts')
if not hosts:
return
else:
return
# TODO(cloudnull) after a few releases this should be removed.
if q_name:
old_address = '%s_address' % q_name
else:
old_address = '%s_address' % interface
old_netmask = '%s_netmask' % q_name
for container_host in hosts:
container = base_hosts[container_host]
# TODO(cloudnull) after a few releases this should be removed.
# This removes the old container network value that no longer serves a purpose.
container.pop('container_network', None)
if 'container_networks' in container:
networks = container['container_networks']
else:
networks = container['container_networks'] = dict()
is_metal = False
properties = container.get('properties')
if properties:
is_metal = properties.get('is_metal', False)
# This should convert found addresses based on q_name + "_address"
# and then build the network if it's not found.
if not is_metal and old_address not in networks:
network = networks[old_address] = network_entry()
if old_address in container and container[old_address]:
network['address'] = container.pop(old_address)
elif not is_metal:
address = get_ip_address(name=q_name, ip_q=ip_q)
if address:
network['address'] = address
network['netmask'] = return_netmask()
elif is_metal:
network = networks[old_address] = network_entry()
network['netmask'] = return_netmask()
# TODO(cloudnull) After a few releases this conditional should be
# simplified. The container address checking that is ssh address
# is only being done to support old inventory.
if old_address in container and container[old_address]:
network['address'] = container.pop(old_address)
else:
if is_ssh_address or is_container_address:
# Container physical host group
cphg = container.get('physical_host_group')
# user_config data from the container physical host group
phg = user_config[cphg][container_host]
network['address'] = phg['ip']
if is_ssh_address is True:
container['ansible_ssh_host'] = networks[old_address]['address']
if is_container_address is True:
container['container_address'] = networks[old_address]['address']
def _net_address_search(provider_networks, main_netowrk, key):
"""Set the key netwokr type to the main network if not specified.
:param provider_networks: ``list`` Network list of ``dict``s
:param main_netowrk: ``str`` The name of the main network bridge.
:param key: ``str`` The name of the key to set true.
"""
for pn in provider_networks:
# p_net are the provider_network values
p_net = pn.get('network')
if p_net:
# Check for the key
if p_net.get(key):
break
else:
for pn in provider_networks:
p_net = pn.get('network')
if p_net:
if p_net.get('container_bridge') == main_netowrk:
print p_net
p_net[key] = True
return provider_networks
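A Python 3 restatement of `_net_address_search` (with the stray debug `print` dropped), plus a usage sketch under an assumed two-network config:

```python
def net_address_search(provider_networks, main_network, key):
    """Flag the main-bridge network with `key` when no network has it yet."""
    for pn in provider_networks:
        p_net = pn.get('network')
        if p_net and p_net.get(key):
            # Some network already carries the flag; change nothing.
            break
    else:
        # No network has the flag: set it on the main bridge's network.
        for pn in provider_networks:
            p_net = pn.get('network')
            if p_net and p_net.get('container_bridge') == main_network:
                p_net[key] = True
    return provider_networks

# Illustrative provider_networks list (names are assumptions).
nets = [
    {'network': {'container_bridge': 'br-storage'}},
    {'network': {'container_bridge': 'br-mgmt'}},
]
nets = net_address_search(nets, 'br-mgmt', 'is_ssh_address')
assert nets[1]['network']['is_ssh_address'] is True
assert 'is_ssh_address' not in nets[0]['network']
```

Because the first loop breaks as soon as an existing flag is found, a user who sets `is_ssh_address` on a non-management network explicitly is never overridden.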
def container_skel_load(container_skel, inventory, config):
"""Build out all containers as defined in the environment file.
@ -484,7 +633,7 @@ def container_skel_load(container_skel, inventory, config):
key,
container_type,
inventory,
value.get('is_metal', False)
value.get('properties')
)
else:
cidr_networks = config.get('cidr_networks')
@ -499,41 +648,47 @@ def container_skel_load(container_skel, inventory, config):
provider_queues['%s_netmask' % net_name] = str(net.netmask)
overrides = config['global_overrides']
mgmt_bridge = overrides['management_bridge']
mgmt_dict = {}
if cidr_networks:
for pn in overrides['provider_networks']:
network = pn['network']
if 'ip_from_q' in network and 'group_binds' in network:
q_name = network['ip_from_q']
for group in network['group_binds']:
_add_additional_networks(
key=group,
inventory=inventory,
ip_q=provider_queues[q_name],
k_name=q_name,
netmask=provider_queues['%s_netmask' % q_name]
)
# iterate over a list of provider_networks, var=pn
pns = overrides.get('provider_networks', list())
pns = _net_address_search(
provider_networks=pns,
main_netowrk=config['global_overrides']['management_bridge'],
key='is_ssh_address'
)
if mgmt_bridge == network['container_bridge']:
nci = network['container_interface']
ncb = network['container_bridge']
ncn = network.get('ip_from_q')
mgmt_dict['container_interface'] = nci
mgmt_dict['container_bridge'] = ncb
if ncn:
cidr_net = netaddr.IPNetwork(cidr_networks.get(ncn))
mgmt_dict['container_netmask'] = str(cidr_net.netmask)
pns = _net_address_search(
provider_networks=pns,
main_netowrk=config['global_overrides']['management_bridge'],
key='is_container_address'
)
for host, hostvars in inventory['_meta']['hostvars'].iteritems():
base_hosts = inventory['_meta']['hostvars'][host]
if 'container_network' not in base_hosts:
base_hosts['container_network'] = mgmt_dict
for pn in pns:
# p_net are the provider_network values
p_net = pn.get('network')
if not p_net:
continue
for _key, _value in hostvars.iteritems():
if _key == 'ansible_ssh_host' and _value is None:
ca = base_hosts['container_address']
base_hosts['ansible_ssh_host'] = ca
q_name = p_net.get('ip_from_q')
ip_from_q = provider_queues.get(q_name)
if ip_from_q:
netmask = provider_queues['%s_netmask' % q_name]
else:
netmask = None
for group in p_net.get('group_binds', list()):
_add_additional_networks(
key=group,
inventory=inventory,
ip_q=ip_from_q,
q_name=q_name,
netmask=netmask,
interface=p_net['container_interface'],
bridge=p_net['container_bridge'],
net_type=p_net.get('container_type'),
user_config=config,
is_ssh_address=p_net.get('is_ssh_address'),
is_container_address=p_net.get('is_container_address')
)
def file_find(pass_exception=False, user_file=None):
@ -548,7 +703,6 @@ def file_find(pass_exception=False, user_file=None):
:param pass_exception: ``bool``
:param user_file: ``str`` Additional location to look in FIRST for a file
"""
file_check = [
os.path.join('/etc', 'openstack_deploy'),
os.path.join(os.environ.get('HOME'), 'openstack_deploy')
@ -590,15 +744,14 @@ def _set_used_ips(user_defined_config, inventory):
# Find all used IP addresses and ensure that they are not used again
for host_entry in inventory['_meta']['hostvars'].values():
if 'ansible_ssh_host' in host_entry:
append_if(array=USED_IPS, item=host_entry['ansible_ssh_host'])
for key, value in host_entry.iteritems():
if key.endswith('address'):
append_if(array=USED_IPS, item=value)
networks = host_entry.get('container_networks', dict())
for network_entry in networks.values():
address = network_entry.get('address')
if address:
append_if(array=USED_IPS, item=address)
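The new used-IP harvesting can be restated as a standalone helper (hypothetical name; the script appends to the module-level `USED_IPS` list instead of returning one):

```python
def collect_used_ips(hostvars):
    """Gather each container network address once, preserving order."""
    used = []
    for host_entry in hostvars.values():
        networks = host_entry.get('container_networks', dict())
        for network_entry in networks.values():
            address = network_entry.get('address')
            if address and address not in used:
                used.append(address)
    return used

# Illustrative hostvars with one duplicate address across hosts.
hostvars = {
    'infra1': {'container_networks': {
        'container_address': {'address': '172.29.236.10'},
        'storage_address': {'address': '172.29.244.10'},
    }},
    'infra2': {'container_networks': {
        'container_address': {'address': '172.29.236.10'},
    }},
}
assert collect_used_ips(hostvars) == ['172.29.236.10', '172.29.244.10']
```

Reading addresses out of `container_networks` (rather than scanning every hostvar key ending in `address`) means only real network entries reserve IPs.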
def _ensure_inventory_uptodate(inventory):
def _ensure_inventory_uptodate(inventory, container_skel):
"""Update inventory if needed.
Inspect the current inventory and ensure that all host items have all of
@ -614,6 +767,15 @@ def _ensure_inventory_uptodate(inventory):
if rh not in value:
value[rh] = None
for key, value in container_skel.iteritems():
item = inventory.get(key)
hosts = item.get('hosts')
if hosts:
for host in hosts:
container = inventory['_meta']['hostvars'][host]
if 'properties' in value:
container['properties'] = value['properties']
def _parse_global_variables(user_cidr, inventory, user_defined_config):
"""Add any extra variables that may have been set in config.
@ -759,7 +921,9 @@ def main():
)
# Load existing inventory file if found
dynamic_inventory_file = os.path.join(local_path, 'openstack_inventory.json')
dynamic_inventory_file = os.path.join(
local_path, 'openstack_inventory.json'
)
if os.path.isfile(dynamic_inventory_file):
with open(dynamic_inventory_file, 'rb') as f:
dynamic_inventory = json.loads(f.read())
@ -780,11 +944,17 @@ def main():
dynamic_inventory = INVENTORY_SKEL
# Save the users container cidr as a group variable
if 'container' in user_defined_config.get('cidr_networks', list()):
user_cidr = user_defined_config['cidr_networks']['container']
else:
cidr_networks = user_defined_config.get('cidr_networks')
if not cidr_networks:
raise SystemExit('No container CIDR specified in user config')
if 'container' in cidr_networks:
user_cidr = cidr_networks['container']
elif 'management' in cidr_networks:
user_cidr = cidr_networks['management']
else:
raise SystemExit('No container or management network specified')
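The CIDR fallback above reduces to the following sketch (`SystemExit` is raised exactly as in the script; the helper name is an assumption):

```python
def pick_user_cidr(cidr_networks):
    """Prefer the 'container' CIDR, then fall back to 'management'."""
    if not cidr_networks:
        raise SystemExit('No container CIDR specified in user config')
    if 'container' in cidr_networks:
        return cidr_networks['container']
    if 'management' in cidr_networks:
        return cidr_networks['management']
    raise SystemExit('No container or management network specified')

assert pick_user_cidr({'container': '172.29.236.0/22'}) == '172.29.236.0/22'
assert pick_user_cidr({'management': '10.0.0.0/24'}) == '10.0.0.0/24'
```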
# Add the container_cidr into the all global ansible group_vars
_parse_global_variables(user_cidr, dynamic_inventory, user_defined_config)
@ -797,7 +967,8 @@ def main():
dynamic_inventory
)
skel_load(
environment.get('component_skel'), dynamic_inventory
environment.get('component_skel'),
dynamic_inventory
)
container_skel_load(
environment.get('container_skel'),
@ -806,10 +977,17 @@ def main():
)
# Look at inventory and ensure all entries have all required values.
_ensure_inventory_uptodate(inventory=dynamic_inventory)
_ensure_inventory_uptodate(
inventory=dynamic_inventory,
container_skel=environment.get('container_skel'),
)
# Load the inventory json
dynamic_inventory_json = json.dumps(dynamic_inventory, indent=4)
dynamic_inventory_json = json.dumps(
dynamic_inventory,
indent=4,
sort_keys=True
)
# Generate a list of all hosts and their used IP addresses
hostnames_ips = {}
@ -820,7 +998,8 @@ def main():
host_hash[_key] = _value
# Save a list of all hosts and their given IP addresses
with open(os.path.join(local_path, 'openstack_hostnames_ips.yml'), 'wb') as f:
hostnames_ip_file = os.path.join(local_path, 'openstack_hostnames_ips.yml')
with open(hostnames_ip_file, 'wb') as f:
f.write(
json.dumps(
hostnames_ips,


@ -13,208 +13,128 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the dbservers group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
# Define the required kernel; presently 3.13.0-34-generic
required_kernel: 3.13.0-34-generic
## Container Template Config
container_template: openstack
container_release: trusty
# Parameters on what the container will be built with
container_config: /etc/lxc/lxc-openstack.conf
## Verbosity Options
debug: False
verbose: True
## Base Ansible config for all plays
ansible_ssh_port: 22
## Repo server
repo_service_user_name: nginx
repo_service_home_folder: /var/www
repo_server_port: 8181
repo_pip_default_index: "http://rpc-repo.rackspace.com/pools"
## Virtual IP address
# Internal Management vip address
internal_vip_address: "{{ internal_lb_vip_address }}"
# External DMZ VIP address
external_vip_address: "{{ external_lb_vip_address }}"
## URL for the frozen repo
openstack_repo_url: "https://mirror.rackspace.com/rackspaceprivatecloud"
## OpenStack Source Code Release
openstack_release: master
openstack_code_name: juno
# URL for the frozen internal openstack repo.
openstack_repo_url: "http://{{ internal_lb_vip_address }}:{{ repo_server_port }}"
openstack_upstream_url: "http://rpc-repo.rackspace.com"
# Global minimum kernel requirement
openstack_host_required_kernel: 3.13.0-34-generic
## URLs for package repos
mariadb_repo_url: "http://mirror.rackspace.com/rackspaceprivatecloud/mirror/mariadb/mariadb-5.5.41/repo/ubuntu/"
elasticsearch_repo_url: "http://packages.elasticsearch.org/elasticsearch/1.2/debian"
logstash_repo_url: "http://packages.elasticsearch.org/logstash/1.4/debian"
rsyslog_repo_url: "ppa:adiscon/v8-stable"
## GPG Keys
gpg_keys:
- { key_name: 'mariadb', keyserver: 'hkp://keyserver.ubuntu.com:80', hash_id: '0xcbcb082a1bb943db' }
## Repositories
apt_common_repos:
- { repo: "deb {{ mariadb_repo_url }} {{ ansible_distribution_release }} main", state: "present" }
## URL for pip
get_pip_url: "{{ openstack_repo_url }}/downloads/get-pip.py"
## URL for the container image
container_cache_tarball: "{{ openstack_repo_url }}/downloads/rpc-trusty-container.tgz"
## Pinned packages
apt_pinned_packages:
- { package: "lxc", version: "1.0.7-0ubuntu0.1" }
- { package: "libvirt-bin", version: "1.2.2-0ubuntu13.1.8" }
- { package: "logstash", version: "1.4.2-1-2c0f5a1" }
- { package: "logstash-contrib", version: "1.4.2-1-efd53ef" }
- { package: "elasticsearch", version: "1.2.4" }
## Users that will not be created via container_common
excluded_user_create:
- mysql
- rabbitmq
## Kernel modules loaded on all hosts
host_kernel_modules:
- scsi_dh
- dm_multipath
- dm_snapshot
host_kernel_tuning:
- { key: 'vm.dirty_background_ratio', value: 5 }
- { key: 'vm.dirty_ratio', value: 10 }
- { key: 'vm.swappiness', value: 10 }
## Base Packages
apt_common_packages:
- vlan
- python-software-properties
- python-dev
- build-essential
- git-core
- rsyslog
- lvm2
- dmeventd
- libkmod-dev
- libkmod2
- libssl-dev
- bridge-utils
- cgroup-lite
- sqlite3
- iptables
- sshpass
- libffi-dev
- libxml2-dev
- libxslt1-dev
- libsqlite3-dev
- mariadb-client
- libmariadbclient-dev
# Util packages that are installed when repos are put in place
common_util_packages:
- curl
- wget
- time
- rsync
## MySQL Information
mysql_port: 3306
mysql_user: root
mysql_password: "{{ mysql_root_password }}"
mysql_address: "{{ internal_vip_address }}"
## RPC Backend
rpc_thread_pool_size: 64
rpc_conn_pool_size: 30
rpc_response_timeout: 60
rpc_cast_timeout: 30
rpc_backend: rabbit
## LXC options
lxc_container_caches:
- url: "{{ openstack_upstream_url }}/container_images/rpc-trusty-container.tgz"
name: "trusty.tgz"
## RabbitMQ
rabbit_port: 5672
rabbit_hosts: "{% for host in groups['rabbit'] %}{{ hostvars[host]['container_address'] }}:{{ rabbit_port }}{% if not loop.last %},{% endif %}{% endfor %}"
rabbit_use_ssl: false
rabbit_virtual_host: /
rabbit_retry_interval: 1
rabbit_retry_backoff: 2
rabbit_max_retries: 0
rabbit_ha_queues: false
rabbit_userid: openstack
rabbit_password: "{{ rabbitmq_password }}"
rabbitmq_userid: openstack
rabbitmq_cluster_name: openstack
rabbitmq_port: 5672
rabbitmq_servers: "{% for host in groups['rabbitmq_all'] %}{{ hostvars[host]['container_address'] }}:{{ rabbitmq_port }}{% if not loop.last %},{% endif %}{% endfor %}"
## Auth
auth_admin_username: admin
auth_admin_password: "{{ keystone_auth_admin_password }}"
auth_admin_token: "{{ keystone_auth_admin_token }}"
auth_admin_tenant: admin
auth_identity_uri: "http://{{ internal_vip_address }}:5000/v2.0"
auth_identity_uri_v3: "http://{{ internal_vip_address }}:5000/v3"
auth_admin_uri: "http://{{ internal_vip_address }}:35357/v2.0"
auth_host: "{{ internal_vip_address }}"
auth_port: 35357
auth_public_port: 5000
auth_protocol: http
## Galera
galera_wsrep_cluster_address: "{% for host in groups['galera_all'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}{% endfor %}"
galera_wsrep_address: "{{ container_address }}"
galera_monitoring_user: haproxy
galera_root_user: root
# Set ``galera_max_connections`` to override the calculated max connections.
# galera_max_connections: 500
# Repositories
## OpenStack Region
service_region: RegionOne
## Container User
container_username: openstack
container_password: "{{ container_openstack_password }}"
## Memcached
memcached_memory: 8192
memcached_port: 11211
memcached_user: memcache
memcached_secret_key: "{{ memcached_encryption_key }}"
## Haproxy Configuration
hap_rise: 3
hap_fall: 3
hap_interval: 12000
# Default haproxy backup nodes to empty list so this doesn't have to be
# defined for each service.
hap_backup_nodes: []
## Swift credentials for Swift Container image store
swift_archive_store:
creds_file: /root/swiftcreds
section: default
container: poc_lxc_containers
## Remote logging common configuration
elasticsearch_http_port: 9200
elasticsearch_tcp_port: 9300
elasticsearch_mode: transport
elasticsearch_cluster: openstack
elasticsearch_vip: "{{ external_vip_address }}"
## Logstash
logstash_port: 5544
# Directory where serverspec is installed to on utility container
serverspec_install_dir: /opt/serverspec
# How long to wait for a container after a (re)start
container_start_timeout: 180
## Pip install
# Lock down pip to only a specific version of pip
pip_get_pip_options: "--no-index --find-links={{ openstack_upstream_url }}/os-releases/{{ openstack_release }}"
## Memcached options
memcached_listen: "{{ container_address }}"
memcached_port: 11211
memcached_servers: "{% for host in groups['memcached'] %}{{ hostvars[host]['container_address'] }}:{{ memcached_port }}{% if not loop.last %},{% endif %}{% endfor %}"
## Nova
nova_service_port: 8774
nova_service_proto: http
nova_service_user_name: nova
nova_service_tenant_name: service
nova_service_adminuri: "{{ nova_service_proto }}://{{ internal_lb_vip_address }}:{{ nova_service_port }}"
nova_service_adminurl: "{{ nova_service_adminuri }}/v2/%(tenant_id)s"
nova_service_region: RegionOne
nova_metadata_port: 8775
## Neutron
neutron_service_port: 9696
neutron_service_proto: http
neutron_service_user_name: neutron
neutron_service_tenant_name: service
neutron_service_adminuri: "{{ neutron_service_proto }}://{{ internal_lb_vip_address }}:{{ neutron_service_port }}"
neutron_service_adminurl: "{{ neutron_service_adminuri }}"
neutron_service_region: RegionOne
neutron_service_program_enabled: true
neutron_service_dhcp_program_enabled: true
neutron_service_l3_program_enabled: true
neutron_service_linuxbridge_program_enabled: true
neutron_service_metadata_program_enabled: true
neutron_service_metering_program_enabled: true
## Glance
glance_service_port: 9292
glance_service_proto: http
glance_service_user_name: glance
glance_service_tenant_name: service
glance_service_adminurl: "{{ glance_service_proto }}://{{ internal_lb_vip_address }}:{{ glance_service_port }}"
glance_service_region: RegionOne
glance_api_servers: "{% for host in groups['glance_all'] %}{{ hostvars[host]['container_address'] }}:{{ glance_service_port }}{% if not loop.last %},{% endif %}{% endfor %}"
## Keystone
keystone_admin_user_name: admin
keystone_admin_tenant_name: admin
keystone_admin_port: 35357
keystone_service_port: 5000
keystone_service_proto: http
keystone_service_user_name: keystone
keystone_service_tenant_name: service
keystone_service_uri: "{{ keystone_service_proto }}://{{ internal_lb_vip_address }}"
keystone_service_internaluri: "{{ keystone_service_proto }}://{{ internal_lb_vip_address }}:{{ keystone_service_port }}"
keystone_service_internalurl: "{{ keystone_service_internaluri }}/v2.0"
keystone_service_adminuri: "{{ keystone_service_uri }}:{{ keystone_admin_port }}"
keystone_service_adminurl: "{{ keystone_service_adminuri }}/v2.0"
keystone_service_internaluri_v3: "{{ keystone_service_proto }}://{{ internal_lb_vip_address }}:{{ keystone_service_port }}"
keystone_service_internalurl_v3: "{{ keystone_service_adminuri_v3 }}/v3"
keystone_service_adminuri_v3: "{{ keystone_service_proto }}://{{ internal_lb_vip_address }}:{{ keystone_admin_port }}"
keystone_service_adminurl_v3: "{{ keystone_service_adminuri_v3 }}/v3"
keystone_cache_backend_argument: "url:{% for host in groups['memcached'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}{% endfor %}:{{ memcached_port }}"
keystone_memcached_servers: "{% for host in groups['keystone_all'] %}{{ hostvars[host]['container_address'] }}:{{ memcached_port }}{% if not loop.last %},{% endif %}{% endfor %}"
keystone_service_region: RegionOne
## Tempest
tempest_swift_enabled: true
## OpenStack Openrc
openrc_os_auth_url: "{{ keystone_service_internalurl }}"
openrc_os_password: "{{ keystone_auth_admin_password }}"


@ -1,79 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the Cinder-api group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
## Service Name
service_name: cinder
# Verbosity Options
debug: False
verbose: True
# only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
# (StrOpt) Method used to wipe old volumes (valid options are: none, zero,
# shred)
cinder_volume_clear: zero
# (StrOpt) The flag to pass to ionice to alter the i/o priority of the process
# used to zero a volume after deletion, for example "-c3" for idle only
# priority.
# cinder_volume_clear_ionice: -c3
# (IntOpt) Size in MiB to wipe at start of old volumes. 0 => all
cinder_volume_clear_size: 0
## General configuration
## Set this in openstack_user_config.yml UNLESS you want all hosts to use the same
## Cinder backends. See the openstack_user_config example for more on how this is done.
# cinder_backends:
# lvm:
# volume_group: cinder-volumes
# driver: cinder.volume.drivers.lvm.LVMISCSIDriver
# backend_name: LVM_iSCSI
cinder_service_port: "{{ cinder_port|default('8776') }}"
## DB
container_mysql_user: cinder
container_mysql_password: "{{ cinder_container_mysql_password }}"
container_database: cinder
## Cinder Auth
service_admin_tenant_name: "service"
service_admin_username: "cinder"
service_admin_password: "{{ cinder_service_password }}"
## Cinder User / Group
system_user: cinder
system_group: cinder
## Service Names
service_names:
- cinder-api
- cinder-scheduler
- cinder-volume
container_directories:
- { name: /var/log/cinder, mode: 755 }
- { name: /var/lib/cinder }
- { name: /var/lib/cinder/volumes }
- { name: /etc/cinder }
- { name: /etc/cinder/rootwrap.d }
- { name: /var/cache/cinder }
- { name: /var/lock/cinder }
- { name: /var/run/cinder }


@ -1,35 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the Cinder-api group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
# Note, most cinder settings are set in cinder_all,
# this file is just to override the lvm size for the volumes container.
# The volumes container needs a larger FS as it must have tmp space for
# converting glance images to volumes.
# https://bugs.launchpad.net/openstack-ansible/+bug/1399427
# Default is 5GB (same as other containers).
# Space must be added for cinder image conversion to work.
# For example, to be able to convert 100GB images, set this to 105GB.
cinder_volume_lv_size_gb: 5GB
# only used when the lxc vg is present on the target
container_lvm_fssize: "{{cinder_volume_lv_size_gb}}"


@ -1,39 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
service_name: mysql
# Defaults to mysql_address (VIP) when unset.
# Should only be set for the galera group so that they always connect to
# their own instance.
mysql_client_host: 127.0.0.1
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
# Size of the galera cache
galera_gcache_size: 1G
# Connection timeout https://mariadb.com/kb/en/mariadb/documentation/optimization-and-tuning/system-variables/server-system-variables/#wait_timeout
galera_wait_timeout: 28800
service_pip_dependencies:
- MySQL-python
- python-memcached
- pycrypto
# Directories to create
container_directories:
- { name: '/var/log/mysql', mode: 755 }


@ -1,84 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the Glance-api group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
## Service Name
service_name: glance
service_publicurl: "http://{{ external_vip_address }}:9292"
service_adminurl: "http://{{ internal_vip_address }}:9292"
service_internalurl: "http://{{ internal_vip_address }}:9292"
# Verbosity Options
debug: False
verbose: True
# only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 12GB
# General configuration
registry_host: "{{ internal_vip_address }}"
## DB
container_mysql_user: glance
container_mysql_password: "{{ glance_container_mysql_password }}"
container_database: glance
## RPC
notification_driver: "{{ glance_notification_driver|default('noop') }}"
rpc_backend: glance.openstack.common.rpc.impl_kombu
## Backend
default_store: "{{ glance_default_store|default('file') }}"
## Swift Options
swift_store_auth_address: "{{ glance_swift_store_auth_address | default('NoAuthAddress') }}"
swift_store_user: "{{ glance_swift_store_user | default('NoUser') }}"
swift_store_key: "{{ glance_swift_store_key | default('NoKey') }}"
swift_store_region: "{{ glance_swift_store_region | default('NoRegion') }}"
swift_store_container: "{{ glance_swift_store_container | default('NoContainer')}}"
swift_store_endpoint_type: "{{ glance_swift_store_endpoint_type | default('internalURL') }}"
## Auth
service_admin_tenant_name: "service"
service_admin_username: "glance"
service_admin_password: "{{ glance_service_password }}"
## Glance User / Group
system_user: glance
system_group: glance
## Service Names
service_names:
- glance-api
- glance-registry
flavor: "keystone+cachemanagement"
container_directories:
- { name: /var/log/glance, mode: 755 }
- { name: /var/lib/glance }
- { name: /var/lib/glance/cache }
- { name: /var/lib/glance/cache/api }
- { name: /var/lib/glance/cache/registry }
- { name: /var/lib/glance/scrubber }
- { name: /etc/glance }
- { name: /var/cache/glance }
container_packages:
- rsync


@ -1,72 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the Heat-api group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
## Service Name
service_name: heat
# Verbosity Options
debug: False
verbose: True
# only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
## DB
container_mysql_user: heat
container_mysql_password: "{{ heat_container_mysql_password }}"
container_database: heat
## RPC
rpc_backend: heat.openstack.common.rpc.impl_kombu
## Auth
service_admin_tenant_name: "service"
service_admin_username: "heat"
service_admin_password: "{{ heat_service_password }}"
## Heat User / Group
system_user: heat
system_group: heat
## Service Names
service_names:
- heat-api
- heat-api-cfn
- heat-api-cloudwatch
- heat-engine
## Stack
stack_domain_admin_password: "{{ heat_stack_domain_admin_password }}"
stack_domain_admin: stack_domain_admin
stack_user_domain_name: heat
deferred_auth_method: trusts
auth_encryption_key: "{{ heat_auth_encryption_key }}"
heat_watch_server_url: "http://{{ external_vip_address }}:8003"
heat_waitcondition_server_url: "http://{{ internal_vip_address }}:8000/v1/waitcondition"
heat_metadata_server_url: "http://{{ internal_vip_address }}:8000"
container_directories:
- { name: /etc/heat }
- { name: /etc/heat/environment.d }
- { name: /etc/heat/templates }
- { name: /var/cache/heat }
- { name: /var/lib/heat }
- { name: /var/log/heat, mode: 755 }


@ -1,66 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the Horizon group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
# Enable containerization of services
containerize: true
## Service Name
service_name: horizon
# Verbosity Options
debug: False
# only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
## DB
container_mysql_user: dash
container_mysql_password: "{{ horizon_container_mysql_password }}"
container_database: dash
## Horizon User / Group
system_user: www-data
system_group: www-data
## Horizon Help URL Path
horizon_help_url: http://docs.rackspace.com/rpc/api/v9/rpc-faq-v9/content/rpc-common-front.html
# Installation directories
install_lib_dir: /usr/local/lib/python2.7/dist-packages
container_directories:
- { name: /var/log/horizon, mode: 755 }
- { name: /etc/horizon }
- { name: /var/lib/horizon }
- { name: /usr/local/lib/python2.7/dist-packages/static }
- { name: /usr/local/lib/python2.7/dist-packages/openstack_dashboard/local }
horizon_fqdn: "{{ external_vip_address }}"
horizon_server_name: "{{ container_name }}"
horizon_self_signed: true
## Optional certification options
# horizon_cacert_pem: /path/to/cacert.pem
# horizon_ssl_cert: /etc/ssl/certs/apache.cert
# horizon_ssl_key: /etc/ssl/private/apache.key
# horizon_ssl_cert_path: /etc/ssl/certs


@ -1,70 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the Keystone-api group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
## Service Name
service_name: keystone
## Service ports
service_port: 5000
admin_port: 35357
## Verbosity Options
debug: False
verbose: True
# only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
## DB
container_mysql_user: keystone
container_mysql_password: "{{ keystone_container_mysql_password }}"
container_database: keystone
## AUTH
auth_methods: "password,token"
token_provider: "keystone.token.providers.uuid.Provider"
# If the "token_provider" is set to PKI set this to True
keystone_use_pki: False
## Keystone User / Group
system_user: keystone
system_group: keystone
## Enable SSL
keystone_ssl: false
## Optional SSL vars
# keystone_ssl_cert: /etc/ssl/certs/apache.cert
# keystone_ssl_key: /etc/ssl/certs/apache.key
# keystone_ssl_cert_path: /etc/ssl/certs
container_directories:
- { name: /etc/keystone }
- { name: /etc/keystone/ssl }
- { name: /var/lib/keystone }
- { name: /var/log/keystone, mode: 755 }
- { name: /var/www/cgi-bin/keystone, mode: 755 }


@ -1,88 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the neutron group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
## Service Name
service_name: neutron
# Verbosity Options
debug: False
verbose: True
## only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
## General configuration
core_plugin: neutron.plugins.ml2.plugin.Ml2Plugin
interface_driver: neutron.agent.linux.interface.BridgeInterfaceDriver
metering_driver: neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
service_plugins:
- neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
- neutron.services.loadbalancer.plugin.LoadBalancerPlugin
- neutron.services.vpn.plugin.VPNDriverPlugin
- neutron.services.metering.metering_plugin.MeteringPlugin
dhcp_driver: neutron.agent.linux.dhcp.Dnsmasq
neutron_config: /etc/neutron/neutron.conf
neutron_plugin: /etc/neutron/plugins/ml2/ml2_conf.ini
neutron_revision: head
## Neutron downtime
neutron_agent_down_time: 120
neutron_report_interval: "{{ neutron_agent_down_time|int / 2 }}"
neutron_agent_polling_interval: 5
## DB
container_mysql_user: neutron
container_mysql_password: "{{ neutron_container_mysql_password }}"
container_database: neutron
## RPC
rpc_backend: rabbit
## Neutron Auth
service_admin_tenant_name: "service"
service_admin_username: "neutron"
service_admin_password: "{{ neutron_service_password }}"
## Neutron User / Group
system_user: neutron
system_group: neutron
## Service Names
service_names:
- neutron-agent
- neutron-dhcp-agent
- neutron-linuxbridge-agent
- neutron-metadata-agent
- neutron-metering-agent
- neutron-l3-agent
- neutron-server
container_directories:
- { name: /etc/neutron }
- { name: /etc/neutron/plugins }
- { name: /etc/neutron/plugins/ml2 }
- { name: /etc/neutron/rootwrap.d }
- { name: /var/cache/neutron }
- { name: /var/lib/neutron, mode: 755 }
- { name: /var/lib/neutron/ha_confs }
- { name: /var/lock/neutron }
- { name: /var/log/neutron, mode: 755 }
- { name: /var/run/neutron }


@ -1,99 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the nova group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
## Service Name
service_name: nova
# Verbosity Options
debug: False
verbose: True
# only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
# General configuration
volume_driver: cinder.volume.drivers.lvm.LVMISCSIDriver
## DB
container_mysql_user: nova
container_mysql_password: "{{ nova_container_mysql_password }}"
container_database: nova
## RPC
rpc_backend: nova.openstack.common.rpc.impl_kombu
## Nova virtualization Type, set to KVM if supported
virt_type: "{{ nova_virt_type|default('kvm') }}"
## Nova Auth
service_admin_tenant_name: "service"
service_admin_username: "nova"
service_admin_password: "{{ nova_service_password }}"
## Nova User / Group
system_user: nova
system_group: nova
## Service Names
service_names:
- nova-api-metadata
- nova-api-os-compute
- nova-api-ec2
- nova-compute
- nova-conductor
- nova-scheduler
## Nova global config
nova_cpu_mode: host-model
nova_linuxnet_interface_driver: nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
nova_libvirt_vif_driver: nova.virt.libvirt.vif.NeutronLinuxBridgeVIFDriver
nova_firewall_driver: nova.virt.firewall.NoopFirewallDriver
nova_compute_driver: libvirt.LibvirtDriver
nova_max_age: 0
# Nova Scheduler
nova_cpu_allocation_ratio: 2.0
nova_disk_allocation_ratio: 1.0
nova_max_instances_per_host: 50
nova_max_io_ops_per_host: 10
nova_ram_allocation_ratio: 1.0
nova_ram_weight_multiplier: 5.0
nova_reserved_host_disk_mb: 2048
nova_reserved_host_memory_mb: 2048
nova_scheduler_driver: nova.scheduler.filter_scheduler.FilterScheduler
nova_scheduler_available_filters: nova.scheduler.filters.all_filters
nova_scheduler_default_filters: RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,CoreFilter,DiskFilter
nova_scheduler_driver_task_period: 60
nova_scheduler_host_manager: nova.scheduler.host_manager.HostManager
nova_scheduler_host_subset_size: 10
nova_scheduler_manager: nova.scheduler.manager.SchedulerManager
nova_scheduler_max_attempts: 5
nova_scheduler_weight_classes: nova.scheduler.weights.all_weighers
container_directories:
- { name: /var/log/nova, mode: 755, skip_group: nova_compute }
- { name: /var/lib/nova, mode: 755 }
- { name: /var/lib/nova/instances, mode: 755 }
- { name: /var/lib/nova/cache }
- { name: /var/lib/nova/cache/api }
- { name: /etc/nova }
- { name: /etc/nova/rootwrap.d }
- { name: /var/cache/nova }
- { name: /var/lock/nova }
- { name: /var/run/nova }


@ -1,75 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the swift-hosts & swift-proxy groups.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
authtoken_active: True
delay_auth_decision: true
## Service Name
service_name: swift
# Verbosity Options
debug: False
verbose: True
# only used when the lxc vg is present on the target
container_lvm_fstype: ext4
container_lvm_fssize: 5GB
# Swift default ports
swift_proxy_port: "8080"
swift_object_port: "6000"
swift_container_port: "6001"
swift_account_port: "6002"
# Swift default variables
swift_default_replication_number: 3
swift_default_min_part_hours: 1
swift_default_host_zone: 0
swift_default_host_region: 1
swift_default_drive_weight: 100
## DB
container_mysql_user: swift
container_mysql_password: "{{ swift_container_mysql_password }}"
container_database: swift
## Swift Auth
service_admin_tenant_name: "service"
service_admin_username: "swift"
service_admin_password: "{{ swift_service_password }}"
## Swift User / Group
system_user: swift
system_group: swift
## Service Names
service_names:
- swift-object
- swift-account
- swift-container
- swift-proxy
container_directories:
- { name: /var/lock/swift }
- { name: /var/cache/swift }
- { name: /etc/swift }
- { name: /etc/swift/rings/ }
- { name: /etc/swift/object-server }
- { name: /etc/swift/container-server }
- { name: /etc/swift/account-server }
- { name: /etc/swift/proxy-server }


@ -1,22 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# The variables file used by the playbooks in the tempest group.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
## Service Name
service_name: tempest
tempest_swift_enabled: True


@ -1,2 +0,0 @@
[local]
localhost ansible_connection=local


@ -1,113 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
## Keystone
- hosts: keystone[0]
  user: root
  roles:
    - keystone_add_service
  vars_files:
    - inventory/group_vars/keystone_all.yml
    - vars/openstack_service_vars/keystone_endpoint.yml
## Cinder
- hosts: keystone[0]
  user: root
  roles:
    - keystone_add_service
  vars_files:
    - inventory/group_vars/cinder_all.yml
    - vars/openstack_service_vars/cinder_api_endpoint.yml
- hosts: keystone[0]
  user: root
  roles:
    - keystone_add_service
  vars_files:
    - inventory/group_vars/cinder_all.yml
    - vars/openstack_service_vars/cinder_apiv2_endpoint.yml
## Glance
- hosts: keystone[0]
  user: root
  roles:
    - keystone_add_service
  vars_files:
    - inventory/group_vars/glance_all.yml
    - vars/openstack_service_vars/glance_api_endpoint.yml
## Heat
- hosts: keystone[0]
  user: root
  roles:
    - keystone_add_service
  vars_files:
    - inventory/group_vars/heat_all.yml
    - vars/openstack_service_vars/heat_api_endpoint.yml
- hosts: keystone[0]
  user: root
  roles:
    - keystone_add_service
  vars_files:
    - inventory/group_vars/heat_all.yml
    - vars/openstack_service_vars/heat_api_cfn_endpoint.yml
## Neutron
- hosts: keystone[0]
  user: root
  roles:
    - keystone_add_service
  vars_files:
    - inventory/group_vars/neutron_all.yml
    - vars/openstack_service_vars/neutron_server_endpoint.yml
## Nova
- hosts: keystone[0]
  user: root
  roles:
    - keystone_add_service
  vars_files:
    - inventory/group_vars/nova_all.yml
    - vars/openstack_service_vars/nova_api_os_compute_endpoint.yml
- hosts: keystone[0]
  user: root
  roles:
    - keystone_add_service
  vars_files:
    - inventory/group_vars/nova_all.yml
    - vars/openstack_service_vars/nova_api_os_computev3_endpoint.yml
- hosts: keystone[0]
  user: root
  roles:
    - keystone_add_service
  vars_files:
    - inventory/group_vars/nova_all.yml
    - vars/openstack_service_vars/nova_api_ec2_endpoint.yml
- hosts: keystone[0]
  user: root
  roles:
    - keystone_add_service
  vars_files:
    - inventory/group_vars/nova_all.yml
    - vars/openstack_service_vars/nova_api_s3_endpoint.yml


@ -1,20 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Add additional users to keystone if needed.
- hosts: keystone[0]
  user: root
  roles:
    - keystone_add_user


@ -1,18 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: keystone-common.yml
- include: keystone.yml
- include: keystone-add-all-services.yml


@ -1,28 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: keystone
  user: root
  roles:
    - common
    - common_sudoers
    - container_common
    - keystone_common
    - openstack_common
    - openstack_openrc
    - galera_client_cnf
  vars_files:
    - vars/repo_packages/keystone.yml
    - vars/openstack_service_vars/keystone.yml


@ -1,53 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Setup Keystone
- hosts: keystone[0]
  user: root
  tasks:
    - name: Perform a Keystone PKI Setup
      command: >
        keystone-manage pki_setup --keystone-user "{{ system_user }}" --keystone-group "{{ system_group }}"
        creates=/etc/keystone/ssl/private/signing_key.pem
    - name: Create Key directory
      file: >
        path=/tmp/keystone/ssl/
        state=directory
        group="{{ ansible_ssh_user }}"
        owner="{{ ansible_ssh_user }}"
        recurse=true
      delegate_to: localhost
    - name: Sync keys from keystone
      command: "rsync -az root@{{ ansible_ssh_host }}:/etc/keystone/ssl/ /tmp/keystone/ssl/"
      delegate_to: localhost
# Setup all keystone nodes
- hosts: keystone:!keystone[0]
  user: root
  tasks:
    - name: Sync keys to keystone
      command: "rsync -az /tmp/keystone/ssl/ root@{{ ansible_ssh_host }}:/etc/keystone/ssl/"
      delegate_to: localhost
# Remove temp Key Directory
- hosts: local
  gather_facts: false
  user: root
  tasks:
    - name: Remove Key directory
      file: >
        path=/tmp/keystone/
        state=absent
      delegate_to: localhost


@ -1,38 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Setup Keystone
- hosts: keystone[0]
  user: root
  roles:
    - galera_db_setup
    - keystone_apache
    - keystone_setup
    - keystone_add_service
  vars:
    auth_admin_uri: "{{ auth_protocol }}://{{ container_address }}:{{ auth_port }}/v2.0"
  vars_files:
    - vars/repo_packages/keystone.yml
    - vars/openstack_service_vars/keystone.yml
    - vars/openstack_service_vars/keystone_endpoint.yml
# Setup all keystone nodes
- hosts: keystone:!keystone[0]
  user: root
  roles:
    - keystone_apache
  vars_files:
    - vars/repo_packages/keystone.yml
    - vars/openstack_service_vars/keystone.yml


@ -1,24 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Setup supporting services
- hosts: kibana
  user: root
  roles:
    - common
    - container_common
    - kibana
  vars_files:
    - vars/repo_packages/kibana.yml

playbooks/library/dist_sort Normal file

@ -0,0 +1,168 @@
#!/usr/bin/env python
# (c) 2014, Kevin Carter <kevin.carter@rackspace.com>
#
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
DOCUMENTATION = """
---
module: dist_sort
version_added: "1.6.6"
short_description:
  - Deterministically sort a list to distribute its elements evenly, based
    on external values such as the host name or a static modifier. Returns
    a string as named key ``sorted_list``.
description:
  - This module returns a list of servers uniquely sorted based on the index
    of a lookup value within a group. The group should be an existing
    ansible inventory group. The module returns the sorted list as a
    delimited string.
options:
  src_list:
    description:
      - List in the form of a string separated by a delimiter.
    required: True
  ref_list:
    description:
      - List that ``value_to_lookup`` is looked up against to return an
        index number. This should be a pre-determined ansible group
        containing the ``value_to_lookup``.
    required: False
  value_to_lookup:
    description:
      - Value looked up against ``ref_list`` to get an index number.
    required: False
  sort_modifier:
    description:
      - Add a static int into the sort equation to weight the output.
    type: int
    default: 0
  delimiter:
    description:
      - Delimiter used to parse ``src_list`` with.
    default: ','
author:
  - Kevin Carter
  - Sam Yaple
"""

EXAMPLES = """
- dist_sort:
    value_to_lookup: "Hostname-in-ansible-group_name"
    ref_list: "{{ groups['group_name'] }}"
    src_list: "Server1,Server2,Server3"
  register: test_var

# With a pre-set delimiter
- dist_sort:
    value_to_lookup: "Hostname-in-ansible-group_name"
    ref_list: "{{ groups['group_name'] }}"
    src_list: "Server1|Server2|Server3"
    delimiter: '|'
  register: test_var

# With a set modifier
- dist_sort:
    value_to_lookup: "Hostname-in-ansible-group_name"
    ref_list: "{{ groups['group_name'] }}"
    src_list: "Server1#Server2#Server3"
    delimiter: '#'
    sort_modifier: 5
  register: test_var
"""


class DistSort(object):
    def __init__(self, module):
        """Deterministically sort a list of servers.

        :param module: The active ansible module.
        :type module: ``class``
        """
        self.module = module
        self.params = self.module.params
        self.return_data = self._runner()

    def _runner(self):
        """Return the sorted list of servers.

        Based on the modulo of the index of *value_to_lookup* within an
        ansible group, this method returns a "delimiter"-separated list of
        items.

        :returns: ``str``
        """
        index = self.params['ref_list'].index(self.params['value_to_lookup'])
        index += self.params['sort_modifier']
        src_list = self.params['src_list'].split(
            self.params['delimiter']
        )
        for _ in range(index % len(src_list)):
            src_list.append(src_list.pop(0))

        return self.params['delimiter'].join(src_list)


def main():
    """Run the main app."""
    module = AnsibleModule(
        argument_spec=dict(
            value_to_lookup=dict(
                required=True,
                type='str'
            ),
            ref_list=dict(
                required=True,
                type='list'
            ),
            src_list=dict(
                required=True,
                type='str'
            ),
            delimiter=dict(
                required=False,
                type='str',
                default=','
            ),
            sort_modifier=dict(
                required=False,
                type='str',
                default='0'
            )
        ),
        supports_check_mode=False
    )
    try:
        # This is done so that the failure can be parsed and does not cause
        # ansible to fail if a non-int is passed.
        module.params['sort_modifier'] = int(module.params['sort_modifier'])
        _ds = DistSort(module=module)
        if _ds.return_data == module.params['src_list']:
            _changed = False
        else:
            _changed = True
        module.exit_json(changed=_changed, **{'sorted_list': _ds.return_data})
    except Exception as exp:
        resp = {'stderr': str(exp)}
        resp.update(module.params)
        module.fail_json(msg='Failed Process', **resp)


# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
    main()

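For readers who want to see the rotation that `dist_sort` performs without running it through Ansible, here is a minimal standalone sketch of the same logic. The function name, host names, and server list below are hypothetical stand-ins, not part of the module:

```python
def rotate_sorted(value_to_lookup, ref_list, src_list,
                  sort_modifier=0, delimiter=','):
    """Rotate the delimited list left by the lookup index, modulo its length."""
    # Index of this host within the reference group, plus the optional weight.
    index = ref_list.index(value_to_lookup) + sort_modifier
    items = src_list.split(delimiter)
    # Rotate left: pop from the front, append to the back.
    for _ in range(index % len(items)):
        items.append(items.pop(0))
    return delimiter.join(items)

# Each host in the group sees the same servers in a different, stable order:
print(rotate_sorted('host1', ['host1', 'host2'], 'a,b,c'))  # -> a,b,c
print(rotate_sorted('host2', ['host1', 'host2'], 'a,b,c'))  # -> b,c,a
```

Because the rotation is a pure function of the host's position in the group, every run produces the same ordering for a given host, which spreads client connections across servers without any shared state.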

@ -174,4 +174,3 @@ def main():
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()


@ -1,22 +1,19 @@
 #!/usr/bin/python
 # -*- coding: utf-8 -*-
 # (c) 2014, Kevin Carter <kevin.carter@rackspace.com>
 #
-# This file is part of Ansible
+# Copyright 2014, Rackspace US, Inc.
 #
-# Ansible is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
 #
-# Ansible is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
+# http://www.apache.org/licenses/LICENSE-2.0
 #
-# You should have received a copy of the GNU General Public License
-# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 # Based on Jimmy Tang's implementation
@ -130,76 +127,76 @@ author: Kevin Carter
 EXAMPLES = """
 # Create an admin tenant
-- keystone: >
-    command=ensure_tenant
-    tenant_name=admin
-    description="Admin Tenant"
+- keystone:
+    command: "ensure_tenant"
+    tenant_name: "admin"
+    description: "Admin Tenant"
 # Create a service tenant
-- keystone: >
-    command=ensure_tenant
-    tenant_name=service
-    description="Service Tenant"
+- keystone:
+    command: "ensure_tenant"
+    tenant_name: "service"
+    description: "Service Tenant"
 # Create an admin user
-- keystone: >
-    command=ensure_user
-    user_name=admin
-    tenant_name=admin
-    password=secrete
-    email="admin@some-domain.com"
+- keystone:
+    command: "ensure_user"
+    user_name: "admin"
+    tenant_name: "admin"
+    password: "secrete"
+    email: "admin@some-domain.com"
 # Create an admin role
-- keystone: >
-    command=ensure_role
-    role_name=admin
+- keystone:
+    command: "ensure_role"
+    role_name: "admin"
 # Create a user
-- keystone: >
-    command=ensure_user
-    user_name=glance
-    tenant_name=service
-    password=secrete
-    email="glance@some-domain.com"
+- keystone:
+    command: "ensure_user"
+    user_name: "glance"
+    tenant_name: "service"
+    password: "secrete"
+    email: "glance@some-domain.com"
 # Add a role to a user
-- keystone: >
-    command=ensure_user_role
-    user_name=glance
-    tenant_name=service
-    role_name=admin
+- keystone:
+    command: "ensure_user_role"
+    user_name: "glance"
+    tenant_name: "service"
+    role_name: "admin"
 # Create a service
-- keystone: >
-    command=ensure_service
-    service_name=glance
-    service_type=image
-    description="Glance Image Service"
+- keystone:
+    command: "ensure_service"
+    service_name: "glance"
+    service_type: "image"
+    description: "Glance Image Service"
 # Create an endpoint
-- keystone: >
-    command=ensure_endpoint
-    region_name=RegionOne
-    service_name=glance
-    service_type=image
-    publicurl=http://127.0.0.1:9292
-    adminurl=http://127.0.0.1:9292
-    internalurl=http://127.0.0.1:9292
+- keystone:
+    command: "ensure_endpoint"
+    region_name: "RegionOne"
+    service_name: "glance"
+    service_type: "image"
+    publicurl: "http://127.0.0.1:9292"
+    adminurl: "http://127.0.0.1:9292"
+    internalurl: "http://127.0.0.1:9292"
 # Get tenant id
-- keystone: >
-    command=get_tenant
-    tenant_name=admin
+- keystone:
+    command: "get_tenant"
+    tenant_name: "admin"
 # Get user id
-- keystone: >
-    command=get_user
-    user_name=admin
+- keystone:
+    command: "get_user"
+    user_name: "admin"
 # Get role id
-- keystone: >
-    command=get_role
-    user_name=admin
+- keystone:
+    command: "get_role"
+    user_name: "admin"
 """

File diff suppressed because it is too large

playbooks/library/lxc-container (new executable file, 1481 lines)

File diff suppressed because it is too large


@@ -1,22 +1,19 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2014, Kevin Carter <kevin.carter@rackspace.com>
#
# This file is part of Ansible
# Copyright 2014, Rackspace US, Inc.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# http://www.apache.org/licenses/LICENSE-2.0
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import base64
@@ -63,7 +60,7 @@ options:
required: true
server:
description:
- server IP address and port. This can be a comma seperated list of
- server IP address and port. This can be a comma separated list of
servers to connect to.
required: true
encrypt_string:
@@ -537,7 +534,9 @@ class Memcached(object):
msg='The content you attempted to place within memcached'
' was not created. If you are load balancing'
' memcached, attempt to connect to a single node.'
' Returned a value of unstored keys [ %s ].' % value
' Returned a value of unstored keys [ %s ] - Original'
' Connection [ %s ]'
% (value, [i.__dict__ for i in self.mc.servers])
)


@@ -1,22 +1,19 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2014, Kevin Carter <kevin.carter@rackspace.com>
#
# This file is part of Ansible
# Copyright 2014, Rackspace US, Inc.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# http://www.apache.org/licenses/LICENSE-2.0
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
DOCUMENTATION = """
---


@@ -1,633 +0,0 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2014, Kevin Carter <kevin.carter@rackspace.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = """
---
module: swift
version_added: "1.6.2"
short_description:
- Manage objects stored in swift
description:
- Manage objects stored in swift
options:
login_user:
description:
- login username
required: true
login_password:
description:
- Password of login user
required: true
login_tenant_name:
description:
- The tenant login_user belongs to
required: false
default: None
login_url:
description:
- Authentication URL
required: true
region:
description:
- Region name to use when connecting to swift
required: false
container:
description:
- Name of container
required: true
src:
description:
- Path to the object. Only used in the 'upload' & 'download' commands
required: false
object:
description:
- Name of object
required: false
config_file:
description:
- Path to credential file
required: false
section:
description:
- Section within ``config_file`` to load
required: false
default: default
auth_version:
description:
- Swift authentication version
default: 2.0
required: false
snet:
description:
- Enable ServiceNet. This may not be supported by all providers;
set true or false.
default: false
marker:
description:
- Set beginning marker. Only used in 'list' command.
default: false
end_marker:
description:
- Set ending marker. Only used in 'list' command.
default: false
limit:
description:
- Set limit. Only used in 'list' command.
default: false
prefix:
description:
- Set prefix filter. Only used in 'list' command.
default: false
command:
description:
- Indicate desired state of the resource
choices: ['upload', 'download', 'delete', 'create', 'list']
required: true
notes:
- Environment variables can be set for all auth credentials which allows
for seamless access. The available environment variables are:
OS_USERNAME, OS_PASSWORD, OS_TENANT_ID, OS_AUTH_URL
- A configuration file can be used to load credentials, use ``config_file``
to source the file. If you have multiple sections within the
configuration file use the ``section`` argument to define the section,
however the default is set to "default".
requirements: [ python-swiftclient ]
author: Kevin Carter
"""
EXAMPLES = """
# Create a new container
- swift: >
login_user="SomeUser"
login_password="SomePassword"
login_url="https://identity.somedomain.com/v2.0/"
command=create
container=MyNewContainer
# Upload a new object
- swift: >
login_user="SomeUser"
login_password="SomePassword"
login_url="https://identity.somedomain.com/v2.0/"
command=upload
container=MyNewContainer
src=/path/to/file
object=MyNewObjectName
# Download an object
- swift: >
login_user="SomeUser"
login_password="SomePassword"
login_url="https://identity.somedomain.com/v2.0/"
command=download
container=MyNewContainer
src=/path/to/file
object=MyOldObjectName
# List up to 10,000 objects
- swift: >
login_user="SomeUser"
login_password="SomePassword"
login_url="https://identity.somedomain.com/v2.0/"
command=list
container=MyNewContainer
# Delete an Object
- swift: >
login_user="SomeUser"
login_password="SomePassword"
login_url="https://identity.somedomain.com/v2.0/"
command=delete
container=MyNewContainer
object=MyOldObjectName
# Delete a container
- swift: >
login_user="SomeUser"
login_password="SomePassword"
login_url="https://identity.somedomain.com/v2.0/"
command=delete
container=MyNewContainer
"""
COMMAND_MAP = {
'upload': {
'variables': [
'login_user',
'login_password',
'login_tenant_name',
'login_url',
'region',
'container',
'src',
'object',
'auth_version'
]
},
'download': {
'variables': [
'login_user',
'login_password',
'login_tenant_name',
'login_url',
'region',
'container',
'src',
'object',
'auth_version'
]
},
'delete': {
'variables': [
'login_user',
'login_password',
'login_tenant_name',
'login_url',
'region',
'container',
'object',
'auth_version'
]
},
'create': {
'variables': [
'login_user',
'login_password',
'login_tenant_name',
'login_url',
'region',
'container',
'auth_version'
]
},
'list': {
'variables': [
'login_user',
'login_password',
'login_tenant_name',
'login_url',
'region',
'container',
'auth_version',
'marker',
'limit',
'prefix',
'end_marker'
]
}
}
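The `COMMAND_MAP` above drives the module's dispatch: each command name carries the parameter names it consumes, and `command_router()` looks the handler up by name with `getattr`. A minimal, self-contained sketch of that pattern (class and method names here are illustrative, not the module's own):

```python
# Sketch of the COMMAND_MAP dispatch pattern: the map validates the command
# name, and the handler method is resolved dynamically as '_<command>'.
COMMAND_MAP = {
    'create': {'variables': ['login_user', 'login_password', 'container']},
    'delete': {'variables': ['login_user', 'login_password', 'container', 'object']},
}


class Router(object):
    def _create(self, variables):
        # A real handler would act on swift; here we just echo the inputs.
        return 'created with %s' % sorted(variables)

    def route(self, command_name):
        if command_name not in COMMAND_MAP:
            raise LookupError('Command [ %s ] was not found.' % command_name)
        handler = getattr(self, '_%s' % command_name, None)
        if handler is None:
            raise LookupError('Method [ %s ] was not found.' % command_name)
        return handler(variables=COMMAND_MAP[command_name]['variables'])
```

This keeps the list of supported commands in one data structure, so adding a command means adding a map entry plus a `_<name>` method.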
import ConfigParser
try:
from swiftclient import client
except ImportError:
swiftclient_found = False
else:
swiftclient_found = True
class ManageSwift(object):
def __init__(self, module):
"""Manage Swift via Ansible."""
self.state_change = False
self.swift = None
# Load AnsibleModule
self.module = module
def command_router(self):
"""Run the command as its provided to the module."""
command_name = self.module.params['command']
if command_name not in COMMAND_MAP:
self.failure(
error='No Command Found',
rc=2,
msg='Command [ %s ] was not found.' % command_name
)
action_command = COMMAND_MAP[command_name]
if hasattr(self, '_%s' % command_name):
action = getattr(self, '_%s' % command_name)
self._authenticate()
facts = action(variables=action_command['variables'])
if facts is None:
self.module.exit_json(changed=self.state_change)
else:
self.module.exit_json(
changed=self.state_change,
ansible_facts=facts
)
else:
self.failure(
error='Command not in ManageSwift class',
rc=2,
msg='Method [ %s ] was not found.' % command_name
)
@staticmethod
def _facts(facts):
"""Return a dict for our Ansible facts.
:param facts: ``dict`` Dict with data to return
"""
return {'swift_facts': facts}
def _get_vars(self, variables, required=None):
"""Return a dict of all variables as found within the module.
:param variables: ``list`` List of all variables that are available to
use within the Swift Command.
:param required: ``list`` Name of variables that are required.
"""
return_dict = {}
for variable in variables:
return_dict[variable] = self.module.params.get(variable)
else:
if isinstance(required, list):
for var_name in required:
check = return_dict.get(var_name)
if check is None:
self.failure(
error='Missing [ %s ] from Task or found a None'
' value' % var_name,
rc=2,
msg='variables %s - available params [ %s ]'
% (variables, self.module.params)
)
return return_dict
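The `_get_vars` helper collects the requested module parameters and then fails the task if any required name resolved to `None`. A standalone sketch of that contract (the function name and error type here are assumptions for illustration):

```python
# Minimal sketch of required-parameter validation: pick the requested
# names out of the params dict, then reject any required name that is None.
def get_vars(params, variables, required=None):
    picked = dict((name, params.get(name)) for name in variables)
    missing = [name for name in (required or []) if picked.get(name) is None]
    if missing:
        raise ValueError('Missing %s from task' % missing)
    return picked
```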
def failure(self, error, rc, msg):
"""Return a Failure when running an Ansible command.
:param error: ``str`` Error that occurred.
:param rc: ``int`` Return code while executing an Ansible command.
:param msg: ``str`` Message to report.
"""
self.module.fail_json(msg=msg, rc=rc, err=error)
def _env_vars(self, cred_file=None, section='default'):
"""Load environment or sourced credentials.
If the credentials are specified in either environment variables
or in a credential file, the sourced variables will be loaded if they
are not set within the ``module.params``.
:param cred_file: ``str`` Path to credentials file.
:param section: ``str`` Section within creds file to load.
"""
if cred_file:
parser = ConfigParser.SafeConfigParser()
parser.optionxform = str
parser.read(os.path.expanduser(cred_file))
for name, value in parser.items(section):
if name == 'OS_AUTH_URL':
if not self.module.params.get('login_url'):
self.module.params['login_url'] = value
if name == 'OS_USERNAME':
if not self.module.params.get('login_user'):
self.module.params['login_user'] = value
if name == 'OS_PASSWORD':
if not self.module.params.get('login_password'):
self.module.params['login_password'] = value
if name == 'OS_TENANT_ID':
if not self.module.params.get('login_tenant_name'):
self.module.params['login_tenant_name'] = value
else:
if not self.module.params.get('login_url'):
authurl = os.getenv('OS_AUTH_URL')
self.module.params['login_url'] = authurl
if not self.module.params.get('login_user'):
username = os.getenv('OS_USERNAME')
self.module.params['login_user'] = username
if not self.module.params.get('login_password'):
password = os.getenv('OS_PASSWORD')
self.module.params['login_password'] = password
if not self.module.params.get('login_tenant_name'):
tenant = os.getenv('OS_TENANT_ID')
self.module.params['login_tenant_name'] = tenant
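The fallback logic in `_env_vars` follows one rule: an explicit module parameter always wins, and only unset parameters are filled from the standard `OS_*` environment variables. A table-driven sketch of that rule (the helper name is an assumption; the variable mapping mirrors the module's):

```python
# Sketch of environment-variable fallback for auth credentials:
# params set in the task are kept; empty ones fall back to OS_* vars.
import os

ENV_MAP = {
    'login_url': 'OS_AUTH_URL',
    'login_user': 'OS_USERNAME',
    'login_password': 'OS_PASSWORD',
    'login_tenant_name': 'OS_TENANT_ID',
}


def fill_from_env(params, environ=os.environ):
    for param, env_name in ENV_MAP.items():
        if not params.get(param):
            params[param] = environ.get(env_name)
    return params
```

Driving the mapping from a dict avoids the repeated `if name == 'OS_...'` blocks in the original.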
def _authenticate(self):
"""Return a swift client object."""
cred_file = self.module.params.pop('config_file', None)
section = self.module.params.pop('section')
self._env_vars(cred_file=cred_file, section=section)
required_vars = ['login_url', 'login_user', 'login_password']
variables = [
'login_url',
'login_user',
'login_password',
'login_tenant_name',
'region',
'auth_version',
'snet'
]
variables_dict = self._get_vars(variables, required=required_vars)
login_url = variables_dict.pop('login_url')
login_user = variables_dict.pop(
'login_user', os.getenv('OS_AUTH_URL')
)
login_password = variables_dict.pop(
'login_password', os.getenv('OS_AUTH_URL')
)
login_tenant_name = variables_dict.pop(
'login_tenant_name', os.getenv('OS_TENANT_ID')
)
region = variables_dict.pop('region', None)
auth_version = variables_dict.pop('auth_version')
snet = variables_dict.pop('snet', None)
if snet in BOOLEANS_TRUE:
snet = True
else:
snet = None
if login_password is None:
self.failure(
error='Missing Password',
rc=2,
msg='A Password is required for authentication. Try adding'
' [ login_password ] to the task'
)
if login_tenant_name is None:
login_tenant_name = ' '
creds_dict = {
'user': login_user,
'key': login_password,
'authurl': login_url,
'tenant_name': login_tenant_name,
'os_options': {
'region': region
},
'snet': snet,
'auth_version': auth_version
}
self.swift = client.Connection(**creds_dict)
def _upload(self, variables):
"""Upload an object to a swift object store.
:param variables: ``list`` List of all variables that are available to
use within the Swift command.
"""
required_vars = ['container', 'src', 'object']
variables_dict = self._get_vars(variables, required=required_vars)
container_name = variables_dict.pop('container')
object_name = variables_dict.pop('object')
src_path = variables_dict.pop('src')
self._create_container(container_name=container_name)
with open(src_path, 'rb') as f:
self.swift.put_object(container_name, object_name, contents=f)
object_data = self.swift.head_object(container_name, object_name)
self.state_change = True
return self._facts(facts=[object_data])
def _download(self, variables):
"""Upload an object to a swift object store.
:param variables: ``list`` List of all variables that are available to
use within the Swift command.
"""
required_vars = ['container', 'src', 'object']
variables_dict = self._get_vars(variables, required=required_vars)
container_name = variables_dict.pop('container')
object_name = variables_dict.pop('object')
src_path = variables_dict.pop('src')
        # get_object returns a (headers, body) tuple; stream the body to disk.
        headers, chunks = self.swift.get_object(
            container_name, object_name, resp_chunk_size=204800
        )
        with open(src_path, 'wb') as f:
            for chunk in chunks:
                f.write(chunk)
self.state_change = True
def _delete(self, variables):
"""Upload an object to a swift object store.
If the ``object`` variable is not used the container will be deleted.
This assumes that the container is empty.
:param variables: ``list`` List of all variables that are available to
use within the Swift command.
"""
required_vars = ['container']
variables_dict = self._get_vars(variables, required=required_vars)
container_name = variables_dict.pop('container')
object_name = variables_dict.pop('object', None)
if object_name:
self.swift.delete_object(container_name, object_name)
else:
self.swift.delete_container(container_name)
self.state_change = True
def _create_container(self, container_name):
"""Ensure a container exists. If it does not, it will be created.
:param container_name: ``str`` Name of the container.
"""
try:
container = self.swift.head_container(container_name)
except client.ClientException:
self.swift.put_container(container_name)
else:
return container
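`_create_container` uses a head-then-put probe to stay idempotent: it only creates the container when the existence check raises. A sketch of that pattern under stated assumptions — `FakeSwift` and `ClientException` below stand in for python-swiftclient's `Connection` and its exception type:

```python
# Sketch of the idempotent ensure-container pattern: probe with a HEAD,
# create only when the probe fails, then return the container headers.
class ClientException(Exception):
    """Stand-in for swiftclient.client.ClientException."""


class FakeSwift(object):
    """Minimal in-memory stand-in for a swift Connection."""

    def __init__(self):
        self.containers = {}

    def head_container(self, name):
        if name not in self.containers:
            raise ClientException(name)
        return self.containers[name]

    def put_container(self, name):
        self.containers[name] = {'x-container-object-count': '0'}


def ensure_container(swift, name):
    """Return container headers, creating the container if needed."""
    try:
        return swift.head_container(name)
    except ClientException:
        swift.put_container(name)
        return swift.head_container(name)
```

Because the create path re-reads the headers, callers always get the same shape of result whether the container pre-existed or not.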
def _create(self, variables):
"""Create a new container in swift.
:param variables: ``list`` List of all variables that are available to
use within the Swift command.
"""
required_vars = ['container']
variables_dict = self._get_vars(variables, required=required_vars)
container_name = variables_dict.pop('container')
container_data = self._create_container(container_name=container_name)
if not container_data:
container_data = self.swift.head_container(container_name)
return self._facts(facts=[container_data])
def _list(self, variables):
"""Return a list of objects or containers.
If the ``container`` variable is not used this will return a list of
containers in the region.
:param variables: ``list`` List of all variables that are available to
use within the Swift command.
"""
variables_dict = self._get_vars(variables)
container_name = variables_dict.pop('container', None)
filters = {
'marker': variables_dict.pop('marker', None),
'limit': variables_dict.pop('limit', None),
'prefix': variables_dict.pop('prefix', None),
'end_marker': variables_dict.pop('end_marker', None)
}
if container_name:
list_data = self.swift.get_container(container_name, **filters)[1]
else:
list_data = self.swift.get_account(**filters)[1]
return self._facts(facts=list_data)
def main():
module = AnsibleModule(
argument_spec=dict(
login_user=dict(
required=False
),
login_password=dict(
required=False
),
login_tenant_name=dict(
required=False
),
login_url=dict(
required=False
),
config_file=dict(
required=False
),
section=dict(
required=False,
default='default'
),
command=dict(
required=True,
choices=COMMAND_MAP.keys()
),
region=dict(
required=False
),
container=dict(
required=False
),
src=dict(
required=False
),
object=dict(
required=False
),
marker=dict(
required=False
),
limit=dict(
required=False
),
prefix=dict(
required=False
),
end_marker=dict(
required=False
),
auth_version=dict(
required=False,
default='2.0'
),
snet=dict(
required=False,
default='false',
choices=BOOLEANS
)
),
supports_check_mode=False,
)
sm = ManageSwift(module=module)
if not swiftclient_found:
sm.failure(
error='python-swiftclient is missing',
rc=2,
msg='Swift client was not importable, is it installed?'
)
sm.command_router()
# import module snippets
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()


@@ -1,20 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Setup supporting services
- hosts: logstash
user: root
roles:
- logstash


@@ -1,26 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: logstash
user: root
roles:
- container_extra_setup
- common
- container_common
- logging_common
- logstash
vars_files:
- vars/repo_packages/logstash.yml
- vars/config_vars/container_config_logstash.yml


@@ -13,11 +13,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: "{{ host_group|default('hosts') }}"
user: root
- name: Create container(s)
hosts: "{{ container_group|default('all_containers') }}"
max_fail_percentage: 20
gather_facts: false
user: root
roles:
- container_restart
- { role: "lxc_container_create", tags: [ "lxc-container-create" ] }
vars:
default_container_groups: "{{ hostvars[inventory_hostname]['container_types'] }}"
container_groups: "{{ groups[container_group|default(default_container_groups)] | default([]) }}"
ansible_hostname: "{{ container_name }}"
is_metal: "{{ properties.is_metal|default(false) }}"


@@ -0,0 +1,52 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Destroy lxc containers
hosts: "{{ container_group|default('all_containers') }}"
max_fail_percentage: 20
gather_facts: false
user: root
tasks:
- name: Destroy a container
lxc-container:
name: "{{ container_name }}"
state: "absent"
delegate_to: "{{ physical_host }}"
tags:
- container-destroy
- name: Destroy container service directories
file:
path: "{{ item }}"
state: "absent"
with_items:
- "/openstack/{{ container_name }}"
- "/openstack/backup/{{ container_name }}"
- "/openstack/log/{{ container_name }}"
- "/var/lib/lxc/{{ container_name }}"
delegate_to: "{{ physical_host }}"
tags:
- container-directories
- name: Destroy lxc containers
hosts: "hosts"
max_fail_percentage: 20
gather_facts: false
user: root
tasks:
- name: Flush net cache
command: /usr/local/bin/lxc-system-manage flush-net-cache
delegate_to: "{{ physical_host }}"
tags:
- flush-net-cache


@@ -13,15 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: cinder_all
- name: Basic lxc host setup
hosts: "{{ host_group|default('hosts') }}"
max_fail_percentage: 20
user: root
roles:
- common
- common_sudoers
- container_common
- openstack_common
- openstack_openrc
- galera_client_cnf
- { role: "lxc_hosts", tags: [ "lxc-host", "host-setup" ] }
- { role: "py_from_git", tags: [ "lxc-libs" ] }
vars_files:
- vars/repo_packages/cinder.yml
- vars/openstack_service_vars/cinder_api.yml
- vars/repo_packages/python2_lxc.yml


@@ -13,13 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: memcached
- name: Install memcached
hosts: memcached
max_fail_percentage: 20
user: root
roles:
- container_extra_setup
- common
- container_common
- memcached
vars_files:
- vars/config_vars/container_config_memcached.yml
- vars/repo_packages/memcached.yml
- { role: "memcached_server", tags: [ "memcached-server" ] }
vars:
is_metal: "{{ properties.is_metal|default(false) }}"


@@ -1,22 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: neutron-common.yml
- include: neutron-server.yml
- include: neutron-metadata-agent.yml
- include: neutron-dhcp-agent.yml
- include: neutron-linuxbridge-agent.yml
- include: neutron-l3-agent.yml
- include: neutron-metering-agent.yml


@@ -1,27 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: neutron_all
user: root
roles:
- common
- common_sudoers
- container_common
- openstack_common
- openstack_openrc
- galera_client_cnf
vars_files:
- vars/repo_packages/neutron.yml
- inventory/group_vars/neutron_all.yml


@@ -1,25 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: neutron_dhcp_agent
user: root
roles:
- neutron_common
- init_script
vars_files:
- inventory/group_vars/neutron_all.yml
- vars/openstack_service_vars/neutron_dhcp_agent.yml
handlers:
- include: handlers/services.yml


@@ -1,27 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: neutron_l3_agent
user: root
roles:
- neutron_common
- galera_client_cnf
- init_script
- neutron_l3_ha
vars_files:
- inventory/group_vars/neutron_all.yml
- vars/openstack_service_vars/neutron_l3_agent.yml
handlers:
- include: handlers/services.yml


@@ -1,27 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: neutron_linuxbridge_agent
user: root
roles:
- container_extra_setup
- neutron_common
- init_script
vars_files:
- inventory/group_vars/neutron_all.yml
- vars/config_vars/container_config_neutron.yml
- vars/openstack_service_vars/neutron_linuxbridge_agent.yml
handlers:
- include: handlers/services.yml


@@ -1,25 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: neutron_metadata_agent
user: root
roles:
- neutron_common
- init_script
vars_files:
- inventory/group_vars/neutron_all.yml
- vars/openstack_service_vars/neutron_metadata_agent.yml
handlers:
- include: handlers/services.yml


@@ -1,25 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: neutron_metering_agent
user: root
roles:
- neutron_common
- init_script
vars_files:
- inventory/group_vars/neutron_all.yml
- vars/openstack_service_vars/neutron_metering_agent.yml
handlers:
- include: handlers/services.yml


@@ -1,38 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: neutron_server[0]
user: root
roles:
- galera_db_setup
- neutron_common
- neutron_setup
- init_script
vars_files:
- inventory/group_vars/neutron_all.yml
- vars/openstack_service_vars/neutron_server.yml
handlers:
- include: handlers/services.yml
- hosts: neutron_server:!neutron_server[0]
user: root
roles:
- neutron_common
- init_script
vars_files:
- inventory/group_vars/neutron_all.yml
- vars/openstack_service_vars/neutron_server.yml
handlers:
- include: handlers/services.yml


@@ -1,25 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: nova-common.yml
- include: nova-api-os-compute.yml
- include: nova-api-ec2.yml
- include: nova-api-metadata.yml
- include: nova-scheduler.yml
- include: nova-conductor.yml
- include: nova-cert.yml
- include: nova-compute.yml
- include: nova-compute-keys.yml
- include: nova-spice-console.yml

@@ -1,52 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: nova_api_ec2[0]
  user: root
  roles:
    - keystone_add_service
  vars_files:
    - vars/openstack_service_vars/nova_api_ec2_endpoint.yml
- hosts: nova_api_ec2[0]
  user: root
  roles:
    - keystone_add_service
  vars_files:
    - vars/openstack_service_vars/nova_api_s3_endpoint.yml
- hosts: nova_api_ec2
  user: root
  roles:
    - nova_common
    - init_script
  vars_files:
    - inventory/group_vars/nova_all.yml
    - vars/openstack_service_vars/nova_api_ec2.yml
    - vars/openstack_service_vars/nova_spice_console_endpoint.yml
  handlers:
    - include: handlers/services.yml
- hosts: nova_api_ec2:!nova_api_ec2[0]
  user: root
  roles:
    - nova_common
    - init_script
  vars_files:
    - inventory/group_vars/nova_all.yml
    - vars/openstack_service_vars/nova_api_ec2.yml
    - vars/openstack_service_vars/nova_spice_console_endpoint.yml
  handlers:
    - include: handlers/services.yml

@@ -1,26 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: nova_api_metadata
  user: root
  roles:
    - nova_common
    - init_script
  vars_files:
    - inventory/group_vars/nova_all.yml
    - vars/openstack_service_vars/nova_api_metadata.yml
    - vars/openstack_service_vars/nova_spice_console_endpoint.yml
  handlers:
    - include: handlers/services.yml

@@ -1,54 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: nova_api_os_compute[0]
  user: root
  roles:
    - keystone_add_service
  vars_files:
    - vars/openstack_service_vars/nova_api_os_compute_endpoint.yml
- hosts: nova_api_os_compute[0]
  user: root
  roles:
    - keystone_add_service
  vars_files:
    - vars/openstack_service_vars/nova_api_os_computev3_endpoint.yml
- hosts: nova_api_os_compute[0]
  user: root
  roles:
    - galera_db_setup
    - nova_common
    - nova_setup
    - init_script
  vars_files:
    - inventory/group_vars/nova_all.yml
    - vars/openstack_service_vars/nova_api_os_compute.yml
    - vars/openstack_service_vars/nova_spice_console_endpoint.yml
  handlers:
    - include: handlers/services.yml
- hosts: nova_api_os_compute:!nova_api_os_compute[0]
  user: root
  roles:
    - nova_common
    - init_script
  vars_files:
    - inventory/group_vars/nova_all.yml
    - vars/openstack_service_vars/nova_api_os_compute.yml
    - vars/openstack_service_vars/nova_spice_console_endpoint.yml
  handlers:
    - include: handlers/services.yml

@@ -1,26 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: nova_cert
  user: root
  roles:
    - nova_common
    - init_script
  vars_files:
    - inventory/group_vars/nova_all.yml
    - vars/openstack_service_vars/nova_cert.yml
    - vars/openstack_service_vars/nova_spice_console_endpoint.yml
  handlers:
    - include: handlers/services.yml

@@ -1,27 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: nova_all
  user: root
  roles:
    - common
    - common_sudoers
    - container_common
    - openstack_common
    - openstack_openrc
    - galera_client_cnf
  vars_files:
    - inventory/group_vars/nova_all.yml
    - vars/repo_packages/nova.yml

@@ -1,56 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: nova_compute
  user: root
  roles:
    - nova_compute_sshkey_create
  vars_files:
    - inventory/group_vars/nova_all.yml
- hosts: nova_compute[0]
  user: root
  gather_facts: false
  tasks:
    - name: Distribute authorized keys for cluster consumption
      memcached:
        name: "{{ item.name }}"
        file_path: "{{ item.src }}"
        state: "present"
        server: "{{ hostvars[groups['memcached'][0]]['ansible_ssh_host'] }}:11211"
        encrypt_string: "{{ memcached_encryption_key }}"
      with_items:
        - { src: "/var/lib/nova/.ssh/authorized_keys", name: "authorized_keys" }
- hosts: nova_compute:!nova_compute[0]
  user: root
  gather_facts: false
  tasks:
    - name: Retrieve authorized keys
      memcached:
        name: "{{ item.name }}"
        file_path: "{{ item.src }}"
        state: "retrieve"
        file_mode: "{{ item.file_mode }}"
        dir_mode: "{{ item.dir_mode }}"
        server: "{{ hostvars[groups['memcached'][0]]['ansible_ssh_host'] }}:11211"
        encrypt_string: "{{ memcached_encryption_key }}"
      with_items:
        - { src: "/var/lib/nova/.ssh/authorized_keys", name: "authorized_keys", file_mode: "0640", dir_mode: "0750" }
- hosts: nova_compute
  user: root
  roles:
    - nova_compute_sshkey_setup

@@ -1,33 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: nova_compute
  user: root
  roles:
    - container_extra_setup
    - container_common
    - neutron_add_network_interfaces
    - nova_compute_devices
    - nova_common
    - nova_libvirt
    - init_script
  vars_files:
    - inventory/group_vars/nova_all.yml
    - vars/repo_packages/nova_libvirt.yml
    - vars/config_vars/container_config_nova_compute.yml
    - vars/openstack_service_vars/nova_compute.yml
    - vars/openstack_service_vars/nova_spice_console_endpoint.yml
  handlers:
    - include: handlers/services.yml

@@ -1,26 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: nova_conductor
  user: root
  roles:
    - nova_common
    - init_script
  vars_files:
    - inventory/group_vars/nova_all.yml
    - vars/openstack_service_vars/nova_conductor.yml
    - vars/openstack_service_vars/nova_spice_console_endpoint.yml
  handlers:
    - include: handlers/services.yml

@@ -1,26 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: nova_scheduler
  user: root
  roles:
    - nova_common
    - init_script
  vars_files:
    - inventory/group_vars/nova_all.yml
    - vars/openstack_service_vars/nova_scheduler.yml
    - vars/openstack_service_vars/nova_spice_console_endpoint.yml
  handlers:
    - include: handlers/services.yml

@@ -1,38 +0,0 @@
---
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- hosts: nova_spice_console
  user: root
  roles:
    - container_common
    - nova_common
    - init_script
  vars_files:
    - inventory/group_vars/nova_all.yml
    - vars/repo_packages/nova_spice_console.yml
    - vars/openstack_service_vars/nova_spice_console.yml
    - vars/openstack_service_vars/nova_spice_console_endpoint.yml
  handlers:
    - include: handlers/services.yml
- hosts: nova_spice_console
  user: root
  roles:
    - nova_common
    - init_script
  vars_files:
    - vars/openstack_service_vars/nova_console_auth.yml
  handlers:
    - include: handlers/services.yml

@@ -13,10 +13,9 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
-- hosts: "{{ host_group|default('all_containers') }}"
+- name: Basic host setup
+  hosts: "{{ host_group|default('hosts') }}"
+  max_fail_percentage: 20
   user: root
-  gather_facts: false
   roles:
-    - container_setup
-  vars_files:
-    - vars/config_vars/container_interfaces.yml
+    - { role: "openstack_hosts", tags: [ "openstack-hosts-setup" ] }
