Remove and/or rename Rackspace-related bits

This patch removes and/or renames anything Rackspace-specific from the
playbooks, roles and variables.

It also removes items which appear to be orphaned/unused, and flattens
the playbooks into a single directory to better match Ansible best
practice (and remove some horrible fiddles we were doing).

The following have been removed due to RAX/RPC naming or
RAX/RPC-specific usage:
 - playbooks/monitoring
 - playbooks/rax*
 - playbooks/rpc*
 - roles/maas*
 - roles/rax*
 - roles/rpc*
 - scripts/f5-*
 - scripts/maas*
 - scripts/rpc*
 - scripts/*lab*
 - vars/repo_packages/rackspace*
 - vars/repo_packages/rax*
 - vars/repo_packages/rpc*
 - vars/repo_packages/holland.yml

The following have been removed as they are unused:
 - playbooks/setup/host-network-setup.yml
 - roles/openssl_pem_request
 - roles/host_interfaces
 - scripts/elsa*
 - ssh/
 - vars/repo_packages/turbolift.yml

The following directories have been renamed:
 - etc/rpc_deploy > etc/openstack_deploy
 - rpc_deployment > playbooks

The playbooks have all been moved into a single directory:
 - rpc_deployment/playbooks/infrastructure/* > playbooks/
 - rpc_deployment/playbooks/openstack/* > playbooks/
 - rpc_deployment/playbooks/setup/* > playbooks/

The following files have been renamed:
 - lxc-rpc > lxc-openstack
 - lxc-rpc.conf > lxc-openstack.conf
 - rpc_environment > openstack_environment
 - rpc_release > openstack_release (etc and pip)
 - rpc_tempest_gate.sh > openstack_tempest_gate.sh
 - rpc_user_config > openstack_user_config

The following variables have been renamed:
 - rpc_release > openstack_release
 - rpc_repo_url > openstack_repo_url

The following variables have been introduced:
 - openstack_code_name: The code name of the upstream OpenStack release
   (e.g. Juno)

Notable variable/template value changes:
 - rabbit_cluster_name: rpc > openstack
 - wsrep_cluster_name: rpc_galera_cluster > openstack_galera_cluster
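
For deployers carrying an existing environment across this change, the
renames above translate into a configuration migration along these
lines (a minimal sketch, assuming the default /etc/rpc_deploy location
and the old variable names; review before running):

    # Rename the configuration directory and the files within it.
    mv /etc/rpc_deploy /etc/openstack_deploy
    mv /etc/openstack_deploy/rpc_user_config.yml \
       /etc/openstack_deploy/openstack_user_config.yml
    mv /etc/openstack_deploy/rpc_environment.yml \
       /etc/openstack_deploy/openstack_environment.yml

    # Update the renamed variables in any override files.
    sed -i 's/rpc_release/openstack_release/g; s/rpc_repo_url/openstack_repo_url/g' \
        /etc/openstack_deploy/user_variables.yml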

DocImpact
Closes-Bug: #1403676
Implements: blueprint rackspace-namesake
Change-Id: Ib480fdad500b03c7cb90684aa444da9946ba8032
Jesse Pretorius 2015-02-11 16:26:22 +00:00
parent b707e13333
commit 81c4ab04f7
630 changed files with 173 additions and 49788 deletions


@ -1,4 +1,4 @@
Openstack Deployment with Ansible
OpenStack Deployment with Ansible
#################################
:date: 2014-09-25 09:00
:tags: lxc, openstack, cloud, ansible
@ -58,13 +58,13 @@ Assumptions
This repo assumes that you have set up the host servers that will be running the OpenStack infrastructure with three bridged network devices named: ``br-mgmt``, ``br-vxlan``, ``br-vlan``. These bridges will be used throughout the OpenStack infrastructure.
It also relies on configuration files found in the `/etc` directory of this repo.
If you are running Ansible from an "unprivileged" host, you can place the contents of the /etc/ directory in your home folder; this would be in a directory similar to `/home/<myusername>/rpc_deploy/`. Once you have the file in place, you will have to enter the details of your environment in the `rpc_user_config.yml` file; please see the file for how this should look. After you have a bridged network and the files/directory in place, continue on to _`Base Usage`.
If you are running Ansible from an "unprivileged" host, you can place the contents of the /etc/ directory in your home folder; this would be in a directory similar to `/home/<myusername>/openstack_deploy/`. Once you have the file in place, you will have to enter the details of your environment in the `openstack_user_config.yml` file; please see the file for how this should look. After you have a bridged network and the files/directory in place, continue on to _`Base Usage`.
Base Usage
----------
All commands must be executed from the ``rpc_deployment`` directory. From this directory you will have access to all of the playbooks, roles, and variables. It is recommended that you create an override file to contain any and all variables that you wish to override for the deployment. While the override file is not required, it will make life a bit easier. The default override file for the RPC environment is the ``user_variables.yml`` file.
All commands must be executed from the ``playbooks`` directory. From this directory you will have access to all of the playbooks, roles, and variables. It is recommended that you create an override file to contain any and all variables that you wish to override for the deployment. While the override file is not required, it will make life a bit easier. The default override file for the environment is the ``user_variables.yml`` file.
All of the variables that you may wish to update are in the ``vars/`` directory; however, you should also be aware that services will pull in base group variables as found in ``inventory/group_vars``.
@ -77,28 +77,27 @@ manually or use the script ``pw-token-gen.py``. Example:
.. code-block:: bash
# Generate the tokens
scripts/pw-token-gen.py --file /etc/rpc_deploy/user_variables.yml
scripts/pw-token-gen.py --file /etc/openstack_deploy/user_variables.yml
Example usage from the `rpc_deployment` directory in the ``ansible-rpc-lxc`` repository
Example usage from the `playbooks` directory in the ``os-ansible-deployment`` repository
.. code-block:: bash
# Run setup on all hosts:
ansible-playbook -e @vars/user_variables.yml playbooks/setup/host-setup.yml
ansible-playbook -e @vars/user_variables.yml playbooks/host-setup.yml
# Run infrastructure on all hosts
ansible-playbook -e @vars/user_variables.yml playbooks/infrastructure/infrastructure-setup.yml
ansible-playbook -e @vars/user_variables.yml playbooks/infrastructure-setup.yml
# Setup and configure openstack within your spec'd containers
ansible-playbook -e @vars/user_variables.yml playbooks/openstack/openstack-setup.yml
ansible-playbook -e @vars/user_variables.yml playbooks/openstack-setup.yml
About Inventory
---------------
All things that Ansible cares about are located in inventory. In the Rackspace Private Cloud all
inventory is dynamically generated using the previously mentioned configuration files. While this is a dynamically generated inventory, it is not 100% generated on every run. The inventory is saved in a file named `rpc_inventory.json` and is located in the directory where you've located your user configuration files. On every run a backup of the inventory json file is created in both the current working directory as well as the location where the user configuration files exist. The inventory json file is a living document and is intended to grow as the environment scales in infrastructure. This means that the inventory file will be appended to as you add more nodes and/or change the container affinity from within the `rpc_user_config.yml` file. It is recommended that the base inventory file be backed up to a safe location upon the completion of a deployment operation. While the dynamic inventory processor has guards in it to ensure that the built inventory is not adversely affected by programmatic operations, this does not guard against user error and/or catastrophic failure.
All things that Ansible cares about are located in inventory. The whole inventory is dynamically generated using the previously mentioned configuration files. While this is a dynamically generated inventory, it is not 100% generated on every run. The inventory is saved in a file named `openstack_inventory.json` and is located in the directory where you've located your user configuration files. On every run a backup of the inventory json file is created in both the current working directory as well as the location where the user configuration files exist. The inventory json file is a living document and is intended to grow as the environment scales in infrastructure. This means that the inventory file will be appended to as you add more nodes and/or change the container affinity from within the `openstack_user_config.yml` file. It is recommended that the base inventory file be backed up to a safe location upon the completion of a deployment operation. While the dynamic inventory processor has guards in it to ensure that the built inventory is not adversely affected by programmatic operations, this does not guard against user error and/or catastrophic failure.
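For instance, a deployer might keep a dated copy of the generated inventory after each successful run (a sketch; the path assumes the user configuration files live in /etc/openstack_deploy):

.. code-block:: bash

    # Keep a timestamped backup of the generated inventory alongside the
    # automatic tar backups; adjust the path to match your config location.
    cp /etc/openstack_deploy/openstack_inventory.json \
       "/etc/openstack_deploy/openstack_inventory.json.$(date +%Y%m%d%H%M%S)"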
Scaling


@ -1,28 +1,28 @@
Ansible Openstack LXC Configuration
Ansible OpenStack LXC Configuration
===================================
:date: 2013-09-05 09:51
:tags: rackspace, lxc, rpc, openstack, cloud, ansible
:tags: lxc, openstack, cloud, ansible
:category: \*nix
This directory contains the files needed to make the rpc_deployment process work.
The inventory is generated from a user configuration file named ``rpc_user_config.yml``.
To load inventory you MUST copy the directory ``rpc_deploy`` to either ``$HOME/`` or ``/etc/``.
With this folder in place, you will need to enter the folder and edit the file ``rpc_user_config.yml``.
This directory contains the files needed to make the openstack_deployment process work.
The inventory is generated from a user configuration file named ``openstack_user_config.yml``.
To load inventory you MUST copy the directory ``openstack_deploy`` to either ``$HOME/`` or ``/etc/``.
With this folder in place, you will need to enter the folder and edit the file ``openstack_user_config.yml``.
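In practice, putting the configuration in place looks something like this (a sketch, run from the root of the repository checkout; the target location and editor are illustrative):

.. code-block:: bash

    # Copy the example configuration into place, then fill in the
    # details of your environment.
    cp -R etc/openstack_deploy /etc/
    ${EDITOR:-vi} /etc/openstack_deploy/openstack_user_config.yml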
The file will contain all of the IP addresses/hostnames that your infrastructure will exist on
as well as a CIDR that your containers will have IP addresses assigned from. This allows for easy
scaling, as new nodes and/or affinity for containers are all set within this file.
Please see the ``rpc_user_config.yml`` file in the provided ``/etc`` directory for more details on how
Please see the ``openstack_user_config.yml`` file in the provided ``/etc`` directory for more details on how
that file is setup.
If you need some assistance defining the CIDR for a given IP address range, check out http://www.ipaddressguide.com/cidr
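If you would rather work it out locally, the Python 3 standard library can do the same calculation (a sketch; the address range shown is purely illustrative):

.. code-block:: bash

    # Find the CIDR block(s) covering an example address range using
    # only the Python 3 standard library.
    python3 -c "import ipaddress; \
      print([str(net) for net in ipaddress.summarize_address_range( \
        ipaddress.ip_address('172.29.236.0'), \
        ipaddress.ip_address('172.29.239.255'))])"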
Words on rpc_user_config.yml
############################
Words on openstack_user_config.yml
##################################
While the ``rpc_user_config.yml`` file is annotated fairly heavily with examples and information regarding the options, here's some more information on what the file consists of and how to use it.
While the ``openstack_user_config.yml`` file is annotated fairly heavily with examples and information regarding the options, here's some more information on what the file consists of and how to use it.
Global options
@ -79,7 +79,7 @@ Here's the ``global_overrides`` syntax
Predefined host groups
----------------------
The user configuration file has 4 defined groups which have mappings found within the ``rpc_environment.yml`` file.
The user configuration file has 4 defined groups which have mappings found within the ``openstack_environment.yml`` file.
The predefined groups are:
* infra_hosts:
@ -88,7 +88,7 @@ The predefined groups are:
* log_hosts:
Any host specified within these groups will have containers built within them automatically. The containers that will be built are all mapped out within the rpc_environment.json file.
Any host specified within these groups will have containers built within them automatically. The containers that will be built are all mapped out within the openstack_environment.json file.
When specifying hosts inside of any of the known groups the syntax is as follows:


@ -1,7 +1,7 @@
Ansible Openstack Networking
Ansible OpenStack Networking
============================
:date: 2013-09-05 09:51
:tags: rackspace, rpc, openstack, cloud, ansible, networking, bond, interfaces
:tags: openstack, cloud, ansible, networking, bond, interfaces
:category: \*nix
This directory contains some base interface files that will allow you to see what


@ -4,7 +4,7 @@
#2176 - CONTAINER_NET
#1998 - OVERLAY_NET
#2144 - STORAGE_NET
#2146 - GATEWAY_NET (VM Provider Network. Ignore this. Openstack will tag for us.)
#2146 - GATEWAY_NET (VM Provider Network. Ignore this. OpenStack will tag for us.)
## Physical interface, could be bond. This only needs to be set once for the physical device
auto eth0


@ -38,7 +38,7 @@ used_ips:
- 172.29.244.1,172.29.244.50
# As a user you can define anything that you may wish to "globally"
# override from within the rpc_deploy configuration file. Anything
# override from within the openstack_deploy configuration file. Anything
# specified here will take precedence over anything else anywhere.
global_overrides:
# Internal Management vip address
@ -218,7 +218,7 @@ network_hosts:
# Other hosts can be added whenever needed. Note that containers will not be
# assigned to "other" hosts by default. If you would like to have containers
# assigned to hosts that are outside of the predefined groups, you will need to
# make an edit to the rpc_environment.yml file.
# make an edit to the openstack_environment.yml file.
# haproxy_hosts:
# haproxy1:
# ip: 10.0.0.12


@ -13,15 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
## Rackspace Cloud Details
# UK accounts: https://lon.identity.api.rackspacecloud.com/v2.0
rackspace_cloud_auth_url: https://identity.api.rackspacecloud.com/v2.0
rackspace_cloud_tenant_id: SomeTenantID
rackspace_cloud_username: SomeUserName
rackspace_cloud_password: SomeUsersPassword
rackspace_cloud_api_key: SomeAPIKey
## Rabbit Options
rabbitmq_password:
rabbitmq_cookie_token:
@ -60,11 +51,11 @@ cinder_v2_service_password:
glance_default_store: file
glance_container_mysql_password:
glance_service_password:
glance_swift_store_auth_address: "{{ rackspace_cloud_auth_url }}"
glance_swift_store_user: "{{ rackspace_cloud_tenant_id }}:{{ rackspace_cloud_username }}"
glance_swift_store_key: "{{ rackspace_cloud_password }}"
glance_swift_store_container: SomeContainerName
glance_swift_store_region: SomeRegion
#glance_swift_store_auth_address:
#glance_swift_store_user:
#glance_swift_store_key:
#glance_swift_store_container: SomeContainerName
#glance_swift_store_region: SomeRegion
# `internalURL` will cause glance to speak to swift via ServiceNet, use
# `publicURL` to communicate with swift over the public network
glance_swift_store_endpoint_type: internalURL
@ -86,53 +77,6 @@ heat_cfn_service_password:
horizon_container_mysql_password:
horizon_secret_key:
## MaaS Options
# Set maas_auth_method to 'token' to use maas_auth_token/maas_api_url
# instead of maas_username/maas_api_key
maas_auth_method: password
maas_auth_url: "{{ rackspace_cloud_auth_url }}"
maas_username: "{{ rackspace_cloud_username }}"
maas_api_key: "{{ rackspace_cloud_api_key }}"
maas_auth_token: some_token
maas_api_url: https://monitoring.api.rackspacecloud.com/v1.0/{{ rackspace_cloud_tenant_id }}
maas_notification_plan: npManaged
# By default we will create an agent token for each entity, however if you'd
# prefer to use the same agent token for all entities then specify it here
#maas_agent_token: some_token
maas_target_alias: public0_v4
maas_scheme: https
# Override scheme for specific service remote monitor by specifying here: E.g.
# maas_nova_scheme: http
maas_keystone_user: maas
maas_keystone_password:
# Check this number of times before registering state change
maas_alarm_local_consecutive_count: 3
maas_alarm_remote_consecutive_count: 1
# Period and timeout times (seconds) for a check
# Timeout must be less than period
maas_check_period: 60
maas_check_timeout: 30
maas_monitoring_zones:
- mzdfw
- mziad
- mzord
- mzlon
- mzhkg
# Specify the maas_fqdn_extension, defaults to empty string.
# This will be appended to the inventory_name of a host for MaaS purposes.
# The inventory name + maas_fqdn_extension must match the entity name in MaaS
# maas_fqdn_extension: .example.com
# Set the following to skip creating alarms for this device
#maas_excluded_devices: ['xvde']
# Set the threshold for filesystem monitoring when you are not specifying specific filesystems.
maas_filesystem_threshold: 90.0
# Explicitly set the filesystems to set up monitors/alerts for.
# NB You can override these in rpc_user_config per device using "maas_filesystem_overrides"
#maas_filesystem_monitors:
# - filesystem: /
# threshold: 90.0
# - filesystem: /boot
# threshold: 90.0
## Neutron Options
neutron_container_mysql_password:
@ -152,15 +96,11 @@ nova_service_password:
nova_v3_service_password:
nova_s3_service_password:
# Uncomment "nova_console_endpoint" to define a specific nova console URI or
# IP address this will construct the specific proxy endpoint for the console.
# nova_console_endpoint: console.company_domain.name
## RPC Support
rpc_support_holland_password:
# rpc_support_holland_branch: defaults to release tag: v1.0.10
## Kibana Options
kibana_password:


@ -14,7 +14,7 @@
# limitations under the License.
# Example usage:
# ansible-playbook -i inventory/dynamic_inventory.py -e "host_group=infra1,container_name=horizon_container" setup/archive-container.yml
# ansible-playbook -i inventory/dynamic_inventory.py -e "host_group=infra1,container_name=horizon_container" archive-container.yml
# This will create a new archive of an existing container and then retrieve
# the container storing the archive on the local system. Once the archive


@ -52,7 +52,7 @@ def args():
"""Setup argument Parsing."""
parser = argparse.ArgumentParser(
usage='%(prog)s',
description='Rackspace Openstack, Inventory Generator',
description='OpenStack Inventory Generator',
epilog='Inventory Generator Licensed "Apache 2.0"')
parser.add_argument(
@ -82,7 +82,7 @@ def get_ip_address(name, ip_q):
except Queue.Empty:
raise SystemExit(
'Cannot retrieve requested amount of IP addresses. Increase the %s'
' range in your rpc_user_config.yml.' % name
' range in your openstack_user_config.yml.' % name
)
@ -302,7 +302,7 @@ def _add_container_hosts(assignment, config, container_name, container_type,
' 52 characters. This combination will result in a container'
' name that is longer than the maximum allowable hostname of'
' 63 characters. Before this process can continue please'
' adjust the host entries in your "rpc_user_config.yml" to use'
' adjust the host entries in your "openstack_user_config.yml" to use'
' a short hostname. The recommended hostname length is < 20'
' characters long.' % (host_type, container_name)
)
@ -541,17 +541,17 @@ def file_find(pass_exception=False, user_file=None):
If no file is found the system will exit.
The file lookup will be done in the following directories:
/etc/rpc_deploy/
$HOME/rpc_deploy/
$(pwd)/rpc_deploy/
/etc/openstack_deploy/
$HOME/openstack_deploy/
$(pwd)/openstack_deploy/
:param pass_exception: ``bool``
:param user_file: ``str`` Additional location to look in FIRST for a file
"""
file_check = [
os.path.join('/etc', 'rpc_deploy'),
os.path.join(os.environ.get('HOME'), 'rpc_deploy')
os.path.join('/etc', 'openstack_deploy'),
os.path.join(os.environ.get('HOME'), 'openstack_deploy')
]
if user_file is not None:
@ -721,7 +721,7 @@ def main():
)
# Load the user defined configuration file
user_config_file = os.path.join(local_path, 'rpc_user_config.yml')
user_config_file = os.path.join(local_path, 'openstack_user_config.yml')
if os.path.isfile(user_config_file):
with open(user_config_file, 'rb') as f:
user_defined_config.update(yaml.safe_load(f.read()) or {})
@ -735,14 +735,14 @@ def main():
if not user_defined_config:
raise SystemExit(
'No user config loaded\n'
'No rpc_user_config files are available in either the base'
'No openstack_user_config files are available in either the base'
' location or the conf.d directory'
)
# Get the contents of the system environment json
environment_file = os.path.join(local_path, 'rpc_environment.yml')
environment_file = os.path.join(local_path, 'openstack_environment.yml')
# Load existing rpc environment json
# Load existing openstack environment json
with open(environment_file, 'rb') as f:
environment = yaml.safe_load(f.read())
@ -759,7 +759,7 @@ def main():
)
# Load existing inventory file if found
dynamic_inventory_file = os.path.join(local_path, 'rpc_inventory.json')
dynamic_inventory_file = os.path.join(local_path, 'openstack_inventory.json')
if os.path.isfile(dynamic_inventory_file):
with open(dynamic_inventory_file, 'rb') as f:
dynamic_inventory = json.loads(f.read())
@ -767,7 +767,7 @@ def main():
# Create a backup of all previous inventory files as a tar archive
inventory_backup_file = os.path.join(
local_path,
'backup_rpc_inventory.tar'
'backup_openstack_inventory.tar'
)
with tarfile.open(inventory_backup_file, 'a') as tar:
basename = os.path.basename(dynamic_inventory_file)
@ -820,7 +820,7 @@ def main():
host_hash[_key] = _value
# Save a list of all hosts and their given IP addresses
with open(os.path.join(local_path, 'rpc_hostnames_ips.yml'), 'wb') as f:
with open(os.path.join(local_path, 'openstack_hostnames_ips.yml'), 'wb') as f:
f.write(
json.dumps(
hostnames_ips,


@ -20,11 +20,11 @@
required_kernel: 3.13.0-34-generic
## Container Template Config
container_template: rpc
container_template: openstack
container_release: trusty
# Parameters on what the conatiner will be built with
container_config: /etc/lxc/lxc-rpc.conf
# Parameters on what the container will be built with
container_config: /etc/lxc/lxc-openstack.conf
## Base Ansible config for all plays
@ -39,16 +39,17 @@ internal_vip_address: "{{ internal_lb_vip_address }}"
external_vip_address: "{{ external_lb_vip_address }}"
## URL for the frozen rpc repo
rpc_repo_url: "https://mirror.rackspace.com/rackspaceprivatecloud"
rpc_release: master
## URL for the frozen repo
openstack_repo_url: "https://mirror.rackspace.com/rackspaceprivatecloud"
openstack_release: master
openstack_code_name: juno
## URLs for package repos
mariadb_repo_url: "http://mirror.rackspace.com/rackspaceprivatecloud/mirror/mariadb/mariadb-5.5.41/repo/ubuntu/"
elasticsearch_repo_url: "http://packages.elasticsearch.org/elasticsearch/1.2/debian"
logstash_repo_url: "http://packages.elasticsearch.org/logstash/1.4/debian"
rsyslog_repo_url: "ppa:adiscon/v8-stable"
raxmon_repo_url: "http://stable.packages.cloudmonitoring.rackspace.com/ubuntu-14.04-x86_64"
## GPG Keys
@ -60,8 +61,16 @@ gpg_keys:
apt_common_repos:
- { repo: "deb {{ mariadb_repo_url }} {{ ansible_distribution_release }} main", state: "present" }
get_pip_url: "{{ rpc_repo_url }}/downloads/get-pip.py"
## URL for pip
get_pip_url: "{{ openstack_repo_url }}/downloads/get-pip.py"
## URL for the container image
container_cache_tarball: "{{ openstack_repo_url }}/downloads/rpc-trusty-container.tgz"
## Pinned packages
apt_pinned_packages:
- { package: "lxc", version: "1.0.7-0ubuntu0.1" }
- { package: "libvirt-bin", version: "1.2.2-0ubuntu13.1.8" }
@ -69,6 +78,7 @@ apt_pinned_packages:
- { package: "logstash-contrib", version: "1.4.2-1-efd53ef" }
- { package: "elasticsearch", version: "1.2.4" }
## Users that will not be created via container_common
excluded_user_create:
- mysql
@ -163,7 +173,7 @@ auth_public_port: 5000
auth_protocol: http
## Openstack Region
## OpenStack Region
service_region: RegionOne


@ -38,8 +38,8 @@ cinder_volume_clear: zero
cinder_volume_clear_size: 0
## General configuration
## Set this in rpc_user_config.yml UNLESS you want all hosts to use the same
## Cinder backends. See the rpc_user_config example for more on how this is done.
## Set this in openstack_user_config.yml UNLESS you want all hosts to use the same
## Cinder backends. See the openstack_user_config example for more on how this is done.
# cinder_backends:
# lvm:
# volume_group: cinder-volumes


@ -22,7 +22,7 @@
# The volumes container needs a larger FS as it must have tmp space for
# converting glance images to volumes.
# https://github.com/rcbops/ansible-lxc-rpc/issues/166
# https://bugs.launchpad.net/openstack-ansible/+bug/1399427
# Default is 5GB (same as other containers).
# Space must be added for cinder image conversion to work.


@ -26,7 +26,7 @@ rabbit_cookie: "{{ rabbitmq_cookie_token }}"
enable_management_plugin: true
rabbit_cluster_name: rpc
rabbit_cluster_name: openstack
# Directories to create
container_directories:


@ -172,7 +172,7 @@ EXAMPLES = """
# Create a new LXC container.
- lxc: name=test-container
template=ubuntu
config=/etc/lxc/lxc-rpc.conf
config=/etc/lxc/lxc-openstack.conf
command=create
state=running
@ -1450,7 +1450,7 @@ class LxcManagement(object):
# Test if the containers rootfs is a block device
block_backed = root_path.startswith(os.path.join(os.sep, 'dev'))
snapshot_name = '%s_rpc_ansible_snapshot' % name
snapshot_name = '%s_ansible_snapshot' % name
if block_backed:
if snapshot_name not in self._lvm_lv_list():

Some files were not shown because too many files have changed in this diff.