The pip_install and pip_lock_down roles have been merged.
Update playbooks to make use of the merged pip_install role by providing
the 'pip_lock_to_internal_repo' variable based on whether or not
'pip_links' contains any entries.
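For illustration, a play could derive the value from pip_links directly
(a minimal sketch; the play layout shown here is an assumption):

    - name: Install pip packages
      hosts: all
      roles:
        - role: "pip_install"
          # True only when at least one internal link is configured.
          pip_lock_to_internal_repo: "{{ (pip_links | length) >= 1 }}"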
Change-Id: I59ad75ac54fd2c54172b404c53cc2659b9dfe482
Currently, os_nova, os_cinder, and os_glance roles all install
python-cephlibs, which is a python package Hugh Saunders created to
package the Ceph Hammer python libraries. The virtualenvs created in
these roles need access to these python libraries, and by using Hugh's
python package we could simply pip install them into the virtualenv.
This has been working fine, but makes the assumption that the deployer
is using Ceph Hammer, which may not be true. Also, later versions of
Ceph have python C extensions, which may be more difficult to package.
This commit simply leverages the existing Ceph libraries installed
outside of the venv by checking for the path of the library using
python itself.
NOTE: I would prefer to use a None instead of '' in the ternaries,
but I seem to hit issues doing that.
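A minimal sketch of the lookup described above (the module checked and
the register name are assumptions):

    - name: Locate the distro-provided Ceph python library
      # Ask the system python where the library lives; the venv can then
      # reference that path instead of a pip-installed copy.
      command: python -c 'import rados; print(rados.__file__)'
      register: cinder_rados_module_path
      changed_when: false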
Change-Id: Iac3b04eec57f6d1aa21051f2960d56e3b47b6f00
All of the database and database user creation tasks have
been moved out of the roles and into the playbooks. This
allows the roles to be tested independently of the
deployed database and also allows the roles to be
used independently of infrastructure choices made by
the integrated OSA project.
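For illustration, a pre-task in a playbook could now look something like
this (module arguments and the group/variable names are assumptions):

    - name: Create the cinder database
      mysql_db:
        name: "cinder"
        state: present
      delegate_to: "{{ groups['galera_all'][0] }}"

    - name: Create the cinder database user
      mysql_user:
        name: "cinder"
        password: "{{ cinder_container_mysql_password }}"
        host: "%"
        priv: "cinder.*:ALL"
        state: present
      delegate_to: "{{ groups['galera_all'][0] }}"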
Change-Id: If58e482034a65c0e50241448dbe298a73c1ae71b
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
This fix executes the rabbitmq deterministic sorting
while using the product-config tag, e.g. nova-config,
in order to rewrite the OpenStack configs (nova/neutron etc).
Change-Id: If47e7b7abea61d3e7a00fe217a6181f8f03c5f7b
Closes-Bug: #1575297
When RabbitMQ is not installed by OSA, because a deployer is using an
existing RabbitMQ installation, or because it is not needed
(e.g. standalone Swift), do not set up the messaging vhost and user for
the various services.
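A sketch of the kind of guarded pre-task this implies (the group and
variable names are assumptions):

    - name: Create the service messaging vhost
      rabbitmq_vhost:
        name: "{{ cinder_rabbitmq_vhost }}"
        state: present
      delegate_to: "{{ groups['rabbitmq_all'][0] }}"
      when:
        - groups['rabbitmq_all'] is defined
        - groups['rabbitmq_all'] | length > 0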
Change-Id: Ia35c877939cb3d4a6b0d792165af8729a7062a6e
The Keystone role previously migrated the messaging vhost and user setup to
a pre-task in the os-keystone-install.yml playbook. This review continues that
migration for all other roles where it is applicable.
Change-Id: I3016039692d8130654fe1bff422f24ef2afc196e
Cinder should use the storage network for storage traffic if available.
Change-Id: I24c3274000b3bed08ea251ce45726e1750f4f85f
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
This patch fixes a few syntax issues required for Ansible 2
compatibility that Ansible 1.x was more lenient with.
When a 'when' clause is combined with a 'with_*' statement, the clause
is processed separately for each item. Tasks with 'when' clauses which
depended on an item variable being defined have either applied a default
empty value to the item or had an individual task created
for each item in the loop.
Tasks within the os-cinder-install playbook have been updated to loop
through cinder_backends as a hash.
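For illustration, a loop-safe task under Ansible 2 might look like this
(the task body is an assumption; only the default/when pattern is the
point):

    - name: Inspect each configured cinder backend
      debug:
        msg: "{{ item.key }} uses volume group {{ item.value.volume_group }}"
      # Default to an empty hash so the loop is valid when no backends are set.
      with_dict: "{{ cinder_backends | default({}) }}"
      # The clause is evaluated once per item under Ansible 2.
      when: item.value.volume_group is defined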
Change-Id: I9b53eb5dd709a6bed1797961015aa3dd328340f3
ansible_hostname is not used within any tasks and, by default, is the
same value as container_name.
ansible_ssh_host and container_address are also the same value by
default; both are assigned to hosts through the dynamic inventory script.
Additionally, overriding ansible_ssh_host through playbook vars breaks
tasks that delegate to other hosts when using Ansible 2.
Change-Id: I2525b476c9302ef37f29e2beb132856232d129e2
The change builds venvs in a single repo container and then
ships them to all targets. The built venvs will live on
the repo servers and will allow for faster deployments,
upgrades, and more consistent deployments over the life cycle
of the deployment.
This will create a versioned tarball that will allow for
greater visibility into the build process as well as giving
deployers/developers the ability to compare a release in
place.
Change-Id: Ieef0b89ebc009d1453c99e19e53a36eb2d70edae
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
This change sets the cinder storage address based on the
facts the play is able to gather about the environment.
This will correctly configure the iscsi address within cinder
to use the right network, which is not always the default
container address.
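A sketch of the fact-based selection (the network key and the fallback
shown are assumptions):

    cinder_storage_address: >-
      {{ container_networks['storage_address']['address']
         | default(ansible_ssh_host) }}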
Change-Id: I647161c0154cad11bf2f31b6b7cd8476e0662f12
Closes-Bug: #1504208
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
This commit conditionally allows the os_cinder role to
build, install, and deploy within a venv. This is now the
default behavior of the role; however, the functionality
can be disabled.
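For illustration, the behaviour could be disabled in user_variables.yml
(the variable name is an assumption):

    # Fall back to a non-venv install of cinder.
    cinder_venv_enabled: false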
Implements: blueprint enable-venv-support-within-the-roles
Change-Id: Icd764b78ee887f4fe2ecd4bb67b97ae4651e6fa3
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
This patch reduces the time it takes for the playbooks to execute when the
container configuration has not changed, as the task which waits for a
successful ssh connection (after an initial delay) is not executed.
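A sketch of the guard described (the register name is an assumption):

    - name: Wait for a successful ssh connection
      wait_for:
        host: "{{ ansible_ssh_host }}"
        port: 22
        delay: 5
      delegate_to: localhost
      # Only wait when the container configuration task reported a change.
      when: container_config is changed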
Change-Id: I52727d6422878c288884a70c466ccc98cd6fd0ff
This change renames the log rotate file for each service
so that the log rotate files don't collide when running on a shared
host.
Change-Id: Ia42656e4568c43667d610aa8421d2fa25437e2aa
Closes-Bug: 1499799
This commit removes the use of net-cache flushing from all
$service plays, which ensures that the cache is not overly flushed;
excessive flushing could impact the performance of services like neutron.
The lxc-container-destroy role was removed because it is not used,
and if it were ever used, it would result in the same
situation covered by this issue.
Additionally, it was noted that on container restarts the mac addresses
of the container interfaces change. If *no* flushing is done at all,
this results in long run times whilst the arp entry for the container IP
times out. Hence, we add a configuration option here that causes a
gratuitous arp whenever an interface has its mac set and/or the link
comes up. Because of the way the container veths work, we can't rely
on that happening on a link up event, so we forcefully set the mac
address in the post-up hook for the interface to force the
gratuitous arp.
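One way to express the gratuitous arp behaviour as an Ansible task (a
sketch only; the exact mechanism used by the play is an assumption):

    - name: Send a gratuitous ARP when a link comes up or its MAC changes
      sysctl:
        name: net.ipv4.conf.all.arp_notify
        value: "1"
        state: present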
Co-Authored-By: Evan Callicoat <diopter@gmail.com>
Co-Authored-By: Darren Birkett <darren.birkett@gmail.com>
Change-Id: I96800b2390ffbacb8341e5538545a3c3a4308cf3
Closes-Bug: 1497433
This patch adds a wait for the container's sshd to be available
after the container's apparmor profile is updated. When the
profile is updated the container is restarted, so this wait is
essential to the success of the playbook's completion.
It also includes 3 retries, which has been found to improve the
rate of success.
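A sketch of the wait-with-retries pattern (the host lookup and register
name are assumptions):

    - name: Wait for container ssh after the apparmor profile update
      wait_for:
        host: "{{ ansible_ssh_host }}"
        port: 22
      register: ssh_wait_check
      until: ssh_wait_check is success
      retries: 3
      delay: 10
      delegate_to: localhost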
Due to an upstream change in behaviour with netaddr 0.7.16 we
need to pin the package to a lower version until Neutron is
adjusted and we bump the Neutron SHA.
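Expressed as a pip requirement pin for illustration (where the pin is
carried is an assumption):

    # Keep netaddr below the release that changed behaviour.
    pip_packages:
      - "netaddr<0.7.16"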
Change-Id: I30575ee31929b0c9af6353b7255cdfb6cebd2104
Closes-Bug: #1490142
Having the lxc container create role drop the lxc-openstack apparmor
profile on all containers any time it is executed leads to the possibility
of the lxc container create task overwriting the running profile on a given
container. If this happens, it is likely to cause service interruption until the
correct profile is loaded for all containers affected by the action.
To fix this issue, the default "lxc-openstack" profile has been removed from the
lxc container create task and added to all plays that are known to be executed
within an lxc container. This will ensure that the profile is untouched on
subsequent runs of the lxc-container-create.yml play.
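A sketch of applying the profile from a play that targets containers
(the module arguments and the physical_host/is_metal variables are
assumptions):

    - name: Ensure the lxc-openstack profile on this play's containers
      lxc_container:
        name: "{{ inventory_hostname }}"
        container_config:
          - "lxc.aa_profile=lxc-openstack"
      delegate_to: "{{ physical_host }}"
      when: not is_metal | default(false) | bool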
Change-Id: Ifa4640be60c18f1232cc7c8b281fb1dfc0119e56
Closes-Bug: 1487130
This patch adds a configurable delay time for retrying the
ssh connection when waiting for the containers to restart.
This is useful for environments where resources are constrained
and containers may take longer to restart.
Change-Id: I0383e34a273b93e1b2651460c853cf1ceba89029
Closes-Bug: #1476885
This patch adds a check for the appropriate OpenSSH Daemon
response when waiting for the container to restart. This is
an optimisation over simply waiting for the TCP port.
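For illustration, wait_for can match the sshd banner rather than just the
open port (the host lookup is an assumption):

    - name: Wait for the container sshd to answer
      wait_for:
        host: "{{ ansible_ssh_host }}"
        port: 22
        # Match the banner so we know sshd is serving, not just listening.
        search_regex: OpenSSH
      delegate_to: localhost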
Change-Id: Ie25af4f57bb98fb1d846d579b58b4d479b476675
Closes-Bug: #1476885
This patch includes the galera_address var for each playbook
that deploys a role which depends on galera_client and thereby
drops a .my.cnf onto the target.
Prior to this patch the .my.cnf would be dropped with the
galera_client role's default of 127.0.0.1 as the host address.
The address of 127.0.0.1 is intentionally left to deploy on
the galera servers so that the .my.cnf will be configured to
interact with the local database.
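For illustration, a service playbook now carries something like this
(the lb variable name is an assumption):

    - name: Install cinder server
      hosts: cinder_all
      user: root
      roles:
        - role: "os_cinder"
      vars:
        galera_address: "{{ internal_lb_vip_address }}"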
Change-Id: Ife33c68d825ecf1e6cdb6784802e9a099eb54289
Closes-Bug: #1487353
With cinder_volumes we were creating a specific udev device, which will
fail to mount if lxc.autodev=1. This should only be required when lvm is
a backend for the cinder_volumes container.
We can specify lxc.autodev=0 for cinder_volumes containers. To do this
properly we first check whether lvm is in use. We can also use this to ensure
that redundant "lvm" configuration isn't set up on volumes hosts with no
lvm backend.
Additionally this will fix the formatting on the "udev" lxc.mount.entry
as it was adding additional spaces.
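A sketch of the conditional container configuration (the lvm-in-use
variable and physical_host are assumptions):

    - name: Disable autodev only where LVM needs static device nodes
      lxc_container:
        name: "{{ inventory_hostname }}"
        container_config:
          - "lxc.autodev=0"
      delegate_to: "{{ physical_host }}"
      when: cinder_backend_lvm_inuse | bool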
Change-Id: Iabe72003ebcfefe11d360131fdde64ca4b21a192
Closes-Bug: #1483650
Currently the playbooks do not allow Ceph to be configured as a backend
for Cinder, Glance or Nova. This commit adds a new role called
ceph_client to do the required configuration of the hosts and updates
the service roles to include the required configuration file changes.
This commit requires that a Ceph cluster already exists and does not
make any changes to that cluster.
ceph_client role, run on the OpenStack service hosts
- configures the Ceph apt repo
- installs any required Ceph dependencies
- copies the ceph.conf file and appropriate keyring file to /etc/ceph
- creates the necessary libvirt secrets
os_glance role
glance-api.conf will set the following variables for Ceph:
- [DEFAULT]/show_image_direct_url
- [glance_store]/stores
- [glance_store]/rbd_store_pool
- [glance_store]/rbd_store_user
- [glance_store]/rbd_store_ceph_conf
- [glance_store]/rbd_store_chunk_size
os_nova role
nova.conf will set the following variables for Ceph:
- [libvirt]/rbd_user
- [libvirt]/rbd_secret_uuid
- [libvirt]/images_type
- [libvirt]/images_rbd_pool
- [libvirt]/images_rbd_ceph_conf
- [libvirt]/inject_password
- [libvirt]/inject_key
- [libvirt]/inject_partition
- [libvirt]/live_migration_flag
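For illustration, the [libvirt] options above map to a config fragment
along these lines, expressed here as an override dictionary (the variable
name and the pool/user/uuid values are assumptions):

    nova_nova_conf_overrides:
      libvirt:
        images_type: rbd
        images_rbd_pool: vms
        images_rbd_ceph_conf: /etc/ceph/ceph.conf
        rbd_user: cinder
        rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
        inject_password: false
        inject_key: false
        inject_partition: -2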
os_cinder is not updated because ceph is defined as a backend and that
is generated from a dictionary of the config; for an example backend
config, see etc/openstack_deploy/openstack_user_config.yml.example.
pw-token-gen.py is updated so that variables ending in uuid are assigned
a UUID.
DocImpact
Implements: blueprint ceph-block-devices
Closes-Bug: #1455238
Change-Id: Ie484ce0bbb93adc53c30be32f291aa5058b20028
Currently every host, both containers and bare metal, has a crontab
configured with the same values for minute, hour, day of week etc. This
means that there is the potential for a service interruption if, for
example, a cron job were to cause a service to restart.
This commit adds a new role which attempts to adjust the times defined
in the entries in the default /etc/crontab to reduce the overlap
between hosts.
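A sketch of how a per-host offset could be derived (the seed-based random
filter needs a reasonably recent Ansible; the task and variable names are
assumptions):

    - name: Pick a deterministic per-host minute for the daily cron entry
      set_fact:
        # Seeding with the hostname keeps the value stable across runs
        # while spreading it across hosts.
        crontab_daily_minute: "{{ 59 | random(seed=inventory_hostname) }}"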
Change-Id: I18bf0ac0c0610283a19c40c448ac8b6b4c8fd8f5
Closes-bug: #1424705
To enable partitioning of DB traffic by-service, each service needs to
use a custom connection string. Defaulting the service address to a
common galera_address makes things continue to work by default.
While the galera_address could be overridden on a container or host
basis, this requires repeating that behavior across each infra node in
the inventory. Providing service-specific connection address variables
simplifies the management somewhat for large deployments and may reduce
error rates.
The service install playbooks now default the service-specific variables
instead of galera_address to the internal lb vip from inventory to
maintain the ease-of-use currently available.
Any value for a service-specific variable set in user_variables.yml will
override the value in the playbook's vars to provide selective
customization as needed.
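For illustration, the variable names follow the pattern described; the
override value shown is an assumption:

    # In user_variables.yml, point a single service at its own database
    # host while the playbook default remains the internal lb vip:
    cinder_galera_address: 172.29.236.20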
Change-Id: I4c98bf906a0c1cb11ddd41277a855dce22ff646a
Closes-Bug: 1462529
The current configuration of LVM for cinder-volume has udev_sync=0.
This means that udev is not creating the devices that appear in /dev.
The device files created reference specific device numbers, and these
persist between reboots. When the host is rebooted there is no
guarantee that device numbers allocated to the logical volumes will
match those defined in the device files. This can be observed by
comparing the output of 'dmsetup info' and 'ls -l /dev/mapper'.
LVM's use of udev was disabled in an attempt to protect the host from
the potential that uevents generated would be processed by all
containers on the host. In practice this should not be an issue because
no other containers run on a cinder host.
This commit adjusts the lvm.conf file created so that udev is used. It
also adds a mount entry to create a devtmpfs on /dev. Finally
'udevadm trigger' is run to add the devices under /dev/mapper.
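A sketch of the adjustment as tasks (the file path is standard; the exact
template handling in the role is an assumption):

    - name: Enable udev synchronisation in the container's lvm.conf
      lineinfile:
        dest: /etc/lvm/lvm.conf
        regexp: '^\s*udev_sync\s*='
        line: '    udev_sync = 1'

    - name: Ask udev to (re)create the /dev/mapper device nodes
      command: udevadm trigger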
Closes-Bug: #1436999
Change-Id: I9ab35cf4438a369563f8c08870c1acfd0cc394b0
In order to ease the addition of external log receivers this adds an
rsyslog-client tag to the installation plays. This allows us to run
openstack-ansible setup-everything.yml --tags rsyslog-client to add
additional logging configuration.
Change-Id: If002f67a626ff5fe3dc06d77c9295ede9369b3dc
Partially-Implements: blueprint master-kilofication
This change was made to improve ansible stability and speed.
Additionally this change comes with the now upstream lxc-container
module which will allow us to drop our carried module. In dropping
the module the entry point was changed from `lxc-container:` to
`lxc_container:`. All of the entry points have been changed in
support of the new upstream module and the carried `lxc-container`
module has been removed.
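For illustration, task entries now reference the upstream module name
(the parameters shown are assumptions):

    - name: Create an LXC container
      lxc_container:
        name: "{{ container_name }}"
        state: started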
Partially Implements Blueprint: master-kilofication
Partial-Bug: 1399373
Change-Id: I4709eb78f153afc213225ea973570efa2e873993
Currently we allow multiple cinder backends to be configured. Some
do not need a volume_group defined. This patch fixes the logic around
when we add a volume group device to cinder (only when it is defined).
Change-Id: I118546e8bc6c3fa7fac62b96ea2295a76711e0f3
Closes-Bug: 1439843
This commit adds the rsyslog_client role to the general stack. This
change is part 3 of 3. The role will allow rsyslog to serve as a log
shipper within a given host / container. The role has been set up to
allow for logs to be shipped to multiple hosts and/or other
providers, e.g. splunk, loggly, etc... All of the plays that need
to support logging have been modified to use the new rsyslog_client
role.
Roles added:
* rsyslog_client
Plays modified:
* playbooks/galera-install.yml
* playbooks/lxc-hosts-setup.yml
* playbooks/os-cinder-install.yml
* playbooks/os-glance-install.yml
* playbooks/os-heat-install.yml
* playbooks/os-horizon-install.yml
* playbooks/os-keystone-install.yml
* playbooks/os-neutron-install.yml
* playbooks/os-nova-install.yml
* playbooks/os-swift-install.yml
* playbooks/os-tempest-install.yml
* playbooks/rabbitmq-install.yml
* playbooks/repo-server.yml
DocImpact
Implements: blueprint rsyslog-update
Change-Id: I4028a58db3825adb8a5aa73dbaabbe353bb33046
This change implements the blueprint to convert all roles and plays into
a more generic setup, following upstream ansible best practices.
Items Changed:
* All tasks have tags.
* All roles use namespaced variables.
* All redundant tasks within a given play and role have been removed.
* All of the repetitive plays have been removed in favor of a more
simplistic approach. This change duplicates code within the roles but
ensures that the roles only ever run within their own scope.
* All roles have been built using an ansible galaxy syntax.
* The `*requirement.txt` files have been reformatted to follow upstream
OpenStack practices.
* Dynamically generated inventory is now more organized; this should assist
anyone who may want or need to dive into the JSON blob that is created.
In the inventory, a properties field is used for items that customize containers
within the inventory.
* The environment map has been modified to support additional host groups to
enable the separation of infrastructure pieces. While the old infra_hosts group
will still work, this change allows for groups to be divided up into separate
chunks; e.g. deployment of a swift-only stack.
* The LXC logic now exists within the plays.
* etc/openstack_deploy/user_variables.yml has had all password/token
variables extracted into the separate file
etc/openstack_deploy/user_secrets.yml in order to allow separate
security settings on that file.
Items Excised:
* All of the roles have had the LXC logic removed from within them which
should allow roles to be consumed outside of the `os-ansible-deployment`
reference architecture.
Note:
* the directory rpc_deployment still exists and is presently pointed at plays
containing a deprecation warning instructing the user to move to the standard
playbooks directory.
* While all of the rackspace specific components and variables have been removed
or refactored, the repository still relies on an upstream mirror of
OpenStack-built python files and container images. This upstream mirror is hosted
at Rackspace at "http://rpc-repo.rackspace.com", though it is
not locked to or tied to rackspace specific installations. This repository
contains all of the needed code to create and/or clone your own mirror.
DocImpact
Co-Authored-By: Jesse Pretorius <jesse.pretorius@rackspace.co.uk>
Closes-Bug: #1403676
Implements: blueprint galaxy-roles
Change-Id: I03df3328b7655f0cc9e43ba83b02623d038d214e