This change modifies the nova template tasks to use the config_template
action plugin. With this change, config files can be dynamically updated
by a deployer at run time without needing to modify the in-tree templates
or defaults.
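As a sketch of the intended workflow, and assuming the role exposes an
override dictionary named nova_nova_conf_overrides (the variable name here
is illustrative), a deployer could set arbitrary nova.conf values from
user_variables.yml:

    nova_nova_conf_overrides:
      DEFAULT:
        cpu_allocation_ratio: 4.0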
Partially implements: blueprint tunable-openstack-configuration
Change-Id: I9842ed3fcb2cc4aa379a582359b1ca5d0747f714
Presently all services use the single root virtual host within RabbitMQ.
While this is "OK" for small to mid-sized deployments, it would be better
to divide services into logical resource groups within RabbitMQ, which
brings additional security. This change set gives OSAD better
compartmentalization of the consumer services that use RabbitMQ.
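As an illustrative sketch only (these variable names are assumed rather
than taken verbatim from this change set), each consumer service would get
its own vhost and credentials along the lines of:

    nova_rabbitmq_vhost: /nova
    nova_rabbitmq_userid: nova
    glance_rabbitmq_vhost: /glance
    glance_rabbitmq_userid: glance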
UpgradeImpact
DocImpact
Change-Id: I6f9d07522faf133f3c1c84a5b9046a55d5789e52
Implements: blueprint compartmentalize-rabbitmq
This patch adds the dhcp_domain config entry to nova.conf and
implements group_vars to default both nova and neutron's dhcp
domain values to be the same.
The individual values can still be overridden in user_variables
by using nova_dhcp_domain or neutron_dhcp_domain, but it's expected
that deployers would like these to be consistent.
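For example, a deployer who wants a different domain could override both
values in user_variables and keep them consistent (the domain shown is
illustrative):

    nova_dhcp_domain: example.cloud
    neutron_dhcp_domain: example.cloud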
Change-Id: I97beb78f62aeca2665ff72805056d36ead2adaaa
Closes-Bug: #1482045
Currently live_migration_flag is only written to nova.conf if Ceph is
in use. However, there may be other live migration use cases where being
able to override this is necessary. This commit introduces
a new variable called nova_libvirt_live_migration_flag and writes
live_migration_flag to nova.conf irrespective of whether Ceph is in
use. We also relocate nova_libvirt_disk_cachemodes and
nova_libvirt_hw_disk_discard in defaults/main.yml so that these
defaults sit with the other nova_libvirt defaults.
NOTE: We change the live_migration_flag defaults here to match nova's
defaults. Several Ceph resources suggest using VIR_MIGRATE_PERSIST_DEST;
however, in my limited testing this did not appear to be necessary. We can
update the default, or amend it when using Ceph, if we find that it is in
fact necessary.
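For deployers who do find the extra flag necessary, the new variable can
simply be overridden in user_variables; the flag list below is an
illustration, not the shipped default:

    nova_libvirt_live_migration_flag: "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"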
Closes-Bug: #1484439
Change-Id: I1baa9d590ae90ca45c763c8a5e6546c4825eabfe
The following options have been added to improve support for rbd (Ceph)
in nova.
libvirt/hw_disk_discard = unmap
www.sebastien-han.fr/blog/2015/02/02/openstack-and-ceph-rbd-discard
libvirt/disk_cachemodes = "network=writeback"
www.sebastien-han.fr/blog/2013/08/22/configure-rbd-caching-on-nova
http://ceph.com/docs/master/rbd/rbd-openstack/
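These correspond to the role defaults nova_libvirt_hw_disk_discard and
nova_libvirt_disk_cachemodes (names as used elsewhere in this changelog),
so a deployer wanting different behaviour could, as a sketch, override
them in user_variables:

    nova_libvirt_hw_disk_discard: unmap
    nova_libvirt_disk_cachemodes: "network=writeback"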
DocImpact
Implements: blueprint ceph-block-devices
Change-Id: Ibf717fa88326805254ca31421d9a77c47b326b67
This commit adds the following new variables to customise whether nova
will allow key/partition/password injection:
nova_libvirt_inject_key
nova_libvirt_inject_partition
nova_libvirt_inject_password
Additionally, the following variable has been added to allow setting
password via Horizon:
horizon_can_set_password
Lastly, password injection can now be tested with tempest via:
tempest_compute_change_password
Note that all variables have been defaulted to their current values.
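For example, a deployer who wants to disable injection entirely and rely
on the metadata service could set something along these lines (values are
illustrative; in nova an inject_partition of -2 disables partition
injection):

    nova_libvirt_inject_key: False
    nova_libvirt_inject_password: False
    nova_libvirt_inject_partition: -2
    horizon_can_set_password: False
    tempest_compute_change_password: False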
Closes-Bug: #1469238
Change-Id: Iff434ed7c042f7990990485c34d0f35b9a7baa7a
This change removes the forced use of config drive so that a user can
choose to use config drive as needed. It adds the ability to enable or
disable config drive and allows libvirt to listen for connections over TCP
as needed for live migrations (otherwise prohibited by config drive).
The following new variables were added to os_nova role:
nova_force_config_drive
nova_libvirtd_listen_tls: 1
nova_libvirtd_listen_tcp: 0
nova_libvirtd_auth_tcp: sasl
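As a sketch only, a deployer who relies on live migration over plain TCP
rather than TLS could override the new defaults roughly as follows (values
illustrative; the security trade-off of tcp with no SASL auth is the
deployer's call):

    nova_force_config_drive: False
    nova_libvirtd_listen_tls: 0
    nova_libvirtd_listen_tcp: 1
    nova_libvirtd_auth_tcp: "none"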
Change-Id: I1de35a4b3611b8bc33a21930dae3fd38f9aaa151
Closes-Bug: #1468514
DocImpact
Add the ability to enable the resume_guests_state_on_host_boot flag in
nova.conf to start guests that were running before the host rebooted.
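Assuming the role exposes this as a namespaced variable (the name below is
assumed, not confirmed by this change), enabling it would look something
like:

    nova_resume_guests_state_on_host_boot: True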
Change-Id: I7365d972dc7e41a46b340396a73518b1da918f05
Closes-Bug: 1483246
Currently the playbooks do not allow Ceph to be configured as a backend
for Cinder, Glance or Nova. This commit adds a new role called
ceph_client to do the required configuration of the hosts and updates
the service roles to include the required configuration file changes.
This commit requires that a Ceph cluster already exists and does not
make any changes to that cluster.
ceph_client role, run on the OpenStack service hosts
- configures the Ceph apt repo
- installs any required Ceph dependencies
- copies the ceph.conf file and appropriate keyring file to /etc/ceph
- creates the necessary libvirt secrets
os_glance role
glance-api.conf will set the following variables for Ceph:
- [DEFAULT]/show_image_direct_url
- [glance_store]/stores
- [glance_store]/rbd_store_pool
- [glance_store]/rbd_store_user
- [glance_store]/rbd_store_ceph_conf
- [glance_store]/rbd_store_chunk_size
os_nova role
nova.conf will set the following variables for Ceph:
- [libvirt]/rbd_user
- [libvirt]/rbd_secret_uuid
- [libvirt]/images_type
- [libvirt]/images_rbd_pool
- [libvirt]/images_rbd_ceph_conf
- [libvirt]/inject_password
- [libvirt]/inject_key
- [libvirt]/inject_partition
- [libvirt]/live_migration_flag
os_cinder is not updated because ceph is defined there as a backend, and
the backend configuration is generated from a dictionary of config values;
for an example backend config, see
etc/openstack_deploy/openstack_user_config.yml.example.
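For orientation only, a Ceph-backed entry in that backend dictionary looks
roughly like the following sketch (keys and values are illustrative; the
referenced example file is authoritative):

    cinder_backends:
      ceph:
        volume_driver: cinder.volume.drivers.rbd.RBDDriver
        rbd_pool: volumes
        rbd_ceph_conf: /etc/ceph/ceph.conf
        rbd_user: cinder
        rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
        volume_backend_name: rbd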
pw-token-gen.py is updated so that variables ending in uuid are assigned
a UUID.
DocImpact
Implements: blueprint ceph-block-devices
Closes-Bug: #1455238
Change-Id: Ie484ce0bbb93adc53c30be32f291aa5058b20028
This allows the default settings to be overridden, which is useful for
large deployments or when deploying a large number of instances. It also
makes use of a previously unused variable in neutron for setting the
rpc_backend.
Change-Id: I83d11eb79b30dda51c6f738433ca960a0f63246e
Closes-bug: 1471926
This patch makes use of the authorized_keys ansible module, as well as the
built-in "generate_ssh_keys" flag for user creation, so that we can avoid
shelling out to commands.
Additionally, this moves the key synchronisation to use ansible
variables instead of the memcache server.
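A minimal sketch of the module-based approach (the task wording, the
nova_public_key variable and the group name are illustrative, not lifted
from the patch):

    - name: Create the nova user with a generated SSH keypair
      user:
        name: nova
        generate_ssh_key: yes
        ssh_key_bits: 2048

    - name: Authorize every compute host's collected public key
      authorized_key:
        user: nova
        # nova_public_key is assumed to have been gathered per host,
        # e.g. from the registered result of the task above.
        key: "{{ hostvars[item]['nova_public_key'] }}"
      with_items: "{{ groups['nova_compute'] }}"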
Change-Id: Icd97ebd44f6065fc60fdce1b61e9dc2daa45faa0
Closes-Bug: #1477512
This patch implements the implement-ceilometer blueprint. It adds the
necessary role/variables to deploy ceilometer with a MongoDB backend. The
MongoDB backend is assumed to be up and configured, and the playbooks only
require a few values to be set in user_variables to establish a
connection.
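A hedged sketch of the kind of connection values expected in
user_variables (the variable names and values here are illustrative, not
authoritative):

    ceilometer_db_type: mongodb
    ceilometer_db_ip: 172.29.236.100
    ceilometer_db_port: 27017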
Change-Id: I2164a1f27f632ce254cc2711ada2c449a9961fed
Implements: blueprint implement-ceilometer
The nova.conf and tempest.conf option for `enable_instance_password` has
been added as a default. This option defaults to True in nova but False in
tempest, which causes tempest to fail schema validation on newer versions
of tempest. To fix this issue, the option is added with a default value of
True for both tempest and nova.
Change-Id: I19f5da9820f2367b3d8dd0a7f215aa3f3ea5f611
Partial-Bug: #1468061
nova has the configuration option [cinder]/cross_az_attach with a
default of True. This option allows attaching between instances and
volumes in different availability zones.
This commit makes this option configurable in the nova.conf template
and uses a default of True.
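Assuming the template exposes this through a namespaced variable (the name
below is assumed), disabling cross-AZ attach would look like:

    nova_cross_az_attach: False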
Change-Id: Ia95f3d4447b026a8e93c74a8c65a63dcea89994f
Closes-bug: 1457140
This allows you to set the endpoint-type protocol globally for all
services, e.g. internaluri can be http, and publicuri can be https.
You will no longer have to specify it per service, although those
settings already exist and have not changed.
This patch changes no functionality for existing installs or deployments
and the values are defaulted to be the same as before, but allows these
values to be adjusted on a per-endpoint type basis.
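As a sketch (the variable names are assumed, following the usual
namespacing), terminating SSL only on the public endpoints might look
like:

    openstack_service_publicuri_proto: https
    openstack_service_internaluri_proto: http
    openstack_service_adminuri_proto: http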
Change-Id: I4854216726491f6ea4e265694e702f980fddc5a6
Closes-Bug: #1399383
If services are running behind an SSL-terminating load balancer, you will
want to differentiate the protocol between the internalURL and publicURL
endpoints.
This patch allows you to set the values of protocol per endpoint type,
but doesn't change the default behaviour which is to have it set in one
var.
Change-Id: I7a74c85a8841499623746586ae27103a71c6fec0
Partial-Bug: #1399383
Add variables for the following 3 nova.conf vars:
max_overflow (default 10)
max_pool_size (default 5)
pool_timeout (default 30)
This allows for sql tuning to better support bulk operations
(boot/delete) with the ability to define custom values in nova.conf
based on business needs.
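For example, assuming the role namespaces these as nova_db_* variables
(names assumed; the defaults noted match the values above), a busier
deployment might raise them in user_variables:

    nova_db_max_overflow: 20    # default 10
    nova_db_max_pool_size: 10   # default 5
    nova_db_pool_timeout: 30    # default 30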
Change-Id: Ic427e6822f636a304cbbfaab5ac74a13e912da0f
Closes-Bug: #1447389
Adding support for dynamically updating the policy files for nova, glance,
neutron, cinder and heat. This uses the copy_update plugin to detect any
updates and apply the changes to the default policy.json.
Implements: blueprint dynamically-manage-policy.json
Change-Id: I573229d6f18a5fe32460b2373ab8b2c36ac722b4
In the kilo release the nova v2.1 API is tied to the v3 API, so v3 needs
to be enabled for v2.1 to be enabled as well. This change adds a setting
to control whether the v2.1 API should be enabled or disabled. If v2.1
is enabled then v3 will be enabled as well, but without registering it
with the keystone catalog.
Change-Id: I1e80189bbcbef1dd712cd6a527b5b59aa939e9e1
Closes-Bug: #1445524
Update keystone authentication middleware in nova to
support the v3 API in Kilo.
Partially implements: blueprint master-kilofication
Change-Id: I2f38ed9a5ad82d98596835a59f6852f1bd3d8ffc
* API versions 1.1 and 3 have been deprecated in nova; the plays have been
modified to completely remove v1.1 and make v3 optional via the
nova_v3_deprecated_but_enabled boolean (see the example below).
* Addition of v2.1 api configuration.
* Elimination of the unused nova_api_ec2 container.
* nova_spice_console has been renamed to nova_console and
nova_spice_console_container has been renamed to
nova_console_container to facilitate different consoles in
the future.
* Spice has been made the default console.
* A standalone task and init scripts for nova_spice.
- Fixed some typos
- Modified HAProxy role to remove nova_api_ec2 and rename
nova_spice_console to nova_console
- Updated user_secrets.yml
- Unbroke things that I broke
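For example, a deployer who still needs the deprecated v3 API can keep it
available with:

    nova_v3_deprecated_but_enabled: True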
Partially implements: blueprint master-kilofication
Change-Id: Ia87dfb1e8c0316103a30e2121f11996a9ca87c25
* Updated Keystone wsgi and paste files from upstream.
* Updated all clients in the openstack_client.yml file.
* Kilo services are tracking the head of master.
* Removed pinned middleware because they're pinned elsewhere.
* Added additional service references for neutron vpnaas, fwaas, and
lbaas which have now been moved into their own repos and no longer
exist within the core neutron repository.
* The neutron vpnaas, fwaas, and lbaas have been removed from the
basic plugins being loaded and a comment has been added to describe
how one might add them back in.
* Updated rootwrap filters for neutron dhcp and l3.
* Updated heat policy.json
* Added the `python-libguestfs` package to the nova-compute installation
packages.
* Updates all services to point to the latest kilo tag
Services updated due to deprecated configs:
* Keystone
* Glance
* Nova
* Neutron (is still using the deprecated nova auth plugin)
* Heat
* Tempest
Items for future work post initial release:
* roles/os_neutron/files/post-up-checksum-rules:25:
TODO(cloudnull) remove this script once the bug is fixed.
* roles/rabbitmq_server/tasks/rabbitmq_cluster_join.yml:17:
TODO(someone): implement a more robust way of checking
Implements: blueprint minimal-kilo
Closes-Bug: 1428421
Closes-Bug: 1428431
Closes-Bug: 1428437
Closes-Bug: 1428445
Closes-Bug: 1428451
Closes-Bug: 1428469
Closes-Bug: 1428639
Change-Id: I28a305d9e40a9cf70148ef7d7b00d467a65ca076
Introduced namespaced fatal_deprecations variables for all OpenStack
services supporting this setting, as defined through the oslo libraries.
The default value is False in each case. Gating commit checks now enable
the fatal_deprecations setting for each supporting service.
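As a sketch of the namespacing (exact variable names assumed), a gate job
or a strict deployer could enable it per service with:

    nova_fatal_deprecations: True
    glance_fatal_deprecations: True
    cinder_fatal_deprecations: True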
Closes Bug: 1428412
Change-Id: I5f41d3fdfa1cc876efc0c33c657c9dad18a8ba51
This change implements the blueprint to convert all roles and plays into
a more generic setup, following upstream ansible best practices.
Items Changed:
* All tasks have tags.
* All roles use namespaced variables.
* All redundant tasks within a given play and role have been removed.
* All of the repetitive plays have been removed in favor of a simpler
approach. This change duplicates code within the roles but ensures that
the roles only ever run within their own scope.
* All roles have been built using an ansible galaxy syntax.
* The `*requirement.txt` files have been reformatted to follow upstream
OpenStack practices.
* Dynamically generated inventory is now more organized; this should
assist anyone who may want or need to dive into the JSON blob that is
created. In the inventory, a properties field is used for items that
customize containers within the inventory.
* The environment map has been modified to support additional host groups
to enable the separation of infrastructure pieces. While the old
infra_hosts group will still work, this change allows groups to be divided
up into separate chunks; e.g. deployment of a swift-only stack.
* The LXC logic now exists within the plays.
* etc/openstack_deploy/user_variables.yml has all password/token variables
extracted into the separate file etc/openstack_deploy/user_secrets.yml in
order to allow separate security settings on that file.
Items Excised:
* All of the roles have had the LXC logic removed from within them which
should allow roles to be consumed outside of the `os-ansible-deployment`
reference architecture.
Note:
* The directory rpc_deployment still exists and is presently pointed at
plays containing a deprecation warning instructing the user to move to the
standard playbooks directory.
* While all of the Rackspace-specific components and variables have been
removed or refactored, the repository still relies on an upstream mirror
of OpenStack-built python files and container images. This upstream mirror
is hosted by Rackspace at "http://rpc-repo.rackspace.com", though it is
not locked to or tied to Rackspace-specific installations. This repository
contains all of the code needed to create and/or clone your own mirror.
DocImpact
Co-Authored-By: Jesse Pretorius <jesse.pretorius@rackspace.co.uk>
Closes-Bug: #1403676
Implements: blueprint galaxy-roles
Change-Id: I03df3328b7655f0cc9e43ba83b02623d038d214e