This patch adds a small script that automates the process of accessing a
service provider (SP) cloud using credentials from an identity provider
(IdP) cloud, where both clouds use Keystone-based authentication. The
script performs the complete authentication flow and displays the token
and endpoints to use with the openstack command-line client.
Implements: blueprint keystone-federation
Change-Id: I4b8113d0aef9c754fb55497d44138df660332bb8
An ADFS v3.0 (Windows 2012 R2) Identity Provider is capable of
interacting with the Service Provider via SAML2, so no special
configuration is required beyond what is needed for the
TestShib/Keystone IdP.
This patch adds a sample configuration to the defaults file.
DocImpact
Implements: blueprint keystone-sp-adfs-idp
Change-Id: I37728e618d4624699a00f4ecfbb8cab0745e9e52
Add the swift-remote host group and environment file.
Add an os_swift_sync role which will sync the swift ring and SSH keys
for swift hosts (remote and not-remote). This role does the following:
* Moves the key and ring tasks out of the os_swift role into os_swift_sync.
* Adds the use of the "-r" flag that was added to swift_rings.py and
swift_rings_check.py.
* Adds a consistency check between the ring builder files and the
contents file.
* Adjusts the rsync process to use the built-in synchronize module.
* Ensures services are started after the ring/SSH key sync.
Adds environment file and sample configuration file for swift-remote
hosts (conf.d).
Move appropriate default vars to the os_swift_sync role, and remove them
from the os_swift role.
Rename the "os-swift-install.yml" playbook to "os-swift-setup.yml" as
this handles only the setup, and add a playbook to for both
os-swift-sync.yml and an overarching playbook (os-swift-install.yml)
that will call both the os-swift-sync.yml and os-swift-setup.yml
playbooks. This means the funcitonality of "os-swift-install.yml"
remains unchanged.
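A minimal sketch of what the overarching playbook could contain (the
exact contents are an assumption for illustration, not taken from this
patch):
  ---
  # os-swift-install.yml: run the ring/key sync first, then the setup.
  - include: os-swift-sync.yml
  - include: os-swift-setup.yml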
Adjust the run-playbooks.sh so that it calls the new overarching swift
playbook.
Change-Id: Ie2d8041b4bc46f092a96882fe3ca430be92195ed
Partially-Implements: blueprint multi-region-swift
This patch adds the ability to configure Keystone as a Service
Provider (SP) for a Federated Identity Provider (IdP).
* New variables to configure Keystone as a service provider are now
supported under a root `keystone_sp` variable. Example configurations
can be seen in Keystone's defaults file, and a minimal sketch follows
this list. This configuration includes the list of identity providers
and trusted dashboards. (At this time only one identity provider is
supported.)
* Identity provider configuration includes the remote-to-local user
mapping and the list of remote attributes the SP can obtain from the
IdP.
* Shibboleth is installed and configured in the Keystone containers when
SP configuration is present.
* Horizon is configured for SSO login.
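A minimal sketch of the `keystone_sp` structure (the key names shown are
illustrative assumptions; Keystone's defaults file remains the
authoritative example):
  keystone_sp:
    trusted_dashboard_list:
      - "https://{{ external_lb_vip_address }}/auth/websso/"
    trusted_idp_list:
      - name: keystone-idp
        entity_ids:
          - "https://idp.example.com:5000/v3/OS-FEDERATION/saml2/idp"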
DocImpact
UpgradeImpact
Implements: blueprint keystone-federation
Change-Id: I78b3d740434ea4b3ca0bd9f144e4a07026be23c6
Co-Authored-By: Jesse Pretorius <jesse.pretorius@rackspace.co.uk>
This patch fixes the following:
1. Properly quote arguments to run_lock function
2. Properly parse out the playbook filename in run_lock
Specifically, the upgrade steps that used
"-e 'rabbitmq_upgrade=true' setup-infrastructure.yml" and
"/tmp/fix_container_interfaces.yml || true"
were causing issues, and this patch resolves them.
Closes-bug: 1479916
Change-Id: I809085d6da493f7f7d545547a0d984c0e7b1bf45
(cherry picked from commit 560fbbdb077c0f8d6f8bfa9b48e967ccef86664a)
Reduce neutron configuration as follows:
1) Limit [ml2*] sections to neutron server containers [1].
2) Remove the [vlan] section everywhere because it only
pertains to the defunct Linux bridge monolithic
plug-in [2].
3) Explicitly disable VXLAN if deployment only includes flat
or VLAN networks [3].
4) Limit Linux bridge agent configuration options to neutron
agent containers.
5) Remove [agent] tunnel_type option because the Linux bridge
agent does not use it.
6) Move some options to correct locations.
7) Reorder some options to improve readability.
8) Annotate groups of options or specific options.
[1] https://review.openstack.org/#/c/196759/
[2] https://review.openstack.org/#/c/196765/
[3] https://review.openstack.org/#/c/160826/
Change-Id: I275fb600360530534f7673e6eb2a3d397b10fb8e
Closes-Bug: #1473230
This patch removes the unused python clients from the tempest
role and the openstack_clients in the repo as these projects
may introduce incompatible requirements to the projects we
deploy and support.
Change-Id: I2d412dea8d91c94fc4ff9a5f64c19ae9c44fed8e
Closes-Bug: #1482260
Currently the playbooks do not allow Ceph to be configured as a backend
for Cinder, Glance or Nova. This commit adds a new role called
ceph_client to do the required configuration of the hosts and updates
the service roles to include the required configuration file changes.
This commit requires that a Ceph cluster already exists and does not
make any changes to that cluster.
ceph_client role, run on the OpenStack service hosts
- configures the Ceph apt repo
- installs any required Ceph dependencies
- copies the ceph.conf file and appropriate keyring file to /etc/ceph
- creates the necessary libvirt secrets
os_glance role
glance-api.conf will have the following options set for Ceph:
- [DEFAULT]/show_image_direct_url
- [glance_store]/stores
- [glance_store]/rbd_store_pool
- [glance_store]/rbd_store_user
- [glance_store]/rbd_store_ceph_conf
- [glance_store]/rbd_store_chunk_size
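A deployer could drive these options with user variables along the
following lines (the variable names are illustrative assumptions; the
role defaults file is authoritative):
  # user_variables.yml (illustrative)
  glance_default_store: rbd
  glance_rbd_store_pool: images
  glance_rbd_store_user: glance
  glance_rbd_store_ceph_conf: /etc/ceph/ceph.conf
  glance_rbd_store_chunk_size: 8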
os_nova role
nova.conf will have the following options set for Ceph:
- [libvirt]/rbd_user
- [libvirt]/rbd_secret_uuid
- [libvirt]/images_type
- [libvirt]/images_rbd_pool
- [libvirt]/images_rbd_ceph_conf
- [libvirt]/inject_password
- [libvirt]/inject_key
- [libvirt]/inject_partition
- [libvirt]/live_migration_flag
os_cinder is not updated because Ceph is defined as a backend there, and
backend configuration is generated from a dictionary in the deployment
config. For an example backend configuration, see
etc/openstack_deploy/openstack_user_config.yml.example (a minimal sketch
also follows below).
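A sketch of what such a backend dictionary could look like (host names,
pool names and values are illustrative assumptions, not taken from this
patch):
  storage_hosts:
    infra1:
      ip: 172.29.236.100
      container_vars:
        cinder_backends:
          limit_container_types: cinder_volume
          rbd:
            volume_driver: cinder.volume.drivers.rbd.RBDDriver
            rbd_pool: volumes
            rbd_ceph_conf: /etc/ceph/ceph.conf
            rbd_user: cinder
            rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
            volume_backend_name: rbd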
pw-token-gen.py is updated so that variables ending in uuid are assigned
a UUID.
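For example, a secrets entry whose name ends in "uuid" (the name below is
illustrative):
  cinder_ceph_client_uuid:
would now be populated with a UUID rather than a random token.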
DocImpact
Implements: blueprint ceph-block-devices
Closes-Bug: #1455238
Change-Id: Ie484ce0bbb93adc53c30be32f291aa5058b20028
This patch enables Horizon to consume a Keystone v3 API endpoint.
This patch also introduces two variables to allow the endpoint to be
specified independently if required:
- horizon_keystone_host: this defaults to the internal LB IP address
- horizon_keystone_endpoint: this defaults to the internal Keystone
endpoint
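For example, a deployer could point Horizon at a specific endpoint (the
values shown are illustrative, not the defaults):
  horizon_keystone_host: "{{ internal_lb_vip_address }}"
  horizon_keystone_endpoint: "https://{{ horizon_keystone_host }}:5000/v3"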
This patch also does the following:
- properly consumes the horizon_ssl_no_verify role setting;
- includes a little comment cleanup, removing comments which did nothing
but clutter the local_settings configuration file.
Closes-Bug: #1478996
Change-Id: I5b7ceeecab072ead6fd380dcef7a48f1978a56f2
This patch changes the number of forks used by ansible when
using any of the convenience (and thus gate check) scripts
to the number of processors available on the deployment
system.
The previous values were found to cause SSH connection
errors, and reducing the number of forks improved
the chances of success.
This patch also removes the forks setting from ansible.cfg
so that ansible will use the default value when run in any
other way. This leaves the decision of setting the number
of forks to the deployer, as it should be.
Change-Id: I31ad7353344f7994063127ecfce8f4733769234c
Closes-Bug: #1479812
The in-tree version of user_group_vars.yml was removed in
30f9443c5d2f3a3bbb51bf75ad5743ef46c9b0ef, but the corresponding
reference in the upgrade script was not also updated.
This commit changes the behavior to remove the file from /etc/ if found.
Change-Id: I9f5b061289c5f43e32983845469f5123cc9f209d
Closes-Bug: #1479501
This patch allows the swift-proxy pipeline to be adjusted via a variable
"swift_middleware_list", which can be amended to add additional
middleware as required.
The default remains the same - which is to include the default pipeline
when using keystone.
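For example, extra middleware can be appended to the list via user
variables (the entries shown are purely illustrative and not the complete
default pipeline):
  swift_middleware_list:
    - catch_errors
    - healthcheck
    - proxy-logging
    - cache
    - authtoken
    - keystoneauth
    - proxy-logging
    - proxy-server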
Additionally, the logic around whether "authtoken" or "tempauth" is
enabled was changed to check whether these are set in
"swift_middleware_list", without requiring a separate variable. The
"swift_authtoken_active" variable was removed as it is no longer
required.
Tempest object storage settings were adjusted to work with the default
list of enabled discoverable_apis for object storage. Container syncing
was also turned into a variable based on the object storage default.
Closes-Bug: #1453276
Co-Authored-By: Julian Montez <julian.montez@gmail.com>
Co-Authored-By: Darren Birkett <darren.birkett@gmail.com>
Change-Id: I70565296242d10327a58b02149f73eb5f31a877d
If mongodb has been installed as a backend for ceilometer (as it is by
the bootstrap-aio script), we want it removed in the teardown. This
commit updates the teardown.sh to do just that.
Also s/remote_pip_pacakges/remove_pip_packages/
Change-Id: I909feac8321feaf66f44642cac1b729ded10d2fb
Per the bug we need to be using xtrabackup-v2 for the wsrep_sst_method. This
patch creates a galera_sst_method variable and defaults it to xtrabackup-v2.
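For example, in the role defaults:
  galera_sst_method: xtrabackup-v2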
Change-Id: Iee88b49e84e3a8aaf477af45b4a42a4a2c31634e
Closes-Bug: 1478105
(cherry picked from commit 6f170025e6f32b9a80bfd1efee627eea3d97f267)
This patch adds python-cinderclient to the os_glance pip packages
list and removes the duplicate python-glanceclient from the list.
Change-Id: Ia5a2e04838111053b71c94838002c5319ff4386c
Closes-Bug: #1478681
In kilo, the swift dispersion user is created as a member of
the 'service' tenant. However, we are populating the
swift-dispersion.conf as though it was a member of the admin tenant (as
it was in juno). This commit fixes that, and makes
swift-dispersion-report (and any monitoring plugins that rely on it)
work again
Change-Id: I0e9b9ccaa487ee0242346ee279821aa09dd5b022
Closes-Bug: #1479006
The basic user_variables.yml file was referencing a variable that no longer
exists. As such, the reference has been removed.
Change-Id: I11bfd8d9af0b94a57a49043cc595a13addd4c986
This allows the default settings to be overridden, which is useful for
large deployments or for deploying a large number of instances. It also
makes use of a previously unused variable in neutron for setting the
rpc_backend.
Change-Id: I83d11eb79b30dda51c6f738433ca960a0f63246e
Closes-bug: 1471926
This change adds the bits necessary to configure Keystone as an
identity provider (IdP) for an external service provider (SP).
* New variables to configure Keystone as an identity provider are now
supported under a root `keystone_idp` variable. Example configurations
can be seen in Keystone's defaults file, and a minimal sketch follows
this list. This configuration includes the location of the signing
certificate, authentication endpoints and the list of allowed service
providers.
* xmlsec1 is installed in the Keystone containers when IdP configuration
is enabled.
* The IdP metadata and signing certificate are generated and installed.
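A minimal sketch of the `keystone_idp` structure (the key names shown are
illustrative assumptions; Keystone's defaults file remains the
authoritative example):
  keystone_idp:
    certfile: /etc/keystone/ssl/idp_signing_cert.pem
    keyfile: /etc/keystone/ssl/idp_signing_key.pem
    idp_metadata_path: /etc/keystone/saml2_idp_metadata.xml
    service_providers:
      - id: sp_1
        auth_url: "https://sp.example.com:5000/v3/OS-FEDERATION/identity_providers/idp/protocols/saml2/auth"
        sp_url: "https://sp.example.com:5000/Shibboleth.sso/SAML2/ECP"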
Implements: blueprint keystone-federation
Change-Id: I81455e593e3059633a55f7e341511d5ad9eba76f
This patch ensures that the authorized_key Ansible module is used, as
well as the built-in "generate_ssh_key" flag for user creation, so that
we can avoid shelling out to commands.
Additionally, this moves the key synchronisation to use Ansible
variables instead of the memcache server.
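A rough sketch of the pattern (the task names, user name and group are
illustrative assumptions):
  - name: Create the swift system user with an SSH keypair
    user:
      name: swift
      generate_ssh_key: yes
    register: swift_user

  - name: Distribute each host's generated public key to all swift hosts
    authorized_key:
      user: swift
      key: "{{ hostvars[item]['swift_user']['ssh_public_key'] }}"
    with_items: "{{ groups['swift_all'] }}"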
Change-Id: I0072b8d0977ab9aea10dd95080756f6864612013
Closes-Bug: #1477512
This patch ensures that the authorized_key Ansible module is used, as
well as the built-in "generate_ssh_key" flag for user creation, so that
we can avoid shelling out to commands.
Additionally, this moves the key synchronisation to use Ansible
variables instead of the memcache server.
Change-Id: Icd97ebd44f6065fc60fdce1b61e9dc2daa45faa0
Closes-Bug: #1477512
This patch ensures that the authorized_key Ansible module is used, as
well as the built-in "generate_ssh_key" flag for user creation, so that
we can avoid shelling out to commands.
Additionally, this moves the key synchronisation to use Ansible
variables instead of the memcache server.
Change-Id: I4fe7620cae6bf68f4c0fe248cb1dfa3c24e44110
Closes-Bug: #1477494
glance_nfs_client adds 2 fstab entries, with the handler entry being
incorrectly ordered (src and name are the wrong way around).
This patch removes the redundant handler and moves the existing fstab
task to use the "mount" module which will add the entry to fstab and
ensure the filesystems are mounted.
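A sketch of the replacement task using the mount module (the structure of
the glance_nfs_client entries is an assumption for illustration):
  - name: Mount glance NFS exports and record them in fstab
    mount:
      name: "{{ item.local_path }}"
      src: "{{ item.server }}:{{ item.remote_path }}"
      fstype: nfs4
      opts: "{{ item.options | default('_netdev,auto') }}"
      state: mounted
    with_items: "{{ glance_nfs_client }}"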
Additionally this fixes a documentation bug where an incorrect variable
is referenced (glance_nfs_mounts).
Change-Id: I6e0f964d4279800d31119f380a239e2c4ae61cb5
Fixes-Bug: #1477081
This adds a forced apt update for packages and keys which is intended to help
with the HPB4 gate issues. This also creates logs for all ansible commands
run in the gate process.
Change-Id: I79b64567e615d83c52c1a8727082345ffe265c87
This change adds the container network MTU option within the container
network LXC config file. This will allow a deployer to set the MTU within
a provider networks entry in openstack_user_config.yml.
Example:
....
provider_networks:
  - network:
      container_bridge: "br-storage"
      container_type: "veth"
      container_interface: "eth2"
      ip_from_q: "storage"
      type: "raw"
      container_mtu: "9000"
      group_binds:
        - glance_api
        - cinder_api
        - cinder_volume
        - nova_compute
        - swift_proxy
This change gives the deployer the ability to selectively set the MTU as
needed.
The dynamic_inventory.py script has been updated to allow for the MTU entry.
Example file documentation has been added to show how to use this new setting.
BackportPotential
DocImpact
Closes-Bug: #1477346
Change-Id: If8c0ee042d2f1322f8322ea6c8ee33606070d880
Output latest test run results as a subunit stream using testr's
built-in utility.
Closes-Bug: #1476793
Change-Id: Ib7afd26fe368303ac18ddc3436ee90a02301557f
The rabbitmq playbook is designed to run in parallel across the cluster.
This causes an issue when upgrading rabbitmq to a new major or minor
version, because RabbitMQ does not support online migration of
datasets between major versions. While a minor release can be upgraded
online, it is recommended to bring down the cluster for any
upgrade actions. The current configuration takes no account of this.
Reference:
https://www.rabbitmq.com/clustering.html#upgrading for further details.
* A new variable called `rabbitmq_upgrade` has been added. This is set to
false by default to prevent a new version being installed unintentionally.
To run the upgrade, which will shut down the cluster, the variable can be
set to true on the command line:
Example:
  openstack-ansible -e rabbitmq_upgrade=true \
    rabbitmq-install.yml
* A new variable called `rabbitmq_ignore_version_state` has been added,
which can be set to "true" to ignore the package and version state tasks.
This has been provided to allow a deployer to rerun the plays in an
environment where the playbooks have been upgraded, the default
version of rabbitmq has changed within the role, and the deployer has
elected not to upgrade the installation at that time. This will ensure a
deployer is able to recluster an environment as needed without
affecting the package state.
Example:
  openstack-ansible -e rabbitmq_ignore_version_state=true \
    rabbitmq-install.yml
* A new variable `rabbitmq_primary_cluster_node` has been added, which
allows a deployer to elect/set the primary cluster node in an
environment. This variable is used to determine the restart order
of RabbitMQ nodes, i.e. this will be the last node down and the first one
up in an environment. By default this variable is set to:
rabbitmq_primary_cluster_node: "{{ groups['rabbitmq_all'][0] }}"
scripts/run-upgrade.sh has been modified to pass 'rabbitmq_upgrade=true'
on the command line so that RabbitMQ can be upgraded as part of the
upgrade between OpenStack versions.
DocImpact
Change-Id: I17d4429b9b94d47c1578dd58a2fb20698d1fe02e
Closes-bug: #1474992