This change adds manila to our playbook lineup, allowing deployers
to use the shared-filesystem-as-a-service solution in their deployments.
Depends-On: I4d95bfc15d09b7b7c0b997d7eab91509b0c63885
Change-Id: I63ee785d3241d92ea94c07f89882000cae7a0ff6
Signed-off-by: cloudnull <kevin@cloudnull.com>
In order to enable the testing of the complete telemetry
stack, we add panko to the integrated build.
Change-Id: Ica12e3c0a586609bf5a3e5b50905922932a0bbce
This commit adds experimental deployment of the Masakari role.
It requires an existing corosync/pacemaker cluster on compute nodes
for hostmonitors to operate correctly.
Corosync/pacemaker deployment stays out of OSA scope for now.
Depends-On: Ib33d7bc83f1428763f873e1155fd9e3eb4c937e4
Change-Id: Ie543885a52f013635b9f553982c3d6448e3cc3aa
In addition to adding the mistral role, we also
include os-mistral-install.yml to deploy
mistral to the appropriate hosts.
Change-Id: I9c93e82ec655459c45baf91ed6e6130f2735f61f
This variable is not defined, causing implementations
of congress in the integrated build to fail.
Change-Id: Iaf2880866d1cc3780fec47fdf429c64227db914f
The molteniron service appears to exist largely for testing purposes,
and neither the service nor the role has had much activity for over
a year. As such, it is removed from OSA's integrated build.
Change-Id: I94b1be326935f7006027b4a437ff3b2b0a6f9a69
Now that all the MQ and database creation tasks are in the roles,
and use appropriate defaults, we can remove all the wiring from
group_vars and the tasks.
To cater to the changes in passwords, we also ensure that the
upgrade tooling renames any existing secrets.
The healthcheck-infrastructure.yml playbook is deliberately left
alone due to it being refactored anyway in
https://review.openstack.org/587408
Change-Id: Ie3960e2e2ac9c0aff0bc36f46182be2fc0a038b3
Currently 3 sets of credentials are generated for MQ, per service:
- rabbitmq_password
- oslomsg_rpc_password
- oslomsg_notify_password
In each service, we should use x_oslomsg_rpc_password and
x_oslomsg_notify_password, and not the rabbitmq password.
However, there is no wiring as of today. This could lead
to a username like nova, on a vhost nova, with 3 different
passwords, of which only one would work.
This patch ensures the wiring is done by default, so all
the roles are able to use x_oslomsg_notify_password and
x_oslomsg_rpc_password. This is done by always pointing the
notify credentials at the rpc ones.
The RPC credentials are in turn a reference to rabbitmq_password, so
it's easy to upgrade from Queens to Rocky without changes.
If deployers want to override the credentials, they can
do so by uncommenting the appropriate line in the
user_secrets, as in the sketch below. This would then override
the existing group_vars and wire the secrets appropriately. A new
user should be used in that case, as written in the comments.
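As a minimal sketch of that wiring (glance as the example service;
the exact group_vars form is an assumption based on the description
above):

    # group_vars sketch: the notify credentials reference the RPC ones,
    # which in turn reference the pre-existing rabbitmq password.
    glance_oslomsg_rpc_password: "{{ glance_rabbitmq_password }}"
    glance_oslomsg_notify_password: "{{ glance_oslomsg_rpc_password }}"

    # user_secrets (uncomment to override with a dedicated credential;
    # use a new user in that case):
    # glance_oslomsg_notify_password: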
Change-Id: I834bdc5a33f6b3c49452a9948c889caa79659f3c
There is no record of why we implement the MQ vhost/user creation
outside of the role in the playbook, when we could do it inside the
role.
Implementing it inside the role allows us to reduce the quantity of
group_vars duplicated from the role, and allows us to better document
the required variables in the role. The delegation can still be done
from within the role, just as it is done in the playbook today (see
the sketch below).
In this patch we remove the group_vars which were duplicated from the
role, and remove the MQ setup tasks as they are no longer required.
We also remove the user_secrets which are now totally unused.
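A minimal sketch of the in-role setup under these assumptions (the
rabbitmq_vhost/rabbitmq_user modules are the stock Ansible ones; the
group and variable names are illustrative, not the exact role code):

    - name: Create the service RabbitMQ vhost
      rabbitmq_vhost:
        name: "{{ glance_rabbitmq_vhost }}"
        state: present
      delegate_to: "{{ groups['rabbitmq_all'][0] }}"

    - name: Create the service RabbitMQ user
      rabbitmq_user:
        user: "{{ glance_rabbitmq_userid }}"
        password: "{{ glance_rabbitmq_password }}"
        vhost: "{{ glance_rabbitmq_vhost }}"
        configure_priv: ".*"
        read_priv: ".*"
        write_priv: ".*"
        state: present
      delegate_to: "{{ groups['rabbitmq_all'][0] }}"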
Depends-On: https://review.openstack.org/568517
Change-Id: I366d9f7f7ffb0d6912590520a5ea5a718ab0d9af
With the automatic Octavia cert creation, the user has the option
to use a default secret for the generation of the client_ca key.
This ca will be used for generating client certificates to be used
by Octavia.
Change-Id: I6bf9c9f93e6fb96e836333bf1379035df488ee8f
Depends-On: https://review.openstack.org/553630
This commit introduces oslo.messaging service variables in place of
the rabbitmq server variables. This will enable the use of separate and
alternative messaging system backends for RPC and Notify communications
with minimal impact on the overall deployment configuration.
This patch:
* updates service passwords
* adds oslo-messaging to group vars (see the sketch after this list)
* updates inventory group vars for each service
* adds a common task for the oslo.messaging vhost/user install
* updates service install playbooks
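A hedged sketch of the per-service variables this introduces (glance
as the example; the exact names are assumptions, not the final
group_vars):

    # RPC and Notify can point at separate backends/host groups.
    glance_oslomsg_rpc_transport: rabbit
    glance_oslomsg_rpc_host_group: rabbitmq_all
    glance_oslomsg_rpc_port: 5671
    glance_oslomsg_notify_transport: rabbit
    glance_oslomsg_notify_host_group: rabbitmq_all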
Change-Id: I235bd33a52df5451a62d2c8700a65a1bc093e4d6
This adds a new scenario for Ceph Rados GW integration:
- It adds the RGW to haproxy on the default swift port if
swift isn't already deployed
- It adds tempest swift API testing against the rados gw in the
check scenario
- It adds the ceph rgw to the default inventories.
Change-Id: I5f6ff3fa05a4a8019bf5b695b02184d9f065bc2e
Co-Authored-By: Jean-Philippe Evrard <jean-philippe@evrard.me>
Co-Authored-By: Maxime Guyot <maxime.guyot@elits.com>
The placement_database config options were added in Newton
but the actual code to use the options was reverted and is
not used.
Change-Id: I97f44c0b52af6c356433cf2c1021e9c175a8710d
Depends-On: https://review.openstack.org/541685
Related-Reviews: I31293ac4689630e4113588ab2c6373cf572b8f38
Closes-Bug: #1670419
This patch provides the necessary files and changes in existing files to
deploy the tacker component. Tacker is an orchestrator and VNF manager,
widely used as a MANO component in NFV deployments.
Change-Id: I339c9cc032f871766a89e24c2ada38063fc7ac39
Remove the unnecessary trove_regular_user. The documentation that was
referenced [1] when this was added to the role is intended to configure
Trove for development purposes. The trove_regular_user is not used by the
Trove service and is only being created to give the developer a non-admin
user to use for testing.
[1] https://docs.openstack.org/trove/latest/install/manual_install.html
Change-Id: Ic71216c21092a22105ad56ef98e1554dff48f0b0
- Adds Octavia AIO config
- Adds group vars to trigger config of LBaaS/Octavia in neutron
if Octavia is installed
- Adds Octavia endpoint to haproxy
- Adds Octavia vars and secrets
- Adds Octavia to repository
- adjusts tests
- adds reno
Depends-On: Idb419a4ca5daa311d39c90eda5f83412ccf576ad
Change-Id: Ia334ed42ed0664b10cba860d4231a6aa1588800e
Additionally, resolve the group name changes and variable removal from
the os_designate role in patch:
https://review.openstack.org/#/c/427810/
Change-Id: I7ed562aaa1f8f6db0b4cfb5da46b030540332f49
Add playbook, haproxy service configuration, variable definition,
and environment definition files required to deploy barbican as an
integrated role of openstack-ansible.
Change-Id: If87099958e0b1fc48866a468a47bb60bae622f28
In order to deploy LXD compute types the lxd password is required. This
change adds the `lxd_trust_password` variable to the default passwords
file so that a strong password is generated by default even if it's not
to be used initially.
Change-Id: Ifb7ec681043b02bbbbcd53bcba01653097a55253
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
1. Defined default values for some trove variables
2. Added password-related variables to user_secrets.yml
3. Fixed variable names so that the same variable name is used everywhere
Change-Id: I39c3bad7cca2ab8793419f4d184180ecf7d60f82
Closes-Bug: #1636798
Ironic in OSA is currently broken as the ironic database
user isn't created, and consequently the ironic services can't
connect to the database. It broke back in patch 91deb13.
This patch corrects the openstack-ansible side of the problem.
Another patch will fix the os-ironic side.
Change-Id: I38aa44bc33a80bb6d53a66bce34aff57048a1af3
Partial-Bug: #1625081
Signed-off-by: Michael Davies <michael@the-davies.net>
Since the namespaced telemetry vars for swift were added in:
https://review.openstack.org/#/c/363644/
we need to adjust the openstack-ansible repository to reflect these
changes. Additionally, we need to add a task that will set up the
rabbitmq user/vhost when using ceilometer with swift.
Change-Id: I74d0d7dec19abb525837e65d1354091cdd3cd0f2
This change adds a playbook for deploying sahara as part of the
sahara role integration.
This change also adds variables needed for the installation of
the sahara-dashboard and its support by the ceilometer service,
which will be added in patches to their respective roles.
Change-Id: I782d74e09d1796744ece75d12aa9c65c9453be19
This play allows deployers to install and configure
Rally for post-deployment functional and performance testing
Depends-On: I3d5cc822cc0d3c2b0b3ba7b05a9fe1b6b9e3a839
Change-Id: I1c4567649e4e35641610f27eaf3b8a57c8a722cc
Ceilometer is set to use Gnocchi dispatch when Gnocchi is deployed.
All references to MongoDB in the AIO are removed.
Partial-Blueprint: role-gnocchi
Depends-On: I94e7d461376a8032a76ea34b57190077a60a0fb5
Change-Id: Ia41141e947d48426c7d490497639d62e8dff6f8e
In addition to adding the magnum role to the repo build process we also
include the os-magnum-install.yml to deploy magnum to hosts tagged with
os_magnum.
Change-Id: I32dee168d1005572510f630a21f7d7a7a05640d9
Implements: blueprint role-magnum
As the next step in integrating Gnocchi, this playbook installs the
gnocchi role. The role is enabled for the gate by default, but can
be disabled consistent with other roles. It is also included in
the setup-openstack.yml playbook so that linters run on the
playbook.
Change-Id: I2e8b32f1cc6830c479da418b04896f273c5b2b86
Depends-On: I0eb60ef7a31d873ba70c353138da252284389f28
Partial-Blueprint: role-gnocchi
heat_profiler_hmac_key and heat_cfn_service_password
are no longer referenced within any plays, tasks, or templates
and should be removed.
Related to, but not dependent upon, change I42ca62a64a6985b37d73f7f14093207d02fefb5d
Change-Id: Id1c2d4b26735b671845cd76f63b7a922242c5662
Background: the bug requests the ability to access the
RabbitMQ management UI through HAProxy.
Approach:
--Add rabbitmq ui port 15672 to HAProxy
--DO NOT add a monitoring user by default; instead, key on the
existence of rabbitmq_monitoring_userid in user_variables.yml
(see the example after this list)
--Add a user_variables.yml update per the above, with an
explanation
--Add a "monitoring" user to rabbitmq for monitoring, with the
"monitoring" user tag
--Add the monitoring user passwd var to user_secrets
--Add a features: release note
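For example, a deployer opts in with something like (the variable
names are the ones named above; values illustrative):

    # user_variables.yml -- defining this key enables the monitoring user:
    rabbitmq_monitoring_userid: monitoring

    # user_secrets.yml:
    rabbitmq_monitoring_password: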
Closes-Bug: 1446434
Change-Id: Idaf02cad6bb292d02f1cf6a733dbbc6ff4b4435e
This change adds the `ironic_swift_temp_url_secret_key` variable to the
user_secrets.yml file. This variable is required when doing an ironic deployment.
Change-Id: Id9f94c0238ad3b6598044fe618f9913e88acec8c
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
This patch adds the initial support for the ironic role in
openstack-ansible, but leaves ironic unconfigured and not
installed by default.
Configuration, including Nova configuration, will be addressed in
subsequent patches.
Change-Id: Id9f01deb5c46ee2186b9c41c7f88205560b5f437
Depends-On: Ide66c7ee59192ac441ac2919028eca0ad665ceea
Depends-On: I590f5ade90b3e37af7f1b8ee333000d4f993f8c5
Partially-implements: blueprint role-ironic
Signed-off-by: Michael Davies <michael@the-davies.net>
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
This change adds the nova_api db migration that has to happen
within Mitaka. This is a newly required DB, though the DB entry has
existed since Kilo.
Change:
* The SHA was moved forward to the new version of nova to support
this change. This was done independently of the rest of the stack
to ensure functionality of this new DB.
* An entry was added to the secrets file to support a new db user
and password.
* The requirements repo was rev'd forward to support the new
requirements within nova.
Depends-On: If63b541bfaf91333ac5963d391e6058ac8254eec
UpgradeImpact
Change-Id: I711018f4f1f27d667a3dda94a01dc76616f98f4c
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
This change enables L3HA using the neutron internals by default. This should
make the general Neutron router support more robust.
Note:
* The capability will not affect running routers, so upgrades are seamless.
* The l3ha support is only rendered by default when using the ML2 plugin.
* The ATT neutron l3HA tool is still needed as a backup to ensure that
routers are always scheduled to an agent; it will remain in place to
facilitate L3HA on routers created without the ha capability.
Upgrade notes:
- neutron_ha_vrrp_advert_int (removed)
- neutron_ha_vrrp_auth_password (moved to user_secrets.yml; see the example after this list)
- neutron_handle_internal_only_routers (removed)
- neutron_l3_ha_enabled (removed)
- neutron_min_l3_agents_per_router (removed)
- neutron_max_l3_agents_per_router (removed)
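For example, the moved secret now lives in user_secrets.yml (left
empty here so the secrets tooling can generate a value):

    neutron_ha_vrrp_auth_password: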
DocImpact
UpgradeImpact
Closes-Bug: #1416405
Change-Id: Ie456a50f525f11b9d15cd2a9c9590b41f19a9b5e
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
The alarming function of Telemetry has been separated out
by design. This patchset creates new containers for these
alarming services and deploys them accordingly.
See:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/073897.html
DocImpact
UpgradeImpact
Implements: blueprint liberty-release
Change-Id: I25294a25afa76d4d8bddad0a51c48485f33a6d20
This commit uses a keepalived role, available on
ansible galaxy, to configure keepalived for haproxy.
Keepalived makes haproxy truly HA by keeping
haproxy's VIP highly available between the hosts
defined in the inventory.
The keepalived role configuration is fully
documented in the upstream role.
To configure keepalived on your host, you only have to
give it a variable (dict). A template handles the
generation of the configuration of keepalived.
By default, the variable files defined in vars/configs/
are enough to have keepalived working for haproxy,
with a master-backup configuration.
You can define other variable files by setting
haproxy_keepalived_(master|backup)_vars in your
user_variables. This should point to a "variable
template" file like the one you can find
in vars/configs/*
The haproxy playbook has been changed to rely on
the dynamic generation script. It will use env.d
to source the haproxy hosts. The first host from the
generated inventory will be considered the master,
while the others are slaves. The keepalived role
will only run if more than one haproxy host is found
in the inventory. This behaviour can be changed,
and keepalived can be disabled, via the variable
haproxy_use_keepalived.
The implemented variables are the following:
* haproxy_keepalived_(ext|int)ernal_vip_cidr
* haproxy_keepalived_(ext|int)ernal_interface
* haproxy_keepalived_(ext|int)ernal_virtual_router_id
* haproxy_keepalived_priority_backup
* haproxy_keepalived_priority_master
* haproxy_keepalived_vars_file
Of these variables, only
haproxy_keepalived_(ext|int)ernal_vip_cidr is necessary.
However, it's recommended to also configure
haproxy_keepalived_(ext|int)ernal_interface
(so keepalived knows which interface the vips can bind on).
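A minimal user_variables sketch (addresses and interface names are
placeholders, not recommendations):

    haproxy_keepalived_external_vip_cidr: "203.0.113.10/32"
    haproxy_keepalived_internal_vip_cidr: "172.29.236.10/32"
    # Recommended, so keepalived knows which interfaces the VIPs bind on:
    haproxy_keepalived_external_interface: br-flat
    haproxy_keepalived_internal_interface: br-mgmt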
Closes-Bug: 1414397
Change-Id: Ib87a3bb70d6f4b7ac9356e8a28fe4b5936eb9334
Presently all services use the single root virtual host within
RabbitMQ. While this is "OK" for small to mid-sized deployments,
it would be better to divide services into logical resource groups
within RabbitMQ, which brings additional security. This change set
gives OSAD better compartmentalization of the consumer services
that use RabbitMQ.
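As an illustrative sketch of the per-service split (glance as the
example; the variable names are assumptions):

    # Each service gets its own vhost and user instead of sharing '/'.
    glance_rabbitmq_vhost: /glance
    glance_rabbitmq_userid: glance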
UpgradeImpact
DocImpact
Change-Id: I6f9d07522faf133f3c1c84a5b9046a55d5789e52
Implements: blueprint compartmentalize-rabbitmq
This patch enables the HAProxy webstats for all the configured
backends and frontends.
A password entry is added to user_secrets.yml for the webstats
password.
It also adds variables for the port number, username, and password,
which can be overridden in user_variables.yml as appropriate.
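A hedged example of those overrides (the variable names are
assumptions based on the description above):

    # user_variables.yml:
    haproxy_stats_enabled: true
    haproxy_stats_port: 1936
    haproxy_username: admin

    # user_secrets.yml:
    haproxy_stats_password: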
Change-Id: Iec866ad124bec6fb0b8524a966adf64e22422035
Closes-Bug: #1446432
Currently the playbooks do not allow Ceph to be configured as a backend
for Cinder, Glance or Nova. This commit adds a new role called
ceph_client to do the required configuration of the hosts and updates
the service roles to include the required configuration file changes.
This commit requires that a Ceph cluster already exists and does not
make any changes to that cluster.
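Under that assumption, pointing the deployment at the existing
cluster could look roughly like this (the ceph_mons variable name is
an assumption for illustration):

    # user_variables.yml sketch: monitors of the pre-existing cluster
    ceph_mons:
      - 172.29.244.10
      - 172.29.244.11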
ceph_client role, run on the OpenStack service hosts
- configures the Ceph apt repo
- installs any required Ceph dependencies
- copies the ceph.conf file and appropriate keyring file to /etc/ceph
- creates the necessary libvirt secrets
os_glance role
glance-api.conf will set the following variables for Ceph:
- [DEFAULT]/show_image_direct_url
- [glance_store]/stores
- [glance_store]/rbd_store_pool
- [glance_store]/rbd_store_user
- [glance_store]/rbd_store_ceph_conf
- [glance_store]/rbd_store_chunk_size
os_nova role
nova.conf will set the following variables for Ceph:
- [libvirt]/rbd_user
- [libvirt]/rbd_secret_uuid
- [libvirt]/images_type
- [libvirt]/images_rbd_pool
- [libvirt]/images_rbd_ceph_conf
- [libvirt]/inject_password
- [libvirt]/inject_key
- [libvirt]/inject_partition
- [libvirt]/live_migration_flag
os_cinder is not updated because ceph is defined as a backend, and
that is generated from a dictionary of the config. For an example
backend config, see etc/openstack_deploy/openstack_user_config.yml.example
and the sketch below.
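A hedged sketch of such a backend dictionary (host name, IP, and
pool/user values are illustrative only):

    storage_hosts:
      infra1:
        ip: 172.29.236.11
        container_vars:
          cinder_backends:
            limit_container_types: cinder_volume
            rbd:
              volume_driver: cinder.volume.drivers.rbd.RBDDriver
              volume_backend_name: rbd
              rbd_pool: volumes
              rbd_ceph_conf: /etc/ceph/ceph.conf
              rbd_user: cinder
              # ends in 'uuid', so pw-token-gen.py assigns it a UUID:
              rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"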
pw-token-gen.py is updated so that variables ending in uuid are assigned
a UUID.
DocImpact
Implements: blueprint ceph-block-devices
Closes-Bug: #1455238
Change-Id: Ie484ce0bbb93adc53c30be32f291aa5058b20028