Since ansible-core 2.10 it is recommended to reference modules by their FQCN.
In order to align with this recommendation, we perform the migration
by applying the suggestions made by `ansible-lint --fix=fqcn`.
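As an illustration (the task shown is generic, not taken from this change), the fix rewrites short module names to their fully qualified collection names:

```yaml
# Before: short module name
- name: Deploy service configuration
  template:
    src: service.conf.j2
    dest: /etc/service/service.conf

# After: fully qualified collection name (FQCN)
- name: Deploy service configuration
  ansible.builtin.template:
    src: service.conf.j2
    dest: /etc/service/service.conf
```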
Change-Id: I2de9b8e2c9d526aa13a1dca1736085c505643735
In order to reduce divergence from ansible-lint rules, we apply
auto-fixing of violations.
In the current patch we replace all kinds of truthy values with
`true` or `false` to align with the recommendations, along with
aligning the quoting style.
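For instance (an illustrative task, not one from this change), the truthy rule rewrites values like these:

```yaml
# Before: mixed truthy spellings
- name: Enable and restart the service
  ansible.builtin.service:
    name: myservice
    enabled: yes
    state: restarted
  become: True

# After: canonical `true`/`false`
- name: Enable and restart the service
  ansible.builtin.service:
    name: myservice
    enabled: true
    state: restarted
  become: true
```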
Change-Id: I05cbd6ed18f7ffb3ebf9e0abc92ebdbf88297ded
With ansible-core 2.16 a breaking change landed [1] that makes some
filters return their results in arbitrary order. Until now we relied
on those filters always returning identically ordered lists.
We therefore need to ensure that behaviour remains deterministic
where this is important.
[1] https://github.com/ansible/ansible/issues/82554
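A minimal sketch of the kind of fix this implies (variable names here are hypothetical): pipe the affected filter output through `sort` wherever a stable order matters:

```yaml
# difference() no longer guarantees ordering since ansible-core 2.16,
# so sort explicitly where the result order matters:
- name: Build a deterministic list of hosts to remove
  ansible.builtin.set_fact:
    removed_hosts: "{{ all_hosts | difference(kept_hosts) | sort }}"
```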
Change-Id: Ida46fe03ae5a77a311949dbe39388a6e7bc88fa1
This reverts commit 1fb52c9a4c16e323e99e73cfc601641ae9a2fa8c.
Reason for revert: We've backported the patch to the initial 2024.1 release,
which means the upgrade path will be handled by 2024.1 itself and there
is no need to keep the code any longer.
Change-Id: I5f5f594d78e3cdda0823ba155a5efdb3d67d3dec
Due to a shortcoming of the QManager implementation [1], in case of uWSGI
usage on metal hosts, the flow ends up with the same
hostname/processname being set, making services fight over the same file
under SHM.
In order to avoid this, we prepend the hostname with the service_name.
We can not change the processname instead, since that would lead to a
fight between different processes of the same service.
[1] https://bugs.launchpad.net/oslo.messaging/+bug/2065922
Change-Id: I763357049f0ee1961cc69e58dc7616369cd6300f
<service>-config tags are quite broad and have a long execution
time. When you only need to modify a service's '.conf' file and
similar, it is useful to have a quicker method to do so.
Change-Id: I6fe51d12af039abd216c4d937766dd9da206234e
During the last release cycle oslo.messaging landed a series [1] of
extremely useful changes that implement modern messaging techniques
for RabbitMQ quorum queues.
Since these changes are breaking and require queues to be re-created,
it makes total sense to align them with the migration to quorum queues
by default.
[1] https://review.opendev.org/q/topic:%22bug-2031497%22
Change-Id: I7b245649625b23a11e4a036ad44837b8b8857d58
In order to be able to globally enable notification reporting for all
services, without the need to have Ceilometer deployed or a bunch of
overrides for each service, we add an `oslomsg_notify_enabled` variable
that controls whether notifications are enabled.
The presence of Ceilometer is still respected and referenced by default.
A potential use case is billing panels that rely on notifications
but do not require the presence of Ceilometer.
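For example, a deployer could force notifications on for every service regardless of Ceilometer (a sketch, assuming the usual user_variables.yml override mechanism):

```yaml
# user_variables.yml
# Emit oslo.messaging notifications even without Ceilometer deployed:
oslomsg_notify_enabled: true
```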
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/914144
Change-Id: If017f0bc80b2424fef20551580fba6bb0d1feba5
In order to allow the definition of policies per service, we need to add
variables to service roles that will be passed to openstack.osa.mq_setup.
Currently this can be handled by leveraging group_vars and overriding
`oslomsg_rpc_policies` as a whole, but that is not obvious and can be
non-trivial for groups which co-locate multiple services
or in case of metal deployments.
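The existing workaround looks roughly like this (the policy entry below is illustrative, not the exact schema used by the roles):

```yaml
# group_vars/heat_all.yml — previous workaround: override the whole list
oslomsg_rpc_policies:
  - name: "CustomTTL"            # illustrative policy entry
    pattern: "^(?!amq\\.).*"
    priority: 0
    tags:
      message-ttl: 120000
```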
Change-Id: I17e5db306d55d28afe7809c170692a93d27e0a5b
At the moment we assign `heat_stack_owner` to the `admin` user in the
`service` project, which leads to completely unwanted behaviour, since
the `admin` user does not have any privileges in the `service` project
other than `heat_stack_owner`.
Instead we should be granting the privileges to the bootstrapped project
for the admin user.
This fixes the confusion and potential issues users might face in Horizon
when switching to the `service` project, where they have no permissions.
Change-Id: I95faa779bf62524fafd09576aa7ae27de029bb57
This change implements and enables by default quorum support
for RabbitMQ, as well as providing default variables to globally tune
its behaviour.
In order to ensure an upgrade path and the ability to switch back to HA
queues, we change vhost names by removing the leading `/`, since enabling
quorum requires removing the exchange, which is a tricky thing to do with
running services.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/896017
Change-Id: I7e4e8b3be33536545b5b4bcfb4855e8c160bb152
While <service>_galera_port is defined and used for the db_setup
role, it is not in fact used in the connection string for oslo.db.
Change-Id: If10b9591f4a97eaf54cf5bd09865d29ae461d639
With the update of ansible-lint to version >=6.0.0, a lot of new
linters were added that are enabled by default. In order to comply
with the linter rules, we're applying changes to the role.
With that we also update the metadata to reflect the current state.
Depends-On: https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/888223
Change-Id: I68a3041edf0b0eb891fbe1e40081f779fc40c21d
By overriding the variable `heat_backend_ssl: True`, HTTPS will
be enabled, disabling HTTP support on the heat backend API.
The ansible-role-pki is used to generate the required TLS
certificates if this functionality is enabled.
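For example (a user_variables.yml sketch):

```yaml
# user_variables.yml
# Serve the heat backend API over HTTPS, with certificates generated
# by the PKI role:
heat_backend_ssl: true
```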
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/879085
Change-Id: Ifb904adc61f1461e646c3fce0bd062f526b8e446
Add file to the reno documentation build to show release notes for
stable/zed.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/zed.
Sem-Ver: feature
Change-Id: I29f9ded69d449b2e12eeacf13a3a4b3c2df7e7c2
If venv_wheel_build_enable is set to False, placement will fail to
clone and install repositories due to the missing git binary.
Change-Id: If1e3eec0c558d1472da7bc3a4e87825e36ba4fdc
Related-Bug: #1989506
Closes-Bug: #1995536
This line snuck in with If9f874305d0470f267bc8bbc74e879ec11860cac,
probably to bring it in line with other OSA roles, but it should already
be covered by the distribution_major_version line above.
Change-Id: I48b67f163ea5cf5d6fb37a9a8ae5678aa8574fe7
Implement support for service_tokens. For that we convert
role_name to a list, along with renaming the corresponding variable.
Additionally, service_type is now defined for keystone_authtoken, which
makes it possible to validate tokens with restricted access rules.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/845690
Change-Id: Ib5d15aaf56112a776e2b9abb2396f9ea4f4fe319
With the Sphinx 5.0.0 release, the default for the language variable
changed from None to 'en'. With that, the current None value is not
valid and should not be used.
Change-Id: I1ca40de0eaeacd389e7b54cbf1b06a26840fb4d0
Use a first_found lookup instead of a with_first_found loop so that
the 'paths' parameter can be used.
This ensures that only vars from the role are included, and not vars
from a parent calling role. This can happen when a parent role has
a higher priority vars file available for inclusion than the role
it calls.
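The resulting pattern looks roughly like this (file names are illustrative):

```yaml
- name: Gather variables for each operating system
  ansible.builtin.include_vars: "{{ lookup('ansible.builtin.first_found', params) }}"
  vars:
    params:
      files:
        - "{{ ansible_facts['distribution'] | lower }}.yml"
        - "{{ ansible_facts['os_family'] | lower }}.yml"
      paths:
        # Restrict the search to this role's own vars directory, so a
        # parent role's higher-priority vars file is never picked up:
        - "{{ role_path }}/vars"
```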
Change-Id: If9f874305d0470f267bc8bbc74e879ec11860cac
- Implemented a new variable ``connection_recycle_time`` responsible for
  SQLAlchemy's connection recycling.
- Set new default values for DB pooling variables, which are inherited
  from the global ones.
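As a sketch (the global override shown is an assumption about how deployers would consume this):

```yaml
# user_variables.yml
# Recycle SQLAlchemy connections after 10 minutes; per-service defaults
# inherit from this global value:
connection_recycle_time: 600
```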
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/819424
Change-Id: I78301a9d98854ba9f80cf6613e62a363f8327dfc
Since we still use ceph-ansible, which has its own implementation of
the config_template module, it is worth referencing the mentioned module
explicitly via its collection.
Depends-On: https://review.opendev.org/c/openstack/openstack-ansible/+/819814
Change-Id: Ie4e45d41b070c5abbc3b80305aeee89470ee739a
With the PKI role in place, in most cases you don't need to explicitly
provide a path to the CA file, because the PKI role ensures that the CA
is trusted by the system overall. Meanwhile, PyMySQL [1] requires you to
either provide a CA file, provide a cert/key pair, or enable verification.
Since the current behaviour is to provide a path to a custom CA, we expect
the certificate to be trusted overall. Thus we enable certificate
verification when galera_use_ssl is True.
[1] 78f0cf99e5/pymysql/connections.py (L267)
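In practice this means a deployer relying on a system-trusted CA can enable SSL without pointing at a CA file (sketch):

```yaml
# user_variables.yml
# Certificate verification is enabled; the CA is expected to already be
# in the system trust store (e.g. installed by the PKI role):
galera_use_ssl: true
```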
Change-Id: I8e689330b76e72df780be3b2f8af066a5fe96a2a
Deployment can fail if a user with a name defined in _service_users
exists in more than one domain ("Multiple matches found for <username>").
To avoid these errors we need to explicitly define the domain in
_service_users.
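A sketch of the shape of the fix (keys here are illustrative of the _service_users schema, not copied from the role):

```yaml
_service_users:
  - name: "heat"
    password: "{{ heat_service_password }}"
    domain: "default"  # explicit domain avoids 'Multiple matches found'
```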
Change-Id: I55c5c8b9806188f246af9f2e89afe4a2d1b38b3c
We created an integrated linters check job a while back and it has been
working successfully for several releases. At the moment we are
experiencing difficulties with the future maintenance of the linters
check from the openstack-ansible-tests repo. So instead of fixing the
current one, we replace it with a modern version of the test.
Change-Id: I604c8114da81ad351e2ee9692e07e4f38c521c4b