The ipaclient ansible role requires that ansible_fqdn is defined, but
due to [1] we don't have ansible_fqdn inside of ansible_facts. This
change uses the 'fqdn' ansible fact for ansible_fqdn, which is
equivalent.
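A minimal sketch of the workaround in ansible terms (the include_role
wiring here is illustrative, not the exact call site in the templates):

    # Illustrative: derive ansible_fqdn from the 'fqdn' fact, which is
    # still available when ansible_fqdn is missing from ansible_facts.
    - name: Run ipaclient role with ansible_fqdn defined
      include_role:
        name: ipaclient
      vars:
        ansible_fqdn: "{{ ansible_facts['fqdn'] }}"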
[1]: https://opendev.org/openstack/tripleo-heat-templates/commit/4e79336d69e6b7fa4b026922bac7953bafeee96d
Related-Bug: 1915761
Closes-Bug: 1923248
Change-Id: I0a740e86588c96fff24fa09698c35e492d1c64db
Previously, access to the sshd run by the nova-migration-target
container was only limited via sshd_config. While login was not
possible from other networks, the service was reachable via all
networks. This change limits access to the NovaLibvirt and NovaApi
networks, which are used for cold and live migration.
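Conceptually, the restriction becomes a set of per-network firewall
rules rather than an sshd_config match; a sketch (rule names, port and
CIDRs are illustrative, not the deployed values):

    # Illustrative firewall rules: only allow the migration sshd port
    # from the NovaLibvirt and NovaApi networks instead of everywhere.
    firewall_rules:
      '128 nova migration target (libvirt net)':
        dport: 2022                # MigrationSshPort, value assumed
        proto: tcp
        source: 172.16.1.0/24      # NovaLibvirt network CIDR (example)
      '129 nova migration target (api net)':
        dport: 2022
        proto: tcp
        source: 172.16.2.0/24      # NovaApi network CIDR (example)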
Change-Id: Ie868463143af66c7004dbcacefde76ca0977880e
This patch adds two new parameters for deploying Barbican with the
PKCS#11 backend: `BarbicanPkcs11CryptoTokenLabels` and
`BarbicanPkcs11CryptoOsLockingOk`.
The patch also deprecates `BarbicanPkcs11CryptoTokenLabel` in favor of
the new option, which can be set to more than one label.
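A sketch of an environment file using the new parameters (label and
boolean values are examples; the exact list format follows whatever
the parameter schema defines):

    parameter_defaults:
      # Replaces the deprecated BarbicanPkcs11CryptoTokenLabel and
      # accepts more than one label.
      BarbicanPkcs11CryptoTokenLabels: label_one,label_two
      # Tells the PKCS#11 library that OS-level locking is acceptable.
      BarbicanPkcs11CryptoOsLockingOk: true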
Depends-On: Iba7013dd6e1b1e4650b25cd4dd8dc1f355ceb538
Change-Id: I1c5059799f613a62a13379eb82ba516a8ed3a15a
The bind pool information is now automatically generated, and the
variables and sample config files are no longer needed. Matching bind9
and rndc key configuration is also generated.
Note: this patch also removes the use of puppet-dns, which is
problematic when bind and the worker aren't on the same host and is
awkward to use with respect to rndc keys. It also modifies
yaml-validate.py to correct a rule that changed with respect to
rndc_allowed_addresses.
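For illustration only, the validation rule concerns a parameter along
these lines (the parameter name and CIDR here are hypothetical, used
just to show the shape):

    parameter_defaults:
      # Hypothetical example: limit rndc access to the subnet
      # the designate workers live on.
      DesignateRndcAllowedAddresses:
        - 192.168.24.0/24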
Depends-On: Ib121888061b8bfcc4155528a8a209c7e274fafcb
Depends-On: I3383c19f80e70553ae71e644a01dda0f250d19da
Depends-On: I1b6674acbd6f999474cd66cb44357cf6b756a7d0
Change-Id: Ib89bcafe9f65431aee5756a32b2a82adc3d384dc
This would not have worked before we enabled server-side env merging,
and we also don't set that parameter in that environment.
Change-Id: Icd6d9a12b59cf8234edb671f0f55b4df4d342d7e
Currently there is a known issue [1] in the snapshot feature of the
cinder NFS backend, which causes data corruption in several cases.
This change disables the feature by default, so that users must give
it some consideration before enabling it.
Note that this change makes the default value in TripleO consistent
with the default (False) in cinder, so it also fixes that
inconsistency.
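Operators who accept the risk can still opt back in explicitly; a
minimal sketch using the existing TripleO parameter for the feature:

    parameter_defaults:
      # Re-enable NFS snapshot support after reviewing the
      # data-corruption issue referenced below.
      CinderNfsSnapshotSupport: true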
[1] https://bugs.launchpad.net/cinder/+bug/1860913
Related-Bug: #1860913
Closes-Bug: #1896324
Change-Id: I12b8a01d0b28fed66be8ae0b1723dd89f6dc00ff
To configure high availability for LDAP in keystone, one needs to edit
/etc/openldap/ldap.conf. This worked before the control plane was
containerized. Mounting the openldap configuration into the keystone
container restores the previous behavior.
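A minimal sketch of the mount, following the usual container volume
list shape in the templates (the exact service section layout may
differ):

    keystone:
      volumes:
        # Expose the host's openldap client configuration, including
        # /etc/openldap/ldap.conf, inside the keystone container.
        - /etc/openldap:/etc/openldap:ro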
Change-Id: Id0d73a8ab0ddf7bf9e2b76ea14ffc9acff3a0ad3
Closes-Bug: #1923048
Resolves: rhbz#1944466
Previously we managed to get away with starting FRR during deployment
tasks at step 1. This worked because puppet config tasks (which need
all nodes to be reachable due to pacemaker) ran after deployment step
task 1. In our testing, TLS-E setups also worked okay, but that was
likely mainly coincidence, because the IPA registration tasks were
also run at step 1 of the deployment tasks and came after FRR.
FRR needs to be up in order to reach nodes like freeipa in a BGP-based
deployment.
https://review.opendev.org/c/openstack/tripleo-heat-templates/+/771832
moved the IPA role from deployment_step 1 to external_deployment_step
1, and this broke TLS-E deployments with FRR, because FRR is not yet
up during external deployment step 1, so we fail to reach the freeipa
node.
We fix this by relying on the newly introduced pre_deploy_step_tasks,
which are run in a separate step after container_setup_task (which is
where podman gets configured) and before any deployment task.
While we're at it, we also remove the 'state: stopped' line for kolla,
which no longer makes sense. We also remove the main block, since a
single list of tasks will do and is a bit simpler.
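As a sketch, the FRR startup now hangs off the new hook roughly like
this (the task body is illustrative; the real template manages the
container through the usual container tooling):

    pre_deploy_step_tasks:
      # Bring FRR up before any deployment step runs, so BGP routes
      # exist when other nodes (e.g. freeipa) must be reached.
      - name: Ensure the FRR container is running
        ansible.builtin.systemd:
          name: tripleo_frr
          state: started
          enabled: true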
Tested as follows:
- Deployed an FRR-enabled TLS-E environment from master (previously
failing 100% of the time) multiple times.
Co-Authored-By: Carlos Gonçalves <cgoncalves@redhat.com>
Change-Id: I54531995fd180b3251901ff61296d6bd05fb85b2
The local certmonger cert will renew after half its lifetime, which
will be after 6 months by default. The current code would extract the
CA cert to a PEM file (and trust it) only if the cert in the existing
PEM file was expired.
But this means that the certmonger local cert could be renewed after
six months and not be replaced in the PEM file until the existing cert
expired at the end of the year. If certs are issued during this
window, they will not be trusted and the update will fail.
This patch removes this condition, so that the extracted and trusted
cert always matches what is in the PEM file.
Note, the only place this occurs is on the undercloud, because this is
where we could use the certmonger local cert. We assume that the
haproxy cert will be re-issued in an update.
This change has been added to puppet-tripleo for master and all
previous releases, but in master we now do this directly in tht, as we
use ansible to get the system certs.
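In ansible terms, the extraction now runs unconditionally, roughly
like this (paths and module wiring are illustrative):

    - name: Extract the certmonger local CA cert to the PEM file
      # No expiry guard: the PEM file is refreshed on every run, so a
      # renewed local CA cert is trusted immediately.
      ansible.builtin.shell: |
        openssl pkcs12 -in /var/lib/certmonger/local/creds \
          -out /etc/pki/ca-trust/source/anchors/cm-local-ca.pem \
          -nokeys -nodes -passin pass:''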
Change-Id: Ia0ad0ac6d7a09858b56dcb419a3bec17b63779a4
We recently changed cert generation to use linux-system-roles to
generate certs instead of puppet-certmonger. However, this broke the
ability to generate the haproxy cert on the undercloud using an IPA
server, because we relied on the ability to specify the CertmongerCA
and on the hieradata to provide the correct ca, principals and dns
entries.
This patch restores this ability through THT template parameters.
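As a sketch, the undercloud can then point the haproxy cert at the IPA
CA again via parameters, e.g. (the CA value shown is the usual
certmonger nickname; other restored parameters are omitted here):

    parameter_defaults:
      # Use the IPA server, not the certmonger local CA, to issue
      # the undercloud haproxy certificate.
      CertmongerCA: IPA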
Change-Id: Ie2e181fcd9198ae5613fde7135230d4b4cf7343d