The gnocchi::storage::coordination_url parameter was deprecated in
favor of the gnocchi::coordination_url parameter.
This patch makes puppet-tripleo use the new parameter instead of the
deprecated one.
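A minimal, hedged sketch of the intended usage (the redis URL below is
illustrative only):

    # Set the coordination URL on the top-level gnocchi class instead of
    # the deprecated gnocchi::storage::coordination_url parameter.
    class { 'gnocchi':
      coordination_url => 'redis://192.0.2.10:6379',
    }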
Depends-on: https://review.opendev.org/#/c/713448/
Change-Id: I2bbe95375c465aea8d2fe91b31897541ed998ae7
This patch introduces a unit test job which runs on CentOS8, so that we
can migrate from CentOS7 to CentOS8.
The new unit test job on CentOS8 is added as a non-voting job initially,
but will be made voting, and the CentOS7 job removed, once we confirm
the gate status of master and stable branches after this change is
merged.
Change-Id: Iee33fe1953af27b5f4b68b093464a831cb4ddcc6
When configured to use an ipv6 address, the etcd URLs and the cinder
lock manager's backend_url need to include brackets around the address.
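A hedged sketch of the idea, using puppet-tripleo's normalize_ip_for_uri
helper (the address and the etcd3+http scheme below are illustrative):

    # Wrap the address in brackets only when it is IPv6, then build the URL.
    $etcd_host   = normalize_ip_for_uri('fd00:fd00:fd00:2000::10')
    $backend_url = "etcd3+http://${etcd_host}:2379"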
Closes-Bug: #1868284
Change-Id: I79f385f14b5904803cdc7fdd145afa2dbcef9c49
Grafana could be exposed along with the ceph dashboard,
but it is actually embedded in a view created for
this purpose.
For this reason the ceph-dashboard component must
be able to reach grafana, or the requests will fail.
Closes-Bug: #1868118
Change-Id: I7894c51d18961c5cab7ac62e5eec5d515e2667c8
This is part 1 of 2, where the ovn provider info located in
tripleo::profile::base::octavia::api will move
to the newly created octavia::provider::ovn class.
The change has to be split into 2 parts to avoid breaking the
CI until the THT and puppet-tripleo changes merge [1].
[1]: https://review.opendev.org/#/q/topic:bug/1861886+(status:open+OR+status:merged)
This patch enhances Octavia's OVN driver config, so it can connect to
OVN_Northbound DB using TLS.
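A hypothetical sketch of the resulting octavia.conf settings; the [ovn]
option names and file paths below are assumptions based on the
ovn-octavia-provider driver, not taken verbatim from this change:

    octavia_config {
      'ovn/ovn_nb_connection':  value => 'ssl:[fd00:fd00:fd00:2000::10]:6641';
      'ovn/ovn_nb_ca_cert':     value => '/etc/pki/tls/certs/ca-bundle.crt';
      'ovn/ovn_nb_certificate': value => '/etc/pki/tls/certs/octavia_ovn.crt';
      'ovn/ovn_nb_private_key': value => '/etc/pki/tls/private/octavia_ovn.key';
    }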
Depends-On: https://review.opendev.org/#/c/711333/
Change-Id: I85049de9960586a1069aa750c8d727c6e37cec73
Related-Bug: #1861886
Commit 672452018a in puppet-collectd adds support for CentOS-8, and the
collectd-python package is already defined there.
puppet-tripleo then started complaining about a duplicate declaration
of the collectd-python package; removing the declaration from
puppet-tripleo fixes the issue.
Closes-Bug: #1866965
Change-Id: If1a2c65c4208c2255a3140134204e240496ec8b6
Signed-off-by: Chandan Kumar (raukadah) <chkumar@redhat.com>
This change just fixes the restart condition
for the radosgw file used when the certificate
is renewed.
Change-Id: Id3f76cd03c993d013090c7c764d6963a64a1c74f
Currently stonith levels only work out of pure luck. If puppet
decides to reorder resources, stonith levels can fail with:
"Error: /Stage[main]/Tripleo::Fencing/Pacemaker::Stonith::Level[stonith-1-525400e05b23]/Pcmk_stonith_level[stonith-level-1-$(/usr/sbin/crm_node -n)-stonith-fence_ipmilan-525400e05b23]/ensure: change from absent to present failed: pcs -f /var/lib/pacemaker/cib/puppet-cib-backup20200305-60559-tn9ici create failed: Error: Stonith resource(s) 'stonith-fence_ipmilan-525400e05b23' do not exist, use --force to override"
Nowhere in the code do we mandate that the single stonith resources need
to be created *before* the stonith levels which make use of them.
Let's add a constraint via collectors so we enforce this ordering.
Tested on an environment that was not working and got a correctly
deployed IHA overcloud.
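A minimal sketch of the collector-based ordering (the exact collector
used in the module may differ):

    # Make sure every pcmk_stonith resource is realized before any
    # stonith level that may reference it.
    Pcmk_stonith<||> -> Pcmk_stonith_level<||>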
Change-Id: I78cb6ae21366a429b65a8357b3b267a485484a42
Closes-Bug: #1866214
We want to make sure that any firewall rule set to open pacemaker ports
is executed before we run any commands that invoke pcs to
authenticate remote nodes.
It simply makes sense from a high-level POV to explicitly open
up firewall rules before we invoke pcs commands that will talk to
remote nodes.
I have actually seen one case in the wild where, during a scaleup,
the node being scaled up was waiting on Exec['wait-for-settle']
and the bootstrap node failed to contact pcs on the scaled-up node
because its firewall rules were never opened up, as it was still
waiting on the 'wait-for-settle' step.
Note that we *cannot* impose the ordering via a too-generic
Firewall<||> collector because in tripleo::firewall we have
Service<||> -> Class['tripleo::firewall::post']
and we would create a circular dependency.
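An illustrative sketch only (the tag names are hypothetical): scope the
collector to pacemaker-related rules instead of the too-generic
Firewall<||>:

    # Open pacemaker firewall ports before any pcs exec that talks to
    # remote nodes, without matching every Firewall resource.
    Firewall<| tag == 'tripleo-pacemaker' |>
      -> Exec<| tag == 'pacemaker-auth' |>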
Tested a queens deploy with this change and we are correctly
guaranteed to open up firewalling before invoking pcs:
Mar 05 16:22:51 controller-0. puppet-user[18840]: (/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv4]/ensure) created
Mar 05 16:22:52 controller-0. puppet-user[18840]: (/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[swift_storage]/Tripleo::Firewall::Rule[123 swift storage]/Firewall[123 swift storage ipv6]/ensure) created
Mar 05 16:22:52 controller-0. puppet-user[18840]: (/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[tripleo_firewall]/Tripleo::Firewall::Rule[003 accept ssh from any]/Firewall[003 accept ssh from any ipv4]/ensure) created
Mar 05 16:22:52 controller-0. puppet-user[18840]: (/Stage[main]/Tripleo::Firewall/Tripleo::Firewall::Service_rules[tripleo_firewall]/Tripleo::Firewall::Rule[003 accept ssh from any]/Firewall[003 accept ssh from any ipv6]/ensure) created
Mar 05 16:22:52 controller-0. puppet-user[18840]: (Exec[reauthenticate-across-all-nodes](provider=posix)) Executing '/sbin/pcs cluster auth controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 -u hacluster -p foobar --force'
Mar 05 16:22:52 controller-0. puppet-user[18840]: Executing: '/sbin/pcs cluster auth controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 -u hacluster -p AQtEeE6e3FDEqrfm --force'
Mar 05 16:22:55 controller-0. puppet-user[18840]: (Exec[Create Cluster tripleo_cluster](provider=posix)) Executing '/sbin/pcs cluster setup --wait --name tripleo_cluster controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 --token 10000 --encryption 1'
Mar 05 16:22:55 controller-0. puppet-user[18840]: Executing: '/sbin/pcs cluster setup --wait --name tripleo_cluster controller-0 controller-1 controller-2 database-0 database-1 database-2 messaging-0 messaging-1 messaging-2 --token 10000 --encryption 1'
Mar 05 16:23:20 controller-0. puppet-user[18840]: (Exec[Start Cluster tripleo_cluster](provider=posix)) Executing check '/sbin/pcs status >/dev/null 2>&1'
Mar 05 16:23:20 controller-0. puppet-user[18840]: Executing: '/sbin/pcs status >/dev/null 2>&1'
Mar 05 16:23:21 controller-0. puppet-user[18840]: (Exec[Start Cluster tripleo_cluster](provider=posix)) Executing '/sbin/pcs cluster start --all'
Mar 05 16:23:21 controller-0. puppet-user[18840]: Executing: '/sbin/pcs cluster start --all'
Change-Id: I775ad1abf87368d013054e9a5dab22931f21f86c
Closes-Bug: #1866209
It broke OVB in master.
Closes-Bug: #1866031
This reverts commit 5b5291423a04e324a3075caaf07620e7b0a14ac0.
Change-Id: Id4ec674ecd18bed02034714c2da103933b4e0b42
This patch adds the ceph_rgw class required by certmonger to
create the cert/key.
This patch also creates the service_pem file since the rgw
container private key, public certificate and any other CA or
intermediate certificates should be in one file, in PEM format.
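A hedged sketch of assembling the combined PEM file (resource names and
paths are illustrative; the module may use concat or a plain file
resource):

    concat { '/etc/pki/tls/certs/ceph_rgw.pem':
      ensure => present,
      mode   => '0640',
    }
    concat::fragment { 'ceph_rgw-key':
      target => '/etc/pki/tls/certs/ceph_rgw.pem',
      source => '/etc/pki/tls/private/ceph_rgw.key',
      order  => '01',
    }
    concat::fragment { 'ceph_rgw-cert':
      target => '/etc/pki/tls/certs/ceph_rgw.pem',
      source => '/etc/pki/tls/certs/ceph_rgw.crt',
      order  => '02',
    }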
Change-Id: I960f7c48866ef11e58e63d80217f7df660455fe1
... because the issue with WSGI deployment in nova[1] was resolved a
while ago, and we don't encourage users to use standalone eventlet
servers.
[1] https://bugs.launchpad.net/nova/+bug/1661360
Change-Id: I40c3b6ea9a958cb5b1548282299414a72eb254c4
Add the possibility to configure replication_probe_interval for
ovsdb-server. It configures the probe interval for the connection used
when ovsdb-server is in backup mode and connects to the active
ovsdb-server for replication.
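An illustrative sketch only; the parameter name follows this change,
but the profile class shown and the value are assumptions:

    # Probe interval (in milliseconds) for the backup ovsdb-server's
    # replication connection to the active server.
    class { 'tripleo::profile::pacemaker::ovn_dbs_bundle':
      replication_probe_interval => 60000,
    }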
Change-Id: I6e5af0cfc00778e251bae0fc42c116a24c8fabc3
In HA profiles, we wait for rabbitmq application readiness by
parsing the output of "rabbitmqctl status". This breaks with
rabbitmq-server 3.8 which changed the output of that command.
Fix our check by using a "rabbitmqctl eval" and by relying on
a stable function call rather than parsing output. This
approach works for rabbitmq-server 3.6 to 3.8.
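A hedged sketch of the readiness check; the exact erlang expression
used by the module may differ, rabbit:is_running() is shown as one
stable call:

    # Retry until "rabbitmqctl eval" reports the rabbit application as
    # running, instead of parsing "rabbitmqctl status" output.
    exec { 'wait-for-rabbitmq-ready':
      command   => 'rabbitmqctl eval "rabbit:is_running()." | grep -q true',
      path      => ['/bin', '/usr/bin', '/sbin', '/usr/sbin'],
      provider  => shell,
      tries     => 60,
      try_sleep => 5,
    }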
Change-Id: Id88d0aee74e4b26fd64bbc2da5d0c0fc4bbd6644
Co-Authored-By: Yatin Karel <ykarel@redhat.com>
Closes-Bug: #1864962
Support for OpenShift deployment in TripleO was already removed[1],
so remove the OpenShift API from the haproxy LB configuration, as we
don't expect anyone to use that implementation.
[1] c845595ba3c9f8ad88e0dad24d56c7349fbd3d1b
Change-Id: Id0825d95f1effd4091dda4a4324787762c180960
Pin puppet-collectd because its recent change[1] broke its
compatibility with facter 2.X.X, which is currently required
in centos-7 jobs.
[1] bda9e87a41
Change-Id: Ie3174349b23a46d527358e0d57f9ccbce73dec49
Closes-Bug: #1862434
The 'defined' function is always true for defined variables,
even when the value is undef. This makes the conditionals useless,
so this patch makes the module test the value instead.
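An illustrative sketch of the change ($my_param is a hypothetical
parameter name):

    # Before: true as soon as the variable exists, even when its value
    # is undef
    if defined('$my_param') {
      notice('param configured')
    }
    # After: test the value itself
    if $my_param {
      notice('param configured')
    }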
Change-Id: I9228d84e02b485f089fce84ea12ca8afba903a61
A typo slipped in when this code was merged; we need to reference the
proper variable.
Closes-Bug: #1861668
Change-Id: Ida10d018e73fb19bb72032fcb2113e1762fb94fa
We need to enable creation of a sudo rule for the user under which
sensubility is executed, so that operators can run any command they
may need in their health check implementation.
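A hedged sketch of such a rule (the file path, user name and rule scope
are illustrative):

    file { '/etc/sudoers.d/sensubility_collectd':
      ensure  => file,
      mode    => '0440',
      content => "collectd ALL=(ALL) NOPASSWD: ALL\n",
    }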
Change-Id: I47c6e4fb3beab14cc5c7f824646e3c2242b140d4
Add new tripleo::profile::base::glance::api::multistore_config parameter
to support configuring multiple glance-api backends. The parameter is
optional, and represents a hash of settings for each additional backend.
The existing 'glance_backend' parameter specifies the default backend.
In order to support DCN/Edge deployments, the syntax supports multiple
instances of the 'rbd' backend type. Restrictions are imposed to allow
only a single instance of the 'cinder', 'file' and 'swift' backend types.
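An illustrative example of the new parameter (the store names and inner
hash keys are assumptions, not the exact schema):

    class { 'tripleo::profile::base::glance::api':
      glance_backend    => 'rbd',   # default backend
      multistore_config => {
        'dcn1_rbd' => { 'GlanceBackend' => 'rbd', 'CephClusterName' => 'dcn1' },
        'dcn2_rbd' => { 'GlanceBackend' => 'rbd', 'CephClusterName' => 'dcn2' },
      },
    }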
Change-Id: I41ab9b3593bf3d078c5bbd1826df8308e3f5e7af
Depends-On: I5a1c61430879a910e7b6c79effba538431959d56
This change exposes to the end user the new ceph dashboard
frontend, which is fully integrated with the grafana service.
This review also adds all the info/classes needed to integrate the
service with the tls-everywhere framework, providing the cert
request and generation that will be passed to ceph dashboard
via ceph-ansible.
Depends-On: https://review.opendev.org/#/c/704308
Change-Id: Id6d2e4b00355cd84baccc2b493f3205c2b32a44b
Changed the name of the haproxy service so it reflects the new name of
the service and a reload from the cron job will be possible.
Change-Id: I66ee58f3b4fd2f10ceba6306497ac796daaf98e8