This change combines the previous puppet and docker files into a single
file that performs the docker service installation and configuration.
With this patch the baremetal versions of the aodh services have been
removed.
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Depends-On: https://review.rdoproject.org/r/#/c/16994/
Change-Id: I39645aff0365218d4b841ed0d9c964b3622f143a
Related-Blueprint: services-yaml-flattening
Since we're looking at flattening the services into a deployment/
folder, we need to update the validation script to also handle this
directory structure. Additionally, this change updates the service name
validation to ensure that the service name matches the start of the
filename itself.
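As a rough illustration of the kind of check involved (the directory
list and the way the service name is read from the template are
assumptions, not the actual validation script code):

    # Illustrative sketch only; directory names and the 'service_name'
    # lookup are assumptions about the template layout.
    import os
    import yaml

    SERVICE_DIRS = ('deployment', 'puppet/services', 'docker/services')

    def validate_service_name(template_path):
        """Check that the service name matches the start of the filename."""
        filename = os.path.splitext(os.path.basename(template_path))[0]
        with open(template_path) as f:
            tpl = yaml.safe_load(f)
        # Hypothetical location of the service name inside role_data.
        role_data = tpl.get('outputs', {}).get('role_data', {}).get('value', {})
        service_name = role_data.get('service_name', '')
        if service_name and not filename.startswith(service_name.replace('_', '-')):
            print('Service name %s does not match filename %s'
                  % (service_name, filename))
            return False
        return True

    def validate_tree(basedir):
        ok = True
        for subdir in SERVICE_DIRS:
            root = os.path.join(basedir, subdir)
            if not os.path.isdir(root):
                continue
            for name in os.listdir(root):
                if name.endswith('.yaml'):
                    ok = validate_service_name(os.path.join(root, name)) and ok
        return ok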
Change-Id: Ibb140a38b69a8780adf69362e0f437b3426f360d
Related-Blueprint: service-yaml-flattening
According to the inventory examples[1], openshift_master_cluster_hostname
points to an internal hostname/address set on the loadbalancer, while
openshift_master_cluster_public_hostname points to the external one.
This change sets openshift_master_cluster_hostname to use the InternalApi
network instead of the External network, which it uses at the moment.
[1] https://docs.openshift.com/container-platform/3.11/install/example_inventories.html
Change-Id: I9efab5b07682efd6b03da433801d636e7d324619
OpenDaylight's Infrautils project has a new, recommended method for
checking when ODL is up and ready. Use this new diagstatus ODL NB REST
API endpoint vs the old netvirt:1 endpoint.
ODL Jira that tracked adding diagstatus REST API:
https://jira.opendaylight.org/browse/INFRAUTILS-33
RH BZ tracking moving to diagstatus:
https://bugzilla.redhat.com/show_bug.cgi?id=1642270
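Purely as a hedged sketch of the readiness-probe idea (the port, path
and credentials below are assumptions, not necessarily what the
templates configure):

    # Sketch of polling the Infrautils diagstatus endpoint instead of the
    # old netvirt:1 URL; port, path and credentials are assumptions.
    import requests

    def odl_is_ready(host, port=8181, user='admin', password='admin'):
        url = 'http://%s:%s/diagstatus' % (host, port)
        try:
            resp = requests.get(url, auth=(user, password), timeout=10)
        except requests.RequestException:
            return False
        # Treat anything other than HTTP 200 as "not ready yet".
        return resp.status_code == 200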
Change-Id: I44dc5ba7680a9c5db2d6070e813d9b0e31d6e811
Signed-off-by: Daniel Farrell <dfarrell@redhat.com>
Change I81bc48b53068c3a5ed90266a4fd3e62bfb017835 moved image fetching
and tagging for pacemaker-managed services from step 1 to step 2. This
is also the step in which the services are started, which probably
introduced a race condition for environments where the pacemaker
cluster consists of more than one machine.
During the deployment you can get a lot of pcmk failures like:
failed to pull image 192.168.24.1:8787/tripleomaster/centos-binary-mariadb:pcmklatest
This only happens on non-bootstrap nodes. On the bootstrap node the
order is still correct: first download and tag the image, then start
the pcmk resources. However, if non-bootstrap nodes are slower at
downloading and tagging, pacemaker there might start the resources
before the images are tagged (as the starting of resources is
controlled globally from the bootstrap node).
Change-Id: Id669cc9a296a8366c7c80a5ee509bdb964b62a04
Closes-Bug: #1805826
By default, the Compute role template sets the deprecated_param_ips
parameter in the roles data. This forces the use of the deprecated
names in parameter_defaults when using predictable IPs for the
ctlplane network.
To allow the user to use either the deprecated role name or the
non-deprecated role name in parameter_defaults, extend the
ctlplane_fixed_ip_set condition to use OR logic and test for data
in either the deprecated parameter or the new parameter.
In the server resource, use yaql to pick the first element that
is not empty. The non-deprecated parameter name is prioritized.
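As a rough Python rendition of that selection logic (the function and
parameter names below are placeholders, not the actual heat/yaql code):

    # Mirror of the "pick the first non-empty element" yaql selection;
    # names are placeholders for illustration only.
    def pick_fixed_ips(new_param_ips, deprecated_param_ips):
        # The non-deprecated parameter wins when both are set.
        candidates = [new_param_ips, deprecated_param_ips]
        return next((ips for ips in candidates if ips), [])

    # e.g. only the deprecated name was set in parameter_defaults:
    print(pick_fixed_ips([], [{'ip_address': '192.168.24.10'}]))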
Change-Id: Iedc65064c5efaa618c3d54df10bf09296829efd2
Closes-Bug: #1805482
When Ironic uses the 'direct' deploy interface it requires
access to swift. To access swift it needs the storage
network.
Change-Id: Ie49b961bb276dff0e5afbf82b450caa57d17f6ff
For all containers that have restart=always configured and that are
not managed by Pacemaker (that part will be handled later), we remove
these containers at step 1 of post_upgrade_tasks.
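In the templates this is implemented as an Ansible task; purely as an
illustration of the operation (the container runtime and the inspect
fields used here are assumptions):

    # Illustration only: list containers, check their restart policy and
    # remove the ones configured with restart=always.
    import subprocess

    def remove_restart_always_containers(runtime='docker'):
        names = subprocess.check_output(
            [runtime, 'ps', '-a', '--format', '{{.Names}}'],
            text=True).split()
        for name in names:
            policy = subprocess.check_output(
                [runtime, 'inspect', '-f',
                 '{{.HostConfig.RestartPolicy.Name}}', name],
                text=True).strip()
            # Filtering out pacemaker-managed containers is omitted here;
            # the real tasks only touch containers pacemaker does not manage.
            if policy == 'always':
                subprocess.check_call([runtime, 'rm', '-f', name])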
Change-Id: Id446dbf7b0a18bd1d4539856e6709d35c7cfa0f0
For the isolated networks we use the subnets' host_routes
to set and get the routes for overcloud node interfaces.
This change adds this to the ctlplane interface.
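For reference, a minimal sketch of where a subnet's host_routes live,
queried with openstacksdk (the cloud and subnet names are assumptions;
the actual change consumes the routes inside the heat templates):

    # Sketch only: print the host_routes defined on the ctlplane subnet.
    import openstack

    conn = openstack.connect(cloud='undercloud')          # assumed cloud name
    subnet = conn.network.find_subnet('ctlplane-subnet')  # assumed subnet name
    for route in (subnet.host_routes or []):
        print('route %s via %s' % (route['destination'], route['nexthop']))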
Partial: blueprint tripleo-routed-networks-templates
Change-Id: Id4cf0cc17bc331ae27f8d0ef8f285050330b7be0
There is a deployment race where nova-placement fails to start if
the nova api db migration has not finished before starting it.
We start nova placement early to make sure it is up before the
nova-compute services get started. Since in an HA scenario there is
no sync between the nodes on the currently processed deployment step,
we might have the situation that the placement service gets started
on C1/2 when the nova api db sync is not yet finished on C0.
We have two possibilities:
1) start placement later and verify that the nova-computes recover
correctly
2) verify that the db migration on the nova_api db has finished before
starting nova-placement on the controllers
Option 2), which was addressed via https://review.openstack.org/610966,
showed problems:
a) the docker/podman container failed to start with some file not found
error, therefore this was reverted in https://review.openstack.org/619607
b) when the scripts were running on different controllers at the same
time, the way nova's db_version() is implemented has issues, which
is being worked on in https://review.openstack.org/619622
This patch addresses 1): it moves the placement service start to
step_4 and adds an additional task on the computes to wait until the
placement service is up.
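A minimal sketch of what such a wait could look like (the endpoint,
retry count and interval are placeholders, not the actual task):

    # Placeholder values throughout; any HTTP response (even 401) means
    # the placement service is listening.
    import time
    import requests

    def wait_for_placement(url, retries=30, delay=10):
        for _ in range(retries):
            try:
                requests.get(url, timeout=5)
                return True
            except requests.RequestException:
                time.sleep(delay)
        return False

    wait_for_placement('http://192.168.24.10:8778/')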
Closes-Bug: #1784155
Change-Id: Ifb5ffc4b25f5ca266560bc0ac96c73071ebd1c9f
The puppet aodh-api.yaml service uses the puppet
apache service. The apache server uses the cidr
map in ServiceData.
The docker service did not pass the ServiceData
to the puppet service template. The result is
that the properties resolved to ''.
Change-Id: I736e0fa4191fa130f882b09eb87256c62ac69143
In RDO CI we're seeing this undefined, but haproxy_short_bootstrap_node_name
is defined, which proves https://review.openstack.org/#/c/605046/ is included
and working.
The root cause is that the haproxy_public_tls_inject_service is actually
created via the haproxy template as a nested stack, so we need to use
haproxy_short_bootstrap_node_name instead.
Change-Id: I870825140b8947a1845307b5bec1bcff387c15c0
Closes-Bug: #1804433