This was mainly there as a legacy interface for internal use. Now that
we pull the passwords from the existing environment and no longer use
it, we can drop it.
This reduces the number of Heat resources.
Change-Id: If83d0f3d72a229d737a45b2fd37507dc11a04649
Update the cinder-lvm-losetup systemd service to wait until the local
/var directory is mounted and the lvm2-monitor service has started
prior to creating the loopback device used by cinder's LVM backend.
Make LIO SCSI target data persistent by adding container volume mounts
for the /etc/target directory so that the data is stored on the host.
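A minimal sketch of the new mount, assuming the usual volumes list in
the iscsid container definition (the image key and the ":z" relabel
flag are assumptions; only the /etc/target mount comes from this
change):

  iscsid:
    image: {get_param: ContainerIscsidImage}
    volumes:
      - /etc/target:/etc/target:z  # keep LIO target data on the host

The systemd side of the change boils down to ordering directives along
the lines of After=var.mount lvm2-monitor.service in the
cinder-lvm-losetup unit.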
Closes-Bug: #1905617
Change-Id: I0c0cbed3a41e8d4b1fbaf5c2dc7fd0412fee644a
Calls an ansible role to create an LVM2 filter.
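A hedged sketch of such an invocation; the role and variable names here
are hypothetical, not the actual interface:

  - name: Create an LVM2 filter so LVM only scans the expected devices
    include_role:
      name: tripleo_lvmfilter     # hypothetical role name
    vars:
      lvm2_filter_allowlist:      # hypothetical variable
        - /dev/sda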
Change-Id: Ia01d23e252bc48b7cc6c66cd39138e6844b90a69
Depends-On: I9781007559e074f2b102f6f90c1aed6def1b02be
Closes-Bug: 1855704
Current puppet modules use only absolute names to include classes, so
replace relative names with absolute names in the template files (e.g.
"include ::foo::bar" instead of "include foo::bar") so that the
template descriptions stay consistent with the puppet implementation.
Change-Id: I7a704d113289d61ed05f7a31d65caf2908a7994a
- deploy-steps-tasks-step-1.yaml: Do not ignore errors when dealing
with check-mode directories. The file module is resilient enough to
not fail if the path is already absent.
- deploy-steps-tasks.yaml: Replace ignore_errors with the condition
  "not ansible_check_mode"; this task is not needed in check mode.
- generate-config-tasks.yaml: Replace ignore_errors with the condition
  "not ansible_check_mode"; this task is not needed in check mode.
- Neutron wrappers: use fail_key: False instead of ignore_errors: True
if a key can't be found in /etc/passwd.
- All services with service checks: Replace "ignore_errors: true" with
  "failed_when: false". Since we don't care whether the task returns 0,
  let's just make the task never fail; this only improves UX when
  scanning logs, as no more failures will be shown for these tasks
  (see the sketch after this list).
- Same as above for cibadmin commands, cluster resource show commands
  and the keepalived container restart command, and for all other
  shell, command or yum module uses where we just don't care about
  potential failures.
- Aodh/Gnocchi: Add pipefail so failures in piped commands are no
  longer masked.
- tripleo-packages-baremetal-puppet and undercloud-upgrade: check shell
rc instead of "succeeded", since the task will always succeed.
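As a sketch of the main pattern (service and path names are
placeholders):

  # Before: failures were ignored but still printed as errors in logs.
  - name: Check if foo is running
    command: systemctl is-active foo
    ignore_errors: true
    register: foo_check

  # After: the task can never fail; callers inspect foo_check.rc.
  - name: Check if foo is running
    command: systemctl is-active foo
    failed_when: false
    register: foo_check

  # Check-mode handling: skip the task instead of ignoring its errors.
  - name: Clean up leftover directory
    file:
      path: /var/lib/tripleo/some-dir
      state: absent
    when: not ansible_check_mode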
Change-Id: I0c44db40e1b9a935e7dde115bb0c9affa15c42bf
While the two names are, at the SELinux level, exactly the same type
(one is an alias of the other), the "container_file_t" name is easier
to understand (and shorter to write) than the old
"svirt_sandbox_file_t".
A second pass in a couple of days or weeks will be needed in order to
change files that were merged after this first pass.
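For illustration, this is how the label typically appears in an
ansible task (the path is only an example):

  - name: Make a host directory readable and writable by containers
    file:
      path: /var/log/containers   # example path
      state: directory
      setype: container_file_t    # same type as svirt_sandbox_file_t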
Change-Id: Ib4b3e65dbaeb5894403301251866b9817240a9d5
When upgrading from Rocky to Stein we also moved from the docker
container engine to Podman. To ensure that every single docker
container was removed after the upgrade, a post_upgrade task was added
which made use of the tripleo-docker-rm role to remove the container.
In this cycle, from Stein to Train, both the Undercloud and the
Overcloud work with Podman, so there is no need to remove any docker
containers anymore.
This patch removes all the tripleo-docker-rm post-upgrade tasks; in
those services which only included that single task, the
post-upgrade-tasks section is removed as well.
Change-Id: I5c9ab55ec6ff332056a426a76e150ea3c9063c6e
Move all the container environments from lists to dicts, so they can
be consumed later by the podman_container ansible module, which
expects a dict.
A dict is also easier to parse, since it doesn't involve an "="
separator for each exported environment item.
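For example (variable values are placeholders):

  # Before: list form, one "NAME=value" string per entry.
  environment:
    - KOLLA_CONFIG_STRATEGY=COPY_ALWAYS
    - TRIPLEO_CONFIG_HASH=abc123

  # After: dict form, directly consumable by podman_container.
  environment:
    KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
    TRIPLEO_CONFIG_HASH: abc123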
Change-Id: I894f339cdf03bc2a93c588f826f738b0b851a3ad
Depends-On: I98c75e03d78885173d829fa850f35c52c625e6bb
The tripleo-docker-rm role has been replaced by tripleo-container-rm [0].
This role identifies the container engine via the container_cli
variable and deletes the given container. However, the tasks inside
the post_upgrade_tasks section were meant to remove the old docker
containers after upgrading from Rocky to Stein, where podman becomes
the default container engine.
For that reason, we need to ensure that the engine the containers are
removed from is docker, as otherwise we would remove the podman
container and the deployment steps would fail.
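A hedged sketch of the guarded removal; container_cli comes from the
description above, while the remaining names are illustrative:

  post_upgrade_tasks:
    - name: Remove the old docker container left over from Rocky
      include_role:
        name: tripleo-container-rm
      vars:
        container_cli: docker   # force docker, not the new podman default
        containers_to_rm:       # illustrative variable name
          - nova_api            # illustrative container name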
Closes-Bug: #1836531
[0] - 2135446a35
Depends-On: https://review.opendev.org/#/c/671698/
Change-Id: Ib139a1d77f71fc32a49c9878d1b4a6d07564e9dc
Currently, running `ansible --check` against the generated playbook
fails because the "exec" won't be done, hence the registered resource
won't get the "rc" entry.
Change-Id: Ic8001d4fd8c489f10c565d05ae82060fa3fb6ce8
Closes-Bug: #1834409
When a node goes through an unclean shutdown, iscsi.service detects the
leftover information about iscsi connections and recovers those
connections, starting iscsid on the host.
This causes a collision between the iscsid on the host and the iscsid
in the container, which leaves the iscsid container stuck in a restart
loop.
This patch makes sure that iscsi.service on the host is disabled when
we deploy the iscsid container, so that the host iscsid can no longer
be started unexpectedly.
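A minimal sketch of the host-side guard, using the stock systemd
module:

  - name: Disable iscsi.service on the host so it cannot race the container
    systemd:
      name: iscsi.service
      enabled: false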
Change-Id: I6c36cd15edfa53c3c76be9095ff40cecf451490d
Closes-Bug: #1833019
This converts all Docker*Image parameter variants into
Container*Image variants.
The commit was autogenerated with the following shell commands:
for file in $(grep -lr Docker.*Image --include \*.yaml --exclude-dir releasenotes); do
  sed -e "s|Docker\([^ ]*Image\)|Container\1|g" -i $file
done
Change-Id: Iab06efa5616975b99aa5772a65b415629f8d7882
Depends-On: I7d62a3424ccb7b01dc101329018ebda896ea8ff3
Depends-On: Ib1dc0c08ce7971a03639acc42b1e738d93a52f98
This change combines the previous puppet and docker files into a single
file that performs the docker service installation and configuration
for the iscsid service.
Related-Blueprint: services-yaml-flattening
Change-Id: I4a06c86da88618f9c88c5a9161dab771a82cc7d3