Evaluating ansible_check_mode as a bare variable is deprecated in
Ansible 2.8, and support for it will be removed in 2.12.
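As a rough sketch of the fix (the task itself is illustrative), the
deprecation warning suggests casting explicitly with the bool filter:

    - name: Example task that should not run in check mode
      command: /bin/true
      when: not (ansible_check_mode | bool)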
Change-Id: I4d566bf8b7b3085d693fe94b53384e667f81b092
A new parameter, ContainerHealthcheckDisabled, allows an operator to
disable the container healthchecks when Podman is enabled.
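For illustration, an operator could set the new parameter in an
environment file (a minimal sketch):

    parameter_defaults:
      ContainerHealthcheckDisabled: true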
Depends-On: Ic3dd492405b11ec482ff86e1513149c3eceb370f
Change-Id: Id8d7e21d58cf5ab155404db597d96665b94d7c2a
Now that we've dropped docker-toool, we no longer need the
/var/lib/container-startup-configs.json file to be laid
down on disk.
As part of this change we now check for the step 1 tasks instead
of the combined startup configs when determining whether to
re-run the common startup Ansible tasks.
Change-Id: I3c50d8364823073341b5f39ecce20a512e4a82e1
Not all roles are connected to all networks, so there is no
need to create metadata for networks not associated with
the role.
In edge/spine-and-leaf deployments the total number of
composable networks used can be high. If we pass all the
networks, we quickly go beyond the Nova metadata field
size limit (each field cannot exceed 256 bytes).
Also update the tools/check-up-to-date.sh script to use the
simple yaml-diff.py instead of diff. The env generator
code sorts its data, while Jinja-rendered environments
are not sorted, so we need to diff the data as YAML rather
than as text.
Closes-Bug: #1821377
Change-Id: I5ae3bc845b0a6ad6986d44b14ff4b0737a9b033b
Similar to our t-h-t defaults, we should default this script
to 'podman' for faster debugging on the command line.
Change-Id: I4e826542c24848e12abc400b1621a6b812922231
Prior to this commit, SELinux was configured by Puppet, which happens
way too late. With this change we get a proper SELinux configuration
at the right time.
SELinux management is also removed from Puppet with this commit:
https://review.openstack.org/#/c/645477/
For now, we keep only the "semodule" and "sebool" parts within Puppet.
Related-Bug: #1821025
Closes-Bug: #1821178
Change-Id: Ibd7b80b2cc0b09b63b17f1ba3a9b9cc2de728c57
There were some FIXME and TODO notes related to the paunch version.
This patch activates some logging features available in the newest paunch.
It also raises the paunch version requirement in order to ensure we can
activate the new options.
Change-Id: I1df64c413373c7b8eb72baca34cf5f826d3bd51c
Depends-On: https://review.openstack.org/645532
Adds a tag specific to the external post deployment tasks, as it's often
useful to re-run just these tasks.
Change-Id: I3d509fab0d1105f4c097338d6f39febd897e6582
The plan is:
- Docker is deprecated in Stein
- Podman is the default in Stein
- Docker will be removed in Train
Change-Id: I8f00d3e539abc4a169d6b48b8ce697e54aa2eae9
All the config-download steps can be run using a `stepX` tag with
ansible-playbook, except step 0. This patch adds the tag.
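A rough sketch of the idea (play layout and task content are illustrative,
not the exact deploy-steps.j2 output); tagging the step 0 play makes it
selectable with --tags step0 like the later steps:

    - hosts: overcloud
      name: Deploy step tasks for step 0
      tags:
        - step0
      tasks:
        - name: Placeholder for the step 0 tasks
          debug:
            msg: step 0 tasks would be imported here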
Change-Id: Ida335e7b7efef6c2a5a8b7a23b09f13588c7695a
These tasks output the contents of the files being copied, which can be
extremely verbose and is not beneficial to the overall process. These files
can be retrieved from the disk if necessary.
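One common way to silence such output in Ansible is the no_log flag; this
is only a sketch (paths are illustrative and this may not be the exact
mechanism used by the change):

    - name: Copy generated configuration into place
      copy:
        src: /var/lib/config-data/puppet-generated/example.conf
        dest: /etc/example.conf
        remote_src: true
      no_log: true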
Change-Id: I2def6c41a1df345d055b6db26130cb6faf93be53
Related-Bug: #1819226
The /var/lib/docker-puppet directory is deprecated; its contents can now be
found under /var/lib/container-puppet. We don't have Docker anymore, so we
try to avoid confusion in the directory names. The old directory still
exists, but a readme file points to the right directory.
Change-Id: Ie3d05d18e2471d25c0c4ddaba4feece840b34196
This is now a prerequisite so that we can run external update/upgrade
with the --tags parameter and have it do something. (I don't recall this
being necessary before; I suspect the change may have become necessary
with a bump in the Ansible version, or some refactoring in t-h-t.)
Change-Id: I10356e49ad6fb200e6a419ab5dc562f274ae6f8d
Implements: blueprint upgrades-with-os
The Ansible 2.6 fix didn't properly select the bootstrap node. Also,
the new Ansible changed the MySQL backend library, making it unable
to read a misformatted my.cnf. This library also needs to have the
socket specified if it's going to connect to a local server.
Change-Id: I31b38eaf66bb899e72b1bfeca8795e5d1007eee5
Resolves: rhbz#1678235
Closes-bug: #1816422
With this change we add an Ansible variable called
'tripleo_minor_update', set to true only during the update_steps_playbook
which gets run during a minor update.
Then inside common/deploy-steps-tasks, when starting containers with
paunch, we export this 'tripleo_minor_update' Ansible variable and
push it into the 'TRIPLEO_MINOR_UPDATE' environment variable.
In change Id1d671506d3ec827bc311b47d9363952e1239ce3 we will then
use the env variable and export it to the restart_bundles in order
to detect whether we're inside a minor update workflow (as opposed to
a redeploy, aka stack update). The testing that has been done is
described in the above change.
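A rough sketch of how the variable could be surfaced to paunch via the task
environment (the command line and the container_startup_config variable are
illustrative, not the exact deploy-steps-tasks content):

    - name: Start containers for this step with paunch
      shell: paunch apply --file "{{ container_startup_config }}" --config-id "tripleo_step{{ step }}"
      environment:
        TRIPLEO_MINOR_UPDATE: "{{ tripleo_minor_update | default(false) }}"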
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Change-Id: Ib3562adbd83f7162c2aeb450329b7cc4ab200fc2
With Ansible 2.6 we cannot access the groups variable using the
previous idiom anymore. Use a more robust way to access that
variable.
Co-Authored-By: Lukas Bezdicka <lbezdick@redhat.com>
Change-Id: I26f97e7fc4da0dd19e1e8a19b3f6a1c1160f7466
Closes-bug: #1816422
This is used in order to point to where podman must push its logs.
Two scripts are using it:
- docker-puppet.py
- paunch (near future - see https://review.openstack.org/#/c/635438/)
This will allow us to get the stdout for all containers, even when they
are removed before we can actually run "podman logs container_name".
Related-Bug: #1814897
Change-Id: Idc220047d56ce0eb41ac43903877177c4f7b75c2
Now that config-download is the default, RoleConfig and the associated
deployment aren't used anymore, so let's remove them.
Change-Id: I0fbaccfea8f583101b03c6ee645ff01dac11b7af
Currently, the Docker daemon runtime has a default --log-driver set
to journald.
Podman's lack of a daemon prevents such a global setting, meaning
we have to set that driver for each and every container when we
either create or run them.
Notes:
- podman only supports "json-file", and it's not even real JSON.
- docker's json-file doesn't support the "path" option, making this
  output unusable in the end: logs end up in
  /var/lib/docker/containers/ID/ID-json.log
Related-Bug: #1814897
Change-Id: Ia613fc3812aa34376c3fe64c21abfed51cfc9cab
We should support arbitrary tags in upgrade tasks; update the
validation accordingly.
Change-Id: I3ebeb06b18306a8d1de11b3519e62b90a9cd6a78
Implements: blueprint upgrades-with-os
Currently this assumes all tasks will run on the primary controller,
but because of composable roles that may not be the case.
For example, if you deploy keystone on any role other than the
role tagged primary (e.g. Controller by default), we don't create
any of the users/endpoints, because the tasks aren't written to
the role unless keystone actually runs there.
Closes-Bug: #1792613
Change-Id: Ib6efd03584c95ed4ab997f614aa3178b01877b8c
Implicit defaults hide issues with overriding Ansible variables as we
pass values in from deploy-steps.j2.
Make no implicit defaults for variables passed into deploy steps via
Ansible vars. Only expect those to take the values defined in the calling
deploy-steps.j2 playbook template. Add missing params and vars for
templates to propagate Ansible values for the external deploy/upgrade,
upgrade/update and post upgrade steps playbooks.
Make DockerPuppetDebug a boolean to align with other booleans we pass
into deploy steps via Ansible vars. Fix its processing in
docker-puppet.py: previously the default DockerPuppetDebug: ''
was converted into 'false' in the deploy steps tasks playbook, and
that then always evaluated as True in docker-puppet.py.
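With the parameter now a real boolean, an override in an environment file
looks like this minimal sketch:

    parameter_defaults:
      DockerPuppetDebug: true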
Related-Bug: #1799914
Change-Id: Ia630f08f553bd53656c76e5c8059f15d314a17c0
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
In docker-puppet.py, we only create the docker-puppet.sh script if it
doesn't exist yet. It's not useful to re-create it, and it can be
dangerous to regenerate the script while docker-puppet.py is running,
since we bind-mount the script into the containers.
It's possible that during a multi-process task, the script changes and
then the entrypoint fails to run correctly if the interpreter is not
present in the script.
This patch makes sure that we create the script only when needed, and
also that we remove it before running docker-puppet.py, which will be
useful when doing clean deployments or upgrades.
Context: https://github.com/containers/libpod/issues/1844
Change-Id: I0ac69adb47f59a9ca82764b5537532014a782913
When Docker was used, its "create host directory tree" feature was
used. It apparently created directories with the "container_var_lib_t"
type, and this prevents podman containers from accessing the content,
with AVC errors (permission denied).
The following patch ensures we get a recursive chcon running.
We're using the "command" module instead of the "file" module because
Ansible doesn't like broken symlinks (in fact, they are symlinks with
relative paths within containers).
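A minimal sketch of such a task; the path and SELinux type shown here are
illustrative only:

    - name: Relabel the host directory tree for container access
      command: chcon -R -t container_file_t /var/lib/config-data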
Change-Id: I20d00c79fc898b0c4e535662ee6a70472e075b36
If compute nodes are deployed without deploying/updating the controllers,
then the computes will not have cellv2 mappings, as this is run in the
controller deploy steps (nova-api).
This can happen if the controller nodes are blacklisted during a compute
scale out. It's also likely to be an issue going forward if the deployment
is staged (e.g. split control plane).
This change moves the cell_v2 discovery logic to the nova-compute/nova-ironic
deploy step.
Closes-bug: 1786961
Change-Id: I12a02f636f31985bc1b71bff5b744d346286a95f