kayobe/ansible/wipe-disks.yml
Mark Goddard 6c54ce4d3b Introduce max fail percentage to playbooks
This allows us to continue execution until a certain proportion of hosts
fail. This can be useful at scale, where failures are common, and
restarting a deployment is time-consuming.

The default max failure percentage is 100, keeping the default
behaviour. A global max failure percentage may be set via
kayobe_max_fail_percentage, and individual playbooks may define a max
failure percentage via <playbook>_max_fail_percentage.
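For example, the precedence described above could be exercised with settings like the following (the variable names come from this change; the numeric values are purely illustrative):

```yaml
# Illustrative extra-vars or group_vars snippet.
# Global default: tolerate up to 20% of hosts failing in any playbook.
kayobe_max_fail_percentage: 20

# Per-playbook override: allow up to 50% of hosts to fail during disk
# wiping without aborting the play. This takes precedence over the
# global setting.
wipe_disks_max_fail_percentage: 50
```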

Related Kolla Ansible patch:
https://review.opendev.org/c/openstack/kolla-ansible/+/805598

Change-Id: Ib81c72b63be5765cca664c38141ffc769640cf07
2024-06-03 16:24:29 +00:00

---
# Warning! This play can result in lost data. Take care when developing and
# using it.
# Initialisation task to be applied on first boot of a system to initialise
# disks. We search for block devices that are not currently mounted, then wipe
# any LVM or file system state from them. Any associated dm-crypt devices are
# also closed and removed from crypttab.
- name: Ensure that all unmounted block devices are wiped
  hosts: seed-hypervisor:seed:overcloud:infra-vms
  max_fail_percentage: >-
    {{ wipe_disks_max_fail_percentage |
       default(host_configure_max_fail_percentage) |
       default(kayobe_max_fail_percentage) |
       default(100) }}
  tags:
    - wipe-disks
  tasks:
    - block:
        - name: Tear down unmounted LUKS devices
          include_role:
            name: stackhpc.luks
          vars:
            luks_action: teardown-unmounted

        - name: Wipe disks
          include_role:
            name: wipe-disks
      when: wipe_disks | default(false) | bool