Validate the playbooks metadata structure

This patch adds a custom ansible-lint rule to enforce the structure of
the validation playbooks:

*ValidationHasMetadataRule*:
Throws an ansible-lint error if:
- the *hosts* key is empty or missing,
- the *vars* dictionary is missing,
- the *metadata* dict is missing in *vars*,
- the *name*/*description*/*groups* keys are missing or have the wrong
  data type,
- the validation belongs to one or several groups NOT in the official list of
  groups (groups.yaml).

*YAMLLINT*:
- Enable the yamllint check in the tox linters
- WIP: fix detected yamllint errors

Change-Id: If233286aa9f4299f02f13dc34f1e8c05d89df851
Signed-off-by: Gael Chamoulaud <gchamoul@redhat.com>
(cherry picked from commit e50e1a067d)
Signed-off-by: Gael Chamoulaud <gchamoul@redhat.com>
114 changed files with 551 additions and 423 deletions

@@ -2,6 +2,8 @@ exclude_paths:
 - releasenotes/
 parseable: true
 quiet: false
+rulesdir:
+- .ansible-lint_rules/
 skip_list:
 # Lines should be no longer than 120 chars.
 - '204'

@@ -0,0 +1,138 @@
import os

import six
import yaml

from ansiblelint import AnsibleLintRule


class ValidationHasMetadataRule(AnsibleLintRule):
    id = '750'
    shortdesc = 'Validation playbook must have mandatory metadata'

    info = """
---
- hosts: localhost
  vars:
    metadata:
      name: Validation Name
      description: >
        A full description of the validation.
      groups:
        - group1
        - group2
        - group3
"""

    description = (
        "The Validation playbook must have mandatory metadata:\n"
        "```{}```".format(info)
    )

    severity = 'HIGH'
    tags = ['metadata']

    no_vars_found = "The validation playbook must contain a 'vars' dictionary"
    no_meta_found = (
        "The validation playbook must contain "
        "a 'metadata' dictionary under vars"
    )
    no_groups_found = \
        "*metadata* should contain a list of groups (groups)"
    unknown_groups_found = (
        "Unknown group(s) '{}' found! "
        "The official list of groups is '{}'. "
        "To add a new validation group, please add it in the groups.yaml "
        "file at the root of the tripleo-validations project."
    )

    def get_groups(self):
        """Returns a list of group names supported by
        tripleo-validations by reading the 'groups.yaml'
        file located in the base directory.
        """
        results = []
        grp_file_path = os.path.abspath('groups.yaml')

        with open(grp_file_path, "r") as grps:
            contents = yaml.safe_load(grps)

        for grp_name, grp_desc in sorted(contents.items()):
            results.append(grp_name)

        return results

    def matchplay(self, file, data):
        results = []
        path = file['path']

        if file['type'] == 'playbook':
            if path.startswith("playbooks/") or \
                    path.find("tripleo-validations/playbooks/") > 0:
                # *hosts* line check
                hosts = data.get('hosts', None)
                if not hosts:
                    return [({
                        path: data
                    }, "No *hosts* key found in the playbook")]

                # *vars* lines check
                vars = data.get('vars', None)
                if not vars:
                    return [({
                        path: data
                    }, self.no_vars_found)]
                else:
                    if not isinstance(vars, dict):
                        return [({path: data},
                                 '*vars* should be a dictionary')]

                # *metadata* lines check
                metadata = data['vars'].get('metadata', None)
                if metadata:
                    if not isinstance(metadata, dict):
                        return [(
                            {path: data},
                            '*metadata* should be a dictionary')]
                else:
                    return [({path: data}, self.no_meta_found)]

                # *metadata>[name|description] lines check
                for info in ['name', 'description']:
                    if not metadata.get(info, None):
                        results.append((
                            {path: data},
                            '*metadata* should contain a %s key' % info))
                        continue
                    if not isinstance(metadata.get(info),
                                      six.string_types):
                        results.append((
                            {path: data},
                            '*%s* should be a string' % info))

                # *metadata>groups* lines check
                if not metadata.get('groups', None):
                    results.append((
                        {path: data},
                        self.no_groups_found))
                else:
                    if not isinstance(metadata.get('groups'), list):
                        results.append((
                            {path: data},
                            '*groups* should be a list'))
                    else:
                        groups = metadata.get('groups')
                        group_list = self.get_groups()
                        unknown_groups_list = list(
                            set(groups) - set(group_list))
                        if unknown_groups_list:
                            results.append((
                                {path: data},
                                self.unknown_groups_found.format(
                                    unknown_groups_list,
                                    group_list)
                            ))
                return results
        return results
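
For reference, a minimal sketch of how the rule reacts when driven by hand; the import path and the stubbed group list are assumptions for illustration only (ansible-lint normally discovers the rule through rulesdir, and get_groups() reads groups.yaml from the repository root):

# Sketch (not part of the patch): exercising matchplay() directly.
# 'validation_has_metadata' is a hypothetical module name; get_groups()
# is stubbed so no groups.yaml file is needed on disk.
from validation_has_metadata import ValidationHasMetadataRule

rule = ValidationHasMetadataRule()
rule.get_groups = lambda: ['prep', 'pre-deployment', 'post-deployment']

playbook_file = {'path': 'playbooks/example.yaml', 'type': 'playbook'}

# A parsed playbook with no 'vars' at all trips the first check:
print(rule.matchplay(playbook_file, {'hosts': 'undercloud'}))

# Metadata naming a group outside the official list trips the last one:
bad_play = {
    'hosts': 'undercloud',
    'vars': {
        'metadata': {
            'name': 'Example validation',
            'description': 'Trips the unknown-group check.',
            'groups': ['not-a-real-group'],
        }
    },
}
print(rule.matchplay(playbook_file, bad_play))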

.yamllint (new file)

@@ -0,0 +1,11 @@
---
extends: default

rules:
  line-length:
    # matches hardcoded 160 value from ansible-lint
    max: 160

ignore: |
  zuul.d/*.yaml
  releasenotes/notes/*.yaml
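
The same configuration that tox runs can also be exercised from Python through yamllint's documented API; a small sketch (the playbook path is just an example):

# Sketch: lint a single file against the repository's .yamllint config,
# equivalent to running `yamllint -c .yamllint playbooks/ntp.yaml`.
from yamllint import linter
from yamllint.config import YamlLintConfig

conf = YamlLintConfig(file='.yamllint')

with open('playbooks/ntp.yaml') as playbook:
    for problem in linter.run(playbook, conf):
        print(problem.line, problem.rule, problem.desc)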

@@ -10,7 +10,7 @@
     fail_without_deps: true
     tripleo_delegate_to: "{{ groups['overcloud'] | default([]) }}"
     packages:
       - lvm2
   tasks:
     - include_role:
         name: ceph

@@ -1,6 +1,6 @@
 ---
 - hosts: undercloud
-  gather_facts: yes
+  gather_facts: true
   vars:
     metadata:
       name: Check if latest version of packages is installed

@@ -7,7 +7,7 @@
       This validation checks the flavors assigned to roles exist and have the
       correct capabilities set.
     groups:
       - pre-deployment
       - pre-upgrade
   roles:
     - collect-flavors-and-verify-profiles

@@ -7,7 +7,7 @@
       This validation checks that keystone admin token is disabled on both
       undercloud and overcloud controller after deployment.
     groups:
       - post-deployment
     keystone_conf_file: "/var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf"
   roles:
     - controller-token

@@ -6,7 +6,7 @@
     description: >
       This will check the ulimits of each controller.
     groups:
       - post-deployment
     nofiles_min: 1024
     nproc_min: 2048
   roles:

@@ -7,6 +7,6 @@
       This validation checks that the nodes and hypervisor statistics
       add up.
     groups:
       - pre-deployment
   roles:
     - default-node-count

@@ -17,12 +17,12 @@
     # will be passed to the Neutron services. The order is important
     # here: the values in later files take precedence.
     configs:
       - /etc/neutron/neutron.conf
       - /usr/share/neutron/neutron-dist.conf
       - /etc/neutron/metadata_agent.ini
       - /etc/neutron/dhcp_agent.ini
       - /etc/neutron/fwaas_driver.ini
       - /etc/neutron/l3_agent.ini
   roles:
     - neutron-sanity-check

@@ -7,7 +7,7 @@
       When using Neutron, the `firewall_driver` option in Nova must be set to
       `NoopFirewallDriver`.
     groups:
       - post-deployment
     nova_conf_path: "/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf"
   roles:
     - no-op-firewall-nova-driver

@@ -9,6 +9,6 @@
       The deployment should configure and run chronyd. This validation verifies
       that it is indeed running and connected to an NTP server on all nodes.
     groups:
       - post-deployment
   roles:
     - ntp

@@ -12,7 +12,7 @@
       - Are images named centos or rhel available?
       - Are there sufficient compute resources available for a default setup? (1 Master node, 1 Infra node, 2 App nodes)
     groups:
       - openshift-on-openstack
     min_total_ram_testing: 16384 # 4 per node
     min_total_vcpus_testing: 4 # 1 per node
     min_total_disk_testing: 93 # Master: 40, others: 17 per node

@@ -23,8 +23,8 @@
     min_node_disk_testing: 40 # Minimum disk per node for testing
     min_node_ram_prod: 16384 # Minimum ram per node for production
     min_node_disk_prod: 42 # Minimum disk per node for production
-    resource_reqs_testing: False
-    resource_reqs_prod: False
+    resource_reqs_testing: false
+    resource_reqs_prod: false
   tasks:
     - include_role:
         name: openshift-on-openstack

@@ -7,7 +7,7 @@
       Checks if an external network has been configured on the overcloud as
       required for an OpenShift deployment on top of OpenStack.
     groups:
       - openshift-on-openstack
   tasks:
     - include_role:
         name: openshift-on-openstack

@@ -8,8 +8,8 @@
       This validation gets the PublicVip address from the deployment and
       tries to access Horizon and get a Keystone token.
     groups:
       - post-deployment
       - pre-upgrade
       - post-upgrade
   roles:
     - openstack-endpoints

@@ -1,6 +1,6 @@
 ---
 - hosts: undercloud, overcloud
-  gather_facts: yes
+  gather_facts: true
   vars:
     metadata:
       name: Check correctness of current repositories

@@ -7,7 +7,8 @@
       Verify that stonith devices are configured for your OpenStack Platform HA cluster.
       We don't configure stonith device with TripleO Installer. Because the hardware
       configuration may be differ in each environment and requires different fence agents.
-      How to configure fencing please read https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/paged/director-installation-and-usage/86-fencing-the-controller-nodes
+      How to configure fencing please read
+      https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/paged/director-installation-and-usage/86-fencing-the-controller-nodes
     groups:
       - post-deployment
   roles:

@@ -8,7 +8,7 @@
       and that all certs being tracked by certmonger are in the
       MONITORING state.
     groups:
       - post-deployment
   tasks:
     - include_role:
         name: tls-everywhere

@@ -7,7 +7,7 @@
       Checks that the undercloud has novajoin set up corectly and
       that we are ready to do the overcloud deploy with tls-everywhere.
     groups:
       - pre-deployment
   tasks:
     - include_role:
         name: tls-everywhere

@@ -7,7 +7,7 @@
       Checks that the undercloud is ready to set up novajoin and
       to register to IdM as a client as part of undercloud-install.
     groups:
       - prep
   tasks:
     - include_role:
         name: tls-everywhere

@@ -1,6 +1,6 @@
 ---
 - hosts: undercloud
-  gather_facts: yes
+  gather_facts: true
   vars:
     metadata:
       name: Verify undercloud fits the CPU core requirements

@@ -11,10 +11,10 @@
     groups:
       - pre-upgrade
     volumes:
       - {mount: /var/lib/docker, min_size: 10}
       - {mount: /var/lib/config-data, min_size: 3}
       - {mount: /var, min_size: 16}
       - {mount: /, min_size: 20}
   roles:
     - undercloud-disk-space

@@ -12,12 +12,12 @@
       - prep
       - pre-introspection
     volumes:
       - {mount: /var/lib/docker, min_size: 10}
       - {mount: /var/lib/config-data, min_size: 3}
       - {mount: /var/log, min_size: 3}
       - {mount: /usr, min_size: 5}
       - {mount: /var, min_size: 20}
       - {mount: /, min_size: 25}
   roles:
     - undercloud-disk-space

@@ -8,8 +8,8 @@
       heat database can grow very large. This validation checks that
       the purge_deleted crontab has been set up.
     groups:
       - pre-upgrade
       - pre-deployment
     cron_check: "heat-manage purge_deleted"
   roles:
     - undercloud-heat-purge-deleted

@@ -17,13 +17,13 @@
     # will be passed to the Neutron services. The order is important
     # here: the values in later files take precedence.
     configs:
       - /etc/neutron/neutron.conf
       - /usr/share/neutron/neutron-dist.conf
       - /etc/neutron/metadata_agent.ini
       - /etc/neutron/dhcp_agent.ini
       - /etc/neutron/plugins/ml2/openvswitch_agent.ini
       - /etc/neutron/fwaas_driver.ini
       - /etc/neutron/l3_agent.ini
   roles:
     - neutron-sanity-check

@@ -1,6 +1,6 @@
 ---
 - hosts: undercloud
-  gather_facts: yes
+  gather_facts: true
   vars:
     metadata:
       name: Verify the undercloud fits the RAM requirements

@@ -1,6 +1,6 @@
 ---
 - hosts: undercloud
-  gather_facts: yes
+  gather_facts: true
   vars:
     metadata:
       name: Undercloud SELinux Enforcing Mode Check

@@ -8,7 +8,7 @@
       keystone database can grow very large. This validation checks that
       the keystone token_flush crontab has been set up.
     groups:
       - pre-introspection
     cron_check: "keystone-manage token_flush"
   roles:
     - undercloud-tokenflush

@@ -2,7 +2,7 @@
 - name: List the available drives
   register: drive_list
   command: "ls /sys/class/block/"
-  changed_when: False
+  changed_when: false

 - name: Detect whether the drive uses Advanced Format
   advanced_format: drive={{ item }}

@@ -4,4 +4,3 @@ fail_without_deps: false
 fail_on_ceph_health_err: false
 osd_percentage_min: 0
 ceph_ansible_repo: "centos-ceph-nautilus"

@@ -2,9 +2,9 @@
 - name: Check if ceph-ansible is installed
   shell: rpm -q ceph-ansible || true
   args:
-    warn: no
+    warn: false
-  changed_when: False
-  ignore_errors: True
+  changed_when: false
+  ignore_errors: true
   register: ceph_ansible_installed

 - name: Warn about missing ceph-ansible

@@ -24,7 +24,7 @@
 - name: Get ceph-ansible repository
   shell: "yum info ceph-ansible | awk '/From repo/ {print $4}'"
   register: repo
-  changed_when: False
+  changed_when: false

 - name: Fail if ceph-ansible doesn't belong to the specified repo
   fail:

@@ -32,4 +32,3 @@
   when:
     - (repo.stdout | length == 0 or repo.stdout != "{{ ceph_ansible_repo }}")
    - fail_without_ceph_ansible|default(false)|bool

@@ -4,64 +4,65 @@
   shell: hiera -c /etc/puppet/hiera.yaml enabled_services | egrep -sq ceph_mon
   ignore_errors: true
   register: ceph_mon_enabled
-  changed_when: False
+  changed_when: false

-- when:
-    - ceph_mon_enabled is succeeded
+- when: "ceph_mon_enabled is succeeded"
   block:
     - name: Set container_cli fact from the inventory
       set_fact:
         container_cli: "{{ hostvars[inventory_hostname].container_cli|default('podman') }}"

     - name: Set container filter format
       set_fact:
         container_filter_format: !unsafe "--format '{{ .Names }}'"

     - name: Set ceph_mon_container name
       become: true
       shell: "{{ container_cli }} ps {{ container_filter_format }} | grep ceph-mon"
       register: ceph_mon_container
-      changed_when: False
+      changed_when: false

     - name: Set ceph cluster name
       become: true
       shell: find /etc/ceph -name '*.conf' -prune -print -quit | xargs basename -s '.conf'
       register: ceph_cluster_name
-      changed_when: False
+      changed_when: false

     - name: Get ceph health
       become: true
       shell: "{{ container_cli }} exec {{ ceph_mon_container.stdout }} ceph --cluster {{ ceph_cluster_name.stdout }} health | awk '{print $1}'"
       register: ceph_health

     - name: Check ceph health
       warn:
         msg: Ceph is in {{ ceph_health.stdout }} state.
       when:
         - ceph_health.stdout != 'HEALTH_OK'
         - not fail_on_ceph_health_err|default(true)|bool

     - name: Fail if ceph health is HEALTH_ERR
       fail:
         msg: Ceph is in {{ ceph_health.stdout }} state.
       when:
         - ceph_health.stdout == 'HEALTH_ERR'
         - fail_on_ceph_health_err|default(true)|bool

 - when:
     - osd_percentage_min|default(0) > 0
   block:
     - name: set jq osd percentage filter
       set_fact:
         jq_osd_percentage_filter: '( (.num_in_osds) / (.num_osds) ) * 100'

     - name: Get OSD stat percentage
       become: true
-      shell: "{{ container_cli }} exec {{ ceph_mon_container.stdout }} ceph --cluster {{ ceph_cluster_name.stdout }} osd stat -f json | jq '{{ jq_osd_percentage_filter }}'"
+      shell: >-
+        "{{ container_cli }}" exec "{{ ceph_mon_container.stdout }}" ceph
+        --cluster "{{ ceph_cluster_name.stdout }}" osd stat -f json | jq '{{ jq_osd_percentage_filter }}'
       register: ceph_osd_in_percentage

     - name: Fail if there is an unacceptable percentage of in OSDs
       fail:
         msg: "Only {{ ceph_osd_in_percentage.stdout|float }}% of OSDs are in, but {{ osd_percentage_min|default(0) }}% are required"
       when:
         - ceph_osd_in_percentage.stdout|float < osd_percentage_min|default(0)

@@ -17,7 +17,7 @@
 - name: Prepare
   hosts: all
-  gather_facts: no
+  gather_facts: false
   tasks:
     - name: install patch rpm

@@ -17,7 +17,7 @@
 - name: Converge
   hosts: all
-  gather_facts: no
+  gather_facts: false
   tasks:
     - name: successful check with ctlplane-subnet

@@ -17,7 +17,7 @@
 - name: Prepare
   hosts: all
-  gather_facts: no
+  gather_facts: false
   tasks:
     - name: install hiera

@@ -5,12 +5,12 @@
     name: "tripleo_undercloud_conf_file"

 - name: Get the local_subnet name from the undercloud_conf file
-  become: True
+  become: true
   validations_read_ini:
     path: "{{ tripleo_undercloud_conf_file }}"
     section: DEFAULT
     key: local_subnet
-    ignore_missing_file: True
+    ignore_missing_file: true
   register: local_subnet

 - name: Get gateway value from the undercloud.conf file

@@ -19,7 +19,7 @@
     path: "{{ tripleo_undercloud_conf_file }}"
     section: "{% if local_subnet.value %}{{ local_subnet.value }}{% else %}ctlplane-subnet{% endif %}"
     key: gateway
-    ignore_missing_file: True
+    ignore_missing_file: true
   register: gateway

 - name: Get local_ip value from the undercloud.conf file

@@ -28,7 +28,7 @@
     path: "{{ tripleo_undercloud_conf_file }}"
     section: DEFAULT
     key: local_ip
-    ignore_missing_file: True
+    ignore_missing_file: true
   register: local_ip

 - name: Test network_gateway if different from local_ip

@@ -5,5 +5,5 @@ metadata:
     This validation checks the flavors assigned to roles exist and have the
     correct capabilities set.
   groups:
     - pre-deployment
     - pre-upgrade

@@ -8,29 +8,29 @@
 - when: "'Undercloud' in group_names"
   block:
     - name: Set container_cli fact from undercloud.conf
       block:
         - name: Get the path of tripleo undercloud config file
           become: true
           hiera:
             name: "tripleo_undercloud_conf_file"

         - name: Get container client from undercloud.conf
           validations_read_ini:
             path: "{{ tripleo_undercloud_conf_file }}"
             section: DEFAULT
             key: container_cli
             ignore_missing_file: true
           register: container_cli

         - name: Set uc_container_cli for the Undercloud
           set_fact:
             uc_container_cli: "{{ container_cli.value|default('podman', true) }}"
           when: uc_container_cli is not defined

 - name: Get failed containers for podman
   changed_when: false
-  become: True
+  become: true
   command: >
     {% if oc_container_cli is defined %}{{ oc_container_cli }}{% else %}{{ uc_container_cli }}{% endif %}
     {% raw %}

@@ -2,7 +2,7 @@
 - name: gather docker facts
   docker_facts:
     container_filter: status=running
-  become: yes
+  become: true

 - name: compare running containers to list
   set_fact:

@@ -25,6 +25,6 @@
     state: started # Port should be open
     delay: 0 # No wait before first check (sec)
     timeout: 3 # Stop checking after timeout (sec)
-  ignore_errors: yes
+  ignore_errors: true
   loop: "{{ open_ports }}"
   when: ctlplane_ip is defined

@@ -17,7 +17,7 @@
 - name: Converge
   hosts: all
-  gather_facts: no
+  gather_facts: false
   tasks:
     - name: pass validation

@@ -5,7 +5,7 @@
     path: "{{ keystone_conf_file }}"
     section: DEFAULT
     key: admin_token
-    ignore_missing_file: True
+    ignore_missing_file: true
   register: token_result

 - name: Check if token value is disabled.

@@ -5,4 +5,4 @@ metadata:
     This validation checks that keystone admin token is disabled on both
     undercloud and overcloud controller after deployment.
   groups:
     - post-deployment

@@ -17,7 +17,7 @@
 - name: Converge
   hosts: all
-  gather_facts: no
+  gather_facts: false
   vars:
     nofiles_min: 102400

@@ -4,7 +4,7 @@
   # NOTE: `ulimit` is a shell builtin so we have to invoke it like this:
   command: sh -c "ulimit -n"
   register: nofilesval
-  changed_when: False
+  changed_when: false

 - name: Check nofiles limit
   fail:

@@ -18,7 +18,7 @@
   # NOTE: `ulimit` is a shell builtin so we have to invoke it like this:
   command: sh -c "ulimit -u"
   register: nprocval
-  changed_when: False
+  changed_when: false

 - name: Check nproc limit
   fail:

@@ -17,7 +17,7 @@
 - name: Converge
   hosts: all
-  gather_facts: no
+  gather_facts: false
   tasks:
     - name: prepare directory tree for hiera

@@ -17,7 +17,7 @@
 - name: Prepare
   hosts: all
-  gather_facts: no
+  gather_facts: false
   tasks:
     - name: install hiera

@@ -10,7 +10,7 @@
     path: "{{ tripleo_undercloud_conf_file }}"
     section: ctlplane-subnet
     key: dhcp_start
-    ignore_missing_file: True
+    ignore_missing_file: true
     default: "192.0.2.5"
   register: dhcp_start

@@ -20,7 +20,7 @@
     path: "{{ tripleo_undercloud_conf_file }}"
     section: ctlplane-subnet
     key: dhcp_end
-    ignore_missing_file: True
+    ignore_missing_file: true
     default: "192.0.2.24"
   register: dhcp_end

@@ -5,4 +5,4 @@ metadata:
     This validation checks that the nodes and hypervisor statistics
     add up.
   groups:
     - pre-deployment

@@ -1,6 +1,6 @@
 ---
 - name: Look up the introspection interface
-  become: True
+  become: true
   validations_read_ini:
     path: "{{ ironic_inspector_conf }}"
     section: iptables

@@ -8,7 +8,7 @@
   register: interface

 - name: Look up the introspection interface from the deprecated option
-  become: True
+  become: true
   validations_read_ini:
     path: "{{ ironic_inspector_conf }}"
     section: firewall

@@ -17,4 +17,4 @@
 - name: Look for rogue DHCP servers
   script: files/rogue_dhcp.py {{ interface.value or interface_deprecated.value or 'br-ctlplane' }}
-  changed_when: False
+  changed_when: false

@@ -7,7 +7,7 @@
     path: "{{ tripleo_undercloud_conf_file }}"
     section: DEFAULT
     key: local_interface
-    ignore_missing_file: True
+    ignore_missing_file: true
   register: local_interface

 - name: Look for DHCP responses

@@ -1,4 +1,4 @@
 ---
 - name: Ensure DNS resolution works
   command: "getent hosts {{ server_to_lookup }}"
-  changed_when: False
+  changed_when: false

@@ -17,7 +17,7 @@
 - name: Converge
   hosts: all
-  gather_facts: no
+  gather_facts: false
   vars:
     haproxy_config_file: /haproxy.cfg

@@ -2,7 +2,7 @@
 - name: Get the healthcheck services list enabled on node
   shell: >
     systemctl list-unit-files | grep "^tripleo.*healthcheck.*enabled" | awk -F'.' '{print $1}'
-  changed_when: False
+  changed_when: false
   register: healthcheck_services_list
   when: inflight_healthcheck_services | length < 1

@@ -23,7 +23,7 @@
   until:
     - systemd_healthcheck_state.status.ExecMainPID != '0'
     - systemd_healthcheck_state.status.ActiveState in ['inactive', 'failed']
-  ignore_errors: True
+  ignore_errors: true
   register: systemd_healthcheck_state
   with_items: "{{ hc_services }}"

@@ -8,7 +8,7 @@ platforms:
   - name: centos7
     hostname: centos7
     image: centos:7
-    override_command: True
+    override_command: true
     command: python -m SimpleHTTPServer 8787
     pkg_extras: python-setuptools python-enum34 python-netaddr epel-release ruby PyYAML
     easy_install:

@@ -20,7 +20,7 @@ platforms:
   - name: fedora28
     hostname: fedora28
     image: fedora:28
-    override_command: True
+    override_command: true
     command: python3 -m http.server 8787
     pkg_extras: python*-setuptools python*-enum python*-netaddr ruby PyYAML
     environment:

@@ -17,7 +17,7 @@
 - name: Converge
   hosts: all
-  gather_facts: no
+  gather_facts: false
   tasks:
     - name: detect wrong port

@@ -39,7 +39,7 @@
       block:
         - name: run validation for 404
           include_role:
             name: image-serve
       rescue:
         - name: Clear host errors
           meta: clear_host_errors

@@ -17,7 +17,7 @@
 - name: Prepare
   hosts: all
-  gather_facts: no
+  gather_facts: false
   tasks:
     - name: install hiera

@@ -10,7 +10,7 @@
     path: "{{ tripleo_undercloud_conf_file }}"
     section: DEFAULT
     key: local_ip
-    ignore_missing_file: True
+    ignore_missing_file: true
   register: local_ip

 - name: Set container registry host

@@ -1,3 +1,4 @@
+---
 metadata:
   name: Image-serve availability
   description: Verify that image-serve service is ready

@@ -9,7 +9,7 @@
     "{{ container_cli|default('podman', true) }}" exec -u root
     $("{{ container_cli|default('podman', true) }}" ps -q --filter "name=mysql|galera-bundle" | head -1)
     /bin/bash -c 'ulimit -n'
-  changed_when: False
+  changed_when: false
   register: mysqld_open_files_limit

 - name: Test the open-files-limit value

@@ -7,24 +7,24 @@
 - when: "'Undercloud' in group_names"
   block:
     - name: Get the path of tripleo undercloud config file
       become: true
       hiera:
         name: "tripleo_undercloud_conf_file"

     - name: Get the Container CLI from the undercloud.conf file
       become: true
       validations_read_ini:
         path: "{{ tripleo_undercloud_conf_file }}"
         section: DEFAULT
         key: container_cli
         ignore_missing_file: true
       register: container_cli

     - name: Set uc_container_cli and container_name for the Undercloud
       set_fact:
         uc_container_cli: "{{ container_cli.value|default('podman', true) }}"
         container_name: "neutron_ovs_agent"

 - name: Run neutron-sanity-check
   command: >

@@ -35,7 +35,7 @@
   become: true
   register: nsc_return
   ignore_errors: true
-  changed_when: False
+  changed_when: false

 - name: Detect errors
   set_fact:

@@ -17,7 +17,7 @@
 - name: Converge
   hosts: all
-  gather_facts: no
+  gather_facts: false
   vars:
     nova_conf_path: "/nova.conf"

@@ -48,7 +48,7 @@
         section: DEFAULT
         option: firewall_driver
         value: CHANGEME
-        backup: yes
+        backup: true

     - include_role:
         name: no-op-firewall-nova-driver

@@ -5,4 +5,4 @@ metadata:
     When using Neutron, the `firewall_driver` option in Nova must be set to
     `NoopFirewallDriver`.
   groups:
     - post-deployment

@@ -6,7 +6,7 @@
 - name: Ping all overcloud nodes
   icmp_ping:
     host: "{{ item }}"
   with_items: "{{ oc_ips.results | map(attribute='ansible_facts.ansible_host') | list }}"
   ignore_errors: true
   register: ping_results

@@ -1,6 +1,6 @@
 ---
 - name: Get VIF Plugging setting values from nova.conf
-  become: True
+  become: true
   validations_read_ini:
     path: "{{ nova_config_file }}"
     section: DEFAULT

@@ -21,14 +21,14 @@
   with_items: "{{ nova_config_result.results }}"

 - name: Get auth_url value from hiera
-  become: True
+  become: true
   command: hiera -c /etc/puppet/hiera.yaml neutron::server::notifications::auth_url
-  ignore_errors: True
-  changed_when: False
+  ignore_errors: true
+  changed_when: false
   register: auth_url

 - name: Get auth_url value from neutron.conf
-  become: True
+  become: true
   validations_read_ini:
     path: "{{ neutron_config_file }}"
     section: nova

@@ -45,7 +45,7 @@
   failed_when: "neutron_auth_url_result.value != auth_url.stdout"

 - name: Get Notify Nova settings values from neutron.conf
-  become: True
+  become: true
   validations_read_ini:
     path: "{{ neutron_config_file }}"
     section: DEFAULT

@@ -63,7 +63,7 @@
   with_items: "{{ neutron_notify_nova_result.results }}"

 - name: Get Tenant Name setting value from neutron.conf
-  become: True
+  become: true
   validations_read_ini:
     path: "{{ neutron_config_file }}"
     section: nova

@@ -17,7 +17,7 @@
 - name: Converge
   hosts: all
-  gather_facts: no
+  gather_facts: false
   tasks:
     - name: working detection

@@ -17,7 +17,7 @@
 - name: Prepare
   hosts: all
-  gather_facts: no
+  gather_facts: false
   tasks:
     - name: Populate successful podman CLI

@@ -7,7 +7,7 @@
 - name: Check nova upgrade status
   become: true
   command: "{{ container_cli }} exec -u root nova_api nova-status upgrade check"
-  changed_when: False
+  changed_when: false
   register: nova_upgrade_check

 - name: Warn if at least one check encountered an issue

@@ -1,26 +1,26 @@
 ---
 - name: Get if chrony is enabled
-  become: True
+  become: true
   hiera:
     name: "chrony_enabled"

 - when: chrony_enabled|bool
   block:
     - name: Populate service facts
       service_facts: # needed to make yaml happy

     - name: Fail if chronyd service is not running
       fail:
         msg: "Chronyd service is not running"
       when: "ansible_facts.services['chronyd.service'].state != 'running'"

     - name: Run chronyc
-      become: True
+      become: true
       command: chronyc -a 'burst 4/4'
-      changed_when: False
+      changed_when: false

-# ntpstat returns 0 if synchronised and non-zero otherwise:
 - name: Run ntpstat
+  # ntpstat returns 0 if synchronised and non-zero otherwise:
   command: ntpstat
-  changed_when: False
+  changed_when: false
   when: not chrony_enabled|bool

@@ -7,4 +7,4 @@ metadata:
     The deployment should configure and run chronyd. This validation verifies
     that it is indeed running and connected to an NTP server on all nodes.
   groups:
     - post-deployment

@@ -9,5 +9,5 @@ min_node_ram_testing: 4096 # Minimum ram per node for testing
 min_node_disk_testing: 40 # Minimum disk per node for testing
 min_node_ram_prod: 16384 # Minimum ram per node for production
 min_node_disk_prod: 42 # Minimum disk per node for production
-resource_reqs_testing: False
-resource_reqs_prod: False
+resource_reqs_testing: false
+resource_reqs_prod: false

@@ -23,7 +23,7 @@
           domain:
             id: default
         password: "{{ overcloud_admin_password }}"
-    return_content: yes
+    return_content: true
     status_code: 201
   register: keystone_result
   no_log: true

@@ -53,7 +53,7 @@
     headers:
       X-Auth-Token: "{{ auth_token }}"
       Accept: application/vnd.openstack.compute.v2.1+json
-    return_content: yes
+    return_content: true
     follow_redirects: all
   register: flavors_result_testing

@@ -64,7 +64,7 @@
     headers:
      X-Auth-Token: "{{ auth_token }}"
       Accept: application/vnd.openstack.compute.v2.1+json
-    return_content: yes
+    return_content: true
     follow_redirects: all
   register: flavors_result_prod

@@ -89,7 +89,7 @@
     headers:
       X-Auth-Token: "{{ auth_token }}"
       Accept: application/vnd.openstack.compute.v2.1+json
-    return_content: yes
+    return_content: true
     follow_redirects: all
   register: hypervisors_result

@@ -116,7 +116,7 @@
     method: GET
     headers:
       X-Auth-Token: "{{ auth_token }}"
-    return_content: yes
+    return_content: true
     follow_redirects: all
   register: images

@@ -1,7 +1,7 @@
 ---
 - name: Set fact to identify if the overcloud was deployed
   set_fact:
     overcloud_deployed: "{{ groups['overcloud'] is defined }}"

 - name: Warn if no overcloud deployed yet
   warn:

@@ -12,62 +12,62 @@
 - when: overcloud_deployed|bool
   block:
     # Get auth token and service catalog from Keystone and extract service urls.
     - name: Get token and catalog from Keystone
       uri:
         url: "{{ overcloud_keystone_url
                  | urlsplit('scheme') }}://{{ overcloud_keystone_url
                  | urlsplit('netloc') }}/v3/auth/tokens"
         method: POST
         body_format: json
         body:
           auth:
             scope:
               project:
                 name: admin
                 domain:
                   id: default
             identity:
               methods:
                 - password
               password:
                 user:
                   name: admin
                   domain:
                     id: default
                   password: "{{ overcloud_admin_password }}"
-        return_content: yes
+        return_content: true
         status_code: 201
       register: keystone_result
       no_log: true
       when: overcloud_keystone_url|default('')

     - name: Set auth token
       set_fact: token="{{ keystone_result.x_subject_token }}"

     - name: Get Neutron URL from catalog
       set_fact: neutron_url="{{ keystone_result.json.token
                 | json_query("catalog[?name=='neutron'].endpoints")
                 | first
                 | selectattr('interface', 'equalto', 'public')
                 | map(attribute='url') | first }}"

     # Get overcloud networks from Neutron and check if there is
     # a network with a common name for external networks.
     - name: Get networks from Neutron
       uri:
         url: "{{ neutron_url }}/v2.0/networks?router:external=true"
         method: GET
         headers:
           X-Auth-Token: "{{ token }}"
-        return_content: yes
+        return_content: true
         follow_redirects: all
       register: networks_result

     - name: Warn if there are no matching networks
       warn:
         msg: |
           No external network found. It is strongly recommended that you
           configure an external Neutron network with a floating IP address
           pool.
       when: networks_result.json.networks | length == 0

@@ -1,7 +1,7 @@
 ---
 - name: Set fact to identify if the overcloud was deployed
   set_fact:
     overcloud_deployed: "{{ groups['overcloud'] is defined }}"

 # Check that the Horizon endpoint exists
 - name: Fail if the HorizonPublic endpoint is not defined

@@ -30,7 +30,7 @@
 # Check that we can obtain an auth token from horizon
 - name: Check Keystone
-  no_log: True
+  no_log: true
   uri:
     url: "{{ overcloud_keystone_url | urlsplit('scheme') }}://{{ overcloud_keystone_url | urlsplit('netloc') }}/v3/auth/tokens"
     method: POST

@@ -46,7 +46,7 @@
           domain:
             name: Default
         password: "{{ overcloud_admin_password }}"
-    return_content: yes
+    return_content: true
     status_code: 201
   register: auth_token
   when: overcloud_keystone_url|default('')

@@ -1,12 +1,12 @@
 ---
 - name: Get OVS DPDK PMD cores mask value
   become_method: sudo
-  become: True
+  become: true
   register: pmd_cpu_mask
   command: ovs-vsctl --no-wait get Open_vSwitch . other_config:pmd-cpu-mask
-  changed_when: False
+  changed_when: false

 - name: Run OVS DPDK PMD cores check
-  become: True
+  become: true
   ovs_dpdk_pmd_cpus_check:
     pmd_cpu_mask: "{{ pmd_cpu_mask.stdout }}"

@@ -1,10 +1,10 @@
 ---
 - name: Check pacemaker service is running
-  become: True
+  become: true
   command: "/usr/bin/systemctl show pacemaker --property ActiveState"
   register: check_service
-  changed_when: False
-  ignore_errors: True
+  changed_when: false
+  ignore_errors: true

 - when: "check_service.stdout == 'ActiveState=active'"
   block:

@@ -12,7 +12,7 @@
       become: true
       command: pcs status xml
       register: pcs_status
-      changed_when: False
+      changed_when: false

     - name: Check pacemaker status
       pacemaker:
         status: "{{ pcs_status.stdout }}"

@@ -17,7 +17,7 @@
 - name: Converge
   hosts: all
-  gather_facts: no
+  gather_facts: false
   tasks:
     - name: working detection

@@ -17,7 +17,7 @@
 - name: Prepare
   hosts: all
-  gather_facts: no
+  gather_facts: false
   tasks:
     - name: Populate successful podman CLI

@@ -30,7 +30,7 @@
         name: faulty
         description: really faulty repository
         baseurl: http://this.repository.do-not.exists/like-not-at-all
-        enabled: yes
+        enabled: true

     - name: execute role
       include_role:

@@ -56,7 +56,7 @@
         name: faulty-bis
         description: faulty repository with working DNS
         baseurl: http://download.fedoraproject.org/pub/fedora/blah
-        enabled: yes
+        enabled: true

     - name: execute role
       include_role:

@@ -1,11 +1,11 @@
 ---
 - name: List repositories
-  become: True
+  become: true
   shell: |
     {{ ansible_pkg_mgr }} repolist enabled -v 2>&1 || exit 0
   args:
-    warn: no
+    warn: false
-  changed_when: False
+  changed_when: false
   register: repositories

 - name: Fail if we detect error in repolist output

@@ -16,7 +16,7 @@
     repositories.stdout is regex('(cannot|could not|failure)', ignorecase=True)

 - name: Find repository IDs
-  changed_when: False
+  changed_when: false
   shell: 'echo "{{ repositories.stdout }}" | grep Repo-id | sed "s/Repo-id.*://" | tr -d " "'
   register: repository_ids

@@ -25,5 +25,5 @@
     msg: Found unwanted repository {{ item.0 }} enabled
   when: item.0 == item.1
   with_nested:
-    - [ 'epel/x86_64' ]
+    - ['epel/x86_64']
     - "{{ repository_ids.stdout_lines }}"

@@ -1,27 +0,0 @@
galaxy_info:
  author: TripleO Validations Team
  company: Red Hat
  license: Apache
  min_ansible_version: 2.4

  platforms:
    - name: CentOS
      versions:
        - 7
    - name: RHEL
      versions:
        - 7

  categories:
    - cloud
    - baremetal
    - system

galaxy_tags: []
# List tags for your role here, one per line. A tag is a keyword that describes
# and categorizes the role. Users find roles by searching for tags. Be sure to
# remove the '[]' above, if you add tags to this list.
#
# NOTE: A tag is limited to a single word comprised of alphanumeric characters.
# Maximum 20 tags per role.

dependencies: []

@@ -4,7 +4,7 @@
     systemctl list-units --failed --plain --no-legend --no-pager |
     awk '{print $1}'
   register: systemd_status
-  changed_when: False
+  changed_when: false

 - name: Fails if we find failed units
   assert:

@@ -17,7 +17,7 @@
 - name: Prepare
   hosts: all
-  gather_facts: no
+  gather_facts: false
   tasks:
     - name: Populate successful stonith

@@ -1,13 +1,13 @@
 ---
 - name: Check if we are in HA cluster environment
-  become: True
+  become: true
   register: pcs_cluster_status
   command: pcs cluster status
   failed_when: false
   changed_when: false

 - name: Get all currently configured stonith devices
-  become: True
+  become: true
   command: "pcs stonith"
   register: stonith_devices
   changed_when: false

@@ -5,6 +5,7 @@ metadata:
     Verify that stonith devices are configured for your OpenStack Platform HA cluster.
     We don't configure stonith device with TripleO Installer. Because the hardware
     configuration may be differ in each environment and requires different fence agents.
-    How to configure fencing please read https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/paged/director-installation-and-usage/86-fencing-the-controller-nodes
+    How to configure fencing please read
+    https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/paged/director-installation-and-usage/86-fencing-the-controller-nodes
   groups:
     - post-deployment

@@ -127,19 +127,19 @@
     path: "/etc/ipa/default.conf"
     section: global
     key: realm
-    ignore_missing_file: False
+    ignore_missing_file: false
   register: ipa_realm
-  check_mode: no
+  check_mode: false

 - name: Set fact for IdM/FreeIPA host entry
   set_fact:
     host_entry: "{{ ansible_fqdn }}@{{ ipa_realm.value }}"
   when: ipa_conf_stat.stat.exists

 - name: Set fact for IdM/FreeIPA host principal
   set_fact:
     host_principal: "host/{{ host_entry }}"
   when: ipa_conf_stat.stat.exists

 # Kerberos keytab related tasks
 - name: Check for kerberos host keytab

@@ -182,7 +182,7 @@
   changed_when: false
   become: true
   when: krb5_keytab_stat.stat.exists
-  check_mode: no
+  check_mode: false

 - name: Set facts for host principals in /etc/krb5.keytab
   set_fact:


@@ -4,7 +4,7 @@
   become: true
   hiera:
     name: "certmonger_user_enabled"
-  check_mode: no
+  check_mode: false
 - name: Set facts for certmonger user service not enabled
   set_fact:
@@ -36,7 +36,7 @@
   become: true
   changed_when: false
   register: all_certnames
-  check_mode: no
+  check_mode: false
 # Get status of all certificates and trim the leading whitespaces
 - name: Get status of all certificates
@@ -47,7 +47,7 @@
   loop_control:
     loop_var: certname
   register: all_cert_status
-  check_mode: no
+  check_mode: false
 - name: Gather certificates that are not in MONITORING status
   set_fact:


@@ -3,7 +3,7 @@
 - name: Verify that join.conf exists (containzerized)
   command: "{{ command_prefix }} exec novajoin_server test -e /etc/novajoin/join.conf"
   register: containerized_join_conf_st
-  changed_when: False
+  changed_when: false
   become: true
 - name: Fail if join.conf is not present (containerized)
@@ -21,9 +21,9 @@
     path: "{{ joinconf_location }}"
     section: DEFAULT
     key: keytab
-    ignore_missing_file: True
+    ignore_missing_file: true
   register: novajoin_keytab_path
-  check_mode: no
+  check_mode: false
 - name: Get novajoin server port from join.conf
   become: true
@@ -31,9 +31,9 @@
     path: "{{ joinconf_location }}"
     section: DEFAULT
     key: join_listen_port
-    ignore_missing_file: True
+    ignore_missing_file: true
   register: novajoin_server_port
-  check_mode: no
+  check_mode: false
 - name: Get novajoin server host from join.conf
   become: true
@@ -41,9 +41,9 @@
     path: "{{ joinconf_location }}"
     section: DEFAULT
     key: join_listen
-    ignore_missing_file: True
+    ignore_missing_file: true
   register: novajoin_server_host
-  check_mode: no
+  check_mode: false
 ### verify that the keytab and principal are usable ###
 # TODO(alee): We need to move this to a subfile so we can run
@@ -91,7 +91,7 @@
   command: "{{ command_prefix }} exec novajoin_server kdestroy -c {{ item }}"
   with_items: "{{ temp_krb_caches }}"
   ignore_errors: false
-  changed_when: False
+  changed_when: false
   become: true
   when:
     - containerized_novajoin_krb5_keytab_stat.rc == 0


@@ -20,9 +20,9 @@
     path: "{{ joinconf_location }}"
     section: DEFAULT
     key: keytab
-    ignore_missing_file: True
+    ignore_missing_file: true
   register: novajoin_keytab_path
-  check_mode: no
+  check_mode: false
 - name: Get novajoin server port from join.conf
   become: true
@@ -30,9 +30,9 @@
     path: "{{ joinconf_location }}"
     section: DEFAULT
     key: join_listen_port
-    ignore_missing_file: True
+    ignore_missing_file: true
   register: novajoin_server_port
-  check_mode: no
+  check_mode: false
 - name: Get novajoin server host from join.conf
   become: true
@@ -40,9 +40,9 @@
     path: "{{ joinconf_location }}"
     section: DEFAULT
     key: join_listen
-    ignore_missing_file: True
+    ignore_missing_file: true
   register: novajoin_server_host
-  check_mode: no
+  check_mode: false
 ### verify that the keytab and principal are usable ###
 # TODO(alee): We need to move this to a subfile so we can run
@@ -191,4 +191,3 @@
   report_status: "{{ service_running_status }}"
   report_reason: "{{ service_running_reason }}"
   report_recommendations: "{{ service_running_recommendations }}"
-


@@ -6,7 +6,7 @@
 - name: Get the path of tripleo undercloud config file
   become: true
   hiera: name="tripleo_undercloud_conf_file"
-  check_mode: no
+  check_mode: false
 - name: Get the Container CLI from the undercloud.conf file (stein+)
   become: true
@@ -27,25 +27,25 @@
     - not podman_install|bool
     - not docker_install|bool
   block:
-  - name: Determine if Docker is enabled and has containers running
-    command: docker ps -q
-    register: docker_ps
-    become: true
-    ignore_errors: true
-  - name: Set container facts
-    set_fact:
-      docker_install: true
-    when: not docker_ps.stdout|length == 0
-  - name: Set container facts
-    set_fact:
-      docker_install: false
-    when: docker_ps.stdout|length == 0
-  - name: Set container facts
-    set_fact:
-      podman_install: false
+    - name: Determine if Docker is enabled and has containers running
+      command: docker ps -q
+      register: docker_ps
+      become: true
+      ignore_errors: true
+    - name: Set container facts
+      set_fact:
+        docker_install: true
+      when: not docker_ps.stdout|length == 0
+    - name: Set container facts
+      set_fact:
+        docker_install: false
+      when: docker_ps.stdout|length == 0
+    - name: Set container facts
+      set_fact:
+        podman_install: false
 - name: Set podman command prefix
   set_fact:


@@ -3,7 +3,7 @@
   become: true
   hiera:
     name: "tripleo_undercloud_conf_file"
-  check_mode: no
+  check_mode: false
 - name: Verify that nameservers are set in undercloud.conf
   become: true
@@ -11,9 +11,9 @@
     path: "{{ tripleo_undercloud_conf_file }}"
     section: DEFAULT
     key: undercloud_nameservers
-    ignore_missing_file: False
+    ignore_missing_file: false
   register: undercloud_nameservers
-  check_mode: no
+  check_mode: false
 - name: Check that nameservers point to IdM/FreeIPA
   set_fact:
@@ -52,7 +52,7 @@
   shell: host {{ undercloud_conf_dns_query }} | awk '{print $5}'
   register: host_from_ip_reg
   changed_when: false
-  check_mode: no
+  check_mode: false
 - name: Get domain as set in undercloud.conf
   become: true
@@ -60,9 +60,9 @@
     path: "{{ tripleo_undercloud_conf_file }}"
     section: DEFAULT
     key: overcloud_domain_name
-    ignore_missing_file: False
+    ignore_missing_file: false
   register: undercloud_overcloud_domain
-  check_mode: no
+  check_mode: false
 - name: Set facts undercloud.conf domain is not configured correctly
   set_fact:
@@ -96,9 +96,9 @@
     path: "{{ tripleo_undercloud_conf_file }}"
     section: DEFAULT
     key: enable_novajoin
-    ignore_missing_file: False
+    ignore_missing_file: false
   register: undercloud_enable_novajoin
-  check_mode: no
+  check_mode: false
 - name: Set facts undercloud.conf enable novajoin is disabled
   set_fact:


@@ -1,6 +1,6 @@
 ---
-debug_check: True
+debug_check: true
 services_conf_files:
   - /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf
   - /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf


@@ -17,7 +17,7 @@
 - name: Converge
   hosts: all
-  gather_facts: no
+  gather_facts: false
   vars:
     services_conf_files:
@@ -29,13 +29,13 @@
         dest: /tmp/debug_true_1.conf
         content: |
           [DEFAULT]
-          debug: True
+          debug: true
     - name: Checking good value
       include_role:
         name: undercloud-debug
       vars:
-        debug_check: False
+        debug_check: false
     - name: Should fail due to bad value
       block:


@@ -1,11 +1,11 @@
 ---
 - name: Check the services for debug flag
-  become: True
+  become: true
   validations_read_ini:
     path: "{{ item }}"
     section: DEFAULT
     key: debug
-    ignore_missing_file: True
+    ignore_missing_file: true
   register: config_result
   with_items: "{{ services_conf_files }}"
   failed_when: "debug_check|bool == config_result.value|bool"


@@ -1,9 +1,8 @@
 ---
 volumes:
   - {mount: /var/lib/docker, min_size: 10}
   - {mount: /var/lib/config-data, min_size: 3}
   - {mount: /var/log, min_size: 3}
   - {mount: /usr, min_size: 5}
   - {mount: /var, min_size: 20}
   - {mount: /, min_size: 25}
-


@@ -25,13 +25,13 @@
   shell: df -B1 {{ item.mount }} --output=avail | sed 1d
   register: volume_size
   with_items: "{{ existing_volumes }}"
-  changed_when: False
+  changed_when: false
 - name: Fail if any of the volumes are too small
   fail:
     msg: >
       Minimum free space required for {{ item.item.mount }}: {{ item.item.min_size }}G
       - current free space: {{ (item.stdout|int / const_bytes_in_gb|int) |round(1) }}G
   when: >
     item.stdout|int / const_bytes_in_gb|int < item.item.min_size|int
   with_items: "{{ volume_size.results }}"


@@ -8,7 +8,7 @@ platforms:
   - name: centos7
     hostname: centos7
     image: centos:7
-    override_command: True
+    override_command: true
     command: python -m SimpleHTTPServer 8787
     pkg_extras: python-setuptools python-enum34 python-netaddr epel-release ruby PyYAML
     easy_install:
@@ -20,7 +20,7 @@ platforms:
   - name: fedora28
     hostname: fedora28
     image: fedora:28
-    override_command: True
+    override_command: true
     command: python3 -m http.server 8787
     pkg_extras: python*-setuptools python*-enum python*-netaddr ruby PyYAML
     environment:


@@ -17,7 +17,7 @@
 - name: Converge
   hosts: all
-  gather_facts: no
+  gather_facts: false
   tasks:
     - name: working detection


@@ -17,7 +17,7 @@
 - name: Prepare
   hosts: all
-  gather_facts: no
+  gather_facts: false
   tasks:
     - name: install hiera


@@ -18,7 +18,7 @@
     set -o pipefail
     {{ container_cli.value|default('podman', true) }} exec heat_api_cron crontab -l -u heat |grep -v '^#'
   register: cron_result
-  changed_when: False
+  changed_when: false
 - name: Check heat crontab
   fail:


@@ -6,5 +6,5 @@ metadata:
     heat database can grow very large. This validation checks that
     the purge_deleted crontab has been set up.
   groups:
-  - pre-upgrade
-  - pre-deployment
+    - pre-upgrade
+    - pre-deployment

Some files were not shown because too many files have changed in this diff.