tripleo-heat-templates/ci/common/vbmc_setup.yaml
Sandeep Yadav 84e016edb3 Fix vbmc_setup.yaml for c8 standalone
This patch fixes vbmc_setup.yaml for the c8 scenario012-standalone
job.

vbmc_setup.yaml was originally written for the c7 multinode job[0] to
install virtual BMC and a libvirt domain on the controller for ironic
to manage. In scenario012 we are trying to test something close to
ironic in the overcloud (Blueprint[1]).
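
For context, VirtualBMC exposes each libvirt domain as an IPMI
endpoint that ironic can then drive like a real BMC. As an
illustrative sketch only (the host is a placeholder, and
admin/password are vbmc's defaults, which the script below does not
override), the power state of such a node can be queried with:

    ipmitool -I lanplus -H <controller-ip> -p 1161 -U admin -P password power status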

In the past, we moved the scenario012 multinode job to a standalone
job and reused vbmc_setup.yaml.

The shell script in vbmc_setup.yaml needs the following modifications
now that it is a C8 standalone job:

* Added 'set -e' so that the script exits immediately if a command
exits with a non-zero status, turning such failures into deployment
failures. Currently the deployment passes even when some commands in
the bash script fail; 'set -e' corrects this behavior (see the sketch
after this list).
* Replaced yum with dnf, as yum is not present in CentOS 8.
* No need to override the resolv.conf entry, as the standalone job
already configures the necessary nameserver entries.
* No need to install additional repos, as the standalone job already
configures the necessary repos.
* When vbmc_setup.yaml was introduced, the intention was to do non-kvm
emulation[2], which is why qemu-system-x86 and qemu-kvm-ev were pulled
from EPEL. For c8 these packages are no longer available in the
repos[3], including EPEL. We can use the qemu-kvm package instead and
no longer need the EPEL repos.
* The create-node.sh command option to use qemu-system has to be
removed, as the qemu-system-x86 package is missing and we are using
qemu-kvm instead.
* The temporary workaround[4] to upgrade ansible in the neutron
container can be removed, as the ansible version in the container is
now v2.5.8+.
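
To illustrate the 'set -e' point, a minimal sketch (not part of the
template):

    #!/bin/bash
    set -e
    false               # any command with a non-zero exit status...
    echo "not reached"  # ...aborts the script here instead of being ignored

Without 'set -e', the same script would print "not reached" and exit
0, which is exactly how failures were being masked in the CI job.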

A test patch for tripleo-ci-centos-8-scenario012-standalone is here[5].

[0] https://review.opendev.org/#/c/485261/
[1] https://blueprints.launchpad.net/tripleo/+spec/ironic-overcloud-ci
[2] https://review.opendev.org/#/c/485261/29/ci/common/vbmc_setup.yaml
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1715806
[4] https://review.opendev.org/#/c/579603/
[5] https://review.opendev.org/#/c/724119/

Change-Id: I9e9d63a7ef2ef538f3c072e3f4a96ec25d7dc5f7
Closes-Bug: #1875681
2020-05-05 13:13:43 -04:00


heat_template_version: rocky

parameters:
  servers:
    type: json
  EndpointMap:
    default: {}
    description: Mapping of service endpoint -> protocol. Typically set
                 via parameter_defaults in the resource registry.
    type: json

resources:
  ExtraConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        set -e
        dnf install -y python3-virtualbmc libvirt-client libvirt-daemon libvirt qemu-kvm
        systemctl start libvirtd virtualbmc
        git clone https://opendev.org/openstack/ironic /tmp/ironic
        mkdir -p /var/log/bmlogs
        chmod 777 /var/log/bmlogs
        # Create a ssh keypair and place the private key somewhere ansible inside the
        # neutron_api container can read it.
        ssh-keygen -P "" -f /etc/puppet/ci-key
        chmod 644 /etc/puppet/ci-key
        cat /etc/puppet/ci-key.pub >> /root/.ssh/authorized_keys
        # Make sure the libvirt storage pool exists and is running before
        # creating the test VMs.
        LIBVIRT_STORAGE_POOL=${LIBVIRT_STORAGE_POOL:-"default"}
        LIBVIRT_STORAGE_POOL_PATH=${LIBVIRT_STORAGE_POOL_PATH:-/var/lib/libvirt/images}
        mkdir -p $LIBVIRT_STORAGE_POOL_PATH
        if ! virsh pool-list --all | grep -q $LIBVIRT_STORAGE_POOL; then
            virsh pool-define-as --name $LIBVIRT_STORAGE_POOL dir --target $LIBVIRT_STORAGE_POOL_PATH
            virsh pool-autostart $LIBVIRT_STORAGE_POOL
            virsh pool-start $LIBVIRT_STORAGE_POOL
        fi
        pool_state=$(virsh pool-info $LIBVIRT_STORAGE_POOL | grep State | awk '{ print $2 }')
        if [ "$pool_state" != "running" ] ; then
            virsh pool-start $LIBVIRT_STORAGE_POOL
        fi
        # Create two VMs for ironic to manage, using the helper script
        # from the ironic repo.
        /tmp/ironic/devstack/tools/ironic/scripts/create-node.sh -n node1 -c 1 -m 3072 -d 10 -b br-ex -p 1161 -M 1350 -f qcow2 -a x86_64 -E qemu -l /var/log/bmlogs -A 66:0d:1d:d8:0b:11 > /var/log/bmlogs/create-node-1.log 2>&1 < /dev/null
        /tmp/ironic/devstack/tools/ironic/scripts/create-node.sh -n node2 -c 1 -m 3072 -d 10 -b br-ex -p 1162 -M 1350 -f qcow2 -a x86_64 -E qemu -l /var/log/bmlogs -A 66:0d:1d:d8:0b:22 > /var/log/bmlogs/create-node-2.log 2>&1 < /dev/null
        # Register each VM with VirtualBMC on its own IPMI port.
        vbmc --no-daemon add node1 --port 1161
        vbmc --no-daemon start node1
        vbmc --no-daemon add node2 --port 1162
        vbmc --no-daemon start node2
        disown -a

  ExtraDeployments:
    type: OS::Heat::SoftwareDeploymentGroup
    properties:
      servers: {get_param: servers}
      config: {get_resource: ExtraConfig}
      actions: ['CREATE'] # Only do this on CREATE
      name: VirtNodeExtraConfig
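
As a usage note (not part of the template), after the CREATE-time
deployment has run, the setup can be sanity-checked on the host;
again, admin/password are vbmc's defaults:

    virsh list --all   # should show the node1 and node2 domains
    vbmc list          # both BMCs should be listed with status 'running'
    ipmitool -I lanplus -H 127.0.0.1 -p 1161 -U admin -P password power status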