The current AIAP implementation supports multiple VMs for the
target/worker via a "count" attribute, but corresponding new MAC
addresses are not supported: the MAC address is fixed for one target
node and one worker node. This patch set changes the Ansible playbook
to handle MAC addresses as a list.
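The list-handling idea can be sketched as follows. This is an illustrative shell helper, not the actual Ansible change; the `52:54:00` (QEMU/KVM locally administered) prefix and the zero-padded index scheme are assumptions:

```shell
# Derive one MAC address per VM index (1..255), so a "count" of N VMs
# can be given a list of N distinct MACs instead of a single fixed one.
mac_for_index() {
  # 52:54:00 is the QEMU/KVM prefix; the last octet encodes the VM index.
  printf '52:54:00:00:00:%02x\n' "$1"
}
```

For example, a count of 3 would yield the list `52:54:00:00:00:01` through `52:54:00:00:00:03`.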
Closes: #640
Relates-To: #640
Change-Id: I14b6e2e4d9e81a4f1cf114c26f94328a1461134f
This adds the `status-checker` container, which aggregates the statuses
of the task containers. This is useful for quickly checking whether AIAP
is in a failed or successful state, allowing for early exit during
gating or testing. This also prevents the containers from stopping for
any reason, allowing for easy debugging.
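A minimal sketch of the aggregation idea, assuming one status marker file per task container; the container names and `STATUS_DIR` layout are illustrative, not AIAP's actual paths:

```shell
# Aggregate per-task status files into a single overall state:
# "failed" if any task failed, "running" if any is still pending,
# "success" only once every task has reported success.
aggregate_status() {
  dir="${STATUS_DIR:-/tmp/status}"
  for task in artifact-setup infra-builder runner; do
    status=$(cat "$dir/$task" 2>/dev/null || echo unknown)
    if [ "$status" = failure ]; then
      echo failed
      return 1
    elif [ "$status" != success ]; then
      echo running
      return 0
    fi
  done
  echo success
}
```

A gate script can poll this and exit early as soon as it sees "failed".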
Change-Id: I1571d006fb3c856e4d2bedee0befdccae6082a66
* Removed the hardcoded wait condition (30s delay) for the docker and
libvirtd services.
* Added logic to wait until docker is available for artifact-setup.
* Added logic to wait until libvirtd is available for infra-builder.
* Added "imagePullPolicy: IfNotPresent" to make use of local changes.
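The wait logic can be sketched as a generic poll loop; the retry limit, interval, and probe commands below are assumptions, not AIAP's actual values:

```shell
# Poll until the given command succeeds, instead of a fixed 30s sleep.
# Fails after MAX_TRIES attempts (default 60, INTERVAL seconds apart).
wait_for() {
  tries=0
  until "$@" >/dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge "${MAX_TRIES:-60}" ]; then
      return 1
    fi
    sleep "${INTERVAL:-2}"
  done
  return 0
}

# e.g. wait_for docker info     # artifact-setup waits for the docker daemon
#      wait_for virsh version   # infra-builder waits for libvirtd
```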
Closes: #629
Change-Id: I8956cf5927be6142fcc0a3b1a112ea82511fd9bb
Currently the task containers will reach a conclusive state (success or
failure), and will promptly stop running. This creates the following
problems:
1. In the case of failure, a user doesn't have the ability to exec into
the container for debugging purposes.
2. In the case of success, the readiness probe for each task container
will begin to fail. This happens because readiness is defined by
the existence of a file; if the container stops running, the file
no longer exists.
This change provides a solution to the 2nd problem, keeping the
container alive in case of success.
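A minimal sketch of the keep-alive pattern, assuming a readiness probe that tests for a marker file; the `SUCCESS_FILE` path is illustrative:

```shell
# On success, create the marker file the readiness probe looks for
# (e.g. an exec probe running: test -f "$SUCCESS_FILE"). The container
# must then keep running, or the file disappears with it.
finish_task() {
  touch "${SUCCESS_FILE:-/tmp/task-done}"
  # in the real container this would be followed by something like:
  #   sleep infinity
}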
Change-Id: Iae432caada3d2a338493b1534da72957808185d2
Currently, when a container in AIAP fails, the container continues to
run in the background. This change causes a failed container to be
marked as failed, so that the deployment will fail faster.
Closes: #626
Change-Id: I34de366390b6ddc0ef434f7e642e73797a790edf
This parameterizes the base image location used for the libvirt,
artifact-setup, infra-builder, and runner images.
Closes: #628
Change-Id: Idca33b5571017700fead0eb758af1900b4a4fcf4
The metal3-io sushy-tools container no longer includes the ping utility,
which was being used to test that the VMs were available. This change
switches the check to use the shared drive instead, removing the need
for ping.
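A minimal sketch of the shared-drive check, assuming each VM writes a per-node marker file to the shared mount once it has booted; the mount point and file layout are assumptions:

```shell
# Instead of pinging a VM, check for its marker file on the shared drive.
vm_available() {
  [ -f "${SHARED_DIR:-/shared}/vm-status/$1" ]
}

# e.g. wait in a loop: until vm_available node01; do sleep 5; done
```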
Change-Id: I0f697d1a7afac84f04fd27dc9b808857c9f64a7c
This increases the size of the nodes used for AIAP so that they match
those in our gating process.
Closes: #581
Change-Id: I978db75e252335842dd4311db0416617071617a4
Introduces Airship in a Pod (AIAP). This includes:
* A base image which sets up common requirements
* An image for the libvirt service
* An image for building a specified instance of airshipctl
* An image for initializing the various libvirt infrastructure required
for a deployment
* An image which runs the deployment scripts
Closes: #313
Change-Id: Ib1114350190b0fe0c0761ff67b38b3eca783161a