Splitting image-builder off from airship/images

This moves all image-builder code from the airship/images repository
into this repository, so that resources aren't wasted running these
long-running tests against unrelated patch sets.

Change-Id: I478a817b694b88cf0900c21726ee29b286ec81a3
Danny Massa 2021-06-15 17:30:42 +00:00
parent 2a9c0f9031
commit 47eac28564
130 changed files with 3729 additions and 67 deletions

14
.gitignore vendored Normal file

@ -0,0 +1,14 @@
*alpine-minirootfs.tar.gz
*build/
# image-builder artifacts to ignore
**.iso
**.qcow2
**.md5sum
image-builder/assets/playbooks/roles/multistrap/vars/main.yaml
image-builder/assets/playbooks/roles/livecdcontent/vars/main.yaml
image-builder/assets/playbooks/roles/osconfig/vars/main.yaml
# IDE Directories
.idea/
vscode/

201
LICENSE Normal file

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

8
Makefile Normal file

@ -0,0 +1,8 @@
TOPTGTS := images lint all docs run_images tests clean
IMAGES := $(wildcard */Makefile)
$(TOPTGTS): $(IMAGES)
$(IMAGES):
$(MAKE) -C $(@D) $(MAKECMDGOALS)
.PHONY: $(TOPTGTS) $(IMAGES)

104
README.md Normal file

@ -0,0 +1,104 @@
# Overview
Image Builder is a utility used to produce two types of artifacts needed for an
airshipctl deployment: an iso (for the ephemeral node), and qcow2s (used by
metal3io to deploy all other nodes). This is accomplished through several stages
as follows:
1. Build docker image containing the base operating system and basic configuration management
1. Run configuration management again with customized user-supplied inputs in container runtime
- A more accessible layer for user customization that doesn't require rebuilding the container
- Users may make their own decisions as to whether making a customized docker image build is worthwhile
1. Container runtime produces a final image artifact (ISO or QCOW2)
# Airship Image Variations
The ISO is built using the network information defined by the ephemeral node in the supplied airship manifests. Therefore, each airship deployment should have its own ISO created.
The QCOW2s have such networking information driven by cloud-init during metal3io deployment, and therefore it is not contained in the image itself. These QCOW2s would therefore not necessarily be generated for each unique airship deployment, but rather for each unique host profile.
Note that we will refer to the QCOW2s as the “base OS” or “target OS”, rather than “baremetal OS”, since the same process can be used to build QCOW2s for baremetal and for a virtualized environment.
# Building the image-builder container locally
If you do not wish to use the image-builder container published on quay.io, you may build your own locally as follows:
```
sudo apt -y install sudo git make
git clone https://review.opendev.org/airship/images
cd images/image-builder
sudo make DOCKER_REGISTRY=mylocalreg build
```
By default, both the ISO and QCOW share the same base container image. Therefore in most cases it should be sufficient to generate a single container that's reused for all image types and further differentiated in the container runtime phase described in the next section.
# Executing the image-builder container
The following makefile target may be used to execute the image-builder container in order to produce an ISO or QCOW2 output.
```
sudo apt -y install sudo git make
git clone https://review.opendev.org/airship/images
cd images/image-builder
sudo make IMAGE_TYPE=qcow cut_image
```
In the above example, set ``IMAGE_TYPE`` to ``iso`` or ``qcow`` as appropriate. This will be passed into the container to instruct it which type of image to build. Also include ``DOCKER_REGISTRY`` override if you wish to use a local docker image as described in the previous section.
This makefile target uses config files provided in the `images/image-builder/config` directory. **Modify these files as needed in order to customize your iso and qcow generation.** This provides a good place for adding and testing customizations to build parameters, without needing to modify the ansible playbooks themselves.
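As a shape reference, an override in one of those config files might look like the following. The key names here are purely hypothetical placeholders; check the corresponding role's `vars/main.yaml` for the variables it actually honors.

```yaml
# Hypothetical example only -- these keys are placeholders, not the
# role's real variables. At build time this file is copied over
# assets/playbooks/roles/osconfig/vars/main.yaml by the Makefile.
kernel_boot_args: "console=ttyS0 hugepagesz=1G"
ntp_servers:
  - 0.pool.ntp.org
  - 1.pool.ntp.org
```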
# Building behind a proxy
Example building docker container locally, plus ISO and qcow behind a proxy:
```
sudo apt -y install sudo git make
git clone https://review.opendev.org/airship/images
cd images/image-builder
# Create container
sudo make DOCKER_REGISTRY=mylocalreg PROXY=http://proxy.example.com:8080 build
# Create ephemeral ISO
sudo make DOCKER_REGISTRY=mylocalreg PROXY=http://proxy.example.com:8080 IMAGE_TYPE=iso cut_image
# Create qcow
sudo make DOCKER_REGISTRY=mylocalreg PROXY=http://proxy.example.com:8080 IMAGE_TYPE=qcow cut_image
```
# Useful testing flags
The `SKIP_MULTI_ROLE` build flag is useful if you would like to test local updates to the `osconfig` playbook, or updates to custom configs for this playbook. This saves time since you do not need to rebuild the target filesystem. For example:
```
sudo make SKIP_MULTI_ROLE=true build
```
Similarly, the osconfig and livecdcontent roles can be skipped using `SKIP_OSCONFIG_ROLE` and `SKIP_LIVECDCONTENT_ROLE` respectively. `SKIP_LIVECDCONTENT_ROLE` may be useful in combination with `SKIP_MULTI_ROLE` if you want to test out playbook changes to `osconfig` (note, however, that those changes won't appear in the final bootable ISO unless you leave `SKIP_LIVECDCONTENT_ROLE` unset).
# Division of Configuration Management responsibilities
Configuration management of the base OS is divided into several realms, each with their own focus:
1. Image-builder configuration data, i.e. data baked into the QCOW2 base image. The following should be used to drive this phase:
1. The storage and compute elements of NCv1 host and hardware profiles (kernel boot params, cpu pinning, hugepage settings, disk partitioning, etc), and
1. the NCv1 divingbell apparmor, security limits, file/dir permissions, sysctl, and
1. custom-built kernel modules (e.g. dkms based installations, i40e driver, etc)
1. Necessary components for the nodes bootstrap to k8s cluster, e.g. k8s, CNI, containerd, etc
1. any other operating system setting which would require a reboot or cannot otherwise be accommodated in #2 below
1. cloud-init driven configuration for site-specific data. Examples include:
1. Hostnames, domain names, FQDNs, IP addresses, etc
1. Network configuration data (bonding, MTU settings, VLANs, DNS, NTP, ethtool settings, etc)
1. Certificates, SSH keys, user accounts and/or passwords, etc.
1. HCA (host-config agent) for limited day-2 base-OS management
1. Cron jobs, such as the Roomba cleanup script used in NCv1, or SACT/gstools scripts
1. Configuration-management items that may overlap with #1 - #2, but are needed for zero-disruption day-2 management (kept to the essential minimum to reduce design and testing complexity and overhead).
1. Eventually, HCA may be phased out, and its use reduced or eliminated over time, if #1 and #2 become streamlined enough and their impact minimized to the degree that SLAs can be met.
# Supported OSes
- Ubuntu 20.04 LTS
# FAQ
Q: Why is the build target slow?
A: There is a `mksquashfs` command which runs as part of the build target, and performs slowly if your build environment lacks certain CPU flags which accelerate compression. Use "host-passthrough" or equivalent in your build environment to pass through these CPU flags. In libvirt domain XML, you would change your `cpu` mode element as follows: `<cpu mode='host-passthrough' check='none'/>`
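A quick way to check whether your build VM received such flags is to inspect `/proc/cpuinfo`. The helper below is a small sketch; the specific flag names it looks for are an assumption, not taken from this repo:

```shell
# Sketch: decide whether a CPU flag list includes extensions that tend
# to speed up squashfs compression. The flag names checked here are an
# assumption, not taken from the image-builder repo.
has_accel_flags() {
  case " $1 " in
    *" ssse3 "*|*" sse4_2 "*|*" avx2 "*) return 0 ;;
    *) return 1 ;;
  esac
}

# On a Linux build VM (guarded so this is a no-op elsewhere):
if [ -r /proc/cpuinfo ]; then
  flags="$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)"
  if has_accel_flags "$flags"; then
    echo "accelerated flags present; mksquashfs should run quickly"
  else
    echo "consider cpu mode host-passthrough for faster mksquashfs"
  fi
fi
```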


@ -0,0 +1,11 @@
ARG FROM=alpine
FROM ${FROM}
LABEL org.opencontainers.image.authors='airship-discuss@lists.airshipit.org, irc://#airshipit@freenode' \
org.opencontainers.image.url='https://airshipit.org' \
org.opencontainers.image.documentation='https://airship-images.readthedocs.org' \
org.opencontainers.image.source='https://opendev.org/airship/images' \
org.opencontainers.image.vendor='The Airship Authors' \
org.opencontainers.image.licenses='Apache-2.0'
COPY *.qcow2 *.qcow2.md5sum /qcows/


@ -0,0 +1,66 @@
FROM ubuntu:focal as base-image
LABEL org.opencontainers.image.authors='airship-discuss@lists.airshipit.org, irc://#airshipit@freenode' \
org.opencontainers.image.url='https://airshipit.org' \
org.opencontainers.image.documentation='https://airship-images.readthedocs.org' \
org.opencontainers.image.source='https://opendev.org/airship/images' \
org.opencontainers.image.vendor='The Airship Authors' \
org.opencontainers.image.licenses='Apache-2.0'
SHELL ["bash", "-exc"]
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update ;\
apt-get install -y --no-install-recommends \
ca-certificates \
multistrap \
equivs \
build-essential \
gnupg2 \
xorriso \
python3-minimal \
python3-yaml \
python3-pip \
python3-setuptools \
python3-apt \
grub-pc-bin \
coreutils \
curl \
qemu-utils \
parted \
squashfs-tools \
extlinux \
syslinux-common \
xfsprogs \
vim \
kmod \
efivar \
rsync \
dosfstools ;\
pip3 install --upgrade pip ;\
pip3 install --upgrade wheel ;\
pip3 install --upgrade ansible ;\
rm -rf /var/lib/apt/lists/*
RUN curl -L https://github.com/mikefarah/yq/releases/download/2.4.0/yq_linux_amd64 -o /bin/yq \
&& chmod +x /bin/yq
COPY assets/playbooks/inventory.yaml /opt/assets/playbooks/inventory.yaml
COPY assets/playbooks/base-chroot.yaml /opt/assets/playbooks/base-chroot.yaml
COPY assets/playbooks/roles/multistrap /opt/assets/playbooks/roles/multistrap
COPY assets/playbooks/base-osconfig.yaml /opt/assets/playbooks/base-osconfig.yaml
COPY assets/playbooks/roles/osconfig /opt/assets/playbooks/roles/osconfig
COPY assets/playbooks/base-livecdcontent.yaml /opt/assets/playbooks/base-livecdcontent.yaml
COPY assets/playbooks/roles/livecdcontent /opt/assets/playbooks/roles/livecdcontent
COPY assets/playbooks/iso.yaml /opt/assets/playbooks/iso.yaml
COPY assets/playbooks/roles/iso /opt/assets/playbooks/roles/iso
COPY assets/playbooks/qcow.yaml /opt/assets/playbooks/qcow.yaml
COPY assets/playbooks/roles/qcow /opt/assets/playbooks/roles/qcow
COPY assets/playbooks/build /build
COPY assets/*.sh /usr/bin/local/
COPY assets/*.json /usr/bin/local/
CMD /usr/bin/local/entrypoint.sh

172
image-builder/Makefile Normal file

@ -0,0 +1,172 @@
# Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
SHELL := /bin/bash
COMMIT ?= $(shell git rev-parse HEAD)
LABEL ?= org.airshipit.build=community
IMAGE_NAME ?= image-builder
DOCKER_REGISTRY ?= quay.io
IMAGE_PREFIX ?= airshipit
IMAGE_TAG ?= latest
IMAGE_TYPE ?= iso # iso | qcow
PUSH_IMAGE ?= false
DISTRO ?= ubuntu_focal
WORKDIR ?= ./manifests
QCOW_BUNDLE ?= ${WORKDIR}/qcow-bundle
# Specify if you want to only build a certain subset of QCOW bundles
QCOW_BUNDLE_DIRS ?=
# Set to true to skip multistrap.sh script. Useful for testing
SKIP_MULTISTRAP ?=
# Set to true to skip multistrap playbook. Useful for testing
SKIP_MULTI_ROLE ?=
# Set to true to skip osconfig playbook. Useful for testing
SKIP_OSCONFIG_ROLE ?=
# Set to true to skip livecdcontent playbook. Useful for testing
SKIP_LIVECDCONTENT_ROLE ?=
IMAGE ?= ${DOCKER_REGISTRY}/${IMAGE_PREFIX}/${IMAGE_NAME}:${IMAGE_TAG}-${DISTRO}
PROXY ?=
NO_PROXY ?= localhost,127.0.0.1
.PHONY: help build images cut_image package_qcow run clean
.ONESHELL:
help: ## This help.
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z0-9_-]+:.*?##/ { printf " \033[36m%-15s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
# Make target name that zuul expects for each project in this repo
images: build generate_iso package_qcow clean
build:
set -ex
# Apply any user-defined rootfs overrides to playbooks
cp $(WORKDIR)/rootfs/multistrap-vars.yaml assets/playbooks/roles/multistrap/vars/main.yaml
cp $(WORKDIR)/rootfs/osconfig-vars.yaml assets/playbooks/roles/osconfig/vars/main.yaml
cp $(WORKDIR)/rootfs/livecdcontent-vars.yaml assets/playbooks/roles/livecdcontent/vars/main.yaml
ifneq ($(PROXY), )
sudo -E ./tools/docker_proxy.sh $(PROXY) $(NO_PROXY)
export http_proxy=$(PROXY)
export https_proxy=$(PROXY)
export no_proxy=$(NO_PROXY)
export HTTP_PROXY=$(PROXY)
export HTTPS_PROXY=$(PROXY)
export NO_PROXY=$(NO_PROXY)
ifneq ($(SKIP_MULTISTRAP), true)
sudo -E ./tools/multistrap.sh $(WORKDIR)
endif
sudo -E DOCKER_BUILDKIT=1 docker -D -l debug build --tag $(IMAGE) -f Dockerfile.$(DISTRO) . \
--label $(LABEL) \
--label "org.opencontainers.image.revision=$(COMMIT)" \
--label "org.opencontainers.image.created=\
$(shell date --rfc-3339=seconds --utc)" \
--label "org.opencontainers.image.title=$(IMAGE_NAME)" \
--build-arg http_proxy=$(PROXY) \
--build-arg https_proxy=$(PROXY) \
--build-arg HTTP_PROXY=$(PROXY) \
--build-arg HTTPS_PROXY=$(PROXY) \
--build-arg no_proxy=$(NO_PROXY) \
--build-arg NO_PROXY=$(NO_PROXY)
else
ifneq ($(SKIP_MULTISTRAP), true)
sudo -E ./tools/multistrap.sh $(WORKDIR)
endif
sudo -E DOCKER_BUILDKIT=1 docker -D -l debug build --tag $(IMAGE) -f Dockerfile.$(DISTRO) . \
--label $(LABEL) \
--label "org.opencontainers.image.revision=$(COMMIT)" \
--label "org.opencontainers.image.created=\
$(shell date --rfc-3339=seconds --utc)" \
--label "org.opencontainers.image.title=$(IMAGE_NAME)"
endif
imgId=`sudo docker images | grep 'image-builder ' | awk '{print $$3}'`
sudo -E DOCKER_BUILDKIT=1 docker run $$imgId ls -ltra /build/usr/bin/sudo > /tmp/sticky_result
sudo grep '^-rws' /tmp/sticky_result >& /dev/null || \
(echo Could not find sticky bit set on target image sudo binary. Are you using buildkit? && \
sudo cat /tmp/sticky_result && exit 1)
ifeq ($(PUSH_IMAGE), true)
sudo -E DOCKER_BUILDKIT=1 docker push $(IMAGE)
endif
cut_image:
set -ex
ifneq ($(PROXY), )
sudo -E ./tools/docker_proxy.sh $(PROXY) $(NO_PROXY)
export http_proxy=$(PROXY)
export https_proxy=$(PROXY)
export no_proxy=$(NO_PROXY)
export HTTP_PROXY=$(PROXY)
export HTTPS_PROXY=$(PROXY)
export NO_PROXY=$(NO_PROXY)
endif
ifeq ($(IMAGE_TYPE), iso)
sudo -E tools/cut_image.sh $(IMAGE_TYPE) $(WORKDIR)/iso $(IMAGE) "$(PROXY)" "$(NO_PROXY)"
else
# Assemble all images based on configs defined in each subdirectory
# Trailing / allows proper function with symlinks
iterDirs="$$(find $(QCOW_BUNDLE)/ -maxdepth 1 -mindepth 1 -type d -exec basename {} \;)"
if [[ -z $$iterDirs ]]; then
echo "Could not find any qcow images defined for bundle - exiting."
exit 1
fi
for subdir in $$iterDirs; do
# QCOW configs
export osconfig_params="$(QCOW_BUNDLE)/$$subdir/osconfig-vars.yaml"
export qcow_params="$(QCOW_BUNDLE)/$$subdir/qcow-vars.yaml"
# Image name
export img_name=$$subdir.qcow2
sudo -E tools/cut_image.sh $(IMAGE_TYPE) $(QCOW_BUNDLE) $(IMAGE) "$(PROXY)" "$(NO_PROXY)"
done
endif
generate_iso:
set -ex
export IMAGE_TYPE=iso
sudo -E make cut_image
package_qcow:
set -ex
export IMAGE_TYPE=qcow
ifneq ($(QCOW_BUNDLE_DIRS), )
bundleDirs="$(QCOW_BUNDLE_DIRS)"
else
# Assemble all images based on configs defined in each $(IMAGE_TYPE)* subdirectory
# Trailing / allows proper function with symlinks
bundleDirs="$$(find $(WORKDIR)/ -maxdepth 1 -mindepth 1 -name "qcow-bundle*" -type d -exec basename {} \;)"
endif
if [[ -z $$bundleDirs ]]; then
echo "Could not find any qcow bundle directories - exiting."
exit 1
fi
for bundledir in $$bundleDirs; do
export QCOW_BUNDLE="$(WORKDIR)/$$bundledir"
sudo -E make cut_image
sudo -E DOCKER_BUILDKIT=1 docker -D -l debug build --tag $(DOCKER_REGISTRY)/$(IMAGE_PREFIX)/$$bundledir:$(IMAGE_TAG)-$(DISTRO) -f Dockerfile-qcow.$(DISTRO) $(WORKDIR)/$$bundledir \
--label $(LABEL) \
--label "org.opencontainers.image.revision=$(COMMIT)" \
--label "org.opencontainers.image.created=\
$(shell date --rfc-3339=seconds --utc)" \
--label "org.opencontainers.image.title=$(DOCKER_REGISTRY)/$(IMAGE_PREFIX)/$$bundledir:$(IMAGE_TAG)-$(DISTRO)"
ifeq ($(PUSH_IMAGE), true)
sudo -E DOCKER_BUILDKIT=1 docker push $(DOCKER_REGISTRY)/$(IMAGE_PREFIX)/$$bundledir:$(IMAGE_TAG)-$(DISTRO)
endif
done
tests:
true
clean:
set -ex
sudo -E tools/multistrap.sh clean
find $(WORKDIR) -name "*.iso" -exec rm {} \; >& /dev/null
find $(WORKDIR) -name "*.qcow2" -exec rm {} \; >& /dev/null
find $(WORKDIR) -name "*.md5sum" -exec rm {} \; >& /dev/null


@ -0,0 +1,67 @@
#!/bin/bash
set -e
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
DIR="$( cd -P "$( dirname "$SOURCE" )" >/dev/null 2>&1 && pwd )"
SOURCE="$(readlink "$SOURCE")"
[[ $SOURCE != /* ]] && SOURCE="$DIR/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
done
BASEDIR="$( cd -P "$( dirname "$SOURCE" )" >/dev/null 2>&1 && pwd )"
cd "$BASEDIR"
BASEDIR="$(dirname "$(realpath "$0")")"
if [ "${VERSION}" = "v2" ]; then
source "${BASEDIR}/functions_v2.sh"
else
source "${BASEDIR}/functions.sh"
fi
export http_proxy
export https_proxy
export HTTP_PROXY
export HTTPS_PROXY
export no_proxy
export NO_PROXY
if [ ! -e build ]; then
ln -s /chroot build
fi
# Instruct ansible to output the image artifact to the container's host mount
extra_vars="$extra_vars img_output_dir=${VOLUME}"
echo "Begin Ansible plays"
if [[ "${IMAGE_TYPE}" == "iso" ]]; then
_process_input_data_set_vars_iso
# Instruct ansible how to name image output artifact
extra_vars="$extra_vars img_name=${IMG_NAME}"
echo "Executing Step 1"
ansible-playbook -i /opt/assets/playbooks/inventory.yaml /opt/assets/playbooks/iso.yaml --extra-vars "$extra_vars" -vv
elif [[ "${IMAGE_TYPE}" == "qcow" ]]; then
_process_input_data_set_vars_qcow
_process_input_data_set_vars_osconfig
# Instruct ansible how to name image output artifact
extra_vars="$extra_vars img_name=${IMG_NAME} run_context=qcow"
echo "Executing Step 1: Create qcow2 partitions and filesystems"
ansible-playbook -i /opt/assets/playbooks/inventory.yaml /opt/assets/playbooks/qcow.yaml --extra-vars "$extra_vars" --tags "prep_img" -vv
echo "Executing Step 2: Applying changes from base-osconfig playbook"
ansible-playbook -i /opt/assets/playbooks/inventory.yaml /opt/assets/playbooks/base-osconfig.yaml --extra-vars "$extra_vars" -vv
echo "Executing Step 3: Close image and write qcow2"
ansible-playbook -i /opt/assets/playbooks/inventory.yaml /opt/assets/playbooks/qcow.yaml --extra-vars "$extra_vars" --tags "close_img" -vv
else
echo "\${IMAGE_TYPE} value '${IMAGE_TYPE}' does not match an expected value: [ 'iso', 'qcow' ]"
exit 1
fi
# Write md5sum
_make_metadata "${IMG_NAME}"
echo "All Ansible plays completed successfully"
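The dispatch logic above reduces to a case statement on `IMAGE_TYPE`. Here is a standalone sketch, with the ansible-playbook invocations replaced by echo placeholders:

```shell
# Standalone sketch of the entrypoint's IMAGE_TYPE dispatch; the real
# script runs ansible-playbook commands where the echoes are.
dispatch_image_type() {
  case "$1" in
    iso)
      echo "run iso.yaml"   # single play builds the ephemeral ISO
      ;;
    qcow)
      echo "run qcow.yaml (prep_img), base-osconfig.yaml, qcow.yaml (close_img)"
      ;;
    *)
      echo "IMAGE_TYPE '$1' does not match an expected value: [ 'iso', 'qcow' ]" >&2
      return 1
      ;;
  esac
}

dispatch_image_type iso
```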

153
image-builder/assets/functions.sh Executable file

@ -0,0 +1,153 @@
#!/bin/bash
# NOTE: These functions are deprecated. They are only left here
# for backwards compatibility until airshipctl is migrated
# away from using them.
set -x
# Defaults
OUTPUT_METADATA_FILE_NAME_DEFAULT='output-metadata.yaml'
ISO_NAME_DEFAULT='ephemeral.iso'
# Common
echo "${BUILDER_CONFIG:?}"
if [ ! -f "${BUILDER_CONFIG}" ]
then
echo "file ${BUILDER_CONFIG} not found"
exit 1
fi
_validate_param(){
PARAM_VAL="$1"
PARAM_NAME="$2"
# Validate that a parameter is defined (default) or that
# it is defined and represents the path of a file or
# directory that is found on the filesystem (VAL_TYPE=file)
VAL_TYPE="$3"
NO_NULL_EXIT="$4"
echo "${PARAM_VAL:?}"
# yq will return the 'null' string if a key is either undefined or defined with no value
if [[ "${PARAM_VAL}" =~ null$ ]]
then
echo "variable ${PARAM_NAME} is not present in ${BUILDER_CONFIG}"
if [[ "${NO_NULL_EXIT}" == 'no_null_exit' ]]; then
echo "Using defaults"
else
exit 1
fi
else
if [[ ${VAL_TYPE} == 'file' ]]; then
if [[ ! -e "${PARAM_VAL}" ]]
then
echo "${PARAM_VAL} does not exist"
exit 1
fi
fi
fi
}
IFS=':' read -ra ADDR <<<"$(yq r "${BUILDER_CONFIG}" container.volume)"
HOST_PATH="${ADDR[0]}"
VOLUME="${ADDR[1]}"
_validate_param "${VOLUME}" "container.volume" file
# Read IMAGE_TYPE from the builder config yaml if not supplied as an env var
if [[ -z "${IMAGE_TYPE}" ]]; then
IMAGE_TYPE="$(yq r "${BUILDER_CONFIG}" "builder.imageType")"
# Make iso builds the default for backwards compatibility
if [[ "${IMAGE_TYPE}" == 'null' ]]; then
echo "NOTE: No builder.imageType specified. Assuming 'iso'."
IMAGE_TYPE='iso'
fi
fi
if [[ -z "${OUTPUT_METADATA_FILE_NAME}" ]]; then
OUTPUT_METADATA_FILE_NAME="$(yq r "${BUILDER_CONFIG}" builder.outputMetadataFileName)"
if [[ "${OUTPUT_METADATA_FILE_NAME}" == 'null' ]]; then
echo "NOTE: No builder.outputMetadataFileName specified. Assuming '${OUTPUT_METADATA_FILE_NAME_DEFAULT}'."
OUTPUT_METADATA_FILE_NAME="${OUTPUT_METADATA_FILE_NAME_DEFAULT}"
fi
fi
OUTPUT_FILE_NAME="$(yq r "${BUILDER_CONFIG}" builder.outputFileName)"
_make_metadata(){
IMG_NAME="$1"
OUTPUT_METADATA_FILE_PATH="${VOLUME}/${OUTPUT_METADATA_FILE_NAME}"
# Instruct airshipctl where to locate the output image artifact
echo "bootImagePath: ${HOST_PATH}/${IMG_NAME}" > "${OUTPUT_METADATA_FILE_PATH}"
# Also include the image md5sum
md5sum=$(md5sum "${VOLUME}/${IMG_NAME}" | awk '{print $1}')
echo "md5sum: $md5sum" | tee -a "${OUTPUT_METADATA_FILE_PATH}"
}
_process_input_data_set_vars_osconfig(){
if [[ -z "${OSCONFIG_FILE}" ]]; then
OSCONFIG_FILE="$(yq r "${BUILDER_CONFIG}" builder.osconfigVarsFileName)"
fi
OSCONFIG_FILE="${VOLUME}/${OSCONFIG_FILE}"
_validate_param "${OSCONFIG_FILE}" builder.osconfigVarsFileName file no_null_exit
# Optional user-supplied playbook vars
if [[ -f "${OSCONFIG_FILE}" ]]; then
cp "${OSCONFIG_FILE}" /opt/assets/playbooks/roles/osconfig/vars/main.yaml
fi
}
_process_input_data_set_vars_iso(){
# Required user provided input
if [[ -z "${USER_DATA_FILE}" ]]; then
USER_DATA_FILE="$(yq r "${BUILDER_CONFIG}" builder.userDataFileName)"
fi
USER_DATA_FILE="${VOLUME}/${USER_DATA_FILE}"
_validate_param "${USER_DATA_FILE}" builder.userDataFileName file
# Required user provided input
if [[ -z "${NET_CONFIG_FILE}" ]]; then
NET_CONFIG_FILE="$(yq r "${BUILDER_CONFIG}" builder.networkConfigFileName)"
fi
NET_CONFIG_FILE="${VOLUME}/${NET_CONFIG_FILE}"
_validate_param "${NET_CONFIG_FILE}" builder.networkConfigFileName file
# cloud-init expects the network config specifically in JSON format
NET_CONFIG_JSON_FILE=/tmp/network_data.json
yq r -j "${NET_CONFIG_FILE}" > "${NET_CONFIG_JSON_FILE}"
# Optional user provided input
if [[ ${OUTPUT_FILE_NAME} != null ]]; then
IMG_NAME="${OUTPUT_FILE_NAME}"
else
IMG_NAME="${ISO_NAME_DEFAULT}"
fi
cat << EOF > /opt/assets/playbooks/roles/iso/vars/main.yaml
meta_data_file: ${BASEDIR}/meta_data.json
user_data_file: ${USER_DATA_FILE}
network_data_file: ${NET_CONFIG_JSON_FILE}
EOF
}
_process_input_data_set_vars_qcow(){
IMG_NAME=null
if [[ -z "${QCOW_CONFIG_FILE}" ]]; then
QCOW_CONFIG_FILE="$(yq r "${BUILDER_CONFIG}" builder.qcowVarsFileName)"
fi
QCOW_CONFIG_FILE="${VOLUME}/${QCOW_CONFIG_FILE}"
_validate_param "${QCOW_CONFIG_FILE}" builder.qcowVarsFileName file no_null_exit
# Optional user-supplied playbook vars
if [[ -f "${QCOW_CONFIG_FILE}" ]]; then
cp "${QCOW_CONFIG_FILE}" /opt/assets/playbooks/roles/qcow/vars/main.yaml
# Extract the image output name in the ansible vars file provided
IMG_NAME="$(yq r "${QCOW_CONFIG_FILE}" img_name)"
fi
# Retrieve from playbook defaults if not provided in user input
if [[ "${IMG_NAME}" == 'null' ]]; then
IMG_NAME="$(yq r /opt/assets/playbooks/roles/qcow/defaults/main.yaml img_name)"
fi
# User-supplied image output name in builder-config takes precedence
if [[ ${OUTPUT_FILE_NAME} != null ]]; then
IMG_NAME="${OUTPUT_FILE_NAME}"
else
_validate_param "${IMG_NAME}" img_name
fi
}


@ -0,0 +1,155 @@
#!/bin/bash
# Defaults
ISO_NAME_DEFAULT='ephemeral.iso'
_validate_param(){
PARAM_VAL="$1"
PARAM_NAME="$2"
# Validate that a parameter is defined (default) or that
# it is defined and represents the path of a file or
# directory that is found on the filesystem (VAL_TYPE=file)
VAL_TYPE="$3"
NO_NULL_EXIT="$4"
echo "${PARAM_VAL:?}"
# yq will return the 'null' string if a key is either undefined or defined with no value
if [[ "${PARAM_VAL}" =~ null$ ]]
then
echo "variable ${PARAM_NAME} is not present in user-supplied config."
if [[ "${NO_NULL_EXIT}" == 'no_null_exit' ]]; then
echo "Using defaults"
else
exit 1
fi
else
if [[ ${VAL_TYPE} == 'file' ]]; then
if [[ ! -e "${PARAM_VAL}" ]]
then
echo "${PARAM_VAL} does not exist"
exit 1
fi
fi
fi
}
# Capture stdin
stdin=$(cat)
yaml_dir=/tmp
echo "$stdin" > ${yaml_dir}/builder_config
OSCONFIG_FILE=osconfig
USER_DATA_FILE=user_data
NET_CONFIG_FILE=network_config
QCOW_CONFIG_FILE=qcow
file_list="${OSCONFIG_FILE}
${USER_DATA_FILE}
${NET_CONFIG_FILE}
${QCOW_CONFIG_FILE}"
IFS=$'\n'
for f in $file_list; do
found_file=no
for l in $stdin; do
if [ "${l:0:1}" != " " ]; then
found_file=no
fi
if [ "$found_file" = "yes" ]; then
echo "$l" | sed 's/^ //g' >> ${yaml_dir}/${f}
fi
if [ "$l" = "${f}:" ]; then
found_file=yes
fi
done
done
unset IFS
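# Illustrative stdin layout parsed by the loop above: each top-level key names
# an embedded file, and the file body is indented by one space (which sed
# strips). The keys shown are the ones extracted above; content is a sketch:
#   outputFileName: ephemeral.qcow2
#   osconfig:
#    kernel:
#     modules: ...
#   user_data:
#    #cloud-config
#    ...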
# Turn on -x after stdin is finished
set -x
# Output images to the first root-level mounted volume
for f in $(ls / | grep -v 'proc\|sys\|dev'); do mountpoint /$f >& /dev/null && VOLUME=/$f; done
if [ -z "$VOLUME" ]; then
echo "Error: Could not find a root-level volume mount to output images. Exiting."
exit 1
fi
# Read IMAGE_TYPE from the builder config yaml if not supplied as an env var
if [[ -z "${IMAGE_TYPE}" ]]; then
# Make iso builds the default for backwards compatibility
echo "NOTE: No IMAGE_TYPE specified. Assuming 'iso'."
IMAGE_TYPE='iso'
fi
OUTPUT_FILE_NAME="$(yq r ${yaml_dir}/builder_config outputFileName)"
_make_metadata(){
IMG_NAME="$1"
# Write and print md5sum
md5sum=$(md5sum "${VOLUME}/${IMG_NAME}" | awk '{print $1}')
echo "md5sum:"
echo "$md5sum" | tee "${VOLUME}/${IMG_NAME}.md5sum"
}
_process_input_data_set_vars_osconfig(){
OSCONFIG_FILE="${yaml_dir}/${OSCONFIG_FILE}"
# Optional user-supplied playbook vars
if [[ -f "${OSCONFIG_FILE}" ]]; then
echo "" >> /opt/assets/playbooks/roles/osconfig/vars/main.yaml
cat "${OSCONFIG_FILE}" >> /opt/assets/playbooks/roles/osconfig/vars/main.yaml
fi
}
_process_input_data_set_vars_iso(){
# Required user provided input
USER_DATA_FILE="${yaml_dir}/${USER_DATA_FILE}"
if [ ! -e "$USER_DATA_FILE" ]; then
echo "No user_data file supplied! Exiting."
exit 1
fi
# Required user provided input
NET_CONFIG_FILE="${yaml_dir}/${NET_CONFIG_FILE}"
if [ ! -e "$NET_CONFIG_FILE" ]; then
echo "No net_config file supplied! Exiting."
exit 1
fi
# cloud-init expects the network config specifically in JSON format
NET_CONFIG_JSON_FILE=/tmp/network_data.json
yq r -j "${NET_CONFIG_FILE}" > "${NET_CONFIG_JSON_FILE}"
# Optional user provided input
if [[ ${OUTPUT_FILE_NAME} != null ]]; then
IMG_NAME="${OUTPUT_FILE_NAME}"
else
IMG_NAME="${ISO_NAME_DEFAULT}"
fi
cat << EOF > /opt/assets/playbooks/roles/iso/vars/main.yaml
meta_data_file: ${BASEDIR}/meta_data.json
user_data_file: ${USER_DATA_FILE}
network_data_file: ${NET_CONFIG_JSON_FILE}
EOF
}
_process_input_data_set_vars_qcow(){
IMG_NAME=null
QCOW_CONFIG_FILE="${yaml_dir}/${QCOW_CONFIG_FILE}"
# Optional user-supplied playbook vars
if [[ -f "${QCOW_CONFIG_FILE}" ]]; then
cp "${QCOW_CONFIG_FILE}" /opt/assets/playbooks/roles/qcow/vars/main.yaml
fi
# Retrieve from playbook defaults if not provided in user input
if [[ "${IMG_NAME}" == 'null' ]]; then
IMG_NAME="$(yq r /opt/assets/playbooks/roles/qcow/defaults/main.yaml img_name)"
fi
# User-supplied image output name in builder-config takes precedence
if [[ ${OUTPUT_FILE_NAME} != null ]]; then
IMG_NAME="${OUTPUT_FILE_NAME}"
else
_validate_param "${IMG_NAME}" img_name
fi
}


@ -0,0 +1 @@
{"hostname": "ephemeral", "name": "ephemeral", "uuid": "83679162-1378-4288-a2d4-70e13ec132aa"}


@ -0,0 +1,4 @@
---
- hosts: localhost
roles:
- multistrap


@ -0,0 +1,4 @@
---
- hosts: localhost
roles:
- livecdcontent


@ -0,0 +1,5 @@
---
- hosts: build
gather_facts: false
roles:
- osconfig


@ -0,0 +1,10 @@
all:
hosts:
localhost:
ansible_connection: local
ansible_python_interpreter: /usr/bin/python3
chroots:
hosts:
build:
ansible_connection: chroot
ansible_python_interpreter: /usr/bin/python3


@ -0,0 +1,4 @@
---
- hosts: localhost
roles:
- iso


@ -0,0 +1,4 @@
---
- hosts: localhost
roles:
- qcow


@ -0,0 +1,7 @@
img_output_dir: /config
img_name: ephemeral.iso
root_image: /build
meta_data_file: /config/meta_data.json
user_data_file: /config/user_data
network_data_file: /config/network_data.json


@ -0,0 +1,26 @@
- name: "Cloud Init | creating {{ root_image }}/openstack/latest directory"
file:
path: "{{ root_image }}/openstack/latest"
state: directory
mode: '0755'
#- name: "Cloud Init | Setting cloud-init datasource list"
# copy:
# content: "datasource_list: [ ConfigDrive, None ]"
# dest: "{{ root_image }}/etc/cloud/cloud.cfg.d/95_no_cloud_ds.cfg"
- name: "Cloud Init | seeding meta data"
copy:
src: "{{ meta_data_file }}"
dest: "{{ root_image }}/openstack/latest/meta_data.json"
- name: "Cloud Init | seeding user data"
copy:
src: "{{ user_data_file }}"
dest: "{{ root_image }}/openstack/latest/user_data"
- name: "Cloud Init | seeding network data"
copy:
src: "{{ network_data_file }}"
dest: "{{ root_image }}/openstack/latest/network_data.json"


@ -0,0 +1,60 @@
- name: "ISO | Reduce image size"
file:
state: absent
path: "{{ root_image }}/lib"
- name: "ISO | Reduce image size"
file:
state: absent
path: "{{ root_image }}/usr"
- name: "ISO | Reduce image size"
file:
state: absent
path: "{{ root_image }}/bin"
- name: "ISO | Reduce image size"
file:
state: absent
path: "{{ root_image }}/sbin"
- name: "ISO | Reduce image size"
file:
state: absent
path: "{{ root_image }}/var"
- name: "ISO | Reduce image size"
file:
state: absent
path: "{{ root_image }}/opt"
- name: "ISO | Ensure any old iso image at target location is removed"
file:
state: absent
path: "{{ img_output_dir }}/{{ img_name }}"
- name: "ISO | Ensuring {{ img_output_dir }} directory exists"
file:
path: "{{ img_output_dir }}"
state: directory
mode: '0755'
- name: "ISO | Writing ISO with xorriso"
shell:
cmd: |
xorriso \
-as mkisofs \
-iso-level 3 \
-full-iso9660-filenames \
-volid "config-2" \
-eltorito-boot boot/grub/bios.img \
-no-emul-boot \
-boot-load-size 4 \
-boot-info-table \
--eltorito-catalog boot/grub/boot.cat \
--grub2-boot-info \
--grub2-mbr /usr/lib/grub/i386-pc/boot_hybrid.img \
-eltorito-alt-boot \
-e EFI/efiboot.img \
-no-emul-boot \
-append_partition 2 0xef {{ root_image }}/boot/grub/efiboot.img \
-output {{ img_output_dir }}/{{ img_name }} \
-graft-points \
{{ root_image }} \
/boot/grub/bios.img={{ root_image }}/boot/grub/bios.img \
/EFI/efiboot.img={{ root_image }}/boot/grub/efiboot.img


@ -0,0 +1,10 @@
- name: "Task | Including any user-defined vars"
include_vars:
file: main.yaml
name: user-vars
- name: "Task | Preparing Cloud-Init data"
include_tasks: cloud-init.yaml
- name: "Task | ISO production"
include_tasks: iso.yaml


@ -0,0 +1 @@
# This file will be overwritten by the container entrypoint with user-provided vars, if any are defined.


@ -0,0 +1,5 @@
root_chroot: build
root_image: build
# Additional flags for mksquashfs
mksquashfs_compression: lz4
mksquashfs_threads: "{{ ansible_processor_vcpus }}"


@ -0,0 +1,58 @@
- name: "Stamp out a marker file for grub to use when identifying the desired boot volume"
copy:
content: "{{ ansible_date_time.date }}"
dest: "{{ root_image }}/AIRSHIP"
- name: "create directory for boot image assembly"
tempfile:
state: directory
suffix: bootimg
register: bootimg_builddir
- name: "write out grub config"
template:
src: grub-livecd.cfg.j2
dest: "{{ bootimg_builddir.path }}/grub.cfg"
- name: "making standalone grub - efi"
shell:
cmd: |
grub-mkstandalone \
--format=x86_64-efi \
--output="{{ bootimg_builddir.path }}/bootx64.efi" \
--locales="" \
--fonts="" \
boot/grub/grub.cfg="{{ bootimg_builddir.path }}/grub.cfg"
- name: "setup efi filesystem"
shell:
cmd: |
set -e
cd {{ bootimg_builddir.path }}
dd if=/dev/zero of=efiboot.img bs=1M count=10
mkfs.vfat efiboot.img
LC_CTYPE=C mmd -i efiboot.img efi efi/boot
LC_CTYPE=C mcopy -i efiboot.img ./bootx64.efi ::efi/boot/
- name: "making standalone grub - legacy"
shell:
cmd: |
grub-mkstandalone \
--format=i386-pc \
--output="{{ bootimg_builddir.path }}/core.img" \
--install-modules="linux normal iso9660 biosdisk memdisk search tar ls all_video" \
--modules="linux normal iso9660 biosdisk search" \
--locales="" \
--fonts="" \
boot/grub/grub.cfg="{{ bootimg_builddir.path }}/grub.cfg"
- name: "ensuring directory {{ root_image }}/boot/grub exists"
file:
path: "{{ root_image }}/boot/grub"
state: directory
mode: '0755'
- name: "assembling boot img"
shell:
cmd: |
cat /usr/lib/grub/i386-pc/cdboot.img {{ bootimg_builddir.path }}/core.img > {{ root_image }}/boot/grub/bios.img
cp {{ bootimg_builddir.path }}/efiboot.img {{ root_image }}/boot/grub/efiboot.img


@ -0,0 +1,10 @@
- name: "Including any user-defined vars"
include_vars:
file: main.yaml
name: user-vars
- name: "building squashfs"
include_tasks: squashfs.yaml
- name: "building livecd"
include_tasks: livecd.yaml


@ -0,0 +1,20 @@
- name: "ensuring directory {{ root_image }}/live exists"
file:
path: "{{ root_image }}/live"
state: directory
mode: '0755'
- name: "ensure no previous squashfs file"
file:
path: "{{ root_image }}/live/filesystem.squashfs"
state: absent
- name: "Building squashfs"
shell:
cmd: |
mksquashfs \
"{{ root_chroot }}" \
"{{ root_image }}/live/filesystem.squashfs" \
-processors {{ mksquashfs_threads }} \
-comp {{ mksquashfs_compression }} \
-e boot


@ -0,0 +1,11 @@
search --set=root --file /AIRSHIP
insmod all_video
set default="0"
set timeout=1
menuentry "Airship Ephemeral" {
linux /boot/vmlinuz boot=live quiet nomodeset overlay-size=70% systemd.unified_cgroup_hierarchy=0 ds=ConfigDrive
initrd /boot/initrd.img
}


@ -0,0 +1,131 @@
rootfs_root: build
rootfs_arch: amd64
k8s_version: 1.18.6-00
kernel_base_pkg: linux-image-generic
kernel_headers_pkg: linux-headers-generic
systemd_nic_names_policy: kernel database onboard path slot
systemd_nic_alternative_names_policy: database onboard path slot
multistrap_retries: 3
multistrap_retries_delay: 3
ubuntu_packages:
- apparmor
- apt-file
- apt-utils
- apt-transport-https
- arptables
- bash-completion
- bc
- bridge-utils
- chrony
- cloud-init
- conntrack
- curl
- dbus
- dnsutils
- dosfstools
- e2fsprogs
- ebtables
- efivar
- ethtool
- file
- gawk
- gettext-base
- gnupg2
- grub2
- grub-efi-amd64-signed
- ifenslave
- isc-dhcp-client
- iproute2
- ipset
- iptables
- iputils-arping
- iputils-ping
- iputils-tracepath
- ipvsadm
- kdump-tools
- keepalived
- "{{ kernel_base_pkg }}"
- "{{ kernel_headers_pkg }}"
- kmod
- less
- live-boot
- locales
- locales-all
- logrotate
- lsb-release
- lsof
- lvm2 # required for ceph cluster provisioning
- man-db
- mawk
- mbr
- netplan.io
- net-tools
- networkd-dispatcher # required for netplan post-up scripts
- openssh-server
- passwd
- python3
- python3-apt
- rsyslog
- socat
- systemd
- systemd-sysv
- strace
- sudo
- tcpdump
- traceroute
- vim
- vlan
- wget
- xfsprogs
- xz-utils
unapproved_packages: # provide the exact name of the packages that need to be blocked
- unattended-upgrades
- systemd-timesyncd
repos:
- register_repo_with_rootfs: true
name: Ubuntu
packages: "{{ ubuntu_packages }}"
source: http://archive.ubuntu.com/ubuntu/
keyring_pkg: ubuntu-keyring
suite: focal
components: main restricted universe
- register_repo_with_rootfs: true
name: Ubuntu-Updates
packages: []
source: http://archive.ubuntu.com/ubuntu/
# NOTE: We comment this out as the package comes from the "focal" suite
# keyring_pkg: ubuntu-keyring
suite: focal-updates
omitdebsrc: "true"
components: main restricted universe
- register_repo_with_rootfs: true
name: Ubuntu-Security
packages: []
source: http://archive.ubuntu.com/ubuntu/
# NOTE: We comment this out as the package comes from the "focal" suite
# keyring_pkg: ubuntu-keyring
suite: focal-security
omitdebsrc: "true"
components: main restricted universe
- register_repo_with_rootfs: true
name: Docker
packages:
- docker-ce
- docker-ce-cli
- containerd.io
source: https://download.docker.com/linux/ubuntu
keyring_url: https://download.docker.com/linux/ubuntu/gpg
suite: focal
omitdebsrc: "true"
components: stable
- register_repo_with_rootfs: true
name: Kubernetes
packages:
- kubelet={{ k8s_version }}
- kubeadm={{ k8s_version }}
- kubectl={{ k8s_version }}
source: https://apt.kubernetes.io
keyring_url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
suite: kubernetes-xenial
omitdebsrc: "true"
components: main
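# The multistrap role merges user-supplied overrides on top of these defaults
# via repos_append / ubuntu_packages_append (see the role's main tasks).
# Illustrative override from a user vars file (package name is an example):
# ubuntu_packages_append:
#   - jq
# repos_append: []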


@ -0,0 +1,30 @@
- when: item.keyring_url is defined
block:
- name: "ensuring directory {{ rootfs_root }}/etc/apt/trusted.gpg.d exists"
file:
path: "{{ rootfs_root }}/etc/apt/trusted.gpg.d"
state: directory
mode: '0755'
- name: "create temporary directory for {{ item.name }}'s key"
tempfile:
state: directory
suffix: aptkey
register: aptkey_tmpdir
- name: "Download {{ item.keyring_url }} for {{ item.name }} repo"
get_url:
url: "{{ item.keyring_url }}"
dest: "{{ aptkey_tmpdir.path }}/Release.key"
mode: '0440'
- name: "Installing keyring {{ item.name }}"
shell:
cmd: gpg --no-options --no-default-keyring --no-auto-check-trustdb --trustdb-name {{ rootfs_root }}/etc/apt/trusted.gpg --no-keyring --import-options import-export --import {{ aptkey_tmpdir.path }}/Release.key > {{ rootfs_root }}/etc/apt/trusted.gpg.d/{{ item.name }}.gpg
- when: item.keyring_pkg is defined
block:
- name: Update the apt cache
apt:
update_cache: yes
- name: "Apt keyring package defined for {{ item.name }} repo, ensuring that this is present on the build host (note that this means you need access to it in the apt sources of the builder)"
apt:
name: "{{ item.keyring_pkg }}"
state: present


@ -0,0 +1,100 @@
- name: "Including any user-defined vars"
include_vars:
file: main.yaml
name: user-vars
- name: "Append any user-defined repos to multistrap list"
set_fact:
repos: "{{ repos + repos_append }}"
when: repos_append is defined
- name: "Append any user-defined pkgs to be installed from default Ubuntu mirrors"
set_fact:
ubuntu_packages: "{{ ubuntu_packages + ubuntu_packages_append }}"
when: ubuntu_packages_append is defined
- name: "ensuring directory {{ rootfs_root }} exists for rootfs"
file:
path: "{{ rootfs_root }}"
state: directory
mode: '0755'
- name: "create temporary directory for multistrap config"
tempfile:
state: directory
suffix: multistrap
register: multistrap_tempdir
- name: "Configure apt with unapproved packages"
template:
src: unapproved-packages.j2
dest: "{{ multistrap_tempdir.path }}/pref.conf"
- name: "write out multistrap config"
template:
src: multistrap.conf.j2
dest: "{{ multistrap_tempdir.path }}/multistrap.conf"
validate: multistrap --simulate -f %s
- name: "install required apt keys manually"
include_tasks: apt-key-install.yaml
loop: "{{ repos }}"
# kdump-tools does not install properly in multistrap environment. This fix allows kdump-tools
# installation to succeed.
- name: "kdump-tools fix - create directory"
shell: |
set -e
mkdir -p "{{ rootfs_root }}/etc/kernel/postinst.d"
- name: "kdump-tools fix - deploy build script"
template:
src: kdump-tools.j2
dest: "{{ rootfs_root }}/etc/kernel/postinst.d/kdump-tools"
mode: '0755'
# kdump-tools deb package will overwrite script without write protection enabled
- name: "kdump-tools fix - lock build script"
shell: |
set -e
chattr +i "{{ rootfs_root }}/etc/kernel/postinst.d/kdump-tools"
# Setting up a dummy hostname required for some packages to properly install
- name: "hostname and hosts | write out hostname file"
shell:
cmd: "echo \"$(hostname)\" > {{rootfs_root}}/etc/hostname"
- name: "hostname and hosts | write out hosts file"
shell:
cmd: "echo \"127.0.0.1 localhost $(hostname)\" > {{rootfs_root}}/etc/hosts"
- name: "Running multistrap"
shell:
cmd: "multistrap -f {{ multistrap_tempdir.path }}/multistrap.conf"
retries: "{{ multistrap_retries }}"
delay: "{{ multistrap_retries_delay }}"
register: result
until: result.rc == 0
- name: "Set systemd NIC naming"
template:
src: 99-default.link.j2
dest: "{{ rootfs_root }}/etc/systemd/network/99-default.link"
mode: '0644'
- name: "Configure apt with unapproved packages"
template:
src: unapproved-packages.j2
dest: "{{ rootfs_root }}/etc/apt/preferences.d/unapproved-packages.pref"
- name: "Configure apt to remove unapproved packages from update"
ansible.builtin.lineinfile:
path: "{{ rootfs_root }}/etc/apt/apt.conf.d/01autoremove"
insertafter: "multiverse/metapackages"
line: ' "{{ item }}";'
with_items: "{{ unapproved_packages }}"
- name: "Lock sources.list to prevent conflict and duplicates with multistrap repo list"
shell: |
set -e
if [ -f {{ rootfs_root }}/etc/apt/sources.list ] && [ ! -h {{ rootfs_root }}/etc/apt/sources.list ]; then
rm {{ rootfs_root }}/etc/apt/sources.list
ln -s /dev/null {{ rootfs_root }}/etc/apt/sources.list
fi


@ -0,0 +1,7 @@
[Match]
OriginalName=*
[Link]
NamePolicy={{ systemd_nic_names_policy }}
AlternativeNamesPolicy={{ systemd_nic_alternative_names_policy }}
MACAddressPolicy=persistent


@ -0,0 +1,75 @@
#!/bin/sh -e
version="$1"
kdumpdir="/var/lib/kdump"
[ -x /usr/sbin/mkinitramfs ] || exit 0
# passing the kernel version is required
if [ -z "${version}" ]; then
echo >&2 "W: kdump-tools: ${DPKG_MAINTSCRIPT_PACKAGE:-kdump-tools package} did not pass a version number"
exit 2
fi
if ! linux-version list | grep "${version}" > /dev/null ; then
exit 0
fi
# exit if kernel does not need an initramfs
if [ "$INITRD" = 'No' ]; then
exit 0
fi
# avoid running multiple times
if [ -n "$DEB_MAINT_PARAMS" ]; then
eval set -- "$DEB_MAINT_PARAMS"
if [ -z "$1" ] || [ "$1" != "configure" ]; then
exit 0
fi
fi
# We need a modified copy of initramfs-tools directory
# with MODULES=dep in initramfs.conf
if [ ! -d "$kdumpdir" ];then
mkdir "$kdumpdir" || true
fi
# Force re-creation of $kdumpdir/initramfs-tools
# in case the source has changed since last time
# we ran
if [ -d "$kdumpdir/initramfs-tools" ];then
rm -Rf $kdumpdir/initramfs-tools || true
fi
cp -pr /etc/initramfs-tools "$kdumpdir" || true
initramfsdir="$kdumpdir/initramfs-tools"
# Add scsi_dh_* modules if in use otherwise
# kexec reboot on multipath will fail
# (LP: #1635597)
for I in $(lsmod | grep scsi_dh | cut -d" " -f1);do
echo "${I}" >> $initramfsdir/modules
done
# canderson: This line needs to be commented out for kdump-tools to install with multistrap
#sed -e 's/MODULES=.*/MODULES=dep/' /etc/initramfs-tools/initramfs.conf > "$initramfsdir/initramfs.conf" || true
if ! [ -e "$initramfsdir/initramfs.conf" ];then
echo >&2 "W: kdump-tools: Unable to create $initramfsdir/initramfs.conf"
exit 2
fi
# Cleaning up existing initramfs with same version
# as mkinitramfs do not have a force option
if [ -e "$kdumpdir/initrd.img-${version}" ];then
rm -f "$kdumpdir/initrd.img-${version}" || true
fi
# we're good - create initramfs.
echo "kdump-tools: Generating $kdumpdir/initrd.img-${version}"
if mkinitramfs -d "$initramfsdir" -o "$kdumpdir/initrd.img-${version}.new" "${version}";then
mv "$kdumpdir/initrd.img-${version}.new" "$kdumpdir/initrd.img-${version}"
else
mkinitramfs_return="$?"
rm -f "$kdumpdir/initrd.img-${version}.new"
echo "kdump-tools: mkinitramfs failed for $kdumpdir/initrd.img-${version} with $mkinitramfs_return." >&2
exit $mkinitramfs_return
fi


@ -0,0 +1,33 @@
#jinja2: trim_blocks:False
[General]
arch={{ rootfs_arch }}
directory={{ rootfs_root }}
# same as --tidy-up option if set to true
cleanup=true
# same as --no-auth option if set to true
# keyring packages listed in each bootstrap will
# still be installed.
noauth=false
# extract all downloaded archives (default is true)
unpack=true
#omitrequired=true
# enable MultiArch for the specified architectures
# default is empty
#multiarch=allowed
# apt preferences file
aptpreferences=pref.conf
# the order of sections is not important.
# the bootstrap option determines which repository
# is used to calculate the list of Priority: required packages.
# "bootstrap" lists the repos which will be used to create the multistrap itself. Only
# Packages listed in "bootstrap" will be downloaded and unpacked by multistrap.
bootstrap={% set space = joiner(" ") %}{% for repo in repos %}{{ space() }}{{ repo.name }}{% endfor %}
# aptsources is a list of sections to be used for downloading packages
# and lists and placed in the /etc/apt/sources.list.d/multistrap.sources.list
# of the target. Order is not important
aptsources={% set space = joiner(" ") %}{% for repo in repos %}{% if repo.register_repo_with_rootfs == true %}{{ space() }}{{ repo.name }}{% endif %}{% endfor %}
{% for repo in repos %}
[{{ repo.name }}]
{% set newline = joiner("\n") %}{% for key, value in repo.items() %}{% if ( key != 'name' ) and ( key != 'keyring_url' ) %}{{ newline() }}{% if key == 'keyring_pkg' %}keyring{% else %}{{ key }}{% endif %}={% if value %}{% if key == 'packages' %}{{ value|join(' ') }}{% else %}{{ value }}{% endif %}{% endif %}{% endif %}{% endfor %}
{% endfor %}
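# Illustrative rendered output for a single repo section (values are examples;
# actual content comes from the repos list in multistrap-vars.yaml, with
# keyring_pkg emitted as "keyring" and packages joined by spaces):
#   [Ubuntu]
#   packages=apparmor apt-utils ...
#   source=http://archive.ubuntu.com/ubuntu/
#   keyring=ubuntu-keyring
#   suite=focal
#   components=main restricted universe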


@ -0,0 +1,6 @@
{% for package in unapproved_packages %}
Package: {{ package }}
Pin: origin *
Pin-Priority: -1
{% endfor %}


@ -0,0 +1,3 @@
Do not make updates here.
See image-builder/config/rootfs/multistrap-vars.yaml


@ -0,0 +1,245 @@
rootfs_root: build
default_run_context: common
qcow_run_context: qcow
user_scripts_dir_default: "/config/scripts/{{ default_run_context }}"
user_scripts_dir_qcow: "/config/scripts/{{ qcow_run_context }}"
kernel:
modules:
load:
- name: 8021q
- name: bonding
- name: ip_vs
- name: ip_vs_rr
- name: ip_vs_wrr
- name: ip_vs_sh
- name: br_netfilter
blacklist:
- name: krbd
banners:
login: |
Airship Node \l: \n.\o
Kernel: \s \m \r \v
IP address: \4
motd: |
#!/bin/sh
. /etc/lsb-release
printf "Airship Node, based on: %s (%s %s %s)\n" "$DISTRIB_DESCRIPTION" "$(uname -o)" "$(uname -r)" "$(uname -m)"
kubelet:
# Add only image-builder appropriate kubelet args here.
# Add all others to kubeadmcontrolplane.yaml
extra_systemd_args: []
#- name: reserved-cpus
# value: '0-3'
grub:
GRUB_TIMEOUT: 10
GRUB_CMDLINE_LINUX_DEFAULT:
- name: console
value: 'ttyS0,115200n8'
- name: console
value: 'tty0'
- name: amd_iommu
value: 'on'
- name: intel_iommu
value: 'on'
- name: iommu
value: 'pt'
- name: cgroup_disable
value: 'hugetlb'
- name: dpdk-socket-mem
value: '4096,4096'
- name: rcu_nocb_poll
value: 'true'
GRUB_SERIAL_COMMAND:
- name: speed
value: 'ttyS0,115200n8'
- name: unit
value: '0'
- name: word
value: '8'
- name: parity
value: 'no'
- name: stop
value: '1'
kdump_tools:
crashkernel: '768M'
limits:
- name: core_dump
domain: '0:'
type: 'hard'
item: 'core'
value: 0
- name: nofile-root-soft
domain: 'root'
type: 'soft'
item: 'nofile'
value: '65536'
- name: nofile-root-hard
domain: 'root'
type: 'hard'
item: 'nofile'
value: '1048576'
- name: nofile-all-soft
domain: '*'
type: 'soft'
item: 'nofile'
value: '65536'
- name: nofile-all-hard
domain: '*'
type: 'hard'
item: 'nofile'
value: '1048576'
sysctl:
- name: net.bridge.bridge-nf-call-ip6tables
value: '1'
- name: net.bridge.bridge-nf-call-iptables
value: '1'
- name: net.nf_conntrack_max
value: '1048576'
- name: kernel.panic
value: '3'
- name: kernel.pid_max
value: '4194303'
- name: net.ipv4.conf.default.arp_accept
value: '1'
- name: net.ipv4.conf.all.arp_accept
value: '1'
- name: net.ipv4.tcp_keepalive_intvl
value: '15'
- name: net.ipv4.tcp_keepalive_time
value: '30'
- name: net.ipv4.tcp_keepalive_probes
value: '8'
- name: net.ipv4.tcp_retries2
value: '5'
- name: net.ipv4.neigh.default.gc_thresh1
value: '4096'
- name: net.ipv4.neigh.default.gc_thresh3
value: '16384'
- name: net.ipv4.conf.default.rp_filter
value: '2'
- name: net.ipv6.conf.all.accept_ra
value: '0'
- name: net.ipv6.conf.default.accept_ra
value: '0'
- name: net.ipv6.conf.lo.accept_ra
value: '0'
- name: net.ipv6.conf.lo.disable_ipv6
value: '0'
- name: net.netfilter.nf_conntrack_acct
value: '1'
- name: fs.suid_dumpable
value: '2'
- name: fs.inotify.max_user_watches
value: '1048576'
- name: fs.protected_hardlinks
value: '1'
- name: fs.protected_symlinks
value: '1'
- name: kernel.sysrq
value: '8'
# Any directories to create on disk can be defined here
directories:
# Full path to file to create
- name: /testdir
permissions: '0755'
owner: root
group: root
# The contexts where this operation is performed
# {{ default_run_context }} = part of shared base image
# qcow = is performed for QCOW but not ephemeral (unless
# combined with previous item)
run_contexts:
- "{{ default_run_context }}"
# Any files to write to disk can be defined here
files:
# Full path to file to create
- name: /testdir/test.sh
file_content: |
#!/bin/bash
echo hello world
permissions: '0755'
owner: root
group: root
# The contexts where this operation is performed
# {{ default_run_context }} = part of shared base image
# qcow = is performed for QCOW but not ephemeral (unless
# combined with previous item)
run_contexts:
- "{{ default_run_context }}"
systemd:
# Full name, including systemd suffix. sample.service. sample.mount, sample.timer, etc.
- name: sample.service
file_content: |
[Unit]
Description=sample service
After=network.target
[Service]
ExecStart=/bin/sleep infinity
[Install]
WantedBy=multi-user.target
# whether the target image should run this service on boot
enabled: yes
# whether to override existing symlinks (e.g. name collision).
# Use only if you are intending to overwrite an existing systemd unit
force: no
# The contexts where this operation is performed
# {{ default_run_context }} = part of shared base image
# qcow = is performed for QCOW but not ephemeral (unless
# combined with previous item)
run_contexts:
- "{{ default_run_context }}"
# If any custom shell scripts are needed for image building, they can be added here.
user_scripts:
- file_content: |
#!/bin/bash
echo "custom container buildtime script"
# The contexts where this operation is performed
# {{ default_run_context }} = part of shared base image
# qcow = is performed for QCOW but not ephemeral (unless
# combined with previous item)
run_contexts:
- "{{ default_run_context }}"
# Any other adjustments to file or directory permissions, for files that already exist.
file_permissions:
# Full path to file to create
- name: /testdir/test.sh
permissions: '0700'
owner: root
group: root
# The contexts where this operation is performed
# {{ default_run_context }} = part of shared base image
# qcow = is performed for QCOW but not ephemeral (unless
# combined with previous item)
run_contexts:
- "{{ default_run_context }}"
# Set password and login shell for existing users
# Mainly intended to lock down system users
# Creates user if does not exist
user_management:
- name: test
shell: /bin/false
password: ''
password_lock: yes
run_contexts:
- "{{ default_run_context }}"
# If any required resources need to be fetched from URL for image build customization, they can be added here.
# Downloaded resources can be found in /tmp/url_resources directory.
# Example:-
# fetch_from_url:
# - url: https://www.example.com/resource.tar.gz
# use_proxy: no
fetch_from_url: []


@ -0,0 +1,28 @@
- name: "MOTD | Set Login Prompt"
copy:
content: "{{ banners.login }}\n"
dest: "/etc/issue"
owner: root
group: root
mode: '0644'
- name: "Finalize | Reset MOTD"
file:
state: "{{ item }}"
path: "/etc/update-motd.d/"
owner: root
group: root
mode: '0755'
loop:
- absent
- directory
- name: "Finalize | Remove MOTD News config"
file:
state: "absent"
path: "/etc/default/motd-news"
- name: "MOTD | Set MOTD"
copy:
content: "{{ banners.motd }}"
dest: "/etc/update-motd.d/00-motd"
owner: root
group: root
mode: '0755'


@ -0,0 +1,23 @@
- name: "Cloud-Init | configure network renderer"
copy:
content: |
# prefer to render via netplan instead of /etc/network/interfaces even if ifupdown is present
system_info:
network:
renderers: ['netplan', 'eni', 'sysconfig']
dest: "/etc/cloud/cloud.cfg.d/90_override_renderer.cfg"
- name: "Cloud-Init | Mask ssh.socket allowing cloud-init to configure without failures"
systemd:
masked: yes
name: ssh.socket
- name: "Cloud-Init | Ensuring cloud-init overrides directory exists"
file:
path: "/etc/systemd/system/cloud-init-local.service.d"
state: directory
mode: '0755'
- name: "Cloud-Init | Place cloud-init override file"
template:
src: cloud-init-local-overrides.j2
dest: "/etc/systemd/system/cloud-init-local.service.d/override.conf"
mode: '0644'


@ -0,0 +1,40 @@
- name: "CRI-O | ensuring directory /etc/crio exists"
file:
path: "/etc/crio"
state: directory
mode: '0755'
- name: "CRI-O | Setting up crio"
shell:
cmd: "crio config > /etc/crio/crio.conf"
- name: "CRI-O | configure runc path"
ini_file:
path: /etc/crio/crio.conf
section: "crio.runtime.runtimes.runc"
option: runtime_path
value: "\"/usr/sbin/runc\""
- name: "CRI-O | configure cgroup manager"
ini_file:
path: /etc/crio/crio.conf
section: "crio.runtime"
option: cgroup_manager
value: "\"systemd\""
- name: "CRI-O | configure logs to also output to journald"
ini_file:
path: /etc/crio/crio.conf
section: "crio"
option: log_to_journald
value: "true"
- name: "CRI-O | Disabling systemd unit"
systemd:
enabled: no
name: crio.service
- name: "CRI-O | Ensuring systemd preset directory exists"
file:
path: "/etc/systemd/system-preset"
state: directory
mode: '0755'
- name: "CRI-O | Don't enable crio unit by default"
copy:
content: 'disable crio.service'
dest: /etc/systemd/system-preset/00-crio.preset


@ -0,0 +1,18 @@
- name: "Append any user-defined custom urls"
set_fact:
fetch_from_url: "{{ fetch_from_url + fetch_from_url_append }}"
when: fetch_from_url_append is defined
- when: fetch_from_url is defined
block:
- name: "ensuring directory /tmp/url_resources exists"
file:
path: "/tmp/url_resources"
state: directory
mode: '0755'
- name: "Download from url {{ item.url }}"
get_url:
url: "{{ item.url }}"
dest: "/tmp/url_resources/{{ item.url | basename }}"
mode: '0755'
use_proxy: "{{ item.use_proxy }}"
loop: "{{ fetch_from_url }}"


@ -0,0 +1,9 @@
- name: "File Permissions | Modifying file or directory permissions for {{ item.name }}"
file:
path: "{{ item.name }}"
state: file
mode: "{{ item.permissions }}"
owner: "{{ item.owner }}"
group: "{{ item.group }}"
loop: "{{ file_permissions }}"
when: run_context in item.run_contexts


@ -0,0 +1,24 @@
- name: "Finalize | Removing .pyc files and cleaning apt cache"
shell:
cmd: |
find "/usr/" "/var/" \( -name "*.pyc" -o -name "__pycache__" \) -delete
apt -y clean
- name: "Finalize | Ensure no /etc/machine-id is delivered in image"
file:
path: /etc/machine-id
state: absent
- name: "Finalize | remove /var/lib/dbus/machine-id"
file:
path: /var/lib/dbus/machine-id
state: absent
- name: "Finalize | symlink /var/lib/dbus/machine-id to /etc/machine-id"
file:
src: /etc/machine-id
dest: /var/lib/dbus/machine-id
owner: root
group: root
state: link
force: yes


@ -0,0 +1,9 @@
# Settings here will be applied to /boot/grub/grub.cfg when grub is installed
- name: "Append any user-defined grub cmdline linux default"
set_fact:
grub_cmdline_linux_default: "{% if grub_cmdline_linux_default_append is defined %}{{ grub.GRUB_CMDLINE_LINUX_DEFAULT + grub_cmdline_linux_default_append }}{% else %}{{ grub.GRUB_CMDLINE_LINUX_DEFAULT }}{% endif %}"
- name: "Grub | Grub config"
template:
src: grub.j2
dest: "/etc/default/grub"
mode: 0644


@ -0,0 +1,11 @@
# airshipctl cloud-init will overwrite with its own /etc/hostname and /etc/hosts fqdn
- name: "hostname and hosts | write out hostname file"
template:
src: hostname.j2
    dest: "/etc/hostname"
mode: 0644
- name: "hostname and hosts | write out hosts file"
template:
src: hosts.j2
dest: "/etc/hosts"
mode: 0644


@ -0,0 +1,7 @@
# TODO: Move to airshipctl cloud-init to support customized post-up cmds (ethtool, etc)
#- name: "ifup-hooks | Defining ifup-hooks: routes, ethtool, etc"
# template:
# src: ifup-hooks.j2
# dest: "/etc/networkd-dispatcher/routable.d/50-ifup-hooks"
# mode: 0755
#  when: ifup_hooks is defined


@ -0,0 +1,6 @@
# Settings here will be applied to /etc/default/grub.d/kdump-tools.cfg when kdump-tools is installed
- name: "kdump-tools | kdump-tools config"
template:
src: kdump-tools.cfg.j2
dest: "/etc/default/grub.d/kdump-tools.cfg"
mode: 0644


@ -0,0 +1,10 @@
- name: "Kubernetes | write out kubelet unit file"
template:
src: kubelet.service.j2
dest: "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
mode: 0644
- name: "Kubernetes | configuring kubelet systemd unit"
systemd:
name: "kubelet.service"
enabled: yes
force: no


@ -0,0 +1,6 @@
- name: "Limits | Defining security limits"
template:
src: limits.j2
dest: "/etc/security/limits.d/99-{{ item.name }}.conf"
mode: 0644
loop: "{{ limits }}"


@ -0,0 +1,5 @@
- name: "locale | write out locale config"
template:
src: locale.j2
dest: "/etc/default/locale"
mode: 0644


@ -0,0 +1,67 @@
- name: "Including any user-defined vars"
include_vars:
file: main.yaml
name: user-vars
# Tasks run when building image-builder container
- name: "configure cloud-init"
include_tasks: cloud-init.yaml
when: run_context == default_run_context
- name: "configure modules"
include_tasks: modules.yaml
when: run_context == default_run_context
- name: "configure limits"
include_tasks: limits.yaml
when: run_context == default_run_context
- name: "configure sysctl"
include_tasks: sysctl.yaml
when: run_context == default_run_context
- name: "configure grub"
include_tasks: grub.yaml
when: run_context == default_run_context or run_context == qcow_run_context
- name: "configure kdump-tools"
include_tasks: kdump-tools.yaml
when: run_context == default_run_context
- name: "configure kubernetes"
include_tasks: kubernetes.yaml
when: run_context == default_run_context
- name: "configure locale"
include_tasks: locale.yaml
when: run_context == default_run_context
- name: "configure hostname and hosts"
include_tasks: hostname-hosts.yaml
when: run_context == default_run_context
- name: "configure banners"
include_tasks: banners.yaml
when: run_context == default_run_context
- name: "unattended upgrades"
include_tasks: unattended-upgrades.yaml
when: run_context == default_run_context
- name: "configure systemd-resolved"
include_tasks: systemd-resolved.yaml
when: run_context == default_run_context
- name: "configure base systemd"
include_tasks: systemd.yaml
when: run_context == default_run_context
- name: "fetch url resource"
include_tasks: fetch-from-url.yaml
when: run_context == default_run_context
# Context-dependent tasks
- name: "write user-provided files"
include_tasks: write-user-files.yaml
- name: "configure user-defined systemd"
include_tasks: systemd-user.yaml
- name: "run system-defined scripts for qcow"
include_tasks: runtime-system-scripts.yaml
when: run_context == qcow_run_context
- name: "run user-defined scripts"
include_tasks: user-scripts.yaml
- name: "configure file permissions"
include_tasks: file-permissions.yaml
- name: "configure user password settings"
include_tasks: user-management.yaml
# Context-independent cleanup tasks
- name: "finalize rootfs"
include_tasks: finalize-rootfs.yaml


@ -0,0 +1,12 @@
- name: "Modules | Defining modules to load"
template:
src: kernelmodules.j2
dest: "/etc/modules-load.d/99-{{ item.name }}.conf"
mode: 0644
loop: "{{ kernel.modules.load }}"
- name: "Modules | Defining modules to blacklist"
kernel_blacklist:
name: "{{ item.name }}"
state: present
loop: "{{ kernel.modules.blacklist }}"


@ -0,0 +1,34 @@
- name: "POST-INSTALL | generate locales"
shell: |
set -e
locale-gen en_US.UTF-8
- name: "POST-INSTALL | grub-install"
shell: |
set -e
grub-install --target=i386-pc --skip-fs-probe --force "{{ lookup('file', '/tmp/nbd') }}"
grub-install --target=i386-pc --skip-fs-probe --force --recheck "{{ lookup('file', '/tmp/nbd') }}"
grub-install --target=x86_64-efi --skip-fs-probe --force "{{ lookup('file', '/tmp/nbd') }}"
grub-install --target=x86_64-efi --skip-fs-probe --force --recheck "{{ lookup('file', '/tmp/nbd') }}"
- name: "POST-INSTALL | generate grub cfg file"
shell: |
set -e
update-grub
- name: "POST-INSTALL | write root partition UUID to grub.cfg"
shell: |
set -e
cp -r /usr/lib/grub/* /boot/grub
blkid -s UUID -o value $(df -h | grep /$ | awk "{print \$1}") > /tmp/root_uuid
sed -i "s@root=/dev/nbd[0-9]p[0-9]@root=UUID=$(cat /tmp/root_uuid)@g" /boot/grub/grub.cfg
rm /tmp/root_uuid
- name: "POST-INSTALL | write boot partition UUID to UEFI grub.cfg"
shell: |
set -e
blkid -s UUID -o value $(df -h | grep /boot$ | awk "{print \$1}") > /tmp/boot_uuid
echo "search.fs_uuid $(cat /tmp/boot_uuid) root hd0,gpt2" > /boot/efi/EFI/ubuntu/grub.cfg
echo "set prefix=(\$root)'/grub'" >> /boot/efi/EFI/ubuntu/grub.cfg
echo "configfile \$prefix/grub.cfg" >> /boot/efi/EFI/ubuntu/grub.cfg
rm /tmp/boot_uuid


@ -0,0 +1,6 @@
- sysctl:
name: "{{ item.name }}"
value: "{{ item.value }}"
state: present
reload: no
loop: "{{ sysctl }}"


@ -0,0 +1,18 @@
# TODO - move to airshipctl cloud-init process, where domain parameter is available
#- name: "systemd-resolved | Conf file for systemd-resolved DNS settings"
# template:
# src: resolved.j2
# dest: "/etc/systemd/resolved.conf"
# mode: 0644
# when: domain is defined
- name: "systemd-resolved | Ensuring systemd-resolved overrides directory exists"
file:
path: "/etc/systemd/system/systemd-resolved.service.d"
state: directory
mode: '0755'
- name: "systemd-resolved | Place startup retry override"
template:
src: systemd-resolved-overrides.j2
dest: "/etc/systemd/system/systemd-resolved.service.d/override.conf"
mode: '0644'


@ -0,0 +1,14 @@
- name: "Systemd | Writing user-provided systemd unit {{ item.name }}"
template:
src: generic-file-writer.j2
dest: "/etc/systemd/system/{{ item.name }}"
loop: "{{ systemd }}"
when: run_context in item.run_contexts
- name: "Systemd | Configuring user-provided systemd unit {{ item.name }}"
systemd:
name: "{{ item.name }}"
enabled: "{{ item.enabled }}"
force: "{{ item.force }}"
loop: "{{ systemd }}"
when: run_context in item.run_contexts


@ -0,0 +1,15 @@
- name: "Systemd | Link systemd to /sbin/init"
file:
src: /bin/systemd
dest: /sbin/init
owner: root
group: root
state: link
- name: "Systemd | Enable Systemd Networkd"
systemd:
enabled: yes
name: systemd-networkd.service
- name: "Systemd | Enable Systemd Networkd-dispatcher"
systemd:
enabled: yes
name: networkd-dispatcher.service


@ -0,0 +1,19 @@
- name: "unattended-upgrades | disable apt-daily timer"
file:
path: /etc/systemd/system/timers.target.wants/apt-daily.timer
state: absent
- name: "unattended-upgrades | disable apt-daily-upgrade timer"
file:
path: /etc/systemd/system/timers.target.wants/apt-daily-upgrade.timer
state: absent
- name: "unattended-upgrades | check for apt-daily cron"
stat:
path: /etc/cron.daily/apt-compat
register: stat_result
- name: "unattended-upgrades | disable apt-daily cron"
file:
path: /etc/cron.daily/apt-compat
mode: '0644'
when: stat_result.stat.exists


@ -0,0 +1,8 @@
- name: "User Management | Modifying user settings for {{ item.name }}"
user:
name: "{{ item.name }}"
password: "{{ item.password }}"
password_lock: "{{ item.password_lock }}"
shell: "{{ item.shell }}"
loop: "{{ user_management }}"
when: run_context in item.run_contexts


@ -0,0 +1,25 @@
# Execute scripts defined in the playbook
- name: "user-scripts | running user-defined scripts"
shell: "{{ item.file_content }}"
loop: "{{ user_scripts }}"
when: run_context in item.run_contexts
- name: "user-scripts | check for common scripts dir"
stat:
path: "{{ user_scripts_dir_default }}"
register: common_stat_result
- name: "user-scripts | check for qcow scripts dir"
stat:
path: "{{ user_scripts_dir_qcow }}"
register: qcow_stat_result
# Bulk-execute scripts in the scripts directory
- name: "user-scripts | running additional user-defined scripts"
shell: for s in $(find "{{ user_scripts_dir_default }}" -maxdepth 1 -type f | grep -v README.md | sort); do chmod 755 $s; eval $s; done
when: run_context == default_run_context and common_stat_result.stat.exists
# Bulk-execute scripts in the scripts directory
- name: "user-scripts | running additional user-defined scripts"
shell: for s in $(find "{{ user_scripts_dir_qcow }}" -maxdepth 1 -type f | grep -v README.md | sort); do chmod 755 $s; eval $s; done
when: run_context == qcow_run_context and qcow_stat_result.stat.exists


@ -0,0 +1,19 @@
- name: "User Directories | Creating user-provided directory {{ item.name }}"
file:
path: "{{ item.name }}"
state: directory
mode: "{{ item.permissions }}"
owner: "{{ item.owner }}"
group: "{{ item.group }}"
loop: "{{ directories }}"
when: run_context in item.run_contexts
- name: "User Files | Writing user-provided file {{ item.name }}"
template:
src: generic-file-writer.j2
dest: "{{ item.name }}"
mode: "{{ item.permissions }}"
owner: "{{ item.owner }}"
group: "{{ item.group }}"
loop: "{{ files }}"
when: run_context in item.run_contexts


@ -0,0 +1,2 @@
[Service]
ExecStart=/bin/systemd-machine-id-setup


@ -0,0 +1 @@
{{ item.file_content }}


@ -0,0 +1,21 @@
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'
GRUB_DEFAULT=0
GRUB_TERMINAL="serial console"
GRUB_TIMEOUT_STYLE=menu
GRUB_TIMEOUT={{ grub.GRUB_TIMEOUT }}
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
{% set ns = namespace (content = '') %}
{% for arg in grub_cmdline_linux_default %}
{% set ns.content = ns.content + ' ' + arg.name + '=' + arg.value %}
{% endfor %}
GRUB_CMDLINE_LINUX_DEFAULT="{{ ns.content }}"
GRUB_CMDLINE_LINUX=""
{% set ns = namespace (content = '') %}
{% for arg in grub.GRUB_SERIAL_COMMAND %}
{% set ns.content = ns.content + ' --' + arg.name + '=' + arg.value %}
{% endfor %}
GRUB_SERIAL_COMMAND="serial {{ ns.content }}"


@ -0,0 +1 @@
localhost


@ -0,0 +1 @@
127.0.0.1 localhost


@ -0,0 +1,5 @@
#!/bin/bash
{% for cmd in ifup_hooks %}
{{ cmd }}
{% endfor %}


@ -0,0 +1 @@
GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT crashkernel={{ kdump_tools.crashkernel }}"


@ -0,0 +1,9 @@
{% if "args" in item %}
{% set ns = namespace (content = item.name) %}
{% for arg in item.args %}
{% set ns.content = ns.content + ' ' + arg.name + '=' + arg.value %}
{% endfor %}
{{ ns.content }}
{% else %}
{{ item.name }}
{% endif %}


@ -0,0 +1,26 @@
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
{% set ns = namespace (content = '') %}
{% for arg in kubelet.extra_systemd_args %}
{% set ns.content = ns.content + ' --' + arg.name + '=' + arg.value %}
{% endfor %}
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS {{ ns.content }}
CPUAffinity=
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target


@ -0,0 +1 @@
{{ item.domain }} {{ item.type }} {{ item.item }} {{ item.value }}


@ -0,0 +1,4 @@
LANGUAGE=C
LANG=C
LC_ALL=C
LC_TERMINAL=C


@ -0,0 +1,4 @@
# TODO - move to airshipctl cloud-init, where domain parameters etc will be available
#[Resolve]
#Domains={{ domain }}


@ -0,0 +1,2 @@
[Unit]
StartLimitBurst=0


@ -0,0 +1 @@


@ -0,0 +1,56 @@
src: /build
dst: /chroot
nbd_build_dir: /tmp/nbd_build_dir
img_output_dir: /config
img_name: airship-ubuntu.qcow2
qcow_capacity: 19G
qcow_compress: true
partitions:
# Partition numbering is according to list ordering.
# Ironic default cloud-init configdrive injection requires
# root partition to be the first numbered partition.
- mount: /
mount_order: 0
part_start: 1284MiB
part_end: '100%'
filesystem:
type: ext4
fstab:
options: "defaults,errors=remount-ro,noatime"
dump: 0
fsck: 1
- mount: none
mount_order: 99
part_start: 1MiB
part_end: 5MiB
flags:
- bios_grub
- mount: /boot/efi
mount_order: 2
part_start: 5MiB
part_end: 516MiB
flags:
- esp
filesystem:
type: vfat
fstab:
options: "defaults,errors=remount-ro,noatime"
dump: 0
fsck: 1
- mount: /boot
mount_order: 1
part_start: 516MiB
part_end: 1284MiB
filesystem:
type: ext4
fstab:
options: "defaults,errors=remount-ro,noatime"
dump: 0
fsck: 2
# If any custom post-install shell scripts are needed for qcow building,
# they can be added here. This should only be used if
# osconfig_container_buildtime_scripts does not work in osconfig playbook.
qcow_container_runtime_scripts:
- file_content: |
#!/bin/bash
echo "custom qcow post-install script"


@ -0,0 +1,22 @@
- name: "QCOW | Installing extlinux"
shell: |
mkdir -p "{{ dst }}"/boot/syslinux
extlinux --install "{{ dst }}"/boot/syslinux/ --device /dev/disk/by-partlabel/{{ ( partitions | selectattr('mount', 'equalto', '/boot') | list | first ).mount | hash('md5') }}
- name: "QCOW | Writing out syslinux config"
copy:
content: |
DEFAULT linux
SAY Booting Airship Node
LABEL linux
KERNEL /vmlinuz
APPEND root=/dev/disk/by-partlabel/{{ ( partitions | selectattr('mount', 'equalto', '/') | list | first ).mount | hash('md5') }} initrd=/initrd.img
    dest: "{{ dst }}/boot/syslinux/syslinux.cfg"
- name: "QCOW | Installing kernel and init ramdisk"
shell: |
rm -rf "{{ dst }}"/vmlinuz
cp -f /mnt/image/vmlinuz "{{ dst }}"/boot/
rm -rf /tmp/mnt/initrd.img
cp -f /mnt/image/initrd "{{ dst }}"/boot/initrd.img


@ -0,0 +1,19 @@
- name: "QCOW | copy ansible playbooks to target image"
shell: |
set -e
cp -r /opt/assets "{{ dst }}"/opt
- name: "QCOW | unmount target"
shell: |
set -e
cd "{{ dst }}"
mountpoint dev/pts > /dev/null && umount dev/pts
mountpoint dev > /dev/null && umount dev
if [ -d /sys/firmware/efi ]; then
mountpoint sys/firmware/efi > /dev/null && umount sys/firmware/efi
fi
mountpoint sys > /dev/null && umount sys
mountpoint proc > /dev/null && umount proc
if [ -d "/run/systemd/resolve" ]; then
mountpoint run/systemd/resolve > /dev/null && umount -l run/systemd/resolve
fi


@ -0,0 +1,14 @@
- name: "QCOW | Mount remaining targets"
shell: |
set -e
cd "{{ dst }}"
mountpoint sys > /dev/null || mount -t sysfs /sys sys
if [ -d /sys/firmware/efi ]; then
mountpoint sys/firmware/efi > /dev/null || mount -o bind /sys/firmware/efi sys/firmware/efi
fi
mountpoint proc > /dev/null || mount -t proc /proc proc
mountpoint dev > /dev/null || mount -o bind /dev dev
mountpoint dev/pts > /dev/null || mount -t devpts /dev/pts dev/pts
if [ -d "/run/systemd/resolve" ]; then
mountpoint run/systemd/resolve > /dev/null || mount -o bind /run/systemd/resolve run/systemd/resolve
fi


@ -0,0 +1,5 @@
# Copy files onto partitioned disk
- name: "mount-helper | Copy files onto partition"
shell: |
set -e
rsync -ah {{ src }}/ {{ dst }}/ --exclude 'live'


@ -0,0 +1,59 @@
- name: "QCOW | Including any user-defined vars"
include_vars:
file: main.yaml
name: user-vars
- block:
- name: "QCOW | Creating and attaching qcow image"
include_tasks:
file: qcow-create-n-attach.yaml
- name: "QCOW | Creating partitions"
include_tasks:
file: partitions-and-filesystems.yaml
with_indexed_items: "{{ partitions }}"
- name: "QCOW | Mounting filesystems"
include_tasks:
file: mount-helper.yaml
loop: "{{ partitions | sort( case_sensitive=True, attribute='mount_order' ) }}"
vars:
mount_offset: "{{ dst }}"
state: mounted
fstab: /tmp/junkfstab
- name: "QCOW | Copy files to partition"
include_tasks:
file: copy-files.yaml
- name: "QCOW | Writing image content"
include_tasks:
file: writing-image-content.yaml
- name: "QCOW | chroot prep"
include_tasks:
file: chroot-prep.yaml
tags: prep_img
- block:
- name: "QCOW | chroot cleanup"
include_tasks:
file: chroot-cleanup.yaml
- name: "QCOW | Unmounting filesystems"
include_tasks:
file: mount-helper.yaml
loop: "{{ partitions | sort( reverse=True, case_sensitive=True, attribute='mount' ) }}"
vars:
mount_offset: "{{ dst }}"
state: unmounted
fstab: /tmp/junkfstab
- name: "QCOW | Detaching and compressing QCoW2"
include_tasks:
file: qcow-detach-n-compress.yaml
tags: close_img


@ -0,0 +1,9 @@
- name: "mount-helper | Setting mount state to {{ state }} for /dev/disk/by-partlabel/{{ item.mount | hash('md5') }} at the mountpoint for {{ item.mount }}"
mount:
path: "{{ mount_offset }}{{ item.mount }}"
src: "/dev/disk/by-partlabel/{{ item.mount | hash('md5') }}"
fstype: "{{ item.filesystem.type }}"
opts: "{{ item.filesystem.fstab.options }}"
state: "{{ state }}"
fstab: "{{ fstab }}"
when: item.mount != 'none'


@ -0,0 +1,27 @@
- name: "QCOW | Creating Partitions"
parted:
device: "{{ lookup('file', '/tmp/nbd') }}"
number: "{{ item.0 + 1 }}"
state: present
label: gpt
flags: "{{ item.1.flags | default(omit) }}"
part_start: "{{ item.1.part_start }}"
part_end: "{{ item.1.part_end }}"
name: "{{ item.1.mount | hash('md5') }}"
align: minimal
# For some reason, udev does not honor the partition label for by-partlabel symlinks, so we rename them here
- name: "QCOW | check for symlink"
stat:
path: /dev/disk/by-partlabel/primary
register: symlink
- name: "QCOW | udev symlink rename"
command: mv /dev/disk/by-partlabel/primary /dev/disk/by-partlabel/{{ item.1.mount | hash('md5') }}
when: symlink.stat.exists
- name: "QCOW | Creating Filesystems"
filesystem:
fstype: "{{ item.1.filesystem.type }}"
dev: "/dev/disk/by-partlabel/{{ item.1.mount | hash('md5') }}"
when: item.1.mount != 'none'


@ -0,0 +1,38 @@
- name: "QCOW | Enabling nbd kernel module"
command: modprobe nbd
- name: "QCOW | 3 second pause after loading nbd kernel module"
pause:
seconds: 3
- name: "QCOW | Finding available NBD device to use"
shell:
executable: /bin/bash
cmd: |
for dev in /sys/class/block/nbd*; do
size="$(cat "$dev"/size)"
device="/dev/nbd${dev: -1}"
if (( size == 0 )) && ! ls ${device}p* >& /dev/null; then
printf "%s" "$device"
exit 0
fi
done
# NOTE: if we have got this far, then we have not been able to find a suitable nbd device to consume.
exit 1
register: role_img_nbd_device
- name: "QCOW | Creating build directory"
file:
state: directory
path: "{{ nbd_build_dir }}"
- name: "QCOW | Creating QCoW2"
command: qemu-img create -f qcow2 {{ nbd_build_dir }}/{{ img_name }} {{ qcow_capacity }}
- name: "QCOW | Connecting QCoW2 to {{ role_img_nbd_device.stdout }}"
command: qemu-nbd --connect={{ role_img_nbd_device.stdout }} {{ nbd_build_dir }}/{{ img_name }}
- name: "QCOW | Store NBD device"
copy:
content: "{{ role_img_nbd_device.stdout }}"
dest: /tmp/nbd


@ -0,0 +1,14 @@
- name: "QCOW | Detaching QCoW from {{ role_img_nbd_device.stdout }}"
shell: |
qemu-nbd -d "{{ lookup('file', '/tmp/nbd') }}"
- name: "QCOW | Compressing QCoW and writing out to {{ img_output_dir }}/{{ img_name }}"
shell: |
qemu-img convert -p -O qcow2 -c {{ nbd_build_dir }}/{{ img_name }} {{ img_output_dir }}/{{ img_name }}
when: qcow_compress
- name: "QCOW | Writing QCoW to {{ img_output_dir }}/{{ img_name }}"
shell: |
qemu-img convert -p -O qcow2 {{ nbd_build_dir }}/{{ img_name }} {{ img_output_dir }}/{{ img_name }}
when: not qcow_compress


@ -0,0 +1,11 @@
- name: "QCOW | Writing out fstab"
include_tasks: mount-helper.yaml
loop: "{{ partitions | sort( case_sensitive=True, attribute='mount' ) }}"
vars:
mount_offset: null
state: present
fstab: "{{ dst }}/etc/fstab"
- name: "QCOW | Setting debug password"
shell: |
chroot "{{ dst }}" sh -c "echo \"root:password\" | chpasswd"


@ -0,0 +1 @@
{{ item.file_content }}


@ -0,0 +1 @@
# This file will be overwritten by the container entrypoint with user-provided vars, if any are defined.


@ -0,0 +1 @@
{"hostname": "ephemeral", "name": "ephemeral", "uuid": "83679162-1378-4288-a2d4-70e13ec132aa"}


@ -0,0 +1,6 @@
version: 2
ethernets:
all-en:
match:
name: "en*"
dhcp4: true


@ -0,0 +1,5 @@
#cloud-config
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDCAWBkS5iD7ORK59YUjJlPiWnzZXoFPbxlo8kvXjeGVgtUVD/FORZBvztoB9J1xTgE+DEg0dE2DiVrh3WXMWnUUwyaqjIu5Edo++P7xb53T9xRC7TUfc798NLAGk3CD8XvEGbDB7CD6Tvx7HcAco0WpEcPePcTcv89rZGPjal1nY4kGNT/0TWeECm99cXuWFjKm6WiMrir9ZN1yLcX/gjugrHmAGm8kQ/NJVEDRgSPV6jhppp7P/1+yqIUOOOXLx61d8oVG+ADlXEckXoetqHYjbzisxO/wa2KFM7cb5NTVKHFmxwVKX4kJeRL+I/94yLCiG05PidUFsIMzByPBEe/
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9D1m9eMr75japSYMX0Id/af1pyfDM2I1lPSwi2zZwYo8w0b3AyzV3w4iL8PzHCRmxwcm6/w5TfCxEHu7IzTJ4IkN7vIvJEVFPVCJNunuu1ZYahKkFB8g4q6+nsY6rj2ASpQRNrxkUTN2I4GmTRGB3N21uKe1KqbNuaCt5i0KxW0ydcZgAYZFs56qB8ie053VBeMBhhn3LxROKb7g3+NZ6kHkJiOo6p0q7iXiAOh0nvnSGjuSRGllOx/lPe+rdTN+NzuqWSN4sN9WPMjynqSRBMdI0TD7mI2i7uv67s2XpDIORX9dH6IudrLB4Ypz5QX/5Kxyc7Rk16HLSEn42bplj
hostname: airship

image-builder/config Symbolic link

@ -0,0 +1 @@
manifests/


@ -0,0 +1,131 @@
# Directory structure:
```
|-- manifests
|-- iso
+-- network_data.json
+-- user_data
|-- qcow-bundle-[bundle name]
|-- control-plane
+-- osconfig-vars.yaml
+-- qcow-vars.yaml
|-- data-plane
+-- osconfig-vars.yaml
+-- qcow-vars.yaml
|-- rootfs
|-- livecdcontent-vars.yaml
|-- multistrap-vars.yaml
|-- osconfig-vars.yaml
|-- scripts
|-- common
|-- qcow
```
## iso
The image-builder `generate_iso` makefile target can be used to build the
ephemeral ISO using the test config data stored under the `manifests/iso`
directory.
This is *only* for testing. It is *not* an artifact promoted or published. The
final ISO is built by airshipctl, where the network\_data and user\_data are
sourced from airshipctl manifests.
The following items are expected in the `manifests/iso` directory when using
the `generate_iso` makefile target:
- `user_data` - YAML file containing cloud-init user-data
- `network_data.json` - JSON file containing cloud-init network data
## qcow-bundles
The image-builder `package_qcow` makefile target can be used to build the QCOW
artifacts sourced from the manifests/qcow-bundle-\* directories.
QCOWs are grouped into publishable "bundles", i.e. container images in which
all of the QCOWs needed for a given deployment are stored. A bundle is built
for each `manifests/qcow-bundle*` directory. Each such directory contains
one subdirectory per QCOW that is part of that bundle, where overrides for
those images can be placed.
QCOWs expect the following files to be present in their directory:
- `osconfig-vars.yaml` - YAML file containing `osconfig` playbook overrides
- `qcow-vars.yaml` - YAML file containing `qcow` playbook overrides
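As a sketch, a minimal `qcow-vars.yaml` override might look as follows; `qcow_capacity` and `qcow_compress` are variables from the `qcow` playbook defaults, and the values shown are purely illustrative:

```yaml
# Illustrative qcow-vars.yaml override; values are examples only.
qcow_capacity: 40G     # enlarge the image from the 19G default
qcow_compress: true    # compress the QCoW2 when writing the final artifact
```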
## rootfs
This directory contains a number of image-builder ansible playbook overrides
which are applied to the base image inherited by the ISO and all QCOWs.
`livecdcontent-vars.yaml` contains overrides to the livecdcontent playbook.
`multistrap-vars.yaml` contains overrides to the `multistrap` playbook.
`osconfig-vars.yaml` contains overrides to the `osconfig` playbook.
NOTE: qcow-bundles contain another level of `osconfig-vars` overrides, which
are applied on top of these common overrides. These common `osconfig-vars`
overrides should be used for playbook overrides, except in cases where the
overrides are actually unique to a particular QCOW variation (e.g., hugepages,
CPU pinning, or other hardware-specific configs).
## scripts
This is a convenience directory for adding scripts that run when building images.
These scripts run in the chroot of the target image. For example, a script that
writes 'hello world' to `/hello-world.txt` will appear in the same path on the
target image.
Use the `manifests/scripts/qcow` directory for scripts that should only run
when building the QCOWs. Use the `manifests/scripts/common` directory for
scripts that are applied to the base container image, which is inherited both by
the QCOWs as well as by the ephemeral ISO.
No additional configuration is needed for these scripts to run. Just add your
script(s) to these directories as needed.
# Customizing images in your environment
Keep in mind that some tasks could also be accomplished by cloud-init or by
the hostconfig operator instead. Refer to the parent image-builder README to
understand the different use-cases for each and to determine the best option
for your use-case. These are lower-effort paths if they support your use-case.
If you determine that you do require image customizations, start with a manual
image build to reduce complexity:
1. Clone this repository in your environment.
1. Make any desired changes to the `manifests` directory to customize the
image, as described in prior sections.
1. Perform a `docker login` to the docker registry you will publish image
artifacts to. This should be a registry you have credentials for and that
is accessible from the environment in which you plan to consume these
artifacts (e.g., airshipctl).
1. Run the `make images` target to generate image artifacts. Ensure that the
`PUSH_IMAGE` environment variable is set to `true`, and that the
`DOCKER_REGISTRY` environment variable is set to the container image
repository you performed the login for in the previous step.
Perform an end-to-end deployment (e.g., with airshipctl) to verify that your
customized image performs as you expect and works properly.
Once this is working, there are several options for how to proceed, depending
on the nature of the changes:
1. Some set of changes to defaults could be proposed upstream (e.g., package
install list). This may be appropriate for changes that are useful for
everyone. In this case, you don't need a custom image because the changes
will be reflected in the image produced upstream.
1. Some enhancements or additions to ansible playbooks to configure some other
aspects of the image, which are useful for everyone and proposed upstream.
In this case, you would be able to leverage ansible overrides to customize
your image with ansible playbooks that are maintained/adopted upstream.
1. Some change to image configuration that is specific to your needs and not
appropriate to be upstreamed.
In the case of #2 or #3 where you have some portion of image config changes that
are specific to your use-case (i.e. not part of the default upstream image),
and you want to perform regular rebuilds with the latest upstream image-builder
plus your customized changes on top, then you can set up a Zuul child-job that
interfaces with the image-builder parent-job to accomplish this.
By overriding the `image_config_dir` zuul variable in your child-job, the
image-builder Makefile will use your customized manifests in place of the
`manifests` directory that is present in upstream image-builder.
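Such a child-job might be sketched as follows; the job and parent names are hypothetical placeholders, and only the `image_config_dir` override is the actual interface described above:

```yaml
# Hypothetical Zuul child-job sketch. The job and parent names are
# placeholders; image_config_dir is the variable the Makefile honors.
- job:
    name: my-custom-image-build
    parent: airship-image-builder   # placeholder for the image-builder parent-job
    vars:
      image_config_dir: my-manifests
```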


@ -0,0 +1,3 @@
Data used when running the `iso` playbook.
This is just for testing. The ISO is not published anywhere.


@ -0,0 +1,23 @@
{
"links": [
{
"ethernet_mac_address": "52:54:00:6c:99:85",
"id": "ens3",
"type": "phy"
}
],
"networks": [
{
"id": "network0",
"link": "ens3",
"network_id": "99e88329-f20d-4741-9596-25bf07847b16",
"type": "ipv4_dhcp"
}
],
"services": [
{
"address": "8.8.8.8",
"type": "dns"
}
]
}


@ -0,0 +1,29 @@
#cloud-config
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDCAWBkS5iD7ORK59YUjJlPiWnzZXoFPbxlo8kvXjeGVgtUVD/FORZBvztoB9J1xTgE+DEg0dE2DiVrh3WXMWnUUwyaqjIu5Edo++P7xb53T9xRC7TUfc798NLAGk3CD8XvEGbDB7CD6Tvx7HcAco0WpEcPePcTcv89rZGPjal1nY4kGNT/0TWeECm99cXuWFjKm6WiMrir9ZN1yLcX/gjugrHmAGm8kQ/NJVEDRgSPV6jhppp7P/1+yqIUOOOXLx61d8oVG+ADlXEckXoetqHYjbzisxO/wa2KFM7cb5NTVKHFmxwVKX4kJeRL+I/94yLCiG05PidUFsIMzByPBEe/
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9D1m9eMr75japSYMX0Id/af1pyfDM2I1lPSwi2zZwYo8w0b3AyzV3w4iL8PzHCRmxwcm6/w5TfCxEHu7IzTJ4IkN7vIvJEVFPVCJNunuu1ZYahKkFB8g4q6+nsY6rj2ASpQRNrxkUTN2I4GmTRGB3N21uKe1KqbNuaCt5i0KxW0ydcZgAYZFs56qB8ie053VBeMBhhn3LxROKb7g3+NZ6kHkJiOo6p0q7iXiAOh0nvnSGjuSRGllOx/lPe+rdTN+NzuqWSN4sN9WPMjynqSRBMdI0TD7mI2i7uv67s2XpDIORX9dH6IudrLB4Ypz5QX/5Kxyc7Rk16HLSEn42bplj
hostname: airship
password: password
ssh_pwauth: True
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDCAWBkS5iD7ORK59YUjJlPiWnzZXoFPbxlo8kvXjeGVgtUVD/FORZBvztoB9J1xTgE+DEg0dE2DiVrh3WXMWnUUwyaqjIu5Edo++P7xb53T9xRC7TUfc798NLAGk3CD8XvEGbDB7CD6Tvx7HcAco0WpEcPePcTcv89rZGPjal1nY4kGNT/0TWeECm99cXuWFjKm6WiMrir9ZN1yLcX/gjugrHmAGm8kQ/NJVEDRgSPV6jhppp7P/1+yqIUOOOXLx61d8oVG+ADlXEckXoetqHYjbzisxO/wa2KFM7cb5NTVKHFmxwVKX4kJeRL+I/94yLCiG05PidUFsIMzByPBEe/
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9D1m9eMr75japSYMX0Id/af1pyfDM2I1lPSwi2zZwYo8w0b3AyzV3w4iL8PzHCRmxwcm6/w5TfCxEHu7IzTJ4IkN7vIvJEVFPVCJNunuu1ZYahKkFB8g4q6+nsY6rj2ASpQRNrxkUTN2I4GmTRGB3N21uKe1KqbNuaCt5i0KxW0ydcZgAYZFs56qB8ie053VBeMBhhn3LxROKb7g3+NZ6kHkJiOo6p0q7iXiAOh0nvnSGjuSRGllOx/lPe+rdTN+NzuqWSN4sN9WPMjynqSRBMdI0TD7mI2i7uv67s2XpDIORX9dH6IudrLB4Ypz5QX/5Kxyc7Rk16HLSEn42bplj
chpasswd:
expire: false
list: |
root:password
ubuntu:password
users:
- default
- name: root
gecos: password
ssh_pwauth: True
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDCAWBkS5iD7ORK59YUjJlPiWnzZXoFPbxlo8kvXjeGVgtUVD/FORZBvztoB9J1xTgE+DEg0dE2DiVrh3WXMWnUUwyaqjIu5Edo++P7xb53T9xRC7TUfc798NLAGk3CD8XvEGbDB7CD6Tvx7HcAco0WpEcPePcTcv89rZGPjal1nY4kGNT/0TWeECm99cXuWFjKm6WiMrir9ZN1yLcX/gjugrHmAGm8kQ/NJVEDRgSPV6jhppp7P/1+yqIUOOOXLx61d8oVG+ADlXEckXoetqHYjbzisxO/wa2KFM7cb5NTVKHFmxwVKX4kJeRL+I/94yLCiG05PidUFsIMzByPBEe/
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9D1m9eMr75japSYMX0Id/af1pyfDM2I1lPSwi2zZwYo8w0b3AyzV3w4iL8PzHCRmxwcm6/w5TfCxEHu7IzTJ4IkN7vIvJEVFPVCJNunuu1ZYahKkFB8g4q6+nsY6rj2ASpQRNrxkUTN2I4GmTRGB3N21uKe1KqbNuaCt5i0KxW0ydcZgAYZFs56qB8ie053VBeMBhhn3LxROKb7g3+NZ6kHkJiOo6p0q7iXiAOh0nvnSGjuSRGllOx/lPe+rdTN+NzuqWSN4sN9WPMjynqSRBMdI0TD7mI2i7uv67s2XpDIORX9dH6IudrLB4Ypz5QX/5Kxyc7Rk16HLSEn42bplj
runcmd:
- set -x
- export PATH=$PATH:/usr/sbin:/sbin
- mkdir -p /opt/metal3-dev-env/ironic/html/images /var/lib/ironic-persisted-data-volume
- /bin/bash -c 'kernel_libsubdir="$(ls /lib/modules | head -1)"; config_dir="/lib/modules/${kernel_libsubdir}/build"; mkdir -p "${config_dir}"; if [ -f /run/live/medium/config ] && [ ! -f "${config_dir}/.config" ]; then ln -s /run/live/medium/config "${config_dir}/.config"; fi;'


@ -0,0 +1,3 @@
This folder represents a "QCOW bundle", i.e. a container image that will
contain all of the target QCOW images needed for a particular subcluster
deployment.


@ -0,0 +1,3 @@
By default, the QCOW produced from this set of configs will be
deployed to subcluster control-plane nodes in an airshipctl
deployment.


@ -0,0 +1,4 @@
# Custom user-defined overrides to the `osconfig` playbook can be placed here.
#
# You would only use this if you needed to apply some settings unique to the
# control-plane image. Ex: kernel boot params specific to non-worker nodes.


@ -0,0 +1,3 @@
# Custom user-defined overrides to the `qcow` playbook can be placed here.
# Example, Changing disk size:
#qcow_capacity: 200G

Some files were not shown because too many files have changed in this diff