Add MAAS and Drydock deployment to Airskiff

This commit adds MAAS and Drydock deployment to
the Airskiff profile. It may be used as an integration
test gate for MAAS and Drydock.

Change-Id: Ib89a2e29182587e56034c46a83934d819ad2b430
Signed-off-by: Sergiy Markin <smarkin@mirantis.com>
Sergiy Markin
2025-08-02 03:25:30 +00:00
parent f5493580be
commit b1e0493b1c
36 changed files with 1070 additions and 142 deletions

.gitignore
View File

@@ -1,31 +1,40 @@
# output for generated manifest
peggles/
# Unit test / coverage reports
.tox/
config-ssh
# Sphinx Build Files
_build
# Various user specific files
.DS_Store
.idea/
.vimrc
*.swp
.vscode/
.devcontainer
# Packages
*.egg*
*.egg-info
dist
build
eggs
parts
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# output for generated manifest
peggles/
# Unit test / coverage reports
.tox/
config-ssh
# Sphinx Build Files
_build
# Various user specific files
.DS_Store
.idea/
.vimrc
*.swp
.vscode/
.devcontainer
.crossnote
# Packages
*.egg*
*.egg-info
dist
build
eggs
parts
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# secrets
ucp_airflow_api_auth_jwt_secret.yaml
ucp_airflow_fernet_key.yaml
ucp_airflow_core_fernet_key.yaml
# rendered manifests
airskiff.yaml

View File

@@ -168,9 +168,29 @@
nodes:
- primary
- nodeset:
name: treasuremap-airskiff-1node-32GB-ubuntu_jammy
nodes:
- name: primary
# This label is available in vexxhost ca-ymq-1 region
# The flavor v3-standard-8 in this particular region has
# 32GB nodes. The number of such nodes is extremely limited.
label: ubuntu-jammy-32GB
groups:
- name: primary
nodes:
- primary
- name: k8s_cluster
nodes:
- primary
- name: k8s_control_plane
nodes:
- primary
- job:
name: treasuremap-airskiff-infra-deploy-base
# Pinned for now because Ansible 11 stopped producing verbose output.
ansible-version: 9
abstract: true
roles:
- zuul: openstack/openstack-helm
@@ -285,8 +305,45 @@
vars:
USE_ARMADA_GO: true
- job:
name: treasuremap-airskiff-deploy-maas-drydock-base
# Pinned for now because Ansible 11 stopped producing verbose output.
ansible-version: 9
parent: treasuremap-airskiff-infra-deploy-base
abstract: true
# nodeset: treasuremap-airskiff-1node-32GB-ubuntu_jammy
description: |
Deploy MAAS, Drydock, and OpenStack using Airskiff and the latest
Treasuremap changes. Airskiff uses the latest Airship v1.x, based on Airflow 3.0.2.
vars:
USE_ARMADA_GO: true
MAKE_MAAS_IMAGES: true
DISTRO: ubuntu_jammy
# DOCKER_REGISTRY: localhost:5000
gate_scripts:
- ./tools/deployment/airskiff/developer/000-prepare-k8s.sh
- ./tools/deployment/airskiff/developer/009-setup-apparmor.sh
- ./tools/deployment/airskiff/developer/000-clone-dependencies.sh
- ./tools/deployment/airskiff/developer/020-setup-client.sh
- ./tools/deployment/airskiff/developer/015-make-all-charts.sh
- ./tools/deployment/airskiff/developer/017-make-all-images.sh
- ./tools/deployment/airskiff/developer/022-generate_security_keys.sh
- ./tools/deployment/airskiff/developer/025-start-artifactory.sh
- ./tools/deployment/airskiff/developer/026-reduce-site.sh
- ./tools/deployment/airskiff/developer/027-enable-armada-operator.sh
- ./tools/deployment/airskiff/developer/028-enable-maas-drydock.sh
- ./tools/deployment/airskiff/developer/030-armada-bootstrap.sh
- ./tools/deployment/airskiff/developer/100-deploy-osh.sh
- ./tools/deployment/airskiff/common/os-env.sh
- ./tools/gate/wait-for-shipyard.sh
- ./tools/deployment/airskiff/common/get-airflow-worker-logs.sh
# - ./tools/deployment/airskiff/common/sleep.sh
- job:
name: treasuremap-site-lint
# Pinned for now because Ansible 11 stopped producing verbose output.
ansible-version: 9
description:
Lint a site using Pegleg. Default site is seaworthy.
nodeset: treasuremap-single-node-ubuntu-jammy

View File

@@ -206,6 +206,7 @@ Process Flows
airskiff
airsloop
seaworthy
seaworthy-virt
.. toctree::
:caption: Airship Project Documentation

View File

@@ -0,0 +1,190 @@
Multi-node development environment
==================================
Airship-in-a-bottle multi-node is a simulated
`Treasure map <https://opendev.org/airship/treasuremap>`__ deployment
using only virtual machines that run within a single hypervisor. This gives
great control over things like network topology and 'hardware'
configuration, but it also introduces some limitations. Overall,
the framework should allow most developers to verify with a high degree of
certainty that a proposed change will be successful.
Mechanics
---------
The automation framework is contained in the
`airship-in-a-bottle <https://opendev.org/airship/in-a-bottle>`__
repository. This framework is based on original work by Mark Burnett in
the `Airship/Promenade <https://opendev.org/airship/promenade>`__ project.
There are several parallel pieces of automation in this repository; we will
focus on the tooling under ``tools/multi_nodes_gate``.
Currently, the framework is hard-coded to source the deployment
definition from the ``deployment_files`` directory within the repository.
Specifically, the framework attempts to deploy the site
``deployment_files/site/gate-multinode``. Below we will describe how we
can merge this framework with our
`Treasure map <https://opendev.org/Airship/Treasuremap>`__ definitions.
Here is a brief overview of the components within ``tools/multi_nodes_gate``:
- ``./setup_gate.sh`` - Must be run once on the host machine to install
necessary packages/tools and set them up appropriately
- ``./gate.sh`` - the entry point for running the framework; run it with the
name of the scenario you'd like to simulate (see the sketch after this list).
The default scenario is ``multinode_deploy``.
- ``./airship_gate/manifests`` - JSON files that drive the various
"scenarios" that can be executed by the gate.
- ``./airship_gate/stages`` - The scripts that are called based on the
stages section of a scenario manifest. These should be loose wrappers
that call library scripts from ``lib``.
- ``./airship_gate/lib`` - All of the framework script libraries
- ``./airship_gate/bin`` - Utility binaries that can be used during
testing/troubleshooting
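For orientation, here is a minimal sketch of invoking the framework, assuming
the repository layout described above (the sections below cover the full
workflow):
.. code-block:: bash
# The scenario name maps to a JSON manifest under ./airship_gate/manifests/.
cd ~/airship-in-a-bottle/tools/multi_nodes_gate
./gate.sh                 # default scenario: multinode_deploy.json
./gate.sh update_site     # alternate scenario: update_site.json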
Host Setup (Required once)
--------------------------
Recommended minimum requirements:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- 64GB RAM
- 24 vCPUs
- 300 GB disk
Configure /etc/profile
~~~~~~~~~~~~~~~~~~~~~~
Add the following lines to ``/etc/profile`` or create a separate env
file that you can ``source`` separately. These environment variables are used
at various stages throughout the pipeline.
*NOTE: /etc/environment does not work well with double quotes (") or with nested env
variables.*
.. code-block:: bash
export TEMP_DIR="$HOME/airship-in-a-bottle/tools/multi_nodes_gate/temp"
export IMAGE_PROMENADE_CLI="quay.io/airshipit/promenade:b417f422e9cd2a921646ee0af4a05fc4e211beab"
export IMAGE_PEGLEG_CLI="quay.io/airshipit/pegleg:50ce7a02e08a0a5277c2fbda96ece6eb5782407a"
export IMAGE_SHIPYARD_CLI="quay.io/airshipit/shipyard:f4f57a1bbf3cc9ca6c868a11cc8923326c81b6dc"
export IMAGE_COREDNS="docker.io/coredns/coredns:1.1.3"
export IMAGE_QUAGGA="cumulusnetworks/quagga:CL3.3.2"
export IMAGE_DRYDOCK_CLI="quay.io/airshipit/drydock:54ea0e1374ddd18a5195edc418b5c0b042666f45"
export BASE_IMAGE_URL="https://cloud-images.ubuntu.com/releases/16.04/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img"
export UPSTREAM_DNS="8.8.8.8 8.8.4.4"
export AIRSHIP_KEYSTONE_URL="http://keystone.gate.local:80/v3"
export SHIPYARD_OS_PASSWORD=password123
export VIRSH_CPU_OPTS="Westmere"
export USE_EXISTING_SECRETS="true"
export GATE_SSH_KEY="$HOME/.ssh/id_rsa_airship"
# export NTP_POOLS=""
Don't forget to ``source`` the new set of env variables:
.. code-block:: bash
source /etc/profile
.. code-block:: bash
source < your own env file >
Create libvirtd group and add user virtmgr
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: bash
sudo groupadd libvirtd
sudo useradd virtmgr
sudo usermod -a -G libvirtd virtmgr
**If you are using Ubuntu 18.04, add this step:**
.. code-block:: bash
sudo groupadd libvirt
sudo usermod -a -G libvirt virtmgr
Clone the Repositories
~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: bash
git clone https://opendev.org/airship/in-a-bottle.git ~/airship-in-a-bottle
git clone https://opendev.org/airship/treasuremap.git ~/airship-in-a-bottle/treasuremap
Set up SSH Keys
~~~~~~~~~~~~~~~
.. code-block::
cat ~/airship-in-a-bottle/treasuremap/site/seaworthy-virt/secrets/passphrases/airship_drydock_kvm_ssh_key.yaml
=> Copy the block from '-----BEGIN RSA PRIVATE KEY-----' through '-----END RSA PRIVATE KEY-----' and save it as ${HOME}/.ssh/id_rsa_airship
cat ~/airship-in-a-bottle/treasuremap/site/seaworthy-virt/secrets/passphrases/airship_ubuntu_ssh_public_key.yaml
=> Copy only the ssh-rsa string and save it as ${HOME}/.ssh/id_rsa_airship.pub
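If you prefer not to copy the key material by hand, a short sketch like the
following can extract the ``data`` block from the Deckhand document (this
assumes PyYAML is installed on the host; the public key can be handled the
same way if its ``data`` field holds only the ssh-rsa string):
.. code-block:: bash
# Sketch: print the 'data' field (the PEM block) of the secrets document.
python3 -c 'import sys, yaml; print(yaml.safe_load(sys.stdin)["data"])' \
  < ~/airship-in-a-bottle/treasuremap/site/seaworthy-virt/secrets/passphrases/airship_drydock_kvm_ssh_key.yaml \
  > ~/.ssh/id_rsa_airship
chmod 600 ~/.ssh/id_rsa_airship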
Replace multinode_deploy.json with virtual seaworthy stages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``~/airship-in-a-bottle/tools/multi_nodes_gate/airship_gate/manifests/multinode_deploy.json``
.. literalinclude:: ./seaworthy-virt/multinode_deploy.json
:language: json
Create a temp directory for generated files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: bash
mkdir ~/airship-in-a-bottle/tools/multi_nodes_gate/temp/
Run the setup script
~~~~~~~~~~~~~~~~~~~~
.. code-block:: bash
cd ~/airship-in-a-bottle/tools/multi_nodes_gate/
./setup_gate.sh
A reboot is recommended after this step.
Deploy the Site
---------------
Repeat this process as many times as required.
If you want to test specific patch sets, pull them into the Treasure map
repository clone before running the gate.
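For example, to fetch a review from Gerrit into the local clone (the change
number and patch set below are placeholders):
.. code-block:: bash
# Hypothetical change 12345, patch set 3; the refs/changes path uses the
# last two digits of the change number as its first component.
cd ~/airship-in-a-bottle/treasuremap
git fetch https://review.opendev.org/airship/treasuremap refs/changes/45/12345/3
git checkout FETCH_HEAD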
Remove the generated files and run
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: bash
cd ~/airship-in-a-bottle/tools/multi_nodes_gate/
rm -rf temp/*
./gate.sh
Update the Site
---------------
Replace the contents of update_site.json
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``~/airship-in-a-bottle/tools/multi_nodes_gate/airship_gate/manifests/update_site.json``
.. literalinclude:: ./seaworthy-virt/update_site.json
:language: json
Run update site
~~~~~~~~~~~~~~~
Once you have made the changes to your manifests, run the update site action:
.. code-block:: bash
cd ~/airship-in-a-bottle/tools/multi_nodes_gate
./gate.sh update_site

View File

@@ -0,0 +1,125 @@
{
"configuration": {
"site": "seaworthy-virt",
"primary_repo": "treasuremap",
"aux_repos": []
},
"ingress": {
"domain": "gate.local",
"ca": "-----BEGIN CERTIFICATE-----\nMIIDIDCCAgigAwIBAgIUfikFVpFSQKVjACP9i8P4tUMnQbcwDQYJKoZIhvcNAQEL\nBQAwKDERMA8GA1UEChMIU25ha2VvaWwxEzARBgNVBAMTCmluZ3Jlc3MtY2EwHhcN\nMTgxMjAzMjEzOTAwWhcNMjMxMjAyMjEzOTAwWjAoMREwDwYDVQQKEwhTbmFrZW9p\nbDETMBEGA1UEAxMKaW5ncmVzcy1jYTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCC\nAQoCggEBAOR6+3dCF5mtKvu2TlaYNHc6/v8VPvw3I0+EI+jRskXVQHZxF0kcLAVH\n/LM2maTMzNc1sZnxCnj8YYHxfhdIco+zwzCbG1YGolSPrPaslYmMmDjR0eVl1+tb\nmLnEHDZ88ds5rXNlUXDhAURzYPJivG2aYBVImvaS4GHztndaFFNE0Q7HQpldCs1Q\n5+xbFlKWHBt/xPM4QjoD/ReLEE5m5HhkT4WN0hWC0NC1OwW6bBhVkrk4D2kDTq8d\n/b5MH4FG2HHJYHXKR4caasrCHUrmuq7m6WoicwF7z53FvlM782EsNx6vSoBKYs39\n/AC4meM/9D8rjUlWaG3AjP0KFrFCLYECAwEAAaNCMEAwDgYDVR0PAQH/BAQDAgEG\nMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFJFfhFd1reBWgmrWe6PBV2z5W/Ee\nMA0GCSqGSIb3DQEBCwUAA4IBAQAZygjSCRSJrvgPllyWDpyKN1fg2r7P2ioI0WR9\nWkSrPKzdhi2hR8VdJxkMvRpEmWRhkQT7jNGEIWgy2jtyWiYKnKYobbY/kMU86QgL\nZazh2DiIeJim+Vt3RREyfOcNDwGMX7NpfwMTz7Dzl+jvtlBwKLFN0L15d0X+4J9V\ndRp5ZkooVjiOJb6vNcozDWxBrRPAowrvzLlJkFMaKgJQmGigEpgEygnCRH++NCle\n/ivGbdFuCsYzUTlR77xf9kGXMh3socMXcdu5SOtaDS7sl52DAJnAPxo9S6l0270G\na0989is2yCgDNmld5lpphVPaQSusGa8/XTaXR7YH+oc7qn1l\n-----END CERTIFICATE-----",
"172.24.1.5": [
"maas"
],
"172.24.1.6": ["drydock", "shipyard", "keystone"]
},
"stages": [
{
"name": "Gate Setup",
"script": "gate-setup.sh"
},
{
"name": "Pegleg Collection",
"script": "pegleg-collect.sh"
},
{
"name": "Pegleg Render",
"script": "pegleg-render.sh"
},
{
"name": "Generate Certificates",
"script": "generate-certificates.sh"
},
{
"name": "Build Scripts",
"script": "build-scripts.sh"
},
{
"name": "Create VMs",
"script": "create-vms.sh"
},
{
"name": "Register Ingress",
"script": "ingress-dns.sh",
"arguments": ["build"]
},
{
"name": "Create BGP router",
"script": "bgp-router.sh",
"arguments": ["build"]
},
{
"name": "Pre Genesis Setup",
"script": "genesis-setup.sh"
},
{
"name": "Genesis",
"script": "genesis.sh",
"on_error": "collect_genesis_info.sh"
},
{
"name": "Validate Genesis",
"script": "validate-genesis.sh",
"on_error": "collect_genesis_info.sh"
},
{
"name": "Load Site Design",
"script": "shipyard-load-design.sh"
},
{
"name": "Deploy Site",
"script": "shipyard-deploy-site.sh"
},
{
"name": "Validate Kube",
"script": "validate-kube.sh",
"on_error": "collect_genesis_info.sh"
}
],
"vm": {
"build": {
"memory": 4072,
"vcpus": 2,
"mac": "52:54:00:00:be:31",
"ip": "172.24.1.9",
"io_profile": "fast",
"bootstrap": true,
"userdata": "packages: [docker.io]"
},
"n0": {
"memory": 24768,
"vcpus": 16,
"mac": "52:54:00:00:a4:31",
"ip": "172.24.1.10",
"io_profile": "fast",
"bootstrap": true
},
"n1": {
"memory": 8072,
"vcpus": 4,
"mac": "52:54:00:00:a3:31",
"ip": "172.24.1.11",
"io_profile": "fast",
"bootstrap": false
},
"n2": {
"memory": 8072,
"vcpus": 4,
"mac": "52:54:00:1a:95:0d",
"ip": "172.24.1.12",
"io_profile": "fast",
"bootstrap": false
},
"n3": {
"memory": 8072,
"vcpus": 4,
"mac": "52:54:00:31:c2:36",
"ip": "172.24.1.13",
"io_profile": "fast",
"bootstrap": false
}
},
"bgp": {
"quagga_as": 64688,
"calico_as": 64671
}
}

View File

@@ -0,0 +1,74 @@
{
"configuration": {
"site": "seaworthy-virt",
"primary_repo": "treasuremap",
"aux_repos": []
},
"ingress": {
"domain": "gate.local",
"ca": "-----BEGIN CERTIFICATE-----\nMIIDIDCCAgigAwIBAgIUfikFVpFSQKVjACP9i8P4tUMnQbcwDQYJKoZIhvcNAQEL\nBQAwKDERMA8GA1UEChMIU25ha2VvaWwxEzARBgNVBAMTCmluZ3Jlc3MtY2EwHhcN\nMTgxMjAzMjEzOTAwWhcNMjMxMjAyMjEzOTAwWjAoMREwDwYDVQQKEwhTbmFrZW9p\nbDETMBEGA1UEAxMKaW5ncmVzcy1jYTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCC\nAQoCggEBAOR6+3dCF5mtKvu2TlaYNHc6/v8VPvw3I0+EI+jRskXVQHZxF0kcLAVH\n/LM2maTMzNc1sZnxCnj8YYHxfhdIco+zwzCbG1YGolSPrPaslYmMmDjR0eVl1+tb\nmLnEHDZ88ds5rXNlUXDhAURzYPJivG2aYBVImvaS4GHztndaFFNE0Q7HQpldCs1Q\n5+xbFlKWHBt/xPM4QjoD/ReLEE5m5HhkT4WN0hWC0NC1OwW6bBhVkrk4D2kDTq8d\n/b5MH4FG2HHJYHXKR4caasrCHUrmuq7m6WoicwF7z53FvlM782EsNx6vSoBKYs39\n/AC4meM/9D8rjUlWaG3AjP0KFrFCLYECAwEAAaNCMEAwDgYDVR0PAQH/BAQDAgEG\nMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFJFfhFd1reBWgmrWe6PBV2z5W/Ee\nMA0GCSqGSIb3DQEBCwUAA4IBAQAZygjSCRSJrvgPllyWDpyKN1fg2r7P2ioI0WR9\nWkSrPKzdhi2hR8VdJxkMvRpEmWRhkQT7jNGEIWgy2jtyWiYKnKYobbY/kMU86QgL\nZazh2DiIeJim+Vt3RREyfOcNDwGMX7NpfwMTz7Dzl+jvtlBwKLFN0L15d0X+4J9V\ndRp5ZkooVjiOJb6vNcozDWxBrRPAowrvzLlJkFMaKgJQmGigEpgEygnCRH++NCle\n/ivGbdFuCsYzUTlR77xf9kGXMh3socMXcdu5SOtaDS7sl52DAJnAPxo9S6l0270G\na0989is2yCgDNmld5lpphVPaQSusGa8/XTaXR7YH+oc7qn1l\n-----END CERTIFICATE-----",
"172.24.1.5": [
"maas"
],
"172.24.1.6": ["drydock", "shipyard", "keystone"]
},
"stages": [
{
"name": "Pegleg Collection",
"script": "pegleg-collect.sh",
"arguments": [
"update"
]
},
{
"name": "Load Site Design",
"script": "shipyard-load-design.sh",
"arguments": [
"-g",
"-o"
]
},
{
"name": "Deploy Site",
"script": "shipyard-update-site.sh"
}
],
"vm": {
"build": {
"memory": 3072,
"vcpus": 2,
"mac": "52:54:00:00:be:31",
"ip": "172.24.1.9",
"bootstrap": true,
"userdata": "packages: [docker.io]"
},
"n0": {
"memory": 32768,
"vcpus": 8,
"mac": "52:54:00:00:a4:31",
"ip": "172.24.1.10",
"bootstrap": true
},
"n1": {
"memory": 3072,
"vcpus": 2,
"mac": "52:54:00:00:a3:31",
"ip": "172.24.1.11",
"bootstrap": false
},
"n2": {
"memory": 3072,
"vcpus": 2,
"mac": "52:54:00:1a:95:0d",
"ip": "172.24.1.12",
"bootstrap": false
},
"n3": {
"memory": 3072,
"vcpus": 2,
"mac": "52:54:00:31:c2:36",
"ip": "172.24.1.13",
"bootstrap": false
}
}
}

View File

@@ -8,12 +8,6 @@ metadata:
layer: global
storagePolicy: cleartext
substitutions:
- src:
schema: pegleg/SoftwareVersions/v1
name: software-versions
path: .images.kubernetes.hyperkube
dest:
path: .files[0].docker_image
- src:
schema: pegleg/SoftwareVersions/v1
@@ -47,13 +41,6 @@ metadata:
dest:
path: .images.helm.helm
- src:
schema: pegleg/SoftwareVersions/v1
name: software-versions
path: .images.kubernetes.hyperkube
dest:
path: .images.kubernetes.hyperkube
- src:
schema: pegleg/SoftwareVersions/v1
name: software-versions
@@ -112,9 +99,6 @@ metadata:
data:
files:
- path: /opt/kubernetes/bin/hyperkube
file_path: /hyperkube
mode: 0555
- path: /opt/kubernetes/bin/kubelet
tar_path: kubernetes/node/bin/kubelet
mode: 0555

View File

@@ -39,10 +39,6 @@ data:
$ref: '#/definitions/url'
tar_path:
$ref: '#/definitions/rel_path'
docker_image:
$ref: '#/definitions/url'
file_path:
$ref: '#/definitions/abs_path'
symlink:
$ref: '#/definitions/abs_path'
required:
@@ -61,12 +57,6 @@ data:
required:
- tar_url
- tar_path
- type: object
allOf:
- type: object
required:
- docker_image
- file_path
additionalProperties: false
image:
@@ -113,17 +103,11 @@ data:
required:
- helm
additionalProperties: false
kubernetes:
type: object
properties:
hyperkube:
$ref: '#/definitions/image'
monitoring_image:
$ref: '#/definitions/image'
required:
- haproxy
- helm
- kubernetes
- monitoring_image
additionalProperties: false

View File

@@ -2,7 +2,7 @@
schema: armada/Chart/v1
metadata:
schema: metadata/Document/v1
name: ucp-maas-global
name: ucp-maas
labels:
name: ucp-maas-global
layeringDefinition:
@@ -118,13 +118,13 @@ metadata:
name: ucp_maas_postgres_password
path: .
data:
chart_name: maas
release: maas
chart_name: ucp-maas
release: ucp-maas
namespace: ucp
wait:
timeout: 1800
labels:
release_group: airship-maas
release_group: airship-ucp-maas
install:
no_hooks: false
upgrade:
@@ -133,12 +133,107 @@ data:
delete:
- type: job
labels:
release_group: airship-maas
release_group: airship-ucp-maas
values:
pod:
replicas:
region: 1
rack: 1
security_context:
ingress_errors:
container:
maas_ingress_errors:
appArmorProfile:
type: RuntimeDefault
pod:
runAsUser: 33
rack:
container:
maas_rack:
appArmorProfile:
type: RuntimeDefault
readOnlyRootFilesystem: false
privileged: true
capabilities:
add:
- 'DAC_READ_SEARCH'
- 'NET_ADMIN'
- 'SYS_ADMIN'
- 'SYS_PTRACE'
- 'SYS_RESOURCE'
- 'SYS_TIME'
region:
container:
maas_cache:
appArmorProfile:
type: RuntimeDefault
maas_region:
appArmorProfile:
type: RuntimeDefault
readOnlyRootFilesystem: false
privileged: true
capabilities:
add:
- 'SYS_ADMIN'
- 'NET_ADMIN'
- 'SYS_PTRACE'
- 'SYS_TIME'
- 'SYS_RESOURCE'
- 'DAC_READ_SEARCH'
syslog:
container:
syslog:
appArmorProfile:
type: RuntimeDefault
logrotate:
appArmorProfile:
type: RuntimeDefault
ingress:
container:
maas_ingress:
appArmorProfile:
type: RuntimeDefault
maas_ingress_vip:
appArmorProfile:
type: RuntimeDefault
maas_ingress_vip_init:
appArmorProfile:
type: RuntimeDefault
bootstrap_admin_user:
container:
maas_bootstrap_admin_user:
appArmorProfile:
type: RuntimeDefault
db_init:
container:
maas_db_init:
appArmorProfile:
type: RuntimeDefault
db_sync:
container:
maas_db_sync:
appArmorProfile:
type: RuntimeDefault
export_api_key:
container:
exporter:
appArmorProfile:
type: RuntimeDefault
import_resources:
container:
region_import_resources:
appArmorProfile:
type: RuntimeDefault
api_test:
container:
maas_api_test:
appArmorProfile:
type: RuntimeDefault
kubernetes_entrypoint:
container:
kubernetes_entrypoint:
appArmorProfile:
type: RuntimeDefault
labels:
rack:
node_selector_key: maas-rack
@@ -153,23 +248,107 @@ data:
conf:
cache:
enabled: true
cloudconfig:
override: true
sections:
bootcmd:
- "sysctl net.ipv6.conf.all.disable_ipv6=1"
- "sysctl net.ipv6.conf.default.disable_ipv6=1"
- "sysctl net.ipv6.conf.lo.disable_ipv6=0"
maas:
images:
default_os: 'ubuntu'
default_image: 'xenial'
default_kernel: 'hwe-16.04'
credentials:
secret:
namespace: ucp
proxy:
# Use MAAS Built-in proxy. This supports environments where
# the PXE interface can not reach the internet.
# Also improves efficiency due to caching via MAAS.
proxy_enabled: 'true'
proxy_enabled: false
peer_proxy_enabled: false
cgroups:
disable_cgroups_region: true
disable_cgroups_rack: true
ntp:
use_external_only: 'true'
disable_ntpd_region: true
disable_ntpd_rack: true
dns:
require_dnssec: 'no'
images:
default_os: 'ubuntu'
default_image: 'focal'
default_kernel: 'ga-20.04'
extra_settings:
# disable network discovery completely
network_discovery: disabled
active_discovery_interval: 0
# don't commission during enlistment (until drydock can handle this)
enlist_commissioning: false
# don't use v2 network config
# the default for bionic and focal is v2 with source routing, which results in
# policy routes that break kubelet probes, for example:
# root@mtn57r07c001:~# ip rule
# 0: from all lookup local
# 0: from 172.30.0.128/25 to 172.30.0.128/25 lookup main
# 100: from 172.30.0.128/25 lookup 1
# 10000: from 32.67.143.144/29 to 10.97.0.0/16 lookup main
# 10100: from 32.67.143.144/29 lookup 1500
# 32766: from all lookup main
# 32767: from all lookup default
# root@mtn57r07c001:~# ip r s table 1
# default via 172.30.0.129 dev eno4 proto static
# root@mtn57r07c001:~#
# https://github.com/maas/maas/commit/442d47053e6f96bf5a94904f16968e9e5e5c965c
# https://github.com/maas/maas/commit/45f2632b8164f105eab69baa88ee401cf0f68b56
force_v1_network_yaml: true
# disable creation of root account with default password
system_user: null
system_passwd: null
manifests:
secret_ssh_key: true
ingress_region: false
configmap_ingress: false
maas_ingress: false
dependencies:
static:
rack_controller:
services:
- service: maas_region
endpoint: internal
jobs:
- maas-export-api-key
region_controller:
jobs:
- maas-db-sync
services:
- service: maas_db
endpoint: internal
db_init:
services:
- service: maas_db
endpoint: internal
db_sync:
jobs:
- maas-db-init
bootstrap_admin_user:
jobs:
- maas-db-sync
services:
- service: maas_region
endpoint: internal
- service: maas_db
endpoint: internal
import_resources:
jobs:
- maas-bootstrap-admin-user
services:
- service: maas_region
endpoint: internal
- service: maas_db
endpoint: internal
export_api_key:
jobs:
- maas-bootstrap-admin-user
services:
- service: maas_region
endpoint: internal
- service: maas_db
endpoint: internal
...

View File

@@ -34,13 +34,6 @@ metadata:
dest:
path: .values.images.tags.monitoring_image
- src:
schema: pegleg/SoftwareVersions/v1
name: software-versions
path: .images.kubernetes.hyperkube
dest:
path: .values.images.tags.hyperkube
# Files
- src:
schema: promenade/HostSystem/v1

View File

@@ -382,50 +382,50 @@ data:
rgw_s3_admin: docker.io/openstackhelm/ceph-config-helper:ubuntu_xenial-20191119
kubernetes:
apiserver:
anchor: gcr.io/google-containers/hyperkube-amd64:v1.17.3
apiserver: gcr.io/google-containers/hyperkube-amd64:v1.17.3
anchor: registry.k8s.io/kube-apiserver:v1.32.0
apiserver: registry.k8s.io/kube-apiserver:v1.32.0
dep_check: quay.io/airshipit/kubernetes-entrypoint:latest-ubuntu_jammy
key_rotate: gcr.io/google-containers/hyperkube-amd64:v1.17.3
key_rotate: registry.k8s.io/kube-apiserver:v1.32.0
controller-manager:
anchor: gcr.io/google-containers/hyperkube-amd64:v1.17.3
controller_manager: gcr.io/google-containers/hyperkube-amd64:v1.17.3
anchor: registry.k8s.io/kube-controller-manager:v1.32.0
controller_manager: registry.k8s.io/kube-controller-manager:v1.32.0
coredns:
coredns: coredns/coredns:1.11.1
test: quay.io/airshipit/promenade:latest
coredns: registry.k8s.io/coredns/coredns:v1.11.2
test: quay.io/airshipit/promenade:latest-ubuntu_jammy
etcd:
etcd: quay.io/coreos/etcd:v3.5.11
etcdctl: "quay.io/airshipit/porthole-etcdctl-utility:latest-ubuntu_focal"
etcd: registry.k8s.io/etcd:v3.5.11
etcdctl: "quay.io/airshipit/porthole-etcdctl-utility:latest-ubuntu_jammy"
haproxy:
anchor: gcr.io/google-containers/hyperkube-amd64:v1.17.3
haproxy: docker.io/library/haproxy:1.8.19
test: docker.io/library/python:3.6
hyperkube: gcr.io/google-containers/hyperkube-amd64:v1.17.3
anchor: docker.io/library/haproxy:2.9.7
haproxy: docker.io/library/haproxy:2.9.7
test: docker.io/library/python:3.10
ingress:
controller: registry.k8s.io/ingress-nginx/controller:v1.11.2
defaultBackend: k8s.gcr.io/defaultbackend-amd64:1.5
patch: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.4.3
pause: gcr.io/google-containers/pause-amd64:3.1
defaultBackend: registry.k8s.io/defaultbackend-amd64:1.5
patch: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343
pause: registry.k8s.io/pause:3.9
proxy:
proxy: gcr.io/google-containers/hyperkube-amd64:v1.17.3
proxy: registry.k8s.io/kube-proxy:v1.32.0
scheduler:
anchor: gcr.io/google-containers/hyperkube-amd64:v1.17.3
scheduler: gcr.io/google-containers/hyperkube-amd64:v1.17.3
anchor: registry.k8s.io/kube-scheduler:v1.32.0
scheduler: registry.k8s.io/kube-scheduler:v1.32.0
validation:
pod_logs:
image: docker.io/library/busybox:1.28.3
image: docker.io/library/busybox:latest
osh:
barbican:
barbican_api: docker.io/openstackhelm/barbican:ocata-ubuntu_xenial
barbican_db_sync: docker.io/openstackhelm/barbican:ocata-ubuntu_xenial
bootstrap: docker.io/openstackhelm/heat:ocata-ubuntu_xenial
db_drop: docker.io/openstackhelm/heat:ocata-ubuntu_xenial
db_init: docker.io/openstackhelm/heat:ocata-ubuntu_xenial
bootstrap: quay.io/airshipit/heat:2024.1-ubuntu_jammy
dep_check: quay.io/airshipit/kubernetes-entrypoint:latest-ubuntu_jammy
ks_endpoints: docker.io/openstackhelm/heat:ocata-ubuntu_xenial
ks_service: docker.io/openstackhelm/heat:ocata-ubuntu_xenial
ks_user: docker.io/openstackhelm/heat:ocata-ubuntu_xenial
scripted_test: quay.io/airshipit/heat:2024.1-ubuntu_jammy
db_init: quay.io/airshipit/heat:2024.1-ubuntu_jammy
barbican_db_sync: quay.io/airshipit/barbican:2024.1-ubuntu_jammy
db_drop: quay.io/airshipit/heat:2024.1-ubuntu_jammy
ks_user: quay.io/airshipit/heat:2024.1-ubuntu_jammy
ks_service: quay.io/airshipit/heat:2024.1-ubuntu_jammy
ks_endpoints: quay.io/airshipit/heat:2024.1-ubuntu_jammy
barbican_api: quay.io/airshipit/barbican:2024.1-ubuntu_jammy
rabbit_init: quay.io/airshipit/rabbitmq:3.10.18-management
scripted_test: docker.io/openstackhelm/heat:ocata-ubuntu_xenial
image_repo_sync: quay.io/airshipit/docker:27.5.0
cinder:
bootstrap: docker.io/openstackhelm/heat:ocata-ubuntu_xenial
cinder_api: docker.io/openstackhelm/cinder:ocata-ubuntu_xenial
@@ -721,8 +721,8 @@ data:
ks_service: quay.io/airshipit/heat:2024.1-ubuntu_jammy
ks_endpoints: quay.io/airshipit/heat:2024.1-ubuntu_jammy
drydock_db_init: quay.io/airshipit/postgres:14.8
drydock_db_cleanup: quay.io/airshipit/drydock:master
drydock_db_sync: quay.io/airshipit/drydock:master
drydock_db_cleanup: quay.io/airshipit/drydock:latest-ubuntu_jammy
drydock_db_sync: quay.io/airshipit/drydock:latest-ubuntu_jammy
ingress:
controller: registry.k8s.io/ingress-nginx/controller:v1.11.2
defaultBackend: k8s.gcr.io/defaultbackend-amd64:1.5
@@ -746,17 +746,17 @@ data:
image_repo_sync: quay.io/airshipit/docker:27.5.0
maas:
db_init: quay.io/airshipit/postgres:14.8
db_sync: quay.io/airshipit/maas-region-controller:latest
maas_rack: quay.io/airshipit/maas-rack-controller:latest
maas_region: quay.io/airshipit/maas-region-controller:latest
bootstrap: quay.io/airshipit/maas-region-controller:latest
export_api_key: quay.io/airshipit/maas-region-controller:latest
maas_cache: quay.io/airshipit/sstream-cache:latest
db_sync: quay.io/airshipit/maas-region-controller-focal:latest
maas_rack: quay.io/airshipit/maas-rack-controller-focal:latest
maas_region: quay.io/airshipit/maas-region-controller-focal:latest
bootstrap: quay.io/airshipit/maas-region-controller-focal:latest
export_api_key: quay.io/airshipit/maas-region-controller-focal:latest
maas_cache: quay.io/airshipit/sstream-cache-focal:latest
dep_check: quay.io/airshipit/kubernetes-entrypoint:latest-ubuntu_jammy
ingress: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
ingress: registry.k8s.io/ingress-nginx/controller:v1.11.2
ingress_vip: docker.io/busybox:latest
error_pages: gcr.io/google_containers/ingress-gce-404-server-with-metrics-amd64:v1.6.0
maas_syslog: quay.io/airshipit/maas-region-controller:latest
maas_syslog: quay.io/airshipit/maas-region-controller-focal:latest
mariadb:
mariadb: quay.io/airshipit/mariadb:latest-ubuntu_jammy
prometheus_create_mysql_user: quay.io/airshipit/mariadb:10.6.20-focal
@@ -785,7 +785,7 @@ data:
prometheus_postgresql_exporter_create_user: quay.io/airshipit/postgres:14.8
postgresql_backup: quay.io/airshipit/porthole-postgresql-utility:latest-ubuntu_jammy
promenade:
monitoring_image: busybox:1.28.3
monitoring_image: busybox:latest
promenade: quay.io/airshipit/promenade:latest-ubuntu_jammy
ks_user: quay.io/airshipit/heat:2024.1-ubuntu_jammy
ks_service: quay.io/airshipit/heat:2024.1-ubuntu_jammy

View File

@@ -36,6 +36,7 @@ CLONE_PORTHOLE: true
CLONE_PROMENADE: true
CLONE_KUBERNETES_ENTRYPOINT: true
CLONE_MAAS: true
CLONE_DRYDOCK: true
CLONE_OSH: true
MAKE_ARMADA_IMAGES: false
MAKE_ARMADA_GO_IMAGES: false
@@ -44,6 +45,8 @@ MAKE_DECKHAND_IMAGES: false
MAKE_SHIPYARD_IMAGES: false
MAKE_PORTHOLE_IMAGES: false
MAKE_PROMENADE_IMAGES: false
MAKE_MAAS_IMAGES: false
MAKE_DRYDOCK_IMAGES: false
MAKE_KUBERTENES_ENTRYPOINT_IMAGES: false
USE_ARMADA_GO: false
...

View File

@@ -37,7 +37,7 @@
FEATURES: "{{ osh_params.feature_gates | default('') | regex_replace(',', ' ') }} {{ osh_params.openstack_release | default('') }} {{ osh_params.container_distro_name | default('') }}_{{ osh_params.container_distro_version | default('') }} {{ osh_params.container_distro_name | default('') }}"
RUN_HELM_TESTS: "{{ run_helm_tests | default('yes') }}"
PL_SITE: "{{ site | default('airskiff') }}"
HELM_ARTIFACT_URL: "{{ HELM_ARTIFACT_URL | default('https://get.helm.sh/helm-v3.15.4-linux-amd64.tar.gz') }}"
HELM_ARTIFACT_URL: "{{ HELM_ARTIFACT_URL | default('https://get.helm.sh/helm-v3.18.4-linux-amd64.tar.gz') }}"
HTK_COMMIT: "{{ HTK_COMMIT | default('master') }}"
OSH_INFRA_COMMIT: "{{ OSH_INFRA_COMMIT | default('master') }}"
OSH_COMMIT: "{{ OSH_COMMIT | default('master') }}"
@@ -53,7 +53,9 @@
CLONE_PROMENADE: "{{ CLONE_PROMENADE | default('true') }}"
CLONE_KUBERNETES_ENTRYPOINT: "{{ CLONE_KUBERNETES_ENTRYPOINT | default('true') }}"
CLONE_MAAS: "{{ CLONE_MAAS | default('true') }}"
CLONE_DRYDOCK: "{{ CLONE_DRYDOCK | default('true') }}"
CLONE_OSH: "{{ CLONE_OSH | default('true') }}"
CLONE_PEGLEG: "{{ CLONE_PEGLEG | default('true') }}"
MAKE_ARMADA_IMAGES: "{{ MAKE_ARMADA_IMAGES | default('false') }}"
MAKE_ARMADA_GO_IMAGES: "{{ MAKE_ARMADA_GO_IMAGES | default('false') }}"
MAKE_ARMADA_OPERATOR_IMAGES: "{{ MAKE_ARMADA_OPERATOR_IMAGES | default('false') }}"
@@ -61,7 +63,10 @@
MAKE_SHIPYARD_IMAGES: "{{ MAKE_SHIPYARD_IMAGES | default('false') }}"
MAKE_PORTHOLE_IMAGES: "{{ MAKE_PORTHOLE_IMAGES | default('false') }}"
MAKE_PROMENADE_IMAGES: "{{ MAKE_PROMENADE_IMAGES | default('false') }}"
MAKE_MAAS_IMAGES: "{{ MAKE_MAAS_IMAGES | default('false') }}"
MAKE_DRYDOCK_IMAGES: "{{ MAKE_DRYDOCK_IMAGES | default('false') }}"
MAKE_KUBERTENES_ENTRYPOINT_IMAGES: "{{ MAKE_KUBERTENES_ENTRYPOINT_IMAGES | default('false') }}"
MAKE_PEGLEG_IMAGES: "{{ MAKE_PEGLEG_IMAGES | default('false') }}"
USE_ARMADA_GO: "{{ USE_ARMADA_GO | default('false') }}"
# NOTE(aostapenko) using bigger than async_status timeout due to async_status issue with
# not recognizing timed out jobs: https://github.com/ansible/ansible/issues/25637

View File

@@ -35,6 +35,7 @@ CLONE_PORTHOLE: true
CLONE_PROMENADE: true
CLONE_KUBERNETES_ENTRYPOINT: true
CLONE_MAAS: true
CLONE_DRYDOCK: true
CLONE_OSH: true
CLONE_PEGLEG: true
MAKE_ARMADA_IMAGES: false
@@ -44,6 +45,8 @@ MAKE_DECKHAND_IMAGES: false
MAKE_SHIPYARD_IMAGES: false
MAKE_PORTHOLE_IMAGES: false
MAKE_PROMENADE_IMAGES: false
MAKE_MAAS_IMAGES: false
MAKE_DRYDOCK_IMAGES: false
MAKE_KUBERTENES_ENTRYPOINT_IMAGES: false
MAKE_PEGLEG_IMAGES: false
USE_ARMADA_GO: false

View File

@@ -34,7 +34,7 @@
FEATURES: "{{ osh_params.feature_gates | default('') | regex_replace(',', ' ') }} {{ osh_params.openstack_release | default('') }} {{ osh_params.container_distro_name | default('') }}_{{ osh_params.container_distro_version | default('') }} {{ osh_params.container_distro_name | default('') }}"
RUN_HELM_TESTS: "{{ run_helm_tests | default('yes') }}"
PL_SITE: "{{ site | default('airskiff') }}"
HELM_ARTIFACT_URL: "{{ HELM_ARTIFACT_URL | default('https://get.helm.sh/helm-v3.15.4-linux-amd64.tar.gz') }}"
HELM_ARTIFACT_URL: "{{ HELM_ARTIFACT_URL | default('https://get.helm.sh/helm-v3.18.4-linux-amd64.tar.gz') }}"
HTK_COMMIT: "{{ HTK_COMMIT | default('master') }}"
OSH_INFRA_COMMIT: "{{ OSH_INFRA_COMMIT | default('master') }}"
OSH_COMMIT: "{{ OSH_COMMIT | default('master') }}"
@@ -50,6 +50,7 @@
CLONE_PROMENADE: "{{ CLONE_PROMENADE | default('true') }}"
CLONE_KUBERNETES_ENTRYPOINT: "{{ CLONE_KUBERNETES_ENTRYPOINT | default('true') }}"
CLONE_MAAS: "{{ CLONE_MAAS | default('true') }}"
CLONE_DRYDOCK: "{{ CLONE_DRYDOCK | default('true') }}"
CLONE_OSH: "{{ CLONE_OSH | default('true') }}"
CLONE_PEGLEG: "{{ CLONE_PEGLEG | default('true') }}"
MAKE_ARMADA_IMAGES: "{{ MAKE_ARMADA_IMAGES | default('false') }}"
@@ -59,6 +60,8 @@
MAKE_SHIPYARD_IMAGES: "{{ MAKE_SHIPYARD_IMAGES | default('false') }}"
MAKE_PORTHOLE_IMAGES: "{{ MAKE_PORTHOLE_IMAGES | default('false') }}"
MAKE_PROMENADE_IMAGES: "{{ MAKE_PROMENADE_IMAGES | default('false') }}"
MAKE_MAAS_IMAGES: "{{ MAKE_MAAS_IMAGES | default('false') }}"
MAKE_DRYDOCK_IMAGES: "{{ MAKE_DRYDOCK_IMAGES | default('false') }}"
MAKE_KUBERTENES_ENTRYPOINT_IMAGES: "{{ MAKE_KUBERTENES_ENTRYPOINT_IMAGES | default('false') }}"
MAKE_PEGLEG_IMAGES: "{{ MAKE_PEGLEG_IMAGES | default('false') }}"
USE_ARMADA_GO: "{{ USE_ARMADA_GO | default('false') }}"

View File

@@ -0,0 +1,38 @@
---
schema: deckhand/CertificateKey/v1
metadata:
schema: metadata/Document/v1
name: airship_drydock_kvm_ssh_key
layeringDefinition:
layer: site
abstract: false
storagePolicy: cleartext
data: |-
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA6gVNOBV7zP2yeZF4P+pcei6VrRW5Qy0pzFNl4Xx6JGyM8LUP
yH11pPTokQ7G4JRowzn9tsq21b10gStFLyysOogXJlKCHeR0Bu1MfQYzxshyRgCM
dTc9H+4hhLnbPfazV+wUqgV02smsIy0x28DCiHUGXnledAsRPXFcT2d+ujPYoE7u
M6WDrRhGwMBM9s6iZ2aYcwDjN8SgliaeLEd6xrk/AHjsvEHQKVCqe24PxiwXbu9q
8PMbUOHfd/OrK+ir+uzh06ZVywifPB6btP3BxBRNLVcSwGgUnPQWg/+q+vi6urlp
b66lxQ658gzltzFWHyOl/rQSMP1/rH3M1NhibwIDAQABAoIBAA1VW/70Cme1lLOk
fCt4GOjFOrXv5OxU6GrB3a4pP3RP0v/r8QhFTaymX5HUO7SUABwPc8s0ZZJsBvVN
F9YGP5HeKyN90/gMCihS4ObGsbCDvy8J3PbYvNzS3ooHZNx07+b0hoDharUEhJBE
hPC2XN8Ve9VqKN2Hu+W6Tb4gcXH+YlHEeULaeerZRmAflKxnspvYIkVzP5vV540h
qiP5LH5dTuHaJBiQcrCP9dbFzjPCqueFohHKOQI6wSbI9QbcuQvD7pxHoxPaf8B/
V68fYaZoTGuVzhUuRsKTmseaFac4/bgmCQI8j2fDnWWA7EUANhH2ldIwEwBoPiF+
nldqQbECgYEA/mcP2XQ98KIOLRRyWYMxPW/MjKRe1aefcll1Iitilt67mBwPUSvN
KB/JTLoN838Vdv/oPQiZrtTYiEsbcj3YHa+kjI62veSFXTeghMKgn4HqQ1FdHOIW
Ku+lXj6hSVUdyqC1r8vDDvoludFep+s+M0w/7tcSjlqlZHkpFgEL0uMCgYEA6316
G8luptWeYOD2AOPjqqecXoSfPO6EG8rNO3IQUyQP8LgwtQUbK1PNZ/0u9IsKGnTA
CvtjhAmyLPlq87KSjOOw7br6VSih/9uxfx/zf+y+NOwkFBqgn2/9lwFvkoJvPELk
hRr39Ej9NuX42W5m7XkINCddJgPrVaGF0FQ87AUCgYEAuM03Fzi4se+Wqqqasml5
wG5RQa05cqzUR6WyUAMCGCRuU322prlRy57jhMf20HX1qr8U/hkcQoM9VCxzIJbK
Qi5QMwaMuv6g3mlFQot7UMN34DTfldaqUcBJ+V83nGSnQoVh1fUHmf6enw/3WbWq
NmtiWeaEBULVuFnHPcO+yg8CgYEAqYha+VgpxgfyDlLGJ9voUjp6k30s2oPoLc3x
tIMoh4Jly2n+/sMfTTD2po+aV0kly+gTPZS/jxYf5MrnGWyMnsto260JfXdUMUur
XBbXiVgZkyYRzztgOYg5a5YICdTHWf3aYI0Kxx4o1XX4kiguB3Zj1pAkOjMGIE65
dELA3TUCgYAoRt2+LINxTn2dqU9sHv+oAqN9WY3AGLc8MgAG2sEyD6u6a4ji6LJA
5W48boUeUAieiyHdLqpnxZbgsndFXGoOGy3w7k511mGVT8R37uzqoW8en+l/B3aC
m6GnweW01V+kv0FiSLsMfNZmYQeCQRNYn/LdSBAjsrmg8c88z0Af6g==
-----END RSA PRIVATE KEY-----
...

View File

@@ -0,0 +1,44 @@
---
schema: armada/Chart/v1
metadata:
schema: metadata/Document/v1
name: ucp-drydock
replacement: true
layeringDefinition:
abstract: false
layer: site
parentSelector:
name: ucp-drydock-global
actions:
- method: merge
path: .
storagePolicy: cleartext
substitutions:
- src:
schema: deckhand/CertificateKey/v1
name: airship_drydock_kvm_ssh_key
path: .
dest:
path: .values.conf.ssh.private_key
data:
values:
pod:
security_context:
drydock:
pod:
# NOTE: Drydock has a hardcoded SSH key path under the
# root home directory; the default `nobody` user does
# not have access to keys in the root directory, so
# Drydock fails to connect to the Libvirt host
# using SSH.
# Remove this workaround when Drydock is fixed.
runAsUser: 0
manifests:
secret_ssh_key: true
conf:
drydock:
plugins:
oob_driver:
- 'drydock_provisioner.drivers.oob.pyghmi_driver.driver.PyghmiDriver'
- 'drydock_provisioner.drivers.oob.libvirt_driver.driver.LibvirtDriver'
...

View File

@@ -0,0 +1,74 @@
---
schema: armada/Chart/v1
metadata:
schema: metadata/Document/v1
name: ucp-maas
replacement: true
layeringDefinition:
abstract: false
layer: site
parentSelector:
name: ucp-maas-global
actions:
- method: merge
path: .
storagePolicy: cleartext
substitutions:
- src:
schema: deckhand/CertificateKey/v1
name: airship_drydock_kvm_ssh_key
path: .
dest:
path: .values.conf.ssh.private_key
- src:
schema: pegleg/CommonAddresses/v1
name: common-addresses
path: .vip.maas_vip
dest:
path: .values.network.maas_ingress.addr
data:
values:
endpoints:
maas_ingress:
hosts:
default: maas-ingress
error_pages: maas-ingress-error
host_fqdn_override:
public: null
port:
http:
default: 8081
https:
default: 8443
ingress_default_server:
default: 8383
error_pages:
default: 8080
podport: 8080
healthz:
podport: 10283
status:
podport: 18089
stream:
podport: 18090
profiler:
podport: 18088
maas_region:
name: maas-region
hosts:
default: maas-region
path:
default: /MAAS
scheme:
default: 'http'
port:
region_api:
default: 83
nodeport: 31900
podport: 5240
public: 83
region_proxy:
default: 8000
host_fqdn_override:
default: null
...

View File

@@ -2,7 +2,7 @@
schema: armada/Chart/v1
metadata:
schema: metadata/Document/v1
name: ucp-maas-global
name: ucp-maas
replacement: true
layeringDefinition:
abstract: false

View File

@@ -144,7 +144,7 @@ promenade() {
versions_lookup "['data']['images']['ucp']['promenade']['promenade']"
IMAGE_PROMENADE=$IMAGE_URL
versions_lookup "['data']['images']['kubernetes']['hyperkube']"
versions_lookup "['data']['images']['kubernetes']['controller-manager']['controller_manager']"
IMAGE_HYPERKUBE=$IMAGE_URL
# 'cache' is hardcoded in Promenade source code

View File

@@ -29,6 +29,7 @@ set -xe
: "${CLONE_PORTHOLE:=true}"
: "${CLONE_KUBERNETES_ENTRYPOINT:=true}"
: "${CLONE_MAAS:=true}"
: "${CLONE_DRYDOCK:=true}"
: "${CLONE_OSH:=true}"
CLONE_ARMADA=$(echo "$CLONE_ARMADA" | tr '[:upper:]' '[:lower:]')
@@ -41,6 +42,7 @@ CLONE_PEGLEG=$(echo "$CLONE_PEGLEG" | tr '[:upper:]' '[:lower:]')
CLONE_PORTHOLE=$(echo "$CLONE_PORTHOLE" | tr '[:upper:]' '[:lower:]')
CLONE_KUBERNETES_ENTRYPOINT=$(echo "$CLONE_KUBERNETES_ENTRYPOINT" | tr '[:upper:]' '[:lower:]')
CLONE_MAAS=$(echo "$CLONE_MAAS" | tr '[:upper:]' '[:lower:]')
CLONE_DRYDOCK=$(echo "$CLONE_DRYDOCK" | tr '[:upper:]' '[:lower:]')
CLONE_OSH=$(echo "$CLONE_OSH" | tr '[:upper:]' '[:lower:]')
export CLONE_ARMADA
@@ -53,6 +55,7 @@ export CLONE_PEGLEG
export CLONE_PORTHOLE
export CLONE_KUBERNETES_ENTRYPOINT
export CLONE_MAAS
export CLONE_DRYDOCK
export CLONE_OSH
cd "${INSTALL_PATH}"
@@ -88,6 +91,9 @@ fi
if [[ ${CLONE_MAAS} = true ]] ; then
git clone "https://review.opendev.org/airship/maas.git"
fi
if [[ ${CLONE_DRYDOCK} = true ]] ; then
git clone "https://review.opendev.org/airship/drydock.git"
fi
# Clone dependencies
if [[ ${CLONE_OSH} = true ]] ; then
@@ -96,3 +102,5 @@ if [[ ${CLONE_OSH} = true ]] ; then
git checkout "${OSH_COMMIT}"
popd
fi
ls -la

View File

@@ -26,6 +26,10 @@ kubectl label --overwrite nodes --all ceph-osd=enabled
kubectl label --overwrite nodes --all ceph-mds=enabled
kubectl label --overwrite nodes --all ceph-rgw=enabled
kubectl label --overwrite nodes --all ceph-mgr=enabled
kubectl label nodes --all --overwrite maas-region=enabled
kubectl label nodes --all --overwrite maas-rack=enabled
# We deploy l3 agent only on the node where we run test scripts.
# In this case virtual router will be created only on this node
# and we don't need L2 overlay (will be implemented later).

View File

@@ -26,6 +26,7 @@ CURRENT_DIR="$(pwd)"
: "${MAKE_CHARTS_DECKHAND:=true}"
: "${MAKE_CHARTS_SHIPYARD:=true}"
: "${MAKE_CHARTS_MAAS:=true}"
: "${MAKE_CHARTS_DRYDOCK:=true}"
: "${MAKE_CHARTS_PORTHOLE:=true}"
: "${MAKE_CHARTS_PROMENADE:=true}"
@@ -35,6 +36,7 @@ MAKE_CHARTS_ARMADA=$(echo "$MAKE_CHARTS_ARMADA" | tr '[:upper:]' '[:lower:]')
MAKE_CHARTS_DECKHAND=$(echo "$MAKE_CHARTS_DECKHAND" | tr '[:upper:]' '[:lower:]')
MAKE_CHARTS_SHIPYARD=$(echo "$MAKE_CHARTS_SHIPYARD" | tr '[:upper:]' '[:lower:]')
MAKE_CHARTS_MAAS=$(echo "$MAKE_CHARTS_MAAS" | tr '[:upper:]' '[:lower:]')
MAKE_CHARTS_DRYDOCK=$(echo "$MAKE_CHARTS_DRYDOCK" | tr '[:upper:]' '[:lower:]')
MAKE_CHARTS_PORTHOLE=$(echo "$MAKE_CHARTS_PORTHOLE" | tr '[:upper:]' '[:lower:]')
MAKE_CHARTS_PROMENADE=$(echo "$MAKE_CHARTS_PROMENADE" | tr '[:upper:]' '[:lower:]')
export MAKE_CHARTS_OPENSTACK_HELM
@@ -42,12 +44,14 @@ export MAKE_CHARTS_ARMADA
export MAKE_CHARTS_DECKHAND
export MAKE_CHARTS_SHIPYARD
export MAKE_CHARTS_MAAS
export MAKE_CHARTS_DRYDOCK
export MAKE_CHARTS_PORTHOLE
export MAKE_CHARTS_PROMENADE
mkdir -p "${ARTIFACTS_PATH}"
cd "${INSTALL_PATH}"
ls -la
# Make charts in Airship and OSH-INFRA projects
if [[ ${MAKE_CHARTS_ARMADA} = true ]] ; then
@@ -78,17 +82,17 @@ if [[ ${MAKE_CHARTS_SHIPYARD} = true ]] ; then
done
popd
fi
if [[ ${MAKE_CHARTS_OPENSTACK_HELM} = true ]] ; then
pushd openstack-helm
make all SKIP_CHANGELOG=1
for i in $(find . -maxdepth 1 -name "*.tgz" -print | sed -E 's|\.\/([a-zA-Z0-9\-]+)-[0-9.]+\+.*\.tgz|\1|' | sort -u)
if [[ ${MAKE_CHARTS_MAAS} = true ]] ; then
pushd maas
make charts
for i in $(find . -maxdepth 1 -name "*.tgz" -print | sed -e 's/\-[0-9.]*\.tgz//'| cut -d / -f 2 | sort)
do
find . -maxdepth 1 -name "$i-[0-9]*.tgz" -print -exec cp -av {} "../artifacts/$i.tgz" \;
find . -maxdepth 1 -name "$i-[0-9.]*.tgz" -print -exec cp -av {} "../artifacts/$i.tgz" \;
done
popd
fi
if [[ ${MAKE_CHARTS_MAAS} = true ]] ; then
pushd maas
if [[ ${MAKE_CHARTS_DRYDOCK} = true ]] ; then
pushd drydock
make charts
for i in $(find . -maxdepth 1 -name "*.tgz" -print | sed -e 's/\-[0-9.]*\.tgz//'| cut -d / -f 2 | sort)
do
@@ -116,5 +120,16 @@ if [[ ${MAKE_CHARTS_PROMENADE} = true ]] ; then
done
popd
fi
if [[ ${MAKE_CHARTS_OPENSTACK_HELM} = true ]] ; then
pushd openstack-helm
make all SKIP_CHANGELOG=1
for i in $(find . -maxdepth 1 -name "*.tgz" -print | sed -E 's|\.\/([a-zA-Z0-9\-]+)-[0-9.]+\+.*\.tgz|\1|' | sort -u)
do
find . -maxdepth 1 -name "$i-[0-9]*.tgz" -print -exec cp -av {} "../artifacts/$i.tgz" \;
done
popd
fi
ls -la
cd "${CURRENT_DIR}"

View File

@@ -29,8 +29,13 @@ CURRENT_DIR="$(pwd)"
: "${MAKE_PORTHOLE_IMAGES:=false}"
: "${MAKE_PROMENADE_IMAGES:=false}"
: "${MAKE_PEGLEG_IMAGES:=false}"
: "${MAKE_MAAS_IMAGES:=false}"
: "${MAKE_DRYDOCK_IMAGES:=false}"
: "${MAKE_KUBERTENES_ENTRYPOINT_IMAGES:=false}"
: "${DISTRO:=ubuntu_jammy}"
# Convert both values to lowercase (or uppercase)
MAKE_ARMADA_IMAGES=$(echo "$MAKE_ARMADA_IMAGES" | tr '[:upper:]' '[:lower:]')
MAKE_ARMADA_GO_IMAGES=$(echo "$MAKE_ARMADA_GO_IMAGES" | tr '[:upper:]' '[:lower:]')
@@ -40,6 +45,8 @@ MAKE_SHIPYARD_IMAGES=$(echo "$MAKE_SHIPYARD_IMAGES" | tr '[:upper:]' '[:lower:]'
MAKE_PORTHOLE_IMAGES=$(echo "$MAKE_PORTHOLE_IMAGES" | tr '[:upper:]' '[:lower:]')
MAKE_PROMENADE_IMAGES=$(echo "$MAKE_PROMENADE_IMAGES" | tr '[:upper:]' '[:lower:]')
MAKE_PEGLEG_IMAGES=$(echo "$MAKE_PEGLEG_IMAGES" | tr '[:upper:]' '[:lower:]')
MAKE_MAAS_IMAGES=$(echo "$MAKE_MAAS_IMAGES" | tr '[:upper:]' '[:lower:]')
MAKE_DRYDOCK_IMAGES=$(echo "$MAKE_DRYDOCK_IMAGES" | tr '[:upper:]' '[:lower:]')
MAKE_KUBERTENES_ENTRYPOINT_IMAGES=$(echo "$MAKE_KUBERTENES_ENTRYPOINT_IMAGES" | tr '[:upper:]' '[:lower:]')
export MAKE_ARMADA_IMAGES
@@ -50,8 +57,12 @@ export MAKE_SHIPYARD_IMAGES
export MAKE_PORTHOLE_IMAGES
export MAKE_PROMENADE_IMAGES
export MAKE_PEGLEG_IMAGES
export MAKE_MAAS_IMAGES
export MAKE_DRYDOCK_IMAGES
export MAKE_KUBERTENES_ENTRYPOINT_IMAGES
export STRIPPED_DISTRO="$(echo ${DISTRO} | sed 's/^ubuntu_//')"
cd "${INSTALL_PATH}"
# Start docker registry
@@ -148,6 +159,45 @@ if [[ ${MAKE_PEGLEG_IMAGES} = true ]] ; then
grep pegleg global/software/config/versions.yaml
popd
fi
if [[ ${MAKE_MAAS_IMAGES} = true ]] ; then
pushd maas
make images
popd
pushd treasuremap
# ...existing code...
if [[ "$STRIPPED_DISTRO" == "jammy" ]]; then
echo "Running commands for Ubuntu Jammy"
sed -i "s/default_image: .*/default_image: 'jammy'/" global/software/charts/ucp/drydock/maas.yaml
sed -i "s/default_kernel: .*/default_kernel: 'ga-22.04'/" global/software/charts/ucp/drydock/maas.yaml
sed -i "s#quay.io/airshipit/maas-region-controller-[a-zA-Z0-9]\+:latest#${DOCKER_REGISTRY}/airshipit/maas-region-controller-${STRIPPED_DISTRO}:latest#g" ./global/software/config/versions.yaml
sed -i "s#quay.io/airshipit/maas-rack-controller-[a-zA-Z0-9]\+:latest#${DOCKER_REGISTRY}/airshipit/maas-rack-controller-${STRIPPED_DISTRO}:latest#g" ./global/software/config/versions.yaml
sed -i "s#quay.io/airshipit/sstream-cache-[a-zA-Z0-9]\+:latest#${DOCKER_REGISTRY}/airshipit/sstream-cache-${STRIPPED_DISTRO}:latest#g" ./global/software/config/versions.yaml
# Add Jammy-specific commands here
elif [[ "$STRIPPED_DISTRO" == "focal" ]]; then
echo "Running commands for Ubuntu Focal"
# Add Focal-specific commands here
sed -i "s/default_image: .*/default_image: 'focal'/" global/software/charts/ucp/drydock/maas.yaml
sed -i "s/default_kernel: .*/default_kernel: 'ga-20.04'/" global/software/charts/ucp/drydock/maas.yaml
sed -i "s#quay.io/airshipit/maas-region-controller-[a-zA-Z0-9]\+:latest#${DOCKER_REGISTRY}/airshipit/maas-region-controller-${STRIPPED_DISTRO}:latest#g" ./global/software/config/versions.yaml
sed -i "s#quay.io/airshipit/maas-rack-controller-[a-zA-Z0-9]\+:latest#${DOCKER_REGISTRY}/airshipit/maas-rack-controller-${STRIPPED_DISTRO}:latest#g" ./global/software/config/versions.yaml
sed -i "s#quay.io/airshipit/sstream-cache-[a-zA-Z0-9]\+:latest#${DOCKER_REGISTRY}/airshipit/sstream-cache-${STRIPPED_DISTRO}:latest#g" ./global/software/config/versions.yaml
fi
grep maas global/software/config/versions.yaml
grep sstream global/software/config/versions.yaml
grep default_image global/software/charts/ucp/drydock/maas.yaml
grep default_kernel global/software/charts/ucp/drydock/maas.yaml
popd
fi
if [[ ${MAKE_DRYDOCK_IMAGES} = true ]] ; then
pushd drydock
make images
popd
pushd treasuremap
sed -i "s#quay.io/airshipit/drydock:latest-#${DOCKER_REGISTRY}/airshipit/drydock:latest-#g" ./global/software/config/versions.yaml
grep drydock global/software/config/versions.yaml
popd
fi
if [[ ${MAKE_KUBERTENES_ENTRYPOINT_IMAGES} = true ]] ; then
pushd kubernetes-entrypoint
make images

View File

@@ -0,0 +1,5 @@
#!/bin/bash
set -ex
cp -a tools/gate/manifests/bootstrap.yaml type/skiff/manifests/bootstrap.yaml
cp -a tools/gate/manifests/shipyard.yaml type/skiff/charts/ucp/shipyard/shipyard.yaml

View File

@@ -28,6 +28,9 @@ set -xe
USE_ARMADA_GO=$(echo "$USE_ARMADA_GO" | tr '[:upper:]' '[:lower:]')
export USE_ARMADA_GO
# Lint documents
sudo ${PEGLEG} site -r . lint "${PL_SITE}" -x P001 -x P009
# Render documents
sudo ${PEGLEG} site -r . render "${PL_SITE}" -o airskiff.yaml

View File

@@ -3,7 +3,7 @@ set -ex
CLUSTER_DNS=${CLUSTER_DNS:-10.96.0.10}
KUBECTL_IMAGE=${KUBECTL_IMAGE:-gcr.io/google-containers/hyperkube-amd64:v1.17.3}
KUBECTL_IMAGE=${KUBECTL_IMAGE:-registry.k8s.io/kubectl:v1.32.0}
UBUNTU_IMAGE=${UBUNTU_IMAGE:-docker.io/ubuntu:16.04}
cat > /tmp/hanging-cgroup-release.yaml << 'EOF'

View File

@@ -3,7 +3,7 @@ set -ex
CLUSTER_DNS=${CLUSTER_DNS:-10.96.0.10}
KUBECTL_IMAGE=${KUBECTL_IMAGE:-gcr.io/google-containers/hyperkube-amd64:v1.17.3}
KUBECTL_IMAGE=${KUBECTL_IMAGE:-registry.k8s.io/kubectl:v1.32.0}
UBUNTU_IMAGE=${UBUNTU_IMAGE:-docker.io/ubuntu:16.04}
cat > /tmp/rbd-roomba-scanner.yaml << 'EOF'

View File

@@ -22,4 +22,4 @@ python3 \
--output-dir=site/$1/secrets/passphrases/
# TODO(drewwalters96): make Treasuremap sites P001 and P009 compliant.
TERM_OPTS=" " ./tools/airship pegleg site -r . lint "$1" -x P001 -x P009
TERM_OPTS=" " sudo ./tools/airship pegleg site -r . lint "$1" -x P001 -x P009

View File

@@ -0,0 +1,28 @@
---
schema: armada/Manifest/v1
metadata:
schema: metadata/Document/v1
replacement: true
name: cluster-bootstrap
labels:
name: cluster-bootstrap-type
layeringDefinition:
abstract: false
layer: type
parentSelector:
name: cluster-bootstrap-global
actions:
- method: replace
path: .chart_groups
storagePolicy: cleartext
data:
release_prefix: airship
chart_groups:
- osh-infra-nfs-provisioner
- ucp-core
- ucp-keystone
- ucp-armada
- ucp-deckhand
- ucp-drydock
- ucp-shipyard
...

View File

@@ -0,0 +1,43 @@
---
schema: armada/Chart/v1
metadata:
schema: metadata/Document/v1
name: ucp-shipyard
replacement: true
labels:
name: ucp-shipyard-type
layeringDefinition:
abstract: false
layer: type
parentSelector:
name: ucp-shipyard-global
actions:
# - method: replace
# path: .source
- method: merge
path: .
storagePolicy: cleartext
data:
wait:
timeout: 1800
# source:
# type: local
# location: /airship-components/shipyard
# subpath: charts/shipyard
values:
pod:
replicas:
shipyard:
api: 1
airflow:
worker: 1
scheduler: 1
conf:
shipyard:
# NOTE(drewwalters96): Since Drydock and Promenade are not deployed,
# temporarily alias those validations to Armada.
drydock:
service_type: physicalprovisioner
promenade:
service_type: armada
...

View File

@@ -20,10 +20,10 @@
SITE="{{ site }}"
mkdir collected
./tools/airship pegleg site \
sudo ./tools/airship pegleg site \
-r . collect ${SITE} \
-s /target/collected
./tools/airship promenade generate-certs \
sudo ./tools/airship promenade generate-certs \
-o /target/site/${SITE}/secrets \
/target/collected/treasuremap.yaml
args:

View File

@@ -12,6 +12,7 @@ metadata:
actions:
- method: merge
path: .
replacement: true
storagePolicy: cleartext
substitutions:
- src:

View File

@@ -170,10 +170,10 @@ data:
default: "http"
port:
region_api:
default: 80
default: 83
nodeport: 31900
podport: 80
public: 80
podport: 5240
public: 83
region_proxy:
default: 8000
host_fqdn_override:

View File

@@ -100,6 +100,10 @@ data:
maas_api: 30001
maas_proxy: 31800 # hardcoded in MAAS
vip:
ingress_vip: '172.24.1.6/32'
maas_vip: '172.24.1.5/32'
ntp:
# comma separated NTP server list. Verify that these upstream NTP servers
# are

View File

@@ -6,6 +6,7 @@ metadata:
name: ucp-maas
labels:
name: ucp-maas-type
replacement: true
layeringDefinition:
abstract: false
layer: type