system-config/playbooks/bootstrap-bridge.yaml
Ian Wienand 0c90c128d7
Reference bastion through prod_bastion group
In thinking harder about the bootstrap process, it struck me that the
"bastion" group we have is two separate ideas that become a bit
confusing because they share a name.

We have the testing and production paths that need to find a single
bridge node so they can run their nested Ansible.  We've recently
merged changes to the setup playbooks to not hard-code the bridge node
and they now use groups["bastion"][0] to find the bastion host -- but
this group is actually orthogonal to the group of the same name
defined in inventory/service/groups.yaml.
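
One way to express that lookup is simply a host pattern on the group
(a minimal sketch, not the literal setup playbook; the same pattern
works after the rename described below):

  # Sketch only: target whichever single host is in the group,
  # rather than hard-coding a bridge hostname.
  - hosts: bastion[0]
    tasks:
      - name: Show which host was selected as the bastion
        debug:
          msg: "Nested Ansible will run from {{ inventory_hostname }}"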

The testing and production paths are running on the executor, and, as
mentioned, need to know the bridge node to log into.  For the testing
path this happens via the group created in the job definition in
zuul.d/system-config-run.yaml.  For the production jobs, this
group is populated via the add-bastion-host role which dynamically
adds the bridge host and group.
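
Conceptually the production side does something like the following (a
hedged sketch; the real add-bastion-host role lives in
opendev/base-jobs, and the variable name here is made up):

  # Sketch: dynamically add the currently active bridge to the group
  # that the executor-side plays look up.  "active_bridge_name" is a
  # hypothetical variable standing in for however the role learns
  # which bridge is current; after this change the group is
  # "prod_bastion".
  - name: Add the active bridge to the bastion group
    add_host:
      name: "{{ active_bridge_name | default('bridge01.opendev.org') }}"
      groups: prod_bastion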

Only the *nested* Ansible running on the bastion host reads
s-c:inventory/service/groups.yaml.  None of the nested-ansible
playbooks need to target only the currently active bastion host.  For
example, we can define as many bridge nodes as we like in the
inventory and run service-bridge.yaml against them.  It won't matter
because the production jobs know the host that is the currently active
bridge as described above.

So, instead of using the same group name in two contexts, rename the
testing/production group to "prod_bastion".  groups["prod_bastion"][0]
will be the host that the testing/production jobs use as the bastion
host -- references are updated in this change (i.e. the two places
this group is defined -- the group name in the system-config-run jobs,
and add-bastion-host for production).

We then can return the "bastion" group match to bridge*.opendev.org in
inventory/service/groups.yaml.
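
The shape of that definition is roughly as follows (an illustrative
excerpt only; the real inventory/service/groups.yaml carries many more
groups):

  # inventory/service/groups.yaml (illustrative excerpt)
  groups:
    bastion:
      - bridge*.opendev.org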

This fixes a bootstrapping problem -- if you launch, say,
bridge03.opendev.org the launch node script will now apply the
base.yaml playbook against it, and correctly apply all variables from
the "bastion" group, which now matches this new host.  This is what we
want, so that, e.g., the zuul user and keys are correctly populated.
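
To make that concrete with a hypothetical example (the variable name
below is made up, not taken from system-config), any group variables
keyed on "bastion" now land on the new host automatically:

  # Hypothetical group_vars/bastion.yaml excerpt: applied by base.yaml
  # to any host matching the "bastion" group, e.g. a freshly launched
  # bridge03.opendev.org.
  bastion_example_users:    # illustrative variable name only
    - zuul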

The other thing we can do here is change the testing path
"prod_bastion" hostname to "bridge99.opendev.org".  By doing this we
ensure we're not hard-coding for the production bridge host in any way
(since if both testing and production are called bridge01.opendev.org
we can hide problems).  This is a big advantage when we want to rotate
the production bridge host, as we can be certain there are no hidden
dependencies.
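
In the job definitions this separation ends up looking something like
the following (a hedged sketch; the job name and node label are
illustrative rather than copied from zuul.d/system-config-run.yaml):

  # Sketch of a system-config-run style nodeset: the test bridge is
  # deliberately bridge99.opendev.org and is reached only through the
  # prod_bastion group, never via a hard-coded production hostname.
  - job:
      name: system-config-run-example     # illustrative job name
      nodeset:
        nodes:
          - name: bridge99.opendev.org
            label: ubuntu-jammy           # illustrative label
        groups:
          - name: prod_bastion
            nodes:
              - bridge99.opendev.org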

Change-Id: I137ab824b9a09ccb067b8d5f0bb2896192291883
2022-11-04 09:18:35 +11:00


# NOTE: This is included from two paths to set up the bridge/bastion
# host in different circumstances:
#
#  1) Gate tests -- here Zuul is running this on the executor against
#     ephemeral nodes. It uses the "prod_bastion" group as defined in
#     the system-config-run jobs.
#
#  2) Production -- here we actually run against the real bastion
#     host. The host is dynamically added in opendev/base-jobs before
#     this runs, and put into a group called "prod_bastion".
#
# In both cases, the "prod_bastion" group has one entry, which is the
# bastion host to run against.
- hosts: prod_bastion[0]:!disabled
  name: "Bridge: bootstrap the bastion host"
  become: true
  tasks:

    # Note for production use we expect to take the defaults; unit
    # test jobs override this to test with latest upstream ansible.
    # For example, if there is a fix on the ansible stable branch we
    # need that is unreleased, you could do the following:
    #
    #  install_ansible_name: '{{ bridge_ansible_name | default("git+https://github.com/ansible/ansible.git@stable-2.7") }}'
    #  install_ansible_version: '{{ bridge_ansible_version | default(None) }}'
    - name: Install ansible
      include_role:
        name: install-ansible
      vars:
        install_ansible_name: '{{ bridge_ansible_name | default("ansible") }}'
        install_ansible_version: '{{ bridge_ansible_version | default("4.0.0") }}'
        install_ansible_openstacksdk_name: '{{ bridge_openstacksdk_name | default("openstacksdk") }}'
        install_ansible_openstacksdk_version: '{{ bridge_openstacksdk_version | default("latest") }}'
        # NOTE(ianw): At 2018-12, ARA is only enabled during gate
        # testing jobs as we decide if or how to store data on
        # production bridge.o.o
        install_ansible_ara_name: '{{ bridge_ara_name | default("ara[server]") }}'
        install_ansible_ara_version: '{{ bridge_ara_version | default("latest") }}'

    # This is the key that bridge uses to log into remote hosts.
    #
    # For production, this root-key variable is kept with the others
    # in the Ansible production secrets. Thus we need to deploy via
    # the local Ansible we just installed that will load these
    # variables. Remote hosts have trusted this from their bringup
    # procedure.
    #
    # In testing, we have been called with "root_rsa_key" variable set
    # with an ephemeral key. In this case, we pass it in as a "-e"
    # variable directly from the file written on disk. The testing
    # ephemeral nodes have been made to trust this by the multinode
    # setup.
    #
    # NOTE(ianw): Another option here is to keep the root key as a
    # secret directly in Zuul, which could be written out directly
    # here. Maybe one day we will do something like this.
    - name: Create root key variable when testing
      when: root_rsa_key is defined
      block:
        - name: Create vars dict
          set_fact:
            _root_rsa_key_dict:
              root_rsa_key: '{{ root_rsa_key }}'

        - name: Save extra-vars
          copy:
            content: '{{ _root_rsa_key_dict | to_nice_json }}'
            dest: '/home/zuul/root-rsa-key.json'

        - name: Save abstracted inventory file
          copy:
            content: |
              {{ inventory_hostname }}
              [bastion]
              {{ inventory_hostname }}
            dest: '/home/zuul/bastion-inventory.ini'

    - name: Make ansible log directory
      file:
        path: '/var/log/ansible'
        state: directory
        owner: root
        mode: 0755

    - name: Install root key
      shell: >-
        ansible-playbook -v ${ROOT_RSA_KEY} ${BRIDGE_INVENTORY}
        /home/zuul/src/opendev.org/opendev/system-config/playbooks/zuul/run-production-bootstrap-bridge-add-rootkey.yaml
        > /var/log/ansible/install-root-key.{{ lookup('pipe', 'date +%Y-%m-%dT%H:%M:%S') }}.log 2>&1
      environment:
        ROOT_RSA_KEY: '{{ "-e @/home/zuul/root-rsa-key.json" if root_rsa_key is defined else "" }}'
        # In production "install-ansible" has set up ansible to point
        # to the system-config inventory, which has the bastion group
        # in it. In the gate, bridge is ephemeral and we haven't yet
        # built the inventory to use for testing (that is done in
        # zuul/run-base.yaml). Use this constructed inventory.
        BRIDGE_INVENTORY: '{{ "-i/home/zuul/bastion-inventory.ini" if root_rsa_key is defined else "" }}'
        ANSIBLE_ROLES_PATH: '/home/zuul/src/opendev.org/opendev/system-config/playbooks/roles'
      no_log: true