Migrate the OVN migration scripts

This patch migrates the OVN migration scripts. At the moment, only
migration from ML2/OVS to ML2/OVN in a TripleO environment is supported.

Co-Authored-By: Miguel Angel Ajo <majopela@redhat.com>
Co-Authored-By: Jakub Libosvar <libosvar@redhat.com>
Co-Authored-By: Daniel Alvarez <dalvarez@redhat.com>
Co-Authored-By: Maciej Józefczyk <mjozefcz@redhat.com>
Co-Authored-By: Numan Siddique <nusiddiq@redhat.com>
Co-Authored-By: Roman Safronov <rsafrono@redhat.com>
Co-Authored-By: Terry Wilson <twilson@redhat.com>

Related-Blueprint: neutron-ovn-merge
Change-Id: I925f4b650209b8807290d6a69440c31fd72a1762
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
Lucas Alvares Gomes 2020-01-13 15:42:41 +00:00
parent 6d619ea7c1
commit b27940c205
44 changed files with 2243 additions and 0 deletions

plugin.spec

@ -0,0 +1,81 @@
---
config:
entry_point: ./tools/ovn_migration/infrared/tripleo-ovn-migration/main.yml
plugin_type: install
subparsers:
tripleo-ovn-migration:
description: Migrate an existing TripleO overcloud from Neutron ML2OVS plugin to OVN
include_groups: ["Ansible options", "Inventory", "Common options", "Answers file"]
groups:
- title: Containers
options:
registry-namespace:
type: Value
help: The alternative docker registry namespace to use for deployment.
registry-prefix:
type: Value
help: The images prefix
registry-tag:
type: Value
help: The images tag
registry-mirror:
type: Value
help: The alternative docker registry to use for deployment.
- title: Deployment Description
options:
version:
type: Value
help: |
The product version
Numbers are for OSP releases
Names are for RDO releases
If not given, same version of the undercloud will be used
choices:
- "7"
- "8"
- "9"
- "10"
- "11"
- "12"
- "13"
- "14"
- "15"
- "16"
- kilo
- liberty
- mitaka
- newton
- ocata
- pike
- queens
- rocky
- stein
- train
install_from_package:
type: Bool
help: Install python-neutron-ovn-migration-tool rpm
default: True
dvr:
type: Bool
help: If the deployment is to be dvr or not
default: False
create_resources:
type: Bool
help: Create resources to measure downtime
default: True
external_network:
type: Value
help: External network name to use
default: public
image_name:
type: Value
help: Image name to use
default: cirros-0.3.5-x86_64-disk.img

@ -26,6 +26,9 @@ data_files =
etc/api-paste.ini
etc/rootwrap.conf
etc/neutron/rootwrap.d = etc/neutron/rootwrap.d/*
share/ansible/neutron-ovn-migration/playbooks = tools/ovn_migration/tripleo_environment/playbooks/*
scripts =
tools/ovn_migration/tripleo_environment/ovn_migration.sh
[entry_points]
wsgi_scripts =

@ -0,0 +1,43 @@
Migration from ML2/OVS to ML2/OVN
=================================
Proof-of-concept ansible script for migrating an OpenStack deployment
that uses ML2/OVS to OVN.
If you have a TripleO ML2/OVS deployment, please see the folder
``tripleo_environment``.
Prerequisites:
1. Ansible 2.2 or greater.
2. ML2/OVS must be using the OVS firewall driver.
To use:
1. Create an ansible inventory with the expected set of groups and variables
as indicated by the hosts-sample file.
2. Run the playbook::
$ ansible-playbook migrate-to-ovn.yml -i hosts
Testing Status:
- Tested on an RDO cloud on CentOS 7.3 based on Ocata.
- The cloud had 3 controller nodes and 6 compute nodes.
- Observed network downtime was 10 seconds.
- The "--forks 10" option was used with ansible-playbook to ensure
that commands could be run across the entire environment in parallel.
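For reference, such a run would combine the inventory and playbook shown above, for example::
$ ansible-playbook migrate-to-ovn.yml -i hosts --forks 10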
MTU:
- If migrating an ML2/OVS deployment using VXLAN tenant networks
to an OVN deployment using Geneve for tenant networks, we have
an unresolved issue around MTU. The VXLAN overhead is 30 bytes.
OVN with Geneve has an overhead of 38 bytes. We need the tenant
networks' MTU adjusted for OVN and then we need all VMs to receive
the updated MTU value through DHCP before the migration can take
place. For testing purposes, we've just hacked the Neutron code
to indicate that the VXLAN overhead was 38 bytes instead of 30,
bypassing the issue at migration time.
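As an illustration only (not part of these scripts), lowering a tenant
network MTU to account for the Geneve overhead could look like the following,
assuming a hypothetical tenant network named "private" on a 1500-byte
physical network (1500 - 38 = 1462)::
$ openstack network set --mtu 1462 private
The VMs then need to receive the new MTU through DHCP before the migration
starts; the resource-creation script in this change, for instance, creates
its test network with an MTU of 1442.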

@ -0,0 +1,37 @@
# All controller nodes running OpenStack control services, particularly
# neutron-api. Also indicate which controller you'd like to have run
# the OVN central control services.
[controller]
overcloud-controller-0 ovn_central=true
overcloud-controller-1
overcloud-controller-2
# All compute nodes. We will replace the openvswitch agent
# with ovn-controller on these nodes.
#
# The ovn_encap_ip variable should be filled in with the IP
# address that other compute hosts should use as the tunnel
# endpoint for tunnels to that host.
[compute]
overcloud-novacompute-0 ovn_encap_ip=192.0.2.10
overcloud-novacompute-1 ovn_encap_ip=192.0.2.11
overcloud-novacompute-2 ovn_encap_ip=192.0.2.12
overcloud-novacompute-3 ovn_encap_ip=192.0.2.13
overcloud-novacompute-4 ovn_encap_ip=192.0.2.14
overcloud-novacompute-5 ovn_encap_ip=192.0.2.15
# Configure bridge mappings to be used on compute hosts.
[compute:vars]
ovn_bridge_mappings=net1:br-em1
is_compute_node=true
[overcloud:children]
controller
compute
# Fill in "ovn_db_ip" with an IP address on a management network
# that the controller and compute nodes should reach. This address
# should not be reachable otherwise.
[overcloud:vars]
ovn_db_ip=192.0.2.50
remote_user=heat-admin

@ -0,0 +1,33 @@
Infrared plugin to carry out migration from ML2/OVS to OVN
==========================================================
This is an infrared plugin which can be used to carry out the migration
from ML2/OVS to OVN if TripleO was deployed using infrared.
See http://infrared.readthedocs.io/en/stable/index.html for more information.
Before using this plugin, first deploy an ML2/OVS overcloud and then:
1. On your undercloud, install the python-neutron-ovn-migration-tool package (https://trunk.rdoproject.org/centos7-master/current/).
You also need to install the python-neutron and python3-openvswitch packages; a sketch of the install commands follows step 3 below.
2. Run ::
$ infrared plugin add "https://github.com/openstack/neutron.git"
3. Start migration by running::
$ infrared tripleo-ovn-migration --version 13|14 \
--registry-namespace <REGISTRY_NAMESPACE> \
--registry-tag <TAG> \
--registry-prefix <PREFIX>
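For step 1, the package installation on the undercloud might look like the
following sketch (package names as listed above, assuming the RDO trunk
repository is already enabled)::
$ sudo yum install -y python-neutron-ovn-migration-tool python-neutron python3-openvswitch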
Using this as a standalone playbook for tripleo deployments
===========================================================
It is also possible to use the playbook main.yml with tripleo deployments.
In order to use this:
1. Create a hosts inventory file like the one below:
[undercloud]
undercloud_ip ansible_ssh_user=stack
2. Run the playbook as:
ansible-playbook main.yml -i hosts -e install_from_package=True -e registry_prefix=centos-binary -e registry_namespace=docker.io/tripleomaster -e registry_localnamespace=192.168.24.1:8787/tripleomaster -e registry_tag=current-tripleo-rdo

@ -0,0 +1,194 @@
# Playbook which preps migration and then invokes the migration script.
- name: Install migration tool
hosts: undercloud
become: true
tasks:
- name: Install python 3 virtualenv and neutron ovn migration tool
yum:
name:
- python3-virtualenv
- python3-neutron-ovn-migration-tool
state: present
- name: Set host_key_checking to False in ansible.cfg
ini_file:
path=/etc/ansible/ansible.cfg
section=defaults
option=host_key_checking
value=False
ignore_errors: yes
- name: Prepare for migration
hosts: undercloud
tasks:
- name: Set ovn migration working dir
set_fact:
ovn_migration_working_dir: /home/stack/ovn_migration
- name: Delete temp file directory if present
file:
state: absent
path: "{{ ovn_migration_working_dir }}"
- name: Create temp file directory if not present
file:
state: directory
path: "{{ ovn_migration_working_dir }}"
- name: Set the docker registry information
block:
- name: Get the docker registry info (infrared deployment)
block:
- name: Set is_infrared deployment
set_fact:
is_infrared: True
- name: Save the docker reg
set_fact:
container_image_prepare:
namespace: "{{ install.get('registry', {}).namespace|default(False)|ternary(install.get('registry', {}).namespace, install.get('registry', {}).mirror + '/' + 'rhosp' + install.version) }}"
prefix: "{{ install.registry.prefix|default('openstack') }}"
tag: "{{ install.registry.tag|default('') }}"
local_namespace: "{{ install.registry.local|default('') }}"
is_dvr: "{{ install.dvr }}"
when:
- install is defined
- name: Get the docker registry info (tripleo deployment)
block:
- name: Set is_infrared deployment
set_fact:
is_infrared: False
- name: Save the docker reg
set_fact:
container_image_prepare:
namespace: "{{ registry_namespace }}"
local_namespace: "{{ registry_localnamespace }}"
prefix: "{{ registry_prefix }}"
tag: "{{ registry_tag }}"
is_dvr: "{{ dvr }}"
when:
- install is not defined
- name: Prepare for migration
include_role:
name: prepare-migration
vars:
infrared_deployment: "{{ is_infrared }}"
registry_namespace: "{{ container_image_prepare['namespace'] }}"
image_prefix: "{{ container_image_prepare['prefix'] }}"
image_tag: "{{ container_image_prepare['tag'] }}"
local_namespace: "{{ container_image_prepare['local_namespace'] }}"
is_dvr: "{{ container_image_prepare['is_dvr'] }}"
- name: Boot a few VMs to measure downtime
hosts: undercloud
tasks:
- name: Check if we need to create resources
block:
- name: Set create_vms (infrared)
set_fact:
create_vms: "{{ install.create_resources }}"
when:
- install is defined
- name: Set create_vms (tripleo deployment)
set_fact:
create_vms: "{{ create_resources }}"
when:
- install is not defined
- name: Create a few resources
block:
- name: Set the public network name (infrared deployment)
set_fact:
public_net: "{{ install.external_network }}"
when: install is defined
- name: Set the public network name (Tripleo deployment)
set_fact:
public_net: "{{ external_network }}"
when: install is not defined
- name: Set the image name (infrared deployment)
set_fact:
image_to_boot: "{{ install.image_name }}"
when: install is defined
- name: Set the image name (Tripleo deployment)
set_fact:
image_to_boot: "{{ image_name }}"
when: install is not defined
- name: Create resources
include_role:
name: create-resources
vars:
public_network_name: "{{ public_net }}"
image_name: "{{ image_to_boot }}"
ovn_migration_temp_dir: /home/stack/ovn_migration
overcloudrc: /home/stack/overcloudrc
when:
- create_vms|bool
- name: Kick-start the migration
hosts: undercloud
tasks:
#TODO: Get the working dir from the param
- name: Starting migration block
block:
- name: Set ovn migration working dir
set_fact:
ovn_migration_working_dir: /home/stack/ovn_migration
- name: Copy the playbook files into ovn_migration working dir
command: cp -rf /usr/share/ansible/neutron-ovn-migration/playbooks {{ ovn_migration_working_dir }}
- name: Set the public network name (infrared deployment)
set_fact:
public_network: "{{ install.external_network }}"
when: install is defined
- name: Set the public network name (Tripleo deployment)
set_fact:
public_network: "{{ external_network }}"
when: install is not defined
- name: Create ovn migration script
template:
src: templates/start-ovn-migration.sh.j2
dest: "{{ ovn_migration_working_dir }}/start-ovn-migration.sh"
mode: 0755
- name: Generate inventory file for ovn migration
shell:
set -o pipefail &&
{{ ovn_migration_working_dir }}/start-ovn-migration.sh generate-inventory 2>&1 > {{ ovn_migration_working_dir}}/generate-inventory.log
- name: Set MTU T1
shell:
set -o pipefail &&
{{ ovn_migration_working_dir }}/start-ovn-migration.sh setup-mtu-t1 2>&1 > {{ ovn_migration_working_dir}}/setup-mtu-t1.log
- name: Reduce mtu of the pre migration networks
shell:
set -o pipefail &&
{{ ovn_migration_working_dir }}/start-ovn-migration.sh reduce-mtu 2>&1 > {{ ovn_migration_working_dir}}/reduce-mtu.log
- name: Start the migration process
shell:
set -o pipefail &&
{{ ovn_migration_working_dir }}/start-ovn-migration.sh start-migration 2>&1
> {{ ovn_migration_working_dir}}/start-ovn-migration.sh.log
- name: Stop pinger if started
shell:
echo "exit" > {{ ovn_migration_working_dir}}/_pinger_cmd.txt
always:
- name: Fetch ovn_migration log directory
synchronize:
src: "{{ ovn_migration_working_dir }}"
dest: "{{ inventory_dir }}"
mode: pull
when: install is defined

@ -0,0 +1,9 @@
---
public_network_name: "{{ public_network_name }}"
create_resource_script: create-resources.sh.j2
ovn_migration_temp_dir: "{{ ovn_migration_temp_dir }}"
image_name: "{{ image_name }}"
server_user_name: "{{ server_user_name }}"
overcloudrc: "{{ overcloudrc }}"
resource_suffix: pinger

@ -0,0 +1,33 @@
- name: Delete temp file directory if present
file:
state: absent
path: "{{ ovn_migration_temp_dir }}"
- name: Create temp file directory if not present
file:
state: directory
path: "{{ ovn_migration_temp_dir }}"
- name: Generate resource creation script
template:
src: create-resources.sh.j2
dest: "{{ ovn_migration_temp_dir }}/create-resources.sh"
mode: 0744
- name: Creating pre migration resources
shell: >
set -o pipefail &&
{{ ovn_migration_temp_dir }}/create-resources.sh 2>&1 >
{{ ovn_migration_temp_dir }}/create-resources.sh.log
changed_when: true
- name: Generate pinger script
template:
src: start-pinger.sh.j2
dest: "{{ ovn_migration_temp_dir }}/start-pinger.sh"
mode: 0744
- name: Start pinger in background
shell: >
nohup {{ ovn_migration_temp_dir }}/start-pinger.sh </dev/null >/dev/null 2>&1 &
changed_when: False

@ -0,0 +1,153 @@
#!/bin/bash
set -x
source {{ overcloudrc }}
image_name={{ image_name }}
openstack image show $image_name
if [ "$?" != "0" ]
then
if [ ! -f cirros-0.4.0-x86_64-disk.img ]
then
curl -Lo cirros-0.4.0-x86_64-disk.img https://github.com/cirros-dev/cirros/releases/download/0.4.0/cirros-0.4.0-x86_64-disk.img
fi
openstack image create "cirros-ovn-migration-{{ resource_suffix }}" --file cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 --container-format bare --public
image_name="cirros-ovn-migration-{{ resource_suffix }}"
fi
openstack flavor create ovn-migration-{{ resource_suffix }} --ram 1024 --disk 1 --vcpus 1
openstack keypair create ovn-migration-{{ resource_suffix }} --private-key {{ ovn_migration_temp_dir }}/ovn_migration_ssh_key
openstack security group create ovn-migration-sg-{{ resource_suffix }}
openstack security group rule create --ingress --protocol icmp ovn-migration-sg-{{ resource_suffix }}
openstack security group rule create --ingress --protocol tcp --dst-port 22 ovn-migration-sg-{{ resource_suffix }}
openstack network create ovn-migration-net-{{ resource_suffix }}
neutron net-update ovn-migration-net-{{ resource_suffix }} --mtu 1442
openstack subnet create --network ovn-migration-net-{{ resource_suffix }} --subnet-range 172.168.168.0/24 ovn-migration-subnet-{{ resource_suffix }}
num_hypervisors=`openstack hypervisor stats show | grep count | awk '{print $4}'`
openstack server create --flavor ovn-migration-{{ resource_suffix }} --image $image_name \
--key-name ovn-migration-{{ resource_suffix }} \
--nic net-id=ovn-migration-net-{{ resource_suffix }} \
--security-group ovn-migration-sg-{{ resource_suffix }} \
--min $num_hypervisors --max $num_hypervisors \
ovn-migration-server-{{ resource_suffix }}
openstack router create ovn-migration-router-{{ resource_suffix }}
openstack router set --external-gateway {{ public_network_name }} ovn-migration-router-{{ resource_suffix }}
openstack router add subnet ovn-migration-router-{{ resource_suffix }} ovn-migration-subnet-{{ resource_suffix }}
for i in $(seq 1 $num_hypervisors)
do
num_attempts=0
while true
do
openstack server show ovn-migration-server-{{ resource_suffix }}-$i -c status | grep ACTIVE
if [ "$?" == "0" ]; then
break
fi
sleep 5
num_attempts=$((num_attempts+1))
if [ $num_attempts -gt 24 ]
then
echo "VM is not up even after 2 minutes. Something is wrong"
exit 1
fi
done
vm_ip=`openstack server show ovn-migration-server-{{ resource_suffix }}-$i -c addresses | grep addresses | awk '{ split($4, ip, "="); print ip[2]}'`
port_id=`openstack port list | grep $vm_ip | awk '{print $2}'`
# Wait till the port is ACTIVE
echo "Wait till the port is ACTIVE"
port_status=`openstack port show $port_id -c status | grep status | awk '{print $4}'`
num_attempts=0
while [ "$port_status" != "ACTIVE" ]
do
num_attempts=$((num_attempts+1))
sleep 5
port_status=`openstack port show $port_id -c status | grep status | awk '{print $4}'`
echo "Port status = $port_status"
if [ $num_attempts -gt 24 ]
then
echo "Port is not up even after 2 minutes. Something is wrong"
exit 1
fi
done
echo "VM is up and the port is ACTIVE"
server_ip=`openstack floating ip create --port $port_id \
{{ public_network_name }} -c floating_ip_address | grep floating_ip_address \
| awk '{print $4}'`
echo $server_ip >> {{ ovn_migration_temp_dir }}/server_fips
# Wait till the VM allows ssh connections
vm_status="down"
num_attempts=0
while [ "$vm_status" != "up" ]
do
num_attempts=$((num_attempts+1))
sleep 5
openstack console log show ovn-migration-server-{{ resource_suffix }}-$i | grep "login:"
if [ "$?" == "0" ]
then
vm_status="up"
else
if [ $num_attempts -gt 60 ]
then
echo "VM is not up with login prompt even after 5 minutes. Something is wrong."
# Even though something seems wrong, let's try to ping anyway.
break
fi
fi
done
done
chmod 0600 {{ ovn_migration_temp_dir }}/ovn_migration_ssh_key
for server_ip in `cat {{ ovn_migration_temp_dir }}/server_fips`
do
num_attempts=0
vm_reachable="false"
while [ "$vm_reachable" != "true" ]
do
num_attempts=$((num_attempts+1))
sleep 1
ping -c 3 $server_ip
if [ "$?" == "0" ]
then
vm_reachable="true"
else
if [ $num_attempts -gt 60 ]
then
echo "VM is not pingable. Something is wrong."
exit 1
fi
fi
done
ssh -i {{ ovn_migration_temp_dir }}/ovn_migration_ssh_key -o StrictHostKeyChecking=no \
-o UserKnownHostsFile=/dev/null cirros@$server_ip date
done
echo "Done with the resource creation : exiting"
exit 0

@ -0,0 +1,58 @@
#!/bin/bash
set -x
echo "creating virtualenv in {{ ovn_migration_temp_dir }}/pinger_venv"
virtualenv {{ ovn_migration_temp_dir }}/pinger_venv
source {{ ovn_migration_temp_dir }}/pinger_venv/bin/activate
pip install --upgrade pip
pip install sh
cat > {{ ovn_migration_temp_dir }}/pinger.py <<-EOF
import sh
import sys
import time
def main(ips):
run_cmds = []
for ip in ips:
ip_out_file = "{{ ovn_migration_temp_dir }}/" + ip.replace('.', '_') + '_ping.out'
run_cmds.append(sh.ping('-i', '1', ip, _out=ip_out_file, _bg=True))
if not run_cmds:
return
while True:
try:
cmd_file = open("{{ ovn_migration_temp_dir }}/_pinger_cmd.txt", "r")
cmd = cmd_file.readline()
if cmd.startswith("exit"):
break
cmd_file.close()
except IOError:
time.sleep(3)
continue
for p in run_cmds:
p.signal(2)
p.wait()
if __name__ == '__main__':
main(sys.argv[1:])
EOF
pinger_ips=""
for ip in `cat {{ ovn_migration_temp_dir }}/server_fips`
do
pinger_ips="$pinger_ips $ip"
done
echo "pinger ips = $pinger_ips"
echo "calling pinger.py"
python {{ ovn_migration_temp_dir }}/pinger.py $pinger_ips
echo "Exiting..."

@ -0,0 +1,7 @@
---
infrared_deployment: False
registry_namespace: docker.io/tripleomaster
local_namespace: 192.168.24.1:8787/tripleomaster
image_tag: current-tripleo-rdo
image_prefix: centos-binary-

@ -0,0 +1,181 @@
- name: Copy overcloud deploy script to overcloud-deploy-ovn.sh
block:
- name: Check if overcloud_deploy.sh is present or not
stat:
path: ~/overcloud_deploy.sh
register: deploy_file
- name: Set the ml2ovs overcloud deploy script file name
set_fact:
overcloud_deploy_script: '~/overcloud_deploy.sh'
when: deploy_file.stat.exists|bool
- name: Check if overcloud-deploy.sh is present
stat:
path: ~/overcloud-deploy.sh
register: deploy_file_2
when: not deploy_file.stat.exists|bool
- name: Set the ml2ovs overcloud deploy script file name
set_fact:
overcloud_deploy_script: '~/overcloud-deploy.sh'
when:
- not deploy_file.stat.exists|bool
- deploy_file_2.stat.exists|bool
- name: Copy overcloud deploy script to overcloud-deploy-ovn.sh
command: cp -f {{ overcloud_deploy_script }} ~/overcloud-deploy-ovn.sh
when: infrared_deployment|bool
- name: Set overcloud deploy ovn script
set_fact:
overcloud_deploy_ovn_script: '~/overcloud-deploy-ovn.sh'
- name: Set docker images environment file
set_fact:
output_env_file: /home/stack/docker-images-ovn.yaml
- name: Get the proper neutron-ovn-ha.yaml path
stat:
path: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml
register: ovn_env_path
- name: Set the neutron-ovn-dvr-ha.yaml file path if dvr
set_fact:
neutron_ovn_env_path: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml
when: is_dvr|bool
- name: Set the neutron-ovn-ha.yaml file path if not dvr
set_fact:
neutron_ovn_env_path: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml
when: not is_dvr|bool
- name: Construct overcloud-deploy-ovn.sh script for infrared deployments
lineinfile:
dest: "{{ overcloud_deploy_ovn_script }}"
line: "{{ item }} \\"
insertbefore: "^--log-file.*"
with_items:
- "-e {{ neutron_ovn_env_path }}"
- "-e /home/stack/ovn-extras.yaml"
- "-e {{ output_env_file }}"
when:
- infrared_deployment|bool
- name: Construct overcloud-deploy-ovn.sh script for tripleo deployments
template:
src: templates/overcloud-deploy-ovn.sh.j2
dest: ~/overcloud-deploy-ovn.sh
mode: 0744
when:
- not infrared_deployment|bool
- name: Set image tag (infrared deployment)
block:
- name: Get puddle version
shell: cat containers-prepare-parameter.yaml | grep -v _tag | grep tag | awk '{print $2}'
ignore_errors: True
register: core_puddle_version
- name: Set image tag from puddle version
set_fact:
docker_image_tag: "{{ core_puddle_version.stdout }}"
- name: Get registry namespace
shell: cat containers-prepare-parameter.yaml | grep -v _namespace | grep namespace | awk '{print $2}'
ignore_errors: True
register: reg_ns
- name: Set registry namespace
set_fact:
reg_namespace: "{{ reg_ns.stdout }}"
- debug:
msg: "{{ core_puddle_version.stdout }}"
- debug:
msg: "{{ docker_image_tag }}"
- debug:
msg: "{{ reg_namespace }}"
when: infrared_deployment|bool
- name: Set image tag (tripleo deployment)
set_fact:
docker_image_tag: "{{ image_tag }}"
when:
- not infrared_deployment|bool
- name: Generate ovn container images
shell: |
echo "container_images:" > ~/ovn_container_images.yaml
args:
creates: ~/ovn_container_images.yaml
- name: Add ovn container images to ovn_container_images.yaml
lineinfile:
dest: ~/ovn_container_images.yaml
line: "- imagename: {{ reg_namespace }}/{{ image_prefix }}-{{ item }}:{{ docker_image_tag }}"
with_items:
- "ovn-northd"
- "ovn-controller"
- "neutron-server-ovn"
- "neutron-metadata-agent-ovn"
- name: Generate docker images environment file
shell: |
echo "parameter_defaults:" > ~/docker-images-ovn.yaml
changed_when: False
- name: Set the local namespace
block:
- name: Extract the local namespace
shell: |
set -exo pipefail
source ~/stackrc
openstack overcloud plan export overcloud
mkdir -p /tmp/oc_plan
mv overcloud.tar.gz /tmp/oc_plan/
cd /tmp/oc_plan
tar xvf overcloud.tar.gz
reg=`cat /tmp/oc_plan/environments/containers-default-parameters.yaml | grep ContainerNeutronApiImage | awk '{ split($2, image , "/"); print image[1] }'`
namespace=`cat /tmp/oc_plan/environments/containers-default-parameters.yaml | grep ContainerNeutronApiImage | awk '{ split($2, image , "/"); print image[2] }'`
echo $reg/$namespace > /tmp/_reg_namespace
rm -rf /tmp/oc_plan
- name: Get the local namespace
command: cat /tmp/_reg_namespace
register: local_ns
- name: Set the local registry
set_fact:
local_registry: "{{ local_ns.stdout }}"
when:
- local_namespace == ''
- name: Set the local namespace
set_fact:
local_registry: "{{ local_namespace }}"
when:
- local_namespace != ''
- name: Add ovn container images to docker images environment file
lineinfile:
dest: ~/docker-images-ovn.yaml
line: " {{ item.name }}: {{ local_registry }}/{{ image_prefix }}-{{ item.image_name }}:{{ docker_image_tag }}"
with_items:
- { name: ContainerNeutronApiImage, image_name: neutron-server-ovn}
- { name: ContainerNeutronConfigImage, image_name: neutron-server-ovn}
- { name: ContainerOvnMetadataImage, image_name: neutron-metadata-agent-ovn}
- { name: ContainerOvnControllerImage, image_name: ovn-controller}
- { name: ContainerOvnControllerConfigImage, image_name: ovn-controller}
- { name: ContainerOvnDbsImage, image_name: ovn-northd}
- { name: ContainerOvnDbsConfigImage, image_name: ovn-northd}
- name: Upload the ovn container images to the local registry
shell: |
source /home/stack/stackrc
openstack tripleo container image prepare --environment-file /home/stack/containers-prepare-parameter.yaml
become: yes
changed_when: False

@ -0,0 +1,7 @@
#!/bin/bash
export PUBLIC_NETWORK_NAME={{ public_network }}
# TODO: Get this from the var
export OPT_WORKDIR=/home/stack/ovn_migration
/usr/bin/ovn_migration.sh $1

@ -0,0 +1,204 @@
# Migrate a Neutron deployment using ML2/OVS to OVN.
#
# See hosts-sample for expected contents of the ansible inventory.
---
- hosts: compute
remote_user: "{{ remote_user }}"
become: true
tasks:
- name: Ensure OVN packages are installed on compute nodes.
yum:
name: openvswitch-ovn-host
state: present
# TODO to make ansible-lint happy, all of these commands should be conditionally run
# only if the config value needs to be changed.
- name: Configure ovn-encap-type.
command: "ovs-vsctl set open . external_ids:ovn-encap-type=geneve"
changed_when: false
- name: Configure ovn-encap-ip.
command: "ovs-vsctl set open . external_ids:ovn-encap-ip={{ ovn_encap_ip }}"
changed_when: false
- name: Configure ovn-remote.
command: "ovs-vsctl set open . external_ids:ovn-remote=tcp:{{ ovn_db_ip }}:6642"
changed_when: false
# TODO We could discover the appropriate value for ovn-bridge-mappings based on
# the openvswitch agent configuration instead of requiring it to be configured
# in the inventory.
- name: Configure ovn-bridge-mappings.
command: "ovs-vsctl set open . external_ids:ovn-bridge-mappings={{ ovn_bridge_mappings }}"
changed_when: false
- name: Get hostname
command: hostname -f
register: hostname
check_mode: no
changed_when: false
- name: Set host name
command: "ovs-vsctl set Open_vSwitch . external-ids:hostname={{ hostname.stdout }}"
changed_when: false
# TODO ansible has an "iptables" module, but it does not allow you to specify a "rule number"
# which we require here.
- name: Open Geneve UDP port for tunneling.
command: iptables -I INPUT 10 -m state --state NEW -p udp --dport 6081 -j ACCEPT
changed_when: false
- name: Persist our iptables changes after a reboot
shell: iptables-save > /etc/sysconfig/iptables.save
args:
creates: /etc/sysconfig/iptables.save
# TODO Remove this once the metadata API is supported.
# https://bugs.launchpad.net/networking-ovn/+bug/1562132
- name: Force config drive until the metadata API is supported.
ini_file:
dest: /etc/nova/nova.conf
section: DEFAULT
option: force_config_drive
value: true
- name: Restart nova-compute service to reflect force_config_drive value.
systemd:
name: openstack-nova-compute
state: restarted
enabled: yes
- hosts: controller
remote_user: "{{ remote_user }}"
become: true
tasks:
- name: Ensure OVN packages are installed on the central OVN host.
when: ovn_central is defined
yum:
name: openvswitch-ovn-central
state: present
# TODO Set up SSL for OVN databases
# TODO ansible has an "iptables" module, but it does not allow you to specify a "rule number"
# which we require here.
- name: Open OVN database ports.
command: "iptables -I INPUT 10 -m state --state NEW -p tcp --dport {{ item }} -j ACCEPT"
with_items: [ 6641, 6642 ]
changed_when: False
- name: Persist our iptables changes after a reboot
shell: iptables-save > /etc/sysconfig/iptables.save
args:
creates: /etc/sysconfig/iptables.save
# TODO Integrate HA support for the OVN control services.
- name: Start ovn-northd and the OVN databases.
when: ovn_central is defined
systemd:
name: ovn-northd
state: started
enabled: yes
- name: Enable remote access to the northbound database.
command: "ovn-nbctl set-connection ptcp:6641:{{ ovn_db_ip }}"
when: ovn_central is defined
changed_when: False
- name: Enable remote access to the southbound database.
command: "ovn-sbctl set-connection ptcp:6642:{{ ovn_db_ip }}"
when: ovn_central is defined
changed_when: False
- name: Update Neutron configuration files
ini_file: dest={{ item.dest }} section={{ item.section }} option={{ item.option }} value={{ item.value }}
with_items:
- { dest: '/etc/neutron/neutron.conf', section: 'DEFAULT', option: 'service_plugins', value: 'qos,ovn-router' }
- { dest: '/etc/neutron/neutron.conf', section: 'DEFAULT', option: 'notification_drivers', value: 'ovn-qos' }
- { dest: '/etc/neutron/plugins/ml2/ml2_conf.ini', section: 'ml2', option: 'mechanism_drivers', value: 'ovn' }
- { dest: '/etc/neutron/plugins/ml2/ml2_conf.ini', section: 'ml2', option: 'type_drivers', value: 'geneve,vxlan,vlan,flat' }
- { dest: '/etc/neutron/plugins/ml2/ml2_conf.ini', section: 'ml2', option: 'tenant_network_types', value: 'geneve' }
- { dest: '/etc/neutron/plugins/ml2/ml2_conf.ini', section: 'ml2_type_geneve', option: 'vni_ranges', value: '1:65536' }
- { dest: '/etc/neutron/plugins/ml2/ml2_conf.ini', section: 'ml2_type_geneve', option: 'max_header_size', value: '38' }
- { dest: '/etc/neutron/plugins/ml2/ml2_conf.ini', section: 'ovn', option: 'ovn_nb_connection', value: '"tcp:{{ ovn_db_ip }}:6641"' }
- { dest: '/etc/neutron/plugins/ml2/ml2_conf.ini', section: 'ovn', option: 'ovn_sb_connection', value: '"tcp:{{ ovn_db_ip }}:6642"' }
- { dest: '/etc/neutron/plugins/ml2/ml2_conf.ini', section: 'ovn', option: 'ovsdb_connection_timeout', value: '180' }
- { dest: '/etc/neutron/plugins/ml2/ml2_conf.ini', section: 'ovn', option: 'neutron_sync_mode', value: 'repair' }
- { dest: '/etc/neutron/plugins/ml2/ml2_conf.ini', section: 'ovn', option: 'ovn_l3_mode', value: 'true' }
- { dest: '/etc/neutron/plugins/ml2/ml2_conf.ini', section: 'ovn', option: 'vif_type', value: 'ovs' }
- name: Note that API downtime begins now.
debug:
msg: NEUTRON API DOWNTIME STARTING NOW FOR THIS HOST
- name: Shut down neutron-server so that we can begin data sync to OVN.
systemd:
name: neutron-server
state: stopped
- hosts: controller
remote_user: "{{ remote_user }}"
become: true
tasks:
- name: Sync Neutron state to OVN.
when: ovn_central is defined
command: neutron-ovn-db-sync-util --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
- hosts: overcloud
remote_user: "{{ remote_user }}"
become: true
tasks:
- name: Note that data plane impact starts now.
debug:
msg: DATA PLANE IMPACT BEGINS NOW.
- name: Stop metadata, DHCP, L3 and openvswitch agent if needed.
systemd: name={{ item.name }} state={{ item.state }} enabled=no
with_items:
- { name: 'neutron-metadata-agent', state: 'stopped' }
- { name: 'neutron-dhcp-agent', state: 'stopped' }
- { name: 'neutron-l3-agent', state: 'stopped' }
- { name: 'neutron-openvswitch-agent', state: 'stopped' }
- hosts: compute
remote_user: "{{ remote_user }}"
become: true
tasks:
- name: Note that data plane is being restored.
debug:
msg: DATA PLANE IS NOW BEING RESTORED.
- name: Delete br-tun as it is no longer used.
command: "ovs-vsctl del-br br-tun"
changed_when: false
- name: Reset OpenFlow protocol version before ovn-controller takes over.
with_items: [ br-int, br-ex ]
command: "ovs-vsctl set Bridge {{ item }} protocols=[]"
ignore_errors: True
changed_when: false
- name: Start ovn-controller.
systemd:
name: ovn-controller
state: started
enabled: yes
- hosts: controller
remote_user: "{{ remote_user }}"
become: true
tasks:
# TODO The sync util scheduling gateway routers depends on this patch:
# https://review.openstack.org/#/c/427020/
# If the patch is not merged, this command is harmless, but the gateway
# routers won't get scheduled until later when neutron-server starts.
- name: Schedule gateway routers by running the sync util.
when: ovn_central is defined
command: neutron-ovn-db-sync-util --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
changed_when: false
- name: Configure node for hosting gateway routers for external connectivity.
command: "ovs-vsctl set open . external_ids:ovn-cms-options=enable-chassis-as-gw"
changed_when: false
- hosts: overcloud
remote_user: "{{ remote_user }}"
become: true
tasks:
# TODO Make this smarter so that it only deletes net namespaces that were
# created by neutron. In the simple case, this is fine, but will break
# once containers are in use on the overcloud.
- name: Delete network namespaces.
command: ip -all netns delete
changed_when: false
- hosts: controller
remote_user: "{{ remote_user }}"
become: true
tasks:
- name: Note that the Neutron API is coming back online.
debug:
msg: THE NEUTRON API IS NOW BEING RESTORED.
- name: Start neutron-server.
systemd:
name: neutron-server
state: started
# TODO In our grenade script we had to restart rabbitmq. Is that needed?

@ -0,0 +1,349 @@
#!/bin/bash
# Copyright 2018 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# With LANG set to anything other than C, completely undecipherable errors
# like "file not found" and decoding errors will start to appear during scripts
# or even ansible modules
LANG=C
# Complete stackrc file path.
: ${STACKRC_FILE:=~/stackrc}
# Complete overcloudrc file path.
: ${OVERCLOUDRC_FILE:=~/overcloudrc}
# overcloud deploy script for OVN migration.
: ${OVERCLOUD_OVN_DEPLOY_SCRIPT:=~/overcloud-deploy-ovn.sh}
: ${OPT_WORKDIR:=$PWD}
: ${PUBLIC_NETWORK_NAME:=public}
: ${IMAGE_NAME:=cirros}
: ${SERVER_USER_NAME:=cirros}
: ${VALIDATE_MIGRATION:=True}
: ${DHCP_RENEWAL_TIME:=30}
check_for_necessary_files() {
if [ ! -e hosts_for_migration ]; then
echo "hosts_for_migration ansible inventory file not present"
echo "Please run ./ovn_migration.sh generate-inventory"
exit 1
fi
# Check if the user has generated overcloud-deploy-ovn.sh file
# If it is not generated, exit.
if [ ! -e $OVERCLOUD_OVN_DEPLOY_SCRIPT ]; then
echo "overcloud deploy migration script :" \
"$OVERCLOUD_OVN_DEPLOY_SCRIPT is not present. Please" \
"make sure you generate that file before running this"
exit 1
fi
cat $OVERCLOUD_OVN_DEPLOY_SCRIPT | grep neutron-ovn >/dev/null
if [ "$?" == "1" ]; then
echo "OVN t-h-t environment file seems to be missing in \
$OVERCLOUD_OVN_DEPLOY_SCRIPT. Please check the $OVERCLOUD_OVN_DEPLOY_SCRIPT \
file again."
exit 1
fi
cat $OVERCLOUD_OVN_DEPLOY_SCRIPT | grep \$HOME/ovn-extras.yaml >/dev/null
check1=$?
cat $OVERCLOUD_OVN_DEPLOY_SCRIPT | grep $HOME/ovn-extras.yaml >/dev/null
check2=$?
if [[ "$check1" == "1" && "$check2" == "1" ]]; then
echo "ovn-extras.yaml file is missing in "\
"$OVERCLOUD_OVN_DEPLOY_SCRIPT. Please add it "\
"as \" -e \$HOME/ovn-extras.yaml\""
exit 1
fi
}
get_host_ip() {
inventory_file=$1
host_name=$2
ip=`jq -r --arg role _meta --arg hostname $host_name 'to_entries[] | select(.key == $role) | .value.hostvars[$hostname].management_ip' $inventory_file`
if [[ "x$ip" == "x" ]] || [[ "x$ip" == "xnull" ]]; then
# This file does not provide translation from the hostname to the IP, or
# we already have an IP (Queens backwards compatibility)
echo $host_name
else
echo $ip
fi
}
get_role_hosts() {
inventory_file=$1
role_name=$2
roles=`jq -r \.$role_name\.children\[\] $inventory_file`
for role in $roles; do
# During the rocky cycle the format changed to have .value.hosts
hosts=`jq -r --arg role "$role" 'to_entries[] | select(.key == $role) | .value.hosts[]' $inventory_file`
if [[ "x$hosts" == "x" ]]; then
# But we keep backwards compatibility with nested children (Queens)
hosts=`jq -r --arg role "$role" 'to_entries[] | select(.key == $role) | .value.children[]' $inventory_file`
for host in $hosts; do
HOSTS="$HOSTS `jq -r --arg host "$host" 'to_entries[] | select(.key == $host) | .value.hosts[0]' $inventory_file`"
done
else
HOSTS="${hosts} ${HOSTS}"
fi
done
echo $HOSTS
}
# Generate the ansible.cfg file
generate_ansible_config_file() {
cat > ansible.cfg <<-EOF
[defaults]
forks=50
become=True
callback_whitelist = profile_tasks
host_key_checking = False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = ./ansible_facts_cache
fact_caching_timeout = 0
#roles_path = roles:...
[ssh_connection]
control_path = %(directory)s/%%h-%%r
ssh_args = -o ControlMaster=auto -o ControlPersist=270s -o ServerAliveInterval=30 -o GSSAPIAuthentication=no
retries = 3
EOF
}
# Generate the inventory file for ansible migration playbook.
generate_ansible_inventory_file() {
echo "Generating the inventory file for ansible-playbook"
source $STACKRC_FILE
echo "[ovn-dbs]" > hosts_for_migration
ovn_central=True
/usr/bin/tripleo-ansible-inventory --list > /tmp/ansible-inventory.txt
# We want to run ovn_dbs where neutron_api is running
OVN_DBS=$(get_role_hosts /tmp/ansible-inventory.txt neutron_api)
for node_name in $OVN_DBS; do
node_ip=$(get_host_ip /tmp/ansible-inventory.txt $node_name)
node="$node_name ansible_host=$node_ip"
if [ "$ovn_central" == "True" ]; then
ovn_central=False
node="$node_name ansible_host=$node_ip ovn_central=true"
fi
echo $node ansible_ssh_user=heat-admin ansible_become=true >> hosts_for_migration
done
echo "" >> hosts_for_migration
echo "[ovn-controllers]" >> hosts_for_migration
# We want to run ovn-controller where OVS agent was running before the migration
OVN_CONTROLLERS=$(get_role_hosts /tmp/ansible-inventory.txt neutron_ovs_agent)
for node_name in $OVN_CONTROLLERS; do
node_ip=$(get_host_ip /tmp/ansible-inventory.txt $node_name)
echo $node_name ansible_host=$node_ip ansible_ssh_user=heat-admin ansible_become=true ovn_controller=true >> hosts_for_migration
done
rm -f /tmp/ansible-inventory.txt
echo "" >> hosts_for_migration
cat >> hosts_for_migration << EOF
[overcloud-controllers:children]
ovn-dbs
[overcloud:children]
ovn-controllers
ovn-dbs
EOF
add_group_vars() {
cat >> hosts_for_migration << EOF
[$1:vars]
remote_user=heat-admin
public_network_name=$PUBLIC_NETWORK_NAME
image_name=$IMAGE_NAME
working_dir=$OPT_WORKDIR
server_user_name=$SERVER_USER_NAME
validate_migration=$VALIDATE_MIGRATION
overcloud_ovn_deploy_script=$OVERCLOUD_OVN_DEPLOY_SCRIPT
overcloudrc=$OVERCLOUDRC_FILE
ovn_migration_backups=/var/lib/ovn-migration-backup
EOF
}
add_group_vars overcloud
add_group_vars overcloud-controllers
echo "***************************************"
cat hosts_for_migration
echo "***************************************"
echo "Generated the inventory file - hosts_for_migration"
echo "Please review the file before running the next command - setup-mtu-t1"
}
# Check if the public network exists, and if it has floating ips available
oc_check_public_network() {
source $OVERCLOUDRC_FILE
openstack network show $PUBLIC_NETWORK_NAME 1>/dev/null || {
echo "ERROR: PUBLIC_NETWORK_NAME=${PUBLIC_NETWORK_NAME} can't be accessed by the"
echo " admin user, please fix that before continuing."
exit 1
}
ID=$(openstack floating ip create $PUBLIC_NETWORK_NAME -c id -f value) || {
echo "ERROR: PUBLIC_NETWORK_NAME=${PUBLIC_NETWORK_NAME} doesn't have available"
echo " floating ips. Make sure that your public network has at least one"
echo " floating ip available for the admin user."
exit 1
}
openstack floating ip delete $ID 2>/dev/null 1>/dev/null
return $?
}
# Check if the neutron networks' MTU has been updated to the geneve MTU size or not.
# We do not want to proceed if the MTUs are not updated.
oc_check_network_mtu() {
source $OVERCLOUDRC_FILE
neutron-ovn-migration-mtu verify mtu
return $?
}
setup_mtu_t1() {
# Run the ansible playbook to reduce the DHCP T1 parameter in
# dhcp_agent.ini in all the overcloud nodes where dhcp agent is running.
ansible-playbook -vv $OPT_WORKDIR/playbooks/reduce-dhcp-renewal-time.yml \
-i hosts_for_migration -e working_dir=$OPT_WORKDIR \
-e renewal_time=$DHCP_RENEWAL_TIME
rc=$?
return $rc
}
reduce_network_mtu () {
source $OVERCLOUDRC_FILE
oc_check_network_mtu
if [ "$?" != "0" ]; then
# Reduce the network mtu
neutron-ovn-migration-mtu update mtu
rc=$?
if [ "$rc" != "0" ]; then
echo "Reducing the network mtu's failed. Exiting."
exit 1
fi
fi
return $rc
}
start_migration() {
source $STACKRC_FILE
echo "Starting the Migration"
ansible-playbook -vv $OPT_WORKDIR/playbooks/ovn-migration.yml \
-i hosts_for_migration -e working_dir=$OPT_WORKDIR \
-e public_network_name=$PUBLIC_NETWORK_NAME \
-e image_name=$IMAGE_NAME \
-e overcloud_ovn_deploy_script=$OVERCLOUD_OVN_DEPLOY_SCRIPT \
-e server_user_name=$SERVER_USER_NAME \
-e overcloudrc=$OVERCLOUDRC_FILE \
-e validate_migration=$VALIDATE_MIGRATION $*
rc=$?
return $rc
}
print_usage() {
cat << EOF
Usage:
Before running this script, please refer to the migration guide for
complete details. This script needs to be run in 5 steps.
Step 1 -> ovn_migration.sh generate-inventory
Generates the inventory file
Step 2 -> ovn_migration.sh setup-mtu-t1
Sets the DHCP renewal T1 to 30 seconds. After this step you will
need to wait at least 24h for the change to be propagated to all
VMs. This step is only necessary for VXLAN or GRE based tenant
networking.
Step 3 -> You need to wait at least 24h based on the default configuration
of neutron for the DHCP T1 parameter to be propagated, please
refer to documentation. WARNING: this is very important if you
are using VXLAN or GRE tenant networks.
Step 4 -> ovn_migration.sh reduce-mtu
Reduces the MTU of the neutron tenant networks. This
step is only necessary for VXLAN or GRE based tenant networking.
Step 5 -> ovn_migration.sh start-migration
Starts the migration to OVN.
EOF
}
command=$1
ret_val=0
case $command in
generate-inventory)
oc_check_public_network
generate_ansible_inventory_file
generate_ansible_config_file
ret_val=$?
;;
setup-mtu-t1)
check_for_necessary_files
setup_mtu_t1
ret_val=$?;;
reduce-mtu)
check_for_necessary_files
reduce_network_mtu
ret_val=$?;;
start-migration)
oc_check_public_network
check_for_necessary_files
shift
start_migration $*
ret_val=$?
;;
*)
print_usage;;
esac
exit $ret_val
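For orientation, an end-to-end run of this script follows the five steps
described in its usage text; a minimal sketch:
$ ./ovn_migration.sh generate-inventory
$ ./ovn_migration.sh setup-mtu-t1
(wait at least 24 hours so the reduced DHCP T1 reaches all VMs; VXLAN/GRE tenant networks only)
$ ./ovn_migration.sh reduce-mtu
$ ./ovn_migration.sh start-migration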

@ -0,0 +1,110 @@
# This is the playbook used by ovn_migration.sh.
#
# Pre migration and validation tasks will make sure that the initial cloud
# is functional, and will create resources which will be checked after
# migration.
#
- name: Pre migration and validation tasks
hosts: localhost
roles:
- pre-migration
tags:
- pre-migration
#
# This step is executed before migration, and will backup some config
# files related to containers before those get lost.
#
- name: Backup tripleo container config files on the nodes
hosts: ovn-controllers
roles:
- backup
tags:
- setup
#
# TripleO / Director is executed to deploy ovn using "br-migration" for the
# dataplane, while br-int is left intact to avoid dataplane disruption.
#
- name: Set up OVN and configure it using tripleo
hosts: localhost
roles:
- tripleo-update
vars:
ovn_bridge: br-migration
tags:
- setup
become: false
#
# Once OVN is deployed and configured, sync the neutron DB into the OVN NB
# database, then switch the dataplane to br-int and let ovn-controller take
# control. Afterwards, any remaining neutron resources, namespaces or
# processes which are not needed anymore are cleaned up.
#
- name: Do the DB sync and dataplane switch
hosts: ovn-controllers, ovn-dbs
roles:
- migration
vars:
ovn_bridge: br-int
tags:
- migration
#
# Verify that the initial resources are still reachable, remove them,
# and afterwards create new resources and repeat the connectivity tests.
#
- name: Post migration
hosts: localhost
roles:
- delete-neutron-resources
- post-migration
tags:
- post-migration
#
# Final step to make sure tripleo knows about OVNIntegrationBridge == br-int.
#
- name: Rerun the stack update to reset the OVNIntegrationBridge to br-int
hosts: localhost
roles:
- tripleo-update
vars:
ovn_bridge: br-int
tags:
- setup
become: false
#
# Final validation after tripleo update to br-int
#
- name: Final validation
hosts: localhost
vars:
validate_premigration_resources: false
roles:
- post-migration
tags:
- final-validation
#
# Announce that it's done and ready.
#
- hosts: localhost
tasks:
- name: Migration successful.
debug:
msg: Migration from ML2OVS to OVN is now complete.
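If this playbook is run by hand rather than through ovn_migration.sh, an
invocation equivalent to the script's start_migration() would look roughly
like this sketch, with the variables as defined in that script:
ansible-playbook -vv $OPT_WORKDIR/playbooks/ovn-migration.yml -i hosts_for_migration \
-e working_dir=$OPT_WORKDIR -e public_network_name=$PUBLIC_NETWORK_NAME \
-e image_name=$IMAGE_NAME -e overcloud_ovn_deploy_script=$OVERCLOUD_OVN_DEPLOY_SCRIPT \
-e server_user_name=$SERVER_USER_NAME -e overcloudrc=$OVERCLOUDRC_FILE \
-e validate_migration=$VALIDATE_MIGRATION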

@ -0,0 +1,24 @@
---
- hosts: overcloud-controllers
tasks:
- name: Update dhcp_agent configuration file option 'dhcp_renewal_time'
ini_file:
path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/dhcp_agent.ini
section=DEFAULT
backup=yes
option=dhcp_renewal_time
value={{ renewal_time }}
create=no
ignore_errors: yes
- block:
- name: Get the neutron dhcp agent docker id
shell:
docker ps | grep neutron_dhcp | awk '{print $1}'
register: dhcp_agent_docker_id
ignore_errors: yes
- name: Restart neutron dhcp agent
command: docker restart {{ dhcp_agent_docker_id.stdout }}
ignore_errors: yes

@ -0,0 +1,19 @@
# The following tasks ensure that we have backup data which is
# necessary later for cleanup (like l3/dhcp/metadata agent definitions)
- name: "Ensure the ovn backup directory"
file: path="{{ ovn_migration_backups }}" state=directory
- name: "Save the tripleo container definitions"
shell: |
# only copy them the first time; otherwise, on a later run, after the
# deployment has already been migrated to OVN, we would miss the data
if [ ! -d {{ ovn_migration_backups }}/tripleo-config ]; then
cp -rfp /var/lib/tripleo-config {{ ovn_migration_backups }}
echo "Backed up"
fi
register: command_result
changed_when: "'Backed up' in command_result.stdout"
# TODO(majopela): Include steps for backing up the mysql database on the
# controllers and the undercloud before continuing

@ -0,0 +1,3 @@
---
ovn_migration_temp_dir_del: "{{ working_dir }}/delete_neutron_resources"

@ -0,0 +1,22 @@
---
- name: Delete temp file directory if present
file:
state: absent
path: "{{ ovn_migration_temp_dir_del }}"
- name: Create temp file directory if not present
file:
state: directory
path: "{{ ovn_migration_temp_dir_del }}"
- name: Generate neutron resources cleanup script
template:
src: "delete-neutron-resources.sh.j2"
dest: "{{ ovn_migration_temp_dir_del }}/delete-neutron-resources.sh"
mode: 0744
- name: Deleting the neutron agents
shell: >
{{ ovn_migration_temp_dir_del }}/delete-neutron-resources.sh 2>&1 >
{{ ovn_migration_temp_dir_del }}/delete-neutron-resources.sh.log
changed_when: true

@ -0,0 +1,29 @@
#!/bin/bash
set -x
source {{ overcloudrc }}
# Delete neutron agents that are not alive
for i in `openstack network agent list | grep neutron- | grep -v ':-)' | awk '{print $2}'`
do
openstack network agent delete $i
done
delete_network_ports() {
net_id=$1
for p in `openstack port list --network $net_id | grep -v ID | awk '{print $2}'`
do
openstack port delete $p
done
}
# Delete HA networks
for i in `openstack network list | grep "HA network tenant" | awk '{print $2}'`
do
delete_network_ports $i
openstack network delete $i
done
exit 0

@ -0,0 +1,15 @@
---
agent_cleanups:
neutron_l3_agent:
config: --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/l3_agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-l3-agent --log-file=/var/log/neutron/netns-cleanup-l3.log
cleanup_type: l3
netns_regex: "fip-|snat-|qrouter-"
neutron_dhcp:
config: --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-dhcp-agent --log-file=/var/log/neutron/netns-cleanup-dhcp.log
cleanup_type: dhcp
netns_regex: "qdhcp-"
tunnel_bridge: "br-tun"
ovn_bridge: "br-int"

@ -0,0 +1,15 @@
---
- name: Generate OVN activation script
template:
src: "activate-ovn.sh.j2"
dest: "/tmp/activate-ovn.sh"
mode: 0744
- name: Run OVN activation script
shell: >
/tmp/activate-ovn.sh 2>&1 > /tmp/activate-ovn.sh.log
- name: Delete OVN activate script
file:
state: absent
path: /tmp/activate-ovn.sh

@ -0,0 +1,79 @@
---
- name: Quickly disable neutron router and dhcp interfaces
shell: |
for p in `ovs-vsctl show | egrep 'qr-|ha-|qg-|rfp-' | grep Interface | awk '{print $2}'`
do
# p will have quotes, e.g. "hr-xxxx", so strip the quotes
p=`echo $p | sed -e 's/"//g'`
ovs-vsctl clear Interface $p external-ids
ovs-vsctl set Interface $p admin-state=down
done
# dhcp tap ports cannot be easily distinguished from ovsfw ports, so we
# list them from within the qdhcp namespaces
for netns in `ip netns | awk '{ print $1 }' | grep qdhcp-`; do
for dhcp_port in `ip netns exec $netns ip -o link show | awk -F': ' '{print $2}' | grep tap`; do
ovs-vsctl clear Interface $dhcp_port external-ids
ovs-vsctl set Interface $dhcp_port admin-state=down
done
done
- name: Clean neutron datapath security groups from iptables
shell: |
iptables-save > /tmp/iptables-before-cleanup
cat /tmp/iptables-before-cleanup | grep -v neutron-openvswi | \
grep -v neutron-filter > /tmp/iptables-after-cleanup
if ! cmp /tmp/iptables-before-cleanup /tmp/iptables-after-cleanup
then
cat /tmp/iptables-after-cleanup | iptables-restore
echo "Security groups cleaned"
fi
register: out
changed_when: "'Security groups cleaned' in out.stdout"
- name: Cleanup neutron datapath resources
shell: |
# avoid cleaning up dhcp namespaces if the neutron dhcp agent is up (SR-IOV use case)
if [[ "{{ item.value.cleanup_type }}" == "dhcp" ]]; then
docker inspect neutron_dhcp && echo "Shouldn't clean DHCP namespaces if neutron_dhcp docker is up" && exit 0
fi
if ip netns | egrep -e "{{ item.value.netns_regex }}"
then
echo "Cleaning up"
cmd="$(paunch debug --file {{ ovn_migration_backups }}/tripleo-config/hashed-container-startup-config-step_4.json \
--action print-cmd --container {{ item.key }} \
--interactive | \
sed 's/--interactive /--volume=\/tmp\/cleanup-{{ item.key }}.sh:\/cleanup.sh:ro /g ' )"
f="/tmp/cleanup-{{ item.key }}.sh"
f_cmd="/tmp/container-cmd-{{ item.key }}.sh"
echo "#!/bin/sh" > $f
echo "set -x" >> $f
echo "set -e" >> $f
echo "sudo -E kolla_set_configs" >> $f
echo "neutron-netns-cleanup {{ item.value.config }} --agent-type {{ item.value.cleanup_type }} --force" >> $f
chmod a+x $f
echo $cmd /cleanup.sh
echo "#!/bin/sh" > $f_cmd
echo "set -x" >> $f_cmd
echo "set -e" >> $f_cmd
echo $cmd /cleanup.sh >> $f_cmd
chmod a+x $f_cmd
$f_cmd
fi
with_dict: "{{ agent_cleanups }}"
register: out
changed_when: "'Cleaning up' in out.stdout"

@ -0,0 +1,15 @@
# we use this instead of a big shell entry because some versions of
# ansible-playbook choke on our script syntax + yaml parsing
- name: Generate script to clone br-int and provider bridges
template:
src: "clone-br-int.sh.j2"
dest: "/tmp/clone-br-int.sh"
mode: 0744
- name: Run clone script for dataplane
shell: /tmp/clone-br-int.sh
- name: Delete clone script
file:
state: absent
path: /tmp/clone-br-int.sh

@ -0,0 +1,12 @@
---
- include_tasks: clone-dataplane.yml
- include_tasks: sync-dbs.yml
when: ovn_central is defined
- include_tasks: activate-ovn.yml
- include_tasks: cleanup-dataplane.yml
when: ovn_controller is defined
tags:
- cleanup-dataplane

@ -0,0 +1,20 @@
---
- name: Get the neutron docker ID
shell:
docker ps | grep neutron-server-ovn | awk '{print $1}'
register: neutron_docker_id
- name: Sync neutron db with OVN db (container) - Run 1
command: docker exec "{{ neutron_docker_id.stdout }}"
neutron-ovn-db-sync-util --config-file /etc/neutron/neutron.conf
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini
--ovn-neutron_sync_mode repair
- name: Sync neutron db with OVN db (container) - Run 2
command: docker exec "{{ neutron_docker_id.stdout }}"
neutron-ovn-db-sync-util --config-file /etc/neutron/neutron.conf
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini
--ovn-neutron_sync_mode repair
- name: Pause and let ovn-controllers settle before doing the final activation (5 minutes)
pause: minutes=5

@ -0,0 +1,41 @@
#!/bin/bash
set -x
docker stop ovn_controller
# restore bridge mappings
ovn_orig_bm=$(ovs-vsctl get open . external_ids:ovn-bridge-mappings-back)
ovs-vsctl set open . external_ids:ovn-bridge-mappings="$ovn_orig_bm"
ovs-vsctl remove open . external_ids ovn-bridge-mappings-back
ovn_bms=$(echo $ovn_orig_bm | sed 's/\"//g' | sed 's/,/ /g')
# Reset OpenFlow protocol version before ovn-controller takes over
ovs-vsctl set Bridge {{ ovn_bridge }} protocols=[]
for bm in $ovn_bms; do
parts=($(echo $bm | sed 's/:/ /g'))
bridge=${parts[1]}
ovs-vsctl set-fail-mode $bridge standalone
ovs-vsctl set Bridge $bridge protocols=[]
ovs-vsctl del-controller $bridge
done
# Delete controller from integration bridge
ovs-vsctl del-controller {{ ovn_bridge }}
# Activate ovn-controller by configuring integration bridge
ovs-vsctl set open . external_ids:ovn-bridge={{ ovn_bridge }}
docker start ovn_controller
# Delete ovs bridges - br-tun and br-migration
ovs-vsctl --if-exists del-br {{ tunnel_bridge }}
ovs-vsctl --if-exists del-br br-migration
for br in $(ovs-vsctl list-br | egrep 'br-mig-[0-9]+'); do
ovs-vsctl --if-exists del-br $br
done
exit 0

@ -0,0 +1,77 @@
# The purpose of this script is to make a clone of the br-int content
# into br-migration, and to create fake provider bridges.
# This way, while we synchronize the neutron database into the OVN
# northbound DB, and that translates into southbound content all
# the ovn-controllers around are able to create the SBDB content
# safely, without disrupting the existing neutron ml2/ovs dataplane.
OVN_MIG_PREFIX=br-mig
OVN_BR_MIGRATION=${OVN_BR_MIGRATION:-br-migration}
function recreate_bridge_mappings() {
function new_bridge_mappings() {
orig_bms=$1
if echo $orig_bms | grep $OVN_MIG_PREFIX; then
echo $orig_bms
return
fi
ovn_bms=$(echo $1 | sed 's/\"//g' | sed 's/,/ /g')
final_bm=""
br_n=0
for bm in $ovn_bms; do
parts=($(echo $bm | sed 's/:/ /g'))
physnet=${parts[0]}
bridge="${OVN_MIG_PREFIX}-${br_n}"
mapping="${physnet}:${bridge}"
if [[ -z "$final_bm" ]]; then
final_bm=$mapping
else
final_bm="${final_bm},${mapping}"
fi
# ensure bridge
ovs-vsctl --may-exist add-br $bridge
br_n=$(( br_n + 1 ))
done
echo $final_bm
}
ovn_orig_bm=$(ovs-vsctl get open . external_ids:ovn-bridge-mappings)
# backup the original mapping if we didn't already do so
ovs-vsctl get open . external_ids:ovn-bridge-mappings-back || \
ovs-vsctl set open . external_ids:ovn-bridge-mappings-back="$ovn_orig_bm"
new_mapping=$(new_bridge_mappings $ovn_orig_bm)
ovs-vsctl set open . external_ids:ovn-bridge-mappings="$new_mapping"
}
function copy_interfaces_to_br_migration() {
interfaces=$(ovs-vsctl list-ifaces br-int | egrep -v 'qr-|ha-|qg-|rfp-')
for interface in $interfaces; do
if [[ "$interface" == "br-int" ]]; then
continue
fi
ifmac=$(ovs-vsctl get Interface $interface external-ids:attached-mac)
if [ $? -ne 0 ]; then
echo "Can't get port details for $interface"
continue
fi
ifstatus=$(ovs-vsctl get Interface $interface external-ids:iface-status)
ifid=$(ovs-vsctl get Interface $interface external-ids:iface-id)
ifname=x$interface
ovs-vsctl -- --may-exist add-port $OVN_BR_MIGRATION $ifname \
-- set Interface $ifname type=internal \
-- set Interface $ifname external-ids:iface-status=$ifstatus \
-- set Interface $ifname external-ids:attached-mac=$ifmac \
-- set Interface $ifname external-ids:iface-id=$ifid
echo cloned port $interface from br-int as $ifname on $OVN_BR_MIGRATION
done
}
recreate_bridge_mappings
docker restart ovn_controller
copy_interfaces_to_br_migration

@ -0,0 +1,4 @@
---
ovn_migration_temp_dir: "{{ working_dir }}/post_migration_resources"
validate_premigration_resources: true

@ -0,0 +1,59 @@
---
#
# Validate pre-migration resources and then clean those up
#
- name: Validate pre migration resources after migration
include_role:
name: resources/validate
vars:
restart_server: true
when:
- validate_migration|bool
- validate_premigration_resources
- name: Delete the pre migration resources
include_role:
name: resources/cleanup
tags:
- post-migration
when:
- validate_migration|bool
- validate_premigration_resources
#
# Create post-migration resources, validate, and then clean up
#
# Delete any existing resources to make sure we don't conflict on a second run
- name: Delete any post migration resources (preventive)
include_role:
name: resources/cleanup
vars:
resource_suffix: "post"
silent_cleanup: true
when: validate_migration|bool
- name: Create post-migration resources
include_role:
name: resources/create
vars:
resource_suffix: "post"
when: validate_migration|bool
- name: Validate post migration resources
include_role:
name: resources/validate
vars:
resource_suffix: "post"
when: validate_migration|bool
- name: Delete the post migration resources
include_role:
name: resources/cleanup
tags:
- post-migration
vars:
resource_suffix: "post"
when: validate_migration|bool
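
Both resource-validation phases above honour the same switches, so they can
be toggled per run. A hedged sketch, assuming the playbook is driven directly
with ``ansible-playbook`` (the playbook and inventory names below are
placeholders; the supported entry point for TripleO environments is
``ovn_migration.sh``)::

    # Skip the downtime-measurement resources entirely. The variable names
    # come from the tasks above; the file names are assumptions.
    ansible-playbook -i hosts-for-migration ovn-migration.yml \
        -e validate_migration=false \
        -e validate_premigration_resources=false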


@ -0,0 +1,17 @@
# Delete any existing resources to make sure we don't conflict on a second run
- name: Delete any existing pre migration resources (preventive)
include_role:
name: resources/cleanup
vars:
silent_cleanup: true
when: validate_migration|bool
- name: Create the pre migration resource stack
include_role:
name: resources/create
when: validate_migration|bool
- name: Validate the pre migration resources
include_role:
name: resources/validate
when: validate_migration|bool


@ -0,0 +1,6 @@
---
cleanup_resource_script: cleanup-resources.sh.j2
resource_suffix: "pre"
ovn_migration_temp_dir: "{{ working_dir }}/{{ resource_suffix }}_migration_resources"
silent_cleanup: false


@ -0,0 +1,26 @@
---
- name: Create temp file directory if not present
file:
state: directory
path: "{{ ovn_migration_temp_dir }}"
- name: Generate cleanup script
template:
src: "{{ cleanup_resource_script }}"
dest: "{{ ovn_migration_temp_dir }}/cleanup-resources.sh"
mode: 0744
- name: Cleaning up the migration resources (verbose)
shell: >
set -o pipefail &&
{{ ovn_migration_temp_dir }}/cleanup-resources.sh 2>&1 | tee
{{ ovn_migration_temp_dir }}/cleanup-resources.sh.log
when: not silent_cleanup
- name: Cleaning up the migration resources (silent)
shell: >
{{ ovn_migration_temp_dir }}/cleanup-resources.sh >/dev/null 2>&1
when: silent_cleanup


@ -0,0 +1,32 @@
#!/bin/bash
set -x
source {{ overcloudrc }}
openstack server delete ovn-migration-server-{{ resource_suffix }}
openstack port delete ovn-migration-server-port-{{ resource_suffix }}
server_ip=`cat {{ ovn_migration_temp_dir }}/server_public_ip`
openstack floating ip delete $server_ip
openstack router remove subnet ovn-migration-router-{{ resource_suffix }} ovn-migration-subnet-{{ resource_suffix }}
openstack router unset --external-gateway ovn-migration-router-{{ resource_suffix }}
openstack router delete ovn-migration-router-{{ resource_suffix }}
openstack network delete ovn-migration-net-{{ resource_suffix }}
openstack security group delete ovn-migration-sg-{{ resource_suffix }}
openstack flavor delete ovn-migration-{{ resource_suffix }}
openstack image delete cirros-ovn-migration-{{ resource_suffix }}
openstack keypair delete ovn-migration-{{ resource_suffix }}
echo "Resource cleanup done"
exit 0


@ -0,0 +1,5 @@
---
create_migration_resource_script: create-resources.sh.j2
resource_suffix: "pre"
ovn_migration_temp_dir: "{{ working_dir }}/{{ resource_suffix }}_migration_resources"


@ -0,0 +1,22 @@
---
- name: Delete temp file directory if present
file:
state: absent
path: "{{ ovn_migration_temp_dir }}"
- name: Create temp file directory if not present
file:
state: directory
path: "{{ ovn_migration_temp_dir }}"
- name: Generate resource creation script
template:
src: "{{ create_migration_resource_script }}"
dest: "{{ ovn_migration_temp_dir }}/create-migration-resources.sh"
mode: 0744
- name: Creating migration resources
shell: >
set -o pipefail &&
{{ ovn_migration_temp_dir }}/create-migration-resources.sh 2>&1 | tee
{{ ovn_migration_temp_dir }}/create-migration-resources.sh.log


@ -0,0 +1,128 @@
#!/bin/bash
set -x
source {{ overcloudrc }}
image_name={{ image_name }}
openstack image show $image_name
if [ "$?" != "0" ]
then
if [ ! -f cirros-0.4.0-x86_64-disk.img ]
then
curl -Lo cirros-0.4.0-x86_64-disk.img https://github.com/cirros-dev/cirros/releases/download/0.4.0/cirros-0.4.0-x86_64-disk.img
fi
openstack image create "cirros-ovn-migration-{{ resource_suffix }}" --file cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 --container-format bare --public
image_name="cirros-ovn-migration-{{ resource_suffix }}"
fi
openstack flavor create ovn-migration-{{ resource_suffix }} --ram 1024 --disk 1 --vcpus 1
openstack keypair create ovn-migration-{{ resource_suffix }} --private-key {{ ovn_migration_temp_dir }}/ovn_migration_ssh_key
openstack security group create ovn-migration-sg-{{ resource_suffix }}
openstack security group rule create --ingress --protocol icmp ovn-migration-sg-{{ resource_suffix }}
openstack security group rule create --ingress --protocol tcp --dst-port 22 ovn-migration-sg-{{ resource_suffix }}
openstack network create ovn-migration-net-{{ resource_suffix }}
neutron net-update ovn-migration-net-{{ resource_suffix }} --mtu 1442
openstack subnet create --network ovn-migration-net-{{ resource_suffix }} --subnet-range 172.168.199.0/24 ovn-migration-subnet-{{ resource_suffix }}
openstack port create --network ovn-migration-net-{{ resource_suffix }} --security-group ovn-migration-sg-{{ resource_suffix }} ovn-migration-server-port-{{ resource_suffix }}
openstack server create --flavor ovn-migration-{{ resource_suffix }} --image $image_name \
--key-name ovn-migration-{{ resource_suffix }} \
--nic port-id=ovn-migration-server-port-{{ resource_suffix }} ovn-migration-server-{{ resource_suffix }}
openstack router create ovn-migration-router-{{ resource_suffix }}
openstack router set --external-gateway {{ public_network_name }} ovn-migration-router-{{ resource_suffix }}
openstack router add subnet ovn-migration-router-{{ resource_suffix }} ovn-migration-subnet-{{ resource_suffix }}
server_ip=`openstack floating ip create --port ovn-migration-server-port-{{ resource_suffix }} \
{{ public_network_name }} -c floating_ip_address | grep floating_ip_address \
| awk '{print $4}'`
echo $server_ip > {{ ovn_migration_temp_dir }}/server_public_ip
chmod 0600 {{ ovn_migration_temp_dir }}/ovn_migration_ssh_key
# Wait till the port is ACTIVE
echo "Wait till the port is ACTIVE"
port_status=`openstack port show ovn-migration-server-port-{{ resource_suffix }} -c status | grep status | awk '{print $4}'`
num_attempts=0
while [ "$port_status" != "ACTIVE" ]
do
num_attempts=$((num_attempts+1))
sleep 5
port_status=`openstack port show ovn-migration-server-port-{{ resource_suffix }} -c status | grep status | awk '{print $4}'`
echo "Port status = $port_status"
if [ $num_attempts -gt 24 ]
then
echo "Port is not up even after 2 minutes. Something is wrong"
exit 1
fi
done
echo "VM is up and the port is ACTIVE"
# Wait till the VM allows ssh connections
vm_status="down"
num_attempts=0
while [ "$vm_status" != "up" ]
do
num_attempts=$((num_attempts+1))
sleep 5
openstack console log show ovn-migration-server-{{ resource_suffix }} | grep "login:"
if [ "$?" == "0" ]
then
vm_status="up"
else
if [ $num_attempts -gt 60 ]
then
echo "Port is not up even after 5 minutes. Something is wrong."
# Even though something seems wrong, lets try and ping.
break
fi
fi
done
num_attempts=0
vm_reachable="false"
while [ "$vm_reachable" != "true" ]
do
num_attempts=$((num_attempts+1))
sleep 1
ping -c 3 $server_ip
if [ "$?" == "0" ]
then
vm_reachable="true"
else
if [ $num_attempts -gt 60 ]
then
echo "VM is not reachable. Something is wrong."
            # The VM never became reachable, so give up and fail here.
exit 1
fi
fi
done
ssh -i {{ ovn_migration_temp_dir }}/ovn_migration_ssh_key -o StrictHostKeyChecking=no \
-o UserKnownHostsFile=/dev/null cirros@$server_ip date
rc=$?
echo "Done with the resource creation : exiting with $rc"
exit $rc
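
The status polling above scrapes human-readable output with ``grep``/``awk``.
An alternative sketch of the same status read, using only the standard
output-formatting flags of the ``openstack`` client (the port name shown is
the one rendered for ``resource_suffix=pre``)::

    # -f value prints the bare field, so no grep/awk parsing is needed.
    port_status=$(openstack port show ovn-migration-server-port-pre -c status -f value)
    echo "Port status = $port_status"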


@ -0,0 +1,5 @@
validate_resources_script: validate-resources.sh.j2
server_user_name: "cirros"
restart_server: false
resource_suffix: "pre"
ovn_migration_temp_dir: "{{ working_dir }}/{{ resource_suffix }}_migration_resources"


@ -0,0 +1,12 @@
- name: Generate resource validation script
template:
src: "{{ validate_resources_script }}"
dest: "{{ ovn_migration_temp_dir }}/validate-resources.sh"
mode: 0744
- name: Run the validation script
shell: >
set -o pipefail &&
{{ ovn_migration_temp_dir }}/validate-resources.sh 2>&1 | tee
{{ ovn_migration_temp_dir }}/validate-resources.sh.log


@ -0,0 +1,19 @@
#!/bin/bash
set -x
set -e
source {{ overcloudrc }}
# This script validates the resources created by the resources/create role.
# It pings the floating IP of the server and then connects to it over ssh.
server_ip=`cat {{ ovn_migration_temp_dir }}/server_public_ip`
echo "Running ping test with -c 3 to the server ip - $server_ip"
ping -c 3 $server_ip
ssh -i {{ ovn_migration_temp_dir }}/ovn_migration_ssh_key -o StrictHostKeyChecking=no \
-o UserKnownHostsFile=/dev/null cirros@$server_ip date
echo "Done with the validation"


@ -0,0 +1,4 @@
---
generate_ovn_extras: generate-ovn-extras.sh.j2
ovn_migration_temp_dir: "{{ working_dir }}/temp_files"


@ -0,0 +1,24 @@
---
- name: Create temp file directory if not present
file:
state: directory
path: "{{ ovn_migration_temp_dir }}"
- name: Create ovn-extras generation script
template:
src: "{{ generate_ovn_extras }}"
dest: "{{ ovn_migration_temp_dir }}/generate-ovn-extras.sh"
mode: 0755
- name: Generate ovn-extras environment file
shell: >
set -o pipefail &&
{{ ovn_migration_temp_dir }}/generate-ovn-extras.sh
changed_when: False
- name: Updating the overcloud stack with OVN services
shell: >
set -o pipefail &&
    {{ overcloud_ovn_deploy_script }} > {{ overcloud_ovn_deploy_script }}.log 2>&1
changed_when: true


@ -0,0 +1,7 @@
#!/bin/bash
set -x
cat > $HOME/ovn-extras.yaml << EOF
parameter_defaults:
OVNIntegrationBridge: "{{ ovn_bridge }}"
EOF
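
For illustration, a sketch of the rendered file and of how a deploy script
would typically consume it (``br-migration`` is only a hypothetical
``ovn_bridge`` value, and the deploy command shown is the generic TripleO
CLI, not something defined by this template)::

    # Rendered $HOME/ovn-extras.yaml, assuming ovn_bridge=br-migration:
    #
    #   parameter_defaults:
    #     OVNIntegrationBridge: "br-migration"
    #
    # The deploy script referenced by overcloud_ovn_deploy_script is then
    # expected to include it as one more environment file:
    openstack overcloud deploy --templates -e $HOME/ovn-extras.yaml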