Drop ovn migration for TripleO deployments

The TripleO project has been retired [1]. Remove the migration tool
implemented specifically for TripleO deployments.

Change-Id: I32775a0aa963b65a3900a87f28a88ab63901a8a0
Author: Takashi Kajinami 2024-09-22 16:13:43 +09:00 (committed by Ihar Hrachyshka)
parent 3997367300
commit 2eba495e22
54 changed files with 5 additions and 2694 deletions

@@ -8,7 +8,6 @@ OVN Driver
 .. toctree::
    :maxdepth: 1
-   migration.rst
    gaps.rst
    dhcp_opts.rst
    ml2ovn_trace.rst


@@ -1,388 +0,0 @@
.. _ovn_migration:

Migration Strategy
==================

This document details an in-place migration strategy from ML2/OVS to ML2/OVN
in either ovs-firewall or ovs-hybrid mode for a TripleO OpenStack deployment.

For non-TripleO deployments, please refer to the file ``migration/README.rst``
and the ansible playbook ``migration/migrate-to-ovn.yml``.
Overview
--------

The migration process is orchestrated through the shell script
``ovn_migration.sh``, which is provided with the OVN driver.
The administrator uses ``ovn_migration.sh`` to perform the readiness steps
and the migration from the undercloud node.

The readiness steps, such as producing the host inventory and adjusting DHCP
and MTU settings, prepare the environment for the procedure.
Subsequent steps start the migration via Ansible.
Plan for a 24-hour wait after the reduce-dhcp-t1 step to allow VMs to catch up
with the new MTU size from the DHCP server. The default neutron ML2/OVS
configuration has a dhcp_lease_duration of 86400 seconds (24h).

Also, if there are instances using static IP assignment, the administrator
should be ready to update the MTU of those instances to the new value, which
is 8 bytes less than the ML2/OVS (VXLAN) MTU value. For example, a typical
network MTU of 1500 makes VXLAN tenant networks use a 1450-byte MTU, which
will need to change to 1442 under Geneve. On the same underlay network, a
GRE-encapsulated tenant network would use a 1458 MTU, which again becomes
1442 under Geneve.

If there are instances which use DHCP but don't support lease update during
the T1 period, the administrator will need to reboot them to ensure that the
MTU is updated inside those instances.
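The MTU arithmetic above can be sanity-checked with plain shell arithmetic. This is a sketch assuming the common 1500-byte underlay MTU and the IPv4 encapsulation overheads implied by the numbers quoted above (50 bytes for VXLAN, 42 for GRE, 58 for Geneve):

```shell
underlay_mtu=1500   # typical physical network MTU

# Tenant-network MTU = underlay MTU minus encapsulation overhead (IPv4)
vxlan_mtu=$((underlay_mtu - 50))    # VXLAN  -> 1450
gre_mtu=$((underlay_mtu - 42))      # GRE    -> 1458
geneve_mtu=$((underlay_mtu - 58))   # Geneve -> 1442

echo "vxlan=$vxlan_mtu gre=$gre_mtu geneve=$geneve_mtu"
```

Both pre-migration values thus land on the same 1442-byte Geneve MTU after migration.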
Steps for migration
-------------------

Perform the following steps in the overcloud/undercloud
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. Ensure that you have updated to the latest openstack/neutron version.

Perform the following steps in the undercloud
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. Install openstack-neutron-ovn-migration-tool.

   .. code-block:: console

      # yum install openstack-neutron-ovn-migration-tool

2. Create a working directory on the undercloud, and copy the ansible
   playbooks:

   .. code-block:: console

      $ mkdir ~/ovn_migration
      $ cd ~/ovn_migration
      $ cp -rfp /usr/share/ansible/neutron-ovn-migration/playbooks .
3. Create the ``~/overcloud-deploy-ovn.sh`` script in your ``$HOME``.
   This script must source your stackrc file, and then execute an ``openstack
   overcloud deploy`` with your original deployment parameters, plus
   the following environment files, added to the end of the command
   in the following order:

   When your network topology is DVR and your compute nodes have connectivity
   to the external network:

   .. code-block:: none

      -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml \
      -e $HOME/ovn-extras.yaml

   When your compute nodes don't have external connectivity and you don't use
   DVR:

   .. code-block:: none

      -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
      -e $HOME/ovn-extras.yaml

   Make sure that all users have execution privileges on the script, because
   it will be called by ovn_migration.sh/ansible during the migration process.

   .. code-block:: console

      $ chmod a+x ~/overcloud-deploy-ovn.sh
4. To configure the parameters of your migration you can set the environment
   variables that will be used by ``ovn_migration.sh``. You can skip setting
   any values matching the defaults.

   * STACKRC_FILE - must point to your stackrc file in your undercloud.
     Default: ~/stackrc

   * OVERCLOUDRC_FILE - must point to your overcloudrc file in your
     undercloud.
     Default: ~/overcloudrc

   * OVERCLOUD_OVN_DEPLOY_SCRIPT - must point to the script described in
     step 3.
     Default: ~/overcloud-deploy-ovn.sh

   * UNDERCLOUD_NODE_USER - user used on the undercloud nodes.
     Default: heat-admin

   * STACK_NAME - Name or ID of the heat stack.
     Default: 'overcloud'
     If the stack being migrated differs from the default, please set this
     environment variable to the stack name or ID.

   * PUBLIC_NETWORK_NAME - Name of your public network.
     Default: 'public'
     To support migration validation, this network must have available
     floating IPs, and those floating IPs must be pingable from the
     undercloud. If that's not possible, please set VALIDATE_MIGRATION
     to False.

   * OOO_WORKDIR - Name of the TripleO working directory.
     Default: '$HOME/overcloud-deploy'
     This directory contains the different TripleO stacks and their files. It
     should be configured if TripleO commands were invoked with the
     --work-dir option.

   * IMAGE_NAME - Name/ID of the glance image to use for booting a test
     server.
     Default: 'cirros'
     If the image does not exist, the tool will automatically download and
     use a cirros image during the pre-validation / post-validation process.

   * VALIDATE_MIGRATION - Create migration resources to validate the
     migration. Before starting the migration, the migration script boots a
     server and validates that the server is reachable after the migration.
     Default: False

   * SERVER_USER_NAME - User name to use for logging into the migration
     instances.
     Default: 'cirros'

   * DHCP_RENEWAL_TIME - DHCP renewal time in seconds to configure in the
     DHCP agent configuration file. This renewal time is used only
     temporarily during migration to ensure a synchronized MTU switch across
     the networks.
     Default: 30

   * CREATE_BACKUP - Flag to create a backup of the controllers that can be
     used as a revert mechanism.
     Default: True

   * BACKUP_MIGRATION_IP - Only used if CREATE_BACKUP is enabled; IP of the
     server that will be used as an NFS server to store the backup.
     Default: 192.168.24.1

   * BACKUP_MIGRATION_CTL_PLANE_CIDRS - Only used if CREATE_BACKUP is
     enabled. A comma-separated string of control plane subnets in CIDR
     notation for the controllers being backed up. The specified subnets
     will be used to enable NFS remote client connections.
     Default: 192.168.24.0/24

   .. warning::

      Please note that VALIDATE_MIGRATION requires enough quota (2
      available floating IPs, 2 networks, 2 subnets, 2 instances,
      and 2 routers as admin).

   For example:

   .. code-block:: console

      $ export PUBLIC_NETWORK_NAME=my-public-network
      $ ovn_migration.sh .........
5. Run ``ovn_migration.sh generate-inventory`` to generate the inventory
   file ``hosts_for_migration`` and ``ansible.cfg``. Please review
   ``hosts_for_migration`` for correctness.

   .. code-block:: console

      $ ovn_migration.sh generate-inventory

   At this step the script will inspect the TripleO ansible inventory
   and generate an inventory of hosts, specifically tagged to work
   with the migration playbooks.
6. Run ``ovn_migration.sh reduce-dhcp-t1``.

   .. code-block:: console

      $ ovn_migration.sh reduce-dhcp-t1

   This lowers the T1 parameter of the internal neutron DHCP servers by
   configuring ``dhcp_renewal_time`` in
   /var/lib/config-data/puppet-generated/neutron/etc/neutron/dhcp_agent.ini
   on all the nodes where the DHCP agent is running.

   We lower the T1 parameter to make sure that the instances start refreshing
   the DHCP lease more quickly (every 30 seconds by default) during the
   migration process. We force this to make sure that the MTU update happens
   quickly across the network during step 8. This is very important because
   during those 30 seconds there will be connectivity issues with bigger
   packets (MTU mismatches across the network). This is also why step 7 is
   very important: even though we reduce T1, the previous T1 value the
   instances leased from the DHCP server will be much higher (24h by default)
   and we need to wait those 24h to make sure they have updated T1. After
   migration, the DHCP T1 parameter returns to normal values.
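As a quick sanity check of these timings, here is a sketch using the default values quoted above:

```shell
dhcp_lease_duration=86400   # neutron.conf default (seconds) = 24 hours
dhcp_renewal_time=30        # temporary T1 set by reduce-dhcp-t1 (seconds)

# An instance that took its lease before the change may wait the whole old
# lease period before it talks to the DHCP server again, hence the 24h wait.
echo "worst-case wait: $((dhcp_lease_duration / 3600)) hours"
echo "refresh interval during migration: ${dhcp_renewal_time} seconds"
```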
7. If you are using VXLAN or GRE tenant networking, ``wait at least 24
   hours`` before continuing. This will allow VMs to catch up with the new
   MTU size of the next step.

   .. warning::

      If you are using VXLAN or GRE networks, this 24-hour wait step is
      critical. If you are using VLAN tenant networks you can proceed to the
      next step without delay.

   .. warning::

      If you have any instance with static IP assignment on VXLAN or
      GRE tenant networks, you must manually modify the configuration of
      those instances to configure the new geneve MTU, which is the current
      VXLAN MTU minus 8 bytes. For instance, if the VXLAN-based MTU was 1450,
      change it to 1442. If your instances don't honor the T1 parameter of
      DHCP, they will need to be rebooted.

   .. note::

      24 hours is the time based on the default configuration. It actually
      depends on the
      /var/lib/config-data/puppet-generated/neutron/etc/neutron/dhcp_agent.ini
      dhcp_renewal_time and
      /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf
      dhcp_lease_duration parameters (dhcp_lease_duration defaults to 86400
      seconds).

   .. note::

      Please note that migrating a deployment which uses VLAN for
      tenant/project networks is not recommended at this time because of a
      bug in core OVN; full support is being worked on here:
      https://mail.openvswitch.org/pipermail/ovs-dev/2018-May/347594.html
   One way to verify that the T1 parameter has propagated to existing VMs
   is to connect to one of the compute nodes and run ``tcpdump`` over one
   of the VM taps attached to a tenant network. If T1 propagation was a
   success, you should see that requests happen at an interval of
   approximately 30 seconds.

   .. code-block:: shell

      [heat-admin@overcloud-novacompute-0 ~]$ sudo tcpdump -i tap52e872c2-e6 port 67 or port 68 -n
      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on tap52e872c2-e6, link-type EN10MB (Ethernet), capture size 262144 bytes
      13:17:28.954675 IP 192.168.99.5.bootpc > 192.168.99.3.bootps: BOOTP/DHCP, Request from fa:16:3e:6b:41:3d, length 300
      13:17:28.961321 IP 192.168.99.3.bootps > 192.168.99.5.bootpc: BOOTP/DHCP, Reply, length 355
      13:17:56.241156 IP 192.168.99.5.bootpc > 192.168.99.3.bootps: BOOTP/DHCP, Request from fa:16:3e:6b:41:3d, length 300
      13:17:56.249899 IP 192.168.99.3.bootps > 192.168.99.5.bootpc: BOOTP/DHCP, Reply, length 355

   .. note::

      This verification is not possible with cirros VMs. The cirros
      udhcpc implementation does not obey DHCP option 58 (T1). Please
      try this verification on a port that belongs to a full linux VM.
      We recommend that you check all the different types of workloads your
      system runs (Windows, different flavors of linux, etc.).
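To turn the eyeball check above into numbers, one can extract the DHCP request timestamps from a saved capture and print the gaps between them. This is a sketch; the sample log is built here from the capture shown above, and in practice you would redirect live ``tcpdump`` output into the file instead:

```shell
# Sample input: the Request lines from the capture above.
cat > dhcp.log <<'EOF'
13:17:28.954675 IP 192.168.99.5.bootpc > 192.168.99.3.bootps: BOOTP/DHCP, Request from fa:16:3e:6b:41:3d, length 300
13:17:56.241156 IP 192.168.99.5.bootpc > 192.168.99.3.bootps: BOOTP/DHCP, Request from fa:16:3e:6b:41:3d, length 300
EOF

# Print the seconds elapsed between consecutive DHCP Request lines;
# values near 30 indicate the reduced T1 has reached the instance.
awk '/Request/ {
    split($1, t, ":")
    now = t[1] * 3600 + t[2] * 60 + t[3]
    if (prev) printf "%.0f\n", now - prev
    prev = now
}' dhcp.log
```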
8. Run ``ovn_migration.sh reduce-mtu``.

   This lowers the MTU of the pre-migration VXLAN and GRE networks. The
   tool will ignore non-VXLAN/GRE networks, so if you use VLAN for tenant
   networks it is fine if this step does not appear to do anything.

   .. code-block:: console

      $ ovn_migration.sh reduce-mtu

   This step will go network by network reducing the MTU, tagging with
   ``adapted_mtu`` the networks which have already been handled.
   Every time a network is updated, all the existing L3/DHCP agents
   connected to that network will update the MTU of their internal legs, and
   instances will start fetching the new MTU as the DHCP T1 timer expires.
   As explained before, instances not obeying the DHCP T1 parameter will
   need to be restarted, and instances with static IP assignment will need
   to be manually updated.
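The per-network decision reduce-mtu applies can be sketched as follows. This is illustrative only (not the actual tool's code); the overhead deltas follow from the values discussed earlier (VXLAN 1450 and GRE 1458 both map to 1442 under Geneve):

```shell
# Only VXLAN and GRE tenant networks are lowered to the Geneve-equivalent
# MTU; every other network type is returned unchanged.
new_mtu() {  # usage: new_mtu <network-type> <current-mtu>
    case "$1" in
        vxlan) echo $(( $2 - 8 ))  ;;  # Geneve header is 8 bytes larger
        gre)   echo $(( $2 - 16 )) ;;  # e.g. 1458 -> 1442 on a 1500 underlay
        *)     echo "$2"           ;;  # VLAN/flat: left alone
    esac
}

new_mtu vxlan 1450   # 1442
new_mtu gre   1458   # 1442
new_mtu vlan  1500   # 1500
```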
9. Make TripleO ``prepare the new container images`` for OVN.

   If your deployment didn't have a containers-prepare-parameter.yaml, you
   can create one with:

   .. code-block:: console

      $ test -f $HOME/containers-prepare-parameter.yaml || \
          openstack tripleo container image prepare default \
              --output-env-file $HOME/containers-prepare-parameter.yaml

   If you had to create the file, please make sure it's included at the end
   of your $HOME/overcloud-deploy-ovn.sh and $HOME/overcloud-deploy.sh.

   Change the neutron_driver in the containers-prepare-parameter.yaml file to
   ovn:

   .. code-block:: console

      $ sed -i -E 's/neutron_driver:([ ]\w+)/neutron_driver: ovn/' $HOME/containers-prepare-parameter.yaml

   You can verify with:

   .. code-block:: shell

      $ grep neutron_driver $HOME/containers-prepare-parameter.yaml
      neutron_driver: ovn

   Then update the images:

   .. code-block:: console

      $ openstack tripleo container image prepare \
          --environment-file $HOME/containers-prepare-parameter.yaml

   .. note::

      It's important to provide the full path to your
      containers-prepare-parameter.yaml, otherwise the command will finish
      very quickly and won't work (the current version doesn't seem to
      output any error).

   During this step TripleO will build a list of containers, pull them from
   the remote registry and push them to your deployment's local registry.
10. Run ``ovn_migration.sh start-migration`` to kick off the migration
    process.

    .. code-block:: console

       $ ovn_migration.sh start-migration

    During this step, this is what will happen:

    * Create pre-migration resources (network and VM) to validate the
      existing deployment and the final migration.
    * Update the overcloud stack to deploy OVN alongside the reference
      implementation services, using a temporary bridge "br-migration"
      instead of br-int.
    * Start the migration process:

      1. generate the OVN north db by running the neutron-ovn-db-sync util,
      2. clone the existing resources from br-int to br-migration, so OVN
         can find the same resource UUIDs over br-migration,
      3. re-assign ovn-controller to br-int instead of br-migration,
      4. clean up network namespaces (fip, snat, qrouter, qdhcp),
      5. remove any unnecessary patch ports on br-int,
      6. remove the br-tun and br-migration ovs bridges,
      7. delete the qr-*, ha-* and qg-* ports from br-int (via neutron netns
         cleanup).

    * Delete the neutron agents and neutron HA internal networks from the
      database via the API.
    * Validate connectivity on pre-migration resources.
    * Delete pre-migration resources.
    * Create post-migration resources.
    * Validate connectivity on post-migration resources.
    * Clean up post-migration resources.
    * Re-run the deployment tool to update OVN on br-int; this step ensures
      that the TripleO database is updated with the final integration
      bridge.
    * Run an extra validation round to ensure the final state of the system
      is fully operational.

    Migration is complete!


@@ -1,83 +0,0 @@
---
config:
entry_point: ./tools/ovn_migration/infrared/tripleo-ovn-migration/main.yml
plugin_type: install
subparsers:
tripleo-ovn-migration:
description: Migrate an existing TripleO overcloud from Neutron ML2OVS plugin to OVN
include_groups: ["Ansible options", "Inventory", "Common options", "Answers file"]
groups:
- title: Containers
options:
registry-namespace:
type: Value
help: The alternative docker registry namespace to use for deployment.
registry-prefix:
type: Value
help: The images prefix
registry-tag:
type: Value
help: The images tag
registry-mirror:
type: Value
help: The alternative docker registry to use for deployment.
- title: Deployment Description
options:
version:
type: Value
help: |
The product version
Numbers are for OSP releases
Names are for RDO releases
If not given, same version of the undercloud will be used
choices:
- "7"
- "8"
- "9"
- "10"
- "11"
- "12"
- "13"
- "14"
- "15"
- "16"
- "16.1"
- "16.2"
- kilo
- liberty
- mitaka
- newton
- ocata
- pike
- queens
- rocky
- stein
- train
install_from_package:
type: Bool
help: Install openstack-neutron-ovn-migration-tool rpm
default: True
dvr:
type: Bool
help: If the deployment is to be dvr or not
default: False
create_resources:
type: Bool
help: Create resources to measure downtime
default: True
external_network:
type: Value
help: External network name to use
default: public
image_name:
type: Value
help: Image name to use
default: cirros-0.3.5-x86_64-disk.img


@@ -0,0 +1,5 @@
---
upgrade:
  - |
    The migration tool for TripleO deployments has been removed, because
    the TripleO project has been retired.


@@ -28,9 +28,6 @@ data_files =
     etc/api-paste.ini
     etc/rootwrap.conf
     etc/neutron/rootwrap.d = etc/neutron/rootwrap.d/*
-    share/ansible/neutron-ovn-migration/playbooks = tools/ovn_migration/tripleo_environment/playbooks/*
-scripts =
-    tools/ovn_migration/tripleo_environment/ovn_migration.sh
 [entry_points]
 wsgi_scripts =


@@ -4,9 +4,6 @@ Migration from ML2/OVS to ML2/OVN
 Proof-of-concept ansible script for migrating an OpenStack deployment
 that uses ML2/OVS to OVN.
-If you have a tripleo ML2/OVS deployment then please see the folder
-``tripleo_environment``
 Prerequisites:
 1. Ansible 2.2 or greater.


@@ -1,37 +0,0 @@
# All controller nodes running OpenStack control services, particularly
# neutron-api. Also indicate which controller you'd like to have run
# the OVN central control services.
[controller]
overcloud-controller-0 ovn_central=true
overcloud-controller-1
overcloud-controller-2
# All compute nodes. We will replace the openvswitch agent
# with ovn-controller on these nodes.
#
# The ovn_encap_ip variable should be filled in with the IP
# address that other compute hosts should use as the tunnel
# endpoint for tunnels to that host.
[compute]
overcloud-novacompute-0 ovn_encap_ip=192.0.2.10
overcloud-novacompute-1 ovn_encap_ip=192.0.2.11
overcloud-novacompute-2 ovn_encap_ip=192.0.2.12
overcloud-novacompute-3 ovn_encap_ip=192.0.2.13
overcloud-novacompute-4 ovn_encap_ip=192.0.2.14
overcloud-novacompute-5 ovn_encap_ip=192.0.2.15
# Configure bridge mappings to be used on compute hosts.
[compute:vars]
ovn_bridge_mappings=net1:br-em1
is_compute_node=true
[overcloud:children]
controller
compute
# Fill in "ovn_db_ip" with an IP address on a management network
# that the controller and compute nodes should reach. This address
# should not be reachable otherwise.
[overcloud:vars]
ovn_db_ip=192.0.2.50
remote_user=heat-admin


@@ -1,33 +0,0 @@
Infrared plugin to carry out migration from ML2/OVS to OVN
==========================================================
This is an infrared plugin which can be used to carry out the migration
from ML2/OVS to OVN if the TripleO overcloud was deployed using infrared.
See http://infrared.readthedocs.io/en/stable/index.html for more information.
Before using this plugin, first deploy an ML2/OVS overcloud and then:
1. On your undercloud, install the openstack-neutron-ovn-migration-tool package (https://trunk.rdoproject.org/centos9-master/component/network/current/).
   You also need to install the python3-neutron and python3-openvswitch packages.
2. Run ::
$infrared plugin add "https://opendev.org/openstack/neutron.git"
3. Start migration by running::
$infrared tripleo-ovn-migration --version 13|14 \
--registry-namespace <REGISTRY_NAMESPACE> \
--registry-tag <TAG> \
--registry-prefix <PREFIX>
Using this as a standalone playbook for tripleo deployments
===========================================================
It is also possible to use the playbook main.yml with tripleo deployments.
In order to use this:
1. Create a hosts inventory file like the one below
[undercloud]
undercloud_ip ansible_ssh_user=stack
2. Run the playbook as:
ansible-playbook main.yml -i hosts -e install_from_package=True -e registry_prefix=centos-binary -e registry_namespace=docker.io/tripleomaster -e registry_localnamespace=192.168.24.1:8787/tripleomaster -e registry_tag=current-tripleo-rdo


@@ -1,194 +0,0 @@
# Playbook which preps migration and then invokes the migration script.
- name: Install migration tool
hosts: undercloud
become: true
tasks:
- name: Install python 3 virtualenv and neutron ovn migration tool
yum:
name:
- python3-virtualenv
- openstack-neutron-ovn-migration-tool
state: present
- name: Set host_key_checking to False in ansible.cfg
ini_file:
path=/etc/ansible/ansible.cfg
section=defaults
option=host_key_checking
value=False
ignore_errors: yes
- name: Prepare for migration
hosts: undercloud
tasks:
- name: Set ovn migration working dir
set_fact:
ovn_migration_working_dir: /home/stack/ovn_migration
- name: Delete temp file directory if present
file:
state: absent
path: "{{ ovn_migration_working_dir }}"
- name : Create temp file directory if not present
file:
state: directory
path: "{{ ovn_migration_working_dir }}"
- name: Set the image registry information
block:
- name: Get the image registry info (infrared deployment)
block:
- name: Set is_infrard deployment
set_fact:
is_infrared: True
- name: Save the image reg
set_fact:
container_image_prepare:
namespace: "{{ install.get('registry', {}).namespace|default(False)|ternary(install.get('registry', {}).namespace, install.get('registry', {}).mirror + '/' + 'rhosp' + install.version) }}"
prefix: "{{ install.registry.prefix|default('openstack') }}"
tag: "{{ install.registry.tag|default('') }}"
local_namespace: "{{ install.registry.local|default('') }}"
is_dvr: "{{ install.dvr }}"
when:
- install is defined
- name: Get the image registry info (tripleo deployment)
block:
- name: Set is_infrard deployment
set_fact:
is_infrared: False
- name: Save the image reg
set_fact:
container_image_prepare:
namespace: "{{ registry_namespace }}"
local_namespace: "{{ registry_localnamespace }}"
prefix: "{{ registry_prefix }}"
tag: "{{ registry_tag }}"
is_dvr: "{{ dvr }}"
when:
- install is not defined
- name: Prepare for migration
include_role:
name: prepare-migration
vars:
infrared_deployment: "{{ is_infrared }}"
registry_namespace: "{{ container_image_prepare['namespace'] }}"
image_prefix: "{{ container_image_prepare['prefix'] }}"
image_tag: "{{ container_image_prepare['tag'] }}"
local_namespace: "{{ container_image_prepare['local_namespace'] }}"
is_dvr: "{{ container_image_prepare['is_dvr'] }}"
- name: Boot few VMs to measure downtime
hosts: undercloud
tasks:
- name: Check if need to create resources
block:
- name: Set create_vms (infrared)
set_fact:
create_vms: "{{ install.create_resources }}"
when:
- install is defined
- name: Set create_vms (tripleo deployment)
set_fact:
create_vms: "{{ create_resources }}"
when:
- install is not defined
- name: Create few resources
block:
- name: Set the public network name (infrared deployment)
set_fact:
public_net: "{{ install.external_network }}"
when: install is defined
- name: Set the public network name (Tripleo deployment)
set_fact:
public_net: "{{ external_network }}"
when: install is not defined
- name: Set the image name (infrared deployment)
set_fact:
image_to_boot: "{{ install.image_name }}"
when: install is defined
- name: Set the image name(Tripleo deployment)
set_fact:
image_to_boot: "{{ image_name }}"
when: install is not defined
- name: Create resources
include_role:
name: create-resources
vars:
public_network_name: "{{ public_net }}"
image_name: "{{ image_to_boot }}"
ovn_migration_temp_dir: /home/stack/ovn_migration
overcloudrc: /home/stack/overcloudrc
when:
- create_vms|bool
- name: Kick start the migration
hosts: undercloud
tasks:
#TODO: Get the working dir from the param
- name: Starting migration block
block:
- name: Set ovn migration working dir
set_fact:
ovn_migration_working_dir: /home/stack/ovn_migration
- name: Copy the playbook files into ovn_migration working dir
command: cp -rf /usr/share/ansible/neutron-ovn-migration/playbooks {{ ovn_migration_working_dir }}
- name: Set the public network name (infrared deployment)
set_fact:
public_network: "{{ install.external_network }}"
when: install is defined
- name: Set the public network name (Tripleo deployment)
set_fact:
public_network: "{{ external_network }}"
when: install is not defined
- name: Create ovn migration script
template:
src: templates/start-ovn-migration.sh.j2
dest: "{{ ovn_migration_working_dir }}/start-ovn-migration.sh"
mode: 0755
- name: Generate inventory file for ovn migration
shell:
set -o pipefail &&
{{ ovn_migration_working_dir }}/start-ovn-migration.sh generate-inventory 2>&1 > {{ ovn_migration_working_dir}}/generate-inventory.log
- name: Set DHCP T1 timer
shell:
set -o pipefail &&
{{ ovn_migration_working_dir }}/start-ovn-migration.sh reduce-dhcp-t1 2>&1 > {{ ovn_migration_working_dir}}/reduce-dhcp-t1.log
- name: Reduce mtu of the pre migration networks
shell:
set -o pipefail &&
{{ ovn_migration_working_dir }}/start-ovn-migration.sh reduce-mtu 2>&1 > {{ ovn_migration_working_dir}}/reduce-mtu.log
- name: Start the migration process
shell:
set -o pipefail &&
{{ ovn_migration_working_dir }}/start-ovn-migration.sh start-migration 2>&1
> {{ ovn_migration_working_dir}}/start-ovn-migration.sh.log
- name: Stop pinger if started
shell:
echo "exit" > {{ ovn_migration_working_dir}}/_pinger_cmd.txt
always:
- name: Fetch ovn_migration log directory
synchronize:
src: "{{ ovn_migration_working_dir }}"
dest: "{{ inventory_dir }}"
mode: pull
when: install is defined


@@ -1,9 +0,0 @@
---
public_network_name: "{{ public_network_name }}"
create_resource_script: create-resources.sh.j2
ovn_migration_temp_dir: "{{ ovn_migration_temp_dir }}"
image_name: "{{ image_name }}"
server_user_name: "{{ server_user_name }}"
overcloudrc: "{{ overcloudrc }}"
resource_suffix: pinger


@@ -1,33 +0,0 @@
- name: Delete temp file directory if present
file:
state: absent
path: "{{ ovn_migration_temp_dir }}"
- name : Create temp file directory if not present
file:
state: directory
path: "{{ ovn_migration_temp_dir }}"
- name: Generate resource creation script
template:
src: create-resources.sh.j2
dest: "{{ ovn_migration_temp_dir }}/create-resources.sh"
mode: 0744
- name: Creating pre pre migration resources
shell: >
set -o pipefail &&
{{ ovn_migration_temp_dir }}/create-resources.sh 2>&1 >
{{ ovn_migration_temp_dir }}/create-resources.sh.log
changed_when: true
- name: Generate pinger script
template:
src: start-pinger.sh.j2
dest: "{{ ovn_migration_temp_dir }}/start-pinger.sh"
mode: 0744
- name: Start pinger in background
shell: >
nohup {{ ovn_migration_temp_dir }}/start-pinger.sh </dev/null >/dev/null 2>&1 &
changed_when: False


@@ -1,170 +0,0 @@
#!/bin/bash
set -x
source {{ overcloudrc }}
image_name={{ image_name }}
openstack image show $image_name
if [ "$?" != "0" ]
then
if [ ! -f cirros-0.5.2-x86_64-disk.img ]
then
curl -Lo cirros-0.5.2-x86_64-disk.img https://github.com/cirros-dev/cirros/releases/download/0.5.2/cirros-0.5.2-x86_64-disk.img
fi
openstack image create "cirros-ovn-migration-{{ resource_suffix }}" --file cirros-0.5.2-x86_64-disk.img \
--disk-format qcow2 --container-format bare --public
image_name="cirros-ovn-migration-{{ resource_suffix }}"
fi
openstack flavor create ovn-migration-{{ resource_suffix }} --ram 1024 --disk 1 --vcpus 1
openstack keypair create ovn-migration-{{ resource_suffix }} --private-key {{ ovn_migration_temp_dir }}/ovn_migration_ssh_key
openstack security group create ovn-migration-sg-{{ resource_suffix }}
openstack security group rule create --ingress --protocol icmp ovn-migration-sg-{{ resource_suffix }}
openstack security group rule create --ingress --protocol tcp --dst-port 22 ovn-migration-sg-{{ resource_suffix }}
openstack network create ovn-migration-net-{{ resource_suffix }}
neutron net-update ovn-migration-net-{{ resource_suffix }} --mtu 1442
openstack subnet create --network ovn-migration-net-{{ resource_suffix }} --subnet-range 172.168.168.0/24 ovn-migration-subnet-{{ resource_suffix }}
num_hypervisors=`openstack hypervisor stats show | grep count | awk '{print $4}'`
openstack server create --flavor ovn-migration-{{ resource_suffix }} --image $image_name \
--key-name ovn-migration-{{ resource_suffix }} \
--nic net-id=ovn-migration-net-{{ resource_suffix }} \
--security-group ovn-migration-sg-{{ resource_suffix }} \
--min $num_hypervisors --max $num_hypervisors \
ovn-migration-server-{{ resource_suffix }}
openstack router create ovn-migration-router-{{ resource_suffix }}
openstack router set --external-gateway {{ public_network_name }} ovn-migration-router-{{ resource_suffix }}
openstack router add subnet ovn-migration-router-{{ resource_suffix }} ovn-migration-subnet-{{ resource_suffix }}
for i in $(seq 1 $num_hypervisors)
do
num_attempts=0
while true
do
openstack server show ovn-migration-server-{{ resource_suffix }}-$i -c status | grep ACTIVE
if [ "$?" == "0" ]; then
break
fi
sleep 5
num_attempts=$((num_attempts+1))
if [ $num_attempts -gt 24 ]
then
echo "VM is not up even after 2 minutes. Something is wrong"
# printing server information for debugging
openstack server show ovn-migration-server-{{ resource_suffix }}-$i
exit 1
fi
done
vm_ip=`openstack server show ovn-migration-server-{{ resource_suffix }}-$i -c addresses | grep addresses | awk '{ split($4, ip, "="); print ip[2]}'`
port_id=`openstack port list | grep $vm_ip | awk '{print $2}'`
# Wait till the port is ACTIVE
echo "Wait till the port is ACTIVE"
port_status=`openstack port show $port_id -c status | grep status | awk '{print $4}'`
num_attempts=0
while [ "$port_status" != "ACTIVE" ]
do
num_attempts=$((num_attempts+1))
sleep 5
port_status=`openstack port show $port_id -c status | grep status | awk '{print $4}'`
echo "Port status = $port_status"
if [ $num_attempts -gt 24 ]
then
echo "Port is not up even after 2 minutes. Something is wrong"
# printing port information for debugging
openstack port show $port_id
exit 1
fi
done
echo "VM is up and the port is ACTIVE"
server_ip=`openstack floating ip create --port $port_id \
{{ public_network_name }} -c floating_ip_address | grep floating_ip_address \
    | awk '{print $4}'`
echo $server_ip >> {{ ovn_migration_temp_dir }}/server_fips
# Wait till the VM allows ssh connections
vm_status="down"
num_attempts=0
while [ "$vm_status" != "up" ]
do
num_attempts=$((num_attempts+1))
sleep 5
openstack console log show ovn-migration-server-{{ resource_suffix }}-$i | grep "login:"
if [ "$?" == "0" ]
then
vm_status="up"
else
if [ $num_attempts -gt 60 ]
then
echo "VM is not up with login prompt even after 5 minutes. Something is wrong."
# Even though something seems wrong, lets try and ping.
break
fi
fi
done
done
chmod 0600 {{ ovn_migration_temp_dir }}/ovn_migration_ssh_key
for server_ip in `cat {{ ovn_migration_temp_dir }}/server_fips`
do
num_attempts=0
vm_reachable="false"
while [ "$vm_reachable" != "true" ]
do
num_attempts=$((num_attempts+1))
sleep 1
ping -c 3 $server_ip
if [ "$?" == "0" ]
then
vm_reachable="true"
else
if [ $num_attempts -gt 60 ]
then
echo "VM is not pingable. Something is wrong."
# printing server information for debugging
server_id=$(openstack server list -f value | grep $server_ip | awk '{print $1}')
openstack console log show $server_id
exit 1
fi
fi
done
ssh -i {{ ovn_migration_temp_dir }}/ovn_migration_ssh_key -o StrictHostKeyChecking=no \
-o UserKnownHostsFile=/dev/null cirros@$server_ip date
rc=$?
if [ "$rc" != "0" ]
then
echo "VM not accessible via ssh. Something went wrong. Exiting with rc=$rc"
# printing server information for debugging purposes
server_id=$(openstack server list -f value | grep $server_ip | awk '{print $1}')
openstack console log show $server_id
exit $rc
fi
done
echo "Done with the resource creation : exiting"
exit 0

View File

@@ -1,58 +0,0 @@
#!/bin/bash
set -x
echo "creating virtualenv in {{ ovn_migration_temp_dir }}/pinger_venv"
virtualenv {{ ovn_migration_temp_dir }}/pinger_venv
source {{ ovn_migration_temp_dir }}/pinger_venv/bin/activate
pip install --upgrade pip
pip install sh
cat > {{ ovn_migration_temp_dir }}/pinger.py <<-EOF
import sh
import sys
import time
def main(ips):
run_cmds = []
for ip in ips:
ip_out_file = "{{ ovn_migration_temp_dir }}/" + ip.replace('.', '_') + '_ping.out'
run_cmds.append(sh.ping('-i', '1', ip, _out=ip_out_file, _bg=True))
if not run_cmds:
return
while True:
try:
cmd_file = open("{{ ovn_migration_temp_dir }}/_pinger_cmd.txt", "r")
cmd = cmd_file.readline()
if cmd.startswith("exit"):
break
cmd_file.close()
except IOError:
time.sleep(3)
continue
for p in run_cmds:
p.signal(2)
p.wait()
if __name__ == '__main__':
main(sys.argv[1:])
EOF
pinger_ips=""
for ip in `cat {{ ovn_migration_temp_dir }}/server_fips`
do
pinger_ips="$pinger_ips $ip"
done
echo "pinger ips = $pinger_ips"
echo "calling pinger.py"
python {{ ovn_migration_temp_dir }}/pinger.py $pinger_ips
echo "Exiting..."


@ -1,7 +0,0 @@
---
infrared_deployment: False
registry_namespace: docker.io/tripleomaster
local_namespace: 192.168.24.1:8787/tripleomaster
image_tag: current-tripleo-rdo
image_prefix: centos-binary-


@ -1,191 +0,0 @@
- name: Copy overcloud deploy script to overcloud-deploy-ovn.sh
block:
- name: Check if overcloud_deploy.sh is present or not
stat:
path: ~/overcloud_deploy.sh
register: deploy_file
- name: Set the ml2ovs overcloud deploy script file name
set_fact:
overcloud_deploy_script: '~/overcloud_deploy.sh'
when: deploy_file.stat.exists|bool
- name: Check if overcloud-deploy.sh is present
stat:
path: ~/overcloud-deploy.sh
register: deploy_file_2
when: not deploy_file.stat.exists|bool
- name: Set the ml2ovs overcloud deploy script file name
set_fact:
overcloud_deploy_script: '~/overcloud-deploy.sh'
when:
- not deploy_file.stat.exists|bool
- deploy_file_2.stat.exists|bool
- name: Copy overcloud deploy script to overcloud-deploy-ovn.sh
command: cp -f {{ overcloud_deploy_script }} ~/overcloud-deploy-ovn.sh
when: infrared_deployment|bool
- name: set overcloud deploy ovn script
set_fact:
overcloud_deploy_ovn_script: '~/overcloud-deploy-ovn.sh'
- name: Remove ml2ovs-specific environment files from overcloud deploy ovn script
lineinfile:
dest: "{{ overcloud_deploy_ovn_script }}"
state: absent
regexp: "{{ item }}"
with_items:
- "^.*openstack-tripleo-heat-templates.*ovs.*yaml"
- ".*neutron-sriov.yaml.*"
when: infrared_deployment|bool
- name: Set container images environment file
set_fact:
output_env_file: /home/stack/container-images-ovn.yaml
- name: Get the proper neutron-ovn-ha.yaml path
stat:
path: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml
register: ovn_env_path
- name: Set the neutron-ovn-dvr-ha.yaml file path if dvr
set_fact:
neutron_ovn_env_path: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml
when: is_dvr|bool
- name: Set the neutron-ovn-ha.yaml file path if not dvr
set_fact:
neutron_ovn_env_path: /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml
when: not is_dvr|bool
- name: Construct overcloud-deploy-ovn.sh script for infrared deployments
lineinfile:
dest: "{{ overcloud_deploy_ovn_script }}"
line: "{{ item }} \\"
insertbefore: "^--log-file.*"
with_items:
- "-e {{ neutron_ovn_env_path }}"
- "-e /home/stack/ovn-extras.yaml"
- "-e {{ output_env_file }}"
when:
- infrared_deployment|bool
- name: Construct overcloud-deploy-ovn.sh script for tripleo deployments
template:
src: templates/overcloud-deploy-ovn.sh.j2
dest: ~/overcloud-deploy-ovn.sh
mode: 0744
when:
- not infrared_deployment|bool
- name: Set image tag (infrared deployment)
block:
- name: Get puddle version
shell: cat containers-prepare-parameter.yaml | grep -v _tag | grep tag | awk '{print $2}'
ignore_errors: True
register: core_puddle_version
- name: Set image tag from puddle version
set_fact:
container_image_tag: "{{ core_puddle_version.stdout }}"
- name: Get registry namespace
shell: cat containers-prepare-parameter.yaml | grep -v _namespace | grep namespace | awk '{print $2}'
ignore_errors: True
register: reg_ns
- name: Set registry namespace
set_fact:
reg_namespace: "{{ reg_ns.stdout }}"
- debug:
msg: "{{ core_puddle_version.stdout }}"
- debug:
msg: "{{ container_image_tag }}"
- debug:
msg: "{{ reg_namespace }}"
when: infrared_deployment|bool
- name: Set image tag (tripleo deployment)
set_fact:
container_image_tag: "{{ image_tag }}"
when:
- not infrared_deployment|bool
- name: Generate ovn container images
shell: |
echo "container_images:" > ~/ovn_container_images.yaml
args:
creates: ~/ovn_container_images.yaml
- name: Add ovn container images to ovn_container_images.yaml
lineinfile:
dest: ~/ovn_container_images.yaml
line: "- imagename: {{ reg_namespace }}/{{ image_prefix }}-{{ item }}:{{ container_image_tag }}"
with_items:
- "ovn-northd"
- "ovn-controller"
- "neutron-server-ovn"
- "neutron-metadata-agent-ovn"
- name: Generate container images environment file
shell: |
echo "parameter_defaults:" > ~/container-images-ovn.yaml
changed_when: False
- name: Set the local namespace
block:
- name: Extract the local namespace
shell: |
set -exo pipefail
source ~/stackrc
openstack overcloud plan export overcloud
mkdir -p /tmp/oc_plan
mv overcloud.tar.gz /tmp/oc_plan/
cd /tmp/oc_plan
tar xvf overcloud.tar.gz
reg=`cat /tmp/oc_plan/environments/containers-default-parameters.yaml | grep ContainerNeutronApiImage | awk '{ split($2, image , "/"); print image[1] }'`
namespace=`cat /tmp/oc_plan/environments/containers-default-parameters.yaml | grep ContainerNeutronApiImage | awk '{ split($2, image , "/"); print image[2] }'`
echo $reg/$namespace > /tmp/_reg_namespace
rm -rf /tmp/oc_plan
- name: Get the local namespace
command: cat /tmp/_reg_namespace
register: local_ns
- name: Set the local registry
set_fact:
local_registry: "{{ local_ns.stdout }}"
when:
- local_namespace == ''
- name: Set the local namespace
set_fact:
local_registry: "{{ local_namespace }}"
when:
- local_namespace != ''
- name: Add ovn container images to container images environment file
lineinfile:
dest: ~/container-images-ovn.yaml
line: " {{ item.name }}: {{ local_registry }}/{{ image_prefix }}-{{ item.image_name }}:{{ container_image_tag }}"
with_items:
- { name: ContainerNeutronApiImage, image_name: neutron-server-ovn}
- { name: ContainerNeutronConfigImage, image_name: neutron-server-ovn}
- { name: ContainerOvnMetadataImage, image_name: neutron-metadata-agent-ovn}
- { name: ContainerOvnControllerImage, image_name: ovn-controller}
- { name: ContainerOvnControllerConfigImage, image_name: ovn-controller}
- { name: ContainerOvnDbsImage, image_name: ovn-northd}
- { name: ContainerOvnDbsConfigImage, image_name: ovn-northd}
- name: Upload the ovn container images to the local registry
shell: |
source /home/stack/stackrc
openstack tripleo container image prepare --environment-file /home/stack/containers-prepare-parameter.yaml
become: yes
changed_when: False


@ -1,7 +0,0 @@
#!/bin/bash
export PUBLIC_NETWORK_NAME={{ public_network }}
# TODO: Get this from the var
export OPT_WORKDIR=/home/stack/ovn_migration
/usr/bin/ovn_migration.sh $1


@ -1,417 +0,0 @@
#!/bin/bash
# Copyright 2018 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# With LANG set to anything other than C, completely undecipherable errors
# like "file not found" and decoding errors will start to appear during scripts
# or even ansible modules
LANG=C
# Complete stackrc file path.
: ${STACKRC_FILE:=~/stackrc}
# Complete overcloudrc file path.
: ${OVERCLOUDRC_FILE:=~/overcloudrc}
# overcloud deploy script for OVN migration.
: ${OVERCLOUD_OVN_DEPLOY_SCRIPT:=~/overcloud-deploy-ovn.sh}
# user on the nodes in the undercloud
: ${UNDERCLOUD_NODE_USER:=heat-admin}
: ${OPT_WORKDIR:=$PWD}
: ${STACK_NAME:=overcloud}
: ${OOO_WORKDIR:=$HOME/overcloud-deploy}
: ${PUBLIC_NETWORK_NAME:=public}
: ${IMAGE_NAME:=cirros}
: ${FLAVOR_NAME:=ovn-migration}
: ${SERVER_USER_NAME:=cirros}
: ${VALIDATE_MIGRATION:=False}
: ${DHCP_RENEWAL_TIME:=30}
: ${CREATE_BACKUP:=True}
: ${BACKUP_MIGRATION_IP:=192.168.24.1}
: ${BACKUP_MIGRATION_CTL_PLANE_CIDRS:=192.168.24.0/24}
check_for_necessary_files() {
if [ ! -e hosts_for_migration ]; then
echo "hosts_for_migration ansible inventory file not present"
echo "Please run ./ovn_migration.sh generate-inventory"
exit 1
fi
# Check if the user has generated the overcloud-deploy-ovn.sh file
# with correct permissions.
# If it has not been generated, exit.
if [ ! -x $OVERCLOUD_OVN_DEPLOY_SCRIPT ]; then
echo "overcloud deploy migration script :" \
"$OVERCLOUD_OVN_DEPLOY_SCRIPT is not present" \
"or execution permission is missing. Please" \
"make sure you create that file with correct" \
"permissions before running this script."
exit 1
fi
grep -q -- '--answers-file' $OVERCLOUD_OVN_DEPLOY_SCRIPT || grep -q -- '--environment-directory' $OVERCLOUD_OVN_DEPLOY_SCRIPT
answers_templates_check=$?
grep -q -- 'neutron-ovn' $OVERCLOUD_OVN_DEPLOY_SCRIPT
if [[ $? -eq 1 ]]; then
if [[ $answers_templates_check -eq 0 ]]; then
echo -e "\nWARNING!!! You are using an answers-file or a templates directory" \
" ( --answers-file/--environment-directory) " \
"\nYou MUST make sure the proper OVN files are included in the templates called by your deploy script"
else
echo -e "OVN t-h-t environment file(s) seems to be missing in " \
"$OVERCLOUD_OVN_DEPLOY_SCRIPT. Please check the $OVERCLOUD_OVN_DEPLOY_SCRIPT" \
"file again."
exit 1
fi
fi
grep -q \$HOME/ovn-extras.yaml $OVERCLOUD_OVN_DEPLOY_SCRIPT
check1=$?
grep -q $HOME/ovn-extras.yaml $OVERCLOUD_OVN_DEPLOY_SCRIPT
check2=$?
if [[ $check1 -eq 1 && $check2 -eq 1 ]]; then
# specific case of --answers-file/--environment-directory
if [[ $answers_templates_check -eq 0 ]]; then
echo -e "\nWARNING!!! You are using an answers-file or a templates directory" \
" ( --answers-file/--environment-directory) " \
"\nYou MUST add ovn-extras.yaml to your new set of templates for OVN-based deploys." \
"\n e.g: add \" -e \$HOME/ovn-extras.yaml \" to the deploy command in $OVERCLOUD_OVN_DEPLOY_SCRIPT" \
"\nOnce OVN migration is finished, ovn-extras.yaml can then be safely removed from your OVN templates."
else
echo "ovn-extras.yaml file is missing in "\
"$OVERCLOUD_OVN_DEPLOY_SCRIPT. Please add it "\
"as \" -e \$HOME/ovn-extras.yaml\""
fi
exit 1
fi
# Check if backup is enabled
if [[ $CREATE_BACKUP = True ]]; then
# Check if backup server is reachable
ping -c4 $BACKUP_MIGRATION_IP
if [[ $? -eq 1 ]]; then
echo -e "It is not possible to reach the backup migration server IP" \
"($BACKUP_MIGRATION_IP). Make sure this IP is accessible before" \
"starting the migration." \
"Change this value by doing: export BACKUP_MIGRATION_IP=x.x.x.x"
fi
fi
}
get_host_ip() {
inventory_file=$1
host_name=$2
host_vars=$(ansible-inventory -i "$inventory_file" --host "$host_name" 2>/dev/null)
if [[ $? -eq 0 ]]; then
echo "$host_vars" | jq -r \.ansible_host
else
echo $host_name
fi
}
get_group_hosts() {
inventory_file=$1
group_name=$2
group_graph=$(ansible-inventory -i "$inventory_file" --graph "$group_name" 2>/dev/null)
if [[ $? -eq 0 ]]; then
echo "$group_graph" | sed -ne 's/^[ \t|]\+--\([a-z0-9\-]\+\)$/\1/p'
else
echo ""
fi
}
# Generate the ansible.cfg file
generate_ansible_config_file() {
cat > ansible.cfg <<-EOF
[defaults]
forks=50
become=True
callback_whitelist = profile_tasks
host_key_checking = False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = ./ansible_facts_cache
fact_caching_timeout = 0
log_path = $HOME/ovn_migration_ansible.log
#roles_path = roles:...
[ssh_connection]
control_path = %(directory)s/%%h-%%r
ssh_args = -o ControlMaster=auto -o ControlPersist=270s -o ServerAliveInterval=30 -o GSSAPIAuthentication=no
retries = 3
EOF
}
# Generate the inventory file for the ansible migration playbook.
# It uses the tripleo-ansible-inventory.yaml that was used during deployment as the source inventory.
generate_ansible_inventory_file() {
local dhcp_nodes
local inventory_file="$OOO_WORKDIR/$STACK_NAME/config-download/$STACK_NAME/tripleo-ansible-inventory.yaml"
echo "Generating the inventory file for ansible-playbook"
echo "[ovn-dbs]" > hosts_for_migration
ovn_central=True
# We want to run ovn_dbs where neutron_api is running
OVN_DBS=$(get_group_hosts "$inventory_file" neutron_api)
for node_name in $OVN_DBS; do
node_ip=$(get_host_ip "$inventory_file" $node_name)
node="$node_name ansible_host=$node_ip"
if [ "$ovn_central" == "True" ]; then
ovn_central=False
node="$node_name ansible_host=$node_ip ovn_central=true"
fi
echo $node ansible_ssh_user=$UNDERCLOUD_NODE_USER ansible_become=true >> hosts_for_migration
done
echo "" >> hosts_for_migration
echo "[ovn-controllers]" >> hosts_for_migration
# We want to run ovn-controller where OVS agent was running before the migration
OVN_CONTROLLERS=$(get_group_hosts "$inventory_file" neutron_ovs_agent; get_group_hosts "$inventory_file" neutron_ovs_dpdk_agent)
for node_name in $OVN_CONTROLLERS; do
node_ip=$(get_host_ip "$inventory_file" $node_name)
echo $node_name ansible_host=$node_ip ansible_ssh_user=$UNDERCLOUD_NODE_USER ansible_become=true ovn_controller=true >> hosts_for_migration
done
echo "" >> hosts_for_migration
echo "[dhcp]" >> hosts_for_migration
dhcp_nodes=$(get_group_hosts "$inventory_file" neutron_dhcp)
for node_name in $dhcp_nodes; do
node_ip=$(get_host_ip "$inventory_file" $node_name)
echo $node_name ansible_host=$node_ip ansible_ssh_user=$UNDERCLOUD_NODE_USER ansible_become=true >> hosts_for_migration
done
echo "" >> hosts_for_migration
cat >> hosts_for_migration << EOF
[overcloud-controllers:children]
dhcp
[overcloud:children]
ovn-controllers
ovn-dbs
EOF
add_group_vars() {
cat >> hosts_for_migration << EOF
[$1:vars]
remote_user=$UNDERCLOUD_NODE_USER
public_network_name=$PUBLIC_NETWORK_NAME
image_name=$IMAGE_NAME
flavor_name=$FLAVOR_NAME
working_dir=$OPT_WORKDIR
server_user_name=$SERVER_USER_NAME
validate_migration=$VALIDATE_MIGRATION
overcloud_ovn_deploy_script=$OVERCLOUD_OVN_DEPLOY_SCRIPT
overcloudrc=$OVERCLOUDRC_FILE
ovn_migration_backups=/var/lib/ovn-migration-backup
EOF
}
add_group_vars overcloud
add_group_vars overcloud-controllers
echo "***************************************"
cat hosts_for_migration
echo "***************************************"
echo "Generated the inventory file - hosts_for_migration"
echo "Please review the file before running the next command - reduce-dhcp-t1"
}
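For illustration, the hosts_for_migration inventory emitted by the function above looks roughly like the following (host names and IPs are hypothetical; the exact group vars come from the add_group_vars calls):

```ini
[ovn-dbs]
controller-0 ansible_host=192.168.24.10 ovn_central=true ansible_ssh_user=heat-admin ansible_become=true

[ovn-controllers]
compute-0 ansible_host=192.168.24.20 ansible_ssh_user=heat-admin ansible_become=true ovn_controller=true

[dhcp]
controller-0 ansible_host=192.168.24.10 ansible_ssh_user=heat-admin ansible_become=true

[overcloud-controllers:children]
dhcp

[overcloud:children]
ovn-controllers
ovn-dbs

[overcloud:vars]
remote_user=heat-admin
public_network_name=public
image_name=cirros
validate_migration=False
```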
# Check if source inventory exists
function check_source_inventory {
local inventory_file="$OOO_WORKDIR/$STACK_NAME/config-download/$STACK_NAME/tripleo-ansible-inventory.yaml"
if [ ! -f $inventory_file ]; then
echo "ERROR: Source Inventory File ${inventory_file} does not exist. Please provide the Stack Name and TripleO Workdir"
echo " via STACK_NAME and OOO_WORKDIR environment variables."
exit 1
fi
}
# Check if the public network exists, and if it has floating ips available
oc_check_public_network() {
[ "$VALIDATE_MIGRATION" != "True" ] && return 0
source $OVERCLOUDRC_FILE
openstack network show $PUBLIC_NETWORK_NAME 1>/dev/null || {
echo "ERROR: PUBLIC_NETWORK_NAME=${PUBLIC_NETWORK_NAME} can't be accessed by the"
echo " admin user, please fix that before continuing."
exit 1
}
ID=$(openstack floating ip create $PUBLIC_NETWORK_NAME -c id -f value) || {
echo "ERROR: PUBLIC_NETWORK_NAME=${PUBLIC_NETWORK_NAME} doesn't have available"
echo " floating ips. Make sure that your public network has at least one"
echo " floating ip available for the admin user."
exit 1
}
openstack floating ip delete $ID 2>/dev/null 1>/dev/null
return $?
}
# Check if the neutron networks' MTU has been updated to the geneve MTU size.
# We do not want to proceed if the MTUs are not updated.
oc_check_network_mtu() {
source $OVERCLOUDRC_FILE
neutron-ovn-migration-mtu verify mtu
return $?
}
reduce_dhcp_t1() {
# Run the ansible playbook to reduce the DHCP T1 parameter in
# dhcp_agent.ini in all the overcloud nodes where dhcp agent is running.
ansible-playbook -vv $OPT_WORKDIR/playbooks/reduce-dhcp-renewal-time.yml \
-i hosts_for_migration -e working_dir=$OPT_WORKDIR \
-e renewal_time=$DHCP_RENEWAL_TIME
rc=$?
return $rc
}
reduce_network_mtu () {
source $OVERCLOUDRC_FILE
oc_check_network_mtu
if [ "$?" != "0" ]; then
# Reduce the network mtu
neutron-ovn-migration-mtu update mtu
rc=$?
if [ "$rc" != "0" ]; then
echo "Reducing the network mtu's failed. Exiting."
exit 1
fi
fi
return $rc
}
start_migration() {
source $STACKRC_FILE
echo "Starting the Migration"
local inventory_file="$OOO_WORKDIR/$STACK_NAME/config-download/$STACK_NAME/tripleo-ansible-inventory.yaml"
if ! test -f $inventory_file; then
inventory_file=''
fi
ansible-playbook -vv $OPT_WORKDIR/playbooks/ovn-migration.yml \
-i hosts_for_migration -e working_dir=$OPT_WORKDIR \
-e public_network_name=$PUBLIC_NETWORK_NAME \
-e image_name=$IMAGE_NAME \
-e flavor_name=$FLAVOR_NAME \
-e undercloud_node_user=$UNDERCLOUD_NODE_USER \
-e overcloud_ovn_deploy_script=$OVERCLOUD_OVN_DEPLOY_SCRIPT \
-e server_user_name=$SERVER_USER_NAME \
-e overcloudrc=$OVERCLOUDRC_FILE \
-e stackrc=$STACKRC_FILE \
-e backup_migration_ip=$BACKUP_MIGRATION_IP \
-e backup_migration_ctl_plane_cidrs=$BACKUP_MIGRATION_CTL_PLANE_CIDRS \
-e create_backup=$CREATE_BACKUP \
-e ansible_inventory=$inventory_file \
-e validate_migration=$VALIDATE_MIGRATION $*
rc=$?
return $rc
}
print_usage() {
cat << EOF
Usage:
Before running this script, please refer to the migration guide for
complete details. This script needs to be run in 5 steps.
Step 1 -> ovn_migration.sh generate-inventory
Generates the inventory file
Step 2 -> ovn_migration.sh reduce-dhcp-t1 (deprecated name setup-mtu-t1)
Sets the DHCP renewal T1 to 30 seconds. After this step you will
need to wait at least 24h for the change to be propagated to all
VMs. This step is only necessary for VXLAN or GRE based tenant
networking.
Step 3 -> You need to wait at least 24h, based on the default neutron
configuration, for the DHCP T1 parameter to be propagated; please
refer to the documentation. WARNING: this is very important if you
are using VXLAN or GRE tenant networks.
Step 4 -> ovn_migration.sh reduce-mtu
Reduces the MTU of the neutron tenant networks. This
step is only necessary for VXLAN or GRE based tenant networking.
Step 5 -> ovn_migration.sh start-migration
Starts the migration to OVN.
EOF
}
command=$1
ret_val=0
case $command in
generate-inventory)
check_source_inventory
oc_check_public_network
generate_ansible_inventory_file
generate_ansible_config_file
ret_val=$?
;;
reduce-dhcp-t1 | setup-mtu-t1)
if [[ $command = 'setup-mtu-t1' ]]; then
echo -e "Warning: setup-mtu-t1 argument was renamed."\
"Use reduce-dhcp-t1 argument instead."
fi
check_for_necessary_files
reduce_dhcp_t1
ret_val=$?;;
reduce-mtu)
check_for_necessary_files
reduce_network_mtu
ret_val=$?;;
start-migration)
oc_check_public_network
check_for_necessary_files
shift
start_migration $*
ret_val=$?
;;
*)
print_usage;;
esac
exit $ret_val
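The reduce-mtu step above relies on the per-encapsulation overhead arithmetic described in the migration guide. A minimal sketch using the guide's 1500-byte example (illustrative values only, not code taken from the tool):

```shell
# Encapsulation overheads on an IPv4 underlay, matching the guide's examples.
phys_mtu=1500
vxlan_overhead=50    # outer IPv4 + UDP + VXLAN headers
gre_overhead=42      # outer IPv4 + GRE headers
geneve_overhead=58   # outer IPv4 + UDP + Geneve headers (typical options)
echo "vxlan=$((phys_mtu - vxlan_overhead))"    # vxlan=1450
echo "gre=$((phys_mtu - gre_overhead))"        # gre=1458
echo "geneve=$((phys_mtu - geneve_overhead))"  # geneve=1442
```

This is why a 1450-byte VXLAN tenant network must drop to 1442 bytes before the switch to Geneve.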


@ -1,121 +0,0 @@
# This is the playbook used by ovn_migration.sh.
#
# Backup the controllers to have a backup in case the
# migration fails, leaving the testbed in a broken state.
#
- name: Backup controllers pre-migration
hosts: localhost
roles:
- recovery-backup
tags:
- recovery-backup
#
# Pre migration and validation tasks will make sure that the initial cloud
# is functional, and will create resources which will be checked after
# migration.
#
- name: Pre migration and validation tasks
hosts: localhost
roles:
- pre-migration
tags:
- pre-migration
#
# This step is executed before migration, and will backup some config
# files related to containers before those get lost.
#
- name: Backup tripleo container config files on the nodes
hosts: ovn-controllers
roles:
- backup
tags:
- setup
- name: Stop ml2/ovs resources
hosts: ovn-controllers
roles:
- stop-agents
tags:
- migration
#
# TripleO / Director is executed to deploy ovn using "br-migration" for the
# dataplane, while br-int is left intact to avoid dataplane disruption.
#
- name: Set up OVN and configure it using tripleo
hosts: localhost
roles:
- tripleo-update
tags:
- setup
become: false
#
# Once everything is migrated prepare everything by syncing the neutron DB
# into the OVN NB database, and then switching the dataplane to br-int
# letting ovn-controller take control, afterwards any remaining neutron
# resources, namespaces or processes which are not needed anymore are
# cleaned up.
#
- name: Do the DB sync and dataplane switch
hosts: ovn-controllers, ovn-dbs
roles:
- migration
vars:
ovn_bridge: br-int
tags:
- migration
#
# Verify that the initial resources are still reachable, remove them,
# and afterwards create new resources and repeat the connectivity tests.
#
- name: Post migration
hosts: localhost
roles:
- delete-neutron-resources
- post-migration
tags:
- post-migration
#
# Final validation after tripleo update to br-int
#
- name: Final validation
hosts: localhost
vars:
validate_premigration_resources: false
roles:
- post-migration
tags:
- final-validation
#
# Announce that it's done and ready.
#
- hosts: localhost
tasks:
- name: Migration successful.
debug:
msg: Migration from ML2OVS to OVN is now complete.


@ -1,19 +0,0 @@
---
- hosts: overcloud-controllers
tasks:
- name: Update dhcp_agent configuration file option 'dhcp_renewal_time'
ini_file:
path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/dhcp_agent.ini
section=DEFAULT
backup=yes
option=dhcp_renewal_time
value={{ renewal_time }}
create=no
ignore_errors: yes
- block:
- name: Restart neutron dhcp agent
shell:
podman restart $(podman ps --filter "name=neutron_dhcp" --format {% raw %}"{{.ID}}"{% endraw %})
ignore_errors: yes


@ -1,4 +0,0 @@
- name: Clean computes
hosts: ovn-controllers
roles:
- revert


@ -1,19 +0,0 @@
# The following tasks ensure that we have backup data which is
# necessary later for cleanup (like l3/dhcp/metadata agent definitions)
- name: "Ensure the ovn backup directory"
file: path="{{ ovn_migration_backups }}" state=directory
- name: "Save the tripleo container definitions"
shell: |
# only copy them the first time, otherwise, on a later run when
# it has been already migrated to OVN we would miss the data
if [ ! -d {{ ovn_migration_backups }}/tripleo-config ]; then
cp -rfp /var/lib/tripleo-config {{ ovn_migration_backups }}
echo "Backed up"
fi
register: command_result
changed_when: "'Backed up' in command_result.stdout"
# TODO(majopela): Include steps for backing up the mysql database on the
# controllers and the undercloud before continuing


@ -1,3 +0,0 @@
---
ovn_migration_temp_dir_del: "{{ working_dir }}/delete_neutron_resources"


@ -1,21 +0,0 @@
---
- name: Delete temp file directory if present
file:
state: absent
path: "{{ ovn_migration_temp_dir_del }}"
- name: Create temp file directory if not present
file:
state: directory
path: "{{ ovn_migration_temp_dir_del }}"
- name: Generate neutron resources cleanup script
template:
src: "delete-neutron-resources.sh.j2"
dest: "{{ ovn_migration_temp_dir_del }}/delete-neutron-resources.sh"
mode: 0744
- name: Deleting the neutron agents
shell: >
{{ ovn_migration_temp_dir_del }}/delete-neutron-resources.sh 2>&1 >
{{ ovn_migration_temp_dir_del }}/delete-neutron-resources.sh.log


@ -1,32 +0,0 @@
#!/bin/bash
set -x
source {{ overcloudrc }}
# Delete neutron agents that are not alive
for i in `openstack network agent list | grep neutron- | grep -v ':-)' | awk {'print $2'}`
do
openstack network agent delete $i
done
delete_network_ports() {
net_id=$1
for p in `openstack port list --network $net_id | grep -v ID | awk '{print $2}'`
do
openstack port delete $p
done
}
# Delete HA networks
for i in `openstack network list | grep "HA network tenant" | awk '{print $2}'`
do
delete_network_ports $i
openstack network delete $i
done
# Delete DVR gateway ports
openstack port delete $(openstack port list --device-owner "network:floatingip_agent_gateway" -c id -f value)
exit 0


@ -1,5 +0,0 @@
---
tunnel_bridge: "br-tun"
ovn_bridge: "br-int"
ovn_db_sync_container: "neutron-ovn-db-sync"


@ -1,15 +0,0 @@
---
- name: Generate OVN activation script
template:
src: "activate-ovn.sh.j2"
dest: "/tmp/activate-ovn.sh"
mode: 0644
- name: Run OVN activation script
shell: >
sh /tmp/activate-ovn.sh 2>&1 > /tmp/activate-ovn.sh.log
- name: Delete OVN activate script
file:
state: absent
path: /tmp/activate-ovn.sh


@ -1,75 +0,0 @@
---
- name: Cleanup neutron router and dhcp interfaces
shell: |
ovs-vsctl list interface | awk '/name[ ]*: qr-|ha-|qg-|rfp-|sg-|fg-/ { print $3 }' | xargs -n1 ovs-vsctl del-port
# dhcp tap ports cannot be easily distinguished from ovsfw ports, so we
# list them from within the qdhcp namespaces
for netns in `ip netns | awk '{ print $1 }' | grep qdhcp-`; do
for dhcp_port in `ip netns exec $netns ip -o link show | awk -F': ' '{print $2}' | grep tap`; do
ovs-vsctl del-port $dhcp_port
done
done
register: router_dhcp_error
ignore_errors: True
- name: Cleanup neutron router and dhcp interfaces finished
debug:
msg: "WARNING: error while cleaning OVS interfaces: {{ router_dhcp_error.msg }}"
when:
- router_dhcp_error.changed
- router_dhcp_error.rc != 0
- name: Check if neutron trunk subports need cleanup
shell: |
ovs-vsctl list interface | awk '/name[ ]*: sp[it]-/ { print $3 }' | grep 'spi-\|spt-'
register: do_cleanup
ignore_errors: True
- name: Cleanup neutron trunk subports
when: do_cleanup.rc == 0
shell: |
ovs-vsctl list interface | awk '/name[ ]*: sp[it]-/ { print $3 }' | xargs -n1 ovs-vsctl del-port
register: trunk_error
ignore_errors: True
- name: Cleanup trunk ports finished
debug:
msg: "WARNING: error while cleaning OVS interfaces: {{ trunk_error.msg }}"
when:
- trunk_error.changed
- trunk_error.rc != 0
- name: Clean neutron datapath security groups from iptables
shell: |
{{ iptables_exec }}-save > /tmp/iptables-before-cleanup
cat /tmp/iptables-before-cleanup | grep -v neutron-openvswi | \
grep -v neutron-filter > /tmp/iptables-after-cleanup
if ! cmp /tmp/iptables-before-cleanup /tmp/iptables-after-cleanup
then
cat /tmp/iptables-after-cleanup | {{ iptables_exec }}-restore
echo "Security groups cleaned"
fi
register: out
with_items:
- iptables
- ip6tables
loop_control:
loop_var: iptables_exec
changed_when: "'Security groups cleaned' in out.stdout"
- name: Cleanup neutron datapath resources
become: yes
shell: |
for container in $(podman ps -a --format {% raw %}"{{.ID}}"{% endraw %} --filter "name=(neutron-(dibbler|dnsmasq|haproxy|keepalived)-.*|dhcp_dnsmasq|dhcp_haproxy|l3_keepalived|l3_haproxy|l3_dibbler|l3_radvd)"); do
echo "Cleaning up side-car container $container"
podman stop $container
podman rm -f $container
done
# cleanup Neutron ml2/ovs namespaces
for netns in $(ip netns | awk '/^(snat|fip|qdhcp|qrouter)-/{ print $1 }'); do
echo "Cleaning up namespace $netns"
ip netns delete $netns
done


@ -1,15 +0,0 @@
# we use this instead of a big shell entry because some versions of
# ansible-playbook choke on our script syntax + yaml parsing
- name: Generate script to clone br-int and provider bridges
template:
src: "clone-br-int.sh.j2"
dest: "/tmp/clone-br-int.sh"
mode: 0644
- name: Run clone script for dataplane
shell: sh /tmp/clone-br-int.sh
- name: Delete clone script
file:
state: absent
path: /tmp/clone-br-int.sh

View File

@ -1,11 +0,0 @@
---
- include_tasks: clone-dataplane.yml
- include_tasks: sync-dbs.yml
- include_tasks: activate-ovn.yml
- include_tasks: cleanup-dataplane.yml
when: ovn_controller is defined
tags:
- cleanup-dataplane


@ -1,34 +0,0 @@
---
- name: stop neutron_api containers
ansible.builtin.systemd:
name: tripleo_neutron_api
state: stopped
- name: get neutron_server image url
command: podman ps -a --filter "name=neutron_api" --format {% raw %}"{{.Image}}"{% endraw %}
register: neutron_server_image
- name: sync neutron db with OVN db
command: podman run --name {{ ovn_db_sync_container }}
-v /var/log/containers/neutron:/var/log/neutron:Z
-v /var/lib/config-data/puppet-generated/neutron/etc/neutron:/etc/neutron:Z
--privileged=True --network host --user root
{{ neutron_server_image.stdout }}
neutron-ovn-db-sync-util --config-file /etc/neutron/neutron.conf
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini
--log-file /var/log/neutron/neutron-db-sync.log
--ovn-neutron_sync_mode migrate
--debug
when: ovn_central is defined
- name: remove db-sync container
command: podman rm -f {{ ovn_db_sync_container }}
when: ovn_central is defined
- name: start neutron_api containers
ansible.builtin.systemd:
name: tripleo_neutron_api
state: started
- name: Pause and let ovn-controllers settle before doing the final activation (5 minutes)
pause: minutes=5


@ -1,42 +0,0 @@
#!/bin/bash
set -x
podman stop ovn_controller
# restore bridge mappings
ovn_orig_bm=$(ovs-vsctl get open . external_ids:ovn-bridge-mappings-back)
ovs-vsctl set open . external_ids:ovn-bridge-mappings="$ovn_orig_bm"
ovs-vsctl remove open . external_ids ovn-bridge-mappings-back
ovn_bms=$(echo $ovn_orig_bm | sed 's/\"//g' | sed 's/,/ /g')
# Reset OpenFlow protocol version before ovn-controller takes over
ovs-vsctl set Bridge {{ ovn_bridge }} protocols=[]
for bm in $ovn_bms; do
parts=($(echo $bm | sed 's/:/ /g'))
bridge=${parts[1]}
ovs-vsctl set-fail-mode $bridge standalone
ovs-vsctl set Bridge $bridge protocols=[]
ovs-vsctl del-controller $bridge
done
# Delete controller from integration bridge
ovs-vsctl del-controller {{ ovn_bridge }}
# Activate ovn-controller by configuring integration bridge
ovs-vsctl set open . external_ids:ovn-bridge={{ ovn_bridge }}
podman start ovn_controller
# Delete ovs bridges - br-tun and br-migration
ovs-vsctl --if-exists del-br {{ tunnel_bridge }}
ovs-vsctl --if-exists del-br br-migration
ovs-vsctl --if-exists del-port br-int patch-tun
for br in $(ovs-vsctl list-br | egrep 'br-mig-[0-9]+'); do
ovs-vsctl --if-exists del-br $br
done
exit 0


@ -1,77 +0,0 @@
# The purpose of this script is to make a clone of the br-int content
# into br-migration, and to create fake provider bridges.
# This way, while we synchronize the neutron database into the OVN
# northbound DB (which then translates into southbound content), all
# the ovn-controllers around are able to create the SBDB content
# safely, without disrupting the existing neutron ml2/ovs dataplane.
OVN_MIG_PREFIX=br-mig
OVN_BR_MIGRATION=${OVN_BR_MIGRATION:-br-migration}
function recreate_bridge_mappings() {
function new_bridge_mappings() {
orig_bms=$1
if echo $orig_bms | grep $OVN_MIG_PREFIX; then
echo $orig_bms
return
fi
ovn_bms=$(echo $1 | sed 's/\"//g' | sed 's/,/ /g')
final_bm=""
br_n=0
for bm in $ovn_bms; do
parts=($(echo $bm | sed 's/:/ /g'))
physnet=${parts[0]}
bridge="${OVN_MIG_PREFIX}-${br_n}"
mapping="${physnet}:${bridge}"
if [[ -z "$final_bm" ]]; then
final_bm=$mapping
else
final_bm="${final_bm},${mapping}"
fi
# ensure bridge
ovs-vsctl --may-exist add-br $bridge
br_n=$(( br_n + 1 ))
done
echo $final_bm
}
ovn_orig_bm=$(ovs-vsctl get open . external_ids:ovn-bridge-mappings)
# backup the original mapping if we didn't already do
ovs-vsctl get open . external_ids:ovn-bridge-mappings-back || \
ovs-vsctl set open . external_ids:ovn-bridge-mappings-back="$ovn_orig_bm"
new_mapping=$(new_bridge_mappings $ovn_orig_bm)
ovs-vsctl set open . external_ids:ovn-bridge-mappings="$new_mapping"
}
function copy_interfaces_to_br_migration() {
    interfaces=$(ovs-vsctl list-ifaces br-int | grep -E -v 'qr-|ha-|qg-|rfp-|sg-|fg-')
    for interface in $interfaces; do
        if [[ "$interface" == "br-int" ]]; then
            continue
        fi
        ifmac=$(ovs-vsctl get Interface $interface external-ids:attached-mac)
        if [ $? -ne 0 ]; then
            echo "Can't get port details for $interface"
            continue
        fi
        ifstatus=$(ovs-vsctl get Interface $interface external-ids:iface-status)
        ifid=$(ovs-vsctl get Interface $interface external-ids:iface-id)
        ifname=x$interface
        ovs-vsctl -- --may-exist add-port $OVN_BR_MIGRATION $ifname \
            -- set Interface $ifname type=internal \
            -- set Interface $ifname external-ids:iface-status=$ifstatus \
            -- set Interface $ifname external-ids:attached-mac=$ifmac \
            -- set Interface $ifname external-ids:iface-id=$ifid
        echo cloned port $interface from br-int as $ifname on $OVN_BR_MIGRATION
    done
}
recreate_bridge_mappings
podman restart ovn_controller
copy_interfaces_to_br_migration
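For reference, the string transform performed by new_bridge_mappings() can be exercised on its own. The sketch below is a hypothetical standalone re-implementation (no ovs-vsctl bridge creation, and grep's matched-line output suppressed with -q so the captured result stays clean); it is not part of the original script:

```shell
#!/bin/bash
# Standalone demo of the bridge-mapping rename: each "physnet:bridge"
# pair is renumbered onto a br-mig-N migration bridge.
OVN_MIG_PREFIX=br-mig

new_bridge_mappings() {
    local orig_bms=$1
    # If the mapping was already migrated, return it unchanged.
    if echo "$orig_bms" | grep -q "$OVN_MIG_PREFIX"; then
        echo "$orig_bms"
        return
    fi
    local ovn_bms final_bm="" br_n=0 bm physnet mapping
    ovn_bms=$(echo "$orig_bms" | sed 's/"//g' | sed 's/,/ /g')
    for bm in $ovn_bms; do
        physnet=${bm%%:*}
        mapping="${physnet}:${OVN_MIG_PREFIX}-${br_n}"
        if [ -z "$final_bm" ]; then
            final_bm=$mapping
        else
            final_bm="${final_bm},${mapping}"
        fi
        br_n=$(( br_n + 1 ))
    done
    echo "$final_bm"
}

new_bridge_mappings '"datacentre:br-ex,tenant:br-vlan"'
# prints: datacentre:br-mig-0,tenant:br-mig-1
```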


@ -1,4 +0,0 @@
---
ovn_migration_temp_dir: "{{ working_dir }}/post_migration_resources"
validate_premigration_resources: true


@ -1,59 +0,0 @@
---
#
# Validate pre-migration resources and then clean those up
#
- name: Validate pre migration resources after migration
  include_role:
    name: resources/validate
  vars:
    restart_server: true
  when:
    - validate_migration|bool
    - validate_premigration_resources

- name: Delete the pre migration resources
  include_role:
    name: resources/cleanup
  tags:
    - post-migration
  when:
    - validate_migration|bool
    - validate_premigration_resources

#
# Create post-migration resources, validate, and then clean up
#
# Delete any existing resources to make sure we don't conflict on a second run
- name: Delete any post migration resources (preventive)
  include_role:
    name: resources/cleanup
  vars:
    resource_suffix: "post"
    silent_cleanup: true
  when: validate_migration|bool

- name: Create post-migration resources
  include_role:
    name: resources/create
  vars:
    resource_suffix: "post"
  when: validate_migration|bool

- name: Validate post migration resources
  include_role:
    name: resources/validate
  vars:
    resource_suffix: "post"
  when: validate_migration|bool

- name: Delete the post migration resources
  include_role:
    name: resources/cleanup
  tags:
    - post-migration
  vars:
    resource_suffix: "post"
  when: validate_migration|bool


@ -1,17 +0,0 @@
# Delete any existing resources to make sure we don't conflict on a second run
- name: Delete any existing pre migration resources (preventive)
  include_role:
    name: resources/cleanup
  vars:
    silent_cleanup: true
  when: validate_migration|bool

- name: Create the pre migration resource stack
  include_role:
    name: resources/create
  when: validate_migration|bool

- name: Validate the pre migration resources
  include_role:
    name: resources/validate
  when: validate_migration|bool


@ -1,12 +0,0 @@
---
# Name of the host group where the NFS installation will take place.
# If the NFS server is the undercloud (and there is only one) this will
# not be a problem, but if multiple servers exist in the server_name group
# the NFS may be installed on every server, even though the backup itself
# is only stored on the backup_ip.
#
# This can be solved by manually creating a new tripleo-inventory that
# specifies a [BackupNode] section with the NFS server info.
revert_preparation_server_name: "Undercloud"
backup_and_recover_temp_folder: /tmp/backup-recover-temp


@ -1,69 +0,0 @@
---
- name: Create controller's backup
  block:
    - name: Create temp folder related to backup
      file:
        state: directory
        path: "{{ backup_and_recover_temp_folder }}"

    # Using this task on OSP17
    - name: Copy tripleo-inventory
      copy:
        src: "{{ ansible_inventory }}"
        dest: "{{ backup_and_recover_temp_folder }}/tripleo-inventory.yaml"
      when:
        - create_backup|bool
        - ansible_inventory is defined
        - ansible_inventory != ""

    # Using this task in OSP16.x
    - name: Generate tripleo inventory
      shell: |
        source {{ stackrc }} &&
        tripleo-ansible-inventory \
            --ansible_ssh_user {{ undercloud_node_user }} \
            --static-yaml-inventory {{ backup_and_recover_temp_folder }}/tripleo-inventory.yaml
      when:
        - create_backup|bool
        - ansible_inventory is not defined or ansible_inventory == ""

    - name: Setup NFS on the backup node using IP {{ backup_migration_ip }}
      shell: |
        source {{ stackrc }} &&
        openstack overcloud backup \
            --inventory {{ backup_and_recover_temp_folder }}/tripleo-inventory.yaml \
            --setup-nfs \
            --extra-vars '{
              "tripleo_backup_and_restore_server": "{{ backup_migration_ip }}",
              "tripleo_backup_and_restore_clients_nets": {{ backup_migration_ctl_plane_cidrs.split(',') }},
              "nfs_server_group_name": "{{ revert_preparation_server_name }}"
            }'

    - name: Setup REAR on the controllers
      shell: |
        source {{ stackrc }} &&
        openstack overcloud backup \
            --inventory {{ backup_and_recover_temp_folder }}/tripleo-inventory.yaml \
            --setup-rear \
            --extra-vars '{
              "tripleo_backup_and_restore_server": "{{ backup_migration_ip }}"
            }'

    - name: Backup the controllers
      shell: |
        source {{ stackrc }} &&
        openstack overcloud backup \
            --inventory {{ backup_and_recover_temp_folder }}/tripleo-inventory.yaml

    # Ensure that the API responds after the controller backups
    - name: Ensure that the OSP API is working
      shell: >
        source {{ overcloudrc }} && openstack flavor list
      retries: 20
      register: api_rc
      delay: 5
      ignore_errors: yes
      until: api_rc.rc == 0
  when: create_backup|bool


@ -1,6 +0,0 @@
---
cleanup_resource_script: cleanup-resources.sh.j2
resource_suffix: "pre"
ovn_migration_temp_dir: "{{ working_dir }}/{{ resource_suffix }}_migration_resources"
silent_cleanup: false


@ -1,26 +0,0 @@
---
- name: Create temp file directory if not present
  file:
    state: directory
    path: "{{ ovn_migration_temp_dir }}"

- name: Generate cleanup script
  template:
    src: "{{ cleanup_resource_script }}"
    dest: "{{ ovn_migration_temp_dir }}/cleanup-resources.sh"
    mode: 0744

- name: Cleaning up the migration resources (verbose)
  shell: >
    set -o pipefail &&
    {{ ovn_migration_temp_dir }}/cleanup-resources.sh 2>&1 | tee
    {{ ovn_migration_temp_dir }}/cleanup-resources.sh.log
  when: not silent_cleanup

- name: Cleaning up the migration resources (silent)
  shell: >
    {{ ovn_migration_temp_dir }}/cleanup-resources.sh >/dev/null 2>&1
  when: silent_cleanup


@ -1,32 +0,0 @@
#!/bin/bash
set -x
source {{ overcloudrc }}
openstack server delete ovn-migration-server-{{ resource_suffix }}
openstack port delete ovn-migration-server-port-{{ resource_suffix }}
server_ip=$(cat {{ ovn_migration_temp_dir }}/server_public_ip)
openstack floating ip delete $server_ip
openstack router remove subnet ovn-migration-router-{{ resource_suffix }} ovn-migration-subnet-{{ resource_suffix }}
openstack router unset --external-gateway ovn-migration-router-{{ resource_suffix }}
openstack router delete ovn-migration-router-{{ resource_suffix }}
openstack network delete ovn-migration-net-{{ resource_suffix }}
openstack security group delete ovn-migration-sg-{{ resource_suffix }}
openstack flavor delete ovn-migration-{{ resource_suffix }}
openstack image delete cirros-ovn-migration-{{ resource_suffix }}
openstack keypair delete ovn-migration-{{ resource_suffix }}
echo "Resource cleanup done"
exit 0


@ -1,6 +0,0 @@
---
create_migration_resource_script: create-resources.sh.j2
server_user_name: "cirros"
resource_suffix: "pre"
ovn_migration_temp_dir: "{{ working_dir }}/{{ resource_suffix }}_migration_resources"


@ -1,22 +0,0 @@
---
- name: Delete temp file directory if present
  file:
    state: absent
    path: "{{ ovn_migration_temp_dir }}"

- name: Create temp file directory if not present
  file:
    state: directory
    path: "{{ ovn_migration_temp_dir }}"

- name: Generate resource creation script
  template:
    src: "{{ create_migration_resource_script }}"
    dest: "{{ ovn_migration_temp_dir }}/create-migration-resources.sh"
    mode: 0744

- name: Creating migration resources
  shell: >
    set -o pipefail &&
    {{ ovn_migration_temp_dir }}/create-migration-resources.sh 2>&1 | tee
    {{ ovn_migration_temp_dir }}/create-migration-resources.sh.log


@ -1,144 +0,0 @@
#!/bin/bash
set -x
source {{ overcloudrc }}
server_user_name={{ server_user_name }}
flavor_name={{ flavor_name }}
image_name={{ image_name }}
openstack image show $image_name
if [ "$?" != "0" ]
then
if [ ! -f cirros-0.5.2-x86_64-disk.img ]
then
curl -Lo cirros-0.5.2-x86_64-disk.img https://github.com/cirros-dev/cirros/releases/download/0.5.2/cirros-0.5.2-x86_64-disk.img
fi
openstack image create "cirros-ovn-migration-{{ resource_suffix }}" --file cirros-0.5.2-x86_64-disk.img \
--disk-format qcow2 --container-format bare --public
image_name="cirros-ovn-migration-{{ resource_suffix }}"
fi
openstack flavor show $flavor_name
if [ "$?" != "0" ]
then
openstack flavor create ovn-migration-{{ resource_suffix }} --ram 1024 --disk 1 --vcpus 1
flavor_name="ovn-migration-{{ resource_suffix }}"
fi
# Cirros doesn't support RSA keys (the default on RHEL 9), so ECDSA is used
ssh-keygen -t ecdsa -f {{ ovn_migration_temp_dir }}/ovn_migration_ssh_key -q -N ""
openstack keypair create ovn-migration-{{ resource_suffix }} --public-key {{ ovn_migration_temp_dir }}/ovn_migration_ssh_key.pub
openstack security group create ovn-migration-sg-{{ resource_suffix }}
openstack security group rule create --ingress --protocol icmp ovn-migration-sg-{{ resource_suffix }}
openstack security group rule create --ingress --protocol tcp --dst-port 22 ovn-migration-sg-{{ resource_suffix }}
openstack network create ovn-migration-net-{{ resource_suffix }}
openstack network set --mtu 1442 ovn-migration-net-{{ resource_suffix }}
openstack subnet create --network ovn-migration-net-{{ resource_suffix }} --subnet-range 172.168.199.0/24 ovn-migration-subnet-{{ resource_suffix }}
openstack router create ovn-migration-router-{{ resource_suffix }}
openstack router set --external-gateway {{ public_network_name }} ovn-migration-router-{{ resource_suffix }}
openstack router add subnet ovn-migration-router-{{ resource_suffix }} ovn-migration-subnet-{{ resource_suffix }}
openstack port create --network ovn-migration-net-{{ resource_suffix }} --security-group ovn-migration-sg-{{ resource_suffix }} ovn-migration-server-port-{{ resource_suffix }}
openstack server create --flavor $flavor_name --image $image_name \
--key-name ovn-migration-{{ resource_suffix }} \
--nic port-id=ovn-migration-server-port-{{ resource_suffix }} ovn-migration-server-{{ resource_suffix }}
server_ip=$(openstack floating ip create --port ovn-migration-server-port-{{ resource_suffix }} \
    {{ public_network_name }} -c floating_ip_address -f value)
echo $server_ip > {{ ovn_migration_temp_dir }}/server_public_ip
chmod 0600 {{ ovn_migration_temp_dir }}/ovn_migration_ssh_key
# Wait till the port is ACTIVE
echo "Wait till the port is ACTIVE"
port_status=$(openstack port show ovn-migration-server-port-{{ resource_suffix }} -c status -f value)
num_attempts=0
while [ "$port_status" != "ACTIVE" ]
do
num_attempts=$((num_attempts+1))
sleep 5
port_status=$(openstack port show ovn-migration-server-port-{{ resource_suffix }} -c status -f value)
echo "Port status = $port_status"
if [ $num_attempts -gt 24 ]
then
echo "Port is not up even after 2 minutes. Something is wrong"
# printing port information for debugging
openstack port show ovn-migration-server-port-{{ resource_suffix }}
exit 1
fi
done
echo "VM is up and the port is ACTIVE"
# Wait till the VM allows ssh connections
vm_status="down"
num_attempts=0
while [ "$vm_status" != "up" ]
do
num_attempts=$((num_attempts+1))
sleep 5
openstack console log show ovn-migration-server-{{ resource_suffix }} | grep "login:"
if [ "$?" == "0" ]
then
vm_status="up"
else
if [ $num_attempts -gt 60 ]
then
echo "Port is not up even after 5 minutes. Something is wrong."
# Even though something seems wrong, lets try and ping.
break
fi
fi
done
num_attempts=0
vm_reachable="false"
while [ "$vm_reachable" != "true" ]
do
num_attempts=$((num_attempts+1))
sleep 1
ping -c 3 $server_ip
if [ "$?" == "0" ]
then
vm_reachable="true"
else
if [ $num_attempts -gt 60 ]
then
echo "VM is not reachable. Something is wrong."
# printing server information for debugging
server_id=$(openstack server list -f value | grep $server_ip | awk '{print $1}')
openstack console log show $server_id
exit 1
fi
fi
done
ssh -i {{ ovn_migration_temp_dir }}/ovn_migration_ssh_key -o StrictHostKeyChecking=no \
-o UserKnownHostsFile=/dev/null $server_user_name@$server_ip date
rc=$?
echo "Done with the resource creation : exiting with $rc"
# printing server information for debugging
[ "$rc" != "0" ] && openstack console log show $(openstack server list -f value | grep $server_ip | awk '{print $1}')
exit $rc
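The script above repeats the same poll-sleep-give-up pattern three times (port status, console log, ping). A generic retry helper could express it once; the sketch below is hypothetical and not part of the original script:

```shell
#!/bin/bash
# wait_until MAX_ATTEMPTS INTERVAL CMD [ARGS...]
# Run CMD until it succeeds, sleeping INTERVAL seconds between tries;
# give up (non-zero return) after MAX_ATTEMPTS failures.
wait_until() {
    local max_attempts=$1 interval=$2
    shift 2
    local n=0
    until "$@"; do
        n=$(( n + 1 ))
        if [ "$n" -ge "$max_attempts" ]; then
            echo "gave up after $n attempts: $*" >&2
            return 1
        fi
        sleep "$interval"
    done
}

# Example: wait for a marker file created asynchronously.
marker=$(mktemp -u)
( sleep 1; touch "$marker" ) &
if wait_until 10 1 test -f "$marker"; then
    echo "resource became ready"
fi
rm -f "$marker"
```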


@ -1,5 +0,0 @@
validate_resources_script: validate-resources.sh.j2
server_user_name: "cirros"
restart_server: false
resource_suffix: "pre"
ovn_migration_temp_dir: "{{ working_dir }}/{{ resource_suffix }}_migration_resources"


@ -1,12 +0,0 @@
- name: Generate resource validation script
template:
src: "{{ validate_resources_script }}"
dest: "{{ ovn_migration_temp_dir }}/validate-resources.sh"
mode: 0744
- name: Run the validation script
shell: >
set -o pipefail &&
{{ ovn_migration_temp_dir }}/validate-resources.sh 2>&1 | tee
{{ ovn_migration_temp_dir }}/validate-resources.sh.log


@ -1,19 +0,0 @@
#!/bin/bash
set -x
set -e
source {{ overcloudrc }}
# This script validates the resources created by the resources/create role.
# It pings the floating IP of the server and then connects to it over SSH.
server_ip=$(cat {{ ovn_migration_temp_dir }}/server_public_ip)
echo "Running ping test with -c 3 to the server ip - $server_ip"
ping -c 3 $server_ip
ssh -i {{ ovn_migration_temp_dir }}/ovn_migration_ssh_key -o StrictHostKeyChecking=no \
-o UserKnownHostsFile=/dev/null {{ server_user_name }}@$server_ip date
echo "Done with the validation"


@ -1,29 +0,0 @@
---
- name: Stop ovn containers
  become: yes
  shell: |
    for agent in $(podman ps -a --format {% raw %}"{{.ID}}"{% endraw %} --filter "name=(ovn_.*|ovnmeta)"); do
        echo "Cleaning up agent $agent"
        podman rm -f $agent
    done

- name: Clean OVN netns
  become: yes
  shell: |
    for netns in $(ip netns ls | grep ovnmeta | cut -d' ' -f1); do
        echo "delete netns $netns"
        ip netns del $netns
    done

- name: Delete OVN ports
  become: yes
  shell: |
    for port in $(ovs-vsctl list interface | grep ^name | grep 'ovn-\|patch-provnet\|patch-br-int-to' | cut -d':' -f2); do
        echo "Removing port $port"
        ovs-vsctl del-port $port
    done

- name: Revert cleanup completed
  debug:
    msg: Revert cleanup done, please run overcloud deploy with the OVS configuration.


@ -1,3 +0,0 @@
---
# defaults file for stop-agents
systemd_service_file_dir: /etc/systemd/system


@ -1,25 +0,0 @@
---
- name: "stop and disable {{ service.name }} services and healthchecks"
  systemd:
    name: "{{ item }}"
    state: stopped
    enabled: no
  become: yes
  when:
    - ansible_facts.services[item] is defined
    - ansible_facts.services[item]["state"] == "running"
  loop:
    - "{{ service.healthcheck_timer_file }}"
    - "{{ service.healthcheck_service_file }}"
    - "{{ service.service_file }}"

# If the file is already deleted this won't fail
- name: delete ml2 ovs systemd service files
  file:
    path: "{{ systemd_service_file_dir }}/{{ item }}"
    state: absent
  loop:
    - "{{ service.service_file }}"
    - "{{ service.healthcheck_service_file }}"
    - "{{ service.healthcheck_timer_file }}"
  become: yes


@ -1,22 +0,0 @@
---
- name: populate service facts
  service_facts:

- name: disable ml2 ovs services and healthchecks
  include_tasks: cleanup.yml
  loop: "{{ ml2_ovs_services }}"
  loop_control:
    loop_var: service
  when: ansible_facts.services[service.service_file] is defined

- name: Reload systemctl daemons
  systemd:
    daemon_reload: yes

- name: remove containers
  become: yes
  shell: |
    for agent in $(podman ps -a --format {% raw %}"{{.ID}}"{% endraw %} --filter "name=(neutron_.*_agent|neutron_dhcp)"); do
        echo "Cleaning up agent $agent"
        podman rm -f $agent
    done


@ -1,19 +0,0 @@
---
# vars file for stop-agents
ml2_ovs_services:
  - name: dhcp
    service_file: tripleo_neutron_dhcp.service
    healthcheck_service_file: tripleo_neutron_dhcp_healthcheck.service
    healthcheck_timer_file: tripleo_neutron_dhcp_healthcheck.timer
  - name: l3
    service_file: tripleo_neutron_l3_agent.service
    healthcheck_service_file: tripleo_neutron_l3_agent_healthcheck.service
    healthcheck_timer_file: tripleo_neutron_l3_agent_healthcheck.timer
  - name: metadata
    service_file: tripleo_neutron_metadata_agent.service
    healthcheck_service_file: tripleo_neutron_metadata_agent_healthcheck.service
    healthcheck_timer_file: tripleo_neutron_metadata_agent_healthcheck.timer
  - name: ovs
    service_file: tripleo_neutron_ovs_agent.service
    healthcheck_service_file: tripleo_neutron_ovs_agent_healthcheck.service
    healthcheck_timer_file: tripleo_neutron_ovs_agent_healthcheck.timer


@ -1,4 +0,0 @@
---
generate_ovn_extras: generate-ovn-extras.sh.j2
ovn_migration_temp_dir: "{{ working_dir }}/temp_files"


@ -1,24 +0,0 @@
---
- name: Create temp file directory if not present
  file:
    state: directory
    path: "{{ ovn_migration_temp_dir }}"

- name: Create ovn-extras generation script
  template:
    src: "{{ generate_ovn_extras }}"
    dest: "{{ ovn_migration_temp_dir }}/generate-ovn-extras.sh"
    mode: 0755

- name: Generate ovn-extras environment file
  shell: >
    set -o pipefail &&
    {{ ovn_migration_temp_dir }}/generate-ovn-extras.sh
  args:
    creates: "~/ovn-extras.yaml"

- name: Updating the overcloud stack with OVN services
  shell: >
    set -o pipefail &&
    {{ overcloud_ovn_deploy_script }} > {{ overcloud_ovn_deploy_script }}.log 2>&1


@ -1,8 +0,0 @@
#!/bin/bash
set -x
cat > $HOME/ovn-extras.yaml << EOF
parameter_defaults:
OVNIntegrationBridge: "br-migration"
ForceNeutronDriverUpdate: true
EOF