Migration from ML2/OVS to ML2/OVN
=================================

Proof-of-concept Ansible script for migrating an OpenStack deployment from ML2/OVS to ML2/OVN.
If you have a TripleO ML2/OVS deployment, please see the ``tripleo_environment`` folder instead.

Requirements
------------

- Ansible 2.2 or later.
- ML2/OVS must be using the OVS firewall driver.
Create an Ansible inventory with the expected set of groups and variables, as indicated by the ``hosts.sample`` file.
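As a purely hypothetical illustration (the authoritative group and variable names are whatever ``hosts.sample`` defines; the group names and hostnames below are invented), an inventory might look like:

```ini
; Hypothetical sketch -- consult hosts.sample for the actual
; groups and variables expected by migrate-to-ovn.yml.
[controller]
ctrl-0.example.com
ctrl-1.example.com
ctrl-2.example.com

[compute]
compute-0.example.com
compute-1.example.com

[all:vars]
ansible_user=heat-admin
```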
Run the playbook::

    $ ansible-playbook migrate-to-ovn.yml -i hosts
Notes
-----

- Tested on an RDO cloud based on Ocata, running CentOS 7.3.
- The cloud had 3 controller nodes and 6 compute nodes.
- Observed network downtime was 10 seconds.
- The ``--forks 10`` option was used with ansible-playbook so that commands
  could be run across the entire environment in parallel.
- If migrating an ML2/OVS deployment that uses VXLAN tenant networks to an OVN deployment using Geneve for tenant networks, there is an unresolved issue around MTU. The VXLAN overhead is 30 bytes, while OVN with Geneve has an overhead of 38 bytes. The tenant networks' MTU must be adjusted for OVN, and all VMs must then receive the updated MTU value through DHCP before the migration can take place. For testing purposes, we simply patched the Neutron code to report a VXLAN overhead of 38 bytes instead of 30, side-stepping the issue at migration time.
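The MTU arithmetic in the note above can be sketched as follows. The 1500-byte physical MTU is an assumption for illustration; the 30- and 38-byte overheads are the figures quoted above:

```shell
# Assumes a 1500-byte physical MTU; overheads are taken from the note above.
phys_mtu=1500

# Tenant network MTU under ML2/OVS with VXLAN (30 bytes of overhead).
echo $((phys_mtu - 30))   # prints 1470

# Tenant network MTU under ML2/OVN with Geneve (38 bytes of overhead).
echo $((phys_mtu - 38))   # prints 1462
```

A VM still using the larger 1470-byte MTU would exceed what the Geneve encapsulation can carry on the same physical network, which is why the tenant MTU must be lowered and propagated via DHCP before migrating.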