OpenStack-Ansible leap upgrade

Jump upgrade from OpenStack Juno to Newton using OpenStack-Ansible

==This is currently a POC==

Uses

This utility can be used to upgrade any OpenStack-Ansible deployment running Juno to the latest Newton release. The process will upgrade the OSA system components, sync the database through the intermediate releases, and then deploy OSA using the Newton release. While this method lets a deployment skip several releases, deployers should be aware that skipping releases is not something OpenStack supports. To make this possible the OpenStack services on the active cloud will be stopped. Active workloads "should" remain online for the most part, though at this stage no effort is being put into maximizing uptime; the tool set is being developed to ease multi-release upgrades in the shortest possible time while maintaining data integrity.

Requirements

  • You must have a Juno based OpenStack cloud as deployed by OpenStack-Ansible.
  • If you are running cinder-volume with LVM in an LXC container, you must migrate the cinder-volume service to the physical host.
  • You must have the Ubuntu Trusty Backports repo enabled on all hosts before you start (see the sketch after this list).
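
A minimal sketch of enabling the Trusty Backports repository, assuming the default Ubuntu archive mirror; adjust the mirror and component list to match your environment.

# Add the trusty-backports repository (mirror and components are assumptions).
echo "deb http://archive.ubuntu.com/ubuntu trusty-backports main universe" \
    > /etc/apt/sources.list.d/trusty-backports.list

# Refresh the package index so the backports packages become available.
apt-get update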

Limitations

  • Upgrading old versions of "libvirt-bin<=1.1" to newer versions of "libvirt-bin>=1.3" in a single step can cause VM downtime.
  • L3 networks may experience an outage as routers and networks are rebalanced throughout the environment.

Recommendations

  • It is recommended that all physical hosts be updated to the latest patch release before the upgrade. This can be done using standard package manager tooling (see the example after this list).
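
As a hedged example on Ubuntu Trusty hosts, the standard apt tooling can bring each physical host up to the latest patch release; nothing here is specific to the leap tooling.

# Refresh the package index and apply all available package updates.
apt-get update
apt-get -y dist-upgrade

# Reboot afterwards if a new kernel was installed, scheduling as appropriate.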

Process

To run everything in one pass, the run-stages.sh script will execute all of the stages needed to migrate the environment.

bash ./run-stages.sh

If you want to pre-load the stages, you can do so by running the various scripts independently.

bash ./prep.sh
bash ./upgrade.sh
bash ./migrations.sh
bash ./re-deploy.sh

Once all of the stages are complete, the cloud will be running OpenStack Newton.
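
A minimal post-upgrade spot check, assuming admin credentials are available in an openrc file on the utility container; the path and the client commands below are illustrative and not part of the leap tooling.

# Source admin credentials (the openrc path is an assumption).
source /root/openrc

# Confirm the compute and volume services report as up on the Newton code base.
openstack compute service list
openstack volume service list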


Setting up a test environment

Testing on a multi-node environment can be accomplished using the https://github.com/openstack/openstack-ansible-ops/tree/master/multi-node-aio repo. A single physical host can be used to create this test environment; a Rackspace OnMetal V1 IO flavor running Ubuntu 14.04 has worked very well for development. To run the deployment, execute the following commands.

Requirements

  • When testing, the host will need to start with a kernel version less than or equal to 3.13; later kernels will cause neutron to fail to run under the Juno code base (see the check after this list).
  • Start the deployment with Ubuntu 14.04.2 to ensure the deployment version is limited in terms of package availability.
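
A small sketch for confirming the kernel requirement; the 3.13 ceiling comes from the requirement above and the comparison below is only an illustration.

# Show the running kernel version.
uname -r

# Scripted check: compare the major.minor of the running kernel against 3.13.
kver="$(uname -r | cut -d. -f1,2)"
if [ "$(printf '%s\n' "${kver}" 3.13 | sort -V | head -n1)" = "${kver}" ]; then
    echo "Kernel ${kver} is <= 3.13; OK for the Juno code base."
else
    echo "Kernel ${kver} is newer than 3.13; neutron may fail under Juno."
fi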

Process

Clone the ops tooling, which contains the multi-node-aio tooling used below.

git clone https://github.com/openstack/openstack-ansible-ops /opt/openstack-ansible-ops

Run the following commands to prep the environment.

cd /opt/openstack-ansible-ops/multi-node-aio
setup-host.sh
setup-cobbler.sh
setup-virsh-net.sh
deploy-vms.sh

After the environment has been deployed, clone the RPC configurations which support Juno-based clouds.

git clone https://github.com/os-cloud/leapfrog-juno-config /etc/rpc_deploy

Now clone the Juno playbooks into place.

git clone --branch leapfrog https://github.com/os-cloud/leapfrog-juno-playbooks /opt/openstack-ansible

Finally, run the bootstrap script and the haproxy and setup playbooks to deploy the cloud environment.

cd /opt/openstack-ansible
./scripts/bootstrap-ansible.sh

cd rpc_deployment
openstack-ansible playbooks/haproxy-install.yml
openstack-ansible playbooks/setup-everything.yml

To test the cloud's functionality you can execute the OpenStack resource test script located in the scripts directory of the playbooks cloned earlier.

cd /opt/openstack-ansible/rpc_deployment
ansible -m script -a /opt/openstack-ansible/scripts/setup-openstack-for-test.sh 'utility_all[0]'

The previous script will create the following:

  • New flavors
  • Neutron L2 and L3 networks
  • Neutron routers
  • Security groups
  • Test images
  • 2 L2 network test VMs
  • 2 L3 network test VMs w/ floating IPs
  • 2 cinder-volume test VMs
  • 2 new cinder volumes attached to the cinder-volume test VMs
  • An assortment of files uploaded into a Test-Swift container

Once the cloud is operational, it is recommended that images be created so that the environment can be reverted to a previous state should there ever be a need. See https://github.com/openstack/openstack-ansible-ops/tree/master/multi-node-aio#snapshotting-an-environment-before-major-testing for more on creating snapshots.
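
The linked document describes the snapshot approach used by the multi-node-aio tooling. Purely as an illustration, and assuming the test VMs are qcow2-backed libvirt guests with hypothetical names such as infra1, a generic virsh snapshot looks like this:

# Illustrative only: snapshot a hypothetical qcow2-backed guest named "infra1".
# Repeat for each VM in the environment before starting the upgrade run.
virsh snapshot-create-as infra1 pre-leap-upgrade "State before leap upgrade testing"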