Upgrading to a Next Major Release
Upgrading a TripleO deployment to a next major release is done by first upgrading the undercloud, and then upgrading the overcloud.
Note that there are version-specific caveats and notes, which are pointed out below:
Mitaka to Newton
mitaka to newton specific note
Note
You can use the "Limit Environment Specific Content" in the left hand nav bar to restrict content to the upgrade you are performing.
Note
Generic upgrade testing cannot cover all possible deployment configurations. Before performing the upgrade in production, test it in a matching staging environment, and create a backup of the production environment.
Upgrading the Undercloud
Disable the old OpenStack release repositories and enable new release repositories on the undercloud:
Back up and disable the current repos. Note that the repository files might be named differently depending on your installation:
mkdir /home/stack/REPOBACKUP
sudo mv /etc/yum.repos.d/delorean* /home/stack/REPOBACKUP/
Get and enable new repos for `NEW_VERSION`:
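The exact commands depend on your distribution's repositories. As a non-authoritative sketch only, assuming the RDO trunk ("delorean") repo layout (the `NEW_VERSION` value and URL pattern here are assumptions to adapt):

```shell
# Illustrative sketch only; adjust NEW_VERSION and the URLs to the
# repositories your installation actually uses.
NEW_VERSION=newton
REPO_URL="https://trunk.rdoproject.org/centos7-${NEW_VERSION}/current/delorean.repo"
DEPS_URL="https://trunk.rdoproject.org/centos7-${NEW_VERSION}/delorean-deps.repo"
echo "Would fetch: ${REPO_URL}"
echo "Would fetch: ${DEPS_URL}"
# Uncomment to enable the repos for real:
# sudo curl -o /etc/yum.repos.d/delorean.repo "${REPO_URL}"
# sudo curl -o /etc/yum.repos.d/delorean-deps.repo "${DEPS_URL}"
```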
Run undercloud upgrade:
Note
It is strongly recommended that you validate the state of your undercloud before starting any upgrade operations. The tripleo-validations repo has some 'pre-upgrade' validations that you can execute by following the validations documentation to run the "pre-upgrade" group:
openstack workflow execution create tripleo.validations.v1.run_groups '{"group_names": ["pre-upgrade"]}'
mistral execution-get-output $id_returned_above
Mitaka to Newton
In the first Newton release of instack-undercloud (5.0.0), the undercloud telemetry services are disabled by default. In order to maintain the telemetry services during the Mitaka to Newton upgrade, the operator must explicitly enable them before running the undercloud upgrade. This is done by adding:
enable_telemetry = true
in the [DEFAULT] section of the undercloud.conf configuration file.
In any later Newton release this option defaults back to enabled, to improve the upgrade experience, so you do not need to enable it explicitly.
Ocata to Pike
Prior to Pike, TripleO deployed Ceph with puppet-ceph. With the Pike release it is possible to use TripleO to deploy Ceph with either ceph-ansible or puppet-ceph, though puppet-ceph is deprecated. To use ceph-ansible, the CentOS Storage SIG Ceph repository must be enabled on the undercloud and the ceph-ansible package must then be installed:
sudo yum -y install --enablerepo=extras centos-release-ceph-jewel
sudo yum -y install ceph-ansible
It is not yet possible to migrate an existing puppet-ceph deployment to a ceph-ansible deployment. Only new deployments are currently possible with ceph-ansible.
Newton to Ocata
The following commands need to be run before the undercloud upgrade:
sudo systemctl stop openstack-*
sudo systemctl stop neutron-*
sudo systemctl stop httpd
sudo yum -y update instack-undercloud openstack-puppet-modules openstack-tripleo-common
The following command will upgrade the undercloud:
sudo yum -y update python-openstackclient python-tripleoclient ceph-ansible
openstack undercloud upgrade
Once the undercloud upgrade has fully completed, you may remove the old MySQL backup folder /home/stack/mysql-backup.
Note
You may wish to use time and capture the output to a file for later debugging:
time openstack undercloud upgrade 2>&1 | tee undercloud_upgrade.log
Note
If you added custom OVS ports to the undercloud (e.g. in a virtual testing environment) you may need to re-add them at this point.
Note
It is not necessary to update ceph-ansible if Ceph is not used in the overcloud.
Upgrading the Overcloud to Ocata or Pike
As of the Ocata release, the upgrades workflow in TripleO has changed significantly to accommodate the operators' new ability to deploy custom roles with the Newton release (see the Composable Service Upgrade spec for more info). The new workflow uses ansible upgrade tasks to define the upgrades workflow on a per-service level. The Pike release upgrade uses a similar mechanism, and the steps are invoked with the same CLI. A big difference, however, is that after upgrading to Pike most of the overcloud services will be running in containers.
Note
Upgrades to Pike or Queens will only be tested with containers. Baremetal deployments, which don't use containers, will be deprecated in Queens and have full support removed in Rocky.
The operator starts the upgrade with an openstack overcloud deploy that includes the major-upgrade-composable-steps.yaml environment file (or the docker variant for the containerized upgrade to Pike) as well as all environment files used on the initial deployment. This will collect the ansible upgrade tasks for all roles, except those that have the disable_upgrade_deployment flag set to True in roles_data.yaml.
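The flag lives on the role definition itself. A minimal, hypothetical roles_data.yaml fragment (service list heavily truncated; the real file ships with tripleo-heat-templates) might look like:

```shell
# Hypothetical fragment written to a scratch file purely for illustration.
cat > roles_data_fragment.yaml <<'EOF'
- name: Compute
  disable_upgrade_deployment: True
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::ComputeNeutronOvsAgent
EOF
grep disable_upgrade_deployment roles_data_fragment.yaml
```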
The tasks will be executed in a series of steps, for example (and not
limited to): step 0 for validations or other pre-upgrade tasks, step 1
to stop the pacemaker cluster, step 2 to stop services, step 3 for
package updates, step 4 for cluster startup, step 5 for any special case
db syncs or post package update migrations. The Pike upgrade tasks are
in general much simpler than those used in Ocata since for Pike these
tasks are mainly for stopping and disabling the systemd services, since
they will be containerized as part of the upgrade.
After the ansible tasks have run, the puppet (or docker, for Pike containers) configuration is applied in the 'normal' manner used on an initial deploy, to complete the upgrade and bring services back up, or start the service containers, as the case may be for Ocata or Pike.
For those roles with the disable_upgrade_deployment flag set to True, the operator will upgrade the corresponding nodes with upgrade-non-controller.sh. The operator uses that script to invoke the tripleo_upgrade_node.sh script, which is delivered during the major-upgrade-composable-steps that come first, as described above.
Run the major upgrade composable ansible steps
This step will upgrade the nodes of all roles that do not explicitly set the disable_upgrade_deployment flag to True in roles_data.yaml (this is an operator decision, and the current default is for the 'Compute' and 'ObjectStorage' roles to have this set). The ansible upgrade tasks are collected from all service manifests and executed in a series of steps as described in the introduction above. Even before the invocation of these ansible tasks, however, this upgrade step also delivers the tripleo_upgrade_node.sh script and role-specific puppet manifest that allow the operator to upgrade those nodes after this step has completed.
From Ocata to Pike, the Overcloud will be upgraded to a containerized environment. All the services will run under Docker.
If you deploy TripleO with custom roles, synchronize them with the default roles_data.yaml for your release and make sure any new parameters and services are present in your roles.
Newton
Newton roles_data.yaml is available here: https://github.com/openstack/tripleo-heat-templates/blob/stable/newton/roles_data.yaml
Ocata
Ocata roles_data.yaml is available here: https://github.com/openstack/tripleo-heat-templates/blob/stable/ocata/roles_data.yaml
Pike
Pike roles_data.yaml is available here: https://github.com/openstack/tripleo-heat-templates/blob/stable/pike/roles_data.yaml
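One way to spot drift is to diff your custom roles file against the shipped default. A sketch using tiny hypothetical stand-in files (in practice compare against /usr/share/openstack-tripleo-heat-templates/roles_data.yaml and your own custom file):

```shell
# Illustrative only: miniature stand-ins for the default and custom files.
cat > default_roles.yaml <<'EOF'
- name: Compute
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::Sshd
EOF
cat > custom_roles.yaml <<'EOF'
- name: Compute
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
EOF
# Lines prefixed with '-' exist in the default but are missing from the
# custom roles file and probably need to be added before upgrading.
diff -u default_roles.yaml custom_roles.yaml || true
```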
Create an environment file with commands to switch OpenStack repositories to a new release. This will likely be the same commands that were used to switch repositories on the undercloud:
cat > overcloud-repos.yaml <<EOF
parameter_defaults:
  UpgradeInitCommand: |
    set -e
    # REPOSITORY SWITCH COMMANDS GO HERE
EOF
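As a hypothetical example of what the placeholder might become (the curl URL here is an assumption for an Ocata delorean repo; use whatever commands you actually ran on the undercloud):

```shell
# Hypothetical filled-in example; the repo URL must match the repositories
# used on your undercloud.
cat > overcloud-repos.yaml <<'EOF'
parameter_defaults:
  UpgradeInitCommand: |
    set -e
    rm -f /etc/yum.repos.d/delorean*
    curl -o /etc/yum.repos.d/delorean.repo \
        https://trunk.rdoproject.org/centos7-ocata/current/delorean.repo
EOF
grep UpgradeInitCommand overcloud-repos.yaml
```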
Newton to Ocata
Run overcloud deploy, passing in the full set of environment files plus major-upgrade-composable-steps.yaml and `overcloud-repos.yaml`:
openstack overcloud deploy --templates \
  -e <full environment> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-composable-steps.yaml \
  -e overcloud-repos.yaml
Note
For details on deploying a containerized overcloud, see ../containers_deployment/overcloud.
Run overcloud deploy, passing in the full set of environment files plus major-upgrade-composable-steps-docker.yaml and overcloud-repos.yaml (plus the docker registry environment file if upgrading to containers):
openstack overcloud deploy --templates \
  -e <full environment> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-composable-steps-docker.yaml \
  -e overcloud-repos.yaml
Note
It is especially important to remember that you must include all environment files that were used to deploy the overcloud that you are about to upgrade.
Note
If the Overcloud has been deployed with Pacemaker, then add the docker-ha.yaml environment file to the upgrade command:
openstack overcloud deploy --templates \
  -e <full environment> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-composable-steps-docker.yaml \
  -e overcloud-repos.yaml
Note
The first step of the ansible tasks is to validate that the deployment is in a good state before performing any other upgrade operations. Each service manifest in the tripleo-heat-templates includes a check that it is running and if any of those checks fail the upgrade will exit early at ansible step 0.
If you are re-running the upgrade after an initial failed attempt, you may need to disable these checks in order to allow the upgrade to proceed with services down. This is done with the SkipUpgradeConfigTags parameter to specify that tasks with the 'validation' tag should be skipped. You can include this in any of the environment files you are using:
parameter_defaults:
  SkipUpgradeConfigTags: [validation]
Upgrade remaining nodes for roles with disable_upgrade_deployment: True
It is expected that the operator will want to upgrade the roles that have the openstack-nova-compute and openstack-swift-object services deployed, to allow for pre-upgrade migration of workloads. For this reason the default Compute and ObjectStorage roles in roles_data.yaml have disable_upgrade_deployment set to True.
Note that unlike in previous releases, this operator-driven upgrade step includes a full puppet configuration run, as happens after the ansible steps on the roles those steps are executed on. The significance is that nodes are 'fully' upgraded after each step completes, rather than having to wait for the final converge step as has previously been the case. In the case of Ocata to Pike, the full puppet/docker config is applied to bring up the overcloud services in containers.
The tripleo_upgrade_node.sh script and puppet configuration are delivered to the nodes with disable_upgrade_deployment set to True during the initial major upgrade composable steps in step 1 above. For Ocata to Pike, the tripleo_upgrade_node.sh script is still delivered to the disable_upgrade_deployment nodes but is now empty. Instead, upgrade-non-controller.sh downloads ansible playbooks, and those are executed to deliver the upgrade. See the Queens-upgrade-spec for more information on this mechanism.
To upgrade remaining roles (at your convenience):
upgrade-non-controller.sh --upgrade overcloud-compute-0
for i in $(seq 0 2); do
    upgrade-non-controller.sh --upgrade overcloud-objectstorage-$i &
done
Converge to unpin Nova RPC
The final step is required to unpin Nova RPC version. Unlike in previous releases, for Ocata the puppet configuration has already been applied to nodes as part of each upgrades step, i.e. after the ansible tasks or when invoking the tripleo_upgrade_node.sh script to upgrade compute nodes. Thus the significance of this step is somewhat diminished compared to previously. However a re-application of puppet configuration across all nodes here will also serve as a sanity check and hopefully show any issues that an operator may have missed during any of the previous upgrade steps.
To converge, run the deploy command with `major-upgrade-converge-docker.yaml`:
openstack overcloud deploy --templates \
  -e <full environment> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-converge-docker.yaml
Newton to Ocata
For Newton to Ocata, run the deploy command with `major-upgrade-pacemaker-converge.yaml`:
openstack overcloud deploy --templates \
  -e <full environment> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-pacemaker-converge.yaml
Note
If the Overcloud has been deployed with Pacemaker, then add the docker-ha.yaml environment file to the upgrade command:
openstack overcloud deploy --templates \
  -e <full environment> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-converge-docker.yaml

openstack overcloud deploy --templates \
  -e <full environment> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-converge.yaml
Note
It is especially important to remember that you must include all environment files that were used to deploy the overcloud.
Upgrading the Overcloud to Newton and earlier
Note
The openstack overcloud deploy calls in upgrade steps below are non-blocking. Make sure that the overcloud is UPDATE_COMPLETE in openstack stack list and sudo pcs status on a controller reports everything running fine before proceeding to the next step.
Mitaka to Newton
Deliver the migration for ceilometer to run under httpd.
This step migrates ceilometer to run under httpd (apache) rather than eventlet, as was previously the case. To execute this step run overcloud deploy, passing in the full set of environment files plus `major-upgrade-ceilometer-wsgi-mitaka-newton.yaml`:
openstack overcloud deploy --templates \
-e <full environment> \
-e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-ceilometer-wsgi-mitaka-newton.yaml
Upgrade initialization
The initialization step switches to new repositories on overcloud nodes, and it delivers upgrade scripts to nodes which are going to be upgraded one-by-one (this means non-controller nodes, except any stand-alone block storage nodes).
Create an environment file with commands to switch OpenStack repositories to a new release. This will likely be the same commands that were used to switch repositories on the undercloud:
cat > overcloud-repos.yaml <<EOF
parameter_defaults:
  UpgradeInitCommand: |
    set -e
    # REPOSITORY SWITCH COMMANDS GO HERE
EOF
And run overcloud deploy, passing in the full set of environment files plus major-upgrade-pacemaker-init.yaml and `overcloud-repos.yaml`:
openstack overcloud deploy --templates \
  -e <full environment> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-pacemaker-init.yaml \
  -e overcloud-repos.yaml
Object storage nodes upgrade
If the deployment has any standalone object storage nodes, upgrade them one-by-one using the upgrade-non-controller.sh script on the undercloud node:
upgrade-non-controller.sh --upgrade <nova-id of object storage node>
This is run before the controller node upgrade because swift storage services should be upgraded before swift proxy services.
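If there are several such nodes, the one-by-one invocations can be scripted. A dry-run sketch with hypothetical node names (the loop only prints the commands; drop the echo to actually run them, and take the names from your own deployment, e.g. nova list):

```shell
# Dry run: prints the upgrade command for each hypothetical node in turn.
for i in 0 1 2; do
    echo upgrade-non-controller.sh --upgrade overcloud-objectstorage-$i
done
```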
Upgrade controller and block storage nodes
Mitaka to Newton
Explicitly disable sahara services if so desired: as discussed in bug 1630247, sahara services are disabled by default in the Newton overcloud deployment. This special case is handled for the duration of the upgrade by defaulting to keeping the sahara services.
That is, by default sahara services are restarted after the Mitaka to Newton upgrade of controller nodes, and sahara config is re-applied during the final upgrade converge step.
If an operator wishes to disable sahara services as part of the mitaka to newton upgrade they need to include the major-upgrade-remove-sahara.yaml environment file during the controller upgrade step as well as during the converge step later:
openstack overcloud deploy --templates \
  -e <full environment> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-pacemaker.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-remove-sahara.yaml
All controllers will be upgraded in sync in order to make services only talk to DB schema versions they expect. Services will be unavailable during this operation. Standalone block storage nodes are automatically upgraded in this step too, in sync with controllers, because block storage services don't have a version pinning mechanism.
Run the deploy command with `major-upgrade-pacemaker.yaml`:
openstack overcloud deploy --templates \
  -e <full environment> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-pacemaker.yaml
Services of the compute component on the controller nodes are now pinned to communicate like the older release, ensuring that they can talk to the compute nodes which haven't been upgraded yet.
Note
If this step fails, it may leave the pacemaker cluster stopped (together with all OpenStack services on the controller nodes). The root cause and restoration procedure may vary, but in simple cases the pacemaker cluster can be started by logging into one of the controllers and running sudo pcs cluster start --all.
Upgrade ceph storage nodes
If the deployment has any ceph storage nodes, upgrade them one-by-one using the upgrade-non-controller.sh script on the undercloud node:
upgrade-non-controller.sh --upgrade <nova-id of ceph storage node>
Upgrade compute nodes
Upgrade compute nodes one-by-one using the upgrade-non-controller.sh script on the undercloud node:
upgrade-non-controller.sh --upgrade <nova-id of compute node>
Apply configuration from upgraded tripleo-heat-templates
Mitaka to Newton
Explicitly disable sahara services if so desired: as discussed in bug 1630247, sahara services are disabled by default in the Newton overcloud deployment. This special case is handled for the duration of the upgrade by defaulting to keeping the sahara services.
That is, by default sahara services are restarted after the Mitaka to Newton upgrade of controller nodes, and sahara config is re-applied during the final upgrade converge step.
If an operator wishes to disable sahara services as part of the mitaka to newton upgrade they need to include the major-upgrade-remove-sahara.yaml environment file during the controller upgrade earlier and converge step here:
openstack overcloud deploy --templates \
  -e <full environment> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-pacemaker-converge.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-remove-sahara.yaml
This step unpins compute services communication (upgrade level) on controller and compute nodes, and it triggers configuration management tooling to converge the overcloud configuration according to the new release of tripleo-heat-templates.
Make sure that all overcloud nodes have been upgraded to the new release, and then run the deploy command with `major-upgrade-pacemaker-converge.yaml`:
openstack overcloud deploy --templates \
  -e <full environment> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-pacemaker-converge.yaml
Mitaka to Newton
Deliver the data migration for aodh.
In Newton, aodh uses its own MySQL backend. This step migrates all the existing alarm data from MongoDB to the new MySQL backend. To execute this step run overcloud deploy, passing in the full set of environment files plus `major-upgrade-aodh-migration.yaml`:
openstack overcloud deploy --templates \
-e <full environment> \
-e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-aodh-migration.yaml