Refactor upgrade pages - part II

Split out the known issues and specific procedures
into dedicated pages. For the Keystone token upgrade
issue some words are taken from the keystone charm
README.

Some formatting.

Change-Id: Ie300d25dda1ea8234ac49dd71697c2a80cd37bbe
Peter Matulis 2020-12-05 02:38:45 -05:00
parent 732ecf2e51
commit 49d8ce3cda
11 changed files with 338 additions and 323 deletions

View File

@ -1,5 +1,15 @@
:orphan:
==============================================
ceph charm: migration to ceph-mon and ceph-osd
==============================================
.. note::

   This page describes a procedure that may be required when performing an
   upgrade of an OpenStack cloud. Please read the more general :doc:`Upgrades
   overview <upgrade-overview>` before attempting any of the instructions given
   here.
In order to continue to receive updates to newer Ceph versions, and for general
improvements and features in the charms to deploy Ceph, users of the ceph charm
@ -7,9 +17,12 @@ should migrate existing services to using ceph-mon and ceph-osd.
.. note::

   This example migration assumes that the ceph charm is deployed to machines
   0, 1 and 2 with the ceph-osd charm deployed to other machines within the
   model.
Procedure
---------
Upgrade charms
~~~~~~~~~~~~~~
@ -40,12 +53,12 @@ Deploy ceph-mon
First deploy the ceph-mon charm; if the existing ceph charm is deployed to machines
0, 1 and 2, you can place the ceph-mon units in LXD containers on these machines:
.. code-block:: none

   juju deploy --to lxd:0 ceph-mon
   juju config ceph-mon no-bootstrap=True
   juju add-unit --to lxd:1 ceph-mon
   juju add-unit --to lxd:2 ceph-mon
These units will install ceph, but will not bootstrap into a running monitor cluster.
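Before bootstrapping, it is worth double-checking that the ``no-bootstrap``
option is set as intended. A quick sanity check:

.. code-block:: none

   juju config ceph-mon no-bootstrap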
@ -54,16 +67,16 @@ Bootstrap ceph-mon from ceph
Next, we'll use the existing ceph application to bootstrap the new ceph-mon units:
.. code-block:: none

   juju add-relation ceph ceph-mon
Once this process has completed, you should have a Ceph MON cluster of six
units; this can be verified on any of the ceph or ceph-mon units:

.. code-block:: none

   sudo ceph -s
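The same check can be run from the Juju client without logging in to a unit,
for example:

.. code-block:: none

   juju ssh ceph-mon/0 sudo ceph -s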
Deploy ceph-osd to ceph units
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -71,59 +84,56 @@ Deploy ceph-osd to ceph units
In order to retain any running Ceph OSD processes on the ceph units, the ceph-osd
charm must be deployed to the existing machines running the ceph units:
.. code-block:: none

   juju config ceph-osd osd-reformat=False
   juju add-unit --to 0 ceph-osd
   juju add-unit --to 1 ceph-osd
   juju add-unit --to 2 ceph-osd
As of the 18.05 charm release, the ``osd-reformat`` configuration option has
been completely removed.

The charm installation and configuration will not impact any existing running
Ceph OSDs.
Relate ceph-mon to all ceph clients
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The new ceph-mon units now need to be related to the ceph-osd application:
.. code-block:: none

   juju add-relation ceph-mon ceph-osd
Depending on your deployment you'll also need to add relations for other
applications, for example:
.. code-block:: none

   juju add-relation ceph-mon cinder-ceph
   juju add-relation ceph-mon glance
   juju add-relation ceph-mon nova-compute
   juju add-relation ceph-mon ceph-radosgw
   juju add-relation ceph-mon gnocchi
Once hook execution completes across all units, each client should be
configured with six MON addresses.
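One way to spot-check a client is to inspect its Ceph configuration file (the
unit name below is an example; ``/etc/ceph/ceph.conf`` is the usual location):

.. code-block:: none

   juju ssh nova-compute/0 "grep 'mon host' /etc/ceph/ceph.conf"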
Remove the ceph application
~~~~~~~~~~~~~~~~~~~~~~~~~~~
It's now safe to remove the ceph application from your deployment:
.. code-block:: none

   juju remove-application ceph
As each unit of the ceph application is destroyed, its stop hook will remove
the MON process from the Ceph cluster monmap and disable Ceph MON and MGR
processes running on the machine; any Ceph OSD processes remain untouched and
are now owned by the ceph-osd units deployed alongside ceph.
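To confirm that the cluster is healthy and that all OSDs remained up after the
removal, run on any ceph-mon unit:

.. code-block:: none

   juju ssh ceph-mon/0 'sudo ceph osd stat && sudo ceph -s'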
.. LINKS
.. _Charm upgrades: app-upgrade-openstack#charm-upgrades
.. _LP #1452641: https://bugs.launchpad.net/nova/+bug/1452641

View File

@ -1,3 +1,5 @@
:orphan:
========================
Series upgrade OpenStack
========================
@ -11,10 +13,9 @@ across the entirety of a Charmed OpenStack cloud.
forth in the more general `Series upgrade`_ appendix. That reference must be
studied in-depth prior to attempting the steps outlined here. In particular,
ensure that the :ref:`Pre-upgrade requirements <pre-upgrade_requirements>`
are satisfied; the :doc:`Upgrade issues <upgrade-issues>` have been reviewed
and considered; and that :ref:`Workload specific preparations
<workload_specific_preparations>` have been addressed during planning.
Downtime
--------

View File

@ -12,8 +12,7 @@ Please read the following before continuing:
* the :doc:`Upgrades overview <upgrade-overview>` page
* the OpenStack charms `Release Notes`_ for the corresponding current and
target versions of OpenStack
* the :doc:`Upgrade issues <upgrade-issues>` page
Once this document has been studied the administrator will be ready to graduate
to the :doc:`Series upgrade OpenStack <app-series-upgrade-openstack>` guide
@ -107,8 +106,8 @@ making any changes.
upgrades).
* The cloud should be running the latest OpenStack release supported by the
  current series. See `Ubuntu OpenStack release cycle`_ and `OpenStack
  upgrades`_.
* The cloud should be fully operational and error-free.
@ -121,36 +120,6 @@ making any changes.
* `Automatic package updates`_ should be disabled on the nodes to avoid
  potential conflicts with the manual (or scripted) APT steps. A quick check
  is shown below.
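On Ubuntu, the stock configuration can be inspected on a node as follows (a
value of "0" means automatic updates are disabled; the file location assumes a
default Ubuntu install):

.. code-block:: none

   cat /etc/apt/apt.conf.d/20auto-upgrades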
.. _workload_specific_preparations:
Workload specific preparations
@ -355,6 +324,3 @@ appendix :doc:`Series upgrade OpenStack <app-series-upgrade-openstack>`.
.. _Ubuntu OpenStack release cycle: https://ubuntu.com/about/release-cycle#ubuntu-openstack-release-cycle
.. _Application leadership: https://juju.is/docs/implementing-leadership
.. _ubuntu: https://jaas.ai/ubuntu

View File

@ -14,7 +14,7 @@ Please read the following before continuing:
* the :doc:`Upgrades overview <upgrade-overview>` page
* the OpenStack charms `Release Notes`_ for the corresponding current and
target versions of OpenStack
* the :doc:`Upgrade issues <upgrade-issues>` page
.. note::
@ -42,11 +42,12 @@ configured to the pre-upgraded service as possible. Nevertheless, there are
still times when intervention on the part of the operator may be needed, such
as when:
* a service is removed, added, or replaced
* a software bug affecting the OpenStack upgrade is present in the new charm
All known issues requiring manual intervention are documented on the
:doc:`Known upgrade issues <upgrade-issues>` page. You **must** look these
over.
Verify the current deployment
-----------------------------
@ -62,7 +63,7 @@ Perform a database backup
Before making any changes to cloud services perform a backup of the cloud
database by running the ``backup`` action on any single percona-cluster unit:
.. code-block:: none

   juju run-action --wait percona-cluster/0 backup
@ -70,7 +71,7 @@ Now transfer the backup directory to the Juju client with the intention of
subsequently storing it somewhere safe. This command will grab **all** existing
backups:
.. code-block:: none

   juju scp -- -r percona-cluster/0:/opt/backups/mysql /path/to/local/directory
@ -84,7 +85,7 @@ optimised by first archiving any stale data (e.g. deleted instances). Do this
by running the ``archive-data`` action on any single nova-cloud-controller
unit:
.. code-block:: none

   juju run-action --wait nova-cloud-controller/0 archive-data
@ -99,13 +100,13 @@ should be purged before the upgrade. These entries will show as 'down' (and be
hosted on machines no longer in the model) in the current list of compute
services:
.. code-block:: none

   openstack compute service list
To remove a compute service:
.. code-block:: none

   openstack compute service delete <service-id>
@ -118,7 +119,7 @@ charms (e.g. ``nova-compute`` and ``ceph-osd``), ensure that
of the upgrade process. This is to prevent the other services from being
upgraded outside of Juju's control. On a unit run:
.. code-block:: none

   sudo dpkg-reconfigure -plow unattended-upgrades
@ -218,7 +219,7 @@ There are three methods available for performing an OpenStack service upgrade.
The appropriate method is chosen based on the actions supported by the charm.
Actions for a charm can be listed in this way:
.. code-block:: none

   juju actions <charm-name>
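For example, to list the actions of the keystone charm (the available actions
vary by charm and charm revision):

.. code-block:: none

   juju actions keystone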
@ -238,14 +239,14 @@ application has a sole unit.
The syntax is:
.. code-block:: none

   juju config <openstack-charm> openstack-origin=cloud:<cloud-archive-release>
Charms whose services are not technically part of the OpenStack project will
use the ``source`` charm option instead. The Ceph charms are a classic example:
.. code-block:: none

   juju config ceph-mon source=cloud:bionic-train
@ -257,7 +258,7 @@ use the ``source`` charm option instead. The Ceph charms are a classic example:
So to upgrade Cinder across all units (currently running Bionic) from Stein to
Train:
.. code-block:: none

   juju config cinder openstack-origin=cloud:bionic-train
@ -279,14 +280,14 @@ individually, **always upgrade the application leader first.** The leader is
the unit with a ``*`` next to it in the :command:`juju status` output. It can
also be discovered via the CLI:
.. code-block:: none

   juju run --application <application-name> is-leader
For example, to upgrade a three-unit glance application from Stein to Train
where ``glance/1`` is the leader:
.. code-block:: none

   juju config glance action-managed-upgrade=True
   juju config glance openstack-origin=cloud:bionic-train
@ -301,6 +302,8 @@ where ``glance/1`` is the leader:
are part of the OpenStack project. For instance, you will need to use the
"all-in-one" method for the Ceph charms.
.. _paused_single_unit:
Paused-single-unit
~~~~~~~~~~~~~~~~~~
@ -319,7 +322,7 @@ by the operator.
For example, to upgrade a three-unit nova-compute application from Stein to
Train where ``nova-compute/0`` is the leader:
.. code-block:: none

   juju config nova-compute action-managed-upgrade=True
   juju config nova-compute openstack-origin=cloud:bionic-train
@ -349,7 +352,7 @@ flow to the associated parent unit while its upgrade is underway.
For example, to upgrade a three-unit keystone application from Stein to Train
where ``keystone/2`` is the leader:
.. code-block:: none

   juju config keystone action-managed-upgrade=True
   juju config keystone openstack-origin=cloud:bionic-train
@ -383,195 +386,6 @@ Verify the new deployment
Check for errors in :command:`juju status` output and any monitoring service.
.. LINKS
.. _Release Notes: https://docs.openstack.org/charm-guide/latest/release-notes.html
.. _Ubuntu Cloud Archive: https://wiki.ubuntu.com/OpenStack/CloudArchive

View File

@ -22,7 +22,7 @@ contribution`_.
:caption: Installation
:maxdepth: 1
Overview <install-overview>
install-maas
install-juju
install-openstack
@ -32,11 +32,12 @@ contribution`_.
:caption: Upgrades
:maxdepth: 1
Overview <upgrade-overview>
upgrade-charms
app-upgrade-openstack
app-series-upgrade
upgrade-special
upgrade-issues
.. toctree::
:caption: Appendices
@ -52,7 +53,6 @@ contribution`_.
Ceph RADOS Gateway multisite replication <app-rgw-multisite>
Ceph RBD mirroring <app-ceph-rbd-mirror>
Ceph iSCSI <app-ceph-iscsi>
Masakari <app-masakari>
Policy overrides <app-policy-overrides>
OVN <app-ovn>

View File

@ -1,6 +1,6 @@
=====================
Installation overview
=====================
The purpose of the Installation section is to demonstrate how to build a
multi-node OpenStack cloud with `MAAS`_, `Juju`_, and `OpenStack Charms`_. For

View File

@ -1,24 +1,15 @@
:orphan:
==============================================
percona-cluster charm: series upgrade to focal
==============================================

.. note::

   This page describes a procedure that may be required when performing an
   upgrade of an OpenStack cloud. Please read the more general :doc:`Upgrades
   overview <upgrade-overview>` before attempting any of the instructions given
   here.
In Ubuntu 20.04 LTS (Focal) the percona-xtradb-cluster-server package will no
longer be available. It has been replaced by mysql-server-8.0 and mysql-router
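This can be confirmed on a focal machine by querying APT; no installation
candidate should be reported for the old package:

.. code-block:: none

   apt-cache policy percona-xtradb-cluster-server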

View File

@ -31,7 +31,7 @@ the charms can be upgraded in any order.
Before upgrading, a (partial) output to :command:`juju status` may look like:
.. code-block:: console

   App       Version  Status  Scale  Charm     Store       Rev  OS      Notes
   keystone  15.0.0   active      1  keystone  jujucharms  306  ubuntu
@ -47,7 +47,7 @@ charm revision is expected to increase in number.
So to upgrade this 'keystone' charm (to the most recent promulgated version in
the Charm Store):
.. code-block:: none

   juju upgrade-charm keystone
@ -55,7 +55,7 @@ The upgrade progress can be monitored via :command:`juju status`. Any
encountered problem will surface as a message in its output. This sample
(partial) output reflects a successful upgrade:
.. code-block:: console

   App       Version  Status  Scale  Charm     Store       Rev  OS      Notes
   keystone  15.0.0   active      1  keystone  jujucharms  309  ubuntu

View File

@ -0,0 +1,217 @@
====================
Known upgrade issues
====================
This page documents known software upgrade issues.
DNS HA with the focal series
----------------------------
DNS HA has been reported not to work on the focal series. See `LP #1882508`_
for more information.
Nova RPC version mismatches
---------------------------
If it is not possible to upgrade Neutron and Nova within the same maintenance
window, be mindful that the RPC communication between nova-cloud-controller,
nova-compute, and nova-api-metadata is very likely to cause several errors
while those services are not running the same version. This is because those
charms do not currently support RPC version pinning or auto-negotiation.

See bug `LP #1825999`_.
neutron-gateway charm: upgrading from Mitaka to Newton
------------------------------------------------------
Between the Mitaka and Newton OpenStack releases, the ``neutron-gateway`` charm
added two options, ``bridge-mappings`` and ``data-port``, which replaced the
(now) deprecated ``ext-port`` option. This was to provide for more control over
how ``neutron-gateway`` can configure external networking. Unfortunately, the
charm was only designed to work with either ``ext-port`` (no longer
recommended) *or* ``bridge-mappings`` and ``data-port``.
See bug `LP #1809190`_.
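For reference, a deployment moving away from ``ext-port`` would set the newer
options along these lines (the bridge and interface names are purely
illustrative):

.. code-block:: none

   juju config neutron-gateway bridge-mappings='physnet1:br-ex' data-port='br-ex:eth1'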
cinder/ceph topology change: upgrading from Newton to Ocata
-----------------------------------------------------------
If ``cinder`` is directly related to ``ceph-mon`` rather than via
``cinder-ceph`` then upgrading from Newton to Ocata will result in the loss of
some block storage functionality, specifically live migration and snapshotting.
To remedy this situation the deployment should migrate to using the cinder-ceph
charm. This can be done after the upgrade to Ocata.
.. warning::

   Do not attempt to migrate a deployment with existing volumes to use the
   ``cinder-ceph`` charm prior to Ocata.

The intervention is detailed in the three steps below.
Step 0: Check existing configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Confirm existing volumes are in an RBD pool called 'cinder':
.. code-block:: none

   juju run --unit cinder/0 "rbd --name client.cinder -p cinder ls"
Sample output:
.. code-block:: none

   volume-b45066d3-931d-406e-a43e-ad4eca12cf34
   volume-dd733b26-2c56-4355-a8fc-347a964d5d55
Step 1: Deploy new topology
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Deploy the ``cinder-ceph`` charm and set the ``rbd-pool-name`` option to match
the pool that any existing volumes are in (see above):

.. code-block:: none

   juju deploy --config rbd-pool-name=cinder cinder-ceph
   juju add-relation cinder cinder-ceph
   juju add-relation cinder-ceph ceph-mon
   juju remove-relation cinder ceph-mon
   juju add-relation cinder-ceph nova-compute
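The new relations can be verified from the Juju client:

.. code-block:: none

   juju status --relations cinder cinder-ceph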
Step 2: Update volume configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The existing volumes now need to be updated to associate them with the newly
defined cinder-ceph backend:
.. code-block:: none

   juju run-action cinder/0 rename-volume-host currenthost='cinder' \
       newhost='cinder@cinder-ceph#cinder.volume.drivers.rbd.RBDDriver'
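Each volume's backend host can then be spot-checked as an admin user; it
should reference the cinder-ceph backend (``<volume-id>`` is a placeholder):

.. code-block:: none

   openstack volume show <volume-id> -c os-vol-host-attr:host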
Keystone and Fernet tokens: upgrading from Queens to Rocky
----------------------------------------------------------
Starting with OpenStack Rocky only the Fernet format for authentication tokens
is supported. Therefore, prior to upgrading Keystone to Rocky a transition must
be made from the legacy UUID format to Fernet.
Fernet support is available upstream (and in the keystone charm) starting with
Ocata, so the transition can be made on Ocata, Pike, or Queens.
A keystone charm upgrade will not alter the token format. The charm's
``token-provider`` option must be used to make the transition:
.. code-block:: none

   juju config keystone token-provider=fernet
This change may result in a minor control plane outage but any running
instances will remain unaffected.
The ``token-provider`` option has no effect starting with Rocky, where the
charm defaults to Fernet and where upstream removes support for UUID. See
`Keystone Fernet Token Implementation`_ for more information.
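To verify the transition, issue a token as any cloud user; a Fernet token is
an opaque string noticeably longer than the 32-character UUID format:

.. code-block:: none

   openstack token issue -f value -c id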
Placement charm and nova-cloud-controller: upgrading from Stein to Train
------------------------------------------------------------------------
As of Train, the placement API is managed by the new ``placement`` charm and is
no longer managed by the ``nova-cloud-controller`` charm. The upgrade to Train
therefore requires some coordination to transition to the new API endpoints.
Prior to upgrading nova-cloud-controller to Train, the placement charm must be
deployed for Train and related to the Stein-based nova-cloud-controller. It is
important that the nova-cloud-controller unit leader is paused while the API
transition occurs (paused prior to adding relations for the placement charm) as
the placement charm will migrate existing placement tables from the nova_api
database to a new placement database. Once the new placement endpoints are
registered, nova-cloud-controller can be resumed.
Here's an example of the steps just described where ``nova-cloud-controller/0``
is the leader:

.. code-block:: none

   juju deploy --series bionic --config openstack-origin=cloud:bionic-train cs:placement
   juju run-action nova-cloud-controller/0 pause
   juju add-relation placement mysql
   juju add-relation placement keystone
   juju add-relation placement nova-cloud-controller
   openstack endpoint list # ensure placement endpoints are listening on new placement IP address
   juju run-action nova-cloud-controller/0 resume
Only after these steps have been completed can nova-cloud-controller be
upgraded. Here we upgrade all units simultaneously but see the
:ref:`paused-single-unit <paused_single_unit>` service upgrade method for a more
controlled approach:
.. code-block:: none

   juju config nova-cloud-controller openstack-origin=cloud:bionic-train
Neutron LBaaS: upgrading from Stein to Train
--------------------------------------------
As of Train, support for Neutron LBaaS has been retired. The load-balancing
services are now provided by `Octavia LBaaS`_. There is no automatic migration
path; please review the `Octavia LBaaS`_ appendix for more information.
openstack-dashboard charm: upgrading to revision 294
----------------------------------------------------
When Horizon is configured with TLS (openstack-dashboard charm option
``ssl-cert``), revisions 294 and 295 of the charm have been reported to break
the dashboard (see bug `LP #1853173`_). The solution is to upgrade to a working
revision. A temporary workaround is to disable TLS without upgrading.
.. note::

   Most users will not be impacted by this issue as the recommended approach is
   to always upgrade to the latest revision.
To upgrade to revision 293:

.. code-block:: none

   juju upgrade-charm openstack-dashboard --revision 293

To upgrade to revision 296:

.. code-block:: none

   juju upgrade-charm openstack-dashboard --revision 296

To disable TLS:

.. code-block:: none

   juju config openstack-dashboard enforce-ssl=false
Designate upgrades to Train
---------------------------
When upgrading Designate to Train, there is an encoding issue between the
designate-producer and memcached that causes the designate-producer to crash.
See bug `LP #1828534`_. This can be resolved by restarting the memcached service.
.. code-block:: none

   juju run --application=memcached 'sudo systemctl restart memcached'
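The service state can then be confirmed across all memcached units:

.. code-block:: none

   juju run --application=memcached 'systemctl is-active memcached'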
.. LINKS
.. _Release Notes: https://docs.openstack.org/charm-guide/latest/release-notes.html
.. _Ubuntu Cloud Archive: https://wiki.ubuntu.com/OpenStack/CloudArchive
.. _Upgrades: https://docs.openstack.org/operations-guide/ops-upgrades.html
.. _Update services: https://docs.openstack.org/operations-guide/ops-upgrades.html#update-services
.. _Keystone Fernet Token Implementation: https://specs.openstack.org/openstack/charm-specs/specs/rocky/implemented/keystone-fernet-tokens.html
.. _Octavia LBaaS: app-octavia.html
.. BUGS
.. _LP #1825999: https://bugs.launchpad.net/charm-nova-compute/+bug/1825999
.. _LP #1809190: https://bugs.launchpad.net/charm-neutron-gateway/+bug/1809190
.. _LP #1853173: https://bugs.launchpad.net/charm-openstack-dashboard/+bug/1853173
.. _LP #1828534: https://bugs.launchpad.net/charm-designate/+bug/1828534
.. _LP #1882508: https://bugs.launchpad.net/charm-deployment-guide/+bug/1882508

View File

@ -2,8 +2,9 @@
Upgrades overview
=================
The purpose of the Upgrades section is to show how to upgrade Charmed OpenStack
as a whole. This page provides a summary of the components involved and how
they relate to each other. The upgrades of the individual components are
distinct operations and are referred to as separate upgrade types. They are
defined in this way:

View File

@ -0,0 +1,15 @@
==========================
Special upgrade procedures
==========================
The OpenStack charms are designed to accommodate every supported series and
OpenStack release wherever possible. However, upgrades may occasionally
introduce unavoidable challenges for a deployed charm. For instance, it could
be that a charm is replaced by an entirely new charm on the new series. This
can happen due to development policy concerning the charms themselves or due to
reasons independent of the charms (e.g. the workload software is no longer
supported on the new operating system). Any core OpenStack charms affected in
this way will be documented here:
* :doc:`percona-cluster charm: series upgrade to focal <percona-series-upgrade-to-focal>`
* :doc:`ceph charm: migration to ceph-mon and ceph-osd <app-ceph-migration>`