Add new issues page and refactor

* Add a new 'Various issues' page. This page will contain the known issues
  that were previously inherited from release to release and placed in the
  Release Notes even though they had no connection with the current release.
  The Release Notes will now only contain issues that arose during the
  associated development cycle (upgrade notes will go directly to the upgrade
  issues page). Once a new Release Notes page is created, issues will be moved
  to this new page. Populate the page with issues from the current 21.04
  Release Notes.
* Add an upgrade note for the `worker-multiplier` option.
* Organise the 'Upgrade issues' page by upgrade type.
* Minor rewording of some of the issues pages.

Change-Id: I62164bd4ae939ec193ae4e47d79adb2ca7af2a85
This commit is contained in:
parent a2f7f2e181
commit d4eb5ed7d7
Please read the following before continuing:

* the :doc:`Upgrades overview <upgrade-overview>` page
* the OpenStack Charms `Release Notes`_
* the :doc:`Special charm procedures <upgrade-special>` page
* the :doc:`Upgrade issues <upgrade-issues>` page
* the :doc:`Various issues <various-issues>` page

.. note::
==============
Upgrade issues
==============

This page documents upgrade issues and notes. These may apply to any of the
three upgrade types (charms, OpenStack, series).

The items on this page are distinct from those found on the following pages:

* the `Various issues`_ page
* the `Special charm procedures`_ page

The issues are organised by upgrade type.

Charm upgrades
--------------

rabbitmq-server charm
~~~~~~~~~~~~~~~~~~~~~

A timing issue has been observed during the upgrade of the rabbitmq-server
charm (see bug `LP #1912638`_ for tracking). If it occurs, the resulting hook
error can be resolved with:

.. code-block:: none

   juju resolved rabbitmq-server/N

openstack-dashboard charm: upgrading to revision 294
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When Horizon is configured with TLS (openstack-dashboard charm option
``ssl-cert``), revisions 294 and 295 of the charm have been reported to break
To disable TLS:

.. code-block:: none

   juju config enforce-ssl=false openstack-dashboard

Multiple charms: option ``worker-multiplier``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Starting with OpenStack Charms 21.04, any charm that supports the
``worker-multiplier`` configuration option will, upon upgrade, modify the
active number of service workers as follows: if the option is not set
explicitly, the number of workers will be capped at four regardless of whether
the unit is containerised. Previously, the cap applied only to containerised
units.
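If the pre-upgrade worker count needs to be preserved on an application, the
option can be set explicitly before upgrading. A sketch only; the application
name ``nova-compute`` and the value ``0.25`` are illustrative placeholders,
not recommendations:

.. code-block:: none

   # Show the current value (blank output means the option is unset)
   juju config nova-compute worker-multiplier

   # Pin the multiplier explicitly so the upgrade does not change it
   juju config nova-compute worker-multiplier=0.25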
OpenStack upgrades
------------------

Nova RPC version mismatches: upgrading Neutron and Nova
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If it is not possible to upgrade Neutron and Nova within the same maintenance
window, be mindful that the RPC communication between nova-cloud-controller,
nova-compute, and nova-api-metadata is very likely to cause several errors
while those services are not running the same version. This is because those
charms do not currently support RPC version pinning or auto-negotiation.

See bug `LP #1825999`_.
neutron-gateway charm: upgrading from Mitaka to Newton
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Between the Mitaka and Newton OpenStack releases, the ``neutron-gateway``
charm added two options, ``bridge-mappings`` and ``data-port``, which replaced
the (now) deprecated ``ext-port`` option. This was to provide more control
over how ``neutron-gateway`` configures external networking. Unfortunately,
the charm was only designed to work with either ``ext-port`` (no longer
recommended) *or* ``bridge-mappings`` and ``data-port``.

See bug `LP #1809190`_.
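For deployments moving off ``ext-port``, a minimal sketch of the newer option
pair; ``physnet1``, ``br-ex``, and ``eth1`` are placeholder values, not taken
from this document:

.. code-block:: none

   juju config neutron-gateway bridge-mappings='physnet1:br-ex' \
                               data-port='br-ex:eth1'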
cinder/ceph topology change: upgrading from Newton to Ocata
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If cinder is related directly to ceph-mon rather than via cinder-ceph, then
upgrading from Newton to Ocata will result in the loss of some block storage
functionality, specifically live migration and snapshotting. To remedy this
situation the deployment should migrate to using the cinder-ceph charm. This
can be done after the upgrade to Ocata.

.. warning::

   Do not attempt to migrate a deployment with existing volumes to use the
   cinder-ceph charm prior to Ocata.

The intervention is detailed in the three steps below.

Step 0: Check existing configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Confirm that existing volumes are in an RBD pool called 'cinder':

.. code-block:: none

   juju run --unit cinder/0 "rbd --name client.cinder -p cinder ls"

Sample output:

.. code-block:: none

   volume-b45066d3-931d-406e-a43e-ad4eca12cf34
   volume-dd733b26-2c56-4355-a8fc-347a964d5d55

Step 1: Deploy new topology
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Deploy the ``cinder-ceph`` charm and set its ``rbd-pool-name`` option to match
the pool that any existing volumes are in (see above):

.. code-block:: none

   juju deploy --config rbd-pool-name=cinder cinder-ceph
   juju add-relation cinder cinder-ceph
   juju add-relation cinder-ceph ceph-mon
   juju remove-relation cinder ceph-mon
   juju add-relation cinder-ceph nova-compute

Step 2: Update volume configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The existing volumes now need to be updated to associate them with the newly
defined cinder-ceph backend:

.. code-block:: none

   juju run-action cinder/0 rename-volume-host currenthost='cinder' \
       newhost='cinder@cinder-ceph#cinder.volume.drivers.rbd.RBDDriver'

Keystone and Fernet tokens: upgrading from Queens to Rocky
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Starting with OpenStack Rocky, only the Fernet format for authentication
tokens is supported. Therefore, prior to upgrading Keystone to Rocky, a
transition must be made from the legacy format (UUID) to Fernet.

Fernet support is available upstream (and in the keystone charm) starting with
Ocata, so the transition can be made on Ocata, Pike, or Queens.

A keystone charm upgrade will not alter the token format. The charm's
``token-provider`` option must be used to make the transition:

.. code-block:: none

   juju config keystone token-provider=fernet

This change may result in a minor control plane outage, but any running
instances will remain unaffected.

The ``token-provider`` option has no effect starting with Rocky, where the
charm defaults to Fernet and where upstream removes support for UUID. See
`Keystone Fernet Token Implementation`_ for more information.

Neutron LBaaS: upgrading from Stein to Train
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As of Train, support for Neutron LBaaS has been retired. The load-balancing
services are now provided by `Octavia LBaaS`_. There is no automatic migration
path; please review the `Octavia LBaaS`_ appendix for more information.

Designate: upgrading from Stein to Train
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When upgrading Designate to Train, there is an encoding issue between the
designate-producer and memcached that causes the designate-producer to crash.
See bug `LP #1828534`_. This can be resolved by restarting the memcached
service:

.. code-block:: none

   juju run --application=memcached 'sudo systemctl restart memcached'

Ceph BlueStore mistakenly enabled during OpenStack upgrade
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Ceph BlueStore storage backend is enabled by default when Ceph Luminous is
detected. Therefore it is possible for a non-BlueStore cloud to acquire
Queens). Problems will occur if storage is scaled out without first disabling
BlueStore (set the ceph-osd charm option ``bluestore`` to 'False'). See bug
`LP #1885516`_ for details.
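The corresponding configuration change before scaling out can be sketched as:

.. code-block:: none

   juju config ceph-osd bluestore=False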
Series upgrades
---------------

DNS HA: upgrade to focal
~~~~~~~~~~~~~~~~~~~~~~~~

DNS HA has been reported to not work on the focal series. See `LP #1882508`_
for more information.

Upgrading while Vault is sealed
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If a series upgrade is attempted while Vault is sealed, manual intervention
will be required (see bugs `LP #1886083`_ and `LP #1890106`_). The vault
leader unit (which will be in an error state) will need to be unsealed and the
hook error resolved. The `vault charm`_ README has unsealing instructions, and
the hook error can be resolved with:

.. code-block:: none

   juju resolved vault/N
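The unseal itself is performed with the Vault client against the affected
unit. A sketch only, assuming the Vault API listens locally on its default
port 8200; consult the charm README for the authoritative procedure:

.. code-block:: none

   juju ssh vault/N
   export VAULT_ADDR="http://127.0.0.1:8200"
   vault operator unseal    # repeat once per required unseal key share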
.. LINKS
.. _Release Notes: https://docs.openstack.org/charm-guide/latest/release-notes.html
.. _Ubuntu Cloud Archive: https://wiki.ubuntu.com/OpenStack/CloudArchive
.. _Update services: https://docs.openstack.org/operations-guide/ops-upgrades.html#update-services
.. _Keystone Fernet Token Implementation: https://specs.openstack.org/openstack/charm-specs/specs/rocky/implemented/keystone-fernet-tokens.html
.. _Octavia LBaaS: app-octavia.html
.. _Various issues: various-issues.html
.. _Special charm procedures: upgrade-special.html
.. _vault charm: https://opendev.org/openstack/charm-vault/src/branch/master/src/README.md#unseal-vault

.. BUGS
.. _LP #1825999: https://bugs.launchpad.net/charm-nova-compute/+bug/1825999
.. _LP #1828534: https://bugs.launchpad.net/charm-designate/+bug/1828534
.. _LP #1882508: https://bugs.launchpad.net/charm-deployment-guide/+bug/1882508
.. _LP #1885516: https://bugs.launchpad.net/charm-deployment-guide/+bug/1885516
.. _LP #1886083: https://bugs.launchpad.net/vault-charm/+bug/1886083
.. _LP #1890106: https://bugs.launchpad.net/vault-charm/+bug/1890106
.. _LP #1912638: https://bugs.launchpad.net/charm-rabbitmq-server/+bug/1912638
Release Notes
-------------

The OpenStack Charms `Release Notes`_ for the corresponding current and target
versions of OpenStack must be consulted for any special instructions. In
particular, pay attention to services and/or configuration options that may be
retired, deprecated, or changed.
* the :doc:`Special charm procedures <upgrade-special>` page
* the :doc:`Upgrade issues <upgrade-issues>` page
* the :doc:`Various issues <various-issues>` page

Verify the current deployment
-----------------------------
.. warning::

   This document is based upon the foundational knowledge and guidelines set
   forth on the more general `Series upgrade`_ page. That reference must be
   studied in-depth prior to attempting the steps outlined here. In
   particular, ensure that the :ref:`Pre-upgrade requirements
   <pre-upgrade_requirements>` are satisfied and that the :ref:`Workload
   specific preparations <workload_specific_preparations>` have been addressed
   during planning.
Please read the following before continuing:

* the :doc:`Upgrades overview <upgrade-overview>` page
* the OpenStack Charms `Release Notes`_
* the :doc:`Special charm procedures <upgrade-special>` page
* the :doc:`Upgrade issues <upgrade-issues>` page
* the :doc:`Various issues <various-issues>` page

Once this document has been studied, the administrator will be ready to
graduate to the :doc:`Series upgrade OpenStack <upgrade-series-openstack>`
guide.
OpenStack release wherever possible. However, upgrades (of any of the three
types) may occasionally introduce unavoidable challenges. For instance, it
could be that a charm is replaced by an entirely new charm on a new series, or
a new charm is needed to accommodate a new service on a new OpenStack release.
These kinds of special procedures are documented on this page.

The items on this page are distinct from those found on the following pages:

* the `Various issues`_ page
* the dedicated `Upgrade issues`_ page

Procedures
----------

* :doc:`percona-cluster charm: series upgrade to focal <percona-series-upgrade-to-focal>`
* :doc:`placement charm: OpenStack upgrade to Train <placement-charm-upgrade-to-train>`
* :doc:`ceph charm: migration to ceph-mon and ceph-osd <app-ceph-migration>`

.. LINKS
.. _Various issues: various-issues.html
.. _Upgrade issues: upgrade-issues.html
:orphan:

==============
Various issues
==============

This page documents various issues (software limitations/bugs) that may apply
to a Charmed OpenStack cloud. These are still-valid issues that have arisen
during the development cycles of past OpenStack Charms releases. The most
recently discovered issues are documented in the `Release Notes`_ of the
latest version of the OpenStack Charms.

The items on this page are distinct from those found on the following pages:

* the dedicated `Upgrade issues`_ page
* the `Special charm procedures`_ page

Lack of FQDN for containers on physical MAAS nodes may affect running services
------------------------------------------------------------------------------

When Juju deploys to a LXD container on a physical MAAS node, the container is
not informed of its FQDN. The services running in the container will therefore
be unable to determine the FQDN on initial deploy and on reboot.

Adverse effects are service dependent. This issue is tracked in bug `LP
#1896630`_ in an OVN and Octavia context. Several workarounds are documented
in the bug.

Barbican DB migration
---------------------

With Focal Ussuri, running the command ``barbican-manage db upgrade`` against
a barbican application that is backed by a MySQL InnoDB Cluster will lead to a
failure (see bug `LP #1899104`_). This was discovered while resolving bug `LP
#1827690`_.

The package bug only affects Focal Ussuri and is not present in Victoria, nor
is it present when using (Bionic) Percona Cluster as the back-end DB.

Ceph iSCSI on Ubuntu 20.10
--------------------------

The ceph-iscsi charm can't be deployed on Ubuntu 20.10 (Groovy) due to a
Python library issue. See bug `LP #1904199`_ for details.

Adding Glance storage backends
------------------------------

When a storage backend is added to Glance, a service restart may be necessary
in order for the new backend to be registered. This issue is tracked in bug
`LP #1914819`_.
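A restart sketch; the service name ``glance-api`` is an assumption and should
be verified on the unit before running:

.. code-block:: none

   juju run --application glance 'sudo systemctl restart glance-api'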
OVS to OVN migration procedure on Ubuntu 20.10
----------------------------------------------

When performed on Ubuntu 20.10 (Groovy), the procedure for migrating an
OpenStack cloud from ML2+OVS to ML2+OVN may require an extra step due to Open
vSwitch bug `LP #1852221`_.

Following the procedure in the `Migration from Neutron ML2+OVS to ML2+OVN`_
section of the deploy guide, the workaround is to restart the ``ovs-vswitchd``
service after resuming the ovn-chassis charm in step 15:

.. code-block:: none

   juju run-action --wait neutron-openvswitch/0 cleanup
   juju run-action --wait ovn-chassis/0 resume
   juju run --unit ovn-chassis/0 'systemctl restart ovs-vswitchd'

.. LINKS
.. _Release Notes: https://docs.openstack.org/charm-guide/latest/release-notes.html
.. _Upgrade issues: upgrade-issues.html
.. _Special charm procedures: upgrade-special.html
.. _Migration from Neutron ML2+OVS to ML2+OVN: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-ovn.html#migration-from-neutron-ml2-ovs-to-ml2-ovn

.. BUGS
.. _LP #1896630: https://bugs.launchpad.net/charm-layer-ovn/+bug/1896630
.. _LP #1899104: https://bugs.launchpad.net/ubuntu/+source/barbican/+bug/1899104
.. _LP #1827690: https://bugs.launchpad.net/charm-barbican/+bug/1827690
.. _LP #1904199: https://bugs.launchpad.net/charm-ceph-iscsi/+bug/1904199
.. _LP #1914819: https://bugs.launchpad.net/charm-glance/+bug/1914819
.. _LP #1852221: https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1852221