Merge "Experimental upgrade of environment 5.1.1 to 6.1" into stable/6.1
Added images: _images/upgrade/env-version-check.png (389 KiB),
_images/upgrade/fuel-version-check.png (1.2 MiB).
@@ -33,6 +33,7 @@
.. include:: /pages/operations/8900-testing-details.rst
.. include:: /pages/operations/corosync2.rst
.. include:: /pages/operations/9000-troubleshoot.rst
.. include:: /pages/operations/9600-upgrade.rst
.. include:: /pages/operations/ha-testing-scenarios-ops.rst
.. include:: /pages/operations/db-backup-ops.rst
.. include:: /pages/operations/isoUSB-ops.rst
pages/operations/9600-upgrade.rst (new file)
@@ -0,0 +1,15 @@
.. index:: OpenStack-Upgrade Guide

.. _OS-Upgrade-Guide:

.. contents:: :local:
   :depth: 2

Upgrade Environment (EXPERIMENTAL)
==================================

.. include:: /pages/operations/upgrade/1000-overview.rst
.. include:: /pages/operations/upgrade/1010-quick-start.rst
.. include:: /pages/operations/upgrade/2000-solution-overview.rst
.. include:: /pages/operations/upgrade/3000-upgrade-script.rst
.. include:: /pages/operations/upgrade/4000-finalize-upgrade.rst
pages/operations/upgrade/1000-overview.rst (new file)
@@ -0,0 +1,144 @@
.. index:: Upgrade Overview

.. _Upg_Over:

Overview
--------

Upgrading Mirantis OpenStack from version 5.1.1 to version 6.1 involves
building a new environment, installing an HA Controller node in that
environment alongside the old Controller nodes, switching all Compute
nodes to the new Controller nodes, and then upgrading the Compute nodes.

You do not need any new hardware for the upgrade.

.. warning::

   During the upgrade, virtual machines and other resources might
   experience temporary network disconnects. Schedule the upgrade
   during a maintenance window.

Upgrade Scenario
++++++++++++++++

The proposed solution includes the following general steps, described
below in more detail:

* An OpenStack environment of version 6.1 is created using the Fuel API
  with the settings and network configuration matching the configuration
  of the original 5.1.1 environment.
* One of the :ref:`Cloud Infrastructure Controllers <cic-term>` (CICs)
  in the original 5.1.1 environment is deleted and moved to the
  6.1 environment. It is then installed in that environment as
  a primary controller side-by-side with the original 5.1.1 environment.
* All OpenStack platform services are put into :ref:`Maintenance Mode
  <db-backup-ops>` for the whole duration of the upgrade procedure to
  prevent user data loss and/or corruption.
* The state databases of all the upgradeable OpenStack components are
  copied to the new Controller and upgraded with the standard
  'database migration' feature of OpenStack.
* The Ceph cluster is reconfigured in such a way that the Monitors on
  the new 6.1 CICs replace the Monitors of the 5.1.1 environment,
  retaining the original IP addresses and configuration parameters.
* The 6.1 CIC is connected to the Management and External networks,
  while the original 5.1.1 ones are disconnected. The 6.1 CIC takes
  over Virtual IPs in the Management and Public :ref:`networks <logical-networks-arch>`.
* Control plane services on the Compute nodes in the 5.1.1 environment
  are upgraded to 6.1 without affecting the virtual server instances
  and workloads. After the upgrade, the Compute service reconnects to
  the 6.1 CICs.
* Compute nodes from the 5.1.1 environment work with the CICs from the
  6.1 environment, forming a temporary hybrid OpenStack environment
  that is only used to upgrade the Compute nodes one by one by
  re-assigning them to the 6.1 environment and re-installing them with
  the new version.
* Ceph OSD nodes from the 5.1.1 environment transparently switch to
  the new Monitors without actual data movement in the Ceph cluster.
* User data stored on OSD nodes must be preserved through
  re-installation of the nodes with a new release of the operating
  system and OpenStack services, and OSD nodes must connect to the
  Monitors without changing their original IDs and data set.

Every step requires certain actions. All of these actions are scripted
as subcommands of the upgrade script called 'octane'.

Prerequisites and dependencies
++++++++++++++++++++++++++++++

The procedure of upgrading Mirantis OpenStack from version 5.1.1 to 6.1
has certain prerequisites and dependencies. You need to verify that your
installation of Mirantis OpenStack meets these requirements.

Fuel installer
______________

A Mirantis OpenStack 5.1.1 environment must be deployed and managed by
the Fuel installer to be upgradeable. If you installed your environment
without leveraging Fuel, or removed the :ref:`Fuel Master node <fuel-master-node-term>`
from the installation after a successful deployment, you will not be
able to upgrade your environments using these instructions.

The upgrade scenario deviates from the standard sequence used by the
Fuel installer to deploy a Mirantis OpenStack environment. These
modifications to the behavior of the installer are implemented as
modifications to the :ref:`deployment tasks <0010-tasks-schema>` and
extensions to certain components of Fuel. Patches are applied to the
Fuel Master node as part of the 'preparation' phase of the upgrade
scenario. See the sections below for a detailed description of which
components are modified and why.

.. _architecture-constraints:

Architecture constraints
________________________

Make sure that your Mirantis OpenStack 5.1.1 environment meets
the following architecture constraints. Otherwise, these instructions
will not work for you:

+----------------------------------------------------+------------------+
| Constraint                                         | Complies?        |
+====================================================+==================+
| High Availability architecture                     |                  |
+----------------------------------------------------+------------------+
| Ubuntu 12.04 as an operating system                |                  |
+----------------------------------------------------+------------------+
| Neutron networking manager with OVS+VLAN plugin    |                  |
+----------------------------------------------------+------------------+
| Cinder virtual block storage volumes               |                  |
+----------------------------------------------------+------------------+
| Ceph shared storage for volumes and ephemeral data |                  |
+----------------------------------------------------+------------------+
| Ceph shared storage for images and object store    |                  |
+----------------------------------------------------+------------------+

Fuel upgrade to 6.1
___________________

In this guide we assume that the user upgrades the Fuel installer from
version 5.1.1 to 6.1. The upgrade of the Fuel installer is a standard
feature of the system. The upgraded Fuel retains the ability to manage
5.1.1 environments, which is leveraged by the environment upgrade
solution.

.. note::

   The upgrade path for the Fuel Master node is as follows: 5.1.1 to
   6.0, then 6.0 to 6.1. So you need to download two upgrade tarballs
   and apply them to your Master node one after another.

Additional hardware
___________________

The upgrade strategy requires installing a 6.1 environment that will
result in an OpenStack cluster running along with the original
environment. One of the Controller nodes from the original 5.1.1
environment will be deleted, added to the new 6.1 environment, and
reinstalled. This allows performing the upgrade with no additional
hardware.

.. note::

   The trade-off for using one of the existing controllers as the
   primary upgraded controller is that the 6.1 environment will
   not be highly available for some time during the maintenance
   window dedicated to the upgrade. Once the remaining controllers
   are moved from the 5.1.1 environment and reinstalled into the 6.1
   environment, its High Availability is restored.
pages/operations/upgrade/1010-quick-start.rst (new file)
@@ -0,0 +1,159 @@
.. index:: Upgrade Environment Quick Start

.. _Upg_QuickStart:

Upgrade Environment Quick Start
-------------------------------

.. CAUTION::

   Only use this section as a reference. Read the detailed sections
   below to understand the upgrade scenario before running any of
   the listed commands.

This section lists the commands required to upgrade a 5.1.1 environment
to version 6.1, with brief comments. For a detailed description of what
each command does and why, see the following sections:

* :ref:`Detailed upgrade procedure <upg_sol>`
* :ref:`Detailed description of commands <upg_script>`

.. CAUTION::

   Do not run the following commands unless you understand exactly
   what you are doing: they can completely destroy your OpenStack
   environment. Please read the detailed sections below carefully
   before you proceed with these commands.

Install the Upgrade Script
++++++++++++++++++++++++++

Run the following commands on the Fuel Master node to download and
install the Upgrade Script on the system:

::

  yum install -y git
  git clone -b stable/6.1 https://github.com/Mirantis/octane.git
  cd octane/octane/bin && ./octane prepare

Pick an environment to upgrade
++++++++++++++++++++++++++++++

Run the following command and pick an environment to upgrade from the
list:

::

  fuel2 env list

Note the ID of the environment and store it in a variable:

::

  export ORIG_ID=<ID>

Create an Upgrade Seed environment
++++++++++++++++++++++++++++++++++

Run the following command to create a new environment of version 6.1
and store its ID in a variable:

::

  SEED_ID=$(./octane upgrade-env $ORIG_ID)

Upgrade the first Controller
++++++++++++++++++++++++++++

Pick one of the Controllers in your environment by ID and note
that ID:

::

  fuel node list --env ${ORIG_ID} | awk -F\| '$7~/controller/{print($0)}'
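To capture just the numeric node ID instead of reading it off the table,
the first column of the same listing can be extracted. This is a sketch
only: the sample line below imitates the ``fuel node list`` table layout
assumed by the ``awk`` filter above, which may differ between Fuel
versions, so verify against your own output first.

```shell
# Sketch: pull the node ID out of a `fuel node list`-style line.
# The sample line is illustrative, not real command output.
sample='1 | ready | node-1 | 1 | 10.20.0.3 | 52:54:00:aa:bb:cc | controller |'
ID=$(echo "$sample" | awk -F\| '$7~/controller/{gsub(/ /,"",$1); print $1}')
echo "$ID"
```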
Use the ID of the Controller to upgrade it with the following command:

::

  ./octane upgrade-node $SEED_ID isolated <ID>

Upgrade State Database
++++++++++++++++++++++

After the first Controller in the 6.1 environment is deployed and
ready, run the following command to upgrade the state databases
of the OpenStack services:

::

  ./octane upgrade-db $ORIG_ID $SEED_ID

Upgrade Ceph cluster
++++++++++++++++++++

Run the following command to upgrade the Monitor node on the new
Controller with the state and configuration of the original Ceph
cluster:

::

  ./octane upgrade-ceph $ORIG_ID $SEED_ID

Switch control plane to 6.1
+++++++++++++++++++++++++++

Run the following command to switch the OpenStack environment to the
6.1 control plane:

::

  ./octane upgrade-cics $ORIG_ID $SEED_ID

Upgrade all Controllers
+++++++++++++++++++++++

Identify the remaining Controllers in the 5.1.1 environment by their IDs
(ID1, ID2, and so on):

::

  fuel node list --env ${ORIG_ID} | awk -F\| '$7~/controller/{print($0)}'

Run the following command to upgrade the remaining 5.1.1 Controllers
to version 6.1:

::

  ./octane upgrade-node $SEED_ID <ID1> <ID2>

Upgrade Compute and Ceph OSD nodes
++++++++++++++++++++++++++++++++++

Repeat the following command for every node in the 5.1.1 environment,
identified by ID:

::

  ./octane upgrade-node $SEED_ID <ID>
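The per-node step above lends itself to a small wrapper that walks a
list of node IDs one at a time. A hedged sketch, under the assumption
that a non-zero exit code from ``octane`` signals a failed node upgrade;
the IDs shown in the usage line are placeholders:

```shell
# Sketch: roll `octane upgrade-node` over a list of node IDs,
# stopping at the first failure.
upgrade_all() {
    seed_id=$1; shift
    for id in "$@"; do
        echo "upgrading node $id"
        ./octane upgrade-node "$seed_id" "$id" || return 1
    done
}
# usage: upgrade_all "$SEED_ID" 4 5 6
```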
Clean up the Fuel Master node
+++++++++++++++++++++++++++++

When no nodes remain in the 5.1.1 environment, run the following
command to restore the original state of the 6.1 Fuel Master node:

::

  ./octane cleanup-fuel

Delete the original 5.1.1 environment
+++++++++++++++++++++++++++++++++++++

After verifying the upgraded 6.1 environment, delete the
original 5.1.1 environment with the following command:

::

  fuel env --env $ORIG_ID --delete
pages/operations/upgrade/2000-solution-overview.rst (new file)
@@ -0,0 +1,407 @@
.. index:: Upgrade Solution

.. _Upg_Sol:

Solution Overview
-----------------

This section describes a solution that implements the upgrade strategy
outlined in the previous section. It gives a step-by-step script of the
procedure and explains every step of it. The chapters that follow
give detailed scripts with the exact commands to upgrade your cluster.

Hardware considerations
+++++++++++++++++++++++

For Mirantis OpenStack with the :ref:`High Availability Reference Architecture
<controller-arch>`, using at least 3 CICs is recommended. When you
install a 6.1 Seed environment in HA mode, you can start with only 1
Controller. This procedure assumes that the node for this Controller is
borrowed from the set of Controllers in the original 5.1.1
environment.

Preparations and prerequisites
++++++++++++++++++++++++++++++

Before starting the upgrade itself, make sure that your system complies
with the :ref:`architecture constraints <architecture-constraints>`
listed above. You also need to make some preparations to provide
prerequisites for the upgrade procedure. These preparations are
automated by the Upgrade Script, named ``octane``.

They include the installation of certain packages onto the Fuel Master
node and patching of the source code of Fuel components to modify the
Fuel installer behavior in a way suitable for the upgrade.

The upgrade procedure requires the following packages to be installed
on the Fuel Master node:

* `PSSH <https://code.google.com/p/parallel-ssh/>`_ – provides the ability
  to run shell commands in parallel on multiple hosts.
* ``patch`` – provides the ability to consistently modify the source code
  of components of the Fuel installer.
* ``python-pip`` – installer for Python packages, used to install
  extensions to Nailgun and the Fuel client.

A description of the modifications to the Fuel components is given below
in the sections dedicated to the corresponding steps of the upgrade that
require alternate behavior.

Create a 6.1 Seed environment
+++++++++++++++++++++++++++++

Our concept of the upgrade involves installing a CIC of version 6.1
side-by-side with the cloud being upgraded. We leverage the Fuel
installer to deploy nodes with version 6.1.

The way we create the upgraded environment is different from the way
an ordinary OpenStack cluster is installed by Fuel 6.1. This section
explains the specifics of the deployment of such a 'shadow' environment,
also referred to as the Upgrade Seed environment in this section.

Clone the configuration of the 5.1.1 environment
________________________________________________

The first step in the upgrade procedure is to install a new 6.1 Upgrade
Seed environment. The settings and attributes of the Seed environment
must match the settings of the original environment, with the changes
corresponding to the changes in the data model of Nailgun between
releases.

As part of the upgrade procedure automation, an extension to the Nailgun
API was developed. This extension implements a resource,
``/api/v1/cluster/<:id>/upgrade/clone``, which creates an Upgrade Seed
environment based on the settings of the original one
(identified by the ``id`` parameter).

A ``POST`` request to that resource must be sent to initiate the
upgrade procedure.

An extension to the Fuel client was developed that allows sending
the request from the CLI of the Fuel Master node.

Both the Nailgun and Fuel client extensions are installed in the
:ref:`preparation phase <upg_prep>` of the upgrade procedure.
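As an illustration of what the client extension does under the hood,
here is a hypothetical sketch of the request it would send. The endpoint
path comes from the text above; the Fuel Master host, the Nailgun port
8000, and the auth header are assumptions and may differ in your
installation, so the ``curl`` line is shown rather than executed:

```shell
# Hypothetical sketch of the clone request; shown, not executed.
ENV_ID=1
CLONE_URL="http://10.20.0.2:8000/api/v1/cluster/${ENV_ID}/upgrade/clone"
echo "curl -X POST -H \"X-Auth-Token: \$TOKEN\" $CLONE_URL"
```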
Install the 6.1 CIC
+++++++++++++++++++

When the Fuel Master node and the Upgrade Seed environment are prepared,
you can start upgrading your 5.1.1 environment. First you need to pick
a Controller node in the original environment. Automation scripts for
the upgrade procedure will move that node into the Upgrade Seed
environment and reinstall it as a primary controller in that
environment.

The Cloud Controller in a Seed environment is deployed by the Fuel
installer. There are several restrictions on the deployment process and
the final state of the installed CIC due to the upgrade requirements:

* The 6.1 Controller must have the same IP address as the Controller in
  the original environment that was picked for the upgrade.

* The 6.1 Controller must be isolated from the Management and Public
  networks on the data link layer (L2) to avoid IP conflicts with the
  5.1.1 Controllers and clustering glitches with Corosync and Galera
  in the 5.1.1 environment.

The nature of the network isolation defines many aspects of the
deployment process. To understand how it can be implemented, we need to
analyze the configuration of the internal networking of the Cloud
Infrastructure Controller.

Fuel creates virtual bridges that connect the host to networks with
different roles. Physical interfaces (e.g. ``eth1``) are added to those
bridges, creating L2 connections to the corresponding networks.

On the other hand, an L3 IP address is assigned to the virtual bridge
for the network of a given type. The virtual bridge that connects to the
Management network is called ``br-mgmt``, the one for the Public
network - ``br-ex``, and so on.

To install the 6.1 Controller in isolation from these networks, the
script configures the deployment information for the node in such a way
that physical interfaces are not added to certain virtual bridges when
OpenStack is being deployed.

This ensures that the CIC has no physical connection to the Management
and Public networks.

Using Fuel for isolated deployment
__________________________________

Fuel is responsible for the assignment of IP addresses to logical
interfaces in the Management, Public, and other types of networks.
The environment cloning procedure copies the IP range environment
settings for you. Specific address allocations can be done by
editing the deployment information for nodes.

Fuel configures Linux bridges and interfaces during the deployment of an
environment. This configuration is managed by Puppet and is defined in
the deployment settings. You can modify these settings to disable
the addition of a certain physical interface to a bridge.

For the deployment to succeed with the described schema, you need to
ensure that no network checks break the installation, by disabling the
check for connectivity to the default gateway. The Fuel installer
expects the gateway to be in the Public network, which is not directly
accessible from our isolated Controller. The exact commands to disable
the check are listed in the :ref:`Upgrade Script <upg_script>` chapter.

Initial state of the Ceph cluster
_________________________________

By default, the Fuel installer creates a number of resources in the
installed cloud, used to verify the deployment. Among these resources,
Fuel uploads a test VM image to the Glance store. Uploading an image
requires that the Glance store is fully operational at the time of the
upload. If Ceph is used to store Glance images (as per the Architecture
constraints section above), then it must have an OSD node to be able to
store data.

According to the upgrade scenario, the Ceph cluster must be installed in
a way that allows for replacing the original Monitors of the 5.1.1
environment with the new Monitors when the 6.1 CICs take over. There is
a way to install a cluster without OSD nodes and thus rule out the
rebalancing and data movement once the original OSD nodes start joining
the cluster. However, it requires that the upload of the test VM image
by Fuel is disabled before the deployment. We achieve this by
disabling the corresponding tasks in the deployment graph:
``upload_cirros`` and ``check_ceph_ready``.

Maintenance Mode
++++++++++++++++

During the installation of the 6.1 Seed cloud, the original 5.1.1
environment continues to operate normally. The Seed CIC does not
interfere with the original CICs, and the latter can continue operating
through the initial stages of the upgrade.

However, when it comes to the upgrade of the state databases of the
OpenStack services, you need to make sure that no changes are made
to the state data. Maintenance mode must be started before you download
data from the state database of the 5.1.1 OpenStack environment.
Maintenance mode should last at least until the database upgrade is
finished and the 6.1 CICs take over the environment.

Note that Maintenance mode implemented according to these instructions
does not impact operations of existing virtual server instances and
other resources. It only affects the OpenStack API endpoints, which are
the only way for the end user to change the state data of the cluster.

The High Availability architecture of Mirantis OpenStack provides access
to all OpenStack APIs at a single VIP address via the HAProxy load
balancer. You need to configure the HAProxy server to return code
``HTTP 503`` on all requests to the services listening on the Public VIP
in the 5.1.1 environment. This prevents users from changing the state of
the virtual resources in the original cloud, which could be lost after
the data is downloaded from the DB.
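One way to achieve this is to disable every backend server for a
listener, since HAProxy answers requests with ``503 Service
Unavailable`` when a backend has no usable server. This is a minimal
illustrative fragment only; the listener name, addresses, and ports are
placeholders, not taken from the Fuel-generated ``haproxy.cfg``:

```
# Illustrative fragment, not the actual Fuel-generated configuration.
listen nova-api-public
    bind 172.16.0.2:8774
    # With every server disabled, HAProxy returns HTTP 503 to clients.
    server node-1 192.168.0.3:8774 check disabled
```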
On the 6.1 CIC, you must disable all OpenStack component services to
make sure that they do not write to the state database while it is being
upgraded. Otherwise, this might lead to data corruption and loss.

All the detailed commands used to put the environments into Maintenance
mode are listed in the Upgrade Script chapter below.

Upgrade databases
+++++++++++++++++

The database upgrade is a standard procedure provided by OpenStack
upstream as its main upgrade feature. It converts the data in the state
databases of all OpenStack component services from the previous release
version schema to the new one. It is necessary to fully preserve the
status of the virtual resources provided by the cloud through the
upgrade procedure.

The data is dumped from the MySQL database on one of the CIC nodes in
the 5.1.1 environment. A text dump of the database is compressed and
sent over to the CIC node in the 6.1 environment.

After uploading the data to MySQL on the 6.1 CIC, use the standard
OpenStack methods to upgrade the database schema to the new release.
The specific commands that upgrade the schema for particular components
of the platform are listed in the Upgrade Script chapter below.
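The dump-and-transfer step can be pictured as follows. This is a hedged
sketch only: the hostnames are placeholders, and the real procedure is
automated by ``octane upgrade-db``, whose exact commands appear in the
Upgrade Script chapter.

```shell
# Sketch: stream a compressed dump from the 5.1.1 CIC to the 6.1 CIC.
# Hostnames are placeholders; `octane upgrade-db` automates this step.
dump_and_ship() {
    src=$1 dst=$2
    ssh "$src" "mysqldump --all-databases | gzip" \
        | ssh "$dst" "gunzip | mysql"
}
# usage: dump_and_ship node-1.old node-1.new
```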
|
||||
Configure Ceph Monitors
|
||||
+++++++++++++++++++++++
|
||||
|
||||
Architecture constraints for the upgrade procedure define that in the
|
||||
upgradeable configuration Ceph is used for all types of storage in the
|
||||
OpenStack platform: ephemeral storage, permanent storage, object
|
||||
storage and Glance image store. Ceph Monitors are essential for the
|
||||
Ceph cluster and must be upgraded seamlessly and transparently.
|
||||
|
||||
By default, Fuel installer creates a new Ceph cluster in the 6.1 Seed
|
||||
environment. You need to copy the configuration of the cluster from
|
||||
5.1.1 environment to override the default configuration. This will
|
||||
allow OSD nodes from 5.1.1 environment to switch to the new Monitors
|
||||
when 6.1 CICs take over the control plane of the upgraded environment.
|
||||
|
||||
The Upgrade Script synchronizes the configuration of Ceph Monitors in
|
||||
the 5.1.1 and 6.1 clusters during the upgrade procedure.
|
||||
|
||||
Upgrade CICs
|
||||
++++++++++++
|
||||
|
||||
This step is called 'Upgrade', as it concludes with a new CIC of
|
||||
version 6.1 listening on the same set of IP addresses as the original
|
||||
5.1.1 CICs. However, from the technical standpoint it is more of a
|
||||
switch than an upgrade. 6.1 Controller takes over the Virtual IP
|
||||
addresses of 5.1.1 environment, while the original CICs are disconnected
|
||||
from all networks except Admin. The sections that follow explain what
|
||||
happens and why at every stage of the upgrade process.
|
||||
|
||||
Start OpenStack services on 6.1 Controller
|
||||
__________________________________________
|
||||
|
||||
As part of Maintenance mode, OpenStack component services were shut
|
||||
down on 6.1 CIC before upgrading the database. These services include
|
||||
Nova, Glance, Keystone, Neutron and Cinder. Now it is time to restore
|
||||
them with a new data set created by the database migration procedure.
|
||||
This operation basically reverts the shutdown operation described above.
|
||||
It is automated in the Upgrade Script.
|
||||
|
||||
Note that Neutron restart involves creation of tenant networking
|
||||
resources on CIC nodes where Neutron agents run. This process can
|
||||
take longer than starting all other services, so check it carefully
|
||||
before you proceed with the upgrade.
|
||||
|
||||
Delete ports on 5.1.1 Controllers
|
||||
_________________________________
|
||||
|
||||
Before 6.1 CIC can take over the virtual network addresses in the
|
||||
upgraded environment, you need to disconnect 5.1.1 CICs to release
|
||||
those addresses. Based on the CICs networking schema described above,
|
||||
to do that you need to delete patch ports from certain OVS bridges.
|
||||
|
||||
This procedure is automated by the upgrade script and executed as part
|
||||
of the ``upgrade-cics`` subcommand.
|
||||
|
||||
Reconnect 6.1 Controller
|
||||
________________________
|
||||
|
||||
After 5.1.1 CICs are disconnected from all networks in the environment,
|
||||
6.1 CIC can take over their former VIP addresses. The take-over
|
||||
procedure adds physical ports to the appropriate bridges and brings
|
||||
the ports up.
|
||||
|
||||
Update 'nova-compute' package on 5.1.1 Compute nodes
|
||||
____________________________________________________
|
||||
|
||||
One of the main non-functional requirements to the upgrade procedure is
|
||||
to minimize the impact of the upgrade on the virtual resources,
|
||||
in the first place, virtual servers. The impact includes downtime of
|
||||
the virtual machine itself, up to the interruption of the virtualization
|
||||
process (i.e. qemu-kvm process), and network disconnection time due to
|
||||
the upgrade of the networking data and/or control plane software.
|
||||
|
||||
The downtime of the virtualization process occurs when a VM is shut
|
||||
down due to reboot of hypervisor host as part of an upgrade of
|
||||
an operating system. To avoid this, you could leverage live migration
|
||||
over the shared storage (Ceph). However, live migration between 2014.1
|
||||
and 2014.2 versions of OpenStack is explicitly disabled by patch
|
||||
`<https://review.openstack.org/#/c/91722/>`_.
|
||||
|
||||
This issue can be resolved by upgrading the 'nova-compute' package
|
||||
to 2014.2 release without upgrading data-plane software, i.e. hypervisor
|
||||
kernel and operating system packages. The upgrade of Nova Compute
|
||||
involves an upgrade of its dependencies, including Neutron L2 agent.
|
||||
After the upgrade, the services are restarted and reconnected to the
|
||||
new 6.1 CIC.
|
||||
|
||||
Note that the in-place upgrade of control plane services does not impact
|
||||
workloads, but the restart of Neutron L2 agent disrupts network
|
||||
connectivity of VMs for a relatively short period of time. This
|
||||
disruption can be minimized by adding the 'soft restart' capability to
|
||||
Neutron L2 OVS agent, which reloads the agent without resetting the OVS
|
||||
settings managed by it.
|
||||
|
||||
Installation of new versions of OpenStack packages without re-installing
|
||||
the whole operating system leaves the hypervisor host in the 'unclear'
|
||||
state from the standpoint of the Mirantis OpenStack versioning system.
|
||||
This is acceptable for a short period of time while the rolling
|
||||
upgrade of hypervisor hosts is in progress.
|
||||
|
||||
Upgrade hypervisor host
+++++++++++++++++++++++

Hypervisor hosts provide their physical resources to run virtual
machines. Physical resources are managed by hypervisor software,
usually the 'libvirt' and 'qemu-kvm' packages. With the KVM hypervisor,
all virtualization tasks are handled by the Linux kernel. Open vSwitch
provides L2 network connectivity to virtual machines. Together, the
kernel, the hypervisor, and OVS constitute the data plane of the
Compute service.

You can upgrade data-plane software on a hypervisor host (or Compute
node) by re-installing the operating system to a new version with the
Fuel installer. However, the deployment process takes time and impacts
the virtual machines. To minimize the impact, leverage live migration
to move all virtual machines off the Compute node before you start
upgrading it. You can do that because the Compute node's control plane
has already been upgraded to 6.1.

The Nailgun API extension installed by the Upgrade Script allows moving
a node to the Upgrade Seed environment at runtime. It preserves the ID
of the node, its hostname, and the configuration of its disks and
interfaces.

When a node is added to the upgraded environment, the script provisions
the node. When the provisioning is finished, the script runs the
deployment of the node. As a result of the deployment, the node is
added to the environment as a fully capable Mirantis OpenStack 6.1
Compute node.

The upgrade of a single Compute node must be repeated for all the nodes
of the 5.1.1 environment in a rolling fashion. VMs must be gradually
moved from the remaining 5.1.1 Compute nodes to the 6.1 ones with live
migration.

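The rolling pattern described above can be sketched as a simple loop
over node IDs (the IDs here are hypothetical, and the loop assumes the
VMs have already been migrated off each node; the ``octane
upgrade-node`` command is described in detail in the Upgrade Script
chapter)::

   # Hedged sketch: upgrade 5.1.1 Compute nodes one at a time.
   # Node IDs 7, 8, 9 are placeholders for your actual node IDs.
   for NODE_ID in 7 8 9; do
       ./octane upgrade-node ${SEED_ID} ${NODE_ID}
   done
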
Upgrade Ceph OSD node
+++++++++++++++++++++

In a Ceph cluster, all data is stored on OSD nodes. These nodes have
one or more storage devices (or disk partitions) dedicated to Ceph data
and run the ceph-osd daemon that is responsible for I/O operations on
that data.

Upgrading an OSD node via Fuel means that the node must be redeployed.
Given the requirements to minimize end-user impact and to avoid moving
data across the OpenStack cluster being upgraded, we developed a
procedure to redeploy Ceph OSD nodes with the original data set.
Although Fuel by default erases all data from the disks of a node it
deploys, you can patch and configure the installer to keep the Ceph
data on the devices intact.

There are several stages of the deployment at which data is erased from
the disks of a Ceph OSD node. First, when you delete a Ceph node, the
Nailgun agent on that node erases all non-removable disks by writing
zeroes to the first 10MB of every disk. Then, at the provisioning
stage, the Ubuntu installer creates partitions on the disks and formats
them according to the disk configuration provided by the Fuel
orchestration components.

As part of the upgrade procedure, we provide patches for the components
involved in volume management that allow keeping data on certain
partitions or disks. These patches are applied automatically by the
Upgrade Script.

Disable rebalance
_________________

By default, Ceph initiates a rebalance of data when an OSD node goes
down. Rebalancing means that replica data is moved between the
remaining nodes, which takes significant time and impacts end users'
virtual machines and workloads. We disable the rebalance and the
recalculation of CRUSH maps when an OSD node goes down. When a node is
reinstalled, its OSD reconnects to the Ceph cluster with the original
data set.

Finalizing the upgrade
++++++++++++++++++++++

When all nodes are reassigned to the 6.1 environment and upgraded,
it is time to finalize the upgrade procedure with a few steps that
allow the Fuel installer to manage the upgraded environment just like a
vanilla 6.1 environment installed from scratch:

* revert all patches applied to Fuel components;
* delete the original environment.

29
pages/operations/upgrade/3000-upgrade-script.rst
Normal file
@@ -0,0 +1,29 @@

.. index:: Upgrade Script

.. _Upg_Script:

Upgrade Script Explained
------------------------

In this chapter we explain exactly how we implement the steps described
in :ref:`Solution Overview<upg_sol>`. Sections of this chapter contain
appendices with listings of the commands and scripts that implement
each particular action.

.. IMPORTANT::

   * All commands in this section must be executed on the Fuel Master
     node, unless specified otherwise.

   * You must use the Bourne-again shell (``bash``) to execute
     commands.

   * All commands must be executed one after another in a single
     continuous ``bash`` session. Otherwise, the required variables
     will not be set, and most commands will fail.

.. include:: /pages/operations/upgrade/3010-upgrade-prerequisites.rst
.. include:: /pages/operations/upgrade/3020-prepare-fuel-master.rst
.. include:: /pages/operations/upgrade/3030-clone-env-settings.rst
.. include:: /pages/operations/upgrade/3040-install-seed.rst
.. include:: /pages/operations/upgrade/3050-upgrade-cics.rst
.. include:: /pages/operations/upgrade/3060-upgrade-node.rst

103
pages/operations/upgrade/3010-upgrade-prerequisites.rst
Normal file
@@ -0,0 +1,103 @@

.. index:: Upgrade Prereq

.. _Upg_Prereq:

Upgrade Prerequisites
+++++++++++++++++++++

There are certain prerequisites for upgrading a Mirantis OpenStack
environment managed by the Fuel installer. You need to make sure that
all the requirements are met before proceeding with the procedure.
This section describes the prerequisites and gives a list of commands
used to check whether the requirements are met.

Versions of Fuel installer and environment
__________________________________________

First, check the versions of the Fuel installer and of the environment
you have picked for the upgrade. The version of Fuel must be 6.1. The
version of the environment must be 5.1.1. You can check the versions
using the Fuel Web UI or the CLI client to the Fuel API.

Configuration of environment
____________________________

The configuration of the environment picked for the upgrade must
comply with the architecture constraints of the upgrade procedure.
You can check the applicability of the procedure to your configuration
via the Fuel Web UI.

Check Upgrade Prerequisites
___________________________

*Pick environment to upgrade*

Select the environment to upgrade from the list of environments in the
Fuel CLI and assign its ID to the ``ORIG_ID`` variable:

::

   fuel env
   <select from list of environments>
   export ORIG_ID=<enter ID of environment here>

*Check installer version in Fuel Web UI*

Open the Fuel Web UI in your browser, log in, and check the version
number of the Fuel installer in the lower right corner of the page:

.. image:: /_images/upgrade/fuel-version-check.png

*Check installer version in Fuel CLI*

Run the following command on your Fuel Master node to verify the
version of the Fuel installer:

::

   fuel release | grep available | grep -o 2014.2-6.1

You must see the following lines in the output:

::

   2014.2-6.1
   2014.2-6.1

*Check environment version in the Fuel Web UI*

Click on the environment you have picked for the upgrade. Check the
environment version in the status line above the list of nodes:

.. image:: /_images/upgrade/env-version-check.png

*Check environment version in Fuel CLI*

Run the following command on your Fuel Master node to verify the
version of the environment you have picked for the upgrade:

::

   fuel env | awk -F\| '$1~/'$ORIG_ID'/{print $5}' | tr -d ' ' \
       | xargs -I@ bash -c "fuel release | awk -F\| '\$1~/@/{print \$5}'" | tr -d ' '

You must see the following line in the output:

::

   2014.1.3-5.1.1

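The pipeline above is dense: the first ``awk`` extracts the
``release_id`` column for the chosen environment from the ``fuel env``
table, and the second stage looks that release up in the ``fuel
release`` table to print its version. The extraction step can be seen
in isolation on a captured sample of output (the table below is
illustrative, not real ``fuel env`` output)::

   # Illustrative: extract column 5 (release_id) for environment ID 3
   # from a hypothetical 'fuel env' table captured in a string
   printf 'id | status      | name  | mode       | release_id | pending\n3  | operational | MyEnv | ha_compact | 2          | \n' \
       | awk -F\| -v id=3 '$1~id{print $5}' | tr -d ' '
   # prints: 2
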
*Check configuration of environment*

Open the Fuel Web UI in your browser. Log in, click on the environment
you would like to upgrade, and select the *Settings* tab. Verify that
the following fields contain these values:

* Hypervisor type: **KVM**
* Ceph RBD for volumes: **Enabled**
* Ceph RBD for images: **Enabled**
* Ceph RBD for ephemeral volumes: **Enabled**
* Ceph RadosGW for objects: **Enabled**

Navigate to the *Networks* tab and check the second line after the tab
title. It must state *Neutron with VLAN segmentation*.

124
pages/operations/upgrade/3020-prepare-fuel-master.rst
Normal file
@@ -0,0 +1,124 @@

.. index:: Prep Fuel

.. _Upg_Prep:

Prepare Fuel Master
+++++++++++++++++++

Before you start the upgrade procedure, you must prepare your Fuel
installer. There are several changes to the default behavior and
configuration of the installer that the upgrade functions depend on.
The configuration changes include the installation of additional
packages and the setting of environment variables.

Modifications to the default behavior are implemented as patches and
are applied to multiple components of Fuel.

In this section we briefly describe which components are affected by
the changes and why, and explain how to apply the patches correctly.
Explanations of specific patches and their purpose are given below in
the sections dedicated to the steps that make use of those patches:

* :ref:`Patch to Nailgun<upgrade-patch-nailgun>`
* :ref:`Patch to Cobbler snippet<upgrade-patch-cobbler>`
* :ref:`Patch to Fuel Library modules<upgrade-patch-fuel-lib>`

See the instructions on how to apply the patches in
:ref:`Commands To Prepare The Fuel Master Node<upgrade-patch-commands>`.

Install the Upgrade Script
__________________________

The upgrade logic is automated in the script named ``octane``.
You need to download and install this script from its Git repository
onto your Fuel Master node.

Install packages
________________

Changes to the Fuel installer configuration include the installation
of additional packages. These packages are present in the standard Fuel
repository, but are not installed by default. Utilities provided by
these packages are used by the upgrade functions at different stages of
the process. The upgrade procedure requires the following packages to
be installed on the Fuel Master node:

* `pssh`
* `patch`
* `postgresql.x86_64`

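Since all three packages live in the standard Fuel repository, they can
be installed with a single ``yum`` command (a sketch, assuming a stock
Fuel Master node with its default repositories configured)::

   # Assumes the standard Fuel Master repositories are configured
   yum install -y pssh patch postgresql.x86_64
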
.. _upgrade-patch-nailgun:

Patch to Nailgun
________________

This patch allows Nailgun to handle the disk metadata parameter that
identifies a disk as used for a Ceph OSD device. This parameter is
passed to Cobbler and is used to disable the erasing and formatting of
the device.

.. _upgrade-patch-cobbler:

Patch to Cobbler
________________

This patch allows Cobbler to identify the disks and partitions used
for Ceph OSD devices and preserve data on those partitions through the
installation process. It modifies the ``pmanager.py`` module.

.. _upgrade-patch-fuel-lib:

Patches to Fuel Library
_______________________

When installing a CIC, Puppet uses the Fuel Library of modules to
configure the services and devices on the node. This includes the
initialization of disks that are used for Ceph OSD. To preserve data,
we need to change the module to skip the initialization of a particular
disk depending on its metadata parameters. The first patch to the Fuel
Library adds handling for the 'keep' parameter in disk metadata.

We also need to disable the creation of test networks by Fuel.
Unfortunately, it is impossible to do so with pure Puppet
configuration, so we need to modify the manifest to allow an
empty ``predefined_nets`` list.

Prepare environment variables
_____________________________

Several variables need to be set before you start the upgrade
procedure. They will be used throughout the whole process. These
variables include the ID and name of the environment picked for the
upgrade.

.. _upgrade-patch-commands:

Commands To Prepare The Fuel Master Node
________________________________________

To download the script, first install the ``git`` version control
system onto the Fuel Master node:

::

   yum install -y git

Once you have ``git`` installed, clone the repository with the script
and its libraries into the current directory:

::

   git clone -b stable/6.1 https://github.com/Mirantis/octane.git

Now change to the directory that contains the executable files,
including the ``octane`` script itself:

::

   cd octane/octane/bin

The upgrade script prepares the Fuel Master node with a single command:

::

   ./octane prepare

28
pages/operations/upgrade/3030-clone-env-settings.rst
Normal file
@@ -0,0 +1,28 @@

.. index:: Clone Env

.. _Upg_Clone:

Clone Environment settings
++++++++++++++++++++++++++

During the upgrade, a new Fuel environment of version 6.1 is created
with a copy of the Network parameters and Settings from the 5.1.1
environment targeted by the upgrade. Generated configuration, such as
the credentials of the service users, is also copied from the original
environment wherever possible.

Environment clone command
_________________________

Run the following command to create the Upgrade Seed environment:

::

   ./octane upgrade-env ${ORIG_ID}

Store the ID of the Upgrade Seed environment displayed as a result of
this command into a variable:

::

   export SEED_ID=<ID>

53
pages/operations/upgrade/3040-install-seed.rst
Normal file
@@ -0,0 +1,53 @@

.. index:: Install Seed

.. _Upg_Seed:

Install Controller
++++++++++++++++++

The installation of a CIC node is performed by standard Fuel installer
actions and is split into two distinct steps.

#. Prepare the node settings and use the provisioning feature of Fuel
   to install an operating system and configure disks on the node.
#. Make modifications to the deployment information of the environment
   that affect all CIC nodes, and deploy the OpenStack platform onto
   them.

Modified deployment settings
____________________________

To deploy a 6.1 CIC node properly, we need to prepare the deployment
information so that Fuel configures the nodes and OpenStack services
with the following modifications:

#. Disable checking access to the default gateway in the Public network
#. Skip adding physical interfaces to Linux bridges
#. Skip the creation of the 'default' Fuel-defined networks in Neutron
#. Change the default gateway addresses to the address of the Fuel
   Master node

Deployment settings can be downloaded from the Fuel API as a set of
files. The upgrade script updates the settings by changing those files
and uploading the modified information back via the Fuel API.

The upgrade script keeps the deployment information for the environment
in a cache directory (by default, ``/tmp/octane/deployment``). The
deployment settings for the nodes are stored in the subdirectory
``deployment_<cluster-id>``, where ``<cluster-id>`` is the ID of the
environment.

The deployment tasks are kept in the subdirectory
``cluster_<cluster-id>`` of the same cache directory. You can use them
to check that the deployment is properly configured.

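For example, to review what the script will upload for the Upgrade
Seed environment, you can list the cached files (a sketch; the paths
assume the default cache directory mentioned above)::

   # Inspect the cached per-node deployment settings and the
   # deployment tasks for the Upgrade Seed environment
   ls /tmp/octane/deployment/deployment_${SEED_ID}
   ls /tmp/octane/deployment/cluster_${SEED_ID}
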
Install Controller commands
___________________________

The upgrade automation script upgrades a controller that exists in the
original 5.1.1 environment. Select which controller you would like to
upgrade and run the following command, replacing ``<NODE-ID>`` with the
actual ID of the 5.1.1 Controller you would like to upgrade:

::

   ./octane upgrade-node ${SEED_ID} isolated <NODE-ID>

153
pages/operations/upgrade/3050-upgrade-cics.rst
Normal file
@@ -0,0 +1,153 @@

.. index:: Upgrade Controllers

.. _Upg_CICs:

Switch to 6.1 Control Plane
+++++++++++++++++++++++++++

This section describes how the Upgrade Script switches the control
plane of the OpenStack cloud being upgraded from version 5.1.1 to 6.1.
The control plane of the OpenStack cloud consists of all services
running on the Controllers and of the OpenStack services on the Compute
nodes: ``nova-compute`` and ``neutron-plugin-openvswitch-agent``.

To switch the Controller services, the script transfers state data for
those services from the original Controllers to the 6.1 Seed
Controllers and swaps the Controllers' connections to the Management
and External networks.

To switch the Compute services, the Upgrade Script updates the version
of the packages that provide the corresponding services.

Maintenance mode
________________

To prevent the loss of data in the OpenStack state database, API
interaction with the environment must be disabled. This mode of
operation is also known as :ref:`Maintenance Mode <db-backup-ops>`.

In maintenance mode, all services that write to the DB are disabled.
All communications to the control plane of the cluster are also
disabled. VMs and other virtual resources remain able to operate as
usual.

.. note::

   Maintenance Mode is set up automatically by the Upgrade Script as
   soon as you start the upgrade of the state database. Make sure to
   carefully plan the maintenance window for that time and inform your
   users in advance.

Database migration
__________________

Before the databases can be upgraded, all OpenStack services on the
6.1 Controller must be stopped to prevent corruption of the metadata.
The upgrade script uses the standard Ubuntu startup scripts from
``/etc/init.d`` on the controllers to shut down the services.

Databases are dumped in text format from the MySQL server on the 5.1.1
CIC, copied to the 6.1 CIC, and loaded into the upgraded MySQL server.
Standard OpenStack tools then upgrade the structure of the databases
while preserving the data itself, via SQLAlchemy-powered DB migrations.

Run the following command to set up Maintenance Mode and immediately
start upgrading the state databases of the OpenStack services:

::

   ./octane upgrade-db ${ORIG_ID} ${SEED_ID}

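The 'standard OpenStack tools' mentioned above are each service's own
schema migration utility. On the 6.1 CIC, the per-service migrations
look roughly like the following (a sketch for illustration only; the
exact invocations are an assumption, and the script drives the
migrations for you, so there is no need to run these by hand)::

   # Per-service schema migrations, as shipped with the Juno release
   keystone-manage db_sync
   glance-manage db_sync
   nova-manage db sync
   neutron-db-manage --config-file /etc/neutron/neutron.conf upgrade head
   cinder-manage db sync
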
Upgrade Ceph cluster
____________________

To replace the Ceph Monitors at the same IP addresses, the upgrade
script must preserve the cluster's identity and auth parameters. It
copies the configuration files, keyrings, and state directories from
the 5.1.1 CICs to the 6.1 CICs and uses the Ceph management tools to
restore the cluster identity.

Run the following command to replicate the configuration of the Ceph
cluster:

::

   ./octane upgrade-ceph ${ORIG_ID} ${SEED_ID}

Upgrade CICs
____________

The following section describes the procedure for replacing the
Controllers of the 5.1.1 environment with the Controllers of the 6.1
environment and then upgrading the 5.1.1 Controllers.

When the DB upgrade is finished, all OpenStack services on the 6.1 CIC
are started using Pacemaker and Upstart. The upgrade script then
disconnects the 5.1.1 CICs from the Management and Public networks by
removing the patch ports between the logical interfaces of the
respective networks and the physical interfaces connected to the
network media. For example, if a 5.1.1 CIC is connected to the
Management network via the ``eth1`` interface, the configuration of the
logical bridge looks as follows:

::

   ovs-vsctl show
   ...
       Bridge br-mgmt
           Port "br-mgmt--br-eth1"
               trunks: [0]
               Interface "br-mgmt--br-eth1"
                   type: patch
                   options: {peer="br-eth1--br-mgmt"}
           Port br-mgmt
               Interface br-mgmt
                   type: internal
       Bridge "br-eth1"
           Port "eth1"
               Interface "eth1"
           Port "br-eth1--br-mgmt"
               trunks: [0]
               Interface "br-eth1--br-mgmt"
                   type: patch
                   options: {peer="br-mgmt--br-eth1"}
           Port "br-eth1"
               Interface "br-eth1"
                   type: internal
   ...

Here ``br-mgmt--br-eth1`` is the patch port that the script deletes
to disconnect the host from the Management network.

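In terms of OVS commands, deleting that patch port boils down to the
following (a sketch using the bridge and port names from the example
above; the script performs the equivalent operation for both the
Management and Public bridges)::

   # Disconnect br-mgmt from the physical bridge by removing its
   # side of the patch-port pair
   ovs-vsctl del-port br-mgmt br-mgmt--br-eth1
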
On the 6.1 CIC, the physical interface must be added to the Linux
bridge corresponding to the Management network. This allows the Compute
nodes to switch transparently from the old to the upgraded control
plane without the need to reconfigure and renumber every service.

Upgrade Compute node control plane
__________________________________

To ensure minimal impact on the end users' resources, we leverage live
migration to move all virtual server instances off a node prior to its
upgrade.

In Mirantis OpenStack 6.1, live migration is only possible between
Compute services of a similar version. To solve this, we split the
control plane and data plane upgrades of the hypervisor node. First,
the OpenStack services running on all hypervisors (that is,
nova-compute and the Neutron L2 agent) are upgraded using the Ubuntu
package manager. An update of the configuration files is also required.
This makes it possible to use the API of the 6.1 CICs to live migrate
all VMs from a hypervisor node to other hosts and prepare it for the
data plane upgrade.

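On a single 5.1.1 Compute node, this control-plane package upgrade is
conceptually equivalent to the following (a sketch; the package names
match the Ubuntu packages for the services named above, and the Upgrade
Script performs this step for you across all Compute nodes)::

   # Refresh the package index and pull the 6.1 versions of the
   # control-plane services on the compute node
   apt-get update
   apt-get install -y nova-compute neutron-plugin-openvswitch-agent
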
The Upgrade Script automatically updates the version of the Compute and
Networking services on all Compute nodes in the original 5.1.1
environment when you execute the ``upgrade-cics`` command listed below.

Commands to switch the Control Plane
____________________________________

Run the following command to switch from the 5.1.1 to the 6.1 Control
Plane:

::

   ./octane upgrade-cics ${ORIG_ID} ${SEED_ID}

114
pages/operations/upgrade/3060-upgrade-node.rst
Normal file
@@ -0,0 +1,114 @@

.. index:: Upgrade Node

.. _Upg_Node:

Upgrade Node
++++++++++++

A node upgrade is essentially a reinstallation of the operating system
and the OpenStack platform services. We need to delete a node from the
5.1.1 environment, assign it to the 6.1 Seed environment, then
provision it and deploy OpenStack on it.

By default, the Fuel installer erases all data from the disks of the
node, creates a new partition structure, installs an operating system,
and deploys OpenStack.

Depending on the roles the node has in the 5.1.1 environment, we might
need to change this behavior. For example, to upgrade a Ceph OSD node,
we need to make Fuel keep the data on the Ceph partitions of that node.
To upgrade Compute nodes, we need to use live migration to move the
users' VMs to other hypervisor hosts. There are also several steps in
the upgrade procedure that are common to both supported roles
('compute' and 'ceph-osd'). This section describes these common steps
and explains how to use the Upgrade Script to upgrade a node with any
of the supported roles.

Prepare Ceph OSD node to upgrade
________________________________

Preparations for the upgrade of a Ceph OSD node include steps to
minimize data transfers inside the Ceph cluster during the upgrade.
They also aim to ensure that the data on the OSD devices is kept
intact.

From the impact standpoint, the optimal solution is to minimize data
transfers over the network during the upgrade of the Ceph cluster.
Ceph normally rebalances its data when an OSD node is down. However,
since the described procedure preserves the Ceph data on the node, the
rebalance must be turned off. We use the standard Ceph flag ``noout``
to disable the rebalance on a node outage.

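Setting and clearing the flag is done with the standard Ceph CLI
(shown here for reference, run against the cluster from a monitor
node)::

   # Prevent Ceph from marking down OSDs 'out' and rebalancing
   ceph osd set noout

   # After the node is back and its OSDs have rejoined, clear the flag
   ceph osd unset noout
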
The Fuel installer has an agent on every node under its management.
This agent, known as the MCollective agent, performs lifecycle
management actions on the node. When the node is deleted from the
original 5.1.1 environment, the agent erases the first 10MB of data on
all disks of the node. We need to disable this erasure for the Ceph
OSD devices. We developed a patch that, when applied on the node, adds
the corresponding logic to the MCollective agent.

Prepare Compute node to upgrade
_______________________________

A Compute node runs virtual machines in hypervisor processes. To
satisfy the requirement to minimize the downtime of virtual resources,
we must ensure that the VMs are moved off the node in preparation for
the reinstallation, using the most seamless migration method available.
This move is referred to as **evacuation**.

Using Ceph shared storage for the Nova instances' ephemeral disks
makes live migration of virtual instances possible. The sections below
describe the steps required to live migrate all VMs off the hypervisor
host picked for the upgrade.

To minimize the impact of the upgrade procedure on the end-user
workloads, we need to migrate all VMs off the hypervisor picked for
the upgrade and make sure that no new VMs are scheduled to that host.
Scheduling to the host can be ruled out by disabling the Compute
service on that host. This does not affect the ability to migrate VMs
from that host.

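With the 6.1 control plane in place, this preparation can be sketched
with the standard Nova CLI (the hostname ``node-42`` is hypothetical,
and the commands assume admin credentials are loaded in the
environment)::

   # Stop scheduling new VMs to the host picked for the upgrade
   nova service-disable node-42 nova-compute

   # List the instances still running on that host
   nova list --all-tenants --host node-42

   # Live migrate each instance; with Ceph-backed ephemeral disks no
   # block migration is needed, and the scheduler picks the target
   nova live-migration <instance-id>
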
Reassign node to Seed environment
_________________________________

An extension to the Fuel API, installed by the ``octane prepare``
command, allows moving a node from one environment to another, in our
case to the Upgrade Seed. The reassignment is therefore a matter of a
single call to the Fuel API, and it is implemented as part of the
Upgrade Script.

Note that with this extension the reassigned node does not change its
ID or hostname.

Verify node upgrade
___________________

After a successful installation, you need to make sure that the node
is connected to the CICs properly, according to its role. A Ceph OSD
node must be ``up`` in the OSD tree. A Compute node must be connected
to the Nova controller services. Below you will find commands that
show how to check these requirements.

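The checks can be run from a 6.1 CIC with the standard Ceph and Nova
CLIs (a sketch; the commands assume admin credentials are loaded)::

   # The reinstalled Ceph OSD node must show its OSDs as 'up'
   ceph osd tree

   # The upgraded Compute node must report its nova-compute service
   # with state 'up'
   nova service-list --binary nova-compute
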
Node Upgrade command
____________________

Run the following command to upgrade a node identified by the
``NODE_ID`` parameter:

::

   ./octane upgrade-node ${SEED_ID} ${NODE_ID}

You can upgrade more than one node at a time. Run the following command
to upgrade multiple nodes identified by the ``NODE_ID1``, ``NODE_ID2``,
and ``NODE_ID3`` parameters:

::

   ./octane upgrade-node ${SEED_ID} ${NODE_ID1} ${NODE_ID2} ${NODE_ID3}

.. note::

   Pay attention to the replication factor of your Ceph cluster and do
   not upgrade more Ceph OSD nodes at a time than the replication
   factor minus one.

60
pages/operations/upgrade/4000-finalize-upgrade.rst
Normal file
@@ -0,0 +1,60 @@

.. index:: Finalize Upgrade

.. _Upg_Final:

Finalize Upgrade
----------------

This chapter describes the actions required to finalize the upgrade
procedure on an environment and on the Fuel installer in general.
Finalization involves the following steps:

* Restore the source code of the components of the installer by
  reverting the patches that modified the installer's behavior.
* Delete the original 5.1.1 environment and release the original
  Controller nodes.

See the sections below for a detailed description of how to do that
and the list of commands:

* :ref:`Revert patches <upgrade-cleanup-revert>`
* :ref:`Decommission environment <upgrade-cleanup-delete-env>`

.. _upgrade-cleanup-revert:

Revert Patches
++++++++++++++

The final goal of the upgrade procedure is to get the upgraded
environment as close as possible to an environment installed with the
new release version, and to retain the ability to manage it with the
new version of the Fuel installer. To restore the original behavior of
the Fuel installer, you need to revert all changes made to its source
code and configuration. You also need to restore the configuration of
the environment to the state installed by Fuel.

Commands to revert patches
__________________________

Run the following command to revert the changes made to the source
code and configuration of the components of the Fuel installer::

   ./octane cleanup-fuel

.. _upgrade-cleanup-delete-env:

Delete 5.1.1 environment
++++++++++++++++++++++++

Delete the original 5.1.1 environment to release the Controller nodes
and completely switch to the 6.1 environment instead.

.. note::

   The following operation may cause data loss if your upgrade was not
   completed successfully. Proceed with caution.

::

   fuel env --env $ORIG_ID --delete

@@ -9,6 +9,7 @@ Terminology Reference
.. include:: /pages/terminology/b/bonding.rst
.. include:: /pages/terminology/c/ceilometer.rst
.. include:: /pages/terminology/c/ceph.rst
.. include:: /pages/terminology/c/cic.rst
.. include:: /pages/terminology/c/cinder.rst
.. include:: /pages/terminology/c/cobbler.rst
.. include:: /pages/terminology/c/compute-nodes.rst

10
pages/terminology/c/cic.rst
Normal file
@@ -0,0 +1,10 @@

.. _cic-term:

CIC
---
CIC is an abbreviation for an OpenStack :ref:`Controller
node<controller-node-term>`: Cloud Infrastructure Controller.

For more information:

- :ref:`controller-arch` describes the architecture of the controller
  system.