Merge "docs: Update docs to reflect migration to Alembic"

commit 1dd6993d7b
Author: Zuul, 2022-07-29 17:54:10 +00:00; committed by Gerrit Code Review
5 changed files with 58 additions and 129 deletions


@@ -60,47 +60,6 @@ the response. Neither the cipher text, nor the hash of the key used to encrypt
the ``blob`` are exposed through the API. Furthermore, the key is only used
internally to keystone.
- Encrypting existing credentials
- -------------------------------
- When upgrading a Mitaka deployment to Newton, three database migrations will
- ensure all credentials are encrypted. The process is as follows:
- 1. An additive schema change is made to create the new ``encrypted_blob`` and
-    ``key_hash`` columns in the existing ``credential`` table using
-    ``keystone-manage db_sync --expand``.
- 2. A data migration will loop through all existing credentials, encrypt each
-    ``blob`` and store the result in the new ``encrypted_blob`` column. The hash
-    of the key used is also written to the ``key_hash`` column for that specific
-    credential. This step is done using ``keystone-manage db_sync --migrate``.
- 3. A contractive schema will remove the ``blob`` column that held the plain
-    text representations of the credential using ``keystone-manage db_sync
-    --contract``. This should only be done after all nodes in the deployment are
-    running Newton. If any Mitaka nodes are running after the database is
-    contracted, they won't be able to read credentials since they are looking
-    for the ``blob`` column that no longer exists.
- .. NOTE::
-    You may also use ``keystone-manage db_sync --check`` in order to check the
-    current status of your rolling upgrades.
- If performing a rolling upgrade, please note that a limited service outage will
- take affect during this migration. When the migration is in place, credentials
- will become read-only until the database is contracted. After the contract
- phase is complete, credentials will be writeable to the backend. A
- ``[credential] key_repository`` location must be specified through
- configuration and bootstrapped with keys using ``keystone-manage
- credential_setup`` prior to migrating any existing credentials. If a new key
- repository isn't setup using ``keystone-manage credential_setup`` keystone will
- assume a null key to encrypt and decrypt credentials until a proper key
- repository is present. The null key is a key consisting of all null bytes and
- its only purpose is to ease the upgrade process from Mitaka to Newton. It is
- highly recommended that the null key isn't used. It is no more secure than
- storing credentials in plain text. If the null key is used, you should migrate
- to a proper key repository using ``keystone-manage credential_setup`` and
- ``keystone-manage credential_migrate``.
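The ``keystone-manage`` commands named in the removed passage still exist; as a
minimal sketch of the setup-then-migrate sequence it describes (the ownership
flags and key location are assumptions, not part of this commit):

.. code-block:: bash

   # Create and bootstrap the [credential] key_repository; the user/group
   # flags (assumed here) chown the keys to the keystone service account.
   keystone-manage credential_setup --keystone-user keystone \
       --keystone-group keystone

   # Re-encrypt any credentials that were written under the all-null
   # bootstrap key using the keys created above.
   keystone-manage credential_migrate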
Encryption key management
-------------------------


@@ -155,7 +155,7 @@ downtime if it is required.
Upgrading without downtime
--------------------------
- .. NOTE:
+ .. versionadded:: 10.0.0 (Newton)
Upgrading without downtime is only supported in deployments upgrading
*from* Newton or a newer release.
@@ -166,6 +166,12 @@ Upgrading without downtime
``keystone-manage db_sync``), as it runs legacy (downtime-incurring)
migrations prior to running schema expansions.
+ .. versionchanged:: 21.0.0 (Yoga)
+    The migration tooling was changed from *SQLAlchemy-Migrate* to *Alembic*.
+    As part of this change, the data migration phase of the database upgrades
+    was dropped.
This is a high-level description of our upgrade strategy built around
additional options in ``keystone-manage db_sync``. Although it is much more
complex than the upgrade process described above, it assumes that you are not
@@ -187,11 +193,11 @@ authenticate requests normally.
#. Update your configuration files on the first node (``/etc/keystone/``) with
those corresponding to the latest release.
- #. (*New in Newton*) Run ``keystone-manage doctor`` on the first node to
+ #. Run ``keystone-manage doctor`` on the first node to
diagnose symptoms of common deployment issues and receive instructions for
resolving them.
- #. (*New in Newton*) Run ``keystone-manage db_sync --expand`` on the first node
+ #. Run ``keystone-manage db_sync --expand`` on the first node
to expand the database schema to a superset of what both the previous and
next release can utilize, and create triggers to facilitate the live
migration process.
@@ -210,14 +216,12 @@ authenticate requests normally.
triggers will live migrate the data to the new schema so it can be read by
the next release.
- #. (*New in Newton*) Run ``keystone-manage db_sync --migrate`` on the first
-    node to forcefully perform data migrations. This process will migrate all
-    data from the old schema to the new schema while the previous release
-    continues to operate normally.
.. note::
-    When this process completes, all data will be available in both the new
-    schema and the old schema, so both the previous release and the next release
-    will be capable of operating normally.
+    Prior to Yoga, data migrations were treated separately and required the
+    use of the ``keystone-manage db_sync --migrate`` command after applying
+    the expand migrations. This is no longer necessary, and the
+    ``keystone-manage db_sync --migrate`` command is now a no-op.
#. Update your configuration files (``/etc/keystone/``) on all nodes (except
the first node, which you've already done) with those corresponding to the
@@ -230,20 +234,27 @@ authenticate requests normally.
As the next release begins writing to the new schema, database triggers will
also migrate the data to the old schema, keeping both data schemas in sync.
- #. (*New in Newton*) Run ``keystone-manage db_sync --contract`` to remove the
-    old schema and all data migration triggers.
+ #. Run ``keystone-manage db_sync --contract`` to remove the old schema and all
+    data migration triggers.
When this process completes, the database will no longer be able to support
the previous release.
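Condensed into commands, the rolling-upgrade flow described in the steps above
is roughly the following. This is a sketch only; package upgrades,
configuration updates, and service restarts on each node are elided:

.. code-block:: bash

   # On the first node, after installing the new release and configuration:
   keystone-manage doctor              # diagnose common deployment issues
   keystone-manage db_sync --expand    # additive schema changes (and, since
                                       # Yoga, data migrations)
   keystone-manage db_sync --migrate   # optional; a no-op since Yoga

   # ...roll the remaining nodes onto the new release, one at a time...

   # Once no node is running the previous release:
   keystone-manage db_sync --contract  # drop old schema and sync triggers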
- Using db_sync check
- ~~~~~~~~~~~~~~~~~~~
+ Using ``db_sync check``
+ ~~~~~~~~~~~~~~~~~~~~~~~
- (*New in Pike*) In order to check the current state of your rolling upgrades,
- you may run the command ``keystone-manage db_sync --check``. This will inform
- you of any outstanding actions you have left to take as well as any possible
- upgrades you can make from your current version. Here are a list of possible
- return codes.
+ .. versionadded:: 12.0.0 (Pike)
+ .. versionchanged:: 21.0.0 (Yoga)
+    Previously this command would return ``3`` if data migrations were
+    required. Data migrations are now part of the expand schema migrations,
+    therefore this step is no longer necessary.
+ In order to check the current state of your rolling upgrades, you may run the
+ command ``keystone-manage db_sync --check``. This will inform you of any
+ outstanding actions you have left to take as well as any possible upgrades you
+ can make from your current version. Here is a list of possible return codes.
* A return code of ``0`` means you are currently up to date with the latest
migration script version and all ``db_sync`` commands are complete.
@@ -256,8 +267,5 @@ return codes.
or the database is already under control. Your first step is to run
``keystone-manage db_sync --expand``.
- * A return code of ``3`` means that the expansion stage is complete, and the
-   next step is to run ``keystone-manage db_sync --migrate``.
* A return code of ``4`` means that the expansion and data migration stages are
complete, and the next step is to run ``keystone-manage db_sync --contract``.
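A deployment script might branch on these return codes as follows (a sketch;
only the codes documented above are handled):

.. code-block:: bash

   keystone-manage db_sync --check
   case $? in
       0) echo "up to date; nothing to do" ;;
       2) echo "next step: keystone-manage db_sync --expand" ;;
       4) echo "next step: keystone-manage db_sync --contract" ;;
       *) echo "unexpected status; inspect the --check output" ;;
   esac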


@@ -17,52 +17,45 @@
Database Migrations
===================
- .. note::
-    The framework being used is currently being migrated from
-    SQLAlchemy-Migrate to Alembic, meaning this information will change in the
-    near-term.
+ .. versionchanged:: 21.0.0 (Yoga)
+    The database migration framework was changed from SQLAlchemy-Migrate to
+    Alembic in the Yoga release. Previously there were three SQLAlchemy-Migrate
+    repos, corresponding to the different types of migration operation: the
+    *expand* repo, the *data migration* repo, and the *contract* repo. There
+    are now only two Alembic branches, the *expand* branch and the *contract*
+    branch, and data migration operations have been folded into the former.
Starting with Newton, keystone supports upgrading both with and without
- downtime. In order to support this, there are three separate migration
- repositories (all under ``keystone/common/sql/legacy_migrations``) that match
- the three phases of an upgrade (schema expansion, data migration, and schema
- contraction):
+ downtime. In order to support this, there are two separate branches (all under
+ ``keystone/common/sql/migrations``): the *expand* and the *contract* branch.
- ``expand_repo``
+ *expand*
For additive schema modifications and triggers to ensure data is kept in
sync between the old and new schema until the point when there are no
keystone instances running old code.
- ``data_migration_repo``
-    To ensure new tables/columns are fully populated with data from the old
-    schema.
+    May also contain data migrations to ensure new tables/columns are fully
+    populated with data from the old schema.
- ``contract_repo``
+ *contract*
Run after all old code versions have been upgraded to running the new code,
so remove any old schema columns/tables that are not used by the new
version of the code. Drop any triggers added in the expand phase.
- All migrations are required to have a migration script in each of these repos,
- each with the same version number (which is indicated by the first three digits
- of the name of the script, e.g. ``003_add_X_table.py``). If there is no work to
- do in a specific phase, then include a no-op migration to simply ``pass`` (in
- fact the ``001`` migration in each of these repositories is a no-op migration,
- so that can be used as a template).
+ A migration script must belong to one branch. If a migration has both additive
+ and destructive operations, it must be split into two migration scripts, one
+ in each branch.
- In order to support rolling upgrades, where two releases of keystone briefly
- operate side-by-side using the same database without downtime, each phase of
- the migration must adhere to following constraints:
+ These triggers should be removed in the contract phase. There are further
+ restrictions as to what can and cannot be included in migration scripts in each
+ phase:
Expand phase:
-    Only additive schema changes are allowed, such as new columns, tables,
-    indices, and triggers.
+    Only additive schema changes, such as new columns, tables, indices, and
+    triggers, and data insertion are allowed.
-    Data insertion, modification, and removal is not allowed.
+    Data modification or removal is not allowed.
Triggers must be created to keep data in sync between the previous release
and the next release. Data written by the previous release must be readable
@@ -72,20 +65,14 @@ Expand phase:
In cases where it is not possible for triggers to maintain data integrity
across multiple schemas, writing data should be forbidden using triggers.
- Data Migration phase:
-    Data is allowed to be inserted, updated, and deleted.
-    No schema changes are allowed.
Contract phase:
-    Only destructive schema changes are allowed, such as dropping or altering
-    columns, tables, indices, and triggers.
-    Data insertion, modification, and removal is not allowed.
+    Only destructive schema changes, such as dropping or altering
+    columns, tables, indices, and triggers, or data modification or removal are
+    allowed.
Triggers created during the expand phase must be dropped.
For more information on writing individual migration scripts refer to
- `SQLAlchemy-migrate`_.
+ `Alembic`_.
- .. _SQLAlchemy-migrate: https://opendev.org/openstack/sqlalchemy-migrate
+ .. _Alembic: https://alembic.sqlalchemy.org/
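For contributors new to Alembic, a minimal expand-branch migration might look
like the sketch below. Everything in it is hypothetical: the file name,
revision identifiers, table, and column are placeholders, and real revision
files are generated by Alembic rather than written from scratch.

.. code-block:: bash

   # Sketch only: create a hypothetical expand-branch migration by hand.
   cat > keystone/common/sql/migrations/versions/CHANGEME_add_widget_column.py <<'EOF'
   """Add widget column to project (expand branch).

   Revision ID: CHANGEME
   Revises: <parent expand revision>
   """
   from alembic import op
   import sqlalchemy as sa

   revision = 'CHANGEME'
   down_revision = '<parent expand revision>'
   branch_labels = None
   depends_on = None


   def upgrade():
       # Expand phase: additive changes only. The matching destructive
       # change (e.g. dropping a replaced column) belongs in a separate
       # script on the contract branch.
       op.add_column(
           'project', sa.Column('widget', sa.String(255), nullable=True))
   EOF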


@@ -53,9 +53,7 @@ Refer to the :doc:`API Change tutorial <api_change_tutorial>`. In short, you will
steps:
#. Create a SQL migration to add the parameter to the database table
-    (:py:mod:`keystone.common.sql.legacy_migration.expand_repo.versions`,
-    :py:mod:`keystone.common.sql.legacy_migration.data_migration_repo.versions`,
-    :py:mod:`keystone.common.sql.legacy_migration.contract_repo.versions`)
+    (:py:mod:`keystone.common.sql.migrations.versions`)
#. Add a SQL migration unit test (`keystone/tests/unit/test_sql_upgrade.py`)


@@ -138,32 +138,9 @@ Identity module.
Testing Schema Migrations
-------------------------
- .. note::
-    The framework being used is currently being migrated from
-    SQLAlchemy-Migrate to Alembic, meaning this information will change in the
-    near-term.
- The application of schema migrations can be tested using SQLAlchemy Migrate's
- built-in test runner, one migration at a time.
- .. WARNING::
-    This may leave your database in an inconsistent state; attempt this in
-    non-production environments only!
- This is useful for testing the *next* migration in sequence in a database under
- version control:
- .. code-block:: bash
-    $ python keystone/common/sql/legacy_migrations/expand_repo/manage.py test \
-    --url=sqlite:///test.db \
-    --repository=keystone/common/sql/legacy_migrations/expand_repo/
- This command references to a SQLite database (test.db) to be used. Depending on
- the migration, this command alone does not make assertions as to the integrity
- of your data during migration.
+ Tests for database migrations can be found in
+ ``keystone/tests/unit/test_sql_upgrade.py`` and
+ ``keystone/tests/unit/test_sql_banned_operations.py``.
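To run just those suites locally, the usual OpenStack tox/stestr test filter
applies (an assumption about the local setup, not something this commit adds;
adjust the env name to your interpreter):

.. code-block:: bash

   # Select the migration test modules through stestr's regex filter.
   tox -e py3 -- keystone.tests.unit.test_sql_upgrade
   tox -e py3 -- keystone.tests.unit.test_sql_banned_operations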
LDAP Tests
----------