nova/nova/tests/unit/db
melanie witt becb94ae64 Dynamically archive FK related records in archive_deleted_rows
Currently, it is possible to "partially archive" the database by
running 'nova-manage db archive_deleted_rows' with --max_rows or by
interrupting the archive process in any way. When this happens, it is
possible to have archived a record with a foreign key relationship to a
parent record (example: 'instance_extra' table record is archived while
the 'instances' table record remains).
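The split can be illustrated with a toy sqlite3 sketch (table names borrowed from the description above, schema heavily simplified and hypothetical, not nova's real definitions):

```python
import sqlite3

# Toy schema sketching the parent/child relationship described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE instances (id INTEGER PRIMARY KEY, deleted INTEGER);
    CREATE TABLE instance_extra (
        id INTEGER PRIMARY KEY,
        instance_id INTEGER REFERENCES instances (id),
        flavor TEXT);
    CREATE TABLE shadow_instance_extra (
        id INTEGER, instance_id INTEGER, flavor TEXT);
""")
conn.execute("INSERT INTO instances VALUES (1, 1)")  # soft-deleted instance
conn.execute("INSERT INTO instance_extra VALUES (10, 1, 'm1.small')")

# An interrupted archive run moves only the child row to its shadow
# table, leaving the parent 'instances' row behind.
with conn:
    conn.execute("INSERT INTO shadow_instance_extra "
                 "SELECT * FROM instance_extra WHERE instance_id = 1")
    conn.execute("DELETE FROM instance_extra WHERE instance_id = 1")

# The instance is now "split": parent row present, child row gone.
parent = conn.execute("SELECT COUNT(*) FROM instances").fetchone()[0]
child = conn.execute("SELECT COUNT(*) FROM instance_extra").fetchone()[0]
print(parent, child)  # -> 1 0
```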

When an instance's records become "split" in this way, any API request
that can (1) access the deleted instance and (2) tries to access data
that should be in a child table (example: the embedded flavor for an
instance) will fail with an OrphanedObjectError and HTTP 500 to the
user. Examples of APIs that are affected by this are the tenant usage
APIs and listing of deleted instances as admin.

In the tenant usage example, the API looks at deleted instances to
calculate usage over a time period. It pulls deleted and non-deleted
instances and does instance.get_flavor() to calculate their usage. The
flavor data is expected to be present because
expected_attrs=['flavor'] is used to do a join with the
'instance_extra' table and populate the instance object's flavor data.
When get_flavor() is called, it tries to access the instance.flavor
attribute (which hasn't been populated because the 'instance_extra'
record is gone). That triggers a lazy-load of the flavor, which reloads
the instance from the database with expected_attrs=['flavor'] and once
more fails to populate instance.flavor because the 'instance_extra'
record is gone. Then the Instance._load_flavor code
intentionally orphans the instance record to avoid triggering
lazy-loads while it attempts to populate instance.flavor,
instance.new_flavor, and instance.old_flavor. Finally, another
lazy-load is triggered (because instance.flavor is still not populated)
and fails with OrphanedObjectError.
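The failure chain can be modeled with a small toy (names echo the description above, but all of the logic here is a simplified, hypothetical stand-in for nova's object layer):

```python
class OrphanedObjectError(Exception):
    """Raised when a lazy-load is attempted on an orphaned object."""

# Pretend database: the 'instances' row survived archiving, but its
# 'instance_extra' row (which holds the flavor) did not.
DB = {"instances": {1: {"deleted": True}}, "instance_extra": {}}

class Instance:
    def __init__(self, uuid, fields):
        self.uuid = uuid
        self.fields = fields    # 'flavor' only set if the extra row exists
        self.orphaned = False

    def get_flavor(self):
        if "flavor" in self.fields:
            return self.fields["flavor"]
        # Attribute missing -> lazy-load, i.e. reload from the database.
        if self.orphaned:
            raise OrphanedObjectError("lazy-load on an orphaned instance")
        fresh = load_instance(self.uuid)  # expected_attrs=['flavor'] reload
        fresh.orphaned = True             # orphaned while populating flavors
        return fresh.get_flavor()         # second lazy-load -> error

def load_instance(uuid):
    fields = dict(DB["instances"][uuid])
    extra = DB["instance_extra"].get(uuid)
    if extra is not None:                 # join with instance_extra
        fields["flavor"] = extra["flavor"]
    return Instance(uuid, fields)

try:
    load_instance(1).get_flavor()
except OrphanedObjectError as exc:
    print("HTTP 500:", exc)  # -> HTTP 500: lazy-load on an orphaned instance
```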

One way to solve this problem is to make it impossible for
archive_deleted_rows to orphan records that are related by foreign
key relationships. The approach is to process parent tables first
(the opposite of today, where we process child tables first), find all
of the tables that refer to each parent table by foreign keys, create
and collect insert/delete statements for those child records, and then
put them all together in a single database transaction to archive all
related records "atomically". The idea is that if anything were to
interrupt the transaction (errors or otherwise), it would roll back
and keep all the related records together. Either all related records
are archived or none are.
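A minimal sketch of that atomic approach, using stdlib sqlite3 and SQLite's foreign-key PRAGMA as a stand-in for nova's SQLAlchemy metadata reflection (schema and names simplified and hypothetical):

```python
import sqlite3

def fk_children(conn, table):
    """Find (child_table, fk_column) pairs that reference `table`."""
    children = []
    names = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for name in names:
        # foreign_key_list rows: (id, seq, ref_table, from_col, to_col, ...)
        for fk in conn.execute("PRAGMA foreign_key_list(%s)" % name):
            if fk[2] == table:
                children.append((name, fk[3]))
    return children

def archive_row(conn, table, pk):
    """Archive one parent row plus all FK-related child rows atomically."""
    with conn:  # single transaction: an interruption rolls everything back
        for child, col in fk_children(conn, table):
            conn.execute("INSERT INTO shadow_%s SELECT * FROM %s "
                         "WHERE %s = ?" % (child, child, col), (pk,))
            conn.execute("DELETE FROM %s WHERE %s = ?" % (child, col), (pk,))
        conn.execute("INSERT INTO shadow_%s SELECT * FROM %s "
                     "WHERE id = ?" % (table, table), (pk,))
        conn.execute("DELETE FROM %s WHERE id = ?" % table, (pk,))

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE instances (id INTEGER PRIMARY KEY, deleted INTEGER);
    CREATE TABLE instance_extra (
        id INTEGER PRIMARY KEY,
        instance_id INTEGER REFERENCES instances (id));
    CREATE TABLE shadow_instances (id INTEGER, deleted INTEGER);
    CREATE TABLE shadow_instance_extra (id INTEGER, instance_id INTEGER);
""")
conn.execute("INSERT INTO instances VALUES (1, 1)")
conn.execute("INSERT INTO instance_extra VALUES (10, 1)")

archive_row(conn, "instances", 1)
counts = [conn.execute("SELECT COUNT(*) FROM %s" % t).fetchone()[0]
          for t in ("instances", "instance_extra",
                    "shadow_instances", "shadow_instance_extra")]
print(counts)  # -> [0, 0, 1, 1]
```

Because both the child and parent moves happen inside one `with conn:` transaction, an exception anywhere in `archive_row` rolls back every statement, so records can never end up split.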

This change reworks the per-table archive logic to discover tables
that refer to the table by foreign keys and to generate insert/delete
statements that execute in the same database transaction as the table
archive itself. The extra records archived along with the table are
added to the rows_archived result. The existing code for "archiving
records if instance is deleted" also has to be removed along with this
because the new logic does the same thing dynamically and makes it
obsolete. Finally, some assertions in the unit tests need to be changed
or removed because they were assuming certain types of archiving
failures due to foreign key constraint violations that can no longer
occur with the new dynamic logic for archiving child records.

Closes-Bug: #1837995

Change-Id: Ie653e5ec69d16ae469f1f8171fee85aea754edff
2021-03-11 20:05:44 +00:00
__init__.py move all tests to nova/tests/unit 2014-11-12 15:31:08 -05:00
fakes.py nova-net: Remove unused 'stub_out_db_network_api' 2019-12-12 10:21:00 +00:00
test_db_api.py Dynamically archive FK related records in archive_deleted_rows 2021-03-11 20:05:44 +00:00
test_migration_utils.py Remove oslo_db.sqlalchemy.compat reference 2020-03-04 16:59:48 +00:00
test_migrations.py db: Compact Train database migrations 2021-01-07 11:47:44 +00:00
test_models.py Test that new tables don't use soft deletes 2016-02-04 09:21:31 -05:00
test_sqlalchemy_migration.py Merge "apidb: Compact Ocata database migrations" 2021-03-06 17:34:24 +00:00