Archive instance-related rows when the parent instance is deleted

This is something I expect has been very broken for a long time. We
have rows in tables such as instance_extra, instance_faults, etc. that
pertain to a single instance, and thus have a foreign key on their
instance_uuid column that points to the instance. If any of those
records exist, the instance cannot be archived out of the main
instances table.

The archive routine currently "handles" this by skipping over said
instances and eventually iterating over all the tables to pull out
any records that point to the instance, thus freeing up the instance
itself for archival. The problem is that this only happens if those
extra records are themselves marked as deleted. If we fail during
a cleanup routine and leave some of them not marked as deleted, while
the instance they reference *is* marked as deleted, we will never
archive them.
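The stuck state described above can be reproduced with a tiny standalone sketch (a hypothetical two-table schema, not Nova's actual DDL): the parent instance is soft-deleted, its child row is not, and an archive pass that only moves rows flagged deleted can never free the parent.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite does not enforce FKs by default
conn.execute("CREATE TABLE instances (uuid TEXT PRIMARY KEY, deleted INTEGER)")
conn.execute("""CREATE TABLE instance_extra (
                    id INTEGER PRIMARY KEY,
                    instance_uuid TEXT REFERENCES instances(uuid),
                    deleted INTEGER)""")
# The instance was soft-deleted, but a cleanup failure left its child row live.
conn.execute("INSERT INTO instances VALUES ('uuid-1', 1)")
conn.execute("INSERT INTO instance_extra VALUES (1, 'uuid-1', 0)")

# Phase 1: archive (here simply delete) child rows marked deleted -- none match,
# because the residue row still has deleted = 0.
conn.execute("DELETE FROM instance_extra WHERE deleted != 0")

# The foreign key now blocks archiving the instance row, forever.
try:
    conn.execute("DELETE FROM instances WHERE deleted != 0")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True
print(blocked)  # True: the instance can never be archived
```

Because the residue row never matches the `deleted != 0` filter, the per-table pass is a no-op for it, and the foreign key then keeps the parent instance un-archivable indefinitely.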

This patch adds another phase to the archival process for any table
that has an "instance_uuid" column, which attempts to archive records
that point to these deleted instances. With this, using a very large
real-world sample database, I was able to archive my way down to
zero deleted, un-archivable instances (from north of 100k).
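The new phase's selection logic can be sketched in plain SQL (hypothetical table names mirroring the real ones; the patch builds the equivalent statements with SQLAlchemy Core): candidate rows are chosen by the *parent's* deleted flag rather than their own, then copied to the shadow table and removed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (uuid TEXT PRIMARY KEY, deleted INTEGER)")
conn.execute("CREATE TABLE instance_extra (id INTEGER PRIMARY KEY, "
             "instance_uuid TEXT, deleted INTEGER)")
conn.execute("CREATE TABLE shadow_instance_extra (id INTEGER PRIMARY KEY, "
             "instance_uuid TEXT, deleted INTEGER)")
conn.executemany("INSERT INTO instances VALUES (?, ?)",
                 [("uuid-1", 1), ("uuid-2", 0)])
conn.executemany("INSERT INTO instance_extra VALUES (?, ?, ?)",
                 [(1, "uuid-1", 0),   # residue: parent deleted, child not
                  (2, "uuid-2", 0)])  # healthy: parent still live

# Copy-then-delete, keyed on the parent instance's deleted flag,
# not the child row's own flag.
conn.execute("""INSERT INTO shadow_instance_extra
                SELECT e.* FROM instance_extra e
                JOIN instances i ON i.uuid = e.instance_uuid
                WHERE i.deleted != 0""")
conn.execute("""DELETE FROM instance_extra WHERE instance_uuid IN
                (SELECT uuid FROM instances WHERE deleted != 0)""")

print([r[0] for r in conn.execute("SELECT id FROM shadow_instance_extra")])  # [1]
print([r[0] for r in conn.execute("SELECT id FROM instance_extra")])         # [2]
```

The residue row (id 1) is archived even though it was never marked deleted, while rows belonging to live instances (id 2) are untouched.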

Closes-Bug: #1622545
Change-Id: I77255c77780f0c2b99d59a9c20adecc85335bb18
Author: Dan Smith
Date:   2016-09-27 10:17:00 -07:00
Commit: 411113a2dd
Parent: efb67379eb

@@ -6274,6 +6274,49 @@ def task_log_end_task(context, task_name, period_beginning, period_ending,
 ##################


+def _archive_if_instance_deleted(table, shadow_table, instances, conn,
+                                 max_rows):
+    """Look for records that pertain to deleted instances, but may not be
+    deleted themselves. This catches cases where we delete an instance,
+    but leave some residue because of a failure in a cleanup path or
+    similar.
+
+    Logic is: if I have a column called instance_uuid, and that instance
+    is deleted, then I can be deleted.
+    """
+    # NOTE(guochbo): There is a circular import, nova.db.sqlalchemy.utils
+    # imports nova.db.sqlalchemy.api.
+    from nova.db.sqlalchemy import utils as db_utils
+
+    query_insert = shadow_table.insert(inline=True).\
+        from_select(
+            [c.name for c in table.c],
+            sql.select(
+                [table],
+                and_(instances.c.deleted != instances.c.deleted.default.arg,
+                     instances.c.uuid == table.c.instance_uuid)).
+            order_by(table.c.id).limit(max_rows))
+
+    query_delete = sql.select(
+        [table.c.id],
+        and_(instances.c.deleted != instances.c.deleted.default.arg,
+             instances.c.uuid == table.c.instance_uuid)).\
+        order_by(table.c.id).limit(max_rows)
+    delete_statement = db_utils.DeleteFromSelect(table, query_delete,
+                                                 table.c.id)
+
+    try:
+        with conn.begin():
+            conn.execute(query_insert)
+            result_delete = conn.execute(delete_statement)
+        return result_delete.rowcount
+    except db_exc.DBReferenceError as ex:
+        LOG.warning(_LW('Failed to archive %(table)s: %(error)s'),
+                    {'table': table.__tablename__,
+                     'error': six.text_type(ex)})
+        return 0
+
+
 def _archive_deleted_rows_for_table(tablename, max_rows):
     """Move up to max_rows rows from one tables to the corresponding
     shadow table.
@@ -6359,6 +6402,7 @@ def _archive_deleted_rows_for_table(tablename, max_rows):
         with conn.begin():
             conn.execute(insert)
             result_delete = conn.execute(delete_statement)
+        rows_archived = result_delete.rowcount
     except db_exc.DBReferenceError as ex:
         # A foreign key constraint keeps us from deleting some of
         # these rows until we clean up a dependent table. Just
@@ -6366,9 +6410,14 @@ def _archive_deleted_rows_for_table(tablename, max_rows):
         LOG.warning(_LW("IntegrityError detected when archiving table "
                         "%(tablename)s: %(error)s"),
                     {'tablename': tablename, 'error': six.text_type(ex)})
-        return rows_archived

-    rows_archived = result_delete.rowcount
+    if ((max_rows is None or rows_archived < max_rows)
+            and 'instance_uuid' in columns):
+        instances = models.BASE.metadata.tables['instances']
+        limit = max_rows - rows_archived if max_rows is not None else None
+        extra = _archive_if_instance_deleted(table, shadow_table, instances,
+                                             conn, limit)
+        rows_archived += extra

     return rows_archived
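The row-budget handoff in the final hunk can be sketched as a plain function (a hypothetical helper for illustration, not code from the patch): the per-table pass consumes part of max_rows, and the new instance_uuid phase only gets what remains, or no limit at all when max_rows is None.

```python
def remaining_budget(max_rows, rows_archived):
    """Rows the _archive_if_instance_deleted phase may still move.

    Mirrors: limit = max_rows - rows_archived if max_rows is not None else None
    """
    if max_rows is None:
        return None  # unlimited archival
    return max_rows - rows_archived

print(remaining_budget(100, 37))   # 63
print(remaining_budget(None, 37))  # None
```

This is why the `rows_archived = result_delete.rowcount` assignment moves inside the try block: the budget check after the except handler needs a count even when the delete was skipped by a DBReferenceError.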