heat/releasenotes/notes/convergence-delete-race-5b821bbd4c5ba5dc.yaml
Zane Bitter e63778efc9 Eliminate client race condition in convergence delete
Previously, when doing a delete in convergence, we spawned a new thread to
start the delete. This ensured that the request returned without waiting
for potentially slow operations such as deleting snapshots and stopping
existing workers (which could have caused RPC timeouts).
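
For illustration, the old ordering looked roughly like the following (a
simplified Python sketch; thread_group_mgr, _do_delete and the stack methods
are illustrative names, not the actual heat-engine code):

    # Hypothetical sketch of the previous, racy ordering in the RPC handler.
    def delete_stack(context, stack):
        # Spawn a background thread and return immediately, so that slow
        # work (deleting snapshots, stopping workers) cannot trigger an
        # RPC timeout on the caller's side.
        thread_group_mgr.start(stack.id, _do_delete, stack)
        # At this point the stored state may still be DELETE_FAILED from
        # an earlier attempt.

    def _do_delete(stack):
        # The state transition only happens once the thread gets to run,
        # some time after the API call has already returned.
        stack.state_set(stack.DELETE, stack.IN_PROGRESS, 'Delete started')
        stack.delete_all_snapshots()
        stack.stop_existing_workers()
        stack.begin_delete_traversal()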

The result, however, was that the stack was not guaranteed to be
DELETE_IN_PROGRESS by the time the request returned. Where a previous delete
had failed, a client request to show the stack issued soon after the delete
returned would likely still show the stack status as DELETE_FAILED. Only a
careful examination of the updated_at timestamp would reveal that this
status belonged to the previous delete and not the one just issued. For a
nested stack, this could leave the parent stack effectively undeletable.
(Since the updated_at time is not modified on delete in the legacy path, we
never checked it when deleting a nested stack.)

To prevent this, change the order of operations so that the stack is first
put into the DELETE_IN_PROGRESS state before the delete_stack call returns,
and only spawn a thread to complete the operation once that state has been
stored.
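
A minimal sketch of the reordered flow, using the same illustrative names as
above:

    # Hypothetical sketch of the new ordering: persist the state change
    # synchronously, then hand the slow work off to a background thread.
    def delete_stack(context, stack):
        # Any client polling after this call returns now sees
        # DELETE_IN_PROGRESS rather than the outcome of an earlier delete.
        stack.state_set(stack.DELETE, stack.IN_PROGRESS, 'Delete started')

        # Only the potentially slow operations still run in the background.
        thread_group_mgr.start(stack.id, _finish_delete, stack)

    def _finish_delete(stack):
        stack.delete_all_snapshots()    # now also after the state change
        stack.stop_existing_workers()
        stack.begin_delete_traversal()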

Since there is no stack lock in convergence, this gives us the flexibility
to cancel other in-progress workers after we've already written to the
Stack itself to start a new traversal.
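
Roughly, a worker can then notice that a newer traversal has been recorded
on the stack and abandon its own work; again a hypothetical sketch, with
check_resource, load_stack and current_traversal used as illustrative names:

    # Hypothetical sketch: convergence workers carry the traversal ID they
    # were started for, and compare it against the one stored on the stack.
    def check_resource(context, resource, traversal_id):
        stack = load_stack(context, resource.stack_id)
        if stack.current_traversal != traversal_id:
            # A newer operation (such as the delete above) has already
            # written a new traversal ID, so this worker is stale; stop.
            return
        converge(resource)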

The previous patch in the series means that snapshots are now also deleted
after the stack is marked as DELETE_IN_PROGRESS. This is consistent with
the legacy path.

Change-Id: Ib767ce8b39293c2279bf570d8399c49799cbaa70
Story: #1669608
Task: 23174
2018-07-30 20:48:28 -04:00

---
fixes:
  - |
    Previously, when deleting a convergence stack, the API call would return
    immediately, so a client querying the stack status straight away could
    see the state of the previous operation (still in progress, or failed)
    and mistake it for the current status. (This included Heat itself when
    acting as a client for a nested stack.) Convergence stacks are now
    guaranteed to have moved to the ``DELETE_IN_PROGRESS`` state before the
    delete API call returns, so any subsequent polling will reflect
    up-to-date information.