eadd78efe3
This attempts to recreate the following scenario:

1) Boot an instance on an Ocata host where the compute service does not
   have a uuid.
2) Migrate the instance.
3) Delete the Ocata service (thus deleting the compute node).
4) Start a compute service with the same name.
5) Migrate the instance to the newly-created compute node.
6) Upgrade to Pike, where the services.uuid data migration happens.
7) List instances as admin to join on the services table.

The failure occurs when listing instances because the deleted service
with the same name as the compute host that the instance is running on
gets pulled from the DB, and the Service object attempts to set a uuid
on it, which fails since it is not using a read_deleted="yes" context.

While working on this, the service_get_all_by_binary DB API method had
to be fixed to not hard-code read_deleted="no", since the test needs to
be able to read deleted services, which it can control via its own
context object. Note that RequestContext.read_deleted already defaults
to "no", so the hard-coding in the DB API is unnecessarily restrictive.

NOTE(mriedem): This backport needed to account for not having change
Idaed39629095f86d24a54334c699a26c218c6593 in Rocky, so the
PlacementFixture comes from the same module as the other nova fixtures.

Change-Id: I4d60da26fcf0a77628d1fdf4e818884614fa4f02
Related-Bug: #1764556
(cherry picked from commit
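The read_deleted behavior described above can be sketched as follows. This is a minimal, self-contained illustration of the pattern, not nova's actual implementation: the `RequestContext`, `Service`, and in-memory `SERVICES` table here are hypothetical stand-ins, and the real nova DB API filters via SQLAlchemy rather than list comprehensions. The point it demonstrates is the fix in the commit: the query helper should honor the context's `read_deleted` setting (which already defaults to "no") instead of hard-coding `read_deleted="no"`.

```python
from dataclasses import dataclass

# Hypothetical stand-in for nova's RequestContext; only the
# read_deleted attribute (default "no", as in nova) is modeled.
@dataclass
class RequestContext:
    read_deleted: str = "no"

# Hypothetical stand-in for a soft-deleted services table row.
@dataclass
class Service:
    host: str
    binary: str
    deleted: bool = False

# Illustrative data mirroring the bug scenario: a deleted Ocata-era
# service record plus a re-created service with the same host name.
SERVICES = [
    Service("host1", "nova-compute", deleted=True),
    Service("host1", "nova-compute", deleted=False),
]

def service_get_all_by_binary(context, binary):
    """Filter services by binary, honoring context.read_deleted.

    Hard-coding read_deleted="no" here would make it impossible for a
    caller (such as the regression test) to read deleted services even
    with a read_deleted="yes" context.
    """
    rows = [s for s in SERVICES if s.binary == binary]
    if context.read_deleted == "no":
        rows = [s for s in rows if not s.deleted]
    elif context.read_deleted == "only":
        rows = [s for s in rows if s.deleted]
    # read_deleted == "yes" returns both live and deleted rows.
    return rows
```

With a default context only the live service is returned, while a `read_deleted="yes"` context also surfaces the deleted record that triggers the uuid-setting failure.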