6823ea5ed1
This is easier than Cinder, Nova, etc. but more difficult than Heat. Masakari hasn't had many database migrations in recent cycles, but it did have one in Antelope. This means we need to handle people doing skip-level upgrades and validate which version of the legacy migrations they are currently on. We support users coming from both Zed and Antelope; anyone else will need to go through an older version of Masakari to update their database migrations first.

Other than this difference, the logic is pretty similar: as with other projects, we simply determine whether we're upgrading a deployment that was previously using sqlalchemy-migrate, upgrading a deployment that has already migrated to alembic, or deploying a new deployment, and adjust accordingly.

In addition, we also have to consider Taskflow's migrations. These were previously run once as part of the legacy 006_add_persistence_tables migration. Since Taskflow uses Alembic under the hood, it is safe to run them every time. The presence of Taskflow does force us to use a different name for the alembic version table in Masakari, though.

Note that one curious side effect of this is that the order in which table rows are purged changes. This appears to happen because the notification table is now created in the initial Alembic migration, which alters the return value of 'MetaData.sorted_tables'. In any case, the fix is simply a matter of adjusting this order in the tests.

Change-Id: I5285d7cc3c6da0059c0908cedc195b2262cb1fce
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
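The skip-level decision described above can be sketched roughly as follows. This is a minimal illustration only: the function name, return values, and legacy migration version numbers are assumptions for the sketch, not Masakari's actual code.

```python
# Hypothetical sketch of the upgrade-path decision: pick a path based on
# whether the database was created by sqlalchemy-migrate, has already
# moved to alembic, or is brand new.

# Legacy sqlalchemy-migrate versions we can upgrade from (illustrative
# numbers standing in for the versions shipped in Zed and Antelope).
SUPPORTED_LEGACY_VERSIONS = {7, 8}


def choose_upgrade_path(legacy_version, has_alembic_table):
    """Return which migration path to take for this deployment.

    :param legacy_version: sqlalchemy-migrate version, or None if the
        legacy version table is absent (i.e. a fresh deployment).
    :param has_alembic_table: whether the alembic version table exists.
    """
    if has_alembic_table:
        # Already on alembic: just run any outstanding revisions.
        return 'upgrade'
    if legacy_version is None:
        # Fresh deployment: build the schema from the initial revision.
        return 'init'
    if legacy_version in SUPPORTED_LEGACY_VERSIONS:
        # Supported sqlalchemy-migrate deployment: stamp the equivalent
        # alembic revision, then upgrade from there.
        return 'stamp-then-upgrade'
    # Too old: the operator must upgrade through an older Masakari
    # release to bring the legacy migrations up to date first.
    raise RuntimeError(
        'legacy migration version %s is unsupported; upgrade through '
        'an older Masakari release first' % legacy_version)
```

Running Taskflow's own migrations is deliberately left out of the sketch: since they are idempotent under Alembic, they can simply be run unconditionally on every path.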
.placeholder
add_evacuate_error_instances_conf_option-5b4d1906137395f0.yaml
add_ha_enabled_config_options-54a9270a5993d20a.yaml
add_reserved_host_to_aggregates-5f506d08354ec148.yaml
add-periodic-tasks-0c96d6f620502a75.yaml
add-upgrade-check-framework-52268130b25317ab.yaml
adopt-oslo-config-generator-cf2fdb17cf7f13db.yaml
auto_priority_and_rh_priority_recovery_methods-b88cc00041fa2c4d.yaml
blueprint-add-vmoves-348fd430aa936721.yaml
blueprint-support-nova-system-scope-policies-c4dbd244dd3fcf1a.yaml
bp-mutable-config-57efdd467c01aa7b.yaml
bug-1685145-3d93145bfc76c660.yaml
bug-1776385-0bcf0a0b3fad359e.yaml
bug-1782517-e4dc70bad9e4e131.yaml
bug-1856164-6601a6e6280eba4d.yaml
bug-1859406-6b041a26acf6c7f6.yaml
bug-1882516-e8dc7fd2b55f065f.yaml
bug-1932194-2b721860bbc26819.yaml
bug-1960619-4c2cc73483bdff86.yaml
bug-1980736-975ee013e4612062.yaml
bug-add-missing-domain-name-5181c02f3f033a22.yaml
compute_search-3da97e69e661a73f.yaml
compute-disable-reason-9570734c0bb888cf.yaml
coordination_for_host_notification-a156ec5a5839a781.yaml
correct_response_code-df8b43a201efa1b4.yaml
customisable-ha-enabled-instance-metadata-key-af511ea2aac96690.yaml
db-purge-support-7a33e2ea5d2a624b.yaml
deprecate-json-formatted-policy-file-57ad537ec19cc7e0.yaml
deprecate-topic-opt-af83f82143143c61.yaml
drop-py-2-7-059d3cd5e7cb4e1a.yaml
enabled-to-segment-7e6184feb1e4f818.yaml
evacuation_in_threads-cc9c79b10acfb5f6.yaml
failover_segment_apis-f5bea1cd6d103048.yaml
fix-endless-periodic-f223845f3044b166.yaml
fix-notification-stuck-problem-fdb84bad8641384b.yaml
host-apis-46a87fcd56d8ed30.yaml
notifications_apis-3c3d5055ae9c6649.yaml
notifications-in-masakari-f5d79838fc23cb9b.yaml
policy-in-code-8740d51624055044.yaml
progress-details-recovery-workflows-5b14b7b3f87374f4.yaml
recovery-method-customization-3438b0e26e322b88.yaml
reserved_host_recovery_method-d2de1f205136c8d5.yaml
switch-to-alembic-b438de67c5b22a40.yaml
wsgi-applications-3ed7d6b89f1a5785.yaml