* Add Rocky code name
* Add new HOT version for Rocky release
The new version is "2018-08-31" or "rocky".
* Add sem-ver flag
Sem-Ver: api-break
Change-Id: I261b6c28b8b7ee9e75ca9a895155a656ef82cd0d
Since we are doing set operations on it, make the 'requires' attribute of a
Resource a set and only convert to/from a list when loading/storing. To
avoid churn in the database, sort the list when storing it.
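As a rough illustration of the conversion at the load/store boundary
(the helper names are made up for this sketch, not the actual Heat code):

    # Rough sketch of the set <-> list conversion; helper names are
    # illustrative, not the actual Heat code.
    def requires_from_db(stored_list):
        # The DB stores 'requires' as a sorted list; expose it to the
        # Resource as a set so that set operations are straightforward.
        return set(stored_list or [])

    def requires_to_db(requires_set):
        # Sort before storing so that an unchanged set always serialises
        # to the same list, avoiding churn in the database.
        return sorted(requires_set)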
Change-Id: I137fae8ae630eb235d7811fcba81738d828e6a1e
When we create a replacement resource, do so with the correct requires list
that it will ultimately have, instead of a copy of the old resource's
requires.
This will happen anyway when Resource.create_convergence() is actually
called (which is on the other end of an RPC message, so it may not actually
happen if another transition starts), but this makes it consistent from the
start.
Change-Id: Idf75a55be8d75e55c893ec1fb6ee3704f46bdc4f
Obtain the list of required resources in one place in check_resource,
instead of in both Resource.create_convergence() and
Resource.update_convergence(). Since doing so requires knowledge about how
the dependencies are encoded in the RPC message, this logic never really
belonged in the Resource class anyway.
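A hedged sketch of what doing this in one place amounts to; the payload
shape and names below are assumptions for illustration only:

    # Hypothetical sketch: 'rpc_deps' stands in for the dependency data
    # carried in the check-resource RPC message; the real wire format
    # differs.
    def required_resource_ids(rpc_deps, resource_id):
        # Decode the requirements once, in check_resource, and pass the
        # result into both create_convergence() and update_convergence(),
        # which then no longer need to know about the RPC encoding.
        return set(rpc_deps.get(resource_id, []))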
Change-Id: I030c6287acddcd91dfe5fecba72c276fec52829b
In the original prototype for convergence, we passed the input_data from
the SyncPoint to the resource when calling the equivalent of
convergence_delete(), so that we could clear any needed_by references that
no longer exist. This is pointless for a few reasons:
* It's implemented incorrectly - it *sets* the referenced resources into
needed_by instead of clearing them from it.
* We don't actually pass any input data - in WorkerService.check_resource()
it's always set to an empty dict for cleanup nodes, regardless of what
came in on the wire.
* We don't store the result to the database unless we're deleting the
resource anyway - in which case it doesn't matter.
* It turns out that even in the prototype, the whole needed_by mechanism
isn't actually used for anything:
c74aac1f07
Rather than pretend that we're doing something useful with the input_data
here, just set the needed_by to an empty list, which is what was happening
anyway.
Change-Id: I73f6cf1646584dc4a83497f5a583c2c8973e8aba
If an update of a resource fails, its 'requires' should be set to the union
of the previous and new requires lists. This is because if the resource
depends on another resource that has been replaced in this stack update, we
cannot tell whether the failed resource now depends on the new or the old
version of the replaced resource.
This is achieved by splitting up the setting of 'requires' and
'current_template_id', and changing them directly in the update() method
instead of via a callback.
When the resource state is changed to UPDATE_IN_PROGRESS, the new
requirements are added to the old ones. When the state is changed to
UPDATE_COMPLETE, the new requirements replace the old ones altogether. If
the update fails or handle_update() raises UpdateReplace, the union of the
requirements is kept. If _needs_update() raises UpdateReplace, the old
requirements are kept.
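A rough sketch of the resulting 'requires' handling, assuming both inputs
are sets (the function and status names are illustrative, not the actual
Resource code):

    # Illustrative only; the real logic is split across the update() and
    # state_set() paths described above.
    def next_requires(old_requires, new_requires, update_status):
        if update_status == 'IN_PROGRESS':
            # While the update runs, add the new requirements to the old.
            return old_requires | new_requires
        if update_status == 'COMPLETE':
            # On success, the new requirements replace the old ones.
            return set(new_requires)
        # On failure, or when handle_update() raises UpdateReplace, keep
        # the union: we cannot tell whether the resource now depends on
        # the old or the new version of a replaced resource.
        return old_requires | new_requires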
The current_template_id is updated when the state is changed to either
UPDATE_COMPLETE or UPDATE_FAILED, or when no update is required
(_needs_update() returns False).
This also saves an extra database write when the update fails.
Change-Id: If70d457fba5c64611173e3f9a0ae6b155ec69e06
Closes-Bug: #1663388
In convergence, when a resource is traversed and left unchanged, we must
still update the current template for it in the database. In addition,
if the resource was unchanged in the template but already in a FAILED
state and we elected not to replace it by returning False from
needs_replace_failed(), we must also update the status to COMPLETE.
Currently, we do those in two separate writes. This is an unnecessary
overhead (albeit for a fairly rare case), and the two writes can be
combined into one in the case where both changes are required.
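As a hedged illustration only, the combined write might look roughly like
this (the field names and the update callable are assumptions, not the
actual Heat objects API):

    # Hypothetical sketch of folding both changes into a single DB write.
    def store_unchanged_resource(db_update, resource_id, new_template_id,
                                 is_failed, replace_failed):
        values = {'current_template_id': new_template_id}
        if is_failed and not replace_failed:
            # Fold the FAILED -> COMPLETE status change into the same
            # write as the current-template update.
            values['status'] = 'COMPLETE'
            values['status_reason'] = 'Update status from FAILED to COMPLETE'
        db_update(resource_id, values)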
Change-Id: I9e2f1e27ce2c119647c9fe228484228d2c15d943
Related-Bug: #1763021
When updating a resource that hasn't changed, we previously did not retry
the write when the atomic_key of the resource didn't match what we expected.
In addition to being incremented when a resource is locked for update, the
atomic key is also incremented when modifying metadata and when storing
cached attribute values. Apparently some mechanism can cause this to happen
in the time between when the resource is loaded and when we attempt to
update the template ID etc. in the DB.
When the resource is not locked and its template ID hasn't changed since we
loaded it, we can assume that the update failed due to a mismatched atomic
key alone. Handle this case by sending another resource-check RPC message,
so that the operation check will be retried with fresh data from the DB.
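The retry decision could be sketched roughly as follows (the attribute and
function names are assumptions, not the actual worker code):

    # Illustrative sketch only; attribute names are assumptions rather
    # than the exact worker code.
    def should_retrigger_check(rsrc, loaded_template_id):
        # Not locked (no engine holds the resource) and the template ID
        # is unchanged since we loaded it, so the failed write can only
        # be due to a bumped atomic key (e.g. a concurrent metadata or
        # attribute-cache update); in that case another resource-check
        # RPC message is sent to retry with fresh data from the DB.
        return (rsrc.engine_id is None and
                rsrc.current_template_id == loaded_template_id)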
Change-Id: I5afd5602096be54af5da256927fe828366dbd63b
Closes-Bug: #1763021
Remove mox usage from test_stack_resources.
Also remove mox usage from `heat/tests/engine/tools.py`
goal: mox-removal
Change-Id: I97e1cf983db411c9c1cf8f507d92eead03067a93
With python-novaclient 10.2.0 (after commit
eff607ccef91d09052d58f6798f68d67404f51ce), server listing is done in
multiple requests.
Change-Id: Ib02278fca2a17ce2f26d5e28a5ac0971bd80657a
Closes-Bug: #1766254
At the beginning of a convergence traversal, log the traversal ID along
with the dependency graph for the traversal. This could be useful in
debugging. Also, log it at DEBUG rather than INFO level.
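The added log call is roughly of this shape (the message wording and names
are illustrative):

    import logging

    LOG = logging.getLogger(__name__)

    def log_traversal_start(traversal_id, dependency_graph):
        # DEBUG rather than INFO, as noted above.
        LOG.debug('Starting traversal %(tid)s with dependency graph '
                  '%(graph)s',
                  {'tid': traversal_id, 'graph': dependency_graph})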
Change-Id: Ic7c567b6f949bdec9b3cface4fa07748fbe585eb
This script is not used anywhere any more, and it will not work with the
recent pip 10.x release due to changes in the pip API.
Initially this script was meant to install only the limited set of
requirements needed to run heat_integrationtests without having to install
the whole of Heat. Since the split of heat_tempest_plugin, the tests left in
heat_integrationtests are meant to be run only in a DevStack-like
environment, so all the requirements are already available.
Change-Id: I94c82fe100dfb6557cc960ef1198f1195780f28f