Use a single transaction to create the replacement resource and set it as
the replaced_by link in the old resource. Also, ensure that no other
traversal has taken a lock on the old resource before we modify it.
If we end up bailing out and not creating a replacement or sending an RPC
message to check it, make sure we retrigger any new traversal.
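A minimal sketch of the pattern, assuming SQLAlchemy and placeholder model,
column, and function names (not Heat's exact API):

    def create_replacement(session, resource_cls, old_id, expected_key, values):
        # One transaction: create the replacement, then compare-and-swap the
        # old row's atomic_key while setting replaced_by. If another traversal
        # has locked/updated the old resource, the swap touches zero rows and
        # the raise rolls back the replacement as well.
        with session.begin():
            replacement = resource_cls(**values)
            session.add(replacement)
            session.flush()  # assigns replacement.id inside the transaction

            rows = session.query(resource_cls).filter_by(
                id=old_id, atomic_key=expected_key).update(
                    {'replaced_by': replacement.id,
                     'atomic_key': expected_key + 1})
            if rows != 1:
                raise RuntimeError('resource %s is locked by another '
                                   'traversal' % old_id)
        return replacement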
Change-Id: I23db4f06a4060f3d26a78f7b26700de426f355e3
Closes-Bug: #1727128
In I462ce7161497306483286b78416f9037ac80d6fa we changed to use the
frozen_definition properties for delete. However, when deleting a
resource from the backup stack, where the resource is in INIT_COMPLETE,
setting _stored_properties_data (_properties_data) to {} while
loading the resource from the db results in an error when resources
access properties in handle_delete.
Change-Id: If76372c7ef9aee258efb1bfbc724d8637bc6a32c
Closes-Bug: #1709682
Eager load resource_properties_data in resources in the typical
resource-loading scenarios where properties data will be
accessed. Thus, we can save an extra db query per resource when
loading all the resources in a stack, for instance. Fall back to lazy
loading properties data in other scenarios.
Also, the resource object doesn't need to store a copy of its
ResourcePropertiesData object in self.rsrc_prop_data, so don't.
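A short sketch of the eager-load path with SQLAlchemy; the model and
relationship names here are assumptions for illustration:

    from sqlalchemy import orm

    def get_all_by_stack(session, resource_cls, stack_id, eager=True):
        query = session.query(resource_cls).filter_by(stack_id=stack_id)
        if eager:
            # One JOINed query instead of one extra query per resource.
            query = query.options(orm.joinedload(resource_cls.rsrc_prop_data))
        return query.all()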
Change-Id: Ib7684af3fe06f818628fd21f1216de5047872948
Closes-Bug: #1665503
Store resource attributes that may be cached in the DB, saving the
cost of re-resolving them later. This works for most resources,
specifically those that do not override the get_attribute() method.
Change-Id: I71f8aa431a60457326167b8c82adc03ca750eda6
Partial-Bug: #1660831
This was most likely meant to be a max 2s delay, not a max 2ms
delay.
Also includes a related change: when retries for metadata updates are
attempted, make sure we do not have a stale value of the atomic_key
(otherwise we'll just inevitably hit the ConcurrentTransaction issue).
Co-Authored-By: Crag Wolfe <cwolfe@redhat.com>
Partial-Bug: #1651768
Change-Id: Ie56e0e4ff93633db1f4752859d2b2a9506922911
Sometimes we know we will only access particular fields of a resource
object, rather than *all* of them. This commit allows the caller to
specify (optionally) the fields that should be populated when the
resource object is instantiated. This saves memory, trips to the db,
and in some cases avoids extra join queries (e.g. for resource.data or
resource.rsrc_prop_data).
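A sketch of the idea, assuming SQLAlchemy and illustrative names; load_only
restricts which columns are populated, so unneeded columns and joined
relationships are not fetched:

    from sqlalchemy import orm

    def resource_get_all(session, resource_cls, stack_id, fields=None):
        query = session.query(resource_cls).filter_by(stack_id=stack_id)
        if fields:
            # Only the requested columns are loaded; the rest are deferred.
            query = query.options(orm.load_only(*fields))
        return query.all()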
Change-Id: I405888f46451d2657aa28f610f8ca555215ff5cf
Partial-Bug: #1680658
Working towards the goal of storing resource attributes in the db so
as to avoid re-resolving them when appropriate. Adds an 'attr_data'
object to the resource object, defined as a relationship on the
already existing resource_properties_data table.
Change-Id: I2104078d850da08b22547d7feab2bde00c543478
Partial-Bug: #1660831
Add the resource_properties_data association to resource and event
objects. The resource and event engine objects do not yet use it,
but will soon.
Change-Id: Idecaafffbc5e9bfcd2355e2a165836a5ed89b16f
The db.api module provides a useless indirection to the only
implementation we ever had, sqlalchemy. Let's use that directly instead
of the wrapper.
Change-Id: I80353cfed801b95571523515fd3228eae45c96ae
It's possible that we could end up with multiple resources with the same
physical resource ID, but that would be undetectable since we return only
one from the database layer. This change allows us to detect the problem and
return an error when the result would be ambiguous.
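A sketch of the check, with assumed names; the real code would raise a
Heat-specific exception rather than ValueError:

    def resource_get_by_physical_resource_id(session, resource_cls, phys_id):
        matches = (session.query(resource_cls)
                   .filter_by(physical_resource_id=phys_id)
                   .all())
        if not matches:
            return None
        if len(matches) > 1:
            # More than one resource claims this physical ID: ambiguous.
            raise ValueError('multiple resources found for physical id %s'
                             % phys_id)
        return matches[0]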
Change-Id: I2c5ddbe6731c33a09ec7c4a7b91dcfe414da4385
Just a refactor, no change in functionality.
The functions added to crypt are used to encrypt / decrypt resource
properties data dicts. Note that they should not be used for
encrypting / decrypting other things such as params or user creds
(which are just strings). An intermediate json conversion of each
value in a dict takes place before it is encrypted/decrypted.
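A sketch of the per-value JSON round trip; Fernet stands in for whatever
symmetric cipher is configured, and the helper names are not Heat's real API:

    import json
    from cryptography.fernet import Fernet

    def encrypt_properties_data(key, data):
        f = Fernet(key)
        # JSON-encode each value first so any JSON-serializable type
        # survives the round trip, then encrypt it individually.
        return {k: f.encrypt(json.dumps(v).encode('utf-8'))
                for k, v in data.items()}

    def decrypt_properties_data(key, data):
        f = Fernet(key)
        return {k: json.loads(f.decrypt(v).decode('utf-8'))
                for k, v in data.items()}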
Change-Id: Id6bcc90cbf430095719315ac7e9d3e8c9e745012
We are replacing all usages of the 'retrying' package with
'tenacity', as the author of retrying is not actively maintaining
the project. Tenacity is a fork of retrying, but has an improved
interface and extensibility (see [1] for more details). Our end
goal here is removing the retrying package from our requirements.
Tenacity provides the same functionality as retrying, but has the
following major differences to account for:
- Tenacity uses seconds, whereas retrying used milliseconds.
- Tenacity has different kwargs for the decorator and the
Retrying class itself.
- Tenacity takes a different approach to retry arguments, using
classes for its stop/wait/retry kwargs.
- By default tenacity raises a RetryError if a retried callable
times out, whereas retrying raises the last exception from the
callable. Tenacity provides backwards compatibility here by
offering the 'reraise' kwarg.
- Tenacity defines 'time.sleep' as a default value for a kwarg, so
consumers who need to mock-patch time.sleep must do so before
tenacity is imported.
- For retries that check a result, tenacity will raise if the retried
function raises, whereas retrying retried on all exceptions.
This patch updates all usages of retrying to tenacity (a before/after
sketch follows). Unit tests are added or removed where applicable.
[1] https://github.com/jd/tenacity
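A before/after sketch of the same retry policy (3 attempts, half-second
fixed wait, re-raise the last exception) expressed in both libraries:

    import tenacity

    @tenacity.retry(stop=tenacity.stop_after_attempt(3),
                    wait=tenacity.wait_fixed(0.5),   # seconds, not ms
                    retry=tenacity.retry_if_exception_type(IOError),
                    reraise=True)                    # behave like retrying
    def fetch_metadata():
        ...

    # Roughly equivalent with the old retrying package:
    # @retrying.retry(stop_max_attempt_number=3,
    #                 wait_fixed=500,                # milliseconds
    #                 retry_on_exception=lambda e: isinstance(e, IOError))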
Closes-Bug: #1635388
Change-Id: Iec0822cc0d5589b04c1764db518478d286455031
A context cache that memoizes the resources fetched by calls to
Resource.get_all_by_root_stack(..., cache=True), which are then
recalled by subsequent calls to Resource.get_all_by_stack.
Because get_all_by_stack returns a collection instead of a single
resource, there is no way of taking advantage of the SQLAlchemy
identity map [1].
[1] http://docs.sqlalchemy.org/en/latest/orm/session_basics.html#is-the-session-a-cache
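A rough sketch of the memoization; the cache attribute and query helpers
are illustrative, not the actual implementation:

    import collections

    def get_all_by_root_stack(context, root_stack_id, query_fn, cache=False):
        resources = query_fn(context, root_stack_id)
        if cache:
            # Group the results per stack and stash them on the context.
            by_stack = collections.defaultdict(list)
            for res in resources:
                by_stack[res.stack_id].append(res)
            context.resource_cache = dict(by_stack)
        return resources

    def get_all_by_stack(context, stack_id, query_fn):
        cached = getattr(context, 'resource_cache', {}).get(stack_id)
        if cached is not None:
            return cached
        return query_fn(context, stack_id)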
Change-Id: Ia5aae0c86a586041020e9798566c9e0af48c180d
Partial-Bug: #1578854
Remove or relocate the following from the db models:
- HeatBase.expire, not used anywhere
- HeatBase.update_and_save, moved to sqlalchemy.api
- SoftDelete.soft_delete, moved to sqlalchemy.api
update_and_save creates a transaction, so it needs to be in
sqlalchemy.api so that a context manager can eventually manage the
transaction.
Change-Id: I84749f4fd0781ed9a2d62327b39ce6eee0f07b35
The HeatBase.delete method starts its own (sub)transaction with one
of three possible sessions. This change moves all delete calls to a
sqlalchemy.api function that uses the current context session. This will
help with bug #1479723 by always doing deletes with a session provided
by the context manager.
Change-Id: I8dfd3bc6fdb44b0e3b06fab5d7dc8e06fa3d80a8
This change removes the refresh method from the parent db object and
adds a refresh boolean argument to the db_api.resource_get function
(the only known user of refresh). Constraining refresh calls to
db_api methods helps with bug #1479723, where the refresh can then
happen under a context manager session.
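A sketch with assumed names, keeping the refresh inside the db api so it
can later run under the same context-managed session:

    def resource_get(context, resource_cls, resource_id, refresh=False):
        result = context.session.query(resource_cls).get(resource_id)
        if result is None:
            raise LookupError('resource %s not found' % resource_id)
        if refresh:
            # Re-read the row, discarding any stale state held in the
            # session's identity map.
            context.session.refresh(result)
        return result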
Change-Id: I6fc5c03e8572eee90f89455e0925d503514040b3
Related-Bug: #1479723
In convergence, where concurrent updates are possible, if a resource
is deleted (by a previous traversal) after the dependency graph is
created for a new traversal, the resource remains in the graph but is
no longer available in the DB for processing.
It is a prerequisite to have resources in the DB before any action can
be taken on them.
Hence, during the convergence resource delete action, the resource
entry in the DB is not deleted but soft deleted, so that the latest/new
update can still find the entry.
All of these soft-deleted resources are deleted once the stack has
completed its operation.
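A sketch of the soft-delete idea; the column values and helper names are
assumptions:

    def resource_soft_delete(session, resource):
        # Keep the row so a later traversal can still find it.
        resource.action = 'DELETE'
        resource.status = 'COMPLETE'
        session.add(resource)

    def resource_purge_deleted(session, resource_cls, stack_id):
        # Called once the stack operation has completed.
        (session.query(resource_cls)
         .filter_by(stack_id=stack_id, action='DELETE', status='COMPLETE')
         .delete(synchronize_session=False))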
Closes-Bug: #1528560
Change-Id: I0b36ce098022560d7fe01623ce7b66d1d5b38d55
To prevent a resource query for every nested stack during a
resource-list, there needs to be a way to fetch every resource in a
single query.
Change-Id: Ib05b2166d6c7584a844e1ab4a5dd6e35437c96c4
Related-Bug: #1578854
This patch reverts change I6a212da19a774239f014163774e75fe11dfe272c
and adds a new DB API, resource_get_all_active_by_stack, to be used by
convergence.
The new DB API is used while generating the graph for a convergence
stack and fetches all resources of the stack from the DB, excluding
any DELETE COMPLETE resources.
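A sketch of the filter, with assumed model and column names:

    from sqlalchemy import and_, not_

    def resource_get_all_active_by_stack(session, resource_cls, stack_id):
        return (session.query(resource_cls)
                .filter_by(stack_id=stack_id)
                .filter(not_(and_(resource_cls.action == 'DELETE',
                                  resource_cls.status == 'COMPLETE')))
                .all())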
Change-Id: I303ef2c9b5b6a0a49253425c00565c8981cc6825
Partial-Bug: #1528560
Migrate from nova_instance to physical_resource_id at the
application layer, while the db schema still holds the nova_instance
label. This is the first phase of the migration; the next patch will
take care of the db schema.
Change-Id: I6ebbe3d71d5fb9a7dd3c68ff13777982eb5bbbef
Partial-bug: #1346742
This causes the stack record to be loaded every time a resource is
loaded, and it is not used for anything other than getting the stack
ID, which is already available via the stack_id field.
Change-Id: I45ce9d18984f4881151dba496482713a62c9eae9
Partial-Bug: #1578854
This patch contains the following changes:
- Use the ConcurrentTransactions exception when handling concurrent
transactions during metadata writes.
- The @retry_on_conflict wrapper was already used for the metadata_set
method to allow retrying in the event of a race. The same wrapper is
added for the _push_metadata_software_deployments method.
- Add a new parameter, merge_metadata, to the metadata_set method.
When a RetryRequest exception is raised, oslo_db_api.wrap_db_retry
re-calls metadata_set, and in that case we need to refresh the old
metadata. This is mostly needed for signals without data and id.
For example:
signals A and B arrive at the same moment and both get number 1,
because the metadata was empty. Then, during the db write, a
RetryRequest exception is raised for signal B. The metadata for this
signal still holds the old number, 1, so we should recalculate the
value from the new length of the metadata and set the number to 2
(see the sketch below).
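A sketch of the merge_metadata hook; the read/write callables are
illustrative, not Heat's real signatures:

    def metadata_set(read_metadata, write_metadata, metadata,
                     merge_metadata=None):
        # Re-read on every (re)try so a stale, pre-computed value can be
        # recomputed against the latest metadata before writing.
        if merge_metadata is not None:
            current = read_metadata()
            metadata = merge_metadata(metadata, current)
        write_metadata(metadata)  # may raise and trigger another retry

    # Example merge function for the signal case above: renumber the
    # incoming signal from the latest metadata length, then append it.
    def merge_signal(signal, current_metadata):
        merged = list(current_metadata)
        signal['number'] = len(merged) + 1
        merged.append(signal)
        return merged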
Change-Id: I1ddbad7cde3036cfa9310c670609fcde607ffcac
Co-Authored-By: Zane Bitter <zbitter@redhat.com>
Partially-Bug: #1497274
Currently, when updating a resource in the database, we fetch the row
many times unnecessarily. This adds and uses an API that issues fewer
queries.
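A sketch of the single-query update, with assumed names:

    def resource_update(session, resource_cls, resource_id, values):
        # Issue one UPDATE without loading the row first.
        rows = (session.query(resource_cls)
                .filter_by(id=resource_id)
                .update(values))
        return rows == 1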
Change-Id: Ic50f8646fba6a578634e4e869ab5155756b0a1aa
This change adds a root_stack_id column to the resource
record to allow a subsequent change to enforce
max_resources_per_stack with a single query instead of the
many it currently requires.
This change includes the following:
- Data migration to add the resource.root_stack_id column
and populate all existing resources with their calculated
root stack.
- Make new resources acquire and set their root_stack_id on
store or update.
- Make StackResource._validate_nested_resources use the stored
root_stack_id, resulting in a ~15% performance improvement
for the creation time of a test stack containing 40 nested
stacks.
Change-Id: I2b00285514235834131222012408d2b5b2b37d30
Partial-Bug: 1489548
Fix the Resource object's _refresh method to actually pass a database
object to _from_db_object, instead of a wrapped one. This fixes
properties encryption.
Change-Id: I3d8d54fa7441c95fc3de5354f80ce2e7e2ba7054
Closes-Bug: #1481644
If there are multiple resources with the same name (in the case of
update-replace), only one resource is returned by
resource_get_all_by_stack, which was breaking convergence cleanup.
An additional argument is added to resource_get_all_by_stack to return
a dictionary of resources keyed by the (unique) resource id.
Change-Id: I6a212da19a774239f014163774e75fe11dfe272c
This updates the default crypt method to use the cryptography module
instead of the oslo crypto utils module. It also refactors decrypt to
remove some duplication.
This new patch fixes an issue with small keys.
Change-Id: I3ef166d15306693f0589903785102a359834c307
Closes-Bug: #1468025
This updates the default crypt method to use the cryptography module
instead of the oslo crypto utils module. It also refactors decrypt to
remove some duplication.
Change-Id: Ie24aebcb3080725c250a4f3ba726b23a9c995965
Closes-Bug: #1468025
1. don't return DB models from the objects API
(only return Objects)
2. delete shouldn't return anything
3. update_and_save should return the refreshed object.
Note: there is still some inconsistency in what is returned by
update_by_id(): some implementations return an object and some return
a bool.
Related-bug: #1432936
Change-Id: I1a0a38773d4fc4a62af5e0a98076396f39187b6c
Encrypt properties data before storing it in database and decrypt it
when the resource is being loaded from the database.
Change-Id: I646542b1d03296f62a83041dc2a0ca2719775289
Implements: blueprint encrypt-hidden-parameters
In Heat objects, the default value of the nullable attribute is
explicitly set on many of the objects' fields, which is not required,
as the oslo object Field defines nullable as False by default.
Closes-bug: #1439957
This patch adds the following columns to the resource table:
- `needed_by` (a list of Resource keys)
- `requires` (a list of Resource keys)
- `replaces` (a single Resource key, Null by default)
- `replaced_by` (a single Resource key, Null by default)
- `current_template_id` (a single RawTemplate key)
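A declarative SQLAlchemy sketch of the new columns; the column types are
assumptions (the list-valued fields would be stored as JSON in practice):

    import sqlalchemy as sa
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Resource(Base):
        __tablename__ = 'resource'
        id = sa.Column(sa.Integer, primary_key=True)
        needed_by = sa.Column(sa.Text)       # list of Resource keys
        requires = sa.Column(sa.Text)        # list of Resource keys
        replaces = sa.Column(sa.Integer, nullable=True)
        replaced_by = sa.Column(sa.Integer, nullable=True)
        current_template_id = sa.Column(
            sa.Integer, sa.ForeignKey('raw_template.id'), nullable=True)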
Co-Authored-by: Angus Salkeld <asalkeld@mirantis.com>
Co-Authored-By: Qiming Teng <tengqim@cn.ibm.com>
Change-Id: I65e1032e84b40cb7ae3126fa6b63c914988cc970
Implements: blueprint convergence-resource-table