Enable Rebuild for Instances in cell0
https://blueprints.launchpad.net/nova/+spec/enable-rebuild-for-instances-in-cell0
This spec summarizes the changes needed to enable the rebuilding of instances that failed to be scheduled because there were not enough resources.
Problem description
Presently, rebuilding servers in the ERROR state is allowed as long as they have successfully started up before. But if a user tries to rebuild an instance that was never launched, because the scheduler failed to find a valid host due to the lack of available resources, the request fails with an InstanceInvalidState exception [1]. We are not addressing the case where the server was never launched due to exceeding the maximum number of build retries.
Use Cases
- As an operator, I want to be able to perform corrective actions after a server fails to be scheduled because there were not enough resources (i.e. the instance ends up in the PENDING state, if so configured). Such actions could be adding more capacity or freeing up used resources. After executing these actions, I want to be able to rebuild the server that failed.
Note
Adding the PENDING state, as well as setting instances to it, is out of the scope of this spec; it is being addressed by another change [2].
Proposed change
The flow of the rebuild procedure for instances mapped in cell0 because of scheduling failures caused by lack of resources would then be like this:
- The nova-api, after identifying an instance as being in cell0, should create a new BuildRequest and update the instance mapping (cell_id=None).
- At this point the API should also delete the instance records from the cell0 DB. If this is a soft delete [3], then after the successful completion of the operation we would end up with one record of the instance in the new cell's DB and a record of the same instance in cell0 (deleted=True). A better approach here would be to hard delete [4] the instance's information from cell0.
- Then the nova-api should make an RPC API call to the conductor's new method rebuild_instance_in_cell0. This new method's purpose is almost (if not exactly) the same as the existing schedule_and_build_instances, so we could either call it internally or extract parts of its functionality and reuse them. The reason behind this is mainly to avoid calling schedule-and-build code in the super-conductor directly from rebuild code in the API.
- Finally, an RPC API call is needed from the conductor to the compute service of the selected cell. The rebuild_instance method tries to destroy an existing instance and then re-create it. In this case, since the instance was in cell0, there is nothing to destroy and re-create, so an RPC API call to the existing build_and_run_instance method seems appropriate. A minimal sketch of the API-side portion of this flow follows this list.
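The sketch below is illustrative only: the function name, the hard_delete_in_cell0 callable and the exact signature of the proposed rebuild_instance_in_cell0 call are assumptions; only the use of BuildRequest, InstanceMapping and the overall ordering come from the steps above.

```python
# Hypothetical sketch of the API-side flow; not existing Nova code. The
# function name, the hard_delete_in_cell0 callable and the signature of
# rebuild_instance_in_cell0 are assumptions made for illustration.
from nova import objects


def rebuild_from_cell0(context, instance, conductor_api,
                       hard_delete_in_cell0, **rebuild_kwargs):
    # 1. Re-create a BuildRequest so the instance is treated as
    #    "not yet scheduled" again.
    build_request = objects.BuildRequest(
        context, instance=instance, project_id=instance.project_id)
    build_request.create()

    # 2. Point the instance mapping back to "no cell" (cell_id=None).
    mapping = objects.InstanceMapping.get_by_instance_uuid(
        context, instance.uuid)
    mapping.cell_mapping = None
    mapping.save()

    # 3. Hard delete the instance records from the cell0 DB so that no
    #    (deleted=True) duplicate is left behind.
    hard_delete_in_cell0(context, instance)

    # 4. Hand off to the new conductor method, which schedules and builds
    #    much like schedule_and_build_instances does today.
    conductor_api.rebuild_instance_in_cell0(
        context, instance, build_request=build_request, **rebuild_kwargs)
```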
Information provided by the user in the initial request, such as the keypair, trusted_image_certificates, BDMs, tags and config_drive, can be retrieved from the instance buried in cell0. Currently, there is no way to recover the requested networks while trying to rebuild the instance. For this:
- A reasonable change would be to extend the RequestSpec object by adding a requested_networks field, where the requested networks will be stored.
- When the scheduler fails to find a valid host for an instance and the VM goes to cell0, the list of requested networks will be stored in the RequestSpec.
- As soon as the rebuild procedure starts and the requested networks are retrieved, the new field will be set to None.
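A minimal sketch of this store-and-consume behaviour is shown below, assuming the new requested_networks field; the function names are illustrative and do not correspond to existing Nova methods.

```python
# Hypothetical sketch, assuming the new RequestSpec.requested_networks field
# described above; the function names are illustrative only.

def store_requested_networks_on_bury(request_spec, requested_networks):
    """Called when the scheduler finds no valid host and the instance is
    buried in cell0: persist the requested networks for a later rebuild."""
    request_spec.requested_networks = requested_networks
    request_spec.save()


def consume_requested_networks_on_rebuild(request_spec):
    """Called when the rebuild from cell0 starts: retrieve the stored
    networks and clear the field so it is not reused later."""
    requested_networks = request_spec.requested_networks
    request_spec.requested_networks = None
    request_spec.save()
    return requested_networks
```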
The same applies to personality files, which can be provided during the initial create request but have been deprecated from the rebuild API since microversion 2.57 [5]. Since the files are not persisted, we have no way of retrieving them during a rebuild from cell0. For this we have a couple of alternatives:
- Handle personality files like requested networks and persist them in the RequestSpec.
- Document this as a limitation of the feature: users who would like to use the new rebuild functionality should not use personality files.
- Another option would be to track in the instance's system_metadata whether the instance was created with personality files. Then, during the rebuild from cell0, we could check this and reject the request for instances created with personality files (sketched below).
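A minimal sketch of the third option follows; the has_personality_files system_metadata key is an assumption of this spec and is not written by Nova today.

```python
# Hypothetical sketch of the third alternative. The 'has_personality_files'
# system_metadata key is an assumption; it would have to be written at
# create time for this check to work.

def rebuild_from_cell0_allowed(instance):
    """Return False for instances created with personality files, since
    those files cannot be recovered from cell0."""
    return instance.system_metadata.get('has_personality_files') != 'True'
```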
There is an ongoing discussion on the mailing list about how to handle personality files [6].
Quota Checks
During the normal build flow, there are quota checks at the API level [7] as well as at the conductor level [8]. Consider the scenario where a user has enough RAM quota for a new instance but, as soon as the instance is created, it ends up in cell0 because scheduling failed.
There are two distinct cases when checking quota for instances, cores and RAM:
Checking quota from the Nova DB
In this case the instance's resources, although the instance is in cell0, will be counted, since the instance records are in the DB. There is, though, a slight window for a race condition when the instance gets hard deleted.
Checking quota from Placement [9]
When the instance is in cell0, there are no allocations in Placement for this consumer, meaning that the instance's resources will not be counted during subsequent checks, and there is no check at the API level when rebuilding.
Rechecking quota at the conductor level will make sure that the user's quota is sufficient before proceeding with the build procedure (a sketch follows below).
Between the initial build and the rebuild from cell0, port usage might have changed. Since port quota is not checked when rebuilding from cell0, we might fail late, in the compute service, while trying to create the port. Although the user will not get a quick failure from the API, this is acceptable because at that point usage is already over the limit and the server would not have booted successfully anyway.
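For illustration, the conductor-level recheck could look like the minimal sketch below. The function and its dict-based usage/limits inputs are assumptions; how usage is counted (Nova DB or Placement) is left to the configured quota mechanism.

```python
# Hypothetical sketch of a conductor-level quota recheck before rebuilding
# from cell0. The dict-based 'usage' and 'limits' inputs are assumptions;
# only exception.OverQuota is an existing Nova exception.
from nova import exception


def recheck_quota_before_rebuild(flavor, usage, limits):
    """Fail before handing the build over to the compute service if the
    rebuilt instance would push cores or RAM usage over the limits."""
    overs = []
    if usage['cores'] + flavor.vcpus > limits['cores']:
        overs.append('cores')
    if usage['ram'] + flavor.memory_mb > limits['ram']:
        overs.append('ram')
    if overs:
        raise exception.OverQuota(overs=overs)
```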
Alternatives
The user could delete the instance that failed and create a new one with the same characteristics, but not the same ID. The proposed functionality is a dependency for supporting preemptible instances, where an external service automatically rebuilds a failed server after taking corrective actions. In that feature, maintaining the ID of the instance is of vital importance. This is the main reason why deleting and re-creating cannot be considered an acceptable alternative solution.
Data model impact
Add a requested_networks field to the RequestSpec object
that will contain a NetworkRequestList object. Since the RequestSpec is
stored as a blob (mediumtext) in the database, no schema modification is
needed.
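A minimal sketch of the object change is shown below; only the new field is given, and the usual object version bump and compatibility handling are omitted.

```python
# Hypothetical sketch of the RequestSpec change; existing fields and the
# object version bump are omitted for brevity.
from nova.objects import base
from nova.objects import fields


@base.NovaObjectRegistry.register
class RequestSpec(base.NovaObject):
    fields = {
        # ... existing RequestSpec fields ...
        'requested_networks': fields.ObjectField('NetworkRequestList',
                                                 nullable=True),
    }
```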
REST API impact
A new API microversion is needed. Rebuilding an instance that is mapped to cell0 will continue to fail for older microversions.
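For illustration, rebuilding a cell0-buried instance through the REST API would look like any other rebuild request, just gated on the new microversion. The 2.XX value is a placeholder (the microversion is not assigned yet), and the endpoint, token and UUIDs below are dummies.

```python
# Illustrative only: a rebuild request against an instance buried in cell0,
# once the new microversion exists. '2.XX', the endpoint, token and UUIDs
# are placeholders.
import requests

NOVA_ENDPOINT = 'http://nova-api:8774/v2.1'
SERVER_ID = 'REPLACE-WITH-INSTANCE-UUID'
IMAGE_ID = 'REPLACE-WITH-IMAGE-UUID'
TOKEN = 'REPLACE-WITH-KEYSTONE-TOKEN'

resp = requests.post(
    '%s/servers/%s/action' % (NOVA_ENDPOINT, SERVER_ID),
    headers={
        'X-Auth-Token': TOKEN,
        # Request the new (not yet assigned) microversion.
        'OpenStack-API-Version': 'compute 2.XX',
    },
    json={'rebuild': {'imageRef': IMAGE_ID}})
resp.raise_for_status()
```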
Security impact
None.
Notifications impact
None.
Other end user impact
Users will be allowed to rebuild instances that failed due to the lack of resources.
Performance Impact
None.
Other deployer impact
None.
Developer impact
None.
Upgrade impact
None.
Implementation
Assignee(s)
- Primary assignee:
  - ttsiouts
- Other contributors:
  - johnthetubaguy
  - strigazi
  - belmoreira
Work Items
See Proposed change.
Dependencies
None.
Testing
In order to verify the validity of the functionality:
- New unit tests have to be implemented and existing ones should be adapted.
- New functional tests have to be implemented to verify the rebuilding of instances in cell0 and the handling of instance tags, keypairs, trusted_image_certificates etc.
- The new tests should take into consideration BFV instances and the handling of BDMs.
Documentation Impact
We should update the documentation to state that rebuild is allowed for instances that have never booted before.
References
Discussed at the Dublin PTG: https://etherpad.openstack.org/p/nova-ptg-rocky (#L459)
History
| Release Name | Description |
|---|---|
| Rocky | Introduced |
| Stein | Re-proposed |
| Train | Re-proposed |
[1] https://github.com/openstack/nova/blob/d42a007425d9adb691134137e1e0b7dda356df62/nova/compute/api.py#L147
[3] In this scope, soft delete means that a non-zero value is set in the deleted column.
[4] Hard delete means that the record is removed from the table.
[5] https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id52
[6] http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004901.html
[7] https://github.com/openstack/nova/blob/fc3890667e4971e3f0f35ac921c2a6c25f72adec/nova/compute/api.py#L937
[8] https://github.com/openstack/nova/blob/fc3890667e4971e3f0f35ac921c2a6c25f72adec/nova/conductor/manager.py#L1422