Remove misleading code from _move_operation_alloc_request()

Change I1c9442eed850a3eb7ac9871fafcb0ae93ba8117c in Pike
made the scheduler "double up" the old_flavor and new_flavor
allocations when resizing to the same host. If there were
Ocata computes still in the deployment at the time, which would
be normal in Pike for rolling upgrades, the ResourceTracker
would overwrite the instance allocations in placement based on
the instance.flavor whenever the update_available_resource
periodic task ran.

If that periodic ran after scheduling but before finish_resize(),
the old_flavor would be used to report allocations. If the periodic
ran after finish_resize(), the new_flavor would be used to report
allocations.

That Ocata-compute auto-heal code was removed in Rocky with change
I39d93dbf8552605e34b9f146e3613e6af62a1774, but it had effectively been
vestigial since Queens: nova-compute can be at most N-1 releases behind,
so there should have been no Ocata compute services left by then.

This change removes the misleading Pike-era code in
_move_operation_alloc_request() which sums the allocations for the
old and new flavor when resizing to the same host, since:

1. The compute service no longer does what the comment says.
2. Since Queens, conductor swaps the instance-held allocations
   on the source node to the migration record (see the sketch
   after this list) and the _move_operation_alloc_request method
   is only called if the instance has allocations, which it won't
   during resize. So the only time _move_operation_alloc_request
   is called now is during an evacuate because conductor doesn't
   do the allocation swap in that case. And since you can't
   evacuate to the same host, the elif block in
   _move_operation_alloc_request is dead code.
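
As a rough conceptual sketch of that swap (illustrative only; the UUID
keys and resource values below are made up, this is not the conductor
code):

    # Placement allocations before conductor starts the resize:
    allocations = {
        'INSTANCE_UUID': {'SOURCE_CN_UUID': {'VCPU': 6, 'MEMORY_MB': 2048}},
    }
    # After the swap the migration record holds the source-node usage and
    # the instance consumer has no allocations, which is why
    # _move_operation_alloc_request is not called for a resize:
    allocations = {
        'MIGRATION_UUID': {'SOURCE_CN_UUID': {'VCPU': 6, 'MEMORY_MB': 2048}},
        'INSTANCE_UUID': {},
    }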

Note that change I1c9442eed850a3eb7ac9871fafcb0ae93ba8117c was
effectively a change in behavior for resize to the same host, because
summing the old and new flavor resource allocations can result in a
false NoValidHost error when in reality the total allocation for the
instance is going to be the maximum of the resource class allocations
between the old and new flavor. For example, if the compute has 8
VCPUs total and the instance is using 6 VCPUs with the old flavor and
is then resized to a new flavor with 8 VCPUs, the scheduler tries to
"claim" 14 VCPUs when the instance will really only use 8, and that
causes a NoValidHost error.
Compared with the pre-Pike scheduling and ResourceTracker.resize_claim
behavior for resize to the same host, the scheduler would only filter
on the new flavor resources and the resize_claim would only claim
based on the new_flavor resources as well. The update_available_resource
periodic would account for old_flavor usage in
_update_usage_from_migration() after the instance was resized and
before it was confirmed/reverted.
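
To make the arithmetic in that example concrete (an illustrative
sketch only, not nova code; the dicts simply stand in for flavor
resource amounts):

    old_flavor = {'VCPU': 6}
    new_flavor = {'VCPU': 8}

    # What the Pike-era code effectively asked placement to claim: the sum.
    summed = {rc: old_flavor.get(rc, 0) + new_flavor.get(rc, 0)
              for rc in set(old_flavor) | set(new_flavor)}
    assert summed == {'VCPU': 14}  # exceeds 8 VCPU inventory: NoValidHost

    # What the instance actually peaks at on the single host: the max.
    peak = {rc: max(old_flavor.get(rc, 0), new_flavor.get(rc, 0))
            for rc in set(old_flavor) | set(new_flavor)}
    assert peak == {'VCPU': 8}  # fits the 8 VCPU inventory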

Change-Id: I8c6b6c46b2587ee727653dafadbcb08b99ed7d35
Related-Bug: #1790204
Author: Matt Riedemann
Date:   2019-02-22 11:04:14 -05:00
Parent: e162bcb22c
Commit: 4363b10f5b
3 changed files with 1 addition and 82 deletions


@@ -143,9 +143,7 @@ def _move_operation_alloc_request(source_allocs, dest_alloc_req):
     contains resources claimed against both source and destination, accounting
     for shared providers.
 
-    Also accounts for a resize to the same host where the source and dest
-    compute node resource providers are going to be the same. In that case
-    we sum the resource allocations for the single provider.
+    This is expected to only be used during an evacuate operation.
 
     :param source_allocs: Dict, keyed by resource provider UUID, of resources
                           allocated on the source host
@@ -169,19 +167,6 @@ def _move_operation_alloc_request(source_allocs, dest_alloc_req):
         if rp_uuid in new_rp_uuids:
             new_alloc_req['allocations'][rp_uuid] = dest_alloc_req[
                 'allocations'][rp_uuid]
-        elif not new_rp_uuids:
-            # If there are no new_rp_uuids that means we're resizing to
-            # the same host so we need to sum the allocations for
-            # the compute node (and possibly shared providers) using both
-            # the current and new allocations.
-            # Note that we sum the allocations rather than take the max per
-            # resource class between the current and new allocations because
-            # the compute node/resource tracker is going to adjust for
-            # decrementing any old allocations as necessary, the scheduler
-            # shouldn't make assumptions about that.
-            scheduler_utils.merge_resources(
-                new_alloc_req['allocations'][rp_uuid]['resources'],
-                dest_alloc_req['allocations'][rp_uuid]['resources'])
 
     LOG.debug("New allocation_request containing both source and "
               "destination hosts in move operation: %s", new_alloc_req)

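For reference, a minimal sketch of the shapes the remaining code path
handles during an evacuate (illustrative dicts only; the provider keys
and values are made up, not taken from the change):

    source_allocs = {
        'SOURCE_CN_UUID': {'resources': {'VCPU': 2, 'MEMORY_MB': 2048}},
    }
    dest_alloc_req = {
        'allocations': {
            'DEST_CN_UUID': {'resources': {'VCPU': 2, 'MEMORY_MB': 2048}},
        },
    }
    # Because the source and destination compute node providers differ on
    # an evacuate, the merged allocation_request simply contains both
    # providers; no per-provider summing is needed:
    # {'allocations': {
    #      'SOURCE_CN_UUID': {'resources': {'VCPU': 2, 'MEMORY_MB': 2048}},
    #      'DEST_CN_UUID': {'resources': {'VCPU': 2, 'MEMORY_MB': 2048}}}}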

@@ -392,23 +392,6 @@ def resources_from_flavor(instance, flavor):
     return resources
 
 
-def merge_resources(original_resources, new_resources, sign=1):
-    """Merge a list of new resources with existing resources.
-
-    Either add the resources (if sign is 1) or subtract (if sign is -1).
-    If the resulting value is 0 do not include the resource in the results.
-    """
-    all_keys = set(original_resources.keys()) | set(new_resources.keys())
-    for key in all_keys:
-        value = (original_resources.get(key, 0) +
-                 (sign * new_resources.get(key, 0)))
-        if value:
-            original_resources[key] = value
-        else:
-            original_resources.pop(key, None)
-
-
 def resources_from_request_spec(spec_obj):
     """Given a RequestSpec object, returns a ResourceRequest of the resources,
     traits, and aggregates it represents.

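For context on the helper being deleted above, a small usage sketch based
on its docstring and on the tests removed in the test module below
(illustrative only; it assumes the merge_resources definition shown in the
hunk above):

    resources = {'VCPU': 1, 'MEMORY_MB': 1024}

    # sign=1 (the default) adds the new amounts into `resources` in place...
    merge_resources(
        resources, {'VCPU': 2, 'MEMORY_MB': 2048, 'CUSTOM_FOO': 1})
    assert resources == {'VCPU': 3, 'MEMORY_MB': 3072, 'CUSTOM_FOO': 1}

    # ...and sign=-1 subtracts them, dropping any class whose value hits zero.
    merge_resources(
        resources, {'VCPU': 3, 'MEMORY_MB': 3072, 'CUSTOM_FOO': 1}, sign=-1)
    assert resources == {}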

@@ -727,55 +727,6 @@ class TestUtils(test.NoDBTestCase):
         self.assertIs(rg2, req.get_request_group(2))
         self.assertIs(unnumbered_rg, req.get_request_group(None))
 
-    def test_merge_resources(self):
-        resources = {
-            'VCPU': 1, 'MEMORY_MB': 1024,
-        }
-        new_resources = {
-            'VCPU': 2, 'MEMORY_MB': 2048, 'CUSTOM_FOO': 1,
-        }
-        doubled = {
-            'VCPU': 3, 'MEMORY_MB': 3072, 'CUSTOM_FOO': 1,
-        }
-        saved_orig = dict(resources)
-        utils.merge_resources(resources, new_resources)
-        # Check to see that we've doubled our resources
-        self.assertEqual(doubled, resources)
-        # and then removed those doubled resources
-        utils.merge_resources(resources, saved_orig, -1)
-        self.assertEqual(new_resources, resources)
-
-    def test_merge_resources_zero(self):
-        """Test 0 value resources are ignored."""
-        resources = {
-            'VCPU': 1, 'MEMORY_MB': 1024,
-        }
-        new_resources = {
-            'VCPU': 2, 'MEMORY_MB': 2048, 'DISK_GB': 0,
-        }
-        # The result should not include the zero valued resource.
-        doubled = {
-            'VCPU': 3, 'MEMORY_MB': 3072,
-        }
-        utils.merge_resources(resources, new_resources)
-        self.assertEqual(doubled, resources)
-
-    def test_merge_resources_original_zeroes(self):
-        """Confirm that merging that result in a zero in the original
-        excludes the zeroed resource class.
-        """
-        resources = {
-            'VCPU': 3, 'MEMORY_MB': 1023, 'DISK_GB': 1,
-        }
-        new_resources = {
-            'VCPU': 1, 'MEMORY_MB': 512, 'DISK_GB': 1,
-        }
-        merged = {
-            'VCPU': 2, 'MEMORY_MB': 511,
-        }
-        utils.merge_resources(resources, new_resources, -1)
-        self.assertEqual(merged, resources)
 
     def test_claim_resources_on_destination_no_source_allocations(self):
         """Tests the negative scenario where the instance does not have
         allocations in Placement on the source compute node so no claim is