The cross-cell resize code does not consider neutron ports with resource
requests. To avoid migration failures, this patch makes nova fall back
to a same-cell resize if the instance has neutron ports with resource
requests.
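The fallback decision can be sketched as below; `allow_cross_cell_resize`
and the port dicts are illustrative stand-ins for Nova's conductor-side
checks, not the actual patch:

```python
def allow_cross_cell_resize(instance, ports):
    """Decide whether a resize may go cross-cell.

    Fall back to a same-cell resize when any neutron port attached to
    the instance carries a resource request, since the cross-cell path
    cannot produce a proper port binding for such ports.
    """
    for port in ports:
        # A port with a non-empty 'resource_request' needs placement
        # allocations that cross-cell resize does not handle.
        if port.get("resource_request"):
            return False
    return True


ports = [
    {"id": "p1", "resource_request": None},
    {"id": "p2", "resource_request": {
        "resources": {"NET_BW_EGR_KILOBIT_PER_SEC": 1000}}},
]
print(allow_cross_cell_resize(None, ports))  # False: p2 has a request
```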
Change-Id: Icaad4b2375b491c8a7e87fb6f4977ae2e13e8190
Closes-Bug: #1907522
Cross-cell resize does not support neutron ports with resource requests,
as nova fails to send a proper port binding to neutron. This causes the
migration to fail. However, we should simply not allow the migration to
go cross-cell if the server has such ports.
This patch adds a functional test to reproduce the problem.
Change-Id: Id91d2e817ef6bd21124bb840bdb098054e9753b8
Related-Bug: #1907522
Fix the following warning in tests.
DeprecationWarning: Using or importing the ABCs from
'collections' instead of from 'collections.abc' is
deprecated since Python 3.3, and in 3.9 it will stop working
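A minimal illustration of the fix (the exact import sites vary across the
tree):

```python
# Deprecated alias, warned about since Python 3.3 and removed in
# later releases:
#   from collections import Mapping
# Forward-compatible form:
from collections.abc import Mapping

print(isinstance({"a": 1}, Mapping))  # True
```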
Change-Id: I0119da31482d667feeae5ecf5059b794da216c7d
Closes-Bug: #1906933
Signed-off-by: Takashi Natsume <takanattie@gmail.com>
Add a description to the PTL guide about updating the REST API
microversion history after milestone-3.
Change-Id: I1530f77291feda4c916cfe9c4a54de7dfdd8180f
Signed-off-by: Takashi Natsume <takanattie@gmail.com>
Replace six.text_type with str.
This patch completes the six removal.
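On Python 3, `six.text_type` is simply an alias for `str`, so the
replacement is mechanical; a hypothetical before/after:

```python
# Before (Python 2/3 compatibility via six):
#   import six
#   name = six.text_type(value)
# After (Python 3 only):
value = 42
name = str(value)
print(name)  # "42"
```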
Change-Id: I779bd1446dc1f070fa5100ccccda7881fa508d79
Implements: blueprint six-removal
Signed-off-by: Takashi Natsume <takanattie@gmail.com>
Replace six.text_type with str.
A subsequent patch will replace the remaining occurrences of
six.text_type.
Change-Id: I23bb9e539d08f5c6202909054c2dd49b6c7a7a0e
Implements: blueprint six-removal
Signed-off-by: Takashi Natsume <takanattie@gmail.com>
During a cross-cell resize, we take an instance snapshot and then
spawn() the instance back on the target cell.
Unfortunately, we mistakenly spawn the instance back from its original
image id instead of using the freshly created snapshot id.
This change updates instance.image_ref with the snapshot id so that
spawn()->_create_image() uses it, and restores instance.image_ref
afterwards.
Note that for the qcow2 backend case, we also need to rebase the disk
image onto its original backing file to avoid a mismatch between
instance.image_ref and the backing file, as we currently do in the
unshelve context.
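The image_ref swap described above can be sketched as follows; the
function name and dict-based instance are illustrative, not the actual
driver code:

```python
def finish_cross_cell_resize(instance, snapshot_id, spawn):
    """Spawn the instance on the target cell from its resize snapshot.

    Temporarily point instance.image_ref at the freshly created
    snapshot so that spawn() -> _create_image() uses it, then restore
    the original image ref afterwards.
    """
    original_image_ref = instance["image_ref"]
    instance["image_ref"] = snapshot_id
    try:
        spawn(instance)
    finally:
        # Restore so the instance keeps reporting its original image.
        instance["image_ref"] = original_image_ref


seen = []
instance = {"image_ref": "original-image-id"}
finish_cross_cell_resize(instance, "snapshot-id",
                         lambda inst: seen.append(inst["image_ref"]))
print(seen)                   # ['snapshot-id']
print(instance["image_ref"])  # 'original-image-id'
```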
Change-Id: I0b81282eba8238d8b64a67e38cf9d6392de1f85c
Closes-Bug: #1906428
As discussed in the bug, the only advice we've been given from the
libvirt and QEMU teams is to avoid this on Bionic, where QEMU is using
the legacy -drive architecture. This is passing in the new zuulv3
Focal-based job, so just skip the test in grenade for the time being.
Change-Id: I1aeab16e2b8d907a114ed22c7e716f534fe1b129
Related-Bug: #1901739
It turned out that during the qos resize work we did not implement
support for cross-cell resize with qos ports. Tempest test coverage for
resize and migrate landed recently [1], which made the nova-multi-cell
job fail.
So this patch disables the qos resize and migrate tempest tests in the
nova-multi-cell job to unblock the gate.
Related-Bug: #1907522
[1] I8c573c4a11a76932057d8415d76114a03269b9f3
Change-Id: I95bc22f7d65454cd9e7b54a0e6d9516f2f204978
Nova's `os-simple-tenant-usage` has the following statement.
```
Reports usage statistics of compute and storage resources periodically
for an individual tenant or all tenants. The usage statistics will
include all instances’ CPU, memory and local disk during a specific
period.
```
Reading that, some people might get the understanding that
`os-simple-tenant-usage` reports the actual use of resources.
Therefore, if a VM is stopped and then started again, it would only
report the time the VM has been up and running. However, that is not
what happens. `os-simple-tenant-usage` reports the time that a virtual
machine (VM) exists in the cloud environment; actions such as pause,
suspend, and stop do not affect the usage accounting.
This can be a problem for people using this API for billing, or for
people that use other systems (e.g. CloudKitty) for billing. End-users
might try to cross-check the billing data from CloudKitty, for
instance, against the usage report found in Horizon, which uses this
API, and the numbers will not add up, as CloudKitty might only charge
for the time a VM has been up and running.
An extension was proposed in [1] to allow operators to customize the
usage accounting. However, during a meeting with the community [2],
the extension was rejected for two reasons:
* Nova tries to avoid config-driven APIs, i.e. APIs that change their
response according to server-side configuration;
* The community has decided to get rid of usage and billing accounting
in Nova.
Having said that, we would like to propose a documentation amendment.
The idea is to state explicitly for users/operators that the simple
usage API only considers the time that the VM existed in the cloud,
not the actual time it has been up and running.
This will prevent some misunderstanding and misuse of the API data.
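A small numeric illustration of the difference (the numbers are
hypothetical):

```python
# A VM created 48 hours ago that was stopped for 30 of those hours.
hours_existed = 48
hours_stopped = 30
hours_running = hours_existed - hours_stopped

# What os-simple-tenant-usage reports: the time the VM existed.
reported_hours = hours_existed
print(reported_hours)  # 48
# What a user cross-checking against uptime-based billing expects.
print(hours_running)   # 18
```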
References
===========
[1] https://review.opendev.org/c/openstack/nova/+/711113
[2] http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-12-03-16.00.log.txt
Closes-Bug: #1907133
Change-Id: Ic55669f5210b57f151f693393f205655765a8dc9
The job has been merged into nova-live-migration by
c0fe95fcc5aec99a83dd57093dc230ef67b36b39, so this unused playbook
should be removed.
Change-Id: Ibdf717e36fe3c7d1d57f094eecda796c6bf38467
This version of packaging is required by oslo.utils as of 4.5 via
Ic9bda0783d3664e1f518d513d81b3271028335fd, which was itself introduced
as a lower-constraints version within Nova by
Ic4d3b998bb9701cb1e3ef12d9bb6f4d91cc19c18.
Change-Id: I67255fa1b919a27e92028da95d71ddd4bf53edc1
Closes-Bug: #1907117
The checks performed by this script aren't always useful to downstream
consumers of the repo, so allow them to disable the script without
having to make changes to tox.ini.
Change-Id: I4f551dc4b57905cab8aa005c5680223ad1b57639
Nova has not run gabbi tests since placement was moved to a separate
git repository, so the gabbi-related tox.ini comment is removed.
Change-Id: Ic324e3e32fa03478895b32fa583e805ee6c721e2
When a compute node has zero total available for the:
* MEMORY_MB
* DISK_GB
* VGPU
* PMEM_NAMESPACE_*
resource classes, we attempt to PUT an inventory with a 'total' of 0,
which isn't allowed by the placement API. Doing this results in a 400
error from placement ("JSON does not validate: 0 is less than the
minimum of 1") and ResourceProviderUpdateFailed and
ResourceProviderSyncFailed being raised in nova.
We already omit most resource classes when the total amount of the
resource is 0; we just need to also do this for the aforementioned
resource classes.
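The fix amounts to skipping zero-total inventories before the PUT; a
sketch with illustrative names, not the actual report-client code:

```python
def sanitize_inventories(inventories):
    """Drop resource classes whose total is 0.

    The placement API rejects an inventory record with 'total': 0
    ("0 is less than the minimum of 1"), so such classes must simply
    be omitted from the PUT body.
    """
    return {rc: inv for rc, inv in inventories.items()
            if inv["total"] > 0}


inventories = {
    "VCPU": {"total": 8},
    "MEMORY_MB": {"total": 0},  # e.g. a node reporting zero memory
    "VGPU": {"total": 0},
}
print(sorted(sanitize_inventories(inventories)))  # ['VCPU']
```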
Closes-Bug: #1901120
Closes-Bug: #1906494
Change-Id: I022f3bbddbbdc24362b10004f273da2421788c97