1580 Commits

Author SHA1 Message Date
Surya Seetharaman
833af5c9bf API microversion 2.69: Handles Down Cells Documentation
This patch adds documentation for the down-cell handling work
introduced in v2.69.

Related to blueprint handling-down-cell

Change-Id: I78ed924a802307a992ff90e61ae7ff07c2cc39d1
2019-02-20 10:10:50 -08:00
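
A minimal sketch of opting into this microversion from a client, assuming
a valid endpoint and token (placeholders below); with 2.69, listing
servers returns partial results for instances in down cells instead of
failing the whole request:

    # Hedged sketch: plain requests call with an explicit microversion.
    import requests

    NOVA = "http://controller:8774/v2.1"  # placeholder endpoint
    HEADERS = {
        "X-Auth-Token": "<token>",                # placeholder
        "X-OpenStack-Nova-API-Version": "2.69",   # opt into down-cell handling
    }

    # Servers living in down cells come back with a minimal set of fields
    # rather than causing the whole listing to error out.
    resp = requests.get(NOVA + "/servers/detail", headers=HEADERS)
    print(resp.status_code, len(resp.json().get("servers", [])))
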
Zuul
c49d8eeae5 Merge "Default zero disk flavor to RULE_ADMIN_API in Stein" 2019-02-15 02:18:05 +00:00
Zuul
3b3d0442e4 Merge "Remove deprecated 'os-flavor-manage' policy" 2019-02-15 01:56:08 +00:00
Zuul
5c18dc108e Merge "Change live_migration_wait_for_vif_plug=True by default" 2019-02-14 19:24:33 +00:00
Takashi NATSUME
dedeff70a7 Remove deprecated 'os-flavor-manage' policy
Remove the 'os_compute_api:os-flavor-manage' policy.
The 'os_compute_api:os-flavor-manage' policy has been deprecated
since 16.0.0 Pike.
It has been replaced by the following policies:

- os_compute_api:os-flavor-manage:create
- os_compute_api:os-flavor-manage:delete

Change-Id: I856498dfcebfa330598a22dd7c660bd6f158351b
2019-02-14 15:00:37 +00:00
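
For illustration, a sketch of how the two replacement policies would be
declared with oslo.policy; the admin-only check_str and the operation
paths are assumptions, not taken from the commit:

    # Hedged sketch of split flavor-manage policy declarations.
    from oslo_policy import policy

    rules = [
        policy.DocumentedRuleDefault(
            name='os_compute_api:os-flavor-manage:create',
            check_str='rule:admin_api',   # assumed default
            description='Create a flavor.',
            operations=[{'path': '/flavors', 'method': 'POST'}]),
        policy.DocumentedRuleDefault(
            name='os_compute_api:os-flavor-manage:delete',
            check_str='rule:admin_api',   # assumed default
            description='Delete a flavor.',
            operations=[{'path': '/flavors/{flavor_id}',
                         'method': 'DELETE'}]),
    ]
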
Zuul
bff3fd1cdb Merge "API: Remove evacuate/live-migrate 'force' parameter" 2019-02-09 22:55:34 +00:00
Matt Riedemann
35cc0f5e94 Share snapshot image membership with instance owner
When an admin creates a snapshot of another project owner's
instance, either via the createImage API directly, or via the
shelve or createBackup APIs, the admin project is the owner
of the image and the owner of the instance (in another project)
cannot "see" the image. This is a problem, for example, if an
admin shelves a tenant user's server and then the user tries to
unshelve the server because the user will not have access to
get the shelved snapshot image.

This change fixes the problem by leveraging the sharing feature [1]
in the v2 image API. When a snapshot is created where the request
context project_id does not match the owner of the instance project_id,
the instance owner project_id is granted sharing access to the image.
By default, this means the instance owner (tenant user) can get the
image directly via the image ID if they know it, but otherwise the image
is not listed for the user to avoid spamming their image listing. In the
case of unshelve, the end user does not need to know the image ID since
it is stored in the instance system_metadata. Regardless, the user could
accept the pending image membership if they want to see the snapshot
show up when listing available images.

Note that while the non-admin project has access to the snapshot
image, they cannot delete it. For example, if the user tries to
delete or unshelve a shelved offloaded server, nova will try to
delete the snapshot image which will fail and log a warning since
the user does not own the image (the admin does). However, the
delete/unshelve operations will not fail because the image cannot
be deleted, which is an acceptable trade-off.

Due to some very old legacy virt driver code which started in the
libvirt driver and was copied to several other drivers, several virt
drivers had to be modified to not overwrite the "visibility=shared"
image property by passing "is_public=False" when uploading the image
data. There was no point in the virt drivers setting is_public=False
since the API already controls that. It does mean, however, that
the bug fix is not really in effect until both the API and compute
service code have this fix.

A functional test is added which depends on tracking the owner/member
values in the _FakeImageService fixture. Impacted unit tests are
updated accordingly.

[1] https://developer.openstack.org/api-ref/image/v2/index.html#sharing

Change-Id: If53bc8fa8ab4a8a9072061af7afed53fc12c97a5
Closes-Bug: #1675791
2019-02-08 18:06:27 -05:00
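
A minimal sketch of the Glance v2 sharing flow described above, using
python-glanceclient; client construction is elided and the IDs are
placeholders:

    # 'glance' is assumed to be an authenticated
    # glanceclient.v2.client.Client instance.
    image_id = 'IMAGE_UUID'           # snapshot created by the admin
    owner_project = 'PROJECT_UUID'    # instance owner's project

    # What nova now does when the request context project does not match
    # the instance owner: grant the owner membership on the image.
    glance.image_members.create(image_id, owner_project)

    # Optionally, the owner accepts the pending membership so the image
    # shows up in their image listing.
    glance.image_members.update(image_id, owner_project, 'accepted')
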
Stephen Finucane
36a91936a8 API: Remove evacuate/live-migrate 'force' parameter
Add a new microversion that removes support for the aforementioned
argument, which cannot be adequately guaranteed in the new placement
world.

Change-Id: I2a395aa6eccad75a97fa49e993b0300bdcfc7258
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Implements: blueprint remove-force-flag-from-live-migrate-and-evacuate
APIImpact
2019-02-08 17:05:19 -05:00
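
For reference, a hedged sketch of a live-migrate request at the new
microversion (assumed here to be 2.68); a body containing 'force' would
now be rejected as invalid:

    import requests

    NOVA = "http://controller:8774/v2.1"     # placeholder endpoint
    headers = {
        "X-Auth-Token": "<token>",               # placeholder
        "X-OpenStack-Nova-API-Version": "2.68",  # assumed microversion
    }
    body = {"os-migrateLive": {
        "host": "dest-compute",       # still allowed, but validated
        "block_migration": "auto",
        # "force": True  <- no longer accepted at this microversion
    }}
    requests.post(NOVA + "/servers/SERVER_UUID/action",
                  json=body, headers=headers)
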
Matt Riedemann
1a42eb9ec1 Change live_migration_wait_for_vif_plug=True by default
This resolves the TODO to make the option default to True
so that the source compute service will wait for the
"network-vif-plugged" event, initiated by vif plugging during
pre_live_migration on the destination compute service, before
initiating the guest transfer in the hypervisor. There are
certain networking backends that will not send the neutron
event for vif plugging alone (which is arguably a bug) but
OVS and linuxbridge, probably the two most widely used in
OpenStack deployments, are known to work with this config.

While in here, the Timeout message is fleshed out to give
more help with what the cause of the timeout could be and
possible recourse.

Change-Id: I8da38aec0fe4808273b8587ace3df9dbbc3ab576
2019-02-07 11:43:01 -05:00
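
A sketch of how such a boolean default is expressed with oslo.config;
the group and help text here are assumptions for illustration, and only
the option name and new default come from the commit:

    from oslo_config import cfg

    opts = [
        cfg.BoolOpt('live_migration_wait_for_vif_plug',
                    default=True,  # the new default from this change
                    help='Wait for the network-vif-plugged event (sent by '
                         'Neutron after the destination plugs VIFs in '
                         'pre_live_migration) before starting the guest '
                         'transfer on the source host.'),
    ]
    cfg.CONF.register_opts(opts, group='compute')
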
Takashi NATSUME
25477e6771 Add minimum value in maximum_instance_delete_attempts
Add a minimum value of 1 to the maximum_instance_delete_attempts option.

Change-Id: Ic2ce1973971a6827f1a7d00c8c054b6561ab7e72
2019-02-07 08:26:50 +00:00
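
A sketch of adding a lower bound with oslo.config; the default shown is
an assumption. With min set, values below 1 now fail when the option is
loaded rather than misbehaving later:

    from oslo_config import cfg

    opt = cfg.IntOpt('maximum_instance_delete_attempts',
                     default=5,   # assumed default, for illustration
                     min=1,       # the new minimum from this change
                     help='Maximum number of retry attempts while '
                          'deleting an instance.')
    cfg.CONF.register_opt(opt)
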
Zuul
6d7336a11f Merge "Follow up for per-instance serial number change" 2019-02-05 13:01:38 +00:00
Zuul
570b07e53c Merge "Add support for vrouter HW datapath offloads" 2019-02-05 02:17:18 +00:00
Zuul
d51fcb6f49 Merge "Per-instance serial number" 2019-02-04 22:22:31 +00:00
Matt Riedemann
b29158149d Follow up for per-instance serial number change
This is a follow up change to address review nits
from I001beb2840496f7950988acc69018244847aa888.

Change-Id: I1fbfa46b52b32039ff3d6703a27306b56314c1d5
2019-02-04 11:53:15 -05:00
Kevin_Zheng
dec5dd9286 Per-instance serial number
Added a new ``unique`` choice to the ``[libvirt]/sysinfo_serial``
configuration option which, if set, results in the guest serial
number being set to ``instance.uuid`` on this host. It is also made
the default value of the ``[libvirt]/sysinfo_serial`` config option.

Implements: blueprint per-instance-libvirt-sysinfo-serial

Change-Id: I001beb2840496f7950988acc69018244847aa888
2019-02-03 10:59:21 +08:00
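
A sketch of the resulting choices-constrained option; 'unique' and its
promotion to default come from the commit, while the other choice
values are assumptions based on the pre-existing option:

    from oslo_config import cfg

    opt = cfg.StrOpt('sysinfo_serial',
                     default='unique',   # new default per this change
                     choices=('none', 'os', 'hardware', 'auto', 'unique'),
                     help="Source of the guest's serial number; 'unique' "
                          "uses instance.uuid on this host.")
    cfg.CONF.register_opt(opt, group='libvirt')
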
Zuul
aa89979e67 Merge "Reject networks with QoS policy" 2019-02-02 17:58:36 +00:00
Jan Gutter
4d32b45c15 Add support for vrouter HW datapath offloads
Add plumbing for Contrail/Tungsten Fabric datapath offloads

* This change expands the VNIC type support for the vrouter VIF type by
  adding 'direct' and 'virtio-forwarder' plugging support.

* After this change, the vrouter VIF type will support the following modes:
  * 'normal': the 'classic' tap-style VNIC plugged into the instance,
  * 'direct': a PCI Virtual Function is passed through to the instance,
  * 'virtio-forwarder': a PCI Virtual Function is proxied to the
    instance via a vhost-user virtio forwarder.

* The os-vif conversion function was extended to support the two new
  plugging modes.

* Unit tests were added for the os-vif conversion functions.

* OpenContrail / Tungsten Fabric is planning to consume this
  functionality for the 5.1 release.

* os-vif 1.14.0 is required to pass the metadata.

Change-Id: I327894839a892a976cf314d4292b22ce247b0afa
Depends-On: I401ee6370dad68e62bc2d089e786a840d91d0267
Signed-off-by: Jan Gutter <jan.gutter@netronome.com>
blueprint: vrouter-hw-offloads
2019-01-31 13:55:50 +00:00
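
An illustrative mapping of the three supported VNIC types to the os-vif
VIF classes typically used for each datapath; this is a sketch, not the
driver's actual conversion function:

    # Names on the right are os-vif object classes.
    VROUTER_VNIC_TO_OSVIF = {
        'normal': 'VIFGeneric',              # classic kernel TAP device
        'direct': 'VIFHostDevice',           # PCI VF passed through
        'virtio-forwarder': 'VIFVHostUser',  # VF proxied via vhost-user
    }

    def osvif_type_for(vnic_type):
        """Return the os-vif VIF class name for a vrouter VNIC type."""
        try:
            return VROUTER_VNIC_TO_OSVIF[vnic_type]
        except KeyError:
            raise ValueError('unsupported vnic_type: %s' % vnic_type)
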
Zuul
ab5a9bba31 Merge "Add fill_virtual_interface_list online_data_migration script" 2019-01-31 13:43:35 +00:00
Zuul
5b5b5749e9 Merge "unused images are always deleted (add to in-tree hyper-v code)" 2019-01-31 13:07:12 +00:00
Zuul
ea32c35cdc Merge "Add configuration of maximum disk devices to attach" 2019-01-31 10:33:56 +00:00
melanie witt
bb0906f4f3 Add configuration of maximum disk devices to attach
This adds a new config option to control the maximum number of disk
devices allowed to attach to a single instance, which can be set per
compute host.

The configured maximum is enforced when device names are generated
during server create, rebuild, evacuate, unshelve, live migrate, and
attach volume. When the maximum is exceeded during server create,
rebuild, evacuate, unshelve, or live migrate, the server will go into
ERROR state and the server fault will contain the reason. When the
maximum is exceeded during an attach volume request, the request fails
fast in the API with a 403 error.

The configured maximum on the destination is not enforced before cold
migrate because the maximum is enforced in-place only (the destination
is not checked over RPC). The configured maximum is also not enforced
on shelved offloaded servers because they have no compute host, and the
option is implemented at the nova-compute level.

Part of blueprint conf-max-attach-volumes

Change-Id: Ia9cc1c250483c31f44cdbba4f6342ac8d7fbe92b
2019-01-30 15:47:10 +00:00
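
An illustrative shape of the enforcement; the option lives at the
nova-compute level, -1 as "unlimited" is an assumption here, and the
exception type is also illustrative:

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opt(
        cfg.IntOpt('max_disk_devices_to_attach', default=-1),  # -1: unlimited
        group='compute')

    def check_device_limit(existing_device_names):
        """Raise if attaching one more disk device would exceed the limit."""
        maximum = CONF.compute.max_disk_devices_to_attach
        if maximum >= 0 and len(existing_device_names) + 1 > maximum:
            # Surfaces as a 403 in the attach-volume API; during
            # create/rebuild/evacuate/unshelve/live-migrate the server
            # goes to ERROR with the reason recorded in the fault.
            raise Exception('Too many disk devices; limit is %d' % maximum)
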
Zuul
9fab7e73e3 Merge "Reject interface attach with QoS aware port" 2019-01-30 13:27:59 +00:00
Zuul
a219f602f7 Merge "Convert vrouter legacy plugging to os-vif" 2019-01-30 10:40:02 +00:00
Maciej Jozefczyk
3534471c57 Add fill_virtual_interface_list online_data_migration script
In change [1] we modified the _heal_instance_info_cache periodic task
to use Neutron's point of view while rebuilding InstanceInfoCache
objects.
The crucial point was how to know the previous order of ports if
the cache was broken. We decided to use VirtualInterfaceList objects
as the source of port order.
For instances older than Newton, VirtualInterface objects don't
exist, so we need to introduce a way of creating them.
This script should be executed while upgrading to the Stein release.

[1] https://review.openstack.org/#/c/591607

Change-Id: Ic26d4ce3d071691a621d3c925dc5cd436b2005f1
Related-Bug: 1751923
2019-01-30 10:03:19 +00:00
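
A stub sketch of the calling convention that "nova-manage db
online_data_migrations" expects: a callable taking a context and a batch
size and returning (found, done) counts; the body below is illustrative
only:

    def fill_virtual_interface_list(context, max_count):
        # Real code would find up to max_count instances whose
        # VirtualInterface rows are missing and recreate them from the
        # instance network info cache, preserving port order.
        found, done = 0, 0
        # nova-manage re-invokes each migration until found == 0.
        return found, done
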
Balazs Gibizer
8364abecfa Reject networks with QoS policy
When Nova needs to create ports in Neutron in a network that has a
minimum bandwidth policy, Nova would need to create allocations for the
bandwidth resources. The port creation happens in the compute manager
after scheduling and resource claiming. Support for this is considered
out of scope for the first iteration of this feature.

To avoid resource allocation inconsistencies, this patch proposes to
reject such requests. This rejection does not break any existing use
case, as the minimum bandwidth policy rule is only supported by the
SRIOV Neutron backend, and Nova only supports booting with SRIOV ports
if those ports are precreated in Neutron.

Co-Authored-By: Elod Illes <elod.illes@ericsson.com>

Change-Id: I7e1edede827cf8469771c0496b1dce55c627cf5d
blueprint: bandwidth-resource-provider
2019-01-28 15:50:25 +01:00
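
An illustrative pre-flight check for the rejection described above; the
field name qos_policy_id matches Neutron's network resource, while the
exception type is an assumption:

    def validate_auto_created_port_network(network):
        """Reject boot-time port creation on QoS-policed networks."""
        if network.get('qos_policy_id'):
            raise ValueError(
                'Creating a port on a network with a QoS minimum '
                'bandwidth policy is not supported; precreate the '
                'port in Neutron.')
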
Jan Gutter
172855f293 Convert vrouter legacy plugging to os-vif
Before this change, the vrouter VIF type used legacy VIF plugging. This
changeset ports the plugging methods over to an external os-vif plugin,
simplifying the in-tree code.

Miscellaneous notes:

 * There are two "vrouter" Neutron VIF types:
    * "contrail_vrouter" supporting vhostuser plugging, and
    * "vrouter", supporting kernel datapath plugging.
 * The VIFGeneric os-vif type is used for the kernel TAP based
   plugging when the vnic_type is 'normal'.
 * For multiqueue support, libvirt 1.3.1 or later is required. In that
   case, libvirt creates the TAP device, rather than the os-vif plugin.
   (This is the minimum version for Rocky and later.)
   ref: https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/1574957
 * The corresponding commit on Tungsten Fabric / OpenContrail for this
   work is at:
   ed01d315e5

Change-Id: I047856982251fddc631679fb2dbcea0f3b0db097
Signed-off-by: Jan Gutter <jan.gutter@netronome.com>
blueprint: vrouter-os-vif-conversion
2019-01-25 17:17:55 +02:00
Takashi NATSUME
4743b08f47 Remove unused quota options
Remove the following configuration options in the 'quota' group
because they have been unused
since Ie01ab1c3a1219f1d123f0ecedc66a00dfb2eb2c1.

- reservation_expire
- until_refresh
- max_age

Change-Id: I56401daa8a2eee5e3aede336b26292f77cc0edd6
2019-01-25 15:17:15 +09:00
Zuul
9419c3e054 Merge "Per aggregate scheduling weight" 2019-01-24 19:58:52 +00:00
Balazs Gibizer
bd6f33070b Reject interface attach with QoS aware port
Attaching a port with minimum bandwidth policy would require to update
the allocation of the server. But for that nova would need to select the
proper networking resource provider under the compute resource provider
the server is running on.

For the first iteration of the feature we consider this out of scope. To
avoid resource allocation inconsistencies this patch propose to reject
such attach interface request. Rejecting such interface attach does not
break existing functionality as today only the SRIOV Neutron backend
supports the minimum bandwidth policy but Nova does not support
interface attach with SRIOV interfaces today.

A subsequent patch will handle attaching a network that has QoS policy.

Co-Authored-By: Elod Illes <elod.illes@ericsson.com>

Change-Id: Id8b5c48a6e8cf65dc0a7dc13a80a0a72684f70d9
blueprint: bandwidth-resource-provider
2019-01-24 16:56:43 +01:00
Zuul
31956108e6 Merge "Cleanup soft (anti)affinity weight multiplier options" 2019-01-22 18:25:15 +00:00
Yikun Jiang
e66443770d Per aggregate scheduling weight
Add the ability for users to use an ``Aggregate``'s ``metadata`` to
override the global config options for weights, achieving more
fine-grained control over resource weights.

blueprint: per-aggregate-scheduling-weight

Change-Id: I6e15c6507d037ffe263a460441858ed454b02504
2019-01-21 11:48:44 +08:00
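
A sketch of overriding a weight multiplier per aggregate through its
metadata, using python-novaclient; the metadata key follows the
"<weigher>_weight_multiplier" convention and is an assumption here, as
are the credentials:

    from keystoneauth1 import loading, session
    from novaclient import client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://keystone:5000/v3',   # placeholder credentials
        username='admin', password='secret', project_name='admin',
        user_domain_id='default', project_domain_id='default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    agg = nova.aggregates.create('fast-io', 'az1')
    # Hosts in this aggregate now weigh I/O ops with -2.0 instead of
    # the global config value.
    nova.aggregates.set_metadata(agg, {'io_ops_weight_multiplier': '-2.0'})
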
Matt Riedemann
313becd5ff Cleanup soft (anti)affinity weight multiplier options
This resolves the TODO from Ocata change:

  I8871b628f0ab892830ceeede68db16948cb293c8

By adding a min=0.0 value to the soft affinity
weight multiplier configuration options.

It also removes the deprecated [DEFAULT] group
alias from Ocata change:

  I3f48e52815e80c99612bcd10cb53331a8c995fc3

Change-Id: I79e191010adbc0ec0ed02c9d589106debbf90ea8
2019-01-19 18:03:16 -05:00
Hesam Chobanlou
716d17e054 unused images are always deleted (add to in-tree hyper-v code)
This applies the changes made in openstack/compute-hyperv
to the in-tree Hyper-V code.

Change-Id: Ib89c2e407bea5168f2eebb12cfb2d50e4566ff7a
Closes-Bug: #1773342
2019-01-18 23:18:25 -05:00
Zuul
a80bc66dc7 Merge "The field instance_name was added to InstanceCreatePayload" 2019-01-16 21:12:12 +00:00
Ifat Afek
0240c7b5b6 The field instance_name was added to InstanceCreatePayload
The field instance_name was missing from Nova instance notifications.
It was added to instance.create.* notifications, as discussed in:
http://lists.openstack.org/pipermail/openstack-discuss/2018-December/001141.html

Change-Id: I52f3186d1bcda0c23a471f9bfd2c55763262658f
2019-01-15 15:27:07 +00:00
Zuul
13f8b54414 Merge "Allow run metadata api per cell" 2019-01-15 02:20:14 +00:00
Kevin_Zheng
e2e372b2b1 Allow run metadata api per cell
Adds the configuration option ``[api]/local_metadata_per_cell``
to allow users to run the Nova metadata API service per cell. Doing
this avoids querying the API DB for instance information each
time an instance queries for its metadata.

Implements blueprint run-meta-api-per-cell

Change-Id: I2e6ebb551e782e8aa0ac90169f4d4b8895311b3c
2019-01-14 10:20:50 -05:00
Kashyap Chamarthy
9160fe5098 libvirt: Support native TLS for migration and disks over NBD
The encryption offered by Nova (via `live_migration_tunnelled`, i.e.
"tunnelling via libvirtd") today secures only two migration streams:
guest RAM and device state; but it does _not_ encrypt the NBD (Network
Block Device) transport—which is used to migrate disks that are on
non-shared storage setup (also called: "block migration").  Further, the
"tunnelling via libvirtd" has a huge performance penalty and latency,
because it burns more CPU and memory bandwidth due to increased number
of data copies on both source and destination hosts.

To solve this existing limitation, introduce a new config option
`live_migration_with_native_tls`, which will take advantage of "native
TLS" (i.e. TLS built into QEMU, and relevant support in libvirt).  The
native TLS transport will encrypt all migration streams, *including*
disks that are not on shared storage — all of this without incurring the
limitations of the "tunnelled via libvirtd" transport.

Closes-Bug: #1798796
Blueprint: support-qemu-native-tls-for-live-migration

Change-Id: I78f5fef41b6fbf118880cc8aa4036d904626b342
Signed-off-by: Kashyap Chamarthy <kchamart@redhat.com>
2019-01-09 11:00:35 +01:00
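
A conceptual sketch of the flag selection, not the driver's exact logic;
VIR_MIGRATE_TLS is the libvirt flag (available since libvirt 3.2) that
QEMU's native TLS support hangs off:

    import libvirt

    def migration_flags(native_tls, tunnelled):
        flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER
        if native_tls:
            # Encrypts all streams: RAM, device state, and the NBD
            # channel used for block migration.
            flags |= libvirt.VIR_MIGRATE_TLS
        elif tunnelled:
            # Legacy tunnelling via libvirtd: RAM and device state only,
            # with the CPU/memory-bandwidth cost described above.
            flags |= libvirt.VIR_MIGRATE_TUNNELLED
        return flags
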
Lee Yarwood
d6c1f6a103 libvirt: Add workaround to cleanup instance dir when using rbd
At present all virt drivers provide a cleanup method that takes a single
destroy_disks boolean to indicate when the underlying storage of an
instance should be destroyed.

When cleaning up after an evacuation or revert resize the value of
destroy_disks is determined by the compute layer calling down both into
the check_instance_shared_storage_local method of the local virt driver
and remote check_instance_shared_storage method of the virt driver on
the host now running the instance.

For the libvirt driver, the initial local call will return None when
using the shared-block RBD imagebackend, as it is assumed all instance
storage is shared, resulting in destroy_disks always being False when
cleaning up. This behaviour is wrong, as the instance disks are stored
separately from the instance directory, which still needs to be cleaned
up on the host. Additionally, this directory could also be shared
independently of the disks, on an NFS share for example, and would need
to be checked as well before removal.

This change introduces a backportable workaround config option for the
libvirt driver with which operators can ensure that the instance
directory is always removed during cleanup when using the RBD
imagebackend. When enabling this workaround, operators will need to
ensure that the instance directories are not shared between computes.

Future work will allow for the removal of this workaround by separating
the shared storage checks from the compute to virt layers between the
actual instance disks and any additional storage required by the
specific virt backend.

Related-Bug: #1761062
Partial-Bug: #1414895
Change-Id: I8fd6b9f857a1c4919c3365951e2652d2d477df77
2019-01-02 10:52:58 +00:00
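
An illustrative shape of the workaround, not the driver's actual code;
the opt-in flag is passed in as a plain boolean since the message above
does not spell out the option name:

    import shutil

    def cleanup(instance_dir, destroy_disks, rbd_instance_dir_workaround):
        # With RBD the local shared-storage check reports disks as
        # shared, so destroy_disks ends up False; the workaround forces
        # removal of the local instance directory anyway. Only safe when
        # instance directories are not shared between compute hosts.
        if destroy_disks or rbd_instance_dir_workaround:
            shutil.rmtree(instance_dir, ignore_errors=True)
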
Zuul
f554c61333 Merge "Final release note for versioned notification transformation" 2018-12-25 00:22:04 +00:00
mmidolesov
38aa83b7fc vmware: add support for the hw_video_ram image property
Added creation of a video card config spec and a validation check
for whether the image meta video RAM ("hw_video_ram") is bigger than
the maximum allowed "hw_video:ram_max_mb" from the flavor.

Change-Id: I944d7e9235790cb2a4a21318c029d51012d157b0
2018-12-19 23:37:22 -08:00
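
A sketch of the validation described above; the property names come from
the message, while the exception type and defaults are assumptions:

    def validate_video_ram(image_props, flavor_extra_specs):
        """Return requested video RAM (MB) after bounds-checking."""
        requested = int(image_props.get('hw_video_ram', 0))
        maximum = int(flavor_extra_specs.get('hw_video:ram_max_mb', 0))
        if requested and requested > maximum:
            raise ValueError(
                'hw_video_ram (%d MB) exceeds hw_video:ram_max_mb (%d MB)'
                % (requested, maximum))
        return requested
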
Zuul
d3a1f3753b Merge "Address nits on I1f1fa1d0f79bec5a4101e03bc2d43ba581dd35a0" 2018-12-19 17:27:23 +00:00
Zuul
93597864a3 Merge "Make [cinder]/catalog_info no longer require a service_name" 2018-12-19 01:59:28 +00:00
Zuul
da17e0ed8a Merge "Drop request spec migration code" 2018-12-18 19:51:25 +00:00
Stephen Finucane
1f0c79b39a Address nits on I1f1fa1d0f79bec5a4101e03bc2d43ba581dd35a0
Change-Id: I3e88a1449aca4feedb3d40b238cbeeca3dbcfd69
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
2018-12-18 11:55:28 +00:00
Zuul
5d748001c2 Merge "Fail to live migration if instance has a NUMA topology" 2018-12-18 02:56:40 +00:00
Mohammed Naser
c8e65a5eb1 Default zero disk flavor to RULE_ADMIN_API in Stein
The policy that allows booting instances without a volume when
root_gb is set to 0 was slated to default to admin-only
in Stein.

Depends-On: I537c299b0cd400982189f35b31df74755422737e

Co-Authored-By: Matt Riedemann <mriedem.os@gmail.com>

Related-Bug: #1739646
Change-Id: I247402b6c4ff8a7cb71ef247a218478194d68ff8
2018-12-17 16:59:12 -05:00
Matt Riedemann
ed4fe3ead6 Drop request spec migration code
This removes the request spec online data migration
routine added via change I61b9b50436d8bdb8ff06259cc2f876502d688b91
in Newton.

A nova-status upgrade check was added for this in Rocky with
change I1fb63765f0b0e8f35d6a66dccf9d12cc20e9c661 so operators should
be able to tell, either from the results of the nova-status upgrade
check command or "nova-manage db online_data_migrations" command,
whether or not they have completed the migration and can upgrade to
Stein.

This also allows us to start removing the various compat checks
in the API and conductor services for old instances that predated
having a request spec since those should have a request spec now
via the online data migration.

Related to blueprint request-spec-use-by-compute

Change-Id: Ib0de49b3c0d6b3d807c4eb321976848773c1a9e8
2018-12-17 13:54:48 -05:00
Zuul
4022171a61 Merge "libvirt: remove live_migration_progress_timeout config" 2018-12-15 19:33:09 +00:00
Zuul
35ee7edd94 Merge "libvirt: add live migration timeout action" 2018-12-15 19:33:02 +00:00