nova/nova/virt/hyperv
Matt Riedemann 6ca6f6fce6 Share snapshot image membership with instance owner
When an admin creates a snapshot of another project owner's
instance, either via the createImage API directly, or via the
shelve or createBackup APIs, the admin project is the owner
of the image and the owner of the instance (in another project)
cannot "see" the image. This is a problem, for example, if an
admin shelves a tenant user's server and the user then tries to
unshelve it, because the user will not have access to get the
shelved snapshot image.

This change fixes the problem by leveraging the sharing feature [1]
in the v2 image API. When a snapshot is created and the request
context project_id does not match the instance owner's project_id,
the instance owner's project_id is granted sharing access to the image.
By default, this means the instance owner (tenant user) can get the
image directly via the image ID if they know it, but otherwise the image
is not listed for the user to avoid spamming their image listing. In the
case of unshelve, the end user does not need to know the image ID since
it is stored in the instance system_metadata. Regardless, the user could
accept the pending image membership if they want to see the snapshot
show up when listing available images.
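As a rough illustration, the grant boils down to creating an image
member via the v2 API. A minimal sketch using python-glanceclient
follows; the helper function is hypothetical, not nova's actual code:

    # Minimal sketch (hypothetical helper) of sharing a snapshot image
    # with the instance owner via the Glance v2 image sharing API.
    from glanceclient import Client

    # Assumed setup; the endpoint/token come from the deployment.
    # glance = Client('2', endpoint='http://glance:9292', token='...')

    def share_snapshot_if_cross_project(glance, image_id,
                                        context_project_id,
                                        instance_project_id):
        # Only needed when the snapshot was created by a different
        # project (e.g. an admin) than the one owning the instance.
        if context_project_id != instance_project_id:
            # Creates a membership in the "pending" state: the owner
            # project can fetch the image by ID but will not see it
            # listed until it accepts the membership.
            glance.image_members.create(image_id, instance_project_id)

The owner project can later accept the pending membership, e.g. with
"glance member-update <image_id> <project_id> accepted", to have the
snapshot show up when listing images.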

Note that while the non-admin project has access to the snapshot
image, they cannot delete it. For example, if the user tries to
delete or unshelve a shelved offloaded server, nova will try to
delete the snapshot image which will fail and log a warning since
the user does not own the image (the admin does). However, the
delete/unshelve operations themselves will not fail just because
the image cannot be deleted, which is an acceptable trade-off.

Due to some very old legacy virt driver code which started in the
libvirt driver and was copied to several other drivers, several virt
drivers had to be modified to not overwrite the "visibility=shared"
image property by passing "is_public=False" when uploading the image
data. There was no point in the virt drivers setting is_public=False
since the API already controls that. It does mean, however, that
the bug fix is not really in effect until both the API and the
compute service have this fix.

A functional test is added which depends on tracking the owner/member
values in the _FakeImageService fixture. Impacted unit tests are
updated accordingly.
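For context, a hypothetical sketch of what owner/member tracking in a
fake image service could look like (not the actual fixture code):

    # Track image owners and shared members so tests can assert on them.
    class FakeImageService(object):
        def __init__(self):
            self.images = {}   # image_id -> metadata (incl. 'owner')
            self.members = {}  # image_id -> set of member project_ids

        def create(self, context, metadata):
            metadata.setdefault('owner', context.project_id)
            self.images[metadata['id']] = metadata
            return metadata

        def add_member(self, context, image_id, member_id):
            self.members.setdefault(image_id, set()).add(member_id)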

[1] https://developer.openstack.org/api-ref/image/v2/index.html#sharing

Conflicts:
        nova/compute/api.py
        nova/compute/utils.py

NOTE(seyeongkim): The conflict is due to not having change
7e229ba40d in Rocky.

        nova/tests/functional/test_images.py

NOTE(seyeongkim): The conflict is due to not having the correct
uuidsentinel position.

Change-Id: If53bc8fa8ab4a8a9072061af7afed53fc12c97a5
Closes-Bug: #1675791
(cherry picked from commit 35cc0f5e94)
2019-04-30 21:13:46 +09:00
README.rst Adds Hyper-V support in nova-compute (with new network_info model), including unit tests 2012-08-16 03:38:51 +03:00
__init__.py Add Hyper-V driver in the "compute_driver" option description 2014-07-24 02:47:32 -07:00
block_device_manager.py Rename block_device_info_get_root 2018-01-11 20:46:13 +00:00
constants.py objects: Move 'arch' to 'fields.Architecture' 2016-11-25 16:19:41 +00:00
driver.py ironic: check fresh data when sync_power_state doesn't line up 2019-03-04 14:24:30 +00:00
eventhandler.py Remove translation of log messages 2017-06-13 11:20:28 +07:00
hostops.py hyperv: report disk_available_least field 2017-09-19 18:33:57 +00:00
imagecache.py Remove translation of log messages 2017-06-13 11:20:28 +07:00
livemigrationops.py Merge "Hyper-V: fix live migration with CSVs" 2017-12-01 01:38:59 +00:00
migrationops.py Hyper-V: Perform proper cleanup after cold migration 2017-08-30 17:50:11 +00:00
pathutils.py propagate OSError to MigrationPreCheckError 2017-10-20 16:46:15 -04:00
rdpconsoleops.py Hyper-V: adds os-win library 2015-12-02 16:34:24 +02:00
serialconsolehandler.py Remove translation of log messages 2017-06-13 11:20:28 +07:00
serialconsoleops.py Remove translation of log messages 2017-06-13 11:20:28 +07:00
serialproxy.py HyperV: Add serial console proxy 2016-04-18 20:32:13 +03:00
snapshotops.py Share snapshot image membership with instance owner 2019-04-30 21:13:46 +09:00
vif.py Adds Hyper-V OVS ViF driver 2017-01-11 22:22:13 +00:00
vmops.py hyperv: Cleans up live migration Planned VM 2018-09-14 11:49:17 -06:00
volumeops.py hyperv: Cleans up live migration Planned VM 2018-09-14 11:49:17 -06:00

README.rst

Hyper-V Volumes Management

To enable the volume features, the iSCSI service must first be enabled on the Windows compute nodes and set to start automatically:

sc config msiscsi start= auto
net start msiscsi

In Windows Server 2012, it's important to execute the following commands to prevent volumes from being brought online by default:

diskpart
san policy=OfflineAll
exit

How to check if your iSCSI configuration is working properly:

On your OpenStack controller:

1. Create a volume with e.g. "nova volume-create 1" and note the generated volume id

On Windows:

  1. iscsicli QAddTargetPortal <your_iSCSI_target>
  2. iscsicli ListTargets

The output should contain the iqn related to your volume: iqn.2010-10.org.openstack:volume-<volume_id>

How to test Boot from volume in Hyper-V from the OpenStack dashboard:

  1. First of all, create a volume
  2. Get the volume ID of the created volume

  3. Upload and untar to the Cloud controller the following VHD image: http://dev.opennebula.org/attachments/download/482/ttylinux.vhd.gz
  4. sudo dd if=/path/to/vhdfileofstep3 of=/dev/nova-volumes/volume-XXXXX (the volume name corresponds to the ID from step 2)
  5. Launch an instance from any image (the image does not matter, since we are booting from a volume) from the dashboard; don't forget to select boot from volume and choose the volume created in step 2. Important: the device name must be "vda".