6ca6f6fce6
When an admin creates a snapshot of another project owner's instance, either via the createImage API directly or via the shelve or createBackup APIs, the admin project is the owner of the image and the owner of the instance (in another project) cannot "see" the image. This is a problem, for example, if an admin shelves a tenant user's server and then the user tries to unshelve the server, because the user will not have access to get the shelved snapshot image.

This change fixes the problem by leveraging the sharing feature [1] in the v2 image API. When a snapshot is created and the request context project_id does not match the project_id of the instance owner, the instance owner's project_id is granted sharing access to the image. By default, this means the instance owner (tenant user) can get the image directly via the image ID if they know it, but otherwise the image is not listed for the user, to avoid spamming their image listing. In the case of unshelve, the end user does not need to know the image ID since it is stored in the instance system_metadata. Regardless, the user could accept the pending image membership if they want the snapshot to show up when listing available images.

Note that while the non-admin project has access to the snapshot image, it cannot delete it. For example, if the user tries to delete or unshelve a shelved offloaded server, nova will try to delete the snapshot image, which will fail and log a warning since the user does not own the image (the admin does). However, the delete/unshelve operations will not fail because the image cannot be deleted, which is an acceptable trade-off.

Due to some very old legacy virt driver code, which started in the libvirt driver and was copied to several other drivers, several virt drivers had to be modified to not overwrite the "visibility=shared" image property by passing "is_public=False" when uploading the image data. There was no point in the virt drivers setting is_public=False since the API already controls that. It does mean, however, that the bug fix is not really in effect until both the API and compute service code have this fix.

A functional test is added which depends on tracking the owner/member values in the _FakeImageService fixture. Impacted unit tests are updated accordingly.

[1] https://developer.openstack.org/api-ref/image/v2/index.html#sharing

Conflicts:
    nova/compute/api.py
    nova/compute/utils.py

NOTE(seyeongkim): The conflict is due to not having change
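For illustration, here is a minimal sketch of the sharing flow described above, using python-glanceclient's v2 image-members API. The function names, parameters, and token handling are hypothetical and are not nova's actual helper code; only the glanceclient calls are real API.

    # Minimal sketch: granting an instance owner's project access to a
    # snapshot image that was created (and is owned) by the admin project.
    from glanceclient import Client

    def share_snapshot_with_instance_owner(glance_endpoint, admin_token,
                                           image_id, instance_project_id):
        # Hypothetical helper, run with the admin's credentials.
        glance = Client('2', endpoint=glance_endpoint, token=admin_token)
        # Adding the instance owner's project as an image member gives that
        # project access to the image by ID. The membership starts out
        # 'pending', so the image does not appear in the member's listings.
        glance.image_members.create(image_id, instance_project_id)

    def accept_shared_snapshot(glance_endpoint, user_token, image_id,
                               user_project_id):
        # Hypothetical helper, run as the instance owner: accepting the
        # pending membership makes the shared snapshot show up when the
        # user lists available images.
        glance = Client('2', endpoint=glance_endpoint, token=user_token)
        glance.image_members.update(image_id, user_project_id, 'accepted')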
README.rst
Hyper-V Volumes Management
To enable the volume features, first enable the iSCSI service on the Windows compute nodes and set it to start automatically:
sc config msiscsi start= auto
net start msiscsi
In Windows Server 2012, it's important to execute the following commands to prevent the volumes from being brought online by default:
diskpart
san policy=OfflineAll
exit
How to check if your iSCSI configuration is working properly:
On your OpenStack controller:
1. Create a volume with e.g. "nova volume-create 1" and note the generated volume id
On Windows:
2. iscsicli QAddTargetPortal <your_iSCSI_target>
3. iscsicli ListTargets
The output should contain the iqn related to your volume: iqn.2010-10.org.openstack:volume-<volume_id>
How to test Boot from volume in Hyper-V from the OpenStack dashboard:
1. First of all, create a volume
2. Get the volume ID of the created volume
3. Upload and untar to the Cloud controller the following VHD image: http://dev.opennebula.org/attachments/download/482/ttylinux.vhd.gz
4. sudo dd if=/path/to/vhdfileofstep3 of=/dev/nova-volumes/volume-XXXXX <- Related to the ID of step 2
5. Launch an instance from any image (this is not important because we are just booting from a volume) from the dashboard, and don't forget to select boot from volume and select the volume created in step 2. Important: the device name must be "vda".