nova/nova/virt/hyperv
Sundar Nadathur 1ff60fa52d Pass accelerator requests to each virt driver from compute manager.
Update the signature of the spawn() API for each virt driver
to include accel_info, which is a list of accelerator requests.

Change-Id: I4aac66c125a162bf35991a7d0c2638c7475ec0e7
Blueprint: nova-cyborg-interaction
2020-03-21 12:03:38 -07:00
__init__.py Add Hyper-V driver in the "compute_driver" option description 2014-07-24 02:47:32 -07:00
block_device_manager.py Rename block_device_info_get_root 2018-01-11 20:46:13 +00:00
constants.py objects: Move 'arch' to 'fields.Architecture' 2016-11-25 16:19:41 +00:00
driver.py Pass accelerator requests to each virt driver from compute manager. 2020-03-21 12:03:38 -07:00
eventhandler.py Remove translation of log messages 2017-06-13 11:20:28 +07:00
hostops.py Implement update_provider_tree 2019-06-25 13:11:32 -04:00
imagecache.py Consolidate [image_cache] conf options 2019-11-13 11:09:03 -06:00
livemigrationops.py Avoid error state for recovered instances after failed migrations 2019-08-26 11:36:56 +03:00
migrationops.py Hyper-V: Perform proper cleanup after cold migration 2017-08-30 17:50:11 +00:00
pathutils.py propagate OSError to MigrationPreCheckError 2017-10-20 16:46:15 -04:00
rdpconsoleops.py Hyper-V: adds os-win library 2015-12-02 16:34:24 +02:00
README.rst Keep pre-commit inline with hacking and fix whitespace 2019-12-12 14:56:39 +00:00
serialconsolehandler.py Remove translation of log messages 2017-06-13 11:20:28 +07:00
serialconsoleops.py Remove translation of log messages 2017-06-13 11:20:28 +07:00
serialproxy.py Add missing ws seperator between words 2018-11-26 23:42:18 +00:00
snapshotops.py Share snapshot image membership with instance owner 2019-02-08 18:06:27 -05:00
vif.py hyperv: Remove vestigial nova-network support 2019-11-29 17:20:02 +00:00
vmops.py hyperv: Remove vestigial nova-network support 2019-11-29 17:20:02 +00:00
volumeops.py hyperv: Cleans up live migration Planned VM 2018-08-23 17:07:36 -07:00

Hyper-V Volumes Management

To enable the volume features, first enable the Microsoft iSCSI Initiator service (msiscsi) on the Windows compute nodes and set it to start automatically::

    sc config msiscsi start= auto
    net start msiscsi
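
To confirm that the service is running and configured to start automatically, a quick check such as the following can be used (illustrative only; "sc query" and "sc qc" are standard Windows service commands)::

    rem Show the current service state (should report RUNNING)
    sc query msiscsi
    rem Show the service configuration (START_TYPE should be AUTO_START)
    sc qc msiscsi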

On Windows Server 2012, it is important to execute the following commands to prevent the volumes from being brought online by default::

    diskpart
    san policy=OfflineAll
    exit
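
If you prefer to script this step instead of running diskpart interactively, the same setting can be applied from a script file; the file name below is only an example::

    rem Write the single diskpart command to a script file
    echo san policy=OfflineAll > sanpolicy.txt
    rem Apply the SAN policy non-interactively
    diskpart /s sanpolicy.txt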

How to check if your iSCSI configuration is working properly:

On your OpenStack controller:

1. Create a volume, e.g. with "nova volume-create 1" (a 1 GB volume), and note the generated volume id (see the example below)
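
For example (a sketch; depending on your OpenStack release, the cinder or openstack clients may be used instead of the nova client)::

    # Create a 1 GB volume; note the "id" field in the output
    nova volume-create 1
    # The volume id can also be looked up afterwards
    nova volume-list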

On Windows:

  1. iscsicli QAddTargetPortal <your_iSCSI_target>
  2. iscsicli ListTargets

The output should contain the IQN associated with your volume: iqn.2010-10.org.openstack:volume-<volume_id>
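
To narrow the output down to the volume you just created, the target list can be filtered, for example (illustrative only; replace <volume_id> with the id noted on the controller)::

    rem List the discovered targets and keep only the new volume's IQN
    iscsicli ListTargets | findstr "volume-<volume_id>"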

How to test Boot from volume in Hyper-V from the OpenStack dashboard:

  1. First of all, create a volume
  2. Get the volume ID of the created volume

  3. Upload the following VHD image to the cloud controller and decompress it: http://dev.opennebula.org/attachments/download/482/ttylinux.vhd.gz
  4. sudo dd if=/path/to/the/vhd/from/step/3 of=/dev/nova-volumes/volume-XXXXX (where volume-XXXXX corresponds to the volume id from step 2)
  5. Launch an instance from any image from the dashboard (the image itself does not matter, since we are booting from a volume). Don't forget to select "boot from volume" and choose the volume created earlier. Important: the device name must be "vda".
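
A sketch of steps 3 and 4, assuming the image was downloaded as ttylinux.vhd.gz and that the volumes are backed by the default nova-volumes LVM group (the device path may differ in your setup)::

    # Decompress the downloaded image
    gunzip ttylinux.vhd.gz
    # Write the raw VHD onto the backing device of the volume
    # (replace volume-XXXXX with the name derived from the volume id of step 2)
    sudo dd if=ttylinux.vhd of=/dev/nova-volumes/volume-XXXXX bs=1M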