nova/nova/virt/hyperv
Victor Stinner b259659a22 Use six.moves.range for Python 3
The function xrange() was renamed to range() in Python 3.

Use "from six.moves import range" to get xrange() on Python 2 and range() on
Python 3 as the name "range", and replace "xrange()" with "range()".

The import is omitted for small ranges (1024 items or less).

This patch was generated by the following tool (revision 0c1d096b3903)
with the "xrange" operation:
https://bitbucket.org/haypo/misc/src/tip/python/sixer.py

Manual change:

* Replace range(n) with list(range(n)) in a loop of
  nova/virt/libvirt/driver.py which uses list.pop()

Blueprint nova-python3
Change-Id: Iceda35cace04cc8ddc6adbd59df4613b22b39793
2015-05-20 15:19:51 -07:00
README.rst Adds Hyper-V support in nova-compute (with new network_info model), including unit tests 2012-08-16 03:38:51 +03:00
__init__.py Add Hyper-V driver in the "compute_driver" option description 2014-07-24 02:47:32 -07:00
basevolumeutils.py Use oslo.log 2015-02-22 07:56:40 -05:00
constants.py Merge "hyperv: use standard architecture constants for CPU model" 2015-01-24 20:37:50 +00:00
driver.py Compute: no longer need to pass flavor to the spawn method 2015-03-13 02:16:12 -07:00
hostops.py Use oslo.log 2015-02-22 07:56:40 -05:00
hostutils.py Adds Hyper-V generation 2 VMs implementation 2015-01-20 13:25:02 +02:00
imagecache.py Switch nova.virt.hyperv.* to instance dot notation 2015-03-03 21:54:17 -05:00
ioutils.py Use oslo.log 2015-02-22 07:56:40 -05:00
livemigrationops.py Fix copy configdrive during live-migration on HyperV 2015-03-11 01:29:31 -07:00
livemigrationutils.py Don't add exception instance in LOG.exception 2015-03-09 09:57:25 +00:00
migrationops.py Switch nova.virt.hyperv.* to instance dot notation 2015-03-03 21:54:17 -05:00
networkutils.py Fix and Gate on E265 2014-07-24 08:11:00 -04:00
networkutilsv2.py Use oslo.i18n 2014-07-18 14:28:09 -04:00
pathutils.py Fix copy configdrive during live-migration on HyperV 2015-03-11 01:29:31 -07:00
rdpconsoleops.py Switch nova.virt.hyperv.* to instance dot notation 2015-03-03 21:54:17 -05:00
rdpconsoleutils.py Hyper-V driver RDP console access support 2014-02-07 23:41:33 +02:00
rdpconsoleutilsv2.py Hyper-V driver RDP console access support 2014-02-07 23:41:33 +02:00
snapshotops.py Switch nova.virt.hyperv.* to instance dot notation 2015-03-03 21:54:17 -05:00
utilsfactory.py Use oslo.log 2015-02-22 07:56:40 -05:00
vhdutils.py Fixes differencing VHDX images issue on Hyper-V 2014-11-18 17:07:50 +02:00
vhdutilsv2.py Switch to using oslo_* instead of oslo.* 2015-02-06 06:03:10 -05:00
vif.py Switch nova.virt.hyperv.* to instance dot notation 2015-03-03 21:54:17 -05:00
vmops.py Merge "Hyper-V: Sets *DataRoot paths for instances" 2015-04-09 09:01:37 +00:00
vmutils.py Use six.moves.range for Python 3 2015-05-20 15:19:51 -07:00
vmutilsv2.py Merge "Hyper-V: checks for existent Notes in list_instance_notes" 2015-04-20 20:16:17 +00:00
volumeops.py Use six.moves.range for Python 3 2015-05-20 15:19:51 -07:00
volumeutils.py Use six.moves.range for Python 3 2015-05-20 15:19:51 -07:00
volumeutilsv2.py Use six.moves.range for Python 3 2015-05-20 15:19:51 -07:00


Hyper-V Volumes Management
==========================

To enable the volume features, first enable the iSCSI initiator service (``msiscsi``) on the Windows compute nodes and set it to start automatically:

::

    sc config msiscsi start= auto
    net start msiscsi

On Windows Server 2012, it is important to run the following ``diskpart`` commands to prevent new volumes from being brought online by default:

::

    diskpart
    san policy=OfflineAll
    exit

How to check if your iSCSI configuration is working properly:

On your OpenStack controller:

1. Create a volume, e.g. with ``nova volume-create 1``, and note the generated volume ID

On Windows:

  1. ``iscsicli QAddTargetPortal <your_iSCSI_target>``
  2. ``iscsicli ListTargets``

The output should contain the IQN associated with your volume: ``iqn.2010-10.org.openstack:volume-<volume_id>``
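As a small worked example of the naming convention above, the IQN to look for can be derived from the volume ID. ``VOLUME_ID`` here is a placeholder variable for illustration only; substitute the ID noted on the controller:

```shell
# Placeholder volume id; use the one from "nova volume-create".
VOLUME_ID="XXXXX"
# Target name following the iqn.2010-10.org.openstack:volume-<id> convention.
IQN="iqn.2010-10.org.openstack:volume-${VOLUME_ID}"
echo "${IQN}"
```

If this string does not appear in the ``iscsicli ListTargets`` output, recheck the target portal configuration.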

How to test boot from volume on Hyper-V from the OpenStack dashboard:

  1. First of all, create a volume
  2. Get the volume ID of the created volume
  3. Upload to the cloud controller and decompress the following VHD image: http://dev.opennebula.org/attachments/download/482/ttylinux.vhd.gz
  4. ``sudo dd if=/path/to/vhdfileofstep3 of=/dev/nova-volumes/volume-XXXXX`` (the target device corresponds to the volume ID from step 2)
  5. Launch an instance from any image from the dashboard (the image does not matter, since we are booting from a volume). Don't forget to select boot from volume and pick the volume created in step 1. Important: the device name must be "vda".
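The controller-side steps (fetching the image and writing it to the volume device) can be sketched as a dry-run shell script. ``VOLUME_ID``, ``RUN``, and the ``run`` helper are illustrative names introduced here, not part of OpenStack; the script only prints the commands unless ``RUN=1`` is set:

```shell
#!/bin/sh
# Hedged sketch of the controller-side steps above. VOLUME_ID is a
# placeholder; set RUN=1 to actually execute the commands instead of
# printing them (dry run is the default).
VOLUME_ID="${VOLUME_ID:-XXXXX}"
DEV="/dev/nova-volumes/volume-${VOLUME_ID}"

run() {
    if [ "${RUN:-0}" = "1" ]; then
        "$@"                  # really execute the command
    else
        echo "would run: $*"  # dry run: show the command only
    fi
}

# Fetch and decompress the gzip-compressed VHD image.
run wget http://dev.opennebula.org/attachments/download/482/ttylinux.vhd.gz
run gunzip ttylinux.vhd.gz
# Write the raw VHD onto the volume's backing device.
run sudo dd if=ttylinux.vhd of="${DEV}" bs=4M
```

Reviewing the dry-run output before setting ``RUN=1`` is a cheap way to catch a wrong device path before ``dd`` overwrites it.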