05a55ce5c9

* driver.get_info received a new argument, which we don't use. The good news is that the manager expects a TypeError
* driver.extend_volume now accepts the new volume size
* the driver shouldn't set the 'is_public' image property when taking snapshots. This is already handled outside the driver
* missing whitespace between words in log message
* avoid using utils.execute, use processutils.execute
* update os_win utils "auto-spec" helper (we're relying a bit too much on os-win internals, which meanwhile have changed)
* nova dropped the helper method that was merging allocations, so we'll have to include it in compute_hyperv. Note that we only use it for the cluster driver.

Change-Id: I0b59a118764421ec9daba3f3732f45ec9cb7287b
README.rst
Hyper-V Volumes Management
To enable the volume features, first enable the iSCSI service on the Windows compute nodes and set it to start automatically:
sc config msiscsi start= auto
net start msiscsi
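If you are scripting node setup, the two commands above can be wrapped from Python via subprocess. A minimal sketch, assuming it runs on the Windows compute node; the helper names here are illustrative, not part of the driver:

```python
import subprocess

# Commands from the section above: set the Microsoft iSCSI initiator
# service to start automatically, then start it now.
ISCSI_SETUP_COMMANDS = [
    ["sc", "config", "msiscsi", "start=", "auto"],
    ["net", "start", "msiscsi"],
]

def enable_iscsi_service(runner=subprocess.check_call):
    # 'runner' is injectable so the command list can be inspected or
    # mocked off-node; by default each command is executed in order.
    for cmd in ISCSI_SETUP_COMMANDS:
        runner(cmd)
```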
On Windows Server 2012, it's important to execute the following diskpart commands to prevent volumes from being brought online by default:
diskpart
san policy=OfflineAll
exit
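The diskpart session above can also be driven non-interactively by feeding the same commands on stdin. A sketch, again with illustrative helper names:

```python
import subprocess

# The diskpart commands from the section above, one per line.
DISKPART_SAN_SCRIPT = "san policy=OfflineAll\nexit\n"

def set_san_policy_offline(run=subprocess.run):
    # diskpart reads commands from stdin when no /s script file is
    # given; 'run' is injectable so this can be tested off-node.
    return run(["diskpart"], input=DISKPART_SAN_SCRIPT,
               universal_newlines=True, check=True)
```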
How to check if your iSCSI configuration is working properly:
On your OpenStack controller:
1. Create a volume, e.g. with "nova volume-create 1", and note the generated volume id
On Windows:
- iscsicli QAddTargetPortal <your_iSCSI_target>
- iscsicli ListTargets
The output should contain the iqn related to your volume: iqn.2010-10.org.openstack:volume-<volume_id>
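To script this check, the expected IQN can be derived from the volume id and matched against the captured `iscsicli ListTargets` output. A minimal sketch; the function names are ours, not part of iscsicli:

```python
def volume_iqn(volume_id):
    # IQN naming scheme used for OpenStack volumes, as noted above.
    return "iqn.2010-10.org.openstack:volume-%s" % volume_id

def volume_target_present(list_targets_output, volume_id):
    # 'list_targets_output' is the stdout of `iscsicli ListTargets`;
    # returns True if the volume's IQN shows up among the targets.
    return volume_iqn(volume_id) in list_targets_output
```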
How to test Boot from volume in Hyper-V from the OpenStack dashboard:
1. First of all, create a volume
2. Get the volume ID of the created volume
3. Upload and untar to the cloud controller the following VHD image: http://dev.opennebula.org/attachments/download/482/ttylinux.vhd.gz
4. sudo dd if=/path/to/vhdfileofstep3 of=/dev/nova-volumes/volume-XXXXX (XXXXX is the volume ID from step 2)
5. Launch an instance from any image (the image doesn't matter, since we are booting from a volume) from the dashboard. Don't forget to select "boot from volume" and pick the volume created in step 2. Important: the device name must be "vda".
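Steps 3 and 4 above can be sketched as a small helper that builds the dd command for a given volume id, assuming the /dev/nova-volumes LVM layout named in step 4; the helper itself is illustrative:

```python
def vhd_to_volume_dd(vhd_path, volume_id):
    # Builds the step-4 command: write the downloaded VHD into the
    # LVM device backing the volume created in step 1.
    return ["sudo", "dd",
            "if=%s" % vhd_path,
            "of=/dev/nova-volumes/volume-%s" % volume_id]
```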