Implementation of blueprint hypervisor-vmware-vsphere-support. (Link to blueprint: https://blueprints.launchpad.net/nova/+spec/hypervisor-vmware-vsphere-support)
Adds support for the VMware ESX/ESXi hypervisor to OpenStack Compute (Nova). Key features: 1) support for the FLAT and VLAN networking models, 2) guest console access through VMware VMRC, 3) integration with the Glance service for image storage and retrieval. Documentation: a readme file at "doc/source/vmwareapi_readme.rst" covers the configuration and installation steps required to use this module.
commit 4e179b4fa9

Authors (1 addition)
@@ -61,6 +61,7 @@ Ryan Lane <rlane@wikimedia.org>
 Ryan Lucio <rlucio@internap.com>
 Salvatore Orlando <salvatore.orlando@eu.citrix.com>
 Sandy Walsh <sandy.walsh@rackspace.com>
+Sateesh Chodapuneedi <sateesh.chodapuneedi@citrix.com>
 Soren Hansen <soren.hansen@rackspace.com>
 Thierry Carrez <thierry@openstack.org>
 Todd Willey <todd@ansolabs.com>
BIN  doc/source/images/vmwareapi_blockdiagram.jpg (new file)
Binary file not shown. (Size: 74 KiB)

218  doc/source/vmwareapi_readme.rst (new file)
@@ -0,0 +1,218 @@
..
      Copyright (c) 2010 Citrix Systems, Inc.
      Copyright 2010 OpenStack LLC.

      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
      License for the specific language governing permissions and limitations
      under the License.

VMware ESX/ESXi Server Support for OpenStack Compute
====================================================

Introduction
------------
A module named 'vmwareapi' is added under 'nova.virt' to support the VMware ESX/ESXi hypervisor in OpenStack Compute (Nova), allowing Nova to use VMware vSphere as a compute provider.

The basic requirement is to support VMware vSphere 4.1 as a compute provider within Nova; both ESX and ESXi are supported deployment architectures. VM storage is restricted to VMFS volumes on local drives. vCenter is neither required by the current design nor currently supported; Nova Compute talks directly to the ESX/ESXi host.

The 'vmwareapi' module is integrated with Glance, so that VM images can be streamed from the Glance server (which handles image storage and retrieval) and booted on ESX/ESXi.

The module currently supports Nova's flat networking model (FlatManager) and the VLAN networking model.

.. image:: images/vmwareapi_blockdiagram.jpg


System Requirements
-------------------
The following software components are required for building a cloud using OpenStack on top of ESX/ESXi server(s):

* OpenStack
* Glance Image service
* VMware ESX v4.1 or VMware ESXi (licensed) v4.1

VMware ESX Requirements
-----------------------
* ESX credentials with administration/root privileges
* A single local hard disk on the ESX host
* An ESX Virtual Machine Port Group (for flat networking)
* An ESX physical network adapter (for VLAN networking)
* "vSphere Web Access" must be enabled in the vSphere Client UI under Configuration -> Security Profile -> Firewall

Python dependencies
-------------------
* suds-0.4

* Installation procedure on Ubuntu/Debian

::

  easy_install suds==0.4
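
To confirm that the dependency is importable, a quick check such as the following can be used (a minimal sketch; it assumes suds 0.4 exposes the '__version__' attribute)::

  python -c "import suds; print suds.__version__"
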
Configuration flags required for nova-compute
---------------------------------------------
::

  --connection_type=vmwareapi
  --vmwareapi_host_ip=<VMware ESX Host IP>
  --vmwareapi_host_username=<VMware ESX Username>
  --vmwareapi_host_password=<VMware ESX Password>
  --network_driver=nova.network.vmwareapi_net [Optional, only for VLAN networking]
  --vlan_interface=<Physical ethernet adapter name on the VMware ESX host for VLAN networking, e.g. vmnic0> [Optional, only for VLAN networking]


Configuration flags required for nova-network
---------------------------------------------
::

  --network_manager=nova.network.manager.FlatManager [or nova.network.manager.VlanManager]
  --flat_network_bridge=<ESX Virtual Machine Port Group> [Optional, only for flat networking]


Configuration flags required for nova-console
---------------------------------------------
::

  --console_manager=nova.console.vmrc_manager.ConsoleVMRCManager
  --console_driver=nova.console.vmrc.VMRCSessionConsole [Optional, only for OTP (one-time passwords) as opposed to host credentials]


Other flags
-----------
::

  --image_service=nova.image.glance.GlanceImageService
  --glance_host=<Glance Host>
  --vmwareapi_wsdl_loc=<http://<WEB SERVER>/vimService.wsdl>
Note: A faulty WSDL is shipped with ESX/vSphere 4.1, so a working WSDL needs to be hosted on a web server. Follow the steps below to download the SDK:

* Go to http://www.vmware.com/support/developer/vc-sdk/
* Go to the section "VMware vSphere Web Services SDK 4.0"
* Click "Downloads"
* Enter VMware credentials when prompted for the download
* Unzip the downloaded file vi-sdk-4.0.0-xxx.zip
* Go to SDK->WSDL->vim25 and host the files "vimService.wsdl" and "vim.wsdl" on a web server
* Set the flag "--vmwareapi_wsdl_loc" to the URL "http://<WEB SERVER>/vimService.wsdl"
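
Putting the flags above together, a complete example flag set for a flat-network deployment might look like the following (the addresses, credentials and port group name are placeholders)::

  --connection_type=vmwareapi
  --vmwareapi_host_ip=10.0.0.2
  --vmwareapi_host_username=root
  --vmwareapi_host_password=<ESX Password>
  --network_manager=nova.network.manager.FlatManager
  --flat_network_bridge=vmnet0
  --image_service=nova.image.glance.GlanceImageService
  --glance_host=10.0.0.3
  --vmwareapi_wsdl_loc=http://10.0.0.3/vimService.wsdl

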
VLAN Network Manager
--------------------
VLAN network support is added through a custom network driver on the nova-compute node, "nova.network.vmwareapi_net", which uses a physical Ethernet adapter on the VMware ESX/ESXi host for VLAN networking. The adapter name is specified by the vlan_interface flag in the nova-compute configuration.

The associated virtual switch is determined from the physical adapter name; in VMware ESX only one virtual switch can be associated with a given physical adapter.

When a VM spawn request is issued with a VLAN ID, the workflow is as follows (a code sketch of the same checks appears after the list):

1. Check that a physical adapter with the given name exists. If not, throw an error; otherwise go to the next step.

2. Check that a virtual switch is associated with the physical Ethernet adapter named by vlan_interface. If not, throw an error; otherwise go to the next step.

3. Check whether a port group with the network bridge name exists. If not, create a port group on the virtual switch with the given name and VLAN ID and go to step 6; otherwise go to the next step.

4. Check that the port group is associated with the virtual switch. If not, throw an error; otherwise go to the next step.

5. Check that the port group is associated with the given VLAN ID. If not, throw an error; otherwise go to the next step.

6. Spawn the VM using this port group as the network name for the VM.
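
The following minimal Python sketch mirrors these checks; the session object and the network_utils helpers are the ones used by nova/network/vmwareapi_net.py in this change, and the error messages are simplified for illustration::

  from nova import exception
  from nova.virt.vmwareapi import network_utils

  def ensure_vlan_port_group(session, vlan_num, bridge, vlan_interface):
      """Sketch of the VLAN port group workflow described above."""
      # Steps 1 & 2: the physical adapter and its virtual switch must exist.
      if not network_utils.check_if_vlan_interface_exists(session,
                                                          vlan_interface):
          raise exception.NotFound("No physical adapter %s" % vlan_interface)
      vswitch = network_utils.get_vswitch_for_vlan_interface(session,
                                                             vlan_interface)
      if vswitch is None:
          raise exception.NotFound("No vSwitch for adapter %s" % vlan_interface)

      # Step 3: create the port group if it does not exist yet.
      if network_utils.get_network_with_the_name(session, bridge) is None:
          network_utils.create_port_group(session, bridge, vswitch, vlan_num)
          return

      # Steps 4 & 5: otherwise verify that the existing port group's vSwitch
      # and VLAN tag match what was requested.
      pg_vlanid, pg_vswitch = \
          network_utils.get_vlanid_and_vswitch_for_portgroup(session, bridge)
      if pg_vswitch != vswitch or pg_vlanid != vlan_num:
          raise exception.Invalid("Port group %s is misconfigured" % bridge)
      # Step 6: the caller then spawns the VM using 'bridge' as its network.

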
Guest Console Support
---------------------
| VMware VMRC console is a built-in console method that provides graphical control of the VM remotely.
|
| VMRC console types supported:
| # Host-based credentials
|   Not secure (sends the ESX admin credentials in clear text).
|
| # OTP (one-time passwords)
|   Secure, but creates multiple session entries in the DB for each OpenStack console create request.
|   A console session, once created, can be used only once.
|
| Install the browser-based VMware ESX plugin/ActiveX control on the client machine in order to connect.
|
| Windows:-
|   Internet Explorer:
|     https://<VMware ESX Host>/ui/plugin/vmware-vmrc-win32-x86.exe
|
|   Mozilla Firefox:
|     https://<VMware ESX Host>/ui/plugin/vmware-vmrc-win32-x86.xpi
|
| Linux:-
|   Mozilla Firefox
|   32-bit Linux:
|     https://<VMware ESX Host>/ui/plugin/vmware-vmrc-linux-x86.xpi
|
|   64-bit Linux:
|     https://<VMware ESX Host>/ui/plugin/vmware-vmrc-linux-x64.xpi
|
| OpenStack console details:
|   console_type = vmrc+credentials | vmrc+session
|   host = <VMware ESX Host>
|   port = <VMware ESX Port>
|   password = {'vm_id': <VMware VM ID>, 'username': <VMware ESX Username>, 'password': <VMware ESX Password>}  // JSON encoded, then base64 encoded (see the sketch below)
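
For illustration, this minimal Python sketch shows how such a password payload is produced and later consumed; it mirrors the json/base64 calls used by the console drivers in nova/console/vmrc.py, with placeholder field values::

  import base64
  import json

  # Encode, as the VMRC console driver does when generating credentials.
  payload = {'vm_id': '<VMware VM ID>',
             'username': '<VMware ESX Username>',
             'password': '<VMware ESX Password>'}
  encoded = base64.b64encode(json.dumps(payload))

  # Decode on the client side before handing the fields to the VMRC plugin.
  decoded = json.loads(base64.b64decode(encoded))
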
| Instantiate the plugin/ActiveX object:
| # In Internet Explorer
|   <object id='vmrc' classid='CLSID:B94C2238-346E-4C5E-9B36-8CC627F35574'>
|   </object>
|
| # Mozilla Firefox and other browsers
|   <object id='vmrc' type='application/x-vmware-vmrc;version=2.5.0.0'>
|   </object>
|
| Open the VMRC connection:
| # Host-based credentials [type=vmrc+credentials]
|   <script type="text/javascript">
|     var MODE_WINDOW = 2;
|     var vmrc = document.getElementById('vmrc');
|     vmrc.connect(<VMware ESX Host> + ':' + <VMware ESX Port>, <VMware ESX Username>, <VMware ESX Password>, '', <VMware VM ID>, MODE_WINDOW);
|   </script>
|
| # OTP (one-time passwords) [type=vmrc+session]
|   <script type="text/javascript">
|     var MODE_WINDOW = 2;
|     var vmrc = document.getElementById('vmrc');
|     vmrc.connectWithSession(<VMware ESX Host> + ':' + <VMware ESX Port>, <VMware VM ID>, <VMware ESX Password>, MODE_WINDOW);
|   </script>

Assumptions
-----------
1. The VMware images uploaded to the image repositories have VMware Tools installed.


FAQ
---

1. What type of disk images are supported?

* Only VMware VMDKs are currently supported, and only thick-provisioned disks; thin-provisioned disks are not supported.


2. How is IP address information injected into the guest?

* IP address information is injected through the 'machine.id' vmx parameter (equivalent to XenStore in XenServer). This information can be retrieved inside the guest using VMware Tools.


3. What is the guest tool?

* The guest tool is a small Python script that should be run either as a service or added to system startup; it configures networking on the guest. The guest tool is available at tools/esx/guest_tool.py (see the example below).
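
  For example, assuming the script has been copied into the guest (the path below is only a placeholder), it can be launched from the guest's startup scripts like this::

    python /usr/local/bin/guest_tool.py &
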

4. What type of consoles are supported?

* VMware VMRC-based consoles are supported. There are two credential options: OTP (secure, but creates multiple session entries in the DB for each OpenStack console create request) and host-based credentials (less secure, as the ESX credentials are transmitted in clear text).


5. What does 'Vim' refer to as far as the vmwareapi module is concerned?

* Vim refers to the VMware Virtual Infrastructure Methodology. This is not to be confused with the "vim" editor.

144  nova/console/vmrc.py (new file)
@@ -0,0 +1,144 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""
|
||||
VMRC console drivers.
|
||||
"""
|
||||
|
||||
import base64
|
||||
import json
|
||||
|
||||
from nova import exception
|
||||
from nova import flags
|
||||
from nova import log as logging
|
||||
from nova.virt.vmwareapi import vim_util
|
||||
|
||||
flags.DEFINE_integer('console_vmrc_port',
|
||||
443,
|
||||
"port for VMware VMRC connections")
|
||||
flags.DEFINE_integer('console_vmrc_error_retries',
|
||||
10,
|
||||
"number of retries for retrieving VMRC information")
|
||||
|
||||
FLAGS = flags.FLAGS
|
||||
|
||||
|
||||
class VMRCConsole(object):
|
||||
"""VMRC console driver with ESX credentials."""
|
||||
|
||||
def __init__(self):
|
||||
super(VMRCConsole, self).__init__()
|
||||
|
||||
@property
|
||||
def console_type(self):
|
||||
return 'vmrc+credentials'
|
||||
|
||||
def get_port(self, context):
|
||||
"""Get available port for consoles."""
|
||||
return FLAGS.console_vmrc_port
|
||||
|
||||
def setup_console(self, context, console):
|
||||
"""Sets up console."""
|
||||
pass
|
||||
|
||||
def teardown_console(self, context, console):
|
||||
"""Tears down console."""
|
||||
pass
|
||||
|
||||
def init_host(self):
|
||||
"""Perform console initialization."""
|
||||
pass
|
||||
|
||||
def fix_pool_password(self, password):
|
||||
"""Encode password."""
|
||||
# TODO(sateesh): Encrypt pool password
|
||||
return password
|
||||
|
||||
def generate_password(self, vim_session, pool, instance_name):
|
||||
"""
|
||||
Returns VMRC Connection credentials.
|
||||
|
||||
Return string is of the form '<VM PATH>:<ESX Username>@<ESX Password>'.
|
||||
"""
|
||||
username, password = pool['username'], pool['password']
|
||||
vms = vim_session._call_method(vim_util, "get_objects",
|
||||
"VirtualMachine", ["name", "config.files.vmPathName"])
|
||||
vm_ds_path_name = None
|
||||
vm_ref = None
|
||||
for vm in vms:
|
||||
vm_name = None
|
||||
ds_path_name = None
|
||||
for prop in vm.propSet:
|
||||
if prop.name == "name":
|
||||
vm_name = prop.val
|
||||
elif prop.name == "config.files.vmPathName":
|
||||
ds_path_name = prop.val
|
||||
if vm_name == instance_name:
|
||||
vm_ref = vm.obj
|
||||
vm_ds_path_name = ds_path_name
|
||||
break
|
||||
if vm_ref is None:
|
||||
raise exception.NotFound(_("instance - %s not present") %
|
||||
instance_name)
|
||||
json_data = json.dumps({"vm_id": vm_ds_path_name,
|
||||
"username": username,
|
||||
"password": password})
|
||||
return base64.b64encode(json_data)
|
||||
|
||||
def is_otp(self):
|
||||
"""Is one time password or not."""
|
||||
return False
|
||||
|
||||
|
||||
class VMRCSessionConsole(VMRCConsole):
|
||||
"""VMRC console driver with VMRC One Time Sessions."""
|
||||
|
||||
def __init__(self):
|
||||
super(VMRCSessionConsole, self).__init__()
|
||||
|
||||
@property
|
||||
def console_type(self):
|
||||
return 'vmrc+session'
|
||||
|
||||
def generate_password(self, vim_session, pool, instance_name):
|
||||
"""
|
||||
Returns a VMRC Session.
|
||||
|
||||
Return string is of the form '<VM MOID>:<VMRC Ticket>'.
|
||||
"""
|
||||
vms = vim_session._call_method(vim_util, "get_objects",
|
||||
"VirtualMachine", ["name"])
|
||||
vm_ref = None
|
||||
for vm in vms:
|
||||
if vm.propSet[0].val == instance_name:
|
||||
vm_ref = vm.obj
|
||||
if vm_ref is None:
|
||||
raise exception.NotFound(_("instance - %s not present") %
|
||||
instance_name)
|
||||
virtual_machine_ticket = \
|
||||
vim_session._call_method(
|
||||
vim_session._get_vim(),
|
||||
"AcquireCloneTicket",
|
||||
vim_session._get_vim().get_service_content().sessionManager)
|
||||
json_data = json.dumps({"vm_id": str(vm_ref.value),
|
||||
"username": virtual_machine_ticket,
|
||||
"password": virtual_machine_ticket})
|
||||
return base64.b64encode(json_data)
|
||||
|
||||
def is_otp(self):
|
||||
"""Is one time password or not."""
|
||||
return True
|
158  nova/console/vmrc_manager.py (new file)
@@ -0,0 +1,158 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""
|
||||
VMRC Console Manager.
|
||||
"""
|
||||
|
||||
from nova import exception
|
||||
from nova import flags
|
||||
from nova import log as logging
|
||||
from nova import manager
|
||||
from nova import rpc
|
||||
from nova import utils
|
||||
from nova.virt.vmwareapi_conn import VMWareAPISession
|
||||
|
||||
LOG = logging.getLogger("nova.console.vmrc_manager")
|
||||
|
||||
FLAGS = flags.FLAGS
|
||||
flags.DEFINE_string('console_public_hostname',
|
||||
'',
|
||||
'Publicly visible name for this console host')
|
||||
flags.DEFINE_string('console_driver',
|
||||
'nova.console.vmrc.VMRCConsole',
|
||||
'Driver to use for the console')
|
||||
|
||||
|
||||
class ConsoleVMRCManager(manager.Manager):
|
||||
|
||||
"""
|
||||
Manager to handle VMRC connections needed for accessing instance consoles.
|
||||
"""
|
||||
|
||||
def __init__(self, console_driver=None, *args, **kwargs):
|
||||
self.driver = utils.import_object(FLAGS.console_driver)
|
||||
super(ConsoleVMRCManager, self).__init__(*args, **kwargs)
|
||||
|
||||
def init_host(self):
|
||||
self.sessions = {}
|
||||
self.driver.init_host()
|
||||
|
||||
def _get_vim_session(self, pool):
|
||||
"""Get VIM session for the pool specified."""
|
||||
vim_session = None
|
||||
if pool['id'] not in self.sessions.keys():
|
||||
vim_session = VMWareAPISession(pool['address'],
|
||||
pool['username'],
|
||||
pool['password'],
|
||||
FLAGS.console_vmrc_error_retries)
|
||||
self.sessions[pool['id']] = vim_session
|
||||
return self.sessions[pool['id']]
|
||||
|
||||
def _generate_console(self, context, pool, name, instance_id, instance):
|
||||
"""Sets up console for the instance."""
|
||||
LOG.debug(_("Adding console"))
|
||||
|
||||
password = self.driver.generate_password(
|
||||
self._get_vim_session(pool),
|
||||
pool,
|
||||
instance.name)
|
||||
|
||||
console_data = {'instance_name': name,
|
||||
'instance_id': instance_id,
|
||||
'password': password,
|
||||
'pool_id': pool['id']}
|
||||
console_data['port'] = self.driver.get_port(context)
|
||||
console = self.db.console_create(context, console_data)
|
||||
self.driver.setup_console(context, console)
|
||||
return console
|
||||
|
||||
@exception.wrap_exception
|
||||
def add_console(self, context, instance_id, password=None,
|
||||
port=None, **kwargs):
|
||||
"""
|
||||
Adds a console for the instance. If it is one time password, then we
|
||||
generate new console credentials.
|
||||
"""
|
||||
instance = self.db.instance_get(context, instance_id)
|
||||
host = instance['host']
|
||||
name = instance['name']
|
||||
pool = self.get_pool_for_instance_host(context, host)
|
||||
try:
|
||||
console = self.db.console_get_by_pool_instance(context,
|
||||
pool['id'],
|
||||
instance_id)
|
||||
if self.driver.is_otp():
|
||||
console = self._generate_console(
|
||||
context,
|
||||
pool,
|
||||
name,
|
||||
instance_id,
|
||||
instance)
|
||||
except exception.NotFound:
|
||||
console = self._generate_console(
|
||||
context,
|
||||
pool,
|
||||
name,
|
||||
instance_id,
|
||||
instance)
|
||||
return console['id']
|
||||
|
||||
@exception.wrap_exception
|
||||
def remove_console(self, context, console_id, **_kwargs):
|
||||
"""Removes a console entry."""
|
||||
try:
|
||||
console = self.db.console_get(context, console_id)
|
||||
except exception.NotFound:
|
||||
LOG.debug(_("Tried to remove non-existent console "
|
||||
"%(console_id)s.") %
|
||||
{'console_id': console_id})
|
||||
return
|
||||
LOG.debug(_("Removing console "
|
||||
"%(console_id)s.") %
|
||||
{'console_id': console_id})
|
||||
self.db.console_delete(context, console_id)
|
||||
self.driver.teardown_console(context, console)
|
||||
|
||||
def get_pool_for_instance_host(self, context, instance_host):
|
||||
"""Gets console pool info for the instance."""
|
||||
context = context.elevated()
|
||||
console_type = self.driver.console_type
|
||||
try:
|
||||
pool = self.db.console_pool_get_by_host_type(context,
|
||||
instance_host,
|
||||
self.host,
|
||||
console_type)
|
||||
except exception.NotFound:
|
||||
pool_info = rpc.call(context,
|
||||
self.db.queue_get_for(context,
|
||||
FLAGS.compute_topic,
|
||||
instance_host),
|
||||
{"method": "get_console_pool_info",
|
||||
"args": {"console_type": console_type}})
|
||||
pool_info['password'] = self.driver.fix_pool_password(
|
||||
pool_info['password'])
|
||||
pool_info['host'] = self.host
|
||||
# ESX Address or Proxy Address
|
||||
public_host_name = pool_info['address']
|
||||
if FLAGS.console_public_hostname:
|
||||
public_host_name = FLAGS.console_public_hostname
|
||||
pool_info['public_hostname'] = public_host_name
|
||||
pool_info['console_type'] = console_type
|
||||
pool_info['compute_host'] = instance_host
|
||||
pool = self.db.console_pool_create(context, pool_info)
|
||||
return pool
|
91  nova/network/vmwareapi_net.py (new file)
@@ -0,0 +1,91 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""
|
||||
Implements vlans for vmwareapi.
|
||||
"""
|
||||
|
||||
from nova import db
|
||||
from nova import exception
|
||||
from nova import flags
|
||||
from nova import log as logging
|
||||
from nova import utils
|
||||
from nova.virt.vmwareapi_conn import VMWareAPISession
|
||||
from nova.virt.vmwareapi import network_utils
|
||||
|
||||
LOG = logging.getLogger("nova.network.vmwareapi_net")
|
||||
|
||||
FLAGS = flags.FLAGS
|
||||
flags.DEFINE_string('vlan_interface', 'vmnic0',
|
||||
'Physical network adapter name in VMware ESX host for '
|
||||
'vlan networking')
|
||||
|
||||
|
||||
def ensure_vlan_bridge(vlan_num, bridge, net_attrs=None):
|
||||
"""Create a vlan and bridge unless they already exist."""
|
||||
# Open vmwareapi session
|
||||
host_ip = FLAGS.vmwareapi_host_ip
|
||||
host_username = FLAGS.vmwareapi_host_username
|
||||
host_password = FLAGS.vmwareapi_host_password
|
||||
if not host_ip or host_username is None or host_password is None:
|
||||
raise Exception(_("Must specify vmwareapi_host_ip,"
|
||||
"vmwareapi_host_username "
|
||||
"and vmwareapi_host_password to use"
|
||||
"connection_type=vmwareapi"))
|
||||
session = VMWareAPISession(host_ip, host_username, host_password,
|
||||
FLAGS.vmwareapi_api_retry_count)
|
||||
vlan_interface = FLAGS.vlan_interface
|
||||
# Check if the vlan_interface physical network adapter exists on the host
|
||||
if not network_utils.check_if_vlan_interface_exists(session,
|
||||
vlan_interface):
|
||||
raise exception.NotFound(_("There is no physical network adapter with "
|
||||
"the name %s on the ESX host") % vlan_interface)
|
||||
|
||||
# Get the vSwitch associated with the Physical Adapter
|
||||
vswitch_associated = network_utils.get_vswitch_for_vlan_interface(
|
||||
session, vlan_interface)
|
||||
if vswitch_associated is None:
|
||||
raise exception.NotFound(_("There is no virtual switch associated "
|
||||
"with the physical network adapter with name %s") %
|
||||
vlan_interface)
|
||||
# Check whether bridge already exists and retrieve the the ref of the
|
||||
# network whose name_label is "bridge"
|
||||
network_ref = network_utils.get_network_with_the_name(session, bridge)
|
||||
if network_ref is None:
|
||||
# Create a port group on the vSwitch associated with the vlan_interface
|
||||
# corresponding physical network adapter on the ESX host
|
||||
network_utils.create_port_group(session, bridge, vswitch_associated,
|
||||
vlan_num)
|
||||
else:
|
||||
# Get the vlan id and vswitch corresponding to the port group
|
||||
pg_vlanid, pg_vswitch = \
|
||||
network_utils.get_vlanid_and_vswitch_for_portgroup(session, bridge)
|
||||
|
||||
# Check if the vsiwtch associated is proper
|
||||
if pg_vswitch != vswitch_associated:
|
||||
raise exception.Invalid(_("vSwitch which contains the port group "
|
||||
"%(bridge)s is not associated with the desired "
|
||||
"physical adapter. Expected vSwitch is "
|
||||
"%(vswitch_associated)s, but the one associated"
|
||||
" is %(pg_vswitch)s") % locals())
|
||||
|
||||
# Check if the vlan id is proper for the port group
|
||||
if pg_vlanid != vlan_num:
|
||||
raise exception.Invalid(_("VLAN tag is not appropriate for the "
|
||||
"port group %(bridge)s. Expected VLAN tag is "
|
||||
"%(vlan_num)s, but the one associated with the "
|
||||
"port group is %(pg_vlanid)s") % locals())
|
252  nova/tests/test_vmwareapi.py (new file)
@@ -0,0 +1,252 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""
|
||||
Test suite for VMWareAPI.
|
||||
"""
|
||||
|
||||
import stubout
|
||||
|
||||
from nova import context
|
||||
from nova import db
|
||||
from nova import flags
|
||||
from nova import test
|
||||
from nova import utils
|
||||
from nova.auth import manager
|
||||
from nova.compute import power_state
|
||||
from nova.tests.glance import stubs as glance_stubs
|
||||
from nova.tests.vmwareapi import db_fakes
|
||||
from nova.tests.vmwareapi import stubs
|
||||
from nova.virt import vmwareapi_conn
|
||||
from nova.virt.vmwareapi import fake as vmwareapi_fake
|
||||
|
||||
|
||||
FLAGS = flags.FLAGS
|
||||
|
||||
|
||||
class VMWareAPIVMTestCase(test.TestCase):
|
||||
"""Unit tests for Vmware API connection calls."""
|
||||
|
||||
def setUp(self):
|
||||
super(VMWareAPIVMTestCase, self).setUp()
|
||||
self.flags(vmwareapi_host_ip='test_url',
|
||||
vmwareapi_host_username='test_username',
|
||||
vmwareapi_host_password='test_pass')
|
||||
self.manager = manager.AuthManager()
|
||||
self.user = self.manager.create_user('fake', 'fake', 'fake',
|
||||
admin=True)
|
||||
self.project = self.manager.create_project('fake', 'fake', 'fake')
|
||||
self.network = utils.import_object(FLAGS.network_manager)
|
||||
self.stubs = stubout.StubOutForTesting()
|
||||
vmwareapi_fake.reset()
|
||||
db_fakes.stub_out_db_instance_api(self.stubs)
|
||||
stubs.set_stubs(self.stubs)
|
||||
glance_stubs.stubout_glance_client(self.stubs,
|
||||
glance_stubs.FakeGlance)
|
||||
self.conn = vmwareapi_conn.get_connection(False)
|
||||
|
||||
def _create_instance_in_the_db(self):
|
||||
values = {'name': 1,
|
||||
'id': 1,
|
||||
'project_id': self.project.id,
|
||||
'user_id': self.user.id,
|
||||
'image_id': "1",
|
||||
'kernel_id': "1",
|
||||
'ramdisk_id': "1",
|
||||
'instance_type': 'm1.large',
|
||||
'mac_address': 'aa:bb:cc:dd:ee:ff',
|
||||
}
|
||||
self.instance = db.instance_create(values)
|
||||
|
||||
def _create_vm(self):
|
||||
"""Create and spawn the VM."""
|
||||
self._create_instance_in_the_db()
|
||||
self.type_data = db.instance_type_get_by_name(None, 'm1.large')
|
||||
self.conn.spawn(self.instance)
|
||||
self._check_vm_record()
|
||||
|
||||
def _check_vm_record(self):
|
||||
"""
|
||||
Check if the spawned VM's properties correspond to the instance in
|
||||
the db.
|
||||
"""
|
||||
instances = self.conn.list_instances()
|
||||
self.assertEquals(len(instances), 1)
|
||||
|
||||
# Get Nova record for VM
|
||||
vm_info = self.conn.get_info(1)
|
||||
|
||||
# Get record for VM
|
||||
vms = vmwareapi_fake._get_objects("VirtualMachine")
|
||||
vm = vms[0]
|
||||
|
||||
# Check that m1.large above turned into the right thing.
|
||||
mem_kib = long(self.type_data['memory_mb']) << 10
|
||||
vcpus = self.type_data['vcpus']
|
||||
self.assertEquals(vm_info['max_mem'], mem_kib)
|
||||
self.assertEquals(vm_info['mem'], mem_kib)
|
||||
self.assertEquals(vm.get("summary.config.numCpu"), vcpus)
|
||||
self.assertEquals(vm.get("summary.config.memorySizeMB"),
|
||||
self.type_data['memory_mb'])
|
||||
|
||||
# Check that the VM is running according to Nova
|
||||
self.assertEquals(vm_info['state'], power_state.RUNNING)
|
||||
|
||||
# Check that the VM is running according to vSphere API.
|
||||
self.assertEquals(vm.get("runtime.powerState"), 'poweredOn')
|
||||
|
||||
def _check_vm_info(self, info, pwr_state=power_state.RUNNING):
|
||||
"""
|
||||
Check if the get_info returned values correspond to the instance
|
||||
object in the db.
|
||||
"""
|
||||
mem_kib = long(self.type_data['memory_mb']) << 10
|
||||
self.assertEquals(info["state"], pwr_state)
|
||||
self.assertEquals(info["max_mem"], mem_kib)
|
||||
self.assertEquals(info["mem"], mem_kib)
|
||||
self.assertEquals(info["num_cpu"], self.type_data['vcpus'])
|
||||
|
||||
def test_list_instances(self):
|
||||
instances = self.conn.list_instances()
|
||||
self.assertEquals(len(instances), 0)
|
||||
|
||||
def test_list_instances_1(self):
|
||||
self._create_vm()
|
||||
instances = self.conn.list_instances()
|
||||
self.assertEquals(len(instances), 1)
|
||||
|
||||
def test_spawn(self):
|
||||
self._create_vm()
|
||||
info = self.conn.get_info(1)
|
||||
self._check_vm_info(info, power_state.RUNNING)
|
||||
|
||||
def test_snapshot(self):
|
||||
self._create_vm()
|
||||
info = self.conn.get_info(1)
|
||||
self._check_vm_info(info, power_state.RUNNING)
|
||||
self.conn.snapshot(self.instance, "Test-Snapshot")
|
||||
info = self.conn.get_info(1)
|
||||
self._check_vm_info(info, power_state.RUNNING)
|
||||
|
||||
def test_snapshot_non_existent(self):
|
||||
self._create_instance_in_the_db()
|
||||
self.assertRaises(Exception, self.conn.snapshot, self.instance,
|
||||
"Test-Snapshot")
|
||||
|
||||
def test_reboot(self):
|
||||
self._create_vm()
|
||||
info = self.conn.get_info(1)
|
||||
self._check_vm_info(info, power_state.RUNNING)
|
||||
self.conn.reboot(self.instance)
|
||||
info = self.conn.get_info(1)
|
||||
self._check_vm_info(info, power_state.RUNNING)
|
||||
|
||||
def test_reboot_non_existent(self):
|
||||
self._create_instance_in_the_db()
|
||||
self.assertRaises(Exception, self.conn.reboot, self.instance)
|
||||
|
||||
def test_reboot_not_poweredon(self):
|
||||
self._create_vm()
|
||||
info = self.conn.get_info(1)
|
||||
self._check_vm_info(info, power_state.RUNNING)
|
||||
self.conn.suspend(self.instance, self.dummy_callback_handler)
|
||||
info = self.conn.get_info(1)
|
||||
self._check_vm_info(info, power_state.PAUSED)
|
||||
self.assertRaises(Exception, self.conn.reboot, self.instance)
|
||||
|
||||
def test_suspend(self):
|
||||
self._create_vm()
|
||||
info = self.conn.get_info(1)
|
||||
self._check_vm_info(info, power_state.RUNNING)
|
||||
self.conn.suspend(self.instance, self.dummy_callback_handler)
|
||||
info = self.conn.get_info(1)
|
||||
self._check_vm_info(info, power_state.PAUSED)
|
||||
|
||||
def test_suspend_non_existent(self):
|
||||
self._create_instance_in_the_db()
|
||||
self.assertRaises(Exception, self.conn.suspend, self.instance,
|
||||
self.dummy_callback_handler)
|
||||
|
||||
def test_resume(self):
|
||||
self._create_vm()
|
||||
info = self.conn.get_info(1)
|
||||
self._check_vm_info(info, power_state.RUNNING)
|
||||
self.conn.suspend(self.instance, self.dummy_callback_handler)
|
||||
info = self.conn.get_info(1)
|
||||
self._check_vm_info(info, power_state.PAUSED)
|
||||
self.conn.resume(self.instance, self.dummy_callback_handler)
|
||||
info = self.conn.get_info(1)
|
||||
self._check_vm_info(info, power_state.RUNNING)
|
||||
|
||||
def test_resume_non_existent(self):
|
||||
self._create_instance_in_the_db()
|
||||
self.assertRaises(Exception, self.conn.resume, self.instance,
|
||||
self.dummy_callback_handler)
|
||||
|
||||
def test_resume_not_suspended(self):
|
||||
self._create_vm()
|
||||
info = self.conn.get_info(1)
|
||||
self._check_vm_info(info, power_state.RUNNING)
|
||||
self.assertRaises(Exception, self.conn.resume, self.instance,
|
||||
self.dummy_callback_handler)
|
||||
|
||||
def test_get_info(self):
|
||||
self._create_vm()
|
||||
info = self.conn.get_info(1)
|
||||
self._check_vm_info(info, power_state.RUNNING)
|
||||
|
||||
def test_destroy(self):
|
||||
self._create_vm()
|
||||
info = self.conn.get_info(1)
|
||||
self._check_vm_info(info, power_state.RUNNING)
|
||||
instances = self.conn.list_instances()
|
||||
self.assertEquals(len(instances), 1)
|
||||
self.conn.destroy(self.instance)
|
||||
instances = self.conn.list_instances()
|
||||
self.assertEquals(len(instances), 0)
|
||||
|
||||
def test_destroy_non_existent(self):
|
||||
self._create_instance_in_the_db()
|
||||
self.assertEquals(self.conn.destroy(self.instance), None)
|
||||
|
||||
def test_pause(self):
|
||||
pass
|
||||
|
||||
def test_unpause(self):
|
||||
pass
|
||||
|
||||
def test_diagnostics(self):
|
||||
pass
|
||||
|
||||
def test_get_console_output(self):
|
||||
pass
|
||||
|
||||
def test_get_ajax_console(self):
|
||||
pass
|
||||
|
||||
def dummy_callback_handler(self, ret):
|
||||
"""
|
||||
Dummy callback function to be passed to suspend, resume, etc., calls.
|
||||
"""
|
||||
pass
|
||||
|
||||
def tearDown(self):
|
||||
super(VMWareAPIVMTestCase, self).tearDown()
|
||||
vmwareapi_fake.cleanup()
|
||||
self.manager.delete_project(self.project)
|
||||
self.manager.delete_user(self.user)
|
||||
self.stubs.UnsetAll()
|
21  nova/tests/vmwareapi/__init__.py (new file)
@@ -0,0 +1,21 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""
|
||||
:mod:`vmwareapi` -- Stubs for VMware API
|
||||
=======================================
|
||||
"""
|
109  nova/tests/vmwareapi/db_fakes.py (new file)
@@ -0,0 +1,109 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""
|
||||
Stubouts, mocks and fixtures for the test suite
|
||||
"""
|
||||
|
||||
import time
|
||||
|
||||
from nova import db
|
||||
from nova import utils
|
||||
|
||||
|
||||
def stub_out_db_instance_api(stubs):
|
||||
"""Stubs out the db API for creating Instances."""
|
||||
|
||||
INSTANCE_TYPES = {
|
||||
'm1.tiny': dict(memory_mb=512, vcpus=1, local_gb=0, flavorid=1),
|
||||
'm1.small': dict(memory_mb=2048, vcpus=1, local_gb=20, flavorid=2),
|
||||
'm1.medium':
|
||||
dict(memory_mb=4096, vcpus=2, local_gb=40, flavorid=3),
|
||||
'm1.large': dict(memory_mb=8192, vcpus=4, local_gb=80, flavorid=4),
|
||||
'm1.xlarge':
|
||||
dict(memory_mb=16384, vcpus=8, local_gb=160, flavorid=5)}
|
||||
|
||||
class FakeModel(object):
|
||||
"""Stubs out for model."""
|
||||
|
||||
def __init__(self, values):
|
||||
self.values = values
|
||||
|
||||
def __getattr__(self, name):
|
||||
return self.values[name]
|
||||
|
||||
def __getitem__(self, key):
|
||||
if key in self.values:
|
||||
return self.values[key]
|
||||
else:
|
||||
raise NotImplementedError()
|
||||
|
||||
def fake_instance_create(values):
|
||||
"""Stubs out the db.instance_create method."""
|
||||
|
||||
type_data = INSTANCE_TYPES[values['instance_type']]
|
||||
|
||||
base_options = {
|
||||
'name': values['name'],
|
||||
'id': values['id'],
|
||||
'reservation_id': utils.generate_uid('r'),
|
||||
'image_id': values['image_id'],
|
||||
'kernel_id': values['kernel_id'],
|
||||
'ramdisk_id': values['ramdisk_id'],
|
||||
'state_description': 'scheduling',
|
||||
'user_id': values['user_id'],
|
||||
'project_id': values['project_id'],
|
||||
'launch_time': time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime()),
|
||||
'instance_type': values['instance_type'],
|
||||
'memory_mb': type_data['memory_mb'],
|
||||
'mac_address': values['mac_address'],
|
||||
'vcpus': type_data['vcpus'],
|
||||
'local_gb': type_data['local_gb'],
|
||||
}
|
||||
return FakeModel(base_options)
|
||||
|
||||
def fake_network_get_by_instance(context, instance_id):
|
||||
"""Stubs out the db.network_get_by_instance method."""
|
||||
|
||||
fields = {
|
||||
'bridge': 'vmnet0',
|
||||
'netmask': '255.255.255.0',
|
||||
'gateway': '10.10.10.1',
|
||||
'vlan': 100}
|
||||
return FakeModel(fields)
|
||||
|
||||
def fake_instance_action_create(context, action):
|
||||
"""Stubs out the db.instance_action_create method."""
|
||||
pass
|
||||
|
||||
def fake_instance_get_fixed_address(context, instance_id):
|
||||
"""Stubs out the db.instance_get_fixed_address method."""
|
||||
return '10.10.10.10'
|
||||
|
||||
def fake_instance_type_get_all(context, inactive=0):
|
||||
return INSTANCE_TYPES
|
||||
|
||||
def fake_instance_type_get_by_name(context, name):
|
||||
return INSTANCE_TYPES[name]
|
||||
|
||||
stubs.Set(db, 'instance_create', fake_instance_create)
|
||||
stubs.Set(db, 'network_get_by_instance', fake_network_get_by_instance)
|
||||
stubs.Set(db, 'instance_action_create', fake_instance_action_create)
|
||||
stubs.Set(db, 'instance_get_fixed_address',
|
||||
fake_instance_get_fixed_address)
|
||||
stubs.Set(db, 'instance_type_get_all', fake_instance_type_get_all)
|
||||
stubs.Set(db, 'instance_type_get_by_name', fake_instance_type_get_by_name)
|
46  nova/tests/vmwareapi/stubs.py (new file)
@@ -0,0 +1,46 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""
|
||||
Stubouts for the test suite
|
||||
"""
|
||||
|
||||
from nova.virt import vmwareapi_conn
|
||||
from nova.virt.vmwareapi import fake
|
||||
from nova.virt.vmwareapi import vmware_images
|
||||
|
||||
|
||||
def fake_get_vim_object(arg):
|
||||
"""Stubs out the VMWareAPISession's get_vim_object method."""
|
||||
return fake.FakeVim()
|
||||
|
||||
|
||||
def fake_is_vim_object(arg, module):
|
||||
"""Stubs out the VMWareAPISession's is_vim_object method."""
|
||||
return isinstance(module, fake.FakeVim)
|
||||
|
||||
|
||||
def set_stubs(stubs):
|
||||
"""Set the stubs."""
|
||||
stubs.Set(vmware_images, 'fetch_image', fake.fake_fetch_image)
|
||||
stubs.Set(vmware_images, 'get_vmdk_size_and_properties',
|
||||
fake.fake_get_vmdk_size_and_properties)
|
||||
stubs.Set(vmware_images, 'upload_image', fake.fake_upload_image)
|
||||
stubs.Set(vmwareapi_conn.VMWareAPISession, "_get_vim_object",
|
||||
fake_get_vim_object)
|
||||
stubs.Set(vmwareapi_conn.VMWareAPISession, "_is_vim_object",
|
||||
fake_is_vim_object)
|
@@ -26,9 +26,10 @@ from nova import log as logging
 from nova import utils
 from nova.virt import driver
 from nova.virt import fake
-from nova.virt import libvirt_conn
-from nova.virt import xenapi_conn
 from nova.virt import hyperv
+from nova.virt import libvirt_conn
+from nova.virt import vmwareapi_conn
+from nova.virt import xenapi_conn


 LOG = logging.getLogger("nova.virt.connection")
@@ -68,6 +69,8 @@ def get_connection(read_only=False):
         conn = xenapi_conn.get_connection(read_only)
     elif t == 'hyperv':
         conn = hyperv.get_connection(read_only)
+    elif t == 'vmwareapi':
+        conn = vmwareapi_conn.get_connection(read_only)
     else:
         raise Exception('Unknown connection type "%s"' % t)

19  nova/virt/vmwareapi/__init__.py (new file)
@@ -0,0 +1,19 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
"""
|
||||
:mod:`vmwareapi` -- Nova support for VMware ESX/ESXi Server through VMware API.
|
||||
"""
|
96  nova/virt/vmwareapi/error_util.py (new file)
@@ -0,0 +1,96 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""
|
||||
Exception classes and SOAP response error checking module.
|
||||
"""
|
||||
|
||||
FAULT_NOT_AUTHENTICATED = "NotAuthenticated"
|
||||
FAULT_ALREADY_EXISTS = "AlreadyExists"
|
||||
|
||||
|
||||
class VimException(Exception):
|
||||
"""The VIM Exception class."""
|
||||
|
||||
def __init__(self, exception_summary, excep):
|
||||
Exception.__init__(self)
|
||||
self.exception_summary = exception_summary
|
||||
self.exception_obj = excep
|
||||
|
||||
def __str__(self):
|
||||
return self.exception_summary + str(self.exception_obj)
|
||||
|
||||
|
||||
class SessionOverLoadException(VimException):
|
||||
"""Session Overload Exception."""
|
||||
pass
|
||||
|
||||
|
||||
class VimAttributeError(VimException):
|
||||
"""VI Attribute Error."""
|
||||
pass
|
||||
|
||||
|
||||
class VimFaultException(Exception):
|
||||
"""The VIM Fault exception class."""
|
||||
|
||||
def __init__(self, fault_list, excep):
|
||||
Exception.__init__(self)
|
||||
self.fault_list = fault_list
|
||||
self.exception_obj = excep
|
||||
|
||||
def __str__(self):
|
||||
return str(self.exception_obj)
|
||||
|
||||
|
||||
class FaultCheckers(object):
|
||||
"""
|
||||
Methods for fault checking of SOAP response. Per Method error handlers
|
||||
for which we desire error checking are defined. SOAP faults are
|
||||
embedded in the SOAP messages as properties and not as SOAP faults.
|
||||
"""
|
||||
|
||||
@staticmethod
|
||||
def retrieveproperties_fault_checker(resp_obj):
|
||||
"""
|
||||
Checks the RetrieveProperties response for errors. Certain faults
|
||||
are sent as part of the SOAP body as property of missingSet.
|
||||
For example NotAuthenticated fault.
|
||||
"""
|
||||
fault_list = []
|
||||
if not resp_obj:
|
||||
# This is the case when the session has timed out. ESX SOAP server
|
||||
# sends an empty RetrievePropertiesResponse. Normally missingSet in
|
||||
# the returnval field has the specifics about the error, but that's
|
||||
# not the case with a timed out idle session. It is as bad as a
|
||||
# terminated session for we cannot use the session. So setting
|
||||
# fault to NotAuthenticated fault.
|
||||
fault_list = ["NotAuthenticated"]
|
||||
else:
|
||||
for obj_cont in resp_obj:
|
||||
if hasattr(obj_cont, "missingSet"):
|
||||
for missing_elem in obj_cont.missingSet:
|
||||
fault_type = \
|
||||
missing_elem.fault.fault.__class__.__name__
|
||||
# Fault needs to be added to the type of fault for
|
||||
# uniformity in error checking as SOAP faults define
|
||||
fault_list.append(fault_type)
|
||||
if fault_list:
|
||||
exc_msg_list = ', '.join(fault_list)
|
||||
raise VimFaultException(fault_list, Exception(_("Error(s) %s "
|
||||
"occurred in the call to RetrieveProperties") %
|
||||
exc_msg_list))
|
711  nova/virt/vmwareapi/fake.py (new file)
@@ -0,0 +1,711 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""
|
||||
A fake VMWare VI API implementation.
|
||||
"""
|
||||
|
||||
from pprint import pformat
|
||||
import uuid
|
||||
|
||||
from nova import exception
|
||||
from nova import log as logging
|
||||
from nova.virt.vmwareapi import vim
|
||||
from nova.virt.vmwareapi import error_util
|
||||
|
||||
_CLASSES = ['Datacenter', 'Datastore', 'ResourcePool', 'VirtualMachine',
|
||||
'Network', 'HostSystem', 'HostNetworkSystem', 'Task', 'session',
|
||||
'files']
|
||||
|
||||
_FAKE_FILE_SIZE = 1024
|
||||
|
||||
_db_content = {}
|
||||
|
||||
LOG = logging.getLogger("nova.virt.vmwareapi.fake")
|
||||
|
||||
|
||||
def log_db_contents(msg=None):
|
||||
"""Log DB Contents."""
|
||||
text = msg or ""
|
||||
content = pformat(_db_content)
|
||||
LOG.debug(_("%(text)s: _db_content => %(content)s") % locals())
|
||||
|
||||
|
||||
def reset():
|
||||
"""Resets the db contents."""
|
||||
for c in _CLASSES:
|
||||
# We fake the datastore by keeping the file references as a list of
|
||||
# names in the db
|
||||
if c == 'files':
|
||||
_db_content[c] = []
|
||||
else:
|
||||
_db_content[c] = {}
|
||||
create_network()
|
||||
create_host_network_system()
|
||||
create_host()
|
||||
create_datacenter()
|
||||
create_datastore()
|
||||
create_res_pool()
|
||||
|
||||
|
||||
def cleanup():
|
||||
"""Clear the db contents."""
|
||||
for c in _CLASSES:
|
||||
_db_content[c] = {}
|
||||
|
||||
|
||||
def _create_object(table, table_obj):
|
||||
"""Create an object in the db."""
|
||||
_db_content[table][table_obj.obj] = table_obj
|
||||
|
||||
|
||||
def _get_objects(obj_type):
|
||||
"""Get objects of the type."""
|
||||
lst_objs = []
|
||||
for key in _db_content[obj_type]:
|
||||
lst_objs.append(_db_content[obj_type][key])
|
||||
return lst_objs
|
||||
|
||||
|
||||
class Prop(object):
|
||||
"""Property Object base class."""
|
||||
|
||||
def __init__(self):
|
||||
self.name = None
|
||||
self.val = None
|
||||
|
||||
|
||||
class ManagedObject(object):
|
||||
"""Managed Data Object base class."""
|
||||
|
||||
def __init__(self, name="ManagedObject", obj_ref=None):
|
||||
"""Sets the obj property which acts as a reference to the object."""
|
||||
super(ManagedObject, self).__setattr__('objName', name)
|
||||
if obj_ref is None:
|
||||
obj_ref = str(uuid.uuid4())
|
||||
object.__setattr__(self, 'obj', obj_ref)
|
||||
object.__setattr__(self, 'propSet', [])
|
||||
|
||||
def set(self, attr, val):
|
||||
"""
|
||||
Sets an attribute value. Not using the __setattr__ directly for we
|
||||
want to set attributes of the type 'a.b.c' and using this function
|
||||
class we set the same.
|
||||
"""
|
||||
self.__setattr__(attr, val)
|
||||
|
||||
def get(self, attr):
|
||||
"""
|
||||
Gets an attribute. Used as an intermediary to get nested
|
||||
property like 'a.b.c' value.
|
||||
"""
|
||||
return self.__getattr__(attr)
|
||||
|
||||
def __setattr__(self, attr, val):
|
||||
for prop in self.propSet:
|
||||
if prop.name == attr:
|
||||
prop.val = val
|
||||
return
|
||||
elem = Prop()
|
||||
elem.name = attr
|
||||
elem.val = val
|
||||
self.propSet.append(elem)
|
||||
|
||||
def __getattr__(self, attr):
|
||||
for elem in self.propSet:
|
||||
if elem.name == attr:
|
||||
return elem.val
|
||||
raise exception.Error(_("Property %(attr)s not set for the managed "
|
||||
"object %(objName)s") %
|
||||
{'attr': attr,
|
||||
'objName': self.objName})
|
||||
|
||||
|
||||
class DataObject(object):
|
||||
"""Data object base class."""
|
||||
pass
|
||||
|
||||
|
||||
class VirtualDisk(DataObject):
|
||||
"""
|
||||
Virtual Disk class. Does nothing special except setting
|
||||
__class__.__name__ to 'VirtualDisk'. Refer place where __class__.__name__
|
||||
is used in the code.
|
||||
"""
|
||||
pass
|
||||
|
||||
|
||||
class VirtualDiskFlatVer2BackingInfo(DataObject):
|
||||
"""VirtualDiskFlatVer2BackingInfo class."""
|
||||
pass
|
||||
|
||||
|
||||
class VirtualLsiLogicController(DataObject):
|
||||
"""VirtualLsiLogicController class."""
|
||||
pass
|
||||
|
||||
|
||||
class VirtualMachine(ManagedObject):
|
||||
"""Virtual Machine class."""
|
||||
|
||||
def __init__(self, **kwargs):
|
||||
super(VirtualMachine, self).__init__("VirtualMachine")
|
||||
self.set("name", kwargs.get("name"))
|
||||
self.set("runtime.connectionState",
|
||||
kwargs.get("conn_state", "connected"))
|
||||
self.set("summary.config.guestId", kwargs.get("guest", "otherGuest"))
|
||||
ds_do = DataObject()
|
||||
ds_do.ManagedObjectReference = [kwargs.get("ds").obj]
|
||||
self.set("datastore", ds_do)
|
||||
self.set("summary.guest.toolsStatus", kwargs.get("toolsstatus",
|
||||
"toolsOk"))
|
||||
self.set("summary.guest.toolsRunningStatus", kwargs.get(
|
||||
"toolsrunningstate", "guestToolsRunning"))
|
||||
self.set("runtime.powerState", kwargs.get("powerstate", "poweredOn"))
|
||||
self.set("config.files.vmPathName", kwargs.get("vmPathName"))
|
||||
self.set("summary.config.numCpu", kwargs.get("numCpu", 1))
|
||||
self.set("summary.config.memorySizeMB", kwargs.get("mem", 1))
|
||||
self.set("config.hardware.device", kwargs.get("virtual_disk", None))
|
||||
self.set("config.extraConfig", kwargs.get("extra_config", None))
|
||||
|
||||
def reconfig(self, factory, val):
|
||||
"""
|
||||
Called to reconfigure the VM. Actually customizes the property
|
||||
setting of the Virtual Machine object.
|
||||
"""
|
||||
try:
|
||||
# Case of Reconfig of VM to attach disk
|
||||
controller_key = val.deviceChange[1].device.controllerKey
|
||||
filename = val.deviceChange[1].device.backing.fileName
|
||||
|
||||
disk = VirtualDisk()
|
||||
disk.controllerKey = controller_key
|
||||
|
||||
disk_backing = VirtualDiskFlatVer2BackingInfo()
|
||||
disk_backing.fileName = filename
|
||||
disk_backing.key = -101
|
||||
disk.backing = disk_backing
|
||||
|
||||
controller = VirtualLsiLogicController()
|
||||
controller.key = controller_key
|
||||
|
||||
self.set("config.hardware.device", [disk, controller])
|
||||
except AttributeError:
|
||||
# Case of Reconfig of VM to set extra params
|
||||
self.set("config.extraConfig", val.extraConfig)
|
||||
|
||||
|
||||
class Network(ManagedObject):
|
||||
"""Network class."""
|
||||
|
||||
def __init__(self):
|
||||
super(Network, self).__init__("Network")
|
||||
self.set("summary.name", "vmnet0")
|
||||
|
||||
|
||||
class ResourcePool(ManagedObject):
|
||||
"""Resource Pool class."""
|
||||
|
||||
def __init__(self):
|
||||
super(ResourcePool, self).__init__("ResourcePool")
|
||||
self.set("name", "ResPool")
|
||||
|
||||
|
||||
class Datastore(ManagedObject):
|
||||
"""Datastore class."""
|
||||
|
||||
def __init__(self):
|
||||
super(Datastore, self).__init__("Datastore")
|
||||
self.set("summary.type", "VMFS")
|
||||
self.set("summary.name", "fake-ds")
|
||||
|
||||
|
||||
class HostNetworkSystem(ManagedObject):
|
||||
"""HostNetworkSystem class."""
|
||||
|
||||
def __init__(self):
|
||||
super(HostNetworkSystem, self).__init__("HostNetworkSystem")
|
||||
self.set("name", "networkSystem")
|
||||
|
||||
pnic_do = DataObject()
|
||||
pnic_do.device = "vmnic0"
|
||||
|
||||
net_info_pnic = DataObject()
|
||||
net_info_pnic.PhysicalNic = [pnic_do]
|
||||
|
||||
self.set("networkInfo.pnic", net_info_pnic)
|
||||
|
||||
|
||||
class HostSystem(ManagedObject):
|
||||
"""Host System class."""
|
||||
|
||||
def __init__(self):
|
||||
super(HostSystem, self).__init__("HostSystem")
|
||||
self.set("name", "ha-host")
|
||||
if _db_content.get("HostNetworkSystem", None) is None:
|
||||
create_host_network_system()
|
||||
host_net_key = _db_content["HostNetworkSystem"].keys()[0]
|
||||
host_net_sys = _db_content["HostNetworkSystem"][host_net_key].obj
|
||||
self.set("configManager.networkSystem", host_net_sys)
|
||||
|
||||
if _db_content.get("Network", None) is None:
|
||||
create_network()
|
||||
net_ref = _db_content["Network"][_db_content["Network"].keys()[0]].obj
|
||||
network_do = DataObject()
|
||||
network_do.ManagedObjectReference = [net_ref]
|
||||
self.set("network", network_do)
|
||||
|
||||
vswitch_do = DataObject()
|
||||
vswitch_do.pnic = ["vmnic0"]
|
||||
vswitch_do.name = "vSwitch0"
|
||||
vswitch_do.portgroup = ["PortGroup-vmnet0"]
|
||||
|
||||
net_switch = DataObject()
net_switch.HostVirtualSwitch = [vswitch_do]
self.set("config.network.vswitch", net_switch)
|
||||
|
||||
host_pg_do = DataObject()
|
||||
host_pg_do.key = "PortGroup-vmnet0"
|
||||
|
||||
pg_spec = DataObject()
|
||||
pg_spec.vlanId = 0
|
||||
pg_spec.name = "vmnet0"
|
||||
|
||||
host_pg_do.spec = pg_spec
|
||||
|
||||
host_pg = DataObject()
|
||||
host_pg.HostPortGroup = [host_pg_do]
|
||||
self.set("config.network.portgroup", host_pg)
|
||||
|
||||
def _add_port_group(self, spec):
|
||||
"""Adds a port group to the host system object in the db."""
|
||||
pg_name = spec.name
|
||||
vswitch_name = spec.vswitchName
|
||||
vlanid = spec.vlanId
|
||||
|
||||
vswitch_do = DataObject()
|
||||
vswitch_do.pnic = ["vmnic0"]
|
||||
vswitch_do.name = vswitch_name
|
||||
vswitch_do.portgroup = ["PortGroup-%s" % pg_name]
|
||||
|
||||
vswitches = self.get("config.network.vswitch").HostVirtualSwitch
|
||||
vswitches.append(vswitch_do)
|
||||
|
||||
host_pg_do = DataObject()
|
||||
host_pg_do.key = "PortGroup-%s" % pg_name
|
||||
|
||||
pg_spec = DataObject()
|
||||
pg_spec.vlanId = vlanid
|
||||
pg_spec.name = pg_name
|
||||
|
||||
host_pg_do.spec = pg_spec
|
||||
host_pgrps = self.get("config.network.portgroup").HostPortGroup
|
||||
host_pgrps.append(host_pg_do)
|
||||
|
||||
|
||||
class Datacenter(ManagedObject):
|
||||
"""Datacenter class."""
|
||||
|
||||
def __init__(self):
|
||||
super(Datacenter, self).__init__("Datacenter")
|
||||
self.set("name", "ha-datacenter")
|
||||
self.set("vmFolder", "vm_folder_ref")
|
||||
if _db_content.get("Network", None) is None:
|
||||
create_network()
|
||||
net_ref = _db_content["Network"][_db_content["Network"].keys()[0]].obj
|
||||
network_do = DataObject()
|
||||
network_do.ManagedObjectReference = [net_ref]
|
||||
self.set("network", network_do)
|
||||
|
||||
|
||||
class Task(ManagedObject):
|
||||
"""Task class."""
|
||||
|
||||
def __init__(self, task_name, state="running"):
|
||||
super(Task, self).__init__("Task")
|
||||
info = DataObject()
|
||||
info.name = task_name
|
||||
info.state = state
|
||||
self.set("info", info)
|
||||
|
||||
|
||||
def create_host_network_system():
|
||||
host_net_system = HostNetworkSystem()
|
||||
_create_object("HostNetworkSystem", host_net_system)
|
||||
|
||||
|
||||
def create_host():
|
||||
host_system = HostSystem()
|
||||
_create_object('HostSystem', host_system)
|
||||
|
||||
|
||||
def create_datacenter():
|
||||
data_center = Datacenter()
|
||||
_create_object('Datacenter', data_center)
|
||||
|
||||
|
||||
def create_datastore():
|
||||
data_store = Datastore()
|
||||
_create_object('Datastore', data_store)
|
||||
|
||||
|
||||
def create_res_pool():
|
||||
res_pool = ResourcePool()
|
||||
_create_object('ResourcePool', res_pool)
|
||||
|
||||
|
||||
def create_network():
|
||||
network = Network()
|
||||
_create_object('Network', network)
|
||||
|
||||
|
||||
def create_task(task_name, state="running"):
|
||||
task = Task(task_name, state)
|
||||
_create_object("Task", task)
|
||||
return task
|
||||
|
||||
|
||||
def _add_file(file_path):
|
||||
"""Adds a file reference to the db."""
|
||||
_db_content["files"].append(file_path)
|
||||
|
||||
|
||||
def _remove_file(file_path):
|
||||
"""Removes a file reference from the db."""
|
||||
if _db_content.get("files") is None:
|
||||
raise exception.NotFound(_("No files have been added yet"))
|
||||
# Check if the remove is for a single file object or for a folder
|
||||
if file_path.find(".vmdk") != -1:
|
||||
if file_path not in _db_content.get("files"):
|
||||
raise exception.NotFound(_("File- '%s' is not there in the "
|
||||
"datastore") % file_path)
|
||||
_db_content.get("files").remove(file_path)
|
||||
else:
|
||||
# Removes the files in the folder and the folder too from the db
|
||||
for file in _db_content.get("files"):
|
||||
if file.find(file_path) != -1:
|
||||
lst_files = _db_content.get("files")
|
||||
if lst_files and lst_files.count(file):
|
||||
lst_files.remove(file)
|
||||
|
||||
|
||||
def fake_fetch_image(image, instance, **kwargs):
|
||||
"""Fakes fetch image call. Just adds a reference to the db for the file."""
|
||||
ds_name = kwargs.get("datastore_name")
|
||||
file_path = kwargs.get("file_path")
|
||||
ds_file_path = "[" + ds_name + "] " + file_path
|
||||
_add_file(ds_file_path)
|
||||
|
||||
|
||||
def fake_upload_image(image, instance, **kwargs):
|
||||
"""Fakes the upload of an image."""
|
||||
pass
|
||||
|
||||
|
||||
def fake_get_vmdk_size_and_properties(image_id, instance):
|
||||
"""Fakes the file size and properties fetch for the image file."""
|
||||
props = {"vmware_ostype": "otherGuest",
|
||||
"vmware_adaptertype": "lsiLogic"}
|
||||
return _FAKE_FILE_SIZE, props
|
||||
|
||||
|
||||
def _get_vm_mdo(vm_ref):
|
||||
"""Gets the Virtual Machine with the ref from the db."""
|
||||
if _db_content.get("VirtualMachine", None) is None:
|
||||
raise exception.NotFound(_("There is no VM registered"))
|
||||
if vm_ref not in _db_content.get("VirtualMachine"):
|
||||
raise exception.NotFound(_("Virtual Machine with ref %s is not "
|
||||
"there") % vm_ref)
|
||||
return _db_content.get("VirtualMachine")[vm_ref]
|
||||
|
||||
|
||||
class FakeFactory(object):
|
||||
"""Fake factory class for the suds client."""
|
||||
|
||||
def create(self, obj_name):
|
||||
"""Creates a namespace object."""
|
||||
return DataObject()
|
||||
|
||||
|
||||
class FakeVim(object):
|
||||
"""Fake VIM Class."""
|
||||
|
||||
def __init__(self, protocol="https", host="localhost", trace=None):
|
||||
"""
|
||||
Initializes the suds client object, sets the service content
|
||||
contents and the cookies for the session.
|
||||
"""
|
||||
self._session = None
|
||||
self.client = DataObject()
|
||||
self.client.factory = FakeFactory()
|
||||
|
||||
transport = DataObject()
|
||||
transport.cookiejar = "Fake-CookieJar"
|
||||
options = DataObject()
|
||||
options.transport = transport
|
||||
|
||||
self.client.options = options
|
||||
|
||||
service_content = self.client.factory.create('ns0:ServiceContent')
|
||||
service_content.propertyCollector = "PropCollector"
|
||||
service_content.virtualDiskManager = "VirtualDiskManager"
|
||||
service_content.fileManager = "FileManager"
|
||||
service_content.rootFolder = "RootFolder"
|
||||
service_content.sessionManager = "SessionManager"
|
||||
self._service_content = service_content
|
||||
|
||||
def get_service_content(self):
|
||||
return self._service_content
|
||||
|
||||
def __repr__(self):
|
||||
return "Fake VIM Object"
|
||||
|
||||
def __str__(self):
|
||||
return "Fake VIM Object"
|
||||
|
||||
def _login(self):
|
||||
"""Logs in and sets the session object in the db."""
|
||||
self._session = str(uuid.uuid4())
|
||||
session = DataObject()
|
||||
session.key = self._session
|
||||
_db_content['session'][self._session] = session
|
||||
return session
|
||||
|
||||
def _logout(self):
|
||||
"""Logs out and remove the session object ref from the db."""
|
||||
s = self._session
|
||||
self._session = None
|
||||
if s not in _db_content['session']:
|
||||
raise exception.Error(
|
||||
_("Logging out a session that is invalid or already logged "
|
||||
"out: %s") % s)
|
||||
del _db_content['session'][s]
|
||||
|
||||
def _terminate_session(self, *args, **kwargs):
|
||||
"""Terminates a session."""
|
||||
s = kwargs.get("sessionId")[0]
|
||||
if s not in _db_content['session']:
|
||||
return
|
||||
del _db_content['session'][s]
|
||||
|
||||
def _check_session(self):
|
||||
"""Checks if the session is active."""
|
||||
if (self._session is None or self._session not in
|
||||
_db_content['session']):
|
||||
LOG.debug(_("Session is faulty"))
|
||||
raise error_util.VimFaultException(
|
||||
[error_util.FAULT_NOT_AUTHENTICATED],
|
||||
_("Session Invalid"))
|
||||
|
||||
def _create_vm(self, method, *args, **kwargs):
|
||||
"""Creates and registers a VM object with the Host System."""
|
||||
config_spec = kwargs.get("config")
|
||||
ds = _db_content["Datastore"][_db_content["Datastore"].keys()[0]]
|
||||
vm_dict = {"name": config_spec.name,
|
||||
"ds": ds,
|
||||
"powerstate": "poweredOff",
|
||||
"vmPathName": config_spec.files.vmPathName,
|
||||
"numCpu": config_spec.numCPUs,
|
||||
"mem": config_spec.memoryMB}
|
||||
virtual_machine = VirtualMachine(**vm_dict)
|
||||
_create_object("VirtualMachine", virtual_machine)
|
||||
task_mdo = create_task(method, "success")
|
||||
return task_mdo.obj
|
||||
|
||||
def _reconfig_vm(self, method, *args, **kwargs):
|
||||
"""Reconfigures a VM and sets the properties supplied."""
|
||||
vm_ref = args[0]
|
||||
vm_mdo = _get_vm_mdo(vm_ref)
|
||||
vm_mdo.reconfig(self.client.factory, kwargs.get("spec"))
|
||||
task_mdo = create_task(method, "success")
|
||||
return task_mdo.obj
|
||||
|
||||
def _create_copy_disk(self, method, vmdk_file_path):
|
||||
"""Creates/copies a vmdk file object in the datastore."""
|
||||
# We need to add/create both .vmdk and .-flat.vmdk files
|
||||
flat_vmdk_file_path = \
|
||||
vmdk_file_path.replace(".vmdk", "-flat.vmdk")
|
||||
_add_file(vmdk_file_path)
|
||||
_add_file(flat_vmdk_file_path)
|
||||
task_mdo = create_task(method, "success")
|
||||
return task_mdo.obj
|
||||
|
||||
def _snapshot_vm(self, method):
|
||||
"""Snapshots a VM. Here we do nothing for faking sake."""
|
||||
task_mdo = create_task(method, "success")
|
||||
return task_mdo.obj
|
||||
|
||||
def _delete_disk(self, method, *args, **kwargs):
|
||||
"""Deletes .vmdk and -flat.vmdk files corresponding to the VM."""
|
||||
vmdk_file_path = kwargs.get("name")
|
||||
flat_vmdk_file_path = \
|
||||
vmdk_file_path.replace(".vmdk", "-flat.vmdk")
|
||||
_remove_file(vmdk_file_path)
|
||||
_remove_file(flat_vmdk_file_path)
|
||||
task_mdo = create_task(method, "success")
|
||||
return task_mdo.obj
|
||||
|
||||
def _delete_file(self, method, *args, **kwargs):
|
||||
"""Deletes a file from the datastore."""
|
||||
_remove_file(kwargs.get("name"))
|
||||
task_mdo = create_task(method, "success")
|
||||
return task_mdo.obj
|
||||
|
||||
def _just_return(self):
|
||||
"""Fakes a return."""
|
||||
return
|
||||
|
||||
def _unregister_vm(self, method, *args, **kwargs):
|
||||
"""Unregisters a VM from the Host System."""
|
||||
vm_ref = args[0]
|
||||
_get_vm_mdo(vm_ref)
|
||||
del _db_content["VirtualMachine"][vm_ref]
|
||||
|
||||
def _search_ds(self, method, *args, **kwargs):
|
||||
"""Searches the datastore for a file."""
|
||||
ds_path = kwargs.get("datastorePath")
|
||||
if _db_content.get("files", None) is None:
|
||||
raise exception.NotFound(_("No files have been added yet"))
|
||||
for file in _db_content.get("files"):
|
||||
if file.find(ds_path) != -1:
|
||||
task_mdo = create_task(method, "success")
|
||||
return task_mdo.obj
|
||||
task_mdo = create_task(method, "error")
|
||||
return task_mdo.obj
|
||||
|
||||
def _make_dir(self, method, *args, **kwargs):
|
||||
"""Creates a directory in the datastore."""
|
||||
ds_path = kwargs.get("name")
|
||||
if _db_content.get("files", None) is None:
|
||||
raise exception.NotFound(_("No files have been added yet"))
|
||||
_db_content["files"].append(ds_path)
|
||||
|
||||
def _set_power_state(self, method, vm_ref, pwr_state="poweredOn"):
|
||||
"""Sets power state for the VM."""
|
||||
if _db_content.get("VirtualMachine", None) is None:
|
||||
raise exception.NotFound(_(" No Virtual Machine has been "
|
||||
"registered yet"))
|
||||
if vm_ref not in _db_content.get("VirtualMachine"):
|
||||
raise exception.NotFound(_("Virtual Machine with ref %s is not "
|
||||
"there") % vm_ref)
|
||||
vm_mdo = _db_content.get("VirtualMachine").get(vm_ref)
|
||||
vm_mdo.set("runtime.powerState", pwr_state)
|
||||
task_mdo = create_task(method, "success")
|
||||
return task_mdo.obj
|
||||
|
||||
def _retrieve_properties(self, method, *args, **kwargs):
|
||||
"""Retrieves properties based on the type."""
|
||||
spec_set = kwargs.get("specSet")[0]
|
||||
type = spec_set.propSet[0].type
|
||||
properties = spec_set.propSet[0].pathSet
|
||||
objs = spec_set.objectSet
|
||||
lst_ret_objs = []
|
||||
for obj in objs:
|
||||
try:
|
||||
obj_ref = obj.obj
|
||||
# This means that we are doing a search for the managed
|
||||
# dataobjects of the type in the inventory
|
||||
if obj_ref == "RootFolder":
|
||||
for mdo_ref in _db_content[type]:
|
||||
mdo = _db_content[type][mdo_ref]
|
||||
# Create a temp Managed object which has the same ref
|
||||
# as the parent object and copies just the properties
|
||||
# asked for. We need .obj along with the propSet of
|
||||
# just the properties asked for
|
||||
temp_mdo = ManagedObject(mdo.objName, mdo.obj)
|
||||
for prop in properties:
|
||||
temp_mdo.set(prop, mdo.get(prop))
|
||||
lst_ret_objs.append(temp_mdo)
|
||||
else:
|
||||
if obj_ref in _db_content[type]:
|
||||
mdo = _db_content[type][obj_ref]
|
||||
temp_mdo = ManagedObject(mdo.objName, obj_ref)
|
||||
for prop in properties:
|
||||
temp_mdo.set(prop, mdo.get(prop))
|
||||
lst_ret_objs.append(temp_mdo)
|
||||
except Exception, exc:
|
||||
LOG.exception(exc)
|
||||
continue
|
||||
return lst_ret_objs
|
||||
|
||||
def _add_port_group(self, method, *args, **kwargs):
|
||||
"""Adds a port group to the host system."""
|
||||
host_mdo = \
|
||||
_db_content["HostSystem"][_db_content["HostSystem"].keys()[0]]
|
||||
host_mdo._add_port_group(kwargs.get("portgrp"))
|
||||
|
||||
def __getattr__(self, attr_name):
|
||||
if attr_name != "Login":
|
||||
self._check_session()
|
||||
if attr_name == "Login":
|
||||
return lambda *args, **kwargs: self._login()
|
||||
elif attr_name == "Logout":
|
||||
self._logout()
|
||||
elif attr_name == "TerminateSession":
|
||||
return lambda *args, **kwargs: self._terminate_session(
|
||||
*args, **kwargs)
|
||||
elif attr_name == "CreateVM_Task":
|
||||
return lambda *args, **kwargs: self._create_vm(attr_name,
|
||||
*args, **kwargs)
|
||||
elif attr_name == "ReconfigVM_Task":
|
||||
return lambda *args, **kwargs: self._reconfig_vm(attr_name,
|
||||
*args, **kwargs)
|
||||
elif attr_name == "CreateVirtualDisk_Task":
|
||||
return lambda *args, **kwargs: self._create_copy_disk(attr_name,
|
||||
kwargs.get("name"))
|
||||
elif attr_name == "DeleteDatastoreFile_Task":
|
||||
return lambda *args, **kwargs: self._delete_file(attr_name,
|
||||
*args, **kwargs)
|
||||
elif attr_name == "PowerOnVM_Task":
|
||||
return lambda *args, **kwargs: self._set_power_state(attr_name,
|
||||
args[0], "poweredOn")
|
||||
elif attr_name == "PowerOffVM_Task":
|
||||
return lambda *args, **kwargs: self._set_power_state(attr_name,
|
||||
args[0], "poweredOff")
|
||||
elif attr_name == "RebootGuest":
|
||||
return lambda *args, **kwargs: self._just_return()
|
||||
elif attr_name == "ResetVM_Task":
|
||||
return lambda *args, **kwargs: self._set_power_state(attr_name,
|
||||
args[0], "poweredOn")
|
||||
elif attr_name == "SuspendVM_Task":
|
||||
return lambda *args, **kwargs: self._set_power_state(attr_name,
|
||||
args[0], "suspended")
|
||||
elif attr_name == "CreateSnapshot_Task":
|
||||
return lambda *args, **kwargs: self._snapshot_vm(attr_name)
|
||||
elif attr_name == "CopyVirtualDisk_Task":
|
||||
return lambda *args, **kwargs: self._create_copy_disk(attr_name,
|
||||
kwargs.get("destName"))
|
||||
elif attr_name == "DeleteVirtualDisk_Task":
|
||||
return lambda *args, **kwargs: self._delete_disk(attr_name,
|
||||
*args, **kwargs)
|
||||
elif attr_name == "UnregisterVM":
|
||||
return lambda *args, **kwargs: self._unregister_vm(attr_name,
|
||||
*args, **kwargs)
|
||||
elif attr_name == "SearchDatastore_Task":
|
||||
return lambda *args, **kwargs: self._search_ds(attr_name,
|
||||
*args, **kwargs)
|
||||
elif attr_name == "MakeDirectory":
|
||||
return lambda *args, **kwargs: self._make_dir(attr_name,
|
||||
*args, **kwargs)
|
||||
elif attr_name == "RetrieveProperties":
|
||||
return lambda *args, **kwargs: self._retrieve_properties(
|
||||
attr_name, *args, **kwargs)
|
||||
elif attr_name == "AcquireCloneTicket":
|
||||
return lambda *args, **kwargs: self._just_return()
|
||||
elif attr_name == "AddPortGroup":
|
||||
return lambda *args, **kwargs: self._add_port_group(attr_name,
|
||||
*args, **kwargs)
|
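A minimal usage sketch (not part of the patch) of the fake VIM layer above, as a unit test might drive it. It assumes the module is importable as `fake` and that the backing _db_content dictionary has already been initialized (for example by a reset()-style helper not visible in this excerpt); the VM settings are placeholder values.

from nova.virt.vmwareapi import fake  # assumed import path for this fake module

# Populate the fake inventory that FakeVim expects to find.
fake.create_host()
fake.create_datacenter()
fake.create_datastore()
fake.create_network()

vim = fake.FakeVim()
session = vim.Login()  # resolves through __getattr__ to _login()

# Build the minimal config spec that _create_vm() reads from.
config = fake.DataObject()
config.name = "test-vm"
config.numCPUs = 1
config.memoryMB = 128
config.files = fake.DataObject()
config.files.vmPathName = "[fake-ds] test-vm/test-vm.vmx"

task_ref = vim.CreateVM_Task("vm_folder_ref", config=config)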
168
nova/virt/vmwareapi/io_util.py
Normal file
@ -0,0 +1,168 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""
|
||||
Utility classes for the time-saving transfer of data from the reader
to the writer, using a LightQueue as a pipe between the two.
|
||||
"""
|
||||
|
||||
from eventlet import event
|
||||
from eventlet import greenthread
|
||||
from eventlet.queue import LightQueue
|
||||
|
||||
from glance import client
|
||||
|
||||
from nova import exception
|
||||
from nova import log as logging
|
||||
|
||||
LOG = logging.getLogger("nova.virt.vmwareapi.io_util")
|
||||
|
||||
IO_THREAD_SLEEP_TIME = .01
|
||||
GLANCE_POLL_INTERVAL = 5
|
||||
|
||||
|
||||
class ThreadSafePipe(LightQueue):
|
||||
"""The pipe to hold the data which the reader writes to and the writer
|
||||
reads from."""
|
||||
|
||||
def __init__(self, maxsize, transfer_size):
|
||||
LightQueue.__init__(self, maxsize)
|
||||
self.transfer_size = transfer_size
|
||||
self.transferred = 0
|
||||
|
||||
def read(self, chunk_size):
|
||||
"""Read data from the pipe. Chunksize if ignored for we have ensured
|
||||
that the data chunks written to the pipe by readers is the same as the
|
||||
chunks asked for by the Writer."""
|
||||
if self.transferred < self.transfer_size:
|
||||
data_item = self.get()
|
||||
self.transferred += len(data_item)
|
||||
return data_item
|
||||
else:
|
||||
return ""
|
||||
|
||||
def write(self, data):
|
||||
"""Put a data item in the pipe."""
|
||||
self.put(data)
|
||||
|
||||
def close(self):
|
||||
"""A place-holder to maintain consistency."""
|
||||
pass
|
||||
|
||||
|
||||
class GlanceWriteThread(object):
|
||||
"""Ensures that image data is written to in the glance client and that
|
||||
it is in correct ('active')state."""
|
||||
|
||||
def __init__(self, input, glance_client, image_id, image_meta={}):
|
||||
self.input = input
|
||||
self.glance_client = glance_client
|
||||
self.image_id = image_id
|
||||
self.image_meta = image_meta
|
||||
self._running = False
|
||||
|
||||
def start(self):
|
||||
self.done = event.Event()
|
||||
|
||||
def _inner():
|
||||
"""Function to do the image data transfer through an update
|
||||
and thereon checks if the state is 'active'."""
|
||||
self.glance_client.update_image(self.image_id,
|
||||
image_meta=self.image_meta,
|
||||
image_data=self.input)
|
||||
self._running = True
|
||||
while self._running:
|
||||
try:
|
||||
image_status = \
|
||||
self.glance_client.get_image_meta(self.image_id).get(
|
||||
"status")
|
||||
if image_status == "active":
|
||||
self.stop()
|
||||
self.done.send(True)
|
||||
# If the state is killed, then raise an exception.
|
||||
elif image_status == "killed":
|
||||
self.stop()
|
||||
exc_msg = _("Glance image %s is in killed state") %\
|
||||
self.image_id
|
||||
LOG.exception(exc_msg)
|
||||
self.done.send_exception(exception.Error(exc_msg))
|
||||
elif image_status in ["saving", "queued"]:
|
||||
greenthread.sleep(GLANCE_POLL_INTERVAL)
|
||||
else:
|
||||
self.stop()
|
||||
exc_msg = _("Glance image "
|
||||
"%(image_id)s is in unknown state "
|
||||
"- %(state)s") % {
|
||||
"image_id": self.image_id,
|
||||
"state": image_status}
|
||||
LOG.exception(exc_msg)
|
||||
self.done.send_exception(exception.Error(exc_msg))
|
||||
except Exception, exc:
|
||||
self.stop()
|
||||
self.done.send_exception(exc)
|
||||
|
||||
greenthread.spawn(_inner)
|
||||
return self.done
|
||||
|
||||
def stop(self):
|
||||
self._running = False
|
||||
|
||||
def wait(self):
|
||||
return self.done.wait()
|
||||
|
||||
def close(self):
|
||||
pass
|
||||
|
||||
|
||||
class IOThread(object):
|
||||
"""Class that reads chunks from the input file and writes them to the
|
||||
output file till the transfer is completely done."""
|
||||
|
||||
def __init__(self, input, output):
|
||||
self.input = input
|
||||
self.output = output
|
||||
self._running = False
|
||||
self.got_exception = False
|
||||
|
||||
def start(self):
|
||||
self.done = event.Event()
|
||||
|
||||
def _inner():
|
||||
"""Read data from the input and write the same to the output
|
||||
until the transfer completes."""
|
||||
self._running = True
|
||||
while self._running:
|
||||
try:
|
||||
data = self.input.read(None)
|
||||
if not data:
|
||||
self.stop()
|
||||
self.done.send(True)
|
||||
self.output.write(data)
|
||||
greenthread.sleep(IO_THREAD_SLEEP_TIME)
|
||||
except Exception, exc:
|
||||
self.stop()
|
||||
LOG.exception(exc)
|
||||
self.done.send_exception(exc)
|
||||
|
||||
greenthread.spawn(_inner)
|
||||
return self.done
|
||||
|
||||
def stop(self):
|
||||
self._running = False
|
||||
|
||||
def wait(self):
|
||||
return self.done.wait()
|
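To make the intent of these classes concrete, here is a hedged sketch (not part of the change) of how they compose: a reader thread fills the ThreadSafePipe while the Glance writer drains it. The read_handle, glance_client, image_id and image_size names are placeholders for objects created elsewhere in the driver.

pipe = ThreadSafePipe(10, image_size)              # bounded queue acting as the pipe
read_thread = IOThread(read_handle, pipe)          # copies read_handle -> pipe
write_thread = GlanceWriteThread(pipe, glance_client, image_id)

read_event = read_thread.start()
write_event = write_thread.start()
read_event.wait()                                  # both events resolve when the
write_event.wait()                                 # transfer finishes or fails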
149
nova/virt/vmwareapi/network_utils.py
Normal file
@ -0,0 +1,149 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""
|
||||
Utility functions for ESX Networking.
|
||||
"""
|
||||
|
||||
from nova import exception
|
||||
from nova import log as logging
|
||||
from nova.virt.vmwareapi import error_util
|
||||
from nova.virt.vmwareapi import vim_util
|
||||
from nova.virt.vmwareapi import vm_util
|
||||
|
||||
LOG = logging.getLogger("nova.virt.vmwareapi.network_utils")
|
||||
|
||||
|
||||
def get_network_with_the_name(session, network_name="vmnet0"):
|
||||
"""
|
||||
Gets reference to the network whose name is passed as the
|
||||
argument.
|
||||
"""
|
||||
hostsystems = session._call_method(vim_util, "get_objects",
|
||||
"HostSystem", ["network"])
|
||||
vm_networks_ret = hostsystems[0].propSet[0].val
|
||||
# Meaning there are no networks on the host. suds responds with a ""
# in the parent property field rather than a [] in the
# ManagedObjectReference property field of the parent.
|
||||
if not vm_networks_ret:
|
||||
return None
|
||||
vm_networks = vm_networks_ret.ManagedObjectReference
|
||||
networks = session._call_method(vim_util,
|
||||
"get_properties_for_a_collection_of_objects",
|
||||
"Network", vm_networks, ["summary.name"])
|
||||
for network in networks:
|
||||
if network.propSet[0].val == network_name:
|
||||
return network.obj
|
||||
return None
|
||||
|
||||
|
||||
def get_vswitch_for_vlan_interface(session, vlan_interface):
|
||||
"""
|
||||
Gets the vswitch associated with the physical network adapter
|
||||
with the name supplied.
|
||||
"""
|
||||
# Get the list of vSwitches on the Host System
|
||||
host_mor = session._call_method(vim_util, "get_objects",
|
||||
"HostSystem")[0].obj
|
||||
vswitches_ret = session._call_method(vim_util,
|
||||
"get_dynamic_property", host_mor,
|
||||
"HostSystem", "config.network.vswitch")
|
||||
# Meaning there are no vSwitches on the host. Shouldn't be the case,
|
||||
# but just doing code check
|
||||
if not vswitches_ret:
|
||||
return
|
||||
vswitches = vswitches_ret.HostVirtualSwitch
|
||||
# Get the vSwitch associated with the network adapter
|
||||
for elem in vswitches:
|
||||
try:
|
||||
for nic_elem in elem.pnic:
|
||||
if str(nic_elem).split('-')[-1].find(vlan_interface) != -1:
|
||||
return elem.name
|
||||
# Catching Attribute error as a vSwitch may not be associated with a
|
||||
# physical NIC.
|
||||
except AttributeError:
|
||||
pass
|
||||
|
||||
|
||||
def check_if_vlan_interface_exists(session, vlan_interface):
|
||||
"""Checks if the vlan_inteface exists on the esx host."""
|
||||
host_net_system_mor = session._call_method(vim_util, "get_objects",
|
||||
"HostSystem", ["configManager.networkSystem"])[0].propSet[0].val
|
||||
physical_nics_ret = session._call_method(vim_util,
|
||||
"get_dynamic_property", host_net_system_mor,
|
||||
"HostNetworkSystem", "networkInfo.pnic")
|
||||
# Meaning there are no physical nics on the host
|
||||
if not physical_nics_ret:
|
||||
return False
|
||||
physical_nics = physical_nics_ret.PhysicalNic
|
||||
for pnic in physical_nics:
|
||||
if vlan_interface == pnic.device:
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def get_vlanid_and_vswitch_for_portgroup(session, pg_name):
|
||||
"""Get the vlan id and vswicth associated with the port group."""
|
||||
host_mor = session._call_method(vim_util, "get_objects",
|
||||
"HostSystem")[0].obj
|
||||
port_grps_on_host_ret = session._call_method(vim_util,
|
||||
"get_dynamic_property", host_mor,
|
||||
"HostSystem", "config.network.portgroup")
|
||||
if not port_grps_on_host_ret:
|
||||
excep = ("ESX SOAP server returned an empty port group "
|
||||
"for the host system in its response")
|
||||
LOG.exception(excep)
|
||||
raise exception.Error(_(excep))
|
||||
port_grps_on_host = port_grps_on_host_ret.HostPortGroup
|
||||
for p_gp in port_grps_on_host:
|
||||
if p_gp.spec.name == pg_name:
|
||||
p_grp_vswitch_name = p_gp.vswitch.split("-")[-1]
|
||||
return p_gp.spec.vlanId, p_grp_vswitch_name
|
||||
|
||||
|
||||
def create_port_group(session, pg_name, vswitch_name, vlan_id=0):
|
||||
"""
|
||||
Creates a port group on the host system with the vlan tags
|
||||
supplied. VLAN id 0 means no vlan id association.
|
||||
"""
|
||||
client_factory = session._get_vim().client.factory
|
||||
add_prt_grp_spec = vm_util.get_add_vswitch_port_group_spec(
|
||||
client_factory,
|
||||
vswitch_name,
|
||||
pg_name,
|
||||
vlan_id)
|
||||
host_mor = session._call_method(vim_util, "get_objects",
|
||||
"HostSystem")[0].obj
|
||||
network_system_mor = session._call_method(vim_util,
|
||||
"get_dynamic_property", host_mor,
|
||||
"HostSystem", "configManager.networkSystem")
|
||||
LOG.debug(_("Creating Port Group with name %s on "
|
||||
"the ESX host") % pg_name)
|
||||
try:
|
||||
session._call_method(session._get_vim(),
|
||||
"AddPortGroup", network_system_mor,
|
||||
portgrp=add_prt_grp_spec)
|
||||
except error_util.VimFaultException, exc:
|
||||
# There can be a race condition when two instances try
|
||||
# adding port groups at the same time. One succeeds, then
|
||||
# the other one will get an exception. Since we are
|
||||
# concerned with the port group being created, which is done
|
||||
# by the other call, we can ignore the exception.
|
||||
if error_util.FAULT_ALREADY_EXISTS not in exc.fault_list:
|
||||
raise exception.Error(exc)
|
||||
LOG.debug(_("Created Port Group with name %s on "
|
||||
"the ESX host") % pg_name)
|
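Taken together, a hedged sketch (not part of the patch) of how these helpers might be chained when bringing up a VLAN network for an instance; `session` stands for the driver's API session wrapper used throughout these helpers, and the interface/VLAN values are example placeholders.

vlan_interface = "vmnic0"
if not check_if_vlan_interface_exists(session, vlan_interface):
    raise exception.Error(_("Physical adapter %s not found") % vlan_interface)

vswitch_name = get_vswitch_for_vlan_interface(session, vlan_interface)
pg_name = "br100"
if get_network_with_the_name(session, pg_name) is None:
    # Port group is absent; create it with the requested VLAN tag.
    create_port_group(session, pg_name, vswitch_name, vlan_id=100)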
182
nova/virt/vmwareapi/read_write_util.py
Normal file
@ -0,0 +1,182 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""Classes to handle image files
|
||||
|
||||
Collection of classes to handle image upload/download to/from Image service
|
||||
(like Glance image storage and retrieval service) from/to ESX/ESXi server.
|
||||
|
||||
"""
|
||||
|
||||
import httplib
|
||||
import urllib
|
||||
import urllib2
|
||||
import urlparse
|
||||
|
||||
from eventlet import event
|
||||
from eventlet import greenthread
|
||||
|
||||
from glance import client
|
||||
|
||||
from nova import flags
|
||||
from nova import log as logging
|
||||
|
||||
LOG = logging.getLogger("nova.virt.vmwareapi.read_write_util")
|
||||
|
||||
FLAGS = flags.FLAGS
|
||||
|
||||
USER_AGENT = "OpenStack-ESX-Adapter"
|
||||
|
||||
try:
|
||||
READ_CHUNKSIZE = client.BaseClient.CHUNKSIZE
|
||||
except AttributeError:
|
||||
READ_CHUNKSIZE = 65536
|
||||
|
||||
|
||||
class GlanceFileRead(object):
|
||||
"""Glance file read handler class."""
|
||||
|
||||
def __init__(self, glance_read_iter):
|
||||
self.glance_read_iter = glance_read_iter
|
||||
self.iter = self.get_next()
|
||||
|
||||
def read(self, chunk_size):
|
||||
"""Read an item from the queue. The chunk size is ignored for the
|
||||
Client ImageBodyIterator uses its own CHUNKSIZE."""
|
||||
try:
|
||||
return self.iter.next()
|
||||
except StopIteration:
|
||||
return ""
|
||||
|
||||
def get_next(self):
|
||||
"""Get the next item from the image iterator."""
|
||||
for data in self.glance_read_iter:
|
||||
yield data
|
||||
|
||||
def close(self):
|
||||
"""A dummy close just to maintain consistency."""
|
||||
pass
|
||||
|
||||
|
||||
class VMwareHTTPFile(object):
|
||||
"""Base class for HTTP file."""
|
||||
|
||||
def __init__(self, file_handle):
|
||||
self.eof = False
|
||||
self.file_handle = file_handle
|
||||
|
||||
def set_eof(self, eof):
|
||||
"""Set the end of file marker."""
|
||||
self.eof = eof
|
||||
|
||||
def get_eof(self):
|
||||
"""Check if the end of file has been reached."""
|
||||
return self.eof
|
||||
|
||||
def close(self):
|
||||
"""Close the file handle."""
|
||||
try:
|
||||
self.file_handle.close()
|
||||
except Exception, exc:
|
||||
LOG.exception(exc)
|
||||
|
||||
def __del__(self):
|
||||
"""Close the file handle on garbage collection."""
|
||||
self.close()
|
||||
|
||||
def _build_vim_cookie_headers(self, vim_cookies):
|
||||
"""Build ESX host session cookie headers."""
|
||||
cookie_header = ""
|
||||
for vim_cookie in vim_cookies:
|
||||
cookie_header = vim_cookie.name + "=" + vim_cookie.value
|
||||
break
|
||||
return cookie_header
|
||||
|
||||
def write(self, data):
|
||||
"""Write data to the file."""
|
||||
raise NotImplementedError
|
||||
|
||||
def read(self, chunk_size):
|
||||
"""Read a chunk of data."""
|
||||
raise NotImplementedError
|
||||
|
||||
def get_size(self):
|
||||
"""Get size of the file to be read."""
|
||||
raise NotImplementedError
|
||||
|
||||
|
||||
class VMWareHTTPWriteFile(VMwareHTTPFile):
|
||||
"""VMWare file write handler class."""
|
||||
|
||||
def __init__(self, host, data_center_name, datastore_name, cookies,
|
||||
file_path, file_size, scheme="https"):
|
||||
base_url = "%s://%s/folder/%s" % (scheme, host, file_path)
|
||||
param_list = {"dcPath": data_center_name, "dsName": datastore_name}
|
||||
base_url = base_url + "?" + urllib.urlencode(param_list)
|
||||
(scheme, netloc, path, params, query, fragment) = \
|
||||
urlparse.urlparse(base_url)
|
||||
if scheme == "http":
|
||||
conn = httplib.HTTPConnection(netloc)
|
||||
elif scheme == "https":
|
||||
conn = httplib.HTTPSConnection(netloc)
|
||||
conn.putrequest("PUT", path + "?" + query)
|
||||
conn.putheader("User-Agent", USER_AGENT)
|
||||
conn.putheader("Content-Length", file_size)
|
||||
conn.putheader("Cookie", self._build_vim_cookie_headers(cookies))
|
||||
conn.endheaders()
|
||||
self.conn = conn
|
||||
VMwareHTTPFile.__init__(self, conn)
|
||||
|
||||
def write(self, data):
|
||||
"""Write to the file."""
|
||||
self.file_handle.send(data)
|
||||
|
||||
def close(self):
|
||||
"""Get the response and close the connection."""
|
||||
try:
|
||||
self.conn.getresponse()
|
||||
except Exception, excep:
|
||||
LOG.debug(_("Exception during HTTP connection close in "
|
||||
"VMWareHTTpWrite. Exception is %s") % excep)
|
||||
super(VMWareHTTPWriteFile, self).close()
|
||||
|
||||
|
||||
class VmWareHTTPReadFile(VMwareHTTPFile):
|
||||
"""VMWare file read handler class."""
|
||||
|
||||
def __init__(self, host, data_center_name, datastore_name, cookies,
|
||||
file_path, scheme="https"):
|
||||
base_url = "%s://%s/folder/%s" % (scheme, host,
|
||||
urllib.pathname2url(file_path))
|
||||
param_list = {"dcPath": data_center_name, "dsName": datastore_name}
|
||||
base_url = base_url + "?" + urllib.urlencode(param_list)
|
||||
headers = {'User-Agent': USER_AGENT,
|
||||
'Cookie': self._build_vim_cookie_headers(cookies)}
|
||||
request = urllib2.Request(base_url, None, headers)
|
||||
conn = urllib2.urlopen(request)
|
||||
VMwareHTTPFile.__init__(self, conn)
|
||||
|
||||
def read(self, chunk_size):
|
||||
"""Read a chunk of data."""
|
||||
# We ignore the chunk size passed in because we want the pipe to hold
# data items of the chunk size that the Glance client uses for reads
# while writing.
|
||||
return self.file_handle.read(READ_CHUNKSIZE)
|
||||
|
||||
def get_size(self):
|
||||
"""Get size of the file to be read."""
|
||||
return self.file_handle.headers.get("Content-Length", -1)
|
176
nova/virt/vmwareapi/vim.py
Normal file
@ -0,0 +1,176 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""
|
||||
Classes for making VMware VI SOAP calls.
|
||||
"""
|
||||
|
||||
import httplib
|
||||
|
||||
from suds import WebFault
|
||||
from suds.client import Client
|
||||
from suds.plugin import MessagePlugin
|
||||
from suds.sudsobject import Property
|
||||
|
||||
from nova import flags
|
||||
from nova.virt.vmwareapi import error_util
|
||||
|
||||
RESP_NOT_XML_ERROR = 'Response is "text/html", not "text/xml"'
|
||||
CONN_ABORT_ERROR = 'Software caused connection abort'
|
||||
ADDRESS_IN_USE_ERROR = 'Address already in use'
|
||||
|
||||
FLAGS = flags.FLAGS
|
||||
flags.DEFINE_string('vmwareapi_wsdl_loc',
|
||||
None,
|
||||
'VIM Service WSDL Location, '
'e.g. http://<server>/vimService.wsdl. '
'Due to a bug in vSphere ESX 4.1 the default wsdl cannot be used; '
'refer to the vmwareapi readme for setup instructions.')
|
||||
|
||||
|
||||
class VIMMessagePlugin(MessagePlugin):
|
||||
|
||||
def addAttributeForValue(self, node):
|
||||
# suds does not handle AnyType properly.
|
||||
# VI SDK requires type attribute to be set when AnyType is used
|
||||
if node.name == 'value':
|
||||
node.set('xsi:type', 'xsd:string')
|
||||
|
||||
def marshalled(self, context):
|
||||
"""suds will send the specified soap envelope.
|
||||
Provides the plugin with the opportunity to prune empty
|
||||
nodes and fixup nodes before sending it to the server.
|
||||
"""
|
||||
# suds builds the entire request object based on the wsdl schema.
|
||||
# VI SDK throws server errors if optional SOAP nodes are sent without
|
||||
# values, e.g. <test/> as opposed to <test>test</test>
|
||||
context.envelope.prune()
|
||||
context.envelope.walk(self.addAttributeForValue)
|
||||
|
||||
|
||||
class Vim:
|
||||
"""The VIM Object."""
|
||||
|
||||
def __init__(self,
|
||||
protocol="https",
|
||||
host="localhost"):
|
||||
"""
|
||||
Creates the necessary Communication interfaces and gets the
|
||||
ServiceContent for initiating SOAP transactions.
|
||||
|
||||
protocol: http or https
|
||||
host : ESX IPAddress[:port] or ESX Hostname[:port]
|
||||
"""
|
||||
self._protocol = protocol
|
||||
self._host_name = host
|
||||
wsdl_url = FLAGS.vmwareapi_wsdl_loc
|
||||
if wsdl_url is None:
|
||||
raise Exception(_("Must specify vmwareapi_wsdl_loc"))
|
||||
# TODO(sateesh): Use this when VMware fixes their faulty wsdl
|
||||
#wsdl_url = '%s://%s/sdk/vimService.wsdl' % (self._protocol,
|
||||
# self._host_name)
|
||||
url = '%s://%s/sdk' % (self._protocol, self._host_name)
|
||||
self.client = Client(wsdl_url, location=url,
|
||||
plugins=[VIMMessagePlugin()])
|
||||
self._service_content = \
|
||||
self.RetrieveServiceContent("ServiceInstance")
|
||||
|
||||
def get_service_content(self):
|
||||
"""Gets the service content object."""
|
||||
return self._service_content
|
||||
|
||||
def __getattr__(self, attr_name):
|
||||
"""Makes the API calls and gets the result."""
|
||||
try:
|
||||
return object.__getattr__(self, attr_name)
|
||||
except AttributeError:
|
||||
|
||||
def vim_request_handler(managed_object, **kwargs):
|
||||
"""
|
||||
Builds the SOAP message and parses the response for fault
|
||||
checking and other errors.
|
||||
|
||||
managed_object : Managed Object Reference or Managed
|
||||
Object Name
|
||||
**kwargs : Keyword arguments of the call
|
||||
"""
|
||||
# Dynamic handler for VI SDK Calls
|
||||
try:
|
||||
request_mo = \
|
||||
self._request_managed_object_builder(managed_object)
|
||||
request = getattr(self.client.service, attr_name)
|
||||
response = request(request_mo, **kwargs)
|
||||
# To check for the faults that are part of the message body
|
||||
# and not returned as Fault object response from the ESX
|
||||
# SOAP server
|
||||
if hasattr(error_util.FaultCheckers,
|
||||
attr_name.lower() + "_fault_checker"):
|
||||
fault_checker = getattr(error_util.FaultCheckers,
|
||||
attr_name.lower() + "_fault_checker")
|
||||
fault_checker(response)
|
||||
return response
|
||||
# Catch the VimFaultException that is raised by the fault
|
||||
# check of the SOAP response
|
||||
except error_util.VimFaultException, excep:
|
||||
raise
|
||||
except WebFault, excep:
|
||||
doc = excep.document
|
||||
detail = doc.childAtPath("/Envelope/Body/Fault/detail")
|
||||
fault_list = []
|
||||
for child in detail.getChildren():
|
||||
fault_list.append(child.get("type"))
|
||||
raise error_util.VimFaultException(fault_list, excep)
|
||||
except AttributeError, excep:
|
||||
raise error_util.VimAttributeError(_("No such SOAP method "
|
||||
"'%s' provided by VI SDK") % (attr_name), excep)
|
||||
except (httplib.CannotSendRequest,
|
||||
httplib.ResponseNotReady,
|
||||
httplib.CannotSendHeader), excep:
|
||||
raise error_util.SessionOverLoadException(_("httplib "
|
||||
"error in %s: ") % (attr_name), excep)
|
||||
except Exception, excep:
|
||||
# Socket errors which need special handling for they
|
||||
# might be caused by ESX API call overload
|
||||
if (str(excep).find(ADDRESS_IN_USE_ERROR) != -1 or
str(excep).find(CONN_ABORT_ERROR) != -1):
|
||||
raise error_util.SessionOverLoadException(_("Socket "
|
||||
"error in %s: ") % (attr_name), excep)
|
||||
# Type error that needs special handling for it might be
|
||||
# caused by ESX host API call overload
|
||||
elif str(excep).find(RESP_NOT_XML_ERROR) != -1:
|
||||
raise error_util.SessionOverLoadException(_("Type "
|
||||
"error in %s: ") % (attr_name), excep)
|
||||
else:
|
||||
raise error_util.VimException(
|
||||
_("Exception in %s ") % (attr_name), excep)
|
||||
return vim_request_handler
|
||||
|
||||
def _request_managed_object_builder(self, managed_object):
|
||||
"""Builds the request managed object."""
|
||||
# Request Managed Object Builder
|
||||
if type(managed_object) == type(""):
|
||||
mo = Property(managed_object)
|
||||
mo._type = managed_object
|
||||
else:
|
||||
mo = managed_object
|
||||
return mo
|
||||
|
||||
def __repr__(self):
|
||||
return "VIM Object"
|
||||
|
||||
def __str__(self):
|
||||
return "VIM Object"
|
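For context, a brief hedged sketch (not part of the patch) of how the dynamic __getattr__ dispatch above is used; the host and credentials are placeholders, and FLAGS.vmwareapi_wsdl_loc must already point at a usable WSDL.

vim = Vim(protocol="https", host="esx.example.com")
service_content = vim.get_service_content()

# 'Login' is not an attribute of Vim, so __getattr__ returns a handler that
# issues the SOAP call against the SessionManager managed object.
session = vim.Login(service_content.sessionManager,
                    userName="root", password="secret")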
217
nova/virt/vmwareapi/vim_util.py
Normal file
@ -0,0 +1,217 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""
|
||||
The VMware API utility module.
|
||||
"""
|
||||
|
||||
|
||||
def build_selection_spec(client_factory, name):
|
||||
"""Builds the selection spec."""
|
||||
sel_spec = client_factory.create('ns0:SelectionSpec')
|
||||
sel_spec.name = name
|
||||
return sel_spec
|
||||
|
||||
|
||||
def build_traversal_spec(client_factory, name, spec_type, path, skip,
|
||||
select_set):
|
||||
"""Builds the traversal spec object."""
|
||||
traversal_spec = client_factory.create('ns0:TraversalSpec')
|
||||
traversal_spec.name = name
|
||||
traversal_spec.type = spec_type
|
||||
traversal_spec.path = path
|
||||
traversal_spec.skip = skip
|
||||
traversal_spec.selectSet = select_set
|
||||
return traversal_spec
|
||||
|
||||
|
||||
def build_recursive_traversal_spec(client_factory):
|
||||
"""
|
||||
Builds the Recursive Traversal Spec to traverse the object managed
|
||||
object hierarchy.
|
||||
"""
|
||||
visit_folders_select_spec = build_selection_spec(client_factory,
|
||||
"visitFolders")
|
||||
# For getting to hostFolder from datacenter
|
||||
dc_to_hf = build_traversal_spec(client_factory, "dc_to_hf", "Datacenter",
|
||||
"hostFolder", False,
|
||||
[visit_folders_select_spec])
|
||||
# For getting to vmFolder from datacenter
|
||||
dc_to_vmf = build_traversal_spec(client_factory, "dc_to_vmf", "Datacenter",
|
||||
"vmFolder", False,
|
||||
[visit_folders_select_spec])
|
||||
# For getting Host System to virtual machine
|
||||
h_to_vm = build_traversal_spec(client_factory, "h_to_vm", "HostSystem",
|
||||
"vm", False,
|
||||
[visit_folders_select_spec])
|
||||
|
||||
# For getting to Host System from Compute Resource
|
||||
cr_to_h = build_traversal_spec(client_factory, "cr_to_h",
|
||||
"ComputeResource", "host", False, [])
|
||||
|
||||
# For getting to datastore from Compute Resource
|
||||
cr_to_ds = build_traversal_spec(client_factory, "cr_to_ds",
|
||||
"ComputeResource", "datastore", False, [])
|
||||
|
||||
rp_to_rp_select_spec = build_selection_spec(client_factory, "rp_to_rp")
|
||||
rp_to_vm_select_spec = build_selection_spec(client_factory, "rp_to_vm")
|
||||
# For getting to resource pool from Compute Resource
|
||||
cr_to_rp = build_traversal_spec(client_factory, "cr_to_rp",
|
||||
"ComputeResource", "resourcePool", False,
|
||||
[rp_to_rp_select_spec, rp_to_vm_select_spec])
|
||||
|
||||
# For getting to child res pool from the parent res pool
|
||||
rp_to_rp = build_traversal_spec(client_factory, "rp_to_rp", "ResourcePool",
|
||||
"resourcePool", False,
|
||||
[rp_to_rp_select_spec, rp_to_vm_select_spec])
|
||||
|
||||
# For getting to Virtual Machine from the Resource Pool
|
||||
rp_to_vm = build_traversal_spec(client_factory, "rp_to_vm", "ResourcePool",
|
||||
"vm", False,
|
||||
[rp_to_rp_select_spec, rp_to_vm_select_spec])
|
||||
|
||||
# Get the assorted traversal spec which takes care of the objects to
|
||||
# be searched for from the root folder
|
||||
traversal_spec = build_traversal_spec(client_factory, "visitFolders",
|
||||
"Folder", "childEntity", False,
|
||||
[visit_folders_select_spec, dc_to_hf,
|
||||
dc_to_vmf, cr_to_ds, cr_to_h, cr_to_rp,
|
||||
rp_to_rp, h_to_vm, rp_to_vm])
|
||||
return traversal_spec
|
||||
|
||||
|
||||
def build_property_spec(client_factory, type="VirtualMachine",
|
||||
properties_to_collect=["name"],
|
||||
all_properties=False):
|
||||
"""Builds the Property Spec."""
|
||||
property_spec = client_factory.create('ns0:PropertySpec')
|
||||
property_spec.all = all_properties
|
||||
property_spec.pathSet = properties_to_collect
|
||||
property_spec.type = type
|
||||
return property_spec
|
||||
|
||||
|
||||
def build_object_spec(client_factory, root_folder, traversal_specs):
|
||||
"""Builds the object Spec."""
|
||||
object_spec = client_factory.create('ns0:ObjectSpec')
|
||||
object_spec.obj = root_folder
|
||||
object_spec.skip = False
|
||||
object_spec.selectSet = traversal_specs
|
||||
return object_spec
|
||||
|
||||
|
||||
def build_property_filter_spec(client_factory, property_specs, object_specs):
|
||||
"""Builds the Property Filter Spec."""
|
||||
property_filter_spec = client_factory.create('ns0:PropertyFilterSpec')
|
||||
property_filter_spec.propSet = property_specs
|
||||
property_filter_spec.objectSet = object_specs
|
||||
return property_filter_spec
|
||||
|
||||
|
||||
def get_object_properties(vim, collector, mobj, type, properties):
|
||||
"""Gets the properties of the Managed object specified."""
|
||||
client_factory = vim.client.factory
|
||||
if mobj is None:
|
||||
return None
|
||||
usecoll = collector
|
||||
if usecoll is None:
|
||||
usecoll = vim.get_service_content().propertyCollector
|
||||
property_filter_spec = client_factory.create('ns0:PropertyFilterSpec')
|
||||
property_spec = client_factory.create('ns0:PropertySpec')
|
||||
property_spec.all = (properties is None or len(properties) == 0)
|
||||
property_spec.pathSet = properties
|
||||
property_spec.type = type
|
||||
object_spec = client_factory.create('ns0:ObjectSpec')
|
||||
object_spec.obj = mobj
|
||||
object_spec.skip = False
|
||||
property_filter_spec.propSet = [property_spec]
|
||||
property_filter_spec.objectSet = [object_spec]
|
||||
return vim.RetrieveProperties(usecoll, specSet=[property_filter_spec])
|
||||
|
||||
|
||||
def get_dynamic_property(vim, mobj, type, property_name):
|
||||
"""Gets a particular property of the Managed Object."""
|
||||
obj_content = \
|
||||
get_object_properties(vim, None, mobj, type, [property_name])
|
||||
property_value = None
|
||||
if obj_content:
|
||||
dynamic_property = obj_content[0].propSet
|
||||
if dynamic_property:
|
||||
property_value = dynamic_property[0].val
|
||||
return property_value
|
||||
|
||||
|
||||
def get_objects(vim, type, properties_to_collect=["name"], all=False):
|
||||
"""Gets the list of objects of the type specified."""
|
||||
client_factory = vim.client.factory
|
||||
object_spec = build_object_spec(client_factory,
|
||||
vim.get_service_content().rootFolder,
|
||||
[build_recursive_traversal_spec(client_factory)])
|
||||
property_spec = build_property_spec(client_factory, type=type,
|
||||
properties_to_collect=properties_to_collect,
|
||||
all_properties=all)
|
||||
property_filter_spec = build_property_filter_spec(client_factory,
|
||||
[property_spec],
|
||||
[object_spec])
|
||||
return vim.RetrieveProperties(vim.get_service_content().propertyCollector,
|
||||
specSet=[property_filter_spec])
|
||||
|
||||
|
||||
def get_prop_spec(client_factory, spec_type, properties):
|
||||
"""Builds the Property Spec Object."""
|
||||
prop_spec = client_factory.create('ns0:PropertySpec')
|
||||
prop_spec.type = spec_type
|
||||
prop_spec.pathSet = properties
|
||||
return prop_spec
|
||||
|
||||
|
||||
def get_obj_spec(client_factory, obj, select_set=None):
|
||||
"""Builds the Object Spec object."""
|
||||
obj_spec = client_factory.create('ns0:ObjectSpec')
|
||||
obj_spec.obj = obj
|
||||
obj_spec.skip = False
|
||||
if select_set is not None:
|
||||
obj_spec.selectSet = select_set
|
||||
return obj_spec
|
||||
|
||||
|
||||
def get_prop_filter_spec(client_factory, obj_spec, prop_spec):
|
||||
"""Builds the Property Filter Spec Object."""
|
||||
prop_filter_spec = \
|
||||
client_factory.create('ns0:PropertyFilterSpec')
|
||||
prop_filter_spec.propSet = prop_spec
|
||||
prop_filter_spec.objectSet = obj_spec
|
||||
return prop_filter_spec
|
||||
|
||||
|
||||
def get_properties_for_a_collection_of_objects(vim, type,
|
||||
obj_list, properties):
|
||||
"""
|
||||
Gets the list of properties for the collection of
|
||||
objects of the type specified.
|
||||
"""
|
||||
client_factory = vim.client.factory
|
||||
if len(obj_list) == 0:
|
||||
return []
|
||||
prop_spec = get_prop_spec(client_factory, type, properties)
|
||||
lst_obj_specs = []
|
||||
for obj in obj_list:
|
||||
lst_obj_specs.append(get_obj_spec(client_factory, obj))
|
||||
prop_filter_spec = get_prop_filter_spec(client_factory,
|
||||
lst_obj_specs, [prop_spec])
|
||||
return vim.RetrieveProperties(vim.get_service_content().propertyCollector,
|
||||
specSet=[prop_filter_spec])
|
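An illustrative sketch (assuming an authenticated `vim` object) of the two most common entry points in this module: listing objects of a type and fetching a single property.

# List every VM in the inventory along with its power state.
vms = get_objects(vim, "VirtualMachine",
                  properties_to_collect=["name", "runtime.powerState"])
for obj_content in vms:
    props = dict((prop.name, prop.val) for prop in obj_content.propSet)
    # e.g. {"name": "instance-00000001", "runtime.powerState": "poweredOn"}

# Fetch one property of the first host system.
host_mor = get_objects(vim, "HostSystem")[0].obj
datastores = get_dynamic_property(vim, host_mor, "HostSystem", "datastore")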
306
nova/virt/vmwareapi/vm_util.py
Normal file
@ -0,0 +1,306 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
"""
|
||||
The VMware API VM utility module to build SOAP object specs.
|
||||
"""
|
||||
|
||||
|
||||
def build_datastore_path(datastore_name, path):
|
||||
"""Build the datastore compliant path."""
|
||||
return "[%s] %s" % (datastore_name, path)
|
||||
|
||||
|
||||
def split_datastore_path(datastore_path):
|
||||
"""
|
||||
Split the VMWare style datastore path to get the Datastore
|
||||
name and the entity path.
|
||||
"""
|
||||
spl = datastore_path.split('[', 1)[1].split(']', 1)
|
||||
path = ""
|
||||
if len(spl) == 1:
|
||||
datastore_url = spl[0]
|
||||
else:
|
||||
datastore_url, path = spl
|
||||
return datastore_url, path.strip()
|
||||
|
||||
|
||||
def get_vm_create_spec(client_factory, instance, data_store_name,
|
||||
network_name="vmnet0",
|
||||
os_type="otherGuest"):
|
||||
"""Builds the VM Create spec."""
|
||||
config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
|
||||
config_spec.name = instance.name
|
||||
config_spec.guestId = os_type
|
||||
|
||||
vm_file_info = client_factory.create('ns0:VirtualMachineFileInfo')
|
||||
vm_file_info.vmPathName = "[" + data_store_name + "]"
|
||||
config_spec.files = vm_file_info
|
||||
|
||||
tools_info = client_factory.create('ns0:ToolsConfigInfo')
|
||||
tools_info.afterPowerOn = True
|
||||
tools_info.afterResume = True
|
||||
tools_info.beforeGuestStandby = True
|
||||
tools_info.beforeGuestShutdown = True
|
||||
tools_info.beforeGuestReboot = True
|
||||
|
||||
config_spec.tools = tools_info
|
||||
config_spec.numCPUs = int(instance.vcpus)
|
||||
config_spec.memoryMB = int(instance.memory_mb)
|
||||
|
||||
nic_spec = create_network_spec(client_factory,
|
||||
network_name, instance.mac_address)
|
||||
|
||||
device_config_spec = [nic_spec]
|
||||
|
||||
config_spec.deviceChange = device_config_spec
|
||||
return config_spec
|
||||
|
||||
|
||||
def create_controller_spec(client_factory, key):
|
||||
"""
|
||||
Builds a Config Spec for the LSI Logic Controller's addition
|
||||
which acts as the controller for the virtual hard disk to be attached
|
||||
to the VM.
|
||||
"""
|
||||
# Create a controller for the Virtual Hard Disk
|
||||
virtual_device_config = \
|
||||
client_factory.create('ns0:VirtualDeviceConfigSpec')
|
||||
virtual_device_config.operation = "add"
|
||||
virtual_lsi = \
|
||||
client_factory.create('ns0:VirtualLsiLogicController')
|
||||
virtual_lsi.key = key
|
||||
virtual_lsi.busNumber = 0
|
||||
virtual_lsi.sharedBus = "noSharing"
|
||||
virtual_device_config.device = virtual_lsi
|
||||
return virtual_device_config
|
||||
|
||||
|
||||
def create_network_spec(client_factory, network_name, mac_address):
|
||||
"""
|
||||
Builds a config spec for the addition of a new network
|
||||
adapter to the VM.
|
||||
"""
|
||||
network_spec = \
|
||||
client_factory.create('ns0:VirtualDeviceConfigSpec')
|
||||
network_spec.operation = "add"
|
||||
|
||||
# Get the recommended card type for the VM based on the guest OS of the VM
|
||||
net_device = client_factory.create('ns0:VirtualPCNet32')
|
||||
|
||||
backing = \
|
||||
client_factory.create('ns0:VirtualEthernetCardNetworkBackingInfo')
|
||||
backing.deviceName = network_name
|
||||
|
||||
connectable_spec = \
|
||||
client_factory.create('ns0:VirtualDeviceConnectInfo')
|
||||
connectable_spec.startConnected = True
|
||||
connectable_spec.allowGuestControl = True
|
||||
connectable_spec.connected = True
|
||||
|
||||
net_device.connectable = connectable_spec
|
||||
net_device.backing = backing
|
||||
|
||||
# The server assigns a key to the device. Here we pass a negative
# temporary key, because actual keys are positive numbers and we don't
# want a clash with the key that the server might associate with the device.
|
||||
net_device.key = -47
|
||||
net_device.addressType = "manual"
|
||||
net_device.macAddress = mac_address
|
||||
net_device.wakeOnLanEnabled = True
|
||||
|
||||
network_spec.device = net_device
|
||||
return network_spec
|
||||
|
||||
|
||||
def get_vmdk_attach_config_spec(client_factory, disksize, file_path,
|
||||
adapter_type="lsiLogic"):
|
||||
"""Builds the vmdk attach config spec."""
|
||||
config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
|
||||
|
||||
# The controller Key pertains to the Key of the LSI Logic Controller, which
|
||||
# controls this Hard Disk
|
||||
device_config_spec = []
|
||||
# For IDE devices, there are these two default controllers created in the
|
||||
# VM having keys 200 and 201
|
||||
if adapter_type == "ide":
|
||||
controller_key = 200
|
||||
else:
|
||||
controller_key = -101
|
||||
controller_spec = create_controller_spec(client_factory,
|
||||
controller_key)
|
||||
device_config_spec.append(controller_spec)
|
||||
virtual_device_config_spec = create_virtual_disk_spec(client_factory,
|
||||
disksize, controller_key, file_path)
|
||||
|
||||
device_config_spec.append(virtual_device_config_spec)
|
||||
|
||||
config_spec.deviceChange = device_config_spec
|
||||
return config_spec
|
||||
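A hedged sketch (not part of the patch) of how the spec builders above are combined when spawning an instance: the VM shell is created first, then the root vmdk is attached through a reconfigure. `session`, `instance`, `vm_ref` and the datastore values stand in for objects produced elsewhere in the driver.

client_factory = session._get_vim().client.factory

create_spec = get_vm_create_spec(client_factory, instance, "datastore1",
                                 network_name="vmnet0", os_type="otherGuest")
# ... the driver would pass create_spec to CreateVM_Task here ...

vmdk_path = build_datastore_path("datastore1",
                                 "instance-1/instance-1.vmdk")
attach_spec = get_vmdk_attach_config_spec(client_factory,
                                          1024 * 1024,      # disk size in KB
                                          vmdk_path,
                                          adapter_type="lsiLogic")
reconfig_task = session._call_method(session._get_vim(), "ReconfigVM_Task",
                                     vm_ref, spec=attach_spec)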
|
||||
|
||||
def get_vmdk_file_path_and_adapter_type(client_factory, hardware_devices):
|
||||
"""Gets the vmdk file path and the storage adapter type."""
|
||||
if hardware_devices.__class__.__name__ == "ArrayOfVirtualDevice":
|
||||
hardware_devices = hardware_devices.VirtualDevice
|
||||
vmdk_file_path = None
|
||||
vmdk_controler_key = None
|
||||
|
||||
adapter_type_dict = {}
|
||||
for device in hardware_devices:
|
||||
if device.__class__.__name__ == "VirtualDisk" and \
|
||||
device.backing.__class__.__name__ \
|
||||
== "VirtualDiskFlatVer2BackingInfo":
|
||||
vmdk_file_path = device.backing.fileName
|
||||
vmdk_controller_key = device.controllerKey
|
||||
elif device.__class__.__name__ == "VirtualLsiLogicController":
|
||||
adapter_type_dict[device.key] = "lsiLogic"
|
||||
elif device.__class__.__name__ == "VirtualBusLogicController":
|
||||
adapter_type_dict[device.key] = "busLogic"
|
||||
elif device.__class__.__name__ == "VirtualIDEController":
|
||||
adapter_type_dict[device.key] = "ide"
|
||||
elif device.__class__.__name__ == "VirtualLsiLogicSASController":
|
||||
adapter_type_dict[device.key] = "lsiLogic"
|
||||
|
||||
adapter_type = adapter_type_dict.get(vmdk_controller_key, "")
|
||||
|
||||
return vmdk_file_path, adapter_type
|
||||
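# Illustrative example (assumed values): for a VM whose device list holds a
# flat-file backed VirtualDisk attached to an LSI Logic controller, this
# helper would return a tuple such as
#
#     ("[datastore1] instance-1/instance-1.vmdk", "lsiLogic")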
|
||||
|
||||
def get_copy_virtual_disk_spec(client_factory, adapter_type="lsiLogic"):
|
||||
"""Builds the Virtual Disk copy spec."""
|
||||
dest_spec = client_factory.create('ns0:VirtualDiskSpec')
|
||||
dest_spec.adapterType = adapter_type
|
||||
dest_spec.diskType = "thick"
|
||||
return dest_spec
|
||||
|
||||
|
||||
def get_vmdk_create_spec(client_factory, size_in_kb, adapter_type="lsiLogic"):
|
||||
"""Builds the virtual disk create spec."""
|
||||
create_vmdk_spec = \
|
||||
client_factory.create('ns0:FileBackedVirtualDiskSpec')
|
||||
create_vmdk_spec.adapterType = adapter_type
|
||||
create_vmdk_spec.diskType = "thick"
|
||||
create_vmdk_spec.capacityKb = size_in_kb
|
||||
return create_vmdk_spec
|
||||
|
||||
|
||||
def create_virtual_disk_spec(client_factory, disksize, controller_key,
|
||||
file_path=None):
|
||||
"""
|
||||
Builds the spec for creating a new virtual disk, or for attaching an
already existing virtual disk, to the VM.
|
||||
"""
|
||||
virtual_device_config = \
|
||||
client_factory.create('ns0:VirtualDeviceConfigSpec')
|
||||
virtual_device_config.operation = "add"
|
||||
if file_path is None:
|
||||
virtual_device_config.fileOperation = "create"
|
||||
|
||||
virtual_disk = client_factory.create('ns0:VirtualDisk')
|
||||
|
||||
disk_file_backing = \
|
||||
client_factory.create('ns0:VirtualDiskFlatVer2BackingInfo')
|
||||
disk_file_backing.diskMode = "persistent"
|
||||
disk_file_backing.thinProvisioned = False
|
||||
if file_path is not None:
|
||||
disk_file_backing.fileName = file_path
|
||||
else:
|
||||
disk_file_backing.fileName = ""
|
||||
|
||||
connectable_spec = client_factory.create('ns0:VirtualDeviceConnectInfo')
|
||||
connectable_spec.startConnected = True
|
||||
connectable_spec.allowGuestControl = False
|
||||
connectable_spec.connected = True
|
||||
|
||||
virtual_disk.backing = disk_file_backing
|
||||
virtual_disk.connectable = connectable_spec
|
||||
|
||||
# The Server assigns a Key to the device. Here we pass a -ve random key.
|
||||
# -ve because actual keys are +ve numbers and we don't
|
||||
# want a clash with the key that server might associate with the device
|
||||
virtual_disk.key = -100
|
||||
virtual_disk.controllerKey = controller_key
|
||||
virtual_disk.unitNumber = 0
|
||||
virtual_disk.capacityInKB = disksize
|
||||
|
||||
virtual_device_config.device = virtual_disk
|
||||
|
||||
return virtual_device_config
|
||||
|
||||
|
||||
def get_dummy_vm_create_spec(client_factory, name, data_store_name):
|
||||
"""Builds the dummy VM create spec."""
|
||||
config_spec = client_factory.create('ns0:VirtualMachineConfigSpec')
|
||||
|
||||
config_spec.name = name
|
||||
config_spec.guestId = "otherGuest"
|
||||
|
||||
vm_file_info = client_factory.create('ns0:VirtualMachineFileInfo')
|
||||
vm_file_info.vmPathName = "[" + data_store_name + "]"
|
||||
config_spec.files = vm_file_info
|
||||
|
||||
tools_info = client_factory.create('ns0:ToolsConfigInfo')
|
||||
tools_info.afterPowerOn = True
|
||||
tools_info.afterResume = True
|
||||
tools_info.beforeGuestStandby = True
|
||||
tools_info.beforeGuestShutdown = True
|
||||
tools_info.beforeGuestReboot = True
|
||||
|
||||
config_spec.tools = tools_info
|
||||
config_spec.numCPUs = 1
|
||||
config_spec.memoryMB = 4
|
||||
|
||||
controller_key = -101
|
||||
controller_spec = create_controller_spec(client_factory, controller_key)
|
||||
disk_spec = create_virtual_disk_spec(client_factory, 1024, controller_key)
|
||||
|
||||
device_config_spec = [controller_spec, disk_spec]
|
||||
|
||||
config_spec.deviceChange = device_config_spec
|
||||
return config_spec
|
||||
|
||||
|
||||
def get_machine_id_change_spec(client_factory, mac, ip_addr, netmask, gateway):
|
||||
"""Builds the machine id change config spec."""
|
||||
machine_id_str = "%s;%s;%s;%s" % (mac, ip_addr, netmask, gateway)
|
||||
virtual_machine_config_spec = \
|
||||
client_factory.create('ns0:VirtualMachineConfigSpec')
|
||||
|
||||
opt = client_factory.create('ns0:OptionValue')
|
||||
opt.key = "machine.id"
|
||||
opt.value = machine_id_str
|
||||
virtual_machine_config_spec.extraConfig = [opt]
|
||||
return virtual_machine_config_spec
|
||||
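# Illustrative note: the "machine.id" extraConfig value set here is a plain
# semicolon-separated string that the guest tool parses to configure
# networking. With assumed example values:
#
#     get_machine_id_change_spec(client_factory, "02:16:3e:aa:bb:cc",
#                                "10.0.0.5", "255.255.255.0", "10.0.0.1")
#     # -> extraConfig["machine.id"] =
#     #    "02:16:3e:aa:bb:cc;10.0.0.5;255.255.255.0;10.0.0.1"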
|
||||
|
||||
def get_add_vswitch_port_group_spec(client_factory, vswitch_name,
|
||||
port_group_name, vlan_id):
|
||||
"""Builds the virtual switch port group add spec."""
|
||||
vswitch_port_group_spec = client_factory.create('ns0:HostPortGroupSpec')
|
||||
vswitch_port_group_spec.name = port_group_name
|
||||
vswitch_port_group_spec.vswitchName = vswitch_name
|
||||
|
||||
# VLAN ID of 0 means that VLAN tagging is not to be done for the network.
|
||||
vswitch_port_group_spec.vlanId = int(vlan_id)
|
||||
|
||||
policy = client_factory.create('ns0:HostNetworkPolicy')
|
||||
nicteaming = client_factory.create('ns0:HostNicTeamingPolicy')
|
||||
nicteaming.notifySwitches = True
|
||||
policy.nicTeaming = nicteaming
|
||||
|
||||
vswitch_port_group_spec.policy = policy
|
||||
return vswitch_port_group_spec
|
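# Illustrative sketch (assumption, not taken from this file): the port group
# spec built above would typically be passed to the host network system's
# AddPortGroup call, roughly as follows:
#
#     spec = get_add_vswitch_port_group_spec(client_factory, "vSwitch0",
#                                            "br100", vlan_id=100)
#     session._call_method(session._get_vim(), "AddPortGroup",
#                          host_network_system_mor, portgrp=spec)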
789
nova/virt/vmwareapi/vmops.py
Normal file
@ -0,0 +1,789 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""
|
||||
Class for VM tasks like spawn, snapshot, suspend, resume etc.
|
||||
"""
|
||||
|
||||
import base64
|
||||
import os
|
||||
import time
|
||||
import urllib
|
||||
import urllib2
|
||||
import uuid
|
||||
|
||||
from nova import context
|
||||
from nova import db
|
||||
from nova import exception
|
||||
from nova import flags
|
||||
from nova import log as logging
|
||||
from nova.compute import power_state
|
||||
from nova.virt.vmwareapi import vim_util
|
||||
from nova.virt.vmwareapi import vm_util
|
||||
from nova.virt.vmwareapi import vmware_images
|
||||
from nova.virt.vmwareapi import network_utils
|
||||
|
||||
FLAGS = flags.FLAGS
|
||||
LOG = logging.getLogger("nova.virt.vmwareapi.vmops")
|
||||
|
||||
VMWARE_POWER_STATES = {
|
||||
'poweredOff': power_state.SHUTDOWN,
|
||||
'poweredOn': power_state.RUNNING,
|
||||
'suspended': power_state.PAUSED}
|
||||
|
||||
|
||||
class VMWareVMOps(object):
|
||||
"""Management class for VM-related tasks."""
|
||||
|
||||
def __init__(self, session):
|
||||
"""Initializer."""
|
||||
self._session = session
|
||||
|
||||
def _wait_with_callback(self, instance_id, task, callback):
|
||||
"""Waits for the task to finish and does a callback after."""
|
||||
ret = None
|
||||
try:
|
||||
ret = self._session._wait_for_task(instance_id, task)
|
||||
except Exception, excep:
|
||||
LOG.exception(excep)
|
||||
callback(ret)
|
||||
|
||||
def list_instances(self):
|
||||
"""Lists the VM instances that are registered with the ESX host."""
|
||||
LOG.debug(_("Getting list of instances"))
|
||||
vms = self._session._call_method(vim_util, "get_objects",
|
||||
"VirtualMachine",
|
||||
["name", "runtime.connectionState"])
|
||||
lst_vm_names = []
|
||||
for vm in vms:
|
||||
vm_name = None
|
||||
conn_state = None
|
||||
for prop in vm.propSet:
|
||||
if prop.name == "name":
|
||||
vm_name = prop.val
|
||||
elif prop.name == "runtime.connectionState":
|
||||
conn_state = prop.val
|
||||
# Ignore the orphaned or inaccessible VMs
|
||||
if conn_state not in ["orphaned", "inaccessible"]:
|
||||
lst_vm_names.append(vm_name)
|
||||
LOG.debug(_("Got total of %s instances") % str(len(lst_vm_names)))
|
||||
return lst_vm_names
|
||||
|
||||
def spawn(self, instance):
|
||||
"""
|
||||
Creates a VM instance.
|
||||
|
||||
Steps followed are:
|
||||
1. Create a VM with no disk and the specifics in the instance object
|
||||
like RAM size.
|
||||
2. Create a dummy vmdk of the size of the disk file that is to be
|
||||
uploaded. This is required just to create the metadata file.
|
||||
3. Delete the -flat.vmdk file created in the above step and retain
|
||||
the metadata .vmdk file.
|
||||
4. Upload the disk file.
|
||||
5. Attach the disk to the VM by reconfiguring the same.
|
||||
6. Power on the VM.
|
||||
"""
|
||||
vm_ref = self._get_vm_ref_from_the_name(instance.name)
|
||||
if vm_ref:
|
||||
raise exception.Duplicate(_("Attempted to create a VM with the name"
" %s, but it already exists on the host") % instance.name)
|
||||
|
||||
client_factory = self._session._get_vim().client.factory
|
||||
service_content = self._session._get_vim().get_service_content()
|
||||
|
||||
network = db.network_get_by_instance(context.get_admin_context(),
|
||||
instance['id'])
|
||||
|
||||
net_name = network['bridge']
|
||||
|
||||
def _check_if_network_bridge_exists():
|
||||
network_ref = \
|
||||
network_utils.get_network_with_the_name(self._session,
|
||||
net_name)
|
||||
if network_ref is None:
|
||||
raise exception.NotFound(_("Network with the name '%s' doesn't"
|
||||
" exist on the ESX host") % net_name)
|
||||
|
||||
_check_if_network_bridge_exists()
|
||||
|
||||
def _get_datastore_ref():
|
||||
"""Get the datastore list and choose the first local storage."""
|
||||
data_stores = self._session._call_method(vim_util, "get_objects",
|
||||
"Datastore", ["summary.type", "summary.name"])
|
||||
data_store_name = None
for elem in data_stores:
|
||||
ds_name = None
|
||||
ds_type = None
|
||||
for prop in elem.propSet:
|
||||
if prop.name == "summary.type":
|
||||
ds_type = prop.val
|
||||
elif prop.name == "summary.name":
|
||||
ds_name = prop.val
|
||||
# Local storage identifier
|
||||
if ds_type == "VMFS":
|
||||
data_store_name = ds_name
|
||||
return data_store_name
|
||||
|
||||
if data_store_name is None:
|
||||
msg = _("Couldn't get a local Datastore reference")
|
||||
LOG.exception(msg)
|
||||
raise exception.Error(msg)
|
||||
|
||||
data_store_name = _get_datastore_ref()
|
||||
|
||||
def _get_image_properties():
|
||||
"""
|
||||
Get the Size of the flat vmdk file that is there on the storage
|
||||
repository.
|
||||
"""
|
||||
image_size, image_properties = \
|
||||
vmware_images.get_vmdk_size_and_properties(
|
||||
instance.image_id, instance)
|
||||
vmdk_file_size_in_kb = int(image_size) / 1024
|
||||
os_type = image_properties.get("vmware_ostype", "otherGuest")
|
||||
adapter_type = image_properties.get("vmware_adaptertype",
|
||||
"lsiLogic")
|
||||
return vmdk_file_size_in_kb, os_type, adapter_type
|
||||
|
||||
vmdk_file_size_in_kb, os_type, adapter_type = _get_image_properties()
|
||||
|
||||
def _get_vmfolder_and_res_pool_mors():
|
||||
"""Get the Vm folder ref from the datacenter."""
|
||||
dc_objs = self._session._call_method(vim_util, "get_objects",
|
||||
"Datacenter", ["vmFolder"])
|
||||
# There is only one default datacenter in a standalone ESX host
|
||||
vm_folder_mor = dc_objs[0].propSet[0].val
|
||||
|
||||
# Get the resource pool. Taking the first resource pool coming our
|
||||
# way. Assuming that is the default resource pool.
|
||||
res_pool_mor = self._session._call_method(vim_util, "get_objects",
|
||||
"ResourcePool")[0].obj
|
||||
return vm_folder_mor, res_pool_mor
|
||||
|
||||
vm_folder_mor, res_pool_mor = _get_vmfolder_and_res_pool_mors()
|
||||
|
||||
# Get the create vm config spec
|
||||
config_spec = vm_util.get_vm_create_spec(client_factory, instance,
|
||||
data_store_name, net_name, os_type)
|
||||
|
||||
def _execute_create_vm():
|
||||
"""Create VM on ESX host."""
|
||||
LOG.debug(_("Creating VM with the name %s on the ESX host") %
|
||||
instance.name)
|
||||
# Create the VM on the ESX host
|
||||
vm_create_task = self._session._call_method(
|
||||
self._session._get_vim(),
|
||||
"CreateVM_Task", vm_folder_mor,
|
||||
config=config_spec, pool=res_pool_mor)
|
||||
self._session._wait_for_task(instance.id, vm_create_task)
|
||||
|
||||
LOG.debug(_("Created VM with the name %s on the ESX host") %
|
||||
instance.name)
|
||||
|
||||
_execute_create_vm()
|
||||
|
||||
# Set the machine id for the VM for setting the IP
|
||||
self._set_machine_id(client_factory, instance)
|
||||
|
||||
# Naming the VM files in correspondence with the VM instance name
|
||||
# The flat vmdk file name
|
||||
flat_uploaded_vmdk_name = "%s/%s-flat.vmdk" % (instance.name,
|
||||
instance.name)
|
||||
# The vmdk meta-data file
|
||||
uploaded_vmdk_name = "%s/%s.vmdk" % (instance.name, instance.name)
|
||||
flat_uploaded_vmdk_path = vm_util.build_datastore_path(data_store_name,
|
||||
flat_uploaded_vmdk_name)
|
||||
uploaded_vmdk_path = vm_util.build_datastore_path(data_store_name,
|
||||
uploaded_vmdk_name)
|
||||
|
||||
def _create_virtual_disk():
|
||||
"""Create a virtual disk of the size of flat vmdk file."""
|
||||
# Create a Virtual Disk of the size of the flat vmdk file. This is
|
||||
# done just to generate the meta-data file whose specifics
|
||||
# depend on the size of the disk, thin/thick provisioning and the
|
||||
# storage adapter type.
|
||||
# Here we assume thick provisioning and lsiLogic for the adapter
|
||||
# type
|
||||
LOG.debug(_("Creating Virtual Disk of size "
|
||||
"%(vmdk_file_size_in_kb)s KB and adapter type "
|
||||
"%(adapter_type)s on the ESX host local store"
|
||||
" %(data_store_name)s") %
|
||||
{"vmdk_file_size_in_kb": vmdk_file_size_in_kb,
|
||||
"adapter_type": adapter_type,
|
||||
"data_store_name": data_store_name})
|
||||
vmdk_create_spec = vm_util.get_vmdk_create_spec(client_factory,
|
||||
vmdk_file_size_in_kb, adapter_type)
|
||||
vmdk_create_task = self._session._call_method(
|
||||
self._session._get_vim(),
|
||||
"CreateVirtualDisk_Task",
|
||||
service_content.virtualDiskManager,
|
||||
name=uploaded_vmdk_path,
|
||||
datacenter=self._get_datacenter_name_and_ref()[0],
|
||||
spec=vmdk_create_spec)
|
||||
self._session._wait_for_task(instance.id, vmdk_create_task)
|
||||
LOG.debug(_("Created Virtual Disk of size %(vmdk_file_size_in_kb)s"
|
||||
" KB on the ESX host local store "
|
||||
"%(data_store_name)s") %
|
||||
{"vmdk_file_size_in_kb": vmdk_file_size_in_kb,
|
||||
"data_store_name": data_store_name})
|
||||
|
||||
_create_virtual_disk()
|
||||
|
||||
def _delete_disk_file():
|
||||
LOG.debug(_("Deleting the file %(flat_uploaded_vmdk_path)s "
"on the ESX host local "
"store %(data_store_name)s") %
|
||||
{"flat_uploaded_vmdk_path": flat_uploaded_vmdk_path,
|
||||
"data_store_name": data_store_name})
|
||||
# Delete the -flat.vmdk file created. .vmdk file is retained.
|
||||
vmdk_delete_task = self._session._call_method(
|
||||
self._session._get_vim(),
|
||||
"DeleteDatastoreFile_Task",
|
||||
service_content.fileManager,
|
||||
name=flat_uploaded_vmdk_path)
|
||||
self._session._wait_for_task(instance.id, vmdk_delete_task)
|
||||
LOG.debug(_("Deleted the file %(flat_uploaded_vmdk_path)s on the "
|
||||
"ESX host local store %(data_store_name)s") %
|
||||
{"flat_uploaded_vmdk_path": flat_uploaded_vmdk_path,
|
||||
"data_store_name": data_store_name})
|
||||
|
||||
_delete_disk_file()
|
||||
|
||||
cookies = self._session._get_vim().client.options.transport.cookiejar
|
||||
|
||||
def _fetch_image_on_esx_datastore():
|
||||
"""Fetch image from Glance to ESX datastore."""
|
||||
LOG.debug(_("Downloading image file data %(image_id)s to the ESX "
|
||||
"data store %(data_store_name)s") %
|
||||
({'image_id': instance.image_id,
|
||||
'data_store_name': data_store_name}))
|
||||
# Upload the -flat.vmdk file whose meta-data file we just created
|
||||
# above
|
||||
vmware_images.fetch_image(
|
||||
instance.image_id,
|
||||
instance,
|
||||
host=self._session._host_ip,
|
||||
data_center_name=self._get_datacenter_name_and_ref()[1],
|
||||
datastore_name=data_store_name,
|
||||
cookies=cookies,
|
||||
file_path=flat_uploaded_vmdk_name)
|
||||
LOG.debug(_("Downloaded image file data %(image_id)s to the ESX "
|
||||
"data store %(data_store_name)s") %
|
||||
({'image_id': instance.image_id,
|
||||
'data_store_name': data_store_name}))
|
||||
_fetch_image_on_esx_datastore()
|
||||
|
||||
vm_ref = self._get_vm_ref_from_the_name(instance.name)
|
||||
|
||||
def _attach_vmdk_to_the_vm():
|
||||
"""
|
||||
Attach the vmdk uploaded to the VM. VM reconfigure is done
|
||||
to do so.
|
||||
"""
|
||||
vmdk_attach_config_spec = vm_util.get_vmdk_attach_config_spec(
|
||||
client_factory,
|
||||
vmdk_file_size_in_kb, uploaded_vmdk_path,
|
||||
adapter_type)
|
||||
LOG.debug(_("Reconfiguring VM instance %s to attach the image "
|
||||
"disk") % instance.name)
|
||||
reconfig_task = self._session._call_method(
|
||||
self._session._get_vim(),
|
||||
"ReconfigVM_Task", vm_ref,
|
||||
spec=vmdk_attach_config_spec)
|
||||
self._session._wait_for_task(instance.id, reconfig_task)
|
||||
LOG.debug(_("Reconfigured VM instance %s to attach the image "
|
||||
"disk") % instance.name)
|
||||
|
||||
_attach_vmdk_to_the_vm()
|
||||
|
||||
def _power_on_vm():
|
||||
"""Power on the VM."""
|
||||
LOG.debug(_("Powering on the VM instance %s") % instance.name)
|
||||
# Power On the VM
|
||||
power_on_task = self._session._call_method(
|
||||
self._session._get_vim(),
|
||||
"PowerOnVM_Task", vm_ref)
|
||||
self._session._wait_for_task(instance.id, power_on_task)
|
||||
LOG.debug(_("Powered on the VM instance %s") % instance.name)
|
||||
_power_on_vm()
|
||||
|
||||
def snapshot(self, instance, snapshot_name):
|
||||
"""
|
||||
Create snapshot from a running VM instance.
|
||||
Steps followed are:
|
||||
1. Get the name of the vmdk file which the VM points to right now.
|
||||
Can be a chain of snapshots, so we need to know the last in the
|
||||
chain.
|
||||
2. Create the snapshot. A new vmdk is created which the VM points to
|
||||
now. The earlier vmdk becomes read-only.
|
||||
3. Call CopyVirtualDisk which coalesces the disk chain to form a single
|
||||
vmdk, or rather a .vmdk metadata file and a -flat.vmdk disk data file.
|
||||
4. Now upload the -flat.vmdk file to the image store.
|
||||
5. Delete the coalesced .vmdk and -flat.vmdk created.
|
||||
"""
|
||||
vm_ref = self._get_vm_ref_from_the_name(instance.name)
|
||||
if vm_ref is None:
|
||||
raise exception.NotFound(_("instance - %s not present") %
|
||||
instance.name)
|
||||
|
||||
client_factory = self._session._get_vim().client.factory
|
||||
service_content = self._session._get_vim().get_service_content()
|
||||
|
||||
def _get_vm_and_vmdk_attribs():
|
||||
# Get the vmdk file name that the VM is pointing to
|
||||
hardware_devices = self._session._call_method(vim_util,
|
||||
"get_dynamic_property", vm_ref,
|
||||
"VirtualMachine", "config.hardware.device")
|
||||
vmdk_file_path_before_snapshot, adapter_type = \
|
||||
vm_util.get_vmdk_file_path_and_adapter_type(client_factory,
|
||||
hardware_devices)
|
||||
datastore_name = vm_util.split_datastore_path(
|
||||
vmdk_file_path_before_snapshot)[0]
|
||||
os_type = self._session._call_method(vim_util,
|
||||
"get_dynamic_property", vm_ref,
|
||||
"VirtualMachine", "summary.config.guestId")
|
||||
return (vmdk_file_path_before_snapshot, adapter_type,
|
||||
datastore_name, os_type)
|
||||
|
||||
vmdk_file_path_before_snapshot, adapter_type, datastore_name,\
|
||||
os_type = _get_vm_and_vmdk_attribs()
|
||||
|
||||
def _create_vm_snapshot():
|
||||
# Create a snapshot of the VM
|
||||
LOG.debug(_("Creating Snapshot of the VM instance %s ") %
|
||||
instance.name)
|
||||
snapshot_task = self._session._call_method(
|
||||
self._session._get_vim(),
|
||||
"CreateSnapshot_Task", vm_ref,
|
||||
name="%s-snapshot" % instance.name,
|
||||
description="Taking Snapshot of the VM",
|
||||
memory=True,
|
||||
quiesce=True)
|
||||
self._session._wait_for_task(instance.id, snapshot_task)
|
||||
LOG.debug(_("Created Snapshot of the VM instance %s ") %
|
||||
instance.name)
|
||||
|
||||
_create_vm_snapshot()
|
||||
|
||||
def _check_if_tmp_folder_exists():
|
||||
# Copy the contents of the VM that were there just before the
|
||||
# snapshot was taken
|
||||
ds_ref_ret = vim_util.get_dynamic_property(
|
||||
self._session._get_vim(),
|
||||
vm_ref,
|
||||
"VirtualMachine",
|
||||
"datastore")
|
||||
if not ds_ref_ret:
|
||||
raise exception.NotFound(_("Failed to get the datastore "
|
||||
"reference(s) which the VM uses"))
|
||||
ds_ref = ds_ref_ret.ManagedObjectReference[0]
|
||||
ds_browser = vim_util.get_dynamic_property(
|
||||
self._session._get_vim(),
|
||||
ds_ref,
|
||||
"Datastore",
|
||||
"browser")
|
||||
# Check if the vmware-tmp folder exists or not. If not, create one
|
||||
tmp_folder_path = vm_util.build_datastore_path(datastore_name,
|
||||
"vmware-tmp")
|
||||
if not self._path_exists(ds_browser, tmp_folder_path):
|
||||
self._mkdir(vm_util.build_datastore_path(datastore_name,
|
||||
"vmware-tmp"))
|
||||
|
||||
_check_if_tmp_folder_exists()
|
||||
|
||||
# Generate a random vmdk file name to which the coalesced vmdk content
|
||||
# will be copied to. A random name is chosen so that we don't have
|
||||
# name clashes.
|
||||
random_name = str(uuid.uuid4())
|
||||
dest_vmdk_file_location = vm_util.build_datastore_path(datastore_name,
|
||||
"vmware-tmp/%s.vmdk" % random_name)
|
||||
dc_ref = self._get_datacenter_name_and_ref()[0]
|
||||
|
||||
def _copy_vmdk_content():
|
||||
# Copy the contents of the disk ( or disks, if there were snapshots
|
||||
# done earlier) to a temporary vmdk file.
|
||||
copy_spec = vm_util.get_copy_virtual_disk_spec(client_factory,
|
||||
adapter_type)
|
||||
LOG.debug(_("Copying disk data before snapshot of the VM "
|
||||
" instance %s") % instance.name)
|
||||
copy_disk_task = self._session._call_method(
|
||||
self._session._get_vim(),
|
||||
"CopyVirtualDisk_Task",
|
||||
service_content.virtualDiskManager,
|
||||
sourceName=vmdk_file_path_before_snapshot,
|
||||
sourceDatacenter=dc_ref,
|
||||
destName=dest_vmdk_file_location,
|
||||
destDatacenter=dc_ref,
|
||||
destSpec=copy_spec,
|
||||
force=False)
|
||||
self._session._wait_for_task(instance.id, copy_disk_task)
|
||||
LOG.debug(_("Copied disk data before snapshot of the VM "
|
||||
"instance %s") % instance.name)
|
||||
|
||||
_copy_vmdk_content()
|
||||
|
||||
cookies = self._session._get_vim().client.options.transport.cookiejar
|
||||
|
||||
def _upload_vmdk_to_image_repository():
|
||||
# Upload the contents of -flat.vmdk file which has the disk data.
|
||||
LOG.debug(_("Uploading image %s") % snapshot_name)
|
||||
vmware_images.upload_image(
|
||||
snapshot_name,
|
||||
instance,
|
||||
os_type=os_type,
|
||||
adapter_type=adapter_type,
|
||||
image_version=1,
|
||||
host=self._session._host_ip,
|
||||
data_center_name=self._get_datacenter_name_and_ref()[1],
|
||||
datastore_name=datastore_name,
|
||||
cookies=cookies,
|
||||
file_path="vmware-tmp/%s-flat.vmdk" % random_name)
|
||||
LOG.debug(_("Uploaded image %s") % snapshot_name)
|
||||
|
||||
_upload_vmdk_to_image_repository()
|
||||
|
||||
def _clean_temp_data():
|
||||
"""
|
||||
Delete temporary vmdk files generated in image handling
|
||||
operations.
|
||||
"""
|
||||
# Delete the temporary vmdk created above.
|
||||
LOG.debug(_("Deleting temporary vmdk file %s")
|
||||
% dest_vmdk_file_location)
|
||||
remove_disk_task = self._session._call_method(
|
||||
self._session._get_vim(),
|
||||
"DeleteVirtualDisk_Task",
|
||||
service_content.virtualDiskManager,
|
||||
name=dest_vmdk_file_location,
|
||||
datacenter=dc_ref)
|
||||
self._session._wait_for_task(instance.id, remove_disk_task)
|
||||
LOG.debug(_("Deleted temporary vmdk file %s")
|
||||
% dest_vmdk_file_location)
|
||||
|
||||
_clean_temp_data()
|
||||
|
||||
def reboot(self, instance):
|
||||
"""Reboot a VM instance."""
|
||||
vm_ref = self._get_vm_ref_from_the_name(instance.name)
|
||||
if vm_ref is None:
|
||||
raise exception.NotFound(_("instance - %s not present") %
|
||||
instance.name)
|
||||
lst_properties = ["summary.guest.toolsStatus", "runtime.powerState",
|
||||
"summary.guest.toolsRunningStatus"]
|
||||
props = self._session._call_method(vim_util, "get_object_properties",
|
||||
None, vm_ref, "VirtualMachine",
|
||||
lst_properties)
|
||||
pwr_state = None
|
||||
tools_status = None
|
||||
tools_running_status = False
|
||||
for elem in props:
|
||||
for prop in elem.propSet:
|
||||
if prop.name == "runtime.powerState":
|
||||
pwr_state = prop.val
|
||||
elif prop.name == "summary.guest.toolsStatus":
|
||||
tools_status = prop.val
|
||||
elif prop.name == "summary.guest.toolsRunningStatus":
|
||||
tools_running_status = prop.val
|
||||
|
||||
# Raise an exception if the VM is not powered On.
|
||||
if pwr_state not in ["poweredOn"]:
|
||||
raise exception.Invalid(_("instance - %s not poweredOn. So can't "
|
||||
"be rebooted.") % instance.name)
|
||||
|
||||
# If latest vmware tools are installed in the VM, and that the tools
|
||||
# are running, then only do a guest reboot. Otherwise do a hard reset.
|
||||
if (tools_status == "toolsOk" and
|
||||
tools_running_status == "guestToolsRunning"):
|
||||
LOG.debug(_("Rebooting guest OS of VM %s") % instance.name)
|
||||
self._session._call_method(self._session._get_vim(), "RebootGuest",
|
||||
vm_ref)
|
||||
LOG.debug(_("Rebooted guest OS of VM %s") % instance.name)
|
||||
else:
|
||||
LOG.debug(_("Doing hard reboot of VM %s") % instance.name)
|
||||
reset_task = self._session._call_method(self._session._get_vim(),
|
||||
"ResetVM_Task", vm_ref)
|
||||
self._session._wait_for_task(instance.id, reset_task)
|
||||
LOG.debug(_("Did hard reboot of VM %s") % instance.name)
|
||||
|
||||
def destroy(self, instance):
|
||||
"""
|
||||
Destroy a VM instance. Steps followed are:
|
||||
1. Power off the VM, if it is in poweredOn state.
|
||||
2. Un-register a VM.
|
||||
3. Delete the contents of the folder holding the VM related data.
|
||||
"""
|
||||
try:
|
||||
vm_ref = self._get_vm_ref_from_the_name(instance.name)
|
||||
if vm_ref is None:
|
||||
LOG.debug(_("instance - %s not present") % instance.name)
|
||||
return
|
||||
lst_properties = ["config.files.vmPathName", "runtime.powerState"]
|
||||
props = self._session._call_method(vim_util,
|
||||
"get_object_properties",
|
||||
None, vm_ref, "VirtualMachine", lst_properties)
|
||||
pwr_state = None
|
||||
for elem in props:
|
||||
vm_config_pathname = None
|
||||
for prop in elem.propSet:
|
||||
if prop.name == "runtime.powerState":
|
||||
pwr_state = prop.val
|
||||
elif prop.name == "config.files.vmPathName":
|
||||
vm_config_pathname = prop.val
|
||||
if vm_config_pathname:
|
||||
datastore_name, vmx_file_path = \
|
||||
vm_util.split_datastore_path(vm_config_pathname)
|
||||
# Power off the VM if it is in PoweredOn state.
|
||||
if pwr_state == "poweredOn":
|
||||
LOG.debug(_("Powering off the VM %s") % instance.name)
|
||||
poweroff_task = self._session._call_method(
|
||||
self._session._get_vim(),
|
||||
"PowerOffVM_Task", vm_ref)
|
||||
self._session._wait_for_task(instance.id, poweroff_task)
|
||||
LOG.debug(_("Powered off the VM %s") % instance.name)
|
||||
|
||||
# Un-register the VM
|
||||
try:
|
||||
LOG.debug(_("Unregistering the VM %s") % instance.name)
|
||||
self._session._call_method(self._session._get_vim(),
|
||||
"UnregisterVM", vm_ref)
|
||||
LOG.debug(_("Unregistered the VM %s") % instance.name)
|
||||
except Exception, excep:
|
||||
LOG.warn(_("In vmwareapi:vmops:destroy, got this exception"
|
||||
" while un-registering the VM: %s") % str(excep))
|
||||
|
||||
# Delete the folder holding the VM related content on
|
||||
# the datastore.
|
||||
try:
|
||||
dir_ds_compliant_path = vm_util.build_datastore_path(
|
||||
datastore_name,
|
||||
os.path.dirname(vmx_file_path))
|
||||
LOG.debug(_("Deleting contents of the VM %(name)s from "
|
||||
"datastore %(datastore_name)s") %
|
||||
({'name': instance.name,
|
||||
'datastore_name': datastore_name}))
|
||||
delete_task = self._session._call_method(
|
||||
self._session._get_vim(),
|
||||
"DeleteDatastoreFile_Task",
|
||||
self._session._get_vim().get_service_content().fileManager,
|
||||
name=dir_ds_compliant_path)
|
||||
self._session._wait_for_task(instance.id, delete_task)
|
||||
LOG.debug(_("Deleted contents of the VM %(name)s from "
|
||||
"datastore %(datastore_name)s") %
|
||||
({'name': instance.name,
|
||||
'datastore_name': datastore_name}))
|
||||
except Exception, excep:
|
||||
LOG.warn(_("In vmwareapi:vmops:destroy, "
|
||||
"got this exception while deleting"
|
||||
" the VM contents from the disk: %s")
|
||||
% str(excep))
|
||||
except Exception, exc:
|
||||
LOG.exception(exc)
|
||||
|
||||
def pause(self, instance, callback):
|
||||
"""Pause a VM instance."""
|
||||
raise exception.APIError("pause not supported for vmwareapi")
|
||||
|
||||
def unpause(self, instance, callback):
|
||||
"""Un-Pause a VM instance."""
|
||||
raise exception.APIError("unpause not supported for vmwareapi")
|
||||
|
||||
def suspend(self, instance, callback):
|
||||
"""Suspend the specified instance."""
|
||||
vm_ref = self._get_vm_ref_from_the_name(instance.name)
|
||||
if vm_ref is None:
|
||||
raise exception.NotFound(_("instance - %s not present") %
|
||||
instance.name)
|
||||
|
||||
pwr_state = self._session._call_method(vim_util,
|
||||
"get_dynamic_property", vm_ref,
|
||||
"VirtualMachine", "runtime.powerState")
|
||||
# Only PoweredOn VMs can be suspended.
|
||||
if pwr_state == "poweredOn":
|
||||
LOG.debug(_("Suspending the VM %s ") % instance.name)
|
||||
suspend_task = self._session._call_method(self._session._get_vim(),
|
||||
"SuspendVM_Task", vm_ref)
|
||||
self._wait_with_callback(instance.id, suspend_task, callback)
|
||||
LOG.debug(_("Suspended the VM %s ") % instance.name)
|
||||
# Raise Exception if VM is poweredOff
|
||||
elif pwr_state == "poweredOff":
|
||||
raise exception.Invalid(_("instance - %s is poweredOff and hence "
|
||||
" can't be suspended.") % instance.name)
|
||||
LOG.debug(_("VM %s was already in suspended state. So returning "
|
||||
"without doing anything") % instance.name)
|
||||
|
||||
def resume(self, instance, callback):
|
||||
"""Resume the specified instance."""
|
||||
vm_ref = self._get_vm_ref_from_the_name(instance.name)
|
||||
if vm_ref is None:
|
||||
raise exception.NotFound(_("instance - %s not present") %
|
||||
instance.name)
|
||||
|
||||
pwr_state = self._session._call_method(vim_util,
|
||||
"get_dynamic_property", vm_ref,
|
||||
"VirtualMachine", "runtime.powerState")
|
||||
if pwr_state.lower() == "suspended":
|
||||
LOG.debug(_("Resuming the VM %s") % instance.name)
|
||||
suspend_task = self._session._call_method(
|
||||
self._session._get_vim(),
|
||||
"PowerOnVM_Task", vm_ref)
|
||||
self._wait_with_callback(instance.id, suspend_task, callback)
|
||||
LOG.debug(_("Resumed the VM %s ") % instance.name)
|
||||
else:
|
||||
raise exception.Invalid(_("instance - %s not in Suspended state "
|
||||
"and hence can't be Resumed.") % instance.name)
|
||||
|
||||
def get_info(self, instance_name):
|
||||
"""Return data about the VM instance."""
|
||||
vm_ref = self._get_vm_ref_from_the_name(instance_name)
|
||||
if vm_ref is None:
|
||||
raise exception.NotFound(_("instance - %s not present") %
|
||||
instance_name)
|
||||
|
||||
lst_properties = ["summary.config.numCpu",
|
||||
"summary.config.memorySizeMB",
|
||||
"runtime.powerState"]
|
||||
vm_props = self._session._call_method(vim_util,
|
||||
"get_object_properties", None, vm_ref, "VirtualMachine",
|
||||
lst_properties)
|
||||
max_mem = None
|
||||
pwr_state = None
|
||||
num_cpu = None
|
||||
for elem in vm_props:
|
||||
for prop in elem.propSet:
|
||||
if prop.name == "summary.config.numCpu":
|
||||
num_cpu = int(prop.val)
|
||||
elif prop.name == "summary.config.memorySizeMB":
|
||||
# In MB, but we want in KB
|
||||
max_mem = int(prop.val) * 1024
|
||||
elif prop.name == "runtime.powerState":
|
||||
pwr_state = VMWARE_POWER_STATES[prop.val]
|
||||
|
||||
return {'state': pwr_state,
|
||||
'max_mem': max_mem,
|
||||
'mem': max_mem,
|
||||
'num_cpu': num_cpu,
|
||||
'cpu_time': 0}
|
||||
|
||||
def get_diagnostics(self, instance):
|
||||
"""Return data about VM diagnostics."""
|
||||
raise exception.APIError("get_diagnostics not implemented for "
|
||||
"vmwareapi")
|
||||
|
||||
def get_console_output(self, instance):
|
||||
"""Return snapshot of console."""
|
||||
vm_ref = self._get_vm_ref_from_the_name(instance.name)
|
||||
if vm_ref is None:
|
||||
raise exception.NotFound(_("instance - %s not present") %
|
||||
instance.name)
|
||||
param_list = {"id": str(vm_ref)}
|
||||
base_url = "%s://%s/screen?%s" % (self._session._scheme,
|
||||
self._session._host_ip,
|
||||
urllib.urlencode(param_list))
|
||||
request = urllib2.Request(base_url)
|
||||
base64string = base64.encodestring(
|
||||
'%s:%s' % (
|
||||
self._session._host_username,
|
||||
self._session._host_password)).replace('\n', '')
|
||||
request.add_header("Authorization", "Basic %s" % base64string)
|
||||
result = urllib2.urlopen(request)
|
||||
if result.code == 200:
|
||||
return result.read()
|
||||
else:
|
||||
return ""
|
||||
|
||||
def get_ajax_console(self, instance):
|
||||
"""Return link to instance's ajax console."""
|
||||
return 'http://fakeajaxconsole/fake_url'
|
||||
|
||||
def _set_machine_id(self, client_factory, instance):
|
||||
"""
|
||||
Set the machine id of the VM for guest tools to pick up and change
|
||||
the IP.
|
||||
"""
|
||||
vm_ref = self._get_vm_ref_from_the_name(instance.name)
|
||||
if vm_ref is None:
|
||||
raise exception.NotFound(_("instance - %s not present") %
|
||||
instance.name)
|
||||
network = db.network_get_by_instance(context.get_admin_context(),
|
||||
instance['id'])
|
||||
mac_addr = instance.mac_address
|
||||
net_mask = network["netmask"]
|
||||
gateway = network["gateway"]
|
||||
ip_addr = db.instance_get_fixed_address(context.get_admin_context(),
|
||||
instance['id'])
|
||||
machine_id_change_spec = \
|
||||
vm_util.get_machine_id_change_spec(client_factory, mac_addr,
|
||||
ip_addr, net_mask, gateway)
|
||||
LOG.debug(_("Reconfiguring VM instance %(name)s to set the machine id "
|
||||
"with ip - %(ip_addr)s") %
|
||||
({'name': instance.name,
|
||||
'ip_addr': ip_addr}))
|
||||
reconfig_task = self._session._call_method(self._session._get_vim(),
|
||||
"ReconfigVM_Task", vm_ref,
|
||||
spec=machine_id_change_spec)
|
||||
self._session._wait_for_task(instance.id, reconfig_task)
|
||||
LOG.debug(_("Reconfigured VM instance %(name)s to set the machine id "
|
||||
"with ip - %(ip_addr)s") %
|
||||
({'name': instance.name,
|
||||
'ip_addr': ip_addr}))
|
||||
|
||||
def _get_datacenter_name_and_ref(self):
|
||||
"""Get the datacenter name and the reference."""
|
||||
dc_obj = self._session._call_method(vim_util, "get_objects",
|
||||
"Datacenter", ["name"])
|
||||
return dc_obj[0].obj, dc_obj[0].propSet[0].val
|
||||
|
||||
def _path_exists(self, ds_browser, ds_path):
|
||||
"""Check if the path exists on the datastore."""
|
||||
search_task = self._session._call_method(self._session._get_vim(),
|
||||
"SearchDatastore_Task",
|
||||
ds_browser,
|
||||
datastorePath=ds_path)
|
||||
# Wait till the state changes from queued or running.
|
||||
# If an error state is returned, it means that the path doesn't exist.
|
||||
while True:
|
||||
task_info = self._session._call_method(vim_util,
|
||||
"get_dynamic_property",
|
||||
search_task, "Task", "info")
|
||||
if task_info.state in ['queued', 'running']:
|
||||
time.sleep(2)
|
||||
continue
|
||||
break
|
||||
if task_info.state == "error":
|
||||
return False
|
||||
return True
|
||||
|
||||
def _mkdir(self, ds_path):
|
||||
"""
|
||||
Creates a directory at the path specified. If it is just "NAME",
|
||||
then a directory with this name is created at the topmost level of the
|
||||
DataStore.
|
||||
"""
|
||||
LOG.debug(_("Creating directory with path %s") % ds_path)
|
||||
self._session._call_method(self._session._get_vim(), "MakeDirectory",
|
||||
self._session._get_vim().get_service_content().fileManager,
|
||||
name=ds_path, createParentDirectories=False)
|
||||
LOG.debug(_("Created directory with path %s") % ds_path)
|
||||
|
||||
def _get_vm_ref_from_the_name(self, vm_name):
|
||||
"""Get reference to the VM with the name specified."""
|
||||
vms = self._session._call_method(vim_util, "get_objects",
|
||||
"VirtualMachine", ["name"])
|
||||
for vm in vms:
|
||||
if vm.propSet[0].val == vm_name:
|
||||
return vm.obj
|
||||
return None
|
201
nova/virt/vmwareapi/vmware_images.py
Normal file
@ -0,0 +1,201 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
"""
|
||||
Utility functions for Image transfer.
|
||||
"""
|
||||
|
||||
from glance import client
|
||||
|
||||
from nova import exception
|
||||
from nova import flags
|
||||
from nova import log as logging
|
||||
from nova.virt.vmwareapi import io_util
|
||||
from nova.virt.vmwareapi import read_write_util
|
||||
|
||||
LOG = logging.getLogger("nova.virt.vmwareapi.vmware_images")
|
||||
|
||||
FLAGS = flags.FLAGS
|
||||
|
||||
QUEUE_BUFFER_SIZE = 10
|
||||
|
||||
|
||||
def start_transfer(read_file_handle, data_size, write_file_handle=None,
|
||||
glance_client=None, image_id=None, image_meta={}):
|
||||
"""Start the data transfer from the reader to the writer.
|
||||
The reader writes to the pipe and the writer reads from it, so the total
transfer time is bounded by the slower of the read and the write, not by
their sum."""
|
||||
# The pipe that acts as an intermediate store of data for reader to write
|
||||
# to and writer to grab from.
|
||||
thread_safe_pipe = io_util.ThreadSafePipe(QUEUE_BUFFER_SIZE, data_size)
|
||||
# The read thread. In case of glance it is the instance of the
|
||||
# GlanceFileRead class. The glance client read returns an iterator
|
||||
# and this class wraps that iterator to provide data chunks in calls
|
||||
# to read.
|
||||
read_thread = io_util.IOThread(read_file_handle, thread_safe_pipe)
|
||||
|
||||
# In case of Glance - VMWare transfer, we just need a handle to the
|
||||
# HTTP Connection that is to send transfer data to the VMWare datastore.
|
||||
if write_file_handle:
|
||||
write_thread = io_util.IOThread(thread_safe_pipe, write_file_handle)
|
||||
# In case of VMWare - Glance transfer, we relinquish VMWare HTTP file read
|
||||
# handle to Glance Client instance, but to be sure of the transfer we need
|
||||
# to be sure of the status of the image on glance changing to active.
|
||||
# The GlanceWriteThread handles the same for us.
|
||||
elif glance_client and image_id:
|
||||
write_thread = io_util.GlanceWriteThread(thread_safe_pipe,
|
||||
glance_client, image_id, image_meta)
|
||||
# Start the read and write threads.
|
||||
read_event = read_thread.start()
|
||||
write_event = write_thread.start()
|
||||
try:
|
||||
# Wait on the read and write events to signal their end
|
||||
read_event.wait()
|
||||
write_event.wait()
|
||||
except Exception, exc:
|
||||
# In case of any of the reads or writes raising an exception,
|
||||
# stop the threads so that we don't unnecessarily keep the other one
|
||||
# waiting.
|
||||
read_thread.stop()
|
||||
write_thread.stop()
|
||||
|
||||
# Log and raise the exception.
|
||||
LOG.exception(exc)
|
||||
raise exception.Error(exc)
|
||||
finally:
|
||||
# No matter what, try closing the read and write handles, if it so
|
||||
# applies.
|
||||
read_file_handle.close()
|
||||
if write_file_handle:
|
||||
write_file_handle.close()
|
||||
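# Illustrative usage sketch (assumed handles): for a Glance -> ESX transfer
# the caller pairs a GlanceFileRead reader with a VMWareHTTPWriteFile writer,
# as _get_glance_image below does:
#
#     read_handle = read_write_util.GlanceFileRead(read_iter)
#     write_handle = read_write_util.VMWareHTTPWriteFile(
#         host, data_center_name, datastore_name, cookies, file_path,
#         file_size)
#     start_transfer(read_handle, file_size, write_file_handle=write_handle)
#
# For the reverse (ESX -> Glance) direction, the writer side is a glance
# client plus image id instead, as in _put_glance_image below.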
|
||||
|
||||
def fetch_image(image, instance, **kwargs):
|
||||
"""Fetch an image for attaching to the newly created VM."""
|
||||
# Depending upon the image service, make appropriate image service call
|
||||
if FLAGS.image_service == "nova.image.glance.GlanceImageService":
|
||||
func = _get_glance_image
|
||||
elif FLAGS.image_service == "nova.image.s3.S3ImageService":
|
||||
func = _get_s3_image
|
||||
elif FLAGS.image_service == "nova.image.local.LocalImageService":
|
||||
func = _get_local_image
|
||||
else:
|
||||
raise NotImplementedError(_("The Image Service %s is not implemented")
|
||||
% FLAGS.image_service)
|
||||
return func(image, instance, **kwargs)
|
||||
|
||||
|
||||
def upload_image(image, instance, **kwargs):
|
||||
"""Upload the newly snapshotted VM disk file."""
|
||||
# Depending upon the image service, make appropriate image service call
|
||||
if FLAGS.image_service == "nova.image.glance.GlanceImageService":
|
||||
func = _put_glance_image
|
||||
elif FLAGS.image_service == "nova.image.s3.S3ImageService":
|
||||
func = _put_s3_image
|
||||
elif FLAGS.image_service == "nova.image.local.LocalImageService":
|
||||
func = _put_local_image
|
||||
else:
|
||||
raise NotImplementedError(_("The Image Service %s is not implemented")
|
||||
% FLAGS.image_service)
|
||||
return func(image, instance, **kwargs)
|
||||
|
||||
|
||||
def _get_glance_image(image, instance, **kwargs):
|
||||
"""Download image from the glance image server."""
|
||||
LOG.debug(_("Downloading image %s from glance image server") % image)
|
||||
glance_client = client.Client(FLAGS.glance_host, FLAGS.glance_port)
|
||||
metadata, read_iter = glance_client.get_image(image)
|
||||
read_file_handle = read_write_util.GlanceFileRead(read_iter)
|
||||
file_size = int(metadata['size'])
|
||||
write_file_handle = read_write_util.VMWareHTTPWriteFile(
|
||||
kwargs.get("host"),
|
||||
kwargs.get("data_center_name"),
|
||||
kwargs.get("datastore_name"),
|
||||
kwargs.get("cookies"),
|
||||
kwargs.get("file_path"),
|
||||
file_size)
|
||||
start_transfer(read_file_handle, file_size,
|
||||
write_file_handle=write_file_handle)
|
||||
LOG.debug(_("Downloaded image %s from glance image server") % image)
|
||||
|
||||
|
||||
def _get_s3_image(image, instance, **kwargs):
|
||||
"""Download image from the S3 image server."""
|
||||
raise NotImplementedError
|
||||
|
||||
|
||||
def _get_local_image(image, instance, **kwargs):
|
||||
"""Download image from the local nova compute node."""
|
||||
raise NotImplementedError
|
||||
|
||||
|
||||
def _put_glance_image(image, instance, **kwargs):
|
||||
"""Upload the snapshotted vm disk file to Glance image server."""
|
||||
LOG.debug(_("Uploading image %s to the Glance image server") % image)
|
||||
read_file_handle = read_write_util.VmWareHTTPReadFile(
|
||||
kwargs.get("host"),
|
||||
kwargs.get("data_center_name"),
|
||||
kwargs.get("datastore_name"),
|
||||
kwargs.get("cookies"),
|
||||
kwargs.get("file_path"))
|
||||
file_size = read_file_handle.get_size()
|
||||
glance_client = client.Client(FLAGS.glance_host, FLAGS.glance_port)
|
||||
# The properties and other fields that we need to set for the image.
|
||||
image_metadata = {"is_public": True,
|
||||
"disk_format": "vmdk",
|
||||
"container_format": "bare",
|
||||
"type": "vmdk",
|
||||
"properties": {"vmware_adaptertype":
|
||||
kwargs.get("adapter_type"),
|
||||
"vmware_ostype": kwargs.get("os_type"),
|
||||
"vmware_image_version":
|
||||
kwargs.get("image_version")}}
|
||||
start_transfer(read_file_handle, file_size, glance_client=glance_client,
|
||||
image_id=image, image_meta=image_metadata)
|
||||
LOG.debug(_("Uploaded image %s to the Glance image server") % image)
|
||||
|
||||
|
||||
def _put_local_image(image, instance, **kwargs):
|
||||
"""Upload the snapshotted vm disk file to the local nova compute node."""
|
||||
raise NotImplementedError
|
||||
|
||||
|
||||
def _put_s3_image(image, instance, **kwargs):
|
||||
"""Upload the snapshotted vm disk file to S3 image server."""
|
||||
raise NotImplementedError
|
||||
|
||||
|
||||
def get_vmdk_size_and_properties(image, instance):
|
||||
"""
|
||||
Get size of the vmdk file that is to be downloaded for attach in spawn.
|
||||
Need this to create the dummy virtual disk for the meta-data file. The
|
||||
geometry of the disk created depends on the size.
|
||||
"""
|
||||
|
||||
LOG.debug(_("Getting image size for the image %s") % image)
|
||||
if FLAGS.image_service == "nova.image.glance.GlanceImageService":
|
||||
glance_client = client.Client(FLAGS.glance_host,
|
||||
FLAGS.glance_port)
|
||||
meta_data = glance_client.get_image_meta(image)
|
||||
size, properties = meta_data["size"], meta_data["properties"]
|
||||
elif FLAGS.image_service == "nova.image.s3.S3ImageService":
|
||||
raise NotImplementedError
|
||||
elif FLAGS.image_service == "nova.image.local.LocalImageService":
|
||||
raise NotImplementedError
|
||||
LOG.debug(_("Got image size of %(size)s for the image %(image)s") %
|
||||
locals())
|
||||
return size, properties
|
375
nova/virt/vmwareapi_conn.py
Normal file
@ -0,0 +1,375 @@
|
||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
||||
|
||||
# Copyright (c) 2011 Citrix Systems, Inc.
|
||||
# Copyright 2011 OpenStack LLC.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""
|
||||
A connection to the VMware ESX platform.
|
||||
|
||||
**Related Flags**
|
||||
|
||||
:vmwareapi_host_ip: IPAddress of VMware ESX server.
|
||||
:vmwareapi_host_username: Username for connection to VMware ESX Server.
|
||||
:vmwareapi_host_password: Password for connection to VMware ESX Server.
|
||||
:vmwareapi_task_poll_interval: The interval (seconds) used for polling of
|
||||
remote tasks
|
||||
(default: 5.0).
|
||||
:vmwareapi_api_retry_count: The API retry count in case of failure such as
|
||||
network failures (socket errors etc.)
|
||||
(default: 10).
|
||||
|
||||
"""
|
||||
|
||||
import time
|
||||
|
||||
from eventlet import event
|
||||
|
||||
from nova import context
|
||||
from nova import db
|
||||
from nova import exception
|
||||
from nova import flags
|
||||
from nova import log as logging
|
||||
from nova import utils
|
||||
from nova.virt.vmwareapi import error_util
|
||||
from nova.virt.vmwareapi import vim
|
||||
from nova.virt.vmwareapi import vim_util
|
||||
from nova.virt.vmwareapi.vmops import VMWareVMOps
|
||||
|
||||
LOG = logging.getLogger("nova.virt.vmwareapi_conn")
|
||||
|
||||
FLAGS = flags.FLAGS
|
||||
flags.DEFINE_string('vmwareapi_host_ip',
|
||||
None,
|
||||
'URL for connection to VMWare ESX host.'
|
||||
'Required if connection_type is vmwareapi.')
|
||||
flags.DEFINE_string('vmwareapi_host_username',
|
||||
None,
|
||||
'Username for connection to VMWare ESX host.'
|
||||
'Used only if connection_type is vmwareapi.')
|
||||
flags.DEFINE_string('vmwareapi_host_password',
|
||||
None,
|
||||
'Password for connection to VMWare ESX host.'
|
||||
'Used only if connection_type is vmwareapi.')
|
||||
flags.DEFINE_float('vmwareapi_task_poll_interval',
|
||||
5.0,
|
||||
'The interval used for polling of remote tasks '
|
||||
'Used only if connection_type is vmwareapi')
|
||||
flags.DEFINE_float('vmwareapi_api_retry_count',
|
||||
10,
|
||||
'The number of times we retry on failures, '
|
||||
'e.g., socket error, etc.'
|
||||
'Used only if connection_type is vmwareapi')
|
||||
flags.DEFINE_string('vmwareapi_vlan_interface',
|
||||
'vmnic0',
|
||||
'Physical ethernet adapter name for vlan networking')
|
||||
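# Illustrative configuration sketch (placeholder values): to enable this
# driver, the flags above would be set in the nova flagfile along the lines
# of:
#
#     --connection_type=vmwareapi
#     --vmwareapi_host_ip=<ESX host IP>
#     --vmwareapi_host_username=<ESX user>
#     --vmwareapi_host_password=<ESX password>
#     --vmwareapi_vlan_interface=vmnic0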
|
||||
TIME_BETWEEN_API_CALL_RETRIES = 2.0
|
||||
|
||||
|
||||
class Failure(Exception):
|
||||
"""Base Exception class for handling task failures."""
|
||||
|
||||
def __init__(self, details):
|
||||
self.details = details
|
||||
|
||||
def __str__(self):
|
||||
return str(self.details)
|
||||
|
||||
|
||||
def get_connection(_):
|
||||
"""Sets up the ESX host connection."""
|
||||
host_ip = FLAGS.vmwareapi_host_ip
|
||||
host_username = FLAGS.vmwareapi_host_username
|
||||
host_password = FLAGS.vmwareapi_host_password
|
||||
api_retry_count = FLAGS.vmwareapi_api_retry_count
|
||||
if not host_ip or host_username is None or host_password is None:
|
||||
raise Exception(_("Must specify vmwareapi_host_ip, "
"vmwareapi_host_username "
"and vmwareapi_host_password to use "
"connection_type=vmwareapi"))
|
||||
return VMWareESXConnection(host_ip, host_username, host_password,
|
||||
api_retry_count)
|
||||
|
||||
|
||||
class VMWareESXConnection(object):
|
||||
"""The ESX host connection object."""
|
||||
|
||||
def __init__(self, host_ip, host_username, host_password,
|
||||
api_retry_count, scheme="https"):
|
||||
session = VMWareAPISession(host_ip, host_username, host_password,
|
||||
api_retry_count, scheme=scheme)
|
||||
self._vmops = VMWareVMOps(session)
|
||||
|
||||
def init_host(self, host):
|
||||
"""Do the initialization that needs to be done."""
|
||||
# FIXME(sateesh): implement this
|
||||
pass
|
||||
|
||||
def list_instances(self):
|
||||
"""List VM instances."""
|
||||
return self._vmops.list_instances()
|
||||
|
||||
def spawn(self, instance):
|
||||
"""Create VM instance."""
|
||||
self._vmops.spawn(instance)
|
||||
|
||||
def snapshot(self, instance, name):
|
||||
"""Create snapshot from a running VM instance."""
|
||||
self._vmops.snapshot(instance, name)
|
||||
|
||||
def reboot(self, instance):
|
||||
"""Reboot VM instance."""
|
||||
self._vmops.reboot(instance)
|
||||
|
||||
def destroy(self, instance):
|
||||
"""Destroy VM instance."""
|
||||
self._vmops.destroy(instance)
|
||||
|
||||
def pause(self, instance, callback):
|
||||
"""Pause VM instance."""
|
||||
self._vmops.pause(instance, callback)
|
||||
|
||||
def unpause(self, instance, callback):
|
||||
"""Unpause paused VM instance."""
|
||||
self._vmops.unpause(instance, callback)
|
||||
|
||||
def suspend(self, instance, callback):
|
||||
"""Suspend the specified instance."""
|
||||
self._vmops.suspend(instance, callback)
|
||||
|
||||
def resume(self, instance, callback):
|
||||
"""Resume the suspended VM instance."""
|
||||
self._vmops.resume(instance, callback)
|
||||
|
||||
def get_info(self, instance_id):
|
||||
"""Return info about the VM instance."""
|
||||
return self._vmops.get_info(instance_id)
|
||||
|
||||
def get_diagnostics(self, instance):
|
||||
"""Return data about VM diagnostics."""
|
||||
return self._vmops.get_diagnostics(instance)
|
||||
|
||||
def get_console_output(self, instance):
|
||||
"""Return snapshot of console."""
|
||||
return self._vmops.get_console_output(instance)
|
||||
|
||||
def get_ajax_console(self, instance):
|
||||
"""Return link to instance's ajax console."""
|
||||
return self._vmops.get_ajax_console(instance)
|
||||
|
||||
def attach_volume(self, instance_name, device_path, mountpoint):
|
||||
"""Attach volume storage to VM instance."""
|
||||
pass
|
||||
|
||||
def detach_volume(self, instance_name, mountpoint):
"""Detach volume storage from VM instance."""
|
||||
pass
|
||||
|
||||
def get_console_pool_info(self, console_type):
|
||||
"""Get info about the host on which the VM resides."""
|
||||
return {'address': FLAGS.vmwareapi_host_ip,
|
||||
'username': FLAGS.vmwareapi_host_username,
|
||||
'password': FLAGS.vmwareapi_host_password}
|
||||
|
||||
def update_available_resource(self, ctxt, host):
|
||||
"""This method is supported only by libvirt."""
|
||||
return
|
||||
|
||||
|
||||
class VMWareAPISession(object):
    """
    Sets up a session with the ESX host and handles all
    the calls made to the host.
    """

    def __init__(self, host_ip, host_username, host_password,
                 api_retry_count, scheme="https"):
        self._host_ip = host_ip
        self._host_username = host_username
        self._host_password = host_password
        self.api_retry_count = api_retry_count
        self._scheme = scheme
        self._session_id = None
        self.vim = None
        self._create_session()

    def _get_vim_object(self):
        """Create the VIM Object instance."""
        return vim.Vim(protocol=self._scheme, host=self._host_ip)

    def _create_session(self):
        """Creates a session with the ESX host."""
        while True:
            try:
                # Login and set up the session with the ESX host for making
                # API calls
                self.vim = self._get_vim_object()
                session = self.vim.Login(
                               self.vim.get_service_content().sessionManager,
                               userName=self._host_username,
                               password=self._host_password)
                # Terminate the earlier session, if possible (for the sake of
                # preserving sessions, as there is a limit on the number of
                # sessions we can have)
                if self._session_id:
                    try:
                        self.vim.TerminateSession(
                                self.vim.get_service_content().sessionManager,
                                sessionId=[self._session_id])
                    except Exception, excep:
                        # This exception is something we can live with. It is
                        # just an extra caution on our side. The session may
                        # have been cleared already. We could have made a call
                        # to SessionIsActive, but that is an overhead because
                        # we anyway would have to call TerminateSession.
                        LOG.debug(excep)
                self._session_id = session.key
                return
            except Exception, excep:
                LOG.critical(_("In vmwareapi:_create_session, "
                               "got this exception: %s") % excep)
                raise exception.Error(excep)

    def __del__(self):
        """Logs out the session."""
        # Logout to avoid an unnecessary increase in session count at the
        # ESX host
        try:
            self.vim.Logout(self.vim.get_service_content().sessionManager)
        except Exception, excep:
            # It is just cautionary on our part to do a logout in __del__,
            # to ensure that the session is not left active.
            LOG.debug(excep)

    def _is_vim_object(self, module):
        """Check if the module is a VIM Object instance."""
        return isinstance(module, vim.Vim)

    def _call_method(self, module, method, *args, **kwargs):
        """
        Calls a method within the module specified with
        the args provided.
        """
        args = list(args)
        retry_count = 0
        exc = None
        last_fault_list = []
        while True:
            try:
                if not self._is_vim_object(module):
                    # If it is not the first try, then get the latest
                    # vim object
                    if retry_count > 0:
                        args = args[1:]
                    args = [self.vim] + args
                retry_count += 1
                temp_module = module

                for method_elem in method.split("."):
                    temp_module = getattr(temp_module, method_elem)

                return temp_module(*args, **kwargs)
            except error_util.VimFaultException, excep:
                # If it is a Session Fault Exception, it may point
                # to a session gone bad. So we try re-creating a session
                # and then proceeding ahead with the call.
                exc = excep
                if error_util.FAULT_NOT_AUTHENTICATED in excep.fault_list:
                    # Because an idle session returns an empty
                    # RetrievePropertiesResponse, and the same is returned
                    # when the query genuinely has an empty answer (as in no
                    # VMs on the host), we have no way to differentiate.
                    # So if the previous response was also an empty response
                    # and, after creating a new session, we get the same
                    # empty response, then we are sure the response is
                    # supposed to be empty.
                    if error_util.FAULT_NOT_AUTHENTICATED in last_fault_list:
                        return []
                    last_fault_list = excep.fault_list
                    self._create_session()
                else:
                    # No retrying for errors where the API call has gone
                    # through and the fault is the caller's. The caller
                    # should handle these errors, e.g. an InvalidArgument
                    # fault.
                    break
            except error_util.SessionOverLoadException, excep:
                # For exceptions which may come because of session overload,
                # we retry
                exc = excep
            except Exception, excep:
                # If it is a proper exception, say not having furnished
                # proper data in the SOAP call or the retry limit having
                # been exceeded, we raise the exception
                exc = excep
                break
            # If the retry count has been reached then break and
            # raise the exception
            if retry_count > self.api_retry_count:
                break
            time.sleep(TIME_BETWEEN_API_CALL_RETRIES)

        LOG.critical(_("In vmwareapi:_call_method, "
                       "got this exception: %s") % exc)
        raise

    def _get_vim(self):
        """Gets the VIM object reference."""
        if self.vim is None:
            self._create_session()
        return self.vim

    def _wait_for_task(self, instance_id, task_ref):
        """
        Waits for the given task to complete and returns its result.
        The task is polled until it completes.
        """
        done = event.Event()
        loop = utils.LoopingCall(self._poll_task, instance_id, task_ref,
                                 done)
        loop.start(FLAGS.vmwareapi_task_poll_interval, now=True)
        ret_val = done.wait()
        loop.stop()
        return ret_val

    def _poll_task(self, instance_id, task_ref, done):
        """
        Polls the given task and sends the result on the given event
        once the task completes.
        """
        try:
            task_info = self._call_method(vim_util, "get_dynamic_property",
                            task_ref, "Task", "info")
            task_name = task_info.name
            action = dict(
                instance_id=int(instance_id),
                action=task_name[0:255],
                error=None)
            if task_info.state in ['queued', 'running']:
                return
            elif task_info.state == 'success':
                LOG.debug(_("Task [%(task_name)s] %(task_ref)s "
                            "status: success") % locals())
                done.send("success")
            else:
                error_info = str(task_info.error.localizedMessage)
                action["error"] = error_info
                LOG.warn(_("Task [%(task_name)s] %(task_ref)s "
                           "status: error %(error_info)s") % locals())
                done.send_exception(exception.Error(error_info))
            db.instance_action_create(context.get_admin_context(), action)
        except Exception, excep:
            LOG.warn(_("In vmwareapi:_poll_task, got this error %s") % excep)
            done.send_exception(excep)
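For orientation, here is a minimal sketch of how this session wrapper might be driven. The host address and credentials below are placeholders, task_ref stands for a managed object reference obtained from an earlier API call, and real callers would build the session from the configured vmwareapi flags rather than literals:

    session = VMWareAPISession(host_ip='10.0.0.10',
                               host_username='root',
                               host_password='password',
                               api_retry_count=10)
    # _call_method routes every call through the retry/re-login logic above,
    # so a dropped ESX session is transparently re-created and the call is
    # retried up to api_retry_count times.
    task_info = session._call_method(vim_util, "get_dynamic_property",
                                     task_ref, "Task", "info")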
345
tools/esx/guest_tool.py
Normal file
@ -0,0 +1,345 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright (c) 2011 Citrix Systems, Inc.
# Copyright 2011 OpenStack LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Guest tools for ESX to set up networking inside the guest.
On Windows, pyWin32 must be installed for the Python interpreter in use.
"""

import array
import logging
import os
import platform
import socket
import struct
import subprocess
import sys
import time

# The script runs standalone inside the guest, so pull in gettext's _()
# directly instead of relying on Nova's i18n setup.
from gettext import gettext as _

PLATFORM_WIN = 'win32'
PLATFORM_LINUX = 'linux2'
ARCH_32_BIT = '32bit'
ARCH_64_BIT = '64bit'
NO_MACHINE_ID = 'No machine id'

# Logging
FORMAT = "%(asctime)s - %(levelname)s - %(message)s"
if sys.platform == PLATFORM_WIN:
    LOG_DIR = os.path.join(os.environ.get('ALLUSERSPROFILE'), 'openstack')
elif sys.platform == PLATFORM_LINUX:
    LOG_DIR = '/var/log/openstack'
else:
    LOG_DIR = 'logs'
if not os.path.exists(LOG_DIR):
    os.mkdir(LOG_DIR)
LOG_FILENAME = os.path.join(LOG_DIR, 'openstack-guest-tools.log')
logging.basicConfig(filename=LOG_FILENAME, format=FORMAT)

if sys.hexversion < 0x3000000:
    _byte = ord    # 2.x chr to integer
else:
    _byte = int    # 3.x byte to integer


class ProcessExecutionError(Exception):
    """Process Execution Error Class."""

    def __init__(self, exit_code, stdout, stderr, cmd):
        self.exit_code = exit_code
        self.stdout = stdout
        self.stderr = stderr
        self.cmd = cmd

    def __str__(self):
        return str(self.exit_code)


def _bytes2int(bytes):
    """Convert a byte sequence to an int."""
    intgr = 0
    for byt in bytes:
        intgr = (intgr << 8) + _byte(byt)
    return intgr


def _parse_network_details(machine_id):
    """
    Parse the machine.id field to get MAC, IP, Netmask and Gateway fields.
    machine.id is of the form MAC;IP;Netmask;Gateway;Broadcast;DNS1,DNS2
    where ';' is the separator.
    """
    network_details = []
    if machine_id[1].strip() == "1":
        pass
    else:
        network_info_list = machine_id[0].split(';')
        assert len(network_info_list) % 6 == 0
        no_grps = len(network_info_list) / 6
        i = 0
        while i < no_grps:
            k = i * 6
            network_details.append((
                network_info_list[k].strip().lower(),
                network_info_list[k + 1].strip(),
                network_info_list[k + 2].strip(),
                network_info_list[k + 3].strip(),
                network_info_list[k + 4].strip(),
                network_info_list[k + 5].strip().split(',')))
            i += 1
    return network_details
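To make the format concrete, here is a hypothetical machine.id value for a single NIC (machine_id is the (stdout, stderr) pair returned by _execute further below) and the tuple it parses into; all addresses are invented for illustration:

    machine_id = ('aa:bb:cc:dd:ee:ff;10.0.0.5;255.255.255.0;10.0.0.1;'
                  '10.0.0.255;8.8.8.8,8.8.4.4', '')
    print _parse_network_details(machine_id)
    # [('aa:bb:cc:dd:ee:ff', '10.0.0.5', '255.255.255.0', '10.0.0.1',
    #   '10.0.0.255', ['8.8.8.8', '8.8.4.4'])]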
def _get_windows_network_adapters():
    """Get the list of Windows network adapters."""
    import win32com.client
    wbem_locator = win32com.client.Dispatch('WbemScripting.SWbemLocator')
    wbem_service = wbem_locator.ConnectServer('.', 'root\cimv2')
    wbem_network_adapters = wbem_service.InstancesOf('Win32_NetworkAdapter')
    network_adapters = []
    for wbem_network_adapter in wbem_network_adapters:
        # NetConnectionStatus: 2 == connected, 7 == media disconnected
        if wbem_network_adapter.NetConnectionStatus == 2 or \
                        wbem_network_adapter.NetConnectionStatus == 7:
            adapter_name = wbem_network_adapter.NetConnectionID
            mac_address = wbem_network_adapter.MacAddress.lower()
            wbem_network_adapter_config = \
                wbem_network_adapter.associators_(
                    'Win32_NetworkAdapterSetting',
                    'Win32_NetworkAdapterConfiguration')[0]
            ip_address = ''
            subnet_mask = ''
            if wbem_network_adapter_config.IPEnabled:
                ip_address = wbem_network_adapter_config.IPAddress[0]
                subnet_mask = wbem_network_adapter_config.IPSubnet[0]
                #wbem_network_adapter_config.DefaultIPGateway[0]
            network_adapters.append({'name': adapter_name,
                                     'mac-address': mac_address,
                                     'ip-address': ip_address,
                                     'subnet-mask': subnet_mask})
    return network_adapters


def _get_linux_network_adapters():
    """Get the list of Linux network adapters."""
    import fcntl
    max_bytes = 8096
    arch = platform.architecture()[0]
    # The struct ifreq layout differs between 32-bit and 64-bit userlands
    if arch == ARCH_32_BIT:
        offset1 = 32
        offset2 = 32
    elif arch == ARCH_64_BIT:
        offset1 = 16
        offset2 = 40
    else:
        raise OSError(_("Unknown architecture: %s") % arch)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    names = array.array('B', '\0' * max_bytes)
    # SIOCGIFCONF (0x8912): list the configured interfaces
    outbytes = struct.unpack('iL', fcntl.ioctl(
        sock.fileno(),
        0x8912,
        struct.pack('iL', max_bytes, names.buffer_info()[0])))[0]
    adapter_names = \
        [names.tostring()[n_counter:n_counter + offset1].split('\0', 1)[0]
         for n_counter in xrange(0, outbytes, offset2)]
    network_adapters = []
    for adapter_name in adapter_names:
        # SIOCGIFADDR (0x8915): IPv4 address of the interface
        ip_address = socket.inet_ntoa(fcntl.ioctl(
            sock.fileno(),
            0x8915,
            struct.pack('256s', adapter_name))[20:24])
        # SIOCGIFNETMASK (0x891b): netmask of the interface
        subnet_mask = socket.inet_ntoa(fcntl.ioctl(
            sock.fileno(),
            0x891b,
            struct.pack('256s', adapter_name))[20:24])
        # SIOCGIFHWADDR (0x8927): hardware (MAC) address
        raw_mac_address = '%012x' % _bytes2int(fcntl.ioctl(
            sock.fileno(),
            0x8927,
            struct.pack('256s', adapter_name))[18:24])
        mac_address = ":".join([raw_mac_address[m_counter:m_counter + 2]
            for m_counter in range(0, len(raw_mac_address), 2)]).lower()
        network_adapters.append({'name': adapter_name,
                                 'mac-address': mac_address,
                                 'ip-address': ip_address,
                                 'subnet-mask': subnet_mask})
    return network_adapters


def _get_adapter_name_and_ip_address(network_adapters, mac_address):
    """Get the adapter name based on the MAC address."""
    adapter_name = None
    ip_address = None
    for network_adapter in network_adapters:
        if network_adapter['mac-address'] == mac_address.lower():
            adapter_name = network_adapter['name']
            ip_address = network_adapter['ip-address']
            break
    return adapter_name, ip_address


def _get_win_adapter_name_and_ip_address(mac_address):
    """Get the Windows network adapter name."""
    network_adapters = _get_windows_network_adapters()
    return _get_adapter_name_and_ip_address(network_adapters, mac_address)


def _get_linux_adapter_name_and_ip_address(mac_address):
    """Get the Linux network adapter name."""
    network_adapters = _get_linux_network_adapters()
    return _get_adapter_name_and_ip_address(network_adapters, mac_address)
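As a small worked example, the MAC address carried in machine.id is matched against the locally discovered adapters to decide which interface to reconfigure; the adapter data below is hypothetical:

    adapters = [{'name': 'eth0',
                 'mac-address': 'aa:bb:cc:dd:ee:ff',
                 'ip-address': '10.0.0.7',
                 'subnet-mask': '255.255.255.0'}]
    name, current_ip = _get_adapter_name_and_ip_address(adapters,
                                                        'AA:BB:CC:DD:EE:FF')
    # name == 'eth0', current_ip == '10.0.0.7'; callers only rewrite the
    # interface when current_ip differs from the address in machine.id.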
def _execute(cmd_list, process_input=None, check_exit_code=True):
    """Executes the command with the list of arguments specified."""
    cmd = ' '.join(cmd_list)
    logging.debug(_("Executing command: '%s'") % cmd)
    env = os.environ.copy()
    obj = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                           env=env)
    result = None
    if process_input is not None:
        result = obj.communicate(process_input)
    else:
        result = obj.communicate()
    obj.stdin.close()
    if obj.returncode:
        logging.debug(_("Result was %s") % obj.returncode)
        if check_exit_code and obj.returncode != 0:
            (stdout, stderr) = result
            raise ProcessExecutionError(exit_code=obj.returncode,
                                        stdout=stdout,
                                        stderr=stderr,
                                        cmd=cmd)
    time.sleep(0.1)
    return result
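A short sketch of calling _execute and handling a failure; the command mirrors the network restart used later in this tool and is only illustrative:

    try:
        stdout, stderr = _execute(['/sbin/service', 'network', 'restart'])
    except ProcessExecutionError, err:
        logging.error(_("Network restart failed with exit code %s"),
                      err.exit_code)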
def _windows_set_networking():
    """Set the IP address for the Windows VM."""
    program_files = os.environ.get('PROGRAMFILES')
    program_files_x86 = os.environ.get('PROGRAMFILES(X86)')
    vmware_tools_bin = None
    if os.path.exists(os.path.join(program_files, 'VMware', 'VMware Tools',
                                   'vmtoolsd.exe')):
        vmware_tools_bin = os.path.join(program_files, 'VMware',
                                        'VMware Tools', 'vmtoolsd.exe')
    elif os.path.exists(os.path.join(program_files, 'VMware', 'VMware Tools',
                                     'VMwareService.exe')):
        vmware_tools_bin = os.path.join(program_files, 'VMware',
                                        'VMware Tools', 'VMwareService.exe')
    elif program_files_x86 and os.path.exists(os.path.join(program_files_x86,
                                              'VMware', 'VMware Tools',
                                              'VMwareService.exe')):
        vmware_tools_bin = os.path.join(program_files_x86, 'VMware',
                                        'VMware Tools', 'VMwareService.exe')
    if vmware_tools_bin:
        cmd = ['"' + vmware_tools_bin + '"', '--cmd', 'machine.id.get']
        for network_detail in _parse_network_details(_execute(cmd,
                                                    check_exit_code=False)):
            mac_address, ip_address, subnet_mask, gateway, broadcast,\
                dns_servers = network_detail
            adapter_name, current_ip_address = \
                _get_win_adapter_name_and_ip_address(mac_address)
            if adapter_name and not ip_address == current_ip_address:
                cmd = ['netsh', 'interface', 'ip', 'set', 'address',
                       'name="%s"' % adapter_name, 'source=static',
                       ip_address, subnet_mask, gateway, '1']
                _execute(cmd)
                # Windows doesn't let you manually set the broadcast address
                for dns_server in dns_servers:
                    if dns_server:
                        cmd = ['netsh', 'interface', 'ip', 'add', 'dns',
                               'name="%s"' % adapter_name, dns_server]
                        _execute(cmd)
    else:
        logging.warn(_("VMware Tools is not installed"))


def _filter_duplicates(all_entries):
    """Remove duplicate and empty entries, preserving order."""
    final_list = []
    for entry in all_entries:
        if entry and entry not in final_list:
            final_list.append(entry)
    return final_list


def _set_rhel_networking(network_details=[]):
    """Set up networking on a RHEL-style guest via ifcfg files."""
    all_dns_servers = []
    for network_detail in network_details:
        mac_address, ip_address, subnet_mask, gateway, broadcast,\
            dns_servers = network_detail
        all_dns_servers.extend(dns_servers)
        adapter_name, current_ip_address = \
            _get_linux_adapter_name_and_ip_address(mac_address)
        if adapter_name and not ip_address == current_ip_address:
            interface_file_name = \
                '/etc/sysconfig/network-scripts/ifcfg-%s' % adapter_name
            # Remove file
            os.remove(interface_file_name)
            # Touch file
            _execute(['touch', interface_file_name])
            interface_file = open(interface_file_name, 'w')
            interface_file.write('\nDEVICE=%s' % adapter_name)
            interface_file.write('\nUSERCTL=yes')
            interface_file.write('\nONBOOT=yes')
            interface_file.write('\nBOOTPROTO=static')
            interface_file.write('\nBROADCAST=%s' % broadcast)
            interface_file.write('\nNETWORK=')
            interface_file.write('\nGATEWAY=%s' % gateway)
            interface_file.write('\nNETMASK=%s' % subnet_mask)
            interface_file.write('\nIPADDR=%s' % ip_address)
            interface_file.write('\nMACADDR=%s' % mac_address)
            interface_file.close()
    if all_dns_servers:
        dns_file_name = "/etc/resolv.conf"
        os.remove(dns_file_name)
        _execute(['touch', dns_file_name])
        dns_file = open(dns_file_name, 'w')
        dns_file.write("; generated by OpenStack guest tools")
        unique_entries = _filter_duplicates(all_dns_servers)
        for dns_server in unique_entries:
            dns_file.write("\nnameserver %s" % dns_server)
        dns_file.close()
    _execute(['/sbin/service', 'network', 'restart'])


def _linux_set_networking():
    """Set the IP address for the Linux VM."""
    vmware_tools_bin = None
    if os.path.exists('/usr/sbin/vmtoolsd'):
        vmware_tools_bin = '/usr/sbin/vmtoolsd'
    elif os.path.exists('/usr/bin/vmtoolsd'):
        vmware_tools_bin = '/usr/bin/vmtoolsd'
    elif os.path.exists('/usr/sbin/vmware-guestd'):
        vmware_tools_bin = '/usr/sbin/vmware-guestd'
    elif os.path.exists('/usr/bin/vmware-guestd'):
        vmware_tools_bin = '/usr/bin/vmware-guestd'
    if vmware_tools_bin:
        cmd = [vmware_tools_bin, '--cmd', 'machine.id.get']
        network_details = _parse_network_details(_execute(cmd,
                                                 check_exit_code=False))
        # TODO(sateesh): Handle other distros such as Ubuntu, SUSE,
        # Debian, BSD, etc.
        _set_rhel_networking(network_details)
    else:
        logging.warn(_("VMware Tools is not installed"))


if __name__ == '__main__':
    pltfrm = sys.platform
    if pltfrm == PLATFORM_WIN:
        _windows_set_networking()
    elif pltfrm == PLATFORM_LINUX:
        _linux_set_networking()
    else:
        raise NotImplementedError(_("Platform not implemented: '%s'")
                                  % pltfrm)

@ -30,3 +30,4 @@ sqlalchemy-migrate
netaddr
sphinx
glance
suds==0.4
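suds==0.4 is the SOAP client dependency; the vim wrapper in the vmwareapi module presumably builds on it to talk to the ESX/ESXi SDK endpoint. A minimal, hand-rolled sketch of such a client, assuming the standard vSphere SDK URL layout and a placeholder host address:

    from suds.client import Client

    host_ip = '10.0.0.10'
    wsdl_url = 'https://%s/sdk/vimService.wsdl' % host_ip
    client = Client(wsdl_url, location='https://%s/sdk' % host_ip)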