diff --git a/doc/common/figures/vmware-nova-driver-architecture.jpg b/doc/common/figures/vmware-nova-driver-architecture.jpg
new file mode 100644
index 0000000000..0f4e89fa67
Binary files /dev/null and b/doc/common/figures/vmware-nova-driver-architecture.jpg differ
diff --git a/doc/config-reference/compute/section_hypervisor_vmware.xml b/doc/config-reference/compute/section_hypervisor_vmware.xml
index 193d9cec64..f7100cc9d6 100644
--- a/doc/config-reference/compute/section_hypervisor_vmware.xml
+++ b/doc/config-reference/compute/section_hypervisor_vmware.xml
@@ -5,49 +5,125 @@
Introduction
- OpenStack Compute supports the VMware vSphere product family. This section describes the
- additional configuration required to launch VMWare-based virtual machine images. vSphere
- versions 4.1 and greater are supported.
- There are two OpenStack Compute drivers that can be used with vSphere:
-
+ OpenStack Compute supports the VMware vSphere product family and enables access to advanced
+ features such as vMotion, High Availability, and Dynamic Resource Scheduling (DRS). This
+ section describes the configuration required to launch VMware-based virtual machine images.
+ vSphere versions 4.1 and later are supported.
+ The VMware vCenter Driver enables nova-compute to
+ communicate with a VMware vCenter server managing one or more ESX host clusters. The
+ driver aggregates the ESX hosts in each cluster to present one large hypervisor entity for each
+ cluster to the Compute scheduler. Because individual ESX hosts are not exposed to the
+ scheduler, Compute schedules to the granularity of clusters and vCenter uses DRS to select the
+ actual ESX host within the cluster. When a virtual machine makes its way into a vCenter
+ cluster, it can take advantage of all the features that come with vSphere.
+ The following sections describe how to configure the VMware vCenter driver.
+
+
+ High Level Architecture
+ The following diagram shows a high-level view of the VMware driver architecture:
+
+ (Figure: vmware-nova-driver-architecture.jpg)
+
+ In the previous diagram, the OpenStack Compute Scheduler sees three hypervisors, each
+ corresponding to a cluster in vCenter. Nova-compute contains the VMware driver and, as the
+ figure shows, you can run multiple nova-compute services. While Compute schedules
+ at the granularity of a cluster, the VMware driver inside nova-compute interacts with the
+ vCenter APIs to select an appropriate ESX host within the cluster. Internally, vCenter
+ uses DRS for placement.
+ The VMware vCenter Driver also interacts with the OpenStack Image Service to copy
+ VMDK images from the Image Service back end store. The dotted line in the figure represents the
+ copying of VMDK images from the OpenStack Image Service to the vSphere datastore. VMDK images
+ are cached in the datastore so the copy operation is only required the first time that the
+ VMDK image is used.
+ After a VM is booted by OpenStack into a vSphere cluster, the VM becomes visible in vCenter
+ and can access vSphere advanced features. At the same time, the VM is visible in
+ the OpenStack Dashboard and you can manage it as you would any other OpenStack VM.
+ You perform advanced vSphere operations in vCenter while you configure OpenStack resources
+ such as VMs through the OpenStack dashboard.
+ Not shown in the figure above is how networking fits into the architecture. Both
+ nova-network and the OpenStack Networking Service
+ are supported. For details, see .
+
+
+ Overview of Configuration
+ Here are the basic steps to get started with the VMware vCenter Driver:
+
- vmwareapi.VMwareVCDriver: a driver that lets nova-compute communicate with a VMware vCenter server managing a cluster
- of ESX hosts. With this driver and access to shared storage, advanced vSphere features
- like vMotion, High Availability, and Dynamic Resource Scheduling (DRS) are available. With
- this driver, one nova-compute service is run per
- vCenter cluster.
+ Ensure vCenter is configured correctly. See .
- vmwareapi.VMwareESXDriver: a driver that lets nova-compute communicate directly to an ESX host, but does not support
- advanced VMware features. With this driver, one nova-compute service is run per ESX host.
+ Configure nova.conf for the VMware vCenter Driver. See .
-
+
+ Load desired VMDK images into the OpenStack Image Service. See .
+
+
+ Configure networking with either nova-network
+ or the OpenStack Networking Service. See .
+
+
- Prerequisites
- You will need to install the following software installed on each nova-compute node:
-
-
- python-suds: This software is needed by the nova-compute service to communicate with vSphere APIs. If not installed,
- the nova-compute service shuts down with the
- message: "Unable to import suds".
-
-
- On Ubuntu, this package can be installed by running:
- $sudo apt-get install python-suds
+ Prerequisites and Limitations
+ The following list describes how to prepare a vSphere environment to run with
+ the VMware vCenter Driver.
+
+
+
+ vCenter Inventory: Make sure any vCenter used by
+ OpenStack contains a single datacenter (this is a temporary limitation that will be
+ removed in a future Havana stable release).
+
+
+ DRS: For any cluster that contains multiple ESX
+ hosts, enable DRS with "Fully automated" placement turned on.
+
+
+ Shared Storage: Only shared storage is supported
+ and datastores must be shared among all hosts in a cluster. It is recommended to remove
+ datastores not intended for OpenStack from clusters being configured for OpenStack.
+ Currently, only a single datastore can be used per cluster (this is a temporary limitation
+ that will be removed in a future Havana stable release).
+
+
+ Clusters and Datastores: Clusters and datastores
+ used by OpenStack should not be used for other purposes. Using clusters or datastores
+ for other purposes will cause OpenStack to display incorrect usage information.
+
+
+ Networking: The networking configuration depends on
+ the desired networking model. See .
+
+
+ Security Groups: Security Groups are not supported
+ if nova-network is used. Security Groups are
+ only supported if the VMware driver is used in conjunction with the OpenStack Networking
+ Service running the NSX plugin.
+
+
+ VNC: Enable the port range 5900 - 6000 for VNC
+ connections on every ESX host in all clusters under OpenStack control; a hedged
+ configuration sketch follows this list. For more details on enabling VNC, see: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1246
+
+
+
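+ As an illustrative sketch only (the exact mechanism varies by ESXi version, and the file name vnc.xml and the ruleset id below are assumptions; consult the KB article above for the authoritative procedure), a custom ESXi 5.x firewall ruleset file such as /etc/vmware/firewall/vnc.xml could open the range:
+ <ConfigRoot>
+   <service>
+     <!-- Hypothetical ruleset opening TCP 5900-6000 inbound for VNC -->
+     <id>VNC</id>
+     <rule id='0000'>
+       <direction>inbound</direction>
+       <protocol>tcp</protocol>
+       <porttype>dst</porttype>
+       <port>
+         <begin>5900</begin>
+         <end>6000</end>
+       </port>
+     </rule>
+     <enabled>true</enabled>
+     <required>false</required>
+   </service>
+ </ConfigRoot>
+ After adding such a file, reload the rules with esxcli network firewall refresh.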
- Using the VMwareVCDriver
- This section covers details of using the VMwareVCDriver.
+ Using the VMware vCenter Driver
+ Use the VMware vCenter Driver (VMwareVCDriver) to connect OpenStack Compute with vCenter.
+ This is the recommended configuration and allows access through vCenter to advanced vSphere
+ features like vMotion, High Availability, and Dynamic Resource Scheduling (DRS).
- VMWareVCDriver configuration options
- When using the VMwareVCDriver (i.e., vCenter) with OpenStack Compute, nova.conf must
- include the following VMWare-specific config options:
+ VMwareVCDriver configuration options
+ When using the VMwareVCDriver (i.e., vCenter) with OpenStack Compute,
+ nova.conf must include the following VMware-specific config
+ options:
+[DEFAULT]
compute_driver=vmwareapi.VMwareVCDriver
@@ -57,201 +133,244 @@ host_ip=<vCenter host IP>
host_username=<vCenter username>
host_password=<vCenter password>
cluster_name=<vCenter cluster name>
+datastore_regex=<optional datastore regex>
wsdl_location=http://127.0.0.1:8080/vmware/SDK/wsdl/vim25/vimService.wsdl
- Remember that you will have only one nova-compute service per cluster. It is recommended that this host run as a
- VM with high-availability enabled as part of that same cluster.
- Also note that many of the nova.conf
- options mentioned elsewhere in this document that are relevant
- to libvirt do not apply to using this driver.
-
-
- vSphere 5.0 (and below) additional setup
- Users of vSphere 5.0 or earlier will need to locally host their WSDL files.
- These steps are applicable for vCenter 5.0 or ESXi 5.0 and you may
- accomplish this by either mirroring the WSDL from the vCenter or ESXi
- server you intend on using, or you may download the SDK directly from VMware.
- These are both workaround steps used to fix a known issue with the WSDL that was resolved in later versions.
+ Most of the configuration options above are straightforward to understand, but here are
+ a few points to note:
+
+
+
+ Clusters: The vCenter driver can support multiple clusters. To use more than one
+ cluster, add multiple cluster_name lines in
+ nova.conf with the appropriate cluster names (see the sketch
+ after this list). Clusters and
+ datastores used by the vCenter driver should not contain any VMs other than those
+ created by the driver.
+
+
+ Datastores: The datastore_regex field specifies the datastores to use
+ with Compute. For example, datastore_regex="nas.*" selects all the
+ datastores that have a name starting with "nas". If this line is omitted, Compute uses
+ the first datastore returned by the vSphere API. It is recommended not to use this
+ field and instead remove datastores that are not intended for OpenStack.
+
+
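+ For example, a minimal sketch of a [vmware] section that selects two clusters and restricts datastores (the cluster names and the regex here are illustrative values, not defaults):
+ [vmware]
+cluster_name=cluster1
+cluster_name=cluster2
+datastore_regex=nas.*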
-
- Mirror WSDL from vCenter (or ESXi)
-
- You'll need the IP address for your vCenter or
- ESXi host that you'll be mirroring the files from. Set the
- shell variable VMWAREAPI_IP to the IP address
- to allow you to cut and paste commands from these instructions:
- $export VMWAREAPI_IP=<your_vsphere_host_ip>
-
-
-
-
- Create a local file system directory to hold the WSDL files in.
- $mkdir -p /opt/stack/vmware/wsdl/5.0
-
-
-
- Change into the new directory.
- $cd /opt/stack/vmware/wsdl/5.0
-
-
-
- Install a command line tool that can download the the files like
- wget. Install it with your OS specific tools.
-
-
-
-
- Download the files to the local file cache.
- wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vimService.wsdl
-wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vim.wsdl
-wget --no-check-certificate https://$VMWAREAPI_IP/sdk/core-types.xsd
-wget --no-check-certificate https://$VMWAREAPI_IP/sdk/query-messagetypes.xsd
-wget --no-check-certificate https://$VMWAREAPI_IP/sdk/query-types.xsd
-wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vim-messagetypes.xsd
-wget --no-check-certificate https://$VMWAREAPI_IP/sdk/reflect-messagetypes.xsd
-wget --no-check-certificate https://$VMWAREAPI_IP/sdk/reflect-types.xsd
- There will be two files that did not fetch properly
- reflect-types.xsd and
- reflect-messagetypes.xsd. These two files
- will need to be stubbed out. The following XML listing can be
- used to replace the missing file content. The XML parser
- underneath Python can be very particular and if you put a
- space in the wrong place it can break the parser. Copy the
- contents below carefully and watch the formatting carefully.
- <?xml version="1.0" encoding="UTF-8"?>
- <schema
- targetNamespace="urn:reflect"
- xmlns="http://www.w3.org/2001/XMLSchema"
- xmlns:xsd="http://www.w3.org/2001/XMLSchema"
- elementFormDefault="qualified">
- </schema>
-
-
-
-
-
- Now that the files are locally present, tell
- the driver to look for the SOAP service WSDLs in the local
- file system and not on the remote vSphere server. The
- following setting should be added to the
- nova.conf for your nova-compute node:
- [vmware]
-wsdl_location=file:///opt/stack/vmware/wsdl/5.0/vimService.wsdl
-
-
-
- Alternatively, download the version appropriate SDK from
- http://www.vmware.com/support/developer/vc-sdk/ and copy
- it into /opt/stack/vmware. You should
- ensure that the WSDL is available, in for example
- /opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl.
- Below we will point nova.conf to fetch this
- WSDL file from the local file system using a URL.
- When using the VMwareVCDriver (i.e vCenter) with OpenStack Compute with vSphere
- version 5.0 or below, nova.conf must include the following extra config option:
- [vmware]
-wsdl_location=file:///opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl
-
-
- Requirements and limitations
- The VMwareVCDriver is new in Grizzly, and as a result there are some important
- deployment requirements and limitations to be aware of. In many cases, these items will be
- addressed in future releases.
-
-
- Each cluster can only be configured with a single Datastore. If multiple Datastores
- are configured, the first one returned via the vSphere API will be used.
-
-
- Because a single nova-compute is used per
- cluster, the nova-scheduler views this as a
- single host with resources amounting to the aggregate
- resources of all ESX hosts managed by the cluster. This may
- result in unexpected behavior depending on your choice of
- scheduler.
-
-
- Security Groups are not supported if nova-network is used. Security
- Groups are only supported if the VMware driver is used in
- conjunction with the OpenStack Networking Service running
- the Nicira NVP plugin.
-
-
-
-
-
-
- Using the VMwareESXDriver
- This section covers details of using the VMwareESXDriver.
-
- VMWareESXDriver configuration options
- When using the VMwareESXDriver (i.e., no vCenter) with OpenStack Compute, configure
- nova.conf with the following VMWare-specific config options:
-
- [DEFAULT]
-compute_driver=vmwareapi.VMwareESXDriver
-
-[vmware]
-host_ip=<ESXi host IP>
-host_username=<ESXi host username>
-host_password=<ESXi host password>
-wsdl_location=http://127.0.0.1:8080/vmware/SDK/wsdl/vim25/vimService.wsdl
- Remember that you will have one nova-compute
- service per ESXi host. It is recommended that this host run as a VM on the same ESXi host it
- is managing.
- Also note that many of the nova.conf options mentioned elsewhere in this document that
- are relevant to libvirt do not apply to using this driver.
-
-
- Requirements and limitations
- The ESXDriver is unable to take advantage of many of the advanced capabilities
- associated with the vSphere platform, namely vMotion, High Availability, and Dynamic
- Resource Scheduler (DRS).
+ A nova-compute service can control one or more
+ clusters containing multiple ESX hosts, making nova-compute a critical service from a high availability perspective.
+ Since it is possible for the host running nova-compute to fail while the vCenter and ESX
+ resources are still alive, it is recommended that nova-compute be protected against host failures like other critical
+ OpenStack services.
+ Also note that many of the nova.conf options mentioned elsewhere in
+ this document that are relevant to libvirt do not apply to using this driver.
+ Environments using vSphere 5.0 and below require additional configuration. See .
 Images with VMware vSphere
- When using either VMware driver, images should be uploaded to the OpenStack Image Service
- in the VMDK format. Both thick and thin images are currently supported and all images must be
- flat (i.e. contained within 1 file). For example:
- To load a thick image with a SCSI adaptor:
- $glance image-create name="ubuntu-thick-scsi" disk_format=vmdk container_format=bare \
-is_public=true --property vmware_adaptertype="lsiLogic" \
+ The vCenter Driver supports images in the VMDK format. Disks in this format can be
+ obtained from VMware Fusion or from an ESX environment. It is also possible to convert other
+ formats, such as qcow2, to the VMDK format using the qemu-img utility. Once a
+ VMDK disk is available, it should be loaded into the OpenStack Image Service and can then be used
+ with the VMware vCenter Driver. The following sections provide additional details on the exact
+ types of disks supported and the commands used for conversion and upload.
+
+ Supported Image Types
+ Images should be uploaded to the OpenStack Image Service in the VMDK format. The
+ following VMDK disk types are supported:
+
+
+ VMFS Flat Disks (includes thin, thick,
+ zeroedthick, and eagerzeroedthick). Note that once a VMFS thin disk is exported from
+ VMFS to a non-VMFS location, like the OpenStack Image Service, it becomes a preallocated
+ flat disk. This has an impact on the transfer time from the OpenStack Image Service to
+ the datastore when the full preallocated flat disk, rather than the thin disk, has to be
+ transferred.
+
+
+ Monolithic Sparse disks. Sparse disks get
+ imported from the OpenStack Image Service into ESX as thin provisioned disks. Monolithic
+ Sparse disks can be obtained from VMware Fusion or can be created by converting from
+ other virtual disk formats using the qemu-img utility.
+
+
+ The following table shows the vmware_disktype property that applies to each
+ of the supported VMDK disk types:
+ vmware_disktype     VMDK disk type
+ thin                VMFS flat disk, thin provisioned
+ preallocated        VMFS flat disk (thick, zeroedthick, eagerzeroedthick)
+ sparse              Monolithic Sparse disk
+ The vmware_disktype property is set when an image is loaded into the
+ OpenStack Image Service. For example, the following command creates a Monolithic Sparse
+ image by setting vmware_disktype to "sparse":
+ $glance image-create name="ubuntu-sparse" disk_format=vmdk \
+container_format=bare is_public=true \
+--property vmware_disktype="sparse" \
+--property vmware_ostype="ubuntu64Guest" < ubuntuLTS-sparse.vmdk
+ Note that specifying "thin" does not provide any advantage over "preallocated" with the
+ current version of the driver. Future versions, however, may restore the thin properties of
+ the disk after it is downloaded to a vSphere datastore.
+
+
+ Converting and Loading Images
+ Using the qemu-img utility, disk images in several formats (e.g. qcow2) can
+ be converted to the VMDK format.
+ For example, the following command can be used to convert a qcow2 Ubuntu Precise cloud image:
+ $qemu-img convert -f qcow2 ~/Downloads/precise-server-cloudimg-amd64-disk1.img \
+-O vmdk precise-server-cloudimg-amd64-disk1.vmdk
+ VMDK disks converted via qemu-img are always monolithic sparse VMDK disks with an IDE adapter type. Using the above
+ example of the Precise Ubuntu image after the qemu-img conversion, the command
+ to upload the VMDK disk should be something like:
+ $glance image-create --name precise-cloud --is-public=True \
+--container-format=bare --disk-format=vmdk \
+--property vmware_disktype="sparse" \
+--property vmware_adaptertype="ide" < \
+precise-server-cloudimg-amd64-disk1.vmdk
+ Note that the vmware_disktype is set to sparse and the vmware_adaptertype is set to ide in the command above.
+ If the image did not come from the qemu-img utility, the
+ vmware_disktype and vmware_adaptertype might be different. To
+ determine the image adapter type from an image file, use the following command and look for
+ the ddb.adapterType= line:
+
+ $head -20 <vmdk file name>
+
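+ For illustration, the descriptor of a SCSI disk might contain a line like the following (the value varies per image, e.g. "ide" for disks converted by qemu-img):
+ ddb.adapterType = "lsilogic"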
+ Assuming a preallocated disk type and a SCSI "lsiLogic" adapter type, below is the
+ command to upload the VMDK disk:
+ $glance image-create name="ubuntu-thick-scsi" disk_format=vmdk \
+container_format=bare is_public=true \
+--property vmware_adaptertype="lsiLogic" \
--property vmware_disktype="preallocated" \
--property vmware_ostype="ubuntu64Guest" < ubuntuLTS-flat.vmdk
- To load a thin image with an IDE adaptor:
- $glance image-create name="unbuntu-thin-ide" disk_format=vmdk container_format=bare \
-is_public=true --property vmware_adaptertype="ide" \
---property vmware_disktype="thin" \
---property vmware_ostype="ubuntu64Guest" < unbuntuLTS-thin-flat.vmdk
- The complete list of supported vmware disk properties is documented in the Image
- Management section. It's critical that the adaptertype is correct; In fact, the image will not
- boot with the incorrect adaptertype. If you have the meta-data VMDK file, the
- ddb.adapterType property specifies the adaptertype. The default adaptertype is "lsilogic"
- which is SCSI.
+ Currently, there is a limitation that OS boot VMDK disks with an IDE adapter type cannot
+ be attached to a virtual SCSI controller and likewise disks with one of the SCSI adapter
+ types (e.g. busLogic, lsiLogic) cannot be attached to the IDE controller. Therefore, as the
+ examples above show, it is important to set the vmware_adaptertype property
+ correctly. The default adapter type is "lsiLogic" which is SCSI, so you may omit the
+ vmware_adaptertype property if you are certain that the image adapter type is
+ "lsiLogic."
+
+
+ Tagging VMware Images
+ In a mixed hypervisor environment, OpenStack Compute uses the
+ hypervisor_type tag to match images to the correct hypervisor type. For
+ VMware images, set the hypervisor type to "vmware" as shown below. Other valid hypervisor
+ types include: xen, qemu, kvm, lxc, uml, hyperv, and powervm.
+ $glance image-create name="ubuntu-thick-scsi" disk_format=vmdk \
+container_format=bare is_public=true \
+--property vmware_adaptertype="lsiLogic" \
+--property vmware_disktype="preallocated" \
+--property hypervisor_type="vmware" \
+--property vmware_ostype="ubuntu64Guest" < ubuntuLTS-flat.vmdk
+
+
+ Optimizing Images
+ Monolithic Sparse disks are considerably faster to download but have the overhead of an
+ additional conversion step. When imported into ESX, sparse disks get converted to VMFS flat
+ thin provisioned disks. The download and conversion steps only affect the first launched
+ instance that uses the sparse disk image. The converted disk image is cached, so subsequent
+ instances that use this disk image can simply use the cached version.
+ To avoid the conversion step (at the cost of longer download times) consider converting
+ sparse disks to thin provisioned or preallocated disks before loading them into the
+ OpenStack Image Service. Below are some tools that can be used to pre-convert sparse
+ disks.
+
+ Using vSphere CLI tools (sometimes called the remote CLI or rCLI)
+ Assuming that the sparse disk is made available on a datastore accessible by an
+ ESX host, the following command converts it to preallocated format:
+ vmkfstools --server=ip_of_some_ESX_host -i /vmfs/volumes/datastore1/sparse.vmdk /vmfs/volumes/datastore1/converted.vmdk
+ (Note that the vifs tool from the same CLI package can be used to upload the disk to
+ be converted. The vifs tool can also be used to download the converted disk if
+ necessary.)
+
+ Using vmkfstools directly on the ESX host
+ If the SSH service is enabled on an ESX host, the sparse disk can be uploaded to the
+ ESX datastore via scp and the vmkfstools local to the ESX host can be used to perform
+ the conversion, as shown below.
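+ For example, copy the disk to the datastore first (the host name esx-host is illustrative; datastore1 matches the conversion command below):
+ scp sparse.vmdk root@esx-host:/vmfs/volumes/datastore1/
+ Then, after logging in to the host via ssh, run the conversion: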
+ vmkfstools -i /vmfs/volumes/datastore1/sparse.vmdk /vmfs/volumes/datastore1/converted.vmdk
+
+
+ vmware-vdiskmanager
+ vmware-vdiskmanager is a utility that comes bundled with VMware Fusion and VMware
+ Workstation. Below is an example of converting a sparse disk to preallocated format:
+ '/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager' -r sparse.vmdk -t 4 converted.vmdk
+ In all of the above cases, the converted vmdk is actually a pair of files: the
+ descriptor file converted.vmdk and the actual virtual
+ disk data file converted-flat.vmdk. The file to be
+ uploaded to the OpenStack Image Service is converted-flat.vmdk.
+
+
+
+
+ Image Handling
+ The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual
+ machine. As a result, the vCenter OpenStack Compute driver must download the VMDK via HTTP
+ from the OpenStack Image Service to a datastore that is visible to the hypervisor. To
+ optimize this process, the first time a VMDK file is used, it gets cached in the datastore.
+ Subsequent virtual machines that need the VMDK use the cached version and don't have to copy
+ the file again from the OpenStack Image Service.
+ Even with a cached VMDK, there is still a copy operation from the cache location to the
+ hypervisor file directory in the shared datastore. To avoid this copy, boot the image in
+ linked_clone mode. To learn how to enable this mode, see .
+ Note also that it is possible to override the linked_clone mode on a per-image basis by
+ using the vmware_linked_clone property in the OpenStack Image Service.
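+ As a minimal sketch (assuming the use_linked_clone option of the [vmware] group; verify the option name against your release), linked_clone can be enabled globally in nova.conf and overridden per image with the property mentioned above (the image ID is a placeholder):
+ [vmware]
+use_linked_clone=True
+ $glance image-update <image-id> --property vmware_linked_clone="false"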
+ Networking with VMware vSphere
- The VMware driver support networking with both Nova-Network and the OpenStack Networking
- Service.
+ The VMware driver supports networking with both nova-network and the OpenStack Networking Service.
- If using nova-network with the FlatManager or FlatDHCPManager, before provisioning
- VMs, create a port group with the same name as the flat_network_bridge value in
- nova.conf (default is br100).
- All VM NICs will be attached to this port group.
+ If using nova-network with the FlatManager or
+ FlatDHCPManager, before provisioning VMs, create a port group with the same name as the
+ flat_network_bridge value in nova.conf (default
+ is br100). All VM NICs will be attached to this port group. Ensure the
+ flat interface of the node running nova-network has a path to this
+ network.
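+ For example, the port group could be created on each ESX host with the vSphere CLI (assuming a standard vSwitch named vSwitch0; adjust for your environment):
+ esxcli network vswitch standard portgroup add --portgroup-name=br100 --vswitch-name=vSwitch0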
- If using nova-network with the VlanManager, before provisioning VMs, make sure the
+ If using nova-network with the VlanManager, before provisioning VMs, make sure the
vlan_interface configuration option is set to match the ESX host interface
that will handle VLAN-tagged VM traffic. OpenStack Compute will automatically create the
corresponding port groups.
@@ -270,10 +389,122 @@ is_public=true --property vmware_adaptertype="ide" \
Volumes with VMware vSphere
- The VMware driver supports attaching volumes from the OpenStack Block
- Storage service. 'iscsi' volume driver provides limited support and can be
- used only for attachments. VMware VMDK driver of OpenStack Block Storage
- can be used for managing volumes based out of vSphere datastores.
+ The VMware driver supports attaching volumes from the OpenStack Block Storage service. The
+ VMware VMDK driver for OpenStack Block Storage is recommended and should be used for managing
+ volumes based on vSphere datastores. More information about the VMware VMDK driver can be
+ found at: VMware VMDK Driver. There is also an "iscsi" volume driver, which provides limited
+ support and can be used only for attachments.
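+ As a sketch of the Block Storage side (the option names below follow the Havana-era VMware VMDK driver and should be treated as assumptions; consult the driver documentation linked above for the authoritative list), cinder.conf might contain:
+ [DEFAULT]
+volume_driver=cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
+vmware_host_ip=<vCenter host IP>
+vmware_host_username=<vCenter username>
+vmware_host_password=<vCenter password>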
+
+
+ vSphere 5.0 (and below) additional setup
+ Users of vSphere 5.0 or earlier will need to locally host their WSDL files. These steps
+ are applicable for vCenter 5.0 or ESXi 5.0. You can accomplish this by either mirroring the
+ WSDL from the vCenter or ESXi server you intend to use, or by downloading the SDK directly
+ from VMware. These are both workaround steps used to fix a known issue with the WSDL that was resolved in later versions.
+
+ Mirror WSDL from vCenter (or ESXi)
+
+ You'll need the IP address for your vCenter or ESXi host that you'll be mirroring the
+ files from. Set the shell variable VMWAREAPI_IP to the IP address to allow
+ you to cut and paste commands from these instructions:
+ $export VMWAREAPI_IP=<your_vsphere_host_ip>
+
+
+
+ Create a local file system directory to hold the WSDL files in.
+ $mkdir -p /opt/stack/vmware/wsdl/5.0
+
+
+
+ Change into the new directory.
+ $cd /opt/stack/vmware/wsdl/5.0
+
+
+
+ Install a command-line tool that can download the files, such as
+ wget. Install it with your OS-specific tools.
+
+
+ Download the files to the local file cache.
+ wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vimService.wsdl
+wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vim.wsdl
+wget --no-check-certificate https://$VMWAREAPI_IP/sdk/core-types.xsd
+wget --no-check-certificate https://$VMWAREAPI_IP/sdk/query-messagetypes.xsd
+wget --no-check-certificate https://$VMWAREAPI_IP/sdk/query-types.xsd
+wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vim-messagetypes.xsd
+wget --no-check-certificate https://$VMWAREAPI_IP/sdk/reflect-messagetypes.xsd
+wget --no-check-certificate https://$VMWAREAPI_IP/sdk/reflect-types.xsd
+ Two files will not fetch properly: reflect-types.xsd
+ and reflect-messagetypes.xsd. These two files need to be stubbed
+ out. The following XML listing can be used to replace the missing file content. The XML
+ parser underneath Python can be very particular, and a space in the wrong place
+ can break the parser. Copy the contents below exactly and watch the formatting
+ closely.
+ <?xml version="1.0" encoding="UTF-8"?>
+ <schema
+ targetNamespace="urn:reflect"
+ xmlns="http://www.w3.org/2001/XMLSchema"
+ xmlns:xsd="http://www.w3.org/2001/XMLSchema"
+ elementFormDefault="qualified">
+ </schema>
+
+
+
+
+ Now that the files are locally present, tell the driver to look for the SOAP service
+ WSDLs in the local file system and not on the remote vSphere server. The following setting
+ should be added to the nova.conf for your nova-compute node:
+ [vmware]
+wsdl_location=file:///opt/stack/vmware/wsdl/5.0/vimService.wsdl
+
+
+
+ Alternatively, download the version-appropriate SDK from http://www.vmware.com/support/developer/vc-sdk/ and copy it into
+ /opt/stack/vmware. Ensure that the WSDL is available at, for
+ example, /opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl. Below we will
+ point nova.conf to fetch this WSDL file from the local file system using
+ a URL.
+ When using the VMwareVCDriver (i.e., vCenter) with OpenStack Compute with vSphere version
+ 5.0 or below, nova.conf must include the following extra config
+ option:
+ [vmware]
+wsdl_location=file:///opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl
+
+
+ Using the VMware ESX Driver
+ This section covers details of using the VMwareESXDriver. Note that the ESX Driver has not
+ been extensively tested and is not recommended. To configure the VMware vCenter Driver
+ instead, see .
+
+ VMwareESXDriver configuration options
+ When using the VMwareESXDriver (i.e., no vCenter) with OpenStack Compute, configure
+ nova.conf with the following VMware-specific config options:
+ [DEFAULT]
+compute_driver=vmwareapi.VMwareESXDriver
+
+[vmware]
+host_ip=<ESXi host IP>
+host_username=<ESXi host username>
+host_password=<ESXi host password>
+wsdl_location=http://127.0.0.1:8080/vmware/SDK/wsdl/vim25/vimService.wsdl
+ Remember that you will have one nova-compute
+ service per ESXi host. It is recommended that this host run as a VM on the same ESXi host it
+ is managing.
+ Also note that many of the nova.conf options mentioned elsewhere in
+ this document that are relevant to libvirt do not apply to using this driver.
+
+
+ Requirements and limitations
+ The ESXDriver is unable to take advantage of many of the advanced capabilities
+ associated with the vSphere platform, namely vMotion, High Availability, and Dynamic
+ Resource Scheduler (DRS).
+ Configuration Reference