From 00d59544abb4837e316dd6ade1d1dd181f1818e6 Mon Sep 17 00:00:00 2001
From: Dan Florea
Date: Fri, 1 Nov 2013 10:42:39 -0700
Subject: [PATCH] Added ephemeral disk limitation

backport: havana

Fixes bug #1247207

Change-Id: I6aa119148ccd783dd80cd6dcee6322b952c0a60d
---
 .../compute/section_hypervisor_vmware.xml | 791 ++++++++++--------
 1 file changed, 454 insertions(+), 337 deletions(-)

diff --git a/doc/config-reference/compute/section_hypervisor_vmware.xml b/doc/config-reference/compute/section_hypervisor_vmware.xml
index f7100cc9d6..6523bd2287 100644
--- a/doc/config-reference/compute/section_hypervisor_vmware.xml
+++ b/doc/config-reference/compute/section_hypervisor_vmware.xml
VMware vSphere
Introduction
OpenStack Compute supports the VMware vSphere product family and enables access to advanced features such as vMotion, High Availability, and Dynamic Resource Scheduling (DRS). This section describes how to configure VMware-based virtual machine images for launch. vSphere versions 4.1 and later are supported.
The VMware vCenter driver enables nova-compute to communicate with a VMware vCenter server that manages one or more ESX host clusters. The driver aggregates the ESX hosts in each cluster to present one large hypervisor entity for each cluster to the Compute scheduler. Because individual ESX hosts are not exposed to the scheduler, Compute schedules to the granularity of clusters and vCenter uses DRS to select the actual ESX host within the cluster. When a virtual machine makes its way into a vCenter cluster, it can use all vSphere features.
The following sections describe how to configure the VMware vCenter driver.
High-level architecture
The following diagram shows a high-level view of the VMware driver architecture:
[Figure: VMware driver architecture]
As the figure shows, the OpenStack Compute scheduler sees three hypervisors, each corresponding to a cluster in vCenter. Nova-compute contains the VMware driver, and you can run multiple nova-compute services. While Compute schedules at the granularity of a cluster, the VMware driver inside nova-compute interacts with the vCenter APIs to select an appropriate ESX host within the cluster. Internally, vCenter uses DRS for placement.
The VMware vCenter driver also interacts with the OpenStack Image Service to copy VMDK images from the Image Service back-end store. The dotted line in the figure represents VMDK images being copied from the OpenStack Image Service to the vSphere data store. VMDK images are cached in the data store, so the copy operation is only required the first time that a VMDK image is used.
After OpenStack boots a VM into a vSphere cluster, the VM becomes visible in vCenter and can access vSphere advanced features. At the same time, the VM is visible in the OpenStack dashboard, and you can manage it as you would any other OpenStack VM. You can perform advanced vSphere operations in vCenter while you configure OpenStack resources such as VMs through the OpenStack dashboard.
The figure does not show how networking fits into the architecture. Both nova-network and the OpenStack Networking Service are supported. For details, see "Networking with VMware vSphere".
Configuration overview
To get started with the VMware vCenter driver, complete the following high-level steps:
1. Configure vCenter correctly. See "Prerequisites and limitations".
2. Configure nova.conf for the VMware vCenter driver. See "VMwareVCDriver configuration options".
3. Load desired VMDK images into the OpenStack Image Service. See "Images with VMware vSphere".
4. Configure networking with either nova-network or the OpenStack Networking Service. See "Networking with VMware vSphere".
Prerequisites and limitations
Use the following list to prepare a vSphere environment that runs with the VMware vCenter driver:
vCenter inventory: Make sure that any vCenter used by OpenStack contains a single data center. This temporary limitation will be removed in a future Havana stable release.
DRS: For any cluster that contains multiple ESX hosts, enable DRS and enable fully automated placement.
Shared storage: Only shared storage is supported, and data stores must be shared among all hosts in a cluster. It is recommended to remove data stores not intended for OpenStack from clusters being configured for OpenStack. Currently, a single data store can be used per cluster. This temporary limitation will be removed in a future Havana stable release.
Clusters and data stores: Do not use OpenStack clusters and data stores for other purposes. If you do, OpenStack displays incorrect usage information.
Networking: The networking configuration depends on the desired networking model. See "Networking with VMware vSphere".
Security groups: If you use the VMware driver with the OpenStack Networking Service running the NSX plug-in, security groups are supported. If you use nova-network, security groups are not supported.
VNC: Enable port range 5900-6000 for VNC connections on every ESX host in all clusters under OpenStack control. For details on enabling VNC, see http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1246.
Ephemeral disks: Ephemeral disks are not supported. This temporary limitation will be addressed in a future Havana stable release.
VMware vCenter driver
Use the VMware vCenter driver (VMwareVCDriver) to connect OpenStack Compute with vCenter. This recommended configuration enables access through vCenter to advanced vSphere features like vMotion, High Availability, and Dynamic Resource Scheduling (DRS).
VMwareVCDriver configuration options
When you use the VMwareVCDriver (vCenter) with OpenStack Compute, add the following VMware-specific config options to the nova.conf file:

[DEFAULT]
compute_driver=vmwareapi.VMwareVCDriver

[vmwareapi]
host_ip=<vCenter host IP>
host_username=<vCenter username>
host_password=<vCenter password>
cluster_name=<vCenter cluster name>
datastore_regex=<optional datastore regex>
wsdl_location=http://127.0.0.1:8080/vmware/SDK/wsdl/vim25/vimService.wsdl

Most of these options are straightforward, but note the following points:
Clusters: The vCenter driver can support multiple clusters. To use more than one cluster, add multiple cluster_name lines to nova.conf with the appropriate cluster names (see the hypothetical multi-cluster example at the end of this section). Clusters and data stores used by the vCenter driver should not contain any VMs other than those created by the driver.
Data stores: The datastore_regex option specifies the data stores to use with Compute. For example, datastore_regex="nas.*" selects all the data stores that have a name starting with "nas". If this line is omitted, Compute uses the first data store returned by the vSphere API. It is recommended not to use this option and instead to remove data stores that are not intended for OpenStack.
A nova-compute service can control one or more clusters that contain multiple ESX hosts, which makes nova-compute a critical service from a high-availability perspective. Because the host that runs nova-compute can fail while the vCenter and ESX resources remain available, you should protect the nova-compute service against host failures like other critical OpenStack services.
Many nova.conf options that are relevant to libvirt do not apply to this driver.
Environments that use vSphere 5.0 and earlier require additional configuration. See "vSphere 5.0 and earlier additional setup".
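For example, here is a minimal sketch of the vCenter driver options for a deployment with two clusters and a data store name filter. The cluster names (cluster-alpha, cluster-beta), the credentials, and the nas.* pattern are placeholders rather than values from this guide; repeat the cluster_name option once per cluster:

[DEFAULT]
compute_driver=vmwareapi.VMwareVCDriver

[vmwareapi]
host_ip=192.168.10.5
host_username=administrator
host_password=password
cluster_name=cluster-alpha
cluster_name=cluster-beta
datastore_regex=nas.*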
Images with VMware vSphere
The vCenter driver supports images in the VMDK format. Disks in this format can be obtained from VMware Fusion or from an ESX environment. It is also possible to convert other formats, such as qcow2, to the VMDK format by using the qemu-img utility. After a VMDK disk is available, load it into the OpenStack Image Service. Then, you can use it with the VMware vCenter driver. The following sections provide additional details on the supported disk types and the commands used for conversion and upload.
Supported image types
Upload images to the OpenStack Image Service in VMDK format. The following VMDK disk types are supported:
VMFS Flat Disks (includes thin, thick, zeroedthick, and eagerzeroedthick). Note that once a VMFS thin disk is exported from VMFS to a non-VMFS location, like the OpenStack Image Service, it becomes a preallocated flat disk. This impacts the transfer time from the OpenStack Image Service to the data store when the full preallocated flat disk, rather than the thin disk, must be transferred.
Monolithic Sparse disks. Sparse disks are imported from the OpenStack Image Service into ESX as thin-provisioned disks. Monolithic Sparse disks can be obtained from VMware Fusion or can be created by converting from other virtual disk formats with the qemu-img utility.
The following table shows the vmware_disktype property that applies to each of the supported VMDK disk types:

Table: OpenStack Image Service disk type settings
vmware_disktype property    VMDK disk type
sparse                      Monolithic Sparse
thin                        VMFS flat, thin provisioned
preallocated (default)      VMFS flat, thick/zeroedthick/eagerzeroedthick

The vmware_disktype property is set when an image is loaded into the OpenStack Image Service. For example, the following command creates a Monolithic Sparse image by setting vmware_disktype to sparse:
$ glance image-create name="ubuntu-sparse" disk_format=vmdk \
container_format=bare is_public=true \
--property vmware_disktype="sparse" \
--property vmware_ostype="ubuntu64Guest" < ubuntuLTS-sparse.vmdk
Note that specifying thin does not provide any advantage over preallocated with the current version of the driver. Future versions might restore the thin properties of the disk after it is downloaded to a vSphere data store.
Convert and load images
With the qemu-img utility, disk images in several formats (such as qcow2) can be converted to the VMDK format.
For example, the following command converts a qcow2 Ubuntu Precise cloud image:
$ qemu-img convert -f qcow2 ~/Downloads/precise-server-cloudimg-amd64-disk1.img \
-O vmdk precise-server-cloudimg-amd64-disk1.vmdk
VMDK disks converted through qemu-img are always monolithic sparse VMDK disks with an IDE adapter type. Using the previous example of the Precise Ubuntu image after the qemu-img conversion, the command to upload the VMDK disk should be something like:
$ glance image-create --name precise-cloud --is-public=True \
--container-format=bare --disk-format=vmdk \
--property vmware_disktype="sparse" \
--property vmware_adaptertype="ide" < \
precise-server-cloudimg-amd64-disk1.vmdk
Note that vmware_disktype is set to sparse and vmware_adaptertype is set to ide in the previous command.
If the image did not come from the qemu-img utility, the vmware_disktype and vmware_adaptertype might be different. To determine the image adapter type from an image file, use the following command and look for the ddb.adapterType= line:
$ head -20 <vmdk file name>
Assuming a preallocated disk type and an iSCSI lsiLogic adapter type, the following command uploads the VMDK disk:
$ glance image-create name="ubuntu-thick-scsi" disk_format=vmdk \
container_format=bare is_public=true \
--property vmware_adaptertype="lsiLogic" \
--property vmware_disktype="preallocated" \
--property vmware_ostype="ubuntu64Guest" < ubuntuLTS-flat.vmdk
Currently, OS boot VMDK disks with an IDE adapter type cannot be attached to a virtual SCSI controller, and likewise disks with one of the SCSI adapter types (such as busLogic or lsiLogic) cannot be attached to the IDE controller. Therefore, as the previous examples show, it is important to set the vmware_adaptertype property correctly. The default adapter type is lsiLogic, which is SCSI, so you can omit the vmware_adaptertype property if you are certain that the image adapter type is lsiLogic.
Tag VMware images
In a mixed hypervisor environment, OpenStack Compute uses the hypervisor_type tag to match images to the correct hypervisor type. For VMware images, set the hypervisor type to vmware, as shown in the following command. Other valid hypervisor types include: xen, qemu, kvm, lxc, uml, hyperv, and powervm.
$ glance image-create name="ubuntu-thick-scsi" disk_format=vmdk \
container_format=bare is_public=true \
--property vmware_adaptertype="lsiLogic" \
--property vmware_disktype="preallocated" \
--property hypervisor_type="vmware" \
--property vmware_ostype="ubuntu64Guest" < ubuntuLTS-flat.vmdk
Optimize images
Monolithic Sparse disks are considerably faster to download but have the overhead of an additional conversion step. When imported into ESX, sparse disks are converted to VMFS flat thin-provisioned disks. The download and conversion steps only affect the first launched instance that uses the sparse disk image. The converted disk image is cached, so subsequent instances that use this disk image can simply use the cached version.
To avoid the conversion step (at the cost of longer download times), consider converting sparse disks to thin-provisioned or preallocated disks before loading them into the OpenStack Image Service. The following tools can be used to pre-convert sparse disks.
Using vSphere CLI (sometimes called the remote CLI or rCLI) tools. Assuming that the sparse disk is made available on a data store accessible by an ESX host, the following command converts it to preallocated format:
vmkfstools --server=ip_of_some_ESX_host -i /vmfs/volumes/datastore1/sparse.vmdk /vmfs/volumes/datastore1/converted.vmdk
(Note that the vifs tool from the same CLI package can be used to upload the disk to be converted. The vifs tool can also be used to download the converted disk if necessary.)
Using vmkfstools directly on the ESX host. If the SSH service is enabled on an ESX host, the sparse disk can be uploaded to the ESX data store through scp, and the vmkfstools utility local to the ESX host can be used to perform the conversion. After logging in to the host through ssh, run:
vmkfstools -i /vmfs/volumes/datastore1/sparse.vmdk /vmfs/volumes/datastore1/converted.vmdk
Using vmware-vdiskmanager. vmware-vdiskmanager is a utility that comes bundled with VMware Fusion and VMware Workstation. The following example converts a sparse disk to preallocated format:
'/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager' -r sparse.vmdk -t 4 converted.vmdk
In all of these cases, the converted VMDK is actually a pair of files: the descriptor file converted.vmdk and the actual virtual disk data file converted-flat.vmdk. The file to be uploaded to the OpenStack Image Service is converted-flat.vmdk.
Image handling
The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. As a result, the vCenter OpenStack Compute driver must download the VMDK via HTTP from the OpenStack Image Service to a data store that is visible to the hypervisor. To optimize this process, the first time a VMDK file is used, it gets cached in the data store. Subsequent virtual machines that need the VMDK use the cached version and do not have to copy the file again from the OpenStack Image Service.
Even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared data store. To avoid this copy, boot the image in linked_clone mode. To learn how to enable this mode, see the VMware options in the "Configuration reference" section. You can also override the linked_clone mode on a per-image basis by using the vmware_linked_clone property in the OpenStack Image Service, as shown in the example below.
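For example, a hypothetical command that disables linked clones for a single image by setting the vmware_linked_clone property in the OpenStack Image Service (the image name ubuntu-sparse is a placeholder for one of your images):
$ glance image-update ubuntu-sparse --property vmware_linked_clone="false"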
Networking with VMware vSphere
The VMware driver supports networking with both nova-network and the OpenStack Networking Service.
If you use nova-network with the FlatManager or FlatDHCPManager, before provisioning VMs, create a port group with the same name as the flat_network_bridge value in nova.conf (the default is br100). All VM NICs are attached to this port group. Ensure that the flat interface of the node that runs nova-network has a path to this network (see the sketch after this list).
If you use nova-network with the VlanManager, before provisioning VMs, make sure that the vlan_interface configuration option is set to match the ESX host interface that handles VLAN-tagged VM traffic. OpenStack Compute automatically creates the corresponding port groups.
If you use the OpenStack Networking Service, before provisioning VMs, create a port group with the same name as the vmware.integration_bridge value in nova.conf (the default is br-int). All VM NICs are attached to this port group for management by the OpenStack Networking plug-in.
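For the nova-network FlatDHCPManager case, a minimal nova.conf sketch might look like the following; the bridge name must match the port group you created, and eth1 is a placeholder for the interface that carries the flat network on the node running nova-network:

[DEFAULT]
network_manager=nova.network.manager.FlatDHCPManager
flat_network_bridge=br100
flat_interface=eth1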
Volumes with VMware vSphere
The VMware driver supports attaching volumes from the OpenStack Block Storage service. The VMware VMDK driver for OpenStack Block Storage is recommended and should be used for managing volumes that are based on vSphere data stores. For more information, see the VMware VMDK Driver documentation. An iscsi volume driver is also available, but it provides limited support and can be used only for attachments.
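As a rough sketch of enabling the recommended VMDK driver (assuming the Havana-era driver path cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver and placeholder credentials), the Block Storage service is pointed at the same vCenter through cinder.conf along these lines:

[DEFAULT]
volume_driver=cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip=<vCenter host IP>
vmware_host_username=<vCenter username>
vmware_host_password=<vCenter password>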
vSphere 5.0 and earlier additional setup
Users of vSphere 5.0 or earlier must host their WSDL files locally. These steps are applicable to vCenter 5.0 or ESXi 5.0. You can either mirror the WSDL from the vCenter or ESXi server that you intend to use, or download the SDK directly from VMware. These workaround steps fix a known issue with the WSDL that was resolved in later versions.
Mirror WSDL from vCenter (or ESXi)
1. Set the VMWAREAPI_IP shell variable to the IP address of the vCenter or ESXi host from which you plan to mirror files. For example:
$ export VMWAREAPI_IP=<your_vsphere_host_ip>
2. Create a local file system directory to hold the WSDL files:
$ mkdir -p /opt/stack/vmware/wsdl/5.0
3. Change into the new directory:
$ cd /opt/stack/vmware/wsdl/5.0
4. Use your OS-specific tools to install a command-line download tool such as wget.
5. Download the files to the local file cache:
wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vimService.wsdl
wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vim.wsdl
wget --no-check-certificate https://$VMWAREAPI_IP/sdk/core-types.xsd
wget --no-check-certificate https://$VMWAREAPI_IP/sdk/query-messagetypes.xsd
wget --no-check-certificate https://$VMWAREAPI_IP/sdk/query-types.xsd
wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vim-messagetypes.xsd
wget --no-check-certificate https://$VMWAREAPI_IP/sdk/reflect-messagetypes.xsd
wget --no-check-certificate https://$VMWAREAPI_IP/sdk/reflect-types.xsd
6. Because the reflect-types.xsd and reflect-messagetypes.xsd files do not fetch properly, you must stub them out. Use the following XML listing to replace the missing file content. The XML parser underneath Python can be very particular, and a space in the wrong place can break it, so copy the contents and formatting carefully:
<?xml version="1.0" encoding="UTF-8"?>
<schema
   targetNamespace="urn:reflect"
   xmlns="http://www.w3.org/2001/XMLSchema"
   xmlns:xsd="http://www.w3.org/2001/XMLSchema"
   elementFormDefault="qualified">
</schema>
7. Now that the files are locally present, tell the driver to look for the SOAP service WSDLs in the local file system and not on the remote vSphere server. Add the following setting to the nova.conf file for your nova-compute node:
[vmware]
wsdl_location=file:///opt/stack/vmware/wsdl/5.0/vimService.wsdl
Alternatively, download the version-appropriate SDK from http://www.vmware.com/support/developer/vc-sdk/ and copy it to /opt/stack/vmware. Make sure that the WSDL is available, for example at /opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl. You must point nova.conf to fetch this WSDL file from the local file system by using a URL.
When you use the VMwareVCDriver (vCenter) with OpenStack Compute and vSphere version 5.0 or earlier, nova.conf must include the following extra config option:
[vmware]
wsdl_location=file:///opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl
VMware ESX driver
This section covers details of using the VMwareESXDriver. The ESX driver has not been extensively tested and is not recommended. To configure the VMware vCenter driver instead, see "VMware vCenter driver".
VMwareESXDriver configuration options
When you use the VMwareESXDriver (no vCenter) with OpenStack Compute, add the following VMware-specific configuration options to the nova.conf file:

[DEFAULT]
compute_driver=vmwareapi.VMwareESXDriver

[vmwareapi]
host_ip=<ESXi host IP>
host_username=<ESXi host username>
host_password=<ESXi host password>
wsdl_location=http://127.0.0.1:8080/vmware/SDK/wsdl/vim25/vimService.wsdl

Remember that you need one nova-compute service per ESXi host. It is recommended that this host run as a VM on the same ESXi host that it manages.
Many nova.conf options that are relevant to libvirt do not apply to this driver.
Requirements and limitations
The ESXDriver cannot use many of the advanced capabilities of the vSphere platform, namely vMotion, High Availability, and Dynamic Resource Scheduler (DRS).
Configuration reference