VMware VMDK driver
Use the VMware VMDK driver to enable management of the
OpenStack Block Storage volumes on vCenter-managed data
stores. Volumes are backed by VMDK files on data stores that
use any VMware-compatible storage technology, such as NFS,
iSCSI, Fibre Channel, and vSAN.
The VMware ESX VMDK driver is deprecated as of the
Icehouse release and might be removed in Juno or a
subsequent release. The VMware vCenter VMDK driver
continues to be fully supported.
Functional context
The VMware VMDK driver connects to vCenter, through
which it can dynamically access all the data stores
visible from the ESX hosts in the managed cluster.
When you create a volume, the VMDK driver creates a VMDK
file on demand. The VMDK file creation completes only when
the volume is subsequently attached to an instance,
because the set of data stores visible to the instance
determines where to place the volume.
The running vSphere VM is automatically reconfigured to
attach the VMDK file as an extra disk. Once attached, you
can log in to the running vSphere VM to rescan and
discover this extra disk.
Configuration
The recommended volume driver for OpenStack Block
Storage is the VMware vCenter VMDK driver. When you
configure the driver, you must match it with the
appropriate OpenStack Compute driver from VMware, and both
drivers must point to the same server.
In the nova.conf file, use this
option to define the Compute driver:
compute_driver=vmwareapi.VMwareVCDriver
In the cinder.conf file, use this
option to define the volume driver:
volume_driver=cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
The following table lists various options that the
drivers support for the OpenStack Block Storage
configuration (cinder.conf):
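As a minimal sketch, the remaining vCenter connection
options might look like the following in
cinder.conf; the address and credentials are
placeholder values:
vmware_host_ip=192.168.0.10
vmware_host_username=administrator
vmware_host_password=password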
VMDK disk type
The VMware VMDK drivers support the creation of VMDK
disk files of type thin,
thick, or
eagerZeroedThick. Use the
vmware:vmdk_type extra spec key with the
appropriate value to specify the VMDK disk file type. The
following table captures the mapping between the extra
spec entry and the VMDK disk file type:
Extra spec entry to VMDK disk file type
Extra spec entry                       VMDK disk file type
vmware:vmdk_type=thin                  thin
vmware:vmdk_type=thick                 thick
vmware:vmdk_type=eagerZeroedThick      eagerZeroedThick
If you do not specify a vmdk_type extra
spec entry, the default disk file type is
thin.
The following example shows how to create a
thick VMDK volume by using the
appropriate vmdk_type:
$ cinder type-create thick_volume
$ cinder type-key thick_volume set vmware:vmdk_type=thick
$ cinder create --volume-type thick_volume --display-name volume1 1
Clone type
With the VMware VMDK drivers, you can create a volume
from another source volume or a snapshot point. The VMware
vCenter VMDK driver supports the full
and linked/fast clone types. Use the
vmware:clone_type extra spec key to
specify the clone type. The following table captures the
mapping for clone types:
Extra spec entry to clone type mapping
Extra spec entry                       Clone type
vmware:clone_type=full                 full
vmware:clone_type=linked               linked/fast
If you do not specify the clone type, the default is
full.
The following example shows linked cloning from another
source volume:
$ cinder type-create fast_clone
$ cinder type-key fast_clone set vmware:clone_type=linked
$ cinder create --volume-type fast_clone --source-volid 25743b9d-3605-462b-b9eb-71459fe2bb35 --display-name volume1 1
Note: The VMware ESX VMDK driver ignores the extra spec
entry and always creates a full
clone.
Use vCenter storage policies to specify back-end data stores
This section describes how to configure back-end data
stores by using storage policies. In vCenter, you can
create one or more storage policies and expose them as a
Block Storage volume type to a VMDK volume. The storage
policies are exposed to the VMDK driver through the extra
spec property with the
vmware:storage_profile key.
For example, assume a storage policy in vCenter named
gold_policy, and a Block Storage
volume type named vol1 with the extra
spec key vmware:storage_profile set to
the value gold_policy. Any Block
Storage volume creation that uses the
vol1 volume type places the volume
only in data stores that match the
gold_policy storage policy.
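Following the pattern of the earlier cinder examples,
the vol1 volume type in this scenario
might be set up as follows; the type and policy names
are the assumed example values from above:
$ cinder type-create vol1
$ cinder type-key vol1 set vmware:storage_profile=gold_policy
$ cinder create --volume-type vol1 --display-name gold_volume 1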
The Block Storage back-end configuration for vSphere
data stores is automatically determined based on the
vCenter configuration. If you configure a connection to
vCenter version 5.5 or later in the
cinder.conf file, the use of
storage policies to configure back-end data stores is
automatically supported.
Note: Any data stores that you configure for the Block
Storage service must also be configured for the
Compute service.
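As a minimal sketch, assuming the same data stores share
a common name prefix, the Compute service can be limited
to them with the datastore_regex option
in the [vmware] section of
nova.conf; the prefix is a
placeholder value:
datastore_regex=openstack-.*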
To configure back-end data stores by using storage
policies
1. In vCenter, tag the data stores to be used for
   the back end.
   OpenStack also supports policies that are
   created by using vendor-specific capabilities;
   for example, vSAN-specific storage policies.
   The tag value serves as the policy. For
   details, see Storage policy-based configuration
   in vCenter.
2. Set the extra spec key
   vmware:storage_profile in the desired Block
   Storage volume types to the policy name that
   you created in the previous step.
3. Optionally, for the vmware_host_version
   parameter, enter the version number of your
   vSphere platform. For example, 5.5.
   This setting overrides the default location for
   the corresponding WSDL file. Among other
   scenarios, you can use this setting to prevent
   WSDL error messages during the development
   phase or to work with a newer version of
   vCenter. A sketch of this setting appears after
   this procedure.
4. Complete the other vCenter configuration
   parameters as appropriate.
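As a minimal sketch, the optional setting from the third
step is a single line in cinder.conf;
the version value is an example:
vmware_host_version=5.5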
The following considerations apply to configuring
SPBM for the Block Storage service:
Any volume that is created without an
associated policy (that is, without an
associated volume type that specifies the
vmware:storage_profile
extra spec) has no policy-based
placement.
Supported operations
The VMware vCenter and ESX VMDK drivers support these
operations:
- Create volume
- Create volume from another source volume
- Create volume from snapshot
- Create volume from glance image
- Attach volume
- Detach volume
- Create snapshot
- Delete snapshot
- Upload volume as image to glance
Although the VMware ESX VMDK driver supports these
operations, it has not been extensively tested.
Storage policy-based configuration in vCenter
You can configure Storage Policy-Based Management (SPBM)
profiles for vCenter data stores supporting the Compute,
Image Service, and Block Storage components of an OpenStack
implementation.
In a vSphere OpenStack deployment, SPBM enables you to
delegate several data stores for storage, which reduces
the risk of running out of storage space. The policy logic
selects the data store based on accessibility and
available storage space.
Prerequisites
- Determine the data stores to be used by the SPBM
  policy.
- Determine the tag that identifies the data stores in
  the OpenStack component configuration.
- Create separate policies or sets of data stores for
  separate OpenStack components.
Create storage policies in vCenter
To create storage policies in vCenter
1. In vCenter, create the tag that identifies the
   data stores:
   a. From the Home screen, click Tags.
   b. Specify a name for the tag.
   c. Specify a tag category. For example,
      spbm-cinder.
2. Apply the tag to the data stores to be used by
   the SPBM policy.
   For details about creating tags in vSphere,
   see the vSphere documentation.
3. In vCenter, create a tag-based storage policy
   that uses one or more tags to identify a set of
   data stores.
   You use this tag name and category when you
   configure the *.conf file for the
   OpenStack component. For details about creating
   storage policies in vSphere, see the vSphere
   documentation.
Data store selection
If storage policy is enabled, the driver initially
selects all the data stores that match the associated
storage policy.
If two or more data stores match the storage policy,
the driver chooses the data store that is connected to
the maximum number of hosts.
In case of ties, the driver chooses the data store with
the lowest space utilization, where space utilization is
defined by the (1 - free space / total space) metric.
These actions reduce the number of volume migrations
while attaching the volume to instances.
The volume must be migrated if the ESX host for the
instance cannot access the data store that contains the
volume.
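As a worked example with assumed numbers: if two tied
data stores have 200 GB and 600 GB free out of 1024 GB
each, their space utilizations are
1 - 200/1024 ≈ 0.80 and 1 - 600/1024 ≈ 0.41, so the
driver places the volume on the second data store.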