<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="multi-hypervisor-example">
<?dbhtml stop-chunking?>
<title>Multi-hypervisor example</title>
<para>A financial company requires the migration of its applications
from a traditional virtualized environment to an API-driven,
orchestrated environment. Many of its applications have
strict hypervisor support requirements, but others do not have
the same restrictions and do not need the same features. Because of
these requirements, the overall target environment needs
multiple hypervisors.</para>
<para>The current environment consists of a vSphere environment
with 20 VMware ESXi hypervisors supporting 300 instances of
various sizes. Approximately 50 of the instances must run
on ESXi, but the rest have more flexible requirements.</para>
<para>The company decides to bring the management of the
overall system into a common OpenStack platform.</para>
<mediaobject>
<imageobject>
<imagedata contentwidth="4in" fileref="../figures/Compute_NSX.png"/>
</imageobject>
</mediaobject>
<para>The approach is to run a host aggregate consisting of KVM
hypervisors for the general-purpose instances and a separate
host aggregate for instances requiring ESXi. This way,
workloads that must run on ESXi can target those
hypervisors, while the rest target the KVM
hypervisors.</para>
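<para>A minimal sketch of the aggregate setup follows. The
aggregate names and metadata key are illustrative, and this
approach assumes the AggregateImagePropertiesIsolation scheduler
filter is enabled so that image properties can steer instances to
the matching aggregate:</para>
<screen><prompt>$</prompt> <userinput>nova aggregate-create kvm-general nova</userinput>
<prompt>$</prompt> <userinput>nova aggregate-set-metadata kvm-general hypervisor_type=qemu</userinput>
<prompt>$</prompt> <userinput>nova aggregate-create vmware-esxi nova</userinput>
<prompt>$</prompt> <userinput>nova aggregate-set-metadata vmware-esxi hypervisor_type=vmware</userinput></screen>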
<para>Images in the OpenStack Image service have particular
hypervisor metadata attached so that when a user requests a
certain image, the instance spawns on the relevant
aggregate. Images for ESXi use the VMDK format. You can convert
QEMU disk images to VMDK or to VMFS flat disks, which
include the thin, thick, zeroed-thick, and eager-zeroed-thick
variants.
Note that after you export a VMFS thin disk from VMFS to a
non-VMFS location, for example the OpenStack Image service, it
becomes a preallocated flat disk. This increases the transfer
time from the OpenStack Image service to the data store because
it requires moving the full preallocated flat disk rather than the
thin disk.</para>
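<para>As a sketch of the image preparation, qemu-img can convert a
QCOW2 image to a VMDK subformat, and the resulting image can be
uploaded to the Image service with VMware-specific properties. The
image name and property values shown are illustrative:</para>
<screen><prompt>$</prompt> <userinput>qemu-img convert -f qcow2 -O vmdk -o subformat=streamOptimized \
ubuntu.qcow2 ubuntu.vmdk</userinput>
<prompt>$</prompt> <userinput>glance image-create --name ubuntu-esxi --disk-format vmdk \
--container-format bare --property vmware_disktype="streamOptimized" \
--property vmware_adaptertype="lsiLogic" \
--property hypervisor_type="vmware" &lt; ubuntu.vmdk</userinput></screen>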
<para>This example has the additional complication that, rather
than spawning directly on a hypervisor selected through
image metadata and host aggregates, the compute nodes in the
VMware host aggregate communicate with vCenter,
which then schedules the instance to run on
an ESXi hypervisor. As of the Icehouse release, this
functionality requires that VMware Distributed Resource
Scheduler (DRS) be enabled on a cluster and set to "Fully
Automated".</para>
<para>Because DRS relies on vMotion, which in turn requires
shared storage, vSphere in this design requires shared
storage. This solution uses shared storage to provide Block
Storage capabilities to the KVM instances while also providing
the storage for vSphere. The environment uses a dedicated data
network to provide this functionality, so the compute
hosts should have dedicated NICs to support this
traffic. vSphere supports the use of OpenStack Block Storage
to present storage from a VMFS datastore to an instance, so
the use of Block Storage in this architecture supports both
hypervisors.</para>
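<para>One way to serve both hypervisors from Block Storage is a
multi-backend cinder.conf, sketched below. The backend names, the
vCenter details, and the choice of an NFS back end for the KVM
instances are assumptions for illustration, not requirements of
the design:</para>
<programlisting language="ini">[DEFAULT]
enabled_backends = vmdk-backend,nfs-backend

[vmdk-backend]
# Presents volumes from VMFS datastores through vCenter
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = vcenter.example.com
vmware_host_username = openstack
vmware_host_password = secret

[nfs-backend]
# Generic NFS driver for volumes attached to KVM instances
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares</programlisting>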
<para>In this case, OpenStack Networking provides network connectivity
with the VMware NSX plug-in driver configured. Alternatively,
the system can use legacy networking (nova-network), which both
hypervisors in this design support, but which has
limitations. Specifically, vSphere with legacy networking does
not support security groups. With VMware NSX as part of the
design, however, when a user launches an instance within
either of the host aggregates, the instance attaches to the
appropriate overlay-based logical networks as defined
by the user.</para>
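<para>At its simplest, enabling NSX means selecting it as the
Networking core plug-in and pointing it at the NSX controllers.
The plug-in path below matches the Icehouse-era tree, and the
credentials and controller address are placeholders:</para>
<programlisting language="ini"># /etc/neutron/neutron.conf
core_plugin = neutron.plugins.vmware.plugin.NsxPlugin

# /etc/neutron/plugins/vmware/nsx.ini (illustrative values)
nsx_user = admin
nsx_password = secret
nsx_controllers = 192.0.2.10:443</programlisting>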
<para>There are design considerations around OpenStack Compute
integration. When using vSphere with OpenStack, the
nova-compute service communicating with
vCenter shows up as a single large hypervisor representing the
entire ESXi cluster (multiple instances of nova-compute can
represent multiple ESXi clusters or connect to
multiple vCenter servers). If the process running the
nova-compute service crashes, the connection to that
particular vCenter Server, and to any ESXi clusters behind it,
is severed, and no further instances can be provisioned
on that vCenter, even though you can configure
vSphere itself for high availability. Therefore, you must
monitor the nova-compute service that
connects to vSphere for any disruptions.</para>
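<para>A simple check is to poll the state of the vCenter-facing
service with the nova client, for example from cron or an existing
monitoring system; the host name below is a placeholder:</para>
<screen><prompt>$</prompt> <userinput>nova service-list --binary nova-compute --host vcenter-proxy-1</userinput></screen>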
</section>