Merge "Arch Design: Edits in chapter specialized"

This commit is contained in:
Jenkins 2014-08-03 07:29:15 +00:00 committed by Gerrit Code Review
commit d38917d673
7 changed files with 43 additions and 30 deletions


@@ -25,7 +25,7 @@
example:</para>
<itemizedlist>
<listitem>
-<para>Boot storms - What happens when hundreds or
+<para>Boot storms: What happens when hundreds or
thousands of users log in during shift changes,
affects the storage design.</para>
</listitem>
@@ -39,19 +39,21 @@
</listitem>
</itemizedlist></section>
<section xml:id="broker"><title>Broker</title>
-<para>The Connection Broker is a central component of the
-architecture that determines which Remote Desktop Host will be
+<para>The connection broker is a central component of the
+architecture that determines which remote desktop host will be
assigned or connected to the user. The broker is often a
full-blown management product allowing for the automated
-deployment and provisioning of Remote Desktop Hosts.</para></section>
+deployment and provisioning of remote desktop hosts.</para></section>
<section xml:id="possible-solutions">
<title>Possible solutions</title>
-<para>There a number of commercial products available today that
+<para>
+There are a number of commercial products available today that
provide such a broker solution but nothing that is native in
-the OpenStack project. There of course is also the option of
-not providing a broker and managing this manually - but this
-would not suffice as a large scale, enterprise
-solution.</para></section>
+the OpenStack project. Not providing a broker is also
+an option, but managing this manually would not suffice as a
+large scale, enterprise solution.
+</para>
+</section>
<section xml:id="diagram"><title>Diagram</title>
<mediaobject>
<imageobject>


@@ -22,21 +22,25 @@
work.</para></section>
<section xml:id="solutions-specialized-hardware"><title>Solutions</title>
<para>In order to provide cryptography offloading to a set of
-instances, it is possible to use Glance configuration options
-to assign the cryptography chip to a device node in the guest.
-The documentation at
-http://docs.openstack.org/cli-reference/content/chapter_cli-glance-property.html
-contains further information on configuring this solution, but
-it allows all guests using the configured images to access the
-hypervisor cryptography device.</para>
+instances, it is possible to use Image Service configuration
+options to assign the cryptography chip to a device node in
+the guest. The <citetitle>OpenStack Command Line
+Reference</citetitle> contains further information on
+configuring this solution in the chapter <link
+xlink:href="http://docs.openstack.org/cli-reference/content/chapter_cli-glance-property.html">Image
+Service property keys</link>, but it allows all guests using
+the configured images to access the hypervisor cryptography
+device.</para>
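As a hedged illustration of the image-property mechanism described above: the exact property key depends on the hypervisor and the device being offloaded, but the pattern is the same as attaching a virtio RNG device backed by a host entropy source. The image ID below is a placeholder, and `hw_rng_model` is one example key; consult the "Image Service property keys" chapter for the keys that apply to a specific cryptography device.

```shell
# Illustrative sketch only: tag an image so that guests booted
# from it receive a virtio RNG device backed by the hypervisor's
# hardware entropy source. Every guest using this image gains
# access to the device, matching the caveat noted above.
glance image-update EXAMPLE-IMAGE-ID \
    --property hw_rng_model=virtio
```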
<para>If direct access to a specific device is required, it can be
dedicated to a single instance per hypervisor through the use
of PCI pass-through. The OpenStack administrator needs to
define a flavor that specifically has the PCI device in order
to properly schedule instances. More information regarding PCI
pass-through, including instructions for implementing and
-using it, is available at
-https://wiki.openstack.org/wiki/Pci_passthrough#How_to_check_PCI_status_with_PCI_api_patches.</para>
+using it, is available at <link
+xlink:href="https://wiki.openstack.org/wiki/Pci_passthrough#How_to_check_PCI_status_with_PCI_api_patches">https://wiki.openstack.org/wiki/Pci_passthrough</link>.
+</para>
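The flavor-based scheduling mentioned above can be sketched as follows, under the approach documented on the PCI pass-through wiki page; the vendor/product IDs, alias name, and flavor name are assumptions for illustration, not values from the source.

```shell
# nova.conf on the compute node: whitelist the device and define
# an alias for it (vendor/product IDs here are example values):
#   pci_passthrough_whitelist={"vendor_id":"8086","product_id":"1520"}
#   pci_alias={"vendor_id":"8086","product_id":"1520","name":"crypto"}

# Define a flavor that requests one device via the alias, so the
# scheduler only places such instances on hosts exposing it.
nova flavor-create pci.small auto 2048 20 2
nova flavor-key pci.small set "pci_passthrough:alias"="crypto:1"
```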
<mediaobject>
<imageobject>
<imagedata contentwidth="4in" fileref="../images/Specialized_Hardware2.png"/>


@@ -13,7 +13,7 @@
can't be neatly categorized into one of the other major
sections. This section discusses some of these unique use
cases with some additional details and design considerations
-for each use case.</para>
+for each use case:</para>
<itemizedlist>
<listitem>
<para>Specialized Networking: This describes running


@@ -51,7 +51,7 @@
a specific host aggregate using the metadata of the image, the
VMware host aggregate compute nodes communicate with vCenter
which then requests that the instance be scheduled to run on
-an ESXi hypervisor. As of the Icehouse release,this
+an ESXi hypervisor. As of the Icehouse release, this
functionality requires that VMware Distributed Resource
Scheduler (DRS) be enabled on a cluster and set to "Fully
Automated".</para>
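The image-metadata scheduling described above can be sketched with host aggregates; the aggregate name, host name, and metadata key below are illustrative assumptions, and the sketch assumes the `AggregateImagePropertiesIsolation` scheduler filter is enabled in nova.conf.

```shell
# Group the VMware-backed compute nodes into a host aggregate
# (names and the metadata key are example values).
nova aggregate-create vmware-hosts
nova aggregate-add-host vmware-hosts compute-vmware-01
nova aggregate-set-metadata vmware-hosts hypervisor_type=vmware

# Tag the image with matching metadata; with the
# AggregateImagePropertiesIsolation filter enabled, instances
# booted from this image are scheduled into the aggregate, whose
# nodes hand the request to vCenter as described above.
glance image-update EXAMPLE-IMAGE-ID \
    --property hypervisor_type=vmware
```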


@@ -19,7 +19,7 @@
layer 2 listeners.</para></section>
<section xml:id="possible-solutions-specialized-networking">
<title>Possible solutions</title>
-<para>Deploying an OpenStack installation using Neutron with a
+<para>Deploying an OpenStack installation using OpenStack Networking with a
provider network will allow direct layer 2 connectivity to an
upstream networking device. This design provides the layer 2
connectivity required to communicate via Intermediate
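Creating such a provider network can be sketched with the neutron client of that era; the physical network label, VLAN ID, and subnet range are deployment-specific assumptions for illustration.

```shell
# Create a provider network mapped directly onto an upstream VLAN,
# giving instances layer 2 adjacency with the physical network.
# "physnet1" and VLAN 100 must match the plug-in configuration of
# the particular deployment (example values here).
neutron net-create l2-direct-net --shared \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 100
neutron subnet-create l2-direct-net 203.0.113.0/24 \
    --name l2-direct-subnet
```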


@@ -1,4 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE section [
+<!ENTITY % openstack SYSTEM "../../common/entities/openstack.ent">
+%openstack;
+]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
@@ -15,8 +19,8 @@
maintenance process on complete OpenStack-based clouds.
Developers and those testing OpenStack can also use the
guidance to provision their own OpenStack environments on
-available OpenStack Compute resources, whether Public or
-Private.</para>
+available OpenStack Compute resources, whether public or
+private.</para>
<section xml:id="challenges-for-nested-cloud"><title>Challenges</title>
<para>The network aspect of deploying a nested cloud is the most
complicated aspect of this architecture. When using VLANs,
@@ -42,10 +46,12 @@
deploying additional stacks will be a trivial thing and can be
performed in an automated fashion.</para>
<para>The OpenStack-On-OpenStack project (TripleO) is addressing
-this issue - although at the current time the project does not
-provide comprehensive coverage for the nested stacks. More
-information can be found at
-https://wiki.openstack.org/wiki/TripleO.</para></section>
+this issue&mdash;although at the current time the project does
+not provide comprehensive coverage for the nested stacks. More
+information can be found at <link
+xlink:href="https://wiki.openstack.org/wiki/TripleO">https://wiki.openstack.org/wiki/TripleO</link>.
+</para>
+</section>
<section xml:id="possible-solutions-nested-cloud-hypervisor">
<title>Possible solutions: hypervisor</title>
<para>In the case of running TripleO, the underlying OpenStack
@@ -56,7 +62,8 @@
purposes, and performance would not be a critical factor, QEMU
can be utilized instead. It is also possible to run a KVM
hypervisor in an instance
-(http://davejingtian.org/2014/03/30/nested-kvm-just-for-fun/),
+(see <link
+xlink:href="http://davejingtian.org/2014/03/30/nested-kvm-just-for-fun/">http://davejingtian.org/2014/03/30/nested-kvm-just-for-fun/</link>),
though this is not a supported configuration, and could be a
complex solution for such a use case.</para></section>
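Whether a host can support nested KVM as described above can be checked and enabled as follows (Intel module shown; AMD hosts use `kvm_amd` with the equivalent parameter).

```shell
# Check whether nested virtualization is enabled for the KVM
# kernel module (prints "Y" or "1" when enabled on Intel hosts).
cat /sys/module/kvm_intel/parameters/nested

# Enable it persistently and reload the module (requires that no
# KVM guests are running during the reload).
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel
```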
<section xml:id="nested-cloud-diagram"><title>Diagram</title>


@@ -5,8 +5,8 @@
version="5.0"
xml:id="software-defined-networking-sdn">
<?dbhtml stop-chunking?>
-<title>Software Defined Networking (SDN)</title>
-<para>Software Defined Networking is the separation of the data
+<title>Software Defined Networking</title>
+<para>Software Defined Networking (SDN) is the separation of the data
plane and control plane. SDN has become a popular method of
managing and controlling packet flows within networks. SDN
uses overlays or directly controlled layer 2 devices to