From 180cc4d9f156f9d090ea8f6c11b7957bb7789f59 Mon Sep 17 00:00:00 2001
From: Andreas Jaeger
Date: Thu, 31 Jul 2014 22:23:32 +0200
Subject: [PATCH] Arch Design: Edits in chapter specialized

Fix mainly links, capitalization and project names.

Change-Id: Ia24a3d20278436fd0a5d4ed42521d1ca863af59a
---
 ...ction_desktop_as_a_service_specialized.xml | 20 +++++++++--------
 .../section_hardware_specialized.xml          | 22 +++++++++++--------
 .../section_introduction_specialized.xml      |  2 +-
 .../section_multi_hypervisor_specialized.xml  |  2 +-
 .../section_networking_specialized.xml        |  2 +-
 ...ion_openstack_on_openstack_specialized.xml | 21 ++++++++++++------
 ...oftware_defined_networking_specialized.xml |  4 ++--
 7 files changed, 43 insertions(+), 30 deletions(-)

diff --git a/doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml b/doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml
index b15f97501e..085f987100 100644
--- a/doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml
+++ b/doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml
@@ -25,7 +25,7 @@
         example:
-        Boot storms - What happens when hundreds or
+        Boot storms: What happens when hundreds or
         thousands of users log in during shift changes, affects the
         storage design.
@@ -39,19 +39,21 @@
         Broker
-        The Connection Broker is a central component of the
-        architecture that determines which Remote Desktop Host will be
+        The connection broker is a central component of the
+        architecture that determines which remote desktop host will be
         assigned or connected to the user. The broker is often a
         full-blown management product allowing for the automated
-        deployment and provisioning of Remote Desktop Hosts.
+        deployment and provisioning of remote desktop hosts.
         Possible solutions
-        There a number of commercial products available today that
+
+        There are a number of commercial products available today that
         provide such a broker solution but nothing that is native in
-        the OpenStack project. There of course is also the option of
-        not providing a broker and managing this manually - but this
-        would not suffice as a large scale, enterprise
-        solution.
+        the OpenStack project. Not providing a broker is also
+        an option, but managing this manually would not suffice as a
+        large scale, enterprise solution.
+
+
        Diagram

diff --git a/doc/arch-design/specialized/section_hardware_specialized.xml b/doc/arch-design/specialized/section_hardware_specialized.xml
index d7571a8e66..7951da3866 100644
--- a/doc/arch-design/specialized/section_hardware_specialized.xml
+++ b/doc/arch-design/specialized/section_hardware_specialized.xml
@@ -22,21 +22,25 @@
         work.
         Solutions
         In order to provide cryptography offloading to a set of
-        instances, it is possible to use Glance configuration options
-        to assign the cryptography chip to a device node in the guest.
-        The documentation at
-        http://docs.openstack.org/cli-reference/content/chapter_cli-glance-property.html
-        contains further information on configuring this solution, but
-        it allows all guests using the configured images to access the
-        hypervisor cryptography device.
+        instances, it is possible to use Image Service configuration
+        options to assign the cryptography chip to a device node in
+        the guest. The OpenStack Command Line
+        Reference contains further information on
+        configuring this solution in the chapter Image
+        Service property keys, but it allows all guests using
+        the configured images to access the hypervisor cryptography
+        device.
         If direct access to a specific device is required, it can be
+        dedicated to a single instance per hypervisor through the use
         of PCI pass-through. The OpenStack administrator needs to
         define a flavor that specifically has the PCI device in order
         to properly schedule instances. More information regarding PCI
         pass-through, including instructions for implementing and
-        using it, is available at
-        https://wiki.openstack.org/wiki/Pci_passthrough#How_to_check_PCI_status_with_PCI_api_patches.
+        using it, is available at https://wiki.openstack.org/wiki/Pci_passthrough.
+

diff --git a/doc/arch-design/specialized/section_introduction_specialized.xml b/doc/arch-design/specialized/section_introduction_specialized.xml
index 6fd4b19545..e1929644a0 100644
--- a/doc/arch-design/specialized/section_introduction_specialized.xml
+++ b/doc/arch-design/specialized/section_introduction_specialized.xml
@@ -13,7 +13,7 @@
         can't be neatly categorized into one of the other major
         sections. This section discusses some of these unique use
         cases with some additional details and design considerations
-        for each use case. 
+        for each use case:
         Specialized Networking: This describes running

diff --git a/doc/arch-design/specialized/section_multi_hypervisor_specialized.xml b/doc/arch-design/specialized/section_multi_hypervisor_specialized.xml
index c8310f92b9..a96e1e2b79 100644
--- a/doc/arch-design/specialized/section_multi_hypervisor_specialized.xml
+++ b/doc/arch-design/specialized/section_multi_hypervisor_specialized.xml
@@ -51,7 +51,7 @@
         a specific host aggregate using the metadata of the image,
         the VMware host aggregate compute nodes communicate with
         vCenter which then requests that the instance be scheduled to run on
-        an ESXi hypervisor. As of the Icehouse release,this
+        an ESXi hypervisor. As of the Icehouse release, this
         functionality requires that VMware Distributed Resource
         Scheduler (DRS) be enabled on a cluster and set to "Fully
         Automated".

diff --git a/doc/arch-design/specialized/section_networking_specialized.xml b/doc/arch-design/specialized/section_networking_specialized.xml
index 0820715e47..afa9c4bcca 100644
--- a/doc/arch-design/specialized/section_networking_specialized.xml
+++ b/doc/arch-design/specialized/section_networking_specialized.xml
@@ -19,7 +19,7 @@
         layer 2 listeners.
        Possible solutions
-        Deploying an OpenStack installation using Neutron with a
+        Deploying an OpenStack installation using OpenStack Networking with a
         provider network will allow direct layer 2 connectivity to an
         upstream networking device. This design provides the layer 2
         connectivity required to communicate via Intermediate

diff --git a/doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml b/doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml
index 6b62f00219..8eda69022a 100644
--- a/doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml
+++ b/doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml
@@ -1,4 +1,8 @@
+
+%openstack;
+]>
+        available OpenStack Compute resources, whether public or
+        private.
         Challenges
         The network aspect of deploying a nested cloud is the most
         complicated aspect of this architecture. When using VLANs,
@@ -42,10 +46,12 @@
         deploying additional stacks will be a trivial thing and can
         be performed in an automated fashion.
         The OpenStack-On-OpenStack project (TripleO) is addressing
-        this issue - although at the current time the project does not
-        provide comprehensive coverage for the nested stacks. More
-        information can be found at
-        https://wiki.openstack.org/wiki/TripleO.
+        this issue—although at the current time the project does
+        not provide comprehensive coverage for the nested stacks. More
+        information can be found at
+        https://wiki.openstack.org/wiki/TripleO.
+
+
         Possible solutions: hypervisor
         In the case of running TripleO, the underlying OpenStack
@@ -56,7 +62,8 @@
         purposes, and performance would not be a critical factor,
         QEMU can be utilized instead. It is also possible to run a KVM
         hypervisor in an instance
-        (http://davejingtian.org/2014/03/30/nested-kvm-just-for-fun/),
+        (see http://davejingtian.org/2014/03/30/nested-kvm-just-for-fun/),
         though this is not a supported configuration, and could be a
         complex solution for such a use case.
         Diagram

diff --git a/doc/arch-design/specialized/section_software_defined_networking_specialized.xml b/doc/arch-design/specialized/section_software_defined_networking_specialized.xml
index 39130b10d4..0df094d7a6 100644
--- a/doc/arch-design/specialized/section_software_defined_networking_specialized.xml
+++ b/doc/arch-design/specialized/section_software_defined_networking_specialized.xml
@@ -5,8 +5,8 @@
 version="5.0" xml:id="software-defined-networking-sdn">
-        Software Defined Networking (SDN)
-        Software Defined Networking is the separation of the data
+        Software Defined Networking
+        Software Defined Networking (SDN) is the separation of the data
         plane and control plane. SDN has become a popular method of
         managing and controlling packet flows within networks. SDN
         uses overlays or directly controlled layer 2 devices to
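
The hardware section patched above says the administrator must define a flavor that specifically has the PCI device so instances schedule correctly. A minimal configuration sketch of how this was typically wired up in the Icehouse/Juno era follows; the alias name `crypto` and the vendor/product IDs are hypothetical placeholders, not values from the patch, and a working Compute deployment is assumed:

```shell
# /etc/nova/nova.conf on the compute and controller nodes
# (vendor/product IDs below are illustrative, not from the patch):
#   pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "0443"}
#   pci_alias = {"name": "crypto", "vendor_id": "8086", "product_id": "0443"}

# Define a flavor whose extra spec requests one device from that alias;
# the scheduler then places instances only on hosts exposing the device.
nova flavor-create m1.crypto auto 4096 40 2
nova flavor-key m1.crypto set "pci_passthrough:alias"="crypto:1"
```

Instances booted with such a flavor receive the whole PCI function, which is why the text stresses that the device is dedicated to a single instance per hypervisor.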
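
The specialized-networking section patched above relies on a provider network for direct layer 2 connectivity to an upstream device. A hedged sketch using the neutron CLI of that era follows; `physnet1` and VLAN 101 are assumptions that must match the plug-in's `bridge_mappings` and the upstream switch configuration:

```shell
# Create a network mapped straight onto an upstream VLAN segment
# (provider attributes are admin-only; values here are illustrative).
neutron net-create layer2-net --shared \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 101

# Attach a subnet so instances can be booted onto the segment.
neutron subnet-create layer2-net 203.0.113.0/24 --name layer2-subnet
```

Instances attached to this network receive frames directly from the upstream layer 2 domain, which is what the multicast/listener use case in that section requires.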
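
The OpenStack-on-OpenStack section patched above mentions running a KVM hypervisor inside an instance as an unsupported option. Before attempting that, the underlying physical host must allow nested virtualization; a small host-configuration sketch (Intel module shown; substitute `kvm_amd` on AMD hardware):

```shell
# Check whether nested guests are allowed ("Y" or "1" when enabled).
cat /sys/module/kvm_intel/parameters/nested

# Persist the setting; the module must be reloaded for it to take effect.
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm_intel.conf
sudo modprobe -r kvm_intel && sudo modprobe kvm_intel
```

Even with nesting enabled, the patch's caveat stands: this remains an unsupported configuration, and QEMU (software emulation) is the fallback for proof-of-concept nested clouds where performance is not critical.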