diff --git a/doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml b/doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml
index 8264b37820..6cf4a573f1 100644
--- a/doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml
+++ b/doc/arch-design/specialized/section_desktop_as_a_service_specialized.xml
@@ -39,18 +39,18 @@
 Broker
-The connection broker is a central component of the
-architecture that determines which remote desktop host the user
-connects to. The broker is often a
-full-blown management product enabling the automated
+The connection broker determines which remote desktop host
+each user connects to. Medium- and large-scale environments
+require a broker because it is a central component of the
+architecture. The broker is typically a complete management
+product that enables automated
 deployment and provisioning of remote desktop hosts.
 Possible solutions
 There are a number of commercial products currently available that
-provide such a broker solution but nothing that is native to
-the OpenStack project. Not providing a broker is also
+provide a broker solution. However, no native OpenStack project
+provides broker services. Not providing a broker is also
 an option, but managing this manually would not suffice for a
 large scale, enterprise solution.
diff --git a/doc/arch-design/specialized/section_hardware_specialized.xml b/doc/arch-design/specialized/section_hardware_specialized.xml
index 675cf8457a..a82f456a2f 100644
--- a/doc/arch-design/specialized/section_hardware_specialized.xml
+++ b/doc/arch-design/specialized/section_hardware_specialized.xml
@@ -7,7 +7,7 @@
 Specialized hardware
 Certain workloads require specialized hardware devices that
-are either difficult to virtualize or impossible to share.
+are difficult to virtualize or cannot be shared.
 Applications such as load balancers, highly parallel brute
 force computing, and direct to wire networking may need
 capabilities that basic OpenStack components do not
@@ -18,7 +18,7 @@
 improve performance or provide capabilities that are not
 virtual CPU, RAM, network, or storage. These can be a shared
 resource, such as a cryptography processor, or a dedicated
-resource, such as a Graphics Processing Unit. OpenStack can
+resource, such as a Graphics Processing Unit (GPU). OpenStack can
 provide some of these, while others may need extra work.
@@ -26,14 +26,14 @@
 Solutions
 To provide cryptography offloading to a set of instances,
 you can use Image service configuration
-options to assign the cryptography chip to a device node in
-the guest. The OpenStack Command Line
+options. For example, assign the cryptography chip to a
+device node in the guest. The OpenStack Command Line
 Reference contains further information on configuring
 this solution in the chapter
-Image service property keys, but it allows all
-guests using the configured images to access the hypervisor
-cryptography device.
+Image service property keys. A challenge, however, is that this
+option allows all guests using the configured images
+to access the hypervisor cryptography device, as the sketch
+below illustrates.
 If you require direct access to a specific device, PCI
 pass-through enables you to dedicate the device to a single
 instance per hypervisor. You must define a flavor that
diff --git a/doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml b/doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml
index 98476ea2d2..db1684f789 100644
--- a/doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml
+++ b/doc/arch-design/specialized/section_openstack_on_openstack_specialized.xml
@@ -10,31 +10,32 @@
 xml:id="arch-guide-openstack-on-openstack">
 OpenStack on OpenStack
-In some cases it is necessary to run OpenStack nested on top
-of another OpenStack cloud. This scenario enables you to manage
-and provision complete OpenStack cloud environments on
-instances running on hypervisors and servers that the underlying
-OpenStack cloud controls. Public cloud providers can use
-this technique to manage the upgrade and
-maintenance process on complete OpenStack-based clouds.
+In some cases, you may need to run OpenStack nested on top
+of another OpenStack cloud. This scenario describes how to
+manage and provision complete OpenStack environments on
+instances running on hypervisors and servers that an underlying
+OpenStack environment controls.
+Public cloud providers can use this technique to manage the
+upgrade and maintenance process on complete OpenStack environments.
 Developers and those testing OpenStack can also use this
-guidance to provision their own OpenStack environments on
+technique to provision their own OpenStack environments on
 available OpenStack Compute resources, whether public or
 private.
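To ground the two hardware options in section_hardware_specialized above (cryptography offloading through Image service property keys, and dedicating a device through PCI pass-through), here is a minimal command-line sketch. It assumes a virtio RNG device stands in for the cryptography chip; the image name, flavor names, and PCI vendor/product IDs are hypothetical placeholders, not values from this guide:

    # Shared cryptography source, sketched with the hw_rng property keys
    # (image and flavor names are hypothetical):
    $ openstack image set --property hw_rng_model=virtio crypto-image
    $ openstack flavor set --property hw_rng:allowed=True m1.crypto
    # Every guest booted from crypto-image can now reach the hypervisor
    # RNG device, which is the shared-access challenge noted above.

    # Dedicated device instead: whitelist and alias the PCI device in
    # nova.conf on the compute node (IDs are hypothetical) ...
    #   pci_passthrough_whitelist = {"vendor_id":"8086","product_id":"0443"}
    #   pci_alias = {"name":"crypto","vendor_id":"8086","product_id":"0443"}
    # ... then tie the alias to a flavor so a single instance owns the device:
    $ openstack flavor set --property "pci_passthrough:alias"="crypto:1" m1.crypto-dedicated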
 Challenges
 The network aspect of deploying a nested cloud is the most
 complicated aspect of this architecture. You must expose VLANs
-to the physical ports on which the underlying cloud runs,
-as the bare metal cloud owns all the
-hardware, but you must also expose them to the nested
+to the physical ports on which the underlying cloud runs because
+the bare metal cloud owns all the
+hardware. You must expose them to the nested
 levels as well.
 Alternatively, you can use the network overlay
-technologies on the OpenStack cloud running on OpenStack to provide
-the required software defined networking for the deployment.
+technologies in the nested OpenStack environment, running on the
+host OpenStack environment, to provide the required
+software-defined networking for the deployment.
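For example, the underlying cloud can publish one of those VLANs as a provider network for the nested cloud to consume. A minimal sketch, assuming a hypothetical physical network name (physnet1) and VLAN ID:

    $ openstack network create nested-data \
        --provider-network-type vlan \
        --provider-physical-network physnet1 \
        --provider-segment 1000

With the overlay alternative, an ordinary tenant network in the host cloud carries the nested traffic instead, so no physical VLAN plumbing is required.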
 Hypervisor
-A key question to address in this scenario is which
+In this example architecture, consider which
 approach you should take to provide a nested
 hypervisor in OpenStack. This decision influences which
 operating systems you use for the deployment of the nested
@@ -44,7 +45,7 @@
 Possible solutions: deployment
 Deployment of a full stack can be challenging but you can
 mitigate this difficulty by creating a Heat template to deploy the
-entire stack or a configuration management system. After creating
+entire stack, or by using a configuration management system. After creating
 the Heat template, you can automate the deployment of additional
 stacks. The OpenStack-on-OpenStack project (TripleO)
 addresses this issue. Currently, however, the project does
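A minimal sketch of the Heat-template approach described above; the template is a hypothetical placeholder (the image, flavor, and network names are assumptions), not TripleO's actual templates:

    $ cat > nested-stack.yaml <<'EOF'
    heat_template_version: 2015-10-15
    description: Sketch of a single-server nested OpenStack deployment
    resources:
      nested_controller:
        type: OS::Nova::Server
        properties:
          image: nested-hypervisor-image   # hypothetical image with a nested hypervisor enabled
          flavor: m1.xlarge                # hypothetical flavor
          networks:
            - network: nested-data
    EOF
    $ openstack stack create -t nested-stack.yaml nested-cloud

Once such a template works, repeating openstack stack create with new stack names automates the deployment of the additional stacks mentioned above.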