[arch-design-draft] Use cases edits

Clean up migrated content

Change-Id: I4fb96ddc6bc5fe6b3bd01a79e6b930a9c5211ec3
Implements: blueprint arch-guide-restructure
daz 2016-10-14 12:16:50 +11:00 committed by KATO Tomoyuki
parent 17865e6a25
commit 4758ce6dfd
11 changed files with 93 additions and 111 deletions


@ -1,5 +0,0 @@
=====================
Adding another region
=====================
.. TODO


@ -24,10 +24,9 @@ Solutions
To provide cryptography offloading to a set of instances,
you can use Image service configuration options.
For example, assign the cryptography chip to a device node in the guest.
For further information on this configuration, see `Image service
property keys <http://docs.openstack.org/cli-reference/glance-property-keys.html>`_.
However, this option allows all guests using the
configured images to access the hypervisor cryptography device.
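The property can be set with the openstacksdk Python library. The following
is a minimal sketch, assuming a cloud entry named ``mycloud`` in
``clouds.yaml`` and an image called ``crypto-guest`` (both hypothetical
names); ``hw_rng_model`` is shown only as a representative hypervisor device
property, and the property for a dedicated cryptography accelerator depends
on your hypervisor.

.. code-block:: python

   import openstack

   # Connect using credentials from clouds.yaml (hypothetical cloud name).
   conn = openstack.connect(cloud='mycloud')

   # Look up the guest image and attach a hypervisor device property to it.
   # hw_rng_model=virtio exposes the host random number generator device to
   # every guest booted from this image.
   image = conn.image.find_image('crypto-guest')
   conn.image.update_image(image, hw_rng_model='virtio')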
If you require direct access to a specific device, PCI pass-through


@ -3,7 +3,7 @@ Multi-hypervisor example
========================
A financial company requires its applications migrated
from a traditional, virtualized environment to an API-driven,
orchestrated environment. The new environment needs
multiple hypervisors since many of the company's applications
have strict hypervisor requirements.
@ -11,7 +11,7 @@ have strict hypervisor requirements.
Currently, the company's vSphere environment runs 20 VMware
ESXi hypervisors. These hypervisors support 300 instances of
various sizes. Approximately 50 of these instances must run
on ESXi. The remaining 250 instances have more flexible requirements.
The financial company decides to manage the
overall system with a common OpenStack platform.
@ -27,8 +27,8 @@ Images in the OpenStack Image service have particular
hypervisor metadata attached. When a user requests a
certain image, the instance spawns on the relevant aggregate.
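A minimal sketch of attaching this metadata with the openstacksdk Python
library follows; the cloud entry, image name, aggregate name, and host name
are hypothetical, and ``img_hv_type`` is assumed to be the image property
checked by the Compute scheduler's image-properties filter.

.. code-block:: python

   import openstack

   conn = openstack.connect(cloud='mycloud')  # hypothetical clouds.yaml entry

   # Tag an ESXi-only image so that, with the ImagePropertiesFilter enabled,
   # instances booted from it land on VMware hypervisors only.
   image = conn.image.find_image('finance-app-esxi')
   conn.image.update_image(image, img_hv_type='vmware')

   # Group the ESXi hosts into an aggregate that backs those instances.
   conn.create_aggregate('esxi-pool')
   conn.add_host_to_aggregate('esxi-pool', 'esxi-compute-01')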
Images for ESXi use the VMDK format. QEMU disk images can be
converted to VMDK or VMFS flat disks. These disk images
can also be thin, thick, zeroed-thick, and eager-zeroed-thick.
After exporting a VMFS thin disk from VMFS to the
OpenStack Image service (a non-VMFS location), it becomes a
@ -57,7 +57,7 @@ financial company, Block Storage in their new architecture supports
both hypervisors.
OpenStack Networking provides network connectivity in this new
architecture, with the VMware NSX plug-in driver configured. Legacy
networking (nova-network) supports both hypervisors in this new
architecture example, but has limitations. Specifically, vSphere
with legacy networking does not support security groups. The new
@ -65,13 +65,15 @@ architecture uses VMware NSX as a part of the design. When users launch an
instance within either of the host aggregates, VMware NSX ensures the
instance attaches to the appropriate network overlay-based logical networks.
.. TODO update example??
The architecture planning teams also consider OpenStack Compute integration.
When running vSphere in an OpenStack environment, nova-compute
communicates with vCenter and appears as a single large hypervisor.
This hypervisor represents the entire ESXi cluster. Multiple nova-compute
instances can represent multiple ESXi clusters. They can connect to
multiple vCenter servers. If the process running nova-compute
crashes, it cuts the connection to the vCenter server.
Any ESXi clusters will stop running, and you will not be able to
provision further instances on the vCenter, even if you enable high
availability. You must monitor the nova-compute service connected


@ -3,30 +3,31 @@ Specialized networking example
==============================
Some applications that interact with a network require
specialized connectivity. For example, applications used in Looking Glass
servers require the ability to connect to a Border Gateway Protocol (BGP) peer,
or route participant applications may need to join a layer-2 network.
Challenges
~~~~~~~~~~
Connecting specialized network applications to their required
resources impacts the OpenStack architecture design. Installations that
rely on overlay networks cannot support a routing participant, and may
also block listeners on a layer-2 network.
Possible solutions
~~~~~~~~~~~~~~~~~~
Deploying an OpenStack installation using OpenStack Networking with a
provider network allows direct layer-2 connectivity to an
upstream networking device. This design provides the layer-2 connectivity
required to communicate through Intermediate System-to-Intermediate System
(ISIS) protocol, or pass packets using an OpenFlow controller.
Using the Modular Layer 2 (ML2) plug-in with an agent such as
:term:`Open vSwitch` allows a private connection through a VLAN
directly to a specific port in a layer-3 device. This allows a BGP
point-to-point link to join the autonomous system.
Avoid using layer-3 plug-ins as they divide the broadcast
domain and prevent router adjacencies from forming.
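The following is a minimal openstacksdk sketch of such a provider network;
the cloud entry, the ``physnet1`` physical network label, the VLAN ID, and
the subnet range are all assumptions about the local environment.

.. code-block:: python

   import openstack

   conn = openstack.connect(cloud='mycloud')

   # Provider network mapped directly onto a VLAN on the data center fabric,
   # giving instances layer-2 adjacency with the upstream routing device.
   net = conn.network.create_network(
       name='bgp-peering',
       provider_network_type='vlan',
       provider_physical_network='physnet1',
       provider_segmentation_id=2000,
   )

   # Small subnet for the point-to-point peering addresses; DHCP is disabled
   # so the addresses stay static.
   conn.network.create_subnet(
       network_id=net.id,
       ip_version=4,
       cidr='192.0.2.0/29',
       is_dhcp_enabled=False,
   )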


@ -9,10 +9,10 @@ supported by hypervisors and servers, which an underlying OpenStack
environment controls.
Public cloud providers can use this technique to manage the
upgrade and maintenance process on OpenStack environments.
Developers and operators testing OpenStack can also use this
technique to provision their own OpenStack environments on
available OpenStack Compute resources.
Challenges
~~~~~~~~~~
@ -24,16 +24,14 @@ cloud runs because the bare metal cloud owns all the hardware.
You must also expose them to the nested levels.
Alternatively, you can use the network overlay technologies on the
OpenStack environment running on the host OpenStack environment to
provide the software-defined networking for the deployment.
Hypervisor
~~~~~~~~~~
In this example architecture, consider which
approach to take to provide a nested hypervisor in OpenStack. This decision
influences the operating systems you use for nested OpenStack deployments.
Possible solutions: deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -52,7 +50,7 @@ Possible solutions: hypervisor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the case of running TripleO, the underlying OpenStack
cloud deploys bare-metal Compute nodes. You then deploy
OpenStack on these Compute bare-metal servers with the
appropriate hypervisor, such as KVM.
@ -60,11 +58,8 @@ In the case of running smaller OpenStack clouds for testing
purposes, where performance is not a critical factor, you can use
QEMU instead. It is also possible to run a KVM hypervisor in an instance
(see http://davejingtian.org/2014/03/30/nested-kvm-just-for-fun/),
though this is not a supported configuration and could be a
complex solution for such a use case.
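Before settling on KVM inside guest instances, it is worth confirming that
nested virtualization is actually exposed by the underlying cloud. The
following is a small, self-contained Python check; the sysfs paths are the
standard Linux locations, but whether they report ``Y`` depends on how the
host hypervisor is configured.

.. code-block:: python

   from pathlib import Path

   def nested_kvm_enabled():
       """Return True if kvm_intel or kvm_amd reports nested support."""
       for module in ("kvm_intel", "kvm_amd"):
           param = Path("/sys/module/{}/parameters/nested".format(module))
           if param.exists() and param.read_text().strip() in ("Y", "1"):
               return True
       return False

   if __name__ == "__main__":
       if nested_kvm_enabled():
           print("Nested KVM is available in this instance.")
       else:
           print("No nested KVM; fall back to QEMU (TCG) emulation.")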
Diagram
~~~~~~~
.. figure:: ../figures/Specialized_OOO.png
:width: 100%


@ -1,5 +0,0 @@
======================
Scaling multiple cells
======================
.. TODO


@ -27,9 +27,9 @@ If an SDN implementation requires layer-2 access because it
directly manipulates switches, we do not recommend running an
overlay network or a layer-3 agent.
If the controller resides within an OpenStack installation,
build an ML2 plug-in, and schedule the controller instances
to connect to tenant VLANs so they can talk directly to the switch
hardware.
Alternatively, depending on the external device support,
use a tunnel that terminates at the switch hardware itself.
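As an illustration, the controller instances can be scheduled onto the
dedicated VLAN network with the openstacksdk Python library; the cloud
entry, network, image, and flavor names below are placeholders.

.. code-block:: python

   import openstack

   conn = openstack.connect(cloud='mycloud')

   # The tenant VLAN that is trunked through to the switch hardware.
   vlan_net = conn.network.find_network('sdn-controller-vlan')
   image = conn.image.find_image('sdn-controller')
   flavor = conn.compute.find_flavor('m1.large')

   # Boot the SDN controller directly onto that VLAN so it can reach the
   # switches at layer 2.
   server = conn.compute.create_server(
       name='sdn-controller-01',
       image_id=image.id,
       flavor_id=flavor.id,
       networks=[{'uuid': vlan_net.id}],
   )
   conn.compute.wait_for_server(server)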


@ -43,7 +43,7 @@ of the following:
* Between 120 and 140 installations of Nginx and Tomcat, each with 2
vCPUs and 4 GB of RAM
* A three-node MariaDB and Galera cluster, each with 4 vCPUs and 8 GB
RAM
The company runs hardware load balancers and multiple web applications
@ -56,9 +56,10 @@ The solution would consist of the following OpenStack components:
* A firewall, switches, and load balancers on the public facing network
connections.
* OpenStack Controller service running Image service, Identity service,
Networking service, combined with support services such as MariaDB and
RabbitMQ, configured for high availability on at least three controller
nodes.
* OpenStack Compute nodes running the KVM hypervisor.
@ -154,7 +155,7 @@ The CERN solution uses :term:`cells <cell>` for segregation of compute
resources and for transparently scaling between different data centers.
This decision meant trading off support for security groups and live
migration. In addition, they must manually replicate some details, like
flavors, across cells. In spite of these drawbacks, cells provide the
required scale while exposing a single public API endpoint to users.
CERN created a compute cell for each of the two original data centers
@ -382,4 +383,3 @@ storage system, such as Ceph, as Block Storage.
This would have given another layer of protection.
Another option would have been to store the database on an OpenStack
Block Storage volume and back it up like any other Block Storage volume.


@ -27,21 +27,20 @@ geographically dispersed. The workload is very sensitive to latency and
needs a rapid response to end-users. After reviewing the user, technical
and operational considerations, it is determined beneficial to build a
number of regions local to the customer's edge. Rather than build a few
large, centralized data centers, the intent is to provide a pair of small
data centers in locations closer to the customer. In this use case,
spreading out applications allows for different horizontal scaling than
a traditional compute workload. The intent is to scale by creating
more copies of the application in closer proximity to the users that need
it most, in order to ensure faster response time to user requests. This
provider deploys two data centers at each of the four chosen regions. The
implications of this design are based on the method of placing copies
of resources in each of the remote regions. Swift objects, Glance images,
and Block Storage need to be manually replicated into each region. This may
be beneficial for some systems, for example, a content service where
only some of the content needs to exist in some regions. A centralized
Identity service is recommended to manage authentication and access to
the API endpoints.
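As an illustration of the manual replication step, an image can be copied
between two regional endpoints with the openstacksdk Python library. This is
only a sketch: the cloud and region names and the image name are
assumptions, the download helper may vary between SDK releases, and a
production workflow would stream large images to disk rather than hold them
in memory.

.. code-block:: python

   import openstack

   # Source and destination regions of the same cloud (placeholder names).
   src = openstack.connect(cloud='mycloud', region_name='edge-east-1')
   dst = openstack.connect(cloud='mycloud', region_name='edge-west-1')

   image = src.image.find_image('content-app-base')

   # Pull the image bits out of the source region...
   data = src.image.download_image(image)

   # ...and register them again in the destination region.
   dst.image.create_image(
       name=image.name,
       disk_format=image.disk_format,
       container_format=image.container_format,
       data=data,
   )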
It is recommended that you install an automated DNS system such as
Designate. Application administrators need a way to manage the mapping
@ -52,7 +51,7 @@ region's zone.
Telemetry for each region is also deployed, as each region may grow
differently or be used at a different rate. Ceilometer collects each
region's meters from each of the controllers and reports them back to a
central location. This is useful both to the end user and the
administrator of the OpenStack environment. The end user will find this
method useful, as it makes it possible to determine if certain locations
@ -73,11 +72,11 @@ being able to run a central storage repository as a primary cache for
distributing content to each of the regions.
The second redundancy type is the edge data center itself. A second data
center in each of the edge regional locations hosts a second region near
the first region. This ensures that the application does not suffer
degraded performance in terms of latency and availability.
The following figure depicts the solution designed to have both a
centralized set of core data centers for OpenStack services and paired edge
data centers.
@ -89,27 +88,27 @@ Geo-redundant load balancing example
------------------------------------
A large-scale web application has been designed with cloud principles in
mind. The application is designed to provide service to the application
store on a 24/7 basis. The company has a two-tier architecture with
a web front-end servicing the customer requests, and a NoSQL database back
end storing the information.
Recently there have been several outages in a number of major public
cloud providers due to applications running out of a single geographical
location. The design, therefore, should mitigate the chance of a single
site causing an outage for their business.
The solution would consist of the following OpenStack components:
* A firewall, switches, and load balancers on the public facing network
connections.
* OpenStack controller services running Networking service, dashboard, Block
Storage service, and Compute service running locally in each of the three
regions. Identity service, Orchestration service, Telemetry service, Image
service and Object Storage service can be installed centrally, with
nodes in each of the regions providing a redundant OpenStack
controller plane throughout the globe.
* OpenStack Compute nodes running the KVM hypervisor.
@ -126,12 +125,12 @@ The solution would consist of the following OpenStack components:
An autoscaling Heat template can be used to deploy the application in
the three regions (see the sketch after this list). This template includes:
* Web servers running Apache.
* Appropriate ``user_data`` to populate the central DNS servers upon
instance launch.
* Appropriate Telemetry alarms that maintain the application state
and allow for handling of region or instance failure.
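The following is a minimal sketch of rolling the same stack out to the
three regions with the openstacksdk Python library; the cloud entry, region
names, template file name, and template parameter are assumptions, and the
HOT template itself is not shown.

.. code-block:: python

   import openstack

   # One clouds.yaml entry, three regions (placeholder names).
   REGIONS = ("region-one", "region-two", "region-three")

   for region in REGIONS:
       conn = openstack.connect(cloud='mycloud', region_name=region)
       # create_stack() reads the HOT template from disk and passes extra
       # keyword arguments through as stack parameters.
       conn.create_stack(
           'webapp-asg',
           template_file='webapp-asg.yaml',  # hypothetical autoscaling template
           wait=True,
           dns_domain='example.com.',        # hypothetical template parameter
       )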
Another autoscaling Heat template can be used to deploy a distributed
@ -145,11 +144,10 @@ met. But three regions are selected here to avoid abnormal load on a
single region in the event of a failure.
Orchestration is used because of the built-in functionality of
autoscaling and auto healing in the event of increased load. External
configuration management tools, such as Puppet or Chef, could also have
been used in this scenario, but were not chosen since Orchestration had
the appropriate built-in hooks into the OpenStack cloud. In addition,
external tools were not needed since this deployment scenario was
straightforward.
@ -168,11 +166,11 @@ not have any awareness of geo location.
.. figure:: ../figures/Multi-site_Geo_Redundant_LB.png
Location-local service example
------------------------------
A common use for multi-site OpenStack deployment is creating a Content
Delivery Network. An application that uses a location-local architecture
requires low network latency and proximity to the user to provide an
optimal user experience and reduce the cost of bandwidth and transit.
The content resides on sites closer to the customer, instead of a


@ -19,8 +19,8 @@ User stories
Network-focused cloud examples
------------------------------
An organization designs a large-scale cloud-based web application. The
application scales horizontally in bursts and generates a
high instance count. The application requires an SSL connection to secure
data and must not lose connection state to individual servers.
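One way to meet both requirements is to terminate SSL/TLS on the web
servers and put an OpenStack Load-balancer service (Octavia) listener in
front of them with source-IP session persistence. The following openstacksdk
sketch assumes a cloud entry and subnet name, omits the pool members for
brevity, and attribute names may differ slightly between SDK releases.

.. code-block:: python

   import openstack

   conn = openstack.connect(cloud='mycloud')

   subnet = conn.network.find_subnet('web-tier-subnet')

   # Load balancer fronting the web tier.
   lb = conn.load_balancer.create_load_balancer(
       name='web-lb', vip_subnet_id=subnet.id)
   conn.load_balancer.wait_for_load_balancer(lb.id)

   # HTTPS listener passes the TLS session through to the instances, so the
   # certificates stay on the web servers.
   listener = conn.load_balancer.create_listener(
       name='web-443', load_balancer_id=lb.id,
       protocol='HTTPS', protocol_port=443)
   conn.load_balancer.wait_for_load_balancer(lb.id)

   # Source-IP persistence keeps a client pinned to the same back end, which
   # preserves per-server connection state.
   conn.load_balancer.create_pool(
       name='web-pool', listener_id=listener.id,
       protocol='HTTPS', lb_algorithm='ROUND_ROBIN',
       session_persistence={'type': 'SOURCE_IP'})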
@ -38,7 +38,7 @@ Because sessions persist until closed, the routing and switching
architecture provides high availability. Switches mesh to each
hypervisor and each other, and also provide an MLAG implementation to
ensure that layer-2 connectivity does not fail. Routers use VRRP and
fully mesh with switches to ensure layer-3 connectivity. Since GRE
provides an overlay network, Networking is present and uses the Open
vSwitch agent in GRE tunnel mode. This ensures all devices can reach all
other devices and that you can create tenant networks for private
@ -51,8 +51,8 @@ to this, it can fit into a large number of other OpenStack designs. A
few key components, however, need to be in place to handle the nature of
most web-scale workloads. You require the following components:
* OpenStack Controller services (Image service, Identity service, Networking
service, and supporting services such as MariaDB and RabbitMQ)
* OpenStack Compute running KVM hypervisor
@ -62,9 +62,9 @@ most web-scale workloads. You require the following components:
* Telemetry service
Beyond the normal Identity service, Compute service, Image service, and
Object Storage components, we recommend the Orchestration service component
to handle the proper scaling of workloads to adjust to demand. Due to the
requirement for auto-scaling, the design includes the Telemetry service.
Web services tend to be bursty in load, have very defined peak and
valley usage patterns and, as a result, benefit from automatic scaling
@ -166,7 +166,7 @@ east-west traffic
Likely to be fully symmetric. Because replication originates from
any node and might target multiple other nodes algorithmically, it
is less likely for this traffic to have a larger volume in any
specific direction. However, this traffic might interfere with
north-south traffic.
.. figure:: ../figures/Network_Cloud_Storage2.png
@ -174,7 +174,7 @@ east-west traffic
This application prioritizes the north-south traffic over east-west
traffic: the north-south traffic involves customer-facing data.
The network design, in this case, is less dependent on availability and
more dependent on being able to handle high bandwidth. As a direct
result, it is beneficial to forgo redundant links in favor of bonding
those connections. This increases available bandwidth. It is also


@ -12,27 +12,24 @@ Specialized use cases
specialized-openstack-on-openstack.rst
specialized-hardware.rst
specialized-single-site.rst
This section describes the architecture and design considerations for the
following specialized use cases:
* :doc:`Specialized networking <specialized-networking>`:
Running networking-oriented software that may involve reading
packets directly from the wire or participating in routing protocols.
* :doc:`Software-defined networking (SDN)
<specialized-software-defined-networking>`:
Running an SDN controller from within OpenStack
as well as participating in a software-defined network.
* :doc:`Desktop-as-a-Service <specialized-desktop-as-a-service>`:
Running a virtualized desktop environment in a private or public cloud.
* :doc:`OpenStack on OpenStack <specialized-openstack-on-openstack>`:
Building a multi-tiered cloud by running OpenStack
on top of an OpenStack installation.
* :doc:`Specialized hardware <specialized-hardware>`:
Using specialized hardware devices from within the OpenStack environment.
* :doc:`specialized-single-site`: Single site architecture with OpenStack
Networking.