[arch-design-draft] Add content to compute design overview

Added additional content and corrected errors

Change-Id: Ie269eb91a6cf3ea07a45bc2a86bf5080dd9e4989
Implements: blueprint arch-guide-restructure
Ben Silverman 2017-01-11 15:19:24 -07:00 committed by KATO Tomoyuki
parent cf4ea7ee15
commit 6207029330


@@ -1,41 +1,61 @@
====================================
Compute Server Architecture Overview
====================================

When designing compute resource pools, consider the number of processors,
amount of memory, network requirements, the quantity of storage required for
each hypervisor, and any requirements for bare metal hosts provisioned
through ironic.

When architecting an OpenStack cloud, as part of the planning process, the
architect must determine not only what hardware to use but also whether
compute resources will be provided in a single pool or in multiple pools or
availability zones. Will the cloud provide distinctly different compute
profiles, for example CPU-optimized, memory-optimized, or local
storage-based compute nodes? For NFV or HPC clouds, there may also be
specific network configurations that should be reserved for those workloads
on particular compute nodes. This method of designing specific resources
into groups or zones of compute is referred to as bin packing.

.. note::

   In a bin packing design, each independent resource pool provides service
   for specific flavors. Since instances are scheduled onto compute
   hypervisors, each independent node's resources will be allocated to
   efficiently use the available hardware. While bin packing can separate
   workload-specific resources onto individual servers, it also requires a
   common hardware design, with all hardware nodes within a compute resource
   pool sharing a common processor, memory, and storage layout. This makes it
   easier to deploy, support, and maintain nodes throughout their lifecycle.
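
As an illustrative sketch of how flavors can be tied to a bin-packed pool, a
host aggregate can carry metadata that matching flavor extra specs select on.
The pool, host, and flavor names below are hypothetical, and this assumes the
``AggregateInstanceExtraSpecsFilter`` scheduler filter is enabled in the
Compute scheduler configuration:

.. code-block:: console

   # Group the CPU-optimized hypervisors into their own pool
   $ openstack aggregate create --property pool=cpu-optimized cpu-pool
   $ openstack aggregate add host cpu-pool compute-cpu-01

   # Create a flavor that only schedules onto hosts in that pool
   $ openstack flavor create --vcpus 8 --ram 16384 --disk 40 \
     --property aggregate_instance_extra_specs:pool=cpu-optimized c1.large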

Increasing the size of the supporting compute environment increases network
traffic and messaging, adding load to the controller and networking nodes
and to the administrative services that support the OpenStack cloud. When
considering hardware for controller nodes, whether using a monolithic
controller design, where all of the controller services live on one or more
physical nodes, or one of the newer shared-nothing control plane models,
adequate resources must be allocated and scaled as the cloud grows.
Effective monitoring of the environment will help with capacity decisions
on scaling, and proper planning will help avoid bottlenecks and network
oversubscription as the cloud scales.
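
As a minimal example of the kind of monitoring that informs scaling
decisions, the OpenStack client can summarize hypervisor capacity across the
cloud (the hostname below is hypothetical):

.. code-block:: console

   # Summarize vCPU, memory, and disk usage across all hypervisors
   $ openstack hypervisor stats show

   # Inspect the capacity and usage of a single compute node
   $ openstack hypervisor show compute-cpu-01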

Compute nodes automatically attach to OpenStack clouds, resulting in a
horizontally scaling process when adding extra compute capacity to an
OpenStack cloud. Additional work is required to further group compute nodes
and place them into appropriate availability zones and host aggregates. It
is also necessary to plan rack capacity and network switches, as scaling out
compute hosts directly affects data center infrastructure resources, just as
any other infrastructure expansion would.
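
The following sketch shows one way compute nodes can be grouped into an
availability zone through a host aggregate and then targeted at boot time.
The zone, aggregate, host, flavor, and image names are hypothetical:

.. code-block:: console

   # Expose a rack of compute nodes as an availability zone
   $ openstack aggregate create --zone az-rack1 rack1-hosts
   $ openstack aggregate add host rack1-hosts compute-rack1-01

   # Request placement in that zone when launching an instance
   $ openstack server create --availability-zone az-rack1 \
     --flavor m1.medium --image ubuntu-16.04 demo-instance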

While not as common in large enterprises, compute host components can also
be upgraded to account for increases in demand, known as vertical scaling.
Upgrading CPUs with more cores, or increasing the overall server memory, can
add extra needed capacity depending on whether the running applications are
more CPU intensive or memory intensive. Since OpenStack schedules workload
placement based on capacity and technical requirements, compute nodes can be
removed from availability and upgraded in place using a rolling upgrade
design.
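
A rolling upgrade of a compute node might look like the following sketch:
the node is removed from scheduling, its instances are migrated elsewhere,
and it is re-enabled after the hardware upgrade. The hostname and reason
text are hypothetical:

.. code-block:: console

   # Stop new instances from being scheduled to the node being upgraded
   $ openstack compute service set --disable \
     --disable-reason "memory upgrade" compute-03 nova-compute

   # List the instances still running there so they can be migrated off
   $ openstack server list --all-projects --host compute-03

   # Return the node to service once the upgrade is complete
   $ openstack compute service set --enable compute-03 nova-compute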

When selecting a processor, compare features and performance
characteristics. Some processors include features specific to

@@ -74,8 +94,8 @@ Consider the Compute requirements of non-hypervisor nodes (also referred to as
resource nodes). This includes controller, Object Storage nodes, Block Storage
nodes, and networking services.

The ability to create pools or availability zones for unpredictable workloads
should be considered. In some cases, the demand for certain instance types or
flavors may not justify individual hardware design. Allocate hardware designs
that are capable of servicing the most common instance requests. Adding
hardware to the overall architecture can be done later.