[arch-design] Update compute resource information

1. Remove duplicated content
2. Add compute resource design information

Change-Id: Id3fdea2220fafe8728a8cb9e03c541f3953acb01
Partial-Bug: #1548184
Implements: blueprint archguide-mitaka-reorg
daz 2016-03-08 12:02:59 +11:00
parent 1bb5545ca4
commit 7522356cb6
4 changed files with 49 additions and 74 deletions


@ -191,12 +191,23 @@ Network
~~~~~~~
.. TODO(unassigned): consolidate and update existing network sub-chapters.
Compute resource design
~~~~~~~~~~~~~~~~~~~~~~~
Compute
~~~~~~~
A relationship exists between the size of the compute environment and
the number of supporting OpenStack infrastructure controller nodes
required.
When designing compute resource pools, consider the number of processors,
amount of memory, and the quantity of storage required for each hypervisor.
Consider whether compute resources will be provided in a single pool or in
multiple pools. In most cases, multiple pools of resources can be allocated
and addressed on demand, commonly referred to as bin packing.
In a bin packing design, each independent resource pool provides service
for specific flavors. Since instances are scheduled onto compute hypervisors,
each independent node's resources will be allocated to efficiently use the
available hardware. Bin packing also requires a common hardware design,
with all hardware nodes within a compute resource pool sharing a common
processor, memory, and storage layout. This makes it easier to deploy,
support, and maintain nodes throughout their lifecycle.
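The bin packing idea above can be sketched as a first-fit placement loop: each pool serves specific flavors, and instances are packed onto identically configured nodes until the pool is exhausted. The flavor definitions, node names, and sizes below are illustrative assumptions, not values from any real deployment or from the Nova scheduler itself.

```python
# Minimal first-fit bin-packing sketch for one compute resource pool.
# Flavor and node specifications here are hypothetical examples.

FLAVORS = {
    "m1.small": {"vcpus": 1, "ram_mb": 2048},
    "m1.large": {"vcpus": 4, "ram_mb": 8192},
}

class Node:
    def __init__(self, name, vcpus, ram_mb):
        self.name = name
        self.free_vcpus = vcpus
        self.free_ram_mb = ram_mb

    def fits(self, flavor):
        return (self.free_vcpus >= flavor["vcpus"]
                and self.free_ram_mb >= flavor["ram_mb"])

    def claim(self, flavor):
        self.free_vcpus -= flavor["vcpus"]
        self.free_ram_mb -= flavor["ram_mb"]

def schedule(pool, flavor_name):
    """Place an instance on the first node with enough free resources."""
    flavor = FLAVORS[flavor_name]
    for node in pool:
        if node.fits(flavor):
            node.claim(flavor)
            return node.name
    return None  # pool exhausted; scale out horizontally

# A pool of identically configured nodes, as a common hardware design implies.
pool = [Node("compute-1", vcpus=8, ram_mb=16384),
        Node("compute-2", vcpus=8, ram_mb=16384)]
print(schedule(pool, "m1.large"))  # → compute-1
```

Because all nodes in the pool share one hardware layout, the packing logic stays uniform, which is part of what makes such pools easier to deploy and maintain.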
Increasing the size of the supporting compute environment increases the
network traffic and messages, adding load to the controller or
@ -213,15 +224,34 @@ It is necessary to add rack capacity or network switches as scaling out
compute hosts directly affects network and data center resources.
Compute host components can also be upgraded to account for increases in
demand. This is also known as vertical scaling. Upgrading CPUs with more
demand, known as vertical scaling. Upgrading CPUs with more
cores, or increasing the overall server memory, can add extra needed
capacity depending on whether the running applications are more CPU
intensive or memory intensive.
When selecting a processor, compare features and performance
characteristics. Some processors include features specific to
virtualized compute hosts, such as hardware-assisted virtualization, and
technology related to memory paging (also known as EPT shadowing). These
types of features can have a significant impact on the performance of
your virtual machine.
The number of processor cores and threads impacts the number of worker
threads that can run on a resource node. Design decisions must relate
directly to the services running on the node, while also providing a
balanced infrastructure for all services.
Another option is to assess the average workloads and increase the
number of instances that can run within the compute environment by
adjusting the overcommit ratio.
An overcommit ratio is the ratio of available virtual resources to
available physical resources. This ratio is configurable for CPU and
memory. The default CPU overcommit ratio is 16:1, and the default memory
overcommit ratio is 1.5:1. Determining the tuning of the overcommit
ratios during the design phase is important as it has a direct impact on
the hardware layout of your compute nodes.
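The arithmetic behind these ratios is straightforward; a short sketch using the default ratios quoted above shows how physical capacity translates into schedulable virtual capacity. The host size used in the example is an assumption for illustration.

```python
# Translate physical host capacity into virtual capacity using the
# default overcommit ratios cited in the text (16:1 CPU, 1.5:1 memory).
CPU_ALLOCATION_RATIO = 16.0
RAM_ALLOCATION_RATIO = 1.5

def virtual_capacity(physical_cores, physical_ram_gb):
    """Return (schedulable vCPUs, schedulable RAM in GB) for one host."""
    return (physical_cores * CPU_ALLOCATION_RATIO,
            physical_ram_gb * RAM_ALLOCATION_RATIO)

# Example: a hypothetical host with 24 cores and 256 GB of RAM.
vcpus, vram_gb = virtual_capacity(24, 256)
print(vcpus, vram_gb)  # → 384.0 384.0
```

Running the numbers like this during the design phase shows which resource the scheduler will exhaust first on a given hardware layout.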
.. note::
Changing the CPU overcommit ratio can have a detrimental effect
@ -235,10 +265,20 @@ additional block storage nodes. Upgrading directly attached storage
installed in compute hosts, and adding capacity to the shared storage
for additional ephemeral storage to instances, may be necessary.
For a deeper discussion on many of these topics, refer to the `OpenStack
Operations Guide <http://docs.openstack.org/ops>`_.
Consider the compute requirements of non-hypervisor nodes (also referred to as
resource nodes). This includes controller, object storage, and block storage
nodes, and networking services.
The ability to add compute resource pools for unpredictable workloads should
be considered. In some cases, the demand for certain instance types or flavors
may not justify individual hardware design. Allocate hardware designs that are
capable of servicing the most common instance requests. Adding hardware to the
overall architecture can be done later.
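Sizing hardware for the most common instance requests can be estimated by totaling the expected flavor demand against per-host capacity. All demand figures and host sizes below are invented for illustration only.

```python
import math

# Rough estimate of hosts needed to serve the most common flavor requests.
# Per-host capacities and demand counts are hypothetical assumptions.
HOST_VCPUS = 48      # schedulable vCPUs per host (after overcommit)
HOST_RAM_GB = 192    # schedulable RAM per host

demand = {
    # flavor: (vcpus, ram_gb, expected instance count)
    "m1.small": (1, 2, 400),
    "m1.large": (4, 8, 50),
}

need_vcpus = sum(v * n for v, _, n in demand.values())
need_ram_gb = sum(r * n for _, r, n in demand.values())

# The binding constraint (CPU or memory) determines the host count.
hosts = max(math.ceil(need_vcpus / HOST_VCPUS),
            math.ceil(need_ram_gb / HOST_RAM_GB))
print(hosts)  # → 13
```

Whichever resource is the binding constraint here is also the one to watch when deciding later whether to add hardware to the overall architecture.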
For more information on these topics, refer to the `OpenStack
Operations Guide <http://docs.openstack.org/ops>`_.
Control plane API services and Horizon
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. TODO(unassigned): consolidate existing control plane sub-chapters.
.. No existing control plane sub-chapters in the current guide.


@ -23,7 +23,6 @@ Contents
:maxdepth: 2
common/conventions.rst
introduction.rst
identifying-stakeholders.rst
functional-requirements.rst
@ -34,9 +33,7 @@ Contents
security-requirements.rst
legal-requirements.rst
example-architectures.rst
common/app_support.rst
common/glossary.rst
Search in this guide


@ -1,61 +0,0 @@
=================
Capacity planning
=================
An important consideration in running a cloud over time is projecting growth
and utilization trends in order to plan capital expenditures for the short and
long term. Gather utilization meters for compute, network, and storage, along
with historical records of these meters. While securing major anchor tenants
can lead to rapid jumps in the utilization rates of all resources, the steady
adoption of the cloud inside an organization or by consumers in a public
offering also creates a steady trend of increased utilization.
Capacity constraints for a general purpose cloud environment include:
* Compute limits
* Storage limits
A relationship exists between the size of the compute environment and
the supporting OpenStack infrastructure controller nodes requiring
support.
Increasing the size of the supporting compute environment increases the
network traffic and messages, adding load to the controller or
networking nodes. Effective monitoring of the environment will help with
capacity decisions on scaling.
Compute nodes automatically attach to OpenStack clouds, resulting in a
horizontally scaling process when adding extra compute capacity to an
OpenStack cloud. Additional processes are required to place nodes into
appropriate availability zones and host aggregates. When adding
additional compute nodes to environments, ensure identical or functionally
compatible CPUs are used, otherwise live migration features will break.
It is necessary to add rack capacity or network switches as scaling out
compute hosts directly affects network and datacenter resources.
Compute host components can also be upgraded to account for increases in
demand; this is known as vertical scaling. Upgrading CPUs with more
cores, or increasing the overall server memory, can add extra needed
capacity depending on whether the running applications are more CPU
intensive or memory intensive.
Another option is to assess the average workloads and increase the
number of instances that can run within the compute environment by
adjusting the overcommit ratio.
.. note::
It is important to remember that changing the CPU overcommit ratio
can have a detrimental effect and cause a potential increase in
noisy neighbor issues.
Insufficient disk capacity could also have a negative effect on overall
performance including CPU and memory usage. Depending on the back-end
architecture of the OpenStack Block Storage layer, capacity includes
adding disk shelves to enterprise storage systems or installing
additional block storage nodes. Upgrading directly attached storage
installed in compute hosts, and adding capacity to the shared storage
for additional ephemeral storage to instances, may be necessary.
For a deeper discussion on many of these topics, refer to the `OpenStack
Operations Guide <http://docs.openstack.org/ops>`_.


@ -11,7 +11,6 @@ Operator requirements
operator-requirements-licensing.rst
operator-requirements-support-maintenance.rst
operator-requirements-ops-access.rst
operator-requirements-capacity-planning.rst
operator-requirements-quota-management.rst
operator-requirements-policy-management.rst
operator-requirements-hardware-selection.rst