Merge "[admin-guide] Rework CPU topology guide"

Jenkins 2016-08-17 22:44:31 +00:00 committed by Gerrit Code Review
commit d465f42169
3 changed files with 81 additions and 77 deletions


@@ -24,4 +24,4 @@ instance for these kind of workloads.
    :maxdepth: 2
 
    compute-pci-passthrough.rst
-   compute-numa-cpu-pinning.rst
+   compute-cpu-topologies.rst


@@ -1,35 +1,37 @@
-.. _section-compute-numa-cpu-pinning:
+.. _compute-cpu-topologies:
 
-==========================================
-Enabling advanced CPU topologies in guests
-==========================================
+==============
+CPU topologies
+==============
 
-The NUMA topology and CPU pinning features in OpenStack provide high level
-control over how instances run on host CPUs, and the topology of CPUs inside
-the instance. These features can be used to minimize latency and maximize
-per-instance performance.
+The NUMA topology and CPU pinning features in OpenStack provide high-level
+control over how instances run on hypervisor CPUs and the topology of virtual
+CPUs available to instances. These features help minimize latency and maximize
+performance.
 
-SMP, NUMA, and SMT overviews
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+SMP, NUMA, and SMT
+~~~~~~~~~~~~~~~~~~
 
-Symmetric multiprocessing (SMP) is a design found in many modern multi-core
-systems. In an SMP system, there are two or more CPUs and these CPUs are
-connected by some interconnect. This provides CPUs with equal access to
-system resources like memory and IO ports.
+Symmetric multiprocessing (SMP)
+  SMP is a design found in many modern multi-core systems. In an SMP system,
+  there are two or more CPUs and these CPUs are connected by some interconnect.
+  This provides CPUs with equal access to system resources like memory and
+  input/output ports.
 
-Non-uniform memory access (NUMA) is a derivative of the SMP design that is
-found in many multi-socket systems. In a NUMA system, system memory is divided
-into cells or nodes that are associated with particular CPUs. Requests for
-memory on other nodes are possible through an interconnect bus, however,
-bandwidth across this shared bus is limited. As a result, competition for this
-this resource can incur performance penalties.
+Non-uniform memory access (NUMA)
+  NUMA is a derivative of the SMP design that is found in many multi-socket
+  systems. In a NUMA system, system memory is divided into cells or nodes that
+  are associated with particular CPUs. Requests for memory on other nodes are
+  possible through an interconnect bus. However, bandwidth across this shared
+  bus is limited. As a result, competition for this resource can incur
+  performance penalties.
 
-Simultaneous Multi-Threading (SMT), known as as Hyper-Threading on Intel
-platforms, is a design that is complementary to SMP. Whereas CPUs in SMP
-systems share a bus and some memory, CPUs in SMT systems share many more
-components. CPUs that share components are known as thread siblings. All CPUs
-appear as usable CPUs on the system and can execute workloads in parallel,
-however, as with NUMA, threads compete for shared resources.
+Simultaneous Multi-Threading (SMT)
+  SMT is a design complementary to SMP. Whereas CPUs in SMP systems share a bus
+  and some memory, CPUs in SMT systems share many more components. CPUs that
+  share components are known as thread siblings. All CPUs appear as usable
+  CPUs on the system and can execute workloads in parallel. However, as with
+  NUMA, threads compete for shared resources.
 
 In OpenStack, SMP CPUs are known as *cores*, NUMA cells or nodes are known as
 *sockets*, and SMT CPUs are known as *threads*. For example, a quad-socket,
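As an aside for reviewers, the sockets/cores/threads terminology in this hunk multiplies out directly to a logical CPU count. A minimal illustrative sketch (not part of the patch):

```python
# Illustrative only: how OpenStack's sockets/cores/threads terms
# multiply out to a total logical CPU count.
def total_cpus(sockets: int, cores: int, threads: int) -> int:
    """Total logical CPUs = sockets x cores per socket x SMT threads per core."""
    return sockets * cores * threads

# A quad-socket host with eight cores per socket and SMT (2 threads per
# core) exposes 64 logical CPUs.
print(total_cpus(sockets=4, cores=8, threads=2))  # -> 64
```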
@@ -49,14 +51,14 @@ processes are on the same NUMA node as the memory used by these processes.
 This ensures all memory accesses are local to the node and thus do not consume
 the limited cross-node memory bandwidth, adding latency to memory accesses.
 Similarly, large pages are assigned from memory and benefit from the same
-performance improvements as memory allocated using standard pages, thus, they
+performance improvements as memory allocated using standard pages. Thus, they
 also should be local. Finally, PCI devices are directly associated with
 specific NUMA nodes for the purposes of DMA. Instances that use PCI or SR-IOV
-devices should be placed on the NUMA node associated with the said devices.
+devices should be placed on the NUMA node associated with these devices.
 
-By default, an instance will float across all NUMA nodes on a host. NUMA
-awareness can be enabled implicitly, through the use of hugepages or pinned
-CPUs, or explicitly, through the use of flavor extra specs or image metadata.
+By default, an instance floats across all NUMA nodes on a host. NUMA
+awareness can be enabled implicitly through the use of hugepages or pinned
+CPUs or explicitly through the use of flavor extra specs or image metadata.
 In all cases, the ``NUMATopologyFilter`` filter must be enabled. Details on
 this filter are provided in `Scheduling`_ configuration guide.
@@ -65,7 +67,7 @@ this filter are provided in `Scheduling`_ configuration guide.
    The NUMA node(s) used are normally chosen at random. However, if a PCI
    passthrough or SR-IOV device is attached to the instance, then the NUMA
    node that the device is associated with will be used. This can provide
-   important performance improvements, however, booting a large number of
+   important performance improvements. However, booting a large number of
    similar instances can result in unbalanced NUMA node usage. Care should
    be taken to mitigate this issue. See this `discussion`_ for more details.
@@ -85,10 +87,10 @@ run:
 .. code-block:: console
 
-   # openstack flavor set m1.large --property hw:numa_nodes=1
+   $ openstack flavor set m1.large --property hw:numa_nodes=1
 
 Some workloads have very demanding requirements for memory access latency or
-bandwidth which exceed the memory bandwidth available from a single NUMA node.
+bandwidth that exceed the memory bandwidth available from a single NUMA node.
 For such workloads, it is beneficial to spread the instance across multiple
 host NUMA nodes, even if the instance's RAM/vCPUs could theoretically fit on a
 single NUMA node. To force an instance's vCPUs to spread across two host NUMA
@@ -96,21 +98,21 @@ nodes, run:
 .. code-block:: console
 
-   # openstack flavor set m1.large --property hw:numa_nodes=2
+   $ openstack flavor set m1.large --property hw:numa_nodes=2
 
 The allocation of instances vCPUs and memory from different host NUMA nodes can
 be configured. This allows for asymmetric allocation of vCPUs and memory, which
-can be important for some workloads. To spread the six vCPUs and 6 GB of memory
+can be important for some workloads. To spread the 6 vCPUs and 6 GB of memory
 of an instance across two NUMA nodes and create an asymmetric 1:2 vCPU and
 memory mapping between the two nodes, run:
 
 .. code-block:: console
 
-   # openstack flavor set m1.large --property hw:numa_nodes=2
-   # openstack flavor set m1.large \  # configure guest node 0
+   $ openstack flavor set m1.large --property hw:numa_nodes=2
+   $ openstack flavor set m1.large \  # configure guest node 0
      --property hw:numa_cpus.0=0,1 \
      --property hw:numa_mem.0=2048
-   # openstack flavor set m1.large \  # configure guest node 1
+   $ openstack flavor set m1.large \  # configure guest node 1
      --property hw:numa_cpus.1=2,3,4,5 \
      --property hw:numa_mem.1=4096
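As a side note on this hunk, the asymmetric mapping can be checked mechanically: every vCPU must be assigned to exactly one guest node and the per-node memory must sum to the flavor's RAM. A small illustrative sketch (the `numa_layout` helper is hypothetical, not a nova API):

```python
# Hypothetical helper (illustrative, not nova code): parse hw:numa_cpus.N /
# hw:numa_mem.N extra specs into a per-guest-node layout and sanity-check
# the totals, mirroring the asymmetric 1:2 mapping configured above.
def numa_layout(extra_specs, vcpus, ram_mb):
    nodes = {}
    for key, value in extra_specs.items():
        if key.startswith("hw:numa_cpus."):
            node = int(key.rsplit(".", 1)[1])
            nodes.setdefault(node, {})["cpus"] = {int(c) for c in value.split(",")}
        elif key.startswith("hw:numa_mem."):
            node = int(key.rsplit(".", 1)[1])
            nodes.setdefault(node, {})["mem"] = int(value)
    # Every vCPU and all memory must be assigned exactly once.
    all_cpus = set().union(*(n["cpus"] for n in nodes.values()))
    assert all_cpus == set(range(vcpus)), "every vCPU must map to one node"
    assert sum(n["mem"] for n in nodes.values()) == ram_mb, "memory must sum to flavor RAM"
    return nodes

specs = {
    "hw:numa_nodes": "2",
    "hw:numa_cpus.0": "0,1", "hw:numa_mem.0": "2048",
    "hw:numa_cpus.1": "2,3,4,5", "hw:numa_mem.1": "4096",
}
layout = numa_layout(specs, vcpus=6, ram_mb=6144)
print(layout[1]["mem"] // layout[0]["mem"])  # -> 2 (the 1:2 memory split)
```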
@@ -141,7 +143,7 @@ use a dedicated CPU policy. To force this, run:
 .. code-block:: console
 
-   # openstack flavor set m1.large --property hw:cpu_policy=dedicated
+   $ openstack flavor set m1.large --property hw:cpu_policy=dedicated
 
 .. caution::
@@ -157,7 +159,7 @@ sharing benefits performance, use thread siblings. To force this, run:
 .. code-block:: console
 
-   # openstack flavor set m1.large \
+   $ openstack flavor set m1.large \
      --property hw:cpu_policy=dedicated \
      --property hw:cpu_thread_policy=require
@@ -166,7 +168,7 @@ use non-thread siblings or non-SMT hosts. To force this, run:
 .. code-block:: console
 
-   # openstack flavor set m1.large \
+   $ openstack flavor set m1.large \
      --property hw:cpu_policy=dedicated \
      --property hw:cpu_thread_policy=isolate
@@ -175,7 +177,7 @@ siblings if available. This is the default, but it can be set explicitly:
 .. code-block:: console
 
-   # openstack flavor set m1.large \
+   $ openstack flavor set m1.large \
      --property hw:cpu_policy=dedicated \
      --property hw:cpu_thread_policy=prefer
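For readers of this change, the three thread policies in the hunks above differ only in how host CPUs are chosen relative to thread-sibling sets. An illustrative sketch of that selection logic (simplified; not nova's scheduler code, and it assumes a uniform SMT width):

```python
# Illustrative sketch (not nova's implementation): how the three
# hw:cpu_thread_policy values choose host CPUs given thread-sibling sets.
def pick_cpus(siblings, needed, policy):
    """siblings: list of per-core logical-CPU lists on an SMT host."""
    if policy == "require":
        # Place vCPUs on thread siblings; fail if full sibling sets can't cover.
        chosen = []
        for core in siblings:
            if len(chosen) >= needed:
                break
            chosen.extend(core)
        if len(chosen) < needed or needed % len(siblings[0]) != 0:
            raise ValueError("host cannot satisfy 'require'")
        return chosen[:needed]
    if policy == "isolate":
        # One thread per core; the sibling threads stay unused.
        chosen = [core[0] for core in siblings][:needed]
        if len(chosen) < needed:
            raise ValueError("host cannot satisfy 'isolate'")
        return chosen
    # "prefer": use siblings when available, otherwise any CPU (the default).
    flat = [cpu for core in siblings for cpu in core]
    return flat[:needed]

host = [[0, 4], [1, 5], [2, 6], [3, 7]]  # 4 cores x 2 SMT threads
print(pick_cpus(host, 4, "require"))  # -> [0, 4, 1, 5] (two full sibling pairs)
print(pick_cpus(host, 4, "isolate"))  # -> [0, 1, 2, 3] (one thread per core)
```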
@@ -189,25 +191,26 @@ image to use pinned vCPUs and avoid thread siblings, run:
 .. code-block:: console
 
-   # openstack image set [IMAGE_ID] \
+   $ openstack image set [IMAGE_ID] \
      --property hw_cpu_policy=dedicated \
      --property hw_cpu_thread_policy=isolate
 
-Image metadata takes precedence over flavor extra specs: configuring competing
-policies will result in an exception. By setting a ``shared`` policy through
-image metadata, administrators can prevent users configuring CPU policies in
-flavors and impacting resource utilization. To configure this policy, run:
+Image metadata takes precedence over flavor extra specs. Thus, configuring
+competing policies causes an exception. By setting a ``shared`` policy
+through image metadata, administrators can prevent users configuring CPU
+policies in flavors and impacting resource utilization. To configure this
+policy, run:
 
 .. code-block:: console
 
-   # openstack image set [IMAGE_ID] --property hw_cpu_policy=shared
+   $ openstack image set [IMAGE_ID] --property hw_cpu_policy=shared
 
 .. note::
 
    There is no correlation required between the NUMA topology exposed in the
    instance and how the instance is actually pinned on the host. This is by
-   design. See this `bug <https://bugs.launchpad.net/nova/+bug/1466780>`_ for
-   more information.
+   design. See this `invalid bug
+   <https://bugs.launchpad.net/nova/+bug/1466780>`_ for more information.
 
 For more information about image metadata, refer to the `Image metadata`_
 guide.
@@ -235,13 +238,13 @@ sockets. To configure a flavor to use a maximum of two sockets, run:
 .. code-block:: console
 
-   # openstack flavor set m1.large --property hw:cpu_sockets=2
+   $ openstack flavor set m1.large --property hw:cpu_sockets=2
 
 Similarly, to configure a flavor to use one core and one thread, run:
 
 .. code-block:: console
 
-   # openstack flavor set m1.large \
+   $ openstack flavor set m1.large \
      --property hw:cpu_cores=1 \
      --property hw:cpu_threads=1
@@ -260,14 +263,14 @@ and ``hw:cpu_threads``, refer to the `Flavors`_ guide.
 It is also possible to set upper limits on the number of sockets, cores, and
 threads used. Unlike the hard values above, it is not necessary for this exact
-number to used because it only provides a a limit. This can be used to provide
-some flexibility in scheduling, while ensuring certains limits are not
+number to be used because it only provides a limit. This can be used to provide
+some flexibility in scheduling, while ensuring certain limits are not
 exceeded. For example, to ensure no more than two sockets are defined in the
 instance topology, run:
 
 .. code-block:: console
 
-   # openstack flavor set m1.large --property=hw:cpu_max_sockets=2
+   $ openstack flavor set m1.large --property=hw:cpu_max_sockets=2
 
 For more information about the syntax for ``hw:cpu_max_sockets``,
 ``hw:cpu_max_cores``, and ``hw:cpu_max_threads``, refer to the `Flavors`_
@@ -280,7 +283,7 @@ request a two-socket, four-core per socket topology, run:
 .. code-block:: console
 
-   # openstack image set [IMAGE_ID] \
+   $ openstack image set [IMAGE_ID] \
      --property hw_cpu_sockets=2 \
      --property hw_cpu_cores=4
@@ -290,7 +293,7 @@ maximum of one thread, run:
 .. code-block:: console
 
-   # openstack image set [IMAGE_ID] \
+   $ openstack image set [IMAGE_ID] \
      --property hw_cpu_max_sockets=2 \
      --property hw_cpu_max_threads=1
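As a side note, the `hw_cpu_max_*` limits in this hunk constrain which factorizations of the vCPU count remain valid guest topologies. An illustrative sketch of that idea (a hypothetical helper, not nova's exact selection algorithm):

```python
# Illustrative (not nova code): enumerate guest (sockets, cores, threads)
# factorizations of a vCPU count that respect hw_cpu_max_*-style upper limits.
def valid_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    topos = []
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % sockets:
            continue
        for cores in range(1, min(vcpus // sockets, max_cores) + 1):
            if (vcpus // sockets) % cores:
                continue
            threads = vcpus // sockets // cores
            if threads <= max_threads:
                topos.append((sockets, cores, threads))
    return topos

# An 8-vCPU guest limited to at most 2 sockets and 1 thread leaves only
# single-threaded layouts:
print(valid_topologies(8, max_sockets=2, max_threads=1))
# -> [(1, 8, 1), (2, 4, 1)]
```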


@@ -207,6 +207,7 @@ redirect 301 /admin-guide/cli_nova_specify_host.html /admin-guide/cli-nova-speci
 redirect 301 /admin-guide/cli_set_compute_quotas.html /admin-guide/cli-set-compute-quotas.html
 redirect 301 /admin-guide/cli_set_quotas.html /admin-guide/cli-set-quotas.html
 redirect 301 /admin-guide/compute_arch.html /admin-guide/compute-arch.html
+redirect 301 /admin-guide/compute-numa-cpu-pinning.html /admin-guide/compute-cpu-topologies.html
 redirect 301 /admin-guide/cross_project.html /admin-guide/cross-project.html
 redirect 301 /admin-guide/cross_project_cors.html /admin-guide/cross-project-cors.html
 redirect 301 /admin-guide/dashboard_admin_manage_roles.html /admin-guide/dashboard-admin-manage-roles.html