docs: Add a real-time guide

This beefy patch closes a long-standing TODO and allows us to move yet
more information out of the flavors guide and into specific documents.
This, combined with existing documentation in place, means we can remove
the sections for various extra specs from the 'user/flavors' guide:

- hw:cpu_realtime            -> doc/source/admin/real-time.rst
- hw:cpu_realtime_mask       -> doc/source/admin/real-time.rst
- hw:emulator_threads_policy -> doc/source/admin/cpu-topologies.rst
- hw:cpu_policy              -> doc/source/admin/cpu-topologies.rst
- hw:cpu_thread_policy       -> doc/source/admin/cpu-topologies.rst
- hw:cpu_sockets             -> doc/source/admin/cpu-topologies.rst
- hw:cpu_cores               -> doc/source/admin/cpu-topologies.rst
- hw:cpu_threads             -> doc/source/admin/cpu-topologies.rst
- hw:cpu_max_sockets         -> doc/source/admin/cpu-topologies.rst
- hw:cpu_max_cores           -> doc/source/admin/cpu-topologies.rst
- hw:cpu_max_threads         -> doc/source/admin/cpu-topologies.rst
- hw:numa_nodes              -> doc/source/admin/cpu-topologies.rst
- hw:numa_cpus.N             -> doc/source/admin/cpu-topologies.rst
- hw:numa_mem.N              -> doc/source/admin/cpu-topologies.rst
- hw:mem_page_size           -> doc/source/admin/huge-pages.rst

Multiple improvements to the libvirt extra spec docs are included here,
for want of a better place to include them.

Change-Id: I02b044f8246f4a42481bb5f00259842692b29b71
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Stephen Finucane 2021-02-22 12:52:17 +00:00
parent 8528eaa602
commit 777c02485f
7 changed files with 270 additions and 348 deletions


@ -56,6 +56,8 @@ VCPU
Resource class representing a unit of CPU resources for a single guest
approximating the processing power of a single physical processor.
.. _numa-topologies:
Customizing instance NUMA placement policies
--------------------------------------------
@ -113,12 +115,14 @@ filter must be enabled. Details on this filter are provided in
When used, NUMA awareness allows the operating system of the instance to
intelligently schedule the workloads that it runs and minimize cross-node
memory bandwidth. To configure guest NUMA nodes, you can use the
:nova:extra-spec:`hw:numa_nodes` flavor extra spec.
For example, to restrict an instance's vCPUs to a single host NUMA node,
run:
.. code-block:: console
$ openstack flavor set $FLAVOR --property hw:numa_nodes=1
Some workloads have very demanding requirements for memory access latency or
bandwidth that exceed the memory bandwidth available from a single NUMA node.
@ -129,34 +133,56 @@ nodes, run:
.. code-block:: console
$ openstack flavor set $FLAVOR --property hw:numa_nodes=2
The allocation of instance vCPUs and memory from different host NUMA nodes can
be configured. This allows for asymmetric allocation of vCPUs and memory, which
can be important for some workloads. You can configure the allocation of
instance vCPUs and memory across each **guest** NUMA node using the
:nova:extra-spec:`hw:numa_cpus.{id}` and :nova:extra-spec:`hw:numa_mem.{id}`
extra specs respectively.
For example, to spread the 6 vCPUs and 6 GB of memory
of an instance across two NUMA nodes and create an asymmetric 1:2 vCPU and
memory mapping between the two nodes, run:
.. code-block:: console
$ openstack flavor set $FLAVOR --property hw:numa_nodes=2
# configure guest node 0
$ openstack flavor set $FLAVOR \
--property hw:numa_cpus.0=0,1 \
--property hw:numa_mem.0=2048
# configure guest node 1
$ openstack flavor set $FLAVOR \
--property hw:numa_cpus.1=2,3,4,5 \
--property hw:numa_mem.1=4096
.. note::
The ``{id}`` parameter is an index of *guest* NUMA nodes and may not
correspond to *host* NUMA nodes. For example, on a platform with two NUMA
nodes, the scheduler may opt to place guest NUMA node 0, as referenced in
``hw:numa_mem.0``, on host NUMA node 1 and vice versa. Similarly, the
CPU bitmask specified in the value for ``hw:numa_cpus.{id}`` refers to
*guest* vCPUs and may not correspond to *host* CPUs. As such, this feature
cannot be used to constrain instances to specific host CPUs or NUMA nodes.
.. warning::
If the combined values of ``hw:numa_cpus.{id}`` or ``hw:numa_mem.{id}``
are greater than the available number of CPUs or amount of memory
respectively, an exception will be raised.
.. note::
Hyper-V does not support asymmetric NUMA topologies, and the Hyper-V
driver will not spawn instances with such topologies.
For more information about the syntax for ``hw:numa_nodes``, ``hw:numa_cpus.N``
and ``hw:numa_mem.N``, refer to :doc:`/configuration/extra-specs`.
.. _cpu-pinning-policies:
Customizing instance CPU pinning policies
-----------------------------------------
@ -189,14 +215,16 @@ CPUs can use the CPUs of another pinned instance, thus preventing resource
contention between instances.
CPU pinning policies can be used to determine whether an instance should be
pinned or not. They can be configured using the
:nova:extra-spec:`hw:cpu_policy` extra spec and equivalent image metadata
property. There are three policies: ``dedicated``, ``mixed`` and
``shared`` (the default). The ``dedicated`` CPU policy is used to specify
that all CPUs of an instance should use pinned CPUs. To configure a flavor to
use the ``dedicated`` CPU policy, run:
.. code-block:: console
$ openstack flavor set $FLAVOR --property hw:cpu_policy=dedicated
This works by ensuring ``PCPU`` allocations are used instead of ``VCPU``
allocations. As such, it is also possible to request this resource type
@ -204,9 +232,9 @@ explicitly. To configure this, run:
.. code-block:: console
$ openstack flavor set $FLAVOR --property resources:PCPU=N
(where ``N`` is the number of vCPUs defined in the flavor).
.. note::
@ -218,28 +246,28 @@ use pinned CPUs. To configure a flavor to use the ``shared`` CPU policy, run:
.. code-block:: console
$ openstack flavor set $FLAVOR --property hw:cpu_policy=shared
The ``mixed`` CPU policy is used to specify that an instance use pinned CPUs
along with unpinned CPUs. The instance pinned CPU could be specified in the
:nova:extra-spec:`hw:cpu_dedicated_mask` or, if :doc:`real-time <real-time>` is
enabled, in the :nova:extra-spec:`hw:cpu_realtime_mask` extra spec. For
example, to configure a flavor to use the ``mixed`` CPU policy with 4 vCPUs in
total and the first 2 vCPUs as pinned CPUs, run:
.. code-block:: console
$ openstack flavor set $FLAVOR \
--vcpus=4 \
--property hw:cpu_policy=mixed \
--property hw:cpu_dedicated_mask=0-1
To configure a flavor to use the ``mixed`` CPU policy with 4 vCPUs in total and
the first 2 vCPUs as pinned **real-time** CPUs, run:
.. code-block:: console
$ openstack flavor set $FLAVOR \
--vcpus=4 \
--property hw:cpu_policy=mixed \
--property hw:cpu_realtime=yes \
@ -249,7 +277,12 @@ To create the mixed instance with the real-time extra specs, run:
For more information about the syntax for ``hw:cpu_policy``,
``hw:cpu_dedicated_mask``, ``hw:cpu_realtime`` and ``hw:cpu_realtime_mask``,
refer to :doc:`/configuration/extra-specs`.
.. note::
For more information about real-time functionality, refer to the
:doc:`documentation <real-time>`.
It is also possible to configure the CPU policy via image metadata. This can
be useful when packaging applications that require real-time or near real-time
@ -259,13 +292,13 @@ policy, run:
.. code-block:: console
$ openstack image set $IMAGE --property hw_cpu_policy=dedicated
Likewise, to configure an image to use the ``shared`` CPU policy, run:
.. code-block:: console
$ openstack image set $IMAGE --property hw_cpu_policy=shared
.. note::
@ -312,7 +345,7 @@ performance, you can request hosts **with** SMT. To configure this, run:
.. code-block:: console
$ openstack flavor set $FLAVOR \
--property hw:cpu_policy=dedicated \
--property hw:cpu_thread_policy=require
@ -322,7 +355,7 @@ request this trait explicitly. To configure this, run:
.. code-block:: console
$ openstack flavor set $FLAVOR \
--property resources:PCPU=N \
--property trait:HW_CPU_HYPERTHREADING=required
@ -331,7 +364,7 @@ you can request hosts **without** SMT. To configure this, run:
.. code-block:: console
$ openstack flavor set $FLAVOR \
--property hw:cpu_policy=dedicated \
--property hw:cpu_thread_policy=isolate
@ -341,7 +374,7 @@ possible to request this trait explicitly. To configure this, run:
.. code-block:: console
$ openstack flavor set $FLAVOR \
--property resources:PCPU=N \
--property trait:HW_CPU_HYPERTHREADING=forbidden
@ -351,7 +384,7 @@ is the default, but it can be set explicitly:
.. code-block:: console
$ openstack flavor set $FLAVOR \
--property hw:cpu_policy=dedicated \
--property hw:cpu_thread_policy=prefer
@ -360,7 +393,7 @@ This does not utilize traits and, as such, there is no trait-based equivalent.
.. note::
For more information about the syntax for ``hw:cpu_thread_policy``, refer to
:doc:`/configuration/extra-specs`.
As with CPU policies, it is also possible to configure the CPU thread policy via
image metadata. This can be useful when packaging applications that require
@ -370,7 +403,7 @@ image are always pinned regardless of flavor. To configure an image to use the
.. code-block:: console
$ openstack image set $IMAGE \
--property hw_cpu_policy=dedicated \
--property hw_cpu_thread_policy=require
@ -378,7 +411,7 @@ Likewise, to configure an image to use the ``isolate`` CPU thread policy, run:
.. code-block:: console
$ openstack image set $IMAGE \
--property hw_cpu_policy=dedicated \
--property hw_cpu_thread_policy=isolate
@ -386,7 +419,7 @@ Finally, to configure an image to use the ``prefer`` CPU thread policy, run:
.. code-block:: console
$ openstack image set $IMAGE \
--property hw_cpu_policy=dedicated \
--property hw_cpu_thread_policy=prefer
@ -400,6 +433,8 @@ an exception will be raised.
For more information about image metadata, refer to the `Image metadata`_
guide.
.. _emulator-thread-pinning-policies:
Customizing instance emulator thread pinning policies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -429,25 +464,45 @@ the ``isolate`` emulator thread policy, run:
.. code-block:: console
$ openstack flavor set $FLAVOR \
--property hw:cpu_policy=dedicated \
--property hw:emulator_threads_policy=isolate
The ``share`` policy is used to specify that emulator threads from a given
instance should be run on the pool of host cores listed in
:oslo.config:option:`compute.cpu_shared_set`, if configured; otherwise, they
run across all pCPUs of the instance.
To configure a flavor to use the ``share`` emulator thread policy, run:
.. code-block:: console
$ openstack flavor set $FLAVOR \
--property hw:cpu_policy=dedicated \
--property hw:emulator_threads_policy=share
The above behavior can be summarized in this helpful table:
.. list-table::
:header-rows: 1
:stub-columns: 1
* -
- :oslo.config:option:`compute.cpu_shared_set` set
- :oslo.config:option:`compute.cpu_shared_set` unset
* - ``hw:emulator_threads_policy`` unset (default)
- Pinned to all of the instance's pCPUs
- Pinned to all of the instance's pCPUs
* - ``hw:emulator_threads_policy`` = ``share``
- Pinned to :oslo.config:option:`compute.cpu_shared_set`
- Pinned to all of the instance's pCPUs
* - ``hw:emulator_threads_policy`` = ``isolate``
- Pinned to a single pCPU distinct from the instance's pCPUs
- Pinned to a single pCPU distinct from the instance's pCPUs
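For the ``share`` policy to use a dedicated pool of host cores, the
:oslo.config:option:`compute.cpu_shared_set` option must be configured on the
compute node. As a rough sketch of that host-side configuration (the core
ranges are arbitrary examples, and ``crudini`` is only one of several ways to
edit ``nova.conf``; restart the ``nova-compute`` service after changing it):

.. code-block:: console

   # crudini --set /etc/nova/nova.conf compute cpu_shared_set 0-3
   # crudini --set /etc/nova/nova.conf compute cpu_dedicated_set 4-15
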
.. note::
For more information about the syntax for ``hw:emulator_threads_policy``,
refer to :nova:extra-spec:`the documentation <hw:emulator_threads_policy>`.
Customizing instance CPU topologies
-----------------------------------
@ -476,17 +531,17 @@ sockets.
Some workloads benefit from a custom topology. For example, in some operating
systems, a different license may be needed depending on the number of CPU
sockets. To configure a flavor to use two sockets, run:
.. code-block:: console
$ openstack flavor set $FLAVOR --property hw:cpu_sockets=2
Similarly, to configure a flavor to use one core and one thread, run:
.. code-block:: console
$ openstack flavor set $FLAVOR \
--property hw:cpu_cores=1 \
--property hw:cpu_threads=1
@ -501,22 +556,25 @@ Similarly, to configure a flavor to use one core and one thread, run:
with ten cores fails.
For more information about the syntax for ``hw:cpu_sockets``, ``hw:cpu_cores``
and ``hw:cpu_threads``, refer to :doc:`/configuration/extra-specs`.
It is also possible to set upper limits on the number of sockets, cores, and
threads used. Unlike the hard values above, it is not necessary for this exact
number to be used because it only provides a limit. This can be used to provide
some flexibility in scheduling, while ensuring certain limits are not
exceeded. For example, to ensure no more than two sockets, eight cores and one
thread are defined in the instance topology, run:
.. code-block:: console
$ openstack flavor set $FLAVOR \
--property hw:cpu_max_sockets=2 \
--property hw:cpu_max_cores=8 \
--property hw:cpu_max_threads=1
For more information about the syntax for ``hw:cpu_max_sockets``,
``hw:cpu_max_cores``, and ``hw:cpu_max_threads``, refer to
:doc:`/configuration/extra-specs`.
Applications are frequently packaged as images. For applications that prefer
certain CPU topologies, configure image metadata to hint that created instances
@ -525,7 +583,7 @@ request a two-socket, four-core per socket topology, run:
.. code-block:: console
$ openstack image set $IMAGE \
--property hw_cpu_sockets=2 \
--property hw_cpu_cores=4
@ -535,7 +593,7 @@ maximum of one thread, run:
.. code-block:: console
$ openstack image set $IMAGE \
--property hw_cpu_max_sockets=2 \
--property hw_cpu_max_threads=1


@ -56,6 +56,7 @@ Enabling huge pages on the host
-------------------------------
.. important::
Huge pages may not be used on a host configured for file-backed memory. See
:doc:`file-backed-memory` for details.
@ -163,7 +164,7 @@ By default, an instance does not use huge pages for its underlying memory.
However, huge pages can bring important or required performance improvements
for some workloads. Huge pages must be requested explicitly through the use of
flavor extra specs or image metadata. To request an instance use huge pages,
you can use the :nova:extra-spec:`hw:mem_page_size` flavor extra spec:
.. code-block:: console
@ -205,7 +206,7 @@ run:
$ openstack flavor set m1.large --property hw:mem_page_size=any
For more information about the syntax for ``hw:mem_page_size``, refer to
:nova:extra-spec:`the documentation <hw:mem_page_size>`.
Applications are frequently packaged as images. For applications that require
the IO performance improvements that huge pages provides, configure image

View File

@ -107,6 +107,7 @@ instance for these kind of workloads.
pci-passthrough
cpu-topologies
real-time
huge-pages
virtual-gpu
file-backed-memory


@ -187,8 +187,9 @@ with ``provider:physical_network=foo`` must be scheduled on host cores from
NUMA nodes 0, while instances using one or more networks with
``provider:physical_network=bar`` must be scheduled on host cores from both
NUMA nodes 2 and 3. For the latter case, it will be necessary to split the
guest across two or more host NUMA nodes using the
:nova:extra-spec:`hw:numa_nodes` extra spec, as discussed :ref:`here
<numa-topologies>`.
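For example, a flavor for the latter instances might simply request two guest
NUMA nodes (a minimal sketch, with ``$FLAVOR`` as a placeholder):

.. code-block:: console

   $ openstack flavor set $FLAVOR --property hw:numa_nodes=2
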
Now, take an example for a deployment using L3 networks.


@ -0,0 +1,153 @@
=========
Real Time
=========
.. versionadded:: 13.0.0 (Mitaka)
Nova supports configuring `real-time policies`__ for instances. This builds upon
the improved performance offered by :doc:`CPU pinning <cpu-topologies>` by
providing stronger guarantees for worst case scheduler latency for vCPUs.
.. __: https://en.wikipedia.org/wiki/Real-time_computing
Enabling Real-Time
------------------
Currently the creation of real-time instances is only supported when using the
libvirt compute driver with a :oslo.config:option:`libvirt.virt_type` of
``kvm`` or ``qemu``. It requires extensive configuration of the host and this
document provides but a rough overview of the changes required. Configuration
will vary depending on your hardware, BIOS configuration, host and guest
operating systems, and applications.
BIOS configuration
~~~~~~~~~~~~~~~~~~
Configure your host BIOS as recommended in the `rt-wiki`__ page.
The most important steps are:
- Disable power management, including CPU sleep states
- Disable SMT (hyper-threading) or any option related to logical processors
These are standard steps used in benchmarking as both sets of features can
result in non-deterministic behavior.
.. __: https://rt.wiki.kernel.org/index.php/HOWTO:_Build_an_RT-application
OS configuration
~~~~~~~~~~~~~~~~
This is inherently specific to the distro used, however, there are some common
steps:
- Install the real-time (preemptible) kernel (``PREEMPT_RT_FULL``) and
real-time KVM modules
- Configure hugepages
- Isolate host cores to be used for instances from the kernel
- Disable features like CPU frequency scaling (e.g. P-States on Intel
processors)
RHEL and RHEL-derived distros like CentOS provide packages in their
repositories to accomplish this. The ``kernel-rt`` and ``kernel-rt-kvm``
packages will provide the real-time kernel and real-time KVM module,
respectively, while the ``tuned-profiles-realtime`` package will provide
`tuned`__ profiles to configure the host for real-time workloads. You should
refer to your distro documentation for more information.
.. __: https://tuned-project.org/
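As a rough sketch, on a RHEL-derived host the ``realtime`` tuned profile might
be installed and applied as follows (the package manager, profile name and
isolated core range are examples and must be adapted to your distro and
hardware):

.. code-block:: console

   # dnf install -y tuned-profiles-realtime
   # echo "isolated_cores=4-15" >> /etc/tuned/realtime-variables.conf
   # tuned-adm profile realtime
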
Validation
~~~~~~~~~~
Once your BIOS and the host OS have been configured, you can validate
"real-time readiness" using the ``hwlatdetect`` and ``rteval`` utilities. On
RHEL and RHEL-derived hosts, you can install these using the ``rt-tests``
package. More information about the ``rteval`` tool can be found `here`__.
.. __: https://git.kernel.org/pub/scm/utils/rteval/rteval.git/tree/README
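For example, a short validation run might look like the following (the
durations and threshold are arbitrary examples):

.. code-block:: console

   # hwlatdetect --duration=60 --threshold=10
   # rteval --duration=30m
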
Configuring a flavor or image
-----------------------------
.. versionchanged:: 22.0.0 (Victoria)
Previously, it was necessary to specify
:nova:extra-spec:`hw:cpu_realtime_mask` when
:nova:extra-spec:`hw:cpu_realtime` was set to ``true``.
Starting in Victoria, it is possible
to omit this when an emulator thread policy is configured using the
:nova:extra-spec:`hw:emulator_threads_policy` extra spec, thus allowing all
guest cores to be allocated as real-time cores.
.. versionchanged:: 22.0.0 (Victoria)
Previously, a leading caret was necessary when specifying the value for
:nova:extra-spec:`hw:cpu_realtime_mask` and omitting it would be equivalent
to not setting the mask, resulting in a failure to spawn the instance.
Compared to configuring the host, configuring the guest is relatively trivial
and merely requires a combination of flavor extra specs and image metadata
properties, along with a suitable real-time guest OS.
Enable real-time by setting the :nova:extra-spec:`hw:cpu_realtime` flavor extra
spec to ``true``. When this is configured, it is necessary to specify where
guest overhead processes should be scheduled to. This can be accomplished in
one of three ways. Firstly, the :nova:extra-spec:`hw:cpu_realtime_mask` extra
spec or equivalent image metadata property can be used to indicate which guest
cores should be scheduled as real-time cores, leaving the remainder to be
scheduled as non-real-time cores and to handle overhead processes. For example,
to allocate the first two cores of an 8 core instance as the non-real-time
cores:
.. code-block:: console
$ openstack flavor set $FLAVOR \
--property hw:cpu_realtime=yes \
--property hw:cpu_realtime_mask=2-7 # so 0,1 are non-real-time
In this configuration, any non-real-time cores configured will have an implicit
``dedicated`` :ref:`CPU pinning policy <cpu-pinning-policies>` applied. It is
possible to apply a ``shared`` policy for these non-real-time cores by
specifying the ``mixed`` :ref:`CPU pinning policy <cpu-pinning-policies>` via
the :nova:extra-spec:`hw:cpu_policy` extra spec. This can be useful to increase
resource utilization of the host. For example:
.. code-block:: console
$ openstack flavor set $FLAVOR \
--property hw:cpu_policy=mixed \
--property hw:cpu_realtime=yes \
--property hw:cpu_realtime_mask=2-7 # so 0,1 are non-real-time and unpinned
Finally, you can explicitly :ref:`offload guest overhead processes to another
host core <emulator-thread-pinning-policies>` using the
:nova:extra-spec:`hw:emulator_threads_policy` extra spec. For example:
.. code-block:: console
$ openstack flavor set $FLAVOR \
--property hw:cpu_realtime=yes \
--property hw:emulator_threads_policy=share
.. note::
Emulator thread pinning requires additional host configuration.
Refer to :ref:`the documentation <emulator-thread-pinning-policies>` for
more information.
In addition to configuring the instance CPUs, it is also likely that you will
need to configure guest huge pages. For information on how to configure these,
refer to :doc:`the documentation <huge-pages>`.
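As a minimal illustration (host-side huge page configuration is still
required, as described in that guide), the flavor-side request might look
like:

.. code-block:: console

   $ openstack flavor set $FLAVOR --property hw:mem_page_size=large
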
References
----------
* `Libvirt real time instances (spec)`__
* `The Real Time Linux collaborative project`__
* `Deploying Real Time OpenStack`__
.. __: https://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/libvirt-real-time.html
.. __: https://wiki.linuxfoundation.org/realtime/start
.. __: https://that.guru/blog/deploying-real-time-openstack/


@ -114,9 +114,8 @@ driver-specific.
~~~~~~~~~
The following extra specs are used to configure quotas for various
paravirtualized devices. Different quotas are supported by different virt
drivers, as noted below.
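For example, a sketch of capping disk read bandwidth with one of the
libvirt-only quotas (the value is an arbitrary example; the listing below
details which drivers support which quotas):

.. code-block:: console

   $ openstack flavor set $FLAVOR --property quota:disk_read_bytes_sec=10240000
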
.. extra-specs:: quota


@ -184,103 +184,6 @@ Performance Monitoring Unit (vPMU)
required, such workloads should set ``hw:pmu=False``. For most workloads
the default of unset or enabling the vPMU ``hw:pmu=True`` will be correct.
.. _extra-specs-cpu-topology:
CPU topology
For the libvirt driver, you can define the topology of the processors in the
virtual machine using properties. The properties with ``max`` limit the
number that can be selected by the user with image properties.
.. code-block:: console
$ openstack flavor set FLAVOR-NAME \
--property hw:cpu_sockets=FLAVOR-SOCKETS \
--property hw:cpu_cores=FLAVOR-CORES \
--property hw:cpu_threads=FLAVOR-THREADS \
--property hw:cpu_max_sockets=FLAVOR-SOCKETS \
--property hw:cpu_max_cores=FLAVOR-CORES \
--property hw:cpu_max_threads=FLAVOR-THREADS
Where:
- FLAVOR-SOCKETS: (integer) The number of sockets for the guest VM. By
default, this is set to the number of vCPUs requested.
- FLAVOR-CORES: (integer) The number of cores per socket for the guest VM. By
default, this is set to ``1``.
- FLAVOR-THREADS: (integer) The number of threads per core for the guest VM.
By default, this is set to ``1``.
.. _extra-specs-cpu-policy:
CPU pinning policy
For the libvirt driver, you can pin the virtual CPUs (vCPUs) of instances to
the host's physical CPU cores (pCPUs) using properties. You can further
refine this by stating how hardware CPU threads in a simultaneous
multithreading-based (SMT) architecture be used. These configurations will
result in improved per-instance determinism and performance.
.. note::
SMT-based architectures include Intel processors with Hyper-Threading
technology. In these architectures, processor cores share a number of
components with one or more other cores. Cores in such architectures are
commonly referred to as hardware threads, while the cores that a given
core share components with are known as thread siblings.
.. note::
Host aggregates should be used to separate these pinned instances from
unpinned instances as the latter will not respect the resourcing
requirements of the former.
.. code:: console
$ openstack flavor set FLAVOR-NAME \
--property hw:cpu_policy=CPU-POLICY \
--property hw:cpu_thread_policy=CPU-THREAD-POLICY
Valid CPU-POLICY values are:
- ``shared``: (default) The guest vCPUs will be allowed to freely float
across host pCPUs, albeit potentially constrained by NUMA policy.
- ``dedicated``: The guest vCPUs will be strictly pinned to a set of host
pCPUs. In the absence of an explicit vCPU topology request, the drivers
typically expose all vCPUs as sockets with one core and one thread. When
strict CPU pinning is in effect the guest CPU topology will be setup to
match the topology of the CPUs to which it is pinned. This option implies
an overcommit ratio of 1.0. For example, if a two vCPU guest is pinned to a
single host core with two threads, then the guest will get a topology of
one socket, one core, two threads.
- ``mixed``: This policy will create an instance combined with the ``shared``
policy vCPUs and ``dedicated`` policy vCPUs, as a result, some guest vCPUs
will be freely float across host pCPUs and the rest of guest vCPUs will be
pinned to host pCPUs. The pinned guest vCPUs are configured using the
``hw:cpu_dedicated_mask`` extra spec.
.. note::
The ``hw:cpu_dedicated_mask`` option is only valid if ``hw:cpu_policy``
is set to ``mixed`` and cannot be configured with ``hw:cpu_realtime_mask``
at the same time.
Valid CPU-THREAD-POLICY values are:
- ``prefer``: (default) The host may or may not have an SMT architecture.
Where an SMT architecture is present, thread siblings are preferred.
- ``isolate``: The host must not have an SMT architecture or must emulate a
non-SMT architecture. Hosts that support SMT (by reporting the
``HW_CPU_HYPERTHREADING`` trait) are excluded.
- ``require``: The host must have an SMT architecture and must report the
``HW_CPU_HYPERTHREADING`` trait. Each vCPU is allocated on thread siblings.
If the host does not have an SMT architecture, then it is not used. If the
host has an SMT architecture, but not enough cores with free thread
siblings are available, then scheduling fails.
.. note::
The ``hw:cpu_thread_policy`` option is valid if ``hw:cpu_policy`` is set
to ``dedicated`` or ``mixed``.
.. _pci_numa_affinity_policy:
PCI NUMA Affinity Policy
@ -336,60 +239,6 @@ PCI NUMA Affinity Policy
* There is no information about PCI-NUMA affinity available
.. _extra-specs-numa-topology:
NUMA topology
For the libvirt driver, you can define the host NUMA placement for the
instance vCPU threads as well as the allocation of instance vCPUs and memory
from the host NUMA nodes. For flavors whose memory and vCPU allocations are
larger than the size of NUMA nodes in the compute hosts, the definition of a
NUMA topology allows hosts to better utilize NUMA and improve performance of
the instance OS.
.. code-block:: console
$ openstack flavor set FLAVOR-NAME \
--property hw:numa_nodes=FLAVOR-NODES \
--property hw:numa_cpus.N=FLAVOR-CORES \
--property hw:numa_mem.N=FLAVOR-MEMORY
Where:
- FLAVOR-NODES: (integer) The number of host NUMA nodes to restrict execution
of instance vCPU threads to. If not specified, the vCPU threads can run on
any number of the host NUMA nodes available.
- N: (integer) The instance NUMA node to apply a given CPU or memory
configuration to, where N is in the range ``0`` to ``FLAVOR-NODES - 1``.
- FLAVOR-CORES: (comma-separated list of integers) A list of instance vCPUs
to map to instance NUMA node N. If not specified, vCPUs are evenly divided
among available NUMA nodes.
- FLAVOR-MEMORY: (integer) The number of MB of instance memory to map to
instance NUMA node N. If not specified, memory is evenly divided among
available NUMA nodes.
.. note::
``hw:numa_cpus.N`` and ``hw:numa_mem.N`` are only valid if
``hw:numa_nodes`` is set. Additionally, they are only required if the
instance's NUMA nodes have an asymmetrical allocation of CPUs and RAM
(important for some NFV workloads).
.. note::
The ``N`` parameter is an index of *guest* NUMA nodes and may not
correspond to *host* NUMA nodes. For example, on a platform with two NUMA
nodes, the scheduler may opt to place guest NUMA node 0, as referenced in
``hw:numa_mem.0`` on host NUMA node 1 and vice versa. Similarly, the
integers used for ``FLAVOR-CORES`` are indexes of *guest* vCPUs and may
not correspond to *host* CPUs. As such, this feature cannot be used to
constrain instances to specific host CPUs or NUMA nodes.
.. warning::
If the combined values of ``hw:numa_cpus.N`` or ``hw:numa_mem.N`` are
greater than the available number of CPUs or memory respectively, an
exception is raised.
.. _extra-specs-memory-encryption:
Hardware encryption of guest memory
@ -402,146 +251,6 @@ Hardware encryption of guest memory
$ openstack flavor set FLAVOR-NAME \
--property hw:mem_encryption=True
.. _extra-specs-realtime-policy:
CPU real-time policy
For the libvirt driver, you can state that one or more of your instance
virtual CPUs (vCPUs), though not all of them, run with a real-time policy.
When used on a correctly configured host, this provides stronger guarantees
for worst case scheduler latency for vCPUs and is a requirement for certain
applications.
.. todo::
Document the required steps to configure hosts and guests. There are a lot
of things necessary, from isolating hosts and configuring the
``[compute] cpu_dedicated_set`` nova configuration option on the host, to
choosing a correctly configured guest image.
.. important::
While most of your instance vCPUs can run with a real-time policy, you must
either mark at least one vCPU as non-real-time to be account for emulator
overhead (housekeeping) or explicitly configure an :ref:`emulator thread
policy <extra-specs-emulator-threads-policy>`.
.. important::
To use this extra spec, you must enable pinned CPUs. Refer to
:ref:`CPU policy <extra-specs-cpu-policy>` for more information.
.. code:: console
$ openstack flavor set FLAVOR-NAME \
--property hw:cpu_realtime=CPU-REALTIME-POLICY \
--property hw:cpu_realtime_mask=CPU-REALTIME-MASK
Where:
CPU-REALTIME-POLICY (enum):
One of:
- ``no``: (default) The guest vCPUs will not have a real-time policy
- ``yes``: The guest vCPUs will have a real-time policy
CPU-REALTIME-MASK (coremask):
A coremask indicating which vCPUs **will** or, if starting with a ``^``,
**will not** have a real-time policy. For example, a value of ``0-5``
indicates that vCPUs ``0`` to ``5`` will have a real-time policy.
Conversely, a value of ``^0-1`` indicates that all vCPUs *except* vCPUs
``0`` and ``1`` will have a real-time policy.
.. note::
The ``hw:cpu_realtime_mask`` option is only valid if ``hw:cpu_realtime``
is set to ``yes``.
.. versionchanged:: 22.0.0 (Victoria)
Previously, it was necessary to specify ``hw:cpu_realtime_mask`` when
``hw:cpu_realtime`` was set to yes. Starting in Victoria, it is possible
to omit this when an emulator thread policy is configured using the
``hw:emulator_threads_policy`` extra spec.
.. versionchanged:: 22.0.0 (Victoria)
Previously, the leading caret was necessary and omitting it would be
equivalent to not setting the mask, resulting in a failure to spawn
the instance.
.. _extra-specs-emulator-threads-policy:
Emulator threads policy
For the libvirt driver, you can assign a separate pCPU to an instance that
will be used for emulator threads, which are emulator processes not directly
related to the guest OS. This pCPU will used in addition to the pCPUs used
for the guest. This is generally required for use with a :ref:`real-time
workload <extra-specs-realtime-policy>`.
.. important::
To use this extra spec, you must enable pinned CPUs. Refer to :ref:`CPU
policy <extra-specs-cpu-policy>` for more information.
.. code:: console
$ openstack flavor set FLAVOR-NAME \
--property hw:emulator_threads_policy=THREAD-POLICY
The expected behavior of emulator threads depends on the value of the
``hw:emulator_threads_policy`` flavor extra spec and the value of
:oslo.config:option:`compute.cpu_shared_set`. It is presented in the
following table:
.. list-table::
:header-rows: 1
:stub-columns: 1
* -
- :oslo.config:option:`compute.cpu_shared_set` set
- :oslo.config:option:`compute.cpu_shared_set` unset
* - ``hw:emulator_treads_policy`` unset (default)
- Pinned to all of the instance's pCPUs
- Pinned to all of the instance's pCPUs
* - ``hw:emulator_threads_policy`` = ``share``
- Pinned to :oslo.config:option:`compute.cpu_shared_set`
- Pinned to all of the instance's pCPUs
* - ``hw:emulator_threads_policy`` = ``isolate``
- Pinned to a single pCPU distinct from the instance's pCPUs
- Pinned to a single pCPU distinct from the instance's pCPUs
.. _extra-specs-large-pages-allocation:
Large pages allocation
You can configure the size of large pages used to back the VMs.
.. code:: console
$ openstack flavor set FLAVOR-NAME \
--property hw:mem_page_size=PAGE_SIZE
Valid ``PAGE_SIZE`` values are:
- ``small``: (default) The smallest page size is used. Example: 4 KB on x86.
- ``large``: Only use larger page sizes for guest RAM. Example: either 2 MB
or 1 GB on x86.
- ``any``: It is left up to the compute driver to decide. In this case, the
libvirt driver might try to find large pages, but fall back to small pages.
Other drivers may choose alternate policies for ``any``.
- pagesize: (string) An explicit page size can be set if the workload has
specific requirements. This value can be an integer value for the page size
in KB, or can use any standard suffix. Example: ``4KB``, ``2MB``,
``2048``, ``1GB``.
.. note::
Large pages can be enabled for guest RAM without any regard to whether the
guest OS will use them or not. If the guest OS chooses not to use huge
pages, it will merely see small pages as before. Conversely, if a guest OS
does intend to use huge pages, it is very important that the guest RAM be
backed by huge pages. Otherwise, the guest OS will not be getting the
performance benefit it is expecting.
.. _extra-spec-pci-passthrough:
PCI passthrough