
.. WARNING: Add no lines of text between the label immediately following
   and the title.

.. _kubernetes-memory-manager-policies-3de9d87855bc:

==================================
Kubernetes Memory Manager Policies
==================================

Kubernetes memory manager policies manage memory allocation for pods with a
focus on |NUMA| topology and performance optimization. You can define the
policy using the **kube-memory-mgr-policy** host label via the |CLI|.

The **kube-memory-mgr-policy** host label supports the values ``none``
(default) and ``static``.

For example:

.. code-block:: none

    ~(keystone_admin)]$ system host-lock worker-1
    ~(keystone_admin)]$ system host-label-assign --overwrite worker-1 kube-memory-mgr-policy=static
    ~(keystone_admin)]$ system host-unlock worker-1
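
To confirm that the label has been applied, you can list the host's labels.
This sketch assumes the standard ``system host-label-list`` command is
available in your release:

.. code-block:: none

    ~(keystone_admin)]$ system host-label-list worker-1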

When set to ``static``, the policy ensures |NUMA|-aware memory allocation for
``Guaranteed`` |QoS| pods, reserving memory to meet their requirements and
reduce latency. Memory for system processes can also be reserved using the
kubelet ``--reserved-memory`` flag, enhancing node stability. For
``BestEffort`` and ``Burstable`` pods, no memory is reserved, and the default
topology hints are used.
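
For illustration, the following sketch shows a pod that receives the
``Guaranteed`` |QoS| class because its CPU and memory requests equal its
limits; the pod name, image, and resource sizes are hypothetical:

.. code-block:: yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: guaranteed-demo
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9
        resources:
          requests:
            cpu: "2"
            memory: 2Gi
          limits:
            cpu: "2"
            memory: 2Gi

With the ``static`` policy in effect, the memory manager pins this pod's
memory to the |NUMA| node or nodes selected through the topology manager
hints.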

This approach enables better performance for workloads that require predictable
memory usage, but requires careful configuration to ensure compatibility with
system resources.
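
For reference, in the upstream ``KubeletConfiguration`` format the ``Static``
policy and its reserved memory are expressed per |NUMA| node. The following is
a minimal sketch with hypothetical sizes, applicable where you manage the
kubelet configuration directly; on this platform the policy is set via the
host label shown above:

.. code-block:: yaml

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    memoryManagerPolicy: Static
    systemReserved:
      memory: 900Mi
    evictionHard:
      memory.available: 100Mi
    reservedMemory:
    - numaNode: 0
      limits:
        memory: 1000Mi

The upstream documentation notes that the totals under ``reservedMemory``
must match the node's overall memory reservation, that is, the sum of
kube-reserved, system-reserved, and the hard eviction threshold (here,
900Mi + 100Mi = 1000Mi).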

For configuration options and detailed examples, consult the Kubernetes
documentation at https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/.

-----------
Limitations
-----------

The interaction between the ``kube-memory-mgr-policy=static`` policy and the
topology manager policy ``restricted`` can cause pods not to be scheduled or
started, even when there is sufficient memory available. This is due to the
restrictive design of the |NUMA|-aware memory manager, which prevents the same
|NUMA| node from being used for both single- and multi-|NUMA| allocations. It
is important that you understand the implications of these memory management
policies and configure your systems accordingly to avoid unexpected failures.
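
If you suspect this interaction, inspecting the affected pod with standard
``kubectl`` commands can confirm it; the pod name below is hypothetical:

.. code-block:: none

    $ kubectl get pod guaranteed-demo
    $ kubectl describe pod guaranteed-demo

A pod rejected by the topology manager typically shows a
``TopologyAffinityError`` status, with the reason recorded in its events.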

For detailed configuration options and examples, refer to the Kubernetes
documentation at https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/.