
Use Kubernetes CPU Manager Static Policy

You can launch a container pinned to a particular set of CPU cores using a Kubernetes CPU manager static policy.

You must enable the CPU manager static policy on the target host before launching the pod; otherwise the pod's CPUs will not be pinned.
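
On StarlingX, the static CPU manager policy is typically enabled per host by assigning a host label while the host is locked. The following is a sketch, assuming the kube-cpu-mgr-policy host label is supported by your release and that worker-1 is the target node; consult your platform documentation for the exact procedure:

    % system host-lock worker-1
    % system host-label-assign worker-1 kube-cpu-mgr-policy=static
    % system host-unlock worker-1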

  1. Define a container running a CPU stress command.

    Note

    • The pod will be pinned to the allocated set of CPUs on the host and have exclusive use of those CPUs if <resources:requests:cpu> is equal to <resources:limits:cpu>.
    • Resource memory must also be specified for guaranteed resource allocation.
    • Processes within the pod can float across the set of CPUs allocated to the pod, unless the application in the pod explicitly pins them to a subset of the CPUs.

    For example:

    % cat <<EOF > stress-cpu-pinned.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: stress-ng-cpu
    spec:
      containers:
      - name: stress-ng-app
        image: alexeiled/stress-ng
        imagePullPolicy: IfNotPresent
        command: ["/stress-ng"]
        args: ["--cpu", "10", "--metrics-brief", "-v"]
        resources:
          requests:
            cpu: 2
            memory: "2Gi"
          limits:
            cpu: 2
            memory: "2Gi"
      nodeSelector:
        kubernetes.io/hostname: worker-1
    EOF

    You will likely need to adjust some of the values shown above to reflect your deployment configuration. For example, on an AIO-SX or AIO-DX system, worker-1 would become controller-0 or controller-1.

    The significant addition to this definition in support of CPU pinning is the resources section, which sets a CPU resource request and limit of 2.
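
    Optionally, you can validate the manifest without creating the pod. This assumes kubectl 1.18 or later, where the client-side dry run takes the form shown:

    % kubectl apply --dry-run=client -f stress-cpu-pinned.yaml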

  2. Apply the definition.

    % kubectl apply -f stress-cpu-pinned.yaml

    You can SSH to the worker node, run top, and press '1' to see per-core CPU utilization.
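
    You can also confirm the pinning from inside the pod by reading the CPU affinity of its main process. This assumes the container image includes grep and a readable /proc (true of most Linux images); the core list shown is illustrative:

    % kubectl exec stress-ng-cpu -- grep Cpus_allowed_list /proc/1/status
    Cpus_allowed_list:  10-11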

  3. Describe the pod or node to see the CPU Requests, CPU Limits, and confirm that the pod is in the "Guaranteed" QoS class.

    For example:

    % kubectl describe node worker-1
    Namespace                  Name           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
    ---------                  ----           ------------  ----------  ---------------  -------------  ---
    default                    stress-ng-cpu  2 (15%)       2 (15%)     2Gi (7%)         2Gi (7%)       9m31s
    
    % kubectl describe pod stress-ng-cpu
    ...
    QoS Class: Guaranteed
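
    The QoS class can also be queried directly from the pod's status, which is convenient for scripting:

    % kubectl get pod stress-ng-cpu -o jsonpath='{.status.qosClass}'
    Guaranteed
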
  4. Delete the pod.

    % kubectl delete -f stress-cpu-pinned.yaml
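
    To confirm that the pod is gone, query it by name; once deletion completes, the command reports NotFound:

    % kubectl get pod stress-ng-cpu
    Error from server (NotFound): pods "stress-ng-cpu" not found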