From 96d9bb4fc72cfbe12ca5467140d842c64f333a99 Mon Sep 17 00:00:00 2001
From: Adil
Date: Wed, 31 Mar 2021 13:28:25 -0300
Subject: [PATCH] New content for Metrics Server tutorials (user and admin)

Created 2 new topics: Kubernetes user tutorials metrics server and
Kubernetes admin tutorials metrics server.

Source of information: PDF file 'Documentation Metrics Server' covering
both topics (file attached in Jira card).

Patch 1: completed review comments by Ron
Patch 2: fixed broken links and malformed hyperlink targets at the
         beginning of the document; implemented comment from review (Mary)
Patch 3: implemented changes from review comments (Ron)
Patch 4: added missing link for review
Patch 5: implemented changes received from Greg
Patch 6: implemented suggested changes by Greg
Patch 7: implemented suggested changes by Mary
Patch 8: removed empty spaces
Patch 9: removed trailing spaces
Patch 10: removed trailing spaces
Patch 11: acted on Ron's comments; added 'release caveat'

Story: 2008457
Task: 42207

https://review.opendev.org/c/starlingx/docs/+/784121

Signed-off-by: Adil
Change-Id: Ia8db36b5764ab7375b77b83b99a9c03da7e3a8ae
---
 doc/source/admintasks/index.rst               |  10 +
 ...ernetes-admin-tutorials-metrics-server.rst |  81 ++++++++
 doc/source/usertasks/kubernetes/index.rst     |   9 +
 ...bernetes-user-tutorials-metrics-server.rst | 177 ++++++++++++++++++
 4 files changed, 277 insertions(+)
 create mode 100644 doc/source/admintasks/kubernetes-admin-tutorials-metrics-server.rst
 create mode 100644 doc/source/usertasks/kubernetes/kubernetes-user-tutorials-metrics-server.rst

diff --git a/doc/source/admintasks/index.rst b/doc/source/admintasks/index.rst
index 64365f52d..f58c1f1d1 100644
--- a/doc/source/admintasks/index.rst
+++ b/doc/source/admintasks/index.rst
@@ -62,3 +62,13 @@ CPU Manager for Kubernetes
    installing-and-running-cpu-manager-for-kubernetes
    removing-cpu-manager-for-kubernetes
    uninstalling-cpu-manager-for-kubernetes-on-ipv6
+
+--------------
+Metrics Server
+--------------
+
+.. toctree::
+   :maxdepth: 1
+
+   kubernetes-admin-tutorials-metrics-server
+
diff --git a/doc/source/admintasks/kubernetes-admin-tutorials-metrics-server.rst b/doc/source/admintasks/kubernetes-admin-tutorials-metrics-server.rst
new file mode 100644
index 000000000..b8f68d637
--- /dev/null
+++ b/doc/source/admintasks/kubernetes-admin-tutorials-metrics-server.rst
@@ -0,0 +1,81 @@

..
.. _kubernetes-admin-tutorials-metrics-server:

======================
Install Metrics Server
======================

|release-caveat|

.. rubric:: |context|

Metrics Server is a scalable, efficient source of container resource metrics
for the Kubernetes built-in autoscaling pipelines.

Metrics Server is meant for autoscaling purposes only. It is not intended to
provide metrics to monitoring solutions that persist and analyze historical
metrics.

Specifically, in |prod|, Metrics Server supports:

* Use of Kubernetes horizontal application autoscaling, based on resource
  consumption, to scale end users' containerized application deployments.
* Use of the Metrics Server API within end users' containerized applications,
  for example, to enable application-specific incoming load management
  mechanisms based on the metrics of selected pods.

For details on leveraging Metrics Server for horizontal autoscaling or for the
Metrics API, see :ref:`Kubernetes User Tasks <kubernetes-user-tutorials-metrics-server>`.

Metrics Server is an optional component of |prod|. It is packaged as a system
application and included in the |prod| installation ISO. To enable Metrics
Server, you must upload and apply the Metrics Server system application.

.. rubric:: |proc|

Perform the following steps to enable Metrics Server so that its services are
available to containerized applications for horizontal autoscaling and/or use
of the Metrics API.

#. Locate the application tarball ``metrics-server-1.0-1.tgz`` in
   ``/usr/local/share/applications/helm/``.

#. Upload the application tarball:

   .. code-block:: none

      ~(keystone_admin)]$ system application-upload metrics-server-1.0-1.tgz

#. List the applications to confirm that it was uploaded:

   .. code-block:: none

      ~(keystone_admin)]$ system application-list

#. Apply the Metrics Server application:

   .. code-block:: none

      ~(keystone_admin)]$ system application-apply metrics-server

#. List the applications again to confirm that it was applied:

   .. code-block:: none

      ~(keystone_admin)]$ system application-list

#. Run the following command to verify that the Metrics Server pod is running:

   .. code-block:: none

      ~(keystone_admin)]$ kubectl get pods -l app=metrics-server -n metrics-server

For details on leveraging Metrics Server for horizontal autoscaling or for the
Metrics API, see :ref:`Kubernetes User Tasks <kubernetes-user-tutorials-metrics-server>`.

After installing Metrics Server, the :command:`kubectl top` |CLI| command is
available to display the metrics collected by Metrics Server, including the
metrics used by any autoscaling definitions you have configured. These metrics
are also displayed in the Kubernetes Dashboard.
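For example, the following standard upstream :command:`kubectl top` commands
(a minimal illustration, not specific to |prod|) display the current resource
usage of the nodes and of the Metrics Server pods themselves:

.. code-block:: none

   # Show CPU and memory usage per node
   ~(keystone_admin)]$ kubectl top nodes

   # Show CPU and memory usage of the Metrics Server pods
   ~(keystone_admin)]$ kubectl top pods -n metrics-server

Any other namespace can be queried in the same way once Metrics Server is
running.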
For more information see:
`https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#top <https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#top>`__

diff --git a/doc/source/usertasks/kubernetes/index.rst b/doc/source/usertasks/kubernetes/index.rst
index 0a039a1ca..6215c2baf 100644
--- a/doc/source/usertasks/kubernetes/index.rst
+++ b/doc/source/usertasks/kubernetes/index.rst
@@ -130,3 +130,12 @@ CPU Manager for Kubernetes
    using-kubernetes-cpu-manager-static-policy
    using-intels-cpu-manager-for-kubernetes-cmk
    uninstalling-cmk
+
+**************
+Metrics Server
+**************
+
+.. toctree::
+   :maxdepth: 1
+
+   kubernetes-user-tutorials-metrics-server
\ No newline at end of file
diff --git a/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-metrics-server.rst b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-metrics-server.rst
new file mode 100644
index 000000000..24121e313
--- /dev/null
+++ b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-metrics-server.rst
@@ -0,0 +1,177 @@

..
.. _kubernetes-user-tutorials-metrics-server:

==============
Metrics Server
==============

|release-caveat|

Metrics Server is a scalable, efficient source of container resource metrics
for the Kubernetes built-in autoscaling pipelines.

It collects resource metrics from kubelets and exposes them in the Kubernetes
API server through the Metrics API, which can be used by Kubernetes Horizontal
Pod Autoscaler definitions. The Metrics API can also be used directly by end
users' containerized applications to, for example, enable application-specific
load management mechanisms.

The metrics collected by Metrics Server can be viewed with the
:command:`kubectl top` command, which is useful for debugging autoscaling
pipelines.

For more information see:
`https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands/ <https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands/>`__.

---------------------
Metrics API use cases
---------------------

**************************************************
Use kubectl autoscaler to scale pods automatically
**************************************************

.. rubric:: |context|

You can use the Kubernetes Horizontal Pod Autoscaler to scale a Kubernetes
deployment up and down based on load. Refer to the official example
`https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/ <https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/>`__
to create a PHP application that scales horizontally.

.. rubric:: |proc|

After the application deployment has completed, you can create a horizontal
pod autoscaling (hpa) definition for the deployment as follows:

#. Use the following command to turn on autoscaling:

   .. code-block:: none

      ~(keystone_admin)$ kubectl autoscale deployment <deployment-name> --cpu-percent=50 --min=1 --max=10

#. Use the following command to view the created horizontal pod autoscaler:

   .. code-block:: none

      ~(keystone_admin)$ kubectl get hpa

#. When the incoming load to your application deployment increases and the
   percentage of CPU for existing replicas exceeds the previously specified
   threshold, a new replica is created. For the PHP example above, use the
   following command to increase the incoming load:

   .. code-block:: none

      ~(keystone_admin)$ kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"

#. (Optional) Use the following commands to check whether replicas were created:

   .. code-block:: none

      ~(keystone_admin)$ kubectl get hpa

   or

   .. code-block:: none

      ~(keystone_admin)$ kubectl get deployment

   If you delete the load-generator pod, the number of replicas decreases
   automatically; see the example after this procedure for ways to observe
   this.
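To observe the autoscaler in action while the load generator is running, you
can watch the hpa and inspect its scaling events with standard upstream
:command:`kubectl` commands. This is a minimal sketch that assumes the
``php-apache`` deployment name from the official walkthrough referenced above:

.. code-block:: none

   # Watch the replica count and CPU utilization change as the load varies
   ~(keystone_admin)$ kubectl get hpa php-apache --watch

   # Show the metrics, conditions, and scaling events for the autoscaler
   ~(keystone_admin)$ kubectl describe hpa php-apache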
************************************************
Using Metrics API directly within your container
************************************************

It is also possible to use the Metrics API directly within your containerized
application in order to trigger application-specific load management.

The Metrics API consists of the following ``GET`` endpoints under the base
path ``/apis/metrics.k8s.io/v1beta1``:

``/nodes``
   Metrics for all nodes

``/nodes/{node}``
   Metrics for a specified node

``/namespaces/{namespace}/pods``
   Metrics for all pods in the specified namespace

``/namespaces/{namespace}/pods/{pod}``
   Metrics for a specified pod

``/pods``
   Metrics for all pods in all namespaces

Sample application
******************

For a sample containerized application that uses the Metrics API, see:
`https://opendev.org/starlingx/metrics-server-armada-app/src/branch/master/sample-app <https://opendev.org/starlingx/metrics-server-armada-app/src/branch/master/sample-app>`__.

This NodeJS-based application requests metrics every second and prints them to
the console.

All the resources required to deploy and run the sample application are
captured in the ``sample-app.yml`` file: the service account, the roles and
role binding that allow the application to communicate with the API server,
and the application pod.

The application pulls the token associated with the service account from its
default location (``/var/run/secrets/kubernetes.io/serviceaccount/token``) in
order to perform authenticated requests to the
``/apis/metrics.k8s.io/v1beta1/pods`` endpoint.
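Authenticated requests of this kind can also be issued manually from a shell
inside a pod, which is useful for experimenting before writing application
code. The following is a minimal sketch, not part of the sample application;
it assumes the container image provides :command:`curl` and that the pod's
service account is allowed to read pod metrics:

.. code-block:: none

   # Standard in-cluster service account credentials
   TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
   CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

   # Query the Metrics API through the in-cluster API server address
   curl --cacert ${CACERT} -H "Authorization: Bearer ${TOKEN}" \
       https://kubernetes.default.svc/apis/metrics.k8s.io/v1beta1/pods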
Sample application structure
****************************

.. code-block:: none

   - sample-app.yml
   - Dockerfile
   - src
     - package.json
     - sample-application.js

sample-app.yml
   Contains the sample-app Kubernetes Deployment, Cluster Role, Cluster Role
   Binding and Service Account

src
   Contains the NodeJS application

Dockerfile
   The application Dockerfile

Run sample application
**********************

.. rubric:: |proc|

#. Run the following command to deploy the application using the
   ``sample-app.yml`` file:

   .. code-block:: none

      ~(keystone_admin)$ kubectl apply -f sample-app.yml

#. Run the following command to check that the application pod is running:

   .. code-block:: none

      ~(keystone_admin)$ kubectl get pods -n sample-application-ns

#. Run the following command to view the logs and check that the sample
   application is successfully requesting the Metrics Server API:

   .. code-block:: none

      ~(keystone_admin)$ kubectl logs -n sample-application-ns pod-name --tail 1 -f

.. seealso::

   - Official example of horizontal pod autoscaling:
     `https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/ <https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/>`__

   - Metrics API documentation:
     `https://github.com/kubernetes/metrics <https://github.com/kubernetes/metrics>`__

   - Metrics Server documentation:
     `https://github.com/kubernetes-sigs/metrics-server <https://github.com/kubernetes-sigs/metrics-server>`__