Apply dir convention to Admintasks

Moved all Kubernetes admintasks content under a kubernetes directory. This is
needed to allow title versioning distinctions in partner builds.

Signed-off-by: Ron Stone <ronald.stone@windriver.com>
Change-Id: I38b9b0bc01096f8ef513dad15daff2add2a812a8
@@ -0,0 +1,22 @@

.. njh1572366777737

.. _about-the-admin-tutorials:

=========================
About the Admin Tutorials
=========================

The |prod-long| Kubernetes administration tutorials provide working examples
of common administrative tasks.

.. xreflink For details on accessing the system, see :ref:`|prod| Access the System <configuring-local-cli-access>`.

Common administrative tasks covered in this document include:

- application management

- local Docker registries

- Kubernetes CPU resource management
@@ -0,0 +1,479 @@

.. hby1568295041837

.. _admin-application-commands-and-helm-overrides:

=======================================
Application Commands and Helm Overrides
=======================================

Use |prod| :command:`system application` and :command:`system helm-override`
commands to manage containerized applications provided as part of |prod|.

.. note::
    Application commands and Helm overrides apply to **user overrides** only
    and take precedence over system overrides.

.. rubric:: |proc|

-   Use the following command to list all containerized applications provided
    as part of |prod|.

    .. code-block:: none

        ~(keystone_admin)]$ system application-list [--nowrap]

    where:

    **nowrap**
        Prevents line wrapping of the output.

    For example:

    .. code-block:: none

        ~(keystone_admin)]$ system application-list --nowrap
        +-------------+---------+---------------+---------------+----------+-----------+
        | application | version | manifest name | manifest file | status   | progress  |
        +-------------+---------+---------------+---------------+----------+-----------+
        | platform-   | 1.0-7   | platform-     | manifest.yaml | applied  | completed |
        | integ-apps  |         | integration-  |               |          |           |
        |             |         | manifest      |               |          |           |
        | stx-        | 1.0-18  | armada-       | stx-openstack | uploaded | completed |
        | openstack   |         | manifest      | .yaml         |          |           |
        +-------------+---------+---------------+---------------+----------+-----------+
-   Use the following command to show details for an application.

    .. code-block:: none

        ~(keystone_admin)]$ system application-show <app_name>

    where:

    **<app\_name>**
        The name of the application to show details for.

    For example:

    .. code-block:: none

        ~(keystone_admin)]$ system application-show stx-openstack
        +---------------+----------------------------------+
        | Property      | Value                            |
        +---------------+----------------------------------+
        | active        | False                            |
        | app_version   | 1.0-18                           |
        | created_at    | 2019-09-06T15:34:03.194150+00:00 |
        | manifest_file | stx-openstack.yaml               |
        | manifest_name | armada-manifest                  |
        | name          | stx-openstack                    |
        | progress      | completed                        |
        | status        | uploaded                         |
        | updated_at    | 2019-09-06T15:34:46.995929+00:00 |
        +---------------+----------------------------------+

-   Use the following command to upload application Helm chart\(s\) and
    manifest.

    .. code-block:: none

        ~(keystone_admin)]$ system application-upload [-n | --app-name] <app_name> [-v | --version] <version> <tar_file>

    where the following are optional arguments:

    **<app\_name>**
        Assigns a custom name to the application. You can use this name to
        interact with the application in the future.

    **<version>**
        The version of the application.

    and the following is a positional argument:

    **<tar\_file>**
        The path to the tar file containing the application to be uploaded.

    For example:

    .. code-block:: none

        ~(keystone_admin)]$ system application-upload stx-openstack-1.0-18.tgz
        +---------------+----------------------------------+
        | Property      | Value                            |
        +---------------+----------------------------------+
        | active        | False                            |
        | app_version   | 1.0-18                           |
        | created_at    | 2019-09-06T15:34:03.194150+00:00 |
        | manifest_file | stx-openstack.yaml               |
        | manifest_name | armada-manifest                  |
        | name          | stx-openstack                    |
        | progress      | None                             |
        | status        | uploading                        |
        | updated_at    | None                             |
        +---------------+----------------------------------+
        Please use 'system application-list' or 'system application-show
        stx-openstack' to view the current progress.
-   To list the Helm chart overrides for |prod|, use the following
    command:

    .. code-block:: none

        ~(keystone_admin)]$ system helm-override-list
        usage: system helm-override-list [--nowrap] [-l | --long] <app_name>

    where the following is a positional argument:

    **<app\_name>**
        The name of the application.

    and the following are optional arguments:

    **nowrap**
        No word-wrapping of output.

    **long**
        List additional fields in output.

    For example:

    .. code-block:: none

        ~(keystone_admin)]$ system helm-override-list stx-openstack --long
        +---------------------+--------------------------------+---------------+
        | chart name          | overrides namespaces           | chart enabled |
        +---------------------+--------------------------------+---------------+
        | aodh                | [u'openstack']                 | [False]       |
        | barbican            | [u'openstack']                 | [False]       |
        | ceilometer          | [u'openstack']                 | [False]       |
        | ceph-rgw            | [u'openstack']                 | [False]       |
        | cinder              | [u'openstack']                 | [True]        |
        | garbd               | [u'openstack']                 | [True]        |
        | glance              | [u'openstack']                 | [True]        |
        | gnocchi             | [u'openstack']                 | [False]       |
        | heat                | [u'openstack']                 | [True]        |
        | helm-toolkit        | []                             | []            |
        | horizon             | [u'openstack']                 | [True]        |
        | ingress             | [u'kube-system', u'openstack'] | [True, True]  |
        | ironic              | [u'openstack']                 | [False]       |
        | keystone            | [u'openstack']                 | [True]        |
        | keystone-api-proxy  | [u'openstack']                 | [True]        |
        | libvirt             | [u'openstack']                 | [True]        |
        | mariadb             | [u'openstack']                 | [True]        |
        | memcached           | [u'openstack']                 | [True]        |
        | neutron             | [u'openstack']                 | [True]        |
        | nginx-ports-control | []                             | []            |
        | nova                | [u'openstack']                 | [True]        |
        | nova-api-proxy      | [u'openstack']                 | [True]        |
        | openvswitch         | [u'openstack']                 | [True]        |
        | panko               | [u'openstack']                 | [False]       |
        | placement           | [u'openstack']                 | [True]        |
        | rabbitmq            | [u'openstack']                 | [True]        |
        | version_check       | []                             | []            |
        +---------------------+--------------------------------+---------------+
-   To show the overrides for a particular chart, use the following command.
    System overrides are displayed in the **system\_overrides** section of
    the **Property** column.

    .. code-block:: none

        ~(keystone_admin)]$ system helm-override-show
        usage: system helm-override-show <app_name> <chart_name> <namespace>

    where the following are positional arguments:

    **<app\_name>**
        The name of the application.

    **<chart\_name>**
        The name of the chart.

    **<namespace>**
        The namespace for chart overrides.

    For example:

    .. code-block:: none

        ~(keystone_admin)]$ system helm-override-show stx-openstack glance openstack
-   To modify service configuration parameters using user-specified overrides,
    use the following command. To update a single configuration parameter,
    use :command:`--set`. To update multiple configuration parameters, use
    the :command:`--values` option with a **yaml** file.

    .. code-block:: none

        ~(keystone_admin)]$ system helm-override-update
        usage: system helm-override-update <app_name> <chart_name> <namespace> --reuse-values --reset-values --values <file_name> --set <commandline_overrides>

    where the following are positional arguments:

    **<app\_name>**
        The name of the application.

    **<chart\_name>**
        The name of the chart.

    **<namespace>**
        The namespace for chart overrides.

    and the following are optional arguments:

    **reuse-values**
        Reuse existing Helm chart user override values. This argument is
        ignored if **reset-values** is used.

    **reset-values**
        Replace any existing Helm chart overrides with the ones specified.

    **values**
        Specify a **yaml** file containing Helm chart override values. You can
        specify this option multiple times.

    **set**
        Set Helm chart override values on the command line. Multiple
        override values can be specified with multiple :command:`--set`
        arguments. These are processed after any files passed via the
        **values** option.

    For example, to enable the glance debugging log, use the following
    command:

    .. code-block:: none

        ~(keystone_admin)]$ system helm-override-update stx-openstack glance openstack --set conf.glance.DEFAULT.DEBUG=true
        +----------------+-------------------+
        | Property       | Value             |
        +----------------+-------------------+
        | name           | glance            |
        | namespace      | openstack         |
        | user_overrides | conf:             |
        |                |   glance:         |
        |                |     DEFAULT:      |
        |                |       DEBUG: true |
        +----------------+-------------------+

    The user overrides are shown in the **user\_overrides** section of the
    **Property** column.

    .. note::
        To apply the updated Helm chart overrides to the running application,
        use the :command:`system application-apply` command.
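The :command:`--values` option described above can be sketched with a
hypothetical override file. The file name and the **workers** value below are
illustrative only, not from the source; the DEBUG setting mirrors the
:command:`--set` example.

.. code-block:: yaml

    # /home/sysadmin/glance-overrides.yml (hypothetical path)
    conf:
      glance:
        DEFAULT:
          DEBUG: true
          workers: 4    # illustrative tuning value, not from the source

The file would then be applied with a command of the form:

.. code-block:: none

    ~(keystone_admin)]$ system helm-override-update stx-openstack glance openstack --values /home/sysadmin/glance-overrides.yml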
-   To enable or disable the installation of a particular Helm chart within an
    application manifest, use the :command:`helm-chart-attribute-modify`
    command. This command does not modify a chart or its chart overrides,
    which are managed through the :command:`helm-override-update` command.

    .. code-block:: none

        ~(keystone_admin)]$ system helm-chart-attribute-modify [--enabled <true/false>] <app_name> <chart_name> <namespace>

    where the following is an optional argument:

    **enabled**
        Determines whether the chart is enabled.

    and the following are positional arguments:

    **<app\_name>**
        The name of the application.

    **<chart\_name>**
        The name of the chart.

    **<namespace>**
        The namespace for chart overrides.

    .. note::
        To apply the updated Helm chart attribute to the running application,
        use the :command:`system application-apply` command.
-   To delete all the user overrides for a chart, use the following command:

    .. code-block:: none

        ~(keystone_admin)]$ system helm-override-delete
        usage: system helm-override-delete <app_name> <chart_name> <namespace>

    where the following are positional arguments:

    **<app\_name>**
        The name of the application.

    **<chart\_name>**
        The name of the chart.

    **<namespace>**
        The namespace for chart overrides.

    For example:

    .. code-block:: none

        ~(keystone_admin)]$ system helm-override-delete stx-openstack glance openstack
        Deleted chart overrides glance:openstack for application stx-openstack
-   Use the following command to apply or reapply an application, making it
    available for service.

    .. code-block:: none

        ~(keystone_admin)]$ system application-apply [-m | --mode] <mode> <app_name>

    where the following is an optional argument:

    **mode**
        An application-specific mode controlling how the manifest is
        applied. This option is used to delete and restore the
        **stx-openstack** application.

    and the following is a positional argument:

    **<app\_name>**
        The name of the application to apply.

    For example:

    .. code-block:: none

        ~(keystone_admin)]$ system application-apply stx-openstack
        +---------------+----------------------------------+
        | Property      | Value                            |
        +---------------+----------------------------------+
        | active        | False                            |
        | app_version   | 1.0-18                           |
        | created_at    | 2019-09-06T15:34:03.194150+00:00 |
        | manifest_file | stx-openstack.yaml               |
        | manifest_name | armada-manifest                  |
        | name          | stx-openstack                    |
        | progress      | None                             |
        | status        | applying                         |
        | updated_at    | 2019-09-06T15:34:46.995929+00:00 |
        +---------------+----------------------------------+
        Please use 'system application-list' or 'system application-show
        stx-openstack' to view the current progress.
-   Use the following command to abort the current operation on an
    application.

    .. code-block:: none

        ~(keystone_admin)]$ system application-abort <app_name>

    where:

    **<app\_name>**
        The name of the application to abort.

    For example:

    .. code-block:: none

        ~(keystone_admin)]$ system application-abort stx-openstack
        Application abort request has been accepted. If the previous operation has not
        completed/failed, it will be cancelled shortly.

    Use :command:`application-list` to confirm that the application has been
    aborted.
-   Use the following command to update the deployed application to a
    different version.

    .. code-block:: none

        ~(keystone_admin)]$ system application-update [-n | --app-name] <app_name> [-v | --app-version] <version> <tar_file>

    where the following are optional arguments:

    **<app\_name>**
        The name of the application to update.

        You can look up the name of an application using the
        :command:`application-list` command:

        .. code-block:: none

            ~(keystone_admin)]$ system application-list
            +--------------------------+----------+-------------------------------+---------------------------+----------+-----------+
            | application              | version  | manifest name                 | manifest file             | status   | progress  |
            +--------------------------+----------+-------------------------------+---------------------------+----------+-----------+
            | cert-manager             | 20.06-4  | cert-manager-manifest         | certmanager-manifest.yaml | applied  | completed |
            | nginx-ingress-controller | 20.06-1  | nginx-ingress-controller-     | nginx_ingress_controller  | applied  | completed |
            |                          |          | -manifest                     | _manifest.yaml            |          |           |
            | oidc-auth-apps           | 20.06-26 | oidc-auth-manifest            | manifest.yaml             | uploaded | completed |
            | platform-integ-apps      | 20.06-9  | platform-integration-manifest | manifest.yaml             | applied  | completed |
            | wr-analytics             | 20.06-2  | analytics-armada-manifest     | wr-analytics.yaml         | applied  | completed |
            +--------------------------+----------+-------------------------------+---------------------------+----------+-----------+

        The output indicates that the currently installed version of
        **cert-manager** is 20.06-4.

    **<version>**
        The version to update the application to.

    and the following is a positional argument, which must come last:

    **<tar\_file>**
        The tar file containing the application manifest, Helm charts and
        configuration file.
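As an illustration, updating **cert-manager** from the listing above to a
newer build would take the following form. The target version and tar file
name here are hypothetical, not from the source:

.. code-block:: none

    ~(keystone_admin)]$ system application-update -n cert-manager -v 20.06-8 cert-manager-20.06-8.tgz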

-   Use the following command to remove an application from service. Removing
    an application will clean up related Kubernetes resources and delete all
    of its installed Helm charts.

    .. code-block:: none

        ~(keystone_admin)]$ system application-remove <app_name>

    where:

    **<app\_name>**
        The name of the application to remove.

    For example:

    .. code-block:: none

        ~(keystone_admin)]$ system application-remove stx-openstack
        +---------------+----------------------------------+
        | Property      | Value                            |
        +---------------+----------------------------------+
        | active        | False                            |
        | app_version   | 1.0-18                           |
        | created_at    | 2019-09-06T15:34:03.194150+00:00 |
        | manifest_file | stx-openstack.yaml               |
        | manifest_name | armada-manifest                  |
        | name          | stx-openstack                    |
        | progress      | None                             |
        | status        | removing                         |
        | updated_at    | 2019-09-06T17:39:19.813754+00:00 |
        +---------------+----------------------------------+
        Please use 'system application-list' or 'system application-show
        stx-openstack' to view the current progress.

    This command places the application in the uploaded state.

-   Use the following command to completely delete an application from the
    system.

    .. code-block:: none

        ~(keystone_admin)]$ system application-delete <app_name>

    where:

    **<app\_name>**
        The name of the application to delete.

    You must run :command:`application-remove` before deleting an application.

    For example:

    .. code-block:: none

        ~(keystone_admin)]$ system application-delete stx-openstack
        Application stx-openstack deleted.
@@ -0,0 +1,87 @@

.. hsq1558095273229

.. _freeing-space-in-the-local-docker-registry:

=======================================
Free Space in the Local Docker Registry
=======================================

You can delete images and perform garbage collection to free unused registry
space on the docker-distribution file system of the controllers.

.. rubric:: |context|

Simply deleting an image from the local Docker registry does not free the
associated space from the file system. To do so, you must also run the
:command:`registry-garbage-collect` command.

.. rubric:: |proc|

#.  Identify the name of the image you want to delete.

    .. code-block:: none

        ~(keystone_admin)]$ system registry-image-list
        +------------------------------------------------------+
        | Image Name                                           |
        +------------------------------------------------------+
        | docker.io/starlingx/k8s-cni-sriov                    |
        | docker.io/starlingx/k8s-plugins-sriov-network-device |
        | docker.io/starlingx/multus                           |
        | gcr.io/kubernetes-helm/tiller                        |
        | k8s.gcr.io/coredns                                   |
        | k8s.gcr.io/etcd                                      |
        | k8s.gcr.io/kube-apiserver                            |
        | k8s.gcr.io/kube-controller-manager                   |
        | k8s.gcr.io/kube-proxy                                |
        | k8s.gcr.io/kube-scheduler                            |
        | k8s.gcr.io/pause                                     |
        | quay.io/airshipit/armada                             |
        | quay.io/calico/cni                                   |
        | quay.io/calico/kube-controllers                      |
        | quay.io/calico/node                                  |
        +------------------------------------------------------+

#.  Find the tags associated with the image.

    .. code-block:: none

        ~(keystone_admin)]$ system registry-image-tags <imageName>

#.  Delete the **image:tag** from the registry.

    .. code-block:: none

        ~(keystone_admin)]$ system registry-image-delete <imageName>:<tagName>

    This step only removes the registry's reference to the **image:tag**.

    .. warning::
        Do not delete **image:tags** that are currently being used by the
        system. Deleting both the local Docker registry's **image:tags** and
        the **image:tags** from the Docker cache will prevent failed
        deployment pods from recovering. If this happens, you will need to
        manually download the deleted image from the same source and push it
        back into the local Docker registry under the same name and tag.

    If you need to free space consumed by **stx-openstack** images, you can
    delete older tagged versions.

#.  Free up the file system space associated with deleted or unreferenced
    images.

    The :command:`registry-garbage-collect` command removes unreferenced
    **image:tags** from the file system and frees the file system space
    allocated to deleted or unreferenced images.

    .. code-block:: none

        ~(keystone_admin)]$ system registry-garbage-collect
        Running docker registry garbage collect

    .. note::
        In rare cases the system may trigger a swact during garbage
        collection, and the registry may be left in read-only mode. If this
        happens, run :command:`registry-garbage-collect` again to take the
        registry out of read-only mode.
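If an **image:tag** that is still needed was deleted and garbage-collected,
it can be restored with standard Docker commands, as the warning in step 3
describes. The source registry, image name, and tag below are placeholders;
registry.local:9001 is the local registry address used elsewhere in this
document:

.. code-block:: none

    ~(keystone_admin)]$ sudo docker pull <sourceRegistry>/<imageName>:<tagName>
    ~(keystone_admin)]$ sudo docker image tag <sourceRegistry>/<imageName>:<tagName> registry.local:9001/<imageName>:<tagName>
    ~(keystone_admin)]$ sudo docker push registry.local:9001/<imageName>:<tagName>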
doc/source/admintasks/kubernetes/index.rst (new file, 62 lines)
@@ -0,0 +1,62 @@
.. _admin-tasks-title:

========
Contents
========

--------------------
StarlingX Kubernetes
--------------------

.. toctree::
    :maxdepth: 1

    about-the-admin-tutorials
    installing-and-running-cpu-manager-for-kubernetes

----------------------
Application management
----------------------

.. toctree::
    :maxdepth: 1

    kubernetes-admin-tutorials-helm-package-manager
    kubernetes-admin-tutorials-starlingx-application-package-manager
    admin-application-commands-and-helm-overrides

---------------------
Local Docker registry
---------------------

.. toctree::
    :maxdepth: 1

    local-docker-registry
    kubernetes-admin-tutorials-authentication-and-authorization
    installing-updating-the-docker-registry-certificate
    setting-up-a-public-repository
    freeing-space-in-the-local-docker-registry

--------------------------------
Optimize application performance
--------------------------------

.. toctree::
    :maxdepth: 1

    kubernetes-cpu-manager-policies
    isolating-cpu-cores-to-enhance-application-performance
    kubernetes-topology-manager-policies

--------------
Metrics Server
--------------

.. toctree::
    :maxdepth: 1

    kubernetes-admin-tutorials-metrics-server
@@ -0,0 +1,239 @@

.. jme1561551450093

.. _installing-and-running-cpu-manager-for-kubernetes:

==========================================
Install and Run CPU Manager for Kubernetes
==========================================

You must install Helm charts and label worker nodes appropriately before
using CMK.

.. rubric:: |context|

Perform the following steps to enable CMK on a cluster.

.. rubric:: |proc|

#.  Apply the **cmk-node** label to each worker node to be managed using CMK.

    For example:

    .. code-block:: none

        ~(keystone_admin)]$ system host-lock worker-0
        ~(keystone_admin)]$ system host-label-assign worker-0 cmk-node=enabled
        +-------------+--------------------------------------+
        | Property    | Value                                |
        +-------------+--------------------------------------+
        | uuid        | 2909d775-cd6c-4bc1-8268-27499fe38d5e |
        | host_uuid   | 1f00d8a4-f520-41ee-b608-1b50054b1cd8 |
        | label_key   | cmk-node                             |
        | label_value | enabled                              |
        +-------------+--------------------------------------+
        ~(keystone_admin)]$ system host-unlock worker-0
#.  Perform the following steps if you have not specified CMK at Ansible
    Bootstrap time in the localhost.yml file:

    #.  On the active controller, run the following command to generate the
        username and password to be used for Docker login.

        .. code-block:: none

            $ sudo python /usr/share/ansible/stx-ansible/playbooks/roles/common/push-docker-images/files/get_registry_auth.py 625619392498.dkr.ecr.us-west-2.amazonaws.com <Access_Key_ID_from_Wind_Share> <Secret_Access_Key_from_Wind_Share>

    #.  Run the Docker login command:

        .. code-block:: none

            ~(keystone_admin)]$ sudo docker login 625619392498.dkr.ecr.us-west-2.amazonaws.com -u AWS -p <password_returned_from_first_cmd>

    #.  Pull the CMK image from the AWS registry.

        .. code-block:: none

            ~(keystone_admin)]$ sudo docker pull 625619392498.dkr.ecr.us-west-2.amazonaws.com/docker.io/starlingx/master/latest_image_build

    #.  Tag the image for the local registry:

        .. code-block:: none

            ~(keystone_admin)]$ sudo docker image tag 625619392498.dkr.ecr.us-west-2.amazonaws.com/docker.io/starlingx/master/latest_image_build registry.local:9001/docker.io/wind-river/cmk:WRCP.20.01-v1.3.1-15-ge3df769-1

    #.  Authenticate with the local registry:

        .. code-block:: none

            ~(keystone_admin)]$ sudo docker login registry.local:9001 -u admin -p <admin_passwd>

    #.  Push the image to the local registry:

        .. code-block:: none

            ~(keystone_admin)]$ sudo docker image push registry.local:9001/docker.io/wind-river/cmk:WRCP.20.01-v1.3.1-15-ge3df769-1
#.  On all configurations with two controllers, after the CMK Docker image
    has been pulled, tagged \(with the local registry\), and pushed \(to the
    local registry\), the admin user should log in to the inactive controller
    and run the following commands:

    .. code-block:: none

        ~(keystone_admin)]$ sudo docker login registry.local:9001 -u admin -p <admin_passwd>
        ~(keystone_admin)]$ sudo docker image pull tis-lab-registry.cumulus.wrs.com:9001/wrcp-staging/docker.io/wind-river/cmk:WRCP.20.01-v1.3.1-15-ge3df769-1

#.  Configure any isolated CPUs on worker nodes in order to reduce host OS
    impacts on latency for tasks running on isolated CPUs.

    Any container tasks running on isolated CPUs must explicitly manage their
    own affinity; the process scheduler ignores them completely.

    .. note::
        The following commands are examples only; the admin user must specify
        the number of CPUs per processor based on the node CPU topology.

    .. code-block:: none

        ~(keystone_admin)]$ system host-lock worker-1
        ~(keystone_admin)]$ system host-cpu-modify -f platform -p0 1 worker-1
        ~(keystone_admin)]$ system host-cpu-modify -f application-isolated -p0 15 worker-1
        ~(keystone_admin)]$ system host-cpu-modify -f application-isolated -p1 15 worker-1
        ~(keystone_admin)]$ system host-unlock worker-1

    This sets one platform core and 15 application-isolated cores on NUMA
    node 0, and 15 application-isolated cores on NUMA node 1. At least one
    CPU must be left unspecified, which will cause it to be an application
    CPU.
#.  Run the /opt/extracharts/cpu-manager-k8s-setup.sh helper script to
    install the CMK Helm charts used to configure the system for CMK.

    #.  Before running the script, untar the chart files listed in
        /opt/extracharts.

        .. code-block:: none

            ~(keystone_admin)]$ cd /opt/extracharts
            ~(keystone_admin)]$ sudo tar -xvf cpu-manager-k8s-init-1.3.1.tgz
            ~(keystone_admin)]$ sudo tar -xvf cpu-manager-k8s-webhook-1.3.1.tgz
            ~(keystone_admin)]$ sudo tar -xvf cpu-manager-k8s-1.3.1.tgz

    #.  Run the script.

        The script is located in the /opt/extracharts directory of the active
        controller.

        For example:

        .. code-block:: none

            ~(keystone_admin)]$ cd /opt/extracharts
            ~(keystone_admin)]$ ./cpu-manager-k8s-setup.sh

        The following actions are performed:

        -   The **cpu-manager-k8s-init** chart is installed. This creates a
            service account and sets up role-based access control.

        -   A webhook is created to insert the appropriate resources into
            pods that request CMK resources. \(This will result in one pod
            running.\)

        -   A daemonset is created for the per-CMK-node pod that will handle
            all CMK operations on that node.

        -   **cmk-webhook-deployment** is launched on the controller and
            **cpu-manager-k8s-cmk-default** is launched on the worker.

        By default, each node will have one available CPU allocated to the
        shared pool, and all the rest allocated to the exclusive pool. The
        platform CPUs will be ignored.
#.  Add more CPUs to the shared pool.

    #.  Override the allocation via per-node Helm chart overrides on the
        **cpu-manager-k8s** Helm chart.

        .. code-block:: none

            $ cat <<EOF > /home/sysadmin/worker-0-cmk-overrides.yml
            # For NUM_EXCLUSIVE_CORES a value of -1 means
            # "all available cores after infra and shared
            # cores have been allocated".
            # NUM_SHARED_CORES must be at least 1.
            conf:
              cmk:
                NUM_EXCLUSIVE_CORES: -1
                NUM_SHARED_CORES: 1
            overrides:
              cpu-manager-k8s_cmk:
                hosts:
                  - name: worker-0
                    conf:
                      cmk:
                        NUM_SHARED_CORES: 2
            EOF

    #.  Apply the override.

        .. code-block:: none

            $ helm upgrade cpu-manager cpu-manager-k8s --reuse-values -f /home/sysadmin/worker-0-cmk-overrides.yml

#.  After CMK has been installed, run the following command to patch the
    webhook to pull the image, if required for future use:

    .. code-block:: none

        ~(keystone_admin)]$ kubectl -n kube-system patch deploy cmk-webhook-deployment \
            -p '{"spec":{"template":{"spec":{"containers":[{"name":"cmk-webhook","imagePullPolicy":"IfNotPresent"}]}}}}'
.. rubric:: |postreq|

Once CMK is set up, you can run workloads as described at `https://github.com/intel/CPU-Manager-for-Kubernetes <https://github.com/intel/CPU-Manager-for-Kubernetes>`__,
with the following caveats:

-   When using CMK, application pods should not specify requests or limits
    for the **cpu** resource.

    When running a container with :command:`cmk isolate --pool=exclusive`,
    the **cpu** resource should be superseded by the
    :command:`cmk.intel.com/exclusive-cores` resource.

    When running a container with :command:`cmk isolate --pool=shared` or
    :command:`cmk isolate --pool=infra`, the **cpu** resource has no meaning,
    because Kubelet assumes it has access to all the CPUs rather than just
    the **infra** or **shared** ones, and this confuses the resource
    tracking.
- There is a known issue with resource tracking if a node with running
  CMK-isolated applications suffers an uncontrolled reboot. The suggested
  workaround is to wait for the node to come back up, then lock and unlock it.

- When using the :command:`cmk isolate --socket-id` command to run an
  application on a particular socket, there can be complications with
  scheduling because the Kubernetes scheduler is not NUMA-aware. A pod can be
  scheduled to a Kubernetes node that has enough resources across all NUMA
  nodes, but a container then running :command:`cmk isolate --socket-id=<X>`
  can hit a run-time error if there are not enough resources on that
  particular NUMA node:

  .. code-block:: none

     ~(keystone_admin)]$ kubectl logs cmk-isolate-pod
     [6] Failed to execute script cmk
     Traceback (most recent call last):
       File "cmk.py", line 162, in <module> main()
       File "cmk.py", line 127, in main args["--socket-id"])
       File "intel/isolate.py", line 57, in isolate.format(pool_name))
     SystemError: Not enough free cpu lists in pool
.. From step 1
.. xbooklink For more information on node labeling, see |node-doc|: :ref:`Configure Node Labels from the CLI <assigning-node-labels-from-the-cli>`.

.. From step 2
.. xreflink For more information, see |inst-doc|: :ref:`Bootstrap and Deploy Cloud Platform <bootstrapping-and-deploying-starlingx>`.
.. idr1582032622279
.. _installing-updating-the-docker-registry-certificate:

====================================================
Install/Update the Local Docker Registry Certificate
====================================================

The local Docker registry provides secure HTTPS access using the registry API.

.. rubric:: |context|

By default, a self-signed certificate is generated at installation time for
the registry API. For more secure access, an intermediate or Root CA-signed
certificate is strongly recommended.
The intermediate or Root CA-signed certificate for the registry must have at
least the following |SANs|: DNS:registry.local, DNS:registry.central,
IP Address:<oam-floating-ip-address>, IP Address:<mgmt-floating-ip-address>.
Use the :command:`system addrpool-list` command to get the |OAM| floating IP
address and management floating IP address for your system. You can add any
additional |DNS| entries that you have set up for your |OAM| floating IP
address.

Use the following procedure to install an intermediate or Root CA-signed
certificate, either to replace the default self-signed certificate or to
replace an expired or soon-to-expire certificate.
.. rubric:: |prereq|

Obtain an intermediate or Root CA-signed certificate and key from a trusted
intermediate or Root Certificate Authority \(CA\). Refer to the documentation
for the external Root CA that you are using for details on how to create
public certificate and private key pairs, signed by an intermediate or Root
CA, for HTTPS.

.. xreflink
   For lab purposes, see |sec-doc|: :ref:`Create Certificates Locally
   using openssl <create-certificates-locally-using-openssl>` to create an
   Intermediate or test Root CA certificate and key, and use it to sign test
   certificates.

Put the Privacy Enhanced Mail \(PEM\) encoded versions of the certificate and
key in a single file, and copy the file to the controller host.

Also obtain the certificate of the intermediate or Root CA that signed the
above certificate.
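You can sanity-check the obtained certificate before installing it; for
example, the following command lists the |SANs| it carries so you can confirm
the required entries are present \(the file name shown is illustrative\):

.. code-block:: none

   $ openssl x509 -in registry-cert.pem -noout -text | grep -A1 "Subject Alternative Name"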
.. rubric:: |proc|

.. _installing-updating-the-docker-registry-certificate-d271e71:

#. To enable internal use of the Docker registry certificate, update the
   trusted CA list for this system with the Root CA associated with the
   Docker registry certificate.

   .. code-block:: none

      ~(keystone_admin)]$ system certificate-install --mode ssl_ca <pathTocertificate>

   where:

   **<pathTocertificate>**
     is the path to the intermediate or Root CA certificate associated with
     the Docker registry's intermediate or Root CA-signed certificate.
#. Update the Docker registry certificate using the
   :command:`certificate-install` command.

   Set the mode (``-m`` or ``--mode``) parameter to ``docker_registry``.

   .. code-block:: none

      ~(keystone_admin)]$ system certificate-install --mode docker_registry <pathTocertificateAndKey>

   where:

   **<pathTocertificateAndKey>**
     is the path to the file containing both the Docker registry's
     intermediate or Root CA-signed certificate and private key to install.
.. bew1572888575258
.. _isolating-cpu-cores-to-enhance-application-performance:

========================================================
Isolate the CPU Cores to Enhance Application Performance
========================================================

|prod| supports running the most critical low-latency applications on host
CPUs that are completely isolated from the host process scheduler.

This allows you to customize Kubernetes CPU management when the policy is set
to static, so that low-latency applications run with optimal efficiency.

The following restriction applies when using application-isolated cores:

- There must be at least one platform and one application core on each host.

For example:
.. code-block:: none

   ~(keystone_admin)]$ system host-lock worker-1
   ~(keystone_admin)]$ system host-cpu-modify -f platform -p0 1 worker-1
   ~(keystone_admin)]$ system host-cpu-modify -f application-isolated -p0 15 worker-1
   ~(keystone_admin)]$ system host-cpu-modify -f application-isolated -p1 15 worker-1
   ~(keystone_admin)]$ system host-unlock worker-1

All |SMT| siblings \(hyperthreads, if enabled\) on a core have the same
assigned function. On host boot, any CPUs designated as isolated are added to
the ``isolcpus`` kernel boot argument, which isolates them from the process
scheduler.
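After the host unlocks, you can confirm that the isolation took effect by
checking the kernel command line and the standard sysfs interface exposed by
the kernel for isolated CPUs:

.. code-block:: none

   $ grep -o 'isolcpus=[^ ]*' /proc/cmdline
   $ cat /sys/devices/system/cpu/isolated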
.. only:: partner

   .. include:: /_includes/isolating-cpu-cores-to-enhance-application-performance.rest
      :start-after: usage-limitation-begin
      :end-before: usage-limitation-end
When using the static CPU Manager policy, before increasing the number of
platform CPUs or changing isolated CPUs to application CPUs on a host, ensure
that no pods on the host are using any isolated CPUs that will be affected.
Otherwise, the pods will transition to a Topology Affinity Error state.
Although not strictly necessary, the simplest way to do this on systems other
than AIO-Simplex is to administratively lock the host, causing all the pods
to be restarted on an alternate host, before changing the CPU assigned
functions. On AIO-Simplex systems, you must explicitly delete the pods.
.. only:: partner

   .. include:: /_includes/isolating-cpu-cores-to-enhance-application-performance.rest
      :start-after: changes-relative-to-root-begin
      :end-before: changes-relative-to-root-end
.. khe1563458421728
.. _kubernetes-admin-tutorials-authentication-and-authorization:

======================================================
Local Docker Registry Authentication and Authorization
======================================================

Authentication is enabled for the local Docker registry. When logging in,
users are authenticated using their platform keystone credentials.

For example:

.. code-block:: none

   $ docker login registry.local:9001 -u <keystoneUserName> -p <keystonePassword>
An authorized administrator \('admin' and 'sysinv'\) can perform any Docker
action. Regular users can only interact with their own repositories \(that
is, registry.local:9001/<keystoneUserName>/\). Any authenticated user can
pull from the following list of public images:

.. _kubernetes-admin-tutorials-authentication-and-authorization-d383e50:

- registry.local:9001/public/\*

- registry.local:9001/k8s.gcr.io/pause

- registry.local:9001/quay.io/jetstack/cert-manager-acmesolver

The **mtce** user can only pull public images, and cannot push any images.

For example, only the **admin** and **testuser** accounts can push to or pull
from **registry.local:9001/testuser/busybox:latest**.
.. _kubernetes-admin-tutorials-authentication-and-authorization-d383e87:

---------------------------------
Username and Docker compatibility
---------------------------------

Repository names in Docker registry paths must be lower case. For this
reason, a keystone user must exist that consists of all lower case
characters. For example, the user **testuser** is correct in the following
URL, while **testUser** would result in an error:

**registry.local:9001/testuser/busybox:latest**

.. note::
   Use of the auto-generated self-signed certificate for the registry
   certificate is not recommended. If you must do so, then from the central
   cloud/SystemController, access to the local registry can only be done
   using registry.local:9001; registry.central:9001 will be inaccessible.
   Installing a |CA|-signed certificate for the registry, and the certificate
   of the |CA| as an 'ssl_ca' certificate, removes this restriction.

For more information about Docker commands, see
`https://docs.docker.com/engine/reference/commandline/docker/ <https://docs.docker.com/engine/reference/commandline/docker/>`__.
.. yvw1582058782861
.. _kubernetes-admin-tutorials-helm-package-manager:

====================
Helm Package Manager
====================

|prod-long| supports the Helm v3 package manager for Kubernetes, which can be
used to securely manage the lifecycle of applications within the Kubernetes
cluster.

.. rubric:: |context|

Helm packages are defined by Helm charts with container information
sufficient for managing a Kubernetes application. You can configure, install,
and upgrade your Kubernetes applications using Helm charts. Helm charts are
defined with a default set of values that describe the behavior of the
service installed within the Kubernetes cluster.
A Helm v3 client is installed on the controllers for local use by admins to
manage end users' Kubernetes applications. |prod| recommends installing a
Helm v3 client on a remote workstation, so that non-admin \(and admin\) end
users can manage their Kubernetes applications remotely.

Upon system installation, local Helm repositories \(containing |prod-long|
packages\) are created and added to the Helm repo list.

Use the following command to list these local Helm repositories:

.. code-block:: none

   ~(keystone_admin)]$ helm repo list
   NAME          URL
   starlingx     http://127.0.0.1:8080/helm_charts/starlingx
   stx-platform  http://127.0.0.1:8080/helm_charts/stx-platform
The ``stx-platform`` repo holds Helm charts of StarlingX applications \(see
the next section\) of the |prod| platform itself, while the ``starlingx``
repo holds Helm charts of optional StarlingX applications, such as OpenStack.
The admin user can add charts to these local repos and regenerate the index
to use these charts, and can add new remote repositories to the list of
known repos.
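For example, an admin could publish an additional chart through the
``starlingx`` repo. The following is a hedged sketch; the chart name and the
repository directory path are illustrative assumptions, not fixed |prod|
paths:

.. code-block:: none

   ~(keystone_admin)]$ cp mychart-1.0.0.tgz /www/pages/helm_charts/starlingx/
   ~(keystone_admin)]$ helm repo index /www/pages/helm_charts/starlingx/
   ~(keystone_admin)]$ helm repo update
   ~(keystone_admin)]$ helm search repo starlingx/mychart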
For more information on Helm v3, see the documentation at `https://helm.sh/docs/ <https://helm.sh/docs/>`__.

For more information on how to configure and use Helm both locally and
remotely, see :ref:`Configure Local CLI Access <configure-local-cli-access>`
and :ref:`Configure Remote CLI Access <configure-remote-cli-access>`.
..
.. _kubernetes-admin-tutorials-metrics-server:

======================
Install Metrics Server
======================

|release-caveat|

.. rubric:: |context|

Metrics Server is a scalable, efficient source of container resource metrics
for the Kubernetes built-in autoscaling pipelines.

Metrics Server is meant for autoscaling purposes only. It is not intended to
provide metrics for monitoring solutions that persist and analyze historical
metrics.

Specifically, in |prod|, Metrics Server supports:
* Use of Kubernetes' horizontal application auto-scaling based on resource
  consumption, for scaling end users' containerized application deployments.

* Use of the Metrics Server API within end users' containerized applications,
  for example, to enable application-specific incoming load management
  mechanisms based on metrics of selected pods.

For details on leveraging Metrics Server for horizontal autoscaling or for
the Metrics API, see :ref:`Kubernetes User Tasks <kubernetes-user-tutorials-metrics-server>`.

Metrics Server is an optional component of |prod|. It is packaged as a system
application and included in the |prod| installation ISO. To enable Metrics
Server, you must upload and apply the Metrics Server system application.
.. rubric:: |proc|

Perform the following steps to enable Metrics Server so that its services are
available to containerized applications for horizontal autoscaling and/or use
of the Metrics API.

#. Go to the path ``/usr/local/share/applications/helm/`` to access
   ``metrics-server-1.0-1.tgz``.

#. Upload the application tarball:

   .. code-block:: none

      ~(keystone_admin)]$ system application-upload metrics-server-1.0-1.tgz

#. Run the application list to confirm that the application was uploaded:

   .. code-block:: none

      ~(keystone_admin)]$ system application-list

#. Apply the Metrics Server application:

   .. code-block:: none

      ~(keystone_admin)]$ system application-apply metrics-server

#. Run the application list to confirm that the application was applied:

   .. code-block:: none

      ~(keystone_admin)]$ system application-list

#. Run the following command to see the pod running:

   .. code-block:: none

      ~(keystone_admin)]$ kubectl get pods -l app=metrics-server -n metrics-server
After installing Metrics Server, the :command:`kubectl top` |CLI| command is
available to display the metrics being collected by Metrics Server and the
ones being used for defined autoscaling definitions. These metrics are also
displayed within the Kubernetes Dashboard.

For more information, see
`https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#top <https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#top>`__.
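For example, once Metrics Server is applied, current node and pod resource
usage can be inspected as follows:

.. code-block:: none

   ~(keystone_admin)]$ kubectl top nodes
   ~(keystone_admin)]$ kubectl top pods -n metrics-server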
.. skm1582115510876
.. _kubernetes-admin-tutorials-starlingx-application-package-manager:

=====================================
StarlingX Application Package Manager
=====================================

Use the |prod| system application commands to manage containerized
application deployment from the command line.

|prod| application management provides a wrapper around Airship Armada
\(see `https://opendev.org/airship/armada.git <https://opendev.org/airship/armada.git>`__\)
and Kubernetes Helm \(see `https://github.com/helm/helm <https://github.com/helm/helm>`__\)
for managing containerized applications. Armada is a tool for managing
multiple Helm charts with dependencies, centralizing all configurations in a
single Armada YAML definition and providing life-cycle hooks for all Helm
releases.

A |prod| application package is a compressed tarball containing a
metadata.yaml file, a manifest.yaml Armada manifest file, a checksum.md5
file, and a charts directory containing Helm charts. The metadata.yaml file
contains the application name, version, and optional Helm repository and
disabled charts information.

|prod| application package management provides a set of :command:`system`
CLI commands for managing the lifecycle of an application, which includes
managing overrides to the Helm charts within the application.
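A typical application lifecycle using the commands in the table below might
look like the following sketch, in which the package and application names
are illustrative:

.. code-block:: none

   ~(keystone_admin)]$ system application-upload myapp-1.0-0.tgz
   ~(keystone_admin)]$ system application-apply myapp
   ~(keystone_admin)]$ system application-show myapp
   ~(keystone_admin)]$ system application-remove myapp
   ~(keystone_admin)]$ system application-delete myapp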
.. _kubernetes-admin-tutorials-tarlingx-application-package-manager-d463e61:

**Table 1. Application Package Manager Commands**

+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Command | Description |
+========================================+=============================================================================================================================================================================================================================================================+
| :command:`application-list` | List all applications. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`application-show` | Show application details such as name, status, and progress. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`application-upload` | Upload a new application package. |
| | |
| | This command loads the application's Armada manifest and Helm charts into an internal database and automatically applies system overrides for well-known Helm charts, allowing the Helm chart to be applied optimally to the current cluster configuration. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`helm-override-list` | List system Helm charts and the namespaces with Helm chart overrides for each Helm chart. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`helm-override-show` | Show a Helm chart's overrides for a particular namespace. |
| | |
| | This command displays system overrides, user overrides and the combined system and user overrides. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`helm-override-update` | Update Helm chart user overrides for a particular namespace. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`helm-chart-attribute-modify` | Enable or disable the installation of a particular Helm chart within an application manifest. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`helm-override-delete` | Delete a Helm chart's user overrides for a particular namespace. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`application-apply` | Apply or reapply the application manifest and Helm charts. |
| | |
| | This command will install or update the existing installation of the application based on its Armada manifest, Helm charts and Helm charts' combined system and user overrides. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`application-abort` | Abort the current application operation. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`application-update` | Update the deployed application to a different version. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`application-remove` | Uninstall an application. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| :command:`application-delete` | Remove the uninstalled application's definition, including manifest and Helm charts and Helm chart overrides, from the system. |
+----------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
.. mlb1573055521142
.. _kubernetes-cpu-manager-policies:

===============================
Kubernetes CPU Manager Policies
===============================

You can apply the **kube-cpu-mgr-policy** host label from the Horizon Web
interface or the CLI to set the Kubernetes CPU Manager policy.

The **kube-cpu-mgr-policy** host label supports the values ``none`` and
``static``.

For example:
.. code-block:: none

   ~(keystone_admin)]$ system host-lock worker-1
   ~(keystone_admin)]$ system host-label-assign --overwrite worker-1 kube-cpu-mgr-policy=static
   ~(keystone_admin)]$ system host-unlock worker-1

Setting either of these values results in kubelet on the host being
configured with the policy of the same name, as described at
`https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies <https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies>`__,
but with the following differences:
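You can verify which policy kubelet is actually running by inspecting the
CPU Manager checkpoint file that upstream kubelet maintains on the host
\(the ``policyName`` field records the active policy\):

.. code-block:: none

   $ cat /var/lib/kubelet/cpu_manager_state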
----------------------------
Static policy customizations
----------------------------

- Pods in the **kube-system** namespace are affined to platform cores only.
  Other pod containers \(hosted applications\) are restricted to running on
  either the application or isolated cores. CFS quota throttling for
  **Guaranteed** QoS pods is disabled.

- When using the static policy, improved performance can be achieved if you
  also use the isolated CPU behavior described at :ref:`Isolate the CPU
  Cores to Enhance Application Performance
  <isolating-cpu-cores-to-enhance-application-performance>`.

- For Kubernetes pods with a **Guaranteed** QoS class \(see
  `https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/ <https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/>`__
  for background information\), CFS quota throttling is disabled, as it
  causes performance degradation.

- Kubernetes pods are prevented by default from running on CPUs with an
  assigned function of **Platform**. In contrast, pods in the **kube-system**
  namespace are affined to run on **Platform** CPUs by default. This assumes
  that the number of platform CPUs is sufficiently large to handle the
  workload. These two changes further ensure that low-latency applications
  are not interrupted by housekeeping tasks.

.. xreflink For information about adding labels, see |node-doc|: :ref:`Configuring Node Labels Using Horizon <configuring-node-labels-using-horizon>`

.. xreflink and |node-doc|: :ref:`Configuring Node Labels from the CLI <assigning-node-labels-from-the-cli>`.
---------------
Recommendations
---------------

|org| recommends using the static policy.

--------
See also
--------

See |usertasks-doc|: :ref:`Use Kubernetes CPU Manager Static Policy's
Guaranteed QoS class with exclusive CPUs
<using-kubernetes-cpu-manager-static-policy>` for an example of how to
configure a pod in the Guaranteed QoS class with exclusive \(or
dedicated/pinned\) CPUs.

See |usertasks-doc|: :ref:`Use Kubernetes CPU Manager Static Policy with
application-isolated cores <use-application-isolated-cores>` for an example
of how to configure a pod with cores that are both isolated from the host
process scheduler and exclusive/dedicated/pinned.
.. faf1573057127832
.. _kubernetes-topology-manager-policies:

====================================
Kubernetes Topology Manager Policies
====================================

You can apply the **kube-topology-mgr-policy** host label from Horizon or the
CLI to set the Kubernetes Topology Manager policy.

The **kube-topology-mgr-policy** host label has four supported values:

- none

- best-effort

  This is the default when the static CPU policy is enabled.

- restricted

- single-numa-node
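The label is assigned in the same way as the CPU Manager policy label; for
example, to select the best-effort policy on worker-1:

.. code-block:: none

   ~(keystone_admin)]$ system host-lock worker-1
   ~(keystone_admin)]$ system host-label-assign --overwrite worker-1 kube-topology-mgr-policy=best-effort
   ~(keystone_admin)]$ system host-unlock worker-1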
For more information on these settings, see the Kubernetes Topology Manager
policies described at `https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/#how-topology-manager-works <https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/#how-topology-manager-works>`__.

.. xreflink For information about adding labels, see |node-doc|: :ref:`Configuring Node Labels Using Horizon <configuring-node-labels-using-horizon>`

.. xreflink and |node-doc|: :ref:`Configuring Node Labels from the CLI <assigning-node-labels-from-the-cli>`.

-----------
Limitations
-----------
- The scheduler is not NUMA-aware and can therefore make suboptimal pod
  scheduling decisions, because the Topology Manager policy on the
  destination node is not taken into account.

- If a pod fails with *Topology Affinity Error* because it cannot satisfy
  the Topology Manager policy on the node where the scheduler placed it, it
  will remain in the error state and not be retried. If the pod is part of a
  manager object such as a ReplicaSet or Deployment, a new pod will be
  created. If that new pod is placed on the same node, this can result in a
  series of pods with a status of *Topology Affinity Error*. For more
  information, see `https://github.com/kubernetes/kubernetes/issues/84757 <https://github.com/kubernetes/kubernetes/issues/84757>`__.

In light of these limitations, |org| recommends using the best-effort policy,
which causes Kubernetes to try to provide NUMA-affined resources without
generating unexpected errors if the policy cannot be satisfied.
.. xeu1564401508004
.. _local-docker-registry:

=====================
Local Docker Registry
=====================

A local Docker registry is deployed as part of |prod| on the internal
management network.

You can interact with the local Docker registry at the address
**registry.local:9001**.
.. qay1588350945997
.. _setting-up-a-public-repository:

===================================================
Set up a Public Repository in Local Docker Registry
===================================================

There will likely be scenarios where you need to make images publicly
available to all users.

.. rubric:: |context|

The suggested method is to create a keystone tenant/user pair
'registry'/'public', which will have access to images in the
registry.local:9001/public/ repository, and then share access to those
images by sharing the registry/public user's credentials with other users.

.. rubric:: |proc|
#. Create the keystone tenant/user of registry/public.

   .. code-block:: none

      ~(keystone_admin)]$ openstack project create registry
      ~(keystone_admin)]$ TENANTNAME="registry"
      ~(keystone_admin)]$ TENANTID=`openstack project list | grep ${TENANTNAME} | awk '{print $2}'`
      ~(keystone_admin)]$ USERNAME="public"
      ~(keystone_admin)]$ USERPASSWORD="${USERNAME}K8*"
      ~(keystone_admin)]$ openstack user create --password ${USERPASSWORD} --project ${TENANTID} ${USERNAME}
      ~(keystone_admin)]$ openstack role add --project ${TENANTNAME} --user ${USERNAME} _member_
#. Create a secret containing the credentials of the public repository in the
   kube-system namespace. Use the password assigned to the public user in the
   previous step.

   .. code-block:: none

      % kubectl create secret docker-registry registry-local-public-key --docker-server=registry.local:9001 --docker-username=public --docker-password='publicK8*' --docker-email=noreply@windriver.com -n kube-system
#. Share the credentials of the public repository with other namespaces.

   Copy the secret to the other namespace and add it as an ImagePullSecret to
   the namespace's **default** serviceAccount.

   .. code-block:: none

      % kubectl get secret registry-local-public-key -n kube-system -o yaml | grep -v '^\s*namespace:\s' | kubectl apply --namespace=<USERNAMESPACE> -f -
      % kubectl patch serviceaccount default -p "{\"imagePullSecrets\": [{\"name\": \"registry-local-public-key\"}]}" -n <USERNAMESPACE>
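With the public user created, anyone holding its credentials can publish
images to the public repository; for example \(the image name is
illustrative\):

.. code-block:: none

   $ docker login registry.local:9001 -u public
   $ docker tag busybox:latest registry.local:9001/public/busybox:latest
   $ docker push registry.local:9001/public/busybox:latest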