
.. fda1581955005891
.. _security-configure-container-backed-remote-clis-and-clients:
==================================================
Configure Container-backed Remote CLIs and Clients
==================================================
The |prod| command lines can be accessed from remote computers running Linux,
macOS, and Windows.
.. rubric:: |context|
This functionality is made available using a Docker container with
pre-installed CLIs and clients. The container's image is pulled as required by
the remote CLI/client configuration scripts.
.. rubric:: |prereq|
You must have a |WAD| or Local |LDAP| username and password to get the
Kubernetes authentication token, a Keystone username and password to log
into Horizon, the |OAM| IP address and, optionally, the Kubernetes |CA| certificate of
the target |prod| environment.
You must have Docker installed on the remote systems you connect from. For
more information on installing Docker, see
`https://docs.docker.com/install/ <https://docs.docker.com/install/>`__.
For Windows remote clients, Docker is only supported on Windows 10.
.. note::
You must be able to run docker commands using one of the following options:
.. _security-configure-container-backed-remote-clis-and-clients-d70e42:
- Running the scripts using sudo
- Adding the Linux user to the docker group
For more information, see
`https://docs.docker.com/engine/install/linux-postinstall/
<https://docs.docker.com/engine/install/linux-postinstall/>`__.
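For reference, the typical sequence from the Docker post-installation guide for enabling non-root Docker access on a Linux remote workstation is:

.. code-block:: none

   # add the current user to the docker group
   $ sudo usermod -aG docker $USER
   # activate the new group membership in the current shell
   $ newgrp docker
   # verify that docker commands work without sudo
   $ docker run hello-world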
For Windows remote clients, you must run the following commands from a
Cygwin terminal. See `https://www.cygwin.com/ <https://www.cygwin.com/>`__
for more information about the Cygwin project.
For Windows remote clients, you must also have :command:`winpty` installed.
Download the latest release tarball for Cygwin from
`https://github.com/rprichard/winpty/releases
<https://github.com/rprichard/winpty/releases>`__. After downloading the
tarball, extract it to any location and add the ``bin`` folder of the
extracted winpty directory to the Windows PATH variable.
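For example, from a Cygwin terminal, you could extract the tarball and add its ``bin`` folder to your PATH; the file and folder names below are illustrative only and should be adjusted to the release you downloaded:

.. code-block:: none

   # extract the winpty release tarball into the home directory
   $ tar xvf winpty-<version>-cygwin-<version>-x64.tar.gz -C $HOME
   # make the winpty binaries available on the PATH for this shell
   $ export PATH="$HOME/winpty-<version>-cygwin-<version>-x64/bin:$PATH"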
The following procedure shows how to configure the Container-backed Remote CLIs
and Clients for an admin user with cluster-admin clusterrole. If using a
non-admin user such as one with privileges only within a private namespace,
additional configuration is required in order to use :command:`helm`.
.. rubric:: |proc|
.. _security-configure-container-backed-remote-clis-and-clients-d70e93:
#. On the active controller, log in through SSH or the local console as the
**sysadmin** user, and perform the actions listed below.
#. Configure Kubernetes permissions for users.
#. Source the platform environment.
.. code-block:: none
$ source /etc/platform/openrc
~(keystone_admin)]$
#. Create a user rolebinding file. You can customize the name of the
user. Alternatively, to use a group rolebinding and user group
membership for authorization, see :ref:`Configure Users, Groups, and
Authorization <configure-users-groups-and-authorization>`.
.. code-block:: none
~(keystone_admin)]$ MYUSER="admin-user"
~(keystone_admin)]$ cat <<EOF > admin-user-rolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ${MYUSER}-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: ${MYUSER}
EOF
#. Apply the rolebinding.
.. code-block:: none
~(keystone_admin)]$ kubectl apply -f admin-user-rolebinding.yaml
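Optionally, verify that the rolebinding exists and grants the expected access; for example:

.. code-block:: none

   # confirm the clusterrolebinding was created
   ~(keystone_admin)]$ kubectl get clusterrolebinding ${MYUSER}-rolebinding
   # confirm the user can perform a cluster-wide action
   ~(keystone_admin)]$ kubectl auth can-i get pods --all-namespaces --as=${MYUSER}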
#. Note the |OAM| IP address to be used later in the creation of the
kubeconfig file.
.. code-block:: none
~(keystone_admin)]$ system oam-show | grep oam_floating_ip | awk '{print $4}'
In |AIO-SX| environments, use the command below instead, as |AIO-SX| uses
<oam_ip> instead of <oam_floating_ip>.
.. code-block:: none
~(keystone_admin)]$ system oam-show | grep oam_ip | awk '{print $4}'
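For example, to capture the address in a shell variable for reference while completing the remaining steps (use ``oam_ip`` on |AIO-SX|):

.. code-block:: none

   # store the OAM floating IP address and display it
   ~(keystone_admin)]$ OAM_IP=$(system oam-show | grep oam_floating_ip | awk '{print $4}')
   ~(keystone_admin)]$ echo ${OAM_IP}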
#. If HTTPS has been enabled for the |prod| RESTAPI Endpoints on your
|prod| system, execute the following commands to extract the |CA|
certificate and copy it to the remote workstation.
.. code-block:: none
~(keystone_admin)]$ kubectl get secret system-local-ca -n cert-manager -o=jsonpath='{.data.ca\.crt}' | base64 --decode > /home/sysadmin/stx.ca.crt
~(keystone_admin)]$ scp /home/sysadmin/stx.ca.crt <remote_workstation_user>@<remote_workstation_IP>:~/stx.ca.crt
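Optionally, inspect the copied certificate on the remote workstation to confirm that it transferred correctly; this assumes :command:`openssl` is available there:

.. code-block:: none

   # display the certificate subject and expiry date
   $ openssl x509 -in ~/stx.ca.crt -noout -subject -enddate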
#. Optional: copy the Kubernetes |CA| certificate
``/etc/kubernetes/pki/ca.crt`` from the active controller to the remote
workstation. This step is strongly recommended, but it is still possible
to connect to the Kubernetes cluster without this certificate.
.. code-block:: none
~(keystone_admin)]$ scp /etc/kubernetes/pki/ca.crt <remote_workstation_user>@<remote_workstation_IP>:~/k8s-ca.crt
#. In the remote workstation, do the actions listed below.
#. Copy the remote client tarball file from the |prod| build servers
to the remote workstation, and extract its contents.
- The tarball is available from the StarlingX Public build servers.
- You can extract the tarball's contents anywhere on your client system.
.. parsed-literal::
$ cd $HOME
$ tar xvf |prefix|-remote-clients-<version>.tgz
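Optionally, list the extracted directory to confirm that it contains the :command:`configure_client.sh` script used later in this procedure:

.. parsed-literal::

   $ ls |prefix|-remote-clients-<version>/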
#. Download the user/tenant openrc file from the Horizon Web interface to
the remote workstation.
#. Log in to Horizon as the user and tenant that you want to configure
remote access for.
In this example, the 'admin' user in the 'admin' tenant.
#. Navigate to **Project** \> **API Access** \> **Download Openstack RC
file**.
#. Select **Openstack RC file**.
The ``admin-openrc.sh`` file is downloaded. Copy this file to the
location of the extracted tarball.
.. note::
For a Distributed Cloud system, navigate to **Project** \> **Central
Cloud Regions** \> **RegionOne** \> and download the **Openstack RC
file**.
#. If HTTPS has been enabled for the |prod| RESTAPI Endpoints on your
|prod| system, add the following line to the bottom of
``admin-openrc.sh``:
.. code-block:: none
export OS_CACERT=<path_to_ca>
where ``<path_to_ca>`` is the absolute path of the file ``stx.ca.crt``
acquired in the steps above.
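For example, if ``stx.ca.crt`` was copied to the home directory of the remote workstation user in the earlier step:

.. code-block:: none

   export OS_CACERT=/home/<remote_workstation_user>/stx.ca.crt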
#. Create an empty admin-kubeconfig file on the remote workstation using
the following command.
.. code-block:: none
$ touch admin-kubeconfig
#. Configure remote CLI/client access.
This step will also generate a remote CLI/client RC file.
#. Change to the location of the extracted tarball.
.. parsed-literal::
$ cd $HOME/|prefix|-remote-clients-<version>/
#. Create a working directory that will be mounted by the container
implementing the remote |CLIs|.
See the description of the :command:`configure_client.sh` ``-w`` option
below for more details.
.. code-block:: none
$ mkdir -p $HOME/remote_cli_wd
#. Run the :command:`configure_client.sh` script.
.. parsed-literal::
$ ./configure_client.sh -t platform -r admin-openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p docker.io/starlingx/stx-platformclients:|v_starlingx-stx-platformclients|
If you specify repositories that require authentication, you must first
perform a :command:`docker login` to that repository before using remote
|CLIs|. WRS |AWS| ECR credentials or a |CA| certificate is required.
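A typical login, with the registry URL shown as a placeholder, is:

.. code-block:: none

   # authenticate to the registry before running remote CLIs
   $ docker login <registry_url>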
The options for :command:`configure_client.sh` are:
``-t``
The type of client configuration. The options are platform (for
|prod-long| |CLI| and clients) and openstack (for |prod-os|
application |CLI| and clients).
The default value is platform.
``-r``
The user/tenant RC file to use for :command:`openstack` CLI
commands.
The default value is admin-openrc.sh.
``-k``
The Kubernetes configuration file to use for :command:`kubectl`
and :command:`helm` CLI commands.
The default value is temp-kubeconfig.
``-o``
The remote CLI/client RC file generated by this script.
This RC file needs to be sourced in the shell to set up the required
environment variables and aliases before running any remote
|CLI| commands.
For the platform client setup, the default is
remote_client_platform.sh. For the openstack application client
setup, the default is remote_client_app.sh.
``-w``
The working directory that will be mounted by the container
implementing the remote |CLIs|. When using the remote |CLIs|,
any files passed as arguments to the remote |CLI| commands need
to be in this directory in order for the container to access the
files. The default value is the directory from which the
:command:`configure_client.sh` command was run.
``-p``
Override the container image for the platform |CLI| and clients.
By default, the platform |CLIs| and clients container image is
pulled from docker.io/starlingx/stx-platformclients.
For example, to explicitly specify the platform clients container image:
.. parsed-literal::
$ ./configure_client.sh -t platform -r admin-openrc.sh -k admin-kubeconfig -w $HOME/remote_cli_wd -p docker.io/starlingx/stx-platformclients:|v_starlingx-stx-platformclients|
If you specify repositories that require authentication, you
must first perform a :command:`docker login` to that repository
before using remote |CLIs|.
``-a``
Override the OpenStack application image.
By default, the OpenStack |CLIs| and clients container image is
pulled from docker.io/starlingx/stx-openstackclients.
The :command:`configure_client.sh` command will generate a
``remote_client_platform.sh`` RC file. This RC file needs to be
sourced in the shell to set up required environment variables and
aliases before any remote CLI commands can be run.
#. Copy the file ``remote_client_platform.sh`` to ``$HOME/remote_cli_wd``.
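For example, from the directory where :command:`configure_client.sh` was run:

.. code-block:: none

   $ cp remote_client_platform.sh $HOME/remote_cli_wd/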
#. Update the contents of the admin-kubeconfig file using the
:command:`kubectl` command from the container. Use the |OAM| IP address
and the Kubernetes |CA| certificate acquired in the steps above, and set
``MYUSER`` to the same user name used in the rolebinding created earlier
(for example, ``admin-user``). If the |OAM| IP is IPv6, enclose the IP in
brackets (for example, "[fd00::a14:803]").
.. code-block:: none
$ cd $HOME/remote_cli_wd
$ source remote_client_platform.sh
$ kubectl config set-cluster wrcpcluster --server=https://<OAM_IP>:6443
$ kubectl config set clusters.wrcpcluster.certificate-authority-data $(base64 -w0 k8s-ca.crt)
$ kubectl config set-context ${MYUSER}@wrcpcluster --cluster=wrcpcluster --user ${MYUSER}
$ kubectl config use-context ${MYUSER}@wrcpcluster
If you don't have the Kubernetes |CA| certificate, execute the following
commands instead.
.. code-block:: none
$ cd $HOME/remote_cli_wd
$ source remote_client_platform.sh
$ kubectl config set-cluster wrcpcluster --server=https://<OAM_IP>:6443 --insecure-skip-tls-verify
$ kubectl config set-context ${MYUSER}@wrcpcluster --cluster=wrcpcluster --user ${MYUSER}
$ kubectl config use-context ${MYUSER}@wrcpcluster
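In either case, you can confirm the resulting kubeconfig entries with commands that only read the local configuration, for example:

.. code-block:: none

   # show the active context and its cluster/user settings
   $ kubectl config current-context
   $ kubectl config view --minify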
.. rubric:: |postreq|
After configuring the platform's container-backed remote CLIs/clients, the
remote platform CLIs can be used in any shell after sourcing the generated
remote CLI/client RC file. This RC file sets up the required environment
variables and aliases for the remote |CLI| commands.
.. note::
Consider adding this :command:`source` command to your ``.login`` or shell
rc file, so that your shells will automatically be initialized with the
environment variables and aliases for the remote |CLI| commands.
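For example, for a Bash shell (shell rc file assumed for illustration):

.. code-block:: none

   # source the remote CLI/client RC file in every new shell
   $ echo "source $HOME/remote_cli_wd/remote_client_platform.sh" >> ~/.bashrc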
See :ref:`Using Container-backed Remote CLIs and Clients
<using-container-backed-remote-clis-and-clients>` for details.
**Related information**
.. seealso::
:ref:`Using Container-backed Remote CLIs and Clients
<using-container-backed-remote-clis-and-clients>`
:ref:`Install Kubectl and Helm Clients Directly on a Host
<security-install-kubectl-and-helm-clients-directly-on-a-host>`