commit 638425380e

This commit updates the image with the updated clients.

Test:
- some normal commands
- https-related dcmanager commands that were not working
- "system application-upload", which was not working when executed from the remote CLI

Closes-Bug: 1928233
Closes-Bug: 1928231
Signed-off-by: Rafael Jordão Jardim <RafaelJordao.Jardim@windriver.com>
Change-Id: I7cc24fece1a235a07f2007bc6b9b67aa2d8ae837
Files:
- client_wrapper.sh
- config_aliases.sh
- configure_client.sh
- docker_image_version.sh
- README
Copyright (c) 2019 Wind River Systems, Inc.
SPDX-License-Identifier: Apache-2.0

--------------------------------------------------
StarlingX Remote CLI Clients
--------------------------------------------------

To enable remote access to the StarlingX CLI, docker images containing the
CLI and client packages are provided for installation on a remote
workstation. Two CLI/client docker images are used: one for the Kubernetes
platform CLIs/clients and one for the OpenStack application CLIs/clients.
This SDK module includes scripts that configure and provide aliases for the
Kubernetes platform and OpenStack application CLIs.

Dependencies
-------------------------------------------------

Make sure Docker is installed on the remote workstation before using the
remote CLI clients. The required docker images are pulled automatically by
the client scripts. For instructions on installing Docker on your system,
see:

    https://docs.docker.com/install/

On Windows, cygwin and winpty must be installed as well. Follow the cygwin
installation instructions at:

    https://www.cygwin.com/

For winpty, download the latest release tarball for cygwin at:

    https://github.com/rprichard/winpty/releases

After downloading the tarball, extract it to any location and add the "bin"
folder of the extracted winpty directory to the Windows PATH variable.

Using the Remote CLI Clients on a Linux machine:
------------------------------------------------

To install the clients on a Linux machine follow these steps:

1. Untar the provided SDK module tarball.

   If you want to use your own docker images for the remote clients, or
   have pushed the default images to your own private docker registry, edit
   the PLATFORM_DOCKER_IMAGE and APPLICATION_DOCKER_IMAGE variables in the
   docker_image_version.sh file.
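As an illustration, a private-registry override of those two variables might look like the fragment below. The registry host and image tags here are placeholders we made up for the example, not the defaults shipped with the SDK module:

```shell
# Hypothetical override in docker_image_version.sh; the registry host
# (registry.example.com:5000) and the image names/tags are placeholders,
# not the defaults shipped with the SDK module.
PLATFORM_DOCKER_IMAGE="registry.example.com:5000/stx-platformclients:custom"
APPLICATION_DOCKER_IMAGE="registry.example.com:5000/stx-openstackclients:custom"

# If the registry requires authentication, log in once before first use:
#   docker login registry.example.com:5000
```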
   NOTE: If you are pulling images from custom docker registries that
   require authentication, use the "docker login" command to log in to
   that registry before attempting to run commands for the first time.

2. Download the openrc file from Horizon.

   Log in to Horizon as the user and tenant that you want to use the
   remote CLIs as, then go to:
   Project -> API Access -> Download Openstack RC file -> Openstack RC file

   NOTE: You can do this for the Kubernetes platform Horizon, the
   OpenStack application Horizon, or both.

3. For kubectl and helm commands to work remotely, you must configure a
   kubernetes service account on the target system and generate a
   configuration file based on that service account. The following script
   automates the creation of the service account and the generation of the
   configuration file. Run the following commands logged in as root.

   USER="adminuser"
   OUTPUT_FILE="temp-kubeconfig"

   cat <<EOF > admin-login.yaml
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: ${USER}
     namespace: kube-system
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: admin-user
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: cluster-admin
   subjects:
   - kind: ServiceAccount
     name: ${USER}
     namespace: kube-system
   EOF

   kubectl apply -f admin-login.yaml

   TOKEN_DATA=$(kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep ${USER} | awk '{print $1}') | grep "token:" | awk '{print $2}')

   source /etc/platform/openrc
   OAM_IP=$(system oam-show | grep oam_floating_ip | awk '{print $4}')
   if [ -z "$OAM_IP" ]; then
       # AIO-SX doesn't use oam_floating_ip, but instead uses just oam_ip
       OAM_IP=$(system oam-show | grep oam_ip | awk '{print $4}')
   fi

   # If it's an IPv6 address we must enclose it in brackets
   if [[ $OAM_IP =~ .*:.* ]]; then
       OAM_IP="[${OAM_IP}]"
   fi

   kubectl config --kubeconfig ${OUTPUT_FILE} set-cluster stxcluster --server=https://${OAM_IP}:6443 --insecure-skip-tls-verify
   kubectl config --kubeconfig ${OUTPUT_FILE} set-credentials ${USER} --token=$TOKEN_DATA
   kubectl config --kubeconfig ${OUTPUT_FILE} set-context stxcluster-default --cluster=stxcluster --user ${USER} --namespace=default
   kubectl config --kubeconfig ${OUTPUT_FILE} use-context stxcluster-default

   You can customize the service account name and the output configuration
   file by changing the USER and OUTPUT_FILE variables.

   Copy the generated file to the machine where the remote clients are
   deployed. The file should be given as a parameter to the
   "configure_client.sh" script.

4. Configure the clients for the Kubernetes platform / OpenStack
   application side.

   Kubernetes platform side:

   ./configure_client.sh -t platform -r admin_openrc.sh -k temp-kubeconfig

   By default this will generate a remote_client_platform.sh file.

   OpenStack application side:

   ./configure_client.sh -t openstack -r admin_openrc.sh

   By default this will generate a remote_client_openstack.sh file.

5. When access to the Kubernetes platform side is required, source the
   platform config file generated at step 4 in your console. Example:

   source remote_client_platform.sh

   When access to the OpenStack application side is required, source the
   application config file generated at step 4 in your console. Example:

   source remote_client_openstack.sh

   When prompted, enter your openstack password, then execute either
   platform or openstack application CLI commands. Sourcing the config
   files also populates aliases for the platform/application clients.

   NOTE: Do not source both the platform and application config files from
   the same console. To use both the Kubernetes platform and OpenStack
   application remote clients, source each config file from a separate
   console.

6. For commands that require local filesystem access, you can configure a
   work directory as part of the configure_client.sh script. The
   configuration parameter is "-w"; by default it uses the local folder
   from which the configuration script was called.
   This working directory is mounted at "/wd" in the docker container. If
   commands need access to local files, copy the files into your
   configured work directory and then give the command the path to the
   mounted folder inside the container.

   Example (uploading an image to glance for the openstack application):

   1. cp centos63.qcow2 <configured_work_dir>/
   2. openstack image create --disk-format qcow2 --container-format bare \
        --public --file /wd/centos63.qcow2 centos63-image

7. Some commands used by the remote CLI are designed to leave you in a
   shell prompt, for example "openstack" or
   "kubectl exec -ti <pod_name> -- /bin/bash". In some cases the mechanism
   that identifies commands that should leave you in a shell prompt does
   not identify them correctly. If you encounter such a scenario, you can
   force the shell, or force-disable it, by setting the FORCE_SHELL or
   FORCE_NO_SHELL variable before the command. You cannot use both
   variables at the same time.

   Examples:

   - force shell: FORCE_SHELL=true kubectl exec -ti <pod_name> -- /bin/bash
   - disable shell: FORCE_NO_SHELL=true kubectl exec <pod_name> -- ls
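Returning to step 6, the work-directory convention can be captured in a small helper. This is only a sketch, and the function name is ours rather than part of the SDK scripts: it copies a local file into the configured work directory and prints the path a containerized client should use, given that the directory is mounted at "/wd" in the container:

```shell
# Sketch of the step 6 work-directory convention (hypothetical helper,
# not part of the SDK scripts): stage a local file into the configured
# work directory and print the corresponding in-container path.
stage_for_container() {
    local src="$1" workdir="$2"
    cp "$src" "$workdir"/ || return 1
    echo "/wd/$(basename "$src")"
}

# Usage idea: stage a file, then pass the printed path to a client
# command, e.g.
#   openstack image create --disk-format qcow2 --container-format bare \
#       --public --file "$(stage_for_container centos63.qcow2 "$WORK_DIR")" \
#       centos63-image
```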