From 7b8f9d6147e08603e3fab196a9cee08ed64bddad Mon Sep 17 00:00:00 2001
From: Lindsey Durway
Date: Wed, 4 Dec 2019 15:11:30 -0600
Subject: [PATCH] Editorial changes to documentation files

Edited and revised formatting to improve readability and consistency
with other docs in this repo.

Change-Id: I8693b85fdbd84e625e774ae0fe4d81dae7d74a57
---
 docs/ceph_maintenance.md             | 74 ++++++++++++---------
 docs/rbd_pv.md                       | 32 ++++++----
 images/calicoctl-utility/README.md   | 49 ++++++++------
 images/ceph-utility/README.md        | 53 +++++++++------
 images/compute-utility/README.md     | 40 +++++++-----
 images/etcdctl-utility/README.md     | 96 +++++++++++++++-------------
 images/mysqlclient-utility/README.md | 73 +++++++++++----------
 images/openstack-utility/README.md   | 32 ++++++----
 jmphost/README.md                    | 71 +++++++------------
 9 files changed, 281 insertions(+), 239 deletions(-)

diff --git a/docs/ceph_maintenance.md b/docs/ceph_maintenance.md
index b9d62b17..0e0d24b4 100644
--- a/docs/ceph_maintenance.md
+++ b/docs/ceph_maintenance.md
@@ -1,79 +1,93 @@
# Ceph Maintenance

-This MOP covers Maintenance Activities related to Ceph.
+This document provides procedures for maintaining Ceph OSDs.

-## Table of Contents ##
+## Check OSD Status
-
-
-- Table of Contents
-  - 1. Generic commands
-  - 2. Replace failed OSD
-
-## 1. Generic Commands ##
-
-### Check OSD Status
-To check the current status of OSDs, execute the following:
+To check the current status of OSDs, execute the following.

```
utilscli osd-maintenance check_osd_status
```

-### OSD Removal
-To purge OSDs in down state, execute the following:
+## OSD Removal
+
+To purge OSDs that are in the down state, execute the following.

```
utilscli osd-maintenance osd_remove
```

-### OSD Removal By OSD ID
-To purge OSDs by OSD ID in down state, execute the following:
+## OSD Removal by OSD ID
+
+To purge down OSDs by specifying the OSD ID, execute the following.

```
utilscli osd-maintenance remove_osd_by_id --osd-id 
```

-### Reweight OSDs
-To adjust an OSD’s crush weight in the CRUSH map of a running cluster, execute the following:
+## Reweight OSDs
+
+To adjust an OSD’s crush weight in the CRUSH map of a running cluster,
+execute the following.

```
utilscli osd-maintenance reweight_osds
```

-## 2. Replace failed OSD ##
+## Replace a Failed OSD

-In the context of a failed drive, Please follow below procedure.
+If a drive fails, follow these steps to replace a failed OSD.

-Disable OSD pod on the host from being rescheduled
+1. Disable the OSD pod on the host to keep it from being rescheduled.

+```
kubectl label nodes --all ceph_maintenance_window=inactive
+```

-Replace `` with the name of the node were the failed osd pods exist.
+2. Below, replace `` with the name of the node where the failed OSD pods exist.

+```
kubectl label nodes --overwrite ceph_maintenance_window=active
+```

-Replace `` with failed OSD pod name
+3. Below, replace `` with the failed OSD pod name.

+```
kubectl patch -n ceph ds -p='{"spec":{"template":{"spec":{"nodeSelector":{"ceph-osd":"enabled","ceph_maintenance_window":"inactive"}}}}}'
+```

-Following commands should be run from utility container
+Complete the recovery by executing the following commands from the Ceph
+utility container.

-Capture the failed OSD ID. Check for status `down`
+1. Capture the failed OSD ID. Check for status `down`.

+```
utilscli ceph osd tree
+```
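+
+For reference, a down OSD might appear in the tree output as follows
+(illustrative sample only; the IDs, hostnames, and weights will differ in
+your cluster):
+
+```
+ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
+-1       0.09760 root default
+-3       0.04880     host node1
+ 0   hdd 0.04880         osd.0       up  1.00000 1.00000
+-5       0.04880     host node2
+ 1   hdd 0.04880         osd.1     down        0 1.00000
+```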

-Remove the OSD from Cluster. Replace `` with above captured failed OSD ID
+2. Remove the OSD from the cluster. Below, replace
+`` with the ID of the failed OSD.

+```
utilscli osd-maintenance osd_remove_by_id --osd-id 
+```

-Remove the failed drive and replace it with a new one without bringing down the node.
+3. Remove the failed drive and replace it with a new one without bringing down
+the node.

-Once new drive is placed, change the label and delete the concern OSD pod in `error` or `CrashLoopBackOff` state. Replace `` with failed OSD pod name.
+4. Once the new drive is in place, change the label and delete the OSD pod that
+is in the `error` or `CrashLoopBackOff` state. Below, replace ``
+with the failed OSD pod name.

+```
kubectl label nodes --overwrite ceph_maintenance_window=inactive
kubectl delete pod -n ceph
+```

-Once pod is deleted, kubernetes will re-spin a new pod for the OSD. Once Pod is up, the osd is added to ceph cluster with weight equal to `0`. we need to re-weight the osd.
+Once the pod is deleted, Kubernetes will re-spin a new pod for the OSD.
+Once the pod is up, the OSD is added to the Ceph cluster with a weight equal
+to `0`. Re-weight the OSD.

+```
utilscli osd-maintenance reweight_osds
-
+```
diff --git a/docs/rbd_pv.md b/docs/rbd_pv.md
index 8eeb03af..8466e992 100644
--- a/docs/rbd_pv.md
+++ b/docs/rbd_pv.md
@@ -1,10 +1,12 @@
# RBD PVC/PV script

-This MOP covers Maintenance Activities related to using the rbd_pv script
-to backup and recover PVCs within your kubernetes environment using Ceph.
+This document provides instructions for using the `rbd_pv` script to
+perform Ceph maintenance actions such as
+backing up and recovering PVCs within your Kubernetes environment.

## Usage
-Execute utilscli rbd_pv without arguements to list usage options.
+
+Execute `utilscli rbd_pv` without arguments to list usage options.

```
utilscli rbd_pv
@@ -14,20 +16,24 @@
Snapshot Usage: utilscli rbd_pv [-b ] [-n ] [-p 

diff --git a/images/calicoctl-utility/README.md b/images/calicoctl-utility/README.md
+```bash
+	make IMAGE_TAG=
+```

Example:

-1. Create docker image for calicoctl release v3.4.0
+Create a docker image for calicoctl release v3.4.0.

+```bash
	make IMAGE_TAG=v3.4.0
-=======
-Utility container for Calicoctl shall enable Operations to trigger the command set for
-Network APIs together from within a single shell with a uniform command structure. The
-access to network-Calico shall be controlled through RBAC role assigned to the user.
+```

-## Usage
+## Using the Utility Container

- Get in to the utility pod using kubectl exec.
- To perform any operation use the below example.
+The utility container for calicoctl shall enable Operations to access the
+command set for network APIs together from within a single shell with a
+uniform command structure. The access to network-Calico shall be controlled
+through an RBAC role assigned to the user.

- - kubectl exec -it  -n utility /bin/bash
+### Usage
+
+Get into the utility pod using `kubectl exec`.
+Execute an operation as in the following example.
+
+```
+	kubectl exec -it  -n utility /bin/bash
+```

Example:

-1. utilscli calicoctl get nodes
+```bash
+	utilscli calicoctl get nodes
NAME
bionic

-2. utilscli calicoctl version
+	utilscli calicoctl version
Client Version:    v3.4.4
Git commit:        e3ecd927
+```
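+
+Other read-only queries follow the same pattern, for example (illustrative;
+`default-ipv4-ippool` is the stock pool name and may differ per deployment):
+
+```bash
+	utilscli calicoctl get ippool
+NAME
+default-ipv4-ippool
+```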
diff --git a/images/ceph-utility/README.md b/images/ceph-utility/README.md
index 8aa450a7..bf514e9f 100644
--- a/images/ceph-utility/README.md
+++ b/images/ceph-utility/README.md
@@ -1,42 +1,55 @@
# Ceph-utility Container

-This CEPH utility container will help the Operation user to check the state/stats
-of Ceph resources in the K8s Cluster. This utility container will help to perform
-restricted admin level activities without exposing credentials/Keyring to user in
-utility container.
+The Ceph utility container enables Operations to check the state/stats
+of Ceph resources in the Kubernetes cluster. This utility container enables
+Operations to perform restricted administrative activities without exposing
+the credentials or keyring.

## Generic Docker Makefile
-
-This is a generic make and dockerfile for ceph utility container.
-This can be used to create docker images using different ceph releases and ubuntu releases
+This is a generic make and dockerfile for the Ceph utility container.
+This can be used to create docker images using different Ceph releases and
+Ubuntu releases.

## Usage

-make CEPH_RELEASE= UBUNTU_RELEASE=
+```bash
+	make CEPH_RELEASE= UBUNTU_RELEASE=
+```

-example:
+Example:

-1. Create docker image for ceph luminous release on ubuntu xenial (16.04)
+1. Create a docker image for the Ceph Luminous release on Ubuntu Xenial (16.04).

-	make CEPH_RELEASE=luminous UBUNTU_RELEASE=xenial
+```bash
+	make CEPH_RELEASE=luminous UBUNTU_RELEASE=xenial
+```

-2. Create docker image for ceph mimic release on ubuntu xenial (16.04)
+2. Create a docker image for the Ceph Mimic release on Ubuntu Xenial (16.04).

-	make CEPH_RELEASE=mimic UBUNTU_RELEASE=xenial
+```bash
+	make CEPH_RELEASE=mimic UBUNTU_RELEASE=xenial
+```

-3. Create docker image for ceph luminous release on ubuntu bionic (18.04)
+3. Create a docker image for the Ceph Luminous release on Ubuntu Bionic (18.04).

-	make CEPH_RELEASE=luminous UBUNTU_RELEASE=bionic
+```bash
+	make CEPH_RELEASE=luminous UBUNTU_RELEASE=bionic
+```

-4. Create docker image for ceph mimic release on ubuntu bionic (18.04)
+4. Create a docker image for the Ceph Mimic release on Ubuntu Bionic (18.04).

-	make CEPH_RELEASE=mimic UBUNTU_RELEASE=bionic
+```bash
+	make CEPH_RELEASE=mimic UBUNTU_RELEASE=bionic
+```

-5. Get in to the utility pod using kubectl exec.
-   To perform any operation on the ceph cluster use the below example.
+5. Get into the utility pod using `kubectl exec`.
+   Perform an operation on the Ceph cluster as in the following example.

-example:
+Example:
+
+```
utilscli ceph osd tree
utilscli rbd ls
utilscli rados lspools
+```
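+
+Additional read-only checks follow the same pattern, for example
+(illustrative; these assume the keyring mounted in the pod permits status
+queries):
+
+```
+utilscli ceph status
+utilscli ceph df
+```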
diff --git a/images/compute-utility/README.md b/images/compute-utility/README.md
index d38172cd..75c72082 100644
--- a/images/compute-utility/README.md
+++ b/images/compute-utility/README.md
@@ -1,30 +1,38 @@
# Compute-utility Container

-This container shall allow access to services running on the each compute node.
-Support personnel should be able to get the appropriate data from this utility container
-by specifying the node and respective service command within the local cluster.
+This container enables Operations personnel to access services running on
+the compute nodes. Operations personnel can get the appropriate data from this
+utility container by specifying the node and respective service command within
+the local cluster.

## Usage

-1. Get in to the utility pod using kubectl exec. To perform any operation use the below example.
+1. Get into the utility pod using `kubectl exec`. Perform an operation as in
+the following example.

- - kubectl exec -it  -n utility /bin/bash
+```
+	kubectl exec -it  -n utility /bin/bash
+```

-2. Run the utilscli with commands formatted:
+2. Use the following syntax to run commands.

- - utilscli 
+```
+	utilscli 
+```

-example:
+Example:

- - utilscli libvirt-client mtn16r001c002 virsh list
+```
+	utilscli libvirt-client node42 virsh list
+```

-Accepted client-names are:
-libvirt-client
-ovs-client
-ipmi-client
-perccli-client
-numa-client
-sos-client
+Accepted client names are:
+
+* libvirt-client
+* ovs-client
+* ipmi-client
+* perccli-client
+* numa-client
+* sos-client

Commands for each client vary with the client.
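+
+The same form works for the other clients. For example, to list the Open
+vSwitch configuration on a node (illustrative; `node42` is a placeholder
+hostname):
+
+```
+	utilscli ovs-client node42 ovs-vsctl show
+```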
diff --git a/images/etcdctl-utility/README.md b/images/etcdctl-utility/README.md
index c0b5c485..157c5faf 100644
--- a/images/etcdctl-utility/README.md
+++ b/images/etcdctl-utility/README.md
@@ -1,70 +1,74 @@
-# etcdctl utility Container
+# Etcdctl Utility Container

-## Prerequisites: Deploy Airship in a Bottle(AIAB)
+## Prerequisites: Deploy Airship in a Bottle (AIAB)

-To get started, run the following in a fresh Ubuntu 16.04 VM (minimum 4vCPU/20GB RAM/32GB disk).
-This will deploy Airship and Openstack Helm (OSH).
+To get started, deploy Airship and OpenStack Helm (OSH).
+Execute the following in a fresh Ubuntu 16.04 VM that meets these minimum
+requirements:

+* 4 vCPU
+* 20 GB RAM
+* 32 GB disk storage
+
+1. Add the following entries to `/etc/sudoers`.

```
-root ALL=(ALL) NOPASSWD: ALL
-ubuntu ALL=(ALL) NOPASSWD: ALL
+	root ALL=(ALL) NOPASSWD: ALL
+	ubuntu ALL=(ALL) NOPASSWD: ALL
```

-2. Install the latest versions of Git, CA Certs & bundle & Make if necessary
+2. Install the latest versions of Git, CA Certs, and Make if necessary.

-```
-set -xe \
-sudo apt-get update \
-sudo apt-get install --no-install-recommends -y \
-ca-certificates \
-git \
-make \
-jq \
-nmap \
-curl \
-uuid-runtime
+```bash
+	set -xe
+	sudo apt-get update
+	sudo apt-get install --no-install-recommends -y \
+	ca-certificates \
+	git \
+	make \
+	jq \
+	nmap \
+	curl \
+	uuid-runtime
```

-## Deploy Airship in a Bottle(AIAB)
+## Deploy Airship in a Bottle (AIAB)

-Deploy AirShip in a Bottle(AIAB) which will deploy etcdctl-utility pod.
+Deploy Airship in a Bottle (AIAB), which deploys the etcdctl-utility pod.

-```
-sudo -i \
-mkdir -p root/deploy && cd "$_" \
-git clone https://opendev.org/airship/treasuremap \
-cd /root/deploy/treasuremap/tools/deployment/aiab \
-./airship-in-a-bottle.sh
+```bash
+	sudo -i
+	mkdir -p /root/deploy && cd "$_"
+	git clone https://opendev.org/airship/treasuremap
+	cd /root/deploy/treasuremap/tools/deployment/aiab
+	./airship-in-a-bottle.sh
```

## Usage and Test

-Get in to the etcdctl-utility pod using kubectl exec.
-To perform any operation use the below example.
+Get into the etcdctl-utility pod using `kubectl exec`.
+Perform an operation as in the following example.

```
-$kubectl exec -it  -n utility -- /bin/bash
+	kubectl exec -it  -n utility -- /bin/bash
```

-example:
+Example:

```
-utilscli etcdctl member list
-utilscli etcdctl endpoint health
-utilscli etcdctl endpoint status
+	utilscli etcdctl member list
+	utilscli etcdctl endpoint health
+	utilscli etcdctl endpoint status

-nobody@airship-etcdctl-utility-998b4f4d6-65x6d:/$ utilscli etcdctl member list
-90d1b75fa1b31b89, started, ubuntu, https://10.0.2.15:2380, https://10.0.2.15:2379
-ab1f60375c5ef1d3, started, auxiliary-1, https://10.0.2.15:22380, https://10.0.2.15:22379
-d8ed590018245b3c, started, auxiliary-0, https://10.0.2.15:12380, https://10.0.2.15:12379
-nobody@airship-etcdctl-utility-998b4f4d6-65x6d:/$ utilscli etcdctl endpoint health
-https://kubernetes-etcd.kube-system.svc.cluster.local:2379 is healthy:
-successfully committed proposal: took = 1.787714ms
-nobody@airship-etcdctl-utility-998b4f4d6-65x6d:/$ utilscli etcdctl alarm list
-nobody@airship-etcdctl-utility-998b4f4d6-65x6d:/$ utilscli etcdctl version
-etcdctl version: 3.4.2
-API version: 3.3
-nobody@airship-etcdctl-utility-998b4f4d6-65x6d:/$
+	nobody@airship-etcdctl-utility-998b4f4d6-65x6d:/$ utilscli etcdctl member list
+	90d1b75fa1b31b89, started, ubuntu, https://10.0.2.15:2380, https://10.0.2.15:2379
+	ab1f60375c5ef1d3, started, auxiliary-1, https://10.0.2.15:22380, https://10.0.2.15:22379
+	d8ed590018245b3c, started, auxiliary-0, https://10.0.2.15:12380, https://10.0.2.15:12379
+	nobody@airship-etcdctl-utility-998b4f4d6-65x6d:/$ utilscli etcdctl endpoint health
+	https://kubernetes-etcd.kube-system.svc.cluster.local:2379 is healthy:
+	successfully committed proposal: took = 1.787714ms
+	nobody@airship-etcdctl-utility-998b4f4d6-65x6d:/$ utilscli etcdctl alarm list
+	nobody@airship-etcdctl-utility-998b4f4d6-65x6d:/$ utilscli etcdctl version
+	etcdctl version: 3.4.2
+	API version: 3.3
+	nobody@airship-etcdctl-utility-998b4f4d6-65x6d:/$
```
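+
+For a tabular summary of endpoint state, etcdctl also accepts a table output
+mode, for example (illustrative; the flag assumes the etcdctl v3 API):
+
+```
+	utilscli etcdctl endpoint status --write-out=table
+```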
diff --git a/images/mysqlclient-utility/README.md b/images/mysqlclient-utility/README.md
index 24d4ff83..ff813fea 100644
--- a/images/mysqlclient-utility/README.md
+++ b/images/mysqlclient-utility/README.md
@@ -1,47 +1,52 @@
# Mysqlclient-utility Container

-This container allows users access to MariaDB pods remotely to perform db
-functions. Authorized users in UCP keystone RBAC will able to run queries
-through 'utilscli' helper.
+This utility container allows Operations personnel to access MariaDB pods
+remotely to perform database functions. Authorized users in UCP Keystone
+RBAC will be able to run queries through the `utilscli` helper.

-## Usage & Test
+## Usage

-Get in to the utility pod using kubectl exec. Then perform the followings:
+Get into the utility pod using `kubectl exec`.

-## Case 1 - Execute into the pod
+```
+	kubectl exec -it  -n utility /bin/bash
+```

- - $kubectl exec -it  -n utility /bin/bash
+## Testing Connectivity to Mariadb (Optional)

-## Case 2 - Test connectivity to Mariadb (optional)
+1. Find the mariadb pod and its corresponding IP.

-1. Find mariadb pod and its corresponding IP
----
- - $kubectl get pods --all-namespaces | grep -i mariadb-server | awk '{print $1,$2}' \
- | while read a b ; do kubectl get pod $b -n $a -o wide
-done
----
+```
+	kubectl get pods --all-namespaces | grep -i mariadb-server | awk '{print $1,$2}' \
+	| while read a b ; do kubectl get pod $b -n $a -o wide
+	done
+```

-2. Now connect to the pod as described in Case 1 by providing the arguments
-   as indicated for the CLI, as shown below
+2. Connect to the indicated pod by providing the arguments
+   specified for the CLI as shown below.

- - $kubectl exec -it  -n utility -- mysql -h  -u root -p \
-   -e 'show databases;'
+```
+	kubectl exec -it  -n utility -- mysql -h  -u root -p \
+	-e 'show databases;'
+```

-   It's expected to see an output looks similar to below.
+The output should resemble the following.

->--------------------+\
-| Database |\
-|--------------------|\
-| cinder |\
-| glance |\
-| heat |\
-| horizon |\
-| information_schema |\
-| keystone |\
-| mysql |\
-| neutron |\
-| nova |\
-| nova_api |\
-| nova_cell0 |\
-| performance_schema |\
-+--------------------+\
+```
+	+--------------------+
+	| Database           |
+	+--------------------+
+	| cinder             |
+	| glance             |
+	| heat               |
+	| horizon            |
+	| information_schema |
+	| keystone           |
+	| mysql              |
+	| neutron            |
+	| nova               |
+	| nova_api           |
+	| nova_cell0         |
+	| performance_schema |
+	+--------------------+
+```
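+
+A quick non-interactive connectivity check follows the same pattern, for
+example (illustrative; the angle-bracket values are placeholders):
+
+```
+	kubectl exec -it <pod-name> -n utility -- mysql -h <mariadb-ip> -u root -p \
+	-e 'select version();'
+```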
diff --git a/images/openstack-utility/README.md b/images/openstack-utility/README.md
index f26cd4f9..032d7849 100644
--- a/images/openstack-utility/README.md
+++ b/images/openstack-utility/README.md
@@ -1,24 +1,30 @@
-# Openstack-utility Container
+# OpenStack-utility Container

-Utility container for Openstack shall enable Operations to trigger the command set for
-Compute, Network, Identity, Image, Block Storage, Queueing service APIs together from
-within a single shell with a uniform command structure. The access to Openstack shall
-be controlled through Openstack RBAC role assigned to the user. User will have to set
-the Openstack environment (openrc) in utility container to access the Openstack CLIs.
-The generic environment file will be placed in Utility container with common setting except
-username, password and project_ID. User needs to pass such parameters through command options.
+The utility container for OpenStack shall enable Operations to access the
+command set for Compute, Network, Identity, Image, Block Storage, and
+Queueing service APIs together from within a single shell with a uniform
+command structure. The access to OpenStack shall be controlled through an
+OpenStack RBAC role assigned to the user. The user will have to set
+the OpenStack environment (openrc) in the utility container to access the
+OpenStack CLIs. The generic environment file will be placed in the utility
+container with common settings except username, password, and project_ID.
+The user needs to specify these parameters using command options.

## Usage

-1. Get in to the utility pod using kubectl exec.
-   To perform any operation use the below example.
-   Please be ready with password for accessing below cli commands.
+Get into the utility pod using `kubectl exec`.
+Perform an operation as in the following example.
+Please be ready with your password for accessing the CLI commands.

- - kubectl exec -it  -n utility /bin/bash
+```
+	kubectl exec -it  -n utility /bin/bash
+```

-example:
+Example:

+```bash
	utilscli openstack server list --os-username  --os-domain-name \
--os-project-name  --os-domain-name \
--os-project-name 
+```
diff --git a/jmphost/README.md b/jmphost/README.md
- FETCH_HEAD
-	Merge made by the 'recursive' strategy.
-	jmphost/README.md           |  130 ++++++++++++++++++++++++++++++++++++++++
-	jmphost/funs_uc.sh          |   57 ++++++++++++++++++++++++++++++++++++++++
-	jmphost/setup-access.sh     |  132 ++++++++++++++++++++++++++++++++++++++++
-	zuul.d/jmphost-utility.yaml |   35 ++++++++++++++++++++++++++++++++++++++++
-
-	4 files changed, 354 insertions(+)
-	create mode 100644 jmphost/README.md
-	create mode 100755 jmphost/funs_uc.sh
-	create mode 100755 jmphost/setup-access.sh
-	create mode 100644 zuul.d/jmphost-utility.yaml
-
-### 2.3 Run Setup
+### 2.2 Run Setup
+
	$cd $porthole
	$sudo -s
	$cd jmphost
	$./setup-access.sh "site" "userid" "namespace"
@@ -131,16 +101,20 @@
https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-ke
args:
- "--keystone-url=https:///v3"

-## Validation
+## 3. Validation

-- Now log out and log back in as the user.
-- Update the configuration file with user corresponding credentials.
+To test, perform these steps.

-For testing purposes:
-- Replacing **"OS_USERNAME"** and **"OS_PASSWORD"** with UCP Keystone credentials
-- Set the **"OS_PROJECT_NAME"** value accordingly
+1. Log out and log back in as the user.

-### List pods
+2. Update the configuration file with the user's credentials.
+
+   * Replace *"OS_USERNAME"* and *"OS_PASSWORD"* with UCP Keystone
+credentials.
+
+   * Set the *"OS_PROJECT_NAME"* value accordingly.
+
+### 3.1 List Pods

	$kubectl get pods -n utility

@@ -152,7 +126,7 @@
clcp-ucp-ceph-utility-config-ceph-ns-key-generator-pvfcl 0/1 Completed 0 4h12m
clcp-ucp-ceph-utility-config-test 0/1 Completed 0 4h12m

-### Execute into the pod
+### 3.2 Execute into the Pod

	$kubectl exec -it [pod-name] -n utility /bin/bash

	command terminated with exit code 126

-Because the user id entered in the configuration file is not a member in UCP keystone
-RBAC to execute into the pod, it's expecting to see "permission denied".
+The "permission denied" error is expected in this case because the user ID
+entered in the configuration file does not have a UCP Keystone RBAC role
+that permits executing into the pod.