Airship utility CLI access
Utility Container
-----------------
1. Ceph Utility Container

Installation
------------
1. Add the following entries to /etc/sudoers:

root    ALL=(ALL) NOPASSWD: ALL
ubuntu  ALL=(ALL) NOPASSWD: ALL
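Rather than editing /etc/sudoers directly, the entries above can be staged as a drop-in file and syntax-checked with visudo first. This is a minimal sketch; the file name porthole-nopasswd is an arbitrary choice, not part of the project.

```shell
# Stage the NOPASSWD entries in a temporary file
cat > /tmp/porthole-nopasswd <<'EOF'
root    ALL=(ALL) NOPASSWD: ALL
ubuntu  ALL=(ALL) NOPASSWD: ALL
EOF

# Validate the syntax before installing (guarded in case visudo is absent)
if command -v visudo >/dev/null 2>&1; then
  visudo -c -f /tmp/porthole-nopasswd
fi

# To activate, install it into the sudoers drop-in directory:
#   sudo install -m 0440 /tmp/porthole-nopasswd /etc/sudoers.d/porthole-nopasswd
```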

2. If necessary, install current versions of Git, CA certificates, Make, and the other packages below:

#!/bin/bash
set -xe

sudo apt-get update
sudo apt-get install --no-install-recommends -y \
        ca-certificates \
        git \
        make \
        jq \
        nmap \
        curl \
        uuid-runtime

3. Proxy Configuration

To deploy OpenStack-Helm behind corporate proxy servers, add the following entries to openstack-helm-infra/tools/gate/devel/local-vars.yaml:

proxy:
  http: http://username:password@host:port
  https: https://username:password@host:port
  noproxy: 127.0.0.1,localhost,172.17.0.1,.svc.cluster.local

Add the address of the Kubernetes API, 172.17.0.1, and .svc.cluster.local to your no_proxy and NO_PROXY environment variables:

export no_proxy=${no_proxy},172.17.0.1,.svc.cluster.local
export NO_PROXY=${NO_PROXY},172.17.0.1,.svc.cluster.local

4. Clone the OpenStack-Helm Repos

#!/bin/bash
set -xe

git clone https://git.openstack.org/openstack/openstack-helm-infra.git
git clone https://git.openstack.org/openstack/openstack-helm.git

5. Deploy Kubernetes & Helm

cd openstack-helm
./tools/deployment/developer/common/010-deploy-k8s.sh

6. Install OpenStack-Helm

Set up the clients on the host and assemble the charts:
./tools/deployment/developer/common/020-setup-client.sh

Deploy the ingress controller:
./tools/deployment/developer/common/030-ingress.sh

7. Deploy Ceph

./tools/deployment/developer/ceph/040-ceph.sh

Activate the OpenStack namespace so that it can use Ceph:
./tools/deployment/developer/ceph/045-ceph-ns-activate.sh

8. Deploy Porthole

git clone https://github.com/att-comdev/porthole.git

cd porthole
./install_utility.sh
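After install_utility.sh completes, it is worth confirming that the utility pods came up. This is a sketch only; the "utility" namespace is an assumption, so adjust it to match your deployment.

```shell
# Namespace the utility pods are assumed to be deployed into; adjust as needed
NS_CHECK=utility

# List the pods if kubectl is available on this host
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n "$NS_CHECK" -o wide
fi
```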

Usage
-----

Get into the utility pod using kubectl exec. To perform an operation on the Ceph cluster, use the examples below.

Examples:
   utilscli ceph osd tree
   utilscli rbd ls
   utilscli rados lspools
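The kubectl exec step above can be sketched as follows. The namespace "utility" and the label selector are assumptions, not values confirmed by this README; adjust both to match your deployment.

```shell
NS=utility                               # assumed namespace
SELECTOR="application=ceph-utility"      # assumed pod label

if command -v kubectl >/dev/null 2>&1; then
  # Look up the first matching utility pod by label
  POD=$(kubectl get pods -n "$NS" -l "$SELECTOR" \
          -o jsonpath='{.items[0].metadata.name}')
  # Run a utilscli command inside the pod
  kubectl exec -it -n "$NS" "$POD" -- utilscli ceph osd tree
else
  echo "kubectl not found; run this on a host with cluster access" >&2
fi
```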

TODO
----
1. Customize oslo filters to restrict commands.