Merge "Editorial changes to README.md files"

This commit is contained in:
Zuul 2019-12-06 14:39:04 +00:00 committed by Gerrit Code Review
commit 31d491413a
2 changed files with 111 additions and 98 deletions

README.md
# Utility Containers
Utility containers give Operations staff an interface to an Airship
environment that enables them to perform routine operations and
troubleshooting activities. Utility containers support Airship
environments without exposing secrets and credentials while at
the same time restricting access to the actual containers.
## Prerequisites
Deploy OSH-AIO.
## System Requirements
The recommended minimum system requirements for a full deployment are:
* 16 GB RAM
* 8 Cores
* 48 GB HDD
## Installation
1. Add the below to `/etc/sudoers`.
```
root ALL=(ALL) NOPASSWD: ALL
ubuntu ALL=(ALL) NOPASSWD: ALL
```
2. Install the latest versions of Git, CA Certs, and Make if necessary.
```
sudo apt-get update
sudo apt-get dist-upgrade -y
sudo apt-get install --no-install-recommends -y \
  ca-certificates \
  git \
  make \
  jq \
  nmap \
  curl \
  uuid-runtime \
  bc
```
3. Clone the OpenStack-Helm repositories.
```
git clone https://git.openstack.org/openstack/openstack-helm-infra.git
git clone https://git.openstack.org/openstack/openstack-helm.git
```
4. Configure proxies.
In order to deploy OpenStack-Helm behind corporate proxy servers,
add the following entries to `openstack-helm-infra/tools/gate/devel/local-vars.yaml`.
```
proxy:
  http: http://username:password@host:port
  https: https://username:password@host:port
  noproxy: 127.0.0.1,localhost,172.17.0.1,.svc.cluster.local
```
Add the address of the Kubernetes API, `172.17.0.1`, and `.svc.cluster.local` to
your `no_proxy` and `NO_PROXY` environment variables.
```
export no_proxy=${no_proxy},172.17.0.1,.svc.cluster.local
export NO_PROXY=${NO_PROXY},172.17.0.1,.svc.cluster.local
```
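If `no_proxy`/`NO_PROXY` start out unset, the exports above leave a stray leading comma; a small hedged variant that appends cleanly either way:

```shell
# Append the Kubernetes API and cluster-local exclusions, avoiding a
# stray leading comma when no_proxy/NO_PROXY start out unset.
export no_proxy="${no_proxy:+${no_proxy},}172.17.0.1,.svc.cluster.local"
export NO_PROXY="${NO_PROXY:+${NO_PROXY},}172.17.0.1,.svc.cluster.local"
echo "no_proxy=${no_proxy}"
```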
5. Deploy Kubernetes and Helm.
```
cd openstack-helm
./tools/deployment/developer/common/010-deploy-k8s.sh
```
Edit `/etc/resolv.conf` and remove the DNS nameserver entry (`nameserver 10.96.0.10`).
The Python setup client fails if this nameserver entry is present.
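One hedged way to script that removal (shown against a scratch copy of the file; on a real host, point the `sed` at `/etc/resolv.conf` with sudo and keep a backup):

```shell
# Sketch: strip the cluster DNS entry that breaks the Python setup client.
# Demonstrated on a scratch copy; substitute /etc/resolv.conf (with sudo)
# for real use.
RESOLV=/tmp/resolv.conf.demo
printf 'nameserver 10.96.0.10\nnameserver 8.8.8.8\n' > "$RESOLV"
sed -i '/^nameserver 10\.96\.0\.10$/d' "$RESOLV"
cat "$RESOLV"
```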
6. Set up clients on the host, and assemble the charts.

```
./tools/deployment/developer/common/020-setup-client.sh
```

Re-add the DNS nameservers to `/etc/resolv.conf` so that the Keystone URLs resolve.

7. Deploy the ingress controller.

```
./tools/deployment/developer/common/030-ingress.sh
```

8. Deploy Ceph.

```
./tools/deployment/developer/ceph/040-ceph.sh
```

9. Activate the namespace to be able to use Ceph.

```
./tools/deployment/developer/ceph/045-ceph-ns-activate.sh
```

10. Deploy Keystone.

```
./tools/deployment/developer/ceph/080-keystone.sh
```

11. Deploy Heat.

```
./tools/deployment/developer/ceph/090-heat.sh
```

12. Deploy Horizon.

```
./tools/deployment/developer/ceph/100-horizon.sh
```

13. Deploy Glance.

```
./tools/deployment/developer/ceph/120-glance.sh
```

14. Deploy Cinder.

```
./tools/deployment/developer/ceph/130-cinder.sh
```

15. Deploy LibVirt.

```
./tools/deployment/developer/ceph/150-libvirt.sh
```

16. Deploy the compute kit (Nova and Neutron).

```
./tools/deployment/developer/ceph/160-compute-kit.sh
```
17. To run further commands from the CLI manually, execute the following
to set up authentication credentials.

```
export OS_CLOUD=openstack_helm
```
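`OS_CLOUD` names the entry in `clouds.yaml` that the setup-client step generated; a quick hedged check that the variable is in place (the live CLI call is left as a comment since it needs a running deployment):

```shell
# Hedged sketch: OS_CLOUD selects the named cloud entry in clouds.yaml;
# openstackclient reads it automatically.
export OS_CLOUD=openstack_helm
echo "Using cloud: ${OS_CLOUD}"
# Against a live deployment, a credentials check would be, e.g.:
#   openstack endpoint list
```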
18. Clone the Porthole repository to the openstack-helm project.

```
git clone https://opendev.org/airship/porthole.git
```
## To deploy utility pods
1. Add and make the chart:

```
cd porthole
helm repo add <chartname> http://localhost:8879/charts
make all
```

2. Deploy `Ceph-utility`.

```
./tools/deployment/utilities/010-ceph-utility.sh
```

3. Deploy `Compute-utility`.

```
./tools/deployment/utilities/020-compute-utility.sh
```

4. Deploy `Etcdctl-utility`.

```
./tools/deployment/utilities/030-etcdctl-utility.sh
```

5. Deploy `Mysqlclient-utility`.

```
./tools/deployment/utilities/040-Mysqlclient-utility.sh
```

6. Deploy `Openstack-utility`.

```
./tools/deployment/utilities/050-openstack-utility.sh
```
## NOTE
The PostgreSQL utility container is deployed as a part of Airship-in-a-Bottle (AIAB).
To deploy and test `postgresql-utility`, see the
[PostgreSQL README](https://opendev.org/airship/porthole/src/branch/master/images/postgresql-utility/README.md).

# PostgreSQL Utility Container
## Prerequisites: Deploy Airship in a Bottle (AIAB)
## Installation
1. Add the below to `/etc/sudoers`.
```
root ALL=(ALL) NOPASSWD: ALL
ubuntu ALL=(ALL) NOPASSWD: ALL
```
2. Install the latest versions of Git, CA Certs, and Make if necessary.
```
set -xe
# ... (package list elided in the diff) ...
curl \
uuid-runtime
```
3. Deploy Porthole.
```
git clone https://opendev.org/airship/porthole
```
4. Modify the test case `test-postgresqlutility-running.yaml`.
## Testing
Get into the utility pod using `kubectl exec`.
To perform any operation on the UCP PostgreSQL cluster, use the below example.
Example:
```
utilscli psql -h hostname -U username -d database
...
Type "help" for help.
postgresdb=# \d
List of relations
Schema | Name | Type | Owner
-------+------------------+----------+---------------
public | company | table | postgresadmin
public | role | table | postgresadmin
public | role_role_id_seq | sequence | postgresadmin