Merge "Remove compose from documenation"

This commit is contained in:
Jenkins 2015-08-17 20:19:05 +00:00 committed by Gerrit Code Review
commit af419c835b
4 changed files with 22 additions and 909 deletions

View File

@@ -1,11 +1,14 @@
# Developer Environment
If you are developing Kolla on an existing OpenStack cloud
that supports Heat, then follow the Heat template [README][].
Otherwise, follow the instructions below to manually create
your Kolla development environment.
If you are developing Kolla on an existing OpenStack cloud that supports
Heat, then follow the Heat template [README][]. Another option available
on systems with VirtualBox is the use of [Vagrant][].
The best experience is available with bare metal deployment by following
the instructions below to manually create your Kolla deployment.
[README]: https://github.com/stackforge/kolla/blob/master/devenv/README.md
[Vagrant]: https://github.com/stackforge/kolla/blob/master/docs/vagrant.md
## Installing Dependencies
@@ -14,21 +17,18 @@ modules with the .xz compressed format. The guestfs system cannot read
these images because a dependent package supermin in CentOS needs to be
updated to add .xz compressed format support.
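A possible workaround, sketched here purely as an illustration (it is not part of this change), is to decompress the affected kernel modules in place so that the stock supermin can read them:
sudo find /lib/modules/$(uname -r) -name '*.ko.xz' -exec xz -d {} \;
sudo depmod -a    # rebuild the module dependency map afterwards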
In order to run Kolla, it is mandatory to run a version of `docker-compose`
that includes pid: host support. Support was added in version 1.3.0 and is
specified in the requirements.txt. To install this and other potential future
dependencies:
To install Kolla dependencies, use:
git clone http://github.com/stackforge/kolla
cd kolla
sudo pip install -r requirements.txt
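If you prefer to keep these Python dependencies isolated from the system packages, the same install also works inside a virtualenv (illustrative only, not required by Kolla):
virtualenv ~/kolla-venv
source ~/kolla-venv/bin/activate
pip install -r requirements.txt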
In order to run Kolla, it is mandatory to run a version of `docker` that is
1.6.0 or later. Docker 1.5.0 has a defect in `--pid=host` support where the
libvirt container cannot be stopped and crashes nova-compute on start.
1.7.0 or later.
For most systems you can install the latest stable version of Docker with the
following command:
curl -sSL https://get.docker.io | bash
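To confirm that the installed Docker meets the version requirement above, and to see which storage driver the daemon is using (relevant to the AUFS note below), the standard Docker CLI can be queried, for example:
docker --version
sudo docker info | grep -i 'storage driver'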
For Ubuntu based systems, do not use AUFS when starting Docker daemon unless
@@ -46,71 +46,31 @@ running at a time.
service libvirtd stop
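On systemd-based hosts an equivalent, persistent way to keep the host libvirtd from starting again is, for example:
sudo systemctl stop libvirtd
sudo systemctl disable libvirtd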
The basic starting environment will be created using `docker-compose`.
The basic starting environment will be created using `ansible`.
This environment will start up the OpenStack services listed in the
compose directory.
inventory file.
## Starting Kolla
To start, set up your environment variables.
Configure Ansible by reading the Kolla Ansible configuration documentation
[DEPLOY][].
$ cd kolla
$ ./tools/genenv
The `genenv` script will create a compose/openstack.env file
and an openrc file in your current directory. The openstack.env
file contains all of your initialized environment variables, which
you can edit for a different setup.
A mandatory step is customizing the FLAT_INTERFACE network interface
environment variable. The variable defaults to eth1. In some cases, the
second interface in a system may not be eth1, but a unique name. For
example with an Intel driver, the interface is enp1s0. The interface name
can be determined by executing the ifconfig tool. The second interface must
be a real interface, not a virtual interface. Make certain to store the
interface name in `compose/openstack.env`:
NEUTRON_FLAT_NETWORK_INTERFACE=enp1s0
FLAT_INTERFACE=enp1s0
[DEPLOY]: https://github.com/stackforge/kolla/blob/master/docs/ansible-deployment.md
Next, run the start command:
$ sudo ./tools/kolla-compose start
$ sudo ./tools/kolla-ansible deploy
Finally, run the status command:
$ sudo ./tools/kolla-compose status
This will display information about all Kolla containers.
A bare metal system takes three minutes to deploy AIO. A virtual machine
takes five minutes to deploy AIO. These are estimates; your hardware may
be faster or slower but should be near these results.
## Debugging Kolla
All Docker commands should be run from the directory of the Docker binary;
by default this is `/`.
The `start` command to Kolla is responsible for starting the containers
using `docker-compose -f <service-container> up -d`.
If you want to start a container set by hand use this template:
$ docker-compose -f glance-api-registry.yml up -d
You can determine a container's status by executing:
$ sudo ./docker ps -a
$ sudo docker ps -a
If any of the containers exited you can check the logs by executing:
$ sudo ./docker logs <container-id>
$ docker-compose logs <container-id>
If you want to start an individual service like `glance-api` manually, use
this template. This is a good method to test and troubleshoot an individual
container. Note some containers require special options. Reference the
compose yml specification for more details:
$ sudo ./docker run --name glance-api -d \
--net=host \
--env-file=compose/openstack.env \
kollaglue/fedora-rdo-glance-api:latest
$ sudo docker logs <container-name>
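A few other generic Docker commands are often useful when debugging; these are plain Docker CLI examples, not Kolla-specific tooling:
$ sudo docker logs -f <container-name>          # follow log output
$ sudo docker inspect <container-name>          # dump the full container configuration
$ sudo docker exec -it <container-name> bash    # open a shell in a running container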

View File

@@ -1,686 +0,0 @@
#!/bin/bash
#
# This script generates a minimal set of environment variables to allow
# the openstack containers to operate. It is creating a configuration
# suitable for an all-in-one installation of openstack.
#
# It also creates a suitable 'openrc' for use with the installed system.
function check_binarydependencies {
    local binaries="openssl"
    local missingbinaries=""
    local space=""
    for bin in $binaries; do
        if [[ ! $(type -t $bin) ]]; then
            missingbinaries+=${space}$bin
            space=" "
        fi
    done
    if [ -n "$missingbinaries" ]; then
        echo Missing dependencies: $missingbinaries
        exit 1
    fi
}
check_binarydependencies
# Move to top level directory
REAL_PATH=$(python -c "import os,sys;print os.path.realpath('$0')")
cd "$(dirname "$REAL_PATH")/.."
MY_IP=${MY_IP:-$(ip route get $(ip route | awk '$1 == "default" {print $3}') |
awk '$4 == "src" {print $5}')}
MY_DEV=${MY_DEV:-$(ip route get $(ip route | awk '$1 == "default" {print $3}') |
awk '$4 == "src" {print $3}')}
echo MY_IP=$MY_IP
echo MY_DEV=$MY_DEV
# API versions
CINDER_API_VERSION=2
# Admin user
ADMIN_USER=admin
ADMIN_USER_PASSWORD=steakfordinner
# Database
BIND_ADDRESS=$PUBLIC_IP
CHAR_SET_SERVER=utf8
COLLATION_SERVER=utf8_general_ci
DATADIR=/var/lib/mysql
DEFAULT_STORAGE_ENGINE=innodb
HOST_IP=$MY_IP
INIT_CINDER_DB=true
INIT_CONNECT="SET NAMES utf8"
INIT_DESIGNATE_DB=true
INIT_GLANCE_DB=true
INIT_MURANO_DB=true
INIT_HEAT_DB=true
INIT_KEYSTONE_DB=true
INIT_NOVA_DB=true
INNODB_FILE_PER_TABLE=true
MARIADB_MAX_CONNECTIONS=151
MARIADB_SERVICE_PORT=3306
MONGODB_SERVICE_PORT=27017
MARIADB_ROOT_PASSWORD=kolla
PASSWORD=12345
TEMP_FILE=/tmp/mysql-first-time.sql
# Galera
DB_CLUSTER_BIND_ADDRESS=0.0.0.0
DB_CLUSTER_INIT_DB=false
DB_CLUSTER_NAME=kollacluster
DB_CLUSTER_NODES=
DB_CLUSTER_WSREP_METHOD=mysqldump
# Host
ADMIN_TENANT_NAME=admin
PUBLIC_IP=$HOST_IP
# Logging
CINDER_API_LOG_FILE=
CINDER_BACKUP_LOG_FILE=
CINDER_LOG_DIR=
CINDER_SCHEDULER_LOG_FILE=
CINDER_VOLUME_LOG_FILE=
DEBUG_LOGGING=false
NEUTRON_L3_AGENT_LOG_FILE=
NEUTRON_LINUXBRIDGE_AGENT_LOG_FILE=
NEUTRON_LOG_DIR=/var/log/neutron
NEUTRON_METADATA_AGENT_LOG_FILE=
NEUTRON_SERVER_LOG_FILE=
NOVA_API_LOG_FILE=
NOVA_COMPUTE_LOG_FILE=
NOVA_CONDUCTOR_LOG_FILE=
NOVA_CONSOLEAUTH_LOG_FILE=
NOVA_LOG_DIR=
NOVA_NOVNCPROXY_LOG_FILE=
NOVA_SCHEDULER_LOG_FILE=
OVS_DB_FILE="/etc/openvswitch/conf.db"
OVS_LOG_FILE=
OVS_UNIXSOCK="/var/run/openvswitch/db.sock"
VERBOSE_LOGGING=true
# RabbitMQ
RABBITMQ_CLUSTER_COOKIE=
RABBITMQ_CLUSTER_NODES=
RABBITMQ_SERVICE_HOST=$HOST_IP
RABBITMQ_SERVICE_PORT=5672
RABBIT_PASSWORD=guest
RABBITMQ_USER=guest
#Barbican
ADMIN_TENANT_NAME=admin
BARBICAN_ADMIN_SERVICE_PORT=9312
BARBICAN_DB_NAME=barbican
BARBICAN_DB_USER=barbican
BARBICAN_KEYSTONE_USER=barbican
BARBICAN_PUBLIC_SERVICE_PORT=9311
KEYSTONE_AUTH_PROTOCOL=http
#Ceilometer
CEILOMETER_ADMIN_PASSWORD=password
CEILOMETER_API_SERVICE_HOST=$HOST_IP
CEILOMETER_API_SERVICE_PORT=8777
CEILOMETER_DB_NAME=ceilometer
CEILOMETER_DB_PASSWORD=password
CEILOMETER_DB_USER=ceilometer
CEILOMETER_KEYSTONE_USER=ceilometer
# Cinder API
CINDER_ADMIN_PASSWORD=password
CINDER_API_SERVICE_HOST=$HOST_IP
CINDER_API_SERVICE_LISTEN=$HOST_IP
CINDER_API_SERVICE_PORT=8776
CINDER_KEYSTONE_PASSWORD=password
CINDER_KEYSTONE_USER=cinder
# Cinder Scheduler
CINDER_DB_NAME=cinder
CINDER_DB_PASSWORD=password
CINDER_DB_USER=cinder
# Cinder Backup
CINDER_BACKUP_API_CLASS=cinder.backup.api.API
CINDER_BACKUP_DRIVER=cinder.backup.drivers.swift
CINDER_BACKUP_MANAGER=cinder.backup.manager.BackupManager
CINDER_BACKUP_NAME_TEMPLATE=backup-%s
# Cinder Volume
CINDER_ENABLED_BACKEND=lvm57
CINDER_LVM_LO_VOLUME_SIZE=4G
CINDER_VOLUME_API_LISTEN=$HOST_IP
CINDER_VOLUME_BACKEND_NAME=LVM_iSCSI57
CINDER_VOLUME_DRIVER=cinder.volume.drivers.lvm.LVMISCSIDriver
CINDER_VOLUME_GROUP=cinder-volumes
ISCSI_HELPER=tgtadm
ISCSI_IP_ADDRESS=$HOST_IP
# Designate
DESIGNATE_DB_NAME=designate
DESIGNATE_DB_USER=designate
DESIGNATE_DB_PASSWORD=designatedns
DESIGNATE_KEYSTONE_USER=designate
DESIGNATE_KEYSTONE_PASSWORD=designate
DESIGNATE_BIND9_RNDC_KEY=$(openssl rand -base64 24)
DESIGNATE_MASTERNS=$HOST_IP
DESIGNATE_BACKEND=bind9
DESIGNATE_SLAVENS=$HOST_IP
DESIGNATE_API_SERVICE_HOST=$HOST_IP
DESIGNATE_API_SERVICE_PORT=9001
DESIGNATE_MDNS_PORT=5354
DESIGNATE_DNS_PORT=53
DESIGNATE_POOLMAN_POOLID=$(uuidgen)
DESIGNATE_POOLMAN_TARGETS=$(uuidgen)
DESIGNATE_POOLMAN_NSS=$(uuidgen)
DESIGNATE_ALLOW_RECURSION=true
DESIGNATE_DEFAULT_POOL_NS_RECORD=ns1.example.org.
DESIGNATE_SINK_NOVA_DOMAIN_NAME=nova.example.org.
DESIGNATE_SINK_NEUTRON_DOMAIN_NAME=neutron.example.org.
DESIGNATE_SINK_NOVA_FORMATS=("%(octet0)s-%(octet1)s-%(octet2)s-%(octet3)s.%(domain)s" "%(hostname)s.%(domain)s")
DESIGNATE_SINK_NEUTRON_FORMATS=("%(octet0)s-%(octet1)s-%(octet2)s-%(octet3)s.%(domain)s" "%(hostname)s.%(domain)s")
# Glance
GLANCE_API_SERVICE_HOST=$HOST_IP
GLANCE_API_SERVICE_PORT=9292
GLANCE_DB_NAME=glance
GLANCE_DB_PASSWORD=kolla
GLANCE_DB_USER=glance
GLANCE_KEYSTONE_PASSWORD=glance
GLANCE_KEYSTONE_USER=glance
GLANCE_REGISTRY_SERVICE_HOST=$HOST_IP
GLANCE_REGISTRY_SERVICE_PORT=9191
# Gnocchi
GNOCCHI_DB_PASSWORD=gnocchi
GNOCCHI_DB_NAME=gnocchi
GNOCCHI_DB_USER=gnocchi
GNOCCHI_SERVICE_PROTOCOL=http
GNOCCHI_SERVICE_PORT=8041
GNOCCHI_STORAGE_BACKEND=file
GNOCCHI_KEYSTONE_USER=gnocchi
GNOCCHI_KEYSTONE_PASSWORD=gnocchi
GNOCCHI_ADMIN_PASSWORD=gnocchi
GNOCCHI_API_SERVICE_HOST=$HOST_IP
# Heat
HEAT_API_CFN_SERVICE_HOST=$HOST_IP
HEAT_API_CFN_SERVICE_PORT=8000
HEAT_API_CFN_URL_HOST=$HOST_IP
HEAT_API_SERVICE_HOST=$HOST_IP
HEAT_API_SERVICE_PORT=8004
HEAT_CFN_KEYSTONE_PASSWORD=heat
HEAT_CFN_KEYSTONE_USER=heat-cfn
HEAT_DB_NAME=heat
HEAT_DB_PASSWORD=kolla
HEAT_DOMAIN_PASS=$(openssl rand -hex 8)
HEAT_KEYSTONE_PASSWORD=heat
HEAT_KEYSTONE_USER=heat
#Horizon
HORIZON_KEYSTONE_USER=horizon
HORIZON_SERVICE_PORT=80
# Keystone
KEYSTONE_ADMIN_PASSWORD=$PASSWORD
KEYSTONE_ADMIN_SERVICE_HOST=$HOST_IP
KEYSTONE_ADMIN_SERVICE_PORT=35357
KEYSTONE_ADMIN_TOKEN=$PASSWORD
KEYSTONE_API_VERSION=2.0
KEYSTONE_AUTH_PROTOCOL=http
KEYSTONE_DB_NAME=keystone
KEYSTONE_DB_PASSWORD=kolla
KEYSTONE_DB_USER=keystone
KEYSTONE_PUBLIC_SERVICE_HOST=$HOST_IP
KEYSTONE_PUBLIC_SERVICE_PORT=5000
KEYSTONE_USER=keystone
TOKEN_DRIVER=sql
TOKEN_PROVIDER=uuid
USE_STDERR=false
# Keepalived
# Here we define pairs hostname:priority. Priorities have to be unique
KEEPALIVED_HOST_PRIORITIES=host1:100,host2:99
# Magnum
MAGNUM_DB_NAME=magnum
MAGNUM_DB_USER=magnum
MAGNUM_DB_PASSWORD=kolla
MAGNUM_KEYSTONE_USER=magnum
MAGNUM_KEYSTONE_PASSWORD=magnum
MAGNUM_API_SERVICE_HOST=$HOST_IP
MAGNUM_API_SERVICE_PORT=9511
# Murano
MURANO_DB_NAME=murano
MURANO_DB_PASSWORD=murano
MURANO_DB_USER=murano
MURANO_HOST_IP=$HOST_IP
MURANO_KEYSTONE_PASSWORD=password
MURANO_KEYSTONE_USER=murano
MURANO_SERVICE_PORT=8082
MURANO_SERVICE_PROTOCOL=http
# Neutron
NEUTRON_DB_NAME=neutron
NEUTRON_DB_USER=neutron
NEUTRON_DB_PASSWORD=password
NEUTRON_KEYSTONE_USER=neutron
NEUTRON_KEYSTONE_PASSWORD=neutron
NEUTRON_SERVER_SERVICE_HOST=$HOST_IP
NEUTRON_SERVER_SERVICE_PORT=9696
NEUTRON_API_PASTE_CONFIG=/usr/share/neutron/api-paste.ini
# Neutron ML2 Plugin
TYPE_DRIVERS=flat,vxlan
TENANT_NETWORK_TYPES=flat,vxlan
MECHANISM_DRIVERS=linuxbridge,l2population
# Neutron Linux Bridge Agent
NEUTRON_FLAT_NETWORK_NAME=physnet1
NEUTRON_FLAT_NETWORK_INTERFACE=eth1
# Neutron ML2 Plugin
MECHANISM_DRIVERS=linuxbridge,l2population
TENANT_NETWORK_TYPES=flat,vxlan
TYPE_DRIVERS=flat,vxlan
# Neutron Linux Bridge Agent
DELETE_NAMESPACES=true
DHCP_DRIVER=neutron.agent.linux.dhcp.Dnsmasq
DNSMASQ_CONFIG_FILE=/etc/neutron/dnsmasq/dnsmasq-neutron.conf
ENDPOINT_TYPE=adminURL
KEYSTONE_REGION=RegionOne
NEUTRON_FLAT_NETWORK_INTERFACE=eth1
NEUTRON_FLAT_NETWORK_NAME=physnet1
ROOT_HELPER="sudo neutron-rootwrap /etc/neutron/rootwrap.conf"
USE_NAMESPACES=true
# Networking Options are nova or neutron
NETWORK_MANAGER=neutron
# Nova
ENABLED_APIS=ec2,osapi_compute,metadata
METADATA_HOST=$HOST_IP
NOVA_API_SERVICE_HOST=$HOST_IP
NOVA_API_SERVICE_PORT=8774
NOVA_DB_NAME=nova
NOVA_DB_PASSWORD=nova
NOVA_DB_USER=nova
NOVA_EC2_SERVICE_HOST=$HOST_IP
NOVA_EC2_API_SERVICE_PORT=8773
NOVA_FLAT_INTERFACE=eth1
NOVA_KEYSTONE_PASSWORD=nova
NOVA_KEYSTONE_USER=nova
NOVA_LIBVIRT_SERVICE_PORT=16509
NOVA_METADATA_API_SERVICE_HOST=$HOST_IP
NOVA_METADATA_API_SERVICE_PORT=8775
NOVA_NOVNC_BASE_ADDRESS=${PUBLIC_IP}
NOVA_NOVNC_PROXY_PORT=6080
NOVA_NOVNC_PROXY_SERVICE_HOST=0.0.0.0
NOVA_PUBLIC_INTERFACE=$MY_DEV
NOVA_VNCSERVER_LISTEN_ADDRESS=$HOST_IP
NOVA_VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
# Nova/Neutron
NEUTRON_SHARED_SECRET=sharedsecret
# Swift
SWIFT_ACCOUNT_SVC_BIND_IP=$PUBLIC_IP
SWIFT_ACCOUNT_SVC_BIND_PORT=6002
SWIFT_ACCOUNT_SVC_DEVICES=/srv/node
SWIFT_ACCOUNT_SVC_MOUNT_CHECK=false
SWIFT_ADMIN_USER=swift
SWIFT_API_SERVICE_HOST=$HOST_IP
SWIFT_CONTAINER_SVC_BIND_IP=$PUBLIC_IP
SWIFT_CONTAINER_SVC_BIND_PORT=6001
SWIFT_CONTAINER_SVC_DEVICES=/srv/node
SWIFT_CONTAINER_SVC_MOUNT_CHECK=false
SWIFT_DIR=/etc/swift
SWIFT_HASH_PATH_SUFFIX=$(openssl rand -hex 8)
SWIFT_KEYSTONE_PASSWORD=swift
SWIFT_KEYSTONE_USER=swift
SWIFT_OBJECT_SVC_BIND_IP=$PUBLIC_IP
SWIFT_OBJECT_SVC_BIND_PORT=6000
SWIFT_OBJECT_SVC_DEVICES=/srv/node
SWIFT_OBJECT_SVC_MOUNT_CHECK=false
SWIFT_OBJECT_SVC_PIPELINE=object-server
SWIFT_PROXY_ACCOUNT_AUTOCREATE=true
SWIFT_PROXY_AUTH_PLUGIN=password
SWIFT_PROXY_BIND_IP=$PUBLIC_IP
SWIFT_PROXY_BIND_PORT=8080
SWIFT_PROXY_DELAY_AUTH_DECISION=true
SWIFT_PROXY_DIR=/etc/swift
SWIFT_PROXY_OPERATOR_ROLES=admin,user
SWIFT_PROXY_PASSWORD=swift
SWIFT_PROXY_PIPELINE_MAIN="catch_errors gatekeeper healthcheck cache container_sync bulk ratelimit authtoken keystoneauth slo dlo proxy-server"
SWIFT_PROXY_PROJECT_DOMAIN_ID=default
SWIFT_PROXY_PROJECT_NAME=service
SWIFT_PROXY_SIGNING_DIR=/var/cache/swift
SWIFT_PROXY_USER_DOMAIN_ID=default
SWIFT_PROXY_USERNAME=swift
SWIFT_USER=swift
SWIFT_OBJECT_SVC_RING_NAME=/etc/swift/object.builder
SWIFT_OBJECT_SVC_RING_PART_POWER=10
SWIFT_OBJECT_SVC_RING_REPLICAS=3
SWIFT_OBJECT_SVC_RING_MIN_PART_HOURS=1
SWIFT_OBJECT_SVC_RING_HOSTS="${HOST_IP}:6000,${HOST_IP}:6000,${HOST_IP}:6000"
SWIFT_OBJECT_SVC_RING_DEVICES="sdb1,sdb2,sdb3"
SWIFT_OBJECT_SVC_RING_WEIGHTS="1,1,1"
SWIFT_OBJECT_SVC_RING_ZONES="1,2,3"
SWIFT_CONTAINER_SVC_RING_NAME=/etc/swift/container.builder
SWIFT_CONTAINER_SVC_RING_PART_POWER=10
SWIFT_CONTAINER_SVC_RING_REPLICAS=3
SWIFT_CONTAINER_SVC_RING_MIN_PART_HOURS=1
SWIFT_CONTAINER_SVC_RING_HOSTS="${HOST_IP}:6001,${HOST_IP}:6001,${HOST_IP}:6001"
SWIFT_CONTAINER_SVC_RING_DEVICES="sdb1,sdb2,sdb3"
SWIFT_CONTAINER_SVC_RING_WEIGHTS="1,1,1"
SWIFT_CONTAINER_SVC_RING_ZONES="1,2,3"
SWIFT_ACCOUNT_SVC_RING_NAME=/etc/swift/account.builder
SWIFT_ACCOUNT_SVC_RING_PART_POWER=10
SWIFT_ACCOUNT_SVC_RING_REPLICAS=3
SWIFT_ACCOUNT_SVC_RING_MIN_PART_HOURS=1
SWIFT_ACCOUNT_SVC_RING_HOSTS="${HOST_IP}:6002,${HOST_IP}:6002,${HOST_IP}:6002"
SWIFT_ACCOUNT_SVC_RING_DEVICES="sdb1,sdb2,sdb3"
SWIFT_ACCOUNT_SVC_RING_WEIGHTS="1,1,1"
SWIFT_ACCOUNT_SVC_RING_ZONES="1,2,3"
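# The *_RING_HOSTS/_DEVICES/_WEIGHTS/_ZONES lists above are positional:
# entry N of each list presumably describes one ring device, e.g.
# ${HOST_IP}:6002 / sdb1 / weight 1 / zone 1 for the account ring.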
#Zaqar
ZAQAR_KEYSTONE_USER=zaqar
ZAQAR_SERVER_SERVICE_PORT=8888
# this should use the keystone admin port
# https://bugs.launchpad.net/kolla/+bug/1469209
cat > ./openrc <<EOF
export OS_AUTH_URL="http://${KEYSTONE_PUBLIC_SERVICE_HOST}:${KEYSTONE_PUBLIC_SERVICE_PORT}/v2.0"
export OS_USERNAME=$ADMIN_TENANT_NAME
export OS_PASSWORD=$ADMIN_USER_PASSWORD
export OS_TENANT_NAME=$ADMIN_TENANT_NAME
export OS_VOLUME_API_VERSION=$CINDER_API_VERSION
EOF
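# Example use of the generated credentials (illustrative):
#   source ./openrc
#   nova service-list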
cat > ./compose/openstack.env <<EOF
ADMIN_TENANT_NAME=$ADMIN_TENANT_NAME
ADMIN_USER=$ADMIN_USER
ADMIN_USER_PASSWORD=$ADMIN_USER_PASSWORD
BARBICAN_ADMIN_SERVICE_PORT=$BARBICAN_ADMIN_SERVICE_PORT
BARBICAN_DB_NAME=$BARBICAN_DB_NAME
BARBICAN_DB_USER=$BARBICAN_DB_USER
BARBICAN_KEYSTONE_USER=$BARBICAN_KEYSTONE_USER
BARBICAN_PUBLIC_SERVICE_PORT=$BARBICAN_PUBLIC_SERVICE_PORT
BIND_ADDRESS=$BIND_ADDRESS
CEILOMETER_ADMIN_PASSWORD=$CEILOMETER_ADMIN_PASSWORD
CEILOMETER_API_SERVICE_HOST=$CEILOMETER_API_SERVICE_HOST
CEILOMETER_API_SERVICE_PORT=$CEILOMETER_API_SERVICE_PORT
CEILOMETER_DB_NAME=$CEILOMETER_DB_NAME
CEILOMETER_DB_PASSWORD=$CEILOMETER_DB_PASSWORD
CEILOMETER_DB_USER=$CEILOMETER_DB_USER
CEILOMETER_KEYSTONE_USER=$CEILOMETER_KEYSTONE_USER
CHAR_SET_SERVER=$CHAR_SET_SERVER
CINDER_ADMIN_PASSWORD=$CINDER_ADMIN_PASSWORD
CINDER_API_LOG_FILE=$CINDER_API_LOG_FILE
CINDER_API_SERVICE_HOST=$HOST_IP
CINDER_API_SERVICE_LISTEN=$MY_IP
CINDER_API_SERVICE_PORT=$CINDER_API_SERVICE_PORT
CINDER_API_VERSION=$CINDER_API_VERSION
CINDER_BACKUP_API_CLASS=$CINDER_BACKUP_API_CLASS
CINDER_BACKUP_DRIVER=$CINDER_BACKUP_DRIVER
CINDER_BACKUP_LOG_FILE=$CINDER_BACKUP_LOG_FILE
CINDER_BACKUP_MANAGER=$CINDER_BACKUP_MANAGER
CINDER_BACKUP_NAME_TEMPLATE=$CINDER_BACKUP_NAME_TEMPLATE
CINDER_DB_NAME=$CINDER_DB_NAME
CINDER_DB_PASSWORD=$CINDER_DB_PASSWORD
CINDER_DB_USER=$CINDER_DB_USER
CINDER_ENABLED_BACKEND=$CINDER_ENABLED_BACKEND
CINDER_KEYSTONE_PASSWORD=$CINDER_KEYSTONE_PASSWORD
CINDER_KEYSTONE_USER=$CINDER_KEYSTONE_USER
CINDER_LOG_DIR=$CINDER_LOG_DIR
CINDER_LVM_LO_VOLUME_SIZE=$CINDER_LVM_LO_VOLUME_SIZE
CINDER_SCHEDULER_LOG_FILE=$CINDER_SCHEDULER_LOG_FILE
CINDER_VOLUME_API_LISTEN=$CINDER_VOLUME_API_LISTEN
CINDER_VOLUME_BACKEND_NAME=$CINDER_VOLUME_BACKEND_NAME
CINDER_VOLUME_DRIVER=$CINDER_VOLUME_DRIVER
CINDER_VOLUME_GROUP=$CINDER_VOLUME_GROUP
CINDER_VOLUME_LOG_FILE=$CINDER_VOLUME_LOG_FILE
COLLATION_SERVER=$COLLATION_SERVER
DATADIR=$DATADIR
DB_CLUSTER_BIND_ADDRESS=$DB_CLUSTER_BIND_ADDRESS
DB_CLUSTER_INIT_DB=$DB_CLUSTER_INIT_DB
DB_CLUSTER_NAME=$DB_CLUSTER_NAME
DB_CLUSTER_NODES=$DB_CLUSTER_NODES
DB_CLUSTER_WSREP_METHOD=$DB_CLUSTER_WSREP_METHOD
DB_ROOT_PASSWORD=$MARIADB_ROOT_PASSWORD
DEBUG_LOGGING=$DEBUG_LOGGING
DEFAULT_STORAGE_ENGINE=$DEFAULT_STORAGE_ENGINE
DELETE_NAMESPACES=$DELETE_NAMESPACES
DESIGNATE_ALLOW_RECURSION=$DESIGNATE_ALLOW_RECURSION
DESIGNATE_API_SERVICE_HOST=$DESIGNATE_API_SERVICE_HOST
DESIGNATE_API_SERVICE_PORT=$DESIGNATE_API_SERVICE_PORT
DESIGNATE_BACKEND=$DESIGNATE_BACKEND
DESIGNATE_BIND9_RNDC_KEY=$DESIGNATE_BIND9_RNDC_KEY
DESIGNATE_DB_NAME=$DESIGNATE_DB_NAME
DESIGNATE_DB_PASSWORD=$DESIGNATE_DB_PASSWORD
DESIGNATE_DB_USER=$DESIGNATE_DB_USER
DESIGNATE_DEFAULT_POOL_NS_RECORD=$DESIGNATE_DEFAULT_POOL_NS_RECORD
DESIGNATE_DNS_PORT=$DESIGNATE_DNS_PORT
DESIGNATE_INITDB=$DESIGNATE_INITDB
DESIGNATE_KEYSTONE_PASSWORD=$DESIGNATE_KEYSTONE_PASSWORD
DESIGNATE_KEYSTONE_USER=$DESIGNATE_KEYSTONE_USER
DESIGNATE_MASTERNS=$DESIGNATE_MASTERNS
DESIGNATE_MDNS_PORT=$DESIGNATE_MDNS_PORT
DESIGNATE_POOLMAN_NSS=$DESIGNATE_POOLMAN_NSS
DESIGNATE_POOLMAN_POOLID=$DESIGNATE_POOLMAN_POOLID
DESIGNATE_POOLMAN_TARGETS=$DESIGNATE_POOLMAN_TARGETS
DESIGNATE_SINK_NEUTRON_DOMAIN_NAME=$DESIGNATE_SINK_NEUTRON_DOMAIN_NAME
DESIGNATE_SINK_NOVA_DOMAIN_NAME=$DESIGNATE_SINK_NOVA_DOMAIN_NAME
DESIGNATE_SLAVENS=$DESIGNATE_SLAVENS
DHCP_DRIVER=$DHCP_DRIVER
DNSMASQ_CONFIG_FILE=$DNSMASQ_CONFIG_FILE
ENABLED_APIS=$ENABLED_APIS
ENDPOINT_TYPE=$ENDPOINT_TYPE
FLAT_INTERFACE=$NOVA_FLAT_INTERFACE
GLANCE_API_SERVICE_HOST=$GLANCE_API_SERVICE_HOST
GLANCE_API_SERVICE_PORT=$GLANCE_API_SERVICE_PORT
GLANCE_DB_NAME=$GLANCE_DB_NAME
GLANCE_DB_PASSWORD=$GLANCE_DB_PASSWORD
GLANCE_DB_USER=$GLANCE_DB_USER
GLANCE_KEYSTONE_PASSWORD=$GLANCE_KEYSTONE_PASSWORD
GLANCE_KEYSTONE_USER=$GLANCE_KEYSTONE_USER
GLANCE_REGISTRY_SERVICE_HOST=$GLANCE_REGISTRY_SERVICE_HOST
GLANCE_REGISTRY_SERVICE_PORT=$GLANCE_REGISTRY_SERVICE_PORT
HEAT_API_CFN_SERVICE_HOST=$HEAT_API_CFN_SERVICE_HOST
HEAT_API_CFN_SERVICE_PORT=$HEAT_API_CFN_SERVICE_PORT
HEAT_API_CFN_URL_HOST=$HEAT_API_CFN_URL_HOST
HEAT_API_SERVICE_HOST=$HEAT_API_SERVICE_HOST
HEAT_API_SERVICE_PORT=$HEAT_API_SERVICE_PORT
HEAT_CFN_KEYSTONE_PASSWORD=$HEAT_CFN_KEYSTONE_PASSWORD
HEAT_CFN_KEYSTONE_USER=$HEAT_CFN_KEYSTONE_USER
HEAT_DB_NAME=$HEAT_DB_NAME
HEAT_DB_PASSWORD=$HEAT_DB_PASSWORD
HEAT_DOMAIN_PASS=$HEAT_DOMAIN_PASS
HEAT_KEYSTONE_PASSWORD=$HEAT_KEYSTONE_PASSWORD
HEAT_KEYSTONE_USER=$HEAT_KEYSTONE_USER
HORIZON_KEYSTONE_USER=$HORIZON_KEYSTONE_USER
HORIZON_SERVICE_PORT=$HORIZON_SERVICE_PORT
INIT_CINDER_DB=$INIT_CINDER_DB
INIT_CONNECT=$INIT_CONNECT
INIT_DESIGNATE_DB=$INIT_DESIGNATE_DB
INIT_GLANCE_DB=$INIT_GLANCE_DB
INIT_MURANO_DB=$INIT_MURANO_DB
INIT_HEAT_DB=$INIT_HEAT_DB
INIT_KEYSTONE_DB=$INIT_KEYSTONE_DB
INIT_NOVA_DB=$INIT_NOVA_DB
INNODB_FILE_PER_TABLE=$INNODB_FILE_PER_TABLE
ISCSI_HELPER=$ISCSI_HELPER
ISCSI_IP_ADDRESS=$ISCSI_IP_ADDRESS
KEEPALIVED_HOST_PRIORITIES=$KEEPALIVED_HOST_PRIORITIES
KEYSTONE_ADMIN_PASSWORD=$KEYSTONE_ADMIN_PASSWORD
KEYSTONE_ADMIN_SERVICE_HOST=$KEYSTONE_ADMIN_SERVICE_HOST
KEYSTONE_ADMIN_SERVICE_PORT=$KEYSTONE_ADMIN_SERVICE_PORT
KEYSTONE_ADMIN_TOKEN=$KEYSTONE_ADMIN_TOKEN
KEYSTONE_API_VERSION=$KEYSTONE_API_VERSION
KEYSTONE_AUTH_PROTOCOL=$KEYSTONE_AUTH_PROTOCOL
KEYSTONE_AUTH_PROTOCOL=$KEYSTONE_AUTH_PROTOCOL
KEYSTONE_DB_NAME=$KEYSTONE_DB_NAME
KEYSTONE_DB_PASSWORD=$KEYSTONE_DB_PASSWORD
KEYSTONE_DB_USER=$KEYSTONE_DB_USER
KEYSTONE_PUBLIC_SERVICE_HOST=$KEYSTONE_PUBLIC_SERVICE_HOST
KEYSTONE_PUBLIC_SERVICE_PORT=$KEYSTONE_PUBLIC_SERVICE_PORT
KEYSTONE_REGION=$KEYSTONE_REGION
KEYSTONE_USER=$KEYSTONE_USER
KOLLA_CONFIG_STRATEGY=CONFIG_INTERNAL
MAGNUM_API_SERVICE_HOST=$MAGNUM_API_SERVICE_HOST
MAGNUM_API_SERVICE_PORT=$MAGNUM_API_SERVICE_PORT
MAGNUM_DB_NAME=$MAGNUM_DB_NAME
MAGNUM_DB_PASSWORD=$MAGNUM_DB_PASSWORD
MAGNUM_DB_USER=$MAGNUM_DB_USER
MAGNUM_KEYSTONE_PASSWORD=$MAGNUM_KEYSTONE_PASSWORD
MAGNUM_KEYSTONE_USER=$MAGNUM_KEYSTONE_USER
MARIADB_MAX_CONNECTIONS=$MARIADB_MAX_CONNECTIONS
MARIADB_ROOT_PASSWORD=$MARIADB_ROOT_PASSWORD
MARIADB_SERVICE_HOST=$HOST_IP
MARIADB_SERVICE_PORT=$MARIADB_SERVICE_PORT
MECHANISM_DRIVERS=$MECHANISM_DRIVERS
METADATA_HOST=$METADATA_HOST
MONGODB_SERVICE_PORT=$MONGODB_SERVICE_PORT
MURANO_DB_NAME=$MURANO_DB_NAME
MURANO_DB_PASSWORD=$MURANO_DB_PASSWORD
MURANO_DB_USER=$MURANO_DB_USER
MURANO_HOST_IP=$MURANO_HOST_IP
MURANO_KEYSTONE_PASSWORD=$MURANO_KEYSTONE_PASSWORD
MURANO_KEYSTONE_USER=$MURANO_KEYSTONE_USER
MURANO_SERVICE_PORT=$MURANO_SERVICE_PORT
MURANO_SERVICE_PROTOCOL=$MURANO_SERVICE_PROTOCOL
NETWORK_MANAGER=$NETWORK_MANAGER
NEUTRON_API_PASTE_CONFIG=$NEUTRON_API_PASTE_CONFIG
NEUTRON_DB_NAME=$NEUTRON_DB_NAME
NEUTRON_DB_PASSWORD=$NEUTRON_DB_PASSWORD
NEUTRON_DB_USER=$NEUTRON_DB_USER
NEUTRON_FLAT_NETWORK_INTERFACE=$NEUTRON_FLAT_NETWORK_INTERFACE
NEUTRON_FLAT_NETWORK_NAME=$NEUTRON_FLAT_NETWORK_NAME
NEUTRON_KEYSTONE_PASSWORD=$NEUTRON_KEYSTONE_PASSWORD
NEUTRON_KEYSTONE_USER=$NEUTRON_KEYSTONE_USER
NEUTRON_L3_AGENT_LOG_FILE=$NEUTRON_L3_AGENT_LOG_FILE
NEUTRON_LINUXBRIDGE_AGENT_LOG_FILE=$NEUTRON_LINUXBRIDGE_AGENT_LOG_FILE
NEUTRON_LOG_DIR=$NEUTRON_LOG_DIR
NEUTRON_METADATA_AGENT_LOG_FILE=$NEUTRON_METADATA_AGENT_LOG_FILE
NEUTRON_SERVER_LOG_FILE=$NEUTRON_SERVER_LOG_FILE
NEUTRON_SERVER_SERVICE_HOST=$NEUTRON_SERVER_SERVICE_HOST
NEUTRON_SERVER_SERVICE_PORT=$NEUTRON_SERVER_SERVICE_PORT
NEUTRON_SHARED_SECRET=$NEUTRON_SHARED_SECRET
NOVA_API_LOG_FILE=$NOVA_API_LOG_FILE
NOVA_API_SERVICE_HOST=$NOVA_API_SERVICE_HOST
NOVA_API_SERVICE_PORT=$NOVA_API_SERVICE_PORT
NOVA_COMPUTE_LOG_FILE=$NOVA_COMPUTE_LOG_FILE
NOVA_CONDUCTOR_LOG_FILE=$NOVA_CONDUCTOR_LOG_FILE
NOVA_CONSOLEAUTH_LOG_FILE=$NOVA_CONSOLEAUTH_LOG_FILE
NOVA_DB_NAME=$NOVA_DB_NAME
NOVA_DB_PASSWORD=$NOVA_DB_PASSWORD
NOVA_DB_USER=$NOVA_DB_USER
NOVA_EC2_API_SERVICE_HOST=$NOVA_EC2_SERVICE_HOST
NOVA_EC2_SERVICE_HOST=$NOVA_EC2_SERVICE_HOST
NOVA_EC2_API_SERVICE_PORT=$NOVA_EC2_API_SERVICE_PORT
NOVA_KEYSTONE_PASSWORD=$NOVA_KEYSTONE_PASSWORD
NOVA_KEYSTONE_USER=$NOVA_KEYSTONE_USER
NOVA_LOG_DIR=$NOVA_LOG_DIR
NOVA_LIBVIRT_SERVICE_PORT=$NOVA_LIBVIRT_SERVICE_PORT
NOVA_METADATA_API_SERVICE_HOST=$NOVA_METADATA_API_SERVICE_HOST
NOVA_METADATA_API_SERVICE_PORT=$NOVA_METADATA_API_SERVICE_PORT
NOVA_NOVNCPROXY_LOG_FILE=$NOVA_NOVNCPROXY_LOG_FILE
NOVA_NOVNC_BASE_ADDRESS=${NOVA_NOVNC_BASE_ADDRESS}
NOVA_NOVNC_PROXY_PORT=$NOVA_NOVNC_PROXY_PORT
NOVA_NOVNC_PROXY_SERVICE_HOST=$NOVA_NOVNC_PROXY_SERVICE_HOST
NOVA_SCHEDULER_LOG_FILE=$NOVA_SCHEDULER_LOG_FILE
NOVA_VNCSERVER_LISTEN_ADDRESS=$NOVA_VNCSERVER_LISTEN_ADDRESS
NOVA_VNCSERVER_PROXYCLIENT_ADDRESS=$NOVA_VNCSERVER_PROXYCLIENT_ADDRESS
OVS_DB_FILE=$OVS_DB_FILE
OVS_LOG_FILE=$OVS_LOG_FILE
OVS_UNIXSOCK=$OVS_UNIXSOCK
PUBLIC_INTERFACE=$NOVA_PUBLIC_INTERFACE
PUBLIC_IP=$HOST_IP
RABBITMQ_CLUSTER_COOKIE=$RABBITMQ_CLUSTER_COOKIE
RABBITMQ_CLUSTER_NODES=$RABBITMQ_CLUSTER_NODES
RABBITMQ_PASS=$RABBIT_PASSWORD
RABBITMQ_SERVICE_HOST=$RABBITMQ_SERVICE_HOST
RABBITMQ_SERVICE_PORT=$RABBITMQ_SERVICE_PORT
RABBITMQ_USER=$RABBITMQ_USER
RABBIT_PASSWORD=$RABBIT_PASSWORD
RABBIT_USERID=$RABBIT_USER
ROOT_HELPER=$ROOT_HELPER
SWIFT_ACCOUNT_SVC_BIND_IP=$SWIFT_ACCOUNT_SVC_BIND_IP
SWIFT_ACCOUNT_SVC_BIND_PORT=$SWIFT_ACCOUNT_SVC_BIND_PORT
SWIFT_ACCOUNT_SVC_DEVICES=$SWIFT_ACCOUNT_SVC_DEVICES
SWIFT_ACCOUNT_SVC_MOUNT_CHECK=$SWIFT_ACCOUNT_SVC_MOUNT_CHECK
SWIFT_ADMIN_USER=$SWIFT_ADMIN_USER
SWIFT_API_SERVICE_HOST=$SWIFT_API_SERVICE_HOST
SWIFT_CONTAINER_SVC_BIND_IP=$PUBLIC_IP
SWIFT_CONTAINER_SVC_BIND_PORT=$SWIFT_CONTAINER_SVC_BIND_PORT
SWIFT_CONTAINER_SVC_DEVICES=$SWIFT_CONTAINER_SVC_DEVICES
SWIFT_CONTAINER_SVC_MOUNT_CHECK=$SWIFT_CONTAINER_SVC_MOUNT_CHECK
SWIFT_DIR=$SWIFT_DIR
SWIFT_HASH_PATH_SUFFIX=$SWIFT_HASH_PATH_SUFFIX
SWIFT_KEYSTONE_PASSWORD=$SWIFT_KEYSTONE_PASSWORD
SWIFT_KEYSTONE_USER=$SWIFT_KEYSTONE_USER
SWIFT_OBJECT_SVC_BIND_IP=$SWIFT_OBJECT_SVC_BIND_IP
SWIFT_OBJECT_SVC_BIND_PORT=$SWIFT_OBJECT_SVC_BIND_PORT
SWIFT_OBJECT_SVC_DEVICES=$SWIFT_OBJECT_SVC_DEVICES
SWIFT_OBJECT_SVC_MOUNT_CHECK=$SWIFT_OBJECT_SVC_MOUNT_CHECK
SWIFT_OBJECT_SVC_PIPELINE=$SWIFT_OBJECT_SVC_PIPELINE
SWIFT_PROXY_ACCOUNT_AUTOCREATE=$SWIFT_PROXY_ACCOUNT_AUTOCREATE
SWIFT_PROXY_AUTH_PLUGIN=$SWIFT_PROXY_AUTH_PLUGIN
SWIFT_PROXY_BIND_IP=$SWIFT_PROXY_BIND_IP
SWIFT_PROXY_BIND_PORT=$SWIFT_PROXY_BIND_PORT
SWIFT_PROXY_DELAY_AUTH_DECISION=$SWIFT_PROXY_DELAY_AUTH_DECISION
SWIFT_PROXY_DIR=$SWIFT_PROXY_DIR
SWIFT_PROXY_OPERATOR_ROLES=$SWIFT_PROXY_OPERATOR_ROLES
SWIFT_PROXY_PASSWORD=$SWIFT_PROXY_PASSWORD
SWIFT_PROXY_PIPELINE_MAIN=$SWIFT_PROXY_PIPELINE_MAIN
SWIFT_PROXY_PROJECT_DOMAIN_ID=$SWIFT_PROXY_PROJECT_DOMAIN_ID
SWIFT_PROXY_PROJECT_NAME=$SWIFT_PROXY_PROJECT_NAME
SWIFT_PROXY_SIGNING_DIR=$SWIFT_PROXY_SIGNING_DIR
SWIFT_PROXY_USER_DOMAIN_ID=$SWIFT_PROXY_USER_DOMAIN_ID
SWIFT_PROXY_USERNAME=$SWIFT_PROXY_USERNAME
SWIFT_USER=$SWIFT_USER
SWIFT_OBJECT_SVC_RING_NAME=${SWIFT_OBJECT_SVC_RING_NAME}
SWIFT_OBJECT_SVC_RING_PART_POWER=${SWIFT_OBJECT_SVC_RING_PART_POWER}
SWIFT_OBJECT_SVC_RING_REPLICAS=${SWIFT_OBJECT_SVC_RING_REPLICAS}
SWIFT_OBJECT_SVC_RING_MIN_PART_HOURS=${SWIFT_OBJECT_SVC_RING_MIN_PART_HOURS}
SWIFT_OBJECT_SVC_RING_HOSTS=${SWIFT_OBJECT_SVC_RING_HOSTS}
SWIFT_OBJECT_SVC_RING_DEVICES=${SWIFT_OBJECT_SVC_RING_DEVICES}
SWIFT_OBJECT_SVC_RING_WEIGHTS=${SWIFT_OBJECT_SVC_RING_WEIGHTS}
SWIFT_OBJECT_SVC_RING_ZONES=${SWIFT_OBJECT_SVC_RING_ZONES}
SWIFT_CONTAINER_SVC_RING_NAME=${SWIFT_CONTAINER_SVC_RING_NAME}
SWIFT_CONTAINER_SVC_RING_PART_POWER=${SWIFT_CONTAINER_SVC_RING_PART_POWER}
SWIFT_CONTAINER_SVC_RING_REPLICAS=${SWIFT_CONTAINER_SVC_RING_REPLICAS}
SWIFT_CONTAINER_SVC_RING_MIN_PART_HOURS=${SWIFT_CONTAINER_SVC_RING_MIN_PART_HOURS}
SWIFT_CONTAINER_SVC_RING_HOSTS=${SWIFT_CONTAINER_SVC_RING_HOSTS}
SWIFT_CONTAINER_SVC_RING_DEVICES=${SWIFT_CONTAINER_SVC_RING_DEVICES}
SWIFT_CONTAINER_SVC_RING_WEIGHTS=${SWIFT_CONTAINER_SVC_RING_WEIGHTS}
SWIFT_CONTAINER_SVC_RING_ZONES=${SWIFT_CONTAINER_SVC_RING_ZONES}
SWIFT_ACCOUNT_SVC_RING_NAME=${SWIFT_ACCOUNT_SVC_RING_NAME}
SWIFT_ACCOUNT_SVC_RING_PART_POWER=${SWIFT_ACCOUNT_SVC_RING_PART_POWER}
SWIFT_ACCOUNT_SVC_RING_REPLICAS=${SWIFT_ACCOUNT_SVC_RING_REPLICAS}
SWIFT_ACCOUNT_SVC_RING_MIN_PART_HOURS=${SWIFT_ACCOUNT_SVC_RING_MIN_PART_HOURS}
SWIFT_ACCOUNT_SVC_RING_HOSTS=${SWIFT_ACCOUNT_SVC_RING_HOSTS}
SWIFT_ACCOUNT_SVC_RING_DEVICES=${SWIFT_ACCOUNT_SVC_RING_DEVICES}
SWIFT_ACCOUNT_SVC_RING_WEIGHTS=${SWIFT_ACCOUNT_SVC_RING_WEIGHTS}
SWIFT_ACCOUNT_SVC_RING_ZONES=${SWIFT_ACCOUNT_SVC_RING_ZONES}
TEMP_FILE=$TEMP_FILE
TENANT_NETWORK_TYPES=$TENANT_NETWORK_TYPES
TOKEN_DRIVER=$TOKEN_DRIVER
TOKEN_PROVIDER=$TOKEN_PROVIDER
TYPE_DRIVERS=$TYPE_DRIVERS
USE_NAMESPACES=$USE_NAMESPACES
USE_STDERR=$USE_STDERR
VERBOSE_LOGGING=$VERBOSE_LOGGING
ZAQAR_KEYSTONE_USER=$ZAQAR_KEYSTONE_USER
ZAQAR_SERVER_SERVICE_PORT=$ZAQAR_SERVER_SERVICE_PORT
EOF
echo Please customize your FLAT_INTERFACE to a different network than your
echo main network. The FLAT_INTERFACE is used for inter-VM communication.
echo The FLAT_INTERFACE should not have an IP address assigned.

View File

@@ -16,11 +16,7 @@ export LC_ALL
REAL_PATH=$(python -c "import os,sys;print os.path.realpath('$0')")
cd "$(dirname "$REAL_PATH")/.."
NETWORK_MANAGER=$(grep -sri NETWORK_MANAGER ./compose/openstack.env | cut -f2 -d'=')
if [[ -z "$NETWORK_MANAGER" ]]; then
echo 'No network manager defined in ./compose/openstack.env, defaulting to "neutron".'
NETWORK_MANAGER="neutron"
fi
NETWORK_MANAGER="neutron"
# Test for credentials set
if [[ "${OS_USERNAME}" == "" ]]; then

View File

@@ -1,157 +0,0 @@
#!/bin/bash
#
# This script can be used to interact with kolla.
# Move to top level directory
REAL_PATH=$(python -c "import os,sys;print os.path.realpath('$0')")
cd "$(dirname "$REAL_PATH")/.."
. tools/validate-docker-execute
NETWORK_MANAGER=$(grep -sri NETWORK_MANAGER ./compose/openstack.env | cut -f2 -d'=')
if [[ -z "$NETWORK_MANAGER" ]]; then
echo 'No network manager defined in ./compose/openstack.env, defaulting to "neutron".'
NETWORK_MANAGER="neutron"
fi
function process {
    local service=$1
    echo "$ACTION $service"
    docker-compose -f ./compose/${service}.yml $COMPOSE_CMD
    if [[ $? -ne 0 ]]; then
        echo "Call docker-compose -f ./compose/${service}.yml $COMPOSE_CMD fail."
        exit 1
    fi
}
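# Example: with ACTION="Starting" and COMPOSE_CMD="up -d",
# `process glance-api-registry` runs:
#   docker-compose -f ./compose/glance-api-registry.yml up -d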
function process_all {
    process rabbitmq
    process mariadb
    process keystone
    process glance-api-registry
    process nova-api-conductor-scheduler-consoleauth-novncproxy
    if [[ "${NETWORK_MANAGER}" == "nova" ]] ; then
        process nova-compute-network
    else
        # Defaulting to neutron
        process nova-compute
        process neutron-server
        process neutron-linuxbridge-agent
        process neutron-agents
    fi
    process heat-api-engine
    process magnum-api-conductor
    process horizon
    process cinder-api-scheduler
    process cinder-backup
    process cinder-volume
    process ceilometer
    # TODO(coolsvap) add again with resolution for #LP1478145
    #process gnocchi
}
function check_selinux {
    # Check for SELinux in Enforcing mode and exit if found
    if [[ -x /usr/sbin/getenforce ]]; then
        if [[ $(/usr/sbin/getenforce) == "Enforcing" ]]; then
            echo "You must execute this script without SELinux enforcing mode."
            echo "Turn off SELinux enforcing mode by running:"
            echo "$ sudo setenforce permissive"
            exit 1
        fi
    fi
}
function pre_start {
    check_selinux
    if [[ -r ./openrc ]]; then
        # Source openrc for commands
        source ./openrc
    else
        echo 'Could not find ./openrc; bootstrap your environment with "./tools/genenv".'
        exit 1
    fi
}
function post_start {
    echo -n "Waiting for OpenStack services to become available"
    until [ $(nova service-list 2>&1 | grep -c enabled) -ge 4 ]; do
        echo -n .
        sleep 2
    done
    until [ $(neutron agent-list 2>&1 | grep -c ':-)') -ge 4 ]; do
        echo -n .
        sleep 2
    done
    echo " done"
    echo Example Usage:
    echo source openrc # source keystone credentials
    echo Configure your environment once by running:
    echo tools/init-runonce
}
function usage {
    cat <<EOF
Usage: $0 COMMAND
Commands:
    pull      Pull all of the Docker images
    start     Start all kolla containers
    status    List running kolla containers
    stop      Stop all kolla containers
    restart   Restart all kolla containers
    destroy   Kill and remove all kolla containers and volumes
EOF
}
case "$1" in
(pull)
ACTION="Pulling"
COMPOSE_CMD="pull"
process_all
;;
(start)
ACTION="Starting"
COMPOSE_CMD="up -d"
pre_start
process_all
post_start
;;
(restart)
ACTION="Restarting"
COMPOSE_CMD="restart"
process_all
;;
(status)
ACTION="Status of"
COMPOSE_CMD="ps"
process_all
;;
(stop)
ACTION="Stopping"
COMPOSE_CMD="stop"
process_all
;;
(destroy)
ACTION="Destroying"
COMPOSE_CMD="kill"
process_all
COMPOSE_CMD="rm -f -v"
process_all
;;
(*) usage
exit 0
;;
esac