Merge "Properly format markdown code blocks"

This commit is contained in:
Jenkins 2014-11-13 21:29:58 +00:00 committed by Gerrit Code Review
commit 7391209820
20 changed files with 331 additions and 322 deletions

View File

@@ -3,8 +3,8 @@ Install and configure Ceilometer.
Configuration
-------------

    ceilometer:
      metering_secret: "unset"
        - secret value for signing metering messages
      service-password: "unset"
        - password for the metering service in Keystone
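
For example, the element might be configured via heat metadata as follows (a minimal sketch; both values are placeholders and should be replaced with real secrets):

    ceilometer:
      metering_secret: "some-random-string"
      service-password: "some-service-password"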

View File

@@ -6,10 +6,10 @@ in images that use cinder.
Configuration
-------------

    cinder:
      verbose: False
        - Print more verbose output (set logging level to INFO instead of default WARNING level).
      debug: False
        - Print debugging output (set logging level to DEBUG instead of default WARNING level).
      iscsi-helper: tgtadm
        - Specifies the iSCSI helper to use. Must match the target element included in the image.

View File

@@ -2,37 +2,38 @@ Install and configure Glance.
Configuration
-------------

    glance:
      db: mysql://glance:unset@localhost/glance
        - SQLAlchemy database connection string
      service-password: password
        - The service password for the glance user
      api:
        verbose: False
          - Show more verbose log output (sets INFO log level output)
        debug: False
          - Show debugging output in logs (sets DEBUG log level output)
        backend: swift
          - The backend store to use
        swift-store-user: service:glance
        swift-store-key: userpassword
          - The credentials to use against swift if using the swift backend.
        workers: 1
          - The number of Glance API server processes to start.
        notifier-strategy: noop
          - Strategy to use for notification queue.
        log-file: ''
          - The path of the file to use for logging messages from Glance's API server.
            The default is unset, which implies stdout.
        default-log-levels:
          - Logging: fine tune default log levels
      registry:
        verbose: False
          - Show more verbose log output (sets INFO log level output)
        debug: False
          - Show debugging output in logs (sets DEBUG log level output)
        log-file: ''
          - The path of the file to use for logging messages from Glance's Registry server.
            The default is unset, which implies stdout.
        default-log-levels:
          - Logging: fine tune default log levels

View File

@@ -72,60 +72,60 @@ EX: in overcloud-source.yaml for controllerConfig under properties:
Example Configurations
----------------------

    haproxy:
      nodes:
        - name: notcompute
          ip: 192.0.2.5
        - name: notcomputeSlave0
          ip: 192.0.2.6
      services:
        - name: dashboard_cluster
          net_binds:
            - ip: 192.0.2.3
              port: 443
            - ip: 192.0.2.3
              port: 444
          balance: roundrobin
        - name: glance_api_cluster
          proxy_ip: 192.0.2.3
          proxy_port: 9293
          port: 9292
          balance: source
        - name: mysql
          port: 3306
          extra_server_params:
            - backup
You can override the set of nodes for a service by setting its own
haproxy.nodes inside the service definition:

    services:
      - name: dashboard_cluster
        net_binds:
          - ip: 192.0.2.3
            port: 444
          - port: 443
        balance: source
        haproxy:
          nodes:
            - name: foo0
              ip: 10.0.0.1
You can provide net_binds only once, for example:

    haproxy:
      nodes:
        - name: foo0
          ip: 10.0.0.1
      net_binds:
        - ip: 192.0.2.3
      services:
        - name: keystone
          port: 5000
        - name: dashboard_cluster
          port: 80
          net_binds:
            - ip: 192.0.2.10
If no haproxy.services.net_binds.port is defined, haproxy.services.port
will be used, as the example below shows.
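
In this minimal sketch (IPs and ports are placeholders), dashboard_cluster binds on port 443 because its net_binds entry sets one, while keystone falls back to its haproxy.services.port of 5000:

    haproxy:
      net_binds:
        - ip: 192.0.2.3
      services:
        - name: dashboard_cluster
          port: 80
          net_binds:
            - ip: 192.0.2.3
              port: 443
        - name: keystone
          port: 5000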

View File

@@ -3,14 +3,14 @@ Install and configure Ironic.
Required options can be provided via heat. For example:

    ironic:
      db: mysql://ironic:unset@192.0.2.2/ironic
      service-password: unset
    keystone:
      host: 192.0.2.2
    glance:
      host: 192.0.2.2
    rabbit:
      host: 192.0.2.2
      password: guest

View File

@@ -11,5 +11,5 @@ Currently only supported on ifcfg network configuration style systems.
Configuration
=============

    network-config:
      gateway-dev: eth1

View File

@@ -3,11 +3,11 @@ Install and configure Neutron.
Configuration
-------------

    neutron:
      verbose: False
        - Print more verbose output (set logging level to INFO
          instead of default WARNING level).
      debug: False
        - Print debugging output (set logging level to DEBUG
          instead of default WARNING level).
      flat-networks: "tripleo-bm-test"

View File

@@ -3,40 +3,40 @@ Install and configure Nova.
Configuration
-------------

    nova:
      verbose: False
        - Print more verbose output (set logging level to INFO instead of default WARNING level).
      debug: False
        - Print debugging output (set logging level to DEBUG instead of default WARNING level).
      baremetal:
        pxe_deploy_timeout: "1200"
          - the duration in seconds for pxe deployment timeouts.
        virtual_power:
          type: "virsh"
            - what virtual power driver to use. "virsh" or "vbox"
      compute_libvirt_type: "qemu"
        - what libvirt compute type. Unset will use the nova default.
      image_cache_manager_interval:
        - Number of seconds to wait between runs of the image cache manager.
      resize_fs_using_block_device: BoolOpt
        - Attempt to resize the filesystem by accessing the image over a block device.
      resume_guests_state_on_host_boot: BoolOpt
        - Whether to start guests that were running before the host rebooted.
      running_deleted_instance_action:
        - Action to take if a running deleted instance is detected.
          Valid options are: 'noop', 'log', 'shutdown', or 'reap'.
          Set to 'noop' to take no action.
      virt_mkfs:
        - Name of the mkfs commands for ephemeral device.
          The format is <os_type>=<mkfs command>
          e.g. 'linux-ext4=mkfs -t ext4 -F -L %(fs_label)s %(target)s'
      compute_manager: "ironic.nova.compute.manager.ClusterComputeManager"
        - set to override the compute manager class used by Nova-Compute.
      scheduler_host_manager: "nova.scheduler.ironic_host_manager.IronicHostManager"
        - set to override the scheduler host manager used by Nova. If no
          scheduler_host_manager is configured it is automatically set to
          the deprecated Nova baremetal and/or the old in-tree Ironic
          compute driver for Nova.
      public_ip:
        - public IP address (if any) assigned to this node. Used for VNC proxy
          connections so this is typically only required on controller nodes.

View File

@@ -6,17 +6,18 @@ Sets default install type to local-only so we dont spam anyone. This can be
overwritten with the DIB_POSTFIX_INSTALL_TYPE environment variable.
Valid options for DIB_POSTFIX_INSTALL_TYPE are:

* Local only
* Internet Site
* Internet with smarthost
* Satellite system

Set postfix hostname and domain via heat:

    postfix:
      mailhostname: mail
      maildomain: example.com
      delay_warning_time: 4h
      relayhost: smtp.example.com

**NOTE**: mailhostname and maildomain must match the system hostname in order to
ensure local mail delivery will work.

View File

@@ -5,7 +5,7 @@ To use Qpid, when building an image, add the qpid element and
remove the rabbitmq-server element. At the moment, rabbitmq-server
is listed as default in boot-stack/element-deps.

    sed -i "s/rabbitmq-server/qpidd/" $TRIPLEO_ROOT/tripleo-image-elements/elements/boot-stack/element-deps

The configuration files of other services like Heat, Neutron, Nova,
Cinder, and Glance are updated by os-apply-config and os-apply-config
@@ -20,38 +20,41 @@ default, the username should also be specified for qpid.
For the seed image the default metadata on the file system needs
to be updated. Substitute "rabbit" with "qpid".

    sed -i "s/rabbit/qpid/" $TRIPLEO_ROOT/tripleo-image-elements/elements/seed-stack-config/config.json

After including the username, the qpid section should look like

    "qpid": {
      "host": "127.0.0.1",
      "username": "guest",
      "password": "guest"
    }

For the undercloud, update the Heat template by substituting "rabbit:"
with "qpid:".

    sed -i "s/rabbit:/qpid:/" $TRIPLEO_ROOT/tripleo-heat-templates/undercloud-vm.yaml

After including the username, the qpid section should look like

    qpid:
      host: 127.0.0.1
      username: guest
      password: guest

For the overcloud, update the Heat template by substituting "rabbit:"
with "qpid:".

    sed -i "s/rabbit:/qpid:/" $TRIPLEO_ROOT/tripleo-heat-templates/overcloud.yaml

After including the username, the qpid section(s) should look like

    qpid:
      host:
        Fn::GetAtt:
          - notcompute
          - PrivateIp
      username: guest
      password: guest

View File

@@ -22,11 +22,11 @@ this is for backwards compatibility and will be removed in a future release.
Configuration keys
------------------

    bootstack:
      public\_interface\_ip: 192.0.2.1/24
        - What IP address to place on the ovs public interface. Only intended for
          use when the interface will not be otherwise configured.
      masquerade\_networks: [192.0.2.0]
        - What networks, if any, to masquerade. When set, all traffic being
          output from each network to other networks is masqueraded. Traffic
          to 192.168.122.1 is never masqueraded.

View File

@@ -16,6 +16,7 @@ Grants snmp user password-less sudo access to lsof, so that the per process
check works correctly.
Options should be provided via heat. For example:

    snmpd:
      export_MIB: UCD-SNMP-MIB
      readonly_user_name: RoUser

View File

@@ -4,8 +4,8 @@ OpenStack services and other network clients authenticating SSL-secured connections.
Configuration
-------------

    ssl:
      ca_certificate: certdata

The CA certificate will be written to /etc/ssl/from-heat-ca.crt and installed using
update-ca-certificates (apt-based distros) or update-ca-trusts (yum-based distros).
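
A minimal sketch of how the certificate data might be supplied via heat metadata (the certificate body below is an obvious placeholder, not real PEM data):

    ssl:
      ca_certificate: |
        -----BEGIN CERTIFICATE-----
        MIIC...placeholder...
        -----END CERTIFICATE-----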

View File

@@ -2,11 +2,12 @@ Swift element for installing a swift proxy server
Configuration
-------------

    swift:
      service-password: PASSWORD
        - The service password for the swift user
      keystone:
        host: 127.0.0.1
          - The IP of the keystone host to authenticate against
      proxy-memcache:
        - Comma-separated list of proxy servers in memcache ring
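
For example (addresses are placeholders; 11211 is the usual memcached port), a two-node memcache ring might be given as:

    swift:
      proxy-memcache: 192.0.2.7:11211,192.0.2.8:11211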

View File

@@ -2,19 +2,20 @@ Common element for swift elements
Configuration
-------------

    swift:
      devices: r1z<zone number>-192.0.2.6:%PORT%/d1
        - A comma separated list of swift storage devices to place in the ring
          file.
        - This MUST be present in order for o-r-c to successfully complete.
      zones:
        - Servers are divided amongst separate zones if the swift.zones
          metadata is greater than the default of 1. Servers are placed in zones
          depending on their rank in the scaled-out list of Swift servers in the
          yaml template used to build the overcloud stack. The scaleout rank N
          is: SwiftStorage|controller<N>. The appropriate zone is calculated as:
          zone = N % swift.zones + 1.
        - To enable this calculation, the devices data takes the form of:
          r1z%<controller or SwiftStorage><N>%-192.0.2.6:%PORT%/d1
      hash: randomstring
        - A hash used to salt paths on storage hosts
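
As a worked sketch (address, port placeholder and counts are illustrative): with swift.zones set to 2, the controller ranked N=3 gets zone 3 % 2 + 1 = 2, so a devices entry written in the templated form would be expected to expand to r1z2-192.0.2.6:%PORT%/d1:

    swift:
      zones: 2
      devices: r1z%controller3%-192.0.2.6:%PORT%/d1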

View File

@@ -26,12 +26,13 @@ will take care of applying these settings during configuration time.
Configuration example
---------------------

    sysctl:
      net.ipv4.conf.all.arp_filter: 1
      net.ipv4.conf.all.arp_ignore: 2
      net.ipv4.conf.all.arp_announce: 2
      net.ipv4.conf.default.arp_filter: 1
      net.ipv4.conf.default.arp_ignore: 2
      net.ipv4.conf.default.arp_announce: 2

** Any valid sysctl key/value may be specified in this configuration format.

View File

@@ -1,16 +0,0 @@
Add the tempest cloud test suite to an image.
The purpose of this element is to run tempest as a gate for tripleo based ci.
To successfully run tempest your overcloud should have
o Nodes with at least 4G of memory and 20G of diskspace
o the following services should be running
cinder, glance, heat, keystone, neutron, nova and swift
To use you should simply run the command run-tempest with the
OS_* environment variables for admin defined.
TODO:
o Remove as many of the filters in tests2skip.txt as possible
o Investigate setting allow_tenant_isolation to true
o Investigate setting run_ssh to true

View File

@@ -0,0 +1,16 @@
Add the tempest cloud test suite to an image.
The purpose of this element is to run tempest as a gate for TripleO-based CI.
To successfully run tempest, your overcloud should have:

* Nodes with at least 4G of memory and 20G of disk space
* The following services running:
  cinder, glance, heat, keystone, neutron, nova and swift

To use it, simply run the run-tempest command with the
OS_* environment variables for the admin user defined.
TODO:
* Remove as many of the filters in tests2skip.txt as possible
* Investigate setting allow_tenant_isolation to true
* Investigate setting run_ssh to true

View File

@@ -3,80 +3,80 @@ Install Trove-API.
Configuration
-------------

    trove:
      verbose: False
        # Print more verbose output (set logging level to INFO instead of default WARNING level).
      debug: False
        # Print debugging output (set logging level to DEBUG instead of default WARNING level).
      bind_host: 0.0.0.0
        # Binding host for the API server
      bind_port: 8779
        # Binding port for the API server
      api_workers: 5
        # Number of API service processes/threads
      rabbit:
        host: 10.0.0.1
          # For specifying a single RabbitMQ node
        nodes: 10.0.0.1, 10.0.0.2
          # For specifying a RabbitMQ cluster
        username: guest
        password: guest
        port: 5672
        use_ssl: False
        virtual_host: /
      db:
        # DB Connection String
      volume_support:
        enabled: True
          # Whether to provision a cinder volume for datadir.
        block_device_mapping: vdb
        device_path: /dev/vdb
        mount_point: /var/lib/mysql
        volume_time_out: 60
        server_delete_time_out: 60
      max_accepted_volume_size: 10
        # Default maximum volume size for an instance.
      max_instances_per_user: 10
        # Default maximum number of instances per tenant.
      max_volumes_per_user: 10
        # Default maximum volume capacity (in GB) spanning across all trove volumes per tenant
      max_backups_per_user: 10
        # Default maximum number of backups created by a tenant.
      dns_support:
        enabled: True
        account_id: 123456
        dns_auth_url: 123456
        dns_username: user
        dns_passkey: password
        dns_ttl: 3600
        dns_domain_name: trove.com
        dns_domain_id: 11111111-1111-1111-1111-111111111111
        dns_driver: trove.dns.designate.driver.DesignateDriver
        dns_instance_entry_factory: trove.dns.designate.driver.DesignateInstanceEntryFactory
        dns_endpoint_url: http://127.0.0.1/v1/
        dns_service_type: dns
      admin_roles: admin
      control_exchange: trove
      log_dir: /var/log/trove
      keystone:
        auth_host: 10.0.0.1
          # Auth Host IP/Hostname
        auth_port: 5000
          # Port number on which the Auth service is running
        auth_protocol: http
          # Protocol supported by the Auth Service (HTTP/HTTPS)
        service_user: admin
          # Service Account Username (Admin)
        service_password:
          # Service Account Password
        service_tenant: demo
          # Service Account Tenant
      url:
        auth:
          # Keystone URL
        compute:
          # Nova Compute URL
        cinder:
          # Cinder URL
        swift:
          # Swift URL

View File

@@ -5,16 +5,16 @@ Configuration
Tuskar API requires the following keys to be set via Heat Metadata.

    tuskar:
      overcloud-admin-password:
        - the password of the overcloud admin user. Use the
          OvercloudAdminPassword template parameter to
          override this option.
      db: "mysql://tuskar:unset@localhost/tuskar?charset=utf8"
        - the connection string for a DB to be used by tuskar-api.
      username:
        - the name of the user to deploy the overcloud on behalf of
      password:
        - the password of the user to deploy the overcloud on behalf of
      tenant_name:
        - the tenant name of the user to deploy the overcloud on behalf of
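
Put together, a minimal sketch of the heat metadata (all values are placeholders) might look like:

    tuskar:
      overcloud-admin-password: "unset"
      db: "mysql://tuskar:unset@localhost/tuskar?charset=utf8"
      username: admin
      password: "unset"
      tenant_name: admin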