Merge "Properly format markdown code blocks"

This commit is contained in:
Jenkins 2014-11-13 21:29:58 +00:00 committed by Gerrit Code Review
commit 7391209820
20 changed files with 331 additions and 322 deletions


@@ -3,8 +3,8 @@ Install and configure Ceilometer.
Configuration
-------------
ceilometer:
metering_secret: "unset"
- secret value for signing metering messages
service-password: "unset"
- password for the metering service in Keystone
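For example, a minimal sketch of supplying these keys via Heat metadata (the key names come from the list above; the values are purely illustrative):

    ceilometer:
      metering_secret: "some-random-secret"
      service-password: "ceilometer-password"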


@@ -6,10 +6,10 @@ in images that use cinder.
Configuration
-------------
cinder:
verbose: False
- Print more verbose output (set logging level to INFO instead of default WARNING level).
debug: False
- Print debugging output (set logging level to DEBUG instead of default WARNING level).
iscsi-helper: tgtadm
- Specifies the iSCSI helper to use. Must match the target element included in the image.


@@ -2,37 +2,38 @@ Install and configure Glance.
Configuration
-------------
glance:
db: mysql://glance:unset@localhost/glance
- SQLAlchemy database connection string
service-password: password
- The service password for the glance user
api:
verbose: False
- Show more verbose log output (sets INFO log level output)
debug: False
- Show debugging output in logs (sets DEBUG log level output)
backend: swift
- The backend store to use
swift-store-user: service:glance
swift-store-key: userpassword
- The credentials to use against swift if using the swift backend.
workers: 1
- The number of Glance API server processes to start.
notifier-strategy: noop
- Strategy to use for notification queue.
log-file: ''
- The path of the file to use for logging messages from Glance's API server.
- The default is unset, which implies stdout.
default-log-levels:
- Logging: fine tune default log levels
registry:
verbose: False
- Show more verbose log output (sets INFO log level output)
debug: False
- Show debugging output in logs (sets DEBUG log level output)
log-file: ''
- The path of the file to use for logging messages from Glance's Registry server.
- The default is unset, which implies stdout.
default-log-levels:
- Logging: fine tune default log levels


@@ -72,60 +72,60 @@ EX: in overcloud-source.yaml for controllerConfig under properties:
Example Configurations
----------------------
haproxy:
nodes:
- name: notcompute
ip: 192.0.2.5
- name: notcomputeSlave0
ip: 192.0.2.6
services:
- name: dashboard_cluster
net_binds:
- ip: 192.0.2.3
port: 443
- ip: 192.0.2.3
port: 444
balance: roundrobin
- name: glance_api_cluster
proxy_ip: 192.0.2.3
proxy_port: 9293
port: 9292
balance: source
- name: mysql
port: 3306
extra_server_params:
- backup
You can override set of nodes for a service by setting its own set of
haproxy.nodes inside a service definition:
services:
- name: dashboard_cluster
net_binds:
- ip: 192.0.2.3
port: 444
- port: 443
balance: source
haproxy:
nodes:
- name: foo0
ip: 10.0.0.1
You can provide net_binds only once, for example:
haproxy:
nodes:
- name: foo0
ip: 10.0.0.1
net_binds:
- ip: 192.0.2.3
services:
- name: keystone
port: 5000
- name: dashboard_cluster
port: 80
net_binds:
- ip: 192.0.2.10
If haproxy.services.net_binds.port is not defined, haproxy.services.port
will be used instead, as illustrated below.
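Read against the example above (a hedged interpretation): neither service defines a net_binds port of its own, so each falls back to its service port:

    keystone:          binds on 192.0.2.3:5000  (global net_binds ip + service port)
    dashboard_cluster: binds on 192.0.2.10:80   (its own net_binds ip + service port)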


@@ -3,14 +3,14 @@ Install and configure Ironic.
Required options can be provided via heat.
For example:
ironic:
db: mysql://ironic:unset@192.0.2.2/ironic
service-password: unset
keystone:
host: 192.0.2.2
glance:
host: 192.0.2.2
rabbit:
host: 192.0.2.2
password: guest


@@ -11,5 +11,5 @@ Currently only supported on ifcfg network configuration style systems.
Configuration
=============
network-config:
gateway-dev: eth1


@@ -3,11 +3,11 @@ Install and configure Neutron.
Configuration
-------------
neutron:
verbose: False
- Print more verbose output (set logging level to INFO
instead of default WARNING level).
debug: False
- Print debugging output (set logging level to DEBUG
instead of default WARNING level).
flat-networks: "tripleo-bm-test"


@@ -3,40 +3,40 @@ Install and configure Nova.
Configuration
-------------
nova:
verbose: False
- Print more verbose output (set logging level to INFO instead of default WARNING level).
debug: False
- Print debugging output (set logging level to DEBUG instead of default WARNING level).
baremetal:
pxe_deploy_timeout: "1200"
- the duration in seconds for pxe deployment timeouts.
virtual_power:
type: "virsh"
- which virtual power driver to use: "virsh" or "vbox"
compute_libvirt_type: "qemu"
- which libvirt compute type to use. Unset will use the nova default.
image_cache_manager_interval:
- Number of seconds to wait between runs of the image cache manager.
resize_fs_using_block_device: BoolOpt
- Attempt to resize the filesystem by accessing the image over a block device.
resume_guests_state_on_host_boot: BoolOpt
- Whether to start guests that were running before the host rebooted.
running_deleted_instance_action:
- Action to take if a running deleted instance is detected.
Valid options are: 'noop', 'log', 'shutdown', or 'reap'.
Set to 'noop' to take no action.
virt_mkfs:
- Name of the mkfs commands for ephemeral device.
The format is <os_type>=<mkfs command>
e.g. 'linux-ext4=mkfs -t ext4 -F -L %(fs_label)s %(target)s'
compute_manager: "ironic.nova.compute.manager.ClusterComputeManager"
- set to override the compute manager class used by Nova-Compute.
scheduler_host_manager: "nova.scheduler.ironic_host_manager.IronicHostManager"
- set to override the scheduler host manager used by Nova. If no
scheduler_host_manager is configured it is automatically set to
the deprecated Nova baremetal and/or the old in-tree Ironic
compute driver for Nova.
public_ip:
- public IP address (if any) assigned to this node. Used for VNC proxy
connections so this is typically only required on controller nodes.


@@ -6,17 +6,18 @@ Sets the default install type to local-only so we don't spam anyone. This can be
overridden with the DIB_POSTFIX_INSTALL_TYPE environment variable (see the
example after the list below).
Valid options for DIB_POSTFIX_INSTALL_TYPE are:
* Local only
* Internet Site
* Internet with smarthost
* Satellite system
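For example, a hedged sketch of selecting an install type at image-build time (assuming the variable is exported in the shell that invokes the image build):

    export DIB_POSTFIX_INSTALL_TYPE="Internet Site"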
Set postfix hostname and domain via heat:
postfix:
mailhostname: mail
maildomain: example.com
delay_warning_time: 4h
relayhost: smtp.example.com
**NOTE**: mailhostname and maildomain must match the system hostname in order to
ensure local mail delivery will work.
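With the Heat values above, local delivery therefore expects the fully qualified system hostname to be mail.example.com; an illustrative check:

    $ hostname -f
    mail.example.com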


@@ -5,7 +5,7 @@ To use Qpid, when building an image, add the qpid element and
remove the rabbitmq-server element. At the moment, rabbitmq-server
is listed as default in boot-stack/element-deps.
sed -i "s/rabbitmq-server/qpidd/" $TRIPLEO_ROOT/tripleo-image-elements/elements/boot-stack/element-deps
The configuration files of other services like Heat, Neutron, Nova,
Cinder, and Glance are updated by os-apply-config and os-apply-config
@@ -20,38 +20,41 @@ default, the username should also be specified for qpid.
For the seed image the default metadata on the file system needs
to be updated. Substitute "rabbit" with "qpid".
sed -i "s/rabbit/qpid/" $TRIPLEO_ROOT/tripleo-image-elements/elements/seed-stack-config/config.json
After including the username, the qpid section should look like
"qpid": {
"host": "127.0.0.1",
"username": "guest",
"password": "guest"
}
"qpid": {
"host": "127.0.0.1",
"username": "guest",
"password": "guest"
}
For the undercloud, update the Heat template by substituting "rabbit:"
with "qpid:".
sed -i "s/rabbit:/qpid:/" $TRIPLEO_ROOT/tripleo-heat-templates/undercloud-vm.yaml
After including the username, the qpid section should look like
qpid:
host: 127.0.0.1
username: guest
password: guest
For the overcloud, update the Heat template by substituting "rabbit:"
with "qpid:".
sed -i "s/rabbit:/qpid:/" $TRIPLEO_ROOT/tripleo-heat-templates/overcloud.yaml
After including the username, the qpid section(s) should look like
qpid:
host:
Fn::GetAtt:
- notcompute
- PrivateIp
username: guest
password: guest


@@ -22,11 +22,11 @@ this is for backwards compatibility and will be removed in a future release.
Configuration keys
------------------
bootstack:
public_interface_ip: 192.0.2.1/24
- What IP address to place on the ovs public interface. Only intended for
use when the interface will not be otherwise configured.
masquerade_networks: [192.0.2.0]
- What networks, if any, to masquerade. When set, all traffic being
output from each network to other networks is masqueraded. Traffic
to 192.168.122.1 is never masqueraded.


@@ -16,6 +16,7 @@ Grants snmp user password-less sudo access to lsof, so that the per process
check works correctly.
Options should be provided via heat. For example:
snmpd:
export_MIB: UCD-SNMP-MIB
readonly_user_name: RoUser


@@ -4,8 +4,8 @@ OpenStack services and other network clients authenticating SSL-secured connections.
Configuration
-------------
ssl:
ca_certificate: certdata
The CA certificate will be written to /etc/ssl/from-heat-ca.crt and installed using
update-ca-certificates (apt-based distros) or update-ca-trusts (yum-based distros).
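As a quick sanity check (illustrative; the file path comes from the text above), the installed CA certificate can be inspected with openssl:

    openssl x509 -in /etc/ssl/from-heat-ca.crt -noout -subject -dates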


@@ -2,11 +2,12 @@ Swift element for installing a swift proxy server
Configuration
-------------
swift:
service-password: PASSWORD
- The service password for the swift user
keystone:
host: 127.0.0.1
- The IP of the keystone host to authenticate against
proxy-memcache:
- Comma-separated list of proxy servers in memcache ring


@@ -2,19 +2,20 @@ Common element for swift elements
Configuration
-------------
swift:
devices: r1z<zone number>-192.0.2.6:%PORT%/d1
- A comma separated list of swift storage devices to place in the ring
file.
- This MUST be present in order for o-r-c (os-refresh-config) to successfully complete.
zones:
- Servers are divided amongst separate zones if the swift.zones
metadata is greater than the default of 1. Servers are placed in zones
depending on their rank in the scaled-out list of Swift servers in the
yaml template used to build the overcloud stack. The scaleout rank N
is: SwiftStorage|controller<N>. The appropriate zone is calculated as:
zone = N % swift.zones + 1.
- To enable this calculation, the devices data takes the form of:
r1z%<controller or SwiftStorage><N>%-192.0.2.6:%PORT%/d1
hash: randomstring
- A hash used to salt paths on storage hosts
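A worked example of that formula (illustrative; swift.zones set to 3 and four Swift servers ranked N = 0..3):

    N = 0: zone = 0 % 3 + 1 = 1
    N = 1: zone = 1 % 3 + 1 = 2
    N = 2: zone = 2 % 3 + 1 = 3
    N = 3: zone = 3 % 3 + 1 = 1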


@@ -26,12 +26,13 @@ will take care of applying these settings during configuration time.
Configuration example
---------------------
sysctl:
net.ipv4.conf.all.arp_filter: 1
net.ipv4.conf.all.arp_ignore: 2
net.ipv4.conf.all.arp_announce: 2
net.ipv4.conf.default.arp_filter: 1
net.ipv4.conf.default.arp_ignore: 2
net.ipv4.conf.default.arp_announce: 2
Any valid sysctl key/value may be specified in this configuration format.




@@ -0,0 +1,16 @@
Add the tempest cloud test suite to an image.
The purpose of this element is to run tempest as a gate for tripleo based ci.
To successfully run tempest your overcloud should have:
* Nodes with at least 4G of memory and 20G of disk space
* The following services running:
  cinder, glance, heat, keystone, neutron, nova and swift
To use it, simply run the run-tempest command with the OS_* environment
variables for the admin user defined, for example:
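(A hedged sketch; the values are illustrative and the OS_* names are the standard OpenStack client environment variables.)

    export OS_USERNAME=admin
    export OS_PASSWORD=secret
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://192.0.2.1:5000/v2.0
    run-tempest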
TODO:
* Remove as many of the filters in tests2skip.txt as possible
* Investigate setting allow_tenant_isolation to true
* Investigate setting run_ssh to true


@@ -3,80 +3,80 @@ Install Trove-API.
Configuration
-------------
trove:
verbose: False
# Print more verbose output (set logging level to INFO instead of default WARNING level).
debug: False
# Print debugging output (set logging level to DEBUG instead of default WARNING level).
bind_host: 0.0.0.0
# Binding host for the API server
bind_port: 8779
# Binding port for the API server
api_workers: 5
# Number of API service processes/threads
rabbit:
host: 10.0.0.1
# For specifying single RabbitMQ node
nodes: 10.0.0.1, 10.0.0.2
# For specifying RabbitMQ Cluster
username: guest
password: guest
port: 5672
use_ssl: False
virtual_host: /
db:
# DB Connection String
volume_support:
enabled: True
# Whether to provision a cinder volume for datadir.
block_device_mapping: vdb
device_path: /dev/vdb
mount_point: /var/lib/mysql
volume_time_out: 60
server_delete_time_out: 60
max_accepted_volume_size: 10
# Default maximum volume size for an instance.
max_instances_per_user: 10
# Default maximum number of instances per tenant.
max_volumes_per_user: 10
# Default maximum volume capacity (in GB) spanning across all trove volumes per tenant
max_backups_per_user: 10
# Default maximum number of backups created by a tenant.
dns_support:
enabled: True
account_id: 123456
dns_auth_url: 123456
dns_username: user
dns_passkey: password
dns_ttl: 3600
dns_domain_name: trove.com
dns_domain_id: 11111111-1111-1111-1111-111111111111
dns_driver: trove.dns.designate.driver.DesignateDriver
dns_instance_entry_factory: trove.dns.designate.driver.DesignateInstanceEntryFactory
dns_endpoint_url: http://127.0.0.1/v1/
dns_service_type: dns
admin_roles: admin
control_exchange: trove
log_dir: /var/log/trove
keystone:
auth_host: 10.0.0.1
# Auth Host IP/Hostname
auth_port: 5000
# Port number on which the Auth service is running
auth_protocol: http
# Protocol supported by Auth Service (HTTP/HTTPS)
service_user: admin
# Service Account Username (Admin)
service_password:
# Service Account Password
service_tenant: demo
# Service Account Tenant
url:
auth:
# Keystone URL
compute:
# Nova Compute URL
cinder:
# Cinder URL
swift:
# Swift URL


@@ -5,16 +5,16 @@ Configuration
Tuskar API requires the following keys to be set via Heat Metadata.
tuskar:
overcloud-admin-password:
- the password of the overcloud admin user. Use
OvercloudAdminPassword template parameter to
override this option.
db: "mysql://tuskar:unset@localhost/tuskar?charset=utf8"
- the connection string for a DB to be used by tuskar-api.
username:
- the name of the user to deploy the overcloud on behalf of
password:
- the password of the user to deploy the overcloud on behalf of
tenant_name:
- the tenant name of the user to deploy the overcloud on behalf of