Add HA templates to deploy OpenShift on CentOS

These templates are heavily based on the existing OpenShift Enterprise
templates for use with RHEL. The OpenShift installer utilized in these
templates has been slightly modified to support CentOS and to make
the install completely automated (aside from creating districts, if so
desired).

Change-Id: I74acfdd553eb6a4c7ac771b6c0ec6543e1e63ea9
Jeff Peeler 2014-07-10 14:12:43 -04:00
parent 5aa27cc724
commit d357422cf6
6 changed files with 1152 additions and 0 deletions


@@ -8,3 +8,7 @@ It includes the following template files:
* `OpenShift.yaml` - deploys OpenShift Origin in an all-in-one setup (broker+console+node)
* `OpenShift-1B1N.yaml` - deploys OpenShift Origin with separate instances for broker and node
And the following directory:
* `highly-available` - deploys OpenShift Origin in a highly available setup as further described in its README.md


@@ -0,0 +1,118 @@
# OpenShift Origin Highly Available Environment
This nested heat stack deploys a highly-available OpenShift Origin environment.
## Resources Deployed
* 6 instances
* Highly available OpenShift broker set (3)
* OpenShift nodes (3)
* 7 floating IPs (includes one for LBaaS VIP)
* LBaaS, consisting of health monitor (HTTPS), pool, virtual IP (VIP)
* Integrated BIND server on broker 1 for dynamic DNS updates
### Deployment
    zone transferred to
    upstream DNS (IT)
           \          ----------------------
            \        / mongo replica set    \
             \      /  ActiveMQ pool         \
          --\---------   ------------   ------------
          |  BIND    |   |          |   |          |
          | -------- |---| broker 2 |---| broker 3 |
          | broker 1 |   |          |   |          |
          ------------   ------------   ------------
                \             |             /
                 \            |            /
    LBaaS agent (API) ----------------------- developers
                 /            |            \
                /             |             \
          ------------   ------------   ------------
          |          |   |          |   |          |
          |  node 1  |---|  node 2  |---|  node 3  | ---- application
          |          |   |          |   |          |      users
          ------------   ------------   ------------
## Requirements
* Neutron networking: one private and one public network
* Compute quota for six VM instances
* Pool of seven available floating IP addresses. Addresses will be created and assigned at deployment.
* Load Balancer as a Service (LBaaS) configured. See the neutron [LBaaS agent configuration section](http://openstack.redhat.com/LBaaS).
* IP address of upstream (IT) DNS server for zone transfers
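The floating IP requirement above can be sanity-checked before deploying. A minimal sketch; the `list_floating_ips` helper is a hypothetical stand-in for a real `neutron floatingip-list` query, stubbed so the snippet is self-contained:

```shell
#!/bin/bash
# Stand-in for: neutron floatingip-list (one line per available address).
list_floating_ips() { seq 1 9; }

# The HA stack allocates seven floating IPs (six instances + LBaaS VIP).
available=$(list_floating_ips | wc -l | tr -d ' ')
if [ "$available" -ge 7 ]; then
  echo "floating IP pool OK ($available available, 7 required)"
else
  echo "need 7 floating IPs, only $available available" >&2
fi
```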
## Files
These templates are [Heat Orchestration Templates (HOT)](http://docs.openstack.org/developer/heat/template_guide/environment.html). Environment files are used to reduce CLI parameters and provide a way to reuse resources.
* Templates
* oso_ha_stack.yaml
* oso_node_stack.yaml
* Environments
* oso_ha_env.yaml
* oso_node_env.yaml
## How to Deploy
1. Clone this repository: `git clone https://github.com/openstack/heat-templates.git`
2. Change to this directory
cd heat-templates/openshift-origin/centos65/highly-available/
3. Edit heat environment file `oso_ha_env.yaml` according to your environment.
4. Launch highly available OpenShift stack
heat stack-create openshift-ha-stack -f oso_ha_stack.yaml -e oso_ha_env.yaml
5. Monitor progress. Options include:
* `tail -f /var/log/heat/heat-engine.log`
* `tail -f /tmp/openshift.out`
* `heat stack-list`
* `heat resource-list openshift-ha-stack`
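Step 5 can also be scripted rather than watched by hand. A hedged sketch of a polling helper; `stack_status` is stubbed here so the snippet runs standalone, but in practice it would parse `heat stack-list` output for the stack's status column:

```shell
#!/bin/bash
# Stand-in for: heat stack-list | awk -v s="$1" '$4 == s { print $6 }'
stack_status() { echo "CREATE_COMPLETE"; }

# Poll until the named stack reaches a terminal state.
wait_for_stack() {
  local name=$1 status
  while true; do
    status=$(stack_status "$name")
    case $status in
      CREATE_COMPLETE) echo "stack $name ready"; return 0 ;;
      CREATE_FAILED)   echo "stack $name failed" >&2; return 1 ;;
      *)               sleep 5 ;;
    esac
  done
}

wait_for_stack openshift-ha-stack
```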
## Scaling: Adding Nodes
OpenShift nodes may be manually added as needed using the OpenShift node heat template.
1. From directory `heat-templates/openshift-origin/centos65/highly-available/` edit the heat environment file `oso_node_env.yaml`
2. Launch node stack. This will deploy a single node server with attached cinder volume and floating IP address. Be sure to pass in the node hostname parameter to override the default.
heat stack-create openshift-node -f oso_node_stack.yaml -e oso_node_env.yaml -P "node_hostname=node4"
3. On broker1 add a DNS record for the new node server in `/var/named/dynamic/<my_domain>.db`. To force a zone transfer to the upstream DNS increment the serial number by 1 and run `rndc freeze ; rndc thaw`.
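The serial bump in step 3 can be done mechanically. A sketch against a sample zone fragment (the real file would be `/var/named/dynamic/<my_domain>.db`, and the trailing `rndc` commands run on broker1, not here):

```shell
#!/bin/bash
# Sample zone file standing in for /var/named/dynamic/<my_domain>.db
zone=$(mktemp)
cat > "$zone" <<'EOF'
$TTL 300
@ IN SOA broker1.example.com. hostmaster.example.com. (
2014071001 ; serial
60 15 1800 10 )
EOF

# Increment the SOA serial so slaves pick up the changed zone.
awk '/; serial/ { $1 = $1 + 1 } { print }' "$zone" > "$zone.new"
grep serial "$zone.new"
# Then on broker1: rndc freeze ; rndc thaw
```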
## Additional Configuration Steps
1. Add brokers to LBaaS pool. On OpenStack:
neutron lb-member-create --address <broker1_fixed_ip> --protocol-port 443 oso_broker_lb_pool
neutron lb-member-create --address <broker2_fixed_ip> --protocol-port 443 oso_broker_lb_pool
neutron lb-member-create --address <broker3_fixed_ip> --protocol-port 443 oso_broker_lb_pool
2. Add session persistence to LBaaS virtual IP (VIP):
neutron lb-vip-update oso_broker_vip --session-persistence type=dict type='SOURCE_IP'
3. Update the upstream DNS server to accept zone transfers from the OpenShift dynamic DNS. An example configuration would be to add a slave zone to `/etc/named.conf`:
zone "<openshift_domain_name>" {
type slave;
file "slaves/<openshift_domain_name>.db";
masters { <broker1_ip_address>; };
};
* If the upstream DNS configuration is not available a test client machine may be pointed to the broker 1 IP address (e.g. edit /etc/resolv.conf).
4. Create districts. The following creates a small district and adds two nodes to the district.
oo-admin-ctl-district -c create -n small_district -p small
oo-admin-ctl-district -c add-node -n small_district -i <node1_hostname>
oo-admin-ctl-district -c add-node -n small_district -i <node2_hostname>
## Troubleshooting
* `oo-mco ping` on a broker to verify nodes are registered
* `oo-diagnostics -v` on a broker to run a comprehensive set of tests
* `oo-accept-node -v` on a node
* If LBaaS is not set up any broker hostname can be used temporarily as the developer and node API target. Be sure to edit `/etc/openshift/node.conf`.
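For the no-LBaaS case above, the node edit amounts to pointing `BROKER_HOST` at a chosen broker. A minimal sketch against a sample file (the real path is `/etc/openshift/node.conf`; the hostname is illustrative):

```shell
#!/bin/bash
# Sample standing in for /etc/openshift/node.conf
conf=$(mktemp)
echo 'BROKER_HOST="broker.example.com"' > "$conf"

# Point this node at broker1 directly instead of the LBaaS VIP hostname.
sed -i 's/^BROKER_HOST=.*/BROKER_HOST="broker1.example.com"/' "$conf"
cat "$conf"
```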


@@ -0,0 +1,23 @@
parameters:
# existing OpenStack keypair
key_name: mykey
domain: example.com
hosts_domain: example.com
replicants: broker1.example.com,broker2.example.com,broker3.example.com
# IP address of existing DNS server that will be configured for zone xfer
# this server will be a slave for the OpenShift zone
upstream_dns_ip: 10.0.0.1
# Names of glance images. Using prepped images will greatly reduce deploy time.
node_image: centos-6.5-release-n
broker_image: centos-6.5-release
activemq_admin_pass: password
activemq_user_pass: password
mcollective_pass: password
mongo_broker_pass: password
openshift_pass1: password
# Use 'neutron net-list' and 'neutron subnet-list' and replace these values
private_net_id: ec6c8237-1368-42c2-af6a-2c5a6b41951b
public_net_id: c5882794-fa7d-46b2-b90a-e37e47fabdf8
private_subnet_id: 8977e24c-32c6-4fb1-ae9f-6f70c16ecf0d
resource_registry:
OpenShift::Node::Server: oso_node_stack.yaml


@@ -0,0 +1,744 @@
heat_template_version: 2013-05-23
description: >
Nested HOT template for deploying a highly available OpenShift Origin
environment. Deploys 3 HA brokers and 3 nodes, with floating IPs, LBaaS,
cinder-attached storage (nodes) and dynamic DNS on broker1
parameter_groups:
- label: General parameters
description: General OpenShift parameters
parameters:
- broker1_hostname
- broker2_hostname
- broker3_hostname
- node1_hostname
- node2_hostname
- node3_hostname
- load_bal_hostname
- broker_image
- node_image
- broker_server_flavor
- node_server_flavor
- label: Networking parameters
description: Networking-related parameters
parameters:
- domain
- hosts_domain
- named_hostname
- named_ip
- upstream_dns_ip
- replicants
- cartridges
- public_net_id
- private_net_id
- private_subnet_id
- label: Credentials
description: >
Username and password parameters for OpenShift and dependent services
parameters:
- openshift_user1
- openshift_pass1
- mongo_broker_user
- mongo_broker_pass
- mcollective_user
- mcollective_pass
- activemq_admin_pass
- activemq_user_pass
parameters:
key_name:
description: Name of an existing keypair to enable SSH access to the instances
type: string
domain:
description: Your DNS domain
type: string
hosts_domain:
description: OpenShift hosts domain
type: string
broker_server_flavor:
description: Flavor of broker server
type: string
default: m1.small
primary_avail_zone:
description: >
Primary availability zone to ensure distribution of brokers and nodes
type: string
default: nova
secondary_avail_zone:
description: >
Secondary availability zone to ensure distribution of brokers and nodes
type: string
default: nova
node_server_flavor:
description: Flavor of node servers
type: string
default: m1.medium
node_vol_size:
description: Node cinder volume size (GB)
type: number
default: 10
broker1_hostname:
description: Broker 1 hostname
type: string
default: broker1
broker2_hostname:
description: Broker 2 hostname
type: string
default: broker2
broker3_hostname:
description: Broker 3 hostname
type: string
default: broker3
node1_hostname:
description: Node 1 hostname
type: string
default: node1
node2_hostname:
description: Node 2 hostname
type: string
default: node2
node3_hostname:
description: Node 3 hostname
type: string
default: node3
load_bal_hostname:
description: Load balancer hostname
type: string
default: broker
broker_image:
description: Broker image name
type: string
default: centos65-x86_64-broker
node_image:
description: Node image name
type: string
default: centos65-x86_64-node
openshift_repo_base:
description: OSE Repository Base URL
type: string
default: ""
openshift_extra_repo_base:
description: OSE Extra Repository Base URL
type: string
default: ""
jboss_repo_base:
description: JBoss Repository Base URL
type: string
default: ""
named_hostname:
description: named server hostname
type: string
default: broker1
named_ip:
description: named server IP address
type: string
default: ""
upstream_dns_ip:
description: Upstream DNS IP address for zone transfer
type: string
default: ""
replicants:
description: >
Comma-separated list (no spaces) of broker hosts (FQDN) running ActiveMQ and MongoDB
type: string
cartridges:
description: >
Cartridges to install. "all" for all cartridges; "standard" for all cartridges except for JBossEWS or JBossEAP
type: string
default: "cron,diy,haproxy,mysql,nodejs,perl,php,postgresql,python,ruby"
public_net_id:
type: string
description: >
ID of public network for which floating IP addresses will be allocated
private_net_id:
type: string
description: ID of private network into which servers get deployed
private_subnet_id:
type: string
description: ID of private sub network into which servers get deployed
openshift_user1:
description: OpenShift username
type: string
default: user1
openshift_pass1:
description: OpenShift user password
type: string
hidden: true
mongo_broker_user:
description: MongoDB broker username
type: string
default: openshift
mongo_broker_pass:
description: MongoDB broker password
type: string
hidden: true
mcollective_user:
description: MCollective username
type: string
default: mcollective
mcollective_pass:
description: MCollective password
type: string
hidden: true
activemq_admin_pass:
description: ActiveMQ admin user password
type: string
hidden: true
activemq_user_pass:
description: ActiveMQ user password
type: string
hidden: true
resources:
oso_broker_sec_grp:
type: AWS::EC2::SecurityGroup
properties:
GroupDescription: broker firewall rules
SecurityGroupIngress:
- {IpProtocol: tcp, FromPort: '22', ToPort: '22', CidrIp: 0.0.0.0/0}
- {IpProtocol: udp, FromPort: '53', ToPort: '53', CidrIp: 0.0.0.0/0}
- {IpProtocol: tcp, FromPort: '53', ToPort: '53', CidrIp: 0.0.0.0/0}
- {IpProtocol: tcp, FromPort: '80', ToPort: '80', CidrIp: 0.0.0.0/0}
- {IpProtocol: tcp, FromPort: '443', ToPort: '443', CidrIp: 0.0.0.0/0}
- {IpProtocol: tcp, FromPort: '27017', ToPort: '27017', CidrIp: 0.0.0.0/0}
- {IpProtocol: tcp, FromPort: '61613', ToPort: '61613', CidrIp: 0.0.0.0/0}
- {IpProtocol: tcp, FromPort: '61616', ToPort: '61616', CidrIp: 0.0.0.0/0}
broker1_port:
type: OS::Neutron::Port
properties:
security_groups: [{ get_resource: oso_broker_sec_grp }]
network_id: { get_param: private_net_id }
fixed_ips:
- subnet_id: { get_param: private_subnet_id }
broker1_floating_ip:
type: OS::Neutron::FloatingIP
properties:
floating_network_id: { get_param: public_net_id }
port_id: { get_resource: broker1_port }
broker2_port:
type: OS::Neutron::Port
properties:
security_groups: [{ get_resource: oso_broker_sec_grp }]
network_id: { get_param: private_net_id }
fixed_ips:
- subnet_id: { get_param: private_subnet_id }
broker2_floating_ip:
type: OS::Neutron::FloatingIP
properties:
floating_network_id: { get_param: public_net_id }
port_id: { get_resource: broker2_port }
broker3_port:
type: OS::Neutron::Port
properties:
security_groups: [{ get_resource: oso_broker_sec_grp }]
network_id: { get_param: private_net_id }
fixed_ips:
- subnet_id: { get_param: private_subnet_id }
broker3_floating_ip:
type: OS::Neutron::FloatingIP
properties:
floating_network_id: { get_param: public_net_id }
port_id: { get_resource: broker3_port }
broker1_wait_handle:
type: AWS::CloudFormation::WaitConditionHandle
broker1_wait_condition:
type: AWS::CloudFormation::WaitCondition
properties:
Handle: { get_resource: broker1_wait_handle }
Timeout: '6000'
broker2_wait_handle:
type: AWS::CloudFormation::WaitConditionHandle
broker2_wait_condition:
type: AWS::CloudFormation::WaitCondition
properties:
Handle: { get_resource: broker2_wait_handle }
Timeout: '6000'
broker3_wait_handle:
type: AWS::CloudFormation::WaitConditionHandle
broker3_wait_condition:
type: AWS::CloudFormation::WaitCondition
properties:
Handle: { get_resource: broker3_wait_handle }
Timeout: '6000'
###
# load balancer
###
lb_vip_port:
type: OS::Neutron::Port
properties:
security_groups: [{ get_resource: oso_broker_sec_grp }]
network_id: { get_param: private_net_id }
fixed_ips:
- subnet_id: { get_param: private_subnet_id }
lb_vip_floating_ip:
type: OS::Neutron::FloatingIP
properties:
floating_network_id: { get_param: public_net_id }
port_id: { get_resource: lb_vip_port }
lb_pool_vip:
type: OS::Neutron::FloatingIPAssociation
properties:
floatingip_id: { get_resource: lb_vip_floating_ip }
port_id: { 'Fn::Select': ['port_id', {get_attr: [pool, vip] } ] }
monitor:
type: OS::Neutron::HealthMonitor
properties:
type: HTTPS
delay: 15
max_retries: 5
timeout: 10
pool:
type: OS::Neutron::Pool
properties:
name: oso_broker_lb_pool
description: Load balancer for OpenShift Enterprise broker hosts
protocol: HTTPS
subnet_id: { get_param: private_subnet_id }
lb_method: ROUND_ROBIN
monitors: [ { get_resource: monitor } ]
vip:
name: oso_broker_vip
description: broker virtual IP (VIP)
protocol_port: 443
session_persistence:
type: SOURCE_IP
mylb:
type: OS::Neutron::LoadBalancer
properties:
members: [ { get_resource: broker1_instance }, { get_resource: broker2_instance }, { get_resource: broker3_instance } ]
pool_id: { get_resource: pool }
protocol_port: 443
###
# Broker 1
###
broker1_instance:
type: OS::Nova::Server
depends_on: [ broker2_wait_condition, broker3_wait_condition ]
properties:
name: oso_broker1
image: { get_param: broker_image }
flavor: { get_param: broker_server_flavor }
availability_zone: { get_param: primary_avail_zone }
key_name: { get_param: key_name }
networks:
- port: { get_resource: broker1_port }
user_data:
str_replace:
template: |
#!/bin/bash -x
export CONF_BROKER_IP_ADDR=P_BROKER_FLOATING_IP
export CONF_BROKER2_IP_ADDR=P_BROKER2_FLOATING_IP
export CONF_BROKER3_IP_ADDR=P_BROKER3_FLOATING_IP
export CONF_DOMAIN=P_DOMAIN
export CONF_BROKER_HOSTNAME=P_BROKER_HOSTNAME
export CONF_BROKER2_HOSTNAME=P_BROKER2_HOSTNAME
export CONF_BROKER3_HOSTNAME=P_BROKER3_HOSTNAME
export CONF_NAMED_HOSTNAME=P_NAMED_HOSTNAME
export CONF_NAMED_IP_ADDR=P_NAMED_IP
export CONF_NAMED_ENTRIES=P_BROKER2_HOSTNAME:P_BROKER2_FLOATING_IP,P_BROKER3_HOSTNAME:P_BROKER3_FLOATING_IP,P_NODE1_HOSTNAME:P_NODE1_FLOATING_IP,P_NODE2_HOSTNAME:P_NODE2_FLOATING_IP,P_NODE3_HOSTNAME:P_NODE3_FLOATING_IP,P_LOAD_BAL_HOSTNAME:P_LOAD_BAL_IP
export CONF_BIND_KEYALGORITHM="HMAC-MD5"
export CONF_ACTIVEMQ_HOSTNAME=P_BROKER_HOSTNAME
export CONF_DATASTORE_HOSTNAME=P_BROKER_HOSTNAME
export CONF_DATASTORE_REPLICANTS=P_REPLICANTS
export CONF_ACTIVEMQ_REPLICANTS=P_REPLICANTS
export CONF_INSTALL_METHOD='osoyum'
export CONF_OSE_REPOS_BASE=P_CONF_OSE_REPOS_BASE
export CONF_OSE_EXTRA_REPO_BASE=P_CONF_OSE_EXTRA_REPOS_BASE
export CONF_JBOSS_REPO_BASE=P_CONF_JBOSS_REPO_BASE
export CONF_INSTALL_COMPONENTS=broker,activemq,datastore,named
export CONF_ACTIONS=do_all_actions,configure_datastore_add_replicants
export CONF_OPENSHIFT_USER1=P_CONF_OPENSHIFT_USER1
export CONF_OPENSHIFT_PASSWORD1=P_CONF_OPENSHIFT_PASSWORD1
export CONF_MONGODB_BROKER_USER=P_CONF_MONGODB_BROKER_USER
export CONF_MONGODB_BROKER_PASSWORD=P_CONF_MONGODB_BROKER_PASSWORD
export CONF_MCOLLECTIVE_USER=P_CONF_MCOLLECTIVE_USER
export CONF_MCOLLECTIVE_PASSWORD=P_CONF_MCOLLECTIVE_PASSWORD
export CONF_ACTIVEMQ_ADMIN_PASSWORD=P_CONF_ACTIVEMQ_ADMIN_PASSWORD
export CONF_ACTIVEMQ_AMQ_USER_PASSWORD=P_CONF_ACTIVEMQ_AMQ_USER_PASSWORD
while [ ! -f openshift.sh ]; do
echo "Attempting to fetch installer script"
curl -O https://raw.githubusercontent.com/jpeeler/openshift-extras/enterprise-2.0/enterprise/install-scripts/generic/openshift.sh -k
result=$?
echo "Attempt resulted in $result"
sleep 5
done
chmod +x ./openshift.sh
./openshift.sh 2>&1 | tee /tmp/openshift.out
sed -i '/type master/a \
also-notify { P_UPSTREAM_DNS_IP; };\n notify yes;' /etc/named.conf
setenforce 1
cd /etc/init.d
for i in `ls cloud-*`; do chkconfig $i off; done
# FIXME: shouldn't need this. DIB step? selinux enabled when pkg instld? see rpm -q --scripts ruby193-rubygem-passenger-native
#semodule -i /opt/rh/ruby193/root/usr/share/selinux/packages/ruby193-rubygem-passenger/ruby193-rubygem-passenger.pp 2>/dev/null
#fixfiles -R ruby193-rubygem-passenger restore
#fixfiles -R ruby193-rubygem-passenger-native restore
/usr/bin/cfn-signal -e 0 -s "Broker 1 setup complete" -i "P_BROKER_HOSTNAME.P_DOMAIN" "P_BROKER_WAIT_HANDLE"
echo "date >> /var/www/html/broker_up; restorecon -r /var/www/openshift" >> /etc/rc.local
reboot
params:
P_BROKER_FLOATING_IP: { get_attr: [ broker1_floating_ip, floating_ip_address ] }
P_NODE1_FLOATING_IP: { get_attr: [ node1_instance, node_floating_ip ] }
P_NODE2_FLOATING_IP: { get_attr: [ node2_instance, node_floating_ip ] }
P_NODE3_FLOATING_IP: { get_attr: [ node3_instance, node_floating_ip ] }
P_BROKER2_FLOATING_IP: { get_attr: [ broker2_floating_ip, floating_ip_address ] }
P_BROKER3_FLOATING_IP: { get_attr: [ broker3_floating_ip, floating_ip_address ] }
P_DOMAIN: { get_param: domain }
P_HOSTS_DOMAIN: { get_param: hosts_domain }
P_LOAD_BAL_HOSTNAME: { get_param: load_bal_hostname }
P_LOAD_BAL_IP: { get_attr: [ lb_vip_floating_ip, floating_ip_address ] }
P_BROKER_HOSTNAME: { get_param: broker1_hostname }
P_BROKER2_HOSTNAME: { get_param: broker2_hostname }
P_BROKER3_HOSTNAME: { get_param: broker3_hostname }
P_NODE1_HOSTNAME: { get_param: node1_hostname }
P_NODE2_HOSTNAME: { get_param: node2_hostname }
P_NODE3_HOSTNAME: { get_param: node3_hostname }
P_NAMED_HOSTNAME: { get_param: named_hostname }
P_NAMED_IP: { get_attr: [ broker1_floating_ip, floating_ip_address ] }
P_UPSTREAM_DNS_IP: { get_param: upstream_dns_ip }
P_REPLICANTS: { get_param: replicants }
P_CONF_OSE_REPOS_BASE: { get_param: openshift_repo_base}
P_CONF_OSE_EXTRA_REPOS_BASE: { get_param: openshift_extra_repo_base}
P_CONF_JBOSS_REPO_BASE: { get_param: jboss_repo_base}
P_CONF_OPENSHIFT_USER1: { get_param: openshift_user1 }
P_CONF_OPENSHIFT_PASSWORD1: { get_param: openshift_pass1 }
P_CONF_MONGODB_BROKER_USER: { get_param: mongo_broker_user }
P_CONF_MONGODB_BROKER_PASSWORD: { get_param: mongo_broker_pass }
P_CONF_MCOLLECTIVE_USER: { get_param: mcollective_user }
P_CONF_MCOLLECTIVE_PASSWORD: { get_param: mcollective_pass }
P_CONF_ACTIVEMQ_ADMIN_PASSWORD: { get_param: activemq_admin_pass }
P_CONF_ACTIVEMQ_AMQ_USER_PASSWORD: { get_param: activemq_user_pass }
P_BROKER_WAIT_HANDLE: { get_resource: broker1_wait_handle }
###
# Broker 2
###
broker2_instance:
type: OS::Nova::Server
properties:
name: oso_broker2
image: { get_param: broker_image }
flavor: { get_param: broker_server_flavor }
availability_zone: { get_param: secondary_avail_zone }
key_name: { get_param: key_name }
networks:
- port: { get_resource: broker2_port }
user_data:
str_replace:
template: |
#!/bin/bash -x
export CONF_BROKER_IP_ADDR=P_BROKER_FLOATING_IP
export CONF_DOMAIN=P_DOMAIN
export CONF_BROKER_HOSTNAME=P_BROKER_HOSTNAME
export CONF_NAMED_HOSTNAME=P_NAMED_HOSTNAME
export CONF_NAMED_IP_ADDR=P_NAMED_IP
export CONF_DATASTORE_REPLICANTS=P_REPLICANTS
export CONF_ACTIVEMQ_REPLICANTS=P_REPLICANTS
export CONF_INSTALL_METHOD='osoyum'
export CONF_OSE_REPOS_BASE=P_CONF_OSE_REPOS_BASE
export CONF_OSE_EXTRA_REPO_BASE=P_CONF_OSE_EXTRA_REPOS_BASE
export CONF_JBOSS_REPO_BASE=P_CONF_JBOSS_REPO_BASE
export CONF_INSTALL_COMPONENTS=broker,activemq,datastore
export CONF_ACTIONS=do_all_actions
export CONF_OPENSHIFT_USER1=P_CONF_OPENSHIFT_USER1
export CONF_OPENSHIFT_PASSWORD1=P_CONF_OPENSHIFT_PASSWORD1
export CONF_MONGODB_BROKER_USER=P_CONF_MONGODB_BROKER_USER
export CONF_MONGODB_BROKER_PASSWORD=P_CONF_MONGODB_BROKER_PASSWORD
export CONF_MCOLLECTIVE_USER=P_CONF_MCOLLECTIVE_USER
export CONF_MCOLLECTIVE_PASSWORD=P_CONF_MCOLLECTIVE_PASSWORD
export CONF_ACTIVEMQ_ADMIN_PASSWORD=P_CONF_ACTIVEMQ_ADMIN_PASSWORD
export CONF_ACTIVEMQ_AMQ_USER_PASSWORD=P_CONF_ACTIVEMQ_AMQ_USER_PASSWORD
while [ ! -f openshift.sh ]; do
echo "Attempting to fetch installer script"
curl -O https://raw.githubusercontent.com/jpeeler/openshift-extras/enterprise-2.0/enterprise/install-scripts/generic/openshift.sh -k
result=$?
echo "Attempt resulted in $result"
sleep 5
done
chmod +x ./openshift.sh
./openshift.sh 2>&1 | tee /tmp/openshift.out
setenforce 1
cd /etc/init.d
for i in `ls cloud-*`; do chkconfig $i off; done
# FIXME: shouldn't need this. DIB step? selinux enabled when pkg instld? see rpm -q --scripts ruby193-rubygem-passenger-native
#semodule -i /opt/rh/ruby193/root/usr/share/selinux/packages/ruby193-rubygem-passenger/ruby193-rubygem-passenger.pp 2>/dev/null
#fixfiles -R ruby193-rubygem-passenger restore
#fixfiles -R ruby193-rubygem-passenger-native restore
/usr/bin/cfn-signal -e 0 -s "Broker 2 setup complete" -i "P_BROKER_HOSTNAME.P_DOMAIN" "P_BROKER_WAIT_HANDLE"
RESULT=1
until [ $RESULT -eq 0 ]; do
bind_key=$(wget -q -O - --no-check-certificate "https://P_NAMED_IP/rsync_bind_key")
RESULT=$?
if [ $RESULT -ne 0 ]; then
echo 'Waiting for rsync bind key...'
sleep 5
fi
done
sed -i "s,\(BIND_KEYVALUE=\).*,\1\"$bind_key\"," /etc/openshift/plugins.d/openshift-origin-dns-nsupdate.conf
# prevents broker1 from attempting mongod setup while host is rebooting
RESULT=1
until [ $RESULT -eq 0 ]; do
timestamp=$(wget -q -O - --no-check-certificate "https://P_NAMED_IP/broker_up")
RESULT=$?
if [ $RESULT -ne 0 ]; then
echo "Waiting for broker1..."
sleep 5
else
echo "Broker1 up at $timestamp"
fi
done
echo "date >> /var/www/html/broker_up; restorecon -r /var/www/openshift" >> /etc/rc.local
reboot
params:
P_BROKER_FLOATING_IP: { get_attr: [ broker2_floating_ip, floating_ip_address ] }
P_DOMAIN: { get_param: domain }
P_HOSTS_DOMAIN: { get_param: hosts_domain }
P_BROKER_HOSTNAME: { get_param: broker2_hostname }
P_NAMED_HOSTNAME: { get_param: named_hostname }
P_NAMED_IP: { get_attr: [ broker1_floating_ip, floating_ip_address ] }
P_REPLICANTS: { get_param: replicants }
P_CONF_OSE_REPOS_BASE: { get_param: openshift_repo_base}
P_CONF_OSE_EXTRA_REPOS_BASE: { get_param: openshift_extra_repo_base}
P_CONF_JBOSS_REPO_BASE: { get_param: jboss_repo_base}
P_CONF_OPENSHIFT_USER1: { get_param: openshift_user1 }
P_CONF_OPENSHIFT_PASSWORD1: { get_param: openshift_pass1 }
P_CONF_MONGODB_BROKER_USER: { get_param: mongo_broker_user }
P_CONF_MONGODB_BROKER_PASSWORD: { get_param: mongo_broker_pass }
P_CONF_MCOLLECTIVE_USER: { get_param: mcollective_user }
P_CONF_MCOLLECTIVE_PASSWORD: { get_param: mcollective_pass }
P_CONF_ACTIVEMQ_ADMIN_PASSWORD: { get_param: activemq_admin_pass }
P_CONF_ACTIVEMQ_AMQ_USER_PASSWORD: { get_param: activemq_user_pass }
P_BROKER_WAIT_HANDLE: { get_resource: broker2_wait_handle}
###
# Broker3
###
broker3_instance:
type: OS::Nova::Server
properties:
name: oso_broker3
image: { get_param: broker_image }
flavor: { get_param: broker_server_flavor }
availability_zone: { get_param: primary_avail_zone }
key_name: { get_param: key_name }
networks:
- port: { get_resource: broker3_port }
user_data:
str_replace:
template: |
#!/bin/bash -x
export CONF_BROKER_IP_ADDR=P_BROKER_FLOATING_IP
export CONF_DOMAIN=P_DOMAIN
export CONF_BROKER_HOSTNAME=P_BROKER_HOSTNAME
export CONF_NAMED_HOSTNAME=P_NAMED_HOSTNAME
export CONF_NAMED_IP_ADDR=P_NAMED_IP
export CONF_DATASTORE_REPLICANTS=P_REPLICANTS
export CONF_ACTIVEMQ_REPLICANTS=P_REPLICANTS
export CONF_INSTALL_METHOD='osoyum'
export CONF_OSE_REPOS_BASE=P_CONF_OSE_REPOS_BASE
export CONF_OSE_EXTRA_REPO_BASE=P_CONF_OSE_EXTRA_REPOS_BASE
export CONF_JBOSS_REPO_BASE=P_CONF_JBOSS_REPO_BASE
export CONF_INSTALL_COMPONENTS=broker,activemq,datastore
export CONF_ACTIONS=do_all_actions
export CONF_OPENSHIFT_USER1=P_CONF_OPENSHIFT_USER1
export CONF_OPENSHIFT_PASSWORD1=P_CONF_OPENSHIFT_PASSWORD1
export CONF_MONGODB_BROKER_USER=P_CONF_MONGODB_BROKER_USER
export CONF_MONGODB_BROKER_PASSWORD=P_CONF_MONGODB_BROKER_PASSWORD
export CONF_MCOLLECTIVE_USER=P_CONF_MCOLLECTIVE_USER
export CONF_MCOLLECTIVE_PASSWORD=P_CONF_MCOLLECTIVE_PASSWORD
export CONF_ACTIVEMQ_ADMIN_PASSWORD=P_CONF_ACTIVEMQ_ADMIN_PASSWORD
export CONF_ACTIVEMQ_AMQ_USER_PASSWORD=P_CONF_ACTIVEMQ_AMQ_USER_PASSWORD
while [ ! -f openshift.sh ]; do
echo "Attempting to fetch installer script"
curl -O https://raw.githubusercontent.com/jpeeler/openshift-extras/enterprise-2.0/enterprise/install-scripts/generic/openshift.sh -k
result=$?
echo "Attempt resulted in $result"
sleep 5
done
chmod +x ./openshift.sh
./openshift.sh 2>&1 | tee /tmp/openshift.out
setenforce 1
cd /etc/init.d
for i in `ls cloud-*`; do chkconfig $i off; done
# FIXME: shouldn't need this. DIB step? selinux enabled when pkg instld? see rpm -q --scripts ruby193-rubygem-passenger-native
#semodule -i /opt/rh/ruby193/root/usr/share/selinux/packages/ruby193-rubygem-passenger/ruby193-rubygem-passenger.pp 2>/dev/null
#fixfiles -R ruby193-rubygem-passenger restore
#fixfiles -R ruby193-rubygem-passenger-native restore
/usr/bin/cfn-signal -e 0 -s "Broker 3 setup complete" -i "P_BROKER_HOSTNAME.P_DOMAIN" "P_BROKER_WAIT_HANDLE"
RESULT=1
until [ $RESULT -eq 0 ]; do
bind_key=$(wget -q -O - --no-check-certificate "https://P_NAMED_IP/rsync_bind_key")
RESULT=$?
if [ $RESULT -ne 0 ]; then
echo 'Waiting for rsync bind key...'
sleep 5
fi
done
sed -i "s,\(BIND_KEYVALUE=\).*,\1\"$bind_key\"," /etc/openshift/plugins.d/openshift-origin-dns-nsupdate.conf
# prevents broker1 from attempting mongod setup while host is rebooting
RESULT=1
until [ $RESULT -eq 0 ]; do
timestamp=$(wget -q -O - --no-check-certificate "https://P_NAMED_IP/broker_up")
RESULT=$?
if [ $RESULT -ne 0 ]; then
echo "Waiting for broker1..."
sleep 5
else
echo "Broker1 up at $timestamp"
fi
done
echo "date >> /var/www/html/broker_up; restorecon -r /var/www/openshift" >> /etc/rc.local
reboot
params:
P_BROKER_FLOATING_IP: { get_attr: [ broker3_floating_ip, floating_ip_address ] }
P_DOMAIN: { get_param: domain }
P_HOSTS_DOMAIN: { get_param: hosts_domain }
P_BROKER_HOSTNAME: { get_param: broker3_hostname }
P_NAMED_HOSTNAME: { get_param: named_hostname }
P_NAMED_IP: { get_attr: [ broker1_floating_ip, floating_ip_address ] }
P_REPLICANTS: { get_param: replicants }
P_CONF_OSE_REPOS_BASE: { get_param: openshift_repo_base}
P_CONF_OSE_EXTRA_REPOS_BASE: { get_param: openshift_extra_repo_base}
P_CONF_JBOSS_REPO_BASE: { get_param: jboss_repo_base}
P_CONF_OPENSHIFT_USER1: { get_param: openshift_user1 }
P_CONF_OPENSHIFT_PASSWORD1: { get_param: openshift_pass1 }
P_CONF_MONGODB_BROKER_USER: { get_param: mongo_broker_user }
P_CONF_MONGODB_BROKER_PASSWORD: { get_param: mongo_broker_pass }
P_CONF_MCOLLECTIVE_USER: { get_param: mcollective_user }
P_CONF_MCOLLECTIVE_PASSWORD: { get_param: mcollective_pass }
P_CONF_ACTIVEMQ_ADMIN_PASSWORD: { get_param: activemq_admin_pass }
P_CONF_ACTIVEMQ_AMQ_USER_PASSWORD: { get_param: activemq_user_pass }
P_BROKER_WAIT_HANDLE: { get_resource: broker3_wait_handle}
###
# Node
###
node1_instance:
type: OpenShift::Node::Server
properties:
key_name: { get_param: key_name }
domain: { get_param: domain }
hosts_domain: { get_param: hosts_domain }
broker1_hostname: { get_param: broker1_hostname }
broker1_floating_ip: { get_attr: [ broker1_floating_ip, floating_ip_address ] }
node_hostname: { get_param: node1_hostname }
load_bal_hostname: { get_param: load_bal_hostname }
node_image: { get_param: node_image }
replicants: { get_param: replicants }
cartridges: { get_param: cartridges }
openshift_repo_base: { get_param: openshift_repo_base }
openshift_extra_repo_base: { get_param: openshift_extra_repo_base }
jboss_repo_base: { get_param: jboss_repo_base }
public_net_id: { get_param: public_net_id }
private_net_id: { get_param: private_net_id }
private_subnet_id: { get_param: private_subnet_id }
mcollective_user: { get_param: mcollective_user }
mcollective_pass: { get_param: mcollective_pass }
activemq_admin_pass: { get_param: activemq_admin_pass }
activemq_user_pass: { get_param: activemq_user_pass }
avail_zone: { get_param: primary_avail_zone }
node_server_flavor: { get_param: node_server_flavor }
node2_instance:
type: OpenShift::Node::Server
properties:
key_name: { get_param: key_name }
domain: { get_param: domain }
hosts_domain: { get_param: hosts_domain }
broker1_hostname: { get_param: broker1_hostname }
broker1_floating_ip: { get_attr: [ broker1_floating_ip, floating_ip_address ] }
node_hostname: { get_param: node2_hostname }
load_bal_hostname: { get_param: load_bal_hostname }
node_image: { get_param: node_image }
replicants: { get_param: replicants }
cartridges: { get_param: cartridges }
openshift_repo_base: { get_param: openshift_repo_base }
openshift_extra_repo_base: { get_param: openshift_extra_repo_base }
jboss_repo_base: { get_param: jboss_repo_base }
public_net_id: { get_param: public_net_id }
private_net_id: { get_param: private_net_id }
private_subnet_id: { get_param: private_subnet_id }
mcollective_user: { get_param: mcollective_user }
mcollective_pass: { get_param: mcollective_pass }
activemq_admin_pass: { get_param: activemq_admin_pass }
activemq_user_pass: { get_param: activemq_user_pass }
avail_zone: { get_param: secondary_avail_zone }
node_server_flavor: { get_param: node_server_flavor }
node3_instance:
type: OpenShift::Node::Server
properties:
key_name: { get_param: key_name }
domain: { get_param: domain }
hosts_domain: { get_param: hosts_domain }
broker1_hostname: { get_param: broker1_hostname }
broker1_floating_ip: { get_attr: [ broker1_floating_ip, floating_ip_address ] }
node_hostname: { get_param: node3_hostname }
load_bal_hostname: { get_param: load_bal_hostname }
node_image: { get_param: node_image }
replicants: { get_param: replicants }
cartridges: { get_param: cartridges }
openshift_repo_base: { get_param: openshift_repo_base }
openshift_extra_repo_base: { get_param: openshift_extra_repo_base }
jboss_repo_base: { get_param: jboss_repo_base }
public_net_id: { get_param: public_net_id }
private_net_id: { get_param: private_net_id }
private_subnet_id: { get_param: private_subnet_id }
mcollective_user: { get_param: mcollective_user }
mcollective_pass: { get_param: mcollective_pass }
activemq_admin_pass: { get_param: activemq_admin_pass }
activemq_user_pass: { get_param: activemq_user_pass }
avail_zone: { get_param: secondary_avail_zone }
node_server_flavor: { get_param: node_server_flavor }
outputs:
console_url:
description: OpenShift Enterprise console URL
value:
str_replace:
template: |
https://host.domain/console
params:
host: { get_param: load_bal_hostname }
domain: { get_param: domain }
default_user:
description: OpenShift Enterprise default user
value: { get_param: openshift_user1 }
load_balancer_floating_ip:
description: load balancer floating IP address
value: { get_attr: [ lb_vip_floating_ip, floating_ip_address ] }


@@ -0,0 +1,19 @@
parameters:
# existing OpenStack keypair
key_name: mykey
domain: example.com
hosts_domain: example.com
broker1_floating_ip: 10.0.0.1
# comma-separated list of brokers in the HA set
replicants: broker1.example.com,broker2.example.com,broker3.example.com
# Name of glance image. Using prepped images will greatly reduce deploy time.
node_image: RHEL65-x86_64-node
activemq_admin_pass: password
activemq_user_pass: password
mcollective_pass: password
mongo_broker_pass: password
openshift_pass1: password
# Use 'neutron net-list' and 'neutron subnet-list' and replace these values
private_net_id: ec6c8237-1368-42c2-af6a-2c5a6b41951b
public_net_id: c5882794-fa7d-46b2-b90a-e37e47fabdf8
private_subnet_id: 8977e24c-32c6-4fb1-ae9f-6f70c16ecf0d
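The `replicants` value above is a plain comma-separated string; the installer splits it into a host list before configuring the MongoDB replica set and ActiveMQ pool. A minimal sketch of that parsing in Python (the variable names here are illustrative, not taken from the installer):

```python
# Parse the comma-separated HA broker list the same way a consumer of
# CONF_DATASTORE_REPLICANTS / CONF_ACTIVEMQ_REPLICANTS would.
replicants = "broker1.example.com,broker2.example.com,broker3.example.com"
hosts = [h.strip() for h in replicants.split(",") if h.strip()]
print(len(hosts))   # number of brokers in the HA set
print(hosts[0])
```

Note that the list must contain fully qualified domain names with no spaces, matching the `replicants` parameter description in the node template.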


@ -0,0 +1,244 @@
heat_template_version: 2013-05-23
description: >
Template (HOT) for deploying an OpenShift node with an attached Cinder volume
and a floating IP. May be used stand-alone for scaling out nodes or as part
of the HA nested stack.
parameter_groups:
- label: General parameters
description: General OpenShift parameters
parameters:
- broker1_hostname
- broker1_floating_ip
- node_hostname
- load_bal_hostname
- node_image
- node_server_flavor
- label: Networking parameters
description: Networking-related parameters
parameters:
- domain
- hosts_domain
- named_hostname
- named_ip
- replicants
- cartridges
- public_net_id
- private_net_id
- private_subnet_id
- label: Credentials
description: >
Username and password parameters for OpenShift and dependent services
parameters:
- mcollective_user
- mcollective_pass
- activemq_admin_pass
- activemq_user_pass
parameters:
key_name:
description: Name of an existing keypair to enable SSH access to the instances
type: string
domain:
description: Your DNS domain
type: string
hosts_domain:
description: OpenShift hosts domain
type: string
avail_zone:
description: >
Availability zone to ensure distribution of brokers and nodes
type: string
default: nova
node_server_flavor:
description: Flavor of node servers
type: string
default: m1.medium
node_vol_size:
description: Node cinder volume size (GB)
type: number
default: 12
broker1_hostname:
description: Broker 1 hostname
type: string
default: broker1
broker1_floating_ip:
description: Broker 1 floating IP address
type: string
node_hostname:
description: Node hostname
type: string
default: node
load_bal_hostname:
description: Load balancer hostname
type: string
default: broker
node_image:
description: Node image name
type: string
default: RHEL65-x86_64-node
openshift_extra_repo_base:
description: OSE Extra Repository Base URL
type: string
default: ""
openshift_repo_base:
description: OSE Repository Base URL
type: string
default: ""
jboss_repo_base:
description: JBoss Repository Base URL
type: string
default: ""
named_hostname:
description: Hostname of the named (BIND) server
type: string
default: broker1
named_ip:
description: IP address of the named (BIND) server
type: string
default: ""
upstream_dns_ip:
description: Upstream DNS IP address for zone transfer
type: string
default: ""
replicants:
description: >
Comma-separated list (no spaces) of broker hosts (FQDN) running ActiveMQ and MongoDB
type: string
cartridges:
description: >
Cartridges to install. "all" for all cartridges; "standard" for all cartridges except JBossEWS and JBossEAP
type: string
default: "cron,diy,haproxy,mysql,nodejs,perl,php,postgresql,python,ruby"
public_net_id:
type: string
description: >
ID of the public network from which floating IP addresses will be allocated
private_net_id:
type: string
description: ID of private network into which servers get deployed
private_subnet_id:
type: string
description: ID of the private subnet into which servers get deployed
mcollective_user:
description: MCollective username
type: string
default: mcollective
mcollective_pass:
description: MCollective password
type: string
hidden: true
activemq_admin_pass:
description: ActiveMQ admin user password
type: string
hidden: true
activemq_user_pass:
description: ActiveMQ user password
type: string
hidden: true
resources:
oso_node_sec_grp:
type: AWS::EC2::SecurityGroup
properties:
GroupDescription: Node firewall rules
SecurityGroupIngress:
- {IpProtocol: tcp, FromPort: '22', ToPort: '22', CidrIp: 0.0.0.0/0}
- {IpProtocol: udp, FromPort: '53', ToPort: '53', CidrIp: 0.0.0.0/0}
- {IpProtocol: tcp, FromPort: '53', ToPort: '53', CidrIp: 0.0.0.0/0}
- {IpProtocol: tcp, FromPort: '80', ToPort: '80', CidrIp: 0.0.0.0/0}
- {IpProtocol: tcp, FromPort: '443', ToPort: '443', CidrIp: 0.0.0.0/0}
- {IpProtocol: tcp, FromPort: '8000', ToPort: '8000', CidrIp: 0.0.0.0/0}
- {IpProtocol: tcp, FromPort: '8443', ToPort: '8443', CidrIp: 0.0.0.0/0}
- {IpProtocol: tcp, FromPort: '2303', ToPort: '2308', CidrIp: 0.0.0.0/0}
- {IpProtocol: tcp, FromPort: '35531', ToPort: '65535', CidrIp: 0.0.0.0/0}
- {IpProtocol: tcp, FromPort: '27017', ToPort: '27017', CidrIp: 0.0.0.0/0}
node_port:
type: OS::Neutron::Port
properties:
security_groups: [{ get_resource: oso_node_sec_grp }]
network_id: { get_param: private_net_id }
fixed_ips:
- subnet_id: { get_param: private_subnet_id }
node_floating_ip:
type: OS::Neutron::FloatingIP
properties:
floating_network_id: { get_param: public_net_id }
port_id: { get_resource: node_port }
###
# Node
###
node_instance:
type: OS::Nova::Server
properties:
name: oso_node
image: { get_param: node_image }
flavor: { get_param: node_server_flavor }
availability_zone: { get_param: avail_zone }
key_name: { get_param: key_name }
networks:
- port: { get_resource: node_port }
user_data:
str_replace:
template: |
#!/bin/bash -x
export CONF_BROKER_IP_ADDR=P_BROKER_FLOATING_IP
export CONF_NODE_IP_ADDR=P_NODE_FLOATING_IP
export CONF_DOMAIN=P_DOMAIN
export CONF_BROKER_HOSTNAME=P_LOAD_BAL_HOSTNAME
export CONF_NODE_HOSTNAME=P_NODE_HOSTNAME
export CONF_NAMED_HOSTNAME=P_NAMED_HOSTNAME
export CONF_NAMED_IP_ADDR=P_NAMED_IP
export CONF_DATASTORE_REPLICANTS=P_REPLICANTS
export CONF_ACTIVEMQ_REPLICANTS=P_REPLICANTS
export CONF_CARTRIDGES=P_CONF_CARTRIDGES
export CONF_INSTALL_METHOD='osoyum'
export CONF_OSE_REPOS_BASE=P_CONF_OSE_REPOS_BASE
export CONF_OSE_EXTRA_REPO_BASE=P_CONF_OSE_EXTRA_REPOS_BASE
export CONF_JBOSS_REPO_BASE=P_CONF_JBOSS_REPO_BASE
export CONF_INSTALL_COMPONENTS=node
export CONF_ACTIONS=do_all_actions
export CONF_MCOLLECTIVE_USER=P_CONF_MCOLLECTIVE_USER
export CONF_MCOLLECTIVE_PASSWORD=P_CONF_MCOLLECTIVE_PASSWORD
export CONF_ACTIVEMQ_ADMIN_PASSWORD=P_CONF_ACTIVEMQ_ADMIN_PASSWORD
export CONF_ACTIVEMQ_AMQ_USER_PASSWORD=P_CONF_ACTIVEMQ_AMQ_USER_PASSWORD
while [ ! -f openshift.sh ]; do
echo "Attempting to fetch installer script"
curl -O https://raw.githubusercontent.com/jpeeler/openshift-extras/enterprise-2.0/enterprise/install-scripts/generic/openshift.sh -k
result=$?
echo "Attempt resulted in $result"
sleep 5
done
chmod +x ./openshift.sh
./openshift.sh 2>&1 | tee /tmp/openshift.out
setenforce 1
cd /etc/init.d
for i in cloud-*; do chkconfig "$i" off; done
reboot
params:
P_BROKER_FLOATING_IP: { get_param: broker1_floating_ip }
P_NODE_FLOATING_IP: { get_attr: [ node_floating_ip, floating_ip_address ] }
P_DOMAIN: { get_param: domain }
P_HOSTS_DOMAIN: { get_param: hosts_domain }
P_LOAD_BAL_HOSTNAME: { get_param: load_bal_hostname }
P_NODE_HOSTNAME: { get_param: node_hostname }
P_NAMED_HOSTNAME: { get_param: named_hostname }
P_NAMED_IP: { get_param: broker1_floating_ip }
P_REPLICANTS: { get_param: replicants }
P_CONF_CARTRIDGES: { get_param: cartridges }
P_CONF_OSE_REPOS_BASE: { get_param: openshift_repo_base }
P_CONF_OSE_EXTRA_REPOS_BASE: { get_param: openshift_extra_repo_base }
P_CONF_JBOSS_REPO_BASE: { get_param: jboss_repo_base }
P_CONF_MCOLLECTIVE_USER: { get_param: mcollective_user }
P_CONF_MCOLLECTIVE_PASSWORD: { get_param: mcollective_pass }
P_CONF_ACTIVEMQ_ADMIN_PASSWORD: { get_param: activemq_admin_pass }
P_CONF_ACTIVEMQ_AMQ_USER_PASSWORD: { get_param: activemq_user_pass }
outputs:
node_floating_ip:
value: { get_attr: [ node_floating_ip, floating_ip_address ] }
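The `str_replace` intrinsic used in `user_data` above substitutes each key in `params` literally into the template text. A rough sketch of those semantics in Python (this is an illustration of the behavior, not Heat's actual implementation):

```python
def str_replace(template, params):
    # Naive sketch of Heat's str_replace: literal token substitution.
    # Replace longer keys first so a token that is a prefix of another
    # (e.g. P_NODE vs P_NODE_HOSTNAME) is not clobbered partway through.
    for key in sorted(params, key=len, reverse=True):
        template = template.replace(key, params[key])
    return template

rendered = str_replace(
    "export CONF_DOMAIN=P_DOMAIN",
    {"P_DOMAIN": "example.com"},
)
print(rendered)  # export CONF_DOMAIN=example.com
```

This is why the placeholder tokens in the template (`P_BROKER_FLOATING_IP`, `P_NODE_HOSTNAME`, and so on) are written to be unambiguous: substitution is purely textual, with no shell or YAML awareness.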