TOSCA: create CSAR for monitoring use case

This CSAR is created for the TOSCA monitoring use case and provides a good example
with custom type definitions, shell scripts and Python artifacts. It can be
used as is and will be used to provide support for CSAR translation.

Change-Id: I431d375d921a97b3ca8b62a65286ba1c6d9b0272
spzala 2015-06-11 07:58:50 -07:00
parent fb5793834a
commit 03afcc8cde
32 changed files with 718 additions and 0 deletions

@@ -0,0 +1,13 @@
tosca_definitions_version: tosca_simple_yaml_1_0_0

description: >
  collectd is a daemon which gathers statistics about the system it is running on.

node_types:
  tosca.nodes.SoftwareComponent.Collectd:
    derived_from: tosca.nodes.SoftwareComponent
    requirements:
      - log_endpoint:
          capability: tosca.capabilities.Endpoint
          node: tosca.nodes.SoftwareComponent.Logstash
          relationship: tosca.relationships.ConnectsTo

@@ -0,0 +1,11 @@
tosca_definitions_version: tosca_simple_yaml_1_0_0

description: >
  Elasticsearch is an open-source search engine built on top of Apache Lucene, a full-text search-engine library.

node_types:
  tosca.nodes.SoftwareComponent.Elasticsearch:
    derived_from: tosca.nodes.SoftwareComponent
    capabilities:
      search_endpoint:
        type: tosca.capabilities.Endpoint

@@ -0,0 +1,16 @@
tosca_definitions_version: tosca_simple_yaml_1_0_0

description: >
  Kibana is an open source analytics and visualization platform designed to work with Elasticsearch.
  You use Kibana to search, view, and interact with data stored in Elasticsearch.

node_types:
  tosca.nodes.SoftwareComponent.Kibana:
    derived_from: tosca.nodes.SoftwareComponent
    requirements:
      - search_endpoint:
          capability: tosca.capabilities.Endpoint
          node: tosca.nodes.SoftwareComponent.Elasticsearch
          relationship: tosca.relationships.ConnectsTo

@@ -0,0 +1,25 @@
tosca_definitions_version: tosca_simple_yaml_1_0_0

description: >
  Logstash is a tool for receiving, processing and outputting logs. All kinds of logs. System logs, webserver logs,
  error logs, application logs, and just about anything you can throw at it.

node_types:
  tosca.nodes.SoftwareComponent.Logstash:
    derived_from: tosca.nodes.SoftwareComponent
    requirements:
      - search_endpoint:
          capability: tosca.capabilities.Endpoint
          node: tosca.nodes.SoftwareComponent.Elasticsearch
          relationship:
            type: tosca.relationships.ConnectsTo
            interfaces:
              Configure:
                pre_configure_source:
                  inputs:
                    elasticsearch_ip:
                      type: string
    capabilities:
      log_endpoint:
        type: tosca.capabilities.Endpoint

@@ -0,0 +1,29 @@
tosca_definitions_version: tosca_simple_yaml_1_0_0

description: >
  Pizza store app that allows you to explore the features provided by PayPal's REST APIs.
  More detail can be found at https://github.com/paypal/rest-api-sample-app-nodejs/

node_types:
  tosca.nodes.WebApplication.PayPalPizzaStore:
    derived_from: tosca.nodes.WebApplication
    properties:
      github_url:
        required: no
        type: string
        description: Location of the application on GitHub.
        default: https://github.com/sample.git
    requirements:
      # WebApplication inherits Compute, so host is implied.
      - database_connection:
          capability: tosca.capabilities.Endpoint.Database
          node: tosca.nodes.Database
          relationship: tosca.relationships.ConnectsTo
    interfaces:
      Standard:
        configure:
          inputs:
            github_url:
              type: string
            mongodb_ip:
              type: string

@@ -0,0 +1,13 @@
tosca_definitions_version: tosca_simple_yaml_1_0_0

description: >
  RSYSLOG is the Rocket-fast SYStem for LOG processing.

node_types:
  tosca.nodes.SoftwareComponent.Rsyslog:
    derived_from: tosca.nodes.SoftwareComponent
    requirements:
      - log_endpoint:
          capability: tosca.capabilities.Endpoint
          node: tosca.nodes.SoftwareComponent.Logstash
          relationship: tosca.relationships.ConnectsTo

@@ -0,0 +1,228 @@
tosca_definitions_version: tosca_simple_yaml_1_0_0

description: >
  This TOSCA simple profile deploys nodejs, mongodb, elasticsearch, logstash and kibana, each on a separate server,
  with monitoring enabled for the nodejs server where a sample nodejs application is running. Rsyslog and collectd
  are installed on the nodejs server.

imports:
  - paypalpizzastore_nodejs_app.yaml
  - elasticsearch.yaml
  - logstash.yaml
  - kibana.yaml
  - collectd.yaml
  - rsyslog.yaml

dsl_definitions:
  host_capabilities: &host_capabilities
    # container properties (flavor)
    disk_size: 10 GB
    num_cpus: { get_input: my_cpus }
    mem_size: 4096 MB
  os_capabilities: &os_capabilities
    architecture: x86_64
    type: Linux
    distribution: Ubuntu
    version: 14.04

topology_template:
  inputs:
    my_cpus:
      type: integer
      description: Number of CPUs for the server.
      constraints:
        - valid_values: [ 1, 2, 4, 8 ]
    github_url:
      type: string
      description: The URL to download nodejs.
      default: https://github.com/sample.git

  node_templates:
    paypal_pizzastore:
      type: tosca.nodes.WebApplication.PayPalPizzaStore
      properties:
        github_url: { get_input: github_url }
      requirements:
        - host:
            node: nodejs
        - database_connection:
            node: mongo_db
      interfaces:
        Standard:
          configure:
            implementation: Scripts/nodejs/config.sh
            inputs:
              github_url: http://github.com/paypal/rest-api-sample-app-nodejs.git
              mongodb_ip: { get_attribute: [mongo_server, private_address] }
          start: Scripts/nodejs/start.sh
    nodejs:
      type: tosca.nodes.WebServer
      requirements:
        - host:
            node: app_server
      interfaces:
        Standard:
          create: Scripts/nodejs/create.sh
    mongo_db:
      type: tosca.nodes.Database
      requirements:
        - host:
            node: mongo_dbms
      interfaces:
        Standard:
          create: Scripts/mongodb/create_database.sh
    mongo_dbms:
      type: tosca.nodes.DBMS
      requirements:
        - host:
            node: mongo_server
      interfaces:
        Standard:
          create: Scripts/mongodb/create.sh
          configure:
            implementation: Scripts/mongodb/config.sh
            inputs:
              mongodb_ip: { get_attribute: [mongo_server, private_address] }
          start: Scripts/mongodb/start.sh
    elasticsearch:
      type: tosca.nodes.SoftwareComponent.Elasticsearch
      requirements:
        - host:
            node: elasticsearch_server
      interfaces:
        Standard:
          create: Scripts/elasticsearch/create.sh
          start: Scripts/elasticsearch/start.sh
    logstash:
      type: tosca.nodes.SoftwareComponent.Logstash
      requirements:
        - host:
            node: logstash_server
        - search_endpoint:
            node: elasticsearch
            capability: search_endpoint
            relationship:
              type: tosca.relationships.ConnectsTo
              interfaces:
                Configure:
                  pre_configure_source:
                    implementation: Python/logstash/configure_elasticsearch.py
                    inputs:
                      elasticsearch_ip: { get_attribute: [elasticsearch_server, private_address] }
      interfaces:
        Standard:
          create: Scripts/logstash/create.sh
          start: Scripts/logstash/start.sh
    kibana:
      type: tosca.nodes.SoftwareComponent.Kibana
      requirements:
        - host:
            node: kibana_server
        - search_endpoint:
            node: elasticsearch
            capability: search_endpoint
      interfaces:
        Standard:
          create: Scripts/kibana/create.sh
          configure:
            implementation: Scripts/kibana/config.sh
            inputs:
              elasticsearch_ip: { get_attribute: [elasticsearch_server, private_address] }
              kibana_ip: { get_attribute: [kibana_server, private_address] }
          start: Scripts/kibana/start.sh
    app_collectd:
      type: tosca.nodes.SoftwareComponent.Collectd
      requirements:
        - host:
            node: app_server
        - log_endpoint:
            node: logstash
            capability: log_endpoint
            relationship:
              type: tosca.relationships.ConnectsTo
              interfaces:
                Configure:
                  pre_configure_target:
                    implementation: Python/logstash/configure_collectd.py
      interfaces:
        Standard:
          create: Scripts/collectd/create.sh
          configure:
            implementation: Python/collectd/config.py
            inputs:
              logstash_ip: { get_attribute: [logstash_server, private_address] }
          start: Scripts/collectd/start.sh
    app_rsyslog:
      type: tosca.nodes.SoftwareComponent.Rsyslog
      requirements:
        - host:
            node: app_server
        - log_endpoint:
            node: logstash
            capability: log_endpoint
            relationship:
              type: tosca.relationships.ConnectsTo
              interfaces:
                Configure:
                  pre_configure_target:
                    implementation: Python/logstash/configure_rsyslog.py
      interfaces:
        Standard:
          create: Scripts/rsyslog/create.sh
          configure:
            implementation: Python/rsyslog/config.py
            inputs:
              logstash_ip: { get_attribute: [logstash_server, private_address] }
          start: Scripts/rsyslog/start.sh
    app_server:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties: *host_capabilities
        os:
          properties: *os_capabilities
    mongo_server:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties: *host_capabilities
        os:
          properties: *os_capabilities
    elasticsearch_server:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties: *host_capabilities
        os:
          properties: *os_capabilities
    logstash_server:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties: *host_capabilities
        os:
          properties: *os_capabilities
    kibana_server:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties: *host_capabilities
        os:
          properties: *os_capabilities

  outputs:
    nodejs_url:
      description: URL for the nodejs server, http://<IP>:3000
      value: { get_attribute: [ app_server, private_address ] }
    mongodb_url:
      description: URL for the mongodb server.
      value: { get_attribute: [ mongo_server, private_address ] }
    elasticsearch_url:
      description: URL for the elasticsearch server.
      value: { get_attribute: [ elasticsearch_server, private_address ] }
    logstash_url:
      description: URL for the logstash server.
      value: { get_attribute: [ logstash_server, private_address ] }
    kibana_url:
      description: URL for the kibana server.
      value: { get_attribute: [ kibana_server, private_address ] }

@@ -0,0 +1,25 @@
#!/usr/bin/python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This script configures collectd to send metric data to the
# logstash server port 25826
# The environment variable logstash_ip is expected to be set up
import os
with open("/etc/collectd/collectd.conf.d/tosca_elk.conf", "w") as fh:
fh.write("""
LoadPlugin network
<Plugin network>
Server "%s" "25826"
</Plugin>
""" % (os.environ['logstash_ip']))

@@ -0,0 +1,28 @@
#!/usr/bin/python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This script configures the logstash input using the udp protocol on
# port 25826. This is intended to receive data from collectd from
# any source
with open("/etc/logstash/collectd.conf", "w") as fh:
fh.write("""
input {
udp {
port => 25826 # 25826 is the default for collectd
buffer_size => 1452 # 1452 is the default for collectd
codec => collectd { }
tags => ["metrics"]
type => "collectd"
}
}""")

@@ -0,0 +1,25 @@
#!/usr/bin/python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This script configures the logstash output to forward to elasticsearch
# The environment variable elasticsearch_ip is expected to be set up
import os
with open("/etc/logstash/elasticsearch.conf", 'w') as fh:
fh.write("""
output {
elasticsearch {
action => index
host => "%s"
}
}""" % (os.environ['elasticsearch_ip']))

@@ -0,0 +1,25 @@
#!/usr/bin/python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This script configures the logstash input using the RELP protocol on
# port 2514. This is intended to receive logs from rsyslog from
# any source
with open("/etc/logstash/rsyslog.conf", "w") as fh:
fh.write("""
input {
relp {
port => 2514
tags => ["logs"]
}
}""")

@@ -0,0 +1,23 @@
#!/usr/bin/python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This script configures the output for rsyslogd to send logs to the
# logstash server port 2514 using the RELP protocol
# The environment variable logstash_ip is expected to be set up
import os
with open("/etc/rsyslog.d/tosca_elk.conf", "w") as fh:
fh.write("""
module(load="omrelp")
action(type="omrelp" target="%s" port="2514")
""" % (os.environ['logstash_ip']))

@@ -0,0 +1,7 @@
README:
This TOSCA simple profile deploys nodejs, mongodb, elasticsearch, logstash and kibana, each on a separate server, with monitoring enabled for the nodejs server where a sample nodejs application is running. Rsyslog and collectd are installed on the nodejs server.
Entry information for processing through an orchestrator is contained in the file TOSCA-Metadata/TOSCA.meta. This file provides high-level information such as the CSAR version and the creator of the CSAR, and it points to the entry template under the 'Entry-Definitions' key. The entry template itself may point to one or more files that define TOSCA base types (unless these are provided by the orchestrator as built-in TOSCA base types) as well as other non-normative types; these are typically listed under the 'imports' section of the entry template. These type definitions are read and processed by the orchestrator or the TOSCA parser to build an internal graph of the dependencies and relationships between the various TOSCA types. The entry template may also reference the artifacts required for deployment, which are processed accordingly.
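
For illustration only, the Python sketch below does what the paragraph above describes from a consumer's point of view: open the CSAR (a zip archive), read TOSCA-Metadata/TOSCA.meta, and pick out the 'Entry-Definitions' key that names the entry template. The archive name tosca_elk.csar is an assumption for this example, not something defined by the CSAR itself.

#!/usr/bin/python
# Sketch (not part of the CSAR): locate the entry template of a CSAR archive
# by reading TOSCA-Metadata/TOSCA.meta. The archive name is an assumption.
import zipfile

csar_path = 'tosca_elk.csar'

with zipfile.ZipFile(csar_path) as csar:
    meta = csar.read('TOSCA-Metadata/TOSCA.meta').decode('utf-8')

# TOSCA.meta is a simple block of "Key: Value" lines
keys = {}
for line in meta.splitlines():
    if ':' in line:
        name, value = line.split(':', 1)
        keys[name.strip()] = value.strip()

print('CSAR-Version: %s' % keys.get('CSAR-Version'))
print('Entry-Definitions: %s' % keys.get('Entry-Definitions'))
# An orchestrator or the TOSCA parser would now load the entry template named
# by Entry-Definitions, follow its 'imports' section to the custom type
# definitions, and build the dependency graph this README describes.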

@@ -0,0 +1,20 @@
#!/bin/bash -x
# This script installs collectd for monitoring data
#Trying to avoid multiple apt-get's running simultaneously (in the
#rare occasion that the apt-get command fails rerun the script).
while [[ "$(ps -A | grep apt-get | awk '{print $1}')" != "" ]]; do
echo "Waiting for the other apt-get process to complete ..."
r=$RANDOM && let "sec=$r/10000" && let "mil=($r%10000)/10"
sleep $sec.$mil
done
apt-get update
#Trying to avoid multiple apt-get's running simultaneously (in the
#rare occasion that the apt-get command fails rerun the script).
while [[ "$(ps -A | grep apt-get | awk '{print $1}')" != "" ]]; do
echo "Waiting for the other apt-get process to complete ..."
r=$RANDOM && let "sec=$r/10000" && let "mil=($r%10000)/10"
sleep $sec.$mil
done
apt-get install -y collectd

@@ -0,0 +1,4 @@
#!/bin/sh -x
# This script starts collectd as a service in init.d
service collectd stop
service collectd start

@@ -0,0 +1,35 @@
#!/bin/bash -x
# This script installs java and elasticsearch
#Trying to avoid multiple apt-get's running simultaneously (in the
#rare occasion that the apt-get command fails rerun the script).
while [[ "$(ps -A | grep apt-get | awk '{print $1}')" != "" ]]; do
echo "Waiting for the other apt-get process to complete ..."
r=$RANDOM && let "sec=$r/10000" && let "mil=($r%10000)/10"
sleep $sec.$mil
done
apt-get update
#Trying to avoid multiple apt-get's running simultaneously (in the
#rare occasion that the apt-get command fails rerun the script).
while [[ "$(ps -A | grep apt-get | awk '{print $1}')" != "" ]]; do
echo "Waiting for the other apt-get process to complete ..."
r=$RANDOM && let "sec=$r/10000" && let "mil=($r%10000)/10"
sleep $sec.$mil
done
apt-get install -y openjdk-7-jre-headless
wget -qO - https://packages.elasticsearch.org/GPG-KEY-elasticsearch | apt-key add -
echo "deb http://packages.elasticsearch.org/elasticsearch/1.5/debian stable main" | tee -a /etc/apt/sources.list
#Trying to avoid multiple apt-get's running simultaneously (in the
#rare occasion that the apt-get command fails rerun the script).
while [[ "$(ps -A | grep apt-get | awk '{print $1}')" != "" ]]; do
echo "Waiting for the other apt-get process to complete ..."
r=$RANDOM && let "sec=$r/10000" && let "mil=($r%10000)/10"
sleep $sec.$mil
done
apt-get install -y elasticsearch
# set up to run as service
update-rc.d elasticsearch defaults 95 10

@@ -0,0 +1,4 @@
#!/bin/sh -x
# This script starts elasticsearch as a service in init.d
service elasticsearch stop
service elasticsearch start

@@ -0,0 +1,7 @@
#!/bin/sh -x
# This script configures kibana to connect to the elasticsearch server
# to access data and to export the app url on port 5601:
# The environment variables elasticsearch_ip and kibana_ip are expected
# to be set up.
sed -i 's/localhost/'$elasticsearch_ip'/' /opt/kibana/kibana-4.0.1-linux-x64/config/kibana.yml
sed -i 's/0.0.0.0/'$kibana_ip'/' /opt/kibana/kibana-4.0.1-linux-x64/config/kibana.yml

@@ -0,0 +1,13 @@
#!/bin/sh -x
# This script installs kibana and sets it up to run as a service in init.d
mkdir /opt/kibana
cd /opt/kibana
wget https://download.elasticsearch.org/kibana/kibana/kibana-4.0.1-linux-x64.tar.gz
tar xzvf kibana-4.0.1-linux-x64.tar.gz
# set up to run as service
cd /etc/init.d
wget https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/bce61d85643c2dcdfbc2728c55a41dab444dca20/kibana4
chmod +x kibana4
sed -i 's/KIBANA_BIN=\/opt\/kibana\/bin/KIBANA_BIN=\/opt\/kibana\/kibana-4.0.1-linux-x64\/bin/' kibana4
update-rc.d kibana4 defaults 96 9

@@ -0,0 +1,4 @@
#!/bin/sh -x
# This script starts kibana as a service in init.d
service kibana4 stop
service kibana4 start

@@ -0,0 +1,41 @@
#!/bin/bash -x
# This script installs java, logstash and the contrib package for logstash
# install java as prereq
#Trying to avoid multiple apt-get's running simultaneously (in the
#rare occasion that the apt-get command fails rerun the script).
while [[ "$(ps -A | grep apt-get | awk '{print $1}')" != "" ]]; do
echo "Waiting for the other apt-get process to complete ..."
r=$RANDOM && let "sec=$r/10000" && let "mil=($r%10000)/10"
sleep $sec.$mil
done
apt-get update
#Trying to avoid multiple apt-get's running simultaneously (in the
#rare occasion that the apt-get command fails rerun the script).
while [[ "$(ps -A | grep apt-get | awk '{print $1}')" != "" ]]; do
echo "Waiting for the other apt-get process to complete ..."
r=$RANDOM && let "sec=$r/10000" && let "mil=($r%10000)/10"
sleep $sec.$mil
done
apt-get install -y openjdk-7-jre-headless
mkdir /etc/logstash
# install by apt-get from repo
wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | apt-key add -
echo "deb http://packages.elasticsearch.org/logstash/1.4/debian stable main" | tee -a /etc/apt/sources.list
#Trying to avoid multiple apt-get's running simultaneously (in the
#rare occasion that the apt-get command fails rerun the script).
while [[ "$(ps -A | grep apt-get | awk '{print $1}')" != "" ]]; do
echo "Waiting for the other apt-get process to complete ..."
r=$RANDOM && let "sec=$r/10000" && let "mil=($r%10000)/10"
sleep $sec.$mil
done
apt-get install -y logstash
# install contrib to get the relp plugin
/opt/logstash/bin/plugin install contrib
# set up to run as service
update-rc.d logstash defaults 95 10

@@ -0,0 +1,4 @@
#!/bin/sh -x
# Run logstash as service in init.d
service logstash stop
service logstash start

@@ -0,0 +1,7 @@
#!/bin/sh -x
# Edit the file /etc/mongodb.conf, update with real IP of Mongo server
# This script configures the mongodb server to export its service on
# the server IP
# bind_ip = 127.0.0.1 -> bind_ip = <IP for Mongo server>
# The environment variable mongodb_ip is expected to be set up
sed -i "s/127.0.0.1/$mongodb_ip/" /etc/mongodb.conf

@@ -0,0 +1,20 @@
#!/bin/bash -x
# This script installs mongodb
#Trying to avoid multiple apt-get's running simultaneously (in the
#rare occasion that the apt-get command fails rerun the script).
while [[ "$(ps -A | grep apt-get | awk '{print $1}')" != "" ]]; do
echo "Waiting for the other apt-get process to complete ..."
r=$RANDOM && let "sec=$r/10000" && let "mil=($r%10000)/10"
sleep $sec.$mil
done
apt-get update
#Trying to avoid multiple apt-get's running simultaneously (in the
#rare occasion that the apt-get command fails rerun the script).
while [[ "$(ps -A | grep apt-get | awk '{print $1}')" != "" ]]; do
echo "Waiting for the other apt-get process to complete ..."
r=$RANDOM && let "sec=$r/10000" && let "mil=($r%10000)/10"
sleep $sec.$mil
done
apt-get install -y mongodb

@@ -0,0 +1,5 @@
#!/bin/bash
echo "conn = new Mongo();" > setup.js
echo "db = conn.getDB('paypal_pizza');" >> setup.js
echo "db.about.insert({'name': 'PayPal Pizza Store'});" >> setup.js
mongo setup.js

@@ -0,0 +1,6 @@
#!/bin/sh -x
# This script starts mongodb
/etc/init.d/mongodb stop
rm /var/lib/mongodb/mongod.lock
mongod --repair
mongod --dbpath /var/lib/mongodb --fork --logpath /var/log/mongod.log

@@ -0,0 +1,28 @@
#!/bin/sh -x
# This script installs an app for nodejs: the app intended is the paypal app
# and it is configured to connect to the mongodb server
# The environment variables github_url and mongodb_ip are expected to be set up
export app_dir=/opt/app
git clone $github_url /opt/app
if [ -f /opt/app/package.json ]; then
cd /opt/app/ && npm install
sed -i "s/localhost/$mongodb_ip/" config.json
fi
cat > /etc/init/nodeapp.conf <<EOS
description "node.js app"
start on (net-device-up
and local-filesystems
and runlevel [2345])
stop on runlevel [!2345]
expect fork
respawn
script
export HOME=/
export NODE_PATH=/usr/lib/node
exec /usr/bin/node ${app_dir}/app.js >> /var/log/nodeapp.log 2>&1 &
end script
EOS

@@ -0,0 +1,21 @@
#!/bin/bash -x
# This script installs nodejs and its prerequisites
add-apt-repository ppa:chris-lea/node.js
#Trying to avoid multiple apt-get's running simultaneously (in the
#rare occasion that the apt-get command fails rerun the script).
while [[ "$(ps -A | grep apt-get | awk '{print $1}')" != "" ]]; do
echo "Waiting for the other apt-get process to complete ..."
r=$RANDOM && let "sec=$r/10000" && let "mil=($r%10000)/10"
sleep $sec.$mil
done
apt-get update
#Trying to avoid multiple apt-get's running simultaneously (in the
#rare occasion that the apt-get command fails rerun the script).
while [[ "$(ps -A | grep apt-get | awk '{print $1}')" != "" ]]; do
echo "Waiting for the other apt-get process to complete ..."
r=$RANDOM && let "sec=$r/10000" && let "mil=($r%10000)/10"
sleep $sec.$mil
done
apt-get install -y nodejs build-essential

@@ -0,0 +1,3 @@
#!/bin/sh -x
# This script starts the nodejs application
start nodeapp

@@ -0,0 +1,20 @@
#!/bin/bash -x
# This script installs rsyslog and the library for RELP
#Trying to avoid multiple apt-get's running simultaneously (in the
#rare occasion that the apt-get command fails rerun the script).
while [[ "$(ps -A | grep apt-get | awk '{print $1}')" != "" ]]; do
echo "Waiting for the other apt-get process to complete ..."
r=$RANDOM && let "sec=$r/10000" && let "mil=($r%10000)/10"
sleep $sec.$mil
done
apt-get update
#Trying to avoid multiple apt-get's running simultaneously (in the
#rare occasion that the apt-get command fails rerun the script).
while [[ "$(ps -A | grep apt-get | awk '{print $1}')" != "" ]]; do
echo "Waiting for the other apt-get process to complete ..."
r=$RANDOM && let "sec=$r/10000" && let "mil=($r%10000)/10"
sleep $sec.$mil
done
apt-get install -y rsyslog rsyslog-relp

@@ -0,0 +1,4 @@
#!/bin/sh -x
# This script starts rsyslogd as a service in init.d
service rsyslog stop
service rsyslog start

@@ -0,0 +1,4 @@
TOSCA-Meta-File-Version: 1.0
CSAR-Version: 1.1
Created-By: OASIS TOSCA TC
Entry-Definitions: Definitions/tosca_elk.yaml
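
As a closing, illustrative note: a CSAR is a zip archive in which the TOSCA-Metadata/TOSCA.meta shown above must sit at the archive root, alongside the Definitions, Scripts and Python directories referenced by the templates. The sketch below packages such a layout into an archive; the local directory name csar_elk and the output name tosca_elk.csar are assumptions, not names defined by this change.

#!/usr/bin/python
# Sketch (not part of the CSAR): zip a working directory into a .csar archive
# so that TOSCA-Metadata/TOSCA.meta lands at the archive root. The directory
# name 'csar_elk' and the output name 'tosca_elk.csar' are assumptions.
import os
import zipfile

source_dir = 'csar_elk'        # holds TOSCA-Metadata/, Definitions/, Scripts/, Python/, ...
output_csar = 'tosca_elk.csar'

with zipfile.ZipFile(output_csar, 'w', zipfile.ZIP_DEFLATED) as csar:
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            path = os.path.join(root, name)
            # store entries relative to source_dir so the archive root matches
            # the layout TOSCA.meta expects
            csar.write(path, os.path.relpath(path, source_dir))

print('wrote %s' % output_csar)

The resulting archive can then be handed to an orchestrator or to the TOSCA parser for validation and translation.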