deb-sahara/sahara/plugins/spark/resources/spark-env.sh.template
Commit 591c5eb1ee by Daniele Venzano: Add Spark plugin to Sahara
Integrate the Spark plugin in Sahara. It adds the capability of
provisioning Spark clusters with (Cloudera) HDFS. The plugin
assumes the use of VM images generated with the Savanna
diskimage-builder and the '-p spark' option.

Implements: blueprint spark-plugin

Wiki: https://wiki.openstack.org/wiki/Sahara/SparkPlugin

This code is running on our Bigfoot research cluster.

Change-Id: Ic105f7de64248bdfb05879ededf35503bc04925b
Committed: 2014-05-27 19:34:39 +00:00


#!/usr/bin/env bash
# This file contains environment variables required to run Spark. Copy it as
# spark-env.sh and edit it to configure Spark for your site.
#
# The following variables can be set in this file:
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - MESOS_NATIVE_LIBRARY, to point to your libmesos.so if you use Mesos
# - SPARK_JAVA_OPTS, to set node-specific JVM options for Spark. Note that
# we recommend setting app-wide options in the application's driver program.
# Examples of node-specific options: -Dspark.local.dir, GC options
# Examples of app-wide options: -Dspark.serializer
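#
# For illustration only (placeholder values, not recommendations for any site;
# 192.0.2.10 and /mnt/spark-scratch are made-up examples):
# export SPARK_LOCAL_IP=192.0.2.10
# export SPARK_JAVA_OPTS="-Dspark.local.dir=/mnt/spark-scratch"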
#
# If using the standalone deploy mode, you can also set variables for it here:
# - SPARK_MASTER_IP, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much memory to use (e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_INSTANCES, to set the number of worker processes per node
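#
# A minimal standalone-mode sketch (illustrative values only; adjust per node.
# 7077 is Spark's default master port, and master.example.org is a placeholder
# hostname):
# export SPARK_MASTER_IP=master.example.org
# export SPARK_MASTER_PORT=7077
# export SPARK_WORKER_CORES=4
# export SPARK_WORKER_MEMORY=4g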