sahara/sahara/plugins/spark/resources
Daniele Venzano 591c5eb1ee Add Spark plugin to Sahara
Integrate the Spark plugin in Sahara. It adds the capability of
provisioning Spark clusters with (Cloudera) HDFS. The plugin
assumes the use of VM images generated with the Savanna
diskimage-builder and the '-p spark' option.

Implements: blueprint spark-plugin

Wiki: https://wiki.openstack.org/wiki/Sahara/SparkPlugin

This code is running on our Bigfoot research cluster.

Change-Id: Ic105f7de64248bdfb05879ededf35503bc04925b
2014-05-27 19:34:39 +00:00

README.rst

Apache Spark and HDFS Configurations for Sahara

This directory contains default XML configuration files and Spark scripts:

  • core-default.xml
  • hdfs-default.xml
  • spark-env.sh.template
  • topology.sh

These files are used by Sahara's plugin for Apache Spark and Cloudera HDFS. The XML config files were taken from:

  • https://github.com/apache/hadoop-common/blob/release-1.2.1/src/core/core-default.xml
  • https://github.com/apache/hadoop-common/blob/release-1.2.1/src/hdfs/hdfs-default.xml
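Each of these files is a flat list of Hadoop <property> entries (a name, a value and a description) under a single <configuration> root. As a minimal sketch of how such a file can be read into name/value pairs with the Python standard library (illustrative only, not Sahara's actual loader; it assumes it is run from this directory):

  import xml.etree.ElementTree as ET

  def load_defaults(path):
      """Parse a Hadoop-style *-default.xml file into a {name: value} dict."""
      defaults = {}
      for prop in ET.parse(path).getroot().findall("property"):
          name = prop.findtext("name")
          if name:
              defaults[name] = prop.findtext("value", default="")
      return defaults

  core_defaults = load_defaults("core-default.xml")
  print(len(core_defaults), "default core properties loaded")
  print("fs.default.name =", core_defaults.get("fs.default.name"))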

Cloudera packages use the same configuration files as standard Apache Hadoop.

The XML configs are used to expose the default Hadoop configurations to users through Sahara's REST API. This lets users override selected config values; the overridden values are then pushed to the provisioned VMs running Hadoop services as part of the appropriate XML config file.
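As a hedged illustration of that flow (the function below is hypothetical, not part of Sahara's code), a user override received through the REST API could be merged over the parsed defaults and rendered into the Hadoop-style site XML that is placed on the VMs; only the overridden properties need to be emitted, since Hadoop falls back to its built-in defaults for everything else:

  import xml.etree.ElementTree as ET

  def render_site_xml(defaults, overrides):
      """Render overridden properties as a Hadoop <configuration> document."""
      merged = dict(defaults)
      merged.update(overrides)
      config = ET.Element("configuration")
      for name in sorted(overrides):
          prop = ET.SubElement(config, "property")
          ET.SubElement(prop, "name").text = name
          ET.SubElement(prop, "value").text = str(merged[name])
      return ET.tostring(config, encoding="unicode")

  # dfs.replication is a real HDFS property; the values here are made up.
  print(render_site_xml({"dfs.replication": "3"}, {"dfs.replication": "2"}))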