Support Spark 1.3.1

Add version "1.3.1" to the list of supported versions, in tandem with
https://review.openstack.org/#/c/195054/ that adds support in the DIB.
This one-liner is sufficient, from Sahara's point of view Spark 1.0
and 1.3 are the same.
This patch set Updates also the Spark plugin documentation and the
tests.

Change-Id: I8fa482b6d1d6abaa6633aec309a3ba826a8b7ebb
Implements: blueprint support-spark-1-3
Daniele Venzano 2015-06-29 14:41:15 +00:00
parent a470fa6aec
commit 40b8b781ff
5 changed files with 51 additions and 5 deletions

doc/source/userdoc/spark_plugin.rst

@@ -14,8 +14,8 @@ has been developed and tested with the images generated by sahara-image-elements
 * https://github.com/openstack/sahara-image-elements
-Those Ubuntu images already have Cloudera CDH4 HDFS and Apache Spark installed.
-A prepared image can be found at the following location:
+Those Ubuntu images already have Cloudera CDH5 HDFS and Apache Spark installed.
+A prepared image for Spark 1.0 and CDH4 HDFS can be found at the following location:
 * http://sahara-files.mirantis.com/sahara-juno-spark-1.0.0-ubuntu-14.04.qcow2
@@ -23,7 +23,8 @@ The Spark plugin requires an image to be tagged in Sahara Image Registry with
 two tags: 'spark' and '<Spark version>' (e.g. '1.0.0').
 Also you should specify the username of the default cloud-user used in the image. For
-images generated with the DIB it is 'ubuntu'.
+the images available at the URLs listed above and for all the ones generated with the
+DIB it is 'ubuntu'.
 Note that the Spark cluster is deployed using the scripts available in the
 Spark distribution, which allow to start all services (master and slaves), stop
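
For reference, the tagging described above can also be done programmatically. A minimal sketch using python-saharaclient, assuming an already-authenticated client object named 'sahara'; the image-manager method names are recalled from the 2015-era client and may differ between releases:

    # Hypothetical image ID; use the Glance ID of the uploaded image.
    IMAGE_ID = '11111111-2222-3333-4444-555555555555'

    # Set the default cloud user (images built by the DIB use 'ubuntu').
    sahara.images.update_image(IMAGE_ID, user_name='ubuntu')

    # Tag the image so the Spark plugin accepts it for version 1.3.1.
    sahara.images.update_tags(IMAGE_ID, ['spark', '1.3.1'])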

etc/scenario/sahara-ci/spark-1.3.1.yaml

@@ -0,0 +1,32 @@
+clusters:
+  - plugin_name: spark
+    plugin_version: 1.3.1
+    image: %spark_image%
+    node_group_templates:
+      - name: master
+        flavor_id: %ci_flavor_id%
+        node_processes:
+          - master
+          - namenode
+        auto_security_group: true
+      - name: worker
+        flavor_id: %ci_flavor_id%
+        node_processes:
+          - datanode
+          - slave
+        auto_security_group: true
+    cluster_template:
+      name: spark131
+      node_group_templates:
+        master: 1
+        worker: 1
+      cluster_configs:
+        HDFS:
+          dfs.replication: 1
+    cluster:
+      name: %cluster_name%
+    scaling:
+      - operation: add
+        node_group: worker
+        size: 1
+    edp_jobs_flow: spark_edp
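
Once the CI harness substitutes the %...% placeholders, the scenario file is plain YAML. A small hedged sketch (not part of the runner) that fills the placeholders with dummy values and checks that the scenario requests the newly supported version; the file path is assumed from the scenario naming convention:

    # Hedged sketch: substitute dummy values for the CI placeholders and
    # verify the requested plugin version.
    import yaml

    with open('etc/scenario/sahara-ci/spark-1.3.1.yaml') as f:
        raw = f.read()
    for placeholder, value in [('%spark_image%', 'img'),
                               ('%ci_flavor_id%', '42'),
                               ('%cluster_name%', 'demo')]:
        raw = raw.replace(placeholder, value)

    scenario = yaml.safe_load(raw)
    assert scenario['clusters'][0]['plugin_version'] == '1.3.1'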

sahara/plugins/spark/plugin.py

@@ -56,7 +56,7 @@ class SparkProvider(p.ProvisioningPluginBase):
                 "CDH cluster without any management consoles.")
 
     def get_versions(self):
-        return ['1.0.0', '0.9.1']
+        return ['1.3.1', '1.0.0', '0.9.1']
 
     def get_configs(self, hadoop_version):
         return c_helper.get_plugin_configs()
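
This list is what makes the one-liner sufficient: Sahara validates the hadoop_version requested for a cluster against the versions the plugin reports. A rough illustration of the idea, not Sahara's actual validation code:

    # Simplified sketch of version gating; the real check lives in
    # Sahara's validation layer, not in the plugin itself.
    def validate_plugin_version(plugin, requested_version):
        if requested_version not in plugin.get_versions():
            raise ValueError(
                "Spark plugin does not support version %s" % requested_version)

    # With this patch, requesting '1.3.1' now passes validation.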

sahara/tests/integration/configs/config.py

@@ -594,7 +594,7 @@ SPARK_CONFIG_OPTS = [
                help='A list of processes that will be launched '
                     'on worker nodes'),
     cfg.StrOpt('HADOOP_VERSION',
-               default='1.0.0',
+               default='1.3.1',
                help='Version of Spark (even though it says "HADOOP".'),
     cfg.StrOpt('HADOOP_USER',
                default='hdfs',
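
These are standard oslo.config options: they get registered under a group and read from the integration-test configuration files. A brief hedged sketch of that pattern; the group name 'SPARK' and the stand-in option list are assumed for illustration:

    from oslo_config import cfg

    # Stand-in for the SPARK_CONFIG_OPTS list shown in the diff above.
    SPARK_CONFIG_OPTS = [
        cfg.StrOpt('HADOOP_VERSION', default='1.3.1',
                   help='Version of Spark (even though it says "HADOOP".'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(SPARK_CONFIG_OPTS, group='SPARK')
    print(CONF.SPARK.HADOOP_VERSION)  # -> '1.3.1' before any overrides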

sahara/tests/unit/plugins/spark/test_plugin.py

@@ -66,6 +66,19 @@ class SparkPluginTest(base.SaharaWithDbTestCase):
             plugin.get_edp_engine(cluster, edp.JOB_TYPE_SPARK),
             engine.SparkJobEngine)
 
+    def test_plugin13_edp_engine(self):
+        cluster_dict = {
+            'name': 'cluster',
+            'plugin_name': 'spark',
+            'hadoop_version': '1.3.1',
+            'default_image_id': 'image'}
+
+        cluster = conductor.cluster_create(context.ctx(), cluster_dict)
+        plugin = pb.PLUGINS.get_plugin(cluster.plugin_name)
+        self.assertIsInstance(
+            plugin.get_edp_engine(cluster, edp.JOB_TYPE_SPARK),
+            engine.SparkJobEngine)
+
     def test_cleanup_configs(self):
         remote = mock.Mock()
         instance = mock.Mock()