Fix tests for specs

Changes:
* Fix errors in specs
* Fix regexp for finding spec files

Closes-bug: #1417125

Change-Id: Ic1c31852cedc750078a41c62f8a72a79aca27e24
Sergey Reshetnyak 2015-01-14 19:11:15 +03:00
parent cc33d93aa8
commit b3751b72c2
9 changed files with 69 additions and 54 deletions

View File

@ -17,23 +17,25 @@ SOLR, Key-Value Store Indexer, and Impala services.
Problem description
===================
Currently we have enabled many new services in CDH plugin. And we want to increase
the coverage of the test cases. So we plan to add test cases in the integration test,
which will check the availability of those services by using simple scripts like
we did in map_reduce_testing.
Currently we have enabled many new services in CDH plugin. And we want to
increase the coverage of the test cases. So we plan to add test cases in the
integration test, which will check the availability of those services by using
simple scripts like we did in map_reduce_testing.
Proposed change
===============
We plan to write test cases like the way we did in map_reduce_testing. First copy
the shell script to the node, then run this script, which will exercise basic usage
of the services.
We plan to write test cases like the way we did in map_reduce_testing. First
copy the shell script to the node, then run this script, which will exercise
basic usage of the services.
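As a rough illustration of that flow (the helper names and the script name
below are made up, not taken from this spec), a single check could look like:

.. sourcecode:: python

    # Illustrative sketch only; the remote object is assumed to expose
    # write_file_to() and execute_command() like Sahara's remote helpers.
    def run_check_script(remote, script_name='check_solr.sh'):
        with open('resources/%s' % script_name) as script:
            remote.write_file_to('/tmp/%s' % script_name, script.read())
        exit_code, stdout = remote.execute_command('bash /tmp/%s' % script_name)
        # A non-zero exit status means the service check failed.
        if exit_code != 0:
            raise RuntimeError(stdout)
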
The implementation will need the following code changes for each service:
* Add a new cluster template (including all service processes) in test_gating_cdh.py
* Add check_services.py to check the basic usage of all services
* Add a new cluster template (including all service processes) in
  test_gating_cdh.py
* Add check_services.py to check the basic usage of all services
* Add shell scripts (check all services)
Alternatives
@ -93,9 +95,11 @@ Work Items
The work items will be:
* Add Python code in sahara/sahara/tests/integration/tests/gating/test_cdh_gating.py.
* Add check-services script files in sahara/sahara/tests/integration/tests/resources.
* Add check_services.py in sahara/sahara/tests/integration/tests/.
* Add Python code in ``sahara/sahara/tests/integration/tests/gating/
  test_cdh_gating.py``.
* Add check-services script files in ``sahara/sahara/tests/integration/
  tests/resources``.
* Add check_services.py in ``sahara/sahara/tests/integration/tests/``.
Dependencies
============

View File

@ -33,8 +33,8 @@ Proposed change
Information can be added to the context resource_uuid field and then can be
used by ContextAdapter in openstack.common.log for a group of logs.
This change additionally requires saving the context in openstack.common.local.store
so that it can be accessed from openstack.common.log.
This change additionally requires saving the context in
openstack.common.local.store so that it can be accessed from
openstack.common.log.
We need to set the cluster id and job execution id only once, so it could be done
with two methods that will be added to sahara.context:
@ -59,7 +59,9 @@ If instance and cluster specified, log message will look like:
.. sourcecode:: console
2014-12-22 13:54:19.574 23128 ERROR sahara.service.volumes [-] [instance: 3bd63e83-ed73-4c7f-a72f-ce52f823b080, cluster: 546c15a4-ab12-4b22-9987-4e38dc1724bd] message
2014-12-22 13:54:19.574 23128 ERROR sahara.service.volumes [-] [instance:
3bd63e83-ed73-4c7f-a72f-ce52f823b080, cluster: 546c15a4-ab12-4b22-9987-4e
38dc1724bd] message
..
@ -67,7 +69,8 @@ If only cluster specified:
.. sourcecode:: console
2014-12-22 13:54:19.574 23128 ERROR sahara.service.volumes [-] [instance: none, cluster: 546c15a4-ab12-4b22-9987-4e38dc1724bd] message
2014-12-22 13:54:19.574 23128 ERROR sahara.service.volumes [-] [instance:
none, cluster: 546c15a4-ab12-4b22-9987-4e38dc1724bd] message
..
@ -75,7 +78,8 @@ If job execution specified:
.. sourcecode:: console
2014-12-22 13:54:19.574 23128 ERROR sahara.service.edp.api [-] [instance: none, job_execution: 9de0de12-ec56-46f9-80ed-96356567a196] message
2014-12-22 13:54:19.574 23128 ERROR sahara.service.edp.api [-] [instance:
none, job_execution: 9de0de12-ec56-46f9-80ed-96356567a196] message
..
@ -83,7 +87,7 @@ Field "instance:" is present in every message (even if it's not necessary)
because of the default value of instance_format='[instance: %(uuid)s] '
that cannot be fixed without changing the config.
After implementation of these changes, Sahara log messages should be checked
and fixed to avoid information duplication.
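For illustration, the two methods added to sahara.context could look roughly
like the sketch below, assuming the existing ctx()/set_ctx() accessors; the
method names and the exact resource_uuid formatting are assumptions, not part
of this spec.

.. sourcecode:: python

    # Illustrative sketch of helpers inside sahara/context.py; names and the
    # resource_uuid formatting are assumptions.
    def set_current_cluster_id(cluster_id):
        """Set the cluster id once so it shows up in later log messages."""
        current = ctx()
        current.resource_uuid = 'none, cluster: %s' % cluster_id
        # Re-save the context so openstack.common.local.store (and therefore
        # openstack.common.log) sees the updated resource_uuid.
        set_ctx(current)

    def set_current_job_execution_id(job_execution_id):
        """Set the job execution id once for subsequent log messages."""
        current = ctx()
        current.resource_uuid = 'none, job_execution: %s' % job_execution_id
        set_ctx(current)

..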
Alternatives

View File

@ -123,8 +123,8 @@ Sahara-image-elements impact
N/A
Horizon impact
--------------
Sahara-dashboard / Horizon impact
---------------------------------
N/A
The default templates will show up in the UI and look like regular templates.
@ -178,4 +178,4 @@ expected to provide their own set of default templates.
References
==========
N/A
N/A

View File

@ -32,14 +32,14 @@ where DataSource objects currently may not be used:
* Java and Spark job types do not require DataSources since they have
no fixed arg list. Currently input and output paths must be specified as URLs
in the ``args`` list inside of ``job_configs`` and authentication configs must
be manually specified.
in the ``args`` list inside of ``job_configs`` and authentication configs
must be manually specified.
* Hive, Pig, and MapReduce jobs which use multiple input or output paths or
consume paths through custom parameters require manual configuration. Additional
paths or special configuration parameters (i.e. anything outside of Sahara's
assumptions) require manual specification in the ``args``, ``params``, or ``configs``
elements inside of ``job_configs``.
consume paths through custom parameters require manual configuration.
Additional paths or special configuration parameters (i.e. anything outside
of Sahara's assumptions) require manual specification in the ``args``,
``params``, or ``configs`` elements inside of ``job_configs``.
Allowing DataSources to be referenced in ``job_configs`` is an incremental
improvement that gives users the option of easily using DataSource objects in
@ -56,14 +56,16 @@ for all job types for maximum flexibility -- Hive and Pig jobs use parameters
to pass values, MapReduce uses configuration values, and Java and Spark use
arguments.
If Sahara resolves a value to the name or uuid of a DataSource it will substitute
the path information from the DataSource for the value and update the job
configuration as necessary to support authentication. If a value does not resolve
to a DataSource name or uuid value, the original value will be used.
If Sahara resolves a value to the name or uuid of a DataSource it will
substitute the path information from the DataSource for the value and update
the job configuration as necessary to support authentication. If a value does
not resolve to a DataSource name or uuid value, the original value will be
used.
Note that the substitution will occur during submission of the job to the cluster
but will *not* alter the original JobExecution. This means that if a user
relaunches a JobExecution or examines it, the original values will be present.
Note that the substitution will occur during submission of the job to the
cluster but will *not* alter the original JobExecution. This means that if
a user relaunches a JobExecution or examines it, the original values will be
present.
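A minimal sketch of that resolution step is given below; the function and
attribute names are hypothetical, not Sahara's actual API.

.. sourcecode:: python

    # Illustrative sketch only; names are hypothetical.
    def substitute_data_source_refs(values, find_data_source):
        """Swap values that name a DataSource for that DataSource's path.

        ``find_data_source`` is assumed to look a value up by name or uuid
        and return an object with a ``url`` attribute, or None on no match.
        """
        resolved = []
        for value in values:
            ds = find_data_source(value)
            # Use the DataSource path if the value resolves, otherwise keep
            # the original value unchanged.
            resolved.append(ds.url if ds else value)
        return resolved

The resolved values would be used only for the actual submission; the stored
JobExecution keeps the original values, as noted above.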
The following non mutually exclusive configuration values will control this
feature:
@ -87,10 +89,11 @@ execution configuration panel.
Alternatives
------------
A slightly different approach could be taken in which DataSource names or uuids are prepended
with a prefix to identify them. This would eliminate the need for config values to turn the
feature on and would allow individual values to be looked up rather than all values. It would
be unambiguous but may hurt readability or be unclear to new users.
A slightly different approach could be taken in which DataSource names or uuids
are prepended with a prefix to identify them. This would eliminate the need for
config values to turn the feature on and would allow individual values to be
looked up rather than all values. It would be unambiguous but may hurt
readability or be unclear to new users.
Data model impact
-----------------
@ -161,7 +164,8 @@ Unit tests
Documentation Impact
====================
We will need to document this in the sections covering submission of jobs to Sahara.
We will need to document this in the sections covering submission of jobs
to Sahara.
References

View File

@ -29,7 +29,7 @@ necessary Swift credentials (`fs.swift.service.sahara.username` and
As with Oozie Java actions, job source code may be modified and recompiled to
add the necessary configuration values to the job's Hadoop configuration.
However, this means that a Spark job which runs successfully with HDFS input
and output sources cannot be used “as is” with Swift input and output sources.
and output sources cannot be used "as is" with Swift input and output sources.
Sahara should allow users to run Spark jobs with Swift input and output
sources without altering job source code.
@ -43,8 +43,8 @@ Java compatibility.
A new configuration value will be added to Sahara, `edp.spark.adapt_for_swift`.
If this configuration value is set to True on a Spark job, Sahara will run a
wrapper class (SparkWrapper) instead of the original class indicated by the job.
The default for this configuration value will be False.
wrapper class (SparkWrapper) instead of the original class indicated by the
job. The default for this configuration value will be False.
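For example, a user might enable the wrapper through the job's configs; the
fragment below is illustrative only (the Swift paths are made up), not a
prescribed REST payload:

.. sourcecode:: python

    # Illustrative job_configs fragment for a Spark job using Swift paths.
    job_configs = {
        "configs": {"edp.spark.adapt_for_swift": True},
        "args": ["swift://demo-container.sahara/input",
                 "swift://demo-container.sahara/output"]
    }
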
Sahara will generate a `spark.xml` file containing the necessary Swift
credentials as Hadoop configuration values. This XML file will be uploaded to
@ -54,8 +54,8 @@ other files normally needed to execute a Spark job.
Sahara's Spark launcher script will run the SparkWrapper class instead of the
job's designated main class. The launcher will pass the name of the XML
configuration file to SparkWrapper at runtime, followed by the name of the
original class and any job arguments. SparkWrapper will add this XML file to the
default Hadoop resource list in the job's configuration before invoking the
original class and any job arguments. SparkWrapper will add this XML file to
the default Hadoop resource list in the job's configuration before invoking the
original class with any arguments.
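As a sketch of that argument reordering (only the SparkWrapper class and the
spark.xml file name come from this spec; the function below is hypothetical):

.. sourcecode:: python

    # Illustrative sketch of how the launcher might rewrite the command line.
    def wrap_spark_args(main_class, job_args, adapt_for_swift):
        if not adapt_for_swift:
            return [main_class] + list(job_args)
        # SparkWrapper gets the config file name first, then the original
        # main class, then the original job arguments.
        return ['SparkWrapper', 'spark.xml', main_class] + list(job_args)
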
When the job's main class is run, its default Hadoop configuration will

View File

@ -140,7 +140,8 @@ Primary assignee:
Work Items
----------
* Add a new attribute to cluster-level configs to indicate whether HA is enabled or not.
* Add a new attribute to cluster-level configs to indicate whether HA is
  enabled or not (a rough sketch of such a config follows this list)
* Add new service classes to HDP 2.0.6 for Journal nodes and Zookeeper Failover
Controllers
* Add new remote methods to hdp hadoopserver.py for remote commands
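A rough sketch of what such a cluster-level attribute could look like,
assuming the existing sahara.plugins.provisioning.Config class (the config
name, target, and defaults below are made up):

.. sourcecode:: python

    # Illustrative sketch only; name, target and defaults are assumptions.
    from sahara.plugins import provisioning

    ENABLE_HDFS_HA = provisioning.Config(
        'Enable NameNode HA', 'HDFS', 'cluster',
        config_type='bool', default_value=False, is_optional=True)
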

View File

@ -29,7 +29,8 @@ Currently there are several ways to give Sahara access to VMs:
* all nodes need to have floating IPs
- floating IPs are a limited resource
- floating IPs are usually for external world, not for access from controller
- floating IPs are usually for external world, not for access from
controller
* all nodes need to be accessible from controller nodes
@ -219,4 +220,4 @@ The feature needs to be documented.
References
==========
None
None

View File

@ -119,8 +119,8 @@ Sahara-image-elements impact
N/A
Horizon impact
--------------
Sahara-dashboard / Horizon impact
---------------------------------
Horizon will be updated to include an "edit" button in each row of both the
node group and cluster templates tables. That edit button will bring up the

View File

@ -78,16 +78,17 @@ class TestTitles(testtools.TestCase):
"Found %s literal carriage returns in file %s" %
(len(matches), tpl))
def _check_trailing_spaces(self, tpl, raw):
for i, line in enumerate(raw.split("\n")):
trailing_spaces = re.findall(" +$", line)
self.assertEqual(len(trailing_spaces),0,
"Found trailing spaces on line %s of %s" % (i+1, tpl))
self.assertEqual(
len(trailing_spaces), 0,
"Found trailing spaces on line %s of %s" % (i + 1, tpl))
def test_template(self):
files = ['specs/template.rst'] + glob.glob('specs/*/*/*')
files = ['specs/template.rst'] + [fn for fn in glob.glob('specs/*/*')
if not fn.endswith("README.rst")
and not fn.endswith('redirects')]
for filename in files:
self.assertTrue(filename.endswith(".rst"),
"spec's file must uses 'rst' extension.")