Fix a large number of typos in the specifications.

Fix typos in specifications from:
- Saharaclient specs
- Kilo specs
- Template spec

Change-Id: Ieb0261f153547c4652a1b37e885fbc3c044e22ac
Alexey Galkin 2015-10-29 16:33:02 +03:00
parent e269ec8e95
commit d9c326f65b
17 changed files with 26 additions and 25 deletions


@@ -142,7 +142,7 @@ Sahara Integration test for CDH plugin is enough.
 Documentation Impact
 ====================
-Documents about CDH plugin prerequisites and enabling should be modifed, for
+Documents about CDH plugin prerequisites and enabling should be modified, for
 cm_api is not required any more.
 References


@@ -35,7 +35,7 @@ The implementation will need below changes on codes for each service:
 * Add process names of the service in some places.
 * Add service or process configuration, and network ports to open.
 * Add service validation.
-* Moidify some util methods, like get_service, to meet more services.
+* Modify some utils methods, like get_service, to meet more services.
 * Some other changes for a few specific services if needed.
 Alternatives


@@ -28,7 +28,7 @@ Proposed change
 We plan to write test cases like the way we did in map_reduce_testing. First
 copy the shell script to the node,then run this script, the script will run
-basic useage of the services.
+basic usage of the services.
 The implementation will need below changes on codes for each service:


@@ -147,7 +147,7 @@ manually.
 Documentation Impact
 ====================
-Required to document this feauture in sahara/userdoc/configuration.guide.
+Required to document this feature in sahara/userdoc/configuration.guide.
 References
 ==========


@@ -111,7 +111,7 @@ Proposed change
 2) Add a CLI util that can be executed by cron with admin credentials to
 create/update existing default templates. This utility needs to be able to
 take some placeholders like "flavor" or "network" and make the appropriate
-substitutions (either from configs or via commnad line args) at runtime.
+substitutions (either from configs or via command line args) at runtime.
 The cron job can be optional if we want to force any updates to be
 triggered explicitly.
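As a purely illustrative note on the substitution described in this hunk, a minimal sketch of such a placeholder pass; the placeholder syntax, template content, and helper name are assumptions, not part of the spec::

    # Hypothetical sketch: fill placeholders such as {flavor} or {network}
    # in a default template, using values taken from config or CLI args.
    import json

    def fill_placeholders(template_text, values):
        for name, value in values.items():
            template_text = template_text.replace('{%s}' % name, value)
        return json.loads(template_text)

    raw = '{"name": "default-cluster", "flavor_id": "{flavor}", "net": "{network}"}'
    print(fill_placeholders(raw, {'flavor': '2', 'network': 'private-net'}))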


@@ -36,7 +36,7 @@ from performing desired processing they have a few choices:
 too heavyweight and not everyone knows it.
 * Modify the Sahara source. A savvy user might extend Sahara EDP
-themselves to get the desired functionality. Howerver, not everyone
+themselves to get the desired functionality. However, not everyone
 is a developer or has the time to understand Sahara enough to do this.
 * Submit a bug or a blueprint and wait for the Sahara team to address it.
@@ -89,13 +89,14 @@ The `args` are specified with the **<argument>** tag and will be passed
 to the shell script in order of specification.
 In the reference section below there is a simple example of a shell
-action workflow. There are three tags in the worflow that for Sahara's
+action workflow. There are three tags in the workflow that for Sahara's
 purposes are unique to the `Shell` action and should be handled by
 Sahara:
 * **<exec>script</exec>**
 This identifies the command that should be executed by the shell action.
-The value specified here will be the name of the script idenfied in `mains`.
+The value specified here will be the name of the script identified in
+`mains`.
 Technically, this can be any command on the path but it is probably
 simpler if we require it to be a script. Based on some experimentation,
 there are subtleties of path evaluation that can be avoided if a script
@@ -166,7 +167,7 @@ Sahara-dashboard / Horizon impact
 We would need a new form for a Shell job type submission. The form should allow
 specification of a main script, supporting libs, configuration values,
-arguments, and environment variables (which are 100% analagous to params from
+arguments, and environment variables (which are 100% analogous to params from
 the perspective of the UI)
 Implementation


@@ -64,7 +64,7 @@ used.
 Note that the substitution will occur during submission of the job to the
 cluster but will *not* alter the original JobExecution. This means that if
-a user relaunches a JobExecution or examines it, the orignal values will be
+a user relaunches a JobExecution or examines it, the original values will be
 present.
 The following non mutually exclusive configuration values will control this
@@ -89,7 +89,7 @@ execution configuration panel.
 Alternatives
 ------------
-A slightly diferent approach could be taken in which DataSource names or uuids
+A slightly different approach could be taken in which DataSource names or uuids
 are prepended with a prefix to identify them. This would eliminate the need for
 config values to turn the feature on and would allow individual values to be
 looked up rather than all values. It would be unambiguous but may hurt
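As an aside on the prefix-based alternative discussed in this hunk, a rough sketch of what such a lookup could look like; the prefix string, helper names, and lookup callable are assumptions for illustration only, not the spec's actual design::

    # Hypothetical: replace values like 'datasource://my-input' with the
    # URL of the named DataSource at job submission time.
    PREFIX = 'datasource://'

    def resolve_refs(configs, lookup_by_name):
        resolved = {}
        for key, value in configs.items():
            if isinstance(value, str) and value.startswith(PREFIX):
                resolved[key] = lookup_by_name(value[len(PREFIX):]).url
            else:
                resolved[key] = value
        return resolved

    class _FakeSource:            # stand-in for a stored DataSource object
        url = 'swift://container/input'

    print(resolve_refs({'input': 'datasource://my-input'},
                       lambda name: _FakeSource()))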


@@ -102,7 +102,7 @@ None
 REST API impact
 ---------------
-Backward compatiblity will be maintained since this is a new endpoint.
+Backward compatibility will be maintained since this is a new endpoint.
 **GET /v1.1/{tenant_id}/job-types**
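For readers unfamiliar with the call, a hedged client-side example of hitting the new endpoint; only the URL path comes from the spec, while the host, port, token handling, and response shape below are assumptions::

    # Illustrative request against the proposed job-types endpoint.
    import requests

    sahara_url = 'http://controller:8386/v1.1'   # assumed Sahara API endpoint
    tenant_id = '<tenant-id>'
    headers = {'X-Auth-Token': '<token>'}

    resp = requests.get('%s/%s/job-types' % (sahara_url, tenant_id),
                        headers=headers)
    print(resp.status_code, resp.json())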


@@ -86,7 +86,7 @@ than through configuration files.
 This does present some security risk, but it is no greater than the risk
 already presented by Oozie jobs that include Swift credentials. In fact, this
-is probaby safer since a user must have direct access to the job directory to
+is probably safer since a user must have direct access to the job directory to
 read the credentials written by Sahara.
 Data model impact


@@ -35,7 +35,7 @@ by:
 Proposed change
 ===============
-The impementaion will change start_cluster to call first_run, and remove the
+The implementation will change start_cluster to call first_run, and remove the
 other part of work can be done by first_run from the method body.
 For detail, it will be like following:
@@ -65,7 +65,7 @@ create_hive_dirs can be removed.
 Alternatives
 ------------
-Current way works at this stage, but it increases comlexity of coding work to
+Current way works at this stage, but it increases complexity of coding work to
 add more services support to CDH plugin. And, when CM is upgraded in the
 future, the correctness of current codes cannot be assured. At the end, the
 first_run method to start services is recommended by Cloudera.


@@ -17,12 +17,12 @@ Problem description
 ===================
 Now log levels and messages in Sahara are mixed and don't match the OpenStack
-logging guideliness.
+logging guidelines.
 Proposed change
 ===============
-The good way to unify our log system would be to follow the major guideliness.
+The good way to unify our log system would be to follow the major guidelines.
 Here is a brief description of log levels:
 * Debug: Shows everything and is likely not suitable for normal production
@@ -98,7 +98,7 @@ readable please use {<smthg>} instead of {0} in log messages.
 Alternatives
 ------------
-We need to follow OpenStack guideliness, but if needed we can move plugin logs
+We need to follow OpenStack guidelines, but if needed we can move plugin logs
 to DEBUG level instead of INFO. It should be discussed separately in each case.
 Data model impact
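As an illustration of the "{<smthg>} instead of {0}" advice quoted in the second hunk header above, a tiny, made-up example of the named-placeholder style::

    # Named placeholders keep log messages readable; names and values are invented.
    import logging

    logging.basicConfig(level=logging.INFO)
    LOG = logging.getLogger(__name__)

    cluster_name, status = 'demo-cluster', 'Active'
    LOG.info("Cluster {cluster} entered state {state}".format(
        cluster=cluster_name, state=status))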


@@ -48,7 +48,7 @@ plugin specific topics.
 The information provided is intended to be updated as new methodologies,
 plugins, and features are implemented. It will also be open to patching
-through the standrd OpenStack workflows by the community at large.
+through the standard OpenStack workflows by the community at large.
 Alternatives


@@ -49,7 +49,7 @@ repository changes. Namely that any work within a release that is either
 predicted to not be staffed, or that is not started at the end of a
 release should be moved to the backlog directory. This process should be
 directed by the specification drafters as they will most likely be the
-primary assignees for new work. In situtations where the drafter of a
+primary assignees for new work. In situations where the drafter of a
 specification feels that there will be insufficient resources to create
 an implementation then they should move an approved specification to the
 backlog directory. This process should also be revisited at the end of a
@@ -133,7 +133,7 @@ Work Items
 * Create the backlog directory and documentation.
 * Clean up the juno directory.
-* Add references to backlog in the contributing documenation.
+* Add references to backlog in the contributing documentation.
 Dependencies


@@ -122,7 +122,7 @@ The implementation is divided in three steps:
 on Spark's but necessary changes will be made.
 * Storm doesn't rely on many configuration files, there is only one needed
 and it is used by all nodes. This configuration file is written in YAML
-and it should be dinamically written in the plugin since it needs to have
+and it should be dynamically written in the plugin since it needs to have
 the name or ip of the master node and also zookeeper node(s). We will need
 PYYAML to parse this configuration to YAML. PYYAML is already a global
 requirement of OpenStack and will be added to Sahara's requirement as well.
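A minimal sketch of generating that single YAML configuration with PyYAML; the option names and addresses below are assumptions, not the plugin's actual output::

    # Illustrative only: build the Storm config dynamically and dump it as YAML.
    import yaml

    master_ip = '10.0.0.5'                  # master node name or ip
    zookeeper_hosts = ['10.0.0.6', '10.0.0.7']

    conf = {
        'nimbus.host': master_ip,
        'storm.zookeeper.servers': zookeeper_hosts,
    }
    print(yaml.safe_dump(conf, default_flow_style=False))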


@@ -107,7 +107,7 @@ No developer impact.
 Sahara-image-elements impact
 ----------------------------
-No sahara-image-elements imbact.
+No sahara-image-elements impact.
 Sahara-dashboard / Horizon impact
 ---------------------------------


@@ -58,7 +58,7 @@ short term solution, can temporary use --names and --ids for all
 delete verbs of the CLI. And once the CLI will be refactored,
 we will remove all --name(s) and --id(s) arguments.
-So the proposed change implies to add --names and --ids arguement
+So the proposed change implies to add --names and --ids arguments
 which consist of a Comma separated list of names and ids::
 sahara cluster-delete [--name NAME] [--id cluster_id]
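Purely as an illustration of handling the comma separated values proposed in this hunk, a small argparse sketch; the flag wiring here is hypothetical, not the saharaclient implementation::

    # Split '--names a,b,c' into a Python list; illustrative only.
    import argparse

    parser = argparse.ArgumentParser(prog='sahara')
    parser.add_argument('--names', type=lambda s: s.split(','),
                        help='Comma separated list of cluster names to delete')
    parser.add_argument('--ids', type=lambda s: s.split(','),
                        help='Comma separated list of cluster ids to delete')

    args = parser.parse_args(['--names', 'cluster1,cluster2'])
    print(args.names)   # ['cluster1', 'cluster2']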


@@ -119,7 +119,7 @@ Each API method which is either added or changed should have the following
 inconsistent parameters supplied to the method, or when an
 instance is not in an appropriate state for the request to
 succeed. Errors caused by syntactic problems covered by the JSON
-schema defintion do not need to be included.
+schema definition do not need to be included.
 * URL for the resource