Merge "Documentation fixes and updates for devref"

commit 6c1e4158ab
@ -63,10 +63,10 @@ $ mv 507eb70202af_my_new_revision.py 007_my_new_revision.py

Add Alembic Operations to the Script
++++++++++++++++++++++++++++++++++++

The migration script contains method ``upgrade()``. Sahara has not supported
downgrades since the Kilo release. Fill in this method with the appropriate
Alembic operations to perform upgrades. In the above example, an upgrade will
move from revision '006' to revision '007'.

Command Summary for sahara-db-manage
++++++++++++++++++++++++++++++++++++
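A migration script for the example above might look like the following sketch. Because alembic may not be available where this snippet runs, a tiny recorder stands in for alembic's ``op`` module; in a real migration you would simply ``from alembic import op`` and ``import sqlalchemy as sa``. The table and column names are made up for illustration.

```python
# Sketch of an Alembic migration script (007_my_new_revision.py).
# A minimal stand-in records the schema operations upgrade() issues,
# so the sketch runs anywhere; table/column names are hypothetical.
class _OpRecorder:
    """Stand-in for alembic's ``op`` module."""
    def __init__(self):
        self.calls = []

    def add_column(self, table_name, column):
        self.calls.append(('add_column', table_name, column))


op = _OpRecorder()

# revision identifiers, used by Alembic
revision = '007'
down_revision = '006'


def upgrade():
    # Sahara has not supported downgrades since Kilo, so only upgrade()
    # is implemented; this adds one column to a made-up table
    op.add_column('clusters_example', 'example_field')


upgrade()
assert op.calls == [('add_column', 'clusters_example', 'example_field')]
```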
@ -87,15 +87,15 @@ To run the offline migration between specific migration versions::

    $ sahara-db-manage --config-file /path/to/sahara.conf upgrade <start version>:<end version> --sql

To upgrade the database incrementally::

    $ sahara-db-manage --config-file /path/to/sahara.conf upgrade --delta <# of revs>

To create a new revision::

    $ sahara-db-manage --config-file /path/to/sahara.conf revision -m "description of revision" --autogenerate

To create a blank file::

    $ sahara-db-manage --config-file /path/to/sahara.conf revision -m "description of revision"
@ -32,7 +32,7 @@ On Ubuntu:

    $ sudo apt-get install git-core python-dev python-virtualenv gcc libpq-dev libmysqlclient-dev python-pip rabbitmq-server
    $ sudo pip install tox

On Red Hat and related distributions (CentOS/Fedora/RHEL/Scientific Linux):

.. sourcecode:: console
@ -78,7 +78,7 @@ Documentation Guidelines
------------------------

All Sahara docs are written using Sphinx / RST and located in the main repo
in the ``doc`` directory. You can add or edit pages here to update the
http://docs.openstack.org/developer/sahara site.

The documentation in docstrings should follow the `PEP 257`_ conventions
@ -92,10 +92,7 @@ More specifically:

3. For docstrings that take multiple lines, there should be a newline
   after the opening quotes, and before the closing quotes.
4. `Sphinx`_ is used to build documentation, so use the restructured text
   markup to designate parameters, return values, etc.

Run the following command to build docs locally.
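As an illustration of points 3 and 4 above, a docstring following these conventions might look like this (the function and its parameters are made up for the example):

```python
def count_instances(cluster, node_group=None):
    """Return the number of instances in a cluster.

    A multi-line docstring: note the newline after the opening quotes
    and before the closing quotes, and the Sphinx field markup below.

    :param cluster: a mapping of node group name to instance count
    :param node_group: if given, count only this node group
    :returns: the total number of instances counted
    """
    if node_group is not None:
        return cluster.get(node_group, 0)
    return sum(cluster.values())


assert count_instances({'master': 1, 'worker': 3}) == 4
assert count_instances({'master': 1, 'worker': 3}, 'worker') == 3
```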
@ -106,14 +103,14 @@ Run the following command to build docs locally.

After that you can access the generated docs in the ``doc/build/`` directory;
for example, the main page is ``doc/build/html/index.html``.

To make the doc generation process faster you can use:

.. sourcecode:: console

    $ SPHINX_DEBUG=1 tox -e docs

To avoid reinstalling sahara into the virtual env each time you rebuild the
docs, you can use the following command (it can be executed only after
running ``tox -e docs`` the first time):

.. sourcecode:: console
@ -123,8 +120,8 @@ running ``tox -e docs`` first time):

.. note::
    For more details on documentation guidelines see HACKING.rst in the root
    of the Sahara repo.

.. _PEP 8: http://www.python.org/dev/peps/pep-0008/
@ -136,47 +133,48 @@ running ``tox -e docs`` first time):

Event log Guidelines
--------------------

Currently Sahara keeps useful information about provisioning for each cluster.
Cluster provisioning can be represented as a linear series of provisioning
steps, which are executed one after another. Each step may consist of several
events. The number of events depends on the step and the number of instances
in the cluster. Also each event can contain information about its cluster,
instance, and node group. In case of errors, events contain useful information
for identifying the error. Additionally, each exception in sahara contains a
unique identifier that allows the user to find extra information about that
error in the sahara logs. You can see an example of provisioning progress
information here:
http://developer.openstack.org/api-ref/data-processing/#event-log

This means that if you add some important phase for cluster provisioning to
the sahara code, it's recommended to add a new provisioning step for this
phase. This will allow users to use the event log for handling errors during
this phase.

Sahara already has special utils for operating provisioning steps and events
in the module ``sahara/utils/cluster_progress_ops.py``.

.. note::
    It's strictly recommended not to use ``conductor`` event log ops directly
    to assign events and operate provisioning steps.

.. note::
    You should not start a new provisioning step until the previous step has
    successfully completed.

.. note::
    It's strictly recommended to use ``event_wrapper`` for event handling.
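The step/event structure described above can be modeled with a short, self-contained sketch. The real implementation lives in ``sahara/utils/cluster_progress_ops.py``; every name below is illustrative, not sahara's actual API.

```python
# Minimal model of the provisioning-step/event structure: steps run
# one after another, and each step collects per-instance events that
# may carry error information.
class ProvisioningStep:
    def __init__(self, name, total):
        self.name = name
        self.total = total        # expected number of events
        self.events = []
        self.successful = None

    def add_event(self, instance, successful, info=None):
        # each event records its instance and, on errors, extra info
        # (such as the unique error identifier found in the logs)
        self.events.append({'instance': instance,
                            'successful': successful,
                            'info': info})

    def complete(self):
        # a step succeeds only if every one of its events succeeded
        self.successful = all(e['successful'] for e in self.events)
        return self.successful


# one step of a hypothetical provisioning phase, on a 2-instance cluster
step = ProvisioningStep('Wait for instances to become active', total=2)
step.add_event('instance-1', True)
step.add_event('instance-2', False, info='error id: 0a1b2c3d')
assert step.complete() is False
```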

OpenStack client usage guidelines
---------------------------------

The sahara project uses several OpenStack clients internally. These clients
are all wrapped by utility functions which make using them more convenient.
When developing sahara, if you need to use an OpenStack client you should
check the ``sahara.utils.openstack`` package for the appropriate one.

When developing new OpenStack client interactions in sahara, it is important
to understand the ``sahara.service.sessions`` package and the usage of the
keystone ``Session`` and auth plugin objects (for example, ``Token`` and
``Password``). Sahara is migrating all clients to use this authentication
methodology, where available. For more information on using sessions with
keystone, please see
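The session/auth-plugin pattern can be sketched with stand-in classes. The real code uses keystone ``Session`` and auth plugin objects via ``sahara.service.sessions``; everything below is an illustrative model, not sahara's or keystone's actual API.

```python
# Stand-ins modeling the keystone Session + auth plugin pattern:
# the plugin knows how to obtain a token, the session holds the
# plugin and caches the token for every client that shares it.
class PasswordAuth:
    """Stand-in for a keystone ``Password`` auth plugin."""
    def __init__(self, username, password):
        self.username = username
        self.password = password

    def get_token(self):
        # a real plugin would call keystone; we fabricate a token
        return 'token-for-' + self.username


class Session:
    """Stand-in for a keystone ``Session``."""
    def __init__(self, auth):
        self.auth = auth
        self._token = None

    def get_token(self):
        if self._token is None:
            self._token = self.auth.get_token()
        return self._token


sess = Session(PasswordAuth('demo', 'secret'))
assert sess.get_token() == 'token-for-demo'
# the cached token is reused on subsequent calls
assert sess.get_token() is sess.get_token()
```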
@ -1,14 +1,14 @@

Setup DevStack
==============

DevStack can be installed on Fedora, Ubuntu, and CentOS. For supported
versions see the `DevStack documentation <http://devstack.org>`_.

We recommend that you install DevStack in a VM, rather than on your main
system. That way you may avoid contamination of your system. You may find
hypervisor and VM requirements in the next section. If you still want to
install DevStack on your baremetal system, just skip the next section and read
further.


Start VM and set up OS
@ -54,7 +54,7 @@ Ubuntu 14.04 system.

    $ sudo apt-get install git-core
    $ git clone https://git.openstack.org/openstack-dev/devstack.git

2. Create the file ``local.conf`` in the devstack directory with the following
   content:

.. sourcecode:: bash
@ -101,14 +101,15 @@ Ubuntu 14.04 system.

In cases where you need to specify a git refspec (branch, tag, or commit hash)
for the sahara in-tree devstack plugin (or sahara repo), it should be
appended to the git repo URL as follows:

.. sourcecode:: bash

    enable_plugin sahara git://git.openstack.org/openstack/sahara <some_git_refspec>

3. Sahara can send notifications to Ceilometer, if Ceilometer is enabled.
   If you want to enable Ceilometer, add the following lines to the
   ``local.conf`` file:

.. sourcecode:: bash
@ -120,20 +121,21 @@ appended after the git repo URL as follows:

    $ ./stack.sh

5. Once the previous step is finished, DevStack will print a Horizon URL.
   Navigate to this URL and log in with the username "admin" and the password
   from ``local.conf``.

6. Congratulations! You have OpenStack running in your VM and you're ready to
   launch VMs inside that VM. :)


Managing sahara in DevStack
---------------------------

If you install DevStack with sahara included you can rejoin screen with the
``rejoin-stack.sh`` command and switch to the ``sahara`` tab. Here you can
manage the sahara service as you would other OpenStack services. Sahara source
code is located at ``$DEST/sahara`` which is usually ``/opt/stack/sahara``.


.. _fusion-fixed-ip:
@ -172,4 +174,4 @@ Setting fixed IP address for VMware Fusion VM

    $ sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop
    $ sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --start

7. Now start your VM; it should have a new fixed IP address.
@ -1,14 +1,13 @@

Elastic Data Processing (EDP) SPI
=================================

The EDP job engine objects provide methods for creating, monitoring, and
terminating jobs on Sahara clusters. Provisioning plugins that support EDP
must return an EDP job engine object from the :ref:`get_edp_engine` method
described in :doc:`plugin.spi`.

Sahara provides subclasses of the base job engine interface that support EDP
on clusters running Oozie, Spark, and/or Storm. These are described below.

.. _edp_spi_job_types:
@ -25,8 +24,10 @@ values for job types:

* MapReduce.Streaming
* Spark
* Shell
* Storm

.. note::
    Constants for job types are defined in *sahara.utils.edp*.

Job Status Values
-----------------
@ -61,7 +62,7 @@ cancel_job(job_execution)

Stops the running job whose id is stored in the job_execution object.

*Returns*: None if the operation was unsuccessful or an updated job status
value.

get_job_status(job_execution)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -69,7 +70,7 @@ get_job_status(job_execution)

Returns the current status of the job whose id is stored in the job_execution
object.

*Returns*: a job status value.


run_job(job_execution)
@ -77,7 +78,7 @@ run_job(job_execution)

Starts the job described by the job_execution object.

*Returns*: a tuple of the form (job_id, job_status_value, job_extra_info).

* *job_id* is required and must be a string that allows the EDP engine to
  uniquely identify the job.
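The shapes of these three methods can be sketched with a toy in-memory engine. Method names follow the SPI described here, but the class below is purely illustrative: real engines drive Oozie, Spark, or Storm on the cluster, and the status strings are placeholders.

```python
import abc
import uuid


class JobEngine(abc.ABC):
    """Sketch of the base EDP job engine contract."""

    @abc.abstractmethod
    def run_job(self, job_execution):
        """Start the job; return (job_id, job_status_value, job_extra_info)."""

    @abc.abstractmethod
    def get_job_status(self, job_execution):
        """Return the current job status value."""

    @abc.abstractmethod
    def cancel_job(self, job_execution):
        """Stop the job; return None on failure or an updated status."""


class InMemoryJobEngine(JobEngine):
    """A toy engine tracking job status in a dict (illustrative only)."""

    def __init__(self):
        self._statuses = {}

    def run_job(self, job_execution):
        job_id = uuid.uuid4().hex     # must uniquely identify the job
        self._statuses[job_id] = 'RUNNING'
        return job_id, 'RUNNING', None

    def get_job_status(self, job_execution):
        return self._statuses.get(job_execution['engine_job_id'])

    def cancel_job(self, job_execution):
        job_id = job_execution['engine_job_id']
        if job_id not in self._statuses:
            return None               # operation was unsuccessful
        self._statuses[job_id] = 'KILLED'
        return 'KILLED'               # updated job status value


engine = InMemoryJobEngine()
job_id, status, extra = engine.run_job({'id': 'job-exec-1'})
assert status == 'RUNNING' and extra is None
assert engine.cancel_job({'engine_job_id': job_id}) == 'KILLED'
```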
@ -100,8 +101,8 @@ raise an exception.

get_possible_job_config(job_type)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Returns hints used by the Sahara UI to prompt users for values when
configuring and launching a job. Note that no hints are required.

See :doc:`/userdoc/edp` for more information on how configuration values,
parameters, and arguments are used by different job types.
@ -123,7 +124,7 @@ get_supported_job_types()

This method returns the job types that the engine supports. Not all engines
will support all job types.

*Returns*: a list of job types supported by the engine.

Oozie Job Engine Interface
--------------------------
@ -132,8 +133,8 @@ The sahara.service.edp.oozie.engine.OozieJobEngine class is derived from

JobEngine. It provides implementations for all of the methods in the base
interface but adds a few more abstract methods.

Note that the *validate_job_execution(cluster, job, data)* method does basic
checks on the job configuration but probably should be overloaded to include
additional checks on the cluster configuration. For example, the job engines
for plugins that support Oozie add checks to make sure that the Oozie service
is up and running.
@ -145,7 +146,7 @@ get_hdfs_user()

Oozie uses HDFS to distribute job files. This method gives the name of the
account that is used on the data nodes to access HDFS (such as 'hadoop' or
'hdfs'). The Oozie job engine expects that HDFS contains a directory for this
user under */user/*.

*Returns*: a string giving the username for the account used to access HDFS on
the cluster.
@ -170,8 +171,8 @@ get_oozie_server_uri(cluster)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Returns the full URI for the Oozie server, for example
*http://my_oozie_host:11000/oozie*. This URI is used by an Oozie client to
send commands and queries to the Oozie server.

*Returns*: a string giving the Oozie server URI.
@ -179,9 +180,10 @@ commands and queries to the Oozie server.

get_oozie_server(self, cluster)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Returns the node instance for the host in the cluster running the Oozie
server.

*Returns*: a node instance.


get_name_node_uri(self, cluster)
@ -198,7 +200,7 @@ get_resource_manager_uri(self, cluster)

Returns the full URI for the Hadoop JobTracker for Hadoop version 1 or the
Hadoop ResourceManager for Hadoop version 2.

*Returns*: a string giving the JobTracker or ResourceManager URI.

Spark Job Engine
----------------
@ -206,11 +208,12 @@ Spark Job Engine

The sahara.service.edp.spark.engine.SparkJobEngine class provides a full EDP
implementation for Spark standalone clusters.

.. note::
    The *validate_job_execution(cluster, job, data)* method does basic
    checks on the job configuration but probably should be overloaded to
    include additional checks on the cluster configuration. For example, the
    job engine returned by the Spark plugin checks that the Spark version is
    >= 1.0.0 to ensure that *spark-submit* is available.

get_driver_classpath(self)
~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -218,4 +221,4 @@ get_driver_classpath(self)

Returns the driver class path.

*Returns*: a string of the following format ' --driver-class-path
*class_path_value*'.
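The return format above can be illustrated with a small helper. This is a hypothetical function, not sahara's implementation; the jar path is made up.

```python
# Hypothetical helper illustrating get_driver_classpath()'s return
# format: a leading space, the flag, then the class path value, ready
# to be appended to a spark-submit command line.
def format_driver_classpath(class_path_value):
    return ' --driver-class-path ' + class_path_value


assert (format_driver_classpath('/opt/spark/extra.jar')
        == ' --driver-class-path /opt/spark/extra.jar')
```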
@ -1,16 +1,14 @@

Code Reviews with Gerrit
========================

Sahara uses the `Gerrit`_ tool to review proposed code changes. The review
site is http://review.openstack.org.

Gerrit is a complete replacement for Github pull requests. `All Github pull
requests to the Sahara repository will be ignored`.

See `Development Workflow`_ for information about how to get
started using Gerrit.

.. _Gerrit: http://code.google.com/p/gerrit
.. _Development Workflow: http://docs.openstack.org/infra/manual/developers.html#development-workflow
@ -24,16 +24,18 @@ To build Oozie the following command can be used:

    $ {oozie_dir}/bin/mkdistro.sh -DskipTests

By default it builds against Hadoop 1.1.1. To build it with Hadoop version
2.x:

* The hadoop-2 version should be changed in pom.xml.
  This can be done manually or with the following command (you should
  replace 2.x.x with your hadoop version):

.. sourcecode:: console

    $ find . -name pom.xml | xargs sed -ri 's/2.3.0/2.x.x/'

* The build command should be launched with the ``-P hadoop-2`` flag

JDK Versions
------------
@ -44,12 +46,12 @@ There are 2 build properties that can be used to change the JDK version

requirements:

* ``javaVersion`` specifies the version of the JDK used to compile (default
  1.6).

* ``targetJavaVersion`` specifies the version of the generated bytecode
  (default 1.6).

For example, to specify JDK version 1.7, the build command should contain the
``-D javaVersion=1.7 -D targetJavaVersion=1.7`` flags.
@ -57,16 +59,16 @@ For example, to specify 1.7 JDK version, build command should contain

Build
-----

To build Oozie with Hadoop 2.6.0 and JDK version 1.7, the following command
can be used:

.. sourcecode:: console

    $ {oozie_dir}/bin/mkdistro.sh assembly:single -P hadoop-2 -D javaVersion=1.7 -D targetJavaVersion=1.7 -D skipTests

Also, the pig version can be passed as a maven property with the flag
``-D pig.version=x.x.x``.

You can find similar instructions to build oozie.tar.gz here:
http://oozie.apache.org/docs/4.0.0/DG_QuickStart.html#Building_Oozie
@ -4,7 +4,7 @@ How to Participate

Getting started
---------------

* Create an account on `Github <https://github.com/openstack/sahara>`_
  (if you don't have one)

* Make sure that your local git is properly configured by executing
@ -29,25 +29,29 @@ Getting started

  * Go to ``watched projects``
  * Add ``openstack/sahara``, ``openstack/sahara-extra``,
    ``openstack/python-saharaclient``, and ``openstack/sahara-image-elements``


How to stay in touch with the community
---------------------------------------

* If you have something to discuss use the
  `OpenStack development mail-list <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>`_.
  Prefix the mail subject with ``[Sahara]``

* Join the ``#openstack-sahara`` IRC channel on `freenode <http://freenode.net/>`_

* Attend Sahara team meetings

  * Weekly on Thursdays at 1400 UTC and 1800 UTC (on alternate weeks)

  * IRC channel: ``#openstack-meeting-alt`` (1800 UTC) or
    ``#openstack-meeting-3`` (1400 UTC)

  * See the agenda at https://wiki.openstack.org/wiki/Meetings/SaharaAgenda

How to post your first patch for review
---------------------------------------

* Checkout Sahara code from `Github <https://github.com/openstack/sahara>`_
|
|||||||
* Make sure that your code passes ``PEP8`` checks and unit-tests.
|
* Make sure that your code passes ``PEP8`` checks and unit-tests.
|
||||||
See :doc:`development.guidelines`
|
See :doc:`development.guidelines`
|
||||||
|
|
||||||
* Send your patch on review
|
* Post your patch for review
|
||||||
|
|
||||||
* Monitor status of your patch review on https://review.openstack.org/#/
|
* Monitor the status of your patch review on https://review.openstack.org/#/
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
@ -7,7 +7,7 @@ feature will enable your plugin to:
|
|||||||
|
|
||||||
* Validate that images passed to it for use in cluster provisioning meet its
|
* Validate that images passed to it for use in cluster provisioning meet its
|
||||||
specifications.
|
specifications.
|
||||||
* Enable your plugin to provision images from "clean" (OS-only) images.
|
* Provision images from "clean" (OS-only) images.
|
||||||
* Pack pre-populated images for registration in Glance and use by Sahara.
|
* Pack pre-populated images for registration in Glance and use by Sahara.
|
||||||
|
|
||||||
All of these features can use the same image declaration, meaning that logic
|
All of these features can use the same image declaration, meaning that logic
|
||||||
@ -66,7 +66,7 @@ base image.
|
|||||||
This CLI will automatically populate the set of available plugins and
|
This CLI will automatically populate the set of available plugins and
|
||||||
versions from the plugin set loaded in Sahara, and will show any plugin for
|
versions from the plugin set loaded in Sahara, and will show any plugin for
|
||||||
which the image packing feature is available. The next sections of this guide
|
which the image packing feature is available. The next sections of this guide
|
||||||
will describe, first, how to modify an image packing specification for one
|
will first describe how to modify an image packing specification for one
|
||||||
of the plugins, and second, how to enable the image packing feature for new
|
of the plugins, and second, how to enable the image packing feature for new
|
||||||
or existing plugins.
|
or existing plugins.
|
||||||
|
|
||||||
@ -340,7 +340,8 @@ The Argument Set Validator
|
|||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
You may find that you wish to store state in one place in the specification
|
You may find that you wish to store state in one place in the specification
|
||||||
for use in another. In this case, you can use this validator to
|
for use in another. In this case, you can use this validator to set an
|
||||||
|
argument for future use.
|
||||||
|
|
||||||
::
|
::
|
||||||
|
|
||||||
|
@ -2,7 +2,7 @@ Continuous Integration with Jenkins
|
|||||||
===================================
|
===================================
|
||||||
|
|
||||||
Each change made to Sahara core code is tested with unit and integration tests
|
Each change made to Sahara core code is tested with unit and integration tests
|
||||||
and style checks flake8.
|
and style checks using flake8.
|
||||||
|
|
||||||
Unit tests and style checks are performed on public `OpenStack Jenkins
|
Unit tests and style checks are performed on public `OpenStack Jenkins
|
||||||
<https://jenkins.openstack.org/>`_ managed by `Zuul
|
<https://jenkins.openstack.org/>`_ managed by `Zuul
|
||||||
@ -10,10 +10,10 @@ Unit tests and style checks are performed on public `OpenStack Jenkins
|
|||||||
|
|
||||||
Unit tests are checked using python 2.7.
|
Unit tests are checked using python 2.7.
|
||||||
|
|
||||||
The result of those checks and Unit tests are +1 or -1 to *Verify* column in a
|
The results of those checks and unit tests are represented as a vote of +1 or
|
||||||
code review from *Jenkins* user.
|
-1 in the *Verify* column in code reviews from the *Jenkins* user.
|
||||||
|
|
||||||
Integration tests check CRUD operations for Image Registry, Templates and
|
Integration tests check CRUD operations for the Image Registry, Templates, and
|
||||||
Clusters. Also a test job is launched on a created Cluster to verify Hadoop
|
Clusters. Also a test job is launched on a created Cluster to verify Hadoop
|
||||||
work.
|
work.
|
||||||
|
|
||||||
@ -27,15 +27,16 @@ integration testing may take a while.
|
|||||||
Jenkins is controlled for the most part by Zuul which determines what jobs are
|
Jenkins is controlled for the most part by Zuul which determines what jobs are
|
||||||
run when.
|
run when.
|
||||||
|
|
||||||
Zuul status is available by address: `Zuul Status
|
Zuul status is available at this address: `Zuul Status
|
||||||
<https://sahara.mirantis.com/zuul>`_.
|
<https://sahara.mirantis.com/zuul>`_.
|
||||||
|
|
||||||
For more information see: `Sahara Hadoop Cluster CI
|
For more information see: `Sahara Hadoop Cluster CI
|
||||||
<https://wiki.openstack.org/wiki/Sahara/SaharaCI>`_.
|
<https://wiki.openstack.org/wiki/Sahara/SaharaCI>`_.
|
||||||
|
|
||||||
The integration tests result is +1 or -1 to *Verify* column in a code review
|
The integration tests result is represented as a vote of +1 or -1 in the
|
||||||
from *Sahara Hadoop Cluster CI* user.
|
*Verify* column in a code review from the *Sahara Hadoop Cluster CI* user.
|
||||||
|
|
||||||
You can put *sahara-ci-recheck* in comment, if you want to recheck sahara-ci
|
You can put *sahara-ci-recheck* in comment, if you want to recheck sahara-ci
|
||||||
jobs. Also, you can put *recheck* in comment, if you want to recheck both
|
jobs. Also, you can put *recheck* in comment, if you want to recheck both
|
||||||
jenkins and sahara-ci jobs.
|
Jenkins and sahara-ci jobs. Finally, you can put *reverify* in a comment, if
|
||||||
|
you only want to recheck Jenkins jobs.
|
||||||
|
@ -1,8 +1,8 @@
|
|||||||
Project hosting
|
Project hosting
|
||||||
===============
|
===============
|
||||||
|
|
||||||
`Launchpad`_ hosts the Sahara project. The Sahara project homepage on Launchpad
|
`Launchpad`_ hosts the Sahara project. The Sahara project homepage on
|
||||||
is http://launchpad.net/sahara.
|
Launchpad is http://launchpad.net/sahara.
|
||||||
|
|
||||||
Launchpad credentials
|
Launchpad credentials
|
||||||
---------------------
|
---------------------
|
||||||
@ -18,9 +18,10 @@ OpenStack-related sites. These sites include:
|
|||||||
Mailing list
|
Mailing list
|
||||||
------------
|
------------
|
||||||
|
|
||||||
The mailing list email is ``openstack-dev@lists.openstack.org`` with subject
|
The mailing list email is ``openstack-dev@lists.openstack.org``; use the
|
||||||
prefix ``[sahara]``. To participate in the mailing list subscribe to the list
|
subject prefix ``[sahara]`` to address the team. To participate in the
|
||||||
at http://lists.openstack.org/cgi-bin/mailman/listinfo
|
mailing list, subscribe to the list at
|
||||||
|
http://lists.openstack.org/cgi-bin/mailman/listinfo
|
||||||
|
|
||||||
Bug tracking
|
Bug tracking
|
||||||
------------
|
------------
|
||||||
@ -36,7 +37,9 @@ proposed changes and track associated commits. Sahara also uses specs for
|
|||||||
in-depth descriptions and discussions of blueprints. Specs follow a defined
|
in-depth descriptions and discussions of blueprints. Specs follow a defined
|
||||||
format and are submitted as change requests to the openstack/sahara-specs
|
format and are submitted as change requests to the openstack/sahara-specs
|
||||||
repository. Every blueprint should have an associated spec that is agreed
|
repository. Every blueprint should have an associated spec that is agreed
|
||||||
on and merged to the sahara-specs repository before it is approved.
|
on and merged to the sahara-specs repository before it is approved, unless the
|
||||||
|
whole team agrees that the implementation path for the feature described in
|
||||||
|
the blueprint is completely understood.
|
||||||
|
|
||||||
Technical support
|
Technical support
|
||||||
-----------------
|
-----------------
|
||||||
|
@ -28,7 +28,7 @@ log levels:
|
|||||||
Formatting Guidelines
|
Formatting Guidelines
|
||||||
---------------------
|
---------------------
|
||||||
|
|
||||||
Now sahara uses string formatting defined in `PEP 3101`_ for logs.
|
Sahara uses string formatting defined in `PEP 3101`_ for logs.
|
||||||
|
|
||||||
.. code:: python
|
.. code:: python
|
||||||
|
|
||||||
@ -41,8 +41,8 @@ Now sahara uses string formatting defined in `PEP 3101`_ for logs.
|
|||||||
Translation Guidelines
|
Translation Guidelines
|
||||||
----------------------
|
----------------------
|
||||||
|
|
||||||
All log levels except Debug requires translation. None of the separate
|
All log levels except Debug require translation. None of the separate
|
||||||
cli tools packaged with sahara contain log translations.
|
CLI tools packaged with sahara contain log translations.
|
||||||
|
|
||||||
* Debug: no translation
|
* Debug: no translation
|
||||||
* Info: _LI
|
* Info: _LI
|
||||||
|
@ -7,20 +7,21 @@ Plugin interface
|
|||||||
get_versions()
|
get_versions()
|
||||||
~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Returns all versions of Hadoop that could be used with the plugin. It is
|
Returns all available versions of the plugin. Depending on the plugin, this
|
||||||
responsibility of the plugin to make sure that all required images for each
|
version may map directly to the HDFS version, or it may not; check your
|
||||||
hadoop version are available, as well as configs and whatever else that plugin
|
plugin's documentation. It is the responsibility of the plugin to make sure that
|
||||||
needs to create the Hadoop cluster.
|
all required images for each hadoop version are available, as well as configs
|
||||||
|
and whatever else that plugin needs to create the Hadoop cluster.
|
||||||
|
|
||||||
*Returns*: list of strings - Hadoop versions
|
*Returns*: list of strings representing plugin versions
|
||||||
|
|
||||||
*Example return value*: [“1.2.1”, “2.3.0”, “2.4.1”]
|
*Example return value*: [“1.2.1”, “2.3.0”, “2.4.1”]
|
||||||
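In the simplest case ``get_versions()`` just returns a static list. A hedged
sketch follows; ``ExamplePlugin`` and its version numbers are hypothetical and
not taken from any real plugin:

```python
class ExamplePlugin(object):
    """Sketch only: a real plugin extends
    sahara.plugins.provisioning.ProvisioningPluginBase instead of object."""

    # Versions this (hypothetical) plugin knows how to deploy.
    SUPPORTED_VERSIONS = ["1.2.1", "2.3.0", "2.4.1"]

    def get_versions(self):
        # Return a copy so callers cannot mutate the plugin's own list.
        return list(self.SUPPORTED_VERSIONS)
```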
|
|
||||||
get_configs( hadoop_version )
|
get_configs( hadoop_version )
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Lists all configs supported by plugin with descriptions, defaults and targets
|
Lists all configs supported by the plugin with descriptions, defaults, and
|
||||||
for which this config is applicable.
|
targets for which this config is applicable.
|
||||||
|
|
||||||
*Returns*: list of configs
|
*Returns*: list of configs
|
||||||
|
|
||||||
@ -28,7 +29,7 @@ for which this config is applicable.
|
|||||||
MB", "int", “512”, `“mapreduce”`, "node", True, 1))
|
MB", "int", “512”, `“mapreduce”`, "node", True, 1))
|
||||||
|
|
||||||
get_node_processes( hadoop_version )
|
get_node_processes( hadoop_version )
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Returns all supported services and node processes for a given Hadoop version.
|
Returns all supported services and node processes for a given Hadoop version.
|
||||||
Each node process belongs to a single service and that relationship is
|
Each node process belongs to a single service and that relationship is
|
||||||
@ -40,9 +41,9 @@ reflected in the returned dict object. See example for details.
|
|||||||
["datanode", "namenode"]}
|
["datanode", "namenode"]}
|
||||||
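A sketch of how a plugin might build this mapping per version; the service and
process names below are illustrative, and a real plugin extends
``ProvisioningPluginBase``:

```python
class ExamplePlugin(object):
    """Sketch only; service/process names are illustrative."""

    def get_node_processes(self, hadoop_version):
        # Each key is a service; each value is the list of node processes
        # that belong to it. The mapping may vary between versions.
        if hadoop_version.startswith("2."):
            return {"HDFS": ["datanode", "namenode"],
                    "YARN": ["nodemanager", "resourcemanager"]}
        return {"HDFS": ["datanode", "namenode"],
                "MapReduce": ["tasktracker", "jobtracker"]}
```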
|
|
||||||
get_required_image_tags( hadoop_version )
|
get_required_image_tags( hadoop_version )
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Lists tags, that should be added to OpenStack Image via Image Registry. Tags
|
Lists tags that should be added to OpenStack Image via Image Registry. Tags
|
||||||
are used to filter Images by plugin and hadoop version.
|
are used to filter Images by plugin and hadoop version.
|
||||||
|
|
||||||
*Returns*: list of tags
|
*Returns*: list of tags
|
||||||
@ -50,10 +51,10 @@ are used to filter Images by plugin and hadoop version.
|
|||||||
*Example return value*: ["tag1", "some_other_tag", ...]
|
*Example return value*: ["tag1", "some_other_tag", ...]
|
||||||
|
|
||||||
validate( cluster )
|
validate( cluster )
|
||||||
~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Validates a given cluster object. Raises *SaharaException* with meaningful
|
Validates a given cluster object. Raises a *SaharaException* with a meaningful
|
||||||
message.
|
message in the case of validation failure.
|
||||||
|
|
||||||
*Returns*: None
|
*Returns*: None
|
||||||
|
|
||||||
@ -62,7 +63,7 @@ message='Hadoop cluster should contain only 1 NameNode instance. Actual NN
|
|||||||
count is 2' }>
|
count is 2' }>
|
||||||
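A hedged sketch of such a NameNode-count check, using small stand-in classes
for the cluster and node_group objects described under Object Model later in
this document (a real plugin implements this as a method and raises sahara's
own exception type):

```python
class SaharaException(Exception):
    """Stand-in for sahara.exceptions.SaharaException in this sketch."""


class NodeGroup(object):
    # Minimal stand-in for the node_group object.
    def __init__(self, count, node_processes):
        self.count = count
        self.node_processes = node_processes


class Cluster(object):
    # Minimal stand-in for the cluster object.
    def __init__(self, node_groups):
        self.node_groups = node_groups


def validate(cluster):
    # Count NameNode instances across all node groups and reject any
    # cluster that does not have exactly one.
    nn_count = sum(ng.count for ng in cluster.node_groups
                   if "namenode" in ng.node_processes)
    if nn_count != 1:
        raise SaharaException(
            "Hadoop cluster should contain only 1 NameNode instance. "
            "Actual NN count is %d" % nn_count)
```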
|
|
||||||
validate_scaling( cluster, existing, additional )
|
validate_scaling( cluster, existing, additional )
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
To be improved.
|
To be improved.
|
||||||
|
|
||||||
@ -71,37 +72,35 @@ Validates a given cluster before scaling operation.
|
|||||||
*Returns*: list of validation_errors
|
*Returns*: list of validation_errors
|
||||||
|
|
||||||
update_infra( cluster )
|
update_infra( cluster )
|
||||||
~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Plugin has a chance to change cluster description here. Specifically, plugin
|
This method is no longer used now that Sahara utilizes Heat for OpenStack
|
||||||
must specify image for VMs
|
resource provisioning, and is not currently utilized by any plugin.
|
||||||
could change VMs specs in any way it needs.
|
|
||||||
For instance, plugin can ask for additional VMs for the management tool.
|
|
||||||
|
|
||||||
*Returns*: None
|
*Returns*: None
|
||||||
|
|
||||||
configure_cluster( cluster )
|
configure_cluster( cluster )
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Configures cluster on provisioned by sahara VMs. In this function plugin
|
Configures cluster on the VMs provisioned by sahara. In this function the
|
||||||
should perform all actions like adjusting OS, installing required packages
|
plugin should perform all actions like adjusting OS, installing required
|
||||||
(including Hadoop, if needed), configuring Hadoop, etc.
|
packages (including Hadoop, if needed), configuring Hadoop, etc.
|
||||||
|
|
||||||
*Returns*: None
|
*Returns*: None
|
||||||
|
|
||||||
start_cluster( cluster )
|
start_cluster( cluster )
|
||||||
~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Start already configured cluster. This method is guaranteed to be called only
|
Start already configured cluster. This method is guaranteed to be called only
|
||||||
on cluster which was already prepared with configure_cluster(...) call.
|
on a cluster which was already prepared with configure_cluster(...) call.
|
||||||
|
|
||||||
*Returns*: None
|
*Returns*: None
|
||||||
|
|
||||||
scale_cluster( cluster, instances )
|
scale_cluster( cluster, instances )
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Scale an existing cluster with additional instances. Instances argument is a
|
Scale an existing cluster with additional instances. The instances argument is
|
||||||
list of ready-to-configure instances. Plugin should do all configuration
|
a list of ready-to-configure instances. Plugin should do all configuration
|
||||||
operations in this method and start all services on those instances.
|
operations in this method and start all services on those instances.
|
||||||
|
|
||||||
*Returns*: None
|
*Returns*: None
|
||||||
@ -109,7 +108,7 @@ operations in this method and start all services on those instances.
|
|||||||
.. _get_edp_engine:
|
.. _get_edp_engine:
|
||||||
|
|
||||||
get_edp_engine( cluster, job_type )
|
get_edp_engine( cluster, job_type )
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Returns an EDP job engine object that supports the specified job_type on the
|
Returns an EDP job engine object that supports the specified job_type on the
|
||||||
given cluster, or None if there is no support. The EDP job engine object
|
given cluster, or None if there is no support. The EDP job engine object
|
||||||
@ -120,49 +119,74 @@ job_type is a String matching one of the job types listed in
|
|||||||
*Returns*: an EDP job engine object or None
|
*Returns*: an EDP job engine object or None
|
||||||
|
|
||||||
decommission_nodes( cluster, instances )
|
decommission_nodes( cluster, instances )
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Scale cluster down by removing a list of instances. Plugin should stop services
|
Scale cluster down by removing a list of instances. The plugin should stop
|
||||||
on a provided list of instances. Plugin also may want to update some
|
services on the provided list of instances. The plugin also may need to update
|
||||||
configurations on other instances, so this method is the right place to do
|
some configurations on other instances when nodes are removed; if so, this
|
||||||
that.
|
method must perform that reconfiguration.
|
||||||
|
|
||||||
*Returns*: None
|
*Returns*: None
|
||||||
|
|
||||||
on_terminate_cluster( cluster )
|
on_terminate_cluster( cluster )
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
When user terminates cluster, sahara simply shuts down all the cluster VMs.
|
When user terminates cluster, sahara simply shuts down all the cluster VMs.
|
||||||
This method is guaranteed to be invoked before that, allowing plugin to do some
|
This method is guaranteed to be invoked before that, allowing the plugin to do
|
||||||
clean-up.
|
some clean-up.
|
||||||
|
|
||||||
*Returns*: None
|
*Returns*: None
|
||||||
|
|
||||||
get_open_ports( node_group )
|
get_open_ports( node_group )
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
When user requests sahara to automatically create security group for the node
|
When a user requests sahara to automatically create a security group for the
|
||||||
group (``auto_security_group`` property set to True), sahara will call this
|
node group (``auto_security_group`` property set to True), sahara will call
|
||||||
plugin method to get list of ports that need to be opened.
|
this plugin method to get a list of ports that need to be opened.
|
||||||
|
|
||||||
*Returns*: list of ports to be open in auto security group for the given node
|
*Returns*: list of ports to be open in auto security group for the given node
|
||||||
group
|
group
|
||||||
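A sketch of one way to derive that list from the node group's processes. The
port numbers and the ``PORTS_BY_PROCESS`` table are illustrative assumptions,
not sahara API; a real plugin computes ports from the services it deploys:

```python
# Illustrative port numbers only.
PORTS_BY_PROCESS = {
    "namenode": [50070, 9000],
    "datanode": [50075, 50010],
    "resourcemanager": [8088, 8032],
}


def get_open_ports(node_group):
    # Collect the union of ports needed by every process in the group.
    ports = set()
    for process in node_group.node_processes:
        ports.update(PORTS_BY_PROCESS.get(process, []))
    return sorted(ports)
```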
|
|
||||||
def get_edp_job_types( versions )
|
def get_edp_job_types( versions )
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Optional method, which provides ability to see all supported job types for
|
Optional method, which provides the ability to see all supported job types for
|
||||||
specified plugin versions
|
the specified plugin versions.
|
||||||
|
|
||||||
*Returns*: dict with supported job types for specified versions of plugin
|
*Returns*: dict with supported job types for specified versions of plugin
|
||||||
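A minimal sketch of the shape of this method; the version numbers and job
type names are hypothetical examples, not taken from a specific plugin:

```python
def get_edp_job_types(versions=None):
    # Supported EDP job types per plugin version (illustrative values).
    all_jobs = {
        "1.2.1": ["Hive", "Java", "MapReduce", "Pig"],
        "2.4.1": ["Hive", "Java", "MapReduce", "Pig", "Shell"],
    }
    if versions is None:
        return all_jobs
    # Restrict the answer to the versions the caller asked about.
    return {v: jobs for v, jobs in all_jobs.items() if v in versions}
```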
|
|
||||||
def recommend_configs( self, cluster, scaling=False )
|
def recommend_configs( self, cluster, scaling=False )
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Optional method, which provides recommendations for cluster configuration
|
Optional method, which provides recommendations for cluster configuration
|
||||||
before creating/scaling operation.
|
before creating/scaling operation.
|
||||||
|
|
||||||
*Returns*: None
|
def get_image_arguments( self, hadoop_version ):
|
||||||
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
|
Optional method, which gets the argument set taken by the plugin's image
|
||||||
|
generator, or NotImplemented if the plugin does not provide image generation
|
||||||
|
support. See :doc:`image-gen`.
|
||||||
|
|
||||||
|
*Returns*: A sequence with items of type sahara.plugins.images.ImageArgument.
|
||||||
|
|
||||||
|
def pack_image( self, hadoop_version, remote, reconcile=True, ... ):
|
||||||
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
|
Optional method which packs an image for registration in Glance and use by
|
||||||
|
Sahara. This method is called from the image generation CLI rather than from
|
||||||
|
the Sahara api or engine service. See :doc:`image-gen`.
|
||||||
|
|
||||||
|
*Returns*: None (modifies the image pointed to by the remote in place).
|
||||||
|
|
||||||
|
def validate_images( self, cluster, reconcile=True, image_arguments=None ):
|
||||||
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
|
Validates the image to be used to create a cluster, to ensure that it meets
|
||||||
|
the specifications of the plugin. See :doc:`image-gen`.
|
||||||
|
|
||||||
|
*Returns*: None; may raise a sahara.plugins.exceptions.ImageValidationError
|
||||||
|
|
||||||
|
|
||||||
Object Model
|
Object Model
|
||||||
============
|
============
|
||||||
@ -171,9 +195,9 @@ Here is a description of all the objects involved in the API.
|
|||||||
|
|
||||||
Notes:
|
Notes:
|
||||||
|
|
||||||
- cluster and node_group have ‘extra’ field allowing plugin to persist any
|
- clusters and node_groups have ‘extra’ fields allowing the plugin to
|
||||||
complementary info about the cluster.
|
persist any supplementary info about the cluster.
|
||||||
- node_process is just a process that runs at some node in cluster.
|
- node_process is just a process that runs on some node in cluster.
|
||||||
|
|
||||||
Example list of node processes:
|
Example list of node processes:
|
||||||
|
|
||||||
@ -212,7 +236,7 @@ An object, describing one configuration entry
|
|||||||
| scope | enum | Could be either 'node' or 'cluster'. |
|
| scope | enum | Could be either 'node' or 'cluster'. |
|
||||||
+-------------------+--------+------------------------------------------------+
|
+-------------------+--------+------------------------------------------------+
|
||||||
| is_optional | bool | If is_optional is False and no default_value |
|
| is_optional | bool | If is_optional is False and no default_value |
|
||||||
| | | is specified, user should provide a value |
|
| | | is specified, user must provide a value. |
|
||||||
+-------------------+--------+------------------------------------------------+
|
+-------------------+--------+------------------------------------------------+
|
||||||
| priority | int | 1 or 2. A Hint for UI. Configs with priority |
|
| priority | int | 1 or 2. A Hint for UI. Configs with priority |
|
||||||
| | | *1* are always displayed. |
|
| | | *1* are always displayed. |
|
||||||
@ -245,7 +269,7 @@ An instance created for cluster.
|
|||||||
+===============+=========+===================================================+
|
+===============+=========+===================================================+
|
||||||
| instance_id | string | Unique instance identifier. |
|
| instance_id | string | Unique instance identifier. |
|
||||||
+---------------+---------+---------------------------------------------------+
|
+---------------+---------+---------------------------------------------------+
|
||||||
| instance_name | string | OpenStack Instance name. |
|
| instance_name | string | OpenStack instance name. |
|
||||||
+---------------+---------+---------------------------------------------------+
|
+---------------+---------+---------------------------------------------------+
|
||||||
| internal_ip | string | IP to communicate with other instances. |
|
| internal_ip | string | IP to communicate with other instances. |
|
||||||
+---------------+---------+---------------------------------------------------+
|
+---------------+---------+---------------------------------------------------+
|
||||||
@ -255,7 +279,7 @@ An instance created for cluster.
|
|||||||
| volumes | list | List of volumes attached to instance. Empty if |
|
| volumes | list | List of volumes attached to instance. Empty if |
|
||||||
| | | ephemeral drive is used. |
|
| | | ephemeral drive is used. |
|
||||||
+---------------+---------+---------------------------------------------------+
|
+---------------+---------+---------------------------------------------------+
|
||||||
| nova_info | object | Nova Instance object. |
|
| nova_info | object | Nova instance object. |
|
||||||
+---------------+---------+---------------------------------------------------+
|
+---------------+---------+---------------------------------------------------+
|
||||||
| username | string | Username, that sahara uses for establishing |
|
| username | string | Username, that sahara uses for establishing |
|
||||||
| | | remote connections to instance. |
|
| | | remote connections to instance. |
|
||||||
@ -265,7 +289,7 @@ An instance created for cluster.
|
|||||||
| fqdn | string | Fully qualified domain name for this instance. |
|
| fqdn | string | Fully qualified domain name for this instance. |
|
||||||
+---------------+---------+---------------------------------------------------+
|
+---------------+---------+---------------------------------------------------+
|
||||||
| remote | helpers | Object with helpers for performing remote |
|
| remote | helpers | Object with helpers for performing remote |
|
||||||
| | | operations |
|
| | | operations. |
|
||||||
+---------------+---------+---------------------------------------------------+
|
+---------------+---------+---------------------------------------------------+
|
||||||
|
|
||||||
|
|
||||||
|
@ -1,22 +1,23 @@
|
|||||||
Pluggable Provisioning Mechanism
|
Pluggable Provisioning Mechanism
|
||||||
================================
|
================================
|
||||||
|
|
||||||
Sahara could be integrated with 3rd party management tools like Apache Ambari
|
Sahara can be integrated with 3rd party management tools like Apache Ambari
|
||||||
and Cloudera Management Console. The integration is achieved using plugin
|
and Cloudera Management Console. The integration is achieved using the plugin
|
||||||
mechanism.
|
mechanism.
|
||||||
|
|
||||||
In short, responsibilities are divided between Sahara core and plugin as
|
In short, responsibilities are divided between the Sahara core and a plugin as
|
||||||
follows. Sahara interacts with user and provisions infrastructure (VMs).
|
follows. Sahara interacts with the user and uses Heat to provision OpenStack
|
||||||
Plugin installs and configures Hadoop cluster on the VMs. Optionally Plugin
|
resources (VMs, baremetal servers, security groups, etc.). The plugin installs
|
||||||
could deploy management and monitoring tools for the cluster. Sahara
|
and configures a Hadoop cluster on the provisioned instances. Optionally,
|
||||||
provides plugin with utility methods to work with VMs.
|
a plugin can deploy management and monitoring tools for the cluster. Sahara
|
||||||
|
provides plugins with utility methods to work with provisioned instances.
|
||||||
|
|
||||||
A plugin must extend `sahara.plugins.provisioning:ProvisioningPluginBase`
|
A plugin must extend the `sahara.plugins.provisioning:ProvisioningPluginBase`
|
||||||
class and implement all the required methods. Read :doc:`plugin.spi` for
|
class and implement all the required methods. Read :doc:`plugin.spi` for
|
||||||
details.
|
details.
|
||||||
|
|
||||||
The `instance` objects provided by Sahara have `remote` property which
|
The `instance` objects provided by Sahara have a `remote` property which
|
||||||
could be used to work with VM. The `remote` is a context manager so you
|
can be used to interact with instances. The `remote` is a context manager so
|
||||||
can use it in `with instance.remote:` statements. The list of available
|
you can use it in `with instance.remote:` statements. The list of available
|
||||||
commands could be found in `sahara.utils.remote.InstanceInteropHelper`.
|
commands can be found in `sahara.utils.remote.InstanceInteropHelper`.
|
||||||
See Vanilla plugin source for usage examples.
|
See the source code of the Vanilla plugin for usage examples.
|
||||||
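The pattern looks roughly like the sketch below. ``FakeInstance`` and
``FakeRemote`` are stand-ins so the example is self-contained; the real helper
is ``sahara.utils.remote.InstanceInteropHelper``, and the exact spelling
(``with instance.remote:`` vs ``with instance.remote() as r:``) has varied
between sahara versions:

```python
import contextlib


class FakeRemote(object):
    """Stand-in for InstanceInteropHelper for illustration only."""

    def execute_command(self, cmd):
        # The real helper runs `cmd` on the instance over SSH and
        # returns an (exit_code, stdout) pair.
        return 0, "Linux"


class FakeInstance(object):
    """Stand-in for the instance objects sahara hands to plugins."""

    @contextlib.contextmanager
    def remote(self):
        # Using the remote as a context manager guarantees the
        # underlying connection is released when the block exits.
        yield FakeRemote()


instance = FakeInstance()
with instance.remote() as r:
    code, out = r.execute_command("uname -s")
```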
|
@ -54,7 +54,7 @@ choice.
|
|||||||
$ ssh user@hostname
|
$ ssh user@hostname
|
||||||
$ wget http://sahara-files.mirantis.com/images/upstream/<openstack_release>/<sahara_image>.qcow2
|
$ wget http://sahara-files.mirantis.com/images/upstream/<openstack_release>/<sahara_image>.qcow2
|
||||||
|
|
||||||
Upload the above downloaded image into the OpenStack Image service:
|
Upload the image downloaded above into the OpenStack Image service:
|
||||||
|
|
||||||
.. sourcecode:: console
|
.. sourcecode:: console
|
||||||
|
|
||||||
@ -87,7 +87,7 @@ OR
|
|||||||
|
|
||||||
* Build the image using: `diskimage-builder script <https://github.com/openstack/sahara-image-elements/blob/master/diskimage-create/README.rst>`_
|
* Build the image using: `diskimage-builder script <https://github.com/openstack/sahara-image-elements/blob/master/diskimage-create/README.rst>`_
|
||||||
|
|
||||||
Remember the image name or save the image ID, this will be used during the
|
Remember the image name or save the image ID. This will be used during the
|
||||||
image registration with sahara. You can get the image ID using the
|
image registration with sahara. You can get the image ID using the
|
||||||
``openstack`` command line tool as follows:
|
``openstack`` command line tool as follows:
|
||||||
|
|
||||||
@ -106,8 +106,18 @@ image registration with sahara. You can get the image ID using the
|
|||||||
Now you will begin to interact with sahara by registering the virtual
|
Now you will begin to interact with sahara by registering the virtual
|
||||||
machine image in the sahara image registry.
|
machine image in the sahara image registry.
|
||||||
|
|
||||||
Register the image with the username ``ubuntu``. *Note, the username
|
Register the image with the username ``ubuntu``.
|
||||||
will vary depending on the source image used, for more please see*
|
|
||||||
|
.. note::
|
||||||
|
The username will vary depending on the source image used, as follows:
|
||||||
|
Ubuntu: ``ubuntu``
|
||||||
|
CentOS 7: ``centos``
|
||||||
|
CentOS 6: ``cloud-user``
|
||||||
|
Fedora: ``fedora``
|
||||||
|
Note that the Sahara team recommends using CentOS 7 instead of CentOS 6 as
|
||||||
|
a base OS wherever possible; it is better supported throughout the OpenStack
|
||||||
|
image maintenance infrastructure and its more modern filesystem is much
|
||||||
|
more appropriate for large-scale data processing. For more please see
|
||||||
:doc:`../userdoc/vanilla_plugin`
|
:doc:`../userdoc/vanilla_plugin`
|
||||||
|
|
||||||
.. sourcecode:: console
Tag the image to inform sahara about the plugin and the version with which
it shall be used.

.. note::
    For the steps below and the rest of this guide, substitute
    ``<plugin_version>`` with the appropriate version of your plugin.
.. sourcecode:: console
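As a sketch (assuming the ``openstack dataprocessing`` commands provided by
python-saharaclient; the image name ``sahara-vanilla`` is illustrative),
tagging looks roughly like:

.. sourcecode:: console

    $ openstack dataprocessing image tags add sahara-vanilla --tags vanilla <plugin_version>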
| YARN | nodemanager, resourcemanager |
+---------------------+-----------------------------------------------------------------------------------------------------------------------+
.. note::
    These commands assume that floating IP addresses are being used. For more
    details on floating IPs please see :ref:`floating_ip_management`.

Create a master node group template with the command:
Alternatively you can create node group templates from JSON files:

If your environment does not use floating IPs, omit defining floating IP in
the template below.
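A minimal sketch of such a JSON file (the field values here are illustrative
assumptions, not the shipped sample templates):

.. sourcecode:: json

    {
        "name": "vanilla-default-worker",
        "flavor_id": "2",
        "plugin_name": "vanilla",
        "hadoop_version": "<plugin_version>",
        "node_processes": ["datanode", "nodemanager"],
        "auto_security_group": true,
        "floating_ip_pool": "<pool-or-network-id>"
    }

If floating IPs are not used in your environment, leave the
``floating_ip_pool`` key out entirely.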

Sample templates can be found here:
| vanilla-default-worker | 6546bf44-0590-4539-bfcb-99f8e2c11efc | vanilla | <plugin_version> |
+------------------------+--------------------------------------+-------------+--------------------+
Remember the name or save the ID for the master and worker node group
templates, as they will be used during cluster template creation.

For example:
| Version | <plugin_version> |
+----------------------------+----------------------------------------------------+
Alternatively you can create a cluster template from a JSON file:

Create a file named ``my_cluster_create.json`` with the following content:
$ openstack keypair create my_stack --public-key $PATH_TO_PUBLIC_KEY
If sahara is configured to use neutron for networking, you will also need to
include the ``--neutron-network`` argument in the ``cluster create`` command
or the ``neutron_management_network`` parameter in ``my_cluster_create.json``.
If your environment does not use neutron, you should omit these arguments. You
can determine the neutron network id with the following command:
.. sourcecode:: console
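For example, the ``openstack`` client can list the available networks together
with their ids:

.. sourcecode:: console

    $ openstack network list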
The cluster creation operation may take several minutes to complete. During
this time the "status" returned from the previous command may show states
other than ``Active``. A cluster can also be created with the ``wait`` flag.
In that case the cluster creation command will not finish until the cluster
has moved to the ``Active`` state.
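For example, to poll the current status (the cluster name ``my-cluster-1`` is
illustrative):

.. sourcecode:: console

    $ openstack dataprocessing cluster show my-cluster-1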
8. Run a MapReduce job to check Hadoop installation
---------------------------------------------------
Check that your Hadoop installation is working properly by running an
example job on the cluster manually.
* Login to the NameNode (usually the master node) via ssh with the ssh-key
  used above:
.. sourcecode:: console
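Once logged in, a common smoke test is the Pi estimator job from the stock
Hadoop examples (a sketch only; the examples jar path varies by plugin and
version):

.. sourcecode:: console

    $ hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 10 100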
We have a bunch of different tests for Sahara.
Unit Tests
++++++++++
In most Sahara sub-repositories we have a directory that contains Python unit
tests, located at `_package_/tests/unit` or `_package_/tests`.
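These unit tests are normally run through tox, as in other OpenStack projects;
a sketch (the environment names depend on the repository's ``tox.ini``, and the
test path shown is illustrative):

.. sourcecode:: console

    $ tox -e py27                                 # run the Python 2.7 unit tests
    $ tox -e pep8                                 # style checks
    $ tox -e py27 -- sahara.tests.unit.conductor  # run a subset by name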
Scenario integration tests
++++++++++++++++++++++++++
New scenario integration tests were implemented for Sahara. They are available
in the sahara-tests repository
(https://git.openstack.org/cgit/openstack/sahara-tests).
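Assuming the ``sahara-scenario`` runner that sahara-tests provides, an
invocation looks roughly like this (the scenario YAML file name is
illustrative; see the sahara-tests documentation for the exact options):

.. sourcecode:: console

    $ sahara-scenario /path/to/my-scenario.yaml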
Tempest tests
+++++++++++++
Sahara has a Tempest plugin in the sahara-tests repository covering all major
API features.

Additional tests
++++++++++++++++

Additional tests reside in the sahara-tests repository (as above):

* REST API tests check that the Sahara REST API works.
  The only parts that are not tested are cluster creation and EDP. For more
  info about api tests see
  http://docs.openstack.org/developer/tempest/field_guide/api.html

* CLI tests check read-only operations using the Sahara CLI. For more info see
  http://docs.openstack.org/developer/tempest/field_guide/cli.html
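With sahara-tests installed alongside Tempest, the Sahara tests can be selected
by a name filter; a sketch (the regex is illustrative):

.. sourcecode:: console

    $ tempest run --regex sahara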