Merge "Renames all doc references from Savanna to Sahara"

Jenkins 2014-03-17 19:23:42 +00:00 committed by Gerrit Code Review
commit c429c103cf
35 changed files with 404 additions and 404 deletions

View File

@ -1,7 +1,7 @@
<h3>Useful Links</h3>
<ul>
<li><a href="https://wiki.openstack.org/wiki/Savanna">Savanna @ OpenStack Wiki</a></li>
<li><a href="https://launchpad.net/savanna">Savanna @ Launchpad</a></li>
<li><a href="https://wiki.openstack.org/wiki/Sahara">Sahara @ OpenStack Wiki</a></li>
<li><a href="https://launchpad.net/sahara">Sahara @ Launchpad</a></li>
</ul>
{% if READTHEDOCS %}

View File

@ -1,7 +1,7 @@
Architecture
============
.. image:: images/savanna-architecture.png
.. image:: images/sahara-architecture.png
:width: 800 px
:scale: 99 %
:align: left

View File

@ -55,8 +55,8 @@ source_suffix = '.rst'
master_doc = 'index'
# General information about the project.
project = u'Savanna'
copyright = u'2013, OpenStack Foundation'
project = u'Sahara'
copyright = u'2014, OpenStack Foundation'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
@ -122,7 +122,7 @@ if on_rtd:
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
html_title = 'Savanna'
html_title = 'Sahara'
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
@ -189,7 +189,7 @@ html_sidebars = {
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'SavannaDoc'
htmlhelp_basename = 'SaharaDoc'
# -- Options for LaTeX output --------------------------------------------------
@ -238,7 +238,7 @@ latex_documents = [
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'savanna', u'Savanna',
('index', 'sahara', u'Sahara',
[u'OpenStack Foundation'], 1)
]
@ -252,8 +252,8 @@ man_pages = [
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'Savanna', u'Savanna',
u'OpenStack Foundation', 'Savanna', 'Savanna',
('index', 'Sahara', u'Sahara',
u'OpenStack Foundation', 'Sahara', 'Sahara',
'Miscellaneous'),
]

View File

@ -1,24 +1,24 @@
Setting Up a Development Environment
====================================
This page describes how to point a local running Savanna instance to remote OpenStack.
This page describes how to point a locally running Sahara instance at a remote OpenStack.
You should be able to debug and test your changes without having to deploy.
Setup Local Environment with Savanna inside DevStack
----------------------------------------------------
Setup Local Environment with Sahara inside DevStack
---------------------------------------------------
The easiest way to have local Savanna environment with DevStack is to include
Savanna component in DevStack.
The easiest way to get a local Sahara environment with DevStack is to include
the Sahara component in DevStack.
.. toctree::
:maxdepth: 1
devstack
After you install DevStack with Savanna included you can rejoin screen with
``rejoin-stack.sh`` command and switch to ``savanna`` tab. Here you can manage
savanna service as other OpenStack services. Savanna source code is located at
``$DEST/savanna`` which is usually ``/opt/stack/savanna``.
After you install DevStack with Sahara included, you can rejoin the screen session with
the ``rejoin-stack.sh`` command and switch to the ``sahara`` tab. There you can manage the
sahara service like any other OpenStack service. The Sahara source code is located at
``$DEST/sahara``, which is usually ``/opt/stack/sahara``.
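For example, rejoining the session and switching to the Sahara source tree looks roughly like this (a sketch; the DevStack checkout path and the ``screen`` key binding for cycling tabs are assumptions about a default setup):

.. sourcecode:: console

$ /opt/stack/devstack/rejoin-stack.sh
# inside screen, cycle tabs with Ctrl-a n until the sahara tab is active
$ cd /opt/stack/sahara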
Setup Local Environment with external OpenStack
-----------------------------------------------
@ -52,8 +52,8 @@ On Fedora-based distributions (e.g., Fedora/RHEL/CentOS/Scientific Linux):
.. sourcecode:: console
$ git clone git://github.com/openstack/savanna.git
$ cd savanna
$ git clone git://github.com/openstack/sahara.git
$ cd sahara
3. Prepare virtual environment:
@ -65,9 +65,9 @@ On Fedora-based distributions (e.g., Fedora/RHEL/CentOS/Scientific Linux):
.. sourcecode:: console
$ cp ./etc/savanna/savanna.conf.sample-basic ./etc/savanna/savanna.conf
$ cp ./etc/sahara/sahara.conf.sample-basic ./etc/sahara/sahara.conf
5. Look through the savanna.conf and change parameters which default values do
5. Look through the sahara.conf and change parameters whose default values do
not suit you. Set ``os_auth_host`` to the address of the OpenStack Keystone service.
If you are using Neutron instead of Nova Network, add ``use_neutron = True`` to
@ -76,23 +76,23 @@ also specify ``use_namespaces = True``.
.. note::
Config file can be specified for ``savanna-api`` command using ``--config-file`` flag.
A config file can be specified for the ``sahara-api`` command using the ``--config-file`` flag.
6. Create the database schema:
.. sourcecode:: console
$ tox -evenv -- savanna-db-manage --config-file etc/savanna/savanna.conf upgrade head
$ tox -evenv -- sahara-db-manage --config-file etc/sahara/sahara.conf upgrade head
7. To start Savanna call:
7. To start Sahara, call:
.. sourcecode:: console
$ tox -evenv -- savanna-api --config-file etc/savanna/savanna.conf -d
$ tox -evenv -- sahara-api --config-file etc/sahara/sahara.conf -d
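Once the service is up, a quick smoke test is to query the API root, which should list the supported API versions (a hypothetical check; ``8386`` is the default port used throughout this guide):

.. sourcecode:: console

$ curl http://127.0.0.1:8386/
# expected output: a JSON document enumerating the available API versions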
Setup local OpenStack dashboard with Savanna plugin
---------------------------------------------------
Setup local OpenStack dashboard with Sahara plugin
--------------------------------------------------
.. toctree::
:maxdepth: 1

View File

@ -4,7 +4,7 @@ Development Guidelines
Coding Guidelines
-----------------
For all the code in Savanna we have a rule - it should pass `PEP 8`_.
For all the code in Sahara we have one rule: it must pass `PEP 8`_.
To check your code against PEP 8, run:
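In an OpenStack-style repository this check is typically wrapped in a tox environment; a minimal sketch, assuming the conventional target name:

.. sourcecode:: console

$ tox -e pep8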
@ -14,22 +14,22 @@ To check your code against PEP 8 run:
.. note::
For more details on the coding guidelines, see the file ``HACKING.rst`` in the root
of Savanna repo.
of the Sahara repo.
Testing Guidelines
------------------
Savanna has a suite of tests that are run on all submitted code,
Sahara has a suite of tests that are run on all submitted code,
and it is recommended that developers execute the tests themselves to
catch regressions early. Developers are also expected to keep the
test suite up-to-date with any submitted code changes.
Unit tests are located at ``savanna/tests``.
Unit tests are located at ``sahara/tests``.
Savanna's suite of unit tests can be executed in an isolated environment
Sahara's suite of unit tests can be executed in an isolated environment
with `Tox`_. To execute the unit tests, run the following from the root of the
Savanna repo:
Sahara repo:
.. sourcecode:: console
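# A typical invocation (the exact target names are an assumption; Python 2.6
# and 2.7 environments are mentioned in the CI section of this guide):
$ tox -e py27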
@ -39,9 +39,9 @@ Savanna repo:
Documentation Guidelines
------------------------
All Savanna docs are written using Sphinx / RST and located in the main repo
All Sahara docs are written using Sphinx / RST and located in the main repo
in the ``doc`` directory. You can add or edit pages there to update
https://savanna.readthedocs.org/en/latest/ site.
http://docs.openstack.org/developer/sahara site.
The documentation in docstrings should follow the `PEP 257`_ conventions
(as mentioned in the `PEP 8`_ guidelines).
@ -74,7 +74,7 @@ To make docs generation process faster you can use:
$ SPHINX_DEBUG=1 tox -e docs
or to avoid savanna reinstallation to virtual env each time you want to rebuild
or to avoid reinstalling sahara into the virtual env each time you want to rebuild
the docs, you can use the following command (it can be executed only after
running ``tox -e docs`` the first time):
@ -86,7 +86,7 @@ running ``tox -e docs`` first time):
.. note::
For more details on the documentation guidelines, see the file ``HACKING.rst`` in the root
of Savanna repo.
of the Sahara repo.
.. _PEP 8: http://www.python.org/dev/peps/pep-0008/

View File

@ -86,7 +86,7 @@ Now we are going to install DevStack in VM we just created. So, connect to VM wi
# But only use the top end of the network by using a /27 and starting at the 224 octet.
FLOATING_RANGE=192.168.55.224/27
# Enable auto assignment of floating IPs. By default Savanna expects this setting to be enabled
# Enable auto assignment of floating IPs. By default Sahara expects this setting to be enabled
EXTRA_OPTS=(auto_assign_floating_ip=True)
# Enable logging
@ -97,12 +97,12 @@ Now we are going to install DevStack in VM we just created. So, connect to VM wi
# access to install prerequisites and fetch repositories.
# OFFLINE=True
3. If you would like to have Savanna included into devstack add the following lines to ``localrc``:
3. If you would like to have Sahara included in DevStack, add the following lines to ``localrc``:
.. sourcecode:: bash
# Enable Savanna
ENABLED_SERVICES+=,savanna
# Enable Sahara
ENABLED_SERVICES+=,sahara
4. Start DevStack:
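The standard invocation from the root of the DevStack checkout:

.. sourcecode:: console

$ cd devstack
$ ./stack.sh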

View File

@ -1,11 +1,11 @@
Code Reviews with Gerrit
========================
Savanna uses the `Gerrit`_ tool to review proposed code changes. The review site
Sahara uses the `Gerrit`_ tool to review proposed code changes. The review site
is http://review.openstack.org.
Gerrit is a complete replacement for Github pull requests. `All Github pull
requests to the Savanna repository will be ignored`.
requests to the Sahara repository will be ignored`.
See `Gerrit Workflow Quick Reference`_ for information about how to get
started using Gerrit. See `Gerrit, Jenkins and Github`_ for more detailed
@ -13,4 +13,4 @@ documentation on how to work with Gerrit.
.. _Gerrit: http://code.google.com/p/gerrit
.. _Gerrit, Jenkins and Github: http://wiki.openstack.org/GerritJenkinsGithub
.. _Gerrit Workflow Quick Reference: http://wiki.openstack.org/GerritWorkflow

View File

@ -4,13 +4,13 @@ How to Participate
Getting started
---------------
* Create account on `Github <https://github.com/openstack/savanna>`_
* Create account on `Github <https://github.com/openstack/sahara>`_
(if you don't have one)
* Make sure that your local git is properly configured by executing
``git config --list``. If not, configure ``user.name`` and ``user.email`` (see the sketch after this list).
* Create account on `Launchpad <https://launchpad.net/savanna>`_
* Create account on `Launchpad <https://launchpad.net/sahara>`_
(if you don't have one)
* Subscribe to `OpenStack general mail-list <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack>`_
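For example, the identity settings can be configured globally like this (a sketch; the name and address are placeholders):

.. sourcecode:: console

$ git config --global user.name "Your Name"
$ git config --global user.email "your.email@example.com"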
@ -28,9 +28,9 @@ Getting started
* Subscribe to code-reviews. Go to your settings on http://review.openstack.org
* Go to ``watched projects``
* Add ``openstack/savanna``, ``openstack/savanna-dashboard``,
``openstack/savanna-extra``, ``openstack/python-savannaclient``,
``openstack/savanna-image-elements``
* Add ``openstack/sahara``, ``openstack/sahara-dashboard``,
``openstack/sahara-extra``, ``openstack/python-saharaclient``,
``openstack/sahara-image-elements``
How to stay in touch with the community?
@ -38,9 +38,9 @@ How to stay in touch with the community?
* If you have something to discuss use
`OpenStack development mail-list <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>`_.
Prefix mail subject with ``[Savanna]``
Prefix mail subject with ``[Sahara]``
* Join ``#savanna`` IRC channel on `freenode <http://freenode.net/>`_
* Join ``#sahara`` IRC channel on `freenode <http://freenode.net/>`_
* Join public weekly meetings on *Thursdays at 18:00 UTC* on
``#openstack-meeting-alt`` IRC channel
@ -49,7 +49,7 @@ How to stay in touch with the community?
How to send your first patch on review?
---------------------------------------
* Checkout Savanna code from `Github <https://github.com/openstack/savanna>`_
* Check out the Sahara code from `Github <https://github.com/openstack/sahara>`_
* Carefully read https://wiki.openstack.org/wiki/Gerrit_Workflow

View File

@ -1,7 +1,7 @@
Continuous Integration with Jenkins
===================================
Each change made to Savanna core code is tested with unit and integration tests and style checks flake8.
Each change made to Sahara core code is tested with unit and integration tests and style checks using flake8.
Unit tests and style checks are performed on public `OpenStack Jenkins <https://jenkins.openstack.org/>`_ managed by `Zuul <http://status.openstack.org/zuul/>`_.
Unit tests are run under both Python 2.6 and Python 2.7.
@ -14,4 +14,4 @@ Also a test job is launched on a created Cluster to verify Hadoop work.
All integration tests are launched by `Jenkins <http://jenkins.savanna.mirantis.com/>`_ on the internal Mirantis OpenStack Lab.
Jenkins keeps a pool of VMs to run tests in parallel. Still, integration testing may take a while.
The integration test result is a +1 or -1 in the *Verify* column of a code review, posted by the *savanna-ci* user.

View File

@ -1,8 +1,8 @@
Project hosting with Launchpad
==============================
`Launchpad`_ hosts the Savanna project. The Savanna project homepage on Launchpad is
http://launchpad.net/savanna.
`Launchpad`_ hosts the Sahara project. The Sahara project homepage on Launchpad is
http://launchpad.net/sahara.
Launchpad credentials
---------------------
@ -18,31 +18,31 @@ OpenStack-related sites. These sites include:
Mailing list
------------
The mailing list email is ``savanna-all@lists.launchpad.net``. To participate in the mailing list:
The mailing list email is ``sahara-all@lists.launchpad.net``. To participate in the mailing list:
#. Join the `Savanna Team`_ on Launchpad.
#. Subscribe to the list on the `Savanna Team`_ page on Launchpad.
#. Join the `Sahara Team`_ on Launchpad.
#. Subscribe to the list on the `Sahara Team`_ page on Launchpad.
The mailing list archives are at https://lists.launchpad.net/savanna-all
The mailing list archives are at https://lists.launchpad.net/sahara-all
Bug tracking
------------
Report Savanna bugs at https://bugs.launchpad.net/savanna
Report Sahara bugs at https://bugs.launchpad.net/sahara
Feature requests (Blueprints)
-----------------------------
Savanna uses Launchpad Blueprints to track feature requests. Blueprints are at
https://blueprints.launchpad.net/savanna.
Sahara uses Launchpad Blueprints to track feature requests. Blueprints are at
https://blueprints.launchpad.net/sahara.
Technical support (Answers)
---------------------------
Savanna uses Launchpad Answers to track Savanna technical support questions. The Savanna
Answers page is at https://answers.launchpad.net/savanna
Sahara uses Launchpad Answers to track Sahara technical support questions. The Sahara
Answers page is at https://answers.launchpad.net/sahara
.. _Launchpad: http://launchpad.net
.. _Wiki: http://wiki.openstack.org/savanna
.. _Savanna Team: https://launchpad.net/~savanna-all
.. _Wiki: http://wiki.openstack.org/sahara
.. _Sahara Team: https://launchpad.net/~sahara-all

View File

@ -75,7 +75,7 @@ For instance, plugin can ask for additional VMs for the management tool.
configure_cluster(cluster)
~~~~~~~~~~~~~~~~~~~~~~~~~~
Configures cluster on provisioned by savanna VMs.
Configures the cluster on the VMs provisioned by Sahara.
In this function the plugin should perform all actions like adjusting the OS, installing required packages (including Hadoop, if needed), configuring Hadoop, etc.
*Returns*: None
@ -109,7 +109,7 @@ convert(config, plugin_name, version, template_name, cluster_template_create)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Provides the plugin with the ability to create a cluster based on a plugin-specific config.
Savanna expects plugin to fill in all the required fields.
Sahara expects the plugin to fill in all the required fields.
The last argument is the function that the plugin should call to save the Cluster
Template.
See the “Cluster Lifecycle for Config File Mode” section below for clarification.
@ -117,7 +117,7 @@ See “Cluster Lifecycle for Config File Mode” section below for clarification
on_terminate_cluster(cluster)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When user terminates cluster, Savanna simply shuts down all the cluster VMs. This method is guaranteed to be invoked before that, allowing plugin to do some clean-up.
When a user terminates a cluster, Sahara simply shuts down all the cluster VMs. This method is guaranteed to be invoked before that, allowing the plugin to do some clean-up.
*Returns*: None
@ -220,7 +220,7 @@ An instance created for cluster.
+---------------+---------+---------------------------------------------------------+
| nova_info | object | Nova Instance object. |
+---------------+---------+---------------------------------------------------------+
| username | string | Username, that Savanna uses for establishing remote |
| username | string | Username that Sahara uses for establishing remote |
| | | connections to instance. |
+---------------+---------+---------------------------------------------------------+
| hostname | string | Same as instance_name. |
@ -261,7 +261,7 @@ Group of instances.
+----------------------+--------+--------------------------------------------------------+
| count | int | Number of instances in this Node Group. |
+----------------------+--------+--------------------------------------------------------+
| username | string | Username used by Savanna to establish remote |
| username | string | Username used by Sahara to establish remote |
| | | connections to instances. |
+----------------------+--------+--------------------------------------------------------+
| configuration | dict | Merged dictionary of node configurations and cluster |

View File

@ -1,22 +1,22 @@
Pluggable Provisioning Mechanism
================================
Savanna could be integrated with 3rd party management tools like Apache Ambari
Sahara could be integrated with 3rd party management tools like Apache Ambari
and Cloudera Management Console. The integration is achieved using plugin
mechanism.
In short, responsibilities are divided between Savanna core and plugin as
follows. Savanna interacts with user and provisions infrastructure (VMs).
In short, responsibilities are divided between the Sahara core and the plugin as
follows: Sahara interacts with the user and provisions the infrastructure (VMs).
The plugin installs and configures the Hadoop cluster on the VMs. Optionally, the plugin
could deploy management and monitoring tools for the cluster. Savanna
could deploy management and monitoring tools for the cluster. Sahara
provides the plugin with utility methods to work with the VMs.
A plugin must extend `savanna.plugins.provisioning:ProvisioningPluginBase`
A plugin must extend `sahara.plugins.provisioning:ProvisioningPluginBase`
class and implement all the required methods. Read :doc:`plugin.spi` for
details.
The `instance` objects provided by Savanna have `remote` property which
The `instance` objects provided by Sahara have a `remote` property which
can be used to work with the VM. The `remote` is a context manager, so you
can use it in `with instance.remote:` statements. The list of available
commands could be found in `savanna.utils.remote.InstanceInteropHelper`.
commands could be found in `sahara.utils.remote.InstanceInteropHelper`.
See the Vanilla plugin source for usage examples.

View File

@ -4,11 +4,11 @@ Quickstart guide
This guide will help you to set up a vanilla Hadoop cluster using
:doc:`../restapi/rest_api_v1.0`.
1. Install Savanna
------------------
1. Install Sahara
-----------------
* If you want to hack the code, follow :doc:`development.environment`.
* If you just want to install and use Savanna follow :doc:`../userdoc/installation.guide`.
* If you just want to install and use Sahara, follow :doc:`../userdoc/installation.guide`.
2. Keystone endpoints setup
@ -17,7 +17,7 @@ This guide will help you to setup vanilla Hadoop cluster using
To use CLI tools, such as OpenStack's python clients, we should specify
environment variables with addresses and credentials. Assume that we have
Keystone at ``127.0.0.1:5000`` with tenant ``admin``, credentials ``admin:nova``,
and Savanna API at ``127.0.0.1:8386``. Here is a list of commands to set env:
and Sahara API at ``127.0.0.1:8386``. Here is a list of commands to set the environment:
.. sourcecode:: console
@ -68,8 +68,8 @@ images yourself:
.. sourcecode:: console
$ ssh user@hostname
$ wget http://savanna-files.mirantis.com/savanna-0.3-vanilla-1.2.1-ubuntu-13.04.qcow2
$ glance image-create --name=savanna-0.3-vanilla-1.2.1-ubuntu-13.04 \
$ wget http://sahara-files.mirantis.com/savanna-0.3-vanilla-1.2.1-ubuntu-13.04.qcow2
$ glance image-create --name=sahara-0.3-vanilla-1.2.1-ubuntu-13.04 \
--disk-format=qcow2 --container-format=bare < ./savanna-0.3-vanilla-1.2.1-ubuntu-13.04.qcow2
@ -78,8 +78,8 @@ images yourself:
.. sourcecode:: console
$ ssh user@hostname
$ wget http://savanna-files.mirantis.com/savanna-0.3-vanilla-1.2.1-fedora-19.qcow2
$ glance image-create --name=savanna-0.3-vanilla-1.2.1-fedora-19 \
$ wget http://sahara-files.mirantis.com/savanna-0.3-vanilla-1.2.1-fedora-19.qcow2
$ glance image-create --name=sahara-0.3-vanilla-1.2.1-fedora-19 \
--disk-format=qcow2 --container-format=bare < ./savanna-0.3-vanilla-1.2.1-fedora-19.qcow2
@ -90,11 +90,11 @@ Save image id. You can get image id from command ``glance image-list``:
.. sourcecode:: console
$ glance image-list --name savanna-0.3-vanilla-1.2.1-ubuntu-13.04
$ glance image-list --name sahara-0.3-vanilla-1.2.1-ubuntu-13.04
+--------------------------------------+-----------------------------------------+
| ID | Name |
+--------------------------------------+-----------------------------------------+
| 3f9fc974-b484-4756-82a4-bff9e116919b | savanna-0.3-vanilla-1.2.1-ubuntu-13.04 |
| 3f9fc974-b484-4756-82a4-bff9e116919b | sahara-0.3-vanilla-1.2.1-ubuntu-13.04 |
+--------------------------------------+-----------------------------------------+
$ export IMAGE_ID="3f9fc974-b484-4756-82a4-bff9e116919b"
@ -103,7 +103,7 @@ Save image id. You can get image id from command ``glance image-list``:
4. Register image in Image Registry
-----------------------------------
* Now we will actually start to interact with Savanna.
* Now we will actually start to interact with Sahara.
.. sourcecode:: console
@ -115,7 +115,7 @@ Save image id. You can get image id from command ``glance image-list``:
$ sudo pip install httpie
* Send POST request to Savanna API to register image with username ``ubuntu``.
* Send POST request to Sahara API to register image with username ``ubuntu``.
.. sourcecode:: console
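# A hypothetical httpie call registering the image with username ubuntu; the
# endpoint, tenant and token variables are placeholders, and the body follows
# the Image Registry API described in the REST API reference below:
$ http POST http://127.0.0.1:8386/v1.0/$TENANT_ID/images/$IMAGE_ID \
X-Auth-Token:$AUTH_TOKEN username=ubuntu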
@ -156,7 +156,7 @@ Save image id. You can get image id from command ``glance image-list``:
},
"minDisk": 0,
"minRam": 0,
"name": "savanna-0.3-vanilla-1.2.1-ubuntu-13.04",
"name": "sahara-0.3-vanilla-1.2.1-ubuntu-13.04",
"progress": 100,
"status": "ACTIVE",
"tags": [
@ -200,7 +200,7 @@ following content:
"node_processes": ["tasktracker", "datanode"]
}
Send POST requests to Savanna API to upload NodeGroup templates:
Send POST requests to Sahara API to upload NodeGroup templates:
.. sourcecode:: console
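# A hypothetical upload of the template JSON written in the previous step
# (the file name and the tenant/token variables are placeholders):
$ http POST http://127.0.0.1:8386/v1.0/$TENANT_ID/node-group-templates \
X-Auth-Token:$AUTH_TOKEN < ng_worker_template_create.json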
@ -212,7 +212,7 @@ Send POST requests to Savanna API to upload NodeGroup templates:
You can list available NodeGroup templates by sending the following request to
Savanna API:
the Sahara API:
.. sourcecode:: console
@ -294,7 +294,7 @@ following content:
]
}
Send POST request to Savanna API to upload Cluster template:
Send POST request to Sahara API to upload Cluster template:
.. sourcecode:: console
@ -328,7 +328,7 @@ your own keypair in in Horizon UI, or using the command line client:
nova keypair-add stack --pub-key $PATH_TO_PUBLIC_KEY
Send POST request to Savanna API to create and start the cluster:
Send POST request to Sahara API to create and start the cluster:
.. sourcecode:: console
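# A hypothetical create-and-start call using the Cluster Template uploaded
# above (the file name and the tenant/token variables are placeholders):
$ http POST http://127.0.0.1:8386/v1.0/$TENANT_ID/clusters \
X-Auth-Token:$AUTH_TOKEN < cluster_create.json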

View File

@ -1,15 +1,15 @@
Savanna UI User Guide
=====================
Sahara UI User Guide
====================
This guide assumes that you already have savanna-api and the Savanna Dashboard configured and running.
This guide assumes that you already have sahara-api and the Sahara Dashboard configured and running.
If you require assistance with that, please see the installation guides.
Launching a cluster via the Savanna Dashboard
---------------------------------------------
Launching a cluster via the Sahara Dashboard
--------------------------------------------
Registering an Image
--------------------
1) Navigate to the "Savanna" tab in the dashboard, then click on the "Image Registry" panel.
1) Navigate to the "Sahara" tab in the dashboard, then click on the "Image Registry" panel.
2) From that page, click on the "Register Image" button at the top right.
@ -24,7 +24,7 @@ Registering an Image
Create Node Group Templates
---------------------------
1) Navigate to the "Savanna" tab in the dashboard, then click on the "Node Group Templates" panel.
1) Navigate to the "Sahara" tab in the dashboard, then click on the "Node Group Templates" panel.
2) From that page, click on the "Create Template" button at the top right.
@ -43,7 +43,7 @@ Create Node Group Templates
Create a Cluster Template
-------------------------
1) Navigate to the "Savanna" tab in the dashboard, then click on the "Cluster Templates" panel.
1) Navigate to the "Sahara" tab in the dashboard, then click on the "Cluster Templates" panel.
2) From that page, click on the "Create Template" button at the top right.
@ -64,7 +64,7 @@ Create a Cluster Template
Launching a Cluster
-------------------
1) Navigate to the "Savanna" tab in the dashboard, then click on the "Clusters" panel.
1) Navigate to the "Sahara" tab in the dashboard, then click on the "Clusters" panel.
2) Click on the "Launch Cluster" button at the top right.
@ -85,7 +85,7 @@ Launching a Cluster
Scaling a Cluster
-----------------
1) From the Savanna/Clusters page, click on the "Scale Cluster" button of the row that contains the cluster that you want to scale.
1) From the Sahara/Clusters page, click on the "Scale Cluster" button of the row that contains the cluster that you want to scale.
2) You can adjust the numbers of instances for existing Node Group Templates.
@ -102,11 +102,11 @@ Data Sources
------------
Data Sources are where the input and output from your jobs are housed.
1) From the Savanna/Data Sources page, click on the "Create Data Source" button at the top right.
1) From the Sahara/Data Sources page, click on the "Create Data Source" button at the top right.
2) Give your Data Source a name.
3) Enter the URL to the Data Source. For a Swift object, the url will look like <container>.savanna/<path>. The "swift://" is automatically added for you.
3) Enter the URL of the Data Source. For a Swift object, the URL will look like <container>.sahara/<path>. The "swift://" prefix is automatically added for you.
4) Enter the username and password for the Data Source.
@ -120,14 +120,14 @@ Job Binaries
------------
Job Binaries are where you define/upload the source code (mains and libraries) for your job.
1) From the Savanna/Job Binaries page, click on the "Create Job Binary" button at the top right.
1) From the Sahara/Job Binaries page, click on the "Create Job Binary" button at the top right.
2) Give your Job Binary a name (this can be different than the actual filename).
3) Choose the type of storage for your Job Binary.
- For "Swift Internal", you will need to enter the URL of your binary (<container>.savanna/<path>) as well as the username and password.
- For "Savanna internal database", you can choose from a pre-existing "job binary internal", "Create a script" or "Upload a new file".
- For "Swift Internal", you will need to enter the URL of your binary (<container>.sahara/<path>) as well as the username and password.
- For "Sahara internal database", you can choose from a pre-existing "job binary internal", "Create a script" or "Upload a new file".
4) Enter an optional description.
@ -139,7 +139,7 @@ Jobs
----
Jobs are where you define the type of job you'd like to run as well as which "Job Binaries" are required.
1) From the Savanna/Jobs page, click on the "Create Job" button at the top right.
1) From the Sahara/Jobs page, click on the "Create Job" button at the top right.
2) Give your Job a name.
@ -157,7 +157,7 @@ Job Executions
--------------
Job Executions are what you get by "Launching" a job. You can monitor the status of your job to see when it has completed its run.
1) From the Savanna/Jobs page, find the row that contains the job you want to launch and click on the "Launch Job" button at the right side of that row.
1) From the Sahara/Jobs page, find the row that contains the job you want to launch and click on the "Launch Job" button at the right side of that row.
2) Choose the cluster (already running--see `Launching a Cluster`_ above) on which you would like the job to run.
@ -168,9 +168,9 @@ Job Executions are what you get by "Launching" a job. You can monitor the statu
- Additional configuration properties can be defined by clicking on the "Add" button.
- An example configuration entry might be mapred.mapper.class for the Name and org.apache.oozie.example.SampleMapper for the Value.
5) Click on "Launch". To monitor the status of your job, you can navigate to the Savanna/Job Executions panel.
5) Click on "Launch". To monitor the status of your job, you can navigate to the Sahara/Job Executions panel.
Additional Notes
----------------
1) Throughout the Savanna UI, you will find that if you try to delete an object that you will not be able to delete it if another object depends on it.
1) Throughout the Sahara UI, you will find that you cannot delete an object if another object depends on it.
An example of this would be trying to delete a Job that has an existing Job Execution. In order to be able to delete that job, you would first need to delete any Job Executions that relate to that job.

View File

@ -1,28 +1,28 @@
Savanna UI Dev Environment Setup
============================================
Sahara UI Dev Environment Setup
===============================
Install as a part of DevStack
-----------------------------
The easiest way to have local Savanna UI environment with DevStack is to
include Savanna component in DevStack.
The easiest way to get a local Sahara UI environment with DevStack is to
include the Sahara component in DevStack.
.. toctree::
:maxdepth: 1
../devref/devstack
After Savanna installation as a part of DevStack Horizon will contain Savanna
tab. Savanna dashboard source code will be located at
``$DEST/savanna_dashboard`` which is usually ``/opt/stack/savanna_dashboard``.
After Sahara is installed as a part of DevStack, Horizon will contain a Sahara
tab. The Sahara dashboard source code will be located at
``$DEST/sahara_dashboard``, which is usually ``/opt/stack/sahara_dashboard``.
Isolated Dashboard for Savanna
------------------------------
Isolated Dashboard for Sahara
-----------------------------
These installation steps serve two purposes:
* to setup dev environment
* to setup isolated Dashboard for Savanna
* to setup isolated Dashboard for Sahara
Note that the host where you're going to perform the installation has to be
able to connect to all OpenStack endpoints. You can list all available
@ -77,7 +77,7 @@ and set right value for variables:
.. sourcecode:: python
OPENSTACK_HOST = "ip of your controller"
SAVANNA_URL = "url for savanna (e.g. "http://localhost:8386/v1.1")"
SAVANNA_URL = "url for sahara (e.g. "http://localhost:8386/v1.1")"
If you are using Neutron instead of Nova Network:
@ -92,43 +92,43 @@ If you are not using nova-network with auto_assign_floating_ip=True, also set:
AUTO_ASSIGNMENT_ENABLED = False
..
5. Clone savanna-dashboard sources from ``https://github.com/openstack/savanna-dashboard.git``
5. Clone sahara-dashboard sources from ``https://github.com/openstack/sahara-dashboard.git``
.. sourcecode:: console
$ git clone https://github.com/openstack/savanna-dashboard.git
$ git clone https://github.com/openstack/sahara-dashboard.git
6. Export SAVANNA_DASHBOARD_HOME environment variable with path to savanna-dashboard folder. E.g.:
6. Export the SAVANNA_DASHBOARD_HOME environment variable with the path to the sahara-dashboard folder, e.g.:
.. sourcecode:: console
$ export SAVANNA_DASHBOARD_HOME=$(pwd)/savanna-dashboard
$ export SAVANNA_DASHBOARD_HOME=$(pwd)/sahara-dashboard
7. Install savanna-dashboard module to horizon's venv. Go to horizon folder and execute:
7. Install the sahara-dashboard module into horizon's venv. Go to the horizon folder and execute:
.. sourcecode:: console
$ .venv/bin/pip install $SAVANNA_DASHBOARD_HOME
8. Create a symlink to savanna-dashboard source
8. Create a symlink to the sahara-dashboard source:
.. sourcecode:: console
$ ln -s $SAVANNA_DASHBOARD_HOME/savannadashboard .venv/lib/python2.7/site-packages/savannadashboard
$ ln -s $SAVANNA_DASHBOARD_HOME/saharadashboard .venv/lib/python2.7/site-packages/saharadashboard
9. In ``openstack_dashboard/settings.py`` add savanna to
9. In ``openstack_dashboard/settings.py`` add sahara to
.. sourcecode:: python
HORIZON_CONFIG = {
'dashboards': ('nova', 'syspanel', 'settings', 'savanna'),
'dashboards': ('nova', 'syspanel', 'settings', 'sahara'),
and add savannadashboard to
and add saharadashboard to
.. sourcecode:: python
INSTALLED_APPS = (
'savannadashboard',
'saharadashboard',
....
10. Start horizon

View File

@ -1,10 +1,10 @@
Savanna UI Installation Guide
=============================
Sahara UI Installation Guide
============================
Savanna UI is a plugin for OpenStack Dashboard. There are two ways to install
Sahara UI is a plugin for OpenStack Dashboard. There are two ways to install
it. One is to plug it into an existing Dashboard installation, and the other is
to setup another Dashboard and plug Savanna UI there. The first approach
advantage is that you will have Savanna UI in the very same Dashboard with
to set up another Dashboard and plug Sahara UI in there. The advantage of the
first approach is that you will have Sahara UI in the very same Dashboard with
which you work with OpenStack. The disadvantage is that you have to tweak
your Dashboard configuration in order to enable the plugin. The second
approach does not have this disadvantage.
@ -17,12 +17,12 @@ approach see :doc:`/horizon/dev.environment.guide`
1) OpenStack environment (Folsom, Grizzly or Havana version) installed.
2) Savanna installed, configured and running, see :doc:`/userdoc/installation.guide`.
2) Sahara installed, configured and running, see :doc:`/userdoc/installation.guide`.
2. Savanna Dashboard Installation
2. Sahara Dashboard Installation
---------------------------------
1) Go to the machine where Dashboard resides and install Savanna UI:
1) Go to the machine where Dashboard resides and install Sahara UI:
For RDO:
@ -35,29 +35,29 @@ approach see :doc:`/horizon/dev.environment.guide`
.. sourcecode:: console
$ sudo pip install savanna-dashboard
$ sudo pip install sahara-dashboard
..
This will install latest stable release of Savanna UI. If you want to install master branch of Savanna UI:
This will install the latest stable release of Sahara UI. If you want to install the master branch of Sahara UI:
.. sourcecode:: console
$ sudo pip install 'http://tarballs.openstack.org/savanna-dashboard/savanna-dashboard-master.tar.gz'
$ sudo pip install 'http://tarballs.openstack.org/sahara-dashboard/sahara-dashboard-master.tar.gz'
2) Configure OpenStack Dashboard. In ``settings.py`` add savanna to
2) Configure OpenStack Dashboard. In ``settings.py`` add sahara to
.. sourcecode:: python
HORIZON_CONFIG = {
'dashboards': ('nova', 'syspanel', 'settings', ..., 'savanna'),
'dashboards': ('nova', 'syspanel', 'settings', ..., 'sahara'),
..
and also add savannadashboard to
and also add saharadashboard to
.. sourcecode:: python
INSTALLED_APPS = (
'savannadashboard',
'saharadashboard',
....
..
@ -106,4 +106,4 @@ If you are not using nova-network with auto_assign_floating_ip=True, also set:
..
You can check that service has been started successfully. Go to Horizon URL and if installation is correct you'll be able to see the Savanna tab.
You can check that the service has started successfully. Go to the Horizon URL; if the installation is correct, you'll be able to see the Sahara tab.

Binary file not shown. (image: 99 KiB before, 156 KiB after)

View File

Binary file not shown. (image: 58 KiB before, 58 KiB after)

View File

@ -1,7 +1,7 @@
Welcome to Savanna!
Welcome to Sahara!
===================
Savanna project aims to provide users with simple means to provision a Hadoop
The Sahara project aims to provide users with simple means to provision a Hadoop
cluster on OpenStack by specifying several parameters like Hadoop version,
cluster topology, node hardware details and a few more.
@ -13,7 +13,7 @@ Overview
overview
architecture
Roadmap <https://wiki.openstack.org/wiki/Savanna/Roadmap>
Roadmap <https://wiki.openstack.org/wiki/Sahara/Roadmap>
User guide
@ -85,7 +85,7 @@ Developer Guide
devref/how_to_build_oozie
**Background Concepts for Savanna**
**Background Concepts for Sahara**
.. toctree::
:maxdepth: 1

View File

@ -8,10 +8,10 @@ Apache Hadoop is an industry standard and widely adopted MapReduce implementatio
The aim of this project is to enable users to easily provision and manage Hadoop clusters on OpenStack.
It is worth mentioning that Amazon has provided Hadoop for several years as the Amazon Elastic MapReduce (EMR) service.
Savanna aims to provide users with simple means to provision Hadoop clusters
Sahara aims to provide users with simple means to provision Hadoop clusters
by specifying several parameters such as the Hadoop version, cluster topology, node hardware details
and a few more. After user fills in all the parameters, Savanna deploys the cluster in a few minutes.
Also Savanna provides means to scale already provisioned cluster by adding/removing worker nodes on demand.
and a few more. After the user fills in all the parameters, Sahara deploys the cluster in a few minutes.
Also, Sahara provides means to scale an already provisioned cluster by adding or removing worker nodes on demand.
The solution will address the following use cases:
@ -31,11 +31,11 @@ Key features are:
Details
-------
The Savanna product communicates with the following OpenStack components:
The Sahara product communicates with the following OpenStack components:
* Horizon - provides GUI with ability to use all of Savannas features;
* Horizon - provides GUI with ability to use all of Sahara's features;
* Keystone - authenticates users and provides security token that is used to work with the OpenStack,
hence limiting user abilities in Savanna to his OpenStack privileges;
hence limiting the user's abilities in Sahara to his OpenStack privileges;
* Nova - is used to provision VMs for Hadoop Cluster;
* Glance - Hadoop VM images are stored there, each image containing an installed OS and Hadoop;
the pre-installed Hadoop should give us a good head start on node start-up;
@ -49,7 +49,7 @@ The Savanna product communicates with the following OpenStack components:
General Workflow
----------------
Savanna will provide two level of abstraction for API and UI based on the addressed use cases:
Sahara will provide two levels of abstraction for the API and UI based on the addressed use cases:
cluster provisioning and analytics as a service.
For fast cluster provisioning the generic workflow will be as follows:
@ -57,13 +57,13 @@ For the fast cluster provisioning generic workflow will be as following:
* select Hadoop version;
* select base image with or without pre-installed Hadoop:
* for base images without Hadoop pre-installed Savanna will support pluggable deployment engines integrated with vendor tooling;
* for base images without Hadoop pre-installed, Sahara will support pluggable deployment engines integrated with vendor tooling;
* define cluster configuration, including size and topology of the cluster and setting different types of Hadoop parameters (e.g. heap size):
* to ease the configuration of such parameters, a mechanism of configurable templates will be provided;
* provision the cluster: Savanna will provision VMs, install and configure Hadoop;
* provision the cluster: Sahara will provision VMs, install and configure Hadoop;
* operation on the cluster: add/remove nodes;
* terminate the cluster when it's not needed anymore.
@ -88,7 +88,7 @@ For analytic as a service generic workflow will be as following:
User's Perspective
------------------
While provisioning cluster through Savanna, user operates on three types of entities: Node Group Templates, Cluster Templates and Clusters.
While provisioning a cluster through Sahara, the user operates on three types of entities: Node Group Templates, Cluster Templates and Clusters.
A Node Group Template describes a group of nodes within a cluster. It contains a list of Hadoop processes that will be launched on each instance in a group.
Also a Node Group Template may provide node-scoped configurations for those processes.
@ -97,21 +97,21 @@ This kind of templates encapsulates hardware parameters (flavor) for the node VM
A Cluster Template is designed to bring Node Group Templates together to form a Cluster.
A Cluster Template defines what Node Groups will be included and how many instances will be created in each.
Some Hadoop configurations cannot be applied to a single node, but only to a whole Cluster, so the user can specify this kind of configuration in a Cluster Template.
Savanna enables user to specify which processes should be added to an anti-affinity group within a Cluster Template. If a process is included into an anti-affinity
Sahara enables the user to specify which processes should be added to an anti-affinity group within a Cluster Template. If a process is included in an anti-affinity
group, it means that VMs where this process is going to be launched should be scheduled to different hardware hosts.
The Cluster entity represents a Hadoop Cluster. It is mainly characterized by the VM image with pre-installed Hadoop which
will be used for cluster deployment. The user may choose one of the pre-configured Cluster Templates to start a Cluster.
To get access to the VMs after a Cluster has started, the user should specify a keypair.
Savanna provides several constraints on Hadoop cluster topology. JobTracker and NameNode processes could be run either on a single
Sahara provides several constraints on Hadoop cluster topology. JobTracker and NameNode processes could be run either on a single
VM or two separate ones. Also, a cluster could contain worker nodes of different types. Worker nodes could run both TaskTracker and DataNode,
or either of these processes alone. Savanna allows user to create cluster with any combination of these options,
or either of these processes alone. Sahara allows the user to create a cluster with any combination of these options,
but it will not allow the creation of a non-working topology, for example: a set of workers with DataNodes, but without a NameNode.
Each Cluster belongs to some tenant determined by the user. Users have access only to objects located in
tenants they have access to. Users can edit or delete only the objects they created. Naturally, admin users have full access to every object.
That way Savanna complies with general OpenStack access policy.
That way Sahara complies with the general OpenStack access policy.
Integration with Swift
----------------------
@ -133,6 +133,6 @@ To get more information on how to enable Swift support see :doc:`userdoc/hadoop-
Pluggable Deployment and Monitoring
-----------------------------------
In addition to the monitoring capabilities provided by vendor-specific Hadoop management tooling, Savanna will provide pluggable integration with external monitoring systems such as Nagios or Zabbix.
In addition to the monitoring capabilities provided by vendor-specific Hadoop management tooling, Sahara will provide pluggable integration with external monitoring systems such as Nagios or Zabbix.
Both deployment and monitoring tools will be installed on stand-alone VMs, thus allowing a single instance to manage/monitor several clusters at once.

View File

@ -1,4 +1,4 @@
Savanna REST API docs
Sahara REST API docs
*********************
.. toctree::

View File

@ -1,31 +1,31 @@
Savanna REST API v1.0
Sahara REST API v1.0
*********************
.. note::
REST API v1.0 corresponds to Savanna v0.2.X
REST API v1.0 corresponds to Sahara v0.2.X
1 General API information
=========================
This section contains base info about the Savanna REST API design.
This section contains basic information about the Sahara REST API design.
1.1 Authentication and Authorization
------------------------------------
The Savanna API uses the Keystone Identity Service as the default authentication service.
When Keystone is enabled, users who submit requests to the Savanna service must provide an authentication token
The Sahara API uses the Keystone Identity Service as the default authentication service.
When Keystone is enabled, users who submit requests to the Sahara service must provide an authentication token
in the X-Auth-Token request header. A user can obtain the token by authenticating to the Keystone endpoint.
For more information about Keystone, see the OpenStack Identity Developer Guide.
Also, with each request the user must specify the OpenStack tenant in the URL path, like this: '/v1.0/{tenant_id}/clusters'.
Savanna will perform the requested operation in that tenant using provided credentials. Therefore, user will be able
Sahara will perform the requested operation in that tenant using the provided credentials. Therefore, a user will be able
to create and manage clusters only within tenants he has access to.
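For example, listing clusters with the token in a header and the tenant in the path might look like this (a sketch; the variable values are placeholders for credentials obtained from Keystone):

.. sourcecode:: console

$ curl -H "X-Auth-Token: $AUTH_TOKEN" \
http://sahara:8386/v1.0/$TENANT_ID/clusters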
1.2 Request / Response Types
----------------------------
The Savanna API supports the JSON data serialization format.
The Sahara API supports the JSON data serialization format.
This means that for requests that contain a body, the Content-Type header must be set to the MIME type value
"application/json". Also, clients should accept JSON serialized responses by specifying the Accept header
with the MIME type value "application/json" or by adding the ".json" extension to the resource name.
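For instance, a client can request JSON explicitly via the Accept header (again a sketch with placeholder variables):

.. sourcecode:: console

$ curl -H "X-Auth-Token: $AUTH_TOKEN" -H "Accept: application/json" \
http://sahara:8386/v1.0/$TENANT_ID/plugins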
@ -48,8 +48,8 @@ or
1.3 Faults
----------
The Savanna API returns an error response if a failure occurs while processing a request.
Savanna uses only standard HTTP error codes. 4xx errors indicate problems in the particular
The Sahara API returns an error response if a failure occurs while processing a request.
Sahara uses only standard HTTP error codes. 4xx errors indicate problems in the particular
request being sent from the client and 5xx errors indicate server-side problems.
The response body will contain richer information about the cause of the error.
@ -84,7 +84,7 @@ Plugin object provides information about what Hadoop distribution/version it can
+-----------------+-------------------------------------------------------------------+-----------------------------------------------------+
| Verb | URI | Description |
+=================+===================================================================+=====================================================+
| GET | /v1.0/{tenant_id}/plugins | Lists all plugins registered in Savanna. |
| GET | /v1.0/{tenant_id}/plugins | Lists all plugins registered in Sahara. |
+-----------------+-------------------------------------------------------------------+-----------------------------------------------------+
| GET | /v1.0/{tenant_id}/plugins/{plugin_name} | Shows short information about specified plugin. |
+-----------------+-------------------------------------------------------------------+-----------------------------------------------------+
@ -115,7 +115,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna/v1.0/775181/plugins
GET http://sahara/v1.0/775181/plugins
**response**
@ -157,7 +157,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna/v1.0/775181/plugins/vanilla
GET http://sahara/v1.0/775181/plugins/vanilla
**response**
@ -197,7 +197,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna/v1.0/775181/plugins/vanilla/1.2.1
GET http://sahara/v1.0/775181/plugins/vanilla/1.2.1
**response**
@ -274,7 +274,7 @@ The request body should contain configuration file.
.. sourcecode:: http
POST http://savanna/v1.0/775181/plugins/some-plugin/1.1/convert-config
POST http://sahara/v1.0/775181/plugins/some-plugin/1.1/convert-config
**response**
@ -341,7 +341,7 @@ The request body should contain configuration file.
**Description**
Image Registry is a tool for managing images. Each plugin provides a list of required tags an image should have.
Savanna also requires username to login into instance's OS for remote operations execution.
Sahara also requires a username to log in to the instance's OS for remote operations execution.
Image Registry provides the ability to add/remove tags to images and to define the OS username.
@ -385,7 +385,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna/v1.0/775181/images
GET http://sahara/v1.0/775181/images
**response**
@ -401,7 +401,7 @@ This operation does not require a request body.
{
"status": "ACTIVE",
"username": "ec2-user",
"name": "fedoraSwift_hadoop_savanna_v02",
"name": "fedoraSwift_hadoop_sahara_v02",
"tags": [
"vanilla",
"1.2.1"
@ -437,7 +437,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna/v1.0/775181/images?tags=vanilla
GET http://sahara/v1.0/775181/images?tags=vanilla
**response**
@ -453,7 +453,7 @@ This operation does not require a request body.
{
"status": "ACTIVE",
"username": "ec2-user",
"name": "fedoraSwift_hadoop_savanna_v02",
"name": "fedoraSwift_hadoop_sahara_v02",
"tags": [
"vanilla",
"1.2.1"
@ -491,7 +491,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna/v1.0/775181/images/daa50c37-b11b-4f3d-a586-e5dcd0a4110f
GET http://sahara/v1.0/775181/images/daa50c37-b11b-4f3d-a586-e5dcd0a4110f
**response**
@ -506,7 +506,7 @@ This operation does not require a request body.
"image": {
"status": "ACTIVE",
"username": "ec2-user",
"name": "fedoraSwift_hadoop_savanna_v02",
"name": "fedoraSwift_hadoop_sahara_v02",
"tags": [
"vanilla",
"1.2.1"
@ -540,7 +540,7 @@ This operation returns registered image.
.. sourcecode:: http
POST http://savanna/v1.0/775181/images/daa50c37-b11b-4f3d-a586-e5dcd0a4110f
POST http://sahara/v1.0/775181/images/daa50c37-b11b-4f3d-a586-e5dcd0a4110f
.. sourcecode:: json
@ -562,7 +562,7 @@ This operation returns registered image.
"image": {
"status": "ACTIVE",
"username": "ec2-user",
"name": "fedoraSwift_hadoop_savanna_v02",
"name": "fedoraSwift_hadoop_sahara_v02",
"tags": [],
"minDisk": 0,
"progress": 100,
@ -595,7 +595,7 @@ This operation does not require a request body.
.. sourcecode:: http
DELETE http://savanna/v1.0/775181/images/daa50c37-b11b-4f3d-a586-e5dcd0a4110f
DELETE http://sahara/v1.0/775181/images/daa50c37-b11b-4f3d-a586-e5dcd0a4110f
**response**
@ -622,7 +622,7 @@ Add Tags to Image.
.. sourcecode:: http
POST http://savanna/v1.0/775181/images/daa50c37-b11b-4f3d-a586-e5dcd0a4110f/tag
POST http://sahara/v1.0/775181/images/daa50c37-b11b-4f3d-a586-e5dcd0a4110f/tag
.. sourcecode:: json
@ -643,7 +643,7 @@ Add Tags to Image.
"image": {
"status": "ACTIVE",
"username": "ec2-user",
"name": "fedoraSwift_hadoop_savanna_v02",
"name": "fedoraSwift_hadoop_sahara_v02",
"tags": ["tag1", "some_other_tag"],
"minDisk": 0,
"progress": 100,
@ -676,7 +676,7 @@ Removes Tags form Image.
.. sourcecode:: http
POST http://savanna/v1.0/775181/images/daa50c37-b11b-4f3d-a586-e5dcd0a4110f/untag
POST http://sahara/v1.0/775181/images/daa50c37-b11b-4f3d-a586-e5dcd0a4110f/untag
.. sourcecode:: json
@ -697,7 +697,7 @@ Removes Tags form Image.
"image": {
"status": "ACTIVE",
"username": "ec2-user",
"name": "fedoraSwift_hadoop_savanna_v02",
"name": "fedoraSwift_hadoop_sahara_v02",
"tags": ["tag1"],
"minDisk": 0,
"progress": 100,
@ -755,7 +755,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna/v1.0/775181/node-group-templates
GET http://sahara/v1.0/775181/node-group-templates
**response**
@ -828,7 +828,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna/v1.0/775181/node-group-templates/ea34d320-09d7-4dc1-acbf-75b57cec81c9
GET http://sahara/v1.0/775181/node-group-templates/ea34d320-09d7-4dc1-acbf-75b57cec81c9
**response**
@ -878,7 +878,7 @@ This operation returns created Node Group Template.
.. sourcecode:: http
POST http://savanna/v1.0/775181/node-group-templates
POST http://sahara/v1.0/775181/node-group-templates
.. sourcecode:: json
@ -927,7 +927,7 @@ This operation returns created Node Group Template.
.. sourcecode:: http
POST http://savanna/v1.0/775181/node-group-templates
POST http://sahara/v1.0/775181/node-group-templates
.. sourcecode:: json
@ -1005,7 +1005,7 @@ This operation does not require a request body.
.. sourcecode:: http
DELETE http://savanna/v1.0/775181/node-group-templates/060afabe-f4b3-487e-8d48-65c5bb5eb79e
DELETE http://sahara/v1.0/775181/node-group-templates/060afabe-f4b3-487e-8d48-65c5bb5eb79e
**response**
@ -1058,7 +1058,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna/v1.0/775181/cluster-templates
GET http://sahara/v1.0/775181/cluster-templates
**response**
@ -1142,7 +1142,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna/v1.0/775181/cluster-templates/c365b7dd-9b11-492d-a119-7ae023c19b51
GET http://sahara/v1.0/775181/cluster-templates/c365b7dd-9b11-492d-a119-7ae023c19b51
**response**
@ -1221,7 +1221,7 @@ This operation returns created Cluster Template.
.. sourcecode:: http
POST http://savanna/v1.0/775181/cluster-templates
POST http://sahara/v1.0/775181/cluster-templates
.. sourcecode:: json
@ -1307,7 +1307,7 @@ This operation returns created Cluster Template.
.. sourcecode:: http
POST http://savanna/v1.0/775181/node-group-templates
POST http://sahara/v1.0/775181/node-group-templates
.. sourcecode:: json
@ -1416,7 +1416,7 @@ This operation does not require a request body.
.. sourcecode:: http
DELETE http://savanna/v1.0/775181/cluster-templates/9d72bc1a-8d38-493e-99f3-ebca4ec99ad8
DELETE http://sahara/v1.0/775181/cluster-templates/9d72bc1a-8d38-493e-99f3-ebca4ec99ad8
**response**
@ -1471,7 +1471,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna/v1.0/775181/clusters
GET http://sahara/v1.0/775181/clusters
**response**
@ -1586,7 +1586,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna/v1.0/775181/clusters/c365b7dd-9b11-492d-a119-7ae023c19b51
GET http://sahara/v1.0/775181/clusters/c365b7dd-9b11-492d-a119-7ae023c19b51
**response**
@ -1696,7 +1696,7 @@ This operation returns created Cluster.
.. sourcecode:: http
POST http://savanna/v1.0/775181/clusters
POST http://sahara/v1.0/775181/clusters
.. sourcecode:: json
@ -1805,7 +1805,7 @@ This operation returns created Cluster.
.. sourcecode:: http
POST http://savanna/v1.0/775181/clusters
POST http://sahara/v1.0/775181/clusters
.. sourcecode:: json
@ -1951,7 +1951,7 @@ This operation returns updated Cluster.
.. sourcecode:: http
PUT http://savanna/v1.0/775181/clusters/9d7g51a-8123-424e-sdsr3-eb222ec989b1
PUT http://sahara/v1.0/775181/clusters/9d7g51a-8123-424e-sdsr3-eb222ec989b1
.. sourcecode:: json
@ -2123,7 +2123,7 @@ This operation does not require a request body.
.. sourcecode:: http
DELETE http://savanna/v1.0/775181/clusters/9d7g51a-8123-424e-sdsr3-eb222ec989b1
DELETE http://sahara/v1.0/775181/clusters/9d7g51a-8123-424e-sdsr3-eb222ec989b1
**response**

View File

@ -1,9 +1,9 @@
Savanna REST API v1.1 (EDP)
***************************
Sahara REST API v1.1 (EDP)
**************************
.. note::
REST API v1.1 corresponds to Savanna v0.3.X
REST API v1.1 corresponds to Sahara v0.3.X
1. General information
======================
@ -17,7 +17,7 @@ REST API V1.1 is :doc:`../userdoc/edp` REST API. It covers the majority of new f
**Description**
A Data Source object provides the location of input or output for MapReduce jobs and may reference different types of storage.
Savanna doesn't perform any validation checks for data source locations.
Sahara doesn't perform any validation checks for data source locations.
**Data Source ops**
@ -53,7 +53,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/data-sources
GET http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/data-sources
**response**
@ -126,7 +126,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/data-sources/151d0c0c-464f-4724-96a6-4732d0ca62e1
GET http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/data-sources/151d0c0c-464f-4724-96a6-4732d0ca62e1
**response**
@ -170,7 +170,7 @@ This operation returns the created Data Source.
.. sourcecode:: http
POST http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/data-sources
POST http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/data-sources
.. sourcecode:: json
@ -218,7 +218,7 @@ This operation returns the created Data Source.
.. sourcecode:: http
POST http://savanna:8386/v1.1/e262c255a7de4a0ab0434bafd75660cd/data-sources
POST http://sahara:8386/v1.1/e262c255a7de4a0ab0434bafd75660cd/data-sources
.. sourcecode:: json
@ -272,7 +272,7 @@ This operation does not require a request body.
.. sourcecode:: http
DELETE http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/data-sources/af7dc864-6331-4c30-80f5-63d74b667eaf
DELETE http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/data-sources/af7dc864-6331-4c30-80f5-63d74b667eaf
**response**
@ -286,7 +286,7 @@ This operation does not require a request body.
**Description**
Job Binary Internals are objects for storing job binaries in the Savanna internal database.
Job Binary Internals are objects for storing job binaries in the Sahara internal database.
A Job Binary Internal contains the raw data of executable Jar files or Pig and Hive scripts.
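As a sketch of how such an object might be created from a local file, the snippet below PUTs raw bytes to the ``job-binary-internals`` endpoint shown later in this section (the request body is the file content itself); the URL layout and headers follow the examples below, while the shape of the response is an assumption.

.. sourcecode:: python

    import requests

    def upload_job_binary_internal(sahara, tenant, token, name, path):
        # The request body is the raw file content, per the PUT example below.
        with open(path, "rb") as f:
            data = f.read()
        resp = requests.put(
            "%s/%s/job-binary-internals/%s" % (sahara, tenant, name),
            data=data,
            headers={"X-Auth-Token": token},
        )
        resp.raise_for_status()
        return resp.json()  # assumed to include the id used in sahara-db:// URLs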
**Job Binary Internal ops**
@ -325,7 +325,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binary-internals
GET http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binary-internals
**response**
@ -375,7 +375,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binary-internals/d2498cbf-4589-484a-a814-81436c18beb3
GET http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binary-internals/d2498cbf-4589-484a-a814-81436c18beb3
**response**
@ -415,7 +415,7 @@ The request body should contain raw data (file) or script text.
.. sourcecode:: http
PUT http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binary-internals/script.pig
PUT http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binary-internals/script.pig
**response**
@ -446,7 +446,7 @@ Normal Response Code: 204 (NO CONTENT)
Errors: none
Removes Job Binary Internal object from Savanna's db
Removes a Job Binary Internal object from Sahara's database
This operation returns nothing.
@ -457,7 +457,7 @@ This operation does not require a request body.
.. sourcecode:: http
DELETE http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binary-internals/4833dc4b-8682-4d5b-8a9f-2036b47a0996
DELETE http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binary-internals/4833dc4b-8682-4d5b-8a9f-2036b47a0996
**response**
@ -486,7 +486,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binary-internals/4248975-3c82-4206-a58d-6e7fb3a563fd/data
GET http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binary-internals/4248975-3c82-4206-a58d-6e7fb3a563fd/data
**response**
@ -501,7 +501,7 @@ This operation does not require a request body.
**Description**
Job Binaries objects are designed to create links to certain binaries stored either in Savanna internal db or in Swift.
Job Binary objects are designed to create links to certain binaries stored either in the Sahara internal database or in Swift.
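A minimal sketch of creating such a link for a binary already stored in the internal database, using the ``job-binaries`` endpoint and the ``sahara-db://`` URL form shown below; the exact body fields (``extra`` for Swift credentials in particular) are assumptions inferred from the surrounding examples.

.. sourcecode:: python

    import json

    import requests

    def create_job_binary(sahara, tenant, token, name, internal_id):
        body = {
            "name": name,
            "description": "",
            # URL form for internally stored binaries, see the listing below;
            # Swift-backed binaries would use a swift:// URL and carry
            # credentials in "extra" (an assumption, not shown here).
            "url": "sahara-db://%s" % internal_id,
            "extra": {},
        }
        resp = requests.post(
            "%s/%s/job-binaries" % (sahara, tenant),
            data=json.dumps(body),
            headers={"X-Auth-Token": token,
                     "Content-Type": "application/json"},
        )
        resp.raise_for_status()
        return resp.json()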
**Job Binaries ops**
@ -539,7 +539,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binaries
GET http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binaries
**response**
@ -555,7 +555,7 @@ This operation does not require a request body.
{
"description": "",
"extra": {},
"url": "savanna-db://d2498cbf-4589-484a-a814-81436c18beb3",
"url": "sahara-db://d2498cbf-4589-484a-a814-81436c18beb3",
"tenant_id": "11587919cc534bcbb1027a161c82cf58",
"created_at": "2013-10-15 12:36:59.375060",
"updated_at": null,
@ -565,7 +565,7 @@ This operation does not require a request body.
{
"description": "",
"extra": {},
"url": "savanna-db://22f1d87a-23c8-483e-a0dd-cb4a16dde5f9",
"url": "sahara-db://22f1d87a-23c8-483e-a0dd-cb4a16dde5f9",
"tenant_id": "11587919cc534bcbb1027a161c82cf58",
"created_at": "2013-10-15 12:43:52.265899",
"updated_at": null,
@ -606,7 +606,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binaries/a716a9cd-9add-4b12-b1b6-cdb71aaef350
GET http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binaries/a716a9cd-9add-4b12-b1b6-cdb71aaef350
**response**
@ -649,7 +649,7 @@ This operation shows information about the created Job Binary.
.. sourcecode:: http
POST http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binaries
POST http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binaries
.. sourcecode:: json
@ -707,7 +707,7 @@ This operation does not require a request body.
.. sourcecode:: http
DELETE http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binaries/07f86352-ee8a-4b08-b737-d705ded5ff9c
DELETE http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binaries/07f86352-ee8a-4b08-b737-d705ded5ff9c
**response**
@ -736,7 +736,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binaries/84248975-3c82-4206-a58d-6e7fb3a563fd/data
GET http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/job-binaries/84248975-3c82-4206-a58d-6e7fb3a563fd/data
**response**
@ -794,7 +794,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/jobs
GET http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/jobs
**response**
@ -815,7 +815,7 @@ This operation does not require a request body.
{
"description": "",
"extra": {},
"url": "savanna-db://d2498cbf-4589-484a-a814-81436c18beb3",
"url": "sahara-db://d2498cbf-4589-484a-a814-81436c18beb3",
"tenant_id": "11587919cc534bcbb1027a161c82cf58",
"created_at": "2013-10-15 12:36:59.375060",
"updated_at": null,
@ -828,7 +828,7 @@ This operation does not require a request body.
{
"description": "",
"extra": {},
"url": "savanna-db://22f1d87a-23c8-483e-a0dd-cb4a16dde5f9",
"url": "sahara-db://22f1d87a-23c8-483e-a0dd-cb4a16dde5f9",
"tenant_id": "11587919cc534bcbb1027a161c82cf58",
"created_at": "2013-10-15 12:43:52.265899",
"updated_at": null,
@ -886,7 +886,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/jobs/7600373c-d262-45c6-845f-77f339f3e503
GET http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/jobs/7600373c-d262-45c6-845f-77f339f3e503
**response**
@ -941,7 +941,7 @@ This operation shows information about the created Job object.
.. sourcecode:: http
POST http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/jobs
POST http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/jobs
.. sourcecode:: json
@ -971,7 +971,7 @@ This operation shows information about the created Job object.
{
"description": "",
"extra": {},
"url": "savanna-db://d2498cbf-4589-484a-a814-81436c18beb3",
"url": "sahara-db://d2498cbf-4589-484a-a814-81436c18beb3",
"tenant_id": "11587919cc534bcbb1027a161c82cf58",
"created_at": "2013-10-15 12:36:59.375060",
"updated_at": null,
@ -983,7 +983,7 @@ This operation shows information about the created Job object.
{
"description": "",
"extra": {},
"url": "savanna-db://22f1d87a-23c8-483e-a0dd-cb4a16dde5f9",
"url": "sahara-db://22f1d87a-23c8-483e-a0dd-cb4a16dde5f9",
"tenant_id": "11587919cc534bcbb1027a161c82cf58",
"created_at": "2013-10-15 12:43:52.265899",
"updated_at": null,
@ -1017,7 +1017,7 @@ This operation does not require a request body.
.. sourcecode:: http
DELETE http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/jobs/07f86352-ee8a-4b08-b737-d705ded5ff9c
DELETE http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/jobs/07f86352-ee8a-4b08-b737-d705ded5ff9c
**response**
@ -1047,7 +1047,7 @@ This REST call is used just for hints and doesn't force the user to apply any of
.. sourcecode:: http
GET http://savanna/v1.1/11587919cc534bcbb1027a161c82cf58/jobs/config-hints/Jar
GET http://sahara/v1.1/11587919cc534bcbb1027a161c82cf58/jobs/config-hints/Jar
**response**
@ -1142,7 +1142,7 @@ This operation returns the created Job Execution object. Note that different job
.. sourcecode:: http
POST http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/jobs/65afed9c-dad7-4658-9554-b7b4e1ca908f/execute
POST http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/jobs/65afed9c-dad7-4658-9554-b7b4e1ca908f/execute
.. sourcecode:: json
@ -1206,7 +1206,7 @@ This operation returns the created Job Execution object. Note that different job
.. sourcecode:: http
POST http://savanna:8386/v1.1/11587919cc534bcbb1027a161c82cf58/jobs/65afed9c-dad7-4658-9554-b7b4e1ca908f/execute
POST http://sahara:8386/v1.1/11587919cc534bcbb1027a161c82cf58/jobs/65afed9c-dad7-4658-9554-b7b4e1ca908f/execute
.. sourcecode:: json
@ -1303,7 +1303,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna/v1.1/11587919cc534bcbb1027a161c82cf58/job-executions
GET http://sahara/v1.1/11587919cc534bcbb1027a161c82cf58/job-executions
**response**
@ -1437,7 +1437,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna/v1.1/11587919cc534bcbb1027a161c82cf58/job-executions/e63bdc21-0126-4fd2-90c6-5163d16f31df
GET http://sahara/v1.1/11587919cc534bcbb1027a161c82cf58/job-executions/e63bdc21-0126-4fd2-90c6-5163d16f31df
**response**
@ -1467,7 +1467,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna/v1.1/11587919cc534bcbb1027a161c82cf58/job-executions/4a911624-1e25-4650-bd1d-382d19695708/refresh-status
GET http://sahara/v1.1/11587919cc534bcbb1027a161c82cf58/job-executions/4a911624-1e25-4650-bd1d-382d19695708/refresh-status
**response**
@ -1497,7 +1497,7 @@ This operation does not require a request body.
.. sourcecode:: http
GET http://savanna/v1.1/11587919cc534bcbb1027a161c82cf58/job-executions/4a911624-1e25-4650-bd1d-382d19695708/refresh-status
GET http://sahara/v1.1/11587919cc534bcbb1027a161c82cf58/job-executions/4a911624-1e25-4650-bd1d-382d19695708/refresh-status
**response**
@ -1529,7 +1529,7 @@ This operation does not require a request body.
.. sourcecode:: http
DELETE http://savanna/v1.1/job-executions/<job-execution-id>/d7g51a-8123-424e-sdsr3-eb222ec989b1
DELETE http://sahara/v1.1/job-executions/<job-execution-id>/d7g51a-8123-424e-sdsr3-eb222ec989b1
**response**
@ -1543,7 +1543,7 @@ This operation does not require a request body.
Job Execution object
====================
The following json response represents Job Execution object returned from Savanna
The following JSON response represents a Job Execution object returned by Sahara
.. sourcecode:: json

View File

@ -9,12 +9,12 @@ simplify task of building such images we use
code that alters how the image is built, or runs within the chroot to prepare
the image.
Elements for building vanilla images are stored in `Savanna extra repository <https://github.com/openstack/savanna-image-elements>`_
Elements for building vanilla images are stored in `Sahara extra repository <https://github.com/openstack/sahara-image-elements>`_
.. note::
Savanna requires images with cloud-init package installed:
Sahara requires images with the cloud-init package installed:
* `For Fedora <http://pkgs.fedoraproject.org/cgit/cloud-init.git/>`_
* `For Ubuntu <http://packages.ubuntu.com/precise/cloud-init>`_
@ -22,7 +22,7 @@ Elements for building vanilla images are stored in `Savanna extra repository <ht
In this document you will find instructions on how to build Ubuntu and Fedora
images with Apache Hadoop.
1. Clone repository "https://github.com/openstack/savanna-image-elements" locally.
1. Clone the repository "https://github.com/openstack/sahara-image-elements" locally.
2. You can simply run the diskimage-create.sh script in any directory (for example, your home directory). This script will create two cloud images - Fedora and Ubuntu.
@ -33,18 +33,18 @@ images with Apache Hadoop.
This script will update your system and install the required packages.
* kpartx
* qemu
Then it will clone the repositories "https://github.com/openstack/diskimage-builder" and "https://github.com/openstack/savanna-image-elements" and export nessesary parameters.
Then it will clone the repositories "https://github.com/openstack/diskimage-builder" and "https://github.com/openstack/sahara-image-elements" and export the necessary parameters.
* ``DIB_HADOOP_VERSION`` - version of Hadoop to install
* ``JAVA_DOWNLOAD_URL`` - download link for JDK (tarball or bin)
* ``OOZIE_DOWNLOAD_URL`` - download link for OOZIE (we have built
Oozie libs here: http://savanna-files.mirantis.com/oozie-4.0.0.tar.gz
Oozie libs here: http://sahara-files.mirantis.com/oozie-4.0.0.tar.gz
* ``HIVE_VERSION`` - version of Hive to install (currently supports only 0.11.0)
* ``ubuntu_image_name``
* ``fedora_image_name``
* ``DIB_IMAGE_SIZE`` - parameter that specifies the hard disk volume of an
instance. You need to specify it only for Fedora because Fedora doesn't use all of the available volume
* ``DIB_COMMIT_ID`` - latest commit id of the diskimage-builder project
* ``SAVANNA_ELEMENTS_COMMIT_ID`` - latest commit id of savanna-image-elements project
* ``SAHARA_ELEMENTS_COMMIT_ID`` - latest commit id of sahara-image-elements project
NOTE: If you don't want to use the default values, you should edit this script and set your own parameter values.

View File

@ -4,10 +4,10 @@ Elastic Data Processing (EDP)
Overview
--------
Savanna's Elastic Data Processing facility or :dfn:`EDP` allows the execution of Hadoop jobs on clusters created from Savanna. EDP supports:
Sahara's Elastic Data Processing facility or :dfn:`EDP` allows the execution of Hadoop jobs on clusters created from Sahara. EDP supports:
* Hive, Pig, MapReduce, and Java job types
* storage of job binaries in Swift or Savanna's own database
* storage of job binaries in Swift or Sahara's own database
* access to input and output data sources in Swift or HDFS
* configuration of jobs at submission time
* execution of jobs on existing clusters or transient clusters
@ -15,14 +15,14 @@ Savanna's Elastic Data Processing facility or :dfn:`EDP` allows the execution of
Interfaces
----------
The EDP features can be used from the Savanna web UI which is described in the :doc:`../horizon/dashboard.user.guide`.
The EDP features can be used from the Sahara web UI which is described in the :doc:`../horizon/dashboard.user.guide`.
The EDP features also can be used directly by a client through the :doc:`../restapi/rest_api_v1.1_EDP`.
EDP Concepts
------------
Savanna EDP uses a collection of simple objects to define and execute Hadoop jobs. These objects are stored in the Savanna database when they
Sahara EDP uses a collection of simple objects to define and execute Hadoop jobs. These objects are stored in the Sahara database when they
are created, allowing them to be reused. This modular approach with database persistence allows code and data to be reused across multiple jobs.
The essential components of a job are:
@ -37,13 +37,13 @@ These components are supplied through the objects described below.
Job Binaries
++++++++++++
A :dfn:`Job Binary` object stores a URL to a single Pig script, Hive script, or Jar file and any credentials needed to retrieve the file. The file itself may be stored in the Savanna internal database or in Swift.
A :dfn:`Job Binary` object stores a URL to a single Pig script, Hive script, or Jar file and any credentials needed to retrieve the file. The file itself may be stored in the Sahara internal database or in Swift.
Files in the Savanna database are stored as raw bytes in a :dfn:`Job Binary Internal` object. This object's sole purpose is to store a file for later retrieval. No extra credentials need to be supplied for files stored internally.
Files in the Sahara database are stored as raw bytes in a :dfn:`Job Binary Internal` object. This object's sole purpose is to store a file for later retrieval. No extra credentials need to be supplied for files stored internally.
Savanna requires credentials (username and password) to access files stored in Swift. The Swift service must be running in the same OpenStack installation referenced by Savanna.
Sahara requires credentials (username and password) to access files stored in Swift. The Swift service must be running in the same OpenStack installation referenced by Sahara.
There is a configurable limit on the size of a single job binary that may be retrieved by Savanna. This limit is 5MB and may be set with the *job_binary_max_KB* setting in the :file:`savanna.conf` configuration file.
There is a configurable limit on the size of a single job binary that may be retrieved by Sahara. This limit is 5MB and may be set with the *job_binary_max_KB* setting in the :file:`sahara.conf` configuration file.
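Since uploads larger than the retrieval limit would produce jobs that cannot run, a client can guard against this up front. A trivial sketch, assuming the default 5MB value; the constant mirrors *job_binary_max_KB* and is not read from the server.

.. sourcecode:: python

    import os

    JOB_BINARY_MAX_KB = 5 * 1024  # assumed default, mirrors job_binary_max_KB

    def check_job_binary_size(path):
        """Raise early if a local file exceeds the retrieval limit."""
        size_kb = os.path.getsize(path) / 1024.0
        if size_kb > JOB_BINARY_MAX_KB:
            raise ValueError("%s is %.0f KB, over the %d KB limit"
                             % (path, size_kb, JOB_BINARY_MAX_KB))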
Jobs
++++
@ -68,26 +68,26 @@ Data Sources
A :dfn:`Data Source` object stores a URL which designates the location of input or output data and any credentials needed to access the location.
Savanna supports data sources in Swift. The Swift service must be running in the same OpenStack installation referenced by Savanna.
Sahara supports data sources in Swift. The Swift service must be running in the same OpenStack installation referenced by Sahara.
Savanna also supports data sources in HDFS. Any HDFS instance running on a Savanna cluster in the same OpenStack installation is accessible without manual configuration. Other instances of HDFS may be used as well provided that the URL is resolvable from the node executing the job.
Sahara also supports data sources in HDFS. Any HDFS instance running on a Sahara cluster in the same OpenStack installation is accessible without manual configuration. Other instances of HDFS may be used as well provided that the URL is resolvable from the node executing the job.
Job Execution
+++++++++++++
Job objects must be *launched* or *executed* in order for them to run on the cluster. During job launch, a user specifies execution details including data sources, configuration values, and program arguments. The relevant details will vary by job type. The launch will create a :dfn:`Job Execution` object in Savanna which is used to monitor and manage the job.
Job objects must be *launched* or *executed* in order for them to run on the cluster. During job launch, a user specifies execution details including data sources, configuration values, and program arguments. The relevant details will vary by job type. The launch will create a :dfn:`Job Execution` object in Sahara which is used to monitor and manage the job.
To execute the job, Savanna generates a workflow and submits it to the Oozie server running on the cluster. Familiarity with Oozie is not necessary for using Savanna but it may be beneficial to the user. A link to the Oozie web console can be found in the Savanna web UI in the cluster details.
To execute the job, Sahara generates a workflow and submits it to the Oozie server running on the cluster. Familiarity with Oozie is not necessary for using Sahara but it may be beneficial to the user. A link to the Oozie web console can be found in the Sahara web UI in the cluster details.
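A minimal sketch of a launch call through the REST API, using the */jobs/<job_id>/execute* method covered in :doc:`../restapi/rest_api_v1.1_EDP`; the body field names (``cluster_id``, ``input_id``, ``output_id``, ``job_configs``) are assumptions based on that reference and may differ in detail.

.. sourcecode:: python

    import json

    import requests

    def execute_job(sahara, tenant, token, job_id,
                    cluster_id, input_id, output_id):
        # Launching creates a Job Execution object whose status can be
        # polled afterwards; field names here are illustrative.
        body = {
            "cluster_id": cluster_id,
            "input_id": input_id,
            "output_id": output_id,
            "job_configs": {"configs": {}, "params": {}, "args": []},
        }
        resp = requests.post(
            "%s/%s/jobs/%s/execute" % (sahara, tenant, job_id),
            data=json.dumps(body),
            headers={"X-Auth-Token": token,
                     "Content-Type": "application/json"},
        )
        resp.raise_for_status()
        return resp.json()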
.. _edp_workflow:
General Workflow
----------------
The general workflow for defining and executing a job in Savanna is essentially the same whether using the web UI or the REST API.
The general workflow for defining and executing a job in Sahara is essentially the same whether using the web UI or the REST API.
1. Launch a cluster from Savanna if there is not one already available
2. Create all of the Job Binaries needed to run the job, stored in the Savanna database or in Swift
1. Launch a cluster from Sahara if there is not one already available
2. Create all of the Job Binaries needed to run the job, stored in the Sahara database or in Swift
+ When using the REST API and internal storage of job binaries, there is an extra step here to first create the Job Binary Internal objects
+ Once the Job Binary Internal objects are created, Job Binary objects may be created which refer to them by URL
@ -125,7 +125,7 @@ Jobs can be configured at launch. The job type determines the kinds of values th
* :dfn:`Configuration values` are key/value pairs. They set options for EDP, Oozie or Hadoop.
+ The EDP configuration values have names beginning with *edp.* and are consumed by Savanna
+ The EDP configuration values have names beginning with *edp.* and are consumed by Sahara
+ The Oozie and Hadoop configuration values may be read by running jobs
* :dfn:`Parameters` are key/value pairs. They supply values for the Hive and Pig parameter substitution mechanisms.
@ -133,32 +133,32 @@ Jobs can be configured at launch. The job type determines the kinds of values th
These values can be set on the :guilabel:`Configure` tab during job launch through the web UI or through the *job_configs* parameter when using the */jobs/<job_id>/execute* REST method.
In some cases Savanna generates configuration values or parameters automatically. Values set explicitly by the user during launch will override those generated by Savanna.
In some cases Sahara generates configuration values or parameters automatically. Values set explicitly by the user during launch will override those generated by Sahara.
Generation of Swift Properties for Data Sources
+++++++++++++++++++++++++++++++++++++++++++++++
If a job is run with data sources in Swift, Savanna will automatically generate Swift username and password configuration values based on the credentials in the data sources. If the input and output data sources are both in Swift, it is expected that they specify the same credentials.
If a job is run with data sources in Swift, Sahara will automatically generate Swift username and password configuration values based on the credentials in the data sources. If the input and output data sources are both in Swift, it is expected that they specify the same credentials.
The Swift credentials can be set explicitly with the following configuration values:
+------------------------------------+
| Name                               |
+====================================+
| fs.swift.service.savanna.username  |
| fs.swift.service.sahara.username   |
+------------------------------------+
| fs.swift.service.savanna.password  |
| fs.swift.service.sahara.password   |
+------------------------------------+
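For example, when launching through the REST API the credentials would be placed in the *configs* section of *job_configs*; the nesting shown below is a sketch inferred from the *job_configs* parameter described above, and the values are placeholders.

.. sourcecode:: python

    # Illustrative job_configs fragment; the "configs" nesting is inferred
    # from the job_configs parameter described above.
    job_configs = {
        "configs": {
            "fs.swift.service.sahara.username": "swift-user",      # placeholder
            "fs.swift.service.sahara.password": "swift-password",  # placeholder
        },
        "params": {},
        "args": [],
    }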
Additional Details for Hive jobs
++++++++++++++++++++++++++++++++
Savanna will automatically generate values for the ``INPUT`` and ``OUTPUT`` parameters required by Hive based on the specified data sources.
Sahara will automatically generate values for the ``INPUT`` and ``OUTPUT`` parameters required by Hive based on the specified data sources.
Additional Details for Pig jobs
+++++++++++++++++++++++++++++++
Savanna will automatically generate values for the ``INPUT`` and ``OUTPUT`` parameters required by Pig based on the specified data sources.
Sahara will automatically generate values for the ``INPUT`` and ``OUTPUT`` parameters required by Pig based on the specified data sources.
For Pig jobs, ``arguments`` should be thought of as command line arguments separated by spaces and passed to the ``pig`` shell.
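For instance, a hypothetical launch passing a Pig parameter would list each shell token separately:

.. sourcecode:: python

    # Hypothetical arguments for a Pig job launch: one list element per
    # token, exactly as they would appear on the `pig` command line.
    job_configs = {
        "args": ["-param", "DATE=2014-03-17"],
    }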
@ -203,49 +203,49 @@ values to the ``main`` method:
Data Source objects are not used with Java job types. Instead, any input or output paths must be passed to the ``main`` method
using one of the above two methods. Furthermore, if Swift data sources are used the configuration values listed in `Generation of Swift Properties for Data Sources`_ must be passed with one of the above two methods and set in the configuration by ``main``.
The ``edp-wordcount`` example bundled with Savanna shows how to use configuration values, arguments, and Swift data paths in a Java job type.
The ``edp-wordcount`` example bundled with Sahara shows how to use configuration values, arguments, and Swift data paths in a Java job type.
Special Savanna URLs
Special Sahara URLs
--------------------
Savanna uses custom URLs to refer to objects stored in Swift or the Savanna internal database. These URLs are not meant to be used
outside of Savanna.
Sahara uses custom URLs to refer to objects stored in Swift or the Sahara internal database. These URLs are not meant to be used
outside of Sahara.
Savanna Swift URLs have the form:
Sahara Swift URLs have the form:
``swift://container.savanna/object``
``swift://container.sahara/object``
Savanna internal database URLs have the form:
Sahara internal database URLs have the form:
``savanna-db://savanna-generated-uuid``
``sahara-db://sahara-generated-uuid``
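Both forms are purely mechanical, as the small sketch below illustrates; the helper names are made up for illustration.

.. sourcecode:: python

    def swift_url(container, obj):
        # Note the ".sahara" suffix on the container name, as shown above.
        return "swift://%s.sahara/%s" % (container, obj)

    def internal_db_url(uuid):
        return "sahara-db://%s" % uuid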
EDP Requirements
================
The OpenStack installation and the cluster launched from Savanna must meet the following minimum requirements in order for EDP to function:
The OpenStack installation and the cluster launched from Sahara must meet the following minimum requirements in order for EDP to function:
OpenStack Services
------------------
When a job is executed, binaries are first uploaded to a job tracker and then moved from the job tracker's local filesystem to HDFS. Therefore, there must be an instance of HDFS available to the nodes in the Savanna cluster.
When a job is executed, binaries are first uploaded to a job tracker and then moved from the job tracker's local filesystem to HDFS. Therefore, there must be an instance of HDFS available to the nodes in the Sahara cluster.
If the Swift service *is not* running in the OpenStack installation
+ Job binaries may only be stored in the Savanna internal database
+ Job binaries may only be stored in the Sahara internal database
+ Data sources require a long-running HDFS
If the Swift service *is* running in the OpenStack installation
+ Job binaries may be stored in Swift or the Savanna internal database
+ Job binaries may be stored in Swift or the Sahara internal database
+ Data sources may be in Swift or a long-running HDFS
Cluster Processes
-----------------
At a minimum the Savanna cluster must run a single instance of these processes to support EDP:
At a minimum the Sahara cluster must run a single instance of these processes to support EDP:
* jobtracker
* namenode
@ -270,23 +270,23 @@ finished.
Two config parameters control the behaviour of transient clusters:
* periodic_enable - if set to 'False', Savanna will do nothing to a transient
* periodic_enable - if set to 'False', Sahara will do nothing to a transient
cluster once the job it was created for is completed. If it is set to
'True', then the behaviour depends on the value of the next parameter.
* use_identity_api_v3 - set it to 'False' if your OpenStack installation
does not provide Keystone API v3. In that case Savanna will not terminate
does not provide Keystone API v3. In that case Sahara will not terminate
unneeded clusters. Instead it will set their state to 'AwaitingTermination'
meaning that they could be manually deleted by a user. If the parameter is
set to 'True', Savanna will itself terminate the cluster. The limitation is
set to 'True', Sahara will itself terminate the cluster. The limitation is
caused by lack of 'trusts' feature in Keystone API older than v3.
If both parameters are set to 'True', Savanna works with transient clusters in
If both parameters are set to 'True', Sahara works with transient clusters in
the following manner:
1. When a user requests that a job be executed on a transient cluster,
Savanna creates such a cluster.
2. Savanna drops the user's credentials once the cluster is created but
Sahara creates such a cluster.
2. Sahara drops the user's credentials once the cluster is created but
prior to that it creates a trust allowing it to operate with the
cluster instances in the future without user credentials.
3. Once a cluster is not needed, Savanna terminates its instances using the
stored trust. Savanna drops the trust after that.
3. Once a cluster is not needed, Sahara terminates its instances using the
stored trust. Sahara drops the trust after that.

View File

@ -16,7 +16,7 @@ Swift Integration
If you want to work with Swift, e.g. to run jobs on data located in Swift or put job results into it, you need to use patched Hadoop and Swift.
For more info about this patching and configuring see :doc:`hadoop-swift`. There are a number of possible configs for Swift which can be set, but
currently Savanna automatically set information about swift filesystem implementation, location awareness, URL and tenant name for authorization.
currently Sahara automatically sets information about the Swift filesystem implementation, location awareness, URL and tenant name for authorization.
The only required information that still needs to be set is the username and password to access Swift. So you need to explicitly specify these parameters while launching the job.
E.g. :
@ -33,7 +33,7 @@ determined from tenant name from configs. Actually, account=tenant.
${provider} was designed to provide an opportunity to work
with several Swift installations. E.g. it is possible to read data from one Swift installation and write it to another one.
But as for now, Savanna automatically generates configs only for one Swift installation
But for now, Sahara automatically generates configs for only one Swift installation
with the name "sahara".
Currently a user can only enable/disable Swift for a Hadoop cluster, but there is a blueprint about making Swift access
@ -53,9 +53,9 @@ All volumes are attached during Cluster creation/scaling operations.
Neutron and Nova Network support
--------------------------------
OpenStack Cluster may use Nova Network or Neutron as a networking service. Savanna supports both, but when deployed,
a special configuration for networking should be set explicitly. By default Savanna will behave as if Nova Network is used.
If OpenStack Cluster uses Neutron, then ``use_neutron`` option should be set to ``True`` in Savanna configuration file. In
An OpenStack Cluster may use Nova Network or Neutron as a networking service. Sahara supports both, but when deployed,
a special configuration for networking should be set explicitly. By default Sahara will behave as if Nova Network is used.
If the OpenStack Cluster uses Neutron, then the ``use_neutron`` option should be set to ``True`` in the Sahara configuration file. In
addition, if the OpenStack Cluster supports network namespaces, set the ``use_namespaces`` option to ``True``
.. sourcecode:: cfg
@ -63,30 +63,30 @@ addition, if the OpenStack Cluster supports network namespaces, set the ``use_na
use_neutron=True
use_namespaces=True
Savanna Dashboard should also be configured properly to support Neutron. ``SAVANNA_USE_NEUTRON`` should be set to ``True`` in
Sahara Dashboard should also be configured properly to support Neutron. ``SAHARA_USE_NEUTRON`` should be set to ``True`` in
OpenStack Dashboard ``local_settings.py`` configuration file.
.. sourcecode:: python
SAVANNA_USE_NEUTRON=True
SAHARA_USE_NEUTRON=True
Floating IP Management
----------------------
Savanna needs to access instances through ssh during a Cluster setup. To establish a connection Savanna may
Sahara needs to access instances via SSH during Cluster setup. To establish a connection, Sahara may
use either a fixed or a floating IP of an Instance. By default the ``use_floating_ips`` parameter is set to ``True``, so
Savanna will use Floating IP of an Instance to connect. In this case, user has two options for how to make all instances
Sahara will use the Floating IP of an Instance to connect. In this case, the user has two options for how to make all instances
get a floating IP:
* Nova Network may be configured to assign floating IPs automatically by setting ``auto_assign_floating_ip`` to ``True`` in ``nova.conf``
* User may specify a floating IP pool for each Node Group directly.
Note: When using floating IPs for management (``use_floating_ips=True``) **every** instance in the Cluster should have a floating IP,
otherwise Savanna will not be able to work with it.
otherwise Sahara will not be able to work with it.
If ``use_floating_ips`` parameter is set to ``False`` Savanna will use Instances' fixed IPs for management. In this case
the node where Savanna is running should have access to Instances' fixed IP network. When OpenStack uses Neutron for
If the ``use_floating_ips`` parameter is set to ``False``, Sahara will use the Instances' fixed IPs for management. In this case
the node where Sahara is running should have access to the Instances' fixed IP network. When OpenStack uses Neutron for
networking, the user will be able to choose a fixed IP network for all instances in a Cluster.
Anti-affinity
@ -94,7 +94,7 @@ Anti-affinity
One of the problems with Hadoop running on OpenStack is that there is no ability to control where a machine actually runs.
We cannot be sure that two new virtual machines are started on different physical machines. As a result, any replication
within the cluster is not reliable because all replicas may end up on one physical machine.
Anti-affinity feature provides an ability to explicitly tell Savanna to run specified processes on different compute nodes. This
The anti-affinity feature provides the ability to explicitly tell Sahara to run specified processes on different compute nodes. This
is especially useful for the Hadoop datanode process, to make HDFS replicas reliable.
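As a sketch, the feature might appear in a cluster create request as a list of process names to keep apart; the ``anti_affinity`` field name is an assumption here, and the authoritative setup is described in the section referenced below.

.. sourcecode:: python

    # Illustrative fragment of a cluster create request body; the
    # "anti_affinity" key and its format are assumptions.
    cluster = {
        "name": "hadoop-cluster",
        "anti_affinity": ["datanode"],  # spread HDFS datanodes across hosts
    }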
.. _`enable-anti-affinity`:
@ -123,16 +123,16 @@ possible. Hadoop supports data-locality feature and can schedule jobs to
tasktracker nodes that are local to the input stream. In this case the tasktracker
could communicate directly with the local data node.
Savanna supports topology configuration for HDFS and Swift data sources.
Sahara supports topology configuration for HDFS and Swift data sources.
To enable data-locality set ``enable_data_locality`` parameter to ``True`` in
Savanna configuration file
Sahara configuration file
.. sourcecode:: cfg
enable_data_locality=True
In this case two files with topology must be provided to Savanna.
In this case two files with topology must be provided to Sahara.
The ``compute_topology_file`` and ``swift_topology_file`` parameters
control the location of the files with compute and Swift node topology descriptions,
respectively.
@ -164,18 +164,18 @@ to swift nodes.
Hadoop versions after 1.2.0 support four-layer topology
(https://issues.apache.org/jira/browse/HADOOP-8468). To enable this feature
set ``enable_hypervisor_awareness`` option to ``True`` in Savanna configuration
file. In this case Savanna will add compute node ID as a second level of
set the ``enable_hypervisor_awareness`` option to ``True`` in the Sahara configuration
file. In this case Sahara will add the compute node ID as a second level of
topology for Virtual Machines.
Heat Integration
----------------
Savanna may use `OpenStack Orchestration engine <https://wiki.openstack.org/wiki/Heat>`_ (aka Heat) to provision nodes for Hadoop cluster.
To make Savanna work with Heat the following steps are required:
Sahara may use the `OpenStack Orchestration engine <https://wiki.openstack.org/wiki/Heat>`_ (aka Heat) to provision nodes for a Hadoop cluster.
To make Sahara work with Heat, the following steps are required:
* Your OpenStack installation must have the 'orchestration' service up and running
* Savanna must contain the following configuration parameter in *savanna.conf*:
* Sahara must contain the following configuration parameter in *sahara.conf*:
.. sourcecode:: cfg
@ -211,4 +211,4 @@ The following features are supported in the new Heat engine:
| Nova Network support | TBD | https://launchpad.net/bugs/1259176 |
+-----------------------------------------+-------------------------+-----------------------------------------+
| Elastic Data Processing | Not affected | |
+-----------------------------------------+-------------------------+-----------------------------------------+
+-----------------------------------------+-------------------------+-----------------------------------------+

View File

@ -1,7 +1,7 @@
Requirements for Guests
=======================
Savanna manages guests of various platforms (for example Ubuntu, Fedora, RHEL, and CentOS) with various versions of the Hadoop ecosystem projects installed. There are common requirements for all guests, and additional requirements based on the plugin that is used for cluster deployment.
Sahara manages guests of various platforms (for example Ubuntu, Fedora, RHEL, and CentOS) with various versions of the Hadoop ecosystem projects installed. There are common requirements for all guests, and additional requirements based on the plugin that is used for cluster deployment.
Common Requirements
-------------------
@ -22,7 +22,7 @@ If the Vanilla Plugin is used for cluster deployment the guest is required to ha
* Apache Hadoop installed
* 'hadoop' user created
See :doc:`hadoop-swift` for information on using Swift with your Savanna cluster (for EDP support Swift integration is currently required).
See :doc:`hadoop-swift` for information on using Swift with your Sahara cluster (for EDP support Swift integration is currently required).
To support EDP, the following components must also be installed on the guest:

View File

@ -6,7 +6,7 @@ marriage. There were two steps to achieve this:
* Hadoop side: https://issues.apache.org/jira/browse/HADOOP-8545
This patch is not merged yet and is still being developed, so the
latest-version jar file can be downloaded from a CDN:
http://savanna-files.mirantis.com/hadoop-swift/hadoop-swift-latest.jar
http://sahara-files.mirantis.com/hadoop-swift/hadoop-swift-latest.jar
* Swift side: https://review.openstack.org/#/c/21015
This patch is merged into Grizzly. If you want to make it work in Folsom
see the instructions in the section below.
@ -69,7 +69,7 @@ Hadoop patching
---------------
You may build the jar file yourself, choosing the latest patch from
https://issues.apache.org/jira/browse/HADOOP-8545. Or you may get the latest
one from CDN http://savanna-files.mirantis.com/hadoop-swift/hadoop-swift-latest.jar
one from CDN http://sahara-files.mirantis.com/hadoop-swift/hadoop-swift-latest.jar
You need to put this file into the Hadoop libraries directory (e.g. /usr/lib/share/hadoop/lib)
on each job-tracker and task-tracker node in the cluster. The main step in this
section is to configure the core-site.xml file on each of these nodes.

View File

@ -1,7 +1,7 @@
Hortonworks Data Platform Plugin
================================
The Hortonworks Data Platform (HDP) Savanna plugin provides a way to provision HDP clusters on OpenStack using templates in a single click and in an easily repeatable fashion. As seen from the architecture diagram below, the Savanna controller serves as the glue between Hadoop and OpenStack. The HDP plugin mediates between the Savanna controller and Apache Ambari in order to deploy and configure Hadoop on OpenStack. Core to the HDP Plugin is Apache Ambari that is used as the orchestrator for deploying the HDP stack on OpenStack.
The Hortonworks Data Platform (HDP) Sahara plugin provides a way to provision HDP clusters on OpenStack using templates in a single click and in an easily repeatable fashion. As seen from the architecture diagram below, the Sahara controller serves as the glue between Hadoop and OpenStack. The HDP plugin mediates between the Sahara controller and Apache Ambari in order to deploy and configure Hadoop on OpenStack. Core to the HDP Plugin is Apache Ambari, which is used as the orchestrator for deploying the HDP stack on OpenStack.
.. image:: ../images/hdp-plugin-architecture.png
:width: 800 px
@ -27,7 +27,7 @@ The HDP Plugin performs the following four primary functions during cluster crea
Images
------
The Savanna HDP plugin can make use of either minimal (operating system only) images or pre-populated HDP images. The base requirement for both is that the image is cloud-init enabled and contains a supported operating system (see http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.2.4/bk_hdp1-system-admin-guide/content/sysadminguides_ha_chap2_3.html).
The Sahara HDP plugin can make use of either minimal (operating system only) images or pre-populated HDP images. The base requirement for both is that the image is cloud-init enabled and contains a supported operating system (see http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.2.4/bk_hdp1-system-admin-guide/content/sysadminguides_ha_chap2_3.html).
The advantage of a pre-populated image is that provisioning time is accelerated, as packages do not need to be downloaded and installed which make up the majority of the time spent in the provisioning cycle.
@ -60,9 +60,9 @@ Any packages that are not installed in a pre-populated image will automatically
There are two VM images provided for use with the HDP Plugin:
1. `centos-6_64-hdp-1.3.qcow2 <http://public-repo-1.hortonworks.com/savanna/images/centos-6_4-64-hdp-1.3.qcow2>`_: This image contains most of the requisite packages necessary for HDP deployment. The packages contained herein correspond to the HDP 1.3 release. The operating system is a minimal CentOS 6.4 cloud-init enabled install. This image can only be used to provision HDP 1.3 hadoop clusters.
2. `centos-6-64-hdp-vanilla.qcow2 <http://public-repo-1.hortonworks.com/savanna/images/centos-6_4-64-vanilla.qcow2>`_: This image provides only a minimal install of CentOS 6.4 and is cloud-init enabled. This image can be used to provision any versions of HDP supported by Savanna.
2. `centos-6-64-hdp-vanilla.qcow2 <http://public-repo-1.hortonworks.com/savanna/images/centos-6_4-64-vanilla.qcow2>`_: This image provides only a minimal install of CentOS 6.4 and is cloud-init enabled. This image can be used to provision any version of HDP supported by Sahara.
HDP plugin requires an image to be tagged in Savanna Image Registry with
The HDP plugin requires an image to be tagged in the Sahara Image Registry with
two tags: 'hdp' and '<hdp version>' (e.g. '1.3.2').
Also in the Image Registry you will need to specify a username for the image.
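As a sketch, tagging could be done through an image registry REST call; the ``/images/<image_id>/tag`` endpoint and body shown below are assumptions not documented on this page.

.. sourcecode:: python

    import json

    import requests

    def tag_hdp_image(sahara, tenant, token, image_id, hdp_version):
        # Endpoint and body are assumptions; '1.3.2' is an example version.
        body = {"tags": ["hdp", hdp_version]}
        resp = requests.post(
            "%s/%s/images/%s/tag" % (sahara, tenant, image_id),
            data=json.dumps(body),
            headers={"X-Auth-Token": token,
                     "Content-Type": "application/json"},
        )
        resp.raise_for_status()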
@ -76,8 +76,8 @@ The HDP plugin currently has the following limitations:
* Currently, the HDP plugin provides support for HDP 1.3. Once HDP2 is released, support for this version will be provided.
* Swift integration is not yet implemented.
* It is not possible to decrement the number of node-groups or hosts per node group in a Savanna generated cluster.
* Only the following services are available to be deployed via Savanna:
* It is not possible to decrement the number of node-groups or hosts per node group in a Sahara generated cluster.
* Only the following services are available to be deployed via Sahara:
* Ambari
* Nagios
* Ganglia
@ -95,6 +95,6 @@ Prior to Hadoop cluster creation, the HDP plugin will perform the following vali
* Ensure the deployment of one Ambari Server instance to the cluster
* Ensure that each defined node group had an associated Ambari Agent configured
The HDP Plugin and Savanna Support
The HDP Plugin and Sahara Support
----------------------------------
A Hortonworks supported version of HDP OpenStack plugin will become available at a future date. For more information, please contact Hortonworks.

View File

@ -1,8 +1,8 @@
Savanna Installation Guide
Sahara Installation Guide
==========================
We recommend you install it in a way that will keep your system in a
consistent state. Ways we recommend to install Savanna are:
consistent state. The ways we recommend to install Sahara are:
* Install via `Fuel <http://fuel.mirantis.com/>`_
@ -18,7 +18,7 @@ To install with Fuel
1. Start by following `Quickstart <http://software.mirantis.com/quick-start/>`_
to install and setup OpenStack
2. Enable Savanna service during installation
2. Enable Sahara service during installation
@ -29,21 +29,21 @@ To install with RDO
<http://openstack.redhat.com/Quickstart>`_ to install and setup
OpenStack.
2. Install the savanna-api service with,
2. Install the sahara-api service with,
.. sourcecode:: console
$ yum install openstack-savanna
$ yum install openstack-sahara
..
3. Configure the savanna-api service to your liking. The configuration
file is located in ``/etc/savanna/savanna.conf``.
3. Configure the sahara-api service to your liking. The configuration
file is located in ``/etc/sahara/sahara.conf``.
4. Start the savanna-api service with,
4. Start the sahara-api service with,
.. sourcecode:: console
$ service openstack-savanna-api start
$ service openstack-sahara-api start
..
@ -74,77 +74,77 @@ To install into a virtual environment
$ sudo easy_install pip
$ sudo pip install virtualenv
2. Setup virtual environment for Savanna:
2. Set up a virtual environment for Sahara:
.. sourcecode:: console
$ virtualenv savanna-venv
$ virtualenv sahara-venv
..
This will install python virtual environment into ``savanna-venv`` directory
This will install a Python virtual environment into the ``sahara-venv`` directory
in your current working directory. This command does not require super
user privileges and can be executed in any directory where the current user has
write permission.
3. You can install the latest Savanna release version from pypi:
3. You can install the latest Sahara release from PyPI:
.. sourcecode:: console
$ savanna-venv/bin/pip install savanna
$ sahara-venv/bin/pip install sahara
..
Or you can get Savanna archive from `<http://tarballs.openstack.org/savanna/>`_ and install it using pip:
Or you can get the Sahara archive from `<http://tarballs.openstack.org/sahara/>`_ and install it using pip:
.. sourcecode:: console
$ savanna-venv/bin/pip install 'http://tarballs.openstack.org/savanna/savanna-master.tar.gz'
$ sahara-venv/bin/pip install 'http://tarballs.openstack.org/sahara/sahara-master.tar.gz'
..
Note that savanna-master.tar.gz contains the latest changes and might not be stable at the moment.
We recommend browsing `<http://tarballs.openstack.org/savanna/>`_ and selecting the latest stable release.
Note that sahara-master.tar.gz contains the latest changes and might not be stable at the moment.
We recommend browsing `<http://tarballs.openstack.org/sahara/>`_ and selecting the latest stable release.
4. After installation you should create a configuration file. The sample config file location
depends on your OS. For Ubuntu it is ``/usr/local/share/savanna/savanna.conf.sample-basic``,
for Red Hat - ``/usr/share/savanna/savanna.conf.sample-basic``. Below is an example for Ubuntu:
depends on your OS. For Ubuntu it is ``/usr/local/share/sahara/sahara.conf.sample-basic``,
for Red Hat - ``/usr/share/sahara/sahara.conf.sample-basic``. Below is an example for Ubuntu:
.. sourcecode:: console
$ mkdir savanna-venv/etc
$ cp savanna-venv/share/savanna/savanna.conf.sample-basic savanna-venv/etc/savanna.conf
$ mkdir sahara-venv/etc
$ cp sahara-venv/share/sahara/sahara.conf.sample-basic sahara-venv/etc/sahara.conf
..
check each option in savanna-venv/etc/savanna.conf, and make necessary changes
Check each option in sahara-venv/etc/sahara.conf and make any necessary changes.
5. Create database schema:
.. sourcecode:: console
$ savanna-venv/bin/python savanna-venv/bin/savanna-db-manage --config-file savanna-venv/etc/savanna.conf upgrade head
$ sahara-venv/bin/python sahara-venv/bin/sahara-db-manage --config-file sahara-venv/etc/sahara.conf upgrade head
..
6. To start Savanna call:
6. To start Sahara call:
.. sourcecode:: console
$ savanna-venv/bin/python savanna-venv/bin/savanna-api --config-file savanna-venv/etc/savanna.conf
$ sahara-venv/bin/python sahara-venv/bin/sahara-api --config-file sahara-venv/etc/sahara.conf
..
Note:
-----
One of the :doc:`Savanna features <features>`, Anti-Affinity, requires a Nova adjustment.
One of the :doc:`Sahara features <features>`, Anti-Affinity, requires a Nova adjustment.
See :ref:`Enabling Anti-Affinity <enable-anti-affinity>` for details. But that is purely optional.
Make sure that your operating system is not blocking Savanna port (default: 8386).
Make sure that your operating system is not blocking the Sahara port (default: 8386).
You may need to configure iptables in CentOS and some other operating systems.
To get the list of all possible options run:
.. sourcecode:: console
$ savanna-venv/bin/python savanna-venv/bin/savanna-api --help
$ sahara-venv/bin/python sahara-venv/bin/sahara-api --help
Further consider reading :doc:`overview` for general Savanna concepts and
Further, consider reading :doc:`overview` for general Sahara concepts and
:doc:`plugins` for specific plugin features/requirements.

View File

@ -4,7 +4,7 @@ Getting Started
Clusters
--------
A cluster deployed by Savanna consists of node groups. Node groups vary by
A cluster deployed by Sahara consists of node groups. Node groups vary by
their role, parameters and number of machines. The picture below
illustrates an example of a Hadoop cluster consisting of 3 node groups, each having
a different role (set of processes).
@ -24,7 +24,7 @@ VMs.
Templates
---------
In order to simplify cluster provisioning Savanna employs concept of templates.
In order to simplify cluster provisioning, Sahara employs the concept of templates.
There are two kinds of templates: node group templates and cluster templates. The
former are used to create node groups, the latter - clusters. Essentially,
templates have the very same parameters as the corresponding entities. Their aim
@ -54,16 +54,16 @@ Image Registry
--------------
OpenStack starts VMs based on a pre-built image with an installed OS. The image
requirements for Savanna depend on plugin and Hadoop version. Some plugins
requirements for Sahara depend on the plugin and Hadoop version. Some plugins
require just a basic cloud image and install Hadoop on the VMs from scratch. Some
plugins might require images with pre-installed Hadoop.
The Savanna Image Registry is a feature which helps filter out images during
The Sahara Image Registry is a feature which helps filter out images during
cluster creation. See :doc:`registering_image` for details on how to
work with Image Registry.
Features
--------
Savanna has several interesting features. The full list could be found there:
Sahara has several interesting features. The full list can be found here:
:doc:`features`

View File

@ -2,7 +2,7 @@ Provisioning Plugins
====================
This page lists all available provisioning plugins. In general a plugin
enables Savanna to deploy a specific Hadoop version/distribution in
enables Sahara to deploy a specific Hadoop version/distribution in
various topologies and with management/monitoring tools.
* :doc:`vanilla_plugin` - deploys Vanilla Apache Hadoop

View File

@ -1,12 +1,12 @@
Registering an Image
====================
Savanna deploys cluster of machines based on images stored in Glance.
Sahara deploys clusters of machines based on images stored in Glance.
Each plugin has its own requirements on image contents, see specific plugin
documentation for details. A general requirement for an image is to have
cloud-init package installed.
Savanna requires image to be registered in Savanna Image Registry order to work with it.
Sahara requires an image to be registered in the Sahara Image Registry in order to work with it.
A registered image must have two properties set:
* username - a name of the default cloud-init user.

View File

@ -1,11 +1,11 @@
Savanna Cluster Statuses Overview
=================================
Sahara Cluster Statuses Overview
================================
All Savanna Cluster operations are performed in multiple steps. A Cluster object
has a ``Status`` attribute which changes when Savanna finishes one step of
All Sahara Cluster operations are performed in multiple steps. A Cluster object
has a ``Status`` attribute which changes when Sahara finishes one step of
operations and starts another one.
Savanna supports three types of Cluster operations:
Sahara supports three types of Cluster operations:
* Create a new Cluster
* Scale/Shrink an existing Cluster
* Delete an existing Cluster
@ -16,7 +16,7 @@ Creating a new Cluster
1. Validating
~~~~~~~~~~~~~
Before performing any operations with OpenStack environment, Savanna validates
Before performing any operations with the OpenStack environment, Sahara validates
user input.
There are two types of validations that are performed:
@ -35,38 +35,38 @@ This status means that the Provisioning plugin performs some infrastructural upd
3. Spawning
~~~~~~~~~~~
Savanna sends requests to OpenStack for all resources to be created:
Sahara sends requests to OpenStack for all resources to be created:
* VMs
* Volumes
* Floating IPs (if Savanna is configured to use Floating IPs)
* Floating IPs (if Sahara is configured to use Floating IPs)
It takes some time for OpenStack to schedule all required VMs and Volumes,
so Savanna wait until all of them are in ``Active`` state.
so Sahara waits until all of them are in the ``Active`` state.
4. Waiting
~~~~~~~~~~
Savanna waits while VMs' operating systems boot up and all internal infrastructure
Sahara waits until the VMs' operating systems boot up and all internal infrastructure
components like networks and volumes are attached and ready to use.
5. Preparing
~~~~~~~~~~~~
Savanna preparers a Cluster for starting. This step includes generating ``/etc/hosts``
file, so that all instances could access each other by a hostname. Also Savanna
Sahara prepares a Cluster for starting. This step includes generating the ``/etc/hosts``
file, so that all instances can access each other by hostname. Also Sahara
updates the ``authorized_keys`` file on each VM, so that communication can be done
without passwords.
6. Configuring
~~~~~~~~~~~~~~
Savanna pushes service configurations to VMs. Both XML based configurations and
Sahara pushes service configurations to the VMs. Both XML-based configurations and
environment variables are set in this step.
7. Starting
~~~~~~~~~~~
Savanna is starting Hadoop services on Cluster's VMs.
Sahara starts the Hadoop services on the Cluster's VMs.
8. Active
~~~~~~~~~
@ -80,19 +80,19 @@ Scaling/Shrinking an existing Cluster
1. Validating
~~~~~~~~~~~~~
Savanna checks the scale/shrink request for validity. The Plugin method called
Sahara checks the scale/shrink request for validity. The Plugin method called
for performing Plugin-specific checks is different from the creation validation method.
2. Scaling
~~~~~~~~~~
Savanna performs database operations updating all affected existing Node Groups
Sahara performs database operations updating all affected existing Node Groups
and creating new ones.
3. Adding Instances
~~~~~~~~~~~~~~~~~~~
State similar to ``Spawning`` while Custer creation. Savanna adds required amount
This state is similar to ``Spawning`` during Cluster creation. Sahara adds the required number
of VMs to existing Node Groups and creates new Node Groups.
4. Configuring
@ -105,14 +105,14 @@ with a new ``/etc/hosts`` file.
5. Decommissioning
~~~~~~~~~~~~~~~~~~
Savanna stops Hadoop services on VMs that will be deleted from a Cluster.
Sahara stops the Hadoop services on VMs that will be deleted from the Cluster.
Decommissioning a Data Node may take some time because Hadoop rearranges data replicas
around the Cluster, so that no data will be lost after the VM is deleted.
6. Deleting Instances
~~~~~~~~~~~~~~~~~~~~~
Savanna sends requests to OpenStack to release unneeded resources:
Sahara sends requests to OpenStack to release unneeded resources:
* VMs
* Volumes
* Floating IPs (if they are used)
@ -137,9 +137,9 @@ Error State
If Cluster creation fails, the Cluster will get into ``Error`` state.
This state means the Cluster may not be able to perform any operations normally.
This cluster will stay in the database until it is manually deleted. The reason for the
failure may be found in Savanna logs.
failure may be found in the Sahara logs.
If an error occurs during ``Adding Instances`` operation, Savanna will first
If an error occurs during the ``Adding Instances`` operation, Sahara will first
try to roll back this operation. If the rollback is impossible or fails itself, then
the Cluster will also get into ``Error`` state.
the Cluster will also get into ``Error`` state.

View File

@ -15,7 +15,7 @@ Keep in mind that if you want to use "Swift Integration" feature ( :doc:`feature
Hadoop must be patched with an implementation of the Swift File System.
For more information about the patching required by the "Swift Integration" feature, see :doc:`hadoop-swift`.
Vanilla plugin requires an image to be tagged in Savanna Image Registry with
The Vanilla plugin requires an image to be tagged in the Sahara Image Registry with
two tags: 'vanilla' and '<hadoop version>' (e.g. '1.2.1').
Also you should specify the username of the default cloud user used in the image: