Refine the doc for sahara

As a rookie stepping into sahara, I found some nits
in the doc that can be corrected. This patch
addresses them.

Change-Id: If8eadce0db8b5fca22d495ed5ed48abbda93aba1
Xi Yang 2015-10-26 13:27:38 +08:00
parent 888ebd1662
commit ab4df88b0c
3 changed files with 11 additions and 10 deletions


@@ -29,7 +29,7 @@ The Sahara architecture consists of several components:
 * REST API - exposes sahara functionality via REST HTTP interface.
 * Python Sahara Client - like other OpenStack components, sahara has
-  its own python client
+  its own python client.
 * Sahara pages - a GUI for the sahara is located in the OpenStack Dashboard
   (horizon).


@@ -98,7 +98,7 @@ For fast cluster provisioning a generic workflow will be as following:
 * perform operations on the cluster; add or remove nodes.
 * terminate the cluster when it is no longer needed.

-For analytics as a service, a generic workflow will be as follows:
+For analytics as a service, a generic workflow will be as following:

 * select one of the predefined data processing framework versions.
 * configure a job:
@@ -114,7 +114,7 @@ For analytics as a service, a generic workflow will be as follows:
   to the user.
 * cluster will be removed automatically after job completion.
-* get the results of computations (for example, from Swift).
+* get the results of computations (for example, from swift).

 User's Perspective
 ------------------
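The analytics-as-a-service steps in the hunk above can be sketched as calls against the sahara REST API. Everything below — the port, the endpoint paths, and the payload field names — is an assumption for illustration, not something this patch specifies:

```python
import json

# Hypothetical sketch of the "analytics as a service" workflow: build the
# URL and body for launching a job. Endpoint layout and field names are
# assumptions, not taken from this patch.
SAHARA_BASE = "http://controller:8386/v1.1/%s"


def job_execute_url(project_id, job_id):
    """URL (assumed) for launching a configured job: POST .../jobs/{id}/execute."""
    return (SAHARA_BASE % project_id) + "/jobs/%s/execute" % job_id


def job_execute_body(input_id, output_id, cluster_id=None):
    """Assemble an (assumed) execution payload. When no cluster_id is given,
    the idea per the workflow above is that sahara provisions a transient
    cluster, runs the job, and removes the cluster after completion; the
    results then land in the output data source (for example, swift)."""
    body = {
        "input_id": input_id,    # e.g. a swift data source
        "output_id": output_id,  # where results are written
    }
    if cluster_id is not None:
        body["cluster_id"] = cluster_id
    return json.dumps(body)
```

A POST of `job_execute_body(...)` to `job_execute_url(...)` would then correspond to the "execute the job" step, with cluster teardown handled by sahara itself.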
@@ -136,7 +136,7 @@ configurations can not be applied to a single node, but to a whole Cluster.
 A user can specify these kinds of configurations in a Cluster Template. Sahara
 enables users to specify which processes should be added to an anti-affinity
 group within a Cluster Template. If a process is included into an
-anti-affinity group, it means that VMs where this process are going to be
+anti-affinity group, it means that VMs where this process is going to be
 launched should be scheduled to different hardware hosts.

 The Cluster entity represents a collection of VM instances that all have the
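The anti-affinity paragraph can be made concrete with a sketch of a cluster template request body. The field names here (`anti_affinity`, `node_groups`, `node_processes`) are my recollection of the sahara API and should be treated as assumptions:

```python
import json

# Illustrative cluster template body. The "anti_affinity" key (assumed field
# name) lists processes whose VMs should be scheduled to different hardware
# hosts, as the paragraph above describes.
template = {
    "name": "cluster-template-1",
    "plugin_name": "vanilla",      # plugin/version values are placeholders
    "hadoop_version": "2.7.1",
    "node_groups": [
        {"name": "master", "flavor_id": "2", "count": 1,
         "node_processes": ["namenode", "jobtracker"]},
        {"name": "worker", "flavor_id": "2", "count": 3,
         "node_processes": ["datanode", "tasktracker"]},
    ],
    # Every VM running a "datanode" process lands on a distinct host.
    "anti_affinity": ["datanode"],
}

body = json.dumps(template)
```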
@@ -151,13 +151,13 @@ and NameNode processes could be run either on a single VM or two separate
 VMs. Also a cluster could contain worker nodes of different types. Worker
 nodes could run both TaskTracker and DataNode, or either of these processes
 alone. Sahara allows a user to create a cluster with any combination of these
-options, but it will not allow the creation of a non working topology (for
+options, but it will not allow the creation of a non-working topology (for
 example: a set of workers with DataNodes, but without a NameNode).

-Each Cluster belongs to some Identity service project determined by the user.
+Each Cluster belongs to an Identity service project determined by the user.
 Users have access only to objects located in projects they have access to.
 Users can edit and delete only objects they have created or exist in their
-project. Naturally, admin users have full access to every object. In this
+projects. Naturally, admin users have full access to every object. In this
 manner, sahara complies with general OpenStack access policy.

 Integration with Object Storage
@@ -184,6 +184,6 @@ In addition to the monitoring capabilities provided by vendor-specific
 Hadoop management tooling, sahara provides pluggable integration with
 external monitoring systems such as Nagios or Zabbix.

-Both deployment and monitoring tools can be installed on stand-alone VMs,
+Both deployment and monitoring tools can be installed on standalone VMs,
 thus allowing a single instance to manage and monitor several clusters at
 once.


@@ -29,10 +29,11 @@ within tenants to which the user has access.
 The sahara API supports the JSON data serialization format. This means that
 for requests that contain a body, the ``Content-Type`` header must be set to
 the MIME type value ``application/json``. Also, clients should accept JSON
-serialized responses by specifying the Accept header with the MIME type
+serialized responses by specifying the ``Accept`` header with the MIME type
 value ``application/json`` or adding the ``.json`` extension to the resource
 name. The default response format is ``application/json`` if the client does
-not specify an Accept header or append the ``.json`` extension in the URL path.
+not specify an ``Accept`` header or append the ``.json`` extension in the URL
+path.

 Example:
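A minimal sketch of the header rules just described; the base URL, project id, and `/clusters` resource path are placeholders, not taken from the patch:

```python
import json

# Illustrates the rules above: Content-Type is required only when a body is
# sent; JSON responses can be requested either via an Accept header or by
# appending the .json extension to the resource name.
BASE = "http://sahara.example:8386/v1.1/PROJECT_ID"  # placeholder endpoint


def request_headers(body=None, accept_json=True):
    """Build headers for a sahara API call per the rules described above."""
    headers = {}
    if body is not None:
        headers["Content-Type"] = "application/json"
    if accept_json:
        headers["Accept"] = "application/json"
    return headers


# GET asking for JSON via the Accept header ...
get_headers = request_headers()
# ... or the same thing via the .json URL extension, with no Accept header.
url_with_ext = BASE + "/clusters.json"
# POST with a JSON body: Content-Type must be set.
post_headers = request_headers(body=json.dumps({"name": "cluster-1"}))
```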