Update Sahara Dev Quickstart Guide

Clean up old references; generalize versions.

Change-Id: Ie4e6554cd283cd81f56e268f3a99da3120aa4d19
Closes-Bug: #1485648
Author: Ashish Billore, 2015-11-12 19:26:41 +05:30; committed by Vitaly Gridnev
parent a0162c8639
commit b0c1d8ba0a


@@ -12,10 +12,9 @@ of OpenStack command line tools and the sahara :doc:`REST API <../restapi>`.

OR

* If you just want to install and use sahara follow
  :doc:`../userdoc/installation.guide`.

2. Identity service configuration
---------------------------------
@@ -27,18 +26,17 @@ whose password is ``nova``:

.. sourcecode:: console

$ export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
$ export OS_TENANT_NAME=admin
$ export OS_USERNAME=admin
$ export OS_PASSWORD=nova

With these environment variables set you can get an authentication
token using the ``keystone`` command line client as follows:

.. sourcecode:: console

$ keystone token-get

If authentication succeeds, the output will be as follows:
@@ -47,10 +45,10 @@ If authentication succeeds, the output will be as follows:

+-----------+----------------------------------+
| Property  | Value                            |
+-----------+----------------------------------+
| expires   | 2015-09-03T13:37:32Z             |
| id        | 2542c427092a4b09a07ee7612c3d99ae |
| tenant_id | c82e4bce56ce4cf9b90bd15dfdef699d |
| user_id   | 7f5becaaa38b4c9e850ccd11672a4c96 |
+-----------+----------------------------------+

The ``id`` and ``tenant_id`` values will be used for creating REST calls
@@ -61,40 +59,50 @@ variables for ease of use later.

.. sourcecode:: console

$ export AUTH_TOKEN="2542c427092a4b09a07ee7612c3d99ae"
$ export TENANT_ID="c82e4bce56ce4cf9b90bd15dfdef699d"

Alternatively, if a devstack environment is used, these values are available
through the ``openrc`` file in the devstack installation root directory and
can be set as follows:

.. sourcecode:: console

$ source <devstack_install_root>/openrc
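As an aside, ``keystone token-get`` above boils down to one REST request
against the Identity v2.0 endpoint in ``OS_AUTH_URL``. A minimal sketch of the
request body it sends, built from the same environment variables (the
``build_token_request`` helper is hypothetical, not part of any client):

```python
import json
import os


def build_token_request(env=os.environ):
    # Keystone v2.0 expects POST <OS_AUTH_URL>/tokens with this JSON body.
    return {
        "auth": {
            "tenantName": env.get("OS_TENANT_NAME", "admin"),
            "passwordCredentials": {
                "username": env.get("OS_USERNAME", "admin"),
                "password": env.get("OS_PASSWORD", "nova"),
            },
        }
    }


body = build_token_request({"OS_TENANT_NAME": "admin",
                            "OS_USERNAME": "admin",
                            "OS_PASSWORD": "nova"})
print(json.dumps(body, indent=2))
```

The token and tenant id in the response map to the ``AUTH_TOKEN`` and
``TENANT_ID`` variables exported above.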
3. Upload an image to the Image service
---------------------------------------

You will need to upload a virtual machine image to the OpenStack Image
service. You can download pre-built images with vanilla Apache Hadoop
installed, or build the images yourself. This guide uses the latest available
upstream Ubuntu image, referred to as ``sahara-vanilla-latest-ubuntu.qcow2``,
and the latest version of the vanilla plugin as an example.

Sample images are available here:

`Sample Images <http://sahara-files.mirantis.com/images/upstream/>`_

* Download a pre-built image

**Note:** For the steps below, substitute ``<openstack_release>`` with the
appropriate OpenStack release and ``<sahara_image>`` with the image of your
choice.

.. sourcecode:: console

$ ssh user@hostname
$ wget http://sahara-files.mirantis.com/images/upstream/<openstack_release>/<sahara_image>.qcow2

Upload the downloaded image to the OpenStack Image service:

.. sourcecode:: console

$ glance image-create --name=sahara-vanilla-latest-ubuntu \
  --disk-format=qcow2 --container-format=bare < ./sahara-vanilla-latest-ubuntu.qcow2

OR

* Build the image using the `diskimage-builder script <https://github.com/openstack/sahara-image-elements/blob/master/diskimage-create/README.rst>`_

Save the image id; it will be used during the image registration with
sahara. You can get the image id using the ``glance`` command line tool
@@ -102,15 +110,14 @@ as follows:

.. sourcecode:: console

$ glance image-list --name sahara-vanilla-latest-ubuntu
+--------------------------------------+------------------------------+
| ID                                   | Name                         |
+--------------------------------------+------------------------------+
| c119f99c-67f2-4404-9cff-f30e4b185036 | sahara-vanilla-latest-ubuntu |
+--------------------------------------+------------------------------+
$ export IMAGE_ID="c119f99c-67f2-4404-9cff-f30e4b185036"
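The ``export IMAGE_ID`` step can also be scripted. A small parsing sketch (not
part of the guide's tooling; ``parse_image_id`` is a hypothetical helper) that
pulls the image id out of the ``glance image-list`` ASCII table instead of
copying it by hand:

```python
def parse_image_id(table, name):
    """Return the ID cell of the row whose Name cell matches `name`."""
    for line in table.splitlines():
        # Strip the outer pipes, then split the row into cells.
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) == 2 and cells[1] == name:
            return cells[0]
    return None


sample = """\
+--------------------------------------+------------------------------+
| ID                                   | Name                         |
+--------------------------------------+------------------------------+
| c119f99c-67f2-4404-9cff-f30e4b185036 | sahara-vanilla-latest-ubuntu |
+--------------------------------------+------------------------------+
"""
print(parse_image_id(sample, "sahara-vanilla-latest-ubuntu"))
# prints c119f99c-67f2-4404-9cff-f30e4b185036
```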
4. Register the image with the sahara image registry
----------------------------------------------------
@@ -124,63 +131,82 @@ will vary depending on the source image used, for more please see*

.. sourcecode:: console

$ sahara image-register --id $IMAGE_ID --username ubuntu

Tag the image to inform sahara about the plugin and the version with which
it shall be used.

**Note:** For the steps below and the rest of this guide, substitute
``<plugin_version>`` with the appropriate version of your plugin.

.. sourcecode:: console

$ sahara image-add-tag --id $IMAGE_ID --tag vanilla
$ sahara image-add-tag --id $IMAGE_ID --tag <plugin_version>

Ensure that the image is registered correctly by querying sahara. If
registered successfully, the image will appear in the output as follows:

.. sourcecode:: console

$ sahara image-list
+------------------------------+--------------------------------------+----------+---------------------------+-------------+
| name                         | id                                   | username | tags                      | description |
+------------------------------+--------------------------------------+----------+---------------------------+-------------+
| sahara-vanilla-latest-ubuntu | c119f99c-67f2-4404-9cff-f30e4b185036 | ubuntu   | vanilla, <plugin_version> | None        |
+------------------------------+--------------------------------------+----------+---------------------------+-------------+
5. Create node group templates
------------------------------

Node groups are the building blocks of clusters in sahara. Before you can
begin provisioning clusters you must define a few node group templates to
describe node group configurations.

*Note, these templates assume that floating IP addresses are being used. For
more details on floating IP please see* :ref:`floating_ip_management`

If your environment does not use floating IP, omit the floating IP settings
in the templates below.

Sample templates can be found here:

`Sample Templates <https://github.com/openstack/sahara/tree/master/sahara/plugins/default_templates/>`_

Create a file named ``my_master_template_create.json`` with the following
content:
.. sourcecode:: json

{
    "plugin_name": "vanilla",
    "hadoop_version": "<plugin_version>",
    "node_processes": [
        "namenode",
        "resourcemanager",
        "hiveserver"
    ],
    "name": "vanilla-default-master",
    "floating_ip_pool": "public",
    "flavor_id": "2",
    "auto_security_group": true
}
Create a file named ``my_worker_template_create.json`` with the following
content:

.. sourcecode:: json

{
    "plugin_name": "vanilla",
    "hadoop_version": "<plugin_version>",
    "node_processes": [
        "nodemanager",
        "datanode"
    ],
    "name": "vanilla-default-worker",
    "floating_ip_pool": "public",
    "flavor_id": "2",
    "auto_security_group": true
}
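Since the two templates differ only in their name and process list, they can
also be generated from one helper so the plugin version is substituted in a
single place. A sketch under assumptions: ``make_template`` is hypothetical,
and ``"2.7.1"`` stands in for whatever ``<plugin_version>`` is in your cloud:

```python
import json


def make_template(name, processes, plugin_version, floating_ip_pool="public"):
    # Mirrors the JSON templates above; flavor "2" follows the examples.
    return {
        "plugin_name": "vanilla",
        "hadoop_version": plugin_version,
        "node_processes": processes,
        "name": name,
        "floating_ip_pool": floating_ip_pool,
        "flavor_id": "2",
        "auto_security_group": True,
    }


templates = {
    "my_master_template_create.json": make_template(
        "vanilla-default-master",
        ["namenode", "resourcemanager", "hiveserver"], "2.7.1"),
    "my_worker_template_create.json": make_template(
        "vanilla-default-worker",
        ["nodemanager", "datanode"], "2.7.1"),
}

for filename, body in templates.items():
    with open(filename, "w") as f:
        json.dump(body, f, indent=4)
```

The resulting files are the ones passed to
``sahara node-group-template-create`` in the next step.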
@@ -188,28 +214,28 @@ Use the ``sahara`` client to upload the node group templates:

.. sourcecode:: console

$ sahara node-group-template-create --json my_master_template_create.json
$ sahara node-group-template-create --json my_worker_template_create.json

List the available node group templates to ensure that they have been
added properly:

.. sourcecode:: console

$ sahara node-group-template-list
+------------------------+--------------------------------------+-------------+---------------------------------------+-------------+
| name                   | id                                   | plugin_name | node_processes                        | description |
+------------------------+--------------------------------------+-------------+---------------------------------------+-------------+
| vanilla-default-master | 9d3b5b2c-d5d5-4d16-8a93-a568d29c6569 | vanilla     | namenode, resourcemanager, hiveserver | None        |
| vanilla-default-worker | 1aa4a397-cb1e-4f38-be18-7f65fa0cc2eb | vanilla     | nodemanager, datanode                 | None        |
+------------------------+--------------------------------------+-------------+---------------------------------------+-------------+

Save the id for the master and worker node group templates as they will be
used during cluster template creation. For example:

* Master node group template id: ``9d3b5b2c-d5d5-4d16-8a93-a568d29c6569``
* Worker node group template id: ``1aa4a397-cb1e-4f38-be18-7f65fa0cc2eb``
6. Create a cluster template
----------------------------

@@ -217,147 +243,329 @@ used during cluster template creation. For example:

The last step before provisioning the cluster is to create a template
that describes the node groups of the cluster.

Create a file named ``my_cluster_template_create.json`` with the following
content:

.. sourcecode:: json

{
    "plugin_name": "vanilla",
    "hadoop_version": "<plugin_version>",
    "node_groups": [
        {
            "name": "worker",
            "count": 2,
            "node_group_template_id": "1aa4a397-cb1e-4f38-be18-7f65fa0cc2eb"
        },
        {
            "name": "master",
            "count": 1,
            "node_group_template_id": "9d3b5b2c-d5d5-4d16-8a93-a568d29c6569"
        }
    ],
    "name": "vanilla-default-cluster",
    "cluster_configs": {}
}
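As a sanity check, the cluster template can be composed programmatically and
its total instance count verified before uploading. A minimal sketch reusing
the node group template ids from the listing above (the ids are sample values):

```python
cluster_template = {
    "plugin_name": "vanilla",
    "hadoop_version": "<plugin_version>",
    "node_groups": [
        {"name": "worker", "count": 2,
         "node_group_template_id": "1aa4a397-cb1e-4f38-be18-7f65fa0cc2eb"},
        {"name": "master", "count": 1,
         "node_group_template_id": "9d3b5b2c-d5d5-4d16-8a93-a568d29c6569"},
    ],
    "name": "vanilla-default-cluster",
    "cluster_configs": {},
}

# Total number of instances the cluster will run: 2 workers + 1 master.
total = sum(group["count"] for group in cluster_template["node_groups"])
print(total)  # prints 3
```

This total is the ``node_count`` that ``sahara cluster-list`` reports once the
cluster is up.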
Upload the cluster template using the ``sahara`` command line tool:

.. sourcecode:: console

$ sahara cluster-template-create --json my_cluster_template_create.json

Save the cluster template id for use in the cluster provisioning command. The
cluster template id can be found in the output of the creation command or by
listing the cluster templates as follows:

.. sourcecode:: console

$ sahara cluster-template-list
+-------------------------+--------------------------------------+-------------+----------------------+-------------+
| name                    | id                                   | plugin_name | node_groups          | description |
+-------------------------+--------------------------------------+-------------+----------------------+-------------+
| vanilla-default-cluster | 74add4df-07c2-4053-931f-d5844712727f | vanilla     | master: 1, worker: 2 | None        |
+-------------------------+--------------------------------------+-------------+----------------------+-------------+
7. Create cluster
-----------------

Now you are ready to provision the cluster. This step requires a few pieces of
information that can be found by querying various OpenStack services.

Create a file named ``my_cluster_create.json`` with the following content:

.. sourcecode:: json

{
    "name": "my-cluster-1",
    "plugin_name": "vanilla",
    "hadoop_version": "<plugin_version>",
    "cluster_template_id": "74add4df-07c2-4053-931f-d5844712727f",
    "user_keypair_id": "my_stack",
    "default_image_id": "c119f99c-67f2-4404-9cff-f30e4b185036",
    "neutron_management_network": "8cccf998-85e4-4c5f-8850-63d33c1c6916"
}

The parameter ``user_keypair_id`` with the value ``my_stack`` refers to a
keypair. You can create your own keypair in the OpenStack Dashboard, or
through the ``nova`` command line client as follows:

.. sourcecode:: console

$ nova keypair-add my_stack --pub-key $PATH_TO_PUBLIC_KEY

If sahara is configured to use neutron for networking, you will also need to
include the ``neutron_management_network`` parameter in
``my_cluster_create.json``; cluster instances will get fixed IP addresses in
this network. If your environment does not use neutron, omit
``neutron_management_network`` above. You can determine the neutron network
id with the following command:

.. sourcecode:: console

$ neutron net-list

Create and start the cluster:

.. sourcecode:: console

$ sahara cluster-create --json my_cluster_create.json
+----------------------------+-------------------------------------------------+
| Property | Value |
+----------------------------+-------------------------------------------------+
| status | Active |
| neutron_management_network | None |
| is_transient | False |
| description | None |
| user_keypair_id | my_stack |
| updated_at | 2015-09-02T10:58:02 |
| plugin_name | vanilla |
| provision_progress | [{u'successful': True, u'tenant_id': |
| | u'c82e4bce56ce4cf9b90bd15dfdef699d', |
| | u'created_at': u'2015-09-02T10:41:07', |
| | u'step_type': u'Engine: create cluster', |
| | u'updated_at': u'2015-09-02T10:41:12', |
| | u'cluster_id': u'9b094131-a858-4ddb- |
| | 81a8-b71597417cad', u'step_name': u'Wait for |
| | instances to become active', u'total': 3, |
| | u'id': u'34b4b23e- |
| | dc94-4253-bb36-d343a4ec1e57'}, {u'successful': |
| | True, u'tenant_id': |
| | u'c82e4bce56ce4cf9b90bd15dfdef699d', |
| | u'created_at': u'2015-09-02T10:41:05', |
| | u'step_type': u'Engine: create cluster', |
| | u'updated_at': u'2015-09-02T10:41:07', |
| | u'cluster_id': u'9b094131-a858-4ddb- |
| | 81a8-b71597417cad', u'step_name': u'Run |
| | instances', u'total': 3, u'id': u'401f6812 |
| | -d92c-44f0-acfe-f22f4dc1c3fe'}, {u'successful': |
| | True, u'tenant_id': |
| | u'c82e4bce56ce4cf9b90bd15dfdef699d', |
| | u'created_at': u'2015-09-02T10:52:12', |
| | u'step_type': u'Plugin: start cluster', |
| | u'updated_at': u'2015-09-02T10:55:02', |
| | u'cluster_id': u'9b094131-a858-4ddb- |
| | 81a8-b71597417cad', u'step_name': u'Await |
| | DataNodes start up', u'total': 1, u'id': u |
| | '407379af-94a4-4821-9952-14a21be06ebc'}, |
| | {u'successful': True, u'tenant_id': |
| | u'c82e4bce56ce4cf9b90bd15dfdef699d', |
| | u'created_at': u'2015-09-02T10:41:13', |
| | u'step_type': u'Engine: create cluster', |
| | u'updated_at': u'2015-09-02T10:48:21', |
| | u'cluster_id': u'9b094131-a858-4ddb- |
| | 81a8-b71597417cad', u'step_name': u'Wait for |
| | instance accessibility', u'total': 3, u'id': |
| | u'534a3a7b-2678-44f4-9562-f859fef00b1f'}, |
| | {u'successful': True, u'tenant_id': |
| | u'c82e4bce56ce4cf9b90bd15dfdef699d', |
| | u'created_at': u'2015-09-02T10:51:43', |
| | u'step_type': u'Plugin: start cluster', |
| | u'updated_at': u'2015-09-02T10:52:12', |
| | u'cluster_id': u'9b094131-a858-4ddb- |
| | 81a8-b71597417cad', u'step_name': u'Start the |
| | following process(es): DataNodes, |
| | NodeManagers', u'total': 2, u'id': u'628a995c- |
| | 316c-4eed-acbf-17076ffa34db'}, {u'successful': |
| | True, u'tenant_id': |
| | u'c82e4bce56ce4cf9b90bd15dfdef699d', |
| | u'created_at': u'2015-09-02T10:48:21', |
| | u'step_type': u'Engine: create cluster', |
| | u'updated_at': u'2015-09-02T10:48:33', |
| | u'cluster_id': u'9b094131-a858-4ddb- |
| | 81a8-b71597417cad', u'step_name': u'Configure |
| | instances', u'total': 3, u'id': u'7fa3987a- |
| | 636f-48a5-a34c-7a6ecd6b5a44'}, {u'successful': |
| | True, u'tenant_id': |
| | u'c82e4bce56ce4cf9b90bd15dfdef699d', |
| | u'created_at': u'2015-09-02T10:50:26', |
| | u'step_type': u'Plugin: start cluster', |
| | u'updated_at': u'2015-09-02T10:51:30', |
| | u'cluster_id': u'9b094131-a858-4ddb- |
| | 81a8-b71597417cad', u'step_name': u'Start the |
| | following process(es): NameNode', u'total': 1, |
| | u'id': u'8988c41f-9bef-484a- |
| | bd93-58700f55f82b'}, {u'successful': True, |
| | u'tenant_id': |
| | u'c82e4bce56ce4cf9b90bd15dfdef699d', |
| | u'created_at': u'2015-09-02T10:50:14', |
| | u'step_type': u'Plugin: configure cluster', |
| | u'updated_at': u'2015-09-02T10:50:25', |
| | u'cluster_id': u'9b094131-a858-4ddb- |
| | 81a8-b71597417cad', u'step_name': u'Configure |
| | topology data', u'total': 1, u'id': |
| | u'bc20afb9-c44a-4825-9ac2-8bd69bf7efcc'}, |
| | {u'successful': True, u'tenant_id': |
| | u'c82e4bce56ce4cf9b90bd15dfdef699d', |
| | u'created_at': u'2015-09-02T10:48:33', |
| | u'step_type': u'Plugin: configure cluster', |
| | u'updated_at': u'2015-09-02T10:50:14', |
| | u'cluster_id': u'9b094131-a858-4ddb- |
| | 81a8-b71597417cad', u'step_name': u'Configure |
| | instances', u'total': 3, u'id': u'c0a3f2ac- |
| | 508f-4ef4-ac87-db82a4999795'}, {u'successful': |
| | True, u'tenant_id': |
| | u'c82e4bce56ce4cf9b90bd15dfdef699d', |
| | u'created_at': u'2015-09-02T10:55:02', |
| | u'step_type': u'Plugin: start cluster', |
| | u'updated_at': u'2015-09-02T10:58:01', |
| | u'cluster_id': u'9b094131-a858-4ddb- |
| | 81a8-b71597417cad', u'step_name': u'Start the |
| | following process(es): HiveServer', u'total': |
| | 1, u'id': u'd5ab5d4c-b8e7-4fe0-b36f- |
| | 116861bdfcb3'}, {u'successful': True, |
| | u'tenant_id': |
| | u'c82e4bce56ce4cf9b90bd15dfdef699d', |
| | u'created_at': u'2015-09-02T10:41:13', |
| | u'step_type': u'Engine: create cluster', |
| | u'updated_at': u'2015-09-02T10:41:13', |
| | u'cluster_id': u'9b094131-a858-4ddb- |
| | 81a8-b71597417cad', u'step_name': u'Assign |
| | IPs', u'total': 3, u'id': |
| | u'd6848957-6206-4116-a310-ec458e651c12'}, |
| | {u'successful': True, u'tenant_id': |
| | u'c82e4bce56ce4cf9b90bd15dfdef699d', |
| | u'created_at': u'2015-09-02T10:51:30', |
| | u'step_type': u'Plugin: start cluster', |
| | u'updated_at': u'2015-09-02T10:51:43', |
| | u'cluster_id': u'9b094131-a858-4ddb- |
| | 81a8-b71597417cad', u'step_name': u'Start the |
| | following process(es): ResourceManager', |
| | u'total': 1, u'id': u'dcd433e3-017a- |
| | 430a-8217-94cae4b813c2'}] |
| use_autoconfig | True |
| anti_affinity | [] |
| node_groups | [{u'volume_local_to_instance': False, |
| | u'availability_zone': None, u'updated_at': |
| | u'2015-09-02T10:41:06', u'instances': |
| | [{u'instance_id': u'949da8aa-7c9e-48b3-882e- |
| | 0c7a0049100e', u'created_at': |
| | u'2015-09-02T10:41:06', u'updated_at': |
| | u'2015-09-02T10:41:13', u'instance_name': |
| | u'cluster-3-master-001', u'management_ip': |
| | u'192.168.1.134', u'internal_ip': |
| | u'172.24.17.2', u'id': u'e27503e8-a118-4c3e- |
| | a7d7-ee64fcd4568a'}], |
| | u'node_group_template_id': u'9d3b5b2c- |
| | d5d5-4d16-8a93-a568d29c6569', |
| | u'volumes_per_node': 0, u'id': u'6a53f95a-c2aa- |
| | 48d7-b43a-62d149c656af', u'security_groups': |
| | [6], u'shares': None, u'node_configs': |
| | {u'MapReduce': {u'mapreduce.map.memory.mb': |
| | 256, u'mapreduce.reduce.memory.mb': 512, |
| | u'yarn.app.mapreduce.am.command-opts': |
| | u'-Xmx204m', u'mapreduce.reduce.java.opts': |
| | u'-Xmx409m', |
| | u'yarn.app.mapreduce.am.resource.mb': 256, |
| | u'mapreduce.map.java.opts': u'-Xmx204m', |
| | u'mapreduce.task.io.sort.mb': 102}, u'YARN': |
| | {u'yarn.scheduler.minimum-allocation-mb': 256, |
| | u'yarn.scheduler.maximum-allocation-mb': 2048, |
| | u'yarn.nodemanager.vmem-check-enabled': |
| | u'false', u'yarn.nodemanager.resource.memory- |
| | mb': 2048}}, u'auto_security_group': True, |
| | u'volumes_availability_zone': None, |
| | u'volume_mount_prefix': u'/volumes/disk', |
| | u'floating_ip_pool': u'public', u'image_id': |
| | None, u'volumes_size': 0, u'is_proxy_gateway': |
| | False, u'count': 1, u'name': u'master', |
| | u'created_at': u'2015-09-02T10:41:02', |
| | u'volume_type': None, u'node_processes': |
| | [u'namenode', u'resourcemanager', |
| | u'hiveserver'], u'flavor_id': u'2', |
| | u'use_autoconfig': True}, |
| | {u'volume_local_to_instance': False, |
| | u'availability_zone': None, u'updated_at': |
| | u'2015-09-02T10:41:07', u'instances': |
| | [{u'instance_id': u'47f97841-4a17-4e18-a8eb- |
| | b4ff7dd4c3d8', u'created_at': |
| | u'2015-09-02T10:41:06', u'updated_at': |
| | u'2015-09-02T10:41:13', u'instance_name': |
| | u'cluster-3-worker-001', u'management_ip': |
| | u'192.168.1.135', u'internal_ip': |
| | u'172.24.17.3', u'id': u'c4a02678-113b-432e- |
| | 8f91-927b8e7cfe83'}, {u'instance_id': |
| | u'a02aea39-cc1f-4a1f-8232-2470ab6e8478', |
| | u'created_at': u'2015-09-02T10:41:07', |
| | u'updated_at': u'2015-09-02T10:41:13', |
| | u'instance_name': u'cluster-3-worker-002', |
| | u'management_ip': u'192.168.1.130', |
| | u'internal_ip': u'172.24.17.4', u'id': u |
| | 'b7b2d6db-cd50-484b-8036-09820d2623f2'}], |
| | u'node_group_template_id': u'1aa4a397-cb1e- |
| | 4f38-be18-7f65fa0cc2eb', u'volumes_per_node': |
| | 0, u'id': u'b666103f-a44b-4cf8-b3ae- |
| | 7d2623c6cd18', u'security_groups': [7], |
| | u'shares': None, u'node_configs': |
| | {u'MapReduce': {u'mapreduce.map.memory.mb': |
| | 256, u'mapreduce.reduce.memory.mb': 512, |
| | u'yarn.app.mapreduce.am.command-opts': |
| | u'-Xmx204m', u'mapreduce.reduce.java.opts': |
| | u'-Xmx409m', |
| | u'yarn.app.mapreduce.am.resource.mb': 256, |
| | u'mapreduce.map.java.opts': u'-Xmx204m', |
| | u'mapreduce.task.io.sort.mb': 102}, u'YARN': |
| | {u'yarn.scheduler.minimum-allocation-mb': 256, |
| | u'yarn.scheduler.maximum-allocation-mb': 2048, |
| | u'yarn.nodemanager.vmem-check-enabled': |
| | u'false', u'yarn.nodemanager.resource.memory- |
| | mb': 2048}}, u'auto_security_group': True, |
| | u'volumes_availability_zone': None, |
| | u'volume_mount_prefix': u'/volumes/disk', |
| | u'floating_ip_pool': u'public', u'image_id': |
| | None, u'volumes_size': 0, u'is_proxy_gateway': |
| | False, u'count': 2, u'name': u'worker', |
| | u'created_at': u'2015-09-02T10:41:02', |
| | u'volume_type': None, u'node_processes': |
| | [u'nodemanager', u'datanode'], u'flavor_id': |
| | u'2', u'use_autoconfig': True}] |
| is_public | False |
| management_public_key | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDiFXlWNVD |
| | 6gJT74wherHWtgchqpvgi2aJ4fPWXP+WgB4GEKpfD7a/dWu |
| | Qg9eDBQIrWvVsKgG1i9YgRTHOQ7DdwoSKUAcpEewgw927ER |
| | wdJ3IV7EDu0xENUgrUgp+CwPdk94SXPg1G4oHOCbOvJYcW6 |
| | /b8Ci86vH9A7Uyu2T7tbVS4ciMKfwI0Z47lzcp2qDV6W8M7 |
| | neghC1mNT4k29ghgcYOzY4SxQjxp1a5Iu6RtnJ2fvHbLeMS |
| | 0hgeobSZ8heQzLImrp2dbyZy74goOcwKtk9dDPV853aZrjL |
| | yOsc78EgW6n2Gugu7Ks12v9QEDr4H3yTt3DNTrB5Y8tt468 |
| | k2n1 Generated-by-Sahara |
| status_description | |
| hadoop_version | <plugin_version> |
| id | 9b094131-a858-4ddb-81a8-b71597417cad |
| trust_id | None |
| info | {u'HDFS': {u'NameNode': |
| | u'hdfs://cluster-3-master-001:9000', u'Web UI': |
| | u'http://192.168.1.134:50070'}, u'YARN': {u'Web |
| | UI': u'http://192.168.1.134:8088', |
| | u'ResourceManager': |
| | u'http://192.168.1.134:8032'}} |
| cluster_template_id | 74add4df-07c2-4053-931f-d5844712727f |
| name | my-cluster-1 |
| cluster_configs | {u'HDFS': {u'dfs.replication': 2}} |
| created_at | 2015-09-02T10:41:02 |
| default_image_id | c119f99c-67f2-4404-9cff-f30e4b185036 |
| shares | None |
| is_protected | False |
| tenant_id | c82e4bce56ce4cf9b90bd15dfdef699d |
+----------------------------+-------------------------------------------------+
Verify the cluster launched successfully by using the ``sahara`` command
@@ -365,41 +573,41 @@ line tool as follows:

.. sourcecode:: console

$ sahara cluster-list
+--------------+--------------------------------------+--------+------------+
| name         | id                                   | status | node_count |
+--------------+--------------------------------------+--------+------------+
| my-cluster-1 | 9b094131-a858-4ddb-81a8-b71597417cad | Active | 3          |
+--------------+--------------------------------------+--------+------------+

The cluster creation operation may take several minutes to complete. During
this time the "status" returned from the previous command may show states
other than "Active".
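Instead of re-running ``sahara cluster-list`` by hand, the wait can be
automated. A minimal polling sketch, assuming a ``get_status`` callable (for
example, one wrapping ``sahara cluster-show`` or python-saharaclient); it is
injected as a parameter so the loop itself can be exercised without a cloud:

```python
import time


def wait_for_active(get_status, timeout=3600, interval=30,
                    failed_states=("Error",)):
    """Poll get_status() until it returns 'Active', a failed state, or timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "Active":
            return True
        if status in failed_states:
            raise RuntimeError("cluster entered state %s" % status)
        time.sleep(interval)
    raise TimeoutError("cluster did not become Active in time")


# Example with a fake status source standing in for the real query:
states = iter(["Validating", "InfraUpdating", "Active"])
print(wait_for_active(lambda: next(states), interval=0))  # prints True
```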
8. Run a MapReduce job to check Hadoop installation
---------------------------------------------------

Check that your Hadoop installation is working properly by running an
example job on the cluster manually.

* Log in to the NameNode (usually the master node) via ssh, using the keypair
  created above:

.. sourcecode:: console

$ ssh -i my_stack.pem ubuntu@<namenode_ip>

* Switch to the hadoop user:

.. sourcecode:: console

$ sudo su hadoop

* Go to the shared hadoop directory and run the simplest MapReduce example:

.. sourcecode:: console

$ cd /opt/hadoop-<plugin_version>/share/hadoop/mapreduce
$ /opt/hadoop-<plugin_version>/bin/hadoop jar hadoop-mapreduce-examples-<plugin_version>.jar pi 10 100
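What the ``pi 10 100`` example computes, in miniature: a Monte Carlo estimate
of pi from points sampled in the unit square. (The Hadoop example actually
uses a quasi-random Halton sequence split across mappers; plain pseudo-random
sampling is used in this sketch.)

```python
import random


def estimate_pi(samples, seed=42):
    # Count samples falling inside the quarter unit circle, then scale by 4.
    rng = random.Random(seed)
    inside = sum(1 for _ in range(samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / samples


print(estimate_pi(100000))
```

The estimate converges slowly (error shrinks roughly as 1/sqrt(samples)),
which is why the Hadoop example lets you scale up maps and samples.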
Congratulations! Your Hadoop cluster is ready to use, running on your
OpenStack cloud.