Remove IDH plugin from sahara

Partially implements blueprint: remove-idh-plugin

Change-Id: I74b6fac1825864556417ff511c82b694ae83292f
Sergey Reshetnyak 2014-04-11 16:59:04 +04:00
parent 93b79027ac
commit f10b259214
58 changed files with 27 additions and 16015 deletions

View File

@ -9,10 +9,6 @@ include sahara/db/migration/alembic_migrations/versions/README
recursive-include sahara/locale *
include sahara/plugins/intel/v2_5_1/resources/*.xml
include sahara/plugins/intel/v2_5_1/resources/*.xsd
include sahara/plugins/intel/v3_0_2/resources/*.xml
include sahara/plugins/intel/v3_0_2/resources/*.xsd
include sahara/plugins/vanilla/v1_2_1/resources/*.xml
include sahara/plugins/vanilla/v1_2_1/resources/*.sh
include sahara/plugins/vanilla/v1_2_1/resources/*.sql

View File

@ -46,7 +46,6 @@ User guide
userdoc/plugins
userdoc/vanilla_plugin
userdoc/hdp_plugin
userdoc/idh_plugin
**Elastic Data Processing**

View File

@ -186,8 +186,6 @@ The following features are supported in the new Heat engine:
+-----------------------------------------+-------------------------+-----------------------------------------+
| HDP plugin provisioning | Implemented | |
+-----------------------------------------+-------------------------+-----------------------------------------+
| IDH plugin provisioning | Implemented | |
+-----------------------------------------+-------------------------+-----------------------------------------+
| Cluster scaling | Implemented | |
+-----------------------------------------+-------------------------+-----------------------------------------+
| Cluster rollback | Implemented | |
@ -211,20 +209,20 @@ Plugin Capabilities
-------------------
The table below provides a plugin capability matrix:
+--------------------------+---------+--------------+-----+
| | Plugin |
| +---------+--------------+-----+
| Feature | Vanilla | HDP | IDH |
+==========================+=========+==============+=====+
| Nova and Neutron network | x | x | x |
+--------------------------+---------+--------------+-----+
| Cluster Scaling | x | Scale Up | x |
+--------------------------+---------+--------------+-----+
| Swift Integration | x | x | x |
+--------------------------+---------+--------------+-----+
| Cinder Support | x | x | x |
+--------------------------+---------+--------------+-----+
| Data Locality | x | x | N/A |
+--------------------------+---------+--------------+-----+
| EDP | x | x | x |
+--------------------------+---------+--------------+-----+
+--------------------------+---------+--------------+
| | Plugin |
| +---------+--------------+
| Feature | Vanilla | HDP |
+==========================+=========+==============+
| Nova and Neutron network | x | x |
+--------------------------+---------+--------------+
| Cluster Scaling | x | Scale Up |
+--------------------------+---------+--------------+
| Swift Integration | x | x |
+--------------------------+---------+--------------+
| Cinder Support | x | x |
+--------------------------+---------+--------------+
| Data Locality | x | x |
+--------------------------+---------+--------------+
| EDP | x | x |
+--------------------------+---------+--------------+

View File

@ -1,55 +0,0 @@
Intel Distribution for Apache Hadoop Plugin
===========================================
The Intel Distribution for Apache Hadoop (IDH) Sahara plugin provides a way
to provision IDH clusters on OpenStack using templates in a single click and
in an easily repeatable fashion. The Sahara controller serves as the glue
between Hadoop and OpenStack. The IDH plugin mediates between the Sahara
controller and Intel Manager in order to deploy and configure Hadoop on
OpenStack. Intel Manager is used as the orchestrator for deploying the IDH
stack on OpenStack.
For cluster provisioning, images supporting cloud-init should be used. The only
supported operating system at the moment is CentOS 6.4. The image can be found here:
* http://sahara-files.mirantis.com/CentOS-6.4-cloud-init.qcow2
The IDH plugin requires an image to be tagged in the Sahara Image Registry with
two tags: 'idh' and '<IDH version>' (e.g. '2.5.1').
You should also specify "cloud-user" as the default username to be used with the
image.
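For illustration only, a minimal sketch of tagging such an image through python-saharaclient is shown below. The client constructor arguments and the update_image/update_tags method names are assumptions about the saharaclient API of that era rather than part of this change, and the image UUID is a placeholder.

# Sketch only: assumes python-saharaclient exposes images.update_image()
# and images.update_tags() with roughly these signatures.
from saharaclient.api import client as sahara_client

sahara = sahara_client.Client(username='admin',
                              api_key='secret',
                              project_name='demo',
                              auth_url='http://127.0.0.1:5000/v2.0/')

image_id = '<glance-image-uuid>'  # placeholder Glance image UUID

# Set the default username the IDH plugin should use to log in.
sahara.images.update_image(image_id, 'cloud-user', 'CentOS 6.4 with cloud-init')

# Tag the image with 'idh' and the IDH version.
sahara.images.update_tags(image_id, ['idh', '2.5.1'])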
Limitations
-----------
The IDH plugin currently has the following limitations:
* The IDH plugin requires the requests Python library, version 1.2.1 or later,
which is needed for connection retries to the IDH manager (a minimal
illustration of such retries appears after this list).
* The IDH plugin downloads the Intel Manager package from a URL provided in the
cluster configuration. A local HTTP mirror should be used when the
VMs do not have access to the Internet or have port limitations.
* The IDH plugin adds the Intel rpm repository to the yum configuration. The
repository URL can be chosen during Sahara cluster configuration. A local
mirror should be used when the VMs have no access to the Internet
or have port limitations. Refer to the IDH documentation for instructions on
how to create a local mirror.
* Hadoop cluster scaling is supported only for datanode and tasktracker
(nodemanager for IDH 3.x) processes.
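As a minimal, illustrative sketch (not the plugin's actual code path, which obtains its HTTP session through Sahara's remote helper), connection retries with the requests library can be configured as shown below; the host name and credentials are placeholders, while the port and URL layout follow the plugin's REST client.

# Sketch: retry connections to the IDH manager using requests.
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
# HTTPAdapter(max_retries=...) is why requests >= 1.2.1 is required.
session.mount('https://', HTTPAdapter(max_retries=10))

# Placeholder manager host; port and path layout mirror the plugin's REST client.
base_url = 'https://manager.example.com:9443/restapi/intelcloud/api/v1'
resp = session.get(base_url + '/cluster', verify=False, auth=('admin', 'admin'))
resp.raise_for_status()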
Cluster Validation
------------------
When a user creates or scales a Hadoop cluster using the IDH plugin, the
cluster topology requested by the user is verified for consistency.
The IDH plugin currently imposes the following limitations on cluster topology:
* The cluster must contain:
* exactly one manager
* exactly one namenode
* at most one jobtracker for IDH 2.x or resourcemanager for IDH 3.x
* at most one oozie
* A cluster cannot be created if it contains worker processes without the
corresponding master processes, e.g. it cannot contain a tasktracker if there
is no jobtracker.
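These checks mirror the validate() method of the version handler removed by this change; a condensed, self-contained sketch of the same rules, with a hypothetical _count() helper standing in for the node-group lookups, could look like this:

# Condensed sketch of the IDH topology rules; the real code raises
# InvalidComponentCountException / RequiredServiceMissingException from
# sahara.plugins.general.exceptions.
def _count(cluster, process):
    # Sum instance counts over node groups that run the given process.
    return sum(ng.count for ng in cluster.node_groups
               if process in ng.node_processes)

def validate_idh_topology(cluster):
    if _count(cluster, 'manager') != 1:
        raise RuntimeError("cluster must contain exactly one manager")
    if _count(cluster, 'namenode') != 1:
        raise RuntimeError("cluster must contain exactly one namenode")
    jt_count = _count(cluster, 'jobtracker')  # resourcemanager for IDH 3.x
    if jt_count > 1:
        raise RuntimeError("at most one jobtracker is allowed")
    if _count(cluster, 'oozie') > 1:
        raise RuntimeError("at most one oozie is allowed")
    # Worker processes require the corresponding master process to be present.
    if jt_count == 0 and _count(cluster, 'tasktracker') > 0:
        raise RuntimeError("tasktracker requires a jobtracker")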

View File

@ -227,7 +227,7 @@
# List of plugins to be loaded. Sahara preserves the order of
# the list when returning it. (list value)
#plugins=vanilla,hdp,idh
#plugins=vanilla,hdp
#

View File

@ -104,7 +104,7 @@
# List of plugins to be loaded. Sahara preserves the order of
# the list when returning it. (list value)
#plugins=vanilla,hdp,idh
#plugins=vanilla,hdp
[database]
#connection=sqlite:////sahara/openstack/common/db/$sqlite_db

View File

@ -27,7 +27,7 @@ LOG = logging.getLogger(__name__)
opts = [
cfg.ListOpt('plugins',
default=['vanilla', 'hdp', 'idh'],
default=['vanilla', 'hdp'],
help='List of plugins to be loaded. Sahara preserves the '
'order of the list when returning it.'),
]
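After this change the default plugin list no longer includes idh, so deployments that still name it explicitly in their configuration should drop it. An illustrative sahara.conf override would be:

[DEFAULT]
plugins=vanilla,hdp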

View File

@ -1,59 +0,0 @@
# Copyright (c) 2014 Intel Corporation
# Copyright (c) 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import six
@six.add_metaclass(abc.ABCMeta)
class AbstractVersionHandler():
@abc.abstractmethod
def get_node_processes(self):
return
@abc.abstractmethod
def get_plugin_configs(self):
return
@abc.abstractmethod
def configure_cluster(self, cluster):
return
@abc.abstractmethod
def start_cluster(self, cluster):
return
@abc.abstractmethod
def validate(self, cluster):
return
@abc.abstractmethod
def scale_cluster(self, cluster, instances):
return
@abc.abstractmethod
def decommission_nodes(self, cluster, instances):
return
@abc.abstractmethod
def validate_scaling(self, cluster, existing, additional):
return
@abc.abstractmethod
def get_resource_manager_uri(self, cluster):
return

View File

@ -1,44 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins.intel.client import context as c
from sahara.plugins.intel.client import session
class Cluster(c.IntelContext):
def create(self):
url = '/cluster'
data = {
'name': self.cluster_name,
'dnsresolution': True,
'acceptlicense': True
}
return self.rest.post(url, data)
def get(self):
url = '/cluster/%s' % self.cluster_name
return self.rest.get(url)
def install_software(self, nodes):
_nodes = [{'hostname': host} for host in nodes]
url = '/cluster/%s/nodes/commands/installsoftware' % self.cluster_name
session_id = self.rest.post(url, _nodes)['sessionID']
return session.wait(self, session_id)
def upload_authzkeyfile(self, authzkeyfile):
url = '/cluster/%s/upload/authzkey' % self.cluster_name
return self.rest.post(url,
files={'file': authzkeyfile})['upload result']

View File

@ -1,21 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
class IntelContext(object):
def __init__(self, ctx):
self._ctx = ctx._ctx
self.cluster_name = ctx.cluster_name
self.rest = ctx.rest

View File

@ -1,65 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins.intel.client import context as c
from sahara.plugins.intel.client import session
from sahara.plugins.intel import exceptions as iex
class Nodes(c.IntelContext):
def add(self, nodes, rack, username, path_to_key, keypass=''):
hosts = {
'method': 'useauthzkeyfile',
'nodeinfo': map(lambda host: {
'hostname': host,
'username': username,
'passphrase': keypass,
'authzkeyfile': path_to_key,
'rackName': rack
}, nodes)
}
url = '/cluster/%s/nodes' % self.cluster_name
resp = self.rest.post(url, hosts)['items']
for node_info in resp:
if node_info['info'] != 'Connected':
raise iex.IntelPluginException(
'Error adding nodes: %s' % node_info['iporhostname'])
def get(self):
url = '/cluster/%s/nodes' % self.cluster_name
return self.rest.get(url)
def get_status(self, node):
url = '/cluster/%s/nodes/%s' % (self.cluster_name, node)
return self.rest.get(url)['status']
def delete(self, node):
url = '/cluster/%s/nodes/%s' % (self.cluster_name, node)
return self.rest.delete(url)
def config(self, force=False):
url = ('/cluster/%s/nodes/commands/confignodes/%s'
% (self.cluster_name, 'force' if force else 'noforce'))
session_id = self.rest.post(url)['sessionID']
return session.wait(self, session_id)
def stop(self, nodes):
url = '/cluster/%s/nodes/commands/stopnodes' % self.cluster_name
data = [{'hostname': host} for host in nodes]
return self.rest.post(url, data)

View File

@ -1,79 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara import exceptions
from sahara.plugins.intel.client import context as c
class BaseParams(c.IntelContext):
def __init__(self, ctx, service):
super(BaseParams, self).__init__(ctx)
self.service = service
def add(self, item, value, desc=''):
data = {
'editdesc': desc,
'items': [
{
'type': self.service,
'item': item,
'value': value,
'desc': desc
}
]
}
url = '/cluster/%s/configuration' % self.cluster_name
return self.rest.post(url, data)
def update(self, item, value, desc='', nodes=None):
data = {
'editdesc': desc,
'items': [
{
'type': self.service,
'item': item,
'value': value
}
]
}
if nodes:
data = {
'editdesc': desc,
'items': map(lambda node: {
'type': self.service,
'item': item,
'value': value,
'hostname': node
}, nodes)
}
url = '/cluster/%s/configuration' % self.cluster_name
return self.rest.put(url, data)
def get(self, hosts, item):
raise exceptions.NotImplementedException("BaseParams.get")
class Params(c.IntelContext):
def __init__(self, ctx, is_yarn_supported):
super(Params, self).__init__(ctx)
self.hadoop = BaseParams(self, 'hadoop')
self.hdfs = BaseParams(self, 'hdfs')
if is_yarn_supported:
self.yarn = BaseParams(self, 'yarn')
else:
self.mapred = BaseParams(self, 'mapred')
self.oozie = BaseParams(self, 'oozie')

View File

@ -1,83 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
from requests import auth
from sahara.openstack.common import log as logging
from sahara.plugins.intel import exceptions as iex
LOG = logging.getLogger(__name__)
def _check_response(fct):
def wrapper(*args, **kwargs):
resp = fct(*args, **kwargs)
if not resp.ok:
raise iex.IntelPluginException(
"Request to manager returned with code '%s', reason '%s' and "
"response '%s'" % (resp.status_code, resp.reason, resp.text))
else:
return json.loads(resp.text)
return wrapper
class RESTClient():
def __init__(self, manager, auth_username, auth_password, version):
#TODO(alazarev) make port configurable (bug #1262895)
port = '9443'
self.session = manager.remote().get_http_client(port, max_retries=10)
self.base_url = ('https://%s:%s/restapi/intelcloud/api/%s'
% (manager.management_ip, port, version))
LOG.debug("Connecting to manager with URL of %s", self.base_url)
self.auth = auth.HTTPBasicAuth(auth_username, auth_password)
@_check_response
def get(self, url):
url = self.base_url + url
LOG.debug("Sending GET to URL of %s", url)
return self.session.get(url, verify=False, auth=self.auth)
@_check_response
def post(self, url, data=None, files=None):
url = self.base_url + url
LOG.debug("Sending POST to URL '%s' (%s files): %s", url,
len(files) if files else 0,
data if data else 'no data')
return self.session.post(url, data=json.dumps(data) if data else None,
verify=False, auth=self.auth, files=files)
@_check_response
def delete(self, url):
url = self.base_url + url
LOG.debug("Sending DELETE to URL of %s", url)
return self.session.delete(url, verify=False, auth=self.auth)
@_check_response
def put(self, url, data=None):
url = self.base_url + url
if data:
LOG.debug("Sending PUT to URL of %s: %s", url, data)
r = self.session.put(url, data=json.dumps(data), verify=False,
auth=self.auth)
else:
LOG.debug("Sending PUT to URL of %s with no data", url)
r = self.session.put(url, verify=False, auth=self.auth)
return r

View File

@ -1,146 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo.config import cfg
from sahara import context
from sahara.openstack.common import log as logging
from sahara.plugins.intel.client import context as c
from sahara.plugins.intel.client import session
from sahara.plugins.intel import exceptions as iex
LOG = logging.getLogger(__name__)
class BaseService(c.IntelContext):
def __init__(self, ctx, service_name):
super(BaseService, self).__init__(ctx)
self.service = service_name
def start(self):
url = ('/cluster/%s/services/%s/commands/start'
% (self.cluster_name, self.service))
self.rest.post(url)
#TODO(alazarev) make timeout configurable (bug #1262897)
timeout = 600
cur_time = 0
while cur_time < timeout:
context.sleep(2)
if self.status() == 'running':
break
else:
cur_time += 2
else:
raise iex.IntelPluginException(
"Service '%s' has failed to start in %s seconds"
% (self.service, timeout))
def stop(self):
url = ('/cluster/%s/services/%s/commands/stop'
% (self.cluster_name, self.service))
return self.rest.post(url)
def status(self):
url = '/cluster/%s/services' % self.cluster_name
statuses = self.rest.get(url)['items']
for st in statuses:
if st['serviceName'] == self.service:
return st['status']
raise iex.IntelPluginException(
"Service '%s' is not installed on cluster '%s'"
% (self.service, self.cluster_name))
def get_nodes(self):
url = '/cluster/%s/services/%s' % (self.cluster_name, self.service)
return self.rest.get(url)
def add_nodes(self, role, nodes):
url = ('/cluster/%s/services/%s/roles'
% (self.cluster_name, self.service))
data = map(lambda host: {
'rolename': role,
'hostname': host
}, nodes)
return self.rest.post(url, data)
class HDFSService(BaseService):
def format(self, force=False):
url = ('/cluster/%s/services/hdfs/commands/hdfsformat/%s'
% (self.cluster_name, 'force' if force else 'noforce'))
session_id = self.rest.post(url)['sessionID']
return session.wait(self, session_id)
def decommission_nodes(self, nodes, force=False):
url = ('/cluster/%s/nodes/commands/decommissionnodes/%s'
% (self.cluster_name, 'force' if force else 'noforce'))
data = map(lambda host: {
'hostname': host
}, nodes)
return self.rest.post(url, data)
def get_datanodes_status(self):
url = '/cluster/%s/nodes/commands/datanodes/status' % self.cluster_name
return self.rest.get(url)['items']
def get_datanode_status(self, datanode):
stats = self.get_datanodes_status()
for stat in stats:
hostname = stat['hostname']
fqdn = hostname + '.' + cfg.CONF.node_domain
if hostname == datanode or fqdn == datanode:
return stat['status'].strip()
raise iex.IntelPluginException(
"Datanode service is is not installed on node '%s'" % datanode)
class Services(c.IntelContext):
def __init__(self, ctx, is_yarn_supported):
super(Services, self).__init__(ctx)
self.hdfs = HDFSService(self, 'hdfs')
if is_yarn_supported:
self.yarn = BaseService(self, 'yarn')
else:
self.mapred = BaseService(self, 'mapred')
self.hive = BaseService(self, 'hive')
self.oozie = BaseService(self, 'oozie')
def add(self, services):
_services = map(lambda service: {
'serviceName': service,
'type': service
}, services)
url = '/cluster/%s/services' % self.cluster_name
return self.rest.post(url, _services)
def get_services(self):
url = '/cluster/%s/services' % self.cluster_name
return self.rest.get(url)
def delete_service(self, service):
url = '/cluster/%s/services/%s' % (self.cluster_name, service)
return self.rest.delete(url)

View File

@ -1,50 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara import context
from sahara.openstack.common import log as logging
from sahara.plugins.intel import exceptions as iex
LOG = logging.getLogger(__name__)
def get(ctx, session_id):
url = '/cluster/%s/session/%s' % (ctx.cluster_name, session_id)
return ctx.rest.get(url)
def wait(ctx, session_id):
#TODO(alazarev) add check on Hadoop cluster state (exit on delete)
#TODO(alazarev) make configurable (bug #1262897)
timeout = 4*60*60 # 4 hours
cur_time = 0
while cur_time < timeout:
info_items = get(ctx, session_id)['items']
for item in info_items:
progress = item['nodeprogress']
if progress['info'].strip() == '_ALLFINISH':
return
else:
context.sleep(10)
cur_time += 10
debug_msg = 'Hostname: %s\nInfo: %s'
debug_msg = debug_msg % (progress['hostname'], progress['info'])
LOG.debug(debug_msg)
else:
raise iex.IntelPluginException(
"Cluster '%s' has failed to start in %s minutes"
% (ctx.cluster_name, timeout / 60))

View File

@ -1,22 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sahara.exceptions as e
class IntelPluginException(e.SaharaException):
def __init__(self, message):
self.message = message
self.code = "INTEL_PLUGIN_EXCEPTION"

View File

@ -1,79 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins.general import utils as u
from sahara.plugins.intel import versionfactory as vhf
from sahara.plugins import provisioning as p
class IDHProvider(p.ProvisioningPluginBase):
def __init__(self):
self.version_factory = vhf.VersionFactory.get_instance()
def get_description(self):
return \
'The IDH OpenStack plugin works with project ' \
'Sahara to automate the deployment of the Intel Distribution ' \
'of Apache Hadoop on OpenStack based ' \
'public & private clouds'
def _get_version_handler(self, hadoop_version):
return self.version_factory.get_version_handler(hadoop_version)
def get_hdfs_user(self):
return 'hadoop'
def get_node_processes(self, hadoop_version):
return self._get_version_handler(hadoop_version).get_node_processes()
def get_versions(self):
return self.version_factory.get_versions()
def get_title(self):
return "Intel(R) Distribution for Apache Hadoop* Software"
def get_configs(self, hadoop_version):
return self._get_version_handler(hadoop_version).get_plugin_configs()
def configure_cluster(self, cluster):
self._get_version_handler(
cluster.hadoop_version).configure_cluster(cluster)
def start_cluster(self, cluster):
self._get_version_handler(
cluster.hadoop_version).start_cluster(cluster)
def validate(self, cluster):
self._get_version_handler(
cluster.hadoop_version).validate(cluster)
def scale_cluster(self, cluster, instances):
self._get_version_handler(
cluster.hadoop_version).scale_cluster(cluster, instances)
def decommission_nodes(self, cluster, instances):
self._get_version_handler(
cluster.hadoop_version).decommission_nodes(cluster, instances)
def validate_scaling(self, cluster, existing, additional):
self._get_version_handler(cluster.hadoop_version).validate_scaling(
cluster, existing, additional)
def get_oozie_server(self, cluster):
return u.get_instance(cluster, "oozie")
def get_resource_manager_uri(self, cluster):
return self._get_version_handler(
cluster.hadoop_version).get_resource_manager_uri(cluster)

View File

@ -1,33 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins.intel.client import cluster
from sahara.plugins.intel.client import nodes
from sahara.plugins.intel.client import params
from sahara.plugins.intel.client import rest as r
from sahara.plugins.intel.client import services
class IntelClient():
def __init__(self, manager, cluster_name):
#TODO(alazarev) make credentials configurable (bug #1262881)
self.rest = r.RESTClient(manager, 'admin', 'admin', 'v1')
self.cluster_name = cluster_name
self._ctx = self
self.cluster = cluster.Cluster(self)
self.nodes = nodes.Nodes(self)
self.params = params.Params(self, False)
self.services = services.Services(self, False)

View File

@ -1,155 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import provisioning as p
from sahara.utils import xmlutils as x
CORE_DEFAULT = x.load_hadoop_xml_defaults_with_type_and_locale(
'plugins/intel/v2_5_1/resources/hadoop-default.xml')
HDFS_DEFAULT = x.load_hadoop_xml_defaults_with_type_and_locale(
'plugins/intel/v2_5_1/resources/hdfs-default.xml')
MAPRED_DEFAULT = x.load_hadoop_xml_defaults_with_type_and_locale(
'plugins/intel/v2_5_1/resources/mapred-default.xml')
OOZIE_DEFAULT = x.load_hadoop_xml_defaults(
'plugins/intel/v2_5_1/resources/oozie-default.xml')
XML_CONFS = {
"Hadoop": [CORE_DEFAULT],
"HDFS": [HDFS_DEFAULT],
"MapReduce": [MAPRED_DEFAULT],
"JobFlow": [OOZIE_DEFAULT]
}
IDH_TARBALL_URL = p.Config('IDH tarball URL', 'general', 'cluster', priority=1,
default_value='http://repo2.intelhadoop.com/'
'setup/setup-intelhadoop-'
'2.5.1-en-evaluation.RHEL.tar.gz')
OS_REPO_URL = p.Config('OS repository URL', 'general', 'cluster', priority=1,
is_optional=True,
default_value='http://mirror.centos.org/'
'centos-6/6/os/x86_64')
IDH_REPO_URL = p.Config('IDH repository URL', 'general', 'cluster',
priority=1, is_optional=True,
default_value='http://repo2.intelhadoop.com'
'/evaluation/en/RHEL/2.5.1/rpm')
OOZIE_EXT22_URL = p.Config(
'Ext 2.2 URL', 'general', 'cluster',
description='Ext 2.2 library is required for Oozie Web Console. '
'The file will be downloaded from VM with oozie.',
priority=1, is_optional=True,
default_value='http://extjs.com/deploy/ext-2.2.zip')
ENABLE_SWIFT = p.Config('Enable Swift', 'general', 'cluster',
config_type="bool", priority=1,
default_value=True, is_optional=True)
HADOOP_SWIFTFS_JAR_URL = p.Config(
'Hadoop SwiftFS jar URL', 'general', 'cluster',
description='Library that adds swift support to hadoop. '
'The file will be downloaded from VM with oozie.',
priority=1, is_optional=True,
default_value='http://sahara-files.mirantis.com/'
'hadoop-swift/hadoop-swift-latest.jar')
HIDDEN_CONFS = ['fs.default.name', 'dfs.name.dir', 'dfs.data.dir',
'mapred.job.tracker', 'mapred.system.dir', 'mapred.local.dir']
CLUSTER_WIDE_CONFS = ['dfs.block.size', 'dfs.permissions', 'dfs.replication',
'dfs.replication.min', 'dfs.replication.max',
'io.file.buffer.size', 'mapreduce.job.counters.max',
'mapred.output.compress', 'io.compression.codecs',
'mapred.output.compression.codec',
'mapred.output.compression.type',
'mapred.compress.map.output',
'mapred.map.output.compression.codec']
PRIORITY_1_CONFS = ['dfs.datanode.du.reserved',
'dfs.datanode.failed.volumes.tolerated',
'dfs.datanode.max.xcievers', 'dfs.datanode.handler.count',
'dfs.namenode.handler.count', 'mapred.child.java.opts',
'mapred.jobtracker.maxtasks.per.job',
'mapred.job.tracker.handler.count',
'mapred.map.child.java.opts',
'mapred.reduce.child.java.opts',
'io.sort.mb', 'mapred.tasktracker.map.tasks.maximum',
'mapred.tasktracker.reduce.tasks.maximum']
PRIORITY_1_CONFS += CLUSTER_WIDE_CONFS
CFG_TYPE = {
"Boolean": "bool",
"String": "string",
"Integer": "int",
"Choose": "string",
"Class": "string",
"Directory": "string",
"Float": "string",
"Int_range": "string",
}
def _initialise_configs():
configs = []
for service, config_lists in XML_CONFS.iteritems():
for config_list in config_lists:
for config in config_list:
if config['name'] not in HIDDEN_CONFS:
cfg = p.Config(
config['name'], service, "cluster", is_optional=True,
config_type="string",
default_value=str(config['value']),
description=config['description'])
if config.get('type'):
cfg.config_type = CFG_TYPE[config['type']]
if cfg.config_type == 'bool':
cfg.default_value = cfg.default_value == 'true'
if cfg.config_type == 'int':
if cfg.default_value:
cfg.default_value = int(cfg.default_value)
else:
cfg.config_type = 'string'
if config['name'] in PRIORITY_1_CONFS:
cfg.priority = 1
configs.append(cfg)
configs.append(IDH_TARBALL_URL)
configs.append(IDH_REPO_URL)
configs.append(OS_REPO_URL)
configs.append(OOZIE_EXT22_URL)
configs.append(ENABLE_SWIFT)
return configs
PLUGIN_CONFIGS = _initialise_configs()
def get_plugin_configs():
return PLUGIN_CONFIGS
def get_config_value(cluster_configs, key):
if not cluster_configs or cluster_configs.get(key.name) is None:
return key.default_value
return cluster_configs.get(key.name)

View File

@ -1,446 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import telnetlib
import six
from sahara import conductor
from sahara import context
from sahara.openstack.common import log as logging
from sahara.plugins.general import utils as u
from sahara.plugins.intel import exceptions as iex
from sahara.plugins.intel.v2_5_1 import client as c
from sahara.plugins.intel.v2_5_1 import config_helper as c_helper
from sahara.swift import swift_helper as swift
from sahara.utils import crypto
conductor = conductor.API
LOG = logging.getLogger(__name__)
_INST_CONF_TEMPLATE = """
network_interface=eth0
mode=silent
accept_jdk_license=accept
how_to_setup_os_repo=2
os_repo=%s
os_repo_username=
os_repo_password=
os_repo_proxy=
how_to_setup_idh_repo=1
idh_repo=%s
idh_repo_username=
idh_repo_password=
idh_repo_proxy=
firewall_selinux_setting=1"""
def install_manager(cluster):
LOG.info("Starting Install Manager Process")
mng_instance = u.get_instance(cluster, 'manager')
idh_tarball_path = c_helper.get_config_value(
cluster.cluster_configs.get('general'), c_helper.IDH_TARBALL_URL)
idh_tarball_filename = idh_tarball_path.rsplit('/', 1)[-1]
idh_dir = idh_tarball_filename[:idh_tarball_filename.find('.tar.gz')]
LOG.info("IDH tgz will be retrieved from: \'%s\'", idh_tarball_path)
idh_repo = c_helper.get_config_value(
cluster.cluster_configs.get('general'), c_helper.IDH_REPO_URL)
os_repo = c_helper.get_config_value(
cluster.cluster_configs.get('general'), c_helper.OS_REPO_URL)
idh_install_cmd = 'sudo ./%s/install.sh --mode=silent 2>&1' % idh_dir
with mng_instance.remote() as r:
LOG.info("Download IDH manager ")
try:
r.execute_command('curl -O %s 2>&1' % idh_tarball_path)
except Exception as e:
raise RuntimeError("Unable to download IDH manager from %s" %
idh_tarball_path, e)
# unpack archive
LOG.info("Unpack manager %s ", idh_tarball_filename)
try:
r.execute_command('tar xzf %s 2>&1' % idh_tarball_filename)
except Exception as e:
raise RuntimeError("Unable to unpack tgz %s",
idh_tarball_filename, e)
# install idh
LOG.debug("Install manager with %s : ", idh_install_cmd)
inst_conf = _INST_CONF_TEMPLATE % (os_repo, idh_repo)
r.write_file_to('%s/ui-installer/conf' % idh_dir, inst_conf)
#TODO(alazarev) make timeout configurable (bug #1262897)
r.execute_command(idh_install_cmd, timeout=3600)
# fix the nginx permissions bug
r.execute_command('sudo chmod o+x /var/lib/nginx/ /var/lib/nginx/tmp '
'/var/lib/nginx/tmp/client_body')
# wait for the IDH manager to start
#TODO(alazarev) make timeout configurable (bug #1262897)
timeout = 600
LOG.debug("Waiting %s seconds for Manager to start : ", timeout)
while timeout:
try:
telnetlib.Telnet(mng_instance.management_ip, 9443)
break
except IOError:
timeout -= 2
context.sleep(2)
else:
message = ("IDH Manager failed to start in %s minutes on node '%s' "
"of cluster '%s'"
% (timeout / 60, mng_instance.management_ip, cluster.name))
LOG.error(message)
raise iex.IntelPluginException(message)
def configure_os(cluster):
instances = u.get_instances(cluster)
configure_os_from_instances(cluster, instances)
def create_hadoop_ssh_keys(cluster):
private_key, public_key = crypto.generate_key_pair()
extra = {
'hadoop_private_ssh_key': private_key,
'hadoop_public_ssh_key': public_key
}
return conductor.cluster_update(context.ctx(), cluster, {'extra': extra})
def configure_os_from_instances(cluster, instances):
for instance in instances:
with instance.remote() as remote:
LOG.debug("Configuring OS settings on %s : ", instance.hostname())
# configure hostname, RedHat/Centos specific
remote.replace_remote_string('/etc/sysconfig/network',
'HOSTNAME=.*',
'HOSTNAME=%s' % instance.fqdn())
# disable selinux and iptables, because Intel distribution requires
# this to be off
remote.execute_command('sudo /usr/sbin/setenforce 0')
remote.replace_remote_string('/etc/selinux/config',
'SELINUX=.*', 'SELINUX=disabled')
# disable iptables
remote.execute_command('sudo /sbin/service iptables stop')
remote.execute_command('sudo /sbin/chkconfig iptables off')
# create 'hadoop' user
remote.write_files_to({
'id_rsa': cluster.extra.get('hadoop_private_ssh_key'),
'authorized_keys': cluster.extra.get('hadoop_public_ssh_key')
})
remote.execute_command(
'sudo useradd hadoop && '
'sudo sh -c \'echo "hadoop ALL=(ALL) NOPASSWD:ALL" '
'>> /etc/sudoers\' && '
'sudo mkdir -p /home/hadoop/.ssh/ && '
'sudo mv id_rsa authorized_keys /home/hadoop/.ssh && '
'sudo chown -R hadoop:hadoop /home/hadoop/.ssh && '
'sudo chmod 600 /home/hadoop/.ssh/{id_rsa,authorized_keys}')
swift_enable = c_helper.get_config_value(
cluster.cluster_configs.get('general'), c_helper.ENABLE_SWIFT)
if swift_enable:
hadoop_swiftfs_jar_url = c_helper.get_config_value(
cluster.cluster_configs.get('general'),
c_helper.HADOOP_SWIFTFS_JAR_URL)
swift_lib_dir = '/usr/lib/hadoop/lib'
swift_lib_path = swift_lib_dir + '/hadoop-swift-latest.jar'
cmd = ('sudo mkdir -p %s && sudo curl \'%s\' -o %s'
% (swift_lib_dir, hadoop_swiftfs_jar_url,
swift_lib_path))
remote.execute_command(cmd)
def _configure_services(client, cluster):
nn_host = u.get_namenode(cluster).fqdn()
snn = u.get_secondarynamenodes(cluster)
snn_host = snn[0].fqdn() if snn else None
jt_host = u.get_jobtracker(cluster).fqdn() if u.get_jobtracker(
cluster) else None
dn_hosts = [dn.fqdn() for dn in u.get_datanodes(cluster)]
tt_hosts = [tt.fqdn() for tt in u.get_tasktrackers(cluster)]
oozie_host = u.get_oozie(cluster).fqdn() if u.get_oozie(
cluster) else None
hive_host = u.get_hiveserver(cluster).fqdn() if u.get_hiveserver(
cluster) else None
services = []
if u.get_namenode(cluster):
services += ['hdfs']
if u.get_jobtracker(cluster):
services += ['mapred']
if oozie_host:
services += ['oozie']
services += ['pig']
if hive_host:
services += ['hive']
LOG.debug("Add services: %s" % ', '.join(services))
client.services.add(services)
LOG.debug("Assign roles to hosts")
client.services.hdfs.add_nodes('PrimaryNameNode', [nn_host])
client.services.hdfs.add_nodes('DataNode', dn_hosts)
if snn:
client.services.hdfs.add_nodes('SecondaryNameNode', [snn_host])
if oozie_host:
client.services.oozie.add_nodes('Oozie', [oozie_host])
if hive_host:
client.services.hive.add_nodes('HiveServer', [hive_host])
if jt_host:
client.services.mapred.add_nodes('JobTracker', [jt_host])
client.services.mapred.add_nodes('TaskTracker', tt_hosts)
def _configure_storage(client, cluster):
datanode_ng = u.get_node_groups(cluster, 'datanode')[0]
storage_paths = datanode_ng.storage_paths()
dn_hosts = [i.fqdn() for i in u.get_datanodes(cluster)]
name_dir_param = ",".join(
[st_path + '/dfs/name' for st_path in storage_paths])
data_dir_param = ",".join(
[st_path + '/dfs/data' for st_path in storage_paths])
client.params.hdfs.update('dfs.name.dir', name_dir_param)
client.params.hdfs.update('dfs.data.dir', data_dir_param, nodes=dn_hosts)
def _configure_swift(client, cluster):
swift_enable = c_helper.get_config_value(
cluster.cluster_configs.get('general'), c_helper.ENABLE_SWIFT)
if swift_enable:
swift_configs = swift.get_swift_configs()
for conf in swift_configs:
client.params.hadoop.add(conf['name'], conf['value'])
def _add_user_params(client, cluster):
for p in six.iteritems(cluster.cluster_configs.get("Hadoop", {})):
client.params.hadoop.update(p[0], p[1])
for p in six.iteritems(cluster.cluster_configs.get("HDFS", {})):
client.params.hdfs.update(p[0], p[1])
for p in six.iteritems(cluster.cluster_configs.get("MapReduce", {})):
client.params.mapred.update(p[0], p[1])
for p in six.iteritems(cluster.cluster_configs.get("JobFlow", {})):
client.params.oozie.update(p[0], p[1])
def install_cluster(cluster):
mng_instance = u.get_instance(cluster, 'manager')
all_hosts = list(set([i.fqdn() for i in u.get_instances(cluster)]))
client = c.IntelClient(mng_instance, cluster.name)
LOG.info("Create cluster")
client.cluster.create()
LOG.info("Add nodes to cluster")
rack = '/Default'
client.nodes.add(all_hosts, rack, 'hadoop',
'/home/hadoop/.ssh/id_rsa')
LOG.info("Install software")
client.cluster.install_software(all_hosts)
LOG.info("Configure services")
_configure_services(client, cluster)
LOG.info("Deploy cluster")
client.nodes.config(force=True)
LOG.info("Provisioning configs")
# cinder and ephemeral drive support
_configure_storage(client, cluster)
# swift support
_configure_swift(client, cluster)
# user configs
_add_user_params(client, cluster)
LOG.info("Format HDFS")
client.services.hdfs.format()
def _setup_oozie(cluster):
with (u.get_oozie(cluster)).remote() as r:
LOG.info("Oozie: add hadoop libraries to java.library.path")
r.execute_command(
"sudo ln -s /usr/lib/hadoop/lib/native/Linux-amd64-64/libhadoop.so"
" /usr/lib64/ && "
"sudo ln -s /usr/lib/hadoop/lib/native/Linux-amd64-64/libsnappy.so"
" /usr/lib64/")
ext22 = c_helper.get_config_value(
cluster.cluster_configs.get('general'), c_helper.OOZIE_EXT22_URL)
if ext22:
LOG.info("Oozie: downloading and installing ext 2.2 from '%s'"
% ext22)
r.execute_command(
"curl -L -o ext-2.2.zip %s && "
"sudo unzip ext-2.2.zip -d /var/lib/oozie && "
"rm ext-2.2.zip" % ext22)
LOG.info("Oozie: installing oozie share lib")
r.execute_command(
"mkdir /tmp/oozielib && "
"tar xzf /usr/lib/oozie/oozie-sharelib.tar.gz -C /tmp/oozielib && "
"rm /tmp/oozielib/share/lib/pig/pig-0.11.1-Intel.jar &&"
"cp /usr/lib/pig/pig-0.11.1-Intel.jar "
"/tmp/oozielib/share/lib/pig/pig-0.11.1-Intel.jar && "
"sudo su - -c '"
"hadoop fs -put /tmp/oozielib/share /user/oozie/share' hadoop && "
"rm -rf /tmp/oozielib")
def start_cluster(cluster):
client = c.IntelClient(u.get_instance(cluster, 'manager'), cluster.name)
LOG.debug("Starting hadoop services")
client.services.hdfs.start()
if u.get_jobtracker(cluster):
client.services.mapred.start()
if u.get_hiveserver(cluster):
client.services.hive.start()
if u.get_oozie(cluster):
LOG.info("Setup oozie")
_setup_oozie(cluster)
client.services.oozie.start()
def scale_cluster(cluster, instances):
scale_ins_hosts = [i.fqdn() for i in instances]
dn_hosts = [dn.fqdn() for dn in u.get_datanodes(cluster)]
tt_hosts = [tt.fqdn() for tt in u.get_tasktrackers(cluster)]
to_scale_dn = []
to_scale_tt = []
for i in scale_ins_hosts:
if i in dn_hosts:
to_scale_dn.append(i)
if i in tt_hosts:
to_scale_tt.append(i)
client = c.IntelClient(u.get_instance(cluster, 'manager'), cluster.name)
rack = '/Default'
client.nodes.add(scale_ins_hosts, rack, 'hadoop',
'/home/hadoop/.ssh/id_rsa')
client.cluster.install_software(scale_ins_hosts)
if to_scale_tt:
client.services.mapred.add_nodes('TaskTracker', to_scale_tt)
if to_scale_dn:
client.services.hdfs.add_nodes('DataNode', to_scale_dn)
client.nodes.config()
if to_scale_dn:
client.services.hdfs.start()
if to_scale_tt:
client.services.mapred.start()
def decommission_nodes(cluster, instances):
dec_hosts = [i.fqdn() for i in instances]
dn_hosts = [dn.fqdn() for dn in u.get_datanodes(cluster)]
tt_hosts = [dn.fqdn() for dn in u.get_tasktrackers(cluster)]
client = c.IntelClient(u.get_instance(cluster, 'manager'), cluster.name)
dec_dn_hosts = []
for dec_host in dec_hosts:
if dec_host in dn_hosts:
dec_dn_hosts.append(dec_host)
if dec_dn_hosts:
client.services.hdfs.decommission_nodes(dec_dn_hosts)
#TODO(alazarev) make timeout configurable (bug #1262897)
timeout = 14400 # 4 hours
cur_time = 0
for host in dec_dn_hosts:
while cur_time < timeout:
if client.services.hdfs.get_datanode_status(
host) == 'Decomissioned':
break
context.sleep(5)
cur_time += 5
else:
LOG.warn("Failed to decomission node '%s' of cluster '%s' "
"in %s minutes" % (host, cluster.name, timeout / 60))
client.nodes.stop(dec_hosts)
# wait stop services
#TODO(alazarev) make timeout configurable (bug #1262897)
timeout = 600 # 10 minutes
cur_time = 0
for instance in instances:
while cur_time < timeout:
stopped = True
if instance.fqdn() in dn_hosts:
code, out = instance.remote().execute_command(
'sudo /sbin/service hadoop-datanode status',
raise_when_error=False)
if out.strip() != 'datanode is stopped':
stopped = False
if out.strip() == 'datanode dead but pid file exists':
instance.remote().execute_command(
'sudo rm -f '
'/var/run/hadoop/hadoop-hadoop-datanode.pid')
if instance.fqdn() in tt_hosts:
code, out = instance.remote().execute_command(
'sudo /sbin/service hadoop-tasktracker status',
raise_when_error=False)
if out.strip() != 'tasktracker is stopped':
stopped = False
if stopped:
break
else:
context.sleep(5)
cur_time += 5
else:
LOG.warn("Failed to stop services on node '%s' of cluster '%s' "
"in %s minutes" % (instance, cluster.name, timeout / 60))
for node in dec_hosts:
LOG.info("Deleting node '%s' on cluster '%s'" % (node, cluster.name))
client.nodes.delete(node)

View File

@ -1,103 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:complexType name="display">
<xs:sequence>
<xs:element name="en">
<xs:simpleType>
<xs:restriction base="xs:string">
<xs:minLength value="1" />
</xs:restriction>
</xs:simpleType>
</xs:element>
</xs:sequence>
</xs:complexType>
<xs:simpleType name="valueType">
<xs:restriction base="xs:string">
<xs:enumeration value="Boolean" />
<xs:enumeration value="Integer" />
<xs:enumeration value="Float" />
<xs:enumeration value="IP" />
<xs:enumeration value="Port" />
<xs:enumeration value="IPWithPort" />
<xs:enumeration value="IPWithMask" />
<xs:enumeration value="URL" />
<xs:enumeration value="String" />
<xs:enumeration value="MepRedCapacity" />
<xs:enumeration value="HBaseClientScannerCaching" />
<xs:enumeration value="Class" />
<xs:enumeration value="Choose" />
<xs:enumeration value="Directory" />
<xs:enumeration value="Int_range" />
</xs:restriction>
</xs:simpleType>
<xs:element name="configuration">
<xs:complexType mixed="true">
<xs:sequence>
<xs:element name="property" maxOccurs="unbounded">
<xs:complexType>
<xs:all>
<xs:element name="name" type="xs:string" />
<xs:element name="value" type="xs:string" />
<xs:element name="intel_default" type="xs:string" minOccurs="0" />
<xs:element name="recommendation" type="xs:string" minOccurs="0" />
<xs:element name="valuetype" type="valueType" />
<xs:element name="group" type="xs:string" />
<xs:element name="definition" type="display" />
<xs:element name="description" type="display" />
<xs:element name="global" type="xs:boolean" minOccurs="0" />
<xs:element name="allowempty" type="xs:boolean" minOccurs="0" />
<xs:element name="readonly" type="xs:boolean" minOccurs="0" />
<xs:element name="hide" type="xs:boolean" minOccurs="0" />
<xs:element name="automatic" type="xs:boolean" minOccurs="0" />
<xs:element name="enable" type="xs:boolean" minOccurs="0" />
<xs:element name="reserved" type="xs:boolean" minOccurs="0" />
<xs:element name="radios" type="xs:string" minOccurs="0" />
<xs:element name="script" type="xs:string" minOccurs="0" />
<xs:element name="type" type="xs:string" minOccurs="0" />
<xs:element name="form" type="xs:string" minOccurs="0" />
<xs:element name="chooselist" type="xs:string" minOccurs="0" />
<xs:element name="implementation" type="xs:string" minOccurs="0" />
<xs:element name="sectionname" type="xs:string" minOccurs="0" />
<xs:element name="refor" minOccurs="0">
<xs:complexType>
<xs:all>
<xs:element name="refand">
<xs:complexType>
<xs:all>
<xs:element name="value" type="xs:string" />
<xs:element name="valuetype" type="valueType" />
<xs:element name="index" type="xs:string" />
</xs:all>
</xs:complexType>
</xs:element>
</xs:all>
</xs:complexType>
</xs:element>
</xs:all>
<xs:attribute name="skipInDoc" type="xs:boolean" />
</xs:complexType>
</xs:element>
<xs:element name="briefsection" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:all>
<xs:element name="sectionname" type="xs:string" />
<xs:element name="name_en" type="xs:string" />
<xs:element name="description_en" type="xs:string" minOccurs="0" />
<xs:element name="autoexpand" type="xs:boolean" />
<xs:element name="showdescription" type="xs:boolean" />
</xs:all>
</xs:complexType>
</xs:element>
<xs:element name="group" minOccurs="1" maxOccurs="unbounded">
<xs:complexType>
<xs:all>
<xs:element name="id" type="xs:string" />
<xs:element name="name_en" type="xs:string" />
<xs:element name="description_en" type="xs:string" />
</xs:all>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,168 +0,0 @@
# Copyright (c) 2014 Intel Corporation
# Copyright (c) 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara import conductor
from sahara import context
from sahara.openstack.common import log as logging
from sahara.plugins.general import exceptions as ex
from sahara.plugins.general import utils as u
from sahara.plugins.intel import abstractversionhandler as avm
from sahara.plugins.intel.v2_5_1 import config_helper as c_helper
from sahara.plugins.intel.v2_5_1 import installer as ins
LOG = logging.getLogger(__name__)
conductor = conductor.API
class VersionHandler(avm.AbstractVersionHandler):
def get_node_processes(self):
processes = {
"Manager": ["manager"],
"HDFS": ["namenode", "datanode", "secondarynamenode"],
"MapReduce": ["jobtracker", "tasktracker"],
"Hadoop": [],
"JobFlow": ["oozie"]
}
return processes
def get_plugin_configs(self):
return c_helper.get_plugin_configs()
def configure_cluster(self, cluster):
LOG.info("Configure IDH cluster")
cluster = ins.create_hadoop_ssh_keys(cluster)
ins.configure_os(cluster)
ins.install_manager(cluster)
ins.install_cluster(cluster)
def start_cluster(self, cluster):
LOG.info("Start IDH cluster")
ins.start_cluster(cluster)
self._set_cluster_info(cluster)
def validate(self, cluster):
nn_count = sum([ng.count for ng
in u.get_node_groups(cluster, 'namenode')])
if nn_count != 1:
raise ex.InvalidComponentCountException('namenode', 1, nn_count)
jt_count = sum([ng.count for ng
in u.get_node_groups(cluster, 'jobtracker')])
if jt_count > 1:
raise ex.InvalidComponentCountException('jobtracker', '0 or 1',
jt_count)
tt_count = sum([ng.count for ng
in u.get_node_groups(cluster, 'tasktracker')])
if jt_count == 0 and tt_count > 0:
raise ex.RequiredServiceMissingException(
'jobtracker', required_by='tasktracker')
mng_count = sum([ng.count for ng
in u.get_node_groups(cluster, 'manager')])
if mng_count != 1:
raise ex.InvalidComponentCountException('manager', 1, mng_count)
def scale_cluster(self, cluster, instances):
ins.configure_os_from_instances(cluster, instances)
ins.scale_cluster(cluster, instances)
def decommission_nodes(self, cluster, instances):
ins.decommission_nodes(cluster, instances)
def validate_scaling(self, cluster, existing, additional):
self._validate_additional_ng_scaling(cluster, additional)
self._validate_existing_ng_scaling(cluster, existing)
def _get_scalable_processes(self):
return ["datanode", "tasktracker"]
def _get_by_id(self, lst, id):
for obj in lst:
if obj.id == id:
return obj
def _validate_additional_ng_scaling(self, cluster, additional):
jt = u.get_jobtracker(cluster)
scalable_processes = self._get_scalable_processes()
for ng_id in additional:
ng = self._get_by_id(cluster.node_groups, ng_id)
if not set(ng.node_processes).issubset(scalable_processes):
raise ex.NodeGroupCannotBeScaled(
ng.name, "Intel plugin cannot scale nodegroup"
" with processes: " +
' '.join(ng.node_processes))
if not jt and 'tasktracker' in ng.node_processes:
raise ex.NodeGroupCannotBeScaled(
ng.name, "Intel plugin cannot scale node group with "
"processes which have no master-processes run "
"in cluster")
def _validate_existing_ng_scaling(self, cluster, existing):
scalable_processes = self._get_scalable_processes()
dn_to_delete = 0
for ng in cluster.node_groups:
if ng.id in existing:
if ng.count > existing[ng.id] and "datanode" in \
ng.node_processes:
dn_to_delete += ng.count - existing[ng.id]
if not set(ng.node_processes).issubset(scalable_processes):
raise ex.NodeGroupCannotBeScaled(
ng.name, "Intel plugin cannot scale nodegroup"
" with processes: " +
' '.join(ng.node_processes))
def _set_cluster_info(self, cluster):
mng = u.get_instances(cluster, 'manager')[0]
nn = u.get_namenode(cluster)
jt = u.get_jobtracker(cluster)
oozie = u.get_oozie(cluster)
#TODO(alazarev) make port configurable (bug #1262895)
info = {'IDH Manager': {
'Web UI': 'https://%s:9443' % mng.management_ip
}}
if jt:
#TODO(alazarev) make port configurable (bug #1262895)
info['MapReduce'] = {
'Web UI': 'http://%s:50030' % jt.management_ip
}
#TODO(alazarev) make port configurable (bug #1262895)
info['MapReduce']['JobTracker'] = '%s:54311' % jt.hostname()
if nn:
#TODO(alazarev) make port configurable (bug #1262895)
info['HDFS'] = {
'Web UI': 'http://%s:50070' % nn.management_ip
}
#TODO(alazarev) make port configurable (bug #1262895)
info['HDFS']['NameNode'] = 'hdfs://%s:8020' % nn.hostname()
if oozie:
#TODO(alazarev) make port configurable (bug #1262895)
info['JobFlow'] = {
'Oozie': 'http://%s:11000' % oozie.management_ip
}
ctx = context.ctx()
conductor.cluster_update(ctx, cluster, {'info': info})
def get_resource_manager_uri(self, cluster):
return cluster['info']['MapReduce']['JobTracker']

View File

@ -1,33 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins.intel.client import cluster
from sahara.plugins.intel.client import nodes
from sahara.plugins.intel.client import params
from sahara.plugins.intel.client import rest as r
from sahara.plugins.intel.client import services
class IntelClient():
def __init__(self, manager, cluster_name):
#TODO(alazarev) make credentials configurable (bug #1262881)
self.rest = r.RESTClient(manager, 'admin', 'admin', 'v2')
self.cluster_name = cluster_name
self._ctx = self
self.cluster = cluster.Cluster(self)
self.nodes = nodes.Nodes(self)
self.params = params.Params(self, True)
self.services = services.Services(self, True)

View File

@ -1,145 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import provisioning as p
from sahara.utils import xmlutils as x
CORE_DEFAULT = x.load_hadoop_xml_defaults(
'plugins/intel/v3_0_2/resources/hadoop-default.xml')
HDFS_DEFAULT = x.load_hadoop_xml_defaults(
'plugins/intel/v3_0_2/resources/hdfs-default.xml')
YARN_DEFAULT = x.load_hadoop_xml_defaults(
'plugins/intel/v3_0_2/resources/yarn-default.xml')
OOZIE_DEFAULT = x.load_hadoop_xml_defaults(
'plugins/intel/v3_0_2/resources/oozie-default.xml')
XML_CONFS = {
"Hadoop": [CORE_DEFAULT],
"HDFS": [HDFS_DEFAULT],
"YARN": [YARN_DEFAULT],
"JobFlow": [OOZIE_DEFAULT]
}
IDH_TARBALL_URL = p.Config('IDH tarball URL', 'general', 'cluster', priority=1,
default_value='http://repo2.intelhadoop.com/'
'setup/setup-intelhadoop-'
'3.0.2-en-evaluation.RHEL.tar.gz')
OS_REPO_URL = p.Config('OS repository URL', 'general', 'cluster', priority=1,
is_optional=True,
default_value='http://mirror.centos.org/'
'centos-6/6/os/x86_64')
IDH_REPO_URL = p.Config('IDH repository URL', 'general', 'cluster',
priority=1, is_optional=True,
default_value='http://repo2.intelhadoop.com'
'/evaluation/en/RHEL/3.0.2/rpm')
OOZIE_EXT22_URL = p.Config(
'Ext 2.2 URL', 'general', 'cluster',
description='Ext 2.2 library is required for Oozie Web Console. '
'The file will be downloaded from VM with oozie.',
priority=1, is_optional=True,
default_value='http://extjs.com/deploy/ext-2.2.zip')
ENABLE_SWIFT = p.Config('Enable Swift', 'general', 'cluster',
config_type="bool", priority=1,
default_value=True, is_optional=True)
HADOOP_SWIFTFS_JAR_URL = p.Config(
'Hadoop SwiftFS jar URL', 'general', 'cluster',
description='Library that adds swift support to hadoop. '
'The file will be downloaded from VM with oozie.',
priority=1, is_optional=True,
default_value='http://sahara-files.mirantis.com/'
'hadoop-swift/hadoop-swift-latest.jar')
HIDDEN_CONFS = ['fs.default.name', 'dfs.namenode.name.dir',
'dfs.datanode.data.dir']
CLUSTER_WIDE_CONFS = ['dfs.block.size', 'dfs.permissions', 'dfs.replication',
'dfs.replication.min', 'dfs.replication.max',
'io.file.buffer.size']
PRIORITY_1_CONFS = ['dfs.datanode.du.reserved',
'dfs.datanode.failed.volumes.tolerated',
'dfs.datanode.max.xcievers', 'dfs.datanode.handler.count',
'dfs.namenode.handler.count',
'io.sort.mb']
PRIORITY_1_CONFS += CLUSTER_WIDE_CONFS
CFG_TYPE = {
"Boolean": "bool",
"String": "string",
"Integer": "int",
"Choose": "string",
"Class": "string",
"Directory": "string",
"Float": "string",
"Int_range": "string",
}
def _initialise_configs():
configs = []
for service, config_lists in XML_CONFS.iteritems():
for config_list in config_lists:
for config in config_list:
if config['name'] not in HIDDEN_CONFS:
cfg = p.Config(
config['name'], service, "cluster", is_optional=True,
config_type="string",
default_value=str(config['value']),
description=config['description'])
if config.get('type'):
cfg.config_type = CFG_TYPE[config['type']]
if cfg.config_type == 'bool':
cfg.default_value = cfg.default_value == 'true'
if cfg.config_type == 'int':
if cfg.default_value:
cfg.default_value = int(cfg.default_value)
else:
cfg.config_type = 'string'
if config['name'] in PRIORITY_1_CONFS:
cfg.priority = 1
configs.append(cfg)
configs.append(IDH_TARBALL_URL)
configs.append(IDH_REPO_URL)
configs.append(OS_REPO_URL)
configs.append(OOZIE_EXT22_URL)
configs.append(ENABLE_SWIFT)
return configs
PLUGIN_CONFIGS = _initialise_configs()
def get_plugin_configs():
return PLUGIN_CONFIGS
def get_config_value(cluster_configs, key):
if not cluster_configs or cluster_configs.get(key.name) is None:
return key.default_value
return cluster_configs.get(key.name)
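As a reading aid: get_config_value() simply falls back to an option's default when the cluster configs do not override it. A small sketch under that assumption, with a hypothetical override dict:

.. sourcecode:: python

    # Hypothetical user-supplied 'general' configs for illustration.
    user_configs = {'IDH repository URL': 'http://mirror.example.org/idh/rpm'}

    get_config_value(user_configs, IDH_REPO_URL)     # returns the overridden URL
    get_config_value(user_configs, IDH_TARBALL_URL)  # returns IDH_TARBALL_URL.default_value
    get_config_value(None, OS_REPO_URL)              # returns OS_REPO_URL.default_value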

View File

@ -1,467 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import telnetlib
import six
from sahara import conductor
from sahara import context
from sahara.openstack.common import log as logging
from sahara.plugins.general import utils as u
from sahara.plugins.intel import exceptions as iex
from sahara.plugins.intel.v3_0_2 import client as c
from sahara.plugins.intel.v3_0_2 import config_helper as c_helper
from sahara.swift import swift_helper as swift
from sahara.utils import crypto
conductor = conductor.API
LOG = logging.getLogger(__name__)
_INST_CONF_TEMPLATE = """
network_interface=eth0
mode=silent
accept_jdk_license=accept
how_to_setup_os_repo=2
os_repo=%s
os_repo_username=
os_repo_password=
os_repo_proxy=
how_to_setup_idh_repo=1
idh_repo=%s
idh_repo_username=
idh_repo_password=
idh_repo_proxy=
firewall_selinux_setting=1"""
def install_manager(cluster):
LOG.info("Starting Install Manager Process")
mng_instance = u.get_instance(cluster, 'manager')
idh_tarball_path = c_helper.get_config_value(
cluster.cluster_configs.get('general'), c_helper.IDH_TARBALL_URL)
idh_tarball_filename = idh_tarball_path.rsplit('/', 1)[-1]
idh_dir = idh_tarball_filename[:idh_tarball_filename.find('.tar.gz')]
LOG.info("IDH tgz will be retrieved from: \'%s\'", idh_tarball_path)
idh_repo = c_helper.get_config_value(
cluster.cluster_configs.get('general'), c_helper.IDH_REPO_URL)
os_repo = c_helper.get_config_value(
cluster.cluster_configs.get('general'), c_helper.OS_REPO_URL)
idh_install_cmd = 'sudo ./%s/install.sh --mode=silent 2>&1' % idh_dir
with mng_instance.remote() as r:
LOG.info("Download IDH manager ")
try:
r.execute_command('curl -O %s 2>&1' % idh_tarball_path)
except Exception as e:
raise RuntimeError("Unable to download IDH manager from %s" %
idh_tarball_path, e)
# unpack archive
LOG.info("Unpack manager %s ", idh_tarball_filename)
try:
r.execute_command('tar xzf %s 2>&1' % idh_tarball_filename)
except Exception as e:
raise RuntimeError("Unable to unpack tgz %s",
idh_tarball_filename, e)
# install idh
LOG.debug("Install manager with %s : ", idh_install_cmd)
inst_conf = _INST_CONF_TEMPLATE % (os_repo, idh_repo)
r.write_file_to('%s/ui-installer/conf' % idh_dir, inst_conf)
#TODO(alazarev) make timeout configurable (bug #1262897)
r.execute_command(idh_install_cmd, timeout=3600)
# fix nginx permissions bug
r.execute_command('sudo chmod o+x /var/lib/nginx/ /var/lib/nginx/tmp '
'/var/lib/nginx/tmp/client_body')
# wait for the idh manager to start
#TODO(alazarev) make timeout configurable (bug #1262897)
timeout = 600
LOG.debug("Waiting %s seconds for Manager to start : ", timeout)
while timeout:
try:
telnetlib.Telnet(mng_instance.management_ip, 9443)
break
except IOError:
timeout -= 2
context.sleep(2)
else:
message = ("IDH Manager failed to start in %s minutes on node '%s' "
"of cluster '%s'"
% (timeout / 60, mng_instance.management_ip, cluster.name))
LOG.error(message)
raise iex.IntelPluginException(message)
def configure_os(cluster):
instances = u.get_instances(cluster)
configure_os_from_instances(cluster, instances)
def create_hadoop_ssh_keys(cluster):
private_key, public_key = crypto.generate_key_pair()
extra = {
'hadoop_private_ssh_key': private_key,
'hadoop_public_ssh_key': public_key
}
return conductor.cluster_update(context.ctx(), cluster, {'extra': extra})
def configure_os_from_instances(cluster, instances):
for instance in instances:
with instance.remote() as remote:
LOG.debug("Configuring OS settings on %s : ", instance.hostname())
# configure hostname, RedHat/Centos specific
remote.replace_remote_string('/etc/sysconfig/network',
'HOSTNAME=.*',
'HOSTNAME=%s' % instance.fqdn())
# disable selinux and iptables, because Intel distribution requires
# this to be off
remote.execute_command('sudo /usr/sbin/setenforce 0')
remote.replace_remote_string('/etc/selinux/config',
'SELINUX=.*', 'SELINUX=disabled')
# disable iptables
remote.execute_command('sudo /sbin/service iptables stop')
remote.execute_command('sudo /sbin/chkconfig iptables off')
# create 'hadoop' user
remote.write_files_to({
'id_rsa': cluster.extra.get('hadoop_private_ssh_key'),
'authorized_keys': cluster.extra.get('hadoop_public_ssh_key')
})
remote.execute_command(
'sudo useradd hadoop && '
'sudo sh -c \'echo "hadoop ALL=(ALL) NOPASSWD:ALL" '
'>> /etc/sudoers\' && '
'sudo mkdir -p /home/hadoop/.ssh/ && '
'sudo mv id_rsa authorized_keys /home/hadoop/.ssh && '
'sudo chown -R hadoop:hadoop /home/hadoop/.ssh && '
'sudo chmod 600 /home/hadoop/.ssh/{id_rsa,authorized_keys}')
swift_enable = c_helper.get_config_value(
cluster.cluster_configs.get('general'), c_helper.ENABLE_SWIFT)
if swift_enable:
hadoop_swiftfs_jar_url = c_helper.get_config_value(
cluster.cluster_configs.get('general'),
c_helper.HADOOP_SWIFTFS_JAR_URL)
swift_lib_dir = '/usr/lib/hadoop/lib'
swift_lib_path = swift_lib_dir + '/hadoop-swift-latest.jar'
cmd = ('sudo mkdir -p %s && sudo curl \'%s\' -o %s'
% (swift_lib_dir, hadoop_swiftfs_jar_url,
swift_lib_path))
remote.execute_command(cmd)
def _configure_services(client, cluster):
nn_host = u.get_namenode(cluster).fqdn()
snn = u.get_secondarynamenodes(cluster)
snn_host = snn[0].fqdn() if snn else None
rm_host = u.get_resourcemanager(cluster).fqdn() if u.get_resourcemanager(
cluster) else None
hs_host = u.get_historyserver(cluster).fqdn() if u.get_historyserver(
cluster) else None
dn_hosts = [dn.fqdn() for dn in u.get_datanodes(cluster)]
nm_hosts = [tt.fqdn() for tt in u.get_nodemanagers(cluster)]
oozie_host = u.get_oozie(cluster).fqdn() if u.get_oozie(
cluster) else None
hive_host = u.get_hiveserver(cluster).fqdn() if u.get_hiveserver(
cluster) else None
services = []
if u.get_namenode(cluster):
services += ['hdfs']
if u.get_resourcemanager(cluster):
services += ['yarn']
if oozie_host:
services += ['oozie']
services += ['pig']
if hive_host:
services += ['hive']
LOG.debug("Add services: %s" % ', '.join(services))
client.services.add(services)
LOG.debug("Assign roles to hosts")
client.services.hdfs.add_nodes('PrimaryNameNode', [nn_host])
client.services.hdfs.add_nodes('DataNode', dn_hosts)
if snn:
client.services.hdfs.add_nodes('SecondaryNameNode', [snn_host])
if oozie_host:
client.services.oozie.add_nodes('Oozie', [oozie_host])
if hive_host:
client.services.hive.add_nodes('HiveServer', [hive_host])
if rm_host:
client.services.yarn.add_nodes('ResourceManager', [rm_host])
client.services.yarn.add_nodes('NodeManager', nm_hosts)
if hs_host:
client.services.yarn.add_nodes('HistoryServer', [hs_host])
def _configure_storage(client, cluster):
datanode_ng = u.get_node_groups(cluster, 'datanode')[0]
storage_paths = datanode_ng.storage_paths()
dn_hosts = [i.fqdn() for i in u.get_datanodes(cluster)]
name_dir_param = ",".join(
[st_path + '/dfs/name' for st_path in storage_paths])
data_dir_param = ",".join(
[st_path + '/dfs/data' for st_path in storage_paths])
client.params.hdfs.update('dfs.namenode.name.dir', name_dir_param)
client.params.hdfs.update('dfs.datanode.data.dir', data_dir_param,
nodes=dn_hosts)
def _configure_swift(client, cluster):
swift_enable = c_helper.get_config_value(
cluster.cluster_configs.get('general'), c_helper.ENABLE_SWIFT)
if swift_enable:
swift_configs = swift.get_swift_configs()
for conf in swift_configs:
client.params.hadoop.add(conf['name'], conf['value'])
def _add_user_params(client, cluster):
for p in six.iteritems(cluster.cluster_configs.get("Hadoop", {})):
client.params.hadoop.update(p[0], p[1])
for p in six.iteritems(cluster.cluster_configs.get("HDFS", {})):
client.params.hdfs.update(p[0], p[1])
for p in six.iteritems(cluster.cluster_configs.get("YARN", {})):
client.params.yarn.update(p[0], p[1])
for p in six.iteritems(cluster.cluster_configs.get("JobFlow", {})):
client.params.oozie.update(p[0], p[1])
def install_cluster(cluster):
mng_instance = u.get_instance(cluster, 'manager')
all_hosts = list(set([i.fqdn() for i in u.get_instances(cluster)]))
client = c.IntelClient(mng_instance, cluster.name)
LOG.info("Create cluster")
client.cluster.create()
LOG.info("Add nodes to cluster")
rack = '/Default'
client.nodes.add(all_hosts, rack, 'hadoop',
'/home/hadoop/.ssh/id_rsa')
LOG.info("Install software")
client.cluster.install_software(all_hosts)
LOG.info("Configure services")
_configure_services(client, cluster)
LOG.info("Deploy cluster")
client.nodes.config(force=True)
LOG.info("Provisioning configs")
# cinder and ephemeral drive support
_configure_storage(client, cluster)
# swift support
_configure_swift(client, cluster)
# user configs
_add_user_params(client, cluster)
LOG.info("Format HDFS")
client.services.hdfs.format()
def _setup_oozie(cluster):
with (u.get_oozie(cluster)).remote() as r:
LOG.info("Oozie: add hadoop libraries to java.library.path")
r.execute_command(
"sudo ln -s /usr/lib/hadoop/lib/native/Linux-amd64-64/libhadoop.so"
" /usr/lib64/ && "
"sudo ln -s /usr/lib/hadoop/lib/native/Linux-amd64-64/libsnappy.so"
" /usr/lib64/")
ext22 = c_helper.get_config_value(
cluster.cluster_configs.get('general'), c_helper.OOZIE_EXT22_URL)
if ext22:
LOG.info("Oozie: downloading and installing ext 2.2 from '%s'"
% ext22)
r.execute_command(
"curl -L -o ext-2.2.zip %s && "
"sudo unzip ext-2.2.zip -d "
"/var/lib/oozie/oozie-server/webapps/oozie && "
"sudo chown oozie:oozie "
"/var/lib/oozie/oozie-server/webapps/oozie -R && "
"rm ext-2.2.zip" % ext22)
LOG.info("Oozie: installing oozie share lib")
r.execute_command(
"mkdir /tmp/oozielib && "
"tar xzf /usr/lib/oozie/oozie-sharelib.tar.gz -C /tmp/oozielib && "
"rm /tmp/oozielib/share/lib/pig/pig-0.11.1-Intel.jar &&"
"cp /usr/lib/pig/pig-0.11.1-Intel.jar "
"/tmp/oozielib/share/lib/pig/pig-0.11.1-Intel.jar && "
"sudo su - -c '"
"hadoop fs -mkdir /user/oozie && "
"hadoop fs -put /tmp/oozielib/share /user/oozie/share' hadoop && "
"rm -rf /tmp/oozielib")
def start_cluster(cluster):
client = c.IntelClient(u.get_instance(cluster, 'manager'), cluster.name)
LOG.debug("Starting hadoop services")
client.services.hdfs.start()
if u.get_resourcemanager(cluster):
client.services.yarn.start()
if u.get_hiveserver(cluster):
client.services.hive.start()
if u.get_oozie(cluster):
LOG.info("Setup oozie")
_setup_oozie(cluster)
client.services.oozie.start()
def scale_cluster(cluster, instances):
scale_ins_hosts = [i.fqdn() for i in instances]
dn_hosts = [dn.fqdn() for dn in u.get_datanodes(cluster)]
nm_hosts = [nm.fqdn() for nm in u.get_nodemanagers(cluster)]
to_scale_dn = []
to_scale_nm = []
for i in scale_ins_hosts:
if i in dn_hosts:
to_scale_dn.append(i)
if i in nm_hosts:
to_scale_nm.append(i)
client = c.IntelClient(u.get_instance(cluster, 'manager'), cluster.name)
rack = '/Default'
client.nodes.add(scale_ins_hosts, rack, 'hadoop',
'/home/hadoop/.ssh/id_rsa')
client.cluster.install_software(scale_ins_hosts)
if to_scale_nm:
client.services.yarn.add_nodes('NodeManager', to_scale_nm)
if to_scale_dn:
client.services.hdfs.add_nodes('DataNode', to_scale_dn)
# IDH 3.0.2 resets cluster parameters (bug #1300603),
# so restore them here
LOG.info("Provisioning configs")
# cinder and ephemeral drive support
_configure_storage(client, cluster)
# swift support
_configure_swift(client, cluster)
# user configs
_add_user_params(client, cluster)
client.nodes.config()
if to_scale_dn:
client.services.hdfs.start()
if to_scale_nm:
client.services.yarn.start()
def decommission_nodes(cluster, instances):
dec_hosts = [i.fqdn() for i in instances]
dn_hosts = [dn.fqdn() for dn in u.get_datanodes(cluster)]
nm_hosts = [nm.fqdn() for nm in u.get_nodemanagers(cluster)]
client = c.IntelClient(u.get_instance(cluster, 'manager'), cluster.name)
dec_dn_hosts = []
for dec_host in dec_hosts:
if dec_host in dn_hosts:
dec_dn_hosts.append(dec_host)
if dec_dn_hosts:
client.services.hdfs.decommission_nodes(dec_dn_hosts)
#TODO(alazarev) make timeout configurable (bug #1262897)
timeout = 14400 # 4 hours
cur_time = 0
for host in dec_dn_hosts:
while cur_time < timeout:
if client.services.hdfs.get_datanode_status(
host) == 'Decomissioned':
break
context.sleep(5)
cur_time += 5
else:
LOG.warn("Failed to decomission node '%s' of cluster '%s' "
"in %s minutes" % (host, cluster.name, timeout / 60))
client.nodes.stop(dec_hosts)
# wait for services to stop
#TODO(alazarev) make timeout configurable (bug #1262897)
timeout = 600 # 10 minutes
cur_time = 0
for instance in instances:
while cur_time < timeout:
stopped = True
if instance.fqdn() in dn_hosts:
stopped = stopped and _is_hadoop_service_stopped(
instance, 'hadoop-hdfs-datanode')
if instance.fqdn() in nm_hosts:
stopped = stopped and _is_hadoop_service_stopped(
instance, 'hadoop-yarn-nodemanager')
if stopped:
break
else:
context.sleep(5)
cur_time += 5
else:
LOG.warn("Failed to stop services on node '%s' of cluster '%s' "
"in %s minutes" % (instance, cluster.name, timeout / 60))
for node in dec_hosts:
LOG.info("Deleting node '%s' on cluster '%s'" % (node, cluster.name))
client.nodes.delete(node)
def _is_hadoop_service_stopped(instance, service):
code, out = instance.remote().execute_command(
'sudo /sbin/service %s status' % service,
raise_when_error=False)
return ('is not running' in out or
'is dead and pid file exists' in out)
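Taken together, this module exposed the whole provisioning flow; the version handler (removed later in this change) invoked it roughly in the order sketched below. ``cluster`` is assumed to be a Sahara cluster object whose node groups include a 'manager' instance:

.. sourcecode:: python

    # Rough call order, as performed by the v3_0_2 version handler.
    cluster = create_hadoop_ssh_keys(cluster)  # store a generated keypair in cluster.extra
    configure_os(cluster)                      # hostname, selinux/iptables off, 'hadoop' user
    install_manager(cluster)                   # download, unpack and run the IDH installer
    install_cluster(cluster)                   # register nodes, deploy configs, format HDFS
    start_cluster(cluster)                     # start HDFS and, if present, YARN/Hive/Oozie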

View File

@ -1,103 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:complexType name="display">
<xs:sequence>
<xs:element name="en">
<xs:simpleType>
<xs:restriction base="xs:string">
<xs:minLength value="1" />
</xs:restriction>
</xs:simpleType>
</xs:element>
</xs:sequence>
</xs:complexType>
<xs:simpleType name="valueType">
<xs:restriction base="xs:string">
<xs:enumeration value="Boolean" />
<xs:enumeration value="Integer" />
<xs:enumeration value="Float" />
<xs:enumeration value="IP" />
<xs:enumeration value="Port" />
<xs:enumeration value="IPWithPort" />
<xs:enumeration value="IPWithMask" />
<xs:enumeration value="URL" />
<xs:enumeration value="String" />
<xs:enumeration value="MepRedCapacity" />
<xs:enumeration value="HBaseClientScannerCaching" />
<xs:enumeration value="Class" />
<xs:enumeration value="Choose" />
<xs:enumeration value="Directory" />
<xs:enumeration value="Int_range" />
</xs:restriction>
</xs:simpleType>
<xs:element name="configuration">
<xs:complexType mixed="true">
<xs:sequence>
<xs:element name="property" maxOccurs="unbounded">
<xs:complexType>
<xs:all>
<xs:element name="name" type="xs:string" />
<xs:element name="value" type="xs:string" />
<xs:element name="intel_default" type="xs:string" minOccurs="0" />
<xs:element name="recommendation" type="xs:string" minOccurs="0" />
<xs:element name="valuetype" type="valueType" />
<xs:element name="group" type="xs:string" />
<xs:element name="definition" type="display" />
<xs:element name="description" type="display" />
<xs:element name="global" type="xs:boolean" minOccurs="0" />
<xs:element name="allowempty" type="xs:boolean" minOccurs="0" />
<xs:element name="readonly" type="xs:boolean" minOccurs="0" />
<xs:element name="hide" type="xs:boolean" minOccurs="0" />
<xs:element name="automatic" type="xs:boolean" minOccurs="0" />
<xs:element name="enable" type="xs:boolean" minOccurs="0" />
<xs:element name="reserved" type="xs:boolean" minOccurs="0" />
<xs:element name="radios" type="xs:string" minOccurs="0" />
<xs:element name="script" type="xs:string" minOccurs="0" />
<xs:element name="type" type="xs:string" minOccurs="0" />
<xs:element name="form" type="xs:string" minOccurs="0" />
<xs:element name="chooselist" type="xs:string" minOccurs="0" />
<xs:element name="implementation" type="xs:string" minOccurs="0" />
<xs:element name="sectionname" type="xs:string" minOccurs="0" />
<xs:element name="refor" minOccurs="0">
<xs:complexType>
<xs:all>
<xs:element name="refand">
<xs:complexType>
<xs:all>
<xs:element name="value" type="xs:string" />
<xs:element name="valuetype" type="valueType" />
<xs:element name="index" type="xs:string" />
</xs:all>
</xs:complexType>
</xs:element>
</xs:all>
</xs:complexType>
</xs:element>
</xs:all>
<xs:attribute name="skipInDoc" type="xs:boolean" />
</xs:complexType>
</xs:element>
<xs:element name="briefsection" minOccurs="0" maxOccurs="unbounded">
<xs:complexType>
<xs:all>
<xs:element name="sectionname" type="xs:string" />
<xs:element name="name_en" type="xs:string" />
<xs:element name="description_en" type="xs:string" minOccurs="0" />
<xs:element name="autoexpand" type="xs:boolean" />
<xs:element name="showdescription" type="xs:boolean" />
</xs:all>
</xs:complexType>
</xs:element>
<xs:element name="group" minOccurs="1" maxOccurs="unbounded">
<xs:complexType>
<xs:all>
<xs:element name="id" type="xs:string" />
<xs:element name="name_en" type="xs:string" />
<xs:element name="description_en" type="xs:string" />
</xs:all>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,675 +0,0 @@
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!-- Do not modify this file directly. Instead, copy entries that you -->
<!-- wish to modify from this file into yarn-site.xml and change them -->
<!-- there. If yarn-site.xml does not already exist, create it. -->
<configuration>
<!-- IPC Configs -->
<property>
<description>Factory to create client IPC classes.</description>
<name>yarn.ipc.client.factory.class</name>
</property>
<property>
<description>Type of serialization to use.</description>
<name>yarn.ipc.serializer.type</name>
<value>protocolbuffers</value>
</property>
<property>
<description>Factory to create server IPC classes.</description>
<name>yarn.ipc.server.factory.class</name>
</property>
<property>
<description>Factory to create IPC exceptions.</description>
<name>yarn.ipc.exception.factory.class</name>
</property>
<property>
<description>Factory to create serializeable records.</description>
<name>yarn.ipc.record.factory.class</name>
</property>
<property>
<description>RPC class implementation</description>
<name>yarn.ipc.rpc.class</name>
<value>org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC</value>
</property>
<!-- Resource Manager Configs -->
<property>
<description>The address of the applications manager interface in the RM.</description>
<name>yarn.resourcemanager.address</name>
<value>0.0.0.0:8032</value>
</property>
<property>
<description>The number of threads used to handle applications manager requests.</description>
<name>yarn.resourcemanager.client.thread-count</name>
<value>50</value>
</property>
<property>
<description>The expiry interval for application master reporting.</description>
<name>yarn.am.liveness-monitor.expiry-interval-ms</name>
<value>600000</value>
</property>
<property>
<description>The Kerberos principal for the resource manager.</description>
<name>yarn.resourcemanager.principal</name>
</property>
<property>
<description>The address of the scheduler interface.</description>
<name>yarn.resourcemanager.scheduler.address</name>
<value>0.0.0.0:8030</value>
</property>
<property>
<description>Number of threads to handle scheduler interface.</description>
<name>yarn.resourcemanager.scheduler.client.thread-count</name>
<value>50</value>
</property>
<property>
<description>The address of the RM web application.</description>
<name>yarn.resourcemanager.webapp.address</name>
<value>0.0.0.0:8088</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>0.0.0.0:8031</value>
</property>
<property>
<description>Are acls enabled.</description>
<name>yarn.acl.enable</name>
<value>true</value>
</property>
<property>
<description>ACL of who can be admin of the YARN cluster.</description>
<name>yarn.admin.acl</name>
<value>*</value>
</property>
<property>
<description>The address of the RM admin interface.</description>
<name>yarn.resourcemanager.admin.address</name>
<value>0.0.0.0:8033</value>
</property>
<property>
<description>Number of threads used to handle RM admin interface.</description>
<name>yarn.resourcemanager.admin.client.thread-count</name>
<value>1</value>
</property>
<property>
<description>How often should the RM check that the AM is still alive.</description>
<name>yarn.resourcemanager.amliveliness-monitor.interval-ms</name>
<value>1000</value>
</property>
<property>
<description>The maximum number of application master retries.</description>
<name>yarn.resourcemanager.am.max-retries</name>
<value>1</value>
</property>
<property>
<description>How often to check that containers are still alive. </description>
<name>yarn.resourcemanager.container.liveness-monitor.interval-ms</name>
<value>600000</value>
</property>
<property>
<description>The keytab for the resource manager.</description>
<name>yarn.resourcemanager.keytab</name>
<value>/etc/krb5.keytab</value>
</property>
<property>
<description>How long to wait until a node manager is considered dead.</description>
<name>yarn.nm.liveness-monitor.expiry-interval-ms</name>
<value>600000</value>
</property>
<property>
<description>How often to check that node managers are still alive.</description>
<name>yarn.resourcemanager.nm.liveness-monitor.interval-ms</name>
<value>1000</value>
</property>
<property>
<description>Path to file with nodes to include.</description>
<name>yarn.resourcemanager.nodes.include-path</name>
<value></value>
</property>
<property>
<description>Path to file with nodes to exclude.</description>
<name>yarn.resourcemanager.nodes.exclude-path</name>
<value></value>
</property>
<property>
<description>Number of threads to handle resource tracker calls.</description>
<name>yarn.resourcemanager.resource-tracker.client.thread-count</name>
<value>50</value>
</property>
<property>
<description>The class to use as the resource scheduler.</description>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
<property>
<description>The minimum allocation for every container request at the RM,
in MBs. Memory requests lower than this won't take effect,
and the specified value will get allocated at minimum.</description>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>1024</value>
</property>
<property>
<description>The maximum allocation for every container request at the RM,
in MBs. Memory requests higher than this won't take effect,
and will get capped to this value.</description>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>8192</value>
</property>
<property>
<description>The minimum allocation for every container request at the RM,
in terms of virtual CPU cores. Requests lower than this won't take effect,
and the specified value will get allocated the minimum.</description>
<name>yarn.scheduler.minimum-allocation-vcores</name>
<value>1</value>
</property>
<property>
<description>The maximum allocation for every container request at the RM,
in terms of virtual CPU cores. Requests higher than this won't take effect,
and will get capped to this value.</description>
<name>yarn.scheduler.maximum-allocation-vcores</name>
<value>32</value>
</property>
<property>
<description>Enable RM to recover state after starting. If true, then
yarn.resourcemanager.store.class must be specified</description>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>false</value>
</property>
<property>
<description>The class to use as the persistent store.</description>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value>
</property>
<property>
<description>URI pointing to the location of the FileSystem path where
RM state will be stored. This must be supplied when using
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
as the value for yarn.resourcemanager.store.class</description>
<name>yarn.resourcemanager.fs.rm-state-store.uri</name>
<value>${hadoop.tmp.dir}/yarn/system/rmstore</value>
<!--value>hdfs://localhost:9000/rmstore</value-->
</property>
<property>
<description>The maximum number of completed applications RM keeps. </description>
<name>yarn.resourcemanager.max-completed-applications</name>
<value>10000</value>
</property>
<property>
<description>Interval at which the delayed token removal thread runs</description>
<name>yarn.resourcemanager.delayed.delegation-token.removal-interval-ms</name>
<value>30000</value>
</property>
<property>
<description>Interval for the roll over for the master key used to generate
application tokens
</description>
<name>yarn.resourcemanager.application-tokens.master-key-rolling-interval-secs</name>
<value>86400</value>
</property>
<property>
<description>Interval for the roll over for the master key used to generate
container tokens. It is expected to be much greater than
yarn.nm.liveness-monitor.expiry-interval-ms and
yarn.rm.container-allocation.expiry-interval-ms. Otherwise the
behavior is undefined.
</description>
<name>yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs</name>
<value>86400</value>
</property>
<!-- Node Manager Configs -->
<property>
<description>The address of the container manager in the NM.</description>
<name>yarn.nodemanager.address</name>
<value>0.0.0.0:0</value>
</property>
<property>
<description>Environment variables that should be forwarded from the NodeManager's environment to the container's.</description>
<name>yarn.nodemanager.admin-env</name>
<value>MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX</value>
</property>
<property>
<description>Environment variables that containers may override rather than use NodeManager's default.</description>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,HADOOP_YARN_HOME</value>
</property>
<property>
<description>who will execute(launch) the containers.</description>
<name>yarn.nodemanager.container-executor.class</name>
<value>org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor</value>
<!--<value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>-->
</property>
<property>
<description>Number of threads container manager uses.</description>
<name>yarn.nodemanager.container-manager.thread-count</name>
<value>20</value>
</property>
<property>
<description>Number of threads used in cleanup.</description>
<name>yarn.nodemanager.delete.thread-count</name>
<value>4</value>
</property>
<property>
<description>
Number of seconds after an application finishes before the nodemanager's
DeletionService will delete the application's localized file directory
and log directory.
To diagnose Yarn application problems, set this property's value large
enough (for example, to 600 = 10 minutes) to permit examination of these
directories. After changing the property's value, you must restart the
nodemanager in order for it to have an effect.
The roots of Yarn applications' work directories are configurable with
the yarn.nodemanager.local-dirs property (see below), and the roots
of the Yarn applications' log directories are configurable with the
yarn.nodemanager.log-dirs property (see also below).
</description>
<name>yarn.nodemanager.delete.debug-delay-sec</name>
<value>0</value>
</property>
<property>
<description>Heartbeat interval to RM</description>
<name>yarn.nodemanager.heartbeat.interval-ms</name>
<value>1000</value>
</property>
<property>
<description>Keytab for NM.</description>
<name>yarn.nodemanager.keytab</name>
<value>/etc/krb5.keytab</value>
</property>
<property>
<description>List of directories to store localized files in. An
application's localized file directory will be found in:
${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
Individual containers' work directories, called container_${contid}, will
be subdirectories of this.
</description>
<name>yarn.nodemanager.local-dirs</name>
<value>${hadoop.tmp.dir}/nm-local-dir</value>
</property>
<property>
<description>Address where the localizer IPC is.</description>
<name>yarn.nodemanager.localizer.address</name>
<value>0.0.0.0:8040</value>
</property>
<property>
<description>Interval in between cache cleanups.</description>
<name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
<value>600000</value>
</property>
<property>
<description>Target size of localizer cache in MB, per local directory.</description>
<name>yarn.nodemanager.localizer.cache.target-size-mb</name>
<value>10240</value>
</property>
<property>
<description>Number of threads to handle localization requests.</description>
<name>yarn.nodemanager.localizer.client.thread-count</name>
<value>5</value>
</property>
<property>
<description>Number of threads to use for localization fetching.</description>
<name>yarn.nodemanager.localizer.fetch.thread-count</name>
<value>4</value>
</property>
<property>
<description>
Where to store container logs. An application's localized log directory
will be found in ${yarn.nodemanager.log-dirs}/application_${appid}.
Individual containers' log directories will be below this, in directories
named container_{$contid}. Each container directory will contain the files
stderr, stdin, and syslog generated by that container.
</description>
<name>yarn.nodemanager.log-dirs</name>
<value>${yarn.log.dir}/userlogs</value>
</property>
<property>
<description>Whether to enable log aggregation</description>
<name>yarn.log-aggregation-enable</name>
<value>false</value>
</property>
<property>
<description>How long to keep aggregation logs before deleting them. -1 disables.
Be careful: setting this too small will spam the name node.</description>
<name>yarn.log-aggregation.retain-seconds</name>
<value>-1</value>
</property>
<property>
<description>How long to wait between aggregated log retention checks.
If set to 0 or a negative value then the value is computed as one-tenth
of the aggregated log retention time. Be careful: setting this too small
will spam the name node.</description>
<name>yarn.log-aggregation.retain-check-interval-seconds</name>
<value>-1</value>
</property>
<property>
<description>Time in seconds to retain user logs. Only applicable if
log aggregation is disabled
</description>
<name>yarn.nodemanager.log.retain-seconds</name>
<value>10800</value>
</property>
<property>
<description>Where to aggregate logs to.</description>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/tmp/logs</value>
</property>
<property>
<description>The remote log dir will be created at
{yarn.nodemanager.remote-app-log-dir}/${user}/{thisParam}
</description>
<name>yarn.nodemanager.remote-app-log-dir-suffix</name>
<value>logs</value>
</property>
<property>
<description>Amount of physical memory, in MB, that can be allocated
for containers.</description>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>8192</value>
</property>
<property>
<description>Whether physical memory limits will be enforced for
containers.</description>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>true</value>
</property>
<property>
<description>Whether virtual memory limits will be enforced for
containers.</description>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>true</value>
</property>
<property>
<description>Ratio between virtual memory to physical memory when
setting memory limits for containers. Container allocations are
expressed in terms of physical memory, and virtual memory usage
is allowed to exceed this allocation by this ratio.
</description>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2.1</value>
</property>
<property>
<description>Number of CPU cores that can be allocated
for containers.</description>
<name>yarn.nodemanager.resource.cpu-cores</name>
<value>8</value>
</property>
<property>
<description>Ratio between virtual cores to physical cores when
allocating CPU resources to containers.
</description>
<name>yarn.nodemanager.vcores-pcores-ratio</name>
<value>2</value>
</property>
<property>
<description>NM Webapp address.</description>
<name>yarn.nodemanager.webapp.address</name>
<value>0.0.0.0:8042</value>
</property>
<property>
<description>How often to monitor containers.</description>
<name>yarn.nodemanager.container-monitor.interval-ms</name>
<value>3000</value>
</property>
<property>
<description>Class that calculates containers current resource utilization.</description>
<name>yarn.nodemanager.container-monitor.resource-calculator.class</name>
</property>
<property>
<description>Frequency of running node health script.</description>
<name>yarn.nodemanager.health-checker.interval-ms</name>
<value>600000</value>
</property>
<property>
<description>Script time out period.</description>
<name>yarn.nodemanager.health-checker.script.timeout-ms</name>
<value>1200000</value>
</property>
<property>
<description>The health check script to run.</description>
<name>yarn.nodemanager.health-checker.script.path</name>
<value></value>
</property>
<property>
<description>The arguments to pass to the health check script.</description>
<name>yarn.nodemanager.health-checker.script.opts</name>
<value></value>
</property>
<property>
<description>Frequency of running disk health checker code.</description>
<name>yarn.nodemanager.disk-health-checker.interval-ms</name>
<value>120000</value>
</property>
<property>
<description>The minimum fraction of disks that must be healthy for the
nodemanager to launch new containers. This corresponds to both
yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs, i.e. if there
are fewer healthy local-dirs (or log-dirs) available, then
new containers will not be launched on this node.</description>
<name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
<value>0.25</value>
</property>
<property>
<description>The path to the Linux container executor.</description>
<name>yarn.nodemanager.linux-container-executor.path</name>
</property>
<property>
<description>The class which should help the LCE handle resources.</description>
<name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
<value>org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler</value>
<!-- <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value> -->
</property>
<property>
<description>The cgroups hierarchy under which to place YARN processes (cannot contain commas).
If yarn.nodemanager.linux-container-executor.cgroups.mount is false (that is, if cgroups have
been pre-configured), then this cgroups hierarchy must already exist and be writable by the
NodeManager user, otherwise the NodeManager may fail.
Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler.</description>
<name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
<value>/hadoop-yarn</value>
</property>
<property>
<description>Whether the LCE should attempt to mount cgroups if not found.
Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler.</description>
<name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
<value>false</value>
</property>
<property>
<description>Where the LCE should attempt to mount cgroups if not found. Common locations
include /sys/fs/cgroup and /cgroup; the default location can vary depending on the Linux
distribution in use. This path must exist before the NodeManager is launched.
Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler, and
yarn.nodemanager.linux-container-executor.cgroups.mount is true.</description>
<name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
</property>
<property>
<description>T-file compression types used to compress aggregated logs.</description>
<name>yarn.nodemanager.log-aggregation.compression-type</name>
<value>none</value>
</property>
<property>
<description>The kerberos principal for the node manager.</description>
<name>yarn.nodemanager.principal</name>
<value></value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value></value>
<!-- <value>mapreduce.shuffle</value> -->
</property>
<property>
<description>No. of ms to wait between sending a SIGTERM and SIGKILL to a container</description>
<name>yarn.nodemanager.sleep-delay-before-sigkill.ms</name>
<value>250</value>
</property>
<property>
<description>Max time to wait for a process to come up when trying to cleanup a container</description>
<name>yarn.nodemanager.process-kill-wait.ms</name>
<value>2000</value>
</property>
<property>
<description>Max time, in seconds, to wait to establish a connection to RM when NM starts.
The NM will shutdown if it cannot connect to RM within the specified max time period.
If the value is set as -1, then NM will retry forever.</description>
<name>yarn.nodemanager.resourcemanager.connect.wait.secs</name>
<value>900</value>
</property>
<property>
<description>Time interval, in seconds, between each NM attempt to connect to RM.</description>
<name>yarn.nodemanager.resourcemanager.connect.retry_interval.secs</name>
<value>30</value>
</property>
<!--Map Reduce configuration-->
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>mapreduce.job.jar</name>
<value/>
</property>
<property>
<name>mapreduce.job.hdfs-servers</name>
<value>${fs.defaultFS}</value>
</property>
<!-- WebAppProxy Configuration-->
<property>
<description>The kerberos principal for the proxy, if the proxy is not
running as part of the RM.</description>
<name>yarn.web-proxy.principal</name>
<value/>
</property>
<property>
<description>Keytab for WebAppProxy, if the proxy is not running as part of
the RM.</description>
<name>yarn.web-proxy.keytab</name>
</property>
<property>
<description>The address for the web proxy as HOST:PORT, if this is not
given then the proxy will run as part of the RM</description>
<name>yarn.web-proxy.address</name>
<value/>
</property>
<!-- Applications' Configuration-->
<property>
<description>CLASSPATH for YARN applications. A comma-separated list
of CLASSPATH entries</description>
<name>yarn.application.classpath</name>
<value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
</property>
</configuration>

View File

@ -1,168 +0,0 @@
# Copyright (c) 2014 Intel Corporation
# Copyright (c) 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara import conductor
from sahara import context
from sahara.openstack.common import log as logging
from sahara.plugins.general import exceptions as ex
from sahara.plugins.general import utils as u
from sahara.plugins.intel import abstractversionhandler as avm
from sahara.plugins.intel.v3_0_2 import config_helper as c_helper
from sahara.plugins.intel.v3_0_2 import installer as ins
LOG = logging.getLogger(__name__)
conductor = conductor.API
class VersionHandler(avm.AbstractVersionHandler):
def get_node_processes(self):
processes = {
"Manager": ["manager"],
"HDFS": ["namenode", "datanode", "secondarynamenode"],
"MapReduce": ["resourcemanager", "historyserver", "nodemanager"],
"Hadoop": [],
"JobFlow": ["oozie"]
}
return processes
def get_plugin_configs(self):
return c_helper.get_plugin_configs()
def configure_cluster(self, cluster):
LOG.info("Configure IDH cluster")
cluster = ins.create_hadoop_ssh_keys(cluster)
ins.configure_os(cluster)
ins.install_manager(cluster)
ins.install_cluster(cluster)
def start_cluster(self, cluster):
LOG.info("Start IDH cluster")
ins.start_cluster(cluster)
self._set_cluster_info(cluster)
def validate(self, cluster):
nn_count = sum([ng.count for ng
in u.get_node_groups(cluster, 'namenode')])
if nn_count != 1:
raise ex.InvalidComponentCountException('namenode', 1, nn_count)
rm_count = sum([ng.count for ng
in u.get_node_groups(cluster, 'resourcemanager')])
if rm_count > 1:
raise ex.InvalidComponentCountException(
'resourcemanager', '0 or 1', rm_count)
tt_count = sum([ng.count for ng
in u.get_node_groups(cluster, 'nodemanager')])
if rm_count == 0 and tt_count > 0:
raise ex.RequiredServiceMissingException(
'resourcemanager', required_by='nodemanager')
mng_count = sum([ng.count for ng
in u.get_node_groups(cluster, 'manager')])
if mng_count != 1:
raise ex.InvalidComponentCountException('manager', 1, mng_count)
def scale_cluster(self, cluster, instances):
ins.configure_os_from_instances(cluster, instances)
ins.scale_cluster(cluster, instances)
def decommission_nodes(self, cluster, instances):
ins.decommission_nodes(cluster, instances)
def validate_scaling(self, cluster, existing, additional):
self._validate_additional_ng_scaling(cluster, additional)
self._validate_existing_ng_scaling(cluster, existing)
def _get_scalable_processes(self):
return ["datanode", "nodemanager"]
def _get_by_id(self, lst, id):
for obj in lst:
if obj.id == id:
return obj
def _validate_additional_ng_scaling(self, cluster, additional):
rm = u.get_resourcemanager(cluster)
scalable_processes = self._get_scalable_processes()
for ng_id in additional:
ng = self._get_by_id(cluster.node_groups, ng_id)
if not set(ng.node_processes).issubset(scalable_processes):
raise ex.NodeGroupCannotBeScaled(
ng.name, "Intel plugin cannot scale nodegroup"
" with processes: " +
' '.join(ng.node_processes))
if not rm and 'nodemanager' in ng.node_processes:
raise ex.NodeGroupCannotBeScaled(
ng.name, "Intel plugin cannot scale node group with "
"processes which have no master-processes run "
"in cluster")
def _validate_existing_ng_scaling(self, cluster, existing):
scalable_processes = self._get_scalable_processes()
dn_to_delete = 0
for ng in cluster.node_groups:
if ng.id in existing:
if ng.count > existing[ng.id] and "datanode" in \
ng.node_processes:
dn_to_delete += ng.count - existing[ng.id]
if not set(ng.node_processes).issubset(scalable_processes):
raise ex.NodeGroupCannotBeScaled(
ng.name, "Intel plugin cannot scale nodegroup"
" with processes: " +
' '.join(ng.node_processes))
def _set_cluster_info(self, cluster):
mng = u.get_instances(cluster, 'manager')[0]
nn = u.get_namenode(cluster)
jt = u.get_resourcemanager(cluster)
oozie = u.get_oozie(cluster)
#TODO(alazarev) make port configurable (bug #1262895)
info = {'IDH Manager': {
'Web UI': 'https://%s:9443' % mng.management_ip
}}
if jt:
#TODO(alazarev) make port configurable (bug #1262895)
info['Yarn'] = {
'ResourceManager Web UI': 'http://%s:8088' % jt.management_ip
}
#TODO(alazarev) make port configurable (bug #1262895)
info['Yarn']['ResourceManager'] = '%s:8032' % jt.hostname()
if nn:
#TODO(alazarev) make port configurable (bug #1262895)
info['HDFS'] = {
'Web UI': 'http://%s:50070' % nn.management_ip
}
#TODO(alazarev) make port configurable (bug #1262895)
info['HDFS']['NameNode'] = 'hdfs://%s:8020' % nn.hostname()
if oozie:
#TODO(alazarev) make port configurable (bug #1262895)
info['JobFlow'] = {
'Oozie': 'http://%s:11000' % oozie.management_ip
}
ctx = context.ctx()
conductor.cluster_update(ctx, cluster, {'info': info})
def get_resource_manager_uri(self, cluster):
return cluster['info']['Yarn']['ResourceManager']

View File

@ -1,55 +0,0 @@
# Copyright (c) 2014 Intel Corporation
# Copyright (c) 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import re
class VersionFactory():
versions = None
modules = None
initialized = False
@staticmethod
def get_instance():
if not VersionFactory.initialized:
src_dir = os.path.join(os.path.dirname(__file__), '')
VersionFactory.versions = (
[name[1:].replace('_', '.')
for name in os.listdir(src_dir)
if (os.path.isdir(os.path.join(src_dir, name))
and re.match(r'^v\d+_\d+_\d+$', name))])
VersionFactory.modules = {}
for version in VersionFactory.versions:
module_name = 'sahara.plugins.intel.v{0}.'\
'versionhandler'.format(
version.replace('.', '_'))
module_class = getattr(
__import__(module_name, fromlist=['sahara']),
'VersionHandler')
module = module_class()
key = version.replace('_', '.')
VersionFactory.modules[key] = module
VersionFactory.initialized = True
return VersionFactory()
def get_versions(self):
return VersionFactory.versions
def get_version_handler(self, version):
return VersionFactory.modules[version]
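For reference, the factory above scans its own package directory for vX_Y_Z sub-packages and maps version strings to handler objects. A minimal sketch of how the (now removed) IDH plugin resolved a handler, assuming the v3_0_2 package is importable:

.. sourcecode:: python

    factory = VersionFactory.get_instance()
    factory.get_versions()                  # e.g. ['2.5.1', '3.0.2'], depending on packages found
    handler = factory.get_version_handler('3.0.2')
    handler.get_node_processes()            # {'Manager': ['manager'], 'HDFS': [...], ...}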

View File

@ -34,7 +34,7 @@ corresponding tox env:
..
<tag> may have the following values: ``transient``, ``vanilla1``, ``vanilla2``,
``hdp``, ``idh2`` and ``idh3``.
``hdp``.
For example, you want to run tests for the Vanilla plugin with the Hadoop
version 1.2.1. In this case you should use the following tox env:
@ -53,12 +53,12 @@ should use the corresponding tox env:
..
For example, you want to run tests for the Vanilla plugin with the Hadoop
version 2.3.0 and for the IDH plugin with the Intel Hadoop version 3.0.2. In
this case you should use the following tox env:
version 2.3.0 and for the HDP plugin with the Hortonworks Data Platform version
1.3.2. In this case you should use the following tox env:
.. sourcecode:: console
$ tox -e integration -- vanilla2 idh3
$ tox -e integration -- vanilla2 hdp
..
Here are a few more examples.
@ -69,9 +69,9 @@ version 1.2.1. More info about transient cluster see in section ``Contents``.
``tox -e integration -- hdp`` will run tests for the HDP plugin.
``tox -e integration -- transient vanilla2 idh2`` will run test for transient
cluster, tests for the Vanilla plugin with the Hadoop version 1.2.1 and tests
for the IDH plugin with the Intel Hadoop version 2.5.1.
``tox -e integration -- transient vanilla2 hdp`` will run tests for transient
cluster, tests for the Vanilla plugin with the Hadoop version 2.3.0 and tests
for the HDP plugin with the Hortonworks Data Platform version 1.3.2.
Contents
--------
@ -155,18 +155,3 @@ The HDP plugin has the following checks:
4. Elastic Data Processing (EDP).
5. Swift availability.
6. Cluster scaling.
The IDH plugin with the Intel Hadoop version 2.5.1 has the following checks:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1. Proper cluster creation.
2. Map Reduce.
3. Swift availability.
4. Cluster scaling.
The IDH plugin with the Intel Hadoop version 3.0.2 has the following checks:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1. Proper cluster creation.
2. Swift availability.
3. Cluster scaling.

View File

@ -407,185 +407,6 @@ HDP_CONFIG_OPTS = [
cfg.BoolOpt('SKIP_SCALING_TEST', default=False)
]
IDH2_CONFIG_GROUP = cfg.OptGroup(name='IDH2')
IDH2_CONFIG_OPTS = [
cfg.StrOpt('PLUGIN_NAME',
default='idh',
help='Name of plugin.'),
cfg.StrOpt('MANAGER_FLAVOR_ID',
default='3',
help='Flavor ID for the Intel Manager. The Intel Manager '
'requires >= 4 GB of RAM.'),
cfg.StrOpt('IMAGE_ID',
default=None,
help='ID for image which is used for cluster creation. Also '
'you can specify image name or tag of image instead of '
'image ID. If you do not specify image related '
'parameters, then image for cluster creation will be '
'chosen by tag "sahara_i_tests".'),
cfg.StrOpt('IMAGE_NAME',
default=None,
help='Name for image which is used for cluster creation. Also '
'you can specify image ID or tag of image instead of '
'image name. If you do not specify image related '
'parameters, then image for cluster creation will be '
'chosen by tag "sahara_i_tests".'),
cfg.StrOpt('IMAGE_TAG',
default=None,
help='Tag for image which is used for cluster creation. Also '
'you can specify image ID or image name instead of tag of '
'image. If you do not specify image related parameters, '
'then image for cluster creation will be chosen by '
'tag "sahara_i_tests".'),
cfg.StrOpt('SSH_USERNAME',
default=None,
help='Username to get cluster node with SSH.'),
cfg.StrOpt('HADOOP_VERSION',
default='2.5.1',
help='Version of Hadoop.'),
cfg.StrOpt('HADOOP_USER',
default='hdfs',
help='Username which is used for access to Hadoop services.'),
cfg.StrOpt('HADOOP_DIRECTORY',
default='/usr/lib/hadoop',
help='Directory where Hadoop jar files are located.'),
cfg.StrOpt('HADOOP_EXAMPLES_JAR_PATH',
default='/usr/lib/hadoop/hadoop-examples.jar',
help='Path to hadoop examples jar file.'),
cfg.StrOpt('HADOOP_LOG_DIRECTORY',
default='/var/log/hadoop/userlogs',
help='Directory where log info about completed jobs is '
'located.'),
cfg.DictOpt('HADOOP_PROCESSES_WITH_PORTS',
default={
'jobtracker': 54311,
'namenode': 50070,
'tasktracker': 50060,
'datanode': 50075,
'secondarynamenode': 50090,
'oozie': 11000
},
help='Hadoop process map with ports for IDH plugin.'),
cfg.DictOpt('PROCESS_NAMES',
default={
'nn': 'namenode',
'tt': 'tasktracker',
'dn': 'datanode'
},
help='Names for namenode, tasktracker and datanode '
'processes.'),
cfg.StrOpt(
'IDH_TARBALL_URL',
default='http://repo2.intelhadoop.com/setup/'
'setup-intelhadoop-2.5.1-en-evaluation.RHEL.tar.gz'
),
cfg.StrOpt(
'IDH_REPO_URL',
default='http://repo2.intelhadoop.com/evaluation/en/RHEL/2.5.1/rpm'
),
cfg.StrOpt(
'OS_REPO_URL',
default='http://mirror.centos.org/centos-6/6/os/x86_64'
),
cfg.BoolOpt('SKIP_ALL_TESTS_FOR_PLUGIN',
default=False,
help='If this flag is True, then all tests for IDH plugin '
'will be skipped.'),
cfg.BoolOpt('SKIP_MAP_REDUCE_TEST', default=False),
cfg.BoolOpt('SKIP_SWIFT_TEST', default=True),
cfg.BoolOpt('SKIP_SCALING_TEST', default=False)
]
IDH3_CONFIG_GROUP = cfg.OptGroup(name='IDH3')
IDH3_CONFIG_OPTS = [
cfg.StrOpt('PLUGIN_NAME',
default='idh',
help='Name of plugin.'),
cfg.StrOpt('MANAGER_FLAVOR_ID',
default='3',
help='Flavor ID for the Intel Manager. The Intel Manager '
'requires >= 4 GB of RAM.'),
cfg.StrOpt('IMAGE_ID',
default=None,
help='ID for image which is used for cluster creation. Also '
'you can specify image name or tag of image instead of '
'image ID. If you do not specify image related '
'parameters, then image for cluster creation will be '
'chosen by tag "sahara_i_tests".'),
cfg.StrOpt('IMAGE_NAME',
default=None,
help='Name for image which is used for cluster creation. Also '
'you can specify image ID or tag of image instead of '
'image name. If you do not specify image related '
'parameters, then image for cluster creation will be '
'chosen by tag "sahara_i_tests".'),
cfg.StrOpt('IMAGE_TAG',
default=None,
help='Tag for image which is used for cluster creation. Also '
'you can specify image ID or image name instead of tag of '
'image. If you do not specify image related parameters, '
'then image for cluster creation will be chosen by '
'tag "sahara_i_tests".'),
cfg.StrOpt('SSH_USERNAME',
default=None,
help='Username to get cluster node with SSH.'),
cfg.StrOpt('HADOOP_VERSION',
default='3.0.2',
help='Version of Hadoop.'),
cfg.StrOpt('HADOOP_USER',
default='hdfs',
help='Username which is used for access to Hadoop services.'),
cfg.DictOpt('HADOOP_PROCESSES_WITH_PORTS',
default={
'resourcemanager': 8032,
'namenode': 50070,
'tasktracker': 50060,
'datanode': 50075,
'secondarynamenode': 50090,
'oozie': 11000
},
help='Hadoop process map with ports for IDH plugin.'),
cfg.DictOpt('PROCESS_NAMES',
default={
'nn': 'namenode',
'tt': 'nodemanager',
'dn': 'datanode'
},
help='Names for namenode, tasktracker and datanode '
'processes.'),
cfg.StrOpt(
'IDH_TARBALL_URL',
default='http://repo2.intelhadoop.com/setup/'
'setup-intelhadoop-3.0.2-en-evaluation.RHEL.tar.gz'
),
cfg.StrOpt(
'IDH_REPO_URL',
default='http://repo2.intelhadoop.com/evaluation/'
'en/RHEL/3.0.2/rpm'
),
cfg.StrOpt(
'OS_REPO_URL',
default='http://mirror.centos.org/centos-6/6/os/x86_64'
),
cfg.BoolOpt('SKIP_ALL_TESTS_FOR_PLUGIN',
default=False,
help='If this flag is True, then all tests for IDH plugin '
'will be skipped.'),
cfg.BoolOpt('SKIP_SWIFT_TEST', default=True),
cfg.BoolOpt('SKIP_SCALING_TEST', default=False)
]
def register_config(config, config_group, config_opts):
config.register_group(config_group)
@ -614,10 +435,8 @@ class ITConfig:
register_config(cfg.CONF, COMMON_CONFIG_GROUP, COMMON_CONFIG_OPTS)
register_config(cfg.CONF, VANILLA_CONFIG_GROUP, VANILLA_CONFIG_OPTS)
register_config(cfg.CONF, HDP_CONFIG_GROUP, HDP_CONFIG_OPTS)
register_config(cfg.CONF, IDH2_CONFIG_GROUP, IDH2_CONFIG_OPTS)
register_config(
cfg.CONF, VANILLA_TWO_CONFIG_GROUP, VANILLA_TWO_CONFIG_OPTS)
register_config(cfg.CONF, IDH3_CONFIG_GROUP, IDH3_CONFIG_OPTS)
cfg.CONF(
[], project='Sahara_integration_tests',
@ -628,5 +447,3 @@ class ITConfig:
self.vanilla_config = cfg.CONF.VANILLA
self.vanilla_two_config = cfg.CONF.VANILLA_TWO
self.hdp_config = cfg.CONF.HDP
self.idh2_config = cfg.CONF.IDH2
self.idh3_config = cfg.CONF.IDH3

View File

@ -75,8 +75,6 @@ class ITestCase(testtools.TestCase, testtools.testcase.WithAttributes,
self.vanilla_config = cfg.ITConfig().vanilla_config
self.vanilla_two_config = cfg.ITConfig().vanilla_two_config
self.hdp_config = cfg.ITConfig().hdp_config
self.idh2_config = cfg.ITConfig().idh2_config
self.idh3_config = cfg.ITConfig().idh3_config
telnetlib.Telnet(
self.common_config.SAHARA_HOST, self.common_config.SAHARA_PORT

View File

@ -1,240 +0,0 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from testtools import testcase
from sahara.tests.integration.configs import config as cfg
from sahara.tests.integration.tests import base as b
from sahara.tests.integration.tests import cluster_configs
from sahara.tests.integration.tests import edp
from sahara.tests.integration.tests import map_reduce
from sahara.tests.integration.tests import scaling
from sahara.tests.integration.tests import swift
class IDH2GatingTest(cluster_configs.ClusterConfigTest, edp.EDPTest,
map_reduce.MapReduceTest, swift.SwiftTest,
scaling.ScalingTest):
idh2_config = cfg.ITConfig().idh2_config
SKIP_MAP_REDUCE_TEST = idh2_config.SKIP_MAP_REDUCE_TEST
SKIP_SWIFT_TEST = idh2_config.SKIP_SWIFT_TEST
SKIP_SCALING_TEST = idh2_config.SKIP_SCALING_TEST
def setUp(self):
super(IDH2GatingTest, self).setUp()
self.cluster_id = None
self.cluster_template_id = None
self.ng_template_ids = []
def _prepare_test(self):
self.floating_ip_pool = self.common_config.FLOATING_IP_POOL
self.internal_neutron_net = None
if self.common_config.NEUTRON_ENABLED:
self.internal_neutron_net = self.get_internal_neutron_net_id()
self.floating_ip_pool = \
self.get_floating_ip_pool_id_for_neutron_net()
self.idh2_config.IMAGE_ID, self.idh2_config.SSH_USERNAME = (
self.get_image_id_and_ssh_username(self.idh2_config))
@b.errormsg("Failure while 'tt-dn' node group template creation: ")
def _create_tt_dn_ng_template(self):
template = {
'name': 'test-node-group-template-idh-tt-dn',
'plugin_config': self.idh2_config,
'description': 'test node group template for Intel plugin',
'volumes_per_node': 0,
'volume_size': 0,
'node_processes': ['tasktracker', 'datanode'],
'floating_ip_pool': self.floating_ip_pool,
'node_configs': {}
}
self.ng_tmpl_tt_dn_id = self.create_node_group_template(**template)
self.ng_template_ids.append(self.ng_tmpl_tt_dn_id)
@b.errormsg("Failure while 'tt' node group template creation: ")
def _create_tt_ng_template(self):
template = {
'name': 'test-node-group-template-idh-tt',
'plugin_config': self.idh2_config,
'description': 'test node group template for Intel plugin',
'volumes_per_node': 0,
'volume_size': 0,
'node_processes': ['tasktracker'],
'floating_ip_pool': self.floating_ip_pool,
'node_configs': {}
}
self.ng_tmpl_tt_id = self.create_node_group_template(**template)
self.ng_template_ids.append(self.ng_tmpl_tt_id)
@b.errormsg("Failure while 'dn' node group template creation: ")
def _create_dn_ng_template(self):
template = {
'name': 'test-node-group-template-idh-dn',
'plugin_config': self.idh2_config,
'description': 'test node group template for Intel plugin',
'volumes_per_node': 0,
'volume_size': 0,
'node_processes': ['datanode'],
'floating_ip_pool': self.floating_ip_pool,
'node_configs': {}
}
self.ng_tmpl_dn_id = self.create_node_group_template(**template)
self.ng_template_ids.append(self.ng_tmpl_dn_id)
@b.errormsg("Failure while cluster template creation: ")
def _create_cluster_template(self):
template = {
'name': 'test-cluster-template-idh',
'plugin_config': self.idh2_config,
'description': 'test cluster template for Intel plugin',
'cluster_configs': {
'general': {
'Enable Swift': True,
'IDH tarball URL': self.idh2_config.IDH_TARBALL_URL,
'IDH repository URL': self.idh2_config.IDH_REPO_URL,
'OS repository URL': self.idh2_config.OS_REPO_URL
},
'HDFS': {
'dfs.replication': 1
}
},
'node_groups': [
{
'name': 'manager-node',
'flavor_id': self.idh2_config.MANAGER_FLAVOR_ID,
'node_processes': ['manager'],
'floating_ip_pool': self.floating_ip_pool,
'count': 1
},
{
'name': 'master-node-jt-nn',
'flavor_id': self.flavor_id,
'node_processes': ['namenode', 'jobtracker'],
'floating_ip_pool': self.floating_ip_pool,
'count': 1
},
{
'name': 'worker-node-tt-dn',
'node_group_template_id': self.ng_tmpl_tt_dn_id,
'count': 2
},
{
'name': 'worker-node-dn',
'node_group_template_id': self.ng_tmpl_dn_id,
'count': 1
},
{
'name': 'worker-node-tt',
'node_group_template_id': self.ng_tmpl_tt_id,
'count': 1
}
],
'net_id': self.internal_neutron_net
}
self.cluster_template_id = self.create_cluster_template(**template)
@b.errormsg("Failure while cluster creation: ")
def _create_cluster(self):
cluster_name = (self.common_config.CLUSTER_NAME + '-' +
self.idh2_config.PLUGIN_NAME)
cluster = {
'name': cluster_name,
'plugin_config': self.idh2_config,
'cluster_template_id': self.cluster_template_id,
'description': 'test cluster',
'cluster_configs': {}
}
self.create_cluster(**cluster)
self.cluster_info = self.get_cluster_info(self.idh2_config)
self.await_active_workers_for_namenode(self.cluster_info['node_info'],
self.idh2_config)
@b.errormsg("Failure while Map Reduce testing: ")
def _check_mapreduce(self):
self.map_reduce_testing(self.cluster_info)
@b.errormsg("Failure during check of Swift availability: ")
def _check_swift(self):
self.check_swift_availability(self.cluster_info)
@b.errormsg("Failure while cluster scaling: ")
def _check_scaling(self):
change_list = [
{
'operation': 'resize',
'info': ['worker-node-tt-dn', 4]
},
{
'operation': 'resize',
'info': ['worker-node-dn', 0]
},
{
'operation': 'resize',
'info': ['worker-node-tt', 0]
},
{
'operation': 'add',
'info': [
'new-worker-node-tt', 1, '%s' % self.ng_tmpl_tt_id
]
},
{
'operation': 'add',
'info': [
'new-worker-node-dn', 1, '%s' % self.ng_tmpl_dn_id
]
}
]
self.cluster_info = self.cluster_scaling(self.cluster_info,
change_list)
self.await_active_workers_for_namenode(self.cluster_info['node_info'],
self.idh2_config)
@b.errormsg("Failure while Map Reduce testing after cluster scaling: ")
def _check_mapreduce_after_scaling(self):
if not self.idh2_config.SKIP_SCALING_TEST:
self.map_reduce_testing(self.cluster_info)
@b.errormsg(
"Failure during check of Swift availability after cluster scaling: ")
def _check_swift_after_scaling(self):
if not self.idh2_config.SKIP_SCALING_TEST:
self.check_swift_availability(self.cluster_info)
@testcase.skipIf(cfg.ITConfig().idh2_config.SKIP_ALL_TESTS_FOR_PLUGIN,
"All tests for Intel plugin were skipped")
@testcase.attr('idh2')
def test_idh_plugin_gating(self):
self._prepare_test()
self._create_tt_dn_ng_template()
self._create_tt_ng_template()
self._create_dn_ng_template()
self._create_cluster_template()
self._create_cluster()
self._check_mapreduce()
self._check_swift()
self._check_scaling()
self._check_mapreduce_after_scaling()
self._check_swift_after_scaling()
def tearDown(self):
self.delete_objects(self.cluster_id, self.cluster_template_id,
self.ng_template_ids)
super(IDH2GatingTest, self).tearDown()

View File

@ -1,224 +0,0 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from testtools import testcase
from sahara.tests.integration.configs import config as cfg
from sahara.tests.integration.tests import base
from sahara.tests.integration.tests import scaling
from sahara.tests.integration.tests import swift
class IDH3GatingTest(swift.SwiftTest,
scaling.ScalingTest):
idh3_config = cfg.ITConfig().idh3_config
SKIP_SWIFT_TEST = idh3_config.SKIP_SWIFT_TEST
SKIP_SCALING_TEST = idh3_config.SKIP_SCALING_TEST
def setUp(self):
super(IDH3GatingTest, self).setUp()
self.cluster_id = None
self.cluster_template_id = None
self.ng_template_ids = []
def _prepare_test(self):
self.idh3_config = cfg.ITConfig().idh3_config
self.floating_ip_pool = self.common_config.FLOATING_IP_POOL
self.internal_neutron_net = None
if self.common_config.NEUTRON_ENABLED:
self.internal_neutron_net = self.get_internal_neutron_net_id()
self.floating_ip_pool = \
self.get_floating_ip_pool_id_for_neutron_net()
self.idh3_config.IMAGE_ID, self.idh3_config.SSH_USERNAME = (
self.get_image_id_and_ssh_username(self.idh3_config))
@base.errormsg("Failure while 'tt-dn' node group template creation: ")
def _create_tt_dn_ng_template(self):
template = {
'name': 'test-node-group-template-idh3-tt-dn',
'plugin_config': self.idh3_config,
'description': 'test node group template for Intel plugin',
'volumes_per_node': 0,
'volume_size': 0,
'node_processes': ['nodemanager', 'datanode'],
'floating_ip_pool': self.floating_ip_pool,
'node_configs': {}
}
self.ng_tmpl_tt_dn_id = self.create_node_group_template(**template)
self.ng_template_ids.append(self.ng_tmpl_tt_dn_id)
@base.errormsg("Failure while 'tt' node group template creation: ")
def _create_tt_ng_template(self):
template = {
'name': 'test-node-group-template-idh3-tt',
'plugin_config': self.idh3_config,
'description': 'test node group template for Intel plugin',
'volumes_per_node': 0,
'volume_size': 0,
'node_processes': ['nodemanager'],
'floating_ip_pool': self.floating_ip_pool,
'node_configs': {}
}
self.ng_tmpl_tt_id = self.create_node_group_template(**template)
self.ng_template_ids.append(self.ng_tmpl_tt_id)
@base.errormsg("Failure while 'dn' node group template creation: ")
def _create_dn_ng_template(self):
template = {
'name': 'test-node-group-template-idh3-dn',
'plugin_config': self.idh3_config,
'description': 'test node group template for Intel plugin',
'volumes_per_node': 0,
'volume_size': 0,
'node_processes': ['datanode'],
'floating_ip_pool': self.floating_ip_pool,
'node_configs': {}
}
self.ng_tmpl_dn_id = self.create_node_group_template(**template)
self.ng_template_ids.append(self.ng_tmpl_dn_id)
@base.errormsg("Failure while cluster template creation: ")
def _create_cluster_template(self):
template = {
'name': 'test-cluster-template-idh3',
'plugin_config': self.idh3_config,
'description': 'test cluster template for Intel plugin',
'cluster_configs': {
'general': {
'Enable Swift': True,
'IDH tarball URL': self.idh3_config.IDH_TARBALL_URL,
'IDH repository URL': self.idh3_config.IDH_REPO_URL,
'OS repository URL': self.idh3_config.OS_REPO_URL
},
'HDFS': {
'dfs.replication': 1
}
},
'node_groups': [
{
'name': 'manager-node',
'flavor_id': self.idh3_config.MANAGER_FLAVOR_ID,
'node_processes': ['manager'],
'floating_ip_pool': self.floating_ip_pool,
'count': 1
},
{
'name': 'master-node-jt-nn-hm',
'flavor_id': self.flavor_id,
'node_processes': ['namenode', 'resourcemanager',
'historyserver'],
'floating_ip_pool': self.floating_ip_pool,
'count': 1
},
{
'name': 'worker-node-tt-dn',
'node_group_template_id': self.ng_tmpl_tt_dn_id,
'count': 2
},
{
'name': 'worker-node-dn',
'node_group_template_id': self.ng_tmpl_dn_id,
'count': 1
},
{
'name': 'worker-node-tt',
'node_group_template_id': self.ng_tmpl_tt_id,
'count': 1
}
],
'net_id': self.internal_neutron_net
}
self.cluster_template_id = self.create_cluster_template(**template)
@base.errormsg("Failure while cluster creation: ")
def _create_cluster(self):
cluster_name = '%s-%s-v3' % (self.common_config.CLUSTER_NAME,
self.idh3_config.PLUGIN_NAME)
cluster = {
'name': cluster_name,
'plugin_config': self.idh3_config,
'cluster_template_id': self.cluster_template_id,
'description': 'test cluster',
'cluster_configs': {}
}
self.create_cluster(**cluster)
self.cluster_info = self.get_cluster_info(self.idh3_config)
self.await_active_workers_for_namenode(self.cluster_info['node_info'],
self.idh3_config)
@base.errormsg("Failure during check of Swift availability: ")
def _check_swift(self):
self.check_swift_availability(self.cluster_info)
@base.errormsg("Failure while cluster scaling: ")
def _check_scaling(self):
change_list = [
{
'operation': 'resize',
'info': ['worker-node-tt-dn', 4]
},
{
'operation': 'resize',
'info': ['worker-node-dn', 0]
},
{
'operation': 'resize',
'info': ['worker-node-tt', 0]
},
{
'operation': 'add',
'info': [
'new-worker-node-tt', 1, '%s' % self.ng_tmpl_tt_id
]
},
{
'operation': 'add',
'info': [
'new-worker-node-dn', 1, '%s' % self.ng_tmpl_dn_id
]
}
]
self.cluster_info = self.cluster_scaling(self.cluster_info,
change_list)
@base.errormsg(
"Failure during check of Swift availability after cluster scaling: ")
def _check_swift_after_scaling(self):
if not self.idh3_config.SKIP_SCALING_TEST:
self.check_swift_availability(self.cluster_info)
@testcase.skipIf(cfg.ITConfig().idh3_config.SKIP_ALL_TESTS_FOR_PLUGIN,
"All tests for Intel plugin were skipped")
@testcase.attr('idh3')
def test_idh_plugin_gating(self):
self._prepare_test()
self._create_tt_dn_ng_template()
self._create_tt_ng_template()
self._create_dn_ng_template()
self._create_cluster_template()
self._create_cluster()
self._check_swift()
self._check_scaling()
self._check_swift_after_scaling()
def tearDown(self):
self.delete_objects(self.cluster_id, self.cluster_template_id,
self.ng_template_ids)
super(IDH3GatingTest, self).tearDown()

View File

@ -1,67 +0,0 @@
# Copyright (c) 2014 Intel Corporation
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from sahara.plugins.intel.client import services
from sahara.plugins.intel import exceptions as iex
from sahara.tests.unit import base
class HDFSServiceTest(base.SaharaTestCase):
def test_get_datanode_status_nodomains(self):
self.override_config("node_domain", "domain")
ctx = mock.Mock()
ctx.cluster_name = 'cluster'
ctx.rest.get.return_value = {
"items": [
{"status": "Stopped", "hostname": "manager-001"},
{"status": "Stopped", "hostname": "master-001"},
{"status": "Running", "hostname": "worker-001"},
{"status": "Running", "hostname": "worker-002"},
{"status": "Decomissioned", "hostname": "worker-003"}]}
hdfs = services.HDFSService(ctx, 'hdfs')
self.assertEqual('Stopped',
hdfs.get_datanode_status('master-001.domain'))
self.assertEqual('Running',
hdfs.get_datanode_status('worker-001.domain'))
self.assertEqual('Decomissioned',
hdfs.get_datanode_status('worker-003.domain'))
self.assertRaises(iex.IntelPluginException,
hdfs.get_datanode_status, 'worker-004.domain')
def test_get_datanode_status_domains(self):
self.override_config("node_domain", "domain")
ctx = mock.Mock()
ctx.cluster_name = 'cluster'
ctx.rest.get.return_value = {
"items": [
{"status": "Stopped", "hostname": "manager-001.domain"},
{"status": "Stopped", "hostname": "master-001.domain"},
{"status": "Running", "hostname": "worker-001.domain"},
{"status": "Running", "hostname": "worker-002.domain"},
{"status": "Decomissioned", "hostname": "worker-003.domain"}]}
hdfs = services.HDFSService(ctx, 'hdfs')
self.assertEqual('Stopped',
hdfs.get_datanode_status('master-001.domain'))
self.assertEqual('Running',
hdfs.get_datanode_status('worker-001.domain'))
self.assertEqual('Decomissioned',
hdfs.get_datanode_status('worker-003.domain'))
self.assertRaises(iex.IntelPluginException,
hdfs.get_datanode_status, 'worker-004.domain')

View File

@ -1,64 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins.general import exceptions as g_ex
from sahara.plugins.intel import plugin as p
from sahara.plugins.intel.v2_5_1 import config_helper as c_helper
from sahara.tests.unit import base
from sahara.tests.unit import testutils as tu
class TestIDHPlugin251(base.SaharaWithDbTestCase):
def test_get_configs(self):
plugin = p.IDHProvider()
configs = plugin.get_configs('2.5.1')
self.assertIn(c_helper.IDH_REPO_URL, configs)
self.assertIn(c_helper.IDH_TARBALL_URL, configs)
self.assertIn(c_helper.OS_REPO_URL, configs)
def test_validate(self):
plugin = p.IDHProvider()
ng_mng = tu.make_ng_dict('mng', 'f1', ['manager'], 1)
ng_nn = tu.make_ng_dict('nn', 'f1', ['namenode'], 1)
ng_jt = tu.make_ng_dict('jt', 'f1', ['jobtracker'], 1)
ng_dn = tu.make_ng_dict('dn', 'f1', ['datanode'], 2)
ng_tt = tu.make_ng_dict('tt', 'f1', ['tasktracker'], 2)
cl = tu.create_cluster('cl1', 't1', 'intel', '2.5.1',
[ng_nn] + [ng_dn])
self.assertRaises(g_ex.InvalidComponentCountException,
plugin.validate, cl)
cl = tu.create_cluster('cl1', 't1', 'intel', '2.5.1', [ng_mng])
self.assertRaises(g_ex.InvalidComponentCountException,
plugin.validate, cl)
cl = tu.create_cluster('cl1', 't1', 'intel', '2.5.1',
[ng_mng] + [ng_nn] * 2)
self.assertRaises(g_ex.InvalidComponentCountException,
plugin.validate, cl)
cl = tu.create_cluster('cl1', 't1', 'intel', '2.5.1',
[ng_mng] + [ng_nn] + [ng_tt])
self.assertRaises(g_ex.RequiredServiceMissingException,
plugin.validate, cl)
cl = tu.create_cluster('cl1', 't1', 'intel', '2.5.1',
[ng_mng] + [ng_nn] + [ng_jt] * 2 + [ng_tt])
self.assertRaises(g_ex.InvalidComponentCountException,
plugin.validate, cl)

View File

@ -1,64 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins.general import exceptions as g_ex
from sahara.plugins.intel import plugin as p
from sahara.plugins.intel.v3_0_2 import config_helper as c_helper
from sahara.tests.unit import base
from sahara.tests.unit import testutils as tu
class TestIDHPlugin302(base.SaharaWithDbTestCase):
def test_get_configs(self):
plugin = p.IDHProvider()
configs = plugin.get_configs('3.0.2')
self.assertIn(c_helper.IDH_REPO_URL, configs)
self.assertIn(c_helper.IDH_TARBALL_URL, configs)
self.assertIn(c_helper.OS_REPO_URL, configs)
def test_validate(self):
plugin = p.IDHProvider()
ng_mng = tu.make_ng_dict('mng', 'f1', ['manager'], 1)
ng_nn = tu.make_ng_dict('nn', 'f1', ['namenode'], 1)
ng_rm = tu.make_ng_dict('rm', 'f1', ['resourcemanager'], 1)
ng_dn = tu.make_ng_dict('dn', 'f1', ['datanode'], 2)
ng_nm = tu.make_ng_dict('nm', 'f1', ['nodemanager'], 2)
cl = tu.create_cluster('cl1', 't1', 'intel', '3.0.2',
[ng_nn] + [ng_dn])
self.assertRaises(g_ex.InvalidComponentCountException,
plugin.validate, cl)
cl = tu.create_cluster('cl1', 't1', 'intel', '3.0.2', [ng_mng])
self.assertRaises(g_ex.InvalidComponentCountException,
plugin.validate, cl)
cl = tu.create_cluster('cl1', 't1', 'intel', '3.0.2',
[ng_mng] + [ng_nn] * 2)
self.assertRaises(g_ex.InvalidComponentCountException,
plugin.validate, cl)
cl = tu.create_cluster('cl1', 't1', 'intel', '3.0.2',
[ng_mng] + [ng_nn] + [ng_nm])
self.assertRaises(g_ex.RequiredServiceMissingException,
plugin.validate, cl)
cl = tu.create_cluster('cl1', 't1', 'intel', '3.0.2',
[ng_mng] + [ng_nn] + [ng_rm] * 2 + [ng_rm])
self.assertRaises(g_ex.InvalidComponentCountException,
plugin.validate, cl)

View File

@ -1,28 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
class Response:
def __init__(self, data=None, ok=True, status_code=200):
self.text = json.dumps(data)
self.ok = ok
self.status_code = status_code
self.reason = None
def make_resp(data=None, ok=True, status_code=200):
return Response(data, ok, status_code)

View File

@ -1,320 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from requests import sessions
from sahara import exceptions as ex
from sahara.plugins.intel import exceptions as iex
from sahara.plugins.intel.v2_5_1 import client as c
from sahara.tests.unit import base
from sahara.tests.unit.plugins.intel.v2_5_1 import response as r
SESSION_POST_DATA = {'sessionID': '123'}
SESSION_GET_DATA = {"items": [
{
"nodeprogress": {
"hostname": 'host',
'info': '_ALLFINISH\n'
}
}
]}
class TestClient(base.SaharaTestCase):
def _get_instance(self):
inst_remote = mock.MagicMock()
inst_remote.get_http_client.return_value = sessions.Session()
inst_remote.__enter__.return_value = inst_remote
inst = mock.MagicMock()
inst.remote.return_value = inst_remote
return inst
@mock.patch('requests.sessions.Session.post')
@mock.patch('requests.sessions.Session.get')
def test_cluster_op(self, get, post):
client = c.IntelClient(self._get_instance(), 'rty')
data = {'lelik': 'bolik'}
post.return_value = r.make_resp(data)
self.assertEqual(client.cluster.create(), data)
get.return_value = r.make_resp(data)
self.assertEqual(client.cluster.get(), data)
post.return_value = r.make_resp(SESSION_POST_DATA)
get.return_value = r.make_resp(SESSION_GET_DATA)
client.cluster.install_software(['bla-bla'])
self.assertEqual(post.call_count, 2)
self.assertEqual(get.call_count, 2)
@mock.patch('requests.sessions.Session.delete')
@mock.patch('requests.sessions.Session.post')
@mock.patch('requests.sessions.Session.get')
def test_nodes_op(self, get, post, delete):
client = c.IntelClient(self._get_instance(), 'rty')
# add
post.return_value = r.make_resp(data={
"items": [
{
"iporhostname": "n1",
"info": "Connected"
},
{
"iporhostname": "n2",
"info": "Connected"
}
]
})
client.nodes.add(['n1', 'n2'], 'hadoop', '/Def', '/tmp/key')
post.return_value = r.make_resp(data={
"items": [
{
"iporhostname": "n1",
"info": "bla-bla"
}
]
})
self.assertRaises(iex.IntelPluginException, client.nodes.add,
['n1'], 'hadoop', '/Def', '/tmp/key')
# config
post.return_value = r.make_resp(SESSION_POST_DATA)
get.return_value = r.make_resp(SESSION_GET_DATA)
client.nodes.config()
# delete
delete.return_value = r.make_resp()
client.nodes.delete(['n1'])
# get
get.return_value = r.make_resp()
client.nodes.get()
# get_status
get.return_value = r.make_resp(data={"status": "running"})
client.nodes.get_status(['n1'])
# stop_nodes
post.return_value = r.make_resp()
client.nodes.stop(['n1'])
self.assertEqual(delete.call_count, 1)
self.assertEqual(post.call_count, 4)
self.assertEqual(get.call_count, 3)
@mock.patch('requests.sessions.Session.put')
@mock.patch('requests.sessions.Session.post')
def test_params_op(self, post, put):
client = c.IntelClient(self._get_instance(), 'rty')
post.return_value = r.make_resp()
put.return_value = r.make_resp()
# add
client.params.hdfs.add('lelik', 'bolik')
client.params.hadoop.add('lelik', 'bolik')
client.params.mapred.add('lelik', 'bolik')
# get
self.assertRaises(ex.NotImplementedException, client.params.hdfs.get,
['n1'], 'lelik')
self.assertRaises(ex.NotImplementedException, client.params.hadoop.get,
['n1'], 'lelik')
self.assertRaises(ex.NotImplementedException, client.params.mapred.get,
['n1'], 'lelik')
# update
client.params.hdfs.update('lelik', 'bolik', nodes=['n1'])
client.params.hdfs.update('lelik', 'bolik')
client.params.hadoop.update('lelik', 'bolik', nodes=['n1'])
client.params.hadoop.update('lelik', 'bolik')
client.params.mapred.update('lelik', 'bolik', nodes=['n1'])
client.params.mapred.update('lelik', 'bolik')
self.assertEqual(post.call_count, 3)
self.assertEqual(put.call_count, 6)
@mock.patch('sahara.context.sleep', lambda x: None)
@mock.patch('requests.sessions.Session.post')
@mock.patch('requests.sessions.Session.get')
def test_base_services_op(self, get, post):
client = c.IntelClient(self._get_instance(), 'rty')
# start
post.return_value = r.make_resp()
get.return_value = r.make_resp(data={
"items": [
{
"serviceName": "hdfs",
"status": "running"
},
{
"serviceName": "mapred",
"status": "running"
}
]})
client.services.hdfs.start()
client.services.mapred.start()
get.return_value = r.make_resp(data={
"items": [
{
"serviceName": "hdfs",
"status": "stopped"
},
{
"serviceName": "mapred",
"status": "stopped"
}
]
})
self.assertRaises(iex.IntelPluginException,
client.services.hdfs.start)
self.assertRaises(iex.IntelPluginException,
client.services.mapred.start)
# stop
post.return_value = r.make_resp()
client.services.hdfs.stop()
client.services.mapred.stop()
# service
get.return_value = r.make_resp(data={
"items": [
{
"serviceName": "bla-bla",
"status": "fail"
}
]
})
self.assertRaises(iex.IntelPluginException,
client.services.hdfs.status)
self.assertRaises(iex.IntelPluginException,
client.services.mapred.status)
# get_nodes
get.return_value = r.make_resp()
client.services.hdfs.get_nodes()
client.services.mapred.get_nodes()
# add_nodes
post.return_value = r.make_resp()
client.services.hdfs.add_nodes('DataNode', ['n1', 'n2'])
client.services.mapred.add_nodes('NameNode', ['n1', 'n2'])
self.assertEqual(get.call_count, 606)
self.assertEqual(post.call_count, 8)
@mock.patch('requests.sessions.Session.delete')
@mock.patch('requests.sessions.Session.post')
@mock.patch('requests.sessions.Session.get')
def test_services_op(self, get, post, delete):
client = c.IntelClient(self._get_instance(), 'rty')
# add
post.return_value = r.make_resp()
client.services.add(['hdfs', 'mapred'])
# get_services
get.return_value = r.make_resp()
client.services.get_services()
# delete_service
delete.return_value = r.make_resp()
client.services.delete_service('hdfs')
@mock.patch('requests.sessions.Session.post')
@mock.patch('requests.sessions.Session.get')
def test_hdfs_services_op(self, get, post):
client = c.IntelClient(self._get_instance(), 'rty')
# format
get.return_value = r.make_resp(SESSION_GET_DATA)
post.return_value = r.make_resp(SESSION_POST_DATA)
client.services.hdfs.format()
# decommission
post.return_value = r.make_resp()
client.services.hdfs.decommission_nodes(['n1'])
# get status
get.return_value = r.make_resp(data={
"items": [
{
"hostname": "n1",
"status": "start"
}
]
})
client.services.hdfs.get_datanodes_status()
self.assertEqual(client.services.hdfs.get_datanode_status('n1'),
'start')
self.assertRaises(iex.IntelPluginException,
client.services.hdfs.get_datanode_status, 'n2')
self.assertEqual(get.call_count, 4)
self.assertEqual(post.call_count, 2)
@mock.patch('sahara.context.sleep', lambda x: None)
@mock.patch('requests.sessions.Session.post')
@mock.patch('requests.sessions.Session.get')
def test_session_op(self, get, post):
client = c.IntelClient(self._get_instance(), 'rty')
data1 = {
"items": [
{
"nodeprogress": {
"hostname": 'host',
'info': 'info\n'
}
}
]
}
data2 = {
"items": [
{
"nodeprogress": {
"hostname": 'host',
'info': '_ALLFINISH\n'
}
}
]
}
get.side_effect = (r.make_resp(data1), r.make_resp(data2))
post.return_value = r.make_resp(SESSION_POST_DATA)
client.services.hdfs.format()
self.assertEqual(get.call_count, 2)
self.assertEqual(post.call_count, 1)
@mock.patch('requests.sessions.Session.get')
def test_rest_client(self, get):
client = c.IntelClient(self._get_instance(), 'rty')
get.return_value = r.make_resp(ok=False, status_code=500, data={
"message": "message"
})
self.assertRaises(iex.IntelPluginException,
client.services.get_services)

View File

@ -1,28 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
class Response:
def __init__(self, data=None, ok=True, status_code=200):
self.text = json.dumps(data)
self.ok = ok
self.status_code = status_code
self.reason = None
def make_resp(data=None, ok=True, status_code=200):
return Response(data, ok, status_code)

View File

@ -1,320 +0,0 @@
# Copyright (c) 2013 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from requests import sessions
from sahara import exceptions as ex
from sahara.plugins.intel import exceptions as iex
from sahara.plugins.intel.v3_0_2 import client as c
from sahara.tests.unit import base
from sahara.tests.unit.plugins.intel.v3_0_2 import response as r
SESSION_POST_DATA = {'sessionID': '123'}
SESSION_GET_DATA = {"items": [
{
"nodeprogress": {
"hostname": 'host',
'info': '_ALLFINISH\n'
}
}
]}
class TestClient(base.SaharaTestCase):
def _get_instance(self):
inst_remote = mock.MagicMock()
inst_remote.get_http_client.return_value = sessions.Session()
inst_remote.__enter__.return_value = inst_remote
inst = mock.MagicMock()
inst.remote.return_value = inst_remote
return inst
@mock.patch('requests.sessions.Session.post')
@mock.patch('requests.sessions.Session.get')
def test_cluster_op(self, get, post):
client = c.IntelClient(self._get_instance(), 'rty')
data = {'lelik': 'bolik'}
post.return_value = r.make_resp(data)
self.assertEqual(client.cluster.create(), data)
get.return_value = r.make_resp(data)
self.assertEqual(client.cluster.get(), data)
post.return_value = r.make_resp(SESSION_POST_DATA)
get.return_value = r.make_resp(SESSION_GET_DATA)
client.cluster.install_software(['bla-bla'])
self.assertEqual(post.call_count, 2)
self.assertEqual(get.call_count, 2)
@mock.patch('requests.sessions.Session.delete')
@mock.patch('requests.sessions.Session.post')
@mock.patch('requests.sessions.Session.get')
def test_nodes_op(self, get, post, delete):
client = c.IntelClient(self._get_instance(), 'rty')
# add
post.return_value = r.make_resp(data={
"items": [
{
"iporhostname": "n1",
"info": "Connected"
},
{
"iporhostname": "n2",
"info": "Connected"
}
]
})
client.nodes.add(['n1', 'n2'], 'hadoop', '/Def', '/tmp/key')
post.return_value = r.make_resp(data={
"items": [
{
"iporhostname": "n1",
"info": "bla-bla"
}
]
})
self.assertRaises(iex.IntelPluginException, client.nodes.add,
['n1'], 'hadoop', '/Def', '/tmp/key')
# config
post.return_value = r.make_resp(SESSION_POST_DATA)
get.return_value = r.make_resp(SESSION_GET_DATA)
client.nodes.config()
# delete
delete.return_value = r.make_resp()
client.nodes.delete(['n1'])
# get
get.return_value = r.make_resp()
client.nodes.get()
# get_status
get.return_value = r.make_resp(data={"status": "running"})
client.nodes.get_status(['n1'])
# stop_nodes
post.return_value = r.make_resp()
client.nodes.stop(['n1'])
self.assertEqual(delete.call_count, 1)
self.assertEqual(post.call_count, 4)
self.assertEqual(get.call_count, 3)
@mock.patch('requests.sessions.Session.put')
@mock.patch('requests.sessions.Session.post')
def test_params_op(self, post, put):
client = c.IntelClient(self._get_instance(), 'rty')
post.return_value = r.make_resp()
put.return_value = r.make_resp()
# add
client.params.hdfs.add('lelik', 'bolik')
client.params.hadoop.add('lelik', 'bolik')
client.params.yarn.add('lelik', 'bolik')
# get
self.assertRaises(ex.NotImplementedException, client.params.hdfs.get,
['n1'], 'lelik')
self.assertRaises(ex.NotImplementedException, client.params.hadoop.get,
['n1'], 'lelik')
self.assertRaises(ex.NotImplementedException, client.params.yarn.get,
['n1'], 'lelik')
# update
client.params.hdfs.update('lelik', 'bolik', nodes=['n1'])
client.params.hdfs.update('lelik', 'bolik')
client.params.hadoop.update('lelik', 'bolik', nodes=['n1'])
client.params.hadoop.update('lelik', 'bolik')
client.params.yarn.update('lelik', 'bolik', nodes=['n1'])
client.params.yarn.update('lelik', 'bolik')
self.assertEqual(post.call_count, 3)
self.assertEqual(put.call_count, 6)
@mock.patch('sahara.context.sleep', lambda x: None)
@mock.patch('requests.sessions.Session.post')
@mock.patch('requests.sessions.Session.get')
def test_base_services_op(self, get, post):
client = c.IntelClient(self._get_instance(), 'rty')
# start
post.return_value = r.make_resp()
get.return_value = r.make_resp(data={
"items": [
{
"serviceName": "hdfs",
"status": "running"
},
{
"serviceName": "yarn",
"status": "running"
}
]})
client.services.hdfs.start()
client.services.yarn.start()
get.return_value = r.make_resp(data={
"items": [
{
"serviceName": "hdfs",
"status": "stopped"
},
{
"serviceName": "yarn",
"status": "stopped"
}
]
})
self.assertRaises(iex.IntelPluginException,
client.services.hdfs.start)
self.assertRaises(iex.IntelPluginException,
client.services.yarn.start)
# stop
post.return_value = r.make_resp()
client.services.hdfs.stop()
client.services.yarn.stop()
# service
get.return_value = r.make_resp(data={
"items": [
{
"serviceName": "bla-bla",
"status": "fail"
}
]
})
self.assertRaises(iex.IntelPluginException,
client.services.hdfs.status)
self.assertRaises(iex.IntelPluginException,
client.services.yarn.status)
# get_nodes
get.return_value = r.make_resp()
client.services.hdfs.get_nodes()
client.services.yarn.get_nodes()
# add_nodes
post.return_value = r.make_resp()
client.services.hdfs.add_nodes('DataNode', ['n1', 'n2'])
client.services.yarn.add_nodes('NodeManager', ['n1', 'n2'])
self.assertEqual(get.call_count, 606)
self.assertEqual(post.call_count, 8)
@mock.patch('requests.sessions.Session.delete')
@mock.patch('requests.sessions.Session.post')
@mock.patch('requests.sessions.Session.get')
def test_services_op(self, get, post, delete):
client = c.IntelClient(self._get_instance(), 'rty')
# add
post.return_value = r.make_resp()
client.services.add(['hdfs', 'yarn'])
# get_services
get.return_value = r.make_resp()
client.services.get_services()
# delete_service
delete.return_value = r.make_resp()
client.services.delete_service('hdfs')
@mock.patch('requests.sessions.Session.post')
@mock.patch('requests.sessions.Session.get')
def test_hdfs_services_op(self, get, post):
client = c.IntelClient(self._get_instance(), 'rty')
# format
get.return_value = r.make_resp(SESSION_GET_DATA)
post.return_value = r.make_resp(SESSION_POST_DATA)
client.services.hdfs.format()
# decommission
post.return_value = r.make_resp()
client.services.hdfs.decommission_nodes(['n1'])
# get status
get.return_value = r.make_resp(data={
"items": [
{
"hostname": "n1",
"status": "start"
}
]
})
client.services.hdfs.get_datanodes_status()
self.assertEqual(client.services.hdfs.get_datanode_status('n1'),
'start')
self.assertRaises(iex.IntelPluginException,
client.services.hdfs.get_datanode_status, 'n2')
self.assertEqual(get.call_count, 4)
self.assertEqual(post.call_count, 2)
@mock.patch('sahara.context.sleep', lambda x: None)
@mock.patch('requests.sessions.Session.post')
@mock.patch('requests.sessions.Session.get')
def test_session_op(self, get, post):
client = c.IntelClient(self._get_instance(), 'rty')
data1 = {
"items": [
{
"nodeprogress": {
"hostname": 'host',
'info': 'info\n'
}
}
]
}
data2 = {
"items": [
{
"nodeprogress": {
"hostname": 'host',
'info': '_ALLFINISH\n'
}
}
]
}
get.side_effect = (r.make_resp(data1), r.make_resp(data2))
post.return_value = r.make_resp(SESSION_POST_DATA)
client.services.hdfs.format()
self.assertEqual(get.call_count, 2)
self.assertEqual(post.call_count, 1)
@mock.patch('requests.sessions.Session.get')
def test_rest_client(self, get):
client = c.IntelClient(self._get_instance(), 'rty')
get.return_value = r.make_resp(ok=False, status_code=500, data={
"message": "message"
})
self.assertRaises(iex.IntelPluginException,
client.services.get_services)

View File

@ -1,41 +0,0 @@
# Copyright (c) 2014 Intel Corporation
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import mock
from sahara.plugins.intel.v3_0_2 import installer
from sahara.tests.unit import base
class InstallerTest(base.SaharaTestCase):
def test_is_hadoop_service_stopped(self):
instance = mock.Mock()
instance.remote.return_value.execute_command.return_value = (
1, "Hadoop datanode is not running [FAILURE]")
self.assertTrue(
installer._is_hadoop_service_stopped(instance, 'datanode'))
instance = mock.Mock()
instance.remote.return_value.execute_command.return_value = (
1, "Hadoop datanode is dead and pid file exists [FAILURE]")
self.assertTrue(
installer._is_hadoop_service_stopped(instance, 'datanode'))
instance = mock.Mock()
instance.remote.return_value.execute_command.return_value = (
0, "Hadoop datanode is running [SUCCESS]")
self.assertFalse(
installer._is_hadoop_service_stopped(instance, 'datanode'))

View File

@ -36,7 +36,6 @@ console_scripts =
sahara.cluster.plugins =
vanilla = sahara.plugins.vanilla.plugin:VanillaProvider
hdp = sahara.plugins.hdp.ambariplugin:AmbariPlugin
idh = sahara.plugins.intel.plugin:IDHProvider
sahara.infrastructure.engine =
direct = sahara.service.direct_engine:DirectEngine