Retire Sahara: remove repo content

The Sahara project is retiring:
- https://review.opendev.org/c/openstack/governance/+/919374

This commit removes the content of this project's repo.

Depends-On: https://review.opendev.org/c/openstack/project-config/+/919376
Change-Id: Ifad618c77ccaf71c2737763d01a50fd01a17a353
Author: Ghanshyam Mann
Date: 2024-05-10 17:28:40 -07:00
parent a2d031e759
commit 8bb418637b

462 changed files with 8 additions and 119514 deletions

.gitignore
@@ -1,30 +0,0 @@
*.egg-info
*.egg[s]
*.log
*.py[co]
.coverage
.testrepository
.tox
.stestr
.venv
.idea
AUTHORS
ChangeLog
build
cover
develop-eggs
dist
doc/build
doc/html
eggs
etc/sahara.conf
etc/sahara/*.conf
etc/sahara/*.topology
sdist
target
tools/lintstack.head.py
tools/pylint_exceptions
doc/source/sample.config
# Files created by releasenotes build
releasenotes/build

@@ -1,3 +0,0 @@
[DEFAULT]
test_path=./sahara_plugin_cdh/tests/unit
top_dir=./

@@ -1,10 +0,0 @@
- project:
templates:
- check-requirements
- openstack-python3-zed-jobs
- publish-openstack-docs-pti
- release-notes-jobs-python3
check:
jobs:
- sahara-buildimages-cloudera:
voting: false

@@ -1,19 +0,0 @@
The source repository for this project can be found at:
https://opendev.org/openstack/sahara-plugin-cdh
Pull requests submitted through GitHub are not monitored.
To start contributing to OpenStack, follow the steps in the contribution guide
to set up and use Gerrit:
https://docs.openstack.org/contributors/code-and-documentation/quick-start.html
Bugs should be filed on Storyboard:
https://storyboard.openstack.org/#!/project/openstack/sahara-plugin-cdh
For more specific information about contributing to this repository, see the
sahara-plugin-cdh contributor guide:
https://docs.openstack.org/sahara-plugin-cdh/latest/contributor/contributing.html

LICENSE
@@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

@@ -1,38 +1,10 @@
-========================
-Team and repository tags
-========================
+This project is no longer maintained.
-.. image:: https://governance.openstack.org/tc/badges/sahara.svg
-    :target: https://governance.openstack.org/tc/reference/tags/index.html
-.. Change things from this point on
-OpenStack Data Processing ("Sahara") CDH Plugin
-================================================
-OpenStack Sahara CDH Plugin provides the users the option to
-start CDH clusters on OpenStack Sahara.
-Check out OpenStack Sahara documentation to see how to deploy the
-CDH Plugin.
-Sahara at wiki.openstack.org: https://wiki.openstack.org/wiki/Sahara
-Storyboard project: https://storyboard.openstack.org/#!/project/openstack/sahara-plugin-cdh
-Sahara docs site: https://docs.openstack.org/sahara/latest/
-Quickstart guide: https://docs.openstack.org/sahara/latest/user/quickstart.html
-How to participate: https://docs.openstack.org/sahara/latest/contributor/how-to-participate.html
-Source: https://opendev.org/openstack/sahara-plugin-cdh
-Bugs and feature requests: https://storyboard.openstack.org/#!/project/openstack/sahara-plugin-cdh
-Release notes: https://docs.openstack.org/releasenotes/sahara-plugin-cdh
-License
--------
-Apache License Version 2.0 http://www.apache.org/licenses/LICENSE-2.0
+The contents of this repository are still available in the Git
+source code management system. To see the contents of this
+repository before it reached its end of life, please check out the
+previous commit with "git checkout HEAD^1".
+For any further questions, please email
+openstack-discuss@lists.openstack.org or join #openstack-dev on
+OFTC.

@@ -1 +0,0 @@
[python: **.py]

@@ -1,9 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
openstackdocstheme>=2.2.1 # Apache-2.0
os-api-ref>=1.4.0 # Apache-2.0
reno>=3.1.0 # Apache-2.0
sphinx>=2.0.0,!=2.1.0 # BSD
sphinxcontrib-httpdomain>=1.3.0 # BSD
whereto>=0.3.0 # Apache-2.0

@@ -1,213 +0,0 @@
# -*- coding: utf-8 -*-
#
# sahara-plugin-cdh documentation build configuration file.
#
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'reno.sphinxext',
'openstackdocstheme',
]
# openstackdocstheme options
openstackdocs_repo_name = 'openstack/sahara-plugin-cdh'
openstackdocs_pdf_link = True
openstackdocs_use_storyboard = True
openstackdocs_projects = [
'sahara'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
copyright = '2015, Sahara team'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'saharacdhplugin-testsdoc'
# -- Options for LaTeX output --------------------------------------------------
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'doc-sahara-plugin-cdh.tex', 'Sahara CDH Plugin Documentation',
'Sahara team', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
smartquotes_excludes = {'builders': ['latex']}
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'sahara-plugin-cdh', 'sahara-plugin-cdh Documentation',
['Sahara team'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'sahara-plugin-cdh', 'sahara-plugin-cdh Documentation',
'Sahara team', 'sahara-plugin-cdh', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'

@@ -1,14 +0,0 @@
============================
So You Want to Contribute...
============================
For general information on contributing to OpenStack, please check out the
`contributor guide <https://docs.openstack.org/contributors/>`_ to get started.
It covers all the basics that are common to all OpenStack projects: the
accounts you need, the basics of interacting with our Gerrit review system, how
we communicate as a community, etc.
sahara-plugin-cdh is maintained by the OpenStack Sahara project.
To understand our development process and how you can contribute to it, please
look at the Sahara project's general contributor's page:
http://docs.openstack.org/sahara/latest/contributor/contributing.html

@@ -1,8 +0,0 @@
=================
Contributor Guide
=================
.. toctree::
:maxdepth: 2
contributing

@@ -1,8 +0,0 @@
CDH plugin for Sahara
=====================
.. toctree::
:maxdepth: 2
user/index
contributor/index

@@ -1,191 +0,0 @@
Cloudera Plugin
===============
The Cloudera plugin is a Sahara plugin which allows the user to
deploy and operate a cluster with Cloudera Manager.
The Cloudera plugin is enabled in Sahara by default. You can manually
modify the Sahara configuration file (default /etc/sahara/sahara.conf) to
explicitly enable or disable it in the ``plugins`` line.
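
As a sketch, an explicit entry in ``sahara.conf`` might look like the
following (the plugin names other than ``cdh`` are illustrative):

.. sourcecode:: cfg

    [DEFAULT]
    plugins = cdh,vanilla,spark
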
Images
------
For cluster provisioning, prepared images should be used.
.. list-table:: Support matrix for the `cdh` plugin
:widths: 15 15 20 15 35
:header-rows: 1
* - Version
(image tag)
- Distribution
- Build method
- Version
(build parameter)
- Notes
* - 5.13.0
- Ubuntu 16.04, CentOS 7
- sahara-image-pack
- 5.13.0
-
* - 5.11.0
- Ubuntu 16.04, CentOS 7
- sahara-image-pack, sahara-image-create
- 5.11.0
-
* - 5.9.0
- Ubuntu 14.04, CentOS 7
- sahara-image-pack, sahara-image-create
- 5.9.0
-
* - 5.7.0
- Ubuntu 14.04, CentOS 7
- sahara-image-pack, sahara-image-create
- 5.7.0
-
For more information about building images, refer to
:sahara-doc:`Sahara documentation <user/building-guest-images.html>`.
The Cloudera plugin requires an image to be tagged in the Sahara Image
Registry with two tags: 'cdh' and '<cloudera version>' (e.g. '5.13.0',
'5.11.0', '5.9.0', etc).
The default username specified for these images is different for each
distribution. For more information, refer to the
:sahara-doc:`registering image <user/registering-image.html>` section.
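
As a sketch, registering and tagging such an image from the CLI might look
like this (the image name ``cdh-5130-ubuntu`` and the username are
illustrative):

.. sourcecode:: console

    $ openstack dataprocessing image register cdh-5130-ubuntu --username ubuntu
    $ openstack dataprocessing image tags add cdh-5130-ubuntu --tags cdh 5.13.0
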
Build settings
~~~~~~~~~~~~~~
It is possible to specify minor versions of CDH when ``sahara-image-create``
is used.
If you want to use a minor version, export ``DIB_CDH_MINOR_VERSION``
before starting the build command, e.g.:
.. sourcecode:: console
export DIB_CDH_MINOR_VERSION=5.7.1
Services Supported
------------------
Currently the following services are supported in both versions of the
Cloudera plugin: HDFS, Oozie, YARN, Spark, Zookeeper, Hive, Hue, and HBase.
Version 5.3.0 of the Cloudera plugin also supports the following services:
Impala, Flume, Solr, Sqoop, and Key-value Store Indexer. Version 5.4.0 added
KMS service support on top of 5.3.0. Kafka 2.0.2 was added for CDH 5.5 and
higher.
.. note::
The Sentry service is enabled in the Cloudera plugin. However, Kerberos
authentication, which Sentry requires, is not enabled in the cluster for
CDH versions below 5.5; on those versions the Sentry service has no real
effect, and services that depend on Sentry perform no authentication either.
High Availability Support
-------------------------
HDFS NameNode High Availability is currently supported beginning with
Cloudera 5.4.0. Refer to
:sahara-doc:`Features Overview <user/features.html>` for detailed
information.
YARN ResourceManager High Availability is supported beginning with Cloudera
5.4.0. This feature adds redundancy in the form of an Active/Standby
ResourceManager pair to avoid the failure of a single RM. Upon failover, the
Standby RM becomes Active so that applications can resume from their last
check-pointed state.
Cluster Validation
------------------
When the user performs an operation on the cluster using the Cloudera plugin,
the cluster topology requested by the user is verified for consistency; a
sketch of this style of check appears after the lists below.
The following limitations apply to the cluster topology for all
Cloudera plugin versions:
+ Cluster must contain exactly one manager.
+ Cluster must contain exactly one namenode.
+ Cluster must contain exactly one secondarynamenode.
+ Cluster must contain at least ``dfs_replication`` datanodes.
+ Cluster can contain at most one resourcemanager and this process is also
required by nodemanager.
+ Cluster can contain at most one jobhistory and this process is also
required for resourcemanager.
+ Cluster can contain at most one oozie and this process is also required
for EDP.
+ Cluster can't contain oozie without datanode.
+ Cluster can't contain oozie without nodemanager.
+ Cluster can't contain oozie without jobhistory.
+ Cluster can't contain hive without the following services:
metastore, hive server, webhcat and resourcemanager.
+ Cluster can contain at most one hue server.
+ Cluster can't contain hue server without hive service and oozie.
+ Cluster can contain at most one spark history server.
+ Cluster can't contain spark history server without resourcemanager.
+ Cluster can't contain hbase master service without at least one zookeeper
and at least one hbase regionserver.
+ Cluster can't contain hbase regionserver without at least one hbase master.
In case of the 5.3.0, 5.4.0, 5.5.0, 5.7.x or 5.9.x versions of the Cloudera
plugin, there are a few extra limitations in the cluster topology:
+ Cluster can't contain flume without at least one datanode.
+ Cluster can contain at most one sentry server service.
+ Cluster can't contain sentry server service without at least one zookeeper
and at least one datanode.
+ Cluster can't contain solr server without at least one zookeeper and at
least one datanode.
+ Cluster can contain at most one sqoop server.
+ Cluster can't contain sqoop server without at least one datanode,
nodemanager and jobhistory.
+ Cluster can't contain hbase indexer without at least one datanode,
zookeeper, solr server and hbase master.
+ Cluster can contain at most one impala catalog server.
+ Cluster can contain at most one impala statestore.
+ Cluster can't contain impala catalogserver without impala statestore,
at least one impalad service, at least one datanode, and metastore.
+ If using Impala, the daemons must be installed on every datanode.
In versions 5.5.0, 5.7.x and 5.9.x of the Cloudera plugin, additional
services are available in the cluster topology:
+ Cluster can have the kafka service and several kafka brokers.
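
As mentioned above, a check of this style can be pictured as a set of simple
process counts; the sketch below is illustrative only (``count_instances``
and ``validate_topology`` are hypothetical names, not the plugin's actual
validation helpers, which raise Sahara-specific exceptions):

.. sourcecode:: python

    def count_instances(cluster, process):
        # Sum the instance counts of all node groups running the process.
        return sum(ng.count for ng in cluster.node_groups
                   if process in ng.node_processes)

    def validate_topology(cluster):
        if count_instances(cluster, 'CLOUDERA_MANAGER') != 1:
            raise ValueError("Cluster must contain exactly one manager.")
        if count_instances(cluster, 'HDFS_NAMENODE') != 1:
            raise ValueError("Cluster must contain exactly one namenode.")
        if count_instances(cluster, 'OOZIE_SERVER') > 1:
            raise ValueError("Cluster can contain at most one oozie.")
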
Enabling Kerberos security for cluster
--------------------------------------
If you want to protect your clusters using MIT Kerberos security, you have to
complete the few steps below.
* If you would like to create a cluster protected by Kerberos security, you
just need to enable Kerberos via the checkbox in the ``General Parameters``
section of the cluster configuration. If you prefer to use the OpenStack CLI
for cluster creation, you have to put the data below in the
``cluster_configs`` section (a fuller example appears after this list):
.. sourcecode:: console
"cluster_configs": {
"Enable Kerberos Security": true,
}
In this case Sahara will prepare the KDC server and create principals
along with keytabs to enable authentication for the Hadoop services.
* Ensure that you have the latest hadoop-openstack jar file distributed
on your cluster nodes. You can download one at
``https://tarballs.openstack.org/sahara-extra/dist/``
* Sahara will create principals along with keytabs for system users
like ``hdfs`` and ``spark`` so that you will not have to
perform additional auth operations to execute your jobs on top of the
cluster.
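
For instance, a CLI cluster creation request might embed the option as in the
sketch below (the file name and all field values are illustrative, and other
required fields such as the cluster template are omitted):

.. sourcecode:: console

    $ cat cluster.json
    {
        "name": "cdh-secure",
        "plugin_name": "cdh",
        "hadoop_version": "5.13.0",
        "cluster_configs": {
            "Enable Kerberos Security": true
        }
    }
    $ openstack dataprocessing cluster create --json cluster.json
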

@@ -1,8 +0,0 @@
==========
User Guide
==========
.. toctree::
:maxdepth: 2
cdh-plugin

@@ -1,6 +0,0 @@
---
upgrade:
- |
Python 2.7 support has been dropped. Last release of sahara and its plugins
to support python 2.7 is OpenStack Train. The minimum version of Python now
supported by sahara and its plugins is Python 3.6.

@@ -1,6 +0,0 @@
===========================
2023.1 Series Release Notes
===========================
.. release-notes::
:branch: stable/2023.1

@@ -1,210 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Sahara Release Notes documentation build configuration file
extensions = [
'reno.sphinxext',
'openstackdocstheme'
]
# openstackdocstheme options
openstackdocs_repo_name = 'openstack/sahara-plugin-cdh'
openstackdocs_use_storyboard = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
copyright = '2015, Sahara Developers'
# Release do not need a version number in the title, they
# cover multiple versions.
# The full version, including alpha/beta/rc tags.
release = ''
# The short X.Y version.
version = ''
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'SaharaCDHReleaseNotesdoc'
# -- Options for LaTeX output ---------------------------------------------
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'SaharaCDHReleaseNotes.tex',
'Sahara CDH Plugin Release Notes Documentation',
'Sahara Developers', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'saharacdhreleasenotes',
'Sahara CDH Plugin Release Notes Documentation',
['Sahara Developers'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'SaharaCDHReleaseNotes',
'Sahara CDH Plugin Release Notes Documentation',
'Sahara Developers', 'SaharaCDHReleaseNotes',
'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']

@@ -1,17 +0,0 @@
=================================
Sahara CDH Plugin Release Notes
=================================
.. toctree::
:maxdepth: 1
unreleased
2023.1
zed
yoga
xena
wallaby
victoria
ussuri
train
stein

@@ -1,44 +0,0 @@
# Andreas Jaeger <jaegerandi@gmail.com>, 2019. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2020. #zanata
msgid ""
msgstr ""
"Project-Id-Version: sahara-plugin-cdh\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-04-24 23:41+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2020-04-25 10:42+0000\n"
"Last-Translator: Andreas Jaeger <jaegerandi@gmail.com>\n"
"Language-Team: German\n"
"Language: de\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid "Current Series Release Notes"
msgstr "Aktuelle Serie Releasenotes"
msgid ""
"Python 2.7 support has been dropped. Last release of sahara and its plugins "
"to support python 2.7 is OpenStack Train. The minimum version of Python now "
"supported by sahara and its plugins is Python 3.6."
msgstr ""
"Python 2.7 Unterstรผtzung wurde beendet. Der letzte Release von Sahara und "
"seinen Plugins der Python 2.7 unterstรผtzt ist OpenStack Train. Die minimal "
"Python Version welche von Sahara und seinen Plugins unterstรผtzt wird, ist "
"Python 3.6."
msgid "Sahara CDH Plugin Release Notes"
msgstr "Sahara CDH Plugin Releasenotes"
msgid "Stein Series Release Notes"
msgstr "Stein Serie Releasenotes"
msgid "Train Series Release Notes"
msgstr "Train Serie Releasenotes"
msgid "Upgrade Notes"
msgstr "Aktualisierungsnotizen"
msgid "Ussuri Series Release Notes"
msgstr "Ussuri Serie Releasenotes"

@@ -1,23 +0,0 @@
# Andreas Jaeger <jaegerandi@gmail.com>, 2019. #zanata
msgid ""
msgstr ""
"Project-Id-Version: sahara-plugin-cdh\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2019-09-20 17:23+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2019-09-25 06:20+0000\n"
"Last-Translator: Andreas Jaeger <jaegerandi@gmail.com>\n"
"Language-Team: Indonesian\n"
"Language: id\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=1; plural=0\n"
# auto translated by TM merge from project: sahara-plugin-storm, version: master, DocId: releasenotes/source/locale/releasenotes
msgid "Current Series Release Notes"
msgstr "Catatan Rilis Seri Saat Ini"
# auto translated by TM merge from project: sahara-plugin-storm, version: master, DocId: releasenotes/source/locale/releasenotes
msgid "Stein Series Release Notes"
msgstr "Catatan Rilis Seri Stein"

@@ -1,6 +0,0 @@
===================================
Stein Series Release Notes
===================================
.. release-notes::
:branch: stable/stein

@@ -1,6 +0,0 @@
==========================
Train Series Release Notes
==========================
.. release-notes::
:branch: stable/train

@@ -1,5 +0,0 @@
==============================
Current Series Release Notes
==============================
.. release-notes::

@@ -1,6 +0,0 @@
===========================
Ussuri Series Release Notes
===========================
.. release-notes::
:branch: stable/ussuri

@@ -1,6 +0,0 @@
=============================
Victoria Series Release Notes
=============================
.. release-notes::
:branch: stable/victoria

@@ -1,6 +0,0 @@
============================
Wallaby Series Release Notes
============================
.. release-notes::
:branch: stable/wallaby

@@ -1,6 +0,0 @@
=========================
Xena Series Release Notes
=========================
.. release-notes::
:branch: stable/xena

@@ -1,6 +0,0 @@
=========================
Yoga Series Release Notes
=========================
.. release-notes::
:branch: stable/yoga

@@ -1,6 +0,0 @@
========================
Zed Series Release Notes
========================
.. release-notes::
:branch: stable/zed

@@ -1,18 +0,0 @@
# Requirements lower bounds listed here are our best effort to keep them up to
# date but we do not test them so no guarantee of having them all correct. If
# you find any incorrect lower bounds, let us know or propose a fix.
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr!=2.1.0,>=2.0.0 # Apache-2.0
Babel!=2.4.0,>=2.3.4 # BSD
eventlet>=0.26.0 # MIT
oslo.i18n>=3.15.3 # Apache-2.0
oslo.log>=3.36.0 # Apache-2.0
oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
oslo.utils>=3.33.0 # Apache-2.0
requests>=2.14.2 # Apache-2.0
sahara>=18.0.0

@@ -1,26 +0,0 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# It's based on oslo.i18n usage in OpenStack Keystone project and
# recommendations from https://docs.openstack.org/oslo.i18n/latest/
# user/usage.html
import oslo_i18n
_translators = oslo_i18n.TranslatorFactory(domain='sahara_plugin_cdh')
# The primary translation function using the well-known name "_"
_ = _translators.primary
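# A usage sketch (hypothetical call site; assumes this module is importable
# as sahara_plugin_cdh.i18n): other modules import "_" and wrap user-facing
# strings so they are rendered from the message catalogs at runtime, e.g.
#
#     from sahara_plugin_cdh.i18n import _
#     raise ValueError(_("Cluster must contain exactly one manager."))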

@@ -1,284 +0,0 @@
# Andreas Jaeger <jaegerandi@gmail.com>, 2019. #zanata
msgid ""
msgstr ""
"Project-Id-Version: sahara-plugin-cdh VERSION\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2019-09-20 17:23+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2019-09-25 06:24+0000\n"
"Last-Translator: Andreas Jaeger <jaegerandi@gmail.com>\n"
"Language-Team: German\n"
"Language: de\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "%(problem)s, reason: %(reason)s"
msgstr "%(problem)s, Grund: %(reason)s"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "%(problem)s. %(description)s"
msgstr "%(problem)s. %(description)s"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "'cluster' or 'instance' argument missed"
msgstr "Argument 'Cluster' oder 'Instanz' fehlt"
# auto translated by TM merge from project: sahara-plugin-vanilla, version: master, DocId: sahara_plugin_vanilla/locale/sahara_plugin_vanilla
msgid "0 or 1"
msgstr "0 oder 1"
#, python-format
msgid "API version %(minv)s is required but %(acv)s is in use."
msgstr "API-Version %(minv)s ist erforderlich, aber %(acv)s wird verwendet."
#, python-format
msgid "Attribute %(attname)s of class %(classname)s is read only."
msgstr "Attribut %(attname)s der Klasse %(classname)s ist schreibgeschรผtzt."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Await Cloudera agents"
msgstr "Erwarten Cloudera-Agenten"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Await agents"
msgstr "Erwarten Agenten"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Await starting Cloudera Manager"
msgstr "Warte auf den Cloudera Manager"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "CDH %s health check"
msgstr "CDH %s Gesundheitscheck"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid ""
"CDH plugin cannot scale node group with processes which have no master-"
"processes run in cluster"
msgstr ""
"Das CDH-Plug-in kann Knotengruppen nicht mit Prozessen skalieren, fรผr die "
"keine Masterprozesse im Cluster ausgefรผhrt werden"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "CDH plugin cannot scale nodegroup with processes: %(processes)s"
msgstr ""
"Das CDH-Plugin kann Knotengruppen nicht mit Prozessen skalieren: "
"%(processes)s"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "CM API attribute error: %s"
msgstr "CM-API-Attributfehler:%s"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "CM API error: %s"
msgstr "CM-API-Fehler:%s"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "CM API value error: %s"
msgstr "CM-API-Wertfehler:%s"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "CM API version not meet requirement: %s"
msgstr "CM API-Version erfรผllt nicht die Anforderung:%s"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Can't get response from Cloudera Manager"
msgstr "Kann keine Antwort von Cloudera Manager erhalten"
#, python-format
msgid ""
"Class %(class1)s does not derive from %(class2)s; cannot update attributes."
msgstr ""
"Die Klasse %(class1)s wird nicht von %(class2)s abgeleitet. Attribute kรถnnen "
"nicht aktualisiert werden."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "Cloudera Manager has responded that service is in the %s state"
msgstr ""
"Cloudera Manager hat geantwortet, dass der Dienst sich im Status '%s' "
"befindet"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Cloudera Manager health check"
msgstr "Cloudera Manager-Systemdiagnose"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Cloudera Manager is Active"
msgstr "Cloudera Manager ist Aktiv"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Cloudera {base} or higher required to run {type}jobs"
msgstr "Cloudera {base} oder hรถher erforderlich, um {type} Jobs auszufรผhren"
#, python-format
msgid "Command %(method)s %(path)s failed: %(msg)s"
msgstr "Befehl %(method)s %(path)s fehlgeschlagen: %(msg)s"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Configure OS"
msgstr "Konfiguriere das Betriebssystem"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Configure Swift"
msgstr "Konfiguriere Swift"
# auto translated by TM merge from project: sahara-plugin-vanilla, version: master, DocId: sahara_plugin_vanilla/locale/sahara_plugin_vanilla
msgid "Configure instances"
msgstr "Konfiguriere Instanzen"
# auto translated by TM merge from project: sahara-plugin-ambari, version: master, DocId: sahara_plugin_ambari/locale/sahara_plugin_ambari
msgid "Configure rack awareness"
msgstr "Rack-Erkennung konfigurieren"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Configure services"
msgstr "Konfiguriere Dienste"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Create mgmt service"
msgstr "Erstelle einen Verwaltungsdienst"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Create services"
msgstr "Erstelle Dienste"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Decommission nodes"
msgstr "Ausschussknoten"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Delete instances"
msgstr "Instanzen lรถschen"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Deploy configs"
msgstr "Stelle Konfigurationen bereit"
msgid "Either 'version' or 'fullVersion' must be specified"
msgstr "Entweder 'version' oder 'fullVersion' muss angegeben werden"
# auto translated by TM merge from project: sahara-plugin-ambari, version: master, DocId: sahara_plugin_ambari/locale/sahara_plugin_ambari
msgid "Enable NameNode HA"
msgstr "Aktiviere NameNode HA"
# auto translated by TM merge from project: sahara-plugin-ambari, version: master, DocId: sahara_plugin_ambari/locale/sahara_plugin_ambari
msgid "Enable ResourceManager HA"
msgstr "Aktiviere ResourceManager HA"
msgid "Finish cluster starting"
msgstr "Beenden Sie den Clusterstart"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "First run cluster"
msgstr "Erster Lauf-Cluster"
msgid "Get retry max time reached."
msgstr "Maximale Wiederholungszeit erreicht."
msgid "HDFS_NAMENODE should be enabled in anti_affinity."
msgstr "HDFS_NAMENODE sollte in anti_affinity aktiviert sein."
msgid "HDFS_SECONDARYNAMENODE should be enabled in anti_affinity."
msgstr "HDFS_SECONDARYNAMENODE sollte in anti_affinity aktiviert werden."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "IMPALAD must be installed on every HDFS_DATANODE"
msgstr "IMPALAD muss auf jedem HDFS_DATANODE installiert werden"
# auto translated by TM merge from project: openstack-manuals, version: master, DocId: doc/ha-guide/source/locale/ha-guide
msgid "Install packages"
msgstr "Installiere Pakete"
#, python-format
msgid "Invalid property %(attname)s for class %(classname)s."
msgstr "Ungรผltige Eigenschaft %(attname)s fรผr Klasse %(classname)s."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Number of datanodes must be not less than dfs_replication."
msgstr "Die Anzahl der Daten muss nicht kleiner als dfs_replication sein."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "OS on image is not supported by CDH plugin"
msgstr "OS on image wird vom CDH-Plugin nicht unterstรผtzt"
msgid "Prepare cluster"
msgstr "Vorbereiten des Clusters"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "Process %(process)s is not supported by CDH plugin"
msgstr "Process %(process)s wird vom CDH-Plugin nicht unterstรผtzt"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Refresh DataNodes"
msgstr "Aktualisiere DataNodes"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Refresh YARNNodes"
msgstr "Aktualisiere YARNNodes"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Restart stale services"
msgstr "Abgelaufene Dienste neu starten"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Start Cloudera Agents"
msgstr "Starte Cloudera-Agenten"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Start Cloudera Manager"
msgstr "Starte den Cloudera Manager"
msgid "Start roles: NODEMANAGER, DATANODE"
msgstr "Start Rollen: NODEMANAGER, DATANODE"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid ""
"The Cloudera Sahara plugin provides the ability to launch the Cloudera "
"distribution of Apache Hadoop (CDH) with Cloudera Manager management console."
msgstr ""
"Das Cloudera Sahara-Plugin bietet die Mรถglichkeit, die Cloudera-Distribution "
"von Apache Hadoop (CDH) mit der Cloudera Manager Management Console zu "
"starten."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "The following checks did not pass: %s"
msgstr "Die folgenden รœberprรผfungen wurden nicht bestanden: %s"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Unable to find config: applicable_target: {target}, name: {name}"
msgstr ""
"Konnte nicht gefunden werden: applicable_target: {target}, name: {name}"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Update configs"
msgstr "Aktualisiere die Konfigurationen"
msgid "YARN_RESOURCEMANAGER should be enabled in anti_affinity."
msgstr "YARN_RESOURCEMANAGER sollte in anti_affinity aktiviert sein."
msgid "YARN_STANDBYRM should be enabled in anti_affinity."
msgstr "YARN_STANDBYRM sollte in anti_affinity aktiviert sein."
# auto translated by TM merge from project: sahara, version: master, DocId: sahara/locale/sahara
msgid "at least 1"
msgstr "mindestens 1"
msgid "be odd"
msgstr "sei ungerade"
msgid "not less than 3"
msgstr "nicht kleiner als 3"

@@ -1,231 +0,0 @@
# Copyright (c) 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
from sahara.plugins import conductor
from sahara.plugins import context
from sahara.plugins import kerberos
from sahara_plugin_cdh.plugins.cdh import db_helper as dh
from sahara_plugin_cdh.plugins.cdh import health
class AbstractVersionHandler(object, metaclass=abc.ABCMeta):
@abc.abstractmethod
def get_node_processes(self):
return
@abc.abstractmethod
def get_plugin_configs(self):
return
@abc.abstractmethod
def configure_cluster(self, cluster):
return
@abc.abstractmethod
def start_cluster(self, cluster):
return
@abc.abstractmethod
def validate(self, cluster):
return
@abc.abstractmethod
def scale_cluster(self, cluster, instances):
return
@abc.abstractmethod
def decommission_nodes(self, cluster, instances):
return
@abc.abstractmethod
def validate_scaling(self, cluster, existing, additional):
return
@abc.abstractmethod
def get_edp_engine(self, cluster, job_type):
return
@abc.abstractmethod
def get_edp_job_types(self):
return []
@abc.abstractmethod
def get_edp_config_hints(self, job_type):
return {}
@abc.abstractmethod
def get_open_ports(self, node_group):
return
def on_terminate_cluster(self, cluster):
dh.delete_passwords_from_keymanager(cluster)
@abc.abstractmethod
def get_image_arguments(self):
return NotImplemented
@abc.abstractmethod
def pack_image(self, hadoop_version, remote, test_only=False,
image_arguments=None):
pass
@abc.abstractmethod
def validate_images(self, cluster, test_only=False, image_arguments=None):
pass
class BaseVersionHandler(AbstractVersionHandler):
def __init__(self):
# Need to be specified in subclass
self.config_helper = None # config helper
self.cloudera_utils = None # ClouderaUtils
self.deploy = None # to deploy
self.edp_engine = None
self.plugin_utils = None # PluginUtils
self.validation = None # to validate
def get_plugin_configs(self):
result = self.config_helper.get_plugin_configs()
result.extend(kerberos.get_config_list())
return result
def get_node_processes(self):
return {
"CLOUDERA": ['CLOUDERA_MANAGER'],
"HDFS": ['HDFS_NAMENODE', 'HDFS_DATANODE',
'HDFS_SECONDARYNAMENODE', 'HDFS_JOURNALNODE'],
"YARN": ['YARN_RESOURCEMANAGER', 'YARN_NODEMANAGER',
'YARN_JOBHISTORY', 'YARN_STANDBYRM'],
"OOZIE": ['OOZIE_SERVER'],
"HIVE": ['HIVE_SERVER2', 'HIVE_METASTORE', 'HIVE_WEBHCAT'],
"HUE": ['HUE_SERVER'],
"SPARK_ON_YARN": ['SPARK_YARN_HISTORY_SERVER'],
"ZOOKEEPER": ['ZOOKEEPER_SERVER'],
"HBASE": ['HBASE_MASTER', 'HBASE_REGIONSERVER'],
"FLUME": ['FLUME_AGENT'],
"IMPALA": ['IMPALA_CATALOGSERVER', 'IMPALA_STATESTORE', 'IMPALAD'],
"KS_INDEXER": ['KEY_VALUE_STORE_INDEXER'],
"SOLR": ['SOLR_SERVER'],
"SQOOP": ['SQOOP_SERVER'],
"SENTRY": ['SENTRY_SERVER'],
"KMS": ['KMS'],
"KAFKA": ['KAFKA_BROKER'],
"YARN_GATEWAY": [],
"RESOURCEMANAGER": [],
"NODEMANAGER": [],
"JOBHISTORY": [],
"HDFS_GATEWAY": [],
'DATANODE': [],
'NAMENODE': [],
'SECONDARYNAMENODE': [],
'JOURNALNODE': [],
'REGIONSERVER': [],
'MASTER': [],
'HIVEMETASTORE': [],
'HIVESERVER': [],
'WEBCAT': [],
'CATALOGSERVER': [],
'STATESTORE': [],
'IMPALAD': [],
'Kerberos': [],
}
def validate(self, cluster):
self.validation.validate_cluster_creating(cluster)
def configure_cluster(self, cluster):
self.deploy.configure_cluster(cluster)
conductor.cluster_update(
context.ctx(), cluster, {
'info':
self.cloudera_utils.get_cloudera_manager_info(cluster)})
def start_cluster(self, cluster):
self.deploy.start_cluster(cluster)
self._set_cluster_info(cluster)
def decommission_nodes(self, cluster, instances):
self.deploy.decommission_cluster(cluster, instances)
def validate_scaling(self, cluster, existing, additional):
self.validation.validate_existing_ng_scaling(cluster, existing)
self.validation.validate_additional_ng_scaling(cluster, additional)
def scale_cluster(self, cluster, instances):
self.deploy.scale_cluster(cluster, instances)
def _set_cluster_info(self, cluster):
info = self.cloudera_utils.get_cloudera_manager_info(cluster)
hue = self.cloudera_utils.pu.get_hue(cluster)
if hue:
info['Hue Dashboard'] = {
'Web UI': 'http://%s:8888' % hue.get_ip_or_dns_name()
}
ctx = context.ctx()
conductor.cluster_update(ctx, cluster, {'info': info})
def get_edp_engine(self, cluster, job_type):
oozie_type = self.edp_engine.EdpOozieEngine.get_supported_job_types()
spark_type = self.edp_engine.EdpSparkEngine.get_supported_job_types()
if job_type in oozie_type:
return self.edp_engine.EdpOozieEngine(cluster)
if job_type in spark_type:
return self.edp_engine.EdpSparkEngine(cluster)
return None
def get_edp_job_types(self):
return (self.edp_engine.EdpOozieEngine.get_supported_job_types() +
self.edp_engine.EdpSparkEngine.get_supported_job_types())
def get_edp_config_hints(self, job_type):
return self.edp_engine.EdpOozieEngine.get_possible_job_config(job_type)
def get_open_ports(self, node_group):
return self.deploy.get_open_ports(node_group)
def recommend_configs(self, cluster, scaling):
self.plugin_utils.recommend_configs(
cluster, self.get_plugin_configs(), scaling)
def get_health_checks(self, cluster):
return health.get_health_checks(cluster, self.cloudera_utils)
def get_image_arguments(self):
if hasattr(self, 'images'):
return self.images.get_image_arguments()
else:
return NotImplemented
def pack_image(self, hadoop_version, remote, test_only=False,
image_arguments=None):
if hasattr(self, 'images'):
self.images.pack_image(
remote, test_only=test_only, image_arguments=image_arguments)
def validate_images(self, cluster, test_only=False, image_arguments=None):
if hasattr(self, 'images'):
self.images.validate_images(
cluster, test_only=test_only, image_arguments=image_arguments)
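
# Illustrative only, not from the plugin sources: a minimal concrete handler
# showing the wiring BaseVersionHandler expects. The stub deploy object and
# its port list are placeholders; a real version handler assigns one set of
# per-release helper modules (config_helper, cloudera_utils, deploy, ...).
class _StubDeploy(object):
    def get_open_ports(self, node_group):
        # placeholder ports; the real deploy module derives these from the
        # node group's processes
        return [7180, 8020, 8088]

class SketchVersionHandler(BaseVersionHandler):
    def __init__(self):
        super(SketchVersionHandler, self).__init__()
        self.deploy = _StubDeploy()

handler = SketchVersionHandler()
print(handler.get_open_ports(node_group=None))   # [7180, 8020, 8088]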

View File

@ -1,145 +0,0 @@
# Copyright (c) 2014 Intel Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# The contents of this file are mainly copied from cm_api sources,
# released by Cloudera. Code not used by the Sahara CDH plugin has been
# removed. You can find the original code at
#
# https://github.com/cloudera/cm_api/tree/master/python/src/cm_api
#
# To satisfy the pep8 and python3 tests, we made some changes to the code.
# We also changed some imports to use Sahara-inherited classes.
from sahara_plugin_cdh.plugins.cdh.client import clusters
from sahara_plugin_cdh.plugins.cdh.client import cms
from sahara_plugin_cdh.plugins.cdh.client import hosts
from sahara_plugin_cdh.plugins.cdh.client import http_client
from sahara_plugin_cdh.plugins.cdh.client import resource
from sahara_plugin_cdh.plugins.cdh.client import users
API_AUTH_REALM = "Cloudera Manager"
API_CURRENT_VERSION = 8
class ApiResource(resource.Resource):
"""Top-level API Resource
Resource object that provides methods for managing the top-level API
resources.
"""
def __init__(self, server_host, server_port=None,
username="admin", password="admin",
use_tls=False, version=API_CURRENT_VERSION):
"""Creates a Resource object that provides API endpoints.
:param server_host: The hostname of the Cloudera Manager server.
:param server_port: The port of the server. Defaults to 7180 (http) or
7183 (https).
:param username: Login name.
:param password: Login password.
:param use_tls: Whether to use tls (https).
:param version: API version.
:return: Resource object referring to the root.
"""
self._version = version
protocol = "https" if use_tls else "http"
if server_port is None:
server_port = 7183 if use_tls else 7180
base_url = ("%s://%s:%s/api/v%s"
% (protocol, server_host, server_port, version))
client = http_client.HttpClient(base_url)
client.set_basic_auth(username, password, API_AUTH_REALM)
client.set_headers({"Content-Type": "application/json"})
resource.Resource.__init__(self, client)
@property
def version(self):
"""Returns the API version (integer) being used."""
return self._version
def get_cloudera_manager(self):
"""Returns a Cloudera Manager object."""
return cms.ClouderaManager(self)
def create_cluster(self, name, version=None, fullVersion=None):
"""Create a new cluster
:param name: Cluster name.
:param version: Cluster major CDH version, e.g. 'CDH5'. Ignored if
fullVersion is specified.
:param fullVersion: Complete CDH version, e.g. '5.1.2'. Overrides major
version if both specified.
:return: The created cluster.
"""
return clusters.create_cluster(self, name, version, fullVersion)
def get_all_clusters(self, view=None):
"""Retrieve a list of all clusters
:param view: View to materialize ('full' or 'summary').
:return: A list of ApiCluster objects.
"""
return clusters.get_all_clusters(self, view)
def get_cluster(self, name):
"""Look up a cluster by name
:param name: Cluster name.
:return: An ApiCluster object.
"""
return clusters.get_cluster(self, name)
def delete_host(self, host_id):
"""Delete a host by id
:param host_id: Host id
:return: The deleted ApiHost object
"""
return hosts.delete_host(self, host_id)
def get_all_hosts(self, view=None):
"""Get all hosts
:param view: View to materialize ('full' or 'summary').
:return: A list of ApiHost objects.
"""
return hosts.get_all_hosts(self, view)
def get_user(self, username):
"""Look up a user by username.
@param username: Username to look up
@return: An ApiUser object
"""
return users.get_user(self, username)
def update_user(self, user):
"""Update a user detail profile.
@param user: An ApiUser object
@return: An ApiUser object
"""
return users.update_user(self, user)
def get_service_health_status(self, cluster):
"""Get clusters service health status
:param cluster: Cluster name.
:return: A dict with cluster health status
"""
cluster = clusters.get_cluster(self, cluster)
return cluster.get_service_health_status()
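
# Illustrative usage sketch, not part of the original file. The import path
# (this module), the hostname, and the cluster name are assumptions.
from sahara_plugin_cdh.plugins.cdh.client.api_client import ApiResource

api = ApiResource('cm.example.com', username='admin', password='admin')
print(api.version)                                    # 8 unless overridden
health = api.get_service_health_status('cluster-1')
for service_name, status in health.items():
    print(service_name, status['summary'], status['checks'])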

View File

@ -1,240 +0,0 @@
# Copyright (c) 2014 Intel Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# The contents of this file are mainly copied from cm_api sources,
# released by Cloudera. Code not used by the Sahara CDH plugin has been
# removed. You can find the original code at
#
# https://github.com/cloudera/cm_api/tree/master/python/src/cm_api
#
# To satisfy the pep8 and python3 tests, we made some changes to the code.
# We also changed some imports to use Sahara-inherited classes.
from sahara_plugin_cdh.i18n import _
from sahara_plugin_cdh.plugins.cdh.client import services
from sahara_plugin_cdh.plugins.cdh.client import types
from sahara_plugin_cdh.plugins.cdh import exceptions as ex
CLUSTERS_PATH = "/clusters"
def create_cluster(resource_root, name, version=None, fullVersion=None):
"""Create a cluster
:param resource_root: The root Resource object.
:param name: Cluster name
:param version: Cluster CDH major version (eg: "CDH4")
- The CDH minor version will be assumed to be the
latest released version for CDH4, or 5.0 for CDH5.
:param fullVersion: Cluster's full CDH version. (eg: "5.1.1")
- If specified, 'version' will be ignored.
- Since: v6
:return: An ApiCluster object
"""
if version is None and fullVersion is None:
raise ex.CMApiVersionError(
_("Either 'version' or 'fullVersion' must be specified"))
if fullVersion is not None:
api_version = 6
version = None
else:
api_version = 1
apicluster = ApiCluster(resource_root, name, version, fullVersion)
return types.call(resource_root.post, CLUSTERS_PATH, ApiCluster, True,
data=[apicluster], api_version=api_version)[0]
def get_cluster(resource_root, name):
"""Lookup a cluster by name
:param resource_root: The root Resource object.
:param name: Cluster name
:return: An ApiCluster object
"""
return types.call(resource_root.get, "%s/%s"
% (CLUSTERS_PATH, name), ApiCluster)
def get_all_clusters(resource_root, view=None):
"""Get all clusters
:param resource_root: The root Resource object.
:return: A list of ApiCluster objects.
"""
return types.call(resource_root.get, CLUSTERS_PATH, ApiCluster, True,
params=(dict(view=view) if view else None))
class ApiCluster(types.BaseApiResource):
_ATTRIBUTES = {
'name': None,
'clusterUrl': None,
'displayName': None,
'version': None,
'fullVersion': None,
'hostsUrl': types.ROAttr(),
'maintenanceMode': types.ROAttr(),
'maintenanceOwners': types.ROAttr(),
'entityStatus': types.ROAttr(),
}
def __init__(self, resource_root, name=None, version=None,
fullVersion=None):
types.BaseApiObject.init(self, resource_root, locals())
def _path(self):
return "%s/%s" % (CLUSTERS_PATH, self.name)
def get_service_types(self):
"""Get all service types supported by this cluster
:return: A list of service types (strings)
"""
resp = self._get_resource_root().get(self._path() + '/serviceTypes')
return resp[types.ApiList.LIST_KEY]
def get_commands(self, view=None):
"""Retrieve a list of running commands for this cluster
:param view: View to materialize ('full' or 'summary')
:return: A list of running commands.
"""
return self._get("commands", types.ApiCommand, True,
params=(dict(view=view) if view else None))
def create_service(self, name, service_type):
"""Create a service
:param name: Service name
:param service_type: Service type
:return: An ApiService object
"""
return services.create_service(self._get_resource_root(), name,
service_type, self.name)
def get_service(self, name):
"""Lookup a service by name
:param name: Service name
:return: An ApiService object
"""
return services.get_service(self._get_resource_root(),
name, self.name)
def start(self):
"""Start all services in a cluster, respecting dependencies
:return: Reference to the submitted command.
"""
return self._cmd('start')
def restart(self, restart_only_stale_services=None,
redeploy_client_configuration=None,
restart_service_names=None):
"""Restart all services in the cluster. Services are restarted in the
appropriate order given their dependencies.
:param restart_only_stale_services: Only restart services that
have stale configuration and their dependent
services. Default is False.
:param redeploy_client_configuration: Re-deploy client configuration
for all services in the cluster. Default is False.
:param restart_service_names: Only restart services that are specified
and their dependent services.
:return: Reference to the submitted command.
"""
if self._get_resource_root().version < 6:
return self._cmd('restart')
args = dict()
args['restartOnlyStaleServices'] = restart_only_stale_services
args['redeployClientConfiguration'] = redeploy_client_configuration
if self._get_resource_root().version >= 11:
args['restartServiceNames'] = restart_service_names
return self._cmd('restart', data=args, api_version=6)
def stop(self):
"""Stop all services in a cluster, respecting dependencies
:return: Reference to the submitted command.
"""
return self._cmd('stop')
def deploy_client_config(self):
"""Deploys Service client configuration to the hosts on the cluster
:return: Reference to the submitted command.
:since: API v2
"""
return self._cmd('deployClientConfig')
def first_run(self):
"""Prepare and start services in a cluster
Perform all the steps needed to prepare each service in a
cluster and start the services in order.
:return: Reference to the submitted command.
:since: API v7
"""
return self._cmd('firstRun', None, api_version=7)
def remove_host(self, hostId):
"""Removes the association of the host with the cluster
:return: A ApiHostRef of the host that was removed.
:since: API v3
"""
return self._delete("hosts/" + hostId, types.ApiHostRef, api_version=3)
def get_service_health_status(self):
"""Lookup a service health status by name
:return: A dict with cluster health status
"""
health_dict = {}
cl_services = services.get_all_services(self._get_resource_root(),
cluster_name=self.name)
for curr in cl_services:
health_dict[curr.name] = {
'summary': curr.get_health_summary(),
'checks': curr.get_health_checks_status()}
return health_dict
def configure_for_kerberos(self, datanode_transceiver_port=None,
datanode_web_port=None):
"""Command to configure the cluster to use Kerberos for authentication.
This command will configure all relevant services on a cluster for
Kerberos usage. This command will trigger a GenerateCredentials
command to create Kerberos keytabs for all roles in the cluster.
:param datanode_transceiver_port: The HDFS DataNode transceiver port
to use. This will be applied to all DataNode role
configuration groups. If not specified, this will default to
1004.
:param datanode_web_port: The HDFS DataNode web port to use. This will
be applied to all DataNode role configuration groups. If not
specified, this will default to 1006.
:return: Reference to the submitted command.
:since: API v11
"""
args = dict()
if datanode_transceiver_port:
args['datanodeTransceiverPort'] = datanode_transceiver_port
if datanode_web_port:
args['datanodeWebPort'] = datanode_web_port
return self._cmd('configureForKerberos', data=args, api_version=11)
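
# Illustrative usage sketch, not part of the original file; 'api' is the
# ApiResource from the previous sketch, and the cluster/service names and
# CDH version are placeholders.
cluster = api.create_cluster('cluster-1', fullVersion='5.11.0')
hdfs = cluster.create_service('hdfs01', 'HDFS')       # ApiService
cmd = cluster.first_run()     # prepares and starts services in order (API v7+)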

View File

@ -1,84 +0,0 @@
# Copyright (c) 2014 Intel Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# The contents of this file are mainly copied from cm_api sources,
# released by Cloudera. Code not used by the Sahara CDH plugin has been
# removed. You can find the original code at
#
# https://github.com/cloudera/cm_api/tree/master/python/src/cm_api
#
# To satisfy the pep8 and python3 tests, we made some changes to the code.
# We also changed some imports to use Sahara-inherited classes.
from sahara_plugin_cdh.plugins.cdh.client.services import ApiService
from sahara_plugin_cdh.plugins.cdh.client import types
class ClouderaManager(types.BaseApiResource):
"""The Cloudera Manager instance
Provides access to CM configuration and services.
"""
def __init__(self, resource_root):
types.BaseApiObject.init(self, resource_root)
def _path(self):
return '/cm'
def create_mgmt_service(self, service_setup_info):
"""Setup the Cloudera Management Service
:param service_setup_info: ApiServiceSetupInfo object.
:return: The management service instance.
"""
return self._put("service", ApiService, data=service_setup_info)
def get_service(self):
"""Return the Cloudera Management Services instance
:return: An ApiService instance.
"""
return self._get("service", ApiService)
def hosts_start_roles(self, host_names):
"""Start all the roles on the specified hosts
:param host_names: List of names of hosts on which to start all roles.
:return: Information about the submitted command.
:since: API v2
"""
return self._cmd('hostsStartRoles', data=host_names)
def update_config(self, config):
"""Update the CM configuration.
:param config: Dictionary with configuration to update.
:return: Dictionary with updated configuration.
"""
return self._update_config("config", config)
def import_admin_credentials(self, username, password):
"""Imports the KDC Account Manager credentials needed by Cloudera
Manager to create kerberos principals needed by CDH services.
:param username Username of the Account Manager. Full name including
the Kerberos realm must be specified.
:param password Password for the Account Manager.
:return: Information about the submitted command.
:since: API v7
"""
return self._cmd('importAdminCredentials', params=dict(
username=username, password=password))
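
# Illustrative usage sketch, not part of the original file; the host name
# and Kerberos principal are placeholders.
cm = api.get_cloudera_manager()
cm.hosts_start_roles(['worker-0.example.com'])
cm.import_admin_credentials('cloudera-scm/admin@EXAMPLE.COM', 'secret')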

View File

@ -1,90 +0,0 @@
# Copyright (c) 2014 Intel Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# The contents of this file are mainly copied from cm_api sources,
# released by Cloudera. Code not used by the Sahara CDH plugin has been
# removed. You can find the original code at
#
# https://github.com/cloudera/cm_api/tree/master/python/src/cm_api
#
# To satisfy the pep8 and python3 tests, we made some changes to the code.
# We also changed some imports to use Sahara-inherited classes.
import datetime
from sahara_plugin_cdh.plugins.cdh.client import types
HOSTS_PATH = "/hosts"
def get_all_hosts(resource_root, view=None):
"""Get all hosts
:param resource_root: The root Resource object.
:return: A list of ApiHost objects.
"""
return types.call(resource_root.get, HOSTS_PATH, ApiHost, True,
params=(dict(view=view) if view else None))
def delete_host(resource_root, host_id):
"""Delete a host by id
:param resource_root: The root Resource object.
:param host_id: Host id
:return: The deleted ApiHost object
"""
return types.call(resource_root.delete, "%s/%s"
% (HOSTS_PATH, host_id), ApiHost)
class ApiHost(types.BaseApiResource):
_ATTRIBUTES = {
'hostId': None,
'hostname': None,
'ipAddress': None,
'rackId': None,
'status': types.ROAttr(),
'lastHeartbeat': types.ROAttr(datetime.datetime),
'roleRefs': types.ROAttr(types.ApiRoleRef),
'healthSummary': types.ROAttr(),
'healthChecks': types.ROAttr(),
'hostUrl': types.ROAttr(),
'commissionState': types.ROAttr(),
'maintenanceMode': types.ROAttr(),
'maintenanceOwners': types.ROAttr(),
'numCores': types.ROAttr(),
'totalPhysMemBytes': types.ROAttr(),
}
def __init__(self, resource_root, hostId=None, hostname=None,
ipAddress=None, rackId=None):
types.BaseApiObject.init(self, resource_root, locals())
def __str__(self):
return "<ApiHost>: %s (%s)" % (self.hostId, self.ipAddress)
def _path(self):
return HOSTS_PATH + '/' + self.hostId
def put_host(self):
"""Update this resource
        Note (mionkin): currently, according to Cloudera docs,
only updating the rackId is supported.
All other fields of the host will be ignored.
:return: The updated object.
"""
return self._put('', ApiHost, data=self)
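
# Illustrative usage sketch, not part of the original file. Assumes the
# writable 'rackId' attribute can be assigned directly (as with other
# read-write attrs in this client); the rack path is a placeholder.
for host in api.get_all_hosts(view='full'):
    print(host)                        # <ApiHost>: <hostId> (<ipAddress>)
    host.rackId = '/default-rack'      # only rackId updates are honored
    host.put_host()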

View File

@ -1,148 +0,0 @@
# Copyright (c) 2014 Intel Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# The contents of this file are mainly copied from cm_api sources,
# released by Cloudera. Code not used by the Sahara CDH plugin has been
# removed. You can find the original code at
#
# https://github.com/cloudera/cm_api/tree/master/python/src/cm_api
#
# To satisfy the pep8 and python3 tests, we made some changes to the code.
# We also changed some imports to use Sahara-inherited classes.
import http.cookiejar
import posixpath
import urllib
from oslo_log import log as logging
from oslo_serialization import jsonutils as json
from sahara_plugin_cdh.plugins.cdh import exceptions as ex
LOG = logging.getLogger(__name__)
class HttpClient(object):
"""Basic HTTP client tailored for rest APIs."""
def __init__(self, base_url, exc_class=ex.CMApiException):
"""Init Method
:param base_url: The base url to the API.
:param exc_class: An exception class to handle non-200 results.
Creates an HTTP(S) client to connect to the Cloudera Manager API.
"""
self._base_url = base_url.rstrip('/')
self._exc_class = exc_class
self._headers = {}
# Make a basic auth handler that does nothing. Set credentials later.
self._passmgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
authhandler = urllib.request.HTTPBasicAuthHandler(self._passmgr)
# Make a cookie processor
cookiejar = http.cookiejar.CookieJar()
self._opener = urllib.request.build_opener(
urllib.request.HTTPErrorProcessor(),
urllib.request.HTTPCookieProcessor(cookiejar),
authhandler)
def set_basic_auth(self, username, password, realm):
"""Set up basic auth for the client
:param username: Login name.
:param password: Login password.
:param realm: The authentication realm.
:return: The current object
"""
self._passmgr.add_password(realm, self._base_url, username, password)
return self
def set_headers(self, headers):
"""Add headers to the request
:param headers: A dictionary with the key value pairs for the headers
:return: The current object
"""
self._headers = headers
return self
@property
def base_url(self):
return self._base_url
def _get_headers(self, headers):
res = self._headers.copy()
if headers:
res.update(headers)
return res
def execute(self, http_method, path, params=None, data=None, headers=None):
"""Submit an HTTP request
:param http_method: GET, POST, PUT, DELETE
:param path: The path of the resource.
:param params: Key-value parameter data.
:param data: The data to attach to the body of the request.
:param headers: The headers to set for this request.
:return: The result of urllib.request.urlopen()
"""
# Prepare URL and params
url = self._make_url(path, params)
if http_method in ("GET", "DELETE"):
if data is not None:
LOG.warning("{method} method does not pass any data. "
"Path {path}".format(method=http_method,
path=path))
data = None
if http_method in ("POST", "PUT"):
if data is not None:
data = data.encode('utf-8')
# Setup the request
request = urllib.request.Request(url, data)
        # Hack/workaround because urllib.request only does GET and POST by default
request.get_method = lambda: http_method
headers = self._get_headers(headers)
for k, v in headers.items():
request.add_header(k, v)
# Call it
LOG.debug("Method: {method}, URL: {url}".format(method=http_method,
url=url))
try:
return self._opener.open(request)
except urllib.error.HTTPError as ex:
message = str(ex)
try:
json_body = json.loads(message)
message = json_body['message']
except (ValueError, KeyError):
pass # Ignore json parsing error
raise self._exc_class(message)
def _make_url(self, path, params):
res = self._base_url
if path:
res += posixpath.normpath('/' + path.lstrip('/'))
if params:
param_str = urllib.parse.urlencode(params, True)
res += '?' + param_str
return res
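
# Illustrative usage sketch, not part of the original file; the URL and
# credentials are placeholders.
client = HttpClient('http://cm.example.com:7180/api/v8')
client.set_basic_auth('admin', 'admin', 'Cloudera Manager')
client.set_headers({'Content-Type': 'application/json'})
resp = client.execute('GET', '/clusters', params={'view': 'summary'})
print(resp.read())                     # raw JSON body from the CM API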

View File

@ -1,179 +0,0 @@
# Copyright (c) 2014 Intel Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# The contents of this file are mainly copied from cm_api sources,
# released by Cloudera. Code not used by the Sahara CDH plugin has been
# removed. You can find the original code at
#
# https://github.com/cloudera/cm_api/tree/master/python/src/cm_api
#
# To satisfy the pep8 and python3 tests, we made some changes to the code.
# We also changed some imports to use Sahara-inherited classes.
import posixpath
import socket
from oslo_log import log as logging
from oslo_serialization import jsonutils as json
import urllib
from sahara.plugins import context
from sahara_plugin_cdh.i18n import _
from sahara_plugin_cdh.plugins.cdh import exceptions as ex
LOG = logging.getLogger(__name__)
class Resource(object):
"""Base Resource
Encapsulates a resource, and provides actions to invoke on it.
"""
def __init__(self, client, relpath=""):
"""Constructor method
:param client: A Client object.
:param relpath: The relative path of the resource.
"""
self._client = client
self._path = relpath.strip('/')
self.retries = 3
self.retry_sleep = 3
@property
def base_url(self):
return self._client.base_url
def _join_uri(self, relpath):
if relpath is None:
return self._path
return self._path + posixpath.normpath('/' + relpath)
def invoke(self, method, relpath=None, params=None, data=None,
headers=None):
"""Invoke an API method
:return: Raw body or JSON dictionary (if response content type is
JSON).
"""
path = self._join_uri(relpath)
resp = self._client.execute(method,
path,
params=params,
data=data,
headers=headers)
try:
body = resp.read()
        except Exception as e:
            # Use 'e', not 'ex': the latter would shadow the exceptions
            # module imported above and break the raise below.
            raise ex.CMApiException(
                _("Command %(method)s %(path)s failed: %(msg)s")
                % {'method': method, 'path': path, 'msg': str(e)})
LOG.debug("{method} got response: {body}".format(method=method,
body=body[:32]))
# Is the response application/json?
if (len(body) != 0 and
self._get_content_maintype(resp.info()) == "application"
and self._get_content_subtype(resp.info()) == "json"):
try:
json_dict = json.loads(body)
return json_dict
except Exception:
LOG.error('JSON decode error: {body}'.format(body=body))
raise
else:
return body
def get(self, relpath=None, params=None):
"""Invoke the GET method on a resource
:param relpath: Optional. A relative path to this resource's path.
:param params: Key-value data.
:return: A dictionary of the JSON result.
"""
for retry in range(self.retries + 1):
if retry:
context.sleep(self.retry_sleep)
try:
return self.invoke("GET", relpath, params)
except (socket.error, urllib.error.URLError) as e:
if "timed out" in str(e).lower():
if retry < self.retries:
LOG.warning("Timeout issuing GET request for "
"{path}. Will retry".format(
path=self._join_uri(relpath)))
else:
LOG.warning("Timeout issuing GET request for "
"{path}. No retries left".format(
path=self._join_uri(relpath)))
else:
raise
else:
raise ex.CMApiException(_("Get retry max time reached."))
def delete(self, relpath=None, params=None):
"""Invoke the DELETE method on a resource
:param relpath: Optional. A relative path to this resource's path.
:param params: Key-value data.
:return: A dictionary of the JSON result.
"""
return self.invoke("DELETE", relpath, params)
def post(self, relpath=None, params=None, data=None, contenttype=None):
"""Invoke the POST method on a resource
:param relpath: Optional. A relative path to this resource's path.
:param params: Key-value data.
:param data: Optional. Body of the request.
:param contenttype: Optional.
:return: A dictionary of the JSON result.
"""
return self.invoke("POST", relpath, params, data,
self._make_headers(contenttype))
def put(self, relpath=None, params=None, data=None, contenttype=None):
"""Invoke the PUT method on a resource
:param relpath: Optional. A relative path to this resource's path.
:param params: Key-value data.
:param data: Optional. Body of the request.
:param contenttype: Optional.
:return: A dictionary of the JSON result.
"""
return self.invoke("PUT", relpath, params, data,
self._make_headers(contenttype))
def _make_headers(self, contenttype=None):
if contenttype:
return {'Content-Type': contenttype}
return None
def _get_content_maintype(self, info):
try:
return info.getmaintype()
except AttributeError:
return info.get_content_maintype()
def _get_content_subtype(self, info):
try:
return info.getsubtype()
except AttributeError:
return info.get_content_subtype()
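
# Illustrative usage sketch, not part of the original file; reuses the
# HttpClient sketch above. GETs are retried (self.retries times, sleeping
# self.retry_sleep seconds between attempts) on timeouts.
root = Resource(client)
body = root.get('/clusters')           # dict decoded from the JSON reply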

View File

@ -1,108 +0,0 @@
# Copyright (c) 2014 Intel Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# The contents of this file are mainly copied from cm_api sources,
# released by Cloudera. Code not used by the Sahara CDH plugin has been
# removed. You can find the original code at
#
# https://github.com/cloudera/cm_api/tree/master/python/src/cm_api
#
# To satisfy the pep8 and python3 tests, we made some changes to the code.
# We also changed some imports to use Sahara-inherited classes.
from sahara_plugin_cdh.plugins.cdh.client import types
ROLE_CONFIG_GROUPS_PATH = "/clusters/%s/services/%s/roleConfigGroups"
CM_ROLE_CONFIG_GROUPS_PATH = "/cm/service/roleConfigGroups"
def _get_role_config_groups_path(cluster_name, service_name):
if cluster_name:
return ROLE_CONFIG_GROUPS_PATH % (cluster_name, service_name)
else:
return CM_ROLE_CONFIG_GROUPS_PATH
def _get_role_config_group_path(cluster_name, service_name, name):
path = _get_role_config_groups_path(cluster_name, service_name)
return "%s/%s" % (path, name)
def get_all_role_config_groups(resource_root, service_name,
cluster_name="default"):
"""Get all role config groups in the specified service
:param resource_root: The root Resource object.
:param service_name: Service name.
:param cluster_name: Cluster name.
:return: A list of ApiRoleConfigGroup objects.
:since: API v3
"""
return types.call(resource_root.get,
_get_role_config_groups_path(cluster_name, service_name),
ApiRoleConfigGroup, True, api_version=3)
class ApiRoleConfigGroup(types.BaseApiResource):
_ATTRIBUTES = {
'name': None,
'displayName': None,
'roleType': None,
'config': types.Attr(types.ApiConfig),
'base': types.ROAttr(),
'serviceRef': types.ROAttr(types.ApiServiceRef),
}
def __init__(self, resource_root, name=None, displayName=None,
roleType=None, config=None):
types.BaseApiObject.init(self, resource_root, locals())
def __str__(self):
return ("<ApiRoleConfigGroup>: %s (cluster: %s; service: %s)"
% (self.name, self.serviceRef.clusterName,
self.serviceRef.serviceName))
def _api_version(self):
return 3
def _path(self):
return _get_role_config_group_path(self.serviceRef.clusterName,
self.serviceRef.serviceName,
self.name)
def get_config(self, view=None):
"""Retrieve the group's configuration
The 'summary' view contains strings as the dictionary values. The full
view contains types.ApiConfig instances as the values.
:param view: View to materialize ('full' or 'summary').
:return: Dictionary with configuration data.
"""
path = self._path() + '/config'
resp = self._get_resource_root().get(
path, params=(dict(view=view) if view else None))
return types.json_to_config(resp, view == 'full')
def update_config(self, config):
"""Update the group's configuration
:param config: Dictionary with configuration to update.
:return: Dictionary with updated configuration.
"""
path = self._path() + '/config'
resp = self._get_resource_root().put(
path, data=types.config_to_json(config))
return types.json_to_config(resp)
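
# Illustrative usage sketch, not part of the original file; the service and
# cluster names and the config key/value are placeholders.
groups = get_all_role_config_groups(api, 'hdfs01', cluster_name='cluster-1')
for group in groups:
    if group.roleType == 'DATANODE':
        group.update_config({'dfs_data_dir_list': '/data/dfs/dn'})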

View File

@ -1,187 +0,0 @@
# Copyright (c) 2014 Intel Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# The contents of this file are mainly copied from cm_api sources,
# released by Cloudera. Code not used by the Sahara CDH plugin has been
# removed. You can find the original code at
#
# https://github.com/cloudera/cm_api/tree/master/python/src/cm_api
#
# To satisfy the pep8 and python3 tests, we made some changes to the code.
# We also changed some imports to use Sahara-inherited classes.
from sahara_plugin_cdh.plugins.cdh.client import types
ROLES_PATH = "/clusters/%s/services/%s/roles"
CM_ROLES_PATH = "/cm/service/roles"
def _get_roles_path(cluster_name, service_name):
if cluster_name:
return ROLES_PATH % (cluster_name, service_name)
else:
return CM_ROLES_PATH
def _get_role_path(cluster_name, service_name, role_name):
path = _get_roles_path(cluster_name, service_name)
return "%s/%s" % (path, role_name)
def create_role(resource_root,
service_name,
role_type,
role_name,
host_id,
cluster_name="default"):
"""Create a role
:param resource_root: The root Resource object.
:param service_name: Service name
:param role_type: Role type
:param role_name: Role name
:param cluster_name: Cluster name
:return: An ApiRole object
"""
apirole = ApiRole(resource_root, role_name, role_type,
types.ApiHostRef(resource_root, host_id))
return types.call(resource_root.post,
_get_roles_path(cluster_name, service_name),
ApiRole, True, data=[apirole])[0]
def get_role(resource_root, service_name, name, cluster_name="default"):
"""Lookup a role by name
:param resource_root: The root Resource object.
:param service_name: Service name
:param name: Role name
:param cluster_name: Cluster name
:return: An ApiRole object
"""
return _get_role(resource_root, _get_role_path(cluster_name,
service_name, name))
def _get_role(resource_root, path):
return types.call(resource_root.get, path, ApiRole)
def get_all_roles(resource_root, service_name, cluster_name="default",
view=None):
"""Get all roles
:param resource_root: The root Resource object.
:param service_name: Service name
:param cluster_name: Cluster name
:return: A list of ApiRole objects.
"""
return types.call(resource_root.get,
_get_roles_path(cluster_name, service_name), ApiRole,
True, params=(dict(view=view) if view else None))
def get_roles_by_type(resource_root, service_name, role_type,
cluster_name="default", view=None):
"""Get all roles of a certain type in a service
:param resource_root: The root Resource object.
:param service_name: Service name
:param role_type: Role type
:param cluster_name: Cluster name
:return: A list of ApiRole objects.
"""
roles = get_all_roles(resource_root, service_name, cluster_name, view)
return [r for r in roles if r.type == role_type]
def delete_role(resource_root, service_name, name, cluster_name="default"):
"""Delete a role by name
:param resource_root: The root Resource object.
:param service_name: Service name
:param name: Role name
:param cluster_name: Cluster name
:return: The deleted ApiRole object
"""
return types.call(resource_root.delete,
_get_role_path(cluster_name, service_name, name),
ApiRole)
class ApiRole(types.BaseApiResource):
_ATTRIBUTES = {
'name': None,
'type': None,
'hostRef': types.Attr(types.ApiHostRef),
'roleState': types.ROAttr(),
'healthSummary': types.ROAttr(),
'healthChecks': types.ROAttr(),
'serviceRef': types.ROAttr(types.ApiServiceRef),
'configStale': types.ROAttr(),
'configStalenessStatus': types.ROAttr(),
'haStatus': types.ROAttr(),
'roleUrl': types.ROAttr(),
'commissionState': types.ROAttr(),
'maintenanceMode': types.ROAttr(),
'maintenanceOwners': types.ROAttr(),
'roleConfigGroupRef': types.ROAttr(types.ApiRoleConfigGroupRef),
'zooKeeperServerMode': types.ROAttr(),
}
def __init__(self, resource_root, name=None, type=None, hostRef=None):
types.BaseApiObject.init(self, resource_root, locals())
def __str__(self):
return ("<ApiRole>: %s (cluster: %s; service: %s)"
% (self.name, self.serviceRef.clusterName,
self.serviceRef.serviceName))
def _path(self):
return _get_role_path(self.serviceRef.clusterName,
self.serviceRef.serviceName,
self.name)
def _get_log(self, log):
path = "%s/logs/%s" % (self._path(), log)
return self._get_resource_root().get(path)
def get_commands(self, view=None):
"""Retrieve a list of running commands for this role
:param view: View to materialize ('full' or 'summary')
:return: A list of running commands.
"""
return self._get("commands", types.ApiCommand, True,
params=(dict(view=view) if view else None))
def get_config(self, view=None):
"""Retrieve the role's configuration
The 'summary' view contains strings as the dictionary values. The full
view contains types.ApiConfig instances as the values.
:param view: View to materialize ('full' or 'summary')
:return: Dictionary with configuration data.
"""
return self._get_config("config", view)
def update_config(self, config):
"""Update the role's configuration
:param config: Dictionary with configuration to update.
:return: Dictionary with updated configuration.
"""
return self._update_config("config", config)
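
# Illustrative usage sketch, not part of the original file; all identifiers
# are placeholders.
dn = create_role(api, 'hdfs01', 'DATANODE', 'hdfs01-DATANODE-1',
                 host_id='host-uuid-1', cluster_name='cluster-1')
for role in get_roles_by_type(api, 'hdfs01', 'DATANODE', 'cluster-1'):
    print(role)                        # <ApiRole>: ... (cluster: ...; ...)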

View File

@ -1,527 +0,0 @@
# Copyright (c) 2014 Intel Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# The contents of this file are mainly copied from cm_api sources,
# released by Cloudera. Code not used by the Sahara CDH plugin has been
# removed. You can find the original code at
#
# https://github.com/cloudera/cm_api/tree/master/python/src/cm_api
#
# To satisfy the pep8 and python3 tests, we made some changes to the code.
# We also changed some imports to use Sahara-inherited classes.
from oslo_serialization import jsonutils as json
from sahara_plugin_cdh.plugins.cdh.client import role_config_groups
from sahara_plugin_cdh.plugins.cdh.client import roles
from sahara_plugin_cdh.plugins.cdh.client import types
SERVICES_PATH = "/clusters/%s/services"
SERVICE_PATH = "/clusters/%s/services/%s"
ROLETYPES_CFG_KEY = 'roleTypeConfigs'
def create_service(resource_root, name, service_type,
cluster_name="default"):
"""Create a service
:param resource_root: The root Resource object.
:param name: Service name
:param service_type: Service type
:param cluster_name: Cluster name
:return: An ApiService object
"""
apiservice = ApiService(resource_root, name, service_type)
return types.call(resource_root.post, SERVICES_PATH % (cluster_name,),
ApiService, True, data=[apiservice])[0]
def get_service(resource_root, name, cluster_name="default"):
"""Lookup a service by name
:param resource_root: The root Resource object.
:param name: Service name
:param cluster_name: Cluster name
:return: An ApiService object
"""
return _get_service(resource_root, "%s/%s"
% (SERVICES_PATH % (cluster_name,), name))
def _get_service(resource_root, path):
return types.call(resource_root.get, path, ApiService)
def get_all_services(resource_root, cluster_name="default", view=None):
"""Get all services
:param resource_root: The root Resource object.
:param cluster_name: Cluster name
:return: A list of ApiService objects.
"""
return types.call(resource_root.get, SERVICES_PATH % (cluster_name,),
ApiService, True,
params=(dict(view=view) if view else None))
def delete_service(resource_root, name, cluster_name="default"):
"""Delete a service by name
:param resource_root: The root Resource object.
:param name: Service name
:param cluster_name: Cluster name
:return: The deleted ApiService object
"""
return types.call(resource_root.delete,
"%s/%s" % (SERVICES_PATH % (cluster_name,), name),
ApiService)
class ApiService(types.BaseApiResource):
_ATTRIBUTES = {
'name': None,
'type': None,
'displayName': None,
'serviceState': types.ROAttr(),
'healthSummary': types.ROAttr(),
'healthChecks': types.ROAttr(),
'clusterRef': types.ROAttr(types.ApiClusterRef),
'configStale': types.ROAttr(),
'configStalenessStatus': types.ROAttr(),
'clientConfigStalenessStatus': types.ROAttr(),
'serviceUrl': types.ROAttr(),
'maintenanceMode': types.ROAttr(),
'maintenanceOwners': types.ROAttr(),
}
def __init__(self, resource_root, name=None, type=None):
types.BaseApiObject.init(self, resource_root, locals())
def __str__(self):
return ("<ApiService>: %s (cluster: %s)"
% (self.name, self._get_cluster_name()))
def _get_cluster_name(self):
if hasattr(self, 'clusterRef') and self.clusterRef:
return self.clusterRef.clusterName
return None
def _path(self):
"""Return the API path for this service
This method assumes that lack of a cluster reference means that the
object refers to the Cloudera Management Services instance.
"""
if self._get_cluster_name():
return SERVICE_PATH % (self._get_cluster_name(), self.name)
else:
return '/cm/service'
def _role_cmd(self, cmd, roles, api_version=1):
return self._post("roleCommands/" + cmd, types.ApiBulkCommandList,
data=roles, api_version=api_version)
def _parse_svc_config(self, json_dic, view=None):
"""Parse a json-decoded ApiServiceConfig dictionary into a 2-tuple
:param json_dic: The json dictionary with the config data.
:param view: View to materialize.
:return: 2-tuple (service config dictionary, role type configurations)
"""
svc_config = types.json_to_config(json_dic, view == 'full')
rt_configs = {}
if ROLETYPES_CFG_KEY in json_dic:
for rt_config in json_dic[ROLETYPES_CFG_KEY]:
rt_configs[rt_config['roleType']] = types.json_to_config(
rt_config, view == 'full')
return (svc_config, rt_configs)
def create_yarn_job_history_dir(self):
"""Create the Yarn job history directory
:return: Reference to submitted command.
:since: API v6
"""
return self._cmd('yarnCreateJobHistoryDirCommand', api_version=6)
def get_config(self, view=None):
"""Retrieve the service's configuration
Retrieves both the service configuration and role type configuration
for each of the service's supported role types. The role type
configurations are returned as a dictionary, whose keys are the
role type name, and values are the respective configuration
dictionaries.
The 'summary' view contains strings as the dictionary values. The full
view contains types.ApiConfig instances as the values.
:param view: View to materialize ('full' or 'summary')
:return: 2-tuple (service config dictionary, role type configurations)
"""
path = self._path() + '/config'
resp = self._get_resource_root().get(
path, params=(dict(view=view) if view else None))
return self._parse_svc_config(resp, view)
def update_config(self, svc_config, **rt_configs):
"""Update the service's configuration
:param svc_config: Dictionary with service configuration to update.
:param rt_configs: Dict of role type configurations to update.
:return: 2-tuple (service config dictionary, role type configurations)
"""
path = self._path() + '/config'
if svc_config:
data = types.config_to_api_list(svc_config)
else:
data = {}
if rt_configs:
rt_list = []
for rt, cfg in rt_configs.items():
rt_data = types.config_to_api_list(cfg)
rt_data['roleType'] = rt
rt_list.append(rt_data)
data[ROLETYPES_CFG_KEY] = rt_list
resp = self._get_resource_root().put(path, data=json.dumps(data))
return self._parse_svc_config(resp)
def create_role(self, role_name, role_type, host_id):
"""Create a role
:param role_name: Role name
:param role_type: Role type
:param host_id: ID of the host to assign the role to
:return: An ApiRole object
"""
return roles.create_role(self._get_resource_root(), self.name,
role_type, role_name, host_id,
self._get_cluster_name())
def delete_role(self, name):
"""Delete a role by name
:param name: Role name
:return: The deleted ApiRole object
"""
return roles.delete_role(self._get_resource_root(), self.name, name,
self._get_cluster_name())
def get_roles_by_type(self, role_type, view=None):
"""Get all roles of a certain type in a service
:param role_type: Role type
:param view: View to materialize ('full' or 'summary')
:return: A list of ApiRole objects.
"""
return roles.get_roles_by_type(self._get_resource_root(), self.name,
role_type, self._get_cluster_name(),
view)
def get_all_role_config_groups(self):
"""Get a list of role configuration groups in the service
:return: A list of ApiRoleConfigGroup objects.
:since: API v3
"""
return role_config_groups.get_all_role_config_groups(
self._get_resource_root(), self.name, self._get_cluster_name())
def start(self):
"""Start a service
:return: Reference to the submitted command.
"""
return self._cmd('start')
def stop(self):
"""Stop a service
:return: Reference to the submitted command.
"""
return self._cmd('stop')
def restart(self):
"""Restart a service
:return: Reference to the submitted command.
"""
return self._cmd('restart')
def get_health_summary(self):
return getattr(self, 'healthSummary', None)
def get_health_checks_status(self):
return getattr(self, 'healthChecks', None)
def start_roles(self, *role_names):
"""Start a list of roles
:param role_names: names of the roles to start.
:return: List of submitted commands.
"""
return self._role_cmd('start', role_names)
def create_hbase_root(self):
"""Create the root directory of an HBase service
:return: Reference to the submitted command.
"""
return self._cmd('hbaseCreateRoot')
def create_hdfs_tmp(self):
"""Create /tmp directory in HDFS
Create the /tmp directory in HDFS with appropriate ownership and
permissions.
:return: Reference to the submitted command
:since: API v2
"""
return self._cmd('hdfsCreateTmpDir')
def refresh(self, *role_names):
"""Execute the "refresh" command on a set of roles
:param role_names: Names of the roles to refresh.
:return: Reference to the submitted command.
"""
return self._role_cmd('refresh', role_names)
def decommission(self, *role_names):
"""Decommission roles in a service
:param role_names: Names of the roles to decommission.
:return: Reference to the submitted command.
"""
return self._cmd('decommission', data=role_names)
def deploy_client_config(self, *role_names):
"""Deploys client configuration to the hosts where roles are running
        :param role_names: Names of the roles whose client configuration
            should be deployed.
:return: Reference to the submitted command.
"""
return self._cmd('deployClientConfig', data=role_names)
def format_hdfs(self, *namenodes):
"""Format NameNode instances of an HDFS service
:param namenodes: Name of NameNode instances to format.
:return: List of submitted commands.
"""
return self._role_cmd('hdfsFormat', namenodes)
def install_oozie_sharelib(self):
"""Installs the Oozie ShareLib
Oozie must be stopped before running this command.
:return: Reference to the submitted command.
:since: API v3
"""
return self._cmd('installOozieShareLib', api_version=3)
def create_oozie_db(self):
"""Creates the Oozie Database Schema in the configured database
:return: Reference to the submitted command.
:since: API v2
"""
return self._cmd('createOozieDb', api_version=2)
def upgrade_oozie_db(self):
"""Upgrade Oozie Database schema as part of a major version upgrade
:return: Reference to the submitted command.
:since: API v6
"""
return self._cmd('oozieUpgradeDb', api_version=6)
def create_hive_metastore_tables(self):
"""Creates the Hive metastore tables in the configured database
Will do nothing if tables already exist. Will not perform an upgrade.
:return: Reference to the submitted command.
:since: API v3
"""
return self._cmd('hiveCreateMetastoreDatabaseTables', api_version=3)
def create_hive_warehouse(self):
"""Creates the Hive warehouse directory in HDFS
:return: Reference to the submitted command.
:since: API v3
"""
return self._cmd('hiveCreateHiveWarehouse')
def create_hive_userdir(self):
"""Creates the Hive user directory in HDFS
:return: Reference to the submitted command.
:since: API v4
"""
return self._cmd('hiveCreateHiveUserDir')
def enable_nn_ha(self, active_name, standby_host_id, nameservice, jns,
standby_name_dir_list=None, qj_name=None,
standby_name=None, active_fc_name=None,
standby_fc_name=None, zk_service_name=None,
force_init_znode=True,
clear_existing_standby_name_dirs=True,
clear_existing_jn_edits_dir=True):
"""Enable High Availability (HA) with Auto-Failover for HDFS NameNode
@param active_name: Name of Active NameNode.
@param standby_host_id: ID of host where Standby NameNode will be
created.
@param nameservice: Nameservice to be used while enabling HA.
Optional if Active NameNode already has this
config set.
@param jns: List of Journal Nodes to be created during the command.
Each element of the list must be a dict containing the
following items:
- jns['jnHostId']: ID of the host where the new JournalNode
will be created.
- jns['jnName']: Name of the JournalNode role (optional)
- jns['jnEditsDir']: Edits dir of the JournalNode. Can be
omitted if the config is already set
at RCG level.
@param standby_name_dir_list: List of directories for the new Standby
NameNode. If not provided then it will
use same dirs as Active NameNode.
@param qj_name: Name of the journal located on each JournalNodes'
filesystem. This can be optionally provided if the
config hasn't been already set for the Active NameNode.
If this isn't provided and Active NameNode doesn't
also have the config, then nameservice is used by
default.
@param standby_name: Name of the Standby NameNode role to be created
(Optional).
@param active_fc_name: Name of the Active Failover Controller role to
be created (Optional).
@param standby_fc_name: Name of the Standby Failover Controller role to
be created (Optional).
@param zk_service_name: Name of the ZooKeeper service to use for auto-
failover. If HDFS service already depends on a
ZooKeeper service then that ZooKeeper service
will be used for auto-failover and in that case
this parameter can either be omitted or should
be the same ZooKeeper service.
@param force_init_znode: Indicates if the ZNode should be force
initialized if it is already present. Useful
while re-enabling High Availability. (Default:
TRUE)
@param clear_existing_standby_name_dirs: Indicates if the existing name
directories for Standby
NameNode should be cleared
during the workflow.
Useful while re-enabling High
Availability. (Default: TRUE)
@param clear_existing_jn_edits_dir: Indicates if the existing edits
directories for the JournalNodes
for the specified nameservice
should be cleared during the
workflow. Useful while re-enabling
High Availability. (Default: TRUE)
@return: Reference to the submitted command.
@since: API v6
"""
args = dict(
activeNnName=active_name,
standbyNnName=standby_name,
standbyNnHostId=standby_host_id,
standbyNameDirList=standby_name_dir_list,
nameservice=nameservice,
qjName=qj_name,
activeFcName=active_fc_name,
standbyFcName=standby_fc_name,
zkServiceName=zk_service_name,
forceInitZNode=force_init_znode,
clearExistingStandbyNameDirs=clear_existing_standby_name_dirs,
clearExistingJnEditsDir=clear_existing_jn_edits_dir,
jns=jns
)
return self._cmd('hdfsEnableNnHa', data=args, api_version=6)
def enable_rm_ha(self, new_rm_host_id, zk_service_name=None):
"""Enable high availability for a YARN ResourceManager.
@param new_rm_host_id: id of the host where the second ResourceManager
will be added.
@param zk_service_name: Name of the ZooKeeper service to use for auto-
failover. If YARN service depends on a
ZooKeeper service then that ZooKeeper service
will be used for auto-failover and in that case
this parameter can be omitted.
@return: Reference to the submitted command.
@since: API v6
"""
args = dict(
newRmHostId=new_rm_host_id,
zkServiceName=zk_service_name
)
return self._cmd('enableRmHa', data=args)
class ApiServiceSetupInfo(ApiService):
_ATTRIBUTES = {
'name': None,
'type': None,
'config': types.Attr(types.ApiConfig),
'roles': types.Attr(roles.ApiRole),
}
def __init__(self, name=None, type=None,
config=None, roles=None):
# The BaseApiObject expects a resource_root, which we don't care about
resource_root = None
# Unfortunately, the json key is called "type". So our input arg
        # needs to be called "type" as well, despite shadowing the python builtin.
types.BaseApiObject.init(self, None, locals())
def set_config(self, config):
"""Set the service configuration
:param config: A dictionary of config key/value
"""
if self.config is None:
self.config = {}
self.config.update(types.config_to_api_list(config))
def add_role_info(self, role_name, role_type, host_id, config=None):
"""Add a role info
The role will be created along with the service setup.
:param role_name: Role name
:param role_type: Role type
:param host_id: The host where the role should run
:param config: (Optional) A dictionary of role config values
"""
if self.roles is None:
self.roles = []
        api_config_list = (types.config_to_api_list(config)
                           if config is not None else None)
self.roles.append({
'name': role_name,
'type': role_type,
'hostRef': {'hostId': host_id},
'config': api_config_list})
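
# Illustrative usage sketch, not part of the original file; role and host
# identifiers are placeholders. ApiServiceSetupInfo bundles a service
# definition with its roles so the Cloudera Management Service can be
# created in one call.
setup = ApiServiceSetupInfo(name='mgmt', type='MGMT')
setup.add_role_info('mgmt-SERVICEMONITOR', 'SERVICEMONITOR', 'host-uuid-1')
cm = api.get_cloudera_manager()        # see ClouderaManager above
cm.create_mgmt_service(setup)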

View File

@ -1,683 +0,0 @@
# Copyright (c) 2014 Intel Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# The contents of this file are mainly copied from cm_api sources,
# released by Cloudera. Code not used by the Sahara CDH plugin has been
# removed. You can find the original code at
#
# https://github.com/cloudera/cm_api/tree/master/python/src/cm_api
#
# To satisfy the pep8 and python3 tests, we made some changes to the code.
# We also changed some imports to use Sahara-inherited classes.
import copy
import datetime
import time
from oslo_serialization import jsonutils as json
from oslo_utils import reflection
from sahara.plugins import context
from sahara_plugin_cdh.i18n import _
from sahara_plugin_cdh.plugins.cdh import exceptions as ex
class Attr(object):
"""Base Attribute
Encapsulates information about an attribute in the JSON encoding of the
object. It identifies properties of the attribute such as whether it's
read-only, its type, etc.
"""
DATE_FMT = "%Y-%m-%dT%H:%M:%S.%fZ"
def __init__(self, atype=None, rw=True, is_api_list=False):
self._atype = atype
self._is_api_list = is_api_list
self.rw = rw
def to_json(self, value, preserve_ro):
"""Returns the JSON encoding of the given attribute value
If the value has a 'to_json_dict' object, that method is called.
Otherwise, the following values are returned for each input type:
- datetime.datetime: string with the API representation of a date.
- dictionary: if 'atype' is ApiConfig, a list of ApiConfig objects.
- python list: python list (or ApiList) with JSON encoding of items
- the raw value otherwise
"""
if hasattr(value, 'to_json_dict'):
return value.to_json_dict(preserve_ro)
elif isinstance(value, dict) and self._atype == ApiConfig:
return config_to_api_list(value)
elif isinstance(value, datetime.datetime):
return value.strftime(self.DATE_FMT)
elif isinstance(value, list) or isinstance(value, tuple):
if self._is_api_list:
return ApiList(value).to_json_dict()
else:
return [self.to_json(x, preserve_ro) for x in value]
else:
return value
def from_json(self, resource_root, data):
"""Parses the given JSON value into an appropriate python object
This means:
- a datetime.datetime if 'atype' is datetime.datetime
- a converted config dictionary or config list if 'atype' is ApiConfig
- if the attr is an API list, an ApiList with instances of 'atype'
- an instance of 'atype' if it has a 'from_json_dict' method
- a python list with decoded versions of the member objects if the
input is a python list.
- the raw value otherwise
"""
if data is None:
return None
if self._atype == datetime.datetime:
return datetime.datetime.strptime(data, self.DATE_FMT)
elif self._atype == ApiConfig:
# ApiConfig is special. We want a python dictionary for summary
# views, but an ApiList for full views. Try to detect each case
# from the JSON data.
if not data['items']:
return {}
first = data['items'][0]
return json_to_config(data, len(first) == 2)
elif self._is_api_list:
return ApiList.from_json_dict(data, resource_root, self._atype)
elif isinstance(data, list):
return [self.from_json(resource_root, x) for x in data]
elif hasattr(self._atype, 'from_json_dict'):
return self._atype.from_json_dict(data, resource_root)
else:
return data
class ROAttr(Attr):
"""Subclass that just defines the attribute as read-only."""
def __init__(self, atype=None, is_api_list=False):
Attr.__init__(self, atype=atype, rw=False, is_api_list=is_api_list)
def check_api_version(resource_root, min_version):
"""Check API version
Checks if the resource_root's API version is at least the given minimum
version.
"""
if resource_root.version < min_version:
raise ex.CMApiVersionError(
_("API version %(minv)s is required but %(acv)s is in use.")
% {'minv': min_version, 'acv': resource_root.version})
def call(method, path, ret_type,
ret_is_list=False, data=None, params=None, api_version=1):
"""Call a resource method
Generic function for calling a resource method and automatically dealing
with serialization of parameters and deserialization of return values.
:param method: method to call (must be bound to a resource;
e.g., "resource_root.get").
:param path: the full path of the API method to call.
:param ret_type: return type of the call.
:param ret_is_list: whether the return type is an ApiList.
:param data: Optional data to send as payload to the call.
:param params: Optional query parameters for the call.
:param api_version: minimum API version for the call.
"""
check_api_version(method.__self__, api_version)
if data is not None:
data = json.dumps(Attr(is_api_list=True).to_json(data, False))
ret = method(path, data=data, params=params)
else:
ret = method(path, params=params)
if ret_type is None:
return
elif ret_is_list:
return ApiList.from_json_dict(ret, method.__self__, ret_type)
elif isinstance(ret, list):
return [ret_type.from_json_dict(x, method.__self__) for x in ret]
else:
return ret_type.from_json_dict(ret, method.__self__)
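# Usage sketch (hypothetical bound method and path, shown only for
# illustration):
#
# cmd = call(resource_root.post, '/clusters/c1/commands/stop', ApiCommand)
#
# Any payload is serialized through Attr.to_json before the request, and
# the JSON response is decoded back into the declared return type.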
class BaseApiObject(object):
"""The BaseApiObject helps with (de)serialization from/to JSON
The derived class has two ways of defining custom attributes:
- Overwriting the '_ATTRIBUTES' field with the attribute dictionary
- Overriding the _get_attributes() method, in case static initialization of
the above field is not possible.
It's recommended that the _get_attributes() implementation do caching to
avoid computing the dictionary on every invocation.
The derived class's constructor must call the base class's init() static
method. All constructor arguments (aside from self and resource_root) must
be keyword arguments with default values (typically None), or
from_json_dict() will not work.
"""
_ATTRIBUTES = {}
_WHITELIST = ('_resource_root', '_attributes')
@classmethod
def _get_attributes(cls):
"""Get an attribute dictionary
Returns a map of property names to attr instances (or None for default
attribute behavior) describing the properties of the object.
By default, this method will return the class's _ATTRIBUTES field.
Classes can override this method to do custom initialization of the
attributes when needed.
"""
return cls._ATTRIBUTES
@staticmethod
def init(obj, resource_root, attrs=None):
"""Wraper of real constructor
Wraper around the real constructor to avoid issues with the 'self'
argument. Call like this, from a subclass's constructor:
- BaseApiObject.init(self, locals())
"""
# This works around http://bugs.python.org/issue2646
# We use unicode strings as keys in kwargs.
str_attrs = {}
if attrs:
for k, v in attrs.items():
if k not in ('self', 'resource_root'):
str_attrs[k] = v
BaseApiObject.__init__(obj, resource_root, **str_attrs)
def __init__(self, resource_root, **attrs):
"""Init method
Initializes internal state and sets all known writable properties of
the object to None. Then initializes the properties given in the
provided attributes dictionary.
:param resource_root: API resource object.
:param attrs: optional dictionary of attributes to set. This should
only contain r/w attributes.
"""
self._resource_root = resource_root
for name, attr in self._get_attributes().items():
object.__setattr__(self, name, None)
if attrs:
self._set_attrs(attrs, from_json=False)
def _set_attrs(self, attrs, allow_ro=False, from_json=True):
"""Set attributes from dictionary
Sets all the attributes in the dictionary. Optionally, allows setting
read-only attributes (e.g. when deserializing from JSON) and skipping
JSON deserialization of values.
"""
for k, v in attrs.items():
attr = self._check_attr(k, allow_ro)
if attr and from_json:
v = attr.from_json(self._get_resource_root(), v)
object.__setattr__(self, k, v)
def __setattr__(self, name, val):
if name not in BaseApiObject._WHITELIST:
self._check_attr(name, False)
object.__setattr__(self, name, val)
def _check_attr(self, name, allow_ro):
cls_name = reflection.get_class_name(self, fully_qualified=False)
if name not in self._get_attributes():
raise ex.CMApiAttributeError(
_('Invalid property %(attname)s for class %(classname)s.')
% {'attname': name, 'classname': cls_name})
attr = self._get_attributes()[name]
if not allow_ro and attr and not attr.rw:
raise ex.CMApiAttributeError(
_('Attribute %(attname)s of class %(classname)s '
'is read only.')
% {'attname': name, 'classname': cls_name})
return attr
def _get_resource_root(self):
return self._resource_root
def _update(self, api_obj):
"""Copy state from api_obj to this object."""
if not isinstance(self, api_obj.__class__):
raise ex.CMApiValueError(
_("Class %(class1)s does not derive from %(class2)s; "
"cannot update attributes.")
% {'class1': self.__class__, 'class2': api_obj.__class__})
for name in self._get_attributes().keys():
try:
val = getattr(api_obj, name)
setattr(self, name, val)
except AttributeError:
pass
def to_json_dict(self, preserve_ro=False):
dic = {}
for name, attr in self._get_attributes().items():
if not preserve_ro and attr and not attr.rw:
continue
try:
value = getattr(self, name)
if value is not None:
if attr:
dic[name] = attr.to_json(value, preserve_ro)
else:
dic[name] = value
except AttributeError:
pass
return dic
def __str__(self):
"""Give a printable format of an attribute
Default implementation of __str__. Uses the type name and the first
attribute retrieved from the attribute map to create the string.
"""
cls_name = reflection.get_class_name(self, fully_qualified=False)
name = list(self._get_attributes().keys())[0]
value = getattr(self, name, None)
return "<%s>: %s = %s" % (cls_name, name, value)
@classmethod
def from_json_dict(cls, dic, resource_root):
obj = cls(resource_root)
obj._set_attrs(dic, allow_ro=True)
return obj
class BaseApiResource(BaseApiObject):
"""Base ApiResource
A specialization of BaseApiObject that provides some utility methods for
resources. This class allows easier serialization / deserialization of
parameters and return values.
"""
def _api_version(self):
"""Get API version
Returns the minimum API version for this resource. Defaults to 1.
"""
return 1
def _path(self):
"""Get resource path
Returns the path to the resource.
e.g., for a service 'foo' in cluster 'bar', this should return
'/clusters/bar/services/foo'.
"""
raise NotImplementedError
def _require_min_api_version(self, version):
"""Check minimum version requirement
Raise an exception if the version of the api is less than the given
version.
:param version: The minimum required version.
"""
actual_version = self._get_resource_root().version
version = max(version, self._api_version())
if actual_version < version:
raise ex.CMApiVersionError(
_("API version %(minv)s is required but %(acv)s is in use.")
% {'minv': version, 'acv': actual_version})
def _cmd(self, command, data=None, params=None, api_version=1):
"""Invoke a command on the resource
Invokes a command on the resource. Commands are expected to be under
the "commands/" sub-resource.
"""
return self._post("commands/" + command, ApiCommand,
data=data, params=params, api_version=api_version)
def _get_config(self, rel_path, view, api_version=1):
"""Get resource configurations
Retrieves an ApiConfig list from the given relative path.
"""
self._require_min_api_version(api_version)
params = dict(view=view) if view else None
resp = self._get_resource_root().get(self._path() + '/' + rel_path,
params=params)
return json_to_config(resp, view == 'full')
def _update_config(self, rel_path, config, api_version=1):
self._require_min_api_version(api_version)
resp = self._get_resource_root().put(self._path() + '/' + rel_path,
data=config_to_json(config))
return json_to_config(resp, False)
def _delete(self, rel_path, ret_type, ret_is_list=False, params=None,
api_version=1):
return self._call('delete', rel_path, ret_type, ret_is_list, None,
params, api_version)
def _get(self, rel_path, ret_type, ret_is_list=False, params=None,
api_version=1):
return self._call('get', rel_path, ret_type, ret_is_list, None,
params, api_version)
def _post(self, rel_path, ret_type, ret_is_list=False, data=None,
params=None, api_version=1):
return self._call('post', rel_path, ret_type, ret_is_list, data,
params, api_version)
def _put(self, rel_path, ret_type, ret_is_list=False, data=None,
params=None, api_version=1):
return self._call('put', rel_path, ret_type, ret_is_list, data,
params, api_version)
def _call(self, method, rel_path, ret_type, ret_is_list=False, data=None,
params=None, api_version=1):
path = self._path()
if rel_path:
path += '/' + rel_path
return call(getattr(self._get_resource_root(), method),
path,
ret_type,
ret_is_list,
data,
params,
api_version)
class ApiList(BaseApiObject):
"""A list of some api object"""
LIST_KEY = "items"
def __init__(self, objects, resource_root=None, **attrs):
BaseApiObject.__init__(self, resource_root, **attrs)
# Bypass checks in BaseApiObject.__setattr__
object.__setattr__(self, 'objects', objects)
def __str__(self):
return ("<ApiList>(%d): [%s]" % (len(self.objects),
", ".join([str(item) for item in self.objects])))
def to_json_dict(self, preserve_ro=False):
ret = BaseApiObject.to_json_dict(self, preserve_ro)
attr = Attr()
ret[ApiList.LIST_KEY] = [attr.to_json(x, preserve_ro)
for x in self.objects]
return ret
def __len__(self):
return self.objects.__len__()
def __iter__(self):
return self.objects.__iter__()
def __getitem__(self, i):
return self.objects.__getitem__(i)
def __getslice__(self, i, j):
return self.objects.__getslice__(i, j)
@classmethod
def from_json_dict(cls, dic, resource_root, member_cls=None):
if not member_cls:
member_cls = cls._MEMBER_CLASS
attr = Attr(atype=member_cls)
items = []
if ApiList.LIST_KEY in dic:
items = [attr.from_json(resource_root, x)
for x in dic[ApiList.LIST_KEY]]
ret = cls(items)
# If the class declares custom attributes, populate them based on the
# input dict. The check avoids extra overhead for the common case,
# where we just have a plain list. _set_attrs() also does not
# understand the "items" attribute, so it can't be in the input data.
if cls._ATTRIBUTES:
if ApiList.LIST_KEY in dic:
dic = copy.copy(dic)
del dic[ApiList.LIST_KEY]
ret._set_attrs(dic, allow_ro=True)
return ret
class ApiHostRef(BaseApiObject):
_ATTRIBUTES = {
'hostId': None,
}
def __init__(self, resource_root, hostId=None):
BaseApiObject.init(self, resource_root, locals())
def __str__(self):
return "<ApiHostRef>: %s" % (self.hostId)
class ApiServiceRef(BaseApiObject):
_ATTRIBUTES = {
'clusterName': None,
'serviceName': None,
'peerName': None,
}
def __init__(self, resource_root, serviceName=None, clusterName=None,
peerName=None):
BaseApiObject.init(self, resource_root, locals())
class ApiClusterRef(BaseApiObject):
_ATTRIBUTES = {
'clusterName': None,
}
def __init__(self, resource_root, clusterName=None):
BaseApiObject.init(self, resource_root, locals())
class ApiRoleRef(BaseApiObject):
_ATTRIBUTES = {
'clusterName': None,
'serviceName': None,
'roleName': None,
}
def __init__(self, resource_root, serviceName=None, roleName=None,
clusterName=None):
BaseApiObject.init(self, resource_root, locals())
class ApiRoleConfigGroupRef(BaseApiObject):
_ATTRIBUTES = {
'roleConfigGroupName': None,
}
def __init__(self, resource_root, roleConfigGroupName=None):
BaseApiObject.init(self, resource_root, locals())
class ApiCommand(BaseApiObject):
SYNCHRONOUS_COMMAND_ID = -1
@classmethod
def _get_attributes(cls):
if '_ATTRIBUTES' not in cls.__dict__:
cls._ATTRIBUTES = {
'id': ROAttr(),
'name': ROAttr(),
'startTime': ROAttr(datetime.datetime),
'endTime': ROAttr(datetime.datetime),
'active': ROAttr(),
'success': ROAttr(),
'resultMessage': ROAttr(),
'clusterRef': ROAttr(ApiClusterRef),
'serviceRef': ROAttr(ApiServiceRef),
'roleRef': ROAttr(ApiRoleRef),
'hostRef': ROAttr(ApiHostRef),
'children': ROAttr(ApiCommand, is_api_list=True),
'parent': ROAttr(ApiCommand),
'resultDataUrl': ROAttr(),
'canRetry': ROAttr(),
}
return cls._ATTRIBUTES
def __str__(self):
return ("<ApiCommand>: '%s' (id: %s; active: %s; success: %s)"
% (self.name, self.id, self.active, self.success))
def _path(self):
return '/commands/%d' % self.id
def fetch(self):
"""Retrieve updated data about the command from the server
:return: A new ApiCommand object.
"""
if self.id == ApiCommand.SYNCHRONOUS_COMMAND_ID:
return self
resp = self._get_resource_root().get(self._path())
return ApiCommand.from_json_dict(resp, self._get_resource_root())
def wait(self, timeout=None):
"""Wait for command to finish
:param timeout: (Optional) Max amount of time (in seconds) to wait.
Wait forever by default.
:return: The final ApiCommand object, containing the last known state.
The command may still be running in case of timeout.
"""
if self.id == ApiCommand.SYNCHRONOUS_COMMAND_ID:
return self
SLEEP_SEC = 5
if timeout is None:
deadline = None
else:
deadline = time.time() + timeout
while True:
cmd = self.fetch()
if not cmd.active:
return cmd
if deadline is not None:
now = time.time()
if deadline < now:
return cmd
else:
context.sleep(min(SLEEP_SEC, deadline - now))
else:
context.sleep(SLEEP_SEC)
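# Typical polling pattern built on wait() (illustrative):
#
# cmd = service.restart()      # returns an ApiCommand immediately
# cmd = cmd.wait(timeout=600)  # re-fetches the command every 5 seconds
# if cmd.active or not cmd.success:
#     ...  # timed out or failed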
def abort(self):
"""Abort a running command
:return: A new ApiCommand object with the updated information.
"""
if self.id == ApiCommand.SYNCHRONOUS_COMMAND_ID:
return self
path = self._path() + '/abort'
resp = self._get_resource_root().post(path)
return ApiCommand.from_json_dict(resp, self._get_resource_root())
class ApiBulkCommandList(ApiList):
_ATTRIBUTES = {
'errors': ROAttr(),
}
_MEMBER_CLASS = ApiCommand
#
# Configuration helpers.
#
class ApiConfig(BaseApiObject):
_ATTRIBUTES = {
'name': None,
'value': None,
'required': ROAttr(),
'default': ROAttr(),
'displayName': ROAttr(),
'description': ROAttr(),
'relatedName': ROAttr(),
'validationState': ROAttr(),
'validationMessage': ROAttr(),
}
def __init__(self, resource_root, name=None, value=None):
BaseApiObject.init(self, resource_root, locals())
def __str__(self):
return "<ApiConfig>: %s = %s" % (self.name, self.value)
def config_to_api_list(dic):
"""Convert a python dictionary into an ApiConfig list
Converts a python dictionary into a list containing the proper
ApiConfig encoding for configuration data.
:param dic: Key-value pairs to convert.
:return: JSON dictionary of an ApiConfig list (*not* an ApiList).
"""
config = []
for k, v in dic.items():
config.append({'name': k, 'value': v})
return {ApiList.LIST_KEY: config}
def config_to_json(dic):
"""Converts a python dictionary into a JSON payload
The payload matches the expected "apiConfig list" type used to update
configuration parameters using the API.
:param dic: Key-value pairs to convert.
:return: String with the JSON-encoded data.
"""
return json.dumps(config_to_api_list(dic))
def json_to_config(dic, full=False):
"""Converts a JSON-decoded config dictionary to a python dictionary
When materializing the full view, the values in the dictionary will be
instances of ApiConfig, instead of strings.
:param dic: JSON-decoded config dictionary.
:param full: Whether to materialize the full view of the config data.
:return: Python dictionary with config data.
"""
config = {}
for entry in dic['items']:
k = entry['name']
if full:
config[k] = ApiConfig.from_json_dict(entry, None)
else:
config[k] = entry.get('value')
return config
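# Illustrative example of the two configuration representations (the key
# and value are made up):
#
# >>> config_to_api_list({'dfs_replication': '3'})
# {'items': [{'name': 'dfs_replication', 'value': '3'}]}
# >>> json_to_config({'items': [{'name': 'dfs_replication', 'value': '3'}]})
# {'dfs_replication': '3'}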

View File

@ -1,62 +0,0 @@
# Copyright (c) 2015 Intel Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# The contents of this file are mainly copied from the cm_api sources
# released by Cloudera. Code not used by the Sahara CDH plugin has been
# removed. You can find the original code at
#
# https://github.com/cloudera/cm_api/tree/master/python/src/cm_api
#
# To satisfy the pep8 and python3 tests, we made some changes to the code.
# We also changed some imports to use Sahara inherited classes.
from sahara_plugin_cdh.plugins.cdh.client import types
USERS_PATH = "/users"
def get_user(resource_root, username):
"""Look up a user by username.
@param resource_root: The root Resource object
@param username: Username to look up
@return: An ApiUser object
"""
return types.call(resource_root.get,
'%s/%s' % (USERS_PATH, username), ApiUser)
def update_user(resource_root, user):
"""Update a user.
Replaces the user's details with those provided.
@param resource_root: The root Resource object
@param user: An ApiUser object
@return: An ApiUser object
"""
return types.call(resource_root.put,
'%s/%s' % (USERS_PATH, user.name), ApiUser, data=user)
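# Usage sketch (hypothetical ApiResource `api`), mirroring how the plugin
# rotates the default admin password:
#
# user = get_user(api, 'admin')
# user.password = 'new-secret'
# update_user(api, user)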
class ApiUser(types.BaseApiResource):
_ATTRIBUTES = {
'name': None,
'password': None,
'roles': None,
}
def __init__(self, resource_root, name=None, password=None, roles=None):
types.BaseApiObject.init(self, resource_root, locals())

View File

@ -1,835 +0,0 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import functools
from sahara.plugins import context
from sahara.plugins import exceptions as ex
from sahara.plugins import kerberos
from sahara.plugins import swift_helper
from sahara.plugins import topology_helper as t_helper
from sahara.plugins import utils
from sahara_plugin_cdh.i18n import _
from sahara_plugin_cdh.plugins.cdh.client import api_client
from sahara_plugin_cdh.plugins.cdh.client import services
from sahara_plugin_cdh.plugins.cdh import db_helper as dh
from sahara_plugin_cdh.plugins.cdh import plugin_utils
from sahara_plugin_cdh.plugins.cdh import validation
HDFS_SERVICE_TYPE = 'HDFS'
YARN_SERVICE_TYPE = 'YARN'
OOZIE_SERVICE_TYPE = 'OOZIE'
HIVE_SERVICE_TYPE = 'HIVE'
HUE_SERVICE_TYPE = 'HUE'
SPARK_SERVICE_TYPE = 'SPARK_ON_YARN'
ZOOKEEPER_SERVICE_TYPE = 'ZOOKEEPER'
HBASE_SERVICE_TYPE = 'HBASE'
FLUME_SERVICE_TYPE = 'FLUME'
SENTRY_SERVICE_TYPE = 'SENTRY'
SOLR_SERVICE_TYPE = 'SOLR'
SQOOP_SERVICE_TYPE = 'SQOOP'
KS_INDEXER_SERVICE_TYPE = 'KS_INDEXER'
IMPALA_SERVICE_TYPE = 'IMPALA'
KMS_SERVICE_TYPE = 'KMS'
KAFKA_SERVICE_TYPE = 'KAFKA'
def cloudera_cmd(f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
for cmd in f(*args, **kwargs):
result = cmd.wait()
if not result.success:
if result.children is not None:
for c in result.children:
if not c.success:
raise ex.HadoopProvisionError(c.resultMessage)
else:
raise ex.HadoopProvisionError(result.resultMessage)
return wrapper
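# Usage sketch (hypothetical service object): a decorated function yields
# ApiCommand instances, and the wrapper blocks on each one, raising
# HadoopProvisionError if any command (or child command) fails.
#
# @cloudera_cmd
# def restart_service(service):
#     yield service.restart()
#
# restart_service(hdfs_service)  # returns only once the restart finished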
class ClouderaUtils(object):
CM_DEFAULT_USERNAME = 'admin'
CM_DEFAULT_PASSWD = 'admin'
CM_API_VERSION = 8
HDFS_SERVICE_NAME = 'hdfs01'
YARN_SERVICE_NAME = 'yarn01'
OOZIE_SERVICE_NAME = 'oozie01'
HIVE_SERVICE_NAME = 'hive01'
HUE_SERVICE_NAME = 'hue01'
SPARK_SERVICE_NAME = 'spark_on_yarn01'
ZOOKEEPER_SERVICE_NAME = 'zookeeper01'
HBASE_SERVICE_NAME = 'hbase01'
FLUME_SERVICE_NAME = 'flume01'
SOLR_SERVICE_NAME = 'solr01'
SQOOP_SERVICE_NAME = 'sqoop01'
KS_INDEXER_SERVICE_NAME = 'ks_indexer01'
IMPALA_SERVICE_NAME = 'impala01'
SENTRY_SERVICE_NAME = 'sentry01'
KMS_SERVICE_NAME = 'kms01'
KAFKA_SERVICE_NAME = 'kafka01'
NAME_SERVICE = 'nameservice01'
def __init__(self):
self.pu = plugin_utils.AbstractPluginUtils()
self.validator = validation.Validator
self.c_helper = None
def get_api_client_by_default_password(self, cluster):
manager_ip = self.pu.get_manager(cluster).management_ip
return api_client.ApiResource(manager_ip,
username=self.CM_DEFAULT_USERNAME,
password=self.CM_DEFAULT_PASSWD,
version=self.CM_API_VERSION)
def get_api_client(self, cluster, api_version=None):
manager_ip = self.pu.get_manager(cluster).management_ip
cm_password = dh.get_cm_password(cluster)
version = self.CM_API_VERSION if not api_version else api_version
return api_client.ApiResource(manager_ip,
username=self.CM_DEFAULT_USERNAME,
password=cm_password,
version=version)
def update_cloudera_password(self, cluster):
api = self.get_api_client_by_default_password(cluster)
user = api.get_user(self.CM_DEFAULT_USERNAME)
user.password = dh.get_cm_password(cluster)
api.update_user(user)
def get_cloudera_cluster(self, cluster):
api = self.get_api_client(cluster)
return api.get_cluster(cluster.name)
@cloudera_cmd
def start_cloudera_cluster(self, cluster):
cm_cluster = self.get_cloudera_cluster(cluster)
yield cm_cluster.start()
@cloudera_cmd
def stop_cloudera_cluster(self, cluster):
cm_cluster = self.get_cloudera_cluster(cluster)
yield cm_cluster.stop()
def start_instances(self, cluster):
self.start_cloudera_cluster(cluster)
@utils.event_wrapper(
True, step=_("Delete instances"), param=('cluster', 1))
def delete_instances(self, cluster, instances):
api = self.get_api_client(cluster)
cm_cluster = self.get_cloudera_cluster(cluster)
hosts = api.get_all_hosts(view='full')
hostnames_to_delete = [i.fqdn() for i in instances]
for host in hosts:
if host.hostname in hostnames_to_delete:
cm_cluster.remove_host(host.hostId)
api.delete_host(host.hostId)
@utils.event_wrapper(
True, step=_("Decommission nodes"), param=('cluster', 1))
def decommission_nodes(self, cluster, process,
decommission_roles, roles_to_delete=None):
service = self.get_service_by_role(process, cluster)
service.decommission(*decommission_roles).wait()
# not all roles should be decommissioned
if roles_to_delete:
decommission_roles.extend(roles_to_delete)
for role_name in decommission_roles:
service.delete_role(role_name)
@utils.event_wrapper(
True, step=_("Refresh DataNodes"), param=('cluster', 1))
def refresh_datanodes(self, cluster):
self._refresh_nodes(cluster, 'DATANODE', self.HDFS_SERVICE_NAME)
@utils.event_wrapper(
True, step=_("Refresh YARNNodes"), param=('cluster', 1))
def refresh_yarn_nodes(self, cluster):
self._refresh_nodes(cluster, 'NODEMANAGER', self.YARN_SERVICE_NAME)
@cloudera_cmd
def _refresh_nodes(self, cluster, process, service_name):
cm_cluster = self.get_cloudera_cluster(cluster)
service = cm_cluster.get_service(service_name)
nds = [n.name for n in service.get_roles_by_type(process)]
for nd in nds:
for st in service.refresh(nd):
yield st
@utils.event_wrapper(
True, step=_("Restart stale services"), param=('cluster', 1))
@cloudera_cmd
def restart_stale_services(self, cluster):
cm_cluster = self.get_cloudera_cluster(cluster)
yield cm_cluster.restart(
restart_only_stale_services=True,
redeploy_client_configuration=True)
@utils.event_wrapper(True, step=_("Deploy configs"), param=('cluster', 1))
@cloudera_cmd
def deploy_configs(self, cluster):
cm_cluster = self.get_cloudera_cluster(cluster)
yield cm_cluster.deploy_client_config()
def update_configs(self, instances):
# instances non-empty
utils.add_provisioning_step(
instances[0].cluster_id, _("Update configs"), len(instances))
with context.PluginsThreadGroup() as tg:
for instance in instances:
tg.spawn("update-configs-%s" % instance.instance_name,
self._update_configs, instance)
context.sleep(1)
@utils.event_wrapper(True)
@cloudera_cmd
def _update_configs(self, instance):
for process in instance.node_group.node_processes:
process = self.pu.convert_role_showname(process)
service = self.get_service_by_role(process, instance=instance)
yield service.deploy_client_config(self.pu.get_role_name(instance,
process))
def get_mgmt_service(self, cluster):
api = self.get_api_client(cluster)
cm = api.get_cloudera_manager()
mgmt_service = cm.get_service()
return mgmt_service
@cloudera_cmd
def restart_mgmt_service(self, cluster):
service = self.get_mgmt_service(cluster)
yield service.restart()
@cloudera_cmd
def start_service(self, service):
yield service.start()
@cloudera_cmd
def stop_service(self, service):
yield service.stop()
@cloudera_cmd
def start_roles(self, service, *role_names):
for role in service.start_roles(*role_names):
yield role
@utils.event_wrapper(
True, step=_("Create mgmt service"), param=('cluster', 1))
def create_mgmt_service(self, cluster):
api = self.get_api_client(cluster)
cm = api.get_cloudera_manager()
setup_info = services.ApiServiceSetupInfo()
manager = self.pu.get_manager(cluster)
hostname = manager.fqdn()
processes = ['SERVICEMONITOR', 'HOSTMONITOR',
'EVENTSERVER', 'ALERTPUBLISHER']
for proc in processes:
setup_info.add_role_info(self.pu.get_role_name(manager, proc),
proc, hostname)
cm.create_mgmt_service(setup_info)
cm.hosts_start_roles([hostname])
def get_service_by_role(self, role, cluster=None, instance=None):
if cluster:
cm_cluster = self.get_cloudera_cluster(cluster)
elif instance:
cm_cluster = self.get_cloudera_cluster(instance.cluster)
else:
raise ValueError(_("'cluster' or 'instance' argument is missing"))
if role in ['NAMENODE', 'DATANODE', 'SECONDARYNAMENODE',
'HDFS_GATEWAY']:
return cm_cluster.get_service(self.HDFS_SERVICE_NAME)
elif role in ['RESOURCEMANAGER', 'NODEMANAGER', 'JOBHISTORY',
'YARN_GATEWAY']:
return cm_cluster.get_service(self.YARN_SERVICE_NAME)
elif role in ['OOZIE_SERVER']:
return cm_cluster.get_service(self.OOZIE_SERVICE_NAME)
elif role in ['HIVESERVER2', 'HIVEMETASTORE', 'WEBHCAT']:
return cm_cluster.get_service(self.HIVE_SERVICE_NAME)
elif role in ['HUE_SERVER']:
return cm_cluster.get_service(self.HUE_SERVICE_NAME)
elif role in ['SPARK_YARN_HISTORY_SERVER']:
return cm_cluster.get_service(self.SPARK_SERVICE_NAME)
elif role in ['SERVER']:
return cm_cluster.get_service(self.ZOOKEEPER_SERVICE_NAME)
elif role in ['MASTER', 'REGIONSERVER']:
return cm_cluster.get_service(self.HBASE_SERVICE_NAME)
elif role in ['AGENT']:
return cm_cluster.get_service(self.FLUME_SERVICE_NAME)
elif role in ['SENTRY_SERVER']:
return cm_cluster.get_service(self.SENTRY_SERVICE_NAME)
elif role in ['SQOOP_SERVER']:
return cm_cluster.get_service(self.SQOOP_SERVICE_NAME)
elif role in ['SOLR_SERVER']:
return cm_cluster.get_service(self.SOLR_SERVICE_NAME)
elif role in ['HBASE_INDEXER']:
return cm_cluster.get_service(self.KS_INDEXER_SERVICE_NAME)
elif role in ['CATALOGSERVER', 'STATESTORE', 'IMPALAD', 'LLAMA']:
return cm_cluster.get_service(self.IMPALA_SERVICE_NAME)
elif role in ['KMS']:
return cm_cluster.get_service(self.KMS_SERVICE_NAME)
elif role in ['JOURNALNODE']:
return cm_cluster.get_service(self.HDFS_SERVICE_NAME)
elif role in ['YARN_STANDBYRM']:
return cm_cluster.get_service(self.YARN_SERVICE_NAME)
elif role in ['KAFKA_BROKER']:
return cm_cluster.get_service(self.KAFKA_SERVICE_NAME)
else:
raise ValueError(
_("Process %(process)s is not supported by CDH plugin") %
{'process': role})
@utils.event_wrapper(
True, step=_("First run cluster"), param=('cluster', 1))
@cloudera_cmd
def first_run(self, cluster):
cm_cluster = self.get_cloudera_cluster(cluster)
yield cm_cluster.first_run()
@utils.event_wrapper(
True, step=_("Create services"), param=('cluster', 1))
def create_services(self, cluster):
api = self.get_api_client(cluster)
cm_cluster = api.create_cluster(cluster.name,
fullVersion=cluster.hadoop_version)
if len(self.pu.get_zookeepers(cluster)) > 0:
cm_cluster.create_service(self.ZOOKEEPER_SERVICE_NAME,
ZOOKEEPER_SERVICE_TYPE)
cm_cluster.create_service(self.HDFS_SERVICE_NAME, HDFS_SERVICE_TYPE)
cm_cluster.create_service(self.YARN_SERVICE_NAME, YARN_SERVICE_TYPE)
cm_cluster.create_service(self.OOZIE_SERVICE_NAME, OOZIE_SERVICE_TYPE)
if self.pu.get_hive_metastore(cluster):
cm_cluster.create_service(self.HIVE_SERVICE_NAME,
HIVE_SERVICE_TYPE)
if self.pu.get_hue(cluster):
cm_cluster.create_service(self.HUE_SERVICE_NAME, HUE_SERVICE_TYPE)
if self.pu.get_spark_historyserver(cluster):
cm_cluster.create_service(self.SPARK_SERVICE_NAME,
SPARK_SERVICE_TYPE)
if self.pu.get_hbase_master(cluster):
cm_cluster.create_service(self.HBASE_SERVICE_NAME,
HBASE_SERVICE_TYPE)
if len(self.pu.get_flumes(cluster)) > 0:
cm_cluster.create_service(self.FLUME_SERVICE_NAME,
FLUME_SERVICE_TYPE)
if self.pu.get_sentry(cluster):
cm_cluster.create_service(self.SENTRY_SERVICE_NAME,
SENTRY_SERVICE_TYPE)
if len(self.pu.get_solrs(cluster)) > 0:
cm_cluster.create_service(self.SOLR_SERVICE_NAME,
SOLR_SERVICE_TYPE)
if self.pu.get_sqoop(cluster):
cm_cluster.create_service(self.SQOOP_SERVICE_NAME,
SQOOP_SERVICE_TYPE)
if len(self.pu.get_hbase_indexers(cluster)) > 0:
cm_cluster.create_service(self.KS_INDEXER_SERVICE_NAME,
KS_INDEXER_SERVICE_TYPE)
if self.pu.get_catalogserver(cluster):
cm_cluster.create_service(self.IMPALA_SERVICE_NAME,
IMPALA_SERVICE_TYPE)
if self.pu.get_kms(cluster):
cm_cluster.create_service(self.KMS_SERVICE_NAME,
KMS_SERVICE_TYPE)
if len(self.pu.get_kafka_brokers(cluster)) > 0:
cm_cluster.create_service(self.KAFKA_SERVICE_NAME,
KAFKA_SERVICE_TYPE)
def _agents_connected(self, instances, api):
hostnames = [i.fqdn() for i in instances]
hostnames_to_manager = [h.hostname for h in
api.get_all_hosts('full')]
for hostname in hostnames:
if hostname not in hostnames_to_manager:
return False
return True
@utils.event_wrapper(True, step=_("Await agents"), param=('cluster', 1))
def _await_agents(self, cluster, instances, timeout_config):
api = self.get_api_client(instances[0].cluster)
utils.plugin_option_poll(
cluster, self._agents_connected, timeout_config,
_("Await Cloudera agents"), 5, {
'instances': instances, 'api': api})
def await_agents(self, cluster, instances):
self._await_agents(cluster, instances,
self.c_helper.AWAIT_AGENTS_TIMEOUT)
@utils.event_wrapper(
True, step=_("Configure services"), param=('cluster', 1))
def configure_services(self, cluster):
cm_cluster = self.get_cloudera_cluster(cluster)
if len(self.pu.get_zookeepers(cluster)) > 0:
zookeeper = cm_cluster.get_service(self.ZOOKEEPER_SERVICE_NAME)
zookeeper.update_config(self._get_configs(ZOOKEEPER_SERVICE_TYPE,
cluster=cluster))
hdfs = cm_cluster.get_service(self.HDFS_SERVICE_NAME)
hdfs.update_config(self._get_configs(HDFS_SERVICE_TYPE,
cluster=cluster))
yarn = cm_cluster.get_service(self.YARN_SERVICE_NAME)
yarn.update_config(self._get_configs(YARN_SERVICE_TYPE,
cluster=cluster))
oozie = cm_cluster.get_service(self.OOZIE_SERVICE_NAME)
oozie.update_config(self._get_configs(OOZIE_SERVICE_TYPE,
cluster=cluster))
if self.pu.get_hive_metastore(cluster):
hive = cm_cluster.get_service(self.HIVE_SERVICE_NAME)
hive.update_config(self._get_configs(HIVE_SERVICE_TYPE,
cluster=cluster))
if self.pu.get_hue(cluster):
hue = cm_cluster.get_service(self.HUE_SERVICE_NAME)
hue.update_config(self._get_configs(HUE_SERVICE_TYPE,
cluster=cluster))
if self.pu.get_spark_historyserver(cluster):
spark = cm_cluster.get_service(self.SPARK_SERVICE_NAME)
spark.update_config(self._get_configs(SPARK_SERVICE_TYPE,
cluster=cluster))
if self.pu.get_hbase_master(cluster):
hbase = cm_cluster.get_service(self.HBASE_SERVICE_NAME)
hbase.update_config(self._get_configs(HBASE_SERVICE_TYPE,
cluster=cluster))
if len(self.pu.get_flumes(cluster)) > 0:
flume = cm_cluster.get_service(self.FLUME_SERVICE_NAME)
flume.update_config(self._get_configs(FLUME_SERVICE_TYPE,
cluster=cluster))
if self.pu.get_sentry(cluster):
sentry = cm_cluster.get_service(self.SENTRY_SERVICE_NAME)
sentry.update_config(self._get_configs(SENTRY_SERVICE_TYPE,
cluster=cluster))
if len(self.pu.get_solrs(cluster)) > 0:
solr = cm_cluster.get_service(self.SOLR_SERVICE_NAME)
solr.update_config(self._get_configs(SOLR_SERVICE_TYPE,
cluster=cluster))
if self.pu.get_sqoop(cluster):
sqoop = cm_cluster.get_service(self.SQOOP_SERVICE_NAME)
sqoop.update_config(self._get_configs(SQOOP_SERVICE_TYPE,
cluster=cluster))
if len(self.pu.get_hbase_indexers(cluster)) > 0:
ks_indexer = cm_cluster.get_service(self.KS_INDEXER_SERVICE_NAME)
ks_indexer.update_config(
self._get_configs(KS_INDEXER_SERVICE_TYPE, cluster=cluster))
if self.pu.get_catalogserver(cluster):
impala = cm_cluster.get_service(self.IMPALA_SERVICE_NAME)
impala.update_config(self._get_configs(IMPALA_SERVICE_TYPE,
cluster=cluster))
if self.pu.get_kms(cluster):
kms = cm_cluster.get_service(self.KMS_SERVICE_NAME)
kms.update_config(self._get_configs(KMS_SERVICE_TYPE,
cluster=cluster))
if len(self.pu.get_kafka_brokers(cluster)) > 0:
kafka = cm_cluster.get_service(self.KAFKA_SERVICE_NAME)
kafka.update_config(self._get_configs(KAFKA_SERVICE_TYPE,
cluster=cluster))
def configure_instances(self, instances, cluster=None):
# instances non-empty
utils.add_provisioning_step(
instances[0].cluster_id, _("Configure instances"), len(instances))
for inst in instances:
self.configure_instance(inst, cluster)
def get_roles_list(self, node_processes):
current = set(node_processes)
extra_roles = {
'YARN_GATEWAY': ["YARN_NODEMANAGER"],
'HDFS_GATEWAY': ['HDFS_NAMENODE', 'HDFS_DATANODE',
"HDFS_SECONDARYNAMENODE"]
}
for extra_role in extra_roles.keys():
valid_processes = extra_roles[extra_role]
for valid in valid_processes:
if valid in current:
current.add(extra_role)
break
return list(current)
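# Illustrative example (assuming an existing ClouderaUtils instance `cu`):
# a node group that runs a NodeManager implicitly receives the YARN
# gateway role as well.
#
# >>> sorted(cu.get_roles_list(['YARN_NODEMANAGER']))
# ['YARN_GATEWAY', 'YARN_NODEMANAGER']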
def get_role_type(self, process):
mapper = {
'YARN_GATEWAY': 'GATEWAY',
'HDFS_GATEWAY': 'GATEWAY',
}
return mapper.get(process, process)
@utils.event_wrapper(True)
def configure_instance(self, instance, cluster=None):
roles_list = self.get_roles_list(instance.node_group.node_processes)
for role in roles_list:
self._add_role(instance, role, cluster)
def _add_role(self, instance, process, cluster):
if process in ['CLOUDERA_MANAGER', 'HDFS_JOURNALNODE',
'YARN_STANDBYRM']:
return
process = self.pu.convert_role_showname(process)
service = self.get_service_by_role(process, instance=instance)
role_type = self.get_role_type(process)
role = service.create_role(self.pu.get_role_name(instance, process),
role_type, instance.fqdn())
role.update_config(self._get_configs(process, cluster,
instance=instance))
@cloudera_cmd
def restart_service(self, process, instance):
service = self.get_service_by_role(process, instance=instance)
yield service.restart()
def update_role_config(self, instance, process):
process = self.pu.convert_role_showname(process)
service = self.get_service_by_role(process, instance=instance)
api = self.get_api_client(instance.cluster)
hosts = api.get_all_hosts(view='full')
ihost_id = None
for host in hosts:
if instance.fqdn() == host.hostname:
ihost_id = host.hostId
break
role_type = self.get_role_type(process)
roles = service.get_roles_by_type(role_type)
for role in roles:
if role.hostRef.hostId == ihost_id:
role.update_config(
self._get_configs(role_type, instance=instance))
self.restart_service(process, instance)
@cloudera_cmd
def import_admin_credentials(self, cm, username, password):
yield cm.import_admin_credentials(username, password)
@cloudera_cmd
def configure_for_kerberos(self, cluster):
api = self.get_api_client(cluster, api_version=11)
cluster = api.get_cluster(cluster.name)
yield cluster.configure_for_kerberos()
def push_kerberos_configs(self, cluster):
manager = self.pu.get_manager(cluster)
kdc_host = kerberos.get_kdc_host(cluster, manager)
security_realm = kerberos.get_realm_name(cluster)
username = "%s@%s" % (kerberos.get_admin_principal(cluster),
kerberos.get_realm_name(cluster))
password = kerberos.get_server_password(cluster)
api = self.get_api_client(cluster)
cm = api.get_cloudera_manager()
cm.update_config({'SECURITY_REALM': security_realm,
'KDC_HOST': kdc_host})
self.import_admin_credentials(cm, username, password)
self.configure_for_kerberos(cluster)
self.deploy_configs(cluster)
def configure_rack_awareness(self, cluster):
if t_helper.is_data_locality_enabled():
self._configure_rack_awareness(cluster)
@utils.event_wrapper(
True, step=_("Configure rack awareness"), param=('cluster', 1))
def _configure_rack_awareness(self, cluster):
api = self.get_api_client(cluster)
topology = t_helper.generate_topology_map(
cluster, is_node_awareness=False)
for host in api.get_all_hosts():
host.rackId = topology[host.ipAddress]
host.put_host()
def full_cluster_stop(self, cluster):
self.stop_cloudera_cluster(cluster)
mgmt = self.get_mgmt_service(cluster)
self.stop_service(mgmt)
def full_cluster_start(self, cluster):
self.start_cloudera_cluster(cluster)
mgmt = self.get_mgmt_service(cluster)
self.start_service(mgmt)
def get_cloudera_manager_info(self, cluster):
mng = self.pu.get_manager(cluster)
info = {
'Cloudera Manager': {
'Web UI': 'http://%s:7180' % mng.get_ip_or_dns_name(),
'Username': 'admin',
'Password': dh.get_cm_password(cluster)
}
}
return info
@utils.event_wrapper(
True, step=_("Enable NameNode HA"), param=('cluster', 1))
@cloudera_cmd
def enable_namenode_ha(self, cluster):
standby_nn = self.pu.get_secondarynamenode(cluster)
standby_nn_host_name = standby_nn.fqdn()
jns = self.pu.get_jns(cluster)
jn_list = []
for index, jn in enumerate(jns):
jn_host_name = jn.fqdn()
jn_list.append({'jnHostId': jn_host_name,
'jnName': 'JN%i' % index,
'jnEditsDir': '/dfs/jn'
})
cm_cluster = self.get_cloudera_cluster(cluster)
hdfs = cm_cluster.get_service(self.HDFS_SERVICE_NAME)
nn = hdfs.get_roles_by_type('NAMENODE')[0]
yield hdfs.enable_nn_ha(active_name=nn.name,
standby_host_id=standby_nn_host_name,
nameservice=self.NAME_SERVICE, jns=jn_list
)
@utils.event_wrapper(
True, step=_("Enable ResourceManager HA"), param=('cluster', 1))
@cloudera_cmd
def enable_resourcemanager_ha(self, cluster):
new_rm = self.pu.get_stdb_rm(cluster)
new_rm_host_name = new_rm.fqdn()
cm_cluster = self.get_cloudera_cluster(cluster)
yarn = cm_cluster.get_service(self.YARN_SERVICE_NAME)
yield yarn.enable_rm_ha(new_rm_host_id=new_rm_host_name)
def _load_version_specific_instance_configs(self, instance, default_conf):
pass
def _get_configs(self, service, cluster=None, instance=None):
def get_hadoop_dirs(mount_points, suffix):
return ','.join([x + suffix for x in mount_points])
all_confs = {}
if cluster:
zk_count = self.validator.get_inst_count(cluster,
'ZOOKEEPER_SERVER')
hbm_count = self.validator.get_inst_count(cluster, 'HBASE_MASTER')
snt_count = self.validator.get_inst_count(cluster,
'SENTRY_SERVER')
ks_count =\
self.validator.get_inst_count(cluster,
'KEY_VALUE_STORE_INDEXER')
kms_count = self.validator.get_inst_count(cluster, 'KMS')
imp_count =\
self.validator.get_inst_count(cluster,
'IMPALA_CATALOGSERVER')
hive_count = self.validator.get_inst_count(cluster,
'HIVE_METASTORE')
slr_count = self.validator.get_inst_count(cluster, 'SOLR_SERVER')
sqp_count = self.validator.get_inst_count(cluster, 'SQOOP_SERVER')
core_site_safety_valve = ''
if self.pu.c_helper.is_swift_enabled(cluster):
configs = swift_helper.get_swift_configs()
confs = {c['name']: c['value'] for c in configs}
core_site_safety_valve = utils.create_elements_xml(confs)
all_confs = {
'HDFS': {
'zookeeper_service':
self.ZOOKEEPER_SERVICE_NAME if zk_count else '',
'dfs_block_local_path_access_user':
'impala' if imp_count else '',
'kms_service': self.KMS_SERVICE_NAME if kms_count else '',
'core_site_safety_valve': core_site_safety_valve
},
'HIVE': {
'mapreduce_yarn_service': self.YARN_SERVICE_NAME,
'sentry_service':
self.SENTRY_SERVICE_NAME if snt_count else '',
'zookeeper_service':
self.ZOOKEEPER_SERVICE_NAME if zk_count else ''
},
'OOZIE': {
'mapreduce_yarn_service': self.YARN_SERVICE_NAME,
'hive_service':
self.HIVE_SERVICE_NAME if hive_count else '',
'zookeeper_service':
self.ZOOKEEPER_SERVICE_NAME if zk_count else ''
},
'YARN': {
'hdfs_service': self.HDFS_SERVICE_NAME,
'zookeeper_service':
self.ZOOKEEPER_SERVICE_NAME if zk_count else ''
},
'HUE': {
'hive_service': self.HIVE_SERVICE_NAME,
'oozie_service': self.OOZIE_SERVICE_NAME,
'sentry_service':
self.SENTRY_SERVICE_NAME if snt_count else '',
'solr_service':
self.SOLR_SERVICE_NAME if slr_count else '',
'zookeeper_service':
self.ZOOKEEPER_SERVICE_NAME if zk_count else '',
'hbase_service':
self.HBASE_SERVICE_NAME if hbm_count else '',
'impala_service':
self.IMPALA_SERVICE_NAME if imp_count else '',
'sqoop_service':
self.SQOOP_SERVICE_NAME if sqp_count else ''
},
'SPARK_ON_YARN': {
'yarn_service': self.YARN_SERVICE_NAME
},
'HBASE': {
'hdfs_service': self.HDFS_SERVICE_NAME,
'zookeeper_service': self.ZOOKEEPER_SERVICE_NAME,
'hbase_enable_indexing': 'true' if ks_count else 'false',
'hbase_enable_replication':
'true' if ks_count else 'false'
},
'FLUME': {
'hdfs_service': self.HDFS_SERVICE_NAME,
'solr_service':
self.SOLR_SERVICE_NAME if slr_count else '',
'hbase_service':
self.HBASE_SERVICE_NAME if hbm_count else ''
},
'SENTRY': {
'hdfs_service': self.HDFS_SERVICE_NAME,
'sentry_server_config_safety_valve': (
self.c_helper.SENTRY_IMPALA_CLIENT_SAFETY_VALVE
if imp_count else '')
},
'SOLR': {
'hdfs_service': self.HDFS_SERVICE_NAME,
'zookeeper_service': self.ZOOKEEPER_SERVICE_NAME
},
'SQOOP': {
'mapreduce_yarn_service': self.YARN_SERVICE_NAME
},
'KS_INDEXER': {
'hbase_service': self.HBASE_SERVICE_NAME,
'solr_service': self.SOLR_SERVICE_NAME
},
'IMPALA': {
'hdfs_service': self.HDFS_SERVICE_NAME,
'hbase_service':
self.HBASE_SERVICE_NAME if hbm_count else '',
'hive_service': self.HIVE_SERVICE_NAME,
'sentry_service':
self.SENTRY_SERVICE_NAME if snt_count else '',
'zookeeper_service':
self.ZOOKEEPER_SERVICE_NAME if zk_count else ''
}
}
hive_confs = {
'HIVE': {
'hive_metastore_database_type': 'postgresql',
'hive_metastore_database_host':
self.pu.get_manager(cluster).internal_ip,
'hive_metastore_database_port': '7432',
'hive_metastore_database_password':
dh.get_hive_db_password(cluster)
}
}
hue_confs = {
'HUE': {
'hue_webhdfs': self.pu.get_role_name(
self.pu.get_namenode(cluster), 'NAMENODE')
}
}
sentry_confs = {
'SENTRY': {
'sentry_server_database_type': 'postgresql',
'sentry_server_database_host':
self.pu.get_manager(cluster).internal_ip,
'sentry_server_database_port': '7432',
'sentry_server_database_password':
dh.get_sentry_db_password(cluster)
}
}
kafka_confs = {
'KAFKA': {
'zookeeper_service': self.ZOOKEEPER_SERVICE_NAME
}
}
all_confs = utils.merge_configs(all_confs, hue_confs)
all_confs = utils.merge_configs(all_confs, hive_confs)
all_confs = utils.merge_configs(all_confs, sentry_confs)
all_confs = utils.merge_configs(all_confs, kafka_confs)
all_confs = utils.merge_configs(all_confs, cluster.cluster_configs)
if instance:
snt_count = self.validator.get_inst_count(instance.cluster,
'SENTRY_SERVER')
paths = instance.storage_paths()
instance_default_confs = {
'NAMENODE': {
'dfs_name_dir_list': get_hadoop_dirs(paths, '/fs/nn')
},
'SECONDARYNAMENODE': {
'fs_checkpoint_dir_list':
get_hadoop_dirs(paths, '/fs/snn')
},
'DATANODE': {
'dfs_data_dir_list': get_hadoop_dirs(paths, '/fs/dn'),
'dfs_datanode_data_dir_perm': 755,
'dfs_datanode_handler_count': 30
},
'NODEMANAGER': {
'yarn_nodemanager_local_dirs':
get_hadoop_dirs(paths, '/yarn/local'),
'container_executor_allowed_system_users':
"nobody,impala,hive,llama,hdfs,yarn,mapred,"
"spark,oozie",
"container_executor_banned_users": "bin"
},
'SERVER': {
'maxSessionTimeout': 60000
},
'HIVESERVER2': {
'hiveserver2_enable_impersonation':
'false' if snt_count else 'true',
'hive_hs2_config_safety_valve': (
self.c_helper.HIVE_SERVER2_SENTRY_SAFETY_VALVE
if snt_count else '')
},
'HIVEMETASTORE': {
'hive_metastore_config_safety_valve': (
self.c_helper.HIVE_METASTORE_SENTRY_SAFETY_VALVE
if snt_count else '')
}
}
self._load_version_specific_instance_configs(
instance, instance_default_confs)
ng_user_confs = self.pu.convert_process_configs(
instance.node_group.node_configs)
all_confs = utils.merge_configs(all_confs, ng_user_confs)
all_confs = utils.merge_configs(all_confs, instance_default_confs)
return all_confs.get(service, {})

View File

@ -1,116 +0,0 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import exceptions as ex
from sahara_plugin_cdh.i18n import _
def _root(remote, cmd, **kwargs):
return remote.execute_command(cmd, run_as_root=True, **kwargs)
def _get_os_distrib(remote):
return remote.get_os_distrib()
def is_centos_os(remote):
return _get_os_distrib(remote) == 'centos'
def is_ubuntu_os(remote):
return _get_os_distrib(remote) == 'ubuntu'
def is_pre_installed_cdh(remote):
code, out = remote.execute_command('ls /etc/init.d/cloudera-scm-server',
raise_when_error=False)
return code == 0
def start_cloudera_db(remote):
_root(remote, 'service cloudera-scm-server-db start')
# for Hive access
hive_access_param = 'host metastore hive 0.0.0.0/0 md5'
remote.append_to_file('/var/lib/cloudera-scm-server-db/data/pg_hba.conf',
hive_access_param, run_as_root=True)
_root(remote, 'service cloudera-scm-server-db restart')
def start_manager(remote):
_root(remote, 'service cloudera-scm-server start')
def configure_agent(remote, manager_address):
remote.replace_remote_string('/etc/cloudera-scm-agent/config.ini',
'server_host=.*',
'server_host=%s' % manager_address)
def start_agent(remote):
_root(remote, 'service cloudera-scm-agent start')
def install_packages(remote, packages, timeout=1800):
distrib = _get_os_distrib(remote)
if distrib == 'ubuntu':
cmd = 'RUNLEVEL=1 apt-get install -y %s'
elif distrib == 'centos':
cmd = 'yum install -y %s'
else:
raise ex.HadoopProvisionError(
_("OS on image is not supported by CDH plugin"))
cmd = cmd % ' '.join(packages)
_root(remote, cmd, timeout=timeout)
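# Illustrative usage (hypothetical remote object): on an Ubuntu image,
#
# install_packages(remote, ['cloudera-manager-agent', 'oozie'])
#
# runs `RUNLEVEL=1 apt-get install -y cloudera-manager-agent oozie` as
# root; on CentOS the equivalent yum command is used.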
def update_repository(remote):
if is_ubuntu_os(remote):
_root(remote, 'apt-get update')
if is_centos_os(remote):
_root(remote, 'yum clean all')
def push_remote_file(remote, src, dst):
cmd = 'curl %s -o %s' % (src, dst)
_root(remote, cmd)
def add_ubuntu_repository(r, repo_list_url, repo_name):
push_remote_file(r, repo_list_url,
'/etc/apt/sources.list.d/%s.list' % repo_name)
def write_ubuntu_repository(r, repo_content, repo_name):
r.write_file_to('/etc/apt/sources.list.d/%s.list' % repo_name,
repo_content, run_as_root=True)
def add_apt_key(remote, key_url):
cmd = 'wget -qO - %s | apt-key add -' % key_url
_root(remote, cmd)
def add_centos_repository(r, repo_list_url, repo_name):
push_remote_file(r, repo_list_url, '/etc/yum.repos.d/%s.repo' % repo_name)
def write_centos_repository(r, repo_content, repo_name):
r.write_file_to('/etc/yum.repos.d/%s.repo' % repo_name,
repo_content, run_as_root=True)
def start_mysql_server(remote):
_root(remote, 'service mysql start')

View File

@ -1,307 +0,0 @@
# Copyright (c) 2016 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_serialization import jsonutils as json
from sahara.plugins import provisioning as p
from sahara.plugins import utils
class ConfigHelper(object):
path_to_config = ''
CDH5_REPO_URL = p.Config(
'CDH5 repo list URL', 'general', 'cluster', priority=1,
default_value="")
CDH5_REPO_KEY_URL = p.Config(
'CDH5 repo key URL (for debian-based only)', 'general', 'cluster',
priority=1, default_value="")
CM5_REPO_URL = p.Config(
'CM5 repo list URL', 'general', 'cluster', priority=1,
default_value="")
CM5_REPO_KEY_URL = p.Config(
'CM5 repo key URL (for debian-based only)', 'general', 'cluster',
priority=1, default_value="")
ENABLE_HBASE_COMMON_LIB = p.Config(
'Enable HBase Common Lib', 'general', 'cluster', config_type='bool',
priority=1, default_value=True)
ENABLE_SWIFT = p.Config(
'Enable Swift', 'general', 'cluster',
config_type='bool', priority=1, default_value=True)
DEFAULT_SWIFT_LIB_URL = (
'https://repository.cloudera.com/artifactory/repo/org'
'/apache/hadoop/hadoop-openstack/2.6.0-cdh5.5.0'
'/hadoop-openstack-2.6.0-cdh5.5.0.jar')
SWIFT_LIB_URL = p.Config(
'Hadoop OpenStack library URL', 'general', 'cluster', priority=1,
default_value=DEFAULT_SWIFT_LIB_URL,
description=("Library that adds Swift support to CDH. The file"
" will be downloaded by VMs."))
DEFAULT_EXTJS_LIB_URL = (
'https://tarballs.openstack.org/sahara-extra/dist/common-artifacts/'
'ext-2.2.zip')
EXTJS_LIB_URL = p.Config(
"ExtJS library URL", 'general', 'cluster', priority=1,
default_value=DEFAULT_EXTJS_LIB_URL,
description=("Ext 2.2 library is required for Oozie Web Console. "
"The file will be downloaded by VMs with oozie."))
_default_executor_classpath = ":".join(
['/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar',
'/usr/lib/hadoop-mapreduce/hadoop-openstack.jar'])
EXECUTOR_EXTRA_CLASSPATH = p.Config(
'Executor extra classpath', 'Spark', 'cluster', priority=2,
default_value=_default_executor_classpath,
description='Value for spark.executor.extraClassPath in '
'spark-defaults.conf (default: %s)'
% _default_executor_classpath)
KMS_REPO_URL = p.Config(
'KMS repo list URL', 'general', 'cluster', priority=1,
default_value="")
KMS_REPO_KEY_URL = p.Config(
'KMS repo key URL (for debian-based only)', 'general',
'cluster',
priority=1, default_value="")
REQUIRE_ANTI_AFFINITY = p.Config(
'Require Anti Affinity', 'general', 'cluster',
config_type='bool', priority=2, default_value=True)
AWAIT_AGENTS_TIMEOUT = p.Config(
'Await Cloudera agents timeout', 'general', 'cluster',
config_type='int', priority=1, default_value=300, is_optional=True,
description="Timeout for Cloudera agents connecting to"
" Cloudera Manager, in seconds")
AWAIT_MANAGER_STARTING_TIMEOUT = p.Config(
'Timeout for Cloudera Manager starting', 'general', 'cluster',
config_type='int', priority=1, default_value=300, is_optional=True,
description='Timeout for Cloudera Manager starting, in seconds')
def __new__(cls):
# make it a singleton
if not hasattr(cls, '_instance'):
cls._instance = super(ConfigHelper, cls).__new__(cls)
setattr(cls, '__init__', cls.decorate_init(cls.__init__))
return cls._instance
@classmethod
def decorate_init(cls, f):
"""decorate __init__ to prevent multiple calling."""
def wrap(*args, **kwargs):
if not hasattr(cls, '_init'):
f(*args, **kwargs)
cls._init = True
return wrap
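# Illustrative consequence of the singleton pattern above: every
# instantiation returns the same object, and __init__ runs only once.
#
# >>> a = ConfigHelper()
# >>> b = ConfigHelper()
# >>> a is b
# True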
def __init__(self):
self.ng_plugin_configs = []
self.priority_one_confs = {}
def _load_json(self, path_to_file):
data = utils.get_file_text(path_to_file, 'sahara_plugin_cdh')
return json.loads(data)
def _init_ng_configs(self, confs, app_target, scope):
def prepare_value(x):
return x.replace('\n', ' ') if x else ""
cfgs = []
for cfg in confs:
priority = 1 if cfg['name'] in self.priority_one_confs else 2
c = p.Config(cfg['name'], app_target, scope, priority=priority,
default_value=prepare_value(cfg['value']),
description=cfg['desc'], is_optional=True)
cfgs.append(c)
return cfgs
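# Each entry in the loaded JSON files is expected to have this shape
# (illustrative values):
#
# {"name": "dfs_replication", "value": "3", "desc": "Block replication"}
#
# Names listed in priority_one_confs become priority-1 configs; all other
# entries default to priority 2.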
def _init_all_ng_plugin_configs(self):
self.hdfs_confs = self._load_and_init_configs(
'hdfs-service.json', 'HDFS', 'cluster')
self.namenode_confs = self._load_and_init_configs(
'hdfs-namenode.json', 'NAMENODE', 'node')
self.datanode_confs = self._load_and_init_configs(
'hdfs-datanode.json', 'DATANODE', 'node')
self.secnamenode_confs = self._load_and_init_configs(
'hdfs-secondarynamenode.json', 'SECONDARYNAMENODE', 'node')
self.hdfs_gateway_confs = self._load_and_init_configs(
'hdfs-gateway.json', 'HDFS_GATEWAY', 'node')
self.journalnode_confs = self._load_and_init_configs(
'hdfs-journalnode.json', 'JOURNALNODE', 'node')
self.yarn_confs = self._load_and_init_configs(
'yarn-service.json', 'YARN', 'cluster')
self.resourcemanager_confs = self._load_and_init_configs(
'yarn-resourcemanager.json', 'RESOURCEMANAGER', 'node')
self.nodemanager_confs = self._load_and_init_configs(
'yarn-nodemanager.json', 'NODEMANAGER', 'node')
self.jobhistory_confs = self._load_and_init_configs(
'yarn-jobhistory.json', 'JOBHISTORY', 'node')
self.yarn_gateway_conf = self._load_and_init_configs(
'yarn-gateway.json', 'YARN_GATEWAY', 'node')
self.oozie_service_confs = self._load_and_init_configs(
'oozie-service.json', 'OOZIE', 'cluster')
self.oozie_role_confs = self._load_and_init_configs(
'oozie-oozie_server.json', 'OOZIE', 'node')
self.hive_service_confs = self._load_and_init_configs(
'hive-service.json', 'HIVE', 'cluster')
self.hive_metastore_confs = self._load_and_init_configs(
'hive-hivemetastore.json', 'HIVEMETASTORE', 'node')
self.hive_hiveserver_confs = self._load_and_init_configs(
'hive-hiveserver2.json', 'HIVESERVER', 'node')
self.hive_webhcat_confs = self._load_and_init_configs(
'hive-webhcat.json', 'WEBHCAT', 'node')
self.hue_service_confs = self._load_and_init_configs(
'hue-service.json', 'HUE', 'cluster')
self.hue_role_confs = self._load_and_init_configs(
'hue-hue_server.json', 'HUE', 'node')
self.spark_service_confs = self._load_and_init_configs(
'spark-service.json', 'SPARK_ON_YARN', 'cluster')
self.spark_role_confs = self._load_and_init_configs(
'spark-spark_yarn_history_server.json', 'SPARK_ON_YARN', 'node')
self.zookeeper_server_confs = self._load_and_init_configs(
'zookeeper-service.json', 'ZOOKEEPER', 'cluster')
self.zookeeper_service_confs = self._load_and_init_configs(
'zookeeper-server.json', 'ZOOKEEPER', 'node')
self.hbase_confs = self._load_and_init_configs(
'hbase-service.json', 'HBASE', 'cluster')
self.master_confs = self._load_and_init_configs(
'hbase-master.json', 'MASTER', 'node')
self.regionserver_confs = self._load_and_init_configs(
'hbase-regionserver.json', 'REGIONSERVER', 'node')
self.flume_service_confs = self._load_and_init_configs(
'flume-service.json', 'FLUME', 'cluster')
self.flume_agent_confs = self._load_and_init_configs(
'flume-agent.json', 'FLUME', 'node')
self.sentry_service_confs = self._load_and_init_configs(
'sentry-service.json', 'SENTRY', 'cluster')
self.sentry_server_confs = self._load_and_init_configs(
'sentry-sentry_server.json', 'SENTRY', 'node')
self.solr_service_confs = self._load_and_init_configs(
'solr-service.json', 'SOLR', 'cluster')
self.solr_server_confs = self._load_and_init_configs(
'solr-solr_server.json', 'SOLR', 'node')
self.sqoop_service_confs = self._load_and_init_configs(
'sqoop-service.json', 'SQOOP', 'cluster')
self.sqoop_server_confs = self._load_and_init_configs(
'sqoop-sqoop_server.json', 'SQOOP', 'node')
self.ks_indexer_service_confs = self._load_and_init_configs(
'ks_indexer-service.json', 'KS_INDEXER', 'cluster')
self.ks_indexer_role_confs = self._load_and_init_configs(
'ks_indexer-hbase_indexer.json', 'KS_INDEXER', 'node')
self.impala_service_confs = self._load_and_init_configs(
'impala-service.json', 'IMPALA', 'cluster')
self.impala_catalogserver_confs = self._load_and_init_configs(
'impala-catalogserver.json', 'CATALOGSERVER', 'node')
self.impala_impalad_confs = self._load_and_init_configs(
'impala-impalad.json', 'IMPALAD', 'node')
self.impala_statestore_confs = self._load_and_init_configs(
'impala-statestore.json', 'STATESTORE', 'node')
self.kms_service_confs = self._load_and_init_configs(
'kms-service.json', 'KMS', 'cluster')
self.kms_kms_confs = self._load_and_init_configs(
'kms-kms.json', 'KMS', 'node')
self.kafka_service = self._load_and_init_configs(
'kafka-service.json', 'KAFKA', 'cluster')
self.kafka_kafka_broker = self._load_and_init_configs(
'kafka-kafka_broker.json', 'KAFKA', 'node')
self.kafka_kafka_mirror_maker = self._load_and_init_configs(
'kafka-kafka_mirror_maker.json', 'KAFKA', 'node')
def _load_and_init_configs(self, filename, app_target, scope):
confs = self._load_json(self.path_to_config + filename)
cfgs = self._init_ng_configs(confs, app_target, scope)
self.ng_plugin_configs += cfgs
return cfgs
def _get_ng_plugin_configs(self):
return self.ng_plugin_configs
def _get_cluster_plugin_configs(self):
return [self.CDH5_REPO_URL, self.CDH5_REPO_KEY_URL, self.CM5_REPO_URL,
self.CM5_REPO_KEY_URL, self.ENABLE_SWIFT, self.SWIFT_LIB_URL,
self.ENABLE_HBASE_COMMON_LIB, self.EXTJS_LIB_URL,
self.AWAIT_MANAGER_STARTING_TIMEOUT, self.AWAIT_AGENTS_TIMEOUT,
self.EXECUTOR_EXTRA_CLASSPATH, self.KMS_REPO_URL,
self.KMS_REPO_KEY_URL, self.REQUIRE_ANTI_AFFINITY]
def get_plugin_configs(self):
cluster_wide = self._get_cluster_plugin_configs()
ng_wide = self._get_ng_plugin_configs()
return cluster_wide + ng_wide
def _get_config_value(self, cluster, key):
return cluster.cluster_configs.get(
'general', {}).get(key.name, key.default_value)
def get_cdh5_repo_url(self, cluster):
return self._get_config_value(cluster, self.CDH5_REPO_URL)
def get_cdh5_key_url(self, cluster):
return self._get_config_value(cluster, self.CDH5_REPO_KEY_URL)
def get_cm5_repo_url(self, cluster):
return self._get_config_value(cluster, self.CM5_REPO_URL)
def get_cm5_key_url(self, cluster):
return self._get_config_value(cluster, self.CM5_REPO_KEY_URL)
def is_swift_enabled(self, cluster):
return self._get_config_value(cluster, self.ENABLE_SWIFT)
def is_hbase_common_lib_enabled(self, cluster):
return self._get_config_value(cluster,
self.ENABLE_HBASE_COMMON_LIB)
def is_keytrustee_available(self):
return True
def get_swift_lib_url(self, cluster):
return self._get_config_value(cluster, self.SWIFT_LIB_URL)
def get_extjs_lib_url(self, cluster):
return self._get_config_value(cluster, self.EXTJS_LIB_URL)
def get_kms_key_url(self, cluster):
return self._get_config_value(cluster, self.KMS_REPO_KEY_URL)
def get_required_anti_affinity(self, cluster):
return self._get_config_value(cluster, self.REQUIRE_ANTI_AFFINITY)

View File

@ -1,48 +0,0 @@
# Copyright (c) 2015 Intel Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import edp
from sahara.plugins import utils
def get_possible_hive_config_from(file_name):
'''Return the possible configs, args, params for a Hive job.'''
config = {
'configs': utils.load_hadoop_xml_defaults(file_name,
'sahara_plugin_cdh'),
'params': {}
}
return config
def get_possible_mapreduce_config_from(file_name):
'''Return the possible configs, args, params for a MapReduce job.'''
config = {
'configs': get_possible_pig_config_from(file_name).get('configs')
}
config['configs'] += edp.get_possible_mapreduce_configs()
return config
def get_possible_pig_config_from(file_name):
'''Return the possible configs, args, params for a Pig job.'''
config = {
'configs': utils.load_hadoop_xml_defaults(file_name,
'sahara_plugin_cdh'),
'args': [],
'params': {}
}
return config
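# Illustrative note (added commentary, not part of the original module):
# each helper above returns a plain dict of EDP config hints. For a Pig
# job, for instance, the shape is
#
#     {'configs': [<defaults loaded from the XML file>],
#      'args': [],
#      'params': {}}
#
# while the Hive variant omits 'args' and the MapReduce variant carries
# only 'configs', extended with the generic MapReduce hints.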

View File

@ -1,118 +0,0 @@
# Copyright (c) 2015 Intel Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_utils import uuidutils
from sahara.plugins import castellan_utils as key_manager
from sahara.plugins import conductor
from sahara.plugins import context
from sahara.plugins import utils
CM_PASSWORD = 'cm_password'
HIVE_DB_PASSWORD = 'hive_db_password'
SENTRY_DB_PASSWORD = 'sentry_db_password'
def delete_password_from_keymanager(cluster, pwname):
"""delete the named password from the key manager
This function will lookup the named password in the cluster entry
and delete it from the key manager.
:param cluster: The cluster record containing the password
:param pwname: The name associated with the password
"""
ctx = context.ctx()
cluster = conductor.cluster_get(ctx, cluster.id)
key_id = cluster.extra.get(pwname) if cluster.extra else None
if key_id is not None:
key_manager.delete_key(key_id, ctx)
def delete_passwords_from_keymanager(cluster):
"""delete all passwords associated with a cluster
This function will remove all passwords stored in a cluster database
entry from the key manager.
:param cluster: The cluster record containing the passwords
"""
delete_password_from_keymanager(cluster, CM_PASSWORD)
delete_password_from_keymanager(cluster, HIVE_DB_PASSWORD)
delete_password_from_keymanager(cluster, SENTRY_DB_PASSWORD)
def get_password_from_db(cluster, pwname):
"""return a password for the named entry
This function will return, or create and return, a password for the
named entry. It will store the password in the key manager and use
the ID in the database entry.
:param cluster: The cluster record containing the password
:param pwname: The entry name associated with the password
:returns: The cleartext password
"""
ctx = context.ctx()
cluster = conductor.cluster_get(ctx, cluster.id)
passwd = cluster.extra.get(pwname) if cluster.extra else None
if passwd:
return key_manager.get_secret(passwd, ctx)
passwd = uuidutils.generate_uuid()
extra = cluster.extra.to_dict() if cluster.extra else {}
extra[pwname] = key_manager.store_secret(passwd, ctx)
conductor.cluster_update(ctx, cluster, {'extra': extra})
return passwd
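def _password_roundtrip_sketch(cluster):
    """Illustrative sketch only; nothing in the plugin calls this.

    Demonstrates the create-or-return contract documented above: the
    first call for a name generates a password and stores its key
    manager id in cluster.extra, and later calls return the same
    cleartext value.
    """
    first = get_password_from_db(cluster, CM_PASSWORD)
    again = get_password_from_db(cluster, CM_PASSWORD)
    assert first == again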
def get_cm_password(cluster):
return get_password_from_db(cluster, CM_PASSWORD)
def remote_execute_db_script(remote, script_content):
script_name = 'script_to_exec.sql'
remote.write_file_to(script_name, script_content)
psql_cmd = ('PGPASSWORD=$(sudo head -1 /var/lib/cloudera-scm-server-db'
'/data/generated_password.txt) psql -U cloudera-scm '
'-h localhost -p 7432 -d scm -f %s') % script_name
remote.execute_command(psql_cmd)
remote.execute_command('rm %s' % script_name)
def get_hive_db_password(cluster):
    return get_password_from_db(cluster, HIVE_DB_PASSWORD)
def get_sentry_db_password(cluster):
    return get_password_from_db(cluster, SENTRY_DB_PASSWORD)
def create_hive_database(cluster, remote):
db_password = get_hive_db_password(cluster)
create_db_script = utils.try_get_file_text(
'plugins/cdh/db_resources/create_hive_db.sql', 'sahara_plugin_cdh')
create_db_script = create_db_script % db_password.encode('utf-8')
remote_execute_db_script(remote, create_db_script)
def create_sentry_database(cluster, remote):
db_password = get_sentry_db_password(cluster)
create_db_script = utils.try_get_file_text(
'plugins/cdh/db_resources/create_sentry_db.sql', 'sahara_plugin_cdh')
create_db_script = create_db_script % db_password.encode('utf-8')
remote_execute_db_script(remote, create_db_script)

View File

@ -1,4 +0,0 @@
CREATE ROLE hive LOGIN PASSWORD '%s';
CREATE DATABASE metastore OWNER hive encoding 'UTF8';
GRANT ALL PRIVILEGES ON DATABASE metastore TO hive;
COMMIT;

View File

@ -1,4 +0,0 @@
CREATE ROLE sentry LOGIN PASSWORD '%s';
CREATE DATABASE sentry OWNER sentry encoding 'UTF8';
GRANT ALL PRIVILEGES ON DATABASE sentry TO sentry;
COMMIT;

View File

@ -1,124 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import kerberos
PACKAGES = [
'cloudera-manager-agent',
'cloudera-manager-daemons',
'cloudera-manager-server',
'cloudera-manager-server-db-2',
'flume-ng',
'hadoop-hdfs-datanode',
'hadoop-hdfs-namenode',
'hadoop-hdfs-secondarynamenode',
    'hadoop-kms',
'hadoop-mapreduce',
'hadoop-mapreduce-historyserver',
'hadoop-yarn-nodemanager',
'hadoop-yarn-resourcemanager',
'hbase',
'hbase-solr',
'hive-hcatalog',
'hive-metastore',
'hive-server2',
'hive-webhcat-server',
'hue',
'impala',
'impala-server',
'impala-state-store',
'impala-catalog',
'impala-shell',
'kafka',
    'kafka-server',
'keytrustee-keyprovider',
'oozie',
'oracle-j2sdk1.7',
'sentry',
'solr-server',
'solr-doc',
'search',
'spark-history-server',
'sqoop2',
'unzip',
'zookeeper'
]
def setup_kerberos_for_cluster(cluster, cloudera_utils):
if kerberos.is_kerberos_security_enabled(cluster):
manager = cloudera_utils.pu.get_manager(cluster)
kerberos.deploy_infrastructure(cluster, manager)
cloudera_utils.full_cluster_stop(cluster)
kerberos.prepare_policy_files(cluster)
cloudera_utils.push_kerberos_configs(cluster)
cloudera_utils.full_cluster_start(cluster)
kerberos.create_keytabs_for_map(
cluster,
{'hdfs': cloudera_utils.pu.get_hdfs_nodes(cluster),
'spark': [cloudera_utils.pu.get_spark_historyserver(cluster)]})
def prepare_scaling_kerberized_cluster(cluster, cloudera_utils, instances):
if kerberos.is_kerberos_security_enabled(cluster):
server = None
if not kerberos.using_existing_kdc(cluster):
server = cloudera_utils.pu.get_manager(cluster)
kerberos.setup_clients(cluster, server)
kerberos.prepare_policy_files(cluster)
# manager can correctly handle updating configs
cloudera_utils.push_kerberos_configs(cluster)
kerberos.create_keytabs_for_map(
cluster,
{'hdfs': cloudera_utils.pu.get_hdfs_nodes(cluster, instances)})
def get_open_ports(node_group):
ports = [9000] # for CM agent
ports_map = {
'CLOUDERA_MANAGER': [7180, 7182, 7183, 7432, 7184, 8084, 8086, 10101,
9997, 9996, 8087, 9998, 9999, 8085, 9995, 9994],
'HDFS_NAMENODE': [8020, 8022, 50070, 50470],
'HDFS_SECONDARYNAMENODE': [50090, 50495],
'HDFS_DATANODE': [50010, 1004, 50075, 1006, 50020],
'YARN_RESOURCEMANAGER': [8030, 8031, 8032, 8033, 8088],
'YARN_STANDBYRM': [8030, 8031, 8032, 8033, 8088],
'YARN_NODEMANAGER': [8040, 8041, 8042],
'YARN_JOBHISTORY': [10020, 19888],
'HIVE_METASTORE': [9083],
'HIVE_SERVER2': [10000],
'HUE_SERVER': [8888],
'OOZIE_SERVER': [11000, 11001],
'SPARK_YARN_HISTORY_SERVER': [18088],
'ZOOKEEPER_SERVER': [2181, 3181, 4181, 9010],
'HBASE_MASTER': [60000],
'HBASE_REGIONSERVER': [60020],
'FLUME_AGENT': [41414],
'SENTRY_SERVER': [8038],
'SOLR_SERVER': [8983, 8984],
'SQOOP_SERVER': [8005, 12000],
'KEY_VALUE_STORE_INDEXER': [],
'IMPALA_CATALOGSERVER': [25020, 26000],
'IMPALA_STATESTORE': [25010, 24000],
'IMPALAD': [21050, 21000, 23000, 25000, 28000, 22000],
'KMS': [16000, 16001],
'JOURNALNODE': [8480, 8481, 8485]
}
for process in node_group.node_processes:
if process in ports_map:
ports.extend(ports_map[process])
return ports
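# Illustrative note (hypothetical node group, added commentary): for a
# node group whose node_processes are ['HDFS_NAMENODE', 'HDFS_DATANODE'],
# get_open_ports() yields the CM agent port plus both role port lists:
#
#     [9000,                                # CM agent
#      8020, 8022, 50070, 50470,            # HDFS_NAMENODE
#      50010, 1004, 50075, 1006, 50020]     # HDFS_DATANODE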

View File

@ -1,100 +0,0 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import edp
from sahara.plugins import exceptions as pl_ex
from sahara.plugins import kerberos
from sahara.plugins import utils as u
from sahara_plugin_cdh.i18n import _
class EdpOozieEngine(edp.PluginsOozieJobEngine):
def __init__(self, cluster):
super(EdpOozieEngine, self).__init__(cluster)
# will be defined in derived classes
self.cloudera_utils = None
def get_client(self):
if kerberos.is_kerberos_security_enabled(self.cluster):
return super(EdpOozieEngine, self).get_remote_client()
return super(EdpOozieEngine, self).get_client()
def get_hdfs_user(self):
return 'hdfs'
def create_hdfs_dir(self, remote, dir_name):
edp.create_dir_hadoop2(remote, dir_name, self.get_hdfs_user())
def get_oozie_server_uri(self, cluster):
oozie_ip = self.cloudera_utils.pu.get_oozie(cluster).management_ip
return 'http://%s:11000/oozie' % oozie_ip
def get_name_node_uri(self, cluster):
if len(self.cloudera_utils.pu.get_jns(cluster)) > 0:
return 'hdfs://%s' % self.cloudera_utils.NAME_SERVICE
else:
namenode_ip = self.cloudera_utils.pu.get_namenode(cluster).fqdn()
return 'hdfs://%s:8020' % namenode_ip
def get_resource_manager_uri(self, cluster):
resourcemanager = self.cloudera_utils.pu.get_resourcemanager(cluster)
return '%s:8032' % resourcemanager.fqdn()
def get_oozie_server(self, cluster):
return self.cloudera_utils.pu.get_oozie(cluster)
def validate_job_execution(self, cluster, job, data):
oo_count = u.get_instances_count(cluster, 'OOZIE_SERVER')
if oo_count != 1:
raise pl_ex.InvalidComponentCountException(
'OOZIE_SERVER', '1', oo_count)
super(EdpOozieEngine, self).validate_job_execution(cluster, job, data)
class EdpSparkEngine(edp.PluginsSparkJobEngine):
edp_base_version = ""
def __init__(self, cluster):
super(EdpSparkEngine, self).__init__(cluster)
self.master = u.get_instance(cluster, "SPARK_YARN_HISTORY_SERVER")
self.plugin_params["spark-user"] = "sudo -u spark "
self.plugin_params["spark-submit"] = "spark-submit"
self.plugin_params["deploy-mode"] = "cluster"
self.plugin_params["master"] = "yarn-cluster"
driver_cp = u.get_config_value_or_default(
"Spark", "Executor extra classpath", self.cluster)
self.plugin_params["driver-class-path"] = driver_cp
    @classmethod
    def edp_supported(cls, version):
        # compare numerically; as plain strings, "5.9.0" ranks above "5.11.0"
        def _version_key(ver):
            return [int(part) for part in ver.split('.')] if ver else []
        return _version_key(version) >= _version_key(cls.edp_base_version)
def validate_job_execution(self, cluster, job, data):
if not self.edp_supported(cluster.hadoop_version):
raise pl_ex.PluginInvalidDataException(
                _('Cloudera {base} or higher required to run {type} '
                  'jobs').format(base=self.edp_base_version, type=job.type))
shs_count = u.get_instances_count(
cluster, 'SPARK_YARN_HISTORY_SERVER')
if shs_count != 1:
raise pl_ex.InvalidComponentCountException(
'SPARK_YARN_HISTORY_SERVER', '1', shs_count)
super(EdpSparkEngine, self).validate_job_execution(
cluster, job, data)

View File

@ -1,78 +0,0 @@
# Copyright (c) 2015 Intel Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import exceptions as e
from sahara_plugin_cdh.i18n import _
class CMApiVersionError(e.SaharaPluginException):
"""Exception indicating that CM API Version does not meet requirement.
A message indicating the reason for failure must be provided.
"""
base_message = _("CM API version not meet requirement: %s")
def __init__(self, message):
self.code = "CM_API_VERSION_ERROR"
self.message = self.base_message % message
super(CMApiVersionError, self).__init__()
class CMApiException(e.SaharaPluginException):
"""Exception Type from CM API Errors.
Any error result from the CM API is converted into this exception type.
This handles errors from the HTTP level as well as the API level.
"""
base_message = _("CM API error: %s")
def __init__(self, message):
self.code = "CM_API_EXCEPTION"
self.message = self.base_message % message
super(CMApiException, self).__init__()
class CMApiAttributeError(e.SaharaPluginException):
"""Exception indicating a CM API attribute error.
A message indicating the reason for failure must be provided.
"""
base_message = _("CM API attribute error: %s")
def __init__(self, message):
self.code = "CM_API_ATTRIBUTE_ERROR"
self.message = self.base_message % message
super(CMApiAttributeError, self).__init__()
class CMApiValueError(e.SaharaPluginException):
"""Exception indicating a CM API value error.
A message indicating the reason for failure must be provided.
"""
base_message = _("CM API value error: %s")
def __init__(self, message):
self.code = "CM_API_VALUE_ERROR"
self.message = self.base_message % message
super(CMApiValueError, self).__init__()
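def _error_wrapping_sketch():
    """Illustrative sketch only; nothing in the plugin calls this.

    Shows the intended pattern: a low-level CM API failure is wrapped in
    one of the exception types above, so callers can rely on a stable
    .code and a translated .message.
    """
    try:
        raise CMApiException("got HTTP 500 from Cloudera Manager")
    except CMApiException as err:
        return err.code, err.message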

View File

@ -1,144 +0,0 @@
# Copyright (c) 2016 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import functools
from oslo_log import log as logging
from sahara.plugins import health_check_base
from sahara_plugin_cdh.i18n import _
LOG = logging.getLogger(__name__)
class HealthStatusProvider(object):
def __init__(self, cluster, cloudera_tools):
self.cluster = cluster
self.cloudera_tools = cloudera_tools
self._data = None
self._cluster_services = None
self._exception_store = None
self.get_health_status()
def get_cluster_services(self):
return self._cluster_services
def is_cloudera_active(self):
if self._exception_store:
raise health_check_base.RedHealthError(self._exception_store)
return _("Cloudera Manager is Active")
def get_cloudera_health(self):
cu = self.cloudera_tools
api = cu.get_api_client(self.cluster)
return api.get_service_health_status(self.cluster.name)
def get_important_services(self):
# will be overridable in future
cu = self.cloudera_tools
return [
cu.HDFS_SERVICE_NAME,
cu.YARN_SERVICE_NAME,
cu.OOZIE_SERVICE_NAME
]
def get_health_status(self, service=None):
if self._data is not None:
return self._data.get(service, []) if service else self._data
self._data = {}
self._cluster_services = []
try:
# all data already grouped by services
self._data = self.get_cloudera_health()
self._cluster_services = self._data.keys()
except Exception as e:
msg = _("Can't get response from Cloudera "
"Manager")
LOG.exception(msg)
self._exception_store = _(
"%(problem)s, reason: %(reason)s") % {
'problem': msg, 'reason': str(e)}
class ClouderaManagerHealthCheck(health_check_base.BasicHealthCheck):
def __init__(self, cluster, provider):
self.provider = provider
super(ClouderaManagerHealthCheck, self).__init__(cluster)
def get_health_check_name(self):
return _("Cloudera Manager health check")
def is_available(self):
return self.cluster.plugin_name == 'cdh'
def check_health(self):
return self.provider.is_cloudera_active()
class ServiceHealthCheck(health_check_base.BasicHealthCheck):
def __init__(self, cluster, provider, service):
self.provider = provider
self.service = service
super(ServiceHealthCheck, self).__init__(cluster)
def get_health_check_name(self):
return _("CDH %s health check") % self.service
def is_available(self):
return self.cluster.plugin_name == 'cdh'
def check_health(self):
important_services = self.provider.get_important_services()
observed_data = self.provider.get_health_status(self.service)
imp_map = {'BAD': 'red', 'CONCERNING': 'yellow', 'GOOD': 'green'}
summary = observed_data['summary']
checks = observed_data.get('checks', [])
failed_checks = []
for check in checks:
if check['summary'] != 'GOOD':
failed_checks.append('%(name)s - %(summary)s state' % {
'name': check['name'], 'summary': check['summary']
})
additional_info = None
if failed_checks:
additional_info = _(
"The following checks did not pass: %s") % ",".join(
failed_checks)
if self.service in important_services:
overall = imp_map.get(summary, 'red')
else:
overall = 'green'
if summary != 'GOOD':
overall = 'yellow'
msg = _("Cloudera Manager has responded that service is in "
"the %s state") % summary
if additional_info:
msg = _("%(problem)s. %(description)s") % {
'problem': msg, 'description': additional_info}
if overall == 'red':
raise health_check_base.RedHealthError(msg)
elif overall == 'yellow':
raise health_check_base.YellowHealthError(msg)
return msg
def get_health_checks(cluster, cloudera_utils):
provider = HealthStatusProvider(cluster, cloudera_utils)
checks = [functools.partial(
ClouderaManagerHealthCheck, provider=provider)]
for service in provider.get_cluster_services():
checks.append(functools.partial(
ServiceHealthCheck, provider=provider, service=service))
return checks
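# Illustrative note (added commentary): each element of the returned list
# is a functools.partial that still expects the cluster argument, so the
# health subsystem can instantiate and run a check roughly as
#
#     for check_factory in get_health_checks(cluster, cloudera_utils):
#         check_factory(cluster=cluster).check_health()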

View File

@ -1,125 +0,0 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from sahara.plugins import provisioning as p
from sahara_plugin_cdh.i18n import _
from sahara_plugin_cdh.plugins.cdh import versionfactory as vhf
class CDHPluginProvider(p.ProvisioningPluginBase):
def __init__(self):
self.version_factory = vhf.VersionFactory.get_instance()
def get_title(self):
return "Cloudera Plugin"
def get_description(self):
return _('The Cloudera Sahara plugin provides the ability to '
'launch the Cloudera distribution of Apache Hadoop '
'(CDH) with Cloudera Manager management console.')
def get_labels(self):
default = {'enabled': {'status': True}, 'stable': {'status': True}}
result = {'plugin_labels': copy.deepcopy(default)}
deprecated = {'enabled': {'status': True},
'deprecated': {'status': True}}
result['version_labels'] = {
'5.13.0': copy.deepcopy(default),
'5.11.0': copy.deepcopy(default),
'5.9.0': copy.deepcopy(default),
'5.7.0': copy.deepcopy(deprecated)
}
return result
def _get_version_handler(self, hadoop_version):
return self.version_factory.get_version_handler(hadoop_version)
def get_versions(self):
return self.version_factory.get_versions()
def get_node_processes(self, hadoop_version):
return self._get_version_handler(hadoop_version).get_node_processes()
def get_configs(self, hadoop_version):
return self._get_version_handler(hadoop_version).get_plugin_configs()
def configure_cluster(self, cluster):
return self._get_version_handler(
cluster.hadoop_version).configure_cluster(cluster)
def start_cluster(self, cluster):
return self._get_version_handler(
cluster.hadoop_version).start_cluster(cluster)
def validate(self, cluster):
return self._get_version_handler(
cluster.hadoop_version).validate(cluster)
def scale_cluster(self, cluster, instances):
return self._get_version_handler(
cluster.hadoop_version).scale_cluster(cluster, instances)
def decommission_nodes(self, cluster, instances):
return self._get_version_handler(
cluster.hadoop_version).decommission_nodes(cluster, instances)
def validate_scaling(self, cluster, existing, additional):
return self._get_version_handler(
cluster.hadoop_version).validate_scaling(cluster, existing,
additional)
def get_edp_engine(self, cluster, job_type):
return self._get_version_handler(
cluster.hadoop_version).get_edp_engine(cluster, job_type)
def get_edp_job_types(self, versions=None):
res = {}
for vers in self.version_factory.get_versions():
if not versions or vers in versions:
vh = self.version_factory.get_version_handler(vers)
res[vers] = vh.get_edp_job_types()
return res
def get_edp_config_hints(self, job_type, version):
version_handler = (
self.version_factory.get_version_handler(version))
return version_handler.get_edp_config_hints(job_type)
def get_open_ports(self, node_group):
return self._get_version_handler(
node_group.cluster.hadoop_version).get_open_ports(node_group)
def recommend_configs(self, cluster, scaling=False):
return self._get_version_handler(
cluster.hadoop_version).recommend_configs(cluster, scaling)
def get_health_checks(self, cluster):
return self._get_version_handler(
cluster.hadoop_version).get_health_checks(cluster)
def get_image_arguments(self, hadoop_version):
return self._get_version_handler(hadoop_version).get_image_arguments()
def pack_image(self, hadoop_version, remote,
test_only=False, image_arguments=None):
version = self._get_version_handler(hadoop_version)
version.pack_image(hadoop_version, remote, test_only=test_only,
image_arguments=image_arguments)
def validate_images(self, cluster, test_only=False, image_arguments=None):
self._get_version_handler(cluster.hadoop_version).validate_images(
cluster, test_only=test_only, image_arguments=image_arguments)

View File

@ -1,468 +0,0 @@
# Copyright (c) 2014 Intel Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file only contains utils not related to cm_api, while in
# cloudera_utils the functions are cm_api involved.
import os
import telnetlib # nosec
from oslo_log import log as logging
from sahara.plugins import context
from sahara.plugins import edp
from sahara.plugins import exceptions as exc
from sahara.plugins import recommendations_utils as ru
from sahara.plugins import resource as res
from sahara.plugins import swift_helper
from sahara.plugins import utils as u
from sahara_plugin_cdh.i18n import _
from sahara_plugin_cdh.plugins.cdh import commands as cmd
from sahara_plugin_cdh.plugins.cdh import db_helper as dh
PATH_TO_CORE_SITE_XML = '/etc/hadoop/conf/core-site.xml'
HADOOP_LIB_DIR = '/usr/lib/hadoop-mapreduce'
CM_API_PORT = 7180
LOG = logging.getLogger(__name__)
AUTO_CONFIGURATION_SCHEMA = {
'node_configs': {
'yarn.scheduler.minimum-allocation-mb': (
'RESOURCEMANAGER', 'yarn_scheduler_minimum_allocation_mb'),
'mapreduce.reduce.memory.mb': (
'YARN_GATEWAY', 'mapreduce_reduce_memory_mb'),
'mapreduce.map.memory.mb': (
'YARN_GATEWAY', 'mapreduce_map_memory_mb',),
'yarn.scheduler.maximum-allocation-mb': (
'RESOURCEMANAGER', 'yarn_scheduler_maximum_allocation_mb'),
'yarn.app.mapreduce.am.command-opts': (
'YARN_GATEWAY', 'yarn_app_mapreduce_am_command_opts'),
'yarn.nodemanager.resource.memory-mb': (
'NODEMANAGER', 'yarn_nodemanager_resource_memory_mb'),
'mapreduce.task.io.sort.mb': (
'YARN_GATEWAY', 'io_sort_mb'),
'mapreduce.map.java.opts': (
'YARN_GATEWAY', 'mapreduce_map_java_opts'),
'mapreduce.reduce.java.opts': (
'YARN_GATEWAY', 'mapreduce_reduce_java_opts'),
'yarn.app.mapreduce.am.resource.mb': (
'YARN_GATEWAY', 'yarn_app_mapreduce_am_resource_mb')
},
'cluster_configs': {
'dfs.replication': ('HDFS', 'dfs_replication')
}
}
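# Illustrative note (added commentary): each schema entry maps a generic
# Hadoop key to the (process, CDH-specific config name) pair understood
# by Cloudera Manager, e.g. a recommended 'mapreduce.map.memory.mb' value
# is written to the YARN_GATEWAY role as 'mapreduce_map_memory_mb'.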
class CDHPluginAutoConfigsProvider(ru.HadoopAutoConfigsProvider):
def get_datanode_name(self):
return 'HDFS_DATANODE'
class AbstractPluginUtils(object):
def __init__(self):
# c_helper will be defined in derived classes.
self.c_helper = None
def get_role_name(self, instance, service):
# NOTE: role name must match regexp "[_A-Za-z][-_A-Za-z0-9]{0,63}"
shortcuts = {
'AGENT': 'A',
'ALERTPUBLISHER': 'AP',
'CATALOGSERVER': 'ICS',
'DATANODE': 'DN',
'EVENTSERVER': 'ES',
'HBASE_INDEXER': 'LHBI',
'HIVEMETASTORE': 'HVM',
'HIVESERVER2': 'HVS',
'HOSTMONITOR': 'HM',
'IMPALAD': 'ID',
'JOBHISTORY': 'JS',
'JOURNALNODE': 'JN',
'KAFKA_BROKER': 'KB',
'KMS': 'KMS',
'MASTER': 'M',
'NAMENODE': 'NN',
'NODEMANAGER': 'NM',
'OOZIE_SERVER': 'OS',
'REGIONSERVER': 'RS',
'RESOURCEMANAGER': 'RM',
'SECONDARYNAMENODE': 'SNN',
'SENTRY_SERVER': 'SNT',
'SERVER': 'S',
'SERVICEMONITOR': 'SM',
'SOLR_SERVER': 'SLR',
'SPARK_YARN_HISTORY_SERVER': 'SHS',
'SQOOP_SERVER': 'S2S',
'STATESTORE': 'ISS',
'WEBHCAT': 'WHC',
'HDFS_GATEWAY': 'HG',
'YARN_GATEWAY': 'YG'
}
return '%s_%s' % (shortcuts.get(service, service),
instance.hostname().replace('-', '_'))
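    # Illustrative note (added commentary): for a DATANODE role on a host
    # named "cluster-worker-0", get_role_name() returns
    # "DN_cluster_worker_0", which satisfies the regexp above; an unknown
    # service name falls back to the service string itself.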
def get_manager(self, cluster):
return u.get_instance(cluster, 'CLOUDERA_MANAGER')
def get_namenode(self, cluster):
return u.get_instance(cluster, "HDFS_NAMENODE")
def get_datanodes(self, cluster):
return u.get_instances(cluster, 'HDFS_DATANODE')
def get_hdfs_nodes(self, cluster, instances=None):
instances = instances if instances else u.get_instances(cluster)
return u.instances_with_services(
instances, ["HDFS_DATANODE", "HDFS_NAMENODE",
"HDFS_SECONDARYNAMENODE"])
def get_secondarynamenode(self, cluster):
return u.get_instance(cluster, 'HDFS_SECONDARYNAMENODE')
def get_historyserver(self, cluster):
return u.get_instance(cluster, 'YARN_JOBHISTORY')
def get_resourcemanager(self, cluster):
return u.get_instance(cluster, 'YARN_RESOURCEMANAGER')
def get_nodemanagers(self, cluster):
return u.get_instances(cluster, 'YARN_NODEMANAGER')
def get_oozie(self, cluster):
return u.get_instance(cluster, 'OOZIE_SERVER')
def get_hive_metastore(self, cluster):
return u.get_instance(cluster, 'HIVE_METASTORE')
def get_hive_servers(self, cluster):
return u.get_instances(cluster, 'HIVE_SERVER2')
def get_hue(self, cluster):
return u.get_instance(cluster, 'HUE_SERVER')
def get_spark_historyserver(self, cluster):
return u.get_instance(cluster, 'SPARK_YARN_HISTORY_SERVER')
def get_zookeepers(self, cluster):
return u.get_instances(cluster, 'ZOOKEEPER_SERVER')
def get_hbase_master(self, cluster):
return u.get_instance(cluster, 'HBASE_MASTER')
def get_sentry(self, cluster):
return u.get_instance(cluster, 'SENTRY_SERVER')
def get_flumes(self, cluster):
return u.get_instances(cluster, 'FLUME_AGENT')
def get_solrs(self, cluster):
return u.get_instances(cluster, 'SOLR_SERVER')
def get_sqoop(self, cluster):
return u.get_instance(cluster, 'SQOOP_SERVER')
def get_hbase_indexers(self, cluster):
return u.get_instances(cluster, 'KEY_VALUE_STORE_INDEXER')
def get_catalogserver(self, cluster):
return u.get_instance(cluster, 'IMPALA_CATALOGSERVER')
def get_statestore(self, cluster):
return u.get_instance(cluster, 'IMPALA_STATESTORE')
def get_impalads(self, cluster):
return u.get_instances(cluster, 'IMPALAD')
def get_kms(self, cluster):
return u.get_instances(cluster, 'KMS')
def get_jns(self, cluster):
return u.get_instances(cluster, 'HDFS_JOURNALNODE')
def get_stdb_rm(self, cluster):
return u.get_instance(cluster, 'YARN_STANDBYRM')
def get_kafka_brokers(self, cluster):
return u.get_instances(cluster, 'KAFKA_BROKER')
def convert_process_configs(self, configs):
p_dict = {
"CLOUDERA": ['MANAGER'],
"NAMENODE": ['NAMENODE'],
"DATANODE": ['DATANODE'],
"SECONDARYNAMENODE": ['SECONDARYNAMENODE'],
"RESOURCEMANAGER": ['RESOURCEMANAGER'],
"NODEMANAGER": ['NODEMANAGER'],
"JOBHISTORY": ['JOBHISTORY'],
"OOZIE": ['OOZIE_SERVER'],
"HIVESERVER": ['HIVESERVER2'],
"HIVEMETASTORE": ['HIVEMETASTORE'],
"WEBHCAT": ['WEBHCAT'],
"HUE": ['HUE_SERVER'],
"SPARK_ON_YARN": ['SPARK_YARN_HISTORY_SERVER'],
"ZOOKEEPER": ['SERVER'],
"MASTER": ['MASTER'],
"REGIONSERVER": ['REGIONSERVER'],
"FLUME": ['AGENT'],
"CATALOGSERVER": ['CATALOGSERVER'],
"STATESTORE": ['STATESTORE'],
"IMPALAD": ['IMPALAD'],
"KS_INDEXER": ['HBASE_INDEXER'],
"SENTRY": ['SENTRY_SERVER'],
"SOLR": ['SOLR_SERVER'],
"SQOOP": ['SQOOP_SERVER'],
"KMS": ['KMS'],
"YARN_GATEWAY": ['YARN_GATEWAY'],
"HDFS_GATEWAY": ['HDFS_GATEWAY'],
"JOURNALNODE": ['JOURNALNODE'],
"KAFKA": ['KAFKA_BROKER']
}
if res.is_resource_instance(configs):
configs = configs.to_dict()
for k in configs.keys():
if k in p_dict.keys():
item = configs[k]
del configs[k]
newkey = p_dict[k][0]
configs[newkey] = item
return res.create_resource(configs)
def convert_role_showname(self, showname):
# Yarn ResourceManager and Standby ResourceManager will
# be converted to ResourceManager.
name_dict = {
'CLOUDERA_MANAGER': 'MANAGER',
'HDFS_NAMENODE': 'NAMENODE',
'HDFS_DATANODE': 'DATANODE',
'HDFS_JOURNALNODE': 'JOURNALNODE',
'HDFS_SECONDARYNAMENODE': 'SECONDARYNAMENODE',
'YARN_RESOURCEMANAGER': 'RESOURCEMANAGER',
'YARN_STANDBYRM': 'RESOURCEMANAGER',
'YARN_NODEMANAGER': 'NODEMANAGER',
'YARN_JOBHISTORY': 'JOBHISTORY',
'OOZIE_SERVER': 'OOZIE_SERVER',
'HIVE_SERVER2': 'HIVESERVER2',
'HIVE_METASTORE': 'HIVEMETASTORE',
'HIVE_WEBHCAT': 'WEBHCAT',
'HUE_SERVER': 'HUE_SERVER',
'SPARK_YARN_HISTORY_SERVER': 'SPARK_YARN_HISTORY_SERVER',
'ZOOKEEPER_SERVER': 'SERVER',
'HBASE_MASTER': 'MASTER',
'HBASE_REGIONSERVER': 'REGIONSERVER',
'FLUME_AGENT': 'AGENT',
'IMPALA_CATALOGSERVER': 'CATALOGSERVER',
'IMPALA_STATESTORE': 'STATESTORE',
'IMPALAD': 'IMPALAD',
'KEY_VALUE_STORE_INDEXER': 'HBASE_INDEXER',
'SENTRY_SERVER': 'SENTRY_SERVER',
            'SOLR_SERVER': 'SOLR_SERVER',
'SQOOP_SERVER': 'SQOOP_SERVER',
}
return name_dict.get(showname, showname)
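    # e.g. convert_role_showname('YARN_STANDBYRM') -> 'RESOURCEMANAGER',
    # while an unmapped name is returned unchanged.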
def install_packages(self, instances, packages):
# instances non-empty
u.add_provisioning_step(
instances[0].cluster_id, _("Install packages"), len(instances))
with context.PluginsThreadGroup() as tg:
for i in instances:
tg.spawn('cdh-inst-pkgs-%s' % i.instance_name,
self._install_pkgs, i, packages)
@u.event_wrapper(True)
def _install_pkgs(self, instance, packages):
with instance.remote() as r:
cmd.install_packages(r, packages)
def start_cloudera_agents(self, instances):
# instances non-empty
u.add_provisioning_step(
instances[0].cluster_id, _("Start Cloudera Agents"),
len(instances))
with context.PluginsThreadGroup() as tg:
for i in instances:
tg.spawn('cdh-agent-start-%s' % i.instance_name,
self._start_cloudera_agent, i)
@u.event_wrapper(True)
def _start_cloudera_agent(self, instance):
mng_hostname = self.get_manager(instance.cluster).hostname()
with instance.remote() as r:
cmd.configure_agent(r, mng_hostname)
cmd.start_agent(r)
def configure_swift(self, cluster, instances=None):
if self.c_helper.is_swift_enabled(cluster):
if not instances:
instances = u.get_instances(cluster)
u.add_provisioning_step(
cluster.id, _("Configure Swift"), len(instances))
with context.PluginsThreadGroup() as tg:
for i in instances:
tg.spawn('cdh-swift-conf-%s' % i.instance_name,
self._configure_swift_to_inst, i)
swift_helper.install_ssl_certs(instances)
@u.event_wrapper(True)
def _configure_swift_to_inst(self, instance):
cluster = instance.cluster
swift_lib_remote_url = self.c_helper.get_swift_lib_url(cluster)
with instance.remote() as r:
if r.execute_command('ls %s/hadoop-openstack.jar' % HADOOP_LIB_DIR,
raise_when_error=False)[0] != 0:
r.execute_command('sudo curl %s -o %s/hadoop-openstack.jar' % (
swift_lib_remote_url, HADOOP_LIB_DIR))
def configure_sentry(self, cluster):
manager = self.get_manager(cluster)
with manager.remote() as r:
dh.create_sentry_database(cluster, r)
def put_hive_hdfs_xml(self, cluster):
servers = self.get_hive_servers(cluster)
with servers[0].remote() as r:
conf_path = edp.get_hive_shared_conf_path('hdfs')
r.execute_command(
'sudo su - -c "hadoop fs -mkdir -p %s" hdfs'
% os.path.dirname(conf_path))
r.execute_command(
'sudo su - -c "hadoop fs -put /etc/hive/conf/hive-site.xml '
'%s" hdfs' % conf_path)
def configure_hive(self, cluster):
manager = self.get_manager(cluster)
with manager.remote() as r:
dh.create_hive_database(cluster, r)
def install_extjs(self, cluster):
extjs_remote_location = self.c_helper.get_extjs_lib_url(cluster)
extjs_vm_location_dir = '/var/lib/oozie'
extjs_vm_location_path = extjs_vm_location_dir + '/extjs.zip'
with self.get_oozie(cluster).remote() as r:
if r.execute_command('ls %s/ext-2.2' % extjs_vm_location_dir,
raise_when_error=False)[0] != 0:
r.execute_command('curl -L -o \'%s\' %s' % (
extjs_vm_location_path, extjs_remote_location),
run_as_root=True)
r.execute_command('unzip %s -d %s' % (
extjs_vm_location_path, extjs_vm_location_dir),
run_as_root=True)
def _check_cloudera_manager_started(self, manager):
try:
conn = telnetlib.Telnet(manager.management_ip, CM_API_PORT)
conn.close()
return True
except IOError:
return False
@u.event_wrapper(
True, step=_("Start Cloudera Manager"), param=('cluster', 1))
def _start_cloudera_manager(self, cluster, timeout_config):
manager = self.get_manager(cluster)
with manager.remote() as r:
cmd.start_cloudera_db(r)
cmd.start_manager(r)
u.plugin_option_poll(
cluster, self._check_cloudera_manager_started, timeout_config,
_("Await starting Cloudera Manager"), 2, {'manager': manager})
def configure_os(self, instances):
# instances non-empty
u.add_provisioning_step(
instances[0].cluster_id, _("Configure OS"), len(instances))
with context.PluginsThreadGroup() as tg:
for inst in instances:
tg.spawn('cdh-repo-conf-%s' % inst.instance_name,
self._configure_repo_from_inst, inst)
@u.event_wrapper(True)
def _configure_repo_from_inst(self, instance):
LOG.debug("Configure repos from instance {instance}".format(
instance=instance.instance_name))
cluster = instance.cluster
with instance.remote() as r:
if cmd.is_ubuntu_os(r):
cdh5_key = (
self.c_helper.get_cdh5_key_url(cluster) or
self.c_helper.DEFAULT_CDH5_UBUNTU_REPO_KEY_URL)
cm5_key = (
self.c_helper.get_cm5_key_url(cluster) or
self.c_helper.DEFAULT_CM5_UBUNTU_REPO_KEY_URL)
if self.c_helper.is_keytrustee_available():
kms_key = (
self.c_helper.get_kms_key_url(cluster) or
self.c_helper.DEFAULT_KEY_TRUSTEE_UBUNTU_REPO_KEY_URL)
kms_repo_url = self.c_helper.KEY_TRUSTEE_UBUNTU_REPO_URL
cmd.add_ubuntu_repository(r, kms_repo_url, 'kms')
cmd.add_apt_key(r, kms_key)
cdh5_repo_content = self.c_helper.CDH5_UBUNTU_REPO
cm5_repo_content = self.c_helper.CM5_UBUNTU_REPO
cmd.write_ubuntu_repository(r, cdh5_repo_content, 'cdh')
cmd.add_apt_key(r, cdh5_key)
cmd.write_ubuntu_repository(r, cm5_repo_content, 'cm')
cmd.add_apt_key(r, cm5_key)
cmd.update_repository(r)
if cmd.is_centos_os(r):
cdh5_repo_content = self.c_helper.CDH5_CENTOS_REPO
cm5_repo_content = self.c_helper.CM5_CENTOS_REPO
if self.c_helper.is_keytrustee_available():
kms_repo_url = self.c_helper.KEY_TRUSTEE_CENTOS_REPO_URL
cmd.add_centos_repository(r, kms_repo_url, 'kms')
cmd.write_centos_repository(r, cdh5_repo_content, 'cdh')
cmd.write_centos_repository(r, cm5_repo_content, 'cm')
cmd.update_repository(r)
def _get_config_value(self, service, name, configs, cluster=None):
if cluster:
conf = cluster.cluster_configs
if service in conf and name in conf[service]:
return u.transform_to_num(conf[service][name])
for node_group in cluster.node_groups:
conf = node_group.node_configs
if service in conf and name in conf[service]:
return u.transform_to_num(conf[service][name])
for config in configs:
if config.applicable_target == service and config.name == name:
return u.transform_to_num(config.default_value)
raise exc.InvalidDataException(
_("Unable to find config: applicable_target: {target}, name: "
"{name}").format(target=service, name=name))
def recommend_configs(self, cluster, plugin_configs, scaling):
provider = CDHPluginAutoConfigsProvider(
AUTO_CONFIGURATION_SCHEMA, plugin_configs, cluster, scaling)
provider.apply_recommended_configs()
def start_cloudera_manager(self, cluster):
self._start_cloudera_manager(
cluster, self.c_helper.AWAIT_MANAGER_STARTING_TIMEOUT)
def get_config_value(self, service, name, cluster=None):
configs = self.c_helper.get_plugin_configs()
return self._get_config_value(service, name, configs, cluster)

View File

@ -1,28 +0,0 @@
# Copyright (c) 2016 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara_plugin_cdh.plugins.cdh import cloudera_utils as cu
from sahara_plugin_cdh.plugins.cdh.v5_11_0 import config_helper
from sahara_plugin_cdh.plugins.cdh.v5_11_0 import plugin_utils as pu
from sahara_plugin_cdh.plugins.cdh.v5_11_0 import validation
class ClouderaUtilsV5110(cu.ClouderaUtils):
def __init__(self):
cu.ClouderaUtils.__init__(self)
self.pu = pu.PluginUtilsV5110()
self.validator = validation.ValidatorV5110
self.c_helper = config_helper.ConfigHelperV5110()

View File

@ -1,103 +0,0 @@
# Copyright (c) 2016 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import provisioning as p
from sahara.plugins import utils
from sahara_plugin_cdh.plugins.cdh import config_helper as c_h
class ConfigHelperV5110(c_h.ConfigHelper):
path_to_config = 'plugins/cdh/v5_11_0/resources/'
    CDH5_UBUNTU_REPO = (
        'deb [arch=amd64] http://archive.cloudera.com/cdh5'
        '/ubuntu/xenial/amd64/cdh xenial-cdh5.11.0 contrib'
        '\ndeb-src http://archive.cloudera.com/cdh5/ubuntu'
        '/xenial/amd64/cdh xenial-cdh5.11.0 contrib')
DEFAULT_CDH5_UBUNTU_REPO_KEY_URL = (
'http://archive.cloudera.com/cdh5/ubuntu'
'/xenial/amd64/cdh/archive.key')
    CM5_UBUNTU_REPO = (
        'deb [arch=amd64] http://archive.cloudera.com/cm5'
        '/ubuntu/xenial/amd64/cm xenial-cm5.11.0 contrib'
        '\ndeb-src http://archive.cloudera.com/cm5/ubuntu'
        '/xenial/amd64/cm xenial-cm5.11.0 contrib')
DEFAULT_CM5_UBUNTU_REPO_KEY_URL = (
'http://archive.cloudera.com/cm5/ubuntu'
'/xenial/amd64/cm/archive.key')
CDH5_CENTOS_REPO = (
'[cloudera-cdh5]'
'\nname=Cloudera\'s Distribution for Hadoop, Version 5'
'\nbaseurl=http://archive.cloudera.com/cdh5/redhat/6'
'/x86_64/cdh/5.11.0/'
'\ngpgkey = http://archive.cloudera.com/cdh5/redhat/6'
'/x86_64/cdh/RPM-GPG-KEY-cloudera'
'\ngpgcheck = 1')
CM5_CENTOS_REPO = (
'[cloudera-manager]'
'\nname=Cloudera Manager'
'\nbaseurl=http://archive.cloudera.com/cm5/redhat/6'
'/x86_64/cm/5.11.0/'
'\ngpgkey = http://archive.cloudera.com/cm5/redhat/6'
'/x86_64/cm/RPM-GPG-KEY-cloudera'
'\ngpgcheck = 1')
KEY_TRUSTEE_UBUNTU_REPO_URL = (
'http://archive.cloudera.com/navigator-'
'keytrustee5/ubuntu/xenial/amd64/navigator-'
'keytrustee/cloudera.list')
DEFAULT_KEY_TRUSTEE_UBUNTU_REPO_KEY_URL = (
'http://archive.cloudera.com/navigator-'
'keytrustee5/ubuntu/xenial/amd64/navigator-'
'keytrustee/archive.key')
KEY_TRUSTEE_CENTOS_REPO_URL = (
'http://archive.cloudera.com/navigator-'
'keytrustee5/redhat/6/x86_64/navigator-'
'keytrustee/navigator-keytrustee5.repo')
DEFAULT_SWIFT_LIB_URL = (
'https://repository.cloudera.com/artifactory/repo/org'
'/apache/hadoop/hadoop-openstack/2.6.0-cdh5.11.0'
'/hadoop-openstack-2.6.0-cdh5.11.0.jar')
SWIFT_LIB_URL = p.Config(
'Hadoop OpenStack library URL', 'general', 'cluster', priority=1,
default_value=DEFAULT_SWIFT_LIB_URL,
description=("Library that adds Swift support to CDH. The file"
" will be downloaded by VMs."))
HIVE_SERVER2_SENTRY_SAFETY_VALVE = utils.get_file_text(
path_to_config + 'hive-server2-sentry-safety.xml', 'sahara_plugin_cdh')
HIVE_METASTORE_SENTRY_SAFETY_VALVE = utils.get_file_text(
path_to_config + 'hive-metastore-sentry-safety.xml',
'sahara_plugin_cdh')
SENTRY_IMPALA_CLIENT_SAFETY_VALVE = utils.get_file_text(
path_to_config + 'sentry-impala-client-safety.xml',
'sahara_plugin_cdh')
def __init__(self):
super(ConfigHelperV5110, self).__init__()
self.priority_one_confs = self._load_json(
self.path_to_config + 'priority-one-confs.json')
self._init_all_ng_plugin_configs()

View File

@ -1,168 +0,0 @@
# Copyright (c) 2016 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import edp
from sahara.plugins import utils as plugin_utils
from sahara_plugin_cdh.i18n import _
from sahara_plugin_cdh.plugins.cdh import commands as cmd
from sahara_plugin_cdh.plugins.cdh import deploy as common_deploy
from sahara_plugin_cdh.plugins.cdh.v5_11_0 import cloudera_utils as cu
CU = cu.ClouderaUtilsV5110()
PACKAGES = common_deploy.PACKAGES
def configure_cluster(cluster):
instances = plugin_utils.get_instances(cluster)
if not cmd.is_pre_installed_cdh(CU.pu.get_manager(cluster).remote()):
CU.pu.configure_os(instances)
CU.pu.install_packages(instances, PACKAGES)
CU.pu.start_cloudera_agents(instances)
CU.pu.start_cloudera_manager(cluster)
CU.update_cloudera_password(cluster)
CU.configure_rack_awareness(cluster)
CU.await_agents(cluster, instances)
CU.create_mgmt_service(cluster)
CU.create_services(cluster)
CU.configure_services(cluster)
CU.configure_instances(instances, cluster)
CU.deploy_configs(cluster)
@plugin_utils.event_wrapper(
True, step=_("Start roles: NODEMANAGER, DATANODE"), param=('cluster', 0))
def _start_roles(cluster, instances):
for instance in instances:
if 'HDFS_DATANODE' in instance.node_group.node_processes:
hdfs = CU.get_service_by_role('DATANODE', instance=instance)
CU.start_roles(hdfs, CU.pu.get_role_name(instance, 'DATANODE'))
if 'YARN_NODEMANAGER' in instance.node_group.node_processes:
yarn = CU.get_service_by_role('NODEMANAGER', instance=instance)
CU.start_roles(yarn, CU.pu.get_role_name(instance, 'NODEMANAGER'))
def scale_cluster(cluster, instances):
if not instances:
return
if not cmd.is_pre_installed_cdh(instances[0].remote()):
CU.pu.configure_os(instances)
CU.pu.install_packages(instances, PACKAGES)
CU.pu.start_cloudera_agents(instances)
CU.await_agents(cluster, instances)
CU.configure_rack_awareness(cluster)
CU.configure_instances(instances, cluster)
CU.update_configs(instances)
common_deploy.prepare_scaling_kerberized_cluster(
cluster, CU, instances)
CU.pu.configure_swift(cluster, instances)
_start_roles(cluster, instances)
CU.refresh_datanodes(cluster)
CU.refresh_yarn_nodes(cluster)
CU.restart_stale_services(cluster)
def decommission_cluster(cluster, instances):
dns = []
dns_to_delete = []
nms = []
nms_to_delete = []
for i in instances:
if 'HDFS_DATANODE' in i.node_group.node_processes:
dns.append(CU.pu.get_role_name(i, 'DATANODE'))
dns_to_delete.append(
CU.pu.get_role_name(i, 'HDFS_GATEWAY'))
if 'YARN_NODEMANAGER' in i.node_group.node_processes:
nms.append(CU.pu.get_role_name(i, 'NODEMANAGER'))
nms_to_delete.append(
CU.pu.get_role_name(i, 'YARN_GATEWAY'))
if dns:
CU.decommission_nodes(
cluster, 'DATANODE', dns, dns_to_delete)
if nms:
CU.decommission_nodes(
cluster, 'NODEMANAGER', nms, nms_to_delete)
CU.delete_instances(cluster, instances)
CU.refresh_datanodes(cluster)
CU.refresh_yarn_nodes(cluster)
CU.restart_stale_services(cluster)
@plugin_utils.event_wrapper(
True, step=_("Prepare cluster"), param=('cluster', 0))
def _prepare_cluster(cluster):
if CU.pu.get_oozie(cluster):
CU.pu.install_extjs(cluster)
if CU.pu.get_hive_metastore(cluster):
CU.pu.configure_hive(cluster)
if CU.pu.get_sentry(cluster):
CU.pu.configure_sentry(cluster)
@plugin_utils.event_wrapper(
True, step=_("Finish cluster starting"), param=('cluster', 0))
def _finish_cluster_starting(cluster):
if CU.pu.get_hive_metastore(cluster):
CU.pu.put_hive_hdfs_xml(cluster)
server = CU.pu.get_hbase_master(cluster)
if CU.pu.c_helper.is_hbase_common_lib_enabled(cluster) and server:
with server.remote() as r:
edp.create_hbase_common_lib(r)
if CU.pu.get_flumes(cluster):
flume = CU.get_service_by_role('AGENT', cluster)
CU.start_service(flume)
def start_cluster(cluster):
_prepare_cluster(cluster)
CU.first_run(cluster)
CU.pu.configure_swift(cluster)
if len(CU.pu.get_jns(cluster)) > 0:
CU.enable_namenode_ha(cluster)
# updating configs for NameNode role on needed nodes
CU.update_role_config(CU.pu.get_secondarynamenode(cluster),
'HDFS_NAMENODE')
if CU.pu.get_stdb_rm(cluster):
CU.enable_resourcemanager_ha(cluster)
# updating configs for ResourceManager on needed nodes
CU.update_role_config(CU.pu.get_stdb_rm(cluster), 'YARN_STANDBYRM')
_finish_cluster_starting(cluster)
common_deploy.setup_kerberos_for_cluster(cluster, CU)
def get_open_ports(node_group):
ports = common_deploy.get_open_ports(node_group)
return ports

View File

@ -1,46 +0,0 @@
# Copyright (c) 2016 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import edp
from sahara_plugin_cdh.plugins.cdh import confighints_helper as ch_helper
from sahara_plugin_cdh.plugins.cdh import edp_engine
from sahara_plugin_cdh.plugins.cdh.v5_11_0 import cloudera_utils as cu
class EdpOozieEngine(edp_engine.EdpOozieEngine):
def __init__(self, cluster):
super(EdpOozieEngine, self).__init__(cluster)
self.cloudera_utils = cu.ClouderaUtilsV5110()
@staticmethod
def get_possible_job_config(job_type):
if edp.compare_job_type(job_type, edp.JOB_TYPE_HIVE):
return {'job_config': ch_helper.get_possible_hive_config_from(
'plugins/cdh/v5_11_0/resources/hive-site.xml')}
if edp.compare_job_type(job_type,
edp.JOB_TYPE_MAPREDUCE,
edp.JOB_TYPE_MAPREDUCE_STREAMING):
return {'job_config': ch_helper.get_possible_mapreduce_config_from(
'plugins/cdh/v5_11_0/resources/mapred-site.xml')}
if edp.compare_job_type(job_type, edp.JOB_TYPE_PIG):
return {'job_config': ch_helper.get_possible_pig_config_from(
'plugins/cdh/v5_11_0/resources/mapred-site.xml')}
return edp.PluginsOozieJobEngine.get_possible_job_config(job_type)
class EdpSparkEngine(edp_engine.EdpSparkEngine):
edp_base_version = "5.11.0"

View File

@ -1,44 +0,0 @@
# Copyright (c) 2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import images
from sahara.plugins import utils as plugin_utils
_validator = images.SaharaImageValidator.from_yaml(
'plugins/cdh/v5_11_0/resources/images/image.yaml',
resource_roots=['plugins/cdh/v5_11_0/resources/images'],
package='sahara_plugin_cdh')
def get_image_arguments():
return _validator.get_argument_list()
def pack_image(remote, test_only=False, image_arguments=None):
_validator.validate(remote, test_only=test_only,
image_arguments=image_arguments)
def validate_images(cluster, test_only=False, image_arguments=None):
image_arguments = get_image_arguments()
if not test_only:
instances = plugin_utils.get_instances(cluster)
else:
        # keep a one-element list so the loop below still iterates
        instances = plugin_utils.get_instances(cluster)[:1]
for instance in instances:
with instance.remote() as r:
_validator.validate(r, test_only=test_only,
image_arguments=image_arguments)

View File

@ -1,23 +0,0 @@
# Copyright (c) 2016 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara_plugin_cdh.plugins.cdh import plugin_utils as pu
from sahara_plugin_cdh.plugins.cdh.v5_11_0 import config_helper
class PluginUtilsV5110(pu.AbstractPluginUtils):
def __init__(self):
self.c_helper = config_helper.ConfigHelperV5110()

View File

@ -1,64 +0,0 @@
# Copyright (c) 2017 Massachusetts Open Cloud
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
from cm_api.api_client import ApiResource
cm_host = "localhost"
api = ApiResource(cm_host, username="admin", password="admin") # nosec
c = api.get_all_clusters()[0]
services = c.get_all_services()
def process_service(service):
service_name = service.name
if service_name == "spark_on_yarn":
service_name = "spark"
for role_cfgs in service.get_all_role_config_groups():
role_cm_cfg = role_cfgs.get_config(view='full')
role_cfg = parse_config(role_cm_cfg)
role_name = role_cfgs.roleType.lower()
write_cfg(role_cfg, '%s-%s.json' % (service_name, role_name))
service_cm_cfg = service.get_config(view='full')[0]
service_cfg = parse_config(service_cm_cfg)
write_cfg(service_cfg, '%s-service.json' % service_name)
def parse_config(config):
cfg = []
for name, value in config.items():
p = {
'name': value.name,
'value': value.default,
'display_name': value.displayName,
'desc': value.description
}
cfg.append(p)
return cfg
def write_cfg(cfg, file_name):
    to_write = json.dumps(cfg, sort_keys=True, indent=4,
                          separators=(',', ': '))
with open(file_name, 'w') as f:
f.write(to_write)
if __name__ == '__main__':
for service in services:
process_service(service)

View File

@ -1,14 +0,0 @@
#!/bin/bash
# prereqs - virtualenv
virtualenv venv
. venv/bin/activate
pip install cm-api
python cdh_config.py
deactivate
rm -rf venv

File diff suppressed because one or more lines are too long

View File

@ -1,164 +0,0 @@
[
{
"desc": "Name of the HBase service that this Flume service instance depends on",
"display_name": "HBase Service",
"name": "hbase_service",
"value": null
},
{
"desc": "The location on disk of the trust store, in .jks format, used to confirm the authenticity of TLS/SSL servers that Flume might connect to. This is used when Flume is the client in a TLS/SSL connection. This trust store must contain the certificate(s) used to sign the service(s) connected to. If this parameter is not provided, the default list of well-known certificate authorities is used instead.",
"display_name": "Flume TLS/SSL Certificate Trust Store File",
"name": "flume_truststore_file",
"value": null
},
{
"desc": "Whether to suppress the results of the Agent Health heath test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts.",
"display_name": "Suppress Health Test: Agent Health",
"name": "service_health_suppression_flume_agents_healthy",
"value": "false"
},
{
"desc": "The user that this service's processes should run as.",
"display_name": "System User",
"name": "process_username",
"value": "flume"
},
{
"desc": "Name of the HDFS service that this Flume service instance depends on",
"display_name": "HDFS Service",
"name": "hdfs_service",
"value": null
},
{
"desc": "Whether to suppress configuration warnings produced by the Agent Count Validator configuration validator.",
"display_name": "Suppress Configuration Validator: Agent Count Validator",
"name": "service_config_suppression_agent_count_validator",
"value": "false"
},
{
"desc": "Name of the Solr service that this Flume service instance depends on",
"display_name": "Solr Service",
"name": "solr_service",
"value": null
},
{
"desc": "For advanced use only, key-value pairs (one on each line) to be inserted into a role's environment. Applies to configurations of all roles in this service except client configuration.",
"display_name": "Flume Service Environment Advanced Configuration Snippet (Safety Valve)",
"name": "flume_env_safety_valve",
"value": null
},
{
"desc": "Sets the maximum number of Flume components that will be returned under Flume Metric Details. Increasing this value will negatively impact the interactive performance of the Flume Metrics Details page.",
"display_name": "Maximum displayed Flume metrics components",
"name": "flume_context_groups_request_limit",
"value": "1000"
},
{
"desc": "Kerberos principal short name used by all roles of this service.",
"display_name": "Kerberos Principal",
"name": "kerberos_princ_name",
"value": "flume"
},
{
"desc": "The health test thresholds of the overall Agent health. The check returns \"Concerning\" health if the percentage of \"Healthy\" Agents falls below the warning threshold. The check is unhealthy if the total percentage of \"Healthy\" and \"Concerning\" Agents falls below the critical threshold.",
"display_name": "Healthy Agent Monitoring Thresholds",
"name": "flume_agents_healthy_thresholds",
"value": "{\"critical\":\"never\",\"warning\":\"95.0\"}"
},
{
"desc": "The group that this service's processes should run as.",
"display_name": "System Group",
"name": "process_groupname",
"value": "flume"
},
{
"desc": "When set, Cloudera Manager will send alerts when this entity's configuration changes.",
"display_name": "Enable Configuration Change Alerts",
"name": "enable_config_alerts",
"value": "false"
},
{
"desc": "Whether to suppress configuration warnings produced by the built-in parameter validation for the Service Triggers parameter.",
"display_name": "Suppress Parameter Validation: Service Triggers",
"name": "service_config_suppression_service_triggers",
"value": "false"
},
{
"desc": "The frequency in which the log4j event publication appender will retry sending undelivered log events to the Event server, in seconds",
"display_name": "Log Event Retry Frequency",
"name": "log_event_retry_frequency",
"value": "30"
},
{
"desc": "<p>The configured triggers for this service. This is a JSON-formatted list of triggers. These triggers are evaluated as part as the health system. Every trigger expression is parsed, and if the trigger condition is met, the list of actions provided in the trigger expression is executed.</p><p>Each trigger has the following fields:</p><ul><li><code>triggerName</code> <b>(mandatory)</b> - The name of the trigger. This value must be unique for the specific service. </li><li><code>triggerExpression</code> <b>(mandatory)</b> - A tsquery expression representing the trigger. </li><li><code>streamThreshold</code> <b>(optional)</b> - The maximum number of streams that can satisfy a condition of a trigger before the condition fires. By default set to 0, and any stream returned causes the condition to fire. </li><li><code>enabled</code> <b> (optional)</b> - By default set to 'true'. If set to 'false', the trigger is not evaluated.</li><li><code>expressionEditorConfig</code> <b> (optional)</b> - Metadata for the trigger editor. If present, the trigger should only be edited from the Edit Trigger page; editing the trigger here can lead to inconsistencies.</li></ul><p>For example, the followig JSON formatted trigger fires if there are more than 10 DataNodes with more than 500 file descriptors opened:</p><p><pre>[{\"triggerName\": \"sample-trigger\",\n \"triggerExpression\": \"IF (SELECT fd_open WHERE roleType = DataNode and last(fd_open) > 500) DO health:bad\",\n \"streamThreshold\": 10, \"enabled\": \"true\"}]</pre></p><p>See the trigger rules documentation for more details on how to write triggers using tsquery.</p><p>The JSON format is evolving and may change and, as a result, backward compatibility is not guaranteed between releases.</p>",
"display_name": "Service Triggers",
"name": "service_triggers",
"value": "[]"
},
{
"desc": "When set, each role identifies important log events and forwards them to Cloudera Manager.",
"display_name": "Enable Log Event Capture",
"name": "catch_events",
"value": "true"
},
{
"desc": "Whether to suppress configuration warnings produced by the built-in parameter validation for the System User parameter.",