Retire Sahara: remove repo content

Sahara project is retiring
- https://review.opendev.org/c/openstack/governance/+/919374

This commit removes the content of this project repo.

Depends-On: https://review.opendev.org/c/openstack/project-config/+/919376
Change-Id: I2ca927796262fc441a430514b7bf2ecedbbc4539
Ghanshyam Mann 2024-05-10 17:28:16 -07:00
parent 396518837e
commit 3e0213e5c4
92 changed files with 8 additions and 13328 deletions

.gitignore

@@ -1,30 +0,0 @@
*.egg-info
*.egg[s]
*.log
*.py[co]
.coverage
.testrepository
.tox
.stestr
.venv
.idea
AUTHORS
ChangeLog
build
cover
develop-eggs
dist
doc/build
doc/html
eggs
etc/sahara.conf
etc/sahara/*.conf
etc/sahara/*.topology
sdist
target
tools/lintstack.head.py
tools/pylint_exceptions
doc/source/sample.config
# Files created by releasenotes build
releasenotes/build

.stestr.conf

@@ -1,3 +0,0 @@
[DEFAULT]
test_path=./sahara_plugin_ambari/tests/unit
top_dir=./

.zuul.yaml

@@ -1,10 +0,0 @@
- project:
    templates:
      - check-requirements
      - openstack-python3-jobs
      - publish-openstack-docs-pti
      - release-notes-jobs-python3
    check:
      jobs:
        - sahara-buildimages-ambari:
            voting: false

CONTRIBUTING.rst

@@ -1,19 +0,0 @@
The source repository for this project can be found at:
https://opendev.org/openstack/sahara-plugin-ambari
Pull requests submitted through GitHub are not monitored.
To start contributing to OpenStack, follow the steps in the contribution guide
to set up and use Gerrit:
https://docs.openstack.org/contributors/code-and-documentation/quick-start.html
Bugs should be filed on Storyboard:
https://storyboard.openstack.org/#!/project/openstack/sahara-plugin-ambari
For more specific information about contributing to this repository, see the
sahara-plugin-ambari contributor guide:
https://docs.openstack.org/sahara-plugin-ambari/latest/contributor/contributing.html

LICENSE

@@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

README.rst

@@ -1,38 +1,10 @@

Removed:

========================
Team and repository tags
========================

.. image:: https://governance.openstack.org/tc/badges/sahara.svg
    :target: https://governance.openstack.org/tc/reference/tags/index.html

.. Change things from this point on

OpenStack Data Processing ("Sahara") Ambari Plugin
===================================================

OpenStack Sahara Ambari Plugin provides the users the option to
start Ambari clusters on OpenStack Sahara.

Check out OpenStack Sahara documentation to see how to deploy the
Ambari Plugin.

Sahara at wiki.openstack.org: https://wiki.openstack.org/wiki/Sahara
Storyboard project: https://storyboard.openstack.org/#!/project/openstack/sahara-plugin-ambari
Sahara docs site: https://docs.openstack.org/sahara/latest/
Quickstart guide: https://docs.openstack.org/sahara/latest/user/quickstart.html
How to participate: https://docs.openstack.org/sahara/latest/contributor/how-to-participate.html
Source: https://opendev.org/openstack/sahara-plugin-ambari
Bugs and feature requests: https://storyboard.openstack.org/#!/project/openstack/sahara-plugin-ambari
Release notes: https://docs.openstack.org/releasenotes/sahara-plugin-ambari/

License
-------

Apache License Version 2.0 http://www.apache.org/licenses/LICENSE-2.0

Added:

This project is no longer maintained.

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
OFTC.

babel.cfg

@@ -1 +0,0 @@
[python: **.py]

doc/requirements.txt

@@ -1,9 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
openstackdocstheme>=2.2.1 # Apache-2.0
os-api-ref>=1.4.0 # Apache-2.0
reno>=3.1.0 # Apache-2.0
sphinx>=2.0.0,!=2.1.0 # BSD
sphinxcontrib-httpdomain>=1.3.0 # BSD
whereto>=0.3.0 # Apache-2.0

doc/source/conf.py

@@ -1,214 +0,0 @@
# -*- coding: utf-8 -*-
#
# sahara-plugin-ambari documentation build configuration file.
#
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'reno.sphinxext',
'openstackdocstheme',
]
# openstackdocstheme options
openstackdocs_repo_name = 'openstack/sahara-plugin-ambari'
openstackdocs_pdf_link = True
openstackdocs_use_storyboard = True
openstackdocs_projects = [
'sahara'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
copyright = '2015, Sahara team'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'saharaambariplugin-testsdoc'
# -- Options for LaTeX output --------------------------------------------------
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'doc-sahara-plugin-ambari.tex', 'Sahara Ambari Plugin Documentation',
'Sahara team', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
smartquotes_excludes = {'builders': ['latex']}
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'sahara-plugin-ambari', 'sahara-plugin-ambari Documentation',
['Sahara team'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'sahara-plugin-ambari', 'sahara-plugin-ambari Documentation',
'Sahara team', 'sahara-plugin-ambari', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'

doc/source/contributor/contributing.rst

@@ -1,14 +0,0 @@
============================
So You Want to Contribute...
============================
For general information on contributing to OpenStack, please check out the
`contributor guide <https://docs.openstack.org/contributors/>`_ to get started.
It covers all the basics that are common to all OpenStack projects: the
accounts you need, the basics of interacting with our Gerrit review system, how
we communicate as a community, etc.
sahara-plugin-ambari is maintained by the OpenStack Sahara project.
To understand our development process and how you can contribute to it, please
look at the Sahara project's general contributor's page:
http://docs.openstack.org/sahara/latest/contributor/contributing.html

doc/source/contributor/index.rst

@@ -1,8 +0,0 @@
=================
Contributor Guide
=================
.. toctree::
   :maxdepth: 2

   contributing

doc/source/index.rst

@@ -1,8 +0,0 @@
Ambari plugin for Sahara
========================
.. toctree::
   :maxdepth: 2

   user/index
   contributor/index

doc/source/user/ambari-plugin.rst

@@ -1,162 +0,0 @@
Ambari Plugin
=============
The Ambari sahara plugin provides a way to provision
clusters with Hortonworks Data Platform on OpenStack using templates in a
single click and in an easily repeatable fashion. The sahara controller serves
as the glue between Hadoop and OpenStack. The Ambari plugin mediates between
the sahara controller and Apache Ambari in order to deploy and configure Hadoop
on OpenStack. Core to the HDP Plugin is Apache Ambari
which is used as the orchestrator for deploying HDP on OpenStack. The Ambari
plugin uses Ambari Blueprints for cluster provisioning.
Apache Ambari Blueprints
------------------------
Apache Ambari Blueprints is a portable document definition, which provides a
complete definition for an Apache Hadoop cluster, including cluster topology,
components, services and their configurations. Ambari Blueprints can be
consumed by the Ambari plugin to instantiate a Hadoop cluster on OpenStack. The
benefit of this approach is that it allows Hadoop clusters to be configured and
deployed using an Ambari-native format that can be used both within and outside
of OpenStack, allowing clusters to be re-instantiated in a variety of
environments.
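As a purely illustrative sketch (this blueprint is not shipped with the plugin;
the blueprint name, host group, and versions below are hypothetical), a minimal
Ambari Blueprint could look like:

.. sourcecode:: json

    {
        "Blueprints": {
            "blueprint_name": "sample-hdp",
            "stack_name": "HDP",
            "stack_version": "2.5"
        },
        "host_groups": [
            {
                "name": "master",
                "components": [
                    {"name": "NAMENODE"},
                    {"name": "RESOURCEMANAGER"}
                ],
                "cardinality": "1"
            }
        ]
    }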
Images
------
For cluster provisioning, prepared images should be used.
.. list-table:: Support matrix for the `ambari` plugin
   :widths: 15 15 20 15 35
   :header-rows: 1

   * - Version
       (image tag)
     - Distribution
     - Build method
     - Version
       (build parameter)
     - Notes
   * - 2.6
     - Ubuntu 16.04, CentOS 7
     - sahara-image-pack
     - 2.6
     - uses Ambari 2.6
   * - 2.5
     - Ubuntu 16.04, CentOS 7
     - sahara-image-pack
     - 2.5
     - uses Ambari 2.6
   * - 2.4
     - Ubuntu 14.04, CentOS 7
     - sahara-image-pack
     - 2.4
     - uses Ambari 2.6
   * - 2.4
     - Ubuntu 14.04, CentOS 7
     - sahara-image-create
     - 2.4
     - uses Ambari 2.2.1.0
   * - 2.3
     - Ubuntu 14.04, CentOS 7
     - sahara-image-pack
     - 2.3
     - uses Ambari 2.4
   * - 2.3
     - Ubuntu 14.04, CentOS 7
     - sahara-image-create
     - 2.3
     - uses Ambari 2.2.0.0
For more information about building images, refer to
:sahara-doc:`Sahara documentation <user/building-guest-images.html>`.
The HDP plugin requires an image to be tagged in the sahara Image Registry with
two tags: 'ambari' and '<plugin version>' (e.g. '2.5').
The image also requires a username. For more information, refer to the
:sahara-doc:`registering image <user/registering-image.html>` section
of the Sahara documentation.
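For example, registering and tagging a suitable image could be done with the
OpenStack CLI (the image name ``ambari-2.5-image`` and username ``ubuntu`` are
placeholder assumptions; check your client version for the exact syntax):

.. sourcecode:: console

    $ openstack dataprocessing image register ambari-2.5-image --username ubuntu
    $ openstack dataprocessing image tags add ambari-2.5-image --tags ambari 2.5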
To speed up provisioning, the HDP packages can be pre-installed on the image
used. The packages' versions depend on the HDP version required.
High Availability for HDFS and YARN
-----------------------------------
High Availability (using the Quorum Journal Manager) can be deployed
automatically with the Ambari plugin. You can deploy a highly available
cluster through the UI by selecting the ``NameNode HA`` and/or
``ResourceManager HA`` options in the general configs of the cluster template.
NameNode High Availability is deployed using 2 NameNodes, one active and
one standby. The NameNodes use a set of JournalNodes and ZooKeeper Servers to
ensure the necessary synchronization. In the case of ResourceManager HA, 2
ResourceManagers are enabled in addition.
A typical highly available Ambari cluster uses 2 separate NameNodes, 2 separate
ResourceManagers, at least 3 JournalNodes, and at least 3 ZooKeeper Servers.
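When creating the cluster template through the CLI rather than the UI, these
options could be supplied in the ``cluster_configs`` section, for example (a
sketch only; the exact section layout is an assumption):

.. sourcecode:: json

    "cluster_configs": {
        "NameNode HA": true,
        "ResourceManager HA": true
    }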
HDP Version Support
-------------------
The HDP plugin currently supports deployment of HDP 2.3, 2.4 and 2.5.
Cluster Validation
------------------
Prior to Hadoop cluster creation, the HDP plugin will perform the following
validation checks to ensure a successful Hadoop deployment:
* Ensure the existence of an Ambari Server process in the cluster;
* Ensure the existence of NameNode, ZooKeeper, ResourceManager, HistoryServer,
  and App Timeline Server processes in the cluster.
Enabling Kerberos security for cluster
--------------------------------------
To protect your clusters using MIT Kerberos security, complete the few steps
below.

* To create a cluster protected by Kerberos security, enable the Kerberos
  checkbox in the ``General Parameters`` section of the cluster configuration.
  If you prefer to use the OpenStack CLI for cluster creation, put the data
  below in the ``cluster_configs`` section:

  .. sourcecode:: console

      "cluster_configs": {
          "Enable Kerberos Security": true
      }

  In this case, sahara will prepare the KDC server and create principals
  along with keytabs to enable authentication for the Hadoop services.
* Ensure that you have the latest hadoop-openstack jar file distributed on
  your cluster nodes. You can download one at
  ``https://tarballs.openstack.org/sahara-extra/dist/``
* Sahara will create principals along with keytabs for system users like
  ``oozie``, ``hdfs`` and ``spark`` so that you will not have to perform
  additional auth operations to execute your jobs on top of the cluster.
Adjusting Ambari Agent Package Installation timeout Parameter
-------------------------------------------------------------
For a cluster with a large number of nodes or with slow connectivity to the
HDP repository server, creation of a sahara HDP cluster may fail because the
Ambari agent reaches the timeout threshold while installing the packages on
the nodes.
Such failures will occur during the "cluster start" stage, which can be
monitored from the Cluster Events tab of the Sahara Dashboard. The timeout
error will be visible from the Ambari Dashboard as well.
* To avoid the package installation timeout by the Ambari agent, change the
  default value of the ``Ambari Agent Package Install timeout`` parameter,
  which can be found in the ``General Parameters`` section of the cluster
  template configuration.
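When using the CLI, the same parameter could be set in ``cluster_configs``;
the value below (in seconds) is only an illustrative assumption:

.. sourcecode:: json

    "cluster_configs": {
        "Ambari Agent Package Install timeout": "1800"
    }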

doc/source/user/index.rst

@@ -1,8 +0,0 @@
==========
User Guide
==========
.. toctree::
   :maxdepth: 2

   ambari-plugin

@@ -1,6 +0,0 @@
---
upgrade:
  - |
    Python 2.7 support has been dropped. The last release of sahara and its
    plugins to support Python 2.7 is OpenStack Train. The minimum version of
    Python now supported by sahara and its plugins is Python 3.6.

@@ -1,5 +0,0 @@
---
fixes:
  - |
    Fixed several bugs which prevented sahara-image-pack from generating
    Ambari-based Ubuntu images.

releasenotes/source/2023.1.rst

@@ -1,6 +0,0 @@
===========================
2023.1 Series Release Notes
===========================
.. release-notes::
   :branch: stable/2023.1

releasenotes/source/conf.py

@@ -1,210 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Sahara Release Notes documentation build configuration file
extensions = [
'reno.sphinxext',
'openstackdocstheme'
]
# openstackdocstheme options
openstackdocs_repo_name = 'openstack/sahara-plugin-ambari'
openstackdocs_use_storyboard = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
copyright = '2015, Sahara Developers'
# Release do not need a version number in the title, they
# cover multiple versions.
# The full version, including alpha/beta/rc tags.
release = ''
# The short X.Y version.
version = ''
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'SaharaAmbariReleaseNotesdoc'
# -- Options for LaTeX output ---------------------------------------------
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'SaharaAmbariReleaseNotes.tex',
'Sahara Ambari Plugin Release Notes Documentation',
'Sahara Developers', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'saharaambarireleasenotes',
'Sahara Ambari Plugin Release Notes Documentation',
['Sahara Developers'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'SaharaAmbariReleaseNotes',
'Sahara Ambari Plugin Release Notes Documentation',
'Sahara Developers', 'SaharaAmbariReleaseNotes',
'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']

View File

@@ -1,17 +0,0 @@
====================================
Sahara Ambari Plugin Release Notes
====================================
.. toctree::
:maxdepth: 1
unreleased
2023.1
zed
yoga
xena
wallaby
victoria
ussuri
train
stein

View File

@@ -1,57 +0,0 @@
# Andreas Jaeger <jaegerandi@gmail.com>, 2019. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2020. #zanata
msgid ""
msgstr ""
"Project-Id-Version: sahara-plugin-ambari\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-04-24 23:41+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2020-04-25 10:43+0000\n"
"Last-Translator: Andreas Jaeger <jaegerandi@gmail.com>\n"
"Language-Team: German\n"
"Language: de\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid "1.0.0"
msgstr "1.0.0"
msgid "Bug Fixes"
msgstr "Fehlerkorrekturen"
msgid "Current Series Release Notes"
msgstr "Aktuelle Serie Releasenotes"
msgid ""
"Fixed several bugs which prevented sahara-image-pack from generating Ambari-"
"based Ubuntu images."
msgstr ""
"Mehrere Fehler wurden gefixt welche sahara-image-pack hinderten Ambari-"
"basierte Ubuntu Abbilder zu erzeugen."
msgid ""
"Python 2.7 support has been dropped. Last release of sahara and its plugins "
"to support python 2.7 is OpenStack Train. The minimum version of Python now "
"supported by sahara and its plugins is Python 3.6."
msgstr ""
"Python 2.7 Unterstützung wurde beendet. Der letzte Release von Sahara und "
"seinen Plugins der Python 2.7 unterstützt ist OpenStack Train. Die minimal "
"Python Version welche von Sahara und seinen Plugins unterstützt wird, ist "
"Python 3.6."
msgid "Sahara Ambari Plugin Release Notes"
msgstr "Sahara Ambari Plugin Release Notes"
msgid "Stein Series Release Notes"
msgstr "Stein Serie Releasenotes"
msgid "Train Series Release Notes"
msgstr "Train Serie Releasenotes"
msgid "Upgrade Notes"
msgstr "Aktualisierungsnotizen"
msgid "Ussuri Series Release Notes"
msgstr "Ussuri Serie Releasenotes"

View File

@@ -1,58 +0,0 @@
# Andi Chandler <andi@gowling.com>, 2020. #zanata
msgid ""
msgstr ""
"Project-Id-Version: sahara-plugin-ambari\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-04-26 20:52+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2020-05-02 09:30+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en_GB\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid "1.0.0"
msgstr "1.0.0"
msgid "3.0.0.0rc1"
msgstr "3.0.0.0rc1"
msgid "Bug Fixes"
msgstr "Bug Fixes"
msgid "Current Series Release Notes"
msgstr "Current Series Release Notes"
msgid ""
"Fixed several bugs which prevented sahara-image-pack from generating Ambari-"
"based Ubuntu images."
msgstr ""
"Fixed several bugs which prevented sahara-image-pack from generating Ambari-"
"based Ubuntu images."
msgid ""
"Python 2.7 support has been dropped. Last release of sahara and its plugins "
"to support python 2.7 is OpenStack Train. The minimum version of Python now "
"supported by sahara and its plugins is Python 3.6."
msgstr ""
"Python 2.7 support has been dropped. Last release of sahara and its plugins "
"to support python 2.7 is OpenStack Train. The minimum version of Python now "
"supported by sahara and its plugins is Python 3.6."
msgid "Sahara Ambari Plugin Release Notes"
msgstr "Sahara Ambari Plugin Release Notes"
msgid "Stein Series Release Notes"
msgstr "Stein Series Release Notes"
msgid "Train Series Release Notes"
msgstr "Train Series Release Notes"
msgid "Upgrade Notes"
msgstr "Upgrade Notes"
msgid "Ussuri Series Release Notes"
msgstr "Ussuri Series Release Notes"

View File

@@ -1,37 +0,0 @@
# Surit Aryal <aryalsurit@gmail.com>, 2019. #zanata
msgid ""
msgstr ""
"Project-Id-Version: sahara-plugin-ambari\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2019-07-23 14:26+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2019-08-02 09:12+0000\n"
"Last-Translator: Surit Aryal <aryalsurit@gmail.com>\n"
"Language-Team: Nepali\n"
"Language: ne\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid "1.0.0"
msgstr "१.०.०"
msgid "Bug Fixes"
msgstr "बग फिक्स"
msgid "Current Series Release Notes"
msgstr "Current Series रिलीज नोट्स"
msgid ""
"Fixed several bugs which prevented sahara-image-pack from generating Ambari-"
"based Ubuntu images."
msgstr ""
"धेरै बगहरू स्थिर गरियो जसले sahara-image-packलाई Ambari-based Ubuntu छविहरू "
"उत्पादन गर्नबाट रोक्छ।"
msgid "Sahara Ambari Plugin Release Notes"
msgstr "Sahara Ambari प्लगइन रिलीज नोट्स"
msgid "Stein Series Release Notes"
msgstr "Stein Series रिलीज नोट्स"

View File

@@ -1,34 +0,0 @@
# Rodrigo Loures <rmoraesloures@gmail.com>, 2019. #zanata
msgid ""
msgstr ""
"Project-Id-Version: sahara-plugin-ambari\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2019-04-22 11:43+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2019-04-18 09:33+0000\n"
"Last-Translator: Rodrigo Loures <rmoraesloures@gmail.com>\n"
"Language-Team: Portuguese (Brazil)\n"
"Language: pt_BR\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
msgid "Bug Fixes"
msgstr "Correção de erros"
msgid "Current Series Release Notes"
msgstr "Atual - Série de Notas de Versão"
msgid ""
"Fixed several bugs which prevented sahara-image-pack from generating Ambari-"
"based Ubuntu images."
msgstr ""
"Correção de alguns erros aos quais impediam sahara-image-pack de gerar "
"imagens Ubuntu baseadas em Ambari. "
msgid "Sahara Ambari Plugin Release Notes"
msgstr "Notas de versão do plugin Sahara Ambari"
msgid "Stein Series Release Notes"
msgstr "Notas de versão da Série Stein"

View File

@@ -1,6 +0,0 @@
===================================
Stein Series Release Notes
===================================
.. release-notes::
:branch: stable/stein

View File

@@ -1,6 +0,0 @@
==========================
Train Series Release Notes
==========================
.. release-notes::
:branch: stable/train

View File

@@ -1,5 +0,0 @@
==============================
Current Series Release Notes
==============================
.. release-notes::

View File

@@ -1,6 +0,0 @@
===========================
Ussuri Series Release Notes
===========================
.. release-notes::
:branch: stable/ussuri

View File

@@ -1,6 +0,0 @@
=============================
Victoria Series Release Notes
=============================
.. release-notes::
:branch: stable/victoria

View File

@@ -1,6 +0,0 @@
============================
Wallaby Series Release Notes
============================
.. release-notes::
:branch: stable/wallaby

View File

@@ -1,6 +0,0 @@
=========================
Xena Series Release Notes
=========================
.. release-notes::
:branch: stable/xena

View File

@@ -1,6 +0,0 @@
=========================
Yoga Series Release Notes
=========================
.. release-notes::
:branch: stable/yoga

View File

@@ -1,6 +0,0 @@
========================
Zed Series Release Notes
========================
.. release-notes::
:branch: stable/zed

View File

@@ -1,18 +0,0 @@
# Requirements lower bounds listed here are our best effort to keep them up to
# date but we do not test them so no guarantee of having them all correct. If
# you find any incorrect lower bounds, let us know or propose a fix.
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr!=2.1.0,>=2.0.0 # Apache-2.0
Babel!=2.4.0,>=2.3.4 # BSD
eventlet>=0.26.0 # MIT
oslo.i18n>=3.15.3 # Apache-2.0
oslo.log>=3.36.0 # Apache-2.0
oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
oslo.utils>=3.33.0 # Apache-2.0
requests>=2.14.2 # Apache-2.0
sahara>=10.0.0.0b1

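Each line in the requirements file above is a PEP 508 requirement with PEP 440 version specifiers (e.g. `pbr!=2.1.0,>=2.0.0`). As a minimal illustrative sketch (not how pip actually parses these), a line can be split into its distribution name and specifier set like so:

```python
import re

def split_requirement(line):
    # Drop the trailing license comment, then separate the distribution
    # name from its PEP 440 version specifiers.
    line = line.split('#', 1)[0].strip()
    m = re.match(r'^([A-Za-z0-9._-]+)\s*(.*)$', line)
    return m.group(1), m.group(2).strip()

print(split_requirement("pbr!=2.1.0,>=2.0.0  # Apache-2.0"))
# -> ('pbr', '!=2.1.0,>=2.0.0')
```

Real tooling should use pip or the `packaging` library; this sketch only shows the shape of the lines being removed.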
View File

@@ -1,26 +0,0 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# It's based on oslo.i18n usage in OpenStack Keystone project and
# recommendations from https://docs.openstack.org/oslo.i18n/latest/
# user/usage.html
import oslo_i18n
_translators = oslo_i18n.TranslatorFactory(domain='sahara_plugin_ambari')
# The primary translation function using the well-known name "_"
_ = _translators.primary

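The i18n module above exports `_` as the plugin-wide translation marker. oslo.i18n ultimately wraps Python's stdlib gettext; a minimal sketch of the equivalent pattern (with fallback when no catalog is installed) looks like:

```python
import gettext

# With no compiled catalog available, gettext falls back to returning
# the original message, which is also oslo.i18n's behaviour.
t = gettext.translation('sahara_plugin_ambari', fallback=True)
_ = t.gettext

def describe(count):
    # Strings wrapped in _() are what extraction tools collect into
    # .po catalogs like the ones removed in this commit.
    return _("Found %d host(s)") % count

print(describe(3))  # -> Found 3 host(s)
```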
View File

@@ -1,215 +0,0 @@
# Andreas Jaeger <jaegerandi@gmail.com>, 2019. #zanata
msgid ""
msgstr ""
"Project-Id-Version: sahara-plugin-ambari VERSION\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2019-09-20 17:28+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2019-09-25 06:06+0000\n"
"Last-Translator: Andreas Jaeger <jaegerandi@gmail.com>\n"
"Language-Team: German\n"
"Language: de\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "%(problem)s: %(description)s"
msgstr "%(problem)s: %(description)s"
# auto translated by TM merge from project: sahara, version: master, DocId: sahara/locale/sahara
msgid "0 or 1"
msgstr "0 oder 1"
# auto translated by TM merge from project: sahara, version: master, DocId: sahara/locale/sahara
msgid "1 or more"
msgstr "1 oder mehr"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "3 or more. Odd number"
msgstr "3 oder mehr. Ungerade Zahl"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Add Hadoop Swift jar to instances"
msgstr "Füge Hadoop Swift-Jar zu Instanzen hinzu"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Add new hosts"
msgstr "Fügen Sie neue Hosts hinzu"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "Ambari Monitor has responded that cluster has %(red)d critical alert(s)"
msgstr ""
"Ambari Monitor hat geantwortet, dass der Cluster %(red)d kritische Alarme hat"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid ""
"Ambari Monitor has responded that cluster has %(red)d critical and "
"%(yellow)d warning alert(s)"
msgstr ""
"Ambari Monitor hat geantwortet, dass der Cluster %(red)d kritisch und "
"%(yellow)d Warnmeldung(en) hat"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "Ambari Monitor has responded that cluster has %d warning alert(s)"
msgstr ""
"Ambari Monitor hat geantwortet, dass der Cluster %d-Warnmeldung(en) enthält"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Ambari Monitor is healthy"
msgstr "Ambari Monitor ist gesund"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Ambari plugin of {base} or higher required to run {type} jobs"
msgstr ""
"Ambari-Plugin von {base} oder höher, das zum Ausführen von {type} Jobs "
"erforderlich ist"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "Ambari request in %s state"
msgstr "Ambari Anfrage in %s Zustand"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "At least 3 JournalNodes are required for HA"
msgstr "Mindestens 3 JournalNodes sind für HA erforderlich"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "At least 3 ZooKeepers are required for HA"
msgstr "Für HA sind mindestens 3 ZooKeeper erforderlich"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Can't get response from Ambari Monitor"
msgstr "Antwort von Ambari Monitor nicht möglich"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Cleanup config groups"
msgstr "Konfigurationsgruppen bereinigen"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Configure rack awareness"
msgstr "Rack-Erkennung konfigurieren"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Create Ambari blueprint"
msgstr "Erstellen Sie Ambari Blueprint"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Decommission NodeManagers and DataNodes"
msgstr "NodeManagers und DataNodes außer Betrieb setzen"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Enable HBase RegionServer HA"
msgstr "Aktivieren Sie HBase RegionServer HA"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Enable NameNode HA"
msgstr "Aktivieren Sie NameNode HA"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Enable ResourceManager HA"
msgstr "Aktivieren Sie ResourceManager HA"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Generate config groups"
msgstr "Generieren Sie Konfigurationsgruppen"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Install services on hosts"
msgstr "Installiere Dienste auf Hosts"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "No alerts found"
msgstr "Keine Alarme gefunden"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Odd number"
msgstr "Ungerade Zahl"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Odd number of JournalNodes are required for HA"
msgstr "Eine ungerade Anzahl von JournalNodes ist für HA erforderlich"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Odd number of ZooKeepers are required for HA"
msgstr "Für HA ist eine ungerade Anzahl von ZooKeepern erforderlich"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Prepare Hive"
msgstr "Bereite Hive vor"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Regenerate keytabs for Kerberos"
msgstr "Generieren Sie Keytabs für Kerberos neu"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Remove hosts"
msgstr "Entferne Hosts"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Restart HDFS and MAPREDUCE2 services"
msgstr "Starte die HDFS- und MAPREDUCE2-Dienste neu"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Restart NameNodes and ResourceManagers"
msgstr "Starte NameNodes und ResourceManagers neu"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Restart of ambari-agent is needed for host {}, reason: {}"
msgstr "Neustart von ambari-agent wird für Host {} benötigt, Grund: {}"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Set up Ambari agents"
msgstr "Richten Sie Ambari-Agenten ein"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Set up Ambari management console"
msgstr "Richten Sie die Ambari-Verwaltungskonsole ein"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Set up HDP repositories"
msgstr "Richten Sie HDP-Repositorys ein"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "Some Ambari request(s) not in COMPLETED state: %(description)s."
msgstr ""
"Einige Ambari-Anfragen sind nicht im Status COMPLETED: %(description)s."
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Start cluster"
msgstr "Cluster starten"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Start services on hosts"
msgstr "Starte Dienste auf Hosts"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid ""
"The Ambari Sahara plugin provides the ability to launch clusters with "
"Hortonworks Data Platform (HDP) on OpenStack using Apache Ambari"
msgstr ""
"Das Ambari Sahara-Plugin bietet die Möglichkeit, Cluster mit Hortonworks "
"Data Platform (HDP) auf OpenStack mit Apache Ambari zu starten"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Update default Ambari password"
msgstr "Aktualisieren Sie das Standard-Ambari-Passwort"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Wait Ambari accessible"
msgstr "Warte auf Ambari zugänglich"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
msgid "Wait registration of hosts"
msgstr "Warten Sie die Registrierung der Hosts"
# auto translated by TM merge from project: sahara, version: stable-queens, DocId: sahara/locale/sahara
#, python-format
msgid "request %(id)d: %(name)s - in status %(status)s"
msgstr "Anfrage %(id)d: %(name)s - in Status %(status)s"

View File

@@ -1,166 +0,0 @@
# Andi Chandler <andi@gowling.com>, 2020. #zanata
msgid ""
msgstr ""
"Project-Id-Version: sahara-plugin-ambari VERSION\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2020-04-26 20:52+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2020-05-02 09:33+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en_GB\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
#, python-format
msgid "%(problem)s: %(description)s"
msgstr "%(problem)s: %(description)s"
msgid "0 or 1"
msgstr "0 or 1"
msgid "1 or more"
msgstr "1 or more"
msgid "3 or more. Odd number"
msgstr "3 or more. Odd number"
msgid "Add Hadoop Swift jar to instances"
msgstr "Add Hadoop Swift jar to instances"
msgid "Add new hosts"
msgstr "Add new hosts"
#, python-format
msgid "Ambari Monitor has responded that cluster has %(red)d critical alert(s)"
msgstr ""
"Ambari Monitor has responded that cluster has %(red)d critical alert(s)"
#, python-format
msgid ""
"Ambari Monitor has responded that cluster has %(red)d critical and "
"%(yellow)d warning alert(s)"
msgstr ""
"Ambari Monitor has responded that cluster has %(red)d critical and "
"%(yellow)d warning alert(s)"
#, python-format
msgid "Ambari Monitor has responded that cluster has %d warning alert(s)"
msgstr "Ambari Monitor has responded that cluster has %d warning alert(s)"
msgid "Ambari Monitor is healthy"
msgstr "Ambari Monitor is healthy"
msgid "Ambari plugin of {base} or higher required to run {type} jobs"
msgstr "Ambari plugin of {base} or higher required to run {type} jobs"
#, python-format
msgid "Ambari request in %s state"
msgstr "Ambari request in %s state"
msgid "At least 3 JournalNodes are required for HA"
msgstr "At least 3 JournalNodes are required for HA"
msgid "At least 3 ZooKeepers are required for HA"
msgstr "At least 3 ZooKeepers are required for HA"
msgid "Can't get response from Ambari Monitor"
msgstr "Can't get response from Ambari Monitor"
msgid "Cleanup config groups"
msgstr "Cleanup config groups"
msgid "Configure rack awareness"
msgstr "Configure rack awareness"
msgid "Create Ambari blueprint"
msgstr "Create Ambari blueprint"
msgid "Decommission NodeManagers and DataNodes"
msgstr "Decommission NodeManagers and DataNodes"
msgid "Enable HBase RegionServer HA"
msgstr "Enable HBase RegionServer HA"
msgid "Enable NameNode HA"
msgstr "Enable NameNode HA"
msgid "Enable ResourceManager HA"
msgstr "Enable ResourceManager HA"
msgid "Generate config groups"
msgstr "Generate config groups"
msgid "Install services on hosts"
msgstr "Install services on hosts"
msgid "No alerts found"
msgstr "No alerts found"
msgid "Odd number"
msgstr "Odd number"
msgid "Odd number of JournalNodes are required for HA"
msgstr "Odd number of JournalNodes are required for HA"
msgid "Odd number of ZooKeepers are required for HA"
msgstr "Odd number of ZooKeepers are required for HA"
msgid "Prepare Hive"
msgstr "Prepare Hive"
msgid "Regenerate keytabs for Kerberos"
msgstr "Regenerate keytabs for Kerberos"
msgid "Remove hosts"
msgstr "Remove hosts"
msgid "Restart HDFS and MAPREDUCE2 services"
msgstr "Restart HDFS and MAPREDUCE2 services"
msgid "Restart NameNodes and ResourceManagers"
msgstr "Restart NameNodes and ResourceManagers"
msgid "Restart of ambari-agent is needed for host {}, reason: {}"
msgstr "Restart of ambari-agent is needed for host {}, reason: {}"
msgid "Set up Ambari agents"
msgstr "Set up Ambari agents"
msgid "Set up Ambari management console"
msgstr "Set up Ambari management console"
msgid "Set up HDP repositories"
msgstr "Set up HDP repositories"
#, python-format
msgid "Some Ambari request(s) not in COMPLETED state: %(description)s."
msgstr "Some Ambari request(s) not in COMPLETED state: %(description)s."
msgid "Start cluster"
msgstr "Start cluster"
msgid "Start services on hosts"
msgstr "Start services on hosts"
msgid ""
"The Ambari Sahara plugin provides the ability to launch clusters with "
"Hortonworks Data Platform (HDP) on OpenStack using Apache Ambari"
msgstr ""
"The Ambari Sahara plugin provides the ability to launch clusters with "
"Hortonworks Data Platform (HDP) on OpenStack using Apache Ambari"
msgid "Update default Ambari password"
msgstr "Update default Ambari password"
msgid "Wait Ambari accessible"
msgstr "Wait Ambari accessible"
msgid "Wait registration of hosts"
msgstr "Wait registration of hosts"
#, python-format
msgid "request %(id)d: %(name)s - in status %(status)s"
msgstr "request %(id)d: %(name)s - in status %(status)s"

View File

@@ -1,169 +0,0 @@
# suhartono <cloudsuhartono@gmail.com>, 2019. #zanata
msgid ""
msgstr ""
"Project-Id-Version: sahara-plugin-ambari VERSION\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2019-09-30 09:30+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2019-10-06 02:53+0000\n"
"Last-Translator: suhartono <cloudsuhartono@gmail.com>\n"
"Language-Team: Indonesian\n"
"Language: id\n"
"X-Generator: Zanata 4.3.3\n"
"Plural-Forms: nplurals=1; plural=0\n"
#, python-format
msgid "%(problem)s: %(description)s"
msgstr "%(problem)s: %(description)s"
msgid "0 or 1"
msgstr "0 or 1"
msgid "1 or more"
msgstr "1 atau lebih"
msgid "3 or more. Odd number"
msgstr "3 atau lebih. Angka ganjil"
msgid "Add Hadoop Swift jar to instances"
msgstr "Tambahkan jar Hadoop Swift ke instance"
msgid "Add new hosts"
msgstr "Tambahkan host baru"
#, python-format
msgid "Ambari Monitor has responded that cluster has %(red)d critical alert(s)"
msgstr ""
"Ambari Monitor has responded that cluster has %(red)d critical alert(s)"
#, python-format
msgid ""
"Ambari Monitor has responded that cluster has %(red)d critical and "
"%(yellow)d warning alert(s)"
msgstr ""
"Ambari Monitor telah merespons bahwa cluster telah %(red)d critical dan "
"%(yellow)d warning alert(s)"
#, python-format
msgid "Ambari Monitor has responded that cluster has %d warning alert(s)"
msgstr "Ambari Monitor telah merespons bahwa cluster telah %d warning alert(s)"
msgid "Ambari Monitor is healthy"
msgstr "Ambari Monitor sehat"
msgid "Ambari plugin of {base} or higher required to run {type} jobs"
msgstr ""
"Plugin Ambari dari {base} atau lebih tinggi diperlukan untuk menjalankan "
"jobs {type}"
#, python-format
msgid "Ambari request in %s state"
msgstr "Ambari meminta dalam %s state"
msgid "At least 3 JournalNodes are required for HA"
msgstr "Setidaknya 3 JournalNodes diperlukan untuk HA"
msgid "At least 3 ZooKeepers are required for HA"
msgstr "Setidaknya 3 ZooKeepers diperlukan untuk HA"
msgid "Can't get response from Ambari Monitor"
msgstr "Tidak dapat mendapat respons dari Ambari Monitor"
msgid "Cleanup config groups"
msgstr "Bersihkan grup konfigurasi"
msgid "Configure rack awareness"
msgstr "Konfigurasikan rack awareness"
msgid "Create Ambari blueprint"
msgstr "Buat cetak biru Ambari"
msgid "Decommission NodeManagers and DataNodes"
msgstr "Decommission NodeManagers dan DataNodes"
msgid "Enable HBase RegionServer HA"
msgstr "Aktifkan HBase RegionServer HA"
msgid "Enable NameNode HA"
msgstr "Aktifkan NameNode HA"
msgid "Enable ResourceManager HA"
msgstr "Aktifkan ResourceManager HA"
msgid "Generate config groups"
msgstr "Hasilkan grup konfigurasi"
msgid "Install services on hosts"
msgstr "Instal layanan di host"
msgid "No alerts found"
msgstr "Tidak ada lansiran (alerts) yang ditemukan"
msgid "Odd number"
msgstr "Angka ganjil"
msgid "Odd number of JournalNodes are required for HA"
msgstr "Jumlah Aneh JournalNodes diperlukan untuk HA"
msgid "Odd number of ZooKeepers are required for HA"
msgstr "Angka ganjil dari ZooKeepers diperlukan untuk HA"
msgid "Prepare Hive"
msgstr "Siapkan Hive"
msgid "Regenerate keytabs for Kerberos"
msgstr "Regenerasi keytabs untuk Kerberos"
msgid "Remove hosts"
msgstr "Hapus host"
msgid "Restart HDFS and MAPREDUCE2 services"
msgstr "Restart layanan HDFS dan MAPREDUCE2"
msgid "Restart NameNodes and ResourceManagers"
msgstr "Restart NameNodes dan ResourceManagers"
msgid "Restart of ambari-agent is needed for host {}, reason: {}"
msgstr "Restart agen ambari diperlukan untuk host {}, reason: {}"
msgid "Set up Ambari agents"
msgstr "Menyiapkan agen Ambari"
msgid "Set up Ambari management console"
msgstr "Siapkan konsol manajemen Ambari"
msgid "Set up HDP repositories"
msgstr "Siapkan repositori HDP"
#, python-format
msgid "Some Ambari request(s) not in COMPLETED state: %(description)s."
msgstr ""
"Beberapa permintaan Ambari tidak dalam keadaan COMPLETED: %(description)s."
msgid "Start cluster"
msgstr "Mulai cluster"
msgid "Start services on hosts"
msgstr "Mulai layanan di host"
msgid ""
"The Ambari Sahara plugin provides the ability to launch clusters with "
"Hortonworks Data Platform (HDP) on OpenStack using Apache Ambari"
msgstr ""
"Plugin Ambari Sahara menyediakan kemampuan untuk meluncurkan cluster dengan "
"Hortonworks Data Platform (HDP) di OpenStack menggunakan Apache Ambari"
msgid "Update default Ambari password"
msgstr "Perbarui kata sandi Ambari standar"
msgid "Wait Ambari accessible"
msgstr "Tunggu Ambari dapat diakses"
msgid "Wait registration of hosts"
msgstr "Tunggu pendaftaran host"
#, python-format
msgid "request %(id)d: %(name)s - in status %(status)s"
msgstr "permintaan %(id)d: %(name)s - dalam status %(status)s"

View File

@@ -1,363 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging
from oslo_serialization import jsonutils
from requests import auth
from sahara.plugins import context
from sahara.plugins import exceptions as p_exc
from sahara_plugin_ambari.i18n import _
from sahara_plugin_ambari.plugins.ambari import requests_helper as r_helper
LOG = logging.getLogger(__name__)
class AmbariNotFound(Exception):
pass
class AmbariClient(object):
def __init__(self, instance, port="8080", **kwargs):
kwargs.setdefault("username", "admin")
kwargs.setdefault("password", "admin")
self._port = port
self._base_url = "http://{host}:{port}/api/v1".format(
host=instance.management_ip, port=port)
self._instance = instance
self._http_client = instance.remote().get_http_client(port)
self._headers = {"X-Requested-By": "sahara"}
self._auth = auth.HTTPBasicAuth(kwargs["username"], kwargs["password"])
self._default_client_args = {"verify": False, "auth": self._auth,
"headers": self._headers}
def __enter__(self):
return self
def __exit__(self, type, value, traceback):
self.close()
def close(self):
self._instance.remote().close_http_session(self._port)
def get(self, *args, **kwargs):
kwargs.update(self._default_client_args)
return self._http_client.get(*args, **kwargs)
def post(self, *args, **kwargs):
kwargs.update(self._default_client_args)
return self._http_client.post(*args, **kwargs)
def put(self, *args, **kwargs):
kwargs.update(self._default_client_args)
return self._http_client.put(*args, **kwargs)
def delete(self, *args, **kwargs):
kwargs.update(self._default_client_args)
return self._http_client.delete(*args, **kwargs)
def get_alerts_data(self, cluster):
url = self._base_url + "/clusters/%s/alerts?fields=*" % cluster.name
resp = self.get(url)
data = self.check_response(resp)
return data.get('items', [])
@staticmethod
def check_response(resp, handle_not_found=False):
if handle_not_found and resp.status_code == 404:
raise AmbariNotFound()
resp.raise_for_status()
if resp.text:
return jsonutils.loads(resp.text)
@staticmethod
def req_id(response):
if not response.text:
raise p_exc.HadoopProvisionError("Cannot find request id. "
"No response body")
body = jsonutils.loads(response.text)
if "Requests" not in body or "id" not in body["Requests"]:
raise p_exc.HadoopProvisionError("Cannot find request id. "
"Unexpected response format")
return body["Requests"]["id"]
def import_credential(self, cl_name, alias, data):
url = self._base_url + "/clusters/%s/credentials/%s" % (cl_name, alias)
resp = self.post(url, data=jsonutils.dumps(data))
self.check_response(resp)
def get_credential(self, cl_name, alias):
url = self._base_url + "/clusters/%s/credentials/%s" % (cl_name, alias)
resp = self.get(url)
self.check_response(resp, handle_not_found=True)
def regenerate_keytabs(self, cl_name):
url = (self._base_url +
"/clusters/%s?regenerate_keytabs=missing" % cl_name)
data = jsonutils.dumps({"Clusters": {"security_type": "KERBEROS"}})
resp = self.put(url, data=data)
self.check_response(resp)
return self.req_id(resp)
def get_registered_hosts(self):
url = self._base_url + "/hosts"
resp = self.get(url)
data = self.check_response(resp)
return data.get("items", [])
def get_host_info(self, host):
url = self._base_url + "/hosts/%s" % host
resp = self.get(url)
data = self.check_response(resp)
return data.get("Hosts", {})
def update_user_password(self, user, old_password, new_password):
url = self._base_url + "/users/%s" % user
data = jsonutils.dumps({
"Users": {
"old_password": old_password,
"password": new_password
}
})
resp = self.put(url, data=data)
self.check_response(resp)
def create_blueprint(self, name, data):
url = self._base_url + "/blueprints/%s" % name
resp = self.post(url, data=jsonutils.dumps(data))
return self.check_response(resp)
def create_cluster(self, name, data):
url = self._base_url + "/clusters/%s" % name
resp = self.post(url, data=jsonutils.dumps(data))
return self.check_response(resp).get("Requests")
def add_host_to_cluster(self, instance):
cluster_name = instance.cluster.name
hostname = instance.fqdn()
url = self._base_url + "/clusters/{cluster}/hosts/{hostname}".format(
cluster=cluster_name, hostname=hostname)
resp = self.post(url)
self.check_response(resp)
def get_config_groups(self, cluster):
url = self._base_url + "/clusters/%s/config_groups" % cluster.name
resp = self.get(url)
return self.check_response(resp)
def get_detailed_config_group(self, cluster, cfg_id):
url = self._base_url + "/clusters/%s/config_groups/%s" % (
cluster.name, cfg_id)
resp = self.get(url)
return self.check_response(resp)
def remove_config_group(self, cluster, cfg_id):
url = self._base_url + "/clusters/%s/config_groups/%s" % (
cluster.name, cfg_id)
resp = self.delete(url)
return self.check_response(resp)
def create_config_group(self, cluster, data):
url = self._base_url + "/clusters/%s/config_groups" % cluster.name
resp = self.post(url, data=jsonutils.dumps(data))
return self.check_response(resp)
def add_service_to_host(self, inst, service):
url = "{pref}/clusters/{cluster}/hosts/{host}/host_components/{proc}"
url = url.format(pref=self._base_url, cluster=inst.cluster.name,
host=inst.fqdn(), proc=service)
self.check_response(self.post(url))
def start_service_on_host(self, inst, service, final_state):
url = "{pref}/clusters/{cluster}/hosts/{host}/host_components/{proc}"
url = url.format(
pref=self._base_url, cluster=inst.cluster.name, host=inst.fqdn(),
proc=service)
data = {
'HostRoles': {
'state': final_state
},
'RequestInfo': {
'context': "Starting service {service}, moving to state "
"{state}".format(service=service, state=final_state)
}
}
resp = self.put(url, data=jsonutils.dumps(data))
self.check_response(resp)
        # return req_id so callers can poll this request's progress
return self.req_id(resp)
def decommission_nodemanagers(self, cluster_name, instances):
url = self._base_url + "/clusters/%s/requests" % cluster_name
data = r_helper.build_nodemanager_decommission_request(cluster_name,
instances)
resp = self.post(url, data=jsonutils.dumps(data))
self.wait_ambari_request(self.req_id(resp), cluster_name)
def decommission_datanodes(self, cluster_name, instances):
url = self._base_url + "/clusters/%s/requests" % cluster_name
data = r_helper.build_datanode_decommission_request(cluster_name,
instances)
resp = self.post(url, data=jsonutils.dumps(data))
self.wait_ambari_request(self.req_id(resp), cluster_name)
def remove_process_from_host(self, cluster_name, instance, process):
url = self._base_url + "/clusters/%s/hosts/%s/host_components/%s" % (
cluster_name, instance.fqdn(), process)
resp = self.delete(url)
return self.check_response(resp)
def stop_process_on_host(self, cluster_name, instance, process):
url = self._base_url + "/clusters/%s/hosts/%s/host_components/%s" % (
cluster_name, instance.fqdn(), process)
check_installed_resp = self.check_response(self.get(url))
if check_installed_resp["HostRoles"]["state"] != "INSTALLED":
data = {"HostRoles": {"state": "INSTALLED"},
"RequestInfo": {"context": "Stopping %s" % process}}
resp = self.put(url, data=jsonutils.dumps(data))
self.wait_ambari_request(self.req_id(resp), cluster_name)
def restart_namenode(self, cluster_name, instance):
url = self._base_url + "/clusters/%s/requests" % cluster_name
data = r_helper.build_namenode_restart_request(cluster_name, instance)
resp = self.post(url, data=jsonutils.dumps(data))
self.wait_ambari_request(self.req_id(resp), cluster_name)
def restart_resourcemanager(self, cluster_name, instance):
url = self._base_url + "/clusters/%s/requests" % cluster_name
data = r_helper.build_resourcemanager_restart_request(cluster_name,
instance)
resp = self.post(url, data=jsonutils.dumps(data))
self.wait_ambari_request(self.req_id(resp), cluster_name)
def restart_service(self, cluster_name, service_name):
url = self._base_url + "/clusters/{}/services/{}".format(
cluster_name, service_name)
data = r_helper.build_stop_service_request(service_name)
resp = self.put(url, data=jsonutils.dumps(data))
self.wait_ambari_request(self.req_id(resp), cluster_name)
data = r_helper.build_start_service_request(service_name)
resp = self.put(url, data=jsonutils.dumps(data))
self.wait_ambari_request(self.req_id(resp), cluster_name)
def delete_host(self, cluster_name, instance):
url = self._base_url + "/clusters/%s/hosts/%s" % (cluster_name,
instance.fqdn())
resp = self.delete(url)
return self.check_response(resp)
def check_request_status(self, cluster_name, req_id):
url = self._base_url + "/clusters/%s/requests/%d" % (cluster_name,
req_id)
resp = self.get(url)
return self.check_response(resp).get("Requests")
def list_host_processes(self, cluster_name, instance):
url = self._base_url + "/clusters/%s/hosts/%s" % (
cluster_name, instance.fqdn())
resp = self.get(url)
body = jsonutils.loads(resp.text)
procs = [p["HostRoles"]["component_name"]
for p in body["host_components"]]
return procs
def set_up_mirror(self, stack_version, os_type, repo_id, repo_url):
url = self._base_url + (
"/stacks/HDP/versions/%s/operating_systems/%s/repositories/%s") % (
stack_version, os_type, repo_id)
data = {
"Repositories": {
"base_url": repo_url,
"verify_base_url": True
}
}
resp = self.put(url, data=jsonutils.dumps(data))
self.check_response(resp)
def set_rack_info_for_instance(self, cluster_name, instance, rack_name):
url = self._base_url + "/clusters/%s/hosts/%s" % (
cluster_name, instance.fqdn())
data = {
"Hosts": {
"rack_info": rack_name
}
}
resp = self.put(url, data=jsonutils.dumps(data))
self.check_response(resp)
def get_request_info(self, cluster_name, request_id):
url = self._base_url + ("/clusters/%s/requests/%s" %
(cluster_name, request_id))
resp = self.check_response(self.get(url))
return resp.get('Requests')
def wait_ambari_requests(self, requests, cluster_name):
requests = set(requests)
failed = []
context.sleep(20)
while len(requests) > 0:
completed, not_completed = set(), set()
for req_id in requests:
request = self.get_request_info(cluster_name, req_id)
status = request.get("request_status")
if status == 'COMPLETED':
completed.add(req_id)
elif status in ['IN_PROGRESS', 'PENDING']:
not_completed.add(req_id)
else:
failed.append(request)
if failed:
msg = _("Some Ambari request(s) "
"not in COMPLETED state: %(description)s.")
descrs = []
for req in failed:
descr = _(
"request %(id)d: %(name)s - in status %(status)s")
descrs.append(descr %
{'id': req.get("id"),
'name': req.get("request_context"),
'status': req.get("request_status")})
raise p_exc.HadoopProvisionError(msg % {'description': descrs})
requests = not_completed
context.sleep(5)
LOG.debug("Waiting for %d ambari request(s) to be completed",
len(not_completed))
LOG.debug("All ambari requests have been completed")
def wait_ambari_request(self, request_id, cluster_name):
context.sleep(20)
while True:
status = self.check_request_status(cluster_name, request_id)
LOG.debug("Task %(context)s in %(status)s state. "
"Completed %(percent).1f%%",
{'context': status["request_context"],
'status': status["request_status"],
'percent': status["progress_percent"]})
if status["request_status"] == "COMPLETED":
return
if status["request_status"] in ["IN_PROGRESS", "PENDING"]:
context.sleep(5)
else:
raise p_exc.HadoopProvisionError(
_("Ambari request in %s state") % status["request_status"])
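A standalone sketch of the request-id parsing that `AmbariClient.req_id` performs, using a hypothetical `FakeResponse` stand-in instead of the sahara HTTP client (the names `FakeResponse` and the use of `ValueError` are illustrative only, not part of the plugin):

```python
import json


class FakeResponse:
    """Minimal stand-in for a requests.Response coming back from Ambari."""
    def __init__(self, text):
        self.text = text


def req_id(response):
    # Mirrors AmbariClient.req_id: Ambari returns the asynchronous
    # request handle under body["Requests"]["id"].
    if not response.text:
        raise ValueError("Cannot find request id. No response body")
    body = json.loads(response.text)
    if "Requests" not in body or "id" not in body["Requests"]:
        raise ValueError("Cannot find request id. Unexpected response format")
    return body["Requests"]["id"]


resp = FakeResponse(json.dumps({"Requests": {"id": 17, "status": "Accepted"}}))
print(req_id(resp))  # -> 17
```

The real client raises `p_exc.HadoopProvisionError` here; `ValueError` keeps the sketch dependency-free. The returned id is what `wait_ambari_request` then polls until the request reaches COMPLETED.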


@ -1,155 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import kerberos
# define service names
AMBARI_SERVICE = "Ambari"
FALCON_SERVICE = "Falcon"
FLUME_SERVICE = "Flume"
HBASE_SERVICE = "HBase"
HDFS_SERVICE = "HDFS"
HIVE_SERVICE = "Hive"
KAFKA_SERVICE = "Kafka"
KNOX_SERVICE = "Knox"
MAPREDUCE2_SERVICE = "MAPREDUCE2"
OOZIE_SERVICE = "Oozie"
RANGER_SERVICE = "Ranger"
SLIDER_SERVICE = "Slider"
SPARK_SERVICE = "Spark"
SQOOP_SERVICE = "Sqoop"
STORM_SERVICE = "Storm"
YARN_SERVICE = "YARN"
ZOOKEEPER_SERVICE = "ZooKeeper"
# define process names
AMBARI_SERVER = "Ambari"
APP_TIMELINE_SERVER = "YARN Timeline Server"
DATANODE = "DataNode"
DRPC_SERVER = "DRPC Server"
FALCON_SERVER = "Falcon Server"
FLUME_HANDLER = "Flume"
HBASE_MASTER = "HBase Master"
HBASE_REGIONSERVER = "HBase RegionServer"
HISTORYSERVER = "MapReduce History Server"
HIVE_METASTORE = "Hive Metastore"
HIVE_SERVER = "HiveServer"
KAFKA_BROKER = "Kafka Broker"
KNOX_GATEWAY = "Knox Gateway"
NAMENODE = "NameNode"
NIMBUS = "Nimbus"
NODEMANAGER = "NodeManager"
OOZIE_SERVER = "Oozie"
RANGER_ADMIN = "Ranger Admin"
RANGER_USERSYNC = "Ranger Usersync"
RESOURCEMANAGER = "ResourceManager"
SECONDARY_NAMENODE = "SecondaryNameNode"
SLIDER = "Slider"
SPARK_JOBHISTORYSERVER = "Spark History Server"
SQOOP = "Sqoop"
STORM_UI_SERVER = "Storm UI Server"
SUPERVISOR = "Supervisor"
ZOOKEEPER_SERVER = "ZooKeeper"
JOURNAL_NODE = "JournalNode"
PROC_MAP = {
AMBARI_SERVER: ["METRICS_COLLECTOR"],
APP_TIMELINE_SERVER: ["APP_TIMELINE_SERVER"],
DATANODE: ["DATANODE"],
DRPC_SERVER: ["DRPC_SERVER"],
FALCON_SERVER: ["FALCON_SERVER"],
HBASE_MASTER: ["HBASE_MASTER"],
HBASE_REGIONSERVER: ["HBASE_REGIONSERVER"],
HISTORYSERVER: ["HISTORYSERVER"],
HIVE_METASTORE: ["HIVE_METASTORE"],
HIVE_SERVER: ["HIVE_SERVER", "MYSQL_SERVER", "WEBHCAT_SERVER"],
KAFKA_BROKER: ["KAFKA_BROKER"],
KNOX_GATEWAY: ["KNOX_GATEWAY"],
NAMENODE: ["NAMENODE"],
NIMBUS: ["NIMBUS"],
NODEMANAGER: ["NODEMANAGER"],
OOZIE_SERVER: ["OOZIE_SERVER", "PIG"],
RANGER_ADMIN: ["RANGER_ADMIN"],
RANGER_USERSYNC: ["RANGER_USERSYNC"],
RESOURCEMANAGER: ["RESOURCEMANAGER"],
SECONDARY_NAMENODE: ["SECONDARY_NAMENODE"],
SLIDER: ["SLIDER"],
SPARK_JOBHISTORYSERVER: ["SPARK_JOBHISTORYSERVER"],
SQOOP: ["SQOOP"],
STORM_UI_SERVER: ["STORM_UI_SERVER"],
SUPERVISOR: ["SUPERVISOR"],
ZOOKEEPER_SERVER: ["ZOOKEEPER_SERVER"],
JOURNAL_NODE: ["JOURNALNODE"]
}
CLIENT_MAP = {
APP_TIMELINE_SERVER: ["MAPREDUCE2_CLIENT", "YARN_CLIENT"],
DATANODE: ["HDFS_CLIENT"],
FALCON_SERVER: ["FALCON_CLIENT"],
FLUME_HANDLER: ["FLUME_HANDLER"],
HBASE_MASTER: ["HBASE_CLIENT"],
HBASE_REGIONSERVER: ["HBASE_CLIENT"],
HISTORYSERVER: ["MAPREDUCE2_CLIENT", "YARN_CLIENT"],
HIVE_METASTORE: ["HIVE_CLIENT"],
HIVE_SERVER: ["HIVE_CLIENT"],
NAMENODE: ["HDFS_CLIENT"],
NODEMANAGER: ["MAPREDUCE2_CLIENT", "YARN_CLIENT"],
OOZIE_SERVER: ["OOZIE_CLIENT", "TEZ_CLIENT"],
RESOURCEMANAGER: ["MAPREDUCE2_CLIENT", "YARN_CLIENT"],
SECONDARY_NAMENODE: ["HDFS_CLIENT"],
SPARK_JOBHISTORYSERVER: ["SPARK_CLIENT"],
ZOOKEEPER_SERVER: ["ZOOKEEPER_CLIENT"]
}
KERBEROS_CLIENT = 'KERBEROS_CLIENT'
ALL_LIST = ["METRICS_MONITOR"]
# types of HA
NAMENODE_HA = "NameNode HA"
RESOURCEMANAGER_HA = "ResourceManager HA"
HBASE_REGIONSERVER_HA = "HBase RegionServer HA"
def get_ambari_proc_list(node_group):
procs = []
for sp in node_group.node_processes:
procs.extend(PROC_MAP.get(sp, []))
return procs
def get_clients(cluster):
procs = []
for ng in cluster.node_groups:
procs.extend(ng.node_processes)
clients = []
for proc in procs:
clients.extend(CLIENT_MAP.get(proc, []))
clients = list(set(clients))
clients.extend(ALL_LIST)
if kerberos.is_kerberos_security_enabled(cluster):
clients.append(KERBEROS_CLIENT)
return clients
def instances_have_process(instances, process):
for i in instances:
if process in i.node_group.node_processes:
return True
return False
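To illustrate how `get_ambari_proc_list` and `get_clients` fold node processes through the maps above, here is a self-contained sketch with trimmed copies of the mappings and a hypothetical `StubNodeGroup` class standing in for sahara's cluster objects:

```python
# Trimmed copies of PROC_MAP / CLIENT_MAP / ALL_LIST, for illustration only.
PROC_MAP = {"NameNode": ["NAMENODE"], "DataNode": ["DATANODE"]}
CLIENT_MAP = {"NameNode": ["HDFS_CLIENT"], "DataNode": ["HDFS_CLIENT"]}
ALL_LIST = ["METRICS_MONITOR"]


class StubNodeGroup:
    """Stand-in for a sahara node group carrying its process list."""
    def __init__(self, node_processes):
        self.node_processes = node_processes


def get_ambari_proc_list(node_group):
    procs = []
    for sp in node_group.node_processes:
        procs.extend(PROC_MAP.get(sp, []))
    return procs


def get_clients(node_groups):
    procs = []
    for ng in node_groups:
        procs.extend(ng.node_processes)
    clients = []
    for proc in procs:
        clients.extend(CLIENT_MAP.get(proc, []))
    clients = list(set(clients))  # deduplicate clients shared by processes
    clients.extend(ALL_LIST)
    return clients


groups = [StubNodeGroup(["NameNode"]), StubNodeGroup(["DataNode"])]
print(get_clients(groups))  # HDFS_CLIENT appears once, plus METRICS_MONITOR
```

The real `get_clients` also appends `KERBEROS_CLIENT` when Kerberos is enabled on the cluster; that branch is omitted here since it needs the sahara kerberos helper.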


@ -1,333 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_serialization import jsonutils
from sahara.plugins import provisioning
from sahara.plugins import swift_helper
from sahara.plugins import utils
from sahara_plugin_ambari.i18n import _
from sahara_plugin_ambari.plugins.ambari import common
CONFIGS = {}
OBJ_CONFIGS = {}
CFG_PROCESS_MAP = {
"admin-properties": common.RANGER_SERVICE,
"ams-env": common.AMBARI_SERVICE,
"ams-hbase-env": common.AMBARI_SERVICE,
"ams-hbase-policy": common.AMBARI_SERVICE,
"ams-hbase-security-site": common.AMBARI_SERVICE,
"ams-hbase-site": common.AMBARI_SERVICE,
"ams-site": common.AMBARI_SERVICE,
"capacity-scheduler": common.YARN_SERVICE,
"cluster-env": "general",
"core-site": common.HDFS_SERVICE,
"falcon-env": common.FALCON_SERVICE,
"falcon-runtime.properties": common.FALCON_SERVICE,
"falcon-startup.properties": common.FALCON_SERVICE,
"flume-env": common.FLUME_SERVICE,
"gateway-site": common.KNOX_SERVICE,
"hadoop-env": common.HDFS_SERVICE,
"hadoop-policy": common.HDFS_SERVICE,
"hbase-env": common.HBASE_SERVICE,
"hbase-policy": common.HBASE_SERVICE,
"hbase-site": common.HBASE_SERVICE,
"hdfs-site": common.HDFS_SERVICE,
"hive-env": common.HIVE_SERVICE,
"hive-site": common.HIVE_SERVICE,
"hiveserver2-site": common.HIVE_SERVICE,
"kafka-broker": common.KAFKA_SERVICE,
"kafka-env": common.KAFKA_SERVICE,
"knox-env": common.KNOX_SERVICE,
"mapred-env": common.YARN_SERVICE,
"mapred-site": common.YARN_SERVICE,
"oozie-env": common.OOZIE_SERVICE,
"oozie-site": common.OOZIE_SERVICE,
"ranger-env": common.RANGER_SERVICE,
"ranger-hbase-plugin-properties": common.HBASE_SERVICE,
"ranger-hdfs-plugin-properties": common.HDFS_SERVICE,
"ranger-hive-plugin-properties": common.HIVE_SERVICE,
"ranger-knox-plugin-properties": common.KNOX_SERVICE,
"ranger-site": common.RANGER_SERVICE,
"ranger-storm-plugin-properties": common.STORM_SERVICE,
"spark-defaults": common.SPARK_SERVICE,
"spark-env": common.SPARK_SERVICE,
"sqoop-env": common.SQOOP_SERVICE,
"storm-env": common.STORM_SERVICE,
"storm-site": common.STORM_SERVICE,
"tez-site": common.OOZIE_SERVICE,
"usersync-properties": common.RANGER_SERVICE,
"yarn-env": common.YARN_SERVICE,
"yarn-site": common.YARN_SERVICE,
"zoo.cfg": common.ZOOKEEPER_SERVICE,
"zookeeper-env": common.ZOOKEEPER_SERVICE
}
SERVICES_TO_CONFIGS_MAP = None
def get_service_to_configs_map():
global SERVICES_TO_CONFIGS_MAP
if SERVICES_TO_CONFIGS_MAP:
return SERVICES_TO_CONFIGS_MAP
data = {}
for (key, item) in CFG_PROCESS_MAP.items():
if item not in data:
data[item] = []
data[item].append(key)
SERVICES_TO_CONFIGS_MAP = data
return SERVICES_TO_CONFIGS_MAP
ng_confs = [
"dfs.datanode.data.dir",
"dtnode_heapsize",
"mapreduce.map.java.opts",
"mapreduce.map.memory.mb",
"mapreduce.reduce.java.opts",
"mapreduce.reduce.memory.mb",
"mapreduce.task.io.sort.mb",
"nodemanager_heapsize",
"yarn.app.mapreduce.am.command-opts",
"yarn.app.mapreduce.am.resource.mb",
"yarn.nodemanager.resource.cpu-vcores",
"yarn.nodemanager.resource.memory-mb",
"yarn.scheduler.maximum-allocation-mb",
"yarn.scheduler.minimum-allocation-mb"
]
use_base_repos_cfg = provisioning.Config(
"Enable external repos on instances", 'general', 'cluster', priority=1,
default_value=True, config_type="bool")
hdp_repo_cfg = provisioning.Config(
"HDP repo URL", "general", "cluster", priority=1, default_value="")
hdp_utils_repo_cfg = provisioning.Config(
"HDP-UTILS repo URL", "general", "cluster", priority=1, default_value="")
autoconfigs_strategy = provisioning.Config(
"Auto-configuration strategy", 'general', 'cluster', priority=1,
config_type='dropdown',
default_value='NEVER_APPLY',
config_values=[(v, v) for v in [
'NEVER_APPLY', 'ALWAYS_APPLY', 'ONLY_STACK_DEFAULTS_APPLY',
]],
)
ambari_pkg_install_timeout = provisioning.Config(
"Ambari Agent Package Install timeout", "general", "cluster",
priority=1, default_value="1800")
def _get_service_name(service):
return CFG_PROCESS_MAP.get(service, service)
def _get_config_group(group, param, plugin_version):
if not CONFIGS or plugin_version not in CONFIGS:
load_configs(plugin_version)
for section, process in CFG_PROCESS_MAP.items():
if process == group and param in CONFIGS[plugin_version][section]:
return section
def _get_param_scope(param):
if param in ng_confs:
return "node"
else:
return "cluster"
def _get_ha_params():
enable_namenode_ha = provisioning.Config(
name=common.NAMENODE_HA,
applicable_target="general",
scope="cluster",
config_type="bool",
default_value=False,
is_optional=True,
description=_("Enable NameNode HA"),
priority=1)
enable_resourcemanager_ha = provisioning.Config(
name=common.RESOURCEMANAGER_HA,
applicable_target="general",
scope="cluster",
config_type="bool",
default_value=False,
is_optional=True,
description=_("Enable ResourceManager HA"),
priority=1)
enable_regionserver_ha = provisioning.Config(
name=common.HBASE_REGIONSERVER_HA,
applicable_target="general",
scope="cluster",
config_type="bool",
default_value=False,
is_optional=True,
description=_("Enable HBase RegionServer HA"),
priority=1)
return [enable_namenode_ha,
enable_resourcemanager_ha,
enable_regionserver_ha]
def load_configs(version):
if OBJ_CONFIGS.get(version):
return OBJ_CONFIGS[version]
cfg_path = "plugins/ambari/resources/configs-%s.json" % version
vanilla_cfg = jsonutils.loads(utils.get_file_text(cfg_path,
'sahara_plugin_ambari'))
CONFIGS[version] = vanilla_cfg
sahara_cfg = [hdp_repo_cfg, hdp_utils_repo_cfg, use_base_repos_cfg,
autoconfigs_strategy, ambari_pkg_install_timeout]
for service, confs in vanilla_cfg.items():
for k, v in confs.items():
sahara_cfg.append(provisioning.Config(
k, _get_service_name(service), _get_param_scope(k),
default_value=v))
sahara_cfg.extend(_get_ha_params())
OBJ_CONFIGS[version] = sahara_cfg
return sahara_cfg
def _get_config_value(cluster, key):
return cluster.cluster_configs.get("general", {}).get(key.name,
key.default_value)
def use_base_repos_needed(cluster):
return _get_config_value(cluster, use_base_repos_cfg)
def get_hdp_repo_url(cluster):
return _get_config_value(cluster, hdp_repo_cfg)
def get_hdp_utils_repo_url(cluster):
return _get_config_value(cluster, hdp_utils_repo_cfg)
def get_auto_configuration_strategy(cluster):
return _get_config_value(cluster, autoconfigs_strategy)
def get_ambari_pkg_install_timeout(cluster):
return _get_config_value(cluster, ambari_pkg_install_timeout)
def _serialize_ambari_configs(configs):
    return [{name: configs[name]} for name in configs]
def _create_ambari_configs(sahara_configs, plugin_version):
configs = {}
for service, params in sahara_configs.items():
if service == "general" or service == "Kerberos":
# General and Kerberos configs are designed for Sahara, not for
# the plugin
continue
for k, v in params.items():
group = _get_config_group(service, k, plugin_version)
configs.setdefault(group, {})
configs[group].update({k: v})
return configs
def _make_paths(dirs, suffix):
return ",".join([d + suffix for d in dirs])
def get_instance_params_mapping(inst):
configs = _create_ambari_configs(inst.node_group.node_configs,
inst.node_group.cluster.hadoop_version)
storage_paths = inst.storage_paths()
configs.setdefault("hdfs-site", {})
configs["hdfs-site"]["dfs.datanode.data.dir"] = _make_paths(
storage_paths, "/hdfs/data")
configs["hdfs-site"]["dfs.journalnode.edits.dir"] = _make_paths(
storage_paths, "/hdfs/journalnode")
configs["hdfs-site"]["dfs.namenode.checkpoint.dir"] = _make_paths(
storage_paths, "/hdfs/namesecondary")
configs["hdfs-site"]["dfs.namenode.name.dir"] = _make_paths(
storage_paths, "/hdfs/namenode")
configs.setdefault("yarn-site", {})
configs["yarn-site"]["yarn.nodemanager.local-dirs"] = _make_paths(
storage_paths, "/yarn/local")
configs["yarn-site"]["yarn.nodemanager.log-dirs"] = _make_paths(
storage_paths, "/yarn/log")
configs["yarn-site"][
"yarn.timeline-service.leveldb-timeline-store.path"] = _make_paths(
storage_paths, "/yarn/timeline")
configs.setdefault("oozie-site", {})
configs["oozie-site"][
"oozie.service.AuthorizationService.security.enabled"] = "false"
return configs
def get_instance_params(inst):
return _serialize_ambari_configs(get_instance_params_mapping(inst))
def get_cluster_params(cluster):
configs = _create_ambari_configs(cluster.cluster_configs,
cluster.hadoop_version)
swift_configs = {x["name"]: x["value"]
for x in swift_helper.get_swift_configs()}
configs.setdefault("core-site", {})
configs["core-site"].update(swift_configs)
if utils.get_instance(cluster, common.RANGER_ADMIN):
configs.setdefault("admin-properties", {})
configs["admin-properties"]["db_root_password"] = (
cluster.extra["ranger_db_password"])
return _serialize_ambari_configs(configs)
def get_config_group(instance):
params = get_instance_params_mapping(instance)
groups = []
for (service, targets) in get_service_to_configs_map().items():
current_group = {
'cluster_name': instance.cluster.name,
'group_name': "%s:%s" % (
instance.cluster.name, instance.instance_name),
'tag': service,
'description': "Config group for scaled "
"node %s" % instance.instance_name,
'hosts': [
{
'host_name': instance.fqdn()
}
],
'desired_configs': []
}
at_least_one_added = False
for target in targets:
configs = params.get(target, {})
if configs:
current_group['desired_configs'].append({
'type': target,
'properties': configs,
'tag': instance.instance_name
})
at_least_one_added = True
if at_least_one_added:
# Config Group without overridden data is not interesting
groups.append({'ConfigGroup': current_group})
return groups


@ -1,723 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import functools
import telnetlib # nosec
from oslo_log import log as logging
from oslo_utils import uuidutils
from sahara.plugins import conductor
from sahara.plugins import context
from sahara.plugins import kerberos
from sahara.plugins import topology_helper as t_helper
from sahara.plugins import utils as plugin_utils
from sahara_plugin_ambari.i18n import _
from sahara_plugin_ambari.plugins.ambari import client as ambari_client
from sahara_plugin_ambari.plugins.ambari import common as p_common
from sahara_plugin_ambari.plugins.ambari import configs
from sahara_plugin_ambari.plugins.ambari import ha_helper
LOG = logging.getLogger(__name__)
repo_id_map = {
"2.3": {
"HDP": "HDP-2.3",
"HDP-UTILS": "HDP-UTILS-1.1.0.20"
},
"2.4": {
"HDP": "HDP-2.4",
"HDP-UTILS": "HDP-UTILS-1.1.0.20"
},
"2.5": {
"HDP": "HDP-2.5",
"HDP-UTILS": "HDP-UTILS-1.1.0.21"
},
"2.6": {
"HDP": "HDP-2.6",
"HDP-UTILS": "HDP-UTILS-1.1.0.22"
},
}
os_type_map = {
"centos6": "redhat6",
"redhat6": "redhat6",
"centos7": "redhat7",
"redhat7": "redhat7",
"ubuntu14": "ubuntu14"
}
@plugin_utils.event_wrapper(True, step=_("Set up Ambari management console"),
param=('cluster', 0))
def setup_ambari(cluster):
LOG.debug("Set up Ambari management console")
ambari = plugin_utils.get_instance(cluster, p_common.AMBARI_SERVER)
ambari_settings = ("agent.package.install.task.timeout=%s"
% configs.get_ambari_pkg_install_timeout(cluster))
with ambari.remote() as r:
sudo = functools.partial(r.execute_command, run_as_root=True)
sudo("rngd -r /dev/urandom -W 4096")
r.replace_remote_line("/etc/ambari-server/conf/ambari.properties",
"agent.package.install.task.timeout=",
ambari_settings)
sudo("ambari-server setup -s -j"
" `cut -f2 -d \"=\" /etc/profile.d/99-java.sh`", timeout=1800)
# the following change must be after ambari-setup, or it would be
# overwritten (probably because it's not part of the base set of
# keywords/values handled by ambari-setup).
r.append_to_file("/etc/ambari-server/conf/ambari.properties",
"server.startup.web.timeout=180", run_as_root=True)
redirect_file = "/tmp/%s" % uuidutils.generate_uuid()
sudo("service ambari-server start >{rfile} && "
"cat {rfile} && rm {rfile}".format(rfile=redirect_file))
LOG.debug("Ambari management console installed")
def setup_agents(cluster, instances=None):
LOG.debug("Set up Ambari agents")
manager_address = plugin_utils.get_instance(
cluster, p_common.AMBARI_SERVER).fqdn()
if not instances:
instances = plugin_utils.get_instances(cluster)
_setup_agents(instances, manager_address)
def _setup_agents(instances, manager_address):
plugin_utils.add_provisioning_step(
instances[0].cluster.id, _("Set up Ambari agents"), len(instances))
with context.PluginsThreadGroup() as tg:
for inst in instances:
tg.spawn("hwx-agent-setup-%s" % inst.id,
_setup_agent, inst, manager_address)
LOG.debug("Ambari agents have been installed")
def _disable_repos_on_inst(instance):
with context.set_current_instance_id(instance_id=instance.instance_id):
with instance.remote() as r:
sudo = functools.partial(r.execute_command, run_as_root=True)
if r.get_os_distrib() == "ubuntu":
sudo("mv /etc/apt/sources.list /etc/apt/sources.list.tmp")
else:
tmp_name = "/tmp/yum.repos.d-%s" % instance.instance_id[:8]
# moving to other folder
sudo("mv /etc/yum.repos.d/ {fold_name}".format(
fold_name=tmp_name))
sudo("mkdir /etc/yum.repos.d")
def disable_repos(cluster):
if configs.use_base_repos_needed(cluster):
LOG.debug("Using base repos")
return
instances = plugin_utils.get_instances(cluster)
with context.PluginsThreadGroup() as tg:
for inst in instances:
tg.spawn("disable-repos-%s" % inst.instance_name,
_disable_repos_on_inst, inst)
@plugin_utils.event_wrapper(True)
def _setup_agent(instance, ambari_address):
with instance.remote() as r:
sudo = functools.partial(r.execute_command, run_as_root=True)
r.replace_remote_string("/etc/ambari-agent/conf/ambari-agent.ini",
"localhost", ambari_address)
try:
sudo("ambari-agent start")
except Exception as e:
# workaround for ubuntu, because on ubuntu the ambari agent
# starts automatically after image boot
msg = _("Restart of ambari-agent is needed for host {}, "
"reason: {}").format(instance.fqdn(), e)
LOG.exception(msg)
sudo("ambari-agent restart")
        # refresh the repository metadata so packages install correctly
r.update_repository()
@plugin_utils.event_wrapper(True, step=_("Wait for Ambari to be accessible"),
param=('cluster', 0))
def wait_ambari_accessible(cluster):
ambari = plugin_utils.get_instance(cluster, p_common.AMBARI_SERVER)
kwargs = {"host": ambari.management_ip, "port": 8080}
plugin_utils.poll(_check_port_accessible, kwargs=kwargs, timeout=300)
def _check_port_accessible(host, port):
try:
conn = telnetlib.Telnet(host, port)
conn.close()
return True
except IOError:
return False
def resolve_package_conflicts(cluster, instances=None):
if not instances:
instances = plugin_utils.get_instances(cluster)
for instance in instances:
with instance.remote() as r:
if r.get_os_distrib() == 'ubuntu':
try:
r.execute_command(
"apt-get remove -y libmysql-java", run_as_root=True)
except Exception:
LOG.warning("Can't remove libmysql-java, "
"it's probably not installed")
def _prepare_ranger(cluster):
ranger = plugin_utils.get_instance(cluster, p_common.RANGER_ADMIN)
if not ranger:
return
ambari = plugin_utils.get_instance(cluster, p_common.AMBARI_SERVER)
with ambari.remote() as r:
sudo = functools.partial(r.execute_command, run_as_root=True)
sudo("ambari-server setup --jdbc-db=mysql "
"--jdbc-driver=/usr/share/java/mysql-connector-java.jar")
init_db_template = (
"create user 'root'@'%' identified by '{password}';\n"
"set password for 'root'@'localhost' = password('{password}');")
password = uuidutils.generate_uuid()
extra = cluster.extra.to_dict() if cluster.extra else {}
extra["ranger_db_password"] = password
ctx = context.ctx()
conductor.cluster_update(ctx, cluster, {"extra": extra})
with ranger.remote() as r:
sudo = functools.partial(r.execute_command, run_as_root=True)
# TODO(sreshetnyak): add ubuntu support
sudo("yum install -y mysql-server")
sudo("service mysqld start")
r.write_file_to("/tmp/init.sql",
init_db_template.format(password=password))
sudo("mysql < /tmp/init.sql")
sudo("rm /tmp/init.sql")
@plugin_utils.event_wrapper(True,
step=_("Prepare Hive"), param=('cluster', 0))
def prepare_hive(cluster):
hive = plugin_utils.get_instance(cluster, p_common.HIVE_SERVER)
if not hive:
return
with hive.remote() as r:
r.execute_command(
'sudo su - -c "hadoop fs -mkdir /user/oozie/conf" hdfs')
r.execute_command(
'sudo su - -c "hadoop fs -copyFromLocal '
'/etc/hive/conf/hive-site.xml '
'/user/oozie/conf/hive-site.xml" hdfs')
@plugin_utils.event_wrapper(True, step=_("Update default Ambari password"),
param=('cluster', 0))
def update_default_ambari_password(cluster):
ambari = plugin_utils.get_instance(cluster, p_common.AMBARI_SERVER)
new_password = uuidutils.generate_uuid()
with ambari_client.AmbariClient(ambari) as client:
client.update_user_password("admin", "admin", new_password)
extra = cluster.extra.to_dict() if cluster.extra else {}
extra["ambari_password"] = new_password
ctx = context.ctx()
conductor.cluster_update(ctx, cluster, {"extra": extra})
cluster = conductor.cluster_get(ctx, cluster.id)
@plugin_utils.event_wrapper(True, step=_("Wait for host registration"),
param=('cluster', 0))
def wait_host_registration(cluster, instances):
with _get_ambari_client(cluster) as client:
kwargs = {"client": client, "instances": instances}
plugin_utils.poll(_check_host_registration, kwargs=kwargs,
timeout=600)
def _check_host_registration(client, instances):
hosts = client.get_registered_hosts()
registered_host_names = [h["Hosts"]["host_name"] for h in hosts]
for instance in instances:
if instance.fqdn() not in registered_host_names:
return False
return True
@plugin_utils.event_wrapper(True, step=_("Set up HDP repositories"),
param=('cluster', 0))
def _set_up_hdp_repos(cluster, hdp_repo, hdp_utils_repo):
ambari = plugin_utils.get_instance(cluster, p_common.AMBARI_SERVER)
pv = cluster.hadoop_version
repos = repo_id_map[pv]
with _get_ambari_client(cluster) as client:
os_type = os_type_map[client.get_host_info(ambari.fqdn())["os_type"]]
if hdp_repo:
client.set_up_mirror(pv, os_type, repos["HDP"], hdp_repo)
if hdp_utils_repo:
client.set_up_mirror(pv, os_type, repos["HDP-UTILS"],
hdp_utils_repo)
def set_up_hdp_repos(cluster):
hdp_repo = configs.get_hdp_repo_url(cluster)
hdp_utils_repo = configs.get_hdp_utils_repo_url(cluster)
if hdp_repo or hdp_utils_repo:
_set_up_hdp_repos(cluster, hdp_repo, hdp_utils_repo)
def get_kdc_server(cluster):
return plugin_utils.get_instance(
cluster, p_common.AMBARI_SERVER)
def _prepare_kerberos(cluster, instances=None):
if instances is None:
kerberos.deploy_infrastructure(cluster, get_kdc_server(cluster))
kerberos.prepare_policy_files(cluster)
else:
server = None
if not kerberos.using_existing_kdc(cluster):
server = get_kdc_server(cluster)
kerberos.setup_clients(cluster, server)
kerberos.prepare_policy_files(cluster)
def prepare_kerberos(cluster, instances=None):
if kerberos.is_kerberos_security_enabled(cluster):
_prepare_kerberos(cluster, instances)
def _serialize_mit_kdc_kerberos_env(cluster):
return {
'kerberos-env': {
"realm": kerberos.get_realm_name(cluster),
"kdc_type": "mit-kdc",
"kdc_host": kerberos.get_kdc_host(
cluster, get_kdc_server(cluster)),
"admin_server_host": kerberos.get_kdc_host(
cluster, get_kdc_server(cluster)),
'encryption_types': 'aes256-cts-hmac-sha1-96',
'ldap_url': '', 'container_dn': '',
}
}
def _serialize_krb5_configs(cluster):
return {
"krb5-conf": {
"properties_attributes": {},
"properties": {
"manage_krb5_conf": "false"
}
}
}
def _get_credentials(cluster):
return [{
"alias": "kdc.admin.credential",
"principal": kerberos.get_admin_principal(cluster),
"key": kerberos.get_server_password(cluster),
"type": "TEMPORARY"
}]
def get_host_group_components(cluster, processes):
result = []
for proc in processes:
result.append({'name': proc})
return result
@plugin_utils.event_wrapper(True, step=_("Create Ambari blueprint"),
param=('cluster', 0))
def create_blueprint(cluster):
_prepare_ranger(cluster)
cluster = conductor.cluster_get(context.ctx(), cluster.id)
host_groups = []
for ng in cluster.node_groups:
procs = p_common.get_ambari_proc_list(ng)
procs.extend(p_common.get_clients(cluster))
for instance in ng.instances:
hg = {
"name": instance.instance_name,
"configurations": configs.get_instance_params(instance),
"components": get_host_group_components(cluster, procs)
}
host_groups.append(hg)
bp = {
"Blueprints": {
"stack_name": "HDP",
"stack_version": cluster.hadoop_version,
},
"host_groups": host_groups,
"configurations": configs.get_cluster_params(cluster)
}
if kerberos.is_kerberos_security_enabled(cluster):
bp['configurations'].extend([
_serialize_mit_kdc_kerberos_env(cluster),
_serialize_krb5_configs(cluster)
])
bp['Blueprints']['security'] = {'type': 'KERBEROS'}
general_configs = cluster.cluster_configs.get("general", {})
if (general_configs.get(p_common.NAMENODE_HA) or
general_configs.get(p_common.RESOURCEMANAGER_HA) or
general_configs.get(p_common.HBASE_REGIONSERVER_HA)):
bp = ha_helper.update_bp_ha_common(cluster, bp)
if general_configs.get(p_common.NAMENODE_HA):
bp = ha_helper.update_bp_for_namenode_ha(cluster, bp)
if general_configs.get(p_common.RESOURCEMANAGER_HA):
bp = ha_helper.update_bp_for_resourcemanager_ha(cluster, bp)
if general_configs.get(p_common.HBASE_REGIONSERVER_HA):
bp = ha_helper.update_bp_for_hbase_ha(cluster, bp)
with _get_ambari_client(cluster) as client:
return client.create_blueprint(cluster.name, bp)
def _build_ambari_cluster_template(cluster):
cl_tmpl = {
"blueprint": cluster.name,
"default_password": uuidutils.generate_uuid(),
"host_groups": []
}
if cluster.use_autoconfig:
strategy = configs.get_auto_configuration_strategy(cluster)
cl_tmpl["config_recommendation_strategy"] = strategy
if kerberos.is_kerberos_security_enabled(cluster):
cl_tmpl["credentials"] = _get_credentials(cluster)
cl_tmpl["security"] = {"type": "KERBEROS"}
topology = _get_topology_data(cluster)
for ng in cluster.node_groups:
for instance in ng.instances:
host = {"fqdn": instance.fqdn()}
if t_helper.is_data_locality_enabled():
host["rack_info"] = topology[instance.instance_name]
cl_tmpl["host_groups"].append({
"name": instance.instance_name,
"hosts": [host]
})
return cl_tmpl
@plugin_utils.event_wrapper(True,
step=_("Start cluster"), param=('cluster', 0))
def start_cluster(cluster):
ambari_template = _build_ambari_cluster_template(cluster)
with _get_ambari_client(cluster) as client:
req_id = client.create_cluster(cluster.name, ambari_template)["id"]
client.wait_ambari_request(req_id, cluster.name)
@plugin_utils.event_wrapper(True)
def _add_host_to_cluster(instance, client):
client.add_host_to_cluster(instance)
def add_new_hosts(cluster, instances):
with _get_ambari_client(cluster) as client:
plugin_utils.add_provisioning_step(
cluster.id, _("Add new hosts"), len(instances))
for inst in instances:
_add_host_to_cluster(inst, client)
@plugin_utils.event_wrapper(True, step=_("Generate config groups"),
param=('cluster', 0))
def manage_config_groups(cluster, instances):
groups = []
for instance in instances:
groups.extend(configs.get_config_group(instance))
with _get_ambari_client(cluster) as client:
client.create_config_group(cluster, groups)
@plugin_utils.event_wrapper(True, step=_("Cleanup config groups"),
param=('cluster', 0))
def cleanup_config_groups(cluster, instances):
to_remove = set()
for instance in instances:
cfg_name = "%s:%s" % (cluster.name, instance.instance_name)
to_remove.add(cfg_name)
with _get_ambari_client(cluster) as client:
config_groups = client.get_config_groups(cluster)
for group in config_groups['items']:
cfg_id = group['ConfigGroup']['id']
detailed = client.get_detailed_config_group(cluster, cfg_id)
cfg_name = detailed['ConfigGroup']['group_name']
            # we have one config group per host
if cfg_name in to_remove:
client.remove_config_group(cluster, cfg_id)
@plugin_utils.event_wrapper(True, step=_("Regenerate keytabs for Kerberos"),
param=('cluster', 0))
def _regenerate_keytabs(cluster):
with _get_ambari_client(cluster) as client:
alias = "kdc.admin.credential"
try:
client.get_credential(cluster.name, alias)
except ambari_client.AmbariNotFound:
# credentials are missing
data = {
'Credential': {
"principal": kerberos.get_admin_principal(cluster),
"key": kerberos.get_server_password(cluster),
"type": "TEMPORARY"
}
}
client.import_credential(cluster.name, alias, data)
req_id = client.regenerate_keytabs(cluster.name)
client.wait_ambari_request(req_id, cluster.name)
@plugin_utils.event_wrapper(True, step=_("Install services on hosts"),
param=('cluster', 0))
def _install_services_to_hosts(cluster, instances):
requests_ids = []
with _get_ambari_client(cluster) as client:
clients = p_common.get_clients(cluster)
for instance in instances:
services = p_common.get_ambari_proc_list(instance.node_group)
services.extend(clients)
for service in services:
client.add_service_to_host(instance, service)
requests_ids.append(
client.start_service_on_host(
instance, service, 'INSTALLED'))
client.wait_ambari_requests(requests_ids, cluster.name)
@plugin_utils.event_wrapper(True, step=_("Start services on hosts"),
param=('cluster', 0))
def _start_services_on_hosts(cluster, instances):
with _get_ambari_client(cluster) as client:
# all services added and installed, let's start them
requests_ids = []
for instance in instances:
services = p_common.get_ambari_proc_list(instance.node_group)
services.extend(p_common.ALL_LIST)
for service in services:
requests_ids.append(
client.start_service_on_host(
instance, service, 'STARTED'))
client.wait_ambari_requests(requests_ids, cluster.name)
def manage_host_components(cluster, instances):
_install_services_to_hosts(cluster, instances)
if kerberos.is_kerberos_security_enabled(cluster):
_regenerate_keytabs(cluster)
_start_services_on_hosts(cluster, instances)
@plugin_utils.event_wrapper(True,
step=_("Decommission NodeManagers and DataNodes"),
param=('cluster', 0))
def decommission_hosts(cluster, instances):
    # list() so len() works under Python 3 (filter returns an iterator)
    nodemanager_instances = list(filter(
        lambda i: p_common.NODEMANAGER in i.node_group.node_processes,
        instances))
if len(nodemanager_instances) > 0:
decommission_nodemanagers(cluster, nodemanager_instances)
    # list() so len() works under Python 3 (filter returns an iterator)
    datanode_instances = list(filter(
        lambda i: p_common.DATANODE in i.node_group.node_processes,
        instances))
if len(datanode_instances) > 0:
decommission_datanodes(cluster, datanode_instances)
def decommission_nodemanagers(cluster, instances):
with _get_ambari_client(cluster) as client:
client.decommission_nodemanagers(cluster.name, instances)
def decommission_datanodes(cluster, instances):
with _get_ambari_client(cluster) as client:
client.decommission_datanodes(cluster.name, instances)
def restart_namenode(cluster, instance):
with _get_ambari_client(cluster) as client:
client.restart_namenode(cluster.name, instance)
def restart_resourcemanager(cluster, instance):
with _get_ambari_client(cluster) as client:
client.restart_resourcemanager(cluster.name, instance)
@plugin_utils.event_wrapper(True,
step=_("Restart NameNodes and ResourceManagers"),
param=('cluster', 0))
def restart_nns_and_rms(cluster):
nns = plugin_utils.get_instances(cluster, p_common.NAMENODE)
for nn in nns:
restart_namenode(cluster, nn)
rms = plugin_utils.get_instances(cluster, p_common.RESOURCEMANAGER)
for rm in rms:
restart_resourcemanager(cluster, rm)
def restart_service(cluster, service_name):
with _get_ambari_client(cluster) as client:
client.restart_service(cluster.name, service_name)
@plugin_utils.event_wrapper(True,
step=_("Remove hosts"), param=('cluster', 0))
def remove_services_from_hosts(cluster, instances):
for inst in instances:
LOG.debug("Stopping and removing processes from host %s", inst.fqdn())
_remove_services_from_host(cluster, inst)
LOG.debug("Removing the host %s", inst.fqdn())
_remove_host(cluster, inst)
def _remove_services_from_host(cluster, instance):
with _get_ambari_client(cluster) as client:
hdp_processes = client.list_host_processes(cluster.name, instance)
for proc in hdp_processes:
LOG.debug("Stopping process %(proc)s on host %(fqdn)s ",
{'proc': proc, 'fqdn': instance.fqdn()})
client.stop_process_on_host(cluster.name, instance, proc)
LOG.debug("Removing process %(proc)s from host %(fqdn)s ",
{'proc': proc, 'fqdn': instance.fqdn()})
client.remove_process_from_host(cluster.name, instance, proc)
_wait_all_processes_removed(cluster, instance)
def _remove_host(cluster, inst):
with _get_ambari_client(cluster) as client:
client.delete_host(cluster.name, inst)
def _wait_all_processes_removed(cluster, instance):
with _get_ambari_client(cluster) as client:
while True:
hdp_processes = client.list_host_processes(cluster.name, instance)
if not hdp_processes:
return
context.sleep(5)
def _get_ambari_client(cluster):
ambari = plugin_utils.get_instance(cluster, p_common.AMBARI_SERVER)
password = cluster.extra["ambari_password"]
return ambari_client.AmbariClient(ambari, password=password)
def _get_topology_data(cluster):
if not t_helper.is_data_locality_enabled():
return {}
    LOG.warning("Node group awareness is not implemented in YARN yet, "
                "so enable_hypervisor_awareness is set to False "
                "explicitly")
return t_helper.generate_topology_map(cluster, is_node_awareness=False)
@plugin_utils.event_wrapper(True)
def _configure_topology_data(cluster, inst, client):
topology = _get_topology_data(cluster)
client.set_rack_info_for_instance(
cluster.name, inst, topology[inst.instance_name])
@plugin_utils.event_wrapper(True,
step=_("Restart HDFS and MAPREDUCE2 services"),
param=('cluster', 0))
def _restart_hdfs_and_mapred_services(cluster, client):
client.restart_service(cluster.name, p_common.HDFS_SERVICE)
client.restart_service(cluster.name, p_common.MAPREDUCE2_SERVICE)
def configure_rack_awareness(cluster, instances):
if not t_helper.is_data_locality_enabled():
return
with _get_ambari_client(cluster) as client:
plugin_utils.add_provisioning_step(
cluster.id, _("Configure rack awareness"), len(instances))
for inst in instances:
_configure_topology_data(cluster, inst, client)
_restart_hdfs_and_mapred_services(cluster, client)
@plugin_utils.event_wrapper(True)
def _add_hadoop_swift_jar(instance, new_jar):
with instance.remote() as r:
code, out = r.execute_command(
"test -f %s" % new_jar, raise_when_error=False)
if code == 0:
# get ambari hadoop version (e.g.: 2.7.1.2.3.4.0-3485)
code, amb_hadoop_version = r.execute_command(
"sudo hadoop version | grep 'Hadoop' | awk '{print $2}'")
amb_hadoop_version = amb_hadoop_version.strip()
            # get special code of ambari hadoop version (e.g.: 2.3.4.0-3485)
amb_code = '.'.join(amb_hadoop_version.split('.')[3:])
origin_jar = (
"/usr/hdp/{}/hadoop-mapreduce/hadoop-openstack-{}.jar".format(
amb_code, amb_hadoop_version))
r.execute_command("sudo cp {} {}".format(new_jar, origin_jar))
else:
LOG.warning("The {jar_file} file cannot be found "
"in the {dir} directory so Keystone API v3 "
"is not enabled for this cluster."
.format(jar_file="hadoop-openstack.jar",
dir="/opt"))
def add_hadoop_swift_jar(instances):
new_jar = "/opt/hadoop-openstack.jar"
plugin_utils.add_provisioning_step(instances[0].cluster.id,
_("Add Hadoop Swift jar to instances"),
len(instances))
for inst in instances:
_add_hadoop_swift_jar(inst, new_jar)
def deploy_kerberos_principals(cluster, instances=None):
if not kerberos.is_kerberos_security_enabled(cluster):
return
if instances is None:
instances = plugin_utils.get_instances(cluster)
mapper = {
'hdfs': plugin_utils.instances_with_services(
instances, [p_common.SECONDARY_NAMENODE, p_common.NAMENODE,
p_common.DATANODE, p_common.JOURNAL_NODE]),
'spark': plugin_utils.instances_with_services(
instances, [p_common.SPARK_JOBHISTORYSERVER]),
'oozie': plugin_utils.instances_with_services(
instances, [p_common.OOZIE_SERVER]),
}
kerberos.create_keytabs_for_map(cluster, mapper)


@ -1,127 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import edp
from sahara.plugins import exceptions as pex
from sahara.plugins import kerberos
from sahara.plugins import utils as plugin_utils
from sahara_plugin_ambari.i18n import _
from sahara_plugin_ambari.plugins.ambari import common as p_common
def _get_lib_location(instance, lib_name):
with instance.remote() as r:
code, jar_path = r.execute_command(
('find /usr/hdp -name "{lib_name}" 2>/dev/null '
'-print | head -n 1'.format(lib_name=lib_name)),
run_as_root=True)
    # drop trailing whitespace
return jar_path.rstrip()
def _get_hadoop_openstack_jar_location(instance):
return _get_lib_location(instance, "hadoop-openstack*.jar")
def _get_jackson_core(instance):
return _get_lib_location(instance, "jackson-core-asl-1.9*.jar")
class EDPOozieEngine(edp.PluginsOozieJobEngine):
def get_hdfs_user(self):
return "oozie"
def get_client(self):
if kerberos.is_kerberos_security_enabled(self.cluster):
return super(EDPOozieEngine, self).get_remote_client()
return super(EDPOozieEngine, self).get_client()
def create_hdfs_dir(self, remote, dir_name):
edp.create_dir_hadoop2(remote, dir_name, self.get_hdfs_user())
def get_oozie_server_uri(self, cluster):
oozie = plugin_utils.get_instance(cluster, p_common.OOZIE_SERVER)
return "http://%s:11000/oozie" % oozie.management_ip
def get_name_node_uri(self, cluster):
namenodes = plugin_utils.get_instances(cluster, p_common.NAMENODE)
if len(namenodes) == 1:
return "hdfs://%s:8020" % namenodes[0].fqdn()
else:
return "hdfs://hdfs-ha"
def get_resource_manager_uri(self, cluster):
resourcemanagers = plugin_utils.get_instances(cluster,
p_common.RESOURCEMANAGER)
return "%s:8050" % resourcemanagers[0].fqdn()
def get_oozie_server(self, cluster):
return plugin_utils.get_instance(cluster, p_common.OOZIE_SERVER)
def validate_job_execution(self, cluster, job, data):
oozie_count = plugin_utils.get_instances_count(cluster,
p_common.OOZIE_SERVER)
if oozie_count != 1:
raise pex.InvalidComponentCountException(
p_common.OOZIE_SERVER, "1", oozie_count)
super(EDPOozieEngine, self).validate_job_execution(cluster, job, data)
@staticmethod
def get_possible_job_config(job_type):
return {"job_config": []}
class EDPSparkEngine(edp.PluginsSparkJobEngine):
edp_base_version = "2.2"
def __init__(self, cluster):
super(EDPSparkEngine, self).__init__(cluster)
# searching for spark instance
self.master = plugin_utils.get_instance(
cluster, p_common.SPARK_JOBHISTORYSERVER)
self.plugin_params["spark-user"] = "sudo -u spark "
self.plugin_params["spark-submit"] = "spark-submit"
self.plugin_params["deploy-mode"] = "cluster"
self.plugin_params["master"] = "yarn-cluster"
@staticmethod
def edp_supported(version):
return version >= EDPSparkEngine.edp_base_version
def run_job(self, job_execution):
# calculate class-path dynamically
driver_classpath = [
_get_hadoop_openstack_jar_location(self.master),
_get_jackson_core(self.master)]
self.plugin_params['driver-class-path'] = ":".join(driver_classpath)
self.plugin_params['drivers-to-jars'] = driver_classpath
return super(EDPSparkEngine, self).run_job(job_execution)
def validate_job_execution(self, cluster, job, data):
if not self.edp_supported(cluster.hadoop_version):
raise pex.PluginInvalidDataException(
_('Ambari plugin of {base} or higher required to run {type} '
'jobs').format(
base=EDPSparkEngine.edp_base_version, type=job.type))
spark_nodes_count = plugin_utils.get_instances_count(
cluster, p_common.SPARK_JOBHISTORYSERVER)
if spark_nodes_count != 1:
raise pex.InvalidComponentCountException(
p_common.SPARK_JOBHISTORYSERVER, '1', spark_nodes_count)
super(EDPSparkEngine, self).validate_job_execution(
cluster, job, data)


@ -1,252 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import utils
from sahara_plugin_ambari.plugins.ambari import common as p_common
CORE_SITE = "core-site"
YARN_SITE = "yarn-site"
HBASE_SITE = "hbase-site"
HDFS_SITE = "hdfs-site"
HADOOP_ENV = "hadoop-env"
ZOO_CFG = "zoo.cfg"
def update_bp_ha_common(cluster, blueprint):
blueprint = _set_default_fs(cluster, blueprint, p_common.NAMENODE_HA)
blueprint = _set_high_zk_limits(blueprint)
return blueprint
def update_bp_for_namenode_ha(cluster, blueprint):
blueprint = _add_zkfc_to_namenodes(blueprint)
blueprint = _set_zk_quorum(cluster, blueprint, CORE_SITE)
blueprint = _configure_hdfs_site(cluster, blueprint)
return blueprint
def update_bp_for_resourcemanager_ha(cluster, blueprint):
blueprint = _configure_yarn_site(cluster, blueprint)
blueprint = _set_zk_quorum(cluster, blueprint, YARN_SITE)
blueprint = _set_default_fs(cluster, blueprint,
p_common.RESOURCEMANAGER_HA)
return blueprint
def update_bp_for_hbase_ha(cluster, blueprint):
return _confgure_hbase_site(cluster, blueprint)
def _add_zkfc_to_namenodes(blueprint):
for hg in blueprint["host_groups"]:
if {"name": "NAMENODE"} in hg["components"]:
hg["components"].append({"name": "ZKFC"})
return blueprint
def _find_create_properties_section(blueprint, section_name):
for conf_group in blueprint["configurations"]:
if section_name in conf_group:
return conf_group[section_name]
new_group = {section_name: {}}
blueprint["configurations"].append(new_group)
return new_group[section_name]
def _find_hdfs_site(blueprint):
return _find_create_properties_section(blueprint, HDFS_SITE)
def _find_yarn_site(blueprint):
return _find_create_properties_section(blueprint, YARN_SITE)
def _find_core_site(blueprint):
return _find_create_properties_section(blueprint, CORE_SITE)
def _find_hadoop_env(blueprint):
return _find_create_properties_section(blueprint, HADOOP_ENV)
def _find_zoo_cfg(blueprint):
return _find_create_properties_section(blueprint, ZOO_CFG)
def _find_hbase_site(blueprint):
return _find_create_properties_section(blueprint, HBASE_SITE)
def _set_default_fs(cluster, blueprint, ha_type):
if ha_type == p_common.NAMENODE_HA:
_find_core_site(blueprint)["fs.defaultFS"] = "hdfs://hdfs-ha"
elif ha_type == p_common.RESOURCEMANAGER_HA:
nn_instance = utils.get_instances(cluster, p_common.NAMENODE)[0]
_find_core_site(blueprint)["fs.defaultFS"] = (
"hdfs://%s:8020" % nn_instance.fqdn())
return blueprint
def _set_zk_quorum(cluster, blueprint, conf_type):
zk_instances = utils.get_instances(cluster, p_common.ZOOKEEPER_SERVER)
value = ",".join(["%s:2181" % i.fqdn() for i in zk_instances])
if conf_type == CORE_SITE:
_find_core_site(blueprint)["ha.zookeeper.quorum"] = value
elif conf_type == YARN_SITE:
_find_yarn_site(blueprint)["hadoop.registry.zk.quorum"] = value
return blueprint
def _set_high_zk_limits(blueprint):
props = _find_zoo_cfg(blueprint)
props["tickTime"] = "10000"
return blueprint
def _set_primary_and_standby_namenode(cluster, blueprint):
props = _find_hadoop_env(blueprint)
nns = utils.get_instances(cluster, p_common.NAMENODE)
props["dfs_ha_initial_namenode_active"] = nns[0].fqdn()
props["dfs_ha_initial_namenode_standby"] = nns[1].fqdn()
return blueprint
def _configure_hdfs_site(cluster, blueprint):
props = _find_hdfs_site(blueprint)
props["dfs.client.failover.proxy.provider.hdfs-ha"] = (
"org.apache.hadoop.hdfs.server.namenode.ha."
"ConfiguredFailoverProxyProvider")
props["dfs.ha.automatic-failover.enabled"] = "true"
props["dfs.ha.fencing.methods"] = "shell(/bin/true)"
props["dfs.nameservices"] = "hdfs-ha"
jns = utils.get_instances(cluster, p_common.JOURNAL_NODE)
journalnodes_concat = ";".join(
["%s:8485" % i.fqdn() for i in jns])
journalnodes_value = "qjournal://%s/hdfs-ha" % journalnodes_concat
props["dfs.namenode.shared.edits.dir"] = journalnodes_value
nns = utils.get_instances(cluster, p_common.NAMENODE)
nn_id_concat = ",".join([i.instance_name for i in nns])
props["dfs.ha.namenodes.hdfs-ha"] = nn_id_concat
props["dfs.namenode.http-address"] = "%s:50070" % nns[0].fqdn()
props["dfs.namenode.https-address"] = "%s:50470" % nns[0].fqdn()
for i in nns:
props["dfs.namenode.http-address.hdfs-ha.%s" % i.instance_name] = (
"%s:50070" % i.fqdn())
props["dfs.namenode.https-address.hdfs-ha.%s" % i.instance_name] = (
"%s:50470" % i.fqdn())
props["dfs.namenode.rpc-address.hdfs-ha.%s" % i.instance_name] = (
"%s:8020" % i.fqdn())
return blueprint
def _configure_yarn_site(cluster, blueprint):
props = _find_yarn_site(blueprint)
name = cluster.name
rm_instances = utils.get_instances(cluster, p_common.RESOURCEMANAGER)
props["hadoop.registry.rm.enabled"] = "false"
zk_instances = utils.get_instances(cluster, p_common.ZOOKEEPER_SERVER)
zks = ",".join(["%s:2181" % i.fqdn() for i in zk_instances])
props["yarn.resourcemanager.zk-address"] = zks
hs = utils.get_instance(cluster, p_common.HISTORYSERVER)
props["yarn.log.server.url"] = "%s:19888/jobhistory/logs/" % hs.fqdn()
props["yarn.resourcemanager.address"] = "%s:8050" % rm_instances[0].fqdn()
props["yarn.resourcemanager.admin.address"] = ("%s:8141" %
rm_instances[0].fqdn())
props["yarn.resourcemanager.cluster-id"] = name
props["yarn.resourcemanager.ha.automatic-failover.zk-base-path"] = (
"/yarn-leader-election")
props["yarn.resourcemanager.ha.enabled"] = "true"
rm_id_concat = ",".join([i.instance_name for i in rm_instances])
props["yarn.resourcemanager.ha.rm-ids"] = rm_id_concat
for i in rm_instances:
props["yarn.resourcemanager.hostname.%s" % i.instance_name] = i.fqdn()
props["yarn.resourcemanager.webapp.address.%s" %
i.instance_name] = "%s:8088" % i.fqdn()
props["yarn.resourcemanager.webapp.https.address.%s" %
i.instance_name] = "%s:8090" % i.fqdn()
props["yarn.resourcemanager.hostname"] = rm_instances[0].fqdn()
props["yarn.resourcemanager.recovery.enabled"] = "true"
props["yarn.resourcemanager.resource-tracker.address"] = (
"%s:8025" % rm_instances[0].fqdn())
props["yarn.resourcemanager.scheduler.address"] = (
"%s:8030" % rm_instances[0].fqdn())
props["yarn.resourcemanager.store.class"] = (
"org.apache.hadoop.yarn.server.resourcemanager.recovery."
"ZKRMStateStore")
props["yarn.resourcemanager.webapp.address"] = (
"%s:8088" % rm_instances[0].fqdn())
props["yarn.resourcemanager.webapp.https.address"] = (
"%s:8090" % rm_instances[0].fqdn())
tls_instance = utils.get_instance(cluster, p_common.APP_TIMELINE_SERVER)
props["yarn.timeline-service.address"] = "%s:10200" % tls_instance.fqdn()
props["yarn.timeline-service.webapp.address"] = (
"%s:8188" % tls_instance.fqdn())
props["yarn.timeline-service.webapp.https.address"] = (
"%s:8190" % tls_instance.fqdn())
return blueprint
def _confgure_hbase_site(cluster, blueprint):
props = _find_hbase_site(blueprint)
props["hbase.regionserver.global.memstore.lowerLimit"] = "0.38"
props["hbase.regionserver.global.memstore.upperLimit"] = "0.4"
props["hbase.regionserver.handler.count"] = "60"
props["hbase.regionserver.info.port"] = "16030"
props["hbase.regionserver.storefile.refresh.period"] = "20"
props["hbase.rootdir"] = "hdfs://hdfs-ha/apps/hbase/data"
props["hbase.security.authentication"] = "simple"
props["hbase.security.authorization"] = "false"
props["hbase.superuser"] = "hbase"
props["hbase.tmp.dir"] = "/hadoop/hbase"
props["hbase.zookeeper.property.clientPort"] = "2181"
zk_instances = utils.get_instances(cluster, p_common.ZOOKEEPER_SERVER)
zk_quorum_value = ",".join([i.fqdn() for i in zk_instances])
props["hbase.zookeeper.quorum"] = zk_quorum_value
props["hbase.zookeeper.useMulti"] = "true"
props["hfile.block.cache.size"] = "0.40"
props["zookeeper.session.timeout"] = "30000"
props["zookeeper.znode.parent"] = "/hbase-unsecure"
return blueprint


@ -1,148 +0,0 @@
# Copyright (c) 2016 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
import functools
from oslo_log import log as logging
from sahara.plugins import health_check_base
from sahara.plugins import utils as plugin_utils
from sahara_plugin_ambari.i18n import _
from sahara_plugin_ambari.plugins.ambari import client
from sahara_plugin_ambari.plugins.ambari import common as p_common
LOG = logging.getLogger(__name__)
class AlertsProvider(object):
def __init__(self, cluster):
self._data = None
self._cluster_services = None
self._exception_store = None
self.cluster = cluster
        # call once so that all alert data is cached
self.get_alerts_data()
def get_cluster_services(self):
return self._cluster_services
def is_ambari_active(self):
if self._exception_store:
raise health_check_base.RedHealthError(self._exception_store)
return _("Ambari Monitor is healthy")
def get_alerts_data(self, service=None):
if self._data is not None:
# return cached data
return self._data.get(service, []) if service else self._data
self._data = {}
self._cluster_services = []
try:
ambari = plugin_utils.get_instance(
self.cluster, p_common.AMBARI_SERVER)
password = self.cluster.extra.get("ambari_password")
with client.AmbariClient(ambari, password=password) as ambari:
resp = ambari.get_alerts_data(self.cluster)
for alert in resp:
alert = alert.get('Alert', {})
service = alert.get('service_name').lower()
if service not in self._data:
self._data[service] = []
self._cluster_services.append(service)
self._data[service].append(alert)
except Exception as e:
prefix = _("Can't get response from Ambari Monitor")
msg = _("%(problem)s: %(description)s") % {
'problem': prefix, 'description': str(e)}
            # don't include the exception in the message; LOG.exception logs it
LOG.exception(prefix)
self._exception_store = msg
class AmbariHealthCheck(health_check_base.BasicHealthCheck):
def __init__(self, cluster, provider):
self.provider = provider
super(AmbariHealthCheck, self).__init__(cluster)
def get_health_check_name(self):
return "Ambari alerts health check"
def is_available(self):
return self.cluster.plugin_name == 'ambari'
def check_health(self):
return self.provider.is_ambari_active()
class AmbariServiceHealthCheck(health_check_base.BasicHealthCheck):
def __init__(self, cluster, provider, service):
self.provider = provider
self.service = service.lower()
super(AmbariServiceHealthCheck, self).__init__(cluster)
def get_health_check_name(self):
return "Ambari alerts for %s Service" % self.service
def is_available(self):
return self.cluster.plugin_name == 'ambari'
def get_important_services(self):
return [
p_common.HDFS_SERVICE.lower(),
p_common.YARN_SERVICE.lower(),
p_common.OOZIE_SERVICE.lower(),
p_common.ZOOKEEPER_SERVICE.lower()
]
def check_health(self):
imp_map = {'OK': 'GREEN', 'WARNING': 'YELLOW', 'CRITICAL': 'RED'}
other_map = {'OK': 'GREEN'}
color_counter = collections.Counter()
important_services = self.get_important_services()
for alert in self.provider.get_alerts_data(self.service):
alert_summary = alert.get('state', 'UNKNOWN')
if self.service in important_services:
target = imp_map.get(alert_summary, 'RED')
else:
target = other_map.get(alert_summary, 'YELLOW')
color_counter[target] += 1
if color_counter['RED'] > 0 and color_counter['YELLOW'] > 0:
raise health_check_base.RedHealthError(
_("Ambari Monitor has responded that cluster has "
"%(red)d critical and %(yellow)d warning alert(s)")
% {'red': color_counter['RED'],
'yellow': color_counter['YELLOW']})
elif color_counter['RED'] > 0:
raise health_check_base.RedHealthError(
_("Ambari Monitor has responded that cluster has "
"%(red)d critical alert(s)")
% {'red': color_counter['RED']})
elif color_counter['YELLOW'] > 0:
raise health_check_base.YellowHealthError(
_("Ambari Monitor has responded that cluster "
"has %d warning alert(s)")
% color_counter['YELLOW'])
return _("No alerts found")
def get_health_checks(cluster):
provider = AlertsProvider(cluster)
checks = [functools.partial(AmbariHealthCheck, provider=provider)]
for service in provider.get_cluster_services():
checks.append(functools.partial(
AmbariServiceHealthCheck, provider=provider, service=service))
return checks


@ -1,297 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import conductor
from sahara.plugins import context
from sahara.plugins import images
from sahara.plugins import kerberos
from sahara.plugins import provisioning as p
from sahara.plugins import swift_helper
from sahara.plugins import utils as plugin_utils
from sahara_plugin_ambari.i18n import _
from sahara_plugin_ambari.plugins.ambari import common as p_common
from sahara_plugin_ambari.plugins.ambari import configs
from sahara_plugin_ambari.plugins.ambari import deploy
from sahara_plugin_ambari.plugins.ambari import edp_engine
from sahara_plugin_ambari.plugins.ambari import health
from sahara_plugin_ambari.plugins.ambari import validation
class AmbariPluginProvider(p.ProvisioningPluginBase):
def get_title(self):
return "HDP Plugin"
def get_description(self):
return _("The Ambari Sahara plugin provides the ability to launch "
"clusters with Hortonworks Data Platform (HDP) on OpenStack "
"using Apache Ambari")
def get_versions(self):
return ["2.3", "2.4", "2.5", "2.6"]
def get_node_processes(self, hadoop_version):
return {
p_common.AMBARI_SERVICE: [p_common.AMBARI_SERVER],
p_common.FALCON_SERVICE: [p_common.FALCON_SERVER],
p_common.FLUME_SERVICE: [p_common.FLUME_HANDLER],
p_common.HBASE_SERVICE: [p_common.HBASE_MASTER,
p_common.HBASE_REGIONSERVER],
p_common.HDFS_SERVICE: [p_common.DATANODE, p_common.NAMENODE,
p_common.SECONDARY_NAMENODE,
p_common.JOURNAL_NODE],
p_common.HIVE_SERVICE: [p_common.HIVE_METASTORE,
p_common.HIVE_SERVER],
p_common.KAFKA_SERVICE: [p_common.KAFKA_BROKER],
p_common.KNOX_SERVICE: [p_common.KNOX_GATEWAY],
p_common.OOZIE_SERVICE: [p_common.OOZIE_SERVER],
p_common.RANGER_SERVICE: [p_common.RANGER_ADMIN,
p_common.RANGER_USERSYNC],
p_common.SLIDER_SERVICE: [p_common.SLIDER],
p_common.SPARK_SERVICE: [p_common.SPARK_JOBHISTORYSERVER],
p_common.SQOOP_SERVICE: [p_common.SQOOP],
p_common.STORM_SERVICE: [
p_common.DRPC_SERVER, p_common.NIMBUS,
p_common.STORM_UI_SERVER, p_common.SUPERVISOR],
p_common.YARN_SERVICE: [
p_common.APP_TIMELINE_SERVER, p_common.HISTORYSERVER,
p_common.NODEMANAGER, p_common.RESOURCEMANAGER],
p_common.ZOOKEEPER_SERVICE: [p_common.ZOOKEEPER_SERVER],
'Kerberos': [],
}
def get_configs(self, hadoop_version):
cfgs = kerberos.get_config_list()
cfgs.extend(configs.load_configs(hadoop_version))
return cfgs
def configure_cluster(self, cluster):
deploy.disable_repos(cluster)
deploy.setup_ambari(cluster)
deploy.setup_agents(cluster)
deploy.wait_ambari_accessible(cluster)
deploy.update_default_ambari_password(cluster)
cluster = conductor.cluster_get(context.ctx(), cluster.id)
deploy.wait_host_registration(cluster,
plugin_utils.get_instances(cluster))
deploy.prepare_kerberos(cluster)
deploy.set_up_hdp_repos(cluster)
deploy.resolve_package_conflicts(cluster)
deploy.create_blueprint(cluster)
def start_cluster(self, cluster):
self._set_cluster_info(cluster)
deploy.start_cluster(cluster)
cluster_instances = plugin_utils.get_instances(cluster)
swift_helper.install_ssl_certs(cluster_instances)
deploy.add_hadoop_swift_jar(cluster_instances)
deploy.prepare_hive(cluster)
deploy.deploy_kerberos_principals(cluster)
def _set_cluster_info(self, cluster):
ambari_ip = plugin_utils.get_instance(
cluster, p_common.AMBARI_SERVER).get_ip_or_dns_name()
ambari_port = "8080"
info = {
p_common.AMBARI_SERVER: {
"Web UI": "http://{host}:{port}".format(host=ambari_ip,
port=ambari_port),
"Username": "admin",
"Password": cluster.extra["ambari_password"]
}
}
nns = plugin_utils.get_instances(cluster, p_common.NAMENODE)
info[p_common.NAMENODE] = {}
for idx, namenode in enumerate(nns):
info[p_common.NAMENODE][
"Web UI %s" % (idx + 1)] = (
"http://%s:50070" % namenode.get_ip_or_dns_name())
rms = plugin_utils.get_instances(cluster, p_common.RESOURCEMANAGER)
info[p_common.RESOURCEMANAGER] = {}
for idx, resourcemanager in enumerate(rms):
info[p_common.RESOURCEMANAGER][
"Web UI %s" % (idx + 1)] = (
"http://%s:8088" % resourcemanager.get_ip_or_dns_name())
historyserver = plugin_utils.get_instance(cluster,
p_common.HISTORYSERVER)
if historyserver:
info[p_common.HISTORYSERVER] = {
"Web UI": "http://%s:19888" %
historyserver.get_ip_or_dns_name()
}
atlserver = plugin_utils.get_instance(cluster,
p_common.APP_TIMELINE_SERVER)
if atlserver:
info[p_common.APP_TIMELINE_SERVER] = {
"Web UI": "http://%s:8188" % atlserver.get_ip_or_dns_name()
}
oozie = plugin_utils.get_instance(cluster, p_common.OOZIE_SERVER)
if oozie:
info[p_common.OOZIE_SERVER] = {
"Web UI": "http://%s:11000/oozie" % oozie.get_ip_or_dns_name()
}
hbase_master = plugin_utils.get_instance(cluster,
p_common.HBASE_MASTER)
if hbase_master:
info[p_common.HBASE_MASTER] = {
"Web UI": "http://%s:16010" % hbase_master.get_ip_or_dns_name()
}
falcon = plugin_utils.get_instance(cluster, p_common.FALCON_SERVER)
if falcon:
info[p_common.FALCON_SERVER] = {
"Web UI": "http://%s:15000" % falcon.get_ip_or_dns_name()
}
storm_ui = plugin_utils.get_instance(cluster, p_common.STORM_UI_SERVER)
if storm_ui:
info[p_common.STORM_UI_SERVER] = {
"Web UI": "http://%s:8744" % storm_ui.get_ip_or_dns_name()
}
ranger_admin = plugin_utils.get_instance(cluster,
p_common.RANGER_ADMIN)
if ranger_admin:
info[p_common.RANGER_ADMIN] = {
"Web UI": "http://%s:6080" % ranger_admin.get_ip_or_dns_name(),
"Username": "admin",
"Password": "admin"
}
spark_hs = plugin_utils.get_instance(cluster,
p_common.SPARK_JOBHISTORYSERVER)
if spark_hs:
info[p_common.SPARK_JOBHISTORYSERVER] = {
"Web UI": "http://%s:18080" % spark_hs.get_ip_or_dns_name()
}
info.update(cluster.info.to_dict())
ctx = context.ctx()
conductor.cluster_update(ctx, cluster, {"info": info})
cluster = conductor.cluster_get(ctx, cluster.id)
def validate(self, cluster):
validation.validate(cluster.id)
def scale_cluster(self, cluster, instances):
deploy.prepare_kerberos(cluster, instances)
deploy.setup_agents(cluster, instances)
cluster = conductor.cluster_get(context.ctx(), cluster.id)
deploy.wait_host_registration(cluster, instances)
deploy.resolve_package_conflicts(cluster, instances)
deploy.add_new_hosts(cluster, instances)
deploy.manage_config_groups(cluster, instances)
deploy.manage_host_components(cluster, instances)
deploy.configure_rack_awareness(cluster, instances)
swift_helper.install_ssl_certs(instances)
deploy.add_hadoop_swift_jar(instances)
deploy.deploy_kerberos_principals(cluster, instances)
def decommission_nodes(self, cluster, instances):
deploy.decommission_hosts(cluster, instances)
deploy.remove_services_from_hosts(cluster, instances)
deploy.restart_nns_and_rms(cluster)
deploy.cleanup_config_groups(cluster, instances)
def validate_scaling(self, cluster, existing, additional):
validation.validate(cluster.id)
def get_edp_engine(self, cluster, job_type):
if job_type in edp_engine.EDPSparkEngine.get_supported_job_types():
return edp_engine.EDPSparkEngine(cluster)
if job_type in edp_engine.EDPOozieEngine.get_supported_job_types():
return edp_engine.EDPOozieEngine(cluster)
return None
def get_edp_job_types(self, versions=None):
res = {}
for version in self.get_versions():
if not versions or version in versions:
oozie_engine = edp_engine.EDPOozieEngine
spark_engine = edp_engine.EDPSparkEngine
res[version] = (oozie_engine.get_supported_job_types() +
spark_engine.get_supported_job_types())
return res
def get_edp_config_hints(self, job_type, version):
if job_type in edp_engine.EDPSparkEngine.get_supported_job_types():
return edp_engine.EDPSparkEngine.get_possible_job_config(job_type)
if job_type in edp_engine.EDPOozieEngine.get_supported_job_types():
return edp_engine.EDPOozieEngine.get_possible_job_config(job_type)
def get_open_ports(self, node_group):
ports_map = {
p_common.AMBARI_SERVER: [8080],
p_common.APP_TIMELINE_SERVER: [8188, 8190, 10200],
p_common.DATANODE: [50075, 50475],
p_common.DRPC_SERVER: [3772, 3773],
p_common.FALCON_SERVER: [15000],
p_common.FLUME_HANDLER: [8020, 41414],
p_common.HBASE_MASTER: [16000, 16010],
p_common.HBASE_REGIONSERVER: [16020, 16030],
p_common.HISTORYSERVER: [10020, 19888],
p_common.HIVE_METASTORE: [9933],
p_common.HIVE_SERVER: [9999, 10000],
p_common.KAFKA_BROKER: [6667],
p_common.NAMENODE: [8020, 9000, 50070, 50470],
p_common.NIMBUS: [6627],
p_common.NODEMANAGER: [8042, 8044, 45454],
p_common.OOZIE_SERVER: [11000, 11443],
p_common.RANGER_ADMIN: [6080],
p_common.RESOURCEMANAGER: [8025, 8030, 8050, 8088, 8141],
p_common.SECONDARY_NAMENODE: [50090],
p_common.SPARK_JOBHISTORYSERVER: [18080],
p_common.STORM_UI_SERVER: [8000, 8080, 8744],
p_common.ZOOKEEPER_SERVER: [2181],
}
ports = []
for service in node_group.node_processes:
ports.extend(ports_map.get(service, []))
return ports
def get_health_checks(self, cluster):
return health.get_health_checks(cluster)
validator = images.SaharaImageValidator.from_yaml(
'plugins/ambari/resources/images/image.yaml',
resource_roots=['plugins/ambari/resources/images'],
package='sahara_plugin_ambari')
def get_image_arguments(self, hadoop_version):
if hadoop_version not in self.get_versions():
return NotImplemented
return self.validator.get_argument_list()
def pack_image(self, hadoop_version, remote,
test_only=False, image_arguments=None):
        if hadoop_version == '2.3':
            # guard against the default image_arguments=None
            image_arguments = image_arguments or {}
            image_arguments['ambari_version'] = '2.4.3.0'
self.validator.validate(remote, test_only=test_only,
image_arguments=image_arguments)
def validate_images(self, cluster, test_only=False, image_arguments=None):
image_arguments = self.get_image_arguments(cluster['hadoop_version'])
if cluster['hadoop_version'] == '2.3':
for arguments in image_arguments:
if arguments.name == 'ambari_version':
arguments.default = '2.4.3.0'
if not test_only:
instances = plugin_utils.get_instances(cluster)
        else:
            # test-only mode validates a single instance; keep it as a
            # one-element list so the loop below still works
            instances = plugin_utils.get_instances(cluster)[:1]
for instance in instances:
with instance.remote() as r:
self.validator.validate(r, test_only=test_only,
image_arguments=image_arguments)
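The `ports_map` lookup in `get_open_ports` above reduces to a simple dictionary fan-out over a node group's processes. A minimal sketch, using an abbreviated, illustrative subset of the real map:

```python
# Abbreviated, illustrative subset of the plugin's ports_map.
PORTS_MAP = {
    'Ambari': [8080],
    'NameNode': [8020, 9000, 50070, 50470],
    'DataNode': [50075, 50475],
}


def open_ports(node_processes):
    """Collect ports for every known process; unknown processes add none."""
    ports = []
    for process in node_processes:
        ports.extend(PORTS_MAP.get(process, []))
    return ports
```

Processes without a map entry (e.g. the empty 'Kerberos' service) simply contribute no ports, which is why the real code uses `ports_map.get(service, [])`.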


@@ -1,145 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
_COMMON_DECOMMISSION_TEMPLATE = {
"RequestInfo": {
"context": "",
"command": "DECOMMISSION",
"parameters": {
"slave_type": "",
"excluded_hosts": ""
},
"operation_level": {
"level": "HOST_COMPONENT",
"cluster_name": ""
}
},
"Requests/resource_filters": [
{
"service_name": "",
"component_name": ""
}
]
}
_COMMON_RESTART_TEMPLATE = {
"RequestInfo": {
"context": "",
"command": "RESTART",
"operation_level": {
"level": "HOST",
"cluster_name": ""
}
},
"Requests/resource_filters": [
{
"service_name": "",
"component_name": "",
"hosts": ""
}
]
}
_COMMON_RESTART_SERVICE_TEMPLATE = {
"RequestInfo": {
"context": "",
},
"Body": {
"ServiceInfo": {
"state": ""
}
}
}
def build_datanode_decommission_request(cluster_name, instances):
tmpl = copy.deepcopy(_COMMON_DECOMMISSION_TEMPLATE)
tmpl["RequestInfo"]["context"] = "Decommission DataNodes"
tmpl["RequestInfo"]["parameters"]["slave_type"] = "DATANODE"
tmpl["RequestInfo"]["parameters"]["excluded_hosts"] = ",".join(
[i.fqdn() for i in instances])
tmpl["RequestInfo"]["operation_level"]["cluster_name"] = cluster_name
tmpl["Requests/resource_filters"][0]["service_name"] = "HDFS"
tmpl["Requests/resource_filters"][0]["component_name"] = "NAMENODE"
return tmpl
def build_nodemanager_decommission_request(cluster_name, instances):
tmpl = copy.deepcopy(_COMMON_DECOMMISSION_TEMPLATE)
tmpl["RequestInfo"]["context"] = "Decommission NodeManagers"
tmpl["RequestInfo"]["parameters"]["slave_type"] = "NODEMANAGER"
tmpl["RequestInfo"]["parameters"]["excluded_hosts"] = ",".join(
[i.fqdn() for i in instances])
tmpl["RequestInfo"]["operation_level"]["cluster_name"] = cluster_name
tmpl["Requests/resource_filters"][0]["service_name"] = "YARN"
tmpl["Requests/resource_filters"][0]["component_name"] = "RESOURCEMANAGER"
return tmpl
def build_namenode_restart_request(cluster_name, nn_instance):
tmpl = copy.deepcopy(_COMMON_RESTART_TEMPLATE)
tmpl["RequestInfo"]["context"] = "Restart NameNode"
tmpl["RequestInfo"]["operation_level"]["cluster_name"] = cluster_name
tmpl["Requests/resource_filters"][0]["service_name"] = "HDFS"
tmpl["Requests/resource_filters"][0]["component_name"] = "NAMENODE"
tmpl["Requests/resource_filters"][0]["hosts"] = nn_instance.fqdn()
return tmpl
def build_resourcemanager_restart_request(cluster_name, rm_instance):
tmpl = copy.deepcopy(_COMMON_RESTART_TEMPLATE)
tmpl["RequestInfo"]["context"] = "Restart ResourceManager"
tmpl["RequestInfo"]["operation_level"]["cluster_name"] = cluster_name
tmpl["Requests/resource_filters"][0]["service_name"] = "YARN"
tmpl["Requests/resource_filters"][0]["component_name"] = "RESOURCEMANAGER"
tmpl["Requests/resource_filters"][0]["hosts"] = rm_instance.fqdn()
return tmpl
def build_stop_service_request(service_name):
tmpl = copy.deepcopy(_COMMON_RESTART_SERVICE_TEMPLATE)
tmpl["RequestInfo"]["context"] = (
"Restart %s service (stopping)" % service_name)
tmpl["Body"]["ServiceInfo"]["state"] = "INSTALLED"
return tmpl
def build_start_service_request(service_name):
tmpl = copy.deepcopy(_COMMON_RESTART_SERVICE_TEMPLATE)
tmpl["RequestInfo"]["context"] = (
"Restart %s service (starting)" % service_name)
tmpl["Body"]["ServiceInfo"]["state"] = "STARTED"
return tmpl

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -1,79 +0,0 @@
#!/usr/bin/env python3
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import sys
from oslo_serialization import jsonutils
import requests
def get_blueprint(ambari_address, username, password, cluster_name):
url = "http://%s:8080/api/v1/clusters/%s?format=blueprint" % (
ambari_address, cluster_name)
resp = requests.get(url, auth=(username, password))
resp.raise_for_status()
if resp.text:
return jsonutils.loads(resp.text)
def generate_config(blueprint):
configs = {}
for entity in blueprint["configurations"]:
for cfg in entity:
p = entity[cfg]["properties"]
if not p:
continue
if "content" in p:
del p["content"]
for k, v in p.items():
p[k] = " ".join(v.split())
if p:
configs[cfg] = p
return configs
def write_config(cfg, version):
with open("sahara/plugins/ambari/resources/configs-%s.json" % version,
"w") as fp:
jsonutils.dump(cfg, fp, indent=4, sort_keys=True,
separators=(",", ": "))
def main():
parser = argparse.ArgumentParser(
description="Ambari sample config generator")
parser.add_argument("--address", help="Ambari address",
default="localhost")
parser.add_argument("--username", help="Ambari username",
default="admin")
parser.add_argument("--password", help="Ambari password",
default="admin")
parser.add_argument("--cluster-name", help="Name of cluster",
default="cluster")
ns = parser.parse_args(sys.argv[1:])
bp = get_blueprint(ns.address,
ns.username,
ns.password,
ns.cluster_name)
cfg = generate_config(bp)
write_config(cfg, bp["Blueprints"]["stack_version"])
if __name__ == "__main__":
main()
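To illustrate what this generator script produces, here is the same flattening logic run against a tiny, hypothetical blueprint fragment (the input shape mirrors Ambari's `?format=blueprint` output; the sample values are invented):

```python
def flatten_blueprint(blueprint):
    """Mirror of generate_config: drop 'content' blobs, collapse whitespace."""
    configs = {}
    for entity in blueprint["configurations"]:
        for cfg in entity:
            props = dict(entity[cfg]["properties"])
            props.pop("content", None)
            for key, value in props.items():
                props[key] = " ".join(value.split())
            if props:
                configs[cfg] = props
    return configs


# Hypothetical sample input, not taken from a real cluster.
sample = {
    "configurations": [
        {"core-site": {"properties": {
            "fs.defaultFS": "hdfs://nn.example:8020",
            "content": "large template blob"}}},
        {"empty-site": {"properties": {}}},
    ],
}
```

Config sections that end up empty (like `empty-site` after stripping) are omitted entirely, matching the `if p:` guard in the script.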


@@ -1,8 +0,0 @@
#!/usr/bin/env bash
if [ $test_only -eq 0 ]; then
chkconfig ambari-server off
chkconfig ambari-agent off
else
exit 0
fi


@@ -1,12 +0,0 @@
#!/bin/bash
config=/etc/python/cert-verification.cfg
check=$(grep 'verify=disable' "$config" 2>/dev/null | wc -l)
if [ $check -eq 0 ]; then
if [ $test_only -eq 0 ]; then
[ -e $config ] && sed -i "s%^\(verify=\s*\).*$%verify=disable%" $config
else
exit 0
fi
fi


@@ -1,20 +0,0 @@
#!/bin/bash
check=$(systemctl --no-pager list-unit-files iptables.service | grep 'enabled' | wc -l)
if [ $check -eq 1 ]; then
if [ $test_only -eq 0 ]; then
if type -p systemctl && [[ "$(systemctl --no-pager list-unit-files firewalld)" =~ 'enabled' ]]; then
systemctl disable firewalld
fi
if type -p service; then
service ip6tables save
service iptables save
chkconfig ip6tables off
chkconfig iptables off
fi
else
exit 0
fi
fi


@@ -1,12 +0,0 @@
#!/bin/bash
check=$(grep 'SELINUX=disabled' /etc/selinux/config 2>/dev/null | wc -l)
if [ $check -eq 0 ]; then
if [ $test_only -eq 0 ]; then
config=/etc/selinux/config
[ -e $config ] && sed -i "s%^\(SELINUX=\s*\).*$%SELINUX=disabled%" $config
else
exit 0
fi
fi


@@ -1,31 +0,0 @@
#!/bin/bash
JAVA_RC="/etc/profile.d/99-java.sh"
JAVA_BIN_RC="/etc/profile.d/98-java-bin.sh"
if [ ! -f $JAVA_RC ]; then
if [ $test_only -eq 0 ]; then
case "$java_distro" in
openjdk )
JRE_HOME="/usr/lib/jvm/java-openjdk/jre"
JDK_HOME="/usr/lib/jvm/java-openjdk"
;;
oracle-java )
JRE_HOME="/usr/java/oracle-jdk"
JDK_HOME="/usr/java/oracle-jdk"
;;
esac
echo "export JAVA_HOME=$JRE_HOME" >> $JAVA_RC
chmod +x $JAVA_RC
echo "export PATH=$JRE_HOME/bin:\$PATH" >> $JAVA_BIN_RC
echo "export PATH=$JDK_HOME/bin:\$PATH" >> $JAVA_BIN_RC
chmod +x $JAVA_BIN_RC
alternatives --install /usr/bin/java java $JRE_HOME/bin/java 200000
alternatives --install /usr/bin/javac javac $JDK_HOME/bin/javac 200000
else
exit 0
fi
fi


@@ -1,11 +0,0 @@
#!/bin/bash
if [ ! -d /tmp/UnlimitedPolicy/ ]; then
if [ $test_only -eq 0 ]; then
mkdir /tmp/UnlimitedPolicy/
curl -sS https://tarballs.openstack.org/sahara-extra/dist/common-artifacts/local_policy.jar -o /tmp/UnlimitedPolicy/local_policy.jar
curl -sS https://tarballs.openstack.org/sahara-extra/dist/common-artifacts/US_export_policy.jar -o /tmp/UnlimitedPolicy/US_export_policy.jar
else
exit 0
fi
fi


@@ -1,9 +0,0 @@
#!/usr/bin/env bash
if [ ! -f /etc/yum.repos.d/ambari.repo ]; then
if [ $test_only -eq 0 ]; then
wget http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/$ambari_version/ambari.repo -O /etc/yum.repos.d/ambari.repo
else
exit 0
fi
fi


@@ -1,31 +0,0 @@
#!/bin/sh
hadoop="2.7.1"
HDFS_LIB_DIR=${hdfs_lib_dir:-"/usr/share/hadoop/lib"}
JAR_BUILD_DATE="2016-03-17"
SWIFT_LIB_URI="https://tarballs.openstack.org/sahara-extra/dist/hadoop-openstack/master/hadoop-openstack-${hadoop}.jar"
HADOOP_SWIFT_JAR_NAME=hadoop-openstack.jar
if [ ! -f $HDFS_LIB_DIR/$HADOOP_SWIFT_JAR_NAME ]; then
if [ $test_only -eq 0 ]; then
if [ -z "${swift_url:-}" ]; then
curl -sS -o $HDFS_LIB_DIR/$HADOOP_SWIFT_JAR_NAME $SWIFT_LIB_URI
else
curl -sS -o $HDFS_LIB_DIR/$HADOOP_SWIFT_JAR_NAME $swift_url
fi
if [ $? -ne 0 ]; then
printf "Could not download Swift Hadoop FS implementation.\nAborting\n"
exit 1
fi
chmod 0644 $HDFS_LIB_DIR/$HADOOP_SWIFT_JAR_NAME
else
exit 0
fi
fi


@@ -1,17 +0,0 @@
#!/bin/sh
AMBARI_AGENT_INI="/etc/ambari-agent/conf/ambari-agent.ini"
FORCE_HTTPS_CONF="force_https_protocol=PROTOCOL_TLSv1_2"
if [ $test_only -eq 0 ]; then
if grep -q '\[security\]' ${AMBARI_AGENT_INI}; then
if ! grep -q "${FORCE_HTTPS_CONF}" ${AMBARI_AGENT_INI}; then
sed -i '/^\[security\]/a\'${FORCE_HTTPS_CONF} ${AMBARI_AGENT_INI}
fi
else
printf "[security]\n${FORCE_HTTPS_CONF}\n" >>${AMBARI_AGENT_INI}
fi
else
grep -q "${FORCE_HTTPS_CONF}" ${AMBARI_AGENT_INI}
exit $?
fi


@@ -1,14 +0,0 @@
#!/bin/bash -x
# This is necessary due to the information on the link below
# https://community.hortonworks.com/articles/170133/hive-start-failed-because-of-ambari-error-mysql-co.html
if [ ! -L /var/lib/ambari-server/resources/mysql-connector-java.jar ]; then
if [ $test_only -eq 0 ]; then
ln -s /usr/share/java/mysql-connector-java.jar /var/lib/ambari-server/resources/mysql-connector-java.jar
else
exit 1
fi
else
exit 0
fi


@@ -1,41 +0,0 @@
#!/bin/sh
# NOTE: $(dirname $0) is read-only, use space under $TARGET_ROOT
JAVA_LOCATION=${JAVA_TARGET_LOCATION:-"/usr/java"}
JAVA_NAME="oracle-jdk"
JAVA_HOME=$JAVA_LOCATION/$JAVA_NAME
JAVA_DOWNLOAD_URL=${JAVA_DOWNLOAD_URL:-"http://download.oracle.com/otn-pub/java/jdk/7u51-b13/jdk-7u51-linux-x64.tar.gz"}
if [ ! -d $JAVA_LOCATION ]; then
if [ $test_only -eq 0 ]; then
echo "Begin: installation of Java"
mkdir -p $JAVA_LOCATION
if [ -n "$JAVA_DOWNLOAD_URL" ]; then
JAVA_FILE=$(basename $JAVA_DOWNLOAD_URL)
wget --no-check-certificate --no-cookies -c \
--header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" \
-O $JAVA_LOCATION/$JAVA_FILE $JAVA_DOWNLOAD_URL
elif [ -n "$JAVA_FILE" ]; then
install -D -g root -o root -m 0755 $(dirname $0)/$JAVA_FILE $JAVA_LOCATION
fi
cd $JAVA_LOCATION
echo "Decompressing Java archive"
printf "\n\n" | tar -zxf $JAVA_FILE
echo "Setting up $JAVA_NAME"
chown -R root:root $JAVA_LOCATION
JAVA_DIR=`ls -1 $JAVA_LOCATION | grep -v tar.gz`
ln -s $JAVA_LOCATION/$JAVA_DIR $JAVA_HOME
setup-java-home $JAVA_HOME $JAVA_HOME
rm $JAVA_FILE
echo "End: installation of Java"
else
exit 0
fi
fi


@@ -1,140 +0,0 @@
arguments:
ambari_version:
description: The version of Ambari to install. Defaults to 2.6.2.0.
default: 2.6.2.0
choices:
- 2.6.2.0 # HDP 2.6 / HDP 2.5 / HDP 2.4
- 2.4.3.0 # HDP 2.5 / HDP 2.4 / HDP 2.3
java_distro:
default: openjdk
description: The distribution of Java to install. Defaults to openjdk.
choices:
- openjdk
- oracle-java
hdfs_lib_dir:
default: /opt
description: The path to HDFS lib. Defaults to /opt.
required: False
swift_url:
default: https://tarballs.openstack.org/sahara-extra/dist/hadoop-openstack/master/hadoop-openstack-2.7.1.jar
description: Location of the swift jar file.
required: False
validators:
- os_case:
- ubuntu:
- script:
apt_update:
inline: apt-get update
- argument_case:
argument_name: java_distro
cases:
openjdk:
- os_case:
- redhat:
- package: java-1.8.0-openjdk-devel
- ubuntu:
- argument_case:
argument_name: ambari_version
cases:
2.6.2.0:
- package: openjdk-8-jdk
2.4.3.0:
- package: openjdk-7-jdk
oracle-java:
- script: common/oracle_java
- argument_case:
argument_name: ambari_version
cases:
2.6.2.0:
- os_case:
- redhat:
- package: libtirpc-devel
- ubuntu:
- package: libtirpc-dev
- os_case:
- redhat:
- script: centos/disable_selinux
- script: centos/disable_certificate_check
- script:
centos/setup_java_home:
env_vars: [java_distro]
- package: wget
- script:
centos/wget_repo:
env_vars: [ambari_version]
- package: redhat-lsb
- package:
- mariadb
- mariadb-libs
- mariadb-server
- mysql-connector-java
- package: ntp
- package:
- ambari-metrics-monitor
- ambari-server
- ambari-metrics-collector
- ambari-metrics-hadoop-sink
- package: nmap-ncat
- package: fuse-libs
- package: snappy-devel
- package: iptables-services
- ubuntu:
- script:
ubuntu/wget_repo:
env_vars: [ambari_version]
- script:
ubuntu/setup_java_home:
env_vars: [java_distro]
- package:
- ambari-metrics-assembly
- ambari-server
- ambari-logsearch-portal
- ambari-logsearch-logfeeder
- ambari-infra-solr-client
- ambari-infra-solr
- netcat
- iptables
- iptables-persistent
- package: fuse
- package:
- mysql-client
- mysql-server
- libmysql-java
- script: common/mysql_connector_java_link
- package: ambari-agent
- script: common/fix_tls_ambari_agent
- package:
- unzip
- zip
- curl
- tar
- rpcbind
- rng-tools
- os_case:
- redhat:
- script: centos/disable_ambari
- script: centos/disable_firewall
- script:
common/add_jar:
env_vars: [hdfs_lib_dir, swift_url]
- script:
centos/unlimited_security_artifacts:
env_vars: [unlimited_security_location]
- ubuntu:
- script:
common/add_jar:
env_vars: [hdfs_lib_dir, swift_url]
- os_case:
- redhat:
- package:
- krb5-server
- krb5-libs
- krb5-workstation
- ubuntu:
- package:
- krb5-admin-server
- libpam-krb5
- krb5-user
- ldap-utils


@@ -1,33 +0,0 @@
#!/bin/bash
JAVA_RC="/etc/profile.d/99-java.sh"
JAVA_BIN_RC="/etc/profile.d/98-java-bin.sh"
if [ ! -f $JAVA_RC ]; then
if [ $test_only -eq 0 ]; then
case "$java_distro" in
openjdk )
JDK_HOME=$(echo /usr/lib/jvm/java-?-openjdk-amd64)
JRE_HOME="$JDK_HOME/jre"
;;
oracle-java )
JRE_HOME="/usr/java/oracle-jdk"
JDK_HOME="/usr/java/oracle-jdk"
;;
esac
echo "export JAVA_HOME=$JRE_HOME" >> $JAVA_RC
chmod +x $JAVA_RC
echo "export PATH=$JRE_HOME/bin:\$PATH" >> $JAVA_BIN_RC
echo "export PATH=$JDK_HOME/bin:\$PATH" >> $JAVA_BIN_RC
chmod +x $JAVA_BIN_RC
update-alternatives --remove-all java
update-alternatives --remove-all javac
update-alternatives --install /usr/bin/java java $JRE_HOME/bin/java 200000
update-alternatives --install /usr/bin/javac javac $JDK_HOME/bin/javac 200000
else
exit 0
fi
fi


@@ -1,11 +0,0 @@
#!/usr/bin/env bash
if [ ! -f /etc/apt/sources.list.d/ambari.list ]; then
if [ $test_only -eq 0 ]; then
wget http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/$ambari_version/ambari.list -O /etc/apt/sources.list.d/ambari.list && \
apt-key adv --recv-keys --keyserver keyserver.ubuntu.com B9733A7A07513CAD && \
apt-get update
else
exit 0
fi
fi


@@ -1,223 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara.plugins import conductor
from sahara.plugins import context
from sahara.plugins import exceptions as ex
from sahara.plugins import utils
from sahara_plugin_ambari.i18n import _
from sahara_plugin_ambari.plugins.ambari import common
def validate(cluster_id):
ctx = context.ctx()
cluster = conductor.cluster_get(ctx, cluster_id)
_check_ambari(cluster)
_check_hdfs(cluster)
_check_yarn(cluster)
_check_oozie(cluster)
_check_hive(cluster)
_check_hbase(cluster)
_check_spark(cluster)
_check_ranger(cluster)
_check_storm(cluster)
def _check_ambari(cluster):
am_count = utils.get_instances_count(cluster, common.AMBARI_SERVER)
zk_count = utils.get_instances_count(cluster, common.ZOOKEEPER_SERVER)
if am_count != 1:
raise ex.InvalidComponentCountException(common.AMBARI_SERVER, 1,
am_count)
if zk_count == 0:
raise ex.InvalidComponentCountException(common.ZOOKEEPER_SERVER,
_("1 or more"), zk_count)
def _check_hdfs(cluster):
nn_count = utils.get_instances_count(cluster, common.NAMENODE)
dn_count = utils.get_instances_count(cluster, common.DATANODE)
snn_count = utils.get_instances_count(cluster, common.SECONDARY_NAMENODE)
if cluster.cluster_configs.get("general", {}).get(common.NAMENODE_HA):
_check_zk_ha(cluster)
_check_jn_ha(cluster)
if nn_count != 2:
raise ex.InvalidComponentCountException(common.NAMENODE, 2,
nn_count)
else:
if nn_count != 1:
raise ex.InvalidComponentCountException(common.NAMENODE, 1,
nn_count)
if snn_count != 1:
raise ex.InvalidComponentCountException(common.SECONDARY_NAMENODE,
1, snn_count)
if dn_count == 0:
raise ex.InvalidComponentCountException(
common.DATANODE, _("1 or more"), dn_count)
def _check_yarn(cluster):
rm_count = utils.get_instances_count(cluster, common.RESOURCEMANAGER)
nm_count = utils.get_instances_count(cluster, common.NODEMANAGER)
hs_count = utils.get_instances_count(cluster, common.HISTORYSERVER)
at_count = utils.get_instances_count(cluster, common.APP_TIMELINE_SERVER)
if cluster.cluster_configs.get("general", {}).get(
common.RESOURCEMANAGER_HA):
_check_zk_ha(cluster)
if rm_count != 2:
raise ex.InvalidComponentCountException(common.RESOURCEMANAGER, 2,
rm_count)
else:
if rm_count != 1:
raise ex.InvalidComponentCountException(common.RESOURCEMANAGER, 1,
rm_count)
if hs_count != 1:
raise ex.InvalidComponentCountException(common.HISTORYSERVER, 1,
hs_count)
if at_count != 1:
raise ex.InvalidComponentCountException(common.APP_TIMELINE_SERVER, 1,
at_count)
if nm_count == 0:
raise ex.InvalidComponentCountException(common.NODEMANAGER,
_("1 or more"), nm_count)
def _check_zk_ha(cluster):
zk_count = utils.get_instances_count(cluster, common.ZOOKEEPER_SERVER)
if zk_count < 3:
raise ex.InvalidComponentCountException(
common.ZOOKEEPER_SERVER,
_("3 or more. Odd number"),
zk_count, _("At least 3 ZooKeepers are required for HA"))
if zk_count % 2 != 1:
raise ex.InvalidComponentCountException(
common.ZOOKEEPER_SERVER,
_("Odd number"),
zk_count, _("Odd number of ZooKeepers are required for HA"))
def _check_jn_ha(cluster):
jn_count = utils.get_instances_count(cluster, common.JOURNAL_NODE)
if jn_count < 3:
raise ex.InvalidComponentCountException(
common.JOURNAL_NODE,
_("3 or more. Odd number"),
jn_count, _("At least 3 JournalNodes are required for HA"))
if jn_count % 2 != 1:
raise ex.InvalidComponentCountException(
common.JOURNAL_NODE,
_("Odd number"),
jn_count, _("Odd number of JournalNodes are required for HA"))
def _check_oozie(cluster):
count = utils.get_instances_count(cluster, common.OOZIE_SERVER)
if count > 1:
raise ex.InvalidComponentCountException(common.OOZIE_SERVER,
_("0 or 1"), count)
def _check_hive(cluster):
hs_count = utils.get_instances_count(cluster, common.HIVE_SERVER)
hm_count = utils.get_instances_count(cluster, common.HIVE_METASTORE)
if hs_count > 1:
raise ex.InvalidComponentCountException(common.HIVE_SERVER,
_("0 or 1"), hs_count)
if hm_count > 1:
raise ex.InvalidComponentCountException(common.HIVE_METASTORE,
_("0 or 1"), hm_count)
if hs_count == 0 and hm_count == 1:
raise ex.RequiredServiceMissingException(
common.HIVE_SERVER, required_by=common.HIVE_METASTORE)
if hs_count == 1 and hm_count == 0:
raise ex.RequiredServiceMissingException(
common.HIVE_METASTORE, required_by=common.HIVE_SERVER)
def _check_hbase(cluster):
hm_count = utils.get_instances_count(cluster, common.HBASE_MASTER)
hr_count = utils.get_instances_count(cluster, common.HBASE_REGIONSERVER)
if hm_count > 1:
raise ex.InvalidComponentCountException(common.HBASE_MASTER,
_("0 or 1"), hm_count)
if hm_count == 1 and hr_count == 0:
raise ex.RequiredServiceMissingException(
common.HBASE_REGIONSERVER, required_by=common.HBASE_MASTER)
if hr_count > 0 and hm_count == 0:
raise ex.RequiredServiceMissingException(
common.HBASE_MASTER, required_by=common.HBASE_REGIONSERVER)
def _check_spark(cluster):
count = utils.get_instances_count(cluster, common.SPARK_JOBHISTORYSERVER)
if count > 1:
raise ex.InvalidComponentCountException(common.SPARK_JOBHISTORYSERVER,
_("0 or 1"), count)
def _check_ranger(cluster):
ra_count = utils.get_instances_count(cluster, common.RANGER_ADMIN)
ru_count = utils.get_instances_count(cluster, common.RANGER_USERSYNC)
if ra_count > 1:
raise ex.InvalidComponentCountException(common.RANGER_ADMIN,
_("0 or 1"), ra_count)
if ru_count > 1:
raise ex.InvalidComponentCountException(common.RANGER_USERSYNC,
_("0 or 1"), ru_count)
if ra_count == 1 and ru_count == 0:
raise ex.RequiredServiceMissingException(
common.RANGER_USERSYNC, required_by=common.RANGER_ADMIN)
if ra_count == 0 and ru_count == 1:
raise ex.RequiredServiceMissingException(
common.RANGER_ADMIN, required_by=common.RANGER_USERSYNC)
def _check_storm(cluster):
dr_count = utils.get_instances_count(cluster, common.DRPC_SERVER)
ni_count = utils.get_instances_count(cluster, common.NIMBUS)
su_count = utils.get_instances_count(cluster, common.STORM_UI_SERVER)
sv_count = utils.get_instances_count(cluster, common.SUPERVISOR)
if dr_count > 1:
raise ex.InvalidComponentCountException(common.DRPC_SERVER,
_("0 or 1"), dr_count)
if ni_count > 1:
raise ex.InvalidComponentCountException(common.NIMBUS,
_("0 or 1"), ni_count)
if su_count > 1:
raise ex.InvalidComponentCountException(common.STORM_UI_SERVER,
_("0 or 1"), su_count)
if dr_count == 0 and ni_count == 1:
raise ex.RequiredServiceMissingException(
common.DRPC_SERVER, required_by=common.NIMBUS)
if dr_count == 1 and ni_count == 0:
raise ex.RequiredServiceMissingException(
common.NIMBUS, required_by=common.DRPC_SERVER)
if su_count == 1 and (dr_count == 0 or ni_count == 0):
raise ex.RequiredServiceMissingException(
common.NIMBUS, required_by=common.STORM_UI_SERVER)
if dr_count == 1 and sv_count == 0:
raise ex.RequiredServiceMissingException(
common.SUPERVISOR, required_by=common.DRPC_SERVER)
if sv_count > 0 and dr_count == 0:
raise ex.RequiredServiceMissingException(
common.DRPC_SERVER, required_by=common.SUPERVISOR)
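The validation helpers above all reduce to the same two rules: a component may appear at most once, and certain components require each other's presence. A standalone sketch of that count-then-raise pattern (hypothetical names, plain `ValueError` instead of Sahara's exception classes):

```python
def check_component_counts(counts, at_most_one=(), requires=()):
    """Validate a mapping of component name -> instance count.

    at_most_one: components allowed 0 or 1 instances.
    requires: (component, required_by) pairs - if required_by is
    deployed, component must be deployed too.
    """
    for name in at_most_one:
        if counts.get(name, 0) > 1:
            raise ValueError("%s: 0 or 1 expected, got %d"
                             % (name, counts[name]))
    for component, required_by in requires:
        if counts.get(required_by, 0) and not counts.get(component, 0):
            raise ValueError("%s is required by %s"
                             % (component, required_by))


# Mirrors _check_hbase: a master needs region servers and vice versa.
check_component_counts(
    {"HBASE_MASTER": 1, "HBASE_REGIONSERVER": 2},
    at_most_one=("HBASE_MASTER",),
    requires=(("HBASE_REGIONSERVER", "HBASE_MASTER"),
              ("HBASE_MASTER", "HBASE_REGIONSERVER")))
```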

View File

@ -1,17 +0,0 @@
# Copyright (c) 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara_plugin_ambari.utils import patches
patches.patch_all()

View File

@ -1,53 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslotest import base
from sahara.plugins import context
from sahara.plugins import db as db_api
from sahara.plugins import main
from sahara.plugins import utils


class SaharaTestCase(base.BaseTestCase):
def setUp(self):
super(SaharaTestCase, self).setUp()
self.setup_context()
utils.rpc_setup('all-in-one')
def setup_context(self, username="test_user", tenant_id="tenant_1",
auth_token="test_auth_token", tenant_name='test_tenant',
service_catalog=None, **kwargs):
self.addCleanup(context.set_ctx,
context.ctx() if context.has_ctx() else None)
context.set_ctx(context.PluginsContext(
username=username, tenant_id=tenant_id,
auth_token=auth_token, service_catalog=service_catalog or {},
tenant_name=tenant_name, **kwargs))
def override_config(self, name, override, group=None):
main.set_override(name, override, group)
        self.addCleanup(main.clear_override, name, group)


class SaharaWithDbTestCase(SaharaTestCase):
def setUp(self):
super(SaharaWithDbTestCase, self).setUp()
self.override_config('connection', "sqlite://", group='database')
db_api.setup_db()
self.addCleanup(db_api.drop_db)

View File

@ -1,372 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest import mock
from oslo_serialization import jsonutils
from sahara.plugins import exceptions as p_exc
from sahara_plugin_ambari.plugins.ambari import client as ambari_client
from sahara_plugin_ambari.tests.unit import base


class AmbariClientTestCase(base.SaharaTestCase):
def setUp(self):
super(AmbariClientTestCase, self).setUp()
self.http_client = mock.Mock()
self.http_client.get = mock.Mock()
self.http_client.post = mock.Mock()
self.http_client.put = mock.Mock()
self.http_client.delete = mock.Mock()
self.headers = {"X-Requested-By": "sahara"}
self.remote = mock.Mock()
self.remote.get_http_client.return_value = self.http_client
self.instance = mock.Mock()
self.instance.remote.return_value = self.remote
self.instance.management_ip = "1.2.3.4"
self.good_pending_resp = mock.MagicMock()
self.good_pending_resp.status_code = 200
self.good_pending_resp.text = ('{"Requests": '
'{"id": 1, "status": "PENDING"}}')
def test_init_client_default(self):
client = ambari_client.AmbariClient(self.instance)
self.assertEqual(self.http_client, client._http_client)
self.assertEqual("http://1.2.3.4:8080/api/v1", client._base_url)
self.assertEqual("admin", client._auth.username)
self.assertEqual("admin", client._auth.password)
self.remote.get_http_client.assert_called_with("8080")
def test_init_client_manual(self):
client = ambari_client.AmbariClient(self.instance, port="1234",
username="user", password="pass")
self.assertEqual("http://1.2.3.4:1234/api/v1", client._base_url)
self.assertEqual("user", client._auth.username)
self.assertEqual("pass", client._auth.password)
self.remote.get_http_client.assert_called_with("1234")
def test_close_http_session(self):
with ambari_client.AmbariClient(self.instance):
pass
self.remote.close_http_session.assert_called_with("8080")
def test_get_method(self):
client = ambari_client.AmbariClient(self.instance)
client.get("http://spam")
self.http_client.get.assert_called_with(
"http://spam", verify=False, auth=client._auth,
headers=self.headers)
def test_post_method(self):
client = ambari_client.AmbariClient(self.instance)
client.post("http://spam", data="data")
self.http_client.post.assert_called_with(
"http://spam", data="data", verify=False, auth=client._auth,
headers=self.headers)
def test_put_method(self):
client = ambari_client.AmbariClient(self.instance)
client.put("http://spam", data="data")
self.http_client.put.assert_called_with(
"http://spam", data="data", verify=False, auth=client._auth,
headers=self.headers)
def test_delete_method(self):
client = ambari_client.AmbariClient(self.instance)
client.delete("http://spam")
self.http_client.delete.assert_called_with(
"http://spam", verify=False, auth=client._auth,
headers=self.headers)
def test_import_credential(self):
resp = mock.Mock()
resp.text = ""
resp.status_code = 200
self.http_client.post.return_value = resp
client = ambari_client.AmbariClient(self.instance)
client.import_credential("test", alias="credential",
data={"some": "data"})
self.http_client.post.assert_called_once_with(
"http://1.2.3.4:8080/api/v1/clusters/test/credentials/credential",
verify=False, data=jsonutils.dumps({"some": "data"}),
auth=client._auth, headers=self.headers)
def test_get_credential(self):
resp = mock.Mock()
resp.text = ""
resp.status_code = 200
self.http_client.get.return_value = resp
client = ambari_client.AmbariClient(self.instance)
client.get_credential("test", alias="credential")
self.http_client.get.assert_called_once_with(
"http://1.2.3.4:8080/api/v1/clusters/test/credentials/credential",
verify=False, auth=client._auth, headers=self.headers)
resp.status_code = 404
self.assertRaises(ambari_client.AmbariNotFound,
ambari_client.AmbariClient.check_response,
resp, True)
@mock.patch("sahara_plugin_ambari.plugins.ambari.client."
"AmbariClient.check_response")
def test_get_alerts_data(self, mock_check_response):
cluster = mock.Mock()
cluster.name = "test_cluster"
client = ambari_client.AmbariClient(self.instance)
# check_response returning empty json
mock_check_response.return_value = {}
res = client.get_alerts_data(cluster)
self.assertEqual(res, [])
self.http_client.get.assert_called_once_with(
"http://1.2.3.4:8080/api/v1/clusters/test_cluster/alerts?fields=*",
verify=False, auth=client._auth,
headers=self.headers)
mock_check_response.assert_called_once()
# check_response returning json with items as key
mock_check_response.return_value = {'items': ['item1', 'item2']}
res = client.get_alerts_data(cluster)
self.assertEqual(res, ['item1', 'item2'])
self.http_client.get.assert_called_with(
"http://1.2.3.4:8080/api/v1/clusters/test_cluster/alerts?fields=*",
verify=False, auth=client._auth,
headers=self.headers)
self.assertEqual(self.http_client.get.call_count, 2)
self.assertEqual(mock_check_response.call_count, 2)
def test_check_response(self):
resp = mock.Mock()
resp.status_code = 404
self.assertRaises(ambari_client.AmbariNotFound,
ambari_client.AmbariClient.check_response,
resp, True)
resp.status_code = 200
resp.text = '{"json": "example"}'
resp.raise_for_status = mock.Mock()
res = ambari_client.AmbariClient.check_response(resp)
self.assertEqual(res, {"json": "example"})
resp.raise_for_status.assert_called_once()
def test_req_id(self):
resp = mock.Mock()
resp.text = None
self.assertRaises(p_exc.HadoopProvisionError,
ambari_client.AmbariClient.req_id, resp)
resp.text = '{"text" : "example"}'
self.assertRaises(p_exc.HadoopProvisionError,
ambari_client.AmbariClient.req_id, resp)
resp.text = '{"Requests": {"example" : "text"}}'
self.assertRaises(p_exc.HadoopProvisionError,
ambari_client.AmbariClient.req_id, resp)
resp.text = '{"Requests" : {"id" : "test_id"}}'
res = ambari_client.AmbariClient.req_id(resp)
self.assertEqual(res, "test_id")
def test_get_registered_hosts(self):
client = ambari_client.AmbariClient(self.instance)
resp_data = """{
"href" : "http://1.2.3.4:8080/api/v1/hosts",
"items" : [
{
"href" : "http://1.2.3.4:8080/api/v1/hosts/host1",
"Hosts" : {
"host_name" : "host1"
}
},
{
"href" : "http://1.2.3.4:8080/api/v1/hosts/host2",
"Hosts" : {
"host_name" : "host2"
}
},
{
"href" : "http://1.2.3.4:8080/api/v1/hosts/host3",
"Hosts" : {
"host_name" : "host3"
}
}
]
}"""
resp = mock.Mock()
resp.text = resp_data
resp.status_code = 200
self.http_client.get.return_value = resp
hosts = client.get_registered_hosts()
self.http_client.get.assert_called_with(
"http://1.2.3.4:8080/api/v1/hosts", verify=False,
auth=client._auth, headers=self.headers)
self.assertEqual(3, len(hosts))
self.assertEqual("host1", hosts[0]["Hosts"]["host_name"])
self.assertEqual("host2", hosts[1]["Hosts"]["host_name"])
self.assertEqual("host3", hosts[2]["Hosts"]["host_name"])
def test_update_user_password(self):
client = ambari_client.AmbariClient(self.instance)
resp = mock.Mock()
resp.text = ""
resp.status_code = 200
self.http_client.put.return_value = resp
client.update_user_password("bart", "old_pw", "new_pw")
exp_req = jsonutils.dumps({
"Users": {
"old_password": "old_pw",
"password": "new_pw"
}
})
self.http_client.put.assert_called_with(
"http://1.2.3.4:8080/api/v1/users/bart", data=exp_req,
verify=False, auth=client._auth, headers=self.headers)
def test_create_blueprint(self):
client = ambari_client.AmbariClient(self.instance)
resp = mock.Mock()
resp.text = ""
resp.status_code = 200
self.http_client.post.return_value = resp
client.create_blueprint("cluster_name", {"some": "data"})
self.http_client.post.assert_called_with(
"http://1.2.3.4:8080/api/v1/blueprints/cluster_name",
data=jsonutils.dumps({"some": "data"}), verify=False,
auth=client._auth, headers=self.headers)
def test_create_cluster(self):
client = ambari_client.AmbariClient(self.instance)
resp = mock.Mock()
resp.text = """{
"Requests": {
"id": 1,
"status": "InProgress"
}
}"""
resp.status_code = 200
self.http_client.post.return_value = resp
req_info = client.create_cluster("cluster_name", {"some": "data"})
self.assertEqual(1, req_info["id"])
self.http_client.post.assert_called_with(
"http://1.2.3.4:8080/api/v1/clusters/cluster_name",
data=jsonutils.dumps({"some": "data"}), verify=False,
auth=client._auth, headers=self.headers)
def test_add_host_to_cluster(self):
client = ambari_client.AmbariClient(self.instance)
resp = mock.Mock()
resp.text = ""
resp.status_code = 200
self.http_client.post.return_value = resp
instance = mock.MagicMock()
instance.fqdn.return_value = "i1"
instance.cluster.name = "cl"
client.add_host_to_cluster(instance)
self.http_client.post.assert_called_with(
"http://1.2.3.4:8080/api/v1/clusters/cl/hosts/i1",
verify=False, auth=client._auth, headers=self.headers)
def test_start_process_on_host(self):
client = ambari_client.AmbariClient(self.instance)
self.http_client.put.return_value = self.good_pending_resp
client.wait_ambari_request = mock.MagicMock()
instance = mock.MagicMock()
instance.fqdn.return_value = "i1"
instance.cluster.name = "cl"
client.start_service_on_host(instance, "HDFS", 'STATE')
self.http_client.put.assert_called_with(
"http://1.2.3.4:8080/api/v1/clusters/"
"cl/hosts/i1/host_components/HDFS",
data=jsonutils.dumps(
{
"HostRoles": {"state": "STATE"},
"RequestInfo": {
"context": "Starting service HDFS, "
"moving to state STATE"}
}),
verify=False, auth=client._auth, headers=self.headers)
def test_stop_process_on_host(self):
client = ambari_client.AmbariClient(self.instance)
check_mock = mock.MagicMock()
check_mock.status_code = 200
check_mock.text = '{"HostRoles": {"state": "SOME_STATE"}}'
self.http_client.get.return_value = check_mock
self.http_client.put.return_value = self.good_pending_resp
client.wait_ambari_request = mock.MagicMock()
instance = mock.MagicMock()
instance.fqdn.return_value = "i1"
client.stop_process_on_host("cluster_name", instance, "p1")
self.http_client.put.assert_called_with(
"http://1.2.3.4:8080/api/v1/clusters/"
"cluster_name/hosts/i1/host_components/p1",
data=jsonutils.dumps(
{
"HostRoles": {"state": "INSTALLED"},
"RequestInfo": {"context": "Stopping p1"}
}),
verify=False, auth=client._auth, headers=self.headers)
@mock.patch("sahara_plugin_ambari.plugins.ambari.client.context")
def test_wait_ambari_request(self, mock_context):
client = ambari_client.AmbariClient(self.instance)
check_mock = mock.MagicMock()
d1 = {"request_context": "r1", "request_status": "PENDING",
"progress_percent": 42}
d2 = {"request_context": "r1", "request_status": "COMPLETED",
"progress_percent": 100}
check_mock.side_effect = [d1, d2]
client.check_request_status = check_mock
client.wait_ambari_request("id1", "c1")
check_mock.assert_has_calls([mock.call("c1", "id1"),
mock.call("c1", "id1")])
@mock.patch("sahara_plugin_ambari.plugins.ambari.client.context")
def test_wait_ambari_request_error(self, mock_context):
client = ambari_client.AmbariClient(self.instance)
check_mock = mock.MagicMock()
d1 = {"request_context": "r1", "request_status": "ERROR",
"progress_percent": 146}
check_mock.return_value = d1
client.check_request_status = check_mock
self.assertRaises(p_exc.HadoopProvisionError,
client.wait_ambari_request, "id1", "c1")
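The client tests above all follow one recipe: stub the HTTP layer with `mock.Mock`, drive the client under test, then assert on the exact call that reached the stub. A minimal standalone version of that recipe, using a hypothetical `TinyClient` rather than the Ambari client itself:

```python
from unittest import mock


class TinyClient:
    """Toy REST client: builds a URL and delegates to an HTTP session."""

    def __init__(self, http, base_url):
        self._http = http
        self._base_url = base_url

    def get_host(self, name):
        return self._http.get(self._base_url + "/hosts/" + name)


# Stub the transport; no network involved.
http = mock.Mock()
http.get.return_value = mock.Mock(status_code=200)

client = TinyClient(http, "http://1.2.3.4:8080/api/v1")
resp = client.get_host("host1")

# The assertion style used throughout the removed tests: verify the
# exact URL (and kwargs, if any) the client handed to the transport.
http.get.assert_called_once_with("http://1.2.3.4:8080/api/v1/hosts/host1")
assert resp.status_code == 200
```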

View File

@ -1,69 +0,0 @@
# Copyright (c) 2017 EasyStack Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest import mock
from sahara_plugin_ambari.plugins.ambari import common
from sahara_plugin_ambari.tests.unit import base


class AmbariCommonTestCase(base.SaharaTestCase):
def setUp(self):
super(AmbariCommonTestCase, self).setUp()
self.master_ng = mock.Mock()
self.master_ng.node_processes = ['Ambari', 'HiveServer']
self.worker_ng = mock.Mock()
self.worker_ng.node_processes = ['DataNode', 'Oozie']
self.cluster = mock.Mock()
self.cluster.node_groups = [self.master_ng, self.worker_ng]
def test_get_ambari_proc_list(self):
procs = common.get_ambari_proc_list(self.master_ng)
expected = ['METRICS_COLLECTOR', 'HIVE_SERVER',
'MYSQL_SERVER', 'WEBHCAT_SERVER']
self.assertEqual(procs, expected)
procs = common.get_ambari_proc_list(self.worker_ng)
expected = ['DATANODE', 'OOZIE_SERVER', 'PIG']
self.assertEqual(procs, expected)
@mock.patch('sahara.plugins.kerberos.is_kerberos_security_enabled')
def test_get_clients(self, kerberos):
kerberos.return_value = False
clients = common.get_clients(self.cluster)
expected = ['OOZIE_CLIENT', 'HIVE_CLIENT', 'HDFS_CLIENT',
'TEZ_CLIENT', 'METRICS_MONITOR']
for e in expected:
self.assertIn(e, clients)
kerberos.return_value = True
clients = common.get_clients(self.cluster)
expected = ['OOZIE_CLIENT', 'HIVE_CLIENT', 'HDFS_CLIENT',
'TEZ_CLIENT', 'METRICS_MONITOR', 'KERBEROS_CLIENT']
for e in expected:
self.assertIn(e, clients)
def test_instances_have_process(self):
instance1 = mock.Mock()
instance2 = mock.Mock()
instance1.node_group = self.master_ng
instance2.node_group = self.worker_ng
self.assertTrue(common.instances_have_process([instance1], "Ambari"))
self.assertTrue(common.instances_have_process([instance1, instance2],
"DataNode"))
self.assertFalse(common.instances_have_process([instance1, instance2],
"DRPC Server"))
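`get_ambari_proc_list` above maps Sahara-facing process names ("HiveServer") to Ambari component names ("HIVE_SERVER"), with some names expanding to several implied components ("MYSQL_SERVER", "WEBHCAT_SERVER"). The underlying technique is a one-to-many lookup table; the entries below are inferred from the expected values in the test, not the plugin's actual table:

```python
# Hypothetical subset of the Sahara-name -> Ambari-components mapping.
PROC_MAP = {
    "Ambari": ["METRICS_COLLECTOR"],
    "HiveServer": ["HIVE_SERVER", "MYSQL_SERVER", "WEBHCAT_SERVER"],
    "DataNode": ["DATANODE"],
    "Oozie": ["OOZIE_SERVER", "PIG"],
}


def ambari_proc_list(node_processes):
    procs = []
    for p in node_processes:
        # Unknown names fall back to an upper-cased guess.
        procs.extend(PROC_MAP.get(p, [p.upper()]))
    return procs
```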

View File

@ -1,164 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
from unittest import mock
from sahara_plugin_ambari.plugins.ambari import configs
from sahara_plugin_ambari.tests.unit import base


class AmbariConfigsTestCase(base.SaharaTestCase):
def setUp(self):
super(AmbariConfigsTestCase, self).setUp()
configs.load_configs("2.3")
self.ng = mock.Mock()
self.ng.node_configs = {}
self.ng.cluster = mock.Mock()
self.ng.cluster.hadoop_version = "2.3"
self.instance = mock.Mock()
self.instance.node_group = self.ng
self.instance.storage_paths = mock.Mock()
self.instance.storage_paths.return_value = ["/data1", "/data2"]
def assertConfigEqual(self, expected, actual):
self.assertEqual(len(expected), len(actual))
cnt_ex = collections.Counter()
cnt_act = collections.Counter()
for i, ex in enumerate(expected):
for j, act in enumerate(actual):
if ex == act:
cnt_ex[i] += 1
cnt_act[j] += 1
self.assertEqual(len(expected), len(cnt_ex))
self.assertEqual(len(actual), len(cnt_act))
def test_get_service_to_configs_map(self):
self.assertIsNone(configs.SERVICES_TO_CONFIGS_MAP)
configs_map = configs.get_service_to_configs_map()
configs_expected = {
'ZooKeeper': ['zoo.cfg', 'zookeeper-env'],
'Knox': ['knox-env', 'ranger-knox-plugin-properties',
'gateway-site'],
'YARN': ['yarn-site', 'mapred-env', 'yarn-env',
'capacity-scheduler', 'mapred-site'],
'general': ['cluster-env'], 'Flume': ['flume-env'],
'Ambari': ['ams-hbase-policy', 'ams-site', 'ams-env',
'ams-hbase-site', 'ams-hbase-env',
'ams-hbase-security-site'],
'HDFS': ['core-site', 'ranger-hdfs-plugin-properties',
'hadoop-policy', 'hdfs-site', 'hadoop-env'],
'Ranger': ['ranger-env', 'admin-properties',
'usersync-properties', 'ranger-site'],
'Spark': ['spark-defaults', 'spark-env'],
'Hive': ['hive-env', 'hive-site', 'hiveserver2-site',
'ranger-hive-plugin-properties'],
'Storm': ['ranger-storm-plugin-properties', 'storm-site',
'storm-env'],
'Oozie': ['oozie-env', 'oozie-site', 'tez-site'],
'HBase': ['ranger-hbase-plugin-properties', 'hbase-env',
'hbase-site', 'hbase-policy'],
'Sqoop': ['sqoop-env'], 'Kafka': ['kafka-broker', 'kafka-env'],
'Falcon': ['falcon-startup.properties',
'falcon-runtime.properties', 'falcon-env']
}
for (key, item) in configs_map.items():
item.sort()
for (key, item) in configs_expected.items():
item.sort()
self.assertEqual(configs_map, configs_expected)
self.assertIsNotNone(configs.SERVICES_TO_CONFIGS_MAP)
def test_get_instance_params_default(self):
instance_configs = configs.get_instance_params(self.instance)
expected = [
{
"hdfs-site": {
"dfs.datanode.data.dir":
"/data1/hdfs/data,/data2/hdfs/data",
"dfs.journalnode.edits.dir":
"/data1/hdfs/journalnode,/data2/hdfs/journalnode",
"dfs.namenode.checkpoint.dir":
"/data1/hdfs/namesecondary,/data2/hdfs/namesecondary",
"dfs.namenode.name.dir":
"/data1/hdfs/namenode,/data2/hdfs/namenode"
}
},
{
"yarn-site": {
"yarn.nodemanager.local-dirs":
"/data1/yarn/local,/data2/yarn/local",
"yarn.nodemanager.log-dirs":
"/data1/yarn/log,/data2/yarn/log",
"yarn.timeline-service.leveldb-timeline-store.path":
"/data1/yarn/timeline,/data2/yarn/timeline"
}
},
{
"oozie-site": {
"oozie.service.AuthorizationService.security.enabled":
"false"
}
}
]
self.assertConfigEqual(expected, instance_configs)
def test_get_instance_params(self):
self.ng.node_configs = {
"YARN": {
"mapreduce.map.java.opts": "-Dk=v",
"yarn.scheduler.minimum-allocation-mb": "256"
}
}
instance_configs = configs.get_instance_params(self.instance)
expected = [
{
"hdfs-site": {
"dfs.datanode.data.dir":
"/data1/hdfs/data,/data2/hdfs/data",
"dfs.journalnode.edits.dir":
"/data1/hdfs/journalnode,/data2/hdfs/journalnode",
"dfs.namenode.checkpoint.dir":
"/data1/hdfs/namesecondary,/data2/hdfs/namesecondary",
"dfs.namenode.name.dir":
"/data1/hdfs/namenode,/data2/hdfs/namenode"
}
},
{
"mapred-site": {
"mapreduce.map.java.opts": "-Dk=v"
}
},
{
"yarn-site": {
"yarn.nodemanager.local-dirs":
"/data1/yarn/local,/data2/yarn/local",
"yarn.nodemanager.log-dirs":
"/data1/yarn/log,/data2/yarn/log",
"yarn.scheduler.minimum-allocation-mb": "256",
"yarn.timeline-service.leveldb-timeline-store.path":
"/data1/yarn/timeline,/data2/yarn/timeline"
}
},
{
"oozie-site": {
"oozie.service.AuthorizationService.security.enabled":
"false"
}
}
]
self.assertConfigEqual(expected, instance_configs)
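`assertConfigEqual` above has to compare two lists of config dicts while ignoring order, and dicts are unhashable, so it cannot simply build a set or a `collections.Counter` of the elements. A simpler way to get the same order-insensitive equality (a sketch, not the plugin's helper):

```python
import json


def configs_equal(expected, actual):
    # Canonicalize each dict to a sorted-key JSON string, which IS
    # hashable and sortable, then compare the sorted lists.
    def key(d):
        return json.dumps(d, sort_keys=True)
    return sorted(map(key, expected)) == sorted(map(key, actual))


a = [{"hdfs-site": {"dfs.nameservices": "hdfs-ha"}}, {"yarn-site": {}}]
b = [{"yarn-site": {}}, {"hdfs-site": {"dfs.nameservices": "hdfs-ha"}}]
assert configs_equal(a, b)
assert not configs_equal(a, [{"yarn-site": {}}])
```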

View File

@ -1,102 +0,0 @@
# Copyright (c) 2016 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from functools import wraps
from unittest import mock
from oslo_serialization import jsonutils


def mock_event_wrapper(*args, **kwargs):
def decorator(f):
@wraps(f)
def decorated_function(*args, **kwargs):
return f(*args, **kwargs)
return decorated_function
    return decorator


mock.patch('sahara.plugins.utils.event_wrapper', mock_event_wrapper).start()

from sahara.plugins import utils as pu
from sahara_plugin_ambari.plugins.ambari import deploy
from sahara_plugin_ambari.tests.unit import base


class TestDeploy(base.SaharaTestCase):
@mock.patch('sahara.plugins.utils.add_provisioning_step')
@mock.patch('sahara.plugins.utils.check_cluster_exists')
@mock.patch('sahara.plugins.utils.get_instance')
@mock.patch('sahara_plugin_ambari.plugins.ambari.client.AmbariClient.get')
@mock.patch('sahara_plugin_ambari.plugins.ambari.client.'
'AmbariClient.delete')
def test_cleanup_config_groups(self, client_delete, client_get,
get_instance, check_cluster_exists,
add_provisioning_step):
def response(data):
fake = mock.Mock()
fake.text = jsonutils.dumps(data)
fake.raise_for_status.return_value = True
return fake
fake_config_groups = {
'items': [
{'ConfigGroup': {'id': "1"}},
{'ConfigGroup': {'id': "2"}}
]
}
config_group1 = {
'ConfigGroup': {'id': '1', 'group_name': "test:fakename"}}
config_group2 = {
'ConfigGroup': {'id': '2', 'group_name': "test:toremove"}}
pu.event_wrapper = mock_event_wrapper
fake_ambari = mock.Mock()
fake_ambari.management_ip = "127.0.0.1"
get_instance.return_value = fake_ambari
inst1 = mock.Mock()
inst1.instance_name = "toremove"
cl = mock.Mock(extra={'ambari_password': "SUPER_STRONG"})
cl.name = "test"
client_get.side_effect = [
response(fake_config_groups), response(config_group1),
response(config_group2)
]
client_delete.side_effect = [response({})]
check_cluster_exists.return_value = True
deploy.cleanup_config_groups(cl, [inst1])
get_calls = [
mock.call(
'http://127.0.0.1:8080/api/v1/clusters/test/config_groups'),
mock.call(
'http://127.0.0.1:8080/api/v1/clusters/test/config_groups/1'),
mock.call(
'http://127.0.0.1:8080/api/v1/clusters/test/config_groups/2')
]
self.assertEqual(get_calls, client_get.call_args_list)
delete_calls = [
mock.call(
'http://127.0.0.1:8080/api/v1/clusters/test/config_groups/2')
]
self.assertEqual(delete_calls, client_delete.call_args_list)
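`test_cleanup_config_groups` above queues one canned HTTP response per expected call via `side_effect`, then checks the recorded call list afterwards. The core of that technique in isolation (hypothetical URLs and response strings):

```python
from unittest import mock

http_get = mock.Mock()
# Each call to the mock pops the next canned response, in order.
http_get.side_effect = ["resp-list", "resp-group-1", "resp-group-2"]

assert http_get("/config_groups") == "resp-list"
assert http_get("/config_groups/1") == "resp-group-1"
assert http_get("/config_groups/2") == "resp-group-2"

# call_args_list records every call for later comparison, just like
# the assertions on client_get/client_delete in the test above.
assert http_get.call_args_list == [
    mock.call("/config_groups"),
    mock.call("/config_groups/1"),
    mock.call("/config_groups/2"),
]
```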

View File

@ -1,263 +0,0 @@
# Copyright (c) 2016 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from unittest import mock
from sahara_plugin_ambari.plugins.ambari import ha_helper as ha
from sahara_plugin_ambari.tests.unit import base


class HAHelperTestCase(base.SaharaTestCase):
def setUp(self):
super(HAHelperTestCase, self).setUp()
self.cluster = mock.MagicMock()
self.cluster.name = "clusterName"
for i in range(1, 4):
instance = mock.MagicMock()
instance.fqdn.return_value = "in{}".format(i)
instance.instance_name = "in{}name".format(i)
setattr(self, "in{}".format(i), instance)
self.bp = {
"host_groups": [
{
"components": [
{"name": "NAMENODE"}
]
}
],
"configurations": [
{"hdfs-site": {}},
{"yarn-site": {}},
{"core-site": {}},
{"hadoop-env": {}},
{"zoo.cfg": {}}
]
}
@mock.patch("sahara_plugin_ambari.plugins.ambari.ha_helper."
"_set_high_zk_limits")
@mock.patch("sahara_plugin_ambari.plugins.ambari.ha_helper."
"_set_default_fs")
def test_update_bp_ha_common(self, mock__set_default_fs,
mock__set_high_zk_limits):
ha.update_bp_ha_common(self.cluster, copy.deepcopy(self.bp))
self.assertTrue(mock__set_default_fs.called)
self.assertTrue(mock__set_high_zk_limits.called)
@mock.patch("sahara_plugin_ambari.plugins.ambari.ha_helper."
"_configure_hdfs_site")
@mock.patch("sahara_plugin_ambari.plugins.ambari.ha_helper._set_zk_quorum")
@mock.patch("sahara_plugin_ambari.plugins.ambari.ha_helper."
"_add_zkfc_to_namenodes")
def test_update_bp_for_namenode_ha(self, mock__add_zkfc_to_namenodes,
mock__set_zk_quorum,
mock__configure_hdfs_site):
ha.update_bp_for_namenode_ha(self.cluster, copy.deepcopy(self.bp))
self.assertTrue(mock__add_zkfc_to_namenodes.called)
self.assertTrue(mock__set_zk_quorum.called)
self.assertTrue(mock__configure_hdfs_site.called)
@mock.patch("sahara_plugin_ambari.plugins.ambari.ha_helper."
"_set_default_fs")
@mock.patch("sahara_plugin_ambari.plugins.ambari.ha_helper._set_zk_quorum")
@mock.patch("sahara_plugin_ambari.plugins.ambari.ha_helper."
"_configure_yarn_site")
def test_update_bp_for_resourcemanager_ha(self, mock__configure_yarn_site,
mock__set_zk_quorum,
mock__set_default_fs):
ha.update_bp_for_resourcemanager_ha(self.cluster,
copy.deepcopy(self.bp))
self.assertTrue(mock__configure_yarn_site.called)
self.assertTrue(mock__set_zk_quorum.called)
self.assertTrue(mock__set_default_fs.called)
@mock.patch("sahara_plugin_ambari.plugins.ambari.ha_helper."
"_confgure_hbase_site")
def test_update_bp_for_hbase_ha(self, mock__confgure_hbase_site):
ha.update_bp_for_hbase_ha(self.cluster, copy.deepcopy(self.bp))
self.assertTrue(mock__confgure_hbase_site.called)
def test__add_zkfc_to_namenodes(self):
bp = ha._add_zkfc_to_namenodes(copy.deepcopy(self.bp))
self.assertIn({"name": "ZKFC"}, bp["host_groups"][0]["components"])
@mock.patch("sahara.plugins.utils.get_instances")
def test__set_default_fs(self, mock_get_instances):
bp = ha._set_default_fs(self.cluster, copy.deepcopy(self.bp),
ha.p_common.NAMENODE_HA)
self.assertEqual("hdfs://hdfs-ha",
ha._find_core_site(bp)["fs.defaultFS"])
mock_get_instances.return_value = [self.in1]
bp = ha._set_default_fs(self.cluster, copy.deepcopy(self.bp),
ha.p_common.RESOURCEMANAGER_HA)
self.assertEqual("hdfs://{}:8020".format(self.in1.fqdn()),
ha._find_core_site(bp)["fs.defaultFS"])
@mock.patch("sahara.plugins.utils.get_instances")
def test__set_zk_quorum(self, mock_get_instances):
mock_get_instances.return_value = [self.in1, self.in2, self.in3]
bp = ha._set_zk_quorum(self.cluster, copy.deepcopy(self.bp),
ha.CORE_SITE)
self.assertEqual(
"{}:2181,{}:2181,{}:2181".format(
self.in1.fqdn(), self.in2.fqdn(), self.in3.fqdn()),
ha._find_core_site(bp)['ha.zookeeper.quorum'])
bp = ha._set_zk_quorum(self.cluster, copy.deepcopy(self.bp),
ha.YARN_SITE)
self.assertEqual(
"{}:2181,{}:2181,{}:2181".format(
self.in1.fqdn(), self.in2.fqdn(), self.in3.fqdn()),
ha._find_yarn_site(bp)['hadoop.registry.zk.quorum'])
def test__set_high_zk_limits(self):
bp = ha._set_high_zk_limits(copy.deepcopy(self.bp))
self.assertEqual("10000", ha._find_zoo_cfg(bp)["tickTime"])
@mock.patch("sahara.plugins.utils.get_instances")
def test__set_primary_and_standby_namenode(self, mock_get_instances):
mock_get_instances.return_value = [self.in1, self.in2]
bp = ha._set_primary_and_standby_namenode(self.cluster,
copy.deepcopy(self.bp))
self.assertEqual(
self.in1.fqdn(),
ha._find_hadoop_env(bp)['dfs_ha_initial_namenode_active'])
self.assertEqual(
self.in2.fqdn(),
ha._find_hadoop_env(bp)['dfs_ha_initial_namenode_standby'])
@mock.patch("sahara.plugins.utils.get_instances")
def test__configure_hdfs_site(self, mock_get_instances):
mock_get_instances.return_value = [self.in1, self.in2]
bp = ha._configure_hdfs_site(self.cluster, copy.deepcopy(self.bp))
j_nodes = ";".join(
["{}:8485".format(i.fqdn()) for i in mock_get_instances()])
nn_id_concat = ",".join(
[i.instance_name for i in mock_get_instances()])
result = {
"hdfs-site": {
"dfs.client.failover.proxy.provider.hdfs-ha":
"org.apache.hadoop.hdfs.server.namenode.ha."
"ConfiguredFailoverProxyProvider",
"dfs.ha.automatic-failover.enabled": "true",
"dfs.ha.fencing.methods": "shell(/bin/true)",
"dfs.nameservices": "hdfs-ha",
"dfs.namenode.shared.edits.dir":
"qjournal://{}/hdfs-ha".format(j_nodes),
"dfs.ha.namenodes.hdfs-ha": nn_id_concat,
"dfs.namenode.http-address": "{}:50070".format(
self.in1.fqdn()),
"dfs.namenode.https-address": "{}:50470".format(
self.in1.fqdn()),
}
}
prop = result["hdfs-site"]
for i in mock_get_instances():
prop["dfs.namenode.http-address.hdfs-ha.%s" % i.instance_name] = (
"%s:50070" % i.fqdn())
prop["dfs.namenode.https-address.hdfs-ha.%s" % i.instance_name] = (
"%s:50470" % i.fqdn())
prop["dfs.namenode.rpc-address.hdfs-ha.%s" % i.instance_name] = (
"%s:8020" % i.fqdn())
self.assertEqual(result["hdfs-site"], ha._find_hdfs_site(bp))
@mock.patch("sahara.plugins.utils.get_instance")
@mock.patch("sahara.plugins.utils.get_instances")
def test__configure_yarn_site(self, mock_get_instances, mock_get_instance):
mock_get_instances.return_value = [self.in1, self.in2, self.in3]
mock_get_instance.return_value = self.in1
bp = ha._configure_yarn_site(self.cluster, copy.deepcopy(self.bp))
zks = ",".join(["%s:2181" % i.fqdn() for i in mock_get_instances()])
rm_ids = ",".join([i.instance_name for i in mock_get_instances()])
result = {
"yarn-site": {
"hadoop.registry.rm.enabled": "false",
"yarn.resourcemanager.zk-address": zks,
"yarn.log.server.url": "{}:19888/jobhistory/logs/".format(
mock_get_instance().fqdn()),
"yarn.resourcemanager.address": "{}:8050".format(
mock_get_instances()[0].fqdn()),
"yarn.resourcemanager.admin.address": "{}:8141".format(
mock_get_instances()[0].fqdn()),
"yarn.resourcemanager.cluster-id": self.cluster.name,
"yarn.resourcemanager.ha.automatic-failover.zk-base-path":
"/yarn-leader-election",
"yarn.resourcemanager.ha.enabled": "true",
"yarn.resourcemanager.ha.rm-ids": rm_ids,
"yarn.resourcemanager.hostname":
mock_get_instances()[0].fqdn(),
"yarn.resourcemanager.recovery.enabled": "true",
"yarn.resourcemanager.resource-tracker.address":
"{}:8025".format(mock_get_instances()[0].fqdn()),
"yarn.resourcemanager.scheduler.address": "{}:8030".format(
mock_get_instances()[0].fqdn()),
"yarn.resourcemanager.store.class":
"org.apache.hadoop.yarn.server.resourcemanager.recovery."
"ZKRMStateStore",
"yarn.resourcemanager.webapp.address": "{}:8088".format(
mock_get_instances()[0].fqdn()),
"yarn.resourcemanager.webapp.https.address": "{}:8090".format(
mock_get_instances()[0].fqdn()),
"yarn.timeline-service.address": "{}:10200".format(
mock_get_instance().fqdn()),
"yarn.timeline-service.webapp.address": "{}:8188".format(
mock_get_instance().fqdn()),
"yarn.timeline-service.webapp.https.address": "{}:8190".format(
mock_get_instance().fqdn())
}
}
props = result["yarn-site"]
for i in mock_get_instances():
props["yarn.resourcemanager.hostname.{}".format(
i.instance_name)] = i.fqdn()
props["yarn.resourcemanager.webapp.address.{}".format(
i.instance_name)] = "{}:8088".format(i.fqdn())
props["yarn.resourcemanager.webapp.https.address.{}".format(
i.instance_name)] = "{}:8090".format(i.fqdn())
self.assertEqual(result["yarn-site"], ha._find_yarn_site(bp))
@mock.patch("sahara.plugins.utils.get_instances")
def test__confgure_hbase_site(self, mock_get_instances):
mock_get_instances.return_value = [self.in1, self.in2, self.in3]
bp = ha._confgure_hbase_site(self.cluster, copy.deepcopy(self.bp))
result = {
"hbase-site": {
"hbase.regionserver.global.memstore.lowerLimit": "0.38",
"hbase.regionserver.global.memstore.upperLimit": "0.4",
"hbase.regionserver.handler.count": "60",
"hbase.regionserver.info.port": "16030",
"hbase.regionserver.storefile.refresh.period": "20",
"hbase.rootdir": "hdfs://hdfs-ha/apps/hbase/data",
"hbase.security.authentication": "simple",
"hbase.security.authorization": "false",
"hbase.superuser": "hbase",
"hbase.tmp.dir": "/hadoop/hbase",
"hbase.zookeeper.property.clientPort": "2181",
"hbase.zookeeper.useMulti": "true",
"hfile.block.cache.size": "0.40",
"zookeeper.session.timeout": "30000",
"zookeeper.znode.parent": "/hbase-unsecure",
"hbase.zookeeper.quorum":
",".join([i.fqdn() for i in mock_get_instances()])
}
}
self.assertEqual(result["hbase-site"], ha._find_hbase_site(bp))
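The HA tests above pin down how the plugin's helpers derive blueprint properties from a list of instances. A minimal, self-contained sketch of the NameNode HA derivation (the `Instance` class is a hypothetical stand-in for sahara's instance objects; the property names and ports are taken from the expected values in the test):

```python
# Sketch of the hdfs-site HA property derivation checked above.
# `Instance` is a hypothetical stand-in for sahara's instance objects.

class Instance:
    def __init__(self, name, host):
        self.instance_name = name
        self._host = host

    def fqdn(self):
        return self._host


def hdfs_ha_props(instances):
    # JournalNodes are joined with ';', NameNode ids with ','
    journal_nodes = ";".join("%s:8485" % i.fqdn() for i in instances)
    nn_ids = ",".join(i.instance_name for i in instances)
    props = {
        "dfs.nameservices": "hdfs-ha",
        "dfs.ha.namenodes.hdfs-ha": nn_ids,
        "dfs.namenode.shared.edits.dir":
            "qjournal://%s/hdfs-ha" % journal_nodes,
    }
    # One rpc-address entry per NameNode id
    for i in instances:
        props["dfs.namenode.rpc-address.hdfs-ha.%s" % i.instance_name] = (
            "%s:8020" % i.fqdn())
    return props


props = hdfs_ha_props([Instance("nn1", "host1"), Instance("nn2", "host2")])
print(props["dfs.namenode.shared.edits.dir"])
# qjournal://host1:8485;host2:8485/hdfs-ha
```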


@@ -1,122 +0,0 @@
# Copyright (c) 2016 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest import mock
import testtools
from sahara.plugins import health_check_base
from sahara_plugin_ambari.plugins.ambari import health
from sahara_plugin_ambari.tests.unit import base
class TestAmbariHealthCheck(base.SaharaTestCase):
def _standard_negative_test(self, mockclient, return_value, col, count):
mockclient.return_value = return_value
pr = health.AlertsProvider(mock.Mock())
service = return_value[0].get('Alert').get('service_name')
expected_exc = health_check_base.YellowHealthError
if col == 'RED':
expected_exc = health_check_base.RedHealthError
with testtools.ExpectedException(expected_exc):
try:
health.AmbariServiceHealthCheck(
mock.Mock(extra={}), pr, service).check_health()
except Exception as e:
self.assertEqual(
"Cluster health is %s. Reason: "
"Ambari Monitor has responded that cluster "
"has %s alert(s)" % (col, count), str(e))
raise
@mock.patch('sahara_plugin_ambari.plugins.ambari.client.AmbariClient.'
'__init__')
@mock.patch('sahara_plugin_ambari.plugins.ambari.client.AmbariClient.'
'close')
@mock.patch('sahara_plugin_ambari.plugins.ambari.client.AmbariClient.'
'get_alerts_data')
@mock.patch('sahara.plugins.utils.get_instance')
def test_check_health(self, get_instance, alerts_response, close, init):
init.return_value = None
alerts_response.return_value = [
{
'Alert': {
'state': 'OK',
'service_name': 'ZOOKEEPER'
}
}
]
result = health.AmbariServiceHealthCheck(
mock.Mock(extra={}), health.AlertsProvider(mock.Mock()),
'ZOOKEEPER').check_health()
self.assertEqual('No alerts found', result)
self._standard_negative_test(alerts_response, [
{
'Alert': {
'state': 'WARNING',
'service_name': 'ZOOKEEPER'
}
}
], 'YELLOW', "1 warning")
self._standard_negative_test(alerts_response, [
{
'Alert': {
'state': 'CRITICAL',
'service_name': 'ZOOKEEPER'
}
}
], 'RED', "1 critical")
# not important service: only contribute as yellow
self._standard_negative_test(alerts_response, [
{
'Alert': {
'state': 'CRITICAL',
'service_name': 'Kafka'
}
}
], 'YELLOW', "1 warning")
self._standard_negative_test(alerts_response, [
{
'Alert': {
'state': 'CRITICAL',
'service_name': 'ZOOKEEPER'
},
},
{
'Alert': {
'state': 'WARNING',
'service_name': 'ZOOKEEPER'
}
}
], 'RED', "1 critical and 1 warning")
alerts_response.side_effect = [ValueError(
"OOUCH!")]
with testtools.ExpectedException(health_check_base.RedHealthError):
try:
health.AmbariHealthCheck(
mock.Mock(extra={}), health.AlertsProvider(mock.Mock())
).check_health()
except Exception as e:
self.assertEqual(
"Cluster health is RED. Reason: "
"Can't get response from Ambari Monitor: OOUCH!",
str(e))
raise
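The negative tests above encode the severity rules: a CRITICAL alert on an important service makes the cluster RED, while anything else non-OK only contributes a warning (note the Kafka case). A hypothetical re-derivation of that logic from the test expectations (the `IMPORTANT` set here is illustrative, not sahara's actual list):

```python
# Hypothetical summarizer matching the severity rules asserted above:
# CRITICAL on an important service counts as critical; any other non-OK
# alert counts as a warning; any critical turns the cluster RED.
IMPORTANT = {"ZOOKEEPER", "HDFS", "YARN"}  # illustrative set only

def summarize(alerts):
    crit = warn = 0
    for a in alerts:
        info = a["Alert"]
        if info["state"] == "OK":
            continue
        if info["state"] == "CRITICAL" and info["service_name"] in IMPORTANT:
            crit += 1
        else:
            warn += 1
    color = "RED" if crit else ("YELLOW" if warn else "GREEN")
    return color, crit, warn

color, crit, warn = summarize([
    {"Alert": {"state": "CRITICAL", "service_name": "Kafka"}},
    {"Alert": {"state": "WARNING", "service_name": "ZOOKEEPER"}},
])
print(color, crit, warn)  # YELLOW 0 2
```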


@@ -1,33 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest import mock
from sahara_plugin_ambari.plugins.ambari import common as p_common
from sahara_plugin_ambari.plugins.ambari import plugin
from sahara_plugin_ambari.tests.unit import base
class GetPortsTestCase(base.SaharaTestCase):
def setUp(self):
super(GetPortsTestCase, self).setUp()
self.plugin = plugin.AmbariPluginProvider()
def test_get_ambari_port(self):
ng = mock.Mock()
ng.node_processes = [p_common.AMBARI_SERVER]
ports = self.plugin.get_open_ports(ng)
self.assertEqual([8080], ports)


@@ -1,51 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from sahara_plugin_ambari.plugins.ambari import plugin
from sahara_plugin_ambari.tests.unit import base as test_base
class TestPlugin(test_base.SaharaTestCase):
def setUp(self):
self.plugin = plugin.AmbariPluginProvider()
super(TestPlugin, self).setUp()
def test_job_types(self):
self.assertEqual({
'2.3': [
'Hive', 'Java', 'MapReduce', 'MapReduce.Streaming',
'Pig', 'Shell', 'Spark'],
'2.4': [
'Hive', 'Java', 'MapReduce', 'MapReduce.Streaming',
'Pig', 'Shell', 'Spark'],
'2.5': [
'Hive', 'Java', 'MapReduce', 'MapReduce.Streaming',
'Pig', 'Shell', 'Spark'],
'2.6': [
'Hive', 'Java', 'MapReduce', 'MapReduce.Streaming',
'Pig', 'Shell', 'Spark'],
}, self.plugin.get_edp_job_types())
self.assertEqual({
'2.3': [
'Hive', 'Java', 'MapReduce', 'MapReduce.Streaming',
'Pig', 'Shell', 'Spark'],
}, self.plugin.get_edp_job_types(versions=['2.3']))
self.assertEqual({
'2.4': [
'Hive', 'Java', 'MapReduce', 'MapReduce.Streaming',
'Pig', 'Shell', 'Spark'],
}, self.plugin.get_edp_job_types(versions=['2.4']))


@@ -1,96 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest import mock
from sahara_plugin_ambari.plugins.ambari import requests_helper
from sahara_plugin_ambari.tests.unit import base
class RequestsHelperTestCase(base.SaharaTestCase):
def setUp(self):
super(RequestsHelperTestCase, self).setUp()
self.i1 = mock.MagicMock()
self.i1.fqdn.return_value = "i1"
self.i2 = mock.MagicMock()
self.i2.fqdn.return_value = "i2"
def test_build_datanode_decommission_request(self):
c_name = "c1"
instances = [self.i1, self.i2]
res = requests_helper.build_datanode_decommission_request(c_name,
instances)
self.assertEqual("i1,i2",
res["RequestInfo"]["parameters"]["excluded_hosts"])
self.assertEqual("c1",
res["RequestInfo"]["operation_level"]["cluster_name"])
def test_build_nodemanager_decommission_request(self):
c_name = "c1"
instances = [self.i1, self.i2]
res = requests_helper.build_nodemanager_decommission_request(
c_name, instances)
self.assertEqual("i1,i2",
res["RequestInfo"]["parameters"]["excluded_hosts"])
self.assertEqual("c1",
res["RequestInfo"]["operation_level"]["cluster_name"])
def test_build_namenode_restart_request(self):
res = requests_helper.build_namenode_restart_request("c1", self.i1)
self.assertEqual("i1", res["Requests/resource_filters"][0]["hosts"])
self.assertEqual("c1",
res["RequestInfo"]["operation_level"]["cluster_name"])
def test_build_resourcemanager_restart_request(self):
res = requests_helper.build_resourcemanager_restart_request("c1",
self.i1)
self.assertEqual("i1", res["Requests/resource_filters"][0]["hosts"])
self.assertEqual("c1",
res["RequestInfo"]["operation_level"]["cluster_name"])
def test_build_stop_service_request(self):
res = requests_helper.build_stop_service_request("HDFS")
expected = {
"RequestInfo": {
"context": "Restart HDFS service (stopping)",
},
"Body": {
"ServiceInfo": {
"state": "INSTALLED"
}
}
}
self.assertEqual(res, expected)
def test_build_start_service_request(self):
res = requests_helper.build_start_service_request("HDFS")
expected = {
"RequestInfo": {
"context": "Restart HDFS service (starting)",
},
"Body": {
"ServiceInfo": {
"state": "STARTED"
}
}
}
self.assertEqual(res, expected)


@@ -1,68 +0,0 @@
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest import mock
from sahara.plugins import exceptions
from sahara_plugin_ambari.plugins.ambari import common as p_common
from sahara_plugin_ambari.plugins.ambari import plugin
from sahara_plugin_ambari.tests.unit import base
def make_cluster(processes_map):
m = mock.Mock()
ngs = []
for count, processes in processes_map.items():
ng = mock.Mock()
ng.count = count
ng.node_processes = processes
ngs.append(ng)
m.node_groups = ngs
return m
class AmbariValidationTestCase(base.SaharaTestCase):
def setUp(self):
super(AmbariValidationTestCase, self).setUp()
self.plugin = plugin.AmbariPluginProvider()
def test_cluster_with_ambari(self):
cluster = make_cluster({1: [p_common.AMBARI_SERVER,
p_common.ZOOKEEPER_SERVER,
p_common.NAMENODE,
p_common.DATANODE,
p_common.RESOURCEMANAGER,
p_common.NODEMANAGER,
p_common.HISTORYSERVER,
p_common.APP_TIMELINE_SERVER,
p_common.SECONDARY_NAMENODE]})
cluster.cluster_configs = {"general": {}}
with mock.patch(
"sahara_plugin_ambari.plugins.ambari.validation."
"conductor") as p:
p.cluster_get = mock.Mock()
p.cluster_get.return_value = cluster
self.assertIsNone(self.plugin.validate(cluster))
def test_cluster_without_ambari(self):
cluster = make_cluster({1: ["spam"]})
with mock.patch(
"sahara_plugin_ambari.plugins.ambari.validation."
"conductor") as p:
p.cluster_get = mock.Mock()
p.cluster_get.return_value = cluster
self.assertRaises(exceptions.InvalidComponentCountException,
self.plugin.validate, cluster)


@@ -1,108 +0,0 @@
# Copyright (c) 2013 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import eventlet
EVENTLET_MONKEY_PATCH_MODULES = dict(os=True,
select=True,
socket=True,
thread=True,
time=True)
def patch_all():
"""Apply all patches.
List of patches:
* eventlet's monkey patch for all cases;
* minidom's writexml patch for py < 2.7.3 only.
"""
eventlet_monkey_patch()
patch_minidom_writexml()
def eventlet_monkey_patch():
"""Apply eventlet's monkey patch.
    This call should be the first call in the application. It's safe to
    call monkey_patch multiple times.
"""
eventlet.monkey_patch(**EVENTLET_MONKEY_PATCH_MODULES)
def eventlet_import_monkey_patched(module):
"""Returns module monkey patched by eventlet.
It's needed for some tests, for example, context test.
"""
return eventlet.import_patched(module, **EVENTLET_MONKEY_PATCH_MODULES)
def patch_minidom_writexml():
    """Patch for xml.dom.minidom toprettyxml bug with whitespace around text
    We apply the patch to avoid excess whitespace in generated xml
    configuration files that breaks Hadoop.
(This patch will be applied for all Python versions < 2.7.3)
Issue: http://bugs.python.org/issue4147
Patch: http://hg.python.org/cpython/rev/cb6614e3438b/
Description: http://ronrothman.com/public/leftbraned/xml-dom-minidom-\
toprettyxml-and-silly-whitespace/#best-solution
"""
import sys
if sys.version_info >= (2, 7, 3):
return
import xml.dom.minidom as md
def element_writexml(self, writer, indent="", addindent="", newl=""):
# indent = current indentation
# addindent = indentation to add to higher levels
# newl = newline string
writer.write(indent + "<" + self.tagName)
attrs = self._get_attributes()
a_names = list(attrs.keys())
a_names.sort()
for a_name in a_names:
writer.write(" %s=\"" % a_name)
md._write_data(writer, attrs[a_name].value)
writer.write("\"")
if self.childNodes:
writer.write(">")
if (len(self.childNodes) == 1
and self.childNodes[0].nodeType == md.Node.TEXT_NODE):
self.childNodes[0].writexml(writer, '', '', '')
else:
writer.write(newl)
for node in self.childNodes:
node.writexml(writer, indent + addindent, addindent, newl)
writer.write(indent)
writer.write("</%s>%s" % (self.tagName, newl))
else:
writer.write("/>%s" % (newl))
md.Element.writexml = element_writexml
def text_writexml(self, writer, indent="", addindent="", newl=""):
md._write_data(writer, "%s%s%s" % (indent, self.data, newl))
md.Text.writexml = text_writexml
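The whitespace behavior this patch guards against is easy to see with `toprettyxml`. On Python >= 2.7.3 (and all of Python 3) the upstream fix is already in the stdlib, which is why the function returns early: an element with a single text child is kept on one line, so no stray whitespace is injected around configuration values.

```python
import xml.dom.minidom as md

# On Python >= 2.7.3 a lone text child stays inline, so Hadoop config
# values come out without surrounding whitespace.
doc = md.parseString("<property><name>fs.defaultFS</name></property>")
pretty = doc.toprettyxml(indent="  ")
print(pretty)
# <?xml version="1.0" ?>
# <property>
#   <name>fs.defaultFS</name>
# </property>
```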


@@ -1,43 +0,0 @@
[metadata]
name = sahara-plugin-ambari
summary = Ambari Plugin for Sahara Project
description_file = README.rst
license = Apache Software License
python_requires = >=3.8
classifiers =
Programming Language :: Python
Programming Language :: Python :: Implementation :: CPython
Programming Language :: Python :: 3 :: Only
Programming Language :: Python :: 3
Programming Language :: Python :: 3.8
Programming Language :: Python :: 3.9
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
author = OpenStack
author_email = openstack-discuss@lists.openstack.org
home_page = https://docs.openstack.org/sahara/latest/
[files]
packages =
sahara_plugin_ambari
[entry_points]
sahara.cluster.plugins =
ambari = sahara_plugin_ambari.plugins.ambari.plugin:AmbariPluginProvider
[compile_catalog]
directory = sahara_plugin_ambari/locale
domain = sahara_plugin_ambari
[update_catalog]
domain = sahara_plugin_ambari
output_dir = sahara_plugin_ambari/locale
input_file = sahara_plugin_ambari/locale/sahara_plugin_ambari.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = sahara_plugin_ambari/locale/sahara_plugin_ambari.pot
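The `sahara.cluster.plugins` entry point above is what lets sahara locate and load the provider class at runtime. As a rough illustration of the `module:attr` target format, the spec splits like this (parsing only; actually importing the class would require the plugin and sahara to be installed):

```python
# The setuptools entry point target is "<module path>:<attribute>".
spec = "sahara_plugin_ambari.plugins.ambari.plugin:AmbariPluginProvider"
module_path, _, attr = spec.partition(":")
print(module_path)  # sahara_plugin_ambari.plugins.ambari.plugin
print(attr)         # AmbariPluginProvider
```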


@@ -1,20 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import setuptools
setuptools.setup(
setup_requires=['pbr>=2.0.0'],
pbr=True)


@@ -1,16 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking>=3.0.1,<3.1.0 # Apache-2.0
bandit>=1.1.0 # Apache-2.0
bashate>=0.5.1 # Apache-2.0
coverage!=4.4,>=4.0 # Apache-2.0
doc8>=0.6.0 # Apache-2.0
fixtures>=3.0.0 # Apache-2.0/BSD
oslotest>=3.2.0 # Apache-2.0
stestr>=1.0.0 # Apache-2.0
pylint==1.4.5 # GPLv2
testscenarios>=0.4 # Apache-2.0/BSD
testtools>=2.4.0 # MIT

99
tox.ini

@@ -1,99 +0,0 @@
[tox]
envlist = py38,pep8
minversion = 3.1.1
skipsdist = True
# this allows tox to infer the base python from the environment name
# and override any basepython configured in this file
ignore_basepython_conflict = true
[testenv]
basepython = python3
usedevelop = True
install_command = pip install {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
DISCOVER_DIRECTORY=sahara_plugin_ambari/tests/unit
deps =
-c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
-r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
commands = stestr run {posargs}
passenv = http_proxy
https_proxy
no_proxy
[testenv:debug-py36]
basepython = python3.6
commands = oslo_debug_helper -t sahara_plugin_ambari/tests/unit {posargs}
[testenv:debug-py37]
basepython = python3.7
commands = oslo_debug_helper -t sahara_plugin_ambari/tests/unit {posargs}
[testenv:pep8]
deps =
-c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
-r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
-r{toxinidir}/doc/requirements.txt
commands =
flake8 {posargs}
doc8 doc/source
[testenv:venv]
commands = {posargs}
[testenv:docs]
deps =
-c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
-r{toxinidir}/doc/requirements.txt
commands =
rm -rf doc/build/html
sphinx-build -W -b html doc/source doc/build/html
allowlist_externals =
rm
[testenv:pdf-docs]
deps = {[testenv:docs]deps}
commands =
rm -rf doc/build/pdf
sphinx-build -W -b latex doc/source doc/build/pdf
make -C doc/build/pdf
allowlist_externals =
make
rm
[testenv:releasenotes]
deps =
-c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
-r{toxinidir}/doc/requirements.txt
commands =
rm -rf releasenotes/build releasenotes/html
sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
allowlist_externals = rm
[testenv:debug]
# It runs tests from the specified dir (default is sahara_plugin_ambari/tests)
# in interactive mode, so you can use pdb to debug tests.
# Example usage: tox -e debug -- -t sahara_plugin_ambari/tests/unit some.test.path
# https://docs.openstack.org/oslotest/latest/features.html#debugging-with-oslo-debug-helper
commands = oslo_debug_helper -t sahara_plugin_ambari/tests/unit {posargs}
[flake8]
show-source = true
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,tools
# [H904] Delay string interpolations at logging calls
# [H106] Don't put vim configuration in source files
# [H203] Use assertIs(Not)None to check for None.
# [H204] Use assert(Not)Equal to check for equality
# [H205] Use assert(Greater|Less)(Equal) for comparison
enable-extensions=H904,H106,H203,H204,H205
# [E123] Closing bracket does not match indentation of opening bracket's line
# [E226] Missing whitespace around arithmetic operator
# [E402] Module level import not at top of file
# [E731] Do not assign a lambda expression, use a def
# [W503] Line break occurred before a binary operator
# [W504] line break after binary operator
ignore=E123,E226,E402,E731,W503,W504