commit a126fb2140

The CDH plugin has a DataNode configuration parameter called "dfs_datanode_du_reserved". If a user sets a Cinder volume as the storage location with a volume size smaller than the value of this parameter, HDFS will not work: cluster deployment fails in the "Starting" step with the error "Oozie Share Lib Upload Failed". This patch adds a validation in the "Validating" step, before the cluster is spawned. Note that if the same situation (volume size less than "du_reserved") occurs with the Vanilla or Spark plugins, deployment finishes successfully, but HDFS then fails when a file is uploaded to it.

Closes-Bug: #1709071
Change-Id: I840559d54ca474299eeec9812dc1573ab305bc1b
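The validation described above can be sketched as follows. This is a minimal illustration, not Sahara's actual code: the function name, argument names, and the use of a plain ValueError are all hypothetical; the real patch hooks into Sahara's plugin validation machinery.

```python
# Hypothetical sketch of the added check (illustrative names, not
# Sahara's actual API): reject configurations where each DataNode
# volume is smaller than CDH's dfs_datanode_du_reserved, since HDFS
# would be left with no usable capacity.

def validate_datanode_du_reserved(volume_size_gb, du_reserved_bytes):
    """Raise if the per-volume size is below the reserved space."""
    volume_size_bytes = volume_size_gb * 1024 ** 3
    if volume_size_bytes < du_reserved_bytes:
        raise ValueError(
            "Volume size (%d GB) is less than dfs_datanode_du_reserved "
            "(%d bytes); HDFS would have no usable capacity."
            % (volume_size_gb, du_reserved_bytes))
```

Running this check during the "Validating" step means the misconfiguration is reported before any instances are spawned, instead of surfacing later as an opaque "Oozie Share Lib Upload Failed" error.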
api-ref/source
devstack
doc/source
etc
releasenotes
sahara
tools
.coveragerc
.gitignore
.gitreview
.testr.conf
CONTRIBUTING.rst
HACKING.rst
LICENSE
README.rst
babel.cfg
bandit.yaml
bindep.txt
pylintrc
requirements.txt
setup.cfg
setup.py
test-requirements-bandit.txt
test-requirements.txt
tox.ini
README.rst
Team and repository tags
OpenStack Data Processing ("Sahara") project
Sahara at wiki.openstack.org: https://wiki.openstack.org/wiki/Sahara
Launchpad project: https://launchpad.net/sahara
Sahara docs site: https://docs.openstack.org/sahara/latest/
Roadmap: https://wiki.openstack.org/wiki/Sahara/Roadmap
Quickstart guide: https://docs.openstack.org/sahara/latest/user/quickstart.html
How to participate: https://docs.openstack.org/sahara/latest/contributor/how-to-participate.html
Source: http://git.openstack.org/cgit/openstack/sahara
License
Apache License Version 2.0 http://www.apache.org/licenses/LICENSE-2.0