Retire puppet-openstack-specs

openstack-specs repo is retired now, so we can retire
puppet-openstack-specs also:

https://review.opendev.org/q/topic:%22retire-openstack-specs%22+(status:open%20OR%20status:merged)
Change-Id: I11613236f36279a3f876526b8ac41000a24537da
Ghanshyam Mann 2021-06-23 16:31:06 -05:00
parent b4cfaff289
commit b1d484f6b6
32 changed files with 8 additions and 3868 deletions

.gitignore
@@ -1,7 +0,0 @@
.stestr/
doc/build
*.egg-info
*.pyc
.tox
AUTHORS
Changelog

@@ -1,4 +0,0 @@
[DEFAULT]
test_path=./tests
top_dir=./

@@ -1,9 +0,0 @@
- project:
    templates:
      - openstack-specs-jobs
    check:
      jobs:
        - openstack-tox-py37
    gate:
      jobs:
        - openstack-tox-py37

@@ -1,12 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps in this page:
http://docs.openstack.org/infra/manual/developers.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.

@@ -1,4 +0,0 @@
puppet-openstack-specs Style Commandments
=========================================
Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/

LICENSE
@@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc

@@ -1,14 +1,10 @@
========================
Team and repository tags
========================
This project is no longer maintained.
.. image:: https://governance.openstack.org/tc/badges/puppet-openstack-specs.svg
:target: https://governance.openstack.org/tc/reference/tags/index.html
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
.. Change things from this point on
===============================
puppet-openstack-specs
===============================
Puppet OpenStack modules specs repository
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
OFTC.

@@ -1,163 +0,0 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " wadl to build a WADL file for api.openstack.org"
clean:
-rm -rf $(BUILDDIR)/*
html: check-dependencies
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
.PHONY: check-dependencies
check-dependencies:
@python -c 'import sphinxcontrib.autohttp.flask' >/dev/null 2>&1 || (echo "ERROR: Missing Sphinx dependencies. Run: pip install sphinxcontrib-httpdomain" && exit 1)
wadl:
$(SPHINXBUILD) -b docbook $(ALLSPHINXOPTS) $(BUILDDIR)/wadl
@echo
@echo "Build finished. The WADL pages are in $(BUILDDIR)/wadl."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Ceilometer.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Ceilometer.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/Ceilometer"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Ceilometer"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."

@@ -1,72 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
# sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'openstackdocstheme',
#'sphinx.ext.intersphinx',
'yasfb',
]
# Feed configuration for yasfb
feed_base_url = 'https://specs.openstack.org/openstack/puppet-openstack-specs'
feed_author = 'Puppet OpenStack Team'
exclude_patterns = [
'template.rst',
'**/template.rst',
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
copyright = u'%s, OpenStack Foundation' % datetime.date.today().year
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
html_theme_path = []
html_theme = "openstackdocs"
html_static_path = []
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}
# openstackdocstheme options
openstackdocs_repo_name = 'openstack/puppet-openstack-specs'
openstackdocs_bug_project = 'puppet-openstack-specs'
openstackdocs_bug_tag = ''

@@ -1,51 +0,0 @@
.. puppet openstack documentation master file
=======================================
Puppet OpenStack Program Specifications
=======================================
Icehouse approved specs:

.. toctree::
   :glob:
   :maxdepth: 1

   specs/icehouse/*

Juno approved specs:

.. toctree::
   :glob:
   :maxdepth: 1

   specs/juno/*

Kilo approved specs:

.. toctree::
   :glob:
   :maxdepth: 1

   specs/kilo/*

Liberty approved specs:

.. toctree::
   :glob:
   :maxdepth: 1

   specs/liberty/*

Newton approved specs:

.. toctree::
   :glob:
   :maxdepth: 1

   specs/newton/*
==================
Indices and tables
==================
* :ref:`search`

@@ -1 +0,0 @@
../../specs

@@ -1,6 +0,0 @@
pbr!=2.1.0,>=2.0.0 # Apache-2.0
sphinx>=3.5.1 # BSD
openstackdocstheme>=2.2.1 # Apache-2.0
stestr>=2.0.0
testtools>=0.9.34
yasfb>=0.5.1

@@ -1,12 +0,0 @@
[metadata]
name = puppet-openstack-specs
summary = Puppet OpenStack modules specs repository
description_file =
README.rst
author = OpenStack
author_email = openstack-discuss@lists.openstack.org
home_page = http://specs.openstack.org/openstack/puppet-openstack-specs/
classifier =
Intended Audience :: Developers
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux

@@ -1,22 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
setuptools.setup(
setup_requires=['pbr'],
pbr=True)

@@ -1,123 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=======================================
Common OpenStack Configuration Provider
=======================================
https://blueprints.launchpad.net/puppet-openstacklib/+spec/common-openstack-configuration-provider
This spec is about introducing a common configuration provider based on
the ini_setting provider from puppetlabs-inifile. Other configuration
providers would now inherit from this provider instead.
Problem description
===================
All Puppet modules for OpenStack use the ini_setting provider as the basis
for their specialized versions.
Recent work has started to implement a secret parameter so that changes to a
specific configuration value are hidden from Puppet logs.
This work is, and will continue to be, duplicated across all the specialized
providers, violating the DRY principle.
Proposed change
===============
Introduce a common configuration provider that can be used by the other
specialized configuration providers. This common provider would supply a
basic set of shared features, such as the secret parameter and the
capitalization of boolean values.
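As an illustration only (a sketch; the resource and parameter names follow the
existing specialized providers, and the exact shared interface is not defined
by this spec), manifests would keep using the specialized resources and simply
gain the shared behaviour::

  nova_config { 'DEFAULT/rabbit_password':
    value  => 'secrete',
    secret => true,   # the common provider hides the value from Puppet logs
  }

  nova_config { 'DEFAULT/force_config_drive':
    value => true,    # boolean capitalization handled once, in the common provider
  }
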
Alternatives
------------
Without this proposition, we would have to continue using the ini_setting
provider as the base of our specialized configuration providers and implement
or copy features in each of them.
We could also propose our features and changes upstream so we no longer have
to maintain them ourselves. However, some of those changes, such as the
capitalization of boolean values, might not fit upstream's vision.
Data model impact
-----------------
None
Module API impact
-----------------
This proposition does not include any change to any module API.
End user impact
---------------
None
Performance Impact
------------------
None
Deployer impact
---------------
This proposition introduces a new mandatory dependency on openstacklib.
Those deploying from the master branch and/or using Puppetfile would need to
install the openstacklib puppet module.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
mgagne
Other contributors:
None
Work Items
----------
* Create openstack_ini_setting provider
* Refactor the existing configuration providers to use openstack_ini_setting.
Dependencies
============
None
Testing
=======
* Unit test fixtures of all puppet modules would need to be updated
to install openstacklib.
Documentation Impact
====================
None
References
==========
None

@@ -1,213 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==================================
Common OpenStack Database Resource
==================================
https://blueprints.launchpad.net/puppet-openstacklib/+spec/commmon-openstack-database-resource
This spec proposes to introduce defined resource types in openstacklib to
provide a common interface for setting up databases used by the major OpenStack
services. The current modules for these services would use this interface
rather than duplicating code across these modules.
Problem description
===================
Nearly all of the Puppet modules for OpenStack services set up a database for
themselves and most of them do this using nearly identical code. This causes a
great deal of repetition across modules and will lead to inconsistencies.
Proposed change
===============
Abstract the database functionality into defined resource types in the
openstacklib module. This resource would implement the same functionality as
the db::mysql class and db::mysql::host_access type already do in ceilometer,
cinder, glance, heat, keystone, neutron, and nova and will make the ability to
configure postgresql databases consistent. The new resource would use the
newest mysql and postgresql modules.
Alternatives
------------
The alternative to implementing this feature in openstacklib is to maintain the
database setup functionality in each module separately. This would necessitate
updating the postgresql classes in each module to work with the current version
of the postgresql module as well as adding postgresql support to the heat module.
If additional backends are desired they would have to be implemented in each
module individually.
Data model impact
-----------------
The ceilometer, cinder, glance, heat, keystone, neutron, and nova modules
would be updated to depend on the openstacklib module and use the new database
resources to configure their respective databases.
The mysql and postgresql subclasses of ceilometer, cinder, glance, heat,
keystone, neutron, and nova modules would have their current code replaced
with a call to the new defined resource type in openstacklib. No new classes
would be needed for these modules, except for heat which would need to have a
postgresql subclass added.
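As a sketch of what such a replacement could look like (the wrapped class
parameters shown here are illustrative assumptions, not a final design), a
module's db::mysql class would reduce to a thin forwarding layer::

  class nova::db::mysql (
    $password,
    $dbname        = 'nova',
    $user          = 'nova',
    $host          = '127.0.0.1',
    $allowed_hosts = undef,
  ) {
    # mysql_password() comes from the puppetlabs-mysql module.
    ::openstacklib::db::mysql { 'nova':
      password_hash => mysql_password($password),
      dbname        => $dbname,
      user          => $user,
      host          => $host,
      allowed_hosts => $allowed_hosts,
    }
  }
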
Module API impact
-----------------
* New defined resource types:
openstacklib::db::mysql
openstacklib::db::postgresql
A later spec may introduce an openstacklib::db::mongodb resource.
* Parameters for openstacklib::db::mysql:
dbname : The name of the database;
string; optional; default to the $title of the resource, i.e. 'nova'
user : The database user to create;
string; optional; default to the $title of the resource, i.e. 'nova'
password_hash : Password hash to use for the database user for this service;
string; required
host : The IP address or hostname of the user in mysql_grant;
string; optional; default to '127.0.0.1'
charset : The charset to use for the database;
string; optional; default to 'utf8'
collate : The collate to use for the database;
string; optional; default to 'utf8_unicode_ci'
allowed_hosts : Additional hosts that are allowed to access this database;
array or string; optional; default to undef
privileges : Privileges given to the database user;
string or array of strings; optional; default to 'ALL'
* Example use case for openstacklib::db::mysql:
In ::nova::db::mysql::
  ::openstacklib::db::mysql { 'nova':
    password_hash => '2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19',
    notify        => Exec['nova-db-sync'],
  }
would create a mysql database called 'nova' with user 'nova@127.0.0.1'. It
would not allow other hosts access to the database. It will re-execute the
'nova-db-sync' exec when the database refreshes.
Another example in ::keystone::db::mysql::
  ::openstacklib::db::mysql { 'keystone':
    password_hash => '2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19',
    host          => '1.2.3.4',
    allowed_hosts => ['1.2.3.4', '5.6.7.8'],
    notify        => Exec['keystone-manage db_sync'],
    before        => Service['keystone'],
  }
would create a mysql database called 'keystone' with user 'keystone@1.2.3.4'
and grants to 'keystone' on hosts '1.2.3.4' and '5.6.7.8'. The database will
be set up before the keystone service is started and the 'keystone-manage
db_sync' exec will be re-executed when it refreshes.
* Parameters for openstacklib::db::postgresql:
dbname : The name of the database;
string; optional; default to the $title of the resource, i.e. 'nova'
user : The database user to create;
string; optional; default to the $title of the resource, i.e. 'nova'
password_hash : Password hash to use for the database user for this service;
string; required
encoding : The charset or encoding to use for the database;
string; optional; default to undef
* Example use case for openstacklib::db::postgresql
In ::nova::db::postgresql::
  ::openstacklib::db::postgresql { 'nova':
    password_hash => '2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19',
    notify        => Exec['nova-db-sync'],
  }
would create a postgresql database called 'nova' with user 'nova'. It will
re-execute the 'nova-db-sync' exec when the database refreshes.
Another example in ::keystone::db::postgresql::
  ::openstacklib::db::postgresql { 'keystone':
    password_hash => '2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19',
    notify        => Exec['keystone-manage db_sync'],
    before        => Service['keystone'],
  }
would create a postgresql database called 'keystone' with user 'keystone'.
The database will be set up before the keystone service is started and the
'keystone-manage db_sync' exec will be re-executed when it refreshes.
End user impact
---------------------
None aside from the API.
Performance Impact
------------------
None
Deployer impact
---------------------
The user needs to install the openstacklib module prior to using the
ceilometer, cinder, glance, heat, keystone, neutron, or nova modules.
Developer impact
----------------
Changes to database setup will happen in the openstacklib module rather than in
the individual OpenStack service modules.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
krinkle
Other contributors:
None
Work Items
----------
* Create new defined resource type in openstacklib.
* Update ceilometer, cinder, glance, heat, keystone, neutron, and nova modules
to depend on openstacklib and use the new resource.
Dependencies
============
None
Testing
=======
Unit test fixtures of all puppet modules would need to be updated to install
openstacklib. Existing tests in these modules would be replicated in
openstacklib.
Documentation Impact
====================
None
References
==========
None

@@ -1,173 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==================================
Common OpenStack Identity Resource
==================================
https://blueprints.launchpad.net/puppet-openstacklib/+spec/common-openstack-identity-resource
This spec proposes to introduce defined resource types in openstacklib to
provide a common interface for setting up Keystone resources used by the major OpenStack
services. The current modules for these services would use this interface
rather than duplicating code across these modules.
Problem description
===================
Nearly all of the Puppet modules for OpenStack services set up Keystone users, roles,
tenants and endpoints for themselves and most of them do this using nearly identical
code. This causes a great deal of repetition across modules and will lead to
inconsistencies.
Proposed change
===============
Abstract the identity functionality into defined resource types in the
openstacklib module. This resource would implement the same functionality as
the keystone::auth type already found in most OpenStack modules for managing
Keystone resources. The new resource would use the newest identity modules.
Alternatives
------------
The alternative to implementing this feature in openstacklib is to maintain the
Keystone resources management functionality in each module separately.
Data model impact
-----------------
The OpenStack modules would be updated to depend on openstacklib and to use
the new identity resources to configure their respective Keystone resources.
Some modules would have their current code replaced
with a call to the new defined resource type in openstacklib. No new classes
would be needed for these modules.
Module API impact
-----------------
* New defined resource type:
openstacklib::service_identity
* Parameters for openstacklib::service_identity:
password : Password to create for the service user;
string; required
auth_name : The name of the service user;
string; optional; default to the $title of the resource, i.e. 'nova'
service_name : Name of the service;
string; required
public_url : Public endpoint URL;
string; required
internal_url : Internal endpoint URL;
string; required
admin_url : Admin endpoint URL;
string; required
region : Endpoint region;
string; optional: default to 'RegionOne'
tenant : Service tenant;
string; optional: default to 'services'
domain : User domain (keystone v3);
string; optional: default to undef
email : Service email;
string; optional: default to '$auth_name@localhost'
configure_endpoint : Whether to create the endpoint.
string; optional: default to True
configure_user : Whether to create the user.
string; optional: default to True
configure_user_role : Whether to create the user role.
string; optional: default to True
* Example use case for openstacklib::service_identity:
In ::nova::keystone::auth::
  ::openstacklib::service_identity { 'nova':
    password     => 'secrete',
    auth_name    => 'nova',
    domain       => 'domain1',
    service_name => 'compute',
    public_url   => 'https://my-nova-api.com:8774/v2',
    admin_url    => 'https://my-nova-api.com:8774/v2',
    internal_url => 'https://my-nova-api.com:8774/v2',
    notify       => Service['nova-api'],
  }
would create the user, tenant, role, and endpoint for the OpenStack Compute
API service and restart nova-api.
In the case of multiple endpoint versions (Keystone and Nova v2+v3), we would
have to use ::openstacklib::service_identity twice, with v3 optional and
disabled by default for backward compatibility. In Nova, we also have an EC2
endpoint and will use the new resource for it exactly as it is handled in the
current puppet-nova code.
In the case of multiple regions, it is suggested to use Hiera together with
the new nova::keystone::auth, which consumes a defined resource type in
openstacklib, to create multiple resources in multiple regions.
In the case of a dedicated Keystone server, the nova::keystone::auth classes
must only be applied on Keystone nodes; otherwise it will lead to catalog
compilation issues, since keystone.conf is not present on the other nodes.
End user impact
---------------------
None aside from the API.
Performance Impact
------------------
None
Deployer impact
---------------------
The user needs to install the openstacklib module prior to using the
OpenStack modules.
Developer impact
----------------
Changes to keystone setup will happen in the openstacklib module rather than in
the individual OpenStack service modules.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
EmilienM
Other contributors:
None
Work Items
----------
* Create new defined resource type in openstacklib.
* Update ceilometer, cinder, glance, heat, keystone, neutron, and nova modules
to depend on openstacklib and use the new resource.
Dependencies
============
None
Testing
=======
Unit test fixtures of all puppet modules would need to be updated to install
openstacklib. Existing tests in these modules would be replicated in
openstacklib.
Documentation Impact
====================
README will be updated for each module consuming this new feature.
References
==========
None

@@ -1,160 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Repository Management in Extras
==========================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/puppet-openstack/+spec/extras-repository-refactor
Problem description
===================
Currently it is necessary to maintain a separate module to handle base
repositories and any other repos that are needed for operation. This patch
seeks to allow arbitrary definitions of repositories via hash parameters and
create_resources, integrated with and alongside the RDO and UCA options
already available.
This patch will also define a sensible directory structure to use, so that common
code between similar systems can be shared.
Proposed change
===============
* Create classes that manage arbitrary repositories but also RDO and UCA.
* Share code for similar systems where possible.
* Define a directory structure for repo management.
Alternatives
------------
Port the current repo management code across
This results in an implicit requirement to use either data-bindings or another class
with collectors to override resources in the case where the user wants to provide
their own baseurl for epel, or provide a proxy, since the parameters for those
resources are not exposed to the class.
If every parameter for each resource is exposed, we end up with a bloated interface
for the classes.
A suggestion was made to use stahnma's epel class, but this aggravates the
problem even further, since that class has so many parameters. Instead, we
should offer to not manage epel and allow the user to include the epel module
if they wish, while continuing to offer a convenience parameter with the
current defaults.
Data model impact
-----------------
None
Module API impact
-----------------
New classes for debian:
openstack_extras::repo::debian::ubuntu
openstack_extras::repo::debian::debian
openstack_extras::repo::debian::params
Parameters for debian osfamily:
- release : The openstack release name
- manage_[uca|whz] : whether to add the default uca/wheezy repo
- source_hash : a hash of apt::source resources
- source_defaults : a hash of apt::source parameters for defaults
- package_require : whether to use a collector for all packages to require apt-get update
New classes for redhat:
openstack_extras::repo::redhat::redhat
openstack_extras::repo::redhat::params
Parameters for redhat osfamily:
- release : The openstack release name
- manage_rdo : whether to add the default rdo repo
- repo_hash : a hash of yumrepo resources
- repo_defaults : a hash of yumrepo parameters for defaults
- gpgkey_hash : a hash of file resources to create gpg keys
- gpgkey_defaults : a hash of file parameters for defaults
- purge_unmanaged : whether to purge unmanaged yum repos from yum.repos.d
- package_require : whether to use a collector for all packages to require all yum repos
Directory structure should follow:
openstack_extras::repo::${downcase(::osfamily)}::${downcase(::operatingsystem)}
Currently RedHat, CentOS, and Fedora are all handled using RDO, but the option to diverge later is left open.
New functions:
We need some functions that will perform validation on the hashes passed to file and yumrepo
to catch typos as early as possible.
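As a sketch of the intended interface (assuming the keys of ``repo_hash`` map
directly onto ``yumrepo`` attributes; the release name and repository contents
below are examples only), a deployer could define extra repositories alongside
RDO like this::

  class { '::openstack_extras::repo::redhat::redhat':
    release       => 'juno',
    manage_rdo    => true,
    repo_hash     => {
      'internal-mirror' => {
        'baseurl'  => 'http://mirror.example.com/centos/7/os/x86_64/',
        'descr'    => 'Internal CentOS mirror',
        'gpgcheck' => '0',
      },
    },
    repo_defaults => {
      'enabled' => '1',
    },
  }
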
End user impact
---------------------
None
Performance Impact
------------------
None
Deployer impact
---------------------
Allows deployers to manage all repos on their system without
having to make their own class.
Deployers will need to understand how to pass resource hashes
around in order to take advantage of this; a Hiera example will
be provided. All users of the current repo management system can
migrate to the new format at their leisure - the old classes
will not be changed.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
michchap
Work Items
----------
debian/ubuntu support
redhat/centos/fedora support
Dependencies
============
Adds apt dependency to openstack_extras
Testing
=======
rspec will be used for unit tests.
The change will be integrated into Aptira's product during testing, which will
provide integration test coverage.
Documentation Impact
====================
Complete change of repo management API
References
==========
None

@@ -1,238 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===============================
Use Aviator in Module Resources
===============================
https://blueprints.launchpad.net/puppet-openstacklib/+spec/use-aviator-in-module-resources
This spec proposes to incorporate an interface to the OpenStack API in
openstacklib with the purpose of improving maintainability and promoting a more
configurable authentication interface. The providers would avoid the use of
the Python-based command-line OpenStack clients in favor of using Aviator, a
Ruby API binding for OpenStack. The corresponding types would implement an
interface for configuring authentication and endpoint details.
Problem description
===================
The OpenStack puppet modules contain a number of types and providers to access
OpenStack services via their API endpoints. Currently they rely on the
Python-based command-line clients to access these endpoints. The command-line
clients make requests at the respective OpenStack services, parse the response
and restructure it as a human-readable string. This means that the puppet
providers that wrap around these clients are heavily dependent on the format of
the restructured strings to interpret it correctly. Since these clients are
part of fast-paced projects, the output format is subject to change
unpredictably on each new release (which happens fairly frequently;
keystoneclient, for example, releases every few days or weeks
https://github.com/openstack/python-keystoneclient/releases), causing
the puppet modules to have a high maintenance cost. Moreover, the clients often
output warning messages that are not useful to the puppet modules and are
currently explicitly ignored. This adds additional complexity to parsing the
client output.
A related problem is that the providers currently rely on the presence of a
service configuration file on localhost. This means that the puppet
modules must rely on authentication information residing in a configuration
file on the node, whose state may or may not be managed by Puppet and
therefore may be improperly configured or missing (due to misconfiguration or
due to the service residing on another host).
Finally, there is some common functionality, such as configuration of
authentication endpoints and credentials that a number of modules share that
is currently duplicated across the modules. This will lead to inconsistency
across the modules.
Proposed change
===============
The proposed solution is to use the Aviator Ruby library instead of the
command-line clients as an interface to the OpenStack services. The
openstacklib module will add a dependency on the aimonb/aviator puppet
module. This module contains aviator as well as all of its dependencies within
its lib/puppet/features directory, so no gems need to be installed on the
master or the agents. Since Aviator returns pure JSON objects in its
responses, the providers in the puppet modules that need to interact with the
OpenStack REST API will parse the JSON returned directly from the service,
rather than the string output as interpreted by the command-line client. This
means that maintenance of the providers in the OpenStack puppet modules only
needs to keep up with changes to the REST API, and not with the command-line
clients, whose output is subject to change more frequently.
Using a Ruby library also helps promote the value that API wrappers should
reflect the idioms of the language in which they were written.
This change opens the opportunity to update the way the endpoint URI and
authentication parameters are handled in order to provide a greater degree of
configurability to the module user. Much of the logic for configuring endpoints
and credentials is duplicated across the modules and can be abstracted out to
the openstacklib module.
Alternatives
------------
Without implementing these changes in the providers, the module maintainers
will have to continue to watch for changes in the command-line clients and
update the providers to handle these changes appropriately.
Fog is another Ruby binding for OpenStack that could be used as an alternative
to Aviator, but it is too large to incorporate into the module and is too
overbearing for our needs.
The providers could implement the REST client functionality themselves, but
this adds significant complexity to the providers when a library can accomplish
this already.
Data model impact
-----------------
None
Module API impact
-----------------
Aviator provides a variety of ways to authenticate to OpenStack services,
among them providing credentials at run time. The existing types need to have
parameters added to accept these credentials as well as endpoint URIs. The
openstacklib module will implement a class Puppet::Provider::Aviator that
inherits from Puppet::Provider. This class will implement an authenticate
method and an endpoint method, both of which will accept parameters that can
be passed from a type's parameters as well as the service being utilized. The
providers in the openstack modules will use this class as a :parent, e.g.::
  Puppet::Type.type(:keystone_service).provide(
    :aviator,
    :parent => Puppet::Provider::Aviator
  ) do
The authenticate method will replace methods like auth_keystone and
auth_neutron. The endpoint method will replace methods like get_admin_endpoint
(in keystone) and get_auth_endpoint (in glance and neutron). These methods
currently rely on parsing the configuration file with methods such as
neutron_conf or keystone_file. Those methods will remain, but other methods
need to be written to accept credential and endpoint for the authenticate and
endpoint methods. They will fall back to reading the configuration file. This
will offer the modules a more robust and consistent way of authenticating
without relying on configuration files.
The authenticate method needs to ensure that a connection is retried
appropriately if needed.
The list_* methods will need to be rewritten to use the new authenticate and
endpoint methods.
The parse_* methods will need to be rewritten to parse JSON objects rather
than strings.
Every provider in the various OpenStack modules that currently uses a
command-line tool in order to function will be rewritten to use calls to
aviator. These can be identified by providers that contain a "commands" or
"optional_commands" statement to import a command-line tool as a Ruby
function.
The remove_warnings methods in puppet-keystone and puppet-glance will be
removed.
End user impact
---------------------
An end user of the OpenStack modules will need to pull in the openstacklib
module. They will have the option to supply authentication information to the
reworked types, but this will be optional and the providers will default to
using data from configuration files if these parameters are not defined.
Performance Impact
------------------
None
Deployer impact
---------------------
The user needs to install the openstacklib module prior to using the types that
interact with the REST endpoints in the OpenStack modules.
Developer impact
----------------
Developers of the module plugins will have to learn the Aviator API and will
no longer have to use the Python-based command-line clients.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
krinkle (crinkle on freenode)
Work Items
----------
* Update the Modulefile or metadata.json and .fixtures.yml in openstacklib to
reflect the new dependency on aimonb/aviator
* Add providers to openstacklib to wrap some of the common procedures that will
be done with Aviator, such as authentication
* Update the providers in the OpenStack modules, starting with puppet-
keystone, to use Aviator and the new functions provided by openstacklib
* Add new authentication and endpoint configuration parameters to the module
types, with options to configure from parameters specified in the manifest, a
.conf file, or openrc.
Dependencies
============
None
Testing
=======
Unit test fixtures of all the OpenStack puppet modules will need to be updated
to install openstacklib.
Documentation Impact
====================
None
References
==========
Relevant research:
* Error messages changing often
Bug: https://bugs.launchpad.net/puppet-keystone/+bug/1340447
The keystone command-line client is subject to change its output
unexpectedly, causing the puppet modules to fail to parse it properly.
* Retrying neutron connections
Bug: https://bugs.launchpad.net/fuel/+bug/1246795
Discussion: http://irclog.perlgeek.de/puppet-openstack/2014-07-21#i_9056413
Numerous error messages from the neutron command-line client indicate that
a request should be retried. Using HTTP responses via Aviator rather than
error strings to detect when a retry is necessary makes the regexes easier
to write and interpret.
* Caching query results
Bug: https://bugs.launchpad.net/puppet-neutron/+bug/1344293
Discussion: http://irclog.perlgeek.de/puppet-openstack/2014-07-16#i_9036216
Caching values during authentication can cause problems when setting up
multiple services within one puppet run. The rewritten providers will need to
be careful to fetch fresh values when authenticating to various services with
Aviator.

@@ -1,192 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Pacemaker provider for Openstack services
==========================================
https://blueprints.launchpad.net/puppet-nova/+spec/pacemaker-provider-for-openstack
It is common practice to have services managed by Pacemaker in Linux HA
clusters. Puppet modules for OpenStack should therefore be able to offer both
the standard and a Pacemaker service provider for OpenStack services.
In order to achieve this transparently (without modifications to the core
Puppet modules for OpenStack), special wrapper classes should be created in
puppet-openstack_extras. These wrappers should override the service provider
of the OpenStack services to Pacemaker and ensure creation of the
corresponding Pacemaker resources as well.
Problem description
===================
A detailed description of the problem:
Puppet modules for OpenStack configure services in the classic way, using the
standard system service providers.
For HA clusters, though, operators may want an option to delegate management
of OpenStack services to Pacemaker instead.
Pacemaker OCF scripts provide very good options for cluster-wide resource
management, and Puppet should deliver these options as well.
Proposed change
===============
A Pacemaker service provider and special wrapper classes, like
``openstack_extras::pacemaker::heat`` or ``openstack_extras::pacemaker::nova``,
should be created, as well as Resource Agent (OCF) scripts for the related
cluster resource management. Once a wrapper class is included in the Puppet
catalog, it would override the default service provider of the corresponding
OpenStack service configured by its base core module. That ensures a
transparent way of configuring OpenStack services for HA without any
modifications to the base core modules, such as puppet-nova or puppet-heat.
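As an illustration only (the class name, the ``pacemaker`` provider name and
the OCF resource details below are assumptions rather than an interface
defined by this spec), such a wrapper could amount to little more than a
collector override plus a cluster resource definition::

  class openstack_extras::pacemaker::nova {
    # Re-point the service managed by puppet-nova at the Pacemaker provider,
    # without modifying the base module itself.
    Service <| title == 'nova-api' |> {
      provider => 'pacemaker',
    }

    # Define the corresponding cluster resource via puppetlabs/corosync.
    cs_primitive { 'p_nova-api':
      primitive_class => 'ocf',
      provided_by     => 'openstack',
      primitive_type  => 'nova-api',
    }
  }
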
For the first stage of implementation, only the Pacemaker service provider
should be created and put into openstack_extras. This assumes that deployers
will use their own custom classes and Resource Agent (OCF) scripts for
cluster resource configuration.
The second stage should be the creation of basic HA wrapper classes shipped
with generic OCF scripts for the cluster resource definitions, which could
benefit every developer. Of course, there should also be a way for deployers
to use their own custom OCF scripts.
Note that the Puppet modules for OpenStack will not configure OpenStack
services in HA by default. Related infrastructure services (such as MySQL or
RabbitMQ) are not configured by the Puppet OpenStack core modules, so the HA
provider is not expected to cover them either; it could still be used to do
so if a user wants to switch some basic service provider to the HA one. By
design, it should work transparently for any service in the Puppet catalog.
The module for Pacemaker and Corosync to be used with this HA provider is
puppetlabs/corosync.
Alternatives
------------
Another option is to use puppet-openstacklib instead of openstack_extras.
There is also an option to switch to another module for Pacemaker and
Corosync instead of puppetlabs/corosync. puppetlabs/corosync does not support
clones or location constraints and operates via the pcs/crmsh CLI, which does
not work well when the cluster is being deployed in parallel. So it could
make sense to redesign it completely, switch to another module, or provide an
option to specify which module the deployer wants to use.
A completely different approach could be considered as well.
Pacemaker provider configuration could be ensured as a simple pass-through,
i.e. passing some external class name (and its parameters as a hash) into the
corresponding OpenStack classes. This approach resembles the one used for
``rabbitmq_class``, which uses ``rabbitmq::server`` as a default value but
could be anything else. In this case it would be some Pacemaker- or
Corosync-related class instead.
Each OpenStack class providing a service should then also be extended with
the following parameters:
* A ``pacemaker_provider`` (or ``ha-mode``) parameter. The default value
  should be ``false`` for backward compatibility reasons.
* A ``corosync_class`` or ``pacemaker_class`` reference to some external
  class which should be used for the Corosync (Pacemaker) resource
  definitions. The default value should be ``false`` for backward
  compatibility reasons.
* A ``resource_hash`` parameter whose values are passed into the
  aforementioned class.
Data model impact
-----------------
None
Module API impact
-----------------
None
End user impact
---------------------
The user will be able to configure OpenStack services in a much
more flexible and highly available way.
Performance Impact
------------------
There would be no performance impact if the default service provider for
OpenStack services is used. The performance of the Pacemaker provider is
tightly tied to the Pacemaker or Corosync Puppet module chosen by the user.
In any case, the cluster resource definitions for services only need to be
created once, so the performance impact will be minimal in the overall
deployment process and near to none in later operations.
Deployer impact
---------------------
The deployment behavior will not change at all for the case with wrapper
classes in puppet-openstack_extras (or openstacklib).
The only action required would be to include the wrapper classes for
OpenStack services in the catalog in order to ensure the Pacemaker provider
for them.
For the approach with an external class reference, the deployer would have to
provide a valid Pacemaker or Corosync class reference to some external module
in the catalog, and a valid hash of parameters for it.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
bogdando <bdobrelia@mirantis.com>
Other contributors:
idv1985 <dilyin@mirantis.com>
Work Items
----------
* Create a service provider for Pacemaker (HA provider) as a part of the
openstack_extras.
* Create basic HA wrapper classes shipped with generic OCF scripts for
the cluster resource definitions, and an option to specify custom OCF
scripts as well.
* Describe in documentation how HA provider could be used with Puppet modules
for Openstack to configure Openstack services in HA.
* (optional) Describe in documentation how HA provider could be used to
configure related services, such as RabbitMQ and MySQL in HA.
Dependencies
============
None
Testing
=======
The feature should be tested with the rspecs provided.
Documentation Impact
====================
The feature should be described in the docs for puppet-openstack_extras
(or openstacklibs) module or core Puppet modules for Openstack for the case
with external Corosync (Pacemaker) classes.
New wrapper classes should be described and usage examples provided.
References
==========
None

@@ -1,160 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=======================================
Use OpenstackClient in Module Resources
=======================================
https://blueprints.launchpad.net/puppet-openstacklib/+spec/use-openstackclient-in-module-resources
This spec proposes to alter the course proposed in blueprint:
use-aviator-in-module-resources. The preferred solution to the problem
description is to use the universal OpenStack command-line client.
Problem description
===================
The original problem is framed in the original Aviator blueprint.
The problem that this spec proposes to solve is that using the API
directly adds complexity, which decreases maintainability. The
providers must manage HTTP sessions themselves, which reinvents the
wheel.
Aviator also does not appear to be actively keeping up with new changes
in OpenStack. It does not yet have support for the Neutron API. It also
only has partial support for the Keystone V3 API, which is an immediate
requirement. The workload to contribute upstream to Aviator to fulfill
these requirements is quite high compared to the workload involved to
incorporate openstackclient.
Proposed change
===============
Work to use Aviator in the base provider in the openstacklib module has
already been done. This work lays out the options that providers can use
for authenticating against the REST APIs. The work to convert the
providers to use the Aviator base provider has not been completed.
The change would simply swap the calls to the Aviator library with calls
to the openstack command. The base provider will no longer have to
manage sessions itself, which means not having to differentiate between
a password-authenticated session and using a token directly.
OpenStackClient is actively keeping up with API changes and is rapidly
developing, so we will monitor its progress and work with the developers
to get features we need and fix bugs.
Alternatives
------------
The alternative is to continue on the path with Aviator.
Data model impact
-----------------
None.
Module API impact
-----------------
The log_file parameter that puppet/util/aviator added to the puppet
types would no longer be necessary since that was a requirement only
for Aviator.
The OpenStack client is bundled with other OpenStack services so it
needs a manifest to install it explicitly.
The version of openstackclient available on Debian, Ubuntu, and RedHat
is 0.4.0. A 1.0.0 release will be available by Kilo. In the meantime,
the 0.4.0 version is sufficient for our needs as it can format its
output in CSV format.
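As an illustration of the explicit installation mentioned above, a minimal
sketch of such a manifest (the package name varies per distribution;
``python-openstackclient`` is assumed here)::

    package { 'python-openstackclient':
      ensure => present,
    }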
We need to take advantage of OpenStackClient's Keystone API v3
capabilities in a Juno feature release of the keystone module. In order
for the keystone module to maintain backwards compatibility during this
cycle, we will first incorporate the base provider into the keystone
module, since we cannot bump the dependency on openstacklib without a
major release. Once these changes are backported into the Juno branch,
we will extract the base provider into the openstacklib module in
preparation for the Kilo release. Then we can migrate providers from
the other modules on their master branches, targeting Kilo.
End user impact
---------------------
There should be no end user impact.
Performance Impact
------------------
OpenStackClient may not be as fast as using the API directly, but it is
certainly not slower than using the individual command line clients.
OpenStackClient plans to soon provide the ability to cache resources
locally in order to speed up requests, so using that functionality
should increase performance.
Deployer impact
---------------------
None.
Developer impact
----------------
The parameters of the base provider's request() method can change
slightly to better mirror how parameters will be passed to openstack(),
but this is not a hard requirement as long as all the necessary
information is passed to openstack().
Implementation
==============
Assignee(s)
-----------
Primary assignee:
krinkle
richm
Work Items
----------
* Rewrite the base provider in openstacklib to reflect these changes. Work
for this has already been started.
* Rewrite the keystone providers to inherit from the new base provider
and utilize its methods.
* Rewrite providers for other modules
Dependencies
============
None
Testing
=======
Unit tests will be revised and simplified. Rather than using VCR with
HTTP recorded sessions as fixtures we will simply stub the openstack()
method.
Documentation Impact
====================
None
References
==========
* Proofs of concept, still works in progress:
- Base provider: https://review.opendev.org/#/c/134843/
- keystone_tenant rewritten: https://review.opendev.org/#/c/134844/
* Mailing list discussion:
- https://groups.google.com/a/puppetlabs.com/forum/#!topic/puppet-openstack/GJwDHNAFVYw
* IRC discussion: (starting at 14:36:50)
- http://eavesdrop.openstack.org/meetings/puppet_openstack/2014/puppet_openstack.2014-11-17-14.01.log.html

View File

@ -1,764 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===================================================
Support Keystone v3 API in OpenStack puppet modules
===================================================
Launchpad blueprint:
https://blueprints.launchpad.net/puppet-keystone/+spec/api-v3-support
Beginning with Icehouse, Keystone has declared the v3 API (URLs with /v3) fully
supported, and beginning with Kilo, Keystone has deprecated the v2 API (URLs
with /v2.0). The OpenStack puppet modules should use the v3 API wherever
possible. Keystone will continue to fully support v2 and v3 at the same time
for the next couple of cycles.
.. _project-is-tenant-note:
.. note:: Project is used instead of tenant
Keystone has renamed "tenant" to "project". Everywhere in the v3 API,
`openstack` command line tool, environment variables, Keystone documentation,
and elsewhere, project is used instead of tenant. However, in
puppet-keystone, we still use the keystone_tenant resource.
Problem description
===================
There are a number of features that are only available with the v3 API:
* The ability for one user to delegate to another user the authority to
perform certain tasks, using the Keystone v3 Trusts extension. For example:
* Have a service launch an instance on behalf of a user without that user
having to first authenticate (e.g. for auto-scaling at 3 AM).
* Delegate restricted nova API access to (implicitly) untrusted in-instance
processes.
* Provide access to accounts and containers on swift without using ACLs.
* http://techs.enovance.com/5858/role-delegation-in-keystone-trusts
* The ability to have multiple domains, each with its own separate identity
backend. One of the most important use cases is to put the OpenStack service
accounts in a "service" domain, backed by an SQL identity backend, and have
the default domain (that is, the domain that is used if not explicitly
specified in the request) point to the enterprise LDAP server, both in
read-write and read-only mode. In order to do this, the services will need
to use v3 for authentication and specify the domain for the user and for the
project.
Proposed change
===============
In Keystone, the *domain* is the container for users, groups, and projects.
Other identity resources, such as roles, services, and endpoints, are not
contained within a domain. In order to uniquely identify a Keystone resource
by name, it must be qualified with the domain name. This is a problem with
naming puppet resources, which must have a unique name. The string "::" can be
used to construct a unique Puppet resource name from a name part and a domain
name part. However, existing manifests expect to be able to name resources
without the domain part, and it may be difficult for operators to change
manifests to make all Keystone resource names have the domain in them.
Therefore, the puppet-keystone v3 support implementation must support current
manifests as much as possible. Consider this case::

    # Composition layer
    class { '::glance::keystone::auth':
      region           => $region,
      password         => $glance_user_password,
      public_address   => $public_url,
      admin_address    => $admin_url,
      internal_address => $internal_url,
    }

    # puppet-glance
    class glance::keystone::auth(
      $password,
      $email               = 'glance@localhost',
      $auth_name           = 'glance',
      $configure_user      = true,
      $configure_user_role = true,
      ...
      $tenant              = 'services',
      ...
    ) {
      ...
      keystone::resource::service_identity { $auth_name:
        configure_user      => $configure_user,
        configure_user_role => $configure_user_role,
        ...
        tenant              => $tenant,
        ...
      }
      ...
    }

    # puppet-keystone
    define keystone::resource::service_identity(
      ...
      $auth_name = $name,
      ...
      $tenant    = 'services',
      ...
    ) {
      ...
      keystone_user { $auth_name:
        ensure                => 'present',
        enabled               => true,
        password              => $password,
        email                 => $email,
        tenant                => $tenant,
        ignore_default_tenant => $ignore_default_tenant,
        domain                => $user_domain_real,
      }

      if $configure_user_role {
        keystone_user_role { "${auth_name}@${tenant}":
          ensure => 'present',
          roles  => $roles,
        }
      }
    }
The puppet-keystone code must be able to use just `glance` as the name without
it having to be qualified by a domain. The puppet-keystone code must be smart
enough to figure out that there is only one user named `glance` among all of
the domains, figure out which domain it is in, and use it. Same with the
`service` project, and with the `glance@service` user role. As long as there
is only one resource with the given name, the code should work. However, if
there is a `glance` user in `domain1`, and a `glance` user in `domain2`, it is
the responsibility of the manifest writer to fully qualify both user names.
There is no way for puppet-keystone, when given a username of `glance`, to know
if it is referring to `glance::domain1` or `glance::domain2`. We will need to
warn puppet users that the use of "::" in Keystone resource names (such as a
user named "foo::bar") is not supported.
.. _back-compat-note:
.. note:: Resource names with "::domainname" are not required if the resource names are unique
One of the goals of this effort is to preserve backwards compatibility with
existing manifests and other related Puppet code. You do not have to declare
a resource name with "::domainname" if the base resource name is unique among
all domains. For example, if you have only one user named `glance` among all
domains, you can refer to this using only 'glance' in keystone_user and
keystone_user_role resources. The puppet-keystone code will determine if
there is only one user named glance among all domains, and will know that if
you just specify `glance` it means the unique user named glance in whatever
domain it happens to be in. However, if the resource name is **not** unique,
you **must** specify the '::domainname' part in the resource name.
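As an illustrative sketch of this backwards compatibility, either of the
following forms identifies the same user as long as ``glance`` is unique
among all domains (the domain name below is hypothetical)::

    keystone_user { 'glance': ensure => present }
    # equivalent, fully qualified form:
    # keystone_user { 'glance::services_domain': ensure => present }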
.. _finding-unique-resources-note:
.. note:: How to find unique resources
You can use `openstack user list --long` and `openstack project list --long`
to search through all domains (with admin credentials - that is, use a user
that has the role `admin`), then grep for users that you want to ensure are
unique. If the users are coming from your enterprise identity provider
(e.g. an LDAP server, an SQL database), use the tools provided to search for
users. The `keystone_user` and `keystone_tenant` resource code will both
search through all domains looking for user and project resources, so you can
also use `puppet resource keystone_user` and `puppet resource
keystone_tenant` to see what Puppet's "view" is of the deployment.
Puppet resource declarations that have a name with "::" and a domain part will
look like this::
keystone_user { 'admin::services': ...}
keystone_user { 'admin::users': ...}
keystone_tenant { 'admin::services': ...}
These would declare two different admin users - one for the "services" domain,
and one for the "users" domain, and create a project called "admin" in
the services domain.
The current keystone_user_role resource looks like this::
keystone_user_role { 'glance@services': roles => ['admin'] }
This assigns the user "glance" to the role of "admin" in the project
"services". With Keystone v3, if you need to specify a domain name, the
declarations will look like this::
keystone_user_role { 'sysadmin::admin_domain@administrators::services': roles => ['admin']}
keystone_user_role { 'sysadmin::admin_domain@::users': roles => ['admin']}
This would assign the user "sysadmin" in the domain "admin_domain" to the role
"admin" in the project "administrators" in the domain "services". The last one
would assign the user "sysadmin" in the domain "admin_domain" the role of
"admin" in the domain "users". If the project name (the string after the "@"
character) starts with "::", `keystone_user_role`, will assume it is a domain
name and this role is a domain scoped role instead of a project scoped role.
When you name a resource with '::domainname', you **must** consistently use
that name everywhere. For example, if you need to have multiple projects in
different domains with the name `service`, you will need to declare the
resource like this: `keystone_tenant { 'service::domain1': ...}`. If you
use that project in a keystone_user resource, you **must** use it like this:
`keystone_user { 'username': tenant => 'service::domain1', ... }`. If you use
that project in a keystone_user_role resource, you **must** use it like this:
`keystone_user_role { 'username@service::domain1': ... }`.
The resources that are part of a domain, namely keystone_user, keystone_tenant,
keystone_group, will have a **domain** parameter. The domain parameter will
override any domain specified in the name. For example, if you have::
keystone_user { 'user::foodomain':
domain => 'bardomain',
}
The domain parameter 'bardomain' would override the domain specified in the
resource title 'foodomain'. In this case, the domain in the title would just
serve to make the resource unique (as opposed to "user" or
"user::anotherdomain").
Why have both a domain parameter and a domain in the resource name? There may
be cases where you want to override the domain. Using a parameter makes it
easier to use some of the puppet facilities for doing overrides. In most
cases, the domain parameter would not be used. Instead, the domain would be
specified as part of the resource name, in order to unique-ify the resource
name.
.. note:: About the "@" character in keystone_user_role
The "@" character is a valid character in Keystone user names. The way
keystone_user_role works is that everything before the **last** "@" is part
of the username, and everything after the **last** "@" is part of the project
name. So "user@domain.com@project" has a username of "user@domain.com" and a
project name of "project". This also means that the "@" character is **NOT**
a valid character in project names or in domain names. Keystone v3 doesn't
change this, this is the way it has always been, as a side effect of choosing
"@" as the delimiter character for keystone_user_role.
For legacy applications, there are two cases:
* The v2.0 api is being used - using a URL ending in "/v2.0" and using
OpenStack with `OS_IDENTITY_API_VERSION=2`, or omitting the api version
altogether. In this case, Keystone, not puppet-keystone, will implicitly add
a 'default domain' to the request. The default domain is specified in the
Keystone config with the **identity/default_domain_id** configuration
parameter.
.. _domain-search-note:
* The v3 api is being used - using a URL ending in "/v3" and using OpenStack
with `OS_IDENTITY_API_VERSION=3`. In this case, the Keystone server side
will **not** implicitly add a default domain - the domain must be explicitly
specified on the client side, either in the resource title (`keystone_user {
'admin::domain' }`) or in a parameter (`keystone_user {'admin': domain =>
'domain'}`). In order to accommodate legacy puppet code that does not
specify a domain in either of these two ways, the keystone provider will
attempt to determine a domain to use by the following methods in order:
* default_domain_id from keystone.conf
* 'Default' - the Keystone "default" default domain, if none is specified in
identity/default_domain_id in keystone.conf
.. note:: Using default_domain_id and other settings from keystone.conf
Using the default_domain_id from keystone.conf is considered *deprecated*.
Developers *must* allow the domain to be explicitly provided everywhere, and
users should specify the domain wherever possible.
Keystone v3 provides two new API objects - *domain* and *group*. There will
need to be puppet-keystone resources for each of these - **keystone_domain**
and **keystone_group**. In order to create roles with groups, there will need
to be a **keystone_group_role** resource. This resource will work exactly like
keystone_user_role, except for groups.
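Since keystone_group and keystone_group_role are stated to work just like
their user counterparts, their usage can be sketched as follows (the resource
names here are purely illustrative)::

    keystone_group { 'admins::admin_domain':
      ensure => present,
    }
    keystone_group_role { 'admins::admin_domain@administrators::services':
      roles => ['admin'],
    }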
The **openstack** command line tool provided by the python-openstackclient
package has full Keystone v3 support. This command will be used by puppet to
access the Keystone v3 API and features. Version 1.0.2 or later of this
command is required for all features. The `puppet-openstacklib` `openstack`
provider will support using Keystone v2 or v3 credentials.
All of the domain-aware Keystone resources (such as keystone_user) will always
add a *domain* argument for those operations which require a domain (such as
creating a user), or use the resource *id* instead of the resource *name*
wherever possible to avoid having to use the domain. In Keystone, the *id* is
guaranteed to be unique among all domains.
For trust support, a **keystone_trust** resource will be added. This resource
will be used to manage trusts.
**Puppet Modules Other Than puppet-keystone**
There are a few changes that will need to be made to every OpenStack Puppet
module to allow Keystone v3 authentication:
* Allow specifying the domain for users and projects
Any place where a manifest specifies a user or project used for Keystone
authentication will need to be changed to add parameters for the domain of the
user and the domain of the project (and renaming tenant to project is
probably a good idea since Keystone uses *project* in Icehouse and later
:ref:`Use Project <project-is-tenant-note>`).
In addition, the resource name may be specified as `name::domainname`. The
puppet-keystone code will handle this case. It should not be necessary for
other puppet modules to split resource names by "::" to get the base name part
and the domain part. It should just pass these names down to the
puppet-keystone code which should handle it.
For ease of use, an additional *default domain* parameter can be added which
will be used for both the user and project. For example, from glance::

    class glance::api(
      ...
      $keystone_tenant         = 'services',
      $keystone_user           = 'glance',
      ...
      $keystone_user_domain    = undef,
      $keystone_project_domain = undef,
      $keystone_default_domain = undef,
      ...
    )
`$keystone_user_domain` is used to specify the domain of the `$keystone_user`,
and `$keystone_project_domain` is used to specify the domain of the
`$keystone_tenant`. If the user or the project domain is omitted, and
`$keystone_default_domain` is specified, then that value will be used for the
missing user or project domain. Puppet modules that pass authentication
parameters will need to be able to pass domain arguments. For example, from
`puppet-glance/lib/puppet/providers/glance.rb`::

    def self.request(service, action, properties=nil)
      super
    rescue Puppet::Error::OpenstackAuthInputError => error
      glance_request(service, action, error, properties)
    end
That is, try the `openstack` request with the provided credentials, and if that
fails, fall back to try the request again with the credentials from the glance
config file. The module local request method (e.g. `glance_request`) will need
to be able to pass in the user domain, project domain, and other v3
authentication parameters from its config file as authentication arguments.
* Use `keystone::resource::authtoken` and the new `keystone_authtoken` parameters in config files
The application config files are usually managed with a config resource. For
example, the file `/etc/glance/glance-api.conf` is managed with a
`glance_api_config` resource, `/etc/glance/glance-registry.conf` is managed
with a `glance_registry_config` resource, etc. The config section that
contains the Keystone authentication parameters is `keystone_authtoken`. For
v3, there are some name changes (`admin_user => username`) and several new
parameters for domains and other v3 resources. To make it easier to manage
this section, a new Keystone resource `keystone::resource::authtoken` has been
added. For example, instead of doing this::

    glance_api_config {
      'keystone_authtoken/admin_tenant_name': value => $keystone_tenant;
      'keystone_authtoken/admin_user'       : value => $keystone_user;
      'keystone_authtoken/admin_password'   : value => $keystone_password,
        secret => true;
      ...
    }

manifests should do this instead::

    keystone::resource::authtoken { 'glance_api_config':
      username            => $keystone_user,
      password            => $keystone_password,
      auth_url            => $real_identity_uri,
      project_name        => $keystone_tenant,
      user_domain_name    => $keystone_user_domain,
      project_domain_name => $keystone_project_domain,
      default_domain_name => $keystone_default_domain,
      cacert              => $ca_file,
      ...
    }
The use of `keystone::resource::authtoken` makes it easy to avoid mistakes,
and makes it easier to support some of the newer authentication types coming
with Keystone Kilo and later, such as Kerberos, Federation, etc.
`keystone::resource::authtoken` knows how to handle the case where the
`username` is specified as `user::domainname` and will use the `domainname` part
as the `user_domain_name` if the `user_domain_name` is not provided. Same with
`project_name`.
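For example, in the following sketch (the values are illustrative) the
`user_domain_name` would be derived as `services` from the `username`
without being passed explicitly::

    keystone::resource::authtoken { 'glance_api_config':
      username     => 'glance::services',
      password     => $keystone_password,
      auth_url     => $real_identity_uri,
      project_name => 'services',
    }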
* Do not use a version suffix in Keystone authentication URLs
For example, for any URL used to perform authentication to Keystone, such as
provided by the `OS_AUTH_URL`, `--os-auth-url`, or similar configuration
parameters, do not add a Keystone API version suffix to the URL. For example,
use `http://keystone.host:35357/` instead of `http://keystone.host:35357/v2.0/`
or `http://keystone.host:35357/v3/`. Both the `openstack` resource provider,
and the `keystonemiddleware` used for service to Keystone authentication, will
determine if v3 can be used based on the parameters given.
Alternatives
------------
There aren't really any alternatives for supporting domains, groups, and
trusts.
Data model impact
-----------------
* The addition of the keystone_trust, keystone_domain, keystone_group,
keystone_group_role resources
This will mostly be using the existing resources as a template to create the
new resources. This should be very straightforward.
* The addition of domain and group to the other puppet-keystone resources
For example, being able to create a user in a specific domain will require the
addition of domain property to keystone_user. Likewise with projects and
keystone_tenant. For service accounts, the resource
keystone::resource::service_identity hides most of the actual implementation,
so it should be easy to assign domains to service accounts without having to
change other puppet modules.
Module API impact
-----------------
Each API method which is either added or changed should have the following,
depending upon whether it is a new class or an addition to an existing class.
* New defined resource types:
* Name
keystone_domain
* Description
This resource represents a Keystone domain. It can optionally be used to
ensure that the default_domain_id is set.
* Parameters for keystone_domain:
name : Domain name;
string; required; namevar
enabled : Domain is enabled for use;
boolean; optional: default to True
description : Domain description;
string; optional; default to undef
id : Domain id assigned by Keystone;
string; required; read-only
is_default : If this is true, the specified domain is the default domain,
and the provider will ensure that this is the [identity]
default_domain_id value in the keystone.conf file;
boolean; optional: default to False
* Example use::

    keystone_domain { 'services':
      ensure      => present,
      description => 'Domain to use for service accounts',
      enabled     => true,
    }
* Name
keystone::resource::authtoken
* Description
This resource provides a convenient, safe way to update the Keystone
authentication parameters in application config files which use a
`*_config` resource.
The username and project_name parameters may be given in the form
"name::domainname". The authtoken resource will use the domains in
the following order:
1) The given domain parameter (user_domain_name or project_domain_name)
2) The domain given as the "::domainname" part of username or project_name
3) The default_domain_name
* Parameters for keystone::resource::authtoken
[*name*]
The name of the resource corresponding to the config file. For example,
keystone::authtoken { 'glance_api_config': ... }
Where 'glance_api_config' is the name of the resource used to manage
the glance api configuration.
string; required
[*username*]
The name of the service user;
string; required
[*password*]
Password to create for the service user;
string; required
[*auth_url*]
The URL to use for authentication.
string; required
[*auth_plugin*]
The plugin to use for authentication.
string; optional: default to 'password'
[*user_id*]
The ID of the service user;
string; optional: default to undef
[*user_domain_name*]
(Optional) Name of domain for $username
Defaults to undef
[*user_domain_id*]
(Optional) ID of domain for $username
Defaults to undef
[*project_name*]
Service project name;
string; optional: default to undef
[*project_id*]
Service project ID;
string; optional: default to undef
[*project_domain_name*]
(Optional) Name of domain for $project_name
Defaults to undef
[*project_domain_id*]
(Optional) ID of domain for $project_name
Defaults to undef
[*domain_name*]
(Optional) Use this for auth to obtain a domain-scoped token.
If using this option, do not specify $project_name or $project_id.
Defaults to undef
[*domain_id*]
(Optional) Use this for auth to obtain a domain-scoped token.
If using this option, do not specify $project_name or $project_id.
Defaults to undef
[*default_domain_name*]
(Optional) Name of domain for $username and $project_name
If user_domain_name is not specified, use $default_domain_name
If project_domain_name is not specified, use $default_domain_name
Defaults to undef
[*default_domain_id*]
(Optional) ID of domain for $user_id and $project_id
If user_domain_id is not specified, use $default_domain_id
If project_domain_id is not specified, use $default_domain_id
Defaults to undef
[*trust_id*]
(Optional) Trust ID
Defaults to undef
[*cacert*]
(Optional) CA certificate file for TLS (https)
Defaults to undef
[*cert*]
(Optional) Certificate file for TLS (https)
Defaults to undef
[*key*]
(Optional) Key file for TLS (https)
Defaults to undef
[*insecure*]
If true, explicitly allow TLS without checking server cert against any
certificate authorities. WARNING: not recommended. Use with caution.
boolean; Defaults to false (which means be secure)
* Example use::

    keystone::resource::authtoken { 'glance_api_config':
      username            => $keystone_user,
      password            => $keystone_password,
      auth_url            => $real_identity_uri,
      project_name        => $keystone_tenant,
      user_domain_name    => $keystone_user_domain,
      project_domain_name => $keystone_project_domain,
      default_domain_name => $keystone_default_domain,
      cacert              => $ca_file,
    }
* New parameters for keystone_user, keystone_tenant:
* Name
domain : Domain name;
string; optional: default see :ref:`Domain Search Note <domain-search-note>`
* Description
Name of the domain to which the resource belongs.
* Resources affected:
keystone_user
keystone_tenant
* Reason for addition:
With Keystone v3, you can have a resource with the same name in multiple
domains. It is the combination of name and domain that uniquely identifies
a resource. With Keystone v3, users, groups, and projects exist inside
domains, so the domain must be specified when creating these resources.
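* Example use (an illustrative sketch of the new domain parameter; the other
attributes follow the keystone_user examples elsewhere in this spec)::

    keystone_user { 'glance':
      ensure => present,
      domain => 'services',
    }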
* Changed parameters:
* Name:
title : The Puppet resource title;
string; required;
* Resources affected:
keystone_user
keystone_tenant
keystone_user_role
keystone_group
keystone_group_role
keystone::roles::admin
keystone::resource::service_identity
* Reason for change:
With Keystone v3, you can have a resource with the same name in multiple
domains. In Puppet, you cannot have two resources with the same title.
With Keystone resources, the title is usually also the name of the
resource. By using "name::domain" in the resource title, you can uniquely
identify the resource to Puppet, as well as specify both the resource name
and domain. The affected resources have been changed to look for a title
in the form "name::domain".
* Example use::

    keystone_user { 'admin::admin_domain':
      ensure   => present,
      enabled  => true,
      tenant   => 'admin',
      email    => 'admin@localhost',
      password => 'itsasecret',
    }
End user impact
---------------
There should be no mandatory end user impact. Existing manifests should
continue to work exactly as before. See also :ref:`Backwards Compatibility Note <back-compat-note>`.
Users wanting to create users and projects in domains other than the default
domain will need to change their manifests in order to pass in the domain or
specify the domain in the resource title. Note that there may be several
layers of resources/classes in manifests before the actual declaration of the
keystone resource, so intermediate resources/classes may need to be changed so
that the domain can be passed all the way down to the keystone resource.
Performance Impact
------------------
There will be additional domain lookups, in order to map domain ids to domain
names in certain calls. For example, when creating a user, the create call
will return the domain id, not the domain name, but the domain name is needed
for resource name/title/domain comparisons. The keystone provider provides
utility methods for this, and will cache the results of domain lookups.
Deployer impact
---------------
None other than what has been already mentioned.
Developer impact
----------------
Developers will have to make themselves aware of the new Keystone v3
authentication parameters mentioned in the links and elsewhere in this
documentation, and will need to make sure composition layers, config files,
etc. allow the specification of those parameters such as user domains, project
domains, etc.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
rmeggins (IRC nick richm)
Other contributors:
gilles@redhat.com (IRC nick gildub)
ichavero (IRC nick imcsk8)
Work Items
----------
puppet-keystone
* Create keystone_domain resource
* Create domain splitting utility code
* Create domain id to name mapping code
* Add domain support to keystone_user
* Add domain support to keystone_tenant
* Add domain support to keystone_user_role
* Add default domain support to class keystone
* Add admin user domain, admin tenant domain, and service tenant domain to
class keystone::roles::admin
* Create keystone::resource::authtoken
* Convert keystone_service and keystone_endpoint to use the v3 api
* Convert keystone_role to use the v3 api
Other puppet-modules
* Allow specifying the domain for users and projects
* Use `keystone::resource::authtoken` and the new `keystone_authtoken`
parameters in config files
* Do not use a version suffix in Keystone authentication URLs
Dependencies
============
* https://blueprints.launchpad.net/puppet-openstacklib/+spec/use-openstackclient-in-module-resources
* Use OpenstackClient in Module Resources
* puppet-keystone has already been converted to use the openstack provider.
All other puppet modules will need to be converted to use the openstack
provider from puppet-openstacklib.
* https://blueprints.launchpad.net/puppet-openstacklib/+spec/auth-consolidation
* Restructures authentication for resource providers
* puppet-keystone has already been converted to do this.
All other puppet modules will need to be converted.
* python-openstackclient version 1.0.2 or later is required for full
functionality. 1.0.1 may be used for testing purposes, but will not provide
the full functionality required.
* python-keystonemiddleware version 1.3 or later is required for OpenStack
services to perform service to Keystone v3 authentication.
Testing
=======
If and when tempest tests are available for puppet-keystone, tests of the
functionality in this blueprint should be added.
Documentation Impact
====================
The README.md and the examples in the examples directory will be updated.
References
==========
Openstack client: http://docs.openstack.org/developer/python-openstackclient/
Keystone v3 REST API: http://developer.openstack.org/api-ref-identity-v3.html
Trust extension:
https://github.com/openstack/identity-api/blob/master/v3/src/markdown/identity-api-v3-os-trust-ext.md
and: https://wiki.openstack.org/wiki/Keystone/Trusts
Service to Keystone v3 authentication:
http://www.jamielennox.net/blog/2015/02/23/v3-authentication-with-auth-token-middleware/
New Keystone v3 authentication parameters in config files:
http://www.jamielennox.net/blog/2015/02/17/loading-authentication-plugins/

View File

@ -1,199 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==================================================================
Restructure authentication for OpenStack Puppet resource providers
==================================================================
Launchpad blueprint:
https://blueprints.launchpad.net/puppet-openstacklib/+spec/auth-restructure
Since OpenStack Puppet module providers manage their resources using
python-openstackclient (OSC), the 'openstack' CLI, to interface with OpenStack,
the mechanism used for holding authentication data is causing
unnecessary complexity and inconsistent behavior for providers depending on
their usage context.
Problem description
====================
The authentication data is currently stored as a type parameter ':auth'.
Having the authentication information as a type parameter makes the credentials
available in an instance context but not in a class context, such as when a
resource is used in a general context, for example in the self.instances method.
This has several consequences:
1. It creates code duplication down the road to work around the fact that there
is no authentication data provided in a general context, for example in the
keystone provider:
* lib/puppet/provider/keystone_tenant/openstack.rb#L60
* lib/puppet/provider/keystone_tenant/openstack.rb#L73
When dealing with authentication, each provider has to duplicate every method
with regard to instance or class context.
2. Consequently, it appears that even in a general context, authentication
information has to be obtained from somewhere, which means the authentication
mechanism needs to be revisited so that credentials are set in a consistent
way no matter the context.
Also, independently of this problem but related to the authentication data, a
security issue came up when using command line options to pass credentials to
the openstack command, for instance '--os_password=blah'. The risk is for a spawned
process to have the information displayed, e.g. with a 'ps' command.
And finally, the way the credentials are obtained changes with step
2, which formalizes the default RC file to be used if needed:
1. Environment variables OS_* are used by default and will be overridden only if
defined below.
2. If the credentials are incomplete, then the OS_* variables defined in an RC
file are used. The default is located in the current user's (per execution of
Puppet) home directory: $HOME/openrc
3. If authentication fails, it is up to the provider to implement a failsafe
that uses a configuration file to find the credentials.
Proposed change
===============
Hence the need to review the architecture of handling the authentication
data while offering the same features to the OpenStack providers.
The first change removes the :auth parameter from the types.
This is replaced with a combination of an auth module and a credentials object,
replacing the polymorphism approach used by providers inheriting from the
Puppet::Provider::Openstack class with a module.
The authentication related methods are moved to the module.
The module creates an interface to the provider classes.
When a provider 'extends' the module, all the methods defined in the module
are available to the 'inheriting' class as class methods.
The authentication methods act at a sublevel, leaving the superclass
Puppet::Provider::Openstack to deal purely with interfacing with the openstack client.
The openstacklib structure is used as follows, using the keystone provider as an example::

    class Puppet::Provider::Openstack
    end

    class Puppet::Provider::Keystone < Puppet::Provider::Openstack
    end

    class Puppet::Provider::Keystone_tenant < Puppet::Provider::Keystone
    end

The module is extended by the provider intermediate class, for instance::

    class Puppet::Provider::Keystone < Puppet::Provider::Openstack
      extend Puppet::Provider::Openstack::Auth
    end
Secondly, a credentials class Puppet::Provider::Openstack::Credentials
not only makes manipulating the authentication data easier, but also allows a class
instance variable @credentials to hold the information in a dedicated object.
To address the security issue:
Credentials passed on the CLI must be forbidden in order not to be visible via ps
commands.
This is enforced by using the `withenv` method from the Puppet Util library:
https://github.com/puppetlabs/puppet/blob/3.0.0/lib/puppet/util.rb#L39-L51
Note:
Using ENV instead of CLI parameters still doesn't remove issues such as:
"puppet resource keystone_user foo email=foo@example.com password=test"
In order to avoid the above, the related provider type API would have to be
changed as well.
Alternatives
------------
This is an alternative to the existing solution; other ways have not been
envisaged so far.
Data model impact
-----------------
The goal of this effort is: no data model impact. Existing manifests should
continue to work as before.
Module API impact
-----------------
None other than what has been already mentioned.
End user impact
---------------------
There should be no mandatory end user impact. Existing manifests should
continue to work exactly as before.
Operators wanting to use different authentication should be able to provide it.
Performance Impact
------------------
The `request` method and helper methods used by it should cache the contents of
files instead of opening/reading/parsing/closing every time.
Deployer impact
---------------------
None other than what has been already mentioned.
Developer impact
----------------
None.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
gilles@redhat.com (IRC nick gildub)
Other contributors:
rmeggins (IRC nick richm)
Work Items
----------
* Implement the code described in the "Module API impact" section.
Dependencies
============
None
Testing
=======
Write tests for the new beaker CI test framework.
Documentation Impact
====================
The README.md and the examples in the examples directory will be updated.
References
==========
Openstack client: http://docs.openstack.org/developer/python-openstackclient/
Openstack client config file: http://docs.openstack.org/developer/python-openstackclient/configuration.html#configuration-files

View File

@ -1,124 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
========================
Define our master policy
========================
The blueprint has been created for puppet-nova, but affects all modules:
https://blueprints.launchpad.net/puppet-openstack/+spec/master-policy
Problem description
===================
Here are the problems that lead us to write this blueprint:
* OpenStack projects tend to update their configuration files and APIs every
release, while maintaining backward compatibility support. When deprecated
configuration parameters are used, OpenStack projects log WARNING-level
messages (visible when the program starts) that warn operators to
update configuration files so they have the latest options.
* Since the beginning of the Puppet modules project, the master branch has always been
the development branch, used to integrate features from OpenStack master
or a very recent release. Until now, the master branch was always meant to be used to test
the last stable OpenStack release.
* Over time, our community grew and we got contributions and
more feedback from operators complaining that the master branch was broken
for a recent stable release.
* When fixing a bug in master, we often fail to backport it to stable branches
because it requires manual work (cherry-pick). This is problem #4: what can
we backport to stable branches?
Proposed change
===============
* Accept that the master branch is not supposed to work on stable releases of OpenStack.
* Functional testing CI jobs should pass using the latest testing package repositories
from the Ubuntu and RHEL distributions.
* Submit a feature to master only if it can be tested by the functional testing CI jobs.
We make an announcement on the mailing list each time we update the repositories for
the functional tests.
* Help developers to easily backport patches to stable branches when needed.
Alternatives
------------
* Creating a branch called 'future/<release>' for those changes that would get merged
back into master and then into 'stable/<release>' when that branch is created.
Data model impact
-----------------
Data model needs to be backward compatible at least 2 releases.
Module API impact
-----------------
Module API needs to be backward compatible at least 2 releases.
End user impact
---------------------
None.
Performance Impact
------------------
None.
Deployer impact
---------------------
Deployers will need to be more careful if they used to run master branches.
* use master if they target a deployment with the current development version.
* use stable branches if they plan to deploy a stable version of OpenStack.
Developer impact
----------------
Developers will need to figure out whether their patch can be tested by the current state of master.
Also, they will need to be more engaged in the backport policy and make sure to cherry-pick interesting
bugfixes and features to the right stable branches.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
emilienm
Work Items
----------
None.
Dependencies
============
None.
Testing
=======
Beaker jobs are in place and working on master (which is running Kilo at this time).
Documentation Impact
====================
We need to update the Wiki to explain this policy to our contributors.
References
==========
* Mailing list discussion:
- http://lists.openstack.org/pipermail/openstack-dev/2015-April/061640.html
* Etherpad:
- https://etherpad.openstack.org/p/puppet-openstack-master-policy
- https://etherpad.openstack.org/p/liberty-summit-design-puppet-master-branch

View File

@ -1,270 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===================
Enabling Federation
===================
:tags: federation
`bp enabling-federation <https://blueprints.launchpad.net/puppet-keystone/+spec
/enabling-federation>`_
The purpose of this spec is to provide an efficient way for cloud
administrators to configure their Keystone as a Service Provider or Identity
Provider in order to use Identity Federation.
Problem description
===================
As a cloud administrator, I want a fast and convenient way to configure my
Keystone (Kilo version or newer) as a Service Provider or as an Identity
Provider. Nowadays, the manual configuration of such features can be
cumbersome and error prone.
Proposed change
===============
Introduce two new classes, one to configure Keystone as a Service Provider
and the other to configure Keystone as an Identity Provider.
* Service Provider
Keystone as a Service Provider can be configured in three different ways:
- Using the OpenID Connect protocol.
- Using the SAML protocol with the mellon module.
- Using the SAML protocol with the Shibboleth module.
Each possibility mentioned above will have a class with the proper
attributes to configure Keystone files.
As attributes for this class we can have:
- method:
The method to be used for federation authentication.
- plugin:
The plugin for the authentication method.
* Identity Provider
There will be a class identity_provider to configure Keystone as an Identity
Provider, by installing the necessary packages and adding the necessary
configuration to the keystone.conf file. Currently Keystone can only provide
SAML assertions.
As attributes for this class we can have:
* Required
- certfile:
Path of the certfile for SAML signing. The attribute ssl_ca_certs from
Keystone can be used.
- keyfile:
Path of the keyfile for SAML signing. The attribute ssl_ca_key from
Keystone can be used.
- idp_entity_id:
Entity ID value for unique Identity Provider identification.
- idp_sso_endpoint:
Identity Provider Single-Sign-On service value, required in the Identity
Provider's metadata.
- idp_metadata_path:
Path to the Identity Provider Metadata file.
* Optional - these attributes will default to undef.
- idp_organization_name:
Organization name the installation belongs to.
- idp_organization_display_name:
Organization name to be displayed.
- idp_organization_url:
URL of the organization.
- idp_contact_company:
Company of contact person.
- idp_contact_name:
Given name of contact person
- idp_contact_surname:
Surname of contact person.
- idp_contact_email:
Email address of contact person.
- idp_contact_telephone:
Telephone number of contact person.
- idp_contact_type:
Contact type.
To know more about these attributes, see `Table 7.28 <http://docs.openstack.org
/kilo/config-reference/content/keystone-configuration-file.html>`_.
.. note::
For the Federated Identity feature, any web server is allowed as long as it
supports SAML or OpenID Connect. In this spec we are only considering the
use of Apache, since it supports both and has more documentation. To know
more about some values for the attributes, see reference number two.
.. note::
For the Service Provider, there are some packages that Keystone needs.
For example, in a Debian based distribution: the packages
``libapache2-mod-shib2`` or ``libapache2-mod-auth-mellon`` for SAML2 support
and ``libapache2-mod-auth-openidc`` for OpenID Connect support.
.. note::
For the Identity Provider, there are some packages that Keystone needs.
For example, in a Debian/Ubuntu distribution: the packages ``pysaml2`` and
``xmlsec1`` are necessary.
.. note::
The apache packages required for the Service Provider will be installed
using the defined type ``apache::mod`` from puppetlabs-apache, according
to the selected module.
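As an illustrative sketch (the exact module name passed to ``apache::mod`` is
an assumption), enabling the Shibboleth Apache module could look like::

    apache::mod { 'shib2': }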
.. note::
Only the configurations that belong to Keystone's configuration files will
be applied by puppet.
Alternatives
------------
Keep these setups as they are right now: manually install packages and add
necessary changes to configuration files.
Data model impact
-----------------
None
Module API impact
-----------------
Everything has been already mentioned in Proposed change.
End user impact
---------------------
None
Performance Impact
------------------
None
Deployer impact
---------------------
To use the classes from this feature, the cloud operator/admin will need
to apply them during or after Keystone is installed and running over Apache.
``site.pp`` example - Keystone as a Service Provider using Mellon::

    class { 'keystone::federation::mellon':
      idps_urls        => 'https://ipa.rdodom.test/idp',
      saml_dir         => '/etc/httpd/saml2/test',
      http_conf        => '/etc/httpd/conf.d/keystone-mellon.conf',
      service          => 'keystone',
      saml_base        => '/v3',
      saml_auth        => 'OS-FEDERATION/identity_providers/ipsilon/protocols/saml2/auth',
      saml_sp          => 'mellon',
      saml_sp_logout   => 'logout',
      saml_sp_postresp => 'postResponse',
      enable_ssl       => false,
      sp_port          => 5000,
    }

``site.pp`` example - Keystone as an Identity Provider::

    class { 'keystone::federation::identity_provider':
      idp_entity_id     => 'https://keystone.example.com/v3/OS-FEDERATION/saml2/idp',
      idp_sso_endpoint  => 'https://keystone.example.com/v3/OS-FEDERATION/saml2/sso',
      idp_metadata_path => '/etc/keystone/saml2_idp_metadata.xml',
    }
Examples of the configurations added to Keystone and Apache can be found below:
**For Identity Provider:**
See topic `Keystone as an Identity Provider (IdP) <http://docs.openstack.org/de
veloper/keystone/configure_federation.html>`_.
**For Service Provider**
See topic `Keystone as a Service Provider (SP) <http://docs.openstack.org/devel
oper/keystone/configure_federation.html>`_.
* For Shibboleth configuration see `Setup Shibboleth
<http://docs.openstack.org/developer/keystone/federation/shibboleth.html>`_.
* For OpenID configuration see `Setup OpenID Connect
<http://docs.openstack.org/developer/keystone/federation/openidc.html>`_.
* For mod_auth_mellon, see `Setup Mellon
<http://docs.openstack.org/developer/keystone/federation/mellon.html>`_.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
iurygregory
Work Items
----------
* Create a class for the Identity Provider configuration, which will apply
the extra configurations to the Keystone configuration file and install all
necessary packages to make Keystone work as Identity Provider.
* Write tests to ensure that the Identity Provider configuration is valid.
* Create one class for each possibility of Service Provider.
* Write tests to ensure that each class of Service Provider is valid.
* Provide documentation in the README to explain how to use the federation classes
* Provide some manifest examples in the examples directory with a real deployment
with Shibboleth, using an external module for that if needed.
Dependencies
============
* The Keystone version should be at least stable/kilo.
* This feature will only be supported if Keystone is running over Apache.
Testing
=======
* Create spec test files for each configurable parameter used by the new
Identity/Service Provider classes, to ensure that all applied settings
are valid.
* Create functional acceptance tests.
Documentation Impact
====================
Add examples in the puppet-keystone repository for both classes.
References
==========
1. Summit video: https://www.youtube.com/watch?v=PxNM8tBdCs4
2. http://docs.openstack.org/kilo/config-reference/content/config_overview.html
Chapter 7. Identity service - Identity service configuration
- for identity provider take a look at [saml]
- for service provider take a look at [auth]
3. http://rodrigods.com/
4. http://irclog.perlgeek.de/puppet-openstack/2015-05-19
start: 19:26
5. http://docs.openstack.org/developer/keystone/configure_federation.html

View File

@ -1,140 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
============================
Support domain configuration
============================
:tags: keystone,domain
`Launchpad blueprint <https://blueprints.launchpad.net/puppet-keystone/+spec/keystone-domain-configuration>`_
Offer a way to configure multiple domains for OpenStack with Keystone
API v3.
Problem description
===================
As a deployer, I would like to install OpenStack keystone running API
v3 and be able to configure multiple domains.
This requires multiple keystone configuration files. OpenStack expects
such configuration in a file named ``keystone.$domain.conf`` in a
directory defined by ``identity/domain_config_dir`` in
``keystone.conf``. It's ``/etc/keystone/domains`` by default.
Proposed change
===============
Make a new provider ``keystone_domain_config``. The syntax allows the
use of the domain in the resource name::

    keystone_domain_config {
      "services::ldap/url": value => $url;
    }
This will set the ``[ldap]`` section ``url`` to the value of ``$url``
in the configuration file
``/etc/keystone/domains/keystone.services.conf``
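For illustration, assuming ``$url`` is ``ldap://ldap.example.com`` (a
hypothetical value), the resulting domain configuration file would contain
something like::

    [ldap]
    url = ldap://ldap.example.com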
I'm proposing ``::`` as the delimiter since that's what the proposed
keystone v3 patch uses. Note that in this case, the domain comes
first, before the ``::``.
This implementation will be a subclass of keystone_config.
Alternatives
------------
It should be noted that the REST API offers a way to do it directly
without the configuration file, but it's currently unavailable in the
openstack CLI; see this `openstackclient bug
<https://bugs.launchpad.net/python-openstackclient/+bug/1433307>`_.
When this becomes available, the file creation can be removed in favor
of the CLI.
Another way to do it would be to add the missing name parsing in
``keystone_config``. The lesser encapsulation means that when the
openstack CLI finally supports direct modification of the
configuration, we won't be able to adjust the provider as easily.
Data model impact
-----------------
None
Module API impact
-----------------
Everything has been already mentioned in Proposed change.
End user impact
---------------------
None
Performance Impact
------------------
None
Deployer impact
---------------------
If a deployer uses a parameter with ``::`` in it, then the left side of
the string will be interpreted as a domain and put in the domain file,
not in the keystone.conf file. There is no such parameter for the
moment in the whole configuration and it's unlikely that there ever
will be.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
sofer-athlan-guyot
Other contributors:
rmeggins
Work Items
----------
* Create a unit rspec;
* Create a functional test;
* Add the name parsing and file path logic in the provider;
Dependencies
============
* The keystone version must be at least stable/kilo.
Testing
=======
For the moment the functional tests are covered by Beaker. Future
changes in the Puppet gate may make Tempest tests useful for this
feature.
Documentation Impact
====================
Add examples in the puppet-keystone repository for this feature.
References
==========
1. `Official documentation; <http://docs.openstack.org/kilo/config-reference/content/section_keystone-domain-configs.html>`_
2. `Discussion on trello; <https://trello.com/c/xDDgtctf/22-extend-keystone-config-to-support-multiple-domains-with-a-config-file-per-domain>`_
3. `Domain configuration management API <http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#domain-configuration-management>`_

View File

@ -1,172 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Configuration File Deprecation Support
==========================================
`Launchpad blueprint <https://blueprints.launchpad.net/puppet-openstacklib/+spec/config-file-deprecation-support>`_
Allow the inifile (config file) providers to handle deprecated options. This
should include cleaning up old deprecated names from config files and also
allow using the deprecated name instead of the new name.
Problem description
===================
When moving to new releases of OpenStack, config file options are frequently
deprecated because they're no longer needed, or they've been renamed.
After an upgrade this means that operators will frequently have the old and new
config option in place, but the old option is no longer used or maintained.
Additionally, when upgrading it can be useful to update the Puppet modules
before actually upgrading the service. A frequent problem when doing this is
that the newer module only supports the new names for options instead of the
older deprecated names.
Proposed change
===============
Add additional parameters to the openstacklib inifile provider help manage
cleanup removed and renamed configuration options. All inifile providers in
other OpenStack Puppet modules that derive from this provider would inherit the
new behavior. The syntax would allow specifying the old name for options which
could be used to purge the old names or to use them in preference to the new
names. For example::

    cinder_config { 'oslo_logging/new_option':
      deprecated_name => 'DEFAULT/old_option_name',
      value           => $value,
    }
In this case, it would populate the ``oslo_logging/new_option`` key in the
Cinder config file with the contents of ``$value`` as normally expected.
However, it would also purge the old name ``DEFAULT/old_option_name`` as given
by ``deprecated_name``.
The following new parameters would be supported:
``deprecated_name``
This is a string or array of old names that have been deprecated. Array
should be supported because there are sometimes multiple names that have been
deprecated in a single release. Defaults to an empty array.
``purge_deprecated``
If true, and if ``use_deprecated`` is false, then any ``deprecated_name``
values provided will be treated as if they had been specified separately with
an ensure value of ``absent``. Defaults to true.
``use_deprecated``
If true, then the inifile provider will act as if additional resources of the
same type were specified for each ``deprecated_name`` with the same
``value`` and ``ensure`` parameters. Defaults to false.
It's expected that both ``use_deprecated`` and ``purge_deprecated`` would be
set globally for each config provider as needed, either using resource defaults
or resource collectors.
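As a non-normative sketch of that global wiring (the resource defaults and
the array value below are illustrative only)::

    # Resource defaults for every cinder_config resource in this scope:
    # write the new option names and purge the deprecated ones.
    Cinder_config {
      use_deprecated   => false,
      purge_deprecated => true,
    }

    cinder_config { 'oslo_logging/new_option':
      deprecated_name => ['DEFAULT/old_option_name', 'DEFAULT/other_old_name'],
      value           => $value,
    }

Resource defaults keep the per-option declarations unchanged, so modules only
need to add ``deprecated_name`` where options were actually renamed.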
Alternatives
------------
Currently, the deprecated options left behind cause no harm, so we could
simply ignore this issue. The downside is that it is not clear which value
is actually in use.
The inifile provider could be set to purge unmanaged options. Currently we
depend on distributions to provide a base configuration file. In the past
there have been issues with the OpenStack Puppet modules not managing all
required options and depending on values provided in the base configuration
files. This approach would require testing all of the existing modules to
ensure they manage all needed options and also changing acceptance tests to use
the purge functionality to prevent regressions.
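For reference, that alternative could be expressed with Puppet's built-in
``resources`` type (shown for ``cinder_config`` purely as an illustration;
it assumes the provider can enumerate existing settings)::

    # Purge every cinder_config entry that is not explicitly managed in
    # the catalog, including deprecated leftovers.
    resources { 'cinder_config':
      purge => true,
    }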
Data model impact
-----------------
None
Module API impact
-----------------
Everything has already been mentioned in the Proposed change section.
End user impact
---------------------
None
Performance Impact
------------------
Potential implementations might create additional resources to enforce the
state of the deprecated config values. This could increase the number of
resources being managed and the Puppet catalog size.
A small amount of additional overhead may be incurred to find and remove the
deprecated names.
Deployer impact
---------------------
Deprecated options in existing configuration files would be removed as
described. When this occurs, services will be restarted. The impact of this
is expected to be minimal, since most of these changes would occur between
major releases of the modules, where the deployer is expected to be
upgrading and restarting services already.
It is expected that deployers may set ``use_deprecated`` to true in
preparation for upgrades, allowing the Puppet modules in some cases to be
upgraded before the services they manage.
Deployers that wish to disable the new behavior could set the
``purge_deprecated`` parameter to false.
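A possible way to express those two deployer choices with resource
collectors (illustrative only; the parameter names are the ones proposed
above)::

    # Also keep writing the deprecated names while preparing an upgrade.
    Cinder_config <| |> { use_deprecated => true }

    # Disable the automatic cleanup of deprecated names entirely.
    Cinder_config <| |> { purge_deprecated => false }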
Developer impact
----------------
Developers would need to populate the ``deprecated_name`` parameter when
renaming configuration options.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
clayton-oneill (IRC nick clayton)
Work Items
----------
* Identify an existing option with a deprecated name as a test candidate
* Create a unit rspec in openstacklib and child module
* Create a functional test in openstacklib and child module
* Add implementation to openstacklib inifile provider
Dependencies
============
None
Testing
=======
Unit and functional tests should be added to ensure base functionality and
to prevent regressions.
Documentation Impact
====================
Currently documentation of inifile providers is spotty. This may be an
opportunity to move that documentation into the ``openstacklib`` module.
References
==========
None

View File

@ -1,272 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Example Spec - The title of your blueprint
==========================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/puppet-[projectname]/+spec/example
Introduction paragraph -- why are we doing anything? A single paragraph of
prose that operators can understand.
Some notes about using this template:
* Your spec should be in ReSTructured text, like this template.
* Please wrap text at 79 columns.
* The filename in the git repository should match the launchpad URL, for
example a URL of: https://blueprints.launchpad.net/nova/+spec/awesome-thing
should be named awesome-thing.rst
* Please do not delete any of the sections in this template. If you have
nothing to say for a whole section, just write: None
* For help with syntax, see http://sphinx-doc.org/rest.html
* To test out your formatting, build the docs using tox, or see:
http://rst.ninjs.org
* If you would like to provide a diagram with your spec, ascii diagrams are
required. http://asciiflow.com/ is a very nice tool to assist with making
ascii diagrams. The reason for this is that the tool used to review specs is
based purely on plain text. Plain text will allow review to proceed without
having to look at additional files which cannot be viewed in Gerrit. It
will also allow inline feedback on the diagram itself.
Problem description
===================
A detailed description of the problem:
* For a new feature this might be use cases. Ensure you are clear about the
actors in each use case: End User vs Deployer
* For a major reworking of something existing it would describe the
problems in that feature that are being addressed.
Proposed change
===============
Here is where you cover the change you propose to make in detail. How do you
propose to solve this problem?
If this is one part of a larger effort make it clear where this piece ends. In
other words, what's the scope of this effort?
Alternatives
------------
What other ways could we do this thing? Why aren't we using those? This doesn't
have to be a full literature review, but it should demonstrate that thought has
been put into why the proposed solution is an appropriate one.
Data model impact
-----------------
Changes which require modifications to the data model often have a wider impact
on the system. The community often has strong opinions on how the data model
should be evolved, from both a functional and performance perspective. It is
therefore important to capture and gain agreement as early as possible on any
proposed changes to the data model.
Questions which need to be addressed by this section include:
* What new data objects and/or database schema changes is this going to
require?
* What database migrations will accompany this change?
* How will the initial set of new data objects be generated, for example if you
need to take into account existing instances, or modify other existing data
describe how that will work.
Module API impact
-----------------
Each API method which is either added or changed should have the following,
depending upon if it is a new class or an addition to an existing class.
* For new classes:
* Specification for the class.
* A description of what the class does suitable for use in
user documentation.
* Documentation of all parameters, including default values.
* For new parameters:
* Documentation, including parameters.
* Reason for addition (rather than using config classes).
* For deprecations:
* Reason for deprecation.
* Alternative to deprecated parameters.
* Scheduled release for deprecated parameter removal.
* Example use case including typical API samples.
* Discuss any policy changes, and discuss what things a deployer needs to
think about when defining their policy.
Note that the schema should be defined as restrictively as
possible. Parameters which are required should be marked as such and
only under exceptional circumstances should additional parameters
which are not defined in the schema be permitted.
End user impact
---------------------
Aside from the API, are there other ways a user will interact with this
feature?
* Does this change have an impact on python-novaclient? What does the user
interface there look like?
Performance Impact
------------------
Describe any potential performance impact on the system, for example
how often will new code be called, and is there a major change to the calling
pattern of existing code.
Examples of things to consider here include:
* A periodic task might look like a small addition but if it calls conductor or
another service the load is multiplied by the number of nodes in the system.
* Scheduler filters get called once per host for every instance being created,
so any latency they introduce is linear with the size of the system.
* A small change in a utility function or a commonly used decorator can have a
large impact on performance.
* Calls which result in database queries (whether direct or via conductor)
can have a profound impact on performance when called in critical sections of
the code.
* Will the change include any locking, and if so what considerations are there
on holding the lock?
Deployer impact
---------------------
Discuss things that will affect how you deploy and configure OpenStack
that have not already been mentioned, such as:
* What config options are being added? Should they be more generic than
proposed (for example a flag that other hypervisor drivers might want to
implement as well)? Are the default values ones which will work well in
real deployments?
* Is this a change that takes immediate effect after it is merged, or is it
something that has to be explicitly enabled?
* If this change is a new binary, how would it be deployed?
* Please state anything that those doing continuous deployment, or those
upgrading from the previous release, need to be aware of. Also describe
any plans to deprecate configuration values or features. For example, if we
change the directory name that instances are stored in, how do we handle
instance directories created before the change landed? Do we move them? Do
we have a special case in the code? Do we assume that the operator will
recreate all the instances in their cloud?
Developer impact
----------------
Discuss things that will affect other developers working on Puppet OpenStack.
Implementation
==============
Assignee(s)
-----------
Who is leading the writing of the code? Or is this a blueprint where you're
throwing it out there to see who picks it up?
If more than one person is working on the implementation, please designate the
primary author and contact.
Primary assignee:
<launchpad-id or None>
Other contributors:
<launchpad-id or None>
Work Items
----------
Work items or tasks -- break the feature up into the things that need to be
done to implement it. Those parts might end up being done by different people,
but we're mostly trying to understand the timeline for implementation.
Dependencies
============
* Include specific references to specs and/or blueprints in nova, or in other
projects, that this one either depends on or is related to.
* If this requires functionality of another project that is not currently used
by Nova (such as the glance v2 API when we previously only required v1),
document that fact.
* Does this feature require any new library dependencies or code otherwise not
included in OpenStack? Or does it depend on a specific version of library?
Testing
=======
Please discuss how the change will be tested. We especially want to know what
tempest tests will be added. It is assumed that unit test coverage will be
added so that doesn't need to be mentioned explicitly, but discussion of why
you think unit tests are sufficient and we don't need to add more tempest
tests would need to be included.
Is this untestable in gate given current limitations (specific hardware /
software configurations available)? If so, are there mitigation plans (3rd
party testing, gate enhancements, etc).
Documentation Impact
====================
What is the impact on the docs team of this change? Some changes might require
donating resources to the docs team to have the documentation updated. Don't
repeat details discussed above, but please reference them here.
References
==========
Please add any useful references here. You are not required to have any
reference. Moreover, this specification should still make sense when your
references are unavailable. Examples of what you could include are:
* Links to mailing list or IRC discussions
* Links to notes from a summit session
* Links to relevant research, if appropriate
* Related specifications as appropriate (e.g. if it's an EC2 thing, link the
EC2 docs)
* Anything else you feel it is worthwhile to refer to

View File

View File

@ -1,92 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import glob

import docutils.core
import testtools


class TestTitles(testtools.TestCase):
def _get_title(self, section_tree):
section = {
'subtitles': [],
}
for node in section_tree:
if node.tagname == 'title':
section['name'] = node.rawsource
elif node.tagname == 'section':
subsection = self._get_title(node)
section['subtitles'].append(subsection['name'])
return section

    def _get_titles(self, spec):
titles = {}
for node in spec:
if node.tagname == 'section':
section = self._get_title(node)
titles[section['name']] = section['subtitles']
return titles

    def _check_titles(self, titles):
        # Every spec must contain exactly these seven top-level sections,
        # with the expected subsections under 'Proposed change' and
        # 'Implementation'.
self.assertEqual(7, len(titles))
problem = 'Problem description'
self.assertIn(problem, titles)
self.assertEqual(0, len(titles[problem]))
proposed = 'Proposed change'
self.assertIn(proposed, titles)
self.assertEqual(
[
'Alternatives',
'Data model impact',
'Module API impact',
'End user impact',
'Performance Impact',
'Deployer impact',
'Developer impact',
],
titles[proposed])
impl = 'Implementation'
self.assertIn(impl, titles)
self.assertEqual(2, len(titles[impl]))
self.assertIn('Assignee(s)', titles[impl])
self.assertIn('Work Items', titles[impl])
deps = 'Dependencies'
self.assertIn(deps, titles)
self.assertEqual(0, len(titles[deps]))
testing = 'Testing'
self.assertIn(testing, titles)
self.assertEqual(0, len(titles[testing]))
docs = 'Documentation Impact'
self.assertIn(docs, titles)
self.assertEqual(0, len(titles[docs]))
refs = 'References'
self.assertIn(refs, titles)
self.assertEqual(0, len(titles[refs]))

    def test_template(self):
files = ['specs/template.rst'] + glob.glob('specs/*/*')
for filename in files:
            self.assertTrue(filename.endswith(".rst"),
                            "spec file must use the '.rst' extension.")
with open(filename) as f:
data = f.read()
spec = docutils.core.publish_doctree(data)
titles = self._get_titles(spec)
self._check_titles(titles)

20
tox.ini
View File

@ -1,20 +0,0 @@
[tox]
minversion = 3.1
envlist = docs,py37
skipsdist = True
ignore_basepython_conflict = True
[testenv]
basepython = python3
usedevelop = True
setenv = VIRTUAL_ENV={envdir}
deps =
-c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
-r{toxinidir}/requirements.txt
commands = stestr run --slowest {posargs}
[testenv:venv]
commands = {posargs}
[testenv:docs]
commands = sphinx-build -W -b html doc/source doc/build/html