Retire Solum: remove repo content

The Solum project is retiring:
- https://review.opendev.org/c/openstack/governance/+/919211

This commit removes the content of this project's repo.

Change-Id: I1b296db413d2b0c0424f28b8eaf33e2476ec5e88
Author: Ghanshyam Mann, 2024-05-10 12:33:14 -07:00
parent 71013aed21
commit 64f1dac678
24 changed files with 8 additions and 3353 deletions

.gitignore

@@ -1,10 +0,0 @@
AUTHORS
ChangeLog
build
.tox
.venv
*.egg*
*.swp
*.swo
*.pyc
.testrepository

.testr.conf

@@ -1,4 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

.zuul.yaml

@@ -1,3 +0,0 @@
- project:
    templates:
      - openstack-specs-jobs

LICENSE

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README.rst

@@ -1,54 +1,10 @@
========================
Team and repository tags
========================
This project is no longer maintained.
.. image:: http://governance.openstack.org/badges/solum-specs.svg
:target: http://governance.openstack.org/reference/tags/index.html
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
.. Change things from this point on
==================================
OpenStack Solum Specifications
==================================
This git repository is used to hold approved design specifications for additions
to the Solum project. Reviews of the specs are done in gerrit, using a similar
workflow to how we review and merge changes to the code itself.
The layout of this repository is::
specs/<release>/
You can find an example spec in `doc/source/specs/template.rst`.
Specifications are proposed for a given release by adding them to the
`specs/<release>` directory and posting it for review. The implementation
status of a blueprint for a given release can be found by looking at the
blueprint in launchpad. Not all approved blueprints will get fully implemented.
Specifications have to be re-proposed for every release. The review may be
quick, but even if something was previously approved, it should be re-reviewed
to make sure it still makes sense as written.
Prior to the Juno development cycle, this repository was not used for spec
reviews. Reviews prior to Juno were completed entirely through Launchpad
blueprints::
http://blueprints.launchpad.net/solum
Please note, Launchpad blueprints are still used for tracking the
current status of blueprints. For more information, see::
https://wiki.openstack.org/wiki/Blueprints
For more information about working with gerrit, see::
http://docs.openstack.org/infra/manual/developers.html#development-workflow
To validate that the specification is syntactically correct (i.e. get more
confidence in the Jenkins result), please execute the following command::
$ tox
After running ``tox``, the documentation will be available for viewing in HTML
format in the ``doc/build/`` directory.
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
OFTC.

doc/source/conf.py

@@ -1,264 +0,0 @@
# Tempest documentation build configuration file, created by
# sphinx-quickstart on Tue May 21 17:43:32 2013.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.todo',
'sphinx.ext.viewcode',
'openstackdocstheme'
]
todo_include_todos = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Solum Specs'
copyright = u'2014, Solum Contributors'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = False
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['solum-specs.']
# openstackdocstheme options
repository_name = 'openstack/solum-specs'
bug_project = 'solum'
bug_tag = 'specs'
# -- Options for man page output ----------------------------------------------
man_pages = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
html_domain_indices = False
# If false, no index is generated.
html_use_index = False
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'Solum-Specsdoc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'Solum-specs.tex', u'Solum Specs',
u'Solum Contributors', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'Solum-specs', u'Solum Design Specs',
u'Solum Contributors', 'solum-specs', 'Design specifications for the Solum project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# -- Options for Epub output ---------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = u'Solum Specs'
epub_author = u'Solum Contributors'
epub_publisher = u'Solum Contributors'
epub_copyright = u'2014, Solum Contributors'
# The language of the text. It defaults to the language option
# or en if the language is not set.
#epub_language = ''
# The scheme of the identifier. Typical schemes are ISBN or URL.
#epub_scheme = ''
# The unique identifier of the text. This can be an ISBN number
# or the project homepage.
#epub_identifier = ''
# A unique identification for the text.
#epub_uid = ''
# A tuple containing the cover image and cover page html template filenames.
#epub_cover = ()
# HTML files that should be inserted before the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_pre_files = []
# HTML files that should be inserted after the pages created by sphinx.
# The format is a list of tuples containing the path and title.
#epub_post_files = []
# A list of files that should not be packed into the epub file.
#epub_exclude_files = []
# The depth of the table of contents in toc.ncx.
#epub_tocdepth = 3
# Allow duplicate toc entries.
#epub_tocdup = True

doc/source/index.rst

@@ -1,35 +0,0 @@
.. solum-specs documentation master file

============================
Solum Project Specifications
============================

Contents:

.. toctree::
   :glob:
   :maxdepth: 1

   specs/*

Juno approved specs:

.. toctree::
   :glob:
   :maxdepth: 1

   specs/juno/*

Liberty approved specs:

.. toctree::
   :glob:
   :maxdepth: 1

   specs/liberty/*

==================
Indices and tables
==================

* :ref:`search`

doc/source/specs (symlink)

@@ -1 +0,0 @@
../../specs

requirements.txt

@@ -1,5 +0,0 @@
pbr>=2.0,!=2.1
sphinx>=2.0.0
openstackdocstheme>=2.0.0
testrepository>=0.0.18
testtools>=0.9.34

setup.cfg

@@ -1,12 +0,0 @@
[metadata]
name = solum-specs
summary = OpenStack Solum Project Development Specs
description-file =
    README.rst
author = OpenStack
author-email = openstack-discuss@lists.openstack.org
home-page = http://specs.openstack.org/openstack/solum-specs/
classifier =
    Intended Audience :: Developers
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux

setup.py

@@ -1,22 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)

@@ -1,112 +0,0 @@
High-level Description:
------------------------------------
This spec outlines approaches towards creating a Chef language pack.
A recommendation is made towards one particular approach.
Problem:
----------------
Within Solum we want to provide the ability for application developers to test
their Chef cookbooks. Testing Chef cookbooks broadly involves two kinds of
tests. The first kind is concerned with ensuring that the cookbook code is
syntactically correct, stylistically correct, and that the recipes are
unit tested to verify that their mutual inter-dependencies work. The tools
available for these tasks are Knife for Ruby syntax checking, foodcritic for
linting (https://github.com/acrmp/foodcritic), and ChefSpec for unit testing
(https://github.com/sethvargo/chefspec).
The second kind is concerned with testing whether the configuration
that resulted from running a set of recipes has indeed converged to the
intended state. The tool available for this is Test Kitchen
(https://github.com/test-kitchen/test-kitchen).
In Solum we want to support application developers who are developing Chef
recipes to use the above tools for testing their recipes.
Proposed Solution:
---------------------
We propose to provide a Chef language pack (an image) in
Solum with Chef, foodcritic, ChefSpec, and Test Kitchen installed on it.
A Solum operator would create this language pack and register it within
Solum's Glance. Developers would use this language pack's ID within their
plan definition.
Implementation options:
------------------------
There are at least three different ways one can build such a Chef
language pack.
1) We could use disk-image-builder and write a 'Chef element' which
provides a script with the installation instructions.
2) We could customize Heroku's Ruby buildpack
(https://github.com/heroku/heroku-buildpack-ruby) to
install the required tools.
3) We could provide a Dockerfile to create a container image and use that
as the Chef language pack.
Option 1 has problems associated with it. For details
see:
https://review.openstack.org/#/c/103689/
I propose that we rule out option 2, for the following reasons.
We could customize Heroku's buildpack in two ways. First, we could maintain a
fork of the buildpack repo. This approach, even though possible, is a lot of
work. The second approach would be to maintain just the customization bits and
then 'apply' them to lp-cedarish's heroku-buildpack-ruby, which we clone and
'install' within the lp-cedarish flow. The customization bits would
essentially be installation logic written in a '.rb' file, which would need to
be 'added' to the 'spec' directory of heroku-buildpack-ruby. The problem with
this approach is that the creation of the language pack gets tied to Heroku's
Ruby buildpack.
So that leaves option 3. I propose we start there.
(This is the recommended option in https://review.openstack.org/#/c/103689/)
First cut of the Chef language pack is available here:
https://review.openstack.org/#/c/103671/
Design Issues:
-----------------------
1) How to pass custom style rules to foodcritic? Once this is possible
we would invoke foodcritic like so::

    foodcritic cookbooks --include foodcritic-rules.rb --tags ~FC001
2) How to pass the location of cookbooks on the running VM to the various
commands. One option is that the developer will provide the exact command(s)
for invoking the tests. These will be provided in the Plan file. For example,
let's say there is a hello_world cookbook in my git repository. Then in the
plan file I may specify commands in the following manner::

    {
      syntax_check: knife cookbook test /hello_world
      style_check: foodcritic /hello_world
      unit_test: rspec /hello_world --format RspecJunitFormatter
      integration_test: kitchen test /hello_world
    }
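As an illustration only, a worker might dispatch these per-stage commands
roughly as follows; the dictionary of commands and the subprocess-based runner
are assumptions made for this sketch, not Solum's actual worker code::

    import subprocess

    # Per-stage commands as they might be parsed out of the plan file above;
    # the stage names mirror the example and are not a fixed Solum schema.
    STAGE_COMMANDS = {
        "syntax_check": "knife cookbook test /hello_world",
        "style_check": "foodcritic /hello_world",
        "unit_test": "rspec /hello_world --format RspecJunitFormatter",
        "integration_test": "kitchen test /hello_world",
    }

    def run_stages(commands):
        """Run each stage command in order; stop at the first failure."""
        for stage, command in commands.items():
            result = subprocess.run(command.split(), capture_output=True)
            status = "PASS" if result.returncode == 0 else "FAIL"
            print("%s: %s" % (stage, status))
            if result.returncode != 0:
                return False
        return True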
References:
-----------
https://blueprints.launchpad.net/solum/+spec/chef-language-pack

@@ -1,258 +0,0 @@
Problem description
===================
This spec considers requirements, implementation, and changes to Solum
CLI and API to support custom language packs.
Problem Details:
------------------
A language pack in Solum is essentially a base image (VM) with the appropriate
libraries, compilers, and runtimes installed on it.
A custom language pack is a base image that has the libraries and packages
specified by the user (a cloud operator and/or an application developer)
installed on it.
Towards building such custom language packs Solum needs:
(a) the ability to specify the required base image type and version
(b) the ability to specify the required libraries and packages to be installed
on the base image
(c) the ability to register the language pack with Glance
One way to implement this feature is to require that a user provide two things
to Solum:
(a) the base image name and type
(b) a link to a GitHub repository containing a script with a predefined name
(such as 'prepare') that holds the instructions to install
the required libraries and packages.
Solum would provide a 'language pack builder' API to build the language pack
and register it in Glance.
At a high-level the builder API will do following actions in a secure
environment:
(a) mount the specified base image
(b) clone the repository
(c) run the 'prepare' script
(d) snapshot the new image
(e) upload the image to glance
Proposed implementation:
-------------------------
The user provides a Dockerfile in their language pack repository, and Solum
builds the language pack via 'docker build'.
Note that the language pack created using this approach is going to be a
Docker container image and not a VM image.
One of the main advantages of this approach is that it uses Dockerfiles, a
standard format that has become mainstream in several systems built around
containers.
Towards using such a language pack in creating a DU (deployment unit), we have
to consider the following.
If the operator has configured Solum to use 'docker' as the SOLUM_IMAGE_FORMAT
then the Docker-based LP will work without any issues.
However, if SOLUM_IMAGE_FORMAT is set to 'vm' then we will have to provide
a VM image that has Docker installed. CoreOS is a possible approach
(see this WIP https://review.openstack.org/#/c/102646/).
Another approach would be to use the Heat Docker plugin.
Additional considerations:
----------------------------
One could ask why the language pack creation step is separated out, rather
than having the language pack Dockerfile live in the user's application code
repository. While the latter is certainly possible, there are several
advantages to separating language pack creation from application building.
First, language pack creation is a one-time process; combining it with
application building would add build-performance overhead on every build.
Second, by requiring application developers to define Dockerfiles in their
application repositories, Solum would bind them to a contract in which their
code works only when the operator has configured Solum to use Docker as the
image format.
Proposed change
===============
1) CLI changes:
- Create a new command 'languagepack build', which will be used as follows::

    solum languagepack build <github url of custom lp repository>

This will use the 'builder' API (which runs on port 9778) that Solum already
provides.
2) API changes:
(a) Start with our builder API and modify it if required.
We can start with POST /v1/solum/builder/. The data sent in will be the
GitHub repository URL containing the Dockerfile.
This will lead to the following steps (sketched below):
- clone the git repo
- do 'docker build'
- do 'docker push' to Solum's docker registry to upload the LP and make it
available in Glance
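A minimal sketch of these three steps, assuming plain git and docker CLIs are
available on the builder host; the registry address and image naming are
placeholders rather than Solum's actual configuration::

    import subprocess

    REGISTRY = "127.0.0.1:5042"  # placeholder for Solum's Docker registry

    def build_languagepack(git_url, lp_name):
        """Clone the LP repo, build its Dockerfile, and push the image."""
        subprocess.run(["git", "clone", git_url, lp_name], check=True)
        tag = "%s/%s" % (REGISTRY, lp_name)
        subprocess.run(["docker", "build", "-t", tag, lp_name], check=True)
        # In this design, pushing to the registry is what makes the LP
        # available to Glance.
        subprocess.run(["docker", "push", tag], check=True)
        return tag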
Alternatives
------------
Option 1:
---------
Use disk-image-builder (dib) as the mechanism to build the language pack.
The issues with this option are:
(1) Figuring out how the contents of 'prepare' translate into dib elements
(pre-install.d, install.d, post-install.d, finalize.d, etc.)
(2) Figuring out how to run 'prepare' in a sandboxed environment.
We need this because the contents of 'prepare' can be anything.
Advantages:
-----------
The images created are compatible with glance.
Disadvantages:
----------------
One of the disadvantages is that Solum would need to
build a translator to convert the contents of 'prepare' into dib's DSL.
Option 2:
---------
Don't use disk-image-builder, but do something similar to what we are
currently doing as part of 'build-app' (i.e. through Solum code perform the
steps of mounting the fs, executing installation steps, and creating an image
snapshot).
Advantages:
-----------
One of the advantages of Option 2 is that no translation is required of
the contents of the 'prepare' script.
Disadvantages:
--------------
The disadvantage is that Solum would need to build mechanisms to create a
Glance compatible image.
One approach to achieve the advantages of both options without incurring their
disadvantages is to use dib but avoid converting 'prepare' into various
elements: just convert it into an install.d element.
Over time, Solum can support 'hints' that indicate how
'prepare' should be broken up across different elements. It is possible
that such hints would be provided as separate scripts supported by Solum.
For example, Solum may start supporting 'pre_install', 'prepare', and
'post_install' scripts.
Data model impact
-----------------
Would need to be changed to support LP creation actions. This includes
changes to the data model to persist build and test status.
REST API impact
---------------
Discussed above
Security impact
---------------
Language pack builds would need to be done in isolated environments so
as to not affect other builds. Need to investigate in detail if Docker-based
approach would provide good enough contained environment.
Notifications impact
--------------------
Other end user impact
---------------------
Performance Impact
------------------
Building a language pack is an expensive operation, as it involves building an
image by downloading and installing the packages specified in the Dockerfile.
The performance of this operation will vary with the available CPU, network,
and memory resources.
Other deployer impact
---------------------
Developer impact
----------------
Implementation
==============
Assignee(s)
-----------
Devdatta Kulkarni (devdatta-kulkarni) will implement this custom language
pack proposal.
Work Items
----------
- Create an example custom language pack
(https://review.openstack.org/#/c/103671/)
- Make the required REST API changes (identified in proposed changes section)
- Make the required CLI change (identified in proposed changes section)
(Arati Mahimane has started on this)
- Enhance the data model to store data about image build actions and the
test results.
Dependencies
============
We will start with the solum-builder API (and code). Once the build-farm
work is merged, we can revisit this to see whether it can be used here.
Testing
=======
Test that the created language pack contains all the packages and
libraries listed in the Dockerfile.
One way to achieve this would be to delegate the testing responsibility
itself to the language pack author. For instance, the LP author could use
RUN statements within their Dockerfile that do the necessary checks.
We would require that after the checks are completed a test result file be
created (say, /solum.lp.test) which contains the language pack creation
status. We can then look for that file to determine whether the testing
passed, without worrying about figuring out which package management style
was used.
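One possible way to read that result file back out of a freshly built image,
assuming the docker CLI and the /solum.lp.test convention proposed above (the
file's exact format is left open here, so the success marker is an
assumption)::

    import subprocess

    def languagepack_tests_passed(image_tag):
        """Read the hypothetical /solum.lp.test file out of a built image."""
        result = subprocess.run(
            ["docker", "run", "--rm", image_tag, "cat", "/solum.lp.test"],
            capture_output=True, text=True)
        if result.returncode != 0:
            return False  # no result file: treat the build as failed
        # The file's contents are unspecified here; assume a simple marker.
        return "SUCCESS" in result.stdout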
If language pack creation fails then Solum will take the following actions:
- Save the details of the build and test results in Solum's internal database
- Log the build failure status in the log stream for the user's actions
- If a user's email address is available, send a notification of the build
failure to that address. Solum would first need to be enhanced with mail
server functionality to support this.
Documentation Impact
====================
Documentation would need to be updated to reflect the addition of new CLI
commands.
References
==========
https://blueprints.launchpad.net/solum/+spec/custom-language-packs
https://etherpad.openstack.org/p/custom-language-packs

@@ -1,136 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Git As a Service
==========================================
https://blueprints.launchpad.net/solum/+spec/solum-git-push
Enable users to push their code to Solum (via git server(s) hosted by Solum).
Following is the minimum viable use-case:
A user who is 'registered' with Solum will be able to do a git push to their
code's remote in Solum.
Once the code is pushed, it will trigger a set of steps (a workflow)
which will generate/update the running application.
Problem description
===================
Currently, we can only use externally hosted repositories.
Proposed change
===============
Add git-hosted repositories in Solum, the same way we will with the build
farm.
To do this, I propose to use `gitolite <https://github.com/sitaramc/gitolite>`_,
a simple open source tool to manage SSH repositories and access rights.
Gitolite doesn't come with a UI and is focused on access control.
We will provide QCOW2 and Docker images for Gitolite. In a first attempt, repos
would be stored on ephemeral storage, but we should consider using Cinder
volumes in a later iteration (for Docker, we would depend on Cinder support in
Docker).
Alternatives
------------
None
Data model impact
-----------------
We will reuse the infrastructure object, and the git_url would be stored in
the plan.
REST API impact
---------------
This will reuse the infra endpoint proposed in solum-build-farm
Security impact
---------------
We will need an SSH keypair for each user who wants access to a git
repository. We will also need to be able to add/remove user access to a git
repo. This can be described in another blueprint.
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
In a first implementation, Git VMs/containers would be created "per tenant"
but would be Solum's responsibility to maintain and back up. In a later
iteration, this will be driven by policy from the operator.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
  vey-julien
Other contributors:
None
Work Items
----------
* Create VM and Docker images for Gitolite
* Store git repo info in our DB
* Add git hooks to trigger Solum Pipelines when a repo is created (a sketch of
such a hook follows below)
* Use the same mechanisms to configure the instances as described in the build
farm
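A rough sketch of what such a hook could look like, assuming a hypothetical
Solum trigger URL; this is illustrative, not the hook Solum would actually
install::

    #!/usr/bin/env python
    # Sketch of a post-receive hook a Gitolite-managed repo could use to
    # kick off a Solum pipeline; the trigger URL is a placeholder.
    import sys
    import urllib.request

    TRIGGER_URL = "http://solum.example.com:9777/v1/triggers/<trigger-id>"

    def main():
        # git feeds "<old-sha> <new-sha> <refname>" lines on stdin.
        for line in sys.stdin:
            old_sha, new_sha, refname = line.split()
            if refname == "refs/heads/master":
                req = urllib.request.Request(
                    TRIGGER_URL, data=b"", method="POST")
                urllib.request.urlopen(req)

    if __name__ == "__main__":
        main()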
Dependencies
============
Cinder support in Docker (an alternative is to run Docker containers on CoreOS
VMs; work on that was started by PaulCzar)
Testing
=======
Functional testing, if it doesn't have too much impact on our gate
performance.
Documentation Impact
====================
Changes to the development and deployment process will need to be documented.
References
==========
Whiteboard on https://blueprints.launchpad.net/solum/+spec/solum-git-push with
previous discussions

@@ -1,186 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==============
Solum pipeline
==============
https://blueprints.launchpad.net/solum/+spec/pipeline
The design of Solum's API in release 2014.1.1 focuses primarily on
resources to enable Application Lifecycle Management. It is suitable
for expressing how to deploy an application, but it's not as useful
for modeling a custom build/test/deploy pipeline for a given
application taking into account everything needed in order to produce
a CI/CD environment. This proposed design addresses that concern by
detailing how the default behavior of Solum can be customized to
accommodate a variety of different CI workflows.
Problem description
===================
Solum currently allows integration with a simple development process
using Git and a pre-defined workflow. We plan to add components that
will allow customization of events that happen before the
application's deployment, such as testing, image building, and
advancing between various Environments.
Proposed change
===============
Solum will use Mistral for its workflow execution and definition.
This needs to be flexible so the user can select different types of
tasks on a range of infrastructural elements.
Program flow
------------
execute pipeline: https://drive.google.com/file/d/0B3SsMUWSuQAlbkdkNmV1bUtsZXc/edit?usp=sharing
Pluggable infrastructure
------------------------
* https://review.openstack.org/100539
* https://review.openstack.org/101212
Alternatives
------------
None
Data model impact
-----------------
* add new pipeline db objects
* add default workbooks
REST API impact
---------------
/v1/pipelines/ (POST/GET)
/v1/pipelines/(pipeline_id)/ (PUT/GET/DELETE)
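For illustration, exercising these endpoints might look roughly like this;
the base URL, token handling, and payload fields are assumptions, not the
authoritative request schema::

    import requests

    SOLUM_API = "http://solum.example.com:9777"     # placeholder endpoint
    HEADERS = {"X-Auth-Token": "<keystone-token>"}  # normal Keystone auth

    # Create a pipeline; the payload fields shown are illustrative guesses.
    resp = requests.post(SOLUM_API + "/v1/pipelines/", headers=HEADERS, json={
        "name": "example-pipeline",
        "plan_uri": "http://example.com/plan.yaml",
    })
    pipeline = resp.json()

    # Read and delete the pipeline by id.
    url = SOLUM_API + "/v1/pipelines/" + pipeline["uuid"]
    requests.get(url, headers=HEADERS)
    requests.delete(url, headers=HEADERS)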
Security impact
---------------
Solum will use the normal keystone auth, except for the trigger in
which case a trust will be used to perform actions on behalf of the user.
Notifications impact
--------------------
None
Other end user impact
---------------------
* This has support in the client (merged):
https://review.openstack.org/#/c/100124/
* UI: https://review.openstack.org/101253
Performance Impact
------------------
None - speed of image building and deployment should not change.
Other deployer impact
---------------------
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
asalkeld
Other contributors:
related blueprints will be done by whoever picks up the work.
Work Items
----------
mistral
^^^^^^^
- mistral trust [asalkeld] (auth_token_info - to prevent re-authentication)
- mistral webhook trigger [optional] (https://blueprints.launchpad.net/mistral/+spec/mistral-ceilometer-integration)
- https://blueprints.launchpad.net/mistral/+spec/mistral-multitenancy
[asalkeld - started, but others can help]
solum
^^^^^
- API and db objects (done)
- Create an empty stack on pipeline create.
This is to work around the chained trusts issue.
- If the workbook does not exist (and we have the definition), create it for
the user. This is a workaround until Mistral has workbook sharing.
- Add mistral-plugins for:
- heat update and status [asalkeld - started]
- image build [asalkeld - started]
- unit testing
- functional testing (hopefully the same as unit test one)
- Add calls to mistral to kick off the execution (in review)
solumclient
^^^^^^^^^^^
- https://review.openstack.org/100124 (merged)
- once we are at least as functional as the assembly, change the CLI
Dependencies
============
solum-dashboard
---------------
- investigate mistral dashboard
can we use the mistral task history?
- add support for pipelines/
- link to Mistral tasks if it looks good, else make
a more build-job-like tasks UI (pass/fail)
- since the catalog is not a "thing" yet, how do we discover our
capabilities (supported workflows, vm/docker, buildfarm, etc.)?
see: https://review.openstack.org/101253
Testing
=======
* There will be unit tests
* functional tests can be achieved with fake Mistral plugins (to be
fast and to not require booting images)
Documentation Impact
====================
* The getting started guide will need to be modified.
* The REST API will need to be referenced in the auto-generated docs.
References
==========
* https://wiki.openstack.org/wiki/Solum/Environments
* https://wiki.openstack.org/wiki/Solum/Pipeline
* https://blueprints.launchpad.net/solum/+spec/solum-build-farm
* https://blueprints.launchpad.net/solum/+spec/environments

@@ -1,430 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Solum CAMP API
==========================================
https://blueprints.launchpad.net/solum/+spec/solum-camp-api
The Cloud Application Management for Platforms (CAMP) [CAMP-v1.1]_
specification is an open standard for managing applications in a PaaS
environment. CAMP was created with the goals of increasing
interoperability between independent application management tools and
PaaS clouds as well as furthering the portability of applications across
different clouds. Due to its mindshare, momentum, etc. OpenStack is an
obvious candidate for a CAMP implementation. The Solum project, which
is partially based on CAMP, is the natural place for this
implementation.
Problem description
===================
Although the Solum API and resource model are similar (and in some
cases identical) to the API and resource model defined in the CAMP
specification, they are also different in a number of significant
ways. Tools and applications written to consume the CAMP API cannot
use the Solum API. What is required is to provide a CAMP facade on the
services provided by Solum.
Proposed change
===============
This specification proposes adding an alternate, CAMP compliant, API
to the Solum API service. This API (hereafter called the "Solum CAMP
API") will exist alongside the current REST API (hereafter referred to
as the "Solum API").
This proposal is scoped by the following constraints:
* The existence of the Solum CAMP API shall have no impact on Users
and Deployers that interact exclusively with in the Solum API. The
"solum" command-line client, the Horizon plugin, the python-solum
library etc. shall be unaffected by the Solum CAMP API.
* The use cases supported by the Solum CAMP API shall be a subset of
those supported by the Solum API; Users should not have to choose
between alternate sets of functionality. Consumers of the CAMP API
should do so because their application/tool requires that API.
* From a functional perspective, entities (applications, plans,
components) created by interacting with one API shall be visible via
the other API. For example, if a User creates an application via the
Solum API, another User should (if they are authorized to do so) be
able to see the "assembly" resource that represents that application
via the Solum CAMP API.
* From an implementation perspective the Solum CAMP API should, to the
greatest extent possible, re-use existing Solum classes and
configuration settings.
* The architectural constraints and implementation conventions of
Solum shall be strictly adhered to.
Alternatives
------------
None.
Data model impact
-----------------
The impact of the Solum CAMP API on the data model can be divided into
two separate areas: impacts on the data model due to the
implementation of CAMP-specific resource types and impacts on the data
model due to the difference between Solum resources and their CAMP
analogs (i.e. the difference between the Solum API's and the CAMP API's
versions of the "assembly" resource).
CAMP-specific resources
^^^^^^^^^^^^^^^^^^^^^^^
The CAMP-specific resources (i.e. resource types that exist in the
CAMP API but not the Solum API) are either static (e.g. the
"platform_endpoints" resource or the "type_definitions" resource) or
act as collections of other resources (e.g. the "services"
resource). The information presented by these resources is either a
reflection of configuration information or information about other
resources. In neither case is it necessary to store data about
multiple instances of these resource types.
CAMP-analog resources
^^^^^^^^^^^^^^^^^^^^^
At the time of this writing it appears that the majority of the
CAMP-analog resources (for example, the CAMP version of the "assembly"
resource) can be implemented without changing the database
schema. This is due to the fact that most of the CAMP-required
attributes that are missing from their Solum API counterparts are
Links to other resources. The information necessary to synthesize these
Links is present in Solum, but is presented in a different fashion by
the Solum API.
*If* additional information is required to support a CAMP-analog
resource, the suggested solution would be to create an additional
table that contains the CAMP-unique information and cross-references
the Solum resource using its id as a key.
REST API impact
---------------
CAMP's REST API is described by the Cloud Application Management for
Platforms (CAMP) specification. The URLs for Solum CAMP API resources
will exist in a separate sub-tree (named "camp") from those of the
Solum API.
The URL of the "platform_endpoints" resource (the resource that is
used to advertise the existence of distinct CAMP implementations) will
be:
*Solum_API_base_URL*/camp/platform_endpoints
The URL of the "platform_endpoint" resource for the CAMP v1.1
implementation will be:
*Solum_API_base_URL*/camp/camp_v1_1_endpoint
The URL of the "platform" resource (the root of the CAMP v1.1
resource tree) will be:
*Solum_API_base_URL*/camp/v1_1/platform
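As an illustration, walking this resource tree with plain HTTP GETs might
look as follows; the base URL is a placeholder and the attribute names are
only assumed to follow the CAMP resource model::

    import requests

    BASE = "http://solum.example.com:9777"  # placeholder Solum API base URL

    # Discover the available CAMP implementations, then fetch the v1.1
    # platform resource at the root of the CAMP resource tree.
    endpoints = requests.get(BASE + "/camp/platform_endpoints").json()
    print(endpoints)

    platform = requests.get(BASE + "/camp/v1_1/platform").json()
    # The platform resource links to the top-level collections; the exact
    # attribute names here are assumptions based on the CAMP spec.
    print(platform.get("assemblies"), platform.get("plans"))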
Security impact
---------------
Since the Solum CAMP API functions as an alternate interface to the
core Solum functionality, the addition of this API should not create
any additional attack vectors beyond those that may already exist
within Solum.
Notifications impact
--------------------
The Solum CAMP API will send the same notifications, for the same
events, as the existing Solum API.
Other end user impact
---------------------
Users employing CAMP compliant tools for managing their applications
will be able to use OpenStack/Solum without having to change these
tools.
Performance Impact
------------------
None.
Other deployer impact
---------------------
The Solum CAMP API will be enabled/disabled via a configuration option
(e.g."camp-support-enabled = [True | False]"). The Solum CAMP API will
be enabled by default. Solum's configuration documentation will be
updated to describe this option and the effect of enabling/disabling
it.
Developer impact
----------------
Adding additional code to the Solum project will have a maintenance
impact as features are added and bugs are fixed. For example, a change
to a handler class that is shared by the Solum API and CAMP API could
break the CAMP API code. The following steps will be taken to address
this impact:
* Ensure that interface between the Solum CAMP API and the core Solum
code is as clean as possible. This decreases the probability that
unrelated changes will break the CAMP API code or that changes to
the CAMP API will break other Solum code.
* Assign resources to maintain the Solum CAMP API code. The
implementation assignees identified below will be assigned this
task.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
gilbert.pilz
Other contributors:
anish-karmarkar
Work Items
----------
static resources
^^^^^^^^^^^^^^^^
Some of the resources defined in CAMP are static for a given
deployment and configuration. This work item will implement those
resources.
:Resources to be implemented:
platform
platform_endpoints
platform_endpoint
formats
format
type_definitions
type_definition
attribute_definition
At the completion of this step it will be possible to perform a
successful HTTP GET on these resources. Some of the attributes in
these resources may be missing or contain dummy values.
top-level container resources
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This work item will implement the "top-level" container resources
defined by CAMP.
:Resources to be implemented:
assemblies
services
plans
Upon completion of this item it will be possible to perform a
successful HTTP GET on these resources. The Link arrays in these
resources will reference the Solum API versions of the "assembly",
"service", and "plan" resources (respectively) even though the Solum
versions of some of these resources are not CAMP-compliant.
register a Plan via the Solum CAMP API
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This item will implement the code necessary to allow Users to POST a
Plan to the *Solum_API_base_URL*/camp/v1_1/plans resource using one of
the methods described in Section 6.12.2, "Registering a Plan by Value"
of the CAMP Specification.
Upon completion of this item it will be possible to POST a Plan to
*Solum_API_base_URL*/camp/v1_1/plans and have the contents of that
file appear as a "plan" resource in both the
*Solum_API_base_URL*/camp/v1_1/plans resource (as an element in the
plans_links array) and *Solum_API_base_URL*/v1/plans resource (as per
today). Note the "plan" resource in question will be the Solum version
of the resource; at this point there will be no CAMP-analog of the
"plan" resource.
Sending a DELETE request to the
*Solum_API_base_URL*/camp/v1_1/plan/*uuid* resource will remove the
"plan" resource from both the CAMP and Solum collections.
create an assembly from a reference to a plan resource
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This item will implement the code necessary to allow Users create a
running application by POST-ing a reference to a "plan" resource to
the *Solum_API_base_URL*/camp/v1_1/assemblies resource as described in
Section 6.11.1, "Deploying an Application by Reference" of the CAMP
Specification.
:Resources to be implemented:
assembly
component
Upon completion of this item it will be possible to POST a reference
to a plan resource to *Solum_API_base_URL/camp/v1_1/assemblies* and
have Solum build and create a running application. This application
will be represented by two, analogous resources, a CAMP version
(*Solum_API_base_URL*/camp/v1_1/assembly/*uuid*) and a Solum version
(Solum_API_base_URL/v1/assemblies/*uuid*). Each resource will be
referenced by its corresponding container. The CAMP version of the
"assembly" resource will reference a tree of CAMP-specific "component"
resources (also analogs of their Solum counterparts).
Sending a DELETE request to the
*Solum_API_base_URL*/camp/v1_1/assembly/*uuid* resource will halt the
application and remove it from the system, removing both the CAMP and
Solum versions of the "assembly" resource and any corresponding
"component" resources.
create an assembly directly from a Plan file
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This item will implement the code necessary to allow Users to create a
running application by POST-ing a Plan to the
*Solum_API_base_URL*/camp/v1_1/assemblies resource using one of the
methods described in Section 6.11.2, "Deploying an Application by
Value" of the CAMP Specification. This will be implemented by
combining the implementations of the two previous items.
Upon completion of this item it will be possible to POST a Plan file
to *Solum_API_base_URL*/camp/v1_1/assemblies and have Solum build and
create a running application. As a side-effect, a "plan" resource
will be created. The "assembly" and "component" resources will be the
same as for the preceding item.
Sending a DELETE request to the
*Solum_API_base_URL*/camp/v1_1/assembly/*uuid* resource will halt the
application and remove it from the system, removing both the CAMP and
Solum versions of the "assembly" resource and any corresponding
"component" resources but not the "plan" resource.
select_attr support
^^^^^^^^^^^^^^^^^^^
The CAMP specification requires implementations to support the use of
the "select_attr" query parameter as defined in sections 6.5, "Request
Parameters", and 6.10.1.1, "Partial Updates with PUT", of the CAMP
specification.
The Solum API does not support the use of the "select_attr" query
parameter. This item will add support for "select_attr", for both
GET and PUT, on all resources exposed by the Solum CAMP API.
Upon completion of this item it should be possible for Users to use
the "select_attr" query parameter in conjunction with the GET and PUT
methods to retrieve and update (where permitted) a subset of a
resource's attributes.
HTTP PATCH / JSON Patch support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The CAMP specification requires implementations to support the use of
the HTTP PATCH method in conjunction with the
"application/json-patch+json" [RFC6902]_ media type.
The Solum API does not support the HTTP PATCH method. This item
will add support for the use of JSON Patch in conjunction with the
HTTP PATCH method to all resources.
Upon completion of this item it should be possible for Users to use
HTTP PATCH with a JSON Patch payload to update (where permitted) all
or a subset of a resource's attributes.
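For illustration, an RFC 6902 patch request against a (hypothetical) assembly
URL might look like this::

    import requests

    # Hypothetical resource URL; <uuid> stands in for a real assembly id.
    url = "http://solum.example.com:9777/camp/v1_1/assembly/<uuid>"

    # RFC 6902 JSON Patch: update one attribute without resending the
    # whole resource representation.
    patch = [{"op": "replace", "path": "/description", "value": "updated"}]
    resp = requests.patch(
        url, json=patch,
        headers={"Content-Type": "application/json-patch+json"})
    resp.raise_for_status()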
Dependencies
============
No dependencies other than those that already exist in Solum.
Testing
=======
In addition to unit tests, this project will develop a number of
tempest tests which will exercise the main use cases of the Solum CAMP
API. These tests will have the following pattern:
1. Walk the Solum CAMP API resource tree to find the appropriate
resource (e.g. "assemblies" resource)
2. Perform some action on that resource (e.g. POST a Plan file to that
resource).
3. Verify that the action has produced the desired result (e.g. the
creation of a new "assembly" resource).
The assertions defined in the "Cloud Application Management for
Platform (CAMP) Test Assertions" [CAMP-Test-Assertions-v1.1]_ document
will be used, where appropriate, to verify the proper behavior of the
API. Note, it is not our intention to cover every assertion defined in
this document, but simply to leverage the work that has been done in
this area.
Documentation Impact
====================
Information about the CAMP API (specifications, primers, etc.) is
provided by the OASIS CAMP Technical Committee.
Information about enabling/disabling the Solum CAMP API and any other
configuration information will be added to the Solum documentation.
References
==========
Specifications
--------------
.. [CAMP-v1.1] *Cloud Application Management for Platforms Version
1.1.* Edited by Jacques Durand, Adrian Otto, Gilbert Pilz, and Tom
Rutt. 12 February 2014. OASIS Committee Specification Draft 04 /
Public Review Draft 02.
http://docs.oasis-open.org/camp/camp-spec/v1.1/csprd02/camp-spec-v1.1-csprd02.html
Latest version:
http://docs.oasis-open.org/camp/camp-spec/v1.1/camp-spec-v1.1.html
.. [CAMP-Test-Assertions-v1.1] *Cloud Application Management for
Platforms (CAMP) Test Assertions v1.1.* Edited by Jacques Durand, Adrian
Otto, Gilbert Pilz, and Tom Rutt. 12 February 2014. OASIS Committee
Specification Draft 01 / Public Review Draft 01.
http://docs.oasis-open.org/camp/camp-ta/v1.1/csprd01/camp-ta-v1.1-csprd01.html
Latest version:
http://docs.oasis-open.org/camp/camp-ta/v1.1/camp-ta-v1.1.html
.. [RFC6902] Bryan, P., Ed., and M. Nottingham, Ed., "JavaScript
Object Notation (JSON) Patch", RFC 6902,
April 2013. http://www.ietf.org/rfc/rfc6902.txt
Implementations
---------------
* nCAMP, CAMP v1.1 Proof of Concept.
http://ec2-107-20-16-71.compute-1.amazonaws.com/campSrv/
@ -1,203 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===================================
Storage and retrieval of stage logs
===================================
https://blueprints.launchpad.net/solum/+spec/stage-logs
Solum needs to collect the logs of each stage of assembly creation, and
associate those logs with the assemblies for reference and debugging by users
of Solum.
Problem description
===================
At present, users of Solum cannot see the log output from tasks such as
build-app and unit-test. I identify three important steps to provide these
logs to users:
- Solum's worker agent must store the output of the tasks it performs.
- Solum must associate these logs with the appropriate Assembly.
- Solum must communicate this association to users in a simple way. A user must
be able to find the logs of each of the stages the user's assembly has been
through with no more information than the assembly id.
I also identify several constraints on this problem:
- Users must only have their own logs presented to them.
- Storage and organization of logs must be as simple as possible, both to
facilitate consumption by a service such as Kibana run by deployers of
Solum, and to allow simple manual consumption by deployers not wishing to
deploy and maintain additional dependent services to run Solum.
- Multiple worker agents may be running on one or more hosts concurrently, so
individual agents must not inhibit the work of other agents.
- An assembly will likely be rebuilt over the course of an application's
development; therefore a single assembly may bear more than one set of logs
from a particular stage. These multiple runs must be distinguished from each
other, and of special note is finding which of a set of logs is the most
recent. Once Mistral is used for workflow execution, we can associate these
logs by Execution, but until then, timestamps are the simplest solution.
Proposed change
===============
These features are best achieved in incremental steps:
- Store the logs of a stage on the host of the worker agent.
- Develop a naming scheme for output files for storage (see the sketch
after this list). I propose ::
/var/log/solum/worker/<assembly>/<stage>/<iso8601>.log
- For logging the output of a stage, maintaining metadata is important,
and a file path is not a strong way to associate it, especially with
multiple hosts simultaneously recording logs. It is then important that
the metadata remain in the body of the log. To that end, I propose
recording logs using the following format: ::
{ "@timestamp": "<iso8601 timestamp>",
"assembly_id": "<assembly id>",
"solum_stage": "<'unittest', 'build-app', etc>",
...
"message": "<captured output>"
}
This format is suggested as a simple way for this logging mechanism to
interface with log-consuming tools like Logstash or ElasticSearch, or
syslog.
- In addition, attaching a field to Assembly to convey the location of its
relevant logs would be helpful at this stage, using a text field to store
a dictionary shaped nominally like the following: ::
{
unittest: {
<date1>: <host>:<path/to/file>,
<date2>: <host>:<path/to/file>
},
build-app: {
<date1>: <host>:<path/to/file>
}
}
- Collect and organize the logs from the hosts to a central location.
- This may be achieved most easily by storing the logs initially on network
storage, or emitting them directly to a service like syslog.
- Present the organized logs to users, ideally via additional API methods.
- Develop a URI scheme for presenting the logs to users. I propose ::
SOLUM/v2/assemblies/<assembly>/logs/<stage>/<iso8601>.log
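A minimal sketch tying the proposed path and record formats together; the
configurable log root and helper name are illustrative: ::

    import datetime
    import json
    import os

    LOG_ROOT = "/var/log/solum/worker"  # assumed configurable root

    def store_stage_log(assembly_id, stage, output):
        """Write captured stage output as a single JSON log record."""
        timestamp = datetime.datetime.utcnow().isoformat()
        log_dir = os.path.join(LOG_ROOT, assembly_id, stage)
        if not os.path.exists(log_dir):
            os.makedirs(log_dir)
        path = os.path.join(log_dir, "%s.log" % timestamp)
        record = {
            "@timestamp": timestamp,
            "assembly_id": assembly_id,
            "solum_stage": stage,  # e.g. 'unittest', 'build-app'
            "message": output,
        }
        with open(path, "w") as f:
            json.dump(record, f)
        return path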
Alternatives
------------
- Swift can likely be used for storage, but Solum does not already make use of
the service. Initially, we will focus on local storage, and work in other
services such as Swift in later stages.
Data model impact
-----------------
Minimal. Initially, an extra field on Assembly is proposed.
REST API impact
---------------
None
Security impact
---------------
Exposing the output of a docker container running user unit tests against
user code is no more dangerous than running those tests against the code in
the first place.
One concern of this format is making sure only the owner of an assembly can
retrieve its logs.
The concern of a user filling up storage is not a large one. Ideally, system
monitor triggers ought to warn long before a user exhausts all available
storage with a single docker container.
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
None
Developer impact
----------------
This setup can be reused by future stages, such as functional testing.
Implementation
==============
Assignee(s)
-----------
Ed Cranford (ed--cranford) will implement this logging proposal.
Work Items
----------
- Add configurable path and ensure the directory is created as worker starts.
- Modify worker shell handler to capture and store the output of build and
unittest tasks.
Dependencies
============
None
Testing
=======
This is a practical change, and difficult to test automatically. This is
further complicated by the differences between the development environment VM
and an actual deployed Solum environment, which may be distributed across
several distinct hosts and present accessibility problems we cannot foresee in
development.
Documentation Impact
====================
The documentation changes must cover minimally:
- For operators, how to configure the logging location.
- For users, how to retrieve the output of assembly stages.
References
==========
None
@ -1 +0,0 @@
.. include:: ../template.rst
@ -1,784 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===========================================
An app resource for managing applications
===========================================
https://blueprints.launchpad.net/solum/+spec/app-resource
This specification lays out a new first-class resource, the app, to be
presented by the API as a simplified collection of assemblies, plan
registrations, logs of actions, and infrastructure.
Problem description
===================
Interacting with Solum presently involves managing languagepacks, plans,
assemblies, and sometimes components, heat stacks, pipelines, and services.
The CLI has been augmented with a virtual "app" resource to simplify some of
these interactions, reducing the resources a user manages to languagepacks
and these new apps.
Unfortunately, this simplification isn't available in the Solum API, and users
of the API outside the CLI are missing out on this interaction.
In addition, due to the direct reference to plan in assembly, plans may not be
deleted if there are any existing assemblies referencing them, meaning at present
that soft deletions aren't possible for assemblies with our current models.
As a result of these hard deletes, a lot of arguably important data about Solum
apps is destroyed when an assembly or plan is deleted. It's very difficult to
reliably track the progress of an assembly as a third party, but trivial to do
it from within the engine.
Proposed change
===============
Some elements of an application should persist between builds and deployments,
like the networking information, or the overall status of an application.
Other elements are complicated enough to warrant being separate resources,
though not top-level due to their direct reliance on other resources, and to
their nature of changing more frequently than the app as a whole.
In addition to standard OpenStack resource metadata, an app will own configuration
information including where and how to get a user's code, what languagepack to use
to build apps, and what steps to follow to produce the app.
The app will also bear some living data that is not directly mutable by users, including
a heat stack that contains a load balancer and possibly a container running the user's
app.
An app resource will own a series of workflows, and the app's status will be
an aggregate of the states of those workflows. An app can be simultaneously deployed
and building, for example. These workflows will glean information about the steps
to execute, and how to execute those steps, from the app's configuration. Should
some of that data require evaluation, as in the case of source revisions pointing
to the head of a branch or deploying the most recent successful build of an app,
this information will be stored with the workflow to avoid ambiguity.
As an app's workflows progress through the stages of their execution, they may
produce logs or container images in addition to the result of a stage. This
data will be stored along with timestamp and relevant association data so the
complete history of an app can be discerned with simple queries.
Alternatives
------------
One of the major pain points in manipulating Solum resources at present is
understanding the relationship between plan and assembly, and what part each
resource represents in an application. At minimum, decoupling assembly's direct
reference to plan would be a good start, and could open the way to adding
versioning to plans without necessarily needing to rebuild every assembly.
Having an assembly keep a current record of which plan, or more specifically
which code source and test/run commands were run would make for a much clearer
interface.
Discerning from the API how many assemblies were created with a given plan is a
tedious task, requiring fetching all of a user's assemblies and filtering them by
inspecting their plan_uri, and parsing that to fetch the uuid. At minimum, some
search tools ought to be built into the API, for example to filter, paginate,
and order these assemblies by their fields, including created and updated
dates. In any case, the conveniences the official CLI provides ought to be
pulled back into the API to make Solum more accessible.
Data model impact
-----------------
- Plan table to be supplanted by new App table; relevant registration data like
repository, branch, and test commands to be part of app entries.
- Workflow table to be added to track progress of an app. An app can have
several workflows in various stages of completion at any given time.
- History table to track status changes, created logs, created artifacts,
and even rows deleted from the app table.
- Assembly table to be removed. Containers will still be used to do the actual
work of testing, building, and deploying, but they'll belong to workflows and
their progress will be tracked accordingly.
App table
- Immutable fields:
- (standard metadata)
- :code:`languagepack_id`: reference to images.id
- :code:`load_balancer`: entry point of deployed app, points to LB
- :code:`heat_stack_id`: stack that contains persistent LB
- :code:`entry_points`: JSON object of user entry points, like {"web": <url>, "db": <url>}
- :code:`trigger_id`: type-4 uuid for building trigger URLs
- :code:`language_pack_id`: reference to images.id, indirectly mutable
- Mutable fields:
- :code:`name`: human-set and -readable title
- :code:`description`: human-set
- :code:`ports`: list of ports on the app container to expose
- :code:`source`: JSON object containing repository, revision, and pertinent auth information
- :code:`workflow_config`: JSON object containing test command, run command, and pertinent config information
- :code:`trigger_actions`: JSON array of commands to be executed when the trigger is invoked.
Workflow table:
- Immutable fields:
- (standard metadata)
- :code:`app_id`: reference to app.id
- :code:`wf_id`: incremental id, orders workflows that share app_id
- :code:`source`: snapshot of app.source
- :code:`config`: snapshot of app.workflow_config
- :code:`actions`: JSON list of actions to be completed on app
Workflow history table:
- Immutable fields:
- (standard metadata)
- :code:`app_id`: weak reference to app.id, for easier indexing and searches
- :code:`workflow_id`: weak reference to workflow.id
- :code:`source`: snapshot of workflow.source
- :code:`config`: snapshot of workflow.config
- :code:`action`: current stage of workflow execution
- :code:`status`: progress of mentioned :code:`action`; one of *QUEUED*, *IN PROGRESS*, *SUCCESS*, *FAILURE*, and *ERROR*
- :code:`logs`: JSON array of URIs pointing to any created logs
- :code:`artifacts`: JSON array of URIs pointing to any created artifacts
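As a rough illustration of the workflow table above, a SQLAlchemy model
might look like the following sketch; column types, sizes, and the
declarative base are assumptions, not the final schema: ::

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Workflow(Base):
        __tablename__ = 'workflow'

        id = sa.Column(sa.Integer, primary_key=True)
        app_id = sa.Column(sa.String(36))  # reference to app.id
        wf_id = sa.Column(sa.Integer)      # orders workflows sharing app_id
        source = sa.Column(sa.Text)        # JSON snapshot of app.source
        config = sa.Column(sa.Text)        # JSON snapshot of app.workflow_config
        actions = sa.Column(sa.Text)       # JSON list of actions to complete
        created_at = sa.Column(sa.DateTime, server_default=sa.func.now())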
REST API impact
---------------
**App Commands**
The app is the primary resource Solum manages. In addition to its own metadata
and information, an app also owns a list of workflows. An app's status is an
aggregate of the status fields of its child workflow resources. As such, an
app can be deployed, building, and testing at once. It can also have a failed
build and remain deployed with no problem. Being a first-class REST resource,
an app has the standard Create, Read, Update, and Delete verbs:
List all Apps::
GET /apps
200 OK
{
"apps": [
{
"uuid": "039db61a-b79a-43b3-821f-1e84e49fbdf3",
"status": {
"test": "IN PROGRESS",
"build": "QUEUED",
"deploy": "SUCCESS",
},
"created": <datetime>,
"updated": <datetime>,
"app_url": "http://192.0.2.100",
"language_pack": "python27",
"trigger_url": "/triggers/f2970536-c225-4959-9634-ddaf162cc214",
"name": "ghost",
"description": "My ghost blog",
"ports": [80],
"source": {
"repository": "http://github.com/fakeuser/ghost.git",
"revision": "master",
"oauth_token": "ghostblog-token"
},
"workflow_config": {
"test_cmd": "tox -epep8 -epy27",
"run_cmd": "/app/bin/run-blog.sh -f /app/config/ghost.conf",
},
"trigger_actions": [
"test",
"build",
"deploy"
],
"workflows": [
...
]
}
]
}
Create an App::
POST /apps/
{
"name": "djangodemo",
"description": "Simple todo-list app",
"ports": [80],
"source": {
"repository": "http://github.com/fakeuser/fakedjangoapp.git",
"revision": "master"
},
"language_pack": "python27"
"workflow_config": {
"test_cmd": "tox -epep8 -epy27",
"run_cmd": "bin/start-app.py",
},
"trigger_actions": [
"test",
"build",
"deploy
]
}
200 OK
{
"app": {
"uuid": "94cb7b89-0de8-492b-bf54-05ae96c9bd0e",
"status": {
"test": "IN PROGRESS",
"build": "QUEUED",
"deploy": "SUCCESS",
},
"created": <datetime>,
"updated": <datetime>,
"app_url": "http://192.0.2.101",
"language_pack": "python27",
"trigger_url": "/triggers/4ed0cd4b-da91-4552-a9c0-8c4a49fd2f56",
"name": "djangodemo",
"description": "Simple todo-list app",
"ports": [80],
"source": {
"repository": "http://github.com/fakeuser/fakedjangoapp.git",
"revision": "master",
},
"workflow_config": {
"test_cmd": "tox -epep8 -epy27",
"run_cmd": "bin/start-app.py",
},
"trigger_actions": [
"test",
"build",
"deploy"
],
"workflows": []
}
}
Show one App::
GET /apps/94cb7b89-0de8-492b-bf54-05ae96c9bd0e
200 OK
{
"app": {
"uuid": "94cb7b89-0de8-492b-bf54-05ae96c9bd0e",
"status": {
"test": "IN PROGRESS",
"build": "QUEUED",
"deploy": "SUCCESS",
},
"created": <datetime>,
"updated": <datetime>,
"app_url": "http://192.0.2.101",
"language_pack": "python27",
"trigger_url": "/triggers/4ed0cd4b-da91-4552-a9c0-8c4a49fd2f56",
"name": "djangodemo",
"description": "Simple todo-list app",
"ports": [80],
"source": {
"repository": "http://github.com/fakeuser/fakedjangoapp.git",
"revision": "master",
},
"workflow_config": {
"test_cmd": "tox -epep8 -epy27",
"run_cmd": "bin/start-app.py",
},
"trigger_actions": [
"test",
"build",
"deploy"
],
"workflows: [
...
]
}
}
Update one App::
PATCH /apps/94cb7b89-0de8-492b-bf54-05ae96c9bd0e
{
"description": "To-do list with new engine",
}
200 OK
{
"app": {
"uuid": "94cb7b89-0de8-492b-bf54-05ae96c9bd0e",
"status": {
"test": "IN PROGRESS",
"build": "QUEUED",
"deploy": "SUCCESS",
},
"created": <datetime>,
"updated": <datetime>,
"app_url": "http://192.0.2.101",
"language_pack": "python27",
"trigger_url": "/triggers/4ed0cd4b-da91-4552-a9c0-8c4a49fd2f56",
"name": "djangodemo",
"description": "To-do list with new engine",
"ports": [80],
"source": {
"repository": "http://github.com/fakeuser/fakedjangoapp.git",
"revision": "master",
},
"workflow_config": {
"test_cmd": "tox -epep8 -epy27",
"run_cmd": "bin/start-app.py",
},
"trigger_actions": [
"test",
"build",
"deploy"
],
"workflows": [
...
]
}
}
Delete one stopped App::
DELETE /apps/039db61a-b79a-43b3-821f-1e84e49fbdf3
204 NO CONTENT
**Workflow commands**
An app manages an application through each stage of its CI/CD lifecycle. The
workflow commands expose control verbs to the user for direct interaction.
Test an app's code::
POST /apps/94cb7b89-0de8-492b-bf54-05ae96c9bd0e/workflows
{
"actions": ["test"]
}
202 ACCEPTED
{
"workflow": {
"app_id": "94cb7b89-0de8-492b-bf54-05ae96c9bd0e",
"wf_id": 34,
"created": <datetime>,
"updated": <datetime>,
"status": {
"test": "QUEUED"
},
"source": {
"repository": "http://github.com/fakeuser/fakedjangoapp.git",
"revision": "master"
},
"config": {
"test_cmd": "tox -epep8 -epy27",
"run_cmd": "bin/start-app.py",
"build_id": 34,
},
"actions": [
"test"
],
"logs": [],
"artifacts": []
}
}
Test and build app::
POST /apps/94cb7b89-0de8-492b-bf54-05ae96c9bd0e/workflows
{
"actions": ["test", "build"]
}
202 ACCEPTED
{
"workflow": {
"app_id": "94cb7b89-0de8-492b-bf54-05ae96c9bd0e",
"wf_id": 35,
"created": <datetime>,
"updated": <datetime>,
"status": {
"test": "QUEUED",
"build": "QUEUED"
},
"source": {
"repository": "http://github.com/fakeuser/fakedjangoapp.git",
"revision": "master"
},
"config": {
"test_cmd": "tox -epep8 -epy27",
"run_cmd": "bin/start-app.py",
"build_id": 35,
},
"actions": [
"test",
"build"
],
"logs": [],
"artifacts": []
}
}
Skip tests, just build app::
POST /apps/94cb7b89-0de8-492b-bf54-05ae96c9bd0e/workflows
{
"actions": ["build"]
}
202 ACCEPTED
{
"workflow": {
"app_id": "94cb7b89-0de8-492b-bf54-05ae96c9bd0e",
"wf_id": 36,
"created": <datetime>,
"updated": <datetime>,
"status": {
"build": "QUEUED"
},
"source": {
"repository": "http://github.com/fakeuser/fakedjangoapp.git",
"revision": "master"
},
"config": {
"test_cmd": "tox -epep8 -epy27",
"run_cmd": "bin/start-app.py",
"build_id": 36,
},
"actions": [
"build"
],
"logs": [],
"artifacts": []
}
}
Deploy app with last good build
Since wf_id starts at 1, a build_id of 0 is a sentinel meaning "the last good
build at deploy time". It is not resolved until the deploy action starts, at
which point the workflow's build_id will be updated to the appropriate value,
or else the workflow will be marked as ERROR with the reason "No such build". ::
POST /apps/94cb7b89-0de8-492b-bf54-05ae96c9bd0e/workflows
{
"actions": ["deploy"],
"config": {
"build_id": 0
}
}
202 ACCEPTED
{
"workflow": {
"app_id": "94cb7b89-0de8-492b-bf54-05ae96c9bd0e",
"wf_id": 37,
"created": <datetime>,
"updated": <datetime>,
"status": {
"deploy": "QUEUED"
},
"source": {
"repository": "http://github.com/fakeuser/fakedjangoapp.git",
"revision": "master"
},
"config": {
"test_cmd": "tox -epep8 -epy27",
"run_cmd": "bin/start-app.py",
"build_id": 0,
},
"actions": [
"deploy"
],
"logs": [],
"artifacts": []
}
}
...
GET /apps/94cb7b89-0de8-492b-bf54-05ae96c9bd0e/workflows/37
200 OK
{
"workflow": {
"app_id": "94cb7b89-0de8-492b-bf54-05ae96c9bd0e",
"wf_id": 37,
"created": <datetime>,
"updated": <datetime>,
"status": {
"deploy": "IN PROGRESS"
},
"source": {
"repository": "http://github.com/fakeuser/fakedjangoapp.git",
"revision": "master"
},
"config": {
"test_cmd": "tox -epep8 -epy27",
"run_cmd": "bin/start-app.py",
"build_id": 34,
},
"actions": [
"deploy"
],
"logs": [],
"artifacts": []
}
}
Deploy app with specific build
If there is no artifact from that build_id (the build failed), this request
will fail. If the referenced workflow is a build but its status is not QUEUED,
IN PROGRESS, or SUCCESS, the request will also fail. ::
POST /apps/94cb7b89-0de8-492b-bf54-05ae96c9bd0e/workflows
{
"actions": ["deploy"],
"config": {
"build_id": 33
}
}
202 ACCEPTED
{
"workflow": {
"app_id": "94cb7b89-0de8-492b-bf54-05ae96c9bd0e",
"wf_id": 38,
"created": <datetime>,
"updated": <datetime>,
"status": {
"deploy": "QUEUED"
},
"source": {
"repository": "http://github.com/fakeuser/fakedjangoapp.git",
"revision": "master"
},
"config": {
"test_cmd": "tox -epep8 -epy27",
"run_cmd": "bin/start-app.py",
"build_id": 33,
},
"actions": [
"deploy"
],
"logs": [],
"artifacts": []
}
}
Stop running app
Halt and destroy a container running user code. Starting another app is a
matter of deploying the same build again, which will create a new assembly
in the process. ::
POST /apps/94cb7b89-0de8-492b-bf54-05ae96c9bd0e/workflows
{
"actions": ["stop"]
}
202 ACCEPTED
{
"workflow": {
"app_id": "94cb7b89-0de8-492b-bf54-05ae96c9bd0e",
"wf_id": 39,
"created": <datetime>,
"updated": <datetime>,
"status": {
"stop": "QUEUED",
},
"source": {
"repository": "http://github.com/fakeuser/fakedjangoapp.git",
"revision": "master"
},
"config": {
"test_cmd": "tox -epep8 -epy27",
"run_cmd": "bin/start-app.py",
"build_id": 39,
},
"actions": [
"stop"
],
"logs": [],
"artifacts": []
}
}
**History, Log, and Artifact commands**
As workflows progress through their actions, a record of the changes made to
an app is kept in the history table. In addition, any logs or artifacts
produced during the course of these actions are stored with this record.
All three resources are managed by the engine in the same table, but they are
exposed separately via the API to facilitate their retrieval.
These commands in particular benefit from filtering and pagination.
Show change history of one app::
GET /apps/2797a1f4-fc03-4c21-9dde-099cf7636ceb/history
Show recent history of one app::
GET /apps/2797a1f4-fc03-4c21-9dde-099cf7636ceb/history?limit=5
List all logs for one app::
GET /apps/2797a1f4-fc03-4c21-9dde-099cf7636ceb/logs
Fetch logs for last failed test action of one app::
GET /apps/2797a1f4-fc03-4c21-9dde-099cf7636ceb/logs?action=test&status=FAILED&limit=1
List all artifacts for one app::
GET /apps/2797a1f4-fc03-4c21-9dde-099cf7636ceb/artifacts
Security impact
---------------
(to be determined)
Notifications impact
--------------------
(to be determined)
Other end user impact
---------------------
At minimum, python-solumclient will be drastically simplified. At present, it
already presents app commands that manipulate primarily plan and assembly
resources. By implementing these features in the API, the playing field is much
more level should someone want to interact with Solum without using the
official CLI.
The planfile format will also change dramatically. I seek to remove the
confusing multiple-artifact section, and its free-form content section.
Instead, to reflect the new fields in the planfile, it might be as simple as: ::
name: fooweb
description: my fooweb app
languagepack: python27
ports: [80]
source:
- repository: https://github.com/fakeuser/fooweb.git
workflow_config:
- test_cmd: tox -epep8 -epy27
run_cmd: /app/run.sh /app/app.ini
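Because the flattened format maps each top-level key onto an app field,
client-side handling could be as small as this sketch (the file name is
illustrative): ::

    import yaml

    with open("planfile.yaml") as f:
        plan = yaml.safe_load(f)

    # Each top-level key corresponds to a field on the app resource.
    print(plan["name"], plan["languagepack"], plan["ports"])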
Performance Impact
------------------
None
Other deployer impact
---------------------
None
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
<ed--cranford>
Work Items
----------
- Create app resource
- Replace use of plan resources in code with app
- Create workflow resource
- Replace use of assembly resource in code with workflow
- Create history resource
- Add code to add rows to history table with every workflow update
- Add CLI commands for (new) app, workflows, and history resources
- Remove assembly, plan, component, pipeline commands from CLI
- Remove plan and assembly resources from API
Dependencies
============
None
Testing
=======
A drastic modification of the models and resources in Solum will of course
require extensive changes to both unit and tempest tests. Ideally, the tests
will be made simpler, as a lot of metadata should be handled by the API and not
by waitloops and client-side aggregation and filtering of API responses.
Documentation Impact
====================
Significantly less effort will be spent on explaining assemblies and plans and
their relationship to an application. Arguably one of the most confusing ideas
in Solum at present is that it is a tool for managing application lifecycles
and yet has no application resource to speak of.
References
==========
None
.. # vim: set sw=2 ts=2 sts=2 tw=79:
@ -1,193 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================
Deployer Plugins
================
https://blueprints.launchpad.net/solum/+spec/deployer_plugins
Currently, any customization or user-selectable variety in application
deployment architecture is exclusively governed by available Heat templates
that the single deployer has access to. While sufficient for a few simple
architectural topologies sharing the same parameter requirements and update
workflows, more numerous and complex options make this single deployer
design untenable.
Problem description
===================
The current deployer design makes implementing many types of architectural
topology options difficult if not impossible. Relying solely on different
templates is insufficient, since customization is limited by having either to
restrict the needs of these architectures to a strictly limited set of
parameters or to require every template to include every possible parameter
of every other template.
Additionally, this mechanism doesn't allow for various different update
strategies. For example, an HA architectural topology may require multiple
calls to stack update and modifications to the template that simply
cannot be accounted for in a "one-size-fits-all" deployer design.
Finally, the current design requires operators to patch the deployer
should they wish to rely on different deployment mechanisms and/or
workflows.
Proposed change
===============
The proposed change would use Stevedore plugins to allow the deployer to be
easily extended to support various different architectural topologies. These
plugins would also expose any additional parameters that are relevant to and/or
allowed for in the particular architectural topology it implements.
Functionality such as generating pre-authenticated urls and general preamble
would still be handled by the current deployer class. However, in this proposed
design, the deployer would pass parameters and other user selections on to the
desired plugin which would then handle the actual deployment (template
selection, manipulation, and interaction with Heat or other provisioning
strategies). The plugin will also be responsible for monitoring provisioning
status and reporting it back to the deployer. These plugins would still run in the
same process as the deployer and would not require any additional
synchronization or communication mechanisms.
Each plugin would optionally define properties, in the same way Heat
resource plugins do. These properties allow the plugin author to define the
user-configurable options available for that architectural topology. When an
application is created, the user will be able to pass property values which
are then validated and used in provisioning by the chosen plugin. The user
can query the API for available architectural topologies and their properties.
The API will get this information by requesting it from the deployer which in
turn will simply examine its loaded plugins.
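A minimal sketch of what such a plugin base class and its loading might look
like, assuming Stevedore entry points under an illustrative
``solum.deployer.topologies`` namespace: ::

    from stevedore import driver

    class TopologyPlugin(object):
        """Base class for deployer topology plugins (illustrative)."""

        # User-configurable options, analogous to Heat resource properties.
        properties_schema = {}

        def validate(self, properties):
            """Reject property values not defined by this topology."""
            for key in properties:
                if key not in self.properties_schema:
                    raise ValueError("unknown property: %s" % key)

        def deploy(self, app, properties):
            """Provision the application; concrete plugins override this."""
            raise NotImplementedError

        def get_status(self, app):
            """Report provisioning status back to the deployer."""
            raise NotImplementedError

    def load_topology(name):
        """Load the plugin registered under the assumed namespace."""
        mgr = driver.DriverManager(
            namespace='solum.deployer.topologies',
            name=name,
            invoke_on_load=True)
        return mgr.driver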
Alternatives
------------
As mentioned in the Problem Description, various templates and ever-branching
logic in the deployer code could be relied on for a time, but this is
untenable in the long term.
Data model impact
-----------------
The ``app`` table will need to be modified to include a ``topology``
column to identify the plugin used to deploy the application. Since the
deployer plugin is now responsible for provisioning, the ``stack_id`` column
can be replaced by a generic JSON blob column called ``deployer_info``. This
column can contain arbitrary data that the specific plugin needs (stack ids,
persistent property values, etc).
REST API impact
---------------
The api would be extended to include the following calls:
*GET /topologies*
Return a list of supported architectural topologies:
- Response codes: 200
- Returns a list of topologies and their short descriptions
*GET /topologies/<topology>*
Return details of the specified topology:
- Response codes: 200
- Returns details of a topology similar to the response from
``heat resource-type-show``
Additionally, application creation and update would be extended to accept
parameters. These parameters and their values will be validated by the plugin
implementing the specified topology.
Security impact
---------------
None.
Notifications impact
--------------------
None.
Other end user impact
---------------------
The python-solumclient will need to add the following corresponding methods:
* ``solum topology list``
Return a list of supported architectural topologies
* ``solum topology show``
Return details of the specified topology
Performance Impact
------------------
None
Other deployer impact
---------------------
Deployers will need to be aware of plugin packaging and deployment should
they wish to use this mechanism for extension via custom plugins that are not
distributed by default.
Developer impact
----------------
This is a new method of adding and maintaining deployer code. Developers will
need to be aware of and familiar with Stevedore plugins and how they are
defined, registered, and loaded.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
randallburt
Work Items
----------
* include Stevedore and create deployer plugin base class
* refactor current deployer to load plugins for topologies
* refactor existing "basic" flavor to be a plugin
* refactor tests and add coverage for manager and basic plugin
* add topology listing and detail to the api
* add functional tests (Tempest) for topology listing and detail
* add topology listing and detail to the cli
Dependencies
============
* Stevedore <http://docs.openstack.org/developer/stevedore/> will be an
additional dependency in ``requirements.txt``.
Testing
=======
Tempest tests for the new API endpoints will be added to cover basic
functionality. Changes to application deployment and other functions should
not impact their existing tests; in fact, the current tests are required to
pass as-is to prove there are no regressions.
Documentation Impact
====================
Documentation for python-solumclient will need to be updated with the new
operations.
References
==========
None.
@ -1,320 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Example Spec - The title of your blueprint
==========================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/solum/+spec/example
Introduction paragraph -- why are we doing anything? A single paragraph of
prose that operators can understand. The title and this first paragraph
should be used as the subject line and body of the commit message
respectively.
Some notes about using this template:
* Your spec should be in ReSTructured text, like this template.
* Please wrap text at 79 columns.
* The filename in the git repository should match the launchpad URL, for
example a URL of: https://blueprints.launchpad.net/solum/+spec/awesome-thing
should be named awesome-thing.rst
* Please do not delete any of the sections in this template. If you have
nothing to say for a whole section, just write: None
* For help with syntax, see http://sphinx-doc.org/rest.html
* To test out your formatting, build the docs using tox, or see:
http://rst.ninjs.org
* If you would like to provide a diagram with your spec, ascii diagrams are
required. http://asciiflow.com/ is a very nice tool to assist with making
ascii diagrams. The reason for this is that the tool used to review specs is
based purely on plain text. Plain text will allow review to proceed without
having to look at additional files which cannot be viewed in gerrit. It
will also allow inline feedback on the diagram itself.
Problem description
===================
A detailed description of the problem:
* For a new feature this might be use cases. Ensure you are clear about the
actors in each use case: End User vs Deployer
* For a major reworking of something existing it would describe the
problems in that feature that are being addressed.
Proposed change
===============
Here is where you cover the change you propose to make in detail. How do you
propose to solve this problem?
If this is one part of a larger effort make it clear where this piece ends. In
other words, what's the scope of this effort?
Alternatives
------------
What other ways could we do this thing? Why aren't we using those? This doesn't
have to be a full literature review, but it should demonstrate that thought has
been put into why the proposed solution is an appropriate one.
Data model impact
-----------------
Changes which require modifications to the data model often have a wider impact
on the system. The community often has strong opinions on how the data model
should be evolved, from both a functional and performance perspective. It is
therefore important to capture and gain agreement as early as possible on any
proposed changes to the data model.
Questions which need to be addressed by this section include:
* What new data objects and/or database schema changes is this going to
require?
* What database migrations will accompany this change?
* How will the initial set of new data objects be generated, for example if you
need to take into account existing instances, or modify other existing data
describe how that will work.
REST API impact
---------------
Each API method which is either added or changed should have the following
* Specification for the method
* A description of what the method does suitable for use in
user documentation
* Method type (POST/PUT/GET/DELETE)
* Normal http response code(s)
* Expected error http response code(s)
* A description for each possible error code should be included
describing semantic errors which can cause it such as
inconsistent parameters supplied to the method, or when an
instance is not in an appropriate state for the request to
succeed. Errors caused by syntactic problems covered by the JSON
schema definition do not need to be included.
* URL for the resource
* Parameters which can be passed via the url
* JSON schema definition for the body data if allowed
* JSON schema definition for the response data if any
* Example use case including typical API samples for both data supplied
by the caller and the response
* Discuss any policy changes, and discuss what things a deployer needs to
think about when defining their policy.
Example JSON schema definitions can be found in the Solum tree
http://git.openstack.org/cgit/openstack/solum/tree/solum/api/openstack/compute/schemas/v3
Note that the schema should be defined as restrictively as
possible. Parameters which are required should be marked as such and
only under exceptional circumstances should additional parameters
which are not defined in the schema be permitted (e.g.
additionalProperties should be False).
Reuse of existing predefined parameter types such as regexps for
passwords and user defined names is highly encouraged.
Security impact
---------------
Describe any potential security impact on the system. Some of the items to
consider include:
* Does this change touch sensitive data such as tokens, keys, or user data?
* Does this change alter the API in a way that may impact security, such as
a new way to access sensitive information or a new way to login?
* Does this change involve cryptography or hashing?
* Does this change require the use of sudo or any elevated privileges?
* Does this change involve using or parsing user-provided data? This could
be directly at the API level or indirectly such as changes to a cache layer.
* Can this change enable a resource exhaustion attack, such as allowing a
single API interaction to consume significant server resources? Some examples
of this include launching subprocesses for each connection, or entity
expansion attacks in XML.
For more detailed guidance, please see the OpenStack Security Guidelines as
a reference (https://wiki.openstack.org/wiki/Security/Guidelines). These
guidelines are a work in progress and are designed to help you identify
security best practices. For further information, feel free to reach out
to the OpenStack Security Group at openstack-security@lists.openstack.org.
Notifications impact
--------------------
Please specify any changes to notifications. Be that an extra notification,
changes to an existing notification, or removing a notification.
Other end user impact
---------------------
Aside from the API, are there other ways a user will interact with this
feature?
* Does this change have an impact on python-solumclient? What does the user
interface there look like?
Performance Impact
------------------
Describe any potential performance impact on the system, for example
how often will new code be called, and is there a major change to the calling
pattern of existing code.
Examples of things to consider here include:
* A periodic task might look like a small addition but if it calls conductor or
another service the load is multiplied by the number of nodes in the system.
* A small change in a utility function or a commonly used decorator can have a
large impact on performance.
* Calls which result in database queries (whether direct or via conductor)
can have a profound impact on performance when called in critical sections of
the code.
* Will the change include any locking, and if so what considerations are there
on holding the lock?
Other deployer impact
---------------------
Discuss things that will affect how you deploy and configure OpenStack
that have not already been mentioned, such as:
* What config options are being added? Should they be more generic than
proposed (for example a flag that other hypervisor drivers might want to
implement as well)? Are the default values ones which will work well in
real deployments?
* Is this a change that takes immediate effect after it's merged, or is it
something that has to be explicitly enabled?
* If this change is a new binary, how would it be deployed?
* Please state anything that those doing continuous deployment, or those
upgrading from the previous release, need to be aware of. Also describe
any plans to deprecate configuration values or features. For example, if we
change the directory name that instances are stored in, how do we handle
instance directories created before the change landed? Do we move them? Do
we have a special case in the code? Do we assume that the operator will
recreate all the instances in their cloud?
Developer impact
----------------
Discuss things that will affect other developers working on OpenStack,
such as:
* If the blueprint proposes a change to the driver API, discussion of how
other hypervisors would implement the feature is required.
Implementation
==============
Assignee(s)
-----------
Who is leading the writing of the code? Or is this a blueprint where you're
throwing it out there to see who picks it up?
If more than one person is working on the implementation, please designate the
primary author and contact.
Primary assignee:
<launchpad-id or None>
Other contributors:
<launchpad-id or None>
Work Items
----------
Work items or tasks -- break the feature up into the things that need to be
done to implement it. Those parts might end up being done by different people,
but we're mostly trying to understand the timeline for implementation.
Dependencies
============
* Include specific references to specs and/or blueprints in solum, or in other
projects, that this one either depends on or is related to.
* If this requires functionality of another project that is not currently used
by Solum (such as the glance v2 API when we previously only required v1),
document that fact.
* Does this feature require any new library dependencies or code otherwise not
included in OpenStack? Or does it depend on a specific version of library?
Testing
=======
Please discuss how the change will be tested. We especially want to know what
tempest tests will be added. It is assumed that unit test coverage will be
added so that doesn't need to be mentioned explicitly, but discussion of why
you think unit tests are sufficient and we don't need to add more tempest
tests would need to be included.
Is this untestable in gate given current limitations (specific hardware /
software configurations available)? If so, are there mitigation plans (3rd
party testing, gate enhancements, etc).
Documentation Impact
====================
What is the impact on the docs team of this change? Some changes might require
donating resources to the docs team to have the documentation updated. Don't
repeat details discussed above, but please reference them here.
References
==========
Please add any useful references here. You are not required to have any
reference. Moreover, this specification should still make sense when your
references are unavailable. Examples of what you could include are:
* Links to mailing list or IRC discussions
* Links to notes from a summit session
* Links to relevant research, if appropriate
* Related specifications as appropriate (e.g. if it's an EC2 thing, link the
EC2 docs)
* Anything else you feel it is worthwhile to refer to
@ -1,103 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import glob
import re
import docutils.core
import testtools
class TestTitles(testtools.TestCase):
def _get_title(self, section_tree):
section = {
'subtitles': [],
}
for node in section_tree:
if node.tagname == 'title':
section['name'] = node.rawsource
elif node.tagname == 'section':
subsection = self._get_title(node)
section['subtitles'].append(subsection['name'])
return section
def _get_titles(self, spec):
titles = {}
for node in spec:
if node.tagname == 'section':
section = self._get_title(node)
titles[section['name']] = section['subtitles']
return titles
def _check_titles(self, fname, titles):
expected_titles = ('Problem description', 'Proposed change',
'Implementation', 'Dependencies',
'Testing', 'Documentation Impact',
'References')
self.assertEqual(
sorted(expected_titles),
sorted(titles.keys()),
"Expected titles not found in document %s" % fname)
proposed = 'Proposed change'
self.assertIn('Alternatives', titles[proposed])
self.assertIn('Data model impact', titles[proposed])
self.assertIn('REST API impact', titles[proposed])
self.assertIn('Security impact', titles[proposed])
self.assertIn('Notifications impact', titles[proposed])
self.assertIn('Other end user impact', titles[proposed])
self.assertIn('Performance Impact', titles[proposed])
self.assertIn('Other deployer impact', titles[proposed])
self.assertIn('Developer impact', titles[proposed])
impl = 'Implementation'
self.assertIn('Assignee(s)', titles[impl])
self.assertIn('Work Items', titles[impl])
def _check_lines_wrapping(self, tpl, raw):
for i, line in enumerate(raw.split("\n")):
if "http://" in line or "https://" in line:
continue
self.assertTrue(
len(line) < 80,
msg="%s:%d: Line limited to a maximum of 79 characters." %
(tpl, i+1))
def _check_no_cr(self, tpl, raw):
matches = re.findall('\r', raw)
self.assertEqual(
len(matches), 0,
"Found %s literal carriage returns in file %s" %
(len(matches), tpl))
def _check_trailing_spaces(self, tpl, raw):
for i, line in enumerate(raw.split("\n")):
trailing_spaces = re.findall(" +$", line)
self.assertEqual(len(trailing_spaces), 0,
"Found trailing spaces on line %s of %s" % (i+1, tpl))
def test_template(self):
files = ['specs/template.rst'] + glob.glob('specs/*/*')
for filename in files:
self.assertTrue(filename.endswith(".rst"),
"spec's file must uses 'rst' extension.")
with open(filename) as f:
data = f.read()
spec = docutils.core.publish_doctree(data)
titles = self._get_titles(spec)
self._check_titles(filename, titles)
self._check_lines_wrapping(filename, data)
self._check_no_cr(filename, data)
self._check_trailing_spaces(filename, data)
tox.ini
@ -1,18 +0,0 @@
[tox]
minversion = 3.1.1
envlist = docs,pep8
skipsdist = True
ignore_basepython_conflict = True
[testenv]
basepython = python3
usedevelop = True
setenv = VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'
[testenv:venv]
commands = {posargs}
[testenv:docs]
commands = sphinx-build -W -b html doc/source doc/build/html