Retire repository

The Fuel (openstack namespace) and fuel-ccp (x namespace)
repositories are unused and ready to be retired.

This change removes all content from the repository and adds the usual
README file pointing out that the repository is retired, following the
process from
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project
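
In practice the content-removal step of that process is a short
sequence of git commands; a sketch only (the infra manual above is
authoritative, and the README text appears in the diff below):

    git checkout master
    git rm -r '*'          # remove all content in a single commit
    # write the retirement README described below, then:
    git add README.rst
    git commit             # include the Depends-On: footer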

See also
http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011647.html

Depends-On: https://review.opendev.org/699362
Change-Id: I11b06516aa9d24b8defbb85d504ee5a90d10f2a6
Andreas Jaeger 2019-12-18 09:40:42 +01:00
parent 65f170b4f7
commit e05275f9f9
443 changed files with 10 additions and 33715 deletions

.gitignore (4 deletions)

@@ -1,4 +0,0 @@
plantuml.jar
/_build
/.tox

Makefile (159 deletions)

@@ -1,159 +0,0 @@
# Makefile for Sphinx documentation
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf pdf text man texinfo info gettext changes linkcheck doctest

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  pdf        to make PDF using rst2pdf"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	-rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html -W $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/fuel.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/fuel.qhc"

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/fuel"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/fuel"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

pdf:
	$(SPHINXBUILD) -b pdf $(ALLSPHINXOPTS) $(BUILDDIR)/pdf
	@echo
	@echo "Build finished; the PDF file is in $(BUILDDIR)/pdf."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

README.rst (new file, 10 additions)

@@ -0,0 +1,10 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.
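
As a usage sketch of that instruction (the clone URL is illustrative
only; substitute the actual repository, e.g. fuel or fuel-ccp):

    git clone https://opendev.org/openstack/fuel   # hypothetical URL
    cd fuel
    git checkout HEAD^1   # the last commit before retirement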

(13 image files deleted: 11 binary diffs not shown by the viewer,
2 suppressed because of overlong lines)

@@ -1,50 +0,0 @@
#embeddedFonts: [ [ "PT Sans" ], [ "PT Mono" ], [ "PT Serif" ] ]
#   ["PT_Sans-CaptionBold.ttf", "PT_Sans-Bold.ttf", PT_Sans-Italic.ttf, PTF75F.ttf, PT_Sans-BoldItalic.ttf,
#    PT_Sans-Narrow.ttf, PTF76F.ttf, PT_Sans-Bold_0.ttf, PT_Sans-NarrowBold.ttf,
#    PTZ55F.ttf, PT_Sans-Caption.ttf, PT_Sans-Regular.ttf]
fontsAlias:
  stdBold: PT Sans Bold
  stdBoldItalic: PT Sans Bold Italic
  stdFont: PT Sans
  stdItalic: PT Sans Italic
  stdMono: PT Mono
  stdMonoBold: PT Mono Bold
  stdMonoBoldItalic: PT Mono Bold
  stdMonoItalic: PT Mono
  stdSans: PT Sans
  stdSansBold: PT Sans Bold
  stdSansBoldItalic: PT Sans BoldItalic
  stdSansItalic: PT Sans Italic
  stdSerif: PT Serif
pageSetup:
  firstTemplate: coverPage
#  size: LETTER
pageTemplates:
  coverPage:
    frames: [[0cm, 0cm, 100%, 100%]]
    background: _images/title-page.png
    showHeader: false
    showFooter: false
  oneColumn:
    frames: [[0cm, 0cm, 100%, 100%]]
    showHeader: true
    showFooter: true
styles:
  seealso:
    backColor: transparent
    borderColor: transparent
    parent: admonition
  seealso-heading:
    parent: heading
    fontName: PT Sans
    fontSize: 150%
    keepWithNext: true
    spaceBefore: 0
    spaceAfter: 10
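
For context, a stylesheet like this one is consumed by rst2pdf, which
the Makefile's pdf target drives through Sphinx; a standalone
invocation would look roughly like this, with hypothetical file names:

    rst2pdf index.rst -s pdf.style -o output.pdf   # names illustrative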

@@ -1,20 +0,0 @@
<!-- Bootstrap CSS -->
<link href="{{pathto('_static/css/bootstrap.min.css', 1)}}" rel="stylesheet">
<!-- Pygments CSS -->
<link href="{{pathto('_static/css/native.css', 1)}}" rel="stylesheet">
<!-- Fonts -->
<link href="http://maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css" rel="stylesheet">
<link href='http://fonts.googleapis.com/css?family=Open+Sans:300,400,700' rel='stylesheet' type='text/css'>
<!-- Custom CSS -->
<link href="{{pathto('_static/css/combined.css', 1)}}" rel="stylesheet">
<link href="{{pathto('_static/css/styles.css', 1)}}" rel="stylesheet">
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
<![endif]-->

@@ -1,56 +0,0 @@
<footer>
<div class="container">
<div class="row footer-links">
<div class="col-lg-2 col-sm-2">
<h3>OpenStack</h3>
<ul>
<li><a href="http://openstack.org/projects/">Projects</a></li>
<li><a href="http://openstack.org/projects/openstack-security/">OpenStack Security</a></li>
<li><a href="http://openstack.org/projects/openstack-faq/">Common Questions</a></li>
<li><a href="http://openstack.org/blog/">Blog</a></li>
<li><a href="http://openstack.org/news/">News</a></li>
</ul>
</div>
<div class="col-lg-2 col-sm-2">
<h3>Community</h3>
<ul>
<li><a href="http://openstack.org/community/">User Groups</a></li>
<li><a href="http://openstack.org/community/events/">Events</a></li>
<li><a href="http://openstack.org/community/jobs/">Jobs</a></li>
<li><a href="http://openstack.org/foundation/companies/">Companies</a></li>
<li><a href="http://docs.openstack.org/infra/manual/developers.html">Contribute</a></li>
</ul>
</div>
<div class="col-lg-2 col-sm-2">
<h3>Documentation</h3>
<ul>
<li><a href="http://docs.openstack.org">OpenStack Manuals</a></li>
<li><a href="http://openstack.org/software/start/">Getting Started</a></li>
<li><a href="http://developer.openstack.org">API Documentation</a></li>
<li><a href="https://wiki.openstack.org">Wiki</a></li>
</ul>
</div>
<div class="col-lg-2 col-sm-2">
<h3>Branding & Legal</h3>
<ul>
<li><a href="http://openstack.org/brand/">Logos & Guidelines</a></li>
<li><a href="http://openstack.org/brand/openstack-trademark-policy/">Trademark Policy</a></li>
<li><a href="http://openstack.org/privacy/">Privacy Policy</a></li>
<li><a href="https://wiki.openstack.org/wiki/How_To_Contribute#Contributor_License_Agreement">OpenStack CLA</a></li>
</ul>
</div>
<div class="col-lg-4 col-sm-4">
<h3>Stay In Touch</h3>
<a href="https://twitter.com/OpenStack" target="_blank" class="social-icons footer-twitter"></a>
<a href="https://www.facebook.com/openstack" target="_blank" class="social-icons footer-facebook"></a>
<a href="https://www.linkedin.com/company/openstack" target="_blank" class="social-icons footer-linkedin"></a>
<a href="https://www.youtube.com/user/OpenStackFoundation" target="_blank" class="social-icons footer-youtube"></a>
<p class="fine-print">
The OpenStack project is provided under the
<a href="http://www.apache.org/licenses/LICENSE-2.0">Apache 2.0 license</a>. Openstack.org is powered by
<a href="http://rackspace.com" target="_blank">Rackspace Cloud Computing</a>.
</p>
</div>
</div>
</div>
</footer>

@@ -1,107 +0,0 @@
<nav class="navbar navbar-default inner" role="navigation">
<div class="container">
<div class="navbar-header">
<button class="navbar-toggle" data-target="#bs-example-navbar-collapse-1" data-toggle="collapse" type="button">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<div class="brand-wrapper">
<a class="navbar-brand" href="/"></a>
</div>
<div class="search-icon show"><i class="fa fa-search"></i> Search</div></div>
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<div class="search-container tiny">
<div id="gcse">
<script type="text/javascript">
(function() {
var cx = '000108871792296872333:noj9nikm74i';
var gcse = document.createElement('script');
gcse.type = 'text/javascript';
gcse.async = true;
gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +
'//www.google.com/cse/cse.js?cx=' + cx;
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(gcse, s);
})();
</script>
<gcse:search gname="standard"></gcse:search>
</div>
<i class="fa fa-times close-search"></i>
</div>
<ul class="nav navbar-nav navbar-main show">
<li>
<div id="gcse-mobile">
<gcse:search gname="mobile"></gcse:search>
</div>
</li>
<li>
<a href="http://www.openstack.org/software/" class="drop" id="dropdownMenuSoftware">Software <i class="fa fa-caret-down"></i></a>
<ul class="dropdown-menu" role="menu" aria-labelledby="dropdownMenuSoftware">
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/software/">Overview</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/software/project-navigator/">Project Navigator</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/software/sample-configs/">Sample Configs</a></li>
<li role="presentation" class="divider"></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/software/start/">Get Started</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/software/roadmap/">Roadmap</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/software/latest-release/">Latest Release</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/software/sourcecode/">Source Code</a></li>
</ul>
</li>
<li>
<a href="http://www.openstack.org/user-stories/" class="drop" id="dropdownMenuUsers">Users <i class="fa fa-caret-down"></i></a>
<ul class="dropdown-menu" role="menu" aria-labelledby="dropdownMenuUsers">
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/user-stories/">Overview</a></li>
<li role="presentation" class="divider"></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/enterprise/">OpenStack in the Enterprise</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/appdev/">Application Developers</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://superuser.openstack.org/">Superuser Magazine</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/surveys/landing">User Survey</a></li>
</ul>
</li>
<li>
<a href="http://www.openstack.org/community/" class="drop" id="dropdownMenuCommunity">Community <i class="fa fa-caret-down"></i></a>
<ul class="dropdown-menu" role="menu" aria-labelledby="dropdownMenuCommunity">
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/community/">Welcome! Start Here</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/foundation/">OpenStack Foundation</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://wiki.openstack.org">OpenStack Wiki</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://groups.openstack.org">User Groups</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/community/speakers/">Speakers Bureau</a></li>
<li role="presentation" class="divider"></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/foundation/companies/">Supporting Companies</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/community/jobs/">Jobs</a></li>
<li role="presentation" class="divider"></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/join/">Join The Community</a></li>
</ul>
</li>
<li>
<a href="http://www.openstack.org/marketplace/">Marketplace</a>
</li>
<li>
<a href="http://www.openstack.org/events/" class="drop" id="dropdownMenuEvents">Events <i class="fa fa-caret-down"></i></a>
<ul class="dropdown-menu" role="menu" aria-labelledby="dropdownMenuEvents">
<li role="presentation"><a role="menuitem" tabindex="-1" href="//www.openstack.org/community/events/">Overview</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="https://www.openstack.org/summit/">The OpenStack Summit</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="//www.openstack.org/community/events/">More OpenStack Events</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/community/events/#openstack_days">OpenStack Days</a></li>
</ul>
</li>
<li>
<a href="http://www.openstack.org/learn/" class="drop" id="dropdownMenuLearn">Learn <i class="fa fa-caret-down"></i></a>
<ul class="dropdown-menu dropdown-hover" role="menu" aria-labelledby="dropdownMenuEvents">
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/marketplace/training/">Training</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="//superuser.openstack.org">Superuser Magazine</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="//ask.openstack.org">Ask a Technical Question</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/news/">News</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/blog/">Blog</a></li>
<li role="presentation"><a role="menuitem" tabindex="-1" href="http://www.openstack.org/summit/tokyo-2015/summit-videos/">Summit Videos</a></li>
</ul>
</li>
<li>
<a href="http://docs.openstack.org/">Docs</a>
</li>
</ul>
</div>
</div>
</nav>

@@ -1,82 +0,0 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html lang="en" xml:lang="en" xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type"/>
{% block header %}{% endblock %}
<title>OpenStack Docs: {{ title }}</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
{{ metatags }}
{% include 'css.html' %}
{%- for cssfile in css_files %}
<link rel="stylesheet" href="{{ pathto(cssfile, 1) }}" type="text/css" />
{%- endfor %}
{# FAVICON #}
{% if favicon %}
<link rel="shortcut icon" href="{{ pathto('_static/favicon.ico') }}"/>
{% endif %}
{% if theme_analytics_tracking_code %}
<!-- Google Analytics -->
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', '{{ theme_analytics_tracking_code }}', 'auto');
ga('send', 'pageview');
</script>
<!-- End Google Analytics -->
{% endif %}
</head>
<body>
{% include 'header.html' %}
<div class="container docs-book-wrapper">
<div class="row">
<div class="col-lg-9 col-md-8 col-sm-8 col-lg-push-3 col-md-push-4 col-sm-push-4">
{% include 'titlerow.html' %}
<div class="row docs-byline">
<div class="docs-updated">updated: {{ last_updated }}</div>
</div>
<div class="row">
<div class="col-lg-12">
<div class="docs-top-contents">
{% include 'localtoc.html' %}
</div>
<div class="docs-body">
{% block body %}{% endblock %}
</div>
</div>
</div>
<div class="docs-actions">
{% if prev %}
<a href="{{ prev.link|e }}"><i class="fa fa-angle-double-left" data-toggle="tooltip" data-placement="top" title="Previous: {{ prev.title }}"></i></a>
{% endif %}
{% if next %}
<a href="{{ next.link|e }}"><i class="fa fa-angle-double-right" data-toggle="tooltip" data-placement="top" title="Next: {{ next.title }}"></i></a>
{% endif %}
<a id="logABugLink3" href="" target="_blank" title="Found an error? Report a bug against this page"><i class="fa fa-bug" data-toggle="tooltip" data-placement="top" title="Report a Bug"></i></a>
</div>
<div class="row docs-byline bottom">
<div class="docs-updated">updated: {{ last_updated }}</div>
</div>
<div class="row">
<div class="col-lg-8 col-md-8 col-sm-8 docs-license">
{% include 'license_cc.html' %}
</div>
<div class="col-lg-4 col-md-4 col-sm-4 docs-actions-wrapper">
<!-- ID buglinkbottom added so that pre-filled doc bugs
are sent to Launchpad projects related to the document -->
<a href="#" id="logABugLink2" class="docs-footer-actions"><i class="fa fa-bug"></i> found an error? report a bug</a>
<a href="http://ask.openstack.org" class="docs-footer-actions"><i class="fa fa-question-circle"></i> questions?</a>
</div>
</div>
</div>
{% include 'sidebartoc.html' %}
</div>
</div>
{% include 'footer.html' %}
{% include 'script_footer.html' %}
</body>
</html>

@@ -1,9 +0,0 @@
<a href="https://creativecommons.org/licenses/by/3.0/">
<img src="{{pathto('_static/images/docs/license.png', 1)}}" alt="Creative Commons Attribution 3.0 License"/>
</a>
<p>
Except where otherwise noted, this document is licensed under
<a href="https://creativecommons.org/licenses/by/3.0/">Creative Commons
Attribution 3.0 License</a>. See all <a href="http://www.openstack.org/legal">
OpenStack Legal Documents</a>.
</p>

@@ -1,4 +0,0 @@
{%- if display_toc %}
<h5><a href="{{ pathto(master_doc) }}">Contents</a></h5>
{{ toc }}
{%- endif %}

@@ -1,29 +0,0 @@
<ul id="Menu1">
<li>
<a href="http://www.openstack.org/" title="Go to the OpenStack Home page">Home</a>
</li>
<li>
<a class="link" href="http://www.openstack.org/software/" title="About OpenStack">About</a>
</li>
<li>
<a class="link" href="http://www.openstack.org/user-stories/" title="Read stories about companies that use OpenStack to get work done.">User Stories</a>
</li>
<li>
<a class="link" href="http://www.openstack.org/community/" title="Go to the OpenStack Community page">Community</a>
</li>
<li>
<a class="link" href="http://www.openstack.org/profile/" title="Edit your OpenStack community profile">Profile</a>
</li>
<li>
<a href="http://www.openstack.org/blog/" title="Go to the OpenStack Blog">Blog</a>
</li>
<li>
<a href="http://wiki.openstack.org/" title="Go to the OpenStack Wiki">Wiki</a>
</li>
<li>
<a href="http://docs.openstack.org/glossary/content/glossary.html" title="See definitions of OpenStack terms">Glossary</a>
</li>
<li>
<a class="current" href="http://docs.openstack.org/" title="Go to the OpenStack Documentation">Documentation</a>
</li>
</ul>

@@ -1,71 +0,0 @@
<!-- jQuery -->
<script type="text/javascript" src="{{pathto('_static/js/jquery-1.11.3.js', 1)}}"></script>
<!-- Bootstrap JavaScript -->
<script type="text/javascript" src="{{pathto('_static/js/bootstrap.min.js', 1)}}"></script>
<!-- The rest of the JS -->
<script type="text/javascript" src="{{pathto('_static/js/navigation.js', 1)}}"></script>
<!-- Docs JS -->
<script type="text/javascript" src="{{pathto('_static/js/docs.js', 1)}}"></script>
<!-- Popovers -->
<script type="text/javascript" src="{{pathto('_static/js/webui-popover.js', 1)}}"></script>
<!-- Javascript for page -->
<script language="JavaScript">
/* build a description of this page including SHA, source location on git repo,
build time and the project's launchpad bug tag. Set the HREF of the bug
buttons */
var lineFeed = "%0A";
var gitURL = "Source: Can't derive source file URL";
/* there have been cases where "pagename" wasn't set; better check for it */
{%- if giturl and pagename %}
/* The URL of the source file on Git is based on the giturl variable
in conf.py, which must be manually initialized to the source file
URL in Git.
"pagename" is a standard sphinx parameter containing the name of
the source file, without extension. */
var sourceFile = "{{ pagename }}" + ".rst";
gitURL = "Source: {{ giturl }}" + "/" + sourceFile;
{%- endif %}
/* gitsha, project and bug_tag rely on variables in conf.py */
var gitSha = "SHA: {{ gitsha }}";
{%- if bug_project %}
var bugProject = "{{ bug_project }}";
{%- else %}
var bugProject = "fuel";
{%- endif %}
var bugTitle = "{{ title }} in {{ project }}";
var fieldTags = "{{ bug_tag }}";
/* "last_updated" is the build date and time. It relies on the
conf.py variable "html_last_updated_fmt", which should include
year/month/day as well as hours and minutes */
var buildstring = "Release: {{ release }} on {{ last_updated }}";
var fieldComment = encodeURI(buildstring) +
lineFeed + encodeURI(gitSha) +
lineFeed + encodeURI(gitURL) ;
logABug(bugTitle, bugProject, fieldComment, fieldTags);
</script>
<!-- Javascript for search boxes (both sidebar and top nav) -->
<script type="text/javascript">
(function() {
var cx = '000108871792296872333:noj9nikm74i';
var gcse = document.createElement('script');
gcse.type = 'text/javascript';
gcse.async = true;
gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +
'//www.google.com/cse/cse.js?cx=' + cx;
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(gcse, s);
})();
</script>

@@ -1,28 +0,0 @@
<script src="http://www.google.com/jsapi" type="text/javascript"></script>
<script type="text/javascript">
//<![CDATA[
google.load('search', '1', {
language: 'en'
});
var _gaq = _gaq ||[];
_gaq.push([ "_setAccount", "UA-17511903-1"]);
function _trackQuery(control, searcher, query) {
var gaQueryParamName = "q";
var loc = document.location;
var url =[
loc.pathname,
loc.search,
loc.search ? '&': '?',
gaQueryParamName == '' ? 'q': encodeURIComponent(gaQueryParamName),
'=',
encodeURIComponent(query)].join('');
_gaq.push([ "_trackPageview", url]);
}
google.setOnLoadCallback(function () {
var customSearchControl = new google.search.CustomSearchControl('011012898598057286222:elxsl505o0o');
customSearchControl.setResultSetSize(google.search.Search.FILTERED_CSE_RESULTSET);
customSearchControl.setSearchStartingCallback(null, _trackQuery);
customSearchControl.draw('cse');
},
true);//]]>
</script>

@@ -1,12 +0,0 @@
<div class="col-lg-3 col-md-4 col-sm-4 col-lg-pull-9 col-md-pull-8 col-sm-pull-8 docs-sidebar">
<div class="btn-group docs-sidebar-releases">
<button onclick="location.href='/'" class="btn docs-sidebar-home" data-toggle="tooltip" data-placement="top" title="Docs Home"><i class="fa fa-arrow-circle-o-left"></i></button>
<div class="docs-sidebar-release-select"></div>
</div>
<div class="docs-sidebar-toc">
<div class="docs-sidebar-section" id="table-of-contents">
<a href="{{ pathto(master_doc) }}" class="docs-sidebar-section-title"><h4>{{ _('Contents') }}</h4></a>
{{ toctree(maxdepth=theme_globaltoc_depth|toint, collapse=False, includehidden=theme_globaltoc_includehidden|tobool) }}
</div>
</div>
</div>

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

@@ -1,69 +0,0 @@
.hll { background-color: #404040 }
.c { color: #999999; font-style: italic } /* Comment */
.err { color: #a61717; background-color: #e3d2d2 } /* Error */
.g { color: #d0d0d0 } /* Generic */
.k { color: #6ab825; font-weight: bold } /* Keyword */
.l { color: #d0d0d0 } /* Literal */
.n { color: #d0d0d0 } /* Name */
.o { color: #d0d0d0 } /* Operator */
.x { color: #d0d0d0 } /* Other */
.p { color: #d0d0d0 } /* Punctuation */
.cm { color: #999999; font-style: italic } /* Comment.Multiline */
.cp { color: #cd2828; font-weight: bold } /* Comment.Preproc */
.c1 { color: #999999; font-style: italic } /* Comment.Single */
.cs { color: #e50808; font-weight: bold; background-color: #520000 } /* Comment.Special */
.gd { color: #d22323 } /* Generic.Deleted */
.ge { color: #d0d0d0; font-style: italic } /* Generic.Emph */
.gr { color: #d22323 } /* Generic.Error */
.gh { color: #ffffff; font-weight: bold } /* Generic.Heading */
.gi { color: #589819 } /* Generic.Inserted */
.go { color: #cccccc } /* Generic.Output */
.gp { color: #aaaaaa } /* Generic.Prompt */
.gs { color: #d0d0d0; font-weight: bold } /* Generic.Strong */
.gu { color: #ffffff; text-decoration: underline } /* Generic.Subheading */
.gt { color: #d22323 } /* Generic.Traceback */
.kc { color: #6ab825; font-weight: bold } /* Keyword.Constant */
.kd { color: #6ab825; font-weight: bold } /* Keyword.Declaration */
.kn { color: #6ab825; font-weight: bold } /* Keyword.Namespace */
.kp { color: #6ab825 } /* Keyword.Pseudo */
.kr { color: #6ab825; font-weight: bold } /* Keyword.Reserved */
.kt { color: #6ab825; font-weight: bold } /* Keyword.Type */
.ld { color: #d0d0d0 } /* Literal.Date */
.m { color: #3677a9 } /* Literal.Number */
.s { color: #ed9d13 } /* Literal.String */
.na { color: #bbbbbb } /* Name.Attribute */
.nb { color: #24909d } /* Name.Builtin */
.nc { color: #447fcf; text-decoration: underline } /* Name.Class */
.no { color: #40ffff } /* Name.Constant */
.nd { color: #ffa500 } /* Name.Decorator */
.ni { color: #d0d0d0 } /* Name.Entity */
.ne { color: #bbbbbb } /* Name.Exception */
.nf { color: #447fcf } /* Name.Function */
.nl { color: #d0d0d0 } /* Name.Label */
.nn { color: #447fcf; text-decoration: underline } /* Name.Namespace */
.nx { color: #d0d0d0 } /* Name.Other */
.py { color: #d0d0d0 } /* Name.Property */
.nt { color: #6ab825; font-weight: bold } /* Name.Tag */
.nv { color: #40ffff } /* Name.Variable */
.ow { color: #6ab825; font-weight: bold } /* Operator.Word */
.w { color: #666666 } /* Text.Whitespace */
.mf { color: #3677a9 } /* Literal.Number.Float */
.mh { color: #3677a9 } /* Literal.Number.Hex */
.mi { color: #3677a9 } /* Literal.Number.Integer */
.mo { color: #3677a9 } /* Literal.Number.Oct */
.sb { color: #ed9d13 } /* Literal.String.Backtick */
.sc { color: #ed9d13 } /* Literal.String.Char */
.sd { color: #ed9d13 } /* Literal.String.Doc */
.s2 { color: #ed9d13 } /* Literal.String.Double */
.se { color: #ed9d13 } /* Literal.String.Escape */
.sh { color: #ed9d13 } /* Literal.String.Heredoc */
.si { color: #ed9d13 } /* Literal.String.Interpol */
.sx { color: #ffa500 } /* Literal.String.Other */
.sr { color: #ed9d13 } /* Literal.String.Regex */
.s1 { color: #ed9d13 } /* Literal.String.Single */
.ss { color: #ed9d13 } /* Literal.String.Symbol */
.bp { color: #24909d } /* Name.Builtin.Pseudo */
.vc { color: #40ffff } /* Name.Variable.Class */
.vg { color: #40ffff } /* Name.Variable.Global */
.vi { color: #40ffff } /* Name.Variable.Instance */
.il { color: #3677a9 } /* Literal.Number.Integer.Long */

File diff suppressed because it is too large

(16 binary image files deleted; diffs not shown by the viewer)

@@ -1,285 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 16.0.4, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
width="136px" height="33px" viewBox="0 0 136 33" enable-background="new 0 0 136 33" xml:space="preserve">
<g>
<path fill="#6D6D6D" d="M51.721,16.175c0,5.531-2.574,7.395-4.764,7.395c-2.574,0-4.645-2.336-4.645-7.365
c0-5.237,2.336-7.427,4.792-7.427C49.649,8.779,51.721,11.175,51.721,16.175z M44.591,16.175c0,2.367,0.413,5.591,2.484,5.591
c1.953,0,2.336-3.372,2.336-5.591c0-2.189-0.384-5.562-2.397-5.562C44.976,10.613,44.591,13.986,44.591,16.175z"/>
<path fill="#6D6D6D" d="M54.235,13.484c0-2.19-0.09-3.462-0.118-4.468h2.041l0.118,1.834h0.059
c0.768-1.538,1.893-2.071,3.018-2.071c2.483,0,4.143,2.663,4.143,7.367c0,5.295-2.19,7.425-4.439,7.425
c-1.331,0-2.101-0.858-2.484-1.655h-0.058v7.1h-2.279V13.484z M56.514,18.069c0,0.443,0,0.857,0.087,1.183
c0.443,2.041,1.479,2.426,2.101,2.426c1.894,0,2.484-2.604,2.484-5.502c0-2.958-0.71-5.444-2.514-5.444
c-1.035,0-1.953,1.302-2.101,2.605c-0.058,0.354-0.058,0.708-0.058,1.093V18.069z"/>
<path fill="#6D6D6D" d="M67.667,16.383c0.027,4.378,1.745,5.323,3.401,5.323c0.975,0,1.805-0.235,2.367-0.562l0.354,1.687
c-0.798,0.443-2.011,0.682-3.136,0.682c-3.431,0-5.208-2.812-5.208-7.19c0-4.645,1.953-7.544,4.821-7.544
c2.931,0,4.085,3.077,4.085,6.332c0,0.532,0,0.916-0.03,1.272H67.667z M72.162,14.696c0.061-2.869-1.037-4.141-2.13-4.141
c-1.477,0-2.247,2.188-2.338,4.141H72.162z"/>
<path fill="#6D6D6D" d="M76.805,12.714c0-1.687-0.087-2.544-0.118-3.698h1.985l0.117,1.715h0.059
c0.623-1.153,1.774-1.952,3.285-1.952c1.981,0,3.461,1.48,3.461,4.941v9.615h-2.278v-9.23c0-1.687-0.326-3.402-1.982-3.402
c-0.947,0-1.862,0.797-2.16,2.336c-0.058,0.354-0.089,0.8-0.089,1.272v9.024h-2.279V12.714z"/>
<path fill="#CE3427" d="M88.403,20.967c0.503,0.326,1.392,0.739,2.309,0.739c1.3,0,2.101-0.797,2.101-2.07
c0-1.094-0.387-1.834-1.836-2.81c-1.862-1.212-2.869-2.426-2.869-4.083c0-2.307,1.718-3.964,3.937-3.964
c1.123,0,1.98,0.384,2.573,0.74l-0.623,1.716c-0.532-0.355-1.152-0.622-1.893-0.622c-1.243,0-1.863,0.887-1.863,1.805
c0,0.976,0.354,1.509,1.773,2.484c1.658,1.065,2.958,2.367,2.958,4.35c0,2.869-1.952,4.261-4.287,4.261
c-1.067,0-2.221-0.325-2.842-0.829L88.403,20.967z"/>
<path fill="#CE3427" d="M100.117,5.23v3.786h2.751v1.746h-2.751v8.491c0,1.863,0.682,2.367,1.508,2.367
c0.356,0,0.653-0.03,0.918-0.09l0.089,1.745c-0.414,0.147-0.946,0.238-1.685,0.238c-0.89,0-1.687-0.238-2.249-0.859
c-0.562-0.651-0.859-1.627-0.859-3.607v-8.285h-1.744V9.016h1.744V5.88L100.117,5.23z"/>
<path fill="#CE3427" d="M110.53,23.335l-0.146-1.51h-0.091c-0.62,1.095-1.626,1.745-2.838,1.745c-1.924,0-3.404-1.626-3.404-4.082
c0-3.581,2.871-5.177,6.095-5.206v-0.443c0-1.924-0.473-3.255-2.278-3.255c-0.887,0-1.687,0.296-2.366,0.74l-0.503-1.598
c0.593-0.443,1.924-0.947,3.344-0.947c2.868,0,4.112,1.894,4.112,5.118v6.183c0,1.097,0,2.337,0.148,3.256H110.53z M110.204,15.939
c-1.181,0-3.935,0.207-3.935,3.313c0,1.863,0.947,2.544,1.747,2.544c1.007,0,1.862-0.739,2.129-2.16
c0.059-0.266,0.059-0.562,0.059-0.798V15.939z"/>
<path fill="#CE3427" d="M122.631,22.979c-0.505,0.296-1.361,0.534-2.367,0.534c-3.137,0-5.236-2.516-5.236-7.249
c0-4.112,2.069-7.455,5.649-7.455c0.77,0,1.599,0.207,2.012,0.444l-0.443,1.863c-0.295-0.147-0.888-0.385-1.626-0.385
c-2.279,0-3.312,2.722-3.312,5.533c0,3.344,1.271,5.355,3.37,5.355c0.623,0,1.124-0.148,1.656-0.385L122.631,22.979z"/>
<path fill="#CE3427" d="M126.831,15.673h0.058c0.268-0.562,0.505-1.125,0.769-1.598l2.842-5.06h2.456l-3.787,6.036l4.112,8.283
h-2.545l-3.106-6.746l-0.798,1.332v5.414h-2.28V2.863h2.28V15.673z"/>
</g>
<g>
<linearGradient id="SVGID_1_" gradientUnits="userSpaceOnUse" x1="135.2939" y1="-289.2324" x2="150.7576" y2="-290.3611" gradientTransform="matrix(0.711 -0.7032 -0.7032 -0.711 -294.0409 -81.7453)">
<stop offset="0" style="stop-color:#000000;stop-opacity:0"/>
<stop offset="1" style="stop-color:#010201"/>
</linearGradient>
<path fill="url(#SVGID_1_)" d="M3.407,22.683c-0.546-0.552-0.542-0.74,0.011-1.286l6.262,0.35c0.552-0.546,1.489,0.218,2.035,0.771
l3.739,2.75c0.547,0.553,2.563,1.941,2.011,2.487l-3.58,2.169c-0.552,0.547-2.675,0.635-3.221,0.083L3.407,22.683z"/>
<linearGradient id="SVGID_2_" gradientUnits="userSpaceOnUse" x1="295.373" y1="-347.7021" x2="296.8124" y2="-346.4509" gradientTransform="matrix(1 0 0 -1 -268.5 -325.5)">
<stop offset="0" style="stop-color:#C33A28"/>
<stop offset="0.8825" style="stop-color:#8A241C"/>
</linearGradient>
<path fill="url(#SVGID_2_)" d="M25.441,20.554l3.321,3.294c0.795-0.128,0.926-0.529,0.833-1.422l-3.31-3.285l0,0
C26.136,20.271,26.181,20.519,25.441,20.554L25.441,20.554z"/>
<linearGradient id="SVGID_3_" gradientUnits="userSpaceOnUse" x1="277.2236" y1="-332.2896" x2="278.9883" y2="-329.2331" gradientTransform="matrix(1 0 0 -1 -268.5 -325.5)">
<stop offset="0" style="stop-color:#8A241C"/>
<stop offset="1" style="stop-color:#C33A28"/>
</linearGradient>
<path fill="url(#SVGID_3_)" d="M10.542,7.84L7.219,4.55c0-0.742,0.532-1.597,1.662-1.744l3.285,3.311l0,0
C11.036,6.264,10.577,7.1,10.542,7.84L10.542,7.84z"/>
<g>
<g>
<g>
<g>
<g>
<g>
<g enable-background="new ">
<g>
<g>
<polygon fill="#8A241C" points="29.603,21.781 26.278,18.487 31.301,18.046 34.624,21.341 "/>
</g>
<g>
<polygon fill="#8A241C" points="29.603,21.781 26.278,18.487 31.301,18.046 34.624,21.341 "/>
</g>
</g>
<g>
<polygon fill="#8A241C" points="29.601,22.939 26.278,19.646 26.278,18.487 29.603,21.781 "/>
</g>
<g>
<polygon fill="#C33A28" points="16.385,24.936 13.065,21.641 25.441,20.554 28.762,23.848 "/>
</g>
<g enable-background="new ">
<g>
<defs>
<path id="SVGID_4_" d="M15.777,24.723c-0.052-0.05-0.118-0.114-0.169-0.166c-1.05-1.043-2.104-2.084-3.156-3.129
c0.152,0.151,0.368,0.233,0.612,0.213l3.32,3.295C16.143,24.957,15.929,24.87,15.777,24.723z"/>
</defs>
<clipPath id="SVGID_5_">
<use xlink:href="#SVGID_4_" overflow="visible"/>
</clipPath>
<g clip-path="url(#SVGID_5_)" enable-background="new ">
<path fill="#942820" d="M15.777,24.723l-3.325-3.295c0.011,0.013,0.023,0.021,0.034,0.03l3.323,3.296
C15.796,24.741,15.788,24.733,15.777,24.723"/>
<path fill="#972920" d="M15.81,24.754l-3.323-3.296c0.017,0.018,0.036,0.031,0.053,0.045l3.322,3.293
C15.844,24.782,15.827,24.766,15.81,24.754"/>
<path fill="#9A2C21" d="M15.862,24.796l-3.322-3.293c0.015,0.012,0.032,0.021,0.049,0.031l3.323,3.293
C15.897,24.817,15.88,24.807,15.862,24.796"/>
<path fill="#9E2D21" d="M15.912,24.827l-3.323-3.293c0.017,0.012,0.034,0.021,0.051,0.027l3.323,3.294
C15.946,24.849,15.929,24.839,15.912,24.827"/>
<path fill="#A22C23" d="M15.963,24.855l-3.323-3.294c0.016,0.009,0.032,0.017,0.047,0.026l3.322,3.29
C15.996,24.87,15.98,24.863,15.963,24.855"/>
<path fill="#A72D24" d="M16.01,24.878l-3.322-3.29c0.017,0.003,0.036,0.011,0.051,0.017l3.321,3.293
C16.044,24.89,16.028,24.886,16.01,24.878"/>
<path fill="#AB3125" d="M16.06,24.897l-3.321-3.293c0.017,0.005,0.034,0.012,0.051,0.016l3.323,3.292
C16.094,24.908,16.077,24.901,16.06,24.897"/>
<path fill="#AE3125" d="M16.113,24.912l-3.323-3.292c0.015,0.005,0.034,0.008,0.053,0.012l3.32,3.291
C16.146,24.92,16.128,24.915,16.113,24.912"/>
<path fill="#B13126" d="M16.164,24.923l-3.32-3.291c0.017,0.002,0.034,0.005,0.054,0.007l3.323,3.292
C16.204,24.931,16.181,24.93,16.164,24.923"/>
<path fill="#B63327" d="M16.221,24.931l-3.323-3.292c0.02,0.004,0.041,0.004,0.06,0.005l3.323,3.294
C16.258,24.938,16.242,24.936,16.221,24.931"/>
<path fill="#B63627" d="M16.28,24.938l-3.323-3.294c0.021,0.001,0.045,0.001,0.066,0.001l3.324,3.293
C16.325,24.938,16.304,24.938,16.28,24.938"/>
<path fill="#BB3727" d="M16.349,24.938l-3.324-3.293c0.013-0.002,0.027-0.002,0.041-0.004l3.32,3.295
C16.372,24.938,16.359,24.938,16.349,24.938"/>
</g>
</g>
</g>
<g enable-background="new ">
<g>
<defs>
<path id="SVGID_6_" d="M10.944,29.924l-3.323-3.293c-0.265-0.261-0.425-0.629-0.425-1.044l3.324,3.289
C10.521,29.296,10.679,29.658,10.944,29.924z"/>
</defs>
<clipPath id="SVGID_7_">
<use xlink:href="#SVGID_6_" overflow="visible"/>
</clipPath>
<g clip-path="url(#SVGID_7_)" enable-background="new ">
<path fill="#82211A" d="M10.944,29.924l-3.323-3.293c-0.237-0.235-0.392-0.557-0.419-0.921l3.322,3.294
C10.553,29.369,10.707,29.688,10.944,29.924"/>
<path fill="#87241B" d="M10.524,29.004L7.202,25.71c-0.004-0.039-0.004-0.083-0.006-0.123l3.324,3.289
C10.521,28.92,10.523,28.96,10.524,29.004"/>
</g>
</g>
</g>
<g>
<path fill="#D83E27" d="M34.624,21.341l-0.008,5.421c-0.002,0.879-0.716,1.656-1.601,1.734l-20.903,1.836
c-0.881,0.078-1.596-0.576-1.592-1.456l0.011-5.421l5.024-0.443l-0.002,1.162c0,0.461,0.372,0.801,0.833,0.762l12.26-1.088
c0.459-0.042,0.952-0.445,0.956-0.908l0.001-1.158L34.624,21.341z"/>
</g>
<g>
<g>
<polygon fill="#8A241C" points="10.531,23.455 7.209,20.163 12.234,19.72 15.555,23.012 "/>
</g>
<g>
<polygon fill="#8A241C" points="10.531,23.455 7.209,20.163 12.234,19.72 15.555,23.012 "/>
</g>
</g>
<g>
<polygon fill="#8A241C" points="10.521,28.876 7.196,25.587 7.209,20.163 10.531,23.455 "/>
</g>
</g>
<g enable-background="new ">
<g>
<g>
<polygon fill="#E6584F" points="34.596,12.192 34.626,20.171 29.603,20.612 29.573,12.632 "/>
</g>
<g>
<polygon fill="#E2493C" points="34.596,12.192 34.626,20.171 29.603,20.612 29.573,12.632 "/>
</g>
</g>
<g>
<polygon fill="#8A241C" points="29.603,20.612 26.284,17.318 26.25,9.337 29.573,12.632 "/>
</g>
<g>
<g>
<polygon fill="#8A241C" points="29.573,12.632 26.25,9.337 31.274,8.898 34.596,12.192 "/>
</g>
<g>
<polygon fill="#8A241C" points="29.573,12.632 26.25,9.337 31.274,8.898 34.596,12.192 "/>
</g>
</g>
</g>
<g enable-background="new ">
<g>
<g>
<polygon fill="#E6584F" points="15.526,13.863 15.559,21.845 10.533,22.285 10.499,14.306 "/>
</g>
<g>
<polygon fill="#E2493C" points="15.526,13.863 15.559,21.845 10.533,22.285 10.499,14.306 "/>
</g>
</g>
<g>
<polygon fill="#8A241C" points="10.533,22.285 7.21,18.992 7.179,11.015 10.499,14.306 "/>
</g>
<g>
<polygon fill="#8A241C" points="10.499,14.306 7.179,11.015 12.204,10.573 15.526,13.863 "/>
</g>
</g>
</g>
<g>
<g enable-background="new ">
<g>
<polygon fill="#8A241C" points="29.598,11.441 26.273,8.151 26.278,7.148 29.601,10.441 "/>
</g>
<g enable-background="new ">
<g>
<defs>
<path id="SVGID_8_" d="M26.055,6.599c1.108,1.098,2.214,2.196,3.32,3.294c0.14,0.137,0.225,0.329,0.225,0.548
l-3.323-3.293C26.278,6.928,26.192,6.737,26.055,6.599z"/>
</defs>
<clipPath id="SVGID_9_">
<use xlink:href="#SVGID_8_" overflow="visible"/>
</clipPath>
<g clip-path="url(#SVGID_9_)" enable-background="new ">
<path fill="#87241B" d="M29.601,10.441l-3.323-3.293c0-0.023-0.004-0.042-0.007-0.065l3.325,3.293
C29.598,10.396,29.601,10.421,29.601,10.441"/>
<path fill="#82211A" d="M29.596,10.376l-3.325-3.293c-0.012-0.193-0.091-0.362-0.216-0.484l3.32,3.293
C29.5,10.016,29.581,10.184,29.596,10.376"/>
</g>
</g>
</g>
<g enable-background="new ">
<g>
<g>
<path id="SVGID_16_" fill="#C33A28" d="M30.891,1.404c0.223,0.216,0.445,0.437,0.663,0.655
c0.888,0.879,1.774,1.757,2.66,2.634c-0.292-0.287-0.706-0.445-1.168-0.409l-3.323-3.291
C30.185,0.953,30.602,1.112,30.891,1.404z"/>
</g>
<g>
<defs>
<path id="SVGID_10_" d="M30.891,1.404c0.223,0.216,0.445,0.437,0.663,0.655c0.888,0.879,1.774,1.757,2.66,2.634
c-0.292-0.287-0.706-0.445-1.168-0.409l-3.323-3.291C30.185,0.953,30.602,1.112,30.891,1.404z"/>
</defs>
<clipPath id="SVGID_11_">
<use xlink:href="#SVGID_10_" overflow="visible"/>
</clipPath>
<g clip-path="url(#SVGID_11_)">
<path fill="#C33A28" d="M29.722,0.993l3.323,3.292c0.021,0,0.044-0.003,0.068-0.003L29.791,0.99
C29.769,0.99,29.746,0.991,29.722,0.993z"/>
<path fill="#C33A28" d="M29.791,0.99l3.323,3.292c0.042-0.003,0.085-0.003,0.126-0.001l-3.32-3.291
C29.878,0.986,29.833,0.986,29.791,0.99z"/>
<path fill="#C33A28" d="M29.919,0.99l3.32,3.291c0.041,0.001,0.081,0.004,0.118,0.01l-3.323-3.293
C29.996,0.992,29.96,0.99,29.919,0.99z"/>
<path fill="#C33A28" d="M30.035,0.998l3.323,3.293c0.036,0.004,0.072,0.009,0.106,0.016l-3.323-3.295
C30.107,1.007,30.071,1,30.035,0.998z"/>
<path fill="#C33A28" d="M30.142,1.012l3.323,3.295c0.034,0.003,0.071,0.012,0.103,0.021l-3.323-3.29
C30.21,1.028,30.178,1.021,30.142,1.012z"/>
<path fill="#C33A28" d="M30.245,1.037l3.323,3.29c0.031,0.011,0.063,0.019,0.096,0.03l-3.322-3.292
C30.308,1.055,30.278,1.043,30.245,1.037z"/>
<path fill="#C33A28" d="M30.341,1.065l3.322,3.292c0.033,0.01,0.064,0.022,0.097,0.036l-3.322-3.292
C30.405,1.088,30.373,1.075,30.341,1.065z"/>
<path fill="#C33A28" d="M30.438,1.101l3.322,3.292c0.034,0.014,0.062,0.027,0.095,0.045l-3.323-3.295
C30.501,1.129,30.469,1.112,30.438,1.101z"/>
<path fill="#C33A28" d="M30.531,1.143l3.323,3.295c0.032,0.015,0.063,0.033,0.096,0.051l-3.325-3.294
C30.595,1.18,30.563,1.162,30.531,1.143z"/>
<path fill="#C33A28" d="M30.625,1.194l3.325,3.295c0.032,0.02,0.064,0.042,0.097,0.064l-3.324-3.292
C30.694,1.238,30.659,1.217,30.625,1.194z"/>
<path fill="#C33A28" d="M30.723,1.262l3.324,3.291c0.036,0.027,0.068,0.052,0.1,0.078l-3.32-3.289
C30.792,1.312,30.761,1.284,30.723,1.262z"/>
<path fill="#C33A28" d="M30.891,1.404c-0.021-0.024-0.043-0.044-0.064-0.061l3.32,3.289
c0.026,0.024,0.045,0.042,0.067,0.062L30.891,1.404z"/>
</g>
</g>
</g>
</g>
<g>
<polygon fill="#8A241C" points="10.527,13.104 7.204,9.811 7.219,4.55 10.542,7.84 "/>
</g>
<g>
<polygon fill="#C33A28" points="12.14,6.109 8.817,2.816 29.722,0.993 33.045,4.285 "/>
</g>
<g>
<g>
<path fill="#E6584F" d="M12.14,6.109l20.905-1.825c0.881-0.073,1.594,0.575,1.591,1.459l-0.016,5.261l-5.022,0.438
l0.002-1c0-0.459-0.371-0.8-0.833-0.76l-12.379,1.077c-0.458,0.041-0.834,0.447-0.834,0.908l-0.002,0.998l-5.025,0.439
l0.014-5.263C10.544,6.961,11.261,6.187,12.14,6.109z"/>
</g>
<g>
<path fill="#E6584F" d="M12.14,6.109l20.905-1.825c0.881-0.073,1.594,0.575,1.591,1.459l-0.016,5.261l-5.022,0.438
l0.002-1c0-0.459-0.371-0.8-0.833-0.76l-12.379,1.077c-0.458,0.041-0.834,0.447-0.834,0.908l-0.002,0.998l-5.025,0.439
l0.014-5.263C10.544,6.961,11.261,6.187,12.14,6.109z"/>
</g>
</g>
</g>
</g>
</g>
</g>
</g>
</g>
</g>
</g>
</svg>

(the SVG above was 16 KiB; one further 2.5 KiB binary image deleted,
diff not shown)

@@ -1,286 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 16.0.4, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
width="59.37px" height="54.166px" viewBox="0 0 59.37 54.166" enable-background="new 0 0 59.37 54.166" xml:space="preserve">
<g>
<path fill="#6D6D6D" d="M6.49,44.841c0,3.587-1.669,4.795-3.089,4.795c-1.668,0-3.011-1.515-3.011-4.774
c0-3.396,1.515-4.815,3.107-4.815C5.146,40.046,6.49,41.6,6.49,44.841z M1.867,44.841c0,1.535,0.268,3.625,1.611,3.625
c1.266,0,1.515-2.186,1.515-3.625c0-1.419-0.249-3.606-1.554-3.606C2.117,41.234,1.867,43.422,1.867,44.841z"/>
<path fill="#6D6D6D" d="M8.12,43.096c0-1.419-0.058-2.244-0.076-2.896h1.322l0.077,1.189h0.038
c0.499-0.997,1.228-1.343,1.957-1.343c1.611,0,2.686,1.727,2.686,4.776c0,3.433-1.42,4.813-2.878,4.813
c-0.862,0-1.362-0.557-1.611-1.072H9.598v4.603H8.12V43.096z M9.598,46.069c0,0.288,0,0.556,0.057,0.767
c0.288,1.324,0.959,1.573,1.362,1.573c1.228,0,1.611-1.688,1.611-3.568c0-1.917-0.46-3.529-1.63-3.529
c-0.671,0-1.266,0.845-1.362,1.689c-0.038,0.229-0.038,0.459-0.038,0.708V46.069z"/>
<path fill="#6D6D6D" d="M16.828,44.976c0.019,2.84,1.132,3.452,2.206,3.452c0.633,0,1.17-0.153,1.535-0.364l0.23,1.093
c-0.518,0.288-1.305,0.441-2.034,0.441c-2.224,0-3.376-1.822-3.376-4.66c0-3.012,1.266-4.892,3.126-4.892
c1.9,0,2.648,1.995,2.648,4.104c0,0.346,0,0.595-0.019,0.825H16.828z M19.744,43.882c0.039-1.859-0.672-2.685-1.381-2.685
c-0.958,0-1.457,1.418-1.516,2.685H19.744z"/>
<path fill="#6D6D6D" d="M22.754,42.598c0-1.095-0.057-1.649-0.077-2.398h1.287l0.076,1.112h0.038
c0.403-0.748,1.15-1.266,2.13-1.266c1.285,0,2.244,0.96,2.244,3.204v6.232h-1.478v-5.984c0-1.093-0.21-2.206-1.285-2.206
c-0.615,0-1.208,0.518-1.401,1.516c-0.037,0.23-0.057,0.519-0.057,0.824v5.851h-1.478V42.598z"/>
<path fill="#CE3427" d="M30.273,47.948c0.326,0.212,0.902,0.479,1.497,0.479c0.844,0,1.361-0.518,1.361-1.343
c0-0.709-0.25-1.188-1.189-1.821c-1.208-0.786-1.86-1.573-1.86-2.648c0-1.495,1.113-2.569,2.552-2.569
c0.729,0,1.284,0.249,1.669,0.48l-0.404,1.111c-0.346-0.229-0.747-0.403-1.228-0.403c-0.806,0-1.207,0.575-1.207,1.171
c0,0.633,0.229,0.979,1.15,1.611c1.073,0.69,1.917,1.535,1.917,2.819c0,1.861-1.266,2.762-2.781,2.762
c-0.69,0-1.438-0.21-1.841-0.536L30.273,47.948z"/>
<path fill="#CE3427" d="M37.869,37.744v2.455h1.782v1.132h-1.782v5.505c0,1.208,0.441,1.535,0.977,1.535
c0.231,0,0.424-0.021,0.597-0.06l0.057,1.133c-0.268,0.096-0.613,0.153-1.092,0.153c-0.578,0-1.094-0.153-1.459-0.556
c-0.364-0.422-0.557-1.057-0.557-2.34v-5.371H35.26v-1.132h1.132v-2.033L37.869,37.744z"/>
<path fill="#CE3427" d="M44.619,49.482l-0.094-0.978h-0.061c-0.401,0.709-1.054,1.131-1.839,1.131
c-1.248,0-2.207-1.054-2.207-2.646c0-2.322,1.861-3.358,3.952-3.376v-0.288c0-1.247-0.308-2.109-1.479-2.109
c-0.574,0-1.094,0.191-1.533,0.479l-0.327-1.036c0.386-0.288,1.248-0.614,2.169-0.614c1.86,0,2.666,1.229,2.666,3.317v4.01
c0,0.711,0,1.515,0.097,2.109H44.619z M44.409,44.688c-0.766,0-2.553,0.134-2.553,2.147c0,1.208,0.616,1.65,1.134,1.65
c0.653,0,1.207-0.479,1.381-1.401c0.038-0.172,0.038-0.363,0.038-0.518V44.688z"/>
<path fill="#CE3427" d="M52.465,49.253c-0.327,0.191-0.882,0.345-1.534,0.345c-2.033,0-3.395-1.631-3.395-4.699
c0-2.665,1.341-4.833,3.663-4.833c0.498,0,1.037,0.134,1.304,0.287l-0.288,1.209c-0.19-0.096-0.575-0.25-1.055-0.25
c-1.477,0-2.146,1.765-2.146,3.587c0,2.169,0.824,3.473,2.186,3.473c0.403,0,0.729-0.097,1.073-0.25L52.465,49.253z"/>
<path fill="#CE3427" d="M55.189,44.516h0.037c0.173-0.365,0.326-0.729,0.498-1.036l1.843-3.28h1.592l-2.455,3.914l2.666,5.369
h-1.649l-2.014-4.373l-0.518,0.864v3.509H53.71V36.21h1.479V44.516z"/>
</g>
<g>
<linearGradient id="SVGID_1_" gradientUnits="userSpaceOnUse" x1="133.168" y1="-304.7446" x2="150.967" y2="-306.0438" gradientTransform="matrix(0.711 -0.7032 -0.7032 -0.711 -297.5767 -90.3491)">
<stop offset="0" style="stop-color:#000000;stop-opacity:0"/>
<stop offset="1" style="stop-color:#010201"/>
</linearGradient>
<path fill="url(#SVGID_1_)" d="M8.945,25.686c-0.629-0.635-0.624-0.852,0.013-1.481l7.208,0.403
c0.635-0.628,1.713,0.25,2.342,0.888l4.304,3.166c0.629,0.637,2.95,2.234,2.314,2.863l-4.121,2.496
c-0.636,0.63-3.079,0.73-3.708,0.096L8.945,25.686z"/>
<linearGradient id="SVGID_2_" gradientUnits="userSpaceOnUse" x1="304.4561" y1="-328.6318" x2="306.1114" y2="-327.1929" gradientTransform="matrix(1 0 0 -1 -268.5 -303.5)">
<stop offset="0" style="stop-color:#C33A28"/>
<stop offset="0.8825" style="stop-color:#8A241C"/>
</linearGradient>
<path fill="url(#SVGID_2_)" d="M34.308,23.235l3.821,3.792c0.916-0.147,1.066-0.609,0.958-1.636l-3.81-3.782l0,0
C35.108,22.911,35.16,23.195,34.308,23.235L34.308,23.235z"/>
<linearGradient id="SVGID_3_" gradientUnits="userSpaceOnUse" x1="283.5645" y1="-310.8936" x2="285.5962" y2="-307.3745" gradientTransform="matrix(1 0 0 -1 -268.5 -303.5)">
<stop offset="0" style="stop-color:#8A241C"/>
<stop offset="1" style="stop-color:#C33A28"/>
</linearGradient>
<path fill="url(#SVGID_3_)" d="M17.157,8.602l-3.824-3.788c0-0.854,0.612-1.837,1.913-2.007l3.781,3.812l0,0
C17.726,6.788,17.198,7.75,17.157,8.602L17.157,8.602z"/>
<g>
<g>
<g>
<g>
<g>
<g>
<g enable-background="new ">
<g>
<g>
<polygon fill="#8A241C" points="39.098,24.648 35.271,20.857 41.053,20.349 44.878,24.142 "/>
</g>
<g>
<polygon fill="#8A241C" points="39.098,24.648 35.271,20.857 41.053,20.349 44.878,24.142 "/>
</g>
</g>
<g>
<polygon fill="#8A241C" points="39.096,25.981 35.271,22.19 35.271,20.857 39.098,24.648 "/>
</g>
<g>
<polygon fill="#C33A28" points="23.883,28.279 20.062,24.486 34.308,23.235 38.129,27.027 "/>
</g>
<g enable-background="new ">
<g>
<defs>
<path id="SVGID_4_" d="M23.184,28.034c-0.059-0.058-0.135-0.131-0.194-0.191c-1.209-1.201-2.421-2.399-3.632-3.602
c0.175,0.174,0.424,0.269,0.705,0.245l3.821,3.793C23.605,28.305,23.358,28.204,23.184,28.034z"/>
</defs>
<clipPath id="SVGID_5_">
<use xlink:href="#SVGID_4_" overflow="visible"/>
</clipPath>
<g clip-path="url(#SVGID_5_)" enable-background="new ">
<path fill="#942820" d="M23.184,28.034l-3.827-3.792c0.012,0.014,0.027,0.023,0.04,0.034l3.825,3.793
C23.206,28.056,23.196,28.047,23.184,28.034"/>
<path fill="#972920" d="M23.221,28.069l-3.825-3.793c0.02,0.021,0.042,0.037,0.061,0.053l3.824,3.79
C23.26,28.104,23.241,28.084,23.221,28.069"/>
<path fill="#9A2C21" d="M23.282,28.118l-3.824-3.79c0.017,0.013,0.037,0.024,0.057,0.035l3.825,3.791
C23.322,28.144,23.302,28.131,23.282,28.118"/>
<path fill="#9E2D21" d="M23.339,28.154l-3.825-3.791c0.02,0.014,0.04,0.024,0.059,0.032l3.825,3.792
C23.378,28.179,23.358,28.168,23.339,28.154"/>
<path fill="#A22C23" d="M23.398,28.188l-3.825-3.792c0.018,0.01,0.037,0.019,0.055,0.03l3.824,3.788
C23.435,28.204,23.418,28.196,23.398,28.188"/>
<path fill="#A72D24" d="M23.452,28.214l-3.824-3.788c0.02,0.004,0.042,0.013,0.059,0.019l3.823,3.791
C23.491,28.227,23.472,28.222,23.452,28.214"/>
<path fill="#AB3125" d="M23.509,28.235l-3.823-3.791c0.02,0.006,0.04,0.014,0.059,0.018l3.825,3.789
C23.549,28.248,23.529,28.239,23.509,28.235"/>
<path fill="#AE3125" d="M23.57,28.252l-3.825-3.789c0.017,0.006,0.04,0.009,0.062,0.013l3.822,3.789
C23.607,28.261,23.587,28.256,23.57,28.252"/>
<path fill="#B13126" d="M23.629,28.265l-3.822-3.789c0.02,0.003,0.04,0.006,0.062,0.008l3.826,3.79
C23.675,28.274,23.648,28.272,23.629,28.265"/>
<path fill="#B63327" d="M23.694,28.274l-3.826-3.79c0.023,0.005,0.047,0.005,0.069,0.006l3.825,3.791
C23.737,28.281,23.718,28.279,23.694,28.274"/>
<path fill="#B63627" d="M23.763,28.281l-3.825-3.791c0.025,0.001,0.052,0.001,0.076,0.001l3.827,3.79
C23.814,28.281,23.79,28.281,23.763,28.281"/>
<path fill="#BB3727" d="M23.841,28.281l-3.827-3.79c0.015-0.002,0.032-0.002,0.047-0.005l3.821,3.793
C23.868,28.281,23.853,28.281,23.841,28.281"/>
</g>
</g>
</g>
<g enable-background="new ">
<g>
<defs>
<path id="SVGID_6_" d="M17.621,34.021l-3.825-3.79c-0.305-0.3-0.489-0.724-0.489-1.202l3.826,3.786
C17.133,33.298,17.315,33.716,17.621,34.021z"/>
</defs>
<clipPath id="SVGID_7_">
<use xlink:href="#SVGID_6_" overflow="visible"/>
</clipPath>
<g clip-path="url(#SVGID_7_)" enable-background="new ">
<path fill="#82211A" d="M17.621,34.021l-3.825-3.79c-0.273-0.271-0.45-0.641-0.483-1.061l3.824,3.792
C17.17,33.383,17.347,33.749,17.621,34.021"/>
<path fill="#87241B" d="M17.137,32.962l-3.824-3.792c-0.004-0.044-0.004-0.095-0.006-0.142l3.826,3.786
C17.133,32.866,17.136,32.911,17.137,32.962"/>
</g>
</g>
</g>
<g>
<path fill="#D83E27" d="M44.878,24.142l-0.011,6.239c-0.003,1.013-0.824,1.906-1.842,1.996l-24.061,2.114
c-1.014,0.089-1.837-0.663-1.832-1.677l0.013-6.24l5.783-0.51l-0.003,1.337c0,0.531,0.428,0.922,0.958,0.877l14.112-1.252
c0.527-0.048,1.095-0.513,1.102-1.046l0.001-1.333L44.878,24.142z"/>
</g>
<g>
<g>
<polygon fill="#8A241C" points="17.146,26.575 13.321,22.786 19.105,22.275 22.928,26.065 "/>
</g>
<g>
<polygon fill="#8A241C" points="17.146,26.575 13.321,22.786 19.105,22.275 22.928,26.065 "/>
</g>
</g>
<g>
<polygon fill="#8A241C" points="17.133,32.814 13.307,29.028 13.321,22.786 17.146,26.575 "/>
</g>
</g>
<g enable-background="new ">
<g>
<g>
<polygon fill="#E6584F" points="44.844,13.61 44.881,22.795 39.098,23.303 39.063,14.118 "/>
</g>
<g>
<polygon fill="#E2493C" points="44.844,13.61 44.881,22.795 39.098,23.303 39.063,14.118 "/>
</g>
</g>
<g>
<polygon fill="#8A241C" points="39.098,23.303 35.277,19.512 35.238,10.324 39.063,14.118 "/>
</g>
<g>
<g>
<polygon fill="#8A241C" points="39.063,14.118 35.238,10.324 41.021,9.82 44.844,13.61 "/>
</g>
<g>
<polygon fill="#8A241C" points="39.063,14.118 35.238,10.324 41.021,9.82 44.844,13.61 "/>
</g>
</g>
</g>
<g enable-background="new ">
<g>
<g>
<polygon fill="#E6584F" points="22.895,15.535 22.933,24.722 17.147,25.229 17.108,16.044 "/>
</g>
<g>
<polygon fill="#E2493C" points="22.895,15.535 22.933,24.722 17.147,25.229 17.108,16.044 "/>
</g>
</g>
<g>
<polygon fill="#8A241C" points="17.147,25.229 13.323,21.438 13.286,12.256 17.108,16.044 "/>
</g>
<g>
<polygon fill="#8A241C" points="17.108,16.044 13.286,12.256 19.071,11.747 22.895,15.535 "/>
</g>
</g>
</g>
<g>
<g enable-background="new ">
<g>
<polygon fill="#8A241C" points="39.091,12.747 35.265,8.959 35.271,7.805 39.096,11.595 "/>
</g>
<g enable-background="new ">
<g>
<defs>
<path id="SVGID_8_" d="M35.014,7.173c1.277,1.264,2.549,2.527,3.821,3.792c0.162,0.157,0.261,0.378,0.261,0.631
l-3.825-3.79C35.271,7.552,35.172,7.332,35.014,7.173z"/>
</defs>
<clipPath id="SVGID_9_">
<use xlink:href="#SVGID_8_" overflow="visible"/>
</clipPath>
<g clip-path="url(#SVGID_9_)" enable-background="new ">
<path fill="#87241B" d="M39.096,11.595l-3.825-3.79c0-0.026-0.006-0.048-0.008-0.075l3.826,3.791
C39.091,11.543,39.096,11.572,39.096,11.595"/>
<path fill="#82211A" d="M39.089,11.521L35.263,7.73c-0.014-0.222-0.104-0.416-0.249-0.557l3.821,3.791
C38.979,11.106,39.072,11.299,39.089,11.521"/>
</g>
</g>
</g>
<g enable-background="new ">
<g>
<g>
<path id="SVGID_16_" fill="#C33A28" d="M40.58,1.193c0.256,0.249,0.513,0.502,0.763,0.753
c1.022,1.012,2.043,2.023,3.062,3.032c-0.336-0.33-0.813-0.513-1.345-0.47l-3.824-3.788
C39.767,0.674,40.247,0.857,40.58,1.193z"/>
</g>
<g>
<defs>
<path id="SVGID_10_" d="M40.58,1.193c0.256,0.249,0.513,0.502,0.763,0.753c1.022,1.012,2.043,2.023,3.062,3.032
c-0.336-0.33-0.813-0.513-1.345-0.47l-3.824-3.788C39.767,0.674,40.247,0.857,40.58,1.193z"/>
</defs>
<clipPath id="SVGID_11_">
<use xlink:href="#SVGID_10_" overflow="visible"/>
</clipPath>
<g clip-path="url(#SVGID_11_)">
<path fill="#C33A28" d="M39.236,0.721l3.824,3.789c0.023,0,0.051-0.004,0.077-0.004l-3.824-3.789
C39.289,0.716,39.262,0.718,39.236,0.721z"/>
<path fill="#C33A28" d="M39.313,0.716l3.824,3.789c0.05-0.004,0.1-0.004,0.146-0.001l-3.821-3.788
C39.415,0.713,39.363,0.713,39.313,0.716z"/>
<path fill="#C33A28" d="M39.463,0.716l3.821,3.788c0.046,0.001,0.093,0.004,0.135,0.011l-3.824-3.79
C39.551,0.719,39.509,0.716,39.463,0.716z"/>
<path fill="#C33A28" d="M39.595,0.726l3.824,3.79c0.043,0.005,0.083,0.01,0.124,0.019l-3.825-3.793
C39.679,0.736,39.637,0.729,39.595,0.726z"/>
<path fill="#C33A28" d="M39.718,0.742l3.825,3.793c0.038,0.004,0.08,0.014,0.118,0.023l-3.825-3.787
C39.797,0.761,39.76,0.752,39.718,0.742z"/>
<path fill="#C33A28" d="M39.836,0.771l3.825,3.787c0.036,0.012,0.072,0.021,0.11,0.034l-3.824-3.789
C39.909,0.792,39.875,0.778,39.836,0.771z"/>
<path fill="#C33A28" d="M39.947,0.804l3.824,3.789c0.037,0.012,0.073,0.026,0.111,0.041l-3.824-3.789
C40.02,0.83,39.984,0.815,39.947,0.804z"/>
<path fill="#C33A28" d="M40.059,0.844l3.824,3.789c0.039,0.016,0.071,0.031,0.108,0.052l-3.825-3.792
C40.131,0.877,40.095,0.857,40.059,0.844z"/>
<path fill="#C33A28" d="M40.166,0.893l3.824,3.792c0.039,0.017,0.074,0.038,0.111,0.058l-3.827-3.791
C40.239,0.936,40.202,0.914,40.166,0.893z"/>
<path fill="#C33A28" d="M40.274,0.952l3.827,3.792c0.037,0.022,0.073,0.048,0.111,0.074L40.387,1.03
C40.353,1.002,40.313,0.979,40.274,0.952z"/>
<path fill="#C33A28" d="M40.387,1.03l3.826,3.789c0.041,0.031,0.079,0.06,0.114,0.09l-3.822-3.785
C40.467,1.087,40.431,1.055,40.387,1.03z"/>
<path fill="#C33A28" d="M40.58,1.193c-0.025-0.028-0.05-0.051-0.075-0.07l3.822,3.785
c0.029,0.027,0.052,0.048,0.077,0.071L40.58,1.193z"/>
</g>
</g>
</g>
</g>
<g>
<polygon fill="#8A241C" points="17.141,14.66 13.316,10.87 13.333,4.814 17.157,8.602 "/>
</g>
<g>
<polygon fill="#C33A28" points="18.998,6.609 15.173,2.819 39.236,0.721 43.061,4.509 "/>
</g>
<g>
<g>
<path fill="#E6584F" d="M18.998,6.609l24.063-2.1c1.014-0.084,1.833,0.662,1.831,1.679l-0.02,6.055l-5.781,0.503
l0.005-1.151c0-0.529-0.427-0.921-0.959-0.875l-14.249,1.24c-0.527,0.047-0.96,0.515-0.96,1.045l-0.003,1.149
l-5.784,0.505l0.016-6.058C17.16,7.59,17.985,6.699,18.998,6.609z"/>
</g>
<g>
<path fill="#E6584F" d="M18.998,6.609l24.063-2.1c1.014-0.084,1.833,0.662,1.831,1.679l-0.02,6.055l-5.781,0.503
l0.005-1.151c0-0.529-0.427-0.921-0.959-0.875l-14.249,1.24c-0.527,0.047-0.96,0.515-0.96,1.045l-0.003,1.149
l-5.784,0.505l0.016-6.058C17.16,7.59,17.985,6.699,18.998,6.609z"/>
</g>
</g>
</g>
</g>
</g>
</g>
</g>
</g>
</g>
</g>
</svg>

Binary files not shown.

File diff suppressed because one or more lines are too long

View File

@ -1,20 +0,0 @@
{
"images":[
{
"image":"superuser1.png",
"caption":"Jesse Proudman"
},
{
"image":"superuser2.png",
"caption":"Narayan Desai"
},
{
"image":"superuser3.png",
"caption":"Elissa Murphy"
},
{
"image":"superuser4.png",
"caption":"Tim Bell"
}
]
}

View File

@ -1,144 +0,0 @@
// Toggle main sections
$(".docs-sidebar-section-title").click(function () {
$('.docs-sidebar-section').not(this).closest('.docs-sidebar-section').removeClass('active');
$(this).closest('.docs-sidebar-section').toggleClass('active');
// Bug #1422454
// Commenting out the next line; the default behavior was preventing links
// from working.
// event.preventDefault();
});
/* Bug #1422454
The toggle functions below enable the expand/collapse, but for now
there's no easy way to get deeper links from other guides. So,
commenting both toggle functions out.
// Toggle 1st sub-sections
$(".docs-sidebar-section ol lh").click(function () {
$('.docs-sidebar-section ol').not(this).closest('.docs-sidebar-section ol').removeClass('active');
$(this).closest('.docs-sidebar-section ol').toggleClass('active');
if ($('.docs-has-sub').hasClass('active')) {
$(this).closest('.docs-sidebar-section ol li').addClass('open');
}
event.preventDefault();
});
// Toggle 2nd sub-sections
$(".docs-sidebar-section ol > li > a").click(function () {
$('.docs-sidebar-section ol li').not(this).removeClass('active').removeClass('open');
$(this).closest('.docs-sidebar-section ol li').toggleClass('active');
if ($('.docs-has-sub').hasClass('active')) {
$(this).closest('.docs-sidebar-section ol li').addClass('open');
}
event.preventDefault();
});
/* Bug #1417291
The rule below creates a shaded plus sign next to
a numbered sublist of a bulleted list.
It's probably there to implement expand/collapse of
list items, but unfortunately it affects also those
lists where expand/collapse is not intended.
I am commenting it out to fix this bug. If it causes
problems elsewhere, they have to be fixed elsewhere. */
// $('ol > li:has(ul)').addClass('docs-has-sub');
// webui popover
$(document).ready(function() {
function checkWidth() {
var windowSize = $(window).width();
if (windowSize <= 767) {
$('.gloss').webuiPopover({placement:'auto',trigger:'click'});
}
else if (windowSize >= 768) {
$('.gloss').webuiPopover({placement:'auto',trigger:'hover'});
}
}
// Execute on load
checkWidth();
// Bind event listener
$(window).resize(checkWidth);
});
// Bootstrap stuff
$('.docs-actions i').tooltip();
$('.docs-sidebar-home').tooltip();
// Hide/Toggle definitions
$("#toggle-definitions").click(function () {
$(this).toggleClass('docs-info-off');
if ($('.gloss').hasClass('on')) {
$('.gloss').removeClass('on').addClass('off').webuiPopover('destroy');
} else if ($('.gloss').hasClass('off')) {
$('.gloss').removeClass('off').addClass('on').webuiPopover();
}
});
// Smooth scroll
$('a').click(function () {
if($.attr(this, 'href').indexOf("#") != -1){
$('html, body').animate({
scrollTop: $($.attr(this, 'href')).offset().top
}, 500);
return false;
}
});
// Change character image on refresh
// Add file names and captions to doc-characters.json
if($('#superuser-img').length > 0) { //This shouldn't happen unless #superuser-img is available
$.getJSON('/common/js/doc-characters.json', function(data) {
var item = data.images[Math.floor(Math.random()*data.images.length)];
$('<img src="common/images/docs/' + item.image + '">').appendTo('#superuser-img');
$('<p>' + item.caption + '<strong>OpenStack Operator</strong></p>').appendTo('#superuser-img');
});
}
/* BB 150310
openstackdocstheme provides three types of admonitions, important, note
and warning. We decorate their title paragraphs with Font Awesome icons
by adding the appropriate FA classes. */
$('div.important > p.admonition-title').addClass('fa fa-info-circle');
$('div.note > p.admonition-title').addClass('fa fa-check-circle');
$('div.warning > p.admonition-title').addClass('fa fa-exclamation-triangle');
/* BB 150310
We also insert a space between the icon and the admonition title
("Note", "Warning", "Important" or their i18n equivalents).
This could be done with a single clause $('p.admonition-title')....,
affecting all types of admonitions. I play it safe here and explicitly
work on the three openstackdocstheme admonitions.
The first parameter of the text() callback is not needed here (it's
the index of the HTML element that we are modifying) */
$('div.important > p.admonition-title').text(function(ignored_para,original) {
return " "+original
});
$('div.note > p.admonition-title').text(function(ignored_para,original) {
return " "+original
});
$('div.warning > p.admonition-title').text(function(ignored_para,original) {
return " "+original
});
// Gives the "log a bug" icon the information it needs to generate the bug in
// Launchpad with pre-filled information such as git SHA, git.openstack.org
// source URL, published document URL and tag.
function logABug(bugTitle, bugProject, fieldComment, fieldTags) {
var lineFeed = "%0A";
var urlBase = "https://bugs.launchpad.net/" + bugProject + "/+filebug?field.title="
var currentURL = "URL: " + window.location.href;
var bugLink = urlBase + encodeURIComponent(bugTitle) +
"&field.comment=" + lineFeed + lineFeed + "-----------------------------------" + lineFeed + fieldComment +
lineFeed + currentURL +
"&field.tags=" + fieldTags;
document.getElementById("logABugLink1").href=bugLink;
document.getElementById("logABugLink2").href=bugLink;
document.getElementById("logABugLink3").href=bugLink;
}
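// A hypothetical invocation (the argument values are illustrative and not
// taken from this repository's templates; the three logABugLink anchors must
// exist in the page for the assignments above to succeed):
//   logABug("Documentation bug", "fuel", "Describe the problem here", "docs");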

File diff suppressed because one or more lines are too long

View File

@ -1,67 +0,0 @@
// Open header search bar
$(function() {
$(".search-icon").click(function() {
$(".navbar-main").toggleClass("show");
$(".search-container").toggleClass("show");
$(".search-icon").toggleClass("show");
$('#gsc-i-id1').focus();
});
});
// Close header search bar
$(function() {
$(".close-search").click(function() {
$(".navbar-main").toggleClass("show");
$(".search-container").toggleClass("show")
$(".search-icon").toggleClass("show");
});
});
// Open header drop downs on hover
jQuery(document).ready(function(){
if (jQuery(window).width() > 767) {
$('ul.navbar-main li ul.dropdown-menu').addClass('dropdown-hover');
$('ul.navbar-main li').hover(function() {
$(this).find('.dropdown-hover').stop(true, true).delay(400).fadeIn(100);
}, function() {
$(this).find('.dropdown-hover').stop(true, true).delay(100).fadeOut(200);
});
} else {
$('ul.navbar-main li ul.dropdown-menu').removeClass('dropdown-hover');
}
});
jQuery(window).resize(function () {
if (jQuery(window).width() > 767) {
$('ul.navbar-main li ul.dropdown-menu').addClass('dropdown-hover');
$('ul.navbar-main li').hover(function() {
$(this).find('.dropdown-hover').stop(true, true).delay(400).fadeIn(100);
}, function() {
$(this).find('.dropdown-hover').stop(true, true).delay(100).fadeOut(200);
});
} else {
$('ul.navbar-main li ul.dropdown-menu').removeClass('dropdown-hover');
}
});
// Remove Search text in smaller browser windows
jQuery(document).ready(function(){
if (jQuery(window).width() < 1050) {
$('#search-label').text('');
} else {
$('#search-label').text('Search');
}
});
jQuery(window).resize(function () {
if (jQuery(window).width() < 1050) {
$('#search-label').text('');
} else {
$('#search-label').text('Search');
}
});
// Show placeholder text in Google Search
setTimeout( function() {
$(".gsc-input").attr("placeholder", "search docs.openstack.org");
}, 1000 );

View File

@ -1,434 +0,0 @@
;(function ( $, window, document, undefined ) {
// Create the defaults once
var pluginName = 'webuiPopover';
var pluginClass = 'webui-popover';
var pluginType = 'webui.popover';
var defaults = {
placement:'auto',
width:'auto',
height:'auto',
trigger:'click',
style:'',
delay:300,
cache:true,
multi:false,
arrow:true,
title:'',
content:'',
closeable:false,
padding:true,
url:'',
type:'html',
template:'<div class="webui-popover">'+
'<div class="arrow"></div>'+
'<div class="webui-popover-inner">'+
'<a href="#" class="close">x</a>'+
'<h3 class="webui-popover-title"></h3>'+
'<div class="webui-popover-content"><i class="icon-refresh"></i> <p>&nbsp;</p></div>'+
'</div>'+
'</div>'
};
// The actual plugin constructor
function WebuiPopover ( element, options ) {
this.$element = $(element);
this.options = $.extend( {}, defaults, options );
this._defaults = defaults;
this._name = pluginName;
this.init();
}
WebuiPopover.prototype = {
//init webui popover
init: function () {
//init the event handlers
if (this.options.trigger==='click'){
this.$element.off('click').on('click',$.proxy(this.toggle,this));
}else{
this.$element.off('mouseenter mouseleave')
.on('mouseenter',$.proxy(this.mouseenterHandler,this))
.on('mouseleave',$.proxy(this.mouseleaveHandler,this));
}
this._poped = false;
this._inited = true;
},
/* api methods and actions */
destroy:function(){
this.hide();
this.$element.data('plugin_'+pluginName,null);
this.$element.off();
if (this.$target){
this.$target.remove();
}
},
hide:function(event){
if (event){
event.preventDefault();
event.stopPropagation();
}
var e = $.Event('hide.' + pluginType);
this.$element.trigger(e);
if (this.$target){this.$target.removeClass('in').hide();}
this.$element.trigger('hidden.'+pluginType);
},
toggle:function(e){
if (e) {
e.preventDefault();
e.stopPropagation();
}
this[this.getTarget().hasClass('in') ? 'hide' : 'show']();
},
hideAll:function(){
$('div.webui-popover').not('.webui-popover-fixed').removeClass('in').hide();
},
/*core method ,show popover */
show:function(){
var
$target = this.getTarget().removeClass().addClass(pluginClass);
if (!this.options.multi){
this.hideAll();
}
// use cache by default; if cache is not set, re-init the contents
if (!this.options.cache||!this._poped){
this.setTitle(this.getTitle());
if (!this.options.closeable){
$target.find('.close').off('click').remove();
}
if (!this.isAsync()){
this.setContent(this.getContent());
}else{
this.setContentASync(this.options.content);
this.displayContent();
return;
}
$target.show();
}
this.displayContent();
this.bindBodyEvents();
},
displayContent:function(){
var
//element position
elementPos = this.getElementPosition(),
//target position
$target = this.getTarget().removeClass().addClass(pluginClass),
//target content
$targetContent = this.getContentElement(),
//target Width
targetWidth = $target[0].offsetWidth,
//target Height
targetHeight = $target[0].offsetHeight,
//placement
placement = 'bottom',
e = $.Event('show.' + pluginType);
//if (this.hasContent()){
this.$element.trigger(e);
//}
if (this.options.width!=='auto') {$target.width(this.options.width);}
if (this.options.height!=='auto'){$targetContent.height(this.options.height);}
//init the popover and insert into the document body
if (!this.options.arrow){
$target.find('.arrow').remove();
}
$target.remove().css({ top: -1000, left: -1000, display: 'block' }).appendTo(document.body);
targetWidth = $target[0].offsetWidth;
targetHeight = $target[0].offsetHeight;
placement = this.getPlacement(elementPos,targetHeight);
this.initTargetEvents();
var positionInfo = this.getTargetPosition(elementPos,placement,targetWidth,targetHeight);
this.$target.css(positionInfo.position).addClass(placement).addClass('in');
if (this.options.type==='iframe'){
var $iframe = $target.find('iframe');
$iframe.width($target.width()).height($iframe.parent().height());
}
if (this.options.style){
this.$target.addClass(pluginClass+'-'+this.options.style);
}
if (!this.options.padding){
$targetContent.css('height',$targetContent.outerHeight());
this.$target.addClass('webui-no-padding');
}
if (!this.options.arrow){
this.$target.css({'margin':0});
}
if (this.options.arrow){
var $arrow = this.$target.find('.arrow');
$arrow.removeAttr('style');
if (postionInfo.arrowOffset){
$arrow.css(postionInfo.arrowOffset);
}
}
this._poped = true;
this.$element.trigger('shown.'+pluginType);
},
isTargetLoaded:function(){
return this.getTarget().find('i.glyphicon-refresh').length===0;
},
/*getter setters */
getTarget:function(){
if (!this.$target){
this.$target = $(this.options.template);
}
return this.$target;
},
getTitleElement:function(){
return this.getTarget().find('.'+pluginClass+'-title');
},
getContentElement:function(){
return this.getTarget().find('.'+pluginClass+'-content');
},
getTitle:function(){
return this.options.title||this.$element.attr('data-title')||this.$element.attr('title');
},
setTitle:function(title){
var $titleEl = this.getTitleElement();
if (title){
$titleEl.html(title);
}else{
$titleEl.remove();
}
},
hasContent:function () {
return this.getContent();
},
getContent:function(){
if (this.options.url){
if (this.options.type==='iframe'){
this.content = $('<iframe frameborder="0"></iframe>').attr('src',this.options.url);
}
}else if (!this.content){
var content='';
if ($.isFunction(this.options.content)){
content = this.options.content.apply(this.$element[0],arguments);
}else{
content = this.options.content;
}
this.content = this.$element.attr('data-content')||content;
}
return this.content;
},
setContent:function(content){
var $target = this.getTarget();
this.getContentElement().html(content);
this.$target = $target;
},
isAsync:function(){
return this.options.type==='async';
},
setContentASync:function(content){
var that = this;
$.ajax({
url:this.options.url,
type:'GET',
cache:this.options.cache,
success:function(data){
if (content&&$.isFunction(content)){
that.content = content.apply(that.$element[0],[data]);
}else{
that.content = data;
}
that.setContent(that.content);
var $targetContent = that.getContentElement();
$targetContent.removeAttr('style');
that.displayContent();
}
});
},
bindBodyEvents:function(){
$('body').off('keyup.webui-popover').on('keyup.webui-popover',$.proxy(this.escapeHandler,this));
$('body').off('click.webui-popover').on('click.webui-popover',$.proxy(this.bodyClickHandler,this));
},
/* event handlers */
mouseenterHandler:function(){
var self = this;
if (self._timeout){clearTimeout(self._timeout);}
if (!self.getTarget().is(':visible')){self.show();}
},
mouseleaveHandler:function(){
var self = this;
//key point: set _timeout on mouse leave; it is cleared in mouseenterHandler on re-enter
self._timeout = setTimeout(function(){
self.hide();
},self.options.delay);
},
escapeHandler:function(e){
if (e.keyCode===27){
this.hideAll();
}
},
bodyClickHandler:function(){
this.hideAll();
},
targetClickHandler:function(e){
e.stopPropagation();
},
//reset and init the target events;
initTargetEvents:function(){
if (this.options.trigger!=='click'){
this.$target.off('mouseenter mouseleave')
.on('mouseenter',$.proxy(this.mouseenterHandler,this))
.on('mouseleave',$.proxy(this.mouseleaveHandler,this));
}
this.$target.find('.close').off('click').on('click', $.proxy(this.hide,this));
this.$target.off('click.webui-popover').on('click.webui-popover',$.proxy(this.targetClickHandler,this));
},
/* utils methods */
//calculate the placement of the popover
getPlacement:function(pos,targetHeight){
var
placement,
de = document.documentElement,
db = document.body,
clientWidth = de.clientWidth,
clientHeight = de.clientHeight,
scrollTop = Math.max(db.scrollTop,de.scrollTop),
scrollLeft = Math.max(db.scrollLeft,de.scrollLeft),
pageX = Math.max(0,pos.left - scrollLeft),
pageY = Math.max(0,pos.top - scrollTop),
arrowSize = 20;
//if placement equals 'auto', calculate the placement from the element information;
if (typeof(this.options.placement)==='function'){
placement = this.options.placement.call(this, this.getTarget()[0], this.$element[0]);
}else{
placement = this.$element.data('placement')||this.options.placement;
}
if (placement==='auto'){
if (pageX<clientWidth/3){
if (pageY<clientHeight/3){
placement = 'bottom-right';
}else if (pageY<clientHeight*2/3){
placement = 'right';
}else{
placement = 'top-right';
}
//placement= pageY>targetHeight+arrowSize?'top-right':'bottom-right';
}else if (pageX<clientWidth*2/3){
if (pageY<clientHeight/3){
placement = 'bottom';
}else if (pageY<clientHeight*2/3){
placement = 'bottom';
}else{
placement = 'top';
}
}else{
placement = pageY>targetHeight+arrowSize?'top-left':'bottom-left';
if (pageY<clientHeight/3){
placement = 'bottom-left';
}else if (pageY<clientHeight*2/3){
placement = 'left';
}else{
placement = 'top-left';
}
}
}
return placement;
},
getElementPosition:function(){
return $.extend({},this.$element.offset(), {
width: this.$element[0].offsetWidth,
height: this.$element[0].offsetHeight
});
},
getTargetPosition:function(elementPos,placement,targetWidth,targetHeight){
var pos = elementPos,
elementW = this.$element.outerWidth(),
elementH = this.$element.outerHeight(),
position={},
arrowOffset=null,
arrowSize = this.options.arrow?28:0,
fixedW = elementW<arrowSize+10?arrowSize:0,
fixedH = elementH<arrowSize+10?arrowSize:0;
switch (placement) {
case 'bottom':
position = {top: pos.top + pos.height, left: pos.left + pos.width / 2 - targetWidth / 2};
break;
case 'top':
position = {top: pos.top - targetHeight, left: pos.left + pos.width / 2 - targetWidth / 2};
break;
case 'left':
position = {top: pos.top + pos.height / 2 - targetHeight / 2, left: pos.left - targetWidth};
break;
case 'right':
position = {top: pos.top + pos.height / 2 - targetHeight / 2, left: pos.left + pos.width};
break;
case 'top-right':
position = {top: pos.top - targetHeight, left: pos.left-fixedW};
arrowOffset = {left: elementW/2 + fixedW};
break;
case 'top-left':
position = {top: pos.top - targetHeight, left: pos.left -targetWidth +pos.width + fixedW};
arrowOffset = {left: targetWidth - elementW /2 -fixedW};
break;
case 'bottom-right':
position = {top: pos.top + pos.height, left: pos.left-fixedW};
arrowOffset = {left: elementW /2+fixedW};
break;
case 'bottom-left':
position = {top: pos.top + pos.height, left: pos.left -targetWidth +pos.width+fixedW};
arrowOffset = {left: targetWidth- elementW /2 - fixedW};
break;
case 'right-top':
position = {top: pos.top -targetHeight + pos.height + fixedH, left: pos.left + pos.width};
arrowOffset = {top: targetHeight - elementH/2 -fixedH};
break;
case 'right-bottom':
position = {top: pos.top - fixedH, left: pos.left + pos.width};
arrowOffset = {top: elementH /2 +fixedH };
break;
case 'left-top':
position = {top: pos.top -targetHeight + pos.height+fixedH, left: pos.left - targetWidth};
arrowOffset = {top: targetHeight - elementH/2 - fixedH};
break;
case 'left-bottom':
position = {top: pos.top , left: pos.left -targetWidth};
arrowOffset = {top: elementH /2 };
break;
}
return {position:position,arrowOffset:arrowOffset};
}
};
$.fn[ pluginName ] = function ( options ) {
return this.each(function() {
var webuiPopover = $.data( this, 'plugin_' + pluginName );
if (!webuiPopover) {
if (!options){
webuiPopover = new WebuiPopover( this, null);
}else if (typeof options ==='string'){
if (options!=='destroy'){
webuiPopover = new WebuiPopover( this, null );
webuiPopover[options]();
}
}else if (typeof options ==='object'){
webuiPopover = new WebuiPopover( this, options );
}
$.data( this, 'plugin_' + pluginName, webuiPopover);
}else{
if (options==='destroy'){
webuiPopover.destroy();
}else if (typeof options ==='string'){
webuiPopover[options]();
}
}
});
};
})( jQuery, window, document );

View File

@ -1,9 +0,0 @@
[theme]
inherit = basic
stylesheet = css/basic.css
pygments_style = native
[options]
globaltoc_depth = 6
globaltoc_includehidden = true
analytics_tracking_code = UA-17511903-1

View File

@ -1,14 +0,0 @@
<div class="row">
<div class="col-lg-8">
<h2>{{ title }}</h2>
</div>
<div class="docs-actions">
{% if prev %}
<a href="{{ prev.link|e }}"><i class="fa fa-angle-double-left" data-toggle="tooltip" data-placement="top" title="Previous: {{ prev.title }}"></i></a>
{% endif %}
{% if next %}
<a href="{{ next.link|e }}"><i class="fa fa-angle-double-right" data-toggle="tooltip" data-placement="top" title="Next: {{ next.title }}"></i></a>
{% endif %}
<a id="logABugLink1" href="" target="_blank" title="Found an error? Report a bug against this page"><i class="fa fa-bug" data-toggle="tooltip" data-placement="top" title="Report a Bug"></i></a>
</div>
</div>

View File

@ -1,103 +0,0 @@
# The encoding of source files.
source_encoding = 'utf-8-sig'
#source_encoding = 'shift_jis'
# The language for content autogenerated by Sphinx.
language = 'en'
#language = 'ja'
# The theme to use for HTML and HTML Help pages.
#html_theme = 'default'
#html_theme = 'sphinxdoc'
#html_theme = 'scrolls'
#html_theme = 'agogo'
#html_theme = 'traditional'
#html_theme = 'nature'
#html_theme = 'haiku'
# If this is not the empty string, a 'Last updated on:' timestamp
# is inserted at every page bottom, using the given strftime() format.
# Default is '%b %d, %Y' (or a locale-dependent equivalent).
html_last_updated_fmt = '%Y/%m/%d'
# A list of paths that contains extra files not directly related to the
# documentation, such as robots.txt or .htaccess. Relative paths are taken
# as relative to the configuration directory. They are copied to the output
# directory. They will overwrite any existing file of the same name.
html_extra_path = ['examples']
# Enable Antialiasing
blockdiag_antialias = True
actdiag_antialias = True
seqdiag_antialias = True
nwdiag_antialias = True
extensions += ['rst2pdf.pdfbuilder']
pdf_documents = [
(master_doc, project, project, copyright),
]
pdf_stylesheets = ['sphinx', 'kerning', 'a4']
pdf_language = "en_US"
# Mode for literal blocks wider than the frame. Can be
# overflow, shrink or truncate
pdf_fit_mode = "shrink"
# Section level that forces a break page.
# For example: 1 means top-level sections start in a new page
# 0 means disabled
#pdf_break_level = 0
# When a section starts in a new page, force it to be 'even', 'odd',
# or just use 'any'
pdf_breakside = 'any'
# Insert footnotes where they are defined instead of
# at the end.
pdf_inline_footnotes = False
# verbosity level. 0 1 or 2
pdf_verbosity = 0
# If false, no index is generated.
pdf_use_index = True
# If false, no modindex is generated.
pdf_use_modindex = True
# If false, no coverpage is generated.
pdf_use_coverpage = True
# Name of the cover page template to use
#pdf_cover_template = 'sphinxcover.tmpl'
# Documents to append as an appendix to all manuals.
#pdf_appendices = []
# Enable experimental feature to split table cells. Use it
# if you get "DelayedTable too big" errors
#pdf_splittables = False
# Set the default DPI for images
#pdf_default_dpi = 72
# Enable rst2pdf extension modules (default is only vectorpdf)
# you need vectorpdf if you want to use sphinx's graphviz support
#pdf_extensions = ['vectorpdf']
# Page template name for "regular" pages
#pdf_page_template = 'cutePage'
# Show Table Of Contents at the beginning?
pdf_use_toc = True
# How many levels deep should the table of contents be?
pdf_toc_depth = 3
# Add section number to section references
pdf_use_numbered_links = False
# Background images fitting mode
pdf_fit_background_mode = 'scale'
pdf_stylesheets = ['letter', 'fuel']
pdf_style_path = ['_templates']
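# With rst2pdf.pdfbuilder enabled above, the PDF output can be produced
# with the "pdf" builder (a sketch, assuming a standard Sphinx checkout):
#   sphinx-build -b pdf . _build/pdf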

275
conf.py
View File

@ -1,275 +0,0 @@
# -*- coding: utf-8 -*-
#
# fuel documentation build configuration file
#
# This file is execfile()d with the current directory set to its containing
# dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
import sys
import subprocess
# import openstackdocstheme
sys.path.insert(0, os.path.join(os.path.abspath('.')))
autodoc_default_flags = ['members', 'show-inheritance']
autodoc_member_order = 'bysource'
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'rst2pdf.pdfbuilder'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Fuel'
copyright = u'2012-2017, OpenStack'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '8.0'
# The full version, including alpha/beta/rc tags.
release = '8.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
language = 'en'
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build', '.*', 'userdocs/snippets/*']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# documentation.
# html_theme_options =
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = ['_templates']
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
html_sidebars = {
'**': ['localtoc.html', 'relations.html', 'sourcelink.html', 'sidebarpdf.html', 'searchbox.html'],
}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'fuel-doc'
# -- Options for LaTeX output -------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
'papersize': 'a4paper',
# The font size ('10pt', '11pt' or '12pt').
'pointsize': '12pt',
# Additional stuff for the LaTeX preamble.
'preamble': '''
\setcounter{tocdepth}{3}
\usepackage{tocbibind}
\pagenumbering{arabic}
'''
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index', 'fuel.tex', u'Fuel Documentation', u'Mike Scherbakov', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output -------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'fuel', u'Fuel Documentation', [u'Mike Scherbakov'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -----------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [(
'index', 'fuel', u'Fuel Documentation', u'Mike Scherbakov',
'fuel', 'OpenStack Installer', 'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# -- Additional Settings ------------------------------------------------------
# The variables to make "Log a bug" link send
# metadata for the project where the docs reside:
# We ask git for the SHA checksum
# The git SHA checksum is used by "log-a-bug"
git_cmd = ["/usr/bin/git", "rev-parse", "HEAD"]
gitsha = subprocess.Popen(git_cmd, stdout=subprocess.PIPE).communicate()[0].strip('\n')
# tag that reported bugs will be tagged with
bug_tag = ""
# source tree
pwd = os.getcwd()
# html_context allows us to pass arbitrary values into the html template
html_context = {"pwd": pwd, "gitsha": gitsha}
# Must set this variable to include year, month, day, hours, and minutes.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
execfile('./common_conf.py')

View File

@ -1,267 +0,0 @@
.. _buildsystem:
Fuel ISO build system
=====================
Use the `fuel-main repository <https://github.com/openstack/fuel-main.git>`_
to build Fuel components such as an ISO or an upgrade tarball.
This repository contains a set of GNU Make build scripts.
Quick start
-----------
1. You must use the Ubuntu 14.04 distribution to build Fuel components; on other distributions the build process may fail. Note that the build works only on x64 platforms.
2. Check whether you have git installed in
your system. To do that, use the following command:
::
which git
If git is not found, install it with the following command:
::
apt-get install git
3. Clone the **fuel-main** git repository to the location where
you will work. The root of your repo will be named `fuel-main`.
In this example, it will be located under the *~/fuel* directory:
::
mkdir ~/fuel
cd ~/fuel
git clone https://github.com/openstack/fuel-main.git
cd fuel-main
.. note:: The Fuel build system consists of the following components:
* a shell script (**./prepare-build-env.sh**) - prepares the build environment by checking
that all necessary packages are installed and installing any that are not.
* **fuel-main** directory - the only repository required for building the Fuel ISO.
The make script then downloads the additional components
(Fuel Library, Nailgun, Astute and OSTF).
Unless otherwise specified in the makefile,
the master branch of each respective repo is used to build the ISO.
4. Run the shell script:
::
./prepare-build-env.sh
and wait until **prepare-build-env.sh**
installs the Fuel build environment on your computer.
5. After the script runs successfully, issue the following command to build a
Fuel ISO:
::
make iso
6. Use the following command to list the available make targets:
::
make help
For the full list of available targets with description, see :ref:`Build targets <build-targets>` section below.
Build system structure
----------------------
Fuel consists of several components such as web interface,
puppet modules, orchestration components, testing components.
Source code of all those components is split into multiple git
repositories like, for example:
- https://github.com/openstack/fuel-web
- https://github.com/openstack/fuel-ui
- https://github.com/openstack/fuel-astute
- https://github.com/openstack/fuel-ostf
- https://github.com/openstack/fuel-library
- https://github.com/openstack/fuel-docs
The main component of the Fuel build system is
the *fuel-main* directory.
Fuel build processes are quite complicated,
so to make the **fuel-main** code easily
maintainable, it is split into a set of files and directories.
Those files and directories contain independent
(or at least almost independent)
pieces of the Fuel build system:
* **Makefile** - the main Makefile which includes all other make modules.
* **config.mk** - contains parameters used to customize the build process,
specifying items such as build paths,
upstream mirrors, source code repositories
and branches, built-in default Fuel settings and ISO name.
* **rules.mk** - defines frequently used macros.
* **repos.mk** - contains make scripts that download the
other repositories in which Fuel components are
developed.
* **sandbox.mk** - shell script definitions that create
and destroy the special chroot environment required to
build some components.
For example, for building RPM packages
and CentOS images we use a CentOS chroot environment.
* **mirror** - contains the code which is used to download
all necessary packages from upstream mirrors and build new
ones; these are copied onto the Fuel ISO so that they are
available even if the Internet connection is down.
* **packages** - contains DEB and RPM
specs as well as make code for building those packages,
included in Fuel DEB and RPM mirrors.
* **bootstrap** - contains a make script intended
to build a CentOS-based miniroot image (a.k.a. initrd or initramfs).
* **docker** - contains the make scripts to
build the docker containers deployed on the Fuel Master node.
* **iso** - contains **make** scripts for building Fuel ISO file.
.. _build-targets:
Build targets
-------------
* **all** - used for building all Fuel artifacts.
Currently, it is an alias for **iso** target.
* **bootstrap** - used for building in-memory bootstrap
image which is used for auto-discovering.
* **mirror** - used for building local mirrors (the copies of CentOS and
Ubuntu mirrors which are then placed into Fuel ISO).
They contain all necessary packages including those listed in
**requirements-\*.txt** files with their dependencies as well as those which
are Fuel packages. Packages listed in **requirements-\*.txt** files are downloaded
from upstream mirrors while Fuel packages are built from source code.
* **iso** - used for building Fuel ISO. If build succeeds,
ISO is put into build/artifacts folder.
* **clean** - removes build directory.
* **deep_clean** - removes build directory and local mirror.
Note that if you remove a local mirror, then next time
the ISO build job will download all necessary packages again.
So, the process goes faster if you keep local mirrors.
Keep in mind, however, that it is better to run *make deep_clean*
every time you build an ISO to make sure the local mirror is consistent.
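A typical from-scratch rebuild therefore chains these targets (a sketch;
**build/artifacts** is the default artifacts folder mentioned above):
::
make deep_clean
make iso
ls build/artifacts/*.iso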
Customizing build process
-------------------------
There are plenty of variables in the make files.
Some of them act as build parameters and are defined
in the **config.mk** file:
* **TOP_DIR** - the default current directory.
All other build directories are relative to this path.
* **BUILD_DIR** - contains all files used during the build process.
By default, it is **$(TOP_DIR)/build**.
* **ARTS_DIR** - contains build artifacts such as ISO and IMG files.
By default, it is **$(BUILD_DIR)/artifacts**.
* **LOCAL_MIRROR** - contains local CentOS and Ubuntu mirrors.
By default, it is **$(TOP_DIR)/local_mirror**.
* **DEPS_DIR** - contains build targets that depend on artifacts
of the previous build jobs, placed there
before the build starts. By default, it is **$(TOP_DIR)/deps**.
* **ISO_NAME** - the name of the Fuel ISO without the file extension:
if **ISO_NAME** = **MY_CUSTOM_NAME**, then the Fuel ISO file will
be named **$(MY_CUSTOM_NAME).iso**.
* **ISO_PATH** - used to specify the full path of the Fuel ISO instead of
defining just the ISO name.
By default, it is **$(ARTS_DIR)/$(ISO_NAME).iso**.
* The Fuel ISO contains some default settings for the
Fuel Master node. These settings can be changed
during Fuel Master node installation, or customized
at build time using the following variables:
- **MASTER_IP** - the Fuel Master node IP address.
By default, it is 10.20.0.2.
- **MASTER_NETMASK** - Fuel Master node IP netmask.
By default, it is 255.255.255.0.
- **MASTER_GW** - Fuel Master node default gateway.
By default, it is 10.20.0.1.
- **MASTER_DNS** - the upstream DNS location for the Fuel Master node.
The Fuel Master node DNS will redirect all DNS requests that it is not able to resolve itself to this location.
By default, it is 10.20.0.1.
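Since **config.mk** holds ordinary GNU Make variables, these settings can
also be overridden on the command line instead of editing the file (a
sketch; the IP and netmask shown are the documented defaults, the ISO name
is an arbitrary example):
::
make iso MASTER_IP=10.20.0.2 MASTER_NETMASK=255.255.255.0 ISO_NAME=my-fuel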
Other options
-------------
* **[repo]_REPO** - remote source code repo. A URL of a git repository
can be specified for each of the Fuel components
(FUELLIB, NAILGUN, ASTUTE, OSTF).
* **[repo]_COMMIT** - source branch for each of the Fuel components to build.
* **[repo]_GERRIT_URL** - gerrit repo.
* **[repo]_GERRIT_COMMIT** - list of extra commits from gerrit.
* **[repo]_SPEC_REPO** - repo for RPM/DEB specs of OpenStack packages.
* **[repo]_SPEC_COMMIT** - branch for checkout.
* **[repo]_SPEC_GERRIT_URL** - gerrit repo for OpenStack specs.
* **[repo]_SPEC_GERRIT_COMMIT** - list of extra commits from gerrit for specs.
* **USE_MIRROR** - pre-built mirrors from Fuel infrastructure.
The following mirrors can be used:
* ext (the external, publicly available mirror)
* none (reserved for building local mirrors: in this case
CentOS and Ubuntu packages will be fetched from upstream mirrors,
which makes the build process much slower).
* **MIRROR_CENTOS** - download CentOS packages from a specific remote repo.
* **MIRROR_UBUNTU** - download Ubuntu packages from a specific remote repo.
* **MIRROR_DOCKER** - download docker images from a specific remote url.
* **EXTRA_RPM_REPOS** - extra repos with RPM packages.
Each repo must be a comma-separated
tuple of repo name and repo path:
<first_repo_name>,<repo_path> <second_repo_name>,<second_repo_path>
For example,
qemu2,http://hostname.domain.tld/some/path/ libvirt,http://hostname.domain.tld/another/path/
Note that if you want to add more packages to the Fuel Master node, you should update the **requirements-rpm.txt** file.
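Putting several of these options together, a hypothetical mirror-aware
build could look like this (the repository URLs are the placeholder
examples from above, not real mirrors):
::
make iso USE_MIRROR=ext \
EXTRA_RPM_REPOS="qemu2,http://hostname.domain.tld/some/path/ libvirt,http://hostname.domain.tld/another/path/"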

View File

@ -1,23 +0,0 @@
.. _develop:
Developer Guide
===============
.. toctree::
:maxdepth: 3
develop/architecture
develop/sequence
develop/quick_start
develop/addition_examples
develop/env
develop/system_tests/tree
develop/live_masternode
develop/nailgun
develop/module_structure
develop/fuel_settings
develop/puppet_tips
develop/pxe_deployment
develop/ostf_contributors_guide
develop/custom-bootstrap-node
develop/modular-architecture

Binary files not shown (13 images).

View File

@ -1,198 +0,0 @@
Fuel Development Examples
=========================
This section provides examples of the Fuel development
process. It builds on the information in the `How to
contribute
<https://wiki.openstack.org/wiki/Fuel/How_to_contribute>`_
document, and the :doc:`Fuel Development Quick-Start Guide
</devdocs/develop/quick_start>` which illustrate the development
process for a single Fuel component. These examples show how
to manage development and integration of a more complicated
example.
Any new feature effort should start with the creation of a
blueprint where implementation decisions and related commits
are tracked. More information on launchpad blueprints can
be found here: `https://wiki.openstack.org/wiki/Blueprints
<https://wiki.openstack.org/wiki/Blueprints>`_.
Understanding the Fuel architecture helps you understand
which components any particular addition will impact. The
following documents provide valuable information about the
Fuel architecture, and the provisioning and deployment
process:
* `Fuel architecture on the OpenStack wiki <https://wiki.openstack.org/wiki/Fuel#Fuel_architecture>`_
* :doc:`Architecture section of Fuel documentation <architecture>`
* :doc:`Visual of provisioning tasks <sequence>`
Adding Zabbix Role
------------------
This section outlines the steps followed to add a new role
to Fuel. In this case, monitoring service functionality was
added by enabling the deployment of a Zabbix server
configured to monitor an OpenStack environment deployed by
Fuel.
The monitoring server role was initially planned in `this
blueprint
<https://blueprints.launchpad.net/fuel/+spec/monitoring-system>`_.
Core Fuel developers provided feedback on small
commits via Gerrit and IRC while the work was coming
together. Ultimately the work was rolled up into two
commits including over 23k lines of code, and these two
commits were merged into `fuel-web <https://github.com/openstack/fuel-web>`_
and `fuel-library
<https://github.com/openstack/fuel-library>`_.
Additions to Fuel-Web for Zabbix role
-------------------------------------
In fuel-web, the `Support for Zabbix
<https://review.openstack.org/#/c/84408/>`_ commit added the
additional role to :doc:`Nailgun <nailgun>`. The
reader is urged to review this commit closely as a good
example of where specific additions fit. In order to
include this as an option in the Fuel deployment process,
the following files were included in the commit for
fuel-web:
UI components::
nailgun/static/translations/core.json
nailgun/static/js/views/cluster_page_tabs/nodes_tab_screens/node_list_screen.jsx
Testing additions::
nailgun/nailgun/test/integration/test_cluster_changes_handler.py
nailgun/nailgun/test/integration/test_orchestrator_serializer.py
General Nailgun additions::
nailgun/nailgun/errors/__init__.py
nailgun/nailgun/fixtures/openstack.yaml
nailgun/nailgun/network/manager.py
nailgun/nailgun/orchestrator/deployment_serializers.py
nailgun/nailgun/rpc/receiver.py
nailgun/nailgun/settings.yaml
nailgun/nailgun/task/task.py
nailgun/nailgun/utils/zabbix.py
Additions to Fuel-Library for Zabbix role
-----------------------------------------
In addition to the Nailgun additions, the related Puppet
modules were added to the `fuel-library repository
<https://github.com/openstack/fuel-library>`_. This
`Zabbix fuel-library integration
<https://review.openstack.org/#/c/101844/>`_ commit included
all the puppet files, many of which are brand new modules
specifically for Zabbix, in addition to adjustments to the
following files::
deployment/puppet/openstack/manifests/logging.pp
deployment/puppet/osnailyfacter/manifests/cluster_ha.pp
deployment/puppet/osnailyfacter/manifests/cluster_simple.pp
Once all these commits passed CI and had been reviewed by
both community members and the Fuel PTLs, they were merged
into master.
Adding Hardware Support
-----------------------
This section outlines the steps followed to add support for
a Mellanox network card, which requires a kernel driver that
is available in most Linux distributions but was not loaded
by default. Adding support for other hardware would touch
similar Fuel components, so this outline should provide a
reasonable guide for contributors wishing to add support for
new hardware to Fuel.
It is important to keep in mind that the Fuel node discovery
process works by providing a bootstrap image via PXE. Once
the node boots with this image, a basic inventory of
hardware information is gathered and sent back to the Fuel
controller. If a node contains hardware requiring a unique
kernel module, the bootstrap image must contain that module
in order to detect the hardware during discovery.
In this example, loading the module in the bootstrap image
was enabled by adjusting the ISO makefile and specifying the
appropriate requirements.
Adding a hardware driver to bootstrap
-------------------------------------
The `Added bootstrap support to Mellanox
<https://review.openstack.org/#/c/101126>`_ commit shows how
this is achieved by adding the modprobe call to load the
driver specified in the requirements-rpm.txt file, requiring
modification of only two files in the fuel-main repository::
bootstrap/module.mk
requirements-rpm.txt
.. note:: Any package specified in the bootstrap building procedure
must be listed in the requirements-rpm.txt file explicitly.
The Fuel mirrors must be rebuilt by the OSCI team prior to
merging requests like this one.
.. note:: Changes made to bootstrap do not affect package sets for
target systems, so if you are adding support for a NIC,
for example, you also have to add the installation of all related
packages to the kickstart/preseed files.
The `Adding OFED drivers installation
<https://review.openstack.org/#/c/103427>`_ commit shows the
changes made to the preseed (for Ubuntu) and kickstart (for
CentOS) files in the fuel-library repository::
deployment/puppet/cobbler/manifests/snippets.pp
deployment/puppet/cobbler/templates/kickstart/centos.ks.erb
deployment/puppet/cobbler/templates/preseed/ubuntu-1404.preseed.erb
deployment/puppet/cobbler/templates/snippets/centos_ofed_prereq_pkgs_if_enabled.erb
deployment/puppet/cobbler/templates/snippets/ofed_install_with_sriov.erb
deployment/puppet/cobbler/templates/snippets/ubuntu_packages.erb
Though this example did not require it, if the hardware
driver is required during the operating system installation,
the installer images (debian-installer and anaconda) would
also need to be repacked. For most installations though,
ensuring the driver package is available during installation
should be sufficient.
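In outline, the bootstrap side of such a change amounts to loading the
driver at image boot and declaring its package (a hedged sketch, not the
literal commit; **mlx4_en** is used only as an illustrative module name):
::
# load the driver when the bootstrap image boots
modprobe mlx4_en
# and list the package that provides it explicitly in requirements-rpm.txt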
Adding to Fuel package repositories
-----------------------------------
If the addition will be committed back to the public Fuel
codebase to benefit others, you will need to submit a bug in
the Fuel project to request the package be added to the
repositories.
Let's look at this process step by step, using the
`Add neutron-lbaas-agent package
<https://bugs.launchpad.net/bugs/1330610>`_ bug as an example:
* you create a bug in the Fuel project providing a full description of
the packages to be added, and assign it to the Fuel OSCI team
* you create a request to add these packages to Fuel requirements-\*.txt
files `Add all neutron packages to requirements
<https://review.openstack.org/#/c/104633/>`_
You receive a +1 vote from Fuel CI if these packages already exist on
either the Fuel internal mirrors or the upstream mirrors for the respective OS
type (rpm/deb), or a -1 vote in any other case.
* if the requested packages do not exist in the upstream OS distribution,
the OSCI team builds them and then places them on the internal Fuel mirrors
* the OSCI team rebuilds the public Fuel mirrors with the `Add all neutron packages to
requirements <https://review.openstack.org/#/c/104633/>`_ request
* the `Add all neutron packages to requirements
<https://review.openstack.org/#/c/104633/>`_ request is merged
.. note:: The package must include a license that complies
with the Fedora project license requirements for binary
firmware. See the `Fedora Project licensing page
<https://fedoraproject.org/wiki/Licensing:Main#Binary_Firmware>`_
for more information.

View File

@ -1,110 +0,0 @@
Fuel Architecture
=================
A good overview of the Fuel architecture is available on the
`OpenStack wiki <https://wiki.openstack.org/wiki/Fuel#Fuel_architecture>`_.
You can find a detailed breakdown of how this works in the
:doc:`Sequence Diagrams </devdocs/develop/sequence>`.
The Master node is the main part of the Fuel project. It contains all the
services needed for network provisioning of other managed nodes,
installing an operating system, and then deploying OpenStack services to
create a cloud environment. *Nailgun* is the most important service.
It is a RESTful application written in Python that contains all the
business logic of the system. A user can interact with it either through
the *Fuel Web* interface or by means of the *CLI utility*, and can create
a new environment, edit its settings, assign roles to the discovered
nodes, and start the deployment process of a new OpenStack cluster.
Nailgun stores all of its data in a *PostgreSQL* database. It contains
the hardware configuration of all discovered managed nodes, the roles,
environment settings, current deployment status and progress of
running deployments.
.. image:: _images/uml/nailgun-agent.png
:width: 100%
Managed nodes are discovered over PXE using a special bootstrap image
and the PXE boot server located on the master node. The bootstrap image
runs a special script called Nailgun agent. The agent **nailgun-agent.rb**
collects the server's hardware information and submits it to Nailgun
through the REST API.
The deployment process is started by the user after a new environment
has been configured. The Nailgun service creates a JSON data structure
with the environment settings, its nodes and their roles, and puts this
message into the *RabbitMQ* queue. The message is received by one
of the worker processes that will actually deploy the environment. These
processes are called *Astute*.
.. image:: _images/uml/astute.png
:width: 100%
The Astute workers listen to the RabbitMQ queue and receive
messages. They use the *Astute* library, which implements all deployment
actions. First, it starts the provisioning of the environment's nodes.
Astute uses XML-RPC to set these nodes' configuration in Cobbler and
then reboots the nodes using the *MCollective agent* to let Cobbler install
the base operating system. *Cobbler* is a deployment system that can
control DHCP and TFTP services and use them to network boot the managed
node and start the OS installer with the user-configured settings.
Astute puts a special message into the RabbitMQ queue that contains
the action that should be executed on the managed node. MCollective
servers are started on all bootstrapped nodes, and they constantly listen
for these messages; when they receive a message, they run the required
agent action with the given parameters. MCollective agents are just Ruby
scripts with a set of procedures. These procedures are actions that the
MCollective server can run when asked to.
When the managed node's OS is installed, Astute can start the deployment
of OpenStack services. First, it uploads the node's configuration
to the **/etc/astute.yaml** file on the node using the **uploadfile** agent.
This file contains all the variables and settings that will be needed
for the deployment.
Next, Astute uses the **puppetsync** agent to synchronize Puppet
modules and manifests. This agent runs an rsync process that connects
to the rsyncd server on the Master node and downloads the latest version
of Puppet modules and manifests.
.. image:: _images/uml/puppetsync.png
:width: 100%
When the modules are synchronized, Astute can run the actual deployment
by applying the main Puppet manifest **site.pp**. MCollective agent runs
the Puppet process in the background using the **daemonize** tool.
The command looks like this:
::
daemonize puppet apply /etc/puppet/manifests/site.pp
Astute periodically polls the agent to check if the deployment has
finished and reports the progress to Nailgun through its RabbitMQ queue.
When started, Puppet reads the **astute.yaml** file content as a fact
and then parses it into the **$fuel_settings** structure used to get all
deployment settings.
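For illustration, the same parsing step expressed in Python (assuming PyYAML;
the real implementation is a Facter fact combined with the **parseyaml**
Puppet function) would be:

.. code-block:: python

    # Rough Python equivalent of reading /etc/astute.yaml the way Facter and
    # parseyaml() do on a managed node.
    import yaml

    with open("/etc/astute.yaml") as f:
        fuel_settings = yaml.safe_load(f)

    # Top-level keys are then available much like $::fuel_settings['debug']:
    print(fuel_settings.get("debug"))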
When the Puppet process exits either successfully or with an error,
Astute gets the summary file from the node and reports the results to
Nailgun. The user can always monitor both the progress and the
results using Fuel Web interface or the CLI tool.
Fuel installs the **puppet-pull** script. Developers can use it if
they need to manually synchronize manifests from the Master node and
run the Puppet process on the node again.
Astute also performs some additional actions, depending on the environment
configuration, either before the deployment or after it completes successfully:
* Generates and uploads SSH keys that will be needed during deployment.
* Runs the **net_verify.py** script during the network verification phase.
* Uploads CirrOS guest image into Glance after the deployment.
* Updates **/etc/hosts** file on all nodes when new nodes are deployed.
* Updates RadosGW map when Ceph nodes are deployed.
Astute also uses MCollective agents when a node or the entire
environment is being removed. It erases all boot sectors on the node
and reboots it. The node will be network booted with the bootstrap
image again, and will be ready to be used in a new environment.

Fuel Development Environment
============================
.. warning:: The Fuel ISO build works only on a 64-bit operating system.
If you are modifying or augmenting the Fuel source code or if you
need to build a Fuel ISO from the latest branch, you will need
an environment with the necessary packages installed. This page
lays out the steps you will need to follow in order to prepare
the development environment, test the individual components of
Fuel, and build the ISO which will be used to deploy your
Fuel master node.
The basic operating system for Fuel development is Ubuntu Linux.
The setup instructions below assume Ubuntu 14.04 (64 bit) though most of
them should be applicable to other Ubuntu and Debian versions, too.
Each subsequent section below assumes that you have followed the steps
described in all preceding sections. By the end of this document, you
should be able to run and test all key components of Fuel, build the
Fuel master node installation ISO, and generate documentation.
.. _getting-source:
Getting the Source Code
-----------------------
The source code of OpenStack Fuel can be found on git.openstack.org or
GitHub.
Follow these steps to clone the repositories for each of
the Fuel components:
::
apt-get install git
git clone https://github.com/openstack/fuel-main
git clone https://github.com/openstack/fuel-web
git clone https://github.com/openstack/fuel-ui
git clone https://github.com/openstack/fuel-agent
git clone https://github.com/openstack/fuel-astute
git clone https://github.com/openstack/fuel-ostf
git clone https://github.com/openstack/fuel-library
git clone https://github.com/openstack/fuel-docs
.. _building-fuel-iso:
Building the Fuel ISO
---------------------
The "fuel-main" repository is the only one required in order
to build the Fuel ISO. The make script then downloads the
additional components (Fuel Library, Nailgun, Astute and OSTF).
Unless otherwise specified in the makefile, the master branch of
each respective repo is used to build the ISO.
The basic steps to build the Fuel ISO from trunk in an
Ubuntu 14.04 environment are:
::
apt-get install git
git clone https://github.com/openstack/fuel-main
cd fuel-main
./prepare-build-env.sh
make iso
If you want to build an ISO using a specific commit or repository,
you will need to modify the "Repos and versions" section in the
config.mk file found in the fuel-main repo before executing "make
iso". For example, this would build a Fuel ISO against the v5.0
tag of Fuel:
::
# Repos and versions
FUELLIB_COMMIT?=tags/5.0
NAILGUN_COMMIT?=tags/5.0
FUEL_UI_COMMIT?=tags/5.0
ASTUTE_COMMIT?=tags/5.0
OSTF_COMMIT?=tags/5.0
FUELLIB_REPO?=https://github.com/openstack/fuel-library.git
NAILGUN_REPO?=https://github.com/openstack/fuel-web.git
FUEL_UI_REPO?=https://github.com/openstack/fuel-ui.git
ASTUTE_REPO?=https://github.com/openstack/fuel-astute.git
OSTF_REPO?=https://github.com/openstack/fuel-ostf.git
To build an ISO image from custom gerrit patches on review, edit the
"Gerrit URLs and commits" section of config.mk, e.g. for
https://review.openstack.org/#/c/63732/8 (id:63732, patch:8) set:
::
FUELLIB_GERRIT_COMMIT?=refs/changes/32/63732/8
If you are building Fuel from an older branch that does not contain the
"prepare-build-env.sh" script, you can follow these steps to prepare
your Fuel ISO build environment on Ubuntu 14.04:
#. The ISO build process requires sudo permissions; allow yourself to run
commands as the root user without being asked for a password::
echo "`whoami` ALL=(ALL) NOPASSWD: ALL" | sudo tee -a /etc/sudoers
#. Install software::
sudo apt-get update
sudo apt-get install apt-transport-https
echo deb http://mirror.yandex.ru/mirrors/docker/ docker main | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
sudo apt-get update
sudo apt-get install lxc-docker
sudo apt-get update
sudo apt-get remove nodejs nodejs-legacy npm
sudo apt-get install software-properties-common python-software-properties
sudo add-apt-repository -y ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install build-essential make git ruby ruby-dev rubygems debootstrap createrepo \
python-setuptools yum yum-utils libmysqlclient-dev isomd5sum \
python-nose libvirt-bin python-ipaddr python-paramiko python-yaml \
python-pip kpartx extlinux unzip genisoimage nodejs multistrap \
lrzip python-daemon
sudo gem install bundler -v 1.2.1
sudo gem install builder
sudo pip install xmlbuilder jinja2
#. If you haven't already done so, get the source code::
git clone https://github.com/openstack/fuel-main
#. Now you can build the Fuel ISO image::
cd fuel-main
make iso
#. If you encounter issues and need to rebase or start over::
make clean #remove build/ directory
make deep_clean #remove build/ and local_mirror/
.. note::
If you use a virtual machine for building the image, verify the
following:
- Both ``BUILD_DIR`` and ``LOCAL_MIRROR`` build directories are
outside the shared folder path in the `config.mk
<https://github.com/openstack/fuel-main/blob/master/config.mk>`_
file. For more information, see:
- `Shared folders of VirtualBox
<https://www.virtualbox.org/manual/ch04.html#sharedfolders>`_
documentation
- `Synced folders of Vagrant
<https://docs.vagrantup.com/v2/synced-folders/>`_
documentation
- To prevent random unexpected Docker terminations, the virtual
machine must have a kernel that supports the ``aufs`` file system.
To install the kernel, run:
.. code-block:: console
sudo apt-get install --yes linux-image-extra-virtual
Reboot into the new kernel when the installation is complete. Check that
Docker is using ``aufs`` by running:
.. code-block:: console
sudo docker info 2>&1 | grep -q 'Storage Driver: aufs' \
&& echo OK || echo KO
For more information, see `Select a storage driver for docker
<https://docs.docker.com/engine/userguide/storagedriver/selectadriver/>`_.
You can also use the following tools to make your work and development process
with Fuel easier:
* CGenie fuel-utils - a set of tools for interacting with code on a Fuel Master node created
from the ISO. It provides the *fuel* command,
which gives you a simple way to upload Python or UI code (with static files compression)
to Docker containers, SSH into the machine and into a container,
display the logs, etc.
* Vagrant SaltStack-based - a Vagrant box definition that provides a quick, basic Fuel
environment with fake tasks.
This is useful for UI or Nailgun development.
You can download both tools from the
`fuel-dev-tools <https://github.com/openstack/fuel-dev-tools>`_.
Nailgun (Fuel-Web)
------------------
Nailgun is the heart of the Fuel project. It implements a REST API as well
as deployment data management. It manages disk volume configuration data,
network configuration data and any other environment specific data
necessary for a successful deployment of OpenStack. It provides the
required orchestration logic for provisioning and
deployment of the OpenStack components and nodes in the right order.
Nailgun uses a SQL database to store its data and an AMQP service to
interact with workers.
Requirements for preparing the nailgun development environment, along
with information on how to modify and test nailgun can be found in
the Nailgun Development Instructions document: :ref:`nailgun-development`
Astute
------
Astute is the Fuel component that represents Nailgun's workers, and
its function is to run actions according to the instructions provided
from Nailgun. Astute provides a layer which encapsulates all the details
about interaction with a variety of services such as Cobbler, Puppet,
shell scripts, etc. and provides a universal asynchronous interface to
those services.
#. Astute can be found in fuel-astute repository
#. Install Ruby dependencies::
sudo apt-get install git curl
curl -sSL https://get.rvm.io | bash -s stable
source ~/.rvm/scripts/rvm
rvm install 2.1
rvm use 2.1
git clone https://github.com/nulayer/raemon.git
cd raemon
git checkout b78eaae57c8e836b8018386dd96527b8d9971acc
gem build raemon.gemspec
gem install raemon-0.3.0.gem
cd ..
rm -Rf raemon
#. Install or update dependencies and run unit tests::
cd fuel-astute
./run_tests.sh
#. (optional) Run Astute MCollective integration test (you'll need to
have MCollective server running for this to work)::
cd fuel-astute
bundle exec rspec spec/integration/mcollective_spec.rb
Running Fuel Puppet Modules Unit Tests
--------------------------------------
If you are modifying any puppet modules used by Fuel, or including
additional modules, you can use the PuppetLabs RSpec Helper
to run the unit tests for any individual puppet module. Follow
these steps to install the RSpec Helper:
#. Install PuppetLabs RSpec Helper::
cd ~
gem2deb puppetlabs_spec_helper
sudo dpkg -i ruby-puppetlabs-spec-helper_0.4.1-1_all.deb
gem2deb rspec-puppet
sudo dpkg -i ruby-rspec-puppet_0.1.6-1_all.deb
#. Run unit tests for a Puppet module::
cd fuel/deployment/puppet/module
rake spec
Installing Cobbler
------------------
Install Cobbler from GitHub (it can't be installed from PyPI, and the deb
package in Ubuntu is outdated)::
cd ~
git clone git://github.com/cobbler/cobbler.git
cd cobbler
git checkout release24
sudo make install
Building Documentation
----------------------
You should prepare your build environment before you can build
this documentation. First you must install Java, using the
appropriate procedure for your operating system.
Java is needed to use PlantUML to automatically generate UML diagrams
from the source. You can also use `PlantUML Server
<http://www.plantuml.com/plantuml/>`_ for a quick preview of your
diagrams and language documentation.
Then you need to install all the packages required for creating
the Python virtual environment and installing the dependencies.
::
sudo apt-get install make postgresql postgresql-server-dev-9.1
sudo apt-get install python-dev python-pip python-virtualenv
Now you can create the virtual environment and activate it.
::
virtualenv fuel-web-venv
. fuel-web-venv/bin/activate
And then install the dependencies.
::
pip install -r nailgun/test-requirements.txt
Now you can look at the list of available formats and generate
the one you need:
::
cd docs
make help
make html
There is a helper script **build-docs.sh**. It can perform
all the required steps automatically. The script can build documentation
in the required format.
::
Documentation build helper
-o - Open generated documentation after build
-c - Clear the build directory
-n - Don't install any packages
-f - Documentation format [html,singlehtml,latexpdf,pdf,epub]
For example, if you want to build HTML documentation, you can
run the script like this:
::
./build-docs.sh -f html -o
It will create a virtualenv, install the required dependencies, and
build the documentation in HTML format. It will also open the
documentation in your default browser.
If you don't want to install all the dependencies and you are not
interested in building the automatic API documentation, there is an
easier way to do it.
First, remove the autodoc modules from the extensions section of the **conf.py**
file in the **docs** directory. This section should look like this:
::
extensions = [
'rst2pdf.pdfbuilder',
'sphinxcontrib.plantuml',
]
Then remove the **develop/api_doc.rst** file and the reference to it from
the **develop.rst** index.
Now you can build the documentation as usual using the make command.
This method can be useful if you want to make some corrections to
text and see the results without building the entire environment.
The only Python packages you need are Sphinx packages:
::
Sphinx
sphinxcontrib-actdiag
sphinxcontrib-blockdiag
sphinxcontrib-nwdiag
sphinxcontrib-plantuml
sphinxcontrib-seqdiag
Just don't forget to roll back all these changes before you commit your
corrections.

Using Fuel settings
~~~~~~~~~~~~~~~~~~~
Fuel uses a special method to pass settings from Nailgun to Puppet manifests:
- Before the deployment process begins,
Astute uploads all settings
to the */etc/astute.yaml* files that are located on each node.
- When Puppet is run,
Facter reads the contents of all these */etc/astute.yaml* files
and creates a single fact called *$astute_settings_yaml*.
- The **parseyaml** function (at the beginning of the *site.pp* file)
then parses these settings
and creates a rich data structure called *$fuel_settings*.
All of the settings used during node deployment are stored there
and can be used anywhere in Puppet code.
For example, single top level variables are available as
*$::fuel_settings['debug']*.
More complex structures are also available as
values of *$::fuel_settings* hash keys
and can be accessed like normal hashes and arrays.
Many aliases and generated values are provided
to help you retrieve values easily.
You can create a variable from any hash key in *$fuel_settings*
and work with this variable within your local scope
or from other classes, using fully qualified paths::
$debug = $::fuel_settings['debug']
Some variables and structures are generated from the settings hash
by filtering and transformation functions.
For example, the $node structure contains only the
settings of the current node,
filtered from the hash of all nodes.
It can be accessed as::
$node = filter_nodes($nodes_hash, 'name', $::hostname)
If you are going to use your module inside the Fuel Library
and need some settings,
you can get them from this *$::fuel_settings* structure.
Most variables related to network and OpenStack services configuration
are already available there and you can use them as they are.
If your modules require some additional or custom settings,
you must either use **Custom Attributes**
by editing the JSON files before deployment, or,
if you are integrating your project with the Fuel Library,
you can contact the Fuel UI developers
and ask them to add your configuration options to the Fuel settings panel.
After you have defined all classes you need inside your module,
you can add this module's declaration
into the Fuel manifests such as
*cluster_simple.pp* and *cluster_ha.pp* located inside
the *osnailyfacter/manifests* folder
or, if your additions are related to another class,
you can add them to that class.
Example module
~~~~~~~~~~~~~~
To demonstrate how to add a new module to the Fuel Library,
let us add a simple class
that changes the terminal color of Red Hat based systems.
Our module is named *profile* and has only one class::
profile
profile/manifests
profile/manifests/init.pp
profile/files
profile/files/colorcmd.sh
init.pp could have a class definition such as::
class profile {
if $::osfamily == 'RedHat' {
file { 'colorcmd.sh' :
ensure => present,
owner => 'root',
group => 'root',
mode => '0644',
path => "/etc/profile.d/colorcmd.sh",
source => 'puppet:///modules/profile/colorcmd.sh',
}
}
}
This class downloads the *colorcmd.sh* file
and places it in the defined location
when the class is run on a Red Hat or CentOS system.
The profile module can be added to Fuel modules
by uploading its folder to */etc/puppet/modules*
on the Fuel Master node.
Now we need to declare this module somewhere inside the Fuel manifests.
Since this module should be run on every server,
we can use our main *site.pp* manifest
found inside the *osnailyfacter/examples* folder.
On the deployed master node,
this file will be copied to */etc/puppet/manifests*
and used to deploy Fuel on all other nodes.
The only thing we need to do here is to add the *include profile*
to the end of the */etc/puppet/manifests/site.pp* file
on the already deployed master node
and to the *osnailyfacter/examples/site.pp* file inside the Fuel repository.
Declaring a class outside of a node block
forces this class to be included everywhere.
If you want to include your module only on some nodes,
you can add its declaration
to the blocks associated with the role that is running on those nodes
inside the *cluster_simple* and *cluster_ha* classes.
You can add some additional logic to allow this module to be disabled,
either from the Fuel UI or by passing Custom Attributes
to the Fuel configuration::
if $::fuel_settings['enable_profile'] {
include 'profile'
}
This block uses the *enable_profile* variable
to enable or disable inclusion of the profile module.
The variable should be passed from Nailgun and saved
to the */etc/astute.yaml* files on managed nodes.
You can do this either by downloading the settings files
and manually editing them before deployment
or by asking the Fuel UI developers to include additional options
in the settings panel.

Fuel Development Environment on Live Master Node
================================================
If you need to deploy your own development version of Fuel on a live
Master node, you can use the helper scripts that are available
in the `fuel-dev-tools <https://github.com/openstack/fuel-dev-tools>`_ repository.
Help information about fuel-dev-tools can be obtained by running it
with the '-h' parameter.

Modular Architecture
====================
The idea behind the modular architecture introduced in Fuel 6.1 is the separation of the legacy site.pp manifest into a group of small manifests. Each manifest can be designed to do only a limited part of the deployment process. These manifests can be applied by Puppet the same way as in Fuel 6.0 or older. The deployment process in Fuel 6.1 consists of a sequential application of small manifests in a predefined order.
Using smaller manifests instead of the monolithic ones has the following advantages:
* **Independent development**
As a developer, you can work only with those Fuel components that you are interested in. A separate manifest
can be dedicated wholly to a single task without any interference from other components and developers. This
task may require the system to be in some state before the deployment can be started and the task may require
some input data to be available. But other than that, each task is on its own.
* **Granular testing**
With granular deployment introduced in Fuel 6.1, any finished task can be tested independently. Testing can be
automated with autotests; you can snapshot and revert the environment to a previous state; or you can manually
run tests in your environment. With Fuel 6.0 or older, testing consumes a considerable amount of time as there
is no way to test only a part of the deployment -- each new change requires the whole deployment to be started
from scratch. See also the `granular deployment blueprint <https://blueprints.launchpad.net/fuel/+spec/granular-deployment-based-on-tasks>`_.
* **Encapsulation**
Puppet deems all resources to be unique within the catalog and the way it works with dependencies and
ordering. Normally one cannot take a third party module and expect that it will work as designed within the
Puppet infrastructure. Modular manifests, introduced in Fuel 6.1, solve this by making every single task use
its own catalog without directly interfering with other tasks.
* **Self-Testing**
Granular architecture allows making tests for every task. These tests can be run either after the task to
check if it is successful or before the task to check if the system is in the required state to start the
task. These tests can be used by the developer as acceptance tests, by the Continuous Integration (CI) to
determine if the changes can be merged, or during the real deployment to control the whole process and to
raise alarm if something goes wrong.
* **Using multiple tools**
Sometimes you may use a tool other than Puppet (from shell scripts to Python or Ruby, and even binary
executables). Granular deployment allows using any tools you see fit for the task. Tasks, tests and pre/post
hooks can be implemented using anything the developer knows best. In Fuel 6.1 only pre/post hooks can use
non-Puppet tasks.
Granular deployment process
---------------------------
Granular deployment is implemented using the Nailgun plugin system. Nailgun uses the deployment graph data to determine which tasks should be run on which nodes. This graph is traversed and sent to Astute as an ordered list of tasks to be executed, with the information on which nodes they should be run.
Astute receives this data structure and starts running the tasks one by one in the following order:
#. Pre-deploy actions
#. Main deployment tasks
#. Post-deploy actions
Each task reports back whether it succeeded. Astute stops the deployment on any failed task.
.. image:: _images/granularDeployment.png
Task Graph
----------
A task graph is built by Nailgun from *tasks.yaml* files during Fuel Master node bootstrap:
::
fuel rel --sync-deployment-tasks --dir /etc/puppet/
*tasks.yaml* files describe a group of tasks (or a single task).
::
- id: netconfig
type: puppet
groups: [primary-controller, controller, cinder, compute, ceph-osd, zabbix-server, primary-mongo, mongo]
required_for: [deploy]
requires: [hiera]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/netconfig.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
where:
* ``id`` - Each task must have a unique ID.
* ``type`` - Determines how the task should be executed. Currently there are Puppet and exec types.
* ``groups`` - Groups are used to determine on which nodes these tasks should be started and are mostly related to the node roles.
* ``required_for`` - The list of tasks that require this task to start. Can be empty.
* ``requires`` - The list of tasks that are required by this task to start. Can be empty.
* Both the ``requires`` and ``required_for`` fields are used to build the dependency graph and to determine the order of task execution, as the sketch after this list illustrates.
* ``parameters`` - The actual payload of the task. For the Puppet type these can be paths to modules (puppet_modules) and the manifest (puppet_manifest) to apply; the exec type requires the actual command to run.
* ``timeout`` - Determines how long the orchestrator should wait (in seconds) for the task to complete before marking it as failed.
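The following is a minimal sketch, in Python, of how ``requires`` and
``required_for`` can be resolved into an execution order (Kahn's topological
sort); the in-memory task table is a simplification of the *tasks.yaml*
structure above, not Nailgun's actual loader:

.. code-block:: python

    # Resolve task dependencies into an execution order (Kahn's algorithm).
    # The task table mirrors the graph example that follows in this document.
    from collections import defaultdict, deque

    requires = {
        "hiera": [],
        "netconfig": ["hiera"],
        "tools": ["hiera"],
        "hosts": ["netconfig"],
        "firewall": ["netconfig"],
    }

    dependents = defaultdict(list)
    missing = {}
    for task, deps in requires.items():
        missing[task] = len(deps)
        for dep in deps:
            dependents[dep].append(task)

    ready = deque(t for t, n in missing.items() if n == 0)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in dependents[task]:
            missing[nxt] -= 1
            if missing[nxt] == 0:
                ready.append(nxt)

    print(order)  # e.g. ['hiera', 'netconfig', 'tools', 'hosts', 'firewall']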
Graph example
-------------
::
- id: netconfig
type: puppet
groups: [primary-controller, controller, cinder, compute, ceph-osd, zabbix-server, primary-mongo, mongo]
required_for: [deploy_start]
requires: [hiera]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/netconfig.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
::
- id: tools
type: puppet
groups: [primary-controller, controller, cinder, compute, ceph-osd, zabbix-server, primary-mongo, mongo]
required_for: [deploy_start]
requires: [hiera]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/tools.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
- id: hosts
type: puppet
groups: [primary-controller, controller, cinder, compute, ceph-osd, zabbix-server, primary-mongo, mongo]
required_for: [deploy_start]
requires: [netconfig]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/hosts.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
- id: firewall
type: puppet
groups: [primary-controller, controller, cinder, compute, ceph-osd, zabbix-server, primary-mongo, mongo]
required_for: [deploy_start]
requires: [netconfig]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/firewall.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
- id: hiera
type: puppet
groups: [primary-controller, controller, cinder, compute, ceph-osd, zabbix-server, primary-mongo, mongo]
required_for: [deploy_start]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/hiera.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
This graph data will be processed into the following graph when imported into Nailgun. The deploy task is an anchor used to start the graph traversal and is hidden from the image.
.. image:: _images/graph.png
Nailgun will run the hiera task first, then netconfig or tools, and then firewall or hosts. Astute will start each task on those nodes whose roles are present in the groups field of each task.
Modular manifests
-----------------
Starting with Fuel 6.1, granular deployment allows using a number of small manifests instead of the single monolithic one. These small manifests are placed in the ``deployment/puppet/osnailyfacter/modular`` folder and its subfolders. In Fuel 6.0 or older there was a single entry point manifest used -- located at ``deployment/puppet/osnailyfacter/examples/site.pp`` in the `fuel-library <https://github.com/openstack/fuel-library/>`_ repository.
To write a modular manifest, you will need to take all the resources, classes and definitions you are using to deploy your component and place them into a single file. This manifest should be able to do everything that is required for your component.
The system must be in a certain state before you can start your task. For example, the database, Pacemaker, or Keystone may need to be present.
You should also satisfy the missing dependencies. Some of the manifests may have internal dependencies on other manifests and their parts. You will have to either remove these dependencies or make dummy classes to satisfy them.
Modular example
---------------
Here is an example of a modular manifest that installs Apache and creates a basic site.
::
>>> site.pp
$fuel_settings = parseyaml($astute_settings_yaml)
File {
  owner => 'root',
  group => 'root',
  mode  => '0644',
}
package { 'apache' :
  ensure => installed,
}
service { 'apache' :
  ensure => running,
  enable => true,
}
file { '/etc/apache.conf' :
  ensure  => present,
  content => template('apache/config.erb'),
}
$www_root = $fuel_settings['www_root']
file { "${www_root}/index.html" :
  ensure  => present,
  content => 'hello world',
}
As the first line of any granular Puppet manifest, add the following:
::
notice("MODULAR: $$$TASK_ID_OR_NAME$$$")
It will help you debug by finding a place in ``puppet.log`` where your task started.
Now let's split the manifest into several tasks:
::
>>> apache_install.pp
package { 'apache' :
  ensure => installed,
}
>>> apache_config.pp
File {
  owner => 'root',
  group => 'root',
  mode  => '0644',
}
$www_root = hiera('www_root')
file { '/etc/apache.conf' :
  ensure  => present,
  content => template('apache/config.erb'),
}
>>> create_site.pp
File {
  owner => 'root',
  group => 'root',
  mode  => '0644',
}
$www_root = hiera('www_root')
file { "${www_root}/index.html" :
  ensure  => present,
  content => 'hello world',
}
>>> apache_start.pp
service { 'apache' :
  ensure => running,
  enable => true,
}
We have just created several manifests. Each performs just one simple action. First we install an Apache package, then we create a configuration file, then create a sample site, and, finally, start the service. Each of these tasks can now be started separately, together with any other task. We have also replaced ``$fuel_settings`` with hiera calls.
Since there are some dependencies, we cannot start the Apache service without installing the package first, but we can start the service just after the package installation without the configuration and sample site creation.
So there are the following tasks:
* install
* config
* site
* start
* hiera (to enable the hiera function)
A visual representation of the dependency graph will be the following:
.. image:: _images/dependGraph.png
**start**, **config**, and **site** require the package to be installed. **site** and **config** require the **hiera** function to work. Apache should be configured and **site** should be created to start.
Now, let's write a data YAML file to describe this structure:
::
- id: hiera
type: puppet
role: [test]
required_for: [deploy]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/hiera.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
- id: install
type: puppet
role: [test]
required_for: [deploy]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/apache_install.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
- id: config
type: puppet
role: [test]
required_for: [deploy]
requires: [hiera, install]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/apache_config.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
- id: site
type: puppet
role: [test]
required_for: [deploy]
requires: [install, hiera]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/create_site.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
- id: start
type: puppet
role: [test]
required_for: [deploy]
requires: [install, config, site]
parameters:
puppet_manifest: /etc/puppet/modules/osnailyfacter/modular/apache_start.pp
puppet_modules: /etc/puppet/modules
timeout: 3600
Nailgun can process this data file and tell Astute to deploy all the tasks in the required order. Other nodes or other deployment modes may require more tasks or tasks run in different order.
Now, let's say you have a new apache_proxy class, and want to add it to the setup:
::
>>> apache_proxy/init.pp
file { '/etc/apache.conf' :
  owner  => 'root',
  group  => 'root',
  mode   => '0644',
  ensure => present,
  source => 'puppet:///modules/apache/proxy.conf',
} ->
service { 'apache' :
  ensure => running,
  enable => true,
}
This task updates the main Apache configuration as well, and it conflicts with the previous configuration task. It would not be possible to combine them in a single catalog. It also attempts to enable the Apache service, which would produce a duplicate declaration error.
Granular deployment solves this: you can still use both tasks together without having to work around the duplicates or dependency problems.
.. image:: _images/dependGraph02.png
We have just inserted a new proxy task between the **config** and **start** tasks. The proxy task will rewrite the configuration file created in the **config** task, making the **config** task pointless. This setup will still work as expected and we will have a working Apache-based proxy. Apache will be started by the proxy task, but the **start** task will not produce any errors due to Puppet's idempotency.
There are also `granular noop tests <https://ci.fuel-infra.org/job/fuellib_noop_tests/>`_ based on rspec-puppet. This CI will put a -1 on any new Puppet task not covered by tests.
Testing
-------
Testing these manifests is easier than having a single monolithic manifest.
After writing each file you can manually apply it to check if the task works as expected.
If the task is complex enough, it can benefit from automated acceptance testing. These tests can be implemented using any tool you as a developer see fit.
For example, let's try using `http://serverspec.org <http://serverspec.org>`_. This is an rspec extension that is very convenient for server testing.
The only thing the install task does is the package installation and it has no preconditions. The spec file for it may look like this:
::
require 'spec_helper'
describe package('apache') do
it { should be_installed }
end
Running the spec should produce an output similar to the following:
::
Package "nginx"
should be installed
Finished in 0.17428 seconds
1 example, 0 failures
There are many different resource types *serverspec* can work with, and this can easily be extended. Other tasks can be tested with specs like the following:
::
describe service('apache') do
it { should be_enabled }
it { should be_running }
end
describe file('/etc/apache.conf') do
it { should be_file }
its(:content) { should match %r{DocumentRoot /var/www/html} }
end

Contributing to Fuel Library
============================
This chapter explains how to add a new module or project into the Fuel Library,
how to integrate with other components,
and how to avoid different problems and potential mistakes.
The Fuel Library is a very big project
and even experienced Puppet users may have problems
understanding its structure and internal workings.
Adding new modules to fuel-library
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*Case A. Pulling in an existing module*
If you are adding a module that is the work of another project
and is already tracked in a separate repo:
1. Create a review request with an unmodified copy
of the upstream module from which you are working
and *no* other related modifications.
* This review should also contain the commit hash from the upstream repo
in the commit message.
* The review should be evaluated to determine its suitability
and either rejected
(for licensing, code quality, outdated version requested)
or accepted without requiring modifications.
* The review should not include code that calls this new module.
2. Any changes necessary to make it work with Fuel
should then be proposed as a dependent change(s).
*Case B. Adding a new module*
If you are adding a new module that is a work purely for Fuel
and is not tracked in a separate repo,
submit incremental reviews that consist of
working implementations of features for your module.
If you have features that are necessary but do not yet work fully,
then prevent them from running during the deployment.
Once your feature is complete,
submit a review to activate the module during deployment.
Contributing to existing fuel-library modules
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As developers of Puppet modules, we tend to collaborate with the Puppet
OpenStack community. As a result, we contribute to upstream modules all of the
improvements, fixes and customizations we make to improve Fuel as well.
That implies that every contributor must follow Puppet DSL basics,
`puppet-openstack dev docs
<https://wiki.openstack.org/wiki/Puppet-openstack#Developer_documentation>`_
and `Puppet rspec tests
<https://wiki.openstack.org/wiki/Puppet-openstack#Rspec_puppet_tests>`_
requirements.
The most common and general rule is that upstream modules should be modified
only when bugfixes and improvements could benefit everyone in the community.
An appropriate patch should be proposed to the upstream project before
it is proposed to the Fuel project.
In other cases (like applying some very specific custom logic or settings)
a contributor should submit patches to ``openstack::*`` `classes
<https://github.com/openstack/fuel-library/tree/master/deployment/puppet/
openstack>`_
The Fuel library includes custom modules as well as ones forked from upstream
sources. Note that the ``Modulefile``, if one exists, should be used
to recognize whether a given module is a forked upstream one or not.
If there is no ``Modulefile`` in the module's directory, the contributor may
submit a patch directly to this module in the Fuel library.
Otherwise, they should submit the patch to the upstream module first, and once
it is merged or a +2 is received from a core reviewer, the patch should be backported to
the Fuel library as well. Note that the patch submitted for the Fuel library should
contain in its commit message the upstream commit SHA or a link to the GitHub pull request
(if the module is not on git.openstack.org) or the Change-Id of the Gerrit patch.
The Puppet modules structure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Code that is contributed into the Fuel Library
should be organized into a Puppet module.
A module is a self-contained set of Puppet code
that is usually made to perform a specific function.
For example, you could have a module for each service
you are going to configure or for every part of your project.
Usually it is a good idea to make a module independent
but sometimes it may require or be required by other modules.
You can think of a module as a sort of library.
The most important part of every Puppet module is its **manifests** folder.
This folder contains Puppet classes and definitions
which also contain resources managed by this module.
Modules and classes also form namespaces.
Each class or definition should be placed into a single file
inside the manifests folder
and this file should have the same name as the class or definition.
The module should have a top level class
that serves as the module's entry point
and is named the same as the module.
This class should be placed into the *init.pp* file.
This example module shows the standard structure
that every Puppet module should follow::
example
example/manifests/init.pp
example/manifests/params.pp
example/manifests/client.pp
example/manifests/server
example/manifests/server/vhost.pp
example/manifests/server/service.pp
example/templates
example/templates/server.conf.erb
example/files
example/files/client.data
The first file in the *manifests* folder is named *init.pp*
and should contain the entry point class of this module.
This class should have the same name as the module::
class example {
}
The second file is *params.pp*.
This file is not mandatory but is often used
to store different configuration values and parameters
that are used by other classes of the module.
For example, it could contain the service name and package name
of our hypothetical example module.
Conditional statements might be included
if you need to change default values in different environments.
The *params* class should be named as a child
to the module's namespace as are all other classes of the module::
class example::params {
$service = 'example'
$server_package = 'example-server'
$client_package = 'example-client'
$server_port = '80'
}
All other files inside the manifests folder
contain classes as well and can perform any action
you might want to identify as a separate piece of code.
These are generally sub-classes that do not require their users
to configure the parameters explicitly,
or optional classes that are not required in all cases.
In the following example,
we create a client class that defines a client package
that will be installed and placed into a file called *client.pp*::
class example::client {
include example::params
package { $example::params::client_package :
ensure => installed,
}
}
As you can see, we have used the package name from the params class.
Consolidating all values that might require editing into a single class,
as opposed to hardcoding them,
allows you to reduce the effort required
to maintain and develop the module further in the future.
If you are going to use any values from the params class,
you should include it first to force its code
to execute and create all required variables.
You can add more levels into the namespace structure if you want.
Let's create a server folder inside our manifests folder
and add the *service.pp* file there.
It would be responsible for installing and running
the server part of our imaginary software.
Placing the class inside the subfolder adds one level
to the name of the contained class::
class example::server::service (
$port = $example::params::server_port,
) inherits example::params {
$package = $example::params::server_package
$service = $example::params::service
package { $package :
ensure => installed,
}
service { $service :
ensure => running,
enable => true,
hasstatus => true,
hasrestart => true,
}
file { 'example_config' :
ensure => present,
path => '/etc/example.conf',
owner => 'root',
group => 'root',
mode => '0644',
content => template('example/server.conf.erb'),
}
file { 'example_config_dir' :
ensure => directory,
path => '/etc/example.d',
owner => 'example',
group => 'example',
mode => '0755',
}
Package[$package] -> File['example_config', 'example_config_dir'] ~>
Service[$service]
}
This example is a bit more complex. Let's see what it does.
Class *example::server::service* is **parametrized**
and can accept one parameter:
the port to which the server process should bind.
It also uses a popular "smart defaults" hack.
This class inherits the params class and uses its default values
only if no port parameter is provided.
In this case, you cannot use *include params*
to load the default values
because it is called by the *inherits example::params* clause
of the class definition.
Inside our class, we take several variables from the params class
and declare them as variables of the local scope.
This is a convenient practice to make their names shorter.
Next we declare our resources.
These resources are package, service, config file and config dir.
The package resource installs the package
whose name is taken from the variable
if it is not already installed.
File resources create the config file and config dir;
the service resource starts the daemon process and enables its autostart.
The final part of this class is the *dependency* declaration.
We have used a "chain" syntax to specify the order of evaluation
of these resources.
It is important to install the package first,
then install the configuration files
and only then start the service.
Trying to start the service before installing the package will definitely fail.
So we need to tell Puppet that there are dependencies between our resources.
The arrow operator that has a tilde instead of a minus sign (~>)
means not only dependency relationship
but also *notifies* the object to the right of the arrow to refresh itself.
In our case, any changes in the configuration file
would make the service restart and load a new configuration file.
Service resources react to the notification event
by restarting the managed service.
Other resources may instead perform other supported actions.
The configuration file content is generated by the template function.
Templates are text files that use Ruby's erb language tags
and are used to generate a text file using pre-defined text
and some variables from the manifest.
These template files are located inside the **templates** folder
of the module and usually have the *erb* extension.
When a template function is called
with the template name and module name prefix,
Puppet tries to load this template and compile it
using variables from the local scope of the class function
from which the template was called.
For example, the following template, saved in
the templates folder as the *server.conf.erb* file,
sets the port to which our service binds::
bind_port = <%= @port %>
The template function will replace the 'port' tag
with the value of the port variable from our class
during Puppet's catalog compilation.
If the service needs several virtual hosts,
you need to define **definitions**,
which are similar to classes but, unlike classes,
they have titles like resources do
and can be used many times with different titles
to produce many instances of the managed resources.
Classes cannot be declared several times with different parameters.
Definitions are placed in single files inside the manifests directories
just as classes are
and are named in a similar way, using the namespace hierarchy.
Let's create our vhost definition::
define example::server::vhost (
$path = '/var/data',
) {
include example::params
$config = "/etc/example.d/${title}.conf"
$service = $example::params::service
file { $config :
ensure => present,
owner => 'example',
group => 'example',
mode => '0644',
content => template('example/vhost.conf.erb'),
}
File[$config] ~> Service[$service]
}
This defined type only creates a file resource
with its name populated by the title
that is used when it gets defined.
It sets the notification relationship with the service
to make it restart when the vhost file is changed.
This defined type can be used by other classes
like a simple resource type to create as many vhost files as we need::
example::server::vhost { 'mydata' :
path => '/path/to/my/data',
}
Defined types can form relationships in the same way as resources do
but you need to capitalize all elements of the path to make the reference::
File['/path/to/my/data'] -> Example::Server::Vhost['mydata']
This works for text files, but binary files must be handled differently.
Binary files, or text files that will always be the same,
can be placed into the **files** directory of the module
and then be taken by the file resource.
To illustrate this, let's add a file resource for a file
that contains some binary data that must be distributed
in our client package.
The file resource is placed in the *example::client* class::
file { 'example_data' :
path => '/var/lib/example.data',
owner => 'example',
group => 'example',
mode => '0644',
source => 'puppet:///modules/example/client.data',
}
We have specified source as a special puppet URL scheme
with the module's and the file's name.
This file will be placed in the specified location when Puppet runs.
On each run, Puppet will check this file's checksum,
overwriting it if the checksum changes;
note that this method should not be used with mutable data.
Puppet's fileserving works in both client-server and masterless modes.
We now have all classes and resources that are required
to manage our hypothetical example service.
Our example class defined inside *init.pp* is still empty
so we can use it to declare all other classes
to put everything together::
class example {
include example::params
include example::client
class { 'example::server::service' :
port => '100',
}
example::server::vhost { 'site1' :
path => '/data/site1',
}
example::server::vhost { 'site2' :
path => '/data/site2',
}
example::server::vhost { 'test' :
path => '/data/test',
}
}
Now we have the entire module packed inside the *example* class, and we can simply
include this class on any node where we want our service running.
The declaration of the parametrized class also overrides the default port number from
the params file, and we have three separate virtual hosts for our service. The client
package is also included in this class.
Adding Python code to fuel-library
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All Python code that is added to fuel-library must pass style checks and have
tests written.
The whole test suite is run by `python_run_tests.sh <docs/develop/module_structure.rst>`_.
It uses a virtualenv in which all Python modules from
`python-test-requirements.txt <https://github.com/openstack/fuel-library/blob/master/utils/jenkins/python-test-requirements.txt>`_
are installed. If tests need any third-party library, it should be added as a requirement into this file.
Before starting any tests for Python code, the test suite runs style checks on any Python code
found in fuel-library. Those checks are performed by `flake8` (for more information, see the
`flake8 documentation <http://flake8.readthedocs.org/en/2.3.0/>`_)
with additional `hacking` checks installed. Those checks are a set of guidelines for Python code.
More information about those guidelines can be found in the `hacking documentation <http://flake8.readthedocs.org/en/2.3.0/>`_.
If, for some reason, you need to disable style checks in a given file, you can add the following
line at the beginning of the file::
# flake8: noqa
After the style checks, the test suite will execute Python tests using the `py.test <http://pytest.org>`_ test runner.
`py.test` will look for Python files whose names begin with 'test\_' and will search for the tests in them.
Documentation on how to write tests can be found in
`the official Python documentation <https://docs.python.org/2/library/unittest.html>`_ and the
`py.test documentation <http://pytest.org/latest/assert.html>`_.
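For instance, a file named *test_example.py* containing a function whose name
starts with 'test\_' is all py.test needs to discover and run a test; the
module and function below are purely illustrative:

.. code-block:: python

    # test_example.py -- discovered automatically by py.test because both the
    # file name and function name start with 'test_'. Names are illustrative.
    def normalize_mac(mac):
        return mac.lower().replace("-", ":")

    def test_normalize_mac():
        assert normalize_mac("52-54-00-78-55-68") == "52:54:00:78:55:68"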

.. _nailgun:
Nailgun
=======
.. toctree::
nailgun/tree

Bonding in UI/Nailgun
=====================
Abstract
--------
NIC bonding allows you to aggregate multiple physical links into one link
to increase speed and provide fault tolerance.
Design docs
-----------
https://etherpad.openstack.org/p/fuel-bonding-design
Fuel Support
------------
The L23network Puppet module supports both OVS and native Linux bonding,
so we can use it for both NovaNetwork and Neutron deployments. Only native
OVS bonding (Neutron only) is currently implemented in Nailgun. VLAN splinters cannot
be used on bonds at the moment. Three modes are supported: 'active-backup',
'balance-slb', and 'lacp-balance-tcp' (see nailgun.consts.OVS_BOND_MODES).
Deployment serialization
------------------------
For details on deployment serialization for neutron, see: https://etherpad.openstack.org/p/neutron-orchestrator-serialization
Changes related to bonding are in the "transformations" section:
1. "add-bond" section
::
{
"action": "add-bond",
"name": "bond-xxx", # name is generated in UI
"interfaces": [], # list of NICs; ex: ["eth1", "eth2"]
"bridge": "br-xxx",
"properties": [] # info on bond's policy, mode; ex: ["bond_mode=active-backup"]
}
2. Instead of creating separate OVS bridges for every bonded NIC we need to create one bridge for the bond itself
::
{
"action": "add-br",
"name": "br-xxx"
}
REST API
--------
NodeNICsHandler and NodeCollectionNICsHandler are used for bond creation,
update, and removal. Operations with bonds and network assignment are done in
a single-request fashion: the creation of a bond and the appropriate network
reassignment are done in one request. Request parameters must contain
sufficient and consistent data for constructing the new interface topology and
properly assigning all of the node's networks.
Request/response data example::
[
{
"name": "ovs-bond0", # only name is set for bond, not id
"type": "bond",
"mode": "balance-slb", # see nailgun.consts.OVS_BOND_MODES for modes list
"slaves": [
{"name": "eth1"}, # only “name” must be in slaves list
{"name": "eth2"}],
"assigned_networks": [
{
"id": 9,
"name": "public"
}
]
},
{
"name": "eth0",
"state": "up",
"mac": "52:54:00:78:55:68",
"max_speed": null,
"current_speed": null,
"assigned_networks": [
{
"id": 1,
"name": "fuelweb_admin"
},
{
"id": 10,
"name": "management"
},
{
"id": 11,
"name": "storage"
}
],
"type": "ether",
"id": 5
},
{
"name": "eth1",
"state": "up",
"mac": "52:54:00:88:c8:78",
"max_speed": null,
"current_speed": null,
"assigned_networks": [], # empty for bond slave interfaces
"type": "ether",
"id": 2
},
{
"name": "eth2",
"state": "up",
"mac": "52:54:00:03:d1:d2",
"max_speed": null,
"current_speed": null,
"assigned_networks": [], # empty for bond slave interfaces
"type": "ether",
"id": 1
}
]
The following fields are required in the request body for a bond interface:
name, type, mode, slaves.
The following fields are required in the request body for a NIC:
id, type.
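As a hedged client-side sketch, submitting such a single-request topology
change from Python could look like the following; the URL pattern, address,
and node id are assumptions based on the handler names above, not a
documented path:

.. code-block:: python

    # Hypothetical single-request bond creation against NodeNICsHandler.
    # The URL and node id are assumptions; the body follows the example above.
    import requests

    interfaces = [
        {"name": "ovs-bond0", "type": "bond", "mode": "balance-slb",
         "slaves": [{"name": "eth1"}, {"name": "eth2"}],
         "assigned_networks": [{"id": 9, "name": "public"}]},
        {"id": 2, "type": "ether", "assigned_networks": []},  # eth1, now a slave
        {"id": 1, "type": "ether", "assigned_networks": []},  # eth2, now a slave
    ]

    response = requests.put(
        "http://10.20.0.2:8000/api/nodes/4/interfaces", json=interfaces)
    response.raise_for_status()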
Nailgun DB
----------
There are now separate models for bond interfaces and NICs: NodeBondInterface and
NodeNICInterface. A node's interfaces can be accessed through Node.nic_interfaces
and Node.bond_interfaces separately, or all together through Node.interfaces (a
read-only property).
The relationship between them (bond:NIC ~ 1:M) is expressed in the "slaves" field of
the NodeBondInterface model.
Two more new fields in NodeBondInterface are "flags" and "mode".
A bond's "mode" can accept values from nailgun.consts.OVS_BOND_MODES.
A bond's "flags" are not in use yet. The "type" property (read-only) indicates whether
it is a bond or a NIC (see nailgun.consts.NETWORK_INTERFACE_TYPES).

Nailgun Extensions
__________________
Overview of extensions
======================
Nailgun extensions provide a capability for Fuel Developers to extend Fuel
features. Extensions were introduced to provide a *pythonic way* of adding
integrations with external services, extending existing features, or adding
new features **without** changing the Nailgun source code.
A Nailgun extension can execute its methods on specific events
such as ``on_node_create`` or ``on_cluster_delete`` (more about event handlers
in the `Available Events`_ section) and can also change deployment and
provisioning data just before it is sent to the orchestrator by means of
Data Pipeline classes.
.. note::
The extensions mechanism does not provide a sufficient level
of isolation. Therefore, the extension may not work after you upgrade Fuel.
On the contrary, Fuel plugins provide backward compatibility and a friendly
UI for the end user. Use plugins for all changes in the system performed
by the Fuel user.
Required properties
===================
All Nailgun extensions must populate the following class variables:
* ``name`` - a string which will be used inside Nailgun to identify the
extension. It should consist only of lowercase letters with "_" (underscore)
separator and digits.
* ``version`` - a string with version. It should follow semantic versioning:
http://semver.org/
* ``description`` - a short text which briefly describes the actions that the
extension performs.
Available events
================
An extension can execute event handlers on specific events. The
available handlers are listed below:
.. list-table::
:widths: 10 10
:header-rows: 1
* - Method
- Event
* - ``on_node_create``
- Node has been created
* - ``on_node_update``
- Node has been updated
* - ``on_node_reset``
- Node has been reset
* - ``on_node_delete``
- Node has been deleted
* - ``on_node_collection_delete``
- Collection of nodes has been deleted
* - ``on_cluster_delete``
- Cluster has been deleted
* - ``on_before_deployment_check``
- Called right before running "before deployment check task"
REST API handlers
=================
Nailgun Extensions also provide a way to add additional API endpoints.
To add an extension-specific handler, subclass from::
nailgun.api.v1.handlers.base.BaseHandler
The second step is to register the handler by providing the ``urls`` list in
the extension class:
.. code-block:: python
urls = [
{'uri': r'/example_extension/(?P<node_id>\d+)/?$',
'handler': ExampleNodeHandler},
{'uri': r'/example_extension/(?P<node_id>\d+)/properties/',
'handler': NodeDefaultsDisksHandler},
]
As you can see, you need to provide a list of dicts with the following keys:
.. list-table::
:widths: 10 10
:header-rows: 1
* - key
- value
* - ``uri``
- a regular expression (string) for the URL path
* - ``handler``
- handler class
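A matching handler class might be sketched as follows; Nailgun handlers
follow a web.py style in which HTTP verbs map to methods, but the method body
and return format here are assumptions for illustration:

.. code-block:: python

    # Illustrative handler for the first URL in the example above.
    from nailgun.api.v1.handlers.base import BaseHandler


    class ExampleNodeHandler(BaseHandler):

        def GET(self, node_id):
            # Look up extension-specific data for the node; the response
            # shape below is an assumption, not a documented format.
            return {"node_id": int(node_id), "example": "value"}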
Database interaction
====================
You can use the Nailgun database to store the data needed by
a Nailgun extension. To do so, you must provide alembic migration scripts, which
should be placed in::
extension_module/alembic_migrations/migrations/
Where ``extension_module`` is the one where the file with your extension class
is placed.
You can also change this directory by overriding the classmethod::
alembic_migrations_path
It should return an absolute path (string) to the alembic migrations
directory.
Additionally, use a table name with an extension-specific prefix in models
classes and alembic migration scripts. We recommend that you use the
``table_prefix`` extension method to retrieve the prefix (string).
.. note::
Do not use the direct db calls to Nailgun core tables in the extension
class. Use the ``nailgun.objects`` module which ensures compatibility
between the Nailgun DB and the configuration implemented in your extension.
There **must be no** relations between extension models and core models.
Extension Data Pipelines
========================
If you want to change the deployment or provisioning data just before it is
sent to the orchestrator, use Extension Data Pipelines.
A Data Pipeline is a class that inherits from::
nailgun.extensions.BasePipeline
BasePipeline provides two methods which you can override:
* ``process_provisioning``
* ``process_deployment``
Both methods take the following parameters:
* ``data`` - serialized data which will be sent to the orchestrator. Data
**does not include** nodes data which was defined by User in
``replaced_deployment_info`` or in ``replaced_provisioning_info``.
* ``cluster`` - a cluster instance for which the data was serialized.
* ``nodes`` - nodes instances for which the data was serialized. Nodes list
**does not include** node instances which were filtered out in ``data``
parameter.
* ``**kwargs`` - additional kwargs - must be in method definition to provide
backwards-compatibility for future (small) changes in extensions API.
Both methods must return the ``data`` dict so it can be processed by other
pipelines.
To enable pipelines, add the ``data_pipelines`` variable to your extension
class:
.. code-block:: python
    class ExamplePipelineOne(BasePipeline):

        @classmethod
        def process_provisioning(cls, data, cluster, nodes, **kwargs):
            data['new_field'] = 'example_value'
            return data

        @classmethod
        def process_deployment(cls, data, cluster, nodes, **kwargs):
            data['new_field'] = 'example_value'
            return data


    class ExamplePipelineTwo(BasePipeline):

        @classmethod
        def process_deployment(cls, data, cluster, nodes, **kwargs):
            data['new_field2'] = 'example_value2'
            return data


    class ExampleExtension(BaseExtension):
        ...
        data_pipelines = [
            ExamplePipelineOne,
            ExamplePipelineTwo,
        ]
        ...
Pipeline classes will be executed **in the order they are defined** in the
``data_pipelines`` variable.
How to install and plug in extensions
=====================================
To use the extensions system in Nailgun, implement an extension class which
is a subclass of::
nailgun.extensions.BaseExtension
The class must be placed in a separate module which defines ``entry_points`` in
its ``setup.py`` file.
The extension entry point should use the Nailgun extensions namespace, which is::
nailgun.extensions
An example ``setup.py`` file with ``ExampleExtension`` may look like this:
.. code-block:: python
    from setuptools import setup, find_packages

    setup(
        name='example_package',
        version='1.0',
        description='Demonstration package for Nailgun Extensions',
        author='Fuel Nailgman',
        author_email='fuel@nailgman.com',
        url='http://example.com',
        classifiers=['Development Status :: 3 - Alpha',
                     'License :: OSI Approved :: Apache Software License',
                     'Programming Language :: Python',
                     'Programming Language :: Python :: 2',
                     'Environment :: Console',
                     ],
        packages=find_packages(),
        entry_points={
            'nailgun.extensions': [
                'ExampleExtension = '
                'example_package.nailgun_extensions:ExampleExtension',
            ],
        },
    )
Now, to enable the extension, it is enough to run::
python setup.py install
or::
pip install .
The extension will now be discovered by Nailgun automatically after a restart.
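Under the hood, discovery boils down to iterating over the
``nailgun.extensions`` entry-point namespace. A rough sketch of such a lookup
(not the exact Nailgun code) is:

.. code-block:: python

    from pkg_resources import iter_entry_points


    def collect_extensions():
        # Load every class registered under the 'nailgun.extensions'
        # entry-point namespace by any installed package.
        return [ep.load() for ep in iter_entry_points('nailgun.extensions')]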
Example Extension with Pipeline - additional logging
====================================================
.. code-block:: python
    import datetime
    import logging

    from nailgun.extensions import BaseExtension
    from nailgun.extensions import BasePipeline

    logger = logging.getLogger(__name__)


    class TimeStartedPipeline(BasePipeline):

        @classmethod
        def process_provisioning(cls, data, cluster, nodes, **kwargs):
            now = datetime.datetime.now()
            data['time_started'] = 'provisioning started at {}'.format(now)
            return data

        @classmethod
        def process_deployment(cls, data, cluster, nodes, **kwargs):
            now = datetime.datetime.now()
            data['time_started'] = 'deployment started at {}'.format(now)
            return data


    class ExampleExtension(BaseExtension):

        name = 'additional_logger'
        version = '1.0.0'
        description = 'Additional Logging Extension'

        data_pipelines = [
            TimeStartedPipeline,
        ]

        @classmethod
        def on_node_create(cls, node):
            logger.debug('Node %s has been created', node.id)

        @classmethod
        def on_node_update(cls, node):
            logger.debug('Node %s has been updated', node.id)

        @classmethod
        def on_node_reset(cls, node):
            logger.debug('Node %s has been reset', node.id)

        @classmethod
        def on_node_delete(cls, node):
            logger.debug('Node %s has been deleted', node.id)

        @classmethod
        def on_node_collection_delete(cls, node_ids):
            logger.debug('Nodes %s have been deleted', ', '.join(node_ids))

        @classmethod
        def on_cluster_delete(cls, cluster):
            logger.debug('Cluster %s has been deleted', cluster.id)

        @classmethod
        def on_before_deployment_check(cls, cluster):
            logger.debug('Cluster %s will be deployed soon', cluster.id)
.. _dev-create-partition:
Creating Partitions on Nodes
============================
Fuel generates Anaconda Kickstart scripts for Red Hat based systems and
preseed files for Ubuntu to partition block devices on new nodes. Most
of the work is done in the pmanager.py_ Cobbler script using the data
from the "ks_spaces" variable generated by the Nailgun VolumeManager_
class based on the volumes metadata defined in the openstack.yaml_
release fixture.
.. _pmanager.py: https://github.com/openstack/fuel-library/blob/master/deployment/puppet/cobbler/templates/scripts/pmanager.py
.. _VolumeManager: https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/extensions/volume_manager/manager.py
.. _openstack.yaml: https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml
Volumes are created following best practices for OpenStack and other
components. The following volume types are supported:
vg
an LVM volume group that can contain one or more volumes with type set
to "lv"
partition
plain non-LVM partition
raid
a Linux software RAID-1 array of LVM volumes
A typical slave node will always have an "os" volume group and one or more
volumes of other types, depending on the roles assigned to that node and
the role-to-volumes mapping defined in the "volumes_roles_mapping"
section of openstack.yaml.
There are a few different ways to add another volume to a slave node:
#. Add a new logical volume definition to one of the existing LVM volume
groups.
#. Create a new volume group containing your new logical volumes.
#. Create a new plain partition.
Adding an LV to an Existing Volume Group
----------------------------------------
If you need to add a new volume to an existing volume group, for example
"os", your volume definition in openstack.yaml might look like this::
- id: "os"
type: "vg"
min_size: {generator: "calc_min_os_size"}
label: "Base System"
volumes:
- mount: "/"
type: "lv"
name: "root"
size: {generator: "calc_total_root_vg"}
file_system: "ext4"
- mount: "swap"
type: "lv"
name: "swap"
size: {generator: "calc_swap_size"}
file_system: "swap"
- mount: "/mnt/some/path"
type: "lv"
name: "LOGICAL_VOLUME_NAME"
size:
generator: "calc_LOGICAL_VOLUME_size"
generator_args: ["arg1", "arg2"]
file_system: "ext4"
Make sure that your logical volume name ("LOGICAL_VOLUME_NAME" in the
example above) is not the same as the volume group name ("os"), and
refer to the current version of openstack.yaml_ for the up-to-date format.
Adding Generators to Nailgun VolumeManager
------------------------------------------
The "size" field in a volume definition can be defined either directly, as
an integer number of megabytes, or indirectly via a so-called generator. A
generator is a Python lambda that can be called to calculate the logical
volume size dynamically. In the example above, size is defined as a
dictionary with two keys: "generator" is the name of the generator lambda
and "generator_args" is the list of arguments that will be passed to the
generator lambda.
Generators are defined in the ``call_generator`` method of the
VolumeManager class. A new volume generator, such as
'NEW_GENERATOR_TO_CALCULATE_SIZE', needs to be added to the generators
dictionary inside this method.
.. code-block:: python
    class VolumeManager(object):
        ...
        def call_generator(self, generator, *args):
            generators = {
                ...
                'NEW_GENERATOR_TO_CALCULATE_SIZE': lambda: 1000,
                ...
            }
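Since "generator_args" are forwarded to the lambda, a generator that actually
uses its arguments could look like the following hypothetical entry (the name
and argument semantics are invented for illustration):

.. code-block:: python

    # Hypothetical generator entry that uses its "generator_args":
    # size = number of chunks * chunk size in MB.
    generators = {
        'calc_example_size': lambda chunks, chunk_mb: chunks * chunk_mb,
    }
    assert generators['calc_example_size'](4, 256) == 1024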
Creating a New Volume Group
---------------------------
Another way to add a new volume to slave nodes is to create a new volume
group and to define one or more logical volumes inside the volume group
definition::
- id: "NEW_VOLUME_GROUP_NAME"
type: "vg"
min_size: {generator: "calc_NEW_VOLUME_NAME_size"}
label: "Label for NEW VOLUME GROUP as it will be shown on UI"
volumes:
- mount: "/path/to/mount/point"
type: "lv"
name: "LOGICAL_VOLUME_NAME"
size:
generator: "another_generator_to_calc_LOGICAL_VOLUME_size"
generator_args: ["arg"]
file_system: "xfs"
Creating a New Plain Partition
------------------------------
Some node roles may be incompatible with LVM and require plain
partitions. If that's the case, you may have to define a standalone
volume with type "partition" instead of "vg"::
- id: "NEW_PARTITION_NAME"
type: "partition"
min_size: {generator: "calc_NEW_PARTITION_NAME_size"}
label: "Label for NEW PARTITION as it will be shown on UI"
mount: "none"
disk_label: "LABEL"
file_system: "xfs"
Note how you can set the mount point to "none" and define a disk label to
identify the partition instead. It's only possible to set a disk label on
a formatted partition, so you have to set the "file_system" parameter to use
disk labels.
Updating the Node Role to Volumes Mapping
-----------------------------------------
Unlike a new logical volume added to a pre-existing logical volume
group, a new logical volume group or partition will not be allocated on
the node unless it is included in the role-to-volumes mapping
corresponding to one of the node's roles, like this::
volumes_roles_mapping:
controller:
- {allocate_size: "min", id: "os"}
- {allocate_size: "all", id: "image"}
compute:
...
* *controller* - a role for which partitioning information is given
* *id* - the id of a volume group or plain partition
* *allocate_size* - can be "min" or "all":

  * *min* - allocate the volume with minimal size
  * *all* - allocate all free space for the volume; if several volumes
    have this key, then the free space is divided equally between them
Setting Volume Parameters from Nailgun Settings
-----------------------------------------------
In addition to VolumeManager generators, it is also possible to define
sizes or other parameters in the Nailgun configuration file
(/etc/nailgun/settings.yaml). All fixture files are templated using the
Jinja2 templating engine just before being loaded into the Nailgun database.
For example, we can define the mount point for a new volume as follows::
"mount": "{{settings.NEW_LOGICAL_VOLUME_MOUNT_POINT}}"
Of course, *NEW_LOGICAL_VOLUME_MOUNT_POINT* must be defined in the
settings file.
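Conceptually, the templating step behaves like the following standalone
sketch; the real loading code lives elsewhere in Nailgun, and the
``settings`` dict here is a stand-in for the values from
/etc/nailgun/settings.yaml:

.. code-block:: python

    from jinja2 import Template

    # Stand-in for values loaded from /etc/nailgun/settings.yaml.
    settings = {'NEW_LOGICAL_VOLUME_MOUNT_POINT': '/mnt/example'}

    snippet = '"mount": "{{ settings.NEW_LOGICAL_VOLUME_MOUNT_POINT }}"'
    print(Template(snippet).render(settings=settings))
    # -> "mount": "/mnt/example"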
Nailgun is the core of FuelWeb.
To allow enterprise features to be connected easily,
and the open source community to extend it as well, Nailgun must
have a simple, very well defined and documented core,
with great pluggable capabilities.
Reliability
___________
All software contains bugs and may fail, and Nailgun is no exception to this rule.
In reality, it is not possible to cover all failure scenarios,
or even to come close to 100%.
The question is how we can design the system so that a bug in one module
cannot damage the whole system.
An example from Nailgun's past:
the agent collected hardware information, including the current_speed
parameter for the interfaces.
One of the interfaces had current_speed=0. At the registration attempt,
Nailgun's validator checked that current_speed > 0 and raised an InvalidData
exception, which blocked node discovery.
current_speed is one of the attributes which we can easily skip: it is not even
used for deployment in any way at the moment, and serves only as information
provided to the user.
Yet it prevented node discovery, and it made the server unusable.
Another example: due to the coincidence of a bug and wrong metadata on one of
the nodes, a GET request on that node would return 500 Internal Server Error.
It looks like this should affect only that one node, and logically we could
remove the failing node from the environment to get it discovered again.
However, the UI and API handlers were written in the following way:
* the UI calls /api/nodes to fetch info about all nodes just to show how many nodes are allocated, and how many are not
* NodesCollectionHandler would return 500 if any of the nodes raised an exception
It is simple to guess that the whole UI was completely broken by just one
failed node. It was impossible to do any action in the UI.
These two examples give us a starting point for rethinking how to avoid
a Nailgun crash just because one of the meta attributes is wrong.
First, we must divide the meta attributes discovered by the agent into two
categories:

* absolutely required for node discovery (e.g. MAC address)
* not required for discovery

  * required for deployment (e.g. disks)
  * not required for deployment (e.g. current_speed)
Second, we must refactor the UI to fetch only the information required,
not the whole DB just to show two numbers. To be more specific,
we have to make sure that issues in one environment do not
affect other environments. Such a refactoring will require additional
handlers in Nailgun, as well as some additions, such as pagination, etc.
From the Nailgun side, it is a bad idea to fail the whole CollectionHandler
if one of the objects fails to calculate some attribute. My (mihgen) idea is
to simply set the attribute to null if it cannot be calculated, and program
the UI to handle it properly.
Unit tests must help in testing this.
Another idea is to limit the /api/nodes,
/api/networks and other calls
to work only if the cluster_id param is provided, whether set to None or to
one of the cluster ids.
That way, we can be sure that one environment will not be able to break the
whole UI.
Creating roles
==============
Each release has its own role list which can be customized. A plain list of
roles is stored in the "roles" section of each release in the openstack.yaml_::
roles:
- controller
- compute
- cinder
The order in which the roles are listed here determines the order in which
they are displayed on the UI.
For each role in this list there should also be an entry in the
"roles_metadata" section. It defines the role name, description, and
conflicts with other roles::
roles_metadata:
controller:
name: "Controller"
description: "..."
conflicts:
- compute
compute:
name: "Compute"
description: "..."
conflicts:
- controller
cinder:
name: "Storage - Cinder LVM"
description: "..."
"conflicts" section should contain a list of other roles that cannot be placed
on the same node. In this example, the "controller" and "compute" roles
cannot be combined; a small validation sketch follows below.
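Since the conflict rules are plain data, checking a role assignment against
them is straightforward; a hypothetical validation helper might look like
this:

.. code-block:: python

    def find_conflicts(assigned_roles, roles_metadata):
        """Return (role, conflicting_role) pairs within one assignment."""
        pairs = []
        for role in assigned_roles:
            for other in roles_metadata.get(role, {}).get('conflicts', []):
                if other in assigned_roles:
                    pairs.append((role, other))
        return pairs


    roles_metadata = {
        'controller': {'conflicts': ['compute']},
        'compute': {'conflicts': ['controller']},
        'cinder': {},
    }
    assert find_conflicts(['controller', 'compute'], roles_metadata) == [
        ('controller', 'compute'), ('compute', 'controller')]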
Roles restrictions
------------------
You should take the following role restrictions into consideration:
#. Controller
* There should be at least one controller. The `Not enough deployed
controllers` error occurs when you have 1 controller installed
and want to redeploy it. During redeployment, the controller will
be taken offline, so for some time the cluster will not have any
operating controller, which is wrong.
* If we are using the simple multinode mode, then we cannot add more
than one controller.
* In HA mode, we can add as many controllers as we like, though
it is recommended to add at least 3.
* The controller role cannot be combined with compute.
#. Compute
* It is recommended to have at least one compute in a non-vCenter env
(https://bugs.launchpad.net/fuel/+bug/1381613: note that this is a
bug in the UI and not yaml-specific). Beginning
with Fuel 6.1, vCenter-related limitations are no longer present.
* Computes cannot be combined with controllers.
* Computes cannot be added if vCenter is chosen as the hypervisor.
#. Cinder
* It is impossible to add Cinder nodes to an environment with Ceph RBD.
* At least one Cinder node is recommended
* Cinder LVM needs to be enabled on the *Settings* tab of the Fuel web UI.
#. MongoDB
* Cannot be added to already deployed environment.
* Can be added only if Ceilometer is enabled.
* Cannot be combined with Ceph OSD and Compute.
* For the simple mode there has to be 1 Mongo node; for HA, at least 3.
* It is not allowed to choose the MongoDB role for a node if an external
MongoDB setup is used.
#. Zabbix
* Only available in experimental ISO.
* Cannot be combined with any other roles.
* Only one Zabbix node can be assigned in an environment.
#. Ceph
* Cannot be used with Mongo and Zabbix.
* The minimal number of Ceph nodes is equal to
the `Ceph object replication factor` value from the *Settings* tab
of the Fuel web UI.
* Ceph cannot be added if vCenter is chosen as a hypervisor
and `volumes_ceph`, `images_ceph` and `ephemeral_ceph` settings
are all False.
#. Ceilometer
* Either a node with the MongoDB role or an external MongoDB setup is
required.
.. _openstack.yaml: https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml
Extending OpenStack Settings
============================
Each release has a list of OpenStack settings that can be customized.
The settings configuration is stored in the ``attributes_metadata.editable``
release section in the openstack.yaml_ file.
Settings are divided into groups. Each group should have a ``metadata`` section
with the following attributes::
metadata:
toggleable: true
enabled: false
weight: 40
group: "security"
* ``toggleable`` defines the ability to enable/disable the whole setting group
on the UI (a checkbox control is presented near the setting group label).
* ``enabled`` indicates whether the group is checked on the UI.
* ``weight`` defines the order in which this group is displayed on the tab.
* ``restrictions``: see restrictions_.
* ``group`` identifies which subtab on the UI this group of settings will be
displayed on.
Other sections of a setting group represent separate settings. A setting
structure includes the following attributes::
syslog_transport:
value: "tcp"
label: "Syslog transport protocol"
description: ""
weight: 30
type: "radio"
values:
- data: "udp"
label: "UDP"
description: ""
restrictions:
- "cluster:net_provider != 'neutron'"
- data: "tcp"
label: "TCP"
description: ""
regex:
source: "^[A-z0-9]+$"
error: "Invalid data"
min: 1
max: 3
group: "logging"
* ``label`` is a setting title that is displayed on the UI.
* ``weight`` defines the order in which this setting is displayed in its group.
This attribute is recommended.
* ``type`` defines the type of UI control to use for the setting.
The following types are supported:
* ``text`` - single line input
* ``number`` - number input
* ``password`` - password input
* ``textarea`` - multiline input
* ``checkbox`` - multiple-options selector
* ``radio`` - single-option selector
* ``select`` - drop-down list
* ``hidden`` - invisible input
* ``file`` - file contents input
* ``text_list`` - multiple single-line text inputs
* ``textarea_list`` - multiple multi-line text inputs
* ``regex`` section is applicable to settings of the "text" type. ``regex.source``
is used when validating with a regular expression. ``regex.error`` contains
a warning displayed near an invalid field.
* ``restrictions``: see restrictions_.
* ``description`` section should also contain information about setting
restrictions (dependencies, conflicts).
* ``values`` list is needed for settings of the "radio" or "select" type to
declare their possible values. Options from the ``values`` list also support
dependencies and conflicts declarations.
* ``min`` is used for settings of the "number", "text_list" or "textarea_list"
type. For the "number" type, "min" declares the minimum input number value.
For the "text_list" and "textarea_list" types, it declares the minimum list
length for the setting.
* ``max`` is used for settings of the "number", "text_list", or "textarea_list"
type. For the "number" type, "max" declares the maximum input number value.
For the "text_list" and "textarea_list" types, it declares the maximum list
length for the setting.
* ``group`` specifies which subtab on the UI settings/networks page this setting will be
displayed on. Inherited from the ``metadata`` section if not provided.
The following values are supported by UI:
* ``general`` - main cluster settings
* ``security`` - security settings
* ``compute`` - common compute settings
* ``network`` - network settings (are collected on the separate :guilabel:`Networks` tab)
* ``storage`` - storage settings
* ``logging`` - logging settings
* ``openstack_services`` - OpenStack services settings (:guilabel:`Additional Components`
subtab)
* ``other`` - other settings (everything out of the above list)
.. _restrictions:
Restrictions
------------
Restrictions define when settings and setting groups should be available.
Each restriction is defined as a ``condition`` with optional ``action``, ``message``,
and ``strict``::
restrictions:
- condition: "settings:common.libvirt_type.value != 'kvm'"
message: "KVM only is supported"
- condition: "not ('experimental' in version:feature_groups)"
action: hide
* ``condition`` is an expression written in `Expression DSL`_. If returned value
is true, then ``action`` is performed and ``message`` is shown (if specified).
* ``action`` defines what to do if ``condition`` is satisfied. Supported values
are ``disable``, ``hide`` and ``none``. ``none`` can be used just to display
``message``. This field is optional (the default value is ``disable``).
* ``message`` is a message that is shown if ``condition`` is satisfied. This field
is optional.
* ``strict`` is a boolean flag which specifies how to handle non-existent keys
in expressions. If it is set to ``true`` (the default value), an exception is
thrown for a non-existent key. Otherwise, such keys evaluate to ``null``.
Setting this flag to ``false`` is useful for conditions which rely on settings
provided by plugins::
restrictions:
- condition: "settings:other_plugin == null or settings:other_plugin.metadata.enabled != true"
strict: false
message: "Other plugin must be installed and enabled"
There are also short forms of restrictions::
restrictions:
- "settings:common.libvirt_type.value != 'kvm'": "KVM only is supported"
- "settings:storage.volumes_ceph.value == true"
.. _Expression DSL:
Expression Syntax
-----------------
Expression DSL can describe arbitrarily complex conditions that compare fields
of models and scalar values.
Supported types are:
* Number (123, 5.67)
* String ("qwe", 'zxc')
* Boolean (true, false)
* Null value (null)
* ModelPath (settings:common.libvirt_type.value, cluster:net_provider)
ModelPaths consist of a model name and a field name separated by ":". Nested
fields (like in settings) are supported, separated by ".". Models available for
usage are "cluster", "settings", "networking_parameters" and "version".
Supported operators are:
* ``==``. Returns true if operands are equal::
settings:common.libvirt_type.value == 'qemu'
* ``!=``. Returns true if operands are not equal::
cluster:net_provider != 'neutron'
* ``in``. Returns true if the right operand (Array or String) contains the left
operand::
'ceph-osd' in release:roles
* Boolean operators: ``and``, ``or``, ``not``::
cluster:mode == "ha_compact" and not (settings:common.libvirt_type.value == 'kvm' or 'experimental' in version:feature_groups)
* Parentheses can be used to override the order of precedence.
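To illustrate how a ModelPath resolves against model data, here is a
hypothetical helper (not the actual Nailgun expression parser):

.. code-block:: python

    def resolve(path, models):
        """Resolve 'model:field.nested' against a dict of models."""
        model, _, field = path.partition(':')
        value = models[model]
        for part in field.split('.'):
            value = value[part]
        return value


    models = {'settings': {'common': {'libvirt_type': {'value': 'qemu'}}}}
    assert resolve('settings:common.libvirt_type.value', models) == 'qemu'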
.. _openstack.yaml: https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml
Code testing policy
===================
When writing tests, please note the following rules:
#. Each code change MUST be covered with tests. The test for a specific code
change must fail if that change is reverted, i.e. the test must
really cover the code change and not the general case. Bug fixes should
have tests for the failing case.
#. The tests MUST be in the same patchset with the code changes.
#. It's permitted not to write tests in extreme cases. The extreme cases are:
* hot-fix / bug-fix with *Critical* status.
* patching during Feature Freeze (FF_) or Hard Code Freeze (HCF_).
In this case, a request for writing tests should be reported as a bug with
the *technical-debt* tag. It has to be related to the bug which was fixed by
a patchset that didn't have the tests included.
.. _FF: https://wiki.openstack.org/wiki/FeatureFreeze
.. _HCF: https://wiki.openstack.org/wiki/Fuel/Hard_Code_Freeze
#. Before writing tests please consider which type(s) of testing are suitable
for the unit/module you're covering.
#. Test coverage should not be decreased.
#. The Nailgun application can be sliced into three layers (Presentation,
Object, Model). Consider using unit testing if the test is performed within
one of the layers or implementing mock objects is not complicated.
#. The tests have to be isolated. The order and count of executions must not
influence test results.
#. Tests must be repetitive and must always pass regardless of how many times
they are run.
#. Parametrize tests to avoid testing the same behaviour many times with
different data. This gives additional flexibility in the methods' usage;
see the sketch after this list.
#. Follow the DRY principle in test code. If common code parts are present,
please extract them to a separate method/class.
#. Unit tests are grouped by namespaces mirroring the corresponding unit. For
instance, if the unit is located at ``nailgun/db/dl_detector.py``, the
corresponding test would be placed in
``nailgun/test/unit/nailgun.db/test_dl_detector.py``.
#. Integration tests are grouped at the discretion of the developer.
#. Consider implementing performance tests for these cases:

* a new handler is added which depends on the number of resources in the database;
* new logic is added which parses/operates on elements like nodes.
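As an illustration of the parametrization rule above, a pytest-style sketch
might look like this (the validator is a stand-in defined inline, not real
Nailgun code):

.. code-block:: python

    import re

    import pytest


    def validate_mac(mac):
        # Minimal stand-in validator, for illustration only.
        return bool(re.match(r'^([0-9a-f]{2}:){5}[0-9a-f]{2}$', mac, re.I))


    @pytest.mark.parametrize('mac, expected', [
        ('00:11:22:33:44:55', True),
        ('not-a-mac', False),
        ('', False),
    ])
    def test_mac_validation(mac, expected):
        assert validate_mac(mac) == expected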
Nailgun database migrations
===========================
Nailgun uses Alembic (http://alembic.readthedocs.org/en/latest/) for database
migrations, allowing access to all common Alembic commands through "python
manage.py migrate"
This command creates DB tables for Nailgun service::
python manage.py syncdb
This is done by applying, one by one, a number of database migration files,
which are located in nailgun/nailgun/db/migration/alembic_migrations/versions.
Note that making changes to SQLAlchemy models, or creating new ones, does not
create the corresponding DB tables unless you have created a new migration
file or updated an existing one.
A new migration file can be generated by running::
python manage.py migrate revision -m "Revision message" --autogenerate
There are two important points here:
1) This command always creates a "diff" between the current database state
and the one described by your SQLAlchemy models, so you should always
run "python manage.py syncdb" before this command. This prevents running
the migrate command with an empty database, which would cause it to
create all tables from scratch.
2) Some modifications may not be detected by "--autogenerate" and
require manual additions to the migration file. For example, adding a new
value to an ENUM field is not detected; see the sketch below.
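For instance, a manual migration step for a new ENUM value on PostgreSQL
might look roughly like this (the type and value names are hypothetical;
depending on the PostgreSQL version, the statement may need to run outside
the migration transaction):

.. code-block:: python

    from alembic import op


    def upgrade():
        # ENUM alterations are not autogenerated; issue raw DDL instead.
        # 'node_statuses' and 'new_status' are hypothetical names.
        op.execute("ALTER TYPE node_statuses ADD VALUE 'new_status'")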
After creating a migration file, you can upgrade the database to a new state
by using this command::
python manage.py migrate upgrade +1
To merge your migration with an existing migration file, you can just move
lines of code from the "upgrade()" and "downgrade()" methods to the bottom of
the corresponding methods in the previous migration file. As of this writing,
the migration file is called "current.py".
For all additional features and needs, you may refer to Alembic documentation:
http://alembic.readthedocs.org/en/latest/tutorial.html
Setting up Environment
======================
For information on how to get source code see :ref:`getting-source`.
.. _nailgun_dependencies:
Preparing Development Environment
---------------------------------
.. warning:: Nailgun requires Python 2.7. Please check
installed Python version using ``python --version``.
#. Nailgun can be found in fuel-web/nailgun
#. Install and configure PostgreSQL database. Please note that
Ubuntu 12.04 requires postgresql-server-dev-9.1 while
Ubuntu 14.04 requires postgresql-server-dev-9.3::
sudo apt-get install --yes postgresql postgresql-server-dev-all
sudo sed -ir 's/peer/trust/' /etc/postgresql/9.*/main/pg_hba.conf
sudo service postgresql restart
sudo -u postgres psql -c "CREATE ROLE nailgun WITH LOGIN PASSWORD 'nailgun'"
sudo -u postgres createdb nailgun
If required, you can specify a Unix-domain
socket in the 'host' setting to connect to the PostgreSQL database:
.. code-block:: yaml
DATABASE:
engine: "postgresql"
name: "nailgun"
host: "/var/run/postgresql"
port: ""
user: "nailgun"
passwd: "nailgun"
#. Install pip and development tools::
sudo apt-get install --yes python-dev python-pip
#. Install virtualenv. This step increases flexibility
when dealing with environment settings and package installation::
sudo pip install virtualenv virtualenvwrapper
. /usr/local/bin/virtualenvwrapper.sh # you can save this to .bashrc
mkvirtualenv fuel # you can use any name instead of 'fuel'
workon fuel # command selects the particular environment
#. Install Python dependencies. This section assumes that you use a virtual
environment. Otherwise, you must install all packages globally.
You can install pip and use it to install all the other packages at once::
sudo apt-get install --yes git
git clone https://github.com/openstack/fuel-web.git
cd fuel-web
pip install -r nailgun/test-requirements.txt
#. Install Nailgun in development mode by running the command below in the
`nailgun` folder. This also ensures that Nailgun extensions are discovered::
python setup.py develop
Or if you are using pip::
pip install -e .
#. Create required folder for log files::
sudo mkdir /var/log/nailgun
sudo chown -R `whoami`.`whoami` /var/log/nailgun
sudo chmod -R a+w /var/log/nailgun
Setup for Nailgun Unit Tests
----------------------------
#. Nailgun unit tests use `Tox <http://testrun.org/tox/latest/>`_ for generating test
environments. This means that you don't need to install all Python packages required
for the project to run them, because Tox does this by itself.
#. First, create a virtualenv the way it's described in previous section. Then, install
the Tox package::
workon fuel #activate virtual environment created in the previous section
pip install tox
#. Run the Nailgun backend unit tests and flake8 test::
sudo apt-get install puppet-common #install missing package required by tasklib tests
./run_tests.sh
#. You can also run the same tests by hand, using tox itself::
cd nailgun
tox -epy26 -- -vv nailgun/test
tox -epep8
#. Tox reuses the previously created environment. After making some changes with package
dependencies, tox should be run with **-r** option to recreate existing virtualenvs::
tox -r -epy26 -- -vv nailgun/test
tox -r -epep8
Running Nailgun Performance Tests
+++++++++++++++++++++++++++++++++
Now you can run performance tests using the -x option:
::
./run_tests.sh -x
If -x is not specified, run_tests.sh will not run performance tests.
The -n or -N option works exactly as before: it states whether
tests should be launched or not.
For example:
* run_tests.sh -n -x - run both regular and performance Nailgun tests.
* run_tests.sh -x - run nailgun performance tests only, do not run
regular Nailgun tests.
* run_tests.sh -n - run regular Nailgun tests only.
* run_tests.sh -N - run all tests except for Nailgun regular and
performance tests.
.. _running-parallel-tests-py:
Running parallel tests with py.test
-----------------------------------
Now tests can be run over several processes
in a distributed manner; each test is executed
within an isolated database.
Prerequisites
+++++++++++++
- The nailgun user requires the createdb permission.
- The postgres database is used for the initial connection.
- If the createdb permission cannot be granted for the environment,
then several databases should be created. The number of
databases should be equal to the *TEST_WORKERS* variable,
and the database names should follow the
format *nailgun0*, *nailgun1*, and so on.
- If no *TEST_WORKERS* variable is provided, then a default
database name will be used. Often it is nailgun,
but you can overwrite it with the *TEST_NAILGUN_DB*
environment variable.
- To execute parallel tests on your local environment,
run the following command from *fuel-web/nailgun*:
::
py.test -n 4 nailgun/test
You can also run it from *fuel-web*:
::
py.test -n 4 nailgun/nailgun/test
.. _running-nailgun-in-fake-mode:
Running Nailgun in Fake Mode
----------------------------
#. Switch to virtual environment::
workon fuel
#. Populate the database from fixtures::
cd nailgun
./manage.py syncdb
./manage.py loaddefault # It loads all basic fixtures listed in settings.yaml
./manage.py loaddata nailgun/fixtures/sample_environment.json # Loads fake nodes
#. Start the application in "fake" mode, in which no real calls to the
orchestrator are performed::
python manage.py run -p 8000 --fake-tasks | egrep --line-buffered -v '^$|HTTP' >> /var/log/nailgun.log 2>&1 &
#. (optional) You can also use the --fake-tasks-amqp option if you want the
fake environment to use a real RabbitMQ instead of the fake one::
python manage.py run -p 8000 --fake-tasks-amqp | egrep --line-buffered -v '^$|HTTP' >> /var/log/nailgun.log 2>&1 &
Nailgun in fake mode is usually used for Fuel UI development and Fuel UI
functional tests. For more information, please check out the README file in
the fuel-ui repo.
Note: Diagnostic Snapshot is not available in fake mode.
Running the Fuel System Tests
-----------------------------
For fuel-devops configuration info please refer to
:doc:`Devops Guide </devdocs/devops>` article.
#. Run the integration test::
cd fuel-main
make test-integration
#. To save time, you can execute individual test cases from the
integration test suite like this (a nice thing about TestAdminNode
is that it takes you from nothing to a Fuel master with 9 blank nodes
connected to 3 virtual networks)::
cd fuel-main
export PYTHONPATH=$(pwd)
export ENV_NAME=fuelweb
export PUBLIC_FORWARD=nat
export ISO_PATH=`pwd`/build/iso/fuelweb-centos-6.5-x86_64.iso
./fuelweb_tests/run_tests.py --group=test_cobbler_alive
#. The test harness creates a snapshot of all nodes called 'empty'
before starting the tests, and creates a new snapshot if a test
fails. You can revert to a specific snapshot with this command::
dos.py revert --snapshot-name <snapshot_name> <env_name>
#. To fully reset your test environment, tell the Devops toolkit to erase it::
dos.py list
dos.py erase <env_name>
Flushing database before/after running tests
--------------------------------------------
The database should be cleaned after running tests;
before parallel tests were enabled,
you could only run dropdb with *./run_tests.sh* script.
Now you need to run dropdb for each slave node:
the *py.test --cleandb <path to the tests>* command is introduced for this
purpose.
Fuel UI Internationalization Guidelines
=======================================
Fuel UI internationalization is done using `i18next <http://i18next.com/>`_
library. Please read `i18next documentation
<http://i18next.com/pages/doc_features.html>`_ first.
All translations are stored in nailgun/static/translations/core.json.
If you want to add new strings to the translations file, follow these rules:
#. Use words describing placement of strings like "button", "title", "summary",
"description", "label" and place them at the end of the key
(like "apply_button", "cluster_description", etc.). One-word strings may
look better without any of these suffixes.
#. Do NOT use shortcuts ("bt" instead of "button", "descr" instead of
"description", etc.)
#. Nest keys if it makes sense, for example, if there are a few values
for statuses, etc.
#. If some keys are used in a few places (for example, in utils), move them to
"common.*" namespace.
#. Use defaultValue ONLY with dynamically generated keys.
Validating translations
=========================================
To search for missing and unnecessary translation keys you can perform the following steps:
#. Open terminal and cd to fuel-web/nailgun directory.
#. Run "gulp i18n:validate" to start the validation.
If there are any mismatches, you'll see the list of mismatching keys.
The gulp task "i18n:validate" has one optional argument - a comma-separated
list of languages to compare with the base English en-US translations. Run
"gulp i18n:validate --locales=zh-CN" to perform the comparison only between
English and Chinese keys. You can also run
"gulp i18n:validate --locales=zh-CN,ru-RU"
to perform the comparison between English-Chinese and English-Russian keys.
Interacting with Nailgun using Shell
====================================
.. contents:: :local:
Launching shell
---------------
The development shell for Nailgun can only be accessed inside its virtualenv,
which can be activated by launching the following command::
source /opt/nailgun/bin/activate
After that, the shell is accessible through this command::
python /opt/nailgun/bin/manage.py shell
Its appearance depends on the availability of ipython on the current system.
This package is not available by default on the master node, but you can use
the command above to run a default Python shell inside the Nailgun
environment::
Python 2.7.3 (default, Feb 27 2014, 19:58:35)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>>
.. note:: If you want to quickly access the database,
use *manage.py dbshell* command.
Interaction
-----------
There are two ways a user may interact with Nailgun object instances
through the shell:
* Using the Nailgun objects abstraction
* Using raw SQLAlchemy queries
**IMPORTANT NOTE:** The second way (which amounts to modifying
objects in the DB directly) should only be used if nothing else works.
.. _shell-objects:
Objects approach
****************
Importing objects may look like this::
>>> from nailgun import objects
>>> objects.Release
<class 'nailgun.objects.release.Release'>
>>> objects.Cluster
<class 'nailgun.objects.cluster.Cluster'>
>>> objects.Node
<class 'nailgun.objects.node.Node'>
These are common abstractions around the basic items Nailgun deals with.
These objects allow the user to interact with items in the DB on a higher
level, which includes all the necessary business logic that is not executed
when values in the DB are changed by hand. For working examples continue
to :ref:`shell-faq`.
SQLAlchemy approach
*******************
Using raw SQLAlchemy models and queries allows the user to modify objects
through the ORM, almost the same way it can be done through the SQL CLI.
First, you need to get a DB session and import models::
>>> from nailgun.db import db
>>> from nailgun.db.sqlalchemy import models
>>> models.Release
<class 'nailgun.db.sqlalchemy.models.release.Release'>
>>> models.Cluster
<class 'nailgun.db.sqlalchemy.models.cluster.Cluster'>
>>> models.Node
<class 'nailgun.db.sqlalchemy.models.node.Node'>
and then get necessary instances from DB, modify them and commit current
transaction::
>>> node = db().query(models.Node).get(1) # getting object by ID
>>> node
<nailgun.db.sqlalchemy.models.node.Node object at 0x3451790>
>>> node.status = 'error'
>>> db().commit()
You may refer to `SQLAlchemy documentation <http://docs.sqlalchemy.org/en/rel_0_7/orm/query.html>`_
to find some more info on how to do queries.
.. _shell-faq:
Frequently Asked Questions
--------------------------
As a first step, in any case, objects should be imported as
described here: :ref:`shell-objects`.
**Q:** How can I change the status of a particular node?
**A:** Just retrieve node by its ID and update it::
>>> node = objects.Node.get_by_uid(1)
>>> objects.Node.update(node, {"status": "ready"})
>>> objects.Node.save(node)
**Q:** How can I remove a node from a cluster by hand?
**A:** Get node by ID and call its method::
>>> node = objects.Node.get_by_uid(1)
>>> objects.Node.remove_from_cluster(node)
>>> objects.Node.save(node)
Managing UI Dependencies
========================
The dependencies of Fuel UI are managed by NPM_.
The NPM packages used are listed in the *dependencies* and *devDependencies*
sections of the package.json file. To install all required packages, run::
npm install
To use gulp_ you also need to install the gulp package globally::
sudo npm install -g gulp
To add a new package, it is not enough just to add a new entry to the
package.json file, because npm-shrinkwrap_ is used to lock down package
versions. First you need to install the clingwrap package globally::
sudo npm install -g clingwrap
Then install the required package::
npm install --save some-package
Then run::
clingwrap some-package
to update npm-shrinkwrap.json.
Alternatively, you can completely regenerate npm-shrinkwrap.json by running::
rm npm-shrinkwrap.json
rm -rf node_modules
npm install
npm shrinkwrap --dev
clingwrap npmbegone
.. _npm: https://www.npmjs.org/
.. _gulp: http://gulpjs.com/
.. _npm-shrinkwrap: https://www.npmjs.org/doc/cli/npm-shrinkwrap.html
.. _nailgun-development:
Nailgun Development Instructions
================================
.. toctree::
development/env
development/i18n
development/db_migrations
development/shell_doc
development/ui_dependencies
development/code_testing
Nailgun Customization Instructions
==================================
.. _nailgun-customization:
.. toctree::
customization/partitions
customization/reliability
customization/roles
customization/settings
customization/bonding_in_ui
customization/extensions
Health Check (OSTF) Contributor's Guide
=======================================
Health Check or OSTF?
^^^^^^^^^^^^^^^^^^^^^
Fuel UI has a tab which is called Health Check. Within the development team,
though, there is an established acronym, OSTF, which stands for OpenStack
Testing Framework. Both names refer to the same thing. For simplicity, this
document will use the widely accepted term OSTF.
Main goal of OSTF
^^^^^^^^^^^^^^^^^
After an OpenStack installation via Fuel, it's very important to understand whether the installation was successful and whether the cloud is ready for work.
OSTF provides a set of health checks - sanity, smoke, HA and additional component tests that check the proper operation of all system components in typical conditions.
There are tests for OpenStack scenario validation and other specific tests useful in validating an OpenStack deployment.
Main rules of code contributions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are a few rules you need to follow to successfully pass the code review and contribute high-quality code.
How to setup my environment?
----------------------------
The OSTF repository is located at git.openstack.org with a GitHub mirror: https://github.com/openstack/fuel-ostf. You also have to install and hook up Gerrit, because otherwise you will not be able to contribute code. To do that, follow the registration and installation instructions in the document https://wiki.openstack.org/wiki/CLA#Contributors_License_Agreement
After you've completed the instructions, you're all set to begin editing/creating code.
How should my modules look like?
--------------------------------
The rules are quite simple:
- follow Python coding rules
- follow OpenStack contributor's rules
- watch out for mistakes in docstrings
- follow correct test structure
- always execute your tests after you wrote them before sending them to review
Speaking of following Python coding standards, you can find the style guide here: http://www.python.org/dev/peps/pep-0008/. You should read it carefully once, and after implementing scripts you need to run some checks that will ensure that your code corresponds to the standards. Without correcting issues with coding standards, your scripts will not be merged to master.
You should always follow these implementation rules:
- name the test module, test class and test method beginning with the word "test"
- if you have some tests that should be run in a specific order, add a number to the test method name, for example: test_001_create_keypair
- use verify(), verify_response_body_content() and other methods from the mixins (see the fuel_health/common/test_mixins.py section of the OSTF package architecture), passing them the failed-step parameter
- always list all the steps you are checking using the test_mixins methods in the Scenario section of the docstring, in the correct order
- always use the verify() method when you want to check an operation that can go into an infinite loop
The test docstrings are another important piece, and you should always stick to the following docstring structure:
- test title - a test description that will always be shown on the UI (the remaining part of the docstring will only be shown when the test fails)
- target component (optional) - the name of the component being tested (e.g. Nova, Keystone)
- blank line
- test scenario, for example::
Scenario:
1. Create a new small-size volume.
2. Wait for volume status to become "available".
3. Check volume has correct name.
4. Create new instance.
5. Wait for "Active" status.
6. Attach volume to an instance.
7. Check volume status is "in use".
8. Get information on the created volume by its id.
9. Detach volume from the instance.
10. Check volume has "available" status.
11. Delete volume.
- test duration - an estimate of how much time the test will take
- deployment tags (optional) - gives information about the kind of environment on which the test will be run; possible values are CENTOS, Ubuntu, RHEL, nova_network, Heat, Sahara
Here's a test example which confirms the above explanations:
.. image:: _images/test_docstring_structure.png
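For readers of the text version, a sketch of a test following this structure
is shown below; the base class, the verify() signature and the helper names
are assumptions made for illustration:

.. code-block:: python

    class TestExampleVolume(nmanager.SmokeChecksTest):

        def test_001_create_volume(self):
            """Create volume and check its status
            Target component: Cinder

            Scenario:
                1. Create a new small-size volume.
                2. Wait for volume status to become "available".
            Duration: 100 s.
            """
            # verify() wraps an operation with a timeout, a failed-step
            # number and human-readable messages (see test_mixins).
            volume = self.verify(60, self._create_volume, 1,
                                 'Volume creation failed.',
                                 'volume creation')
            self.verify(40, self._wait_for_volume_status, 2,
                        'Volume status did not become "available".',
                        'volume status check', volume, 'available')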
Test run ordering and profiles
------------------------------
Each test set (sanity, smoke, HA and platform_tests) contains a special
variable in its __init__.py module called __profile__.
The __profile__ variable makes it possible to set different rules, such as the
test run order, deployment tags, information gathering on cleanup and the
expected time estimate for running a test set.
If you are to develop a new set of tests, you need to create an __init__.py
module and place a __profile__ dict in it. It is important that your profile
matches the following structure::
__profile__ = {
"test_runs_ordering_priority": 4,
"id": "platform_tests",
"driver": "nose",
"test_path": "fuel_health/tests/platform_tests",
"description": ("Platform services functional tests."
" Duration 3 min - 60 min"),
"cleanup_path": "fuel_health.cleanup",
"deployment_tags": ['additional_components'],
"exclusive_testsets": [],
"available_since_release": "2015.2-6.1",
}
Take note of each field in the profile, along with acceptable values.
- test_runs_ordering_priority is a field responsible for setting the priority
in which the test set will be displayed. For example, if you set "6" for
sanity tests and "3" for smoke tests, smoke test set will be displayed
first on the HealthCheck tab;
- id is just the unique id of a test set;
- driver field is used for setting the test runner;
- test_path is the field representing path where test set is located starting
from fuel_health directory;
- description is the field which contains the value to be shown on the UI
as the tests duration;
- cleanup_path is the field that specifies path to module responsible for
cleanup mechanism (if you do not specify this value, cleanup will not be
started after your test set);
- deployment_tags field is used for defining when these tests should be
available depending on cluster settings;
- exclusive_testsets field gives you an opportunity to specify test sets that
will be run successively. For example, you can specify "smoke_sanity" for
smoke and sanity test set profiles, then these tests will be run not
simultaneously, but successively.
- available_since_release field is responsible for the release version
starting from which a particular test set can be run. This means that
the test will run only on the specified or newer version of Fuel.
It is necessary to specify a value for each of the attributes. The only
optional attribute is "deployment_tags"; you may omit it from your profile
entirely. You can leave "exclusive_testsets" empty ([]) to run your test set
simultaneously with other ones.
How to execute my tests?
------------------------
The simplest way is to install Fuel; OSTF will be installed as part of it.
- install virtualbox
- build Fuel ISO: :ref:`building-fuel-iso`
- use `virtualbox scripts to run an ISO <https://github.com/openstack/fuel-virtualbox/tree/master/>`_
- once the installation is finished, go to Fuel UI (usually it's 10.20.0.2:8000) and create a new cluster with necessary configuration
- execute::
rsync -avz <path to fuel_health>/ root@10.20.0.2:/opt/fuel_plugins/ostf/lib/python2.6/site-packages/fuel_health/
- execute::
ssh root@10.20.0.2
service ostf restart
- go to Fuel UI and run your new tests
Now I'm done, what's next?
--------------------------
- don't forget to run pep8 on modified part of code
- commit your changes
- execute git review
- ask to review in IRC
From this part you'll only need to fix and commit review comments (if there are any) by doing the same steps. If there are no review comments left, the reviewers will accept your code and it will be automatically merged to master.
General OSTF architecture
^^^^^^^^^^^^^^^^^^^^^^^^^
Tests are included in Fuel, so they will be accessible as soon as you install Fuel on your lab. The OSTF architecture is quite simple; it consists of two main packages:
- fuel_health which contains the test set itself and related modules
- fuel_plugin which contains OSTF-adapter that forms necessary test list in context of cluster deployment options and transfers them to UI using REST_API
On the other hand, some information is necessary for test execution itself. There are several modules that gather information and parse it into objects which will be used in the tests themselves. All this information is gathered from the Nailgun component.
OSTF REST api interface
-----------------------
The Fuel OSTF module provides not only testing, but also a RESTful
interface, a means for interacting with its components.
In terms of REST, all types of OSTF entities are managed by three HTTP verbs:
GET, POST and PUT.
The following basic URL is used to make requests to OSTF::
{ostf_host}:{ostf_port}/v1/{requested_entity}/{cluster_id}
Currently, you can get information about testsets, tests and testruns
via GET request on corresponding URLs for ostf_plugin.
To get information about testsets, make the following GET request on::
{ostf_host}:{ostf_port}/v1/testsets/{cluster_id}
To get information about tests, make GET request on::
{ostf_host}:{ostf_port}/v1/tests/{cluster_id}
To get information about executed tests, make the following GET
requests:
- for the whole set of testruns::
{ostf_host}:{ostf_port}/v1/testruns/
- for the particular testrun::
{ostf_host}:{ostf_port}/v1/testruns/{testrun_id}
- for the list of testruns executed on the particular cluster::
{ostf_host}:{ostf_port}/v1/testruns/last/{cluster_id}
To start test execution, make the following POST request on this URL::
{ostf_host}:{ostf_port}/v1/testruns/
The body must consist of JSON data structure with testsets and the list
of tests belonging to it that must be executed. It should also have
metadata with the information about the cluster
(the key with the "cluster_id" name is used to store the parameter's value)::
[
{
"testset": "test_set_name",
"tests": ["module.path.to.test.1", ..., "module.path.to.test.n"],
"metadata": {"cluster_id": id}
},
...,
{...}, # info for another testrun
{...},
...,
{...}
]
If succeeded, OSTF adapter returns attributes of created testrun entities
in JSON format. If you want to launch only one test, put its id
into the list. To launch all tests, leave the list empty (by default).
Example of the response::
[
{
"status": "running",
"testset": "sanity",
"meta": null,
"ended_at": "2014-12-12 15:31:54.528773",
"started_at": "2014-12-12 15:31:41.481071",
"cluster_id": 1,
"id": 1,
"tests": [.....info on tests.....]
},
....
]
You can also stop and restart testruns. To do that, make a PUT request on
testruns. The request body must contain the list of the testruns and
tests to be stopped or restarted. Example::
[
{
"id": test_run_id,
"status": ("stopped" | "restarted"),
"tests": ["module.path.to.test.1", ..., "module.path.to.test.n"]
},
...,
{...}, # info for another testrun
{...},
...,
{...}
]
If succeeded, OSTF adapter returns attributes of the processed testruns
in JSON format. Its structure is the same as for POST request, described
above.
OSTF package architecture
^^^^^^^^^^^^^^^^^^^^^^^^^
The main modules used in fuel_health package are:
**config** module is responsible for getting the data which is necessary for tests. All data is gathered from the Nailgun component or from a text config.
Nailgun provides us with the following data:
- OpenStack admin user name
- OpenStack admin user password
- OpenStack admin user tenant
- ip of controllers node
- ip of a compute node (easily obtained from Nailgun by parsing the role key in the response JSON)
- deployment mode (HA /non-HA)
- deployment os (RHEL/CENTOS)
- keystone / horizon urls
- tiny proxy address
All other information we need is stored in config.py itself and remains at defaults in this case. In case you are using data from Nailgun (an OpenStack installation using Fuel), you should do the following:
initialize the NailgunConfig() class.
Nailgun is running on the Fuel master node, so you can easily get data for each cluster by invoking curl http://localhost:8000/api/<uri_here>. The cluster id can be taken from the OS environment (provided by Fuel).
If you want to run OSTF for a non-Fuel installation, change the initialization of NailgunConfig() to FileConfig() and set the parameters marked in Appendix 1 with "(If you are using FileConfig set appropriate value here)" (default config file path: fuel_health/etc/test.conf)
**cleanup.py** - invoked by the OSTF adapter if the user stops test execution in the Web UI. This module is responsible for deleting all test resources created during a test suite run. It simply finds all resources whose names start with ost1_test- and destroys each of them using the _delete_it method.
*Important: if you decide to add additional cleanup for a resource, you have to keep in mind:
all resources depend on each other, which is why deleting a resource that is still in use will give you an exception;
don't forget that deleting several resources requires an ID for each resource, not its name. You'll need to set the delete_type optional argument of the _delete_it method to id*
**nmanager.py** contains base classes for tests. Each base class contains setup, teardown and methods that act as an interlayer between tests and OpenStack python clients (see nmanager architecture diagram).
.. image:: _images/nmanager.png
**fuel_health/common/test_mixins.py** - provides mixins to pack response verification into a human-readable message. For assertion failure cases, the method requires a step on which we failed and a descriptive
message to be provided. The verify() method also requires a timeout value to be set. This method should be used when checking OpenStack operations (such as instance creation). Sometimes a cluster
operation taking too long may be a sign of a problem, so this will secure the tests from such a situation or even from going into infinite loop.
**fuel_health/common/ssh.py** - provides an easy way to ssh to nodes or instances. This module uses the paramiko library and contains some useful wrappers that handle some routine tasks for you
(such as ssh key authentication, starting transport threads, etc). It also contains a rather useful method, exec_command_on_vm(), which sshes to an instance through a controller and then executes
the necessary command on it.
OSTF Adapter architecture
^^^^^^^^^^^^^^^^^^^^^^^^^
.. image:: _images/plugin_structure.png
The important thing to remember about OSTF Adapter is that just like when writing tests, all code should follow pep8 standard.
Appendix 1
----------
::
IdentityGroup = [
cfg.StrOpt('catalog_type',
default='identity', (may be changed to 'keystone')
help="Catalog type of the Identity service."),
cfg.BoolOpt('disable_ssl_certificate_validation',
default=False,
help="Set to True if using self-signed SSL certificates."),
cfg.StrOpt('uri',
default='http://localhost/' (If you are using FileConfig set here appropriate address)
help="Full URI of the OpenStack Identity API (Keystone), v2"),
cfg.StrOpt('url',
default='http://localhost:5000/v2.0/', (If you are using FileConfig set here appropriate address to horizon)
help="Dashboard Openstack url, v2"),
cfg.StrOpt('uri_v3',
help='Full URI of the OpenStack Identity API (Keystone), v3'),
cfg.StrOpt('strategy',
default='keystone',
help="Which auth method does the environment use? "
"(basic|keystone)"),
cfg.StrOpt('region',
default='RegionOne',
help="The identity region name to use."),
cfg.StrOpt('admin_username',
default='nova' , (If you are using FileConfig set appropriate value here)
help="Administrative Username to use for"
"Keystone API requests."),
cfg.StrOpt('admin_tenant_name', (If you are using FileConfig set appropriate value here)
default='service',
help="Administrative Tenant name to use for Keystone API "
"requests."),
cfg.StrOpt('admin_password', (If you are using FileConfig set appropriate value here)
default='nova',
help="API key to use when authenticating as admin.",
secret=True),
]
    ComputeGroup = [
        cfg.BoolOpt('allow_tenant_isolation',
                    default=False,
                    help="Allows test cases to create/destroy tenants and "
                         "users. This option enables isolated test cases and "
                         "better parallel execution, but also requires that "
                         "OpenStack Identity API admin credentials are known."),
        cfg.BoolOpt('allow_tenant_reuse',
                    default=True,
                    help="If allow_tenant_isolation is True and a tenant that "
                         "would be created for a given test already exists (such "
                         "as from a previously-failed run), re-use that tenant "
                         "instead of failing because of the conflict. Note that "
                         "this would result in the tenant being deleted at the "
                         "end of a subsequent successful run."),
        cfg.StrOpt('image_ssh_user',
                   default="root",  # if you are using FileConfig, set the appropriate value here
                   help="User name used to authenticate to an instance."),
        cfg.StrOpt('image_alt_ssh_user',
                   default="root",  # if you are using FileConfig, set the appropriate value here
                   help="User name used to authenticate to an instance using "
                        "the alternate image."),
        cfg.BoolOpt('create_image_enabled',
                    default=True,
                    help="Does the test environment support snapshots?"),
        cfg.IntOpt('build_interval',
                   default=10,
                   help="Time in seconds between build status checks."),
        cfg.IntOpt('build_timeout',
                   default=160,
                   help="Timeout in seconds to wait for an instance to build."),
        cfg.BoolOpt('run_ssh',
                    default=False,
                    help="Does the test environment support ssh to instances?"),
        cfg.StrOpt('ssh_user',
                   default='root',  # if you are using FileConfig, set the appropriate value here
                   help="User name used to authenticate to an instance."),
        cfg.IntOpt('ssh_timeout',
                   default=50,
                   help="Timeout in seconds to wait for authentication to "
                        "succeed."),
        cfg.IntOpt('ssh_channel_timeout',
                   default=20,
                   help="Timeout in seconds to wait for output from ssh "
                        "channel."),
        cfg.IntOpt('ip_version_for_ssh',
                   default=4,
                   help="IP version used for SSH connections."),
        cfg.StrOpt('catalog_type',
                   default='compute',
                   help="Catalog type of the Compute service."),
        cfg.StrOpt('path_to_private_key',
                   default='/root/.ssh/id_rsa',  # if you are using FileConfig, set the appropriate value here
                   help="Path to a private key file for SSH access to remote "
                        "hosts"),
        cfg.ListOpt('controller_nodes',
                    default=[],  # if you are using FileConfig, set the appropriate value here
                    help="IP addresses of controller nodes"),
        cfg.ListOpt('compute_nodes',
                    default=[],  # if you are using FileConfig, set the appropriate value here
                    help="IP addresses of compute nodes"),
        cfg.StrOpt('controller_node_ssh_user',
                   default='root',  # if you are using FileConfig, set the appropriate value here
                   help="ssh user of one of the controller nodes"),
        cfg.StrOpt('controller_node_ssh_password',
                   default='r00tme',  # if you are using FileConfig, set the appropriate value here
                   help="ssh password of one of the controller nodes"),
        cfg.StrOpt('image_name',
                   default="TestVM",  # if you are using FileConfig, set the appropriate value here
                   help="Valid secondary image reference to be used in tests."),
        cfg.StrOpt('deployment_mode',
                   default="ha",  # if you are using FileConfig, set the appropriate value here
                   help="Deployment mode"),
        cfg.StrOpt('deployment_os',
                   default="RHEL",  # if you are using FileConfig, set the appropriate value here
                   help="Deployment OS"),
        cfg.IntOpt('flavor_ref',
                   default=42,
                   help="Valid primary flavor to use in tests."),
    ]
    ImageGroup = [
        cfg.StrOpt('api_version',
                   default='1',
                   help="Version of the API"),
        cfg.StrOpt('catalog_type',
                   default='image',
                   help='Catalog type of the Image service.'),
        cfg.StrOpt('http_image',
                   default='http://download.cirros-cloud.net/0.3.1/'
                           'cirros-0.3.1-x86_64-uec.tar.gz',
                   help='HTTP-accessible image')
    ]
    NetworkGroup = [
        cfg.StrOpt('catalog_type',
                   default='network',
                   help='Catalog type of the Network service.'),
        cfg.StrOpt('tenant_network_cidr',
                   default="10.100.0.0/16",
                   help="The cidr block to allocate tenant networks from"),
        cfg.IntOpt('tenant_network_mask_bits',
                   default=29,
                   help="The mask bits for tenant networks"),
        cfg.BoolOpt('tenant_networks_reachable',
                    default=True,
                    help="Whether tenant network connectivity should be "
                         "evaluated directly"),
        cfg.BoolOpt('neutron_available',
                    default=False,
                    help="Whether or not neutron is expected to be available"),
    ]
    VolumeGroup = [
        cfg.IntOpt('build_interval',
                   default=10,
                   help='Time in seconds between volume availability checks.'),
        cfg.IntOpt('build_timeout',
                   default=180,
                   help='Timeout in seconds to wait for a volume to become '
                        'available.'),
        cfg.StrOpt('catalog_type',
                   default='volume',
                   help="Catalog type of the Volume Service"),
        cfg.BoolOpt('cinder_node_exist',
                    default=True,
                    help="Allow tests to run if a Cinder node exists"),
        cfg.BoolOpt('multi_backend_enabled',
                    default=False,
                    help="Runs Cinder multi-backend test (requires 2 backends)"),
        cfg.StrOpt('backend1_name',
                   default='BACKEND_1',
                   help="Name of backend1 (must be declared in cinder.conf)"),
        cfg.StrOpt('backend2_name',
                   default='BACKEND_2',
                   help="Name of backend2 (must be declared in cinder.conf)"),
    ]
    ObjectStoreConfig = [
        cfg.StrOpt('catalog_type',
                   default='object-store',
                   help="Catalog type of the Object-Storage service."),
        cfg.IntOpt('container_sync_timeout',
                   default=120,
                   help="Number of seconds to wait for container-to-container "
                        "synchronization to complete."),
        cfg.IntOpt('container_sync_interval',
                   default=5,
                   help="Number of seconds to wait while looping to check the "
                        "status of container-to-container synchronization"),
    ]

Resource duplication and file conflicts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you have been developing a module that uses services already used by other
OpenStack components, most likely you will end up declaring some of the same
resources that have already been declared elsewhere.
Puppet's architecture doesn't allow declaring resources that have the same
type and title, even if they have the same attributes.
For example, your module could be using Apache and declare Service['apache'].
When you run your module outside Fuel, nothing else tries to control this
service and everything works fine. But when you try to add this module to
Fuel, you will get a resource duplication error, because Apache is already
managed by the Horizon module.
There is pretty much nothing you can do about this problem directly, because
the uniqueness of Puppet resources is one of its core principles. But you can
try to solve the problem in one of the following ways.
The best thing you can do is to use the already declared resource by setting
dependencies on the class that declares it, as sketched below. This will not
work in many cases, and you may have to modify both modules or move the
conflicting resource elsewhere to avoid conflicts.
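A sketch of the dependency-based approach, assuming a horizon class that already declares Service['apache'] (the class and file names are invented for illustration)::

  class mymodule::setup {
    # do not redeclare Service['apache']; depend on the class
    # that already manages it instead
    require horizon

    file { '/etc/mymodule.conf' :
      ensure  => file,
      content => template('mymodule/mymodule.conf.erb'),
    }
  }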
Puppet does provide a good solution to this problem: **virtual resources**.
The idea is to move the resource declaration to a separate class and make it
virtual. Virtual resources are not evaluated until you realize them, and you
can realize them in every module that requires the resource, as in the sketch
below.
The trouble starts when these resources need different attributes or have
complex dependencies. Most current Puppet modules don't use virtual resources
and would require major refactoring to add them.
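A minimal sketch of the virtual resource approach (the class names are invented for illustration)::

  # the shared declaration lives in one class and is virtual,
  # so it is not evaluated until somebody realizes it
  class shared::apache_service {
    @service { 'apache' :
      ensure => running,
      enable => true,
    }
  }

  # every module that needs the service realizes it; realizing
  # the same virtual resource twice is not a conflict
  class horizon::setup {
    include shared::apache_service
    realize(Service['apache'])
  }

  class mymodule::setup {
    include shared::apache_service
    realize(Service['apache'])
  }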
Puppet style guidelines advise moving all classes related to the same service
into a single module, instead of having many modules work with the same
service, to minimize conflicts; but in many cases this approach doesn't work.
There are also some hacks, such as defining the resource inside an *if !
defined(Service['apache']) { ... }* block or using the **ensure_resource**
function from Puppet's stdlib.
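Both hacks, sketched::

  # declare the service only if no other module has declared it yet;
  # fragile, because the result depends on evaluation order
  if ! defined(Service['apache']) {
    service { 'apache' :
      ensure => running,
    }
  }

  # ensure_resource() from stdlib does the same check-and-declare
  # in one call
  ensure_resource('service', 'apache', { 'ensure' => 'running' })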
Similar problems often arise when working with configuration files.
Even using templates doesn't allow several modules to directly edit the same
file. There are a number of solutions to this, ranging from using
configuration directories and snippets, if the service supports them, to
representing individual lines or configuration options as resources and
managing them instead of entire files.
Many services do support configuration directories where you can place
configuration file snippets. The daemon reads them all and concatenates them
as if they were a single file. Such services are the most convenient to
manage with Puppet: you can just split your configuration and manage its
pieces as templates. If your service doesn't know how to work with snippets,
you can still use them. You only need to create the parts of your
configuration file in some directory and then combine them all using a simple
exec with the *cat* command. There is also a special *concat* resource type
that makes this approach easier.
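A sketch using the *concat* type (as provided by the puppetlabs-concat module; the file name and options are invented): two modules contribute fragments to the same file::

  concat { '/etc/myservice.conf' : }

  # fragments are sorted by 'order' and concatenated into the target
  concat::fragment { 'myservice-base' :
    target  => '/etc/myservice.conf',
    content => "bind_host = 0.0.0.0\n",
    order   => '01',
  }

  concat::fragment { 'myservice-extra' :
    target  => '/etc/myservice.conf',
    content => "workers = 4\n",
    order   => '02',
  }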
Some configuration files have a standard structure and can be managed by
custom resource types. For example, there is the *ini_file* resource type for
managing values in INI-style configuration files as single resources.
There is also the *augeas* resource type, which can manage many popular
configuration file formats.
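For illustration (note that the puppetlabs-inifile module names its type *ini_setting*, which corresponds to the *ini_file* type mentioned above; *augeas* is built into Puppet)::

  # manage one INI option as a resource instead of the whole file
  ini_setting { 'nova-debug' :
    path    => '/etc/nova/nova.conf',
    section => 'DEFAULT',
    setting => 'debug',
    value   => 'True',
  }

  # augeas understands the sshd_config format via its lens
  augeas { 'sshd-disable-root' :
    context => '/files/etc/ssh/sshd_config',
    changes => 'set PermitRootLogin no',
  }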
Each approach has its own limitations, and editing a single file from many
modules is still a non-trivial task in most cases.
Neither the resource duplication problem nor the file editing problem has a
good solution for every possible case, and both significantly limit the
possibility of code reuse.
The last approach you can try is to modify files with scripts and sed patches
run by exec resources. This can have unexpected results, because you can't be
sure what other operations are performed on the configuration file, what text
patterns exist there, or whether your script breaks another exec.
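The fragile approach in question, sketched with an invented file and option; the *unless* guard is essential for idempotence, and even then another module's exec may still conflict::

  exec { 'myservice-enable-debug' :
    command => "sed -i 's/^debug = .*/debug = true/' /etc/myservice.conf",
    path    => ['/bin', '/usr/bin'],
    unless  => "grep -q '^debug = true' /etc/myservice.conf",
  }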
Puppet module containment
~~~~~~~~~~~~~~~~~~~~~~~~~
Fuel Library consists of many modules with a complex structure and
several dependencies defined between the provided modules.
There is a known Puppet problem related to dependencies between
resources contained inside classes declared from other classes.
If you declare resources inside a class or definition, they are contained
inside it, and the entire container is not considered finished until all of
its contents have been evaluated.
For example, here are two classes with one notify resource each::
  class a {
    notify { 'a' :}
  }

  class b {
    notify { 'b' :}
  }

  Class['a'] -> Class['b']

  include a
  include b
Dependencies between classes force the contained resources to be executed in
the declared order.
But if we add another layer of containers, dependencies between them will not
affect the resources declared in the first two classes::
  class a {
    notify { 'a' :}
  }

  class b {
    notify { 'b' :}
  }

  class l1 {
    include a
  }

  class l2 {
    include b
  }

  Class['l1'] -> Class['l2']

  include 'l1'
  include 'l2'
This problem can lead to unexpected, and in most cases unwanted, behaviour
where some resources 'fall out' of their classes and break the logic of the
deployment process.
The most common solution to this issue is the **Anchor Pattern**. Anchors are
special 'do-nothing' resources found in Puppetlabs' stdlib module.
Anchors can be declared inside a top-level class and are contained inside it
like any normal resource. If two anchors are declared, they can be named the
*start* and *end* anchors. All classes that should be contained inside the
top-level class can then declare dependencies on both anchors.
If a class goes after the start anchor and before the end anchor, it is
locked between them and is correctly contained inside the parent class::
  class a {
    notify { 'a' :}
  }

  class b {
    notify { 'b' :}
  }

  class l1 {
    anchor { 'l1-start' :}
    include a
    anchor { 'l1-end' :}

    Anchor['l1-start'] -> Class['a'] -> Anchor['l1-end']
  }

  class l2 {
    anchor { 'l2-start' :}
    include b
    anchor { 'l2-end' :}

    Anchor['l2-start'] -> Class['b'] -> Anchor['l2-end']
  }

  Class['l1'] -> Class['l2']

  include 'l1'
  include 'l2'
This hack does help to prevent resources from randomly floating out of their
places, but it looks very ugly and is hard to understand. We have to use this
technique in many Fuel modules that are rather complex and require such
containment.
If your module is going to work with a dependency scheme like this, you may
find anchors useful too.
There is also another solution found in the most recent versions of Puppet.
The *contain* function forces a declared class to be locked within its
container::

  class l1 {
    contain 'a'
  }

  class l2 {
    contain 'b'
  }

  Class['l1'] -> Class['l2']

  include 'l1'
  include 'l2'
Puppet scope and variables
~~~~~~~~~~~~~~~~~~~~~~~~~~
The way Puppet looks up values of variables from inside classes can be
confusing too. There are several levels of scope in Puppet.
**Top scope** contains all facts and built-in variables and spans from the
start of the *site.pp* file to the first class or node declaration. There is
also a **node scope**, which can be different for every node block. Each
class and definition starts its own **local scope**, where its variables and
resource defaults are available. **They can also have parent scopes**.
A reference to a variable consists of two parts,
**$(class_name)::(variable_name)**, for example *$apache::docroot*. The class
name can also be empty; such a reference explicitly points to the top-level
scope, for example *$::ipaddress*.
If you are going to use the value of a fact or a top-scope variable, it's
usually a good idea to add the two colons to the start of its name to ensure
that you get the value you are looking for.
If you want to reference a variable found in another class, use a fully
qualified name like *$apache::docroot*. But remember that the referenced
class must already be declared; just having it inside your modules folder is
not enough. Using *include apache* before referencing *$apache::docroot* will
help. This technique is commonly used with **params** classes: every module
has one, and it is included into every other class that uses its values, as
sketched below.
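The params-class pattern, sketched with invented names::

  # defaults live in one place...
  class mymodule::params {
    $docroot = '/var/www/html'
  }

  # ...and every other class in the module reads them from there
  class mymodule::web (
    $docroot = $mymodule::params::docroot,
  ) inherits mymodule::params {
    notify { "docroot is ${docroot}" :}
  }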
And finally, if you reference a local variable you can write just *$myvar*.
Puppet will first look inside the local scope of the current class or defined
type, then inside the parent scope, then the node scope, and finally the top
scope. You get the value from the first scope where the variable is found.
The definition of what the parent scope is varies between Puppet 2.* and
Puppet 3.*. Puppet 2.* treats as parent scope the class from which the
current class was declared, and all of its parents too. If the current class
was inherited from another class, the base class is also a parent scope,
which allows the popular *Smart Defaults* trick::
  class a {
    $var = 'a'
  }

  class b (
    $a = $a::var,
  ) inherits a {
  }
Puppet 3.* treats as parent scope only the class from which the current class
was inherited, if any, and doesn't take the place of declaration into
account. For example::
  $msg = 'top'

  class a {
    $msg = 'a'
  }

  class a_child inherits a {
    notify { $msg :}
  }
This will say 'a' in both Puppet 2.* and 3.*. But::
  $msg = 'top'

  class n1 {
    $msg = 'n1'
    include 'n2'
  }

  class n2 {
    notify { $msg :}
  }

  include 'n1'
This will say 'n1' in Puppet 2.6, say 'n1' and issue a *deprecation warning*
in 2.7, and say 'top' in Puppet 3.*.
Finding such variable references and replacing them with fully qualified
names is a very important part of Fuel's migration to Puppet 3.*.
Where to find more information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The best place to start learning Puppet is Puppetlabs' official learning
course (http://docs.puppetlabs.com/learning/). There is also a special
virtual machine image you can use to safely play with Puppet manifests.
Then you can continue with the Puppet reference and other pages of the
Puppetlabs documentation.
You can also find a number of printed books about Puppet and how to use it to
manage your IT infrastructure.
Pro Puppet
  http://www.apress.com/9781430230571

Pro Puppet, 2nd Edition
  http://www.apress.com/9781430260400

Puppet 2.7 Cookbook
  https://www.packtpub.com/networking-and-servers/puppet-27-cookbook

Puppet 3 Cookbook
  https://www.packtpub.com/networking-and-servers/puppet-3-cookbook

Puppet 3: Beginner's Guide
  https://www.packtpub.com/networking-and-servers/puppet-3-beginner%E2%80%99s-guide

Instant Puppet 3 Starter
  https://www.packtpub.com/networking-and-servers/instant-puppet-3-starter-instant

Pulling Strings with Puppet: Configuration Management Made Easy
  http://www.apress.com/9781590599785

Puppet Types and Providers: Extending Puppet with Ruby
  http://shop.oreilly.com/product/0636920026860.do

Managing Infrastructure with Puppet: Configuration Management at Scale
  http://shop.oreilly.com/product/0636920020875.do
