Retire the Tuskar codebase

Change-Id: If724a2cbde086d4c4341539a2e51ee4fd1915a26
Depends-On: I904b2f27591333e104bf9080bb8c3876fcb3596c
Dougal Matthews 2016-01-21 15:10:12 +00:00
parent 3e67a04813
commit b129603824
182 changed files with 10 additions and 20243 deletions


@@ -1,4 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ ./ $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

LICENSE

@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.


@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc

README Normal file

@@ -0,0 +1,10 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev or #tripleo
on Freenode.


@@ -1,45 +0,0 @@
======
Tuskar
======
What is Tuskar?
---------------
Tuskar is a management service for planning TripleO deployments.
Interested in seeing the full Tuskar and Tuskar UI setup? `Watch
the demo. <https://www.youtube.com/watch?v=--WWdJXmf8o>`_
For additional information, take a look at the `Tuskar
documentation <http://git.openstack.org/cgit/openstack/tuskar/tree/docs/index.rst>`_.
Installation Information
------------------------
Please see `install.rst <doc/source/install.rst>`_ for an
installation guide.
Building the Docs
-----------------
From the root directory::
python setup.py build_sphinx
Contributing
------------
Interested in contributing? Take a look at `contributing.rst
<doc/source/contributing.rst>`_ for details on how to do so.
Contact Us
----------
Join us on IRC (Internet Relay Chat)::
Network: Freenode (irc.freenode.net/tuskar)
Channel: #tripleo and #tuskar


@@ -1,159 +0,0 @@
#!/bin/bash
# Install and start the **Tuskar** service
# library code (equivalent to lib/tuskar)
# ---------
# - install_tuskarclient
# - install_tuskar
# - configure_tuskarclient
# - configure_tuskar
# - init_tuskar
# - start_tuskar
# - stop_tuskar
# - cleanup_tuskar
# Save trace setting
XTRACE=$(set +o | grep xtrace)
set +o xtrace
# Functions
# ---------
# Test if any Tuskar services are enabled
# is_tuskar_enabled
function is_tuskar_enabled {
[[ ,${ENABLED_SERVICES} =~ ,"tuskar-" ]] && return 0
return 1
}
# cleanup_tuskar() - Remove residual data files, anything left over from previous
# runs that a clean run would need to clean up
function cleanup_tuskar {
sudo rm -rf $TUSKAR_AUTH_CACHE_DIR
}
# configure_tuskar() - Set config files, create data dirs, etc
function configure_tuskar {
if [[ ! -d $TUSKAR_CONF_DIR ]]; then
sudo mkdir -p $TUSKAR_CONF_DIR
fi
sudo chown $STACK_USER $TUSKAR_CONF_DIR
# remove old config files
rm -f $TUSKAR_CONF_DIR/tuskar-*.conf
TUSKAR_POLICY_FILE=$TUSKAR_CONF_DIR/policy.json
cp $TUSKAR_DIR/etc/tuskar/policy.json $TUSKAR_POLICY_FILE
cp $TUSKAR_DIR/etc/tuskar/tuskar.conf.sample $TUSKAR_CONF
# common options
iniset $TUSKAR_CONF database connection `database_connection_url tuskar`
# logging
iniset $TUSKAR_CONF DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL
iniset $TUSKAR_CONF DEFAULT use_syslog $SYSLOG
if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
# Add color to logging output
setup_colorized_logging $TUSKAR_CONF DEFAULT tenant user
fi
configure_auth_token_middleware $TUSKAR_CONF tuskar $TUSKAR_AUTH_CACHE_DIR
if is_ssl_enabled_service "key"; then
iniset $TUSKAR_CONF clients_keystone ca_file $SSL_BUNDLE_FILE
fi
iniset $TUSKAR_CONF tuskar_api bind_port $TUSKAR_API_PORT
}
# init_tuskar() - Initialize database
function init_tuskar {
# (re)create tuskar database
recreate_database tuskar
tuskar-dbsync --config-file $TUSKAR_CONF
create_tuskar_cache_dir
}
# create_tuskar_cache_dir() - Part of the init_tuskar() process
function create_tuskar_cache_dir {
# Create cache dirs
sudo mkdir -p $TUSKAR_AUTH_CACHE_DIR
sudo chown $STACK_USER $TUSKAR_AUTH_CACHE_DIR
}
# install_tuskar() - Collect source and prepare
function install_tuskar {
setup_develop $TUSKAR_DIR
}
# start_tuskar() - Start running processes, including screen
function start_tuskar {
run_process tuskar-api "tuskar-api --config-file=$TUSKAR_CONF"
}
# stop_tuskar() - Stop running processes
function stop_tuskar {
# Kill the screen windows
local serv
for serv in tuskar-api; do
stop_process $serv
done
}
# create_tuskar_accounts() - Set up common required tuskar accounts
function create_tuskar_accounts {
# migrated from files/keystone_data.sh
local service_tenant=$(openstack project list | awk "/ $SERVICE_TENANT_NAME / { print \$2 }")
local admin_role=$(openstack role list | awk "/ admin / { print \$2 }")
local tuskar_user=$(get_or_create_user "tuskar" \
"$SERVICE_PASSWORD" $service_tenant)
get_or_add_user_role $admin_role $tuskar_user $service_tenant
if [[ "$KEYSTONE_CATALOG_BACKEND" = 'sql' ]]; then
local tuskar_service=$(get_or_create_service "tuskar" \
"management" "Tuskar Management Service")
get_or_create_endpoint $tuskar_service \
"$REGION_NAME" \
"$SERVICE_PROTOCOL://$TUSKAR_API_HOST:$TUSKAR_API_PORT" \
"$SERVICE_PROTOCOL://$TUSKAR_API_HOST:$TUSKAR_API_PORT" \
"$SERVICE_PROTOCOL://$TUSKAR_API_HOST:$TUSKAR_API_PORT"
fi
}
# Main dispatcher
if [[ "$1" == "source" ]]; then
# Initial source, do nothing as functions sourced
# are below rather than in lib/tuskar
echo_summary "source extras tuskar"
elif [[ "$1" == "stack" && "$2" == "install" ]]; then
echo_summary "Installing Tuskar"
install_tuskar
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
echo_summary "Configuring Tuskar"
configure_tuskar
if is_service_enabled key; then
create_tuskar_accounts
fi
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
echo_summary "Initializing Tuskar"
init_tuskar
start_tuskar
fi
if [[ "$1" == "unstack" ]]; then
stop_tuskar
fi
# Restore xtrace
$XTRACE
# Tell emacs to use shell-script-mode
## Local variables:
## mode: shell-script
## End:


@@ -1,15 +0,0 @@
# tuskar plugin settings
TUSKAR_DIR=$DEST/tuskar
TUSKARCLIENT_DIR=$DEST/python-tuskarclient
TUSKAR_AUTH_CACHE_DIR=${TUSKAR_AUTH_CACHE_DIR:-/var/cache/tuskar}
TUSKAR_STANDALONE=$(trueorfalse False TUSKAR_STANDALONE)
TUSKAR_CONF_DIR=/etc/tuskar
TUSKAR_CONF=$TUSKAR_CONF_DIR/tuskar.conf
TUSKAR_API_HOST=${TUSKAR_API_HOST:-$HOST_IP}
TUSKAR_API_PORT=${TUSKAR_API_PORT:-8585}
# Tell Tempest this project is present
TEMPEST_SERVICES+=,tuskar
enable_service tuskar-api


@@ -1,416 +0,0 @@
/**
* Sphinx stylesheet -- basic theme
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
/* -- main layout ----------------------------------------------------------- */
div.clearer {
clear: both;
}
/* -- relbar ---------------------------------------------------------------- */
div.related {
width: 100%;
font-size: 90%;
}
div.related h3 {
display: none;
}
div.related ul {
margin: 0;
padding: 0 0 0 10px;
list-style: none;
}
div.related li {
display: inline;
}
div.related li.right {
float: right;
margin-right: 5px;
}
/* -- sidebar --------------------------------------------------------------- */
div.sphinxsidebarwrapper {
padding: 10px 5px 0 10px;
}
div.sphinxsidebar {
float: left;
width: 230px;
margin-left: -100%;
font-size: 90%;
}
div.sphinxsidebar ul {
list-style: none;
}
div.sphinxsidebar ul ul,
div.sphinxsidebar ul.want-points {
margin-left: 20px;
list-style: square;
}
div.sphinxsidebar ul ul {
margin-top: 0;
margin-bottom: 0;
}
div.sphinxsidebar form {
margin-top: 10px;
}
div.sphinxsidebar input {
border: 1px solid #98dbcc;
font-family: sans-serif;
font-size: 1em;
}
img {
border: 0;
}
/* -- search page ----------------------------------------------------------- */
ul.search {
margin: 10px 0 0 20px;
padding: 0;
}
ul.search li {
padding: 5px 0 5px 20px;
background-image: url(file.png);
background-repeat: no-repeat;
background-position: 0 7px;
}
ul.search li a {
font-weight: bold;
}
ul.search li div.context {
color: #888;
margin: 2px 0 0 30px;
text-align: left;
}
ul.keywordmatches li.goodmatch a {
font-weight: bold;
}
/* -- index page ------------------------------------------------------------ */
table.contentstable {
width: 90%;
}
table.contentstable p.biglink {
line-height: 150%;
}
a.biglink {
font-size: 1.3em;
}
span.linkdescr {
font-style: italic;
padding-top: 5px;
font-size: 90%;
}
/* -- general index --------------------------------------------------------- */
table.indextable td {
text-align: left;
vertical-align: top;
}
table.indextable dl, table.indextable dd {
margin-top: 0;
margin-bottom: 0;
}
table.indextable tr.pcap {
height: 10px;
}
table.indextable tr.cap {
margin-top: 10px;
background-color: #f2f2f2;
}
img.toggler {
margin-right: 3px;
margin-top: 3px;
cursor: pointer;
}
/* -- general body styles --------------------------------------------------- */
a.headerlink {
visibility: hidden;
}
h1:hover > a.headerlink,
h2:hover > a.headerlink,
h3:hover > a.headerlink,
h4:hover > a.headerlink,
h5:hover > a.headerlink,
h6:hover > a.headerlink,
dt:hover > a.headerlink {
visibility: visible;
}
div.body p.caption {
text-align: inherit;
}
div.body td {
text-align: left;
}
.field-list ul {
padding-left: 1em;
}
.first {
}
p.rubric {
margin-top: 30px;
font-weight: bold;
}
/* -- sidebars -------------------------------------------------------------- */
div.sidebar {
margin: 0 0 0.5em 1em;
border: 1px solid #ddb;
padding: 7px 7px 0 7px;
background-color: #ffe;
width: 40%;
float: right;
}
p.sidebar-title {
font-weight: bold;
}
/* -- topics ---------------------------------------------------------------- */
div.topic {
border: 1px solid #ccc;
padding: 7px 7px 0 7px;
margin: 10px 0 10px 0;
}
p.topic-title {
font-size: 1.1em;
font-weight: bold;
margin-top: 10px;
}
/* -- admonitions ----------------------------------------------------------- */
div.admonition {
margin-top: 10px;
margin-bottom: 10px;
padding: 7px;
}
div.admonition dt {
font-weight: bold;
}
div.admonition dl {
margin-bottom: 0;
}
p.admonition-title {
margin: 0px 10px 5px 0px;
font-weight: bold;
}
div.body p.centered {
text-align: center;
margin-top: 25px;
}
/* -- tables ---------------------------------------------------------------- */
table.docutils {
border: 0;
border-collapse: collapse;
}
table.docutils td, table.docutils th {
padding: 1px 8px 1px 0;
border-top: 0;
border-left: 0;
border-right: 0;
border-bottom: 1px solid #aaa;
}
table.field-list td, table.field-list th {
border: 0 !important;
}
table.footnote td, table.footnote th {
border: 0 !important;
}
th {
text-align: left;
padding-right: 5px;
}
/* -- other body styles ----------------------------------------------------- */
dl {
margin-bottom: 15px;
}
dd p {
margin-top: 0px;
}
dd ul, dd table {
margin-bottom: 10px;
}
dd {
margin-top: 3px;
margin-bottom: 10px;
margin-left: 30px;
}
dt:target, .highlight {
background-color: #fbe54e;
}
dl.glossary dt {
font-weight: bold;
font-size: 1.1em;
}
.field-list ul {
margin: 0;
padding-left: 1em;
}
.field-list p {
margin: 0;
}
.refcount {
color: #060;
}
.optional {
font-size: 1.3em;
}
.versionmodified {
font-style: italic;
}
.system-message {
background-color: #fda;
padding: 5px;
border: 3px solid red;
}
.footnote:target {
background-color: #ffa
}
.line-block {
display: block;
margin-top: 1em;
margin-bottom: 1em;
}
.line-block .line-block {
margin-top: 0;
margin-bottom: 0;
margin-left: 1.5em;
}
/* -- code displays --------------------------------------------------------- */
pre {
overflow: auto;
}
td.linenos pre {
padding: 5px 0px;
border: 0;
background-color: transparent;
color: #aaa;
}
table.highlighttable {
margin-left: 0.5em;
}
table.highlighttable td {
padding: 0 0.5em 0 0.5em;
}
tt.descname {
background-color: transparent;
font-weight: bold;
font-size: 1.2em;
}
tt.descclassname {
background-color: transparent;
}
tt.xref, a tt {
background-color: transparent;
font-weight: bold;
}
h1 tt, h2 tt, h3 tt, h4 tt, h5 tt, h6 tt {
background-color: transparent;
}
/* -- math display ---------------------------------------------------------- */
img.math {
vertical-align: middle;
}
div.body div.math p {
text-align: center;
}
span.eqno {
float: right;
}
/* -- printout stylesheet --------------------------------------------------- */
@media print {
div.document,
div.documentwrapper,
div.bodywrapper {
margin: 0 !important;
width: 100%;
}
div.sphinxsidebar,
div.related,
div.footer,
#top-link {
display: none;
}
}


@@ -1,230 +0,0 @@
/**
* Sphinx stylesheet -- default theme
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
@import url("basic.css");
/* -- page layout ----------------------------------------------------------- */
body {
font-family: sans-serif;
font-size: 100%;
background-color: #11303d;
color: #000;
margin: 0;
padding: 0;
}
div.document {
background-color: #1c4e63;
}
div.documentwrapper {
float: left;
width: 100%;
}
div.bodywrapper {
margin: 0 0 0 230px;
}
div.body {
background-color: #ffffff;
color: #000000;
padding: 0 20px 30px 20px;
}
div.footer {
color: #ffffff;
width: 100%;
padding: 9px 0 9px 0;
text-align: center;
font-size: 75%;
}
div.footer a {
color: #ffffff;
text-decoration: underline;
}
div.related {
background-color: #133f52;
line-height: 30px;
color: #ffffff;
}
div.related a {
color: #ffffff;
}
div.sphinxsidebar {
}
div.sphinxsidebar h3 {
font-family: 'Trebuchet MS', sans-serif;
color: #ffffff;
font-size: 1.4em;
font-weight: normal;
margin: 0;
padding: 0;
}
div.sphinxsidebar h3 a {
color: #ffffff;
}
div.sphinxsidebar h4 {
font-family: 'Trebuchet MS', sans-serif;
color: #ffffff;
font-size: 1.3em;
font-weight: normal;
margin: 5px 0 0 0;
padding: 0;
}
div.sphinxsidebar p {
color: #ffffff;
}
div.sphinxsidebar p.topless {
margin: 5px 10px 10px 10px;
}
div.sphinxsidebar ul {
margin: 10px;
padding: 0;
color: #ffffff;
}
div.sphinxsidebar a {
color: #98dbcc;
}
div.sphinxsidebar input {
border: 1px solid #98dbcc;
font-family: sans-serif;
font-size: 1em;
}
/* -- body styles ----------------------------------------------------------- */
a {
color: #355f7c;
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
div.body p, div.body dd, div.body li {
text-align: left;
line-height: 130%;
}
div.body h1,
div.body h2,
div.body h3,
div.body h4,
div.body h5,
div.body h6 {
font-family: 'Trebuchet MS', sans-serif;
background-color: #f2f2f2;
font-weight: normal;
color: #20435c;
border-bottom: 1px solid #ccc;
margin: 20px -20px 10px -20px;
padding: 3px 0 3px 10px;
}
div.body h1 { margin-top: 0; font-size: 200%; }
div.body h2 { font-size: 160%; }
div.body h3 { font-size: 140%; }
div.body h4 { font-size: 120%; }
div.body h5 { font-size: 110%; }
div.body h6 { font-size: 100%; }
a.headerlink {
color: #c60f0f;
font-size: 0.8em;
padding: 0 4px 0 4px;
text-decoration: none;
}
a.headerlink:hover {
background-color: #c60f0f;
color: white;
}
div.body p, div.body dd, div.body li {
text-align: left;
line-height: 130%;
}
div.admonition p.admonition-title + p {
display: inline;
}
div.admonition p {
margin-bottom: 5px;
}
div.admonition pre {
margin-bottom: 5px;
}
div.admonition ul, div.admonition ol {
margin-bottom: 5px;
}
div.note {
background-color: #eee;
border: 1px solid #ccc;
}
div.seealso {
background-color: #ffc;
border: 1px solid #ff6;
}
div.topic {
background-color: #eee;
}
div.warning {
background-color: #ffe4e4;
border: 1px solid #f66;
}
p.admonition-title {
display: inline;
}
p.admonition-title:after {
content: ":";
}
pre {
padding: 5px;
background-color: #eeffcc;
color: #333333;
line-height: 120%;
border: 1px solid #ac9;
border-left: none;
border-right: none;
}
tt {
background-color: #ecf0f3;
padding: 0 1px 0 1px;
font-size: 0.95em;
}
.warning tt {
background: #efc2c2;
}
.note tt {
background: #d6d6d6;
}

Binary file not shown (size: 48 B).

Binary file not shown (size: 3.7 KiB).


@@ -1,154 +0,0 @@
(function($) {
$.fn.tweet = function(o){
var s = {
username: ["seaofclouds"], // [string] required, unless you want to display our tweets. :) it can be an array, just do ["username1","username2","etc"]
list: null, //[string] optional name of list belonging to username
avatar_size: null, // [integer] height and width of avatar if displayed (48px max)
count: 3, // [integer] how many tweets to display?
intro_text: null, // [string] do you want text BEFORE your tweets?
outro_text: null, // [string] do you want text AFTER your tweets?
join_text: null, // [string] optional text in between date and tweet, try setting to "auto"
auto_join_text_default: "i said,", // [string] auto text for non verb: "i said" bullocks
auto_join_text_ed: "i", // [string] auto text for past tense: "i" surfed
auto_join_text_ing: "i am", // [string] auto tense for present tense: "i was" surfing
auto_join_text_reply: "i replied to", // [string] auto tense for replies: "i replied to" @someone "with"
auto_join_text_url: "i was looking at", // [string] auto tense for urls: "i was looking at" http:...
loading_text: null, // [string] optional loading text, displayed while tweets load
query: null // [string] optional search query
};
if(o) $.extend(s, o);
$.fn.extend({
linkUrl: function() {
var returning = [];
var regexp = /((ftp|http|https):\/\/(\w+:{0,1}\w*@)?(\S+)(:[0-9]+)?(\/|\/([\w#!:.?+=&%@!\-\/]))?)/gi;
this.each(function() {
returning.push(this.replace(regexp,"<a href=\"$1\">$1</a>"));
});
return $(returning);
},
linkUser: function() {
var returning = [];
var regexp = /[\@]+([A-Za-z0-9-_]+)/gi;
this.each(function() {
returning.push(this.replace(regexp,"<a href=\"http://twitter.com/$1\">@$1</a>"));
});
return $(returning);
},
linkHash: function() {
var returning = [];
var regexp = / [\#]+([A-Za-z0-9-_]+)/gi;
this.each(function() {
returning.push(this.replace(regexp, ' <a href="http://search.twitter.com/search?q=&tag=$1&lang=all&from='+s.username.join("%2BOR%2B")+'">#$1</a>'));
});
return $(returning);
},
capAwesome: function() {
var returning = [];
this.each(function() {
returning.push(this.replace(/\b(awesome)\b/gi, '<span class="awesome">$1</span>'));
});
return $(returning);
},
capEpic: function() {
var returning = [];
this.each(function() {
returning.push(this.replace(/\b(epic)\b/gi, '<span class="epic">$1</span>'));
});
return $(returning);
},
makeHeart: function() {
var returning = [];
this.each(function() {
returning.push(this.replace(/(&lt;)+[3]/gi, "<tt class='heart'>&#x2665;</tt>"));
});
return $(returning);
}
});
function relative_time(time_value) {
var parsed_date = Date.parse(time_value);
var relative_to = (arguments.length > 1) ? arguments[1] : new Date();
var delta = parseInt((relative_to.getTime() - parsed_date) / 1000);
var pluralize = function (singular, n) {
return '' + n + ' ' + singular + (n == 1 ? '' : 's');
};
if(delta < 60) {
return 'less than a minute ago';
} else if(delta < (45*60)) {
return 'about ' + pluralize("minute", parseInt(delta / 60)) + ' ago';
} else if(delta < (24*60*60)) {
return 'about ' + pluralize("hour", parseInt(delta / 3600)) + ' ago';
} else {
return 'about ' + pluralize("day", parseInt(delta / 86400)) + ' ago';
}
}
function build_url() {
var proto = ('https:' == document.location.protocol ? 'https:' : 'http:');
if (s.list) {
return proto+"//api.twitter.com/1/"+s.username[0]+"/lists/"+s.list+"/statuses.json?per_page="+s.count+"&callback=?";
} else if (s.query == null && s.username.length == 1) {
return proto+'//twitter.com/status/user_timeline/'+s.username[0]+'.json?count='+s.count+'&callback=?';
} else {
var query = (s.query || 'from:'+s.username.join('%20OR%20from:'));
return proto+'//search.twitter.com/search.json?&q='+query+'&rpp='+s.count+'&callback=?';
}
}
return this.each(function(){
var list = $('<ul class="tweet_list">').appendTo(this);
var intro = '<p class="tweet_intro">'+s.intro_text+'</p>';
var outro = '<p class="tweet_outro">'+s.outro_text+'</p>';
var loading = $('<p class="loading">'+s.loading_text+'</p>');
if(typeof(s.username) == "string"){
s.username = [s.username];
}
if (s.loading_text) $(this).append(loading);
$.getJSON(build_url(), function(data){
if (s.loading_text) loading.remove();
if (s.intro_text) list.before(intro);
$.each((data.results || data), function(i,item){
// auto join text based on verb tense and content
if (s.join_text == "auto") {
if (item.text.match(/^(@([A-Za-z0-9-_]+)) .*/i)) {
var join_text = s.auto_join_text_reply;
} else if (item.text.match(/(^\w+:\/\/[A-Za-z0-9-_]+\.[A-Za-z0-9-_:%&\?\/.=]+) .*/i)) {
var join_text = s.auto_join_text_url;
} else if (item.text.match(/^((\w+ed)|just) .*/im)) {
var join_text = s.auto_join_text_ed;
} else if (item.text.match(/^(\w*ing) .*/i)) {
var join_text = s.auto_join_text_ing;
} else {
var join_text = s.auto_join_text_default;
}
} else {
var join_text = s.join_text;
};
var from_user = item.from_user || item.user.screen_name;
var profile_image_url = item.profile_image_url || item.user.profile_image_url;
var join_template = '<span class="tweet_join"> '+join_text+' </span>';
var join = ((s.join_text) ? join_template : ' ');
var avatar_template = '<a class="tweet_avatar" href="http://twitter.com/'+from_user+'"><img src="'+profile_image_url+'" height="'+s.avatar_size+'" width="'+s.avatar_size+'" alt="'+from_user+'\'s avatar" title="'+from_user+'\'s avatar" border="0"/></a>';
var avatar = (s.avatar_size ? avatar_template : '');
var date = '<a href="http://twitter.com/'+from_user+'/statuses/'+item.id+'" title="view tweet on twitter">'+relative_time(item.created_at)+'</a>';
var text = '<span class="tweet_text">' +$([item.text]).linkUrl().linkUser().linkHash().makeHeart().capAwesome().capEpic()[0]+ '</span>';
// until we create a template option, arrange the items below to alter a tweet's display.
list.append('<li>' + avatar + date + join + text + '</li>');
list.children('li:first').addClass('tweet_first');
list.children('li:odd').addClass('tweet_even');
list.children('li:even').addClass('tweet_odd');
});
if (s.outro_text) list.after(outro);
});
});
};
})(jQuery);


@@ -1,245 +0,0 @@
/*
* nature.css_t
* ~~~~~~~~~~~~
*
* Sphinx stylesheet -- nature theme.
*
* :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
@import url("basic.css");
/* -- page layout ----------------------------------------------------------- */
body {
font-family: Arial, sans-serif;
font-size: 100%;
background-color: #111;
color: #555;
margin: 0;
padding: 0;
}
div.documentwrapper {
float: left;
width: 100%;
}
div.bodywrapper {
margin: 0 0 0 {{ theme_sidebarwidth|toint }}px;
}
hr {
border: 1px solid #B1B4B6;
}
div.document {
background-color: #eee;
}
div.body {
background-color: #ffffff;
color: #3E4349;
padding: 0 30px 30px 30px;
font-size: 0.9em;
}
div.footer {
color: #555;
width: 100%;
padding: 13px 0;
text-align: center;
font-size: 75%;
}
div.footer a {
color: #444;
text-decoration: underline;
}
div.related {
background-color: #6BA81E;
line-height: 32px;
color: #fff;
text-shadow: 0px 1px 0 #444;
font-size: 0.9em;
}
div.related a {
color: #E2F3CC;
}
div.sphinxsidebar {
font-size: 0.75em;
line-height: 1.5em;
}
div.sphinxsidebarwrapper{
padding: 20px 0;
}
div.sphinxsidebar h3,
div.sphinxsidebar h4 {
font-family: Arial, sans-serif;
color: #222;
font-size: 1.2em;
font-weight: normal;
margin: 0;
padding: 5px 10px;
background-color: #ddd;
text-shadow: 1px 1px 0 white
}
div.sphinxsidebar h4{
font-size: 1.1em;
}
div.sphinxsidebar h3 a {
color: #444;
}
div.sphinxsidebar p {
color: #888;
padding: 5px 20px;
}
div.sphinxsidebar p.topless {
}
div.sphinxsidebar ul {
margin: 10px 20px;
padding: 0;
color: #000;
}
div.sphinxsidebar a {
color: #444;
}
div.sphinxsidebar input {
border: 1px solid #ccc;
font-family: sans-serif;
font-size: 1em;
}
div.sphinxsidebar input[type=text]{
margin-left: 20px;
}
/* -- body styles ----------------------------------------------------------- */
a {
color: #005B81;
text-decoration: none;
}
a:hover {
color: #E32E00;
text-decoration: underline;
}
div.body h1,
div.body h2,
div.body h3,
div.body h4,
div.body h5,
div.body h6 {
font-family: Arial, sans-serif;
background-color: #BED4EB;
font-weight: normal;
color: #212224;
margin: 30px 0px 10px 0px;
padding: 5px 0 5px 10px;
text-shadow: 0px 1px 0 white
}
div.body h1 { border-top: 20px solid white; margin-top: 0; font-size: 200%; }
div.body h2 { font-size: 150%; background-color: #C8D5E3; }
div.body h3 { font-size: 120%; background-color: #D8DEE3; }
div.body h4 { font-size: 110%; background-color: #D8DEE3; }
div.body h5 { font-size: 100%; background-color: #D8DEE3; }
div.body h6 { font-size: 100%; background-color: #D8DEE3; }
a.headerlink {
color: #c60f0f;
font-size: 0.8em;
padding: 0 4px 0 4px;
text-decoration: none;
}
a.headerlink:hover {
background-color: #c60f0f;
color: white;
}
div.body p, div.body dd, div.body li {
line-height: 1.5em;
}
div.admonition p.admonition-title + p {
display: inline;
}
div.highlight{
background-color: white;
}
div.note {
background-color: #eee;
border: 1px solid #ccc;
}
div.seealso {
background-color: #ffc;
border: 1px solid #ff6;
}
div.topic {
background-color: #eee;
}
div.warning {
background-color: #ffe4e4;
border: 1px solid #f66;
}
p.admonition-title {
display: inline;
}
p.admonition-title:after {
content: ":";
}
pre {
padding: 10px;
background-color: White;
color: #222;
line-height: 1.2em;
border: 1px solid #C6C9CB;
font-size: 1.1em;
margin: 1.5em 0 1.5em 0;
-webkit-box-shadow: 1px 1px 1px #d8d8d8;
-moz-box-shadow: 1px 1px 1px #d8d8d8;
}
tt {
background-color: #ecf0f3;
color: #222;
/* padding: 1px 2px; */
font-size: 1.1em;
font-family: monospace;
}
.viewcode-back {
font-family: Arial, sans-serif;
}
div.viewcode-block:target {
background-color: #f4debf;
border-top: 1px solid #ac9;
border-bottom: 1px solid #ac9;
}

Binary file not shown (size: 3.6 KiB).


@@ -1,62 +0,0 @@
.highlight .hll { background-color: #ffffcc }
.highlight { background: #eeffcc; }
.highlight .c { color: #408090; font-style: italic } /* Comment */
.highlight .err { border: 1px solid #FF0000 } /* Error */
.highlight .k { color: #007020; font-weight: bold } /* Keyword */
.highlight .o { color: #666666 } /* Operator */
.highlight .cm { color: #408090; font-style: italic } /* Comment.Multiline */
.highlight .cp { color: #007020 } /* Comment.Preproc */
.highlight .c1 { color: #408090; font-style: italic } /* Comment.Single */
.highlight .cs { color: #408090; background-color: #fff0f0 } /* Comment.Special */
.highlight .gd { color: #A00000 } /* Generic.Deleted */
.highlight .ge { font-style: italic } /* Generic.Emph */
.highlight .gr { color: #FF0000 } /* Generic.Error */
.highlight .gh { color: #000080; font-weight: bold } /* Generic.Heading */
.highlight .gi { color: #00A000 } /* Generic.Inserted */
.highlight .go { color: #333333 } /* Generic.Output */
.highlight .gp { color: #c65d09; font-weight: bold } /* Generic.Prompt */
.highlight .gs { font-weight: bold } /* Generic.Strong */
.highlight .gu { color: #800080; font-weight: bold } /* Generic.Subheading */
.highlight .gt { color: #0044DD } /* Generic.Traceback */
.highlight .kc { color: #007020; font-weight: bold } /* Keyword.Constant */
.highlight .kd { color: #007020; font-weight: bold } /* Keyword.Declaration */
.highlight .kn { color: #007020; font-weight: bold } /* Keyword.Namespace */
.highlight .kp { color: #007020 } /* Keyword.Pseudo */
.highlight .kr { color: #007020; font-weight: bold } /* Keyword.Reserved */
.highlight .kt { color: #902000 } /* Keyword.Type */
.highlight .m { color: #208050 } /* Literal.Number */
.highlight .s { color: #4070a0 } /* Literal.String */
.highlight .na { color: #4070a0 } /* Name.Attribute */
.highlight .nb { color: #007020 } /* Name.Builtin */
.highlight .nc { color: #0e84b5; font-weight: bold } /* Name.Class */
.highlight .no { color: #60add5 } /* Name.Constant */
.highlight .nd { color: #555555; font-weight: bold } /* Name.Decorator */
.highlight .ni { color: #d55537; font-weight: bold } /* Name.Entity */
.highlight .ne { color: #007020 } /* Name.Exception */
.highlight .nf { color: #06287e } /* Name.Function */
.highlight .nl { color: #002070; font-weight: bold } /* Name.Label */
.highlight .nn { color: #0e84b5; font-weight: bold } /* Name.Namespace */
.highlight .nt { color: #062873; font-weight: bold } /* Name.Tag */
.highlight .nv { color: #bb60d5 } /* Name.Variable */
.highlight .ow { color: #007020; font-weight: bold } /* Operator.Word */
.highlight .w { color: #bbbbbb } /* Text.Whitespace */
.highlight .mf { color: #208050 } /* Literal.Number.Float */
.highlight .mh { color: #208050 } /* Literal.Number.Hex */
.highlight .mi { color: #208050 } /* Literal.Number.Integer */
.highlight .mo { color: #208050 } /* Literal.Number.Oct */
.highlight .sb { color: #4070a0 } /* Literal.String.Backtick */
.highlight .sc { color: #4070a0 } /* Literal.String.Char */
.highlight .sd { color: #4070a0; font-style: italic } /* Literal.String.Doc */
.highlight .s2 { color: #4070a0 } /* Literal.String.Double */
.highlight .se { color: #4070a0; font-weight: bold } /* Literal.String.Escape */
.highlight .sh { color: #4070a0 } /* Literal.String.Heredoc */
.highlight .si { color: #70a0d0; font-style: italic } /* Literal.String.Interpol */
.highlight .sx { color: #c65d09 } /* Literal.String.Other */
.highlight .sr { color: #235388 } /* Literal.String.Regex */
.highlight .s1 { color: #4070a0 } /* Literal.String.Single */
.highlight .ss { color: #517918 } /* Literal.String.Symbol */
.highlight .bp { color: #007020 } /* Name.Builtin.Pseudo */
.highlight .vc { color: #bb60d5 } /* Name.Variable.Class */
.highlight .vg { color: #bb60d5 } /* Name.Variable.Global */
.highlight .vi { color: #bb60d5 } /* Name.Variable.Instance */
.highlight .il { color: #208050 } /* Literal.Number.Integer.Long */


@@ -1,94 +0,0 @@
body {
background: #fff url(../_static/header_bg.jpg) top left no-repeat;
}
#header {
width: 950px;
margin: 0 auto;
height: 102px;
}
#header h1#logo {
background: url(../_static/openstack_logo.png) top left no-repeat;
display: block;
float: left;
text-indent: -9999px;
width: 175px;
height: 55px;
}
#navigation {
background: url(../_static/header-line.gif) repeat-x 0 bottom;
display: block;
float: left;
margin: 27px 0 0 25px;
padding: 0;
}
#navigation li{
float: left;
display: block;
margin-right: 25px;
}
#navigation li a {
display: block;
font-weight: normal;
text-decoration: none;
background-position: 50% 0;
padding: 20px 0 5px;
color: #353535;
font-size: 14px;
}
#navigation li a.current, #navigation li a.section {
border-bottom: 3px solid #cf2f19;
color: #cf2f19;
}
div.related {
background-color: #cde2f8;
border: 1px solid #b0d3f8;
}
div.related a {
color: #4078ba;
text-shadow: none;
}
div.sphinxsidebarwrapper {
padding-top: 0;
}
pre {
color: #555;
}
div.documentwrapper h1, div.documentwrapper h2, div.documentwrapper h3, div.documentwrapper h4, div.documentwrapper h5, div.documentwrapper h6 {
font-family: 'PT Sans', sans-serif !important;
color: #264D69;
border-bottom: 1px dotted #C5E2EA;
padding: 0;
background: none;
padding-bottom: 5px;
}
div.documentwrapper h3 {
color: #CF2F19;
}
a.headerlink {
color: #fff !important;
margin-left: 5px;
background: #CF2F19 !important;
}
div.body {
margin-top: -25px;
margin-left: 230px;
}
div.document {
width: 960px;
margin: 0 auto;
}


@@ -1,83 +0,0 @@
{% extends "basic/layout.html" %}
{% set css_files = css_files + ['_static/tweaks.css'] %}
{% set script_files = script_files + ['_static/jquery.tweet.js'] %}
{%- macro sidebar() %}
{%- if not embedded %}{% if not theme_nosidebar|tobool %}
<div class="sphinxsidebar">
<div class="sphinxsidebarwrapper">
{%- block sidebarlogo %}
{%- if logo %}
<p class="logo"><a href="{{ pathto(master_doc) }}">
<img class="logo" src="{{ pathto('_static/' + logo, 1) }}" alt="Logo"/>
</a></p>
{%- endif %}
{%- endblock %}
{%- block sidebartoc %}
{%- if display_toc %}
<h3><a href="{{ pathto(master_doc) }}">{{ _('Table Of Contents') }}</a></h3>
{{ toc }}
{%- endif %}
{%- endblock %}
{%- block sidebarrel %}
{%- if prev %}
<h4>{{ _('Previous topic') }}</h4>
<p class="topless"><a href="{{ prev.link|e }}"
title="{{ _('previous chapter') }}">{{ prev.title }}</a></p>
{%- endif %}
{%- if next %}
<h4>{{ _('Next topic') }}</h4>
<p class="topless"><a href="{{ next.link|e }}"
title="{{ _('next chapter') }}">{{ next.title }}</a></p>
{%- endif %}
{%- endblock %}
{%- block sidebarsourcelink %}
{%- if show_source and has_source and sourcename %}
<h3>{{ _('This Page') }}</h3>
<ul class="this-page-menu">
<li><a href="{{ pathto('_sources/' + sourcename, true)|e }}"
rel="nofollow">{{ _('Show Source') }}</a></li>
</ul>
{%- endif %}
{%- endblock %}
{%- if customsidebar %}
{% include customsidebar %}
{%- endif %}
{%- block sidebarsearch %}
{%- if pagename != "search" %}
<div id="searchbox" style="display: none">
<h3>{{ _('Quick search') }}</h3>
<form class="search" action="{{ pathto('search') }}" method="get">
<input type="text" name="q" size="18" />
<input type="submit" value="{{ _('Go') }}" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
<p class="searchtip" style="font-size: 90%">
{{ _('Enter search terms or a module, class or function name.') }}
</p>
</div>
<script type="text/javascript">$('#searchbox').show(0);</script>
{%- endif %}
{%- endblock %}
</div>
</div>
{%- endif %}{% endif %}
{%- endmacro %}
{% block relbar1 %}{% endblock relbar1 %}
{% block header %}
<div id="header">
<h1 id="logo"><a href="http://www.openstack.org/">OpenStack</a></h1>
<ul id="navigation">
<li><a href="http://www.openstack.org/" title="Go to the Home page" class="link">Home</a></li>
<li><a href="http://www.openstack.org/projects/" title="Go to the OpenStack Projects page">Projects</a></li>
<li><a href="http://www.openstack.org/user-stories/" title="Go to the User Stories page" class="link">User Stories</a></li>
<li><a href="http://www.openstack.org/community/" title="Go to the Community page" class="link">Community</a></li>
<li><a href="http://www.openstack.org/blog/" title="Go to the OpenStack Blog">Blog</a></li>
<li><a href="http://wiki.openstack.org/" title="Go to the OpenStack Wiki">Wiki</a></li>
<li><a href="http://docs.openstack.org/" title="Go to OpenStack Documentation" class="current">Documentation</a></li>
</ul>
</div>
{% endblock %}


@@ -1,4 +0,0 @@
[theme]
inherit = basic
stylesheet = nature.css
pygments_style = tango


@@ -1,197 +0,0 @@
============================
cURL Commands for API ver. 2
============================
.. _index:
Resources
---------
- `Plan`_
- `Role`_
Plan
----
Example of JSON Representation of Plan
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
{
"created_at": "2014-09-26T20:23:14.222815",
"description": "Development testing cloud",
"name": "dev-cloud",
"parameters":
[
{
"constraints":
[
{
"constraint_type": "range",
"definition":
{
"min": "0"
},
"description": "Can't be less than zero"
}
],
"default": "0",
"description": "The number of cinder storage nodes to deploy"
"hidden": false,
"label": "The number of cinder storage nodes to deploy",
"name": "Cinder-Storage-1::count",
"parameter_type": "number",
"value": "0"
},
{
"constraints": []
"default": "guest",
"description": "The password for RabbitMQ",
"hidden": true,
"label": null,
"name": "compute-1::RabbitPassword",
"parameter_type: "string"
"value": "secret-password"
}
],
"roles":
[
{
"description": "OpenStack hypervisor node. Can be wrapped in a ResourceGroup for scaling.\n",
"name": "compute",
"uuid": "b7b1583c-5c80-481f-a25b-708ed4a39734",
"version": 1
}
],
"updated_at": null,
"uuid": "53268a27-afc8-4b21-839f-90227dd7a001"
}
List All Plans
~~~~~~~~~~~~~~
::
curl -v -X GET -H 'Content-Type: application/json' -H 'Accept: application/json' http://0.0.0.0:8585/v2/plans/
Retrieve a Single Plan
~~~~~~~~~~~~~~~~~~~~~~
::
curl -v -X GET -H 'Content-Type: application/json' -H 'Accept: application/json' http://0.0.0.0:8585/v2/plans/53268a27-afc8-4b21-839f-90227dd7a001
Create a New Plan
~~~~~~~~~~~~~~~~~
::
curl -v -X POST -H 'Content-Type: application/json' -H 'Accept: application/json' -d '
{
"name": "dev-cloud",
"description": "Development testing cloud",
}
' http://0.0.0.0:8585/v2/plans
This command will create a new Plan without any Roles associated with it.
To assign a Role to a Plan, see `How to Add a Role to a Plan <#adding-a-role-to-a-plan>`_.
Delete an Existing Plan
~~~~~~~~~~~~~~~~~~~~~~~
::
curl -v -X DELETE http://localhost:8585/v2/plans/53268a27-afc8-4b21-839f-90227dd7a001
Changing a Plan's Configuration Values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
curl -v -X PATCH -H 'Content-Type: application/json' -H 'Accept: application/json' -d '
[
{
"name" : "database_host",
"value" : "10.11.12.13"
},
{
"name" : "database_password",
"value" : "secret"
}
]
' http://0.0.0.0:8585/v2/plans/53268a27-afc8-4b21-839f-90227dd7a001
You can only change parameters that already exist in the Plan.
Retrieve a Plan's Template Files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
curl -v -X GET -H 'Content-Type: application/json' -H 'Accept: application/json' http://0.0.0.0:8585/v2/plans/53268a27-afc8-4b21-839f-90227dd7a001/templates
Example of JSON representation:
::
{
"environment.yaml" : "... content of template file ...",
"plan.yaml" : "... content of template file ...",
"provider-compute-1.yaml" : "... content of template file ..."
}
`back to top <#index>`_
Role
----
Example of JSON Representation of Role
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
{
"description": "OpenStack hypervisor node. Can be wrapped in a ResourceGroup for scaling.\n",
"name": "compute",
"uuid": "b7b1583c-5c80-481f-a25b-708ed4a39734",
"version": 1
}
Retrieving Possible Roles
~~~~~~~~~~~~~~~~~~~~~~~~~
::
curl -v -X GET -H 'Content-Type: application/json' -H 'Accept: application/json' http://0.0.0.0:8585/v2/roles/
Adding a Role to a Plan
~~~~~~~~~~~~~~~~~~~~~~~
::
curl -v -X POST -H 'Content-Type: application/json' -H 'Accept: application/json' -d '
{
"uuid": "b7b1583c-5c80-481f-a25b-708ed4a39734"
}
' http://0.0.0.0:8585/v2/plans/53268a27-afc8-4b21-839f-90227dd7a001
Removing a Role from a Plan
~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
curl -v -X DELETE http://localhost:8585/v2/plans/53268a27-afc8-4b21-839f-90227dd7a001/roles/b7b1583c-5c80-481f-a25b-708ed4a39734
`back to top <#index>`_


@@ -1,92 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc',
'sphinx.ext.viewcode',
'sphinxcontrib.httpdomain',
'sphinxcontrib.pecanwsme.rest',
'wsmeext.sphinxext',
]
wsme_protocols = ['restjson', 'restxml']
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Tuskar'
copyright = u'OpenStack Foundation'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
from tuskar import version as tuskar_version
# The full version, including alpha/beta/rc tags.
release = tuskar_version.version_info.release_string()
# The short X.Y version.
version = tuskar_version.version_info.version_string()
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['tuskar.']
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
html_theme_path = ["."]
html_theme = '_theme'
html_static_path = ['_static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
(
'index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation',
'manual'
),
]


@@ -1,157 +0,0 @@
======================
Contributing to Tuskar
======================
Tuskar follows the OpenStack development processes for code and
communication. The `repository is hosted on git.openstack.org
<http://git.openstack.org/cgit/openstack/tuskar>`_, `bugs and
blueprints are on Launchpad <https://launchpad.net/tuskar>`_ and
we use the openstack-dev mailing list (subject `[tuskar]`) and
the `#tripleo` IRC channel for communication.
As Tuskar is under the TripleO umbrella of projects you will also
want to look at the `TripleO contributing guidelines
<http://docs.openstack.org/developer/tripleo-
incubator/CONTRIBUTING.html>`_.
Coding Standards
----------------
We comply with the `OpenStack coding standards
<http://docs.openstack.org/developer/hacking/>`_.
Be sure to familiarise yourself with `OpenStack's Gerrit Workflow
<http://docs.openstack.org/infra/manual/developers.html#development-workflow>`_.
Before submitting your code, please make sure you have completed
the following checklist:
#. Update the API docs (if needed)
#. Update the tests (if needed)
Finding your way around
~~~~~~~~~~~~~~~~~~~~~~~
There are various pieces of the codebase that may not be
immediately obvious to a newcomer to the project, so we attempt
to explain some of that in this section.
Where do the tuskar commands come from? (tuskar-api, tuskar-dbsync, etc)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The project-specific commands live in tuskar/cmd, and are
implementations that use the oslo.config project as a base. They
are generated and put into your venv when you run 'python
setup.py develop'. Adding a new one consists of:
#. Creating a new file in tuskar/cmd
#. Adding the appropriate name and package reference to the
entry\_points section of setup.cfg (see the sketch below)
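For illustration only (the module name, command name and body below are
hypothetical, not copied from the Tuskar code), a minimal oslo.config-based
command module of the kind described above might look roughly like this:

.. code-block:: python

    # Hypothetical tuskar/cmd/hello.py; the real command modules also wire in
    # Tuskar's configuration and logging setup.
    import sys

    from oslo_config import cfg

    CONF = cfg.CONF


    def main(argv=None):
        # oslo.config parses the command line, including any --config-file flags.
        CONF(argv if argv is not None else sys.argv[1:], project='tuskar')
        print('tuskar-hello: configuration loaded, a real command would start here')


    if __name__ == '__main__':
        main()

The matching entry in setup.cfg then maps a command name to that function
under ``console_scripts``, for example ``tuskar-hello = tuskar.cmd.hello:main``
(again an invented name, used only to show the pattern).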
How do I add a new controller?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Controllers are contained in tuskar/api/controllers/v2.py. To add
a new controller, you need to add an 'HTTP Representation' of
whatever model you wish to expose with this controller. This is a
simple python object that extends Base, and describes the key and
value types that the object will return. For example, say there
is a Foo model object you wish to return.
.. code-block:: python
    class Foo(Base):
        id = int
        name = wtypes.text
        fred = Fred  # Fred is another object defined in this file
Then add a controller for it (anywhere above the Controller class,
which is the last class in the file). For example:
.. code-block:: python
    class FoosController(rest.RestController):
        @wsme_pecan.wsexpose([Foo])
        def get_all(self):
            """Do some things to get your list of Foos."""
            result = []
            return result
Lastly, add a reference to the controller in the Controller class at
the bottom of the file, as follows:
.. code-block:: python
    class Controller(object):
        foos = FoosController()
The name you give the controller above will be how it is accessed by
the client, so in the above case, you could get the list of foos
with:
.. code-block:: bash
curl http://0.0.0.0:8585/v1/foos
For something simple, like a proof-of-concept controller that doesn't
return any objects, you can return plain text, like so:
.. code-block:: python
    class FarkleController(rest.RestController):
        @wsme_pecan.wsexpose(None, wtypes.text)
        def get_all(self):
            return "Hi, I am farkle!"
Where are my changes to the app?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You may make a change to, say, a controller, and wonder why your
change does not seem to happen when you call your curl command on
that resource. This is because, at least at the current time, you
must ctrl+c to kill the tuskar-api server, and then restart it
again to pick up your changes.
How do I create a new model?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Models live in tuskar/db/sqlalchemy/. There are two files here of
relevance for describing the model (we will get to defining the
table in the next section), api.py and models.py. The models.py
file contains the definition of the columns to expose to the
client for the model objects, as well as a mapping of the object
in this file to the table name defined in the migration (below). In
api.py, we have utility methods, as well as validation rules and
other custom methods for interacting with the models.
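As a rough sketch of that split (the model below is invented for
illustration and is not one of Tuskar's actual models), a models.py-style
definition pairs the columns exposed to the client with the table name
created by a migration:

.. code-block:: python

    # Purely illustrative SQLAlchemy model; not taken from tuskar/db/sqlalchemy.
    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class Widget(Base):
        """A made-up model mapped to a hypothetical 'widgets' table."""

        __tablename__ = 'widgets'

        id = Column(Integer, primary_key=True)
        name = Column(String(length=64), nullable=False)

The helpers in api.py would then be the natural place to query and validate
such objects.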
How do I define the table for my new model?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is described in a migration file, located in
tuskar/db/sqlalchemy/migrate\_repo/versions/. Each new table or
change to an existing table should get a new file here with a
descriptive name, starting with a 3 digit number. Each new file
should increment the number to avoid collisions. The primary part of
this file is the definition of your table, which is done via a Table
object, and you describe the columns using, surprisingly enough, a
Column object. There are upgrade and downgrade methods in these
migrations to describe what to do for creating a given set of
tables, as well as dropping them, or rolling back to what was done
before the upgrade.
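A minimal sketch in that sqlalchemy-migrate style (a hypothetical table and
file name, not an actual Tuskar migration) looks like this:

.. code-block:: python

    # Hypothetical tuskar/db/sqlalchemy/migrate_repo/versions/00X_add_widgets.py
    from sqlalchemy import Column, Integer, MetaData, String, Table


    def upgrade(migrate_engine):
        # Create the new table when migrating forward.
        meta = MetaData()
        meta.bind = migrate_engine
        widgets = Table(
            'widgets', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(length=64), nullable=False),
        )
        widgets.create()


    def downgrade(migrate_engine):
        # Drop the table again when rolling back.
        meta = MetaData()
        meta.bind = migrate_engine
        Table('widgets', meta, autoload=True).drop()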
Writing and Running tests
~~~~~~~~~~~~~~~~~~~~~~~~~
We use testtools for our unit tests, and mox for mock objects.
You can run tests using Tox:
.. code-block:: bash
$ tox
This will run tests under Python 2.7 and verify `PEP 8
<http://www.python.org/dev/peps/pep-0008/>`_ compliance. The identical test
suite is run by OpenStack's Jenkins whenever you send a patch.


@@ -1,60 +0,0 @@
======
Tuskar
======
Tuskar is a management service for planning TripleO deployments.
Interested in seeing the full Tuskar and Tuskar UI setup? `Watch
the demo. <https://www.youtube.com/watch?v=--WWdJXmf8o>`_
High-Level Overview
===================
*TODO* Add project overview
- *TODO* feature examples
- *TODO* link to high-level portion of FAQ
- :doc:`Recommended reading <recommended-reading>`
Related Projects
----------------
- `tuskar-ui <http://git.openstack.org/cgit/openstack/tuskar-
ui>`_ - tuskar-ui provides dashboard access to Tuskar
functionality as a Horizon plugin. See the `Tuskar UI
documentation <http://tuskar-ui.readthedocs.org/en/latest/>`_
- `python-tuskarclient <http://git.openstack.org/cgit/openstack
/python-tuskarclient>`_ - A Python client for the Tuskar API;
it is used by tuskar-ui.
Developer Information
=====================
Install and Contribute
----------------------
.. toctree::
:maxdepth: 2
install
contributing
recommended-reading
API version 2
-------------
.. toctree::
:maxdepth: 2
api/curl-v2
Contact Us
==========
Join us on IRC (Internet Relay Chat)::
Network: Freenode (irc.freenode.net/tuskar)
Channel: #tripleo


@@ -1,184 +0,0 @@
============================
Developer Installation Guide
============================
The Tuskar source code should be pulled directly from git.

.. code-block:: bash

    git clone https://git.openstack.org/openstack/tuskar
Dependencies
------------
Setting up a local environment for development can be done with
tox.

.. code-block:: bash

    # install prerequisites
    * Fedora/RHEL:
    $ sudo yum install python-devel python-pip libxml2-devel \
      libxslt-devel postgresql-devel mariadb-devel
    * Ubuntu/Debian:
    $ sudo apt-get install python-dev python-pip libxml2-dev \
      libxslt-dev libpq-dev libmysqlclient-dev

.. note::

    If you wish to run Tuskar against MySQL or PostgreSQL you will
    also need to install and configure them at this point. Otherwise
    you can run Tuskar with an sqlite database.
To run the Tuskar test suite you will also need to install tox.

.. code-block:: bash

    $ sudo pip install tox

.. note::

    An `issue with tox <https://bugs.launchpad.net/openstack-ci/+bug/1274135>`_
    requires that you use a version < 1.7.0 or >= 1.7.2.

Now create your virtualenv.

.. code-block:: bash

    $ cd <your_src_dir>/tuskar
    $ tox -e venv

.. note::

    If ``pip install`` fails due to an outdated setuptools, you
    can try to update it first.

    .. code-block:: bash

        $ sudo pip install --upgrade setuptools
To run the test suite use the following command. This will run the
tests against Python 2.7 and run the `flake8
<https://flake8.readthedocs.org>`_ code linting.

.. code-block:: bash

    $ tox
Configuration
-------------
Copy the sample configuration file:

.. code-block:: bash

    $ cp etc/tuskar/tuskar.conf.sample etc/tuskar/tuskar.conf

We need to tell Tuskar which database to connect to. Edit the
``database`` section of the config file and change

.. code-block:: ini

    #connection=<None>

to

.. code-block:: ini

    connection=sqlite:///tuskar/tuskar.sqlite

.. note::

    If you are using a different database backend, you will need
    to enter a `SQLAlchemy compatible connection string
    <http://docs.sqlalchemy.org/en/latest/core/engines.html#database-urls>`_
    for this setting.
We need to initialise the database schema.

.. code-block:: bash

    # activate the virtualenv
    $ source .tox/venv/bin/activate

    # if you delete tuskar.sqlite this will force creation of the tables
    # again - e.g. if you added new resource table definitions in an
    # existing migration file
    $ tuskar-dbsync --config-file etc/tuskar/tuskar.conf

You can verify this was successful (in addition to seeing no
error output) with:

.. code-block:: bash

    $ sqlite3 tuskar/tuskar.sqlite .schema

Then, launch the app:

.. code-block:: bash

    $ tuskar-api --config-file etc/tuskar/tuskar.conf

You can then verify that everything worked by running:

.. code-block:: bash

    $ curl -v -X GET -H 'Accept: application/json' http://0.0.0.0:8585/v2/plans/ | python -mjson.tool

This command should return JSON with an empty result set.
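If you would rather check the endpoint from Python than from curl, a
small sketch using only the Python 2 standard library (matching the
era of this codebase, and assuming the local noauth setup above) is:

.. code-block:: python

    import json
    import urllib2

    # Query the local Tuskar API started above and pretty-print the result.
    request = urllib2.Request('http://0.0.0.0:8585/v2/plans/',
                              headers={'Accept': 'application/json'})
    response = urllib2.urlopen(request)
    print(json.dumps(json.loads(response.read()), indent=2))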
Running Tuskar API
------------------
Whenever you want to run the API again, just switch to the
virtualenv and run the ``tuskar-api`` command.

.. code-block:: bash

    $ source .tox/venv/bin/activate
    $ tuskar-api --config-file etc/tuskar/tuskar.conf
Loading Initial Roles
---------------------
Tuskar needs to be provided with a set of roles that can be added
to a deployment plan. The following steps will add the roles from
the TripleO Heat Templates repository.

.. code-block:: bash

    $ git clone https://git.openstack.org/openstack/tripleo-heat-templates
    $ tuskar-load-roles --config-file etc/tuskar/tuskar.conf \
        -r tripleo-heat-templates/compute.yaml \
        -r tripleo-heat-templates/controller.yaml

After this, if the Tuskar API isn't running, start it with the
command above; the following curl command should then show you the
loaded roles.

.. code-block:: bash

    $ curl -v -X GET -H 'Accept: application/json' http://0.0.0.0:8585/v2/roles/ | python -mjson.tool
Keystone Configuration
----------------------
By default, Tuskar is configured to skip authentication for REST
API calls. Keystone authentication can be enabled by making the
appropriate changes to the ``tuskar.conf`` file as described in
the `keystone documentation
<http://docs.openstack.org/developer/keystone/configuringservices.html>`_.
Contributing
------------
For additional developer information, take a look at
:doc:`the contributing guide <contributing>`.

View File

@ -1,30 +0,0 @@
===================
Recommended Reading
===================
Tuskar Design Discussions
-------------------------
- `Juno Planning <https://wiki.openstack.org/wiki/TripleO/TuskarJunoPlanning>`_
- `Template storage planning <https://wiki.openstack.org/wiki/TripleO/TuskarJunoPlanning/TemplateBackend>`_
- `TripleO Specifications <http://git.openstack.org/cgit/openstack/tripleo-specs/>`_
Relevant OpenStack Projects
---------------------------
- `TripleO <http://docs.openstack.org/developer/tripleo-incubator/>`_
- `Heat <http://docs.openstack.org/developer/heat/>`_
- `oslo.db <http://docs.openstack.org/developer/oslo.db/>`_
- `oslo.config <http://docs.openstack.org/developer/oslo.config>`_
- `hacking <http://docs.openstack.org/developer/hacking>`_ - enforces
  the OpenStack community coding style guidelines
General Python/Frameworks
-------------------------
- `dive into python <http://www.diveintopython.net>`_
- `pecan <http://pecan.readthedocs.org/en/latest/>`_
- `sqlalchemy <http://docs.sqlalchemy.org/en/rel_0_8/>`_
- `style guide <http://www.python.org/dev/peps/pep-0008/>`_ - PEP 8,
  the baseline for the 'hacking' guidelines above

View File

@ -1,4 +0,0 @@
keystone_url: "http://10.34.32.181:5000/v2.0/"
nova_username: "admin"
nova_tenantname: "admin"
nova_password: "bcfa838f13e64436"

View File

@ -1,6 +0,0 @@
{
"admin_api": "is_admin:True",
"admin_or_owner": "is_admin:True or project_id:%(project_id)s",
"context_is_admin": "role:admin",
"default": "rule:admin_or_owner"
}

View File

@ -1,484 +0,0 @@
[DEFAULT]
#
# Options defined in tuskar.netconf
#
# ip address of this host (string value)
#my_ip=10.0.0.1
# use ipv6 (boolean value)
#use_ipv6=false
#
# Options defined in tuskar.api
#
# IP for the Tuskar API server to bind to (string value)
#tuskar_api_bind_ip=0.0.0.0
# The port for the Tuskar API server (integer value)
#tuskar_api_port=8585
# Local path holding tripleo-heat-templates (string value)
#tht_local_dir=/etc/tuskar/tripleo-heat-templates/
#
# Options defined in tuskar.api.app
#
# Method to use for auth: noauth or keystone. (string value)
#auth_strategy=keystone
#
# Options defined in tuskar.common.exception
#
# make exception message format errors fatal (boolean value)
#fatal_exception_format_errors=false
#
# Options defined in tuskar.common.paths
#
# Directory where the nova python module is installed (string
# value)
#pybasedir=/usr/lib/python/site-packages/tuskar
# Directory where nova binaries are installed (string value)
#bindir=$pybasedir/bin
# Top-level directory for maintaining nova's state (string
# value)
#state_path=$pybasedir
#
# Options defined in tuskar.db.sqlalchemy.models
#
# MySQL engine (string value)
#mysql_engine=InnoDB
#
# Options defined in tuskar.openstack.common.log
#
# Print debugging output (set logging level to DEBUG instead
# of default WARNING level). (boolean value)
#debug=false
# Print more verbose output (set logging level to INFO instead
# of default WARNING level). (boolean value)
#verbose=false
# Log output to standard error (boolean value)
#use_stderr=true
# Format string to use for log messages with context (string
# value)
#logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages without context
# (string value)
#logging_default_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Data to append to log format when level is DEBUG (string
# value)
#logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format
# (string value)
#logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s
# List of logger=LEVEL pairs (list value)
#default_log_levels=amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN
# Publish error events (boolean value)
#publish_errors=false
# Make deprecations fatal (boolean value)
#fatal_deprecations=false
# If an instance is passed with the log message, format it
# like this (string value)
#instance_format="[instance: %(uuid)s] "
# If an instance UUID is passed with the log message, format
# it like this (string value)
#instance_uuid_format="[instance: %(uuid)s] "
# The name of logging configuration file. It does not disable
# existing loggers, but just appends specified logging
# configuration to any other existing logging options. Please
# see the Python logging module documentation for details on
# logging configuration files. (string value)
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append=<None>
# DEPRECATED. A logging.Formatter log message format string
# which may use any of the available logging.LogRecord
# attributes. This option is deprecated. Please use
# logging_context_format_string and
# logging_default_format_string instead. (string value)
#log_format=<None>
# Format string for %%(asctime)s in log records. Default:
# %(default)s (string value)
#log_date_format=%Y-%m-%d %H:%M:%S
# (Optional) Name of log file to output to. If no default is
# set, logging will go to stdout. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file=<None>
# (Optional) The base directory used for relative --log-file
# paths (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir=<None>
# Use syslog for logging. Existing syslog format is DEPRECATED
# during I, and then will be changed in J to honor RFC5424
# (boolean value)
#use_syslog=false
# (Optional) Use syslog rfc5424 format for logging. If
# enabled, will add APP-NAME (RFC5424) before the MSG part of
# the syslog message. The old format without APP-NAME is
# deprecated in I, and will be removed in J. (boolean value)
#use_syslog_rfc_format=false
# Syslog facility to receive log lines (string value)
#syslog_log_facility=LOG_USER
#
# Options defined in tuskar.storage.drivers.sqlalchemy
#
# MySQL engine (string value)
#mysql_engine=InnoDB
[database]
#
# Options defined in oslo.db
#
# The file name to use with SQLite. (string value)
#sqlite_db=oslo.sqlite
# If True, SQLite uses synchronous mode. (boolean value)
#sqlite_synchronous=true
# The back end to use for the database. (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend=sqlalchemy
# The SQLAlchemy connection string to use to connect to the
# database. (string value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection=<None>
# The SQLAlchemy connection string to use to connect to the
# slave database. (string value)
#slave_connection=<None>
# The SQL mode to be used for MySQL sessions. This option,
# including the default, overrides any server-set SQL mode. To
# use whatever SQL mode is set by the server configuration,
# set this to no value. Example: mysql_sql_mode= (string
# value)
#mysql_sql_mode=TRADITIONAL
# Timeout before idle SQL connections are reaped. (integer
# value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout=3600
# Minimum number of SQL connections to keep open in a pool.
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size=1
# Maximum number of SQL connections to keep open in a pool.
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size=<None>
# Maximum number of database connection retries during
# startup. Set to -1 to specify an infinite retry count.
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries=10
# Interval between retries of opening a SQL connection.
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval=10
# If set, use this value for max_overflow with SQLAlchemy.
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow=<None>
# Verbosity of SQL debugging information: 0=None,
# 100=Everything. (integer value)
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug=0
# Add Python stack traces to SQL as comment strings. (boolean
# value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace=false
# If set, use this value for pool_timeout with SQLAlchemy.
# (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout=<None>
# Enable the experimental use of database reconnect on
# connection lost. (boolean value)
#use_db_reconnect=false
# Seconds between retries of a database transaction. (integer
# value)
#db_retry_interval=1
# If True, increases the interval between retries of a
# database operation up to db_max_retry_interval. (boolean
# value)
#db_inc_retry_interval=true
# If db_inc_retry_interval is set, the maximum seconds between
# retries of a database operation. (integer value)
#db_max_retry_interval=10
# Maximum retries in case of connection error or deadlock
# error before error is raised. Set to -1 to specify an
# infinite retry count. (integer value)
#db_max_retries=20
[heat]
#
# Options defined in tuskar.heat.client
#
# Name of the overcloud Heat stack (string value)
#stack_name=overcloud
# Heat API service type registered in keystone (string value)
#service_type=orchestration
# Heat API service endpoint type in keystone (string value)
#endpoint_type=publicURL
[heat_keystone]
#
# Options defined in tuskar.heat.client
#
# The name of a user the overcloud is deployed on behalf of
# (string value)
#username=admin
# The pass of a user the overcloud is deployed on behalf of
# (string value)
#password=<None>
# The tenant name the overcloud is deployed on behalf of
# (string value)
#tenant_name=admin
# Keystone authentication URL (string value)
#auth_url=http://localhost:35357/v2.0
# Set to False when Heat API uses HTTPS (boolean value)
#insecure=true
[keystone_authtoken]
#
# Options defined in keystoneclient.middleware.auth_token
#
# Prefix to prepend at the beginning of the path. Deprecated,
# use identity_uri. (string value)
#auth_admin_prefix=
# Host providing the admin Identity API endpoint. Deprecated,
# use identity_uri. (string value)
#auth_host=127.0.0.1
# Port of the admin Identity API endpoint. Deprecated, use
# identity_uri. (integer value)
#auth_port=35357
# Protocol of the admin Identity API endpoint (http or https).
# Deprecated, use identity_uri. (string value)
#auth_protocol=https
# Complete public Identity API endpoint (string value)
#auth_uri=<None>
# Complete admin Identity API endpoint. This should specify
# the unversioned root endpoint e.g. https://localhost:35357/
# (string value)
#identity_uri=<None>
# API version of the admin Identity API endpoint (string
# value)
#auth_version=<None>
# Do not handle authorization requests within the middleware,
# but delegate the authorization decision to downstream WSGI
# components (boolean value)
#delay_auth_decision=false
# Request timeout value for communicating with Identity API
# server. (boolean value)
#http_connect_timeout=<None>
# How many times are we trying to reconnect when communicating
# with Identity API Server. (integer value)
#http_request_max_retries=3
# This option is deprecated and may be removed in a future
# release. Single shared secret with the Keystone
# configuration used for bootstrapping a Keystone
# installation, or otherwise bypassing the normal
# authentication process. This option should not be used, use
# `admin_user` and `admin_password` instead. (string value)
#admin_token=<None>
# Keystone account username (string value)
#admin_user=<None>
# Keystone account password (string value)
#admin_password=<None>
# Keystone service account tenant name to validate user tokens
# (string value)
#admin_tenant_name=admin
# Env key for the swift cache (string value)
#cache=<None>
# Required if Keystone server requires client certificate
# (string value)
#certfile=<None>
# Required if Keystone server requires client certificate
# (string value)
#keyfile=<None>
# A PEM encoded Certificate Authority to use when verifying
# HTTPs connections. Defaults to system CAs. (string value)
#cafile=<None>
# Verify HTTPS connections. (boolean value)
#insecure=false
# Directory used to cache files related to PKI tokens (string
# value)
#signing_dir=<None>
# Optionally specify a list of memcached server(s) to use for
# caching. If left undefined, tokens will instead be cached
# in-process. (list value)
# Deprecated group/name - [DEFAULT]/memcache_servers
#memcached_servers=<None>
# In order to prevent excessive effort spent validating
# tokens, the middleware caches previously-seen tokens for a
# configurable duration (in seconds). Set to -1 to disable
# caching completely. (integer value)
#token_cache_time=300
# Determines the frequency at which the list of revoked tokens
# is retrieved from the Identity service (in seconds). A high
# number of revocation events combined with a low cache
# duration may significantly reduce performance. (integer
# value)
#revocation_cache_time=10
# (optional) if defined, indicate whether token data should be
# authenticated or authenticated and encrypted. Acceptable
# values are MAC or ENCRYPT. If MAC, token data is
# authenticated (with HMAC) in the cache. If ENCRYPT, token
# data is encrypted and authenticated in the cache. If the
# value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
#memcache_security_strategy=<None>
# (optional, mandatory if memcache_security_strategy is
# defined) this string is used for key derivation. (string
# value)
#memcache_secret_key=<None>
# (optional) indicate whether to set the X-Service-Catalog
# header. If False, middleware will not ask for service
# catalog on token validation and will not set the X-Service-
# Catalog header. (boolean value)
#include_service_catalog=true
# Used to control the use and type of token binding. Can be
# set to: "disabled" to not check token binding. "permissive"
# (default) to validate binding information if the bind type
# is of a form known to the server and ignore it if not.
# "strict" like "permissive" but if the bind type is unknown
# the token will be rejected. "required" any form of token
# binding is needed to be allowed. Finally the name of a
# binding method that must be present in tokens. (string
# value)
#enforce_token_bind=permissive
# If true, the revocation list will be checked for cached
# tokens. This requires that PKI tokens are configured on the
# Keystone server. (boolean value)
#check_revocations_for_cached=false
# Hash algorithms to use for hashing PKI tokens. This may be a
# single algorithm or multiple. The algorithms are those
# supported by Python standard hashlib.new(). The hashes will
# be tried in the order given, so put the preferred one first
# for performance. The result of the first hash will be stored
# in the cache. This will typically be set to multiple values
# only while migrating from a less secure algorithm to a more
# secure one. Once all the old tokens are expired this option
# should be set to a single value for better performance.
# (list value)
#hash_algorithms=md5
[storage]
#
# Options defined in tuskar.storage
#
# Storage driver to store Deployment Plans and Heat
# Orchestration Templates (string value)
#driver=tuskar.storage.drivers.sqlalchemy.SQLAlchemyDriver

View File

@ -1,21 +0,0 @@
[DEFAULT]
module=config.generator
module=db
module=db.sqlalchemy
module=excutils
module=fileutils
module=gettextutils
module=importutils
module=jsonutils
module=local
module=lockutils
module=log
module=policy
module=strutils
module=timeutils
# Tools
script=tools/install_venv_common
# The base module to hold the copy of openstack.common
base=tuskar

View File

@ -1,27 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=1.6
anyjson>=0.3.3
Babel>=1.3
# The egg=tripleo-heat-templates suffix is needed by pbr's requirements
# processing code
http://tarballs.openstack.org/tripleo-heat-templates/tripleo-heat-templates-master.tar.gz#egg=tripleo_heat_templates
eventlet>=0.17.4
greenlet>=0.3.2
iso8601>=0.1.9
kombu>=3.0.7
lxml>=2.3
oslo.config>=2.3.0 # Apache-2.0
oslo.db>=2.4.1 # Apache-2.0
pecan>=1.0.0
posix-ipc
python-heatclient>=0.3.0
python-keystoneclient>=1.6.0
PyYAML>=3.1.0
six>=1.9.0
SQLAlchemy<1.1.0,>=0.9.9
sqlalchemy-migrate>=0.9.6
WebOb>=1.2.3
WSME>=0.7

View File

@ -1,55 +0,0 @@
[metadata]
name = tuskar
version = 2013.2
summary = An OpenStack Management Service
description-file =
README.rst
author = Mark McLoughlin
author-email = markmc@redhat.com
home-page = http://git.openstack.org/cgit/openstack/tuskar
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
[files]
packages =
tuskar
[entry_points]
console_scripts =
tuskar-api = tuskar.cmd.api:main
tuskar-dbsync = tuskar.cmd.dbsync:main
tuskar-load-roles = tuskar.cmd.load_roles:main
tuskar-load-seed = tuskar.cmd.load_seed:main
tuskar-delete-roles = tuskar.cmd.delete_roles:main
tuskar-load-role = tuskar.cmd.load_role:main
[build_sphinx]
all_files = 1
build-dir = doc/build
source-dir = doc/source
[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0
[compile_catalog]
directory = tuskar/locale
domain = tuskar
[update_catalog]
domain = tuskar
output_dir = tuskar/locale
input_file = tuskar/locale/tuskar.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = tuskar/locale/tuskar.pot

View File

@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr>=1.8'],
    pbr=True)

View File

@ -1,20 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
# Hacking already pins down pep8, pyflakes and flake8
hacking<0.10,>=0.9.2
coverage>=3.6
discover
fixtures>=1.3.1
mock>=1.2
oslotest>=1.10.0 # Apache-2.0
# Doc requirements
sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
sphinxcontrib-pecanwsme>=0.8
stevedore>=1.5.0 # Apache-2.0
testrepository>=0.0.18
testtools>=1.4.0
unittest2
psycopg2>=2.5
PyMySQL>=0.6.2 # MIT License

View File

View File

@ -1,25 +0,0 @@
#!/usr/bin/env bash
PROJECT_NAME=${PROJECT_NAME:-tuskar}
CFGFILE_NAME=${PROJECT_NAME}.conf.sample
if [ -e etc/${PROJECT_NAME}/${CFGFILE_NAME} ]; then
CFGFILE=etc/${PROJECT_NAME}/${CFGFILE_NAME}
elif [ -e etc/${CFGFILE_NAME} ]; then
CFGFILE=etc/${CFGFILE_NAME}
else
echo "${0##*/}: can not find config file"
exit 1
fi
TEMPDIR=`mktemp -d /tmp/${PROJECT_NAME}.XXXXXX`
trap "rm -rf $TEMPDIR" EXIT
tools/config/generate_sample.sh -b ./ -p ${PROJECT_NAME} -o ${TEMPDIR}
if ! diff -u ${TEMPDIR}/${CFGFILE_NAME} ${CFGFILE}
then
echo "${0##*/}: ${PROJECT_NAME}.conf.sample is not up to date."
echo "${0##*/}: Please run ${0%%${0##*/}}generate_sample.sh."
exit 1
fi

View File

@ -1,119 +0,0 @@
#!/usr/bin/env bash
print_hint() {
echo "Try \`${0##*/} --help' for more information." >&2
}
PARSED_OPTIONS=$(getopt -n "${0##*/}" -o hb:p:m:l:o: \
--long help,base-dir:,package-name:,output-dir:,module:,library: -- "$@")
if [ $? != 0 ] ; then print_hint ; exit 1 ; fi
eval set -- "$PARSED_OPTIONS"
while true; do
case "$1" in
-h|--help)
echo "${0##*/} [options]"
echo ""
echo "options:"
echo "-h, --help show brief help"
echo "-b, --base-dir=DIR project base directory"
echo "-p, --package-name=NAME project package name"
echo "-o, --output-dir=DIR file output directory"
echo "-m, --module=MOD extra python module to interrogate for options"
echo "-l, --library=LIB extra library that registers options for discovery"
exit 0
;;
-b|--base-dir)
shift
BASEDIR=`echo $1 | sed -e 's/\/*$//g'`
shift
;;
-p|--package-name)
shift
PACKAGENAME=`echo $1`
shift
;;
-o|--output-dir)
shift
OUTPUTDIR=`echo $1 | sed -e 's/\/*$//g'`
shift
;;
-m|--module)
shift
MODULES="$MODULES -m $1"
shift
;;
-l|--library)
shift
LIBRARIES="$LIBRARIES -l $1"
shift
;;
--)
break
;;
esac
done
BASEDIR=${BASEDIR:-`pwd`}
if ! [ -d $BASEDIR ]
then
echo "${0##*/}: missing project base directory" >&2 ; print_hint ; exit 1
elif [[ $BASEDIR != /* ]]
then
BASEDIR=$(cd "$BASEDIR" && pwd)
fi
PACKAGENAME=${PACKAGENAME:-${BASEDIR##*/}}
TARGETDIR=$BASEDIR/$PACKAGENAME
if ! [ -d $TARGETDIR ]
then
echo "${0##*/}: invalid project package name" >&2 ; print_hint ; exit 1
fi
OUTPUTDIR=${OUTPUTDIR:-$BASEDIR/etc}
# NOTE(bnemec): Some projects put their sample config in etc/,
# some in etc/$PACKAGENAME/
if [ -d $OUTPUTDIR/$PACKAGENAME ]
then
OUTPUTDIR=$OUTPUTDIR/$PACKAGENAME
elif ! [ -d $OUTPUTDIR ]
then
echo "${0##*/}: cannot access \`$OUTPUTDIR': No such file or directory" >&2
exit 1
fi
BASEDIRESC=`echo $BASEDIR | sed -e 's/\//\\\\\//g'`
find $TARGETDIR -type f -name "*.pyc" -delete
FILES=$(find $TARGETDIR -type f -name "*.py" ! -path "*/tests/*" \
-exec grep -l "Opt(" {} + | sed -e "s/^$BASEDIRESC\///g" | sort -u)
RC_FILE="`dirname $0`/oslo.config.generator.rc"
if test -r "$RC_FILE"
then
source "$RC_FILE"
fi
for mod in ${TUSKAR_CONFIG_GENERATOR_EXTRA_MODULES}; do
MODULES="$MODULES -m $mod"
done
for lib in ${TUSKAR_CONFIG_GENERATOR_EXTRA_LIBRARIES}; do
LIBRARIES="$LIBRARIES -l $lib"
done
export EVENTLET_NO_GREENDNS=yes
OS_VARS=$(set | sed -n '/^OS_/s/=[^=]*$//gp' | xargs)
[ "$OS_VARS" ] && eval "unset \$OS_VARS"
DEFAULT_CONFIG_GENERATOR=tuskar.openstack.common.config.generator
CONFIG_GENERATOR=${CONFIG_GENERATOR:-$DEFAULT_CONFIG_GENERATOR}
OUTPUTFILE=$OUTPUTDIR/$PACKAGENAME.conf.sample
python -m $CONFIG_GENERATOR $MODULES $LIBRARIES $FILES > $OUTPUTFILE
# Hook to allow projects to append custom config file snippets
CONCAT_FILES=$(ls $BASEDIR/tools/config/*.conf.sample 2>/dev/null)
for CONCAT_FILE in $CONCAT_FILES; do
cat $CONCAT_FILE >> $OUTPUTFILE
done

View File

@ -1,2 +0,0 @@
export TUSKAR_CONFIG_GENERATOR_EXTRA_MODULES=keystoneclient.middleware.auth_token
export TUSKAR_CONFIG_GENERATOR_EXTRA_LIBRARIES=oslo.db

View File

@ -1,138 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Using the REST API, populates the DB with some sample data
# Based on python-ceilometerclient/ceilometerclient/common/http.py
import httplib
import json
import logging
import socket
import six.moves.urllib.parse as urlparse
import uuid
LOG = logging.getLogger(__name__)
def log_curl_request(conn, base_url, url, method, kwargs):
curl = ['curl -i -X %s' % method]
for (key, value) in kwargs['headers'].items():
header = '-H \'%s: %s\'' % (key, value)
curl.append(header)
if 'body' in kwargs:
curl.append('-d \'%s\'' % kwargs['body'])
curl.append('http://%s:%d%s%s' % (conn.host, conn.port, base_url, url))
LOG.debug(' '.join(curl))
def log_http_response(resp, body=None):
status = (resp.version / 10.0, resp.status, resp.reason)
dump = ['\nHTTP/%.1f %s %s' % status]
dump.extend(['%s: %s' % (k, v) for k, v in resp.getheaders()])
dump.append('')
if body:
dump.extend([body, ''])
LOG.debug('\n'.join(dump))
def make_connection_url(base_url, url):
return '%s/%s' % (base_url.rstrip('/'), url.lstrip('/'))
def http_request(conn, base_url, url, method, **kwargs):
log_curl_request(conn, base_url, url, method, kwargs)
try:
conn_url = make_connection_url(base_url, url)
conn.request(method, conn_url, **kwargs)
resp = conn.getresponse()
except socket.gaierror as e:
message = ('Error finding address for %(url)s: %(e)s' %
{'url': url, 'e': e})
raise RuntimeError(message)
except (socket.error, socket.timeout) as e:
message = ('Error communicating with %(endpoint)s %(e)s' %
{'endpoint': 'http://%s:%d' % (conn.host, conn.port),
'e': e})
raise RuntimeError(message)
body = resp.read()
log_http_response(resp, body)
if 300 <= resp.status < 600:
LOG.warn('Request returned failure/redirect status.')
raise RuntimeError('Status code %d returned' % resp.status)
return resp, body
def json_request(conn, base_url, url, method, **kwargs):
kwargs.setdefault('headers', {})
kwargs['headers'].setdefault('Content-Type', 'application/json')
kwargs['headers'].setdefault('Accept', 'application/json')
if 'body' in kwargs:
kwargs['body'] = json.dumps(kwargs['body'])
resp, body = http_request(conn, base_url, url, method, **kwargs)
content_type = resp.getheader('content-type', None)
if resp.status == 204 or resp.status == 205 or content_type is None:
body = None
elif 'application/json' in content_type:
try:
body = json.loads(body)
except ValueError:
LOG.error('Could not decode response body as JSON')
else:
body = None
return resp, body
def create_overcloud_role(conn, base_url, name, description, image_name):
return json_request(conn, base_url, '/overcloud_roles', 'POST',
body=dict(name=name, description=description,
image_name=image_name))
def generate_data():
conn = httplib.HTTPConnection('localhost', 8585)
base_url = '/v1'
create_overcloud_role(conn, base_url,
name='Controller',
description='controller role',
image_name='overcloud-control')
create_overcloud_role(conn, base_url,
name='Compute',
description='compute role',
image_name='overcloud-compute')
create_overcloud_role(conn, base_url,
name='Block Storage',
description='block storage role',
image_name='overcloud-cinder-volume')
create_overcloud_role(conn, base_url,
name='Object Storage',
description='object storage role',
image_name='overcloud-swift-storage')
if __name__ == '__main__':
logging.basicConfig(format='%(message)s', level=logging.DEBUG)
generate_data()

View File

@ -1,182 +0,0 @@
# Copyright 2013 OpenStack Foundation
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Provides methods needed by installation script for OpenStack development
virtual environments.
Since this script is used to bootstrap a virtualenv from the system's Python
environment, it should be kept strictly compatible with Python 2.6.
Synced in from openstack-common
"""
from __future__ import print_function
import optparse
import os
import subprocess
import sys
class InstallVenv(object):
def __init__(self, root, venv, pip_requires, test_requires, py_version,
project):
self.root = root
self.venv = venv
self.pip_requires = pip_requires
self.test_requires = test_requires
self.py_version = py_version
self.project = project
def die(self, message, *args):
print(message % args, file=sys.stderr)
sys.exit(1)
def check_python_version(self):
if sys.version_info < (2, 6):
self.die("Need Python Version >= 2.6")
def run_command_with_code(self, cmd, redirect_output=True,
check_exit_code=True):
"""Runs a command in an out-of-process shell.
Returns the output of that command. Working directory is self.root.
"""
if redirect_output:
stdout = subprocess.PIPE
else:
stdout = None
proc = subprocess.Popen(cmd, cwd=self.root, stdout=stdout)
output = proc.communicate()[0]
if check_exit_code and proc.returncode != 0:
self.die('Command "%s" failed.\n%s', ' '.join(cmd), output)
return (output, proc.returncode)
def run_command(self, cmd, redirect_output=True, check_exit_code=True):
return self.run_command_with_code(cmd, redirect_output,
check_exit_code)[0]
def get_distro(self):
if (os.path.exists('/etc/fedora-release') or
os.path.exists('/etc/redhat-release')):
return Fedora(self.root, self.venv, self.pip_requires,
self.test_requires, self.py_version, self.project)
else:
return Distro(self.root, self.venv, self.pip_requires,
self.test_requires, self.py_version, self.project)
def check_dependencies(self):
self.get_distro().install_virtualenv()
def create_virtualenv(self, no_site_packages=True):
"""Creates the virtual environment and installs PIP.
Creates the virtual environment and installs PIP only into the
virtual environment.
"""
if not os.path.isdir(self.venv):
print('Creating venv...', end=' ')
if no_site_packages:
self.run_command(['virtualenv', '-q', '--no-site-packages',
self.venv])
else:
self.run_command(['virtualenv', '-q', self.venv])
print('done.')
print('Installing pip in venv...', end=' ')
if not self.run_command(['tools/with_venv.sh', 'easy_install',
'pip>1.0']).strip():
self.die("Failed to install pip.")
print('done.')
else:
print("venv already exists...")
pass
def pip_install(self, *args):
self.run_command(['tools/with_venv.sh',
'pip', 'install', '--upgrade'] + list(args),
redirect_output=False)
def install_dependencies(self):
print('Installing dependencies with pip (this can take a while)...')
# First things first, make sure our venv has the latest pip and
# distribute.
# NOTE: we keep pip at version 1.1 since the most recent version causes
# the .venv creation to fail. See:
# https://bugs.launchpad.net/nova/+bug/1047120
self.pip_install('pip==1.1')
self.pip_install('distribute')
# Install greenlet by hand - just listing it in the requires file does
# not
# get it installed in the right order
self.pip_install('greenlet')
self.pip_install('-r', self.pip_requires)
self.pip_install('-r', self.test_requires)
def parse_args(self, argv):
"""Parses command-line arguments."""
parser = optparse.OptionParser()
parser.add_option('-n', '--no-site-packages',
action='store_true',
help="Do not inherit packages from global Python "
"install")
return parser.parse_args(argv[1:])[0]
class Distro(InstallVenv):
def check_cmd(self, cmd):
return bool(self.run_command(['which', cmd],
check_exit_code=False).strip())
def install_virtualenv(self):
if self.check_cmd('virtualenv'):
return
if self.check_cmd('easy_install'):
print('Installing virtualenv via easy_install...', end=' ')
if self.run_command(['easy_install', 'virtualenv']):
print('Succeeded')
return
else:
print('Failed')
self.die('ERROR: virtualenv not found.\n\n%s development'
' requires virtualenv, please install it using your'
' favorite package management tool' % self.project)
class Fedora(Distro):
"""This covers all Fedora-based distributions.
Includes: Fedora, RHEL, CentOS, Scientific Linux
"""
def check_pkg(self, pkg):
return self.run_command_with_code(['rpm', '-q', pkg],
check_exit_code=False)[1] == 0
def install_virtualenv(self):
if self.check_cmd('virtualenv'):
return
if not self.check_pkg('python-virtualenv'):
self.die("Please install 'python-virtualenv'.")
super(Fedora, self).install_virtualenv()

View File

@ -1,8 +0,0 @@
#!/bin/bash
tools_path=${tools_path:-$(dirname $0)}
venv_path=${venv_path:-${tools_path}}
tox_env=$(cd ${venv_path} && find ../.tox -maxdepth 1 -name "py*" | sort | tail -n1)
venv_dir=${venv_name:-${tox_env}}
TOOLS=${tools_path}
VENV=${venv:-${venv_path}/${venv_dir}}
source ${VENV}/bin/activate && "$@"

36
tox.ini
View File

@ -1,36 +0,0 @@
[tox]
minversion = 1.6
skipsdist = True
envlist = py27,pep8
[testenv]
usedevelop = True
install_command = pip install {opts} {packages}
setenv = VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
commands =
python setup.py testr --slowest --testr-args='{posargs}'
{toxinidir}/tools/config/check_uptodate.sh
[tox:jenkins]
downloadcache = ~/cache/pip
[testenv:pep8]
commands =
flake8
[testenv:cover]
setenv = VIRTUAL_ENV={envdir}
commands =
python setup.py testr --coverage {posargs}
[testenv:venv]
commands = {posargs}
[flake8]
# H302 import only modules.
# H405 multi line docstring summary not separated with an empty line
ignore = H302,H405
builtins = _
exclude = .venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build,tools

View File

View File

@ -1,37 +0,0 @@
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
API_SERVICE_OPTS = [
    cfg.StrOpt(
        'tuskar_api_bind_ip',
        default='0.0.0.0',
        help='IP for the Tuskar API server to bind to',
    ),
    cfg.IntOpt(
        'tuskar_api_port',
        default=8585,
        help='The port for the Tuskar API server',
    ),
    cfg.StrOpt(
        'tht_local_dir',
        default='/etc/tuskar/tripleo-heat-templates/',
        help='Local path holding tripleo-heat-templates',
    )
]

CONF = cfg.CONF
CONF.register_opts(API_SERVICE_OPTS)

View File

@ -1,36 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Access Control Lists (ACL's) control access the API server."""
from keystoneclient.middleware import auth_token
from oslo_config import cfg
OPT_GROUP_NAME = 'keystone_authtoken'
def register_opts(conf):
"""Register keystoneclient middleware options
"""
conf.register_opts(auth_token.opts, group=OPT_GROUP_NAME)
auth_token.CONF = conf
def install(app, conf):
"""Install ACL check on application."""
register_opts(cfg.CONF)
return auth_token.AuthProtocol(app,
conf=dict(conf.get(OPT_GROUP_NAME)))

View File

@ -1,77 +0,0 @@
# Copyright 2012 New Dream Network, LLC (DreamHost)
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
import pecan
from tuskar.api import acl
from tuskar.api import config
from tuskar.api import hooks
from tuskar.api import renderers
auth_opts = [
cfg.StrOpt(
'auth_strategy',
default='keystone',
help='Method to use for auth: noauth or keystone.'),
]
CONF = cfg.CONF
CONF.register_opts(auth_opts)
def get_pecan_config():
# Set up the pecan configuration
filename = config.__file__.replace('.pyc', '.py')
return pecan.configuration.conf_from_file(filename)
def setup_app(pecan_config=None, extra_hooks=None):
app_hooks = [hooks.ConfigHook(),
hooks.DBHook()]
if extra_hooks:
app_hooks.extend(extra_hooks)
if not pecan_config:
pecan_config = get_pecan_config()
pecan.configuration.set_config(dict(pecan_config), overwrite=True)
# TODO(deva): add middleware.ParsableErrorMiddleware from Ceilometer
app = pecan.make_app(
pecan_config.app.root,
custom_renderers=dict(wsmejson=renderers.JSONRenderer),
static_root=pecan_config.app.static_root,
template_path=pecan_config.app.template_path,
debug=CONF.debug,
force_canonical=getattr(pecan_config.app, 'force_canonical', True),
hooks=app_hooks,
)
if pecan_config.app.enable_acl:
return acl.install(app, cfg.CONF)
return app
class VersionSelectorApplication(object):
def __init__(self):
pc = get_pecan_config()
pc.app.enable_acl = (CONF.auth_strategy == 'keystone')
self.v1 = setup_app(pecan_config=pc)
def __call__(self, environ, start_response):
return self.v1(environ, start_response)

View File

@ -1,35 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Server Specific Configurations
server = {
    'port': '6382',
    'host': '0.0.0.0'
}

# Pecan Application Configurations
app = {
    'root': 'tuskar.api.controllers.root.RootController',
    'modules': ['tuskar.api'],
    'static_root': '%(confdir)s/public',
    'template_path': '%(confdir)s/templates',
    'debug': False,
    'enable_acl': False,
}
# Custom Configurations must be in Python dictionary format::
#
# foo = {'bar':'baz'}
#
# All configurations are accessible at::
# pecan.conf

View File

@ -1,50 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from tuskar.api.controllers.v1 import controller as v1_controller
from tuskar.api.controllers.v2 import controller as v2_controller
class RootController(object):
v1 = v1_controller.Controller()
v2 = v2_controller.Controller()
@pecan.expose('json')
def index(self):
return {
'versions': {
'values': [
{
'status': 'development',
'media-types': [{'base': 'application/json'}],
'id': 'v1.0',
'links': [{
'href': '/v1/',
'rel': 'self',
}]
},
{
'status': 'development',
'media-types': [{'base': 'application/json'}],
'id': 'v2.0',
'links': [{
'href': '/v2/',
'rel': 'self',
}]
}
]
}
}

View File

@ -1,38 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from tuskar.api.controllers.v1.overcloud import OvercloudsController
from tuskar.api.controllers.v1.overcloud_roles import OvercloudRolesController
class Controller(object):
"""Version 1 API controller root."""
overcloud_roles = OvercloudRolesController()
overclouds = OvercloudsController()
@pecan.expose('json')
def index(self):
return {
'version': {
'status': 'stable',
'media-types': [{'base': 'application/json'}],
'id': 'v1.0',
'links': [{
'href': '/v1/',
'rel': 'self',
}]
}
}

View File

@ -1,160 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Contains transfer objects for use with WSME REST APIs. The objects in this
module also contain the translations between the REST transfer objects and
the internal Tuskar domain model.
"""
import logging
from wsme import types as wtypes
from tuskar.db.sqlalchemy import models as db_models
LOG = logging.getLogger(__name__)
class Base(wtypes.Base):
"""Base functionality for all API models.
This class should never be directly instantiated. Subclasses must be sure
to define an attribute named _db_class for the to_db_model to use
when instantiating DB models.
"""
@classmethod
def from_db_model(cls, db_model, skip_fields=None):
"""Returns the database representation of the given transfer object."""
skip_fields = skip_fields or []
data = dict((k, v) for k, v in db_model.as_dict().items()
if k not in skip_fields)
return cls(**data)
def to_db_model(self, omit_unset=False, skip_fields=None):
"""Converts this object into its database representation."""
skip_fields = skip_fields or []
attribute_names = [a.name for a in self._wsme_attributes
if a.name not in skip_fields]
if omit_unset:
attribute_names = [n for n in attribute_names
if getattr(self, n) != wtypes.Unset]
values = dict((name, self._lookup(name)) for name in attribute_names)
db_object = self._db_class(**values)
return db_object
def _lookup(self, key):
"""Looks up a key, translating WSME's Unset into Python's None.
:return: value of the given attribute; None if it is not set
"""
value = getattr(self, key)
if value == wtypes.Unset:
value = None
return value
class OvercloudRole(Base):
"""Transfer object for overcloud roles."""
_db_class = db_models.OvercloudRole
id = int
name = wtypes.text
description = wtypes.text
image_name = wtypes.text
flavor_id = wtypes.text
class OvercloudRoleCount(Base):
"""Transfer object for overcloud role counts."""
_db_class = db_models.OvercloudRoleCount
id = int
overcloud_role_id = int
overcloud_id = int
num_nodes = int
class Overcloud(Base):
"""Transfer object for overclouds."""
_db_class = db_models.Overcloud
id = int
stack_id = wtypes.text
name = wtypes.text
description = wtypes.text
attributes = {wtypes.text: wtypes.text}
counts = [OvercloudRoleCount]
@classmethod
def from_db_model(cls, db_overcloud, skip_fields=None,
mask_passwords=True):
# General Data
transfer_overcloud = super(Overcloud, cls).from_db_model(
db_overcloud, skip_fields=['attributes', 'counts'])
# Attributes
translated = {}
for db_attribute in db_overcloud.attributes:
# FIXME(rpodolyaka): a workaround for bug 1308172. To fix this
# properly we should either stop storing passwords in Tuskar API
# or delegate this task to another service.
if mask_passwords and 'password' in db_attribute.key.lower():
value = '******'
else:
value = db_attribute.value
translated[db_attribute.key] = value
transfer_overcloud.attributes = translated
# Counts
transfer_overcloud.counts = [OvercloudRoleCount.from_db_model(c)
for c in db_overcloud.counts]
return transfer_overcloud
def to_db_model(self, omit_unset=False, skip_fields=None):
# General Data
db_model = super(Overcloud, self).to_db_model(
omit_unset=omit_unset,
skip_fields=['attributes', 'counts'])
# Attributes
if self.attributes != wtypes.Unset:
translated = []
for key, value in self.attributes.items():
translated.append(db_models.OvercloudAttribute(
key=key, value=value, overcloud_id=self.id
))
db_model.attributes = translated
# Counts
if self.counts != wtypes.Unset:
translated = []
for count in self.counts:
translated.append(db_models.OvercloudRoleCount(
num_nodes=count.num_nodes,
overcloud_role_id=count.overcloud_role_id,
overcloud_id=self.id
))
db_model.counts = translated
return db_model

View File

@ -1,377 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import pecan
from pecan import rest
import six
import wsme
from wsmeext import pecan as wsme_pecan
from tuskar.api.controllers.v1 import models
from tuskar.common import exception
from tuskar.heat.client import HeatClient
import tuskar.heat.template_tools as template_tools
LOG = logging.getLogger(__name__)
def parse_counts_and_flavors(counts, overcloud_roles):
"""Helper for parsing the OvercloudRoleCount object
Given a list of OvercloudRoleCount and dict of OverlcoudRole objects
return a dict of (image_name, count) and (image_name, flavor_id) in a
format used for building a template.
:param counts: List of tuskar.api.controllers.v1.models.OvercloudRoleCount
:type counts: list
:param overcloud_roles: Dict of (overcloud_role_id, overcloud_role) so
we can access image_name and flavor_id of roles
:type overcloud_roles: dict
:return: Tuple of dicts {(image_name, count)}, {(image_name, flavor_id)}
:rtype: two dict objects
"""
parsed_counts = {}
parsed_flavors = {}
for count_obj in counts:
image_name = overcloud_roles[count_obj.overcloud_role_id].image_name
flavor_id = overcloud_roles[count_obj.overcloud_role_id].flavor_id
count = count_obj.num_nodes
parsed_counts[image_name] = count
parsed_flavors[image_name] = flavor_id
return parsed_counts, parsed_flavors
def filter_template_attributes(allowed_data, attributes):
"""Helper filtering attributes for template
Given a list of allowed data and attributes, filter the attributes
only with keys of allowed data and return filtered data.
:param allowed_data: Dict of allowed attributes for template returned by
validating of template.
:type allowed_data: dict
:param attributes: Dict of attributes sent from user in deploying stack
operation
:type attributes: Dict
:return: Dict of filtered attributes
:rtype: dict
"""
allowed_keys = allowed_data.get("Parameters", {}).keys()
filtered_data = dict([(key, value) for key, value in attributes.items()
if key in allowed_keys])
return filtered_data
def get_overcloud_roles_dict():
return dict((overcloud_role.id, overcloud_role)
for overcloud_role in
pecan.request.dbapi.get_overcloud_roles())
def get_flavor_attributes(parsed_flavors):
"""Helper for building dict of flavor attributes
Given a dict of parsed flavors, it will put a flavor_ids stored in
role into attributes that will be fed to heat stack create/update.
Mapping of image name to flavor_param is stored in template_tools.ROLES.
:param parsed_flavors: Dict of (image_name, flavor_id)
:type parsed_flavors: dict
:return: Dict of (flavor_param, flavor_id) for Heat Template params
:rtype: dict
"""
flavor_attributes = {}
for image_name, flavor_id in parsed_flavors.items():
role = template_tools.ROLES.get(image_name, None)
if role:
flavor_param = role['flavor_param']
flavor_attributes[flavor_param] = flavor_id
return flavor_attributes
def process_stack(attributes, counts, overcloud_roles, create=False):
"""Helper function for processing the stack.
Given a params dict containing the Overcloud Roles and initialization
parameters create or update the stack.
:param attributes: Dictionary of initialization params and overcloud roles
for heat template and initialization of stack
:type attributes: dict
:param counts: Dictionary of counts of roles to be deployed
:type counts: dict
:param overcloud_roles: Dict of (overcloud_role_id, overcloud_role) so
we can access image_name and flavor_id of roles
:type overcloud_roles: dict
:param create: A flag to designate if we are creating or updating the stack
:type create: bool
"""
heat_client = HeatClient()
try:
# Get how many of each role we want and what flavor each role uses.
parsed_counts, parsed_flavors = parse_counts_and_flavors(
counts, overcloud_roles)
except Exception as e:
raise exception.ParseCountsAndFlavorsFailed(six.text_type(e))
try:
# Build the template
overcloud = template_tools.merge_templates(parsed_counts)
except Exception as e:
raise exception.HeatTemplateCreateFailed(six.text_type(e))
try:
# Get the parameters that the template accepts and validate
allowed_data = heat_client.validate_template(overcloud)
except Exception as e:
raise exception.HeatTemplateValidateFailed(six.text_type(e))
stack_exists = heat_client.exists_stack()
if stack_exists and create:
raise exception.StackAlreadyCreated()
elif not stack_exists and not create:
raise exception.StackNotFound()
try:
# Put flavors from OverloudRoles into attributes
attributes.update(get_flavor_attributes(parsed_flavors))
# Filter the attributes to allowed only
filtered_attributes = filter_template_attributes(allowed_data,
attributes)
except Exception as e:
raise exception.HeatStackProcessingAttributesFailed(six.text_type(e))
if create:
operation = heat_client.create_stack
else:
operation = heat_client.update_stack
try:
result = operation(overcloud, filtered_attributes)
except Exception as e:
if create:
raise exception.HeatStackCreateFailed(six.text_type(e))
else:
raise exception.HeatStackUpdateFailed(six.text_type(e))
return result
class OvercloudsController(rest.RestController):
"""REST controller for the Overcloud class."""
_custom_actions = {'template_parameters': ['GET']}
@pecan.expose('json')
def template_parameters(self):
# TODO(lsmola) returning all possible parameters now, later in J
# user should pick what to build first and we should return
# appropriate parameters.
fixed_params = {template_tools.OVERCLOUD_COMPUTE_ROLE: 1,
template_tools.OVERCLOUD_VOLUME_ROLE: 1,
template_tools.OVERCLOUD_OBJECT_STORAGE_ROLE: 1}
# We don't want user to fill flavor based parameters, cause
# it is already stored in OvercloudRoles, also Image parameters
# are expected to be default, otherwise our associations
# will not work.
except_parameters = ('OvercloudControlFlavor',
'OvercloudComputeFlavor',
'OvercloudBlockStorageFlavor',
'OvercloudSwiftStorageFlavor',
'NovaImage',
'notcomputeImage',
'BlockStorageImage',
'SwiftStorageImage',)
overcloud = template_tools.merge_templates(fixed_params)
heat_client = HeatClient()
try:
allowed_data = heat_client.validate_template(overcloud)
except Exception as e:
raise exception.HeatTemplateValidateFailed(unicode(e))
# Send back only wanted parameters
template_parameters = dict((key, value) for key, value
in allowed_data['Parameters'].items()
if key not in except_parameters)
return template_parameters
@wsme.validate(models.Overcloud)
@wsme_pecan.wsexpose(models.Overcloud,
body=models.Overcloud,
status_code=201)
def post(self, transfer_overcloud):
"""Creates a new overcloud.
:param transfer_overcloud: data submitted by the user
:type transfer_overcloud:
tuskar.api.controllers.v1.models.Overcloud
:return: created overcloud
:rtype: tuskar.api.controllers.v1.models.Overcloud
:raises: tuskar.common.exception.OvercloudExists: if an overcloud
with the given name exists
"""
LOG.debug('Creating overcloud: %s' % transfer_overcloud)
# FIXME(lsmola) This is just a POC of creating a stack;
# it has to be done properly with a proper workflow abstraction of:
# step 1- build template and start stack-create
# step 2- put the right stack_id to the overcloud
# step 3- initialize the stack
# step 4- set the correct overcloud status
stack = process_stack(transfer_overcloud.attributes,
transfer_overcloud.counts,
get_overcloud_roles_dict(),
create=True)
# Persist to the database
transfer_overcloud.stack_id = stack['stack']['id']
db_overcloud = transfer_overcloud.to_db_model()
result = pecan.request.dbapi.create_overcloud(db_overcloud)
# Package for transfer back to the user
saved_overcloud = models.Overcloud.from_db_model(result)
return saved_overcloud
@wsme.validate(models.Overcloud)
@wsme_pecan.wsexpose(models.Overcloud,
int,
body=models.Overcloud)
def put(self, overcloud_id, overcloud_delta):
"""Updates an existing overcloud, including its attributes and counts.
:param overcloud_id: identifies the overcloud being updated
:type overcloud_id: int
:param overcloud_delta: contains only values that are to be affected
by the update
:type overcloud_delta:
tuskar.api.controllers.v1.models.Overcloud
:return: updated overcloud
:rtype: tuskar.api.controllers.v1.models.Overcloud
:raises: tuskar.common.exception.OvercloudNotFound if there
is no overcloud with the given ID
"""
LOG.debug('Updating overcloud: %s' % overcloud_id)
# ID is in the URL so make sure it's in the transfer object
# before translation
overcloud_delta.id = overcloud_id
db_delta = overcloud_delta.to_db_model(omit_unset=True)
# Will raise a not found if there is no overcloud with the ID
result = pecan.request.dbapi.update_overcloud(db_delta)
updated_overcloud = models.Overcloud.from_db_model(
result, mask_passwords=False)
# FIXME(lsmola) This is just a POC of updating a stack;
# it should probably also have a workflow:
# step 1- build template and stack-update
# step 2- set the correct overcloud status
process_stack(updated_overcloud.attributes, result.counts,
get_overcloud_roles_dict())
return models.Overcloud.from_db_model(result)
@wsme_pecan.wsexpose(None, int, status_code=204)
def delete(self, overcloud_id):
"""Deletes the given overcloud.
:param overcloud_id: identifies the overcloud being deleted
:type overcloud_id: int
:raises: tuskar.common.exception.OvercloudNotFound if there
is no overcloud with the given ID
"""
# FIXME(lsmola) this should always try to delete both the overcloud
# and the stack, so it requires some exception catching below.
# FIXME(lsmola) there is also a workflow needed
# step 1- delete stack and set status deleting in progress to
# overcloud
# step 2 - once stack is deleted, delete the overcloud
LOG.debug('Deleting overcloud with ID: %s' % overcloud_id)
pecan.request.dbapi.delete_overcloud_by_id(overcloud_id)
heat_client = HeatClient()
if not heat_client.exists_stack():
# If the stack doesn't exist, we have nothing else to do here.
return
try:
heat_client.delete_stack()
except Exception:
raise exception.HeatStackDeleteFailed()
@wsme_pecan.wsexpose(models.Overcloud, int)
def get_one(self, overcloud_id):
"""Returns a specific overcloud.
An exception is raised if no overcloud is found with the
given ID.
:param overcloud_id: identifies the overcloud being fetched
:type overcloud_id: int
:return: matching overcloud
:rtype: tuskar.api.controllers.v1.models.Overcloud
:raises: tuskar.common.exception.OvercloudNotFound if there
is no overcloud with the given ID
"""
LOG.debug('Retrieving overcloud with ID: %s' % overcloud_id)
overcloud = pecan.request.dbapi.get_overcloud_by_id(overcloud_id)
transfer_overcloud = models.Overcloud.from_db_model(overcloud)
return transfer_overcloud
@wsme_pecan.wsexpose([models.Overcloud])
def get_all(self):
"""Returns all overclouds.
An empty list is returned if no overclouds are present.
:return: list of overclouds; empty list if none are found
:rtype: list of tuskar.api.controllers.v1.models.Overcloud
"""
LOG.debug('Retrieving all overclouds')
overclouds = pecan.request.dbapi.get_overclouds()
transfer_overclouds = [models.Overcloud.from_db_model(o)
for o in overclouds]
return transfer_overclouds
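For orientation, a rough client-side sketch of exercising this controller over
HTTP. The host, port and the /v1 mount point are assumptions (they are
configured outside this file), and the attribute and count payloads are purely
illustrative:

    import requests

    BASE = 'http://tuskar-host:8585/v1'  # host/port assumed, not defined here

    # Create an overcloud (handled by OvercloudsController.post above).
    overcloud = requests.post(BASE + '/overclouds', json={
        'name': 'overcloud',                       # illustrative payload
        'attributes': {'AdminPassword': 'secret'},
        'counts': [],
    }).json()

    # Inspect the template parameters and the stored overcloud.
    print(requests.get(BASE + '/overclouds/template_parameters').json())
    print(requests.get(BASE + '/overclouds/%s' % overcloud['id']).json())

    # Tear it down again; delete() also removes the Heat stack if one exists.
    requests.delete(BASE + '/overclouds/%s' % overcloud['id'])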

View File

@ -1,143 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import pecan
from pecan import rest
import wsme
from wsmeext import pecan as wsme_pecan
from tuskar.api.controllers.v1 import models
LOG = logging.getLogger(__name__)
class OvercloudRolesController(rest.RestController):
"""REST controller for the OvercloudRole class."""
@wsme.validate(models.OvercloudRole)
@wsme_pecan.wsexpose(models.OvercloudRole,
body=models.OvercloudRole,
status_code=201)
def post(self, transfer_role):
"""Creates a new overcloud role.
:param transfer_role: data submitted by the user
:type transfer_role:
tuskar.api.controllers.v1.models.OvercloudRole
:return: created role
:rtype: tuskar.api.controllers.v1.models.OvercloudRole
:raises: tuskar.common.exception.OvercloudRoleExists: if an overcloud
role with the given name exists
"""
LOG.debug('Creating overcloud role: %s' % transfer_role)
# Persist to the database
db_role = transfer_role.to_db_model()
result = pecan.request.dbapi.create_overcloud_role(db_role)
# Package for transfer back to the user
saved_role = models.OvercloudRole.from_db_model(result)
return saved_role
@wsme.validate(models.OvercloudRole)
@wsme_pecan.wsexpose(models.OvercloudRole,
int,
body=models.OvercloudRole)
def put(self, role_id, role_delta):
"""Updates an existing overcloud role.
:param role_id: identifies the role being updated
:type role_id: int
:param role_delta: contains only values that are to be affected
by the update operation
:type role_delta:
tuskar.api.controllers.v1.models.OvercloudRole
:return: role with updated values
:rtype: tuskar.api.controllers.v1.models.OvercloudRole
:raises: tuskar.common.exception.OvercloudRoleNotFound if there
is no role with the given ID
"""
LOG.debug('Updating overcloud role: %s' % role_id)
# ID is in the URL so make sure it's in the transfer object
# before translation
role_delta.id = role_id
db_delta = role_delta.to_db_model(omit_unset=True)
# Will raise a not found if there is no role with the ID
updated = pecan.request.dbapi.update_overcloud_role(db_delta)
return updated
@wsme_pecan.wsexpose(None, int, status_code=204)
def delete(self, role_id):
"""Deletes the given overcloud role.
:param role_id: identifies the role being deleted
:type role_id: int
:raises: tuskar.common.exception.OvercloudRoleNotFound if there
is no role with the given ID
"""
LOG.debug('Deleting overcloud role with ID: %s' % role_id)
pecan.request.dbapi.delete_overcloud_role_by_id(role_id)
@wsme_pecan.wsexpose(models.OvercloudRole, int)
def get_one(self, role_id):
"""Returns a specific overcloud role.
An exception is raised if no overcloud role is found with the
given ID.
:param role_id: identifies the role being fetched
:type role_id: int
:return: matching resource role
:rtype: tuskar.api.controllers.v1.models.OvercloudRole
:raises: tuskar.common.exception.OvercloudRoleNotFound if there
is no role with the given ID
"""
LOG.debug('Retrieving overcloud role with ID: %s' % role_id)
db_role = pecan.request.dbapi.get_overcloud_role_by_id(role_id)
transfer_role = models.OvercloudRole.from_db_model(db_role)
return transfer_role
@wsme_pecan.wsexpose([models.OvercloudRole])
def get_all(self):
"""Returns all overcloud roles.
An empty list is returned if no overcloud roles are present.
:return: list of roles; empty list if none are found
:rtype: list of tuskar.api.controllers.v1.models.OvercloudRole
"""
LOG.debug('Retrieving all overcloud roles')
db_roles = pecan.request.dbapi.get_overcloud_roles()
transfer_roles = [models.OvercloudRole.from_db_model(c)
for c in db_roles]
return transfer_roles

View File

@ -1,38 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from tuskar.api.controllers.v2.plans import PlansController
from tuskar.api.controllers.v2.roles import RolesController
class Controller(object):
"""Version 2 API controller root."""
plans = PlansController()
roles = RolesController()
@pecan.expose('json')
def index(self):
return {
'version': {
'status': 'development',
'media-types': [{'base': 'application/json'}],
'id': 'v2.0',
'links': [{
'href': '/v2/',
'rel': 'self',
}]
}
}

View File

@ -1,171 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Contains transfer objects for use with WSME REST APIs. The objects in this
module also contain the translations between the REST transfer objects and
the internal Tuskar domain model.
"""
import datetime
import logging
import six
from wsme import types as wtypes
from tuskar.api.controllers.v2 import types as v2types
from tuskar.manager import models as manager_models
LOG = logging.getLogger(__name__)
class Base(wtypes.Base):
"""Base functionality for all API models.
This class should never be directly instantiated.
"""
def _lookup(self, key):
"""Looks up a key, translating WSME's Unset into Python's None.
:return: value of the given attribute; None if it is not set
"""
value = getattr(self, key)
if value == wtypes.Unset:
value = None
return value
class Role(Base):
"""Transfer object for roles."""
uuid = wtypes.text
name = wtypes.text
version = int
description = wtypes.text
@classmethod
def from_tuskar_model(cls, role):
"""Translates from the Tuskar domain model.
:type role: tuskar.manager.models.Role
"""
r = cls(**{
'uuid': role.uuid,
'name': role.name,
'version': role.version,
'description': role.description
})
return r
class ParameterConstraint(Base):
constraint_type = wtypes.text
definition = v2types.MultiType(list, dict, wtypes.text)
description = wtypes.text
@classmethod
def from_tuskar_model(cls, constraint):
return cls(**{'constraint_type': constraint.constraint_type,
'definition': constraint.definition,
'description': constraint.description})
class PlanParameter(Base):
name = wtypes.text
label = wtypes.text
default = v2types.MultiType(wtypes.text, six.integer_types, list, dict)
description = wtypes.text
hidden = bool
value = v2types.MultiType(wtypes.text, six.integer_types, list, dict)
constraints = [ParameterConstraint]
parameter_type = wtypes.text
@classmethod
def from_tuskar_model(cls, param):
"""Translates from the Tuskar domain model.
:type param: tuskar.manager.models.PlanParameter
"""
constraints = [ParameterConstraint.from_tuskar_model(c)
for c in param.constraints]
p = cls(**{
'name': param.name,
'label': param.label,
'default': param.default,
'description': param.description,
'hidden': param.hidden,
'value': param.value,
'constraints': constraints,
'parameter_type': param.param_type
})
return p
class Plan(Base):
uuid = wtypes.text
name = wtypes.text
description = wtypes.text
created_at = datetime.datetime
updated_at = datetime.datetime
roles = [Role]
parameters = [PlanParameter]
@classmethod
def from_tuskar_model(cls, plan):
"""Translates from the Tuskar domain model.
:type plan: tuskar.manager.models.DeploymentPlan
"""
roles = [Role.from_tuskar_model(r) for r in plan.roles]
params = [PlanParameter.from_tuskar_model(p) for p in plan.parameters]
p = cls(**{
'uuid': plan.uuid,
'name': plan.name,
'description': plan.description,
'created_at': plan.created_at,
'updated_at': plan.updated_at,
'roles': roles,
'parameters': params,
})
return p
class ParameterValue(Base):
name = wtypes.text
value = wtypes.text
@classmethod
def from_tuskar_model(cls, param_value):
"""Translates from the Tuskar domain model.
:type param_value: tuskar.manager.models.ParameterValue
"""
p = cls(**{
'name': param_value.name,
'value': param_value.value,
})
return p
def to_tuskar_model(self):
"""Translates into the Tuskar domain model.
:rtype: tuskar.manager.models.ParameterValue
"""
p = manager_models.ParameterValue(self.name, self.value)
return p
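As a small illustration of the transfer/domain split described in the module
docstring, a ParameterValue can be round-tripped between the two models. A
sketch, assuming the tuskar packages imported above are available; the
parameter name and value are illustrative:

    from tuskar.api.controllers.v2 import models as api_models
    from tuskar.manager import models as manager_models

    domain_value = manager_models.ParameterValue('ComputeCount', '3')
    transfer_value = api_models.ParameterValue.from_tuskar_model(domain_value)

    assert transfer_value.name == 'ComputeCount'
    assert transfer_value.to_tuskar_model().value == '3'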

View File

@ -1,171 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from pecan import rest
import wsme
from wsme.types import UnsetType
from wsmeext import pecan as wsme_pecan
from tuskar.api.controllers.v2 import models
from tuskar.api.controllers.v2 import roles
from tuskar.common import exception
from tuskar.manager.plan import PlansManager
from tuskar.storage import exceptions as storage_exceptions
LOG = logging.getLogger(__name__)
class PlansController(rest.RestController):
"""REST controller for the Plan class."""
_custom_actions = {'templates': ['GET']}
roles = roles.RolesController()
@wsme_pecan.wsexpose([models.Plan])
def get_all(self):
"""Returns all plans.
An empty list is returned if no plans are present.
:return: list of plans; empty list if none are found
:rtype: list of tuskar.api.controllers.v2.models.Plan
"""
LOG.debug('Retrieving all plans')
manager = PlansManager()
all_plans = manager.list_plans()
transfer_plans = [models.Plan.from_tuskar_model(p)
for p in all_plans]
return transfer_plans
@wsme_pecan.wsexpose(models.Plan, str)
def get_one(self, plan_uuid):
"""Returns a specific plan.
An exception is raised if no plan is found with the
given UUID.
:param plan_uuid: identifies the plan being fetched
:type plan_uuid: str
:return: matching plan
:rtype: tuskar.api.controllers.v2.models.Plan
:raises: tuskar.common.exception.PlanNotFound if there
is no plan with the given UUID
"""
LOG.debug('Retrieving plan with UUID: %s' % plan_uuid)
manager = PlansManager()
try:
found = manager.retrieve_plan(plan_uuid)
except storage_exceptions.UnknownUUID:
LOG.exception('Could not retrieve plan: %s' % plan_uuid)
raise exception.PlanNotFound()
transfer = models.Plan.from_tuskar_model(found)
return transfer
@wsme_pecan.wsexpose(None, str, status_code=204)
def delete(self, plan_uuid):
"""Deletes the given plan.
:param plan_uuid: identifies the plan being deleted
:type plan_uuid: str
:raises: tuskar.common.exception.PlanNotFound if there
is no plan with the given UUID
"""
LOG.debug('Deleting plan with UUID: %s' % plan_uuid)
manager = PlansManager()
try:
manager.delete_plan(plan_uuid)
except storage_exceptions.UnknownUUID:
LOG.exception('Could not delete plan: %s' % plan_uuid)
raise exception.PlanNotFound()
@wsme_pecan.wsexpose(models.Plan,
body=models.Plan,
status_code=201)
def post(self, transfer_plan):
"""Creates a new plan.
:param transfer_plan: data submitted by the user
:type transfer_plan:
tuskar.api.controllers.v2.models.Plan
:return: created plan
:rtype: tuskar.api.controllers.v2.models.Plan
:raises: tuskar.common.exception.PlanExists: if a plan
with the given name exists
"""
LOG.debug('Creating plan: %s' % transfer_plan)
# We don't want the wsme types to bleed into the rest of Tuskar, so
# explicitly set the description to None if it wasn't specified.
description = transfer_plan.description
if isinstance(description, UnsetType):
description = None
manager = PlansManager()
try:
created = manager.create_plan(transfer_plan.name,
description)
except storage_exceptions.NameAlreadyUsed:
LOG.exception('Plan already exists with this name')
raise exception.PlanExists(transfer_plan.name)
transfer = models.Plan.from_tuskar_model(created)
return transfer
@wsme_pecan.wsexpose({str: str}, str)
def templates(self, plan_uuid):
"""Returns the template files for a given plan.
:return: dictionary of filenames to contents for each template file
involved in the plan
:rtype: dict
"""
LOG.debug('Retrieving templates for plan: %s' % plan_uuid)
manager = PlansManager()
try:
templates = manager.package_templates(plan_uuid)
except storage_exceptions.UnknownUUID:
LOG.exception('Could not retrieve templates for plan: %s' %
plan_uuid)
raise exception.PlanNotFound()
return templates
@wsme.validate(models.Plan)
@wsme_pecan.wsexpose(models.Plan,
str,
body=[models.ParameterValue],
status_code=201)
def patch(self, plan_uuid, param_list):
"""Patches existing plan.
:return: patched plan
:rtype: tuskar.api.controllers.v2.models.Plan
"""
manager = PlansManager()
params = [p.to_tuskar_model() for p in param_list]
try:
updated_plan = manager.set_parameter_values(plan_uuid, params)
except storage_exceptions.UnknownUUID:
LOG.exception('Could not patch plan: %s' % plan_uuid)
raise exception.PlanNotFound()
transfer_plan = models.Plan.from_tuskar_model(updated_plan)
return transfer_plan
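A rough client-side sketch of the plan workflow exposed above (create a plan,
set a parameter, fetch the rendered templates). The host, port and the
parameter name are assumptions; the routes follow the pecan RestController
conventions used by this class:

    import requests

    BASE = 'http://tuskar-host:8585/v2'  # host/port assumed

    plan = requests.post(BASE + '/plans',
                         json={'name': 'overcloud',
                               'description': 'demo plan'}).json()

    # patch() takes a list of ParameterValue objects (name/value pairs).
    requests.patch(BASE + '/plans/%s' % plan['uuid'],
                   json=[{'name': 'Controller-1::count', 'value': '3'}])

    # The templates custom action returns a dict of filename -> contents.
    templates = requests.get(BASE + '/plans/%s/templates' % plan['uuid']).json()
    print(sorted(templates))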

View File

@ -1,150 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
from pecan import rest
from wsmeext import pecan as wsme_pecan
from tuskar.api.controllers.v2 import models
from tuskar.common import exception
from tuskar.common import utils
from tuskar.manager.plan import PlansManager
from tuskar.manager.role import RoleManager
from tuskar.storage import exceptions as storage_exceptions
LOG = logging.getLogger(__name__)
class RolesController(rest.RestController):
"""REST controller for the Role class."""
_custom_actions = {'extra_data': ['GET']}
@wsme_pecan.wsexpose([models.Role])
def get_all(self):
"""Returns all roles.
An empty list is returned if no roles are present.
:return: list of roles; empty list if none are found
:rtype: list of tuskar.api.controllers.v2.models.Role
"""
LOG.debug('Retrieving all roles')
manager = RoleManager()
all_roles = manager.list_roles(only_latest=False)
transfer_roles = [models.Role.from_tuskar_model(r) for r in all_roles]
return transfer_roles
@wsme_pecan.wsexpose({str: str}, str)
def extra_data(self, role_uuid):
"""Retrieve the extra data files associated with a given role.
:param role_uuid: identifies the role
:type role_uuid: str
:return: a dict where keys are filenames and values are their contents
:rtype: dict
This method will retrieve all stored role_extra records (these are
created at the same time as the Roles, by using the --role-extra
parameter of tuskar-load-roles).
The internal representation for a given role-extra file encodes the
file extension into the name. For instance 'hieradata/compute.yaml'
is stored as 'extra_compute_yaml'.
The given role's template is searched for 'get_file' directives and
then matched against the stored role-extra records (based on their
name; e.g. for 'extra_controller_yaml' we look for 'controller.yaml'
after a get_file directive).
This method thus returns all the matched role-extra files for the
given role. The keys will include the relative path if one is
used in the role template:
{
"hieradata/common.yaml": "CONTENTS",
"hieradata/controller.yaml": "CONTENTS",
"hieradata/object.yaml": "CONTENTS"
}
"""
manager = RoleManager()
db_role = manager.retrieve_db_role_by_uuid(role_uuid)
db_role_extra = manager.retrieve_db_role_extra()
role_extra_paths = utils.resolve_template_extra_data(
db_role, db_role_extra)
return manager.template_extra_data_for_output(role_extra_paths)
@wsme_pecan.wsexpose(models.Plan,
str,
body=models.Role,
status_code=201)
def post(self, plan_uuid, role):
"""Adds a new role to plan.
:param plan_uuid: identifies the plan
:type plan_uuid: str
:param role: identifies the role to add
:type role: tuskar.api.controllers.v2.models.Role
:return: modified plan
:rtype: tuskar.api.controllers.v2.models.Plan
:raises: tuskar.common.exception.PlanAlreadyHasRole if the role has
already been added to the plan.
"""
LOG.debug('Adding role: %(role_uuid)s to plan: %(plan_uuid)s' %
{'role_uuid': role.uuid, 'plan_uuid': plan_uuid})
manager = PlansManager()
try:
updated_plan = manager.add_role_to_plan(plan_uuid, role.uuid)
except ValueError:
LOG.debug('The role has already been added to the plan.')
raise exception.PlanAlreadyHasRole(
plan_uuid=plan_uuid,
role_uuid=role.uuid
)
except storage_exceptions.UnknownUUID as e:
LOG.debug(('Either the plan UUID {0} or role UUID {1} could not be'
' found').format(plan_uuid, role.uuid))
raise exception.NotFound(
message=str(e))
transfer_plan = models.Plan.from_tuskar_model(updated_plan)
return transfer_plan
@wsme_pecan.wsexpose(models.Plan,
str,
str)
def delete(self, plan_uuid, role_uuid):
"""Removes a role from given plan.
:param plan_uuid: identifies the plan
:type plan_uuid: str
:param role_uuid: identifies the role to be deleted from plan
:type role_uuid: str
"""
LOG.debug('Removing role: %(role_uuid)s from plan: %(plan_uuid)s' %
{'role_uuid': role_uuid, 'plan_uuid': plan_uuid})
manager = PlansManager()
try:
updated_plan = manager.remove_role_from_plan(plan_uuid, role_uuid)
except storage_exceptions.UnknownUUID as e:
LOG.debug(('Either the plan UUID {0} or role UUID {1} could not be'
' found').format(plan_uuid, role_uuid))
raise exception.NotFound(
message=str(e))
transfer_plan = models.Plan.from_tuskar_model(updated_plan)
return transfer_plan

View File

@ -1,40 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import wsme
from wsme import types as wtypes
from tuskar.openstack.common.gettextutils import _  # noqa
class MultiType(wtypes.UserType):
"""A complex type that represents one or more types.
Used for validating that a value is an instance of one of the types.
:param *types: Variable-length list of types.
"""
def __init__(self, *types):
self.types = types
def __str__(self):
return ' | '.join(map(str, self.types))
def validate(self, value):
for t in self.types:
if t is wsme.types.text and isinstance(value, wsme.types.bytes):
value = value.decode()
if isinstance(value, t):
return value
else:
raise ValueError(
_("Wrong type. Expected '%(type)s', got '%(value)s'")
% {'type': self.types, 'value': type(value)})
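A short sketch of how this validator behaves, for example when used for
PlanParameter.default in the models module; it assumes the wsme package
imported above is installed:

    from wsme import types as wtypes

    from tuskar.api.controllers.v2.types import MultiType

    default_type = MultiType(wtypes.text, int, list, dict)

    print(default_type.validate({'count': 1}))   # returns the dict unchanged
    print(default_type.validate(u'x'))           # text is accepted as-is

    try:
        default_type.validate(3.14)              # float matches none of the types
    except ValueError as err:
        print(err)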

View File

@ -1,34 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from pecan import hooks
from tuskar.db import api as dbapi
class ConfigHook(hooks.PecanHook):
"""Attach the configuration object to the request
so controllers can get to it.
"""
def before(self, state):
state.request.cfg = cfg.CONF
class DBHook(hooks.PecanHook):
def before(self, state):
state.request.dbapi = dbapi.get_instance()

View File

@ -1,71 +0,0 @@
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
import wsme
from wsme import api
class JSONRenderer(object):
"""Custom JSON renderer.
Renders to JSON and handles responses for various HTTP status codes.
"""
def __init__(self, path, extra_vars):
"""Create an empty __init__ to accept the arguments provided to a
Renderer but ignore them as they are not needed.
"""
def _render_fault(self, message, details, code=500):
"""Given the namespace dictionary render a JSON error response for the
fault in the format defined by the OpenStack identity service
documentation.
"""
body = {
'identityFault': {
"message": message,
"details": details,
"code": code
}
}
return wsme.rest.json.encode_error(None, body)
def render(self, template_path, namespace):
"""Given a namespace dict render the response as JSON and return.
If the dict contains a faultcode or wsme.api.Response its a fault from
user code and is rendered via _render_fault.
template_path is a required parameter for renderers but unused in
this context.
"""
if 'faultcode' in namespace:
return self._render_fault(
namespace['faultstring'],
namespace['debuginfo'])
result = namespace['result']
if isinstance(namespace['result'], api.Response):
pecan.response.status_code = result.status_code
return self._render_fault(
result.obj.faultstring, result.obj.debuginfo,
code=result.status_code)
return wsme.rest.json.encode_result(
result,
namespace['datatype']
)
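To make the fault path above concrete, a small sketch of rendering an error
namespace. The import path is an assumption (the renderer is registered as a
custom pecan renderer elsewhere in the tree); the fault values are
illustrative:

    from tuskar.api.renderers import JSONRenderer  # module path assumed

    renderer = JSONRenderer(path=None, extra_vars=None)
    body = renderer.render(None, {'faultcode': 'Client',
                                  'faultstring': 'Plan could not be found.',
                                  'debuginfo': None})
    # body is a JSON identityFault document with the default code of 500.
    print(body)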

View File

@ -1,87 +0,0 @@
<%def name="render(x)">\
NovaCompute${x}:
Metadata:
OpenStack::ImageBuilder::Elements:
- nova-compute
admin-password: unset
glance:
host:
Fn::GetAtt:
- notcompute
- PrivateIp
heat:
access_key_id:
Ref: Key
refresh:
- resource: NovaCompute
secret_key:
Fn::GetAtt:
- Key
- SecretAccessKey
stack:
name:
Ref: AWS::StackName
region:
Ref: AWS::Region
interfaces:
control:
Ref: NovaInterfaces
keystone:
host:
Fn::GetAtt:
- notcompute
- PrivateIp
neutron:
host:
Fn::GetAtt:
- notcompute
- PrivateIp
ovs:
bridge_mappings: ''
enable_tunneling: 'True'
local_ip: 0.0.0.0
network_vlan_ranges: ''
tenant_network_type: gre
ovs_db:
Fn::Join:
- ''
- - mysql+pymysql://neutron:unset@
- Fn::GetAtt:
- notcompute
- PrivateIp
- /neutron
nova:
compute_driver:
Ref: NovaComputeDriver
db:
Fn::Join:
- ''
- - mysql+pymysql://nova:unset@
- Fn::GetAtt:
- notcompute
- PrivateIp
- /nova
host:
Fn::GetAtt:
- notcompute
- PrivateIp
rabbit:
host:
Fn::GetAtt:
- notcompute
- PrivateIp
password: guest
service-password: unset
swift:
store_key: ''
store_user: ''
Properties:
ImageId:
Ref: NovaImage
InstanceType:
Ref: InstanceType
KeyName:
Ref: KeyName
AvailabilityZone: nova::${x}
Type: AWS::EC2::Instance
</%def>\

View File

@ -1,12 +0,0 @@
<%def name="title()">
Tuskar API v1
</%def>
<header>
</header>
<div id="content">
<p> TODO </p>
</div>

View File

@ -1,112 +0,0 @@
<%namespace name="conf" file="provision.conf.mako"/>\
<%namespace name="sh" file="provision.sh.mako"/>\
<%def name="render(x=0)">\
notcompute:
Metadata:
AWS::CloudFormation::Init:
config:
files:
/root/tuskar/provision.conf:
content:
Fn::Base64:
|
<% conf.render() %>\
mode: "000644"
owner: root
group: root
/root/tuskar/provision.sh:
content:
Fn::Base64:
|
<% sh.render() %>\
mode: "000700"
owner: root
group: root
OpenStack::Heat::Stack: {}
OpenStack::ImageBuilder::Elements:
- boot-stack
- heat-cfntools
- heat-localip
- neutron-network-node
admin-password: unset
admin-token: unset
cinder:
db: mysql+pymysql://cinder:unset@localhost/cinder
volume_size_mb: '5000'
controller-address: 192.0.2.5
db-password: unset
glance:
db: mysql+pymysql://glance:unset@localhost/glance
host: 192.0.2.5
heat:
access_key_id:
Ref: Key
admin_password: unset
admin_tenant_name: service
admin_user: heat
auth_encryption_key: unset___________
db: mysql+pymysql://heat:unset@localhost/heat
heat_watch_server_url: http://192.0.2.5:8003
metadata_server_url: http://192.0.2.5:8000
refresh:
- resource: notcompute
secret_key:
Fn::GetAtt:
- Key
- SecretAccessKey
stack:
name:
Ref: AWS::StackName
region:
Ref: AWS::Region
waitcondition_server_url: http://192.0.2.5:8000/v1/waitcondition
interfaces:
control: eth2
keystone:
db: mysql+pymysql://keystone:unset@localhost/keystone
host: 192.0.2.5
neutron:
floatingip_end: 192.0.2.64
floatingip_range: 192.0.2.0/24
floatingip_start: 192.0.2.45
host: 192.0.2.5
metadata_proxy_shared_secret: unset
ovs:
enable_tunneling: 'True'
fixed_range:
end: 10.255.255.254
start: 10.0.0.2
local_ip: 192.0.2.5
ovs_range: 10.0.0.0/8
public_interface: eth0
tenant_network_type: gre
ovs_db: mysql+pymysql://neutron:unset@localhost/ovs_neutron?charset=utf8
nova:
compute_driver: libvirt.LibvirtDriver
db: mysql+pymysql://nova:unset@localhost/nova
host: 192.0.2.5
metadata-proxy: true
rabbit:
host: 192.0.2.5
password: guest
service-password: unset
Properties:
ImageId:
Ref: notcomputeImage
InstanceType:
Ref: InstanceType
KeyName:
Ref: KeyName
AvailabilityZone: nova::${x}
UserData:
Fn::Base64:
|
#!/bin/bash -v
/opt/aws/bin/cfn-init
# We need to set the undercloud Heat IP in the boto config. At the moment
# this is hardcoded. We should set a hostname in the tuskar conf.
sed -e "s/0.0.0.0/192.0.2.2/g" /var/lib/heat-cfntools/cfn-boto-cfg > /tmp/cfn-boto-cfg
mv -f /tmp/cfn-boto-cfg /var/lib/heat-cfntools/cfn-boto-cfg
exec /root/tuskar/provision.sh &> /root/tuskar/provision.log &
Type: AWS::EC2::Instance
</%def>\

View File

@ -1,63 +0,0 @@
<%namespace name="not_compute" file="not_compute.yaml"/>\
<%namespace name="compute" file="compute.yaml"/>\
<% # Mapping between resource class service type and a HEAT template
# TODO(mtaylor) Move this into the config file or add it to the model via an API call.
templates = {"compute": compute, "not_compute": not_compute, "controller": not_compute} %>\
Description: Nova API,Keystone,Heat Engine and API,Glance,Neutron,Dedicated MySQL
server,Dedicated RabbitMQ Server,Group of Nova Computes
HeatTemplateFormatVersion: '2012-12-12'
Parameters:
InstanceType:
Default: baremetal
Description: Flavor to request when deploying.
Type: String
KeyName:
Default: default
Description: Name of an existing EC2 KeyPair to enable SSH access to the instances
Type: String
NovaComputeDriver:
Default: libvirt.LibvirtDriver
Type: String
NovaImage:
Default: overcloud-compute
Type: String
NovaInterfaces:
Default: eth0
Type: String
PowerUserName:
Default: stack
Description: What username to ssh to the virtual power host with.
Type: String
notcomputeImage:
Type: String
Default: overcloud-control
Resources:
AccessPolicy:
Properties:
AllowedResources:
- notcompute
Type: OS::Heat::AccessPolicy
Key:
Properties:
UserName:
Ref: User
Type: AWS::IAM::AccessKey
User:
Properties:
Policies:
- Ref: AccessPolicy
Type: AWS::IAM::User
<% for rc in resource_classes:
for r in rc.racks:
for n in r.nodes:
templates[rc.service_type].render(n.node_id)
%>
Outputs:
KeystoneURL:
Description: URL for the Overcloud Keystone service
Value:
Fn::Join:
- ''
- - http://
- Fn::GetAtt: [notcompute, PublicIp]
- :5000/v2.0/

View File

@ -1,26 +0,0 @@
<%def name="render()">\
# Aggregates #
declare -A AGGREGATES
% for rc in resource_classes:
AGGREGATES[${rc.name}]=-1
% endfor
# Nodes #
declare -A BM_HOSTS
% for rc in resource_classes:
%if rc.service_type=="compute":
%for r in rc.racks:
%for n in r.nodes:
BM_HOSTS[${n.node_id}]=${rc.name}
% endfor
% endfor
% endif
% endfor
# Flavors #
declare -A FLAVORS
% for rc in resource_classes:
% for f in rc.flavors:
<% ram, vcpu, disk, ephemeral, swap = nova_util.extract_from_capacities(f) %>\
FLAVORS[${rc.name}.${f.name}]='--ephemeral=${ephemeral} --swap=${swap} ${rc.name}.${f.name} auto ${ram} ${disk} ${vcpu}'
% endfor
% endfor
</%def>\

View File

@ -1,83 +0,0 @@
<%def name="render()">\
#!/bin/bash -v
CONF=/root/tuskar/provision.conf
source /root/stackrc
wait_for(){
LOOPS=$1
SLEEPTIME=$2
shift ; shift
i=0
while [ $i -lt $LOOPS ] ; do
i=$((i + 1))
eval "$@" && return 0 || true
sleep $SLEEPTIME
done
return 1
}
wait_for 60 10 test -f /opt/stack/boot-stack.ok
wait_for 60 10 nova list
# We must enable host aggregate matching when scheduling
echo "scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter" >> /etc/nova/nova.conf
service nova-scheduler restart
# Remove default flavors
for i in {1..5}
do
nova flavor-delete $i
done
# Set to not empty
HASH="md5sum"
declare -A EXISTING_AGGREGATES
while true
do
if [ "$HASH" != "`md5sum $CONF`" ]
then
HASH="`md5sum $CONF`"
echo "New Resources Found, Registering"
source $CONF
# Register Host Aggregates
aggs=`nova aggregate-list`
for a in ${'${!AGGREGATES[@]}'}
do
# Check to see if this aggregate has already been registered
if [ `expr "$aggs" : ".*\s$a\s"` == 0 ]
then
${"EXISTING_AGGREGATES[$a]=$(nova aggregate-create $a | tail -n +4 | head -n 1 | tr -s ' ' | cut -d '|' -f2)"}
${"nova aggregate-set-metadata ${EXISTING_AGGREGATES[$a]} class=$a-hosts"}
fi
done
# Register Flavors
for f in ${'${!FLAVORS[@]}'}
do
${'nova flavor-show $f &> /dev/null'}
if [ $? == 1 ]; then
${'nova flavor-create ${FLAVORS[$f]}'}
${'nova flavor-key $f set class=`expr $f : "\(.*\)\."`-hosts'}
fi
done
# Register Hosts
${'while [ ${#BM_HOSTS[@]} -gt 0 ]'}
do
LIST=`nova host-list`
for i in ${'${!BM_HOSTS[@]}'}
do
HOST_ID=`expr "$LIST" : ".*\(overcloud-novacompute$i-\(\w\)\{6\}\)"`
if [ $HOST_ID ]
then
# Check to see if this host is already added to this aggregate
${'AGG_DETAILS=`nova aggregate-details ${EXISTING_AGGREGATES[${BM_HOSTS[$i]}]}`'}
${'if [ `expr "$AGG_DETAILS" : ".*$HOST_ID"` ]'}
then
${'nova aggregate-add-host ${EXISTING_AGGREGATES[${BM_HOSTS[$i]}]} $HOST_ID'}
fi
unset BM_HOSTS[$i]
fi
done
sleep 1m
done
echo "Resource Registration Complete"
else
cat /var/cache/heat-cfntools/last_metadata | python -c 'import sys;import json;print(json.load(sys.stdin)["AWS::CloudFormation::Init"]["config"]["files"]["/root/tuskar/provision.conf"]["content"])' > /root/tuskar/provision.conf
fi
sleep 1m
done
</%def>\

View File

@ -1,26 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# TODO(deva): move eventlet imports to tuskar.__init__ once we move to PBR
import os
os.environ['EVENTLET_NO_GREENDNS'] = 'yes'
import eventlet
eventlet.monkey_patch(os=False)
from tuskar.openstack.common import gettextutils
gettextutils.install('tuskar')

View File

@ -1,53 +0,0 @@
#!/usr/bin/env python
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""The Tuskar Service API."""
import logging
import sys
from wsgiref import simple_server
from oslo_config import cfg
from tuskar.api import app
from tuskar.common import service as tuskar_service
from tuskar.openstack.common import log
def main(argv=None):
if argv is None:
argv = sys.argv
tuskar_service.prepare_service(argv)
# Build and start the WSGI app
host = cfg.CONF.tuskar_api_bind_ip
port = cfg.CONF.tuskar_api_port
wsgi = simple_server.make_server(
host, port,
app.VersionSelectorApplication())
LOG = log.getLogger(__name__)
LOG.info("Serving on http://%s:%s" % (host, port))
LOG.info("Configuration:")
cfg.CONF.log_opt_values(LOG, logging.INFO)
try:
wsgi.serve_forever()
except KeyboardInterrupt:
pass

View File

@ -1,37 +0,0 @@
#!/usr/bin/env python
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Run storage database migration.
"""
import sys
from tuskar.common import service
from tuskar.db import migration
from tuskar.db.sqlalchemy.api import get_backend
def main(argv=None):
if argv is None:
argv = sys.argv
# Prepare the Tuskar service and load the database backend.
service.prepare_service(argv)
get_backend()
migration.db_sync()

View File

@ -1,59 +0,0 @@
#!/usr/bin/env python
#
# Copyright 2015 Red Hat
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
import sys
from oslo_config import cfg
from tuskar.common import service
from tuskar.storage import delete_roles as dr
def _print_names(message, names):
print("{0}: \n {1}".format(message, '\n '.join(names)))
cfg.CONF.register_cli_opt(cfg.BoolOpt('dryrun', default=False))
cfg.CONF.register_cli_opt(cfg.MultiStrOpt(
'uuid', short='u', help='List of role uuids to delete'))
cfg.CONF.register_cli_opt(cfg.BoolOpt(
'all', default=False,
help='If specified, all roles will be deleted; overrides the '
'--uuid argument'))
def main(argv=None):
if argv is None:
argv = sys.argv
service.prepare_service(argv)
if not cfg.CONF.uuid and not cfg.CONF.all:
sys.stderr.write(
'Either specific roles must be specified using the --uuid '
'argument or --all must be specified\n')
sys.exit(1)
if cfg.CONF.uuid:
deleted = dr.delete_roles(cfg.CONF.uuid, noop=cfg.CONF.dryrun)
else:
deleted = dr.delete_all_roles(noop=cfg.CONF.dryrun)
if len(deleted):
_print_names("Deleted", deleted)

View File

@ -1,58 +0,0 @@
#!/usr/bin/env python
#
# Copyright 2015 Red Hat
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
import sys
from oslo_config import cfg
from tuskar.common import service
from tuskar.storage.load_roles import load_role
def _print_names(message, names):
print("{0}: \n {1}".format(message, '\n '.join(names)))
cfg.CONF.register_cli_opt(cfg.StrOpt('name', short='n', dest='name'))
cfg.CONF.register_cli_opt(cfg.StrOpt(
'filepath', dest='file_path', short='f'))
cfg.CONF.register_cli_opt(cfg.StrOpt('relative-path', dest='relative_path'))
cfg.CONF.register_cli_opt(cfg.MultiStrOpt('extra-data', short='e'))
def main(argv=None):
if argv is None:
argv = sys.argv
service.prepare_service(argv)
if not cfg.CONF.file_path:
sys.stderr.write("You must specify the path to the main template "
"which defines this role.")
sys.exit(1)
name = cfg.CONF.name if cfg.CONF.name else ''
relative_path = cfg.CONF.relative_path if cfg.CONF.relative_path else None
created, updated = load_role(name, cfg.CONF.file_path,
extra_data=cfg.CONF.extra_data,
relative_path=relative_path)
if len(created):
_print_names("Created", created)
if len(updated):
_print_names("Updated", updated)

View File

@ -1,69 +0,0 @@
#!/usr/bin/env python
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
import sys
from oslo_config import cfg
from tuskar.common import service
from tuskar.storage.load_roles import load_roles
def _print_names(message, names):
print("{0}: \n {1}".format(message, '\n '.join(names)))
seed_help = ('Full path to the template that should be loaded '
'as the master seed')
resource_registry_help = ('Path to the Heat environment file which maps the'
' custom resource types to template paths.')
cfg.CONF.register_cli_opt(cfg.StrOpt('master-seed', dest='master_seed',
help=seed_help))
cfg.CONF.register_cli_opt(cfg.StrOpt('resource-registry',
dest='resource_registry',
help=resource_registry_help))
cfg.CONF.register_cli_opt(cfg.MultiStrOpt('role', short='r'))
cfg.CONF.register_cli_opt(cfg.MultiStrOpt('role-extra', short='re'))
def main(argv=None):
if argv is None:
argv = sys.argv
service.prepare_service(argv)
if cfg.CONF.master_seed and not cfg.CONF.resource_registry:
sys.stderr.write("When using `master-seed` you must also specify "
"`resource-registry`.")
sys.exit(1)
all_roles, created, updated = load_roles(
cfg.CONF.role,
seed_file=cfg.CONF.master_seed,
resource_registry_path=cfg.CONF.resource_registry,
role_extra=cfg.CONF.role_extra)
if len(created):
_print_names("Created", created)
if len(updated):
_print_names("Updated", updated)
print("Imported {0} roles".format(len(all_roles)))

View File

@ -1,60 +0,0 @@
#!/usr/bin/env python
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
import sys
from oslo_config import cfg
from tuskar.common import service
from tuskar.storage.load_roles import load_seed
def _print_names(message, names):
print("{0}: \n {1}".format(message, '\n '.join(names)))
seed_help = ('Full path to the template that should be loaded '
'as the master seed')
resource_registry_help = ('Path to the Heat environment file which maps the'
' custom resource types to template paths.')
cfg.CONF.register_cli_opt(cfg.StrOpt('master-seed', dest='master_seed',
help=seed_help))
cfg.CONF.register_cli_opt(cfg.StrOpt('resource-registry',
dest='resource_registry',
help=resource_registry_help))
def main(argv=None):
if argv is None:
argv = sys.argv
service.prepare_service(argv)
if not cfg.CONF.master_seed or not cfg.CONF.resource_registry:
sys.stderr.write("You must specify both `master-seed` and "
"`resource-registry`.")
sys.exit(1)
created, updated = load_seed(
seed_file=cfg.CONF.master_seed,
resource_registry_path=cfg.CONF.resource_registry)
if len(created):
_print_names("Created", created)
if len(updated):
_print_names("Updated", updated)

View File

@ -1,30 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
# Copyright 2012 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_db import options as db_options
from tuskar import version
def parse_args(argv, default_config_files=None):
db_options.set_defaults(cfg.CONF, sqlite_db='tuskar.sqlite')
cfg.CONF(argv[1:],
project='tuskar',
version=version.version_info.release_string(),
default_config_files=default_config_files)

View File

@ -1,214 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tuskar base exception handling.
Includes decorator for re-raising Tuskar-type exceptions.
SHOULD include dedicated exception logging.
"""
from oslo_config import cfg
import six
from tuskar.openstack.common.gettextutils import _ # noqa
from tuskar.openstack.common import log as logging
LOG = logging.getLogger(__name__)
exc_log_opts = [
cfg.BoolOpt('fatal_exception_format_errors',
default=False,
help='make exception message format errors fatal'),
]
CONF = cfg.CONF
CONF.register_opts(exc_log_opts)
def _cleanse_dict(original):
"""Strip all admin_password, new_pass, rescue_pass keys from a dict."""
return dict((k, v) for k, v in six.iteritems(original) if "_pass" not in k)
class TuskarException(Exception):
"""Base Tuskar Exception
To correctly use this class, inherit from it and define
a 'message' property. That message will get printf'd
with the keyword arguments provided to the constructor.
"""
message = _("An unknown exception occurred.")
code = 500
headers = {}
safe = False
def __init__(self, message=None, **kwargs):
self.kwargs = kwargs
if 'code' not in self.kwargs:
try:
self.kwargs['code'] = self.code
except AttributeError:
pass
if not message:
try:
message = self.message % kwargs
except Exception as e:
# kwargs doesn't match a variable in the message
# log the issue and the kwargs
LOG.exception(_('Exception in string format operation'))
for name, value in six.iteritems(kwargs):
LOG.error("%s: %s" % (name, value))
if CONF.fatal_exception_format_errors:
raise e
else:
# at least get the core message out if something happened
message = self.message
super(TuskarException, self).__init__(message)
def format_message(self):
if self.__class__.__name__.endswith('_Remote'):
return self.args[0]
else:
return six.text_type(self)
class Invalid(TuskarException):
message = _("Invalid.")
code = 400
class NotAuthorized(TuskarException):
message = _("Not authorized.")
code = 403
class AdminRequired(NotAuthorized):
message = _("User does not have admin privileges")
class PolicyNotAuthorized(NotAuthorized):
message = _("Policy doesn't allow %(action)s to be performed.")
class NotFound(TuskarException):
message = _("Resource could not be found.")
code = 404
class OvercloudRoleNotFound(NotFound):
message = _('Overcloud role could not be found.')
class OvercloudRoleCountNotFound(NotFound):
message = _('Overcloud role count could not be found.')
class OvercloudNotFound(NotFound):
message = _('Overcloud could not be found.')
class DuplicateEntry(TuskarException):
message = _("Duplicate entry found.")
code = 409
class OvercloudRoleExists(DuplicateEntry):
message = _("Overcloud role with name %(name)s already exists.")
class OvercloudRoleInUse(Invalid):
message = _('Role %(name)s is in use by an overcloud.')
class OvercloudRoleCountExists(DuplicateEntry):
message = _("Count for overcloud %(cloud)s and "
"role %(role)s already exists.")
class OvercloudExists(DuplicateEntry):
message = _("Overcloud with name %(name)s already exists.")
class DuplicateAttribute(DuplicateEntry):
message = _("One or more attributes is duplicated for the overcloud.")
class ConfigNotFound(TuskarException):
message = _("Could not find config at %(path)s")
class StackNotFound(NotFound):
message = _("The Stack for this Overcloud can't be found.")
class StackAlreadyCreated(DuplicateEntry):
message = _("The Stack for this Overcloud already exists.")
class ParseCountsAndFlavorsFailed(DuplicateEntry):
message = _("Parsing of counts and flavors from roles failed.")
class HeatTemplateCreateFailed(Invalid):
message = _("The Heat template failed to create.")
class HeatTemplateValidateFailed(Invalid):
message = _("Validation of the Heat template failed.")
class HeatStackProcessingAttributesFailed(Invalid):
message = _("Processing of Heat stack attributes failed")
class HeatStackUpdateFailed(Invalid):
message = _("The Heat stack failed to update.")
class HeatStackCreateFailed(Invalid):
message = _("The Heat stack failed to update.")
class HeatStackDeleteFailed(Invalid):
message = _("The Heat stack failed to delete.")
class PlanNotFound(NotFound):
message = _('Plan could not be found.')
class PlanExists(DuplicateEntry):
message = _("Plan with name %(name)s already exists.")
class PlanAlreadyHasRole(DuplicateEntry):
message = _("Plan %(plan_uuid)s already has role %(role_uuid)s.")
class PlanParametersNotExist(Invalid):
message = _("There are no parameters named %(param_names)s"
" in plan %(plan_uuid)s.")
class InvalidTemplateExtraStoredName(TuskarException):
# Code 500 (internal server error) is the default for TuskarException.
message = _("Unexpected name for stored template extra file "
"%(name)s. Expected it to start with 'extra_'.")

View File

@ -1,66 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
# Copyright 2012 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from oslo_config import cfg
path_opts = [
cfg.StrOpt('pybasedir',
default=os.path.abspath(os.path.join(os.path.dirname(__file__),
'../')),
help='Directory where the tuskar python module is installed'),
cfg.StrOpt('bindir',
default='$pybasedir/bin',
help='Directory where tuskar binaries are installed'),
cfg.StrOpt('state_path',
default='$pybasedir',
help="Top-level directory for maintaining nova's state"),
]
CONF = cfg.CONF
CONF.register_opts(path_opts)
def basedir_def(*args):
"""Return an uninterpolated path relative to $pybasedir."""
return os.path.join('$pybasedir', *args)
def bindir_def(*args):
"""Return an uninterpolated path relative to $bindir."""
return os.path.join('$bindir', *args)
def state_path_def(*args):
"""Return an uninterpolated path relative to $state_path."""
return os.path.join('$state_path', *args)
def basedir_rel(*args):
"""Return a path relative to $pybasedir."""
return os.path.join(CONF.pybasedir, *args)
def bindir_rel(*args):
"""Return a path relative to $bindir."""
return os.path.join(CONF.bindir, *args)
def state_path_rel(*args):
"""Return a path relative to $state_path."""
return os.path.join(CONF.state_path, *args)

View File

@ -1,49 +0,0 @@
#!/usr/bin/env python
#
# Copyright 2012 eNovance <licensing@enovance.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import socket
from oslo_config import cfg
from tuskar.openstack.common import log
cfg.CONF.register_opts([
cfg.IntOpt('periodic_interval',
default=60,
help='seconds between running periodic tasks'),
cfg.StrOpt('host',
default=socket.getfqdn(),
help='Name of this node. This can be an opaque identifier. '
'It is not necessarily a hostname, FQDN, or IP address. '
'However, the node name must be valid within '
'an AMQP key, and if using ZeroMQ, a valid '
'hostname, FQDN, or IP address'),
])
def prepare_service(argv=[]):
cfg.set_defaults(log.log_opts,
default_log_levels=['amqp=WARN',
'amqplib=WARN',
'qpid.messaging=INFO',
'sqlalchemy=WARN',
'keystoneclient=INFO',
'stevedore=INFO',
'eventlet.wsgi.server=WARN',
'iso8601=WARN'
])
cfg.CONF(argv[1:], project='tuskar')
log.setup('tuskar')

View File

@ -1,175 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# Copyright (c) 2012 NTT DOCOMO, INC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Utilities and helper functions."""
import os
import re
from oslo_config import cfg
from tuskar.common import exception
from tuskar.openstack.common.gettextutils import _  # noqa
from tuskar.openstack.common import log as logging
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
class LazyPluggable(object):
"""A pluggable backend loaded lazily based on some value."""
def __init__(self, pivot, config_group=None, **backends):
self.__backends = backends
self.__pivot = pivot
self.__backend = None
self.__config_group = config_group
def __get_backend(self):
if not self.__backend:
if self.__config_group is None:
backend_name = CONF[self.__pivot]
else:
backend_name = CONF[self.__config_group][self.__pivot]
if backend_name not in self.__backends:
msg = _('Invalid backend: %s') % backend_name
raise exception.TuskarException(msg)
backend = self.__backends[backend_name]
if isinstance(backend, tuple):
name = backend[0]
fromlist = backend[1]
else:
name = backend
fromlist = backend
self.__backend = __import__(name, None, None, fromlist)
return self.__backend
def __getattr__(self, key):
backend = self.__get_backend()
return getattr(backend, key)
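# Usage sketch, mirroring how tuskar.db.migration (shown later in this diff)
# wires its backend: the module named by the 'backend' option in the
# [database] group is imported on first attribute access and then delegated to.
#
#   IMPL = LazyPluggable(pivot='backend',
#                        config_group='database',
#                        sqlalchemy='tuskar.db.sqlalchemy.migration')
#   IMPL.db_version()  # imports tuskar.db.sqlalchemy.migration, then delegates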
def is_int_like(val):
"""Check if a value looks like an int."""
try:
return str(int(val)) == str(val)
except Exception:
return False
def read_cached_file(filename, cache_info, reload_func=None):
"""Read from a file if it has been modified.
:param cache_info: dictionary to hold opaque cache.
:param reload_func: optional function to be called with data when
file is reloaded due to a modification.
:returns: data from file
"""
mtime = os.path.getmtime(filename)
if not cache_info or mtime != cache_info.get('mtime'):
LOG.debug("Reloading cached file %s" % filename)
with open(filename) as fap:
cache_info['data'] = fap.read()
cache_info['mtime'] = mtime
if reload_func:
reload_func(cache_info['data'])
return cache_info['data']
def resolve_role_extra_name_from_path(role_extra_path):
"""Get the name we will use to store a role-extra file based on its name
We want to capture the filename and extension into the name of the
stored role-extra object. The name is constructed by prepending 'extra_'
and using the final '_' to include the extension. Any paths used before
the filename are dropped at this point (these are resolved relative to
a given template, i.e. where they are used and referenced).
For instance 'hieradata/compute.yaml' is stored as
'extra_compute_yaml'.
"""
name_ext = os.path.basename(role_extra_path)
name, extension = os.path.splitext(name_ext)
return "extra_%s_%s" % (name, extension.replace('.', ''))
def resolve_template_file_name_from_role_extra_name(role_extra_name):
"""Return the name of the included file based on the role-extra name
The internal representation for a given role-extra file encodes the
file extension into the name. For instance 'compute.yaml'
is stored as 'extra_compute_yaml'. Here, given the stored name,
return name.extension
Raises a InvalidTemplateExtraStoredName exception if the given
role_extra_name doesn't start with 'extra_' as a prefix.
:param role_extra_name: the name as stored for the role-extra
:type role_extra_name: string
:return: the name as used in the template
:rtype: string
Returns 'compute.yaml' from 'extra_compute_yaml'.
"""
if not role_extra_name.startswith("extra_"):
raise exception.InvalidTemplateExtraStoredName(name=role_extra_name)
role_extra_name = role_extra_name[6:]
name_extension = role_extra_name.rsplit("_", 1)
if name_extension[1] == '':
return name_extension[0]
return ".".join(name_extension)
def resolve_template_extra_data(template, template_extra=[]):
"""Match all occurences of get_file against the stored role-extra data.
:param template: the given heat template to search for "get_file"(s)
:type template: tuskar.storage.models.StoredFile
:param template_extra: a list of all stored role-extra data
:type template_extra: list of tuskar.storage.models.StoredFile
:return: a dict of 'name'=>'path' for each matched role-extra
:rtype: dict
Using regex, compile a list of all occurrences of 'get_file:' in the
template. Match each of the stored role-extra data based on their name.
For each match capture the full path as it appears in the template
and couple it to the name of the role-extra we have on record. For
example:
[{'extra_common_yaml': 'hieradata/common.yaml'},
{'extra_object_yaml': 'hieradata/object.yaml'}]
"""
included_files = []
all_get_files = re.findall("get_file:.*\n", template.contents)
# looks like: ["get_file: hieradata/common.yaml}", ... ]
for te in template_extra:
token = resolve_template_file_name_from_role_extra_name(te.name)
for get_file in all_get_files:
if re.match("get_file:.*%s[}]*\n" % token, get_file):
path = get_file.replace("get_file:", "").lstrip().replace(
"}", "").rstrip()
included_files.append({te.name: path})
return included_files
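# Illustrative sketch, assuming a minimal stand-in for
# tuskar.storage.models.StoredFile (only .name and .contents are touched here):
#
#   import collections
#   Stored = collections.namedtuple('Stored', 'name contents')
#
#   template = Stored(name='overcloud.yaml',
#                     contents='config: {get_file: hieradata/common.yaml}\n')
#   extras = [Stored(name='extra_common_yaml', contents='...')]
#
#   resolve_template_extra_data(template, extras)
#   # -> [{'extra_common_yaml': 'hieradata/common.yaml'}]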

View File

View File

@ -1,41 +0,0 @@
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Base classes for storage engines
"""
import abc
from oslo_config import cfg
from oslo_db import api as db_api
import six
_BACKEND_MAPPING = {'sqlalchemy': 'tuskar.db.sqlalchemy.api'}
IMPL = db_api.DBAPI.from_config(cfg.CONF, backend_mapping=_BACKEND_MAPPING,
lazy=True)
def get_instance():
"""Return a DB API instance."""
return IMPL
@six.add_metaclass(abc.ABCMeta)
class Connection(object):
"""Base class for storage system connections."""
@abc.abstractmethod
def __init__(self):
"""Constructor."""

View File

@ -1,36 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Database setup and migration commands."""
from tuskar.common import utils
IMPL = utils.LazyPluggable(
pivot='backend',
config_group='database',
sqlalchemy='tuskar.db.sqlalchemy.migration')
INIT_VERSION = 0
def db_sync(version=None):
"""Migrate the database to `version` or the most recent version."""
return IMPL.db_sync(version=version)
def db_version():
"""Display the current database version."""
return IMPL.db_version()

View File

@ -1,8 +0,0 @@
import logging
# There is probably a better way of doing this, but it was being done
# in the models.py while I was cleaning that up. I suspect this will
# be moved elsewhere in the future.
logging.basicConfig()
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)

View File

@ -1,420 +0,0 @@
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""SQLAlchemy storage backend."""
import threading
from oslo_config import cfg
from oslo_db import exception as db_exception
from oslo_db.sqlalchemy import session as db_session
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm.exc import NoResultFound
from sqlalchemy.orm import subqueryload
from tuskar.common import exception
from tuskar.db import api
from tuskar.db.sqlalchemy import models
from tuskar.openstack.common import log
CONF = cfg.CONF
LOG = log.getLogger(__name__)
_FACADE = None
_LOCK = threading.Lock()
def _create_facade_lazily():
global _LOCK, _FACADE
if _FACADE is None:
with _LOCK:
if _FACADE is None:
_FACADE = db_session.EngineFacade.from_config(CONF)
return _FACADE
def get_engine():
facade = _create_facade_lazily()
return facade.get_engine()
def get_session(**kwargs):
facade = _create_facade_lazily()
return facade.get_session(**kwargs)
def get_backend():
"""The backend is this module itself."""
return Connection()
def model_query(model, *args, **kwargs):
"""Query helper for simpler session usage.
:param session: if present, the session to use
"""
session = kwargs.get('session') or get_session()
query = session.query(model, *args)
return query
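# Usage sketch for the helper above ('controller' is an illustrative role
# name; an explicit session keeps the query and its result loading on one
# connection):
#
#   session = get_session()
#   role = model_query(models.OvercloudRole, session=session).\
#       filter_by(name='controller').first()
#   session.close()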
class Connection(api.Connection):
"""SqlAlchemy connection."""
def __init__(self):
# The superclass __init__ is abstract and prevents the class
# from being instantiated unless we explicitly remove that
# here.
pass
def get_overcloud_roles(self):
"""Returns all overcloud roles known to Tuskar.
:return: list of roles; empty list if none are found
:rtype: list of tuskar.db.sqlalchemy.models.OvercloudRole
"""
session = get_session()
roles = session.query(models.OvercloudRole).all()
session.close()
return roles
def get_overcloud_role_by_id(self, role_id):
"""Single overcloud role query.
:return: role if one exists with the given ID
:rtype: tuskar.db.sqlalchemy.models.OvercloudRole
:raises: tuskar.common.exception.OvercloudRoleNotFound: if no
role with the given ID exists
"""
session = get_session()
try:
query = session.query(models.OvercloudRole).filter_by(
id=role_id)
result = query.one()
except NoResultFound:
raise exception.OvercloudRoleNotFound()
finally:
session.close()
return result
def create_overcloud_role(self, overcloud_role):
"""Creates a new overcloud role in the database.
:param overcloud_role: role instance to save
:type overcloud_role: tuskar.db.sqlalchemy.models.OvercloudRole
:return: the role instance that was saved with its
ID populated
:rtype: tuskar.db.sqlalchemy.models.OvercloudRole
:raises: tuskar.common.exception.OvercloudRoleExists: if a role
with the given name exists
"""
session = get_session()
session.begin()
try:
session.add(overcloud_role)
session.commit()
return overcloud_role
except db_exception.DBDuplicateEntry:
raise exception.OvercloudRoleExists(name=overcloud_role.name)
finally:
session.close()
def update_overcloud_role(self, updated):
"""Updates the given overcloud role.
:param updated: role instance containing changed values
:type updated: tuskar.db.sqlalchemy.models.OvercloudRole
:return: the role instance that was saved
:rtype: tuskar.db.sqlalchemy.models.OvercloudRole
:raises: tuskar.common.exception.OvercloudRoleNotFound if there
is no role with the given ID
"""
existing = self.get_overcloud_role_by_id(updated.id)
for a in ('name', 'description', 'image_name', 'flavor_id'):
if getattr(updated, a) is not None:
setattr(existing, a, getattr(updated, a))
return self.create_overcloud_role(existing)
def delete_overcloud_role_by_id(self, role_id):
"""Deletes an overcloud role from the database.
:param role_id: database ID of the role
:type role_id: int
:raises: tuskar.common.exception.OvercloudRoleNotFound if there
is no role with the given ID
"""
role = self.get_overcloud_role_by_id(role_id)
session = get_session()
session.begin()
try:
session.delete(role)
session.commit()
except db_exception.DBError as e:
if isinstance(e.inner_exception, IntegrityError):
raise exception.OvercloudRoleInUse(name=role.name)
else:
raise
finally:
session.close()
def get_overclouds(self):
"""Returns all overcloud instances from the database.
:return: list of overcloud instances; empty list if none are found
:rtype: list of tuskar.db.sqlalchemy.models.Overcloud
"""
session = get_session()
overclouds = (
session.query(models.Overcloud).
options(subqueryload(models.Overcloud.attributes)).
options(subqueryload(models.Overcloud.counts)).
all()
)
session.close()
return overclouds
def get_overcloud_by_id(self, overcloud_id):
"""Returns a specific overcloud instance.
:return: overcloud if one exists with the given ID
:rtype: tuskar.db.sqlalchemy.models.Overcloud
:raises: tuskar.common.exception.OvercloudNotFound: if no
overcloud with the given ID exists
"""
session = get_session()
try:
query = (
session.query(models.Overcloud).
options(subqueryload(models.Overcloud.attributes)).
options(subqueryload(models.Overcloud.counts)).
options(subqueryload('counts.overcloud_role')).
filter_by(id=overcloud_id)
)
result = query.one()
except NoResultFound:
raise exception.OvercloudNotFound()
finally:
session.close()
return result
def create_overcloud(self, overcloud):
"""Creates a new overcloud instance to the database.
:param overcloud: overcloud instance to save
:type overcloud: tuskar.db.sqlalchemy.models.Overcloud
:return: the overcloud instance that was saved with its
ID populated
:rtype: tuskar.db.sqlalchemy.models.Overcloud
:raises: tuskar.common.exception.OvercloudExists: if an overcloud
with the given name exists
"""
session = get_session()
session.begin()
try:
session.add(overcloud)
session.commit()
# Reload from the database to load all of the joined table data
overcloud = self.get_overcloud_by_id(overcloud.id)
return overcloud
except db_exception.DBDuplicateEntry as e:
if 'name' in e.columns:
raise exception.OvercloudExists(name=overcloud.name)
else:
raise exception.DuplicateAttribute()
finally:
session.close()
def update_overcloud(self, updated):
"""Updates the configuration of an existing overcloud.
The specified parameter is an instance of the domain model with
the changes to be made. Updating follows the given rules:
- The updated overcloud must include the ID of the overcloud
being updated.
- Any direct attributes on the overcloud that are *not* being changed
should have their values set to None.
- For attributes and counts, only differences are specified according
to the following rules:
- New items are specified in the updated object's lists
- Updated items are specified in the updated object's lists with
the new value and existing key
- Removed items are specified in the updated object's lists with
a value of None (zero in the case of a count).
- Unchanged items are *not* specified.
:param updated: overcloud instance containing changed values
:type updated: tuskar.db.sqlalchemy.models.Overcloud
:return: the overcloud instance that was saved
:rtype: tuskar.db.sqlalchemy.models.Overcloud
:raises: tuskar.common.exception.OvercloudNotFound if there
is no overcloud with the given ID
"""
existing = self.get_overcloud_by_id(updated.id)
session = get_session()
session.begin()
try:
# First class attributes on the overcloud
for name in ('stack_id', 'name', 'description'):
new_value = getattr(updated, name)
if new_value is not None:
setattr(existing, name, new_value)
self._update_overcloud_attributes(existing, session, updated)
self._update_overcloud_counts(existing, session, updated)
# Save the modified object
session.add(existing)
session.commit()
return existing
finally:
session.close()
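# Worked sketch of the diff semantics described in the docstring above
# (the keys, IDs, and counts are illustrative only):
#
#   updated = models.Overcloud(
#       id=1,
#       name=None,               # direct attribute left unchanged
#       description='new text',  # direct attribute being changed
#       attributes=[
#           # added or updated attribute
#           models.OvercloudAttribute(key='ntp_server', value='10.0.0.1'),
#           # a value of None removes the attribute
#           models.OvercloudAttribute(key='stale_key', value=None),
#       ],
#       counts=[
#           # added or updated count
#           models.OvercloudRoleCount(overcloud_role_id=2, num_nodes=3),
#           # a count of zero removes the entry
#           models.OvercloudRoleCount(overcloud_role_id=5, num_nodes=0),
#       ])
#   Connection().update_overcloud(updated)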
@staticmethod
def _update_overcloud_attributes(existing, session, updated):
if updated.attributes is not None:
existing_keys = [a.key for a in existing.attributes]
existing_attributes_by_key = (
dict((a.key, a) for a in existing.attributes))
delete_keys = []
for a in updated.attributes:
# Deleted
if a.value is None:
delete_keys.append(a.key)
continue
# Updated
if a.key in existing_keys:
updating = existing_attributes_by_key[a.key]
updating.value = a.value
session.add(updating)
continue
# Added
if a.key not in existing_keys:
existing_attributes_by_key[a.key] = a
a.overcloud_id = updated.id
existing.attributes.append(a)
session.add(a)
continue
# Purge deleted attributes
for a in existing.attributes:
if a.key in delete_keys:
existing.attributes.remove(a)
session.delete(a)
@staticmethod
def _update_overcloud_counts(existing, session, updated):
if updated.counts is not None:
existing_count_role_ids = [c.overcloud_role_id
for c in existing.counts]
existing_counts_by_role_id = (
dict((c.overcloud_role_id, c) for c in existing.counts))
delete_role_ids = []
for c in updated.counts:
# Deleted
if c.num_nodes == 0:
delete_role_ids.append(c.overcloud_role_id)
continue
# Updated
if c.overcloud_role_id in existing_count_role_ids:
updating = existing_counts_by_role_id[c.overcloud_role_id]
updating.num_nodes = c.num_nodes
session.add(updating)
continue
# New
if c.overcloud_role_id not in existing_count_role_ids:
existing_counts_by_role_id[c.overcloud_role_id] = c
c.overcloud_id = updated.id
existing.counts.append(c)
session.add(c)
continue
# Purge deleted counts
for c in existing.counts:
if c.overcloud_role_id in delete_role_ids:
existing.counts.remove(c)
session.delete(c)
def delete_overcloud_by_id(self, overcloud_id):
"""Deletes a overcloud from the database.
:param overcloud_id: database ID of the overcloud
:type overcloud_id: int
:raises: tuskar.common.exception.OvercloudNotFound if there
is no overcloud with the given ID
"""
overcloud = self.get_overcloud_by_id(overcloud_id)
session = get_session()
session.begin()
try:
session.delete(overcloud)
session.commit()
finally:
session.close()

View File

@ -1,20 +0,0 @@
#
# Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from migrate.versioning.shell import main
if __name__ == '__main__':
main(debug='False', repository='.')

View File

@ -1,20 +0,0 @@
[db_settings]
# Used to identify which repository this database is versioned under.
# You can use the name of your project.
repository_id=tuskar
# The name of the database table used to track the schema version.
# This name shouldn't already be used by your project.
# If this is changed once a database is under version control, you'll need to
# change the table name in each database too.
version_table=migrate_version
# When committing a change script, Migrate will attempt to generate the
# sql for all supported databases; normally, if one of them fails - probably
# because you don't have that database installed - it is ignored and the
# commit continues, perhaps ending successfully.
# Databases in this list MUST compile successfully during a commit, or the
# entire commit will fail. List the databases your application will actually
# be using to ensure your updates to that database work properly.
# This must be a list; example: ['postgres','sqlite']
required_dbs=[]

View File

@ -1,133 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from migrate.changeset import UniqueConstraint
from sqlalchemy import (Column, DateTime, ForeignKey, Integer,
MetaData, String, Table, Text)
from tuskar.db.sqlalchemy import models
from tuskar.openstack.common.gettextutils import _ # noqa
from tuskar.openstack.common import log as logging
LOG = logging.getLogger(__name__)
ENGINE = 'InnoDB'
CHARSET = 'utf8'
def upgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
overcloud_roles = Table(
'overcloud_roles',
meta,
Column('id', Integer, primary_key=True, nullable=False),
Column('name', String(length=models.LENGTH_NAME), unique=True),
Column('description', String(length=models.LENGTH_DESCRIPTION)),
Column('image_name', String(length=64)),
Column('flavor_id', String(length=36)),
Column('created_at', DateTime),
Column('updated_at', DateTime),
mysql_engine=ENGINE,
mysql_charset=CHARSET,
)
overcloud = Table(
'overclouds',
meta,
Column('id', Integer, primary_key=True, nullable=False),
Column('name', String(length=models.LENGTH_NAME), unique=True),
Column('description', String(length=models.LENGTH_DESCRIPTION)),
Column('stack_id', String(length=36)),
Column('created_at', DateTime),
Column('updated_at', DateTime),
mysql_engine=ENGINE,
mysql_charset=CHARSET,
)
overcloud_role_counts = Table(
'overcloud_role_counts',
meta,
Column('id', Integer, primary_key=True, nullable=False),
Column('overcloud_role_id',
Integer,
ForeignKey('overcloud_roles.id'),
nullable=False),
Column('overcloud_id',
Integer,
ForeignKey('overclouds.id'),
nullable=False),
Column('num_nodes', Integer, nullable=False),
Column('created_at', DateTime),
Column('updated_at', DateTime),
mysql_engine=ENGINE,
mysql_charset=CHARSET,
)
overcloud_attributes = Table(
'overcloud_attributes',
meta,
Column('id', Integer, primary_key=True, nullable=False),
Column('key', String(length=64), nullable=False),
Column('value', Text()),
Column('overcloud_id',
Integer,
ForeignKey('overclouds.id'),
nullable=False),
Column('created_at', DateTime),
Column('updated_at', DateTime),
mysql_engine=ENGINE,
mysql_charset=CHARSET,
)
tables = [overcloud_roles, overcloud, overcloud_role_counts,
overcloud_attributes]
for table in tables:
try:
LOG.info(repr(table))
table.create()
except Exception:
LOG.info(repr(table))
LOG.exception(_('Exception while creating table.'))
raise
indexes = [
]
# There eventually needs to be a uniqueness constraint for
# overcloud role counts across overcloud role,
# overcloud, and profile. I'm skipping it for now until we decide
# on a plan for the node profiles in Icehouse.
# jdob, Jan 16, 2014
uniques = [
UniqueConstraint('name', table=overcloud_roles,
name='uniq_overcloud_roles0name'),
UniqueConstraint('name', table=overcloud,
name='uniq_overcloud0name'),
UniqueConstraint('overcloud_id', 'key', table=overcloud_attributes,
name='uniq_overcloud_attributes0overcloud_name')
]
for index in indexes:
index.create(migrate_engine)
for index in uniques:
index.create(migrate_engine)
def downgrade(migrate_engine):
raise NotImplementedError('Downgrade is unsupported.')

View File

@ -1,54 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import Column, DateTime, Integer, MetaData, String, Table, Text
from tuskar.openstack.common.gettextutils import _ # noqa
from tuskar.openstack.common import log as logging
LOG = logging.getLogger(__name__)
ENGINE = 'InnoDB'
CHARSET = 'utf8'
def upgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
stored_file = Table(
'stored_file',
meta,
Column('uuid', String(length=36), primary_key=True, nullable=False),
Column('contents', Text(), nullable=False),
Column('object_type', String(length=20), nullable=False),
Column('name', String(length=64), nullable=True),
Column('version', Integer(), nullable=True),
Column('created_at', DateTime),
Column('updated_at', DateTime),
mysql_engine=ENGINE,
mysql_charset=CHARSET,
)
try:
LOG.info(repr(stored_file))
stored_file.create()
except Exception:
LOG.info(repr(stored_file))
LOG.exception(_('Exception while creating table.'))
raise
def downgrade(migrate_engine):
raise NotImplementedError('Downgrade is unsupported.')

View File

@ -1,24 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_db.sqlalchemy import utils
from sqlalchemy import Column, String
def upgrade(migrate_engine):
stored_file = utils.get_table(migrate_engine, 'stored_file')
relative_path = Column('relative_path', String(256), nullable=True)
stored_file.create_column(relative_path)
def downgrade(migrate_engine):
raise NotImplementedError('Downgrade is unsupported.')

View File

@ -1,24 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_db.sqlalchemy import utils
from sqlalchemy import Column, String
def upgrade(migrate_engine):
stored_file = utils.get_table(migrate_engine, 'stored_file')
registry_path = Column('registry_path', String(256), nullable=True)
stored_file.create_column(registry_path)
def downgrade(migrate_engine):
raise NotImplementedError('Downgrade is unsupported.')

View File

@ -1,43 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_db.sqlalchemy import utils
from sqlalchemy import MetaData
from tuskar.db.sqlalchemy.types import LongText
from tuskar.openstack.common.gettextutils import _ # noqa
from tuskar.openstack.common import log as logging
LOG = logging.getLogger(__name__)
ENGINE = 'InnoDB'
CHARSET = 'utf8'
def upgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
stored_file = utils.get_table(migrate_engine, 'stored_file')
try:
LOG.info(repr(stored_file))
col = stored_file._columns.get('contents')
col.alter(type=LongText)
except Exception:
LOG.info(repr(stored_file))
LOG.exception(_('Exception while altering table.'))
raise
def downgrade(migrate_engine):
raise NotImplementedError('Downgrade is unsupported.')

View File

@ -1,82 +0,0 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from migrate import exceptions as versioning_exceptions
from migrate.versioning import api as versioning_api
from migrate.versioning.repository import Repository
import sqlalchemy
from tuskar.common import exception
from tuskar.db import migration
from tuskar.db.sqlalchemy import api as sqla_api
from tuskar.openstack.common.gettextutils import _  # noqa
_REPOSITORY = None
get_engine = sqla_api.get_engine
def db_sync(version=None):
if version is not None:
try:
version = int(version)
except ValueError:
raise exception.TuskarException(_("version should be an integer"))
current_version = db_version()
repository = _find_migrate_repo()
if version is None or version > current_version:
return versioning_api.upgrade(get_engine(), repository, version)
else:
return versioning_api.downgrade(get_engine(), repository,
version)
def db_version():
repository = _find_migrate_repo()
try:
return versioning_api.db_version(get_engine(), repository)
except versioning_exceptions.DatabaseNotControlledError:
meta = sqlalchemy.MetaData()
engine = get_engine()
meta.reflect(bind=engine)
tables = meta.tables
if len(tables) == 0:
db_version_control(migration.INIT_VERSION)
return versioning_api.db_version(get_engine(), repository)
else:
# Some pre-Essex DB's may not be version controlled.
# Require them to upgrade using Essex first.
raise exception.TuskarException(
_("Upgrade DB using Essex release first."))
def db_version_control(version=None):
repository = _find_migrate_repo()
versioning_api.version_control(get_engine(), repository, version)
return version
def _find_migrate_repo():
"""Get the path for the migrate repository."""
global _REPOSITORY
path = os.path.join(os.path.abspath(os.path.dirname(__file__)),
'migrate_repo')
assert os.path.exists(path)
if _REPOSITORY is None:
_REPOSITORY = Repository(path)
return _REPOSITORY

View File

@ -1,246 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tuskar domain models for use with SQLAlchemy.
"""
from oslo_config import cfg
from oslo_db.sqlalchemy import models
from sqlalchemy import (Column, ForeignKey, Integer, String, Text)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from tuskar.db.sqlalchemy.types import LongText
sql_opts = [
cfg.StrOpt('mysql_engine',
default='InnoDB',
help='MySQL engine')
]
cfg.CONF.register_opts(sql_opts)
# Column lengths for common attributes
LENGTH_NAME = 64
LENGTH_DESCRIPTION = 256
LENGTH_HOST = 32
LENGTH_USERNAME = 64
LENGTH_PASSWORD = 64
TABLE_NODE_PROFILE = 'node_profiles'
TABLE_OVERCLOUD = 'overclouds'
TABLE_OVERCLOUD_ATTRIBUTES = 'overcloud_attributes'
TABLE_OVERCLOUD_ROLE = 'overcloud_roles'
TABLE_OVERCLOUD_ROLE_COUNT = 'overcloud_role_counts'
class TuskarBase(models.TimestampMixin, models.ModelBase):
"""Base class for all Tuskar domain models."""
metadata = None
def as_dict(self):
d = dict([(c.name, self[c.name]) for c in self.__table__.columns])
return d
Base = declarative_base(cls=TuskarBase)
class OvercloudRole(Base):
"""Overcloud role domain model.
Represents a type of entity that is deployed into the undercloud to create
the overcloud. For example, a controller or a compute node.
"""
__tablename__ = TABLE_OVERCLOUD_ROLE
# Unique identifier for the role
id = Column(Integer, primary_key=True)
# User-readable display name of the role
name = Column(String(length=LENGTH_NAME), nullable=False, unique=True)
# User-readable text describing what the role does
description = Column(String(length=LENGTH_DESCRIPTION))
# Name of the image, in Glance, that is used when creating an instance of
# this role.
# Note: This should be the image UUID, but due to Icehouse time constraints
# the user will create the image on their own with a pre-defined
# name and the image referenced through that.
# Note: In the future, we will likely support multiple images for a
# role, so this will likely change to its own table and a FK
# relationship. jdob, Jan 10, 2014
image_name = Column(String(length=64))
# UUID of the flavor of node this role should be deployed on.
# Example: f03266e8-5c99-471c-9eac-375772b45a43
# Note: In the future, we will likely support multiple flavors for
# a role, so this will likely change. jdob, Feb 5, 2014
flavor_id = Column(String(length=36))
def __eq__(self, other):
return self.name == other.name
class OvercloudRoleCount(Base):
"""Configuration for an overcloud role's deployment in an overcloud.
Maps an overcloud role definition to number of instances to be
deployed into an overcloud.
Note: In the future this will likely be enhanced to include the
flavor of node being deployed on.
"""
__tablename__ = TABLE_OVERCLOUD_ROLE_COUNT
# Unique identifier for the deployment configuration
id = Column(Integer, primary_key=True)
# Role being configured
overcloud_role_id = Column(
Integer,
ForeignKey('%s.id' % TABLE_OVERCLOUD_ROLE),
nullable=False
)
# Overcloud in which the role is being deployed
overcloud_id = Column(
Integer,
ForeignKey('%s.id' % TABLE_OVERCLOUD, ondelete='CASCADE'),
nullable=False
)
# Number of nodes of this configuration that should be deployed
num_nodes = Column(Integer, nullable=False)
# Reference to the full role (this is not the foreign key relationship,
# that's overcloud_role_id above, this is to eager load the role data).
overcloud_role = relationship(OvercloudRole.__name__)
def __eq__(self, other):
return (self.overcloud_role_id == other.overcloud_role_id
and self.overcloud_id == other.overcloud_id)
class OvercloudAttribute(Base):
"""Overcloud-level configuration attribute domain model.
Contains a single configuration parameter for an overcloud. These
attributes include configuration for the overcloud database,
message bus, and keystone instance.
"""
__tablename__ = TABLE_OVERCLOUD_ATTRIBUTES
# Unique identifier for the overcloud
id = Column(Integer, primary_key=True)
# Reference back to the overcloud being configured
overcloud_id = Column(Integer,
ForeignKey('%s.id' % TABLE_OVERCLOUD,
ondelete='CASCADE'),
nullable=False)
# Identifier and value of the configuration attribute
key = Column(String(length=64), nullable=False)
value = Column(Text())
def __eq__(self, other):
return (self.overcloud_id == other.overcloud_id
and self.key == other.key)
class Overcloud(Base):
"""Overcloud domain model.
Represents the configuration of a cloud deployed into the undercloud by
Tuskar.
"""
__tablename__ = TABLE_OVERCLOUD
# Unique identifier for the overcloud
id = Column(Integer, primary_key=True)
# UUID of the stack, in Heat, that was created from this configuration
stack_id = Column(String(length=36))
# User-readable name of the overcloud
name = Column(String(length=LENGTH_NAME), nullable=False, unique=True)
# User-readable text describing the overcloud
description = Column(String(length=LENGTH_DESCRIPTION))
# List of configuration attributes for the overcloud
attributes = relationship(OvercloudAttribute.__name__,
cascade='all,delete')
# List of counts of overcloud roles to deploy
counts = relationship(OvercloudRoleCount.__name__,
cascade='all,delete')
def __eq__(self, other):
return self.name == other.name
def as_dict(self):
d = dict([(c.name, self[c.name]) for c in self.__table__.columns])
# Foreign keys aren't picked up by the base as_dict, so add them in
# here
attribute_dicts = [a.as_dict() for a in self.attributes]
d['attributes'] = attribute_dicts
count_dicts = [c.as_dict() for c in self.counts]
d['counts'] = count_dicts
return d
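# Shape sketch: as_dict() above folds the joined rows in, producing roughly
# (timestamps elided, values illustrative):
#
#   {'id': 1, 'stack_id': None, 'name': 'overcloud', 'description': None,
#    'attributes': [{'id': 1, 'overcloud_id': 1,
#                    'key': 'ntp_server', 'value': '10.0.0.1', ...}],
#    'counts': [{'id': 1, 'overcloud_id': 1,
#                'overcloud_role_id': 2, 'num_nodes': 3, ...}]}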
class StoredFile(Base):
"""Tuskar Stored File
The StoredFile model is used by the tuskar.storage package and more
specifically for the SQLAlchemy storage driver. Simply put, it is a
collection of text files with some metadata.
"""
__tablename__ = "stored_file"
#: UUID's are used as the unique identifier.
uuid = Column(String(length=36), primary_key=True)
#: contents contains the full file contents as a string.
contents = Column(LongText(), nullable=False)
#: Object type flags the type of file that this is, i.e. template or
#: environment file.
object_type = Column(String(length=20), nullable=False)
#: Names provide a short human readable description of a file.
name = Column(String(length=64), nullable=True)
#: Relative path to which the file belongs
relative_path = Column(String(length=256), nullable=True)
#: Resource registry path to which the file belongs
registry_path = Column(String(length=256), nullable=True)
#: Versions are an automatically incremented count.
version = Column(Integer(), nullable=True)

View File

@ -1,24 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy.dialects import mysql
from sqlalchemy import types
class LongText(types.TypeDecorator):
impl = types.Text
def load_dialect_impl(self, dialect):
if dialect.name == 'mysql':
return dialect.type_descriptor(mysql.LONGTEXT())
else:
return self.impl

View File

@ -1,140 +0,0 @@
# Copyright 2013 Red Hat # All Rights Reserved.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Most of the following was copied shamelessly from nova @
# https://github.com/openstack/nova/blob/master/nova/image/glance.py
# It's the way nova talks to glance, though obviously
# s/python-glanceclient/python-heatclient
"""A client library for accessing the Heat API using python-heatclient."""
from os import environ as env
from oslo_config import cfg
from tuskar.openstack.common import log as logging
heat_opts = [
cfg.StrOpt('stack_name',
default='overcloud',
help='Name of the overcloud Heat stack'
),
cfg.StrOpt('service_type',
default='orchestration',
help='Heat API service type registered in keystone'
),
cfg.StrOpt('endpoint_type',
default='publicURL',
help='Heat API service endpoint type in keystone'
)
]
heat_keystone_opts = [
# TODO(rpodolyaka): https://bugs.launchpad.net/tuskar/+bug/1236703
cfg.StrOpt('username',
default=env.get('OS_USERNAME') or 'admin',
help='The name of a user the overcloud is deployed on behalf of'
),
cfg.StrOpt('password',
help='The password of a user the overcloud is deployed on behalf of'
),
cfg.StrOpt('tenant_name',
default=env.get('OS_TENANT_NAME') or 'admin',
help='The tenant name the overcloud is deployed on behalf of'
),
cfg.StrOpt('auth_url',
default=env.get('OS_AUTH_URL') or 'http://localhost:35357/v2.0',
help='Keystone authentication URL'
),
cfg.BoolOpt('insecure',
default=True,
help='Set to False when Heat API uses HTTPS'
)
]
CONF = cfg.CONF
CONF.register_opts(heat_opts, group='heat')
CONF.register_opts(heat_keystone_opts, group='heat_keystone')
LOG = logging.getLogger(__name__)
from heatclient.exc import HTTPNotFound as HeatStackNotFound
from heatclient.v1.client import Client as heatclient
from keystoneclient.v2_0 import client as ksclient
class HeatClient(object):
"""Heat CloudFormations API client to use in Tuskar."""
def __init__(self):
try:
keystone = ksclient.Client(**CONF.heat_keystone)
endpoint = keystone.service_catalog.url_for(
service_type=CONF.heat['service_type'],
endpoint_type=CONF.heat['endpoint_type']
)
self.connection = heatclient(
endpoint=endpoint,
token=keystone.auth_token,
username=CONF.heat_keystone['username'],
password=CONF.heat_keystone['password'])
except Exception:
LOG.exception("An error occurred initialising the HeatClient")
self.connection = None
def validate_template(self, template_body):
"""Validate given Heat template."""
return self.connection.stacks.validate(
template=template_body)
def get_stack(self, name=None):
"""Get overcloud Heat template."""
if name is None:
name = CONF.heat['stack_name']
if self.connection:
return self.connection.stacks.get(name)
def get_template(self):
"""Get JSON representation of the Heat overcloud template."""
return self.connection.stacks.template(
stack_id=CONF.heat['stack_name']
)
def update_stack(self, template_body, params):
"""Update the Heat overcloud stack."""
return self.connection.stacks.update(stack_id=CONF.heat['stack_name'],
template=template_body,
parameters=params)
def delete_stack(self):
"""Delete the Heat overcloud stack."""
return self.connection.stacks.delete(stack_id=CONF.heat['stack_name'])
def create_stack(self, template_body, params):
"""Update the Heat overcloud stack."""
return self.connection.stacks.create(
stack_name=CONF.heat['stack_name'],
template=template_body,
parameters=params)
def exists_stack(self, name=None):
if name is None:
name = CONF.heat['stack_name']
try:
self.get_stack(name)
return True
# return false if 404
except HeatStackNotFound:
return False
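# Usage sketch (template_body is a Heat template string and params a dict of
# stack parameters supplied by the caller; credentials come from the
# [heat_keystone] options registered above):
#
#   client = HeatClient()
#   if client.exists_stack():
#       client.update_stack(template_body, params)
#   else:
#       client.create_stack(template_body, params)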

View File

@ -1,109 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Utilities for using merge.py to generate overcloud.yaml to hand over to Heat.
Translates Tuskar resources into the overcloud heat template, using merge.py
from upstream tripleo-heat-templates.
"""
import os
from oslo_config import cfg
from tripleo_heat_merge import merge
# TODO(lsmola) For now static definition of roles for Icehouse
# we will need to load these associations from somewhere.
OVERCLOUD_CONTROL_ROLE = 'overcloud-control'
OVERCLOUD_COMPUTE_ROLE = 'overcloud-compute'
OVERCLOUD_VOLUME_ROLE = 'overcloud-cinder-volume'
OVERCLOUD_OBJECT_STORAGE_ROLE = 'overcloud-swift-storage'
ROLES = {}
ROLES[OVERCLOUD_CONTROL_ROLE] = {
'template_param': 'Control',
'flavor_param': 'OvercloudControlFlavor', }
ROLES[OVERCLOUD_COMPUTE_ROLE] = {
'template_param': 'NovaCompute',
'flavor_param': 'OvercloudComputeFlavor', }
ROLES[OVERCLOUD_VOLUME_ROLE] = {
'template_param': 'BlockStorage',
'flavor_param': 'OvercloudBlockStorageFlavor', }
ROLES[OVERCLOUD_OBJECT_STORAGE_ROLE] = {
'template_param': 'SwiftStorage',
'flavor_param': 'OvercloudSwiftStorageFlavor', }
def generate_scaling_params(overcloud_roles):
"""Given a dictionary containing a key value mapping of Overcloud Role name
to a count of the nodes return the scaling parameters to be used by
tripleo_heat_merge
:param overcloud_roles: Dictionary with role names and a count of the nodes
:type overcloud_roles: dict
:return: scaling parameters dict
:rtype: dict
"""
# Default values, merge.py needs also the 0 counts.
scaling_defaults = ['NovaCompute=0', 'SwiftStorage=0', 'BlockStorage=0']
scaling = merge.parse_scaling(scaling_defaults)
for overcloud_role, count in overcloud_roles.items():
overcloud_role = overcloud_role.lower()
if overcloud_role in ROLES:
scale_str = "%s=%s" % (
ROLES[overcloud_role]['template_param'], count)
scaling.update(merge.parse_scaling([scale_str]))
return scaling
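# Illustrative call (role names and counts are hypothetical):
#
#   generate_scaling_params({'overcloud-control': 1, 'overcloud-compute': 2})
#
# starts from the zeroed defaults above and overlays the requested counts,
# i.e. the result is what merge.parse_scaling() yields for Control=1,
# NovaCompute=2, SwiftStorage=0 and BlockStorage=0; merge_templates() below
# passes this dict straight to merge.merge() as its scaling argument.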
def _join_template_path(file_name):
return os.path.abspath(
os.path.join(os.path.dirname(cfg.CONF.tht_local_dir), file_name)
)
def merge_templates(overcloud_roles):
"""Merge the Overcloud Roles with overcloud.yaml using merge from
tripleo_heat_merge
See tripleo-heat-templates for further details.
"""
# TODO(dmatthews): Add exception handling to catch merge errors
scale_params = generate_scaling_params(overcloud_roles)
overcloud_source = _join_template_path("overcloud-source.yaml")
block_storage = _join_template_path("block-storage.yaml")
swift_source = _join_template_path("swift-source.yaml")
swift_storage_source = _join_template_path("swift-storage-source.yaml")
ssl_src_path = _join_template_path("ssl-source.yaml")
swift_deploy = _join_template_path("swift-deploy.yaml")
nova_compute_config = _join_template_path("nova-compute-config.yaml")
# Adding all templates like in tripleo-heat-templates Makefile.
# They will be used by merge.py according to scale_params. So the
# decision what template to pick will not be here.
merged_paths = [overcloud_source, block_storage, swift_source,
swift_storage_source, ssl_src_path, swift_deploy,
nova_compute_config]
template = merge.merge(merged_paths, None, None,
included_template_dir=cfg.CONF.tht_local_dir,
scaling=scale_params)
return template

Some files were not shown because too many files have changed in this diff.