Initial commit
Change-Id: I0ebc7a689a37b077606545d50a5d139f2059ffdb
parent 5b9c316aa5
commit 1491d54a1d

19 CONTRIBUTING.rst Normal file
@@ -0,0 +1,19 @@

If you would like to contribute to the development of OpenStack,
you must follow the steps in this page:

   http://docs.openstack.org/infra/manual/developers.html

Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:

   http://docs.openstack.org/infra/manual/developers.html#development-workflow

Pull requests submitted through GitHub will be ignored.

Bugs should be filed on Launchpad, not GitHub:

   https://bugs.launchpad.net/meteos

11 HACKING.rst Normal file
@@ -0,0 +1,11 @@

Meteos Style Commandments
=========================

- Step 1: Read the OpenStack Style Commandments
  http://docs.openstack.org/developer/hacking/
- Step 2: Read on


Meteos Specific Commandments
----------------------------

176 LICENSE Normal file
@@ -0,0 +1,176 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

27 README.rst Normal file
@@ -0,0 +1,27 @@

======
Meteos
======

You have come across an OpenStack Machine Learning service. It has
identified itself as "Meteos." It was abstracted from the Manila
project.

* Wiki: https://wiki.openstack.org/Meteos
* Developer docs: http://docs.openstack.org/developer/meteos

Getting Started
---------------

If you'd like to run from the master branch, you can clone the git repo:

    git clone https://github.com/openstack/meteos.git

For developer information please see
`HACKING.rst <https://github.com/openstack/meteos/blob/master/HACKING.rst>`_

You can raise bugs here: http://bugs.launchpad.net/meteos

Python client
-------------

https://github.com/openstack/python-meteosclient.git

21 devstack/README.rst Normal file
@@ -0,0 +1,21 @@

====================
Enabling in Devstack
====================

1. Download DevStack

2. Add this repo as an external repository in ``local.conf``

.. sourcecode:: bash

    [[local|localrc]]
    enable_plugin meteos git://git.openstack.org/openstack/meteos

Optionally, a git refspec may be provided as follows:

.. sourcecode:: bash

    [[local|localrc]]
    enable_plugin meteos git://git.openstack.org/openstack/meteos <refspec>

3. Run ``stack.sh``

50 devstack/exercise.sh Normal file
@@ -0,0 +1,50 @@

#!/usr/bin/env bash

# Sanity check that Meteos started if enabled

echo "*********************************************************************"
echo "Begin DevStack Exercise: $0"
echo "*********************************************************************"

# This script exits on an error so that errors don't compound and you see
# only the first error that occurred.
set -o errexit

# Print the commands being run so that we can see the command that triggers
# an error. It is also useful for following along as the install occurs.
set -o xtrace


# Settings
# ========

# Keep track of the current directory
EXERCISE_DIR=$(cd $(dirname "$0") && pwd)
TOP_DIR=$(cd $EXERCISE_DIR/..; pwd)

# Import common functions
source $TOP_DIR/functions

# Import configuration
source $TOP_DIR/openrc

# Import exercise configuration
source $TOP_DIR/exerciserc

is_service_enabled meteos || exit 55

if is_ssl_enabled_service "meteos" ||\
    is_ssl_enabled_service "meteos-api" ||\
    is_service_enabled tls-proxy; then
    METEOS_SERVICE_PROTOCOL="https"
fi

METEOS_SERVICE_PROTOCOL=${METEOS_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}

$CURL_GET $METEOS_SERVICE_PROTOCOL://$SERVICE_HOST:8989/ 2>/dev/null \
    | grep -q 'Auth' || die $LINENO "Meteos API isn't functioning!"

set +o xtrace
echo "*********************************************************************"
echo "SUCCESS: End DevStack Exercise: $0"
echo "*********************************************************************"

187 devstack/plugin.sh Normal file
@@ -0,0 +1,187 @@

#!/bin/bash
#
# lib/meteos

# Dependencies:
# ``functions`` file
# ``DEST``, ``DATA_DIR``, ``STACK_USER`` must be defined

# ``stack.sh`` calls the entry points in this order:
#
# install_meteos
# install_python_meteosclient
# configure_meteos
# start_meteos
# stop_meteos
# cleanup_meteos

# Save trace setting
XTRACE=$(set +o | grep xtrace)
set +o xtrace


# Functions
# ---------

# create_meteos_accounts() - Set up common required meteos accounts
#
# Tenant      User      Roles
# ------------------------------
# service     meteos    admin
function create_meteos_accounts {

    create_service_user "meteos"

    get_or_create_service "meteos" "machine-learning" "Meteos Machine Learning"
    get_or_create_endpoint "machine-learning" \
        "$REGION_NAME" \
        "$METEOS_SERVICE_PROTOCOL://$METEOS_SERVICE_HOST:$METEOS_SERVICE_PORT/v1/\$(tenant_id)s" \
        "$METEOS_SERVICE_PROTOCOL://$METEOS_SERVICE_HOST:$METEOS_SERVICE_PORT/v1/\$(tenant_id)s" \
        "$METEOS_SERVICE_PROTOCOL://$METEOS_SERVICE_HOST:$METEOS_SERVICE_PORT/v1/\$(tenant_id)s"
}

# cleanup_meteos() - Remove residual data files, anything left over from
# previous runs that would need to be cleaned up.
function cleanup_meteos {

    # Cleanup auth cache dir
    sudo rm -rf $METEOS_AUTH_CACHE_DIR
}

# configure_meteos() - Set config files, create data dirs, etc
function configure_meteos {
    sudo install -d -o $STACK_USER $METEOS_CONF_DIR

    if [[ -f $METEOS_DIR/etc/meteos/policy.json ]]; then
        cp -p $METEOS_DIR/etc/meteos/policy.json $METEOS_CONF_DIR
    fi

    cp -p $METEOS_DIR/etc/meteos/api-paste.ini $METEOS_CONF_DIR

    # Create auth cache dir
    sudo install -d -o $STACK_USER -m 700 $METEOS_AUTH_CACHE_DIR
    rm -rf $METEOS_AUTH_CACHE_DIR/*

    configure_auth_token_middleware \
        $METEOS_CONF_FILE meteos $METEOS_AUTH_CACHE_DIR

    # Set admin user parameters needed for trusts creation
    iniset $METEOS_CONF_FILE \
        keystone_authtoken admin_tenant_name $SERVICE_TENANT_NAME
    iniset $METEOS_CONF_FILE keystone_authtoken admin_user meteos
    iniset $METEOS_CONF_FILE \
        keystone_authtoken admin_password $SERVICE_PASSWORD

    iniset_rpc_backend meteos $METEOS_CONF_FILE DEFAULT

    # Set configuration to send notifications
    iniset $METEOS_CONF_FILE DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL

    iniset $METEOS_CONF_FILE DEFAULT plugins $METEOS_ENABLED_PLUGINS

    iniset $METEOS_CONF_FILE \
        database connection `database_connection_url meteos`

    # Format logging
    if [ "$LOG_COLOR" == "True" ] && [ "$SYSLOG" == "False" ]; then
        setup_colorized_logging $METEOS_CONF_FILE DEFAULT
    fi

    recreate_database meteos
    $METEOS_BIN_DIR/meteos-manage \
        --config-file $METEOS_CONF_FILE db sync
}

# install_meteos() - Collect source and prepare
function install_meteos {
    setup_develop $METEOS_DIR
}

# install_python_meteosclient() - Collect source and prepare
function install_python_meteosclient {
    git_clone $METEOSCLIENT_REPO $METEOSCLIENT_DIR $METEOSCLIENT_BRANCH
    setup_develop $METEOSCLIENT_DIR
}

# start_meteos() - Start running processes, including screen
function start_meteos {
    local service_port=$METEOS_SERVICE_PORT
    local service_protocol=$METEOS_SERVICE_PROTOCOL

    run_process meteos-all "$METEOS_BIN_DIR/meteos-all \
        --config-file $METEOS_CONF_FILE"
    run_process meteos-api "$METEOS_BIN_DIR/meteos-api \
        --config-file $METEOS_CONF_FILE"
    run_process meteos-eng "$METEOS_BIN_DIR/meteos-engine \
        --config-file $METEOS_CONF_FILE"

    echo "Waiting for Meteos to start..."
    if ! wait_for_service $SERVICE_TIMEOUT \
        $service_protocol://$METEOS_SERVICE_HOST:$service_port; then
        die $LINENO "Meteos did not start"
    fi
}

# configure_tempest_for_meteos() - Tune Tempest configuration for Meteos
function configure_tempest_for_meteos {
    if is_service_enabled tempest; then
        iniset $TEMPEST_CONFIG service_available meteos True
        iniset $TEMPEST_CONFIG data-processing-feature-enabled plugins $METEOS_ENABLED_PLUGINS
    fi
}

# stop_meteos() - Stop running processes
function stop_meteos {
    # Kill the Meteos screen windows
    stop_process meteos-all
    stop_process meteos-api
    stop_process meteos-eng
}

# is_meteos_enabled. This allows ``is_service_enabled meteos`` to work
# correctly throughout devstack.
function is_meteos_enabled {
    if is_service_enabled meteos-api || \
        is_service_enabled meteos-eng || \
        is_service_enabled meteos-all; then
        return 0
    else
        return 1
    fi
}

# Dispatcher for Meteos plugin
if is_service_enabled meteos; then
    if [[ "$1" == "stack" && "$2" == "install" ]]; then
        echo_summary "Installing meteos"
        install_meteos
        install_python_meteosclient
        cleanup_meteos
    elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
        echo_summary "Configuring meteos"
        configure_meteos
        create_meteos_accounts
    elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
        echo_summary "Initializing meteos"
        start_meteos
    elif [[ "$1" == "stack" && "$2" == "test-config" ]]; then
        echo_summary "Configuring tempest"
        configure_tempest_for_meteos
    fi

    if [[ "$1" == "unstack" ]]; then
        stop_meteos
    fi

    if [[ "$1" == "clean" ]]; then
        cleanup_meteos
    fi
fi


# Restore xtrace
$XTRACE

# Local variables:
# mode: shell-script
# End:

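DevStack drives the dispatcher above through its plugin interface: ``stack.sh`` calls the script with ``stack install``, ``stack post-config``, ``stack extra`` and ``stack test-config`` in turn, and ``unstack.sh``/``clean.sh`` trigger the ``unstack`` and ``clean`` branches, so each meteos function runs at the matching phase of a DevStack run.
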
37 devstack/settings Normal file
@@ -0,0 +1,37 @@

#!/bin/bash

# Settings needed for the Meteos plugin
# -------------------------------------

# Set up default directories
METEOSCLIENT_DIR=$DEST/python-meteosclient
METEOS_DIR=$DEST/meteos

METEOSCLIENT_REPO=${METEOSCLIENT_REPO:-\
${GIT_BASE}/openstack/python-meteosclient.git}
METEOSCLIENT_BRANCH=${METEOSCLIENT_BRANCH:-master}

METEOS_CONF_DIR=${METEOS_CONF_DIR:-/etc/meteos}
METEOS_CONF_FILE=${METEOS_CONF_DIR}/meteos.conf

# TODO(slukjanov): Should we append meteos to SSL_ENABLED_SERVICES?

if is_ssl_enabled_service "meteos" || is_service_enabled tls-proxy; then
    METEOS_SERVICE_PROTOCOL="https"
fi
METEOS_SERVICE_HOST=${METEOS_SERVICE_HOST:-$SERVICE_HOST}
METEOS_SERVICE_PORT=${METEOS_SERVICE_PORT:-8989}
METEOS_SERVICE_PROTOCOL=${METEOS_SERVICE_PROTOCOL:-$SERVICE_PROTOCOL}
METEOS_ENDPOINT_TYPE=${METEOS_ENDPOINT_TYPE:-adminURL}

METEOS_AUTH_CACHE_DIR=${METEOS_AUTH_CACHE_DIR:-/var/cache/meteos}

METEOS_BIN_DIR=$(get_python_exec_prefix)

METEOS_ENABLE_DISTRIBUTED_PERIODICS=${METEOS_ENABLE_DISTRIBUTED_PERIODICS:-\
True}

# Tell Tempest this project is present
TEMPEST_SERVICES+=,meteos

enable_service meteos-api meteos-eng

4 etc/meteos/README.meteos.conf Normal file
@@ -0,0 +1,4 @@

To generate the sample meteos.conf file, run the following
command from the top level of the meteos directory:

    tox -egenconfig

49 etc/meteos/api-paste.ini Normal file
@@ -0,0 +1,49 @@

#############
# OpenStack #
#############

[composite:osapi_learning]
use = call:meteos.api:root_app_factory
/: apiversions
/v1: openstack_learning_api

[composite:openstack_learning_api]
use = call:meteos.api.middleware.auth:pipeline_factory
noauth = cors faultwrap http_proxy_to_wsgi sizelimit noauth api
keystone = cors faultwrap http_proxy_to_wsgi sizelimit authtoken keystonecontext api
keystone_nolimit = cors faultwrap http_proxy_to_wsgi sizelimit authtoken keystonecontext api

[filter:faultwrap]
paste.filter_factory = meteos.api.middleware.fault:FaultWrapper.factory

[filter:noauth]
paste.filter_factory = meteos.api.middleware.auth:NoAuthMiddleware.factory

[filter:sizelimit]
paste.filter_factory = oslo_middleware.sizelimit:RequestBodySizeLimiter.factory

[filter:http_proxy_to_wsgi]
paste.filter_factory = oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory

[app:api]
paste.app_factory = meteos.api.v1.router:APIRouter.factory

[pipeline:apiversions]
pipeline = cors faultwrap http_proxy_to_wsgi oslearningversionapp

[app:oslearningversionapp]
paste.app_factory = meteos.api.versions:VersionsRouter.factory

##########
# Engine #
##########

[filter:keystonecontext]
paste.filter_factory = meteos.api.middleware.auth:MeteosKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory

[filter:cors]
paste.filter_factory = oslo_middleware.cors:filter_factory
oslo_config_project = meteos

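The ``pipeline_factory`` referenced by ``use = call:meteos.api.middleware.auth:pipeline_factory`` is defined in meteos/api/middleware/auth.py later in this diff; at load time it picks the ``noauth``, ``keystone``, or ``keystone_nolimit`` pipeline listed above, based on the configured ``auth_strategy`` and ``api_rate_limit`` options.
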
49 etc/meteos/api-paste.ini.orig Normal file
@@ -0,0 +1,49 @@

(contents identical to etc/meteos/api-paste.ini above)

73 etc/meteos/logging_sample.conf Normal file
@@ -0,0 +1,73 @@

[loggers]
keys = root, meteos

[handlers]
keys = stderr, stdout, watchedfile, syslog, null

[formatters]
keys = default

[logger_root]
level = WARNING
handlers = null

[logger_meteos]
level = INFO
handlers = stderr
qualname = meteos

[logger_amqplib]
level = WARNING
handlers = stderr
qualname = amqplib

[logger_sqlalchemy]
level = WARNING
handlers = stderr
qualname = sqlalchemy
# "level = INFO" logs SQL queries.
# "level = DEBUG" logs SQL queries and results.
# "level = WARNING" logs neither. (Recommended for production systems.)

[logger_boto]
level = WARNING
handlers = stderr
qualname = boto

[logger_suds]
level = INFO
handlers = stderr
qualname = suds

[logger_eventletwsgi]
level = WARNING
handlers = stderr
qualname = eventlet.wsgi.server

[handler_stderr]
class = StreamHandler
args = (sys.stderr,)
formatter = default

[handler_stdout]
class = StreamHandler
args = (sys.stdout,)
formatter = default

[handler_watchedfile]
class = handlers.WatchedFileHandler
args = ('meteos.log',)
formatter = default

[handler_syslog]
class = handlers.SysLogHandler
args = ('/dev/log', handlers.SysLogHandler.LOG_USER)
formatter = default

[handler_null]
class = meteos.common.openstack.NullHandler
formatter = default
args = ()

[formatter_default]
format = %(message)s

39 etc/meteos/meteos.conf Normal file
@@ -0,0 +1,39 @@

[DEFAULT]

debug = True
verbose = True
log_dir = /var/log/meteos
rpc_backend = rabbit

host = 10.0.0.6
hostname = meteos-vm
osapi_learning_listen_port = 8989
osapi_learning_workers = 4
meteos-engine_workers = 4
api_paste_config = api-paste.ini

[sahara]
auth_url = http://192.168.0.4:5000/v2.0

[database]
connection = sqlite:///etc/meteos/meteos.sqlite3

[keystone_authtoken]

auth_uri = http://192.168.0.4:5000/v2.0
identity_uri = http://192.168.0.4:35357
admin_tenant_name = services
admin_password = 9396d9e9e57545d8
admin_user = sahara

[oslo_messaging_rabbit]

rabbit_host = 192.168.0.4
rabbit_port = 5672
rabbit_use_ssl = False
rabbit_userid = guest
rabbit_password = guest

[engine]

worker = 4

13 etc/meteos/policy.json Normal file
@@ -0,0 +1,13 @@

{
    "context_is_admin": "role:admin",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "default": "rule:admin_or_owner",

    "admin_api": "is_admin:True",

    "learning:create": "",
    "learning:delete": "rule:default",
    "learning:get": "rule:default",
    "learning:get_all": "rule:default",
    "learning:update": "rule:default"
}

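Of the rules above, ``learning:create`` is an empty string, which oslo.policy evaluates as always allowed; the other ``learning:*`` actions chain through ``rule:default`` to ``admin_or_owner``, so they pass for admins or for the project that owns the resource.
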
27 etc/meteos/rootwrap.conf Normal file
@@ -0,0 +1,27 @@

# Configuration for meteos-rootwrap
# This file should be owned by (and only writeable by) the root user

[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root!
filters_path=/etc/meteos/rootwrap.d,/usr/learning/meteos/rootwrap

# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',').
# If not specified, defaults to the system PATH environment variable.
# These directories MUST all be only writeable by root!
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/sbin,/usr/local/bin,/usr/lpp/mmfs/bin

# Enable logging to syslog
# Default value is False
use_syslog=False

# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, user0, user1...
# Default value is 'syslog'
syslog_log_facility=syslog

# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR

20 etc/meteos/rootwrap.d/learning.filters Normal file
@@ -0,0 +1,20 @@

# meteos-rootwrap command filters for share nodes
# This file should be owned by (and only writeable by) the root user

[Filters]
# meteos/utils.py : 'chown', '%s', '%s'
chown: CommandFilter, chown, root
# meteos/utils.py : 'cat', '%s'
cat: CommandFilter, cat, root

# meteos/share/drivers/lvm.py: 'rmdir', '%s'
rmdir: CommandFilter, rmdir, root

# meteos/share/drivers/helpers.py: 'cp', '%s', '%s'
cp: CommandFilter, cp, root

# meteos/share/drivers/helpers.py: 'service', '%s', '%s'
service: CommandFilter, service, root

# meteos/share/drivers/glusterfs.py: 'rm', '-rf', '%s'
rm: CommandFilter, rm, root

9 etc/oslo-config-generator/meteos.conf Normal file
@@ -0,0 +1,9 @@

[DEFAULT]
output_file = etc/meteos/meteos.conf.sample
namespace = meteos
namespace = oslo.messaging
namespace = oslo.middleware.cors
namespace = oslo.middleware.http_proxy_to_wsgi
namespace = oslo.db
namespace = oslo.db.concurrency
namespace = keystonemiddleware.auth_token

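This generator config backs the ``tox -egenconfig`` command mentioned in etc/meteos/README.meteos.conf above: the listed option namespaces are merged and the sample is written to etc/meteos/meteos.conf.sample. Assuming oslo.config is installed, the same file should also be producible directly with ``oslo-config-generator --config-file etc/oslo-config-generator/meteos.conf``.
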
0 meteos/__init__.py Normal file

21 meteos/api/__init__.py Normal file
@@ -0,0 +1,21 @@

# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import paste.urlmap


def root_app_factory(loader, global_conf, **local_conf):
    return paste.urlmap.urlmap_factory(loader, global_conf, **local_conf)

38 meteos/api/auth.py Normal file
@@ -0,0 +1,38 @@

# Copyright (c) 2013 OpenStack, LLC.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_log import log

from meteos.api.middleware import auth
from meteos.i18n import _LW

LOG = log.getLogger(__name__)


class MeteosKeystoneContext(auth.MeteosKeystoneContext):
    def __init__(self, application):
        LOG.warning(_LW('meteos.api.auth:MeteosKeystoneContext is deprecated. '
                        'Please use '
                        'meteos.api.middleware.auth:MeteosKeystoneContext '
                        'instead.'))
        super(MeteosKeystoneContext, self).__init__(application)


def pipeline_factory(loader, global_conf, **local_conf):
    LOG.warning(_LW('meteos.api.auth:pipeline_factory is deprecated. '
                    'Please use meteos.api.middleware.auth:pipeline_factory '
                    'instead.'))
    # Return the assembled app; without the return this factory would
    # hand paste a None app.
    return auth.pipeline_factory(loader, global_conf, **local_conf)

318 meteos/api/common.py Normal file
@@ -0,0 +1,318 @@

# Copyright 2010 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os
import re

from oslo_config import cfg
from oslo_log import log
from six.moves.urllib import parse
import webob

from meteos.api.openstack import api_version_request as api_version
from meteos.api.openstack import versioned_method
from meteos.i18n import _

api_common_opts = [
    cfg.IntOpt(
        'osapi_max_limit',
        default=1000,
        help='The maximum number of items returned in a single response from '
             'a collection resource.'),
    cfg.StrOpt(
        'osapi_learning_base_URL',
        help='Base URL to be presented to users in links to the Learning API'),
]

CONF = cfg.CONF
CONF.register_opts(api_common_opts)
LOG = log.getLogger(__name__)


# Regex that matches alphanumeric characters, periods, hyphens,
# colons and underscores:
# ^ assert position at start of the string
# [\w\.\-\:\_] match expression
# $ assert position at end of the string
VALID_KEY_NAME_REGEX = re.compile(r"^[\w\.\-\:\_]+$", re.UNICODE)


def validate_key_names(key_names_list):
    """Validate each item of the list to match key name regex."""
    for key_name in key_names_list:
        if not VALID_KEY_NAME_REGEX.match(key_name):
            return False
    return True


def get_pagination_params(request):
    """Return marker, limit tuple from request.

    :param request: `wsgi.Request` possibly containing 'marker' and 'limit'
                    GET variables. 'marker' is the id of the last element
                    the client has seen, and 'limit' is the maximum number
                    of items to return. If 'limit' is not specified, 0, or
                    > max_limit, we default to max_limit. Negative values
                    for either marker or limit will cause
                    exc.HTTPBadRequest() exceptions to be raised.

    """
    params = {}
    if 'limit' in request.GET:
        params['limit'] = _get_limit_param(request)
    if 'marker' in request.GET:
        params['marker'] = _get_marker_param(request)
    return params


def _get_limit_param(request):
    """Extract integer limit from request or fail."""
    try:
        limit = int(request.GET['limit'])
    except ValueError:
        msg = _('limit param must be an integer')
        raise webob.exc.HTTPBadRequest(explanation=msg)
    if limit < 0:
        msg = _('limit param must be positive')
        raise webob.exc.HTTPBadRequest(explanation=msg)
    return limit


def _get_marker_param(request):
    """Extract marker ID from request or fail."""
    return request.GET['marker']


def limited(items, request, max_limit=CONF.osapi_max_limit):
    """Return a slice of items according to requested offset and limit.

    :param items: A sliceable entity
    :param request: ``wsgi.Request`` possibly containing 'offset' and 'limit'
                    GET variables. 'offset' is where to start in the list,
                    and 'limit' is the maximum number of items to return. If
                    'limit' is not specified, 0, or > max_limit, we default
                    to max_limit. Negative values for either offset or limit
                    will cause exc.HTTPBadRequest() exceptions to be raised.
    :kwarg max_limit: The maximum number of items to return from 'items'
    """
    try:
        offset = int(request.GET.get('offset', 0))
    except ValueError:
        msg = _('offset param must be an integer')
        raise webob.exc.HTTPBadRequest(explanation=msg)

    try:
        limit = int(request.GET.get('limit', max_limit))
    except ValueError:
        msg = _('limit param must be an integer')
        raise webob.exc.HTTPBadRequest(explanation=msg)

    if limit < 0:
        msg = _('limit param must be positive')
        raise webob.exc.HTTPBadRequest(explanation=msg)

    if offset < 0:
        msg = _('offset param must be positive')
        raise webob.exc.HTTPBadRequest(explanation=msg)

    limit = min(max_limit, limit or max_limit)
    range_end = offset + limit
    return items[offset:range_end]


def limited_by_marker(items, request, max_limit=CONF.osapi_max_limit):
    """Return a slice of items according to the requested marker and limit."""
    params = get_pagination_params(request)

    limit = params.get('limit', max_limit)
    marker = params.get('marker')

    limit = min(max_limit, limit)
    start_index = 0
    if marker:
        start_index = -1
        for i, item in enumerate(items):
            if 'flavorid' in item:
                if item['flavorid'] == marker:
                    start_index = i + 1
                    break
            elif item['id'] == marker or item.get('uuid') == marker:
                start_index = i + 1
                break
        if start_index < 0:
            msg = _('marker [%s] not found') % marker
            raise webob.exc.HTTPBadRequest(explanation=msg)
    range_end = start_index + limit
    return items[start_index:range_end]


def remove_version_from_href(href):
    """Removes the first api version from the href.

    Given: 'http://www.meteos.com/v1.1/123'
    Returns: 'http://www.meteos.com/123'

    Given: 'http://www.meteos.com/v1.1'
    Returns: 'http://www.meteos.com'

    """
    parsed_url = parse.urlsplit(href)
    url_parts = parsed_url.path.split('/', 2)

    # NOTE: this should match vX.X or vX
    expression = re.compile(r'^v([0-9]+|[0-9]+\.[0-9]+)(/.*|$)')
    if expression.match(url_parts[1]):
        del url_parts[1]

    new_path = '/'.join(url_parts)

    if new_path == parsed_url.path:
        msg = 'href %s does not contain version' % href
        LOG.debug(msg)
        raise ValueError(msg)

    parsed_url = list(parsed_url)
    parsed_url[2] = new_path
    return parse.urlunsplit(parsed_url)


def dict_to_query_str(params):
    # TODO(throughnothing): we should just use urllib.urlencode instead of this
    # But currently we don't work with urlencoded url's
    param_str = ""
    for key, val in params.items():
        param_str = param_str + '='.join([str(key), str(val)]) + '&'

    return param_str.rstrip('&')


class ViewBuilder(object):
    """Model API responses as dictionaries."""

    _collection_name = None
    _detail_version_modifiers = []

    def _get_links(self, request, identifier):
        return [{"rel": "self",
                 "href": self._get_href_link(request, identifier), },
                {"rel": "bookmark",
                 "href": self._get_bookmark_link(request, identifier), }]

    def _get_next_link(self, request, identifier):
        """Return href string with proper limit and marker params."""
        params = request.params.copy()
        params["marker"] = identifier
        prefix = self._update_link_prefix(request.application_url,
                                          CONF.osapi_learning_base_URL)
        url = os.path.join(prefix,
                           request.environ["meteos.context"].project_id,
                           self._collection_name)
        return "%s?%s" % (url, dict_to_query_str(params))

    def _get_href_link(self, request, identifier):
        """Return an href string pointing to this object."""
        prefix = self._update_link_prefix(request.application_url,
                                          CONF.osapi_learning_base_URL)
        return os.path.join(prefix,
                            request.environ["meteos.context"].project_id,
                            self._collection_name,
                            str(identifier))

    def _get_bookmark_link(self, request, identifier):
        """Create a URL that refers to a specific resource."""
        base_url = remove_version_from_href(request.application_url)
        base_url = self._update_link_prefix(base_url,
                                            CONF.osapi_learning_base_URL)
        return os.path.join(base_url,
                            request.environ["meteos.context"].project_id,
                            self._collection_name,
                            str(identifier))

    def _get_collection_links(self, request, items, id_key="uuid"):
        """Retrieve 'next' link, if applicable."""
        links = []
        limit = int(request.params.get("limit", 0))
        if limit and limit == len(items):
            last_item = items[-1]
            if id_key in last_item:
                last_item_id = last_item[id_key]
            else:
                last_item_id = last_item["id"]
            links.append({
                "rel": "next",
                "href": self._get_next_link(request, last_item_id),
            })
        return links

    def _update_link_prefix(self, orig_url, prefix):
        if not prefix:
            return orig_url
        url_parts = list(parse.urlsplit(orig_url))
        prefix_parts = list(parse.urlsplit(prefix))
        url_parts[0:2] = prefix_parts[0:2]
        return parse.urlunsplit(url_parts)

    def update_versioned_resource_dict(self, request, resource_dict, resource):
        """Updates the given resource dict for the given request version.

        This method calls every method that is applicable to the request
        version, in _detail_version_modifiers.
        """
        for method_name in self._detail_version_modifiers:
            method = getattr(self, method_name)
            if request.api_version_request.matches_versioned_method(method):
                request_context = request.environ['meteos.context']
                method.func(self, request_context, resource_dict, resource)

    @classmethod
    def versioned_method(cls, min_ver, max_ver=None, experimental=False):
        """Decorator for versioning API methods.

        :param min_ver: string representing minimum version
        :param max_ver: optional string representing maximum version
        :param experimental: flag indicating an API is experimental and is
                             subject to change or removal at any time
        """

        def decorator(f):
            obj_min_ver = api_version.APIVersionRequest(min_ver)
            if max_ver:
                obj_max_ver = api_version.APIVersionRequest(max_ver)
            else:
                obj_max_ver = api_version.APIVersionRequest()

            # Add to list of versioned methods registered
            func_name = f.__name__
            new_func = versioned_method.VersionedMethod(
                func_name, obj_min_ver, obj_max_ver, experimental, f)

            return new_func

        return decorator


def remove_invalid_options(context, search_options, allowed_search_options):
    """Remove search options that are not valid for non-admin API/context."""
    if context.is_admin:
        # Allow all options
        return
    # Otherwise, strip out all unknown options
    unknown_options = [opt for opt in search_options
                       if opt not in allowed_search_options]
    bad_options = ", ".join(unknown_options)
    LOG.debug("Removing options '%(bad_options)s' from query",
              {"bad_options": bad_options})
    for opt in unknown_options:
        del search_options[opt]

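A minimal usage sketch of the pagination helper above (hypothetical, not part of the commit; it assumes webob is installed and the meteos package is importable):

    # Hypothetical sketch of meteos.api.common.limited(); the request path,
    # items and values are invented for illustration.
    import webob

    from meteos.api import common

    # 'offset' is where to start in the list, 'limit' caps the page size.
    req = webob.Request.blank('/v1/learnings?offset=2&limit=3')
    items = list(range(10))

    page = common.limited(items, req)
    assert page == [2, 3, 4]          # items[2:2 + 3]

    # Without GET params the whole list comes back, capped at osapi_max_limit.
    assert common.limited(items, webob.Request.blank('/v1/learnings')) == items
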
37 meteos/api/contrib/__init__.py Normal file
@@ -0,0 +1,37 @@

# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Contrib contains extensions that are shipped with meteos.

It can't be called 'extensions' because that causes namespacing problems.

"""

from oslo_config import cfg
from oslo_log import log

from meteos.api import extensions

CONF = cfg.CONF
LOG = log.getLogger(__name__)


def standard_extensions(ext_mgr):
    extensions.load_standard_extensions(ext_mgr, LOG, __path__, __package__)


def select_extensions(ext_mgr):
    extensions.load_standard_extensions(ext_mgr, LOG, __path__, __package__,
                                        CONF.osapi_learning_ext_list)

0 meteos/api/middleware/__init__.py Normal file

150 meteos/api/middleware/auth.py Normal file
@@ -0,0 +1,150 @@

# Copyright 2010 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Common Auth Middleware.

"""
import os

from oslo_config import cfg
from oslo_log import log
from oslo_serialization import jsonutils
import webob.dec
import webob.exc

from meteos.api.openstack import wsgi
from meteos import context
from meteos.i18n import _
from meteos import wsgi as base_wsgi

use_forwarded_for_opt = cfg.BoolOpt(
    'use_forwarded_for',
    default=False,
    help='Treat X-Forwarded-For as the canonical remote address. '
         'Only enable this if you have a sanitizing proxy.')

CONF = cfg.CONF
CONF.register_opt(use_forwarded_for_opt)
LOG = log.getLogger(__name__)


def pipeline_factory(loader, global_conf, **local_conf):
    """A paste pipeline replica that keys off of auth_strategy."""
    pipeline = local_conf[CONF.auth_strategy]
    if not CONF.api_rate_limit:
        limit_name = CONF.auth_strategy + '_nolimit'
        pipeline = local_conf.get(limit_name, pipeline)
    pipeline = pipeline.split()
    filters = [loader.get_filter(n) for n in pipeline[:-1]]
    app = loader.get_app(pipeline[-1])
    filters.reverse()
    for filter in filters:
        app = filter(app)
    return app


class InjectContext(base_wsgi.Middleware):
    """Add a 'meteos.context' to WSGI environ."""

    def __init__(self, context, *args, **kwargs):
        self.context = context
        super(InjectContext, self).__init__(*args, **kwargs)

    @webob.dec.wsgify(RequestClass=base_wsgi.Request)
    def __call__(self, req):
        req.environ['meteos.context'] = self.context
        return self.application


class MeteosKeystoneContext(base_wsgi.Middleware):
    """Make a request context from keystone headers."""

    @webob.dec.wsgify(RequestClass=base_wsgi.Request)
    def __call__(self, req):
        user_id = req.headers.get('X_USER')
        user_id = req.headers.get('X_USER_ID', user_id)
        if user_id is None:
            LOG.debug("Neither X_USER_ID nor X_USER found in request")
            return webob.exc.HTTPUnauthorized()
        # get the roles
        roles = [r.strip() for r in req.headers.get('X_ROLE', '').split(',')]
        if 'X_TENANT_ID' in req.headers:
            # This is the new header since Keystone went to ID/Name
            project_id = req.headers['X_TENANT_ID']
        else:
            # This is for legacy compatibility
            project_id = req.headers['X_TENANT']

        # Get the auth token
        auth_token = req.headers.get('X_AUTH_TOKEN',
                                     req.headers.get('X_STORAGE_TOKEN'))

        # Build a context, including the auth_token...
        remote_address = req.remote_addr
        if CONF.use_forwarded_for:
            remote_address = req.headers.get('X-Forwarded-For', remote_address)

        service_catalog = None
        if req.headers.get('X_SERVICE_CATALOG') is not None:
            try:
                catalog_header = req.headers.get('X_SERVICE_CATALOG')
                service_catalog = jsonutils.loads(catalog_header)
            except ValueError:
                raise webob.exc.HTTPInternalServerError(
                    _('Invalid service catalog json.'))

        ctx = context.RequestContext(user_id,
                                     project_id,
                                     roles=roles,
                                     auth_token=auth_token,
                                     remote_address=remote_address,
                                     service_catalog=service_catalog)

        req.environ['meteos.context'] = ctx
        return self.application


class NoAuthMiddleware(base_wsgi.Middleware):
    """Return a fake token if one isn't specified."""

    @webob.dec.wsgify(RequestClass=wsgi.Request)
    def __call__(self, req):
        if 'X-Auth-Token' not in req.headers:
            user_id = req.headers.get('X-Auth-User', 'admin')
            project_id = req.headers.get('X-Auth-Project-Id', 'admin')
            os_url = os.path.join(req.url, project_id)
            res = webob.Response()
            # NOTE(vish): This is expecting and returning Auth(1.1), whereas
            #             keystone uses 2.0 auth. We should probably allow
            #             2.0 auth here as well.
            res.headers['X-Auth-Token'] = '%s:%s' % (user_id, project_id)
            res.headers['X-Server-Management-Url'] = os_url
            res.content_type = 'text/plain'
            res.status = '204'
            return res

        token = req.headers['X-Auth-Token']
        user_id, _sep, project_id = token.partition(':')
        project_id = project_id or user_id
        remote_address = getattr(req, 'remote_address', '127.0.0.1')
        if CONF.use_forwarded_for:
            remote_address = req.headers.get('X-Forwarded-For', remote_address)
        ctx = context.RequestContext(user_id,
                                     project_id,
                                     is_admin=True,
                                     remote_address=remote_address)

        req.environ['meteos.context'] = ctx
        return self.application

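A small illustrative sketch (hypothetical, not part of the commit) of the ``user:project`` token convention that ``NoAuthMiddleware`` parses above:

    # Hypothetical illustration of the 'user:project' token parsing used by
    # NoAuthMiddleware; the values are invented.
    token = 'alice:demo'                # X-Auth-Token header value
    user_id, _sep, project_id = token.partition(':')
    project_id = project_id or user_id  # a bare 'alice' token maps to
                                        # user 'alice' in project 'alice'
    assert (user_id, project_id) == ('alice', 'demo')
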
74
meteos/api/middleware/fault.py
Normal file
74
meteos/api/middleware/fault.py
Normal file
@ -0,0 +1,74 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_log import log
import six
import webob.dec
import webob.exc

from meteos.api.openstack import wsgi
from meteos.i18n import _LE, _LI
from meteos import utils
from meteos import wsgi as base_wsgi

LOG = log.getLogger(__name__)


class FaultWrapper(base_wsgi.Middleware):
    """Calls down the middleware stack, making exceptions into faults."""

    _status_to_type = {}

    @staticmethod
    def status_to_type(status):
        if not FaultWrapper._status_to_type:
            for clazz in utils.walk_class_hierarchy(webob.exc.HTTPError):
                FaultWrapper._status_to_type[clazz.code] = clazz
        return FaultWrapper._status_to_type.get(
            status, webob.exc.HTTPInternalServerError)()

    def _error(self, inner, req):
        LOG.exception(_LE("Caught error: %s"), six.text_type(inner))

        safe = getattr(inner, 'safe', False)
        headers = getattr(inner, 'headers', None)
        status = getattr(inner, 'code', 500)
        if status is None:
            status = 500

        msg_dict = dict(url=req.url, status=status)
        LOG.info(_LI("%(url)s returned with HTTP %(status)d"), msg_dict)
        outer = self.status_to_type(status)
        if headers:
            outer.headers = headers
        # NOTE(johannes): We leave the explanation empty here on
        # purpose. It could possibly have sensitive information
        # that should not be returned back to the user. See
        # bugs 868360 and 874472
        # NOTE(eglynn): However, it would be over-conservative and
        # inconsistent with the EC2 API to hide every exception,
        # including those that are safe to expose, see bug 1021373
        if safe:
            outer.explanation = '%s: %s' % (inner.__class__.__name__,
                                            six.text_type(inner))
        return wsgi.Fault(outer)

    @webob.dec.wsgify(RequestClass=wsgi.Request)
    def __call__(self, req):
        try:
            return req.get_response(self.application)
        except Exception as ex:
            return self._error(ex, req)
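A quick sketch of how FaultWrapper's status mapping behaves (illustrative only; it assumes meteos and its utils module are importable):

import webob.exc
from meteos.api.middleware.fault import FaultWrapper

# Known HTTP codes map to the matching webob error class ...
assert isinstance(FaultWrapper.status_to_type(404), webob.exc.HTTPNotFound)
# ... and anything unknown falls back to a 500.
assert isinstance(FaultWrapper.status_to_type(999),
                  webob.exc.HTTPInternalServerError)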
81
meteos/api/openstack/__init__.py
Normal file
@ -0,0 +1,81 @@
# Copyright (c) 2013 OpenStack, LLC.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
WSGI middleware for OpenStack API controllers.
"""

from oslo_log import log
import routes

from meteos.api.openstack import wsgi
from meteos.i18n import _, _LW
from meteos import wsgi as base_wsgi

LOG = log.getLogger(__name__)


class APIMapper(routes.Mapper):
    def routematch(self, url=None, environ=None):
        # NOTE: compare with ``==`` rather than ``is``; identity checks
        # against string literals rely on interning and are not guaranteed.
        if url == "":
            result = self._match("", environ)
            return result[0], result[1]
        return routes.Mapper.routematch(self, url, environ)


class ProjectMapper(APIMapper):
    def resource(self, member_name, collection_name, **kwargs):
        if 'parent_resource' not in kwargs:
            kwargs['path_prefix'] = '{project_id}/'
        else:
            parent_resource = kwargs['parent_resource']
            p_collection = parent_resource['collection_name']
            p_member = parent_resource['member_name']
            kwargs['path_prefix'] = '{project_id}/%s/:%s_id' % (p_collection,
                                                                p_member)
        routes.Mapper.resource(self,
                               member_name,
                               collection_name,
                               **kwargs)


class APIRouter(base_wsgi.Router):
    """Routes requests on the API to the appropriate controller and method."""

    ExtensionManager = None  # override in subclasses

    @classmethod
    def factory(cls, global_config, **local_config):
        """Simple paste factory, :class:`meteos.wsgi.Router` doesn't have."""
        return cls()

    def __init__(self, ext_mgr=None):
        mapper = ProjectMapper()
        self.resources = {}
        self._setup_routes(mapper, ext_mgr)
        super(APIRouter, self).__init__(mapper)

    def _setup_routes(self, mapper, ext_mgr):
        raise NotImplementedError


class FaultWrapper(base_wsgi.Middleware):
    def __init__(self, application):
        LOG.warning(_LW('meteos.api.openstack:FaultWrapper is deprecated. '
                        'Please use '
                        'meteos.api.middleware.fault:FaultWrapper instead.'))
        # Avoid circular imports from here.
        from meteos.api.middleware import fault
        super(FaultWrapper, self).__init__(fault.FaultWrapper(application))
169
meteos/api/openstack/api_version_request.py
Normal file
@ -0,0 +1,169 @@
# Copyright 2014 IBM Corp.
# Copyright 2015 Clinton Knight
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import re

from meteos.api.openstack import versioned_method
from meteos import exception
from meteos.i18n import _
from meteos import utils

# Define the minimum and maximum version of the API across all of the
# REST API. The format of the version is:
# X.Y where:
#
# - X will only be changed if a significant backwards incompatible API
#   change is made which affects the API as whole. That is, something
#   that is only very very rarely incremented.
#
# - Y when you make any change to the API. Note that this includes
#   semantic changes which may not affect the input or output formats or
#   even originate in the API code layer. We are not distinguishing
#   between backwards compatible and backwards incompatible changes in
#   the versioning system. It must be made clear in the documentation as
#   to what is a backwards compatible change and what is a backwards
#   incompatible one.

#
# You must update the API version history string below with a one or
# two line description as well as update rest_api_version_history.rst
REST_API_VERSION_HISTORY = """

    REST API Version History:

    * 1.0 - Initial version. Includes all V1 APIs and extensions in Mitaka.
"""

# The minimum and maximum versions of the API supported
# The default api version request is defined to be the
# minimum version of the API supported.
_MIN_API_VERSION = "1.0"
_MAX_API_VERSION = "1.0"
DEFAULT_API_VERSION = _MIN_API_VERSION


# NOTE(cyeoh): min and max versions declared as functions so we can
# mock them for unittests. Do not use the constants directly anywhere
# else.
def min_api_version():
    return APIVersionRequest(_MIN_API_VERSION)


def max_api_version():
    return APIVersionRequest(_MAX_API_VERSION)


class APIVersionRequest(utils.ComparableMixin):
    """This class represents an API Version Request.

    This class includes convenience methods for manipulation
    and comparison of version numbers as needed to implement
    API microversions.
    """

    def __init__(self, version_string=None, experimental=False):
        """Create an API version request object."""
        self._ver_major = None
        self._ver_minor = None
        self._experimental = experimental

        if version_string is not None:
            match = re.match(r"^([1-9]\d*)\.([1-9]\d*|0)$",
                             version_string)
            if match:
                self._ver_major = int(match.group(1))
                self._ver_minor = int(match.group(2))
            else:
                raise exception.InvalidAPIVersionString(version=version_string)

    def __str__(self):
        """Debug/Logging representation of object."""
        return ("API Version Request Major: %(major)s, Minor: %(minor)s"
                % {'major': self._ver_major, 'minor': self._ver_minor})

    def is_null(self):
        return self._ver_major is None and self._ver_minor is None

    def _cmpkey(self):
        """Return the value used by ComparableMixin for rich comparisons."""
        return self._ver_major, self._ver_minor

    @property
    def experimental(self):
        return self._experimental

    @experimental.setter
    def experimental(self, value):
        if type(value) != bool:
            msg = _('The experimental property must be a bool value.')
            raise exception.InvalidParameterValue(err=msg)
        self._experimental = value

    def matches_versioned_method(self, method):
        """Compares this version to that of a versioned method."""

        if type(method) != versioned_method.VersionedMethod:
            msg = _('An API version request must be compared '
                    'to a VersionedMethod object.')
            raise exception.InvalidParameterValue(err=msg)

        return self.matches(method.start_version,
                            method.end_version,
                            method.experimental)

    def matches(self, min_version, max_version, experimental=False):
        """Compares this version to the specified min/max range.

        Returns whether the version object represents a version
        greater than or equal to the minimum version and less than
        or equal to the maximum version.

        If min_version is null then there is no minimum limit.
        If max_version is null then there is no maximum limit.
        If self is null then raise ValueError.

        :param min_version: Minimum acceptable version.
        :param max_version: Maximum acceptable version.
        :param experimental: Whether to match experimental APIs.
        :returns: boolean
        """

        if self.is_null():
            raise ValueError
        # NOTE(cknight): An experimental request should still match a
        # non-experimental API, so the experimental check isn't just
        # looking for equality.
        if not self.experimental and experimental:
            return False
        if max_version.is_null() and min_version.is_null():
            return True
        elif max_version.is_null():
            return min_version <= self
        elif min_version.is_null():
            return self <= max_version
        else:
            return min_version <= self <= max_version

    def get_string(self):
        """Returns a string representation of this object.

        If this method is used to create an APIVersionRequest,
        the resulting object will be an equivalent request.
        """
        if self.is_null():
            raise ValueError
        return ("%(major)s.%(minor)s" %
                {'major': self._ver_major, 'minor': self._ver_minor})
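Illustrative use of the class above (a sketch, not part of the commit):

from meteos.api.openstack import api_version_request as avr

req = avr.APIVersionRequest('1.0')
# Within the supported [min, max] range, matches() returns True.
assert req.matches(avr.min_api_version(), avr.max_api_version())
assert req.get_string() == '1.0'
# A request built with no version string is "null" and refuses comparison.
assert avr.APIVersionRequest().is_null()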
14
meteos/api/openstack/rest_api_version_history.rst
Normal file
@ -0,0 +1,14 @@
REST API Version History
========================

This documents the changes made to the REST API with every
microversion change. The description for each version should be a
verbose one which has enough information to be suitable for use in
user documentation.

1.0
---
  The 1.0 Meteos API includes all v1 core APIs existing prior to
  the introduction of microversions. The /v1 URL is used to call
  1.0 APIs, and microversions headers sent to this endpoint are
  ignored.
29
meteos/api/openstack/urlmap.py
Normal file
@ -0,0 +1,29 @@
# Copyright (c) 2013 OpenStack, LLC.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_log import log

from meteos.api import urlmap
from meteos.i18n import _LW

LOG = log.getLogger(__name__)


def urlmap_factory(loader, global_conf, **local_conf):
    LOG.warning(_LW('meteos.api.openstack.urlmap:urlmap_factory '
                    'is deprecated. '
                    'Please use meteos.api.urlmap:urlmap_factory instead.'))
    # Return the delegate's result so this shim still works as a
    # paste factory.
    return urlmap.urlmap_factory(loader, global_conf, **local_conf)
49
meteos/api/openstack/versioned_method.py
Normal file
@ -0,0 +1,49 @@
# Copyright 2014 IBM Corp.
# Copyright 2015 Clinton Knight
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from meteos import utils


class VersionedMethod(utils.ComparableMixin):

    def __init__(self, name, start_version, end_version, experimental, func):
        """Versioning information for a single method.

        Minimum and maximums are inclusive.

        :param name: Name of the method
        :param start_version: Minimum acceptable version
        :param end_version: Maximum acceptable version
        :param experimental: True if method is experimental
        :param func: Method to call
        """
        self.name = name
        self.start_version = start_version
        self.end_version = end_version
        self.experimental = experimental
        self.func = func

    def __str__(self):
        args = {
            'name': self.name,
            'start': self.start_version,
            'end': self.end_version
        }
        return ("Version Method %(name)s: min: %(start)s, max: %(end)s" % args)

    def _cmpkey(self):
        """Return the value used by ComparableMixin for rich comparisons."""
        return self.start_version
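The comparison key above lets the WSGI layer order candidate implementations by start_version and pick the newest one that matches a request. A rough sketch (the versions, lambdas, and dispatch loop are made up for illustration; meteos.api.openstack.wsgi does the real work):

from meteos.api.openstack import api_version_request, versioned_method

v10 = api_version_request.APIVersionRequest('1.0')
v15 = api_version_request.APIVersionRequest('1.5')
candidates = [
    versioned_method.VersionedMethod('index', v10, v15, False, lambda: 'old'),
    versioned_method.VersionedMethod('index', v15, v15, False, lambda: 'new'),
]
request = api_version_request.APIVersionRequest('1.5')
for m in sorted(candidates, reverse=True):   # newest start_version first
    if request.matches_versioned_method(m):
        assert m.func() == 'new'
        break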
1343
meteos/api/openstack/wsgi.py
Normal file
File diff suppressed because it is too large
291
meteos/api/urlmap.py
Normal file
@ -0,0 +1,291 @@
# Copyright 2011 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import re
try:
    from urllib.request import parse_http_list  # noqa
except ImportError:
    from urllib2 import parse_http_list  # noqa

import paste.urlmap

from meteos.api.openstack import wsgi


_quoted_string_re = r'"[^"\\]*(?:\\.[^"\\]*)*"'
_option_header_piece_re = re.compile(
    r';\s*([^\s;=]+|%s)\s*'
    r'(?:=\s*([^;]+|%s))?\s*' %
    (_quoted_string_re, _quoted_string_re))


def unquote_header_value(value):
    """Unquotes a header value.

    This does not use the real unquoting but what browsers are actually
    using for quoting.

    :param value: the header value to unquote.
    """
    if value and value[0] == value[-1] == '"':
        # this is not the real unquoting, but fixing this so that the
        # RFC is met will result in bugs with internet explorer and
        # probably some other browsers as well. IE for example is
        # uploading files with "C:\foo\bar.txt" as filename
        value = value[1:-1]
    return value


def parse_list_header(value):
    """Parse lists as described by RFC 2068 Section 2.

    In particular, parse comma-separated lists where the elements of
    the list may include quoted-strings. A quoted-string could
    contain a comma. A non-quoted string could have quotes in the
    middle. Quotes are removed automatically after parsing.

    The return value is a standard :class:`list`:

    >>> parse_list_header('token, "quoted value"')
    ['token', 'quoted value']

    :param value: a string with a list header.
    :return: :class:`list`
    """
    result = []
    for item in parse_http_list(value):
        if item[:1] == item[-1:] == '"':
            item = unquote_header_value(item[1:-1])
        result.append(item)
    return result


def parse_options_header(value):
    """Parse header into content type and options.

    Parse a ``Content-Type`` like header into a tuple with the content
    type and the options:

    >>> parse_options_header('Content-Type: text/html; mimetype=text/html')
    ('Content-Type:', {'mimetype': 'text/html'})

    :param value: the header to parse.
    :return: (str, options)
    """
    def _tokenize(string):
        for match in _option_header_piece_re.finditer(string):
            key, value = match.groups()
            key = unquote_header_value(key)
            if value is not None:
                value = unquote_header_value(value)
            yield key, value

    if not value:
        return '', {}

    parts = _tokenize(';' + value)
    name = next(parts)[0]
    extra = dict(parts)
    return name, extra


class Accept(object):
    def __init__(self, value):
        self._content_types = [parse_options_header(v) for v in
                               parse_list_header(value)]

    def best_match(self, supported_content_types):
        # FIXME: Should we have a more sophisticated matching algorithm that
        # takes into account the version as well?
        best_quality = -1
        best_content_type = None
        best_params = {}
        best_match = '*/*'

        for content_type in supported_content_types:
            for content_mask, params in self._content_types:
                try:
                    quality = float(params.get('q', 1))
                except ValueError:
                    continue

                if quality < best_quality:
                    continue
                elif best_quality == quality:
                    if best_match.count('*') <= content_mask.count('*'):
                        continue

                if self._match_mask(content_mask, content_type):
                    best_quality = quality
                    best_content_type = content_type
                    best_params = params
                    best_match = content_mask

        return best_content_type, best_params

    def content_type_params(self, best_content_type):
        """Find parameters in Accept header for given content type."""
        for content_type, params in self._content_types:
            if best_content_type == content_type:
                return params

        return {}

    def _match_mask(self, mask, content_type):
        if '*' not in mask:
            return content_type == mask
        if mask == '*/*':
            return True
        mask_major = mask[:-2]
        content_type_major = content_type.split('/', 1)[0]
        return content_type_major == mask_major


def urlmap_factory(loader, global_conf, **local_conf):
    if 'not_found_app' in local_conf:
        not_found_app = local_conf.pop('not_found_app')
    else:
        not_found_app = global_conf.get('not_found_app')
    if not_found_app:
        not_found_app = loader.get_app(not_found_app, global_conf=global_conf)
    urlmap = URLMap(not_found_app=not_found_app)
    for path, app_name in local_conf.items():
        path = paste.urlmap.parse_path_expression(path)
        app = loader.get_app(app_name, global_conf=global_conf)
        urlmap[path] = app
    return urlmap


class URLMap(paste.urlmap.URLMap):
    def _match(self, host, port, path_info):
        """Find longest match for a given URL path."""
        for (domain, app_url), app in self.applications:
            if domain and domain != host and domain != host + ':' + port:
                continue
            if (path_info == app_url or
                    path_info.startswith(app_url + '/')):
                return app, app_url

        return None, None

    def _set_script_name(self, app, app_url):
        def wrap(environ, start_response):
            environ['SCRIPT_NAME'] += app_url
            return app(environ, start_response)

        return wrap

    def _munge_path(self, app, path_info, app_url):
        def wrap(environ, start_response):
            environ['SCRIPT_NAME'] += app_url
            environ['PATH_INFO'] = path_info[len(app_url):]
            return app(environ, start_response)

        return wrap

    def _path_strategy(self, host, port, path_info):
        """Check path suffix for MIME type and path prefix for API version."""
        mime_type = app = app_url = None

        parts = path_info.rsplit('.', 1)
        if len(parts) > 1:
            possible_type = 'application/' + parts[1]
            if possible_type in wsgi.SUPPORTED_CONTENT_TYPES:
                mime_type = possible_type

        parts = path_info.split('/')
        if len(parts) > 1:
            possible_app, possible_app_url = self._match(host, port,
                                                         path_info)
            # Don't use prefix if it ends up matching default
            if possible_app and possible_app_url:
                app_url = possible_app_url
                app = self._munge_path(possible_app, path_info, app_url)

        return mime_type, app, app_url

    def _content_type_strategy(self, host, port, environ):
        """Check Content-Type header for API version."""
        app = None
        params = parse_options_header(environ.get('CONTENT_TYPE', ''))[1]
        if 'version' in params:
            app, app_url = self._match(host, port, '/v' + params['version'])
            if app:
                app = self._set_script_name(app, app_url)

        return app

    def _accept_strategy(self, host, port, environ, supported_content_types):
        """Check Accept header for best matching MIME type and API version."""
        accept = Accept(environ.get('HTTP_ACCEPT', ''))

        app = None

        # Find the best match in the Accept header
        mime_type, params = accept.best_match(supported_content_types)
        if 'version' in params:
            app, app_url = self._match(host, port, '/v' + params['version'])
            if app:
                app = self._set_script_name(app, app_url)

        return mime_type, app

    def __call__(self, environ, start_response):
        host = environ.get('HTTP_HOST', environ.get('SERVER_NAME')).lower()
        if ':' in host:
            host, port = host.split(':', 1)
        else:
            if environ['wsgi.url_scheme'] == 'http':
                port = '80'
            else:
                port = '443'

        path_info = environ['PATH_INFO']
        path_info = self.normalize_url(path_info, False)[1]

        # The API version is determined in one of three ways:
        # 1) URL path prefix (eg /v1.1/tenant/servers/detail)
        # 2) Content-Type header (eg application/json;version=1.1)
        # 3) Accept header (eg application/json;q=0.8;version=1.1)

        # Meteos supports only application/json as MIME type for the
        # responses.
        supported_content_types = list(wsgi.SUPPORTED_CONTENT_TYPES)

        mime_type, app, app_url = self._path_strategy(host, port, path_info)

        if not app:
            app = self._content_type_strategy(host, port, environ)

        if not mime_type or not app:
            possible_mime_type, possible_app = self._accept_strategy(
                host, port, environ, supported_content_types)
            if possible_mime_type and not mime_type:
                mime_type = possible_mime_type
            if possible_app and not app:
                app = possible_app

        if not mime_type:
            mime_type = 'application/json'

        if not app:
            # Didn't match a particular version, probably matches default
            app, app_url = self._match(host, port, path_info)
            if app:
                app = self._munge_path(app, path_info, app_url)

        if app:
            environ['meteos.best_content_type'] = mime_type
            return app(environ, start_response)

        environ['paste.urlmap_object'] = self
        return self.not_found_application(environ, start_response)
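To make the three strategies above concrete, here is how the helpers resolve a versioned Accept header (a sketch; only the parsing helpers shown in this file are exercised):

from meteos.api.urlmap import Accept, parse_options_header

# Accept-header strategy: quality and version ride along as parameters.
accept = Accept('application/json;q=0.8;version=1')
mime, params = accept.best_match(['application/json'])
assert (mime, params['version']) == ('application/json', '1')

# The Content-Type strategy uses the same option parser.
name, opts = parse_options_header('application/json;version=1')
assert (name, opts) == ('application/json', {'version': '1'})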
0
meteos/api/v1/__init__.py
Normal file
156
meteos/api/v1/datasets.py
Normal file
@ -0,0 +1,156 @@
# Copyright 2013 NetApp
# All Rights Reserved.
# Copyright (c) 2016 NEC Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""The datasets api."""

import ast
import re
import string

from oslo_log import log
from oslo_utils import strutils
from oslo_utils import uuidutils
import six
import webob
from webob import exc

from meteos.api import common
from meteos.api.openstack import wsgi
from meteos.api.views import datasets as dataset_views
from meteos import exception
from meteos.i18n import _, _LI
from meteos import engine

LOG = log.getLogger(__name__)


class DatasetController(wsgi.Controller, wsgi.AdminActionsMixin):

    """The Datasets API v1 controller for the OpenStack API."""

    resource_name = 'dataset'
    _view_builder_class = dataset_views.ViewBuilder

    def __init__(self):
        super(self.__class__, self).__init__()
        self.engine_api = engine.API()

    def show(self, req, id):
        """Return data about the given dataset."""
        context = req.environ['meteos.context']

        try:
            dataset = self.engine_api.get_dataset(context, id)
        except exception.NotFound:
            raise exc.HTTPNotFound()

        return self._view_builder.detail(req, dataset)

    def delete(self, req, id):
        """Delete a dataset."""
        context = req.environ['meteos.context']

        LOG.info(_LI("Delete dataset with id: %s"), id, context=context)

        try:
            self.engine_api.delete_dataset(context, id)
        except exception.NotFound:
            raise exc.HTTPNotFound()
        except exception.InvalidLearning as e:
            raise exc.HTTPForbidden(explanation=six.text_type(e))

        return webob.Response(status_int=202)

    def index(self, req):
        """Returns a summary list of datasets."""
        return self._get_datasets(req, is_detail=False)

    def detail(self, req):
        """Returns a detailed list of datasets."""
        return self._get_datasets(req, is_detail=True)

    def _get_datasets(self, req, is_detail):
        """Returns a list of datasets, transformed through view builder."""
        context = req.environ['meteos.context']

        search_opts = {}
        search_opts.update(req.GET)

        # Remove keys that are not related to dataset attrs
        search_opts.pop('limit', None)
        search_opts.pop('offset', None)
        sort_key = search_opts.pop('sort_key', 'created_at')
        sort_dir = search_opts.pop('sort_dir', 'desc')

        datasets = self.engine_api.get_all_datasets(
            context, search_opts=search_opts, sort_key=sort_key,
            sort_dir=sort_dir)

        limited_list = common.limited(datasets, req)

        if is_detail:
            datasets = self._view_builder.detail_list(req, limited_list)
        else:
            datasets = self._view_builder.summary_list(req, limited_list)
        return datasets

    def create(self, req, body):
        """Creates a new dataset."""
        context = req.environ['meteos.context']

        if not self.is_valid_body(body, 'dataset'):
            raise exc.HTTPUnprocessableEntity()

        dataset = body['dataset']

        LOG.debug("Create dataset with request: %s", dataset)

        try:
            experiment = self.engine_api.get_experiment(
                context, dataset['experiment_id'])
            template = self.engine_api.get_template(
                context, experiment.template_id)
        except exception.NotFound:
            raise exc.HTTPNotFound()

        display_name = dataset.get('display_name')
        display_description = dataset.get('display_description')
        method = dataset.get('method')
        experiment_id = dataset.get('experiment_id')
        source_dataset_url = dataset.get('source_dataset_url')
        params = dataset.get('params')
        swift_tenant = dataset.get('swift_tenant')
        swift_username = dataset.get('swift_username')
        swift_password = dataset.get('swift_password')

        new_dataset = self.engine_api.create_dataset(context,
                                                     display_name,
                                                     display_description,
                                                     method,
                                                     source_dataset_url,
                                                     params,
                                                     template.id,
                                                     template.job_template_id,
                                                     experiment_id,
                                                     experiment.cluster_id,
                                                     swift_tenant,
                                                     swift_username,
                                                     swift_password)

        return self._view_builder.detail(req, new_dataset)


def create_resource():
    return wsgi.Resource(DatasetController())
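For reference, a request routed to DatasetController.create would carry a body shaped like the following (all values are illustrative placeholders, not from the commit):

body = {
    "dataset": {
        "display_name": "sample-dataset",
        "display_description": "training split of the raw data",
        "method": "split",
        "experiment_id": "<existing-experiment-uuid>",
        "source_dataset_url": "swift://container/linear_data.txt",
        "params": "{'test_rate': 0.3}",
        "swift_tenant": "demo",
        "swift_username": "demo",
        "swift_password": "secret",
    }
}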
144
meteos/api/v1/experiments.py
Normal file
@ -0,0 +1,144 @@
# Copyright 2013 NetApp
# All Rights Reserved.
# Copyright (c) 2016 NEC Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""The experiments api."""

import ast
import re
import string

from oslo_log import log
from oslo_utils import strutils
from oslo_utils import uuidutils
import six
import webob
from webob import exc

from meteos.api import common
from meteos.api.openstack import wsgi
from meteos.api.views import experiments as experiment_views
from meteos import exception
from meteos.i18n import _, _LI
from meteos import engine

LOG = log.getLogger(__name__)


class ExperimentController(wsgi.Controller, wsgi.AdminActionsMixin):

    """The Experiments API v1 controller for the OpenStack API."""

    resource_name = 'experiment'
    _view_builder_class = experiment_views.ViewBuilder

    def __init__(self):
        super(self.__class__, self).__init__()
        self.engine_api = engine.API()

    def show(self, req, id):
        """Return data about the given experiment."""
        context = req.environ['meteos.context']

        try:
            experiment = self.engine_api.get_experiment(context, id)
        except exception.NotFound:
            raise exc.HTTPNotFound()

        return self._view_builder.detail(req, experiment)

    def delete(self, req, id):
        """Delete an experiment."""
        context = req.environ['meteos.context']

        LOG.info(_LI("Delete experiment with id: %s"), id, context=context)

        try:
            self.engine_api.delete_experiment(context, id)
        except exception.NotFound:
            raise exc.HTTPNotFound()
        except exception.InvalidLearning as e:
            raise exc.HTTPForbidden(explanation=six.text_type(e))

        return webob.Response(status_int=202)

    def index(self, req):
        """Returns a summary list of experiments."""
        return self._get_experiments(req, is_detail=False)

    def detail(self, req):
        """Returns a detailed list of experiments."""
        return self._get_experiments(req, is_detail=True)

    def _get_experiments(self, req, is_detail):
        """Returns a list of experiments, transformed through view builder."""
        context = req.environ['meteos.context']

        search_opts = {}
        search_opts.update(req.GET)

        # Remove keys that are not related to experiment attrs
        search_opts.pop('limit', None)
        search_opts.pop('offset', None)
        sort_key = search_opts.pop('sort_key', 'created_at')
        sort_dir = search_opts.pop('sort_dir', 'desc')

        experiments = self.engine_api.get_all_experiments(
            context, search_opts=search_opts, sort_key=sort_key,
            sort_dir=sort_dir)

        limited_list = common.limited(experiments, req)

        if is_detail:
            experiments = self._view_builder.detail_list(req, limited_list)
        else:
            experiments = self._view_builder.summary_list(req, limited_list)
        return experiments

    def create(self, req, body):
        """Creates a new experiment."""
        context = req.environ['meteos.context']

        if not self.is_valid_body(body, 'experiment'):
            raise exc.HTTPUnprocessableEntity()

        experiment = body['experiment']

        LOG.debug("Create experiment with request: %s", experiment)

        try:
            self.engine_api.get_template(context, experiment['template_id'])
        except exception.NotFound:
            raise exc.HTTPNotFound()

        display_name = experiment.get('display_name')
        display_description = experiment.get('display_description')
        template_id = experiment.get('template_id')
        key_name = experiment.get('key_name')
        neutron_management_network = experiment.get(
            'neutron_management_network')

        new_experiment = self.engine_api.create_experiment(
            context,
            display_name,
            display_description,
            template_id,
            key_name,
            neutron_management_network)

        return self._view_builder.detail(req, new_experiment)


def create_resource():
    return wsgi.Resource(ExperimentController())
155
meteos/api/v1/learnings.py
Normal file
@ -0,0 +1,155 @@
# Copyright 2013 NetApp
# All Rights Reserved.
# Copyright (c) 2016 NEC Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""The learnings api."""

import ast
import re
import string

from oslo_log import log
from oslo_utils import strutils
from oslo_utils import uuidutils
import six
import webob
from webob import exc

from meteos.api import common
from meteos.api.openstack import wsgi
from meteos.api.views import learnings as learning_views
from meteos import exception
from meteos.i18n import _, _LI
from meteos import engine

LOG = log.getLogger(__name__)


class LearningController(wsgi.Controller, wsgi.AdminActionsMixin):

    """The Learnings API v1 controller for the OpenStack API."""

    resource_name = 'learning'
    _view_builder_class = learning_views.ViewBuilder

    def __init__(self):
        super(self.__class__, self).__init__()
        self.engine_api = engine.API()

    def show(self, req, id):
        """Return data about the given learning."""
        context = req.environ['meteos.context']

        try:
            learning = self.engine_api.get_learning(context, id)
        except exception.NotFound:
            raise exc.HTTPNotFound()

        return self._view_builder.detail(req, learning)

    def delete(self, req, id):
        """Delete a learning."""
        context = req.environ['meteos.context']

        LOG.info(_LI("Delete learning with id: %s"), id, context=context)

        try:
            self.engine_api.delete_learning(context, id)
        except exception.NotFound:
            raise exc.HTTPNotFound()
        except exception.InvalidLearning as e:
            raise exc.HTTPForbidden(explanation=six.text_type(e))

        return webob.Response(status_int=202)

    def index(self, req):
        """Returns a summary list of learnings."""
        return self._get_learnings(req, is_detail=False)

    def detail(self, req):
        """Returns a detailed list of learnings."""
        return self._get_learnings(req, is_detail=True)

    def _get_learnings(self, req, is_detail):
        """Returns a list of learnings, transformed through view builder."""
        context = req.environ['meteos.context']

        search_opts = {}
        search_opts.update(req.GET)

        # Remove keys that are not related to learning attrs
        search_opts.pop('limit', None)
        search_opts.pop('offset', None)
        sort_key = search_opts.pop('sort_key', 'created_at')
        sort_dir = search_opts.pop('sort_dir', 'desc')

        learnings = self.engine_api.get_all_learnings(
            context, search_opts=search_opts, sort_key=sort_key,
            sort_dir=sort_dir)

        limited_list = common.limited(learnings, req)

        if is_detail:
            learnings = self._view_builder.detail_list(req, limited_list)
        else:
            learnings = self._view_builder.summary_list(req, limited_list)
        return learnings

    def create(self, req, body):
        """Creates a new learning."""
        context = req.environ['meteos.context']

        if not self.is_valid_body(body, 'learning'):
            raise exc.HTTPUnprocessableEntity()

        learning = body['learning']

        LOG.debug("Create learning with request: %s", learning)

        try:
            self.engine_api.get_experiment(context, learning['experiment_id'])
        except exception.NotFound:
            raise exc.HTTPNotFound()

        display_name = learning.get('display_name')
        display_description = learning.get('display_description')
        experiment_id = learning.get('experiment_id')
        model_id = learning.get('model_id')
        method = learning.get('method')
        args = learning.get('args')

        try:
            experiment = self.engine_api.get_experiment(context, experiment_id)
            template = self.engine_api.get_template(
                context, experiment.template_id)
        except exception.NotFound:
            raise exc.HTTPNotFound()

        new_learning = self.engine_api.create_learning(
            context,
            display_name,
            display_description,
            model_id,
            method,
            args,
            template.id,
            template.job_template_id,
            experiment_id,
            experiment.cluster_id)

        return self._view_builder.detail(req, new_learning)


def create_resource():
    return wsgi.Resource(LearningController())
158
meteos/api/v1/models.py
Normal file
@ -0,0 +1,158 @@
# Copyright 2013 NetApp
# All Rights Reserved.
# Copyright (c) 2016 NEC Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""The models api."""

import ast
import re
import string

from oslo_log import log
from oslo_utils import strutils
from oslo_utils import uuidutils
import six
import webob
from webob import exc

from meteos.api import common
from meteos.api.openstack import wsgi
from meteos.api.views import models as model_views
from meteos import exception
from meteos.i18n import _, _LI
from meteos import engine

LOG = log.getLogger(__name__)


class ModelController(wsgi.Controller, wsgi.AdminActionsMixin):

    """The Models API v1 controller for the OpenStack API."""

    resource_name = 'model'
    _view_builder_class = model_views.ViewBuilder

    def __init__(self):
        super(self.__class__, self).__init__()
        self.engine_api = engine.API()

    def show(self, req, id):
        """Return data about the given model."""
        context = req.environ['meteos.context']

        try:
            model = self.engine_api.get_model(context, id)
        except exception.NotFound:
            raise exc.HTTPNotFound()

        return self._view_builder.detail(req, model)

    def delete(self, req, id):
        """Delete a model."""
        context = req.environ['meteos.context']

        LOG.info(_LI("Delete model with id: %s"), id, context=context)

        try:
            self.engine_api.delete_model(context, id)
        except exception.NotFound:
            raise exc.HTTPNotFound()
        except exception.InvalidLearning as e:
            raise exc.HTTPForbidden(explanation=six.text_type(e))

        return webob.Response(status_int=202)

    def index(self, req):
        """Returns a summary list of models."""
        return self._get_models(req, is_detail=False)

    def detail(self, req):
        """Returns a detailed list of models."""
        return self._get_models(req, is_detail=True)

    def _get_models(self, req, is_detail):
        """Returns a list of models, transformed through view builder."""
        context = req.environ['meteos.context']

        search_opts = {}
        search_opts.update(req.GET)

        # Remove keys that are not related to model attrs
        search_opts.pop('limit', None)
        search_opts.pop('offset', None)
        sort_key = search_opts.pop('sort_key', 'created_at')
        sort_dir = search_opts.pop('sort_dir', 'desc')

        models = self.engine_api.get_all_models(
            context, search_opts=search_opts, sort_key=sort_key,
            sort_dir=sort_dir)

        limited_list = common.limited(models, req)

        if is_detail:
            models = self._view_builder.detail_list(req, limited_list)
        else:
            models = self._view_builder.summary_list(req, limited_list)
        return models

    def create(self, req, body):
        """Creates a new model."""
        context = req.environ['meteos.context']

        if not self.is_valid_body(body, 'model'):
            raise exc.HTTPUnprocessableEntity()

        model = body['model']

        LOG.debug("Create model with request: %s", model)

        try:
            experiment = self.engine_api.get_experiment(
                context, model['experiment_id'])
            template = self.engine_api.get_template(
                context, experiment.template_id)
        except exception.NotFound:
            raise exc.HTTPNotFound()

        display_name = model.get('display_name')
        display_description = model.get('display_description')
        experiment_id = model.get('experiment_id')
        source_dataset_url = model.get('source_dataset_url')
        dataset_format = model.get('dataset_format', 'csv')
        model_type = model.get('model_type')
        model_params = model.get('model_params')
        swift_tenant = model.get('swift_tenant')
        swift_username = model.get('swift_username')
        swift_password = model.get('swift_password')

        new_model = self.engine_api.create_model(context,
                                                 display_name,
                                                 display_description,
                                                 source_dataset_url,
                                                 dataset_format,
                                                 model_type,
                                                 model_params,
                                                 template.id,
                                                 template.job_template_id,
                                                 experiment_id,
                                                 experiment.cluster_id,
                                                 swift_tenant,
                                                 swift_username,
                                                 swift_password)

        return self._view_builder.detail(req, new_model)


def create_resource():
    return wsgi.Resource(ModelController())
73
meteos/api/v1/router.py
Normal file
@ -0,0 +1,73 @@
# Copyright 2011 OpenStack LLC.
# Copyright 2011 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
WSGI middleware for OpenStack Learning API v1.
"""

import meteos.api.openstack
from meteos.api.v1 import learnings
from meteos.api.v1 import experiments
from meteos.api.v1 import templates
from meteos.api.v1 import datasets
from meteos.api.v1 import models
from meteos.api import versions


class APIRouter(meteos.api.openstack.APIRouter):
    """Route API requests.

    Routes requests on the OpenStack API to the appropriate controller
    and method.
    """

    def _setup_routes(self, mapper, ext_mgr):
        self.resources['versions'] = versions.create_resource()
        mapper.connect("versions", "/",
                       controller=self.resources['versions'],
                       action='index')

        mapper.redirect("", "/")

        self.resources['templates'] = templates.create_resource()
        mapper.resource("template", "templates",
                        controller=self.resources['templates'],
                        collection={'detail': 'GET'},
                        member={'action': 'POST'})

        self.resources['experiments'] = experiments.create_resource()
        mapper.resource("experiment", "experiments",
                        controller=self.resources['experiments'],
                        collection={'detail': 'GET'},
                        member={'action': 'POST'})

        self.resources['learnings'] = learnings.create_resource()
        mapper.resource("learning", "learnings",
                        controller=self.resources['learnings'],
                        collection={'detail': 'GET'},
                        member={'action': 'POST'})

        self.resources['datasets'] = datasets.create_resource()
        mapper.resource("dataset", "datasets",
                        controller=self.resources['datasets'],
                        collection={'detail': 'GET'},
                        member={'action': 'POST'})

        self.resources['models'] = models.create_resource()
        mapper.resource("model", "models",
                        controller=self.resources['models'],
                        collection={'detail': 'GET'},
                        member={'action': 'POST'})
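Given ProjectMapper's '{project_id}/' prefix, the registrations above produce project-scoped routes of this shape (an illustrative summary, not an exhaustive dump of what routes.Mapper generates):

#   GET    /{project_id}/learnings           -> LearningController.index
#   GET    /{project_id}/learnings/detail    -> LearningController.detail
#   GET    /{project_id}/learnings/{id}      -> LearningController.show
#   POST   /{project_id}/learnings           -> LearningController.create
#   DELETE /{project_id}/learnings/{id}      -> LearningController.delete
# ... with the same shape repeated for templates, experiments, datasets
# and models.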
145
meteos/api/v1/templates.py
Normal file
@ -0,0 +1,145 @@
# Copyright 2013 NetApp
# All Rights Reserved.
# Copyright (c) 2016 NEC Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""The templates api."""

import ast
import re
import string

from oslo_log import log
from oslo_utils import strutils
from oslo_utils import uuidutils
import six
import webob
from webob import exc

from meteos.api import common
from meteos.api.openstack import wsgi
from meteos.api.views import templates as template_views
from meteos import exception
from meteos.i18n import _, _LI
from meteos import engine

LOG = log.getLogger(__name__)


class TemplateController(wsgi.Controller, wsgi.AdminActionsMixin):

    """The Templates API v1 controller for the OpenStack API."""

    resource_name = 'template'
    _view_builder_class = template_views.ViewBuilder

    def __init__(self):
        super(self.__class__, self).__init__()
        self.engine_api = engine.API()

    def show(self, req, id):
        """Return data about the given template."""
        context = req.environ['meteos.context']

        try:
            template = self.engine_api.get_template(context, id)
        except exception.NotFound:
            raise exc.HTTPNotFound()

        return self._view_builder.detail(req, template)

    def delete(self, req, id):
        """Delete a template."""
        context = req.environ['meteos.context']

        LOG.info(_LI("Delete template with id: %s"), id, context=context)

        try:
            self.engine_api.delete_template(context, id)
        except exception.NotFound:
            raise exc.HTTPNotFound()
        except exception.InvalidLearning as e:
            raise exc.HTTPForbidden(explanation=six.text_type(e))

        return webob.Response(status_int=202)

    def index(self, req):
        """Returns a summary list of templates."""
        return self._get_templates(req, is_detail=False)

    def detail(self, req):
        """Returns a detailed list of templates."""
        return self._get_templates(req, is_detail=True)

    def _get_templates(self, req, is_detail):
        """Returns a list of templates, transformed through view builder."""
        context = req.environ['meteos.context']

        search_opts = {}
        search_opts.update(req.GET)

        # Remove keys that are not related to template attrs
        search_opts.pop('limit', None)
        search_opts.pop('offset', None)
        sort_key = search_opts.pop('sort_key', 'created_at')
        sort_dir = search_opts.pop('sort_dir', 'desc')

        templates = self.engine_api.get_all_templates(
            context, search_opts=search_opts, sort_key=sort_key,
            sort_dir=sort_dir)

        limited_list = common.limited(templates, req)

        if is_detail:
            templates = self._view_builder.detail_list(req, limited_list)
        else:
            templates = self._view_builder.summary_list(req, limited_list)
        return templates

    def create(self, req, body):
        """Creates a new template."""
        context = req.environ['meteos.context']

        if not self.is_valid_body(body, 'template'):
            raise exc.HTTPUnprocessableEntity()

        template = body['template']

        LOG.debug("Create template with request: %s", template)

        kwargs = {
            'image_id': template.get('image_id'),
            'master_nodes_num': template.get('master_nodes_num'),
            'master_flavor_id': template.get('master_flavor_id'),
            'worker_nodes_num': template.get('worker_nodes_num'),
            'worker_flavor_id': template.get('worker_flavor_id'),
            'spark_version': template.get('spark_version'),
            'floating_ip_pool': template.get('floating_ip_pool'),
        }

        display_name = template.get('display_name')
        display_description = template.get('display_description')

        try:
            new_template = self.engine_api.create_template(context,
                                                           display_name,
                                                           display_description,
                                                           **kwargs)
        except exception.NotFound:
            raise exc.HTTPNotFound()

        return self._view_builder.detail(req, new_template)


def create_resource():
    return wsgi.Resource(TemplateController())
97
meteos/api/versions.py
Normal file
@ -0,0 +1,97 @@
# Copyright 2010 OpenStack LLC.
# Copyright 2015 Clinton Knight
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

from oslo_config import cfg

from meteos.api import openstack
from meteos.api.openstack import api_version_request
from meteos.api.openstack import wsgi
from meteos.api.views import versions as views_versions

CONF = cfg.CONF

_LINKS = [{
    'rel': 'describedby',
    'type': 'text/html',
    'href': 'http://docs.openstack.org/',
}]

_MEDIA_TYPES = [{
    'base': 'application/json',
    'type': 'application/vnd.openstack.learning+json;version=1',
}]

_KNOWN_VERSIONS = {
    'v1.0': {
        'id': 'v1.0',
        'status': 'SUPPORTED',
        'version': '',
        'min_version': '',
        'updated': '2015-08-27T11:33:21Z',
        'links': _LINKS,
        'media-types': _MEDIA_TYPES,
    },
}


class VersionsRouter(openstack.APIRouter):
    """Route versions requests."""

    def _setup_routes(self, mapper, ext_mgr):
        self.resources['versions'] = create_resource()
        mapper.connect('versions', '/',
                       controller=self.resources['versions'],
                       action='all')
        mapper.redirect('', '/')


class VersionsController(wsgi.Controller):

    def __init__(self):
        super(VersionsController, self).__init__(None)

    @wsgi.Controller.api_version('1.0', '1.0')
    def index(self, req):
        """Return versions supported prior to the microversions epoch."""
        builder = views_versions.get_view_builder(req)
        known_versions = copy.deepcopy(_KNOWN_VERSIONS)
        # Only v1.0 is registered in _KNOWN_VERSIONS, so pop defensively:
        # a plain pop('v2.0') would raise KeyError here.
        known_versions.pop('v2.0', None)
        return builder.build_versions(known_versions)

    @wsgi.Controller.api_version('2.0')  # noqa
    def index(self, req):  # pylint: disable=E0102
        """Return versions supported after the start of microversions."""
        builder = views_versions.get_view_builder(req)
        known_versions = copy.deepcopy(_KNOWN_VERSIONS)
        known_versions.pop('v1.0', None)
        return builder.build_versions(known_versions)

    # NOTE(cknight): Calling the versions API without
    # /v1 or /v2 in the URL will lead to this unversioned
    # method, which should always return info about all
    # available versions.
    @wsgi.response(300)
    def all(self, req):
        """Return all known versions."""
        builder = views_versions.get_view_builder(req)
        known_versions = copy.deepcopy(_KNOWN_VERSIONS)
        return builder.build_versions(known_versions)


def create_resource():
    return wsgi.Resource(VersionsController())
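
As a quick check of the routing above: an unversioned GET is dispatched to
VersionsController.all(), which is decorated with @wsgi.response(300)
("Multiple Choices") and reports every known version. A sketch, assuming the
API listens on localhost:8989 (host and port are assumptions):

import requests

resp = requests.get('http://localhost:8989/')
assert resp.status_code == 300
for v in resp.json()['versions']:
    print(v['id'], v['status'], [link['href'] for link in v['links']])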
0
meteos/api/views/__init__.py
Normal file
80
meteos/api/views/datasets.py
Normal file
# Copyright 2013 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from meteos.api import common


class ViewBuilder(common.ViewBuilder):
    """Model a server API response as a python dictionary."""

    _collection_name = 'datasets'
    _detail_version_modifiers = []

    def summary_list(self, request, datasets):
        """Show a list of datasets without many details."""
        return self._list_view(self.summary, request, datasets)

    def detail_list(self, request, datasets):
        """Detailed view of a list of datasets."""
        return self._list_view(self.detail, request, datasets)

    def summary(self, request, dataset):
        """Generic, non-detailed view of a dataset."""
        return {
            'dataset': {
                'id': dataset.get('id'),
                'source_dataset_url': dataset.get('source_dataset_url'),
                'name': dataset.get('display_name'),
                'description': dataset.get('display_description'),
                'status': dataset.get('status'),
                'created_at': dataset.get('created_at'),
                'links': self._get_links(request, dataset['id'])
            }
        }

    def detail(self, request, dataset):
        """Detailed view of a single dataset."""
        context = request.environ['meteos.context']

        dataset_dict = {
            'id': dataset.get('id'),
            'created_at': dataset.get('created_at'),
            'status': dataset.get('status'),
            'name': dataset.get('display_name'),
            'description': dataset.get('display_description'),
            'user_id': dataset.get('user_id'),
            'project_id': dataset.get('project_id'),
            'head': dataset.get('head'),
            'stderr': dataset.get('stderr'),
        }

        self.update_versioned_resource_dict(request, dataset_dict, dataset)

        return {'dataset': dataset_dict}

    def _list_view(self, func, request, datasets):
        """Provide a view for a list of datasets."""
        datasets_list = [func(request, dataset)['dataset']
                         for dataset in datasets]
        datasets_links = self._get_collection_links(request,
                                                    datasets,
                                                    self._collection_name)
        datasets_dict = dict(datasets=datasets_list)

        if datasets_links:
            datasets_dict['datasets_links'] = datasets_links

        return datasets_dict
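
All of the view builders under meteos/api/views follow this same shape: pick
fields off the DB record, rename display_name/display_description to
name/description, and wrap the result under the resource key. A standalone
sketch of the mapping detail() performs; the record values are invented for
illustration:

def to_detail_dict(dataset):
    # Mirrors ViewBuilder.detail()'s field mapping (sketch only).
    return {'dataset': {
        'id': dataset.get('id'),
        'status': dataset.get('status'),
        'name': dataset.get('display_name'),                 # renamed
        'description': dataset.get('display_description'),   # renamed
        'head': dataset.get('head'),
    }}

print(to_detail_dict({'id': 'd1', 'display_name': 'iris',
                      'display_description': 'sample', 'status': 'available',
                      'head': '5.1,3.5,1.4,0.2'}))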
80
meteos/api/views/experiments.py
Normal file
# Copyright 2013 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from meteos.api import common


class ViewBuilder(common.ViewBuilder):
    """Model a server API response as a python dictionary."""

    _collection_name = 'experiments'
    _detail_version_modifiers = []

    def summary_list(self, request, experiments):
        """Show a list of experiments without many details."""
        return self._list_view(self.summary, request, experiments)

    def detail_list(self, request, experiments):
        """Detailed view of a list of experiments."""
        return self._list_view(self.detail, request, experiments)

    def summary(self, request, experiment):
        """Generic, non-detailed view of an experiment."""
        return {
            'experiment': {
                'id': experiment.get('id'),
                'name': experiment.get('display_name'),
                'description': experiment.get('display_description'),
                'status': experiment.get('status'),
                'created_at': experiment.get('created_at'),
                'links': self._get_links(request, experiment['id'])
            }
        }

    def detail(self, request, experiment):
        """Detailed view of a single experiment."""
        context = request.environ['meteos.context']

        experiment_dict = {
            'id': experiment.get('id'),
            'created_at': experiment.get('created_at'),
            'status': experiment.get('status'),
            'name': experiment.get('display_name'),
            'description': experiment.get('display_description'),
            'project_id': experiment.get('project_id'),
            'user_id': experiment.get('user_id'),
            'key_name': experiment.get('key_name'),
            'management_network': experiment.get('neutron_management_network'),
        }

        self.update_versioned_resource_dict(
            request, experiment_dict, experiment)

        return {'experiment': experiment_dict}

    def _list_view(self, func, request, experiments):
        """Provide a view for a list of experiments."""
        experiments_list = [func(request, experiment)['experiment']
                            for experiment in experiments]
        experiments_links = self._get_collection_links(request,
                                                       experiments,
                                                       self._collection_name)
        experiments_dict = dict(experiments=experiments_list)

        if experiments_links:
            experiments_dict['experiments_links'] = experiments_links

        return experiments_dict
84
meteos/api/views/learnings.py
Normal file
# Copyright 2013 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from meteos.api import common


class ViewBuilder(common.ViewBuilder):
    """Model a server API response as a python dictionary."""

    _collection_name = 'learnings'
    _detail_version_modifiers = []

    def summary_list(self, request, learnings):
        """Show a list of learnings without many details."""
        return self._list_view(self.summary, request, learnings)

    def detail_list(self, request, learnings):
        """Detailed view of a list of learnings."""
        return self._list_view(self.detail, request, learnings)

    def summary(self, request, learning):
        """Generic, non-detailed view of a learning."""
        return {
            'learning': {
                'id': learning.get('id'),
                'name': learning.get('display_name'),
                'description': learning.get('display_description'),
                'status': learning.get('status'),
                'type': learning.get('model_type'),
                'args': learning.get('args'),
                'stdout': learning.get('stdout'),
                'created_at': learning.get('created_at'),
                'links': self._get_links(request, learning['id'])
            }
        }

    def detail(self, request, learning):
        """Detailed view of a single learning."""
        context = request.environ['meteos.context']

        learning_dict = {
            'id': learning.get('id'),
            'created_at': learning.get('created_at'),
            'status': learning.get('status'),
            'name': learning.get('display_name'),
            'description': learning.get('display_description'),
            'user_id': learning.get('user_id'),
            'project_id': learning.get('project_id'),
            'stdout': learning.get('stdout'),
            'stderr': learning.get('stderr'),
            'method': learning.get('method'),
            'args': learning.get('args'),
        }

        self.update_versioned_resource_dict(request, learning_dict, learning)

        return {'learning': learning_dict}

    def _list_view(self, func, request, learnings):
        """Provide a view for a list of learnings."""
        learnings_list = [func(request, learning)['learning']
                          for learning in learnings]
        learnings_links = self._get_collection_links(request,
                                                     learnings,
                                                     self._collection_name)
        learnings_dict = dict(learnings=learnings_list)

        if learnings_links:
            learnings_dict['learnings_links'] = learnings_links

        return learnings_dict
82
meteos/api/views/models.py
Normal file
# Copyright 2013 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from meteos.api import common


class ViewBuilder(common.ViewBuilder):
    """Model a server API response as a python dictionary."""

    _collection_name = 'models'
    _detail_version_modifiers = []

    def summary_list(self, request, models):
        """Show a list of models without many details."""
        return self._list_view(self.summary, request, models)

    def detail_list(self, request, models):
        """Detailed view of a list of models."""
        return self._list_view(self.detail, request, models)

    def summary(self, request, model):
        """Generic, non-detailed view of a model."""
        return {
            'model': {
                'id': model.get('id'),
                'source_dataset_url': model.get('source_dataset_url'),
                'name': model.get('display_name'),
                'description': model.get('display_description'),
                'type': model.get('model_type'),
                'status': model.get('status'),
                'created_at': model.get('created_at'),
                'links': self._get_links(request, model['id'])
            }
        }

    def detail(self, request, model):
        """Detailed view of a single model."""
        context = request.environ['meteos.context']

        model_dict = {
            'id': model.get('id'),
            'created_at': model.get('created_at'),
            'status': model.get('status'),
            'name': model.get('display_name'),
            'description': model.get('display_description'),
            'user_id': model.get('user_id'),
            'project_id': model.get('project_id'),
            'type': model.get('model_type'),
            'params': model.get('model_params'),
            'stdout': model.get('stdout'),
            'stderr': model.get('stderr'),
        }

        self.update_versioned_resource_dict(request, model_dict, model)

        return {'model': model_dict}

    def _list_view(self, func, request, models):
        """Provide a view for a list of models."""
        models_list = [func(request, model)['model'] for model in models]
        models_links = self._get_collection_links(request,
                                                  models,
                                                  self._collection_name)
        models_dict = dict(models=models_list)

        if models_links:
            models_dict['models_links'] = models_links

        return models_dict
85
meteos/api/views/templates.py
Normal file
# Copyright 2013 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from meteos.api import common


class ViewBuilder(common.ViewBuilder):
    """Model a server API response as a python dictionary."""

    _collection_name = 'templates'
    _detail_version_modifiers = []

    def summary_list(self, request, templates):
        """Show a list of templates without many details."""
        return self._list_view(self.summary, request, templates)

    def detail_list(self, request, templates):
        """Detailed view of a list of templates."""
        return self._list_view(self.detail, request, templates)

    def summary(self, request, template):
        """Generic, non-detailed view of a template."""
        return {
            'template': {
                'id': template.get('id'),
                'name': template.get('display_name'),
                'description': template.get('display_description'),
                'master_nodes': template.get('master_nodes_num'),
                'worker_nodes': template.get('worker_nodes_num'),
                'spark_version': template.get('spark_version'),
                'status': template.get('status'),
                'links': self._get_links(request, template['id'])
            }
        }

    def detail(self, request, template):
        """Detailed view of a single template."""
        context = request.environ['meteos.context']

        template_dict = {
            'id': template.get('id'),
            'created_at': template.get('created_at'),
            'status': template.get('status'),
            'name': template.get('display_name'),
            'description': template.get('display_description'),
            'user_id': template.get('user_id'),
            'project_id': template.get('project_id'),
            'master_nodes': template.get('master_nodes_num'),
            'master_flavor': template.get('master_flavor_id'),
            'worker_nodes': template.get('worker_nodes_num'),
            'worker_flavor': template.get('worker_flavor_id'),
            'spark_version': template.get('spark_version'),
            'cluster_id': template.get('cluster_id'),
        }

        self.update_versioned_resource_dict(request, template_dict, template)

        return {'template': template_dict}

    def _list_view(self, func, request, templates):
        """Provide a view for a list of templates."""
        templates_list = [func(request, template)['template']
                          for template in templates]
        templates_links = self._get_collection_links(request,
                                                     templates,
                                                     self._collection_name)
        templates_dict = dict(templates=templates_list)

        if templates_links:
            templates_dict['templates_links'] = templates_links

        return templates_dict
66
meteos/api/views/versions.py
Normal file
# Copyright 2010-2011 OpenStack LLC.
# Copyright 2015 Clinton Knight
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy
import re

from six.moves import urllib


def get_view_builder(req):
    return ViewBuilder(req.application_url)


_URL_SUFFIX = {'v1.0': 'v1', 'v2.0': 'v2'}


class ViewBuilder(object):
    def __init__(self, base_url):
        """Initialize ViewBuilder.

        :param base_url: url of the root wsgi application
        """
        self.base_url = base_url

    def build_versions(self, versions):
        views = [self._build_version(versions[key])
                 for key in sorted(list(versions.keys()))]
        return dict(versions=views)

    def _build_version(self, version):
        view = copy.deepcopy(version)
        view['links'] = self._build_links(version)
        return view

    def _build_links(self, version_data):
        """Generate a container of links that refer to the provided version."""
        # The default must be a list: a 'self' link is appended below, and
        # appending to the previous default of {} would raise AttributeError.
        links = copy.deepcopy(version_data.get('links', []))
        version = _URL_SUFFIX.get(version_data['id'])
        links.append({'rel': 'self',
                      'href': self._generate_href(version=version)})
        return links

    def _generate_href(self, version='v1', path=None):
        """Create a URL that refers to a specific version_number."""
        base_url = self._get_base_url_without_version()
        href = urllib.parse.urljoin(base_url, version).rstrip('/') + '/'
        if path:
            href += path.lstrip('/')
        return href

    def _get_base_url_without_version(self):
        """Get the base URL without the /v1 suffix."""
        return re.sub('v[1-9]+/?$', '', self.base_url)
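
A quick worked example of the URL handling above; the base URL is
illustrative:

import re

from six.moves import urllib

base_url = 'http://10.0.0.5:8989/v1/'
# _get_base_url_without_version() strips a trailing /vN segment:
stripped = re.sub('v[1-9]+/?$', '', base_url)    # 'http://10.0.0.5:8989/'
# _generate_href() joins the version suffix back on and normalizes:
href = urllib.parse.urljoin(stripped, 'v1').rstrip('/') + '/'
print(href)                                      # http://10.0.0.5:8989/v1/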
33
meteos/cluster/__init__.py
Normal file
# Copyright 2014 Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import oslo_config.cfg
import oslo_utils.importutils

_cluster_opts = [
    oslo_config.cfg.StrOpt('cluster_api_class',
                           default='meteos.cluster.sahara.API',
                           help='The full class name of the '
                                'Cluster API class to use.'),
]

oslo_config.cfg.CONF.register_opts(_cluster_opts)


def API():
    importutils = oslo_utils.importutils
    cluster_api_class = oslo_config.cfg.CONF.cluster_api_class
    cls = importutils.import_class(cluster_api_class)
    return cls()
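
The API() factory means the cluster backend is swappable through
configuration alone. A minimal sketch of the same indirection in isolation
(nothing here beyond the default class path is Meteos-specific):

from oslo_utils import importutils

# import_class('pkg.mod.Cls') is the programmatic form of
# `from pkg.mod import Cls`; pointing cluster_api_class at another
# implementation swaps the backend without touching any caller.
cls = importutils.import_class('meteos.cluster.sahara.API')
cluster_api = cls()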
386
meteos/cluster/binary/meteos-script-1.6.0.py
Normal file
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# Copyright 2016 NEC Corporation All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


import base64
import socket
import sys
import uuid

from ast import literal_eval
from math import sqrt
from numpy import array

from pyspark import SparkContext
from pyspark.mllib.linalg import SparseVector
from pyspark.mllib.classification import LogisticRegressionWithSGD
from pyspark.mllib.classification import LogisticRegressionModel
from pyspark.mllib.clustering import KMeans, KMeansModel
from pyspark.mllib.recommendation import ALS, MatrixFactorizationModel, Rating
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.regression import LinearRegressionWithSGD
from pyspark.mllib.regression import LinearRegressionModel
from pyspark.mllib.tree import DecisionTree, DecisionTreeModel
from pyspark.mllib.util import MLUtils


class ModelController(object):
    """Defines the interface of a model controller."""

    def __init__(self):
        super(ModelController, self).__init__()

    def create_model(self, data, params):
        """Called to create a model from CSV-formatted data."""
        raise NotImplementedError()

    def create_model_libsvm(self, data, params):
        """Called to create a model from libsvm-formatted data."""
        raise NotImplementedError()

    def load_model(self, context, path):
        """Called to load a saved model."""
        raise NotImplementedError()

    def predict(self, context, params):
        """Called to predict a value."""
        raise NotImplementedError()

    def predict_libsvm(self, context, params):
        """Called to predict a value from libsvm-formatted parameters."""
        raise NotImplementedError()

    def parsePoint(self, line):
        values = [float(s) for s in line.split(',')]
        if values[0] == -1:
            values[0] = 0
        return LabeledPoint(values[0], values[1:])
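
parsePoint turns one CSV line into a labeled training point, normalizing a
-1 label to 0 so both {-1, 1} and {0, 1} labelings work. A worked example of
the same transformation in pure Python (no Spark needed):

line = '-1,0.5,1.25,3.0'
values = [float(s) for s in line.split(',')]   # [-1.0, 0.5, 1.25, 3.0]
if values[0] == -1:
    values[0] = 0                              # label -1 normalized to 0
label, features = values[0], values[1:]
print(label, features)                         # 0 [0.5, 1.25, 3.0]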

class KMeansModelController(ModelController):

    def __init__(self):
        super(KMeansModelController, self).__init__()

    def create_model(self, data, params):

        numClasses = params.get('numClasses', 2)
        numIterations = params.get('numIterations', 10)
        runs = params.get('runs', 10)
        mode = params.get('mode', 'random')

        parsedData = data.map(
            lambda line: array([float(x) for x in line.split(',')]))

        return KMeans.train(parsedData,
                            numClasses,
                            maxIterations=numIterations,
                            runs=runs,
                            initializationMode=mode)

    def load_model(self, context, path):
        return KMeansModel.load(context, path)

    def predict(self, model, params):
        return model.predict(params.split(','))


class RecommendationController(ModelController):

    def __init__(self):
        super(RecommendationController, self).__init__()

    def create_model(self, data, params):

        # Build the recommendation model using Alternating Least Squares.
        rank = params.get('rank', 10)
        numIterations = params.get('numIterations', 10)

        ratings = data.map(lambda l: l.split(','))\
                      .map(lambda l: Rating(int(l[0]), int(l[1]), float(l[2])))

        return ALS.train(ratings, rank, numIterations)

    def load_model(self, context, path):
        return MatrixFactorizationModel.load(context, path)

    def predict(self, model, params):

        parsedData = params.split(',')
        return model.predict(parsedData[0], parsedData[1])
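
The recommendation path expects `user,item,rating` triples for training and
a `user,item` pair for prediction. A sketch of the data shapes, with invented
IDs:

# One training line of the source dataset -> Rating(user, product, rating)
l = '42,7,4.5'.split(',')
user, item, rating = int(l[0]), int(l[1]), float(l[2])

# Prediction params arrive as 'user,item'; model.predict(user, item)
# then returns the predicted rating for that pair.
parsed = '42,7'.split(',')                      # ['42', '7']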

class LinearRegressionModelController(ModelController):

    def __init__(self):
        super(LinearRegressionModelController, self).__init__()

    def create_model(self, data, params):

        iterations = params.get('numIterations', 10)
        step = params.get('step', 0.00000001)

        points = data.map(self.parsePoint)
        return LinearRegressionWithSGD.train(points,
                                             iterations=iterations,
                                             step=step)

    def create_model_libsvm(self, data, params):

        iterations = params.get('numIterations', 10)
        step = params.get('step', 0.00000001)

        return LinearRegressionWithSGD.train(data,
                                             iterations=iterations,
                                             step=step)

    def load_model(self, context, path):
        return LinearRegressionModel.load(context, path)

    def predict(self, model, params):
        return model.predict(params.split(','))

    def predict_libsvm(self, model, params):
        return self.predict(model, params)


class LogisticRegressionModelController(ModelController):

    def __init__(self):
        super(LogisticRegressionModelController, self).__init__()

    def create_model(self, data, params):

        numIterations = params.get('numIterations', 10)

        points = data.map(self.parsePoint)
        return LogisticRegressionWithSGD.train(points, numIterations)

    def create_model_libsvm(self, data, params):

        numIterations = params.get('numIterations', 10)

        return LogisticRegressionWithSGD.train(data, numIterations)

    def load_model(self, context, path):
        return LogisticRegressionModel.load(context, path)

    def predict(self, model, params):
        return model.predict(params.split(','))

    def predict_libsvm(self, model, params):
        return self.predict(model, params)


class DecisionTreeModelController(ModelController):

    def __init__(self):
        super(DecisionTreeModelController, self).__init__()

    def _parse_to_libsvm(self, param):

        index_l = []
        value_l = []

        param_l = param.split(' ')
        param_len = str(len(param_l) * 2)

        for p in param_l:
            index_l.append(str(int(p.split(':')[0]) - 1))
            value_l.append(p.split(':')[1])

        index = ','.join(index_l)
        value = ','.join(value_l)

        parsed_str = '(' + param_len + ', [' + index + '],[' + value + '])'

        return SparseVector.parse(parsed_str)

    def create_model_libsvm(self, data, params):

        impurity = params.get('impurity', 'variance')
        maxDepth = params.get('maxDepth', 5)
        maxBins = params.get('maxBins', 32)

        return DecisionTree.trainRegressor(data,
                                           categoricalFeaturesInfo={},
                                           impurity=impurity,
                                           maxDepth=maxDepth,
                                           maxBins=maxBins)

    def load_model(self, context, path):
        return DecisionTreeModel.load(context, path)

    def predict(self, model, params):
        return model.predict(params.split(','))

    def predict_libsvm(self, model, params):
        parsed_params = self._parse_to_libsvm(params)
        return model.predict(parsed_params)
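
A worked example of the string transformation _parse_to_libsvm performs.
Note the declared vector size is 2 * len(features), which is simply what the
code above computes; this trace does not judge whether that oversizing is
intentional:

param = '1:0.5 3:2.0'                 # libsvm-style '<index>:<value>' pairs
pairs = [p.split(':') for p in param.split(' ')]
indices = [str(int(i) - 1) for i, _ in pairs]       # 1-based -> 0-based
values = [v for _, v in pairs]
size = str(len(pairs) * 2)
parsed = ('(' + size + ', [' + ','.join(indices) +
          '],[' + ','.join(values) + '])')
print(parsed)   # (4, [0,2],[0.5,2.0]) -- then fed to SparseVector.parse()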

class MeteosSparkController(object):

    def init_context(self):

        self.base_hostname = socket.gethostname().split(".")[0]
        master_node = 'spark://' + self.base_hostname + ':7077'
        self.context = SparkContext(master_node, 'INFO')

    def parse_args(self, args):

        self.id = args[3]
        decoded_args = base64.b64decode(args[4])
        self.job_args = literal_eval(decoded_args)

        self.datapath = 'data-' + self.id
        self.modelpath = 'model-' + self.id

    def init_model_controller(self):

        model_type = self.job_args['model']['type']

        if model_type == 'KMeans':
            self.controller = KMeansModelController()
        elif model_type == 'Recommendation':
            self.controller = RecommendationController()
        elif model_type == 'LogisticRegression':
            self.controller = LogisticRegressionModelController()
        elif model_type == 'LinearRegression':
            self.controller = LinearRegressionModelController()
        elif model_type == 'DecisionTreeRegression':
            self.controller = DecisionTreeModelController()

    def save_data(self, collect=True):

        if collect:
            self.data.collect()
        self.data.saveAsTextFile(self.datapath)
        print(self.data.take(10))

    def load_data(self):

        source_dataset_url = self.job_args['source_dataset_url']

        if source_dataset_url.count('swift'):
            swift = self.job_args['swift']
            tenant = swift['tenant']
            username = swift['username']
            password = swift['password']
            container_name = source_dataset_url.split('/')[2]
            object_name = source_dataset_url.split('/')[3]

            prefix = 'fs.swift.service.sahara'
            hconf = self.context._jsc.hadoopConfiguration()
            hconf.set(prefix + '.tenant', tenant)
            hconf.set(prefix + '.username', username)
            hconf.set(prefix + '.password', password)
            hconf.setInt(prefix + ".http.port", 8080)

            self.data = self._load_data(
                'swift://' + container_name + '.sahara/' + object_name)
        else:
            dataset_path = 'data-' + source_dataset_url.split('/')[2]
            self.data = self._load_data(dataset_path)

    def _load_data(self, path):

        dataset_format = self.job_args.get('dataset_format')

        if dataset_format == 'libsvm':
            return MLUtils.loadLibSVMFile(self.context, path)
        else:
            return self.context.textFile(path).cache()

    def create_and_save_model(self):

        model_params = self.job_args['model']['params']
        params = base64.b64decode(model_params)
        list_params = literal_eval(params)

        dataset_format = self.job_args.get('dataset_format')

        if dataset_format == 'libsvm':
            self.model = self.controller.create_model_libsvm(self.data,
                                                             list_params)
        else:
            self.model = self.controller.create_model(self.data, list_params)

        self.model.save(self.context, self.modelpath)

    def download_dataset(self):

        self.load_data()
        self.save_data()

    def parse_dataset(self):

        self.load_data()

        dataset_param = self.job_args['dataset']['params']
        params = base64.b64decode(dataset_param)
        list_params = literal_eval(params)

        cmd = ''

        for param in list_params:
            cmd = cmd + '.' + param['method'] + '(' + param['args'] + ')'

        exec('self.data = self.data' + cmd)
        self.save_data()

    def create_model(self):

        self.load_data()
        self.create_and_save_model()

    def predict(self):

        predict_params = self.job_args['learning']['params']
        params = base64.b64decode(predict_params)

        self.model = self.controller.load_model(self.context, self.modelpath)

        dataset_format = self.job_args.get('dataset_format')

        if dataset_format == 'libsvm':
            self.output = self.controller.predict_libsvm(self.model, params)
        else:
            self.output = self.controller.predict(self.model, params)

        print(self.output)


if __name__ == '__main__':

    meteos = MeteosSparkController()
    meteos.parse_args(sys.argv)
    meteos.init_model_controller()
    meteos.init_context()

    getattr(meteos, meteos.job_args['method'])()
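
To see how this entrypoint is driven: argv[3] carries a job ID and argv[4] a
base64-encoded Python literal, which parse_args() recovers with literal_eval.
A sketch of packing such an argument by hand; the key names mirror what
parse_args(), load_data(), and create_and_save_model() read, while the values
and the spark-submit wiring (handled by Sahara) are assumptions:

import base64

job_args = {
    'method': 'create_model',
    'source_dataset_url': 'internal://DATASET_ID',   # non-swift branch
    'dataset_format': 'csv',
    'model': {
        'type': 'KMeans',
        # Model params are themselves a base64-encoded Python literal:
        'params': base64.b64encode(repr({'numClasses': 3}).encode()).decode(),
    },
}
encoded = base64.b64encode(repr(job_args).encode()).decode()
# The script then runs literal_eval(base64.b64decode(argv[4])) on this.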
177
meteos/cluster/sahara.py
Normal file
# Copyright 2014 Mirantis Inc.
# All Rights Reserved.
# Copyright (c) 2016 NEC Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Handles all requests relating to clusters + sahara.
"""

import copy
import os

from keystoneauth1 import loading as ks_loading
from osc_lib import exceptions as sahara_exception
from oslo_config import cfg
from saharaclient import client as sahara_client
import six

from meteos.common import client_auth
from meteos.common.config import core_opts
import meteos.context as ctxt
from meteos.db import base
from meteos import exception
from meteos.i18n import _

SAHARA_GROUP = 'sahara'

sahara_opts = [
    cfg.StrOpt('auth_url',
               default='http://localhost:5000/v2.0',
               help='Identity service URL.',
               deprecated_group='DEFAULT')
]

CONF = cfg.CONF
CONF.register_opts(core_opts)
CONF.register_opts(sahara_opts, SAHARA_GROUP)
ks_loading.register_session_conf_options(CONF, SAHARA_GROUP)
ks_loading.register_auth_conf_options(CONF, SAHARA_GROUP)


def list_opts():
    return client_auth.AuthClientLoader.list_opts(SAHARA_GROUP)


def saharaclient(context):
    deprecated_opts_for_v2 = {
        'auth_url': CONF.sahara.auth_url,
        'token': context.auth_token,
        'tenant_id': context.tenant,
    }
    AUTH_OBJ = client_auth.AuthClientLoader(
        client_class=sahara_client.Client,
        exception_module=sahara_exception,
        cfg_group=SAHARA_GROUP,
        deprecated_opts_for_v2=deprecated_opts_for_v2,
        url=CONF.sahara.auth_url,
        token=context.auth_token)
    return AUTH_OBJ.get_client(context)


class API(base.Base):
    """API for interacting with the data processing manager."""

    def image_set(self, context, id, user_name):
        item = saharaclient(context).images.update_image(id, user_name)
        return item.image['id']

    def image_tags_add(self, context, id, data):
        saharaclient(context).images.update_tags(id, data)

    def image_remove(self, context, id):
        saharaclient(context).images.unregister_image(id)

    def create_node_group_template(self, context, name, plugin_name, version,
                                   flavor_id, node_processes,
                                   floating_ip_pool, auto_security_group):
        item = saharaclient(context).node_group_templates.create(
            name,
            plugin_name,
            version,
            flavor_id,
            node_processes=node_processes,
            floating_ip_pool=floating_ip_pool,
            auto_security_group=auto_security_group)

        return item.id

    def delete_node_group_template(self, context, id):
        saharaclient(context).node_group_templates.delete(id)

    def create_cluster_template(self, context, name, plugin_name,
                                version, node_groups):
        item = saharaclient(context).cluster_templates.create(
            name,
            plugin_name,
            version,
            node_groups=node_groups)

        return item.id

    def delete_cluster_template(self, context, id):
        saharaclient(context).cluster_templates.delete(id)

    def get_job_binary_data(self, context, id):
        item = saharaclient(context).job_binary_internals.get(id)
        return item.id

    def create_job_binary_data(self, context, name, data):
        item = saharaclient(context).job_binary_internals.create(name, data)
        return item.id

    def delete_job_binary_data(self, context, id):
        saharaclient(context).job_binary_internals.delete(id)

    def create_job_binary(self, context, name, url):
        item = saharaclient(context).job_binaries.create(name, url)
        return item.id

    def delete_job_binary(self, context, id):
        saharaclient(context).job_binaries.delete(id)

    def create_job_template(self, context, name, type, mains):
        item = saharaclient(context).jobs.create(name, type, mains=mains)
        return item.id

    def delete_job_template(self, context, id):
        saharaclient(context).jobs.delete(id)

    def get_node_groups(self, context, id):
        item = saharaclient(context).clusters.get(id)
        return item.node_groups

    def create_cluster(self, context, name, plugin, version, image_id,
                       template_id, keypair, neutron_management_network):

        item = saharaclient(context).clusters.create(
            name,
            plugin,
            version,
            cluster_template_id=template_id,
            default_image_id=image_id,
            user_keypair_id=keypair,
            net_id=neutron_management_network)

        return item.id

    def delete_cluster(self, context, id):
        saharaclient(context).clusters.delete(id)

    def get_cluster(self, context, id):
        item = saharaclient(context).clusters.get(id)
        return item

    def job_create(self, context, job_template_id, cluster_id, configs):
        item = saharaclient(context).job_executions.create(
            job_template_id, cluster_id, configs=configs)
        return item.id

    def job_delete(self, context, id):
        saharaclient(context).job_executions.delete(id)

    def get_job(self, context, id):
        item = saharaclient(context).job_executions.get(id)
        return item
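
A sketch of how a caller might drive this wrapper end to end. The call order
mirrors the methods above; every ID, name, and node-process list is a
placeholder, and `ctx` is assumed to be a meteos RequestContext carrying a
valid auth token:

from meteos.cluster import API

cluster_api = API()   # resolves to meteos.cluster.sahara.API by default

ngt_id = cluster_api.create_node_group_template(
    ctx, 'worker-ngt', 'spark', '1.6.0', 'FLAVOR_ID',
    node_processes=['master', 'slave'],        # hypothetical process names
    floating_ip_pool='public', auto_security_group=True)
ct_id = cluster_api.create_cluster_template(
    ctx, 'spark-ct', 'spark', '1.6.0',
    node_groups=[{'name': 'worker', 'node_group_template_id': ngt_id,
                  'count': 2}])
cluster_id = cluster_api.create_cluster(
    ctx, 'spark-cluster', 'spark', '1.6.0', 'IMAGE_ID', ct_id,
    'my-keypair', 'NETWORK_ID')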
0
meteos/cmd/__init__.py
Normal file
54
meteos/cmd/api.py
Normal file
#!/usr/bin/env python

# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Starter script for meteos OS API."""

import eventlet
eventlet.monkey_patch()

import sys

from oslo_config import cfg
from oslo_log import log

from meteos import i18n
i18n.enable_lazy()

from meteos.common import config  # Need to register global_opts  # noqa
from meteos import service
from meteos import utils
from meteos import version

CONF = cfg.CONF


def main():
    log.register_options(CONF)
    CONF(sys.argv[1:], project='meteos',
         version=version.version_string())
    log.setup(CONF, "meteos")
    utils.monkey_patch()

    launcher = service.process_launcher()
    server = service.WSGIService('osapi_learning')
    launcher.launch_service(server, workers=server.workers or 1)
    launcher.wait()


if __name__ == '__main__':
    main()
60
meteos/cmd/engine.py
Normal file
#!/usr/bin/env python

# Copyright 2013 NetApp
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Starter script for meteos Learning."""

import eventlet
eventlet.monkey_patch()

import sys

from oslo_config import cfg
from oslo_log import log

from meteos import i18n
i18n.enable_lazy()

from meteos.common import config  # Need to register global_opts  # noqa
from meteos import service
from meteos import utils
from meteos import version

CONF = cfg.CONF


def main():
    log.register_options(CONF)
    CONF(sys.argv[1:], project='meteos',
         version=version.version_string())
    log.setup(CONF, "meteos")
    utils.monkey_patch()
    launcher = service.process_launcher()
    if CONF.enabled_learning_backends:
        for backend in CONF.enabled_learning_backends:
            host = "%s@%s" % (CONF.host, backend)
            server = service.Service.create(host=host,
                                            service_name=backend,
                                            binary='meteos-engine')
            launcher.launch_service(server)
    else:
        server = service.Service.create(binary='meteos-engine')
        launcher.launch_service(server)
    launcher.wait()


if __name__ == '__main__':
    main()
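
When enabled_learning_backends lists several backends, each gets its own
service whose host is reported as `<host>@<backend>`. A sketch of the
corresponding meteos.conf; the option names come from the loop above, while
the section and values are assumptions:

[DEFAULT]
host = meteos-node1
enabled_learning_backends = spark1,spark2

# yields two engine services: meteos-node1@spark1 and meteos-node1@spark2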
465
meteos/cmd/manage.py
Normal file
#!/usr/bin/env python

# Copyright (c) 2011 X.commerce, a business unit of eBay Inc.
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# Interactive shell based on Django:
#
# Copyright (c) 2005, the Lawrence Journal-World
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
#     1. Redistributions of source code must retain the above copyright
#        notice, this list of conditions and the following disclaimer.
#
#     2. Redistributions in binary form must reproduce the above copyright
#        notice, this list of conditions and the following disclaimer in the
#        documentation and/or other materials provided with the distribution.
#
#     3. Neither the name of Django nor the names of its contributors may be
#        used to endorse or promote products derived from this software
#        without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


"""
CLI interface for meteos management.
"""

from __future__ import print_function

import os
import sys

from meteos import i18n
i18n.enable_lazy()

from oslo_config import cfg
from oslo_log import log
from oslo_utils import uuidutils

from meteos.common import config  # Need to register global_opts  # noqa
from meteos import context
from meteos import db
from meteos.db import migration
from meteos.i18n import _
from meteos import utils
from meteos import version

CONF = cfg.CONF


# Decorators for actions
def args(*args, **kwargs):
    def _decorator(func):
        func.__dict__.setdefault('args', []).insert(0, (args, kwargs))
        return func
    return _decorator


def param2id(object_id):
    """Helper function to convert various id types to internal id.

    args: [object_id], e.g. 'vol-0000000a' or 'volume-0000000a' or '10'
    """
    if uuidutils.is_uuid_like(object_id):
        return object_id
    elif '-' in object_id:
        # FIXME(ja): mapping occurs in nova?
        pass
    else:
        return int(object_id)
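
The @args decorator simply accumulates argparse specs on the function object;
add_command_parsers() below replays them onto each action's subparser. A
minimal self-contained sketch:

def args(*args, **kwargs):
    def _decorator(func):
        func.__dict__.setdefault('args', []).insert(0, (args, kwargs))
        return func
    return _decorator


@args('--message', help='Revision message')
@args('--autogenerate', help='Autogenerate migration from schema')
def revision(message, autogenerate):
    pass


# Decorators apply bottom-up, but insert(0, ...) restores source order:
print(revision.args)
# [(('--message',), {'help': 'Revision message'}),
#  (('--autogenerate',), {'help': 'Autogenerate migration from schema'})]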

class ShellCommands(object):

    def bpython(self):
        """Runs a bpython shell.

        Falls back to an IPython/python shell if unavailable.
        """
        self.run('bpython')

    def ipython(self):
        """Runs an IPython shell.

        Falls back to a Python shell if unavailable.
        """
        self.run('ipython')

    def python(self):
        """Runs a plain Python shell."""
        self.run('python')

    @args('--shell', dest="shell",
          metavar='<bpython|ipython|python>',
          help='Python shell')
    def run(self, shell=None):
        """Runs a Python interactive interpreter."""
        if not shell:
            shell = 'bpython'

        if shell == 'bpython':
            try:
                import bpython
                bpython.embed()
            except ImportError:
                shell = 'ipython'
        if shell == 'ipython':
            try:
                from IPython import embed
                embed()
            except ImportError:
                # IPython < 0.11
                try:
                    import IPython

                    # Explicitly pass an empty list as arguments, because
                    # otherwise IPython would use sys.argv from this script.
                    shell = IPython.Shell.IPShell(argv=[])
                    shell.mainloop()
                except ImportError:
                    # no IPython module
                    shell = 'python'

        if shell == 'python':
            import code
            try:
                # Try activating rlcompleter, because it's handy.
                import readline
            except ImportError:
                pass
            else:
                # We don't have to wrap the following import in a 'try',
                # because we already know 'readline' was imported successfully.
                import rlcompleter  # noqa
                readline.parse_and_bind("tab:complete")
            code.interact()

    @args('--path', required=True, help='Script path')
    def script(self, path):
        """Runs the script from the specified path with flags set properly.

        arguments: path
        """
        exec(compile(open(path).read(), path, 'exec'), locals(), globals())


class HostCommands(object):
    """List hosts."""

    @args('zone', nargs='?', default=None,
          help='Availability Zone (default: %(default)s)')
    def list(self, zone=None):
        """Show a list of all physical hosts. Filter by zone.

        args: [zone]
        """
        print("%-25s\t%-15s" % (_('host'), _('zone')))
        ctxt = context.get_admin_context()
        services = db.service_get_all(ctxt)
        if zone:
            services = [
                s for s in services if s['availability_zone']['name'] == zone]
        hosts = []
        for srv in services:
            if not [h for h in hosts if h['host'] == srv['host']]:
                hosts.append(srv)

        for h in hosts:
            print("%-25s\t%-15s" % (h['host'], h['availability_zone']['name']))


class DbCommands(object):
    """Class for managing the database."""

    def __init__(self):
        pass

    @args('version', nargs='?', default=None,
          help='Database version')
    def sync(self, version=None):
        """Sync the database up to the most recent version."""
        return migration.upgrade(version)

    def version(self):
        """Print the current database version."""
        print(migration.version())

    # NOTE(imalinovskiy):
    # The Meteos init migration is hardcoded here because alembic has
    # surprising behaviour:
    #   downgrade base = downgrade from head(001) -> base(001)
    #                  = downgrade from 001 -> (empty) [ERROR]
    #   downgrade 001  = downgrade from head(001) -> 001
    #                  = do nothing [OK]
    @args('version', nargs='?', default='001',
          help='Version to downgrade')
    def downgrade(self, version=None):
        """Downgrade database to the given version."""
        return migration.downgrade(version)

    @args('--message', help='Revision message')
    @args('--autogenerate', help='Autogenerate migration from schema')
    def revision(self, message, autogenerate):
        """Generate new migration."""
        return migration.revision(message, autogenerate)

    @args('version', nargs='?', default=None,
          help='Version to stamp version table with')
    def stamp(self, version=None):
        """Stamp the version table with the given version."""
        return migration.stamp(version)


class VersionCommands(object):
    """Class for exposing the codebase version."""

    def list(self):
        print(version.version_string())

    def __call__(self):
        self.list()


class ConfigCommands(object):
    """Class for exposing the flags defined by flag_file(s)."""

    def list(self):
        for key, value in CONF.items():
            if value is not None:
                print('%s = %s' % (key, value))


class GetLogCommands(object):
    """Get logging information."""

    def errors(self):
        """Get all of the errors from the log files."""
        error_found = 0
        if CONF.log_dir:
            logs = [x for x in os.listdir(CONF.log_dir) if x.endswith('.log')]
            for file in logs:
                log_file = os.path.join(CONF.log_dir, file)
                lines = [line.strip() for line in open(log_file, "r")]
                lines.reverse()
                print_name = 0
                for index, line in enumerate(lines):
                    if line.find(" ERROR ") > 0:
                        error_found += 1
                        if print_name == 0:
                            print(log_file + ":-")
                            print_name = 1
                        print("Line %d : %s" % (len(lines) - index, line))
        if error_found == 0:
            print("No errors in logfiles!")

    @args('num_entries', nargs='?', type=int, default=10,
          help='Number of entries to list (default: %(default)d)')
    def syslog(self, num_entries=10):
        """Get <num_entries> of the meteos syslog events."""
        entries = int(num_entries)
        count = 0
        log_file = ''
        if os.path.exists('/var/log/syslog'):
            log_file = '/var/log/syslog'
        elif os.path.exists('/var/log/messages'):
            log_file = '/var/log/messages'
        else:
            print("Unable to find system log file!")
            sys.exit(1)
        lines = [line.strip() for line in open(log_file, "r")]
        lines.reverse()
        print("Last %s meteos syslog entries:-" % (entries))
        for line in lines:
            if line.find("meteos") > 0:
                count += 1
                print("%s" % (line))
            if count == entries:
                break

        if count == 0:
            print("No meteos entries in syslog!")


class ServiceCommands(object):
    """Methods for managing services."""

    def list(self):
        """Show a list of all meteos services."""
        ctxt = context.get_admin_context()
        services = db.service_get_all(ctxt)
        print_format = "%-16s %-36s %-16s %-10s %-5s %-10s"
        print(print_format % (
            _('Binary'),
            _('Host'),
            _('Zone'),
            _('Status'),
            _('State'),
            _('Updated At'))
        )
        for svc in services:
            alive = utils.service_is_up(svc)
            art = ":-)" if alive else "XXX"
            status = 'enabled'
            if svc['disabled']:
                status = 'disabled'
            print(print_format % (
                svc['binary'],
                svc['host'].partition('.')[0],
                svc['availability_zone']['name'],
                status,
                art,
                svc['updated_at'],
            ))


CATEGORIES = {
    'config': ConfigCommands,
    'db': DbCommands,
    'host': HostCommands,
    'logs': GetLogCommands,
    'service': ServiceCommands,
    'shell': ShellCommands,
    'version': VersionCommands
}


def methods_of(obj):
    """Get all callable methods of an object that don't start with underscore.

    Returns a list of tuples of the form (method_name, method).
    """
    result = []
    for i in dir(obj):
        if callable(getattr(obj, i)) and not i.startswith('_'):
            result.append((i, getattr(obj, i)))
    return result


def add_command_parsers(subparsers):
    for category in CATEGORIES:
        command_object = CATEGORIES[category]()

        parser = subparsers.add_parser(category)
        parser.set_defaults(command_object=command_object)

        category_subparsers = parser.add_subparsers(dest='action')

        for (action, action_fn) in methods_of(command_object):
            parser = category_subparsers.add_parser(action)

            action_kwargs = []
            for args, kwargs in getattr(action_fn, 'args', []):
                parser.add_argument(*args, **kwargs)

            parser.set_defaults(action_fn=action_fn)
            parser.set_defaults(action_kwargs=action_kwargs)


category_opt = cfg.SubCommandOpt('category',
                                 title='Command categories',
                                 handler=add_command_parsers)


def get_arg_string(args):
    arg = None
    if args[0] == '-':
        # NOTE(zhiteng): An argument starting with CONF.oparser.prefix_chars
        # is an optional argument. The cfg module owns the actual ArgParser,
        # so prefix_chars is always '-'.
        if args[1] == '-':
            # This is a long optional arg
            arg = args[2:]
        else:
            arg = args[1:]
    else:
        arg = args

    return arg


def fetch_func_args(func):
    fn_args = []
    for args, kwargs in getattr(func, 'args', []):
        arg = get_arg_string(args[0])
        fn_args.append(getattr(CONF.category, arg))

    return fn_args


def main():
    """Parse options and call the appropriate class/method."""
    CONF.register_cli_opt(category_opt)
    script_name = sys.argv[0]
    if len(sys.argv) < 2:
        print(_("\nOpenStack meteos version: %(version)s\n") %
              {'version': version.version_string()})
        print(script_name + " category action [<args>]")
        print(_("Available categories:"))
        for category in CATEGORIES:
            print("\t%s" % category)
        sys.exit(2)

    try:
        log.register_options(CONF)
        CONF(sys.argv[1:], project='meteos',
             version=version.version_string())
        log.setup(CONF, "meteos")
    except cfg.ConfigFilesNotFoundError:
        cfgfile = CONF.config_file[-1] if CONF.config_file else None
        if cfgfile and not os.access(cfgfile, os.R_OK):
            st = os.stat(cfgfile)
            print(_("Could not read %s. Re-running with sudo") % cfgfile)
            try:
                os.execvp('sudo', ['sudo', '-u', '#%s' % st.st_uid] + sys.argv)
            except Exception:
                print(_('sudo failed, continuing as if nothing happened'))

        print(_('Please re-run meteos-manage as root.'))
        sys.exit(2)

    fn = CONF.category.action_fn

    fn_args = fetch_func_args(fn)
    fn(*fn_args)


if __name__ == '__main__':
    main()
|
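The dispatch above is compact but easy to miss: methods_of() turns every public method of a category class into an argparse sub-subcommand, and @args metadata becomes add_argument() calls. A minimal, self-contained sketch of the same pattern, using plain argparse instead of oslo.config's SubCommandOpt; all names here are illustrative, not part of the commit:

import argparse


def args(*posargs, **kwargs):
    """Attach argparse argument specs to a function, as meteos-manage does."""
    def _decorator(func):
        func.__dict__.setdefault('args', []).insert(0, (posargs, kwargs))
        return func
    return _decorator


class DemoCommands(object):
    """Stand-in for a category class such as GetLogCommands."""

    @args('name', help='Who to greet')
    def greet(self, name):
        print('Hello, %s' % name)


def methods_of(obj):
    return [(n, getattr(obj, n)) for n in dir(obj)
            if callable(getattr(obj, n)) and not n.startswith('_')]


parser = argparse.ArgumentParser(prog='demo-manage')
categories = parser.add_subparsers(dest='category')
demo = categories.add_parser('demo')
actions = demo.add_subparsers(dest='action')
for action, fn in methods_of(DemoCommands()):
    sub = actions.add_parser(action)
    for a, kw in getattr(fn, 'args', []):
        sub.add_argument(*a, **kw)
    sub.set_defaults(action_fn=fn)

ns = parser.parse_args(['demo', 'greet', 'world'])
ns.action_fn(ns.name)  # prints "Hello, world"

In the real tool the parsed values live on CONF.category rather than the argparse namespace, which is why fetch_func_args() reads them back with getattr(CONF.category, arg).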
0
meteos/common/__init__.py
Normal file
107
meteos/common/client_auth.py
Normal file
@ -0,0 +1,107 @@
# Copyright 2016 SAP SE
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

from keystoneauth1 import loading as ks_loading
from keystoneauth1.loading._plugins.identity import v2
from oslo_config import cfg
from oslo_log import log

from meteos import exception
from meteos.i18n import _, _LW

CONF = cfg.CONF
LOG = log.getLogger(__name__)

"""Helper class to support keystone v2 and v3 for clients

Builds auth and session context before instantiation of the actual
client. In order to build this context, a dedicated config group is
needed to load all required parameters dynamically.

"""


class AuthClientLoader(object):

    def __init__(self, client_class, exception_module, cfg_group,
                 deprecated_opts_for_v2=None, url=None, token=None):
        self.client_class = client_class
        self.exception_module = exception_module
        self.group = cfg_group
        self.admin_auth = None
        self.conf = CONF
        self.session = None
        self.auth_plugin = None
        self.deprecated_opts_for_v2 = deprecated_opts_for_v2
        self.url = url
        self.token = token

    @staticmethod
    def list_opts(group):
        """Generates a list of config options for a given group.

        :param group: group name
        :return: list of auth default configuration
        """
        opts = copy.deepcopy(ks_loading.register_session_conf_options(
            CONF, group))
        opts.insert(0, ks_loading.get_auth_common_conf_options()[0])

        for plugin_option in ks_loading.get_auth_plugin_conf_options(
                'password'):
            found = False
            for option in opts:
                if option.name == plugin_option.name:
                    found = True
                    break
            if not found:
                opts.append(plugin_option)
        opts.sort(key=lambda x: x.name)
        return [(group, opts)]

    def _load_auth_plugin(self):
        if self.admin_auth:
            return self.admin_auth
        self.auth_plugin = ks_loading.load_auth_from_conf_options(
            CONF, self.group)

        self.auth_plugin = v2.Token().load_from_options(
            **self.deprecated_opts_for_v2)

        if self.auth_plugin:
            return self.auth_plugin

        msg = _('Cannot load auth plugin for %s') % self.group
        raise self.exception_module.Unauthorized(message=msg)

    def get_client(self, context, admin=False, **kwargs):
        """Gets the client with the correct auth/session context."""
        if not self.session:
            self.session = ks_loading.load_session_from_conf_options(
                self.conf, self.group)

        if not self.admin_auth:
            self.admin_auth = self._load_auth_plugin()
        auth_plugin = self.admin_auth

        return self.client_class(version='1.0',
                                 session=self.session,
                                 auth=auth_plugin,
                                 **kwargs)
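A usage sketch (not part of the commit) of how AuthClientLoader is meant to be driven. FakeClient stands in for a real keystoneauth1-aware python-*client, and the '[engine]' config group plus the v2 option keys are assumptions for illustration:

from meteos.common import client_auth
from meteos import exception


class FakeClient(object):
    """Stands in for a real client class, e.g. a saharaclient Client."""
    def __init__(self, version, session, auth, **kwargs):
        self.session, self.auth = session, auth


loader = client_auth.AuthClientLoader(
    client_class=FakeClient, exception_module=exception,
    cfg_group='engine',  # assumed config group name
    deprecated_opts_for_v2={'auth_url': 'http://localhost:5000/v2.0',
                            'token': 'secret'})  # assumed option keys
# On first use, get_client() loads the session from the [engine] group and
# caches the auth plugin; subsequent calls reuse both:
# client = loader.get_client(admin_context)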
178
meteos/common/config.py
Normal file
@ -0,0 +1,178 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
# Copyright 2012 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Command-line flag library.

Emulates gflags by wrapping cfg.ConfigOpts.

The idea is to move fully to cfg eventually, and this wrapper is a
stepping stone.

"""

import socket

from oslo_config import cfg
from oslo_log import log
from oslo_middleware import cors
from oslo_utils import netutils
import six

from meteos.common import constants
from meteos import exception
from meteos.i18n import _

CONF = cfg.CONF
log.register_options(CONF)


core_opts = [
    cfg.StrOpt('api_paste_config',
               default="api-paste.ini",
               help='File name for the paste.deploy config for meteos-api.'),
    cfg.StrOpt('state_path',
               default='/var/lib/meteos',
               help="Top-level directory for maintaining meteos's state."),
    cfg.StrOpt('os_region_name',
               help='Region name of this node.'),
]

debug_opts = [
]

CONF.register_cli_opts(core_opts)
CONF.register_cli_opts(debug_opts)

global_opts = [
    cfg.StrOpt('my_ip',
               default=netutils.get_my_ipv4(),
               sample_default='<your_ip>',
               help='IP address of this host.'),
    cfg.StrOpt('scheduler_topic',
               default='meteos-scheduler',
               help='The topic scheduler nodes listen on.'),
    cfg.StrOpt('learning_topic',
               default='meteos-engine',
               help='The topic learning nodes listen on.'),
    cfg.StrOpt('data_topic',
               default='meteos-data',
               help='The topic data nodes listen on.'),
    cfg.BoolOpt('api_rate_limit',
                default=True,
                help='Whether to rate limit the API.'),
    cfg.ListOpt('osapi_learning_ext_list',
                default=[],
                help='Specify list of extensions to load when using osapi_'
                     'learning_extension option with meteos.api.contrib.'
                     'select_extensions.'),
    cfg.ListOpt('osapi_learning_extension',
                default=['meteos.api.contrib.standard_extensions'],
                help='The osapi learning extensions to load.'),
    cfg.StrOpt('sqlite_db',
               default='meteos.sqlite',
               help='The filename to use with sqlite.'),
    cfg.BoolOpt('sqlite_synchronous',
                default=True,
                help='If passed, use synchronous mode for sqlite.'),
    cfg.IntOpt('sql_idle_timeout',
               default=3600,
               help='Timeout before idle SQL connections are reaped.'),
    cfg.IntOpt('sql_max_retries',
               default=10,
               help='Maximum database connection retries during startup. '
                    '(setting -1 implies an infinite retry count).'),
    cfg.IntOpt('sql_retry_interval',
               default=10,
               help='Interval between retries of opening a SQL connection.'),
    cfg.StrOpt('engine_manager',
               default='meteos.engine.manager.LearningManager',
               help='Full class name for the learning manager.'),
    cfg.StrOpt('host',
               default=socket.gethostname(),
               sample_default='<your_hostname>',
               help='Name of this node. This can be an opaque identifier. '
                    'It is not necessarily a hostname, FQDN, or IP address.'),
    # NOTE(vish): default to nova for compatibility with nova installs
    cfg.StrOpt('storage_availability_zone',
               default='nova',
               help='Availability zone of this node.'),
    cfg.StrOpt('default_learning_type',
               help='Default learning type to use.'),
    cfg.ListOpt('memcached_servers',
                help='Memcached servers or None for in process cache.'),
    cfg.StrOpt('learning_usage_audit_period',
               default='month',
               help='Time period to generate learning usages for. '
                    'Time period must be hour, day, month or year.'),
    cfg.StrOpt('root_helper',
               default='sudo',
               help='Deprecated: command to use for running commands as '
                    'root.'),
    cfg.StrOpt('rootwrap_config',
               help='Path to the rootwrap configuration file to use for '
                    'running commands as root.'),
    cfg.BoolOpt('monkey_patch',
                default=False,
                help='Whether to log monkey patching.'),
    cfg.ListOpt('monkey_patch_modules',
                default=[],
                help='List of modules or decorators to monkey patch.'),
    cfg.IntOpt('service_down_time',
               default=60,
               help='Maximum time since last check-in for up service.'),
    cfg.StrOpt('learning_api_class',
               default='meteos.engine.api.API',
               help='The full class name of the learning API class to use.'),
    cfg.StrOpt('auth_strategy',
               default='keystone',
               help='The strategy to use for auth. Supports noauth, '
                    'keystone, and deprecated.'),
    cfg.ListOpt('enabled_learning_backends',
                help='A list of learning backend names to use. These backend '
                     'names should be backed by a unique [CONFIG] group '
                     'with its options.'),
]

CONF.register_opts(global_opts)


def set_middleware_defaults():
    """Update default configuration options for oslo.middleware."""
    # CORS Defaults
    # TODO(krotscheck): Update with https://review.openstack.org/#/c/285368/
    cfg.set_defaults(cors.CORS_OPTS,
                     allow_headers=['X-Auth-Token',
                                    'X-OpenStack-Request-ID',
                                    'X-Openstack-Meteos-Api-Version',
                                    'X-OpenStack-Meteos-API-Experimental',
                                    'X-Identity-Status',
                                    'X-Roles',
                                    'X-Service-Catalog',
                                    'X-User-Id',
                                    'X-Tenant-Id'],
                     expose_headers=['X-Auth-Token',
                                     'X-OpenStack-Request-ID',
                                     'X-Openstack-Meteos-Api-Version',
                                     'X-OpenStack-Meteos-API-Experimental',
                                     'X-Subject-Token',
                                     'X-Service-Token'],
                     allow_methods=['GET',
                                    'PUT',
                                    'POST',
                                    'DELETE',
                                    'PATCH']
                     )
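A small sketch (not part of the commit) of what registering core_opts/global_opts buys callers: any module that imports meteos.common.config can read the values directly off cfg.CONF, with config-file and CLI overrides already applied. The config-file path below is hypothetical:

from oslo_config import cfg

from meteos.common import config  # noqa: importing registers the options

CONF = cfg.CONF
# CONF(['--config-file', '/etc/meteos/meteos.conf'])  # hypothetical path
print(CONF.state_path)      # '/var/lib/meteos' unless overridden
print(CONF.learning_topic)  # 'meteos-engine'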
28
meteos/common/constants.py
Normal file
@ -0,0 +1,28 @@
# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

STATUS_NEW = 'new'
STATUS_CREATING = 'creating'
STATUS_DELETING = 'deleting'
STATUS_DELETED = 'deleted'
STATUS_ERROR = 'error'
STATUS_ERROR_DELETING = 'error_deleting'
STATUS_AVAILABLE = 'available'
STATUS_ACTIVE = 'active'
STATUS_INACTIVE = 'inactive'
STATUS_UPDATING = 'updating'
STATUS_SAHARA_ACTIVE = 'Active'
STATUS_JOB_SUCCESS = 'SUCCEEDED'
STATUS_JOB_ERROR = 'DONEWITHERROR'
151
meteos/context.py
Normal file
@ -0,0 +1,151 @@
# Copyright 2011 OpenStack LLC.
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""RequestContext: context for requests that persist through all of meteos."""

import copy

from oslo_context import context
from oslo_log import log
from oslo_utils import timeutils
import six

from meteos.i18n import _, _LW
from meteos import policy

LOG = log.getLogger(__name__)


class RequestContext(context.RequestContext):
    """Security context and request information.

    Represents the user taking a given action within the system.

    """

    def __init__(self, user_id, project_id, is_admin=None, read_deleted="no",
                 roles=None, remote_address=None, timestamp=None,
                 request_id=None, auth_token=None, overwrite=True,
                 quota_class=None, service_catalog=None, **kwargs):
        """Initialize RequestContext.

        :param read_deleted: 'no' indicates deleted records are hidden, 'yes'
            indicates deleted records are visible, 'only' indicates that
            *only* deleted records are visible.

        :param overwrite: Set to False to ensure that the greenthread local
            copy of the index is not overwritten.

        :param kwargs: Extra arguments that might be present, but we ignore
            because they possibly came in from older rpc messages.
        """

        user = kwargs.pop('user', None)
        tenant = kwargs.pop('tenant', None)
        super(RequestContext, self).__init__(
            auth_token=auth_token,
            user=user_id or user,
            tenant=project_id or tenant,
            domain=kwargs.pop('domain', None),
            user_domain=kwargs.pop('user_domain', None),
            project_domain=kwargs.pop('project_domain', None),
            is_admin=is_admin,
            read_only=kwargs.pop('read_only', False),
            show_deleted=kwargs.pop('show_deleted', False),
            request_id=request_id,
            resource_uuid=kwargs.pop('resource_uuid', None),
            overwrite=overwrite,
            roles=roles)

        kwargs.pop('user_identity', None)
        if kwargs:
            LOG.warning(_LW('Arguments dropped when creating context: %s.'),
                        str(kwargs))
        self.user_id = self.user
        self.project_id = self.tenant

        if self.is_admin is None:
            self.is_admin = policy.check_is_admin(self.roles)
        elif self.is_admin and 'admin' not in self.roles:
            self.roles.append('admin')
        self.read_deleted = read_deleted
        self.remote_address = remote_address
        if not timestamp:
            timestamp = timeutils.utcnow()
        if isinstance(timestamp, six.string_types):
            timestamp = timeutils.parse_strtime(timestamp)
        self.timestamp = timestamp
        if service_catalog:
            self.service_catalog = [s for s in service_catalog
                                    if s.get('type') in ('compute', 'volume')]
        else:
            self.service_catalog = []

        self.quota_class = quota_class

    def _get_read_deleted(self):
        return self._read_deleted

    def _set_read_deleted(self, read_deleted):
        if read_deleted not in ('no', 'yes', 'only'):
            raise ValueError(_("read_deleted can only be one of 'no', "
                               "'yes' or 'only', not %r") % read_deleted)
        self._read_deleted = read_deleted

    def _del_read_deleted(self):
        del self._read_deleted

    read_deleted = property(_get_read_deleted, _set_read_deleted,
                            _del_read_deleted)

    def to_dict(self):
        values = super(RequestContext, self).to_dict()
        values.update({
            'user_id': getattr(self, 'user_id', None),
            'project_id': getattr(self, 'project_id', None),
            'read_deleted': getattr(self, 'read_deleted', None),
            'remote_address': getattr(self, 'remote_address', None),
            'timestamp': self.timestamp.isoformat() if hasattr(
                self, 'timestamp') else None,
            'quota_class': getattr(self, 'quota_class', None),
            'service_catalog': getattr(self, 'service_catalog', None)})
        return values

    @classmethod
    def from_dict(cls, values):
        return cls(**values)

    def elevated(self, read_deleted=None, overwrite=False):
        """Return a version of this context with admin flag set."""
        ctx = copy.deepcopy(self)
        ctx.is_admin = True

        if 'admin' not in ctx.roles:
            ctx.roles.append('admin')

        if read_deleted is not None:
            ctx.read_deleted = read_deleted

        return ctx


def get_admin_context(read_deleted="no"):
    return RequestContext(user_id=None,
                          project_id=None,
                          is_admin=True,
                          read_deleted=read_deleted,
                          overwrite=False)
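An illustrative sketch (not in the commit): the to_dict()/from_dict() pair is what lets a context cross RPC boundaries as a plain dict, and elevated() is how a service temporarily acts as admin on a copy. The IDs below are made up:

from meteos import context

ctxt = context.RequestContext(user_id='u-123', project_id='p-456',
                              is_admin=False, roles=['member'])
wire = ctxt.to_dict()                      # plain dict, safe to serialize
restored = context.RequestContext.from_dict(wire)

admin_ctxt = restored.elevated()           # deep copy with is_admin=True
assert admin_ctxt.is_admin and 'admin' in admin_ctxt.roles
assert not restored.is_admin               # the original is left untouched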
20
meteos/db/__init__.py
Normal file
@ -0,0 +1,20 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
DB abstraction for Meteos
"""

from meteos.db.api import *  # noqa
328
meteos/db/api.py
Normal file
@ -0,0 +1,328 @@
# Copyright (c) 2011 X.commerce, a business unit of eBay Inc.
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Defines interface for DB access.

The underlying driver is loaded as a :class:`LazyPluggable`.

Functions in this module are imported into the meteos.db namespace. Call these
functions from the meteos.db namespace, not the meteos.db.api namespace.

All functions in this module return objects that implement a dictionary-like
interface. Currently, many of these objects are sqlalchemy objects that
implement a dictionary interface. However, a future goal is to have all of
these objects be simple dictionaries.


**Related Flags**

:backend:  string to lookup in the list of LazyPluggable backends.
           `sqlalchemy` is the only supported backend right now.

:connection:  string specifying the sqlalchemy connection to use, like:
              `sqlite:///var/lib/meteos/meteos.sqlite`.

:enable_new_services:  when adding a new service to the database, is it in the
                       pool of available hardware (Default: True)

"""
from oslo_config import cfg
from oslo_db import api as db_api

db_opts = [
    cfg.StrOpt('db_backend',
               default='sqlalchemy',
               help='The backend to use for database.'),
    cfg.BoolOpt('enable_new_services',
                default=True,
                help='Services to be added to the available pool on create.'),
    cfg.StrOpt('learning_name_template',
               default='learning-%s',
               help='Template string to be used to generate learning names.'),
    cfg.StrOpt('learning_snapshot_name_template',
               default='learning-snapshot-%s',
               help='Template string to be used to generate learning snapshot '
                    'names.'),
]

CONF = cfg.CONF
CONF.register_opts(db_opts)

_BACKEND_MAPPING = {'sqlalchemy': 'meteos.db.sqlalchemy.api'}
IMPL = db_api.DBAPI.from_config(cfg.CONF, backend_mapping=_BACKEND_MAPPING,
                                lazy=True)


def authorize_project_context(context, project_id):
    """Ensures a request has permission to access the given project."""
    return IMPL.authorize_project_context(context, project_id)


def authorize_quota_class_context(context, class_name):
    """Ensures a request has permission to access the given quota class."""
    return IMPL.authorize_quota_class_context(context, class_name)


#


def service_destroy(context, service_id):
    """Destroy the service or raise if it does not exist."""
    return IMPL.service_destroy(context, service_id)


def service_get(context, service_id):
    """Get a service or raise if it does not exist."""
    return IMPL.service_get(context, service_id)


def service_get_by_host_and_topic(context, host, topic):
    """Get a service by the host it's on and the topic it listens to."""
    return IMPL.service_get_by_host_and_topic(context, host, topic)


def service_get_all(context, disabled=None):
    """Get all services."""
    return IMPL.service_get_all(context, disabled)


def service_get_all_by_topic(context, topic):
    """Get all services for a given topic."""
    return IMPL.service_get_all_by_topic(context, topic)


def service_get_all_learning_sorted(context):
    """Get all learning services sorted by learning count.

    :returns: a list of (Service, learning_count) tuples.

    """
    return IMPL.service_get_all_learning_sorted(context)


def service_get_by_args(context, host, binary):
    """Get the state of a service by node name and binary."""
    return IMPL.service_get_by_args(context, host, binary)


def service_create(context, values):
    """Create a service from the values dictionary."""
    return IMPL.service_create(context, values)


def service_update(context, service_id, values):
    """Set the given properties on a service and update it.

    Raises NotFound if the service does not exist.

    """
    return IMPL.service_update(context, service_id, values)


#


def experiment_create(context, experiment_values):
    """Create new experiment."""
    return IMPL.experiment_create(context, experiment_values)


def experiment_update(context, experiment_id, values):
    """Update experiment fields."""
    return IMPL.experiment_update(context, experiment_id, values)


def experiment_get(context, experiment_id):
    """Get experiment by id."""
    return IMPL.experiment_get(context, experiment_id)


def experiment_get_all(context, filters=None, sort_key=None, sort_dir=None):
    """Get all experiments."""
    return IMPL.experiment_get_all(
        context, filters=filters, sort_key=sort_key, sort_dir=sort_dir,
    )


def experiment_get_all_by_project(context, project_id, filters=None,
                                  sort_key=None, sort_dir=None):
    """Returns all experiments with given project ID."""
    return IMPL.experiment_get_all_by_project(
        context, project_id, filters=filters,
        sort_key=sort_key, sort_dir=sort_dir,
    )


def experiment_delete(context, experiment_id):
    """Delete experiment."""
    return IMPL.experiment_delete(context, experiment_id)


#


def template_create(context, template_values):
    """Create new template."""
    return IMPL.template_create(context, template_values)


def template_update(context, template_id, values):
    """Update template fields."""
    return IMPL.template_update(context, template_id, values)


def template_get(context, template_id):
    """Get template by id."""
    return IMPL.template_get(context, template_id)


def template_get_all(context, filters=None, sort_key=None, sort_dir=None):
    """Get all templates."""
    return IMPL.template_get_all(
        context, filters=filters, sort_key=sort_key, sort_dir=sort_dir,
    )


def template_get_all_by_project(context, project_id, filters=None,
                                sort_key=None, sort_dir=None):
    """Returns all templates with given project ID."""
    return IMPL.template_get_all_by_project(
        context, project_id, filters=filters,
        sort_key=sort_key, sort_dir=sort_dir,
    )


def template_delete(context, template_id):
    """Delete template."""
    return IMPL.template_delete(context, template_id)


#


def dataset_create(context, dataset_values):
    """Create new dataset."""
    return IMPL.dataset_create(context, dataset_values)


def dataset_update(context, dataset_id, values):
    """Update dataset fields."""
    return IMPL.dataset_update(context, dataset_id, values)


def dataset_get(context, dataset_id):
    """Get dataset by id."""
    return IMPL.dataset_get(context, dataset_id)


def dataset_get_all(context, filters=None, sort_key=None, sort_dir=None):
    """Get all datasets."""
    return IMPL.dataset_get_all(
        context, filters=filters, sort_key=sort_key, sort_dir=sort_dir,
    )


def dataset_get_all_by_project(context, project_id, filters=None,
                               sort_key=None, sort_dir=None):
    """Returns all datasets with given project ID."""
    return IMPL.dataset_get_all_by_project(
        context, project_id, filters=filters,
        sort_key=sort_key, sort_dir=sort_dir,
    )


def dataset_delete(context, dataset_id):
    """Delete dataset."""
    return IMPL.dataset_delete(context, dataset_id)


#


def model_create(context, model_values):
    """Create new model."""
    return IMPL.model_create(context, model_values)


def model_update(context, model_id, values):
    """Update model fields."""
    return IMPL.model_update(context, model_id, values)


def model_get(context, model_id):
    """Get model by id."""
    return IMPL.model_get(context, model_id)


def model_get_all(context, filters=None, sort_key=None, sort_dir=None):
    """Get all models."""
    return IMPL.model_get_all(
        context, filters=filters, sort_key=sort_key, sort_dir=sort_dir,
    )


def model_get_all_by_project(context, project_id, filters=None,
                             sort_key=None, sort_dir=None):
    """Returns all models with given project ID."""
    return IMPL.model_get_all_by_project(
        context, project_id, filters=filters,
        sort_key=sort_key, sort_dir=sort_dir,
    )


def model_delete(context, model_id):
    """Delete model."""
    return IMPL.model_delete(context, model_id)


#


def learning_create(context, learning_values):
    """Create new learning."""
    return IMPL.learning_create(context, learning_values)


def learning_update(context, learning_id, values):
    """Update learning fields."""
    return IMPL.learning_update(context, learning_id, values)


def learning_get(context, learning_id):
    """Get learning by id."""
    return IMPL.learning_get(context, learning_id)


def learning_get_all(context, filters=None, sort_key=None, sort_dir=None):
    """Get all learnings."""
    return IMPL.learning_get_all(
        context, filters=filters, sort_key=sort_key, sort_dir=sort_dir,
    )


def learning_get_all_by_project(context, project_id, filters=None,
                                sort_key=None, sort_dir=None):
    """Returns all learnings with given project ID."""
    return IMPL.learning_get_all_by_project(
        context, project_id, filters=filters,
        sort_key=sort_key, sort_dir=sort_dir,
    )


def learning_delete(context, learning_id):
    """Delete learning."""
    return IMPL.learning_delete(context, learning_id)
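A sketch (not part of the commit) of the indirection this module provides: callers import meteos.db and stay backend-agnostic, while oslo.db resolves IMPL to meteos.db.sqlalchemy.api lazily on first attribute access. The experiment id below is hypothetical, and the calls are left commented because they need a configured database:

from meteos import context
from meteos import db

ctxt = context.get_admin_context()
# These two calls are equivalent; the first goes through the IMPL
# trampoline above, the second names the sqlalchemy backend directly:
#   db.experiment_get(ctxt, 'some-experiment-id')
#   meteos.db.sqlalchemy.api.experiment_get(ctxt, 'some-experiment-id')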
37
meteos/db/base.py
Normal file
@ -0,0 +1,37 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Base class for classes that need modular database access."""

from oslo_config import cfg
from oslo_utils import importutils

db_driver_opt = cfg.StrOpt('db_driver',
                           default='meteos.db',
                           help='Driver to use for database access.')

CONF = cfg.CONF
CONF.register_opt(db_driver_opt)


class Base(object):
    """DB driver is injected in the init method."""

    def __init__(self, db_driver=None):
        super(Base, self).__init__()
        if not db_driver:
            db_driver = CONF.db_driver
        self.db = importutils.import_module(db_driver)  # pylint: disable=C0103
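A minimal sketch (not in the commit): inheriting from Base gives managers a self.db handle resolved from the db_driver option, so tests can swap in a fake module. DemoManager is an illustrative name:

from meteos.db import base


class DemoManager(base.Base):
    def count_services(self, ctxt):
        # self.db is the imported 'meteos.db' module by default
        return len(self.db.service_get_all(ctxt))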
48
meteos/db/migration.py
Normal file
@ -0,0 +1,48 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Database setup and migration commands."""

from meteos import utils


IMPL = utils.LazyPluggable(
    'db_backend', sqlalchemy='meteos.db.migrations.alembic.migration')


def upgrade(version):
    """Upgrade database to 'version' or the most recent version."""
    return IMPL.upgrade(version)


def downgrade(version):
    """Downgrade database to 'version' or to initial state."""
    return IMPL.downgrade(version)


def version():
    """Display the current database version."""
    return IMPL.version()


def stamp(version):
    """Stamp database with 'version' or the most recent version."""
    return IMPL.stamp(version)


def revision(message, autogenerate):
    """Generate new migration script."""
    return IMPL.revision(message, autogenerate)
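A sketch (not part of the commit) of the path a "meteos-manage db upgrade" call takes through this module. LazyPluggable defers the import, so the alembic backend is only loaded when a db command actually runs:

from meteos.db import migration

# migration.version()      -> current alembic revision string, or None
# migration.upgrade(None)  -> alembic.command.upgrade(config, 'head')
# migration.stamp('001')   -> alembic.command.stamp(config, '001')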
0
meteos/db/migrations/__init__.py
Normal file
59
meteos/db/migrations/alembic.ini
Normal file
@ -0,0 +1,59 @@
# A generic, single database configuration.

[alembic]
# path to migration scripts
script_location = %(here)s/alembic

# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s

# max length of characters to apply to the
# "slug" field
#truncate_slug_length = 40

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false

#sqlalchemy.url = driver://user:pass@localhost/dbname


# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console
qualname =

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
0
meteos/db/migrations/alembic/__init__.py
Normal file
41
meteos/db/migrations/alembic/env.py
Normal file
@ -0,0 +1,41 @@
# Copyright 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from __future__ import with_statement

from alembic import context

from meteos.db.sqlalchemy import api as db_api
from meteos.db.sqlalchemy import models as db_models


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.
    """
    engine = db_api.get_engine()
    connection = engine.connect()
    target_metadata = db_models.MeteosBase.metadata
    context.configure(connection=connection,  # pylint: disable=E1101
                      target_metadata=target_metadata)
    try:
        with context.begin_transaction():  # pylint: disable=E1101
            context.run_migrations()  # pylint: disable=E1101
    finally:
        connection.close()


run_migrations_online()
84
meteos/db/migrations/alembic/migration.py
Normal file
@ -0,0 +1,84 @@
# Copyright 2014 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os

import alembic
from alembic import config as alembic_config
import alembic.migration as alembic_migration
from oslo_config import cfg

from meteos.db.sqlalchemy import api as db_api

CONF = cfg.CONF


def _alembic_config():
    path = os.path.join(os.path.dirname(__file__), os.pardir, 'alembic.ini')
    config = alembic_config.Config(path)
    return config


def version():
    """Current database version.

    :returns: Database version
    :rtype: string
    """
    engine = db_api.get_engine()
    with engine.connect() as conn:
        context = alembic_migration.MigrationContext.configure(conn)
        return context.get_current_revision()


def upgrade(revision):
    """Upgrade database.

    :param revision: Desired database revision
    :type revision: string
    """
    return alembic.command.upgrade(_alembic_config(), revision or 'head')


def downgrade(revision):
    """Downgrade database.

    :param revision: Desired database revision
    :type revision: string
    """
    return alembic.command.downgrade(_alembic_config(), revision or 'base')


def stamp(revision):
    """Stamp database with provided revision.

    Don't run any migrations.

    :param revision: Should match one from repository or head - to stamp
                     database with most recent revision
    :type revision: string
    """
    return alembic.command.stamp(_alembic_config(), revision or 'head')


def revision(message=None, autogenerate=False):
    """Create template for migration.

    :param message: Text that will be used for migration title
    :type message: string
    :param autogenerate: If True - generates diff based on current database
                         state
    :type autogenerate: bool
    """
    return alembic.command.revision(_alembic_config(), message, autogenerate)
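A sketch (not in the commit) of what these wrappers correspond to. They are roughly the programmatic twin of driving alembic by hand against the bundled ini, except that env.py pulls its engine from meteos' own configuration rather than from sqlalchemy.url in the ini file. _alembic_config() is a private helper, used here only for illustration:

from meteos.db.migrations.alembic import migration as alembic_migration

config = alembic_migration._alembic_config()
print(config.get_main_option('script_location'))  # %(here)s/alembic, resolved
# Roughly equivalent CLI usage:
#   alembic -c meteos/db/migrations/alembic.ini upgrade head
#   alembic -c meteos/db/migrations/alembic.ini revision -m "..." --autogenerate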
34
meteos/db/migrations/alembic/script.py.mako
Normal file
@ -0,0 +1,34 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision}
Create Date: ${create_date}

"""

# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}

from alembic import op
import sqlalchemy as sa
${imports if imports else ""}

def upgrade():
    ${upgrades if upgrades else "pass"}


def downgrade():
    ${downgrades if downgrades else "pass"}
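For reference, a hypothetical rendering of this template for a follow-up revision; the revision id '002', the message, and the op calls are all invented for illustration:

"""add tags column

Revision ID: 002
Revises: 001
Create Date: 2016-10-01 00:00:00.000000

"""

# revision identifiers, used by Alembic.
revision = '002'
down_revision = '001'

from alembic import op
import sqlalchemy as sa


def upgrade():
    op.add_column('experiments', sa.Column('tags', sa.String(255)))


def downgrade():
    op.drop_column('experiments', 'tags')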
228
meteos/db/migrations/alembic/versions/001_meteos_init.py
Normal file
@ -0,0 +1,228 @@
# Copyright 2012 OpenStack LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""meteos_init

Revision ID: 001
Revises: None
Create Date: 2016-09-27 17:51:57.077203

"""

# revision identifiers, used by Alembic.
revision = '001'
down_revision = None

from alembic import op
from oslo_log import log
from sqlalchemy import Boolean, Column, DateTime, ForeignKey, Text
from sqlalchemy import Integer, MetaData, String, Table, UniqueConstraint

from meteos.i18n import _LE

LOG = log.getLogger(__name__)


def upgrade():
    migrate_engine = op.get_bind().engine
    meta = MetaData()
    meta.bind = migrate_engine

    services = Table(
        'services', meta,
        Column('created_at', DateTime),
        Column('updated_at', DateTime),
        Column('deleted_at', DateTime),
        Column('deleted', Integer, default=0),
        Column('id', Integer, primary_key=True, nullable=False),
        Column('host', String(length=255)),
        Column('binary', String(length=255)),
        Column('topic', String(length=255)),
        Column('report_count', Integer, nullable=False),
        Column('disabled', Boolean),
        Column('availability_zone', String(length=255)),
        mysql_engine='InnoDB',
        mysql_charset='utf8'
    )

    templates = Table(
        'templates', meta,
        Column('created_at', DateTime),
        Column('updated_at', DateTime),
        Column('deleted_at', DateTime),
        Column('deleted', String(length=36), default='False'),
        Column('id', String(length=36), primary_key=True, nullable=False),
        Column('user_id', String(length=255)),
        Column('project_id', String(length=255)),
        Column('status', String(length=255)),
        Column('scheduled_at', DateTime),
        Column('launched_at', DateTime),
        Column('terminated_at', DateTime),
        Column('display_name', String(length=255)),
        Column('display_description', String(length=255)),
        Column('sahara_image_id', String(length=36)),
        Column('master_node_id', String(length=36)),
        Column('slave_node_id', String(length=36)),
        Column('binary_data_id', String(length=36)),
        Column('binary_id', String(length=36)),
        Column('cluster_template_id', String(length=36)),
        Column('job_template_id', String(length=36)),
        Column('master_flavor_id', String(length=36)),
        Column('master_nodes_num', Integer),
        Column('worker_flavor_id', String(length=36)),
        Column('worker_nodes_num', Integer),
        Column('spark_version', String(length=36)),
        Column('floating_ip_pool', String(length=36)),
        mysql_engine='InnoDB',
        mysql_charset='utf8'
    )

    experiments = Table(
        'experiments', meta,
        Column('created_at', DateTime),
        Column('updated_at', DateTime),
        Column('deleted_at', DateTime),
        Column('deleted', String(length=36), default='False'),
        Column('id', String(length=36), primary_key=True, nullable=False),
        Column('user_id', String(length=255)),
        Column('project_id', String(length=255)),
        Column('status', String(length=255)),
        Column('scheduled_at', DateTime),
        Column('launched_at', DateTime),
        Column('terminated_at', DateTime),
        Column('display_name', String(length=255)),
        Column('display_description', String(length=255)),
        Column('template_id', String(length=36)),
        Column('cluster_id', String(length=36)),
        Column('key_name', String(length=36)),
        Column('neutron_management_network', String(length=36)),
        mysql_engine='InnoDB',
        mysql_charset='utf8'
    )

    data_sets = Table(
        'data_sets', meta,
        Column('created_at', DateTime),
        Column('updated_at', DateTime),
        Column('deleted_at', DateTime),
        Column('deleted', String(length=36), default='False'),
        Column('id', String(length=36), primary_key=True, nullable=False),
        Column('source_dataset_url', String(length=255)),
        Column('user_id', String(length=255)),
        Column('project_id', String(length=255)),
        Column('experiment_id', String(length=36)),
        Column('cluster_id', String(length=36)),
        Column('job_id', String(length=36)),
        Column('status', String(length=255)),
        Column('scheduled_at', DateTime),
        Column('launched_at', DateTime),
        Column('terminated_at', DateTime),
        Column('display_name', String(length=255)),
        Column('display_description', String(length=255)),
        Column('container_name', String(length=255)),
        Column('object_name', String(length=255)),
        Column('user', String(length=255)),
        Column('password', String(length=255)),
        Column('head', Text),
        Column('stderr', Text),
        mysql_engine='InnoDB',
        mysql_charset='utf8'
    )

    models = Table(
        'models', meta,
        Column('created_at', DateTime),
        Column('updated_at', DateTime),
        Column('deleted_at', DateTime),
        Column('deleted', String(length=36), default='False'),
        Column('id', String(length=36), primary_key=True, nullable=False),
        Column('source_dataset_url', String(length=255)),
        Column('dataset_format', String(length=255)),
        Column('user_id', String(length=255)),
        Column('project_id', String(length=255)),
        Column('experiment_id', String(length=36)),
        Column('cluster_id', String(length=36)),
        Column('job_id', String(length=36)),
        Column('status', String(length=255)),
        Column('scheduled_at', DateTime),
        Column('launched_at', DateTime),
        Column('terminated_at', DateTime),
        Column('display_name', String(length=255)),
        Column('display_description', String(length=255)),
        Column('model_type', String(length=255)),
        Column('model_params', String(length=255)),
        Column('stdout', Text),
        Column('stderr', Text),
        mysql_engine='InnoDB',
        mysql_charset='utf8'
    )

    learnings = Table(
        'learnings', meta,
        Column('created_at', DateTime),
        Column('updated_at', DateTime),
        Column('deleted_at', DateTime),
        Column('deleted', String(length=36), default='False'),
        Column('id', String(length=36), primary_key=True, nullable=False),
        Column('model_id', String(length=36)),
        Column('model_type', String(length=255)),
        Column('user_id', String(length=255)),
        Column('project_id', String(length=255)),
        Column('experiment_id', String(length=36)),
        Column('cluster_id', String(length=36)),
        Column('job_id', String(length=36)),
        Column('status', String(length=255)),
        Column('scheduled_at', DateTime),
        Column('launched_at', DateTime),
        Column('terminated_at', DateTime),
        Column('display_name', String(length=255)),
        Column('display_description', String(length=255)),
        Column('method', String(length=255)),
        Column('args', String(length=255)),
        Column('stdout', Text),
        Column('stderr', Text),
        mysql_engine='InnoDB',
        mysql_charset='utf8'
    )

    # create all tables
    # Take care on create order for those with FK dependencies
    tables = [services, templates, learnings, experiments, data_sets, models]

    for table in tables:
        if not table.exists():
            try:
                table.create()
            except Exception:
                LOG.info(repr(table))
                LOG.exception(_LE('Exception while creating table.'))
                raise

    if migrate_engine.name == "mysql":
        tables = ["services", "learnings"]

        migrate_engine.execute("SET foreign_key_checks = 0")
        for table in tables:
            migrate_engine.execute(
                "ALTER TABLE %s CONVERT TO CHARACTER SET utf8" % table)
        migrate_engine.execute("SET foreign_key_checks = 1")
        migrate_engine.execute(
            "ALTER DATABASE %s DEFAULT CHARACTER SET utf8" %
            migrate_engine.url.database)
        migrate_engine.execute("ALTER TABLE %s Engine=InnoDB" % table)


def downgrade():
    raise NotImplementedError('Downgrade from initial Meteos install is not'
                              ' supported.')
21
meteos/db/migrations/utils.py
Normal file
@ -0,0 +1,21 @@
# Copyright 2015 Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sqlalchemy as sa


def load_table(name, connection):
    return sa.Table(name, sa.MetaData(), autoload=True,
                    autoload_with=connection)
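A usage sketch (not part of the commit): load_table reflects an existing table from the live database, which is handy inside a migration that needs to backfill data without redeclaring the schema. The update itself is illustrative:

from meteos.db.migrations import utils

# Inside an alembic upgrade() with a live connection:
#   services = utils.load_table('services', op.get_bind())
#   op.get_bind().execute(services.update().values(disabled=False))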
0
meteos/db/sqlalchemy/__init__.py
Normal file
915
meteos/db/sqlalchemy/api.py
Normal file
@@ -0,0 +1,915 @@
# Copyright (c) 2011 X.commerce, a business unit of eBay Inc.
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright (c) 2014 Mirantis, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Implementation of SQLAlchemy backend."""

import copy
import datetime
from functools import wraps
import sys
import uuid
import warnings

# NOTE(uglide): Required to override default oslo_db Query class
import meteos.db.sqlalchemy.query  # noqa

from oslo_config import cfg
from oslo_db import api as oslo_db_api
from oslo_db import exception as db_exception
from oslo_db import options as db_options
from oslo_db.sqlalchemy import session
from oslo_db.sqlalchemy import utils as db_utils
from oslo_log import log
from oslo_utils import timeutils
from oslo_utils import uuidutils
import six
from sqlalchemy import and_
from sqlalchemy import or_
from sqlalchemy.orm import joinedload
from sqlalchemy.sql.expression import true
from sqlalchemy.sql import func

from meteos.common import constants
from meteos.db.sqlalchemy import models
from meteos import exception
from meteos.i18n import _, _LE, _LW

CONF = cfg.CONF

LOG = log.getLogger(__name__)

_DEFAULT_QUOTA_NAME = 'default'
PER_PROJECT_QUOTAS = []

_FACADE = None

_DEFAULT_SQL_CONNECTION = 'sqlite://'
db_options.set_defaults(cfg.CONF,
                        connection=_DEFAULT_SQL_CONNECTION)


def _create_facade_lazily():
    global _FACADE
    if _FACADE is None:
        _FACADE = session.EngineFacade.from_config(cfg.CONF)
    return _FACADE


def get_engine():
    facade = _create_facade_lazily()
    return facade.get_engine()


def get_session(**kwargs):
    facade = _create_facade_lazily()
    return facade.get_session(**kwargs)


def get_backend():
    """The backend is this module itself."""

    return sys.modules[__name__]
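

# NOTE(editor): minimal sketch of how the lazily created engine facade above
# is consumed; `_example_service_count` is a hypothetical helper, not part of
# the Meteos API. It only illustrates the get_session()/session.begin()
# transaction pattern used throughout this module.
def _example_service_count(context):
    session = get_session()
    with session.begin():
        return model_query(context, models.Service,
                           session=session).count()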
def is_admin_context(context):
    """Indicates if the request context is an administrator."""
    if not context:
        warnings.warn(_('Use of empty request context is deprecated'),
                      DeprecationWarning)
        raise Exception('die')
    return context.is_admin


def is_user_context(context):
    """Indicates if the request context is a normal user."""
    if not context:
        return False
    if context.is_admin:
        return False
    if not context.user_id or not context.project_id:
        return False
    return True


def authorize_project_context(context, project_id):
    """Ensures a request has permission to access the given project."""
    if is_user_context(context):
        if not context.project_id:
            raise exception.NotAuthorized()
        elif context.project_id != project_id:
            raise exception.NotAuthorized()


def authorize_user_context(context, user_id):
    """Ensures a request has permission to access the given user."""
    if is_user_context(context):
        if not context.user_id:
            raise exception.NotAuthorized()
        elif context.user_id != user_id:
            raise exception.NotAuthorized()


def authorize_quota_class_context(context, class_name):
    """Ensures a request has permission to access the given quota class."""
    if is_user_context(context):
        if not context.quota_class:
            raise exception.NotAuthorized()
        elif context.quota_class != class_name:
            raise exception.NotAuthorized()


def require_admin_context(f):
    """Decorator to require admin request context.

    The first argument to the wrapped function must be the context.

    """
    @wraps(f)
    def wrapper(*args, **kwargs):
        if not is_admin_context(args[0]):
            raise exception.AdminRequired()
        return f(*args, **kwargs)
    return wrapper


def require_context(f):
    """Decorator to require *any* user or admin context.

    This does no authorization for user or project access matching, see
    :py:func:`authorize_project_context` and
    :py:func:`authorize_user_context`.

    The first argument to the wrapped function must be the context.

    """
    @wraps(f)
    def wrapper(*args, **kwargs):
        if not is_admin_context(args[0]) and not is_user_context(args[0]):
            raise exception.NotAuthorized()
        return f(*args, **kwargs)
    return wrapper
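

# NOTE(editor): hedged sketch of how the context decorators above compose;
# `_example_admin_only` is hypothetical, not part of the Meteos API. The
# decorators rely on the context being the first positional argument.
@require_admin_context
def _example_admin_only(context):
    # Reaching this point implies is_admin_context(context) returned True.
    return True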
def model_query(context, model, *args, **kwargs):
    """Query helper that accounts for context's `read_deleted` field.

    :param context: context to query under
    :param model: model to query. Must be a subclass of ModelBase.
    :param session: if present, the session to use
    :param read_deleted: if present, overrides context's read_deleted field.
    :param project_only: if present and context is user-type, then restrict
            query to match the context's project_id.
    """
    session = kwargs.get('session') or get_session()
    read_deleted = kwargs.get('read_deleted') or context.read_deleted
    project_only = kwargs.get('project_only')
    kwargs = dict()

    if project_only and not context.is_admin:
        kwargs['project_id'] = context.project_id
    if read_deleted in ('no', 'n', False):
        kwargs['deleted'] = False
    elif read_deleted in ('yes', 'y', True):
        kwargs['deleted'] = True

    return db_utils.model_query(
        model=model, session=session, args=args, **kwargs)
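

# NOTE(editor): sketch of overriding the context's read_deleted field through
# model_query(); hypothetical helper, not part of the Meteos API.
def _example_list_deleted_services(context):
    # read_deleted='yes' restricts the query to soft-deleted rows only,
    # regardless of what the request context says.
    return model_query(context, models.Service, read_deleted='yes').all()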
def exact_filter(query, model, filters, legal_keys):
    """Applies exact match filtering to a query.

    Returns the updated query. Modifies filters argument to remove
    filters consumed.

    :param query: query to apply filters to
    :param model: model object the query applies to, for IN-style
                  filtering
    :param filters: dictionary of filters; values that are lists,
                    tuples, sets, or frozensets cause an 'IN' test to
                    be performed, while exact matching ('==' operator)
                    is used for other values
    :param legal_keys: list of keys to apply exact filtering to
    """

    filter_dict = {}

    # Walk through all the keys
    for key in legal_keys:
        # Skip ones we're not filtering on
        if key not in filters:
            continue

        # OK, filtering on this key; what value do we search for?
        value = filters.pop(key)

        if isinstance(value, (list, tuple, set, frozenset)):
            # Looking for values in a list; apply to query directly
            column_attr = getattr(model, key)
            query = query.filter(column_attr.in_(value))
        else:
            # OK, simple exact match; save for later
            filter_dict[key] = value

    # Apply simple exact matches
    if filter_dict:
        query = query.filter_by(**filter_dict)

    return query
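

# NOTE(editor): sketch of exact_filter() semantics: an iterable value becomes
# an SQL IN test, a scalar an equality test, and consumed keys are popped
# from `filters`. Hypothetical helper, not part of the Meteos API.
def _example_filter_services(context, filters):
    # e.g. filters = {'topic': ('learning', 'scheduler'), 'disabled': False}
    query = model_query(context, models.Service)
    return exact_filter(query, models.Service, filters,
                        legal_keys=['topic', 'disabled'])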
def ensure_dict_has_id(model_dict):
    if not model_dict.get('id'):
        model_dict['id'] = uuidutils.generate_uuid()
    return model_dict


def _sync_learnings(context, project_id, user_id, session):
    (learnings, gigs) = learning_data_get_for_project(context,
                                                      project_id,
                                                      user_id,
                                                      session=session)
    return {'learnings': learnings}


QUOTA_SYNC_FUNCTIONS = {
    '_sync_learnings': _sync_learnings,
}


#


@require_admin_context
def service_destroy(context, service_id):
    session = get_session()
    with session.begin():
        service_ref = service_get(context, service_id, session=session)
        service_ref.soft_delete(session)


@require_admin_context
def service_get(context, service_id, session=None):
    result = model_query(
        context,
        models.Service,
        session=session).\
        filter_by(id=service_id).\
        first()
    if not result:
        raise exception.ServiceNotFound(service_id=service_id)

    return result


@require_admin_context
def service_get_all(context, disabled=None):
    query = model_query(context, models.Service)

    if disabled is not None:
        query = query.filter_by(disabled=disabled)

    return query.all()


@require_admin_context
def service_get_all_by_topic(context, topic):
    return model_query(
        context, models.Service, read_deleted="no").\
        filter_by(disabled=False).\
        filter_by(topic=topic).\
        all()


@require_admin_context
def service_get_by_host_and_topic(context, host, topic):
    result = model_query(
        context, models.Service, read_deleted="no").\
        filter_by(disabled=False).\
        filter_by(host=host).\
        filter_by(topic=topic).\
        first()
    if not result:
        raise exception.ServiceNotFound(service_id=host)
    return result


@require_admin_context
def _service_get_all_topic_subquery(context, session, topic, subq, label):
    sort_value = getattr(subq.c, label)
    return model_query(context, models.Service,
                       func.coalesce(sort_value, 0),
                       session=session, read_deleted="no").\
        filter_by(topic=topic).\
        filter_by(disabled=False).\
        outerjoin((subq, models.Service.host == subq.c.host)).\
        order_by(sort_value).\
        all()


@require_admin_context
def service_get_all_learning_sorted(context):
    session = get_session()
    with session.begin():
        topic = CONF.learning_topic
        label = 'learning_gigabytes'
        subq = model_query(context, models.Share,
                           func.sum(models.Share.size).label(label),
                           session=session, read_deleted="no").\
            join(models.ShareInstance,
                 models.ShareInstance.learning_id == models.Share.id).\
            group_by(models.ShareInstance.host).\
            subquery()
        return _service_get_all_topic_subquery(context,
                                               session,
                                               topic,
                                               subq,
                                               label)
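

# NOTE(editor): service_get_all_learning_sorted() above references
# models.Share and models.ShareInstance, which are not defined in
# meteos/db/sqlalchemy/models.py in this commit; this looks like a leftover
# from the manila codebase this module was derived from, so calling it as-is
# would raise an AttributeError.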
@require_admin_context
def service_get_by_args(context, host, binary):
    result = model_query(context, models.Service).\
        filter_by(host=host).\
        filter_by(binary=binary).\
        first()

    if not result:
        raise exception.HostBinaryNotFound(host=host, binary=binary)

    return result


@require_admin_context
def service_create(context, values):
    session = get_session()

    service_ref = models.Service()
    service_ref.update(values)
    if not CONF.enable_new_services:
        service_ref.disabled = True

    with session.begin():
        service_ref.save(session)
        return service_ref


@require_admin_context
@oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
def service_update(context, service_id, values):
    session = get_session()

    with session.begin():
        service_ref = service_get(context, service_id, session=session)
        service_ref.update(values)
        service_ref.save(session=session)


#


def _experiment_get_query(context, session=None):
    if session is None:
        session = get_session()
    return model_query(context, models.Experiment, session=session)


@require_context
def experiment_get(context, experiment_id, session=None):
    result = _experiment_get_query(
        context, session).filter_by(id=experiment_id).first()

    if result is None:
        raise exception.NotFound()

    return result


@require_context
def experiment_create(context, experiment_values):
    values = copy.deepcopy(experiment_values)
    values = ensure_dict_has_id(values)

    session = get_session()
    experiment_ref = models.Experiment()
    experiment_ref.update(values)

    with session.begin():
        experiment_ref.save(session=session)

        # NOTE(u_glide): Do so to prevent errors with relationships
        return experiment_get(context, experiment_ref['id'], session=session)


@require_context
@oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
def experiment_update(context, experiment_id, update_values):
    session = get_session()
    values = copy.deepcopy(update_values)

    with session.begin():
        experiment_ref = experiment_get(
            context, experiment_id, session=session)

        experiment_ref.update(values)
        experiment_ref.save(session=session)
        return experiment_ref


def _experiment_get_all_with_filters(context, project_id=None, filters=None,
                                     sort_key=None, sort_dir=None):
    if not sort_key:
        sort_key = 'created_at'
    if not sort_dir:
        sort_dir = 'desc'
    query = (
        _experiment_get_query(context).join()
    )

    # Apply filters
    if not filters:
        filters = {}

    # Apply sorting
    if sort_dir.lower() not in ('desc', 'asc'):
        msg = _("Wrong sorting data provided: sort key is '%(sort_key)s' "
                "and sort direction is '%(sort_dir)s'.") % {
            "sort_key": sort_key, "sort_dir": sort_dir}
        raise exception.InvalidInput(reason=msg)

    def apply_sorting(model, query):
        sort_attr = getattr(model, sort_key)
        sort_method = getattr(sort_attr, sort_dir.lower())
        return query.order_by(sort_method())

    try:
        query = apply_sorting(models.Experiment, query)
    except AttributeError:
        msg = _("Wrong sorting key provided - '%s'.") % sort_key
        raise exception.InvalidInput(reason=msg)

    # Returns list of experiments that satisfy filters.
    query = query.all()
    return query
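

# NOTE(editor): in _experiment_get_all_with_filters() above (and its
# template/dataset/model/learning twins below), the project_id and filters
# arguments are accepted but never applied to the query; only sorting is
# honoured. A hypothetical call therefore returns all non-deleted rows:
#
#     experiments = _experiment_get_all_with_filters(
#         context, sort_key='created_at', sort_dir='asc')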
@require_context
def experiment_get_all_by_project(context, project_id, filters=None,
                                  sort_key=None, sort_dir=None):
    """Returns list of experiments with given project ID."""
    query = _experiment_get_all_with_filters(
        context, project_id=project_id, filters=filters,
        sort_key=sort_key, sort_dir=sort_dir,
    )
    return query


@require_context
def experiment_delete(context, experiment_id):
    session = get_session()

    with session.begin():
        experiment_ref = experiment_get(context, experiment_id, session)
        experiment_ref.soft_delete(session=session)


#


def _template_get_query(context, session=None):
    if session is None:
        session = get_session()
    return model_query(context, models.Template, session=session)


@require_context
def template_get(context, template_id, session=None):
    result = _template_get_query(
        context, session).filter_by(id=template_id).first()

    if result is None:
        raise exception.NotFound()

    return result


@require_context
def template_create(context, template_values):
    values = copy.deepcopy(template_values)
    values = ensure_dict_has_id(values)

    session = get_session()
    template_ref = models.Template()
    template_ref.update(values)

    with session.begin():
        template_ref.save(session=session)

        # NOTE(u_glide): Do so to prevent errors with relationships
        return template_get(context, template_ref['id'], session=session)


@require_context
@oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
def template_update(context, template_id, update_values):
    session = get_session()
    values = copy.deepcopy(update_values)

    with session.begin():
        template_ref = template_get(context, template_id, session=session)

        template_ref.update(values)
        template_ref.save(session=session)
        return template_ref


def _template_get_all_with_filters(context, project_id=None, filters=None,
                                   sort_key=None, sort_dir=None):
    if not sort_key:
        sort_key = 'created_at'
    if not sort_dir:
        sort_dir = 'desc'
    query = (
        _template_get_query(context).join()
    )

    # Apply filters
    if not filters:
        filters = {}

    # Apply sorting
    if sort_dir.lower() not in ('desc', 'asc'):
        msg = _("Wrong sorting data provided: sort key is '%(sort_key)s' "
                "and sort direction is '%(sort_dir)s'.") % {
            "sort_key": sort_key, "sort_dir": sort_dir}
        raise exception.InvalidInput(reason=msg)

    def apply_sorting(model, query):
        sort_attr = getattr(model, sort_key)
        sort_method = getattr(sort_attr, sort_dir.lower())
        return query.order_by(sort_method())

    try:
        query = apply_sorting(models.Template, query)
    except AttributeError:
        msg = _("Wrong sorting key provided - '%s'.") % sort_key
        raise exception.InvalidInput(reason=msg)

    # Returns list of templates that satisfy filters.
    query = query.all()
    return query


@require_context
def template_get_all_by_project(context, project_id, filters=None,
                                sort_key=None, sort_dir=None):
    """Returns list of templates with given project ID."""
    query = _template_get_all_with_filters(
        context, project_id=project_id, filters=filters,
        sort_key=sort_key, sort_dir=sort_dir,
    )
    return query


@require_context
def template_delete(context, template_id):
    session = get_session()

    with session.begin():
        template_ref = template_get(context, template_id, session)
        template_ref.soft_delete(session=session)


#


def _dataset_get_query(context, session=None):
    if session is None:
        session = get_session()
    return model_query(context, models.Dataset, session=session)


@require_context
def dataset_get(context, dataset_id, session=None):
    result = _dataset_get_query(
        context, session).filter_by(id=dataset_id).first()

    if result is None:
        raise exception.NotFound()

    return result


@require_context
def dataset_create(context, dataset_values):
    values = copy.deepcopy(dataset_values)
    values = ensure_dict_has_id(values)

    session = get_session()
    dataset_ref = models.Dataset()
    dataset_ref.update(values)

    with session.begin():
        dataset_ref.save(session=session)

        # NOTE(u_glide): Do so to prevent errors with relationships
        return dataset_get(context, dataset_ref['id'], session=session)


@require_context
@oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
def dataset_update(context, dataset_id, update_values):
    session = get_session()
    values = copy.deepcopy(update_values)

    with session.begin():
        dataset_ref = dataset_get(context, dataset_id, session=session)

        dataset_ref.update(values)
        dataset_ref.save(session=session)
        return dataset_ref


def _dataset_get_all_with_filters(context, project_id=None, filters=None,
                                  sort_key=None, sort_dir=None):
    if not sort_key:
        sort_key = 'created_at'
    if not sort_dir:
        sort_dir = 'desc'
    query = (
        _dataset_get_query(context).join()
    )

    # Apply filters
    if not filters:
        filters = {}

    # Apply sorting
    if sort_dir.lower() not in ('desc', 'asc'):
        msg = _("Wrong sorting data provided: sort key is '%(sort_key)s' "
                "and sort direction is '%(sort_dir)s'.") % {
            "sort_key": sort_key, "sort_dir": sort_dir}
        raise exception.InvalidInput(reason=msg)

    def apply_sorting(model, query):
        sort_attr = getattr(model, sort_key)
        sort_method = getattr(sort_attr, sort_dir.lower())
        return query.order_by(sort_method())

    try:
        query = apply_sorting(models.Dataset, query)
    except AttributeError:
        msg = _("Wrong sorting key provided - '%s'.") % sort_key
        raise exception.InvalidInput(reason=msg)

    # Returns list of datasets that satisfy filters.
    query = query.all()
    return query


@require_context
def dataset_get_all_by_project(context, project_id, filters=None,
                               sort_key=None, sort_dir=None):
    """Returns list of datasets with given project ID."""
    query = _dataset_get_all_with_filters(
        context, project_id=project_id, filters=filters,
        sort_key=sort_key, sort_dir=sort_dir,
    )
    return query


@require_context
def dataset_delete(context, dataset_id):
    session = get_session()

    with session.begin():
        dataset_ref = dataset_get(context, dataset_id, session)
        dataset_ref.soft_delete(session=session)


#


def _model_get_query(context, session=None):
    if session is None:
        session = get_session()
    return model_query(context, models.Model, session=session)


@require_context
def model_get(context, model_id, session=None):
    result = _model_get_query(context, session).filter_by(id=model_id).first()

    if result is None:
        raise exception.NotFound()

    return result


@require_context
def model_create(context, model_values):
    values = copy.deepcopy(model_values)
    values = ensure_dict_has_id(values)

    session = get_session()
    model_ref = models.Model()
    model_ref.update(values)

    with session.begin():
        model_ref.save(session=session)

        # NOTE(u_glide): Do so to prevent errors with relationships
        return model_get(context, model_ref['id'], session=session)


@require_context
@oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
def model_update(context, model_id, update_values):
    session = get_session()
    values = copy.deepcopy(update_values)

    with session.begin():
        model_ref = model_get(context, model_id, session=session)

        model_ref.update(values)
        model_ref.save(session=session)
        return model_ref


def _model_get_all_with_filters(context, project_id=None, filters=None,
                                sort_key=None, sort_dir=None):
    if not sort_key:
        sort_key = 'created_at'
    if not sort_dir:
        sort_dir = 'desc'
    query = (
        _model_get_query(context).join()
    )

    # Apply filters
    if not filters:
        filters = {}

    # Apply sorting
    if sort_dir.lower() not in ('desc', 'asc'):
        msg = _("Wrong sorting data provided: sort key is '%(sort_key)s' "
                "and sort direction is '%(sort_dir)s'.") % {
            "sort_key": sort_key, "sort_dir": sort_dir}
        raise exception.InvalidInput(reason=msg)

    def apply_sorting(model, query):
        sort_attr = getattr(model, sort_key)
        sort_method = getattr(sort_attr, sort_dir.lower())
        return query.order_by(sort_method())

    try:
        query = apply_sorting(models.Model, query)
    except AttributeError:
        msg = _("Wrong sorting key provided - '%s'.") % sort_key
        raise exception.InvalidInput(reason=msg)

    # Returns list of models that satisfy filters.
    query = query.all()
    return query


@require_context
def model_get_all_by_project(context, project_id, filters=None,
                             sort_key=None, sort_dir=None):
    """Returns list of models with given project ID."""
    query = _model_get_all_with_filters(
        context, project_id=project_id, filters=filters,
        sort_key=sort_key, sort_dir=sort_dir,
    )
    return query


@require_context
def model_delete(context, model_id):
    session = get_session()

    with session.begin():
        model_ref = model_get(context, model_id, session)
        model_ref.soft_delete(session=session)


#


def _learning_get_query(context, session=None):
    if session is None:
        session = get_session()
    return model_query(context, models.Learning, session=session)


@require_context
def learning_get(context, learning_id, session=None):
    result = _learning_get_query(
        context, session).filter_by(id=learning_id).first()

    if result is None:
        raise exception.NotFound()

    return result


@require_context
def learning_create(context, learning_values):
    values = copy.deepcopy(learning_values)
    values = ensure_dict_has_id(values)

    session = get_session()
    learning_ref = models.Learning()
    learning_ref.update(values)

    with session.begin():
        learning_ref.save(session=session)

        # NOTE(u_glide): Do so to prevent errors with relationships
        return learning_get(context, learning_ref['id'], session=session)


@require_context
@oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
def learning_update(context, learning_id, update_values):
    session = get_session()
    values = copy.deepcopy(update_values)

    with session.begin():
        learning_ref = learning_get(context, learning_id, session=session)

        learning_ref.update(values)
        learning_ref.save(session=session)
        return learning_ref


def _learning_get_all_with_filters(context, project_id=None, filters=None,
                                   sort_key=None, sort_dir=None):
    if not sort_key:
        sort_key = 'created_at'
    if not sort_dir:
        sort_dir = 'desc'
    query = (
        _learning_get_query(context).join()
    )

    # Apply filters
    if not filters:
        filters = {}

    # Apply sorting
    if sort_dir.lower() not in ('desc', 'asc'):
        msg = _("Wrong sorting data provided: sort key is '%(sort_key)s' "
                "and sort direction is '%(sort_dir)s'.") % {
            "sort_key": sort_key, "sort_dir": sort_dir}
        raise exception.InvalidInput(reason=msg)

    def apply_sorting(learning, query):
        sort_attr = getattr(learning, sort_key)
        sort_method = getattr(sort_attr, sort_dir.lower())
        return query.order_by(sort_method())

    try:
        query = apply_sorting(models.Learning, query)
    except AttributeError:
        msg = _("Wrong sorting key provided - '%s'.") % sort_key
        raise exception.InvalidInput(reason=msg)

    # Returns list of learnings that satisfy filters.
    query = query.all()
    return query


@require_context
def learning_get_all_by_project(context, project_id, filters=None,
                                sort_key=None, sort_dir=None):
    """Returns list of learnings with given project ID."""
    query = _learning_get_all_with_filters(
        context, project_id=project_id, filters=filters,
        sort_key=sort_key, sort_dir=sort_dir,
    )
    return query


@require_context
def learning_delete(context, learning_id):
    session = get_session()

    with session.begin():
        learning_ref = learning_get(context, learning_id, session)
        learning_ref.soft_delete(session=session)
200
meteos/db/sqlalchemy/models.py
Normal file
@@ -0,0 +1,200 @@
# Copyright (c) 2011 X.commerce, a business unit of eBay Inc.
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Piston Cloud Computing, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
SQLAlchemy models for Meteos data.
"""

from oslo_config import cfg
from oslo_db.sqlalchemy import models
from oslo_log import log
from sqlalchemy import Column, Integer, String, schema
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import orm
from sqlalchemy import ForeignKey, DateTime, Boolean, Enum, Text

from meteos.common import constants

CONF = cfg.CONF
BASE = declarative_base()

LOG = log.getLogger(__name__)


class MeteosBase(models.ModelBase,
                 models.TimestampMixin,
                 models.SoftDeleteMixin):

    """Base class for Meteos Models."""
    __table_args__ = {'mysql_engine': 'InnoDB'}
    metadata = None

    def to_dict(self):
        model_dict = {}
        for k, v in self.items():
            if not issubclass(type(v), MeteosBase):
                model_dict[k] = v
        return model_dict

    def soft_delete(self, session, update_status=False,
                    status_field_name='status'):
        """Mark this object as deleted."""
        if update_status:
            setattr(self, status_field_name, constants.STATUS_DELETED)

        return super(MeteosBase, self).soft_delete(session)
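

# NOTE(editor): minimal sketch of the MeteosBase helpers above; a
# hypothetical snippet, not part of this module. Given a session from
# meteos.db.sqlalchemy.api.get_session():
#
#     template = session.query(Template).first()
#     payload = template.to_dict()  # plain dict; nested model objects elided
#     template.soft_delete(session,
#                          update_status=True)  # also sets the status column
#                                               # to constants.STATUS_DELETED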
class Service(BASE, MeteosBase):

    """Represents a running service on a host."""

    __tablename__ = 'services'
    id = Column(Integer, primary_key=True)
    host = Column(String(255))  # , ForeignKey('hosts.id'))
    binary = Column(String(255))
    topic = Column(String(255))
    report_count = Column(Integer, nullable=False, default=0)
    disabled = Column(Boolean, default=False)


class Template(BASE, MeteosBase):

    __tablename__ = 'templates'
    id = Column(String(36), primary_key=True)
    deleted = Column(String(36), default='False')
    user_id = Column(String(255))
    project_id = Column(String(255))

    display_name = Column(String(255))
    display_description = Column(String(255))

    sahara_image_id = Column(String(36))
    master_node_id = Column(String(36))
    slave_node_id = Column(String(36))
    binary_data_id = Column(String(36))
    binary_id = Column(String(36))
    cluster_template_id = Column(String(36))
    job_template_id = Column(String(36))

    master_flavor_id = Column(String(36))
    worker_flavor_id = Column(String(36))
    master_nodes_num = Column(Integer)
    worker_nodes_num = Column(Integer)
    floating_ip_pool = Column(String(36))
    spark_version = Column(String(36))

    status = Column(String(255))
    launched_at = Column(DateTime)


class Experiment(BASE, MeteosBase):

    __tablename__ = 'experiments'
    id = Column(String(36), primary_key=True)
    deleted = Column(String(36), default='False')
    user_id = Column(String(255))
    project_id = Column(String(255))

    display_name = Column(String(255))
    display_description = Column(String(255))
    template_id = Column(String(36))
    cluster_id = Column(String(36))
    key_name = Column(String(36))
    neutron_management_network = Column(String(36))

    status = Column(String(255))
    launched_at = Column(DateTime)


class Dataset(BASE, MeteosBase):

    __tablename__ = 'data_sets'
    id = Column(String(36), primary_key=True)
    source_dataset_url = Column(String(255))
    deleted = Column(String(36), default='False')
    user_id = Column(String(255))
    project_id = Column(String(255))
    experiment_id = Column(String(36))
    cluster_id = Column(String(36))
    job_id = Column(String(36))

    display_name = Column(String(255))
    display_description = Column(String(255))

    container_name = Column(String(255))
    object_name = Column(String(255))
    user = Column(String(255))
    password = Column(String(255))

    status = Column(String(255))
    launched_at = Column(DateTime)

    head = Column(Text)
    stderr = Column(Text)


class Model(BASE, MeteosBase):

    __tablename__ = 'models'
    id = Column(String(36), primary_key=True)
    source_dataset_url = Column(String(255))
    dataset_format = Column(String(255))
    deleted = Column(String(36), default='False')
    user_id = Column(String(255))
    project_id = Column(String(255))
    experiment_id = Column(String(36))
    cluster_id = Column(String(36))
    job_id = Column(String(36))

    display_name = Column(String(255))
    display_description = Column(String(255))

    model_type = Column(String(255))
    model_params = Column(String(255))

    status = Column(String(255))
    launched_at = Column(DateTime)

    stdout = Column(Text)
    stderr = Column(Text)


class Learning(BASE, MeteosBase):

    __tablename__ = 'learnings'
    id = Column(String(36), primary_key=True)
    model_id = Column(String(36))
    model_type = Column(String(255))
    deleted = Column(String(36), default='False')
    user_id = Column(String(255))
    project_id = Column(String(255))
    experiment_id = Column(String(36))
    cluster_id = Column(String(36))
    job_id = Column(String(36))

    display_name = Column(String(255))
    display_description = Column(String(255))

    method = Column(String(255))
    args = Column(String(255))

    status = Column(String(255))
    launched_at = Column(DateTime)

    stdout = Column(Text)
    stderr = Column(Text)
40
meteos/db/sqlalchemy/query.py
Normal file
@@ -0,0 +1,40 @@
# Copyright 2015 Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_db.sqlalchemy import orm
import sqlalchemy

from meteos.common import constants


class Query(orm.Query):
    def soft_delete(self, synchronize_session='evaluate', update_status=False,
                    status_field_name='status'):
        if update_status:
            setattr(self, status_field_name, constants.STATUS_DELETED)

        return super(Query, self).soft_delete(synchronize_session)


def get_maker(engine, autocommit=True, expire_on_commit=False):
    """Return a SQLAlchemy sessionmaker using the given engine."""
    return sqlalchemy.orm.sessionmaker(bind=engine,
                                       class_=orm.Session,
                                       autocommit=autocommit,
                                       expire_on_commit=expire_on_commit,
                                       query_cls=Query)


# NOTE(uglide): Monkey patch oslo_db get_maker() function to use custom Query
orm.get_maker = get_maker
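
# NOTE(editor): because of the monkey patch above, every session produced via
# oslo_db's get_maker() (and therefore via meteos.db.sqlalchemy.api's
# get_session()) hands out this Query subclass, so bulk soft deletes are
# available on any query. Hypothetical usage:
#
#     session.query(models.Learning).filter_by(id=learning_id).soft_delete()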
25
meteos/engine/__init__.py
Normal file
@@ -0,0 +1,25 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# Importing full names to not pollute the namespace and cause possible
# collisions with use of 'from meteos.engine import <foo>' elsewhere.
import oslo_utils.importutils as import_utils

from meteos.common import config

CONF = config.CONF

API = import_utils.import_class(CONF.learning_api_class)
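
# NOTE(editor): learning_api_class is resolved at import time, so the module
# attribute API is a class object. A hypothetical consumer (assuming the
# config option defaults to meteos.engine.api.API) would do:
#
#     from meteos import engine
#     learning_api = engine.API()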
470
meteos/engine/api.py
Normal file
@@ -0,0 +1,470 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
# Copyright (c) 2015 Tom Barron. All rights reserved.
# Copyright (c) 2015 Mirantis Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Handles all requests relating to learnings.
"""

from oslo_config import cfg
from oslo_log import log
from oslo_utils import excutils
from oslo_utils import strutils
from oslo_utils import timeutils
import six

from meteos.common import constants
from meteos.db import base
from meteos import exception
from meteos.i18n import _, _LE, _LI, _LW
from meteos import policy
from meteos.engine import rpcapi as engine_rpcapi
from meteos import utils

LOG = log.getLogger(__name__)


class API(base.Base):

    """API for interacting with the learning manager."""

    def __init__(self, db_driver=None):
        self.engine_rpcapi = engine_rpcapi.LearningAPI()
        super(API, self).__init__(db_driver)

    def get_all_templates(self, context, search_opts=None,
                          sort_key='created_at', sort_dir='desc'):
        policy.check_policy(context, 'template', 'get_all')

        if search_opts is None:
            search_opts = {}

        LOG.debug("Searching for templates by: %s", six.text_type(search_opts))

        project_id = context.project_id

        templates = self.db.template_get_all_by_project(context, project_id,
                                                        sort_key=sort_key,
                                                        sort_dir=sort_dir)

        if search_opts:
            results = []
            for s in templates:
                # values in search_opts can be only strings
                if all(s.get(k, None) == v for k, v in search_opts.items()):
                    results.append(s)
            templates = results
        return templates
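
    # NOTE(editor): the search_opts handling above is a client-side exact
    # match applied after the DB layer returns its rows, and values are
    # compared as strings. Hypothetical call:
    #
    #     api.get_all_templates(ctx, search_opts={'status': 'available'})
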
    def get_template(self, context, template_id):
        rv = self.db.template_get(context, template_id)
        return rv

    def create_template(self, context, name, description, image_id=None,
                        master_nodes_num=None, master_flavor_id=None,
                        worker_nodes_num=None, worker_flavor_id=None,
                        spark_version=None, floating_ip_pool=None):
        """Create new Template."""
        policy.check_policy(context, 'template', 'create')

        template = {'id': None,
                    'user_id': context.user_id,
                    'project_id': context.project_id,
                    'display_name': name,
                    'display_description': description,
                    'image_id': image_id,
                    'master_nodes_num': master_nodes_num,
                    'master_flavor_id': master_flavor_id,
                    'worker_nodes_num': worker_nodes_num,
                    'worker_flavor_id': worker_flavor_id,
                    'spark_version': spark_version,
                    'floating_ip_pool': floating_ip_pool,
                    }

        try:
            result = self.db.template_create(context, template)
            self.engine_rpcapi.create_template(context, result)
        except Exception:
            with excutils.save_and_reraise_exception():
                self.db.template_delete(context, result['id'])

        # Retrieve the template with instance details
        template = self.db.template_get(context, result['id'])

        return template

    def delete_template(self, context, id, force=False):
        """Delete template."""

        policy.check_policy(context, 'template', 'delete')

        template = self.db.template_get(context, id)

        statuses = (constants.STATUS_AVAILABLE, constants.STATUS_ERROR,
                    constants.STATUS_INACTIVE)
        if not (force or template['status'] in statuses):
            msg = _("Learning status must be one of %(statuses)s") % {
                "statuses": statuses}
            raise exception.InvalidLearning(reason=msg)

        result = self.engine_rpcapi.delete_template(context, id)
    def get_all_experiments(self, context, search_opts=None,
                            sort_key='created_at', sort_dir='desc'):

        policy.check_policy(context, 'experiment', 'get_all')

        if search_opts is None:
            search_opts = {}

        LOG.debug("Searching for experiments by: %s",
                  six.text_type(search_opts))

        project_id = context.project_id

        experiments = self.db.experiment_get_all_by_project(
            context, project_id,
            sort_key=sort_key, sort_dir=sort_dir)

        if search_opts:
            results = []
            for s in experiments:
                # values in search_opts can be only strings
                if all(s.get(k, None) == v for k, v in search_opts.items()):
                    results.append(s)
            experiments = results
        return experiments

    def get_experiment(self, context, experiment_id):
        rv = self.db.experiment_get(context, experiment_id)
        return rv

    def create_experiment(self, context, name, description, template_id,
                          key_name, neutron_management_network):
        """Create new Experiment."""
        policy.check_policy(context, 'experiment', 'create')

        experiment = {'id': None,
                      'user_id': context.user_id,
                      'project_id': context.project_id,
                      'display_name': name,
                      'display_description': description,
                      'template_id': template_id,
                      'key_name': key_name,
                      'neutron_management_network': neutron_management_network,
                      }

        try:
            result = self.db.experiment_create(context, experiment)
            self.engine_rpcapi.create_experiment(context, result)
            updates = {'status': constants.STATUS_CREATING}

            LOG.info(_LI("Accepted creation of experiment %s."), result['id'])
            self.db.experiment_update(context, result['id'], updates)

        except Exception:
            with excutils.save_and_reraise_exception():
                self.db.experiment_delete(context, result['id'])

        # Retrieve the experiment with instance details
        experiment = self.db.experiment_get(context, result['id'])

        return experiment

    def delete_experiment(self, context, id, force=False):
        """Delete experiment."""

        policy.check_policy(context, 'experiment', 'delete')

        experiment = self.db.experiment_get(context, id)

        statuses = (constants.STATUS_AVAILABLE, constants.STATUS_ERROR,
                    constants.STATUS_INACTIVE)
        if not (force or experiment['status'] in statuses):
            msg = _("Learning status must be one of %(statuses)s") % {
                "statuses": statuses}
            raise exception.InvalidLearning(reason=msg)

        result = self.engine_rpcapi.delete_experiment(context, id)
    def get_all_datasets(self, context, search_opts=None,
                         sort_key='created_at', sort_dir='desc'):

        policy.check_policy(context, 'dataset', 'get_all')

        if search_opts is None:
            search_opts = {}

        LOG.debug("Searching for datasets by: %s", six.text_type(search_opts))

        project_id = context.project_id

        datasets = self.db.dataset_get_all_by_project(context,
                                                      project_id,
                                                      sort_key=sort_key,
                                                      sort_dir=sort_dir)

        if search_opts:
            results = []
            for s in datasets:
                # values in search_opts can be only strings
                if all(s.get(k, None) == v for k, v in search_opts.items()):
                    results.append(s)
            datasets = results
        return datasets

    def get_dataset(self, context, dataset_id):
        rv = self.db.dataset_get(context, dataset_id)
        return rv

    def create_dataset(self, context, name, description, method,
                       source_dataset_url, params, template_id,
                       job_template_id, experiment_id, cluster_id,
                       swift_tenant, swift_username, swift_password):
        """Create a Dataset"""
        policy.check_policy(context, 'dataset', 'create')

        dataset = {'id': None,
                   'display_name': name,
                   'display_description': description,
                   'method': method,
                   'source_dataset_url': source_dataset_url,
                   'user_id': context.user_id,
                   'project_id': context.project_id,
                   'experiment_id': experiment_id,
                   'params': params,
                   'cluster_id': cluster_id
                   }

        try:
            result = self.db.dataset_create(context, dataset)
            result['template_id'] = template_id
            result['job_template_id'] = job_template_id
            result['swift_tenant'] = swift_tenant
            result['swift_username'] = swift_username
            result['swift_password'] = swift_password
            self.engine_rpcapi.create_dataset(context, result)
            updates = {'status': constants.STATUS_CREATING}

            LOG.info(_LI("Accepted parsing of dataset %s."), result['id'])
            self.db.dataset_update(context, result['id'], updates)
        except Exception:
            with excutils.save_and_reraise_exception():
                self.db.dataset_delete(context, result['id'])

        # Retrieve the dataset with instance details
        dataset = self.db.dataset_get(context, result['id'])

        return dataset

    def delete_dataset(self, context, id, force=False):
        """Delete dataset."""

        policy.check_policy(context, 'dataset', 'delete')

        dataset = self.db.dataset_get(context, id)

        statuses = (constants.STATUS_AVAILABLE, constants.STATUS_ERROR,
                    constants.STATUS_INACTIVE)
        if not (force or dataset['status'] in statuses):
            msg = _("Learning status must be one of %(statuses)s") % {
                "statuses": statuses}
            raise exception.InvalidLearning(reason=msg)

        result = self.engine_rpcapi.delete_dataset(context,
                                                   dataset['cluster_id'],
                                                   dataset['job_id'],
                                                   id)
    def get_all_models(self, context, search_opts=None, sort_key='created_at',
                       sort_dir='desc'):
        policy.check_policy(context, 'model', 'get_all')

        if search_opts is None:
            search_opts = {}

        LOG.debug("Searching for models by: %s", six.text_type(search_opts))

        project_id = context.project_id

        models = self.db.model_get_all_by_project(context,
                                                  project_id,
                                                  sort_key=sort_key,
                                                  sort_dir=sort_dir)

        if search_opts:
            results = []
            for s in models:
                # values in search_opts can be only strings
                if all(s.get(k, None) == v for k, v in search_opts.items()):
                    results.append(s)
            models = results
        return models

    def get_model(self, context, model_id):
        rv = self.db.model_get(context, model_id)
        return rv

    def create_model(self, context, name, description, source_dataset_url,
                     dataset_format, model_type, model_params, template_id,
                     job_template_id, experiment_id, cluster_id,
                     swift_tenant, swift_username, swift_password):
        """Create a Model"""
        policy.check_policy(context, 'model', 'create')

        model = {'id': None,
                 'display_name': name,
                 'display_description': description,
                 'source_dataset_url': source_dataset_url,
                 'dataset_format': dataset_format,
                 'user_id': context.user_id,
                 'project_id': context.project_id,
                 'model_type': model_type,
                 'model_params': model_params,
                 'experiment_id': experiment_id,
                 'cluster_id': cluster_id
                 }

        try:
            result = self.db.model_create(context, model)
            result['job_template_id'] = job_template_id
            result['template_id'] = template_id
            result['swift_tenant'] = swift_tenant
            result['swift_username'] = swift_username
            result['swift_password'] = swift_password
            self.engine_rpcapi.create_model(context, result)
            updates = {'status': constants.STATUS_CREATING}

            LOG.info(_LI("Accepted creation of model %s."), result['id'])
            self.db.model_update(context, result['id'], updates)
        except Exception:
            with excutils.save_and_reraise_exception():
                self.db.model_delete(context, result['id'])

        # Retrieve the model with instance details
        model = self.db.model_get(context, result['id'])

        return model

    def delete_model(self, context, id, force=False):
        """Delete model."""

        policy.check_policy(context, 'model', 'delete')

        model = self.db.model_get(context, id)

        statuses = (constants.STATUS_AVAILABLE, constants.STATUS_ERROR,
                    constants.STATUS_INACTIVE)
        if not (force or model['status'] in statuses):
            msg = _("Learning status must be one of %(statuses)s") % {
                "statuses": statuses}
            raise exception.InvalidLearning(reason=msg)

        result = self.engine_rpcapi.delete_model(context,
                                                 model['cluster_id'],
                                                 model['job_id'],
                                                 id)
    def get_all_learnings(self, context, search_opts=None,
                          sort_key='created_at', sort_dir='desc'):
        policy.check_policy(context, 'learning', 'get_all')

        if search_opts is None:
            search_opts = {}

        LOG.debug("Searching for learnings by: %s", six.text_type(search_opts))

        project_id = context.project_id

        learnings = self.db.learning_get_all_by_project(context,
                                                        project_id,
                                                        sort_key=sort_key,
                                                        sort_dir=sort_dir)

        if search_opts:
            results = []
            for s in learnings:
                # values in search_opts can be only strings
                if all(s.get(k, None) == v for k, v in search_opts.items()):
                    results.append(s)
            learnings = results
        return learnings

    def get_learning(self, context, learning_id):
        rv = self.db.learning_get(context, learning_id)
        return rv

    def create_learning(self, context, name, description, model_id, method,
                        args, template_id, job_template_id,
                        experiment_id, cluster_id):
        """Create a Learning"""
        policy.check_policy(context, 'learning', 'create')
        model = self.db.model_get(context, model_id)

        learning = {'id': None,
                    'display_name': name,
                    'display_description': description,
                    'model_id': model_id,
                    'model_type': model.model_type,
                    'user_id': context.user_id,
                    'project_id': context.project_id,
                    'method': method,
                    'args': args,
                    'job_template_id': job_template_id,
                    'experiment_id': experiment_id,
                    'cluster_id': cluster_id
                    }

        try:
            result = self.db.learning_create(context, learning)
            result['template_id'] = template_id
            result['job_template_id'] = job_template_id
            result['cluster_id'] = cluster_id
            result['dataset_format'] = model.dataset_format
            self.engine_rpcapi.create_learning(context, result)
            updates = {'status': constants.STATUS_CREATING}

            LOG.info(_LI("Accepted creation of learning %s."), result['id'])
            self.db.learning_update(context, result['id'], updates)
        except Exception:
            with excutils.save_and_reraise_exception():
                self.db.learning_delete(context, result['id'])

        # Retrieve the learning with instance details
        learning = self.db.learning_get(context, result['id'])

        return learning

    def delete_learning(self, context, id, force=False):
        """Delete learning."""

        policy.check_policy(context, 'learning', 'delete')

        learning = self.db.learning_get(context, id)

        statuses = (constants.STATUS_AVAILABLE, constants.STATUS_ERROR,
                    constants.STATUS_INACTIVE)
        if not (force or learning['status'] in statuses):
            msg = _("Learning status must be one of %(statuses)s") % {
                "statuses": statuses}
            raise exception.InvalidLearning(reason=msg)

        result = self.engine_rpcapi.delete_learning(context,
                                                    learning['cluster_id'],
                                                    learning['job_id'],
                                                    id)
81
meteos/engine/configuration.py
Normal file
@@ -0,0 +1,81 @@
#!/usr/bin/env python

# Copyright (c) 2012 Rackspace Hosting
# Copyright (c) 2013 NetApp
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Configuration support for all drivers.

This module allows support for setting configurations either from default
or from a particular CONF group, to be able to set multiple configurations
for a given set of values.

For instance, two generic configurations can be set by naming them in groups as

    [generic1]
    learning_backend_name=generic-backend-1
    ...

    [generic2]
    learning_backend_name=generic-backend-2
    ...

And the configuration group name will be passed in so that all calls to
configuration.volume_group within that instance will be mapped to the proper
named group.

This class also ensures the implementation's configuration is grafted into the
option group. This is due to the way cfg works. All cfg options must be defined
and registered in the group in which they are used.
"""

from oslo_config import cfg

CONF = cfg.CONF


class Configuration(object):

    def __init__(self, learning_opts, config_group=None):
        """Graft config values into config group.

        This takes care of grafting the implementation's config values
        into the config group.
        """
        self.config_group = config_group

        # set the local conf so that __call__'s know what to use
        if self.config_group:
            self._ensure_config_values(learning_opts)
            self.local_conf = CONF._get(self.config_group)
        else:
            self.local_conf = CONF

    def _ensure_config_values(self, learning_opts):
        CONF.register_opts(learning_opts,
                           group=self.config_group)

    def append_config_values(self, learning_opts):
        self._ensure_config_values(learning_opts)

    def safe_get(self, value):
        try:
            return self.__getattr__(value)
        except cfg.NoSuchOptError:
            return None

    def __getattr__(self, value):
        return getattr(self.local_conf, value)
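
# NOTE(editor): minimal sketch of the grafting behaviour described in the
# module docstring; the option list and group name here are hypothetical.
#
#     from oslo_config import cfg
#     example_opts = [cfg.StrOpt('learning_backend_name')]
#     conf = Configuration(example_opts, config_group='generic1')
#     conf.safe_get('unknown_option')  # returns None instead of raising
#                                      # cfg.NoSuchOptError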
127
meteos/engine/driver.py
Normal file
@@ -0,0 +1,127 @@
# Copyright 2012 NetApp
# Copyright 2015 Mirantis inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Drivers for learnings.

"""

import six
import time

from oslo_config import cfg
from oslo_log import log

from meteos import exception
from meteos.i18n import _, _LE
from meteos import utils

LOG = log.getLogger(__name__)

ssh_opts = [
    cfg.StrOpt(
        'ssh_user',
        default='ubuntu',
        help='SSH login user.'),
    cfg.StrOpt(
        'ssh_password',
        default='ubuntu',
        help='SSH login password.'),
    cfg.IntOpt(
        'ssh_port',
        default=22,
        help='SSH connection port number.'),
    cfg.IntOpt(
        'ssh_conn_timeout',
        default=60,
        help='Backend server SSH connection timeout.'),
    cfg.IntOpt(
        'ssh_min_pool_conn',
        default=1,
        help='Minimum number of connections in the SSH pool.'),
    cfg.IntOpt(
        'ssh_max_pool_conn',
        default=10,
        help='Maximum number of connections in the SSH pool.'),
]

CONF = cfg.CONF
CONF.register_opts(ssh_opts)
class LearningDriver(object):
|
||||
|
||||
"""Class defines interface of NAS driver."""
|
||||
|
||||
def __init__(self, driver_handles_learning_servers, *args, **kwargs):
|
||||
"""Implements base functionality for learning drivers.
|
||||
|
||||
:param driver_handles_learning_servers: expected boolean value or
|
||||
tuple/list/set of boolean values.
|
||||
There are two possible approaches for learning drivers in Meteos.
|
||||
First is when learning driver is able to handle learning-servers
|
||||
and second when not.
|
||||
Drivers can support either both (indicated by a tuple/set/list with
|
||||
(True, False)) or only one of these approaches. So, it is allowed
|
||||
to be 'True' when learning driver does support handling of learning
|
||||
servers and allowed to be 'False' when it does support usage of
|
||||
unhandled learning-servers that are not tracked by Meteos.
|
||||
Learning drivers are allowed to work only in one of two possible
|
||||
driver modes, that is why only one should be chosen.
|
||||
:param config_opts: tuple, list or set of config option lists
|
||||
that should be registered in driver's configuration right after
|
||||
this attribute is created. Useful for usage with mixin classes.
|
||||
"""
|
||||
super(LearningDriver, self).__init__()
|
||||
self.configuration = kwargs.get('configuration', None)
|
||||
self.initialized = False
|
||||
self._stats = {}
|
||||
|
||||
self.pools = []
|
||||
|
||||
for config_opt_set in kwargs.get('config_opts', []):
|
||||
self.configuration.append_config_values(config_opt_set)
|
||||
|
||||
def create_template(self, context, request_specs):
|
||||
"""Is called to create template."""
|
||||
raise NotImplementedError()
|
||||
|
||||
def delete_template(self, context, request_specs):
|
||||
"""Is called to delete template."""
|
||||
raise NotImplementedError()
|
||||
|
||||
def create_experiment(self, context, request_specs):
|
||||
"""Is called to create experimnet."""
|
||||
raise NotImplementedError()
|
||||
|
||||
def delete_experiment(self, context, request_specs):
|
||||
"""Is called to delete experimnet."""
|
||||
raise NotImplementedError()
|
||||
|
||||
def create_dataset(self, context, request_specs):
|
||||
"""Is called to create dataset."""
|
||||
raise NotImplementedError()
|
||||
|
||||
def delete_dataset(self, context, request_specs):
|
||||
"""Is called to delete dataset."""
|
||||
raise NotImplementedError()
|
||||
|
||||
def create_model(self, context, request_specs):
|
||||
"""Is called to create model."""
|
||||
raise NotImplementedError()
|
||||
|
||||
def delete_model(self, context, request_specs):
|
||||
"""Is called to delete model."""
|
||||
raise NotImplementedError()
|
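
The class above defines the driver contract: subclasses override the
create_*/delete_* hooks and may pass extra option lists via config_opts. A
minimal sketch of a conforming subclass (hypothetical, for illustration only):

# Hypothetical no-op driver showing how the interface is implemented.
from meteos.engine import driver


class NoopLearningDriver(driver.LearningDriver):
    """Toy driver that accepts every request and does nothing."""

    def __init__(self, *args, **kwargs):
        # This driver does not handle learning servers itself.
        super(NoopLearningDriver, self).__init__(False, *args, **kwargs)

    def create_template(self, context, request_specs):
        return {}

    def delete_template(self, context, request_specs):
        pass
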
22
meteos/engine/drivers/__init__.py
Normal file
22
meteos/engine/drivers/__init__.py
Normal file
@ -0,0 +1,22 @@
# Copyright 2012 OpenStack LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
:mod:`meteos.engine.driver` -- Meteos Learning Drivers
=======================================================

.. automodule:: meteos.engine.driver
   :platform: Unix
   :synopsis: Module containing all the Meteos Learning drivers.
"""
468
meteos/engine/drivers/generic.py
Normal file
468
meteos/engine/drivers/generic.py
Normal file
@ -0,0 +1,468 @@
# Copyright (c) 2014 NetApp, Inc.
# All Rights Reserved.
# Copyright (c) 2016 NEC Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Generic Driver for learnings."""

import base64
from eventlet import greenthread
import os
import random
import time

from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log
from oslo_utils import excutils
from oslo_utils import importutils
from oslo_utils import units
import retrying
import six

from meteos.common import constants as const
from meteos import context
from meteos import exception
from meteos.i18n import _, _LE, _LI, _LW
from meteos.engine import driver
from meteos import utils
from meteos import cluster

LOG = log.getLogger(__name__)

learning_opts = [
    cfg.IntOpt(
        'create_experiment_timeout',
        default=600,
        help="Time to wait for creating experiment (seconds)."),
    cfg.IntOpt(
        'execute_job_timeout',
        default=600,
        help='Timeout for executing job (seconds).'),
    cfg.IntOpt(
        'api_retry_interval',
        default=10,
        help='The number of seconds to wait before retrying the request.'),
]


CONF = cfg.CONF
CONF.register_opts(learning_opts)

SAHARA_GROUP = 'sahara'
PLUGIN_NAME = 'spark'
SPARK_USER_NAME = 'ubuntu'
METEOS_JOB_TYPE = 'Spark'


class GenericLearningDriver(driver.LearningDriver):

    """Executes commands relating to Learnings."""

    def __init__(self, *args, **kwargs):
        """Do initialization."""
        super(GenericLearningDriver, self).__init__(
            [False, True], *args, **kwargs)
        self.admin_context = context.get_admin_context()
        self.cluster_api = cluster.API()
        self.sshpool = None

    def _run_ssh(self, ip, cmd_list, check_exit_code=True, attempts=1):
        utils.check_ssh_injection(cmd_list)

        ssh_conn_timeout = self.configuration.ssh_conn_timeout
        ssh_port = self.configuration.ssh_port
        ssh_user = self.configuration.ssh_user
        ssh_password = self.configuration.ssh_password
        min_size = self.configuration.ssh_min_pool_conn
        max_size = self.configuration.ssh_max_pool_conn
        command = ' '.join(cmd_list)

        if not self.sshpool:
            self.sshpool = utils.SSHPool(
                ip,
                ssh_port,
                ssh_conn_timeout,
                ssh_user,
                ssh_password,
                min_size=min_size,
                max_size=max_size)
        last_exception = None
        try:
            with self.sshpool.item() as ssh:
                while attempts > 0:
                    attempts -= 1
                    try:
                        return processutils.ssh_execute(
                            ssh,
                            command,
                            check_exit_code=check_exit_code)
                    except Exception as e:
                        LOG.error(e)
                        last_exception = e
                        greenthread.sleep(random.randint(20, 500) / 100.0)
                try:
                    raise processutils.ProcessExecutionError(
                        exit_code=last_exception.exit_code,
                        stdout=last_exception.stdout,
                        stderr=last_exception.stderr,
                        cmd=last_exception.cmd)
                except AttributeError:
                    raise processutils.ProcessExecutionError(
                        exit_code=-1,
                        stdout="",
                        stderr="Error running SSH command",
                        cmd=command)

        except Exception:
            with excutils.save_and_reraise_exception():
                LOG.error(_LE("Error running SSH command: %s"), command)

    def _delete_hdfs_dir(self, context, cluster_id, dir_name):

        cluster = self.cluster_api.get_cluster(context, cluster_id)
        node_groups = cluster.node_groups

        for node in node_groups:
            if 'master' in node['node_processes']:
                ip = node['instances'][0]['management_ip']

        path = '/user/ubuntu/' + dir_name

        cmd = ['sudo', '-u', 'hdfs', 'hadoop', 'fs', '-rm', '-r', path]

        try:
            self._run_ssh(ip, cmd)
        except Exception:
            pass

    def create_template(self, context, request_specs):
        """Creates Template."""

        image_id = request_specs['image_id']
        master_nodes_num = request_specs['master_nodes_num']
        master_flavor_id = request_specs['master_flavor_id']
        worker_nodes_num = request_specs['worker_nodes_num']
        worker_flavor_id = request_specs['worker_flavor_id']
        floating_ip_pool = request_specs['floating_ip_pool']
        spark_version = request_specs['spark_version']

        master_node_name = 'master-tmpl-' + request_specs['id']
        slave_node_name = 'slave-tmpl-' + request_specs['id']
        cluster_node_name = 'cluster-tmpl-' + request_specs['id']
        job_binary_name = 'meteos-' + request_specs['id'] + '.py'
        job_name = 'meteos-job-' + request_specs['id']

        sahara_image_id = self.cluster_api.image_set(
            context, image_id, SPARK_USER_NAME)
        self.cluster_api.image_tags_add(
            context, sahara_image_id, ['spark', spark_version])

        master_node_id = self.cluster_api.create_node_group_template(
            context, master_node_name, PLUGIN_NAME, spark_version,
            master_flavor_id, ['master', 'namenode'], floating_ip_pool, True)

        slave_node_id = self.cluster_api.create_node_group_template(
            context, slave_node_name, PLUGIN_NAME, spark_version,
            worker_flavor_id, ["slave", "datanode"], floating_ip_pool, True)

        cluster_node_groups = [
            {
                "name": "master",
                "node_group_template_id": master_node_id,
                "count": master_nodes_num
            },
            {
                "name": "workers",
                "node_group_template_id": slave_node_id,
                "count": worker_nodes_num
            }]

        cluster_template_id = self.cluster_api.create_cluster_template(
            context,
            cluster_node_name,
            PLUGIN_NAME, spark_version,
            cluster_node_groups)

        filename = 'meteos-script-' + spark_version + '.py'
        filepath = os.path.dirname(__file__) + '/../../cluster/binary/' + filename

        data = utils.file_open(filepath)

        binary_data_id = self.cluster_api.create_job_binary_data(
            context, job_binary_name, data)

        binary_url = 'internal-db://' + binary_data_id
        binary_id = self.cluster_api.create_job_binary(
            context, job_binary_name, binary_url)
        mains = [binary_id]
        job_template_id = self.cluster_api.create_job_template(
            context, job_name, METEOS_JOB_TYPE, mains)

        response = {'sahara_image_id': sahara_image_id,
                    'master_node_id': master_node_id,
                    'slave_node_id': slave_node_id,
                    'binary_data_id': binary_data_id,
                    'binary_id': binary_id,
                    'cluster_template_id': cluster_template_id,
                    'job_template_id': job_template_id}

        return response

    def delete_template(self, context, request_specs):
        """Delete Template."""

        self.cluster_api.delete_job_template(
            context, request_specs['job_template_id'])
        self.cluster_api.delete_job_binary(context, request_specs['binary_id'])
        self.cluster_api.delete_job_binary_data(
            context, request_specs['binary_data_id'])
        self.cluster_api.delete_cluster_template(
            context, request_specs['cluster_template_id'])
        self.cluster_api.delete_node_group_template(
            context, request_specs['slave_node_id'])
        self.cluster_api.delete_node_group_template(
            context, request_specs['master_node_id'])
        # self.cluster_api.image_remove(context,
        #                               request_specs['sahara_image_id'])

    def create_experiment(self, context, request_specs,
                          image_id, cluster_id, spark_version):
        """Creates Experiment."""

        cluster_name = 'cluster-' + request_specs['id'][0:8]
        key_name = request_specs['key_name']
        neutron_management_network = request_specs[
            'neutron_management_network']

        cluster_id = self.cluster_api.create_cluster(
            context,
            cluster_name,
            PLUGIN_NAME,
            spark_version,
            image_id,
            cluster_id,
            key_name,
            neutron_management_network)

        return cluster_id

    def delete_experiment(self, context, id):
        """Delete Experiment."""
        self.cluster_api.delete_cluster(context, id)

    def wait_for_cluster_create(self, context, id):

        starttime = time.time()
        deadline = starttime + self.configuration.create_experiment_timeout
        interval = self.configuration.api_retry_interval
        tries = 0

        while True:
            cluster = self.cluster_api.get_cluster(context, id)

            if cluster.status == const.STATUS_SAHARA_ACTIVE:
                break

            tries += 1
            now = time.time()
            if now > deadline:
                msg = _("Timeout trying to create experiment "
                        "%s") % id
                raise exception.Invalid(reason=msg)

            LOG.debug("Waiting for cluster to complete: Current status: %s",
                      cluster.status)
            time.sleep(interval)

    def get_job_result(self, context, job_id, template_id, cluster_id):

        stdout = ""
        stderr = ""

        starttime = time.time()
        deadline = starttime + self.configuration.execute_job_timeout
        interval = self.configuration.api_retry_interval
        tries = 0

        while True:
            job = self.cluster_api.get_job(context, job_id)

            if job.info['status'] == const.STATUS_JOB_SUCCESS:
                stdout = self._get_job_result(context,
                                              template_id,
                                              cluster_id,
                                              job_id)
                break
            elif job.info['status'] == const.STATUS_JOB_ERROR:
                stderr = self._get_job_result(context,
                                              template_id,
                                              cluster_id,
                                              job_id)
                break

            tries += 1
            now = time.time()
            if now > deadline:
                msg = _("Timeout trying to execute job "
                        "%s") % job_id
                raise exception.Invalid(reason=msg)

            LOG.debug("Waiting for job to complete: Current status: %s",
                      job.info['status'])
            time.sleep(interval)
        return stdout, stderr

    def create_dataset(self, context, request_specs):
        """Create Dataset."""

        job_args = {}

        job_template_id = request_specs['job_template_id']
        cluster_id = request_specs['cluster_id']
        source_dataset_url = request_specs['source_dataset_url']

        job_args['method'] = request_specs['method'] + '_dataset'
        job_args['source_dataset_url'] = source_dataset_url

        dataset_args = {'params': request_specs['params']}
        job_args['dataset'] = dataset_args

        swift_args = {}

        if source_dataset_url.count('swift'):
            swift_args['tenant'] = request_specs['swift_tenant']
            swift_args['username'] = request_specs['swift_username']
            swift_args['password'] = request_specs['swift_password']

        job_args['swift'] = swift_args

        model_args = {'type': None}
        job_args['model'] = model_args

        LOG.debug("Execute job with args: %s", job_args)

        configs = {'configs': {'edp.java.main_class': 'sahara.dummy',
                               'edp.spark.adapt_for_swift': True},
                   'args': [request_specs['id'],
                            base64.b64encode(str(job_args))]}

        result = self.cluster_api.job_create(
            context, job_template_id, cluster_id, configs)

        return result

    def delete_dataset(self, context, cluster_id, job_id, id):
        """Delete Dataset."""

        dir_name = 'data-' + id

        self._delete_hdfs_dir(context, cluster_id, dir_name)
        self.cluster_api.job_delete(context, job_id)

    def create_model(self, context, request_specs):
        """Create Model."""

        job_args = {}

        job_template_id = request_specs['job_template_id']
        cluster_id = request_specs['cluster_id']
        source_dataset_url = request_specs['source_dataset_url']

        job_args['method'] = 'create_model'
        job_args['source_dataset_url'] = source_dataset_url
        job_args['dataset_format'] = request_specs['dataset_format']

        model_args = {'type': request_specs['model_type'],
                      'params': request_specs['model_params']}
        job_args['model'] = model_args

        swift_args = {}

        if source_dataset_url.count('swift'):
            swift_args['tenant'] = request_specs['swift_tenant']
            swift_args['username'] = request_specs['swift_username']
            swift_args['password'] = request_specs['swift_password']

        job_args['swift'] = swift_args

        LOG.debug("Execute job with args: %s", job_args)

        configs = {'configs': {'edp.java.main_class': 'sahara.dummy',
                               'edp.spark.adapt_for_swift': True},
                   'args': [request_specs['id'],
                            base64.b64encode(str(job_args))]}

        result = self.cluster_api.job_create(
            context, job_template_id, cluster_id, configs)

        return result

    def delete_model(self, context, cluster_id, job_id, id):
        """Delete Model."""

        dir_name = 'model-' + id

        self._delete_hdfs_dir(context, cluster_id, dir_name)
        self.cluster_api.job_delete(context, job_id)

    def create_learning(self, context, request_specs):
        """Create Learning."""

        job_args = {}

        job_template_id = request_specs['job_template_id']
        cluster_id = request_specs['cluster_id']
        job_args['method'] = request_specs['method']
        job_args['dataset_format'] = request_specs['dataset_format']

        model_args = {'type': request_specs['model_type']}
        job_args['model'] = model_args

        learning_args = {'params': request_specs['args']}
        job_args['learning'] = learning_args

        LOG.debug("Execute job with args: %s", job_args)

        configs = {'configs': {'edp.java.main_class': 'sahara.dummy',
                               'edp.spark.adapt_for_swift': True},
                   'args': [request_specs['model_id'],
                            base64.b64encode(str(job_args))]}

        result = self.cluster_api.job_create(
            context, job_template_id, cluster_id, configs)

        return result

    def delete_learning(self, context, cluster_id, job_id, id):
        """Delete Learning."""

        self.cluster_api.job_delete(context, job_id)

    def _get_job_result(self, context, template_id, cluster_id, job_id):

        result = {}
        cluster = self.cluster_api.get_cluster(context, cluster_id)
        node_groups = cluster.node_groups

        for node in node_groups:
            if 'master' in node['node_processes']:
                ip = node['instances'][0]['management_ip']

        path = '/tmp/spark-edp/meteos-job-' + \
               template_id + '/' + job_id + '/stdout'

        stdout, stderr = self._run_ssh(ip, ['cat', path])

        return stdout
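
The driver above serializes job_args with base64.b64encode(str(job_args))
before handing it to Sahara. Note that this only works directly on Python 2,
where str() yields bytes. A sketch of the round-trip, with the Python 3
encoding made explicit; the decoding side is an assumption about what the
cluster-side meteos-script-*.py does, which is not part of this file:

# Round-trip sketch for the job-argument encoding used above.
import ast
import base64

job_args = {'method': 'create_model', 'dataset_format': 'csv'}

# On Python 3 the textual repr must be encoded to bytes first.
encoded = base64.b64encode(str(job_args).encode('utf-8'))

# Presumed cluster-side decoding (illustrative only).
decoded = ast.literal_eval(base64.b64decode(encoded).decode('utf-8'))
assert decoded == job_args
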
340
meteos/engine/manager.py
Normal file
340
meteos/engine/manager.py
Normal file
@ -0,0 +1,340 @@
# Copyright (c) 2014 NetApp Inc.
# All Rights Reserved.
# Copyright (c) 2016 NEC Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Learning manager manages creating and deleting learning resources.

**Related Flags**

:learning_driver: Used by :class:`LearningManager`.
"""

import copy
import datetime
import functools

from oslo_config import cfg
from oslo_log import log
from oslo_serialization import jsonutils
from oslo_service import periodic_task
from oslo_utils import excutils
from oslo_utils import importutils
from oslo_utils import strutils
from oslo_utils import timeutils
import six

from meteos.common import constants
from meteos import context
from meteos import exception
from meteos.i18n import _, _LE, _LI, _LW
from meteos import manager
from meteos.engine import api
import meteos.engine.configuration
from meteos.engine import rpcapi as engine_rpcapi
from meteos import utils

LOG = log.getLogger(__name__)

engine_manager_opts = [
    cfg.StrOpt('learning_driver',
               default='meteos.engine.drivers.generic.GenericLearningDriver',
               help='Driver to use for learning creation.'),
]

CONF = cfg.CONF
CONF.register_opts(engine_manager_opts)
CONF.import_opt('periodic_interval', 'meteos.service')


class LearningManager(manager.Manager):

    """Manages Learning resources."""

    RPC_API_VERSION = '1.0'

    def __init__(self, learning_driver=None, service_name=None,
                 *args, **kwargs):
        """Load the driver from args, or from flags."""
        self.configuration = meteos.engine.configuration.Configuration(
            engine_manager_opts,
            config_group=service_name)
        super(LearningManager, self).__init__(*args, **kwargs)

        if not learning_driver:
            learning_driver = self.configuration.learning_driver

        self.driver = importutils.import_object(
            learning_driver, configuration=self.configuration,
        )

    def _update_status(self, context, resource_name,
                       id, job_id, stdout, stderr):

        if stderr:
            status = constants.STATUS_ERROR
            LOG.error(_LE("Failed to create %s %s."), resource_name, id)
        else:
            status = constants.STATUS_AVAILABLE
            LOG.info(_LI("%s %s created successfully."), resource_name, id)

        updates = {
            'status': status,
            'job_id': job_id,
            'launched_at': timeutils.utcnow(),
            'stderr': stderr,
        }

        if resource_name == 'DataSet':
            updates['head'] = stdout
            self.db.dataset_update(context, id, updates)

        elif resource_name == 'Model':
            updates['stdout'] = stdout
            self.db.model_update(context, id, updates)

        elif resource_name == 'Learning':
            updates['stdout'] = stdout.rstrip('\n')
            self.db.learning_update(context, id, updates)

    def create_template(self, context, request_spec=None):
        """Creates a template."""
        context = context.elevated()

        LOG.debug("Create template with request: %s", request_spec)

        try:
            response = self.driver.create_template(
                context, request_spec)

        except Exception as e:
            with excutils.save_and_reraise_exception():
                LOG.error(_LE("template %s failed on creation."),
                          request_spec['id'])
                self.db.template_update(
                    context, request_spec['id'],
                    {'status': constants.STATUS_ERROR}
                )

        LOG.info(_LI("template %s created successfully."),
                 request_spec['id'])

        updates = response
        updates['status'] = constants.STATUS_AVAILABLE
        updates['launched_at'] = timeutils.utcnow()

        self.db.template_update(context, request_spec['id'], updates)

    def delete_template(self, context, id=None):
        """Deletes a template."""
        context = context.elevated()

        try:
            template = self.db.template_get(context, id)
            self.driver.delete_template(context, template)

        except Exception as e:
            with excutils.save_and_reraise_exception():
                LOG.error(_LE("Template %s failed on deletion."), id)
                self.db.template_update(
                    context, id,
                    {'status': constants.STATUS_ERROR_DELETING}
                )

        LOG.info(_LI("Template %s deleted successfully."), id)
        self.db.template_delete(context, id)

    def create_experiment(self, context, request_spec=None):
        """Creates an Experiment."""
        context = context.elevated()

        LOG.debug("Create experiment with request: %s", request_spec)

        try:
            template = self.db.template_get(
                context, request_spec['template_id'])
            cluster_id = self.driver.create_experiment(
                context, request_spec, template.sahara_image_id,
                template.cluster_template_id, template.spark_version)
            self.driver.wait_for_cluster_create(context, cluster_id)

            experiment = request_spec

        except Exception as e:
            with excutils.save_and_reraise_exception():
                LOG.error(_LE("Experiment %s failed on creation."),
                          request_spec['id'])
                self.db.experiment_update(
                    context, request_spec['id'],
                    {'status': constants.STATUS_ERROR}
                )

        LOG.info(_LI("Experiment %s created successfully."),
                 experiment['id'])
        updates = {
            'status': constants.STATUS_AVAILABLE,
            'launched_at': timeutils.utcnow(),
            'cluster_id': cluster_id,
        }
        self.db.experiment_update(context, experiment['id'], updates)

    def delete_experiment(self, context, id=None):
        """Deletes an experiment."""
        context = context.elevated()

        try:
            experiment = self.db.experiment_get(context, id)
            self.driver.delete_experiment(context, experiment['cluster_id'])

        except Exception as e:
            with excutils.save_and_reraise_exception():
                LOG.error(_LE("Experiment %s failed on deletion."), id)
                self.db.experiment_update(
                    context, id,
                    {'status': constants.STATUS_ERROR_DELETING}
                )

        LOG.info(_LI("Experiment %s deleted successfully."), id)
        self.db.experiment_delete(context, id)

    def create_dataset(self, context, request_spec=None):
        """Create a Dataset."""
        context = context.elevated()

        LOG.debug("Create dataset with request: %s", request_spec)

        try:
            job_id = self.driver.create_dataset(context, request_spec)
            stdout, stderr = self.driver.get_job_result(
                context,
                job_id,
                request_spec['template_id'],
                request_spec['cluster_id'])

        except Exception as e:
            with excutils.save_and_reraise_exception():
                LOG.error(_LE("Dataset %s failed on creation."),
                          request_spec['id'])
                self.db.dataset_update(
                    context, request_spec['id'],
                    {'status': constants.STATUS_ERROR}
                )

        self._update_status(context, 'DataSet', request_spec['id'],
                            job_id, stdout, stderr)

    def delete_dataset(self, context, cluster_id=None, job_id=None, id=None):
        """Deletes a Dataset."""
        context = context.elevated()

        try:
            self.driver.delete_dataset(context, cluster_id, job_id, id)

        except Exception as e:
            with excutils.save_and_reraise_exception():
                LOG.error(_LE("Dataset %s failed on deletion."), id)
                self.db.dataset_update(
                    context, id,
                    {'status': constants.STATUS_ERROR_DELETING}
                )

        LOG.info(_LI("Dataset %s deleted successfully."), id)
        self.db.dataset_delete(context, id)

    def create_model(self, context, request_spec=None):
        """Create a Model."""
        context = context.elevated()

        LOG.debug("Create model with request: %s", request_spec)

        try:
            job_id = self.driver.create_model(context, request_spec)
            stdout, stderr = self.driver.get_job_result(
                context,
                job_id,
                request_spec['template_id'],
                request_spec['cluster_id'])

        except Exception as e:
            with excutils.save_and_reraise_exception():
                LOG.error(_LE("Model %s failed on creation."),
                          request_spec['id'])
                self.db.model_update(
                    context, request_spec['id'],
                    {'status': constants.STATUS_ERROR}
                )

        self._update_status(context, 'Model', request_spec['id'],
                            job_id, stdout, stderr)

    def delete_model(self, context, cluster_id=None, job_id=None, id=None):
        """Deletes a Model."""
        context = context.elevated()

        try:
            self.driver.delete_model(context, cluster_id, job_id, id)

        except Exception as e:
            with excutils.save_and_reraise_exception():
                LOG.error(_LE("Model %s failed on deletion."), id)
                self.db.model_update(
                    context, id,
                    {'status': constants.STATUS_ERROR_DELETING}
                )

        LOG.info(_LI("Model %s deleted successfully."), id)
        self.db.model_delete(context, id)

    def create_learning(self, context, request_spec=None):
        """Create a Learning."""
        context = context.elevated()

        LOG.debug("Create learning with request: %s", request_spec)

        try:
            job_id = self.driver.create_learning(context, request_spec)
            stdout, stderr = self.driver.get_job_result(
                context,
                job_id,
                request_spec['template_id'],
                request_spec['cluster_id'])

        except Exception as e:
            with excutils.save_and_reraise_exception():
                LOG.error(_LE("Learning %s failed on creation."),
                          request_spec['id'])
                self.db.learning_update(
                    context, request_spec['id'],
                    {'status': constants.STATUS_ERROR}
                )

        self._update_status(context, 'Learning', request_spec['id'],
                            job_id, stdout, stderr)

    def delete_learning(self, context, cluster_id=None, job_id=None, id=None):
        """Deletes a Learning."""
        context = context.elevated()

        try:
            self.driver.delete_learning(context, cluster_id, job_id, id)

        except Exception as e:
            with excutils.save_and_reraise_exception():
                LOG.error(_LE("Learning %s failed on deletion."), id)
                self.db.learning_update(
                    context, id,
                    {'status': constants.STATUS_ERROR_DELETING}
                )

        self.db.learning_delete(context, id)
        LOG.info(_LI("Learning %s deleted successfully."), id)
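
Every create_*/delete_* method above follows the same error-handling shape: do
the driver call, and on failure mark the database row ERROR before re-raising
for the RPC layer. A condensed sketch of that pattern (names are illustrative):

# Condensed form of the manager error-handling pattern.
from oslo_utils import excutils


def guarded_create(db_update, do_create):
    """Run do_create(); on failure flag the resource, then re-raise."""
    try:
        return do_create()
    except Exception:
        with excutils.save_and_reraise_exception():
            # Executed before the original exception propagates.
            db_update({'status': 'error'})
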
118
meteos/engine/rpcapi.py
Normal file
118
meteos/engine/rpcapi.py
Normal file
@ -0,0 +1,118 @@
# Copyright 2012, Intel, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Client side of the learning RPC API.
"""

from oslo_config import cfg
import oslo_messaging as messaging
from oslo_serialization import jsonutils

from meteos import rpc

CONF = cfg.CONF


class LearningAPI(object):
    """Client side of the learning rpc API.

    API version history:

        1.0 - Initial version.
    """

    BASE_RPC_API_VERSION = '1.0'

    def __init__(self, topic=None):
        super(LearningAPI, self).__init__()
        target = messaging.Target(topic=CONF.learning_topic,
                                  version=self.BASE_RPC_API_VERSION)
        self._client = rpc.get_client(target, version_cap='1.12')

    @staticmethod
    def make_msg(method, **kwargs):
        return method, kwargs

    def call(self, ctxt, msg, version=None, timeout=None):
        method, kwargs = msg

        if version is not None:
            client = self._client.prepare(version=version)
        else:
            client = self._client

        if timeout is not None:
            client = client.prepare(timeout=timeout)

        return client.call(ctxt, method, **kwargs)

    def cast(self, ctxt, msg, version=None):
        method, kwargs = msg
        if version is not None:
            client = self._client.prepare(version=version)
        else:
            client = self._client
        return client.cast(ctxt, method, **kwargs)

    def create_template(self, context, request_spec):
        request_spec_p = jsonutils.to_primitive(request_spec)
        return self.call(context, self.make_msg('create_template',
                                                request_spec=request_spec_p))

    def delete_template(self, context, id):
        return self.call(context, self.make_msg('delete_template',
                                                id=id))

    def create_experiment(self, context, request_spec):
        request_spec_p = jsonutils.to_primitive(request_spec)
        return self.cast(context, self.make_msg('create_experiment',
                                                request_spec=request_spec_p))

    def delete_experiment(self, context, id):
        return self.cast(context, self.make_msg('delete_experiment',
                                                id=id))

    def create_dataset(self, context, request_spec):
        request_spec_p = jsonutils.to_primitive(request_spec)
        return self.cast(context, self.make_msg('create_dataset',
                                                request_spec=request_spec_p))

    def delete_dataset(self, context, cluster_id, job_id, id):
        return self.call(context, self.make_msg('delete_dataset',
                                                cluster_id=cluster_id,
                                                job_id=job_id,
                                                id=id))

    def create_model(self, context, request_spec):
        request_spec_p = jsonutils.to_primitive(request_spec)
        return self.cast(context, self.make_msg('create_model',
                                                request_spec=request_spec_p))

    def delete_model(self, context, cluster_id, job_id, id):
        return self.call(context, self.make_msg('delete_model',
                                                cluster_id=cluster_id,
                                                job_id=job_id,
                                                id=id))

    def create_learning(self, context, request_spec):
        request_spec_p = jsonutils.to_primitive(request_spec)
        return self.cast(context, self.make_msg('create_learning',
                                                request_spec=request_spec_p))

    def delete_learning(self, context, cluster_id, job_id, id):
        return self.call(context, self.make_msg('delete_learning',
                                                cluster_id=cluster_id,
                                                job_id=job_id,
                                                id=id))
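
call() blocks until the engine returns a result, while cast() is
fire-and-forget; the class above uses casts for the long-running create_*
operations and calls for deletes. A usage sketch, assuming ctxt is a meteos
RequestContext and CONF.learning_topic is registered:

# Illustrative callers for LearningAPI; payload values are made up.
from meteos.engine import rpcapi


def submit_dataset(ctxt, request_spec):
    api = rpcapi.LearningAPI()
    api.create_dataset(ctxt, request_spec)  # cast: returns immediately


def remove_dataset(ctxt, cluster_id, job_id, dataset_id):
    api = rpcapi.LearningAPI()
    # call: blocks until the engine confirms the deletion
    return api.delete_dataset(ctxt, cluster_id, job_id, dataset_id)
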
194
meteos/exception.py
Normal file
194
meteos/exception.py
Normal file
@ -0,0 +1,194 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Meteos base exception handling.

Includes decorator for re-raising Meteos-type exceptions.

SHOULD include dedicated exception logging.

"""
import re

from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log
import six
import webob.exc

from meteos.i18n import _
from meteos.i18n import _LE

LOG = log.getLogger(__name__)

exc_log_opts = [
    cfg.BoolOpt('fatal_exception_format_errors',
                default=False,
                help='Whether to make exception message format errors fatal.'),
]

CONF = cfg.CONF
CONF.register_opts(exc_log_opts)


ProcessExecutionError = processutils.ProcessExecutionError


class ConvertedException(webob.exc.WSGIHTTPException):
    def __init__(self, code=400, title="", explanation=""):
        self.code = code
        self.title = title
        self.explanation = explanation
        super(ConvertedException, self).__init__()


class Error(Exception):
    pass


class MeteosException(Exception):
    """Base Meteos Exception

    To correctly use this class, inherit from it and define
    a 'message' property. That message will get printf'd
    with the keyword arguments provided to the constructor.

    """
    message = _("An unknown exception occurred.")
    code = 500
    headers = {}
    safe = False

    def __init__(self, message=None, detail_data={}, **kwargs):
        self.kwargs = kwargs
        self.detail_data = detail_data

        if 'code' not in self.kwargs:
            try:
                self.kwargs['code'] = self.code
            except AttributeError:
                pass
        for k, v in self.kwargs.items():
            if isinstance(v, Exception):
                self.kwargs[k] = six.text_type(v)

        if not message:
            try:
                message = self.message % kwargs

            except Exception:
                # kwargs doesn't match a variable in the message
                # log the issue and the kwargs
                LOG.exception(_LE('Exception in string format operation.'))
                for name, value in kwargs.items():
                    LOG.error(_LE("%(name)s: %(value)s"), {
                        'name': name, 'value': value})
                if CONF.fatal_exception_format_errors:
                    raise
                else:
                    # at least get the core message out if something happened
                    message = self.message
        elif isinstance(message, Exception):
            message = six.text_type(message)

        if re.match(r'.*[^\.]\.\.$', message):
            message = message[:-1]
        self.msg = message
        super(MeteosException, self).__init__(message)


class Conflict(MeteosException):
    message = _("%(err)s")
    code = 409


class Invalid(MeteosException):
    message = _("Unacceptable parameters.")
    code = 400


class InvalidRequest(Invalid):
    message = _("The request is invalid.")


class InvalidResults(Invalid):
    message = _("The results are invalid.")


class InvalidInput(Invalid):
    message = _("Invalid input received: %(reason)s.")


class InvalidContentType(Invalid):
    message = _("Invalid content type %(content_type)s.")


class InvalidParameterValue(Invalid):
    message = _("%(err)s")


class InvalidUUID(Invalid):
    message = _("Expected a uuid but received %(uuid)s.")


class InvalidLearning(Invalid):
    message = _("Invalid learning: %(reason)s.")


class NotAuthorized(MeteosException):
    message = _("Not authorized.")
    code = 403


class NotFound(MeteosException):
    message = _("Resource could not be found.")
    code = 404
    safe = True


class VersionNotFoundForAPIMethod(Invalid):
    message = _("API version %(version)s is not supported on this method.")


class HostBinaryNotFound(NotFound):
    message = _("Could not find binary %(binary)s on host %(host)s.")


class MalformedRequestBody(MeteosException):
    message = _("Malformed message body: %(reason)s.")


class AdminRequired(NotAuthorized):
    message = _("User does not have admin privileges.")


class PolicyNotAuthorized(NotAuthorized):
    message = _("Policy doesn't allow %(action)s to be performed.")


class DriverNotInitialized(MeteosException):
    message = _("Learning driver '%(driver)s' not initialized.")


class Duplicated(MeteosException):
    message = _("Duplicate entry")
    code = 409
    safe = True
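
As the MeteosException docstring says, new exception types define a message
template that is interpolated with the constructor's keyword arguments. A
sketch with a hypothetical subclass (not part of this commit):

# Hypothetical exception type demonstrating message interpolation.
from meteos import exception


class ExperimentBusy(exception.Invalid):
    message = "Experiment %(experiment_id)s is busy: %(reason)s."


err = ExperimentBusy(experiment_id='exp-42', reason='job still running')
print(err.msg)  # -> Experiment exp-42 is busy: job still running.
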
0
meteos/hacking/__init__.py
Normal file
0
meteos/hacking/__init__.py
Normal file
360
meteos/hacking/checks.py
Normal file
360
meteos/hacking/checks.py
Normal file
@ -0,0 +1,360 @@
|
||||
# Copyright (c) 2012, Cloudscaling
|
||||
# All Rights Reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
import ast
|
||||
import re
|
||||
import six
|
||||
|
||||
import pep8
|
||||
|
||||
|
||||
"""
|
||||
Guidelines for writing new hacking checks
|
||||
|
||||
- Use only for Meteos specific tests. OpenStack general tests
|
||||
should be submitted to the common 'hacking' module.
|
||||
- Pick numbers in the range M3xx. Find the current test with
|
||||
the highest allocated number and then pick the next value.
|
||||
- Keep the test method code in the source file ordered based
|
||||
on the M3xx value.
|
||||
- List the new rule in the top level HACKING.rst file
|
||||
- Add test cases for each new rule to meteos/tests/test_hacking.py
|
||||
|
||||
"""
|
||||
|
||||
UNDERSCORE_IMPORT_FILES = []
|
||||
|
||||
log_translation = re.compile(
|
||||
r"(.)*LOG\.(audit|error|info|critical|exception)\(\s*('|\")")
|
||||
log_translation_LC = re.compile(
|
||||
r"(.)*LOG\.(critical)\(\s*(_\(|'|\")")
|
||||
log_translation_LE = re.compile(
|
||||
r"(.)*LOG\.(error|exception)\(\s*(_\(|'|\")")
|
||||
log_translation_LI = re.compile(
|
||||
r"(.)*LOG\.(info)\(\s*(_\(|'|\")")
|
||||
log_translation_LW = re.compile(
|
||||
r"(.)*LOG\.(warning|warn)\(\s*(_\(|'|\")")
|
||||
translated_log = re.compile(
|
||||
r"(.)*LOG\.(audit|error|info|warn|warning|critical|exception)"
|
||||
"\(\s*_\(\s*('|\")")
|
||||
string_translation = re.compile(r"[^_]*_\(\s*('|\")")
|
||||
underscore_import_check = re.compile(r"(.)*import _$")
|
||||
underscore_import_check_multi = re.compile(r"(.)*import (.)*_, (.)*")
|
||||
# We need this for cases where they have created their own _ function.
|
||||
custom_underscore_check = re.compile(r"(.)*_\s*=\s*(.)*")
|
||||
oslo_namespace_imports = re.compile(r"from[\s]*oslo[.](.*)")
|
||||
dict_constructor_with_list_copy_re = re.compile(r".*\bdict\((\[)?(\(|\[)")
|
||||
assert_no_xrange_re = re.compile(r"\s*xrange\s*\(")
|
||||
assert_True = re.compile(r".*assertEqual\(True, .*\)")
|
||||
assert_None = re.compile(r".*assertEqual\(None, .*\)")
|
||||
|
||||
|
||||
class BaseASTChecker(ast.NodeVisitor):
|
||||
|
||||
"""Provides a simple framework for writing AST-based checks.
|
||||
|
||||
Subclasses should implement visit_* methods like any other AST visitor
|
||||
implementation. When they detect an error for a particular node the
|
||||
method should call ``self.add_error(offending_node)``. Details about
|
||||
where in the code the error occurred will be pulled from the node
|
||||
object.
|
||||
|
||||
Subclasses should also provide a class variable named CHECK_DESC to
|
||||
be used for the human readable error message.
|
||||
|
||||
"""
|
||||
|
||||
CHECK_DESC = 'No check message specified'
|
||||
|
||||
def __init__(self, tree, filename):
|
||||
"""This object is created automatically by pep8.
|
||||
|
||||
:param tree: an AST tree
|
||||
:param filename: name of the file being analyzed
|
||||
(ignored by our checks)
|
||||
"""
|
||||
self._tree = tree
|
||||
self._errors = []
|
||||
|
||||
def run(self):
|
||||
"""Called automatically by pep8."""
|
||||
self.visit(self._tree)
|
||||
return self._errors
|
||||
|
||||
def add_error(self, node, message=None):
|
||||
"""Add an error caused by a node to the list of errors for pep8."""
|
||||
message = message or self.CHECK_DESC
|
||||
error = (node.lineno, node.col_offset, message, self.__class__)
|
||||
self._errors.append(error)
|
||||
|
||||
def _check_call_names(self, call_node, names):
|
||||
if isinstance(call_node, ast.Call):
|
||||
if isinstance(call_node.func, ast.Name):
|
||||
if call_node.func.id in names:
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def no_translate_debug_logs(logical_line, filename):
|
||||
"""Check for 'LOG.debug(_('
|
||||
|
||||
As per our translation policy,
|
||||
https://wiki.openstack.org/wiki/LoggingStandards#Log_Translation
|
||||
we shouldn't translate debug level logs.
|
||||
|
||||
* This check assumes that 'LOG' is a logger.
|
||||
* Use filename so we can start enforcing this in specific folders instead
|
||||
of needing to do so all at once.
|
||||
|
||||
M319
|
||||
"""
|
||||
if logical_line.startswith("LOG.debug(_("):
|
||||
yield(0, "M319 Don't translate debug level logs")
|
||||
|
||||
|
||||
class CheckLoggingFormatArgs(BaseASTChecker):
|
||||
|
||||
"""Check for improper use of logging format arguments.
|
||||
|
||||
LOG.debug("Volume %s caught fire and is at %d degrees C and climbing.",
|
||||
('volume1', 500))
|
||||
|
||||
The format arguments should not be a tuple as it is easy to miss.
|
||||
|
||||
"""
|
||||
|
||||
CHECK_DESC = 'M310 Log method arguments should not be a tuple.'
|
||||
LOG_METHODS = [
|
||||
'debug', 'info',
|
||||
'warn', 'warning',
|
||||
'error', 'exception',
|
||||
'critical', 'fatal',
|
||||
'trace', 'log'
|
||||
]
|
||||
|
||||
def _find_name(self, node):
|
||||
"""Return the fully qualified name or a Name or Attribute."""
|
||||
if isinstance(node, ast.Name):
|
||||
return node.id
|
||||
elif (isinstance(node, ast.Attribute)
|
||||
and isinstance(node.value, (ast.Name, ast.Attribute))):
|
||||
method_name = node.attr
|
||||
obj_name = self._find_name(node.value)
|
||||
if obj_name is None:
|
||||
return None
|
||||
return obj_name + '.' + method_name
|
||||
elif isinstance(node, six.string_types):
|
||||
return node
|
||||
else: # could be Subscript, Call or many more
|
||||
return None
|
||||
|
||||
def visit_Call(self, node):
|
||||
"""Look for the 'LOG.*' calls."""
|
||||
# extract the obj_name and method_name
|
||||
if isinstance(node.func, ast.Attribute):
|
||||
obj_name = self._find_name(node.func.value)
|
||||
if isinstance(node.func.value, ast.Name):
|
||||
method_name = node.func.attr
|
||||
elif isinstance(node.func.value, ast.Attribute):
|
||||
obj_name = self._find_name(node.func.value)
|
||||
method_name = node.func.attr
|
||||
else: # could be Subscript, Call or many more
|
||||
return super(CheckLoggingFormatArgs, self).generic_visit(node)
|
||||
|
||||
# obj must be a logger instance and method must be a log helper
|
||||
if (obj_name != 'LOG'
|
||||
or method_name not in self.LOG_METHODS):
|
||||
return super(CheckLoggingFormatArgs, self).generic_visit(node)
|
||||
|
||||
# the call must have arguments
|
||||
if not len(node.args):
|
||||
return super(CheckLoggingFormatArgs, self).generic_visit(node)
|
||||
|
||||
# any argument should not be a tuple
|
||||
for arg in node.args:
|
||||
if isinstance(arg, ast.Tuple):
|
||||
self.add_error(arg)
|
||||
|
||||
return super(CheckLoggingFormatArgs, self).generic_visit(node)
|
||||
|
||||
|
||||
def validate_log_translations(logical_line, physical_line, filename):
|
||||
# Translations are not required in the test and tempest
|
||||
# directories.
|
||||
if ("meteos/tests" in filename or "meteos_tempest_tests" in filename or
|
||||
"contrib/tempest" in filename):
|
||||
return
|
||||
if pep8.noqa(physical_line):
|
||||
return
|
||||
msg = "M327: LOG.critical messages require translations `_LC()`!"
|
||||
if log_translation_LC.match(logical_line):
|
||||
yield (0, msg)
|
||||
msg = ("M328: LOG.error and LOG.exception messages require translations "
|
||||
"`_LE()`!")
|
||||
if log_translation_LE.match(logical_line):
|
||||
yield (0, msg)
|
||||
msg = "M329: LOG.info messages require translations `_LI()`!"
|
||||
if log_translation_LI.match(logical_line):
|
||||
yield (0, msg)
|
||||
msg = "M330: LOG.warning messages require translations `_LW()`!"
|
||||
if log_translation_LW.match(logical_line):
|
||||
yield (0, msg)
|
||||
msg = "M331: Log messages require translations!"
|
||||
if log_translation.match(logical_line):
|
||||
yield (0, msg)
|
||||
|
||||
|
||||
def check_explicit_underscore_import(logical_line, filename):
|
||||
"""Check for explicit import of the _ function
|
||||
|
||||
We need to ensure that any files that are using the _() function
|
||||
to translate logs are explicitly importing the _ function. We
|
||||
can't trust unit test to catch whether the import has been
|
||||
added so we need to check for it here.
|
||||
"""
|
||||
|
||||
# Build a list of the files that have _ imported. No further
|
||||
# checking needed once it is found.
|
||||
if filename in UNDERSCORE_IMPORT_FILES:
|
||||
pass
|
||||
elif (underscore_import_check.match(logical_line) or
|
||||
underscore_import_check_multi.match(logical_line) or
|
||||
custom_underscore_check.match(logical_line)):
|
||||
UNDERSCORE_IMPORT_FILES.append(filename)
|
||||
elif (translated_log.match(logical_line) or
|
||||
string_translation.match(logical_line)):
|
||||
yield(0, "M323: Found use of _() without explicit import of _ !")
|
||||
|
||||
|
||||
class CheckForStrUnicodeExc(BaseASTChecker):
|
||||
|
||||
"""Checks for the use of str() or unicode() on an exception.
|
||||
|
||||
This currently only handles the case where str() or unicode()
|
||||
is used in the scope of an exception handler. If the exception
|
||||
is passed into a function, returned from an assertRaises, or
|
||||
used on an exception created in the same scope, this does not
|
||||
catch it.
|
||||
"""
|
||||
|
||||
CHECK_DESC = ('M325 str() and unicode() cannot be used on an '
|
||||
'exception. Remove or use six.text_type()')
|
||||
|
||||
def __init__(self, tree, filename):
|
||||
super(CheckForStrUnicodeExc, self).__init__(tree, filename)
|
||||
self.name = []
|
||||
self.already_checked = []
|
||||
|
||||
# Python 2
|
||||
def visit_TryExcept(self, node):
|
||||
for handler in node.handlers:
|
||||
if handler.name:
|
||||
self.name.append(handler.name.id)
|
||||
super(CheckForStrUnicodeExc, self).generic_visit(node)
|
||||
self.name = self.name[:-1]
|
||||
else:
|
||||
super(CheckForStrUnicodeExc, self).generic_visit(node)
|
||||
|
||||
# Python 3
|
||||
def visit_ExceptHandler(self, node):
|
||||
if node.name:
|
||||
self.name.append(node.name)
|
||||
super(CheckForStrUnicodeExc, self).generic_visit(node)
|
||||
self.name = self.name[:-1]
|
||||
else:
|
||||
super(CheckForStrUnicodeExc, self).generic_visit(node)
|
||||
|
||||
def visit_Call(self, node):
|
||||
if self._check_call_names(node, ['str', 'unicode']):
|
||||
if node not in self.already_checked:
|
||||
self.already_checked.append(node)
|
||||
if isinstance(node.args[0], ast.Name):
|
||||
if node.args[0].id in self.name:
|
||||
self.add_error(node.args[0])
|
||||
super(CheckForStrUnicodeExc, self).generic_visit(node)
|
||||
|
||||
|
||||
class CheckForTransAdd(BaseASTChecker):
|
||||
|
||||
"""Checks for the use of concatenation on a translated string.
|
||||
|
||||
Translations should not be concatenated with other strings, but
|
||||
should instead include the string being added to the translated
|
||||
string to give the translators the most information.
|
||||
"""
|
||||
|
||||
CHECK_DESC = ('M326 Translated messages cannot be concatenated. '
|
||||
'String should be included in translated message.')
|
||||
|
||||
TRANS_FUNC = ['_', '_LI', '_LW', '_LE', '_LC']
|
||||
|
||||
def visit_BinOp(self, node):
|
||||
if isinstance(node.op, ast.Add):
|
||||
if self._check_call_names(node.left, self.TRANS_FUNC):
|
||||
self.add_error(node.left)
|
||||
elif self._check_call_names(node.right, self.TRANS_FUNC):
|
||||
self.add_error(node.right)
|
||||
super(CheckForTransAdd, self).generic_visit(node)
|
||||
|
||||
|
||||
def check_oslo_namespace_imports(logical_line, physical_line, filename):
|
||||
if pep8.noqa(physical_line):
|
||||
return
|
||||
if re.match(oslo_namespace_imports, logical_line):
|
||||
msg = ("M333: '%s' must be used instead of '%s'.") % (
|
||||
logical_line.replace('oslo.', 'oslo_'),
|
||||
logical_line)
|
||||
yield(0, msg)
|
||||
|
||||
|
||||
def dict_constructor_with_list_copy(logical_line):
|
||||
msg = ("M336: Must use a dict comprehension instead of a dict constructor"
|
||||
" with a sequence of key-value pairs."
|
||||
)
|
||||
if dict_constructor_with_list_copy_re.match(logical_line):
|
||||
yield (0, msg)
|
||||
|
||||
|
||||
def no_xrange(logical_line):
|
||||
if assert_no_xrange_re.match(logical_line):
|
||||
yield(0, "M337: Do not use xrange().")
|
||||
|
||||
|
||||
def validate_assertTrue(logical_line):
|
||||
if re.match(assert_True, logical_line):
|
||||
msg = ("M313: Unit tests should use assertTrue(value) instead"
|
||||
" of using assertEqual(True, value).")
|
||||
yield(0, msg)
|
||||
|
||||
|
||||
def validate_assertIsNone(logical_line):
|
||||
if re.match(assert_None, logical_line):
|
||||
msg = ("M312: Unit tests should use assertIsNone(value) instead"
|
||||
" of using assertEqual(None, value).")
|
||||
yield(0, msg)
|
||||
|
||||
|
||||
def factory(register):
|
||||
register(validate_log_translations)
|
||||
register(check_explicit_underscore_import)
|
||||
register(no_translate_debug_logs)
|
||||
register(CheckForStrUnicodeExc)
|
||||
register(CheckLoggingFormatArgs)
|
||||
register(CheckForTransAdd)
|
||||
register(check_oslo_namespace_imports)
|
||||
register(dict_constructor_with_list_copy)
|
||||
register(no_xrange)
|
||||
register(validate_assertTrue)
|
||||
register(validate_assertIsNone)
|
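
Following the guidelines at the top of this file, a new check is a function
(or BaseASTChecker subclass) registered in factory(). A sketch of what an
additional rule would look like (the M399 number and rule are invented for
illustration):

# Hypothetical new hacking check; not part of this commit.
import re

no_print_re = re.compile(r"\bprint\s*\(")


def check_no_print_statements(logical_line):
    """M399: library code should log instead of printing (made-up rule)."""
    if no_print_re.search(logical_line):
        yield (0, "M399: Use LOG instead of print().")

# A real check would also be registered in factory() and listed in
# HACKING.rst:
#     register(check_no_print_statements)
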
50
meteos/i18n.py
Normal file
50
meteos/i18n.py
Normal file
@ -0,0 +1,50 @@
# Copyright 2014 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""oslo.i18n integration module.

See http://docs.openstack.org/developer/oslo.i18n/usage.html.

"""

import oslo_i18n

DOMAIN = 'meteos'

_translators = oslo_i18n.TranslatorFactory(domain=DOMAIN)

# The primary translation function using the well-known name "_"
_ = _translators.primary

# Translators for log levels.
#
# The abbreviated names are meant to reflect the usual use of a short
# name like '_'. The "L" is for "log" and the other letter comes from
# the level.
_LI = _translators.log_info
_LW = _translators.log_warning
_LE = _translators.log_error
_LC = _translators.log_critical


def enable_lazy():
    return oslo_i18n.enable_lazy()


def translate(value, user_locale):
    return oslo_i18n.translate(value, user_locale)


def get_available_languages():
    return oslo_i18n.get_available_languages(DOMAIN)
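A minimal usage sketch (not part of this commit): user-facing strings are
wrapped with ``_`` and log messages with the level-specific markers, so each
message catalog can be translated independently::

    from meteos.i18n import _, _LW

    def validate_name(log, name):
        if not name:
            # Log translations use the level-specific marker...
            log.warning(_LW("Received an empty name."))
            # ...while user-facing errors use the primary marker.
            raise ValueError(_("Invalid name: %r") % name)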
114
meteos/manager.py
Normal file
@ -0,0 +1,114 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Base Manager class.

Managers are responsible for a certain aspect of the system. It is a logical
grouping of code relating to a portion of the system. In general other
components should be using the manager to make changes to the components that
it is responsible for.

For example, other components that need to deal with volumes in some way,
should do so by calling methods on the VolumeManager instead of directly
changing fields in the database. This allows us to keep all of the code
relating to volumes in the same place.

We have adopted a basic strategy of Smart managers and dumb data, which means
rather than attaching methods to data objects, components should call manager
methods that act on the data.

Methods on managers that can be executed locally should be called directly. If
a particular method must execute on a remote host, this should be done via rpc
to the service that wraps the manager.

Managers should be responsible for most of the db access, and
non-implementation specific data. Anything implementation specific that can't
be generalized should be done by the Driver.

In general, we prefer to have one manager with multiple drivers for different
implementations, but sometimes it makes sense to have multiple managers. You
can think of it this way: Abstract different overall strategies at the manager
level (FlatNetwork vs VlanNetwork), and different implementations at the
driver level (LinuxNetDriver vs CiscoNetDriver).

Managers will often provide methods for initial setup of a host or periodic
tasks to a wrapping service.

This module provides Manager, a base class for managers.

"""

from oslo_config import cfg
from oslo_log import log
from oslo_service import periodic_task

from meteos.db import base
from meteos import version

CONF = cfg.CONF
LOG = log.getLogger(__name__)


class PeriodicTasks(periodic_task.PeriodicTasks):
    def __init__(self):
        super(PeriodicTasks, self).__init__(CONF)


class Manager(base.Base, PeriodicTasks):

    @property
    def RPC_API_VERSION(self):
        """Redefine this in child classes."""
        raise NotImplementedError

    @property
    def target(self):
        """This property is used by oslo_messaging.

        https://wiki.openstack.org/wiki/Oslo/Messaging#API_Version_Negotiation
        """
        if not hasattr(self, '_target'):
            import oslo_messaging as messaging
            self._target = messaging.Target(version=self.RPC_API_VERSION)
        return self._target

    def __init__(self, host=None, db_driver=None):
        if not host:
            host = CONF.host
        self.host = host
        self.additional_endpoints = []
        super(Manager, self).__init__(db_driver)

    def periodic_tasks(self, context, raise_on_error=False):
        """Tasks to be run at a periodic interval."""
        return self.run_periodic_tasks(context, raise_on_error=raise_on_error)

    def init_host(self):
        """Handle initialization if this is a standalone service.

        Child classes should override this method.

        """
        pass

    def service_version(self, context):
        return version.version_string()

    def service_config(self, context):
        config = {}
        for key in CONF:
            config[key] = CONF.get(key, None)
        return config
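A minimal sketch (not part of this commit) of a hypothetical subclass built
on the base class above; the topic/manager wiring is done by
meteos.service.Service, which appears later in this commit::

    from oslo_service import periodic_task

    from meteos import manager

    class ExampleManager(manager.Manager):
        """Illustrative only; names here are not part of the tree."""

        RPC_API_VERSION = '1.0'  # shadows the base-class property

        def init_host(self):
            # One-time initialization when the wrapping service starts.
            pass

        @periodic_task.periodic_task
        def _heartbeat(self, context):
            # Invoked on every periodic_tasks() pass of the service.
            pass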
75
meteos/opts.py
Normal file
@ -0,0 +1,75 @@
# Copyright (c) 2014 SUSE Linux Products GmbH.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

__all__ = [
    'list_opts'
]

import copy
import itertools

import oslo_concurrency.opts
import oslo_log._options
import oslo_middleware.opts
import oslo_policy.opts

import meteos.api.common
import meteos.api.middleware.auth
import meteos.cluster.sahara
import meteos.common.config
import meteos.db.api
import meteos.db.base
import meteos.engine.api
import meteos.engine.driver
import meteos.engine.drivers.generic
import meteos.engine.manager
import meteos.exception
import meteos.service
import meteos.wsgi


# List of *all* options in [DEFAULT] namespace of meteos.
# Any new option list or option needs to be registered here.
_global_opt_lists = [
    # Keep list alphabetically sorted
    meteos.api.common.api_common_opts,
    [meteos.api.middleware.auth.use_forwarded_for_opt],
    meteos.cluster.sahara.sahara_opts,
    meteos.common.config.core_opts,
    meteos.common.config.debug_opts,
    meteos.common.config.global_opts,
    meteos.db.api.db_opts,
    [meteos.db.base.db_driver_opt],
    meteos.engine.driver.ssh_opts,
    meteos.engine.drivers.generic.learning_opts,
    meteos.engine.manager.engine_manager_opts,
    meteos.exception.exc_log_opts,
    meteos.service.service_opts,
    meteos.wsgi.eventlet_opts,
    meteos.wsgi.socket_opts,
]

_opts = [
    (None, list(itertools.chain(*_global_opt_lists)))
]

_opts.extend(oslo_concurrency.opts.list_opts())
_opts.extend(oslo_log._options.list_opts())
_opts.extend(oslo_middleware.opts.list_opts())
_opts.extend(oslo_policy.opts.list_opts())


def list_opts():
    """Return a list of oslo.config options available in Meteos."""
    return [(m, copy.deepcopy(o)) for m, o in _opts]
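list_opts() is the hook that config-file generators such as
oslo-config-generator consume. It can also be inspected directly; a minimal
sketch (not part of this commit)::

    from meteos import opts

    for group, options in opts.list_opts():
        # group is None for [DEFAULT]; options are oslo.config Opt
        # objects carrying name, default and help text.
        for opt in options:
            print(group or 'DEFAULT', opt.name)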
113
meteos/policy.py
Normal file
@ -0,0 +1,113 @@
# Copyright (c) 2011 OpenStack, LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Policy Engine For Meteos"""

import functools

from oslo_config import cfg
from oslo_policy import policy

from meteos import exception

CONF = cfg.CONF
_ENFORCER = None


def reset():
    global _ENFORCER
    if _ENFORCER:
        _ENFORCER.clear()
        _ENFORCER = None


def init(policy_path=None):
    global _ENFORCER
    if not _ENFORCER:
        _ENFORCER = policy.Enforcer(CONF)
        if policy_path:
            _ENFORCER.policy_path = policy_path
        _ENFORCER.load_rules()


def enforce(context, action, target, do_raise=True):
    """Verifies that the action is valid on the target in this context.

    :param context: meteos context
    :param action: string representing the action to be checked
        this should be colon separated for clarity.
        i.e. ``compute:create_instance``,
        ``compute:attach_volume``,
        ``volume:attach_volume``
    :param target: dictionary representing the object of the action
        for object creation this should be a dictionary representing the
        location of the object e.g. ``{'project_id': context.project_id}``
    :param do_raise: Whether to raise an exception if the check fails.

    :returns: When ``do_raise`` is ``False``, returns a value that
        evaluates as ``True`` or ``False`` depending on whether
        the policy allows action on the target.

    :raises: meteos.exception.PolicyNotAuthorized if verification fails
        and ``do_raise`` is ``True``.

    """
    init()
    if not isinstance(context, dict):
        context = context.to_dict()

    # Add the exception arguments if asked to do a raise
    extra = {}
    if do_raise:
        extra.update(exc=exception.PolicyNotAuthorized, action=action,
                     do_raise=do_raise)
    return _ENFORCER.enforce(action, target, context, **extra)


def check_is_admin(roles):
    """Whether or not roles contains 'admin' role according to policy setting.

    """
    init()

    # include project_id on target to avoid KeyError if context_is_admin
    # policy definition is missing, and default admin_or_owner rule
    # attempts to apply. Since our credentials dict does not include a
    # project_id, this target can never match as a generic rule.
    target = {'project_id': ''}
    credentials = {'roles': roles}
    return _ENFORCER.enforce("context_is_admin", target, credentials)


def wrap_check_policy(resource):
    """Check policy corresponding to the wrapped methods prior to execution."""
    def check_policy_wrapper(func):
        @functools.wraps(func)
        def wrapped(self, context, target_obj, *args, **kwargs):
            check_policy(context, resource, func.__name__, target_obj)
            return func(self, context, target_obj, *args, **kwargs)

        return wrapped
    return check_policy_wrapper


def check_policy(context, resource, action, target_obj=None):
    target = {
        'project_id': context.project_id,
        'user_id': context.user_id,
    }
    target.update(target_obj or {})
    _action = '%s:%s' % (resource, action)
    enforce(context, _action, target)
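A minimal sketch (not part of this commit): guarding a hypothetical API
method with the decorator defined above; ``learning`` and ``create`` map to
a "learning:create" rule in the policy file::

    from meteos import policy

    class LearningAPI(object):

        @policy.wrap_check_policy('learning')
        def create(self, context, target_obj):
            # Reached only if "learning:create" authorizes the context;
            # otherwise enforce() raises PolicyNotAuthorized.
            return target_obj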
153
meteos/rpc.py
Normal file
@ -0,0 +1,153 @@
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

__all__ = [
    'init',
    'cleanup',
    'set_defaults',
    'add_extra_exmods',
    'clear_extra_exmods',
    'get_allowed_exmods',
    'RequestContextSerializer',
    'get_client',
    'get_server',
    'get_notifier',
    'TRANSPORT_ALIASES',
]

from oslo_config import cfg
import oslo_messaging as messaging
from oslo_serialization import jsonutils

import meteos.context
import meteos.exception

CONF = cfg.CONF
TRANSPORT = None
NOTIFIER = None

ALLOWED_EXMODS = [
    meteos.exception.__name__,
]
EXTRA_EXMODS = []

# NOTE(flaper87): The meteos.openstack.common.rpc entries are
# for backwards compat with Havana rpc_backend configuration
# values. The meteos.rpc entries are for compat with Folsom values.
TRANSPORT_ALIASES = {
    'meteos.openstack.common.rpc.impl_kombu': 'rabbit',
    'meteos.openstack.common.rpc.impl_qpid': 'qpid',
    'meteos.openstack.common.rpc.impl_zmq': 'zmq',
    'meteos.rpc.impl_kombu': 'rabbit',
    'meteos.rpc.impl_qpid': 'qpid',
    'meteos.rpc.impl_zmq': 'zmq',
}


def init(conf):
    global TRANSPORT, NOTIFIER
    exmods = get_allowed_exmods()
    TRANSPORT = messaging.get_transport(conf,
                                        allowed_remote_exmods=exmods,
                                        aliases=TRANSPORT_ALIASES)
    serializer = RequestContextSerializer(JsonPayloadSerializer())
    NOTIFIER = messaging.Notifier(TRANSPORT, serializer=serializer)


def initialized():
    return None not in [TRANSPORT, NOTIFIER]


def cleanup():
    global TRANSPORT, NOTIFIER
    assert TRANSPORT is not None
    assert NOTIFIER is not None
    TRANSPORT.cleanup()
    TRANSPORT = NOTIFIER = None


def set_defaults(control_exchange):
    messaging.set_transport_defaults(control_exchange)


def add_extra_exmods(*args):
    EXTRA_EXMODS.extend(args)


def clear_extra_exmods():
    del EXTRA_EXMODS[:]


def get_allowed_exmods():
    return ALLOWED_EXMODS + EXTRA_EXMODS


class JsonPayloadSerializer(messaging.NoOpSerializer):

    @staticmethod
    def serialize_entity(context, entity):
        return jsonutils.to_primitive(entity, convert_instances=True)


class RequestContextSerializer(messaging.Serializer):

    def __init__(self, base):
        self._base = base

    def serialize_entity(self, context, entity):
        if not self._base:
            return entity
        return self._base.serialize_entity(context, entity)

    def deserialize_entity(self, context, entity):
        if not self._base:
            return entity
        return self._base.deserialize_entity(context, entity)

    def serialize_context(self, context):
        return context.to_dict()

    def deserialize_context(self, context):
        return meteos.context.RequestContext.from_dict(context)


def get_transport_url(url_str=None):
    return messaging.TransportURL.parse(CONF, url_str, TRANSPORT_ALIASES)


def get_client(target, version_cap=None, serializer=None):
    assert TRANSPORT is not None
    serializer = RequestContextSerializer(serializer)
    return messaging.RPCClient(TRANSPORT,
                               target,
                               version_cap=version_cap,
                               serializer=serializer)


def get_server(target, endpoints, serializer=None):
    assert TRANSPORT is not None
    serializer = RequestContextSerializer(serializer)
    return messaging.get_rpc_server(TRANSPORT,
                                    target,
                                    endpoints,
                                    executor='eventlet',
                                    serializer=serializer)


def get_notifier(service=None, host=None, publisher_id=None):
    assert NOTIFIER is not None
    if not publisher_id:
        publisher_id = "%s.%s" % (service, host or CONF.host)
    return NOTIFIER.prepare(publisher_id=publisher_id)
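A minimal sketch (not part of this commit): obtaining a versioned RPC client
and casting to it. It assumes rpc.init() has already run, as
meteos.service.Service does at startup; the topic, method and argument names
are illustrative::

    import oslo_messaging as messaging

    from meteos import context
    from meteos import rpc

    target = messaging.Target(topic='meteos-engine', version='1.0')
    client = rpc.get_client(target, version_cap='1.0')
    ctxt = context.get_admin_context()
    # cast() is fire-and-forget; call() would block on a reply.
    client.cast(ctxt, 'example_method', learning_id='abc123')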
379
meteos/service.py
Normal file
@ -0,0 +1,379 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Generic Node base class for all workers that run on hosts."""

import inspect
import os
import random

from oslo_config import cfg
from oslo_log import log
import oslo_messaging as messaging
from oslo_service import loopingcall
from oslo_service import service
from oslo_utils import importutils

from meteos import context
from meteos import db
from meteos import exception
from meteos.i18n import _, _LE, _LI, _LW
from meteos import rpc
from meteos import version
from meteos import wsgi

LOG = log.getLogger(__name__)

service_opts = [
    cfg.IntOpt('report_interval',
               default=10,
               help='Seconds between nodes reporting state to datastore.'),
    cfg.IntOpt('periodic_interval',
               default=60,
               help='Seconds between running periodic tasks.'),
    cfg.IntOpt('periodic_fuzzy_delay',
               default=60,
               help='Range of seconds to randomly delay when starting the '
                    'periodic task scheduler to reduce stampeding. '
                    '(Disable by setting to 0)'),
    cfg.StrOpt('osapi_learning_listen',
               default="::",
               help='IP address for OpenStack Learning API to listen on.'),
    cfg.PortOpt('osapi_learning_listen_port',
                default=8989,
                help='Port for OpenStack Learning API to listen on.'),
    cfg.IntOpt('osapi_learning_workers',
               default=1,
               help='Number of workers for OpenStack Learning API service.'),
]

CONF = cfg.CONF
CONF.register_opts(service_opts)


class Service(service.Service):
    """Service object for binaries running on hosts.

    A service takes a manager and enables rpc by listening to queues based
    on topic. It also periodically runs tasks on the manager and reports
    its state to the database services table.
    """

    def __init__(self, host, binary, topic, manager, report_interval=None,
                 periodic_interval=None, periodic_fuzzy_delay=None,
                 service_name=None, *args, **kwargs):
        super(Service, self).__init__()
        if not rpc.initialized():
            rpc.init(CONF)
        self.host = host
        self.binary = binary
        self.topic = topic
        self.manager_class_name = manager
        manager_class = importutils.import_class(self.manager_class_name)
        self.manager = manager_class(host=self.host,
                                     service_name=service_name,
                                     *args, **kwargs)
        self.report_interval = report_interval
        self.periodic_interval = periodic_interval
        self.periodic_fuzzy_delay = periodic_fuzzy_delay
        self.saved_args, self.saved_kwargs = args, kwargs
        self.timers = []

    def start(self):
        version_string = version.version_string()
        LOG.info(_LI('Starting %(topic)s node (version %(version_string)s)'),
                 {'topic': self.topic, 'version_string': version_string})
        self.model_disconnected = False
        ctxt = context.get_admin_context()
        try:
            service_ref = db.service_get_by_args(ctxt,
                                                 self.host,
                                                 self.binary)
            self.service_id = service_ref['id']
        except exception.NotFound:
            self._create_service_ref(ctxt)

        LOG.debug("Creating RPC server for service %s.", self.topic)

        target = messaging.Target(topic=self.topic, server=self.host)
        endpoints = [self.manager]
        endpoints.extend(self.manager.additional_endpoints)
        self.rpcserver = rpc.get_server(target, endpoints)
        self.rpcserver.start()

        self.manager.init_host()
        if self.report_interval:
            pulse = loopingcall.FixedIntervalLoopingCall(self.report_state)
            pulse.start(interval=self.report_interval,
                        initial_delay=self.report_interval)
            self.timers.append(pulse)

        if self.periodic_interval:
            if self.periodic_fuzzy_delay:
                initial_delay = random.randint(0, self.periodic_fuzzy_delay)
            else:
                initial_delay = None

            periodic = loopingcall.FixedIntervalLoopingCall(
                self.periodic_tasks)
            periodic.start(interval=self.periodic_interval,
                           initial_delay=initial_delay)
            self.timers.append(periodic)

    def _create_service_ref(self, context):
        zone = CONF.storage_availability_zone
        service_ref = db.service_create(context,
                                        {'host': self.host,
                                         'binary': self.binary,
                                         'topic': self.topic,
                                         'report_count': 0,
                                         'availability_zone': zone})
        self.service_id = service_ref['id']

    def __getattr__(self, key):
        manager = self.__dict__.get('manager', None)
        return getattr(manager, key)

    @classmethod
    def create(cls, host=None, binary=None, topic=None, manager=None,
               report_interval=None, periodic_interval=None,
               periodic_fuzzy_delay=None, service_name=None):
        """Instantiates class and passes back application object.

        :param host: defaults to CONF.host
        :param binary: defaults to basename of executable
        :param topic: defaults to bin_name - 'meteos-' part
        :param manager: defaults to CONF.<topic>_manager
        :param report_interval: defaults to CONF.report_interval
        :param periodic_interval: defaults to CONF.periodic_interval
        :param periodic_fuzzy_delay: defaults to CONF.periodic_fuzzy_delay

        """
        if not host:
            host = CONF.host
        if not binary:
            binary = os.path.basename(inspect.stack()[-1][1])
        if not topic:
            topic = binary
        if not manager:
            subtopic = topic.rpartition('meteos-')[2]
            manager = CONF.get('%s_manager' % subtopic, None)
        if report_interval is None:
            report_interval = CONF.report_interval
        if periodic_interval is None:
            periodic_interval = CONF.periodic_interval
        if periodic_fuzzy_delay is None:
            periodic_fuzzy_delay = CONF.periodic_fuzzy_delay
        service_obj = cls(host, binary, topic, manager,
                          report_interval=report_interval,
                          periodic_interval=periodic_interval,
                          periodic_fuzzy_delay=periodic_fuzzy_delay,
                          service_name=service_name)

        return service_obj

    def kill(self):
        """Destroy the service object in the datastore."""
        self.stop()
        try:
            db.service_destroy(context.get_admin_context(), self.service_id)
        except exception.NotFound:
            LOG.warning(_LW('Service killed that has no database entry.'))

    def stop(self):
        # Try to shut the connection down, but if we get any sort of
        # errors, go ahead and ignore them, as we're shutting down anyway.
        try:
            self.rpcserver.stop()
        except Exception:
            pass
        for x in self.timers:
            try:
                x.stop()
            except Exception:
                pass
        self.timers = []

        super(Service, self).stop()

    def wait(self):
        for x in self.timers:
            try:
                x.wait()
            except Exception:
                pass

    def periodic_tasks(self, raise_on_error=False):
        """Tasks to be run at a periodic interval."""
        ctxt = context.get_admin_context()
        self.manager.periodic_tasks(ctxt, raise_on_error=raise_on_error)

    def report_state(self):
        """Update the state of this service in the datastore."""
        ctxt = context.get_admin_context()
        zone = CONF.storage_availability_zone
        state_catalog = {}
        try:
            try:
                service_ref = db.service_get(ctxt, self.service_id)
            except exception.NotFound:
                LOG.debug('The service database object disappeared; '
                          'recreating it.')
                self._create_service_ref(ctxt)
                service_ref = db.service_get(ctxt, self.service_id)

            state_catalog['report_count'] = service_ref['report_count'] + 1

            db.service_update(ctxt,
                              self.service_id, state_catalog)

            # TODO(termie): make this pattern be more elegant.
            if getattr(self, 'model_disconnected', False):
                self.model_disconnected = False
                LOG.error(_LE('Recovered model server connection!'))

        # TODO(vish): this should probably only catch connection errors
        except Exception:  # pylint: disable=W0702
            if not getattr(self, 'model_disconnected', False):
                self.model_disconnected = True
                LOG.exception(_LE('model server went away'))


class WSGIService(service.ServiceBase):
    """Provides ability to launch API from a 'paste' configuration."""

    def __init__(self, name, loader=None):
        """Initialize, but do not start the WSGI server.

        :param name: The name of the WSGI server given to the loader.
        :param loader: Loads the WSGI application using the given name.
        :returns: None

        """
        self.name = name
        self.manager = self._get_manager()
        self.loader = loader or wsgi.Loader()
        if not rpc.initialized():
            rpc.init(CONF)
        self.app = self.loader.load_app(name)
        self.host = getattr(CONF, '%s_listen' % name, "0.0.0.0")
        self.port = getattr(CONF, '%s_listen_port' % name, 0)
        self.workers = getattr(CONF, '%s_workers' % name, None)
        if self.workers is not None and self.workers < 1:
            LOG.warning(
                _LW("Value of config option %(name)s_workers must be an "
                    "integer greater than 0. Input value ignored.") %
                {'name': name})
            # Reset workers to default
            self.workers = None
        self.server = wsgi.Server(name,
                                  self.app,
                                  host=self.host,
                                  port=self.port)

    def _get_manager(self):
        """Initialize a Manager object appropriate for this service.

        Use the service name to look up a Manager subclass from the
        configuration and initialize an instance. If no class name
        is configured, just return None.

        :returns: a Manager instance, or None.

        """
        fl = '%s_manager' % self.name
        if fl not in CONF:
            return None

        manager_class_name = CONF.get(fl, None)
        if not manager_class_name:
            return None

        manager_class = importutils.import_class(manager_class_name)
        return manager_class()

    def start(self):
        """Start serving this service using loaded configuration.

        Also, retrieve updated port number in case '0' was passed in, which
        indicates a random port should be used.

        :returns: None

        """
        if self.manager:
            self.manager.init_host()
        self.server.start()
        self.port = self.server.port

    def stop(self):
        """Stop serving this API.

        :returns: None

        """
        self.server.stop()

    def wait(self):
        """Wait for the service to stop serving this API.

        :returns: None

        """
        self.server.wait()

    def reset(self):
        """Reset server greenpool size to default.

        :returns: None
        """
        self.server.reset()


def process_launcher():
    return service.ProcessLauncher(CONF)


# NOTE(vish): the global launcher is to maintain the existing
#             functionality of calling service.serve +
#             service.wait
_launcher = None


def serve(server, workers=None):
    global _launcher
    if _launcher:
        raise RuntimeError(_('serve() can only be called once'))
    _launcher = service.launch(CONF, server, workers=workers)


def wait():
    LOG.debug('Full set of CONF:')
    for flag in CONF:
        flag_get = CONF.get(flag, None)
        # hide flag contents from log if contains a password
        # should use secret flag when switch over to openstack-common
        if ("_password" in flag or "_key" in flag or
                (flag == "sql_connection" and "mysql:" in flag_get)):
            LOG.debug('%(flag)s : FLAG SET ', {"flag": flag})
        else:
            LOG.debug('%(flag)s : %(flag_get)s',
                      {"flag": flag, "flag_get": flag_get})
    try:
        _launcher.wait()
    except KeyboardInterrupt:
        _launcher.stop()
    rpc.cleanup()
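A minimal sketch (not part of this commit) of how a console script for a
hypothetical meteos-engine binary would drive this module; real entry
points would also parse CONF and set up logging first, which is omitted
here::

    from meteos import service

    def main():
        # Service.create() fills host/binary/topic/manager from CONF.
        server = service.Service.create(binary='meteos-engine')
        service.serve(server)
        service.wait()

    if __name__ == '__main__':
        main()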
354
meteos/test.py
Normal file
@ -0,0 +1,354 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Base classes for our unit tests.

Allows overriding of flags for use of fakes, and some black magic for
inline callbacks.

"""

import os
import shutil
import uuid

import fixtures
import mock
from oslo_concurrency import lockutils
from oslo_config import cfg
from oslo_config import fixture as config_fixture
import oslo_i18n
from oslo_messaging import conffixture as messaging_conffixture
import oslotest.base as base_test

from meteos.db import migration
from meteos.db.sqlalchemy import api as db_api
from meteos.db.sqlalchemy import models as db_models
from meteos import rpc
from meteos import service
from meteos.tests import conf_fixture
from meteos.tests import fake_notifier

test_opts = [
    cfg.StrOpt('sqlite_clean_db',
               default='clean.sqlite',
               help='File name of clean sqlite database.'),
]

CONF = cfg.CONF
CONF.register_opts(test_opts)

_DB_CACHE = None


class Database(fixtures.Fixture):

    def __init__(self, db_session, db_migrate, sql_connection, sqlite_db,
                 sqlite_clean_db):
        self.sql_connection = sql_connection
        self.sqlite_db = sqlite_db
        self.sqlite_clean_db = sqlite_clean_db
        self.engine = db_session.get_engine()
        self.engine.dispose()
        conn = self.engine.connect()
        if sql_connection == "sqlite://":
            self.setup_sqlite(db_migrate)
        else:
            testdb = os.path.join(CONF.state_path, sqlite_db)
            db_migrate.upgrade('head')
            if os.path.exists(testdb):
                return
        if sql_connection == "sqlite://":
            conn = self.engine.connect()
            self._DB = "".join(line for line in conn.connection.iterdump())
            self.engine.dispose()
        else:
            cleandb = os.path.join(CONF.state_path, sqlite_clean_db)
            shutil.copyfile(testdb, cleandb)

    def setUp(self):
        super(Database, self).setUp()
        if self.sql_connection == "sqlite://":
            conn = self.engine.connect()
            conn.connection.executescript(self._DB)
            self.addCleanup(self.engine.dispose)  # pylint: disable=E1101
        else:
            shutil.copyfile(
                os.path.join(CONF.state_path, self.sqlite_clean_db),
                os.path.join(CONF.state_path, self.sqlite_db),
            )

    def setup_sqlite(self, db_migrate):
        if db_migrate.version():
            return
        db_models.BASE.metadata.create_all(self.engine)
        db_migrate.stamp('head')


class TestCase(base_test.BaseTestCase):

    """Test case base class for all unit tests."""

    def setUp(self):
        """Run before each test method to initialize test environment."""
        super(TestCase, self).setUp()

        oslo_i18n.enable_lazy(enable=False)
        conf_fixture.set_defaults(CONF)
        CONF([], default_config_files=[])

        global _DB_CACHE
        if not _DB_CACHE:
            _DB_CACHE = Database(
                db_api,
                migration,
                sql_connection=CONF.database.connection,
                sqlite_db=CONF.sqlite_db,
                sqlite_clean_db=CONF.sqlite_clean_db,
            )
        self.useFixture(_DB_CACHE)

        self.injected = []
        self._services = []
        self.flags(fatal_exception_format_errors=True)
        # This will be cleaned up by the NestedTempfile fixture
        lock_path = self.useFixture(fixtures.TempDir()).path
        self.fixture = self.useFixture(config_fixture.Config(lockutils.CONF))
        self.fixture.config(lock_path=lock_path, group='oslo_concurrency')
        self.fixture.config(
            disable_process_locking=True, group='oslo_concurrency')

        rpc.add_extra_exmods('meteos.tests')
        self.addCleanup(rpc.clear_extra_exmods)
        self.addCleanup(rpc.cleanup)

        self.messaging_conf = messaging_conffixture.ConfFixture(CONF)
        self.messaging_conf.transport_driver = 'fake'
        self.messaging_conf.response_timeout = 15
        self.useFixture(self.messaging_conf)
        rpc.init(CONF)

        mock.patch('keystoneauth1.loading.load_auth_from_conf_options').start()

        fake_notifier.stub_notifier(self)

    def tearDown(self):
        """Runs after each test method to tear down test environment."""
        super(TestCase, self).tearDown()
        # Reset any overridden flags
        CONF.reset()

        # Stop any timers
        for x in self.injected:
            try:
                x.stop()
            except AssertionError:
                pass

        # Kill any services
        for x in self._services:
            try:
                x.kill()
            except Exception:
                pass

        # Delete attributes that don't start with _ so they don't pin
        # memory around unnecessarily for the duration of the test
        # suite
        for key in [k for k in self.__dict__.keys() if k[0] != '_']:
            del self.__dict__[key]

    def flags(self, **kw):
        """Override flag variables for a test."""
        for k, v in kw.items():
            CONF.set_override(k, v, enforce_type=True)

    def start_service(self, name, host=None, **kwargs):
        host = host or uuid.uuid4().hex
        kwargs.setdefault('host', host)
        kwargs.setdefault('binary', 'meteos-%s' % name)
        svc = service.Service.create(**kwargs)
        svc.start()
        self._services.append(svc)
        return svc

    def mock_object(self, obj, attr_name, new_attr=None, **kwargs):
        """Use python mock to mock an object attribute.

        Mocks the specified object's attribute with the given value.
        Automatically performs 'addCleanup' for the mock.

        """
        if not new_attr:
            new_attr = mock.Mock()
        patcher = mock.patch.object(obj, attr_name, new_attr, **kwargs)
        patcher.start()
        self.addCleanup(patcher.stop)
        return new_attr

    def mock_class(self, class_name, new_val=None, **kwargs):
        """Use python mock to mock a class.

        Mocks the specified class with the given value.
        Automatically performs 'addCleanup' for the mock.

        """
        if not new_val:
            new_val = mock.Mock()
        patcher = mock.patch(class_name, new_val, **kwargs)
        patcher.start()
        self.addCleanup(patcher.stop)
        return new_val

    # Useful assertions
    def assertDictMatch(self, d1, d2, approx_equal=False, tolerance=0.001):
        """Assert two dicts are equivalent.

        This is a 'deep' match in the sense that it handles nested
        dictionaries appropriately.

        NOTE:

            If you don't care (or don't know) a given value, you can specify
            the string DONTCARE as the value. This will cause that dict-item
            to be skipped.

        """
        def raise_assertion(msg):
            d1str = str(d1)
            d2str = str(d2)
            base_msg = ('Dictionaries do not match. %(msg)s d1: %(d1str)s '
                        'd2: %(d2str)s' %
                        {"msg": msg, "d1str": d1str, "d2str": d2str})
            raise AssertionError(base_msg)

        d1keys = set(d1.keys())
        d2keys = set(d2.keys())
        if d1keys != d2keys:
            d1only = d1keys - d2keys
            d2only = d2keys - d1keys
            raise_assertion('Keys in d1 and not d2: %(d1only)s. '
                            'Keys in d2 and not d1: %(d2only)s' %
                            {"d1only": d1only, "d2only": d2only})

        for key in d1keys:
            d1value = d1[key]
            d2value = d2[key]
            try:
                error = abs(float(d1value) - float(d2value))
                within_tolerance = error <= tolerance
            except (ValueError, TypeError):
                # If both values aren't convertible to float, just ignore
                # ValueError if arg is a str, TypeError if it's something else
                # (like None)
                within_tolerance = False

            if hasattr(d1value, 'keys') and hasattr(d2value, 'keys'):
                self.assertDictMatch(d1value, d2value)
            elif 'DONTCARE' in (d1value, d2value):
                continue
            elif approx_equal and within_tolerance:
                continue
            elif d1value != d2value:
                raise_assertion("d1['%(key)s']=%(d1value)s != "
                                "d2['%(key)s']=%(d2value)s" %
                                {
                                    "key": key,
                                    "d1value": d1value,
                                    "d2value": d2value
                                })

    def assertDictListMatch(self, L1, L2, approx_equal=False, tolerance=0.001):
        """Assert a list of dicts are equivalent."""
        def raise_assertion(msg):
            L1str = str(L1)
            L2str = str(L2)
            base_msg = ('Lists of dictionaries do not match: %(msg)s '
                        'L1: %(L1str)s L2: %(L2str)s' %
                        {"msg": msg, "L1str": L1str, "L2str": L2str})
            raise AssertionError(base_msg)

        L1count = len(L1)
        L2count = len(L2)
        if L1count != L2count:
            raise_assertion('Length mismatch: len(L1)=%(L1count)d != '
                            'len(L2)=%(L2count)d' %
                            {"L1count": L1count, "L2count": L2count})

        for d1, d2 in zip(L1, L2):
            self.assertDictMatch(d1, d2, approx_equal=approx_equal,
                                 tolerance=tolerance)

    def assertSubDictMatch(self, sub_dict, super_dict):
        """Assert a sub_dict is subset of super_dict."""
        self.assertTrue(set(sub_dict.keys()).issubset(set(super_dict.keys())))
        for k, sub_value in sub_dict.items():
            super_value = super_dict[k]
            if isinstance(sub_value, dict):
                self.assertSubDictMatch(sub_value, super_value)
            elif 'DONTCARE' in (sub_value, super_value):
                continue
            else:
                self.assertEqual(sub_value, super_value)

    def assertIn(self, a, b, *args, **kwargs):
        """Python < v2.7 compatibility. Assert 'a' in 'b'."""
        try:
            f = super(TestCase, self).assertIn
        except AttributeError:
            self.assertTrue(a in b, *args, **kwargs)
        else:
            f(a, b, *args, **kwargs)

    def assertNotIn(self, a, b, *args, **kwargs):
        """Python < v2.7 compatibility. Assert 'a' NOT in 'b'."""
        try:
            f = super(TestCase, self).assertNotIn
        except AttributeError:
            self.assertFalse(a in b, *args, **kwargs)
        else:
            f(a, b, *args, **kwargs)

    def assertIsInstance(self, a, b, *args, **kwargs):
        """Python < v2.7 compatibility."""
        try:
            f = super(TestCase, self).assertIsInstance
        except AttributeError:
            # Fall back to a plain isinstance check rather than
            # recursing into this method.
            self.assertTrue(isinstance(a, b), *args, **kwargs)
        else:
            f(a, b, *args, **kwargs)

    def assertIsNone(self, a, *args, **kwargs):
        """Python < v2.7 compatibility."""
        try:
            f = super(TestCase, self).assertIsNone
        except AttributeError:
            self.assertTrue(a is None)
        else:
            f(a, *args, **kwargs)

    def _dict_from_object(self, obj, ignored_keys):
        if ignored_keys is None:
            ignored_keys = []
        return {k: v for k, v in obj.iteritems()
                if k not in ignored_keys}

    def _assertEqualListsOfObjects(self, objs1, objs2, ignored_keys=None):
        obj_to_dict = lambda o: self._dict_from_object(o, ignored_keys)
        sort_key = lambda d: [d[k] for k in sorted(d)]
        conv_and_sort = lambda obj: sorted(map(obj_to_dict, obj), key=sort_key)

        self.assertEqual(conv_and_sort(objs1), conv_and_sort(objs2))
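A minimal sketch (not part of this commit) of a hypothetical unit test built
on the TestCase helpers above::

    from meteos import test

    class ExampleTestCase(test.TestCase):

        def test_example(self):
            # flags() overrides registered config options for this
            # test only; tearDown resets them.
            self.flags(periodic_interval=1)

            # 'DONTCARE' skips a key during the deep comparison.
            self.assertDictMatch({'id': 'abc', 'created_at': 'DONTCARE'},
                                 {'id': 'abc', 'created_at': 'now'})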
53
meteos/testing/README.rst
Normal file
@ -0,0 +1,53 @@
=======================================
OpenStack Meteos Testing Infrastructure
=======================================

A note of clarification is in order, to help those who are new to testing in
OpenStack Meteos:

- actual unit tests are created in the "tests" directory;
- the "testing" directory is used to house the infrastructure needed to
  support testing in OpenStack Meteos.

This README file attempts to provide current and prospective contributors
with everything they need to know in order to start creating unit tests and
utilizing the convenience code provided in meteos.testing.

Writing Unit Tests
------------------

- All new unit tests are to be written with python-mock.
- Old tests that are still written in mox should be updated to use
  python-mock. Usage of mox has been deprecated for writing Meteos unit
  tests.
- Use addCleanup in favor of tearDown (see the sketch at the end of this
  file).

test.TestCase
-------------
The TestCase class from meteos.test (generally imported as test) provides
the common setUp/tearDown plumbing and mock helpers; mocks registered
through it are cleaned up automatically during the tearDown step.

If using test.TestCase, calling the super class setUp is required and
calling the super class tearDown is required to be last if tearDown
is overridden.

Running Tests
-------------

In the root of the Meteos source code run the run_tests.sh script. This will
offer to create a virtual environment and populate it with dependencies.
If you don't have dependencies installed that are needed for compiling
Meteos's direct dependencies, you'll have to use your operating system's
method of installing extra dependencies. To get help using this script,
execute it with the -h parameter: ``./run_tests.sh -h``

Tests and assertRaises
----------------------
When asserting that a test should raise an exception, test against the
most specific exception possible. An overly broad exception type (like
Exception) can mask errors in the unit test itself.

Example::

  self.assertRaises(exception.InstanceNotFound, db.instance_get_by_uuid,
                    elevated, instance_uuid)
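As a concrete illustration of the addCleanup guidance above (a minimal
sketch; the patched target is hypothetical)::

  def setUp(self):
      super(ExampleTestCase, self).setUp()
      patcher = mock.patch('meteos.utils.execute')
      patcher.start()
      # Runs even if setUp or the test fails partway through,
      # unlike logic placed in tearDown.
      self.addCleanup(patcher.stop)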
27
meteos/tests/__init__.py
Normal file
@ -0,0 +1,27 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
:mod:`meteos.tests` -- Meteos Unittests
=======================================

.. automodule:: meteos.tests
   :platform: Unix
"""

import eventlet

eventlet.monkey_patch()
400
meteos/utils.py
Normal file
@ -0,0 +1,400 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Utilities and helper functions."""

import contextlib
import errno
import functools
import inspect
import os
import pyclbr
import random
import re
import shutil
import socket
import sys
import tempfile
import time

from eventlet import pools
import netaddr
from oslo_concurrency import lockutils
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log
from oslo_utils import importutils
from oslo_utils import netutils
from oslo_utils import timeutils
import paramiko
import retrying
import six

from meteos.common import constants
from meteos.db import api as db_api
from meteos import exception
from meteos.i18n import _

CONF = cfg.CONF
LOG = log.getLogger(__name__)

synchronized = lockutils.synchronized_with_prefix('meteos-')


def _get_root_helper():
    return 'sudo meteos-rootwrap %s' % CONF.rootwrap_config


def execute(*cmd, **kwargs):
    """Convenience wrapper around oslo's execute() function."""
    if 'run_as_root' in kwargs and 'root_helper' not in kwargs:
        kwargs['root_helper'] = _get_root_helper()
    return processutils.execute(*cmd, **kwargs)


def trycmd(*args, **kwargs):
    """Convenience wrapper around oslo's trycmd() function."""
    if 'run_as_root' in kwargs and 'root_helper' not in kwargs:
        kwargs['root_helper'] = _get_root_helper()
    return processutils.trycmd(*args, **kwargs)


class SSHPool(pools.Pool):
    """A simple eventlet pool to hold ssh connections."""

    def __init__(self, ip, port, conn_timeout, login, password=None,
                 privatekey=None, *args, **kwargs):
        self.ip = ip
        self.port = port
        self.login = login
        self.password = password
        self.conn_timeout = conn_timeout if conn_timeout else None
        self.path_to_private_key = privatekey
        super(SSHPool, self).__init__(*args, **kwargs)

    def create(self):
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        look_for_keys = True
        if self.path_to_private_key:
            self.path_to_private_key = os.path.expanduser(
                self.path_to_private_key)
            look_for_keys = False
        elif self.password:
            look_for_keys = False
        try:
            ssh.connect(self.ip,
                        port=self.port,
                        username=self.login,
                        password=self.password,
                        key_filename=self.path_to_private_key,
                        look_for_keys=look_for_keys,
                        timeout=self.conn_timeout)
            # Paramiko by default sets the socket timeout to 0.1 seconds,
            # ignoring what we set through the sshclient. This doesn't help
            # for keeping long lived connections. Hence we have to bypass it,
            # by overriding it after the transport is initialized. We are
            # setting the socket timeout to None and setting a keepalive
            # packet so that the server will keep the connection open. All
            # that does is send a keepalive packet every ssh_conn_timeout
            # seconds.
            if self.conn_timeout:
                transport = ssh.get_transport()
                transport.sock.settimeout(None)
                transport.set_keepalive(self.conn_timeout)
            return ssh
        except Exception as e:
            msg = _("Check whether private key or password are correctly "
                    "set. Error connecting via ssh: %s") % e
            LOG.error(msg)
            raise exception.SSHException(msg)

    def get(self):
        """Return an item from the pool, when one is available.

        This may cause the calling greenthread to block. Check if a
        connection is active before returning it. For dead connections
        create and return a new connection.
        """
        if self.free_items:
            conn = self.free_items.popleft()
            if conn:
                if conn.get_transport().is_active():
                    return conn
                else:
                    conn.close()
            return self.create()
        if self.current_size < self.max_size:
            created = self.create()
            self.current_size += 1
            return created
        return self.channel.get()

    def remove(self, ssh):
        """Close an ssh client and remove it from free_items."""
        ssh.close()
        # Remove the closed client before dropping the reference;
        # free_items is a deque, so use remove() rather than pop().
        if ssh in self.free_items:
            self.free_items.remove(ssh)
        if self.current_size > 0:
            self.current_size -= 1
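A minimal sketch (not part of this commit): borrowing a connection from the
pool defined above; the host, credentials and command are illustrative::

    from meteos import utils

    pool = utils.SSHPool('192.0.2.10', 22, conn_timeout=30,
                         login='ubuntu', password='secret',
                         min_size=1, max_size=5)
    ssh = pool.get()
    try:
        stdin, stdout, stderr = ssh.exec_command('uptime')
        print(stdout.read())
    finally:
        pool.put(ssh)  # return the connection for reuse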
||||
def check_ssh_injection(cmd_list):
|
||||
ssh_injection_pattern = ['`', '$', '|', '||', ';', '&', '&&', '>', '>>',
|
||||
'<']
|
||||
|
||||
# Check whether injection attacks exist
|
||||
for arg in cmd_list:
|
||||
arg = arg.strip()
|
||||
|
||||
# Check for matching quotes on the ends
|
||||
is_quoted = re.match('^(?P<quote>[\'"])(?P<quoted>.*)(?P=quote)$', arg)
|
||||
if is_quoted:
|
||||
# Check for unescaped quotes within the quoted argument
|
||||
quoted = is_quoted.group('quoted')
|
||||
if quoted:
|
||||
if (re.match('[\'"]', quoted) or
|
||||
re.search('[^\\\\][\'"]', quoted)):
|
||||
raise exception.SSHInjectionThreat(command=cmd_list)
|
||||
else:
|
||||
# We only allow spaces within quoted arguments, and that
|
||||
# is the only special character allowed within quotes
|
||||
if len(arg.split()) > 1:
|
||||
raise exception.SSHInjectionThreat(command=cmd_list)
|
||||
|
||||
# Second, check whether danger character in command. So the shell
|
||||
# special operator must be a single argument.
|
||||
for c in ssh_injection_pattern:
|
||||
if c not in arg:
|
||||
continue
|
||||
|
||||
result = arg.find(c)
|
||||
if not result == -1:
|
||||
if result == 0 or not arg[result - 1] == '\\':
|
||||
raise exception.SSHInjectionThreat(command=cmd_list)
|
||||
|
||||
|
||||
class LazyPluggable(object):
|
||||
"""A pluggable backend loaded lazily based on some value."""
|
||||
|
||||
def __init__(self, pivot, **backends):
|
||||
self.__backends = backends
|
||||
self.__pivot = pivot
|
||||
self.__backend = None
|
||||
|
||||
def __get_backend(self):
|
||||
if not self.__backend:
|
||||
backend_name = CONF[self.__pivot]
|
||||
if backend_name not in self.__backends:
|
||||
raise exception.Error(_('Invalid backend: %s') % backend_name)
|
||||
|
||||
backend = self.__backends[backend_name]
|
||||
if isinstance(backend, tuple):
|
||||
name = backend[0]
|
||||
fromlist = backend[1]
|
||||
else:
|
||||
name = backend
|
||||
fromlist = backend
|
||||
|
||||
self.__backend = __import__(name, None, None, fromlist)
|
||||
LOG.debug('backend %s', self.__backend)
|
||||
return self.__backend
|
||||
|
||||
def __getattr__(self, key):
|
||||
backend = self.__get_backend()
|
||||
return getattr(backend, key)


def monkey_patch():
    """Patch decorator.

    If CONF.monkey_patch is set to True, this function patches a
    decorator onto all functions in the specified modules.
    You can set a decorator for each module using
    CONF.monkey_patch_modules. The format is
    "Module path:Decorator function".
    Example:
        'meteos.api.ec2.cloud:meteos.openstack.common.notifier.api.notify_decorator'

    The parameters of the decorator are as follows.
    (See meteos.openstack.common.notifier.api.notify_decorator)

    name - name of the function
    function - object of the function
    """
    # If CONF.monkey_patch is not True, this function does nothing.
    if not CONF.monkey_patch:
        return
    # Get the list of modules and decorators
    for module_and_decorator in CONF.monkey_patch_modules:
        module, decorator_name = module_and_decorator.split(':')
        # Import the decorator function
        decorator = importutils.import_class(decorator_name)
        __import__(module)
        # Retrieve module information using pyclbr
        module_data = pyclbr.readmodule_ex(module)
        for key in module_data.keys():
            # Set the decorator for the class methods
            if isinstance(module_data[key], pyclbr.Class):
                clz = importutils.import_class("%s.%s" % (module, key))
                # NOTE(vponomaryov): we need to distinguish class method
                # types for py2 and py3, because the concept of 'unbound
                # methods' has been removed from python 3.x.
                if six.PY3:
                    member_type = inspect.isfunction
                else:
                    member_type = inspect.ismethod
                for method, func in inspect.getmembers(clz, member_type):
                    setattr(
                        clz, method,
                        decorator("%s.%s.%s" % (module, key, method), func))
            # Set the decorator for the function
            if isinstance(module_data[key], pyclbr.Function):
                func = importutils.import_class("%s.%s" % (module, key))
                setattr(sys.modules[module], key,
                        decorator("%s.%s" % (module, key), func))


def file_open(filename):
    """Open a file and return its contents, or None if it does not exist.

    See the built-in open() documentation for more details.

    Note: The reason this is kept in a separate module is to easily
    be able to provide a stub module that doesn't alter system
    state at all (for unit tests)
    """

    try:
        fd = open(filename)
    except IOError as e:
        if e.errno != errno.ENOENT:
            raise
        data = None
    else:
        data = fd.read()
        fd.close()

    return data
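
# Usage sketch: returns the file contents, or None when the file is missing;
# any IOError other than ENOENT propagates to the caller.
#
#     data = file_open('/etc/meteos/api-paste.ini')
#     if data is None:
#         pass  # the file does not exist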


def service_is_up(service):
    """Check whether a service is up based on last heartbeat."""
    last_heartbeat = service['updated_at'] or service['created_at']
    # Timestamps in DB are UTC.
    tdelta = timeutils.utcnow() - last_heartbeat
    elapsed = tdelta.total_seconds()
    return abs(elapsed) <= CONF.service_down_time


def validate_service_host(context, host):
    service = db_api.service_get_by_host_and_topic(context, host,
                                                   'meteos-engine')
    if not service_is_up(service):
        raise exception.ServiceIsDown(service=service['host'])

    return service
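
# Usage sketch (hypothetical host name): look up the 'meteos-engine' service
# record for a host and fail fast when its heartbeat is older than
# CONF.service_down_time seconds.
#
#     service = validate_service_host(context, 'engine-node-1')
#     # raises exception.ServiceIsDown if the engine on that host is stale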


def walk_class_hierarchy(clazz, encountered=None):
    """Walk class hierarchy, yielding most derived classes first."""
    if not encountered:
        encountered = []
    for subclass in clazz.__subclasses__():
        if subclass not in encountered:
            encountered.append(subclass)
            # drill down to leaves first
            for subsubclass in walk_class_hierarchy(subclass, encountered):
                yield subsubclass
            yield subclass
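
# Example sketch: leaves are yielded before their parents, so callers can
# match the most specific class first.
#
#     class Base(object): pass
#     class Mid(Base): pass
#     class Leaf(Mid): pass
#
#     list(walk_class_hierarchy(Base))  # -> [Leaf, Mid]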


class IsAMatcher(object):
    def __init__(self, expected_value=None):
        self.expected_value = expected_value

    def __eq__(self, actual_value):
        return isinstance(actual_value, self.expected_value)


class ComparableMixin(object):
    def _compare(self, other, method):
        try:
            return method(self._cmpkey(), other._cmpkey())
        except (AttributeError, TypeError):
            # _cmpkey is not implemented, or returns a different type,
            # so we cannot compare with "other".
            return NotImplemented

    def __lt__(self, other):
        return self._compare(other, lambda s, o: s < o)

    def __le__(self, other):
        return self._compare(other, lambda s, o: s <= o)

    def __eq__(self, other):
        return self._compare(other, lambda s, o: s == o)

    def __ge__(self, other):
        return self._compare(other, lambda s, o: s >= o)

    def __gt__(self, other):
        return self._compare(other, lambda s, o: s > o)

    def __ne__(self, other):
        return self._compare(other, lambda s, o: s != o)
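
# Usage sketch: a subclass only needs to supply _cmpkey(); all six rich
# comparisons then come from the mixin.
#
#     class Version(ComparableMixin):
#         def __init__(self, major, minor):
#             self.major, self.minor = major, minor
#
#         def _cmpkey(self):
#             return (self.major, self.minor)
#
#     Version(1, 2) < Version(1, 10)  # True: compares the key tuples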


def require_driver_initialized(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        # we can't do anything if the driver didn't init
        if not self.driver.initialized:
            driver_name = self.driver.__class__.__name__
            raise exception.DriverNotInitialized(driver=driver_name)
        return func(self, *args, **kwargs)
    return wrapper
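
# Usage sketch (hypothetical manager class): guard any manager method that
# assumes self.driver has finished its setup.
#
#     class EngineManager(object):
#         @require_driver_initialized
#         def create_learning(self, context, learning):
#             ...  # only runs once self.driver.initialized is truthy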


def wait_for_access_update(context, db, learning_instance,
                           migration_wait_access_rules_timeout):
    starttime = time.time()
    deadline = starttime + migration_wait_access_rules_timeout
    tries = 0

    while True:
        instance = db.learning_instance_get(context, learning_instance['id'])

        if instance['access_rules_status'] == constants.STATUS_ACTIVE:
            break

        tries += 1
        now = time.time()
        if instance['access_rules_status'] == constants.STATUS_ERROR:
            msg = _("Failed to update access rules"
                    " on learning instance %s") % learning_instance['id']
            raise exception.LearningMigrationFailed(reason=msg)
        elif now > deadline:
            msg = _("Timeout trying to update access rules"
                    " on learning instance %(learning_id)s. Timeout "
                    "was %(timeout)s seconds.") % {
                'learning_id': learning_instance['id'],
                'timeout': migration_wait_access_rules_timeout}
            raise exception.LearningMigrationFailed(reason=msg)
        else:
            time.sleep(tries ** 2)
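
# Polling sketch: the sleep grows quadratically with the retry count
# (1, 4, 9, 16, ... seconds), so the database is polled less and less often
# while waiting. A caller might invoke it as:
#
#     wait_for_access_update(context, db, instance, 180)
#
# where 180 is a timeout in seconds chosen here only for illustration.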
23
meteos/version.py
Normal file
@ -0,0 +1,23 @@
# Copyright 2011 OpenStack LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from pbr import version as pbr_version

METEOS_VENDOR = "OpenStack Foundation"
METEOS_PRODUCT = "OpenStack Meteos"
METEOS_PACKAGE = None  # OS distro package version suffix

loaded = False
version_info = pbr_version.VersionInfo('meteos')
version_string = version_info.version_string
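
# Usage sketch: pbr derives the version from package metadata (or from git
# in a development checkout), so the exact value depends on the build:
#
#     from meteos import version
#     version.version_string()  # e.g. '0.1.0'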
551
meteos/wsgi.py
Normal file
@ -0,0 +1,551 @@
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2010 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Utility methods for working with WSGI servers."""

from __future__ import print_function

import errno
import os
import socket
import ssl
import sys
import time

import eventlet
import eventlet.wsgi
import greenlet
from oslo_config import cfg
from oslo_log import log
from oslo_service import service
from oslo_utils import excutils
from oslo_utils import netutils
from paste import deploy
import routes.middleware
import webob.dec
import webob.exc

from meteos.common import config
from meteos import exception
from meteos.i18n import _, _LE, _LI

socket_opts = [
    cfg.IntOpt('backlog',
               default=4096,
               help="Number of backlog requests to configure the socket "
                    "with."),
    cfg.BoolOpt('tcp_keepalive',
                default=True,
                help="Sets the value of TCP_KEEPALIVE (True/False) for each "
                     "server socket."),
    cfg.IntOpt('tcp_keepidle',
               default=600,
               help="Sets the value of TCP_KEEPIDLE in seconds for each "
                    "server socket. Not supported on OS X."),
    cfg.IntOpt('tcp_keepalive_interval',
               help="Sets the value of TCP_KEEPINTVL in seconds for each "
                    "server socket. Not supported on OS X."),
    cfg.IntOpt('tcp_keepalive_count',
               help="Sets the value of TCP_KEEPCNT for each "
                    "server socket. Not supported on OS X."),
    cfg.StrOpt('ssl_ca_file',
               help="CA certificate file to use to verify "
                    "connecting clients."),
    cfg.StrOpt('ssl_cert_file',
               help="Certificate file to use when starting "
                    "the server securely."),
    cfg.StrOpt('ssl_key_file',
               help="Private key file to use when starting "
                    "the server securely."),
]

eventlet_opts = [
    cfg.IntOpt('max_header_line',
               default=16384,
               help="Maximum line size of message headers to be accepted. "
                    "Option max_header_line may need to be increased when "
                    "using large tokens (typically those generated by the "
                    "Keystone v3 API with big service catalogs)."),
    cfg.IntOpt('client_socket_timeout',
               default=900,
               help="Timeout for client connections' socket operations. "
                    "If an incoming connection is idle for this number of "
                    "seconds it will be closed. A value of '0' means "
                    "wait forever."),
    cfg.BoolOpt('wsgi_keep_alive',
                default=True,
                help='If False, closes the client socket connection '
                     'explicitly. Set it to True to maintain backward '
                     'compatibility; the recommended setting is False.'),
]

CONF = cfg.CONF
CONF.register_opts(socket_opts)
CONF.register_opts(eventlet_opts)
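
# Example meteos.conf sketch for the options registered above (file paths are
# hypothetical). Enabling SSL requires both ssl_cert_file and ssl_key_file;
# adding ssl_ca_file additionally turns on client certificate verification.
#
#     [DEFAULT]
#     backlog = 4096
#     tcp_keepalive = true
#     ssl_cert_file = /etc/meteos/server.crt
#     ssl_key_file = /etc/meteos/server.key
#     wsgi_keep_alive = false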

LOG = log.getLogger(__name__)


class Server(service.ServiceBase):
    """Server class to manage a WSGI server, serving a WSGI application."""

    default_pool_size = 1000

    def __init__(self, name, app, host=None, port=None, pool_size=None,
                 protocol=eventlet.wsgi.HttpProtocol, backlog=128):
        """Initialize, but do not start, a WSGI server.

        :param name: Pretty name for logging.
        :param app: The WSGI application to serve.
        :param host: IP address to serve the application.
        :param port: Port number to serve the application.
        :param pool_size: Maximum number of eventlets to spawn concurrently.
        :returns: None

        """
        eventlet.wsgi.MAX_HEADER_LINE = CONF.max_header_line
        self.client_socket_timeout = CONF.client_socket_timeout
        self.name = name
        self.app = app
        self._host = host or "0.0.0.0"
        self._port = port or 0
        self._server = None
        self._socket = None
        self._protocol = protocol
        self.pool_size = pool_size or self.default_pool_size
        self._pool = eventlet.GreenPool(self.pool_size)
        self._logger = log.getLogger("eventlet.wsgi.server")

        if backlog < 1:
            raise exception.InvalidInput(
                reason='The backlog must be at least 1')

        bind_addr = (host, port)
        # TODO(dims): eventlet's green dns/socket module does not actually
        # support IPv6 in getaddrinfo(). We need to get around this in the
        # future or monitor upstream for a fix
        try:
            info = socket.getaddrinfo(bind_addr[0],
                                      bind_addr[1],
                                      socket.AF_UNSPEC,
                                      socket.SOCK_STREAM)[0]
            family = info[0]
            bind_addr = info[-1]
        except Exception:
            family = socket.AF_INET

        cert_file = CONF.ssl_cert_file
        key_file = CONF.ssl_key_file
        ca_file = CONF.ssl_ca_file
        self._use_ssl = cert_file or key_file

        if cert_file and not os.path.exists(cert_file):
            raise RuntimeError(_("Unable to find cert_file : %s") % cert_file)

        if ca_file and not os.path.exists(ca_file):
            raise RuntimeError(_("Unable to find ca_file : %s") % ca_file)

        if key_file and not os.path.exists(key_file):
            raise RuntimeError(_("Unable to find key_file : %s") % key_file)

        if self._use_ssl and (not cert_file or not key_file):
            raise RuntimeError(_("When running server in SSL mode, you must "
                                 "specify both a cert_file and key_file "
                                 "option value in your configuration file"))

        retry_until = time.time() + 30
        while not self._socket and time.time() < retry_until:
            try:
                self._socket = eventlet.listen(
                    bind_addr, backlog=backlog, family=family)
            except socket.error as err:
                if err.args[0] != errno.EADDRINUSE:
                    raise
                eventlet.sleep(0.1)

        if not self._socket:
            raise RuntimeError(_("Could not bind to %(host)s:%(port)s "
                                 "after trying for 30 seconds") %
                               {'host': host, 'port': port})

        (self._host, self._port) = self._socket.getsockname()[0:2]
        LOG.info(_LI("%(name)s listening on %(_host)s:%(_port)s"),
                 {'name': self.name, '_host': self._host,
                  '_port': self._port})

    def start(self):
        """Start serving a WSGI application.

        :returns: None
        :raises: meteos.exception.InvalidInput

        """
        # The server socket object will be closed after the server exits,
        # but the underlying file descriptor will remain open and would
        # give a "bad file descriptor" error. So we duplicate the socket
        # object to keep its file descriptor usable.

        config.set_middleware_defaults()
        dup_socket = self._socket.dup()

        netutils.set_tcp_keepalive(
            dup_socket,
            tcp_keepalive=CONF.tcp_keepalive,
            tcp_keepidle=CONF.tcp_keepidle,
            tcp_keepalive_interval=CONF.tcp_keepalive_interval,
            tcp_keepalive_count=CONF.tcp_keepalive_count
        )

        if self._use_ssl:
            try:
                ssl_kwargs = {
                    'server_side': True,
                    'certfile': CONF.ssl_cert_file,
                    'keyfile': CONF.ssl_key_file,
                    'cert_reqs': ssl.CERT_NONE,
                }

                if CONF.ssl_ca_file:
                    ssl_kwargs['ca_certs'] = CONF.ssl_ca_file
                    ssl_kwargs['cert_reqs'] = ssl.CERT_REQUIRED

                dup_socket = ssl.wrap_socket(dup_socket,
                                             **ssl_kwargs)

                dup_socket.setsockopt(socket.SOL_SOCKET,
                                      socket.SO_REUSEADDR, 1)

            except Exception:
                with excutils.save_and_reraise_exception():
                    LOG.error(
                        _LE("Failed to start %(name)s on %(_host)s:%(_port)s "
                            "with SSL support."),
                        {"name": self.name, "_host": self._host,
                         "_port": self._port}
                    )

        wsgi_kwargs = {
            'func': eventlet.wsgi.server,
            'sock': dup_socket,
            'site': self.app,
            'protocol': self._protocol,
            'custom_pool': self._pool,
            'log': self._logger,
            'socket_timeout': self.client_socket_timeout,
            'keepalive': CONF.wsgi_keep_alive,
        }

        self._server = eventlet.spawn(**wsgi_kwargs)

    @property
    def host(self):
        return self._host

    @property
    def port(self):
        return self._port

    def stop(self):
        """Stop this server.

        This is not a very nice action, as currently the method by which a
        server is stopped is by killing its eventlet.

        :returns: None

        """
        LOG.info(_LI("Stopping WSGI server."))
        if self._server is not None:
            # Resize pool to stop new requests from being processed
            self._pool.resize(0)
            self._server.kill()

    def wait(self):
        """Block, until the server has stopped.

        Waits on the server's eventlet to finish, then returns.

        :returns: None

        """
        try:
            if self._server is not None:
                self._pool.waitall()
                self._server.wait()
        except greenlet.GreenletExit:
            LOG.info(_LI("WSGI server has stopped."))

    def reset(self):
        """Reset server greenpool size to default.

        :returns: None
        """
        self._pool.resize(self.pool_size)
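
# Lifecycle sketch (hypothetical application object and port): construct with
# a WSGI callable, start, then block until shutdown.
#
#     server = Server('meteos-api', app, host='0.0.0.0', port=8989)
#     server.start()  # spawns eventlet.wsgi.server on a dup'd socket
#     server.wait()   # blocks until stop() kills the server greenthread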


class Request(webob.Request):
    pass


class Application(object):
    """Base WSGI application wrapper. Subclasses need to implement __call__."""

    @classmethod
    def factory(cls, global_config, **local_config):
        """Used for paste app factories in paste.deploy config files.

        Any local configuration (that is, values under the [app:APPNAME]
        section of the paste config) will be passed into the `__init__` method
        as kwargs.

        A hypothetical configuration would look like:

            [app:wadl]
            latest_version = 1.3
            paste.app_factory = meteos.api.fancy_api:Wadl.factory

        which would result in a call to the `Wadl` class as

            import meteos.api.fancy_api
            fancy_api.Wadl(latest_version='1.3')

        You could of course re-implement the `factory` method in subclasses,
        but using the kwarg passing it shouldn't be necessary.

        """
        return cls(**local_config)

    def __call__(self, environ, start_response):
        r"""Subclasses will probably want to implement __call__ like this:

        @webob.dec.wsgify(RequestClass=Request)
        def __call__(self, req):
            # Any of the following objects work as responses:

            # Option 1: simple string
            res = 'message\n'

            # Option 2: a nicely formatted HTTP exception page
            res = exc.HTTPForbidden(detail='Nice try')

            # Option 3: a webob Response object (in case you need to play with
            # headers, or you want to be treated like an iterable, or or or)
            res = Response()
            res.app_iter = open('somefile')

            # Option 4: any wsgi app to be run next
            res = self.application

            # Option 5: you can get a Response object for a wsgi app, too, to
            # play with headers etc
            res = req.get_response(self.application)

            # You can then just return your response...
            return res
            # ... or set req.response and return None.
            req.response = res

        See the end of http://pythonpaste.org/webob/modules/dec.html
        for more info.

        """
        raise NotImplementedError(_('You must implement __call__'))


class Middleware(Application):
    """Base WSGI middleware.

    These classes require an application to be
    initialized that will be called next. By default the middleware will
    simply call its wrapped app, or you can override __call__ to customize its
    behavior.

    """

    @classmethod
    def factory(cls, global_config, **local_config):
        """Used for paste app factories in paste.deploy config files.

        Any local configuration (that is, values under the [filter:APPNAME]
        section of the paste config) will be passed into the `__init__` method
        as kwargs.

        A hypothetical configuration would look like:

            [filter:analytics]
            redis_host = 127.0.0.1
            paste.filter_factory = meteos.api.analytics:Analytics.factory

        which would result in a call to the `Analytics` class as

            import meteos.api.analytics
            analytics.Analytics(app_from_paste, redis_host='127.0.0.1')

        You could of course re-implement the `factory` method in subclasses,
        but using the kwarg passing it shouldn't be necessary.

        """
        def _factory(app):
            return cls(app, **local_config)
        return _factory

    def __init__(self, application):
        self.application = application

    def process_request(self, req):
        """Called on each request.

        If this returns None, the next application down the stack will be
        executed. If it returns a response then that response will be returned
        and execution will stop here.

        """
        return None

    def process_response(self, response):
        """Do whatever you'd like to the response."""
        return response

    @webob.dec.wsgify(RequestClass=Request)
    def __call__(self, req):
        response = self.process_request(req)
        if response:
            return response
        response = req.get_response(self.application)
        return self.process_response(response)
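
# Subclass sketch (hypothetical header name): a middleware usually overrides
# just one of the two hooks; __call__ wires them around the wrapped app.
#
#     class Stamp(Middleware):
#         def process_response(self, response):
#             response.headers['X-Meteos-Stamp'] = 'seen'
#             return response
#
# Registered in a paste pipeline via
# `paste.filter_factory = my.module:Stamp.factory`, each request flows
# through process_request, the wrapped application, then process_response.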


class Debug(Middleware):
    """Helper class for debugging a WSGI application.

    Can be inserted into any WSGI application chain to get information
    about the request and response.

    """

    @webob.dec.wsgify(RequestClass=Request)
    def __call__(self, req):
        print(('*' * 40) + ' REQUEST ENVIRON')
        for key, value in req.environ.items():
            print(key, '=', value)
        print()
        resp = req.get_response(self.application)

        print(('*' * 40) + ' RESPONSE HEADERS')
        for (key, value) in resp.headers.items():
            print(key, '=', value)
        print()

        resp.app_iter = self.print_generator(resp.app_iter)

        return resp

    @staticmethod
    def print_generator(app_iter):
        """Iterator that prints the contents of the wrapped app iterator."""
        print(('*' * 40) + ' BODY')
        for part in app_iter:
            sys.stdout.write(part.decode())
            sys.stdout.flush()
            yield part
        print()


class Router(object):
    """WSGI middleware that maps incoming requests to WSGI apps."""

    def __init__(self, mapper):
        """Create a router for the given routes.Mapper.

        Each route in `mapper` must specify a 'controller', which is a
        WSGI app to call. You'll probably want to specify an 'action' as
        well and have your controller be an object that can route
        the request to the action-specific method.

        Examples:
            mapper = routes.Mapper()
            sc = ServerController()

            # Explicit mapping of one route to a controller+action
            mapper.connect(None, '/svrlist', controller=sc, action='list')

            # Actions are all implicitly defined
            mapper.resource('server', 'servers', controller=sc)

            # Pointing to an arbitrary WSGI app. You can specify the
            # {path_info:.*} parameter so the target app can be handed just
            # that section of the URL.
            mapper.connect(None, '/v1.0/{path_info:.*}',
                           controller=BlogApp())

        """
        self.map = mapper
        self._router = routes.middleware.RoutesMiddleware(self._dispatch,
                                                          self.map)

    @webob.dec.wsgify(RequestClass=Request)
    def __call__(self, req):
        """Route the incoming request to a controller based on self.map.

        If no match, return a 404.

        """
        return self._router

    @staticmethod
    @webob.dec.wsgify(RequestClass=Request)
    def _dispatch(req):
        """Dispatch the request to the appropriate controller.

        Called by self._router after matching the incoming request to a route
        and putting the information into req.environ. Either returns 404
        or the routed WSGI app's response.

        """
        match = req.environ['wsgiorg.routing_args'][1]
        if not match:
            return webob.exc.HTTPNotFound()
        app = match['controller']
        return app


class Loader(object):
    """Used to load WSGI applications from paste configurations."""

    def __init__(self, config_path=None):
        """Initialize the loader, and attempt to find the config.

        :param config_path: Full or relative path to the paste config.
        :returns: None

        """
        config_path = config_path or CONF.api_paste_config
        self.config_path = CONF.find_file(config_path)
        if not self.config_path:
            raise exception.ConfigNotFound(path=config_path)

    def load_app(self, name):
        """Return the paste URLMap wrapped WSGI application.

        :param name: Name of the application to load.
        :returns: Paste URLMap object wrapping the requested application.
        :raises: `meteos.exception.PasteAppNotFound`

        """
        try:
            return deploy.loadapp("config:%s" % self.config_path, name=name)
        except LookupError as err:
            LOG.error(err)
            raise exception.PasteAppNotFound(name=name, path=self.config_path)
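
# Usage sketch (the app name is hypothetical and must match a section in the
# paste config file found via CONF.api_paste_config):
#
#     app = Loader().load_app('osapi_learning')
#     server = Server('meteos-api', app, port=8989)
#     server.start()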
38
pylintrc
Normal file
@ -0,0 +1,38 @@
# The format of this file isn't really documented; just use --generate-rcfile

[Messages Control]
# NOTE(justinsb): We might want to have a 2nd strict pylintrc in future
# C0111: Don't require docstrings on every method
# W0511: TODOs in code comments are fine.
# W0142: *args and **kwargs are fine.
# W0622: Redefining id is fine.
disable=C0111,W0511,W0142,W0622

[Basic]
# Variable names can be 1 to 31 characters long, with lowercase and underscores
variable-rgx=[a-z_][a-z0-9_]{0,30}$

# Argument names can be 2 to 31 characters long, with lowercase and underscores
argument-rgx=[a-z_][a-z0-9_]{1,30}$

# Method names should be at least 3 characters long
# and be lowercase with underscores
method-rgx=([a-z_][a-z0-9_]{2,50}|setUp|tearDown)$

# Module names matching meteos-* are ok (files in bin/)
module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+)|(meteos-[a-z0-9_-]+))$

# Don't require docstrings on tests.
no-docstring-rgx=((__.*__)|([tT]est.*)|setUp|tearDown)$

[Design]
max-public-methods=100
min-public-methods=0
max-args=6

[Variables]

# List of additional names supposed to be defined in builtins. Remember that
# you should avoid defining new builtins when possible.
# _ is used by our localization
additional-builtins=_