Merge shade and os-client-config into the tree
This sucks in the git history for both projects, then moves their files
into place. It should not introduce any behavior changes to any of the
existing openstacksdk code, nor to openstack.config and openstack.cloud,
other than the name change.

TODO(shade) comments have been left indicating places where further
integration work should be done. It should not be assumed that these are
the final places for either to live. This is just about getting them
in-tree so we can work with them.

The enforcer code, for reasons surpassing understanding, does not work
with "python setup.py build_sphinx", but it does work with sphinx-build
(what?). For now, turn it off. We can turn it back on once the
build-sphinx job is migrated to the new PTI.

Change-Id: I9523e4e281285360c61e9e0456a8e07b7ac1243c
This commit is contained in:
commit 535f2f48ff
1 .gitignore (vendored)
@@ -29,6 +29,7 @@ cover/*
.tox
nosetests.xml
.testrepository
.stestr

# Translations
*.mo
3 .mailmap
@@ -1,3 +1,6 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>
<corvus@inaugust.com> <jeblair@redhat.com>
<corvus@inaugust.com> <jeblair@linux.vnet.ibm.com>
<corvus@inaugust.com> <jeblair@hp.com>
3 .stestr.conf (new file)
@@ -0,0 +1,3 @@
[DEFAULT]
test_path=./openstack/tests/unit
top_dir=./
@@ -1,8 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
    OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
    OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
    ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./openstack/tests/unit} $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
group_regex=([^\.]+\.)+
@@ -1,16 +1,45 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps in this page:
.. _contributing:

https://docs.openstack.org/infra/manual/developers.html
===================================
Contributing to python-openstacksdk
===================================

Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
If you're interested in contributing to the python-openstacksdk project,
the following will help get you started.

https://docs.openstack.org/infra/manual/developers.html#development-workflow
Contributor License Agreement
-----------------------------

.. index::
   single: license; agreement

In order to contribute to the python-openstacksdk project, you need to have
signed OpenStack's contributor's agreement.

Please read `DeveloperWorkflow`_ before sending your first patch for review.
Pull requests submitted through GitHub will be ignored.

Bugs should be filed on Launchpad, not GitHub:
.. seealso::

https://bugs.launchpad.net/python-openstacksdk
* http://wiki.openstack.org/HowToContribute
* http://wiki.openstack.org/CLA

.. _DeveloperWorkflow: http://docs.openstack.org/infra/manual/developers.html#development-workflow

Project Hosting Details
-------------------------

Project Documentation
    http://docs.openstack.org/sdks/python/openstacksdk/

Bug tracker
    https://bugs.launchpad.net/python-openstacksdk

Mailing list (prefix subjects with ``[sdk]`` for faster responses)
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Code Hosting
    https://git.openstack.org/cgit/openstack/python-openstacksdk

Code Review
    https://review.openstack.org/#/q/status:open+project:openstack/python-openstacksdk,n,z
51 HACKING.rst
@@ -1,4 +1,49 @@
python-openstacksdk Style Commandments
======================================
openstacksdk Style Commandments
===============================

Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/
Read the OpenStack Style Commandments
http://docs.openstack.org/developer/hacking/

Indentation
-----------

PEP-8 allows for 'visual' indentation. Do not use it. Visual indentation looks
like this:

.. code-block:: python

    return_value = self.some_method(arg1, arg1,
                                    arg3, arg4)

Visual indentation makes refactoring the code base unnecessarily hard.

Instead of visual indentation, use this:

.. code-block:: python

    return_value = self.some_method(
        arg1, arg1, arg3, arg4)

That way, if some_method ever needs to be renamed, the only line that needs
to be touched is the line with some_method. Additionally, if you need to
line break at the top of a block, please indent the continuation line
an additional 4 spaces, like this:

.. code-block:: python

    for val in self.some_method(
            arg1, arg1, arg3, arg4):
        self.do_something_awesome()

Neither of these is 'mandated' by PEP-8. However, they are prevailing styles
within this code base.

Unit Tests
----------

Unit tests should be virtually instant. If a unit test takes more than 1 second
to run, it is a bad unit test. Honestly, 1 second is too slow.

All unit test classes should subclass `openstack.tests.unit.base.BaseTestCase`. The
base TestCase class takes care of properly creating `OpenStackCloud` objects
in a way that protects against the local environment.
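To make the unit-test guidance above concrete, here is a minimal, self-contained sketch of the pattern. The real base class is `openstack.tests.unit.base.BaseTestCase`; `FakeCloud` here is a hypothetical stand-in for the `OpenStackCloud` objects it builds, used only so the sketch runs without openstacksdk installed:

```python
import unittest


class FakeCloud(object):
    """Hypothetical stand-in for a properly isolated OpenStackCloud object."""

    def __init__(self, name):
        self.name = name


class BaseTestCase(unittest.TestCase):
    """Sketch of openstack.tests.unit.base.BaseTestCase (illustrative only).

    The real base class builds OpenStackCloud objects so that tests never
    read the developer's local clouds.yaml or OS_* environment variables.
    """

    def setUp(self):
        super(BaseTestCase, self).setUp()
        self.cloud = FakeCloud('test-cloud')


class TestExample(BaseTestCase):
    def test_cloud_is_isolated(self):
        # The fixture, not the local environment, defines the cloud.
        self.assertEqual('test-cloud', self.cloud.name)
```

Because all state comes from `setUp`, such a test stays well under the one-second budget.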
137 README.rst
@@ -1,36 +1,119 @@
OpenStack Python SDK
====================
openstacksdk
============

The ``python-openstacksdk`` is a collection of libraries for building
applications to work with OpenStack clouds. The project aims to provide
a consistent and complete set of interactions with OpenStack's many
services, along with complete documentation, examples, and tools.
openstacksdk is a client library for building applications to work
with OpenStack clouds. The project aims to provide a consistent and
complete set of interactions with OpenStack's many services, along with
complete documentation, examples, and tools.

This SDK is under active development, and in the interests of providing
a high-quality interface, the APIs provided in this release may differ
from those provided in future releases.
It also contains a simple interface layer. Clouds can do many things, but
there are probably only about 10 of them that most people care about with any
regularity. If you want to do complicated things, the per-service oriented
portions of the SDK are for you. However, if what you want is to be able to
write an application that talks to clouds no matter what crazy choices the
deployer has made in an attempt to be more hipster than their self-entitled
narcissist peers, then the ``openstack.cloud`` layer is for you.

Usage
-----
A Brief History
---------------

The following example simply connects to an OpenStack cloud and lists
the containers in the Object Store service::
openstacksdk started its life as three different libraries: shade,
os-client-config and python-openstacksdk.

    from openstack import connection
    conn = connection.Connection(auth_url="http://openstack:5000/v3",
                                 project_name="big_project",
                                 username="SDK_user",
                                 password="Super5ecretPassw0rd")
    for container in conn.object_store.containers():
        print(container.name)
``shade`` started its life as some code inside of OpenStack Infra's nodepool
project, and as some code inside of Ansible. Ansible had a bunch of different
OpenStack related modules, and there was a ton of duplicated code. Eventually,
between refactoring that duplication into an internal library, and adding logic
and features that the OpenStack Infra team had developed to run client
applications at scale, it turned out that we'd written nine-tenths of what we'd
need to have a standalone library.

Documentation
-------------
``os-client-config`` was a library for collecting client configuration for
using an OpenStack cloud in a consistent and comprehensive manner.
In parallel, the python-openstacksdk team was working on a library to expose
the OpenStack APIs to developers in a consistent and predictable manner. After
a while it became clear that there was value in both a high-level layer that
contains business logic, a lower-level SDK that exposes services and their
resources as Python objects, and also in being able to make direct REST calls
when needed with a properly configured Session or Adapter from python-requests.
This led to the merger of the three projects.

Documentation is available at
https://developer.openstack.org/sdks/python/openstacksdk/
The contents of the shade library have been moved into ``openstack.cloud``
and os-client-config has been moved into ``openstack.config``. The next
release of shade will be a thin compatibility layer that subclasses the objects
from ``openstack.cloud`` and provides different argument defaults where needed
for compat. Similarly, the next release of os-client-config will be a compat
layer shim around ``openstack.config``.

License
-------
openstack.config
================

Apache 2.0
``openstack.config`` will find cloud configuration for as few as one cloud and
as many as you want to put in a config file. It will read environment variables
and config files, and it also contains some vendor-specific default values so
that you don't have to know extra info to use OpenStack:

* If you have a config file, you will get the clouds listed in it
* If you have environment variables, you will get a cloud named `envvars`
* If you have neither, you will get a cloud named `defaults` with base defaults

Sometimes an example is nice.

Create a ``clouds.yaml`` file:

.. code-block:: yaml

    clouds:
      mordred:
        region_name: Dallas
        auth:
          username: 'mordred'
          password: XXXXXXX
          project_name: 'shade'
          auth_url: 'https://identity.example.com'

Please note: ``openstack.config`` will look for a file called ``clouds.yaml``
in the following locations:

* Current Directory
* ``~/.config/openstack``
* ``/etc/openstack``

More information at https://developer.openstack.org/sdks/python/openstacksdk/users/config

openstack.cloud
===============

Create a server using objects configured with the ``clouds.yaml`` file:

.. code-block:: python

    import openstack.cloud

    # Initialize and turn on debug logging
    openstack.cloud.simple_logging(debug=True)

    # Initialize cloud
    # Cloud configs are read with openstack.config
    cloud = openstack.openstack_cloud(cloud='mordred')

    # Upload an image to the cloud
    image = cloud.create_image(
        'ubuntu-trusty', filename='ubuntu-trusty.qcow2', wait=True)

    # Find a flavor with at least 512M of RAM
    flavor = cloud.get_flavor_by_ram(512)

    # Boot a server, wait for it to boot, and then do whatever is needed
    # to get a public ip for it.
    cloud.create_server(
        'my-server', image=image, flavor=flavor, wait=True, auto_ip=True)

Links
=====

* `Issue Tracker <https://storyboard.openstack.org/#!/project/760>`_
* `Code Review <https://review.openstack.org/#/q/status:open+project:openstack/python-openstacksdk,n,z>`_
* `Documentation <https://developer.openstack.org/sdks/python/openstacksdk/>`_
* `PyPI <https://pypi.python.org/pypi/python-openstacksdk/>`_
* `Mailing list <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>`_
8 bindep.txt (new file)
@@ -0,0 +1,8 @@
# This is a cross-platform list tracking distribution packages needed by tests;
# see http://docs.openstack.org/infra/bindep/ for additional information.

build-essential [platform:dpkg]
python-dev [platform:dpkg]
python-devel [platform:rpm]
libffi-dev [platform:dpkg]
libffi-devel [platform:rpm]
54 devstack/plugin.sh (new file)
@@ -0,0 +1,54 @@
# Install and configure **openstacksdk** library in devstack
#
# To enable openstacksdk in devstack add an entry to local.conf that looks like
#
# [[local|localrc]]
# enable_plugin openstacksdk git://git.openstack.org/openstack/python-openstacksdk

function preinstall_openstacksdk {
    :
}

function install_openstacksdk {
    if use_library_from_git "python-openstacksdk"; then
        # don't clone, it'll be done by the plugin install
        setup_dev_lib "python-openstacksdk"
    else
        pip_install "python-openstacksdk"
    fi
}

function configure_openstacksdk {
    :
}

function initialize_openstacksdk {
    :
}

function unstack_openstacksdk {
    :
}

function clean_openstacksdk {
    :
}

# This is the main for plugin.sh
if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
    preinstall_openstacksdk
elif [[ "$1" == "stack" && "$2" == "install" ]]; then
    install_openstacksdk
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
    configure_openstacksdk
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
    initialize_openstacksdk
fi

if [[ "$1" == "unstack" ]]; then
    unstack_openstacksdk
fi

if [[ "$1" == "clean" ]]; then
    clean_openstacksdk
fi
@@ -19,18 +19,28 @@ import openstackdocstheme

sys.path.insert(0, os.path.abspath('../..'))
sys.path.insert(0, os.path.abspath('.'))

# -- General configuration ----------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.intersphinx',
    'openstackdocstheme',
    'enforcer'
]

# openstackdocstheme options
repository_name = 'openstack/python-openstacksdk'
bug_project = '760'
bug_tag = ''
html_last_updated_fmt = '%Y-%m-%d %H:%M'
html_theme = 'openstackdocs'

# TODO(shade) Set this to true once the build-openstack-sphinx-docs job is
# updated to use sphinx-build.
# When True, this will raise an exception that kills sphinx-build.
enforcer_warnings_as_errors = True
enforcer_warnings_as_errors = False

# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
@@ -47,18 +57,7 @@ master_doc = 'index'

# General information about the project.
project = u'python-openstacksdk'
copyright = u'2015, OpenStack Foundation'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# "version" and "release" are used by the "log-a-bug" feature
#
# The short X.Y version.
version = '1.0'
# The full version, including alpha/beta/rc tags.
release = '1.0'
copyright = u'2017, Various members of the OpenStack Foundation'

# A few variables have to be set for the log-a-bug feature.
# giturl: The location of conf.py on Git. Must be set manually.

@@ -101,13 +100,6 @@ exclude_patterns = []

# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'

# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = [openstackdocstheme.get_html_theme_path()]

# Don't let openstackdocstheme insert TOCs automatically.
theme_include_auto_toc = False

@@ -124,9 +116,5 @@ latex_documents = [
    u'OpenStack Foundation', 'manual'),
]

# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'https://docs.python.org/3/': None,
                       'http://docs.python-requests.org/en/master/': None}

# Include both the class and __init__ docstrings when describing the class
autoclass_content = "both"
114 doc/source/contributor/coding.rst (new file)
@@ -0,0 +1,114 @@
========================================
OpenStack SDK Developer Coding Standards
========================================

In the beginning, there were no guidelines. And it was good. But that
didn't last long. As more and more people added more and more code,
we realized that we needed a set of coding standards to make sure that
the openstacksdk API at least *attempted* to display some form of consistency.

Thus, these coding standards/guidelines were developed. Note that not
all of openstacksdk adheres to these standards just yet. Some older code has
not been updated because we need to maintain backward compatibility.
Some of it just hasn't been changed yet. But be clear, all new code
*must* adhere to these guidelines.

Below are the patterns that we expect openstacksdk developers to follow.

Release Notes
=============

openstacksdk uses `reno <http://docs.openstack.org/developer/reno/>`_ for
managing its release notes. A new release note should be added to
your contribution anytime you add new API calls, fix significant bugs,
add new functionality or parameters to existing API calls, or make any
other significant changes to the code base that we should draw attention
to for the user base.

It is *not* necessary to add release notes for minor fixes, such as
correction of documentation typos, minor code cleanup or reorganization,
or any other change that a user would not notice through normal usage.

Exceptions
==========

Exceptions should NEVER be wrapped and re-raised inside of a new exception.
This removes important debug information from the user. All of the exceptions
should be raised correctly the first time.
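As an illustration of the rule above (with hypothetical helpers, not actual openstacksdk code), wrapping strips the original traceback, while letting the exception propagate preserves it:

```python
def lookup_wrapped(data, key):
    """Anti-pattern: re-raising inside a new exception hides the real error."""
    try:
        return data[key]
    except KeyError:
        # The original KeyError and its traceback are now much harder to see.
        raise RuntimeError('lookup failed')


def lookup(data, key):
    """Preferred: let the original exception propagate unmodified."""
    return data[key]
```

A caller of `lookup` sees the original `KeyError` with its full context; a caller of `lookup_wrapped` only sees a generic `RuntimeError`.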
openstack.cloud API Methods
===========================

The `openstack.cloud` layer has some specific rules:

- When an API call acts on a resource that has both a unique ID and a
  name, that API call should accept either identifier with a name_or_id
  parameter.

- All resources should adhere to the get/list/search interface that
  controls retrieval of those resources. E.g., `get_image()`, `list_images()`,
  `search_images()`.

- Resources should have `create_RESOURCE()`, `delete_RESOURCE()`,
  `update_RESOURCE()` API methods (as it makes sense).

- For those methods that should behave differently for omitted or None-valued
  parameters, use the `_utils.valid_kwargs` decorator. Notably: all Neutron
  `update_*` functions.

- Deleting a resource should return True if the delete succeeded, or False
  if the resource was not found.
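The conventions above can be sketched for a toy "images" resource. This is an illustrative sketch only: real `openstack.cloud` methods talk to the cloud's REST API, and the dict-based cloud here is a stand-in:

```python
# A toy "cloud" holding image records; real code talks to the cloud's REST API.
CLOUD = {
    'images': [
        {'id': 'img-1', 'name': 'ubuntu-trusty'},
        {'id': 'img-2', 'name': 'cirros'},
    ],
}


def list_images(cloud):
    """Return every image known to the cloud."""
    return list(cloud['images'])


def search_images(cloud, name_or_id=None):
    """Filter the listing; accept either the name or the unique ID."""
    images = list_images(cloud)
    if name_or_id is None:
        return images
    return [i for i in images if name_or_id in (i['id'], i['name'])]


def get_image(cloud, name_or_id):
    """Return a single match, or None when the image is not found."""
    matches = search_images(cloud, name_or_id)
    return matches[0] if matches else None


def delete_image(cloud, name_or_id):
    """Return True when something was deleted, False when it was absent."""
    image = get_image(cloud, name_or_id)
    if image is None:
        return False
    cloud['images'].remove(image)
    return True
```

Note how `get` and `delete` are built on `search`, so the name_or_id convention only needs to be implemented once per resource.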
Returned Resources
------------------

Complex objects returned to the caller must be a `munch.Munch` type. The
`openstack.cloud._adapter.Adapter` class makes resources into `munch.Munch`.

All objects should be normalized. It is shade's purpose in life to make
OpenStack consistent for end users, and this means not trusting the clouds
to return consistent objects. There should be a normalize function in
`openstack/cloud/_normalize.py` that is applied to objects before returning
them to the user. See :doc:`../user/model` for further details on object model
requirements.

Fields should not be in the normalization contract if we cannot commit to
providing them to all users.

Fields should be renamed in normalization to be consistent with
the rest of `openstack.cloud`. For instance, nothing in `openstack.cloud`
exposes the legacy OpenStack concept of "tenant" to a user, but instead uses
"project" even if the cloud in question uses tenant.
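A minimal sketch of the tenant-to-project renaming described above (a hypothetical helper; the real logic lives in `openstack/cloud/_normalize.py` and returns `munch.Munch` objects rather than plain dicts):

```python
def normalize_project_fields(resource):
    """Rename legacy 'tenant' keys to their 'project' equivalents."""
    normalized = dict(resource)
    for legacy, modern in (('tenant_id', 'project_id'),
                           ('tenant_name', 'project_name')):
        if legacy in normalized:
            # Never clobber a value the cloud already returned.
            normalized.setdefault(modern, normalized.pop(legacy))
    return normalized
```

Applying such a function at the adapter boundary is what keeps "tenant" from ever leaking into the user-facing contract.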
Nova vs. Neutron
----------------

- Recognize that not all cloud providers support Neutron, so never
  assume it will be present. If a task can be handled by either
  Neutron or Nova, code it to be handled by either.

- For methods that accept either a Nova pool or Neutron network, the
  parameter should just refer to the network, but its documentation
  should explain about the pool. See the `create_floating_ip()` and
  `available_floating_ip()` methods.

Tests
=====

- New API methods *must* have unit tests!

- New unit tests should only mock at the REST layer using `requests_mock`.
  Any mocking of openstacksdk itself should be considered legacy and to be
  avoided. Exceptions to this rule can be made when attempting to test the
  internals of a logical shim where the inputs and output of the method aren't
  actually impacted by remote content.

- Functional tests should be added, when possible.

- In functional tests, always use unique names (for resources that have this
  attribute) and use them for clean up (see next point).

- In functional tests, always define cleanup functions to delete data added
  by your test, should something go wrong. Data removal should be wrapped in
  a try except block, and try to delete as many entries added by the test as
  possible.
1 doc/source/contributor/contributing.rst (new file)
@@ -0,0 +1 @@
.. include:: ../../../CONTRIBUTING.rst
@@ -13,6 +13,14 @@ software development kit for the programs which make up the OpenStack
community. It is a set of Python-based libraries, documentation, examples,
and tools released under the Apache 2 license.

Contribution Mechanics
----------------------

.. toctree::
   :maxdepth: 2

   contributing

Contacting the Developers
-------------------------

@@ -33,6 +41,17 @@ mailing list fields questions of all types on OpenStack. Using the
``[python-openstacksdk]`` filter to begin your email subject will ensure
that the message gets to SDK developers.

Coding Standards
----------------

We are a bit stricter than usual in the coding standards department. It's a
good idea to read through the :doc:`coding <coding>` section.

.. toctree::
   :maxdepth: 2

   coding

Development Environment
-----------------------
@@ -123,8 +123,11 @@ def build_finished(app, exception):
        app.info("ENFORCER: Found %d missing proxy methods "
                 "in the output" % missing_count)

        for name in sorted(missing):
            app.warn("ENFORCER: %s was not included in the output" % name)
        # TODO(shade) Remove the if DEBUG once the build-openstack-sphinx-docs
        # has been updated to use sphinx-build.
        if DEBUG:
            for name in sorted(missing):
                app.info("ENFORCER: %s was not included in the output" % name)

        if app.config.enforcer_warnings_as_errors and missing_count > 0:
            raise EnforcementError(

@@ -1 +0,0 @@
.. include:: ../../ChangeLog
@@ -4,7 +4,7 @@ Welcome to the OpenStack SDK!
This documentation is split into two sections: one for
:doc:`users <users/index>` looking to build applications which make use of
OpenStack, and another for those looking to
:doc:`contribute <contributors/index>` to this project.
:doc:`contribute <contributor/index>` to this project.

For Users
---------

@@ -13,6 +13,10 @@ For Users
   :maxdepth: 2

   users/index
   install/index
   user/index

.. TODO(shade) merge users/index and user/index into user/index

For Contributors
----------------

@@ -20,7 +24,9 @@ For Contributors
.. toctree::
   :maxdepth: 2

   contributors/index
   contributor/index

.. include:: ../../README.rst

General Information
-------------------

@@ -31,4 +37,4 @@ General information about the SDK including a glossary and release history.
   :maxdepth: 1

   Glossary of Terms <glossary>
   Release History <history>
   Release Notes <releasenotes>
12 doc/source/install/index.rst (new file)
@@ -0,0 +1,12 @@
============
Installation
============

At the command line::

    $ pip install python-openstacksdk

Or, if you have virtualenvwrapper installed::

    $ mkvirtualenv python-openstacksdk
    $ pip install python-openstacksdk
6 doc/source/releasenotes.rst (new file)
@@ -0,0 +1,6 @@
=============
Release Notes
=============

Release notes for `python-openstacksdk` can be found at
http://docs.openstack.org/releasenotes/python-openstacksdk/
303 doc/source/user/config/configuration.rst (new file)
@@ -0,0 +1,303 @@
=========================================
Configuring os-client-config Applications
=========================================

Environment Variables
---------------------

`os-client-config` honors all of the normal `OS_*` variables. It does not
provide backwards compatibility to service-specific variables such as
`NOVA_USERNAME`.

If you have OpenStack environment variables set, `os-client-config` will produce
a cloud config object named `envvars` containing your values from the
environment. If you don't like the name `envvars`, that's ok, you can override
it by setting `OS_CLOUD_NAME`.

Service specific settings, like the nova service type, are set with the
default service type as a prefix. For instance, to set a special service_type
for trove, set:

.. code-block:: bash

    export OS_DATABASE_SERVICE_TYPE=rax:database
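The envvars behavior described above can be sketched as follows. This is a simplified illustration, not the actual os-client-config implementation:

```python
def envvars_cloud(environ):
    """Collect OS_* variables into a cloud config dict named 'envvars'.

    The cloud name can be overridden with OS_CLOUD_NAME, as described above.
    Keys are lowercased and stripped of the OS_ prefix, so OS_REGION_NAME
    becomes region_name.
    """
    name = environ.get('OS_CLOUD_NAME', 'envvars')
    config = {
        key[len('OS_'):].lower(): value
        for key, value in environ.items()
        if key.startswith('OS_') and key != 'OS_CLOUD_NAME'
    }
    return name, config
```

For example, `envvars_cloud({'OS_USERNAME': 'demo'})` yields the cloud name `'envvars'` with `{'username': 'demo'}`.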
Config Files
------------

`os-client-config` will look for a file called `clouds.yaml` in the following
locations:

* Current Directory
* ~/.config/openstack
* /etc/openstack

The first file found wins.

You can also set the environment variable `OS_CLIENT_CONFIG_FILE` to an
absolute path of a file to look for, and that location will be inserted at the
front of the file search list.
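The search order above can be sketched like this (an illustrative sketch of the lookup, not the library's actual code):

```python
import os

# The directories searched for clouds.yaml, in precedence order, per the text.
SEARCH_DIRS = [
    '.',
    os.path.expanduser('~/.config/openstack'),
    '/etc/openstack',
]


def find_clouds_yaml(environ):
    """Return the first clouds.yaml found; OS_CLIENT_CONFIG_FILE is checked first."""
    candidates = []
    override = environ.get('OS_CLIENT_CONFIG_FILE')
    if override:
        candidates.append(override)
    candidates.extend(os.path.join(d, 'clouds.yaml') for d in SEARCH_DIRS)
    for path in candidates:
        if os.path.isfile(path):
            return path
    return None
```

Because the override is prepended rather than replacing the list, the standard locations still act as fallbacks when the override file does not exist.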
The keys are all of the keys you'd expect from `OS_*` - except lower case
and without the OS prefix. So, region name is set with `region_name`.

Service specific settings, like the nova service type, are set with the
default service type as a prefix. For instance, to set a special service_type
for trove (because you're using Rackspace) set:

.. code-block:: yaml

    database_service_type: 'rax:database'

Site Specific File Locations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In addition to `~/.config/openstack` and `/etc/openstack` - some platforms
have other locations they like to put things. `os-client-config` will also
look in an OS specific config dir:

* `USER_CONFIG_DIR`
* `SITE_CONFIG_DIR`

`USER_CONFIG_DIR` is different on Linux, OSX and Windows.

* Linux: `~/.config/openstack`
* OSX: `~/Library/Application Support/openstack`
* Windows: `C:\\Users\\USERNAME\\AppData\\Local\\OpenStack\\openstack`

`SITE_CONFIG_DIR` is different on Linux, OSX and Windows.

* Linux: `/etc/openstack`
* OSX: `/Library/Application Support/openstack`
* Windows: `C:\\ProgramData\\OpenStack\\openstack`
An example config file is probably helpful:

.. code-block:: yaml

    clouds:
      mtvexx:
        profile: vexxhost
        auth:
          username: mordred@inaugust.com
          password: XXXXXXXXX
          project_name: mordred@inaugust.com
        region_name: ca-ymq-1
        dns_api_version: 1
      mordred:
        region_name: RegionOne
        auth:
          username: 'mordred'
          password: XXXXXXX
          project_name: 'shade'
          auth_url: 'https://montytaylor-sjc.openstack.blueboxgrid.com:5001/v2.0'
      infra:
        profile: rackspace
        auth:
          username: openstackci
          password: XXXXXXXX
          project_id: 610275
        regions:
          - DFW
          - ORD
          - IAD

You may note a few things. First, since `auth_url` settings are silly
and embarrassingly ugly, known cloud vendor profile information is included and
may be referenced by name. One of the benefits of that is that `auth_url`
isn't the only thing the vendor defaults contain. For instance, since
Rackspace lists `rax:database` as the service type for trove, `os-client-config`
knows that so that you don't have to. In case the cloud vendor profile is not
available, you can provide one called `clouds-public.yaml`, following the same
location rules previously mentioned for the config files.

`regions` can be a list of regions. When you call `get_all_clouds`,
you'll get a cloud config object for each cloud/region combo.

As seen with `dns_service_type`, any setting that makes sense to be per-service,
like `service_type` or `endpoint` or `api_version`, can be set by prefixing
the setting with the default service type. That might strike you funny when
setting `service_type`, and it does me too - but that's just the world we live
in.
||||
Auth Settings
-------------

Keystone has auth plugins - which means it's not possible to know ahead of
time which auth settings are needed. `os-client-config` sets the default
plugin type to `password`, which is what everything was before plugins came
about. In order to facilitate validation of values, all of the parameters
that exist as a result of a chosen plugin need to go into the auth dict. For
password auth, this includes `auth_url`, `username` and `password`, as well
as anything related to domains, projects and trusts.
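As a sketch of what "parameters need to go into the auth dict" means in
practice, a consumer could check for the password plugin's required keys
like this. The required-key set and helper are illustrative assumptions,
not the library's actual validation code:

```python
# Assumed minimal key set for the password plugin, for illustration only.
REQUIRED_PASSWORD_KEYS = {'auth_url', 'username', 'password'}

def missing_auth_keys(auth):
    """Return required password-auth keys absent from ``auth``."""
    return sorted(REQUIRED_PASSWORD_KEYS - set(auth))

auth = {'auth_url': 'https://example.com:5000/v3', 'username': 'mordred'}
print(missing_auth_keys(auth))
```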

Splitting Secrets
-----------------

In some scenarios, such as configuration management controlled environments,
it might be easier to have secrets in one file and non-secrets in another.
This is fully supported via an optional file `secure.yaml`, which follows all
the same location rules as `clouds.yaml`. It can contain anything you put
in `clouds.yaml` and will take precedence over anything in the `clouds.yaml`
file.
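The documented precedence can be pictured as a recursive overlay of
`secure.yaml` values on top of `clouds.yaml` values. This merge function is
a hypothetical sketch of that behavior, not the library's implementation:

```python
# Sketch: overlay secure.yaml on top of clouds.yaml, recursing into
# nested dicts so only the secret keys are replaced.
def merge(base, override):
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge(merged[key], value)
        else:
            merged[key] = value
    return merged

clouds = {'internap': {'auth': {'username': 'api-55f9a00fb2619'}}}
secure = {'internap': {'auth': {'password': 'XXXXXXXXXXXXXXXXX'}}}
print(merge(clouds, secure))
```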

.. code-block:: yaml

  # clouds.yaml
  clouds:
    internap:
      profile: internap
      auth:
        username: api-55f9a00fb2619
        project_name: inap-17037
      regions:
        - ams01
        - nyj01

  # secure.yaml
  clouds:
    internap:
      auth:
        password: XXXXXXXXXXXXXXXXX

SSL Settings
------------

When access to a cloud is over a secure connection, `os-client-config`
will always verify the SSL cert by default. This can be disabled by setting
`verify` to `False`. In case the cert is signed by an unknown CA, a specific
cacert can be provided via `cacert`. **WARNING:** `verify` will always have
precedence over `cacert`, so when setting a CA cert but disabling `verify`,
the cloud cert will never be validated.

Client certs are also configurable. `cert` will be the client cert file
location. In case the cert key is not included within the client cert file,
its file location needs to be set via `key`.

.. code-block:: yaml

  # clouds.yaml
  clouds:
    secure:
      auth: ...
      key: /home/myhome/client-cert.key
      cert: /home/myhome/client-cert.crt
      cacert: /home/myhome/ca.crt
    insecure:
      auth: ...
      verify: False
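As a rough sketch of how these options map onto requests-style keyword
arguments, mirroring the warning that `verify: False` wins over `cacert`
(a hypothetical mapping under assumed semantics, not the library's actual
plumbing):

```python
# Sketch: translate the SSL config keys above into requests-style
# verify / cert keyword arguments.
def ssl_kwargs(config):
    verify = config.get('verify', True)
    if verify and config.get('cacert'):
        # requests accepts a CA bundle path as the verify value
        verify = config['cacert']
    cert = config.get('cert')
    if cert and config.get('key'):
        cert = (cert, config['key'])
    return {'verify': verify, 'cert': cert}

# The CA cert is ignored because verify is False, as warned above.
print(ssl_kwargs({'verify': False, 'cacert': '/home/myhome/ca.crt'}))
```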

Cache Settings
--------------

Accessing a cloud is often expensive, so it's quite common to want to do some
client-side caching of those operations. To facilitate that,
`os-client-config` understands passing through cache settings to
dogpile.cache, with the following behaviors:

* Listing no config settings means you get a null cache.
* `cache.expiration_time` and nothing else gets you a memory cache.
* Otherwise, `cache.class` and `cache.arguments` are passed in.

Different cloud behaviors are also differently expensive to deal with. If you
want to get really crazy and tweak stuff, you can specify different expiration
times on a per-resource basis by passing values, in seconds, to an expiration
mapping keyed on the singular name of the resource. A value of `-1` indicates
that the resource should never expire.

`os-client-config` does not actually cache anything itself, but it collects
and presents the cache information so that your various applications that
are connecting to OpenStack can share a cache should you desire.

.. code-block:: yaml

  cache:
    class: dogpile.cache.pylibmc
    expiration_time: 3600
    arguments:
      url:
        - 127.0.0.1
    expiration:
      server: 5
      flavor: -1
  clouds:
    mtvexx:
      profile: vexxhost
      auth:
        username: mordred@inaugust.com
        password: XXXXXXXXX
        project_name: mordred@inaugust.com
      region_name: ca-ymq-1
      dns_api_version: 1
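The per-resource expiration lookup described above might be consulted like
this (a hypothetical sketch, not library code; `-1` is treated here as
"cache forever"):

```python
# Sketch: look up a per-resource cache expiration, falling back to the
# global expiration_time; -1 means the entry never expires.
cache_config = {
    'expiration_time': 3600,
    'expiration': {'server': 5, 'flavor': -1},
}

def expiration_for(resource):
    seconds = cache_config['expiration'].get(
        resource, cache_config['expiration_time'])
    return None if seconds == -1 else seconds  # None: cache forever

print(expiration_for('server'), expiration_for('flavor'),
      expiration_for('image'))
```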

IPv6
----

IPv6 is the future, and you should always use it if your cloud supports it
and if your local network supports it. Both of those are easily detectable
and all friendly software should do the right thing. However, sometimes you
might exist in a location where you have an IPv6 stack, but something evil
has caused it to not actually function. In that case, there is a config
option you can set to unbreak you: `force_ipv4`, or the `OS_FORCE_IPV4`
boolean environment variable.

.. code-block:: yaml

  client:
    force_ipv4: true
  clouds:
    mtvexx:
      profile: vexxhost
      auth:
        username: mordred@inaugust.com
        password: XXXXXXXXX
        project_name: mordred@inaugust.com
      region_name: ca-ymq-1
      dns_api_version: 1
    monty:
      profile: rax
      auth:
        username: mordred@inaugust.com
        password: XXXXXXXXX
        project_name: mordred@inaugust.com
      region_name: DFW

The above snippet will tell client programs to prefer returning an IPv4
address.

Per-region settings
-------------------

Sometimes you have a cloud provider that has config that is common to the
cloud, but also with some things you might want to express on a per-region
basis. For instance, Internap provides a public and private network specific
to the user in each region, and putting the values of those networks into
config can make consuming programs more efficient.

To support this, the region list can actually be a list of dicts, and any
setting that can be set at the cloud level can be overridden for that
region.

.. code-block:: yaml

  clouds:
    internap:
      profile: internap
      auth:
        password: XXXXXXXXXXXXXXXXX
        username: api-55f9a00fb2619
        project_name: inap-17037
      regions:
        - name: ams01
          values:
            networks:
              - name: inap-17037-WAN1654
                routes_externally: true
              - name: inap-17037-LAN6745
        - name: nyj01
          values:
            networks:
              - name: inap-17037-WAN1654
                routes_externally: true
              - name: inap-17037-LAN6745
12
doc/source/user/config/index.rst
Normal file
@ -0,0 +1,12 @@
========================
Using os-client-config
========================

.. toctree::
   :maxdepth: 2

   configuration
   using
   vendor-support
   network-config
   reference
60
doc/source/user/config/network-config.rst
Normal file
@ -0,0 +1,60 @@
==============
Network Config
==============

There are several different qualities that networks in OpenStack might have
that might not be able to be automatically inferred from the available
metadata. To help users navigate more complex setups, `os-client-config`
allows configuring a list of network metadata.

.. code-block:: yaml

  clouds:
    amazing:
      networks:
        - name: blue
          routes_externally: true
        - name: purple
          routes_externally: true
          default_interface: true
        - name: green
          routes_externally: false
        - name: yellow
          routes_externally: false
          nat_destination: true
        - name: chartreuse
          routes_externally: false
          routes_ipv6_externally: true
        - name: aubergine
          routes_ipv4_externally: false
          routes_ipv6_externally: true

Every entry must have a name field, which can hold either the name or the id
of the network.

`routes_externally` is a boolean field that labels the network as handling
north/south traffic off of the cloud. In a public cloud this might be thought
of as the "public" network, but in private clouds it's possible it might
be an RFC1918 address. In either case, it provides IPs to servers that
things not on the cloud can use. This value defaults to `false`, which
indicates only servers on the same network can talk to it.

`routes_ipv4_externally` and `routes_ipv6_externally` are boolean fields to
help handle `routes_externally` in the case where a network has a split stack
with different values for IPv4 and IPv6. Either entry, if not given, defaults
to the value of `routes_externally`.

`default_interface` is a boolean field that indicates that the network is the
one that programs should use. It defaults to false. An example of needing to
use this value is a cloud with two private networks, where a user is
running ansible on one of the servers to talk to other servers on the private
network. Because both networks are private, there would otherwise be no way
to determine which one should be used for the traffic. There can only be one
`default_interface` per cloud.

`nat_destination` is a boolean field that indicates which network floating
ips should be attached to. It defaults to false. Normally this can be
inferred by looking for a network that has subnets that have a gateway_ip.
But it's possible to have more than one network that satisfies that
condition, so the user might want to tell programs which one to pick. There
can be only one `nat_destination` per cloud.
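The field semantics above can be sketched in a few lines: resolving the
per-family routing flags (which default to the generic `routes_externally`
value) and picking the single `nat_destination` network. This is a
hypothetical illustration, not library code:

```python
# Sketch: network entries like the ones documented above.
networks = [
    {'name': 'yellow', 'routes_externally': False, 'nat_destination': True},
    {'name': 'aubergine', 'routes_ipv4_externally': False,
     'routes_ipv6_externally': True},
]

def routes_externally(net, family):
    # Per-family flags default to the generic routes_externally value.
    generic = net.get('routes_externally', False)
    return net.get('routes_%s_externally' % family, generic)

nat_destination = next(
    net['name'] for net in networks if net.get('nat_destination'))
print(nat_destination)
print(routes_externally(networks[1], 'ipv6'))
```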
10
doc/source/user/config/reference.rst
Normal file
@ -0,0 +1,10 @@
=============
API Reference
=============

.. module:: openstack.config
   :synopsis: OpenStack client configuration

.. autoclass:: openstack.config.OpenStackConfig
   :members:
   :inherited-members:
141
doc/source/user/config/using.rst
Normal file
@ -0,0 +1,141 @@
========================================
Using openstack.config in an Application
========================================

Usage
-----

The simplest and least useful thing you can do is:

.. code-block:: python

    python -m openstack.config.loader

This will print out whatever it finds for your config. If you want to use
it from Python, which is much more likely what you want to do, try things
like:

Get a named cloud.

.. code-block:: python

    import openstack.config

    cloud_config = openstack.config.OpenStackConfig().get_one_cloud(
        'internap', region_name='ams01')
    print(cloud_config.name, cloud_config.region, cloud_config.config)

Or, get all of the clouds.

.. code-block:: python

    import openstack.config

    cloud_config = openstack.config.OpenStackConfig().get_all_clouds()
    for cloud in cloud_config:
        print(cloud.name, cloud.region, cloud.config)

argparse
--------

If you're using `openstack.config` from a program that wants to process
command line options, there is a registration function to register the
arguments that both `openstack.config` and keystoneauth know how to deal
with - as well as a consumption argument.

.. code-block:: python

    import argparse
    import sys

    import openstack.config

    cloud_config = openstack.config.OpenStackConfig()
    parser = argparse.ArgumentParser()
    cloud_config.register_argparse_arguments(parser, sys.argv)

    options = parser.parse_args()

    cloud = cloud_config.get_one_cloud(argparse=options)

Constructing a Connection object
--------------------------------

If what you want to do is get an `openstack.connection.Connection`, and you
want it to do all the normal things related to clouds.yaml and `OS_`
environment variables, a helper function is provided. The following will get
you a fully configured `openstacksdk` instance.

.. code-block:: python

    import openstack.config

    conn = openstack.config.make_connection()

If you want to do the same thing but on a named cloud.

.. code-block:: python

    import openstack.config

    conn = openstack.config.make_connection(cloud='mtvexx')

If you want to do the same thing but also support command line parsing.

.. code-block:: python

    import argparse

    import openstack.config

    conn = openstack.config.make_connection(options=argparse.ArgumentParser())

Constructing cloud objects
--------------------------

If what you want to do is get an
`openstack.cloud.openstackcloud.OpenStackCloud` object, a helper function
that honors clouds.yaml and `OS_` environment variables is provided. The
following will get you a fully configured `OpenStackCloud` instance.

.. code-block:: python

    import openstack.config

    cloud = openstack.config.make_cloud()

If you want to do the same thing but on a named cloud.

.. code-block:: python

    import openstack.config

    cloud = openstack.config.make_cloud(cloud='mtvexx')

If you want to do the same thing but also support command line parsing.

.. code-block:: python

    import argparse

    import openstack.config

    cloud = openstack.config.make_cloud(options=argparse.ArgumentParser())

Constructing REST API Clients
-----------------------------

What if you want to make direct REST calls via a Session interface? You're
in luck. A similar interface is available as with `openstacksdk` and `shade`.
The main difference is that you need to specify which service you want to
talk to, and `make_rest_client` will return you a keystoneauth Session object
that is mounted on the endpoint for the service you're looking for.

.. code-block:: python

    import openstack.config

    session = openstack.config.make_rest_client('compute', cloud='vexxhost')

    response = session.get('/servers')
    server_list = response.json()['servers']
337
doc/source/user/config/vendor-support.rst
Normal file
@ -0,0 +1,337 @@
==============
Vendor Support
==============

OpenStack presents deployers with many options, some of which can expose
differences to end users. `os-client-config` tries its best to collect
information about various things a user would need to know. The following
is a text representation of the vendor-related defaults `os-client-config`
knows about.

Default Values
--------------

These are the default behaviors unless a cloud is configured differently.

* Identity uses `password` authentication
* Identity API Version is 2
* Image API Version is 2
* Volume API Version is 2
* Images must be in `qcow2` format
* Images are uploaded using PUT interface
* Public IPv4 is directly routable via DHCP from Neutron
* IPv6 is not provided
* Floating IPs are not required
* Floating IPs are provided by Neutron
* Security groups are provided by Neutron
* Vendor specific agents are not used

auro
----

https://api.auro.io:5000/v2.0

============== ================
Region Name    Location
============== ================
van1           Vancouver, BC
============== ================

* Public IPv4 is provided via NAT with Neutron Floating IP

catalyst
--------

https://api.cloud.catalyst.net.nz:5000/v2.0

============== ================
Region Name    Location
============== ================
nz-por-1       Porirua, NZ
nz_wlg_2       Wellington, NZ
============== ================

* Image API Version is 1
* Images must be in `raw` format
* Volume API Version is 1

citycloud
---------

https://identity1.citycloud.com:5000/v3/

============== ================
Region Name    Location
============== ================
Buf1           Buffalo, NY
Fra1           Frankfurt, DE
Kna1           Karlskrona, SE
La1            Los Angeles, CA
Lon1           London, UK
Sto2           Stockholm, SE
============== ================

* Identity API Version is 3
* Public IPv4 is provided via NAT with Neutron Floating IP
* Volume API Version is 1

conoha
------

https://identity.%(region_name)s.conoha.io

============== ================
Region Name    Location
============== ================
tyo1           Tokyo, JP
sin1           Singapore
sjc1           San Jose, CA
============== ================

* Image upload is not supported

datacentred
-----------

https://compute.datacentred.io:5000

============== ================
Region Name    Location
============== ================
sal01          Manchester, UK
============== ================

* Image API Version is 1

dreamcompute
------------

https://iad2.dream.io:5000

============== ================
Region Name    Location
============== ================
RegionOne      Ashburn, VA
============== ================

* Identity API Version is 3
* Images must be in `raw` format
* IPv6 is provided to every server

dreamhost
---------

Deprecated, please use dreamcompute

https://keystone.dream.io/v2.0

============== ================
Region Name    Location
============== ================
RegionOne      Ashburn, VA
============== ================

* Images must be in `raw` format
* Public IPv4 is provided via NAT with Neutron Floating IP
* IPv6 is provided to every server

otc
---

https://iam.%(region_name)s.otc.t-systems.com/v3

============== ================
Region Name    Location
============== ================
eu-de          Germany
============== ================

* Identity API Version is 3
* Images must be in `vhd` format
* Public IPv4 is provided via NAT with Neutron Floating IP

elastx
------

https://ops.elastx.net:5000/v2.0

============== ================
Region Name    Location
============== ================
regionOne      Stockholm, SE
============== ================

* Public IPv4 is provided via NAT with Neutron Floating IP

entercloudsuite
---------------

https://api.entercloudsuite.com/v2.0

============== ================
Region Name    Location
============== ================
nl-ams1        Amsterdam, NL
it-mil1        Milan, IT
de-fra1        Frankfurt, DE
============== ================

* Image API Version is 1
* Volume API Version is 1

fuga
----

https://identity.api.fuga.io:5000

============== ================
Region Name    Location
============== ================
cystack        Netherlands
============== ================

* Identity API Version is 3
* Volume API Version is 3

internap
--------

https://identity.api.cloud.iweb.com/v2.0

============== ================
Region Name    Location
============== ================
ams01          Amsterdam, NL
da01           Dallas, TX
nyj01          New York, NY
sin01          Singapore
sjc01          San Jose, CA
============== ================

* Floating IPs are not supported

ovh
---

https://auth.cloud.ovh.net/v2.0

============== ================
Region Name    Location
============== ================
BHS1           Beauharnois, QC
SBG1           Strasbourg, FR
GRA1           Gravelines, FR
============== ================

* Images may be in `raw` format. The `qcow2` default is also supported
* Floating IPs are not supported

rackspace
---------

https://identity.api.rackspacecloud.com/v2.0/

============== ================
Region Name    Location
============== ================
DFW            Dallas, TX
HKG            Hong Kong
IAD            Washington, D.C.
LON            London, UK
ORD            Chicago, IL
SYD            Sydney, NSW
============== ================

* Database Service Type is `rax:database`
* Compute Service Name is `cloudServersOpenStack`
* Images must be in `vhd` format
* Images must be uploaded using the Glance Task Interface
* Floating IPs are not supported
* Public IPv4 is directly routable via static config by Nova
* IPv6 is provided to every server
* Security groups are not supported
* Uploaded Images need properties to not use vendor agent::

    :vm_mode: hvm
    :xenapi_use_agent: False

* Volume API Version is 1
* While passwords are recommended for use, API keys do work as well.
  The `rackspaceauth` python package must be installed, and then the
  following can be added to clouds.yaml::

    auth:
      username: myusername
      api_key: myapikey
    auth_type: rackspace_apikey

switchengines
-------------

https://keystone.cloud.switch.ch:5000/v2.0

============== ================
Region Name    Location
============== ================
LS             Lausanne, CH
ZH             Zurich, CH
============== ================

* Images must be in `raw` format
* Images must be uploaded using the Glance Task Interface
* Volume API Version is 1

ultimum
-------

https://console.ultimum-cloud.com:5000/v2.0

============== ================
Region Name    Location
============== ================
RegionOne      Prague, CZ
============== ================

* Volume API Version is 1

unitedstack
-----------

https://identity.api.ustack.com/v3

============== ================
Region Name    Location
============== ================
bj1            Beijing, CN
gd1            Guangdong, CN
============== ================

* Identity API Version is 3
* Images must be in `raw` format
* Volume API Version is 1

vexxhost
--------

http://auth.vexxhost.net

============== ================
Region Name    Location
============== ================
ca-ymq-1       Montreal, QC
============== ================

* DNS API Version is 1
* Identity API Version is 3

zetta
-----

https://identity.api.zetta.io/v3

============== ================
Region Name    Location
============== ================
no-osl1        Oslo, NO
============== ================

* DNS API Version is 2
* Identity API Version is 3
13
doc/source/user/examples/cleanup-servers.py
Normal file
@ -0,0 +1,13 @@
import openstack.cloud

# Initialize and turn on debug logging
openstack.cloud.simple_logging(debug=True)

for cloud_name, region_name in [
        ('my-vexxhost', 'ca-ymq-1'),
        ('my-citycloud', 'Buf1'),
        ('my-internap', 'ams01')]:
    # Initialize cloud
    cloud = openstack.openstack_cloud(cloud=cloud_name, region_name=region_name)
    for server in cloud.search_servers('my-server'):
        cloud.delete_server(server, wait=True, delete_ips=True)
22
doc/source/user/examples/create-server-dict.py
Normal file
@ -0,0 +1,22 @@
import openstack.cloud

# Initialize and turn on debug logging
openstack.cloud.simple_logging(debug=True)

for cloud_name, region_name, image, flavor_id in [
        ('my-vexxhost', 'ca-ymq-1', 'Ubuntu 16.04.1 LTS [2017-03-03]',
         '5cf64088-893b-46b5-9bb1-ee020277635d'),
        ('my-citycloud', 'Buf1', 'Ubuntu 16.04 Xenial Xerus',
         '0dab10b5-42a2-438e-be7b-505741a7ffcc'),
        ('my-internap', 'ams01', 'Ubuntu 16.04 LTS (Xenial Xerus)',
         'A1.4')]:
    # Initialize cloud
    cloud = openstack.openstack_cloud(cloud=cloud_name, region_name=region_name)

    # Boot a server, wait for it to boot, and then do whatever is needed
    # to get a public ip for it.
    server = cloud.create_server(
        'my-server', image=image, flavor=dict(id=flavor_id),
        wait=True, auto_ip=True)
    # Delete it - this is a demo
    cloud.delete_server(server, wait=True, delete_ips=True)
25
doc/source/user/examples/create-server-name-or-id.py
Normal file
@ -0,0 +1,25 @@
import openstack.cloud

# Initialize and turn on debug logging
openstack.cloud.simple_logging(debug=True)

for cloud_name, region_name, image, flavor in [
        ('my-vexxhost', 'ca-ymq-1',
         'Ubuntu 16.04.1 LTS [2017-03-03]', 'v1-standard-4'),
        ('my-citycloud', 'Buf1',
         'Ubuntu 16.04 Xenial Xerus', '4C-4GB-100GB'),
        ('my-internap', 'ams01',
         'Ubuntu 16.04 LTS (Xenial Xerus)', 'A1.4')]:
    # Initialize cloud
    cloud = openstack.openstack_cloud(cloud=cloud_name, region_name=region_name)
    cloud.delete_server('my-server', wait=True, delete_ips=True)

    # Boot a server, wait for it to boot, and then do whatever is needed
    # to get a public ip for it.
    server = cloud.create_server(
        'my-server', image=image, flavor=flavor, wait=True, auto_ip=True)
    print(server.name)
    print(server['name'])
    cloud.pprint(server)
    # Delete it - this is a demo
    cloud.delete_server(server, wait=True, delete_ips=True)
6
doc/source/user/examples/debug-logging.py
Normal file
@ -0,0 +1,6 @@
import openstack.cloud
openstack.cloud.simple_logging(debug=True)

cloud = openstack.openstack_cloud(
    cloud='my-vexxhost', region_name='ca-ymq-1')
cloud.get_image('Ubuntu 16.04.1 LTS [2017-03-03]')
7
doc/source/user/examples/find-an-image.py
Normal file
@ -0,0 +1,7 @@
import openstack.cloud
openstack.cloud.simple_logging()

cloud = openstack.openstack_cloud(cloud='fuga', region_name='cystack')
cloud.pprint([
    image for image in cloud.list_images()
    if 'ubuntu' in image.name.lower()])
6
doc/source/user/examples/http-debug-logging.py
Normal file
@ -0,0 +1,6 @@
import openstack.cloud
openstack.cloud.simple_logging(http_debug=True)

cloud = openstack.openstack_cloud(
    cloud='my-vexxhost', region_name='ca-ymq-1')
cloud.get_image('Ubuntu 16.04.1 LTS [2017-03-03]')
7
doc/source/user/examples/munch-dict-object.py
Normal file
@ -0,0 +1,7 @@
import openstack.cloud
openstack.cloud.simple_logging(debug=True)

cloud = openstack.openstack_cloud(cloud='ovh', region_name='SBG1')
image = cloud.get_image('Ubuntu 16.10')
print(image.name)
print(image['name'])
7
doc/source/user/examples/normalization.py
Normal file
@ -0,0 +1,7 @@
import openstack.cloud
openstack.cloud.simple_logging()

cloud = openstack.openstack_cloud(cloud='fuga', region_name='cystack')
image = cloud.get_image(
    'Ubuntu 16.04 LTS - Xenial Xerus - 64-bit - Fuga Cloud Based Image')
cloud.pprint(image)
23
doc/source/user/examples/server-information.py
Normal file
@ -0,0 +1,23 @@
import openstack.cloud
openstack.cloud.simple_logging(debug=True)

cloud = openstack.openstack_cloud(cloud='my-citycloud', region_name='Buf1')
try:
    server = cloud.create_server(
        'my-server', image='Ubuntu 16.04 Xenial Xerus',
        flavor=dict(id='0dab10b5-42a2-438e-be7b-505741a7ffcc'),
        wait=True, auto_ip=True)

    print("\n\nFull Server\n\n")
    cloud.pprint(server)

    print("\n\nTurn Detailed Off\n\n")
    cloud.pprint(cloud.get_server('my-server', detailed=False))

    print("\n\nBare Server\n\n")
    cloud.pprint(cloud.get_server('my-server', bare=True))

finally:
    # Delete it - this is a demo
    cloud.delete_server(server, wait=True, delete_ips=True)
@ -0,0 +1,5 @@
import openstack.cloud
openstack.cloud.simple_logging(debug=True)

cloud = openstack.openstack_cloud(cloud='rax', region_name='DFW')
print(cloud.has_service('network'))
6
doc/source/user/examples/service-conditionals.py
Normal file
@ -0,0 +1,6 @@
import openstack.cloud
openstack.cloud.simple_logging(debug=True)

cloud = openstack.openstack_cloud(cloud='kiss', region_name='region1')
print(cloud.has_service('network'))
print(cloud.has_service('container-orchestration'))
8
doc/source/user/examples/strict-mode.py
Normal file
@ -0,0 +1,8 @@
import openstack.cloud
openstack.cloud.simple_logging()

cloud = openstack.openstack_cloud(
    cloud='fuga', region_name='cystack', strict=True)
image = cloud.get_image(
    'Ubuntu 16.04 LTS - Xenial Xerus - 64-bit - Fuga Cloud Based Image')
cloud.pprint(image)
10
doc/source/user/examples/upload-large-object.py
Normal file
@ -0,0 +1,10 @@
import openstack.cloud
openstack.cloud.simple_logging(debug=True)

cloud = openstack.openstack_cloud(cloud='ovh', region_name='SBG1')
cloud.create_object(
    container='my-container', name='my-object',
    filename='/home/mordred/briarcliff.sh3d',
    segment_size=1000000)
cloud.delete_object('my-container', 'my-object')
cloud.delete_container('my-container')
10
doc/source/user/examples/upload-object.py
Normal file
@ -0,0 +1,10 @@
import openstack.cloud
openstack.cloud.simple_logging(debug=True)

cloud = openstack.openstack_cloud(cloud='ovh', region_name='SBG1')
cloud.create_object(
    container='my-container', name='my-object',
    filename='/home/mordred/briarcliff.sh3d',
    segment_size=1000000)
cloud.delete_object('my-container', 'my-object')
cloud.delete_container('my-container')
6
doc/source/user/examples/user-agent.py
Normal file
@ -0,0 +1,6 @@
|
||||
import openstack.cloud
|
||||
openstack.cloud.simple_logging(http_debug=True)
|
||||
|
||||
cloud = openstack.openstack_cloud(
|
||||
cloud='datacentred', app_name='AmazingApp', app_version='1.0')
|
||||
cloud.list_networks()
|
20
doc/source/user/index.rst
Normal file
@ -0,0 +1,20 @@
|
||||
==================
|
||||
Shade User Guide
|
||||
==================
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
config/index
|
||||
usage
|
||||
logging
|
||||
model
|
||||
microversions
|
||||
|
||||
Presentations
|
||||
=============
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
multi-cloud-demo
|
105
doc/source/user/logging.rst
Normal file
@ -0,0 +1,105 @@
|
||||
=======
|
||||
Logging
|
||||
=======
|
||||
|
||||
.. note:: TODO(shade) This document is written from a shade POV. It needs to
|
||||
be combined with the existing logging guide, but also the logging
|
||||
systems need to be rationalized.
|
||||
|
||||
`openstacksdk` uses `Python Logging`_. As `openstacksdk` is a library, it does
|
||||
not configure logging handlers automatically, expecting instead for that to be
|
||||
the purview of the consuming application.
|
||||
|
||||
Simple Usage
|
||||
------------
|
||||
|
||||
For consumers who just want to get a basic logging setup without thinking
|
||||
about it too deeply, there is a helper method. If used, it should be called
|
||||
before any other `shade` functionality.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
import openstack.cloud
|
||||
openstack.cloud.simple_logging()
|
||||
|
||||
`openstack.cloud.simple_logging` takes two optional boolean arguments:
|
||||
|
||||
debug
|
||||
Turns on debug logging.
|
||||
|
||||
http_debug
|
||||
Turns on debug logging as well as debug logging of the underlying HTTP calls.
|
||||
|
||||
`openstack.cloud.simple_logging` also sets up a few other loggers and
|
||||
squelches some warnings or log messages that are otherwise uninteresting or
|
||||
unactionable by a `openstack.cloud` user.
|
||||
|
||||
Advanced Usage
|
||||
--------------
|
||||
|
||||
`openstack.cloud` logs to a set of different named loggers.
|
||||
|
||||
Most of the logging is set up to log to the root `openstack.cloud` logger.
|
||||
There are additional sub-loggers that are used at times, primarily so that a
|
||||
user can decide to turn on or off a specific type of logging. They are listed
|
||||
below.
|
||||
|
||||
openstack.cloud.task_manager
|
||||
`openstack.cloud` uses a Task Manager to perform remote calls. The
|
||||
`openstack.cloud.task_manager` logger emits messages at the start and end
|
||||
of each Task announcing what it is going to run and then what it ran and
|
||||
how long it took. Logging `openstack.cloud.task_manager` is a good way to
|
||||
get a trace of external actions `openstack.cloud` is taking without full
|
||||
`HTTP Tracing`_.
|
||||
|
||||
openstack.cloud.request_ids
|
||||
The `openstack.cloud.request_ids` logger emits a log line at the end of each
|
||||
HTTP interaction with the OpenStack Request ID associated with the
|
||||
interaction. This can be useful for tracking actions taken on the
|
||||
server-side if one does not want `HTTP Tracing`_.
|
||||
|
||||
openstack.cloud.exc
|
||||
If `log_inner_exceptions` is set to True, `shade` will emit any wrapped
|
||||
exception to the `openstack.cloud.exc` logger. Wrapped exceptions are usually
|
||||
considered implementation details, but can be useful for debugging problems.
|
||||
|
||||
openstack.cloud.iterate_timeout
|
||||
When `shade` needs to poll a resource, it does so in a loop that waits
|
||||
between iterations and ultimately times out. The
|
||||
`openstack.cloud.iterate_timeout` logger emits messages for each iteration
|
||||
indicating it is waiting and for how long. These can be useful to see for
|
||||
long running tasks so that one can know things are not stuck, but can also
|
||||
be noisy.
|
||||
|
||||
openstack.cloud.http
|
||||
`shade` will sometimes log additional information about HTTP interactions
|
||||
to the `openstack.cloud.http` logger. This can be verbose, as it sometimes
|
||||
logs entire response bodies.
|
||||
|
||||
openstack.cloud.fnmatch
|
||||
`shade` will try to use `fnmatch`_ on given `name_or_id` arguments. It's a
|
||||
best effort attempt, so pattern misses are logged to
|
||||
`openstack.cloud.fnmatch`. A user may not be intending to use an fnmatch
|
||||
pattern - such as if they are trying to find an image named
|
||||
``Fedora 24 [official]``, so these messages are logged separately.
|
||||
|
||||
.. _fnmatch: https://pymotw.com/2/fnmatch/
|
||||
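For example, the per-logger granularity above can be used with standard
Python logging configuration. This is an illustrative sketch only, not part
of the shade API - it shows tracing external actions without full HTTP
bodies while silencing the noisy polling messages:

```python
import logging

# Basic handler setup; a real application would configure its own handlers.
logging.basicConfig(level=logging.INFO)

# Trace external actions via the task manager, without full HTTP tracing.
logging.getLogger('openstack.cloud.task_manager').setLevel(logging.DEBUG)

# Silence the per-iteration polling messages, which can be noisy for
# long-running waits.
logging.getLogger('openstack.cloud.iterate_timeout').setLevel(logging.WARNING)
```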
|
||||
HTTP Tracing
|
||||
------------
|
||||
|
||||
HTTP Interactions are handled by `keystoneauth`. If you want to enable HTTP
|
||||
tracing while using `shade` and are not using `openstack.cloud.simple_logging`,
|
||||
set the log level of the `keystoneauth` logger to `DEBUG`.
|
||||
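A minimal sketch of doing that with the standard library (assuming the
application configures its own handlers rather than using
`openstack.cloud.simple_logging`):

```python
import logging

# Application-level handler setup.
logging.basicConfig(level=logging.INFO)

# HTTP interactions are handled by keystoneauth, so turning its logger up
# to DEBUG enables HTTP tracing.
logging.getLogger('keystoneauth').setLevel(logging.DEBUG)
```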
|
||||
Python Logging
|
||||
--------------
|
||||
|
||||
Python logging is a standard feature of Python and is documented fully in the
|
||||
Python Documentation, which varies by version of Python.
|
||||
|
||||
For more information on Python Logging for Python v2, see
|
||||
https://docs.python.org/2/library/logging.html.
|
||||
|
||||
For more information on Python Logging for Python v3, see
|
||||
https://docs.python.org/3/library/logging.html.
|
75
doc/source/user/microversions.rst
Normal file
@ -0,0 +1,75 @@
|
||||
=============
|
||||
Microversions
|
||||
=============
|
||||
|
||||
As shade rolls out support for consuming microversions, it will do so on a
|
||||
call by call basis as needed. Just like with major versions, shade should have
|
||||
logic to handle each microversion for a given REST call it makes, with the
|
||||
following rules in mind:
|
||||
|
||||
* If an activity shade performs can be done differently or more efficiently
|
||||
with a new microversion, the support should be added to openstack.cloud.
|
||||
|
||||
* shade should always attempt to use the latest microversion it is aware of
|
||||
for a given call, unless a microversion removes important data.
|
||||
|
||||
* Microversion selection should under no circumstances be exposed to the user,
|
||||
except in the case of missing feature error messages.
|
||||
|
||||
* If a feature is only exposed for a given microversion and cannot be simulated
|
||||
for older clouds without that microversion, it is ok to add it to shade but
|
||||
a clear error message should be given to the user that the given feature is
|
||||
not available on their cloud. (A message such as "This cloud only supports
|
||||
a maximum microversion of XXX for service YYY and this feature only exists
|
||||
on clouds with microversion ZZZ. Please contact your cloud provider for
|
||||
information about when this feature might be available")
|
||||
|
||||
* When adding a feature to shade that only exists behind a new microversion,
|
||||
every effort should be made to figure out how to provide the same
|
||||
functionality if at all possible, even if doing so is inefficient. If an
|
||||
inefficient workaround is employed, a warning should be provided to the
|
||||
user. (the user's workaround to skip the inefficient behavior would be to
|
||||
stop using that shade API call)
|
||||
|
||||
* If shade is aware of logic for more than one microversion, it should always
|
||||
attempt to use the latest version available for the service for that call.
|
||||
|
||||
* Objects returned from shade should always go through normalization and thus
|
||||
should always conform to shade's documented data model and should never look
|
||||
different to the shade user regardless of the microversion used for the REST
|
||||
call.
|
||||
|
||||
* If a microversion adds new fields to an object, those fields should be
|
||||
added to shade's data model contract for that object and the data should
|
||||
either be filled in by performing additional REST calls if the data is
|
||||
available that way, or the field should have a default value of None which
|
||||
the user can be expected to test for when attempting to use the new value.
|
||||
|
||||
* If a microversion removes fields from an object that are part of shade's
|
||||
existing data model contract, care should be taken to not use the new
|
||||
microversion for that call unless forced to by lack of availability of the
|
||||
old microversion on the cloud in question. In the case where an old
|
||||
microversion is no longer available, care must be taken to either find the
|
||||
data from another source and fill it in, or to put a value of None into the
|
||||
field and document for the user that on some clouds the value may not exist.
|
||||
|
||||
* If a microversion removes a field and the outcome is particularly intractable
|
||||
and impossible to work around without fundamentally breaking shade's users,
|
||||
an issue should be raised with the service team in question. Hopefully a
|
||||
resolution can be found during the period while clouds still have the old
|
||||
microversion.
|
||||
|
||||
* As new calls or objects are added to shade, it is important to check in with
|
||||
the service team in question on the expected stability of the object. If
|
||||
there are known changes expected in the future, even if they may be a few
|
||||
years off, shade should take care to not add commitments to its data model
|
||||
for those fields/features. It is ok for shade to not have something.
|
||||
|
||||
.. note::
|
||||
shade does not currently have any sort of "experimental" opt-in API that
|
||||
would allow shade to expose things to a user that may not be supportable
|
||||
under shade's normal compatibility contract. If a conflict arises in the
|
||||
future where there is a strong desire for a feature but also a lack of
|
||||
certainty about its stability over time, an experimental API may want to
|
||||
be explored ... but concrete use cases should arise before such a thing
|
||||
is started.
|
504
doc/source/user/model.rst
Normal file
@ -0,0 +1,504 @@
|
||||
==========
|
||||
Data Model
|
||||
==========
|
||||
|
||||
shade has a very strict policy of never breaking backwards compatibility.
|
||||
However, with the data structures returned from OpenStack, there are places
|
||||
where the resource structures from OpenStack are returned to the user somewhat
|
||||
directly, leaving a shade user open to changes/differences in result content.
|
||||
|
||||
To combat that, shade 'normalizes' the return structure from OpenStack in many
|
||||
places, and the results of that normalization are listed below. Where shade
|
||||
performs normalization, a user can count on any fields declared in the docs
|
||||
as being completely safe to use - they are as much a part of shade's API
|
||||
contract as any other Python method.
|
||||
|
||||
Some OpenStack objects allow for arbitrary attributes at
|
||||
the root of the object. shade will pass those through so as not to break anyone
|
||||
who may be counting on them, but as they are arbitrary shade can make no
|
||||
guarantees as to their existence. As part of normalization, shade will put any
|
||||
attribute from an OpenStack resource that is not in its data model contract
|
||||
into an attribute called 'properties'. The contents of properties are
|
||||
defined to be an arbitrary collection of key value pairs with no promises as
|
||||
to any particular key ever existing.
|
||||
|
||||
If a user passes `strict=True` to the shade constructor, shade will not pass
|
||||
through arbitrary objects to the root of the resource, and will instead only
|
||||
put them in the properties dict. If a user is worried about accidentally
|
||||
writing code that depends on an attribute that is not part of the API contract,
|
||||
this can be a useful tool. Keep in mind all data can still be accessed via
|
||||
the properties dict, but any code touching anything in the properties dict
|
||||
should be aware that the keys found there are highly user/cloud specific.
|
||||
Any key that is transformed as part of the shade data model contract will
|
||||
not wind up with an entry in properties - only keys that are unknown.
|
||||
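As an illustration (the resource contents here are hypothetical), code
written against strict mode reaches into `properties` for anything outside
the documented contract:

```python
# A hypothetical normalized image as returned under strict=True: keys from
# the documented data model live at the root; arbitrary, cloud-specific
# attributes appear only inside 'properties'.
image = {
    'id': 'abc123',
    'name': 'my-image',
    'properties': {'os_distro': 'ubuntu'},
}

# Contract keys are safe to use directly; any other key must be looked up
# in the properties dict, with no guarantee that it exists on every cloud.
distro = image['properties'].get('os_distro')
```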
|
||||
Location
|
||||
--------
|
||||
|
||||
A Location defines where a resource lives. It includes a cloud name and a
|
||||
region name, an availability zone as well as information about the project
|
||||
that owns the resource.
|
||||
|
||||
The project information may contain a project id, or a combination of one or
|
||||
more of a project name with a domain name or id. If a project id is present,
|
||||
it should be considered correct.
|
||||
|
||||
Some resources do not carry ownership information with them. For those, the
|
||||
project information will be filled in from the project the user currently
|
||||
has a token for.
|
||||
|
||||
Some resources do not have information about availability zones, or may exist
|
||||
region wide. Those resources will have None as their availability zone.
|
||||
|
||||
If all of the project information is None, no ownership information was
available for the resource.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
Location = dict(
|
||||
cloud=str(),
|
||||
region=str(),
|
||||
zone=str() or None,
|
||||
project=dict(
|
||||
id=str() or None,
|
||||
name=str() or None,
|
||||
domain_id=str() or None,
|
||||
domain_name=str() or None))
|
||||
|
||||
|
||||
Flavor
|
||||
------
|
||||
|
||||
A flavor for a Nova Server.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
Flavor = dict(
|
||||
location=Location(),
|
||||
id=str(),
|
||||
name=str(),
|
||||
is_public=bool(),
|
||||
is_disabled=bool(),
|
||||
ram=int(),
|
||||
vcpus=int(),
|
||||
disk=int(),
|
||||
ephemeral=int(),
|
||||
swap=int(),
|
||||
rxtx_factor=float(),
|
||||
extra_specs=dict(),
|
||||
properties=dict())
|
||||
|
||||
|
||||
Flavor Access
|
||||
-------------
|
||||
|
||||
An access entry for a Nova Flavor.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
FlavorAccess = dict(
|
||||
flavor_id=str(),
|
||||
project_id=str())
|
||||
|
||||
|
||||
Image
|
||||
-----
|
||||
|
||||
A Glance Image.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
Image = dict(
|
||||
location=Location(),
|
||||
id=str(),
|
||||
name=str(),
|
||||
min_ram=int(),
|
||||
min_disk=int(),
|
||||
size=int(),
|
||||
virtual_size=int(),
|
||||
container_format=str(),
|
||||
disk_format=str(),
|
||||
checksum=str(),
|
||||
created_at=str(),
|
||||
updated_at=str(),
|
||||
owner=str(),
|
||||
is_public=bool(),
|
||||
is_protected=bool(),
|
||||
visibility=str(),
|
||||
status=str(),
|
||||
locations=list(),
|
||||
direct_url=str() or None,
|
||||
tags=list(),
|
||||
properties=dict())
|
||||
|
||||
|
||||
Keypair
|
||||
-------
|
||||
|
||||
A keypair for a Nova Server.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
Keypair = dict(
|
||||
location=Location(),
|
||||
name=str(),
|
||||
id=str(),
|
||||
public_key=str(),
|
||||
fingerprint=str(),
|
||||
type=str(),
|
||||
user_id=str(),
|
||||
private_key=str() or None,
|
||||
properties=dict())
|
||||
|
||||
|
||||
Security Group
|
||||
--------------
|
||||
|
||||
A Security Group from either Nova or Neutron
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
SecurityGroup = dict(
|
||||
location=Location(),
|
||||
id=str(),
|
||||
name=str(),
|
||||
description=str(),
|
||||
security_group_rules=list(),
|
||||
properties=dict())
|
||||
|
||||
Security Group Rule
|
||||
-------------------
|
||||
|
||||
A Security Group Rule from either Nova or Neutron
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
SecurityGroupRule = dict(
|
||||
location=Location(),
|
||||
id=str(),
|
||||
direction=str(), # oneof('ingress', 'egress')
|
||||
ethertype=str(),
|
||||
port_range_min=int() or None,
|
||||
port_range_max=int() or None,
|
||||
protocol=str() or None,
|
||||
remote_ip_prefix=str() or None,
|
||||
security_group_id=str() or None,
|
||||
remote_group_id=str() or None,
|
||||
properties=dict())
|
||||
|
||||
Server
|
||||
------
|
||||
|
||||
A Server from Nova
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
Server = dict(
|
||||
location=Location(),
|
||||
id=str(),
|
||||
name=str(),
|
||||
image=dict() or str(),
|
||||
flavor=dict(),
|
||||
volumes=list(), # Volume
|
||||
interface_ip=str(),
|
||||
has_config_drive=bool(),
|
||||
accessIPv4=str(),
|
||||
accessIPv6=str(),
|
||||
addresses=dict(), # string, list(Address)
|
||||
created=str(),
|
||||
key_name=str(),
|
||||
metadata=dict(), # string, string
|
||||
private_v4=str(),
|
||||
progress=int(),
|
||||
public_v4=str(),
|
||||
public_v6=str(),
|
||||
security_groups=list(), # SecurityGroup
|
||||
status=str(),
|
||||
updated=str(),
|
||||
user_id=str(),
|
||||
host_id=str() or None,
|
||||
power_state=str() or None,
|
||||
task_state=str() or None,
|
||||
vm_state=str() or None,
|
||||
launched_at=str() or None,
|
||||
terminated_at=str() or None,
|
||||
properties=dict())
|
||||
|
||||
ComputeLimits
|
||||
-------------
|
||||
|
||||
Limits and current usage for a project in Nova
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
ComputeLimits = dict(
|
||||
location=Location(),
|
||||
max_personality=int(),
|
||||
max_personality_size=int(),
|
||||
max_server_group_members=int(),
|
||||
max_server_groups=int(),
|
||||
max_server_meta=int(),
|
||||
max_total_cores=int(),
|
||||
max_total_instances=int(),
|
||||
max_total_keypairs=int(),
|
||||
max_total_ram_size=int(),
|
||||
total_cores_used=int(),
|
||||
total_instances_used=int(),
|
||||
total_ram_used=int(),
|
||||
total_server_groups_used=int(),
|
||||
properties=dict())
|
||||
|
||||
ComputeUsage
|
||||
------------
|
||||
|
||||
Current usage for a project in Nova
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
ComputeUsage = dict(
|
||||
location=Location(),
|
||||
started_at=str(),
|
||||
stopped_at=str(),
|
||||
server_usages=list(),
|
||||
max_personality=int(),
|
||||
max_personality_size=int(),
|
||||
max_server_group_members=int(),
|
||||
max_server_groups=int(),
|
||||
max_server_meta=int(),
|
||||
max_total_cores=int(),
|
||||
max_total_instances=int(),
|
||||
max_total_keypairs=int(),
|
||||
max_total_ram_size=int(),
|
||||
total_cores_used=int(),
|
||||
total_hours=int(),
|
||||
total_instances_used=int(),
|
||||
total_local_gb_usage=int(),
|
||||
total_memory_mb_usage=int(),
|
||||
total_ram_used=int(),
|
||||
total_server_groups_used=int(),
|
||||
total_vcpus_usage=int(),
|
||||
properties=dict())
|
||||
|
||||
ServerUsage
|
||||
-----------
|
||||
|
||||
Current usage for a server in Nova
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
ComputeUsage = dict(
|
||||
started_at=str(),
|
||||
ended_at=str(),
|
||||
flavor=str(),
|
||||
hours=int(),
|
||||
instance_id=str(),
|
||||
local_gb=int(),
|
||||
memory_mb=int(),
|
||||
name=str(),
|
||||
state=str(),
|
||||
uptime=int(),
|
||||
vcpus=int(),
|
||||
properties=dict())
|
||||
|
||||
Floating IP
|
||||
-----------
|
||||
|
||||
A Floating IP from Neutron or Nova
|
||||
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
FloatingIP = dict(
|
||||
location=Location(),
|
||||
id=str(),
|
||||
description=str(),
|
||||
attached=bool(),
|
||||
fixed_ip_address=str() or None,
|
||||
floating_ip_address=str() or None,
|
||||
network=str() or None,
|
||||
port=str() or None,
|
||||
router=str(),
|
||||
status=str(),
|
||||
created_at=str() or None,
|
||||
updated_at=str() or None,
|
||||
revision_number=int() or None,
|
||||
properties=dict())
|
||||
|
||||
Project
|
||||
-------
|
||||
|
||||
A Project from Keystone (or a tenant if Keystone v2)
|
||||
|
||||
Location information for Project has some specific semantics.
|
||||
|
||||
If the project has a parent project, that will be in location.project.id,
|
||||
and if it doesn't that should be None. If the Project is associated with
|
||||
a domain that will be in location.project.domain_id regardless of the current
|
||||
user's token scope. location.project.name and location.project.domain_name
|
||||
will always be None. Finally, location.region_name will always be None as
|
||||
Projects are global to a cloud. If a deployer happens to deploy OpenStack
|
||||
in such a way that users and projects are not shared amongst regions, that
|
||||
necessitates treating each of those regions as separate clouds from shade's
|
||||
POV.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
Project = dict(
|
||||
location=Location(),
|
||||
id=str(),
|
||||
name=str(),
|
||||
description=str(),
|
||||
is_enabled=bool(),
|
||||
is_domain=bool(),
|
||||
properties=dict())
|
||||
|
||||
Volume
|
||||
------
|
||||
|
||||
A volume from cinder.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
Volume = dict(
|
||||
location=Location(),
|
||||
id=str(),
|
||||
name=str(),
|
||||
description=str(),
|
||||
size=int(),
|
||||
attachments=list(),
|
||||
status=str(),
|
||||
migration_status=str() or None,
|
||||
host=str() or None,
|
||||
replication_driver=str() or None,
|
||||
replication_status=str() or None,
|
||||
replication_extended_status=str() or None,
|
||||
snapshot_id=str() or None,
|
||||
created_at=str(),
|
||||
updated_at=str() or None,
|
||||
source_volume_id=str() or None,
|
||||
consistencygroup_id=str() or None,
|
||||
volume_type=str() or None,
|
||||
metadata=dict(),
|
||||
is_bootable=bool(),
|
||||
is_encrypted=bool(),
|
||||
can_multiattach=bool(),
|
||||
properties=dict())
|
||||
|
||||
|
||||
VolumeType
|
||||
----------
|
||||
|
||||
A volume type from cinder.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
VolumeType = dict(
|
||||
location=Location(),
|
||||
id=str(),
|
||||
name=str(),
|
||||
description=str() or None,
|
||||
is_public=bool(),
|
||||
qos_specs_id=str() or None,
|
||||
extra_specs=dict(),
|
||||
properties=dict())
|
||||
|
||||
|
||||
VolumeTypeAccess
|
||||
----------------
|
||||
|
||||
A volume type access from cinder.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
VolumeTypeAccess = dict(
|
||||
location=Location(),
|
||||
volume_type_id=str(),
|
||||
project_id=str(),
|
||||
properties=dict())
|
||||
|
||||
|
||||
ClusterTemplate
|
||||
---------------
|
||||
|
||||
A Cluster Template from magnum.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
ClusterTemplate = dict(
|
||||
location=Location(),
|
||||
apiserver_port=int(),
|
||||
cluster_distro=str(),
|
||||
coe=str(),
|
||||
created_at=str(),
|
||||
dns_nameserver=str(),
|
||||
docker_volume_size=int(),
|
||||
external_network_id=str(),
|
||||
fixed_network=str() or None,
|
||||
flavor_id=str(),
|
||||
http_proxy=str() or None,
|
||||
https_proxy=str() or None,
|
||||
id=str(),
|
||||
image_id=str(),
|
||||
insecure_registry=str(),
|
||||
is_public=bool(),
|
||||
is_registry_enabled=bool(),
|
||||
is_tls_disabled=bool(),
|
||||
keypair_id=str(),
|
||||
labels=dict(),
|
||||
master_flavor_id=str() or None,
|
||||
name=str(),
|
||||
network_driver=str(),
|
||||
no_proxy=str() or None,
|
||||
server_type=str(),
|
||||
updated_at=str() or None,
|
||||
volume_driver=str(),
|
||||
properties=dict())
|
||||
|
||||
MagnumService
|
||||
-------------
|
||||
|
||||
A Magnum Service from magnum
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
MagnumService = dict(
|
||||
location=Location(),
|
||||
binary=str(),
|
||||
created_at=str(),
|
||||
disabled_reason=str() or None,
|
||||
host=str(),
|
||||
id=str(),
|
||||
report_count=int(),
|
||||
state=str(),
|
||||
properties=dict())
|
||||
|
||||
Stack
|
||||
-----
|
||||
|
||||
A Stack from Heat
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
Stack = dict(
|
||||
location=Location(),
|
||||
id=str(),
|
||||
name=str(),
|
||||
created_at=str(),
|
||||
deleted_at=str(),
|
||||
updated_at=str(),
|
||||
description=str(),
|
||||
action=str(),
|
||||
identifier=str(),
|
||||
is_rollback_enabled=bool(),
|
||||
notification_topics=list(),
|
||||
outputs=list(),
|
||||
owner=str(),
|
||||
parameters=dict(),
|
||||
parent=str(),
|
||||
stack_user_project_id=str(),
|
||||
status=str(),
|
||||
status_reason=str(),
|
||||
tags=dict(),
|
||||
template_description=str(),
|
||||
timeout_mins=int(),
|
||||
properties=dict())
|
811
doc/source/user/multi-cloud-demo.rst
Normal file
@ -0,0 +1,811 @@
|
||||
================
|
||||
Multi-Cloud Demo
|
||||
================
|
||||
|
||||
This document contains a presentation in `presentty`_ format. If you want to
|
||||
walk through it like a presentation, install `presentty` and run:
|
||||
|
||||
.. code:: bash
|
||||
|
||||
presentty doc/source/user/multi-cloud-demo.rst
|
||||
|
||||
The content is hopefully helpful even if it's not being narrated, so it's being
|
||||
included in the `shade` docs.
|
||||
|
||||
.. _presentty: https://pypi.python.org/pypi/presentty
|
||||
|
||||
Using Multiple OpenStack Clouds Easily with Shade
|
||||
=================================================
|
||||
|
||||
Who am I?
|
||||
=========
|
||||
|
||||
Monty Taylor
|
||||
|
||||
* OpenStack Infra Core
|
||||
* irc: mordred
|
||||
* twitter: @e_monty
|
||||
|
||||
What are we going to talk about?
|
||||
================================
|
||||
|
||||
`shade`
|
||||
|
||||
* a task and end-user oriented Python library
|
||||
* abstracts deployment differences
|
||||
* designed for multi-cloud
|
||||
* simple to use
|
||||
* massive scale
|
||||
|
||||
* optional advanced features to handle 20k servers a day
|
||||
|
||||
* Initial logic/design extracted from nodepool
|
||||
* Librified to re-use in Ansible
|
||||
|
||||
shade is Free Software
|
||||
======================
|
||||
|
||||
* https://git.openstack.org/cgit/openstack-infra/shade
|
||||
* openstack-dev@lists.openstack.org
|
||||
* #openstack-shade on freenode
|
||||
|
||||
This talk is Free Software, too
|
||||
===============================
|
||||
|
||||
* Written for presentty (https://pypi.python.org/pypi/presentty)
|
||||
* doc/source/user/multi-cloud-demo.rst
|
||||
* examples in doc/source/examples
|
||||
* Paths subject to change- this is the first presentation in tree!
|
||||
|
||||
Complete Example
|
||||
================
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
|
||||
# Initialize and turn on debug logging
|
||||
openstack.cloud.simple_logging(debug=True)
|
||||
|
||||
for cloud_name, region_name in [
|
||||
('my-vexxhost', 'ca-ymq-1'),
|
||||
('my-citycloud', 'Buf1'),
|
||||
('my-internap', 'ams01')]:
|
||||
# Initialize cloud
|
||||
cloud = openstack.openstack_cloud(cloud=cloud_name, region_name=region_name)
|
||||
|
||||
# Upload an image to the cloud
|
||||
image = cloud.create_image(
|
||||
'devuan-jessie', filename='devuan-jessie.qcow2', wait=True)
|
||||
|
||||
# Find a flavor with at least 512M of RAM
|
||||
flavor = cloud.get_flavor_by_ram(512)
|
||||
|
||||
# Boot a server, wait for it to boot, and then do whatever is needed
|
||||
# to get a public ip for it.
|
||||
cloud.create_server(
|
||||
'my-server', image=image, flavor=flavor, wait=True, auto_ip=True)
|
||||
|
||||
Let's Take a Few Steps Back
|
||||
===========================
|
||||
|
||||
Multi-cloud is easy, but you need to know a few things.
|
||||
|
||||
* Terminology
|
||||
* Config
|
||||
* Shade API
|
||||
|
||||
Cloud Terminology
|
||||
=================
|
||||
|
||||
Let's define a few terms, so that we can use them with ease:
|
||||
|
||||
* `cloud` - logically related collection of services
|
||||
* `region` - completely independent subset of a given cloud
|
||||
* `patron` - human who has an account
|
||||
* `user` - account on a cloud
|
||||
* `project` - logical collection of cloud resources
|
||||
* `domain` - collection of users and projects
|
||||
|
||||
Cloud Terminology Relationships
|
||||
===============================
|
||||
|
||||
* A `cloud` has one or more `regions`
|
||||
* A `patron` has one or more `users`
|
||||
* A `patron` has one or more `projects`
|
||||
* A `cloud` has one or more `domains`
|
||||
* In a `cloud` with one `domain` it is named "default"
|
||||
* Each `patron` may have their own `domain`
|
||||
* Each `user` is in one `domain`
|
||||
* Each `project` is in one `domain`
|
||||
* A `user` has one or more `roles` on one or more `projects`
|
||||
|
||||
HTTP Sessions
|
||||
=============
|
||||
|
||||
* HTTP interactions are authenticated via keystone
|
||||
* Authenticating returns a `token`
|
||||
* An authenticated HTTP Session is shared across a `region`
|
||||
|
||||
Cloud Regions
|
||||
=============
|
||||
|
||||
A `cloud region` is the basic unit of REST interaction.
|
||||
|
||||
* A `cloud` has a `service catalog`
|
||||
* The `service catalog` is returned in the `token`
|
||||
* The `service catalog` lists `endpoint` for each `service` in each `region`
|
||||
* A `region` is completely autonomous
|
||||
|
||||
Users, Projects and Domains
|
||||
===========================
|
||||
|
||||
In clouds with multiple domains, project and user names are
|
||||
only unique within a domain.
|
||||
|
||||
* Names require `domain` information for uniqueness. IDs do not.
|
||||
* Providing `domain` information when not needed is fine.
|
||||
* `project_name` requires `project_domain_name` or `project_domain_id`
|
||||
* `project_id` does not
|
||||
* `username` requires `user_domain_name` or `user_domain_id`
|
||||
* `user_id` does not
|
||||
|
||||
Confused Yet?
|
||||
=============
|
||||
|
||||
Don't worry - you don't have to deal with most of that.
|
||||
|
||||
Auth per cloud, select per region
|
||||
=================================
|
||||
|
||||
In general, the thing you need to know is:
|
||||
|
||||
* Configure authentication per `cloud`
|
||||
* Select config to use by `cloud` and `region`
|
||||
|
||||
clouds.yaml
|
||||
===========
|
||||
|
||||
Information about the clouds you want to connect to is stored in a file
|
||||
called `clouds.yaml`.
|
||||
|
||||
`clouds.yaml` can be in your homedir: `~/.config/openstack/clouds.yaml`
|
||||
or system-wide: `/etc/openstack/clouds.yaml`.
|
||||
|
||||
Information in your homedir, if it exists, takes precedence.
|
||||
|
||||
Full docs on `clouds.yaml` are at
|
||||
https://docs.openstack.org/developer/os-client-config/
|
||||
|
||||
What about Mac and Windows?
|
||||
===========================
|
||||
|
||||
`USER_CONFIG_DIR` is different on Linux, OSX and Windows.
|
||||
|
||||
* Linux: `~/.config/openstack`
|
||||
* OSX: `~/Library/Application Support/openstack`
|
||||
* Windows: `C:\\Users\\USERNAME\\AppData\\Local\\OpenStack\\openstack`
|
||||
|
||||
`SITE_CONFIG_DIR` is different on Linux, OSX and Windows.
|
||||
|
||||
* Linux: `/etc/openstack`
|
||||
* OSX: `/Library/Application Support/openstack`
|
||||
* Windows: `C:\\ProgramData\\OpenStack\\openstack`
|
||||
|
||||
Config Terminology
|
||||
==================
|
||||
|
||||
For multi-cloud, think of two types:
|
||||
|
||||
* `profile` - Facts about the `cloud` that are true for everyone
|
||||
* `cloud` - Information specific to a given `user`
|
||||
|
||||
Apologies for the use of `cloud` twice.
|
||||
|
||||
Environment Variables and Simple Usage
|
||||
======================================
|
||||
|
||||
* Environment variables starting with `OS_` go into a cloud called `envvars`
|
||||
* If you only have one cloud, you don't have to specify it
|
||||
* `OS_CLOUD` and `OS_REGION_NAME` are default values for
|
||||
`cloud` and `region_name`
|
||||
|
||||
TOO MUCH TALKING - NOT ENOUGH CODE
|
||||
==================================
|
||||
|
||||
basic clouds.yaml for the example code
|
||||
======================================
|
||||
|
||||
Simple example of a clouds.yaml
|
||||
|
||||
* Config for a named `cloud` "my-citycloud"
|
||||
* Reference a well-known "named" profile: `citycloud`
|
||||
* `os-client-config` has a built-in list of profiles at
|
||||
https://docs.openstack.org/developer/os-client-config/vendor-support.html
|
||||
* Vendor profiles contain various advanced config
|
||||
* `cloud` name can match `profile` name (using different names for clarity)
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
clouds:
|
||||
my-citycloud:
|
||||
profile: citycloud
|
||||
auth:
|
||||
username: mordred
|
||||
project_id: 65222a4d09ea4c68934fa1028c77f394
|
||||
user_domain_id: d0919bd5e8d74e49adf0e145807ffc38
|
||||
project_domain_id: d0919bd5e8d74e49adf0e145807ffc38
|
||||
|
||||
Where's the password?
|
||||
|
||||
secure.yaml
|
||||
===========
|
||||
|
||||
* Optional additional file just like `clouds.yaml`
|
||||
* Values overlaid on `clouds.yaml`
|
||||
* Useful if you want to protect secrets more stringently
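The overlay behaviour can be sketched as a recursive dictionary merge (an illustration of the described semantics, not the actual os-client-config code):

```python
def overlay(base, secure):
    """Merge values from secure.yaml over clouds.yaml, recursing into dicts."""
    merged = dict(base)
    for key, value in secure.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = overlay(merged[key], value)
        else:
            merged[key] = value
    return merged

clouds = {'my-citycloud': {'profile': 'citycloud',
                           'auth': {'username': 'mordred'}}}
secure = {'my-citycloud': {'auth': {'password': 'XXXXXXXX'}}}
merged = overlay(clouds, secure)
# merged now has username, password and profile for my-citycloud
```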
|
||||
|
||||
Example secure.yaml
|
||||
===================
|
||||
|
||||
* No, my password isn't XXXXXXXX
|
||||
* `cloud` name should match `clouds.yaml`
|
||||
* Optional - I actually keep mine in my `clouds.yaml`
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
clouds:
|
||||
my-citycloud:
|
||||
auth:
|
||||
password: XXXXXXXX
|
||||
|
||||
more clouds.yaml
|
||||
================
|
||||
|
||||
More information can be provided.
|
||||
|
||||
* Use v3 of the `identity` API - even if others are present
|
||||
* Use `https://image-ca-ymq-1.vexxhost.net/v2` for `image` API
|
||||
instead of what's in the catalog
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
my-vexxhost:
|
||||
identity_api_version: 3
|
||||
image_endpoint_override: https://image-ca-ymq-1.vexxhost.net/v2
|
||||
profile: vexxhost
|
||||
auth:
|
||||
user_domain_id: default
|
||||
project_domain_id: default
|
||||
project_name: d8af8a8f-a573-48e6-898a-af333b970a2d
|
||||
username: 0b8c435b-cc4d-4e05-8a47-a2ada0539af1
|
||||
|
||||
Much more complex clouds.yaml example
|
||||
=====================================
|
||||
|
||||
* Not using a profile - all settings included
|
||||
* In the `ams01` `region` there are two networks with undiscoverable qualities
|
||||
* Each one is labeled here so choices can be made
|
||||
* Any of the settings can be specific to a `region` if needed
|
||||
* `region` settings override `cloud` settings
|
||||
* `cloud` does not support `floating-ips`
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
my-internap:
|
||||
auth:
|
||||
auth_url: https://identity.api.cloud.iweb.com
|
||||
username: api-55f9a00fb2619
|
||||
project_name: inap-17037
|
||||
identity_api_version: 3
|
||||
floating_ip_source: None
|
||||
regions:
|
||||
- name: ams01
|
||||
values:
|
||||
networks:
|
||||
- name: inap-17037-WAN1654
|
||||
routes_externally: true
|
||||
default_interface: true
|
||||
- name: inap-17037-LAN3631
|
||||
routes_externally: false
|
||||
|
||||
Complete Example Again
|
||||
======================
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
|
||||
# Initialize and turn on debug logging
|
||||
openstack.cloud.simple_logging(debug=True)
|
||||
|
||||
for cloud_name, region_name in [
|
||||
('my-vexxhost', 'ca-ymq-1'),
|
||||
('my-citycloud', 'Buf1'),
|
||||
('my-internap', 'ams01')]:
|
||||
# Initialize cloud
|
||||
cloud = openstack.openstack_cloud(cloud=cloud_name, region_name=region_name)
|
||||
|
||||
# Upload an image to the cloud
|
||||
image = cloud.create_image(
|
||||
'devuan-jessie', filename='devuan-jessie.qcow2', wait=True)
|
||||
|
||||
# Find a flavor with at least 512M of RAM
|
||||
flavor = cloud.get_flavor_by_ram(512)
|
||||
|
||||
# Boot a server, wait for it to boot, and then do whatever is needed
|
||||
# to get a public ip for it.
|
||||
cloud.create_server(
|
||||
'my-server', image=image, flavor=flavor, wait=True, auto_ip=True)
|
||||
|
||||
Step By Step
|
||||
============
|
||||
|
||||
Import the library
|
||||
==================
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
|
||||
Logging
|
||||
=======
|
||||
|
||||
* `shade` uses standard python logging
|
||||
* Special `openstack.cloud.request_ids` logger for API request IDs
|
||||
* `simple_logging` does easy defaults
|
||||
* Squelches some meaningless warnings
|
||||
|
||||
* `debug`
|
||||
|
||||
* Logs shade loggers at debug level
|
||||
* Includes `openstack.cloud.request_ids` debug logging
|
||||
|
||||
* `http_debug` Implies `debug`, turns on HTTP tracing
|
||||
|
||||
.. code:: python
|
||||
|
||||
# Initialize and turn on debug logging
|
||||
openstack.cloud.simple_logging(debug=True)
|
||||
|
||||
Example with Debug Logging
|
||||
==========================
|
||||
|
||||
* doc/source/examples/debug-logging.py
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
openstack.cloud.simple_logging(debug=True)
|
||||
|
||||
cloud = openstack.openstack_cloud(
|
||||
cloud='my-vexxhost', region_name='ca-ymq-1')
|
||||
cloud.get_image('Ubuntu 16.04.1 LTS [2017-03-03]')
|
||||
|
||||
Example with HTTP Debug Logging
|
||||
===============================
|
||||
|
||||
* doc/source/examples/http-debug-logging.py
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
openstack.cloud.simple_logging(http_debug=True)
|
||||
|
||||
cloud = openstack.openstack_cloud(
|
||||
cloud='my-vexxhost', region_name='ca-ymq-1')
|
||||
cloud.get_image('Ubuntu 16.04.1 LTS [2017-03-03]')
|
||||
|
||||
Cloud Regions
|
||||
=============
|
||||
|
||||
* `cloud` constructor needs `cloud` and `region_name`
|
||||
* `openstack.openstack_cloud` is a helper factory function
|
||||
|
||||
.. code:: python
|
||||
|
||||
for cloud_name, region_name in [
|
||||
('my-vexxhost', 'ca-ymq-1'),
|
||||
('my-citycloud', 'Buf1'),
|
||||
('my-internap', 'ams01')]:
|
||||
# Initialize cloud
|
||||
cloud = openstack.openstack_cloud(cloud=cloud_name, region_name=region_name)
|
||||
|
||||
Upload an Image
|
||||
===============
|
||||
|
||||
* Picks the correct upload mechanism
|
||||
* **SUGGESTION** Always upload your own base images
|
||||
|
||||
.. code:: python
|
||||
|
||||
# Upload an image to the cloud
|
||||
image = cloud.create_image(
|
||||
'devuan-jessie', filename='devuan-jessie.qcow2', wait=True)
|
||||
|
||||
Always Upload an Image
|
||||
======================
|
||||
|
||||
Ok. You don't have to. But, for multi-cloud...
|
||||
|
||||
* Images with the same content are named differently on different clouds
* Images with the same name on different clouds can have different content
* Upload your own to all clouds and both problems go away
|
||||
* Download from OS vendor or build with `diskimage-builder`
|
||||
|
||||
Find a flavor
|
||||
=============
|
||||
|
||||
* Flavors are named differently on different clouds
* Flavors can be found via RAM
* `get_flavor_by_ram` finds the smallest matching flavor
|
||||
|
||||
.. code:: python
|
||||
|
||||
# Find a flavor with at least 512M of RAM
|
||||
flavor = cloud.get_flavor_by_ram(512)
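The lookup logic can be sketched as "filter by minimum RAM, then take the smallest" (a sketch of the behaviour described above, with made-up flavor data rather than a live cloud):

```python
def get_flavor_by_ram(flavors, ram):
    """Return the smallest flavor with at least `ram` MB of RAM (sketch)."""
    candidates = [f for f in flavors if f['ram'] >= ram]
    if not candidates:
        raise ValueError('No flavor with at least %d MB of RAM' % ram)
    return min(candidates, key=lambda f: f['ram'])

flavors = [{'name': 'tiny', 'ram': 256},
           {'name': 'small', 'ram': 1024},
           {'name': 'medium', 'ram': 2048}]
print(get_flavor_by_ram(flavors, 512)['name'])  # smallest flavor with >= 512M
```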
|
||||
|
||||
Create a server
|
||||
===============
|
||||
|
||||
* my-vexxhost
|
||||
|
||||
* Boot server
|
||||
* Wait for `status==ACTIVE`
|
||||
|
||||
* my-internap
|
||||
|
||||
* Boot server on network `inap-17037-WAN1654`
|
||||
* Wait for `status==ACTIVE`
|
||||
|
||||
* my-citycloud
|
||||
|
||||
* Boot server
|
||||
* Wait for `status==ACTIVE`
|
||||
* Find the `port` for the `fixed_ip` for `server`
|
||||
* Create `floating-ip` on that `port`
|
||||
* Wait for `floating-ip` to attach
|
||||
|
||||
.. code:: python
|
||||
|
||||
# Boot a server, wait for it to boot, and then do whatever is needed
|
||||
# to get a public ip for it.
|
||||
cloud.create_server(
|
||||
'my-server', image=image, flavor=flavor, wait=True, auto_ip=True)
|
||||
|
||||
Wow. We didn't even deploy Wordpress!
|
||||
=====================================
|
||||
|
||||
Image and Flavor by Name or ID
|
||||
==============================
|
||||
|
||||
* Pass string to image/flavor
|
||||
* Image/Flavor will be found by name or ID
|
||||
* Common pattern
|
||||
* doc/source/examples/create-server-name-or-id.py
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
|
||||
# Initialize and turn on debug logging
|
||||
openstack.cloud.simple_logging(debug=True)
|
||||
|
||||
for cloud_name, region_name, image, flavor in [
|
||||
('my-vexxhost', 'ca-ymq-1',
|
||||
'Ubuntu 16.04.1 LTS [2017-03-03]', 'v1-standard-4'),
|
||||
('my-citycloud', 'Buf1',
|
||||
'Ubuntu 16.04 Xenial Xerus', '4C-4GB-100GB'),
|
||||
('my-internap', 'ams01',
|
||||
'Ubuntu 16.04 LTS (Xenial Xerus)', 'A1.4')]:
|
||||
# Initialize cloud
|
||||
cloud = openstack.openstack_cloud(cloud=cloud_name, region_name=region_name)
|
||||
|
||||
# Boot a server, wait for it to boot, and then do whatever is needed
|
||||
# to get a public ip for it.
|
||||
server = cloud.create_server(
|
||||
'my-server', image=image, flavor=flavor, wait=True, auto_ip=True)
|
||||
print(server.name)
|
||||
print(server['name'])
|
||||
cloud.pprint(server)
|
||||
# Delete it - this is a demo
|
||||
cloud.delete_server(server, wait=True, delete_ips=True)
|
||||
|
||||
cloud.pprint method was just added this morning
|
||||
===============================================
|
||||
|
||||
Delete Servers
|
||||
==============
|
||||
|
||||
* `delete_ips` Delete any `floating_ips` the server may have
|
||||
|
||||
.. code:: python
|
||||
|
||||
cloud.delete_server('my-server', wait=True, delete_ips=True)
|
||||
|
||||
Image and Flavor by Dict
|
||||
========================
|
||||
|
||||
* Pass dict to image/flavor
|
||||
* Use when you know whether the value is a Name or an ID
|
||||
* Common pattern
|
||||
* doc/source/examples/create-server-dict.py
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
|
||||
# Initialize and turn on debug logging
|
||||
openstack.cloud.simple_logging(debug=True)
|
||||
|
||||
for cloud_name, region_name, image, flavor_id in [
|
||||
('my-vexxhost', 'ca-ymq-1', 'Ubuntu 16.04.1 LTS [2017-03-03]',
|
||||
'5cf64088-893b-46b5-9bb1-ee020277635d'),
|
||||
('my-citycloud', 'Buf1', 'Ubuntu 16.04 Xenial Xerus',
|
||||
'0dab10b5-42a2-438e-be7b-505741a7ffcc'),
|
||||
('my-internap', 'ams01', 'Ubuntu 16.04 LTS (Xenial Xerus)',
|
||||
'A1.4')]:
|
||||
# Initialize cloud
|
||||
cloud = openstack.openstack_cloud(cloud=cloud_name, region_name=region_name)
|
||||
|
||||
# Boot a server, wait for it to boot, and then do whatever is needed
|
||||
# to get a public ip for it.
|
||||
server = cloud.create_server(
|
||||
'my-server', image=image, flavor=dict(id=flavor_id),
|
||||
wait=True, auto_ip=True)
|
||||
# Delete it - this is a demo
|
||||
cloud.delete_server(server, wait=True, delete_ips=True)
|
||||
|
||||
Munch Objects
|
||||
=============
|
||||
|
||||
* Behave like a dict and an object
|
||||
* doc/source/examples/munch-dict-object.py
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
openstack.cloud.simple_logging(debug=True)
|
||||
|
||||
cloud = openstack.openstack_cloud(cloud='zetta', region_name='no-osl1')
|
||||
image = cloud.get_image('Ubuntu 14.04 (AMD64) [Local Storage]')
|
||||
print(image.name)
|
||||
print(image['name'])
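The dual access style works because `munch.Munch` is essentially a dict subclass with attribute access. A minimal sketch of the idea (not the real munch source):

```python
class Munch(dict):
    """Minimal sketch: a dict whose keys are also readable as attributes."""

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self[name] = value

image = Munch(name='Ubuntu 14.04 (AMD64) [Local Storage]')
assert image.name == image['name']
```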
|
||||
|
||||
API Organized by Logical Resource
|
||||
=================================
|
||||
|
||||
* list_servers
|
||||
* search_servers
|
||||
* get_server
|
||||
* create_server
|
||||
* delete_server
|
||||
* update_server
|
||||
|
||||
For other things, it's still {verb}_{noun}
|
||||
|
||||
* attach_volume
|
||||
* wait_for_server
|
||||
* add_auto_ip
|
||||
|
||||
Cleanup Script
|
||||
==============
|
||||
|
||||
* Sometimes my examples had bugs
|
||||
* doc/source/examples/cleanup-servers.py
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
|
||||
# Initialize and turn on debug logging
|
||||
openstack.cloud.simple_logging(debug=True)
|
||||
|
||||
for cloud_name, region_name in [
|
||||
('my-vexxhost', 'ca-ymq-1'),
|
||||
('my-citycloud', 'Buf1'),
|
||||
('my-internap', 'ams01')]:
|
||||
# Initialize cloud
|
||||
cloud = openstack.openstack_cloud(cloud=cloud_name, region_name=region_name)
|
||||
for server in cloud.search_servers('my-server'):
|
||||
cloud.delete_server(server, wait=True, delete_ips=True)
|
||||
|
||||
Normalization
|
||||
=============
|
||||
|
||||
* https://docs.openstack.org/developer/shade/model.html#image
|
||||
* doc/source/examples/normalization.py
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
openstack.cloud.simple_logging()
|
||||
|
||||
cloud = openstack.openstack_cloud(cloud='fuga', region_name='cystack')
|
||||
image = cloud.get_image(
|
||||
'Ubuntu 16.04 LTS - Xenial Xerus - 64-bit - Fuga Cloud Based Image')
|
||||
cloud.pprint(image)
|
||||
|
||||
Strict Normalized Results
|
||||
=========================
|
||||
|
||||
* Return only the declared model
|
||||
* doc/source/examples/strict-mode.py
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
openstack.cloud.simple_logging()
|
||||
|
||||
cloud = openstack.openstack_cloud(
|
||||
cloud='fuga', region_name='cystack', strict=True)
|
||||
image = cloud.get_image(
|
||||
'Ubuntu 16.04 LTS - Xenial Xerus - 64-bit - Fuga Cloud Based Image')
|
||||
cloud.pprint(image)
|
||||
|
||||
How Did I Find the Image Name for the Last Example?
|
||||
===================================================
|
||||
|
||||
* I often make stupid little utility scripts
|
||||
* doc/source/examples/find-an-image.py
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
openstack.cloud.simple_logging()
|
||||
|
||||
cloud = openstack.openstack_cloud(cloud='fuga', region_name='cystack')
|
||||
cloud.pprint([
|
||||
image for image in cloud.list_images()
|
||||
if 'ubuntu' in image.name.lower()])
|
||||
|
||||
Added / Modified Information
|
||||
============================
|
||||
|
||||
* Servers need extra help
|
||||
* Fetch addresses dict from neutron
|
||||
* Figure out which IPs are good
|
||||
* `detailed` - defaults to True, add everything
|
||||
* `bare` - no extra calls - don't even fix broken things
|
||||
* `bare` is still normalized
|
||||
* doc/source/examples/server-information.py
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
openstack.cloud.simple_logging(debug=True)
|
||||
|
||||
cloud = openstack.openstack_cloud(cloud='my-citycloud', region_name='Buf1')
|
||||
try:
|
||||
server = cloud.create_server(
|
||||
'my-server', image='Ubuntu 16.04 Xenial Xerus',
|
||||
flavor=dict(id='0dab10b5-42a2-438e-be7b-505741a7ffcc'),
|
||||
wait=True, auto_ip=True)
|
||||
|
||||
print("\n\nFull Server\n\n")
|
||||
cloud.pprint(server)
|
||||
|
||||
print("\n\nTurn Detailed Off\n\n")
|
||||
cloud.pprint(cloud.get_server('my-server', detailed=False))
|
||||
|
||||
print("\n\nBare Server\n\n")
|
||||
cloud.pprint(cloud.get_server('my-server', bare=True))
|
||||
|
||||
finally:
|
||||
# Delete it - this is a demo
|
||||
cloud.delete_server(server, wait=True, delete_ips=True)
|
||||
|
||||
Exceptions
|
||||
==========
|
||||
|
||||
* All shade exceptions are subclasses of `OpenStackCloudException`
|
||||
* Direct REST calls throw `OpenStackCloudHTTPError`
|
||||
* `OpenStackCloudHTTPError` subclasses `OpenStackCloudException`
|
||||
and `requests.exceptions.HTTPError`
|
||||
* `OpenStackCloudURINotFound` for 404
|
||||
* `OpenStackCloudBadRequest` for 400
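The hierarchy described above can be sketched as plain Python classes (names from the slides; the 404/400 mapping and the multiple inheritance follow the description, with a local stand-in used instead of `requests.exceptions.HTTPError`):

```python
class HTTPError(Exception):
    """Stand-in for requests.exceptions.HTTPError."""

class OpenStackCloudException(Exception):
    """Base class for all shade exceptions."""

class OpenStackCloudHTTPError(OpenStackCloudException, HTTPError):
    """Raised by direct REST calls."""

class OpenStackCloudURINotFound(OpenStackCloudHTTPError):
    """Raised for HTTP 404 responses."""

class OpenStackCloudBadRequest(OpenStackCloudHTTPError):
    """Raised for HTTP 400 responses."""

# Catching the base class catches everything:
try:
    raise OpenStackCloudURINotFound('404 on /images/nope')
except OpenStackCloudException as e:
    print(type(e).__name__)
```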
|
||||
|
||||
User Agent Info
|
||||
===============
|
||||
|
||||
* Set `app_name` and `app_version` for User Agents
|
||||
* (sssh ... `region_name` is optional if the cloud has one region)
|
||||
* doc/source/examples/user-agent.py
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
openstack.cloud.simple_logging(http_debug=True)
|
||||
|
||||
cloud = openstack.openstack_cloud(
|
||||
cloud='datacentred', app_name='AmazingApp', app_version='1.0')
|
||||
cloud.list_networks()
|
||||
|
||||
Uploading Large Objects
|
||||
=======================
|
||||
|
||||
* Swift has a maximum object size
|
||||
* Large Objects are uploaded specially
|
||||
* shade figures this out and does it
|
||||
* multi-threaded
|
||||
* doc/source/examples/upload-object.py
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
openstack.cloud.simple_logging(debug=True)
|
||||
|
||||
cloud = openstack.openstack_cloud(cloud='ovh', region_name='SBG1')
|
||||
cloud.create_object(
|
||||
container='my-container', name='my-object',
|
||||
filename='/home/mordred/briarcliff.sh3d')
|
||||
cloud.delete_object('my-container', 'my-object')
|
||||
cloud.delete_container('my-container')
|
||||
|
||||
Uploading Large Objects
|
||||
=======================
|
||||
|
||||
* Default max_file_size is 5G
|
||||
* This is a conference demo
|
||||
* Let's force a segment_size
|
||||
* One MILLION bytes
|
||||
* doc/source/examples/upload-object.py
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
openstack.cloud.simple_logging(debug=True)
|
||||
|
||||
cloud = openstack.openstack_cloud(cloud='ovh', region_name='SBG1')
|
||||
cloud.create_object(
|
||||
container='my-container', name='my-object',
|
||||
filename='/home/mordred/briarcliff.sh3d',
|
||||
segment_size=1000000)
|
||||
cloud.delete_object('my-container', 'my-object')
|
||||
cloud.delete_container('my-container')
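The segmenting shade does under the hood can be sketched as slicing the source into fixed-size chunks (illustration only; the real code streams from disk and uploads segments in parallel threads):

```python
def iter_segments(data, segment_size):
    """Yield successive segments of at most segment_size bytes."""
    for offset in range(0, len(data), segment_size):
        yield data[offset:offset + segment_size]

payload = b'x' * 2_500_000
segments = list(iter_segments(payload, 1_000_000))  # one MILLION bytes each
print([len(s) for s in segments])
```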
|
||||
|
||||
Service Conditionals
|
||||
====================
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
openstack.cloud.simple_logging(debug=True)
|
||||
|
||||
cloud = openstack.openstack_cloud(cloud='kiss', region_name='region1')
|
||||
print(cloud.has_service('network'))
|
||||
print(cloud.has_service('container-orchestration'))
|
||||
|
||||
Service Conditional Overrides
|
||||
=============================
|
||||
|
||||
* Sometimes clouds are weird and figuring that out won't work
|
||||
|
||||
.. code:: python
|
||||
|
||||
import openstack.cloud
|
||||
openstack.cloud.simple_logging(debug=True)
|
||||
|
||||
cloud = openstack.openstack_cloud(cloud='rax', region_name='DFW')
|
||||
print(cloud.has_service('network'))
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
clouds:
|
||||
rax:
|
||||
profile: rackspace
|
||||
auth:
|
||||
username: mordred
|
||||
project_id: 245018
|
||||
# This is already in profile: rackspace
|
||||
has_network: false
|
||||
|
||||
Coming Soon
|
||||
===========
|
||||
|
||||
* Completion of RESTification
|
||||
* Full version discovery support
|
||||
* Multi-cloud facade layer
|
||||
* Microversion support (talk tomorrow)
|
||||
* Completion of caching tier (talk tomorrow)
|
||||
* All of you helping hacking on shade!!! (we're friendly)
|
22
doc/source/user/usage.rst
Normal file
@ -0,0 +1,22 @@
|
||||
=====
|
||||
Usage
|
||||
=====
|
||||
|
||||
To use `openstack.cloud` in a project:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
import openstack.cloud
|
||||
|
||||
.. note::
|
||||
API methods that return a description of an OpenStack resource (e.g.,
|
||||
server instance, image, volume, etc.) do so using a `munch.Munch` object
|
||||
from the `Munch library <https://github.com/Infinidat/munch>`_. `Munch`
|
||||
objects can be accessed using either dictionary or object notation
|
||||
(e.g., ``server.id``, ``image.name`` and ``server['id']``, ``image['name']``)
|
||||
|
||||
.. autoclass:: openstack.OpenStackCloud
|
||||
:members:
|
||||
|
||||
.. autoclass:: openstack.OperatorCloud
|
||||
:members:
|
@ -18,7 +18,7 @@ Default Location
|
||||
To create a connection from a file you need a YAML file to contain the
|
||||
configuration.
|
||||
|
||||
.. literalinclude:: ../../contributors/clouds.yaml
|
||||
.. literalinclude:: ../../contributor/clouds.yaml
|
||||
:language: yaml
|
||||
|
||||
To use a configuration file called ``clouds.yaml`` in one of the default
|
||||
@ -33,7 +33,7 @@ function takes three optional arguments:
|
||||
|
||||
* **cloud_name** allows you to specify a cloud from your ``clouds.yaml`` file.
|
||||
* **cloud_config** allows you to pass in an existing
|
||||
``os_client_config.config.OpenStackConfig``` object.
|
||||
``openstack.config.loader.OpenStackConfig``` object.
|
||||
* **options** allows you to specify a namespace object with options to be
|
||||
added to the cloud config.
|
||||
|
||||
|
@ -44,8 +44,7 @@ efficient method may be to iterate over a stream of the response data.
|
||||
By choosing to stream the response content, you determine the ``chunk_size``
|
||||
that is appropriate for your needs, meaning only that many bytes of data are
|
||||
read for each iteration of the loop until all data has been consumed.
|
||||
See :meth:`requests.Response.iter_content` for more information, as well
|
||||
as Requests' :ref:`body-content-workflow`.
|
||||
See :meth:`requests.Response.iter_content` for more information.
|
||||
|
||||
When you choose to stream an image download, openstacksdk is no longer
|
||||
able to compute the checksum of the response data for you. This example
|
||||
|
@ -19,8 +19,7 @@ For a full guide see TODO(etoews):link to docs on developer.openstack.org
|
||||
import argparse
|
||||
import os
|
||||
|
||||
import os_client_config
|
||||
|
||||
from openstack import config as occ
|
||||
from openstack import connection
|
||||
from openstack import profile
|
||||
from openstack import utils
|
||||
@ -49,8 +48,8 @@ def _get_resource_value(resource_key, default):
|
||||
except KeyError:
|
||||
return default
|
||||
|
||||
occ = os_client_config.OpenStackConfig()
|
||||
cloud = occ.get_one_cloud(TEST_CLOUD)
|
||||
config = occ.OpenStackConfig()
|
||||
cloud = config.get_one_cloud(TEST_CLOUD)
|
||||
|
||||
SERVER_NAME = 'openstacksdk-example'
|
||||
IMAGE_NAME = _get_resource_value('image_name', 'cirros-0.3.5-x86_64-disk')
|
||||
@ -68,14 +67,14 @@ EXAMPLE_IMAGE_NAME = 'openstacksdk-example-public-image'
|
||||
|
||||
def create_connection_from_config():
|
||||
opts = Opts(cloud_name=TEST_CLOUD)
|
||||
occ = os_client_config.OpenStackConfig()
|
||||
cloud = occ.get_one_cloud(opts.cloud)
|
||||
config = occ.OpenStackConfig()
|
||||
cloud = config.get_one_cloud(opts.cloud)
|
||||
return connection.from_config(cloud_config=cloud, options=opts)
|
||||
|
||||
|
||||
def create_connection_from_args():
|
||||
parser = argparse.ArgumentParser()
|
||||
config = os_client_config.OpenStackConfig()
|
||||
config = occ.OpenStackConfig()
|
||||
config.register_argparse_arguments(parser, sys.argv[1:])
|
||||
args = parser.parse_args()
|
||||
return connection.from_config(options=args)
|
||||
|
14
extras/delete-network.sh
Normal file
@ -0,0 +1,14 @@
|
||||
neutron router-gateway-clear router1
|
||||
neutron router-interface-delete router1
|
||||
for subnet in private-subnet ipv6-private-subnet ; do
|
||||
neutron router-interface-delete router1 $subnet
|
||||
subnet_id=$(neutron subnet-show $subnet -f value -c id)
|
||||
neutron port-list | grep $subnet_id | awk '{print $2}' | xargs -n1 neutron port-delete
|
||||
neutron subnet-delete $subnet
|
||||
done
|
||||
neutron router-delete router1
|
||||
neutron net-delete private
|
||||
|
||||
# Make the public network directly consumable
|
||||
neutron subnet-update public-subnet --enable-dhcp=True
|
||||
neutron net-update public --shared=True
|
32
extras/install-tips.sh
Normal file
@ -0,0 +1,32 @@
|
||||
#!/bin/bash
|
||||
# Copyright (c) 2017 Red Hat, Inc.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
# implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
for lib in \
|
||||
python-keystoneclient \
|
||||
python-ironicclient \
|
||||
os-client-config \
|
||||
keystoneauth
|
||||
do
|
||||
egg=$(echo $lib | tr '-' '_' | sed 's/python-//')
|
||||
if [ -d /opt/stack/new/$lib ] ; then
|
||||
tip_location="git+file:///opt/stack/new/$lib#egg=$egg"
|
||||
echo "$(which pip) install -U -e $tip_location"
|
||||
pip uninstall -y $lib
|
||||
pip install -U -e $tip_location
|
||||
else
|
||||
echo "$lib not found in /opt/stack/new/$lib"
|
||||
fi
|
||||
done
|
94
extras/run-ansible-tests.sh
Executable file
@ -0,0 +1,94 @@
|
||||
#!/bin/bash
|
||||
#############################################################################
|
||||
# run-ansible-tests.sh
|
||||
#
|
||||
# Script used to setup a tox environment for running Ansible. This is meant
|
||||
# to be called by tox (via tox.ini). To run the Ansible tests, use:
|
||||
#
|
||||
# tox -e ansible [TAG ...]
|
||||
# or
|
||||
# tox -e ansible -- -c cloudX [TAG ...]
|
||||
# or to use the development version of Ansible:
|
||||
# tox -e ansible -- -d -c cloudX [TAG ...]
|
||||
#
|
||||
# USAGE:
|
||||
# run-ansible-tests.sh -e ENVDIR [-d] [-c CLOUD] [TAG ...]
|
||||
#
|
||||
# PARAMETERS:
|
||||
# -d Use Ansible source repo development branch.
|
||||
# -e ENVDIR Directory of the tox environment to use for testing.
|
||||
# -c CLOUD Name of the cloud to use for testing.
|
||||
# Defaults to "devstack-admin".
|
||||
# [TAG ...] Optional list of space-separated tags to control which
|
||||
# modules are tested.
|
||||
#
|
||||
# EXAMPLES:
|
||||
# # Run all Ansible tests
|
||||
# run-ansible-tests.sh -e ansible
|
||||
#
|
||||
# # Run auth, keypair, and network tests against cloudX
|
||||
# run-ansible-tests.sh -e ansible -c cloudX auth keypair network
|
||||
#############################################################################
|
||||
|
||||
|
||||
CLOUD="devstack-admin"
|
||||
ENVDIR=
|
||||
USE_DEV=0
|
||||
|
||||
while getopts "c:de:" opt
|
||||
do
|
||||
case $opt in
|
||||
d) USE_DEV=1 ;;
|
||||
c) CLOUD=${OPTARG} ;;
|
||||
e) ENVDIR=${OPTARG} ;;
|
||||
?) echo "Invalid option: -${OPTARG}"
|
||||
exit 1;;
|
||||
esac
|
||||
done
|
||||
|
||||
if [ -z ${ENVDIR} ]
|
||||
then
|
||||
echo "Option -e is required"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
shift $((OPTIND-1))
|
||||
TAGS=$( echo "$*" | tr ' ' , )
|
||||
|
||||
# We need to source the current tox environment so that Ansible will
|
||||
# be setup for the correct python environment.
|
||||
source $ENVDIR/bin/activate
|
||||
|
||||
if [ ${USE_DEV} -eq 1 ]
|
||||
then
|
||||
if [ -d ${ENVDIR}/ansible ]
|
||||
then
|
||||
echo "Using existing Ansible source repo"
|
||||
else
|
||||
echo "Installing Ansible source repo at $ENVDIR"
|
||||
git clone --recursive https://github.com/ansible/ansible.git ${ENVDIR}/ansible
|
||||
fi
|
||||
source $ENVDIR/ansible/hacking/env-setup
|
||||
else
|
||||
echo "Installing Ansible from pip"
|
||||
pip install ansible
|
||||
fi
|
||||
|
||||
# Run the shade Ansible tests
|
||||
tag_opt=""
|
||||
if [ ! -z ${TAGS} ]
|
||||
then
|
||||
tag_opt="--tags ${TAGS}"
|
||||
fi
|
||||
|
||||
# Until we have a module that lets us determine the image we want from
|
||||
# within a playbook, we have to find the image here and pass it in.
|
||||
# We use the openstack client instead of nova client since it can use clouds.yaml.
|
||||
IMAGE=`openstack --os-cloud=${CLOUD} image list -f value -c Name | grep cirros | grep -v -e ramdisk -e kernel`
|
||||
if [ $? -ne 0 ]
|
||||
then
|
||||
echo "Failed to find Cirros image"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
ansible-playbook -vvv ./openstack/tests/ansible/run.yml -e "cloud=${CLOUD} image=${IMAGE}" ${tag_opt}
|
@ -0,0 +1,132 @@
|
||||
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import logging
|
||||
import warnings
|
||||
|
||||
import keystoneauth1.exceptions
|
||||
import pbr.version
|
||||
import requestsexceptions
|
||||
|
||||
from openstack import _log
|
||||
from openstack.cloud.exc import * # noqa
|
||||
from openstack.cloud.openstackcloud import OpenStackCloud
|
||||
from openstack.cloud.operatorcloud import OperatorCloud
|
||||
|
||||
__version__ = pbr.version.VersionInfo('openstacksdk').version_string()
|
||||
|
||||
if requestsexceptions.SubjectAltNameWarning:
|
||||
warnings.filterwarnings(
|
||||
'ignore', category=requestsexceptions.SubjectAltNameWarning)
|
||||
|
||||
|
||||
def _get_openstack_config(app_name=None, app_version=None):
|
||||
import openstack.config
|
||||
# Protect against older versions of os-client-config that don't expose this
|
||||
try:
|
||||
return openstack.config.OpenStackConfig(
|
||||
app_name=app_name, app_version=app_version)
|
||||
except Exception:
|
||||
return openstack.config.OpenStackConfig()
|
||||
|
||||
|
||||
def simple_logging(debug=False, http_debug=False):
|
||||
if http_debug:
|
||||
debug = True
|
||||
if debug:
|
||||
log_level = logging.DEBUG
|
||||
else:
|
||||
log_level = logging.INFO
|
||||
if http_debug:
|
||||
# Enable HTTP level tracing
|
||||
log = _log.setup_logging('keystoneauth')
|
||||
log.addHandler(logging.StreamHandler())
|
||||
log.setLevel(log_level)
|
||||
# We only want extra shade HTTP tracing in http debug mode
|
||||
log = _log.setup_logging('openstack.cloud.http')
|
||||
log.setLevel(log_level)
|
||||
else:
|
||||
# We only want extra shade HTTP tracing in http debug mode
|
||||
log = _log.setup_logging('openstack.cloud.http')
|
||||
log.setLevel(logging.WARNING)
|
||||
log = _log.setup_logging('openstack.cloud')
|
||||
log.addHandler(logging.StreamHandler())
|
||||
log.setLevel(log_level)
|
||||
# Suppress warning about keystoneauth loggers
|
||||
log = _log.setup_logging('keystoneauth.identity.base')
|
||||
log = _log.setup_logging('keystoneauth.identity.generic.base')
|
||||

# TODO(shade) Document this and add some examples
# TODO(shade) This wants to be renamed before we make a release.
def openstack_clouds(
        config=None, debug=False, cloud=None, strict=False,
        app_name=None, app_version=None):
    if not config:
        config = _get_openstack_config(app_name, app_version)
    try:
        if cloud is None:
            return [
                OpenStackCloud(
                    cloud=f.name, debug=debug,
                    cloud_config=f,
                    strict=strict,
                    **f.config)
                for f in config.get_all_clouds()
            ]
        else:
            return [
                OpenStackCloud(
                    cloud=f.name, debug=debug,
                    cloud_config=f,
                    strict=strict,
                    **f.config)
                for f in config.get_all_clouds()
                if f.name == cloud
            ]
    except keystoneauth1.exceptions.auth_plugins.NoMatchingPlugin as e:
        raise OpenStackCloudException(
            "Invalid cloud configuration: {exc}".format(exc=str(e)))


# TODO(shade) This wants to be renamed before we make a release - there is
# ultimately no reason to have an openstack_cloud and a connect
# factory function - but we have a few steps to go first and this is used
# in the imported tests from shade.
def openstack_cloud(
        config=None, strict=False, app_name=None, app_version=None, **kwargs):
    if not config:
        config = _get_openstack_config(app_name, app_version)
    try:
        cloud_config = config.get_one_cloud(**kwargs)
    except keystoneauth1.exceptions.auth_plugins.NoMatchingPlugin as e:
        raise OpenStackCloudException(
            "Invalid cloud configuration: {exc}".format(exc=str(e)))
    return OpenStackCloud(cloud_config=cloud_config, strict=strict)


# TODO(shade) This wants to be renamed before we make a release - there is
# ultimately no reason to have an operator_cloud and a connect
# factory function - but we have a few steps to go first and this is used
# in the imported tests from shade.
def operator_cloud(
        config=None, strict=False, app_name=None, app_version=None, **kwargs):
    if not config:
        config = _get_openstack_config(app_name, app_version)
    try:
        cloud_config = config.get_one_cloud(**kwargs)
    except keystoneauth1.exceptions.auth_plugins.NoMatchingPlugin as e:
        raise OpenStackCloudException(
            "Invalid cloud configuration: {exc}".format(exc=str(e)))
    return OperatorCloud(cloud_config=cloud_config, strict=strict)
28  openstack/_log.py  Normal file
@@ -0,0 +1,28 @@
# Copyright (c) 2015 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging


class NullHandler(logging.Handler):
    def emit(self, record):
        pass


def setup_logging(name):
    log = logging.getLogger(name)
    if len(log.handlers) == 0:
        h = NullHandler()
        log.addHandler(h)
    return log
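`setup_logging` attaches a no-op handler to a named logger exactly once, so the library stays silent until an application configures its own handler. A standalone illustration of the same pattern (not part of the diff; `example.cloud` is an arbitrary logger name):

```python
import logging


class NullHandler(logging.Handler):
    """Swallow records so an un-configured library emits nothing."""
    def emit(self, record):
        pass


def setup_logging(name):
    # Attach a NullHandler only if the logger has no handlers yet,
    # so repeated calls do not stack handlers.
    log = logging.getLogger(name)
    if len(log.handlers) == 0:
        log.addHandler(NullHandler())
    return log


first = setup_logging('example.cloud')
second = setup_logging('example.cloud')
print(first is second)        # logging.getLogger returns the same object
print(len(first.handlers))    # still a single NullHandler
```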
0  openstack/cloud/__init__.py  Normal file

166  openstack/cloud/_adapter.py  Normal file
@@ -0,0 +1,166 @@
# Copyright (c) 2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

'''Wrapper around keystoneauth Session to wrap calls in TaskManager'''

import functools

from keystoneauth1 import adapter
from six.moves import urllib

from openstack import _log
from openstack.cloud import exc
from openstack.cloud import task_manager


def extract_name(url):
    '''Produce a key name to use in logging/metrics from the URL path.

    We want to be able to log/metric sane general things, so we pull
    the url apart to generate names. The function returns a list because
    there are two different ways in which the elements want to be combined
    below (one for logging, one for statsd).

    Some examples are likely useful:

    /servers -> ['servers']
    /servers/{id} -> ['servers']
    /servers/{id}/os-security-groups -> ['servers', 'os-security-groups']
    /v2.0/networks.json -> ['networks']
    '''

    url_path = urllib.parse.urlparse(url).path.strip()
    # Remove / from the beginning to keep the list indexes of interesting
    # things consistent
    if url_path.startswith('/'):
        url_path = url_path[1:]

    # Special case for neutron, which puts .json on the end of urls
    if url_path.endswith('.json'):
        url_path = url_path[:-len('.json')]

    url_parts = url_path.split('/')
    if url_parts[-1] == 'detail':
        # Special case detail calls
        # GET /servers/detail
        # returns ['servers', 'detail']
        name_parts = url_parts[-2:]
    else:
        # Strip leading version piece so that
        # GET /v2.0/networks
        # returns ['networks']
        if url_parts[0] in ('v1', 'v2', 'v2.0'):
            url_parts = url_parts[1:]
        name_parts = []
        # Pull out every other URL portion - so that
        # GET /servers/{id}/os-security-groups
        # returns ['servers', 'os-security-groups']
        for idx in range(0, len(url_parts)):
            if not idx % 2 and url_parts[idx]:
                name_parts.append(url_parts[idx])

    # Keystone Token fetching is a special case, so we name it "tokens"
    if url_path.endswith('tokens'):
        name_parts = ['tokens']

    # Getting the root of an endpoint is doing version discovery
    if not name_parts:
        name_parts = ['discovery']

    # Strip out anything that's empty or None
    return [part for part in name_parts if part]
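The naming scheme in `extract_name` is easy to verify against the docstring examples. A condensed standalone sketch of the same logic (Python 3 `urllib` instead of `six`, and without the source's comment scaffolding):

```python
from urllib.parse import urlparse


def extract_name(url):
    """Condensed sketch of the URL-to-name scheme described above."""
    path = urlparse(url).path.strip().lstrip('/')
    if path.endswith('.json'):           # neutron appends .json to urls
        path = path[:-len('.json')]
    parts = path.split('/')
    if parts[-1] == 'detail':            # keep 'detail' calls verbatim
        name = parts[-2:]
    else:
        if parts[0] in ('v1', 'v2', 'v2.0'):
            parts = parts[1:]            # drop the version segment
        # every other segment is a resource name; the rest are ids
        name = [p for i, p in enumerate(parts) if i % 2 == 0 and p]
    if path.endswith('tokens'):          # keystone token fetch special case
        name = ['tokens']
    return name or ['discovery']         # endpoint root = version discovery


print(extract_name('/servers/1234/os-security-groups'))
print(extract_name('/v2.0/networks.json'))
```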
# TODO(shade) This adapter should go away in favor of the work merging
# adapter with openstack.proxy.
class ShadeAdapter(adapter.Adapter):

    def __init__(self, shade_logger, manager, *args, **kwargs):
        super(ShadeAdapter, self).__init__(*args, **kwargs)
        self.shade_logger = shade_logger
        self.manager = manager
        self.request_log = _log.setup_logging('openstack.cloud.request_ids')

    def _log_request_id(self, response, obj=None):
        # Log the request id and object id in a specific logger. This way
        # someone can turn it on if they're interested in this kind of tracing.
        request_id = response.headers.get('x-openstack-request-id')
        if not request_id:
            return response
        tmpl = "{meth} call to {service} for {url} used request id {req}"
        kwargs = dict(
            meth=response.request.method,
            service=self.service_type,
            url=response.request.url,
            req=request_id)

        if isinstance(obj, dict):
            obj_id = obj.get('id', obj.get('uuid'))
            if obj_id:
                kwargs['obj_id'] = obj_id
                tmpl += " returning object {obj_id}"
        self.request_log.debug(tmpl.format(**kwargs))
        return response

    def _munch_response(self, response, result_key=None, error_message=None):
        exc.raise_from_response(response, error_message=error_message)

        if not response.content:
            # This response doesn't have any content
            return self._log_request_id(response)

        # Some REST calls do not return json content. Don't decode it.
        if 'application/json' not in response.headers.get('Content-Type'):
            return self._log_request_id(response)

        try:
            result_json = response.json()
            self._log_request_id(response, result_json)
        except Exception:
            return self._log_request_id(response)
        return result_json

    def request(
            self, url, method, run_async=False, error_message=None,
            *args, **kwargs):
        name_parts = extract_name(url)
        name = '.'.join([self.service_type, method] + name_parts)
        class_name = "".join([
            part.lower().capitalize() for part in name.split('.')])

        request_method = functools.partial(
            super(ShadeAdapter, self).request, url, method)

        class RequestTask(task_manager.BaseTask):

            def __init__(self, **kw):
                super(RequestTask, self).__init__(**kw)
                self.name = name
                self.__class__.__name__ = str(class_name)
                self.run_async = run_async

            def main(self, client):
                self.args.setdefault('raise_exc', False)
                return request_method(**self.args)

        response = self.manager.submit_task(RequestTask(**kwargs))
        if run_async:
            return response
        else:
            return self._munch_response(response, error_message=error_message)

    def _version_matches(self, version):
        api_version = self.get_api_major_version()
        if api_version:
            return api_version[0] == version
        return False
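The `request` method derives a dotted task name and a CamelCase task class name from the service type, HTTP method, and the parts returned by `extract_name`. That derivation in isolation (the inputs here are illustrative, not from the diff):

```python
def task_class_name(service_type, method, name_parts):
    # 'compute' + 'GET' + ['servers', 'detail'] -> 'compute.GET.servers.detail'
    name = '.'.join([service_type, method] + name_parts)
    # CamelCase each dotted part to produce the generated task class name
    return "".join(part.lower().capitalize() for part in name.split('.'))


print(task_class_name('compute', 'GET', ['servers', 'detail']))
print(task_class_name('network', 'POST', ['networks']))
```

This is why tracebacks and task logs from shade show names like `ComputeGetServersDetail` rather than a generic `RequestTask`.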
0  openstack/cloud/_heat/__init__.py  Normal file

56  openstack/cloud/_heat/environment_format.py  Normal file
@@ -0,0 +1,56 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import yaml

from openstack.cloud._heat import template_format


SECTIONS = (
    PARAMETER_DEFAULTS, PARAMETERS, RESOURCE_REGISTRY,
    ENCRYPTED_PARAM_NAMES, EVENT_SINKS,
    PARAMETER_MERGE_STRATEGIES
) = (
    'parameter_defaults', 'parameters', 'resource_registry',
    'encrypted_param_names', 'event_sinks',
    'parameter_merge_strategies'
)


def parse(env_str):
    """Takes a string and returns a dict containing the parsed structure.

    This includes determination of whether the string is using the
    YAML format.
    """
    try:
        env = yaml.load(env_str, Loader=template_format.yaml_loader)
    except yaml.YAMLError:
        # NOTE(prazumovsky): we need to return a more informative error to
        # the user, so use SafeLoader, which returns an error message with
        # the template snippet where the error occurred.
        try:
            env = yaml.load(env_str, Loader=yaml.SafeLoader)
        except yaml.YAMLError as yea:
            raise ValueError(yea)
    else:
        if env is None:
            env = {}
        elif not isinstance(env, dict):
            raise ValueError(
                'The environment is not a valid YAML mapping data type.')

    for param in env:
        if param not in SECTIONS:
            raise ValueError('environment has wrong section "%s"' % param)

    return env
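After loading, `parse` rejects any top-level key that is not one of the known environment sections. The validation step in isolation, with the same section names (a plain dict stands in for the YAML result so this sketch needs no PyYAML):

```python
SECTIONS = (
    'parameter_defaults', 'parameters', 'resource_registry',
    'encrypted_param_names', 'event_sinks',
    'parameter_merge_strategies',
)


def validate_sections(env):
    # Reject any top-level key that is not a known environment section.
    for param in env:
        if param not in SECTIONS:
            raise ValueError('environment has wrong section "%s"' % param)
    return env


validate_sections({'parameters': {'flavor': 'm1.small'}})  # accepted
try:
    validate_sections({'paramters': {}})                   # typo: rejected
except ValueError as e:
    print(e)
```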
98  openstack/cloud/_heat/event_utils.py  Normal file
@@ -0,0 +1,98 @@
# Copyright 2015 Red Hat Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import collections
import time

from openstack.cloud import meta


def get_events(cloud, stack_id, event_args, marker=None, limit=None):
    # TODO(mordred) FIX THIS ONCE assert_calls CAN HANDLE QUERY STRINGS
    params = collections.OrderedDict()
    for k in sorted(event_args.keys()):
        params[k] = event_args[k]

    if marker:
        event_args['marker'] = marker
    if limit:
        event_args['limit'] = limit

    data = cloud._orchestration_client.get(
        '/stacks/{id}/events'.format(id=stack_id),
        params=params)
    events = meta.get_and_munchify('events', data)

    # Show which stack the event comes from (for nested events)
    for e in events:
        e['stack_name'] = stack_id.split("/")[0]
    return events


def poll_for_events(
        cloud, stack_name, action=None, poll_period=5, marker=None):
    """Continuously poll events and logs for performed action on stack."""

    if action:
        stop_status = ('%s_FAILED' % action, '%s_COMPLETE' % action)
        stop_check = lambda a: a in stop_status
    else:
        stop_check = lambda a: a.endswith('_COMPLETE') or a.endswith('_FAILED')

    no_event_polls = 0
    msg_template = "\n Stack %(name)s %(status)s \n"

    def is_stack_event(event):
        if event.get('resource_name', '') != stack_name:
            return False

        phys_id = event.get('physical_resource_id', '')
        links = dict((l.get('rel'),
                      l.get('href')) for l in event.get('links', []))
        stack_id = links.get('stack', phys_id).rsplit('/', 1)[-1]
        return stack_id == phys_id

    while True:
        events = get_events(
            cloud, stack_id=stack_name,
            event_args={'sort_dir': 'asc', 'marker': marker})

        if len(events) == 0:
            no_event_polls += 1
        else:
            no_event_polls = 0
            # set marker to last event that was received.
            marker = getattr(events[-1], 'id', None)

            for event in events:
                # check if stack event was also received
                if is_stack_event(event):
                    stack_status = getattr(event, 'resource_status', '')
                    msg = msg_template % dict(
                        name=stack_name, status=stack_status)
                    if stop_check(stack_status):
                        return stack_status, msg

        if no_event_polls >= 2:
            # after 2 polls with no events, fall back to a stack get
            stack = cloud.get_stack(stack_name)
            stack_status = stack['stack_status']
            msg = msg_template % dict(
                name=stack_name, status=stack_status)
            if stop_check(stack_status):
                return stack_status, msg
            # go back to event polling again
            no_event_polls = 0

        time.sleep(poll_period)
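`poll_for_events` decides when to stop with a `stop_check` predicate: scoped to one action's terminal states when `action` is given, otherwise any `*_COMPLETE` or `*_FAILED` status. The predicate construction alone, with illustrative statuses:

```python
def make_stop_check(action=None):
    # With an action, stop only on that action's terminal states;
    # otherwise stop on any terminal status.
    if action:
        stop_status = ('%s_FAILED' % action, '%s_COMPLETE' % action)
        return lambda a: a in stop_status
    return lambda a: a.endswith('_COMPLETE') or a.endswith('_FAILED')


check = make_stop_check('CREATE')
print(check('CREATE_COMPLETE'))   # terminal for this action
print(check('UPDATE_COMPLETE'))   # a different action: keep polling
print(make_stop_check()('DELETE_FAILED'))
```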
69  openstack/cloud/_heat/template_format.py  Normal file
@@ -0,0 +1,69 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json

import yaml

if hasattr(yaml, 'CSafeLoader'):
    yaml_loader = yaml.CSafeLoader
else:
    yaml_loader = yaml.SafeLoader

if hasattr(yaml, 'CSafeDumper'):
    yaml_dumper = yaml.CSafeDumper
else:
    yaml_dumper = yaml.SafeDumper


def _construct_yaml_str(self, node):
    # Override the default string handling function
    # to always return unicode objects
    return self.construct_scalar(node)


yaml_loader.add_constructor(u'tag:yaml.org,2002:str', _construct_yaml_str)
# Unquoted dates like 2013-05-23 in yaml files get loaded as objects of type
# datetime.date, which causes problems in the API layer when being processed
# by openstack.common.jsonutils. Therefore, make unicode strings out of
# timestamps until jsonutils can handle dates.
yaml_loader.add_constructor(u'tag:yaml.org,2002:timestamp',
                            _construct_yaml_str)


def parse(tmpl_str):
    """Takes a string and returns a dict containing the parsed structure.

    This includes determination of whether the string is using the
    JSON or YAML format.
    """
    # strip any whitespace before the check
    tmpl_str = tmpl_str.strip()
    if tmpl_str.startswith('{'):
        tpl = json.loads(tmpl_str)
    else:
        try:
            tpl = yaml.load(tmpl_str, Loader=yaml_loader)
        except yaml.YAMLError:
            # NOTE(prazumovsky): we need to return a more informative error
            # to the user, so use SafeLoader, which returns an error message
            # with the template snippet where the error occurred.
            try:
                tpl = yaml.load(tmpl_str, Loader=yaml.SafeLoader)
            except yaml.YAMLError as yea:
                raise ValueError(yea)
        else:
            if tpl is None:
                tpl = {}
    # Looking for supported version keys in the loaded template
    if not ('HeatTemplateFormatVersion' in tpl
            or 'heat_template_version' in tpl
            or 'AWSTemplateFormatVersion' in tpl):
        raise ValueError("Template format version not found.")
    return tpl
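`parse` dispatches on the first character: a leading `{` means JSON, anything else goes through YAML, and either way the result must carry a recognized template version key. A sketch of the JSON branch plus the version check (JSON only, so it runs on the standard library without PyYAML):

```python
import json


def parse_json_template(tmpl_str):
    # Mirror the JSON branch of parse(): strip, load, then require one of
    # the recognized template version keys.
    tmpl_str = tmpl_str.strip()
    tpl = json.loads(tmpl_str)
    if not ('HeatTemplateFormatVersion' in tpl
            or 'heat_template_version' in tpl
            or 'AWSTemplateFormatVersion' in tpl):
        raise ValueError("Template format version not found.")
    return tpl


tpl = parse_json_template(
    '{"heat_template_version": "2015-04-30", "resources": {}}')
print(tpl['heat_template_version'])
```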
314  openstack/cloud/_heat/template_utils.py  Normal file
@@ -0,0 +1,314 @@
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import collections
import json

import six
from six.moves.urllib import parse
from six.moves.urllib import request

from openstack.cloud._heat import environment_format
from openstack.cloud._heat import template_format
from openstack.cloud._heat import utils
from openstack.cloud import exc


def get_template_contents(template_file=None, template_url=None,
                          template_object=None, object_request=None,
                          files=None, existing=False):

    is_object = False
    tpl = None

    # Transform a bare file path to a file:// URL.
    if template_file:
        template_url = utils.normalise_file_path_to_url(template_file)

    if template_url:
        tpl = request.urlopen(template_url).read()

    elif template_object:
        is_object = True
        template_url = template_object
        tpl = object_request and object_request('GET',
                                                template_object)
    elif existing:
        return {}, None
    else:
        raise exc.OpenStackCloudException(
            'Must provide one of template_file,'
            ' template_url or template_object')

    if not tpl:
        raise exc.OpenStackCloudException(
            'Could not fetch template from %s' % template_url)

    try:
        if isinstance(tpl, six.binary_type):
            tpl = tpl.decode('utf-8')
        template = template_format.parse(tpl)
    except ValueError as e:
        raise exc.OpenStackCloudException(
            'Error parsing template %(url)s %(error)s' %
            {'url': template_url, 'error': e})

    tmpl_base_url = utils.base_url_for_url(template_url)
    if files is None:
        files = {}
    resolve_template_get_files(template, files, tmpl_base_url, is_object,
                               object_request)
    return files, template


def resolve_template_get_files(template, files, template_base_url,
                               is_object=False, object_request=None):

    def ignore_if(key, value):
        if key != 'get_file' and key != 'type':
            return True
        if not isinstance(value, six.string_types):
            return True
        if (key == 'type' and
                not value.endswith(('.yaml', '.template'))):
            return True
        return False

    def recurse_if(value):
        return isinstance(value, (dict, list))

    get_file_contents(template, files, template_base_url,
                      ignore_if, recurse_if, is_object, object_request)


def is_template(file_content):
    try:
        if isinstance(file_content, six.binary_type):
            file_content = file_content.decode('utf-8')
        template_format.parse(file_content)
    except (ValueError, TypeError):
        return False
    return True


def get_file_contents(from_data, files, base_url=None,
                      ignore_if=None, recurse_if=None,
                      is_object=False, object_request=None):

    if recurse_if and recurse_if(from_data):
        if isinstance(from_data, dict):
            recurse_data = from_data.values()
        else:
            recurse_data = from_data
        for value in recurse_data:
            get_file_contents(value, files, base_url, ignore_if, recurse_if,
                              is_object, object_request)

    if isinstance(from_data, dict):
        for key, value in from_data.items():
            if ignore_if and ignore_if(key, value):
                continue

            if base_url and not base_url.endswith('/'):
                base_url = base_url + '/'

            str_url = parse.urljoin(base_url, value)
            if str_url not in files:
                if is_object and object_request:
                    file_content = object_request('GET', str_url)
                else:
                    file_content = utils.read_url_content(str_url)
                if is_template(file_content):
                    if is_object:
                        template = get_template_contents(
                            template_object=str_url, files=files,
                            object_request=object_request)[1]
                    else:
                        template = get_template_contents(
                            template_url=str_url, files=files)[1]
                    file_content = json.dumps(template)
                files[str_url] = file_content
            # replace the data value with the normalised absolute URL
            from_data[key] = str_url


def deep_update(old, new):
    '''Merge nested dictionaries.'''

    # Prevents an error if in a previous iteration
    # old[k] = None but v[k] = {...}
    if old is None:
        old = {}

    for k, v in new.items():
        if isinstance(v, collections.Mapping):
            r = deep_update(old.get(k, {}), v)
            old[k] = r
        else:
            old[k] = new[k]
    return old
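`deep_update` is why later environment files override earlier ones key by key rather than replacing whole sections, which matters when merging multiple `-e` environments below. A standalone sketch (using Python 3's `collections.abc.Mapping`; the environment contents are illustrative):

```python
from collections.abc import Mapping


def deep_update(old, new):
    # Recursively merge 'new' into 'old'; nested mappings merge key by
    # key instead of being replaced wholesale.
    if old is None:
        old = {}
    for k, v in new.items():
        if isinstance(v, Mapping):
            old[k] = deep_update(old.get(k, {}), v)
        else:
            old[k] = new[k]
    return old


env = {'parameters': {'flavor': 'm1.small', 'image': 'cirros'}}
override = {'parameters': {'flavor': 'm1.large'}}
merged = deep_update(env, override)
print(merged)
```

A plain `dict.update` here would have dropped the `image` parameter along with the old `flavor`.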
def process_multiple_environments_and_files(env_paths=None, template=None,
                                            template_url=None,
                                            env_path_is_object=None,
                                            object_request=None,
                                            env_list_tracker=None):
    """Reads one or more environment files.

    Reads in each specified environment file and returns a dictionary
    of the filenames->contents (suitable for the files dict)
    and the consolidated environment (after having applied the correct
    overrides based on order).

    If a list is provided in the env_list_tracker parameter, the behavior
    is altered to take advantage of server-side environment resolution.
    Specifically, this means:

    * Populating env_list_tracker with an ordered list of environment file
      URLs to be passed to the server
    * Including the contents of each environment file in the returned
      files dict, keyed by one of the URLs in env_list_tracker

    :param env_paths: list of paths to the environment files to load; if
           None, empty results will be returned
    :type env_paths: list or None
    :param template: unused; only included for API compatibility
    :param template_url: unused; only included for API compatibility
    :param env_list_tracker: if specified, environment filenames will be
           stored within
    :type env_list_tracker: list or None
    :return: tuple of files dict and a dict of the consolidated environment
    :rtype: tuple
    """
    merged_files = {}
    merged_env = {}

    # If we're keeping a list of environment files separately, include the
    # contents of the files in the files dict
    include_env_in_files = env_list_tracker is not None

    if env_paths:
        for env_path in env_paths:
            files, env = process_environment_and_files(
                env_path=env_path,
                template=template,
                template_url=template_url,
                env_path_is_object=env_path_is_object,
                object_request=object_request,
                include_env_in_files=include_env_in_files)

            # 'files' looks like {"filename1": contents, "filename2": contents}
            # so a simple update is enough for merging
            merged_files.update(files)

            # 'env' can be a deeply nested dictionary, so a simple update is
            # not enough
            merged_env = deep_update(merged_env, env)

            if env_list_tracker is not None:
                env_url = utils.normalise_file_path_to_url(env_path)
                env_list_tracker.append(env_url)

    return merged_files, merged_env


def process_environment_and_files(env_path=None,
                                  template=None,
                                  template_url=None,
                                  env_path_is_object=None,
                                  object_request=None,
                                  include_env_in_files=False):
    """Loads a single environment file.

    Returns an entry suitable for the files dict which maps the environment
    filename to its contents.

    :param env_path: full path to the file to load
    :type env_path: str or None
    :param include_env_in_files: if specified, the raw environment file itself
           will be included in the returned files dict
    :type include_env_in_files: bool
    :return: tuple of files dict and the loaded environment as a dict
    :rtype: (dict, dict)
    """
    files = {}
    env = {}

    is_object = env_path_is_object and env_path_is_object(env_path)

    if is_object:
        raw_env = object_request and object_request('GET', env_path)
        env = environment_format.parse(raw_env)
        env_base_url = utils.base_url_for_url(env_path)

        resolve_environment_urls(
            env.get('resource_registry'),
            files,
            env_base_url, is_object=True, object_request=object_request)

    elif env_path:
        env_url = utils.normalise_file_path_to_url(env_path)
        env_base_url = utils.base_url_for_url(env_url)
        raw_env = request.urlopen(env_url).read()

        env = environment_format.parse(raw_env)

        resolve_environment_urls(
            env.get('resource_registry'),
            files,
            env_base_url)

        if include_env_in_files:
            files[env_url] = json.dumps(env)

    return files, env


def resolve_environment_urls(resource_registry, files, env_base_url,
                             is_object=False, object_request=None):
    """Handles any resource URLs specified in an environment.

    :param resource_registry: mapping of type name to template filename
    :type resource_registry: dict
    :param files: dict to store loaded file contents into
    :type files: dict
    :param env_base_url: base URL to look in when loading files
    :type env_base_url: str or None
    """
    if resource_registry is None:
        return

    rr = resource_registry
    base_url = rr.get('base_url', env_base_url)

    def ignore_if(key, value):
        if key == 'base_url':
            return True
        if isinstance(value, dict):
            return True
        if '::' in value:
            # Built in providers like: "X::Compute::Server"
            # don't need downloading.
            return True
        if key in ['hooks', 'restricted_actions']:
            return True

    get_file_contents(rr, files, base_url, ignore_if,
                      is_object=is_object, object_request=object_request)

    for res_name, res_dict in rr.get('resources', {}).items():
        res_base_url = res_dict.get('base_url', base_url)
        get_file_contents(
            res_dict, files, res_base_url, ignore_if,
            is_object=is_object, object_request=object_request)
61  openstack/cloud/_heat/utils.py  Normal file
@@ -0,0 +1,61 @@
# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import base64
import os

from six.moves.urllib import error
from six.moves.urllib import parse
from six.moves.urllib import request

from openstack.cloud import exc


def base_url_for_url(url):
    parsed = parse.urlparse(url)
    parsed_dir = os.path.dirname(parsed.path)
    return parse.urljoin(url, parsed_dir)


def normalise_file_path_to_url(path):
    if parse.urlparse(path).scheme:
        return path
    path = os.path.abspath(path)
    return parse.urljoin('file:', request.pathname2url(path))


def read_url_content(url):
    try:
        # TODO(mordred) Use requests
        content = request.urlopen(url).read()
    except error.URLError:
        raise exc.OpenStackCloudException(
            'Could not fetch contents for %s' % url)

    if content:
        try:
            content.decode('utf-8')
        except ValueError:
            content = base64.encodestring(content)
    return content


def resource_nested_identifier(rsrc):
    nested_link = [l for l in rsrc.links or []
                   if l.get('rel') == 'nested']
    if nested_link:
        nested_href = nested_link[0].get('href')
        nested_identifier = nested_href.split("/")[-2:]
        return "/".join(nested_identifier)
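The two URL helpers above are easy to exercise in isolation: `base_url_for_url` drops the final path segment so relative template references resolve against the template's directory, and `normalise_file_path_to_url` turns bare filesystem paths into `file://` URLs while leaving real URLs alone. A Python 3 sketch of both (`urllib` directly instead of `six`; the example URLs are illustrative):

```python
import os
from urllib.parse import urljoin, urlparse
from urllib.request import pathname2url


def base_url_for_url(url):
    # Drop the final path segment so relative references resolve
    # against the template's directory.
    parsed = urlparse(url)
    return urljoin(url, os.path.dirname(parsed.path))


def normalise_file_path_to_url(path):
    # Leave anything with a scheme alone; turn bare filesystem paths
    # into file:// URLs.
    if urlparse(path).scheme:
        return path
    return urljoin('file:', pathname2url(os.path.abspath(path)))


print(base_url_for_url('http://example.com/templates/app/stack.yaml'))
print(normalise_file_path_to_url('http://example.com/stack.yaml'))
```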
1095  openstack/cloud/_normalize.py  Normal file
File diff suppressed because it is too large. Load diff.
97  openstack/cloud/_tasks.py  Normal file
@@ -0,0 +1,97 @@
|
||||
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.

from openstack.cloud import task_manager


class MachineCreate(task_manager.Task):
    def main(self, client):
        return client.ironic_client.node.create(**self.args)


class MachineDelete(task_manager.Task):
    def main(self, client):
        return client.ironic_client.node.delete(**self.args)


class MachinePatch(task_manager.Task):
    def main(self, client):
        return client.ironic_client.node.update(**self.args)


class MachinePortGet(task_manager.Task):
    def main(self, client):
        return client.ironic_client.port.get(**self.args)


class MachinePortGetByAddress(task_manager.Task):
    def main(self, client):
        return client.ironic_client.port.get_by_address(**self.args)


class MachinePortCreate(task_manager.Task):
    def main(self, client):
        return client.ironic_client.port.create(**self.args)


class MachinePortDelete(task_manager.Task):
    def main(self, client):
        return client.ironic_client.port.delete(**self.args)


class MachinePortList(task_manager.Task):
    def main(self, client):
        return client.ironic_client.port.list()


class MachineNodeGet(task_manager.Task):
    def main(self, client):
        return client.ironic_client.node.get(**self.args)


class MachineNodeList(task_manager.Task):
    def main(self, client):
        return client.ironic_client.node.list(**self.args)


class MachineNodePortList(task_manager.Task):
    def main(self, client):
        return client.ironic_client.node.list_ports(**self.args)


class MachineNodeUpdate(task_manager.Task):
    def main(self, client):
        return client.ironic_client.node.update(**self.args)


class MachineNodeValidate(task_manager.Task):
    def main(self, client):
        return client.ironic_client.node.validate(**self.args)


class MachineSetMaintenance(task_manager.Task):
    def main(self, client):
        return client.ironic_client.node.set_maintenance(**self.args)


class MachineSetPower(task_manager.Task):
    def main(self, client):
        return client.ironic_client.node.set_power_state(**self.args)


class MachineSetProvision(task_manager.Task):
    def main(self, client):
        return client.ironic_client.node.set_provision_state(**self.args)
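Each of these classes defers a single ironic client call to a `task_manager.Task` subclass. The Task base itself is not part of this diff (it also handles queuing and rate limiting), so the following is only an illustrative sketch of the pattern; `FakeClient` and `node_create` are made-up stand-ins:

```python
class Task:
    """Capture a client call's kwargs now; execute the call later in main()."""

    def __init__(self, **kwargs):
        self.args = kwargs

    def main(self, client):
        raise NotImplementedError


class MachineCreate(Task):
    def main(self, client):
        return client.node_create(**self.args)


class FakeClient:
    """Stand-in client so the pattern can be exercised without ironic."""

    def node_create(self, **kwargs):
        return dict(kwargs, uuid='fake-uuid')


task = MachineCreate(name='node01', driver='ipmi')
result = task.main(FakeClient())
```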
713  openstack/cloud/_utils.py  Normal file
@@ -0,0 +1,713 @@
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import contextlib
import fnmatch
import inspect
import jmespath
import munch
import netifaces
import re
import six
import sre_constants
import sys
import time
import uuid

from decorator import decorator

from openstack import _log
from openstack.cloud import exc
from openstack.cloud import meta

_decorated_methods = []


def _exc_clear():
    """Because sys.exc_clear is gone in py3 and is not in six."""
    if sys.version_info[0] == 2:
        sys.exc_clear()


def _iterate_timeout(timeout, message, wait=2):
    """Iterate and raise an exception on timeout.

    This is a generator that will continually yield and sleep for
    wait seconds, and if the timeout is reached, will raise an exception
    with <message>.

    """
    log = _log.setup_logging('openstack.cloud.iterate_timeout')

    try:
        # None as a wait winds up flowing well in the per-resource cache
        # flow. We could spread this logic around to all of the calling
        # points, but just having this treat None as "I don't have a value"
        # seems friendlier
        if wait is None:
            wait = 2
        elif wait == 0:
            # wait should be < timeout, unless timeout is None
            wait = 0.1 if timeout is None else min(0.1, timeout)
        wait = float(wait)
    except ValueError:
        raise exc.OpenStackCloudException(
            "Wait value must be an int or float value. {wait} given"
            " instead".format(wait=wait))

    start = time.time()
    count = 0
    while (timeout is None) or (time.time() < start + timeout):
        count += 1
        yield count
        log.debug('Waiting %s seconds', wait)
        time.sleep(wait)
    raise exc.OpenStackCloudTimeout(message)
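`_iterate_timeout` is the polling primitive behind shade's wait-for loops: callers iterate until a condition holds and let the generator raise if it never does. A trimmed, dependency-free sketch of the same idea, with `TimeoutError` standing in for `OpenStackCloudTimeout`:

```python
import time


def iterate_timeout(timeout, message, wait=0.01):
    """Yield attempt numbers until timeout seconds elapse, then raise."""
    start = time.time()
    count = 0
    while time.time() < start + timeout:
        count += 1
        yield count
        time.sleep(wait)
    raise TimeoutError(message)


# Typical caller: poll until a condition holds, or let the generator raise.
attempts = []
for attempt in iterate_timeout(5, "server never went ACTIVE"):
    attempts.append(attempt)
    if attempt >= 3:  # pretend the resource became ready on the third poll
        break
```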
def _make_unicode(input):
    """Turn an input into unicode unconditionally

    :param input:
       A unicode, string or other object
    """
    try:
        if isinstance(input, unicode):
            return input
        if isinstance(input, str):
            return input.decode('utf-8')
        else:
            # int, for example
            return unicode(input)
    except NameError:
        # python3!
        return str(input)


def _dictify_resource(resource):
    if isinstance(resource, list):
        return [_dictify_resource(r) for r in resource]
    else:
        if hasattr(resource, 'toDict'):
            return resource.toDict()
        else:
            return resource
def _filter_list(data, name_or_id, filters):
    """Filter a list by name/ID and arbitrary meta data.

    :param list data:
        The list of dictionary data to filter. It is expected that
        each dictionary contains an 'id' and 'name'
        key if a value for name_or_id is given.
    :param string name_or_id:
        The name or ID of the entity being filtered. Can be a glob pattern,
        such as 'nb01*'.
    :param filters:
        A dictionary of meta data to use for further filtering. Elements
        of this dictionary may, themselves, be dictionaries. Example::

            {
              'last_name': 'Smith',
              'other': {
                  'gender': 'Female'
              }
            }
        OR
        A string containing a jmespath expression for further filtering.
    """
    # The logger is openstack.cloud.fnmatch to allow a user/operator to
    # configure logging not to communicate about fnmatch misses
    # (they shouldn't be too spammy, but one never knows)
    log = _log.setup_logging('openstack.cloud.fnmatch')
    if name_or_id:
        # name_or_id might already be unicode
        name_or_id = _make_unicode(name_or_id)
        identifier_matches = []
        bad_pattern = False
        try:
            fn_reg = re.compile(fnmatch.translate(name_or_id))
        except sre_constants.error:
            # If the fnmatch re doesn't compile, then we don't care,
            # but log it in case the user DID pass a pattern but did
            # it poorly and wants to know what went wrong with their
            # search
            fn_reg = None
        for e in data:
            e_id = _make_unicode(e.get('id', None))
            e_name = _make_unicode(e.get('name', None))

            if ((e_id and e_id == name_or_id) or
                    (e_name and e_name == name_or_id)):
                identifier_matches.append(e)
            else:
                # Only try fnmatch if we don't match exactly
                if not fn_reg:
                    # If we don't have a pattern, skip this, but set the flag
                    # so that we log the bad pattern
                    bad_pattern = True
                    continue
                if ((e_id and fn_reg.match(e_id)) or
                        (e_name and fn_reg.match(e_name))):
                    identifier_matches.append(e)
        if not identifier_matches and bad_pattern:
            log.debug("Bad pattern passed to fnmatch", exc_info=True)
        data = identifier_matches

    if not filters:
        return data

    if isinstance(filters, six.string_types):
        return jmespath.search(filters, data)

    def _dict_filter(f, d):
        if not d:
            return False
        for key in f.keys():
            if isinstance(f[key], dict):
                if not _dict_filter(f[key], d.get(key, None)):
                    return False
            elif d.get(key, None) != f[key]:
                return False
        return True

    filtered = []
    for e in data:
        filtered.append(e)
        for key in filters.keys():
            if isinstance(filters[key], dict):
                if not _dict_filter(filters[key], e.get(key, None)):
                    filtered.pop()
                    break
            elif e.get(key, None) != filters[key]:
                filtered.pop()
                break
    return filtered
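A condensed sketch of what `_filter_list` does — exact and glob name/ID matching plus recursive dict filters — without the jmespath, logging, and unicode-coercion branches (the simplified `filter_list` here only globs against names, where the original also globs against IDs):

```python
import fnmatch
import re


def _dict_filter(f, d):
    # Every key in the filter must match in d; nested dicts recurse.
    if not d:
        return False
    for key, want in f.items():
        if isinstance(want, dict):
            if not _dict_filter(want, d.get(key)):
                return False
        elif d.get(key) != want:
            return False
    return True


def filter_list(data, name_or_id=None, filters=None):
    if name_or_id:
        pattern = re.compile(fnmatch.translate(name_or_id))
        data = [e for e in data
                if e.get('id') == name_or_id
                or e.get('name') == name_or_id
                or pattern.match(e.get('name') or '')]
    if filters:
        data = [e for e in data if _dict_filter(filters, e)]
    return data


servers = [
    {'id': '1', 'name': 'nb01-web', 'meta': {'env': 'prod'}},
    {'id': '2', 'name': 'nb02-db', 'meta': {'env': 'dev'}},
]
```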
def _get_entity(cloud, resource, name_or_id, filters, **kwargs):
    """Return a single entity from the list returned by a given method.

    :param object cloud:
        The controller class (Example: the main OpenStackCloud object).
    :param string or callable resource:
        The string that identifies the resource to use to lookup the
        get_<>_by_id or search_<resource>s methods (Example: network)
        or a callable to invoke.
    :param string name_or_id:
        The name or ID of the entity being filtered or a dict
    :param filters:
        A dictionary of meta data to use for further filtering.
        OR
        A string containing a jmespath expression for further filtering.
        Example:: "[?last_name==`Smith`] | [?other.gender]==`Female`]"
    """

    # Sometimes in the control flow of shade, we already have an object
    # fetched. Rather than then needing to pull the name or id out of that
    # object, pass it in here and rely on caching to prevent us from making
    # an additional call, it's simple enough to test to see if we got an
    # object and just short-circuit return it.

    if hasattr(name_or_id, 'id'):
        return name_or_id

    # If a uuid is passed short-circuit it calling the
    # get_<resource_name>_by_id method
    if getattr(cloud, 'use_direct_get', False) and _is_uuid_like(name_or_id):
        get_resource = getattr(cloud, 'get_%s_by_id' % resource, None)
        if get_resource:
            return get_resource(name_or_id)

    search = resource if callable(resource) else getattr(
        cloud, 'search_%ss' % resource, None)
    if search:
        entities = search(name_or_id, filters, **kwargs)
        if entities:
            if len(entities) > 1:
                raise exc.OpenStackCloudException(
                    "Multiple matches found for %s" % name_or_id)
            return entities[0]
    return None
def normalize_keystone_services(services):
    """Normalize the structure of keystone services

    In keystone v2, there is a field called "service_type". In v3, it's
    "type". Just make the returned dict have both.

    :param list services: A list of keystone service dicts

    :returns: A list of normalized dicts.
    """
    ret = []
    for service in services:
        service_type = service.get('type', service.get('service_type'))
        new_service = {
            'id': service['id'],
            'name': service['name'],
            'description': service.get('description', None),
            'type': service_type,
            'service_type': service_type,
            'enabled': service['enabled']
        }
        ret.append(new_service)
    return meta.obj_list_to_munch(ret)


def localhost_supports_ipv6():
    """Determine whether the local host supports IPv6

    We look for a default route that supports the IPv6 address family,
    and assume that if it is present, this host has globally routable
    IPv6 connectivity.
    """

    try:
        return netifaces.AF_INET6 in netifaces.gateways()['default']
    except AttributeError:
        return False
def normalize_users(users):
    ret = [
        dict(
            id=user.get('id'),
            email=user.get('email'),
            name=user.get('name'),
            username=user.get('username'),
            default_project_id=user.get('default_project_id',
                                        user.get('tenantId')),
            domain_id=user.get('domain_id'),
            enabled=user.get('enabled'),
            description=user.get('description')
        ) for user in users
    ]
    return meta.obj_list_to_munch(ret)


def normalize_domains(domains):
    ret = [
        dict(
            id=domain.get('id'),
            name=domain.get('name'),
            description=domain.get('description'),
            enabled=domain.get('enabled'),
        ) for domain in domains
    ]
    return meta.obj_list_to_munch(ret)


def normalize_groups(domains):
    """Normalize Identity groups."""
    ret = [
        dict(
            id=domain.get('id'),
            name=domain.get('name'),
            description=domain.get('description'),
            domain_id=domain.get('domain_id'),
        ) for domain in domains
    ]
    return meta.obj_list_to_munch(ret)
def normalize_role_assignments(assignments):
    """Put role_assignments into a form that works with search/get interface.

    Role assignments have the structure::

        [
            {
                "role": {
                    "id": "--role-id--"
                },
                "scope": {
                    "domain": {
                        "id": "--domain-id--"
                    }
                },
                "user": {
                    "id": "--user-id--"
                }
            },
        ]

    Which is hard to work with in the rest of our interface. Map this to be::

        [
            {
                "id": "--role-id--",
                "domain": "--domain-id--",
                "user": "--user-id--",
            }
        ]

    Scope can be "domain" or "project" and "user" can also be "group".

    :param list assignments: A list of dictionaries of role assignments.

    :returns: A list of flattened/normalized role assignment dicts.
    """
    new_assignments = []
    for assignment in assignments:
        new_val = munch.Munch({'id': assignment['role']['id']})
        for scope in ('project', 'domain'):
            if scope in assignment['scope']:
                new_val[scope] = assignment['scope'][scope]['id']
        for assignee in ('user', 'group'):
            if assignee in assignment:
                new_val[assignee] = assignment[assignee]['id']
        new_assignments.append(new_val)
    return new_assignments


def normalize_roles(roles):
    """Normalize Identity roles."""
    ret = [
        dict(
            id=role.get('id'),
            name=role.get('name'),
        ) for role in roles
    ]
    return meta.obj_list_to_munch(ret)
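The flattening described in the docstring can be checked in isolation; this sketch builds plain dicts where the real code builds `munch.Munch` objects:

```python
def normalize_role_assignments(assignments):
    # Flatten {'role': {...}, 'scope': {...}, 'user'/'group': {...}} into
    # one dict keyed by id, project/domain, and user/group.
    new_assignments = []
    for assignment in assignments:
        new_val = {'id': assignment['role']['id']}
        for scope in ('project', 'domain'):
            if scope in assignment['scope']:
                new_val[scope] = assignment['scope'][scope]['id']
        for assignee in ('user', 'group'):
            if assignee in assignment:
                new_val[assignee] = assignment[assignee]['id']
        new_assignments.append(new_val)
    return new_assignments


raw = [{
    'role': {'id': '--role-id--'},
    'scope': {'domain': {'id': '--domain-id--'}},
    'user': {'id': '--user-id--'},
}]
flat = normalize_role_assignments(raw)
```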
def normalize_flavor_accesses(flavor_accesses):
    """Normalize Flavor access list."""
    return [munch.Munch(
        dict(
            flavor_id=acl.get('flavor_id'),
            project_id=acl.get('project_id') or acl.get('tenant_id'),
        )
    ) for acl in flavor_accesses
    ]


def valid_kwargs(*valid_args):
    # This decorator checks that arguments passed as **kwargs to a function
    # are present in valid_args.
    #
    # Typically, valid_kwargs is used when we want to distinguish between
    # None and omitted arguments and we still want to validate the argument
    # list.
    #
    # Example usage:
    #
    # @valid_kwargs('opt_arg1', 'opt_arg2')
    # def my_func(self, mandatory_arg1, mandatory_arg2, **kwargs):
    #     ...
    #
    @decorator
    def func_wrapper(func, *args, **kwargs):
        argspec = inspect.getargspec(func)
        for k in kwargs:
            if k not in argspec.args[1:] and k not in valid_args:
                raise TypeError(
                    "{f}() got an unexpected keyword argument "
                    "'{arg}'".format(f=inspect.stack()[1][3], arg=k))
        return func(*args, **kwargs)
    return func_wrapper
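`valid_kwargs` leans on the third-party `decorator` package so the wrapped function keeps its signature. A functools-only sketch of the same check, slightly stricter in that it only allows the listed names (`update_thing` is a made-up example function, not part of the module):

```python
import functools


def valid_kwargs(*valid_args):
    def outer(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Reject any keyword that was not declared valid.
            for k in kwargs:
                if k not in valid_args:
                    raise TypeError(
                        "%s() got an unexpected keyword argument %r"
                        % (func.__name__, k))
            return func(*args, **kwargs)
        return wrapper
    return outer


@valid_kwargs('description', 'enabled')
def update_thing(name, **kwargs):
    return dict(name=name, **kwargs)
```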
def cache_on_arguments(*cache_on_args, **cache_on_kwargs):
    _cache_name = cache_on_kwargs.pop('resource', None)

    def _inner_cache_on_arguments(func):
        def _cache_decorator(obj, *args, **kwargs):
            the_method = obj._get_cache(_cache_name).cache_on_arguments(
                *cache_on_args, **cache_on_kwargs)(
                    func.__get__(obj, type(obj)))
            return the_method(*args, **kwargs)

        def invalidate(obj, *args, **kwargs):
            return obj._get_cache(
                _cache_name).cache_on_arguments()(func).invalidate(
                    *args, **kwargs)

        _cache_decorator.invalidate = invalidate
        _cache_decorator.func = func
        _decorated_methods.append(func.__name__)

        return _cache_decorator
    return _inner_cache_on_arguments
@contextlib.contextmanager
def shade_exceptions(error_message=None):
    """Context manager for dealing with shade exceptions.

    :param string error_message: String to use for the exception message
        content on non-OpenStackCloudExceptions.

    Useful for avoiding wrapping shade OpenStackCloudException exceptions
    within themselves. Code called from within the context may throw such
    exceptions without having to catch and reraise them.

    Non-OpenStackCloudException exceptions thrown within the context will
    be wrapped and the exception message will be appended to the given error
    message.
    """
    try:
        yield
    except exc.OpenStackCloudException:
        raise
    except Exception as e:
        if error_message is None:
            error_message = str(e)
        raise exc.OpenStackCloudException(error_message)
def safe_dict_min(key, data):
    """Safely find the minimum for a given key in a list of dict objects.

    This will find the minimum integer value for specific dictionary key
    across a list of dictionaries. The values for the given key MUST be
    integers, or string representations of an integer.

    The dictionary key does not have to be present in all (or any)
    of the elements/dicts within the data set.

    :param string key: The dictionary key to search for the minimum value.
    :param list data: List of dicts to use for the data set.

    :returns: None if the field was not found in any elements, or
        the minimum value for the field otherwise.
    """
    min_value = None
    for d in data:
        if (key in d) and (d[key] is not None):
            try:
                val = int(d[key])
            except ValueError:
                raise exc.OpenStackCloudException(
                    "Search for minimum value failed. "
                    "Value for {key} is not an integer: {value}".format(
                        key=key, value=d[key])
                )
            if (min_value is None) or (val < min_value):
                min_value = val
    return min_value


def safe_dict_max(key, data):
    """Safely find the maximum for a given key in a list of dict objects.

    This will find the maximum integer value for specific dictionary key
    across a list of dictionaries. The values for the given key MUST be
    integers, or string representations of an integer.

    The dictionary key does not have to be present in all (or any)
    of the elements/dicts within the data set.

    :param string key: The dictionary key to search for the maximum value.
    :param list data: List of dicts to use for the data set.

    :returns: None if the field was not found in any elements, or
        the maximum value for the field otherwise.
    """
    max_value = None
    for d in data:
        if (key in d) and (d[key] is not None):
            try:
                val = int(d[key])
            except ValueError:
                raise exc.OpenStackCloudException(
                    "Search for maximum value failed. "
                    "Value for {key} is not an integer: {value}".format(
                        key=key, value=d[key])
                )
            if (max_value is None) or (val > max_value):
                max_value = val
    return max_value
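The behaviour of the pair above, condensed into one function (unlike the originals, this sketch lets non-integer values raise `ValueError` directly rather than wrapping them in a cloud exception):

```python
def safe_dict_max(key, data):
    # Ignore dicts lacking the key or holding None; coerce digit strings.
    vals = [int(d[key]) for d in data if d.get(key) is not None]
    return max(vals) if vals else None


# Missing keys are simply skipped, so heterogeneous dicts are fine.
flavors = [{'ram': '512'}, {'ram': 2048}, {'disk': 10}]
```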
def parse_range(value):
    """Parse a numerical range string.

    Break down a range expression into its operator and numerical parts.
    This expression must be a string. Valid values must be an integer string,
    optionally preceded by one of the following operators::

        - "<"  : Less than
        - ">"  : Greater than
        - "<=" : Less than or equal to
        - ">=" : Greater than or equal to

    Some examples of valid values and function return values::

        - "1024"  : returns (None, 1024)
        - "<5"    : returns ("<", 5)
        - ">=100" : returns (">=", 100)

    :param string value: The range expression to be parsed.

    :returns: A tuple with the operator string (or None if no operator
        was given) and the integer value. None is returned if parsing failed.
    """
    if value is None:
        return None

    range_exp = re.match(r'(<|>|<=|>=){0,1}(\d+)$', value)
    if range_exp is None:
        return None

    op = range_exp.group(1)
    num = int(range_exp.group(2))
    return (op, num)
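The docstring's examples can be exercised directly. Note that the alternation relies on regex backtracking: for "<=100" the engine first tries the bare "<", fails to match "\d+" against "=100", and backtracks to "<=":

```python
import re


def parse_range(value):
    if value is None:
        return None
    # Optional comparison operator followed by an integer, anchored at end.
    m = re.match(r'(<|>|<=|>=)?(\d+)$', value)
    if m is None:
        return None
    return (m.group(1), int(m.group(2)))
```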
def range_filter(data, key, range_exp):
    """Filter a list by a single range expression.

    :param list data: List of dictionaries to be searched.
    :param string key: Key name to search within the data set.
    :param string range_exp: The expression describing the range of values.

    :returns: A list subset of the original data set.
    :raises: OpenStackCloudException on invalid range expressions.
    """
    filtered = []
    range_exp = str(range_exp).upper()

    if range_exp == "MIN":
        key_min = safe_dict_min(key, data)
        if key_min is None:
            return []
        for d in data:
            if int(d[key]) == key_min:
                filtered.append(d)
        return filtered
    elif range_exp == "MAX":
        key_max = safe_dict_max(key, data)
        if key_max is None:
            return []
        for d in data:
            if int(d[key]) == key_max:
                filtered.append(d)
        return filtered

    # Not looking for a min or max, so a range or exact value must
    # have been supplied.
    val_range = parse_range(range_exp)

    # If parsing the range fails, it must be a bad value.
    if val_range is None:
        raise exc.OpenStackCloudException(
            "Invalid range value: {value}".format(value=range_exp))

    op = val_range[0]
    if op:
        # Range matching
        for d in data:
            d_val = int(d[key])
            if op == '<':
                if d_val < val_range[1]:
                    filtered.append(d)
            elif op == '>':
                if d_val > val_range[1]:
                    filtered.append(d)
            elif op == '<=':
                if d_val <= val_range[1]:
                    filtered.append(d)
            elif op == '>=':
                if d_val >= val_range[1]:
                    filtered.append(d)
        return filtered
    else:
        # Exact number match
        for d in data:
            if int(d[key]) == val_range[1]:
                filtered.append(d)
        return filtered
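The dispatch over MIN/MAX and the comparison operators can be written more compactly with the `operator` module; a sketch that skips the originals' empty-set handling and cloud-specific exception type (`flavors` is illustrative data):

```python
import operator
import re

OPS = {'<': operator.lt, '>': operator.gt,
       '<=': operator.le, '>=': operator.ge, None: operator.eq}


def range_filter(data, key, range_exp):
    range_exp = str(range_exp).upper()
    if range_exp in ('MIN', 'MAX'):
        pick = min if range_exp == 'MIN' else max
        target = pick(int(d[key]) for d in data)
        return [d for d in data if int(d[key]) == target]
    # Longest operators first so no backtracking is needed.
    m = re.match(r'(<=|>=|<|>)?(\d+)$', range_exp)
    if m is None:
        raise ValueError("Invalid range value: %s" % range_exp)
    op, num = OPS[m.group(1)], int(m.group(2))
    return [d for d in data if op(int(d[key]), num)]


flavors = [{'ram': 512}, {'ram': 1024}, {'ram': 4096}]
```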
def generate_patches_from_kwargs(operation, **kwargs):
    """Given a set of parameters, returns a list with the
    valid patch values.

    :param string operation: The operation to perform.
    :param list kwargs: Dict of parameters.

    :returns: A list with the right patch values.
    """
    patches = []
    for k, v in kwargs.items():
        patch = {'op': operation,
                 'value': v,
                 'path': '/%s' % k}
        patches.append(patch)
    return sorted(patches)
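The result is a JSON-Patch style list. Note that `sorted(patches)` compares whole dicts, which raises TypeError on Python 3 once there is more than one patch; this sketch sorts by path instead, which appears to be the portable equivalent:

```python
def generate_patches_from_kwargs(operation, **kwargs):
    # One JSON-Patch entry per keyword argument, sorted for stable output.
    patches = [{'op': operation, 'value': v, 'path': '/%s' % k}
               for k, v in kwargs.items()]
    return sorted(patches, key=lambda p: p['path'])


patches = generate_patches_from_kwargs('replace', name='db01', enabled=True)
```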
class FileSegment(object):
    """File-like object to pass to requests."""

    def __init__(self, filename, offset, length):
        self.filename = filename
        self.offset = offset
        self.length = length
        self.pos = 0
        self._file = open(filename, 'rb')
        self.seek(0)

    def tell(self):
        return self._file.tell() - self.offset

    def seek(self, offset, whence=0):
        if whence == 0:
            self._file.seek(self.offset + offset, whence)
        elif whence == 1:
            self._file.seek(offset, whence)
        elif whence == 2:
            self._file.seek(self.offset + self.length - offset, 0)

    def read(self, size=-1):
        remaining = self.length - self.pos
        if remaining <= 0:
            return b''

        to_read = remaining if size < 0 else min(size, remaining)
        chunk = self._file.read(to_read)
        self.pos += len(chunk)

        return chunk

    def reset(self):
        self._file.seek(self.offset, 0)
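FileSegment exposes one byte window of a large file as a stream, so callers such as segmented uploads can hand requests just that slice. A sketch of the windowing idea over an in-memory stream (the real class opens the named file itself and also implements seek/tell):

```python
import io


class Segment:
    """Expose bytes [offset, offset + length) of a stream via read()."""

    def __init__(self, fileobj, offset, length):
        self._file = fileobj
        self.offset = offset
        self.length = length
        self.pos = 0
        self._file.seek(offset)

    def read(self, size=-1):
        remaining = self.length - self.pos
        if remaining <= 0:
            return b''
        # Never read past the end of the window.
        to_read = remaining if size < 0 else min(size, remaining)
        chunk = self._file.read(to_read)
        self.pos += len(chunk)
        return chunk


seg = Segment(io.BytesIO(b'0123456789'), offset=3, length=4)
```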
def _format_uuid_string(string):
    return (string.replace('urn:', '')
            .replace('uuid:', '')
            .strip('{}')
            .replace('-', '')
            .lower())


def _is_uuid_like(val):
    """Returns validation of a value as a UUID.

    :param val: Value to verify
    :type val: string
    :returns: bool

    .. versionchanged:: 1.1.1
        Support non-lowercase UUIDs.
    """
    try:
        return str(uuid.UUID(val)).replace('-', '') == _format_uuid_string(val)
    except (TypeError, ValueError, AttributeError):
        return False


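The canonicalisation accepts braces, urn:/uuid: prefixes, and mixed case before comparing against the parsed UUID; exercising the pair:

```python
import uuid


def _format_uuid_string(string):
    # Strip urn:/uuid: prefixes, braces, and dashes; lowercase the rest.
    return (string.replace('urn:', '')
            .replace('uuid:', '')
            .strip('{}')
            .replace('-', '')
            .lower())


def is_uuid_like(val):
    try:
        return str(uuid.UUID(val)).replace('-', '') == _format_uuid_string(val)
    except (TypeError, ValueError, AttributeError):
        return False


u = uuid.uuid4()
```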
0  openstack/cloud/cmd/__init__.py  Normal file
70  openstack/cloud/cmd/inventory.py  Executable file
@@ -0,0 +1,70 @@
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import json
import sys
import yaml

import openstack.cloud
import openstack.cloud.inventory


def output_format_dict(data, use_yaml):
    if use_yaml:
        return yaml.safe_dump(data, default_flow_style=False)
    else:
        return json.dumps(data, sort_keys=True, indent=2)


def parse_args():
    parser = argparse.ArgumentParser(description='OpenStack Inventory Module')
    parser.add_argument('--refresh', action='store_true',
                        help='Refresh cached information')
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument('--list', action='store_true',
                       help='List active servers')
    group.add_argument('--host', help='List details about the specific host')
    parser.add_argument('--private', action='store_true', default=False,
                        help='Use private IPs for interface_ip')
    parser.add_argument('--cloud', default=None,
                        help='Return data for one cloud only')
    parser.add_argument('--yaml', action='store_true', default=False,
                        help='Output data in nicely readable yaml')
    parser.add_argument('--debug', action='store_true', default=False,
                        help='Enable debug output')
    return parser.parse_args()


def main():
    args = parse_args()
    try:
        openstack.cloud.simple_logging(debug=args.debug)
        inventory = openstack.cloud.inventory.OpenStackInventory(
            refresh=args.refresh, private=args.private,
            cloud=args.cloud)
        if args.list:
            output = inventory.list_hosts()
        elif args.host:
            output = inventory.get_host(args.host)
        print(output_format_dict(output, args.yaml))
    except openstack.OpenStackCloudException as e:
        sys.stderr.write(e.message + '\n')
        sys.exit(1)
    sys.exit(0)


if __name__ == '__main__':
    main()
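The --list/--host pair implements Ansible's dynamic-inventory calling convention: exactly one of the two must be given. The mutually-exclusive-group behaviour in isolation (this sketch takes argv explicitly so it can be driven from code):

```python
import argparse


def parse_args(argv):
    parser = argparse.ArgumentParser(description='OpenStack Inventory Module')
    # required=True: the caller must pass exactly one of --list / --host.
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument('--list', action='store_true',
                       help='List active servers')
    group.add_argument('--host', help='List details about the specific host')
    return parser.parse_args(argv)


args = parse_args(['--host', 'web01'])
```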
173  openstack/cloud/exc.py  Normal file
@@ -0,0 +1,173 @@
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import sys

import munch
from requests import exceptions as _rex

from openstack import _log


class OpenStackCloudException(Exception):

    log_inner_exceptions = False

    def __init__(self, message, extra_data=None, **kwargs):
        args = [message]
        if extra_data:
            if isinstance(extra_data, munch.Munch):
                extra_data = extra_data.toDict()
            args.append("Extra: {0}".format(str(extra_data)))
        super(OpenStackCloudException, self).__init__(*args, **kwargs)
        self.extra_data = extra_data
        self.inner_exception = sys.exc_info()
        self.orig_message = message

    def log_error(self, logger=None):
        if not logger:
            logger = _log.setup_logging('openstack.cloud.exc')
        if self.inner_exception and self.inner_exception[1]:
            logger.error(self.orig_message, exc_info=self.inner_exception)

    def __str__(self):
        message = Exception.__str__(self)
        if (self.inner_exception and self.inner_exception[1]
                and not self.orig_message.endswith(
                    str(self.inner_exception[1]))):
            message = "%s (Inner Exception: %s)" % (
                message,
                str(self.inner_exception[1]))
        if self.log_inner_exceptions:
            self.log_error()
        return message


class OpenStackCloudCreateException(OpenStackCloudException):

    def __init__(self, resource, resource_id, extra_data=None, **kwargs):
        super(OpenStackCloudCreateException, self).__init__(
            message="Error creating {resource}: {resource_id}".format(
                resource=resource, resource_id=resource_id),
            extra_data=extra_data, **kwargs)
        self.resource_id = resource_id


class OpenStackCloudTimeout(OpenStackCloudException):
    pass


class OpenStackCloudUnavailableExtension(OpenStackCloudException):
    pass


class OpenStackCloudUnavailableFeature(OpenStackCloudException):
    pass


class OpenStackCloudHTTPError(OpenStackCloudException, _rex.HTTPError):

    def __init__(self, *args, **kwargs):
        OpenStackCloudException.__init__(self, *args, **kwargs)
        _rex.HTTPError.__init__(self, *args, **kwargs)


class OpenStackCloudBadRequest(OpenStackCloudHTTPError):
    """There is something wrong with the request payload.

    Possible reasons can include malformed json or invalid values to parameters
    such as flavorRef to a server create.
    """


class OpenStackCloudURINotFound(OpenStackCloudHTTPError):
    pass


# Backwards compat
OpenStackCloudResourceNotFound = OpenStackCloudURINotFound
def _log_response_extras(response):
|
||||
# Sometimes we get weird HTML errors. This is usually from load balancers
|
||||
# or other things. Log them to a special logger so that they can be
|
||||
# toggled indepdently - and at debug level so that a person logging
|
||||
# openstack.cloud.* only gets them at debug.
|
||||
if response.headers.get('content-type') != 'text/html':
|
||||
return
|
||||
try:
|
||||
if int(response.headers.get('content-length', 0)) == 0:
|
||||
return
|
||||
except Exception:
|
||||
return
|
||||
logger = _log.setup_logging('openstack.cloud.http')
|
||||
if response.reason:
|
||||
logger.debug(
|
||||
"Non-standard error '{reason}' returned from {url}:".format(
|
||||
reason=response.reason,
|
||||
url=response.url))
|
||||
else:
|
||||
logger.debug(
|
||||
"Non-standard error returned from {url}:".format(
|
||||
url=response.url))
|
||||
for response_line in response.text.split('\n'):
|
||||
logger.debug(response_line)
|
||||
|
||||
|
||||
# Logic shamelessly stolen from requests
|
||||
def raise_from_response(response, error_message=None):
|
||||
msg = ''
|
||||
if 400 <= response.status_code < 500:
|
||||
source = "Client"
|
||||
elif 500 <= response.status_code < 600:
|
||||
source = "Server"
|
||||
else:
|
||||
return
|
||||
|
||||
remote_error = "Error for url: {url}".format(url=response.url)
|
||||
try:
|
||||
details = response.json()
|
||||
# Nova returns documents that look like
|
||||
# {statusname: 'message': message, 'code': code}
|
||||
detail_keys = list(details.keys())
|
||||
if len(detail_keys) == 1:
|
||||
detail_key = detail_keys[0]
|
||||
detail_message = details[detail_key].get('message')
|
||||
if detail_message:
|
||||
remote_error += " {message}".format(message=detail_message)
|
||||
except ValueError:
|
||||
if response.reason:
|
||||
remote_error += " {reason}".format(reason=response.reason)
|
||||
|
||||
_log_response_extras(response)
|
||||
|
||||
if error_message:
|
||||
msg = '{error_message}. ({code}) {source} {remote_error}'.format(
|
||||
error_message=error_message,
|
||||
source=source,
|
||||
code=response.status_code,
|
||||
remote_error=remote_error)
|
||||
else:
|
||||
msg = '({code}) {source} {remote_error}'.format(
|
||||
code=response.status_code,
|
||||
source=source,
|
||||
remote_error=remote_error)
|
||||
|
||||
# Special case 404 since we raised a specific one for neutron exceptions
|
||||
# before
|
||||
if response.status_code == 404:
|
||||
raise OpenStackCloudURINotFound(msg, response=response)
|
||||
elif response.status_code == 400:
|
||||
raise OpenStackCloudBadRequest(msg, response=response)
|
||||
if msg:
|
||||
raise OpenStackCloudHTTPError(msg, response=response)
|
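The 4xx/5xx bucketing that `raise_from_response` performs can be illustrated with a minimal stand-alone sketch (the `classify_status` helper below is hypothetical, not part of the module, and omits the exception raising and message formatting):

```python
def classify_status(status_code):
    """Mirror the status-code bucketing used by raise_from_response."""
    if 400 <= status_code < 500:
        return "Client"      # client-side errors raise OpenStackCloud* exceptions
    elif 500 <= status_code < 600:
        return "Server"      # server-side errors do too, with a different label
    return None              # anything else returns without raising

print(classify_status(404))  # → Client
print(classify_status(503))  # → Server
```

Note that 2xx and 3xx codes fall through and `raise_from_response` simply returns, so callers only need to guard against the error classes.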
85
openstack/cloud/inventory.py
Normal file
@ -0,0 +1,85 @@
# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import functools

import openstack.cloud
import openstack.config
from openstack.cloud import _utils


class OpenStackInventory(object):

    # Put this here so the capability can be detected with hasattr on the class
    extra_config = None

    def __init__(
            self, config_files=None, refresh=False, private=False,
            config_key=None, config_defaults=None, cloud=None,
            use_direct_get=False):
        if config_files is None:
            config_files = []
        config = openstack.config.loader.OpenStackConfig(
            config_files=openstack.config.loader.CONFIG_FILES + config_files)
        self.extra_config = config.get_extra_config(
            config_key, config_defaults)

        if cloud is None:
            self.clouds = [
                openstack.OpenStackCloud(cloud_config=cloud_config)
                for cloud_config in config.get_all_clouds()
            ]
        else:
            try:
                self.clouds = [
                    openstack.OpenStackCloud(
                        cloud_config=config.get_one_cloud(cloud))
                ]
            except openstack.config.exceptions.OpenStackConfigException as e:
                raise openstack.OpenStackCloudException(e)

        if private:
            for cloud in self.clouds:
                cloud.private = True

        # Handle manual invalidation of entire persistent cache
        if refresh:
            for cloud in self.clouds:
                cloud._cache.invalidate()

    def list_hosts(self, expand=True, fail_on_cloud_config=True):
        hostvars = []

        for cloud in self.clouds:
            try:
                # Cycle on servers
                for server in cloud.list_servers(detailed=expand):
                    hostvars.append(server)
            except openstack.OpenStackCloudException:
                # Don't fail on one particular cloud as others may work
                if fail_on_cloud_config:
                    raise

        return hostvars

    def search_hosts(self, name_or_id=None, filters=None, expand=True):
        hosts = self.list_hosts(expand=expand)
        return _utils._filter_list(hosts, name_or_id, filters)

    def get_host(self, name_or_id, filters=None, expand=True):
        if expand:
            func = self.search_hosts
        else:
            func = functools.partial(self.search_hosts, expand=False)
        return _utils._get_entity(self, func, name_or_id, filters)
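`get_host` uses `functools.partial` to pre-bind `expand=False` so that `_get_entity` can call the search function with a uniform signature. A self-contained sketch of that pre-binding pattern (the `search_hosts` stub below is hypothetical, standing in for the real method):

```python
import functools

def search_hosts(name_or_id=None, expand=True):
    # stand-in for OpenStackInventory.search_hosts
    return {"name": name_or_id, "expand": expand}

# get_host pre-binds expand=False; the caller only passes name_or_id
func = functools.partial(search_hosts, expand=False)
print(func("web1"))  # → {'name': 'web1', 'expand': False}
```

This keeps the "one callable taking name_or_id and filters" contract intact regardless of whether the caller wanted expanded records.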
590
openstack/cloud/meta.py
Normal file
@ -0,0 +1,590 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import munch
import ipaddress
import six
import socket

from openstack import _log
from openstack.cloud import exc


NON_CALLABLES = (six.string_types, bool, dict, int, float, list, type(None))


def find_nova_interfaces(addresses, ext_tag=None, key_name=None, version=4,
                         mac_addr=None):
    ret = []
    for (k, v) in iter(addresses.items()):
        if key_name is not None and k != key_name:
            # key_name is specified and it doesn't match the current network.
            # Continue with the next one
            continue

        for interface_spec in v:
            if ext_tag is not None:
                if 'OS-EXT-IPS:type' not in interface_spec:
                    # ext_tag is specified, but this interface has no tag
                    # We could actually return right away as this means that
                    # this cloud doesn't support OS-EXT-IPS. Nevertheless,
                    # it would be better to perform an explicit check. e.g.:
                    # cloud._has_nova_extension('OS-EXT-IPS')
                    # But this needs cloud to be passed to this function.
                    continue
                elif interface_spec['OS-EXT-IPS:type'] != ext_tag:
                    # Type doesn't match, continue with next one
                    continue

            if mac_addr is not None:
                if 'OS-EXT-IPS-MAC:mac_addr' not in interface_spec:
                    # mac_addr is specified, but this interface has no mac_addr
                    # We could actually return right away as this means that
                    # this cloud doesn't support OS-EXT-IPS-MAC. Nevertheless,
                    # it would be better to perform an explicit check. e.g.:
                    # cloud._has_nova_extension('OS-EXT-IPS-MAC')
                    # But this needs cloud to be passed to this function.
                    continue
                elif interface_spec['OS-EXT-IPS-MAC:mac_addr'] != mac_addr:
                    # MAC doesn't match, continue with next one
                    continue

            if interface_spec['version'] == version:
                ret.append(interface_spec)
    return ret


def find_nova_addresses(addresses, ext_tag=None, key_name=None, version=4,
                        mac_addr=None):
    interfaces = find_nova_interfaces(addresses, ext_tag, key_name, version,
                                      mac_addr)
    addrs = [i['addr'] for i in interfaces]
    return addrs
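The filtering above operates on the Nova `addresses` dict shipped in a server record. A simplified, self-contained re-implementation (an illustrative sketch, not the module function; the sample addresses are invented) shows how `ext_tag` and `key_name` narrow the result:

```python
# A trimmed Nova "addresses" dict as it appears in a server record
addresses = {
    'private': [
        {'version': 4, 'addr': '10.0.0.5',
         'OS-EXT-IPS:type': 'fixed',
         'OS-EXT-IPS-MAC:mac_addr': 'fa:16:3e:00:00:01'},
        {'version': 4, 'addr': '203.0.113.10',
         'OS-EXT-IPS:type': 'floating',
         'OS-EXT-IPS-MAC:mac_addr': 'fa:16:3e:00:00:01'},
    ],
}

def find_addrs(addresses, ext_tag=None, key_name=None, version=4):
    # simplified sketch of find_nova_addresses' filtering logic
    ret = []
    for name, interfaces in addresses.items():
        if key_name is not None and name != key_name:
            continue  # wrong network name
        for spec in interfaces:
            if ext_tag is not None and spec.get('OS-EXT-IPS:type') != ext_tag:
                continue  # wrong address type
            if spec['version'] == version:
                ret.append(spec['addr'])
    return ret

print(find_addrs(addresses, ext_tag='fixed'))     # → ['10.0.0.5']
print(find_addrs(addresses, ext_tag='floating'))  # → ['203.0.113.10']
```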


def get_server_ip(server, public=False, cloud_public=True, **kwargs):
    """Get an IP from the Nova addresses dict

    :param server: The server to pull the address from
    :param public: Whether the address we're looking for should be considered
                   'public' and therefore reachability tests should be
                   used. (defaults to False)
    :param cloud_public: Whether the cloud has been configured to use private
                         IPs from servers as the interface_ip. This inverts the
                         public reachability logic, as in this case it's the
                         private ip we expect shade to be able to reach
    """
    addrs = find_nova_addresses(server['addresses'], **kwargs)
    return find_best_address(
        addrs, socket.AF_INET, public=public, cloud_public=cloud_public)


def get_server_private_ip(server, cloud=None):
    """Find the private IP address

    If Neutron is available, search for a port on a network where
    `router:external` is False and `shared` is False. This combination
    indicates a private network with private IP addresses. This port should
    have the private IP.

    If Neutron is not available, or something goes wrong communicating with it,
    as a fallback, try the list of addresses associated with the server dict,
    looking for an IP type tagged as 'fixed' in the network named 'private'.

    Last resort, ignore the IP type and just look for an IP on the 'private'
    network (e.g., Rackspace).
    """
    if cloud and not cloud.use_internal_network():
        return None

    # Try to get a floating IP interface. If we have one then return the
    # private IP address associated with that floating IP for consistency.
    fip_ints = find_nova_interfaces(server['addresses'], ext_tag='floating')
    fip_mac = None
    if fip_ints:
        fip_mac = fip_ints[0].get('OS-EXT-IPS-MAC:mac_addr')

    # Short circuit the ports/networks search below with a heavily cached
    # and possibly pre-configured network name
    if cloud:
        int_nets = cloud.get_internal_ipv4_networks()
        for int_net in int_nets:
            int_ip = get_server_ip(
                server, key_name=int_net['name'],
                cloud_public=not cloud.private,
                mac_addr=fip_mac)
            if int_ip is not None:
                return int_ip

    ip = get_server_ip(
        server, ext_tag='fixed', key_name='private', mac_addr=fip_mac)
    if ip:
        return ip

    # Last resort, and Rackspace
    return get_server_ip(
        server, key_name='private')


def get_server_external_ipv4(cloud, server):
    """Find an externally routable IP for the server.

    There are 5 different scenarios we have to account for:

    * Cloud has externally routable IP from neutron but neutron APIs don't
      work (only info available is in nova server record) (rackspace)
    * Cloud has externally routable IP from neutron (runabove, ovh)
    * Cloud has externally routable IP from neutron AND supports optional
      private tenant networks (vexxhost, unitedstack)
    * Cloud only has private tenant network provided by neutron and requires
      floating-ip for external routing (dreamhost, hp)
    * Cloud only has private tenant network provided by nova-network and
      requires floating-ip for external routing (auro)

    :param cloud: the cloud we're working with
    :param server: the server dict from which we want to get an IPv4 address
    :return: a string containing the IPv4 address or None
    """

    if not cloud.use_external_network():
        return None

    if server['accessIPv4']:
        return server['accessIPv4']

    # Short circuit the ports/networks search below with a heavily cached
    # and possibly pre-configured network name
    ext_nets = cloud.get_external_ipv4_networks()
    for ext_net in ext_nets:
        ext_ip = get_server_ip(
            server, key_name=ext_net['name'], public=True,
            cloud_public=not cloud.private)
        if ext_ip is not None:
            return ext_ip

    # Try to get a floating IP address
    # Much as I might find floating IPs annoying, if it has one, that's
    # almost certainly the one that wants to be used
    ext_ip = get_server_ip(
        server, ext_tag='floating', public=True,
        cloud_public=not cloud.private)
    if ext_ip is not None:
        return ext_ip

    # The cloud doesn't support Neutron or Neutron can't be contacted. The
    # server might have fixed addresses that are reachable from outside the
    # cloud (e.g. Rax) or have plain ol' floating IPs

    # Try to get an address from a network named 'public'
    ext_ip = get_server_ip(
        server, key_name='public', public=True,
        cloud_public=not cloud.private)
    if ext_ip is not None:
        return ext_ip

    # Nothing else works, try to find a globally routable IP address
    for interfaces in server['addresses'].values():
        for interface in interfaces:
            try:
                ip = ipaddress.ip_address(interface['addr'])
            except Exception:
                # Skip any error, we're looking for a working ip - if the
                # cloud returns garbage, it wouldn't be the first weird thing
                # but it still doesn't meet the requirement of "be a working
                # ip address"
                continue
            if ip.version == 4 and not ip.is_private:
                return str(ip)

    return None


def find_best_address(addresses, family, public=False, cloud_public=True):
    do_check = public == cloud_public
    if not addresses:
        return None
    if len(addresses) == 1:
        return addresses[0]
    if len(addresses) > 1 and do_check:
        # We only want to do this check if the address is supposed to be
        # reachable. Otherwise we're just debug log spamming on every listing
        # of private ip addresses
        for address in addresses:
            # Return the first one that is reachable
            try:
                connect_socket = socket.socket(family, socket.SOCK_STREAM, 0)
                connect_socket.settimeout(1)
                connect_socket.connect((address, 22, 0, 0))
                return address
            except Exception:
                pass
    # Give up and return the first - none work as far as we can tell
    if do_check:
        log = _log.setup_logging('shade')
        log.debug(
            'The cloud returned multiple addresses, and none of them seem'
            ' to work. That might be what you wanted, but we have no clue'
            " what's going on, so we just picked one at random")
    return addresses[0]


def get_server_external_ipv6(server):
    """ Get an IPv6 address reachable from outside the cloud.

    This function assumes that if a server has an IPv6 address, that address
    is reachable from outside the cloud.

    :param server: the server from which we want to get an IPv6 address
    :return: a string containing the IPv6 address or None
    """
    if server['accessIPv6']:
        return server['accessIPv6']
    addresses = find_nova_addresses(addresses=server['addresses'], version=6)
    return find_best_address(addresses, socket.AF_INET6, public=True)


def get_server_default_ip(cloud, server):
    """ Get the configured 'default' address

    It is possible in clouds.yaml to configure for a cloud a network that
    is the 'default_interface'. This is the network that should be used
    to talk to instances on the network.

    :param cloud: the cloud we're working with
    :param server: the server dict from which we want to get the default
                   IPv4 address
    :return: a string containing the IPv4 address or None
    """
    ext_net = cloud.get_default_network()
    if ext_net:
        if (cloud._local_ipv6 and not cloud.force_ipv4):
            # try 6 first, fall back to four
            versions = [6, 4]
        else:
            versions = [4]
        for version in versions:
            ext_ip = get_server_ip(
                server, key_name=ext_net['name'], version=version, public=True,
                cloud_public=not cloud.private)
            if ext_ip is not None:
                return ext_ip
    return None


def _get_interface_ip(cloud, server):
    """ Get the interface IP for the server

    Interface IP is the IP that should be used for communicating with the
    server. It is:
    - the IP on the configured default_interface network
    - if cloud.private, the private ip if it exists
    - if the server has a public ip, the public ip
    """
    default_ip = get_server_default_ip(cloud, server)
    if default_ip:
        return default_ip

    if cloud.private and server['private_v4']:
        return server['private_v4']

    if (server['public_v6'] and cloud._local_ipv6 and not cloud.force_ipv4):
        return server['public_v6']
    else:
        return server['public_v4']


def get_groups_from_server(cloud, server, server_vars):
    groups = []

    region = cloud.region_name
    cloud_name = cloud.name

    # Create a group for the cloud
    groups.append(cloud_name)

    # Create a group on region
    groups.append(region)

    # And one by cloud_region
    groups.append("%s_%s" % (cloud_name, region))

    # Check if group metadata key in servers' metadata
    group = server['metadata'].get('group')
    if group:
        groups.append(group)

    for extra_group in server['metadata'].get('groups', '').split(','):
        if extra_group:
            groups.append(extra_group)

    groups.append('instance-%s' % server['id'])

    for key in ('flavor', 'image'):
        if 'name' in server_vars[key]:
            groups.append('%s-%s' % (key, server_vars[key]['name']))

    for key, value in iter(server['metadata'].items()):
        groups.append('meta-%s_%s' % (key, value))

    az = server_vars.get('az', None)
    if az:
        # Make groups for az, region_az and cloud_region_az
        groups.append(az)
        groups.append('%s_%s' % (region, az))
        groups.append('%s_%s_%s' % (cloud.name, region, az))
    return groups


def expand_server_vars(cloud, server):
    """Backwards compatibility function."""
    return add_server_interfaces(cloud, server)


def _make_address_dict(fip, port):
    address = dict(version=4, addr=fip['floating_ip_address'])
    address['OS-EXT-IPS:type'] = 'floating'
    address['OS-EXT-IPS-MAC:mac_addr'] = port['mac_address']
    return address


def _get_supplemental_addresses(cloud, server):
    fixed_ip_mapping = {}
    for name, network in server['addresses'].items():
        for address in network:
            if address['version'] == 6:
                continue
            if address.get('OS-EXT-IPS:type') == 'floating':
                # We have a floating IP that nova knows about, do nothing
                return server['addresses']
            fixed_ip_mapping[address['addr']] = name
    try:
        # Don't bother doing this before the server is active, it's a waste
        # of an API call while polling for a server to come up
        if (cloud.has_service('network') and cloud._has_floating_ips() and
                server['status'] == 'ACTIVE'):
            for port in cloud.search_ports(
                    filters=dict(device_id=server['id'])):
                for fip in cloud.search_floating_ips(
                        filters=dict(port_id=port['id'])):
                    # This SHOULD return one and only one FIP - but doing
                    # it as a search/list lets the logic work regardless
                    if fip['fixed_ip_address'] not in fixed_ip_mapping:
                        log = _log.setup_logging('shade')
                        log.debug(
                            "The cloud returned floating ip %(fip)s attached"
                            " to server %(server)s but the fixed ip associated"
                            " with the floating ip in the neutron listing"
                            " does not exist in the nova listing. Something"
                            " is exceptionally broken.",
                            dict(fip=fip['id'], server=server['id']))
                    fixed_net = fixed_ip_mapping[fip['fixed_ip_address']]
                    server['addresses'][fixed_net].append(
                        _make_address_dict(fip, port))
    except exc.OpenStackCloudException:
        # If something goes wrong with a cloud call, that's cool - this is
        # an attempt to provide additional data and should not block forward
        # progress
        pass
    return server['addresses']


def add_server_interfaces(cloud, server):
    """Add network interface information to server.

    Query the cloud as necessary to add information to the server record
    about the network information needed to interface with the server.

    Ensures that public_v4, public_v6, private_v4, private_v6, interface_ip,
    accessIPv4 and accessIPv6 are always set.
    """
    # First, add an IP address. Set it to '' rather than None if it does
    # not exist to remain consistent with the pre-existing missing values
    server['addresses'] = _get_supplemental_addresses(cloud, server)
    server['public_v4'] = get_server_external_ipv4(cloud, server) or ''
    server['public_v6'] = get_server_external_ipv6(server) or ''
    server['private_v4'] = get_server_private_ip(server, cloud) or ''
    server['interface_ip'] = _get_interface_ip(cloud, server) or ''

    # Some clouds do not set these, but they're a regular part of the Nova
    # server record. Since we know them, go ahead and set them. In the case
    # where they were set previously, we use the values, so this will not
    # break clouds that provide the information
    if cloud.private and server['private_v4']:
        server['accessIPv4'] = server['private_v4']
    else:
        server['accessIPv4'] = server['public_v4']
    server['accessIPv6'] = server['public_v6']

    return server


def expand_server_security_groups(cloud, server):
    try:
        groups = cloud.list_server_security_groups(server)
    except exc.OpenStackCloudException:
        groups = []
    server['security_groups'] = groups or []


def get_hostvars_from_server(cloud, server, mounts=None):
    """Expand additional server information useful for ansible inventory.

    Variables in this function may make additional cloud queries to flesh out
    possibly interesting info, making it more expensive to call than
    expand_server_vars if caching is not set up. If caching is set up,
    the extra cost should be minimal.
    """
    server_vars = add_server_interfaces(cloud, server)

    flavor_id = server['flavor']['id']
    flavor_name = cloud.get_flavor_name(flavor_id)
    if flavor_name:
        server_vars['flavor']['name'] = flavor_name

    expand_server_security_groups(cloud, server)

    # OpenStack can return image as a string when you've booted from volume
    if str(server['image']) == server['image']:
        image_id = server['image']
        server_vars['image'] = dict(id=image_id)
    else:
        image_id = server['image'].get('id', None)
        if image_id:
            image_name = cloud.get_image_name(image_id)
            if image_name:
                server_vars['image']['name'] = image_name

    volumes = []
    if cloud.has_service('volume'):
        try:
            for volume in cloud.get_volumes(server):
                # Make things easier to consume elsewhere
                volume['device'] = volume['attachments'][0]['device']
                volumes.append(volume)
        except exc.OpenStackCloudException:
            pass
    server_vars['volumes'] = volumes
    if mounts:
        for mount in mounts:
            for vol in server_vars['volumes']:
                if vol['display_name'] == mount['display_name']:
                    if 'mount' in mount:
                        vol['mount'] = mount['mount']

    return server_vars


def _log_request_id(obj, request_id):
    if request_id:
        # Log the request id and object id in a specific logger. This way
        # someone can turn it on if they're interested in this kind of tracing.
        log = _log.setup_logging('openstack.cloud.request_ids')
        obj_id = None
        if isinstance(obj, dict):
            obj_id = obj.get('id', obj.get('uuid'))
        if obj_id:
            log.debug("Retrieved object %(id)s. Request ID %(request_id)s",
                      {'id': obj_id,
                       'request_id': request_id})
        else:
            log.debug("Retrieved a response. Request ID %(request_id)s",
                      {'request_id': request_id})

    return obj


def obj_to_munch(obj):
    """ Turn an object with attributes into a dict suitable for serializing.

    Some of the things that are returned in OpenStack are objects with
    attributes. That's awesome - except when you want to expose them as JSON
    structures. We use this as the basis of get_hostvars_from_server above so
    that we can just have a plain dict of all of the values that exist in the
    nova metadata for a server.
    """
    if obj is None:
        return None
    elif isinstance(obj, munch.Munch) or hasattr(obj, 'mock_add_spec'):
        # If we obj_to_munch twice, don't fail, just return the munch
        # Also, don't try to modify Mock objects - that way lies madness
        return obj
    elif isinstance(obj, dict):
        # The new request-id tracking spec:
        # https://specs.openstack.org/openstack/nova-specs/specs/juno/approved/log-request-id-mappings.html
        # adds a request-ids attribute to returned objects. It does this even
        # with dicts, which now become dict subclasses. So we want to convert
        # the dict we get, but we also want it to fall through to object
        # attribute processing so that we can also get the request_ids
        # data into our resulting object.
        instance = munch.Munch(obj)
    else:
        instance = munch.Munch()

    for key in dir(obj):
        try:
            value = getattr(obj, key)
        # some attributes can be defined as a @property, so we can't assure
        # to have a valid value
        # e.g. id in python-novaclient/tree/novaclient/v2/quotas.py
        except AttributeError:
            continue
        if isinstance(value, NON_CALLABLES) and not key.startswith('_'):
            instance[key] = value
    return instance


obj_to_dict = obj_to_munch


def obj_list_to_munch(obj_list):
    """Enumerate through lists of objects and return lists of dictionaries.

    Some of the objects returned in OpenStack are actually lists of objects,
    and in order to expose the data structures as JSON, we need to facilitate
    the conversion to lists of dictionaries.
    """
    return [obj_to_munch(obj) for obj in obj_list]


obj_list_to_dict = obj_list_to_munch


def warlock_to_dict(obj):
    # This function is unused in shade - but it is a public function, so
    # removing it would be rude. We don't actually have to depend on warlock
    # ourselves to keep this - so just leave it here.
    #
    # glanceclient v2 uses warlock to construct its objects. Warlock does
    # deep black magic to attribute look up to support validation things that
    # means we cannot use normal obj_to_munch
    obj_dict = munch.Munch()
    for (key, value) in obj.items():
        if isinstance(value, NON_CALLABLES) and not key.startswith('_'):
            obj_dict[key] = value
    return obj_dict


def get_and_munchify(key, data):
    """Get the value associated to key and convert it.

    The value will be converted in a Munch object or a list of Munch objects
    based on the type
    """
    result = data.get(key, []) if key else data
    if isinstance(result, list):
        return obj_list_to_munch(result)
    elif isinstance(result, dict):
        return obj_to_munch(result)
    return result
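The attribute-harvesting loop at the heart of `obj_to_munch` can be demonstrated without the `munch` dependency (a plain-dict sketch; the `Quota` class below is an invented stand-in for a client result object):

```python
# Same plain-data whitelist idea as meta.NON_CALLABLES (six dropped for the sketch)
NON_CALLABLES = (str, bool, dict, int, float, list, type(None))

class Quota:
    # stand-in for an object returned by a python-*client library
    cores = 20
    ram = 51200
    def refresh(self):
        pass

def obj_to_dict_sketch(obj):
    # keep non-private, non-callable, plain-data attributes only
    out = {}
    for key in dir(obj):
        try:
            value = getattr(obj, key)
        except AttributeError:
            # some attributes may be @property and raise on access
            continue
        if isinstance(value, NON_CALLABLES) and not key.startswith('_'):
            out[key] = value
    return out

print(obj_to_dict_sketch(Quota()))  # → {'cores': 20, 'ram': 51200}
```

Bound methods like `refresh` are filtered out because they are not instances of any `NON_CALLABLES` type, and dunder attributes are dropped by the leading-underscore check.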
8564
openstack/cloud/openstackcloud.py
Normal file
File diff suppressed because it is too large
2419
openstack/cloud/operatorcloud.py
Normal file
File diff suppressed because it is too large
334
openstack/cloud/task_manager.py
Normal file
@ -0,0 +1,334 @@
# Copyright (C) 2011-2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.

import abc
import concurrent.futures
import sys
import threading
import time
import types

import keystoneauth1.exceptions
import six

from openstack import _log
from openstack.cloud import exc
from openstack.cloud import meta


def _is_listlike(obj):
    # NOTE(Shrews): Since the client API might decide to subclass one
    # of these result types, we use isinstance() here instead of type().
    return (
        isinstance(obj, list) or
        isinstance(obj, types.GeneratorType))


def _is_objlike(obj):
    # NOTE(Shrews): Since the client API might decide to subclass one
    # of these result types, we use isinstance() here instead of type().
    return (
        not isinstance(obj, bool) and
        not isinstance(obj, int) and
        not isinstance(obj, float) and
        not isinstance(obj, six.string_types) and
        not isinstance(obj, set) and
        not isinstance(obj, tuple))


@six.add_metaclass(abc.ABCMeta)
class BaseTask(object):
    """Represent a task to be performed on an OpenStack Cloud.

    Some consumers need to inject things like rate-limiting or auditing
    around each external REST interaction. Task provides an interface
    to encapsulate each such interaction. Also, although shade itself
    operates normally in a single-threaded direct action manner, consuming
    programs may provide a multi-threaded TaskManager themselves. For that
    reason, Task uses threading events to ensure appropriate wait conditions.
    These should be a no-op in single-threaded applications.

    A consumer is expected to overload the main method.

    :param dict kw: Any args that are expected to be passed to something in
                    the main payload at execution time.
    """

    def __init__(self, **kw):
        self._exception = None
        self._traceback = None
        self._result = None
        self._response = None
        self._finished = threading.Event()
        self.run_async = False
        self.args = kw
        self.name = type(self).__name__

    @abc.abstractmethod
    def main(self, client):
        """ Override this method with the actual workload to be performed """

    def done(self, result):
        self._result = result
        self._finished.set()

    def exception(self, e, tb):
        self._exception = e
        self._traceback = tb
        self._finished.set()

    def wait(self, raw=False):
        self._finished.wait()

        if self._exception:
            six.reraise(type(self._exception), self._exception,
                        self._traceback)

        return self._result

    def run(self, client):
        self._client = client
        try:
            # Retry one time if we get a retriable connection failure
            try:
                # Keep time for connection retrying logging
                start = time.time()
                self.done(self.main(client))
            except keystoneauth1.exceptions.RetriableConnectionFailure as e:
                end = time.time()
                dt = end - start
                if client.region_name:
                    client.log.debug(str(e))
                    client.log.debug(
                        "Connection failure on %(cloud)s:%(region)s"
                        " for %(name)s after %(secs)s seconds, retrying",
                        {'cloud': client.name,
'region': client.region_name,
|
||||
'secs': dt,
|
||||
'name': self.name})
|
||||
else:
|
||||
client.log.debug(
|
||||
"Connection failure on %(cloud)s for %(name)s after"
|
||||
" %(secs)s seconds, retrying",
|
||||
{'cloud': client.name, 'name': self.name, 'secs': dt})
|
||||
self.done(self.main(client))
|
||||
except Exception:
|
||||
raise
|
||||
except Exception as e:
|
||||
self.exception(e, sys.exc_info()[2])
|
||||
|
||||
|
||||
class Task(BaseTask):
|
||||
""" Shade specific additions to the BaseTask Interface. """
|
||||
|
||||
def wait(self, raw=False):
|
||||
super(Task, self).wait()
|
||||
|
||||
if raw:
|
||||
# Do NOT convert the result.
|
||||
return self._result
|
||||
|
||||
if _is_listlike(self._result):
|
||||
return meta.obj_list_to_munch(self._result)
|
||||
elif _is_objlike(self._result):
|
||||
return meta.obj_to_munch(self._result)
|
||||
else:
|
||||
return self._result
|
||||
|
||||
|
||||
class RequestTask(BaseTask):
|
||||
""" Extensions to the Shade Tasks to handle raw requests """
|
||||
|
||||
# It's totally legit for calls to not return things
|
||||
result_key = None
|
||||
|
||||
# keystoneauth1 throws keystoneauth1.exceptions.http.HttpError on !200
|
||||
def done(self, result):
|
||||
self._response = result
|
||||
|
||||
try:
|
||||
result_json = self._response.json()
|
||||
except ValueError as e:
|
||||
result_json = self._response.text
|
||||
self._client.log.debug(
|
||||
'Could not decode json in response: %(e)s', {'e': str(e)})
|
||||
self._client.log.debug(result_json)
|
||||
|
||||
if self.result_key:
|
||||
self._result = result_json[self.result_key]
|
||||
else:
|
||||
self._result = result_json
|
||||
|
||||
self._request_id = self._response.headers.get('x-openstack-request-id')
|
||||
self._finished.set()
|
||||
|
||||
def wait(self, raw=False):
|
||||
super(RequestTask, self).wait()
|
||||
|
||||
if raw:
|
||||
# Do NOT convert the result.
|
||||
return self._result
|
||||
|
||||
if _is_listlike(self._result):
|
||||
return meta.obj_list_to_munch(
|
||||
self._result, request_id=self._request_id)
|
||||
elif _is_objlike(self._result):
|
||||
return meta.obj_to_munch(self._result, request_id=self._request_id)
|
||||
return self._result
|
||||
|
||||
|
||||
def _result_filter_cb(result):
|
||||
return result
|
||||
|
||||
|
||||
def generate_task_class(method, name, result_filter_cb):
|
||||
if name is None:
|
||||
if callable(method):
|
||||
name = method.__name__
|
||||
else:
|
||||
name = method
|
||||
|
||||
class RunTask(Task):
|
||||
def __init__(self, **kw):
|
||||
super(RunTask, self).__init__(**kw)
|
||||
self.name = name
|
||||
self._method = method
|
||||
|
||||
def wait(self, raw=False):
|
||||
super(RunTask, self).wait()
|
||||
|
||||
if raw:
|
||||
# Do NOT convert the result.
|
||||
return self._result
|
||||
return result_filter_cb(self._result)
|
||||
|
||||
def main(self, client):
|
||||
if callable(self._method):
|
||||
return method(**self.args)
|
||||
else:
|
||||
meth = getattr(client, self._method)
|
||||
return meth(**self.args)
|
||||
return RunTask
|
||||
|
||||
|
||||
class TaskManager(object):
|
||||
log = _log.setup_logging('openstack.cloud.task_manager')
|
||||
|
||||
def __init__(
|
||||
self, client, name, result_filter_cb=None, workers=5, **kwargs):
|
||||
self.name = name
|
||||
self._client = client
|
||||
self._executor = concurrent.futures.ThreadPoolExecutor(
|
||||
max_workers=workers)
|
||||
if not result_filter_cb:
|
||||
self._result_filter_cb = _result_filter_cb
|
||||
else:
|
||||
self._result_filter_cb = result_filter_cb
|
||||
|
||||
def set_client(self, client):
|
||||
self._client = client
|
||||
|
||||
def stop(self):
|
||||
""" This is a direct action passthrough TaskManager """
|
||||
self._executor.shutdown(wait=True)
|
||||
|
||||
def run(self):
|
||||
""" This is a direct action passthrough TaskManager """
|
||||
pass
|
||||
|
||||
def submit_task(self, task, raw=False):
|
||||
"""Submit and execute the given task.
|
||||
|
||||
:param task: The task to execute.
|
||||
:param bool raw: If True, return the raw result as received from the
|
||||
underlying client call.
|
||||
"""
|
||||
return self.run_task(task=task, raw=raw)
|
||||
|
||||
def _run_task_async(self, task, raw=False):
|
||||
self.log.debug(
|
||||
"Manager %s submitting task %s", self.name, task.name)
|
||||
return self._executor.submit(self._run_task, task, raw=raw)
|
||||
|
||||
def run_task(self, task, raw=False):
|
||||
if hasattr(task, 'run_async') and task.run_async:
|
||||
return self._run_task_async(task, raw=raw)
|
||||
else:
|
||||
return self._run_task(task, raw=raw)
|
||||
|
||||
def _run_task(self, task, raw=False):
|
||||
self.log.debug(
|
||||
"Manager %s running task %s", self.name, task.name)
|
||||
start = time.time()
|
||||
task.run(self._client)
|
||||
end = time.time()
|
||||
dt = end - start
|
||||
self.log.debug(
|
||||
"Manager %s ran task %s in %ss", self.name, task.name, dt)
|
||||
|
||||
self.post_run_task(dt, task)
|
||||
|
||||
return task.wait(raw)
|
||||
|
||||
def post_run_task(self, elasped_time, task):
|
||||
pass
|
||||
|
||||
# Backwards compatibility
|
||||
submitTask = submit_task
|
||||
|
||||
def submit_function(
|
||||
self, method, name=None, result_filter_cb=None, **kwargs):
|
||||
""" Allows submitting an arbitrary method for work.
|
||||
|
||||
:param method: Method to run in the TaskManager. Can be either the
|
||||
name of a method to find on self.client, or a callable.
|
||||
"""
|
||||
if not result_filter_cb:
|
||||
result_filter_cb = self._result_filter_cb
|
||||
|
||||
task_class = generate_task_class(method, name, result_filter_cb)
|
||||
|
||||
return self._executor.submit_task(task_class(**kwargs))
|
||||
|
||||
|
||||
def wait_for_futures(futures, raise_on_error=True, log=None):
|
||||
'''Collect results or failures from a list of running future tasks.'''
|
||||
|
||||
results = []
|
||||
retries = []
|
||||
|
||||
# Check on each result as its thread finishes
|
||||
for completed in concurrent.futures.as_completed(futures):
|
||||
try:
|
||||
result = completed.result()
|
||||
# We have to do this here because munch_response doesn't
|
||||
# get called on async job results
|
||||
exc.raise_from_response(result)
|
||||
results.append(result)
|
||||
except (keystoneauth1.exceptions.RetriableConnectionFailure,
|
||||
exc.OpenStackCloudException) as e:
|
||||
if log:
|
||||
log.debug(
|
||||
"Exception processing async task: {e}".format(
|
||||
e=str(e)),
|
||||
exc_info=True)
|
||||
# If we get an exception, put the result into a list so we
|
||||
# can try again
|
||||
if raise_on_error:
|
||||
raise
|
||||
else:
|
||||
retries.append(result)
|
||||
return results, retries
|
0	openstack/cloud/tests/__init__.py	Normal file
90	openstack/config/__init__.py	Normal file
@ -0,0 +1,90 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

from openstack.config.loader import OpenStackConfig  # noqa

_config = None


def get_config(
        service_key=None, options=None,
        app_name=None, app_version=None,
        **kwargs):
    load_yaml_config = kwargs.pop('load_yaml_config', True)
    global _config
    if not _config:
        _config = OpenStackConfig(
            load_yaml_config=load_yaml_config,
            app_name=app_name, app_version=app_version)
    if options:
        _config.register_argparse_arguments(options, sys.argv, service_key)
        parsed_options = options.parse_known_args(sys.argv)
    else:
        parsed_options = None

    return _config.get_one_cloud(options=parsed_options, **kwargs)


def make_rest_client(
        service_key, options=None,
        app_name=None, app_version=None,
        **kwargs):
    """Simple wrapper function. It has almost no features.

    This will get you a raw requests Session Adapter that is mounted
    on the given service from the keystone service catalog. If you leave
    off cloud and region_name, it will assume that you've got env vars
    set, but if you give them, it'll use clouds.yaml as you'd expect.

    This function is deliberately simple. It has no flexibility. If you
    want flexibility, you can make a cloud config object and call
    get_session_client on it. This function is to make it easy to poke
    at OpenStack REST APIs with a properly configured keystone session.
    """
    cloud = get_config(
        service_key=service_key, options=options,
        app_name=app_name, app_version=app_version,
        **kwargs)
    return cloud.get_session_client(service_key)


# Backwards compat - simple_client was a terrible name
simple_client = make_rest_client
# Backwards compat - session_client was a terrible name
session_client = make_rest_client


def make_connection(options=None, **kwargs):
    """Simple wrapper for getting an OpenStack SDK Connection.

    For completeness, provide a mechanism that matches make_client and
    make_rest_client. The heavy lifting here is done in openstacksdk.

    :rtype: :class:`~openstack.connection.Connection`
    """
    from openstack import connection
    cloud = get_config(options=options, **kwargs)
    return connection.from_config(cloud_config=cloud, options=options)


def make_cloud(options=None, **kwargs):
    """Simple wrapper for getting an OpenStackCloud object.

    A mechanism that matches make_connection and make_rest_client.

    :rtype: :class:`~openstack.OpenStackCloud`
    """
    import openstack.cloud
    cloud = get_config(options=options, **kwargs)
    return openstack.OpenStackCloud(cloud_config=cloud, **kwargs)
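get_config above memoizes a module-level OpenStackConfig instance, so repeated calls share one parsed configuration. The same caching shape can be shown in isolation; `get_config_cached` and the `dict` factory here are hypothetical stand-ins for illustration only:

```python
_cached = None


def get_config_cached(factory):
    """Create the config object on the first call, reuse it afterwards."""
    global _cached
    if _cached is None:
        # Expensive work (e.g. parsing clouds.yaml) happens only once.
        _cached = factory()
    return _cached


first = get_config_cached(dict)
second = get_config_cached(dict)
assert first is second  # the factory ran once; both names share the object
```

The trade-off of this pattern is that a later call cannot pass different constructor arguments; in the module above, only the first caller's `app_name`/`app_version` take effect.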
558	openstack/config/cloud_config.py	Normal file
@ -0,0 +1,558 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import importlib
import math
import warnings

from keystoneauth1 import adapter
import keystoneauth1.exceptions.catalog
from keystoneauth1 import session
import requestsexceptions

import openstack
from openstack import _log
from openstack.config import constructors
from openstack.config import exceptions


def _get_client(service_key):
    class_mapping = constructors.get_constructor_mapping()
    if service_key not in class_mapping:
        raise exceptions.OpenStackConfigException(
            "Service {service_key} is unknown. Please pass in a client"
            " constructor or submit a patch to os-client-config".format(
                service_key=service_key))
    mod_name, ctr_name = class_mapping[service_key].rsplit('.', 1)
    lib_name = mod_name.split('.')[0]
    try:
        mod = importlib.import_module(mod_name)
    except ImportError:
        raise exceptions.OpenStackConfigException(
            "Client for '{service_key}' was requested, but"
            " {mod_name} was unable to be imported. Either import"
            " the module yourself and pass the constructor in as an argument,"
            " or perhaps you do not have python-{lib_name} installed.".format(
                service_key=service_key,
                mod_name=mod_name,
                lib_name=lib_name))
    try:
        ctr = getattr(mod, ctr_name)
    except AttributeError:
        raise exceptions.OpenStackConfigException(
            "Client for '{service_key}' was requested, but although"
            " {mod_name} imported fine, the constructor at {fullname}"
            " was not found. Please check your installation, we have no"
            " clue what is wrong with your computer.".format(
                service_key=service_key,
                mod_name=mod_name,
                fullname=class_mapping[service_key]))
    return ctr


def _make_key(key, service_type):
    if not service_type:
        return key
    else:
        service_type = service_type.lower().replace('-', '_')
        return "_".join([service_type, key])


class CloudConfig(object):
    def __init__(self, name, region, config,
                 force_ipv4=False, auth_plugin=None,
                 openstack_config=None, session_constructor=None,
                 app_name=None, app_version=None):
        self.name = name
        self.region = region
        self.config = config
        self.log = _log.setup_logging(__name__)
        self._force_ipv4 = force_ipv4
        self._auth = auth_plugin
        self._openstack_config = openstack_config
        self._keystone_session = None
        self._session_constructor = session_constructor or session.Session
        self._app_name = app_name
        self._app_version = app_version

    def __getattr__(self, key):
        """Return arbitrary attributes."""

        if key.startswith('os_'):
            key = key[3:]

        if key in [attr.replace('-', '_') for attr in self.config]:
            return self.config[key]
        else:
            return None

    def __iter__(self):
        return self.config.__iter__()

    def __eq__(self, other):
        return (self.name == other.name and self.region == other.region
                and self.config == other.config)

    def __ne__(self, other):
        return not self == other

    def set_session_constructor(self, session_constructor):
        """Sets the Session constructor."""
        self._session_constructor = session_constructor

    def get_requests_verify_args(self):
        """Return the verify and cert values for the requests library."""
        if self.config['verify'] and self.config['cacert']:
            verify = self.config['cacert']
        else:
            verify = self.config['verify']
            if self.config['cacert']:
                warnings.warn(
                    "You are specifying a cacert for the cloud {0} but "
                    "also to ignore the host verification. The host SSL cert "
                    "will not be verified.".format(self.name))

        cert = self.config.get('cert', None)
        if cert:
            if self.config['key']:
                cert = (cert, self.config['key'])
        return (verify, cert)

    def get_services(self):
        """Return a list of service types we know something about."""
        services = []
        for key, val in self.config.items():
            if (key.endswith('api_version')
                    or key.endswith('service_type')
                    or key.endswith('service_name')):
                services.append("_".join(key.split('_')[:-2]))
        return list(set(services))

    def get_auth_args(self):
        return self.config['auth']

    def get_interface(self, service_type=None):
        key = _make_key('interface', service_type)
        interface = self.config.get('interface')
        return self.config.get(key, interface)

    def get_region_name(self, service_type=None):
        if not service_type:
            return self.region
        key = _make_key('region_name', service_type)
        return self.config.get(key, self.region)

    def get_api_version(self, service_type):
        key = _make_key('api_version', service_type)
        return self.config.get(key, None)

    def get_service_type(self, service_type):
        key = _make_key('service_type', service_type)
        # Cinder did an evil thing where they defined a second service
        # type in the catalog. Of course, that's insane, so let's hide this
        # atrocity from the as-yet-unsullied eyes of our users.
        # Of course, if the user requests a volumev2, that structure should
        # still work.
        # What's even more amazing is that they did it AGAIN with cinder v3
        # And then I learned that mistral copied it.
        if service_type == 'volume':
            if self.get_api_version(service_type).startswith('2'):
                service_type = 'volumev2'
            elif self.get_api_version(service_type).startswith('3'):
                service_type = 'volumev3'
        elif service_type == 'workflow':
            if self.get_api_version(service_type).startswith('2'):
                service_type = 'workflowv2'
        return self.config.get(key, service_type)

    def get_service_name(self, service_type):
        key = _make_key('service_name', service_type)
        return self.config.get(key, None)

    def get_endpoint(self, service_type):
        key = _make_key('endpoint_override', service_type)
        old_key = _make_key('endpoint', service_type)
        return self.config.get(key, self.config.get(old_key, None))

    @property
    def prefer_ipv6(self):
        return not self._force_ipv4

    @property
    def force_ipv4(self):
        return self._force_ipv4

    def get_auth(self):
        """Return a keystoneauth plugin from the auth credentials."""
        return self._auth

    def get_session(self):
        """Return a keystoneauth session based on the auth credentials."""
        if self._keystone_session is None:
            if not self._auth:
                raise exceptions.OpenStackConfigException(
                    "Problem with auth parameters")
            (verify, cert) = self.get_requests_verify_args()
            # Turn off urllib3 warnings about insecure certs if we have
            # explicitly configured requests to tell it we do not want
            # cert verification
            if not verify:
                self.log.debug(
                    "Turning off SSL warnings for {cloud}:{region}"
                    " since verify=False".format(
                        cloud=self.name, region=self.region))
            requestsexceptions.squelch_warnings(insecure_requests=not verify)
            self._keystone_session = self._session_constructor(
                auth=self._auth,
                verify=verify,
                cert=cert,
                timeout=self.config['api_timeout'])
            if hasattr(self._keystone_session, 'additional_user_agent'):
                self._keystone_session.additional_user_agent.append(
                    ('openstacksdk', openstack.__version__))
            # Using old keystoneauth with new os-client-config fails if
            # we pass in app_name and app_version. Those are not essential,
            # nor a reason to bump our minimum, so just test for the session
            # having the attribute post creation and set them then.
            if hasattr(self._keystone_session, 'app_name'):
                self._keystone_session.app_name = self._app_name
            if hasattr(self._keystone_session, 'app_version'):
                self._keystone_session.app_version = self._app_version
        return self._keystone_session

    def get_service_catalog(self):
        """Helper method to grab the service catalog."""
        return self._auth.get_access(self.get_session()).service_catalog

    def get_session_client(self, service_key):
        """Return a prepped requests adapter for a given service.

        This is useful for making direct requests calls against a
        'mounted' endpoint. That is, if you do:

            client = get_session_client('compute')

        then you can do:

            client.get('/flavors')

        and it will work like you think.
        """

        return adapter.Adapter(
            session=self.get_session(),
            service_type=self.get_service_type(service_key),
            service_name=self.get_service_name(service_key),
            interface=self.get_interface(service_key),
            region_name=self.region)

    def _get_highest_endpoint(self, service_types, kwargs):
        session = self.get_session()
        for service_type in service_types:
            kwargs['service_type'] = service_type
            try:
                # Return the highest version we find that matches
                # the request
                return session.get_endpoint(**kwargs)
            except keystoneauth1.exceptions.catalog.EndpointNotFound:
                pass

    def get_session_endpoint(
            self, service_key, min_version=None, max_version=None):
        """Return the endpoint from config or the catalog.

        If a configuration lists an explicit endpoint for a service,
        return that. Otherwise, fetch the service catalog from the
        keystone session and return the appropriate endpoint.

        :param service_key: Generic key for service, such as 'compute' or
                            'network'
        """

        override_endpoint = self.get_endpoint(service_key)
        if override_endpoint:
            return override_endpoint
        endpoint = None
        kwargs = {
            'service_name': self.get_service_name(service_key),
            'region_name': self.region
        }
        kwargs['interface'] = self.get_interface(service_key)
        if service_key == 'volume' and not self.get_api_version('volume'):
            # If we don't have a configured cinder version, we can't know
            # to request a different service_type
            min_version = float(min_version or 1)
            max_version = float(max_version or 3)
            min_major = math.trunc(float(min_version))
            max_major = math.trunc(float(max_version))
            versions = range(int(max_major) + 1, int(min_major), -1)
            service_types = []
            for version in versions:
                if version == 1:
                    service_types.append('volume')
                else:
                    service_types.append('volumev{v}'.format(v=version))
        else:
            service_types = [self.get_service_type(service_key)]
        endpoint = self._get_highest_endpoint(service_types, kwargs)
        if not endpoint:
            self.log.warning(
                "Keystone catalog entry not found ("
                "service_type=%s,service_name=%s"
                "interface=%s,region_name=%s)",
                service_key,
                kwargs['service_name'],
                kwargs['interface'],
                kwargs['region_name'])
        return endpoint

    def get_legacy_client(
            self, service_key, client_class=None, interface_key=None,
            pass_version_arg=True, version=None, min_version=None,
            max_version=None, **kwargs):
        """Return a legacy OpenStack client object for the given config.

        Most of the OpenStack python-*client libraries have the same
        interface for their client constructors, but there are several
        parameters one wants to pass given a :class:`CloudConfig` object.

        In the future, OpenStack API consumption should be done through
        the OpenStack SDK, but that's not ready yet. This is for getting
        Client objects from python-*client only.

        :param service_key: Generic key for service, such as 'compute' or
                            'network'
        :param client_class: Class of the client to be instantiated. This
                             should be the unversioned version if there
                             is one, such as novaclient.client.Client, or
                             the versioned one, such as
                             neutronclient.v2_0.client.Client if there isn't
        :param interface_key: (optional) Some clients, such as glanceclient
                              only accept the parameter 'interface' instead
                              of 'endpoint_type' - this is a get-out-of-jail
                              parameter for those until they can be aligned.
                              os-client-config understands this to be the
                              case if service_key is image, so this is really
                              only for use with other unknown broken clients.
        :param pass_version_arg: (optional) If a versioned Client constructor
                                 was passed to client_class, set this to
                                 False, which will tell get_client to not
                                 pass a version parameter. os-client-config
                                 already understands that this is the
                                 case for network, so it can be omitted in
                                 that case.
        :param version: (optional) Version string to override the configured
                        version string.
        :param min_version: (optional) Minimum version acceptable.
        :param max_version: (optional) Maximum version acceptable.
        :param kwargs: (optional) keyword args are passed through to the
                       Client constructor, so this is in case anything
                       additional needs to be passed in.
        """
        if not client_class:
            client_class = _get_client(service_key)

        interface = self.get_interface(service_key)
        # trigger exception on lack of service
        endpoint = self.get_session_endpoint(
            service_key, min_version=min_version, max_version=max_version)
        endpoint_override = self.get_endpoint(service_key)

        if service_key == 'object-store':
            constructor_kwargs = dict(
                session=self.get_session(),
                os_options=dict(
                    service_type=self.get_service_type(service_key),
                    object_storage_url=endpoint_override,
                    region_name=self.region))
        else:
            constructor_kwargs = dict(
                session=self.get_session(),
                service_name=self.get_service_name(service_key),
                service_type=self.get_service_type(service_key),
                endpoint_override=endpoint_override,
                region_name=self.region)

        if service_key == 'image':
            # os-client-config does not depend on glanceclient, but if
            # the user passed in glanceclient.client.Client, which they
            # would need to do if they were requesting 'image' - then
            # they necessarily have glanceclient installed
            from glanceclient.common import utils as glance_utils
            endpoint, detected_version = glance_utils.strip_version(endpoint)
            # If the user has passed in a version, that's explicit, use it
            if not version:
                version = detected_version
            # If the user has passed in or configured an override, use it.
            # Otherwise, ALWAYS pass in an endpoint_override because
            # we've already done version stripping, so we don't want version
            # reconstruction to happen twice
            if not endpoint_override:
                constructor_kwargs['endpoint_override'] = endpoint
        constructor_kwargs.update(kwargs)
        if pass_version_arg and service_key != 'object-store':
            if not version:
                version = self.get_api_version(service_key)
            if not version and service_key == 'volume':
                from cinderclient import client as cinder_client
                version = cinder_client.get_volume_api_from_url(endpoint)
            # Temporary workaround while we wait for python-openstackclient
            # to be able to handle 2.0 which is what neutronclient expects
            if service_key == 'network' and version == '2':
                version = '2.0'
            if service_key == 'identity':
                # Workaround for bug#1513839
                if 'endpoint' not in constructor_kwargs:
                    endpoint = self.get_session_endpoint('identity')
                    constructor_kwargs['endpoint'] = endpoint
            if service_key == 'network':
                constructor_kwargs['api_version'] = version
            elif service_key == 'baremetal':
                if version != '1':
                    # Set Ironic Microversion
                    constructor_kwargs['os_ironic_api_version'] = version
                # Version arg is the major version, not the full microstring
                constructor_kwargs['version'] = version[0]
            else:
                constructor_kwargs['version'] = version
            if min_version and min_version > float(version):
                raise exceptions.OpenStackConfigVersionException(
                    "Minimum version {min_version} requested but {version}"
                    " found".format(min_version=min_version, version=version),
                    version=version)
            if max_version and max_version < float(version):
                raise exceptions.OpenStackConfigVersionException(
                    "Maximum version {max_version} requested but {version}"
                    " found".format(max_version=max_version, version=version),
                    version=version)
        if service_key == 'database':
            # TODO(mordred) Remove when https://review.openstack.org/314032
            # has landed and released. We're passing in a Session, but the
            # trove Client object has username and password as required
            # args
            constructor_kwargs['username'] = None
            constructor_kwargs['password'] = None

        if not interface_key:
            if service_key in ('image', 'key-manager'):
                interface_key = 'interface'
            elif (service_key == 'identity'
                    and version and version.startswith('3')):
                interface_key = 'interface'
            else:
                interface_key = 'endpoint_type'
        if service_key == 'object-store':
            constructor_kwargs['os_options'][interface_key] = interface
        else:
            constructor_kwargs[interface_key] = interface

        return client_class(**constructor_kwargs)

    def get_cache_expiration_time(self):
        if self._openstack_config:
            return self._openstack_config.get_cache_expiration_time()

    def get_cache_path(self):
        if self._openstack_config:
            return self._openstack_config.get_cache_path()

    def get_cache_class(self):
        if self._openstack_config:
            return self._openstack_config.get_cache_class()

    def get_cache_arguments(self):
        if self._openstack_config:
            return self._openstack_config.get_cache_arguments()

    def get_cache_expiration(self):
        if self._openstack_config:
            return self._openstack_config.get_cache_expiration()

    def get_cache_resource_expiration(self, resource, default=None):
        """Get expiration time for a resource.

        :param resource: Name of the resource type
        :param default: Default value to return if not found (optional,
                        defaults to None)

        :returns: Expiration time for the resource type as float or default
        """
        if self._openstack_config:
            expiration = self._openstack_config.get_cache_expiration()
            if resource not in expiration:
                return default
            return float(expiration[resource])

    def requires_floating_ip(self):
        """Return whether or not this cloud requires floating ips.

        :returns: True or False if known, None if discovery is needed.
            If requires_floating_ip is not configured but the cloud is
            known to not provide floating ips, will return False.
        """
        if self.config['floating_ip_source'] == "None":
            return False
        return self.config.get('requires_floating_ip')

    def get_external_networks(self):
        """Get list of network names for external networks."""
        return [
            net['name'] for net in self.config['networks']
            if net['routes_externally']]

    def get_external_ipv4_networks(self):
        """Get list of network names for external IPv4 networks."""
        return [
            net['name'] for net in self.config['networks']
            if net['routes_ipv4_externally']]

    def get_external_ipv6_networks(self):
        """Get list of network names for external IPv6 networks."""
        return [
            net['name'] for net in self.config['networks']
            if net['routes_ipv6_externally']]

    def get_internal_networks(self):
        """Get list of network names for internal networks."""
|
||||
return [
|
||||
net['name'] for net in self.config['networks']
|
||||
if not net['routes_externally']]
|
||||
|
||||
def get_internal_ipv4_networks(self):
|
||||
"""Get list of network names for internal IPv4 networks."""
|
||||
return [
|
||||
net['name'] for net in self.config['networks']
|
||||
if not net['routes_ipv4_externally']]
|
||||
|
||||
def get_internal_ipv6_networks(self):
|
||||
"""Get list of network names for internal IPv6 networks."""
|
||||
return [
|
||||
net['name'] for net in self.config['networks']
|
||||
if not net['routes_ipv6_externally']]
|
||||
|
||||
def get_default_network(self):
|
||||
"""Get network used for default interactions."""
|
||||
for net in self.config['networks']:
|
||||
if net['default_interface']:
|
||||
return net['name']
|
||||
return None
|
||||
|
||||
def get_nat_destination(self):
|
||||
"""Get network used for NAT destination."""
|
||||
for net in self.config['networks']:
|
||||
if net['nat_destination']:
|
||||
return net['name']
|
||||
return None
|
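The network helpers above all apply the same pattern: filter the normalized `networks` list from the cloud config on a boolean attribute. A minimal self-contained sketch of that pattern (the network entries here are hypothetical examples, not real cloud data):

```python
# Hypothetical normalized network entries, shaped like the dicts the
# helpers above iterate over.
networks = [
    {'name': 'public', 'routes_externally': True, 'default_interface': True},
    {'name': 'private', 'routes_externally': False, 'default_interface': False},
]


def external_network_names(networks):
    """Names of networks that route externally (cf. get_external_networks)."""
    return [net['name'] for net in networks if net['routes_externally']]


def default_network_name(networks):
    """Name of the first network flagged as the default interface."""
    for net in networks:
        if net['default_interface']:
            return net['name']
    return None


print(external_network_names(networks))  # ['public']
print(default_network_name(networks))    # public
```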
16  openstack/config/constructors.json  Normal file
@@ -0,0 +1,16 @@
{
  "application-catalog": "muranoclient.client.Client",
  "baremetal": "ironicclient.client.Client",
  "compute": "novaclient.client.Client",
  "container-infra": "magnumclient.client.Client",
  "database": "troveclient.client.Client",
  "dns": "designateclient.client.Client",
  "identity": "keystoneclient.client.Client",
  "image": "glanceclient.Client",
  "key-manager": "barbicanclient.client.Client",
  "metering": "ceilometerclient.client.Client",
  "network": "neutronclient.neutron.client.Client",
  "object-store": "swiftclient.client.Connection",
  "orchestration": "heatclient.client.Client",
  "volume": "cinderclient.client.Client"
}
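Each value in this mapping is a dotted import path; turning such a string into a class object is a small importlib exercise. A sketch, demonstrated with a stdlib class since the client libraries named above may not be installed:

```python
import importlib
import json


def resolve_constructor(dotted_path):
    """Resolve a 'package.module.ClassName' string to the class object."""
    module_name, _, class_name = dotted_path.rpartition('.')
    module = importlib.import_module(module_name)
    return getattr(module, class_name)


# A real mapping value would be e.g. "novaclient.client.Client"; we use a
# stdlib class here so the snippet runs anywhere.
assert resolve_constructor('json.JSONDecoder') is json.JSONDecoder
```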
36  openstack/config/constructors.py  Normal file
@@ -0,0 +1,36 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json
import os
import threading

_json_path = os.path.join(
    os.path.dirname(os.path.realpath(__file__)), 'constructors.json')
_class_mapping = None
_class_mapping_lock = threading.Lock()


def get_constructor_mapping():
    global _class_mapping
    if _class_mapping is not None:
        return _class_mapping.copy()
    with _class_mapping_lock:
        if _class_mapping is not None:
            return _class_mapping.copy()
        tmp_class_mapping = {}
        with open(_json_path, 'r') as json_file:
            tmp_class_mapping.update(json.load(json_file))
        _class_mapping = tmp_class_mapping
    return tmp_class_mapping.copy()
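`get_constructor_mapping()` is double-checked locking: an unlocked fast path once the cache is filled, a re-check under the lock to survive races, and `.copy()` so callers cannot mutate the shared cache. The idiom in isolation (names here are illustrative, not from the module):

```python
import threading

_cache = None
_cache_lock = threading.Lock()


def _load():
    # Stand-in for the json.load() call in the real module.
    return {'compute': 'novaclient.client.Client'}


def get_mapping():
    global _cache
    # Fast path: no lock taken once the cache is populated.
    if _cache is not None:
        return _cache.copy()
    with _cache_lock:
        # Re-check under the lock: another thread may have won the race.
        if _cache is not None:
            return _cache.copy()
        _cache = _load()
    # Returning copies keeps callers from mutating the shared cache.
    return _cache.copy()
```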
27  openstack/config/defaults.json  Normal file
@@ -0,0 +1,27 @@
{
  "application_catalog_api_version": "1",
  "auth_type": "password",
  "baremetal_api_version": "1",
  "container_api_version": "1",
  "container_infra_api_version": "1",
  "compute_api_version": "2",
  "database_api_version": "1.0",
  "disable_vendor_agent": {},
  "dns_api_version": "2",
  "interface": "public",
  "floating_ip_source": "neutron",
  "identity_api_version": "2.0",
  "image_api_use_tasks": false,
  "image_api_version": "2",
  "image_format": "qcow2",
  "key_manager_api_version": "v1",
  "message": "",
  "metering_api_version": "2",
  "network_api_version": "2",
  "object_store_api_version": "1",
  "orchestration_api_version": "1",
  "secgroup_source": "neutron",
  "status": "active",
  "volume_api_version": "2",
  "workflow_api_version": "2"
}
52  openstack/config/defaults.py  Normal file
@@ -0,0 +1,52 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import json
import os
import threading

_json_path = os.path.join(
    os.path.dirname(os.path.realpath(__file__)), 'defaults.json')
_defaults = None
_defaults_lock = threading.Lock()


def get_defaults():
    global _defaults
    if _defaults is not None:
        return _defaults.copy()
    with _defaults_lock:
        if _defaults is not None:
            # Did someone else just finish filling it?
            return _defaults.copy()
        # Python language specific defaults
        # These are defaults related to use of python libraries, they are
        # not qualities of a cloud.
        #
        # NOTE(harlowja): update an in-memory dict, before updating
        # the global one so that other callers of get_defaults do not
        # see the partially filled one.
        tmp_defaults = dict(
            api_timeout=None,
            verify=True,
            cacert=None,
            cert=None,
            key=None,
        )
        with open(_json_path, 'r') as json_file:
            updates = json.load(json_file)
        if updates is not None:
            tmp_defaults.update(updates)
        _defaults = tmp_defaults
    return tmp_defaults.copy()
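The layering in `get_defaults()` is worth noting: Python-library defaults (timeouts, TLS settings) are seeded first, then the cloud-facing values from defaults.json are merged over them. A compressed sketch of that merge, with the JSON inlined as a stand-in for the file:

```python
import json

# Stand-in for a slice of defaults.json; the real file carries the full
# set of cloud-facing defaults shown above.
_json_text = '{"compute_api_version": "2", "interface": "public"}'

# Library-level defaults that are not qualities of a cloud.
defaults = dict(api_timeout=None, verify=True, cacert=None)

# JSON-backed values are layered on top, as get_defaults() does.
defaults.update(json.loads(_json_text))

print(defaults['verify'], defaults['compute_api_version'])
```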
25  openstack/config/exceptions.py  Normal file
@@ -0,0 +1,25 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


class OpenStackConfigException(Exception):
    """Something went wrong with parsing your OpenStack Config."""


class OpenStackConfigVersionException(OpenStackConfigException):
    """A version was requested that is different than what was found."""

    def __init__(self, message, version=None):
        super(OpenStackConfigVersionException, self).__init__(message)
        self.version = version
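The version-check call sites earlier in this diff pass a formatted message plus a `version=` keyword, and callers can then report both. A sketch of that flow; the classes and helper are re-declared here only so the snippet stands alone:

```python
class OpenStackConfigException(Exception):
    """Something went wrong with parsing your OpenStack Config."""


class OpenStackConfigVersionException(OpenStackConfigException):
    """A version was requested that is different than what was found."""

    def __init__(self, message, version=None):
        super(OpenStackConfigVersionException, self).__init__(message)
        self.version = version


def require_min_version(version, min_version):
    # Mirrors the min_version check in the client-construction code above.
    if min_version and min_version > float(version):
        raise OpenStackConfigVersionException(
            "Minimum version {min_version} requested but {version}"
            " found".format(min_version=min_version, version=version),
            version=version)


try:
    require_min_version('2', min_version=3.0)
except OpenStackConfigVersionException as e:
    # Callers get both the human-readable message and the offending version.
    print(e, e.version)
```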
1241  openstack/config/loader.py  Normal file
File diff suppressed because it is too large.
121  openstack/config/schema.json  Normal file
@@ -0,0 +1,121 @@
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "id": "https://git.openstack.org/cgit/openstack/cloud-data/plain/schema.json#",
  "type": "object",
  "properties": {
    "auth_type": {
      "name": "Auth Type",
      "description": "Name of authentication plugin to be used",
      "default": "password",
      "type": "string"
    },
    "disable_vendor_agent": {
      "name": "Disable Vendor Agent Properties",
      "description": "Image properties required to disable vendor agent",
      "type": "object",
      "properties": {}
    },
    "floating_ip_source": {
      "name": "Floating IP Source",
      "description": "Which service provides Floating IPs",
      "enum": [ "neutron", "nova", "None" ],
      "default": "neutron"
    },
    "image_api_use_tasks": {
      "name": "Image Task API",
      "description": "Does the cloud require the Image Task API",
      "default": false,
      "type": "boolean"
    },
    "image_format": {
      "name": "Image Format",
      "description": "Format for uploaded Images",
      "default": "qcow2",
      "type": "string"
    },
    "interface": {
      "name": "API Interface",
      "description": "Which API Interface should connections hit",
      "default": "public",
      "enum": [ "public", "internal", "admin" ]
    },
    "secgroup_source": {
      "name": "Security Group Source",
      "description": "Which service provides security groups",
      "default": "neutron",
      "enum": [ "neutron", "nova", "None" ]
    },
    "baremetal_api_version": {
      "name": "Baremetal API Version",
      "description": "Baremetal API Version",
      "default": "1",
      "type": "string"
    },
    "compute_api_version": {
      "name": "Compute API Version",
      "description": "Compute API Version",
      "default": "2",
      "type": "string"
    },
    "database_api_version": {
      "name": "Database API Version",
      "description": "Database API Version",
      "default": "1.0",
      "type": "string"
    },
    "dns_api_version": {
      "name": "DNS API Version",
      "description": "DNS API Version",
      "default": "2",
      "type": "string"
    },
    "identity_api_version": {
      "name": "Identity API Version",
      "description": "Identity API Version",
      "default": "2",
      "type": "string"
    },
    "image_api_version": {
      "name": "Image API Version",
      "description": "Image API Version",
      "default": "1",
      "type": "string"
    },
    "network_api_version": {
      "name": "Network API Version",
      "description": "Network API Version",
      "default": "2",
      "type": "string"
    },
    "object_store_api_version": {
      "name": "Object Storage API Version",
      "description": "Object Storage API Version",
      "default": "1",
      "type": "string"
    },
    "volume_api_version": {
      "name": "Volume API Version",
      "description": "Volume API Version",
      "default": "2",
      "type": "string"
    }
  },
  "required": [
    "auth_type",
    "baremetal_api_version",
    "compute_api_version",
    "database_api_version",
    "disable_vendor_agent",
    "dns_api_version",
    "floating_ip_source",
    "identity_api_version",
    "image_api_use_tasks",
    "image_api_version",
    "image_format",
    "interface",
    "network_api_version",
    "object_store_api_version",
    "secgroup_source",
    "volume_api_version"
  ]
}
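This is ordinary JSON Schema draft-04, so any conforming validator (e.g. the `jsonschema` package) can enforce it. As a dependency-free illustration, here is the kind of check its `required` and `enum` clauses encode; the required set is abbreviated and the config dict is a hypothetical example:

```python
# Abbreviated from the schema's "required" list and the "interface" enum.
REQUIRED = {'auth_type', 'compute_api_version', 'interface'}
INTERFACE_ENUM = {'public', 'internal', 'admin'}

config = {
    'auth_type': 'password',
    'compute_api_version': '2',
    'interface': 'public',
}

# "required": every listed key must be present.
missing = REQUIRED - set(config)
assert not missing, 'missing required keys: %s' % sorted(missing)

# "enum": the value must be one of the allowed constants.
assert config['interface'] in INTERFACE_ENUM

print('config satisfies the abbreviated schema checks')
```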
223  openstack/config/vendor-schema.json  Normal file
@@ -0,0 +1,223 @@
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "id": "https://git.openstack.org/cgit/openstack/cloud-data/plain/vendor-schema.json#",
  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    },
    "profile": {
      "type": "object",
      "properties": {
        "auth": {
          "type": "object",
          "properties": {
            "auth_url": {
              "name": "Auth URL",
              "description": "URL of the primary Keystone endpoint",
              "type": "string"
            }
          }
        },
        "auth_type": {
          "name": "Auth Type",
          "description": "Name of authentication plugin to be used",
          "default": "password",
          "type": "string"
        },
        "disable_vendor_agent": {
          "name": "Disable Vendor Agent Properties",
          "description": "Image properties required to disable vendor agent",
          "type": "object",
          "properties": {}
        },
        "floating_ip_source": {
          "name": "Floating IP Source",
          "description": "Which service provides Floating IPs",
          "enum": [ "neutron", "nova", "None" ],
          "default": "neutron"
        },
        "image_api_use_tasks": {
          "name": "Image Task API",
          "description": "Does the cloud require the Image Task API",
          "default": false,
          "type": "boolean"
        },
        "image_format": {
          "name": "Image Format",
          "description": "Format for uploaded Images",
          "default": "qcow2",
          "type": "string"
        },
        "interface": {
          "name": "API Interface",
          "description": "Which API Interface should connections hit",
          "default": "public",
          "enum": [ "public", "internal", "admin" ]
        },
        "message": {
          "name": "Status message",
          "description": "Optional message with information related to status",
          "type": "string"
        },
        "requires_floating_ip": {
          "name": "Requires Floating IP",
          "description": "Whether the cloud requires a floating IP to route traffic off of the cloud",
          "default": null,
          "type": ["boolean", "null"]
        },
        "secgroup_source": {
          "name": "Security Group Source",
          "description": "Which service provides security groups",
          "enum": [ "neutron", "nova", "None" ],
          "default": "neutron"
        },
        "status": {
          "name": "Vendor status",
          "description": "Status of the vendor's cloud",
          "enum": [ "active", "deprecated", "shutdown"],
          "default": "active"
        },
        "compute_service_name": {
          "name": "Compute API Service Name",
          "description": "Compute API Service Name",
          "type": "string"
        },
        "database_service_name": {
          "name": "Database API Service Name",
          "description": "Database API Service Name",
          "type": "string"
        },
        "dns_service_name": {
          "name": "DNS API Service Name",
          "description": "DNS API Service Name",
          "type": "string"
        },
        "identity_service_name": {
          "name": "Identity API Service Name",
          "description": "Identity API Service Name",
          "type": "string"
        },
        "image_service_name": {
          "name": "Image API Service Name",
          "description": "Image API Service Name",
          "type": "string"
        },
        "volume_service_name": {
          "name": "Volume API Service Name",
          "description": "Volume API Service Name",
          "type": "string"
        },
        "network_service_name": {
          "name": "Network API Service Name",
          "description": "Network API Service Name",
          "type": "string"
        },
        "object_service_name": {
          "name": "Object Storage API Service Name",
          "description": "Object Storage API Service Name",
          "type": "string"
        },
        "baremetal_service_name": {
          "name": "Baremetal API Service Name",
          "description": "Baremetal API Service Name",
          "type": "string"
        },
        "compute_service_type": {
          "name": "Compute API Service Type",
          "description": "Compute API Service Type",
          "type": "string"
        },
        "database_service_type": {
          "name": "Database API Service Type",
          "description": "Database API Service Type",
          "type": "string"
        },
        "dns_service_type": {
          "name": "DNS API Service Type",
          "description": "DNS API Service Type",
          "type": "string"
        },
        "identity_service_type": {
          "name": "Identity API Service Type",
          "description": "Identity API Service Type",
          "type": "string"
        },
        "image_service_type": {
          "name": "Image API Service Type",
          "description": "Image API Service Type",
          "type": "string"
        },
        "volume_service_type": {
          "name": "Volume API Service Type",
          "description": "Volume API Service Type",
          "type": "string"
        },
        "network_service_type": {
          "name": "Network API Service Type",
          "description": "Network API Service Type",
          "type": "string"
        },
        "object_service_type": {
          "name": "Object Storage API Service Type",
          "description": "Object Storage API Service Type",
          "type": "string"
        },
        "baremetal_service_type": {
          "name": "Baremetal API Service Type",
          "description": "Baremetal API Service Type",
          "type": "string"
        },
        "compute_api_version": {
          "name": "Compute API Version",
          "description": "Compute API Version",
          "type": "string"
        },
        "database_api_version": {
          "name": "Database API Version",
          "description": "Database API Version",
          "type": "string"
        },
        "dns_api_version": {
          "name": "DNS API Version",
          "description": "DNS API Version",
          "type": "string"
        },
        "identity_api_version": {
          "name": "Identity API Version",
          "description": "Identity API Version",
          "type": "string"
        },
        "image_api_version": {
          "name": "Image API Version",
          "description": "Image API Version",
          "type": "string"
        },
        "volume_api_version": {
          "name": "Volume API Version",
          "description": "Volume API Version",
          "type": "string"
        },
        "network_api_version": {
          "name": "Network API Version",
          "description": "Network API Version",
          "type": "string"
        },
        "object_api_version": {
          "name": "Object Storage API Version",
          "description": "Object Storage API Version",
          "type": "string"
        },
        "baremetal_api_version": {
          "name": "Baremetal API Version",
          "description": "Baremetal API Version",
          "type": "string"
        }
      }
    }
  },
  "required": [
    "name",
    "profile"
  ]
}
37  openstack/config/vendors/__init__.py  vendored  Normal file
@@ -0,0 +1,37 @@
# Copyright (c) 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import glob
import json
import os

import yaml

_vendors_path = os.path.dirname(os.path.realpath(__file__))
_vendor_defaults = None


def get_profile(profile_name):
    global _vendor_defaults
    if _vendor_defaults is None:
        _vendor_defaults = {}
        for vendor in glob.glob(os.path.join(_vendors_path, '*.yaml')):
            with open(vendor, 'r') as f:
                vendor_data = yaml.safe_load(f)
                _vendor_defaults[vendor_data['name']] = vendor_data['profile']
        for vendor in glob.glob(os.path.join(_vendors_path, '*.json')):
            with open(vendor, 'r') as f:
                vendor_data = json.load(f)
                _vendor_defaults[vendor_data['name']] = vendor_data['profile']
    return _vendor_defaults.get(profile_name)
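`get_profile()` lazily builds a name-to-profile index by scanning the vendors directory for YAML and JSON files. A self-contained sketch of the JSON half of that scan, using a throwaway directory and a made-up vendor named `exampleco`:

```python
import glob
import json
import os
import tempfile

# Build a throwaway vendors directory with one JSON profile, mirroring
# the layout get_profile() scans. 'exampleco' is a hypothetical vendor.
vendors_path = tempfile.mkdtemp()
with open(os.path.join(vendors_path, 'exampleco.json'), 'w') as f:
    json.dump({'name': 'exampleco',
               'profile': {'image_format': 'qcow2'}}, f)

# Index profiles by vendor name, as the real module does.
profiles = {}
for vendor in glob.glob(os.path.join(vendors_path, '*.json')):
    with open(vendor, 'r') as f:
        vendor_data = json.load(f)
        profiles[vendor_data['name']] = vendor_data['profile']

print(profiles.get('exampleco'))  # {'image_format': 'qcow2'}
```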
11  openstack/config/vendors/auro.json  vendored  Normal file
@@ -0,0 +1,11 @@
{
  "name": "auro",
  "profile": {
    "auth": {
      "auth_url": "https://api.van1.auro.io:5000/v2.0"
    },
    "identity_api_version": "2",
    "region_name": "van1",
    "requires_floating_ip": true
  }
}
7  openstack/config/vendors/bluebox.json  vendored  Normal file
@@ -0,0 +1,7 @@
{
  "name": "bluebox",
  "profile": {
    "volume_api_version": "1",
    "region_name": "RegionOne"
  }
}
15  openstack/config/vendors/catalyst.json  vendored  Normal file
@@ -0,0 +1,15 @@
{
  "name": "catalyst",
  "profile": {
    "auth": {
      "auth_url": "https://api.cloud.catalyst.net.nz:5000/v2.0"
    },
    "regions": [
      "nz-por-1",
      "nz_wlg_2"
    ],
    "image_api_version": "1",
    "volume_api_version": "1",
    "image_format": "raw"
  }
}
19  openstack/config/vendors/citycloud.json  vendored  Normal file
@@ -0,0 +1,19 @@
{
  "name": "citycloud",
  "profile": {
    "auth": {
      "auth_url": "https://identity1.citycloud.com:5000/v3/"
    },
    "regions": [
      "Buf1",
      "La1",
      "Fra1",
      "Lon1",
      "Sto2",
      "Kna1"
    ],
    "requires_floating_ip": true,
    "volume_api_version": "1",
    "identity_api_version": "3"
  }
}
14  openstack/config/vendors/conoha.json  vendored  Normal file
@@ -0,0 +1,14 @@
{
  "name": "conoha",
  "profile": {
    "auth": {
      "auth_url": "https://identity.{region_name}.conoha.io"
    },
    "regions": [
      "sin1",
      "sjc1",
      "tyo1"
    ],
    "identity_api_version": "2"
  }
}
11  openstack/config/vendors/datacentred.json  vendored  Normal file
@@ -0,0 +1,11 @@
{
  "name": "datacentred",
  "profile": {
    "auth": {
      "auth_url": "https://compute.datacentred.io:5000"
    },
    "region_name": "sal01",
    "identity_api_version": "3",
    "image_api_version": "2"
  }
}
11  openstack/config/vendors/dreamcompute.json  vendored  Normal file
@@ -0,0 +1,11 @@
{
  "name": "dreamcompute",
  "profile": {
    "auth": {
      "auth_url": "https://iad2.dream.io:5000"
    },
    "identity_api_version": "3",
    "region_name": "RegionOne",
    "image_format": "raw"
  }
}
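For context, these vendor files exist so that a user's clouds.yaml only needs credentials plus a `profile` naming the vendor; the profile supplies the auth_url, region, and API-version quirks. A sketch of such a clouds.yaml entry (the cloud name and account values are placeholders):

```yaml
clouds:
  mycloud:
    profile: dreamcompute
    auth:
      username: my-username
      password: my-password
      project_name: my-project
```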
Some files were not shown because too many files have changed in this diff.