Refactor valet_plugins
Part of support for nested stacks and updates story.

To add nested stack support to Valet, make up for missing Heat resource
Orchestration IDs in nested resources by generating a subset of Heat stack
lifecycle scheduler hints for each resource in advance, storing them as
opaque metadata in Valet, and then leveraging that metadata at Nova
scheduling time. Make additional accommodations in anticipation of the
complexities brought about by adding support for stack updates.

To add a minimally viable amount of Heat `stack-update` support to Valet,
significantly restrict the number of update use cases using a set of
acceptance criteria. Skip holistic placement at `stack-update` time in
favor of Valet's existing re-plan mechanism, placing or replacing resources
one at a time, albeit still in consideration of other resources in the same
stack hierarchy.

Change-Id: I4654bcb4eacd5d64f76e262fe4c29553796e3f06
Story: #2001139
Task: #4858
parent 8444b286c1, commit fb512cd3ab

doc/source/heat.rst (new file, 349 lines)
@@ -0,0 +1,349 @@
OpenStack Heat Resource Plugins
===============================

`Valet <https://codecloud.web.att.com/plugins/servlet/readmeparser/display/ST_CLOUDQOS/allegro/atRef/refs/heads/master/renderFile/README.md>`__
works with OpenStack Heat through the use of Resource Plugins. This
document explains what they are and how they work. As new plugins are
formally introduced, they will be added here.

**IMPORTANT**: Resources have been rearchitected. Types and properties
have changed/moved!

Example
-------

An example is provided first, followed by more details about each
resource.

Given a Heat template with two server resources, declare a group with
rack-level affinity, then assign the servers to the group:

.. code:: yaml

    resources:
      affinity_group:
        type: OS::Valet::Group
        properties:
          name: my_vnf_rack_affinity
          type: affinity
          level: rack

      server_affinity:
        type: OS::Valet::GroupAssignment
        properties:
          group: {get_resource: affinity_group}
          resources:
            - {get_resource: server1}
            - {get_resource: server2}

OS::Valet::Group
----------------

*This resource is under development and subject to rearchitecture. It is
not yet supported within Heat templates. Users bearing an admin role may
create and manage Groups using ``valet-api`` directly. Do NOT use
OS::Valet::Group at this time.*

A Group is used to define a particular association amongst resources.
Groups may be used only by their assigned members, currently identified
by project (tenant) IDs.

There are three types of groups: affinity, diversity, and exclusivity.
There are two levels: host and rack.

All groups must have a unique name, regardless of how they were created
and regardless of membership.

The user's project (tenant) ID is automatically added to the member list
upon creation. However, there is no group owner. Any member can edit or
delete the group. If all members are removed, only a user bearing an
admin role can edit or remove the group.

This resource is purely informational in nature and makes no changes to
heat, nova, or cinder. Instead, the Valet Heat stack lifecycle plugin
intercepts Heat's create/update/delete operations and invokes valet-api
as needed.

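For illustration only, here is a minimal sketch of creating a group
directly against ``valet-api``. The route, payload fields, and
``X-Auth-Token`` header mirror the ``valet_api.py`` wrapper added in this
change; the endpoint URL and token below are placeholders.

.. code:: python

    import requests

    # Placeholders: use the configured [valet] url and a real Keystone token.
    VALET_URL = "http://valet-api.example.com"
    AUTH_TOKEN = "<keystone-token>"

    group = {
        "name": "my_vnf_rack_affinity",
        "type": "affinity",
        "level": "rack",
        "description": "Rack-level affinity for my VNF",
    }

    # POST /groups with an X-Auth-Token header, as the wrapper does.
    response = requests.post(
        "{}/groups".format(VALET_URL),
        json=group,
        headers={"X-Auth-Token": AUTH_TOKEN, "Accept": "application/json"})
    response.raise_for_status()
    print(response.json())
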
Properties
~~~~~~~~~~

``name`` (String)

- Name of group.
- Required property.

``description`` (String)

- Description of group.

``type`` (String)

- Type of group.
- Allowed values: affinity, diversity, exclusivity (see "Types" for
  details)
- Required property.

``level`` (String)

- Level of relationship between resources.
- Allowed values: rack, host (see "Levels" for details)
- Required property.

``members`` (List)

- List of group members.
- Use project (tenant) IDs.
- A user with admin role can add/remove any project ID.
- Creator's project ID is automatically added.

Types
^^^^^

- ``affinity``: Same level
- ``diversity``: Different levels (aka "anti-affinity")
- ``exclusivity``: Same level with exclusive use

Levels
^^^^^^

- ``rack``: Across racks, one resource per host.
- ``host``: All resources on a single host.

Attributes
~~~~~~~~~~

None. (There is a ``show`` attribute but it is not intended for
production use.)

Plugin Schema
~~~~~~~~~~~~~

.. code:: json

    $ heat resource-type-show OS::Valet::Group
    {
      "support_status": {
        "status": "SUPPORTED",
        "message": null,
        "version": null,
        "previous_status": null
      },
      "attributes": {
        "show": {
          "type": "map",
          "description": "Detailed information about resource."
        }
      },
      "properties": {
        "name": {
          "description": "Name of group.",
          "required": true,
          "update_allowed": false,
          "type": "string",
          "immutable": false,
          "constraints": [
            {
              "custom_constraint": "valet.group_name"
            }
          ]
        },
        "type": {
          "description": "Type of group.",
          "required": true,
          "update_allowed": false,
          "type": "string",
          "immutable": false,
          "constraints": [
            {
              "allowed_values": [
                "affinity",
                "diversity",
                "exclusivity"
              ]
            }
          ]
        },
        "description": {
          "type": "string",
          "required": false,
          "update_allowed": true,
          "description": "Description of group.",
          "immutable": false
        },
        "members": {
          "type": "list",
          "required": false,
          "update_allowed": true,
          "description": "List of one or more member IDs allowed to use this group.",
          "immutable": false
        },
        "level": {
          "description": "Level of relationship between resources.",
          "required": true,
          "update_allowed": false,
          "type": "string",
          "immutable": false,
          "constraints": [
            {
              "allowed_values": [
                "host",
                "rack"
              ]
            }
          ]
        }
      },
      "resource_type": "OS::Valet::Group"
    }

OS::Valet::GroupAssignment
--------------------------

*This resource is under development and subject to rearchitecture. The
``type`` and ``level`` properties are no longer available and are now
part of Groups. Users bearing an admin role may create and manage Groups
using ``valet-api`` directly.*

A Group Assignment describes one or more resources (e.g., Server)
assigned to a particular group.

**Caution**: It is possible to declare multiple GroupAssignment
resources referring to the same servers, which can lead to problems when
one GroupAssignment is updated and a duplicate server reference is
removed.

This resource is purely informational in nature and makes no changes to
heat, nova, or cinder. Instead, the Valet Heat stack lifecycle plugin
intercepts Heat's create/update/delete operations and invokes valet-api
as needed.

Properties
~~~~~~~~~~

``group`` (String)

- A reference to a previously defined group.
- This can be the group resource ID, the group name, or a HOT
  ``get_resource`` reference.
- Required property.

``resources`` (List)

- List of resource IDs to assign to the group.
- Can be updated without replacement.
- Required property.

Attributes
~~~~~~~~~~

None. (There is a ``show`` attribute but it is not intended for
production use.)

Plugin Schema
~~~~~~~~~~~~~

.. code:: json

    $ heat resource-type-show OS::Valet::GroupAssignment
    {
      "support_status": {
        "status": "SUPPORTED",
        "message": null,
        "version": null,
        "previous_status": null
      },
      "attributes": {
        "show": {
          "type": "map",
          "description": "Detailed information about resource."
        }
      },
      "properties": {
        "group": {
          "type": "string",
          "required": false,
          "update_allowed": false,
          "description": "Group reference.",
          "immutable": false
        },
        "resources": {
          "type": "list",
          "required": true,
          "update_allowed": true,
          "description": "List of one or more resource IDs.",
          "immutable": false
        }
      },
      "resource_type": "OS::Valet::GroupAssignment"
    }

Future Work
-----------

The following sections are proposals and *not* implemented. They are
provided to aid in ongoing open discussion.

Resource Properties
~~~~~~~~~~~~~~~~~~~

Resource property characteristics are under ongoing review and subject
to revision.

Volume Resource Support
~~~~~~~~~~~~~~~~~~~~~~~

Future placement support will formally include block storage services
(e.g., Cinder).

Additional Scheduling Levels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Future group levels could include:

- ``cluster``: Across a cluster, one resource per cluster.
- ``any``: Any level.

Proposed Notation for 'diverse-affinity'
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Suppose we are given a set of server/volume pairs, and we'd like to
treat each pair as an affinity group, and then treat all affinity groups
diversely. The following notation makes this diverse affinity pattern
easier to describe, with no name repetition.

.. code:: yaml

    resources:
      my_group:
        type: OS::Valet::Group
        properties:
          name: my_even_awesomer_group
          type: diverse-affinity
          level: host

      my_group_assignment:
        type: OS::Valet::GroupAssignment
        properties:
          group: {get_resource: my_group}
          resources:
            - - {get_resource: server1}
              - {get_resource: volume1}
            - - {get_resource: server2}
              - {get_resource: volume2}
            - - {get_resource: server3}
              - {get_resource: volume3}

In this example, ``server1``/``volume1``, ``server2``/``volume2``, and
``server3``/``volume3`` are each treated as their own affinity group.
Then, each of these affinity groups is treated as a diversity group. The
dash notation is specific to YAML (a superset of JSON and the markup
language used by Heat).

Given a hypothetical example of a Ceph deployment with three monitors,
twelve OSDs, and one client, each paired with a volume, we would only
need to specify three Heat resources instead of eighteen.

Contact
-------

Joe D'Andrea <jdandrea@research.att.com>
doc/source/index.rst
@@ -5,6 +5,7 @@ Welcome to Valet's documentation!
    :maxdepth: 1

    contributing
+   heat

 Indices and tables
 ==================
plugins/common/valet_api.py (new file, 318 lines)
@@ -0,0 +1,318 @@

#
# Copyright (c) 2014-2017 AT&T Intellectual Property
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Valet API Wrapper"""

# TODO(jdandrea): Factor out and refashion into python-valetclient.

import json
import sys

from oslo_config import cfg
from oslo_log import log as logging
import requests

from plugins import exceptions
from plugins.i18n import _

CONF = cfg.CONF
LOG = logging.getLogger(__name__)


class ValetAPI(object):
    """Valet Python API

    self.auth_token can be set once in advance,
    or sent as auth_token with each request.
    """

    def __init__(self):
        """Initializer"""
        self._auth_token = None
        self._register_opts()

    @property
    def auth_token(self):
        """Auth Token Property/Getter"""
        return self._auth_token

    @auth_token.setter
    def auth_token(self, value):
        """Auth Token Setter"""
        self._auth_token = value

    @auth_token.deleter
    def auth_token(self):
        """Auth Token Deleter"""
        del self._auth_token

    def _exception(self, exc_info, response):
        """Build exception/message and raise it.

        Exceptions are of type ValetOpenStackPluginException.

        exc_info must be sys.exc_info() tuple
        response must be of type requests.models.Response
        """
        msg = None
        exception = exceptions.UnknownError

        try:
            response_dict = response.json()
            error = response_dict.get('error', {})
            msg = "{} (valet-api: {})".format(
                response_dict.get('explanation', 'Unknown remediation'),
                error.get('message', 'Unknown error'))
            if response.status_code == 404:
                exception = exceptions.NotFoundError
            else:
                exception = exceptions.PythonAPIError
        except (AttributeError, ValueError):
            # Plan B: Pick apart exc_info (HTTPError)
            exc_class, exc, traceback = exc_info
            if hasattr(exc.response, 'request'):
                fmt = "Original Exception: {} for {} {} with body {}"
                msg = fmt.format(
                    exc, exc.response.request.method,
                    exc.response.request.url, exc.response.request.body)
            else:
                msg = "Original Exception: {}".format(exc)
            # TODO(jdandrea): Is this *truly* an "HTTP Error?"
            exception = exceptions.HTTPError
        raise exception(msg)

    def _register_opts(self):
        """Register oslo.config options"""
        opts = []
        option = cfg.StrOpt('url', default=None,
                            help=_('API endpoint url'))
        opts.append(option)
        option = cfg.IntOpt('read_timeout', default=5,
                            help=_('API read timeout in seconds'))
        opts.append(option)
        option = cfg.IntOpt('retries', default=3,
                            help=_('API request retries'))
        opts.append(option)

        opt_group = cfg.OptGroup('valet')
        CONF.register_group(opt_group)
        CONF.register_opts(opts, group=opt_group)

    def _request(self, method='get', content_type='application/json',
                 path='', headers=None, data=None, params=None):
        """Performs HTTP request.

        Returns a response dict or raises an exception.
        """
        if method not in ('post', 'get', 'put', 'delete'):
            method = 'get'
        method_fn = getattr(requests, method)

        full_headers = {
            'Accept': content_type,
            'Content-Type': content_type,
        }
        if headers:
            full_headers.update(headers)
        if not full_headers.get('X-Auth-Token') and self.auth_token:
            full_headers['X-Auth-Token'] = self.auth_token
        full_url = '{}/{}'.format(CONF.valet.url, path.lstrip('/'))

        # Prepare the request args
        try:
            data_str = json.dumps(data) if data else None
        except (TypeError, ValueError):
            data_str = data
        kwargs = {
            'data': data_str,
            'params': params,
            'headers': full_headers,
            'timeout': CONF.valet.read_timeout,
        }

        LOG.debug("Request: {} {}".format(method.upper(), full_url))
        if data:
            LOG.debug("Request Body: {}".format(json.dumps(data)))

        retries = CONF.valet.retries
        response = None
        response_dict = {}

        # Use exc_info to wrap exceptions from internally used libraries.
        # Callers are on a need-to-know basis, and they don't need to know.
        exc_info = None

        # FIXME(jdandrea): Retrying is questionable; it masks bigger issues.
        for attempt in range(retries):
            if attempt > 0:
                LOG.warn(("Retry #{}/{}").format(attempt + 1, retries))
            try:
                response = method_fn(full_url, **kwargs)
                if not response.ok:
                    LOG.debug("Response Status: {} {}".format(
                        response.status_code, response.reason))
                try:
                    # This load/unload tidies up the unicode stuffs
                    response_dict = response.json()
                    LOG.debug("Response JSON: {}".format(
                        json.dumps(response_dict)))
                except ValueError:
                    LOG.debug("Response Body: {}".format(response.text))
                response.raise_for_status()
                break  # Don't retry, we're done.
            except requests.exceptions.HTTPError as exc:
                # Just grab the exception info. Don't retry.
                exc_info = sys.exc_info()
                break
            except requests.exceptions.RequestException as exc:
                # Grab exception info, log the error, and try again.
                exc_info = sys.exc_info()
                LOG.error(exc.message)

        if exc_info:
            # Response.__bool__ returns false if status is not ok. Ruh roh!
            # That means we must check the object type vs treating as a bool.
            # More info: https://github.com/kennethreitz/requests/issues/2002
            if isinstance(response, requests.models.Response) \
                    and not response.ok:
                LOG.debug("Status {} {}; attempts: {}; url: {}".format(
                    response.status_code, response.reason,
                    attempt + 1, full_url))
            self._exception(exc_info, response)
        return response_dict

    def groups_create(self, group, auth_token=None):
        """Create a group"""
        kwargs = {
            "method": "post",
            "path": "/groups",
            "headers": {"X-Auth-Token": auth_token},
            "data": group,
        }
        return self._request(**kwargs)

    def groups_get(self, group_id, auth_token=None):
        """Get a group"""
        kwargs = {
            "method": "get",
            "path": "/groups/{}".format(group_id),
            "headers": {"X-Auth-Token": auth_token},
        }
        return self._request(**kwargs)

    def groups_update(self, group_id, group, auth_token=None):
        """Update a group"""
        kwargs = {
            "method": "put",
            "path": "/groups/{}".format(group_id),
            "headers": {"X-Auth-Token": auth_token},
            "data": group,
        }
        return self._request(**kwargs)

    def groups_delete(self, group_id, auth_token=None):
        """Delete a group"""
        kwargs = {
            "method": "delete",
            "path": "/groups/{}".format(group_id),
            "headers": {"X-Auth-Token": auth_token},
        }
        return self._request(**kwargs)

    def groups_members_update(self, group_id, members, auth_token=None):
        """Update a group with new members"""
        kwargs = {
            "method": "put",
            "path": "/groups/{}/members".format(group_id),
            "headers": {"X-Auth-Token": auth_token},
            "data": {'members': members},
        }
        return self._request(**kwargs)

    def groups_member_delete(self, group_id, member_id, auth_token=None):
        """Delete one member from a group"""
        kwargs = {
            "method": "delete",
            "path": "/groups/{}/members/{}".format(group_id, member_id),
            "headers": {"X-Auth-Token": auth_token},
        }
        return self._request(**kwargs)

    def groups_members_delete(self, group_id, auth_token=None):
        """Delete all members from a group"""
        kwargs = {
            "method": "delete",
            "path": "/groups/{}/members".format(group_id),
            "headers": {"X-Auth-Token": auth_token},
        }
        return self._request(**kwargs)

    def plans_create(self, stack, plan, auth_token=None):
        """Create a plan"""
        kwargs = {
            "method": "post",
            "path": "/plans",
            "headers": {"X-Auth-Token": auth_token},
            "data": plan,
        }
        return self._request(**kwargs)

    def plans_update(self, stack, plan, auth_token=None):
        """Update a plan"""
        kwargs = {
            "method": "put",
            "path": "/plans/{}".format(stack.id),
            "headers": {"X-Auth-Token": auth_token},
            "data": plan,
        }
        return self._request(**kwargs)

    def plans_delete(self, stack, auth_token=None):
        """Delete a plan"""
        kwargs = {
            "method": "delete",
            "path": "/plans/{}".format(stack.id),
            "headers": {"X-Auth-Token": auth_token},
        }
        return self._request(**kwargs)

    def placement(self, orch_id, res_id,
                  action='reserve', hosts=None, auth_token=None):
        """Reserve previously made placement or force a replan.

        action can be reserve or replan.
        """
        kwargs = {
            "path": "/placements/{}".format(orch_id),
            "headers": {"X-Auth-Token": auth_token},
        }
        if hosts:
            kwargs['method'] = 'post'
            kwargs['data'] = {
                "action": action,
                "locations": hosts,
                "resource_id": res_id,
            }
        return self._request(**kwargs)

    def placements_search(self, query={}, auth_token=None):
        """Search Placements"""
        kwargs = {
            "path": "/placements",
            "headers": {"X-Auth-Token": auth_token},
            "params": query,
        }
        return self._request(**kwargs)
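As an aside (not part of the diff), a minimal usage sketch of the wrapper
above. The endpoint URL and token are placeholders, and the [valet] option
is overridden in code purely for illustration; it normally comes from the
service configuration file.

    from oslo_config import cfg

    from plugins.common import valet_api
    from plugins import exceptions

    # Instantiating ValetAPI registers the [valet] options; override the
    # endpoint here only for this illustration.
    valet_client = valet_api.ValetAPI()
    cfg.CONF.set_override('url', 'http://valet-api.example.com', group='valet')
    valet_client.auth_token = '<keystone-token>'  # placeholder token

    group = {
        'name': 'my_vnf_rack_affinity',
        'type': 'affinity',
        'level': 'rack',
        'description': 'Rack-level affinity for my VNF',
    }

    try:
        created = valet_client.groups_create(group=group)
        print(created.get('id'))
    except exceptions.PythonAPIError as exc:
        # valet-api failures surface as PythonAPIError (404s as NotFoundError).
        print(exc)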
plugins/exceptions.py (new file, 75 lines)
@@ -0,0 +1,75 @@

#
# Copyright (c) 2014-2017 AT&T Intellectual Property
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Valet OpenStack Plugin Exceptions"""

# Do not use exceptions as control flow! Recommended guidelines:
# https://julien.danjou.info/blog/2016/python-exceptions-guide
# https://realpython.com/blog/python/the-most-diabolical-python-antipattern/


class ValetOpenStackPluginException(Exception):
    """Base error for Valet"""
    def __init__(self, msg=None):
        super(ValetOpenStackPluginException, self).__init__(msg)


class GroupResourceNotIsolatedError(ValetOpenStackPluginException):
    """Valet Group resources must be isolated in their own template"""
    def __init__(self, msg=None):
        """Initialization"""
        if msg is None:
            msg = "OS::Valet::Group resources must be isolated " \
                  "in a dedicated stack"
        super(GroupResourceNotIsolatedError, self).__init__(msg)


class ResourceNotFoundError(ValetOpenStackPluginException):
    """Resource was not found"""
    def __init__(self, identifier, msg=None):
        """Initialization"""
        if msg is None:
            msg = "Resource identified by {} not found".format(identifier)
        super(ResourceNotFoundError, self).__init__(msg)
        self.identifier = identifier


class ResourceUUIDMissingError(ValetOpenStackPluginException):
    """Resource is missing a UUID (Orchestration ID)"""
    def __init__(self, resource, msg=None):
        """Initialization"""
        if msg is None:
            msg = "Resource named {} has no UUID".format(resource.name)
        super(ResourceUUIDMissingError, self).__init__(msg)
        self.resource = resource


class UnknownError(ValetOpenStackPluginException):
    """Unknown Exception catchall"""


# Python API


class PythonAPIError(ValetOpenStackPluginException):
    """Python API error"""


class NotFoundError(PythonAPIError):
    """Not Found error"""


class HTTPError(PythonAPIError):
    """HTTP error"""
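As a hedged illustration (not part of the diff) of how this hierarchy is
meant to be used, mirroring Group.handle_delete() in plugins/heat/group.py:
NotFoundError is typically ignorable, while other failures propagate back
to Heat.

    from plugins import exceptions


    def delete_group_quietly(api, group_id):
        """Illustrative helper: ignore a missing group, propagate anything else."""
        try:
            api.groups_delete(group_id=group_id)
        except exceptions.NotFoundError:
            # The group is already gone; nothing to do.
            pass
        # Other PythonAPIError/HTTPError/UnknownError cases intentionally propagate.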
plugins/heat/constraints.py (new file, 38 lines)
@@ -0,0 +1,38 @@

#
# Copyright (c) 2014-2017 AT&T Intellectual Property
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Valet Resource Constraints"""

import string

from heat.engine import constraints


class GroupNameConstraint(constraints.BaseCustomConstraint):
    """Validator for Valet Group name."""

    def validate(self, value, context):
        """Validation callback."""
        valid_characters = string.letters + string.digits + "-._~"
        if not value:
            self._error_message = "Group name must not be empty"
            return False
        elif not set(value) <= set(valid_characters):
            self._error_message = ("Group name must contain only uppercase "
                                   "and lowercase letters, decimal digits, "
                                   "hyphens, periods, underscores, and tildes "
                                   "[RFC 3986, Section 2.3]")
            return False
        return True
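For reference (not part of the diff), the same character-set rule in
isolation. string.ascii_letters is used here so the snippet also runs on
Python 3, whereas the plugin above relies on Python 2's string.letters.

    import string

    # Same allowed set as GroupNameConstraint.validate(): RFC 3986 unreserved.
    VALID_CHARACTERS = set(string.ascii_letters + string.digits + "-._~")


    def is_valid_group_name(name):
        """Return True when the name is non-empty and uses only allowed characters."""
        return bool(name) and set(name) <= VALID_CHARACTERS


    assert is_valid_group_name("my_vnf_rack_affinity")
    assert not is_valid_group_name("bad name!")  # space and '!' are rejected
    assert not is_valid_group_name("")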
plugins/heat/group.py (new file, 329 lines)
@@ -0,0 +1,329 @@

#
# Copyright (c) 2014-2017 AT&T Intellectual Property
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Group Heat Resource Plugin"""

from heat.common import exception as heat_exception
from heat.common.i18n import _
from heat.engine import attributes
from heat.engine import constraints
from heat.engine import properties
from heat.engine import resource
from heat.engine.resources import scheduler_hints as sh
from heat.engine import support
from oslo_log import log as logging

from plugins.common import valet_api
from plugins import exceptions

LOG = logging.getLogger(__name__)


class Group(resource.Resource, sh.SchedulerHintsMixin):
    """Valet Group Resource

    A Group is used to define a particular association amongst
    resources. Groups may be used only by their assigned members,
    currently identified by project (tenant) IDs. If no members are
    assigned, any project (tenant) may assign resources to the group.

    There are three types of groups: affinity, diversity, and exclusivity.
    There are two levels: host and rack.

    All groups must have a unique name, regardless of how they were created
    and regardless of membership.

    There is no lone group owner. Any user with an admin role, regardless
    of project/tenant, can edit or delete the group.
    """

    support_status = support.SupportStatus(version='2015.1')

    _LEVEL_TYPES = (
        HOST, RACK,
    ) = (
        'host', 'rack',
    )

    _RELATIONSHIP_TYPES = (
        AFFINITY, DIVERSITY, EXCLUSIVITY,
    ) = (
        'affinity', 'diversity', 'exclusivity',
    )

    PROPERTIES = (
        DESCRIPTION, LEVEL, MEMBERS, NAME, TYPE,
    ) = (
        'description', 'level', 'members', 'name', 'type',
    )

    ATTRIBUTES = (
        DESCRIPTION_ATTR, LEVEL_ATTR, MEMBERS_ATTR, NAME_ATTR, TYPE_ATTR,
    ) = (
        'description', 'level', 'members', 'name', 'type',
    )

    properties_schema = {
        DESCRIPTION: properties.Schema(
            properties.Schema.STRING,
            _('Description of group.'),
            required=False,
            update_allowed=True
        ),
        LEVEL: properties.Schema(
            properties.Schema.STRING,
            _('Level of relationship between resources.'),
            constraints=[
                constraints.AllowedValues([HOST, RACK])
            ],
            required=True,
            update_allowed=False
        ),
        MEMBERS: properties.Schema(
            properties.Schema.LIST,
            _('List of one or more member IDs allowed to use this group.'),
            required=False,
            update_allowed=True
        ),
        NAME: properties.Schema(
            properties.Schema.STRING,
            _('Name of group.'),
            constraints=[
                constraints.CustomConstraint('valet.group_name'),
            ],
            required=True,
            update_allowed=False
        ),
        TYPE: properties.Schema(
            properties.Schema.STRING,
            _('Type of group.'),
            constraints=[
                constraints.AllowedValues([AFFINITY, DIVERSITY, EXCLUSIVITY])
            ],
            required=True,
            update_allowed=False
        ),
    }

    # To maintain Kilo compatibility, do not use "type" here.
    attributes_schema = {
        DESCRIPTION_ATTR: attributes.Schema(
            _('Description of group.')
        ),
        LEVEL_ATTR: attributes.Schema(
            _('Level of relationship between resources.')
        ),
        MEMBERS_ATTR: attributes.Schema(
            _('List of one or more member IDs allowed to use this group.')
        ),
        NAME_ATTR: attributes.Schema(
            _('Name of group.')
        ),
        TYPE_ATTR: attributes.Schema(
            _('Type of group.')
        ),
    }

    def __init__(self, name, json_snippet, stack):
        """Initialization"""
        super(Group, self).__init__(name, json_snippet, stack)
        self.api = valet_api.ValetAPI()
        self.api.auth_token = self.context.auth_token
        self._group = None

    def _get_resource(self):
        if self._group is None and self.resource_id is not None:
            try:
                groups = self.api.groups_get(self.resource_id)
                if groups:
                    self._group = groups.get('group', {})
            except exceptions.NotFoundError:
                # Ignore Not Found and fall through
                pass

        return self._group

    def _group_name(self):
        """Group Name"""
        name = self.properties.get(self.NAME)
        if name:
            return name

        return self.physical_resource_name()

    def FnGetRefId(self):
        """Get Reference ID"""
        return self.physical_resource_name_or_FnGetRefId()

    def handle_create(self):
        """Create resource"""
        if self.resource_id is not None:
            # TODO(jdandrea): Delete the resource and re-create?
            # I've seen this called if a stack update fails.
            # For now, just leave it be.
            return

        group_type = self.properties.get(self.TYPE)
        level = self.properties.get(self.LEVEL)
        description = self.properties.get(self.DESCRIPTION)
        members = self.properties.get(self.MEMBERS)
        group_args = {
            'name': self._group_name(),
            'type': group_type,
            'level': level,
            'description': description,
        }
        kwargs = {
            'group': group_args,
        }

        # Create the group first. If an exception is
        # thrown by groups_create, let Heat catch it.
        group = self.api.groups_create(**kwargs)
        if group is not None and 'id' in group:
            self.resource_id_set(group.get('id'))
        else:
            raise heat_exception.ResourceNotAvailable(
                resource_name=self._group_name())

        # Now add members to the group
        if members:
            kwargs = {
                'group_id': self.resource_id,
                'members': members,
            }
            err = None
            group = None
            try:
                group = self.api.groups_members_update(**kwargs)
            except exceptions.PythonAPIError as err:
                # Hold on to err. We'll use it in a moment.
                pass
            finally:
                if group is None:
                    # Members couldn't be added.
                    # Delete the group we just created.
                    kwargs = {
                        'group_id': self.resource_id,
                    }
                    try:
                        self.api.groups_delete(**kwargs)
                    except exceptions.PythonAPIError:
                        # Ignore group deletion errors.
                        pass
                    if err:
                        raise err
                    else:
                        raise heat_exception.ResourceNotAvailable(
                            resource_name=self._group_name())

    def handle_update(self, json_snippet, templ_diff, prop_diff):
        """Update resource"""
        if prop_diff:
            if self.DESCRIPTION in prop_diff:
                description = prop_diff.get(
                    self.DESCRIPTION, self.properties.get(self.DESCRIPTION))

                # If an exception is thrown by groups_update,
                # let Heat catch it. Let the state remain as-is.
                kwargs = {
                    'group_id': self.resource_id,
                    'group': {
                        self.DESCRIPTION: description,
                    },
                }
                self.api.groups_update(**kwargs)

            if self.MEMBERS in prop_diff:
                members_update = prop_diff.get(self.MEMBERS, [])
                members = self.properties.get(self.MEMBERS, [])

                # Delete original members not in updated list.
                # If an exception is thrown by groups_member_delete,
                # let Heat catch it. Let the state remain as-is.
                member_deletions = set(members) - set(members_update)
                for member_id in member_deletions:
                    kwargs = {
                        'group_id': self.resource_id,
                        'member_id': member_id,
                    }
                    self.api.groups_member_delete(**kwargs)

                # Add members_update members not in original list.
                # If an exception is thrown by groups_members_update,
                # let Heat catch it. Let the state remain as-is.
                member_additions = set(members_update) - set(members)
                if member_additions:
                    kwargs = {
                        'group_id': self.resource_id,
                        'members': list(member_additions),
                    }
                    self.api.groups_members_update(**kwargs)

            # Clear cached group info
            self._group = None

    def handle_delete(self):
        """Delete resource"""
        if self.resource_id is None:
            return

        kwargs = {
            'group_id': self.resource_id,
        }

        group = self._get_resource()
        if group:
            # First, delete all the members
            members = group.get('members', [])
            if members:
                try:
                    self.api.groups_members_delete(**kwargs)
                except exceptions.NotFoundError:
                    # Ignore Not Found and fall through
                    pass

        # Now delete the group.
        try:
            response = self.api.groups_delete(**kwargs)
            if type(response) is dict and len(response) == 0:
                self.resource_id_set(None)
                self._group = None
        except exceptions.NotFoundError:
            # Ignore Not Found and fall through
            pass

    def _resolve_attribute(self, key):
        """Resolve Attributes"""
        if self.resource_id is None:
            return
        group = self._get_resource()
        if group:
            attributes = {
                self.NAME_ATTR: group.get(self.NAME),
                self.TYPE_ATTR: group.get(self.TYPE),
                self.LEVEL_ATTR: group.get(self.LEVEL),
                self.DESCRIPTION_ATTR: group.get(self.DESCRIPTION),
                self.MEMBERS_ATTR: group.get(self.MEMBERS, []),
            }
            return attributes.get(key)


def resource_mapping():
    """Map names to resources."""
    return {
        'OS::Valet::Group': Group,
    }
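As an aside (not part of the diff), the member reconciliation performed by
handle_update() above boils down to two set differences; the IDs below are
illustrative only.

    # Previous and updated member lists (project/tenant IDs); values are made up.
    members = ["project-a", "project-b", "project-c"]
    members_update = ["project-b", "project-c", "project-d"]

    # Members present before but absent from the update are deleted one by one
    # via groups_member_delete(); new members are added in a single
    # groups_members_update() call.
    member_deletions = set(members) - set(members_update)
    member_additions = set(members_update) - set(members)

    assert member_deletions == {"project-a"}
    assert member_additions == {"project-d"}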
plugins/heat/group_assignment.py (new file, 95 lines)
@@ -0,0 +1,95 @@

#
# Copyright (c) 2014-2017 AT&T Intellectual Property
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""GroupAssignment Heat Resource Plugin"""

import uuid

from heat.common.i18n import _
from heat.engine import properties
from heat.engine import resource
from heat.engine.resources import scheduler_hints as sh
from oslo_log import log as logging

LOG = logging.getLogger(__name__)


class GroupAssignment(resource.Resource, sh.SchedulerHintsMixin):
    """Valet Group Assignment Resource

    A Group Assignment describes one or more resources (e.g., Server)
    assigned to a particular group.

    Caution: It is possible to declare multiple GroupAssignment resources
    referring to the same servers, which can lead to problems when one
    GroupAssignment is updated and a duplicate server reference is removed.

    This resource is purely informational in nature and makes no changes
    to heat, nova, or cinder. Instead, the Valet Heat stack lifecycle plugin
    intercepts Heat's create/update/delete operations and invokes valet-api
    as needed.
    """

    PROPERTIES = (
        GROUP, RESOURCES,
    ) = (
        'group', 'resources',
    )

    properties_schema = {
        GROUP: properties.Schema(
            properties.Schema.STRING,
            _('Group reference.'),
            update_allowed=False
        ),
        RESOURCES: properties.Schema(
            properties.Schema.LIST,
            _('List of one or more resource IDs.'),
            required=True,
            update_allowed=True
        ),
    }

    def handle_create(self):
        """Create resource"""
        resource_id = str(uuid.uuid4())
        self.resource_id_set(resource_id)

    def handle_update(self, json_snippet, templ_diff, prop_diff):
        """Update resource"""
        self.resource_id_set(self.resource_id)

    def handle_delete(self):
        """Delete resource"""
        self.resource_id_set(None)


# TODO(jdandrea): This resource is being pre-empted (not deprecated)
# and is unavailable in Valet 1.0.
#
# To assign a server to a Valet Group, specify metadata within that
# server's OS::Nova::Server resource properties like so. Declare
# groups in a separate template/stack using OS::Valet::Group.
#
#   properties:
#     metadata:
#       valet:
#         groups: [group1, group2, ..., groupN]
#
# def resource_mapping():
#     """Map names to resources."""
#     return {
#         'OS::Valet::GroupAssignment': GroupAssignment,
#     }
plugins/i18n.py (new file, 37 lines)
@@ -0,0 +1,37 @@

#
# Copyright (c) 2014-2017 AT&T Intellectual Property
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""oslo.i18n integration module"""

# See http://docs.openstack.org/developer/oslo.i18n/usage.html

import oslo_i18n

DOMAIN = "valet"

_translators = oslo_i18n.TranslatorFactory(domain=DOMAIN)

# The primary translation function using the well-known name "_"
_ = _translators.primary


def translate(value, user_locale):
    """Translate"""
    return oslo_i18n.translate(value, user_locale)


def get_available_languages():
    """Get Available Languages"""
    return oslo_i18n.get_available_languages(DOMAIN)
449
plugins/plugins/heat/plugins.py
Normal file
449
plugins/plugins/heat/plugins.py
Normal file
@ -0,0 +1,449 @@
|
|||||||
|
#
|
||||||
|
# Copyright (c) 2014-2017 AT&T Intellectual Property
|
||||||
|
#
|
||||||
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
|
# you may not use this file except in compliance with the License.
|
||||||
|
# You may obtain a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||||
|
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||||
|
# See the License for the specific language governing permissions and
|
||||||
|
# limitations under the License.
|
||||||
|
|
||||||
|
"""Valet Plugins for Heat"""
|
||||||
|
|
||||||
|
import string
|
||||||
|
import uuid
|
||||||
|
|
||||||
|
from heat.engine import lifecycle_plugin
|
||||||
|
from oslo_config import cfg
|
||||||
|
from oslo_log import log as logging
|
||||||
|
|
||||||
|
from plugins.common import valet_api
|
||||||
|
from plugins import exceptions
|
||||||
|
|
||||||
|
CONF = cfg.CONF
|
||||||
|
LOG = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
|
||||||
|
def validate_uuid4(uuid_string):
|
||||||
|
"""Validate that a UUID string is in fact a valid uuid4.
|
||||||
|
|
||||||
|
Happily, the uuid module does the actual checking for us.
|
||||||
|
It is vital that the 'version' kwarg be passed to the
|
||||||
|
UUID() call, otherwise any 32-character hex string
|
||||||
|
is considered valid.
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
val = uuid.UUID(uuid_string, version=4)
|
||||||
|
except ValueError:
|
||||||
|
# If it's a value error, then the string
|
||||||
|
# is not a valid hex code for a UUID.
|
||||||
|
return False
|
||||||
|
|
||||||
|
# If the uuid_string is a valid hex code, # but an invalid uuid4,
|
||||||
|
# the UUID.__init__ will convert it to a valid uuid4.
|
||||||
|
# This is bad for validation purposes.
|
||||||
|
|
||||||
|
# uuid_string will sometimes have separators.
|
||||||
|
return string.replace(val.hex, '-', '') == \
|
||||||
|
string.replace(uuid_string, '-', '')
|
||||||
|
|
||||||
|
|
||||||
|
def valid_uuid_for_resource(resource):
|
||||||
|
"""Return uuid if resource has a uuid attribute and a valid uuid4"""
|
||||||
|
if hasattr(resource, 'uuid') and \
|
||||||
|
resource.uuid and validate_uuid4(resource.uuid):
|
||||||
|
return resource.uuid
|
||||||
|
|
||||||
|
|
||||||
|
class ValetLifecyclePlugin(lifecycle_plugin.LifecyclePlugin):
|
||||||
|
"""Base class for pre-op and post-op work on a stack.
|
||||||
|
|
||||||
|
Implementations should extend this class and override the methods.
|
||||||
|
"""
|
||||||
|
|
||||||
|
_RESOURCE_TYPES = (
|
||||||
|
VALET_GROUP, VALET_GROUP_ASSIGNMENT, NOVA_SERVER,
|
||||||
|
) = (
|
||||||
|
"OS::Valet::Group", "OS::Valet::GroupAssignment", "OS::Nova::Server",
|
||||||
|
)
|
||||||
|
|
||||||
|
VALET_RESOURCE_TYPES = [
|
||||||
|
VALET_GROUP, VALET_GROUP_ASSIGNMENT,
|
||||||
|
]
|
||||||
|
OS_RESOURCE_TYPES = [
|
||||||
|
NOVA_SERVER,
|
||||||
|
]
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
|
""""Initialization"""
|
||||||
|
self.api = valet_api.ValetAPI()
|
||||||
|
self.hints_enabled = False
|
||||||
|
|
||||||
|
# This plugin can only work if stack_scheduler_hints is true
|
||||||
|
CONF.import_opt('stack_scheduler_hints', 'heat.common.config')
|
||||||
|
self.hints_enabled = CONF.stack_scheduler_hints
|
||||||
|
|
||||||
|
def _parse_stack(self, stack, original_stack={}):
|
||||||
|
"""Fetch resources out of the stack. Does not traverse."""
|
||||||
|
resources = {}
|
||||||
|
|
||||||
|
# We used to call Stack.preview_resources() and this used to
|
||||||
|
# be a recursive method. However, this breaks down significantly
|
||||||
|
# when nested stacks are involved. Previewing and creating/updating
|
||||||
|
# a stack are divergent operations. There are also side-effects for
|
||||||
|
# a preview-within-create operation that only happen to manifest
|
||||||
|
# with nested stacks.
|
||||||
|
|
||||||
|
valet_group_declared = False
|
||||||
|
resource_declared = False
|
||||||
|
for name, resource in stack.resources.iteritems():
|
||||||
|
# VALET_GROUP must not co-exist with other resources
|
||||||
|
# in the same stack.
|
||||||
|
resource_type = resource.type()
|
||||||
|
if resource_type == self.VALET_GROUP:
|
||||||
|
valet_group_declared = True
|
||||||
|
else:
|
||||||
|
resource_declared = True
|
||||||
|
if valet_group_declared and resource_declared:
|
||||||
|
raise exceptions.GroupResourceNotIsolatedError()
|
||||||
|
|
||||||
|
# Skip over resource types we aren't interested in.
|
||||||
|
# This test used to cover VALET_RESOURCE_TYPES as well.
|
||||||
|
# VALET_GROUP is implemented as a bona fide plugin,
|
||||||
|
# so that doesn't count, and VALET_GROUP_ASSIGNMENT
|
||||||
|
# has been pre-empted before v1.0 (not deprecated), so
|
||||||
|
# that doesn't count either, at least for now. There's
|
||||||
|
# a good chance it will re-appear in the future so
|
||||||
|
# the logic for Group Assignments remains within this loop.
|
||||||
|
if resource_type not in self.OS_RESOURCE_TYPES:
|
||||||
|
continue
|
||||||
|
|
||||||
|
# Find the original resource. We may need it in two cases.
|
||||||
|
# This may also end up being effectively null.
|
||||||
|
original_res = None
|
||||||
|
if original_stack:
|
||||||
|
original_res = original_stack.resources.get(name, {})
|
||||||
|
|
||||||
|
key = valid_uuid_for_resource(resource)
|
||||||
|
if not key:
|
||||||
|
# Here's the first case where we can use original_res.
|
||||||
|
#
|
||||||
|
# We can't proceed without a UUID in OS-facing resources.
|
||||||
|
# During a stack update, the current resources don't have
|
||||||
|
                # their UUIDs populated, but the original stack does.
                # If that has been passed in, use that as a hint.
                key = valid_uuid_for_resource(original_res)
            if not key:
                # Given that we're no longer traversing stack levels,
                # it would be surprising if this exception is thrown
                # in the stack create case (original stack is not
                # passed in so it's easy to spot).
                if stack and not original_stack:
                    # Keeping this here as a safety net and a way to
                    # indicate the specific problem.
                    raise exceptions.ResourceUUIDMissingError(resource)

                # Unfortunately, stack updates are beyond messy.
                # Heat is inconsistent in what context it has
                # in various resources at update time.
                #
                # To mitigate, we create a mock orch_id and send
                # that to valet-api instead. It can be resolved to the
                # proper orch_id by valet-api at location scheduling time
                # (e.g., nova-scheduler, cinder-scheduler) by using the
                # stack lifecycle scheduler hints as clues. That also means
                # valet-api must now have detailed knowledge of those hints
                # to figure out which placement to actually use/adjust.
                key = str(uuid.uuid4())
                LOG.warn(("Orchestration ID not found for resource "
                          "named {}. Using mock orchestration "
                          "ID {}.").format(resource.name, key))

            # Parse the resource, then ensure all keys at the top level
            # are lowercase. (Heat makes some of them uppercase.)
            parsed = resource.parsed_template()
            parsed = dict((k.lower(), v) for k, v in parsed.iteritems())

            parsed['name'] = name

            # Resolve identifiers to resources with valid orch ids.
            # The referenced resources MUST be at the same stack level.
            if resource.type() == self.VALET_GROUP_ASSIGNMENT:
                properties = parsed.get('properties', {})
                uuids = []
                # The GroupAssignment resource list consists of
                # Heat resource names or IDs (physical UUIDs).
                # Find that resource in the stack or original stack.
                # The resource must have a uuid (orchestration id).
                for ref_identifier in properties.get('resources'):
                    ref_resource = self._resource_with_uuid_for_identifier(
                        stack.resources, ref_identifier)
                    if not ref_resource and original_stack:
                        ref_resource = \
                            self._resource_with_uuid_for_identifier(
                                original_stack.resources, ref_identifier)
                    if not ref_resource:
                        msg = "Resource {} with an assigned " \
                              "orchestration id not found"
                        msg = msg.format(ref_identifier)
                        raise exceptions.ResourceNotFoundError(
                            ref_identifier, msg)
                    uuids.append(ref_resource.uuid)
                properties['resources'] = uuids

            # Add whatever stack lifecycle scheduler hints we can, in
            # advance of when they would normally be added to the regular
            # scheduler hints, provided the _scheduler_hints mixin exists.
            # Give preference to the original resource, if it exists, as it
            # is always more accurate.
            hints = None
            if hasattr(original_res, '_scheduler_hints'):
                hints = original_res._scheduler_hints({})
            elif hasattr(resource, '_scheduler_hints'):
                hints = resource._scheduler_hints({})
            if hints:
                # Remove keys with empty values and store as metadata
                parsed['metadata'] = \
                    dict((k, v) for k, v in hints.iteritems() if v)
            else:
                parsed['metadata'] = {}

            # If the physical UUID is already known, store that too.
            if resource.resource_id is not None:
                parsed['resource_id'] = resource.resource_id
            elif original_res and hasattr(original_res, 'resource_id'):
                # Here's the second case where we can use original_res.
                #
                # It's not a showstopper if we can't find the physical
                # UUID, but if it wasn't in the updated resource, we
                # will try to see if it's in the original one, if any.
                parsed['resource_id'] = original_res.resource_id

            LOG.info(("Adding Resource, Type: {}, "
                      "Name: {}, UUID: {}").format(
                resource.type(), name, resource.uuid))
            resources[key] = parsed
        return resources
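
For orientation, the mapping returned here might look roughly like the following for a single-server stack. This is only a sketch; the IDs, names, and hint values shown are hypothetical, and the hint keys are the ones referenced elsewhere in this plugin.

```python
# Sketch only: keys are orchestration IDs (or mock UUIDs when Heat did not
# supply one); each value is the lowercased parsed template plus the resource
# name, the lifecycle-scheduler-hint metadata, and the physical resource_id
# when it is already known. All concrete values are hypothetical.
resources = {
    "ORCH-ID-OR-MOCK-UUID": {
        "type": "OS::Nova::Server",
        "name": "server1",
        "properties": {"flavor": "m1.small"},
        "metadata": {
            "heat_root_stack_id": "ROOT-STACK-ID",
            "heat_stack_name": "my_stack",
            "heat_resource_name": "server1",
            "heat_path_in_stack": [(None, "my_stack")],   # hypothetical path
        },
        "resource_id": "PHYSICAL-UUID-IF-KNOWN",
    },
}
```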

    # TODO(jdandrea) DO NOT USE. This code has changes that have not yet
    # been committed, but are potentially valuable. Leaving it here in the
    # event bug 1516807 is resolved and we can cross-reference resources
    # in nested stacks.
    def _parse_stack_preview(self, dest, preview):
        """Walk the preview list (possibly nested)

        Extract parsed template dicts. Store mods in a flat dict.
        """

        # KEEP THIS so that the method is not usable (on purpose).
        return

        # The preview is either a list or not.
        if not isinstance(preview, list):
            # Heat does not assign orchestration UUIDs to
            # all resources, so we must make our own sometimes.
            # This also means nested templates can't be supported yet.

            # FIXME(jdandrea): Either propose uniform use of UUIDs within
            # Heat (related to Heat bug 1516807), or store
            # resource UUIDs within the parsed template and
            # use only Valet-originating UUIDs as keys.
            if hasattr(preview, 'uuid') and \
                    preview.uuid and validate_uuid4(preview.uuid):
                key = preview.uuid
            else:
                # TODO(jdandrea): Heat must be authoritative for UUIDs.
                # This will require a change to heat-engine.
                # This is one culprit: heat/db/sqlalchemy/models.py#L279
                # Note that stacks are stored in the DB just-in-time.
                key = str(uuid.uuid4())
            parsed = preview.parsed_template()
            parsed['name'] = preview.name

            # Add whatever stack lifecycle scheduler hints we can, in
            # advance of when they would normally be added to the regular
            # scheduler hints, provided the _scheduler_hints mixin exists.
            if hasattr(preview, '_scheduler_hints'):
                hints = preview._scheduler_hints({})

                # Remove keys with empty values and store as metadata
                parsed['metadata'] = \
                    dict((k, v) for k, v in hints.iteritems() if v)
            else:
                parsed['metadata'] = {}

            # TODO(jdandrea): Replace resource referenced names w/UUIDs.

            # Ensure all keys at the top level are lowercase (heat makes
            # some of them uppercase) before storing.
            parsed = dict((k.lower(), v) for k, v in parsed.iteritems())

            # TODO(jdandrea): Need to resolve the names within
            # OS::Valet::GroupAssignment to corresponding UUIDs.
            dest[key] = parsed
        else:
            for item in preview:
                self._parse_stack_preview(dest, item)
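
Though disabled, the walk above expects input shaped something like the following (a hypothetical sketch; the resource objects would come from `stack.preview_resources()`):

```python
# Hypothetical shape of a stack preview: Heat Resource objects mixed with
# nested lists for nested stacks. The (disabled) walk above would flatten
# this into the single `dest` dict, keyed by orchestration (or generated) UUID.
preview = [
    root_server,                  # a heat Resource object
    [nested_stack_server,         # resources from a nested stack ...
     [deeply_nested_server]],     # ... which may nest further
]
```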

    # TODO(jdandrea): DO NOT USE. This code has not yet been committed,
    # but is potentially valuable. Leaving it here in the event bug 1516807
    # is resolved and we can cross-reference resources in nested stacks.
    def _resolve_group_assignment_resources(self, resources):
        """Resolve Resource Names in GroupAssignments

        This presumes a resource is being referenced from within
        the same template/stack. No cross-template/stack references.
        """

        # KEEP THIS so that the method is not usable (on purpose).
        return

        # TODO(jdandrea): Use pyobjc-core someday. KVC simplifies this.
        # Ref: https://bitbucket.org/ronaldoussoren/pyobjc/issues/187/
        # pyobjc-core-on-platforms-besides-macos
        for orch_id, resource in resources.iteritems():
            if resource.get('type') == self.VALET_GROUP_ASSIGNMENT:
                # Grab our stack name and path-in-stack
                metadata = resource.get('metadata', {})
                stack_name = metadata.get('heat_stack_name')
                path_in_stack = metadata.get('heat_path_in_stack')

                # For each referenced Heat resource name ...
                properties = resource.get('properties')
                ref_resources = properties.get('resources', [])
                ref_orch_ids = []
                for ref_res_name in ref_resources:
                    # Search all the resources for a mapping.
                    for _orch_id, _resource in resources.iteritems():
                        # Grab this resource's stack name and path-in-stack
                        _metadata = _resource.get('metadata', {})
                        _stack_name = _metadata.get('heat_stack_name')
                        _path_in_stack = _metadata.get('heat_path_in_stack')

                        # Grab the Heat resource name
                        _res_name = _resource.get('name')

                        # If everything matches, we found our orch_id
                        if ref_res_name == _res_name and \
                                stack_name == _stack_name and \
                                path_in_stack == _path_in_stack:
                            ref_orch_ids.append(_orch_id)
                            break
                properties['resources'] = ref_orch_ids
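
The matching rule above can be summarized as a small predicate. This is only an illustrative restatement of the loop's condition, with hypothetical helper and argument names:

```python
def _refers_to_same_resource(ref_name, group_metadata, candidate):
    """Sketch of the matching rule used above (illustrative helper).

    A referenced name resolves to a candidate resource only when the
    resource name, heat_stack_name, and heat_path_in_stack all agree.
    """
    meta = candidate.get('metadata', {})
    return (ref_name == candidate.get('name') and
            group_metadata.get('heat_stack_name') ==
            meta.get('heat_stack_name') and
            group_metadata.get('heat_path_in_stack') ==
            meta.get('heat_path_in_stack'))
```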

    def _resource_with_uuid_for_identifier(self, resources, identifier):
        """Return resource matching an identifier if it has a valid uuid.

        uuid means "orchestration id"
        resource_id means "physical id"
        identifier is either a heat resource name or a resource_id.
        identifier is never a uuid (orchestration id).
        """
        if type(resources) is not dict:
            return
        for resource in resources.values():
            if resource.name == identifier or \
                    resource.resource_id == identifier:
                if valid_uuid_for_resource(resource):
                    return resource

    def do_pre_op(self, cnxt, stack, current_stack=None, action=None):
        """Method to be run by heat before stack operations."""
        if not self.hints_enabled:
            LOG.warn("stack_scheduler_hints is False, skipping stack")
            return

        # We can now set the auth token in advance, so let's do that.
        self.api.auth_token = cnxt.auth_token

        # Take note of the various status states as they pass through here.
        if stack:
            original_stack_status = "{} ({})".format(stack.name, stack.status)
        else:
            original_stack_status = "n/a"
        if current_stack:
            updated_stack_status = "{} ({})".format(current_stack.name,
                                                    current_stack.status)
        else:
            updated_stack_status = "n/a"
        msg = "Stack Lifecycle Action: {}, Original: {}, Updated: {}"
        LOG.debug(msg.format(
            action, original_stack_status, updated_stack_status))

        if action == 'DELETE' and stack and stack.status == 'IN_PROGRESS':
            # TODO(jdandrea): Consider moving this action to do_post_op.
            LOG.info(('Requesting plan delete for '
                      'stack "{}" ({})').format(stack.name, stack.id))
            try:
                self.api.plans_delete(stack)
            except exceptions.NotFoundError:
                # Be forgiving only if the plan wasn't found.
                # It might have been deleted via a direct API call.
                LOG.warn("Plan not found, proceeding with stack delete.")
        elif action == 'UPDATE' and stack \
                and current_stack and current_stack.status == 'IN_PROGRESS':
            LOG.info(('Building plan update request for '
                      'stack "{}" ({})').format(stack.name, stack.id))

            # For an update, stack has the *original* resource declarations.
            # current_stack has the updated resource declarations.
            resources = self._parse_stack(stack)
            if not resources:
                # Original stack had no resources for Valet. That's ok.
                LOG.warn("No prior resources found, proceeding")
            try:
                current_resources = self._parse_stack(current_stack, stack)
                if not current_resources:
                    # Updated stack has no resources for Valet. Also ok.
                    LOG.warn("No current resources found, skipping stack")
                    return
            except exceptions.ResourceUUIDMissingError:
                current_resources = {}

            plan = {
                'action': 'update',  # 'update' vs. 'migrate'
                'original_resources': resources,
                'resources': current_resources,
            }
            try:
                # Treat this as an update.
                self.api.plans_update(stack, plan)
            except exceptions.NotFoundError:
                # Valet hasn't seen this plan before (brownfield scenario).
                # Treat it as a create instead, using the current resources.
                LOG.warn("Plan not found, creating a new plan")
                plan = {
                    'plan_name': stack.name,
                    'stack_id': stack.id,
                    'resources': current_resources,
                }
                self.api.plans_create(stack, plan)
        elif action == 'CREATE' and stack and stack.status == 'IN_PROGRESS':
            LOG.info(('Building plan create request for '
                      'stack "{}" ({})').format(stack.name, stack.id))

            resources = self._parse_stack(stack)
            if not resources:
                LOG.warn("No resources found, skipping stack")
                return

            plan = {
                'plan_name': stack.name,
                'stack_id': stack.id,
                'resources': resources,
            }
            self.api.plans_create(stack, plan)

    def do_post_op(self, cnxt, stack, current_stack=None, action=None,
                   is_stack_failure=False):
        """Method run by heat after stack ops, including failures."""
        pass

    def get_ordinal(self):
        """An ordinal used to order class instances for pre/post ops."""
        return 100
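
Taken together, the pre-op branches above hand valet-api one of two plan payload shapes. A sketch, with hypothetical field values:

```python
# Create (and the brownfield fallback during update): a brand-new plan.
create_plan = {
    'plan_name': stack.name,        # e.g. "my_stack" (hypothetical)
    'stack_id': stack.id,           # Heat stack UUID
    'resources': resources,         # output of _parse_stack(stack)
}

# Update: the original and updated resource mappings, side by side.
update_plan = {
    'action': 'update',             # 'update' vs. 'migrate'
    'original_resources': resources,           # from the original stack
    'resources': current_resources,            # from the updated stack
}
```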

380
plugins/plugins/nova/valet_filter.py
Normal file
@ -0,0 +1,380 @@
#
# Copyright (c) 2014-2017 AT&T Intellectual Property
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Valet Nova Scheduler Filter"""

import json

from keystoneclient.v2_0 import client
from nova.scheduler import filters
from oslo_config import cfg
from oslo_log import log as logging

from plugins.common import valet_api
from plugins import exceptions
from plugins.i18n import _

CONF = cfg.CONF
LOG = logging.getLogger(__name__)


class ValetFilter(filters.BaseHostFilter):
    """Filter on Valet assignment."""

    # Checked by Nova. Host state does not change within a request.
    run_filter_once_per_request = True

    def __init__(self):
        """Initializer"""
        self.api = valet_api.ValetAPI()
        self._register_opts()

    def _authorize(self):
        """Authorize against Keystone"""
        kwargs = {
            'username': CONF.valet.admin_username,
            'password': CONF.valet.admin_password,
            'tenant_name': CONF.valet.admin_tenant_name,
            'auth_url': CONF.valet.admin_auth_url,
        }
        keystone_client = client.Client(**kwargs)
        self.api.auth_token = keystone_client.auth_token

    def _first_orch_id_from_placements_search(self, response={}):
        """Return first orchestration id from placements search response.

        Return None if not found.
        """
        if type(response) is dict:
            placements = response.get('placements', [])
            if placements:
                return placements[0].get('id')
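
The placements search response this helper consumes is expected to look something like the following sketch; only the `placements` list and each placement's `id` (the orchestration ID) are read here, and the other fields shown are illustrative assumptions.

```python
# Hypothetical valet-api placements search response.
response = {
    "placements": [
        {
            "id": "ORCH-ID",            # the only field consumed above
            "resource_id": "PHYSICAL-ID",   # illustrative
            "location": "host-01",          # illustrative
        },
    ],
}
```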

    def _is_same_host(self, host, location):
        """Returns true if host matches location"""
        return host == location

    def _location_for_resource(self, hosts, filter_properties):
        """Determine optimal location for a given resource

        Returns a tuple:

        location is the determined location.
        res_id is the physical resource id.
        orch_id is the orchestration id.
        ad_hoc is True if the placement was made on-the-fly.
        """
        orch_id = None
        location = None
        ad_hoc = False

        # Get the image, instance properties, physical id, and hints
        request_spec = filter_properties.get('request_spec')
        image = request_spec.get('image')
        instance_properties = request_spec.get('instance_properties')
        res_id = instance_properties.get('uuid')
        hints = filter_properties.get('scheduler_hints', {})

        LOG.info(("Resolving Orchestration ID "
                  "for resource {}.").format(res_id))

        if hints:
            # Plan A: Resolve Orchestration ID using scheduler hints.
            action = 'reserve'
            orch_id = self._orch_id_from_scheduler_hints(hints)

        if not orch_id:
            # Plan B: Try again using Nova-assigned resource id.
            action = 'replan'
            orch_id = self._orch_id_from_resource_id(res_id)

        if orch_id:
            # If Plan A or B resulted in an Orchestration ID ...
            if action == 'replan':
                message = ("Forcing replan for Resource ID: {}, "
                           "Orchestration ID: {}")
            else:
                message = ("Reserving with possible replan "
                           "for Resource ID: {}, "
                           "Orchestration ID: {}")
            LOG.info(message.format(res_id, orch_id))
            location = self._location_from_reserve_or_replan(
                orch_id, res_id, hosts, action)
        else:
            # Plan C: It's ad-hoc plan/placement time!
            # FIXME(jdandrea): Shouldn't reserving occur after this?
            LOG.info(("Orchestration ID not found. "
                      "Performing ad-hoc placement."))
            ad_hoc = True

            # We'll need the flavor, image name, metadata, and AZ (if any)
            instance_type = filter_properties.get('instance_type')
            flavor = instance_type.get('name')
            image_name = image.get('name')
            metadata = instance_properties.get('metadata')
            availability_zone = instance_properties.get('availability_zone')

            # The metadata may be a JSON string vs a dict.
            # If it's a string, try converting it to a dict.
            try:
                valet_meta = metadata.get('valet')
                if isinstance(valet_meta, basestring):
                    metadata['valet'] = json.loads(valet_meta)
            except Exception:
                # Leave it alone then.
                pass

            location = self._location_from_ad_hoc_plan(
                res_id, flavor, image_name, metadata,
                availability_zone, hosts)
        return (location, res_id, orch_id, ad_hoc)
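
The `filter_properties` consumed above carry at least the following fields. This hypothetical example shows only the keys this method reads, mirroring what the unit tests further below construct:

```python
# Hypothetical filter_properties as seen by _location_for_resource().
filter_properties = {
    'request_spec': {
        'image': {'name': 'ubuntu-16.04'},
        'instance_properties': {
            'uuid': 'NOVA-ASSIGNED-RESOURCE-ID',
            'metadata': {},                 # may hold a 'valet' JSON string
            'availability_zone': None,
        },
    },
    'scheduler_hints': {'heat_resource_uuid': 'ORCH-ID'},  # Plan A input
    'instance_type': {'name': 'm1.small'},
}
```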

    def _location_from_ad_hoc_plan(self, res_id, flavor,
                                   image=None, metadata=None,
                                   availability_zone=None,
                                   hosts=None):
        """Determine host location on-the-fly (ad-hoc).

        Create an ad hoc plan (and ad-hoc placement),
        then return the host location if found.
        """

        # Because this wasn't orchestrated, there's no stack.
        # We're going to compose a resource as if there was one.
        # In this particular case we use the nova-assigned
        # resource id as both the orchestration and stack id.
        resources = {
            res_id: {
                "properties": {
                    "flavor": flavor,
                },
                "type": "OS::Nova::Server",
                "name": "ad_hoc_instance"
            }
        }
        res_properties = resources[res_id]["properties"]
        if image:
            res_properties["image"] = image
        if metadata:
            res_properties["metadata"] = metadata
        if availability_zone:
            res_properties["availability_zone"] = availability_zone

        # FIXME(jdandrea): Constant should not be here, may not even be used
        timeout = 60
        plan = {
            'plan_name': res_id,
            'stack_id': res_id,
            'locations': hosts,
            'timeout': '%d sec' % timeout,
            'resources': resources,
        }
        kwargs = {
            'stack': None,
            'plan': plan,
        }
        response = self.api.plans_create(**kwargs)
        if response and response.get('plan'):
            plan = response['plan']
            if plan and plan.get('placements'):
                placements = plan['placements']
                if placements.get(res_id):
                    placement = placements.get(res_id)
                    location = placement['location']
                    return location
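
For reference, the ad-hoc request built above and the response it unpacks look roughly like this sketch (all concrete values hypothetical):

```python
# Hypothetical ad-hoc plan request: the Nova-assigned resource ID doubles as
# both the plan/stack ID and the resource's key.
plan_request = {
    'plan_name': 'NOVA-RESOURCE-ID',
    'stack_id': 'NOVA-RESOURCE-ID',
    'locations': ['host-01', 'host-02'],     # candidate hosts from Nova
    'timeout': '60 sec',
    'resources': {
        'NOVA-RESOURCE-ID': {
            'type': 'OS::Nova::Server',
            'name': 'ad_hoc_instance',
            'properties': {'flavor': 'm1.small'},
        },
    },
}

# Hypothetical response shape; only plan.placements[res_id].location is read.
plan_response = {
    'plan': {
        'placements': {
            'NOVA-RESOURCE-ID': {'location': 'host-01'},
        },
    },
}
```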

    def _location_from_reserve_or_replan(self, orch_id, res_id,
                                         hosts=None, action='reserve'):
        """Reserve placement with possible replan, or force replan."""
        kwargs = {
            'orch_id': orch_id,
            'res_id': res_id,
            'hosts': hosts,
            'action': action,
        }
        response = self.api.placement(**kwargs)

        if response and response.get('placement'):
            placement = response['placement']
            if placement.get('location'):
                location = placement['location']
                return location

    def _orch_id_from_resource_id(self, res_id):
        """Find Orchestration ID via Nova-assigned Resource ID."""
        kwargs = {
            'query': {
                'resource_id': res_id,
            },
        }
        LOG.info(("Searching by Resource ID: {}.").format(res_id))
        response = self.api.placements_search(**kwargs)
        orch_id = \
            self._first_orch_id_from_placements_search(response)
        return orch_id

    def _orch_id_from_scheduler_hints(self, hints={}):
        """Find Orchestration ID via Lifecycle Scheduler Hints.

        This is either within the hints or via searching Valet
        placements using the hints. It's most likely the former
        now that we handle each stack level as its own plan.

        If we try to flatten the entire stack into one plan,
        a bug/anomaly in Heat makes the orchestration IDs
        unusable, which is why we added search. But that led
        to new problems with stack creation, so instead we now
        treat each stack level as its own plan, which means all
        that work on implementing search is not being used.

        Still, the search logic can remain for the time being.
        """
        orch_id_key = 'heat_resource_uuid'
        root_stack_id_key = 'heat_root_stack_id'
        resource_name_key = 'heat_resource_name'
        path_in_stack_key = 'heat_path_in_stack'

        # Maybe the orch_id is already in the hints? If so, return it.
        orch_id = hints.get(orch_id_key)
        if orch_id:
            return orch_id

        # If it's not, try searching for it using remaining hint info.
        if all(k in hints for k in (root_stack_id_key, resource_name_key,
                                    path_in_stack_key)):
            kwargs = {
                'query': {
                    root_stack_id_key: hints.get(root_stack_id_key),
                    resource_name_key: hints.get(resource_name_key),
                    # Path in stack is made up of tuples. Make it a string.
                    path_in_stack_key: json.dumps(
                        hints.get(path_in_stack_key)),
                },
            }
            LOG.info("Searching placements via scheduler hints.")
            response = self.api.placements_search(**kwargs)
            orch_id = \
                self._first_orch_id_from_placements_search(response)
        return orch_id
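
When `stack_scheduler_hints` is enabled, Heat supplies hints along these lines, and the search fallback turns them into a query. A sketch with hypothetical values; note that `heat_path_in_stack` is serialized with `json.dumps` because it is a list of tuples:

```python
# Hypothetical lifecycle scheduler hints as received from Heat.
hints = {
    'heat_root_stack_id': 'ROOT-STACK-ID',
    'heat_stack_name': 'my_stack',
    'heat_resource_name': 'server1',
    'heat_path_in_stack': [(None, 'my_stack')],
    # 'heat_resource_uuid' is usually present too and short-circuits the
    # search entirely (Plan A above).
}

# Query built by the fallback search path above.
query = {
    'heat_root_stack_id': hints['heat_root_stack_id'],
    'heat_resource_name': hints['heat_resource_name'],
    'heat_path_in_stack': json.dumps(hints['heat_path_in_stack']),
}
```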

    def _register_opts(self):
        """Register additional options specific to this filter plugin"""
        opts = []
        option = cfg.StrOpt('failure_mode',
                            choices=['reject', 'yield'], default='reject',
                            help=_('Mode to operate in if Valet '
                                   'planning fails for any reason.'))

        # In the filter plugin space, there's no access to Nova's
        # keystone credentials, so we have to specify our own.
        # This also means we can't act as the user making the request
        # at scheduling-time.
        opts.append(option)
        option = cfg.StrOpt('admin_tenant_name', default=None,
                            help=_('Valet Project Name'))
        opts.append(option)
        option = cfg.StrOpt('admin_username', default=None,
                            help=_('Valet Username'))
        opts.append(option)
        option = cfg.StrOpt('admin_password', default=None,
                            help=_('Valet Password'))
        opts.append(option)
        option = cfg.StrOpt('admin_auth_url', default=None,
                            help=_('Keystone Authorization API Endpoint'))
        opts.append(option)

        opt_group = cfg.OptGroup('valet')
        cfg.CONF.register_group(opt_group)
        cfg.CONF.register_opts(opts, group=opt_group)
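
These options live in a `[valet]` group in the Nova scheduler's configuration. A minimal sketch follows; the option names and choices come from `_register_opts()` above, while the endpoint and credential values are placeholders:

```ini
# Illustrative [valet] section for the scheduler's nova.conf; values are
# placeholders, not defaults.
[valet]
failure_mode = reject
admin_tenant_name = service
admin_username = valet
admin_password = SECRET
admin_auth_url = http://keystone.example.com:5000/v2.0
```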

    def filter_all(self, filter_obj_list, filter_properties):
        """Filter all hosts in one swell foop"""
        res_id = None
        orch_id = None
        location = None

        authorized = False
        ad_hoc = False
        yield_all = False

        hosts = [obj.host for obj in filter_obj_list]

        # Do AuthN as late as possible (here), not in init().
        try:
            self._authorize()
            authorized = True
        except Exception as err:
            LOG.error(('Authorization exception: {}').format(err.message))

        if authorized:
            # nova-conductor won't catch exceptions like heat-engine does.
            # The best we can do is log it.
            try:
                (location, res_id, orch_id, ad_hoc) = \
                    self._location_for_resource(hosts, filter_properties)
            except exceptions.ValetOpenStackPluginException as err:
                LOG.error(('API Exception: {}').format(err.message))

        # Honk if we didn't find a location and decide if we yield to Nova.
        if not location:
            if ad_hoc:
                message = ("Ad-hoc placement unknown, "
                           "Resource ID: {}")
                LOG.error(message.format(res_id))
            elif orch_id:
                message = ("Placement unknown, Resource ID: {}, "
                           "Orchestration ID: {}.")
                LOG.error(message.format(res_id, orch_id))
            elif res_id:
                message = ("Placement unknown, Resource ID: {}")
                LOG.error(message.format(res_id))
            else:
                message = ("Placement unknown.")
                LOG.error(message)

            if CONF.valet.failure_mode == 'yield':
                message = ("Yielding to Nova or placement decisions.")
                LOG.warn(message)
                yield_all = True
            else:
                message = ("Rejecting all Nova placement decisions.")
                LOG.error(message)
                yield_all = False

        # Yield the hosts that pass.
        # Like the Highlander, there can (should) be only one.
        # It's possible there could be none if Valet can't solve it.
        for obj in filter_obj_list:
            if location:
                match = self._is_same_host(obj.host, location)
                if match:
                    if ad_hoc:
                        message = ("Ad-Hoc host selection "
                                   "is {} for Resource ID {}.")
                        LOG.info(message.format(obj.host, res_id))
                    else:
                        message = ("Host selection is {} for "
                                   "Resource ID: {}, "
                                   "Orchestration ID: {}.")
                        LOG.info(message.format(obj.host, res_id, orch_id))
            else:
                match = None
            if yield_all or match:
                yield obj

    def host_passes(self, host_state, filter_properties):
        """Individual host pass check"""
        # Intentionally let filter_all() handle it.
        return False

65
plugins/tests/unit/test_plugins.py
Normal file
@ -0,0 +1,65 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from valet.plugins.plugins.heat.plugins import ValetLifecyclePlugin
from valet.plugins.tests.base import Base


class TestPlugins(Base):

    @mock.patch('valet_plugins.plugins.heat.plugins.CONF')
    def setUp(self, mock_conf):
        super(TestPlugins, self).setUp()

        self.valet_life_cycle_plugin = self.init_ValetLifecyclePlugin()

    @mock.patch('valet_plugins.common.valet_api.ValetAPI')
    def init_ValetLifecyclePlugin(self, mock_class):
        return ValetLifecyclePlugin()

    @mock.patch.object(ValetLifecyclePlugin, '_parse_stack')
    def test_do_pre_op(self, mock_parse_stack):
        mock_parse_stack.return_value = {'test': 'resources'}

        stack = mock.MagicMock()
        stack.name = "test_stack"
        stack.id = "test_id"

        cnxt = mock.MagicMock()
        cnxt.auth_token = "test_auth_token"

        # returns due to hints_enabled
        self.valet_life_cycle_plugin.hints_enabled = False
        stack.status = "IN_PROGRESS"
        self.valet_life_cycle_plugin.do_pre_op(cnxt, stack, action="DELETE")
        self.validate_test(self.valet_life_cycle_plugin.api.method_calls == [])

        # returns due to stack.status
        self.valet_life_cycle_plugin.hints_enabled = True
        stack.status = "NOT_IN_PROGRESS"
        self.valet_life_cycle_plugin.do_pre_op(cnxt, stack, action="DELETE")
        self.validate_test(self.valet_life_cycle_plugin.api.method_calls == [])

        # action delete
        self.valet_life_cycle_plugin.hints_enabled = True
        stack.status = "IN_PROGRESS"
        self.valet_life_cycle_plugin.do_pre_op(cnxt, stack, action="DELETE")
        self.validate_test("plans_delete" in
                           self.valet_life_cycle_plugin.api.method_calls[0])

        # action create
        self.valet_life_cycle_plugin.do_pre_op(cnxt, stack, action="CREATE")
        self.validate_test("plans_create"
                           in self.valet_life_cycle_plugin.api.method_calls[1])

@ -13,21 +13,21 @@
 import mock

-from valet_plugins.common.valet_api import requests
+from valet.plugins.common.valet_api import requests
-from valet_plugins.common.valet_api import ValetAPIWrapper
+from valet.plugins.common.valet_api import ValetAPI
-from valet_plugins.tests.base import Base
+from valet.plugins.tests.base import Base


 class TestValetApi(Base):

     def setUp(self):
         super(TestValetApi, self).setUp()
-        self.valet_api_wrapper = self.init_ValetAPIWrapper()
+        self.valet_api = self.init_ValetAPI()

-    @mock.patch.object(ValetAPIWrapper, "_register_opts")
+    @mock.patch.object(ValetAPI, "_register_opts")
-    def init_ValetAPIWrapper(self, mock_api):
+    def init_ValetAPI(self, mock_api):
         mock_api.return_value = None
-        return ValetAPIWrapper()
+        return ValetAPI()

     @mock.patch.object(requests, 'request')
     def test_plans_create(self, mock_request):

120
plugins/tests/unit/test_valet_filter.py
Normal file
@ -0,0 +1,120 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import uuid

from keystoneclient.v2_0 import client

from valet.plugins.common import valet_api
from valet.plugins.plugins.nova.valet_filter import ValetFilter
from valet.plugins.tests.base import Base


class TestResources(object):
    def __init__(self, host_name):
        self.host = host_name


class TestValetFilter(Base):

    def setUp(self):
        super(TestValetFilter, self).setUp()

        client.Client = mock.MagicMock()
        self.valet_filter = self.init_ValetFilter()

    @mock.patch.object(valet_api.ValetAPI, '_register_opts')
    @mock.patch.object(ValetFilter, '_register_opts')
    def init_ValetFilter(self, mock_opt, mock_init):
        mock_init.return_value = None
        mock_opt.return_value = None
        return ValetFilter()

    @mock.patch.object(ValetFilter, '_orch_id_from_resource_id')
    @mock.patch.object(ValetFilter, '_location_from_ad_hoc_plan')
    def test_filter_location_for_resource_ad_hoc(self, mock_loc_ad_hoc,
                                                 mock_orch_id_res):
        host = 'hostname'
        physical_id = str(uuid.uuid4())

        mock_loc_ad_hoc.return_value = host
        mock_orch_id_res.return_value = None

        filter_properties = {
            'request_spec': {
                'image': {'name': 'image_name'},
                'metadata': {},
                'instance_properties': {'uuid': physical_id}
            },
            'instance_type': {
                'name': "flavor"
            }
        }

        (location, res_id, orch_id, ad_hoc) = \
            self.valet_filter._location_for_resource([host],
                                                     filter_properties)

        self.assertEqual(location, host)
        self.assertEqual(res_id, physical_id)
        self.assertEqual(orch_id, None)
        self.assertTrue(ad_hoc)

    @mock.patch.object(ValetFilter, '_location_for_resource')
    @mock.patch.object(ValetFilter, '_authorize')
    @mock.patch.object(valet_api.ValetAPI, 'plans_create')
    @mock.patch.object(valet_api.ValetAPI, 'placement')
    def test_filter_all(self, mock_placement, mock_create,
                        mock_auth, mock_location_tuple):
        mock_placement.return_value = None
        mock_create.return_value = None
        mock_location_tuple.return_value = \
            ('location', 'res_id', 'orch_id', 'ad_hoc')

        with mock.patch('oslo_config.cfg.CONF') as config:
            hosts = [
                TestResources("first_host"),
                TestResources("second_host"),
            ]

            config.valet.failure_mode = 'yield'

            filter_properties = {
                'request_spec': {
                    'image': {'name': 'image_name'},
                    'instance_properties': {'uuid': ""},
                },
                'scheduler_hints': {'heat_resource_uuid': "123456"},
                'instance_type': {'name': "instance_name"},
            }

            resources = self.valet_filter.filter_all(hosts, filter_properties)

            for resource in resources:
                self.validate_test(resource.host in "first_host, second_host")
                self.validate_test(mock_placement.called)

            filter_properties = {
                'request_spec': {
                    'image': {'name': 'image_name'},
                    'instance_properties': {'uuid': ""},
                },
                'scheduler_hints': "scheduler_hints",
                'instance_type': {'name': "instance_name"},
            }

            resources = self.valet_filter.filter_all(hosts, filter_properties)

            for _ in resources:
                self.validate_test(mock_create.called)
|
@ -1,198 +0,0 @@
|
|||||||
#
|
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
|
||||||
# not use this file except in compliance with the License. You may obtain
|
|
||||||
# a copy of the License at
|
|
||||||
#
|
|
||||||
# http://www.apache.org/licenses/LICENSE-2.0
|
|
||||||
#
|
|
||||||
# Unless required by applicable law or agreed to in writing, software
|
|
||||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
|
||||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
|
||||||
# License for the specific language governing permissions and limitations
|
|
||||||
# under the License.
|
|
||||||
|
|
||||||
|
|
||||||
'''Valet API Wrapper'''
|
|
||||||
|
|
||||||
from heat.common.i18n import _
|
|
||||||
import json
|
|
||||||
|
|
||||||
from oslo_config import cfg
|
|
||||||
from oslo_log import log as logging
|
|
||||||
|
|
||||||
import requests
|
|
||||||
import sys
|
|
||||||
|
|
||||||
CONF = cfg.CONF
|
|
||||||
LOG = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
|
|
||||||
def _exception(exc, exc_info, req):
|
|
||||||
'''Handle an exception'''
|
|
||||||
response = None
|
|
||||||
try:
|
|
||||||
if req is not None:
|
|
||||||
response = json.loads(req.text)
|
|
||||||
except Exception as e:
|
|
||||||
# FIXME(GJ): if Valet returns error
|
|
||||||
LOG.error("Exception is: %s, body is: %s" % (e, req.text))
|
|
||||||
return None
|
|
||||||
|
|
||||||
if response and 'error' in response:
|
|
||||||
# FIXME(GJ): if Valet returns error
|
|
||||||
error = response.get('error')
|
|
||||||
msg = "%(explanation)s (valet-api: %(message)s)" % {
|
|
||||||
'explanation': response.get('explanation',
|
|
||||||
_('No remediation available')),
|
|
||||||
'message': error.get('message', _('Unknown error'))
|
|
||||||
}
|
|
||||||
LOG.error("Response with error: " + msg)
|
|
||||||
return "error"
|
|
||||||
else:
|
|
||||||
# TODO(JD): Re-evaluate if this clause is necessary.
|
|
||||||
exc_class, exc, traceback = exc_info # pylint: disable=W0612
|
|
||||||
msg = (_("%(exc)s for %(method)s %(url)s with body %(body)s") %
|
|
||||||
{'exc': exc, 'method': exc.request.method,
|
|
||||||
'url': exc.request.url, 'body': exc.request.body})
|
|
||||||
LOG.error("Response is none: " + msg)
|
|
||||||
return "error"
|
|
||||||
|
|
||||||
|
|
||||||
# TODO(JD): Improve exception reporting back up to heat
|
|
||||||
class ValetAPIError(Exception):
|
|
||||||
'''Valet API Error'''
|
|
||||||
pass
|
|
||||||
|
|
||||||
|
|
||||||
class ValetAPIWrapper(object):
|
|
||||||
'''Valet API Wrapper'''
|
|
||||||
|
|
||||||
def __init__(self):
|
|
||||||
'''Initializer'''
|
|
||||||
self.headers = {'Content-Type': 'application/json'}
|
|
||||||
self.opt_group_str = 'valet'
|
|
||||||
self.opt_name_str = 'url'
|
|
||||||
self.opt_conn_timeout = 'connect_timeout'
|
|
||||||
self.opt_read_timeout = 'read_timeout'
|
|
||||||
self._register_opts()
|
|
||||||
|
|
||||||
def _api_endpoint(self):
|
|
||||||
'''Returns API endpoint'''
|
|
||||||
try:
|
|
||||||
opt = getattr(cfg.CONF, self.opt_group_str)
|
|
||||||
endpoint = opt[self.opt_name_str]
|
|
||||||
if endpoint:
|
|
||||||
return endpoint
|
|
||||||
else:
|
|
||||||
# FIXME: Possibly not wanted (misplaced-bare-raise)
|
|
||||||
raise # pylint: disable=E0704
|
|
||||||
except Exception:
|
|
||||||
raise # exception.Error(_('API Endpoint not defined.'))
|
|
||||||
|
|
||||||
def _get_timeout(self):
|
|
||||||
'''Returns Valet plugin API request timeout.
|
|
||||||
|
|
||||||
Returns the timeout values tuple (conn_timeout, read_timeout)
|
|
||||||
'''
|
|
||||||
read_timeout = 600
|
|
||||||
try:
|
|
||||||
opt = getattr(cfg.CONF, self.opt_group_str)
|
|
||||||
# conn_timeout = opt[self.opt_conn_timeout]
|
|
||||||
read_timeout = opt[self.opt_read_timeout]
|
|
||||||
except Exception:
|
|
||||||
pass
|
|
||||||
# Timeout accepts a tuple on 'requests' version 2.4.0 and above -
|
|
||||||
# adding *connect* timeouts
|
|
||||||
# return conn_timeout, read_timeout
|
|
||||||
return read_timeout
|
|
||||||
|
|
||||||
def _register_opts(self):
|
|
||||||
'''Register options'''
|
|
||||||
opts = []
|
|
||||||
option = cfg.StrOpt(
|
|
||||||
self.opt_name_str, default=None, help=_('Valet API endpoint'))
|
|
||||||
opts.append(option)
|
|
||||||
option = cfg.IntOpt(
|
|
||||||
self.opt_conn_timeout, default=3,
|
|
||||||
help=_('Valet Plugin Connect Timeout'))
|
|
||||||
opts.append(option)
|
|
||||||
option = cfg.IntOpt(
|
|
||||||
self.opt_read_timeout, default=5,
|
|
||||||
help=_('Valet Plugin Read Timeout'))
|
|
||||||
opts.append(option)
|
|
||||||
|
|
||||||
opt_group = cfg.OptGroup(self.opt_group_str)
|
|
||||||
cfg.CONF.register_group(opt_group)
|
|
||||||
cfg.CONF.register_opts(opts, group=opt_group)
|
|
||||||
|
|
||||||
# TODO(JD): Keep stack param for now. We may need it again.
|
|
||||||
def plans_create(self, stack, plan, auth_token=None):
|
|
||||||
'''Create a plan'''
|
|
||||||
response = None
|
|
||||||
try:
|
|
||||||
req = None
|
|
||||||
timeout = self._get_timeout()
|
|
||||||
url = self._api_endpoint() + '/plans/'
|
|
||||||
payload = json.dumps(plan)
|
|
||||||
self.headers['X-Auth-Token'] = auth_token
|
|
||||||
req = requests.post(
|
|
||||||
url, data=payload, headers=self.headers, timeout=timeout)
|
|
||||||
req.raise_for_status()
|
|
||||||
response = json.loads(req.text)
|
|
||||||
except (requests.exceptions.HTTPError, requests.exceptions.Timeout,
|
|
||||||
requests.exceptions.ConnectionError) as exc:
|
|
||||||
return _exception(exc, sys.exc_info(), req)
|
|
||||||
except Exception as e:
|
|
||||||
LOG.error("Exception (at plans_create) is: %s" % e)
|
|
||||||
return None
|
|
||||||
return response
|
|
||||||
|
|
||||||
# TODO(JD): Keep stack param for now. We may need it again.
|
|
||||||
def plans_delete(self, stack, auth_token=None): # pylint: disable=W0613
|
|
||||||
'''Delete a plan'''
|
|
||||||
try:
|
|
||||||
req = None
|
|
||||||
timeout = self._get_timeout()
|
|
||||||
url = self._api_endpoint() + '/plans/' + stack.id
|
|
||||||
self.headers['X-Auth-Token'] = auth_token
|
|
||||||
req = requests.delete(url, headers=self.headers, timeout=timeout)
|
|
||||||
except (requests.exceptions.HTTPError, requests.exceptions.Timeout,
|
|
||||||
requests.exceptions.ConnectionError) as exc:
|
|
||||||
return _exception(exc, sys.exc_info(), req)
|
|
||||||
except Exception as e:
|
|
||||||
LOG.error("Exception (plans_delete) is: %s" % e)
|
|
||||||
return None
|
|
||||||
# Delete does not return a response body.
|
|
||||||
|
|
||||||
def placement(self, orch_id, res_id, hosts=None, auth_token=None):
|
|
||||||
'''Reserve previously made placement.'''
|
|
||||||
try:
|
|
||||||
req = None
|
|
||||||
payload = None
|
|
||||||
timeout = self._get_timeout()
|
|
||||||
url = self._api_endpoint() + '/placements/' + orch_id
|
|
||||||
self.headers['X-Auth-Token'] = auth_token
|
|
||||||
if hosts:
|
|
||||||
kwargs = {
|
|
||||||
"locations": hosts,
|
|
||||||
"resource_id": res_id
|
|
||||||
}
|
|
||||||
payload = json.dumps(kwargs)
|
|
||||||
req = requests.post(
|
|
||||||
url, data=payload, headers=self.headers, timeout=timeout)
|
|
||||||
else:
|
|
||||||
req = requests.get(url, headers=self.headers, timeout=timeout)
|
|
||||||
|
|
||||||
# TODO(JD): Raise an exception IFF the scheduler can handle it
|
|
||||||
# req.raise_for_status()
|
|
||||||
|
|
||||||
response = json.loads(req.text)
|
|
||||||
except (requests.exceptions.HTTPError, requests.exceptions.Timeout,
|
|
||||||
requests.exceptions.ConnectionError) as exc:
|
|
||||||
return _exception(exc, sys.exc_info(), req)
|
|
||||||
except Exception as e: # pylint: disable=W0702
|
|
||||||
LOG.error("Exception (placement) is: %s" % e)
|
|
||||||
# FIXME: Find which exceptions we should really handle here.
|
|
||||||
response = None
|
|
||||||
|
|
||||||
return response
|
|
@ -1,103 +0,0 @@
|
|||||||
#
|
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
|
||||||
# not use this file except in compliance with the License. You may obtain
|
|
||||||
# a copy of the License at
|
|
||||||
#
|
|
||||||
# http://www.apache.org/licenses/LICENSE-2.0
|
|
||||||
#
|
|
||||||
# Unless required by applicable law or agreed to in writing, software
|
|
||||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
|
||||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
|
||||||
# License for the specific language governing permissions and limitations
|
|
||||||
# under the License.
|
|
||||||
|
|
||||||
from heat.common.i18n import _
|
|
||||||
from heat.engine import constraints
|
|
||||||
from heat.engine import properties
|
|
||||||
from heat.engine import resource
|
|
||||||
from oslo_log import log as logging
|
|
||||||
|
|
||||||
LOG = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
|
|
||||||
class GroupAssignment(resource.Resource):
|
|
||||||
'''Group assignment
|
|
||||||
|
|
||||||
A Group Assignment describes one or more resources assigned to a particular
|
|
||||||
type of group. Assignments can reference other assignments, so long as
|
|
||||||
there are no circular references.
|
|
||||||
|
|
||||||
There are three types of groups: affinity, diversity, and exclusivity.
|
|
||||||
Exclusivity groups have a unique name, assigned through Valet.
|
|
||||||
|
|
||||||
This resource is purely informational in nature and makes no changes to
|
|
||||||
heat, nova, or cinder.
|
|
||||||
The Valet Heat Lifecycle Plugin passes this information to the optimizer.
|
|
||||||
'''
|
|
||||||
|
|
||||||
_RELATIONSHIP_TYPES = (
|
|
||||||
AFFINITY, DIVERSITY, EXCLUSIVITY,
|
|
||||||
) = (
|
|
||||||
"affinity", "diversity", "exclusivity",
|
|
||||||
)
|
|
||||||
|
|
||||||
PROPERTIES = (
|
|
||||||
GROUP_NAME, GROUP_TYPE, LEVEL, RESOURCES,
|
|
||||||
) = (
|
|
||||||
'group_name', 'group_type', 'level', 'resources',
|
|
||||||
)
|
|
||||||
|
|
||||||
properties_schema = {
|
|
||||||
GROUP_NAME: properties.Schema(
|
|
||||||
properties.Schema.STRING,
|
|
||||||
_('Group name. Required for exclusivity groups.'),
|
|
||||||
# TODO(JD): Add a custom constraint
|
|
||||||
# Constraint must ensure a valid and allowed name
|
|
||||||
# when an exclusivity group is in use.
|
|
||||||
# This is presently enforced by valet-api and can also
|
|
||||||
# be pro-actively enforced here, so as to avoid unnecessary
|
|
||||||
# orchestration.
|
|
||||||
update_allowed=True
|
|
||||||
),
|
|
||||||
GROUP_TYPE: properties.Schema(
|
|
||||||
properties.Schema.STRING,
|
|
||||||
_('Type of group.'),
|
|
||||||
constraints=[
|
|
||||||
constraints.AllowedValues([AFFINITY, DIVERSITY, EXCLUSIVITY])
|
|
||||||
],
|
|
||||||
required=True,
|
|
||||||
update_allowed=True
|
|
||||||
),
|
|
||||||
LEVEL: properties.Schema(
|
|
||||||
properties.Schema.STRING,
|
|
||||||
_('Level of relationship between resources.'),
|
|
||||||
constraints=[
|
|
||||||
constraints.AllowedValues(['host', 'rack']),
|
|
||||||
],
|
|
||||||
required=True,
|
|
||||||
update_allowed=True
|
|
||||||
),
|
|
||||||
RESOURCES: properties.Schema(
|
|
||||||
properties.Schema.LIST,
|
|
||||||
_('List of one or more resource IDs.'),
|
|
||||||
required=True,
|
|
||||||
update_allowed=True
|
|
||||||
),
|
|
||||||
}
|
|
||||||
|
|
||||||
def handle_create(self):
|
|
||||||
'''Create resource'''
|
|
||||||
self.resource_id_set(self.physical_resource_name())
|
|
||||||
|
|
||||||
def handle_update(self, json_snippet, templ_diff, prop_diff):
|
|
||||||
'''Update resource'''
|
|
||||||
self.resource_id_set(self.physical_resource_name())
|
|
||||||
|
|
||||||
def handle_delete(self):
|
|
||||||
'''Delete resource'''
|
|
||||||
self.resource_id_set(None)
|
|
||||||
|
|
||||||
|
|
||||||
def resource_mapping():
|
|
||||||
'''Map names to resources.'''
|
|
||||||
return {'ATT::Valet::GroupAssignment': GroupAssignment, }
|
|
@ -1,188 +0,0 @@
|
|||||||
# OpenStack Heat Resource Plugins
|
|
||||||
|
|
||||||
[Valet](https://codecloud.web.att.com/plugins/servlet/readmeparser/display/ST_CLOUDQOS/allegro/atRef/refs/heads/master/renderFile/README.md) works with OpenStack Heat through the use of Resource Plugins. This document explains what they are and how they work. As new plugins become formally introduced, they will be added here.
|
|
||||||
|
|
||||||
The following is current as of Valet Release 1.0.
|
|
||||||
|
|
||||||
## ATT::Valet::GroupAssignment
|
|
||||||
|
|
||||||
*Formerly ATT::Valet::ResourceGroup*
|
|
||||||
|
|
||||||
A Group Assignment describes one or more resources assigned to a particular type of group. Assignments can reference other assignments, so long as there are no circular references.
|
|
||||||
|
|
||||||
There are three types of groups: affinity, diversity, and exclusivity. Exclusivity groups have a unique name, assigned through Valet.
|
|
||||||
|
|
||||||
This resource is purely informational in nature and makes no changes to heat, nova, or cinder. The Valet Heat Lifecycle Plugin passes this information to the optimizer.
|
|
||||||
|
|
||||||
### Properties
|
|
||||||
|
|
||||||
``group_name`` (String)
|
|
||||||
|
|
||||||
* Name of group. Required for exclusivity groups. NOT permitted for affinity and diversity groups at this time.
|
|
||||||
* Can be updated without replacement.
|
|
||||||
|
|
||||||
``group_type`` (String)
|
|
||||||
|
|
||||||
* Type of group.
|
|
||||||
* Allowed values: affinity, diversity, exclusivity
|
|
||||||
* Can be updated without replacement.
|
|
||||||
* Required property.
|
|
||||||
|
|
||||||
``level`` (String)
|
|
||||||
|
|
||||||
* Level of relationship between resources.
|
|
||||||
* See list below for allowed values.
|
|
||||||
* Can be updated without replacement.
|
|
||||||
* Required property.
|
|
||||||
|
|
||||||
``resources`` (List)
|
|
||||||
|
|
||||||
* List of associated resource IDs.
|
|
||||||
* Can be updated without replacement.
|
|
||||||
* Required property.
|
|
||||||
|
|
||||||
#### Levels
|
|
||||||
|
|
||||||
* ``rack``: Across racks, one resource per host.
|
|
||||||
* ``host``: All resources on a single host.
|
|
||||||
|
|
||||||
### Attributes
|
|
||||||
|
|
||||||
None. (There is a ``show`` attribute but it is not intended for production use.)
|
|
||||||
|
|
||||||
### Example
|
|
||||||
|
|
||||||
Given a Heat template with two server resources, declare an affinity between them at the rack level:
|
|
||||||
|
|
||||||
```json
|
|
||||||
resources:
|
|
||||||
server_affinity:
|
|
||||||
type: ATT::Valet::GroupAssignment
|
|
||||||
properties:
|
|
||||||
group_type: affinity
|
|
||||||
level: rack
|
|
||||||
resources:
|
|
||||||
- {get_resource: server1}
|
|
||||||
- {get_resource: server2}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Plugin Schema
|
|
||||||
|
|
||||||
Use the OpenStack Heat CLI command `heat resource-type-show ATT::Valet::GroupAssignment` to view the schema.
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"support_status": {
|
|
||||||
"status": "SUPPORTED",
|
|
||||||
"message": null,
|
|
||||||
"version": null,
|
|
||||||
"previous_status": null
|
|
||||||
},
|
|
||||||
"attributes": {
|
|
||||||
"show": {
|
|
||||||
"type": "map",
|
|
||||||
"description": "Detailed information about resource."
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"properties": {
|
|
||||||
"level": {
|
|
||||||
"description": "Level of relationship between resources.",
|
|
||||||
"required": true,
|
|
||||||
"update_allowed": true,
|
|
||||||
"type": "string",
|
|
||||||
"immutable": false,
|
|
||||||
"constraints": [
|
|
||||||
{
|
|
||||||
"allowed_values": [
|
|
||||||
"host",
|
|
||||||
"rack"
|
|
||||||
]
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"resources": {
|
|
||||||
"type": "list",
|
|
||||||
"required": true,
|
|
||||||
"update_allowed": true,
|
|
||||||
"description": "List of one or more resource IDs.",
|
|
||||||
"immutable": false
|
|
||||||
},
|
|
||||||
"group_type": {
|
|
||||||
"description": "Type of group.",
|
|
||||||
"required": true,
|
|
||||||
"update_allowed": true,
|
|
||||||
"type": "string",
|
|
||||||
"immutable": false,
|
|
||||||
"constraints": [
|
|
||||||
{
|
|
||||||
"allowed_values": [
|
|
||||||
"affinity",
|
|
||||||
"diversity",
|
|
||||||
"exclusivity"
|
|
||||||
]
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
"group_name": {
|
|
||||||
"type": "string",
|
|
||||||
"required": false,
|
|
||||||
"update_allowed": true,
|
|
||||||
"description": "Group name. Required for exclusivity groups.",
|
|
||||||
"immutable": false
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"resource_type": "ATT::Valet::GroupAssignment"
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### Future Work
|
|
||||||
|
|
||||||
The following sections are proposals and *not* implemented. It is provided to aid in ongoing open discussion.
|
|
||||||
|
|
||||||
#### Resource Namespace Changes
|
|
||||||
|
|
||||||
The resource namespace may change to ``OS::Valet`` in future releases.
|
|
||||||
|
|
||||||
#### Resource Properties
|
|
||||||
|
|
||||||
Resource property characteristics are under ongoing review and subject to revision.
|
|
||||||
|
|
||||||
#### Volume Resource Support
|
|
||||||
|
|
||||||
Future placement support will formally include block storage services (e.g., Cinder).
|
|
||||||
|
|
||||||
#### Additional Scheduling Levels
|
|
||||||
|
|
||||||
Future levels could include:
|
|
||||||
|
|
||||||
* ``cluster``: Across a cluster, one resource per cluster.
|
|
||||||
* ``any``: Any level.
|
|
||||||
|
|
||||||
#### Proposed Notation for 'diverse-affinity'
|
|
||||||
|
|
||||||
Suppose we are given a set of server/volume pairs, and we'd like to treat each pair as an affinity group, and then treat all affinity groups diversely. The following notation makes this diverse affinity pattern easier to describe, with no name repetition.
|
|
||||||
|
|
||||||
```json
|
|
||||||
resources:
|
|
||||||
my_group_assignment:
|
|
||||||
type: ATT::Valet::GroupAssignment
|
|
||||||
properties:
|
|
||||||
group_name: my_even_awesomer_group
|
|
||||||
group_type: diverse-affinity
|
|
||||||
level: host
|
|
||||||
resources:
|
|
||||||
- - {get_resource: server1}
|
|
||||||
- {get_resource: volume1}
|
|
||||||
- - {get_resource: server2}
|
|
||||||
- {get_resource: volume2}
|
|
||||||
- - {get_resource: server3}
|
|
||||||
- {get_resource: volume3}
|
|
||||||
```
|
|
||||||
|
|
||||||
In this example, ``server1``/``volume1``, ``server2``/``volume2``, and ``server3``/``volume3`` are each treated as their own affinity group. Then, each of these affinity groups is treated as a diversity group. The dash notation is specific to YAML (a superset of JSON and the markup language used by Heat).
|
|
||||||
|
|
||||||
Given a hypothetical example of a Ceph deployment with three monitors, twelve OSDs, and one client, each paired with a volume, we would only need to specify three Heat resources instead of eighteen.
|
|
||||||
|
|
||||||
## Contact
|
|
||||||
|
|
||||||
Joe D'Andrea <jdandrea@research.att.com>
|
|
@ -1,153 +0,0 @@
|
|||||||
# Copyright 2014-2017 AT&T Intellectual Property
|
|
||||||
#
|
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
|
||||||
# you may not use this file except in compliance with the License.
|
|
||||||
# You may obtain a copy of the License at
|
|
||||||
#
|
|
||||||
# http://www.apache.org/licenses/LICENSE-2.0
|
|
||||||
#
|
|
||||||
# Unless required by applicable law or agreed to in writing, software
|
|
||||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
|
||||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
|
||||||
# See the License for the specific language governing permissions and
|
|
||||||
# limitations under the License.
|
|
||||||
import string
|
|
||||||
import uuid
|
|
||||||
|
|
||||||
from heat.engine import lifecycle_plugin
|
|
||||||
from oslo_config import cfg
|
|
||||||
from oslo_log import log as logging
|
|
||||||
|
|
||||||
from valet_plugins.common import valet_api
|
|
||||||
|
|
||||||
CONF = cfg.CONF
|
|
||||||
LOG = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
|
|
||||||
def validate_uuid4(uuid_string):
|
|
||||||
'''Validate that a UUID string is in fact a valid uuid4.
|
|
||||||
|
|
||||||
Happily, the uuid module does the actual checking for us.
|
|
||||||
It is vital that the 'version' kwarg be passed to the
|
|
||||||
UUID() call, otherwise any 32-character hex string
|
|
||||||
is considered valid.
|
|
||||||
'''
|
|
||||||
try:
|
|
||||||
val = uuid.UUID(uuid_string, version=4)
|
|
||||||
except ValueError:
|
|
||||||
# If it's a value error, then the string
|
|
||||||
# is not a valid hex code for a UUID.
|
|
||||||
return False
|
|
||||||
|
|
||||||
# If the uuid_string is a valid hex code, but an invalid uuid4,
|
|
||||||
# the UUID.__init__ will convert it to a valid uuid4.
|
|
||||||
# This is bad for validation purposes.
|
|
||||||
|
|
||||||
# uuid_string will sometimes have separators.
|
|
||||||
return string.replace(val.hex, '-', '') == \
|
|
||||||
string.replace(uuid_string, '-', '')
|
|
||||||
|
|
||||||
|
|
||||||
class ValetLifecyclePlugin(lifecycle_plugin.LifecyclePlugin):
    '''Base class for pre-op and post-op work on a stack.

    Implementations should extend this class and override the methods.
    '''
    def __init__(self):
        self.api = valet_api.ValetAPIWrapper()
        self.hints_enabled = False

        # This plugin can only work if stack_scheduler_hints is true
        cfg.CONF.import_opt('stack_scheduler_hints', 'heat.common.config')
        self.hints_enabled = cfg.CONF.stack_scheduler_hints

    def _parse_stack_preview(self, dest, preview):
        '''Walk the preview list (possibly nested)

        extracting parsed template dicts and storing modified
        versions in a flat dict.
        '''
        # The preview is either a list or not.
        if not isinstance(preview, list):
            # Heat does not assign orchestration UUIDs to
            # all resources, so we must make our own sometimes.
            # This also means nested templates can't be supported yet.

            # FIXME: Either propose uniform use of UUIDs within
            # Heat (related to Heat bug 1516807), or store
            # resource UUIDs within the parsed template and
            # use only Valet-originating UUIDs as keys.
            if hasattr(preview, 'uuid') and \
               preview.uuid and validate_uuid4(preview.uuid):
                key = preview.uuid
            else:
                # TODO(JD): Heat should be authoritative for UUID assignments.
                # This will require a change to heat-engine.
                # Looks like it may be: heat/db/sqlalchemy/models.py#L279
                # It could be that nested stacks aren't added to the DB yet.
                key = str(uuid.uuid4())
            parsed = preview.parsed_template()
            parsed['name'] = preview.name
            # TODO(JD): Replace resource referenced names with their UUIDs.
            dest[key] = parsed
        else:
            for item in preview:
                self._parse_stack_preview(dest, item)

    def do_pre_op(self, cnxt, stack, current_stack=None, action=None):
        '''Method to be run by heat before stack operations.'''
        if not self.hints_enabled or stack.status != 'IN_PROGRESS':
            return

        if action == 'DELETE':
            self.api.plans_delete(stack, auth_token=cnxt.auth_token)
        elif action == 'CREATE':
            resources = dict()
            specifications = dict()
            reservations = dict()

            stack_preview = stack.preview_resources()
            self._parse_stack_preview(resources, stack_preview)

            timeout = 60
            plan = {
                'plan_name': stack.id,
                'stack_id': stack.id,
                'timeout': '%d sec' % timeout,
            }
            if resources and len(resources) > 0:
                plan['resources'] = resources
            else:
                return
            if specifications:
                plan['specifications'] = specifications
            if reservations:
                plan['reservations'] = reservations

            self.api.plans_create(stack, plan, auth_token=cnxt.auth_token)

    def do_post_op(self, cnxt, stack, current_stack=None, action=None,
                   is_stack_failure=False):
        '''Method to be run by heat after stack operations, including failures.

        On failure to execute all the registered pre_ops, this method will be
        called if and only if the corresponding pre_op was successfully called.
        On failures of the actual stack operation, this method will
        be called if all the pre operations were successfully called.
        '''
        pass

    def get_ordinal(self):
        '''An ordinal used to order instances for pre/post operation execution.

        The values returned by get_ordinal are used to create a partial order
        for pre and post operation method invocations. The default ordinal
        value of 100 may be overridden.
        If class1inst.ordinal() < class2inst.ordinal(), then the method on
        class1inst will be executed before the method on class2inst.
        If class1inst.ordinal() > class2inst.ordinal(), then the method on
        class1inst will be executed after the method on class2inst.
        If class1inst.ordinal() == class2inst.ordinal(), then the order of
        method invocation is indeterminate.
        '''
        return 100
@@ -1,279 +0,0 @@
# Copyright 2014-2017 AT&T Intellectual Property
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import time

from keystoneclient.v2_0 import client
from nova.i18n import _
from nova.i18n import _LE
from nova.i18n import _LI
from nova.i18n import _LW
from nova.scheduler import filters
from oslo_config import cfg
from oslo_log import log as logging

from valet_plugins.common import valet_api

CONF = cfg.CONF
LOG = logging.getLogger(__name__)


class ValetFilter(filters.BaseHostFilter):
    '''Filter on Valet assignment.'''

    # Host state does not change within a request
    run_filter_once_per_request = True

    # Used to authenticate request. Update via _authorize()
    _auth_token = None

    def __init__(self):
        '''Initializer'''
        self.api = valet_api.ValetAPIWrapper()
        self.opt_group_str = 'valet'
        self.opt_failure_mode_str = 'failure_mode'
        self.opt_project_name_str = 'admin_tenant_name'
        self.opt_username_str = 'admin_username'
        self.opt_password_str = 'admin_password'
        self.opt_auth_uri_str = 'admin_auth_url'
        self._register_opts()

        self.retries = 60
        self.interval = 1

    def _authorize(self):
        '''Keystone AuthN'''
        opt = getattr(cfg.CONF, self.opt_group_str)
        project_name = opt[self.opt_project_name_str]
        username = opt[self.opt_username_str]
        password = opt[self.opt_password_str]
        auth_uri = opt[self.opt_auth_uri_str]

        kwargs = {
            'username': username,
            'password': password,
            'tenant_name': project_name,
            'auth_url': auth_uri
        }
        keystone_client = client.Client(**kwargs)
        self._auth_token = keystone_client.auth_token

    def _is_same_host(self, host, location):  # pylint: disable=R0201
        '''Returns true if host matches location'''
        return host == location

    def _register_opts(self):
        '''Register Options'''
        opts = []
        option = cfg.StrOpt(
            self.opt_failure_mode_str,
            choices=['reject', 'yield'], default='reject',
            help=_('Mode to operate in if Valet planning fails.'))
        opts.append(option)
        option = cfg.StrOpt(self.opt_project_name_str,
                            default=None, help=_('Valet Project Name'))
        opts.append(option)
        option = cfg.StrOpt(self.opt_username_str,
                            default=None, help=_('Valet Username'))
        opts.append(option)
        option = cfg.StrOpt(self.opt_password_str,
                            default=None, help=_('Valet Password'))
        opts.append(option)
        option = cfg.StrOpt(self.opt_auth_uri_str,
                            default=None,
                            help=_('Keystone Authorization API Endpoint'))
        opts.append(option)

        opt_group = cfg.OptGroup(self.opt_group_str)
        cfg.CONF.register_group(opt_group)
        cfg.CONF.register_opts(opts, group=opt_group)

    # TODO(JD): Factor out common code between this and the cinder filter
    def filter_all(self, filter_obj_list, request_spec):
        '''Filter all hosts in one swell foop'''

        orch_id_key = 'heat_resource_uuid'

        ad_hoc = False
        yield_all = False
        location = None

        opt = getattr(cfg.CONF, self.opt_group_str)
        failure_mode = opt[self.opt_failure_mode_str]

        # Get the resource_id (physical id) and host candidates
        res_id = request_spec.instance_uuid
        hosts = [obj.host for obj in filter_obj_list]
        hints = request_spec.scheduler_hints

        # TODO(JD): If we can't reach Valet at all, we may opt to fail
        # TODO(JD): all hosts depending on a TBD config flag.

        # Max: fix for invalid authorization in yield mode
        failed = None
        try:
            self._authorize()
        except Exception as ex:
            failed = ex

        if failed:
            msg = _LW("Failed to filter the hosts, failure mode is %s")
            LOG.warn(msg % failure_mode)
            if failure_mode == 'yield':
                LOG.warn(failed)
                yield_all = True
            else:
                LOG.error(failed)
        elif orch_id_key not in hints:
            msg = _LW("Valet: Heat Stack Lifecycle Scheduler Hints not found. "
                      "Performing ad-hoc placement.")
            LOG.info(msg)
            ad_hoc = True

            # We'll need the flavor.
            flavor = request_spec.flavor.flavorid

            # Because this wasn't orchestrated, there's no stack.
            # We're going to compose a resource as if there was one.
            # In this particular case we use the physical
            # resource id as both the orchestration and stack id.
            resources = {
                res_id: {
                    "properties": {
                        "flavor": flavor,
                    },
                    "type": "OS::Nova::Server",
                    "name": "ad_hoc_instance"
                }
            }

            # Only add the AZ if it was expressly defined
            res_properties = resources[res_id]["properties"]
            a_zone = request_spec.availability_zone
            if a_zone:
                res_properties["availability_zone"] = a_zone

            timeout = 60
            plan = {
                'plan_name': res_id,
                'stack_id': res_id,
                'locations': hosts,
                'timeout': '%d sec' % timeout,
                'resources': resources
            }

            count = 0
            response = None
            while count < self.retries:
                try:
                    response = self.api.plans_create(
                        None, plan, auth_token=self._auth_token)
                except Exception:
                    # TODO(JD): Get context from exception
                    msg = _LE("Raise exception for ad hoc placement request.")
                    LOG.error(msg)
                    response = None
                if response is None:
                    count += 1
                    msg = ("Valet conn error for plan_create Retry {0} "
                           "for stack = {1}.")
                    LOG.warn(msg.format(str(count), res_id))
                    time.sleep(self.interval)
                else:
                    break

            if response and response.get('plan'):
                plan = response['plan']
                if plan and plan.get('placements'):
                    placements = plan['placements']
                    if placements.get(res_id):
                        placement = placements.get(res_id)
                        location = placement['location']

            if not location:
                msg = _LE("Valet ad-hoc placement unknown "
                          "for resource id {0}.")
                LOG.error(msg.format(res_id))
                if failure_mode == 'yield':
                    msg = _LW("Valet will yield to Nova for "
                              "placement decisions.")
                    LOG.warn(msg)
                    yield_all = True
                else:
                    yield_all = False
        else:
            orch_id = hints[orch_id_key]

            count = 0
            response = None
            while count < self.retries:
                try:
                    response = self.api.placement(
                        orch_id, res_id, hosts=hosts,
                        auth_token=self._auth_token)
                except Exception:
                    LOG.error(_LW("Raise exception for placement request."))
                    response = None

                if response is None:
                    count += 1
                    msg = _LW("Valet conn error for placement Retry {0} "
                              "for stack = {1}.")
                    LOG.warn(msg.format(str(count), orch_id))
                    time.sleep(self.interval)
                else:
                    break

            if response and response.get('placement'):
                placement = response['placement']
                if placement.get('location'):
                    location = placement['location']

            if not location:
                # TODO(JD): Get context from exception
                msg = _LE("Valet placement unknown for resource id {0}, "
                          "orchestration id {1}.")
                LOG.error(msg.format(res_id, orch_id))
                if failure_mode == 'yield':
                    msg = _LW("Valet will yield to Nova for "
                              "placement decisions.")
                    LOG.warn(msg)
                    yield_all = True
                else:
                    yield_all = False

        # Yield the hosts that pass.
        # Like the Highlander, there can (should) be only one.
        # It's possible there could be none if Valet can't solve it.
        for obj in filter_obj_list:
            if location:
                match = self._is_same_host(obj.host, location)
                if match:
                    if ad_hoc:
                        msg = _LI("Valet ad-hoc placement for resource id "
                                  "{0}: {1}.")
                        LOG.info(msg.format(res_id, obj.host))
                    else:
                        msg = _LI("Valet placement for resource id {0}, "
                                  "orchestration id {1}: {2}.")
                        LOG.info(msg.format(res_id, orch_id, obj.host))
            else:
                match = None
            if yield_all or match:
                yield obj

    def host_passes(self, host_state, filter_properties):
        '''Individual host pass check'''
        # Intentionally let filter_all() handle in one swell foop.
        return False
@@ -1,59 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from valet_plugins.plugins.heat.plugins import ValetLifecyclePlugin
from valet_plugins.tests.base import Base


class TestPlugins(Base):

    def setUp(self):
        super(TestPlugins, self).setUp()

        self.valet_life_cycle_plugin = self.init_ValetLifecyclePlugin()

    @mock.patch('valet_plugins.common.valet_api.ValetAPIWrapper')
    def init_ValetLifecyclePlugin(self, mock_class):
        with mock.patch('oslo_config.cfg.CONF'):
            return ValetLifecyclePlugin()

    def test_do_pre_op(self):
        stack = mock.MagicMock()
        stack.status = "IN_PROGRESS"

        cnxt = mock.MagicMock()
        cnxt.auth_token = "test_auth_token"

        # returns due to hints_enabled
        self.valet_life_cycle_plugin.hints_enabled = False
        self.valet_life_cycle_plugin.do_pre_op(cnxt, stack, action="DELETE")
        self.validate_test(self.valet_life_cycle_plugin.api.method_calls == [])

        # returns due to stack.status
        self.valet_life_cycle_plugin.hints_enabled = True
        stack.status = "NOT_IN_PROGRESS"
        self.valet_life_cycle_plugin.do_pre_op(cnxt, stack, action="DELETE")
        self.validate_test(self.valet_life_cycle_plugin.api.method_calls == [])

        # action delete
        self.valet_life_cycle_plugin.hints_enabled = True
        stack.status = "IN_PROGRESS"
        self.valet_life_cycle_plugin.do_pre_op(cnxt, stack, action="DELETE")
        self.validate_test(
            "plans_delete" in self.valet_life_cycle_plugin.api.method_calls[0])

        # action create
        self.valet_life_cycle_plugin.do_pre_op(cnxt, stack, action="CREATE")
        self.validate_test(
            "plans_create" in self.valet_life_cycle_plugin.api.method_calls[1])
@@ -1,80 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from keystoneclient.v2_0 import client
import mock
from valet_plugins.common import valet_api
from valet_plugins.plugins.nova.valet_filter import ValetFilter
from valet_plugins.tests.base import Base


class TestResources(object):
    def __init__(self, host_name):
        self.host = host_name


class TestValetFilter(Base):

    def setUp(self):
        super(TestValetFilter, self).setUp()

        client.Client = mock.MagicMock()
        self.valet_filter = self.init_ValetFilter()

    @mock.patch.object(valet_api.ValetAPIWrapper, '_register_opts')
    @mock.patch.object(ValetFilter, '_register_opts')
    def init_ValetFilter(self, mock_opt, mock_init):
        mock_init.return_value = None
        mock_opt.return_value = None
        return ValetFilter()

    @mock.patch.object(valet_api.ValetAPIWrapper, 'plans_create')
    @mock.patch.object(valet_api.ValetAPIWrapper, 'placement')
    def test_filter_all(self, mock_placement, mock_create):
        mock_placement.return_value = None
        mock_create.return_value = None

        with mock.patch('oslo_config.cfg.CONF') as config:
            setattr(
                config, "valet",
                {self.valet_filter.opt_failure_mode_str: "yield",
                 self.valet_filter.opt_project_name_str:
                     "test_admin_tenant_name",
                 self.valet_filter.opt_username_str: "test_admin_username",
                 self.valet_filter.opt_password_str: "test_admin_password",
                 self.valet_filter.opt_auth_uri_str: "test_admin_auth_url"})

            filter_properties = {
                'request_spec': {'instance_properties': {'uuid': ""}},
                'scheduler_hints': {'heat_resource_uuid': "123456"},
                'instance_type': {'name': "instance_name"}}

            resources = self.valet_filter.filter_all(
                [TestResources("first_host"), TestResources("second_host")],
                filter_properties)

            for resource in resources:
                self.validate_test(resource.host in "first_host, second_host")
                self.validate_test(mock_placement.called)

            filter_properties = {
                'request_spec': {'instance_properties': {'uuid': ""}},
                'scheduler_hints': "scheduler_hints",
                'instance_type': {'name': "instance_name"}}

            resources = self.valet_filter.filter_all(
                [TestResources("first_host"), TestResources("second_host")],
                filter_properties)

            for _ in resources:
                self.validate_test(mock_create.called)