Update hacking version to latest

Change-Id: I4869ed8297f243de09b019d6afe7bfe86df1d105
This commit is contained in:
zhulingjie 2019-01-05 00:53:21 +08:00
parent 0bb2c20ba8
commit 9a8e03f2cf
173 changed files with 20602 additions and 6 deletions

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode

================
Properties Group
================

https://blueprints.launchpad.net/heat/+spec/heat-property-group

Adds a PropertiesGroup for grouping a set of properties of a Heat resource
plug-in.

Problem description
===================

In many Heat resource plug-in implementations, properties are defined with a
validation schema, but there is no group concept, which is needed for the
following reasons:
* Sometimes a resource requires that either PropertyA or PropertyB be
  provided, and one of them is mandatory. This cannot be expressed in the
  Heat properties schema today: the developer cannot set required=True on
  both properties, and instead has to implement the check that one of them
  is provided in the validate() method.
* Some plug-ins support multiple versions of the service they manage; for
  example, the docker plug-in supports multiple API versions, so some
  properties are valid only for specific versions. There is currently no
  generic, declarative way to state that a property requires a given client
  version.

Proposed change
===============

The first problem can be solved by introducing the concept of a
PropertiesGroup, which enables declarative validation as defined below.

A resource class will have a `properties_groups_schema` attribute, which
contains a list of properties groups. Assume there are two properties,
PropertyA and PropertyB, already declared with a proper property schema. A
properties group is then specified as a logical expression in a dict:

.. code-block:: python

    properties_groups_schema = [
        {properties_group.AND: [[PropertyA], [PropertyB]]}
    ]

In this way, a logical expression is a one-key dict whose value is a list;
the list may contain property names (each expressed as a list) or nested
properties group expressions. The dictionary key must be equal to one of the
following operators: "and", "or", "xor".

Properties groups can be nested, for example:

.. code-block:: python

    properties_groups_schema = [
        {properties_group.AND: [
            {properties_group.OR: [[PropertyA], [PropertyB]]},
            [PropertyC]]}
    ]

During the validation phase, each group in the properties_groups_schema is
validated in sequence by applying its operator across the listed properties.
This makes it possible to express complex validation logic across dependent
properties. Each group declared in the properties_groups_schema can also
refer to another group in its properties list, so complex validations can be
composed.
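
The operator semantics described above can be sketched as follows. This is an
illustrative model of the evaluation only, not Heat's actual implementation;
the `validate_group` helper and the tuple-based representation of provided
property paths are assumptions.

```python
# Operator names mirror the spec's "and"/"or"/"xor" keys.
AND, OR, XOR = 'and', 'or', 'xor'

def validate_group(group, provided):
    """Return True if the provided property paths satisfy the group.

    `group` is a one-key dict {operator: [entry, ...]} where each entry is
    either a property path (a list of names) or a nested group dict.
    `provided` is a set of property paths (as tuples) set in the template.
    """
    (op, entries), = group.items()
    results = [
        validate_group(entry, provided) if isinstance(entry, dict)
        else tuple(entry) in provided
        for entry in entries
    ]
    if op == AND:
        return all(results)
    if op == OR:
        return any(results)
    if op == XOR:
        return sum(results) == 1
    raise ValueError('unknown operator: %s' % op)
```

For instance, `validate_group({XOR: [['PropertyA'], ['PropertyB']]},
{('PropertyA',)})` holds, because exactly one of the two properties is set.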

Each property entry is defined as a path in list form, e.g.
['prp1', 'child_prp1', 'grand_child_prp1'], even if the entry comprises only
one item, i.e. ['prp1'].

For example, given this properties_schema:

.. code-block:: python

    properties_schema = {
        PropertyA: properties.Schema(
            properties.Schema.MAP,
            schema={
                PropertySubA: properties.Schema(properties.Schema.STRING),
                PropertySubB: properties.Schema(properties.Schema.STRING)
            }
        )
    }

Then the corresponding properties_groups_schema would be:

.. code-block:: python

    properties_groups_schema = [
        {properties_group.AND: [[PropertyA, PropertySubA],
                                [PropertyA, PropertySubB]]}
    ]
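
Checking whether such a nested property entry is actually set can be sketched
as a simple walk over the property values; the helper name and dict-based
representation are hypothetical, for illustration only.

```python
def path_is_set(props, path):
    """Walk nested property values to check whether a path like
    ['PropertyA', 'PropertySubA'] is provided in the template."""
    current = props
    for name in path:
        # A missing key (or a non-map value mid-path) means the entry
        # is not provided.
        if not isinstance(current, dict) or name not in current:
            return False
        current = current[name]
    return True
```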

A properties group will also support specifying API versions of the
`client_plugin` used for a property, i.e. the property will be supported
only if the `client_plugin` API version in use satisfies the versions in the
group. Such a properties group has the following format:

.. code-block:: python

    properties_groups_schema = [
        {properties_group.API_VERSIONS: {
            properties_group.CLIENT_PLUGIN: <client_plugin object>,
            properties_group.VERSIONS: <list of supported versions>,
            properties_group.PROPERTIES: <list of properties entries>}
        }
    ]

Example of using `API_VERSIONS` as a properties group:

.. code-block:: python

    properties_groups_schema = [
        {properties_group.API_VERSIONS: {
            properties_group.CLIENT_PLUGIN: self.client_plugin('keystone'),
            properties_group.VERSIONS: ['1.2', '2.0'],
            properties_group.PROPERTIES: [[PropertyA], [PropertyB]]}
        }
    ]

The Heat engine can infer that the properties in this group are supported
only for the 1.2 and 2.0 API versions, so it can check the currently used
`client_plugin` version and validate accordingly.

Besides the validation part, the necessary changes will also be made to the
documentation generator so that users can learn the relations between
properties.
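
The version gating could be sketched as below. The function and the lowercase
'versions'/'properties' keys (standing in for the VERSIONS/PROPERTIES
constants) are assumptions for illustration, not Heat's actual code.

```python
def check_api_versions(group, provided_paths, current_version):
    """Reject any of the group's properties that are set while the client
    plugin's API version is outside the supported list.

    `provided_paths` is a set of property paths (as tuples) set by the user.
    """
    if current_version in group['versions']:
        return  # version is supported; all the group's properties are valid
    for path in group['properties']:
        if tuple(path) in provided_paths:
            raise ValueError(
                'property %s requires a client API version in %s, got %s'
                % ('.'.join(path), group['versions'], current_version))
```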

Alternatives
------------

None.

Implementation
==============

Assignee(s)
-----------

Primary assignee:
  Kanagaraj Manickam (kanagaraj-manickam)
  Peter Razumovsky (prazumovsky)

Milestones
----------

Currently moved to the backlog due to lack of community interest. A workable
PoC is available here:
https://review.openstack.org/#/q/topic:bp/property-group

Work Items
----------

* Define a PropertiesGroup class with the required validation logic for a
  given resource
* Update the resource validation logic to validate against property groups
* Update the existing resources with property groups
* Generate property group documentation so that users understand the
  property requirements
* Add the required test cases

Dependencies
============

None.

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode

===========================================
Heat template migrate resources' properties
===========================================

https://blueprints.launchpad.net/heat/+spec/heat-template-port

Heat does not provide an API/CLI to migrate a given template from an older
Heat version to a later one.

Problem description
===================

Heat is released in line with each OpenStack release, and resource
properties and attributes may have changed and/or been deprecated across
these releases. A user may wish to migrate a template that was created with
an earlier version, say Juno, to the current version. Currently Heat does
not support this.

Proposed change
===============

Heat already has a mechanism to define a translation rule for each
deprecated property, using translation.TranslationRule. Resource plugins
implement these rules to support migration from a deprecated property to its
replacement.
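
The essence of a REPLACE-style translation can be sketched as below; this
mirrors the idea behind translation.TranslationRule rather than its actual
API, and the helper and property names are illustrative only.

```python
def apply_replace_rule(props, deprecated_name, new_name):
    """Move a deprecated property's value to its replacement, if the
    replacement is not already set."""
    if deprecated_name in props and new_name not in props:
        props = dict(props)  # leave the caller's dict untouched
        props[new_name] = props.pop(deprecated_name)
    return props
```

For example, a template that sets a hypothetical deprecated `pool_id`
property would be rewritten to use its `pool` replacement.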

This feature can be exposed with the command below:

``openstack orchestration template migrate -t <template-file>
--output-format [json|yaml] --output-file <output-file>``

This command will migrate the given template file and write the template in
the requested output format.

The command will report the changes made to deprecated properties in the
following format::

    <resource-path> <property> <action> <details>

where:

*resource-path*
    The resource path in the given template.

*property*
    The name of the property migrated to the current version.

*action*
    One of add, replace or delete.

*details*
    Additional details about the deprecation, if any.

Alternatives
------------

None.

Implementation
==============

Assignee(s)
-----------

Primary assignee:
  kanagaraj-manickam
  ananta

Milestones
----------

Target Milestone for completion:
  ocata-1

Work Items
----------

* For the deprecated properties in resource plugins, add translation rules
* Add the required API and test cases
* Update python-openstackclient with the new CLI mentioned above

Dependencies
============

None.

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode

=================================
Add lock and unlock stack actions
=================================

https://blueprints.launchpad.net/heat/+spec/lock-stack

Application vendors who deploy their applications using Heat stacks can
currently use automatic processes such as ceilometer alarms and auto-scaling
groups, as well as manual processes such as stack-update. In some cases, for
example during manual maintenance of the application, actions performed on
the stack can interrupt and prolong the maintenance period. A lock on the
stack that disables and blocks these types of processes would solve this
issue.

Problem description
===================

Use cases:

1. Application vendors are interested in a "maintenance mode" for their
   application, during which no topology changes are permitted. For example,
   a clustered DB application needs maintenance mode for a manual reboot of
   one of its servers: when the server reboots, the other servers
   redistribute the data among themselves, which causes high CPU load, which
   in turn might trigger an undesired scale-out (causing another CPU spike,
   and so on).

2. Some cloud admins have a configuration stack that initializes the cloud
   (creating networks, flavors, images, ...), and these resources should
   exist for as long as the cloud does. Locking these configuration stacks
   prevents someone from accidentally deleting or modifying the stack or its
   resources.

This feature may grow in significance once convergence phase 2 is in place
and many more automatic actions are performed by Heat. The ability to
manually perform admin actions on the stack without interruptions is
important.

Proposed change
===============

The proposal is to add a "lock" operation to be performed on the stack,
similar to the nova server "lock" or the glance image "--is-protected" flag.
Once a stack is locked, the only operations allowed on it are "unlock" and
"lock" (the latter in order to change the locking level); the Heat engine
should reject any other stack operations and ignore signals that modify the
stack (such as scaling) and, optionally, its underlying resources.

These API calls, 'lock' and 'unlock', would be additions to the
stack-actions API.

The lock operation should have a "level" flag with possible values of
{all, stacks} (default = all):

* When level = stacks: perform a Heat lock, which locks the stack and all
  nested stacks (actions on the "physical" resources are still permitted).
  Any action on the stack or its nested stacks will be blocked, but other
  stack resources will not be locked.

* When level = all: perform a Heat lock and additionally enable lock/protect
  for each stack resource that supports it (nova server, glance image, ...).

The lock operation should only be called once the stack is in a final state
(a state which is not "IN_PROGRESS", not "INIT_COMPLETE" and not
"DELETE_COMPLETE"). When the API call succeeds, it returns response code
200; when it is called while the stack is in an invalid state, it returns
response code 409.

The unlock operation can only be called on a stack that is either locked or
has failed to lock/unlock. The ability to call the unlock API both when
locking and when unlocking failed is important for transient issues that
leave the stack in a "dirty" state, so that it can be brought back to its
previous healthy one.

Alternatives
------------

In the future we might want to enable interrupting or rolling back running
processes (such as a retry of stack-create, or scaling) and then locking the
stack, instead of waiting for the running process to finish.

Implementation
==============

Assignee(s)
-----------

Primary assignee:
  noa-koffman
  melisha
  avi-vachnis

Milestones
----------

Target Milestone for completion:
  liberty-1

Work Items
----------

Changes to API:

- Support 'lock' and 'unlock' actions in the existing stack-actions API.

- Locking a stack will be called through the stack-actions API::

      HTTP POST /v1/{tenant_id}/stacks/{stack_name}/{stack_id}/actions

  with the following body::

      {
          "lock": {"level": "stacks"}
      }

- Unlocking a stack will be called similarly, with the following body::

      {
          "unlock": null
      }

Changes to engine:

Develop stack-lock logic in Heat which prevents stack actions
(suspend/resume), stack-update, auto-scaling, etc., from taking place. In
the future we might add additional locking modes, to enable locking the
stack from actions while still allowing auto-scaling or suspend and resume
actions.

- The stack's ACTIONS will now contain two new actions ("LOCK" and
  "UNLOCK").

- New methods will be created for locking and unlocking a stack, similar to
  the suspend and resume methods.

- As with the existing (suspend and resume) stack actions, the new methods
  will trigger calls to "handle_lock" and "handle_unlock" methods in the
  stack resources. For resources that do not implement locking, these
  methods will have no actual effect.

- Appropriate stack and stack-resource states (LOCK_IN_PROGRESS,
  LOCK_COMPLETE, LOCK_FAILED, UNLOCK_IN_PROGRESS, UNLOCK_COMPLETE,
  UNLOCK_FAILED) should be added. The allowed actions for each state are as
  follows:

  - LOCK_IN_PROGRESS: none
  - LOCK_COMPLETE: unlock, lock (in order to enable changing the locking
    level)
  - LOCK_FAILED: unlock, delete, lock
  - UNLOCK_COMPLETE: all actions except unlock
  - UNLOCK_FAILED: delete, unlock
  - UNLOCK_IN_PROGRESS: none

- Any engine action on the stack, except unlocking, will only start after
  validating that the stack is not locked.

- The engine should validate that the stack is in an appropriate state
  before starting the lock process.
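
The state/action gating described in the engine changes can be sketched as
follows. The state and action names follow the spec; the dict layout and
helper name are assumptions for illustration, not actual Heat code.

```python
# Allowed actions for each locking-related state, per the spec's table.
ALLOWED = {
    'LOCK_IN_PROGRESS': set(),
    'LOCK_COMPLETE': {'unlock', 'lock'},
    'LOCK_FAILED': {'unlock', 'delete', 'lock'},
    'UNLOCK_IN_PROGRESS': set(),
    'UNLOCK_FAILED': {'delete', 'unlock'},
}

def action_allowed(stack_state, action):
    """Decide whether `action` may run given the stack's locking state."""
    if stack_state == 'UNLOCK_COMPLETE':
        # An unlocked stack permits everything except a redundant unlock.
        return action != 'unlock'
    if stack_state in ALLOWED:
        return action in ALLOWED[stack_state]
    # States unrelated to locking fall through to normal validation.
    return True
```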

Changes to client:

- An action-lock command will be added; this will include the "lock
  resources" behavior.

- The action-lock command will allow passing the "level" parameter using a
  new "--level" flag, similar to the stack-create command.

  usage: ``heat action-lock <Name or ID of stack> --level=stacks``

- An action-unlock command will be added, used the same way as
  action-suspend and action-resume, with no parameter flags.

  usage: ``heat action-unlock <Name or ID of stack>``

Documentation changes:

- Update developer.openstack.org/api-ref-orchestration-v1.html with the
  additions to the stack-actions API.
- Add the lock stack design to wiki.openstack.org.
- Add the lock and unlock actions to the developer API docs:
  .../heat/sourcecode/heat/heat.api.openstack.v1.actions.html

Dependencies
============

None.

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode

===============================================================
Add Template "Capabilities" Annotation/Resolution/Introspection
===============================================================

https://blueprints.launchpad.net/heat/+spec/resource-capabilities

Also related to https://blueprints.launchpad.net/heat/+spec/interface-types

Add an optional annotation to HOT which enables a template author to declare
that a template implements/provides particular capabilities.

Problem description
===================

Currently, the environment resource_registry provides an extremely flexible
but completely unconstrained interface for mapping type aliases to
implementations. This makes it difficult for those wishing to use the
resource_registry for composition, in particular when offering users the
choice of a particular implementation of a provider resource template.

For example, consider this workflow:

1. Choose a parent template.

2. Choose a set of other templates and environments (or have this
   programmatically generated, for instance by pulling templates from one or
   more known locations/paths).

3. Inspect that group to figure out the resource-type level
   capabilities/options. These are the first major choices a user will make
   to determine the nested stack implementations for each type.

4. The user selects a nested stack choice for each type that has more than
   one option.

5. Reinspect, given those major options, for the full set of parameters
   such that the user may be prompted for mandatory and optional parameters,
   including those not exposed by the top-level parent template.

6. The user enters values for all of the parameters and the stack gets
   created.

The topic of this spec is steps 3 and 4 above.
https://review.openstack.org/#/c/197199 discusses step 5. The other steps
are already possible.

The discussion below focuses on the TripleO use case, since that is what is
motivating this work (TripleO makes very heavy use of template composition
via the ``resource_registry``). However, the feature should be generally
useful to anyone wishing to build a complex environment from a tree of
interrelated templates via the ``resource_registry``.

Here is an example of the ``resource_registry`` mapping used for the TripleO
controller node implementation:

.. code-block:: yaml

    resource_registry:
      OS::TripleO::Controller: puppet/controller-puppet.yaml
      OS::TripleO::Controller::Net::SoftwareConfig: net-config-bridge.yaml
      OS::TripleO::ControllerPostDeployment: puppet/controller-post-puppet.yaml
      OS::TripleO::ControllerConfig: puppet/controller-config.yaml
      OS::TripleO::Controller::Ports::ExternalPort: network/ports/noop.yaml
      OS::TripleO::Controller::Ports::InternalApiPort: network/ports/noop.yaml
      OS::TripleO::Controller::Ports::StoragePort: network/ports/noop.yaml
      OS::TripleO::Controller::Ports::StorageMgmtPort: network/ports/noop.yaml
      OS::TripleO::Controller::Ports::TenantPort: network/ports/noop.yaml
      OS::TripleO::Controller::CinderBackend: extraconfig/controller/noop.yaml

We can see that there are a large number of choices (and this is only a tiny
subset of the full environment), with no way for a UI to determine what the
valid choices for any of the mappings are. It would be beneficial to
describe directly in the template which implementations are valid, so that
they may be discovered by UI/CLI tools and constrained at validation time.

Taking the above as a worked example, there are multiple choices to be made:

* Configuration tool type (all ``puppet/*.yaml`` resources)
* NIC configuration (physical network, e.g. bridged, bonded, etc.)
* Port assignment (overlay network, where ``ports/noop.yaml`` assigns all
  ports to a common network)
* Choice of backend (potentially multiple CinderBackend implementations)

For simplicity, the examples below consider only the choice between Puppet
and some other implementation.

Proposed change
===============

The proposed change covers three areas:

* :ref:`capabilities-annotations` - How to convey the relationship between a
  resource type and the templates that may potentially fulfill it.
* :ref:`capabilities-resolution` - How Heat can use user settings on the
  stack to choose, from a list of options, the most applicable template to
  fulfill a resource type.
* :ref:`capabilities-introspection` - How a UI can programmatically use the
  annotations to present the user with an interface in which to select an
  option for each eligible resource type.

The remainder of this change is broken up into those three sections. It
should be noted that not all three are necessary for a minimal viable
feature. It is possible that the implementation for Mitaka only covers, for
example, the annotations and the introspection APIs necessary for the user
to understand the "schema" (for lack of a better word) around selecting
template implementations for a stack.

.. _capabilities-annotations:

Annotation
----------

Add an optional template annotation, inspired by the TOSCA
"substitution_mappings" interface [1]_, which allows an optional new block
in HOT templates where template authors may declare that a template provides
a particular set of capabilities.

There are two slightly different uses of the capabilities annotation being
proposed.

Tag-Based
^^^^^^^^^

For example, there may be multiple valid implementations of
``OS::TripleO::Controller``. For the Puppet-based implementation, the
capabilities annotation on the template would indicate it:

.. code-block:: yaml

    heat_template_version: 2015-10-15

    capabilities:
      deployment: puppet

The syntax used here is similar to that defined in the TOSCA spec, but the
names have been adjusted to better match existing HOT conventions. The
capabilities section will not be strictly validated; it will be possible to
add extra key/value pairs that are not specified in the environment, so that
templates remain portable.

.. _capabilities-annotation-type:

Type-Based
^^^^^^^^^^

It may also be possible to use these annotations for client-side discovery
of the list of valid templates to be passed via the ``resource_registry``,
by specifically referencing the name of the resource type the template may
be used as a mapping for:

.. code-block:: yaml

    heat_template_version: 2015-10-15

    capabilities:
      resource_type: OS::TripleO::Controller

This should support a list, as TripleO has already seen examples of
templates that can be used as either the compute or controller hooks:

.. code-block:: yaml

    heat_template_version: 2015-10-15

    capabilities:
      resource_type: [OS::TripleO::ControllerPostDeployment,
                      OS::TripleO::ComputePostDeployment]

.. _capabilities-resolution:

Resolution
----------

In the environment, an optional new "requires" section will be added, along
with support for ``resource_registry`` keys containing a list of multiple
implementations. Heat will then resolve the implementation to choose by
matching the environment's requires section against the list of possible
templates with (hopefully matching) capabilities. A validation error will be
thrown should either zero or multiple implementations be found.

For example, expanding on the examples from the previous section, take the
following environment file:

.. code-block:: yaml

    requires:
      deployment: puppet

    resource_registry:
      OS::TripleO::Controller: [puppet/controller.yaml, docker/controller.yaml]

Adding annotations to the two referenced templates, we have:

.. _capabilities-ex-puppet:

``puppet/controller.yaml``:

.. code-block:: yaml

    heat_template_version: 2015-10-15

    capabilities:
      deployment: puppet

.. _capabilities-ex-docker:

``docker/controller.yaml``:

.. code-block:: yaml

    heat_template_version: 2015-10-15

    capabilities:
      deployment: docker

Putting these three files together, Heat would use the ``capabilities``
section to determine which of the two ``controller.yaml`` files to use.
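
The resolution step can be sketched as a simple matching function; the
function name and dict-based inputs are assumptions for illustration, not
Heat's actual implementation.

```python
def resolve_mapping(requires, candidates):
    """Pick the single template whose capabilities satisfy `requires`.

    `requires` is the environment's requires section; `candidates` maps
    each template path to its `capabilities` dict. Raises if zero or
    multiple templates match, mirroring the validation error above.
    """
    matches = [
        path for path, caps in candidates.items()
        if all(caps.get(key) == value for key, value in requires.items())
    ]
    if len(matches) != 1:
        raise ValueError('expected exactly one matching template, found %d: %s'
                         % (len(matches), matches))
    return matches[0]
```

With the puppet/docker example above, a `requires` of
`{'deployment': 'puppet'}` resolves to `puppet/controller.yaml`.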

.. _capabilities-introspection:

Introspection
-------------

The functionality described in the :ref:`capabilities-annotations` section
provides enough information for Heat to offer a series of introspection
queries that improve the user experience.

Specific Type Query
^^^^^^^^^^^^^^^^^^^

Given a specific resource type name, the Heat API should be able to return a
list of templates that claim to support that type (note: this is contingent
on using the :ref:`capabilities-annotation-type` annotation style described
above).

A potential example of the output of such a query through the Heat client
is below:

.. code-block:: bash

    $ heat capabilities-find -r -c resource_type=OS::TripleO::Controller ./*
    puppet/controller.yaml
    docker/controller.yaml

This would recurse from the current directory inspecting the capabilities in
each template, returning a list of those which match the required
capabilities (with the possibility of passing multiple ``-c`` options if
necessary). This makes multiple implementations discoverable on the client
side.

Capabilities Summary
^^^^^^^^^^^^^^^^^^^^

There is also a need to have Heat analyze a series of templates and
environments, returning a list of all capabilities that can be specified.
For example, given the :ref:`Puppet <capabilities-ex-puppet>` and
:ref:`Docker <capabilities-ex-docker>` example templates above:

.. code-block:: bash

    $ heat template-capabilities -f puppet/controller.yaml \
        -f docker/controller.yaml

which would return:

.. code-block:: json

    {
        "deployment": ["puppet", "docker"]
    }

A similar version of the call exists if the
:ref:`capabilities-annotation-type` annotation is used:

.. code-block:: json

    {
        "OS::TripleO::Controller": ["puppet/controller.yaml",
                                    "docker/controller.yaml"]
    }

The operator or UI then knows that these are the options which must be
resolved in order for the stack to be created. Note this is related to, but
not the same as, the spec on recursive validation [2]_, which is about
exposing the parameters required for stack create, not the options related
to a valid composition.
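
The aggregation behind such a summary could look roughly like the sketch
below; the function name and dict-based input are illustrative assumptions,
not the actual Heat API implementation.

```python
def summarize_capabilities(templates):
    """Collect each capability key and all distinct values seen across
    templates.

    `templates` maps each template path to its `capabilities` dict; the
    result has the shape of the JSON summary shown above.
    """
    summary = {}
    for caps in templates.values():
        for key, value in caps.items():
            values = summary.setdefault(key, [])
            if value not in values:  # keep first-seen order, no duplicates
                values.append(value)
    return summary
```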

.. note:: API v. Client-side

    The original iteration of this spec spoke in terms of having the Heat
    client walk the template tree and perform the introspections described.
    It has since been changed to refer to the Heat API, moving the logic
    server-side and allowing non-Python clients access to this
    functionality.

.. rubric:: References

.. [1] http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/csd03/TOSCA-Simple-Profile-YAML-v1.0-csd03.html#_Toc419746122
.. [2] https://review.openstack.org/#/c/197199

Alternatives
------------

The main alternative discussed (see the previous revision of this patch) was
adding constraints to the ``resource_registry``, such that valid mappings
may be defined inside the environment. This idea was rejected because of the
desire for a more discoverable interface (e.g. looking for valid
implementations vs. a rigidly defined list of constraints).

A subsequent proposal, which focused on only matching a ``resource_type``
annotation in the templates, was also rejected; it was suggested that this
was insufficiently granular and not flexible enough.

Implementation
==============

The implementation will require adding the new capabilities annotation to
the Mitaka HOT version; this will be optional, and if it is omitted the
existing behavior will be maintained. Support will then be added to the
environment to enable lists to be passed via the ``resource_registry`` and
resolved via a new requires section.

Assignee(s)
-----------

Primary assignee(s):

* shardy
* jdob

Milestones
----------

Target Milestone for completion:
  mitaka-2

Work Items
----------

Changes to Engine
^^^^^^^^^^^^^^^^^

* Update HOT to support the optional new capabilities annotation
* Update the environment code to allow lists in the resource_registry
* Update the environment to process the capabilities section to filter lists

Changes to heatclient
^^^^^^^^^^^^^^^^^^^^^

* Add support to python-heatclient for parsing a tree of templates and
  returning a list of valid templates for a specified capability
* Add support for passing files/environments to get the required
  capabilities

Documentation Changes
^^^^^^^^^^^^^^^^^^^^^

* Document the new interfaces in the template guide docs/HOT spec.

Dependencies
============

None.

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode

===================================
Action-aware Software Configuration
===================================

https://blueprints.launchpad.net/heat/+spec/action-aware-sw-config

Heat resources have a well-defined lifecycle, handling the lifecycle actions
CREATE, DELETE, SUSPEND, RESUME and UPDATE. Software components in a Heat
template should follow the same lifecycle-awareness and allow users to
provide configuration hooks for the aforementioned actions.

Problem description
===================

With the current design of Heat software orchestration, "software
components" defined through SoftwareConfig resources allow only one
configuration (e.g. one script) to be specified. Typically, however, a
software component has a lifecycle that is hard to express in a single
script. For example, software must be installed (created), there should be
support for suspend/resume handling, and it should be possible to allow for
deletion logic. This is also in line with the general Heat resource
lifecycle.

To achieve the desired behavior of having all those lifecycle hooks with the
current design, one would have to define several SoftwareConfig resources
along with several SoftwareDeployment resources, each addressing one
specific lifecycle action. Alternatively, one would have to design
automation scripts so that they conditionally handle each lifecycle action.
Both options lack intuitiveness or impose complexity on the creation of
automation scripts. By making software components action-aware like other
Heat resources, thus leveraging more of the orchestration capabilities of
the Heat engine, the creation of software configuration automation and the
respective Heat templates can be simplified for users.

Proposed change
===============

It is proposed to make software components (defined through
SoftwareComponent and SoftwareDeployment resources) lifecycle-action-aware
by allowing users to provide configuration scripts for one software
component for all standard Heat lifecycle actions (CREATE, DELETE, SUSPEND,
RESUME, UPDATE).

The configurations that collectively belong to one software component (e.g.
a Tomcat web server or a MySQL database) can be defined in one place (i.e.
one *SoftwareComponent* resource) and can be associated with a server by
means of one single SoftwareDeployment resource.

The new SoftwareComponent resource will, like the SoftwareConfig resource,
not gain any new behavior; it will also be a static store of software
configuration data. Compared to SoftwareConfig, though, it will be extended
to provide several configurations, corresponding to Heat lifecycle actions,
in one place, following a well-defined structure so that SoftwareDeployment
resources in combination with in-instance agents can act in a
lifecycle-aware manner.

.. _software_component_resource:

New SoftwareComponent resource
------------------------------

It is proposed to implement a new resource type OS::Heat::SoftwareComponent,
which is similar to the existing SoftwareConfig resource but has a richer
structure and semantics.

As an alternative, we could choose to extend the existing SoftwareConfig
resource, but the overloaded semantics could confuse users. Furthermore,
extending the existing resource could raise additional complexity in
maintaining backwards compatibility with existing uses of SoftwareConfig.

The set of properties for OS::Heat::SoftwareComponent will be as follows:

.. code-block:: yaml

    # HOT representation of new SoftwareComponent resource
    sw-config:
      type: OS::Heat::SoftwareComponent
      properties:
        # per action configurations
        configs:
          - actions: [ string, ... ]
            config: string
            tool: string
          - actions: [ string, ... ]
            config: string
            tool: string
          # ...
        # inputs and outputs
        inputs: [ ... ]
        outputs: [ ... ]
        options: { ... }

The *configs* property is a list of configurations for the various lifecycle
operations of a software component. Each entry in that list defines the
following properties:

actions
    This property defines the list of resource actions for which the
    respective config should be applied. Possible values in that list
    correspond to lifecycle actions of Heat's resource model (i.e. CREATE,
    DELETE, SUSPEND, RESUME, and UPDATE).

    Making this property a list of actions allows for re-using one
    configuration for multiple resource actions when desired. For example, a
    Chef recipe for deploying some software (i.e. the CREATE action) could
    also be used for handling updates to software configuration properties
    (i.e. the UPDATE action).

    **Note:** An action like CREATE is allowed to appear in the *actions*
    property of at most one config. Otherwise, the ordering of several
    configs for one lifecycle action at runtime would be unclear. This
    constraint will be validated in the *validate()* method of the
    SoftwareComponent resource. Allowing an action to appear in more than
    one config (probably with an additional annotation for ordering) is
    something that could be done as future work.
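
The uniqueness constraint above can be sketched as a small check; the
function name and the dict-based config entries are assumptions mirroring
the YAML structure, not Heat's actual *validate()* code.

```python
def validate_configs(configs):
    """Raise if any lifecycle action is listed in more than one config
    entry, since the runtime ordering would then be ambiguous."""
    seen = {}  # action name -> index of the config entry that claims it
    for index, entry in enumerate(configs):
        for action in entry.get('actions', []):
            if action in seen:
                raise ValueError(
                    'action %s appears in configs %d and %d'
                    % (action, seen[action], index))
            seen[action] = index
```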

config
    This property defines the actual configuration to be applied, analogous
    to the *config* property of OS::Heat::SoftwareConfig.

tool
    This property specifies the configuration tool to be used. Note that
    this is analogous to the SoftwareConfig resource's *group* property, but
    it has been suggested to use a more intuitive name here.

    Having the *tool* property on each config entry allows for mixing
    different configuration tools for one software component. For example,
    the deployment of software (i.e. CREATE) could be done using Chef or
    Puppet, while a simple script could be used for SUSPEND or RESUME.
The *inputs* and *outputs* properties will be defined globally for the complete
SoftwareComponent definition instead of being provided per config hook.
Otherwise, the corresponding SoftwareDeployment resource at runtime would
potentially have different or stale attributes depending on which resource
action was last run, which would likely introduce more complexity.
Template authors will have to make sure that the defined *inputs* and *outputs*
cover the superset of inputs and outputs for all operation hooks. Typically,
the CREATE hook will require the broadest set of inputs and produce most
outputs.
The *options* property will also be defined globally for the complete
SoftwareComponent. This property is meant to provide extra options for the
respective configuration tool to be used. It is assumed that the same options
will apply to all invocations of a configuration for one SoftwareComponent, so
making this a per-config setting does not make sense.
Note that in the case of multiple configuration tools being used in one
SoftwareComponent, options need to be namespaced so that they can be mapped to
the respective tools. For that reason, the *options* map will have to contain
sub-sections for the respective tools. For example, for Chef the *options* map
would contain a 'chef' entry whose value is in turn a map of
Chef-specific options.
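
As a sketch of this namespacing, the *options* map could look as follows. The
option names shown here are purely illustrative and not defined by any actual
configuration tool:

.. code-block:: yaml

   options:
     chef:
       # options passed only to the Chef hook; names are hypothetical
       log_level: debug
     script:
       # options passed only to the script hook; names are hypothetical
       timeout: 600
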
Example
~~~~~~~
The following snippet shows an example of a SoftwareComponent definition for an
application server. The SoftwareComponent defines dedicated hooks for CREATE,
UPDATE and SUSPEND operations.
.. code-block:: yaml
appserver-config:
type: OS::Heat::SoftwareComponent
properties:
# per action configurations
configs:
- actions: [ CREATE ]
config: { get_file: scripts/install_appserver.sh }
tool: script
- actions: [ UPDATE ]
config: { get_file: scripts/reconfigure_appserver.sh }
tool: script
- actions: [ SUSPEND ]
config: { get_file: scripts/drain_sessions.sh }
tool: script
# inputs and outputs
inputs:
- name: http_port
- name: https_port
- name: default_con_timeout
outputs:
- name: admin_url
- name: root_url
Adaptation of SoftwareDeployment resource
-----------------------------------------
The SoftwareDeployment resource (OS::Heat::SoftwareDeployment) will be adapted
to cope with the new SoftwareComponent resource, for example to provide the
contents of the *configs* property to the instance in the appropriate form.
Furthermore, the SoftwareDeployment resource's action and state (e.g. CREATE
and IN_PROGRESS) will be passed to the instance so the in-instance
configuration hook can select the right configuration to be applied (see also
:ref:`in_instance_hooks`).
The SoftwareDeployment resource creates transient configuration objects at
runtime for providing data to the in-instance tools that actually perform
software configuration. When a SoftwareComponent resource is associated to a
SoftwareDeployment resource, the complete set of configurations of the software
component (i.e. the complete *configs* property) will be stored in that
transient configuration object, and it will therefore be available to
in-instance tools.
There will be no change in SoftwareDeployment properties, but there will have
to be special handling for the *actions* property: the *actions* property
will be ignored when a SoftwareComponent resource is associated to a
SoftwareDeployment. In that case, the entries defined in the *configs* property
will provide the set of actions on which SoftwareDeployment, or in-instance
tools respectively, shall react.
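
For illustration, a deployment of the SoftwareComponent from the example in the
previous section could be written as sketched below (the server resource and
the input values are hypothetical). Note that no *actions* property is given,
since it would be ignored anyway:

.. code-block:: yaml

   appserver-deployment:
     type: OS::Heat::SoftwareDeployment
     properties:
       config: { get_resource: appserver-config }
       server: { get_resource: appserver-instance }
       input_values:
         http_port: 8080
         https_port: 8443
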
Note: as an alternative to passing the complete set of configurations defined
in a SoftwareComponent, along with the SoftwareDeployment's action and state
to the instance, we could make the SoftwareDeployment resource select the right
config based on its action and state and only pass this to the instance. This
could possibly allow for using the existing in-instance hooks without change.
However, at the time of writing this spec, it was decided to implement config
select in the in-instance hook since it gives more power to the in-instance
implementation for possible future enhancements.
.. _in_instance_hooks:
Update to in-instance configuration hooks
-----------------------------------------
The in-instance hooks (55-heat-config) have to be updated to select the
appropriate configuration to be applied depending on the action and state
indicated by the associated SoftwareDeployment resources.
In case of a *SoftwareComponent* being deployed, the complete set of
configurations will be made available to in-instance hooks via Heat metadata.
In addition, SoftwareDeployment resources will add their action and state
to the metadata (e.g. CREATE and IN_PROGRESS). Based on that information, the
in-instance hook will then be able to select and apply the right configuration
at runtime.
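
To make this concrete, the metadata visible to the in-instance hook could look
roughly like the sketch below; the exact layout and key names are
implementation details and purely illustrative here:

.. code-block:: yaml

   deployments:
   - name: appserver-deployment
     # action and state added by the SoftwareDeployment resource
     deploy_action: UPDATE
     deploy_status: IN_PROGRESS
     # complete set of configs from the SoftwareComponent; the hook
     # selects the entry whose actions list contains UPDATE
     configs:
     - actions: [ CREATE ]
       tool: script
       config: |
         #!/bin/sh
         echo "install"
     - actions: [ UPDATE ]
       tool: script
       config: |
         #!/bin/sh
         echo "reconfigure"
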
As an alternative, we could choose to implement SoftwareDeployment in a way to
only pass that configuration to the instance (via Heat metadata) that
corresponds to its current action and state. In-instance tools could then
potentially remain without changes (see also note in previous section).
Alternatives
------------
Without any change to current implementation, the following alternatives for
providing action-specific configuration hooks for a software component would
exist:
Use of OS::Heat::StructuredConfig
  StructuredConfig allows for defining a map of configurations, i.e. it would
  allow for defining the proposed structure of the *configs* property to be
  added to SoftwareConfig. However, StructuredConfig does not define a schema
  for that map and would thus allow for any free-form data, which would make it
  much harder to enforce well-defined handling.
  In addition, this would change the semantics of the map structure in
  StructuredConfig and thus would be an abuse of this resource.
Use of several SoftwareConfigs and SoftwareDeployments
As already outlined in the problem description, with the current design it
would be possible to define separate SoftwareConfigs and SoftwareDeployments,
each corresponding to one lifecycle resource action. However, this makes
templates much more verbose by having many resources for representing one
software component, and the overall structure does not align with the general
structure of all other Heat resources.
Use of scripts that conditionally handle actions
It would be possible to provide scripts that get invoked for all of a
resource's lifecycle actions. Those scripts would have to include a lot of
conditional logic, which would make them very complicated.
Potential follow-up work
------------------------
The current specification and implementation will only cover Heat's basic
lifecycle operations CREATE, DELETE, SUSPEND, RESUME and UPDATE. It is
recognized that special handling might make sense for scenarios where servers
are being quiesced for an upgrade, or where they need to be evacuated for a
scaling operation. In addition, users might want to define complete custom
actions (see also :ref:`software_component_resource`). Handling of those
actions is out of scope for now, but can be enabled by follow-up work on top
of the implementation of this specification. For example, an additional
property *extended_action* could be added to SoftwareDeployment which could be
set to the extended actions mentioned above. When passing this additional
property to in-instance hooks, the hooks could then select and apply
the respective config for the specified extended action.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Thomas Spatzier
Milestones
----------
Target Milestone for completion:
Juno-2
Work Items
----------
* Create new OS::Heat::SoftwareComponent resource
* Adapt OS::Heat::SoftwareDeployment for new SoftwareComponent
* Adapt in-instance hook for selecting right configuration to be applied
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
======================
Convergence Observer
======================
https://blueprints.launchpad.net/heat/+spec/convergence-observer
As a step toward implementing the ``convergence`` specification, Heat
will split operations which fall into the "observing reality" category
into a separate "observer" process.
Problem description
===================
External systems hosting the physical resources of a stack will change
independently of operations in Heat. There needs to be a way to record
and respond to these changes.
Proposed change
===============
* Observer is responsible for managing the model of reality
* polls nova/neutron/etc using resource `check` methods.
* conceptually polls heat stack descriptions to update internal resources
* Data model will need to store "observed state"
* REST API will need to display "observed state"
Note that no change will be necessary to the resource plugin API. Also
note that subscribing to notifications will be done in a separate
blueprint named `convergence-continuous-observer`.
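
As a rough sketch of what exposing observed state through the REST API could
look like (the attribute names here are hypothetical and not part of any
agreed API):

.. code-block:: yaml

   resource:
     resource_name: my_server
     # state as last recorded by the engine (desired/engine view)
     resource_status: CREATE_COMPLETE
     # state as last observed in the backing service, e.g. nova
     observed_status: ERROR
     observed_at: '2014-06-01T12:00:00Z'
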
Alternatives
------------
-
Implementation
==============
Assignee(s)
-----------
Work should be spread between all developers as much as possible to help
spread awareness of how things work.
Milestones
----------
Target Milestone for completion:
Juno-2
Work Items
----------
* Modify data model to record resource state
* Modify public API to display observed state
* Create new observer RPC API calls
* Create new observer entry point
* Move "check_active" and "check" calls to use observer API
Dependencies
============
-

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=============================
Convergence
=============================
https://blueprints.launchpad.net/heat/+spec/convergence
Clouds are noisy - servers fail to come up, or die when the underlying
hypervisor crashes or suffers a power failure. Heat should be resilient
and allow concurrent operations on any sized stack.
Problem description
===================
There are multiple problems that face users of Heat with the current model.
* stacks that fail during creation / update
* Physical resources may (silently) stop working - either
disappearing or having an error of some sort (e.g. a loadbalancer that
isn't forwarding traffic, or a nova instance in ERROR state). When this
happens, a subsequent update that depends on a presumably "active"
resource is likely to fail unexpectedly.
* Heat engines are also noisy:
* they get restarted when servers need to get updated
* they may fail due to hardware or network failure (see under
hypervisor failure)
* Heat engine failures show up as a _FAILED stack, which is a
problem for the user, but it should not be: whatever happened is a
temporary problem for the operator, and not resolvable by the user.
* Large stacks exceed the capacity of a single heat-engine process
to update / manage efficiently.
* Large clusters - e.g. 10K VMs should be directly usable
* Stack updates lock state until the entire thing has converged again,
which prevents admins from making changes until it has completed
* This makes it hard/impossible to do autoscaling as autoscaling
decisions may be more frequent than the completion time from
each event
* Concern: Why would you make a controller that makes decisions
so frequently that it does not have time to observe the
effects of one decision before making the next?
* Large admin teams are forced to use an external coordination
service to ensure they don't do expensive updates except when
there is scheduled time
* Reacting to emergencies is problematic
User Stories
------------
* Users should only need to intervene with a stack when there
is no right action that Heat can take to deliver the current
template+environment+parameters. E.g. if a cinder volume attached to a
non-scaling-group resource goes offline, that requires administrative
intervention -> STACK_FAILED
* Examples that this would handle without intervention
* nova instances that never reach ACTIVE
* neutron ports that aren't reachable
* Servers in a scaling group that disappear / go to ERROR in
the nova api
* Examples that may need intervention
* servers that are not in a scaling group which go to ERROR
after running for a while or just disappear
* Scaling groups that drop below a specified minimum due to
servers erroring/disappearing.
* Heat users can expect Heat to bring a stack into line with the
template+parameters even if the world around it changes after
STACK_READY - e.g. due to a server being deleted by the user.
* That said, there will be times where users will want to disable
this feature.
* Operators should not need to manually wait-or-prepare heat engines
for maintenance: assume crash/shutdown/failure will happen and have
that be seamless to the user.
* Stacks that are being updated must not be broken / interrupted
in a user visible way due to a heat engine reboot/restart/redeploy.
* Users should be able to deploy stacks that scale to the size of
the backend storage engine - e.g. we should be able to do a million
resources in a single heat stack (somewhat arbitrary number as a
target). This does not mean a 1 million resource single template,
but a single stack that has 1 million resources in it, perhaps by
way of resource groups and/or nested stacks.
* Users need to be able to tell heat their desired template+parameters
at any time, not just when heat believes the stack is 'READY'.
* Autoscaling is a special case of 'user' here in that it tunes
the sizes of groups but otherwise is identical to a user.
* Admins reacting to problematic situations may well need to make
'overlapping' changes in rapid fire.
* Users deploying stacks in excess of 10K instances (and thus
perhaps 50K resources) should expect Heat to deploy and update said
stacks quickly and gracefully, given appropriate cloud capacity.
* Existing stacks should continue to function. "We don't break
user-space".
* During stack create, the creation process is stuck waiting for a signal
that will never come due to out of band user actions. An update is
issued to remove the signal wait.
* During stack create, software initialization is failing because of
inadequate amounts of space allocated in volumes. Update is issued to
allocate larger volumes.
* During stack delete, the deletion process is waiting indefinitely to
delete an undeletable resource. Update is issued to change the deletion
policy and not try to remove the physical resource.
Proposed change
===============
This specification is primarily meant to drive an overall design. Most
of the work will be done under a set of sub-blueprints:
* Move from using in-process-polling to observe resource state, to an
observe-and-notify approach. This will be the spec ``convergence-observer``.
* Move from a call-stack implementation to a continual-convergence
implementation, triggered by change notification. This will be the spec
``convergence-engine``.
* Run each individual convergence step with support from the taskflow
library via a distributed set of workers.
Prior to, and supporting, that work will be database schema changes.
The primary changes are to separate desired and observed state, and to
support the revised processing technique. To separate desired and
observed state we will: (1) clone the table named resource, making a
table named resource_observed (the table named resource_data seems
more like part of the implementation of certain kinds of resources and
so does not need to be cloned), and (2) introduce a table named
resource_properties_observed. For the resource_observed table, the
columns named status, status_reason, action, and rsrc_metadata will be
removed. The raw template will be part of the desired state. A given
resource's properties, in the desired state, are computed from the
template and effective environment (which includes the stack
parameters). In the observed state a resource's properties are held
in the resource_properties_observed table; it will have the following
fields.
1. id VARCHAR(36)
2. stack_id VARCHAR(36)
3. resource_name VARCHAR(255)
4. prop_name VARCHAR
5. prop_value VARCHAR
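
For illustration, two rows of the proposed resource_properties_observed table
might look as follows when rendered as YAML (all values are hypothetical):

.. code-block:: yaml

   - id: 11111111-2222-3333-4444-555555555555
     stack_id: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
     resource_name: my_server
     prop_name: flavor
     prop_value: m1.small
   - id: 66666666-7777-8888-9999-000000000000
     stack_id: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
     resource_name: my_server
     prop_name: image
     prop_value: fedora-20
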
Upon upgrade of the schema and engines, existing stacks will automatically
start using the convergence model.
No required changes will be made to existing API's, including the resource
plugin API.
Convergence Engine
------------------
A new set of internal RPC calls will be created to allow per-resource
convergence operations to be triggered by the observers. A new set of
public API calls will also be needed to trigger convergence on a stack
or resource manually.
There was a plan previously to only use the existing stack-update to
enable a manual convergence. This would result in a somewhat awkward
user experience that would require more of the user than is necessary.
Observer Engine
---------------
A new set of internal RPC calls will be created to trigger immediate
observation of reality by the observer. A new set of public API calls will
also be needed to trigger observation of a stack or resource manually.
Note that this will build on top of the calls introduced in
the `stack-check` blueprint by allowing a resource-check as well.
Data Model
----------
Heat will need a new concept of a `desired state` and an `observed state`
for each resource. Storage will be expected to serialize concurrent
modification of an individual resource's states, so that on the
per-resource level we can expect consistency.
Scheduling
----------
Heat stacks contain dependency graphs that users expect to be respected
during operations. Mutation of the goal state must be scheduled in the
same manner as it is now, but will be moved from a central task scheduler
to a distributed task scheduler.
On creation of a stack, for instance, the entire stack will be parsed
by the current engine. Any items in the graph that have no incomplete
parents will produce a direct message to the convergence engine queue,
which is handled by all convergence engines. The message would instruct
the worker that this resource should exist, and the worker will make
the necessary state changes to record that. Once it is recorded, a job
to converge the resource will be created.
The converge job will create an observation job. Once the reality is
observed to match desired state, the graph will be checked for children
which now have their parents all satisfied, and if any are found, the
convergence process is started for them. If we find that there are no
more children, the stack is marked as COMPLETE.
This may produce a situation where a user with larger stacks is given
an unfair amount of resources compared to a user with smaller stacks,
because the larger stack will fill up the queue before the smaller one,
leading to long queue lengths. For now, quotas and general resource
limits will have to be sufficient to prevent this situation.
Updates will work in the exact same manner. Removed items will still be
enumerated by searching for all existing resources in the new graph,
and appropriately recording the desired state as "should not exist"
for anything not in the new graph.
Rollbacks will be enabled in the same manner as they currently are,
with the old stack definition being kept around and re-applied as the
rollback operation. If concurrent updates are done, the rollback is
always to the previous stack definition that reached a COMPLETE state. [#]_
Stack deletes happen in reverse. The stack would be recorded as "should
not exist", which will inform the convergence jobs that the scheduling
direction is in reverse. The childless nodes of the graph would be
recorded as "should not exist" and then parents with no more children
in an active state recorded as "should not exist".
This will effectively render the convergence engine a garbage collector,
as physical resources will be left unreferenced in the graph, in a state
where they can be deleted at any time. Given the potential for cost
to the user, these resources must remain visible to the user, and garbage
collection must be given a high priority.
Note that the state of "should not exist" does not change the meaning
of deletion policies expressed in the template. That will still result
in a rename and basic de-referencing if there is a policy preventing
actual deletion.
.. [#] The rollback design needs further discussion as it isn't clear
that this would be sufficient to not violate user expectations.
We can copy the current implementation and keep a copy of the last
known "COMPLETE" template, and roll back by asserting that if
a user has asked for rollback. Otherwise the fact that we allow
relatively fast updates with convergence should allow users to
get a better rollback experience by using version control on
templates and environment files.
Alternatives
------------
* Improve current model with better error handling and retry support.
* Does not solve locking/concurrency problems
* Does not solve large stack scalability problems
Implementation
==============
Assignee(s)
-----------
This work will be broken up significantly and spread between many developers.
Milestones
----------
The bulk of this work should be completed in the "K" cycle, with the
sub-blueprints landing significant amounts of change throughout Juno.
In particular, the DB schema changes to separate desired and observed
state will come first. Once that is done we can make a major
improvement without much change in the code structure; simply by
updating the observed state as soon as each change is made we fix the
worst problem (that a partially successful stack update does not
accurately record the resulting state). Later comes the major code
re-org.
Work Items
----------
TBD
Dependencies
============
* Blueprints
* convergence-observer
* convergence-engine
* Taskflow
* Any specific needs for taskflow should be added here.

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=======================
Decouple Nested Stacks
=======================
https://blueprints.launchpad.net/heat/+spec/decouple-nested
As a step toward the more granular architecture described in the
convergence-engine spec, it has been proposed that we could more effectively
decouple nested stacks within the existing heat architecture.
Problem description
===================
Creating a tree of many nested stacks results in the entire stack tree getting
processed, for every stack operation, by one heat-engine process, with access
to every nested stack serialized by the same global lock (the one obtained to
access the top-level stack).
While the arguably more complex and invasive steps described by
convergence-engine are worked out (which may take some time, and will probably
be made simpler by the decoupling described below), it's proposed that we look
at decoupling nested stacks more effectively from their parent, such that we
can make use of the existing RPC round-robin scheduling to enable nested stacks
operations (e.g create/update/delete) to be handled in a more scalable way by
spreading the work for each stack over multiple engine processes or workers.
Proposed change
===============
* Rework the engine RPC interfaces to enable some additional arguments to be
passed to create/update operations, such that the existing coupling (for
example passing user_creds ID's) between parent and nested stacks can be
broken.
* Refactor the StackResource base-class to perform operations via RPC and not
manipulate parser.Stack objects directly when performing lifecycle operations.
Note that the StackResource rework will focus on performing actions which
change the state of the stack via RPC calls (e.g those which are performed
asynchronously via an IN_PROGRESS state), leaving the existing code for stack
introspection unchanged. This should allow a less risky transition to the
new interfaces with minimal rework of the StackResource subclasses.
One area which may be left for a future enhancement is the polling for the
COMPLETE state after triggering the action via RPC, e.g. when we trigger a
nested stack create via an RPC call, we will poll the DB directly waiting for
the CREATE COMPLETE state in check_create_complete. In future, it would be
better to wait for a notification to avoid the overhead of polling the DB.
Alternatives
------------
Wait for the full convergence-engine vision to come together I guess, but it
seems apparent that we need a more immediate mitigation plan for the subset of
users who care primarily about these kind of workloads.
Implementation
==============
Assignee(s)
-----------
Steven Hardy (shardy)
Milestones
----------
Target Milestone for completion:
Juno-3
Work Items
----------
* Rework RPC interfaces
* Convert StackResource create operations to create the stack via RPC
* Convert StackResource delete operations to delete the stack via RPC
* Convert StackResource suspend operations to suspend the stack via RPC
* Convert StackResource resume operations to resume the stack via RPC
* Convert StackResource check operations to check the stack via RPC
* Convert StackResource update operations to update the stack via RPC
Dependencies
============
None, but this could be considered a precursor to the convergence-engine work.

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=============================
Hidden Parameters Encryption
=============================
https://blueprints.launchpad.net/heat/+spec/encrypt-hidden-parameters
Encrypt template parameters that were marked as hidden before storing them in
database.
Problem description
===================
Heat template parameters can be marked as hidden, but currently these values
are stored in the database in plain text.
A template author currently marks a parameter as hidden so that it will not be
logged or displayed to the user in user interfaces.
The problem itself is that these are probably sensitive pieces of data and thus
it would provide some safety against a database attacker if they were encrypted
in the database.
Leaving sensitive customer data at rest unencrypted provides many more options
for that data to get into the wrong hands or be taken outside the company. It
is quick and easy to do a MySQL dump if the database host is compromised,
which has nothing to do with Heat having a vulnerability. Encrypting the data
helps in case a leak of arbitrary DB data does surface in Heat.
Proposed change
===============
* Provide a configuration option to enable/disable hidden parameter encryption.
(Default is to disable parameter encryption)
* Encrypt parameters that were marked as hidden before storing Stack data in
the database.
* Decrypt parameters as soon as the stack data is read from database and
use decrypted parameters to create Stack object.
* This implementation uses the same key and encryption mechanism that is
currently being used for encrypting/decrypting user credentials, trust tokens,
and resource data. (The encryption key is defined in the Heat configuration
file.)
Alternatives
------------
* Instead of encrypting hidden parameters, we could encrypt all the parameters
as a dictionary.
* Encrypt full disk where entire MySQL database is being stored or encrypt
files where specific tables are stored.
* Another alternative is to use CryptDB:
www.cs.berkeley.edu/~istoica/classes/cs294/11/papers/sosp2011-final53.pdf
* Integrate Barbican with Heat and use Barbican to store secrets.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
vijendar-komalla
Milestones
----------
Target Milestone for completion:
Juno-2
Work Items
----------
* Modify Stack 'store' method to encrypt parameters before storing them in the database
* Modify Stack 'load' method to decrypt parameters
* Create a migration script to encrypt parameters that are already stored
* Create a tool/script to change the encryption key and re-encrypt all the
parameters
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=============================
Events pagination
=============================
https://blueprints.launchpad.net/heat/+spec/events-pagination
This adds support to the events index call for limit, marker,
sort_keys and sort_dir query parameters, allowing users of the API to
retrieve a subset of events.
Problem description
===================
It is now highly probable that an event-list call could
end up attempting to return hundreds of events (especially for
AutoScalingGroup resources). At a certain point Heat
starts responding with a 500 error because the response is too large.
Proposed change
===============
We should support event pagination with limit and marker query parameters.
We should also support event sorting with sort_keys and sort_dir query
parameters. This will make event-list more convenient to use.
* limit: the number of events to list
* marker: the ID of the last item in the previous page
* sort_keys: an array of fields used to sort the list, 'event_time'
  or 'resource_status'; the default is 'event_time'
* sort_dir: the direction of the sort, 'asc' or 'desc'; the default is 'desc'
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
<huangtianhua>
Milestones
----------
Target Milestone for completion:
Juno-2
Work Items
----------
* Add support for pagination and sorting events
* Add unit tests for the pagination and sorting of events
* Add support for pagination and sorting events in python-heatclient
* Write tempest api orchestration and scenario test to exercise events
pagination
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=============================
Explode Nested Resources
=============================
https://blueprints.launchpad.net/heat/+spec/explode-nested-resources
For many UI use-cases, it is generally resource intensive to list all
resources associated with a given stack if that stack includes stack-based
resources. It is therefore proposed that `resource-list` should return all
resources associated with a given stack if requested.
Problem description
===================
Currently, `resource-list` only returns top-level resources of a given stack
but does not include resources that are inside of any nested stacks. This
makes several use cases difficult or sub-optimal because of the need to make
several API calls on resource reference links.
* When deleting a stack, a UI should be able to present the user with a list
of *all* resources associated with a given stack to avoid confusion about
what and why certain resources were deleted due to a stack delete.
* A user of the API (either via CLI, curl, or other method) wants to be able
to quickly and easily list and follow the status of every resource associated
with a stack, regardless of a resource's position in the stack hierarchy.
* OpenStack dashboard may show an incorrect, confusing topology of resources
from a stack because it knows nothing about a nested stack (e.g. a group of
servers).
Proposed change
===============
The proposed implementation would add an optional query parameter to the
`resource-list` API method:
nested_depth
Recursion depth to limit the returned resources. This parameter
indicates that the user wishes to return nested resources as well as those
from the parent stack. Setting this parameter to a number results in the
system limiting the recursion depth. A value of `0` has no effect. A value
of `MAX` results in all resources being returned up to
`max_nested_stack_depth`. The system will never recurse farther than
`max_nested_stack_depth`, regardless of the value passed in the parameter.
The Heat service would see this parameter and recurse through all of the
nested stacks to the specified depth and flatten the resource list data
structure. For resources that exist in nested stacks, the containing nested
stack id and parent resource name would also be included.
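The recursion and flattening described above can be sketched as follows. This is a simplified, hypothetical model (plain dicts with a ``nested`` key standing in for nested stacks), not Heat's actual engine or DB code:

```python
# Simplified sketch of the proposed flattening.  A "stack" is modelled as a
# dict with an "id" and a "resources" list; a resource that abstracts a
# nested stack carries a "nested" key pointing at the child stack.  These
# names are illustrative only.

def list_resources(stack, nested_depth=0):
    """Return the stack's resources, recursing nested_depth levels deep."""
    result = []
    for res in stack["resources"]:
        # Copy the resource entry, dropping the internal "nested" link.
        entry = {k: v for k, v in res.items() if k != "nested"}
        result.append(entry)
        nested = res.get("nested")
        if nested is not None and nested_depth > 0:
            for child in list_resources(nested, nested_depth - 1):
                # Annotate children with their origin.
                child.setdefault("parent", res["resource_name"])
                child.setdefault("nested_stack_id", nested["id"])
                result.append(child)
    return result
```

With ``nested_depth=0`` only the top-level resources are returned, matching the current behaviour.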
The resulting response data would look like::
{"resources":
[
{
"resource_name": "db",
"links": [...],
"logical_resource_id": "db",
"resource_status_reason": "state changed",
"updated_time": "2014-04-15T18:23:35Z",
"required_by": ["web_nodes"],
"resource_status": "CREATE_COMPLETE",
"physical_resource_id": "4974985c-da78-444b-aeb3-9a80baccdd1a",
"resource_type": "OS::Trove::Instance"
},
{
"resource_name": "lb",
"links": [...],
"logical_resource_id": "lb",
"resource_status_reason": "state changed",
"updated_time": "2014-04-15T18:30:52Z",
"required_by": [],
"resource_status": "CREATE_COMPLETE",
"physical_resource_id": "229145",
"resource_type": "Rackspace::Cloud::LoadBalancer"
},
{
"resource_name": "web_nodes",
"links": [...],
"logical_resource_id": "web_nodes",
"resource_status_reason": "state changed",
"updated_time": "2014-04-15T18:25:10Z",
"required_by": ["lb"],
"resource_status": "CREATE_COMPLETE",
"physical_resource_id": "c3a46e6f-f999-4f9b-a797-3043031d381a",
"resource_type": "OS::Heat::ResourceGroup"
},
{
"resource_name": "web_node1",
"links": [...],
"logical_resource_id": "web_node1",
"resource_status_reason": "state changed",
"updated_time": "2014-04-15T18:25:10Z",
"required_by": ["lb"],
"resource_status": "CREATE_COMPLETE",
"physical_resource_id": "c3a46e6f-f999-4f9b-a797-3043031d3811",
"resource_type": "Rackspace::Cloud::Server",
"parent": "web_nodes",
"nested_stack_id": "1234512345"
},
{
"resource_name": "web_node2",
"links": [...],
"logical_resource_id": "web_node2",
"resource_status_reason": "state changed",
"updated_time": "2014-04-15T18:25:10Z",
"required_by": ["lb"],
"resource_status": "CREATE_COMPLETE",
"physical_resource_id": "c3a46e6f-f999-4f9b-a797-3043031d3822",
"resource_type": "Rackspace::Cloud::Server",
"parent": "web_nodes",
"nested_stack_id": "1234512345"
}
]
}
These changes will primarily reside in:
* heat.engine.service
* heat.db
* heat.api
* python-heatclient
Alternatives
------------
Currently, each resource that abstracts a nested stack will include a link to
the nested stack when viewed with a `resource-show`. This allows a user to
implement this functionality client-side by:
#. listing all of the resources in the stack
#. retrieving each resource individually
#. if the current resource has a link to a nested stack, recurse the resources
of that stack and add them to the list/tree
While this offers greater flexibility in how nested resources are listed for
the user's particular use case, it's very inefficient for the stated use cases
as well as very noisy from a network perspective. This specification does not
intend to remove this option, only to provide an alternative to more
efficiently satisfy several common use cases while maintaining the existing
link traversal method for use cases requiring more control over the display
of the resource hierarchy.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
randall-burt
Milestones
----------
Target Milestone for completion:
Juno-2
Work Items
----------
* Update DB API and implementation to accept the `nested_depth` parameter
for resource list and use that in logic to append resources from any
nested stacks.
* Update the engine to accept and then pass the `nested_depth` parameter to
the DB API.
* Update the API to accept and pass the `nested_depth` parameter to the
engine, avoiding an RPC API version bump if possible.
* Update python-heatclient to expose the new flag and properly format the
output
* Add the parameters to the Heat V1 WADL
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=======================================================
Implement equivalent to AWS "Updates are not supported"
=======================================================
As Heat tries to maintain compatibility of its AWS resources,
a user can expect that a template using Heat's AWS compatible resources
will work the same both on Heat and on AWS.
Currently though we are missing a specific behavior of some AWS resources
on stack update - a property of a resource might not support any updates,
including UpdateReplace (that is currently our default update behavior).
https://blueprints.launchpad.net/heat/+spec/implement-aws-updates-not-supported
Problem description
===================
AWS CloudFormation
------------------
AWS CloudFormation has a distinction between "Update requires: Replacement"
and "Update requires: Updates are not supported" for a property of a resource.
In the latter case, an attempt to update this property during a stack update
will result in an error, putting the resource in UPDATE_FAILED state.
Example
~~~~~~~
The ``AWS::EC2::Volume`` resource has all properties marked as
"Update requires: Updates are not supported" in AWS docs [1]_.
This is the relevant part of AWS event when trying to increase the volume size
from 10 to 11 using ``update-stack`` command::
{
"ResourceStatus": "UPDATE_FAILED",
"ResourceType": "AWS::EC2::Volume",
"ResourceStatusReason":
"Update to resource type AWS::EC2::Volume is not supported.",
"ResourceProperties":
"{\"AvailabilityZone\":\"us-west-2a\",\"Size\":\"11\"}"
}
Heat
----
In Heat we currently have default update behavior as ``UpdateReplace``.
Any updateable properties must be explicitly declared as such
and handled in ``handle_update`` method of a resource.
We have no clear way of completely denying any update to a resource
(including replacing it with a new resource).
Thus if one e.g. follows the same scenario as in Example_ above,
the stack update succeeds having replaced the volume.
Of the currently implemented AWS compatible resources, the following are affected:
* ``AWS::EC2::Volume`` - Updates are not supported [1]_
* ``AWS::EC2::VolumeAttachment`` - Updates are not supported [2]_
* ``AWS::CloudFormation::WaitCondition`` - Updates are not supported [3]_
* ``AWS::CloudFormation::Stack`` - Updates are not supported for
``TimeoutInMinutes`` property [4]_
.. [1] http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-ebs-volume.html
.. [2] http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-ebs-volumeattachment.html
.. [3] http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-waitcondition.html
.. [4] http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-stack.html
Proposed change
===============
- add a property schema attribute ``update_replace_allowed`` with default value
``True``
- modify the ``Resource.update_template_diff_properties`` method to raise a
``NotSupported`` error (a check similar to the existing one for
``update_allowed``)
The properties schema of a resource then can specify
``update_replace_allowed=False`` which would lead to resource update
failure on any attempt to update such property.
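A rough sketch of the intended check, using plain dicts in place of Heat's real ``Properties`` schema objects (everything here other than the ``update_replace_allowed`` flag and the ``NotSupported`` name from the spec is illustrative):

```python
# Sketch of the proposed check (illustrative only -- Heat's real schema and
# Resource classes differ).  Each property schema entry may carry the new
# 'update_replace_allowed' flag, defaulting to True.

class NotSupported(Exception):
    """Raised when a changed property forbids both update and replacement."""

def check_update(schema, old_props, new_props):
    """Raise NotSupported if a changed property has
    update_replace_allowed=False; otherwise return the changed keys."""
    changed = {k for k in schema
               if old_props.get(k) != new_props.get(k)}
    for key in changed:
        if not schema[key].get("update_replace_allowed", True):
            raise NotSupported(
                "Update to property %s is not supported" % key)
    return changed
```

A property without the flag defaults to ``True``, preserving today's UpdateReplace behaviour.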
Alternatives
------------
As an alternative we might mark all the properties of the AWS resource
in question as ``update_allowed`` and raise the same error in resource's
``handle_update``. This though would make the ``update_allowed`` effectively
a no-op, confusing users and documentation.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Pavlo Shchelokovskyy (pshchelo)
Milestones
----------
Target Milestone for completion:
Juno-3
Work Items
----------
* add ``update_replace_allowed`` property attribute
* modify the default resource update logic
* amend docs generation to display the status of this attribute for a property
(probably only if it is ``False``)
* mark corresponding properties of AWS compatible resources as
``update_replace_allowed = False``
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
======================================================
Implement BlockDeviceMappings for AWS::EC2::Instance
======================================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/implement-ec2instance-bdm
We should support the BlockDeviceMappings for AWS::EC2::Instance resource
to be compatible with AWSCloudFormation.
Problem description
===================
Currently in Heat, the AWS::EC2::Instance resource only has a 'Volumes'
property to indicate the volumes to be attached, but there are two ways of
defining volumes in AWS CloudFormation, 'Volumes' and 'BlockDeviceMappings', see:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html
1. 'Volumes' supports 'volume_id': the user can specify an existing volume to
be attached to the instance. This is already implemented in Heat, but it is
not well suited to batch creation because one volume can't be attached
to many instances.
2. 'BlockDeviceMappings' supports 'snapshot_id': the user can specify a
snapshot, from which a volume will be created and attached to the instance.
This works well for batch creation.
Nova supports creating a server with a block device mapping:
http://docs.openstack.org/api/openstack-compute/2/content/ext-os-block-device-mapping-v2-boot.html
So, we should support the 'BlockDeviceMappings' for AWS::EC2::Instance
resource.
Proposed change
===============
1. Add a 'BlockDeviceMappings' property to the AWS::EC2::Instance resource,
in particular allowing the user to specify 'snapshot_id'.
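For illustration, a template fragment using the proposed property might look like this (the property layout mirrors the AWS documentation linked above; the image, flavor and snapshot values are hypothetical):

```json
{
  "MyInstance": {
    "Type": "AWS::EC2::Instance",
    "Properties": {
      "ImageId": "F17-x86_64-cfntools",
      "InstanceType": "m1.small",
      "BlockDeviceMappings": [
        {
          "DeviceName": "vdb",
          "Ebs": {
            "SnapshotId": "0f13857f-1fe4-4d30-b34c-9e22f15b22b4",
            "DeleteOnTermination": "true"
          }
        }
      ]
    }
  }
}
```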
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
<huangtianhua>
Milestones
----------
Target Milestone for completion:
Juno-2
Work Items
----------
1. Support the BlockDeviceMappings for AWS::EC2::Instance resource
2. Add UT/Tempest for the change
3. Add a template for AWS::EC2::Instance with BlockDeviceMappings
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=========================================================================
Implement BlockDeviceMappings for AWS::AutoScaling::LaunchConfiguration
=========================================================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/implement-launchconfiguration-bdm
We should support BlockDeviceMappings for the
AWS::AutoScaling::LaunchConfiguration resource to be compatible with
AWS CloudFormation. This also lets the user specify volumes to attach
to instances during AutoScalingGroup/InstanceGroup creation.
Problem description
===================
Currently in Heat, the AWS::AutoScaling::LaunchConfiguration resource doesn't
implement the 'BlockDeviceMappings' property to indicate the volumes to be
attached. There are two problems:
1. First, it's incompatible with AWS CloudFormation. In AWS CloudFormation,
'BlockDeviceMappings' supports 'SnapshotId': the user can specify a snapshot,
from which a volume will be created and attached to the instance.
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html
2. Second, the user can't specify volumes to be attached to the instances in
an AutoScalingGroup/InstanceGroup during creation.
So, we should support the 'BlockDeviceMappings' for
AWS::AutoScaling::LaunchConfiguration.
Proposed change
===============
1. Implement the 'BlockDeviceMappings' property for the
AWS::AutoScaling::LaunchConfiguration resource, in particular allowing the
user to specify 'SnapshotId'.
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
<huangtianhua>
Milestones
----------
Target Milestone for completion:
Juno-2
Work Items
----------
1. Support the BlockDeviceMappings for AWS::AutoScaling::LaunchConfiguration
resource
2. Add UT/Tempest for the change
3. Add a template for AWS::AutoScaling::LaunchConfiguration with
BlockDeviceMappings
Dependencies
============
https://blueprints.launchpad.net/heat/+spec/implement-ec2instance-bdm

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==================================
Add log translation hints for Heat
==================================
https://blueprints.launchpad.net/Heat/+spec/log-translation-hints
To update Heat log messages to take advantage of oslo's new feature of
supporting translating log messages using different translation domains.
Problem description
===================
Current oslo libraries support translating log messages using different
translation domains, and oslo would like to see hints in all of our code
by the end of Juno, so Heat should roll the changes out over the release.
Proposed change
===============
Since too many files need to change at once, this blueprint is divided into
dozens of patches according to the Heat directories which need this change.
For each directory's files, we change all the log messages as follows.
1. Change "LOG.error(_(" to "LOG.error(_LE(".
2. Change "LOG.warn(_(" to "LOG.warn(_LW(".
3. Change "LOG.info(_(" to "LOG.info(_LI(".
4. Change "LOG.critical(_(" to "LOG.critical(_LC(".
Note that this spec and associated blueprint are not to address the problem of
removing translation of debug messages.
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
liusheng<liusheng@huawei.com>
Milestones
----------
Target Milestone for completion:
Juno-3
Work Items
----------
For each directory's files, we change all the log messages as follows.
1. Change "LOG.error(_(" to "LOG.error(_LE(".
2. Change "LOG.warn(_(" to "LOG.warn(_LW(".
3. Change "LOG.info(_(" to "LOG.info(_LI(".
4. Change "LOG.critical(_(" to "LOG.critical(_LC(".
We handle these changes in the following order::
├── contrib #TODO1
├── heat
│   ├── api #TODO2
│   ├── cloudinit #TODO3
│   ├── cmd
│   ├── common #TODO4
│   ├── db #TODO5
│   ├── doc
│   ├── engine #TODO6
│   ├── locale
│   ├── openstack #TODO7
│   ├── rpc #TODO8
│   ├── scaling #TODO9
│   ├── tests #TODO10
Add a HACKING check rule to ensure that log messages use the correct
translation domain, using regular expressions to check whether log calls
use the corresponding _L* function.
::
log_translation_domain_info = re.compile(
r"(.)*LOG\.info\(\s*_LI\(('|\")")
log_translation_domain_warning = re.compile(
r"(.)*LOG\.(warn|warning)\(\s*_LW\(('|\")")
log_translation_domain_error = re.compile(
r"(.)*LOG\.error\(\s*_LE\(('|\")")
log_translation_domain_critical = re.compile(
r"(.)*LOG\.critical\(\s*_LC\(('|\")")
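These checks can be exercised directly; the sample log lines below are illustrative:

```python
import re

# Two of the proposed HACKING regexes: a log call at a given level must wrap
# its message in the matching _L* translation function.
log_translation_domain_info = re.compile(
    r"(.)*LOG\.info\(\s*_LI\(('|\")")
log_translation_domain_error = re.compile(
    r"(.)*LOG\.error\(\s*_LE\(('|\")")

# A line using the correct domain marker matches; a line still using the
# plain _() translation function does not, so the check can flag it.
good_info = 'LOG.info(_LI("Stack %s created"), name)'
bad_info = 'LOG.info(_("Stack %s created"), name)'
good_error = 'LOG.error(_LE("Stack %s failed"), name)'
bad_error = 'LOG.error(_("Stack %s failed"), name)'
```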
Dependencies
============
[1]https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain-rollout
[2]https://wiki.openstack.org/wiki/LoggingStandards

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==============================================
Implement More Custom Constraints for Neutron
==============================================
https://blueprints.launchpad.net/heat/+spec/neutron-custom-constraint
Currently only the network constraint is supported for Neutron; we need more
constraints such as subnet, port and router.
Problem description
===================
Many resources have properties related to networking, but the Neutron custom
constraints currently support only the network constraint, not
subnet/port/router constraints.
Proposed change
===============
Add three custom constraints for Neutron.
1. 'neutron.subnet' for subnet constraint.
2. 'neutron.port' for port constraint.
3. 'neutron.router' for router constraint.
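Each new constraint would follow the same shape as the existing network constraint: validate a property value by looking the named entity up through the Neutron client. A deliberately generic sketch (``find_resource`` and ``FakeNeutron`` are illustrative stand-ins, not Heat's or neutronclient's actual API):

```python
class SubnetConstraint:
    """Illustrative 'neutron.subnet' custom constraint: a property value is
    valid only if it names an existing subnet.  The `client` argument stands
    in for a Neutron client wrapper; find_resource() is an assumed helper."""

    resource_type = "subnet"

    def validate(self, value, client):
        try:
            client.find_resource(self.resource_type, value)
            return True
        except KeyError:  # stand-in for the client's NotFound exception
            return False


class FakeNeutron:
    """Tiny in-memory stand-in for a Neutron client, for demonstration."""

    def __init__(self, subnets):
        self._subnets = subnets

    def find_resource(self, resource_type, name_or_id):
        # Raises KeyError when the named resource does not exist.
        return self._subnets[name_or_id]
```

The port and router constraints would differ only in ``resource_type`` and the lookup they perform.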
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Ethan Lynn
Milestones
----------
Target Milestone for completion:
Juno-2
Work Items
----------
1. Implement subnet constraint for neutron
2. Implement port constraint for neutron
3. Implement router constraint for neutron
Dependencies
============
https://blueprints.launchpad.net/heat/+spec/glance-parameter-constraint

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
=======================================
Reorg AutoScalingGroup Implementation
=======================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/reorg-asg-code
This spec is about reorganizing the AutoScalingGroup implementation, which
includes the following resource types:
- AWS::AutoScaling::LaunchConfiguration
- AWS::AutoScaling::AutoScalingGroup
- AWS::AutoScaling::ScalingPolicy
- OS::Heat::InstanceGroup
- OS::Heat::AutoScalingGroup
- OS::Heat::ScalingPolicy
- OS::Heat::ResourceGroup
The goal is to 1) reorganize the class hierarchy; 2) split and relocate sources
into subdirectories to better reflect resources' name space; 3) make it easier
for future enhancements to each resource type.
Problem description
===================
The current class hierarchy of resource groups and scaling groups is something
like the diagram shown below::
CooldownMixin
|
| StackResource
| |
| +--> ResourceGroup [OS::Heat::ResourceGroup]
| |
| +--> InstanceGroup [OS::Heat::InstanceGroup]
| |
+---------+--> AutoScalingGroup [AWS::AutoScaling::AutoScalingGroup]
| |
| +--> AutoScalingResourceGroup [OS::Heat:AutoScalingGroup]
|
| SignalResponder
| |
+---+--> ScalingPolicy [AWS::AutoScaling::ScalingPolicy]
|
+--> AutoScalingPolicy [OS::Heat::ScalingPolicy]
Besides this hierarchy, there are utility functions located in the modules like
heat.scaling.template.
One of the problems of this design is related to namespace as pointed out by
an existing blueprint:
https://blueprints.launchpad.net/heat/+spec/resource-package-reorg
Another problem is that having almost all classes implemented in one file
makes the implementation difficult to digest or improve. For example, it
may make better sense to have InstanceGroup be a subclass of ResourceGroup.
For another example, it doesn't make much sense to have
AutoScalingResourceGroup be a subclass of InstanceGroup because the subclass
is more open to other resource types as its members.
Proposed change
===============
1. Reorganize Class Hierarchy
The proposed change is to reorganize the class hierarchy to be something like
shown in the diagram below::
CooldownMixin
| StackResource
| |
| ResourceGroup
| [OS::Heat::ResourceGroup]
| |
| +-------------------+---------------+
| | |
+--> AutoScalingGroup InstanceGroup
| [OS::Heat::AutoScalingGroup] [OS::Heat::InstanceGroup]
| |
| |
+---------------------------------------> AWSAutoScalingGroup
| [AWS::AutoScaling:AutoScalingGroup]
|
| SignalResponder
| |
+----------------------->|
|
+-------------------------------+
| |
AWSAutoScalingPolicy AutoScalingPolicy
[AWS::AutoScaling::ScalingPolicy] [OS::Heat::ScalingPolicy]
This change will break the subclass relationships between OpenStack and AWS
implementation.
As for utility/helper classes, e.g. `CooldownMixin`, the first step is to
separate them into independent classes, followed by further refactoring them
into utility functions when appropriate.
2. Relocate Source Files
The AWS version will be relocated into heat/engine/resources/aws subdirectory,
including the LaunchConfiguration implementation. The OpenStack version will
be relocated into heat/engine/resources/openstack subdirectory.
The shared parent class ResourceGroup will remain in heat/engine/resources,
while the CooldownMixin class will be relocated into heat/scaling subdirectory.
The eventual layout of the modules and classes would look like this::
heat/engine/resources/
|
+-- resource_group.py [ResourceGroup]
+-- instance_group.py [InstanceGroup]
|
+-- aws/
| |
| +--- autoscaling_group.py [AWSAutoScalingGroup]
| +--- scaling_policy.py [AWSAutoScalingPolicy]
| +--- launch_config.py [LaunchConfiguration]
|
+-- openstack/
|
+-- autoscaling_group.py [AutoScalingGroup]
+-- scaling_policy.py [AutoScalingPolicy]
heat/scaling/
|
+-- cooldown.py [CooldownMixin]
+-- (possibly other shared utility classes)
This reshuffling is optional. We will determine whether reshuffling is
necessary indeed after the cleanup work is done.
Alternatives
------------
Since this is a pure implementation level change, one rule of thumb is that "we
don't break userland".
We can have AWS AutoScalingPolicy extend Heat AutoScalingPolicy. However that
may mean that any future changes to Heat implementation must be very careful,
in case those changes may break the conformance of the AWS version to its
Amazon specification.
The same applies to the two versions of AutoScalingGroup. Hopefully, we may
extract common code into ResourceGroup level to minimize code duplication.
However, having a subclass relationship between these two classes is not a
good design in the long term. The goal of the AWS version is to closely
follow the Amazon development while the goal of the Heat version is more
about user needs in the context of OpenStack. So the current thought is to
split the implementation although it may imply some code duplication.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Qiming
There could be other contributors interested in helping out as well.
Milestones
----------
Target Milestone for completion:
Kilo-1
Work Items
----------
- Extract common code to parent classes
- Split AWS version and OS version of resources
- Modify test cases
Dependencies
============
No new dependencies to other libraries will be introduced.
This work may disturb several on-going work related to AutoScalingGroups.
The following work will have to be rebased on this change.
#. https://review.openstack.org/110379 Scaling group scale-down plugpoint
#. https://review.openstack.org/105644 LaunchConfiguration bdm
#. https://review.openstack.org/105907 Balancing ScalingGroup across AZs

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================
Stack Breakpoint
================
https://blueprints.launchpad.net/heat/+spec/stack-breakpoint
An orchestration template is a powerful automation tool when it works; however,
when it fails, troubleshooting can be quite difficult. During development,
debugging a failed template is simply part of the process, but in production,
a previously working template can also fail for many reasons. Providing support
for troubleshooting templates will not only increase productivity but will also
help the adoption of Heat templates by allowing users to "look under the hood"
and have a better handle on the automation.
Typically, the user would start by checking the logs to get some bearing on
the error. If possible, the user may try to enhance the logs by adding more
log messages in the script. This initial approach should resolve many errors,
but difficult errors may require more active debugging. The user would need to
stop at or before the point of template failure, inspect variables, check the
environment, run command or script manually, etc. Since the template is
declarative, the user would need to be able to recreate the error consistently.
Support for troubleshooting is broad and will require many blueprints to
implement the different features to control the template flow, recreate the
error, and inspect the elements. Related blueprints include
troubleshooting-low-level-control_, resolve-failed-stack-attributes_,
user-visible-logs_, user-friendly-template-errors_. This blueprint covers the
particular scenario of how to better control the stack deployment while
troubleshooting.
.. _troubleshooting-low-level-control: https://blueprints.launchpad.net/heat/+spec/troubleshooting-low-level-control
.. _resolve-failed-stack-attributes: https://blueprints.launchpad.net/heat/+spec/resolve-failed-stack-attributes
.. _user-visible-logs: https://blueprints.launchpad.net/heat/+spec/user-visible-logs
.. _user-friendly-template-errors: https://blueprints.launchpad.net/heat/+spec/user-friendly-template-errors
Problem description
===================
With a failing stack, currently we can stop on the point of failure by
disabling rollback: the stack will stop when a resource fails, leaving in
place the resources that have been created successfully. There may be some
false failures because some resources may be aborted, but they can be easily
identified by displaying the state of the resource. This technique works well
for troubleshooting stack-create; stack-update can be handled similarly once
the blueprint update-failure-recovery is implemented.
In many cases however, the point of failure may be too late or too hard to
debug because the original cause of the failure may not be obvious or the
environment may have been changed. If we can pause the stack at a point before
the failure, then we are in a better position to troubleshoot. For instance,
we can check whether the state of the environment and the stack is what we
expect, we can manually run the next step to see how it fails, etc.
While developing a new template or resource type, it is also useful to bring up
a stack to a point before the new code is to be executed. Then the developer
can manually execute and debug the new code.
Proposed change
===============
The usage would be as follows:
- Run stack-create or stack-update with one or more resource name specified
as breakpoint, for example:
heat stack-create my_stack --template-file my_template.yaml
--breakpoint failing_resource_name
heat stack-update my_stack --template-file my_template.yaml
--breakpoint failing_resource_name
- The breakpoint can also be coded in the environment file pointing to
a particular resource, for example:
breakpoints:
resource: failing_resource_name
- As the engine traverses down the dependency graph, it would stop at the
breakpoint resource and all dependent resources. Other resources with no
dependency will proceed to completion before stopping. Multiple breakpoints
can be set to control parallel paths in the graph.
- Running resource-list or resource-show will show the resource at the
breakpoint as "CREATE.INPROGRESS" or "UPDATE.INPROGRESS" and the resource
is not created or updated yet. Running event-list will show that the
breakpoint has occurred, and event-show will give more details on the
breakpoint.
- The breakpoint can be deleted on the command line by:
heat stack-update my_stack --template-file my_template.yaml
--nobreakpoint failing_resource_name
- In the environment file, the breakpoint can be deleted simply by deleting
the resource name in the breakpoint property. This would take effect the
next time the environment file is specified on stack-update. The user
is probably more likely to use the command line option.
- After debugging, continue the stack by (done manually, but can also be
automated by a high level debugger):
- Stepping: remove current breakpoint, set breakpoint for next resource(s)
in dependency graph, resume stack-create (or stack-update).
- Running to completion: remove current breakpoint, resume stack-create or
stack-update by running stack-update with the same
template and parameters.
For nested stack, the breakpoint would be prefixed with the name of the
nested template.
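The traversal rule above can be sketched as: a resource proceeds only if neither it nor anything it transitively requires is paused at a breakpoint. The ``requires`` mapping below is an illustrative model, not Heat's actual dependency graph class:

```python
# Sketch of which resources run to completion given a set of breakpoints.
# `requires` maps each resource name to the list of resources it depends on.
# The dependency graphs Heat builds are acyclic, so plain recursion is safe.

def runnable_resources(requires, breakpoints):
    """Return the resources that proceed to completion."""
    blocked = {}

    def is_blocked(name):
        if name not in blocked:
            blocked[name] = (name in breakpoints or
                             any(is_blocked(dep) for dep in requires[name]))
        return blocked[name]

    return {name for name in requires if not is_blocked(name)}
```

With a breakpoint on one branch, independent branches still run to completion, matching the behaviour of multiple breakpoints controlling parallel paths in the graph.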
The change will include the heat client, api and environment to add the
breakpoint option.
For the Heat engine to stop at a resource, we will leverage the blueprint
lifecycle-callbacks_. Some code to set up and interface with the callback
will be needed and the details will be determined when this blueprint is
implemented.
.. _lifecycle-callbacks: https://wiki.openstack.org/wiki/Heat/Blueprints/lifecycle-callbacks
Alternatives
------------
The manual approach is simply to edit the template and delete any failing
resources until the remaining resources can be created successfully.
Then stepping each resource can be done by adding it back to the template and
running stack-update. The full stack will need to be deleted and recreated
for each iteration.
This manual technique cannot be incorporated into a high-level tool.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Ton Ngo
Milestones
----------
Target Milestone for completion:
Juno-3 or further
Work Items
----------
- Heat client: add option to specify breakpoint
- Heat API: add option to specify breakpoint
- Environment: add option to specify breakpoint
- Interface with lifecycle-callbacks_
Dependencies
============
https://wiki.openstack.org/wiki/Heat/Blueprints/lifecycle-callbacks
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
============================================
Display more user information in stack lists
============================================
https://blueprints.launchpad.net/heat/+spec/stack-display-fields
Stacks are launched by users but scoped to tenants, so users in the same tenant
currently have no way to know whom a stack belongs to. The same is especially
true of unscoped stack lists. Since humans are much better with names than
they are with numbers, it would be helpful if this list also contained other
information to allow for better identification of stack owners.
Problem description
===================
There is currently no way to know what user created a stack, since only the
tenant ID is displayed with a stack and multiple users can be in the same
tenant.
Also, when listing unscoped stacks (with the flag ``global_tenant=True``), all
stacks are returned, regardless of the tenant who owns them. This list
contains information about the stacks, including some info about the stack
owner (e.g. the Tenant ID is included, but usernames are not).
This is helpful for cloud providers to be able to more easily support their
customers. However, humans are better at dealing with names than with numbers,
so returning just the Tenant ID is not ideal.
In order to make it possible for supporters to easily identify their clients,
it would be great to also include the username of the stack owners in the stack
information.
Proposed change
===============
The proposed implementation would add the extra information when formatting a
stack.
Currently, the username is already saved to the database but not parsed back
into the stack when loaded from the DB. This change would parse it back from
the DB into the stack at all times, but only expose it in the API response
when formatting stacks for a ``global_tenant`` call::
    {
        "stacks": [
            {
                "creation_time": "...",
                "description": "...",
                "id": "...",
                "links": [...],
                "project": "TENANT_ID",
                "stack_owner": "USERNAME",    // Additional info
                "stack_owner_id": "USER_ID",  // ----------------
                "stack_name": "...",
                "stack_status": "...",
                "stack_status_reason": "...",
                "updated_time": "..."
            }
        ]
    }
The necessary changes will primarily reside in:
* heat.api.openstack.v1.views.stacks_view.py
* heat.engine.api.py
* heat.engine.parser.py
* heat.engine.service.py
* heat.rpc.api.py
Alternatives
------------
None, since this is just field additions.
Implementation
==============
Assignee(s)
-----------
Primary assignees:
* rblee88
* andersonvom
Milestones
----------
Target Milestone for completion:
Juno-2
Work Items
----------
* Read username into the Stack back from the DB
* Display username when displaying stacks
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
=============================
Stack lifecycle plugpoint
=============================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/stack-lifecycle-plugpoint
A cloud provider may have a need for custom code to examine stack requests
prior to performing the operations to create or update a stack.
Some providers may also need code to run after operations on
a stack complete. A mechanism is proposed whereby providers may easily add
pre-operation calls from heat to their own code, which is called prior to
performing the stack work, and post-operation calls, which are made after
a stack operation completes or fails.
Problem description
===================
There are at least two primary use cases.
(1) Enabling holistic (whole-pattern) scheduling of the virtual resources
in a template instance (stack) prior to creating or deleting them.
This would usually include making decisions about where to host virtual
resources in the physical infrastructure to satisfy policy requirements.
It would also cover failing a stack create or update if the policies
included with the stack create or update were not satisfiable, or other
cloud provider policies being checked were not satisfiable.
As an example, an application owner requires that VMs and volumes
attached to them are deployed on the same rack. As another example,
a cloud provider may want to enforce consultation with a license server
before deploying an application. As another example, an application owner
may require that their VMs be spread across a given number of
racks.
(2) Enabling checking of policies not related to virtual resource scheduling,
with stack create or update failure if the policies would not be satisfied.
As an example, a cloud provider may want to verify that compute resources
for certain types of applications are deployed with certain security groups.
As another example, a cloud provider may want to be warned when patterns
with > 100 VMs are deployed.
Proposed change
===============
An ordered registry of python classes which implement pre-operation and/or
post-operation methods is required. This would be done through stevedore,
with some addition to force a full (or partial) ordering on the classes.
Pre and post operation methods should not modify the parameter stack(s).
Any modifications would be considered to be a bug.
A possible exception would be to allow status changes
to the stack, to facilitate error handling.
[The no-modifications rule could be enforced, e.g. by passing deep copies to
the plugins but this might incur an unacceptable
performance cost.] Both pre-operation and
post-operation methods can indicate failure, which would be treated like
any other stack failure. On failure of a pre-operation call, when more than
one plugin
is registered, the post-op methods would be called for all the classes already
processed, to indicate to each plugin that any decisions that
it made with respect to the stack should be un-made.
All stack actions would need calls to either pre or post operations, or both.
This includes at least create, update, delete, abandon, and adopt. In a basic
design, modifications to the Stack class in parser.py are sufficient for adding
the call to the pre-operation and post-operation methods found via the
lifecycle plugin registry. The post-operation calls would need to be called in
both the normal paths and all error paths.
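The unwind-on-failure behaviour for pre-operation calls can be sketched as
follows; the class and method names are illustrative assumptions based on
this spec, not the final API:

```python
class LifecyclePlugin:
    """Illustrative base class for a stack lifecycle plugin."""

    def do_pre_op(self, cnxt, stack, action):
        """Inspect (but never modify) the stack before the operation."""

    def do_post_op(self, cnxt, stack, action, is_stack_failure=False):
        """Called after the operation completes or fails."""


def run_pre_ops(plugins, cnxt, stack, action):
    """Run pre-op methods in registry order; on failure, call post-op on
    every plugin already processed so it can un-make its decisions."""
    done = []
    for plugin in plugins:
        try:
            plugin.do_pre_op(cnxt, stack, action)
        except Exception:
            # Unwind already-processed plugins before re-raising.
            for prior in reversed(done):
                prior.do_post_op(cnxt, stack, action, is_stack_failure=True)
            raise
        done.append(plugin)
```

A holistic scheduler plugin would, for example, record placement decisions in
do_pre_op and release them again when do_post_op arrives with
is_stack_failure=True.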
Alternatives
------------
No other approach was identified that would allow the operator (heat provider)
to extend heat with this functionality for all stack deployments.
https://blueprints.launchpad.net/heat/+spec/lifecycle-callbacks describes
an approach where heat users can optionally specify callbacks in templates
for stack and resource events.
It does not provide the ubiquitous callbacks (for all stacks) that would be
needed by the use cases described above, unless the heat provider tightly
controls the templates that users can use.
Implementation
==============
A patch comprising a full implementation of the blueprint
(https://review.openstack.org/#/c/89363/) is already being
reviewed, under the old pre-spec process.
Assignee(s)
-----------
Primary assignee:
William C. Arnold (barnold-8)
Milestones
----------
Target Milestone for completion:
Juno-2
Work Items
----------
Implementation: https://review.openstack.org/#/c/89363/
Dependencies
============
No dependencies
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
====================================
Add support for SR-IOV-PORT
====================================
https://blueprints.launchpad.net/heat/+spec/neutron-resource-add-pci-port
When creating Neutron SR-IOV ports, templates need a way to express the port's
vNIC type. This spec proposes to add a vnic_type property to OS::Neutron::Port.
Problem description
===================
A neutron port is a virtual port that is either attached to a linux bridge or
an openvswitch bridge on a compute node. With the introduction of PCI
Passthrough SR-IOV support, the intermediate virtual bridge is no longer
required. Instead, the SR-IOV port is associated with a virtual function
that is supported by the vNIC adaptor.
Currently a PCI port can be created by setting the value_specs property
in OS::Neutron::Port. However, having a new resource type will simplify
the templates for the user and allow for different constraints in the
future.
Proposed change
===============
Add support for vnic_type in OS::Neutron::Port.
Provider resources will be used to create a PCI resource.
OS::Neutron::Port will be modified to support the vnic type.
The properties for OS::Neutron::Port will be as follows:
.. code-block:: yaml

    resources:
      sriov_port:
        type: OS::Neutron::Port
        properties:
          network: { get_param: my_net }
          vnic_type: direct
The vnic types supported are normal, direct and macvtap.
Alternatives
------------
Implement a new resource OS::Neutron::PciPort. This will reside with the
current Neutron::Port and reuse as much of Neutron::Port as possible.
The properties for OS::Neutron::PciPort will be as follows:
.. code-block:: yaml

    resources:
      sriov_port:
        type: OS::Neutron::PciPort
        properties:
          network: { get_param: my_net }
          vnic_type: direct
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Rob Pothier
Milestones
----------
Target Milestone for completion:
Kilo-1
Work Items
----------
* modify OS::Neutron::Port https://review.openstack.org/#/c/129353/
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
==================================
Apply Neutron Custom Constraints
==================================
https://blueprints.launchpad.net/heat/+spec/apply-neutron-constraints
Apply neutron port/subnet/network/router custom constraints.
Problem description
===================
1. Neutron port/subnet/router custom constraints are defined but never applied.
2. The Neutron network custom constraint is only applied to OS::Sahara::*
   resources; it should also be applied to other related resources.
Proposed change
===============
1. Apply neutron subnet constraint.
2. Apply neutron port constraint.
3. Apply neutron router constraint.
4. Apply neutron network constraint.
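Applying a constraint amounts to attaching it to a resource's property schema
so that validation runs at stack-validation time. A minimal self-contained
sketch of the pattern (the class below is a toy stand-in, not Heat's actual
CustomConstraint, and the lookup is faked rather than querying Neutron):

```python
class CustomConstraint:
    """Toy stand-in: a named constraint that validates a property value
    by checking that the referenced entity exists."""

    def __init__(self, name, exists):
        self.name = name
        self._exists = exists

    def validate(self, value):
        if not self._exists(value):
            raise ValueError('%s %r could not be found' % (self.name, value))


# In Heat, the client plugin would ask Neutron; here we fake the lookup.
known_ports = {'port-ok', 'port-2'}
port_constraint = CustomConstraint('neutron.port', known_ports.__contains__)

port_constraint.validate('port-ok')  # passes silently
```

With the real constraint classes, the same pattern lets a template author get
an early, clear error for a nonexistent port/subnet/network/router instead of
a failure mid-create.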
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
huangtianhua@huawei.com
Milestones
----------
Target Milestone for completion:
Kilo-1
Work Items
----------
1. Apply neutron subnet constraint.
2. Apply neutron port constraint.
3. Apply neutron router constraint.
4. Apply neutron network constraint.
5. Add UT/Tempest tests for changes.
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================================
Using Barbican as secret backend
================================
https://blueprints.launchpad.net/heat/+spec/barbican-as-secret-backend
We store some secret data in the Heat database using a simple symmetric
encryption with a static key. To improve security of the storage, we should
optionally support using Barbican to store those secrets.
Problem description
===================
Heat uses a simple encrypt mechanism to store secret data in its database, with
the key specified in the configuration. While it provides some security, a
compromised Heat node will give the attacker access to all the users' secrets.
Proposed change
===============
Add a new flag to the Heat configuration specifying that Barbican must be used
for storing secrets. When set, Heat will query the service catalog for the
Barbican service, and will store the secrets in the user project, with
predictable prefixes.
We already support 2 different methods of decryption, 'heat' being the legacy
one, and 'oslo_v1' being the current version. Current values encrypted using
those methods will keep getting decrypted the same way. When we use Barbican,
the encryption method will be set to 'barbican_v1' and the value will be the
reference of the secret.
This will require some refactoring, as data encryption is currently managed at
the SQLAlchemy data layer, whereas it may be easier to manage it at a higher
layer, especially since we need user credentials to talk to Barbican.
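The dispatch on the stored encryption method could look roughly like the
sketch below. The helper names and the FakeBarbican client are assumptions
for illustration, not Heat's or python-barbicanclient's real API, and the
base64 "scheme" merely stands in for the real symmetric encryption:

```python
import base64


def legacy_decrypt(value):
    """Stand-in for the 'heat' / 'oslo_v1' symmetric schemes."""
    return base64.b64decode(value).decode()


class FakeBarbican:
    """Illustrative client: maps secret references to payloads."""

    def __init__(self, secrets):
        self._secrets = secrets

    def get_payload(self, ref):
        return self._secrets[ref]


def decrypt(method, value, barbican):
    decryptors = {
        'heat': legacy_decrypt,
        'oslo_v1': legacy_decrypt,
        # For 'barbican_v1' the stored value is a secret reference,
        # not ciphertext: fetch the payload from Barbican instead.
        'barbican_v1': barbican.get_payload,
    }
    try:
        fn = decryptors[method]
    except KeyError:
        raise ValueError('unknown encryption method: %s' % method)
    return fn(value)
```

The key point is that existing rows keep decrypting with their recorded
method, while new secrets can be written with method 'barbican_v1' and a
reference as the value.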
Alternatives
------------
There seems to be an effort to create a key management shim that may use local
secure storage as an option. We may want to wait for that effort.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
therve
Milestones
----------
Target Milestone for completion:
Kilo-2
Work Items
----------
* Extract encryption management from the SQLAlchemy layer
* Move Barbican client out of contrib
* Add a configuration option to send secrets to the Barbican service
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
======================================
Support Ceilometer alarm Gnocchi rules
======================================
https://blueprints.launchpad.net/heat/+spec/ceilometer-gnocchi-alarm
Gnocchi provides two new kinds of Ceilometer alarm rules that allow querying
the Gnocchi API instead of the Ceilometer API to retrieve statistics about
Ceilometer-monitored metrics.
This blueprint proposes to add the corresponding heat resources:
* OS::Ceilometer::GnocchiResourcesAlarm
* OS::Ceilometer::GnocchiMetricsAlarm
Problem description
===================
It is now possible to send Ceilometer samples to Gnocchi in addition to the
traditional database, and to create alarms that query the Gnocchi API instead
of the Ceilometer API to retrieve statistics. However, we currently can't
create this kind of alarm with heat; this BP will solve that.
Proposed change
===============
Add the OS::Ceilometer::GnocchiResourcesAlarm like this::

    resources:
      cpu_alarm_low:
        type: OS::Ceilometer::GnocchiResourcesAlarm
        properties:
          description: Scale-down if the average CPU < 15% for 1 minute
          metric: cpu_util
          aggregation_method: mean
          granularity: 300
          evaluation_periods: 1
          threshold: 1
          comparison_operator: lt
          alarm_actions:
            - {get_attr: [web_server_scaledown_policy, alarm_url]}
          resource_type: instance
          resource_constraint:
            str_replace:
              template: 'server_group=stack_id'
              params:
                stack_id: {get_param: "OS::stack_id"}
Add the OS::Ceilometer::GnocchiMetricsAlarm like this::

    resources:
      metrics_alarm_low:
        type: OS::Ceilometer::GnocchiMetricsAlarm
        properties:
          description: Scale-down if the average CPU < 15% for 1 minute
          metrics: ["09ff6ad8-1704-4f18-8989-6559307dfe79",
                    "dea49e52-be42-4c71-bd77-fe265b1b6dbb"]
          aggregation_method: mean
          granularity: 300
          evaluation_periods: 1
          threshold: 1
          comparison_operator: lt
          alarm_actions:
            - {get_attr: [web_server_scaledown_policy, alarm_url]}
These resources will initially live in /contrib and will move into the
supported resources when Gnocchi moves into the OpenStack namespace after
k-3.
Alternatives
------------
None
Usage Scenario
--------------
I want to create an autoscaling group that scales down when a statistic
computed by Gnocchi against the cpu_util of a group of VMs reaches a certain
threshold::

    resources:
      cpu_alarm_low:
        type: OS::Ceilometer::GnocchiResourcesAlarm
        properties:
          description: Scale-down if the average CPU < 15% for 1 minute
          metric: cpu_util
          aggregation_method: mean
          granularity: 300
          evaluation_periods: 1
          threshold: 1
          comparison_operator: lt
          alarm_actions:
            - {get_attr: [web_server_scaledown_policy, alarm_url]}
          resource_type: instance
          resource_constraint:
            str_replace:
              template: 'server_group=stack_id'
              params:
                stack_id: {get_param: "OS::stack_id"}
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Mehdi Abaakouk <sileht@redhat.com>
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
* Add the new Ceilometer alarm resources
Dependencies
============
None
References
----------
* https://review.openstack.org/#/c/153291/
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
===================================
Support Cinder Custom Constraints
===================================
https://blueprints.launchpad.net/heat/+spec/cinder-custom-constraints
Support Cinder Custom Constraints, and apply them to related resources.
Problem description
===================
Many resources have Volume/Snapshot properties that relate to Cinder
volumes/snapshots, but we do not yet support the corresponding custom
constraints.
Proposed change
===============
1. Add the cinder volume custom constraint and apply it to related resources.
2. Add the cinder snapshot custom constraint and apply it to related
   resources.
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
huangtianhua@huawei.com
Milestones
----------
Target Milestone for completion:
Kilo-1
Work Items
----------
1. Add/Apply cinder volume custom constraint.
2. Add/Apply cinder snapshot custom constraint.
3. Add UT/Tempest tests for all the changes.
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==============================
Support Cinder scheduler hints
==============================
https://blueprints.launchpad.net/heat/+spec/cinder-scheduler-hints
When creating volumes with Cinder, passing scheduler hints can be necessary to
select an appropriate back-end. This spec proposes to add a 'scheduler_hints'
option for OS::Cinder::Volume objects, as is already done for
OS::Nova::Server.
Problem description
===================
Currently, it is not possible to pass hints to the Cinder scheduler when using
Heat to create volumes.
Proposed change
===============
Add a new optional key-value map (named 'scheduler_hints') for
OS::Cinder::Volume resources. A user can pass hints to the Cinder scheduler by
specifying one or more keys-values in scheduler_hints.
Alternatives
------------
None
Usage Scenario
--------------
For instance, request creation of `volume-A` on a different back-end than
`volume-B` using the different_host scheduler hint::
    resources:
      volume-A:
        type: OS::Cinder::Volume
        properties:
          size: 10
          scheduler_hints: {different_host: {Ref: volume-B}}
Implementation
==============
Assignee(s)
-----------
Primary assignee:
adrien-verge
Milestones
----------
Target Milestone for completion:
Kilo-1
Work Items
----------
* Extend OS::Cinder::Volume to support a new 'scheduler_hints' option
* When set, pass this option to the Cinder client
Dependencies
============
* Support Cinder API version 2
https://blueprints.launchpad.net/heat/+spec/support-cinder-api-v2
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=====================================
Support Cinder volume type management
=====================================
https://blueprints.launchpad.net/heat/+spec/cinder-volume-type
The Cinder volume type is an important parameter when creating a volume: it
can select the volume backend, specify whether consistency groups are
supported, and so on. Supporting an OS::Cinder::VolumeType resource in Heat
would therefore be useful.
Note that by default only users who have the admin role can manage volume
types because of the default policy in Cinder.
Problem description
===================
Currently volume types need to be managed externally to heat and passed into
the stack as parameters. This spec defines how we could create both the volume
and the volume type within one template.
Proposed change
===============
Add the OS::Cinder::VolumeType resource, like this::
    resources:
      my_volume_type:
        type: OS::Cinder::VolumeType
        properties:
          name: volumeBackend
          metadata: {volume_backend_name: lvmdriver}
Note that because of the admin restriction mentioned above,
the new resource will be added to /contrib.
Alternatives
------------
None
Usage Scenario
--------------
For volume creation, pass the volume_type to select the LVM driver::

    resources:
      my_volume:
        type: OS::Cinder::Volume
        properties:
          size: 10
          volume_type: {get_resource: my_volume_type}
Implementation
==============
Assignee(s)
-----------
Primary assignee:
huangtianhua <huangtianhua@huawei.com>
Milestones
----------
Target Milestone for completion:
Kilo-1
Work Items
----------
* Add OS::Cinder::VolumeType resource, implement its basic actions
* Add UT/Tempest for the change
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
=================================
Implement check_resource workflow
=================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/convergence-check-workflow
Problem description
===================
Rather than working on the whole stack in-memory, in convergence we want to
distribute tasks across workers by sending out notifications when individual
resources are ready to be operated on.
Proposed change
===============
The workflow has been prototyped in
https://github.com/zaneb/heat-convergence-prototype/blob/resumable/converge/converger.py
After calculating the traversal graph, the stack update call triggers the leaf
nodes of the graph. After each node is processed, examine the traversal graph
(which is stored in the Stack table) to determine which nodes are waiting for
this one. Store input data for each of those nodes in their SyncPoints, and
trigger a check on any which now contain their full complement of inputs.
The SyncPoint for the stack works similarly, except that when complete we
notify the stack itself to mark the update as complete.
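The input-propagation step can be modelled as a toy, using plain dicts in
place of the real Stack-table graph and SyncPoint rows:

```python
from collections import defaultdict


def node_complete(node, output, requires, sync_inputs, trigger):
    """Record `node`'s output in the SyncPoint of every node waiting on
    it; trigger a check on any node that now has all of its inputs."""
    for waiting, deps in requires.items():
        if node in deps:
            sync_inputs[waiting][node] = output
            if set(sync_inputs[waiting]) == deps:
                trigger(waiting, dict(sync_inputs[waiting]))


# Diamond graph: B and C depend on A; D depends on both B and C.
requires = {'B': {'A'}, 'C': {'A'}, 'D': {'B', 'C'}}
sync_inputs = defaultdict(dict)
triggered = []


def trigger(node, data):
    triggered.append(node)


node_complete('A', 'a-out', requires, sync_inputs, trigger)  # B and C fire
node_complete('B', 'b-out', requires, sync_inputs, trigger)  # D still waits
node_complete('C', 'c-out', requires, sync_inputs, trigger)  # now D fires
```

In the real implementation the trigger is an RPC notification to any worker,
which is what lets the traversal be distributed across engines.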
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
sirushtim
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Kick off the workflow from the stack update
- Implement skeleton check_resource
- Notify the stack of the result when complete (or on failure)
- Create developer documentation
Dependencies
============
- https://blueprints.launchpad.net/heat/+spec/convergence-graph-progress
- https://blueprints.launchpad.net/heat/+spec/convergence-prepare-traversal
- https://blueprints.launchpad.net/heat/+spec/convergence-lightweight-stack
- https://blueprints.launchpad.net/heat/+spec/convergence-message-bus
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
======================================================
Convergence workflow for dealing with locked resources
======================================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/convergence-concurrent-workflow
Problem description
===================
When doing convergence with legacy resource plugins (which is all plugins in
Phase 1 of convergence), we may encounter a resource that is locked for
processing a previous update. We want to wait for this previous update to
complete and retrigger another update with the latest template.
Proposed change
===============
If the workflow encounters a resource that is locked by another engine, it
should first check that the other engine is still alive, and if not then break
the lock. Assuming the other engine is still working, the workflow should
neither process that resource nor trigger processing any subsequent nodes. To
ensure that processing of that graph node is retriggered once the previous
update is complete, we must check at the conclusion of every update whether the
traversal we are processing is still current.
Since SyncPoints belonging to previous traversals are deleted before beginning
a new one, failing to find a SyncPoint in the database is sufficient to alert
us of a potentially-waiting new traversal. If this occurs, reload the stack to
determine the current traversal, and check the SyncPoint for the current node
to determine if it is ready. If it is, then retrigger the current node with the
appropriate data for the latest traversal (which can be found in the Stack
table).
There is a race that could cause multiple triggers on the same graph node,
however it will be resolved by the lock on the resource, since only the process
that successfully acquires the lock will continue.
An exception to all of this is the case where the graph node is of the update
type and the resource status is DELETE_IN_PROGRESS. In that case, we should
simply create a replacement resource.
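The retrigger check at the conclusion of a node's processing might look like
this sketch; the in-memory FakeDB stands in for the real SyncPoint and Stack
tables, and the names are illustrative:

```python
class FakeDB:
    """Holds the stack's current traversal and its ready SyncPoints."""

    def __init__(self, current_traversal, sync_points):
        self.current_traversal = current_traversal
        self._sps = sync_points  # {(node, traversal_id): is_ready}

    def get_sync_point(self, node, traversal_id):
        return self._sps.get((node, traversal_id))


def after_node(node, traversal_id, db, trigger):
    """If our traversal's SyncPoint is gone, a newer traversal has
    superseded us: retrigger the node there if its inputs are ready."""
    if db.get_sync_point(node, traversal_id) is not None:
        return 'current'            # still the latest traversal
    ready = db.get_sync_point(node, db.current_traversal)
    if ready:                       # full complement of inputs present
        trigger(node, db.current_traversal)
        return 'retriggered'
    return 'waiting'
```

A duplicate trigger from the race described above is harmless here, because
only the process that acquires the resource lock proceeds.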
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
ananta
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Bail out when we encounter a locked resource
- Retrigger when required if a SyncPoint is not found
- Replace a resource that is still needed but has the status DELETE_IN_PROGRESS
- Create developer documentation
Dependencies
============
- https://blueprints.launchpad.net/heat/+spec/convergence-check-workflow
- https://blueprints.launchpad.net/heat/+spec/convergence-resource-locking
- https://blueprints.launchpad.net/heat/+spec/convergence-graph-progress
- https://blueprints.launchpad.net/heat/+spec/convergence-stack-data
- https://blueprints.launchpad.net/heat/+spec/convergence-resource-replacement
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
=========================================
Add a config option to enable Convergence
=========================================
https://blueprints.launchpad.net/heat/+spec/convergence-config-option
Problem description
===================
The new convergence architecture is a major change to the code base. In order
to decouple landing the necessary changes from the release cycle while
maintaining stability for end users, we need to develop it alongside existing
code and tests and avoid breaking existing code until such time as convergence
can pass the functional test suite.
Proposed change
===============
Add a config option that allows the operator to enable the convergence code
path for new stacks. The option will initially be off by default. We will
enable it as soon as convergence has landed and is in a working state (passes
functional tests), provided that doing so does not create an undue level of
risk (i.e. if this happened to occur right before feature freeze, we would
likely delay changing the default until after the release).
Also add a flag to the Stack table to indicate whether each stack should use
existing legacy code path or convergence code path. All pre-existing stacks
will continue using the legacy code path. New stacks will use the code path
selected by the operator via the config option.
At some point in the future, we will create a tool that allows us to populate
existing legacy stacks with the additional data required to start using them
with the convergence code. Once we have such a migration tool we can deprecate
the legacy code path, and after an appropriate interval and once all of the
legacy 'unit' tests that require it have been converted to functional tests we
can remove that code path altogether.
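The per-stack selection rule reduces to something like the sketch below; the
flag and option names are placeholders, not necessarily the final ones:

```python
def select_code_path(stack_convergence_flag, convergence_engine_enabled):
    """Pick the code path for a stack.

    Pre-existing stacks keep the flag recorded in the Stack table;
    brand-new stacks (flag not yet set) follow the operator's config
    option at creation time.
    """
    if stack_convergence_flag is not None:
        return 'convergence' if stack_convergence_flag else 'legacy'
    return 'convergence' if convergence_engine_enabled else 'legacy'
```

Recording the flag on the stack, rather than re-reading the config, keeps a
stack on one code path for its whole lifetime even if the operator flips the
option later.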
Alternatives
------------
In addition to the config option, we could also allow the user to select on
each stack create whether to use the legacy or convergence code. This could
make functional testing easier, as we wouldn't need to change the
configuration to test the two code paths. However, the downside is that it
exposes to the user what should be an implementation detail.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
prazumovsky
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Implement the config option
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
============================
Implement SyncPoint DB table
============================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/convergence-graph-progress
Problem description
===================
As we traverse the dependency graph during an update, we need to keep a record
of our progress so that we can resume the traversal later in case it is
interrupted.
Proposed change
===============
Add a new table, `SyncPoint`, to the database with the following columns:
- `resource_id` (a Resource key)
- `is_update` (Boolean - True for update, False for cleanup)
- `traversal_id` (UUID)
- `stack_id` (a Stack key)
- `input_data` (JSON data)
The first three fields should form a composite primary key. That should allow
us to do a quick get of a SyncPoint given a graph key (resource key + is_update
direction) and traversal ID (i.e. without doing a query). The stack key
together with the traversal ID allows us to query for all of the SyncPoints
associated with a particular traversal (e.g. to delete them if the traversal
is cancelled).
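The two access patterns can be illustrated with an in-memory stand-in for the
table (this is a sketch of the keying only, not the actual DB layer):

```python
# Illustrative only: the composite primary key (resource_id, is_update,
# traversal_id) supports a direct get with no query, while
# (stack_id, traversal_id) supports bulk deletion of a whole traversal.

class SyncPointStore(object):
    def __init__(self):
        self._rows = {}  # (resource_id, is_update, traversal_id) -> row

    def create(self, resource_id, is_update, traversal_id, stack_id):
        key = (resource_id, is_update, traversal_id)
        self._rows[key] = {'stack_id': stack_id, 'input_data': {}}

    def get(self, resource_id, is_update, traversal_id):
        # Direct lookup by the composite primary key.
        return self._rows.get((resource_id, is_update, traversal_id))

    def delete_all(self, stack_id, traversal_id):
        # Drop every SyncPoint belonging to a cancelled traversal.
        self._rows = {
            k: v for k, v in self._rows.items()
            if not (v['stack_id'] == stack_id and k[2] == traversal_id)}
```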
The input data will contain a map of graph keys (resource key of the Resource
that was current at the beginning of the update + is_update direction) to
resource key (may be different if the resource was replaced), RefID and
attribute data. Thus the input data pushed from previously-updated resources is
cached until such time as the current resource is ready for it. This data will
likely be serialised in JSON format, and could be quite large.
A prototype for this is
https://github.com/zaneb/heat-convergence-prototype/blob/resumable/converge/sync_point.py
Updates to the input data must be atomic, and must use the "UPDATE ... WHERE
..." form discussed in
http://www.joinfu.com/2015/01/understanding-reservations-concurrency-locking-in-nova/
- which probably means adding an extra integer field that is incremented on
every write (since we can't really query on a text field).
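The version-counter pattern can be sketched as follows (the ``version`` column
and helper name are assumptions of this sketch; the dict stands in for the DB
row). The write only takes effect if the version read earlier is still current,
mirroring ``UPDATE ... SET version = version + 1 WHERE ... AND version = :old``:

```python
def merge_input_data(row, new_data):
    """Merge new_data into row['input_data'] with an optimistic
    compare-and-swap on an integer version column (illustrative)."""
    while True:
        expected = row['version']          # read the current version
        merged = dict(row['input_data'])
        merged.update(new_data)
        # Emulates: UPDATE sync_point SET input_data = :merged,
        #           version = version + 1 WHERE ... AND version = :expected
        if row['version'] == expected:     # one row matched: success
            row['input_data'] = merged
            row['version'] += 1
            return
        # Zero rows matched: another writer won the race; re-read and retry.
```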
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
rh-s
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Implement the new table and DB migration
- Implement an API for creating and updating entries
Dependencies
============
- https://blueprints.launchpad.net/heat/+spec/convergence-push-data
- https://blueprints.launchpad.net/heat/+spec/convergence-stack-data
- https://bugs.launchpad.net/heat/+bug/1415237

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
=========================================
Lightweight Stack loading for convergence
=========================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/convergence-lightweight-stack
Problem description
===================
When we load the resources for a stack from the database, we load all of them
at once. We also assume that resource names are unique within a stack (i.e.
there is only one version of each resource). In convergence there will be
multiple versions of each resource coexisting in the same stack, and we'll want
to load only the one we're going to perform operations on at any given time.
Proposed change
===============
Allow the stack to provide cached values for all of the `get_resource` and
`get_attr` references in the template when they are resolved. Don't load the
whole list of resources when this cached data is available.
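A sketch of what resolution against the cached data might look like (the shape
of the cache is an assumption; in practice it would be built from the data
pushed by dependencies):

```python
# Illustrative only: with cached outputs of dependencies available,
# get_resource / get_attr resolve without loading any Resource rows.

class CachedResolver(object):
    def __init__(self, cache):
        # cache: {resource_name: {'id': ..., 'attrs': {name: value}}}
        self._cache = cache

    def get_resource(self, name):
        return self._cache[name]['id']

    def get_attr(self, name, attr):
        return self._cache[name]['attrs'][attr]


resolver = CachedResolver(
    {'server': {'id': 'r-1', 'attrs': {'first_address': '10.0.0.5'}}})
```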
Alternatives
------------
Continue to load every resource from the database whenever we need resource ID
or attribute data.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
sirushtim
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Substitute reading from a cache for loading resources when resolving template
functions
Dependencies
============
The cached values will be obtained by the code for
https://blueprints.launchpad.net/heat/+spec/convergence-push-data

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
===========================================
Internal oslo.messaging bus for convergence
===========================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/convergence-message-bus
Problem description
===================
We need a message bus for the internal worker API of convergence.
Proposed change
===============
Create a new worker service within heat-engine that is dedicated to handling
internal messages to the 'worker' (a.k.a. 'converger') actor in convergence.
Messages on this bus will use the 'cast' rather than 'call' method to anycast
the message to an engine that will handle it asynchronously. We won't wait for
or expect replies from these messages.
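The 'cast' semantics can be illustrated with a stdlib-only stand-in (the real
implementation will use oslo.messaging, not a local queue; names here are
purely illustrative): the caller enqueues a message and returns immediately,
any one worker drains the shared queue, and no reply is ever sent.

```python
import queue
import threading

bus = queue.Queue()
handled = []


def worker_loop():
    while True:
        msg = bus.get()
        if msg is None:          # demo-only shutdown sentinel
            break
        handled.append(msg)      # process asynchronously; no reply expected


def cast(method, **kwargs):
    bus.put((method, kwargs))    # fire-and-forget: returns at once


worker = threading.Thread(target=worker_loop)
worker.start()
cast('check_resource', resource_id=42)
bus.put(None)
worker.join()
```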
The message types that will eventually be implemented on this bus are those
marked with the @asynchronous decorator in
https://github.com/zaneb/heat-convergence-prototype/blob/resumable/converge/converger.py
Alternatives
------------
We could have a separate heat-worker daemon, but there appears to be no point
in making life difficult for deployers as heat-engine can handle the same
tasks.
We could mix this into the same service as the existing RPC API that is called
by heat-api, but this is messy because the two have entirely different uses.
There is already precedent for running another RPC service inside heat-engine,
although we can't reuse that because it listens on a queue specific to the
engine id.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
kanagaraj-manickam
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Implement the new worker service in the engine
- Implement a client API to make it easy to send messages to the worker service
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
=====================================================
Move Parameter data storage from Stack to RawTemplate
=====================================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/convergence-parameter-storage
Problem description
===================
The target state of a stack is not defined by a template alone, but by a
combination of a template and an environment (including input parameter
values). However, currently these are stored separately - there is a
RawTemplate database table for templates, but the environment is stored in the
Stack table.
In order to allow the user to roll back to a previous state, we need to store
both the old template and the old environment.
Proposed change
===============
Move the storage of the environment from the `parameters` column of the Stack
table to the RawTemplate table. In this way, we can roll back to a previously
commanded state whenever its template is still available in the database.
While we are at it, we should add an out-of-band indicator of whether the
parameters are encrypted, since we know that encrypting the parameters in the
database is something we will want to implement.
We can also consider storing the user parameters and other parts of the
environment separately. The current design is a result of retrofitting the
environment where previously we only had parameters. We should probably store
the "files" section of the environment as a multipart-MIME document in a
separate column, rather than as a JSON dict as part of the environment, since
that is the format we want to allow in a future v2 API.
Alternatives
------------
Instead of just storing multiple references to templates in the Stack table, we
could also include multiple versions of the environment (e.g. have a
`previous_parameters` or `previous_environment` row). This would save doing the
migration now, but it would be messier and more error-prone to implement
rollback in convergence.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
ckmvishnu
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Database migration
- Change how environment data is loaded from the database
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
============================================================
Load and generate the dependency graph for a stack traversal
============================================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/convergence-prepare-traversal
Problem description
===================
We need to precalculate the graph of dependencies between all extant resources
in the stack, including versions of resources that are now out of date.
Proposed change
===============
When applying a transformation to a stack, load all extant resources for that
stack from the DB. If one or more versions of a resource in the template
already exist, select the most up-to-date one to update provided it is in a
valid state. If no versions of a resource in the template exist in the
database, create one for it.
Calculate the dependency graph, such that we will visit all of the selected
resources in dependency order to update them where necessary and visit *all*
of the resources in reverse dependency order to clean them up where necessary.
Clean-up operations on a resource must always happen after any update operation
on the same resource.
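One way the combined graph could be generated (the node and edge representation
is an assumption of this sketch): each resource contributes an update node
``(name, True)`` and a clean-up node ``(name, False)``.

```python
def traversal_edges(requires):
    """requires: {resource: set of resources it depends on}.

    Returns edges (prerequisite_node, dependent_node), each node being a
    (resource, is_update) pair. Illustrative only."""
    edges = set()
    for rsrc, deps in requires.items():
        # Clean-up of a resource always comes after its own update.
        edges.add(((rsrc, True), (rsrc, False)))
        for dep in deps:
            # Update in dependency order...
            edges.add(((dep, True), (rsrc, True)))
            # ...and clean up in reverse dependency order.
            edges.add(((rsrc, False), (dep, False)))
    return edges


edges = traversal_edges({'server': {'network'}, 'network': set()})
```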
Finally, replace the traversal ID with a new UUID and create a SyncPoint for
each node in the graph with this traversal ID. It should also create a
SyncPoint for the stack itself, which will be used to indicate when the update
portion of the traversal is complete, at which time the stack status can be
updated.
This code should largely follow the prototype in
https://github.com/zaneb/heat-convergence-prototype/blob/resumable/converge/stack.py
If two updates are racing, one of them will fail to atomically update the Stack
row with its own newly-generated traversal ID. In this case it should roll back
the database changes, by deleting any newly-created Resource rows that it added
as well as all of the SyncPoints.
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
rh-s
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Load all resources from a Stack
- Determine which resources are the most up-to-date
- Create new resources where required
- Generate the graph for traversal
- Create SyncPoints for every node in the graph
- Roll back any changes made in the database if we lose the race to be the next
update
- Create developer documentation
Dependencies
============
- https://blueprints.launchpad.net/heat/+spec/convergence-config-option
- https://blueprints.launchpad.net/heat/+spec/convergence-graph-progress
- https://blueprints.launchpad.net/heat/+spec/convergence-stack-data
- https://blueprints.launchpad.net/heat/+spec/convergence-resource-table

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
=====================================================
Extract data from resources to push into dependencies
=====================================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/convergence-push-data
Problem description
===================
We currently assume that every resource in a stack is loaded in memory
concurrently, and we query it directly to determine its attribute values. This
'pull' system is inefficient in the convergence architecture compared to a
'push' system, since we hope to typically have only one resource loaded in
memory at a time.
Proposed change
===============
Analyse the template to determine which attributes of a resource are needed
elsewhere. This is conceptually quite similar to the way we analyse the
template looking for dependencies, by recursively examining a snippet of
template and building up a list of dependent resources, except that instead of
only a list of resources we'll need to include the attribute names being
referenced. In this way the code will be able to work with arbitrary template
format plugins, although it will probably require a change to the Function
plugin API. Return the result as a mapping from resource names to a list of
referenced attribute names.
After a create or update event, we can use this list of attributes to query the
resource for all of the information that will be needed subsequently from it.
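An illustrative walk over a parsed template snippet (the real code would go
through the Function plugin API; the ``get_attr`` key shape used here is the
HOT form, an assumption of this sketch):

```python
def referenced_attrs(snippet, found=None):
    """Map resource name -> set of attribute names referenced via get_attr.

    Recursively examines a parsed template snippet, much like the existing
    dependency analysis, but records attribute names too."""
    if found is None:
        found = {}
    if isinstance(snippet, dict):
        for key, value in snippet.items():
            if (key == 'get_attr' and isinstance(value, list)
                    and len(value) > 1):
                found.setdefault(value[0], set()).add(value[1])
            else:
                referenced_attrs(value, found)
    elif isinstance(snippet, list):
        for item in snippet:
            referenced_attrs(item, found)
    return found


refs = referenced_attrs(
    {'outputs': {'ip': {'value': {'get_attr': ['server', 'first_address']}}}})
```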
Alternatives
------------
Load a resource from the database whenever we need to retrieve one of its
attributes.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
skraynev
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Determine the list of attributes of a resource which are referenced in the
template
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
=================================
Enable locking of Resources in DB
=================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/convergence-resource-locking
Problem description
===================
Currently we enforce locking at the stack level, so only one operation can be
in progress at a time on a stack. This is not fine-grained enough, as it
prevents us from starting a new update while awaiting the result of a previous
one. Phase one of convergence is to remove this restriction by locking at the
level of individual resources.
Proposed change
===============
Make rows in the Resource table lockable, by ensuring that state changes are
atomic. We'll also need to store the ID of the engine that currently holds the
lock, so that we can use this to detect when an engine has died and clean up
appropriately.
We'll use the "UPDATE ... WHERE ..." form discussed in
http://www.joinfu.com/2015/01/understanding-reservations-concurrency-locking-in-nova/
to ensure atomic updates.
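In semantic terms, acquiring the lock is a single
``UPDATE resource SET engine_id = :ours WHERE id = :rid AND engine_id IS NULL``
that succeeds iff exactly one row matched. A minimal stand-in for those
semantics (the dict represents the Resource row purely for illustration):

```python
def try_acquire(resource_row, engine_id):
    """Claim the resource lock; True iff the atomic update matched a row."""
    if resource_row.get('engine_id') is None:   # the WHERE clause
        resource_row['engine_id'] = engine_id   # the SET clause
        return True                             # 1 row matched: acquired
    return False                                # 0 rows: already locked


def release(resource_row, engine_id):
    """Release the lock, but only if we are the engine holding it."""
    if resource_row.get('engine_id') == engine_id:
        resource_row['engine_id'] = None
```

Storing the engine ID (rather than a bare boolean) is what lets us later detect
locks held by a dead engine and clean them up.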
Alternatives
------------
The existing StackLock code does almost exactly what we want already, but the
downside is that it uses a separate table in the database to do so. Using that
rather than applying new semantics to the writes we are already doing would
make convergence even more database-intensive than it already is.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
ishant-tyagi
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Database migration to add the id of the engine holding the lock
- Modify the way changes to the Resource table are written to guarantee
atomicity
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
===================================================
Implement Resource convergence create/update/delete
===================================================
https://blueprints.launchpad.net/heat/+spec/convergence-resource-operations
Problem description
===================
We need to modify the operations (create/update/delete) of
heat.engine.resource.Resource to work in both the convergence architecture and
the legacy architecture.
Proposed change
===============
Create a lightweight wrapper in the worker that runs the appropriate operation
using a TaskRunner. Any code that is specific to the convergence architecture
and that shouldn't be executed in the legacy architecture can hopefully also be
contained in this wrapper.
To the extent that any changes to the create/update/delete operations
themselves are benign to the legacy architecture (for example, storing the
extra data needed by convergence in the Resource table), they should be
implemented as part of the existing operations.
The prototype
https://github.com/zaneb/heat-convergence-prototype/blob/resumable/converge/resource.py
should give a good indication of the types of changes that will be necessary.
Alternatives
------------
An alternative would be to build separate create/update/delete operations for
convergence as part of the Resource class. We could do that if it proved
necessary, but it seems preferable to keep to a single code path as much as
possible.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
unmesh-gurjar
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Make any necessary changes to the Resource.create/update/delete
- Implement TaskRunner wrapper and call it from the relevant workflow code
Dependencies
============
- https://blueprints.launchpad.net/heat/+spec/convergence-check-workflow

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
================================
Convergence Resource replacement
================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/convergence-resource-replacement
Problem description
===================
During a stack update, some resources will need to be replaced (rather than
updated in-place). In general, we can't know in advance for which resources
that will be the case, so we need to be able to create replacements on the fly.
Proposed change
===============
When we detect that a resource needs to be replaced (i.e. Resource.update
raises UpdateReplace), create a new resource with the same name in the same
stack. Fill in the `replaces` and `replaced_by` fields of the new and
existing resources, respectively. Do *not* create a SyncPoint for the new
resource.
Once the new Resource has been stored in the database, retrigger the current
check with the same data except passing the key of the new resource. Then
return immediately, without triggering any dependent nodes.
Note that no modification of the graph stored in the Stack is required. When we
come to trigger the SyncPoints of nodes that are dependent on the replaced
resource, the replacement should just use the old resource's graph key to
impersonate it. However the contents of the input data (not the keys) to the
next resource will contain the resource ID of the replacement, so that
dependent resources will update their dependency lists. The previous resource
will be visited again in the clean-up phase of the graph, at which point it
will be deleted.
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
ananta
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Create a replacement resource and link it to its predecessor
- Trigger the check on the new resource
- Create developer documentation
Dependencies
============
- https://blueprints.launchpad.net/heat/+spec/convergence-check-workflow

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
==========================================
Add convergence data to the Resource table
==========================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/convergence-resource-table
Problem description
===================
The convergence design requires extra data to be stored with each Resource row
in the database, in order to allow different versions of a resource to co-exist
within the same stack.
Proposed change
===============
Add the following extra fields to the Resource table:
- `needed_by` (a list of Resource keys)
- `requires` (a list of Resource keys)
- `replaces` (a single Resource key, Null by default)
- `replaced_by` (a single Resource key, Null by default)
- `current_template` (a single RawTemplate key)
(Note, the first two fields are currently known as `requirers` and
`requirements`, respectively, in
https://github.com/zaneb/heat-convergence-prototype/blob/resumable/converge/resource.py
- but those are too confusing. Once we settle on names, we should update the
simulator code as well.)
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
skraynev
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Database migration
Dependencies
============
We need to resolve https://bugs.launchpad.net/heat/+bug/1415237 first as that
will determine what the type of a Resource key is.

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
====================
Convergence Rollback
====================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/convergence-rollback
Problem description
===================
We need to allow the user to cancel an update that is in progress and roll back
to the previous known good state. We also need to give users the option of
rolling back to the previous known good state in the event of a failure while
updating the stack.
Proposed change
===============
Since convergence removes the Stack-level locking for updates, we can implement
rollback as a simple update to a previously-stored version of the template.
Other parts of the convergence implementation will ensure that this deals
correctly with any resources that may still be in progress. The update will
still get a new traversal ID, even though it is updating to the same template
ID that was seen previously.
In the Stack table, we will store the ID of the most recent template to
successfully complete (if any) alongside the ID of the current target template
(at the completion of an update, these will be the same). Whenever either of
the stored template IDs are overwritten in such a way that we will no longer
refer to a particular Template, delete that Template from the database.
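A sketch of that reference-tracking rule (column names and the dict-based
template store are assumptions for illustration): whenever one of the stack's
two template slots is overwritten, any template that neither slot still refers
to is deleted.

```python
def point_at_template(stack, templates, slot, new_id):
    """Repoint one template slot and garbage-collect the old template
    if it is no longer referenced by either slot (illustrative)."""
    old_id = stack[slot]
    stack[slot] = new_id
    still_used = {stack['current_template'], stack['prev_template']}
    if old_id is not None and old_id not in still_used:
        templates.pop(old_id, None)


templates = {'t1': '...', 't2': '...'}
stack = {'current_template': 't2', 'prev_template': 't1'}

# A new update to template t3 arrives; t2 never completed, so it is dropped:
templates['t3'] = '...'
point_at_template(stack, templates, 'current_template', 't3')
```

Rollback is then just another call repointing `current_template` at the
previous known-good template ID.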
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
ananta
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Implement rollback
- Clean up unused templates
Dependencies
============
- https://blueprints.launchpad.net/heat/+spec/convergence-check-workflow
- https://blueprints.launchpad.net/heat/+spec/convergence-concurrent-workflow
- https://blueprints.launchpad.net/heat/+spec/convergence-parameter-storage
- https://blueprints.launchpad.net/heat/+spec/convergence-stack-data

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
=====================================
Port tests from convergence simulator
=====================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/convergence-simulator-tests
Problem description
===================
In coming up with the design of convergence, we built a simulator that verifies
a substantial number of test scenarios. The scenarios are defined in what
amounts to a simple DSL. If we can run the exact same scenarios against the
real Heat code base, then we can not only verify that our convergence
implementation fulfills the requirements of the simulator but also continue to
do that over time, even as we add more scenarios and even if we still have the
need to rapidly prototype design changes in the simulator.
Proposed change
===============
Implement a stub for the RPC APIs that puts messages into in-memory queues that
are drained by an event loop.
Implement a fake resource type that uses an in-memory store to represent the
underlying physical resource.
Provide wrappers for the global inputs to the scenario - `reality`, `verify`,
`Template`, `RsrcDef`, `GetRes`, `GetAtt`, `engine`, `converger` - that allow
them to be backed by the real equivalent classes in Heat.
Finally, reimplement
https://github.com/zaneb/heat-convergence-prototype/blob/resumable/test_converge.py
using testtools primitives and passing the wrappers above as globals, rather
than those defined in
https://github.com/zaneb/heat-convergence-prototype/blob/resumable/converge/__init__.py#L24-L41
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
ishant-tyagi
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Implement RPC stub and event loop
- Implement fake TestResource type and backend simulator
- Implement wrappers to map the scenario DSL to real Heat classes
- Implement a unit test framework to run the scenarios
Dependencies
============
- https://blueprints.launchpad.net/heat/+spec/convergence-message-bus
Of course, few of these tests are going to pass until Phase 1 of convergence is
substantially complete.

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
=================================================
Add extra data to the Stack table for convergence
=================================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/convergence-stack-data
Problem description
===================
The convergence design requires extra data to be stored with each Stack row in
order to manage concurrent updates.
Proposed change
===============
Add the following extra fields to the Stack table:
- `prev_raw_template` (a RawTemplate key)
- `current_traversal` (a UUID that gets changed on each update)
- `current_deps` (a list of edges in the dependency graph, stored as JSON)
We also need to ensure that modifications to the Stack table are atomic with
respect to the `current_traversal` field - if a new traversal starts then any
previous traversals should stop updating the stack data. This should be
achieved using the "UPDATE ... WHERE ..." form as discussed in
http://www.joinfu.com/2015/01/understanding-reservations-concurrency-locking-in-nova/
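The compare-and-swap pattern referenced above can be sketched with plain
SQL. This is an illustration of the technique, not Heat's actual DB API; the
table layout is reduced to the fields relevant here.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE stack (id TEXT PRIMARY KEY, '
    'current_traversal TEXT, status TEXT)')
conn.execute("INSERT INTO stack VALUES ('s1', 'uuid-old', 'IN_PROGRESS')")


def update_stack_if_current(conn, stack_id, expected_traversal, status):
    # The WHERE clause makes the write conditional on the traversal not
    # having changed; rowcount tells us whether we won or lost the race.
    cur = conn.execute(
        'UPDATE stack SET status = ? '
        'WHERE id = ? AND current_traversal = ?',
        (status, stack_id, expected_traversal))
    return cur.rowcount == 1


# Once a new traversal starts, a writer still holding the old traversal
# ID loses, so stale traversals stop updating the stack data.
conn.execute("UPDATE stack SET current_traversal = 'uuid-new' WHERE id = 's1'")
stale_write_ok = update_stack_if_current(conn, 's1', 'uuid-old', 'COMPLETE')
fresh_write_ok = update_stack_if_current(conn, 's1', 'uuid-new', 'COMPLETE')
```

The same "UPDATE ... WHERE ..." form works with SQLAlchemy by checking the
result's rowcount, avoiding any explicit row locking.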
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
unmesh-gurjar
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Add the database migration
- Ensure that updates are atomic w.r.t. `current_traversal`
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
===============================
Decouple AWS and OS resources
===============================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/decouple-aws-os-resources
Decouple AWS and OS resources code.
Problem description
===================
The code structure of the resources folder is currently confusing.
https://blueprints.launchpad.net/heat/+spec/reorganize-resources-code-structure
There is too much coupling between AWS and OS resources for reorganizing to be
possible, for example modules: wait_condition.py, instance.py, user.py and
volume.py.
Proposed change
===============
The new code structure will be::
heat
|----engine
|----resources
|----aws
|----wait_condition.py(AWS::CloudFormation::WaitCondition)
|----wait_condition_handle.py
(AWS::CloudFormation::WaitConditionHandle)
|----volume.py
(AWS::EC2::Volume and AWS::EC2::VolumeAttachment)
|----user.py(AWS::IAM::User and AWS::IAM::AccessKey)
|----instance.py(AWS::EC2::Instance)
|----openstack
|----wait_condition.py(OS::Heat::WaitCondition)
|----wait_condition_handle.py(OS::Heat::WaitConditionHandle)
|----volume.py
(OS::Cinder::Volume and OS::Cinder::VolumeAttachment)
|----access_policy.py(OS::Heat::AccessPolicy)
|----ha_restarter.py(OS::Heat::HARestarter)
|----wait_condition.py(base module)
|----volume_tasks.py(volume attach/detach tasks)
And also the tests code will be split::
heat
|----engine
|----tests
|----test_waitcondition.py
|----test_os_waitcondition.py
|----test_volume.py
|----test_os_volume.py
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
huangtianhua <huangtianhua@huawei.com>
Milestones
----------
Target Milestone for completion:
Kilo-2
Work Items
----------
* Decouple AWS and OS WaitCondition related resources
* Decouple AWS and OS Volumes related resources
* Decouple AWS and OS Instances related resources
* Decouple AWS and OS Users related resources
* Decouple responding tests
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
======================
Detailed Resource Show
======================
https://blueprints.launchpad.net/heat/+spec/detailed-resource-show
After creating a stack, there is currently no way to retrieve a resource's
attributes other than outside of heat (e.g. a user can get the ID of a nova
server and call nova directly to get such attributes).
Problem description
===================
Currently a template author needs to explicitly define in the outputs section
which attributes they'll need access to after a stack is created. Without doing
so, the attributes cannot be retrieved anymore unless the user updates the
template to add such attributes to the outputs section and updates the stack
afterwards.
Proposed change
===============
Since the attributes of a resource are really being retrieved by heat using the
resource client, that means the user can get the resource ID from heat and
interact directly with the client (e.g. get the ID of a nova server and talk
directly to nova) to retrieve its attributes.
We propose returning all resource attributes when displaying data for a
specific resource. This way, a user will be able to issue a resource-show call
and be able to look up attributes after creating their stacks even if the
template author didn't think about them beforehand.
Because these attributes can be retrieved either by the resource's client or by
changing the template and adding them to the outputs section, this should not
pose
any more risk of revealing sensitive data than what is already possible.
This can be achieved by changing the API response to also include attributes
that can be automatically discovered (i.e. resources that have an attributes
schema).
# API
# the call below would also return all attributes in the resource schema
/<tenant_id>/stacks/<stack_name>/<stack_id>/resources/<resource_name>
However, some resources have dynamic attributes that cannot be discovered using
their attributes schema, so this approach won't work for those resources. For
instance, ``OS::Heat::ResourceGroup`` has dynamic attributes based on what
outputs/attributes the group type exposes and ``OS::Heat::SoftwareDeployments``
has an attribute for each output defined in the config resource outputs
property.
For such resources, the API can be extended to accept a query param that will
hold the names of the attributes to be retrieved. Something like:
# API
/<tenant_id>/stacks/<stack_name>/<stack_id>/resources/
<resource_name>?with_attr=foo&with_attr=bar
# heatclient
resource-show <stack_name> <resource_name> --with-attr foo --with-attr bar
However, certain clients or scripts may want to consume a given attribute
directly. For these cases, we could also add two new endpoints: one to keep
things RESTful and return only the discoverable attributes of a resource; and
another one that would only return the value of the requested attribute.
# API
/<tenant_id>/stacks/<stack_name>/<stack_id>/resources/<resource_name>/
attributes
/<tenant_id>/stacks/<stack_name>/<stack_id>/resources/<resource_name>/
attributes/<attribute_name>
# heatclient
heat resource-attributes <stack-id> <resource-name>
heat resource-attributes <stack-id> <resource-name> <attribute-name>
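The proposed behaviour can be sketched as follows. This is an illustrative
formatter, not Heat's actual engine code: the ``FakeServer`` class, its
attribute values, and the ``refs_map`` dynamic attribute are all invented for
the example.

```python
def format_resource_attributes(res, with_attr=None):
    # Discoverable attributes come from the resource's attributes schema;
    # dynamic attributes must be requested explicitly via ?with_attr=...
    names = set(res.attributes_schema) | set(with_attr or [])
    return {name: res.FnGetAtt(name) for name in sorted(names)}


class FakeServer:
    """Stand-in for a resource with a static attributes schema plus one
    dynamic attribute not present in the schema."""
    attributes_schema = {'first_address': 'IP', 'name': 'Name'}

    def FnGetAtt(self, name):
        return {'first_address': '10.0.0.4', 'name': 'web',
                'refs_map': {'0': 'abc'}}.get(name)


base = format_resource_attributes(FakeServer())
extra = format_resource_attributes(FakeServer(), with_attr=['refs_map'])
```

A plain resource-show call would return ``base``; passing
``with_attr=refs_map`` additionally resolves the dynamic attribute.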
Alternatives
------------
Alternatively, we can keep the current resource-show behavior the same and only
add the two new endpoints to return the attribute information. This has the
benefit of being simpler to implement, as only changes to add the new endpoint
would be needed. However, the drawback is that one would have to make two
separate calls to get all the available information on a given resource: one to
resource-show and another one to resource-attributes.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
andersonvom
asifrc
Milestones
----------
Target Milestone for completion:
Kilo
Work Items
----------
* Add resource attributes to the engine API at format time.
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
===========================
Digest Intrinsic Function
===========================
https://blueprints.launchpad.net/heat/+spec/digest-intrinsic-function
It would be useful to the user to have intrinsic functions to perform digest
operations such as MD5 or SHA-512.
Problem description
===================
Certain applications require the user to provide information in a hashed format
(e.g. Chef user resources only take hashed passwords), so it would be useful to
the user to be able to use an intrinsic function to do it for them.
Proposed change
===============
Add another class to run existing digest algorithms (e.g. MD5, SHA-512, etc) on
user provided data and expose it in the HOT functions list. The class would
take the name of the digest algorithm and the value to be hashed.
Python's ``hashlib`` natively supports md5, sha1 and sha2 (sha224, 256, 384,
512) on most platforms and this will be documented as being the supported list
of algorithms. But the cloud provider may go beyond this and support more algorithms
as well, since, depending on the way Python was built, ``hashlib`` can also use
algorithms supported by OpenSSL.
Examples:
::
# raw string
gravatar: { digest: ['md5', 'sample@example.com'] }
# from a user supplied parameter
pwd_hash: { digest: ['sha512', { get_param: raw_password }] }
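The function behind such an intrinsic could be as simple as the sketch below,
built on ``hashlib.new()`` so that any algorithm available on the platform is
accepted. The function name and signature are illustrative, not the final
implementation.

```python
import hashlib


def digest(algorithm, value):
    """Return the hex digest of `value` using the named algorithm.

    Supported algorithm names are whatever hashlib.new() accepts on
    this platform (md5, sha1, sha224/256/384/512 are guaranteed).
    """
    h = hashlib.new(algorithm)
    h.update(value.encode('utf-8'))
    return h.hexdigest()


# Mirrors the template examples above.
gravatar = digest('md5', 'sample@example.com')
pwd_hash = digest('sha512', 'raw_password')
```

Unknown algorithm names raise ``ValueError`` from ``hashlib.new()``, which the
intrinsic would surface as a template validation error.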
Alternatives
------------
There's really no good alternative other than an intrinsic function for this.
Implementation
==============
Assignee(s)
-----------
andersonvom
Milestones
----------
Target Milestone for completion:
Kilo-2
Work Items
----------
- Add class to perform digest operations;
- Expose new class to HOT templates;
- Update the docs;
Dependencies
============
None.

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
================================================
Usability enhancements to the user's environment
================================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/env-nested-usability
There are a number of related enhancements that we can easily make in the
way the environment interacts with template resources; let's quickly
solve these for our users to make Heat more usable.
These issues have been raised here:
https://etherpad.openstack.org/p/heat-useablity-improvements
Problem description
===================
Here are some small problems that are related to the interaction of
the environment and template resources. They are grouped here to
reduce the red-tape.
No way to specify "global" parameters
-------------------------------------
When creating deep and/or complex compositions of multiple provider
templates, it becomes cumbersome if you end up passing a long list
of common parameters down through the "layers" via
properties/parameters. If the environment had a "global_parameters"
section, you could specify those parameters which should be visible
to not only the top-level stack, but all child stacks too.
There is no way to transparently replace a resource with a provider resource.
-----------------------------------------------------------------------------
When, for example, you replace OS::Nova::Server with
OS::My::SpecialServer via a provider resource mapped in the
environment, you can't use the overloaded special server
transparently, because when you do get_resource: special_server, you
get a nested stack ID, not the nested server ID.
Required mirroring of resource attributes.
------------------------------------------
It is a pain to require the user to mirror a nested stack's resource
attributes in the outputs so they can be referenced outside of the
nested stack. We should generate these attributes dynamically.
Proposed change
===============
1. Add the concept of parameter_defaults to the environment.
This will look like the following::
parameter_defaults:
flavor: m1.small
region: far-away
The behaviour of these parameters will be as follows:
- if there is no parameter definition for it, it will be ignored.
- these will be passed into all nested templates
- they will only be used as a default so that they can be explicitly
overridden in the "parameters" section.
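The lookup order above can be sketched as follows (function and variable
names are illustrative, not Heat's actual API):

```python
def effective_params(param_schema, parameter_defaults, parameters):
    """Resolve parameter values: explicit 'parameters' win, then
    'parameter_defaults'; names absent from the template's parameter
    definitions are silently ignored."""
    merged = {}
    for name in param_schema:
        if name in parameters:
            merged[name] = parameters[name]
        elif name in parameter_defaults:
            merged[name] = parameter_defaults[name]
    return merged


schema = ['flavor', 'region']
env_defaults = {'flavor': 'm1.small', 'region': 'far-away', 'unused': 'x'}
merged = effective_params(schema, env_defaults, {'flavor': 'm1.large'})
```

Because the defaults are applied at every nesting level, a child stack that
declares a ``flavor`` parameter picks the value up without the parent having
to pass it down explicitly.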
2. Support a specially named output to Template resources that is used
for references.
Modify the FnGetRefId of TemplateResource to look for an output called
"OS::stack_id"; if it is provided, return it, otherwise return the current
value.
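The intended resolution logic is tiny and can be sketched directly (the
free-standing function form is illustrative; in Heat this would live on
TemplateResource):

```python
STACK_ID_OUTPUT = 'OS::stack_id'


def template_resource_ref_id(outputs, nested_stack_id):
    # If the nested template declares an OS::stack_id output, references
    # to the template resource resolve to that value (e.g. the wrapped
    # server's ID) instead of the nested stack's own ID.
    if STACK_ID_OUTPUT in outputs:
        return outputs[STACK_ID_OUTPUT]
    return nested_stack_id
```

With this, ``get_resource: special_server`` on an OS::My::SpecialServer
provider transparently yields the inner server's ID.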
3. Add dynamic attributes to template resources.
A reminder of what the resource group does::
{get_attr: [a_resource_group, resource.<res number>.attr_name]}
For template resources the following will be supported::
{get_attr: [a_resource_templ, resource.<res name>.attr_name]}
To achieve this, _resolve_attribute() will be overridden to look for
"resource.<res name>" and then drill down to that resource's attribute.
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
asalkeld
Milestones
----------
Target Milestone for completion:
Kilo-2
Work Items
----------
Each item can be completed separately.
Documentation for each feature needs to be added to the template guide.
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
====================================================================
Make tempest orchestration scenario tests the heat functional tests
====================================================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/functional-tests
Having all OpenStack functional tests in tempest is no longer scalable,
so heat functional tests need to live in the heat repository.
Problem description
===================
Existing tempest orchestration scenario tests need to be moved into the heat
repository in a way which requires no dependency on the tempest code, and
which can be done with minimal development effort.
The heat gate needs to switch over to running the heat functional tests,
as well as whatever orchestration tests remain in tempest.
Proposed change
===============
The proposed plan for this work will be:
* Forklift tempest.scenario.orchestration into heat functionaltests
* Copy and modify any supporting tempest code into a subpackage of
functionaltests to make it possible for the tests to run
* Replace configuration loaded from tempest.conf with a solution which
initially requires no configuration file, specifically:
* Tests will be run with credentials sourced from the environment, which
heatclient does by default anyway
* Configuration which refers to cloud resources will hard-code values
which correspond to values set up by devstack, and tests will fail
if cloud resources with those names do not exist. This applies to
configuration values:
image_ref, keypair_name, instance_type, network_for_ssh
* build_timeout will be given a default value which is overridable from
an environment variable
* Modify devstack, devstack-gate and openstack-infra/config to check and
gate on the heat functional tests. This job will replace the current
heat-slow job
* Ensure there are no tempest.api.orchestration tests running in the heat-slow
job, specifically:
* Do not tag test_nova_keypair_resources as a slow test
* Modify test_neutron_resources to run with cirros, or rewrite it as a
functional test
* Delete the heat-slow job, and tests in tempest.scenario.orchestration
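The zero-configuration scheme above can be sketched as follows. The
environment variable name and the default values are assumptions for
illustration, not a committed interface:

```python
import os

# Hard-coded defaults corresponding to resources devstack sets up;
# tests fail if cloud resources with these names do not exist.
DEFAULTS = {
    'image_ref': 'cirros-0.3.2-x86_64-uec',  # assumed devstack image name
    'instance_type': 'm1.tiny',
    'keypair_name': 'heat_key',
    'network_for_ssh': 'private',
}


def build_timeout(environ=os.environ):
    """build_timeout is the one value overridable from the environment
    (variable name is illustrative)."""
    return int(environ.get('HEAT_BUILD_TIMEOUT', 300))
```

Credentials themselves never appear here: they are sourced from the
environment by heatclient, as the spec notes.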
Alternatives
------------
The following alternative design points could be considered:
* A dedicated conf file to replace the current tempest.conf, or read
test configuration values from heat.conf
* Failing tests instead of skipping for missing credentials or required cloud
resources
* Modifying tox.ini to filter out functional tests on a unit test run instead
of skipping based on current environment
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Steve Baker <sbaker@redhat.com>
Milestones
----------
Target Milestone for completion:
Juno-3, but work can continue during feature freeze
Work Items
----------
* Move the tempest orchestration scenario tests and supporting code into
  functionaltests
* Replace tempest.conf-based configuration as described above
* Update devstack, devstack-gate and openstack-infra/config to gate on the
  new functional test job in place of heat-slow
Dependencies
============
* devstack

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
========================
Heat-manage service list
========================
https://blueprints.launchpad.net/heat/+spec/heat-manage-service-list
Adds the ability to heat-manage command to list the running status of
heat-engines deployed in a given cloud environment.
Problem description
===================
In a given enterprise cloud environment, to support horizontal scaling,
multiple heat-engines will be deployed and executed. Once these engines are
deployed on multiple hosts, there is no way an admin can find details of
these heat-engines, such as:
* which node each heat-engine is running on,
* the running status of each engine,
* how long each heat-engine has been running successfully.
Proposed change
===============
Heat already provides the heat-manage command to take care of database syncing
and archiving. As part of this blueprint, 'service list' is added to provide
the following details:
* Heat-engine node name
* Heat-engine running status
* Heat-engine host (message queue)
* Heat-engine last updated time of running status.
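Since each engine periodically refreshes its row in the new Services table,
liveness can be derived from the last-updated timestamp. The sketch below is
illustrative: the report interval and the two-interval grace threshold are
assumptions, not decided values.

```python
from datetime import datetime, timedelta

REPORT_INTERVAL = timedelta(seconds=60)  # assumed periodic update interval


def service_status(last_updated, now, grace=2):
    """An engine is reported 'up' if it refreshed its Services row within
    `grace` report intervals; otherwise it is presumed 'down'."""
    return 'up' if now - last_updated <= grace * REPORT_INTERVAL else 'down'


now = datetime(2015, 1, 1, 12, 0, 0)
fresh = service_status(now - timedelta(seconds=30), now)
stale = service_status(now - timedelta(seconds=300), now)
```

heat-manage service list would then render one such status line per engine
row found in the database.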
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Kanagaraj Manickam (kanagaraj-manickam)
Milestones
----------
Target Milestone for completion:
Kilo-1
Work Items
----------
* Add required db migration script to add the new table 'Services'
* Add a 'Service' model in sqlalchemy and the required db api
* Update the heat-engine service to update the db at a given periodic interval
* Add 'service list' to heat.cmd.manage and its required help text
* Add heat service REST API as contrib (extension) api
* Add heat service-list command in heat CLI
* Add required test cases
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===========================================
Implement 'InstanceId' for AutoScalingGroup
===========================================
https://blueprints.launchpad.net/heat/+spec/implement-instanceid-for-autoscalinggroup
We should support the 'InstanceId' for AWS::AutoScaling::AutoScalingGroup
resource to be compatible with AWSCloudFormation.
Problem description
===================
In AWSCloudFormation, a user can specify the 'InstanceId' property if they
want to create an Auto Scaling group that uses an existing instance instead
of a launch configuration, see:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html
Now in Heat, the AWS::AutoScaling::AutoScalingGroup resource only has the
'LaunchConfigurationName' property, so it would be good to implement the
'InstanceId' property.
Proposed change
===============
1. Change 'LaunchConfigurationName' to be an optional property
2. Add 'InstanceId' property, optional and non-updatable
3. Add validation for the AWS::AutoScaling::AutoScalingGroup resource to make
sure exactly one of the two properties is chosen
4. Modify the _get_conf_properties() function
* if 'InstanceId' is specified, get the attributes of the instance, make
a temporary launch config resource, and then return the resource and its
properties.
Note that the attributes include ImageId, InstanceType, KeyName,
SecurityGroups.
* if 'InstanceId' is not specified, use the old way to get the launch
config resource and its properties.
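The validation and the instance-derived configuration can be sketched as
below. Function names are illustrative; property and attribute names follow
the spec text.

```python
def validate_launch_source(props):
    """Exactly one of LaunchConfigurationName / InstanceId must be set."""
    has_conf = bool(props.get('LaunchConfigurationName'))
    has_instance = bool(props.get('InstanceId'))
    if has_conf == has_instance:
        raise ValueError('Exactly one of LaunchConfigurationName or '
                         'InstanceId must be provided.')


def conf_properties_from_instance(instance_attrs):
    """Build the temporary launch-config properties from an existing
    instance's attributes."""
    return {key: instance_attrs[key]
            for key in ('ImageId', 'InstanceType', 'KeyName',
                        'SecurityGroups')}
```

``_get_conf_properties()`` would call the second helper only in the
'InstanceId' branch; the existing launch-configuration path is untouched.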
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
huangtianhua <huangtianhua@huawei.com>
Milestones
----------
Target Milestone for completion:
Kilo-1
Work Items
----------
* Support the 'InstanceId' property
* Add UT/Tempest for the change
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==============================================
Implement 'InstanceId' for LaunchConfiguration
==============================================
https://blueprints.launchpad.net/heat/+spec/implement-instanceid-for-launchconfiguration
We should support the 'InstanceId' for AWS::AutoScaling::LaunchConfiguration
resource to be compatible with AWSCloudFormation.
Problem description
===================
In AWSCloudFormation, a user can specify the 'InstanceId' property if they
want the launch configuration to use settings from an existing instance, see:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/create-lc-with-instanceID.html
It would be good to implement the 'InstanceId' property to be compatible with
AWSCloudFormation.
Proposed change
===============
1. Add 'InstanceId' property, optional and non-updatable
2. Change 'ImageId' and 'InstanceType' properties to optional
3. Add validation of 'InstanceId', 'ImageId' and 'InstanceType': if
'InstanceId' is not specified, the other two properties are required
4. According to the AWS developer guide and implementation, allow three cases:
* Without 'InstanceId', the 'ImageId' and 'InstanceType' properties must
be specified; the new launch configuration is created the old way.
* If only 'InstanceId' is specified, the new launch configuration takes
its 'ImageId', 'InstanceType', 'KeyName', and 'SecurityGroups'
attributes from the instance.
* If 'InstanceId' is specified along with other properties, those
properties override the attributes taken from the instance.
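The three cases reduce to a small merge, sketched below with an illustrative
function name; attribute names follow the spec text.

```python
INHERITED_KEYS = ('ImageId', 'InstanceType', 'KeyName', 'SecurityGroups')


def launch_config_properties(props, instance_attrs):
    """Resolve the effective launch-config properties for the three cases:
    no InstanceId -> use the given properties; InstanceId only -> inherit
    from the instance; InstanceId plus explicit properties -> explicit
    values override inherited ones."""
    if not props.get('InstanceId'):
        return {k: v for k, v in props.items() if v is not None}
    inherited = {k: instance_attrs[k] for k in INHERITED_KEYS}
    overrides = {k: v for k, v in props.items()
                 if k != 'InstanceId' and v is not None}
    inherited.update(overrides)
    return inherited
```

The override semantics mirror AWS: an explicit 'InstanceType' beats the type
read from the referenced instance.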
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
huangtianhua <huangtianhua@huawei.com>
Milestones
----------
Target Milestone for completion:
Kilo-1
Work Items
----------
* Support the 'InstanceId' property
* Add UT/Tempest for the change
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
=========================
Native Keystone Resources
=========================
https://blueprints.launchpad.net/heat/+spec/keystone-resources
Problem description
===================
Some cloud operators would like to be able to use Heat templates to manage
users, projects and roles in Keystone. Currently we can only create users, and
only through an AWS IAM resource type.
This was discussed on the mailing list in the thread beginning here:
http://lists.openstack.org/pipermail/openstack-dev/2015-January/055554.html
Proposed change
===============
Implement the following native resource types:
* OS::Keystone::User
* name (optional - defaults to self.physical_resource_name())
* default_project_id (optional)
* email (optional)
* domain (optional)
* password (optional)
* enabled (default True)
* groups (list)
* roles (list)
* domain (optional - domain or project or both must be specified)
* project (optional - domain or project or both must be specified)
* role
* OS::Keystone::Group
* name (optional - defaults to self.physical_resource_name())
* domain (optional)
* description (optional)
* roles (list)
* domain (optional - domain or project or both must be specified)
* project (optional - domain or project or both must be specified)
* role
* OS::Keystone::Role
* name (optional - defaults to self.physical_resource_name())
* OS::Keystone::Project
* name (optional - defaults to self.physical_resource_name())
* domain (optional)
* description (optional)
* enabled (default True)
Since in the default policy.json configuration these APIs are available only to
administrative users, the plugin would be in the /contrib tree and not
installed by default.
Alternatives
------------
Another possible data model would be to have a separate RoleAssignment resource
(or similar) to grant roles to users or groups, rather that having the roles
listed in the user or group resources. A similar thing could apply to the list
of users in a group, which could be implemented instead as a GroupMembership
resource.
However, there are a couple of problems with that data model. The first is that
adding a user to a group or granting a role to a user/group does not create a
physical resource with its own UUID. This makes it difficult for Heat to manage
the resources.
The second issue is that such an approach tends to create dependency problems
for users - for example in this model if another resource depends on a User,
then Heat may begin creating it before the User has been assigned a Role that
it may need to perform the operation. This is possibly less of an issue with
Keystone resources than it has proven with Neutron resources, but it is a known
anti-pattern in Heat data modelling.
A similar issue occurs with Users and Groups - an alternative implementation
would be for the Group definition to contain a list of Users rather than for
the User definition to contain a list of Groups. The advantage of that is that
it more closely follows how the API is implemented, but this way was chosen
because it is more likely to automatically generate correct dependencies:
anything that depends on a User will always wait for all groups to be assigned.
Both approaches are likely to make some (different) subset of use cases
awkward, but the only solution would be a separate GroupMembership resource
type, and that would suffer from all of the problems with a RoleAssignment
discussed above.
Implementation
==============
Assignee(s)
-----------
Primary Assignee:
kanagaraj-manickam
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- User plugin
- Project plugin
- Group plugin
- Role plugin
- Custom constraint for keystone.project
- Custom constraint for keystone.group
- Custom constraint for keystone.role
- Custom constraint for keystone.domain
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===========================
Implement Mistral resources
===========================
https://blueprints.launchpad.net/heat/+spec/mistral-resources-for-heat
Add support for Mistral resources, which will allow users to create and
execute workflows.
Problem description
===================
Heat doesn't support Mistral resources currently.
Mistral is a task management service, also known as Workflow as a Service.
The resources to be added to Heat enable new possibilities:
* Workflows, which contain different tasks for execution.
* Actions, which are particular instructions associated with a task that
need to be performed once the task's dependencies are satisfied.
* CronTriggers, which make it possible to run workflows according to
specific rules: periodically by setting a cron pattern, or on external
events like a Ceilometer alarm.
* Executions, which allow given Workflows to be executed.
Proposed change
===============
Mistral resources are not integrated yet, so they will be added to the contrib
directory.
A Mistral client plugin will be added for communication with Mistral, which
has its own requirements. The following resources will be added with the
syntax below:
Add the OS::Mistral::Workflow resource, like this:
.. code-block:: yaml
resources:
workflow:
type: OS::Mistral::Workflow
properties:
definition: |
workflow_name:
type: String
description: String
input: [Value, Value, ...]
output: { ... }
on-success: [Value, Value, ...]
on-error: [Value, Value, ...]
on-complete: [Value, Value, ...]
policies: { ... }
tasks: { ... }
input: { ... }
The 'definition' property relies on Mistral DSL v2.
Add the OS::Mistral::CronTrigger resource, like this:
.. code-block:: yaml
resources:
cronTrigger:
type: OS::Mistral::CronTrigger
properties:
name: my_cron_trigger
pattern: 1 0 * * *
workflow:
name: String
input: { ... }
There are some use cases which should be described:
1. To create and execute a workflow, follow these steps: first, create a
template with OS::Mistral::Workflow:
.. code-block:: yaml
heat_template_version: 2013-05-23
resources:
workflow:
type: OS::Mistral::Workflow
properties:
definition: |
test:
type: direct
tasks:
hello:
action: std.echo output='Hello'
publish:
result: $
Once the stack is created, run the following command to execute the workflow::
heat resource-signal stack_name workflow_name \
-D 'Json-type execution input'
The execution state will be available in the 'executions' attribute as a dict.
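Signalling the workflow resource maps naturally onto creating a Mistral
execution and tracking its state. The sketch below is illustrative only: the
class, method names, and execution-ID format are invented, not the actual
resource plug-in.

```python
class WorkflowResource:
    """Minimal model of how signalling the resource could start an
    execution and expose its state via an 'executions' attribute."""

    def __init__(self, name):
        self.name = name
        self._executions = {}

    def handle_signal(self, details):
        # Each signal starts a new execution with the given JSON input.
        execution_id = 'exec-%d' % (len(self._executions) + 1)
        self._executions[execution_id] = {
            'input': details or {}, 'state': 'RUNNING'}
        return execution_id

    @property
    def executions(self):
        # Resolved by get_attr: [workflow, executions] in the template.
        return dict(self._executions)


wf = WorkflowResource('test')
eid = wf.handle_signal({'vm_name': 'test'})
```

In the real plug-in the state would be refreshed from the Mistral API rather
than stored locally, but the attribute shape would be the same dict-of-dicts.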
2. Compatibility with Ceilometer alarms, i.e. using a webhook URL to execute
the workflow:
.. code-block:: yaml
heat_template_version: 2013-05-23
resources:
workflow:
type: OS::Mistral::Workflow
properties:
definition: |
test:
type: direct
tasks:
alarm_hello:
action: std.echo output='Alarm!'
publish:
result: $
alarm:
type: OS::Ceilometer::Alarm
properties:
meter_name: cpu_util
statistic: avg
period: 60
evaluation_periods: 1
threshold: 0
alarm_actions:
- { get_attr: [workflow, alarm_url] }
comparison_operator: ge
outputs:
executions:
value: { get_attr: [workflow, executions] }
workflows:
value: { get_attr: [workflow, available_workflows] }
In the template described above, the workflow begins executing when the alarm
goes to the 'alarm' state. The 'executions' output contains a dict with info
about all executions belonging to the workflow. The 'workflows' output
contains a dict with the names of all workflows belonging to the resource,
e.g. {'test': 'stack_name.workflow.test'}.
3. Using a cron trigger in a template. Given a workflow definition file named 'wfdef.yaml':
.. code-block:: yaml
version: 2.0
create_vm:
type: direct
input:
- vm_name
- image_ref
- flavor_ref
output:
vm_id: $.vm_id
tasks:
create_server:
action: >
nova.servers_create name={$.vm_name} image={$.image_ref}
flavor={$.flavor_ref}
publish:
vm_id: $.id
on-success:
- check_server_exists
check_server_exists:
action: nova.servers_get server={$.vm_id}
publish:
server_exists: True
on-success:
- wait_instance
wait_instance:
action: nova.servers_find id={$.vm_id} status='ACTIVE'
policies:
retry:
delay: 5
count: 15
This definition is used in a template which also has a cron trigger
resource:
.. code-block:: yaml
heat_template_version: 2013-05-23
resources:
workflow:
type: OS::Mistral::Workflow
properties:
definition: { get_file: wfdef.yaml }
input:
vm_name: test
image_ref: some_image_id
flavor_ref: some_flavor_id
cron_trigger:
type: OS::Mistral::CronTrigger
properties:
name: test_trigger
pattern: 1 0 * * *
workflow: { get_attr: [workflow, available_workflows, create_vm]}
Note that 'name' is an optional property.
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
<prazumovsky>
Assisted by:
<tlashchova>
Milestones
----------
Target Milestone for completion:
Kilo-2
Work Items
----------
* Add Mistral client plugin for Heat
* Add Mistral workflow resource
* Add Mistral cron trigger resource
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
==================================
Optimize Nova Custom Constraints
==================================
https://blueprints.launchpad.net/heat/+spec/nova-custom-constraints
Optimize Nova Custom Constraints, add/apply nova server constraint,
and apply nova flavor constraint.
Problem description
===================
1. Many resources have an InstanceId/Server property which refers to a nova
server, but there is currently no nova server constraint.
2. The nova flavor custom constraint is defined but never applied.
Proposed change
===============
1. Add a nova server custom constraint and apply it to resources.
2. Move the nova keypair and flavor custom constraints to nova.py, so that
all nova custom constraints are defined together.
3. Apply the nova flavor constraint to resources.
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
huangtianhua@huawei.com
Milestones
----------
Target Milestone for completion:
Kilo-1
Work Items
----------
1. Add/Apply nova server custom constraint.
2. Move nova keypair and flavor custom constraints to nova.py.
3. Apply nova flavor constraints for resources.
4. Add UT/Tempest tests for all the changes.
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
====================================================
Reorganize the code structure of resources folder
====================================================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/reorganize-resources-code-structure
Reorganize the resources code structure to make it clearer.
Problem description
===================
The code structure of the resources folder is currently disorganized.
Proposed change
===============
The new code structure will be::
heat
|----engine
|----resources
|----aws
|----ec2
|----res1
|----res2
|----autoscaling
|----res1
|----res2
|----openstack
|----nova
|----res1
|----res2
|----neutron
|----res1
|----res2
|----cinder
|----res1
|----res2
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
huangtianhua <huangtianhua@huawei.com>
Milestones
----------
Target Milestone for completion:
Kilo-2
Work Items
----------
* Move the AWS resources into the resources/aws folder
* Move the OpenStack resources into the resources/openstack folder
Dependencies
============
https://blueprints.launchpad.net/heat/+spec/decouple-aws-os-resources

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===================================
"repeat" function for HOT templates
===================================
https://blueprints.launchpad.net/heat/+spec/repeat-function
This specification introduces a "repeat" control structure for HOT
templates.
Problem description
===================
Parameters of type "comma_delimited_list" are useful to define lists of items,
but the HOT template syntax does not provide any way to map or transform those
items.
For example, consider the use of a parameter to specify a list of ports to
include in a security group::
parameters:
ports:
type: comma_delimited_list
label: ports
default: "80,443,8080"
The desired outcome, which is currently not possible to obtain, is that the
above parameter can be used to construct a resource as follows::
resources:
security_group:
type: OS::Neutron::SecurityGroup
properties:
name: web_server_security_group
rules:
- protocol: tcp
port_range_min: 80
port_range_max: 80
- protocol: tcp
port_range_min: 443
port_range_max: 443
- protocol: tcp
port_range_min: 8080
port_range_max: 8080
Proposed change
===============
This proposal introduces a new function called ``repeat`` that iterates over
the elements of a list, replacing each item into a given template.
Following the security group example from the previous section, the
``repeat`` function would be used as follows::
resources:
security_group:
type: OS::Neutron::SecurityGroup
properties:
name: web_server_security_group
rules:
repeat:
for_each:
<%port%>: { get_param: ports }
template:
protocol: tcp
port_range_min: <%port%>
port_range_max: <%port%>
Below is another example in which this function enables a solution that is
currently impossible to implement::
resources:
my_server:
type: OS::Nova::Server
properties:
networks:
repeat:
for_each:
<%net_name%>: { get_param: networks }
template:
network: <%net_name%>
In this example a list of networks that an instance needs to be attached to is
given as a list in a parameter.
Another interesting possibility is to generate permutations of two or more
lists. For example, the security group example above can be extended to also
support parametrized protocols as follows::
resources:
security_group:
type: OS::Neutron::SecurityGroup
properties:
name: web_server_security_group
rules:
repeat:
for_each:
<%port%>: { get_param: ports }
<%protocol%>: { get_param: protocols }
template:
protocol: <%protocol%>
port_range_min: <%port%>
The ``for_each`` argument specifies the loop variable and the list to
iterate on as a key-value pair. The loop variable has to be chosen carefully,
as any occurrences will be replaced with each of the items in the list in each
iteration.
If more than one key/value pair is included in the ``for_each`` section, then
the iterations are done over all the permutations of the elements in
the given lists, similar to how nested loops work in most programming
languages.
The result of the ``repeat`` function is a new list, with its elements set to
the data generated in each of the loop iterations. When a single list is given,
the size of the resulting list is equal to the size of the input list. When
multiple lists are given as input, the size of the resulting list will be
equal to the product of the sizes of all the input lists.
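The expansion semantics described above can be sketched in Python (the function name and the naive string-substitution mechanics here are illustrative, not the actual Heat implementation):

```python
import itertools
import json


def repeat(for_each, template):
    """Expand `template` once per combination of the `for_each` lists.

    With one list the result has one element per item; with several
    lists it covers their Cartesian product, like nested loops.
    """
    names = list(for_each)
    result = []
    for combo in itertools.product(*(for_each[n] for n in names)):
        text = json.dumps(template)
        for name, value in zip(names, combo):
            # Naive textual substitution of the loop variable.
            text = text.replace(name, str(value))
        result.append(json.loads(text))
    return result


rules = repeat(
    for_each={"<%port%>": [80, 443, 8080]},
    template={"protocol": "tcp",
              "port_range_min": "<%port%>",
              "port_range_max": "<%port%>"},
)
# One rule per port; values are substituted as strings in this sketch.
```

Two loop variables over lists of sizes 2 and 3 would produce 6 elements, matching the nested-loop behaviour described above.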
Alternatives
------------
An alternative that was explored was to extend the ``str_replace`` function to
accommodate this functionality, but in the end it was agreed that there are
significant differences between the two usages.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
miguelgrinberg
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
* Write the ``repeat`` function.
* Documentation.
* Unit tests.
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html To test out your
formatting, see http://www.tele3.cz/jbar/rest/rest.html
======================================
OS::Nova::Server rich network property
======================================
https://blueprints.launchpad.net/heat/+spec/rich-network-prop
Allowing port and floating IP association properties to be specified within an
OS::Nova::Server resource works around issues with nova port management, provides
users with a simpler way of implementing a common pattern, and allows more
templates to be neutron/nova-networking agnostic.
Problem description
===================
There are a number of issues with OS::Neutron::Port which are currently
difficult to work around, including:
1. On server delete Nova deletes all ports, even those which were created
before the nova boot (https://bugs.launchpad.net/nova/+bug/1158684).
This causes issues on a stack-update since the port underlying a given
port resource may no longer exist once the server has been replaced.
2. A port's relationship with a server is exclusive, however a stack-update
which replaces a server will have the old and new servers attempting to
attach to the same port resource.
3. Two ports with the same fixed IP address cannot exist on the same network,
so a stack-update which results in a port being replaced will fail
unless the fixed IP address is changed too.
4. OS::Neutron::Port has a top-level ``network`` property but the ``subnet``
is inside the ``fixed_ips`` property. If a network has multiple subnets
and the port resource does not specify which subnet then neutron assigns
the port to a non-deterministic subnet.
5. Users can avoid the above problems if they don't define an OS::Neutron::Port
resource, but they must if they want to define a neutron floating IP
association. Server+port+floating-ip is such a common pattern that users
would benefit from being able to define all this in the server resource.
6. Likewise any template which associates servers with floating IPs will only
work on either a neutron or nova-networking OpenStack.
Proposed change
===============
OS::Nova::Server has a networks property which allows a list of maps, where
the map key is one of ``fixed_ip``, ``network``, ``port`` or ``uuid`` which
map directly to nova boot nic options.
The proposed change is that new keys will be added to this map to support
fully describing ports within the ``networks`` items. The server resource
will take responsibility for creating and managing the port rather than
allowing nova to create the port implicitly.
* ``network`` *existing key* Name or UUID of network to create the nic on.
Applies to neutron and nova-network.
* ``fixed_ip`` *existing key* Optional fixed IP address to assign to the
nic. Applies to neutron and nova-network.
* ``subnet`` *new key* Name or UUID of neutron subnet to create the nic on.
If specified then ``network`` is optional. If ``network`` is also specified
then validation will confirm whether the subnet belongs to the network.
Applies to neutron only.
* ``floating_ip`` *new key* ID of the floating IP to assign to this networks
entry. The value can be a ref to an ``OS::Neutron::FloatingIP`` or
``OS::Nova::FloatingIP`` resource, or a string (e.g. from a parameter)
identifying an already existing floating IP. This property replaces
OS::Neutron::FloatingIPAssociation and OS::Nova::FloatingIPAssociation, so
those association resources, which don't represent *real* resources, can be
deprecated. Applies to neutron and nova-network.
* ``port_extra_properties`` *new key* Map of extra values to pass to neutron
port creation which are not covered by the above or the derived properties.
Applies to neutron only.
The implementation will be in the Server resource and will have different
paths based on ``self.is_using_neutron``.
Validation will be performed so that an error is raised if a value is set
that is not supported by nova-networking.
Server create in the neutron path will do the following:
* Limit the port to have at most one fixed_ip (neutron ports allow multiple
fixed_ips). Users who require multiple fixed IPs can still create a full
port resource.
* Derive the security groups from the Server property security_groups. This
means that all created ports will be assigned to the same list of security
groups.
* Derive the port name from the server name and the networks list position
* Create a port based on the passed and derived properties, and add that
``port-id`` to the nova ``nics`` list.
* Store the port-id for each created port in the resource data
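The derivation steps above can be sketched as a small helper (the function name, argument shapes, and derived-name format are assumptions for illustration, not the actual Server resource code):

```python
def derive_port_props(server_name, index, net_item, security_groups):
    """Derive neutron port-create properties from one `networks` item.

    Follows the steps above: derived port name, the server's security
    groups, at most one fixed IP, and port_extra_properties pass-through.
    """
    props = {
        "name": "%s-port-%d" % (server_name, index),  # assumed name format
        "network_id": net_item.get("network"),
        "security_groups": security_groups,
    }
    fixed_ip = {}
    if net_item.get("fixed_ip"):
        fixed_ip["ip_address"] = net_item["fixed_ip"]
    if net_item.get("subnet"):
        fixed_ip["subnet_id"] = net_item["subnet"]
    if fixed_ip:
        props["fixed_ips"] = [fixed_ip]  # limited to one fixed IP
    # Extra values not covered by the derived properties.
    props.update(net_item.get("port_extra_properties", {}))
    return props
```

Each resulting property map would be passed to a neutron port create, and the returned port-id appended to the nova ``nics`` list and stored in resource data.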
Resource update in the neutron path will do the following:
* Calculate which ports to update, which to create and which to delete
Resource delete in the neutron path will delete any ports stored in the
resource data.
Special handling will be required for the following case:
* A stack update results in server replacement, and
* One of the ``networks`` items has specified a fixed_ip which doesn't change
In this case the handle_delete of the old server and the handle_create of the
new server will need to interact to allow the new port to be assigned the
fixed_ip which is assigned to the old port. Assigning back to the old port
may be required on rollback too.
Alternatives
------------
An alternative is to wait for bug #1158684 to be fixed in Nova, and make any
other necessary changes to OS::Neutron::Port and OS::Nova::Server to mitigate
the items listed in the `Problem description`_ (items 4, 5 and 6 likely
wouldn't be addressed).
Implementation
==============
Assignee(s)
-----------
This blueprint needs a primary author to adopt it. Steve Baker will provide
implementation and review assistance if required.
Primary assignee:
<skraynev>
Assisted by:
<steve-stevebaker>
Milestones
----------
Target Milestone for completion:
Kilo-2
Work Items
----------
The steps mentioned in the Proposed change section describe the list of work
items.
Dependencies
============
There are no blueprint or library dependencies for this blueprint.

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html To test out your
formatting, see http://www.tele3.cz/jbar/rest/rest.html
==========================================
Software config notify deployment progress
==========================================
https://blueprints.launchpad.net/heat/+spec/software-config-progress
Currently when a deployment resource remains IN_PROGRESS there is no way of
knowing whether configuration is taking a long time, or if an unrelated
problem occurred before or after. The only option is to ssh into a server to
diagnose the issue. This blueprint proposes that the server signal to heat
when any deployment activity starts.
Problem description
===================
Currently when a deployment resource remains IN_PROGRESS the configuration may
be taking a legitimately long time. In other cases there may be a failure due
to one of the following problems.
The potential problems during server boot include:
1. Nova says the server has booted but the image failed to actually boot
2. The server booted, but was not successfully assigned an IP address
3. Nova metadata server cannot be reached on boot to poll for initial metadata
The potential problems which occur after boot but before a specific deployment
is executed include:
4. Misconfiguration in the installed server agent, hooks and config tools
5. Failure to poll deployment metadata from heat (or other configured polling
source)
And finally the potential problems when actually executing the deployment:
6. Inability for the server to signal the results back to heat, either due to
authentication or connectivity issues.
Currently there is no feedback that the actual deployment has started. If the
user had earlier feedback that deployment has started then they can eliminate
the above six failures as the cause of the deployment being IN_PROGRESS.
Proposed change
===============
Currently SoftwareDeployment.signal assumes that as soon as a signal is
received the deployment task is complete. This will be changed so that the
signal details are checked for extra data which indicates that this is an
IN_PROGRESS signal rather than a COMPLETE/FAILED signal. The software-config
hooks will be modified to send an IN_PROGRESS signal before they start the
deployment task.
The signal details are currently a JSON map with entries for each output
value, plus ``deploy_stdout``, ``deploy_stderr`` and ``deploy_status_code``.
Two new entries will be expected, ``deploy_status`` and
``deploy_status_reason``. SoftwareDeployment.signal will check for this and
if ``deploy_status`` is ``IN_PROGRESS`` then the deployment resource will
remain in an IN_PROGRESS state. However there will be a resource event
generated to give the user some feedback that their deployment task has
started.
Backwards-compatibility concerns need to be considered both with old images
running on new heat, and new images running on old heat.
Old image, new heat
-------------------
There is nothing special to consider here. The server will not signal heat
that a deployment is starting, but the deployment resource will already be in
an IN_PROGRESS state. The only implication is that the user will not see the
extra IN_PROGRESS event telling them that the deployment has started.
New image, old heat
-------------------
Since old heat assumes that the deployment is complete as soon as a signal is
received, the hooks need to suppress sending any IN_PROGRESS signals. This
can be achieved by the hooks checking for the input ``deploy_status_aware``
being set to ``true``. Only new heat will set this input value to ``true`` so
the hook can check this input and behave accordingly.
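The hook-side behaviour for both compatibility cases can be sketched as follows (names beyond ``deploy_status``, ``deploy_status_reason`` and ``deploy_status_aware`` are illustrative, not the actual hook code):

```python
def signals_for_deployment(inputs, succeeded, reason=""):
    """Return the signals a hook would send for one deployment.

    An IN_PROGRESS signal is only sent when heat has declared itself
    deploy_status-aware; old heat treats any signal as completion, so
    the extra signal must be suppressed for it.
    """
    signals = []
    if inputs.get("deploy_status_aware") == "true":
        signals.append({"deploy_status": "IN_PROGRESS",
                        "deploy_status_reason": "Deploy started"})
    signals.append({
        "deploy_status": "COMPLETE" if succeeded else "FAILED",
        "deploy_status_reason": reason or "Deploy finished",
    })
    return signals
```

On the heat side, SoftwareDeployment.signal would inspect ``deploy_status`` in the signal details and, for IN_PROGRESS, generate a resource event while leaving the resource state unchanged.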
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
<steve-stevebaker>
Milestones
----------
Target Milestone for completion:
Kilo-1
Work Items
----------
* Modify SoftwareDeployment.signal to recognize IN_PROGRESS signals and
generate a resource event for them.
* Modify the software-config hooks to send an IN_PROGRESS signal before
starting the deployment task, gated on the ``deploy_status_aware`` input.
Dependencies
============
There are no blueprint or library dependencies for this blueprint

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
================================================
Signaling SoftwareDeployment resources via Swift
================================================
https://blueprints.launchpad.net/heat/+spec/software-config-swift-signal
Currently the only option for signaling a deployment resource requires making
an authenticated API request to heat. A SoftwareDeployment resource
signal_transport: TEMP_URL_POLL will allow unauthenticated signaling using a
similar approach to OS::Heat::SwiftSignal.
Problem description
===================
OS::Heat::SoftwareDeployment signal_transport options currently both require
resource scoped credentials and network connectivity from the server to a
heat API to work.
Proposed change
===============
Like OS::Heat::SwiftSignal, signal_transport:TEMP_URL_POLL would create a
long-lived swift TempURL which is polled by heat until the object contains
the expected data from the nova server performing the configuration
deployment. Initially, "long-lived" will mean expiring in year 2038.
Implementing a signal_transport:TEMP_URL_POLL would have the following
benefits:
* Each OS::Heat::SoftwareDeployment resource would not need to create a
stack user
* Making swift objects accessible from nova servers is more likely to be
provided for by the cloud operator, compared to access to keystone and heat
APIs.
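The long-lived TempURL itself can be produced with the standard Swift TempURL signing scheme; a minimal sketch (the account key and object path are placeholders):

```python
import hmac
from hashlib import sha1


def make_temp_url(key, method, path, expires):
    """Build a Swift TempURL path using the TempURL middleware scheme.

    `path` is the object path, e.g. '/v1/AUTH_acct/container/object';
    `expires` is a Unix timestamp -- 2147483647 is early 2038, the
    "long-lived" expiry mentioned above.
    """
    body = "%s\n%d\n%s" % (method, expires, path)
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return "%s?temp_url_sig=%s&temp_url_expires=%d" % (path, sig, expires)


url = make_temp_url("secret-key", "PUT", "/v1/AUTH_demo/c/signal", 2147483647)
```

Heat would poll a GET TempURL for the same object until the server's hook PUTs the signal data to it.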
Also, heat.conf default_software_config_transport option will be added so that
operators can choose the most appropriate transport for their cloud. Choosing
the default will depend on whether the cloud supports keystone v3, swift and
the cloudformation endpoint.
Alternatives
------------
Blueprint software-config-zaqar will implement signal_transport:ZAQAR_MESSAGE,
which would be the preference for clouds which offer a zaqar endpoint. Since
Swift is much more widely deployed than Zaqar, however, TEMP_URL_POLL remains
worth implementing and will be the recommended option on clouds without Zaqar.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
<steve-stevebaker>
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
* Implement TempURL creation and polling in SoftwareDeployment
* Implement TempURL POSTing in heat-templates 55-heat-config (may not be
required if interface is identical to CFN_SIGNAL)
* Document implications for using TEMP_URL_POLL and AUTO in the
software-deployment section of the hot-guide.
Dependencies
============
No dependencies on new libraries or existing blueprints.

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
================================================
Trigger an action in a software-config component
================================================
https://blueprints.launchpad.net/heat/+spec/software-config-trigger
OS::Heat::SoftwareComponent now allows non-lifecycle configs to be specified.
This feature will make it possible to trigger these configs and monitor their
progress and results.
Problem description
===================
OS::Heat::SoftwareComponent can specify configs to execute for stack lifecycle
actions (CREATE, DELETE, etc.) but it can also specify arbitrary non-lifecycle
actions (e.g. BACKUP, FOO, BAR). However there is currently no obvious way to
trigger these non-lifecycle actions. Once this feature is complete it should
be possible to use the heat CLI tool to do the following:
* Trigger a single config already defined in a OS::Heat::SoftwareComponent
resource
* Monitor the progress of a triggered config
* View the resulting outputs of a triggered config
* Cancel the in-progress state of a triggered config
Proposed change
===============
Hypothetically it is already possible to trigger a single action config in a
SoftwareComponent by interacting directly with the REST API, however there is
no way to receive the results of this trigger.
Consider a SoftwareComponent which defines a config that runs on the action
BACKUP. Once stack creation is complete the following would have happened:
* Config created containing the component configs, including the BACKUP
action config
* Derived-config created, which will add the deployment extra inputs etc
provided by the deployment resource
* Deployment created which associates the derived-config with the nova server
Now to trigger BACKUP on a given server in the stack (optionally with some
extra input values set), REST API calls can be made to:
* Fetch the original config, modify the input values (if necessary), then
create a derived-config. This leaves the stack-managed
derived-config resource untouched.
* Create a swift TempURL to store the signal from the server.
* Create a trigger deployment, specifying the derived-config, the
server, and the action BACKUP. The name of the trigger deployment is
derived from the original deployment, plus the action name (BACKUP)
The above will all be performed by a single `heat deployment-create` command
where the user can specify all the values required to create a deployment,
including the config, server, name, action, overridden input values, etc.
Changes will be required to move some OS::Heat::SoftwareDeployment logic into
the deployment create call itself.
This blueprint will also depend on blueprint software-config-swift-signal
since there will need to be a signal store which is not coupled with any
stack or resources.
python-heatclient will need to be modified so that all software-config and
deployment operations can be done from the command line. New convenience
commands will also be added to trigger and monitor a single action in a
component.
This could also be an appropriate umbrella blueprint to switch to using RPC
instead of full REST calls for when config and deployment resources call
config and deployment APIs.
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
<steve-stevebaker>
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
Currently python-heatclient lacks any cli commands to manage software configs
and deployments. A prerequisite for this change is cli support for
interacting with the existing config and deployment REST API, including
* Creating a software-config
* Showing a software-config
* Deleting a software-config
* Creating a software-deployment
* Showing a software-deployment
* Deleting a software-deployment
* Listing software-deployments for a given server
Once these have been implemented, new convenience commands will also be added
to trigger and monitor a single action in a component.
In heat, the following changes would be required:
* Move some OS::Heat::SoftwareDeployment logic into the deployment create call
itself. Specifically, creating the derived config and the deployment could
be combined in EngineService.create_software_deployment.
* Modify EngineService.resource_signal so that some signal calls get
redirected to a new method EngineService.signal_software_deployment
* Functional tests to confirm the above can be used.
Dependencies
============
Not a hard dependency, but this would benefit from blueprint
software-config-progress being implemented to provide the user with feedback
that their config trigger has started.
If it is deemed inappropriate to modify EngineService.resource_signal then
some alternative external polling based signaling would be required, as
provided by blueprint software-config-swift-signal or blueprint
software-config-zaqar.

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
====================================================
Use Zaqar for software-config metadata and signaling
====================================================
https://blueprints.launchpad.net/heat/+spec/software-config-zaqar
Zaqar provides a simple messaging service which allows heat and orchestrated
services to efficiently communicate with each other, which makes it ideal for
software-config metadata distribution and signaling.
Problem description
===================
There are a number of areas where having a messaging service like Zaqar
available can benefit Heat. Two of these are:
* Propagating server configuration metadata from heat to the servers
* Signaling from servers to heat that a software configuration event has
occurred, with associated data.
Like OS::Nova::Server software_config_transport:POLL_TEMP_URL this will stop
servers from polling heat directly for metadata delivery which will improve
heat scalability.
Proposed change
===============
For OS::Nova::Server software_config_transport:ZAQAR_MESSAGE create a queue
dedicated to publishing metadata changes from heat to one server.
os-collect-config will need a collector which consumes messages from this
queue.
For OS::Heat::SoftwareDeployment signal_transport:ZAQAR_MESSAGE create a queue
dedicated to one server signalling configuration results to one deployment
resource. heat-templates 55-heat-config will need to be modified to depend on
python-zaqarclient and push to the queue if the required deploy input values
indicate that a queue is configured.
Just like signal_transport:HEAT_SIGNAL and
software_config_transport:POLL_SERVER_HEAT, stack users will be created for
the deployment and server resources and the credentials for those users will
be given to the server. If and when Zaqar allows reading and
writing messages to signed webhooks then we can consider switching to this so
that it is not necessary to create the stack users.
signal_transport:AUTO will be modified so that ZAQAR_MESSAGE is the preferred
method if there is a configured messaging endpoint.
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
This blueprint currently has no engineer assigned to it
Primary assignee:
<None>
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
* Implement OS::Nova::Server software_config_transport:ZAQAR_MESSAGE
* Implement OS::Heat::SoftwareDeployment signal_transport:ZAQAR_MESSAGE
* Write a Zaqar collector for os-collect-config
* Modify software-config os-refresh-config hook to use zaqar to push
deployment signal data
Dependencies
============
python-zaqarclient will be added to heat/requirements.txt (this is already a
requirement for the zaqar contrib resource)
python-zaqarclient will become a requirement in os-collect-config and the
heat-templates heat-config element.
This could be done after blueprint software-config-trigger since that includes
some refactoring which includes moving signal_transport logic from the
resource to the deployments REST API.

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=========================================
Stack lifecycle scheduler hint blueprint
=========================================
https://blueprints.launchpad.net/heat/+spec/stack-lifecycle-scheduler-hint
A heat provider may have a need for custom code to examine stack requests
prior to performing the operations to create or update a stack. After the
custom code completes, the provider may want to provide hints to the nova
scheduler with stack related identifiers, for processing by any custom
scheduler plug-ins configured for nova.
Problem description
===================
A heat provider may have a need for custom code to examine stack
requests prior to performing the operations to create or update a stack.
An example would be a holistic scheduler that schedules a stack's member
compute resources as a group. This would be done using a custom plugin
invoked through the stack lifecycle plugpoint. After the custom code
completes, when the create or update is being processed, any custom
schedulers configured for nova would need to map nova create requests
back to any decisions made during the call to the custom stack
lifecycle plugin. Current heat includes no identifiers in a nova
create request that can be used to map back to a Server or Instance
resource within a heat stack.
It is out of scope for this spec, but worth noting that cinder scheduler
hints are now supported by heat and may need similar treatment. See
https://review.openstack.org/#/c/126282/ and
https://review.openstack.org/#/c/126298/
Proposed change
===============
When heat processes a stack, and the feature is enabled,
the stack id, root stack id, stack resource id,
stack resource name and the path in the stack (as a list of tuples,
(stackresourcename, stackname)) will be passed to nova by heat as
scheduler hints, to the configured schedulers for nova.
The behavior changes will be optional, default disabled, and controlled
through a new heat config variable.
These five scheduler hints will be added to server creates done using
either resource class Server (OS::Nova::Server) or resource class
Instance (AWS::EC2::Instance). heat_root_stack_id will be set to the
id of the root stack of the resource, heat_stack_id will be
set to the id of the resource's parent stack,
heat_stack_name will be set to the name of the resource's
parent stack, heat_path_in_stack will be set to a list of
tuples, (stackresourcename, stackname), with list[0] being
(None, rootstackname), and heat_resource_name will be set to
the resource's name.
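The shape of the hints passed to nova could be sketched as follows; the
helper name and argument shapes are hypothetical, and only the five hint
keys come from this spec:

```python
def build_scheduler_hints(root_stack_id, stack, resource_name, path_in_stack):
    """Assemble the five lifecycle scheduler hints described above.

    `path_in_stack` is a list of (stackresourcename, stackname) tuples,
    with path_in_stack[0] == (None, rootstackname).
    """
    return {
        'heat_root_stack_id': root_stack_id,
        'heat_stack_id': stack['id'],
        'heat_stack_name': stack['name'],
        'heat_path_in_stack': path_in_stack,
        'heat_resource_name': resource_name,
    }

# Illustrative values only.
hints = build_scheduler_hints(
    'root-uuid',
    {'id': 'child-uuid', 'name': 'child_stack'},
    'my_server',
    [(None, 'root_stack'), ('child_res', 'child_stack')],
)
```

The dict would then be passed as the `scheduler_hints` argument of the nova
server create call.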
Alternatives
------------
No reasonable alternatives were identified.
Similar function could be achieved if the lifecycle plugin modified the stack
(and changes were persisted). This would be bad behavior. It would conflict
with convergence when it lands, and scheduler decisions would become visible
to the heat user (unless somehow redacted on query).
Implementation
==============
Assignee(s)
-----------
A patch comprising a full implementation of the blueprint
(https://review.openstack.org/#/c/96889/) is already being
reviewed, under the old pre-spec process.
Primary assignee:
William C. Arnold (barnold-8)
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
Implementation: https://review.openstack.org/#/c/96889/
Documentation: Add documentation to the in-tree Heat docs
Dependencies
============
No dependencies
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
============
Stack Tags
============
https://blueprints.launchpad.net/heat/+spec/stack-tags
This feature will allow attributing a set of simple string-based tags
to stacks and optionally the ability to hide stacks with certain tags
by default.
Problem description
===================
Heat should be usable by cloud providers for behind-the-scenes orchestration of
cloud infrastructure, without exposing the user to the resulting
automatically-created stacks.
For example, creation of a Nova server might include, by default, creation and
configuration of a network, subnet, port, and security group. The "server
create" function in the cloud portal would make a call to Heat instead of Nova.
When the user clicks the "server create" button in the cloud portal, Heat would
then orchestrate the Nova server creation along with calls to other services
and then wire it all up.
Sahara already uses Heat for its internal orchestration, and currently when we
instantiate an OS::Sahara::Cluster resource in a template, the user also sees
the underlying stack created by Sahara. It would be nice if operators of the
Sahara service could also add such specific tags to their internally created
stacks to hide them from regular users by default. The same might apply to
Trove when it moves to using Heat orchestration internally.
As other services use heat behind the scenes, they would set specific tags on
such stacks (e.g. source:nova, source:sahara, etc.) which, optionally, could be
configured not to be displayed by default, effectively hiding them from regular
users of the API. Since Heat is no longer a purely user-facing
orchestration service, it makes sense to use these tags as a means to prevent
cluttering of the user's stacks and avoid confusion.
Proposed change
===============
Add a "tags" parameter to the stack-create API which, if given, will create the
stack with those tags. Also add a configuration option that will allow
operators to hide stacks with specific tags from the default stack list.
Add a "show_hidden" flag to the stack-list API, which, if passed, will
result in listing both hidden and non-hidden stacks. By default, only
non-hidden stacks will be displayed in the stack-list output.
Alternatives
------------
- Using Nova plug-ins for orchestration (not the best tool for the job).
Implementation
==============
Assignee(s)
-----------
Primary assignee:
jasondunsmore
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
- Add a "stack_tag" table.
- Add a "tags" parameter to stack-create (engine and API). Note: Tag
names must not contain a comma, as specified in the spec:
https://review.openstack.org/#/c/155620/
- Add ability to update stack tags during stack-update (engine and
API). It should be possible to remove all tags from a stack.
- Add a "show_hidden" parameter to stack-list in engine (engine and
API).
- Add a "tags" parameter to stack-list in engine (engine and API).
Passing a tag name will result in only stacks containing that tag
being shown. If multiple tags are passed, they will be combined
using the AND boolean expression.
- Add a "tags-any" parameter to stack-list in engine (engine and API).
Passing a tag name will result in only stacks containing that tag
being shown. If multiple tags are passed, they will be combined
using the OR boolean expression.
- Add a "not-tags" parameter to stack-list in engine (engine and API).
Passing a tag name will result in only stacks NOT containing that
tag being shown. If multiple tags are passed, they will be combined
using the AND boolean expression.
- Add a "not-tags-any" parameter to stack-list in engine (engine and
API). Passing a tag name will result in only stacks NOT containing
that tag being shown. If multiple tags are passed, they will be
combined using the OR boolean expression.
- Add an API to list tags, ie. "heat tag-list" (engine and API).
- Ensure tags show up in the "heat stack-show <stack>" output (engine
and API).
- Add docs for new API parameters to "api-site" project.
- Write unit tests to ensure that other stack operations continue to
work as expected with hidden stacks, eg. stack-show, resource-list,
stack-list pagination...
- Register a configuration parameter that contains a list of tags to
hide by default.
- Implement changes to the DB/service/RPC to hide stacks according to
the configuration parameter.
- Add "show_hidden" parameter to stack-list in python-heatclient.
- Add "--tags", "--tags-any", "--not-tags", and "--not-tags-any"
options to filter stack-list output by tag in python-heatclient.
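The boolean semantics of the four tag filters above can be sketched as a
plain function; the names and data shapes here are illustrative, not the
actual engine code:

```python
def filter_stacks(stacks, tags=None, tags_any=None,
                  not_tags=None, not_tags_any=None):
    """Apply the four tag filters described in the work items.

    `stacks` maps stack name -> set of tags; filter parameters are
    lists of tag names.
    """
    result = {}
    for name, stack_tags in stacks.items():
        if tags and not set(tags) <= stack_tags:            # AND
            continue
        if tags_any and not set(tags_any) & stack_tags:     # OR
            continue
        if not_tags and set(not_tags) <= stack_tags:        # NOT all
            continue
        if not_tags_any and set(not_tags_any) & stack_tags:  # NOT any
            continue
        result[name] = stack_tags
    return result

stacks = {
    's1': {'source:nova'},
    's2': {'source:sahara', 'internal'},
    's3': set(),
}
```

For example, `tags_any=['source:nova', 'source:sahara']` would list `s1` and
`s2`, while `not_tags_any=['internal']` would list `s1` and `s3`.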
Dependencies
============
None.
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
===========================
Restrict Stack Update Scope
===========================
https://blueprints.launchpad.net/heat/+spec/stack-update-restrict
When updating a stack, there is currently no way to stop an update from
destroying a given resource.
Problem description
===================
Users can (and do) worry about stack update doing wonky things. The
update-preview endpoint addresses this partially by showing what will probably
happen. The limitation of the preview function is that resources can raise
UpdateReplace exceptions at any time, making it impossible to be *certain* of
the results of an update until it is performed.
Proposed change
===============
Use the existing 'update_policy' resource attribute to let users protect
certain resources from being replaced during updates.
If the update_policy can't be satisfied, heat will move the stack to
'UPDATE_FAILED' and halt. If at all possible, constraints should be validated
before applying the update, thus moving the stack straight to 'UPDATE_FAILED'
when the update_policy is incorrect. After the update fails, the user can
adjust the restrictions and try again.
The update_policy attribute is already used for CloudFormation autoscaling
preferences, which are nested into the keys "AutoScalingScheduledAction" and
"AutoScalingRollingUpdate". CFN preferences would be unaffected by the HOT
version of update policies.
A user would specify per-resource how aggressive an update
can be with a resource. The restrictions could be on updating the resource at
all, or just on destroying the resource (including UpdateReplace).
The base cases here are:
* Restrict destroy/replace
* Restrict nondestructive updates
* Restrict both
* Restrict nothing
* Omit the update_policy entirely
The keys for these restrictions would be nested into an 'actions' key as below.
::

  resources:
    myresource:
      type: Foo::Bar::Baz
      update_policy:
        allow:
          update: <bool>
          replace: <bool>
The reason for nesting the allowed actions is to avoid adding top level keys if
there are more actions that users want to restrict in the future.
A user would be able to add or remove restrictions by updating the resource
template. The new restrictions would be effective for the current update. For
example, a resource that would otherwise be replaced would be protected if it
had an update policy added in the current update.
Conflicting directives are possible, for example in nested stacks. If an inner
resource has "replace: true" but the outer scope has "replace: false" then heat
will transfer the stack to UPDATE_FAILED to surface the problem to the user.
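A minimal sketch of how the engine might enforce the allow keys during
validation; the exception and helper names are hypothetical, since the spec
only fixes the template syntax:

```python
class UpdateRestricted(Exception):
    """Raised to move the stack to UPDATE_FAILED (hypothetical name)."""


def check_update_allowed(update_policy, action):
    """Validate a pending action ('update' or 'replace') against the
    resource's update_policy.

    An absent policy or absent key restricts nothing, matching the
    'omit the update_policy entirely' base case above.
    """
    allow = (update_policy or {}).get('allow', {})
    if not allow.get(action, True):
        raise UpdateRestricted(
            "update_policy forbids '%s' for this resource" % action)


policy = {'allow': {'update': True, 'replace': False}}
check_update_allowed(policy, 'update')  # permitted; 'replace' would raise
```

Raising before any change is applied is what lets the stack move straight to
UPDATE_FAILED rather than failing mid-update.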
Alternatives
------------
An alternative way to handle conflicting directives would be to honor the most
conservative applicable policy. This method would be much more confusing for
users, so failing the update is preferable.
Pitfalls
--------
None
Implementation
==============
Assignee(s)
-----------
Milestones
----------
Targeted for Kilo
Work Items
----------
* Add an actions key to update_policy
Dependencies
============
update-dry-run
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
============================
Support Cinder API version 2
============================
https://blueprints.launchpad.net/heat/+spec/support-cinder-api-v2
This specification proposes to add support for the second version of the Cinder
API, which brings useful improvements and will soon replace version one.
Problem description
===================
Currently Heat uses only version 1 of Cinder API to create volumes. Version
two, however, brings useful features such as scheduler hints, more consistent
responses, caching, filtering, etc.
Also, Cinder is deprecating API version 1 in favor of 2 [1], which has been
available in devstack since Havana. Supporting both would make switching
easier for users.
The new API provides [2]:
* More consistent properties like 'name', 'description', etc.
* New methods (set_metadata, promote, retype, set_bootable, etc.)
* Additional options in existing methods (such as the use of scheduler hints).
* Caching data between controllers instead of multiple database hits.
* Filtering when listing information on volumes, snapshots and backups.
Use cases:
* As a developer I want to be able to pass scheduler hints to Cinder when
creating volumes, in order to choose back-ends more precisely.
* As a deployer I don't want to have to choose which Cinder API version to use.
Let Heat autodiscover the latest and use it.
Proposed change
===============
Add new methods to CinderClientPlugin:
* discover_api_versions()

  To query Keystone for 'volume' and 'volumev2' services.

* api_version()

  To get the Cinder API version currently used by Heat (this value will be
  set to the latest available one).

The client returned by CinderClientPlugin._create() will be constructed
according to api_version().
Six cinderclient methods are currently used within Heat:
* volumes.get(), volumes.extend(), backups.create() and restores.restore(),
  which won't be affected by this change;
* volumes.create() and volumes.update(), which take arguments that differ
  depending on the Cinder API version: (display_name, display_description)
  for v1 and (name, description) for v2.
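The version-dependent argument translation for the two affected methods
could be sketched as follows (the helper name is hypothetical; the keyword
names are those of the two API versions):

```python
def volume_kwargs(api_version, name=None, description=None):
    """Translate version-neutral arguments into the keywords expected by
    volumes.create()/volumes.update() for the given Cinder API version."""
    if api_version == 1:
        return {'display_name': name, 'display_description': description}
    # v2 and later use the more consistent property names.
    return {'name': name, 'description': description}
```

Keeping the translation in one place means the resource code can stay
version-agnostic.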
The proposed implementation will not change current OS::Cinder::Volume
properties, since they already are 'name' and 'description' (as in new API
version).
Alternatives
------------
Wait for Cinder API v1 to be deprecated and switch abruptly to v2.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
adrien-verge
Milestones
----------
Target Milestone for completion:
Kilo-1
Work Items
----------
* Discover the latest Cinder API version using Keystone.
* Create the correct Cinder client using the latest available API.
* Use correct arguments for volumes.create() and volumes.update() depending on
  the API in use.
Dependencies
============
None
References
----------
* [1]: https://wiki.openstack.org/wiki/CinderAPIv2
* [2]: https://github.com/openstack/nova-specs/blob/master/specs/juno/support-cinderclient-v2.rst
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================================
Implement Trove cluster resource
================================
https://blueprints.launchpad.net/heat/+spec/trove-cluster-resource
Add support for a Trove cluster resource, which will allow creating clusters
with Heat.
Problem description
===================
Currently we can't create a Trove cluster resource in Heat.
Proposed change
===============
Implement new resource type:
* OS::Trove::Cluster

  * properties

    * name (optional - defaults to self.physical_resource_name())
    * datastore_type (required)
    * datastore_version (required)
    * instances (list, required)

      * flavor (required)
      * volume_size (required)

  * attributes

    * instances (list of instance ids)
    * ip (IP of the cluster)
Alternatives
------------
None
Usage Scenario
--------------
Create the OS::Trove::Cluster resource like this::
  resources:
    cluster:
      type: OS::Trove::Cluster
      properties:
        name: my_cluster
        datastore_type: mongodb
        datastore_version: 2.6.1
        instances: [{flavor: m1.heat, volume_size: 1},
                    {flavor: m1.small, volume_size: 2},
                    {flavor: m1.large, volume_size: 3}]
Implementation
==============
Assignee(s)
-----------
Primary assignee:
tlashchova
Milestones
----------
Target Milestone for completion:
Kilo-3
Work Items
----------
* Add Trove cluster resource
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==============================================================
Use oslo-versioned-objects to help deal with upgrades
==============================================================
https://blueprints.launchpad.net/heat/+spec/versioned-objects
Problem description
===================
We are looking to improve the way we deal with versioning (of all sorts
db/rpc/rest/templates/plugins).
Nova has come up with the idea of versioned objects, that Ironic has also now
used. This has now been proposed as an oslo library:
https://review.openstack.org/#/c/127532/
https://etherpad.openstack.org/p/kilo-crossproject-upgrades-and-versioning
Versioned-objects will help us deal with the DB schema being at a
different version than the code expects. This will allow Heat to be
operated safely during upgrades.
Looking forward, as we pass more and more data over RPC, we can make use
of versioned-objects to ensure upgrades happen without spreading
version-dependent code across the code base.
Proposed change
===============
Since it will take some time before versioned-objects goes into the oslo
library, the plan is to get an early version of it for Heat and
transition to oslo-versioned-objects when it is ready.
Create a directory heat/objects/ that will contain wrapper objects layered
above the db objects. This frees the remainder of Heat from
having to worry about dealing with older DB objects.
Once the objects are in place the rest of the code will be changed to
use the versioned objects instead of the db_api directly. This can be
done object-by-object to avoid overly large changes.
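The wrapper-object pattern can be illustrated with a stripped-down class.
The real implementation would subclass the oslo.versionedobjects base class,
so everything below is a schematic sketch, not the proposed code:

```python
class StackObject:
    """Illustration of a versioned wrapper above a raw DB row."""
    VERSION = '1.1'
    fields = ('id', 'name', 'tags')  # assume 'tags' was added in 1.1

    def __init__(self, **kwargs):
        for f in self.fields:
            setattr(self, f, kwargs.get(f))

    @classmethod
    def from_db_object(cls, db_row):
        """Wrap a raw DB row so the rest of Heat never touches it."""
        return cls(**{f: db_row.get(f) for f in cls.fields})

    def obj_make_compatible(self, target_version):
        """Drop fields unknown to an older peer before sending over RPC."""
        data = {f: getattr(self, f) for f in self.fields}
        if target_version < '1.1':
            data.pop('tags', None)
        return data


stack = StackObject.from_db_object({'id': 1, 'name': 's', 'tags': ['x']})
```

Code that consumes `StackObject` is insulated from both schema changes and
older RPC peers, which is the point of the layer.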
Alternatives
------------
Data model impact
-----------------
None. The objects being introduced are not stored in the database. Instead,
these objects replace the sqlalchemy objects that are used to represent
stacks, resources, etc. throughout Heat's internals.
Developer impact
----------------
It will take some time to convert heat internals over to the object
model, so the existing convention of direct database calls should be
accepted until all object models are in place.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Angus Salkeld <asalkeld@mirantis.com>
<others are welcome to help out>
Milestones
----------
Target Milestone for completion:
Kilo-2
Work Items
----------
* (If needed) Obtain an early version of oslo.versionedobjects.
* Implement the objects for each DB object type we have.
* Update code that uses the DB to use versioned-objects instead.
* Write some developer docs on how to deal with older schema.
* Transition to oslo-versioned-objects as soon as it is available.
Dependencies
============
* oslo-versioned-objects
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
===================================
Nova Server VNC Console Attribute
===================================
Launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/vnc-console-attr
Problem description
===================
As an end user, if I want to retrieve the VNC console URL of a server
resource in a heat stack, I need to combine `heat resource-list` and
`nova get-vnc-console` to get the result. For example::

  heat resource-list <stack_name>  # get physical_resource_id
  nova get-vnc-console <physical_resource_id> <vnc_console_type>

We should provide a way for template developers to expose console
URLs (for example vnc, rdp and spice) in stack outputs.
Usage Scenario
--------------
Get novnc console url::
  heat_template_version: 2013-05-23

  resources:
    server:
      type: "OS::Nova::Server"
      properties:
        image: fedora
        key_name: heat_key
        flavor: m1.small

  outputs:
    vnc_console_url:
      value:
        get_attr: [server, console_urls, novnc]
So the novnc console url can be retrieved via `heat output-show
<stack> vnc_console_url`.
Get xvpvnc console url::
  heat_template_version: 2013-05-23

  resources:
    server:
      type: "OS::Nova::Server"
      properties:
        image: fedora
        key_name: heat_key
        flavor: m1.small

  outputs:
    vnc_console_url:
      value:
        get_attr: [server, console_urls, xvpvnc]
So the xvpvnc console url can be retrieved via `heat output-show
<stack> vnc_console_url`.
Get spice console url::
  heat_template_version: 2013-05-23

  resources:
    server:
      type: "OS::Nova::Server"
      properties:
        image: fedora
        key_name: heat_key
        flavor: m1.small

  outputs:
    spice_console_url:
      value:
        get_attr: [server, console_urls, spice-html5]
Proposed change
===============
Add a composite attribute `console_urls` to the `OS::Nova::Server` resource.
When `get_attr` is invoked, return the console URL according to the key
supplied to this attribute, or URLs for all supported types when no key is
provided. Gracefully deal with the case when the requested console type
is not available in the current deployment.
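The attribute resolution could be sketched as follows, assuming the Nova
client plugin yields a mapping of console type to URL (the data shape and
function name are hypothetical):

```python
def console_urls(available, console_type=None):
    """Resolve the composite `console_urls` attribute.

    `available` maps console type to URL for this deployment.  With a
    key, return that URL or a graceful 'not supported' marker; with no
    key, return URLs for all supported types.
    """
    if console_type is None:
        return dict(available)
    return available.get(
        console_type,
        'Console type %s is not supported' % console_type)


urls = {'novnc': 'http://example/novnc', 'spice-html5': 'http://example/spice'}
```

Returning a marker string instead of raising keeps `get_attr` from failing a
stack operation when a console type is simply absent.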
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
pshchelo
Milestones
----------
Target Milestone for completion:
Kilo-1
Work Items
----------
- implement `get_console_urls` method in Nova client plugin;
- add `console_urls` attribute to OS::Nova::Server resource.
Dependencies
============
No dependency on other spec or additional library.
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
======================================
Autoscaling Group Rolling Update Hooks
======================================
Hooks are included in Kilo, but they don't yet address one helpful use case.
Currently there are pre-create and pre-update hooks; this spec would add a
special hook to OS::Heat::AutoScalingGroup to set a breakpoint before each
batch in a rolling update.
Problem description
===================
Working with TripleO it's often desirable to pause to check that an update is
doing what you want (which is why hooks exist at all), and the "pause_time"
provided by the rolling_updates policy can be used for a similar purpose.
The problem is that you may not be able to sufficiently test the result of a
rolling update batch within the configured pause_time, and you have no way
to signal heat to allow more time. Conversely, the pause_time may be
excessively long, making the update too slow.
Being able to set a breakpoint between batches is a much better solution, so
the operator can take an arbitrarily long (up to stack timeout) or short time
to confirm the upgrade went as planned.
Proposed change
===============
To make debugging and verifying rolling updates easier, I propose adding a
'batch_hook' parameter to rolling_updates, like below::

  my_asg:
    type: "OS::Heat::AutoScalingGroup"
    properties:
      desired_capacity: 4
      ...
      rolling_updates:
        batch_hook: true
        min_in_service: 1
        ...
The batch_hook option and pause_time will be mutually exclusive, since it
doesn't make much sense to have both a set pause time *and* hooks between
batches.
This will be confined to AutoScalingGroup, won't break any existing templates,
and won't affect other grouped resources.
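The intended control flow can be sketched as a plain loop; the function
names are illustrative, and the real logic would live in the group's update
code:

```python
def rolling_update(batches, batch_hook, wait_for_clear):
    """Process update batches, pausing before each one while the
    pre-batch hook is set.

    `wait_for_clear` blocks until the operator signals the hook,
    taking the place of a fixed pause_time.
    """
    done = []
    for batch in batches:
        if batch_hook:
            wait_for_clear(batch)  # operator verifies, then clears the hook
        done.extend(batch)         # apply the batch update
    return done


cleared = []
result = rolling_update([['a', 'b'], ['c']], True, cleared.append)
```

Because the wait is per batch and unbounded (up to the stack timeout), the
operator can take as long or as little as verification requires.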
Alternatives
------------
1. The name "batch_hook" seems descriptive enough, but another option
   for the parameter name would be "pre_batch_hook" to denote that the hook is
   set before each batch (not after).
2. Another alternative would be to add a hook type that would be set in the
   environment, not in the rolling_update policy. Localizing this to
   the AutoScalingGroup scaling policy is a better choice, using stack updates
   to toggle the hooks for each group.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
sb~p (Ryan Brown)
Milestones
----------
liberty-1
Work Items
----------
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================================
Cinder volume encryption support
================================
https://blueprints.launchpad.net/heat/+spec/cinder-volume-encryption
Provides support for encrypted cinder volume creation.
Problem description
===================
Cinder provides encrypted volume creation by using an encrypted volume type,
as described on the following page:
http://docs.openstack.org/juno/config-reference/content/section_volume-encryption.html
Proposed change
===============
Add a new contrib heat resource plugin, OS::Cinder::EncryptedVolumeType, for
creating encrypted volume types, with the following properties:
* provider (required)

  * description: The class that provides encryption support. For example,
    nova.volume.encryptors.luks.LuksEncryptor.
  * type: string

* cipher (optional)

  * description: The encryption algorithm or mode. For example,
    aes-xts-plain64.
  * type: string

* key_size (optional)

  * description: Size of encryption key, in bits. For example, 128 or 256.
  * type: integer

* control_location (optional)

  * default: front-end
  * allowed-values: front-end, back-end
  * description: Notional service where encryption is performed.
  * type: string

* type (required)

  * description: Name or id of volume type (OS::Cinder::VolumeType)
  * type: string
This resource needs the following actions:
* create
* delete
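Assembling the properties into the specs passed to Cinder could look like
this sketch; the helper and its validation are illustrative, and only the
property names and allowed values come from the schema above:

```python
def encryption_specs(provider, cipher=None, key_size=None,
                     control_location='front-end'):
    """Build the encryption-type specs dict from the properties above,
    validating control_location against its allowed values."""
    if control_location not in ('front-end', 'back-end'):
        raise ValueError('control_location must be front-end or back-end')
    specs = {'provider': provider, 'control_location': control_location}
    if cipher:
        specs['cipher'] = cipher
    if key_size:
        specs['key_size'] = key_size
    return specs  # passed, with the volume type, to the Cinder API


specs = encryption_specs(
    'nova.volume.encryptors.luks.LuksEncryptor',
    cipher='aes-xts-plain64', key_size=256)
```

The optional properties are simply omitted from the specs when not given,
leaving Cinder's own defaults in effect.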
Alternatives
------------
None.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Kanagaraj Manickam (kanagaraj-manickam)
Milestones
----------
Target Milestone for completion:
liberty-1
Work Items
----------
* Add new contrib resource plugin as described in the solution section
* Add test cases for new resource plugin
* Add required functional test cases to validate the resource.
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=====================================================
Conditional exposure of resources based on user roles
=====================================================
https://blueprints.launchpad.net/heat/+spec/conditional-resource-exposure
Expose resources as available based on actual user roles.
Problem description
===================
Currently we unconditionally register and present to the user all resource
plugins that we have in-tree.
As we will move in-tree some contrib/ resources that require special roles
for instantiation (e.g. Keystone resources) all users will see them
as available despite that the user might not actually be able to use them
due to RBAC restrictions.
This would be confusing to users and facilitate later stack failure
at creation instead of failing early at validation.
Proposed change
===============
Add optional settings in ``heat.conf``
(in ``[clients]`` section to be used for every client or in ``[client_*]``
section for a specific client) specifying the list of required "special"
roles to instantiate restricted resources of this service.
Use these values during validation to compare the roles with the roles from
the context to check for resource availability for the specific user who has
made the request.
The default value (an empty list) of the new config option will mean
the resource is shown as available to any user.
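The comparison could be sketched as a set check; whether all or any of the
configured roles are required is a detail this spec leaves open, so the
sketch assumes all:

```python
def resource_available(required_roles, user_roles):
    """Return True when the user may instantiate the resource.

    An empty required list (the default) exposes the resource to
    everyone; otherwise the user needs every listed role.
    """
    return set(required_roles) <= set(user_roles)
```

During validation, `required_roles` would come from the `[clients]` or
`[client_*]` config section and `user_roles` from the request context.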
Alternatives
------------
Keep things as they are, continuing to confuse users and failing late rather
than early for templates containing resources the current user cannot create
without special roles.
A long-term alternative/improvement would be to wait until Keystone implements
fine-grained policy control and querying as part of its API.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Pavlo Shchelokovskyy <pshchelo>
Milestones
----------
Target Milestone for completion:
liberty-1
Work Items
----------
- add config options for client plugins describing the required special
roles list
- add an attribute to resources requiring special roles that marks them as such
- add an extra parameter to SupportStatus hinting that this resource will
likely require a special role a common user would not generally have
- modify docs generation to flag such resources
- add validation step comparing the options from config with roles from context
- unit tests
- functional tests
- modify DevStack to automatically configure Heat with DevStack's default
policies in respect to special roles for new Keystone client options
- check if Keystone resources are listed if called from non-admin users
- check that template containing Keystone resources is failing validation
Dependencies
============
- blueprint keystone-based-resource-availability is implemented
- admin-requiring resources are moved in-tree from contrib/
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================================
Conditionally expose resources based on available services
==========================================================
https://blueprints.launchpad.net/heat/+spec/keystone-based-resource-availability
Expose resources as available based on presence of the service.
Problem description
===================
Currently we unconditionally register and present to the user all resource
plugins that we have in-tree, even though the actual service might not be
installed in a particular cloud (e.g. Neutron resources on Nova-network
based cloud or Sahara resources though Sahara is not that usually installed).
This is confusing to users, who see resources that cannot actually be used
listed as available, and it leads to late failure of the instantiated template
instead of failing at validation.
The situation is only going to get worse as we move the contrib/ resources
back in-tree, and we will probably accept in-tree resources for many more
projects under the "Big Tent" governance model.
Proposed change
===============
Add an additional validation step in the resource class that checks
if the required endpoint is present.
Endpoints can be accessed from the request context that is already available
in the resource class as ``stack.context``.
This method should be called from ``Resource.__new__`` and raise
a ``StackResourceUnavailable`` exception
(new subclass of ``StackValidationError``) when appropriate.
The ``list_resource_types`` method should tolerate
``StackResourceUnavailable`` and not include resources raising it in the
list of available resources.
Client plugins must implement a ``service_type`` property to be used during
validation and also during client instantiation.
Every resource type must implement the ``default_client_plugin``
class attribute to be used in the base ``Resource`` class to validate
the endpoints presence in the context.
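The endpoint check and the tolerant listing could be sketched as follows;
the names follow the spec, while the catalog shape is a simplification of
the request context:

```python
class StackResourceUnavailable(Exception):
    """Stands in for the proposed StackValidationError subclass."""


def validate_endpoint(service_type, catalog):
    """Raise when the service backing a resource type has no endpoint
    registered in the request context's catalog."""
    if service_type not in catalog:
        raise StackResourceUnavailable(
            'no endpoint for service %s' % service_type)


def list_resource_types(resources, catalog):
    """Tolerate StackResourceUnavailable and omit those resources, as
    the spec requires of the real list_resource_types."""
    available = []
    for name, service_type in resources.items():
        try:
            validate_endpoint(service_type, catalog)
        except StackResourceUnavailable:
            continue
        available.append(name)
    return sorted(available)


types = list_resource_types(
    {'OS::Neutron::Net': 'network', 'OS::Sahara::Cluster': 'data-processing'},
    {'network', 'compute'})
```

On a cloud without Sahara, the Sahara resource type simply drops out of the
listing instead of failing at stack-create time.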
Alternatives
------------
Keep things as they are, continuing to confuse users and failing late rather
than early for templates with resources whose services are unavailable
in the current deployment.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Kanagaraj Manickam <kanagaraj-manickam>
Assisted by:
Pavlo Shchelokovskyy <pshchelo>
Milestones
----------
Target Milestone for completion:
liberty-1
Work Items
----------
- changes in the client plugins
- add ``default_client_plugin`` to all resource plugins
- changes in the base resource class
- changes in the ``list_resource_types`` and ``show_resource_type`` service
methods
- unit tests
- scenario integration tests (based on some "exotic" resource)
- check if resource is listed as available
- check if template with resource validates
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=====================================================================
Cache r/o API requests to OS components during constraint validation
=====================================================================
https://blueprints.launchpad.net/heat/+spec/constraint-validation-cache
Heat makes many requests to the other OpenStack clients. These
requests add noticeable overhead when deploying many instances
at the same time. One class of such requests is made during validation of
property constraints that check the existence of external resources (an
image, a flavor, etc.). To reduce that overhead, this spec proposes a
validation cache.
Problem description
===================
The detailed description of the problem is described in the following use case:
1. User prepares a template with N stacks that use the same resource (image,
flavor, keypair, etc)
2. User requests Heat to create the stack
3. During custom constraint validation for resources Heat does the following:
- Find appropriate class that validates custom constraint
- Request the other clients about constraint (check that the volume,
server, flavor, etc exists)
- If request was successful then pass validation
This approach causes redundant requests, because the same information
(existence of an image, flavor, etc.) is fetched several times. In addition,
the current implementation doubles this overhead, because property
constraints are checked twice (during resource creation and during stack
validation).
Proposed change
===============
The desired use case is the following:
0. Heat initializes a cache back-end and cache regions for each client plugin
   using dogpile.cache (the cache configuration is defined in heat.conf).
   Heat also registers generation functions for them (see
   http://dogpilecache.readthedocs.org/en/latest/usage.html for more info)
1. User prepares a template with N stacks that use the same resource (image,
flavor, keypair, etc)
2. User requests Heat to create the stack
3. During custom constraint validation for resource Heat does the following:
- Find appropriate class that validates custom constraint
- Request client plugin about the data from another OS component
- If caching is enabled, check the cache region for the client plugin and
  return the cached result of the API request for the same resource name
  (volume, server, flavor, etc.) and the same context.
  If no result is found in the cache, the cache region automatically
  requests the new value using the generation function (see note below);
  otherwise (caching disabled), request the new value using the client
  plugin.
- Pass validation if no exceptions were raised during the request.
- If an exception was raised, delete the value from the cache, because
  the request must then be made anew every time.
Note: if the cache size exceeds the configured size option in heat, the
oldest values must be flushed. This logic should be managed by the cache
back-end.
To support the case above the following steps should be executed:
- The cache configuration options should be supported in heat.conf
- The cache back-end should be configured using the options in heat.conf.
Please note that using dogpile we can use several types of cache back-ends
(in-memory, memcached, file system, DB, self-written etc). Each back-end
requires specific input arguments.
- The cache region should be configured for each client. In addition
time-to-live value and cache size options should be defined for clients
using heat.conf options.
- The sub-classes of heat.engine.clients.ClientPlugin should fetch new
  values through their cache regions when caching is enabled
- Requests to a client plugin should accept an argument (use_cache=False)
  that controls whether caching is used. This restricts caching to
  constraint validation and avoids unintentional use of cached values.
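The cache-on-miss, invalidate-on-error behaviour described in the steps
above can be sketched with a minimal stdlib stand-in for a dogpile.cache
region; a real deployment would configure dogpile regions from heat.conf,
so the class below is purely illustrative:

```python
import time


class ValidationCache:
    """Toy TTL cache mimicking a dogpile region's get_or_create."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get_or_create(self, key, creator):
        entry = self._store.get(key)
        now = time.time()
        if entry and entry[1] > now:
            return entry[0]          # cache hit, skip the API round trip
        try:
            value = creator()        # e.g. ask nova/glance for the resource
        except Exception:
            self._store.pop(key, None)  # drop the entry on failure
            raise
        self._store[key] = (value, now + self.ttl)
        return value


cache = ValidationCache(ttl_seconds=30)
calls = []


def fetch_flavor():
    calls.append(1)                  # stands in for a real API request
    return "m1.small"


cache.get_or_create("flavor:m1.small", fetch_flavor)
cache.get_or_create("flavor:m1.small", fetch_flavor)
print(len(calls))  # second lookup served from cache -> 1
```

With dogpile, the same pattern is expressed by registering the fetch
function as the region's creator; the region then handles expiry and
back-end storage itself.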
Alternatives
------------
1. Implement the cache and integrate it into BaseCustomConstraint. In this
   case caching would be used for custom constraint validation only, but this
   is not the best solution for two reasons:
   - the cache could not be reused for other purposes in the future
   - ClientPlugin is conceptually the more appropriate place for caching;
     there is no strict OOP relationship between a Constraint and a cache.
2. Implement light-weight cache in client plugins. This solution was declined
during review. Please see the details below:
"client plugins are instantiated on the first access from a resource since
the Stack object is created. Since we now have decouple-nested this is
going to be of less value as every nested stack is going to recreate these
clients. And in convergence all resources will recreate the client object
as the resource actions will be rpc'd to be worked on. So given this
wouldn't something like memcached be better?"
Implementation
==============
Assignee(s)
-----------
Primary assignee:
kkushaev
Milestones
----------
Target Milestone for completion:
Liberty-1
Work Items
----------
- Implement cache back-end - leverage dogpile back-end that tracks
timeouts and results of previous requests
- Implement initialization of cache regions
- Integrate caching into subtypes of ClientPlugin that make requests to other
clients
- Prepare tests for each step
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===================================
Improvements in deprecation process
===================================
https://blueprints.launchpad.net/heat/+spec/deprecating-improvements
These changes should make the deprecation process obvious and safe for users.
Problem description
===================
The current deprecation process has several issues:
- there is no clear information, for all deprecated properties and
  attributes, about when each one was deprecated and when it will be deleted.
- there are no notes about how and when we plan to remove support for an
  option.
- code in property/attribute schemas cannot be deleted.
- backward compatibility with old templates must be maintained.
Proposed change
===============
Suggested changes should solve issues mentioned above:
1. Need to add new page in Heat documentation with detailed description of
deprecation process.
Add new page in Heat documentation to Developers Documentation section
named 'Heat support status usage' with description of using support status
for resources, properties and attributes:
- how long legacy option will be available
- what will happen, when deprecation period is over
- how to use support_status for properties, attributes and resources
- what will happen with deprecated resources
Also, add information about support_status parameter in Heat Resource
Plug-in Development Guide page.
2. Improve SupportStatus.
Add a `previous_status` option to SupportStatus for displaying the previous
status of the object and its version::
support_status=support.SupportStatus(
status=support.DEPRECATED,
version='2015.2',
previous_status=support.SupportStatus(version='2014.1')
)
Also, add HIDDEN status for DEPRECATED objects, which become absolutely
obsolete. Objects with this status will be hidden from documentation and
resource-type-list.
3. Improvement in documentation status code.
Improve generating documentation for new SupportStatus option
`previous_status`. Documentation must show full life cycle of resource.
Besides that, next features can be implemented:
1. Add option in attribute/property schema, which shows legacy names::
property_schema = {
subnet:
....
legacy_names: [subnet_id]
}
2. Add a migration mechanism to support the following cases:
   - New stacks deployed from old templates continue to work while
     the element is in the deprecated state.
   - Old stacks are correctly interpreted by new code after the element
     was deprecated.
   - When the deprecation period ends, templates should be updated; otherwise
     a Validation Error will be raised. Previously created stacks remain
     available, but cannot be updated with old templates. Updating old stacks
     with new templates is the recommended practice.
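A minimal sketch of how a chained ``previous_status`` could be walked to
render an object's full life cycle, as point 3 above proposes for the
documentation generator; the ``SupportStatus`` class here is a simplified
stand-in, not Heat's implementation:

```python
SUPPORTED, DEPRECATED, HIDDEN = "SUPPORTED", "DEPRECATED", "HIDDEN"


class SupportStatus:
    def __init__(self, status=SUPPORTED, version=None, previous_status=None):
        self.status = status
        self.version = version
        self.previous_status = previous_status


def life_cycle(status):
    """Return [(status, version), ...] from oldest to newest."""
    chain = []
    while status is not None:
        chain.append((status.status, status.version))
        status = status.previous_status
    return list(reversed(chain))


# a property that was supported, then deprecated, then hidden
s = SupportStatus(
    status=HIDDEN, version='5.0.0',
    previous_status=SupportStatus(
        status=DEPRECATED, version='2015.1',
        previous_status=SupportStatus(version='2014.1')))

print(life_cycle(s))
# [('SUPPORTED', '2014.1'), ('DEPRECATED', '2015.1'), ('HIDDEN', '5.0.0')]
```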
Alternatives
------------
Optionally, we may add an API which updates an old template and returns the
new, updated template (or information about which option should be changed)
to the user. Note that this makes little sense if we start returning a
validation error on old templates.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
<prazumovsky>
Assisted by:
<skraynev>
Milestones
----------
Target Milestone for completion:
Liberty-1
Work Items
----------
* Add section in documentation about how we deprecate options.
* Add status HIDDEN to SupportStatus and improve documentation generating.
* Add parameter previous_status and improve SupportStatuses for heat objects.
* Add option "legacy_names" for property schema.
* Create auto-upgrade mechanism for old templates.
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===================================
Enhance constraints for properties
===================================
https://blueprints.launchpad.net/heat/+spec/enhance-property-constraints
We need more constraints for neutron properties so that we can validate them
before stack creation.
Problem description
===================
We have many types of properties; some of them already have custom
constraints, e.g. nova.flavor, glance.image, etc. Others expect input in a
certain format, e.g. an IP address, MAC address, network CIDR, or protocol.
It is better to check the input format before passing it on to the CLI or to
stack creation. This helps users, who then get an error message during
validation instead of a failed stack create/update.
Proposed change
===============
Add custom constraints for IP address, mac address, network cidr.
For IP address constraint, it's going to be like this:
::
constraints=[
constraints.CustomConstraint('ip_addr')
]
For mac address constraint, it's going to be like this:
::
constraints=[
constraints.CustomConstraint('mac_addr')
]
For CIDR constraint, it's going to be like this:
::
constraints=[
constraints.CustomConstraint('net_cidr')
]
We can apply these constraints to neutron properties or
template parameters.
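A minimal illustration of the validation such constraints could perform,
using only the standard library; a real implementation would likely reuse
existing helpers (e.g. oslo's netutils), so treat this as a sketch:

```python
import ipaddress
import re


def is_ip_addr(value):
    """Accept any well-formed IPv4 or IPv6 address."""
    try:
        ipaddress.ip_address(value)
        return True
    except ValueError:
        return False


def is_mac_addr(value):
    """Accept colon-separated MAC addresses such as fa:16:3e:00:11:22."""
    return re.fullmatch(r'([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}',
                        value) is not None


def is_net_cidr(value):
    """Accept a network CIDR; strict=True rejects host bits being set."""
    try:
        ipaddress.ip_network(value, strict=True)
        return True
    except ValueError:
        return False


print(is_ip_addr('2001:db8::1'), is_mac_addr('fa:16:3e:00:11:22'),
      is_net_cidr('10.0.0.0/24'))  # True True True
```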
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Ethan Lynn
Milestones
----------
Target Milestone for completion:
liberty-2
Work Items
----------
1. Add IPv4/IPv6 address format constraint
2. Add MAC address format constraint
3. Add network CIDR format constraint
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
====================================
Add support for external resources.
====================================
https://blueprints.launchpad.net/heat/+spec/external-resources
Problem description
===================
We have no way to instruct Heat to use an existing (external) physical
resource id.
Use case 1:
-----------
When running a stack over a period of time, a user might find it necessary
to operate on a server (rebuild it) out of band. But that then leaves
it out of sync with Heat's view of it. So this is a mechanism
to tell Heat "this is the new resource id, and I am now taking control
of it". At some later stage the user might want to tell Heat to take control
of it again, by removing the "external_reference" section. This fills
TripleO's needs (and, we assume, those of many other users who have to
operate an important stack for an extended time) by allowing
get_attr/get_resource to keep functioning while a resource is worked on
externally, with Heat leaving it alone. Then, when the user
is happy with its state, they can return it to Heat's control.
Use case 2:
-----------
There is an existing resource that we would like to use get_attr
on to retrieve useful information instead of doing this manually
and passing the info in via the Parameters.
In all these cases once this is done the resource can be marked
as external and any update or delete will be ignored (unless the user
removes the "external" information first).
Proposed change
===============
To achieve this the user would add the following to the template::
resources:
...
res_a:
type: OS::Nova::Server
external_id: the-server-id
properties:
...
Note:
1. There is no place for resource_data or metadata, as these are used by
actions that are no longer possible once the resource becomes external.
Some resource types will not survive going from external
back to normal because of the missing resource data/metadata.
This will be documented as well as possible, and the use of
resource_data should be discouraged by Heat developers.
2. Once the "external_id" attribute is present on the resource, the
properties will be ignored (but are allowed to be present). If the
"external_id" is then removed, the resource will be updated with the
properties.
Creating a resource with external_id.
-------------------------------------
This covers the second use case. Heat sees that an external_id is present
and logically performs an adopt plus a check (to make sure
the resource actually exists).
Updating a resource with external_id.
-------------------------------------
This covers the first use case. We overwrite the resource_id that Heat
has previously recorded and ignore all the properties. Check will also
be called here to make sure the resource exists. If the external_id is
different from the existing physical_resource_id, the existing resource
will be deleted.
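The create/update handling described above might be sketched as follows;
the method names and the ``_exists`` check are illustrative stand-ins, not
Heat's resource interface:

```python
class ExternalResource:
    """Toy model of a resource that can adopt an external physical id."""

    def __init__(self):
        self.resource_id = None

    def _exists(self, physical_id):
        # stand-in for a check() call against the real service
        return physical_id is not None

    def create(self, external_id=None, **properties):
        if external_id is not None:
            # use case 2: adopt and check the existing physical resource;
            # the given properties are ignored
            if not self._exists(external_id):
                raise ValueError("external resource %s not found"
                                 % external_id)
            self.resource_id = external_id
        else:
            self.resource_id = "new-physical-id"

    def update(self, external_id=None, **properties):
        if external_id is not None and external_id != self.resource_id:
            # use case 1: take over the new physical id; per the proposal
            # the previously tracked resource would be deleted here
            self.resource_id = external_id


res = ExternalResource()
res.create(external_id="the-server-id")
print(res.resource_id)  # the-server-id
```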
Removing the external_id.
-------------------------
To convert the resource into a "normal" resource the user must
remove the "external_id" attribute from the resource and do
a stack update. If the resource requires resource_data or metadata that
is missing (and cannot be recovered), this will fail and the resource
will remain external.
Deleting a stack with a resource that has external_reference.
-------------------------------------------------------------
When we have an external_reference, a deletion policy of RETAIN is
assumed (it will not be deleted).
Alternatives
------------
The user *could* use the current adopt/abandon mechanism, but its behaviour
differs slightly. Switching the physical resource id that way is also
awkward, requiring two API calls.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
asalkeld
Milestones
----------
Target Milestone for completion:
Liberty-2
Work Items
----------
* Code
* Functional tests.
* Documentation needs to be added to the template guide.
* Document limitations (resources that require resource_data
and metadata).
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=====================================================
Special form of get_attr which returns all attributes
=====================================================
https://blueprints.launchpad.net/heat/+spec/get-attr-all-attributes
Add new functionality for ``get_attr`` function which allows to return dict of
all attributes.
Problem description
===================
The current implementation of the base attribute "show" returns a JSON
representation of the resource. The content of this representation depends
on the particular resource. This output is also used in the native clients
to build the output of the command :code:`<client> <resource name>-show`.
Historically, some Heat resources have an attribute schema whose attributes
are taken from the output mentioned above. However, this does not mean
that all attributes in the schema are present in the "show" output.
This mostly concerns dynamic and custom attributes
that require additional calculation, e.g. the "addresses" attribute
of OS::Nova::Server, whose output also contains the related port id.
On the other hand, Heat also has resources with an empty attribute schema,
so only the "show" attribute is available for them.
In some cases, to avoid defining several outputs in a template, it is useful
to return all attributes from the attribute schema
(excluding the base attribute "show") in a single output.
This functionality should be added to the ``get_attr`` intrinsic function.
Proposed change
===============
Add a special form of ``get_attr`` as follows::

    { get_attr: [resource_name] }

With no extra arguments, it returns a dict of all attributes' outputs.
This behaviour of get_attr can be used only when the latest
heat_template_version is selected, so this case should be noted in the
documentation.
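Resolving this zero-argument form essentially amounts to collecting every
attribute except the base attribute "show", roughly as in the sketch below;
the names are assumptions for illustration, not Heat's resolver internals:

```python
def resolve_all_attributes(resource_attributes):
    """Return a dict of all attribute outputs, excluding 'show'."""
    return {name: resolve() for name, resolve in resource_attributes.items()
            if name != 'show'}


# attribute resolvers for a hypothetical server resource
attrs = {
    'first_address': lambda: '10.0.0.4',
    'name': lambda: 'server-1',
    'show': lambda: {'raw': 'json'},  # base attribute, deliberately skipped
}

print(resolve_all_attributes(attrs))
```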
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
<prazumovsky>
Milestones
----------
Target Milestone for completion:
liberty-1
Work Items
----------
* Add new functionality to ``get_attr``
* Add note to documentation about new functionality of ``get_attr``
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===================================
Adopt Oslo Guru Meditation Reports
===================================
https://blueprints.launchpad.net/heat/+spec/guru-meditation-report
This spec adopts Oslo Guru Meditation Reports in Heat. The feature will
enhance debugging capabilities of all Heat services, by providing
an easy and convenient way to collect debug data about current threads
and configuration, among other things, to developers, operators,
and tech support in production deployments.
Problem description
===================
Currently, Heat does not provide a way to collect state data from active
service processes. The only information available to deployers, developers,
and tech support is what the service actually logged. Additional data
would be useful for debugging and solving problems that occur during Heat
operation: stack traces of green and real threads,
pid/ppid info, package versions, the configuration as seen by the service,
etc.
Oslo Guru Meditation Reports provide an easy way to add support for
collecting live state info from any service. Report generation is triggered
by sending a special signal (USR1) to a service. Reports are written to
stderr and can be piped into the system log as needed.
Nova already supports Oslo Guru Meditation Reports.
Proposed change
===============
First, the oslo-incubator module (reports.*) should be synchronized into
heat tree. Then, each service process needs to initialize the error report
system prior to the service start().
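As a rough stdlib-only illustration of the underlying mechanism (the real
feature uses oslo's reports module, which adds configuration, package, and
greenthread information on top of plain stack dumps):

```python
import signal
import sys
import traceback


def dump_report(signum, frame):
    """On SIGUSR1, dump the current stacks of all threads to stderr."""
    for thread_id, stack in sys._current_frames().items():
        print("Thread %s:" % thread_id, file=sys.stderr)
        traceback.print_stack(stack, file=sys.stderr)


def setup_report_trigger():
    # called once by each service process, before it starts serving
    signal.signal(signal.SIGUSR1, dump_report)


setup_report_trigger()
```

An operator would then trigger a report with ``kill -USR1 <pid>`` and read
it from the service's stderr or the system log.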
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
zhangtralon
Milestones
----------
Target Milestone for completion:
liberty-1
Work Items
----------
* sync reports.* module from oslo-incubator
* adopt it in all heat services under heat/bin/
* add some developer docs on how to use this feature
Dependencies
============
[1] oslo-incubator module: http://git.openstack.org/cgit/openstack/oslo-incubator/tree/openstack/common/report
[2] nova guru meditation reports: https://blueprints.launchpad.net/nova/+spec/guru-meditation-report
[3] blog about nova guru reports: https://www.berrange.com/posts/2015/02/19/nova-and-its-use-of-olso-incubator-guru-meditation-reports/
[4] oslo.reports repo: https://github.com/directxman12/oslo.reports

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===================
Designate resources
===================
https://blueprints.launchpad.net/heat/+spec/heat-designate-resource
This blueprint adds heat resource plug-ins for OpenStack DNS as a service
(designate).
Problem description
===================
OpenStack provides DNS as a service (designate); more details are
available in the wiki: https://wiki.openstack.org/wiki/Designate
Heat has no resource plug-ins for the designate service; this
blueprint provides the required plug-ins.
Proposed change
===============
The designate service provides v1 and v2 APIs [1], and its python client [2]
supports only v1. So in this blueprint, v1 support is added
with the following resources.
* OS::Designate::Domain
Properties:
* name:
- required: True
- type: String
- update_allowed: False
- description: Domain name
* ttl:
- required: False
- type: int
- update_allowed: True
- description: Time To Live (Seconds)
* description:
- required: False
- type: String
- update_allowed: True
- description: Description of domain
* email:
- required: True
- type: String
- update_allowed: True
- description: Domain email
Attributes:
* serial:
- description: DNS domain serial
* OS::Designate::Server
Properties:
* name:
- required: True
- type: String
- update_allowed: True
- description: DNS Server Name
* OS::Designate::Record
Properties:
* domain:
- required: True
- type: String
- update_allowed: False
- description: DNS Domain id or name
- constraints: CustomConstraint('designate.domain')
* name:
- required: True
- type: String
- update_allowed: False
- description: DNS Name
* type:
- required: True
- type: String
- update_allowed: True
- description: DNS record type
- constraints:[A, AAAA, CNAME, MX, SRV, TXT, SPF, NS, PTR, SSHFP, SOA]
* data:
- required: True
- type: String
- update_allowed: True
- description: DNS record data (Ip address)
* ttl:
- required: False
- type: int
- update_allowed: True
- description: DNS record Time To Live (Seconds)
* description:
- required: False
- type: String
- update_allowed: True
- description: Description of DNS record
* priority:
- required: False
- type: int
- update_allowed: True
- description: DNS record priority
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Kanagaraj Manickam (kanagaraj-manickam)
Anant Patil (ananta)
Milestones
----------
Target Milestone for completion:
Liberty-1
Work Items
----------
* Implement proposed resource plug-ins
* Implement custom constraint for 'designate.domain'
* Add required test cases
Dependencies
============
[1] http://designate.readthedocs.org/en/latest/rest.html
[2] https://github.com/openstack/python-designateclient

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
====================================================
Scale-out and pid support for Manage Service listing
====================================================
https://blueprints.launchpad.net/heat/+spec/heat-manage-service-list-scale-out
Adds pagination, sorting and filtering capability to 'Manage service'
listing feature. In addition, each engine will be reported with pid.
Problem description
===================
In a scale-out environment, a cloud provider runs many heat-engines
to serve a high volume of requests. Once many engines are
running, 'Manage service' helps the cloud provider find the
currently running heat-engines and their status. Providers would expect to
retrieve these engine details with pagination, and to search them by the
host on which an engine runs, by its status, etc.
These capabilities are missing in the current release.
Proposed change
===============
* Pagination :
Add the following parameters to the REST API and heat CLI to enable
pagination when listing heat-engines as part of the 'Manage Service' feature:
* marker: Starting heat-engine service id
* limit: Number of records from starting index (default=20)
* with_count: If True (default), then provide following counts in
response:
- count: total number of heat-engines, as defined in
https://github.com/openstack/api-wg/blob/master/guidelines/counting.rst
* Sorting :
Add the following parameters to the REST API and heat CLI for sorting
heat-engine services in a given heat deployment:
* sort: List of service attributes in given priority sequence.
- Allowed attributes : created_at, updated_at, status, hostname
- Default key is created_at
- Default sorting direction is desc for created_at and updated_at and
for other allowed attributes, it will be asc.
- sort key value format to be aligned with API-WG
http://git.openstack.org/cgit/openstack/api-wg/tree/guidelines/pagination_filter_sort.rst
* Filtering :
Add the following parameters to the REST API and heat CLI for filtering
heat-engine services:
* hostname: List of heat-engines hostname
* status: List of heat-engines service status
To support a NOT condition, each list entry can take the form
'[not:]entry', e.g. 'not:FAILED'
Affected Service REST API:
``/v1/{tenant_id}/services?<above mentioned parameters as http query
parameters>``
Here, the 'filter' query parameter carries the filtering parameters, with a
value similar to the --filters option used in the CLI.
Affected Heat CLI:
(only shown the new parameters here)
``heat service-list [-f <KEY1=VALUE1;KEY2=VALUE2...>]
[-l <LIMIT>] [-m <ID>] [-s <KEY1:asc,KEY2,KEY3>]``
Optional arguments:
-f <KEY1=VALUE1;KEY2=VALUE2...>, --filters <KEY1=VALUE1;KEY2=VALUE2...>
Filter parameters to apply on returned heat-engine services. This
can be specified multiple times, or once with
parameters separated by a semicolon.
-l <LIMIT>, --limit <LIMIT>
Limit the number of heat-engine services returned.
-m <ID>, --marker <ID>
Only return heat-engines that appear after the given ID.
-s <KEY1:asc,KEY2,KEY3>, --sort <KEY1:asc,KEY2,KEY3>
Sorting keys in the given precedence and sorting directions.
* heat-engine PID:
In addition, when multiple heat-engines are running on a given host, it is
difficult to find the process id of a given heat-engine, which is
needed when troubleshooting issues. So a new field called 'pid' is to be
added to the Service model.
* heat-manage service list:
Add the same enhancements as in the CLI (this is required for admins when
all heat-engines are down and ``heat service-list`` becomes unusable.)
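The marker/limit/sort/filter semantics above can be sketched in plain
Python as follows; field names follow the spec, while the function itself
is illustrative rather than Heat's DB-layer implementation:

```python
def list_services(services, marker=None, limit=20, sort_key='created_at',
                  sort_dir='desc', status=None):
    """Filter, sort, then page a list of service records."""
    result = [s for s in services
              if status is None or s['status'] in status]
    result.sort(key=lambda s: s[sort_key], reverse=(sort_dir == 'desc'))
    if marker is not None:
        # page starts just after the record whose id matches the marker
        ids = [s['id'] for s in result]
        result = result[ids.index(marker) + 1:]
    return result[:limit]


services = [
    {'id': 1, 'created_at': 10, 'status': 'up'},
    {'id': 2, 'created_at': 20, 'status': 'down'},
    {'id': 3, 'created_at': 30, 'status': 'up'},
]

page = list_services(services, limit=2, status=['up'])
print([s['id'] for s in page])  # newest first -> [3, 1]
```

The real implementation would push the equivalent WHERE/ORDER BY/LIMIT
clauses down into the ``service_get_all`` DB API instead of filtering in
Python.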
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Kanagaraj Manickam (kanagaraj-manickam)
Milestones
----------
Target Milestone for completion:
liberty-1
Work Items
----------
* DB model changes:
* Update Service table with new column named 'pid'
* DB API changes:
* 'service_get_all' to be updated to handle with pagination parameters and
filtering parameters
* Object changes:
* Add pid and corresponding changes for db api changes in the Service object
methods
* RPC API changes:
* Enhance 'list_services' to handle pagination and filtering capabilities
* Heat engine service:
* Enhance the method 'service_manage_report' in EngineService to update the
pid of current engine.
* REST API changes:
* Update ServiceController 'index' to handle pagination and filtering
capabilities
* heat CLI:
* 'heat service-list' to handle pagination and filtering capabilities
* heat-manage command:
* Add the same enhancements as in the CLI.
* Add required test cases
* Documentation:
* update documentation for REST API (api-sites), heat CLI (python-heatclient)
and heat-manage tool
Dependencies
============
None

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=========================================
Enhance Manage Service with service-stack
=========================================
https://blueprints.launchpad.net/heat/+spec/heat-manage-service-stack
Retrieves IN_PROGRESS stacks being handled in the given heat-engine and
vice-versa.
Problem description
===================
In convergence mode, a given stack is handled by one or more heat-engines,
and a given engine handles many stacks; the latter applies in
non-convergence mode as well.
This feature will help operators track the IN_PROGRESS stacks of a given
heat-engine, or the engines handling a given stack, and is also useful
when troubleshooting issues.
Proposed change
===============
To list the stacks for the given heat-engine:
Update stack-list command filter argument with additional parameter engine-id
as follows:
``stack-list -f engine-id <engine-id>``
Here, stack-list already supports providing filter parameters multiple
times, so a user can filter stacks for multiple engines as well.
Corresponding REST API would be:
``GET on /v1/{tenant_id}/stacks?filter=engine_id:<engine-id>``
Here, multiple engine ids can be provided, comma separated.
To list the heat-engines handing the given stack:
Update heat CLI with following additional parameters:
``service-list --stack-id <stack-id>``
* stack-id - to report the list of heat-engines handling the given stack.
Corresponding REST API would be:
``GET on /v1/{tenant_id}/services?filter=stack_id:<stack-id>``
``GET on /v1/<tenant-id>/services``
Here, multiple stack ids can be provided, comma separated.
NOTE: This blueprint can be extended to also report the IN_PROGRESS
resources in a given heat-engine.
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Kanagaraj Manickam (kanagaraj-manickam)
Milestones
----------
Target Milestone for completion:
liberty-1
Work Items
----------
* DB API changes:
* Add new API 'service_get_stacks_by' with two parameters as described in the
solution section.
* Object changes:
* Add corresponding changes for db api changes in the Service object methods
* RPC API changes:
* Add corresponding RPC API for the new DB API 'service_get_stacks_by'
* REST API changes:
* Update ServiceController and StackController to handle the new REST APIs
as defined in the solution section.
* Heat CLI
* Updated required CLI as defined in the solution section.
* Heat-manage command:
* Add the same enhancements as in the CLI (this is required by admins in
case all heat-engines are down)
* Add required test cases
* Documentation:
* update documentation for REST API, heat CLI and heat-manage tool
* update CLI and API documents to mention that engine-id parameter is
only for admin users.
Dependencies
============
None.

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==================================================
Resource plugin for Monasca Alarm and Notification
==================================================
https://blueprints.launchpad.net/heat/+spec/support-monasca-alarm-notification
Adds resource plugin for Monasca Alarm and Notification
Problem description
===================
OpenStack provides monitoring-as-a-service (monasca); more details are
available at https://wiki.openstack.org/wiki/Monasca
In heat, resource plug-ins are not available for the monasca service. This
blueprint is created to provide the required plug-ins for monasca alarm and
notification.
Proposed change
===============
Add the following resource plugins:
* OS::Monasca::Alarm:
* name
- type: string
- required: false
- default: physical_resource_name
- update_allowed: false
- description:
* description
- type: string
- required: false
- update_allowed: true
* expression
- type: string
- required: true
- update_allowed: true
* match_by
- type: string
- required: false
- update_allowed: true
* severity
- type: list
- required: false
- update_allowed: true
- allowed_values: [low, medium, high, critical]
- default: low
* alarm_actions
- type: list
- required: false
- update_allowed: true
- List item constraint: custom constraint 'monasca.notification'
* ok_actions
- type: list
- required: false
- update_allowed: true
- List item constraint: custom constraint 'monasca.notification'
* undetermined_actions
- type: list
- required: false
- update_allowed: true
- List item constraint: custom constraint 'monasca.notification'
* OS::Monasca::Notification:
* name
- type: string
- required: false
- default: physical_resource_name
- update_allowed: false
* type
- type: string
- required: true
- update_allowed: true
- allowed_values: [email, webhook, pagerduty]
* address
- type: string
- required: true
- update_allowed: true
* Custom constraint 'monasca.notification'
As monasca provides Notification separately from the Alarm, the following
additional resource is added to be compatible with other existing alarm
resources in heat; all of its actions are treated as webhooks.
If the user-provided webhook does not exist in monasca, heat will create a
new notification with that webhook before creating the alarm; otherwise,
the existing notification for that webhook will be used.
* OS::Monasca::AlarmI:
* name
- type: string
- required: false
- default: physical_resource_name
- update_allowed: false
- description:
* description
- type: string
- required: false
- update_allowed: true
* expression
- type: string
- required: true
- update_allowed: true
* match_by
- type: string
- required: false
- update_allowed: true
* severity
- type: list
- required: false
- update_allowed: true
- allowed_values: [low, medium, high, critical]
- default: low
* alarm_actions
- type: list
- required: false
- update_allowed: true
- List item constraint: webhook URL
* ok_actions
- type: list
- required: false
- update_allowed: true
- List item constraint: webhook URL
* undetermined_actions
- type: list
- required: false
- update_allowed: true
- List item constraint: webhook URL
* All of these resource plugins will be supported from version '5.0.0'
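To illustrate how these resources might be combined, a minimal template
sketch follows. This is not part of the spec itself; the property values,
the notification address and the alarm expression are hypothetical examples.

.. code-block:: yaml

    resources:
      notification:
        type: OS::Monasca::Notification
        properties:
          type: email
          address: ops@example.com

      alarm:
        type: OS::Monasca::Alarm
        properties:
          description: Alarm on high CPU usage
          expression: avg(cpu.user_perc) > 85
          severity: high
          alarm_actions:
            - {get_resource: notification}

Here the alarm_actions entry references the notification resource, which is
what the custom 'monasca.notification' constraint would validate.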
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
duanlg@live.cn
kanagaraj-manickam
Milestones
----------
Target Milestone for completion:
Liberty-1
Work Items
----------
* Implement Monasca client plugin
* Implement custom constraint 'monasca.notification'
* Implement alarm and notification resource plugins as detailed above
* Implement the logic to load the monasca resources only when
python-monascaclient is available.
* Implement required test cases.
* Add sample template in the heat-templates github repo.
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/heat/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://sphinx-doc.org/rest.html
To test out your formatting, see http://www.tele3.cz/jbar/rest/rest.html
================
Python34 Support
================
https://blueprints.launchpad.net/heat/+spec/heat-python34-support
This spec aims to bring Python 3.4 support to Heat.
Problem description
===================
Heat isn't compatible with Python 3.x. The blocker for migrating Heat
was eventlet; now that eventlet fully supports Python 3, it is possible
for us to run Heat unit tests in a Python 3.4 environment. Once
all the dependencies of Heat are functionally Python 3 compatible, we
should be able to run integration tests against Heat in a devstack
environment.
Proposed change
===============
The first step towards Python 3.4 compatibility for Heat would be to
get the unit tests running successfully in a py34 environment. We need
to add a new py34 environment in tox for this and start testing individual
test files. To avoid regressing on old test files, we should add a separate
file which will consist of all the test files that have already been
verified in a Python3 environment.
None of these changes should break existing unit tests or
change the functionality in any way. The existing gate tests should take
care of this.
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
sirushtim
Milestones
----------
Target Milestone for completion:
liberty-1, could stretch to liberty-2 depending on how many
incompatibilities exist while running tests.
Work Items
----------
- Use 2to6 partially to automatically fix some incompatibilities and satisfy
flake8.
- Create a tox py34 env that will run off a meta-testfile which will consist
of test file names that have already been verified to work with py34. This
env will also use a different requirements file since there are two
dependencies qpid-python and MySQL-python which aren't currently Python3
compatible.
- Add a voting python34-partial gate job that will run the above env.
- Migrate all the unit tests to be compatible with Python 3.4 either one-by-one
or migrate tests in alphabetical order, whichever is reasonably sized and
easier to review. This also means we will fix the modules/files that each
test case imports to test and make them python34 compatible.
While migrating the tests, the strategy with mox is to use mox3 instead of
converting them to mock as much as possible.
- Once migration is complete for all the tests, delete the meta-testfile and
rename the gate job to gate-heat-python34.
- Remove dependencies on qpid-python and MySQL-python and merge
requirements.txt for python-2.7 and python-3.4.
- Once dependencies of Heat are functionally Python 3.4 compatible, create a
DevStack gate job which will run the Python 3.4 version of Heat.
Dependencies
============
Current dependencies of Heat that are/were not compatible with Python 3.4:
requirements.txt
- qpid-python: Used in install.sh. Can be removed.
- PasteDeploy: Needs to be functionally tested. The tests pass on Python 3.4
and the classifiers were just added.[1]
- oslo.messaging: Some of the drivers/executors don't work at the moment
but are being worked on by Victor Stinner.
- oslo.db: MySQL-python dialect isn't compatible with Python 3.4. There's a
Python 3.4 port for MySQL-python however.
- sqlalchemy-migrate: There are py34 tests running for every patch of
  sqlalchemy-migrate and the classifiers will be added for it.[2]
test-requirements.txt
- MySQL-python: same as under oslo.db above. Can be removed.
- mox: needs to be replaced by mox3 until we move to mock completely.
[1] https://bitbucket.org/ianb/pastedeploy/commits/f30a7d518c6a79fcddfbe3f622337f81e41cb6a5
[2] https://review.openstack.org/#/c/174738/
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
====================
Search Resource Type
====================
https://blueprints.launchpad.net/heat/+spec/heat-resource-type-search
Enable filtering capabilities for resource types loaded in the given heat
deployment.
Problem description
===================
Search and get resource type based on
* resource type,
* supported since,
* support status
Proposed change
===============
Add Following parameters in REST API and heat CLI for filtering heat
resource type:
* resource_type: List of glob matching expressions (like ``*``)
* supported_since: Heat version since which the resource type is supported.
* supported_status: List of statuses. Each could be one of UNKNOWN,
SUPPORTED, PROTOTYPE, DEPRECATED, UNSUPPORTED
To support NOT condition, each of the list entry could be in the form of
'[not:]entry' like 'not:DEPRECATED'
Affected Service REST API:
``/v1/{tenant_id}/resource_types?filter=<query parameters>``
Here, the 'filter' query parameter will be used with its value formatted
similarly to the --filters option used in the CLI.
Affected Heat CLI:
(only shown the new parameters here)
``heat resource-type-list [-f <KEY1=VALUE1;KEY2=VALUE2...>]``
Optional arguments:
-f <KEY1=VALUE1;KEY2=VALUE2...>, --filters <KEY1=VALUE1;KEY2=VALUE2...>
Filter parameters to apply on returned resource types. This
can be specified multiple times, or once with
parameters separated by a semicolon.
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Kanagaraj Manickam (kanagaraj-manickam)
Milestones
----------
Target Milestone for completion:
liberty-1
Work Items
----------
* update Resource Type REST API controller with additional filtering ability.
* update the heat CLI as described in the solution section
* Add required additional test cases.
* Add documentation for CLI, REST API updates
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================================================
Stack resource filtering, sorting and pagination
================================================
https://blueprints.launchpad.net/heat/+spec/heat-stack-resource-search
Enhance the filtering, sorting and pagination abilities for resources in a
given stack.
Problem description
===================
In larger stacks, heat allows 1000 resources per stack by default
(max_resources_per_stack, which is configurable), and it would help users to
retrieve the resources in a stack if pagination, sorting and filtering
abilities based on certain resource attributes were provided.
Proposed change
===============
* Pagination :
Add Following parameters in REST API and heat CLI for enabling pagination
for given stack resources:
* marker: Starting Resource id (default=0)
* limit: Number of records from starting index (default=20)
* with_count: If True (default), then provide following counts in
response:
- count: Total number of resources in a given stack, as defined in
  https://github.com/openstack/api-wg/blob/master/guidelines/counting.rst
* Sorting :
Add Following parameters in REST API and heat CLI for sorting resources
in a given stack:
* sort: List of resource attributes in given priority sequence.
- Allowed attributes : created_at, updated_at, status, name
- Default key is created_at
- Default sorting direction is desc for created_at,
updated_at and asc for status, name.
- sort key value format to be aligned with API-WG
http://git.openstack.org/cgit/openstack/api-wg/tree/guidelines/pagination_filter_sort.rst
* Filtering :
Add Following parameters in REST API and heat CLI for filtering resources in
a given stack:
* type: List of valid Resource type
* status: List of valid resource statuses
* name: Resource name
* action: List of valid resource actions
* uuid: List of resource uuid
* physical_resource_id: List of physical resource id
To support NOT condition, each of the list entry could be in the form of
'[not:]entry' like 'not:FAILED'
Affected Resource REST API:
``/v1/{tenant_id}/stacks/{stack_name}/{stack_id}/resources
?<query parameters>``
Here, to provide the filtering parameters, the 'filter' query parameter will
be used with its value formatted similarly to the --filters option used in
the CLI.
Affected Heat CLI:
(only shown the new parameters here)
``heat resource-list [-f <KEY1=VALUE1;KEY2=VALUE2...>]
[-l <LIMIT>] [-m <ID>] [-s <KEY1:asc,KEY2,KEY3>]``
Optional arguments:
-f <KEY1=VALUE1;KEY2=VALUE2...>, --filters <KEY1=VALUE1;KEY2=VALUE2...>
Filter parameters to apply on returned resources. This
can be specified multiple times, or once with
parameters separated by a semicolon.
-l <LIMIT>, --limit <LIMIT>
Limit the number of resources returned.
-m <ID>, --marker <ID>
Only return resources that appear after the given resource ID.
-s <KEY1:asc,KEY2,KEY3>, --sort <KEY1:asc,KEY2,KEY3>
Sorting keys in the given precedence and sorting directions.
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Kanagaraj Manickam (kanagaraj-manickam)
Milestones
----------
Target Milestone for completion:
liberty-1
Work Items
----------
* Update Resource REST API controller with additional capabilities for
pagination, sorting and filtering
* Update the heat CLI as described in the solution section
* Add required RPC and DB api with required micro version.
* Add required additional test cases.
* Add documentation for CLI (python-heatclient), REST API (api-sites)
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
============================
Heat template functions list
============================
https://blueprints.launchpad.net/heat/+spec/template-function-list
Add an ability to get the list of available functions for a given template
version via REST API and CLI.
Problem description
===================
There is currently no way to get the list of functions supported by
a given template version. Such a list is useful for template
writers, especially for HOT builders.
Proposed change
===============
Add following command to heat CLI:
``heat template-function-list <template_version>``
Where `template_version` is a template version given in the
`heat template-version-list` command output. That command returns
the list of available template versions with the corresponding type
(cfn or hot) for user convenience.
Corresponding REST API would be the following:
``GET on /template_versions/<template_version>/functions``
Possible output:
+--------------+--------------------------------------------------------+
| Functions |Description |
+==============+========================================================+
| Fn::GetAZs |Returns the Availability Zones within the given region. |
+--------------+--------------------------------------------------------+
| get_param |A function for resolving parameter references. |
+--------------+--------------------------------------------------------+
| get_resource |A function for resolving resource references. |
+--------------+--------------------------------------------------------+
| Ref |A function for resolving parameter references. |
+--------------+--------------------------------------------------------+
| ... | |
+--------------+--------------------------------------------------------+
Alternatives
------------
None
Implementation
==============
The needed template can be obtained from the template manager via
_get_template_extension_manager() in the template module. Each
template has its list of functions as a class attribute. The description
of each function will be obtained via the __doc__ attribute of the
function class. Additional changes are needed to the REST API controller
and RPC.
Assignee(s)
-----------
ochuprykov
tlashchova
skraynev
Milestones
----------
Target Milestone for completion:
liberty-2
Work Items
----------
* Update Resource REST API controller with additional capabilities
* Update the heat CLI
* Add required RPC
* Add required additional test cases.
* Add documentation for CLI (python-heatclient), REST API (api-sites)
Dependencies
============
https://blueprints.launchpad.net/heat/+spec/template-version-list
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===========================
Heat template versions list
===========================
https://blueprints.launchpad.net/heat/+spec/template-version-list
Add an ability to get list of available template versions by
REST API and CLI
Problem description
===================
There is no such command in heat now. It is useful for helping
end-users to write heat templates, especially for HOT builders.
Another use-case is to get the list of template versions that
are available on the current deployment.
Proposed change
===============
Add the following command to heat CLI:
``heat template-version-list``
Output may be the following:
+--------------------------------------+-----+
| Versions |Type |
+======================================+=====+
| heat_template_version.2013-05-23 |hot |
+--------------------------------------+-----+
| heat_template_version.2014-10-16 |hot |
+--------------------------------------+-----+
| heat_template_version.2015-04-30 |hot |
+--------------------------------------+-----+
| HeatTemplateFormatVersion.2012-12-12 |cfn |
+--------------------------------------+-----+
| AWSTemplateFormatVersion.2010-09-09 |cfn |
+--------------------------------------+-----+
Corresponding REST API would be the following:
``GET on /template_versions``
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
ochuprykov
tlashchova
skraynev
Milestones
----------
Target Milestone for completion:
liberty-1
Work Items
----------
* Update REST API controller with additional capabilities
* Update the heat CLI
* Add required RPC
* Add required additional test cases.
* Add documentation for CLI (python-heatclient), REST API (api-sites)
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=================================================
Keystone Resource plugin for Service and Endpoint
=================================================
https://blueprints.launchpad.net/heat/+spec/keystone-resource-service-endpoint
Adds resource plugin for Keystone Service and Endpoint.
Problem description
===================
In Heat-based cloud deployment tools such as TripleO, vendors automate
the creation of Keystone Region, Service and Endpoint by some means such as
shell scripting. This is being repeated across multiple vendors and could be
automated by a heat template if heat provided resource plugins for Keystone
Region, Service and Endpoint. So this blueprint is created to provide Heat
resource plugins for Keystone Service and Endpoint.
Proposed change
===============
Add the following resources under contrib/heat_keystone by using the
keystone v3 API.
* OS::Keystone::Service
* name (optional - defaults to self.physical_resource_name())
* description (optional)
* type (required)
* OS::Keystone::Endpoint
* region (optional)
* service_id (required)
* interface: 'public', 'admin' or 'internal'
* url (required)
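A minimal template sketch using these two resources might look as follows.
The service name, region name and endpoint URL here are hypothetical
examples, not values defined by this spec.

.. code-block:: yaml

    resources:
      service:
        type: OS::Keystone::Service
        properties:
          name: heat
          type: orchestration
          description: Heat orchestration service

      endpoint:
        type: OS::Keystone::Endpoint
        properties:
          region: RegionOne
          service_id: {get_resource: service}
          interface: public
          url: http://10.0.0.1:8004/v1/%(tenant_id)s

The endpoint references the service by resource, which is what the proposed
service constraint would validate.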
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Kanagaraj Manickam (kanagaraj-manickam)
Milestones
----------
Target Milestone for completion:
liberty-1
Work Items
----------
* Add contrib resources for those resources defined in solution section
* Add constraints for service
* Add required test cases
* Add sample templates in heat-template project
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================
Implement Magnum resources
==========================
https://blueprints.launchpad.net/heat/+spec/magnum-resources
This Blueprint proposes to add support for Magnum resources.
Problem description
===================
Magnum is a container management service that is currently not supported by
Heat. Resources will be added to Heat to support:
* Baymodel, an object that stores template information about the bay, which
is used to create new bays consistently.
* Bay, a collection of node objects where work is scheduled.
* Pod, a collection of containers running on one physical or virtual machine.
* Service, an abstraction which defines a logical set of pods and a policy
by which to access them.
* ReplicationController, an abstraction for managing a group of pods to
ensure a specified number of pods are running.
* Node, a baremetal or virtual machine where work executes.
* Container, a docker container.
Proposed change
===============
Magnum resources are not integrated, so they will be added to the contrib
directory.
A Magnum client plugin will be added for communication with Magnum, which has
its own requirements. The following resources will be added:
Add the OS::Magnum::BayModel resource
.. code-block:: yaml

    resources:
      model:
        type: OS::Magnum::BayModel
        properties:
          name: String
          image: String
          keypair: String
          external_network: String
          dns_nameserver: String
          flavor: String
          docker_volume_size: Int
          network_driver: String
          http_proxy: String
          https_proxy: String
          no_proxy: String
          labels: String
          insecure: Boolean
Add the OS::Magnum::Bay resource
.. code-block:: yaml

    resources:
      bay:
        type: OS::Magnum::Bay
        properties:
          name: String
          baymodel: { get_resource: model }
          node_count: Int
          discovery_url: String
          bay_create_timeout: Int
Add the OS::Magnum::Pod resource
.. code-block:: yaml

    resources:
      pod:
        type: OS::Magnum::Pod
        properties:
          bay: { get_resource: bay }
          manifest: SOFTWARE_CONFIG
          manifest_url: String
Add the OS::Magnum::Service resource
.. code-block:: yaml

    resources:
      service:
        type: OS::Magnum::Service
        properties:
          bay: { get_resource: bay }
          manifest: SOFTWARE_CONFIG
          manifest_url: String
Add the OS::Magnum::ReplicationController resource
.. code-block:: yaml

    resources:
      rc:
        type: OS::Magnum::ReplicationController
        properties:
          bay: { get_resource: bay }
          manifest: SOFTWARE_CONFIG
          manifest_url: String
Add the OS::Magnum::Node resource
.. code-block:: yaml

    resources:
      node:
        type: OS::Magnum::Node
        properties:
          name: String
          type: String
          image: String
Add the OS::Magnum::Container resource
.. code-block:: yaml

    resources:
      container:
        type: OS::Magnum::Container
        properties:
          name: String
          type: String
          command: String
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
<rpothier@cisco.com>
Milestones
----------
Target Milestone for completion:
liberty-1
Work Items
----------
* Add Magnum client plugin for Heat
* Add Magnum BayModel and Bay resources
* Add Magnum Pod, Service and ReplicationController resources
* Add Magnum Node and Container resources
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================
Implement Manila resources
==========================
https://blueprints.launchpad.net/heat/+spec/add-manila-resources
Add support for Manila resources in Heat.
Manila provides the management of shared or distributed filesystems
(e.g. NFS, CIFS). Using Manila we can create the following resources:
* Share - unit of storage with a protocol, a size, and an access list;
* Share type - administrator-defined "type of service";
* Share network - tenant-defined object that informs Manila about the
security and network configuration for a group of shares;
* Security service - set of options that defines a security domain for
a particular shared filesystem protocol.
Problem description
===================
Heat doesn't support Manila resources currently.
Proposed change
===============
Add a Manila client plugin and implement the following resource types:
1. OS::Manila::Share
Properties:
* share_protocol (required, one of: NFS, CIFS, GlusterFS, HDFS)
* size (required)
* snapshot (optional)
* name (optional)
* metadata (optional)
* share_network (optional)
* description (optional)
* share_type (required)
* is_public (optional, defaults to False)
* access_rules (list, optional)
* access_to (optional)
* access_type (optional, one of: ip, domain)
* access_level (optional, one of: ro, rw)
Attributes:
* availability_zone
* host
* export_locations
* share_server_id
* created_at
* status
2. OS::Manila::ShareType
Properties:
* name (required)
* driver_handles_share_servers (required, one of true/1, false/0)
* is_public (optional, defaults to True)
3. OS::Manila::ShareNetwork
Properties:
* neutron_network (optional)
* neutron_subnet (optional)
* nova_network (optional)
* name (optional)
* description (optional)
* security_services (list, optional)
Attributes:
* segmentation_id
* cidr
* ip_version
* network_type
4. OS::Manila::SecurityService
Properties:
* type (required, one of: ldap, kerberos, active_directory)
* dns (optional)
* server (optional)
* domain (optional)
* user (optional)
* password (optional)
* name (optional)
* description (optional)
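A minimal template sketch combining a share network and a share might look
as follows. The parameter names and property values are hypothetical
examples, not values defined by this spec.

.. code-block:: yaml

    resources:
      share_network:
        type: OS::Manila::ShareNetwork
        properties:
          neutron_network: {get_param: network_id}
          neutron_subnet: {get_param: subnet_id}

      share:
        type: OS::Manila::Share
        properties:
          share_protocol: NFS
          size: 1
          share_type: default
          share_network: {get_resource: share_network}

The share's export_locations attribute could then be used in outputs or
passed to other resources that mount the share.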
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
tlashchova
Assisted by:
ochuprykov
kkushaev
Milestones
----------
Target Milestone for completion:
Liberty-1
Work Items
----------
* Add Manila client plugin for Heat
* Add Manila share resource
* Add Manila share network resource
* Add Manila share type resource
* Add Manila security service
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=============================
Multi-region scenario test
=============================
https://blueprints.launchpad.net/heat/+spec/multi-region-test
Add a scenario test for Multi-Region Orchestration.
Problem description
===================
Heat supports Multi-Region Orchestration through remote stacks. While remote
stacks themselves are tested with unit and functional tests, there are no
scenario tests which test the creation of remote stacks across multiple
regions.
Proposed change
===============
This change will add a scenario test which creates two remote stacks in
different regions and checks if their creation was successful.
This will require a multinode test setup with two distinct devstack instances,
each configured with its own region. Multinode test setups are already possible
in infra, but the configuration of regions requires changes to devstack-gate
and openstack-infra/project-config to allow this test to run as a gate test.
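The remote stacks described above could be sketched roughly as follows,
assuming the existing OS::Heat::Stack resource with its region_name context
property; the region names and the simple.yaml child template are
hypothetical examples.

.. code-block:: yaml

    resources:
      remote_stack_one:
        type: OS::Heat::Stack
        properties:
          context:
            region_name: RegionOne
          template: {get_file: simple.yaml}

      remote_stack_two:
        type: OS::Heat::Stack
        properties:
          context:
            region_name: RegionTwo
          template: {get_file: simple.yaml}

The scenario test would create this parent stack and then verify the
outputs of both remote stacks.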
Alternatives
------------
In case it turns out to be impossible to create a multinode test setup with
multiple regions in the openstack infrastructure, this scenario test could also
be added as a local-only test which is not run at the gate.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
dgonzalez
Milestones
----------
Target Milestone for completion:
liberty-3
Work Items
----------
1. Implement scenario test which does the following:
- Create a stack containing two simple remote stacks
- Both remote stacks target different regions
- After successful creation, the output of the remote stacks is checked
2. Include scenario test in devstack-gate
- Configure devstack multinode setup in project-config
- Assign regions to the devstack nodes
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
====================================
Intrinsic function to split strings
====================================
https://blueprints.launchpad.net/heat/+spec/str-split
From HOT 2014-10-16 we no longer support the AWS compatible Fn::Split intrinsic
function in HOT templates, which means there's no way to split a string into
a list of components by delimiter.
Problem description
===================
The current use case is to avoid doing this in TripleO templates, but it's
likely a generally useful addition:
.. code-block:: yaml

    ip_subnet:
      # FIXME: this assumes a 2 digit subnet CIDR (need more heat functions?)
      description: IP/Subnet CIDR for the storage network IP
      value:
        list_join:
          - ''
          - - {get_attr: [StoragePort, fixed_ips, 0, ip_address]}
            - '/'
            - {get_attr: [StoragePort, subnets, 0, cidr, -2]}
            - {get_attr: [StoragePort, subnets, 0, cidr, -1]}
This is both fragile and cumbersome; it'd be better to allow easily splitting
on the "/" delimiter.
The second use-case is to enable joining of two (or more) lists together:
.. code-block:: yaml

    parameters:
      ExtraConfig:
        type: json
        default: []

    resources:
      type: OS::Heat::StructuredConfig
      properties:
        group: os-apply-config
        config:
          hiera:
            hierarchy:
              - controller
              - object
              - ceph
              - common
              - {get_param: ExtraConfig}
Here, the desired behavior is to merge/append the contents of the ExtraConfig
parameter, which may be either json or comma_delimited_list type, such that the
"hierarchy" list contains both the hard-coded items and whatever list is
provided via ExtraConfig.
Proposed change
===============
Add a str_split intrinsic function, such that the first example becomes:
.. code-block:: yaml

    list_join:
      - ''
      - - {get_attr: [StoragePort, fixed_ips, 0, ip_address]}
        - '/'
        - {str_split: ['/', {get_attr: [StoragePort, subnets, 0, cidr]}, 1]}
This means we can strip the subnet mask from the CIDR without hard-coded
assumptions around it always being 2 digits. Path-based lookup by index will
be supported, i.e. the same syntax as get_attr and get_param: when an index
is specified, the list item at that index is returned; otherwise the
entire list is returned. This is consistent with current get_attr behavior
and avoids forcing the user to use Fn::Select to extract a list item.
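The two lookup forms can be summarized with a small example; the input
strings below are hypothetical, chosen only to show the proposed semantics:

.. code-block:: yaml

    # With no index, str_split returns the entire list:
    whole_list: {str_split: [',', 'one,two,three']}    # -> [one, two, three]
    # With an index, the list item at that position is returned:
    single_item: {str_split: [',', 'one,two,three', 1]}  # -> two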
To enable the second use-case, we can use the new str_split function with
an enhanced version of list_join which can optionally take a list of lists,
i.e. it is capable of joining multiple lists on a delimiter:
.. code-block:: yaml

    config:
      hiera:
        hierarchy:
          str_split:
            - ','
            - list_join:
                - ','
                - - controller
                  - object
                  - ceph
                  - common
                - {get_param: ExtraConfig}
Alternatives
------------
For the list merging I was thinking we could use the YAML << merge directive,
but some experiments indicate this will only merge maps, not lists, which
are required in this case.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
shardy
Milestones
----------
Target Milestone for completion:
liberty-1
Work Items
----------
Changes to engine:
- Bump HOT template version for Liberty
- Enhance list_join to support optionally joining lists of lists.
- Add a new str_split function with associated tests.
Documentation changes:
- Update HOT specification as part of the commits above.
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
===================================================
Allow validation and inspection of nested resources
===================================================
https://blueprints.launchpad.net/heat/+spec/nested-validation
Currently, there's no way to recursively validate all nested templates other
than doing a stack-create and waiting for it to fail. Additionally, there's
no way to inspect the interfaces exposed by a nested template, e.g. those
accessible via parameter_defaults. Adding more comprehensive support for
pre-create validation (e.g. heat template-validate) will allow solving both
of these issues.
Problem description
===================
heat template-validate takes an optional environment argument, but it doesn't
parse the files and include a "files" map such as is consumed by create/update.
As a result, we explicitly ignore validation of any nested stacks, even when
they are specified in the environment. This means it's hard to validate
nested templates at all before create, other than perhaps by validating
each template individually (this part could probably be considered a bug).
However, it's also problematic when a nested template exposes interfaces we
wish to interact with via parameter_defaults in the environment, i.e. not
specified by the parent template via properties/parameters. What is needed
is a way to validate the entire tree, and return a schema of the entire
tree, not only the parameters exposed by the top-level template.
For example, consider this workflow:
1. Choose parent template.
2. Choose a set of other templates and environments (or have this
programmatically generated e.g by pulling templates from one or more known
locations/paths)
3. Inspect that group to figure out the resource-type level
capabilities/options. These are the first major choices a user will make,
to determine the nested stack implementations for each type.
4. The user selects a nested stack choice for each one that has more than one
choice (note https://review.openstack.org/#/c/196656/ discusses approaches
for this selection to be made programmatically via the choices made in (3))
5. Reinspect given those major options for the full set of parameters such that
the user may be prompted for mandatory and optional parameters, including
those not exposed by the top-level parent template.
6. The user enters in values for all of the parameters and the stack gets
created.
The topic of this spec is step 5 above, where we wish to build a schema of
parameters for the entire tree.
For a more concrete example consider this pattern (which is used commonly,
e.g in TripleO) - the parent template creates a server, then passes the ID to
some nested template which then performs some configuration steps, which are
intended to be pluggable and the interfaces are not known at the time of
writing the parent template:
.. code-block:: yaml

    resource_registry:
      OS::TripleO::ControllerExtraConfigPre: extraconfig/some_extra.yaml

    parameter_defaults:
      SomeExtraParam: "extra foo"
Here, any template may be hooked in via ControllerExtraConfigPre, and the
parent template need not know anything about the parameters exposed, other than
that a server ID may be passed in, any extra parameters are wired in at the
time of defining ControllerExtraConfigPre, e.g via parameter_defaults.
.. code-block:: yaml

    heat_template_version: 2015-04-30
    description: Configure some extra config on your server
    parameters:
      server:
        description: ID of the server to apply config to
        type: string

      # Config specific parameters, to be provided via parameter_defaults
      SomeExtraParam:
        type: string
        default: "bar"

    resources:
      ExtraServerConfig:
        type: OS::Heat::StructuredConfig
        properties:
          group: os-apply-config
          config: <some config>

      ExtraServerDeployment:
        type: OS::Heat::StructuredDeployment
        properties:
          config: {get_resource: ExtraServerConfig}
          server: {get_param: server}
          input_values:
            ImplementationSpecificStuff: {get_param: SomeExtraParam}
Here we can see the nested template consuming both the parent provided
parameter (server) and the environment provided one (SomeExtraParam).
Currently, there's no way, other than knowledge of the templates (or
inspection/parsing by non-heat tools), to know that SomeExtraParam
is a required additional parameter when choosing extraconfig/some_extra.yaml.
Proposed change
===============
Firstly, we need to fix the basic syntax/structure validation part, which will
mean passing an optional "files" map to the validate API (same as for create
and update), then instead of skipping TemplateResource validation (in
service.py validate_template()) we can recurse into the child templates and
validate (similar to what happens on pre-create except we'll tolerate missing
parameters).
Then, we need to expose additional parameter information, beyond what is
currently exposed (parent template parameters only); this could be done via a
new --show-nested (-n) option::

    heat template-validate -f parent.yaml -e env.yaml --show-nested
Below is a sample output when run on a group of templates with the following
properties:
* The parent template contains a single resource named ``level-1-resource``
of type ``demo::Level1``
* The ``parent-p1`` parameter is defined by the parent template
* The ``demo::Level1`` template contains a parameter that must be specified
by the parent and one that has a default. The latter is meant to represent
the type of value that is specified as a ``parameter_default``.
* The ``level-1-resource`` resource contains a resource named
``level-2-resource`` of type ``demo::Level2``.
* Similarly, the ``demo::Level2`` template defines a non-defaulted parameter
that must be specified by the parent and one that may optionally be
overridden through ``parameter_defaults``.
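For reference, the ``demo::Level1`` template described by the bullets above
could look like the following sketch (only the parameter names and values
appearing in the sample output below are taken from this spec; the rest is
hypothetical):

.. code-block:: yaml

    heat_template_version: 2015-04-30
    description: level 1 nested template
    parameters:
      level-1-p1:
        type: string
        description: set by parent; should have a Value field
      level-1-p2-optional:
        type: string
        default: ""
        description: not set by parent
    resources:
      level-2-resource:
        type: demo::Level2
        properties:
          level-2-p1: level-1-set-value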
.. code-block:: json

    {
        "Description": "parent template",
        "Parameters": {
            "parent-p1": {
                "Type": "String",
                "NoEcho": "false",
                "Description": "parent first parameter",
                "Label": "parent-p1"
            }
        },
        "NestedParameters": {
            "level-1-resource": {
                "Type": "demo::Level1",
                "Description": "level 1 nested template",
                "Parameters": {
                    "level-1-p1": {
                        "Type": "String",
                        "NoEcho": "false",
                        "Description": "set by parent; should have a Value field",
                        "Value": "parent-set-value-1",
                        "Label": "level-1-p1"
                    },
                    "level-1-p2-optional": {
                        "Default": "",
                        "Type": "String",
                        "NoEcho": "false",
                        "Description": "not set by parent",
                        "Label": "level-1-p2-optional"
                    }
                },
                "NestedParameters": {
                    "level-2-resource": {
                        "Type": "demo::Level2",
                        "Description": "level 2 nested template",
                        "Parameters": {
                            "level-2-p2-optional": {
                                "Default": "",
                                "Type": "String",
                                "NoEcho": "false",
                                "Description": "not set by parent",
                                "Label": "level-2-p2-optional"
                            },
                            "level-2-p1": {
                                "Type": "String",
                                "NoEcho": "false",
                                "Description": "set by parent; should have a Value field",
                                "Value": "level-1-set-value",
                                "Label": "level-2-p1"
                            }
                        }
                    }
                }
            }
        }
    }
Here we would return a new "NestedParameters" section (potentially to
multiple levels of nesting), reflecting the parameters validation at each
step of recursion through the child templates (or rather the resource
instantiations of each child template, which may be used in more than one
place with different parameters).
The "Default" key would be included if the nested template defines a parameter
default (as usual) or if a default is set via ``parameter_defaults``.
The "Value" key would be included if a value is provided by the parent
template. Note that since parameters are optional during template-validate
calls, this could be None; a Value of None indicates the parent provides
a value, but it was not provided as part of the template-validate call.
This would mean that it's possible to build a schema from the returned data,
such that, for example any parameters missing both "Default" and "Value" may
be identified, as these will require operator input to provide a parameter.
The next category of parameters would be "defaulted but configurable", where
Default is present but no Value - for these you may want to ask operators
for values other than the template default, and if constraints are specified
they will be exposed here (as choices, as with the existing Parameters
section).
Note that the key naming in the returned data structure aligns with the
existing Parameters section - when we reach a v2 API it would be good to
rework both to use more native_api_style_names.
Below is the example output when the example template above is modified to
use resource groups. The only change is that the parent resource
``level-1-resource`` has been replaced by a resource group named
``level-1-groups``. The definition inside of the group is identical to
the previous example.
For brevity, the bulk of the output has been removed. The relevant point is
that each node in the group will be listed by index:
.. code-block:: json

    {
        "Description": "parent template",
        "Parameters": {
            "parent-p1": {
                "Default": "foo",
                "Type": "String",
                "NoEcho": "false",
                "Description": "parent first parameter",
                "Label": "parent-p1"
            }
        },
        "NestedParameters": {
            "level-1-groups": {
                "Type": "OS::Heat::ResourceGroup",
                "Description": "No description",
                "Parameters": {},
                "NestedParameters": {
                    "0": {
                        "Type": "demo::Level1",
                        "Description": "level 1 nested template",
                        "Parameters": {
                            "level-1-p1": {},
                            "level-1-p2-optional": {}
                        },
                        "NestedParameters": {
                            "level-2-resource": {
                                "Type": "demo::Level2",
                                "Description": "level 2 nested template"
                            }
                        }
                    }
                }
            }
        }
    }
Alternatives
------------
The alternative we've been working with for some time in the TripleO community
is to maintain a separate service, outside of heat, which contains logic
that is coupled to the template implementation, and knows how to wire in the
appropriate parameters, and maintains a mapping of nested template
implementations to provider resource types.
This works, but you end up with a single-purpose service which is very highly
coupled to the template implementation, which is tough from a maintainability
perspective, as well as not helping the wider heat community with their
composition and interface building needs.
Implementation
==============
There will be two stages to the implementation, first pass the files map in to
heat and make basic validation work on nested stacks via template-validate.
Then the extra data outlined above will be added via the NestedParameters key,
and finally the heatclient interfaces to consume this will be added.
It may be that additional usability features can be added to heatclient,
such as building a stub environment file containing parameter_defaults
related to the validation output (this has been discussed on the ML), but
since the requirement here is not fully defined, I won't consider it in
this spec.
Assignee(s)
-----------
Primary assignee:
shardy
Milestones
----------
Target Milestone for completion:
liberty-2
Work Items
----------
Changes to API:
- Add support for "files" map to be passed via validate call
Changes to engine:
- Modify the template-validate path so TemplateResources are no longer skipped,
instead recursively validate similar to the pre-create steps.
- Update template-validate code to build NestedParameter output
Changes to heatclient:
- Add --show-nested option to template-validate, which collects and populates
the optional files map, and passes it to the API
Documentation changes:
- Update API docs to reflect the "files" optional argument
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
..
=====================================
"None" resource which does.. nothing!
=====================================
https://blueprints.launchpad.net/heat/+spec/noop-resource
Add a "None" resource, intended to simplify mapping resource_registry
entries to an implementation which always passes, but does nothing.
Problem description
===================
Currently, in a large tree of composable nested templates, controlled by
a number of rigidly defined parent templates, there is often the need
to provide optional interfaces where extra logic may be linked in.
Simplified example derived from TripleO (this pattern is repeated in several
places):
.. code-block:: yaml

    resource_registry:
      OS::TripleO::Controller: foo/controller.yaml
      OS::TripleO::ControllerExtraConfig: noop.yaml
Here we have a nested template which creates a "controller" node, and does
some standard configuration. Then, in some circumstances, we want to hook in
some extra configuration steps, or provide an interface which enables that.
.. code-block:: yaml

    resources:
      controller:
        type: OS::TripleO::Controller
        properties:
          aproperty: 123

      extra_config:
        type: OS::TripleO::ControllerExtraConfig
        properties:
          server: {get_resource: controller}
The ExtraConfig "noop.yaml" implementation is just an empty template which
takes the "server" parameter.
It would be nice to avoid duplicating these "noop" templates: all they do is
mirror the interface expected of a "real" implementation, and you end up
with multiple noop.yaml files with different parameters/outputs, which is
inconvenient and error prone.
Proposed change
===============
Add an OS::Heat::None resource, which replaces the noop.yaml
.. code-block:: yaml

    resource_registry:
      OS::TripleO::Controller: foo/controller.yaml
      OS::TripleO::ControllerExtraConfig: OS::Heat::None
This resource will accept any properties, and return any attribute (as None).
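For example, with the registry mapping above, the parent template can still
pass properties to and read attributes from the no-op resource without error
(a sketch; the attribute name ``some_attribute`` is hypothetical):

.. code-block:: yaml

    outputs:
      extra_config_result:
        description: resolves to null when mapped to OS::Heat::None
        value: {get_attr: [extra_config, some_attribute]}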
Alternatives
------------
The alternative is for template authors wishing to provide interfaces for
optional additional functionality to keep maintaining multiple templates
which actually do nothing, such as is happening in TripleO at the moment.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
shardy
Milestones
----------
Target Milestone for completion:
liberty-2
Work Items
----------
Changes to engine:
- Implement noop resource and tests.
Documentation changes:
- Ensure docstrings are present in code so template guide is updated.
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
====================================================
Implement batch create and update for resource group
====================================================
https://blueprints.launchpad.net/heat/+spec/resource-group-batching
Add the ability to create and update resources in batches.
Problem description
===================
Heat doesn't allow creating resources in batches, which can lead
to unnecessarily high load on the cloud for large numbers of resources.
In particular, nova can fail while deploying large Sahara clusters
because of the volume of simultaneous requests to create/update VMs and
the polling of those resources for status.
Also, Heat only partially supports batch update (only for AutoscalingGroup).
Proposed change
===============
Add a ``batching_policy`` property to ResourceGroup, with a structure
similar to ``rolling_update``::

    res_group:
      type: OS::Heat::ResourceGroup
      properties:
        count: 5
        ...
        batching_policy:
          min_in_service: 1
          max_batch_size: 2
          pause_time: 10
          batch_actions: ['CREATE', 'UPDATE_EXISTING', 'UPDATE']
        ...
Where:

`max_batch_size`, `pause_time` and `min_in_service` have the same meaning as
in rolling_update, with one exception: `min_in_service` can't be applied
to the batch `CREATE` action and will be ignored there.

``batch_actions`` lists the actions that will be batched, with the following
available options:

`CREATE`: apply batching on stack creation, i.e. add resources in sequence,
`max_batch_size` resources per batch (except possibly the last one).

`UPDATE_EXISTING`: exactly the same thing as rolling update.

`UPDATE`: regular update; existing resources will be updated in batches, and
if additional resources need to be added, they will be added in batches too.
It is proposed to deprecate the old rolling_update property in favour of
batching_policy, as batching_policy covers a wider set of use cases,
including the old rolling_update behaviour.
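For example, the behaviour of today's ``rolling_update`` could be expressed
through the proposed property as follows (a sketch using only the property
names defined in this spec):

.. code-block:: yaml

    batching_policy:
      min_in_service: 1
      max_batch_size: 2
      pause_time: 10
      batch_actions: ['UPDATE_EXISTING']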
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
ochuprykov
Milestones
----------
Target Milestone for completion:
Liberty-2
Work Items
----------
* Add batching_policy property to ResourceGroup
* Add required additional unit and functional test cases
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==============================
Implement Sahara EDP resources
==============================
https://blueprints.launchpad.net/heat/+spec/sahara-edp
Add support for Job, JobBinary and DataSource sahara objects as resources
in heat.
Using sahara EDP in heat we can create the following resources:
* ``Data source`` object stores a URL which designates the location of input
or output data and any credentials needed to access the location;
* ``Job binary`` object stores a URL to a single script or Jar file and any
credentials needed to retrieve the file;
* ``Job`` object specifies the type of the job and lists all of the
individual ``job binary`` objects. Can be launched using resource-signal.
Problem description
===================
Currently we can't create Sahara EDP resources in Heat.
Proposed change
===============
Implement the following resource types:
1. OS::Sahara::DataSource
Properties:
* name (optional) - name of the data source
* type (required) - type of the data source
* url (required) - URL for the data source
* description (optional) - description of the data source
* user (optional) - username for accessing the data source URL
* password (optional) - password for accessing the data source URL
2. OS::Sahara::JobBinary
Properties:
* name (optional) - name of the job binary
* url (required) - URL for the job binary
* description (optional) - description of the job binary
* user (optional) - username for accessing the job binary URL
* password (optional) - password for accessing the job binary URL
3. OS::Sahara::Job
Properties:
* name (optional) - name of the job
* type (required) - type of the job
* main (optional) - ID for job's main job-binary
* lib (list, optional) - ID of job's lib job-binary
* description (optional) - description of the job
Attributes:
* executions - list of the job executions
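The three resource types above could be combined in a template along these
lines (a hypothetical sketch; only the property names listed above are taken
from this spec, and the URLs and job type are placeholders):

.. code-block:: yaml

    resources:
      input_source:
        type: OS::Sahara::DataSource
        properties:
          type: swift
          url: swift://container.sahara/input

      wordcount_binary:
        type: OS::Sahara::JobBinary
        properties:
          url: swift://container.sahara/wordcount.jar

      wordcount_job:
        type: OS::Sahara::Job
        properties:
          type: MapReduce
          main: {get_resource: wordcount_binary}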
To execute the job, run the following command::

    heat resource-signal stack_name job_name -D <data>

``data`` contains execution details including data sources, configuration
values, and program arguments.
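The exact shape of ``data`` is not fixed by this spec; a hypothetical
payload, with keys modelled loosely on sahara's job-execution API (all of
them assumptions here), might look like:

.. code-block:: yaml

    cluster: <cluster-id>
    input: <data-source-id>
    output: <data-source-id>
    configs:
      mapred.map.tasks: "1"
    args:
      - some-program-argument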
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
tlashchova
Milestones
----------
Target Milestone for completion:
Liberty-3
Work Items
----------
* Add Sahara data source resource
* Add Sahara job binary resource
* Add Sahara job resource
* Add required test cases
* Add sample templates in heat-template project
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==================================
Support actions for remote stack
==================================
https://blueprints.launchpad.net/heat/+spec/support-actions-for-remote-stack
This Blueprint will support actions such as snapshot, restore, check,
cancel-update, abandon and so on for the OS::Heat::Stack resource.
Problem description
===================
We have supported managing OpenStack resources across multiple regions
since the Blueprint
https://blueprints.launchpad.net/heat/+spec/multi-region-support
was implemented, but we don't yet support some actions such as
snapshot/restore for remote stacks.
Proposed change
===============
The changes will support actions such as snapshot, restore, check,
cancel-update and abandon for remote stacks (the OS::Heat::Stack resource).
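For context, a remote stack is defined today as shown below; the actions
listed above would then be supported on it (the region and template names
are placeholders):

.. code-block:: yaml

    resources:
      remote_stack:
        type: OS::Heat::Stack
        properties:
          context:
            region_name: RegionTwo
          template: {get_file: remote_nested.yaml}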
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
huangtianhua@huawei.com
Milestones
----------
Target Milestone for completion:
Liberty-1
Work Items
----------
* Support snapshot and restore for remote stacks.
* Support cancel-update for remote stacks.
* Support check for remote stacks.
* Support abandon for remote stacks.
* Add related tests for changes.
Dependencies
============
None
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===============================================================
Support to generate hot templates based on the specified type
===============================================================
https://blueprints.launchpad.net/heat/+spec/support-to-generate-hot-templates
This Blueprint will support generating HOT templates based on the specified
resource type.
Problem description
===================
Currently Heat only supports generating a 'HeatTemplateFormatVersion'
template based on the specified resource type; this is the functionality
exposed via the 'heat resource-type-template' command. The API is
documented at:
http://developer.openstack.org/api-ref-orchestration-v1.html
See the resource_types/{type_name}/template API.
Proposed change
===============
The changes will support generating HOT templates based on the specified
type, since we recommend that users write HOT templates.
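For illustration, the client-side usage might look like the following once
the option exists (the flag name and resource type here are assumptions,
not part of this spec)::

    heat resource-type-template --template-type hot OS::Heat::RandomString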
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
huangtianhua@huawei.com
Milestones
----------
Target Milestone for completion:
Liberty-1
Work Items
----------
* Update the heat API to support passing a new option specifying
  the required template type. Return the cfn template if the new
  option is not specified.
* Update python-heatclient to expose this new option.
* Add related tests for changes.
Dependencies
============
None