Stop ignoring kilo release directory in tests

Change-Id: I3a8044fd8f68442055dcf34e4c9b5079a91c51da
Sergey Kraynev 2015-12-22 09:54:13 -05:00
parent d84e677004
commit d8cecd4124
30 changed files with 144 additions and 53 deletions

View File

@@ -81,7 +81,7 @@ None
 Usage Scenario
-==============
+--------------
 I want to create an autoscaling group that scales down when a statistic against
 cpu_util of a group of VMs, computed by Gnocchi, reaches a certain threshold::
@@ -130,7 +130,7 @@ Dependencies
 None
-Links
-=====
+References
+----------
 * https://review.openstack.org/#/c/153291/

View File

@@ -38,7 +38,7 @@ None
 Usage Scenario
-==============
+--------------
 For instance, request creation of `volume-A` on a different back-end than
 `volume-B` using the different_host scheduler hint::

View File

@@ -48,7 +48,7 @@ None
 Usage Scenario
-==============
+--------------
 For volume creation, use the volume_type to specify the lvm-driver::

View File

@@ -43,6 +43,11 @@ trigger a check on any which now contain their full complement of inputs.
 The SyncPoint for the stack works similarly, except that when complete we
 notify the stack itself to mark the update as complete.
+
+Alternatives
+------------
+None
+
 Implementation
 ==============
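
As a rough illustration of the sync-point mechanism described above (a sketch
only, not Heat's actual SyncPoint implementation; the names are invented for
the example)::

    class SyncPoint(object):
        """Collects inputs until a node's full complement has arrived."""

        def __init__(self, required_inputs, on_ready):
            self.required = set(required_inputs)
            self.received = {}
            self.on_ready = on_ready

        def notify(self, source, data):
            # Record one predecessor's output; trigger the check only
            # once every required input has been received.
            self.received[source] = data
            if self.required <= set(self.received):
                self.on_ready(self.received)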

View File

@@ -56,6 +56,11 @@ An exception to all of this is the case where the graph node is of the update
 type and the resource status is DELETE_IN_PROGRESS. In that case, we should
 simply create a replacement resource.
+
+Alternatives
+------------
+None
+
 Implementation
 ==============

View File

@@ -62,6 +62,11 @@ http://www.joinfu.com/2015/01/understanding-reservations-concurrency-locking-in-
 - which probably means adding an extra integer field that is incremented on
 every write (since we can't really query on a text field).
+
+Alternatives
+------------
+None
+
 Implementation
 ==============
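
The compare-and-swap idiom referenced above can be sketched as follows (a
minimal illustration with SQLAlchemy Core, assuming a hypothetical resource
table with an ``atomic_key`` integer column; not Heat's actual schema)::

    from sqlalchemy import update

    def guarded_update(conn, resource_table, res_id, expected_key, values):
        # Increment the integer field on every write, and only apply the
        # update if nobody else has written since we read the row.
        values = dict(values, atomic_key=expected_key + 1)
        result = conn.execute(
            update(resource_table)
            .where(resource_table.c.id == res_id)
            .where(resource_table.c.atomic_key == expected_key)
            .values(**values))
        # rowcount == 0 means we lost the race and must re-read and retry.
        return result.rowcount == 1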

View File

@@ -56,6 +56,11 @@ row with its own newly-generated traversal ID. In this case it should roll back
 the database changes, by deleting any newly-created Resource rows that it added
 as well as all of the SyncPoints.
+
+Alternatives
+------------
+None
+
 Implementation
 ==============

View File

@@ -50,6 +50,11 @@ dependent resources will update their dependency lists. The previous resource
 will be visited again in the clean-up phase of the graph, at which point it
 will be deleted.
+
+Alternatives
+------------
+None
+
 Implementation
 ==============

View File

@@ -45,6 +45,11 @@ https://github.com/zaneb/heat-convergence-prototype/blob/resumable/converge/reso
 - but those are too confusing. Once we settle on names, we should update the
 simulator code as well.)
+
+Alternatives
+------------
+None
+
 Implementation
 ==============

View File

@@ -45,6 +45,11 @@ successfully complete (if any) alongside the ID of the current target template
 the stored template IDs are overwritten in such a way that we will no longer
 refer to a particular Template, delete that Template from the database.
+
+Alternatives
+------------
+None
+
 Implementation
 ==============

View File

@@ -51,6 +51,11 @@ using testtools primitives and passing the wrappers above as globals, rather
 than those defined in
 https://github.com/zaneb/heat-convergence-prototype/blob/resumable/converge/__init__.py#L24-L41
+
+Alternatives
+------------
+None
+
 Implementation
 ==============

View File

@@ -42,6 +42,11 @@ previous traversals should stop updating the stack data. This should be
 achieved using the "UPDATE ... WHERE ..." form as discussed in
 http://www.joinfu.com/2015/01/understanding-reservations-concurrency-locking-in-nova/
+
+Alternatives
+------------
+None
+
 Implementation
 ==============
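
A sketch of the "UPDATE ... WHERE ..." guard in this context (illustrative
only; the table and column names are assumed, not Heat's actual schema)::

    from sqlalchemy import update

    def write_if_current(conn, stack_table, stack_id, traversal_id, values):
        # Writers from superseded traversals match zero rows and are
        # silently ignored, so only the latest traversal updates the stack.
        result = conn.execute(
            update(stack_table)
            .where(stack_table.c.id == stack_id)
            .where(stack_table.c.current_traversal == traversal_id)
            .values(**values))
        return result.rowcount == 1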

View File

@@ -23,8 +23,9 @@ Problem description
 The code structure of the resources folder is in some confusion.
 https://blueprints.launchpad.net/heat/+spec/reorganize-resources-code-structure
-There is too much coupling between AWS and OS resources for reorganizing to be possible,
-for example modules: wait_condition.py, instance.py, user.py and volume.py.
+There is too much coupling between AWS and OS resources for reorganizing to be
+possible, for example modules: wait_condition.py, instance.py, user.py and
+volume.py.
 Proposed change
 ===============
@@ -36,14 +37,17 @@ The new code structure will be::
     |----resources
         |----aws
             |----wait_condition.py(AWS::CloudFormation::WaitCondition)
-            |----wait_condition_handle.py(AWS::CloudFormation::WaitConditionHandle)
-            |----volume.py(AWS::EC2::Volume and AWS::EC2::VolumeAttachment)
+            |----wait_condition_handle.py
+                 (AWS::CloudFormation::WaitConditionHandle)
+            |----volume.py
+                 (AWS::EC2::Volume and AWS::EC2::VolumeAttachment)
             |----user.py(AWS::IAM::User and AWS::IAM::AccessKey)
             |----instance.py(AWS::EC2::Instance)
         |----openstack
             |----wait_condition.py(OS::Heat::WaitCondition)
             |----wait_condition_handle.py(OS::Heat::WaitConditionHandle)
-            |----volume.py(OS::Cinder::Volume and OS::Cinder::VolumeAttachment)
+            |----volume.py
+                 (OS::Cinder::Volume and OS::Cinder::VolumeAttachment)
             |----access_policy.py(OS::Heat::AccessPolicy)
             |----ha_restarter.py(OS::Heat::HARestarter)
         |----wait_condition.py(base module)

View File

@@ -46,7 +46,8 @@ and be able to look up attributes after creating their stacks even if the
 template author didn't think about them beforehand.
 Because these attributes can be retrieved either by the resource's client or by
-changing the template and adding them to the outputs section, this should not pose
+changing the template and adding them to the outputs section, this should not
+pose
 any more risk of revealing sensitive data than what is already possible.
 This can be achieved by changing the API response to also include attributes
@@ -68,9 +69,12 @@ For such resources, the API can be extended to accept a query param that will
 hold the names of the attributes to be retrieved. Something like:
 # API
-/<tenant_id>/stacks/<stack_name>/<stack_id>/resources/<resource_name>?with_attr=foo&with_attr=bar
+/<tenant_id>/stacks/<stack_name>/<stack_id>/resources/
+<resource_name>?with_attr=foo&with_attr=bar
 # heatclient
 resource-show <stack_name> <resource_name> --with-attr foo --with-attr bar
 However, certain clients or scripts may want to consume a given attribute
@@ -79,11 +83,17 @@ things RESTful and return only the discoverable attributes of a resource; and
 another one that would only return the value of the requested attribute.
 # API
-/<tenant_id>/stacks/<stack_name>/<stack_id>/resources/<resource_name>/attributes
-/<tenant_id>/stacks/<stack_name>/<stack_id>/resources/<resource_name>/attributes/<attribute_name>
+/<tenant_id>/stacks/<stack_name>/<stack_id>/resources/<resource_name>/
+attributes
+/<tenant_id>/stacks/<stack_name>/<stack_id>/resources/<resource_name>/
+attributes/<attribute_name>
 # heatclient
 heat resource-attributes <stack-id> <resource-name>
 heat resource-attributes <stack-id> <resource-name> <attribute-name>
 Alternatives
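
As a rough usage illustration of the proposed ``with_attr`` query parameter
(a sketch only; the endpoint is still a proposal here, and the URL and token
values are invented)::

    import requests

    url = ('http://heat.example.com:8004/v1/%(tenant_id)s/stacks/'
           '%(stack)s/%(stack_id)s/resources/%(resource)s'
           % {'tenant_id': 't1', 'stack': 's1',
              'stack_id': 'uuid', 'resource': 'server'})
    # requests encodes the list as with_attr=foo&with_attr=bar
    resp = requests.get(url,
                        params={'with_attr': ['foo', 'bar']},
                        headers={'X-Auth-Token': 'TOKEN'})
    # Per the proposal above, the response would include the requested
    # attributes alongside the usual resource fields.
    print(resp.json())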

View File

@@ -36,9 +36,9 @@ Add another class to run existing digest algorithms (e.g. MD5, SHA-512, etc) on
 user provided data and expose it in the HOT functions list. The class would
 take the name of the digest algorithm and the value to be hashed.
-Python's ``hashlib`` natively supports md5, sha1 and sha2 (sha224, 256, 384, 512)
-on most platforms and this will be documented as being the supported list of
-algorithms. But the cloud provider may go beyond and support more algorithms
+Python's ``hashlib`` natively supports md5, sha1 and sha2 (sha224, 256, 384,
+512) on most platforms and this will be documented as being the supported list
+of algorithms. But the cloud provider may go beyond and support more algorithms
 as well, since, depending on the way Python was built, ``hashlib`` can also use
 algorithms supported by OpenSSL.
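
The underlying ``hashlib`` behaviour that the proposed function would wrap can
be demonstrated directly (standard library only; the function name here is
just for illustration, not the proposed HOT function)::

    import hashlib

    def digest(algorithm, value):
        # hashlib.new() accepts any algorithm name known to this build,
        # including OpenSSL-provided ones beyond the guaranteed set.
        h = hashlib.new(algorithm)
        h.update(value.encode('utf-8'))
        return h.hexdigest()

    print(digest('sha512', 'some user provided data'))
    print(sorted(hashlib.algorithms_available))  # what this build supports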

View File

@@ -62,8 +62,8 @@ attributes in the outputs so they can be referenced outside of the
 nested stack. We should generate these attributes dynamically.
-Proposed changes
-================
+Proposed change
+===============
 1. Add the concept of parameter_defaults to the environment.
 This will look like the following::
@@ -75,7 +75,8 @@ Proposed changes
 The behaviour of these parameters will be as follows:
 - if there is no parameter definition for it, it will be ignored.
 - these will be passed into all nested templates
-- they will only be used as a default so that they can be explicitly overridden in the "parameters" section.
+- they will only be used as a default so that they can be explicitly
+overridden in the "parameters" section.
 2. Support a specially named output to Template resources that is used
 for references.
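
The defaulting behaviour listed above can be sketched as follows (an
illustrative resolution function, not Heat's actual environment code)::

    def resolve_parameter(name, parameters, parameter_defaults, definitions):
        if name not in definitions:
            # No parameter definition for it: the default is ignored.
            return None
        if name in parameters:
            # An explicit value in the "parameters" section wins.
            return parameters[name]
        # Fall back to parameter_defaults, then the template default.
        if name in parameter_defaults:
            return parameter_defaults[name]
        return definitions[name].get('default')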

View File

@@ -10,8 +10,8 @@ Implement 'InstanceId' for AutoScalingGroup
 https://blueprints.launchpad.net/heat/+spec/implement-instanceid-for-autoscalinggroup
-We should support the 'InstanceId' for AWS::AutoScaling::AutoScalingGroup resource
-to be compatible with AWSCloudFormation.
+We should support the 'InstanceId' for AWS::AutoScaling::AutoScalingGroup
+resource to be compatible with AWSCloudFormation.
 Problem description
 ===================
@@ -31,17 +31,19 @@ Proposed change
 ===============
 1. Change 'LaunchConfigurationName' to be an optional property
 2. Add 'InstanceId' property, optional and non-updatable
-3. Add validation for AWS::AutoScaling::AutoScalingGroup resource, to make sure
-only one of the two properties is chosen
+3. Add validation for AWS::AutoScaling::AutoScalingGroup resource, to make
+sure only one of the two properties is chosen
 4. Modify the _get_conf_properties() function
-* if 'InstanceId' is specified, get the attributes of the instance, make a temporary
-launch config resource, and then return the resource and its properties.
+* if 'InstanceId' is specified, get the attributes of the instance,
+make a temporary launch config resource, and then return the resource
+and its properties.
-Note that the attributes include ImageId, InstanceType, KeyName, SecurityGroups.
+Note that the attributes include ImageId, InstanceType, KeyName,
+SecurityGroups.
-* if 'InstanceId' is not given, use the old way to get the launch config resource and
-its properties.
+* if 'InstanceId' is not given, use the old way to get the launch config
+resource and its properties.
 Alternatives
 ------------
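
A sketch of the modified _get_conf_properties() logic described in item 4
(illustrative pseudocode, not Heat's actual implementation; the helper names
are invented)::

    def _get_conf_properties(self):
        instance_id = self.properties.get('InstanceId')
        if instance_id:
            # Build a temporary launch config from the live instance.
            server = self.client('nova').servers.get(instance_id)
            props = {
                'ImageId': server.image['id'],
                'InstanceType': server.flavor['id'],
                'KeyName': server.key_name,
                'SecurityGroups': [sg['name']
                                   for sg in server.security_groups],
            }
            conf = self._make_temporary_launch_config(props)  # invented
        else:
            # The old way: look the launch config up by name.
            name = self.properties['LaunchConfigurationName']
            conf = self.stack[name]
        return conf, conf.properties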

View File

@@ -10,8 +10,8 @@ Implement 'InstanceId' for LaunchConfiguration
 https://blueprints.launchpad.net/heat/+spec/implement-instanceid-for-launchconfiguration
-We should support the 'InstanceId' for AWS::AutoScaling::LaunchConfiguration resource
-to be compatible with AWSCloudFormation.
+We should support the 'InstanceId' for AWS::AutoScaling::LaunchConfiguration
+resource to be compatible with AWSCloudFormation.
 Problem description
 ===================
Problem description
===================
@@ -22,7 +22,8 @@ launch configuration to use settings from an existing instance, see:
 http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html
 http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/create-lc-with-instanceID.html
-It would be good to implement the 'InstanceId' property to be compatible with AWSCloudFormation.
+It would be good to implement the 'InstanceId' property to be compatible with
+AWSCloudFormation.
 Proposed change
Proposed change
@@ -33,8 +34,8 @@ Proposed change
 specify 'InstanceId', the other two properties are required
 4. According to the AWS developer guide and implementation, allow three cases:
-* Without 'InstanceId', should specify 'ImageId' and 'InstanceType' properties,
-using the old way to create the new launch configuration.
+* Without 'InstanceId', should specify 'ImageId' and 'InstanceType'
+properties, using the old way to create the new launch configuration.
 * Specify 'InstanceId' only, the new launch configuration has
 'ImageId', 'InstanceType', 'KeyName', and 'SecurityGroups'

View File

@@ -98,7 +98,8 @@ There are some use cases which should be described:
 When the stack is created, run the next command to execute the workflow::
-    heat resource-signal stack_name workflow_name -D 'Json-type execution input'
+    heat resource-signal stack_name workflow_name \
+    -D 'Json-type execution input'
 Execution state will be available in the 'executions' attribute as a dict.
@@ -160,7 +161,9 @@ There are some use cases which should be described:
       vm_id: $.vm_id
     tasks:
       create_server:
-        action: nova.servers_create name={$.vm_name} image={$.image_ref} flavor={$.flavor_ref}
+        action: >
+          nova.servers_create name={$.vm_name} image={$.image_ref}
+          flavor={$.flavor_ref}
         publish:
           vm_id: $.id
         on-success:
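
For completeness, the same signal can be sent through python-heatclient's
Python API (a sketch; the endpoint, token and input values are invented)::

    from heatclient import client as heat_client

    heat = heat_client.Client('1',
                              endpoint='http://heat.example.com:8004/v1/t1',
                              token='TOKEN')
    # Equivalent to `heat resource-signal`: deliver the execution input
    # to the workflow resource as signal data.
    heat.resources.signal('stack_name', 'workflow_name',
                          data={'vm_name': 'my_vm'})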

View File

@@ -25,8 +25,8 @@ and apply nova flavor constraint.
 Problem description
 ===================
-1. Many resources have property InstanceId/Server which relates to a nova server,
-but until now we haven't supported nova server constraints.
+1. Many resources have property InstanceId/Server which relates to a nova
+server, but until now we haven't supported nova server constraints.
 2. We just define a nova flavor custom constraint, but do not apply it.

View File

@@ -161,6 +161,11 @@ Milestones
 Target Milestone for completion:
 Kilo-2
+
+Work Items
+----------
+Steps mentioned in section Proposed change describe the list of work items.
+
 Dependencies
 ============

View File

@@ -96,6 +96,11 @@ can be achieved by the hooks checking for the input ``deploy_status_aware``
 being set to ``true``. Only new heat will set this input value to ``true`` so
 the hook can check this input and behave accordingly.
+
+Alternatives
+------------
+None
+
 Implementation
 ==============
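
The capability check described above is simple on the hook side (an
illustrative fragment; the input plumbing varies per hook implementation)::

    def deployment_signals_enabled(inputs):
        # Only a new heat sets deploy_status_aware=true, so an old heat
        # is detected by the input simply being absent.
        value = inputs.get('deploy_status_aware')
        return str(value).lower() == 'true'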

View File

@@ -85,6 +85,11 @@ This could also be an appropriate umbrella blueprint to switch to using RPC
 instead of full REST calls for when config and deployment resources call
 config and deployment APIs.
+
+Alternatives
+------------
+None
+
 Implementation
 ==============
@@ -146,4 +151,4 @@ that their config trigger has started.
 If it is deemed inappropriate to modify EngineService.resource_signal then
 some alternative external polling based signaling would be required, as
 provided by blueprint software-config-swift-signal or blueprint
-software-config-zaqar.
\ No newline at end of file
+software-config-zaqar.

View File

@@ -55,6 +55,11 @@ that it is not necessary to create the stack users.
 signal_transport:AUTO will be modified so that ZAQAR_MESSAGE is the preferred
 method if there is a configured messaging endpoint.
+
+Alternatives
+------------
+None
+
 Implementation
 ==============
@@ -96,4 +101,4 @@ heat-templates heat-config element.
 This could be done after blueprint software-config-trigger since that includes
 some refactoring which includes moving signal_transport logic from the
-resource to the deployments REST API.
\ No newline at end of file
+resource to the deployments REST API.

View File

@@ -28,8 +28,8 @@ Problem description
 Users can (and do) worry about stack update doing wonky things. The
 update-preview endpoint addresses this partially by showing what will probably
 happen. The limitation of the preview function is that resources can raise
-UpdateReplace exceptions at any time, making it impossible to be *certain* of the
-results of an update until it is performed.
+UpdateReplace exceptions at any time, making it impossible to be *certain* of
+the results of an update until it is performed.
 Proposed change
 ===============
@@ -48,9 +48,9 @@ preferences, which are nested into the keys "AutoScalingScheduledAction" and
 "AutoScalingRollingUpdate". CFN preferences would be unaffected by the HOT
 version of update policies.
-A user would specify per-resource how aggressive an update can be with a resource.
-The restrictions could be on updating the resource at all, or just on
-destroying the resource (including UpdateReplace).
+A user would specify per-resource how aggressive an update
+can be with a resource. The restrictions could be on updating the resource at
+all, or just on destroying the resource (including UpdateReplace).
 The base cases here are:

View File

@@ -104,7 +104,7 @@ None
 References
-==========
+----------
 * [1]: https://wiki.openstack.org/wiki/CinderAPIv2
 * [2]: https://github.com/openstack/nova-specs/blob/master/specs/juno/support-cinderclient-v2.rst

View File

@@ -48,7 +48,7 @@ None
 Usage Scenario
-==============
+--------------
 Create the OS::Trove::Cluster resource like this::

View File

@@ -24,9 +24,11 @@ https://blueprints.launchpad.net/heat/+spec/versioned-objects
 Problem description
 ===================
-We are looking to improve the way we deal with versioning (of all sorts db/rpc/rest/templates/plugins).
-Nova has come up with the idea of versioned objects, which Ironic has also now used.
-This has now been proposed as an oslo library: https://review.openstack.org/#/c/127532/
+We are looking to improve the way we deal with versioning (of all sorts
+db/rpc/rest/templates/plugins).
+Nova has come up with the idea of versioned objects, which Ironic has also now
+used. This has now been proposed as an oslo library:
+https://review.openstack.org/#/c/127532/
 https://etherpad.openstack.org/p/kilo-crossproject-upgrades-and-versioning
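
A minimal illustration of the versioned-object pattern (based on the
oslo.versionedobjects library as proposed; the Stack fields here are invented
for the example)::

    from oslo_versionedobjects import base
    from oslo_versionedobjects import fields

    @base.VersionedObjectRegistry.register
    class Stack(base.VersionedObject):
        # Bump the minor version on compatible schema changes.
        VERSION = '1.0'

        fields = {
            'id': fields.UUIDField(),
            'name': fields.StringField(),
            'status': fields.StringField(nullable=True),
        }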

View File

@@ -29,7 +29,7 @@ We should provide a way for template developers to show console
 url (for example, vnc, rdp and spice) in stack outputs.
 Usage Scenario
-==============
+--------------
 Get novnc console url::
@@ -92,6 +92,10 @@ to this attribute, or URLs for all supported types when no key is provided.
 Gracefully deal with the case when the type of the console being asked for
 is not available in current deployment.
+
+Alternatives
+------------
+None
 Implementation
 ==============

View File

@@ -20,7 +20,7 @@ import testtools
 def create_scenarios():
     # create a set of directories excluded from testing
-    exclude_dirs = {'templates', 'kilo'}
+    exclude_dirs = {'templates', }
     # get whole list of sub-directories in specs directory
     release_names = [x.split('/')[1] for x in glob.glob('specs/*/')]
     # generate a list of scenarios (1 scenario - for each release)
@@ -119,7 +119,6 @@ class TestTitles(testscenarios.WithScenarios, testtools.TestCase):
             template = f.read()
         base_spec = docutils.core.publish_doctree(template)
         expected_template_titles = self._get_titles(base_spec)
-
         for filename, data in self._iterate_files():
            spec = docutils.core.publish_doctree(data)
            titles = self._get_titles(spec)
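
A quick illustration of what the scenario generation above produces once
'kilo' is no longer excluded (the directory listing is assumed)::

    # Suppose glob.glob('specs/*/') returns:
    paths = ['specs/juno/', 'specs/kilo/', 'specs/templates/']
    exclude_dirs = {'templates', }

    release_names = [x.split('/')[1] for x in paths]
    releases = [name for name in release_names if name not in exclude_dirs]
    print(releases)  # ['juno', 'kilo'] -- kilo specs are now tested too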