Stop ignoring juno directory for tests

Change-Id: I5851637aa5b5c17a667f63cbf44fe082986d9ace
Author: Sergey Kraynev, 2015-12-22 08:06:48 -05:00
parent 521d5b633a
commit d84e677004
12 changed files with 104 additions and 89 deletions


@@ -22,10 +22,10 @@ Problem description
With the current design of Heat software orchestration, "software components"
defined through SoftwareConfig resources allow for only one configuration (e.g.
one script) to be specified. Typically, however, a software component has a
-lifecycle that is hard to express in a single script. For example, software must
-be installed (created), there should be support for suspend/resume handling, and
-it should be possible to allow for deletion-logic. This is also in line with the
-general Heat resource lifecycle.
+lifecycle that is hard to express in a single script. For example, software
+must be installed (created), there should be support for suspend/resume
+handling, and it should be possible to allow for deletion-logic. This is also
+in line with the general Heat resource lifecycle.
To achieve the desired behavior of having all those lifecycle hooks with the
current design, one would have to define several SoftwareConfig resources along
@@ -51,8 +51,8 @@ Tomcat web server, MySQL database) can be defined in one place (i.e. one
*SoftwareComponent* resource) and can be associated to a server by means of one
single SoftwareDeployment resource.
-The new SoftwareComponent resource will - like the SoftwareConfig resource - not
-gain any new behavior, but it will also be static store of software
+The new SoftwareComponent resource will - like the SoftwareConfig resource -
+not gain any new behavior, but it will also be a static store of software
configuration data. Compared to SoftwareConfig, though, it will be extended to
provide several configurations corresponding to Heat lifecycle actions in one
place and following a well-defined structure so that SoftwareDeployment
@@ -71,8 +71,8 @@ structure and semantics.
As an alternative, we could choose to extend the existing "SoftwareConfig"
resource, but the overloaded semantics could cause confusion with users.
Furthermore, extension of the existing resource could raise additional
-complexity when having to maintain backwards-compatibility with existing uses of
-SoftwareConfig.
+complexity when having to maintain backwards-compatibility with existing uses
+of SoftwareConfig.
The set of properties for OS::Heat::SoftwareComponent will be as follows:
@@ -126,9 +126,9 @@ tool
analogous to the SoftwareConfig resource's *group* property, but it has been
suggested to use a more intuitive name here.
Having the *tool* property for each config entry allows for mixing different
-configuration tools for one software component. For example, the deployment of
-software (i.e. CREATE) could be done using Chef or Puppet, but a simple script
-could be used for SUSPEND or RESUME.
+configuration tools for one software component. For example, the deployment
+of software (i.e. CREATE) could be done using Chef or Puppet, but a simple
+script could be used for SUSPEND or RESUME.
The *inputs* and *outputs* properties will be defined global for the complete
SoftwareComponent definition instead of being provided per config hook.
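For illustration only, a minimal sketch of how a parsed *configs* value for such
a component might look (the key names follow the property descriptions in this
spec, but the concrete value shown here is an assumption, not part of the spec)::

    # Hypothetical parsed value of the *configs* property: one entry per
    # lifecycle hook, each naming the actions it serves, the in-instance tool
    # and the actual configuration script.
    configs = [
        {'actions': ['CREATE'],
         'tool': 'puppet',
         'config': '# puppet manifest that installs the software'},
        {'actions': ['SUSPEND', 'RESUME'],
         'tool': 'script',
         'config': '#!/bin/bash\n# quiesce or reactivate the software'},
        {'actions': ['DELETE'],
         'tool': 'script',
         'config': '#!/bin/bash\n# clean up before removal'},
    ]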
@@ -191,9 +191,9 @@ Adaptation of SoftwareDeployment resource
The SoftwareDeployment resource (OS::Heat::SoftwareDeployment) will be adapted
to cope with the new SoftwareComponent resource, for example to provide the
contents of the *configs* property to the instance in the appropriate form.
-Furthermore, the SoftwareDeployment resource's action and state (e.g. CREATE and
-IN_PROGRESS) will be passed to the instance so the in-instance configuration
-hook can select the right configuration to be applied (see also
+Furthermore, the SoftwareDeployment resource's action and state (e.g. CREATE
+and IN_PROGRESS) will be passed to the instance so the in-instance
+configuration hook can select the right configuration to be applied (see also
:ref:`in_instance_hooks`).
The SoftwareDeployment resource creates transient configuration objects at
@@ -204,21 +204,21 @@ component (i.e. the complete *configs* property) will be stored in that
transient configuration object, and it will therefore be available to
in-instance tools.
-There will be no change in SoftwareDeployment properties, but there will have to
-be special handling for the *actions* property: the *actions* property will be
-ignored when a SoftwareComponent resource is associated to a SoftwareDeployment.
-In that case, the entries defined in the *configs* property will provide the set
-of actions on which SoftwareDeployment, or in-instance tools respectively, shall
-react.
+There will be no change in SoftwareDeployment properties, but there will have
+to be special handling for the *actions* property: the *actions* property
+will be ignored when a SoftwareComponent resource is associated to a
+SoftwareDeployment. In that case, the entries defined in the *configs* property
+will provide the set of actions on which SoftwareDeployment, or in-instance
+tools respectively, shall react.
-Note: as an alternative to passing the complete set of configurations defined in
-a SoftwareComponent, along with the SoftwareDeployment's action and state to the
-instance, we could make the SoftwareDeployment resource select the right config
-based on its action and state and only pass this to the instance. This could
-possibly allow for using the existing in-instance hooks without change. However,
-at the time of writing this spec, it was decided to implement config select in
-the in-instance hook since it gives more power to the in-instance implementation
-for possible future enhancements.
+Note: as an alternative to passing the complete set of configurations defined
+in a SoftwareComponent, along with the SoftwareDeployment's action and state
+to the instance, we could make the SoftwareDeployment resource select the right
+config based on its action and state and only pass this to the instance. This
+could possibly allow for using the existing in-instance hooks without change.
+However, at the time of writing this spec, it was decided to implement config
+select in the in-instance hook since it gives more power to the in-instance
+implementation for possible future enhancements.
.. _in_instance_hooks:
@@ -231,8 +231,8 @@ indicated by the associated SoftwareDeployment resources.
In case of a *SoftwareComponent* being deployed, the complete set of
configurations will be made available to in-instance hooks via Heat metadata.
-In addition, SoftwareDeployment resources will add their action and state to the
-metadata (e.g. CREATE and IN_PROGRESS). Based on that information, the
+In addition, SoftwareDeployment resources will add their action and state
+to the metadata (e.g. CREATE and IN_PROGRESS). Based on that information, the
in-instance hook will then be able to select and apply the right configuration
at runtime.
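As an illustration only, a minimal sketch of the selection logic such a hook
might apply once action and state are available in the metadata (function and
key names are assumptions, not the actual hook implementation)::

    def select_config(configs, action, state):
        """Pick the config entry matching the deployment's current action.

        *configs* is the list delivered via Heat metadata, *action* is e.g.
        'CREATE' and *state* is e.g. 'IN_PROGRESS'; all names are assumed.
        """
        if state != 'IN_PROGRESS':
            return None
        for entry in configs:
            if action in entry.get('actions', []):
                return entry
        return None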
@@ -277,13 +277,13 @@ lifecycle operations CREATE, DELETE, SUSPEND, RESUME and UPDATE. It is
recognized that special handling might make sense for scenarios where servers
are being quiesced for an upgrade, or where they need to be evacuated for a
scaling operation. In addition, users might want to define complete custom
-actions (see also :ref:`software_component_resource`). Handling of those actions
-are out of scope for now, but can be enabled by follow-up work on-top of the
-implementation of this specification. For example, an additional property
-*extended_action* could be added to SoftwareDeployment which could be set to
-the extended actions mentioned above. When passing this additional property to
-in-instance hooks, the hooks could then select and apply the respective config
-for the specified extended action.
+actions (see also :ref:`software_component_resource`). Handling of those
+actions is out of scope for now, but can be enabled by follow-up work on top
+of the implementation of this specification. For example, an additional
+property *extended_action* could be added to SoftwareDeployment which could be
+set to the extended actions mentioned above. When passing this additional
+property to in-instance hooks, the hooks could then select and apply
+the respective config for the specified extended action.
Implementation


@@ -19,8 +19,9 @@ Add filter support to stack query for cfn API
https://blueprints.launchpad.net/heat/+spec/cfn-liststacks-filter
-Currently filtering stacks by status is supported in openstack API, for the compatibility
-with Cloudformation API, it also should be supported in cfn API.
+Currently, filtering stacks by status is supported in the openstack API; for
+compatibility with the Cloudformation API, it should also be supported in
+the cfn API.
Problem description
===================
@@ -31,8 +32,9 @@ implemented in openstack API, we also need to implement it in cfn API.
Proposed change
===============
-Add parameter "StackStatusFilter" for list-stacks of cfn API, and pass the fiter parameters
-to the backend, then return the stacks filtered by status. The url should be like this::
+Add a "StackStatusFilter" parameter for list-stacks in the cfn API, pass the
+filter parameters to the backend, and then return the stacks filtered by
+status. The URL should be like this::
https://example.com:8000/v1/
?Action=ListStacks
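For illustration only, a hedged sketch of issuing such a request from Python
(the endpoint, the status value and the unauthenticated call are assumptions;
only the StackStatusFilter parameter name comes from this spec, and a real cfn
request would also need to be signed)::

    import requests

    # Hypothetical ListStacks call filtering by stack status via the
    # cfn-compatible API endpoint shown above.
    resp = requests.get(
        "https://example.com:8000/v1/",
        params={
            "Action": "ListStacks",
            "StackStatusFilter": "CREATE_COMPLETE",
        },
    )
    print(resp.status_code)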


@@ -72,7 +72,8 @@ Work Items
* Add support for pagination and sorting events
* Add UT for the pagination and sorting events
* Add support for pagination and sorting events in python-heatclient
-* Write tempest api orchestration and scenario test to exercise events pagination
+* Write tempest api orchestration and scenario test to exercise events
+  pagination
Dependencies
============


@@ -33,8 +33,8 @@ makes several use cases difficult or sub-optimal because of the need to make
several API calls on resource reference links.
* When deleting a stack, a UI should be able to present the user with a list
-  of *all* resources associated with a given stack to avoid confusion about what
-  and why certain resources were deleted due to a stack delete.
+  of *all* resources associated with a given stack to avoid confusion about
+  what and why certain resources were deleted due to a stack delete.
* A user of the API (either via CLI, curl, or other method) wants to be able
  to quickly and easily list and follow the status of every resource associated
  with a stack, regardless of a resource's position in the stack hierarchy.


@@ -48,8 +48,10 @@ from 10 to 11 using ``update-stack`` command::
    {
        "ResourceStatus": "UPDATE_FAILED",
        "ResourceType": "AWS::EC2::Volume",
-       "ResourceStatusReason": "Update to resource type AWS::EC2::Volume is not supported.",
-       "ResourceProperties": "{\"AvailabilityZone\":\"us-west-2a\",\"Size\":\"11\"}"
+       "ResourceStatusReason":
+           "Update to resource type AWS::EC2::Volume is not supported.",
+       "ResourceProperties":
+           "{\"AvailabilityZone\":\"us-west-2a\",\"Size\":\"11\"}"
    }
Heat


@@ -33,19 +33,21 @@ in AWSCloudFormation, 'Volumes' and 'BlockDeviceMappings', see:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html
-1. 'Volumes' support the 'volume_id', user can specify the volume to be attached
-to the instance. This way has been implemented in Heat, but it's not a good way
-for batch creation because one volume can't be attached to many instances.
+1. 'Volumes' supports the 'volume_id'; the user can specify the volume to be
+attached to the instance. This has been implemented in Heat, but it's not
+a good fit for batch creation because one volume can't be attached to many
+instances.
-2. 'BlockDeviceMappings' support the 'snapshot_id', user can specify a snapshot,
-then a volume will be created from the snapshot, and the volume will be attached
-to the instance. This way is a good way for batch creation.
+2. 'BlockDeviceMappings' supports the 'snapshot_id'; the user can specify
+a snapshot, then a volume will be created from the snapshot, and the volume
+will be attached to the instance. This approach works well for batch creation.
Nova supports creating a server with a block device mapping:
http://docs.openstack.org/api/openstack-compute/2/content/ext-os-block-device-mapping-v2-boot.html
-So, we should support the 'BlockDeviceMappings' for AWS::EC2::Instance resource.
+So, we should support 'BlockDeviceMappings' for the AWS::EC2::Instance
+resource.
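As an illustration only, a hedged sketch of the property shape argued for
above, written as the parsed Python value the resource might receive (the key
names mirror the AWS CloudFormation property; the exact schema is defined
elsewhere in this spec)::

    # Hypothetical parsed 'BlockDeviceMappings' value: each mapping names a
    # device and an EBS snapshot from which a volume is created and attached.
    block_device_mappings = [
        {'DeviceName': '/dev/vdb',
         'Ebs': {'SnapshotId': 'snapshot-id-placeholder',
                 'VolumeSize': '10',
                 'DeleteOnTermination': True}},
    ]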
Proposed change
===============


@@ -21,34 +21,37 @@ Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/heat/+spec/implement-launchconfiguration-bdm
-We should support the BlockDeviceMappings for AWS::AutoScaling::LaunchConfiguration
-resource to be compatible with AWSCloudFormation. And therefore, user can specify volumes
-to attach to instances while AutoScalingGroup/InstanceGroup creation.
+We should support BlockDeviceMappings for the
+AWS::AutoScaling::LaunchConfiguration resource to be compatible with
+AWSCloudFormation. The user can then specify volumes to attach to instances
+during AutoScalingGroup/InstanceGroup creation.
Problem description
===================
-Now in Heat, the AWS::AutoScaling::LaunchConfiguration resource doesn't implement
-'BlockDeviceMappings' property to indicate the volumes to be attached. There are
-two problems:
+Currently, the AWS::AutoScaling::LaunchConfiguration resource in Heat doesn't
+implement the 'BlockDeviceMappings' property to indicate the volumes to be
+attached. There are two problems:
1. First, it's incompatible with AWSCloudFormation. In AWSCloudFormation,
'BlockDeviceMappings' supports the 'SnapshotId'; the user can specify a snapshot,
-then a volume will be created from the snapshot, and the volume will be attached
-to the instance.
+then a volume will be created from the snapshot, and the volume will be
+attached to the instance.
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html
2. Second, the user can't specify volumes to be attached to instances that are
in an AutoScalingGroup/InstanceGroup during creation.
-So, we should support the 'BlockDeviceMappings' for AWS::AutoScaling::LaunchConfiguration.
+So, we should support 'BlockDeviceMappings' for
+AWS::AutoScaling::LaunchConfiguration.
Proposed change
===============
-1. Implement 'BlockDeviceMappings' property for AWS::AutoScaling::LaunchConfiguration resource,
-specially in which user can specify the 'SnapshotId'.
+1. Implement the 'BlockDeviceMappings' property for the
+AWS::AutoScaling::LaunchConfiguration resource; in particular, allow the user
+to specify the 'SnapshotId'.
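For illustration only, a hedged sketch of how one such mapping might translate
into the block device mapping v2 format that Nova accepts when booting the
grouped instances (the values are placeholders and the translation itself is an
assumption, not the proposed implementation)::

    # Hypothetical translation of a 'BlockDeviceMappings' entry with a
    # 'SnapshotId' into Nova's block_device_mapping_v2 format.
    bdm_v2 = [
        {'device_name': '/dev/vdb',
         'source_type': 'snapshot',
         'uuid': 'snapshot-id-placeholder',
         'destination_type': 'volume',
         'delete_on_termination': True},
    ]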
Alternatives
------------
@@ -73,9 +76,11 @@ Target Milestone for completion:
Work Items
----------
-1. Support the BlockDeviceMappings for AWS::AutoScaling::LaunchConfiguration resource
+1. Support the BlockDeviceMappings for AWS::AutoScaling::LaunchConfiguration
+   resource
2. Add UT/Tempest for the change
-3. Add a template for AWS::AutoScaling::LaunchConfiguration with BlockDeviceMappings
+3. Add a template for AWS::AutoScaling::LaunchConfiguration with
+   BlockDeviceMappings
Dependencies
============


@@ -63,9 +63,9 @@ https://blueprints.launchpad.net/heat/+spec/resource-package-reorg
Another problem is that having all classes implemented in almost one file is
making the implementation difficult to digest or improve. For example, it
may make better sense to have InstanceGroup a subclass of ResourceGroup.
-For another example, it doesn't make much sense to have AutoScalingResourceGroup
-a subclass of InstanceGroup because the subclass is more open to other resource
-types as its members.
+For another example, it doesn't make much sense to have
+AutoScalingResourceGroup a subclass of InstanceGroup because the subclass is
+more open to other resource types as its members.
Proposed change
===============
@@ -113,9 +113,9 @@ The AWS version will be relocated into heat/engine/resources/aws subdirectory,
including the LaunchConfiguration implementation. The OpenStack version will
be relocated into heat/engine/resources/openstack subdirectory.
-The shared parent class ResourceGroup will remain in heat/engine/resources, while
-the CooldownMixin class will be relocated into heat/scaling subdirectory. The
-eventual layout of the modules and classes would look like this::
+The shared parent class ResourceGroup will remain in heat/engine/resources,
+while the CooldownMixin class will be relocated into heat/scaling subdirectory.
+The eventual layout of the modules and classes would look like this::
heat/engine/resources/
|
@@ -139,8 +139,8 @@ eventual layout of the modules and classes would look like this::
+-- (possibly other shared utility classes)
-This reshuffling is optional. We will determine whether reshuffling is necessary
-indeed after the cleanup work is done.
+This reshuffling is optional. We will determine whether reshuffling is indeed
+necessary after the cleanup work is done.
Alternatives
------------
@@ -149,9 +149,9 @@ Since this is a pure implementation level change, one rule of thumb is that "we
don't break userland".
We can have AWS AutoScalingPolicy extend Heat AutoScalingPolicy. However that
-may mean that any future changes to Heat implementation must be very careful, in
-case those changes may break the conformance of the AWS version to its Amazon
-specification.
+may mean that any future changes to Heat implementation must be very careful,
+in case those changes may break the conformance of the AWS version to its
+Amazon specification.
The same applies to the two versions of AutoScalingGroup. Hopefully, we may
extract common code into ResourceGroup level to minimize code duplication.


@@ -77,9 +77,9 @@ it made with respect to the stack should be un-made.
All stack actions would need calls to either pre or post operations, or both.
This includes at least create, update, delete, abandon, and adopt. In a basic
design, modifications to the Stack class in parser.py are sufficient for adding
-the call to the pre-operation and post-operation methods found via the lifecycle
-plugin registry. The post-operation calls would need to be called in both the
-normal paths and all error paths.
+the call to the pre-operation and post-operation methods found via the
+lifecycle plugin registry. The post-operation calls would need to be made in
+both the normal paths and all error paths.
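As an illustration only, a minimal sketch (with assumed method names, not the
actual plugin interface) of how a stack action could be wrapped so that the
post-operations run on both the normal path and every error path::

    def run_with_lifecycle_plugins(plugins, stack, action):
        # Hypothetical wrapper: call each plugin's pre-operation, run the
        # stack action, and guarantee the post-operations run even on errors.
        for plugin in plugins:
            plugin.do_pre_op(stack, action)      # assumed hook name
        success = False
        try:
            stack.run_action(action)             # assumed action dispatch
            success = True
        finally:
            for plugin in plugins:
                plugin.do_post_op(stack, action, success=success)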
Alternatives
------------
@@ -87,12 +87,12 @@ Alternatives
No other approach was identified that would allow the operator (heat provider)
to extend heat with this functionality for all stack deployments.
-https://blueprints.launchpad.net/heat/+spec/lifecycle-callbacks describes an approach
-where heat users can optionally specify callbacks for in templates for
-stack and resource events.
-It does not provide the ubiquitous callbacks (for all stacks) that would be needed by
-the use cases described above, unless the heat provider tightly controls the
-templates that users can use.
+https://blueprints.launchpad.net/heat/+spec/lifecycle-callbacks describes
+an approach where heat users can optionally specify callbacks in templates
+for stack and resource events.
+It does not provide the ubiquitous callbacks (for all stacks) that would be
+needed by the use cases described above, unless the heat provider tightly
+controls the templates that users can use.
Implementation
==============
@@ -102,6 +102,9 @@ A patch comprising a full implementation of the blueprint
(https://review.openstack.org/#/c/89363/) is already being
reviewed, under the old pre-spec process.
Assignee(s)
-----------
Primary assignee:
William C. Arnold (barnold-8)


@@ -20,7 +20,7 @@ import testtools
def create_scenarios():
    # create a set of directories excluded from testing
-    exclude_dirs = {'templates', 'juno', 'kilo'}
+    exclude_dirs = {'templates', 'kilo'}
    # get whole list of sub-directories in specs directory
    release_names = [x.split('/')[1] for x in glob.glob('specs/*/')]
    # generate a list of scenarios (1 scenario - for each release)
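For illustration only, a hedged sketch of what the scenario generation might
now produce with 'juno' no longer excluded (the (label, attributes) shape
follows the testscenarios convention; the attribute name and the filtering are
assumptions, since the actual generation code is truncated above)::

    import glob

    exclude_dirs = {'templates', 'kilo'}
    release_names = [x.split('/')[1] for x in glob.glob('specs/*/')]
    # One (name, {'release': name}) pair per non-excluded release directory,
    # so the juno specs are exercised by the title tests as well.
    scenarios = [(name, {'release': name})
                 for name in release_names
                 if name not in exclude_dirs]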