OpenStack Orchestration (Heat)
Zane Bitter ae2b47d8fd Fix race condition deleting in-progress stack
If we call stack_delete on a stack with an operation in progress, we kill
any existing delete thread that is running. However, we don't wait for that
thread to die before starting a new thread to delete the stack again. If
any part of the cleanup operation in the old thread (i.e. handling of the
GreenletExit exception) causes a context switch (which is likely), other
threads can start working while the cleanup is still in progress. This
could create race conditions like the one in bug 1328983.

Avoid this problem by making sure we wait for all threads in a thread group
to die before continuing. (Note that this means the user's API call is
blocking on the cleanup of the old thread. This is sadly unavoidable for
now, but should probably be fixed in the future by stopping the old thread
from the new delete thread.)
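
A rough sketch of the problem and the fix, using plain eventlet greenthreads
(Heat's real code goes through its thread-group machinery, so everything
below is illustrative rather than the actual patch):

    import eventlet
    from greenlet import GreenletExit

    def delete_stack():
        try:
            eventlet.sleep(60)      # stands in for the real delete work
        except GreenletExit:
            eventlet.sleep(0)       # cleanup that yields, so other threads
                                    # can run before it completes

    old_thread = eventlet.spawn(delete_stack)
    eventlet.sleep(0)               # let the old delete get going

    old_thread.kill()               # starting a new delete right here would
                                    # race with the cleanup above
    old_thread.wait()               # the fix: block until the old thread,
                                    # cleanup included, has exited
    new_thread = eventlet.spawn(delete_stack)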

This was suggested earlier, but removed without explanation between
patchsets 11 and 12 of I188e43ad88b98da7d1a08269189aaefa57c36df2, which
implemented deletion of in-progress stacks with locks:
https://review.openstack.org/#/c/63002/11..12/heat/engine/service.py

Also remove the call to stack_lock_release(), which was a hack around the
fact that wait() does not wait for link()ed functions - eventlet sends the
exit event (that wait() is waiting on) before resolving links. Instead, add
another link to the end of the list to indicate that links have all been
run. This should eliminate "Lock was already released" messages in the
logs.
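
A minimal sketch of the link-ordering issue and the extra-link workaround,
again with raw eventlet and illustrative names rather than Heat's actual
code:

    import eventlet
    from eventlet import event

    def release_lock(gt):
        pass                        # stands in for the link()ed lock release

    gt = eventlet.spawn(lambda: None)
    gt.link(release_lock)

    # gt.wait() may return before release_lock has run, because eventlet
    # fires the exit event that wait() blocks on before resolving links.
    # Appending one more link and waiting on that instead guarantees that
    # every earlier link has already been called, since links run in order.
    links_done = event.Event()
    gt.link(lambda _gt: links_done.send())
    links_done.wait()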

Change-Id: I2e4561cbe29ab10554da67859df8c2db0854dd38
2014-06-20 12:37:01 -04:00
bin - Move Engine initialization into service start() (2014-06-01 08:20:49 +01:00)
contrib - tests add stub_keystoneclient to base test class (2014-06-18 10:55:41 +01:00)
doc - Merge "Heat and Openstack incomplete documentation" (2014-06-17 00:04:55 +00:00)
etc/heat - Support x-openstack-request-id for Heat (2014-06-17 10:11:24 +00:00)
heat - Fix race condition deleting in-progress stack (2014-06-20 12:37:01 -04:00)
tools - Sync with oslo incubator (2014-06-17 11:22:37 +02:00)
.coveragerc - Enabled source code coverage for contrib directory (2014-01-28 21:49:40 +08:00)
.gitignore - Add heat.sqlite in git ignore list (2014-01-25 13:58:21 +08:00)
.gitreview - Update .gitreview for org move. (2012-12-02 17:46:15 +00:00)
.testr.conf - Restructure contrib/ directories (2014-03-03 10:49:28 -05:00)
CONTRIBUTING.rst - Add CONTRIBUTING file. (2013-05-25 08:46:32 +02:00)
HACKING.rst - Updates OpenStack Style Commandments link (2013-10-16 22:44:44 +05:30)
LICENSE - Initial commit (basics copied from glance) (2012-03-13 21:48:07 +11:00)
MANIFEST.in - Delete deprecated docs/ directory (2013-10-24 11:03:11 -10:00)
README.rst - Rename Quantum to Neutron (2013-08-06 22:08:27 -07:00)
babel.cfg - Add setup.py and friends (2012-03-14 09:25:54 +11:00)
install.sh - Update install.sh to reflect recent oslo.db format (2013-11-13 16:54:59 +00:00)
openstack-common.conf - Sync oslo-incubator.middleware module (2014-06-17 18:37:10 +09:00)
pylintrc - Directives to not use variable names that conflict with pdb (2012-03-20 07:16:16 -04:00)
requirements.txt - Merge "Add glanceclient to heat" (2014-05-09 15:18:45 +00:00)
run_tests.sh - Run pep8 check in run_tests.sh as in tox (2014-03-29 23:47:20 +02:00)
setup.cfg - Use entry points for config generation (2014-05-30 16:21:03 +02:00)
setup.py - Updated from global requirements (2014-05-09 02:42:01 +00:00)
test-requirements.txt - Sync version of sphinx from requirements (2014-05-28 11:54:37 +12:00)
tox.ini - Use entry points for config generation (2014-05-30 16:21:03 +02:00)
uninstall.sh - Add uninstall script for Heat (2012-06-23 22:41:30 -04:00)

README.rst

HEAT

Heat is a service to orchestrate multiple composite cloud applications using templates, through both an OpenStack-native REST API and a CloudFormation-compatible Query API.

Why heat? It makes the clouds rise and keeps them there.

Getting Started

If you'd like to run from the master branch, you can clone the git repo:

git clone git@github.com:openstack/heat.git

Python client

https://github.com/openstack/python-heatclient
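
For example, a minimal sketch of listing stacks with the client (the endpoint and token below are placeholders you would normally obtain from Keystone):

    from heatclient.client import Client

    # Placeholder values; real deployments get these from Keystone.
    heat = Client('1',
                  endpoint='http://heat.example.com:8004/v1/TENANT_ID',
                  token='AUTH_TOKEN')

    # List existing stacks and their current status.
    for stack in heat.stacks.list():
        print('%s %s' % (stack.stack_name, stack.stack_status))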

References

We have integration with