diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst index 988f2856..2762e800 100644 --- a/CONTRIBUTING.rst +++ b/CONTRIBUTING.rst @@ -1,13 +1,13 @@ If you would like to contribute to the development of OpenStack, you must follow the steps documented at: - http://wiki.openstack.org/HowToContribute#If_you.27re_a_developer + http://docs.openstack.org/infra/manual/developers.html#development-workflow Once those steps have been completed, changes to OpenStack should be submitted for review via the Gerrit tool, following the workflow documented at: - http://wiki.openstack.org/GerritWorkflow + http://docs.openstack.org/infra/manual/developers.html#development-workflow Pull requests submitted through GitHub will be ignored. diff --git a/README.rst b/README.rst index 24c6e5d9..c5e1f105 100644 --- a/README.rst +++ b/README.rst @@ -5,7 +5,10 @@ A library to do [jobs, tasks, flows] in a highly available, easy to understand and declarative manner (and more!) to be used with OpenStack and other projects. -- More information can be found by referring to the `developer documentation`_. 
+* Free software: Apache license +* Documentation: http://docs.openstack.org/developer/taskflow +* Source: http://git.openstack.org/cgit/openstack/taskflow +* Bugs: http://bugs.launchpad.net/taskflow/ Join us ------- @@ -18,32 +21,27 @@ Testing and requirements Requirements ~~~~~~~~~~~~ -Because TaskFlow has many optional (pluggable) parts like persistence -backends and engines, we decided to split our requirements into two -parts: - things that are absolutely required by TaskFlow (you can't use -TaskFlow without them) are put into ``requirements-pyN.txt`` (``N`` being the -Python *major* version number used to install the package); - things that are -required by some optional part of TaskFlow (you can use TaskFlow without -them) are put into ``optional-requirements.txt``; if you want to use the -feature in question, you should add that requirements to your project or -environment; - as usual, things that required only for running tests are -put into ``test-requirements.txt``. +Because this project has many optional (pluggable) parts like persistence +backends and engines, we decided to split our requirements into three +parts: things that are absolutely required (you can't use the project +without them) are put into ``requirements-pyN.txt`` (``N`` being the +Python *major* version number used to install the package). Requirements +needed by some optional part of this project (you can use the +project without them) are put into our ``tox.ini`` file (so that we can still +test that the optional functionality works as expected). If you want to use +the feature in question (`eventlet`_, the worker-based engine that +uses `kombu`_, the `sqlalchemy`_ persistence backend, or jobboards which +have an implementation built using `kazoo`_ ...), you should add +those requirements to your project or environment. As usual, things +required only for running tests are put into ``test-requirements.txt``.
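As an illustration of the layout described above (the environment name and dependency list here are hypothetical, not the project's actual ``tox.ini`` contents), an optional backend's requirement can be listed only in the tox environment that exercises it, while the core requirements stay in the requirements files:

```ini
# Hypothetical tox environment: core requirements come from the
# requirements files, while the optional ZooKeeper jobboard
# dependency (kazoo) is listed only for this environment.
[testenv:py27-zookeeper]
deps =
    -r{toxinidir}/requirements-py2.txt
    -r{toxinidir}/test-requirements.txt
    kazoo
```

A project that wants the same feature outside of testing would instead add ``kazoo`` to its own requirements or environment, as the paragraph above suggests.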
Tox.ini ~~~~~~~ Our ``tox.ini`` file describes several test environments that allow testing of TaskFlow with different Python versions and sets of requirements installed. - -To generate the ``tox.ini`` file, use the ``toxgen.py`` script by first -installing `toxgen`_ and then provide that script as input the ``tox-tmpl.ini`` -file to generate the final ``tox.ini`` file. - -*For example:* - -:: - - $ toxgen.py -i tox-tmpl.ini -o tox.ini +Please refer to the `tox`_ documentation to understand how to make these test +environments work for you. Developer documentation ----------------------- @@ -56,5 +54,9 @@ We also have sphinx documentation in ``docs/source``. $ python setup.py build_sphinx -.. _toxgen: https://pypi.python.org/pypi/toxgen/ +.. _kazoo: http://kazoo.readthedocs.org/ +.. _sqlalchemy: http://www.sqlalchemy.org/ +.. _kombu: http://kombu.readthedocs.org/ +.. _eventlet: http://eventlet.net/ +.. _tox: http://tox.testrun.org/ .. _developer documentation: http://docs.openstack.org/developer/taskflow/ diff --git a/doc/diagrams/core.graffle b/doc/diagrams/core.graffle deleted file mode 100644 index a570fe59..00000000 --- a/doc/diagrams/core.graffle +++ /dev/null @@ -1,8023 +0,0 @@ (8,023 lines of deleted OmniGraffle diagram markup omitted)
-{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 Owner} - - - - Bounds - {{201.44151899448087, 1378.6049619569026}, {54, 36}} - Class - ShapedGraphic - ID - 977 - Shape - Rectangle - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 Job1} - - - - ID - 974 - - - Class - Group - Graphics - - - Class - LineGraphic - Head - - ID - 980 - - ID - 979 - Points - - {246.94151899473252, 1387.6017734596319} - {305.27826523991337, 1387.5950095848621} - - Style - - stroke - - HeadArrow - 0 - Legacy - - LineType - 1 - Pattern - 1 - TailArrow - FilledArrow - - - Tail - - ID - 981 - - - - Bounds - {{305.77826521030119, 1369.6049619569026}, {54, 36}} - Class - ShapedGraphic - ID - 980 - Shape - Rectangle - Style - - stroke - - Pattern - 1 - - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 Owner} - - - - Bounds - {{192.44151899448087, 1369.6049619569026}, {54, 36}} - Class - ShapedGraphic - ID - 981 - Shape - Rectangle - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 Job1} - - - - ID - 978 - - - Bounds - {{170.82414838901212, 1345.6049695862971}, {236.99999999999997, 168}} - Class - ShapedGraphic - FitText - Vertical - Flow - Resize - ID - 983 - Shape 
- Rectangle - Style - - fill - - GradientCenter - {-0.29411799999999999, -0.264706} - - - Text - - Align - 0 - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720 - -\f0\fs24 \cf0 \ -\ -\ -\ -\ -\ -\ -\ -\ -\ -\ -} - VerticalPad - 0 - - TextPlacement - 0 - - - Bounds - {{170.82414838901212, 1331.6049695862971}, {236.99999999999997, 14}} - Class - ShapedGraphic - FitText - Vertical - Flow - Resize - ID - 984 - Shape - Rectangle - Style - - fill - - GradientCenter - {-0.29411799999999999, -0.264706} - - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\qc - -\f0\i\b\fs24 \cf0 Jobboard} - VerticalPad - 0 - - TextPlacement - 0 - - - Class - LineGraphic - ID - 861 - Points - - {470.1300977351811, 156.79728666398489} - {409.22449458705552, 177.09915438002673} - - Style - - stroke - - HeadArrow - FilledArrow - Legacy - - LineType - 1 - Pattern - 1 - TailArrow - 0 - - - - - Bounds - {{476.1300977351811, 138.79728666398486}, {41, 28}} - Class - ShapedGraphic - FitText - YES - Flow - Resize - FontInfo - - Font - Helvetica - Size - 12 - - ID - 860 - Shape - Rectangle - Style - - fill - - Draws - NO - - shadow - - Draws - NO - - stroke - - Draws - NO - - - Text - - Pad - 0 - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\i\fs24 \cf0 Nested\ -subflow} - VerticalPad - 0 - - Wrap - NO - - - Class - LineGraphic - ID - 859 - Points - - {382.65871206690008, 
221.8325309753418} - {382.65871206690008, 249.83253047325724} - - Style - - stroke - - HeadArrow - FilledArrow - Legacy - - LineType - 1 - TailArrow - 0 - - - - - Bounds - {{359.27167431166708, 255.11224365234375}, {47, 47}} - Class - ShapedGraphic - HFlip - YES - ID - 855 - Shape - Circle - Style - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 Flow} - VerticalPad - 0 - - - - Bounds - {{355.77167171239853, 251.61224365234375}, {54, 54}} - Class - ShapedGraphic - HFlip - YES - ID - 856 - Shape - Circle - Style - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 Retry} - VerticalPad - 0 - - TextPlacement - 0 - TextRelativeArea - {{0.099999999999999978, 1.0000000238418578}, {0.80000000000000004, 0.69999999999999996}} - TextRotation - 305.1478271484375 - - - Bounds - {{290.73464965820312, 1032.5300847720423}, {27, 28}} - Class - ShapedGraphic - FitText - YES - Flow - Resize - FontInfo - - Font - Helvetica - Size - 12 - - ID - 839 - Shape - Rectangle - Style - - fill - - Draws - NO - - shadow - - Draws - NO - - stroke - - Draws - NO - - - Text - - Pad - 0 - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\i\fs24 \cf0 Run\ -Loop} - VerticalPad - 0 - - Wrap - NO - - - Class - LineGraphic - ID - 838 - Points - - {16.938772201538086, 784.51440811157227} - 
{550.61223120254476, 784.51440811157227} - - Style - - stroke - - HeadArrow - 0 - Legacy - - LineType - 1 - Pattern - 1 - TailArrow - 0 - - - - - Bounds - {{478.53062537152402, 1122.9011524936079}, {63.714366912841797, 31.333333333333332}} - Class - ShapedGraphic - ID - 837 - Shape - Rectangle - Style - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs20 \cf0 Completer} - VerticalPad - 0 - - - - Bounds - {{478.53062537152402, 1079.8705891391157}, {63.714366912841797, 31.333333333333332}} - Class - ShapedGraphic - ID - 836 - Shape - Rectangle - Style - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs20 \cf0 Scheduler} - VerticalPad - 0 - - - - Bounds - {{372.92606544494629, 1123.8570556640625}, {61, 56}} - Class - ShapedGraphic - FitText - YES - Flow - Resize - ID - 834 - Shape - Rectangle - Style - - fill - - Draws - NO - - shadow - - Draws - NO - - stroke - - Draws - NO - - - Text - - Align - 0 - Pad - 0 - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural - -\f0\fs24 \cf0 - run()\ -- suspend()\ -...\ -} - VerticalPad - 0 - - Wrap - NO - - - Bounds - {{390.8163413254731, 1078.5120424153899}, {63.714366912841797, 31.333333333333332}} - Class - ShapedGraphic - ID - 832 - Shape - Rectangle - Style - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 
-\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs20 \cf0 Compiler} - VerticalPad - 0 - - - - Bounds - {{209.22450065612793, 852.73572444915771}, {80, 28}} - Class - ShapedGraphic - FitText - YES - Flow - Resize - FontInfo - - Font - Helvetica - Size - 12 - - ID - 831 - Shape - Rectangle - Style - - fill - - Draws - NO - - shadow - - Draws - NO - - stroke - - Draws - NO - - - Text - - Pad - 0 - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\i\fs24 \cf0 States, results,\ -progress...} - VerticalPad - 0 - - Wrap - NO - - - Bounds - {{195.91839599609375, 1080.2736424160523}, {156, 70}} - Class - ShapedGraphic - FitText - YES - Flow - Resize - FontInfo - - Font - Helvetica - Size - 12 - - ID - 828 - Shape - Rectangle - Style - - fill - - Draws - NO - - shadow - - Draws - NO - - stroke - - Draws - NO - - - Text - - Align - 0 - Pad - 0 - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural - -\f0\i\fs24 \cf0 - PENDING -> RUNNING\ -- RUNNING -> SUCCESS\ -- SUSPENDED -> RUNNING\ -- FAILURE -> REVERTING\ -....} - VerticalPad - 0 - - Wrap - NO - - - Bounds - {{179.01475524902344, 1044.5300637912073}, {30, 14}} - Class - ShapedGraphic - FitText - YES - Flow - Resize - FontInfo - - Font - Helvetica - Size - 12 - - ID - 827 - Shape - Rectangle - Style - - fill - - Draws - NO - - shadow - - Draws - NO - - stroke - - Draws - NO - - - Text - - Pad - 0 - Text - 
{\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\i\fs24 \cf0 Emits} - VerticalPad - 0 - - Wrap - NO - - - Class - LineGraphic - ID - 826 - Points - - {228.92846501504124, 1022.9387556204747} - {165.21409631559922, 1048.4693673490142} - - Style - - stroke - - HeadArrow - FilledArrow - Legacy - - LineType - 1 - Pattern - 1 - TailArrow - 0 - - - - - Bounds - {{84.918390063136314, 1048.4693603515625}, {108.00000616531918, 60.376010894775391}} - Class - ShapedGraphic - ID - 9 - Shape - Cloud - Style - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 State\ -Transition\ -Notifications} - VerticalPad - 0 - - - - Class - Group - Graphics - - - Bounds - {{253.38389312817009, 1036.7868945785101}, {27.016406012875592, 38.542124503311257}} - Class - ShapedGraphic - ID - 93 - Magnets - - {0.15027599999999999, -0.32002000000000003} - {-0.5, -0.49964799999999998} - {-0.5, -0.25638699999999998} - {-0.10728, -0.148201} - {0.041826500000000003, 0.088786000000000004} - {-0.043045800000000002, 0.088786000000000004} - {0.22847700000000001, 0.5} - {0.5, 0.088786000000000004} - {0.40652500000000003, 0.088786000000000004} - {0.40652500000000003, 0.088786000000000004} - {0.40652500000000003, 0.088786000000000004} - - Rotation - 90 - Shape - Bezier - ShapeData - - UnitPoints - - {0.406219, 0.101163} - {0.39736100000000002, -0.042958700000000002} - {0.31166700000000003, -0.185311} - {0.149117, -0.29534500000000002} - {-0.030395499999999999, -0.41686299999999998} - {-0.261517, -0.50514099999999995} - {-0.49668800000000002, 
-0.49976700000000002} - {-0.496693, -0.49976500000000001} - {-0.062913899999999995, -0.36058899999999999} - {-0.062913899999999995, -0.36058899999999999} - {-0.062918699999999994, -0.36058899999999999} - {-0.5, -0.21609700000000001} - {-0.5, -0.21609600000000001} - {-0.35843000000000003, -0.22182399999999999} - {-0.217449, -0.204378} - {-0.10928400000000001, -0.12870000000000001} - {-0.017806099999999998, -0.064687300000000003} - {0.032062500000000001, 0.0174179} - {0.040309900000000003, 0.101163} - {0.040309900000000003, 0.101163} - {-0.044847499999999998, 0.101163} - {-0.044847499999999998, 0.101163} - {-0.044847499999999998, 0.101163} - {0.22758200000000001, 0.5} - {0.22758200000000001, 0.5} - {0.22758200000000001, 0.5} - {0.5, 0.101163} - {0.5, 0.101163} - {0.5, 0.101163} - {0.406219, 0.101163} - - - - - Bounds - {{234.04078487632972, 1028.8634930208395}, {26.999999999999996, 38.288223134554855}} - Class - ShapedGraphic - ID - 94 - Magnets - - {0.15027599999999999, -0.32002000000000003} - {-0.5, -0.49964799999999998} - {-0.5, -0.25638699999999998} - {-0.10728, -0.148201} - {0.041826500000000003, 0.088786000000000004} - {-0.043045800000000002, 0.088786000000000004} - {0.22847700000000001, 0.5} - {0.5, 0.088786000000000004} - {0.40652500000000003, 0.088786000000000004} - {0.40652500000000003, 0.088786000000000004} - {0.40652500000000003, 0.088786000000000004} - - Rotation - 180 - Shape - 33C70F48-B008-4466-BD81-E84D73C055CA-438-0000056AF6035FFB - - - Bounds - {{242.15904544605092, 1007.9418767503257}, {27.016406012875589, 38.264972185430459}} - Class - ShapedGraphic - ID - 95 - Magnets - - {0.15027599999999999, -0.32002000000000003} - {-0.5, -0.49964799999999998} - {-0.5, -0.25638699999999998} - {-0.10728, -0.148201} - {0.041826500000000003, 0.088786000000000004} - {-0.043045800000000002, 0.088786000000000004} - {0.22847700000000001, 0.5} - {0.5, 0.088786000000000004} - {0.40652500000000003, 0.088786000000000004} - {0.40652500000000003, 0.088786000000000004} - 
{0.40652500000000003, 0.088786000000000004} - - Rotation - 270 - Shape - 33C70F48-B008-4466-BD81-E84D73C055CA-438-0000056AF6035FFB - - - Bounds - {{261.04078487632967, 1015.981498100793}, {27.000000000000004, 38.288223134554862}} - Class - ShapedGraphic - ID - 96 - Magnets - - {0.15027599999999999, -0.32002000000000003} - {-0.5, -0.49964799999999998} - {-0.5, -0.25638699999999998} - {-0.10728, -0.148201} - {0.041826500000000003, 0.088786000000000004} - {-0.043045800000000002, 0.088786000000000004} - {0.22847700000000001, 0.5} - {0.5, 0.088786000000000004} - {0.40652500000000003, 0.088786000000000004} - {0.40652500000000003, 0.088786000000000004} - {0.40652500000000003, 0.088786000000000004} - - Shape - 33C70F48-B008-4466-BD81-E84D73C055CA-438-0000056AF6035FFB - - - ID - 92 - - - Bounds - {{396.52035685550777, 974.46163584936892}, {142, 14}} - Class - ShapedGraphic - FitText - YES - Flow - Resize - FontInfo - - Font - Helvetica-Bold - Size - 12 - - ID - 457 - Shape - Rectangle - Style - - fill - - Draws - NO - - shadow - - Draws - NO - - stroke - - Draws - NO - - - Text - - Pad - 0 - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\i\b\fs24 \cf0 ActionEngine (one impl.)} - VerticalPad - 0 - - Wrap - NO - - - Bounds - {{390.8163413254731, 1038.2328676107024}, {63.714366912841797, 31.333333333333332}} - Class - ShapedGraphic - ID - 450 - Shape - Rectangle - Style - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs20 \cf0 Runner} - VerticalPad - 0 - - - - Bounds - {{478.53062537152402, 1038.2328900119185}, 
{63.714366912841797, 31.333333333333332}} - Class - ShapedGraphic - ID - 449 - Shape - Rectangle - Style - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs20 \cf0 Runtime} - VerticalPad - 0 - - - - Bounds - {{478.5306334878843, 994.19207080251738}, {63.714366912841797, 31.333333333333332}} - Class - ShapedGraphic - ID - 447 - Shape - Rectangle - Style - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs20 \cf0 Executor} - VerticalPad - 0 - - - - Bounds - {{390.81631892425446, 994.19204840129885}, {63.714366912841797, 31.333333333333332}} - Class - ShapedGraphic - ID - 446 - Shape - Rectangle - Style - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs20 \cf0 Analyzer} - VerticalPad - 0 - - - - Class - LineGraphic - Head - - ID - 444 - - ID - 445 - Points - - {304.30400417385465, 1005.6686926988394} - {365.81839492659333, 1029.0751702876107} - - Style - - stroke - - HeadArrow - FilledArrow - Legacy - - LineType - 1 - Pattern - 1 - TailArrow - 0 - - - Tail - - ID - 423 - - - - Class - LineGraphic - Head - - ID - 10 - - ID - 433 - Points - - {437.73468537749687, 869.13090571936129} - {473.25508692784757, 868.8206769098332} - - Style - - stroke - - HeadArrow - FilledArrow - Legacy - - LineType - 1 - TailArrow - FilledArrow - - - - - Bounds - {{473.75506787377572, 
840.81631016602194}, {63.714366912841797, 56}} - Class - ShapedGraphic - ID - 10 - Magnets - - {0, 1} - {0, -1} - {1, 0} - {-1, 0} - - Shape - Cylinder - Style - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\qc - -\f0\fs20 \cf0 Persistence\ -Backend} - VerticalPad - 0 - - - - Class - LineGraphic - ID - 428 - OrthogonalBarAutomatic - - OrthogonalBarPoint - {0, 0} - OrthogonalBarPosition - -1 - Points - - {258.38771438598633, 947} - {308.12470245361328, 886} - - Style - - stroke - - HeadArrow - FilledArrow - Legacy - - LineType - 2 - TailArrow - FilledArrow - - - - - Class - TableGroup - Graphics - - - Bounds - {{310.93862753220276, 826.66148410306198}, {126, 14}} - Class - ShapedGraphic - FitText - Vertical - Flow - Resize - ID - 426 - Shape - Rectangle - Style - - fill - - GradientCenter - {-0.29411799999999999, -0.264706} - - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\qc - -\f0\b\fs24 \cf0 Storage} - VerticalPad - 0 - - TextPlacement - 0 - - - Bounds - {{310.93862753220276, 840.66148410306198}, {126, 28}} - Class - ShapedGraphic - FitText - Vertical - Flow - Resize - ID - 43 - Shape - Rectangle - Style - - fill - - GradientCenter - {-0.29411799999999999, -0.264706} - - - Text - - Align - 0 - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720 - -\f0\fs24 \cf0 - flow_name\ -- flow_uuid} - VerticalPad - 0 - - TextPlacement - 0 - - - Bounds - 
{{310.93862753220276, 868.66148410306198}, {126, 56}} - Class - ShapedGraphic - FitText - Vertical - Flow - Resize - ID - 427 - Shape - Rectangle - Style - - fill - - GradientCenter - {-0.29411799999999999, -0.264706} - - - Text - - Align - 0 - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720 - -\f0\fs24 \cf0 - save()\ -- get()\ -- get_failures()\ -...} - VerticalPad - 0 - - TextPlacement - 0 - - - GridH - - 426 - 43 - 427 - - - ID - 425 - - - Bounds - {{207.28567728426299, 974.78645878243321}, {105.10203552246094, 36}} - Class - ShapedGraphic - ID - 421 - Shape - Cloud - Style - - Text - - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 Compilation} - VerticalPad - 0 - - - - Bounds - {{240.83671598660843, 957.79548143397199}, {38, 14}} - Class - ShapedGraphic - FitText - YES - Flow - Resize - ID - 422 - Shape - Rectangle - Style - - fill - - Draws - NO - - shadow - - Draws - NO - - stroke - - Draws - NO - - - Text - - Pad - 0 - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 Engine} - VerticalPad - 0 - - Wrap - NO - - - Bounds - {{215.83669066280191, 952.49403624989395}, {88, 72.509323120117188}} - Class - ShapedGraphic - ID - 423 - Shape - Rectangle - - - Class - LineGraphic - ID - 418 - Points - - {175.01475125757293, 858.46545582024169} - {175.01475125757293, 1126.5224146928358} - - Style - - stroke - - 
HeadArrow - 0 - Legacy - - LineType - 1 - Pattern - 1 - TailArrow - 0 - - - - - Bounds - {{56.053440093994141, 802.0387135699907}, {88, 44}} - Class - ShapedGraphic - FitText - YES - Flow - Resize - FontInfo - - Font - Helvetica-BoldOblique - Size - 18 - - ID - 414 - Shape - Rectangle - Style - - fill - - Draws - NO - - shadow - - Draws - NO - - stroke - - Draws - NO - - - Text - - Pad - 0 - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\i\b\fs36 \cf0 Activation\ -Phase} - VerticalPad - 0 - - Wrap - NO - - - Bounds - {{105.08388984446533, 1003.3409264674543}, {59, 28}} - Class - ShapedGraphic - FitText - YES - Flow - Resize - ID - 413 - Shape - Rectangle - Style - - fill - - Draws - NO - - shadow - - Draws - NO - - stroke - - Draws - NO - - - Text - - Pad - 0 - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 Results/\ -Exceptions} - VerticalPad - 0 - - Wrap - NO - - - Class - LineGraphic - ID - 412 - Points - - {109.26527080670799, 991.99397346428293} - {192.17343756180355, 991.99397346428293} - - Style - - stroke - - HeadArrow - 0 - Legacy - - LineType - 1 - TailArrow - FilledArrow - - - - - Bounds - {{115.18593484369178, 915.30610463461369}, {49, 56}} - Class - ShapedGraphic - FitText - YES - Flow - Resize - ID - 411 - Shape - Rectangle - Style - - fill - - Draws - NO - - shadow - - Draws - NO - - stroke - - Draws - NO - - - Text - - Pad - 0 - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fswiss\fcharset0 Helvetica;} -{\colortbl;\red255\green255\blue255;} 
-\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 Run/\ -Resume/\ -Revert/\ -Suspend} - VerticalPad - 0 - - Wrap - NO - - - Class - LineGraphic - ID - 410 - Points - - {113.34690338032949, 979.62663303122656} - {203.34690321589053, 979.62663303122656} - - Style - - stroke - - HeadArrow - FilledArrow - Legacy - - LineType - 1 - TailArrow - 0 - - - - - Class - Group - Graphics - - - Bounds - {{59.15303234416389, 922.95317062424022}, {35, 28}} - Class - ShapedGraphic - FitText - YES - Flow - Resize - ID - 402 - Shape - Rectangle - Style - - fill - - Draws - NO - - shadow - - Draws - NO - - stroke - - Draws - NO - - - Text - - Pad - 0 - Text - {\rtf1\ansi\ansicpg1252\cocoartf1265\cocoasubrtf200 -\cocoascreenfonts1{\fonttbl\f0\fnil\fcharset0 GillSans;} -{\colortbl;\red255\green255\blue255;} -\pard\tx560\tx1120\tx1680\tx2240\tx2800\tx3360\tx3920\tx4480\tx5040\tx5600\tx6160\tx6720\pardirnatural\qc - -\f0\fs24 \cf0 Library\ -User} - VerticalPad - 0 - - Wrap - NO - - - Class - Group - Graphics - - - Bounds - {{62.479807418169017, 1006.9767931370576}, {28.346457481384277, 28.346458435058594}} - Class - ShapedGraphic - ID - 404 - Shape - Rectangle - Style - - fill - - Color - - b - 0.4 - g - 1 - r - 1 - - Draws - NO - FillType - 2 - GradientAngle - 90 - GradientColor - - b - 0.4 - g - 1 - r - 1 - - MiddleColor - - b - 0.4 - g - 1 - r - 1 - - TrippleBlend - YES - - shadow - - Beneath - YES - Draws - NO - Fuzziness - 2.5038185119628906 - ShadowVector - {0, 1} - - stroke - - Draws - NO - Width - 1.5 - - - VFlip - YES - Wrap - NO - - - Class - LineGraphic - ID - 405 - Points - - {62.479807418169017, 981.46498235748606} - {90.826264899553294, 981.46498235748606} - {90.826264899553294, 981.46498235748606} - - Style - - shadow - - Beneath - YES - Draws - YES - Fuzziness - 2.5038185119628906 - ShadowVector - {0, 1} - - stroke - - CornerRadius - 1 - HeadArrow - 0 - Legacy - - LineType - 1 - TailArrow - 0 - Width 
[raw OmniGraffle plist data of the removed doc/diagrams/core.graffle diagram elided -- it depicted the Engine class (compile(), prepare(), run(), suspend(); notifier, atom_notifier, storage), its K-Threaded/No-Thread/Distributed engine subclasses, Task and Retry atoms (execute(), revert(), ...), flow patterns, and the SQLAlchemy/Memory/Filesystem/Zookeeper persistence backends]
1.0199999809265137 - ZoomValues - - - Canvas 1 - 1.0199999809265137 - 1 - - - - - diff --git a/doc/diagrams/core.graffle.tgz b/doc/diagrams/core.graffle.tgz new file mode 100644 index 00000000..9ab23321 Binary files /dev/null and b/doc/diagrams/core.graffle.tgz differ diff --git a/doc/diagrams/jobboard.graffle.tgz b/doc/diagrams/jobboard.graffle.tgz new file mode 100644 index 00000000..0fbe33a5 Binary files /dev/null and b/doc/diagrams/jobboard.graffle.tgz differ diff --git a/doc/source/arguments_and_results.rst b/doc/source/arguments_and_results.rst index e23a6375..cb2c8761 100644 --- a/doc/source/arguments_and_results.rst +++ b/doc/source/arguments_and_results.rst @@ -1,32 +1,35 @@ -========================== -Atom Arguments and Results -========================== +===================== +Arguments and results +===================== .. |task.execute| replace:: :py:meth:`~taskflow.task.BaseTask.execute` .. |task.revert| replace:: :py:meth:`~taskflow.task.BaseTask.revert` .. |retry.execute| replace:: :py:meth:`~taskflow.retry.Retry.execute` .. |retry.revert| replace:: :py:meth:`~taskflow.retry.Retry.revert` +.. |Retry| replace:: :py:class:`~taskflow.retry.Retry` +.. |Task| replace:: :py:class:`Task ` -In TaskFlow, all flow and task state goes to (potentially persistent) storage. -That includes all the information that :doc:`atoms ` (e.g. tasks) in the -flow need when they are executed, and all the information task produces (via -serializable task results). A developer who implements tasks or flows can -specify what arguments a task accepts and what result it returns in several -ways. This document will help you understand what those ways are and how to use -those ways to accomplish your desired usage pattern. +In TaskFlow, all flow and task state goes to (potentially persistent) storage +(see :doc:`persistence ` for more details). That includes all the +information that :doc:`atoms ` (e.g. tasks, retry objects...) 
in the +workflow need when they are executed, and all the information a task/retry +produces (via serializable results). A developer who implements tasks/retries +or flows can specify what arguments a task/retry accepts and what result it +returns in several ways. This document will help you understand what those ways +are and how to use those ways to accomplish your desired usage pattern. .. glossary:: - Task arguments - Set of names of task arguments available as the ``requires`` - property of the task instance. When a task is about to be executed - values with these names are retrieved from storage and passed to - |task.execute| method of the task. + Task/retry arguments + Set of names of task/retry arguments available as the ``requires`` + property of the task/retry instance. When a task or retry object is + about to be executed, values with these names are retrieved from storage + and passed to the ``execute`` method of the task/retry. - Task results - Set of names of task results (what task provides) available as - ``provides`` property of task instance. After a task finishes - successfully, its result(s) (what the task |task.execute| method + Task/retry results + Set of names of task/retry results (what a task/retry provides) available + as the ``provides`` property of a task or retry instance. After a task/retry + finishes successfully, its result(s) (what the ``execute`` method returns) are available by these names from storage (see examples below). @@ -44,8 +47,8 @@ There are different ways to specify the task argument ``requires`` set. Arguments inference ------------------- -Task arguments can be inferred from arguments of the |task.execute| method of -the task. +Task/retry arguments can be inferred from arguments of the |task.execute| +method of a task (or the |retry.execute| of a retry object). .. doctest:: @@ -56,10 +59,10 @@ the task.
>>> sorted(MyTask().requires) ['eggs', 'spam'] -Inference from the method signature is the ''simplest'' way to specify task +Inference from the method signature is the *simplest* way to specify arguments. Optional arguments (with default values), and special arguments like -``self``, ``*args`` and ``**kwargs`` are ignored on inference (as these names -have special meaning/usage in python). +``self``, ``*args`` and ``**kwargs`` are ignored during inference (as these +names have special meaning/usage in python). .. doctest:: @@ -83,14 +86,14 @@ have special meaning/usage in python). Rebinding --------- -**Why:** There are cases when the value you want to pass to a task is stored -with a name other then the corresponding task arguments name. That's when the -``rebind`` task constructor parameter comes in handy. Using it the flow author +**Why:** There are cases when the value you want to pass to a task/retry is +stored with a name other than the corresponding argument's name. That's when the +``rebind`` constructor parameter comes in handy. Using it the flow author can instruct the engine to fetch a value from storage by one name, but pass it -to a tasks |task.execute| method with another name. There are two possible ways -of accomplishing this. +to a task's/retry's ``execute`` method with another name. There are two possible +ways of accomplishing this. -The first is to pass a dictionary that maps the task argument name to the name +The first is to pass a dictionary that maps the argument name to the name of a saved value.
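Conceptually, the dictionary form of rebinding is just a rename applied while arguments are fetched from storage. A minimal self-contained sketch of that idea (a plain ``dict`` stands in for the storage backend and ``fetch_arguments`` is a hypothetical helper, not taskflow API):

```python
def fetch_arguments(storage, requires, rebind=None):
    """Resolve execute() keyword arguments from a storage mapping,
    renaming per rebind (argument name -> storage name).

    Illustrative sketch only; taskflow's real storage layer differs.
    """
    rebind = rebind or {}
    kwargs = {}
    for arg_name in requires:
        # Look up the value under its rebound name (if any), but pass
        # it to execute() under the argument's own name.
        storage_name = rebind.get(arg_name, arg_name)
        kwargs[arg_name] = storage[storage_name]
    return kwargs

storage = {'name': 'vm-1', 'vm_image_id': 'image-42'}
kwargs = fetch_arguments(storage,
                         requires=('vm_name', 'vm_image_id'),
                         rebind={'vm_name': 'name'})
print(kwargs)  # {'vm_name': 'vm-1', 'vm_image_id': 'image-42'}
```

Here ``rebind`` maps each ``execute`` argument name to the key under which the value was actually saved, which is exactly the rename the ``rebind`` constructor parameter instructs the engine to perform.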
For example, if you have task:: @@ -100,24 +103,25 @@ For example, if you have task:: def execute(self, vm_name, vm_image_id, **kwargs): pass # TODO(imelnikov): use parameters to spawn vm -and you saved 'vm_name' with 'name' key in storage, you can spawn a vm with -such 'name' like this:: +and you saved ``'vm_name'`` with the ``'name'`` key in storage, you can spawn a vm +with such a ``'name'`` like this:: SpawnVMTask(rebind={'vm_name': 'name'}) The second way is to pass a tuple/list/dict of argument names. The length of -the tuple/list/dict should not be less then number of task required parameters. +the tuple/list/dict should not be less than the number of required parameters. + For example, you can achieve the same effect as the previous example with:: SpawnVMTask(rebind_args=('name', 'vm_image_id')) -which is equivalent to a more elaborate:: +This is equivalent to a more elaborate:: SpawnVMTask(rebind=dict(vm_name='name', vm_image_id='vm_image_id')) -In both cases, if your task accepts arbitrary arguments with ``**kwargs`` -construct, you can specify extra arguments. +In both cases, if your task (or retry) accepts arbitrary arguments +with the ``**kwargs`` construct, you can specify extra arguments. :: @@ -158,7 +162,8 @@ arguments) will appear in the ``kwargs`` of the |task.execute| method. When constructing a task instance the flow author can also add more requirements if desired. Those manual requirements (if they are not functional -arguments) will appear in the ``**kwargs`` the |task.execute| method. +arguments) will appear in the ``kwargs`` parameter of the |task.execute| +method. .. doctest:: @@ -189,15 +194,19 @@ avoid invalid argument mappings. Results specification ===================== -In python, function results are not named, so we can not infer what a task -returns.
This is important since the complete task result (what the -|task.execute| method returns) is saved in (potentially persistent) storage, -and it is typically (but not always) desirable to make those results accessible -to other tasks. To accomplish this the task specifies names of those values via -its ``provides`` task constructor parameter or other method (see below). +In python, function results are not named, so we cannot infer what a +task/retry returns. This is important since the complete result (what the +task |task.execute| or retry |retry.execute| method returns) is saved +in (potentially persistent) storage, and it is typically (but not always) +desirable to make those results accessible to others. To accomplish this +the task/retry specifies names of those values via its ``provides`` constructor +parameter or by its default provides attribute. + +Examples +-------- Returning one value -------------------- ++++++++++++++++++++ If task returns just one value, ``provides`` should be string -- the name of the value. @@ -212,7 +221,7 @@ name of the value. set(['the_answer']) Returning a tuple ------------------ ++++++++++++++++++ For a task that returns several values, one option (as usual in python) is to return those values via a ``tuple``. @@ -242,17 +251,17 @@ tasks) will be able to get those elements from storage by name: Provides argument can be shorter than the actual tuple returned by a task -- then extra values are ignored (but, as expected, **all** those values are saved -and passed to the |task.revert| method). +and passed to the task |task.revert| or retry |retry.revert| method). .. note:: Provides arguments tuple can also be longer than the actual tuple returned by task -- when this happens the extra parameters are left undefined: a warning is printed to logs and if use of such parameter is attempted a - ``NotFound`` exception is raised. + :py:class:`~taskflow.exceptions.NotFound` exception is raised.
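The way a tuple result is spread across the ``provides`` names can be pictured with an ordinary ``zip`` (an illustrative sketch only; a plain ``dict`` stands in for taskflow's storage layer):

```python
def save_tuple_result(storage, provides, result):
    """Pair each provides name with the corresponding tuple element.

    zip() stops at the shorter sequence, which models both edge cases:
    extra result values are ignored, and extra provides names simply
    never get defined in storage.
    """
    for name, value in zip(provides, result):
        storage[name] = value

storage = {}
# Three values returned, only two names provided -- 'extra' is dropped.
save_tuple_result(storage, ('bits', 'pieces'), ('BITs', 'PIECEs', 'extra'))
print(sorted(storage))  # ['bits', 'pieces']
```

Accessing a provided-but-never-defined name is where the real library raises its not-found error; in this toy model such a name is just absent from the dict.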
Returning a dictionary ----------------------- +++++++++++++++++++++++ Another option is to return several values as a dictionary (aka a ``dict``). @@ -290,16 +299,17 @@ will be able to get elements from storage by name: and passed to the |task.revert| method). If the provides argument has some items not present in the actual dict returned by the task -- then extra parameters are left undefined: a warning is printed to logs and if use of - such parameter is attempted a ``NotFound`` exception is raised. + such parameter is attempted a :py:class:`~taskflow.exceptions.NotFound` + exception is raised. Default provides ----------------- ++++++++++++++++++ -As mentioned above, the default task base class provides nothing, which means -task results are not accessible to other tasks in the flow. +As mentioned above, the default base class provides nothing, which means +results are not accessible to other tasks/retries in the flow. -The task author can override this and specify default value for provides using -``default_provides`` class variable: +The author can override this and specify a default value for provides using +the ``default_provides`` class/instance variable: :: @@ -314,8 +324,8 @@ Of course, the flow author can override this to change names if needed: BitsAndPiecesTask(provides=('b', 'p')) -or to change structure -- e.g. this instance will make whole tuple accessible -to other tasks by name 'bnp': +or to change structure -- e.g. this instance will make the tuple accessible +to other tasks by the name ``'bnp'``: :: @@ -331,28 +341,29 @@ the task from other tasks in the flow (e.g. to avoid naming conflicts): Revert arguments ================ -To revert a task engine calls its |task.revert| method. This method -should accept same arguments as |task.execute| method of the task and one -more special keyword argument, named ``result``. +To revert a task the :doc:`engine ` calls the task's +|task.revert| method.
This method should accept the same arguments +as the |task.execute| method of the task and one more special keyword +argument, named ``result``. For ``result`` value, two cases are possible: -* if task is being reverted because it failed (an exception was raised from its - |task.execute| method), ``result`` value is instance of - :py:class:`taskflow.utils.misc.Failure` object that holds exception - information; +* If the task is being reverted because it failed (an exception was raised + from its |task.execute| method), the ``result`` value is an instance of a + :py:class:`~taskflow.types.failure.Failure` object that holds the exception + information. -* if task is being reverted because some other task failed, and this task - finished successfully, ``result`` value is task result fetched from storage: - basically, that's what |task.execute| method returned. +* If the task is being reverted because some other task failed, and this task + finished successfully, the ``result`` value is the result fetched from storage: + i.e. what the |task.execute| method returned. All other arguments are fetched from storage in the same way it is done for |task.execute| method.
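The two shapes ``result`` can take during revert can be simulated without taskflow at all. In this sketch ``Failure`` is a hypothetical stand-in for the library's failure type and ``execute_or_revert`` is a toy engine loop, not real taskflow code:

```python
class Failure:
    """Illustrative stand-in for taskflow's failure object."""

    def __init__(self, exc):
        self.exception_str = str(exc)


def execute_or_revert(task, **kwargs):
    # Sketch of what an engine does for one task: run execute(), and
    # if it raises, call revert() with a Failure as the ``result``.
    # (On a *successful* task reverted later, ``result`` would instead
    # be the value execute() returned, fetched back from storage.)
    try:
        return task.execute(**kwargs)
    except Exception as exc:
        task.revert(result=Failure(exc), **kwargs)
        raise


class BoomTask:
    def __init__(self):
        self.seen_result = None

    def execute(self, spam):
        raise RuntimeError('boom')

    def revert(self, result, spam):
        self.seen_result = result  # a Failure, since execute() raised


task = BoomTask()
try:
    execute_or_revert(task, spam='spam')
except RuntimeError:
    pass
print(task.seen_result.exception_str)  # boom
```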
-To determine if task failed you can check whether ``result`` is instance of -:py:class:`taskflow.utils.misc.Failure`:: +To determine if a task failed you can check whether ``result`` is an instance of +:py:class:`~taskflow.types.failure.Failure`:: - from taskflow.utils import misc + from taskflow.types import failure class RevertingTask(task.Task): @@ -360,55 +371,61 @@ To determine if task failed you can check whether ``result`` is instance of return do_something(spam, eggs) def revert(self, result, spam, eggs): - if isinstance(result, misc.Failure): + if isinstance(result, failure.Failure): print("This task failed, exception: %s" % result.exception_str) else: print("do_something returned %r" % result) -If this task failed (``do_something`` raised exception) it will print ``"This -task failed, exception:"`` and exception message on revert. If this task -finished successfully, it will print ``"do_something returned"`` and -representation of result. +If this task failed (i.e. ``do_something`` raised an exception) it will print +``"This task failed, exception:"`` and an exception message on revert. If this +task finished successfully, it will print ``"do_something returned"`` and a +representation of the ``do_something`` result. Retry arguments =============== -A Retry controller works with arguments in the same way as a Task. But it has -an additional parameter 'history' that is a list of tuples. Each tuple contains -a result of the previous Retry run and a table where a key is a failed task and -a value is a :py:class:`taskflow.utils.misc.Failure`. +A |Retry| controller works with arguments in the same way as a |Task|. But it +has an additional parameter ``'history'`` that is itself a +:py:class:`~taskflow.retry.History` object that contains what failed over all +the engine's attempts (aka the outcomes).
The history object can be +viewed as a tuple that contains a result of the previous retry's run and a +table/dict where each key is a failed atom's name and each value is +a :py:class:`~taskflow.types.failure.Failure` object. -Consider the following Retry:: +Consider the following implementation:: class MyRetry(retry.Retry): default_provides = 'value' def on_failure(self, history, *args, **kwargs): - print history + print(list(history)) return RETRY def execute(self, history, *args, **kwargs): - print history + print(list(history)) return 5 def revert(self, history, *args, **kwargs): - print history + print(list(history)) -Imagine the following Retry had returned a value '5' and then some task 'A' +Imagine the above retry had returned a value ``'5'`` and then some task ``'A'`` failed with some exception. In this case the ``on_failure`` method will receive -the following history:: +the following history (printed as a list):: - [('5', {'A': misc.Failure()})] + [('5', {'A': failure.Failure()})] -Then the |retry.execute| method will be called again and it'll receive the same -history. +At this point (since the implementation returned ``RETRY``) the +|retry.execute| method will be called again and it will receive the same +history and it can then return a value that subsequent tasks can use to alter +their behavior. -If the |retry.execute| method raises an exception, the |retry.revert| method of -Retry will be called and :py:class:`taskflow.utils.misc.Failure` object will be -present in the history instead of Retry result:: +If instead the |retry.execute| method itself raises an exception, +the |retry.revert| method of the implementation will be called and +a :py:class:`~taskflow.types.failure.Failure` object will be present in the +history object instead of the typical result. - [('5', {'A': misc.Failure()}), (misc.Failure(), {})] +.. note:: -After the Retry has been reverted, the Retry history will be cleaned.
+ After a |Retry| has been reverted, the object's history will be cleaned. diff --git a/doc/source/atoms.rst b/doc/source/atoms.rst index 85086346..f2b75ffa 100644 --- a/doc/source/atoms.rst +++ b/doc/source/atoms.rst @@ -1,5 +1,5 @@ ------------------------ -Atoms, Tasks and Retries +Atoms, tasks and retries ------------------------ Atom @@ -94,8 +94,8 @@ subclasses are provided: :py:class:`~taskflow.retry.ForEach` but extracts values from storage instead of the :py:class:`~taskflow.retry.ForEach` constructor. -Usage ------ +Examples +-------- .. testsetup:: diff --git a/doc/source/conductors.rst b/doc/source/conductors.rst index 25eb75c8..56fb0e0e 100644 --- a/doc/source/conductors.rst +++ b/doc/source/conductors.rst @@ -63,6 +63,10 @@ Interfaces ========== .. automodule:: taskflow.conductors.base + +Implementations +=============== + .. automodule:: taskflow.conductors.single_threaded Hierarchy diff --git a/doc/source/conf.py b/doc/source/conf.py index 3b0c35ce..9dec3b69 100644 --- a/doc/source/conf.py +++ b/doc/source/conf.py @@ -1,5 +1,6 @@ # -*- coding: utf-8 -*- +import datetime import os import sys @@ -13,7 +14,6 @@ extensions = [ 'sphinx.ext.doctest', 'sphinx.ext.extlinks', 'sphinx.ext.inheritance_diagram', - 'sphinx.ext.intersphinx', 'sphinx.ext.viewcode', 'oslosphinx' ] @@ -37,7 +37,7 @@ exclude_patterns = ['_build'] # General information about the project. project = u'TaskFlow' -copyright = u'2013-2014, OpenStack Foundation' +copyright = u'%s, OpenStack Foundation' % datetime.date.today().year source_tree = 'http://git.openstack.org/cgit/openstack/taskflow/tree' # If true, '()' will be appended to :func: etc. cross-reference text. @@ -56,6 +56,7 @@ modindex_common_prefix = ['taskflow.'] # Shortened external links.
extlinks = { 'example': (source_tree + '/taskflow/examples/%s.py', ''), + 'pybug': ('http://bugs.python.org/issue%s', ''), } # -- Options for HTML output -------------------------------------------------- @@ -82,9 +83,6 @@ latex_documents = [ 'OpenStack Foundation', 'manual'), ] -# Example configuration for intersphinx: refer to the Python standard library. -intersphinx_mapping = {'http://docs.python.org/': None} - # -- Options for autoddoc ---------------------------------------------------- # Keep source order diff --git a/doc/source/engines.rst b/doc/source/engines.rst index 752f9f0e..c9b0c5da 100644 --- a/doc/source/engines.rst +++ b/doc/source/engines.rst @@ -13,23 +13,23 @@ and uses it to decide which :doc:`atom ` to run and when. TaskFlow provides different implementations of engines. Some may be easier to use (ie, require no additional infrastructure setup) and understand; others might require more complicated setup but provide better scalability. The idea -and *ideal* is that deployers or developers of a service that uses TaskFlow can +and *ideal* is that deployers or developers of a service that uses TaskFlow can select an engine that suits their setup best without modifying the code of said service. Engines usually have different capabilities and configuration, but all of them **must** implement the same interface and preserve the semantics of patterns -(e.g. parts of :py:class:`linear flow ` -are run one after another, in order, even if engine is *capable* of running -tasks in parallel). +(e.g. parts of a :py:class:`.linear_flow.Flow` +are run one after another, in order, even if the selected engine is *capable* +of running tasks in parallel). Why they exist -------------- -An engine being the core component which actually makes your flows progress is -likely a new concept for many programmers so let's describe how it operates in -more depth and some of the reasoning behind why it exists.
This will hopefully -make it more clear on there value add to the TaskFlow library user. +An engine being *the* core component which actually makes your flows progress +is likely a new concept for many programmers, so let's describe how it operates +in more depth and some of the reasoning behind why it exists. This will +hopefully make it more clear what value an engine adds for the TaskFlow library user. First though let us discuss something most are familiar already with; the difference between `declarative`_ and `imperative`_ programming models. The @@ -48,15 +48,15 @@ more of a *pure* function that executes, reverts and may require inputs and provide outputs). This is where engines get involved; they do the execution of the *what* defined via :doc:`atoms `, tasks, flows and the relationships defined there-in and execute these in a well-defined manner (and the engine is -responsible for *most* of the state manipulation instead). +responsible for any state manipulation instead). This mix of imperative and declarative (with a stronger emphasis on the -declarative model) allows for the following functionality to be possible: +declarative model) allows for the following functionality to become possible: * Enhancing reliability: Decoupling of state alterations from what should be accomplished allows for a *natural* way of resuming by allowing the engine to - track the current state and know at which point a flow is in and how to get - back into that state when resumption occurs. + track the current state and know at which point a workflow is and how to + get back into that state when resumption occurs.
* Enhancing scalability: When an engine is responsible for executing your desired work it becomes possible to alter the *how* in the future by creating new types of execution backends (for example the worker model which does not @@ -83,13 +83,14 @@ Of course these kind of features can come with some drawbacks: away from (and this is likely a mindset change for programmers used to the imperative model). We have worked to make this less of a concern by creating and encouraging the usage of :doc:`persistence `, to help make - it possible to have some level of provided state transfer mechanism. + it possible to have state and transfer that state via an argument input and + output mechanism. * Depending on how much imperative code exists (and state inside that code) - there can be *significant* rework of that code and converting or refactoring - it to these new concepts. We have tried to help here by allowing you to have - tasks that internally use regular python code (and internally can be written - in an imperative style) as well as by providing examples and these developer - docs; helping this process be as seamless as possible. + there *may* be *significant* rework of that code and converting or + refactoring it to these new concepts. We have tried to help here by allowing + you to have tasks that internally use regular python code (and internally can + be written in an imperative style) as well as by providing + :doc:`examples ` that show how to use these concepts. * Another one of the downsides of decoupling the *what* from the *how* is that it may become harder to use traditional techniques to debug failures (especially if remote workers are involved). We try to help here by making it @@ -110,16 +111,16 @@ All engines are mere classes that implement the same interface, and of course it is possible to import them and create instances just like with any classes in Python. But the easier (and recommended) way for creating an engine is using the engine helper functions.
All of these functions are imported into the -`taskflow.engines` module namespace, so the typical usage of these functions +``taskflow.engines`` module namespace, so the typical usage of these functions might look like:: from taskflow import engines ... flow = make_flow() - engine = engines.load(flow, engine_conf=my_conf, - backend=my_persistence_conf) - engine.run + eng = engines.load(flow, engine='serial', backend=my_persistence_conf) + eng.run() + ... .. automodule:: taskflow.engines.helpers @@ -128,59 +129,74 @@ Usage ===== To select which engine to use and pass parameters to an engine you should use -the ``engine_conf`` parameter any helper factory function accepts. It may be: +the ``engine`` parameter that any engine helper function accepts and for any +engine-specific options use the ``kwargs`` parameter. -* a string, naming engine type; -* a dictionary, holding engine type with key ``'engine'`` and possibly - type-specific engine configuration parameters. +Types +===== -Single-Threaded ---------------- +Serial ------ **Engine type**: ``'serial'`` -Runs all tasks on the single thread -- the same thread `engine.run()` is called -on. This engine is used by default. +Runs all tasks on a single thread -- the same thread ``engine.run()`` is +called from. + +.. note:: + + This engine is used by default. .. tip:: If eventlet is used then this engine will not block other threads - from running as eventlet automatically creates a co-routine system (using - greenthreads and monkey patching). See `eventlet `_ - and `greenlet `_ for more details. + from running as eventlet automatically creates an implicit co-routine + system (using greenthreads and monkey patching). See + `eventlet `_ and + `greenlet `_ for more details. Parallel -------- **Engine type**: ``'parallel'`` -Parallel engine schedules tasks onto different threads to run them in parallel.
- -Additional supported keyword arguments: - -* ``executor``: a object that implements a :pep:`3148` compatible `executor`_ - interface; it will be used for scheduling tasks. You can use instances of a - `thread pool executor`_ or a :py:class:`green executor - ` (which internally uses - `eventlet `_ and greenthread pools). +A parallel engine schedules tasks onto different threads/processes to allow for +running non-dependent tasks simultaneously. See the documentation of +:py:class:`~taskflow.engines.action_engine.engine.ParallelActionEngine` for +supported arguments that can be used to construct a parallel engine that runs +using your desired execution model. .. tip:: - Sharing executor between engine instances provides better - scalability by reducing thread creation and teardown as well as by reusing - existing pools (which is a good practice in general). + Sharing an executor between engine instances provides better + scalability by reducing thread/process creation and teardown as well as by + reusing existing pools (which is a good practice in general). .. note:: - Running tasks with a `process pool executor`_ is not currently supported. + Running tasks with a `process pool executor`_ is **experimentally** + supported. This is mainly due to the `futures backport`_ and + the `multiprocessing`_ module that exist in older versions of python not + being as up to date (with important fixes such as :pybug:`4892`, + :pybug:`6721`, :pybug:`9205`, :pybug:`11635`, :pybug:`16284`, + :pybug:`22393` and others...) as the most recent python version (which + themselves have a variety of ongoing/recent bugs). -Worker-Based ------------- +Workers +------- -**Engine type**: ``'worker-based'`` +**Engine type**: ``'worker-based'`` or ``'workers'`` -For more information, please see :doc:`workers ` for more details on -how the worker based engine operates (and the design decisions behind it). +.. 
note:: Since this engine is significantly more complicated (and + different) than the others we thought it appropriate to devote a + whole documentation section to it. + +For further information, please refer to the following: + +.. toctree:: + :maxdepth: 2 + + workers How they run ============ @@ -241,6 +257,14 @@ object starts to take over and begins going through the stages listed below (for a more visual diagram/representation see the :ref:`engine state diagram `). +.. note:: + + The engine will respect the constraints imposed by the flow. For example, + if the engine is executing a :py:class:`.linear_flow.Flow` then it is + constrained by the dependency-graph which is linear in this case, and hence + using a parallel engine may not yield any benefits if one is looking for + concurrency. + Resumption ^^^^^^^^^^ @@ -265,13 +289,13 @@ Scheduling ^^^^^^^^^^ This stage selects which atoms are eligible to run by using a -:py:class:`~taskflow.engines.action_engine.runtime.Scheduler` implementation +:py:class:`~taskflow.engines.action_engine.scheduler.Scheduler` implementation (the default implementation looks at their intention, checking if predecessor atoms have run and so on, using a :py:class:`~taskflow.engines.action_engine.analyzer.Analyzer` helper object as needed) and submits those atoms to a previously provided compatible `executor`_ for asynchronous execution. This -:py:class:`~taskflow.engines.action_engine.runtime.Scheduler` will return a +:py:class:`~taskflow.engines.action_engine.scheduler.Scheduler` will return a `future`_ object for each atom scheduled; all of which are collected into a list of not done futures. This will end the initial round of scheduling and at this point the engine enters the :ref:`waiting ` stage. @@ -284,7 +308,7 @@ Waiting In this stage the engine waits for any of the future objects previously submitted to complete.
Once one of the future objects completes (or fails) that atom's result will be examined and finalized using a -:py:class:`~taskflow.engines.action_engine.runtime.Completer` implementation. +:py:class:`~taskflow.engines.action_engine.completer.Completer` implementation. It typically will persist results to a provided persistence backend (saved into the corresponding :py:class:`~taskflow.persistence.logbook.AtomDetail` and :py:class:`~taskflow.persistence.logbook.FlowDetail` objects) and reflect @@ -322,24 +346,33 @@ saved for this execution. Interfaces ========== +.. automodule:: taskflow.engines.base + +Implementations +=============== + .. automodule:: taskflow.engines.action_engine.analyzer .. automodule:: taskflow.engines.action_engine.compiler +.. automodule:: taskflow.engines.action_engine.completer .. automodule:: taskflow.engines.action_engine.engine +.. automodule:: taskflow.engines.action_engine.executor .. automodule:: taskflow.engines.action_engine.runner .. automodule:: taskflow.engines.action_engine.runtime -.. automodule:: taskflow.engines.base +.. automodule:: taskflow.engines.action_engine.scheduler +.. automodule:: taskflow.engines.action_engine.scopes Hierarchy ========= .. inheritance-diagram:: - taskflow.engines.base - taskflow.engines.action_engine.engine - taskflow.engines.worker_based.engine + taskflow.engines.action_engine.engine.ActionEngine + taskflow.engines.base.Engine + taskflow.engines.worker_based.engine.WorkerBasedActionEngine :parts: 1 +.. _multiprocessing: https://docs.python.org/2/library/multiprocessing.html .. _future: https://docs.python.org/dev/library/concurrent.futures.html#future-objects .. _executor: https://docs.python.org/dev/library/concurrent.futures.html#concurrent.futures.Executor .. _networkx: https://networkx.github.io/ -.. _thread pool executor: https://docs.python.org/dev/library/concurrent.futures.html#threadpoolexecutor +.. _futures backport: https://pypi.python.org/pypi/futures ..
_process pool executor: https://docs.python.org/dev/library/concurrent.futures.html#processpoolexecutor diff --git a/doc/source/examples.rst b/doc/source/examples.rst index 9199bc11..d30bd85f 100644 --- a/doc/source/examples.rst +++ b/doc/source/examples.rst @@ -1,3 +1,39 @@ +Hello world +=========== + +.. note:: + + Full source located at :example:`hello_world`. + +.. literalinclude:: ../../taskflow/examples/hello_world.py + :language: python + :linenos: + :lines: 16- + +Passing values from and to tasks +================================ + +.. note:: + + Full source located at :example:`simple_linear_pass`. + +.. literalinclude:: ../../taskflow/examples/simple_linear_pass.py + :language: python + :linenos: + :lines: 16- + +Using listeners +=============== + +.. note:: + + Full source located at :example:`echo_listener`. + +.. literalinclude:: ../../taskflow/examples/echo_listener.py + :language: python + :linenos: + :lines: 16- + Making phone calls ================== @@ -34,6 +70,42 @@ Building a car :linenos: :lines: 16- +Iterating over the alphabet (using processes) +============================================= + +.. note:: + + Full source located at :example:`alphabet_soup`. + +.. literalinclude:: ../../taskflow/examples/alphabet_soup.py + :language: python + :linenos: + :lines: 16- + +Watching execution timing +========================= + +.. note:: + + Full source located at :example:`timing_listener`. + +.. literalinclude:: ../../taskflow/examples/timing_listener.py + :language: python + :linenos: + :lines: 16- + +Table multiplier (in parallel) +============================== + +.. note:: + + Full source located at :example:`parallel_table_multiply` + +.. 
literalinclude:: ../../taskflow/examples/parallel_table_multiply.py + :language: python + :linenos: + :lines: 16- + Linear equation solver (explicit dependencies) ============================================== @@ -80,6 +152,18 @@ Creating a volume (in parallel) :linenos: :lines: 16- +Summation mapper(s) and reducer (in parallel) +============================================= + +.. note:: + + Full source located at :example:`simple_map_reduce` + +.. literalinclude:: ../../taskflow/examples/simple_map_reduce.py + :language: python + :linenos: + :lines: 16- + Storing & emitting a bill ========================= @@ -163,3 +247,50 @@ Distributed execution (simple) :language: python :linenos: :lines: 16- + +Distributed notification (simple) +================================= + +.. note:: + + Full source located at :example:`wbe_event_sender` + +.. literalinclude:: ../../taskflow/examples/wbe_event_sender.py + :language: python + :linenos: + :lines: 16- + +Distributed mandelbrot (complex) +================================ + +.. note:: + + Full source located at :example:`wbe_mandelbrot` + +Output +------ + +.. image:: img/mandelbrot.png + :height: 128px + :align: right + :alt: Generated mandelbrot fractal + +Code +---- + +.. literalinclude:: ../../taskflow/examples/wbe_mandelbrot.py + :language: python + :linenos: + :lines: 16- + +Jobboard producer/consumer (simple) +=================================== + +.. note:: + + Full source located at :example:`jobboard_produce_consume_colors` + +.. 
literalinclude:: ../../taskflow/examples/jobboard_produce_consume_colors.py + :language: python + :linenos: + :lines: 16- diff --git a/doc/source/img/engine_states.svg b/doc/source/img/engine_states.svg index 497c31ef..08a419d5 100644 --- a/doc/source/img/engine_states.svg +++ b/doc/source/img/engine_states.svg @@ -3,6 +3,6 @@ - -Engines statesRESUMINGSCHEDULINGWAITINGSUCCESSSUSPENDEDREVERTEDANALYZINGstart + +Engines statesGAME_OVERREVERTEDrevertedSUCCESSsuccessSUSPENDEDsuspendedFAILUREfailedUNDEFINEDRESUMINGstartSCHEDULINGschedule nextANALYZINGcompletedschedule nextWAITINGwait finishedwait finishedexamine finishedstart diff --git a/doc/source/img/flow_states.svg b/doc/source/img/flow_states.svg index c6d9825e..80bf1a0a 100644 --- a/doc/source/img/flow_states.svg +++ b/doc/source/img/flow_states.svg @@ -1,8 +1,8 @@ - - -Flow statesPENDINGRUNNINGRESUMINGFAILURESUCCESSREVERTEDSUSPENDINGSUSPENDEDstart + +Flow statesPENDINGRUNNINGFAILURESUSPENDINGREVERTEDSUCCESSRESUMINGSUSPENDEDstart diff --git a/doc/source/img/jobboard.png b/doc/source/img/jobboard.png new file mode 100644 index 00000000..87d1dc8f Binary files /dev/null and b/doc/source/img/jobboard.png differ diff --git a/doc/source/img/mandelbrot.png b/doc/source/img/mandelbrot.png new file mode 100644 index 00000000..6dc26ee5 Binary files /dev/null and b/doc/source/img/mandelbrot.png differ diff --git a/doc/source/img/retry_states.svg b/doc/source/img/retry_states.svg index 014516e0..8b0c6357 100644 --- a/doc/source/img/retry_states.svg +++ b/doc/source/img/retry_states.svg @@ -1,8 +1,8 @@ - - -Retries statesPENDINGRUNNINGFAILURESUCCESSREVERTINGRETRYINGREVERTEDstart + +Retries statesPENDINGRUNNINGSUCCESSFAILURERETRYINGREVERTINGREVERTEDstart diff --git a/doc/source/img/task_states.svg b/doc/source/img/task_states.svg index f40501ac..14a1f098 100644 --- a/doc/source/img/task_states.svg +++ b/doc/source/img/task_states.svg @@ -1,8 +1,8 @@ - - -Tasks statesPENDINGRUNNINGFAILURESUCCESSREVERTINGREVERTEDstart + +Tasks 
statesPENDINGRUNNINGSUCCESSFAILUREREVERTINGREVERTEDstart diff --git a/doc/source/img/wbe_request_states.svg b/doc/source/img/wbe_request_states.svg new file mode 100644 index 00000000..da2c9d30 --- /dev/null +++ b/doc/source/img/wbe_request_states.svg @@ -0,0 +1,8 @@ + + + + + +WBE requests statesWAITINGPENDINGFAILURERUNNINGSUCCESSstart + diff --git a/doc/source/index.rst b/doc/source/index.rst index 3e9326b6..7ab0fedd 100644 --- a/doc/source/index.rst +++ b/doc/source/index.rst @@ -14,7 +14,7 @@ Contents ======== .. toctree:: - :maxdepth: 2 + :maxdepth: 3 atoms arguments_and_results @@ -29,11 +29,6 @@ Contents jobs conductors -.. toctree:: - :hidden: - - workers - Examples -------- @@ -70,13 +65,9 @@ TaskFlow into your project: ``[TaskFlow]`` to your emails subject to get an even faster response). * Follow (or at least attempt to follow) some of the established `best practices`_ (feel free to add your own suggested best practices). - -.. warning:: - - External usage of internal helpers and other internal utility functions - and modules should be kept to a *minimum* as these may be altered, - refactored or moved *without* notice. If you are unsure whether to use - a function, class, or module, please ask (see above). +* Keep in touch with the team (see above); we are all friendly and enjoy + knowing your use cases and learning how we can help make your lives easier + by adding or adjusting functionality in this library. .. _IRC: irc://chat.freenode.net/openstack-state-management .. 
_best practices: http://wiki.openstack.org/wiki/TaskFlow/Best_practices @@ -91,6 +82,8 @@ Miscellaneous exceptions states + types + utils Indices and tables ================== diff --git a/doc/source/inputs_and_outputs.rst b/doc/source/inputs_and_outputs.rst index 34fb1bad..7c8de762 100644 --- a/doc/source/inputs_and_outputs.rst +++ b/doc/source/inputs_and_outputs.rst @@ -1,11 +1,11 @@ ================== -Inputs and Outputs +Inputs and outputs ================== In TaskFlow there are multiple ways to provide inputs for your tasks and flows and get information from them. This document describes one of them, that involves task arguments and results. There are also :doc:`notifications -`, which allow you to get notified when task or flow changed +`, which allow you to get notified when a task or flow changes state. You may also opt to use the :doc:`persistence ` layer itself directly. @@ -19,15 +19,16 @@ This is the standard and recommended way to pass data from one task to another. Of course not every task argument needs to be provided to some other task of a flow, and not every task result should be consumed by every task. -If some value is required by one or more tasks of a flow, but is not provided -by any task, it is considered to be flow input, and **must** be put into the -storage before the flow is run. A set of names required by a flow can be -retrieved via that flow's ``requires`` property. These names can be used to +If some value is required by one or more tasks of a flow, but it is not +provided by any task, it is considered to be flow input, and **must** be put +into the storage before the flow is run. A set of names required by a flow can +be retrieved via that flow's ``requires`` property. These names can be used to determine what names may be applicable for placing in storage ahead of time and which names are not applicable. 
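The ``requires``/``provides`` relationship described above can be sketched in plain Python: a name is a flow-level input exactly when some task requires it but no task in the flow provides it. This is a stdlib-only illustration of the idea (not TaskFlow's actual implementation); the task tuples mirror the ``MyTask(requires=..., provides=...)`` doctest that follows.

```python
# Stdlib-only sketch: derive a flow-level ``requires`` set from per-task
# (requires, provides) pairs. A symbol is a flow input when some task
# requires it but no task in the flow provides it.
# (Illustrative only -- not TaskFlow's implementation.)

def flow_requires(tasks):
    """tasks: iterable of (requires, provides) set pairs."""
    required = set()
    provided = set()
    for requires, provides in tasks:
        required |= set(requires)
        provided |= set(provides)
    return frozenset(required - provided)

tasks = [
    ({'a'}, {'c'}),   # like MyTask(requires='a', provides='c')
    ({'a'}, {'b'}),   # like MyTask(requires='a', provides='b')
    ({'b'}, {'d'}),   # like MyTask(requires='b', provides='d')
]
print(flow_requires(tasks))  # frozenset({'a'})
```

Note how ``'b'`` is not a flow input even though a task requires it, because another task provides it; this matches the ``frozenset(['a'])`` result shown in the doctest.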
All values provided by tasks of the flow are considered to be flow outputs; the
-set of names of such values is available via ``provides`` property of the flow.
+set of names of such values is available via the ``provides`` property of the
+flow. .. testsetup:: @@ -49,7 +50,7 @@ For example: ... MyTask(requires='b', provides='d') ... )
>>> flow.requires - set(['a']) + frozenset(['a']) >>> sorted(flow.provides) ['b', 'c', 'd']
@@ -59,8 +60,10 @@ As you can see, this flow does not require b, as it is provided by the first
task. .. note:: - There is no difference between processing of Task and Retry inputs - and outputs.
+ + There is no difference between processing of + :py:class:`Task ` and
+ :py:class:`~taskflow.retry.Retry` inputs and outputs. ------------------ Engine and storage
------------------ @@ -146,8 +149,10 @@ Outputs As you can see from examples above, the run method
returns all flow outputs in a ``dict``. This same data can be fetched via
-:py:meth:`~taskflow.storage.Storage.fetch_all` method of the storage. You can
-also get single results using :py:meth:`~taskflow.storage.Storage.fetch`.
+:py:meth:`~taskflow.storage.Storage.fetch_all` method of the engine's storage
+object. You can also get single results using the
+engine's storage object's :py:meth:`~taskflow.storage.Storage.fetch` method. + For example:
.. doctest:: diff --git a/doc/source/jobs.rst b/doc/source/jobs.rst index 048a66ea..06f1123e 100644
--- a/doc/source/jobs.rst +++ b/doc/source/jobs.rst @@ -28,14 +28,14 @@ Definitions =========== Jobs
- A :py:class:`job ` consists of a unique identifier, + A :py:class:`job ` consists of a unique
identifier, name, and a reference to a :py:class:`logbook ` which contains the details of the work
that has been or should be/will be completed to finish the work that has been created for that
job. Jobboards - A :py:class:`jobboard ` is responsible for + A :py:class:`jobboard ` is
responsible for
It acts as the location where jobs can be posted, claimed and searched for; typically by
iteration or notification. Jobboards may be backed by different *capable* @@ -45,6 +45,13 @@
Jobboards service that uses TaskFlow to select a jobboard implementation that fits their setup
(and their intended usage) best. +High level architecture +======================= +
+.. image:: img/jobboard.png + :height: 350px + :align: right + Features ========
@@ -157,10 +164,13 @@ might look like: else: # I finished it, now cleanup. board.consume(my_job)
- persistence.destroy_logbook(my_job.book.uuid)
+ persistence.get_connection().destroy_logbook(my_job.book.uuid) time.sleep(coffee_break_time) ...
+Types +===== + Zookeeper --------- @@ -192,6 +202,11 @@ Additional *configuration* parameters:
when your program uses eventlet and you want to instruct kazoo to use an eventlet compatible
handler (such as the `eventlet handler`_). +.. note:: + + See
:py:class:`~taskflow.jobs.backends.impl_zookeeper.ZookeeperJobBoard` + for implementation details.
+ Considerations ============== @@ -244,9 +259,21 @@ the claim by then, therefore both would be
*working* on a job. Interfaces ========== +.. automodule:: taskflow.jobs.base
.. automodule:: taskflow.jobs.backends -.. automodule:: taskflow.jobs.job
-.. automodule:: taskflow.jobs.jobboard + +Implementations +=============== +
+.. automodule:: taskflow.jobs.backends.impl_zookeeper + +Hierarchy +========= +
+.. inheritance-diagram:: + taskflow.jobs.base + taskflow.jobs.backends.impl_zookeeper
+ :parts: 1 .. _paradigm shift: https://wiki.openstack.org/wiki/TaskFlow/Paradigm_shifts#Workflow_ownership_transfer .. 
_zookeeper: http://zookeeper.apache.org/ diff --git a/doc/source/notifications.rst b/doc/source/notifications.rst index 3fe430de..c0dbe4e2 100644 --- a/doc/source/notifications.rst +++ b/doc/source/notifications.rst @@ -1,5 +1,5 @@ =========================== -Notifications and Listeners +Notifications and listeners =========================== .. testsetup:: @@ -7,6 +7,8 @@ Notifications and Listeners from taskflow import task from taskflow.patterns import linear_flow from taskflow import engines + from taskflow.types import notifier + ANY = notifier.Notifier.ANY -------- Overview @@ -17,10 +19,9 @@ transitions, which is useful for monitoring, logging, metrics, debugging and plenty of other tasks. To receive these notifications you should register a callback with -an instance of the the :py:class:`notifier ` -class that is attached -to :py:class:`engine ` -attributes ``task_notifier`` and ``notifier``. +an instance of the :py:class:`~taskflow.types.notifier.Notifier` +class that is attached to :py:class:`~taskflow.engines.base.Engine` +attributes ``atom_notifier`` and ``notifier``. TaskFlow also comes with a set of predefined :ref:`listeners `, and provides means to write your own listeners, which can be more convenient than @@ -30,17 +31,14 @@ using raw callbacks. Receiving notifications with callbacks -------------------------------------- -To manage notifications instances of -:py:class:`~taskflow.utils.misc.Notifier` are used. - -.. autoclass:: taskflow.utils.misc.Notifier - Flow notifications ------------------ -To receive notification on flow state changes use -:py:class:`~taskflow.utils.misc.Notifier` available as -``notifier`` property of the engine. A basic example is: +To receive notification on flow state changes use the +:py:class:`~taskflow.types.notifier.Notifier` instance available as the +``notifier`` property of an engine. + +A basic example is: .. 
doctest:: @@ -61,7 +59,7 @@ To receive notification on flow state changes use >>> flo = linear_flow.Flow("cat-dog").add( ... CatTalk(), DogTalk(provides="dog")) >>> eng = engines.load(flo, store={'meow': 'meow', 'woof': 'woof'}) - >>> eng.notifier.register("*", flow_transition) + >>> eng.notifier.register(ANY, flow_transition) >>> eng.run() Flow 'cat-dog' transition to state RUNNING meow @@ -71,9 +69,11 @@ To receive notification on flow state changes use Task notifications ------------------ -To receive notification on task state changes use -:py:class:`~taskflow.utils.misc.Notifier` available as -``task_notifier`` property of the engine. A basic example is: +To receive notification on task state changes use the +:py:class:`~taskflow.types.notifier.Notifier` instance available as the +``atom_notifier`` property of an engine. + +A basic example is: .. doctest:: @@ -95,7 +95,7 @@ To receive notification on task state changes use >>> flo.add(CatTalk(), DogTalk(provides="dog")) >>> eng = engines.load(flo, store={'meow': 'meow', 'woof': 'woof'}) - >>> eng.task_notifier.register("*", task_transition) + >>> eng.task_notifier.register(ANY, task_transition) >>> eng.run() Task 'CatTalk' transition to state RUNNING meow @@ -138,30 +138,53 @@ For example, this is how you can use >>> with printing.PrintingListener(eng): ... eng.run() ... - taskflow.engines.action_engine.engine.SingleThreadedActionEngine: ... has moved flow 'cat-dog' (...) into state 'RUNNING' - taskflow.engines.action_engine.engine.SingleThreadedActionEngine: ... has moved task 'CatTalk' (...) into state 'RUNNING' + has moved flow 'cat-dog' (...) into state 'RUNNING' from state 'PENDING' + has moved task 'CatTalk' (...) into state 'RUNNING' from state 'PENDING' meow - taskflow.engines.action_engine.engine.SingleThreadedActionEngine: ... has moved task 'CatTalk' (...) into state 'SUCCESS' with result 'cat' (failure=False) - taskflow.engines.action_engine.engine.SingleThreadedActionEngine: ... 
has moved task 'DogTalk' (...) into state 'RUNNING' + has moved task 'CatTalk' (...) into state 'SUCCESS' from state 'RUNNING' with result 'cat' (failure=False) + has moved task 'DogTalk' (...) into state 'RUNNING' from state 'PENDING' woof - taskflow.engines.action_engine.engine.SingleThreadedActionEngine: ... has moved task 'DogTalk' (...) into state 'SUCCESS' with result 'dog' (failure=False) - taskflow.engines.action_engine.engine.SingleThreadedActionEngine: ... has moved flow 'cat-dog' (...) into state 'SUCCESS' + has moved task 'DogTalk' (...) into state 'SUCCESS' from state 'RUNNING' with result 'dog' (failure=False) + has moved flow 'cat-dog' (...) into state 'SUCCESS' from state 'RUNNING' Basic listener -------------- -.. autoclass:: taskflow.listeners.base.ListenerBase +.. autoclass:: taskflow.listeners.base.Listener Printing and logging listeners ------------------------------ -.. autoclass:: taskflow.listeners.base.LoggingBase +.. autoclass:: taskflow.listeners.base.DumpingListener .. autoclass:: taskflow.listeners.logging.LoggingListener +.. autoclass:: taskflow.listeners.logging.DynamicLoggingListener + .. autoclass:: taskflow.listeners.printing.PrintingListener Timing listener --------------- .. autoclass:: taskflow.listeners.timing.TimingListener + +.. autoclass:: taskflow.listeners.timing.PrintingTimingListener + +Claim listener +-------------- + +.. autoclass:: taskflow.listeners.claims.CheckingClaimListener + +Hierarchy +--------- + +.. 
inheritance-diagram:: + taskflow.listeners.base.DumpingListener + taskflow.listeners.base.Listener + taskflow.listeners.claims.CheckingClaimListener + taskflow.listeners.logging.DynamicLoggingListener + taskflow.listeners.logging.LoggingListener + taskflow.listeners.printing.PrintingListener + taskflow.listeners.timing.PrintingTimingListener + taskflow.listeners.timing.TimingListener + :parts: 1 diff --git a/doc/source/persistence.rst b/doc/source/persistence.rst index 022773e5..9cb99896 100644 --- a/doc/source/persistence.rst +++ b/doc/source/persistence.rst @@ -38,7 +38,7 @@ How it is used On :doc:`engine ` construction typically a backend (it can be optional) will be provided which satisfies the -:py:class:`~taskflow.persistence.backends.base.Backend` abstraction. Along with +:py:class:`~taskflow.persistence.base.Backend` abstraction. Along with providing a backend object a :py:class:`~taskflow.persistence.logbook.FlowDetail` object will also be created and provided (this object will contain the details about the flow to be @@ -55,7 +55,7 @@ interface to the underlying backend storage objects (it provides helper functions that are commonly used by the engine, avoiding repeating code when interacting with the provided :py:class:`~taskflow.persistence.logbook.FlowDetail` and -:py:class:`~taskflow.persistence.backends.base.Backend` objects). As an engine +:py:class:`~taskflow.persistence.base.Backend` objects). As an engine initializes it will extract (or create) :py:class:`~taskflow.persistence.logbook.AtomDetail` objects for each atom in the workflow the engine will be executing. @@ -72,7 +72,7 @@ predecessor :py:class:`~taskflow.persistence.logbook.AtomDetail` outputs and states (which may have been persisted in a past run). 
This will result in either using their previous information or by running those predecessors
and saving their output to the :py:class:`~taskflow.persistence.logbook.FlowDetail`
-and :py:class:`~taskflow.persistence.backends.base.Backend` objects. This
+and :py:class:`~taskflow.persistence.base.Backend` objects. This execution, analysis and
interaction with the storage objects continues (what is described here is a simplification of
what really happens, which is quite a bit more complex) until the engine has finished running
(at which point the engine @@ -144,6 +144,9 @@ the following: ``'connection'`` and possibly
type-specific backend parameters as other keys. +Types +===== + Memory ------
@@ -152,6 +155,11 @@ Memory Retains all data in local memory (not persisted to reliable
storage). Useful for scenarios where persistence is not required (and also in unit tests).
+.. note:: + + See :py:class:`~taskflow.persistence.backends.impl_memory.MemoryBackend`
+ for implementation details. + Files ----- @@ -163,6 +171,11 @@ from the same local machine
only). Useful for cases where a *more* reliable persistence is desired along with the
simplicity of files and directories (a concept everyone is familiar with). +.. note:: + + See
:py:class:`~taskflow.persistence.backends.impl_dir.DirBackend` + for implementation details. +
Sqlalchemy ---------- @@ -174,9 +187,62 @@ Useful when you need a higher level of durability
than offered by the previous solutions. When using these connection types it is possible to
resume an engine from a peer machine (this does not apply when using sqlite).
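The ``'connection'``-keyed configuration dictionary described above can be illustrated with a small stdlib-only dispatcher that picks a backend by URI scheme. This is a sketch of the concept only: the registry and the returned class names below are hypothetical stand-ins, and the real lookup in TaskFlow is entrypoint-based (see ``taskflow.persistence.backends``).

```python
# Sketch: pick a persistence backend from a ``{'connection': ...}`` config
# by URI scheme, mimicking how pluggable backends are commonly looked up.
# The BACKENDS registry here is hypothetical -- the real TaskFlow lookup
# goes through entrypoints (taskflow.persistence.backends.fetch).
from urllib.parse import urlparse

BACKENDS = {
    'memory': 'MemoryBackend',
    'file': 'DirBackend',
    'sqlite': 'SQLAlchemyBackend',
    'mysql': 'SQLAlchemyBackend',
    'postgresql': 'SQLAlchemyBackend',
    'zookeeper': 'ZkBackend',
}

def fetch_backend_name(conf):
    """Return the backend name selected by the 'connection' URI scheme."""
    if 'connection' not in conf:
        raise ValueError("a 'connection' key is required")
    scheme = urlparse(conf['connection']).scheme or conf['connection']
    try:
        return BACKENDS[scheme]
    except KeyError:
        raise ValueError('no backend registered for scheme %r' % scheme)

print(fetch_backend_name({'connection': 'mysql://user:pass@host/db'}))
# -> SQLAlchemyBackend
```

Type-specific parameters (pool sizes, kazoo handlers and so on) would simply travel along as other keys of the same dictionary, which is why the interface asks for a dictionary rather than a bare URI.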
+Schema +^^^^^^ + +*Logbooks* + +========== ======== ============= +Name Type Primary Key
+========== ======== ============= +created_at DATETIME False +updated_at DATETIME False
+uuid VARCHAR True +name VARCHAR False +meta TEXT False +========== ======== ============= +
+*Flow details* + +=========== ======== ============= +Name Type Primary Key
+=========== ======== ============= +created_at DATETIME False +updated_at DATETIME False
+uuid VARCHAR True +name VARCHAR False +meta TEXT False +state VARCHAR False
+parent_uuid VARCHAR False +=========== ======== ============= + +*Atom details* +
+=========== ======== ============= +Name Type Primary Key +=========== ======== =============
+created_at DATETIME False +updated_at DATETIME False +uuid VARCHAR True +name VARCHAR False
+meta TEXT False +atom_type VARCHAR False +state VARCHAR False +intention VARCHAR False
+results TEXT False +failure TEXT False +version TEXT False +parent_uuid VARCHAR False
+=========== ======== ============= + .. _sqlalchemy: http://www.sqlalchemy.org/docs/
.. _ACID: https://en.wikipedia.org/wiki/ACID +.. note:: + + See
:py:class:`~taskflow.persistence.backends.impl_sqlalchemy.SQLAlchemyBackend`
+ for implementation details. + Zookeeper --------- @@ -190,6 +256,11 @@ logbook represented
as znodes. Since zookeeper is also distributed it is also able to resume an engine from a peer
machine (having similar functionality as the database connection types listed previously).
+.. note:: + + See :py:class:`~taskflow.persistence.backends.impl_zookeeper.ZkBackend`
+ for implementation details. + .. _zookeeper: http://zookeeper.apache.org
.. _kazoo: http://kazoo.readthedocs.org/ @@ -197,15 +268,24 @@ Interfaces ==========
.. automodule:: taskflow.persistence.backends -.. automodule:: taskflow.persistence.backends.base
+.. automodule:: taskflow.persistence.base .. automodule:: taskflow.persistence.logbook
+Implementations +=============== + +.. automodule:: taskflow.persistence.backends.impl_dir +.. 
automodule:: taskflow.persistence.backends.impl_memory
+.. automodule:: taskflow.persistence.backends.impl_sqlalchemy
+.. automodule:: taskflow.persistence.backends.impl_zookeeper + Hierarchy =========
.. inheritance-diagram:: - taskflow.persistence.backends.impl_memory
- taskflow.persistence.backends.impl_zookeeper + taskflow.persistence.base
taskflow.persistence.backends.impl_dir + taskflow.persistence.backends.impl_memory
taskflow.persistence.backends.impl_sqlalchemy + taskflow.persistence.backends.impl_zookeeper
:parts: 2 diff --git a/doc/source/resumption.rst b/doc/source/resumption.rst index
8ddd4e95..3be864f6 100644 --- a/doc/source/resumption.rst +++ b/doc/source/resumption.rst
@@ -88,7 +88,7 @@ The following scenarios explain some expected structural changes and how
they can be accommodated (and what the effect will be when resuming & running). Same atoms
---------- ++++++++++ When the factory function mentioned above returns the exact same flow
and atoms (no changes are performed). @@ -98,7 +98,7 @@ atoms with
:py:class:`~taskflow.persistence.logbook.AtomDetail` objects by name and then the engine
resumes. Atom was added -------------- ++++++++++++++ When the factory function mentioned
above alters the flow by adding a new atom (for example for changing the runtime structure of
what was previously run @@ -109,7 +109,7 @@ corresponding
:py:class:`~taskflow.persistence.logbook.AtomDetail` does not exist and one will be created
and associated. Atom was removed ---------------- ++++++++++++++++ When the factory function
mentioned above alters the flow by removing an atom (for example for changing the runtime
structure of what was previously @@ -121,7 +121,7 @@ it was not there, and any results it
returned if it was completed before will be ignored.
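The name-based matching behavior described in the sections above (reuse the stored atom-detail for atoms that existed before, create fresh details for added atoms, ignore details of removed atoms) can be sketched with a small dictionary model. This is an illustrative stdlib-only model, not TaskFlow's internal code; the atom names below are hypothetical.

```python
# Sketch of resumption matching: atoms from a freshly built flow are
# matched to previously stored atom-detail records *by name*. Added atoms
# get fresh records; records of removed atoms are left behind (ignored).
# (Illustrative model only -- not TaskFlow's implementation.)

def reconcile(flow_atom_names, stored_states):
    """stored_states: dict of atom name -> saved state from a prior run."""
    details = {}
    for name in flow_atom_names:
        # Reuse the prior record if the atom existed before, else start fresh.
        details[name] = stored_states.get(name, 'PENDING')
    ignored = set(stored_states) - set(flow_atom_names)
    return details, ignored

stored = {'fetch': 'SUCCESS', 'transform': 'FAILURE', 'legacy': 'SUCCESS'}
details, ignored = reconcile(['fetch', 'transform', 'publish'], stored)
print(details)   # 'publish' was added, so it starts out PENDING
print(ignored)   # {'legacy'} -- removed from the flow, so its record is ignored
```

On resume, the engine would then skip ``fetch`` (already successful), revisit ``transform`` and run ``publish`` from scratch, which is exactly the behavior the scenarios above describe.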
Atom code was changed ---------------------- ++++++++++++++++++++++ When the factory function
mentioned above alters the flow by deciding that a newer version of a previously existing atom
should be run (possibly to perform @@ -137,8 +137,8 @@ ability to upgrade atoms before running
(manual introspection & modification of a :py:class:`~taskflow.persistence.logbook.LogBook`
can be done before engine loading and running to accomplish this in the meantime).
-Atom was split in two atoms or merged from two (or more) to one atom
-------------------------------------------------------------------- +Atom was split in two
atoms or merged ++++++++++++++++++++++++++++++++++++++ When the factory function mentioned
above alters the flow by deciding that a previously existing atom should be split into N atoms
or the factory function @@ -154,7 +154,7 @@ introspection & modification of a loading and
running to accomplish this in the meantime). Flow structure was changed --------------------------
++++++++++++++++++++++++++ If manual links were added or removed from the graph, or task
requirements were changed, or flow was refactored (atom moved into or out of subflows, linear
diff --git a/doc/source/states.rst b/doc/source/states.rst index 02fcaf15..bba8d203 100644
--- a/doc/source/states.rst +++ b/doc/source/states.rst @@ -4,12 +4,21 @@ States
.. _engine states: +.. note:: + + The code contains explicit checks during transitions using the
models + described below. These checks ensure that a transition is valid; if the + transition
is determined to be invalid the transitioning code will raise + a
:py:class:`~taskflow.exceptions.InvalidState` exception. This exception + being triggered
usually means there is some kind of bug in the code or some + type of misuse/state violation
is occurring, and should be reported as such. + Engine ====== .. 
image:: img/engine_states.svg :width: 660px - :align: left + :align: center :alt: Action
engine state transitions **RESUMING** - Prepares flow & atoms to be resumed.
@@ -22,135 +31,166 @@ Engine **SUCCESS** - Completed successfully. +**FAILURE** - Completed
unsuccessfully. + **REVERTED** - Reverting was induced and all atoms were **not** completed
successfully. **SUSPENDED** - Suspended while running. +**UNDEFINED** - *Internal state.* +
+**GAME_OVER** - *Internal state.* + Flow ==== .. image:: img/flow_states.svg :width: 660px
- :align: left + :align: center :alt: Flow state transitions -**PENDING** - A flow starts its
life in this state. +**PENDING** - A flow starts its execution lifecycle in this state (it has no
+state prior to being run by an engine, since flows are just patterns
+that define the semantics and ordering of their contents and flows gain
+state only when they are executed). -**RUNNING** - In this state flow makes a progress,
executes and/or reverts its -atoms. +**RUNNING** - In this state the engine running a flow
progresses through the +flow. -**SUCCESS** - Once all atoms have finished successfully the
flow transitions to -the SUCCESS state. +**SUCCESS** - Transitioned to once all of the flow's
atoms have finished +successfully. -**REVERTED** - The flow transitions to this state when it
has been reverted -successfully after the failure. +**REVERTED** - Transitioned to once all of
the flow's atoms have been reverted +successfully after a failure. -**FAILURE** - The flow
transitions to this state when it can not be reverted -after the failure. +**FAILURE** - The
engine will transition the flow to this state when it can not +be reverted after a single
failure or after multiple failures (greater than +one failure *may* occur when running in
parallel). -**SUSPENDING** - In the RUNNING state the flow can be suspended. When this
-happens, flow transitions to the SUSPENDING state immediately.
In that state -the engine running the flow waits for running atoms to finish (since the engine -can not preempt atoms that are active). +**SUSPENDING** - In the ``RUNNING`` state the engine running the flow can be +suspended. When this happens, the engine attempts to transition the flow +to the ``SUSPENDING`` state immediately. In that state the engine running the +flow waits for running atoms to finish (since the engine can not preempt +atoms that are actively running). -**SUSPENDED** - When no atoms are running and all results received so far are -saved, the flow transitions from the SUSPENDING state to SUSPENDED. Also it may -go to the SUCCESS state if all atoms were in fact ran, or to the REVERTED state -if the flow was reverting and all atoms were reverted while the engine was -waiting for running atoms to finish, or to the FAILURE state if atoms were run -or reverted and some of them failed. - -**RESUMING** - When the flow is interrupted 'in a hard way' (e.g. server -crashed), it can be loaded from storage in any state. If the state is not -PENDING (aka, the flow was never ran) or SUCCESS, FAILURE or REVERTED (in which -case the flow has already finished), the flow gets set to the RESUMING state -for the short time period while it is being loaded from backend storage [a -database, a filesystem...] (this transition is not shown on the diagram). When -the flow is finally loaded, it goes to the SUSPENDED state. - -From the SUCCESS, FAILURE or REVERTED states the flow can be ran again (and -thus it goes back into the RUNNING state). One of the possible use cases for -this transition is to allow for alteration of a flow or flow details associated -with a previously ran flow after the flow has finished, and client code wants -to ensure that each atom from this new (potentially updated) flow has its -chance to run. 
+**SUSPENDED** - When no atoms are running and all results received so far have +been saved,
the engine transitions the flow from the ``SUSPENDING`` state +to the ``SUSPENDED`` state.
.. note:: - The current code also contains strong checks during each flow state - transition
using the model described above and raises the - :py:class:`~taskflow.exceptions.InvalidState`
exception if an invalid - transition is attempted. This exception being triggered usually
means there - is some kind of bug in the engine code or some type of misuse/state violation
- is occurring, and should be reported as such. + The engine may transition the flow to the
``SUCCESS`` state (from the + ``SUSPENDING`` state) if all atoms were in fact running (and
completed) + before the suspension request was able to be honored (this is due to the lack
+ of preemption) or to the ``REVERTED`` state if the engine was reverting and + all atoms were
reverted while the engine was waiting for running atoms to + finish or to the ``FAILURE``
state if atoms were running or reverted and + some of them had failed. +**RESUMING** - When
the engine running a flow is interrupted *'in a +hard way'* (e.g. server crashed), it can be
loaded from storage in *any* +state (this is required since it can not be known what state was
last +successfully saved). If the loaded state is not ``PENDING`` (aka, the flow was
+never run) or ``SUCCESS``, ``FAILURE`` or ``REVERTED`` (in which case the flow +has already
finished), the flow gets set to the ``RESUMING`` state for the +short time period while it is
being loaded from backend storage [a database, a +filesystem...] (this transition is not shown
on the diagram). When the flow is +finally loaded, it goes to the ``SUSPENDED`` state. + +From
the ``SUCCESS``, ``FAILURE`` or ``REVERTED`` states the flow can be run +again; therefore it
is allowable to go back into the ``RUNNING`` state +immediately.
One of the possible use cases for this transition is to allow for +alteration of a flow or
flow details associated with a previously run flow +after the flow has finished, and client
code wants to ensure that each atom +from this new (potentially updated) flow has its chance
to run. Task ==== .. image:: img/task_states.svg :width: 660px - :align: left
+ :align: center :alt: Task state transitions -**PENDING** - When a task is added to a flow,
it starts in the PENDING state, -which means it can be executed immediately or waits for all
of task it depends -on to complete. The task transitions to the PENDING state after it was
-reverted and its flow was restarted or retried. +**PENDING** - A task starts its execution
lifecycle in this state (it has no +state prior to being run by an engine, since task(s) are
just objects that +represent how to accomplish a piece of work). Once it has been transitioned
to +the ``PENDING`` state by the engine this means it can be executed immediately +or if
needed will wait for all of the atoms it depends on to complete. -**RUNNING** - When flow
starts to execute the task, it transitions to the -RUNNING state, and stays in this state
until its -:py:meth:`execute() ` method returns. .. note:: -**SUCCESS** - The task transitions
to this state after it was finished -successfully. + An engine running a task also transitions
the task to the ``PENDING`` state + after it was reverted and its containing flow was
restarted or retried. -**FAILURE** - The task transitions to this state after it was finished
with -error. When the flow containing this task is being reverted, all its tasks are -walked
in particular order. +**RUNNING** - When an engine running the task starts to execute the
task, the +engine will transition the task to the ``RUNNING`` state, and the task will +stay
in this state until the task's :py:meth:`~taskflow.task.BaseTask.execute` +method returns.
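The task lifecycle described above, together with the note that engines explicitly validate each transition, can be modeled as a tiny state machine whose transition table is read directly off the description (PENDING→RUNNING, RUNNING→SUCCESS/FAILURE, SUCCESS/FAILURE→REVERTING, REVERTING→REVERTED, REVERTED→PENDING on restart/retry). A stdlib-only sketch, where ``InvalidState`` is a local stand-in for ``taskflow.exceptions.InvalidState``:

```python
# Minimal model of the task state transitions described above, with the
# explicit validity check an engine performs. InvalidState here stands in
# for taskflow.exceptions.InvalidState (illustrative, not TaskFlow code).

class InvalidState(Exception):
    pass

ALLOWED = {
    'PENDING': {'RUNNING'},
    'RUNNING': {'SUCCESS', 'FAILURE'},
    'SUCCESS': {'REVERTING'},
    'FAILURE': {'REVERTING'},
    'REVERTING': {'REVERTED', 'FAILURE'},  # a failed revert() -> FAILURE
    'REVERTED': {'PENDING'},               # restart/retry resets the task
}

class TaskDetail:
    def __init__(self):
        self.state = 'PENDING'

    def transition(self, new_state):
        if new_state not in ALLOWED.get(self.state, set()):
            raise InvalidState('%s -> %s' % (self.state, new_state))
        self.state = new_state

td = TaskDetail()
for state in ('RUNNING', 'SUCCESS', 'REVERTING', 'REVERTED', 'PENDING'):
    td.transition(state)
print(td.state)  # PENDING

try:
    td.transition('SUCCESS')  # PENDING -> SUCCESS is not a legal jump
except InvalidState as exc:
    print('rejected:', exc)
```

Walking the happy path and then attempting an illegal jump shows both halves of the contract: valid transitions mutate the recorded state, invalid ones raise instead of silently corrupting it.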
-**REVERTING** - The task transitions to this state when the flow starts to -revert it and its
:py:meth:`revert() ` method -is called. Only tasks in the SUCCESS or FAILURE state can be
reverted. If this -method fails (raises exception), the task goes to the FAILURE state.
+**SUCCESS** - The engine running the task transitions the task to this state +after the task
has finished successfully (i.e. no exceptions were raised during +execution). + +**FAILURE** -
The engine running the task transitions the task to this state +after it has finished with an
error. + +**REVERTING** - The engine running a task transitions the task to this state +when
the containing flow the engine is running starts to revert and
+its :py:meth:`~taskflow.task.BaseTask.revert` method is called. Only tasks in +the ``SUCCESS``
or ``FAILURE`` state can be reverted. If this method fails (i.e. +raises an exception), the
task goes to the ``FAILURE`` state (if it was already +in the ``FAILURE`` state then this is a
no-op). **REVERTED** - A task that has been reverted appears in this state. - Retry =====
+.. note:: + + A retry has the same states as a task and one additional state. +
.. image:: img/retry_states.svg :width: 660px - :align: left + :align: center :alt: Retry
state transitions -Retry has the same states as a task and one additional state. +**PENDING** -
A retry starts its execution lifecycle in this state (it has no +state prior to being run by
an engine, since retries are just objects that +represent how to retry an associated flow).
Once it has been transitioned to +the ``PENDING`` state by the engine this means it can be
executed immediately +or if needed will wait for all of the atoms it depends on to complete
(in the +retry case the retry object will also be consulted when failures occur in the +flow
that the retry is associated with by consulting its
+:py:meth:`~taskflow.retry.Decider.on_failure` method).
-**PENDING** - When a retry is added to a flow, it starts in the PENDING state, -which means it can be executed immediately or waits for all of task it depends -on to complete. The retry transitions to the PENDING state after it was -reverted and its flow was restarted or retried. +.. note:: -**RUNNING** - When flow starts to execute the retry, it transitions to the -RUNNING state, and stays in this state until its -:py:meth:`execute() ` method returns. + An engine running a retry also transitions the retry to the ``PENDING`` state + after it was reverted and its associated flow was restarted or retried. -**SUCCESS** - The retry transitions to this state after it was finished -successfully. +**RUNNING** - When an engine starts to execute the retry, the engine +transitions the retry to the ``RUNNING`` state, and the retry stays in this +state until its :py:meth:`~taskflow.retry.Retry.execute` method returns. -**FAILURE** - The retry transitions to this state after it was finished with -error. When the flow containing this retry is being reverted, all its tasks are -walked in particular order. +**SUCCESS** - The engine running the retry transitions it to this state after +it has finished successfully (i.e. no exceptions were raised during +execution). -**REVERTING** - The retry transitions to this state when the flow starts to -revert it and its :py:meth:`revert() ` method is -called. Only retries in SUCCESS or FAILURE state can be reverted. If this -method fails (raises exception), the retry goes to the FAILURE state. +**FAILURE** - The engine running the retry transitions it to this state after +it has finished with an error. + +**REVERTING** - The engine running the retry transitions it to this state when +the associated flow the engine is running starts to revert and its +:py:meth:`~taskflow.retry.Retry.revert` method is called. Only retries +in ``SUCCESS`` or ``FAILURE`` state can be reverted.
If this method fails (i.e. +raises an exception), the retry goes to the ``FAILURE`` state (if it was +already in the ``FAILURE`` state then this is a no-op). **REVERTED** - A retry that has been reverted appears in this state. -**RETRYING** - If flow that is managed by the current retry was failed and -reverted, the engine prepares it for the next run and transitions to the -RETRYING state. +**RETRYING** - If the flow that is associated with the current retry has failed +and been reverted, the engine prepares the flow for the next run and +transitions the retry to the ``RETRYING`` state. diff --git a/doc/source/types.rst b/doc/source/types.rst new file mode 100644 index 00000000..47ba7e48 --- /dev/null +++ b/doc/source/types.rst @@ -0,0 +1,68 @@ +----- +Types +----- + +.. note:: + + Even though these types **are** made for public consumption (and usage + should be encouraged/easily possible) it should be noted that these may be + moved out to new libraries at various points in the future (for example + the ``FSM`` code *may* move to its own oslo supported ``automaton`` library + at some point in the future [#f1]_). If you are using these + types **without** using the rest of this library it is **strongly** + encouraged that you be a vocal proponent of getting these made + into *isolated* libraries (as using these types in this manner is not + the expected and/or desired usage). +Cache +===== + +.. automodule:: taskflow.types.cache + +Failure +======= + +.. automodule:: taskflow.types.failure + +FSM +=== + +.. automodule:: taskflow.types.fsm + +Futures +======= + +.. automodule:: taskflow.types.futures + +Graph +===== + +.. automodule:: taskflow.types.graph + +Notifier +======== + +.. automodule:: taskflow.types.notifier + +Periodic +======== + +.. automodule:: taskflow.types.periodic + +Table +===== + +.. automodule:: taskflow.types.table + +Timing +====== + +.. automodule:: taskflow.types.timing + +Tree +==== + +.. automodule:: taskflow.types.tree + +..
[#f1] See: https://review.openstack.org/#/c/141961 for a proposal to + do this. diff --git a/doc/source/utils.rst b/doc/source/utils.rst new file mode 100644 index 00000000..1f774663 --- /dev/null +++ b/doc/source/utils.rst @@ -0,0 +1,54 @@ +--------- +Utilities +--------- + +.. warning:: + + External usage of internal utility functions and modules should be kept + to a **minimum** as they may be altered, refactored or moved to other + locations **without** notice (and without the typical deprecation cycle). + +Async +~~~~~ + +.. automodule:: taskflow.utils.async_utils + +Deprecation +~~~~~~~~~~~ + +.. automodule:: taskflow.utils.deprecation + +Eventlet +~~~~~~~~ + +.. automodule:: taskflow.utils.eventlet_utils + +Kazoo +~~~~~ + +.. automodule:: taskflow.utils.kazoo_utils + +Kombu +~~~~~ + +.. automodule:: taskflow.utils.kombu_utils + +Locks +~~~~~ + +.. automodule:: taskflow.utils.lock_utils + +Miscellaneous +~~~~~~~~~~~~~ + +.. automodule:: taskflow.utils.misc + +Persistence +~~~~~~~~~~~ + +.. automodule:: taskflow.utils.persistence_utils + +Threading +~~~~~~~~~ + +.. automodule:: taskflow.utils.threading_utils diff --git a/doc/source/workers.rst b/doc/source/workers.rst index 9c2f2b9c..7c4f0112 100644 --- a/doc/source/workers.rst +++ b/doc/source/workers.rst @@ -1,7 +1,3 @@ -------- -Workers -------- - Overview ======== @@ -17,7 +13,6 @@ connected via `amqp`_ (or other supported `kombu`_ transports). production ready. .. _blueprint page: https://blueprints.launchpad.net/taskflow?searchtext=wbe -.. _kombu: http://kombu.readthedocs.org/ Terminology ----------- @@ -36,11 +31,12 @@ Executor these requests can be accepted and processed by remote workers. Worker - Workers are started on remote hosts and has list of tasks it can perform (on - request). Workers accept and process task requests that are published by an - executor. Several requests can be processed simultaneously in separate - threads. 
For example, an `executor`_ can be passed to the worker and - configured to run in as many threads (green or not) as desired. + Workers are started on remote hosts and each has a list of tasks it can + perform (on request). Workers accept and process task requests that are + published by an executor. Several requests can be processed simultaneously + in separate threads (or processes...). For example, an `executor`_ can be + passed to the worker and configured to run in as many threads (green or + not) as desired. Proxy Executors interact with workers via a proxy. The proxy maintains the @@ -72,35 +68,12 @@ Requirements .. _executor: https://docs.python.org/dev/library/concurrent.futures.html#executor-objects .. _protocol: http://en.wikipedia.org/wiki/Communications_protocol -Use-cases ---------- - -* `Glance`_ - - * Image tasks *(long-running)* - - * Convert, import/export & more... - -* `Heat`_ - - * Engine work distribution - -* `Rally`_ - - * Load generation - -* *Your use-case here* - -.. _Heat: https://wiki.openstack.org/wiki/Heat -.. _Rally: https://wiki.openstack.org/wiki/Rally -.. _Glance: https://wiki.openstack.org/wiki/Glance - Design ====== -There are two communication sides, the *executor* and *worker* that communicate -using a proxy component. The proxy is designed to accept/publish messages -from/into a named exchange. +There are two communication sides, the *executor* (and associated engine +derivative) and *worker* that communicate using a proxy component. The proxy +is designed to accept/publish messages from/into a named exchange. High level architecture ----------------------- @@ -135,7 +108,7 @@ engine executor in the following manner: executes the task). 2. If dispatched succeeded then the worker sends a confirmation response to the executor otherwise the worker sends a failed response along with - a serialized :py:class:`failure ` object + a serialized :py:class:`failure ` object that contains what has failed (and why). 3. 
The worker executes the task and once it is finished sends the result back to the originating executor (every time a task progress event is @@ -152,20 +125,29 @@ engine executor in the following manner: .. note:: - :py:class:`~taskflow.utils.misc.Failure` objects are not json-serializable - (they contain references to tracebacks which are not serializable), so they - are converted to dicts before sending and converted from dicts after - receiving on both executor & worker sides (this translation is lossy since - the traceback won't be fully retained). + :py:class:`~taskflow.types.failure.Failure` objects are not directly + json-serializable (they contain references to tracebacks which are not + serializable), so they are converted to dicts before sending and converted + from dicts after receiving on both executor & worker sides (this + translation is lossy since the traceback won't be fully retained). -Executor request format -~~~~~~~~~~~~~~~~~~~~~~~ +Protocol +~~~~~~~~ -* **task** - full task name to be performed +.. automodule:: taskflow.engines.worker_based.protocol + +Examples +~~~~~~~~ + +Request (execute) +""""""""""""""""" + +* **task_name** - full task name to be performed +* **task_cls** - full task class name to be performed * **action** - task action to be performed (e.g. 
execute, revert) * **arguments** - arguments the task action to be called with * **result** - task execution result (result or - :py:class:`~taskflow.utils.misc.Failure`) *[passed to revert only]* + :py:class:`~taskflow.types.failure.Failure`) *[passed to revert only]* Additionally, the following parameters are added to the request message: @@ -180,20 +162,70 @@ Additionally, the following parameters are added to the request message: { "action": "execute", "arguments": { - "joe_number": 444 + "x": 111 }, - "task": "tasks.CallJoe" + "task_cls": "taskflow.tests.utils.TaskOneArgOneReturn", + "task_name": "taskflow.tests.utils.TaskOneArgOneReturn", + "task_version": [ + 1, + 0 + ] } -Worker response format -~~~~~~~~~~~~~~~~~~~~~~ + +Request (revert) +"""""""""""""""" + +When **reverting:** + +.. code:: json + + { + "action": "revert", + "arguments": {}, + "failures": { + "taskflow.tests.utils.TaskWithFailure": { + "exc_type_names": [ + "RuntimeError", + "StandardError", + "Exception" + ], + "exception_str": "Woot!", + "traceback_str": " File \"/homes/harlowja/dev/os/taskflow/taskflow/engines/action_engine/executor.py\", line 56, in _execute_task\n result = task.execute(**arguments)\n File \"/homes/harlowja/dev/os/taskflow/taskflow/tests/utils.py\", line 165, in execute\n raise RuntimeError('Woot!')\n", + "version": 1 + } + }, + "result": [ + "failure", + { + "exc_type_names": [ + "RuntimeError", + "StandardError", + "Exception" + ], + "exception_str": "Woot!", + "traceback_str": " File \"/homes/harlowja/dev/os/taskflow/taskflow/engines/action_engine/executor.py\", line 56, in _execute_task\n result = task.execute(**arguments)\n File \"/homes/harlowja/dev/os/taskflow/taskflow/tests/utils.py\", line 165, in execute\n raise RuntimeError('Woot!')\n", + "version": 1 + } + ], + "task_cls": "taskflow.tests.utils.TaskWithFailure", + "task_name": "taskflow.tests.utils.TaskWithFailure", + "task_version": [ + 1, + 0 + ] + } + +Worker response(s) +"""""""""""""""""" When 
**running:** .. code:: json { - "status": "RUNNING" + "data": {}, + "state": "RUNNING" } When **progressing:** @@ -201,9 +233,11 @@ When **progressing:** .. code:: json { - "event_data": , - "progress": , - "state": "PROGRESS" + "details": { + "progress": 0.5 + }, + "event_type": "update_progress", + "state": "EVENT" } When **succeeded:** @@ -211,8 +245,9 @@ When **succeeded:** .. code:: json { - "event": , - "result": , + "data": { + "result": 666 + }, "state": "SUCCESS" } @@ -221,15 +256,68 @@ When **failed:** .. code:: json { - "event": , - "result": , + "data": { + "result": { + "exc_type_names": [ + "RuntimeError", + "StandardError", + "Exception" + ], + "exception_str": "Woot!", + "traceback_str": " File \"/homes/harlowja/dev/os/taskflow/taskflow/engines/action_engine/executor.py\", line 56, in _execute_task\n result = task.execute(**arguments)\n File \"/homes/harlowja/dev/os/taskflow/taskflow/tests/utils.py\", line 165, in execute\n raise RuntimeError('Woot!')\n", + "version": 1 + } + }, "state": "FAILURE" } +Request state transitions +------------------------- + +.. image:: img/wbe_request_states.svg + :width: 520px + :align: center + :alt: WBE request state transitions + +**WAITING** - Request placed on queue (or other `kombu`_ message bus/transport) +but not *yet* consumed. + +**PENDING** - Worker accepted the request and is pending to run it using its +executor (threads, processes, or other). + +**FAILURE** - Worker failed after running the request (due to a task exception) +or no worker moved/started executing (by placing the request into ``RUNNING`` +state) within the specified time span (this defaults to 60 seconds unless +overridden). + +**RUNNING** - The worker's executor (using threads, processes...) has started +to run the requested task (once this state is transitioned to, any request +timeout no longer applies; at this point it is unknown how long a task will +run since it cannot be determined if a task is just taking a long time +or has failed).
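The timeout rule described above can be sketched in isolation: a request that has not been moved to ``RUNNING`` within the allowed time span expires, while a running request never does. The ``Request`` class and its attributes below are assumptions for illustration, not the actual WBE implementation:

```python
# Standalone sketch (names assumed) of the WBE request timeout rule: if a
# request is not transitioned to RUNNING within the allowed time span it is
# considered expired (and would be marked FAILURE); once RUNNING, the
# timeout no longer applies.
import time


class Request(object):
    def __init__(self, timeout=60.0):
        self.state = 'WAITING'
        self.timeout = timeout
        self.created_at = time.time()

    def expired(self, now=None):
        if self.state == 'RUNNING':
            # once running, task duration is unknowable, so never time out
            return False
        now = time.time() if now is None else now
        return (now - self.created_at) > self.timeout


req = Request(timeout=60.0)
assert not req.expired(now=req.created_at + 1)     # still within the window
assert req.expired(now=req.created_at + 61)        # window exceeded -> FAILURE
req.state = 'RUNNING'
assert not req.expired(now=req.created_at + 3600)  # running: timeout disabled
```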
+ +**SUCCESS** - Worker finished running task without exception. + +.. note:: + + During the ``WAITING`` and ``PENDING`` stages the engine keeps track + of how long the request has been *alive* for and if a timeout is reached + the request will automatically transition to ``FAILURE`` and any further + transitions from a worker will be disallowed (for example, if a worker + accepts the request in the future and sets the task to ``PENDING`` this + transition will be logged and ignored). This timeout can be adjusted and/or + removed by setting the engine ``transition_timeout`` option to a + higher/lower value or by setting it to ``None`` (to remove the timeout + completely). In the future this will be improved to be more dynamic + by implementing the blueprints associated with `failover`_ and + `info/resilience`_. + +.. _failover: https://blueprints.launchpad.net/taskflow/+spec/wbe-worker-failover +.. _info/resilience: https://blueprints.launchpad.net/taskflow/+spec/wbe-worker-info + Usage ===== - Workers ------- @@ -273,32 +361,26 @@ For complete parameters and object usage please see .. code:: python - engine_conf = { - 'engine': 'worker-based', - 'url': 'amqp://guest:guest@localhost:5672//', - 'exchange': 'test-exchange', - 'topics': ['topic1', 'topic2'], - } flow = lf.Flow('simple-linear').add(...)
- eng = taskflow.engines.load(flow, engine_conf=engine_conf) + eng = taskflow.engines.load(flow, engine='worker-based', + exchange='test-exchange', + topics=['topic1', 'topic2'], + transport='filesystem', + transport_options={ + 'data_folder_in': '/tmp/in', + 'data_folder_out': '/tmp/out', + }) eng.run() Additional supported keyword arguments: @@ -333,7 +415,8 @@ Limitations Interfaces ========== -.. automodule:: taskflow.engines.worker_based.worker .. automodule:: taskflow.engines.worker_based.engine .. automodule:: taskflow.engines.worker_based.proxy -.. automodule:: taskflow.engines.worker_based.executor +.. automodule:: taskflow.engines.worker_based.worker + +.. _kombu: http://kombu.readthedocs.org/ diff --git a/openstack-common.conf b/openstack-common.conf index 8940a040..127bc839 100644 --- a/openstack-common.conf +++ b/openstack-common.conf @@ -1,16 +1,7 @@ [DEFAULT] # The list of modules to copy from oslo-incubator.git -module=excutils -module=importutils -module=jsonutils -module=strutils -module=timeutils -module=uuidutils -module=network_utils - script=tools/run_cross_tests.sh # The base module to hold the copy of openstack.common base=taskflow - diff --git a/optional-requirements.txt b/optional-requirements.txt deleted file mode 100644 index e010cf60..00000000 --- a/optional-requirements.txt +++ /dev/null @@ -1,31 +0,0 @@ -# This file lists dependencies that are used by different pluggable (optional) -# parts of TaskFlow, like engines or persistence backends. They are not -# strictly required by TaskFlow (aka you can use TaskFlow without them), so -# they don't go into one of the requirements.txt files. - -# The order of packages is significant, because pip processes them in the order -# of appearance. Changing the order has an impact on the overall integration -# process, which may cause wedges in the gate later. 
- -# Database (sqlalchemy) persistence: -SQLAlchemy>=0.7.8,<=0.9.99 -alembic>=0.4.1 - -# Database (sqlalchemy) persistence with MySQL: -MySQL-python - -# NOTE(imelnikov): pyMySQL should be here, but for now it's commented out -# because of https://bugs.launchpad.net/openstack-ci/+bug/1280008 -# pyMySQL - -# Database (sqlalchemy) persistence with PostgreSQL: -psycopg2 - -# ZooKeeper backends -kazoo>=1.3.1 - -# Eventlet may be used with parallel engine: -eventlet>=0.13.0 - -# Needed for the worker-based engine: -kombu>=2.4.8 diff --git a/requirements-py2.txt b/requirements-py2.txt index 9b204ea6..083caec0 100644 --- a/requirements-py2.txt +++ b/requirements-py2.txt @@ -2,21 +2,29 @@ # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. +# See: https://bugs.launchpad.net/pbr/+bug/1384919 for why this is here... +pbr>=0.6,!=0.7,<1.0 + # Packages needed for using this library. -anyjson>=0.3.3 -iso8601>=0.1.9 + # Only needed on python 2.6 ordereddict + # Python 2->3 compatibility library. six>=1.7.0 + # Very nice graph library networkx>=1.8 -Babel>=1.3 + # Used for backend storage engine loading. -stevedore>=0.14 +stevedore>=1.1.0 # Apache-2.0 + # Backport for concurrent.futures which exists in 3.2+ futures>=2.1.6 + # Used for structured input validation jsonschema>=2.0.0,<3.0.0 -# For pretty printing state-machine tables -PrettyTable>=0.7,<0.8 + +# For common utilities +oslo.utils>=1.2.0 # Apache-2.0 +oslo.serialization>=1.2.0 # Apache-2.0 diff --git a/requirements-py3.txt b/requirements-py3.txt index 63880b31..b04fc0af 100644 --- a/requirements-py3.txt +++ b/requirements-py3.txt @@ -2,17 +2,23 @@ # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. +# See: https://bugs.launchpad.net/pbr/+bug/1384919 for why this is here... +pbr>=0.6,!=0.7,<1.0 + # Packages needed for using this library. 
-anyjson>=0.3.3 -iso8601>=0.1.9 + # Python 2->3 compatibility library. six>=1.7.0 + # Very nice graph library networkx>=1.8 -Babel>=1.3 + # Used for backend storage engine loading. -stevedore>=0.14 +stevedore>=1.1.0 # Apache-2.0 + # Used for structured input validation jsonschema>=2.0.0,<3.0.0 -# For pretty printing state-machine tables -PrettyTable>=0.7,<0.8 + +# For common utilities +oslo.utils>=1.2.0 # Apache-2.0 +oslo.serialization>=1.2.0 # Apache-2.0 diff --git a/setup.cfg b/setup.cfg index 0c396204..fcaff44d 100644 --- a/setup.cfg +++ b/setup.cfg @@ -6,11 +6,8 @@ description-file = author = Taskflow Developers author-email = taskflow-dev@lists.launchpad.net home-page = https://launchpad.net/taskflow -keywords = reliable recoverable execution - tasks flows workflows jobs - persistence states - asynchronous parallel threads - dataflow openstack +keywords = reliable,tasks,execution,parallel,dataflow,workflows,distributed +requires-python = >=2.6 classifier = Development Status :: 4 - Beta Environment :: OpenStack @@ -24,6 +21,7 @@ classifier = Programming Language :: Python :: 2.7 Programming Language :: Python :: 3 Programming Language :: Python :: 3.3 + Programming Language :: Python :: 3.4 Topic :: Software Development :: Libraries Topic :: System :: Distributed Computing @@ -49,10 +47,11 @@ taskflow.persistence = zookeeper = taskflow.persistence.backends.impl_zookeeper:ZkBackend taskflow.engines = - default = taskflow.engines.action_engine.engine:SingleThreadedActionEngine - serial = taskflow.engines.action_engine.engine:SingleThreadedActionEngine - parallel = taskflow.engines.action_engine.engine:MultiThreadedActionEngine + default = taskflow.engines.action_engine.engine:SerialActionEngine + serial = taskflow.engines.action_engine.engine:SerialActionEngine + parallel = taskflow.engines.action_engine.engine:ParallelActionEngine worker-based = taskflow.engines.worker_based.engine:WorkerBasedActionEngine + workers = 
taskflow.engines.worker_based.engine:WorkerBasedActionEngine [nosetests] cover-erase = true diff --git a/taskflow/atom.py b/taskflow/atom.py index d93ff57a..d236ff90 100644 --- a/taskflow/atom.py +++ b/taskflow/atom.py @@ -15,15 +15,11 @@ # License for the specific language governing permissions and limitations # under the License. -import logging - +from oslo_utils import reflection import six from taskflow import exceptions from taskflow.utils import misc -from taskflow.utils import reflection - -LOG = logging.getLogger(__name__) def _save_as_to_mapping(save_as): @@ -73,7 +69,8 @@ def _build_rebind_dict(args, rebind_args): elif isinstance(rebind_args, dict): return rebind_args else: - raise TypeError('Invalid rebind value: %s' % rebind_args) + raise TypeError("Invalid rebind value '%s' (%s)" + % (rebind_args, type(rebind_args))) def _build_arg_mapping(atom_name, reqs, rebind_args, function, do_infer, @@ -125,7 +122,7 @@ class Atom(object): with this atom. It can be useful in resuming older versions of atoms. Standard major, minor versioning concepts should apply. - :ivar save_as: An *immutable* output ``resource`` name dict this atom + :ivar save_as: An *immutable* output ``resource`` name dictionary this atom produces that other atoms may depend on this atom providing. The format is output index (or key when a dictionary is returned from the execute method) to stored argument @@ -136,11 +133,19 @@ class Atom(object): the names that this atom expects (in a way this is like remapping a namespace of another atom into the namespace of this atom). - :ivar inject: An *immutable* input_name => value dictionary which specifies - any initial inputs that should be automatically injected into - the atoms scope before the atom execution commences (this - allows for providing atom *local* values that do not need to - be provided by other atoms). 
+ :param name: Meaningful name for this atom, should be something that is + distinguishable and understandable for notification, + debugging, storing and any other similar purposes. + :param provides: A set, string or list of items that + this will be providing (or could provide) to others, used + to correlate and associate the thing/s this atom + produces, if it produces anything at all. + :param inject: An *immutable* input_name => value dictionary which + specifies any initial inputs that should be automatically + injected into the atom's scope before the atom execution + commences (this allows for providing atom *local* values that + do not need to be provided by other atoms/dependents). + :ivar inject: See parameter ``inject``. """ def __init__(self, name=None, provides=None, inject=None): diff --git a/taskflow/conductors/base.py b/taskflow/conductors/base.py index e7c9887a..f7546c3e 100644 --- a/taskflow/conductors/base.py +++ b/taskflow/conductors/base.py @@ -17,7 +17,7 @@ import threading import six -import taskflow.engines +from taskflow import engines from taskflow import exceptions as excp from taskflow.utils import lock_utils @@ -34,10 +34,15 @@ class Conductor(object): period of time will finish up the prior failed conductors work.
""" - def __init__(self, name, jobboard, engine_conf, persistence): + def __init__(self, name, jobboard, persistence, + engine=None, engine_options=None): self._name = name self._jobboard = jobboard - self._engine_conf = engine_conf + self._engine = engine + if not engine_options: + self._engine_options = {} + else: + self._engine_options = engine_options.copy() self._persistence = persistence self._lock = threading.RLock() @@ -83,10 +88,10 @@ class Conductor(object): store = dict(job.details["store"]) else: store = {} - return taskflow.engines.load_from_detail(flow_detail, - store=store, - engine_conf=self._engine_conf, - backend=self._persistence) + return engines.load_from_detail(flow_detail, store=store, + engine=self._engine, + backend=self._persistence, + **self._engine_options) @lock_utils.locked def connect(self): @@ -108,9 +113,10 @@ class Conductor(object): """Dispatches a claimed job for work completion. Accepts a single (already claimed) job and causes it to be run in - an engine. Returns a boolean that signifies whether the job should - be consumed. The job is consumed upon completion (unless False is - returned which will signify the job should be abandoned instead). + an engine. Returns a future object that represented the work to be + completed sometime in the future. The future should return a single + boolean from its result() method. This boolean determines whether the + job will be consumed (true) or whether it should be abandoned (false). :param job: A job instance that has already been claimed by the jobboard. diff --git a/taskflow/conductors/single_threaded.py b/taskflow/conductors/single_threaded.py index 5e78e348..e39f4c49 100644 --- a/taskflow/conductors/single_threaded.py +++ b/taskflow/conductors/single_threaded.py @@ -12,16 +12,16 @@ # License for the specific language governing permissions and limitations # under the License. 
-import logging -import threading - import six from taskflow.conductors import base from taskflow import exceptions as excp from taskflow.listeners import logging as logging_listener +from taskflow import logging from taskflow.types import timing as tt +from taskflow.utils import async_utils from taskflow.utils import lock_utils +from taskflow.utils import threading_utils LOG = logging.getLogger(__name__) WAIT_TIMEOUT = 0.5 @@ -50,11 +50,11 @@ class SingleThreadedConductor(base.Conductor): upon the jobboard capabilities to automatically abandon these jobs. """ - def __init__(self, name, jobboard, engine_conf, persistence, - wait_timeout=None): - super(SingleThreadedConductor, self).__init__(name, jobboard, - engine_conf, - persistence) + def __init__(self, name, jobboard, persistence, + engine=None, engine_options=None, wait_timeout=None): + super(SingleThreadedConductor, self).__init__( + name, jobboard, persistence, + engine=engine, engine_options=engine_options) if wait_timeout is None: wait_timeout = WAIT_TIMEOUT if isinstance(wait_timeout, (int, float) + six.string_types): @@ -63,7 +63,7 @@ class SingleThreadedConductor(base.Conductor): self._wait_timeout = wait_timeout else: raise ValueError("Invalid timeout literal: %s" % (wait_timeout)) - self._dead = threading.Event() + self._dead = threading_utils.Event() @lock_utils.locked def stop(self, timeout=None): @@ -80,8 +80,7 @@ class SingleThreadedConductor(base.Conductor): be honored in the future) and False will be returned indicating this. 
""" self._wait_timeout.interrupt() - self._dead.wait(timeout) - return self._dead.is_set() + return self._dead.wait(timeout) @property def dispatching(self): @@ -116,7 +115,7 @@ class SingleThreadedConductor(base.Conductor): job, exc_info=True) else: LOG.info("Job completed successfully: %s", job) - return consume + return async_utils.make_completed_future(consume) def run(self): self._dead.clear() @@ -136,12 +135,13 @@ class SingleThreadedConductor(base.Conductor): continue consume = False try: - consume = self._dispatch_job(job) + f = self._dispatch_job(job) except Exception: LOG.warn("Job dispatching failed: %s", job, exc_info=True) else: dispatched += 1 + consume = f.result() try: if consume: self._jobboard.consume(job, self._name) diff --git a/taskflow/openstack/__init__.py b/taskflow/engines/action_engine/actions/__init__.py similarity index 100% rename from taskflow/openstack/__init__.py rename to taskflow/engines/action_engine/actions/__init__.py diff --git a/taskflow/engines/action_engine/actions/base.py b/taskflow/engines/action_engine/actions/base.py new file mode 100644 index 00000000..5595268a --- /dev/null +++ b/taskflow/engines/action_engine/actions/base.py @@ -0,0 +1,42 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import abc + +import six + +from taskflow import states + + +#: Sentinel use to represent no-result (none can be a valid result...) 
+NO_RESULT = object() + +#: States that are expected to/may have a result to save... +SAVE_RESULT_STATES = (states.SUCCESS, states.FAILURE) + + +@six.add_metaclass(abc.ABCMeta) +class Action(object): + """An action that handles executing, state changes, ... of atoms.""" + + def __init__(self, storage, notifier, walker_factory): + self._storage = storage + self._notifier = notifier + self._walker_factory = walker_factory + + @abc.abstractmethod + def handles(self, atom): + """Checks if this action handles the provided atom.""" diff --git a/taskflow/engines/action_engine/actions/retry.py b/taskflow/engines/action_engine/actions/retry.py new file mode 100644 index 00000000..bd96c899 --- /dev/null +++ b/taskflow/engines/action_engine/actions/retry.py @@ -0,0 +1,130 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +from taskflow.engines.action_engine.actions import base +from taskflow.engines.action_engine import executor as ex +from taskflow import logging +from taskflow import retry as retry_atom +from taskflow import states +from taskflow.types import failure +from taskflow.types import futures + +LOG = logging.getLogger(__name__) + + +def _execute_retry(retry, arguments): + try: + result = retry.execute(**arguments) + except Exception: + result = failure.Failure() + return (ex.EXECUTED, result) + + +def _revert_retry(retry, arguments): + try: + result = retry.revert(**arguments) + except Exception: + result = failure.Failure() + return (ex.REVERTED, result) + + +class RetryAction(base.Action): + """An action that handles executing, state changes, ... of retry atoms.""" + + def __init__(self, storage, notifier, walker_factory): + super(RetryAction, self).__init__(storage, notifier, walker_factory) + self._executor = futures.SynchronousExecutor() + + @staticmethod + def handles(atom): + return isinstance(atom, retry_atom.Retry) + + def _get_retry_args(self, retry, addons=None): + scope_walker = self._walker_factory(retry) + arguments = self._storage.fetch_mapped_args(retry.rebind, + atom_name=retry.name, + scope_walker=scope_walker) + history = self._storage.get_retry_history(retry.name) + arguments[retry_atom.EXECUTE_REVERT_HISTORY] = history + if addons: + arguments.update(addons) + return arguments + + def change_state(self, retry, state, result=base.NO_RESULT): + old_state = self._storage.get_atom_state(retry.name) + if state in base.SAVE_RESULT_STATES: + save_result = None + if result is not base.NO_RESULT: + save_result = result + self._storage.save(retry.name, save_result, state) + elif state == states.REVERTED: + self._storage.cleanup_retry_history(retry.name, state) + else: + if state == old_state: + # NOTE(imelnikov): nothing really changed, so we should not + # write anything to storage and run notifications + return + self._storage.set_atom_state(retry.name, 
state) + retry_uuid = self._storage.get_atom_uuid(retry.name) + details = { + 'retry_name': retry.name, + 'retry_uuid': retry_uuid, + 'old_state': old_state, + } + if result is not base.NO_RESULT: + details['result'] = result + self._notifier.notify(state, details) + + def execute(self, retry): + + def _on_done_callback(fut): + result = fut.result()[-1] + if isinstance(result, failure.Failure): + self.change_state(retry, states.FAILURE, result=result) + else: + self.change_state(retry, states.SUCCESS, result=result) + + self.change_state(retry, states.RUNNING) + fut = self._executor.submit(_execute_retry, retry, + self._get_retry_args(retry)) + fut.add_done_callback(_on_done_callback) + fut.atom = retry + return fut + + def revert(self, retry): + + def _on_done_callback(fut): + result = fut.result()[-1] + if isinstance(result, failure.Failure): + self.change_state(retry, states.FAILURE) + else: + self.change_state(retry, states.REVERTED) + + self.change_state(retry, states.REVERTING) + arg_addons = { + retry_atom.REVERT_FLOW_FAILURES: self._storage.get_failures(), + } + fut = self._executor.submit(_revert_retry, retry, + self._get_retry_args(retry, + addons=arg_addons)) + fut.add_done_callback(_on_done_callback) + fut.atom = retry + return fut + + def on_failure(self, retry, atom, last_failure): + self._storage.save_retry_failure(retry.name, atom.name, last_failure) + arguments = self._get_retry_args(retry) + return retry.on_failure(**arguments) diff --git a/taskflow/engines/action_engine/actions/task.py b/taskflow/engines/action_engine/actions/task.py new file mode 100644 index 00000000..607b26d5 --- /dev/null +++ b/taskflow/engines/action_engine/actions/task.py @@ -0,0 +1,150 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import functools + +from taskflow.engines.action_engine.actions import base +from taskflow import logging +from taskflow import states +from taskflow import task as task_atom +from taskflow.types import failure + +LOG = logging.getLogger(__name__) + + +class TaskAction(base.Action): + """An action that handles scheduling, state changes, ... of task atoms.""" + + def __init__(self, storage, notifier, walker_factory, task_executor): + super(TaskAction, self).__init__(storage, notifier, walker_factory) + self._task_executor = task_executor + + @staticmethod + def handles(atom): + return isinstance(atom, task_atom.BaseTask) + + def _is_identity_transition(self, old_state, state, task, progress): + if state in base.SAVE_RESULT_STATES: + # saving result is never identity transition + return False + if state != old_state: + # changing state is not identity transition by definition + return False + # NOTE(imelnikov): last thing to check is that the progress has + # changed, which means progress is not None and is different from + # what is stored in the database. 
+ if progress is None: + return False + old_progress = self._storage.get_task_progress(task.name) + if old_progress != progress: + return False + return True + + def change_state(self, task, state, + result=base.NO_RESULT, progress=None): + old_state = self._storage.get_atom_state(task.name) + if self._is_identity_transition(old_state, state, task, progress): + # NOTE(imelnikov): ignore identity transitions in order + # to avoid extra write to storage backend and, what's + # more important, extra notifications + return + if state in base.SAVE_RESULT_STATES: + save_result = None + if result is not base.NO_RESULT: + save_result = result + self._storage.save(task.name, save_result, state) + else: + self._storage.set_atom_state(task.name, state) + if progress is not None: + self._storage.set_task_progress(task.name, progress) + task_uuid = self._storage.get_atom_uuid(task.name) + details = { + 'task_name': task.name, + 'task_uuid': task_uuid, + 'old_state': old_state, + } + if result is not base.NO_RESULT: + details['result'] = result + self._notifier.notify(state, details) + if progress is not None: + task.update_progress(progress) + + def _on_update_progress(self, task, event_type, details): + """Should be called when task updates its progress.""" + try: + progress = details.pop('progress') + except KeyError: + pass + else: + try: + self._storage.set_task_progress(task.name, progress, + details=details) + except Exception: + # Update progress callbacks should never fail, so capture and + # log the emitted exception instead of raising it. 
+ LOG.exception("Failed setting task progress for %s to %0.3f", + task, progress) + + def schedule_execution(self, task): + self.change_state(task, states.RUNNING, progress=0.0) + scope_walker = self._walker_factory(task) + arguments = self._storage.fetch_mapped_args(task.rebind, + atom_name=task.name, + scope_walker=scope_walker) + if task.notifier.can_be_registered(task_atom.EVENT_UPDATE_PROGRESS): + progress_callback = functools.partial(self._on_update_progress, + task) + else: + progress_callback = None + task_uuid = self._storage.get_atom_uuid(task.name) + return self._task_executor.execute_task( + task, task_uuid, arguments, + progress_callback=progress_callback) + + def complete_execution(self, task, result): + if isinstance(result, failure.Failure): + self.change_state(task, states.FAILURE, result=result) + else: + self.change_state(task, states.SUCCESS, + result=result, progress=1.0) + + def schedule_reversion(self, task): + self.change_state(task, states.REVERTING, progress=0.0) + scope_walker = self._walker_factory(task) + arguments = self._storage.fetch_mapped_args(task.rebind, + atom_name=task.name, + scope_walker=scope_walker) + task_uuid = self._storage.get_atom_uuid(task.name) + task_result = self._storage.get(task.name) + failures = self._storage.get_failures() + if task.notifier.can_be_registered(task_atom.EVENT_UPDATE_PROGRESS): + progress_callback = functools.partial(self._on_update_progress, + task) + else: + progress_callback = None + future = self._task_executor.revert_task( + task, task_uuid, arguments, task_result, failures, + progress_callback=progress_callback) + return future + + def complete_reversion(self, task, result): + if isinstance(result, failure.Failure): + self.change_state(task, states.FAILURE) + else: + self.change_state(task, states.REVERTED, progress=1.0) + + def wait_for_any(self, fs, timeout): + return self._task_executor.wait_for_any(fs, timeout) diff --git a/taskflow/engines/action_engine/compiler.py 
b/taskflow/engines/action_engine/compiler.py index 32cb58c8..fb81ba80 100644 --- a/taskflow/engines/action_engine/compiler.py +++ b/taskflow/engines/action_engine/compiler.py @@ -14,211 +14,397 @@ # License for the specific language governing permissions and limitations # under the License. -import logging +import collections +import threading from taskflow import exceptions as exc from taskflow import flow +from taskflow import logging from taskflow import retry from taskflow import task from taskflow.types import graph as gr +from taskflow.types import tree as tr +from taskflow.utils import lock_utils from taskflow.utils import misc LOG = logging.getLogger(__name__) +_RETRY_EDGE_DATA = { + flow.LINK_RETRY: True, +} +_EDGE_INVARIANTS = (flow.LINK_INVARIANT, flow.LINK_MANUAL, flow.LINK_RETRY) +_EDGE_REASONS = flow.LINK_REASONS + class Compilation(object): - """The result of a compilers compile() is this *immutable* object. + """The result of a compiler's compile() is this *immutable* object.""" - For now it is just a execution graph but in the future it will grow to - include more methods & properties that help the various runtime units - execute in a more optimal & featureful manner. - """ - def __init__(self, execution_graph): + def __init__(self, execution_graph, hierarchy): self._execution_graph = execution_graph + self._hierarchy = hierarchy @property def execution_graph(self): + """The execution ordering of atoms (as a graph structure).""" return self._execution_graph + @property + def hierarchy(self): + """The hierarchy of patterns (as a tree structure).""" + return self._hierarchy + + +def _add_update_edges(graph, nodes_from, nodes_to, attr_dict=None): + """Adds/updates edges from nodes to other nodes in the specified graph. + + It will connect the 'nodes_from' to the 'nodes_to' if an edge currently + does *not* exist (if it does already exist then the edge's attributes + are just updated instead).
When an edge is created the provided edge + attributes dictionary will be applied to the new edge between these two + nodes. + """ + # NOTE(harlowja): give each edge its own attr copy so that if it's + # later modified that the same copy isn't modified... + for u in nodes_from: + for v in nodes_to: + if not graph.has_edge(u, v): + if attr_dict: + graph.add_edge(u, v, attr_dict=attr_dict.copy()) + else: + graph.add_edge(u, v) + else: + # Just update the attr_dict (if any). + if attr_dict: + graph.add_edge(u, v, attr_dict=attr_dict.copy()) + + +class Linker(object): + """Compiler helper that adds pattern(s) constraints onto a graph.""" + + @staticmethod + def _is_not_empty(graph): + # Returns True if the given graph is *not* empty... + return graph.number_of_nodes() > 0 + + @staticmethod + def _find_first_decomposed(node, priors, + decomposed_members, decomposed_filter): + # How this works: traverse backwards and find only the predecessor + # items that are actually connected to this entity, and avoid any + # linkage that is not directly connected. This is guaranteed to be + # valid since we always iter_links() over predecessors before + # successors in all currently known patterns; a queue is used here + # since it is possible for a node to have 2+ different predecessors so + # we must search back through all of them in a reverse BFS order... + # + # Returns the first decomposed graph of those nodes (including the + # passed in node) that passes the provided filter + # function (returns None if none match). + frontier = collections.deque([node]) + # NOTE(harlowja): None is in this initial set since the first prior in + # the priors list has None as its predecessor (which we don't want to + # look for a decomposed member of).
+ visited = set([None]) + while frontier: + node = frontier.popleft() + if node in visited: + continue + node_graph = decomposed_members[node] + if decomposed_filter(node_graph): + return node_graph + visited.add(node) + # TODO(harlowja): optimize this more to avoid searching through + # things already searched... + for (u, v) in reversed(priors): + if node == v: + # Queue its predecessor to be searched in the future... + frontier.append(u) + else: + return None + + def apply_constraints(self, graph, flow, decomposed_members): + # This list is used to track the links that have been previously + # iterated over, so that when we are trying to find an entry to + # connect to that we iterate backwards through this list, finding + # connected nodes to the current target (let's call it v) and find + # the first (u_n, or u_n - 1, u_n - 2...) that was decomposed into + # a non-empty graph. We also retain all predecessors of v so that we + # can correctly locate u_n - 1 if u_n turns out to have decomposed into + # an empty graph (and so on). + priors = [] + # NOTE(harlowja): u, v are flows/tasks (also graph terminology since + # we are compiling things down into a flattened graph), the meaning + # of this link iteration via iter_links() is that u -> v (with the + # provided dictionary attributes, if any). + for (u, v, attr_dict) in flow.iter_links(): + if not priors: + priors.append((None, u)) + v_g = decomposed_members[v] + if not v_g.number_of_nodes(): + priors.append((u, v)) + continue + invariant = any(attr_dict.get(k) for k in _EDGE_INVARIANTS) + if not invariant: + # This is a symbol *only* dependency, connect + # corresponding providers and consumers to allow the consumer + # to be executed immediately after the provider finishes (this + # is an optimization for these types of dependencies...) + u_g = decomposed_members[u] + if not u_g.number_of_nodes(): + # This must always exist, but in case it somehow doesn't...
+ raise exc.CompilationFailure( + "Non-invariant link being created from '%s' ->" + " '%s' even though the target '%s' was found to be" + " decomposed into an empty graph" % (v, u, u)) + for u in u_g.nodes_iter(): + for v in v_g.nodes_iter(): + depends_on = u.provides & v.requires + if depends_on: + _add_update_edges(graph, + [u], [v], + attr_dict={ + _EDGE_REASONS: depends_on, + }) + else: + # Connect nodes with no predecessors in v to nodes with no + # successors in the *first* non-empty predecessor of v (thus + # maintaining the edge dependency). + match = self._find_first_decomposed(u, priors, + decomposed_members, + self._is_not_empty) + if match is not None: + _add_update_edges(graph, + match.no_successors_iter(), + list(v_g.no_predecessors_iter()), + attr_dict=attr_dict) + priors.append((u, v)) + class PatternCompiler(object): - """Compiles patterns & atoms into a compilation unit. + """Compiles a pattern (or task) into a compilation unit. - NOTE(harlowja): during this pattern translation process any nested flows - will be converted into there equivalent subgraphs. This currently implies - that contained atoms in those nested flows, post-translation will no longer - be associated with there previously containing flow but instead will lose - this identity and what will remain is the logical constraints that there - contained flow mandated. In the future this may be changed so that this - association is not lost via the compilation process (since it can be - useful to retain this relationship). + Let's dive into the basic idea for how this works: + + The compiler here is provided a 'root' object via its __init__ method, + this object could be a task, or a flow (one of the supported patterns), + the end-goal is to produce a :py:class:`.Compilation` object as the result + with the needed components. 
If this is not possible a + :py:class:`~.taskflow.exceptions.CompilationFailure` will be raised (or + in the case where an unknown type is being requested to compile + a ``TypeError`` will be raised). + + The complexity of this comes into play when the 'root' is a flow that + itself contains other nested flows (and so on); to compile this object and + its contained objects into a graph that *preserves* the constraints the + pattern mandates we have to go through a recursive algorithm that creates + subgraphs for each nesting level, and then on the way back up through + the recursion (now with a decomposed mapping from contained patterns or + atoms to their corresponding subgraph) we have to then connect the + subgraphs (and the atom(s) therein) that were decomposed for a pattern + correctly into a new graph (using a :py:class:`.Linker` object to ensure + the pattern mandated constraints are retained) and then return to the + caller (and they will do the same thing up until the root node, at which + point one graph is created with all contained atoms in the + pattern/nested patterns mandated ordering). + + Also maintained in the :py:class:`.Compilation` object is a hierarchy of + the nesting of items (which is also built up during the above mentioned + recursion, via a much simpler algorithm); this is typically used later to + determine the prior atoms of a given atom when looking up values that can + be provided to that atom for execution (see the scopes.py file for how this + works). Note that although you *could* think that the graph itself could be + used for this, which in some ways it can (for limited usage), the hierarchy + retains the nested structure (which is useful for scoping analysis/lookup) + to be able to provide back an iterator that gives back the scopes visible + at each level (the graph does not have this information once flattened).
+ + Let's take an example: + + Given the pattern ``f(a(b, c), d)`` where ``f`` is a + :py:class:`~taskflow.patterns.linear_flow.Flow` with items ``a(b, c)`` + where ``a`` is a :py:class:`~taskflow.patterns.linear_flow.Flow` composed + of tasks ``(b, c)`` and task ``d``. + + The algorithm that will be performed (mirroring the above described logic) + will go through the following steps (the tree hierarchy building is left + out as that is more obvious):: + + Compiling f + - Decomposing flow f with no parent (must be the root) + - Compiling a + - Decomposing flow a with parent f + - Compiling b + - Decomposing task b with parent a + - Decomposed b into: + Name: b + Nodes: 1 + - b + Edges: 0 + - Compiling c + - Decomposing task c with parent a + - Decomposed c into: + Name: c + Nodes: 1 + - c + Edges: 0 + - Relinking decomposed b -> decomposed c + - Decomposed a into: + Name: a + Nodes: 2 + - b + - c + Edges: 1 + b -> c ({'invariant': True}) + - Compiling d + - Decomposing task d with parent f + - Decomposed d into: + Name: d + Nodes: 1 + - d + Edges: 0 + - Relinking decomposed a -> decomposed d + - Decomposed f into: + Name: f + Nodes: 3 + - c + - b + - d + Edges: 2 + c -> d ({'invariant': True}) + b -> c ({'invariant': True}) """ - def compile(self, root): - graph = _Flattener(root).flatten() - if graph.number_of_nodes() == 0: - # Try to get a name attribute, otherwise just use the object - # string representation directly if that attribute does not exist. - name = getattr(root, 'name', root) - raise exc.Empty("Root container '%s' (%s) is empty."
- % (name, type(root))) - return Compilation(graph) - - -_RETRY_EDGE_DATA = { - 'retry': True, -} - - -class _Flattener(object): - """Flattens a root item (task/flow) into a execution graph.""" def __init__(self, root, freeze=True): self._root = root - self._graph = None self._history = set() - self._freeze = bool(freeze) + self._linker = Linker() + self._freeze = freeze + self._lock = threading.Lock() + self._compilation = None - def _add_new_edges(self, graph, nodes_from, nodes_to, edge_attrs): - """Adds new edges from nodes to other nodes in the specified graph. - - It will connect the nodes_from to the nodes_to if an edge currently - does *not* exist. When an edge is created the provided edge attributes - will be applied to the new edge between these two nodes. - """ - nodes_to = list(nodes_to) - for u in nodes_from: - for v in nodes_to: - if not graph.has_edge(u, v): - # NOTE(harlowja): give each edge its own attr copy so that - # if it's later modified that the same copy isn't modified. 
- graph.add_edge(u, v, attr_dict=edge_attrs.copy()) - - def _flatten(self, item): - functor = self._find_flattener(item) - if not functor: - raise TypeError("Unknown type requested to flatten: %s (%s)" - % (item, type(item))) + def _flatten(self, item, parent): + """Flattens an item (pattern, task) into a graph + tree node.""" + functor = self._find_flattener(item, parent) self._pre_item_flatten(item) - graph = functor(item) - self._post_item_flatten(item, graph) - return graph + graph, node = functor(item, parent) + self._post_item_flatten(item, graph, node) + return graph, node - def _find_flattener(self, item): + def _find_flattener(self, item, parent): """Locates the flattening function to use to flatten the given item.""" if isinstance(item, flow.Flow): return self._flatten_flow elif isinstance(item, task.BaseTask): return self._flatten_task elif isinstance(item, retry.Retry): - if len(self._history) == 1: - raise TypeError("Retry controller: %s (%s) must only be used" + if parent is None: + raise TypeError("Retry controller '%s' (%s) must only be used" " as a flow constructor parameter and not as a" " root component" % (item, type(item))) else: - # TODO(harlowja): we should raise this type error earlier - # instead of later since we should do this same check on add() - # calls, this makes the error more visible (instead of waiting - # until compile time).
- raise TypeError("Retry controller: %s (%s) must only be used" + raise TypeError("Retry controller '%s' (%s) must only be used" " as a flow constructor parameter and not as a" " flow added component" % (item, type(item))) else: - return None + raise TypeError("Unknown item '%s' (%s) requested to flatten" + % (item, type(item))) def _connect_retry(self, retry, graph): graph.add_node(retry) - # All graph nodes that have no predecessors should depend on its retry - nodes_to = [n for n in graph.no_predecessors_iter() if n != retry] - self._add_new_edges(graph, [retry], nodes_to, _RETRY_EDGE_DATA) + # All nodes that have no predecessors should depend on this retry. + nodes_to = [n for n in graph.no_predecessors_iter() if n is not retry] + if nodes_to: + _add_update_edges(graph, [retry], nodes_to, + attr_dict=_RETRY_EDGE_DATA) - # Add link to retry for each node of subgraph that hasn't - # a parent retry + # Add association for each node of graph that has no existing retry. for n in graph.nodes_iter(): - if n != retry and 'retry' not in graph.node[n]: - graph.node[n]['retry'] = retry + if n is not retry and flow.LINK_RETRY not in graph.node[n]: + graph.node[n][flow.LINK_RETRY] = retry - def _flatten_task(self, task): + def _flatten_task(self, task, parent): """Flattens an individual task.""" graph = gr.DiGraph(name=task.name) graph.add_node(task) - return graph + node = tr.Node(task) + if parent is not None: + parent.add(node) + return graph, node - def _flatten_flow(self, flow): - """Flattens a graph flow.""" + def _decompose_flow(self, flow, parent): + """Decomposes a flow into a graph, tree node + decomposed subgraphs.""" graph = gr.DiGraph(name=flow.name) - - # Flatten all nodes into a single subgraph per node.
- subgraph_map = {} + node = tr.Node(flow) + if parent is not None: + parent.add(node) + if flow.retry is not None: + node.add(tr.Node(flow.retry)) + decomposed_members = {} for item in flow: - subgraph = self._flatten(item) - subgraph_map[item] = subgraph - graph = gr.merge_graphs([graph, subgraph]) - - # Reconnect all node edges to their corresponding subgraphs. - for (u, v, attrs) in flow.iter_links(): - u_g = subgraph_map[u] - v_g = subgraph_map[v] - if any(attrs.get(k) for k in ('invariant', 'manual', 'retry')): - # Connect nodes with no predecessors in v to nodes with - # no successors in u (thus maintaining the edge dependency). - self._add_new_edges(graph, - u_g.no_successors_iter(), - v_g.no_predecessors_iter(), - edge_attrs=attrs) - else: - # This is dependency-only edge, connect corresponding - # providers and consumers. - for provider in u_g: - for consumer in v_g: - reasons = provider.provides & consumer.requires - if reasons: - graph.add_edge(provider, consumer, reasons=reasons) + subgraph, _subnode = self._flatten(item, node) + decomposed_members[item] = subgraph + if subgraph.number_of_nodes(): + graph = gr.merge_graphs([graph, subgraph]) + return graph, node, decomposed_members + def _flatten_flow(self, flow, parent): + """Flattens a flow.""" + graph, node, decomposed_members = self._decompose_flow(flow, parent) + self._linker.apply_constraints(graph, flow, decomposed_members) if flow.retry is not None: self._connect_retry(flow.retry, graph) - return graph + return graph, node def _pre_item_flatten(self, item): """Called before an item is flattened; any pre-flattening actions.""" - if id(item) in self._history: - raise ValueError("Already flattened item: %s (%s), recursive" - " flattening not supported" % (item, id(item))) - self._history.add(id(item)) + if item in self._history: + raise ValueError("Already flattened item '%s' (%s), recursive" + " flattening is not supported" % (item, + type(item))) + self._history.add(item) - def
_post_item_flatten(self, item, graph): - """Called before a item is flattened; any post-flattening actions.""" + def _post_item_flatten(self, item, graph, node): + """Called after an item is flattened; any post-flattening actions.""" def _pre_flatten(self): - """Called before the flattening of the item starts.""" + """Called before the flattening of the root starts.""" self._history.clear() - def _post_flatten(self, graph): - """Called after the flattening of the item finishes successfully.""" + def _post_flatten(self, graph, node): + """Called after the flattening of the root finishes successfully.""" dup_names = misc.get_duplicate_keys(graph.nodes_iter(), key=lambda node: node.name) if dup_names: - dup_names = ', '.join(sorted(dup_names)) - raise exc.Duplicate("Atoms with duplicate names " - "found: %s" % (dup_names)) + raise exc.Duplicate( + "Atoms with duplicate names found: %s" % (sorted(dup_names))) + if graph.number_of_nodes() == 0: + raise exc.Empty("Root container '%s' (%s) is empty" + % (self._root, type(self._root))) self._history.clear() # NOTE(harlowja): this one can be expensive to calculate (especially - # the cycle detection), so only do it if we know debugging is enabled + # the cycle detection), so only do it if we know BLATHER is enabled and not under all cases. - if LOG.isEnabledFor(logging.DEBUG): - LOG.debug("Translated '%s' into a graph:", self._root) + if LOG.isEnabledFor(logging.BLATHER): + LOG.blather("Translated '%s'", self._root) + LOG.blather("Graph:") for line in graph.pformat().splitlines(): # Indent it so that it's slightly offset from the above line. - LOG.debug(" %s", line) + LOG.blather(" %s", line) + LOG.blather("Hierarchy:") + for line in node.pformat().splitlines(): + # Indent it so that it's slightly offset from the above line.
+ LOG.blather(" %s", line) - def flatten(self): - """Flattens a item (a task or flow) into a single execution graph.""" - if self._graph is not None: - return self._graph - self._pre_flatten() - graph = self._flatten(self._root) - self._post_flatten(graph) - self._graph = graph - if self._freeze: - self._graph.freeze() - return self._graph + @lock_utils.locked + def compile(self): + """Compiles the contained item into a compiled equivalent.""" + if self._compilation is None: + self._pre_flatten() + graph, node = self._flatten(self._root, None) + self._post_flatten(graph, node) + if self._freeze: + graph.freeze() + node.freeze() + self._compilation = Compilation(graph, node) + return self._compilation diff --git a/taskflow/engines/action_engine/completer.py b/taskflow/engines/action_engine/completer.py new file mode 100644 index 00000000..958d5f03 --- /dev/null +++ b/taskflow/engines/action_engine/completer.py @@ -0,0 +1,114 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +from taskflow.engines.action_engine import executor as ex +from taskflow import retry as retry_atom +from taskflow import states as st +from taskflow import task as task_atom +from taskflow.types import failure + + +class Completer(object): + """Completes atoms using actions to complete them.""" + + def __init__(self, runtime): + self._runtime = runtime + self._analyzer = runtime.analyzer + self._retry_action = runtime.retry_action + self._storage = runtime.storage + self._task_action = runtime.task_action + + def _complete_task(self, task, event, result): + """Completes the given task, processes task failure.""" + if event == ex.EXECUTED: + self._task_action.complete_execution(task, result) + else: + self._task_action.complete_reversion(task, result) + + def resume(self): + """Resumes nodes in the contained graph. + + This is done to allow any previously completed or failed nodes to + be analyzed, their results processed and any potentially affected + nodes to be adjusted as needed. + + This should return a set of nodes which should be the initial set of + nodes that were previously not finished (due to a RUNNING or REVERTING + attempt not previously finishing). + """ + for node in self._analyzer.iterate_all_nodes(): + if self._analyzer.get_state(node) == st.FAILURE: + self._process_atom_failure(node, self._storage.get(node.name)) + for retry in self._analyzer.iterate_retries(st.RETRYING): + self._runtime.retry_subflow(retry) + unfinished_nodes = set() + for node in self._analyzer.iterate_all_nodes(): + if self._analyzer.get_state(node) in (st.RUNNING, st.REVERTING): + unfinished_nodes.add(node) + return unfinished_nodes + + def complete(self, node, event, result): + """Performs post-execution completion of a node. + + Returns whether the result should be saved into an accumulator of + failures or whether this should not be done.
+ """ + if isinstance(node, task_atom.BaseTask): + self._complete_task(node, event, result) + if isinstance(result, failure.Failure): + if event == ex.EXECUTED: + self._process_atom_failure(node, result) + else: + return True + return False + + def _process_atom_failure(self, atom, failure): + """Processes atom failure & applies resolution strategies. + + On atom failure this will find the atom's associated retry controller + and ask that controller for the strategy to perform to resolve that + failure. After getting a resolution strategy decision this method will + then adjust the needed other atoms' intentions and states so that + the failure can be worked around. + """ + retry = self._analyzer.find_atom_retry(atom) + if retry is not None: + # Ask retry controller what to do in case of failure + action = self._retry_action.on_failure(retry, atom, failure) + if action == retry_atom.RETRY: + # Prepare just the surrounding subflow for revert to be later + # retried... + self._storage.set_atom_intention(retry.name, st.RETRY) + self._runtime.reset_subgraph(retry, state=None, + intention=st.REVERT) + elif action == retry_atom.REVERT: + # Ask the parent retry (the next checkpoint up) what to do. + self._process_atom_failure(retry, failure) + elif action == retry_atom.REVERT_ALL: + # Prepare the whole flow for revert + self._revert_all() + else: + raise ValueError("Unknown atom failure resolution" + " action '%s'" % action) + else: + # Prepare the whole flow for revert + self._revert_all() + + def _revert_all(self): + """Attempts to set all nodes to the REVERT intention.""" + self._runtime.reset_nodes(self._analyzer.iterate_all_nodes(), + state=None, intention=st.REVERT) diff --git a/taskflow/engines/action_engine/engine.py b/taskflow/engines/action_engine/engine.py index a5f587fd..51df5698 100644 --- a/taskflow/engines/action_engine/engine.py +++ b/taskflow/engines/action_engine/engine.py @@ -14,21 +14,24 @@ # License for the specific language governing permissions and limitations # under the License.
+import collections import contextlib import threading +from concurrent import futures +from oslo_utils import excutils +import six + from taskflow.engines.action_engine import compiler from taskflow.engines.action_engine import executor from taskflow.engines.action_engine import runtime from taskflow.engines import base from taskflow import exceptions as exc -from taskflow.openstack.common import excutils -from taskflow import retry from taskflow import states from taskflow import storage as atom_storage +from taskflow.types import failure from taskflow.utils import lock_utils from taskflow.utils import misc -from taskflow.utils import reflection @contextlib.contextmanager @@ -41,7 +44,7 @@ def _start_stop(executor): executor.stop() -class ActionEngine(base.EngineBase): +class ActionEngine(base.Engine): """Generic action-based engine. This engine compiles the flow (and any subflows) into a compilation unit @@ -57,10 +60,9 @@ class ActionEngine(base.EngineBase): the tasks and flow being ran can go through. 
""" _compiler_factory = compiler.PatternCompiler - _task_executor_factory = executor.SerialTaskExecutor - def __init__(self, flow, flow_detail, backend, conf): - super(ActionEngine, self).__init__(flow, flow_detail, backend, conf) + def __init__(self, flow, flow_detail, backend, options): + super(ActionEngine, self).__init__(flow, flow_detail, backend, options) self._runtime = None self._compiled = False self._compilation = None @@ -68,9 +70,6 @@ class ActionEngine(base.EngineBase): self._state_lock = threading.RLock() self._storage_ensured = False - def __str__(self): - return "%s: %s" % (reflection.get_class_name(self), id(self)) - def suspend(self): if not self._compiled: raise exc.InvalidState("Can not suspend an engine" @@ -129,7 +128,7 @@ class ActionEngine(base.EngineBase): closed = False for (last_state, failures) in runner.run_iter(timeout=timeout): if failures: - misc.Failure.reraise_if_any(failures) + failure.Failure.reraise_if_any(failures) if closed: continue try: @@ -152,7 +151,7 @@ class ActionEngine(base.EngineBase): self._change_state(last_state) if last_state not in [states.SUSPENDED, states.SUCCESS]: failures = self.storage.get_failures() - misc.Failure.reraise_if_any(failures.values()) + failure.Failure.reraise_if_any(failures.values()) def _change_state(self, state): with self._state_lock: @@ -169,19 +168,11 @@ class ActionEngine(base.EngineBase): self.notifier.notify(state, details) def _ensure_storage(self): - # NOTE(harlowja): signal to the tasks that exist that we are about to - # resume, if they have a previous state, they will now transition to - # a resuming state (and then to suspended). 
- self._change_state(states.RESUMING) # does nothing in PENDING state + """Ensure all contained atoms exist in the storage unit.""" for node in self._compilation.execution_graph.nodes_iter(): - version = misc.get_version_string(node) - if isinstance(node, retry.Retry): - self.storage.ensure_retry(node.name, version, node.save_as) - else: - self.storage.ensure_task(node.name, version, node.save_as) + self.storage.ensure_atom(node) if node.inject: self.storage.inject_atom_args(node.name, node.inject) - self._change_state(states.SUSPENDED) # does nothing in PENDING state @lock_utils.locked def prepare(self): @@ -189,7 +180,12 @@ class ActionEngine(base.EngineBase): raise exc.InvalidState("Can not prepare an engine" " which has not been compiled") if not self._storage_ensured: + # Set our own state to resuming -> (ensure atoms exist + # in storage) -> suspended in the storage unit and notify any + # attached listeners of these changes. + self._change_state(states.RESUMING) self._ensure_storage() + self._change_state(states.SUSPENDED) self._storage_ensured = True # At this point we can check to ensure all dependencies are either # flow/task provided or storage provided, if there are still missing @@ -204,42 +200,162 @@ class ActionEngine(base.EngineBase): self._runtime.reset_all() self._change_state(states.PENDING) - @misc.cachedproperty - def _task_executor(self): - return self._task_executor_factory() - @misc.cachedproperty def _compiler(self): - return self._compiler_factory() + return self._compiler_factory(self._flow) @lock_utils.locked def compile(self): if self._compiled: return - self._compilation = self._compiler.compile(self._flow) + self._compilation = self._compiler.compile() self._runtime = runtime.Runtime(self._compilation, self.storage, - self.task_notifier, + self.atom_notifier, self._task_executor) self._compiled = True -class SingleThreadedActionEngine(ActionEngine): +class SerialActionEngine(ActionEngine): """Engine that runs tasks in serial 
manner.""" _storage_factory = atom_storage.SingleThreadedStorage + def __init__(self, flow, flow_detail, backend, options): + super(SerialActionEngine, self).__init__(flow, flow_detail, + backend, options) + self._task_executor = executor.SerialTaskExecutor() + + +class _ExecutorTypeMatch(collections.namedtuple('_ExecutorTypeMatch', + ['types', 'executor_cls'])): + def matches(self, executor): + return isinstance(executor, self.types) + + +class _ExecutorTextMatch(collections.namedtuple('_ExecutorTextMatch', + ['strings', 'executor_cls'])): + def matches(self, text): + return text.lower() in self.strings + + +class ParallelActionEngine(ActionEngine): + """Engine that runs tasks in parallel manner. + + Supported keyword arguments: + + * ``executor``: a object that implements a :pep:`3148` compatible executor + interface; it will be used for scheduling tasks. The following + type are applicable (other unknown types passed will cause a type + error to be raised). + +========================= =============================================== +Type provided Executor used +========================= =============================================== +|cft|.ThreadPoolExecutor :class:`~.executor.ParallelThreadTaskExecutor` +|cfp|.ProcessPoolExecutor :class:`~.executor.ParallelProcessTaskExecutor` +|cf|._base.Executor :class:`~.executor.ParallelThreadTaskExecutor` +========================= =============================================== + + * ``executor``: a string that will be used to select a :pep:`3148` + compatible executor; it will be used for scheduling tasks. The following + string are applicable (other unknown strings passed will cause a value + error to be raised). 
+ +=========================== =============================================== +String (case insensitive) Executor used +=========================== =============================================== +``process`` :class:`~.executor.ParallelProcessTaskExecutor` +``processes`` :class:`~.executor.ParallelProcessTaskExecutor` +``thread`` :class:`~.executor.ParallelThreadTaskExecutor` +``threaded`` :class:`~.executor.ParallelThreadTaskExecutor` +``threads`` :class:`~.executor.ParallelThreadTaskExecutor` +=========================== =============================================== + + .. |cfp| replace:: concurrent.futures.process + .. |cft| replace:: concurrent.futures.thread + .. |cf| replace:: concurrent.futures + """ -class MultiThreadedActionEngine(ActionEngine): - """Engine that runs tasks in parallel manner.""" _storage_factory = atom_storage.MultiThreadedStorage - def _task_executor_factory(self): - return executor.ParallelTaskExecutor(executor=self._executor, - max_workers=self._max_workers) + # One of these types should match when an object (non-string) is provided + # for the 'executor' option. + # + # NOTE(harlowja): the reason we use the library/built-in futures is to + allow for instances of that to be detected and handled correctly, instead + of forcing everyone to use our derivatives...
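The two option tables above describe a matcher-based selection scheme. A minimal, stdlib-only sketch of that scheme follows; the wrapper class names are stand-ins (plain strings) for the taskflow executor classes, and the matcher namedtuples mirror the ``_ExecutorTextMatch``/``_ExecutorTypeMatch`` helpers in the diff:

```python
import collections
from concurrent import futures

# Hypothetical stand-ins for the taskflow executor wrappers named above.
THREAD_EXECUTOR = 'ParallelThreadTaskExecutor'
PROCESS_EXECUTOR = 'ParallelProcessTaskExecutor'

_ExecutorTextMatch = collections.namedtuple('_ExecutorTextMatch',
                                            ['strings', 'executor_cls'])
_ExecutorTypeMatch = collections.namedtuple('_ExecutorTypeMatch',
                                            ['types', 'executor_cls'])

_STR_MATCHERS = [
    _ExecutorTextMatch(frozenset(['process', 'processes']), PROCESS_EXECUTOR),
    _ExecutorTextMatch(frozenset(['thread', 'threads', 'threaded']),
                       THREAD_EXECUTOR),
]
# Order matters: concrete pool types are checked before the generic base.
_CLS_MATCHERS = [
    _ExecutorTypeMatch((futures.ThreadPoolExecutor,), THREAD_EXECUTOR),
    _ExecutorTypeMatch((futures.ProcessPoolExecutor,), PROCESS_EXECUTOR),
    _ExecutorTypeMatch((futures.Executor,), THREAD_EXECUTOR),
]


def select_executor_cls(desired):
    """Resolve a string or executor instance to a wrapper class name."""
    if isinstance(desired, str):
        for m in _STR_MATCHERS:
            if desired.lower() in m.strings:
                return m.executor_cls
        raise ValueError("Unknown executor string '%s'" % desired)
    for m in _CLS_MATCHERS:
        if isinstance(desired, m.types):
            return m.executor_cls
    raise TypeError("Unknown executor '%s'" % desired)
```

Note that string matching is case-insensitive and unknown inputs raise ``ValueError`` (strings) or ``TypeError`` (objects), as the docstring above specifies.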
+ _executor_cls_matchers = [ + _ExecutorTypeMatch((futures.ThreadPoolExecutor,), + executor.ParallelThreadTaskExecutor), + _ExecutorTypeMatch((futures.ProcessPoolExecutor,), + executor.ParallelProcessTaskExecutor), + _ExecutorTypeMatch((futures.Executor,), + executor.ParallelThreadTaskExecutor), + ] - def __init__(self, flow, flow_detail, backend, conf, - executor=None, max_workers=None): - super(MultiThreadedActionEngine, self).__init__( - flow, flow_detail, backend, conf) - self._executor = executor - self._max_workers = max_workers + # One of these should match when a string/text is provided for the + # 'executor' option (a mixed case equivalent is allowed since the match + # will be lower-cased before checking). + _executor_str_matchers = [ + _ExecutorTextMatch(frozenset(['processes', 'process']), + executor.ParallelProcessTaskExecutor), + _ExecutorTextMatch(frozenset(['thread', 'threads', 'threaded']), + executor.ParallelThreadTaskExecutor), + ] + + # Used when no executor is provided (either a string or object)... + _default_executor_cls = executor.ParallelThreadTaskExecutor + + def __init__(self, flow, flow_detail, backend, options): + super(ParallelActionEngine, self).__init__(flow, flow_detail, + backend, options) + # This ensures that any provided executor will be validated before + # we get too far in the compilation/execution pipeline... + self._task_executor = self._fetch_task_executor(self._options) + + @classmethod + def _fetch_task_executor(cls, options): + kwargs = {} + executor_cls = cls._default_executor_cls + # Match the desired executor to a class that will work with it...
+ desired_executor = options.get('executor') + if isinstance(desired_executor, six.string_types): + matched_executor_cls = None + for m in cls._executor_str_matchers: + if m.matches(desired_executor): + matched_executor_cls = m.executor_cls + break + if matched_executor_cls is None: + expected = set() + for m in cls._executor_str_matchers: + expected.update(m.strings) + raise ValueError("Unknown executor string '%s' expected" + " one of %s (or mixed case equivalent)" + % (desired_executor, list(expected))) + else: + executor_cls = matched_executor_cls + elif desired_executor is not None: + matched_executor_cls = None + for m in cls._executor_cls_matchers: + if m.matches(desired_executor): + matched_executor_cls = m.executor_cls + break + if matched_executor_cls is None: + expected = set() + for m in cls._executor_cls_matchers: + expected.update(m.types) + raise TypeError("Unknown executor '%s' (%s) expected an" + " instance of %s" % (desired_executor, + type(desired_executor), + list(expected))) + else: + executor_cls = matched_executor_cls + kwargs['executor'] = desired_executor + for k in getattr(executor_cls, 'OPTIONS', []): + if k == 'executor': + continue + try: + kwargs[k] = options[k] + except KeyError: + pass + return executor_cls(**kwargs) diff --git a/taskflow/engines/action_engine/executor.py b/taskflow/engines/action_engine/executor.py index b2bdbdae..b271beb8 100644 --- a/taskflow/engines/action_engine/executor.py +++ b/taskflow/engines/action_engine/executor.py @@ -15,52 +15,315 @@ # under the License. 
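The tail of ``_fetch_task_executor`` above forwards only the engine options that the chosen executor class declares in its ``OPTIONS`` frozenset. That filtering step can be sketched in isolation (function and parameter names here are illustrative, not taskflow's API):

```python
def filter_executor_options(supported_options, engine_options):
    """Keep only the engine options an executor class supports, skipping
    'executor' itself (it is handled separately by the engine)."""
    kwargs = {}
    for k in supported_options:
        if k == 'executor':
            continue
        try:
            kwargs[k] = engine_options[k]
        except KeyError:
            # Unset options simply fall back to the executor's defaults.
            pass
    return kwargs
```

This keeps unrelated engine options from reaching an executor constructor that would reject them.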
import abc +import collections +from multiprocessing import managers +import os +import pickle -from concurrent import futures +from oslo_utils import excutils +from oslo_utils import reflection +from oslo_utils import timeutils +from oslo_utils import uuidutils import six +from six.moves import queue as compat_queue +from taskflow import logging +from taskflow import task as task_atom +from taskflow.types import failure +from taskflow.types import futures +from taskflow.types import notifier +from taskflow.types import timing from taskflow.utils import async_utils -from taskflow.utils import misc from taskflow.utils import threading_utils # Execution and reversion events. EXECUTED = 'executed' REVERTED = 'reverted' +# See http://bugs.python.org/issue1457119 for why this is so complex... +_PICKLE_ERRORS = [pickle.PickleError, TypeError] +try: + import cPickle as _cPickle + _PICKLE_ERRORS.append(_cPickle.PickleError) +except ImportError: + pass +_PICKLE_ERRORS = tuple(_PICKLE_ERRORS) +_SEND_ERRORS = (IOError, EOFError) +_UPDATE_PROGRESS = task_atom.EVENT_UPDATE_PROGRESS -def _execute_task(task, arguments, progress_callback): - with task.autobind('update_progress', progress_callback): +# Message types/kind sent from worker/child processes... +_KIND_COMPLETE_ME = 'complete_me' +_KIND_EVENT = 'event' + +LOG = logging.getLogger(__name__) + + +def _execute_task(task, arguments, progress_callback=None): + with notifier.register_deregister(task.notifier, + _UPDATE_PROGRESS, + callback=progress_callback): try: task.pre_execute() result = task.execute(**arguments) except Exception: # NOTE(imelnikov): wrap current exception with Failure # object and return it. 
- result = misc.Failure() finally: task.post_execute() - return (task, EXECUTED, result) + return (EXECUTED, result) -def _revert_task(task, arguments, result, failures, progress_callback): - kwargs = arguments.copy() - kwargs['result'] = result - kwargs['flow_failures'] = failures - with task.autobind('update_progress', progress_callback): +def _revert_task(task, arguments, result, failures, progress_callback=None): + arguments = arguments.copy() + arguments[task_atom.REVERT_RESULT] = result + arguments[task_atom.REVERT_FLOW_FAILURES] = failures + with notifier.register_deregister(task.notifier, + _UPDATE_PROGRESS, + callback=progress_callback): try: task.pre_revert() - result = task.revert(**kwargs) + result = task.revert(**arguments) except Exception: # NOTE(imelnikov): wrap current exception with Failure # object and return it. - result = misc.Failure() + result = failure.Failure() finally: task.post_revert() - return (task, REVERTED, result) + return (REVERTED, result) + + +class _ViewableSyncManager(managers.SyncManager): + """Manager that exposes its state as methods.""" + + def is_shutdown(self): + return self._state.value == managers.State.SHUTDOWN + + def is_running(self): + return self._state.value == managers.State.STARTED + + +class _Channel(object): + """Helper wrapper around a multiprocessing queue used by a worker.""" + + def __init__(self, queue, identity): + self._queue = queue + self._identity = identity + self._sent_messages = collections.defaultdict(int) + self._pid = None + + @property + def sent_messages(self): + return self._sent_messages + + def put(self, message): + # NOTE(harlowja): this is done late in execution to ensure that this + # happens in the child process and not the parent process (where the + # constructor is called).
+ if self._pid is None: + self._pid = os.getpid() + message.update({ + 'sent_on': timeutils.utcnow(), + 'sender': { + 'pid': self._pid, + 'id': self._identity, + }, + }) + if 'body' not in message: + message['body'] = {} + try: + self._queue.put(message) + except _PICKLE_ERRORS: + LOG.warn("Failed serializing message %s", message, exc_info=True) + return False + except _SEND_ERRORS: + LOG.warn("Failed sending message %s", message, exc_info=True) + return False + else: + self._sent_messages[message['kind']] += 1 + return True + + +class _WaitWorkItem(object): + """The piece of work that will be executed by a process executor. + + This will call the target function, then wait until the task's emitted + events/items have been depleted before officially being finished. + + NOTE(harlowja): this is done so that the task function will *not* return + until all of its notifications have been proxied back to its originating + task. If we didn't do this then the executor would see this task as done + and then potentially start tasks that are successors of the task that just + finished even though notifications are still left to be sent from the + previously finished task...
+ """ + + def __init__(self, channel, barrier, + func, task, *args, **kwargs): + self._channel = channel + self._barrier = barrier + self._func = func + self._task = task + self._args = args + self._kwargs = kwargs + + def _on_finish(self): + sent_events = self._channel.sent_messages.get(_KIND_EVENT, 0) + if sent_events: + message = { + 'created_on': timeutils.utcnow(), + 'kind': _KIND_COMPLETE_ME, + } + if self._channel.put(message): + watch = timing.StopWatch() + watch.start() + self._barrier.wait() + LOG.blather("Waited %s seconds until task '%s' %s emitted" + " notifications were depleted", watch.elapsed(), + self._task, sent_events) + + def __call__(self): + args = self._args + kwargs = self._kwargs + try: + return self._func(self._task, *args, **kwargs) + finally: + self._on_finish() + + +class _EventSender(object): + """Sends event information from a child worker process to its creator.""" + + def __init__(self, channel): + self._channel = channel + + def __call__(self, event_type, details): + message = { + 'created_on': timeutils.utcnow(), + 'kind': _KIND_EVENT, + 'body': { + 'event_type': event_type, + 'details': details, + }, + } + self._channel.put(message) + + +class _Target(object): + """An immutable helper object that represents a target of a message.""" + + def __init__(self, task, barrier, identity): + self.task = task + self.barrier = barrier + self.identity = identity + # Counters used to track how many message 'kinds' were proxied... + self.dispatched = collections.defaultdict(int) + + def __repr__(self): + return "<%s at 0x%x targeting '%s' with identity '%s'>" % ( + reflection.get_class_name(self), id(self), + self.task, self.identity) + + +class _Dispatcher(object): + """Dispatches messages received from child worker processes.""" + + # When the run() method is busy (typically in a thread) we want to set + # these so that the thread can know how long to sleep when there is no + # active work to dispatch. 
+ _SPIN_PERIODICITY = 0.01 + + def __init__(self, dispatch_periodicity=None): + if dispatch_periodicity is None: + dispatch_periodicity = self._SPIN_PERIODICITY + if dispatch_periodicity <= 0: + raise ValueError("Provided dispatch periodicity must be greater" + " than zero and not '%s'" % dispatch_periodicity) + self._targets = {} + self._dead = threading_utils.Event() + self._dispatch_periodicity = dispatch_periodicity + self._stop_when_empty = False + + def register(self, identity, target): + self._targets[identity] = target + + def deregister(self, identity): + try: + target = self._targets.pop(identity) + except KeyError: + pass + else: + # Just in case, set the barrier to unblock any worker... + target.barrier.set() + if LOG.isEnabledFor(logging.BLATHER): + LOG.blather("Dispatched %s messages %s to target '%s' during" + " the lifetime of its existence in the dispatcher", + sum(six.itervalues(target.dispatched)), + dict(target.dispatched), target) + + def reset(self): + self._stop_when_empty = False + self._dead.clear() + if self._targets: + leftover = set(six.iterkeys(self._targets)) + while leftover: + self.deregister(leftover.pop()) + + def interrupt(self): + self._stop_when_empty = True + self._dead.set() + + def _dispatch(self, message): + if LOG.isEnabledFor(logging.BLATHER): + LOG.blather("Dispatching message %s (it took %s seconds" + " for it to arrive for processing after being" + " sent)", message, + timeutils.delta_seconds(message['sent_on'], + timeutils.utcnow())) + try: + kind = message['kind'] + sender = message['sender'] + body = message['body'] + except (KeyError, ValueError, TypeError): + LOG.warn("Badly formatted message %s received", message, + exc_info=True) + return + target = self._targets.get(sender['id']) + if target is None: + # Must have been removed...
+ return + if kind == _KIND_COMPLETE_ME: + target.dispatched[kind] += 1 + target.barrier.set() + elif kind == _KIND_EVENT: + task = target.task + target.dispatched[kind] += 1 + task.notifier.notify(body['event_type'], body['details']) + else: + LOG.warn("Unknown message '%s' found in message from sender" + " %s to target '%s'", kind, sender, target) + + def run(self, queue): + watch = timing.StopWatch(duration=self._dispatch_periodicity) + while (not self._dead.is_set() or + (self._stop_when_empty and self._targets)): + watch.restart() + leftover = watch.leftover() + while leftover: + try: + message = queue.get(timeout=leftover) + except compat_queue.Empty: + break + else: + self._dispatch(message) + leftover = watch.leftover() + leftover = watch.leftover() + if leftover: + self._dead.wait(leftover) @six.add_metaclass(abc.ABCMeta) -class TaskExecutorBase(object): +class TaskExecutor(object): """Executes and reverts tasks. This class takes task and its arguments and executes or reverts it. 
@@ -69,7 +332,8 @@ class TaskExecutorBase(object): """ @abc.abstractmethod - def execute_task(self, task, task_uuid, arguments, progress_callback=None): + def execute_task(self, task, task_uuid, arguments, + progress_callback=None): """Schedules task execution.""" @abc.abstractmethod @@ -77,9 +341,9 @@ class TaskExecutorBase(object): progress_callback=None): """Schedules task reversion.""" - @abc.abstractmethod def wait_for_any(self, fs, timeout=None): """Wait for futures returned by this executor to complete.""" + return async_utils.wait_for_any(fs, timeout=timeout) def start(self): """Prepare to execute tasks.""" @@ -90,58 +354,221 @@ class TaskExecutorBase(object): pass -class SerialTaskExecutor(TaskExecutorBase): - """Execute task one after another.""" +class SerialTaskExecutor(TaskExecutor): + """Executes tasks one after another.""" + + def __init__(self): + self._executor = futures.SynchronousExecutor() + + def start(self): + self._executor.restart() + + def stop(self): + self._executor.shutdown() def execute_task(self, task, task_uuid, arguments, progress_callback=None): - return async_utils.make_completed_future( - _execute_task(task, arguments, progress_callback)) + fut = self._executor.submit(_execute_task, + task, arguments, + progress_callback=progress_callback) + fut.atom = task + return fut def revert_task(self, task, task_uuid, arguments, result, failures, progress_callback=None): - return async_utils.make_completed_future( - _revert_task(task, arguments, result, - failures, progress_callback)) - - def wait_for_any(self, fs, timeout=None): - # NOTE(imelnikov): this executor returns only done futures. - return (fs, set()) + fut = self._executor.submit(_revert_task, + task, arguments, result, failures, + progress_callback=progress_callback) + fut.atom = task + return fut -class ParallelTaskExecutor(TaskExecutorBase): +class ParallelTaskExecutor(TaskExecutor): """Executes tasks in parallel. 
Submits tasks to an executor which should provide an interface similar to concurrent.Futures.Executor. """ + #: Options this executor supports (passed in from engine options). + OPTIONS = frozenset(['max_workers']) + def __init__(self, executor=None, max_workers=None): self._executor = executor self._max_workers = max_workers - self._create_executor = executor is None + self._own_executor = executor is None + + @abc.abstractmethod + def _create_executor(self, max_workers=None): + """Called when an executor has not been provided to make one.""" + + def _submit_task(self, func, task, *args, **kwargs): + fut = self._executor.submit(func, task, *args, **kwargs) + fut.atom = task + return fut def execute_task(self, task, task_uuid, arguments, progress_callback=None): - return self._executor.submit( - _execute_task, task, arguments, progress_callback) + return self._submit_task(_execute_task, task, arguments, + progress_callback=progress_callback) def revert_task(self, task, task_uuid, arguments, result, failures, progress_callback=None): - return self._executor.submit( - _revert_task, task, - arguments, result, failures, progress_callback) - - def wait_for_any(self, fs, timeout=None): - return async_utils.wait_for_any(fs, timeout) + return self._submit_task(_revert_task, task, arguments, result, + failures, progress_callback=progress_callback) def start(self): - if self._create_executor: + if self._own_executor: if self._max_workers is not None: max_workers = self._max_workers else: max_workers = threading_utils.get_optimal_thread_count() - self._executor = futures.ThreadPoolExecutor(max_workers) + self._executor = self._create_executor(max_workers=max_workers) def stop(self): - if self._create_executor: + if self._own_executor: self._executor.shutdown(wait=True) self._executor = None + + +class ParallelThreadTaskExecutor(ParallelTaskExecutor): + """Executes tasks in parallel using a thread pool executor.""" + + def _create_executor(self, max_workers=None): + return 
futures.ThreadPoolExecutor(max_workers=max_workers) + + +class ParallelProcessTaskExecutor(ParallelTaskExecutor): + """Executes tasks in parallel using a process pool executor. + + NOTE(harlowja): this executor executes tasks in external processes, which + implies that tasks sent to those external processes must be pickleable, + since this is how multiprocessing works (sending pickled objects back + and forth), and that the bound handlers (for progress updating in + particular) are proxied correctly from the external process to the one + that is alive in the parent process, to ensure that callbacks registered + in the parent are executed on events in the child. + """ + + #: Options this executor supports (passed in from engine options). + OPTIONS = frozenset(['max_workers', 'dispatch_periodicity']) + + def __init__(self, executor=None, max_workers=None, + dispatch_periodicity=None): + super(ParallelProcessTaskExecutor, self).__init__( + executor=executor, max_workers=max_workers) + self._manager = _ViewableSyncManager() + self._dispatcher = _Dispatcher( + dispatch_periodicity=dispatch_periodicity) + # Only created after starting... + self._worker = None + self._queue = None + + def _create_executor(self, max_workers=None): + return futures.ProcessPoolExecutor(max_workers=max_workers) + + def start(self): + if threading_utils.is_alive(self._worker): + raise RuntimeError("Worker thread must be stopped via stop()" + " before starting/restarting") + super(ParallelProcessTaskExecutor, self).start() + # These don't seem restartable; make a new one...
+ if self._manager.is_shutdown(): + self._manager = _ViewableSyncManager() + if not self._manager.is_running(): + self._manager.start() + self._dispatcher.reset() + self._queue = self._manager.Queue() + self._worker = threading_utils.daemon_thread(self._dispatcher.run, + self._queue) + self._worker.start() + + def stop(self): + self._dispatcher.interrupt() + super(ParallelProcessTaskExecutor, self).stop() + if threading_utils.is_alive(self._worker): + self._worker.join() + self._worker = None + self._queue = None + self._dispatcher.reset() + self._manager.shutdown() + self._manager.join() + + def _rebind_task(self, task, clone, channel, progress_callback=None): + # Creates and binds proxies for all events the task could receive + # so that when the clone runs in another process that this task + # can receive the same notifications (thus making it look like + # the notifications are transparently happening in this process). + needed = set() + for (event_type, listeners) in task.notifier.listeners_iter(): + if listeners: + needed.add(event_type) + if progress_callback is not None: + needed.add(_UPDATE_PROGRESS) + if needed: + sender = _EventSender(channel) + for event_type in needed: + clone.notifier.register(event_type, sender) + + def _submit_task(self, func, task, *args, **kwargs): + """Submit a function to run the given task (with given args/kwargs). + + NOTE(harlowja): Adjust all events to be proxies instead since we want + those callbacks to be activated in this process, not in the child; + also, since callbacks are typically functors (or callables) we can + not pickle those in the first place...
+ + To make sure people understand how this works, the following is a + lengthy description of what is going on here, read at will: + + So to ensure that we are proxying task-triggered events that occur + in the executed subprocess (which will be created and used by the + thing using the multiprocessing based executor) we need to establish + a link between that process and this process that ensures that when an + event is triggered in that task in that process that a corresponding + event is triggered on the original task that was requested to be run + in this process. + + To accomplish this we have to create a copy of the task (without + any listeners) and then reattach a new set of listeners that will + now instead of calling the desired listeners just place messages + for this process (a dispatcher thread that is created in this class) + to dispatch to the original task (using a common queue + per task + sender identity/target that is used and associated to know which task + to proxy back to, since it is possible that there may be *many* + subprocesses running at the same time, each running a different task + and using the same common queue to submit messages back to). + + Once the subprocess task has finished execution, the executor will + then trigger a callback that will remove the task + target from the + dispatcher (which will stop any further proxying back to the original + task).
+ """ + progress_callback = kwargs.pop('progress_callback', None) + clone = task.copy(retain_listeners=False) + identity = uuidutils.generate_uuid() + target = _Target(task, self._manager.Event(), identity) + channel = _Channel(self._queue, identity) + self._rebind_task(task, clone, channel, + progress_callback=progress_callback) + + def register(): + if progress_callback is not None: + task.notifier.register(_UPDATE_PROGRESS, progress_callback) + self._dispatcher.register(identity, target) + + def deregister(): + if progress_callback is not None: + task.notifier.deregister(_UPDATE_PROGRESS, progress_callback) + self._dispatcher.deregister(identity) + + register() + work = _WaitWorkItem(channel, target.barrier, + func, clone, *args, **kwargs) + try: + fut = self._executor.submit(work) + except RuntimeError: + with excutils.save_and_reraise_exception(): + deregister() + + fut.atom = task + fut.add_done_callback(lambda fut: deregister()) + return fut diff --git a/taskflow/engines/action_engine/retry_action.py b/taskflow/engines/action_engine/retry_action.py deleted file mode 100644 index afdfb456..00000000 --- a/taskflow/engines/action_engine/retry_action.py +++ /dev/null @@ -1,86 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
- -import logging - -from taskflow.engines.action_engine import executor as ex -from taskflow import states -from taskflow.utils import async_utils -from taskflow.utils import misc - -LOG = logging.getLogger(__name__) - -SAVE_RESULT_STATES = (states.SUCCESS, states.FAILURE) - - -class RetryAction(object): - def __init__(self, storage, notifier): - self._storage = storage - self._notifier = notifier - - def _get_retry_args(self, retry): - kwargs = self._storage.fetch_mapped_args(retry.rebind, - atom_name=retry.name) - kwargs['history'] = self._storage.get_retry_history(retry.name) - return kwargs - - def change_state(self, retry, state, result=None): - if state in SAVE_RESULT_STATES: - self._storage.save(retry.name, result, state) - elif state == states.REVERTED: - self._storage.cleanup_retry_history(retry.name, state) - else: - old_state = self._storage.get_atom_state(retry.name) - if state == old_state: - # NOTE(imelnikov): nothing really changed, so we should not - # write anything to storage and run notifications - return - self._storage.set_atom_state(retry.name, state) - retry_uuid = self._storage.get_atom_uuid(retry.name) - details = dict(retry_name=retry.name, - retry_uuid=retry_uuid, - result=result) - self._notifier.notify(state, details) - - def execute(self, retry): - self.change_state(retry, states.RUNNING) - kwargs = self._get_retry_args(retry) - try: - result = retry.execute(**kwargs) - except Exception: - result = misc.Failure() - self.change_state(retry, states.FAILURE, result=result) - else: - self.change_state(retry, states.SUCCESS, result=result) - return async_utils.make_completed_future((retry, ex.EXECUTED, result)) - - def revert(self, retry): - self.change_state(retry, states.REVERTING) - kwargs = self._get_retry_args(retry) - kwargs['flow_failures'] = self._storage.get_failures() - try: - result = retry.revert(**kwargs) - except Exception: - result = misc.Failure() - self.change_state(retry, states.FAILURE) - else: - self.change_state(retry, 
states.REVERTED) - return async_utils.make_completed_future((retry, ex.REVERTED, result)) - - def on_failure(self, retry, atom, last_failure): - self._storage.save_retry_failure(retry.name, atom.name, last_failure) - kwargs = self._get_retry_args(retry) - return retry.on_failure(**kwargs) diff --git a/taskflow/engines/action_engine/runner.py b/taskflow/engines/action_engine/runner.py index 7a0b9c87..f1f880ce 100644 --- a/taskflow/engines/action_engine/runner.py +++ b/taskflow/engines/action_engine/runner.py @@ -14,11 +14,10 @@ # License for the specific language governing permissions and limitations # under the License. -import logging - +from taskflow import logging from taskflow import states as st +from taskflow.types import failure from taskflow.types import fsm -from taskflow.utils import misc # Waiting state timeout (in seconds). _WAITING_TIMEOUT = 60 @@ -28,6 +27,17 @@ _UNDEFINED = 'UNDEFINED' _GAME_OVER = 'GAME_OVER' _META_STATES = (_GAME_OVER, _UNDEFINED) +# Event name constants the state machine uses. 
+_SCHEDULE = 'schedule_next'
+_WAIT = 'wait_finished'
+_ANALYZE = 'examine_finished'
+_FINISH = 'completed'
+_FAILED = 'failed'
+_SUSPENDED = 'suspended'
+_SUCCESS = 'success'
+_REVERTED = 'reverted'
+_START = 'start'
+
 LOG = logging.getLogger(__name__)
@@ -46,25 +56,25 @@ class _MachineBuilder(object):
 
     NOTE(harlowja): the machine states that this builder will build are::
 
-        +--------------+-----------+------------+----------+---------+
-        |    Start     |   Event   |    End     | On Enter | On Exit |
-        +--------------+-----------+------------+----------+---------+
-        |  ANALYZING   | finished  | GAME_OVER  | on_enter | on_exit |
-        |  ANALYZING   | schedule  | SCHEDULING | on_enter | on_exit |
-        |  ANALYZING   |   wait    |  WAITING   | on_enter | on_exit |
-        |  FAILURE[$]  |           |            |          |         |
-        |  GAME_OVER   |  failed   |  FAILURE   | on_enter | on_exit |
-        |  GAME_OVER   | reverted  |  REVERTED  | on_enter | on_exit |
-        |  GAME_OVER   |  success  |  SUCCESS   | on_enter | on_exit |
-        |  GAME_OVER   | suspended | SUSPENDED  | on_enter | on_exit |
-        |   RESUMING   | schedule  | SCHEDULING | on_enter | on_exit |
-        | REVERTED[$]  |           |            |          |         |
-        |  SCHEDULING  |   wait    |  WAITING   | on_enter | on_exit |
-        |  SUCCESS[$]  |           |            |          |         |
-        | SUSPENDED[$] |           |            |          |         |
-        | UNDEFINED[^] |   start   |  RESUMING  | on_enter | on_exit |
-        |   WAITING    |  analyze  | ANALYZING  | on_enter | on_exit |
-        +--------------+-----------+------------+----------+---------+
+        +--------------+------------------+------------+----------+---------+
+            Start      |      Event       |    End     | On Enter | On Exit
+        +--------------+------------------+------------+----------+---------+
+          ANALYZING    |    completed     | GAME_OVER  |          |
+          ANALYZING    |  schedule_next   | SCHEDULING |          |
+          ANALYZING    |  wait_finished   |  WAITING   |          |
+          FAILURE[$]   |                  |            |          |
+          GAME_OVER    |      failed      |  FAILURE   |          |
+          GAME_OVER    |     reverted     |  REVERTED  |          |
+          GAME_OVER    |     success      |  SUCCESS   |          |
+          GAME_OVER    |    suspended     | SUSPENDED  |          |
+          RESUMING     |  schedule_next   | SCHEDULING |          |
+          REVERTED[$]  |                  |            |          |
+          SCHEDULING   |  wait_finished   |  WAITING   |          |
+          SUCCESS[$]   |                  |            |          |
+          SUSPENDED[$] |                  |            |          |
+          UNDEFINED[^] |      start       |  RESUMING  |          |
+          WAITING      | examine_finished | ANALYZING  |          |
+        +--------------+------------------+------------+----------+---------+
 
     Between any of these yielded states (minus ``GAME_OVER`` and ``UNDEFINED``)
     if the engine has been suspended or the engine has failed (due to a
@@ -89,21 +99,34 @@ class _MachineBuilder(object):
         timeout = _WAITING_TIMEOUT
 
         def resume(old_state, new_state, event):
+            # This reaction function just updates the state machine's memory
+            # to include any nodes that need to be executed (from a previous
+            # attempt, which may be empty if it never ran before) and any
+            # nodes that are now ready to be run.
             memory.next_nodes.update(self._completer.resume())
             memory.next_nodes.update(self._analyzer.get_next_nodes())
-            return 'schedule'
+            return _SCHEDULE
 
         def game_over(old_state, new_state, event):
+            # This reaction function is mainly an intermediary delegation
+            # function that analyzes the current memory and transitions to
+            # the appropriate handler that will deal with the memory values;
+            # it is *always* called before the final state is entered.
             if memory.failures:
-                return 'failed'
+                return _FAILED
             if self._analyzer.get_next_nodes():
-                return 'suspended'
+                return _SUSPENDED
             elif self._analyzer.is_success():
-                return 'success'
+                return _SUCCESS
             else:
-                return 'reverted'
+                return _REVERTED
 
         def schedule(old_state, new_state, event):
+            # This reaction function starts to schedule the memory's next
+            # nodes (iff the engine is still runnable, which it may not be
+            # if the user of this engine has requested the engine/storage
+            # that holds this information to stop or suspend); it handles
+            # failures that occur during this process safely...
             if self.runnable() and memory.next_nodes:
                 not_done, failures = self._scheduler.schedule(
                     memory.next_nodes)
@@ -112,7 +135,7 @@
             if failures:
                 memory.failures.extend(failures)
             memory.next_nodes.clear()
-            return 'wait'
+            return _WAIT
 
         def wait(old_state, new_state, event):
             # TODO(harlowja): maybe we should start doing 'yield from' this
@@ -123,33 +146,55 @@
                                                          timeout)
             memory.done.update(done)
             memory.not_done = not_done
-            return 'analyze'
+            return _ANALYZE
 
         def analyze(old_state, new_state, event):
+            # This reaction function is responsible for analyzing all nodes
+            # that have finished executing, completing them, and figuring
+            # out what nodes are now ready to be run (and then triggering
+            # those nodes to be scheduled in the future); it handles failures
+            # that occur during this process safely...
             next_nodes = set()
             while memory.done:
                 fut = memory.done.pop()
+                node = fut.atom
                 try:
-                    node, event, result = fut.result()
+                    event, result = fut.result()
                     retain = self._completer.complete(node, event, result)
-                    if retain and isinstance(result, misc.Failure):
-                        memory.failures.append(result)
+                    if isinstance(result, failure.Failure):
+                        if retain:
+                            memory.failures.append(result)
+                        else:
+                            # NOTE(harlowja): avoid making any
+                            # intention request to storage unless we are
+                            # sure we are in DEBUG enabled logging (otherwise
+                            # we will call this all the time even when DEBUG
+                            # is not enabled, which would suck...)
+                            if LOG.isEnabledFor(logging.DEBUG):
+                                intention = self._storage.get_atom_intention(
+                                    node.name)
+                                LOG.debug("Discarding failure '%s' (in"
+                                          " response to event '%s') under"
+                                          " completion units request during"
+                                          " completion of node '%s' (intention"
+                                          " is to %s)", result, event,
+                                          node, intention)
                 except Exception:
-                    memory.failures.append(misc.Failure())
+                    memory.failures.append(failure.Failure())
                 else:
                     try:
                         more_nodes = self._analyzer.get_next_nodes(node)
                     except Exception:
-                        memory.failures.append(misc.Failure())
+                        memory.failures.append(failure.Failure())
                     else:
                         next_nodes.update(more_nodes)
             if self.runnable() and next_nodes and not memory.failures:
                 memory.next_nodes.update(next_nodes)
-                return 'schedule'
+                return _SCHEDULE
             elif memory.not_done:
-                return 'wait'
+                return _WAIT
             else:
-                return 'finished'
+                return _FINISH
 
         def on_exit(old_state, event):
             LOG.debug("Exiting old state '%s' in response to event '%s'",
@@ -178,24 +223,25 @@
         m.add_state(st.WAITING, **watchers)
         m.add_state(st.FAILURE, terminal=True, **watchers)
 
-        m.add_transition(_GAME_OVER, st.REVERTED, 'reverted')
-        m.add_transition(_GAME_OVER, st.SUCCESS, 'success')
-        m.add_transition(_GAME_OVER, st.SUSPENDED, 'suspended')
-        m.add_transition(_GAME_OVER, st.FAILURE, 'failed')
-        m.add_transition(_UNDEFINED, st.RESUMING, 'start')
-        m.add_transition(st.ANALYZING, _GAME_OVER, 'finished')
-        m.add_transition(st.ANALYZING, st.SCHEDULING, 'schedule')
-        m.add_transition(st.ANALYZING, st.WAITING, 'wait')
-        m.add_transition(st.RESUMING, st.SCHEDULING, 'schedule')
-        m.add_transition(st.SCHEDULING, st.WAITING, 'wait')
-        m.add_transition(st.WAITING, st.ANALYZING, 'analyze')
+        m.add_transition(_GAME_OVER, st.REVERTED, _REVERTED)
+        m.add_transition(_GAME_OVER, st.SUCCESS, _SUCCESS)
+        m.add_transition(_GAME_OVER, st.SUSPENDED, _SUSPENDED)
+        m.add_transition(_GAME_OVER, st.FAILURE, _FAILED)
+        m.add_transition(_UNDEFINED, st.RESUMING, _START)
+        m.add_transition(st.ANALYZING, _GAME_OVER, _FINISH)
+        m.add_transition(st.ANALYZING, st.SCHEDULING, _SCHEDULE)
+        m.add_transition(st.ANALYZING, st.WAITING, _WAIT)
+        m.add_transition(st.RESUMING, st.SCHEDULING, _SCHEDULE)
+        m.add_transition(st.SCHEDULING, st.WAITING, _WAIT)
+        m.add_transition(st.WAITING, st.ANALYZING, _ANALYZE)
 
-        m.add_reaction(_GAME_OVER, 'finished', game_over)
-        m.add_reaction(st.ANALYZING, 'analyze', analyze)
-        m.add_reaction(st.RESUMING, 'start', resume)
-        m.add_reaction(st.SCHEDULING, 'schedule', schedule)
-        m.add_reaction(st.WAITING, 'wait', wait)
+        m.add_reaction(_GAME_OVER, _FINISH, game_over)
+        m.add_reaction(st.ANALYZING, _ANALYZE, analyze)
+        m.add_reaction(st.RESUMING, _START, resume)
+        m.add_reaction(st.SCHEDULING, _SCHEDULE, schedule)
+        m.add_reaction(st.WAITING, _WAIT, wait)
+        m.freeze()
 
         return (m, memory)
 
@@ -230,7 +276,7 @@ class Runner(object):
     def run_iter(self, timeout=None):
         """Runs the nodes using a built state machine."""
         machine, memory = self.builder.build(timeout=timeout)
-        for (_prior_state, new_state) in machine.run_iter('start'):
+        for (_prior_state, new_state) in machine.run_iter(_START):
             # NOTE(harlowja): skip over meta-states.
             if new_state not in _META_STATES:
                 if new_state == st.FAILURE:
diff --git a/taskflow/engines/action_engine/runtime.py b/taskflow/engines/action_engine/runtime.py
index 90913b99..169a6415 100644
--- a/taskflow/engines/action_engine/runtime.py
+++ b/taskflow/engines/action_engine/runtime.py
@@ -14,15 +14,14 @@
 # License for the specific language governing permissions and limitations
 # under the License.
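
The wiring above pairs each transition with a reaction (via ``add_reaction``) whose return value is the next event fed back into the machine, and ``run_iter`` yields the resulting state pairs. A minimal sketch of that pattern, using a hypothetical ``MiniMachine`` stand-in for ``taskflow.types.fsm`` (reactions here are keyed by state only, and take no arguments, for brevity):

```python
class MiniMachine(object):
    """A tiny event-driven state machine where reactions pick the next event."""

    def __init__(self, start_state):
        self._transitions = {}  # (state, event) -> next state
        self._reactions = {}    # state -> callback returning the next event
        self._start = start_state

    def add_transition(self, start, end, event):
        self._transitions[(start, event)] = end

    def add_reaction(self, state, event, reaction):
        # 'event' is the event that causes entry; kept only for parity
        # with the add_transition signature in the diff above.
        self._reactions[state] = reaction

    def run_iter(self, event):
        state = self._start
        while True:
            try:
                new_state = self._transitions[(state, event)]
            except KeyError:
                raise RuntimeError("No transition from '%s' for event '%s'"
                                   % (state, event))
            yield (state, new_state)
            state = new_state
            reaction = self._reactions.get(state)
            if reaction is None:
                return  # Terminal state: nothing schedules a next event.
            event = reaction()


m = MiniMachine('UNDEFINED')
m.add_transition('UNDEFINED', 'RESUMING', 'start')
m.add_transition('RESUMING', 'SCHEDULING', 'schedule_next')
m.add_transition('SCHEDULING', 'GAME_OVER', 'completed')
m.add_reaction('RESUMING', 'start', lambda: 'schedule_next')
m.add_reaction('SCHEDULING', 'schedule_next', lambda: 'completed')

states = list(m.run_iter('start'))
# states: [('UNDEFINED', 'RESUMING'), ('RESUMING', 'SCHEDULING'),
#          ('SCHEDULING', 'GAME_OVER')]
```

Keeping reactions separate from transitions, as the diff does, means the table of legal moves can be validated (and frozen) independently of the behavior attached to each state.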
-from taskflow.engines.action_engine import analyzer as ca
-from taskflow.engines.action_engine import executor as ex
-from taskflow.engines.action_engine import retry_action as ra
+from taskflow.engines.action_engine.actions import retry as ra
+from taskflow.engines.action_engine.actions import task as ta
+from taskflow.engines.action_engine import analyzer as an
+from taskflow.engines.action_engine import completer as co
 from taskflow.engines.action_engine import runner as ru
-from taskflow.engines.action_engine import task_action as ta
-from taskflow import exceptions as excp
-from taskflow import retry as retry_atom
+from taskflow.engines.action_engine import scheduler as sched
+from taskflow.engines.action_engine import scopes as sc
 from taskflow import states as st
-from taskflow import task as task_atom
 from taskflow.utils import misc
@@ -34,11 +33,12 @@ class Runtime(object):
     action engine to run to completion.
     """
 
-    def __init__(self, compilation, storage, task_notifier, task_executor):
-        self._task_notifier = task_notifier
+    def __init__(self, compilation, storage, atom_notifier, task_executor):
+        self._atom_notifier = atom_notifier
         self._task_executor = task_executor
         self._storage = storage
         self._compilation = compilation
+        self._scopes = {}
 
     @property
     def compilation(self):
@@ -50,7 +50,7 @@
     @misc.cachedproperty
     def analyzer(self):
-        return ca.Analyzer(self._compilation, self._storage)
+        return an.Analyzer(self._compilation, self._storage)
 
     @misc.cachedproperty
     def runner(self):
@@ -58,30 +58,47 @@
     @misc.cachedproperty
     def completer(self):
-        return Completer(self)
+        return co.Completer(self)
 
     @misc.cachedproperty
     def scheduler(self):
-        return Scheduler(self)
+        return sched.Scheduler(self)
 
     @misc.cachedproperty
     def retry_action(self):
-        return ra.RetryAction(self.storage, self._task_notifier)
+        return ra.RetryAction(self._storage, self._atom_notifier,
+                              self._fetch_scopes_for)
 
     @misc.cachedproperty
     def task_action(self):
-        return ta.TaskAction(self.storage, self._task_executor,
-                             self._task_notifier)
+        return ta.TaskAction(self._storage,
+                             self._atom_notifier, self._fetch_scopes_for,
+                             self._task_executor)
+
+    def _fetch_scopes_for(self, atom):
+        """Fetches a tuple of the visible scopes for the given atom."""
+        try:
+            return self._scopes[atom]
+        except KeyError:
+            walker = sc.ScopeWalker(self.compilation, atom,
+                                    names_only=True)
+            visible_to = tuple(walker)
+            self._scopes[atom] = visible_to
+            return visible_to
+
+    # Various helper methods used by the runtime components; not for public
+    # consumption...
 
     def reset_nodes(self, nodes, state=st.PENDING, intention=st.EXECUTE):
         for node in nodes:
             if state:
-                if isinstance(node, task_atom.BaseTask):
-                    self.task_action.change_state(node, state, progress=0.0)
-                elif isinstance(node, retry_atom.Retry):
+                if self.task_action.handles(node):
+                    self.task_action.change_state(node, state,
+                                                  progress=0.0)
+                elif self.retry_action.handles(node):
                     self.retry_action.change_state(node, state)
                 else:
-                    raise TypeError("Unknown how to reset node %s, %s"
+                    raise TypeError("Unknown how to reset atom '%s' (%s)"
                                     % (node, type(node)))
             if intention:
                 self.storage.set_atom_intention(node.name, intention)
@@ -94,174 +111,6 @@
         self.reset_nodes(self.analyzer.iterate_subgraph(node),
                          state=state, intention=intention)
-
-
-# Various helper methods used by completer and scheduler.
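
The ``reset_nodes`` change above replaces inline ``isinstance`` checks with a ``handles()`` query on each action, so new atom kinds can be supported by adding an action rather than editing every call site. The shape of that dispatch, reduced to its core (the class names here are illustrative stand-ins, not TaskFlow's real actions):

```python
class Atom(object):
    def __init__(self, name):
        self.name = name


class Task(Atom):
    pass


class Retry(Atom):
    pass


class TaskAction(object):
    @staticmethod
    def handles(atom):
        return isinstance(atom, Task)

    def change_state(self, atom, state):
        return ('task', atom.name, state)


class RetryAction(object):
    @staticmethod
    def handles(atom):
        return isinstance(atom, Retry)

    def change_state(self, atom, state):
        return ('retry', atom.name, state)


ACTIONS = (RetryAction(), TaskAction())


def reset_atom(atom, state):
    # Ask each action whether it knows this atom kind; the type check
    # lives with the action, not scattered through the runtime.
    for action in ACTIONS:
        if action.handles(atom):
            return action.change_state(atom, state)
    raise TypeError("Unknown how to reset atom '%s' (%s)" % (atom, type(atom)))


result = reset_atom(Task('boot'), 'PENDING')
# result: ('task', 'boot', 'PENDING')
```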
-def _retry_subflow(retry, runtime): - runtime.storage.set_atom_intention(retry.name, st.EXECUTE) - runtime.reset_subgraph(retry) - - -class Completer(object): - """Completes atoms using actions to complete them.""" - - def __init__(self, runtime): - self._analyzer = runtime.analyzer - self._retry_action = runtime.retry_action - self._runtime = runtime - self._storage = runtime.storage - self._task_action = runtime.task_action - - def _complete_task(self, task, event, result): - """Completes the given task, processes task failure.""" - if event == ex.EXECUTED: - self._task_action.complete_execution(task, result) - else: - self._task_action.complete_reversion(task, result) - - def resume(self): - """Resumes nodes in the contained graph. - - This is done to allow any previously completed or failed nodes to - be analyzed, there results processed and any potential nodes affected - to be adjusted as needed. - - This should return a set of nodes which should be the initial set of - nodes that were previously not finished (due to a RUNNING or REVERTING - attempt not previously finishing). - """ - for node in self._analyzer.iterate_all_nodes(): - if self._analyzer.get_state(node) == st.FAILURE: - self._process_atom_failure(node, self._storage.get(node.name)) - for retry in self._analyzer.iterate_retries(st.RETRYING): - _retry_subflow(retry, self._runtime) - unfinished_nodes = set() - for node in self._analyzer.iterate_all_nodes(): - if self._analyzer.get_state(node) in (st.RUNNING, st.REVERTING): - unfinished_nodes.add(node) - return unfinished_nodes - - def complete(self, node, event, result): - """Performs post-execution completion of a node. - - Returns whether the result should be saved into an accumulator of - failures or whether this should not be done. 
- """ - if isinstance(node, task_atom.BaseTask): - self._complete_task(node, event, result) - if isinstance(result, misc.Failure): - if event == ex.EXECUTED: - self._process_atom_failure(node, result) - else: - return True - return False - - def _process_atom_failure(self, atom, failure): - """Processes atom failure & applies resolution strategies. - - On atom failure this will find the atoms associated retry controller - and ask that controller for the strategy to perform to resolve that - failure. After getting a resolution strategy decision this method will - then adjust the needed other atoms intentions, and states, ... so that - the failure can be worked around. - """ - retry = self._analyzer.find_atom_retry(atom) - if retry: - # Ask retry controller what to do in case of failure - action = self._retry_action.on_failure(retry, atom, failure) - if action == retry_atom.RETRY: - # Prepare subflow for revert - self._storage.set_atom_intention(retry.name, st.RETRY) - self._runtime.reset_subgraph(retry, state=None, - intention=st.REVERT) - elif action == retry_atom.REVERT: - # Ask parent checkpoint - self._process_atom_failure(retry, failure) - elif action == retry_atom.REVERT_ALL: - # Prepare all flow for revert - self._revert_all() - else: - # Prepare all flow for revert - self._revert_all() - - def _revert_all(self): - """Attempts to set all nodes to the REVERT intention.""" - self._runtime.reset_nodes(self._analyzer.iterate_all_nodes(), - state=None, intention=st.REVERT) - - -class Scheduler(object): - """Schedules atoms using actions to schedule.""" - - def __init__(self, runtime): - self._analyzer = runtime.analyzer - self._retry_action = runtime.retry_action - self._runtime = runtime - self._storage = runtime.storage - self._task_action = runtime.task_action - - def _schedule_node(self, node): - """Schedule a single node for execution.""" - # TODO(harlowja): we need to rework this so that we aren't doing type - # checking here, type checking usually means 
something isn't done right - # and usually will limit extensibility in the future. - if isinstance(node, task_atom.BaseTask): - return self._schedule_task(node) - elif isinstance(node, retry_atom.Retry): - return self._schedule_retry(node) - else: - raise TypeError("Unknown how to schedule node %s, %s" - % (node, type(node))) - - def _schedule_retry(self, retry): - """Schedules the given retry atom for *future* completion. - - Depending on the atoms stored intention this may schedule the retry - atom for reversion or execution. - """ - intention = self._storage.get_atom_intention(retry.name) - if intention == st.EXECUTE: - return self._retry_action.execute(retry) - elif intention == st.REVERT: - return self._retry_action.revert(retry) - elif intention == st.RETRY: - self._retry_action.change_state(retry, st.RETRYING) - _retry_subflow(retry, self._runtime) - return self._retry_action.execute(retry) - else: - raise excp.ExecutionFailure("Unknown how to schedule retry with" - " intention: %s" % intention) - - def _schedule_task(self, task): - """Schedules the given task atom for *future* completion. - - Depending on the atoms stored intention this may schedule the task - atom for reversion or execution. - """ - intention = self._storage.get_atom_intention(task.name) - if intention == st.EXECUTE: - return self._task_action.schedule_execution(task) - elif intention == st.REVERT: - return self._task_action.schedule_reversion(task) - else: - raise excp.ExecutionFailure("Unknown how to schedule task with" - " intention: %s" % intention) - - def schedule(self, nodes): - """Schedules the provided nodes for *future* completion. - - This method should schedule a future for each node provided and return - a set of those futures to be waited on (or used for other similar - purposes). It should also return any failure objects that represented - scheduling failures that may have occurred during this scheduling - process. 
- """ - futures = set() - for node in nodes: - try: - futures.add(self._schedule_node(node)) - except Exception: - # Immediately stop scheduling future work so that we can - # exit execution early (rather than later) if a single task - # fails to schedule correctly. - return (futures, [misc.Failure()]) - return (futures, []) + def retry_subflow(self, retry): + self.storage.set_atom_intention(retry.name, st.EXECUTE) + self.reset_subgraph(retry) diff --git a/taskflow/engines/action_engine/scheduler.py b/taskflow/engines/action_engine/scheduler.py new file mode 100644 index 00000000..8e3c64b3 --- /dev/null +++ b/taskflow/engines/action_engine/scheduler.py @@ -0,0 +1,115 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from taskflow import exceptions as excp +from taskflow import retry as retry_atom +from taskflow import states as st +from taskflow import task as task_atom +from taskflow.types import failure + + +class _RetryScheduler(object): + def __init__(self, runtime): + self._runtime = runtime + self._retry_action = runtime.retry_action + self._storage = runtime.storage + + @staticmethod + def handles(atom): + return isinstance(atom, retry_atom.Retry) + + def schedule(self, retry): + """Schedules the given retry atom for *future* completion. + + Depending on the atoms stored intention this may schedule the retry + atom for reversion or execution. 
+ """ + intention = self._storage.get_atom_intention(retry.name) + if intention == st.EXECUTE: + return self._retry_action.execute(retry) + elif intention == st.REVERT: + return self._retry_action.revert(retry) + elif intention == st.RETRY: + self._retry_action.change_state(retry, st.RETRYING) + self._runtime.retry_subflow(retry) + return self._retry_action.execute(retry) + else: + raise excp.ExecutionFailure("Unknown how to schedule retry with" + " intention: %s" % intention) + + +class _TaskScheduler(object): + def __init__(self, runtime): + self._storage = runtime.storage + self._task_action = runtime.task_action + + @staticmethod + def handles(atom): + return isinstance(atom, task_atom.BaseTask) + + def schedule(self, task): + """Schedules the given task atom for *future* completion. + + Depending on the atoms stored intention this may schedule the task + atom for reversion or execution. + """ + intention = self._storage.get_atom_intention(task.name) + if intention == st.EXECUTE: + return self._task_action.schedule_execution(task) + elif intention == st.REVERT: + return self._task_action.schedule_reversion(task) + else: + raise excp.ExecutionFailure("Unknown how to schedule task with" + " intention: %s" % intention) + + +class Scheduler(object): + """Schedules atoms using actions to schedule.""" + + def __init__(self, runtime): + self._schedulers = [ + _RetryScheduler(runtime), + _TaskScheduler(runtime), + ] + + def _schedule_node(self, node): + """Schedule a single node for execution.""" + for sched in self._schedulers: + if sched.handles(node): + return sched.schedule(node) + else: + raise TypeError("Unknown how to schedule '%s' (%s)" + % (node, type(node))) + + def schedule(self, nodes): + """Schedules the provided nodes for *future* completion. + + This method should schedule a future for each node provided and return + a set of those futures to be waited on (or used for other similar + purposes). 
It should also return any failure objects that represented + scheduling failures that may have occurred during this scheduling + process. + """ + futures = set() + for node in nodes: + try: + futures.add(self._schedule_node(node)) + except Exception: + # Immediately stop scheduling future work so that we can + # exit execution early (rather than later) if a single task + # fails to schedule correctly. + return (futures, [failure.Failure()]) + return (futures, []) diff --git a/taskflow/engines/action_engine/scopes.py b/taskflow/engines/action_engine/scopes.py new file mode 100644 index 00000000..6b7f9ffd --- /dev/null +++ b/taskflow/engines/action_engine/scopes.py @@ -0,0 +1,113 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from taskflow import atom as atom_type +from taskflow import flow as flow_type +from taskflow import logging + +LOG = logging.getLogger(__name__) + + +def _extract_atoms(node, idx=-1): + # Always go left to right, since right to left is the pattern order + # and we want to go backwards and not forwards through that ordering... 
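
``Scheduler.schedule`` above returns a ``(futures, failures)`` pair and aborts on the first atom that fails to schedule, so the caller can wait on the partial set and surface the error immediately. A sketch of that contract with stdlib futures (here a plain exception stands in for TaskFlow's ``Failure`` object):

```python
import sys
from concurrent import futures


def schedule_all(executor, callables):
    """Submit every callable; stop early on the first scheduling error."""
    scheduled = set()
    for func in callables:
        try:
            scheduled.add(executor.submit(func))
        except Exception:
            # Immediately stop scheduling future work so that we can exit
            # early (rather than later) if a single submission fails.
            return (scheduled, [sys.exc_info()[1]])
    return (scheduled, [])


with futures.ThreadPoolExecutor(max_workers=2) as pool:
    fs, fails = schedule_all(pool, [lambda: 1, lambda: 2])
    results = sorted(f.result() for f in fs)
# results: [1, 2]; fails: []
```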
+    if idx == -1:
+        children_iter = node.reverse_iter()
+    else:
+        children_iter = reversed(node[0:idx])
+    atoms = []
+    for child in children_iter:
+        if isinstance(child.item, flow_type.Flow):
+            atoms.extend(_extract_atoms(child))
+        elif isinstance(child.item, atom_type.Atom):
+            atoms.append(child.item)
+        else:
+            raise TypeError(
+                "Unknown extraction item '%s' (%s)" % (child.item,
+                                                       type(child.item)))
+    return atoms
+
+
+class ScopeWalker(object):
+    """Walks through the scopes of an atom using an engine's compilation.
+
+    This will walk the visible scopes that are accessible for the given
+    atom, which can be used by some external entity in some meaningful way,
+    for example to find dependent values...
+    """
+
+    def __init__(self, compilation, atom, names_only=False):
+        self._node = compilation.hierarchy.find(atom)
+        if self._node is None:
+            raise ValueError("Unable to find atom '%s' in compilation"
+                             " hierarchy" % atom)
+        self._atom = atom
+        self._graph = compilation.execution_graph
+        self._names_only = names_only
+
+    def __iter__(self):
+        """Iterates over the visible scopes.
+
+        How this works is the following:
+
+        We find all the possible predecessors of the given atom; this is
+        useful since we know they occurred before this atom, but it doesn't
+        tell us the corresponding scope *level* that each predecessor was
+        created in, so we need to find this information.
+
+        For that information we consult the location of the atom ``Y`` in the
+        node hierarchy. We look up in reverse order the parent ``X`` of ``Y``
+        and traverse backwards from the index in the parent where ``Y``
+        occurred; all children in ``X`` that we encounter in this backwards
+        search (if a child is a flow itself, its atom contents will be
+        expanded) will be assumed to be at the same scope. This is then a
+        *potential* single scope; to make an *actual* scope we remove the
+        items from the *potential* scope that are not predecessors of ``Y``.
+
+        Then for additional scopes we continue up the tree, by finding the
+        parent of ``X`` (let's call it ``Z``) and performing the same
+        operation, going through the children in a reverse manner from the
+        index in parent ``Z`` where ``X`` was located. This forms another
+        *potential* scope which we provide back as an *actual* scope after
+        reducing the potential set by the predecessors of ``Y``. We then
+        repeat this process until we no longer have any parent nodes (aka
+        have reached the top of the tree) or we run out of predecessors.
+        """
+        predecessors = set(self._graph.bfs_predecessors_iter(self._atom))
+        last = self._node
+        for parent in self._node.path_iter(include_self=False):
+            if not predecessors:
+                break
+            last_idx = parent.index(last.item)
+            visible = []
+            for a in _extract_atoms(parent, idx=last_idx):
+                if a in predecessors:
+                    predecessors.remove(a)
+                    if not self._names_only:
+                        visible.append(a)
+                    else:
+                        visible.append(a.name)
+            if LOG.isEnabledFor(logging.BLATHER):
+                if not self._names_only:
+                    visible_names = [a.name for a in visible]
+                else:
+                    visible_names = visible
+                LOG.blather("Scope visible to '%s' (limited by parent '%s'"
+                            " index < %s) is: %s", self._atom,
+                            parent.item.name, last_idx, visible_names)
+            yield visible
+            last = parent
diff --git a/taskflow/engines/action_engine/task_action.py b/taskflow/engines/action_engine/task_action.py
deleted file mode 100644
index a07ded79..00000000
--- a/taskflow/engines/action_engine/task_action.py
+++ /dev/null
@@ -1,116 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License.
You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -import logging - -from taskflow import states -from taskflow.utils import misc - -LOG = logging.getLogger(__name__) - -SAVE_RESULT_STATES = (states.SUCCESS, states.FAILURE) - - -class TaskAction(object): - - def __init__(self, storage, task_executor, notifier): - self._storage = storage - self._task_executor = task_executor - self._notifier = notifier - - def _is_identity_transition(self, state, task, progress): - if state in SAVE_RESULT_STATES: - # saving result is never identity transition - return False - old_state = self._storage.get_atom_state(task.name) - if state != old_state: - # changing state is not identity transition by definition - return False - # NOTE(imelnikov): last thing to check is that the progress has - # changed, which means progress is not None and is different from - # what is stored in the database. 
- if progress is None: - return False - old_progress = self._storage.get_task_progress(task.name) - if old_progress != progress: - return False - return True - - def change_state(self, task, state, result=None, progress=None): - if self._is_identity_transition(state, task, progress): - # NOTE(imelnikov): ignore identity transitions in order - # to avoid extra write to storage backend and, what's - # more important, extra notifications - return - if state in SAVE_RESULT_STATES: - self._storage.save(task.name, result, state) - else: - self._storage.set_atom_state(task.name, state) - if progress is not None: - self._storage.set_task_progress(task.name, progress) - task_uuid = self._storage.get_atom_uuid(task.name) - details = dict(task_name=task.name, - task_uuid=task_uuid, - result=result) - self._notifier.notify(state, details) - if progress is not None: - task.update_progress(progress) - - def _on_update_progress(self, task, event_data, progress, **kwargs): - """Should be called when task updates its progress.""" - try: - self._storage.set_task_progress(task.name, progress, kwargs) - except Exception: - # Update progress callbacks should never fail, so capture and log - # the emitted exception instead of raising it. 
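
The removed ``_on_update_progress`` above wraps its storage write in a broad ``try/except`` because progress callbacks must never propagate into the running task. A small stand-alone sketch of that guard (``store`` here is a hypothetical stand-in for the storage backend):

```python
import logging

LOG = logging.getLogger(__name__)


def make_progress_callback(store, task_name):
    def on_progress(progress):
        try:
            store[task_name] = progress
        except Exception:
            # Progress updates are best-effort: capture and log the error
            # instead of letting an observer failure break the task.
            LOG.exception("Failed setting task progress for %s to %0.3f",
                          task_name, progress)
    return on_progress


class BrokenStore(dict):
    def __setitem__(self, key, value):
        raise IOError("backend unavailable")


broken_cb = make_progress_callback(BrokenStore(), "boot-vm")
broken_cb(0.5)  # error is logged, not raised
good_store = {}
make_progress_callback(good_store, "boot-vm")(1.0)
# good_store: {'boot-vm': 1.0}
```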
- LOG.exception("Failed setting task progress for %s to %0.3f", - task, progress) - - def schedule_execution(self, task): - self.change_state(task, states.RUNNING, progress=0.0) - kwargs = self._storage.fetch_mapped_args(task.rebind, - atom_name=task.name) - task_uuid = self._storage.get_atom_uuid(task.name) - return self._task_executor.execute_task(task, task_uuid, kwargs, - self._on_update_progress) - - def complete_execution(self, task, result): - if isinstance(result, misc.Failure): - self.change_state(task, states.FAILURE, result=result) - else: - self.change_state(task, states.SUCCESS, - result=result, progress=1.0) - - def schedule_reversion(self, task): - self.change_state(task, states.REVERTING, progress=0.0) - kwargs = self._storage.fetch_mapped_args(task.rebind, - atom_name=task.name) - task_uuid = self._storage.get_atom_uuid(task.name) - task_result = self._storage.get(task.name) - failures = self._storage.get_failures() - future = self._task_executor.revert_task(task, task_uuid, kwargs, - task_result, failures, - self._on_update_progress) - return future - - def complete_reversion(self, task, rev_result): - if isinstance(rev_result, misc.Failure): - self.change_state(task, states.FAILURE) - else: - self.change_state(task, states.REVERTED, progress=1.0) - - def wait_for_any(self, fs, timeout): - return self._task_executor.wait_for_any(fs, timeout) diff --git a/taskflow/engines/base.py b/taskflow/engines/base.py index 4bfcbabc..a97cf3b7 100644 --- a/taskflow/engines/base.py +++ b/taskflow/engines/base.py @@ -19,29 +19,57 @@ import abc import six +from taskflow.types import notifier +from taskflow.utils import deprecation from taskflow.utils import misc @six.add_metaclass(abc.ABCMeta) -class EngineBase(object): +class Engine(object): """Base for all engines implementations. :ivar notifier: A notification object that will dispatch events that occur related to the flow the engine contains. 
     :ivar task_notifier: A notification object that will dispatch events that
-                         occur related to the tasks the engine contains.
+                         occur related to the tasks the engine
+                         contains (deprecated).
+    :ivar atom_notifier: A notification object that will dispatch events that
+                         occur related to the atoms the engine contains.
     """
 
-    def __init__(self, flow, flow_detail, backend, conf):
+    def __init__(self, flow, flow_detail, backend, options):
         self._flow = flow
         self._flow_detail = flow_detail
         self._backend = backend
-        if not conf:
-            self._conf = {}
+        if not options:
+            self._options = {}
         else:
-            self._conf = dict(conf)
-        self.notifier = misc.Notifier()
-        self.task_notifier = misc.Notifier()
+            self._options = dict(options)
+        self._notifier = notifier.Notifier()
+        self._atom_notifier = notifier.Notifier()
+
+    @property
+    def notifier(self):
+        """The flow notifier."""
+        return self._notifier
+
+    @property
+    @deprecation.moved_property('atom_notifier', version="0.6",
+                                removal_version="?")
+    def task_notifier(self):
+        """The task notifier."""
+        return self._atom_notifier
+
+    @property
+    def atom_notifier(self):
+        """The atom notifier."""
+        return self._atom_notifier
+
+    @property
+    def options(self):
+        """The options that were passed to this engine on construction."""
+        return self._options
 
     @misc.cachedproperty
     def storage(self):
@@ -85,3 +113,10 @@
         not currently be preempted) and move the engine into a suspend state
         which can then later be resumed from.
         """
+
+
+# TODO(harlowja): remove in 0.7 or later...
+EngineBase = deprecation.moved_inheritable_class(Engine,
+                                                 'EngineBase', __name__,
+                                                 version="0.6",
+                                                 removal_version="?")
diff --git a/taskflow/engines/helpers.py b/taskflow/engines/helpers.py
index c200df8a..3d8ccf55 100644
--- a/taskflow/engines/helpers.py
+++ b/taskflow/engines/helpers.py
@@ -15,21 +15,93 @@
 # under the License.
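
The ``task_notifier`` property above stays usable as a deprecated alias of ``atom_notifier`` via ``deprecation.moved_property``. A minimal stdlib version of such a moved property (the helper name and warning text are illustrative, not TaskFlow's implementation):

```python
import warnings


def moved_property(new_name, old_name):
    """Build a property that warns about the rename, then forwards."""
    def getter(self):
        warnings.warn("Property '%s' has moved to '%s'" % (old_name, new_name),
                      DeprecationWarning, stacklevel=2)
        return getattr(self, new_name)
    return property(getter)


class Engine(object):
    def __init__(self):
        self.atom_notifier = object()

    # Old name keeps working during the deprecation cycle.
    task_notifier = moved_property('atom_notifier', 'task_notifier')


engine = Engine()
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    same = engine.task_notifier is engine.atom_notifier
# same: True, and one DeprecationWarning was recorded
```

Routing the old name through a property (rather than keeping a second attribute) guarantees both names always refer to the same notifier object.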
import contextlib +import itertools +import traceback +from oslo_utils import importutils +from oslo_utils import reflection import six import stevedore.driver from taskflow import exceptions as exc -from taskflow.openstack.common import importutils +from taskflow import logging from taskflow.persistence import backends as p_backends +from taskflow.utils import deprecation from taskflow.utils import misc from taskflow.utils import persistence_utils as p_utils -from taskflow.utils import reflection +LOG = logging.getLogger(__name__) # NOTE(imelnikov): this is the entrypoint namespace, not the module namespace. ENGINES_NAMESPACE = 'taskflow.engines' +# The default entrypoint engine type looked for when it is not provided. +ENGINE_DEFAULT = 'default' + +# TODO(harlowja): only used during the deprecation cycle, remove it once +# ``_extract_engine_compat`` is also gone... +_FILE_NAMES = [__file__] +if six.PY2: + # Due to a bug in py2.x the __file__ may point to the pyc file & since + # we are using the traceback module and that module only shows py files + # we have to do a slight adjustment to ensure we match correctly... + # + # This is addressed in https://www.python.org/dev/peps/pep-3147/#file + if __file__.endswith("pyc"): + _FILE_NAMES.append(__file__[0:-1]) +_FILE_NAMES = tuple(_FILE_NAMES) + + +def _extract_engine(**kwargs): + """Extracts the engine kind and any associated options.""" + + def _compat_extract(**kwargs): + options = {} + kind = kwargs.pop('engine', None) + engine_conf = kwargs.pop('engine_conf', None) + if engine_conf is not None: + if isinstance(engine_conf, six.string_types): + kind = engine_conf + else: + options.update(engine_conf) + kind = options.pop('engine', None) + if not kind: + kind = ENGINE_DEFAULT + # See if it's a URI and if so, extract any further options... 
+ try: + uri = misc.parse_uri(kind) + except (TypeError, ValueError): + pass + else: + kind = uri.scheme + options = misc.merge_uri(uri, options.copy()) + # Merge in any leftover **kwargs into the options, this makes it so + # that the provided **kwargs override any URI or engine_conf specific + # options. + options.update(kwargs) + return (kind, options) + + engine_conf = kwargs.get('engine_conf', None) + if engine_conf is not None: + # Figure out where our code ends and the calling code begins (this is + # needed since this code is called from two functions in this module, + # which means the stack level will vary by one depending on that). + finder = itertools.takewhile( + lambda frame: frame[0] in _FILE_NAMES, + reversed(traceback.extract_stack(limit=3))) + stacklevel = sum(1 for _frame in finder) + decorator = deprecation.renamed_kwarg('engine_conf', 'engine', + version="0.6", + removal_version="?", + # Three is added on since the + # decorator adds three of its own + # stack levels that we need to + # hop out of... + stacklevel=stacklevel + 3) + return decorator(_compat_extract)(**kwargs) + else: + return _compat_extract(**kwargs) + def _fetch_factory(factory_name): try: @@ -56,49 +128,43 @@ def _fetch_validate_factory(flow_factory): def load(flow, store=None, flow_detail=None, book=None, - engine_conf=None, backend=None, namespace=ENGINES_NAMESPACE, - **kwargs): + engine_conf=None, backend=None, + namespace=ENGINES_NAMESPACE, engine=ENGINE_DEFAULT, **kwargs): """Load a flow into an engine. - This function creates and prepares engine to run the - flow. All that is left is to run the engine with 'run()' method. + This function creates and prepares an engine to run the provided flow. All + that is left after this returns is to run the engine with the + engines ``run()`` method. - Which engine to load is specified in 'engine_conf' parameter. 
It - can be a string that names engine type or a dictionary which holds - engine type (with 'engine' key) and additional engine-specific - configuration. + Which engine to load is specified via the ``engine`` parameter. It + can be a string that names the engine type to use, or a string that + is a URI with a scheme that names the engine type to use and further + options contained in the URI's host, port, and query parameters... - Which storage backend to use is defined by backend parameter. It + Which storage backend to use is defined by the backend parameter. It can be backend itself, or a dictionary that is passed to - taskflow.persistence.backends.fetch to obtain backend. + ``taskflow.persistence.backends.fetch()`` to obtain a viable backend. :param flow: flow to load :param store: dict -- data to put to storage to satisfy flow requirements :param flow_detail: FlowDetail that holds the state of the flow (if one is not provided then one will be created for you in the provided backend) :param book: LogBook to create flow detail in if flow_detail is None - :param engine_conf: engine type and configuration configuration - :param backend: storage backend to use or configuration - :param namespace: driver namespace for stevedore (default is fine - if you don't know what is it) + :param engine_conf: engine type or URI and options (**deprecated**) + :param backend: storage backend to use or configuration that defines it + :param namespace: driver namespace for stevedore (or empty for default) + :param engine: string engine type or URI string with scheme that contains + the engine type and any URI specific components that will + become part of the engine options. + :param kwargs: arbitrary keyword arguments passed as options (merged with + any extracted ``engine`` and ``engine_conf`` options), + typically used for any engine specific options that do not + fit as any of the existing arguments. 
:returns: engine """ - if engine_conf is None: - engine_conf = {'engine': 'default'} - - # NOTE(imelnikov): this allows simpler syntax. - if isinstance(engine_conf, six.string_types): - engine_conf = {'engine': engine_conf} - - engine_name = engine_conf['engine'] - try: - pieces = misc.parse_uri(engine_name) - except (TypeError, ValueError): - pass - else: - engine_name = pieces['scheme'] - engine_conf = misc.merge_uri(pieces, engine_conf.copy()) + kind, options = _extract_engine(engine_conf=engine_conf, + engine=engine, **kwargs) if isinstance(backend, dict): backend = p_backends.fetch(backend) @@ -107,15 +173,15 @@ def load(flow, store=None, flow_detail=None, book=None, flow_detail = p_utils.create_flow_detail(flow, book=book, backend=backend) + LOG.debug('Looking for %r engine driver in %r', kind, namespace) try: mgr = stevedore.driver.DriverManager( - namespace, engine_name, + namespace, kind, invoke_on_load=True, - invoke_args=(flow, flow_detail, backend, engine_conf), - invoke_kwds=kwargs) + invoke_args=(flow, flow_detail, backend, options)) engine = mgr.driver except RuntimeError as e: - raise exc.NotFound("Could not find engine %s" % (engine_name), e) + raise exc.NotFound("Could not find engine '%s'" % (kind), e) else: if store: engine.storage.inject(store) @@ -123,35 +189,20 @@ def load(flow, store=None, flow_detail=None, book=None, def run(flow, store=None, flow_detail=None, book=None, - engine_conf=None, backend=None, namespace=ENGINES_NAMESPACE, **kwargs): + engine_conf=None, backend=None, namespace=ENGINES_NAMESPACE, + engine=ENGINE_DEFAULT, **kwargs): """Run the flow. - This function load the flow into engine (with 'load' function) - and runs the engine. + This function loads the flow into an engine (with the :func:`load() ` + function) and runs the engine. - Which engine to load is specified in 'engine_conf' parameter. 
It - can be a string that names engine type or a dictionary which holds - engine type (with 'engine' key) and additional engine-specific - configuration. + The arguments are interpreted as for :func:`load() `. - Which storage backend to use is defined by backend parameter. It - can be backend itself, or a dictionary that is passed to - taskflow.persistence.backends.fetch to obtain backend. - - :param flow: flow to run - :param store: dict -- data to put to storage to satisfy flow requirements - :param flow_detail: FlowDetail that holds the state of the flow (if one is - not provided then one will be created for you in the provided backend) - :param book: LogBook to create flow detail in if flow_detail is None - :param engine_conf: engine type and configuration configuration - :param backend: storage backend to use or configuration - :param namespace: driver namespace for stevedore (default is fine - if you don't know what is it) - :returns: dictionary of all named task results (see Storage.fetch_all) + :returns: dictionary of all named results (see ``storage.fetch_all()``) """ engine = load(flow, store=store, flow_detail=flow_detail, book=book, engine_conf=engine_conf, backend=backend, - namespace=namespace, **kwargs) + namespace=namespace, engine=engine, **kwargs) engine.run() return engine.storage.fetch_all() @@ -196,23 +247,21 @@ def save_factory_details(flow_detail, def load_from_factory(flow_factory, factory_args=None, factory_kwargs=None, store=None, book=None, engine_conf=None, backend=None, - namespace=ENGINES_NAMESPACE, **kwargs): + namespace=ENGINES_NAMESPACE, engine=ENGINE_DEFAULT, + **kwargs): """Loads a flow from a factory function into an engine. Gets flow factory function (or name of it) and creates flow with - it. Then, flow is loaded into engine with load(), and factory - function fully qualified name is saved to flow metadata so that - it can be later resumed with resume. + it. 
Then, the flow is loaded into an engine with the :func:`load() ` + function, and the factory function fully qualified name is saved to flow + metadata so that it can be later resumed. :param flow_factory: function or string: function that creates the flow :param factory_args: list or tuple of factory positional arguments :param factory_kwargs: dict of factory keyword arguments - :param store: dict -- data to put to storage to satisfy flow requirements - :param book: LogBook to create flow detail in - :param engine_conf: engine type and configuration configuration - :param backend: storage backend to use or configuration - :param namespace: driver namespace for stevedore (default is fine - if you don't know what is it) + + Further arguments are interpreted as for :func:`load() `. + :returns: engine """ @@ -230,7 +279,7 @@ def load_from_factory(flow_factory, factory_args=None, factory_kwargs=None, backend=backend) return load(flow=flow, store=store, flow_detail=flow_detail, book=book, engine_conf=engine_conf, backend=backend, namespace=namespace, - **kwargs) + engine=engine, **kwargs) def flow_from_detail(flow_detail): @@ -261,21 +310,21 @@ def flow_from_detail(flow_detail): def load_from_detail(flow_detail, store=None, engine_conf=None, backend=None, - namespace=ENGINES_NAMESPACE, **kwargs): + namespace=ENGINES_NAMESPACE, engine=ENGINE_DEFAULT, + **kwargs): """Reloads an engine previously saved. - This reloads the flow using the flow_from_detail() function and then calls - into the load() function to create an engine from that flow. + This reloads the flow using the + :func:`flow_from_detail() ` function and then calls + into the :func:`load() ` function to create an engine from that flow. 
:param flow_detail: FlowDetail that holds state of the flow to load - :param store: dict -- data to put to storage to satisfy flow requirements - :param engine_conf: engine type and configuration configuration - :param backend: storage backend to use or configuration - :param namespace: driver namespace for stevedore (default is fine - if you don't know what is it) + + Further arguments are interpreted as for :func:`load() `. + :returns: engine """ flow = flow_from_detail(flow_detail) return load(flow, flow_detail=flow_detail, store=store, engine_conf=engine_conf, backend=backend, - namespace=namespace, **kwargs) + namespace=namespace, engine=engine, **kwargs) diff --git a/taskflow/engines/worker_based/cache.py b/taskflow/engines/worker_based/cache.py deleted file mode 100644 index 9da7f12c..00000000 --- a/taskflow/engines/worker_based/cache.py +++ /dev/null @@ -1,48 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
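The URI-based engine selection that `helpers.py` gains above (an ``engine`` string that is either a plain kind like ``'serial'`` or a URI whose scheme is the kind and whose host/port/query become options) can be approximated with stdlib parsing alone. This is a simplified sketch, not the real `_extract_engine` (which uses taskflow's `misc.parse_uri`/`misc.merge_uri` helpers and handles `engine_conf` deprecation):

```python
import urllib.parse


def extract_engine(engine='default', **kwargs):
    """Extract an (engine kind, options dict) pair from a string or URI.

    'worker-based://guest@localhost:5672?transport=memory' yields kind
    'worker-based' plus host/port/query derived options; a plain string
    like 'serial' is just the kind with no extra options.
    """
    options = {}
    parsed = urllib.parse.urlparse(engine)
    if parsed.scheme:
        kind = parsed.scheme
        if parsed.hostname:
            options['host'] = parsed.hostname
        if parsed.port:
            options['port'] = parsed.port
        # Flatten query parameters into string options (last value wins).
        for key, values in urllib.parse.parse_qs(parsed.query).items():
            options[key] = values[-1]
    else:
        kind = engine or 'default'
    # Explicit **kwargs override anything pulled from the URI, mirroring
    # the precedence the patch documents for load()/run().
    options.update(kwargs)
    return kind, options
```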
- -import random - -import six - -from taskflow.engines.worker_based import protocol as pr -from taskflow.types import cache as base - - -class RequestsCache(base.ExpiringCache): - """Represents a thread-safe requests cache.""" - - def get_waiting_requests(self, tasks): - """Get list of waiting requests by tasks.""" - waiting_requests = [] - with self._lock.read_lock(): - for request in six.itervalues(self._data): - if request.state == pr.WAITING and request.task_cls in tasks: - waiting_requests.append(request) - return waiting_requests - - -class WorkersCache(base.ExpiringCache): - """Represents a thread-safe workers cache.""" - - def get_topic_by_task(self, task): - """Get topic for a given task.""" - available_topics = [] - with self._lock.read_lock(): - for topic, tasks in six.iteritems(self._data): - if task in tasks: - available_topics.append(topic) - return random.choice(available_topics) if available_topics else None diff --git a/taskflow/engines/worker_based/dispatcher.py b/taskflow/engines/worker_based/dispatcher.py index 9ff8ac10..13470e08 100644 --- a/taskflow/engines/worker_based/dispatcher.py +++ b/taskflow/engines/worker_based/dispatcher.py @@ -14,12 +14,11 @@ # License for the specific language governing permissions and limitations # under the License. 
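The deleted `cache.py` above mapped worker topics to the task names each topic's worker advertises; its topic lookup collected matching topics and picked one at random. A simplified, lock-free sketch of that removed behavior (the real class inherited a thread-safe `ExpiringCache`):

```python
import random


class WorkersCache(object):
    """Maps topic -> list of task names that topic's worker can run."""

    def __init__(self):
        self._data = {}

    def __setitem__(self, topic, tasks):
        self._data[topic] = tasks

    def get_topic_by_task(self, task):
        # Collect every topic advertising the task, then pick one at
        # random to (crudely) spread load across equivalent workers.
        available = [topic for topic, tasks in self._data.items()
                     if task in tasks]
        return random.choice(available) if available else None
```

In the patch this responsibility moves into `types.py` as a `ProxyWorkerFinder`, which also handles the periodic topic polling.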
-import logging - from kombu import exceptions as kombu_exc -import six from taskflow import exceptions as excp +from taskflow import logging +from taskflow.utils import kombu_utils as ku LOG = logging.getLogger(__name__) @@ -27,31 +26,55 @@ LOG = logging.getLogger(__name__) class TypeDispatcher(object): """Receives messages and dispatches to type specific handlers.""" - def __init__(self, type_handlers): - self._handlers = dict(type_handlers) - self._requeue_filters = [] + def __init__(self, type_handlers=None, requeue_filters=None): + if type_handlers is not None: + self._type_handlers = dict(type_handlers) + else: + self._type_handlers = {} + if requeue_filters is not None: + self._requeue_filters = list(requeue_filters) + else: + self._requeue_filters = [] - def add_requeue_filter(self, callback): - """Add a callback that can *request* message requeuing. + @property + def type_handlers(self): + """Dictionary of message type -> callback to handle that message. - The callback will be activated before the message has been acked and - it can be used to instruct the dispatcher to requeue the message - instead of processing it. + The callback(s) will be activated by looking for a message + property 'type' and locating a callback in this dictionary that maps + to that type; if one is found it is expected to be a callback that + accepts two positional parameters; the first being the message data + and the second being the message object. If a callback is not found + then the message is rejected and it will be up to the underlying + message transport to determine what this means/implies... """ - assert six.callable(callback), "Callback must be callable" - self._requeue_filters.append(callback) + return self._type_handlers + + @property + def requeue_filters(self): + """List of filters (callbacks) to request a message to be requeued. 
+ + The callback(s) will be activated before the message has been acked and + it can be used to instruct the dispatcher to requeue the message + instead of processing it. The callback, when called, will be provided + two positional parameters; the first being the message data and the + second being the message object. Using these provided parameters the + filter should return a truthy object if the message should be requeued + and a falsey object if it should not. + """ + return self._requeue_filters def _collect_requeue_votes(self, data, message): # Returns how many of the filters asked for the message to be requeued. requeue_votes = 0 - for f in self._requeue_filters: + for i, cb in enumerate(self._requeue_filters): try: - if f(data, message): + if cb(data, message): requeue_votes += 1 except Exception: - LOG.exception("Failed calling requeue filter to determine" - " if message %r should be requeued.", - message.delivery_tag) + LOG.exception("Failed calling requeue filter %s '%s' to" + " determine if message %r should be requeued.", + i + 1, cb, message.delivery_tag) return requeue_votes def _requeue_log_error(self, message, errors): @@ -66,15 +89,15 @@ class TypeDispatcher(object): LOG.critical("Couldn't requeue %r, reason:%r", message.delivery_tag, exc, exc_info=True) else: - LOG.debug("AMQP message %r requeued.", message.delivery_tag) + LOG.debug("Message '%s' was requeued.", ku.DelayedPretty(message)) def _process_message(self, data, message, message_type): - handler = self._handlers.get(message_type) + handler = self._type_handlers.get(message_type) if handler is None: message.reject_log_error(logger=LOG, errors=(kombu_exc.MessageStateError,)) LOG.warning("Unexpected message type: '%s' in message" - " %r", message_type, message.delivery_tag) + " '%s'", message_type, ku.DelayedPretty(message)) else: if isinstance(handler, (tuple, list)): handler, validator = handler @@ -83,20 +106,23 @@ class TypeDispatcher(object): except excp.InvalidFormat as e: 
message.reject_log_error( logger=LOG, errors=(kombu_exc.MessageStateError,)) - LOG.warn("Message: %r, '%s' was rejected due to it being" + LOG.warn("Message '%s' (%s) was rejected due to it being" " in an invalid format: %s", - message.delivery_tag, message_type, e) + ku.DelayedPretty(message), message_type, e) return message.ack_log_error(logger=LOG, errors=(kombu_exc.MessageStateError,)) if message.acknowledged: - LOG.debug("AMQP message %r acknowledged.", - message.delivery_tag) + LOG.debug("Message '%s' was acknowledged.", + ku.DelayedPretty(message)) handler(data, message) + else: + message.reject_log_error(logger=LOG, + errors=(kombu_exc.MessageStateError,)) def on_message(self, data, message): """This method is called on incoming messages.""" - LOG.debug("Got message: %r", message.delivery_tag) + LOG.debug("Received message '%s'", ku.DelayedPretty(message)) if self._collect_requeue_votes(data, message): self._requeue_log_error(message, errors=(kombu_exc.MessageStateError,)) @@ -107,6 +133,6 @@ class TypeDispatcher(object): message.reject_log_error( logger=LOG, errors=(kombu_exc.MessageStateError,)) LOG.warning("The 'type' message property is missing" - " in message %r", message.delivery_tag) + " in message '%s'", ku.DelayedPretty(message)) else: self._process_message(data, message, message_type) diff --git a/taskflow/engines/worker_based/endpoint.py b/taskflow/engines/worker_based/endpoint.py index 3a16266d..2c85310e 100644 --- a/taskflow/engines/worker_based/endpoint.py +++ b/taskflow/engines/worker_based/endpoint.py @@ -14,8 +14,9 @@ # License for the specific language governing permissions and limitations # under the License. 
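The `TypeDispatcher` refactoring above turns the handler table and requeue filters into constructor arguments exposed as read-only properties. The core dispatch-by-`type` flow it documents can be sketched as follows (a trimmed stand-in using plain dicts for messages instead of kombu message objects, and flags instead of ack/reject/requeue calls):

```python
class TypeDispatcher(object):
    """Dispatches messages to a handler chosen by their 'type' property."""

    def __init__(self, type_handlers=None, requeue_filters=None):
        self._type_handlers = dict(type_handlers or {})
        self._requeue_filters = list(requeue_filters or [])

    def on_message(self, data, message):
        # Any truthy filter vote means the message is requeued, not acked.
        if any(f(data, message) for f in self._requeue_filters):
            message['requeued'] = True
            return
        message_type = message.get('properties', {}).get('type')
        handler = self._type_handlers.get(message_type)
        if handler is None:
            # No handler for this type: reject and let the underlying
            # transport decide what that means/implies.
            message['rejected'] = True
            return
        message['acked'] = True
        handler(data, message)
```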
+from oslo_utils import reflection + from taskflow.engines.action_engine import executor -from taskflow.utils import reflection class Endpoint(object): @@ -33,18 +34,16 @@ class Endpoint(object): def name(self): return self._task_cls_name - def _get_task(self, name=None): + def generate(self, name=None): # NOTE(skudriashev): Note that task is created here with the `name` # argument passed to its constructor. This will be a problem when # task's constructor requires any other arguments. return self._task_cls(name=name) - def execute(self, task_name, **kwargs): - task, event, result = self._executor.execute_task( - self._get_task(task_name), **kwargs).result() + def execute(self, task, **kwargs): + event, result = self._executor.execute_task(task, **kwargs).result() return result - def revert(self, task_name, **kwargs): - task, event, result = self._executor.revert_task( - self._get_task(task_name), **kwargs).result() + def revert(self, task, **kwargs): + event, result = self._executor.revert_task(task, **kwargs).result() return result diff --git a/taskflow/engines/worker_based/engine.py b/taskflow/engines/worker_based/engine.py index e92e73f8..aee39e89 100644 --- a/taskflow/engines/worker_based/engine.py +++ b/taskflow/engines/worker_based/engine.py @@ -16,13 +16,14 @@ from taskflow.engines.action_engine import engine from taskflow.engines.worker_based import executor +from taskflow.engines.worker_based import protocol as pr from taskflow import storage as t_storage class WorkerBasedActionEngine(engine.ActionEngine): """Worker based action engine. - Specific backend configuration: + Specific backend options (extracted from provided engine options): :param exchange: broker exchange name in which executor / worker communication is performed :param topics: list of worker topics to communicate with (this will also be learned by listening to the notifications that workers emit).
- :keyword transport: transport to be used (e.g. amqp, memory, etc.) - :keyword transport_options: transport specific options + :param transport: transport to be used (e.g. amqp, memory, etc.) + :param transition_timeout: numeric value (or None for infinite) to wait + for submitted remote requests to transition out + of the (PENDING, WAITING) request states. When + expired, the associated task the request was made + for will have its result become a + `RequestTimeout` exception instead of its + normally returned value (or raised exception). + :param transport_options: transport specific options (see: + http://kombu.readthedocs.org/ for what these + options imply and are expected to be) + :param retry_options: retry specific options + (see: :py:attr:`~.proxy.Proxy.DEFAULT_RETRY_OPTIONS`) """ _storage_factory = t_storage.SingleThreadedStorage - def _task_executor_factory(self): - if self._executor is not None: - return self._executor - return executor.WorkerTaskExecutor( - uuid=self._flow_detail.uuid, - url=self._conf.get('url'), - exchange=self._conf.get('exchange', 'default'), - topics=self._conf.get('topics', []), - transport=self._conf.get('transport'), - transport_options=self._conf.get('transport_options')) + def __init__(self, flow, flow_detail, backend, options): + super(WorkerBasedActionEngine, self).__init__(flow, flow_detail, + backend, options) + # This ensures that any provided executor will be validated before + # we get too far in the compilation/execution pipeline...
+ self._task_executor = self._fetch_task_executor(self._options, + self._flow_detail) - def __init__(self, flow, flow_detail, backend, conf, **kwargs): - super(WorkerBasedActionEngine, self).__init__( - flow, flow_detail, backend, conf) - self._executor = kwargs.get('executor') + @classmethod + def _fetch_task_executor(cls, options, flow_detail): + try: + e = options['executor'] + if not isinstance(e, executor.WorkerTaskExecutor): + raise TypeError("Expected an instance of type '%s' instead of" + " type '%s' for 'executor' option" + % (executor.WorkerTaskExecutor, type(e))) + return e + except KeyError: + return executor.WorkerTaskExecutor( + uuid=flow_detail.uuid, + url=options.get('url'), + exchange=options.get('exchange', 'default'), + retry_options=options.get('retry_options'), + topics=options.get('topics', []), + transport=options.get('transport'), + transport_options=options.get('transport_options'), + transition_timeout=options.get('transition_timeout', + pr.REQUEST_TIMEOUT)) diff --git a/taskflow/engines/worker_based/executor.py b/taskflow/engines/worker_based/executor.py index 9ff7078b..a55229f1 100644 --- a/taskflow/engines/worker_based/executor.py +++ b/taskflow/engines/worker_based/executor.py @@ -15,119 +15,94 @@ # under the License. 
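The `_fetch_task_executor` classmethod added above validates a caller-supplied executor early, and otherwise constructs one from the options dict. That validate-or-construct pattern looks roughly like this (with a placeholder class standing in for the real `WorkerTaskExecutor` and only a couple of the options shown):

```python
class WorkerTaskExecutor(object):
    """Placeholder for the real worker-based task executor."""

    def __init__(self, **options):
        self.options = options


def fetch_task_executor(options):
    # Validate a caller-provided executor at construction time, so a bad
    # 'executor' option fails immediately instead of mid-run.
    try:
        e = options['executor']
        if not isinstance(e, WorkerTaskExecutor):
            raise TypeError("Expected an instance of type '%s' instead of"
                            " type '%s' for 'executor' option"
                            % (WorkerTaskExecutor, type(e)))
        return e
    except KeyError:
        # No executor provided: build one from the remaining options,
        # falling back to defaults where nothing was given.
        return WorkerTaskExecutor(
            exchange=options.get('exchange', 'default'),
            topics=options.get('topics', []))
```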
import functools -import logging -import threading + +from oslo_utils import timeutils from taskflow.engines.action_engine import executor -from taskflow.engines.worker_based import cache from taskflow.engines.worker_based import protocol as pr from taskflow.engines.worker_based import proxy +from taskflow.engines.worker_based import types as wt from taskflow import exceptions as exc -from taskflow.openstack.common import timeutils -from taskflow.types import timing as tt -from taskflow.utils import async_utils +from taskflow import logging +from taskflow import task as task_atom +from taskflow.types import periodic +from taskflow.utils import kombu_utils as ku from taskflow.utils import misc -from taskflow.utils import reflection from taskflow.utils import threading_utils as tu LOG = logging.getLogger(__name__) -def _is_alive(thread): - if not thread: - return False - return thread.is_alive() - - -class PeriodicWorker(object): - """Calls a set of functions when activated periodically. - - NOTE(harlowja): the provided timeout object determines the periodicity. 
- """ - def __init__(self, timeout, functors): - self._timeout = timeout - self._functors = [] - for f in functors: - self._functors.append((f, reflection.get_callable_name(f))) - - def start(self): - while not self._timeout.is_stopped(): - for (f, f_name) in self._functors: - LOG.debug("Calling periodic function '%s'", f_name) - try: - f() - except Exception: - LOG.warn("Failed to call periodic function '%s'", f_name, - exc_info=True) - self._timeout.wait() - - def stop(self): - self._timeout.interrupt() - - def reset(self): - self._timeout.reset() - - -class WorkerTaskExecutor(executor.TaskExecutorBase): +class WorkerTaskExecutor(executor.TaskExecutor): """Executes tasks on remote workers.""" - def __init__(self, uuid, exchange, topics, **kwargs): + def __init__(self, uuid, exchange, topics, + transition_timeout=pr.REQUEST_TIMEOUT, + url=None, transport=None, transport_options=None, + retry_options=None): self._uuid = uuid - self._topics = topics - self._requests_cache = cache.RequestsCache() - self._workers_cache = cache.WorkersCache() - self._workers_arrival = threading.Condition() - handlers = { - pr.NOTIFY: [ - self._process_notify, - functools.partial(pr.Notify.validate, response=True), - ], + self._requests_cache = wt.RequestsCache() + self._transition_timeout = transition_timeout + type_handlers = { pr.RESPONSE: [ self._process_response, pr.Response.validate, ], } - self._proxy = proxy.Proxy(uuid, exchange, handlers, - self._on_wait, **kwargs) - self._proxy_thread = None - self._periodic = PeriodicWorker(tt.Timeout(pr.NOTIFY_PERIOD), - [self._notify_topics]) - self._periodic_thread = None + self._proxy = proxy.Proxy(uuid, exchange, + type_handlers=type_handlers, + on_wait=self._on_wait, url=url, + transport=transport, + transport_options=transport_options, + retry_options=retry_options) + # NOTE(harlowja): This is the most simplest finder impl. 
that + # doesn't have external dependencies (outside of what this engine + # already requires); though it does create periodic 'polling' traffic + # to workers to 'learn' of the tasks they can perform (and requires + # pre-existing knowledge of the topics those workers are on to gather + # and update this information). + self._finder = wt.ProxyWorkerFinder(uuid, self._proxy, topics) + self._finder.on_worker = self._on_worker + self._helpers = tu.ThreadBundle() + self._helpers.bind(lambda: tu.daemon_thread(self._proxy.start), + after_start=lambda t: self._proxy.wait(), + before_join=lambda t: self._proxy.stop()) + p_worker = periodic.PeriodicWorker.create([self._finder]) + if p_worker: + self._helpers.bind(lambda: tu.daemon_thread(p_worker.start), + before_join=lambda t: p_worker.stop(), + after_join=lambda t: p_worker.reset(), + before_start=lambda t: p_worker.reset()) - def _process_notify(self, notify, message): - """Process notify message from remote side.""" - LOG.debug("Start processing notify message.") - topic = notify['topic'] - tasks = notify['tasks'] - - # add worker info to the cache - self._workers_arrival.acquire() - try: - self._workers_cache[topic] = tasks - self._workers_arrival.notify_all() - finally: - self._workers_arrival.release() - - # publish waiting requests - for request in self._requests_cache.get_waiting_requests(tasks): + def _on_worker(self, worker): + """Process new worker that has arrived (and fire off any work).""" + for request in self._requests_cache.get_waiting_requests(worker): if request.transition_and_log_error(pr.PENDING, logger=LOG): - self._publish_request(request, topic) + self._publish_request(request, worker) def _process_response(self, response, message): """Process response from remote side.""" - LOG.debug("Start processing response message.") + LOG.debug("Started processing response message '%s'", + ku.DelayedPretty(message)) try: task_uuid = message.properties['correlation_id'] except KeyError: - LOG.warning("The 
'correlation_id' message property is missing.") + LOG.warning("The 'correlation_id' message property is" + " missing in message '%s'", + ku.DelayedPretty(message)) else: request = self._requests_cache.get(task_uuid) if request is not None: response = pr.Response.from_dict(response) + LOG.debug("Response with state '%s' received for '%s'", + response.state, request) if response.state == pr.RUNNING: request.transition_and_log_error(pr.RUNNING, logger=LOG) - elif response.state == pr.PROGRESS: - request.on_progress(**response.data) + elif response.state == pr.EVENT: + # Proxy the event + details to the task/request notifier... + event_type = response.data['event_type'] + details = response.data['details'] + request.notifier.notify(event_type, details) elif response.state in (pr.FAILURE, pr.SUCCESS): moved = request.transition_and_log_error(response.state, logger=LOG) @@ -139,10 +114,10 @@ class WorkerTaskExecutor(executor.TaskExecutorBase): del self._requests_cache[request.uuid] request.set_result(**response.data) else: - LOG.warning("Unexpected response status: '%s'", + LOG.warning("Unexpected response status '%s'", response.state) else: - LOG.debug("Request with id='%s' not found.", task_uuid) + LOG.debug("Request with id='%s' not found", task_uuid) @staticmethod def _handle_expired_request(request): @@ -163,67 +138,76 @@ class WorkerTaskExecutor(executor.TaskExecutorBase): " seconds for it to transition out of (%s) states" % (request, request_age, ", ".join(pr.WAITING_STATES))) except exc.RequestTimeout: - with misc.capture_failure() as fail: - LOG.debug(fail.exception_str) - request.set_result(fail) + with misc.capture_failure() as failure: + LOG.debug(failure.exception_str) + request.set_result(failure) def _on_wait(self): """This function is called cyclically between draining events.""" self._requests_cache.cleanup(self._handle_expired_request) def _submit_task(self, task, task_uuid, action, arguments, - progress_callback, timeout=pr.REQUEST_TIMEOUT, **kwargs): 
+ progress_callback=None, **kwargs): """Submit task request to a worker.""" request = pr.Request(task, task_uuid, action, arguments, - progress_callback, timeout, **kwargs) + self._transition_timeout, **kwargs) - # Get task's topic and publish request if topic was found. - topic = self._workers_cache.get_topic_by_task(request.task_cls) - if topic is not None: + # Register the callback, so that we can proxy the progress correctly. + if (progress_callback is not None and + request.notifier.can_be_registered( + task_atom.EVENT_UPDATE_PROGRESS)): + request.notifier.register(task_atom.EVENT_UPDATE_PROGRESS, + progress_callback) + cleaner = functools.partial(request.notifier.deregister, + task_atom.EVENT_UPDATE_PROGRESS, + progress_callback) + request.result.add_done_callback(lambda fut: cleaner()) + + # Get task's worker and publish request if worker was found. + worker = self._finder.get_worker_for_task(task) + if worker is not None: # NOTE(skudriashev): Make sure request is set to the PENDING state # before putting it into the requests cache to prevent the notify # processing thread get list of waiting requests and publish it # before it is published here, so it wouldn't be published twice. 
if request.transition_and_log_error(pr.PENDING, logger=LOG): self._requests_cache[request.uuid] = request - self._publish_request(request, topic) + self._publish_request(request, worker) else: + LOG.debug("Delaying submission of '%s', no currently known" + " worker/s available to process it", request) self._requests_cache[request.uuid] = request return request.result - def _publish_request(self, request, topic): + def _publish_request(self, request, worker): """Publish request to a given topic.""" + LOG.debug("Submitting execution of '%s' to worker '%s' (expecting" + " response identified by reply_to=%s and" + " correlation_id=%s)", request, worker, self._uuid, + request.uuid) try: - self._proxy.publish(msg=request, - routing_key=topic, + self._proxy.publish(request, worker.topic, reply_to=self._uuid, correlation_id=request.uuid) except Exception: with misc.capture_failure() as failure: - LOG.exception("Failed to submit the '%s' request.", request) + LOG.critical("Failed to submit '%s' (transitioning it to" + " %s)", request, pr.FAILURE, exc_info=True) if request.transition_and_log_error(pr.FAILURE, logger=LOG): del self._requests_cache[request.uuid] request.set_result(failure) - def _notify_topics(self): - """Cyclically called to publish notify message to each topic.""" - self._proxy.publish(pr.Notify(), self._topics, reply_to=self._uuid) - def execute_task(self, task, task_uuid, arguments, progress_callback=None): return self._submit_task(task, task_uuid, pr.EXECUTE, arguments, - progress_callback) + progress_callback=progress_callback) def revert_task(self, task, task_uuid, arguments, result, failures, progress_callback=None): return self._submit_task(task, task_uuid, pr.REVERT, arguments, - progress_callback, result=result, - failures=failures) - - def wait_for_any(self, fs, timeout=None): - """Wait for futures returned by this executor to complete.""" - return async_utils.wait_for_any(fs, timeout) + progress_callback=progress_callback, + result=result, 
failures=failures) def wait_for_workers(self, workers=1, timeout=None): """Waits for geq workers to notify they are ready to do work. @@ -234,42 +218,15 @@ class WorkerTaskExecutor(executor.TaskExecutorBase): return how many workers are still needed, otherwise it will return zero. """ - if workers <= 0: - raise ValueError("Worker amount must be greater than zero") - w = None - if timeout is not None: - w = tt.StopWatch(timeout).start() - self._workers_arrival.acquire() - try: - while len(self._workers_cache) < workers: - if w is not None and w.expired(): - return workers - len(self._workers_cache) - timeout = None - if w is not None: - timeout = w.leftover() - self._workers_arrival.wait(timeout) - return 0 - finally: - self._workers_arrival.release() + return self._finder.wait_for_workers(workers=workers, + timeout=timeout) def start(self): """Starts proxy thread and associated topic notification thread.""" - if not _is_alive(self._proxy_thread): - self._proxy_thread = tu.daemon_thread(self._proxy.start) - self._proxy_thread.start() - self._proxy.wait() - if not _is_alive(self._periodic_thread): - self._periodic.reset() - self._periodic_thread = tu.daemon_thread(self._periodic.start) - self._periodic_thread.start() + self._helpers.start() def stop(self): """Stops proxy thread and associated topic notification thread.""" - if self._periodic_thread is not None: - self._periodic.stop() - self._periodic_thread.join() - self._periodic_thread = None - if self._proxy_thread is not None: - self._proxy.stop() - self._proxy_thread.join() - self._proxy_thread = None + self._helpers.stop() + self._requests_cache.clear(self._handle_expired_request) + self._finder.clear() diff --git a/taskflow/engines/worker_based/protocol.py b/taskflow/engines/worker_based/protocol.py index 6e54f9fb..8a137471 100644 --- a/taskflow/engines/worker_based/protocol.py +++ b/taskflow/engines/worker_based/protocol.py @@ -15,21 +15,21 @@ # under the License. 
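For context on the executor's request-expiry path above (`_handle_expired_request` plus the cyclic `_on_wait()` cleanup), here is a minimal sketch of the pattern; the names are hypothetical stand-ins, not taskflow's real classes:

```python
import time
from concurrent import futures


class ExpiringRequest(object):
    """Hypothetical stand-in for pr.Request: a future plus a deadline."""

    def __init__(self, uuid, timeout):
        self.uuid = uuid
        self.result = futures.Future()
        self._deadline = time.time() + timeout

    def expired(self):
        return time.time() >= self._deadline


def cleanup_expired(cache, on_expired):
    # Mirrors the cyclic _on_wait() pass: expired requests are popped
    # from the cache and handed to the expiry callback.
    for uuid in list(cache):
        if cache[uuid].expired():
            on_expired(cache.pop(uuid))
```

In the real executor the expiry callback captures a `RequestTimeout` as a failure and sets it as the request future's result, so anything waiting on that future wakes up with the failure instead of blocking forever.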
import abc -import logging import threading from concurrent import futures import jsonschema from jsonschema import exceptions as schema_exc +from oslo_utils import reflection +from oslo_utils import timeutils import six from taskflow.engines.action_engine import executor from taskflow import exceptions as excp -from taskflow.openstack.common import timeutils +from taskflow import logging +from taskflow.types import failure as ft from taskflow.types import timing as tt from taskflow.utils import lock_utils -from taskflow.utils import misc -from taskflow.utils import reflection # NOTE(skudriashev): This is protocol states and events, which are not # related to task states. @@ -38,14 +38,14 @@ PENDING = 'PENDING' RUNNING = 'RUNNING' SUCCESS = 'SUCCESS' FAILURE = 'FAILURE' -PROGRESS = 'PROGRESS' +EVENT = 'EVENT' # During these states the expiry is active (once out of these states the expiry # no longer matters, since we have no way of knowing how long a task will run # for). WAITING_STATES = (WAITING, PENDING) -_ALL_STATES = (WAITING, PENDING, RUNNING, SUCCESS, FAILURE, PROGRESS) +_ALL_STATES = (WAITING, PENDING, RUNNING, SUCCESS, FAILURE, EVENT) _STOP_TIMER_STATES = (RUNNING, SUCCESS, FAILURE) # Transitions that a request state can go through. @@ -121,12 +121,16 @@ class Message(object): class Notify(Message): """Represents notify message type.""" + + #: String constant representing this message type. TYPE = NOTIFY # NOTE(harlowja): the executor (the entity who initially requests a worker # to send back a notification response) schema is different than the # worker response schema (that's why there are two schemas here). - _RESPONSE_SCHEMA = { + + #: Expected notify *response* message schema (in json schema format). 
+ RESPONSE_SCHEMA = { "type": "object", 'properties': { 'topic': { @@ -142,7 +146,9 @@ class Notify(Message): "required": ["topic", 'tasks'], "additionalProperties": False, } - _SENDER_SCHEMA = { + + #: Expected *sender* request message schema (in json schema format). + SENDER_SCHEMA = { "type": "object", "additionalProperties": False, } @@ -156,9 +162,9 @@ class Notify(Message): @classmethod def validate(cls, data, response): if response: - schema = cls._RESPONSE_SCHEMA + schema = cls.RESPONSE_SCHEMA else: - schema = cls._SENDER_SCHEMA + schema = cls.SENDER_SCHEMA try: jsonschema.validate(data, schema, types=_SCHEMA_TYPES) except schema_exc.ValidationError as e: @@ -180,8 +186,11 @@ class Request(Message): states. """ + #: String constant representing this message type. TYPE = REQUEST - _SCHEMA = { + + #: Expected message schema (in json schema format). + SCHEMA = { "type": "object", 'properties': { # These two are typically only sent on revert actions (that is @@ -219,29 +228,36 @@ class Request(Message): 'required': ['task_cls', 'task_name', 'task_version', 'action'], } - def __init__(self, task, uuid, action, arguments, progress_callback, - timeout, **kwargs): + def __init__(self, task, uuid, action, arguments, timeout, **kwargs): self._task = task - self._task_cls = reflection.get_class_name(task) self._uuid = uuid self._action = action self._event = ACTION_TO_EVENT[action] self._arguments = arguments - self._progress_callback = progress_callback self._kwargs = kwargs self._watch = tt.StopWatch(duration=timeout).start() self._state = WAITING self._lock = threading.Lock() self._created_on = timeutils.utcnow() - self.result = futures.Future() + self._result = futures.Future() + self._result.atom = task + self._notifier = task.notifier + + @property + def result(self): + return self._result + + @property + def notifier(self): + return self._notifier @property def uuid(self): return self._uuid @property - def task_cls(self): - return self._task_cls + def 
task(self): + return self._task @property def state(self): @@ -270,15 +286,19 @@ class Request(Message): """Return json-serializable request. To convert requests that have failed due to some exception this will - convert all `misc.Failure` objects into dictionaries (which will then - be reconstituted by the receiver). + convert all `failure.Failure` objects into dictionaries (which will + then be reconstituted by the receiver). """ - request = dict(task_cls=self._task_cls, task_name=self._task.name, - task_version=self._task.version, action=self._action, - arguments=self._arguments) + request = { + 'task_cls': reflection.get_class_name(self._task), + 'task_name': self._task.name, + 'task_version': self._task.version, + 'action': self._action, + 'arguments': self._arguments, + } if 'result' in self._kwargs: result = self._kwargs['result'] - if isinstance(result, misc.Failure): + if isinstance(result, ft.Failure): request['result'] = ('failure', result.to_dict()) else: request['result'] = ('success', result) @@ -290,10 +310,7 @@ class Request(Message): return request def set_result(self, result): - self.result.set_result((self._task, self._event, result)) - - def on_progress(self, event_data, progress): - self._progress_callback(self._task, event_data, progress) + self.result.set_result((self._event, result)) def transition_and_log_error(self, new_state, logger=None): """Transitions *and* logs an error if that transitioning raises. @@ -341,7 +358,7 @@ class Request(Message): @classmethod def validate(cls, data): try: - jsonschema.validate(data, cls._SCHEMA, types=_SCHEMA_TYPES) + jsonschema.validate(data, cls.SCHEMA, types=_SCHEMA_TYPES) except schema_exc.ValidationError as e: raise excp.InvalidFormat("%s message response data not of the" " expected format: %s" @@ -350,8 +367,12 @@ class Request(Message): class Response(Message): """Represents response message type.""" + + #: String constant representing this message type. 
TYPE = RESPONSE - _SCHEMA = { + + #: Expected message schema (in json schema format). + SCHEMA = { "type": "object", 'properties': { 'state': { @@ -361,7 +382,7 @@ class Response(Message): 'data': { "anyOf": [ { - "$ref": "#/definitions/progress", + "$ref": "#/definitions/event", }, { "$ref": "#/definitions/completion", @@ -375,17 +396,17 @@ class Response(Message): "required": ["state", 'data'], "additionalProperties": False, "definitions": { - "progress": { + "event": { "type": "object", "properties": { - 'progress': { - 'type': 'number', + 'event_type': { + 'type': 'string', }, - 'event_data': { + 'details': { 'type': 'object', }, }, - "required": ["progress", 'event_data'], + "required": ["event_type", 'details'], "additionalProperties": False, }, # Used when sending *only* request state changes (and no data is @@ -417,7 +438,7 @@ class Response(Message): state = data['state'] data = data['data'] if state == FAILURE and 'result' in data: - data['result'] = misc.Failure.from_dict(data['result']) + data['result'] = ft.Failure.from_dict(data['result']) return cls(state, **data) @property @@ -434,7 +455,7 @@ class Response(Message): @classmethod def validate(cls, data): try: - jsonschema.validate(data, cls._SCHEMA, types=_SCHEMA_TYPES) + jsonschema.validate(data, cls.SCHEMA, types=_SCHEMA_TYPES) except schema_exc.ValidationError as e: raise excp.InvalidFormat("%s message response data not of the" " expected format: %s" diff --git a/taskflow/engines/worker_based/proxy.py b/taskflow/engines/worker_based/proxy.py index d2991ca3..e9d2ec22 100644 --- a/taskflow/engines/worker_based/proxy.py +++ b/taskflow/engines/worker_based/proxy.py @@ -14,15 +14,15 @@ # License for the specific language governing permissions and limitations # under the License. 
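The `to_dict()`/`from_dict()` pairing above is what lets task failures cross the wire as plain json; a rough sketch of that round trip, using simplified stand-ins (not the real `pr.Response`/`ft.Failure` classes):

```python
FAILURE = 'FAILURE'
SUCCESS = 'SUCCESS'


class Failure(object):
    """Simplified stand-in for taskflow.types.failure.Failure."""

    def __init__(self, exc_type_names, exception_str):
        self.exc_type_names = exc_type_names
        self.exception_str = exception_str

    def to_dict(self):
        return {'exc_type_names': self.exc_type_names,
                'exception_str': self.exception_str}

    @classmethod
    def from_dict(cls, data):
        return cls(**data)


class Response(object):
    """Simplified stand-in for pr.Response."""

    def __init__(self, state, **data):
        self.state = state
        self.data = data

    def to_dict(self):
        data = dict(self.data)
        if self.state == FAILURE and 'result' in data:
            # Failure objects are not json-serializable, so ship their
            # dictionary form and reconstitute on the receiving side.
            data['result'] = data['result'].to_dict()
        return {'state': self.state, 'data': data}

    @classmethod
    def from_dict(cls, data):
        state, data = data['state'], data['data']
        if state == FAILURE and 'result' in data:
            data['result'] = Failure.from_dict(data['result'])
        return cls(state, **data)
```

Successful results pass through untouched; only failures get the dictionary conversion, matching the `('failure', ...)`/`('success', ...)` tagging used by `Request.to_dict()`.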
-import logging -import socket -import threading +import collections import kombu +from kombu import exceptions as kombu_exceptions import six from taskflow.engines.worker_based import dispatcher -from taskflow.utils import misc +from taskflow import logging +from taskflow.utils import threading_utils LOG = logging.getLogger(__name__) @@ -30,102 +30,197 @@ LOG = logging.getLogger(__name__) # the socket can get "stuck", and is a best practice for Kombu consumers. DRAIN_EVENTS_PERIOD = 1 +# Helper objects returned when requested to get connection details, used +# instead of returning the raw results from the kombu connection objects +# themselves so that a person can not mutate those objects (which would be +# bad). +_ConnectionDetails = collections.namedtuple('_ConnectionDetails', + ['uri', 'transport']) +_TransportDetails = collections.namedtuple('_TransportDetails', + ['options', 'driver_type', + 'driver_name', 'driver_version']) + class Proxy(object): - """A proxy processes messages from/to the named exchange.""" + """A proxy processes messages from/to the named exchange. - def __init__(self, topic, exchange_name, type_handlers, on_wait=None, - **kwargs): + For **internal** usage only (not for public consumption). + """ + + DEFAULT_RETRY_OPTIONS = { + # The number of seconds we start sleeping for. + 'interval_start': 1, + # How many seconds added to the interval for each retry. + 'interval_step': 1, + # Maximum number of seconds to sleep between each retry. + 'interval_max': 1, + # Maximum number of times to retry. + 'max_retries': 3, + } + """Settings used (by default) to reconnect under transient failures. + + See: http://kombu.readthedocs.org/ (and connection ``ensure_options``) for + what these values imply/mean... + """ + + # This is the only provided option that should be an int, the others + # are allowed to be floats; used when we check that the user-provided + # value is valid... 
+ _RETRY_INT_OPTS = frozenset(['max_retries']) + + def __init__(self, topic, exchange, + type_handlers=None, on_wait=None, url=None, + transport=None, transport_options=None, + retry_options=None): self._topic = topic - self._exchange_name = exchange_name + self._exchange_name = exchange self._on_wait = on_wait - self._running = threading.Event() - self._dispatcher = dispatcher.TypeDispatcher(type_handlers) - self._dispatcher.add_requeue_filter( + self._running = threading_utils.Event() + self._dispatcher = dispatcher.TypeDispatcher( # NOTE(skudriashev): Process all incoming messages only if proxy is # running, otherwise requeue them. - lambda data, message: not self.is_running) + requeue_filters=[lambda data, message: not self.is_running], + type_handlers=type_handlers) - url = kwargs.get('url') - transport = kwargs.get('transport') - transport_opts = kwargs.get('transport_options') + ensure_options = self.DEFAULT_RETRY_OPTIONS.copy() + if retry_options is not None: + # Override the defaults with any user provided values... + for k in set(six.iterkeys(ensure_options)): + if k in retry_options: + # Ensure that the right type is passed in... 
+ val = retry_options[k] + if k in self._RETRY_INT_OPTS: + tmp_val = int(val) + else: + tmp_val = float(val) + if tmp_val < 0: + raise ValueError("Expected value greater or equal to" + " zero for 'retry_options' %s; got" + " %s instead" % (k, val)) + ensure_options[k] = tmp_val + self._ensure_options = ensure_options self._drain_events_timeout = DRAIN_EVENTS_PERIOD - if transport == 'memory' and transport_opts: - polling_interval = transport_opts.get('polling_interval') + if transport == 'memory' and transport_options: + polling_interval = transport_options.get('polling_interval') if polling_interval is not None: self._drain_events_timeout = polling_interval # create connection self._conn = kombu.Connection(url, transport=transport, - transport_options=transport_opts) + transport_options=transport_options) # create exchange self._exchange = kombu.Exchange(name=self._exchange_name, - durable=False, - auto_delete=True) + durable=False, auto_delete=True) + + @property + def dispatcher(self): + """Dispatcher internally used to dispatch message(s) that match.""" + return self._dispatcher @property def connection_details(self): + """Details about the connection (read-only).""" # The kombu drivers seem to use 'N/A' when they don't have a version... 
driver_version = self._conn.transport.driver_version() if driver_version and driver_version.lower() == 'n/a': driver_version = None - return misc.AttrDict( + if self._conn.transport_options: + transport_options = self._conn.transport_options.copy() + else: + transport_options = {} + transport = _TransportDetails( + options=transport_options, + driver_type=self._conn.transport.driver_type, + driver_name=self._conn.transport.driver_name, + driver_version=driver_version) + return _ConnectionDetails( uri=self._conn.as_uri(include_password=False), - transport=misc.AttrDict( - options=dict(self._conn.transport_options), - driver_type=self._conn.transport.driver_type, - driver_name=self._conn.transport.driver_name, - driver_version=driver_version)) + transport=transport) @property def is_running(self): """Return whether the proxy is running.""" return self._running.is_set() - def _make_queue(self, name, exchange, **kwargs): - """Make named queue for the given exchange.""" - return kombu.Queue(name="%s_%s" % (self._exchange_name, name), - exchange=exchange, - routing_key=name, - durable=False, - auto_delete=True, - **kwargs) + def _make_queue(self, routing_key, exchange, channel=None): + """Make a named queue for the given exchange.""" + queue_name = "%s_%s" % (self._exchange_name, routing_key) + return kombu.Queue(name=queue_name, + routing_key=routing_key, durable=False, + exchange=exchange, auto_delete=True, + channel=channel) - def publish(self, msg, routing_key, **kwargs): + def publish(self, msg, routing_key, reply_to=None, correlation_id=None): """Publish message to the named exchange with given routing key.""" - LOG.debug("Sending %s", msg) if isinstance(routing_key, six.string_types): routing_keys = [routing_key] else: routing_keys = routing_key - with kombu.producers[self._conn].acquire(block=True) as producer: - for routing_key in routing_keys: - queue = self._make_queue(routing_key, self._exchange) - producer.publish(body=msg.to_dict(), - 
routing_key=routing_key, - exchange=self._exchange, - declare=[queue], - type=msg.TYPE, - **kwargs) + + # Filter out any empty keys... + routing_keys = [r_k for r_k in routing_keys if r_k] + if not routing_keys: + LOG.warn("No routing key/s specified; unable to send '%s'" + " to any target queue on exchange '%s'", msg, + self._exchange_name) + return + + def _publish(producer, routing_key): + queue = self._make_queue(routing_key, self._exchange) + producer.publish(body=msg.to_dict(), + routing_key=routing_key, + exchange=self._exchange, + declare=[queue], + type=msg.TYPE, + reply_to=reply_to, + correlation_id=correlation_id) + + def _publish_errback(exc, interval): + LOG.exception('Publishing error: %s', exc) + LOG.info('Retry triggering in %s seconds', interval) + + LOG.debug("Sending '%s' message using routing keys %s", + msg, routing_keys) + with kombu.connections[self._conn].acquire(block=True) as conn: + with conn.Producer() as producer: + ensure_kwargs = self._ensure_options.copy() + ensure_kwargs['errback'] = _publish_errback + safe_publish = conn.ensure(producer, _publish, **ensure_kwargs) + for routing_key in routing_keys: + safe_publish(producer, routing_key) def start(self): """Start proxy.""" + + def _drain(conn, timeout): + try: + conn.drain_events(timeout=timeout) + except kombu_exceptions.TimeoutError: + pass + + def _drain_errback(exc, interval): + LOG.exception('Draining error: %s', exc) + LOG.info('Retry triggering in %s seconds', interval) + LOG.info("Starting to consume from the '%s' exchange.", self._exchange_name) with kombu.connections[self._conn].acquire(block=True) as conn: queue = self._make_queue(self._topic, self._exchange, channel=conn) - with conn.Consumer(queues=queue, - callbacks=[self._dispatcher.on_message]): + callbacks = [self._dispatcher.on_message] + with conn.Consumer(queues=queue, callbacks=callbacks) as consumer: + ensure_kwargs = self._ensure_options.copy() + ensure_kwargs['errback'] = _drain_errback + safe_drain = 
conn.ensure(consumer, _drain, **ensure_kwargs) self._running.set() - while self.is_running: - try: - conn.drain_events(timeout=self._drain_events_timeout) - except socket.timeout: - pass - if self._on_wait is not None: - self._on_wait() + try: + while self._running.is_set(): + safe_drain(conn, self._drain_events_timeout) + if self._on_wait is not None: + self._on_wait() + finally: + self._running.clear() def wait(self): """Wait until proxy is started.""" diff --git a/taskflow/engines/worker_based/server.py b/taskflow/engines/worker_based/server.py index 73625865..949b4691 100644 --- a/taskflow/engines/worker_based/server.py +++ b/taskflow/engines/worker_based/server.py @@ -15,12 +15,15 @@ # under the License. import functools -import logging import six from taskflow.engines.worker_based import protocol as pr from taskflow.engines.worker_based import proxy +from taskflow import logging +from taskflow.types import failure as ft +from taskflow.types import notifier as nt +from taskflow.utils import kombu_utils as ku from taskflow.utils import misc LOG = logging.getLogger(__name__) @@ -43,8 +46,10 @@ def delayed(executor): class Server(object): """Server implementation that waits for incoming tasks requests.""" - def __init__(self, topic, exchange, executor, endpoints, **kwargs): - handlers = { + def __init__(self, topic, exchange, executor, endpoints, + url=None, transport=None, transport_options=None, + retry_options=None): + type_handlers = { pr.NOTIFY: [ delayed(executor)(self._process_notify), functools.partial(pr.Notify.validate, response=False), @@ -54,10 +59,12 @@ class Server(object): pr.Request.validate, ], } - self._proxy = proxy.Proxy(topic, exchange, handlers, - on_wait=None, **kwargs) + self._proxy = proxy.Proxy(topic, exchange, + type_handlers=type_handlers, + url=url, transport=transport, + transport_options=transport_options, + retry_options=retry_options) self._topic = topic - self._executor = executor self._endpoints = dict([(endpoint.name, endpoint) 
for endpoint in endpoints]) @@ -70,21 +77,26 @@ class Server(object): failures=None, **kwargs): """Parse request before it can be further processed. - All `misc.Failure` objects that have been converted to dict on the - remote side will now converted back to `misc.Failure` objects. + All `failure.Failure` objects that have been converted to dict on the + remote side will now be converted back to `failure.Failure` objects. """ - action_args = dict(arguments=arguments, task_name=task_name) + # These arguments will eventually be given to the task executor + # so they need to be in a format it will accept (and using keyword + # argument names that it accepts)... + arguments = { + 'arguments': arguments, + } if result is not None: data_type, data = result if data_type == 'failure': - action_args['result'] = misc.Failure.from_dict(data) + arguments['result'] = ft.Failure.from_dict(data) else: - action_args['result'] = data + arguments['result'] = data if failures is not None: - action_args['failures'] = {} - for k, v in failures.items(): - action_args['failures'][k] = misc.Failure.from_dict(v) - return task_cls, action, action_args + arguments['failures'] = {} + for key, data in six.iteritems(failures): + arguments['failures'][key] = ft.Failure.from_dict(data) + return (task_cls, task_name, action, arguments) @staticmethod def _parse_message(message): @@ -100,62 +112,84 @@ class Server(object): except KeyError: raise ValueError("The '%s' message property is missing" % prop) - return properties - def _reply(self, reply_to, task_uuid, state=pr.FAILURE, **kwargs): - """Send reply to the `reply_to` queue.""" + def _reply(self, capture, reply_to, task_uuid, state=pr.FAILURE, **kwargs): + """Send a reply to the `reply_to` queue with the given information. + + Can capture failures to publish and if capturing will log associated + critical errors on behalf of the caller, and then return whether the + publish succeeded or not.
+ """ response = pr.Response(state, **kwargs) + published = False try: self._proxy.publish(response, reply_to, correlation_id=task_uuid) + published = True except Exception: - LOG.exception("Failed to send reply") + if not capture: + raise + LOG.critical("Failed to send reply to '%s' for task '%s' with" + " response %s", reply_to, task_uuid, response, + exc_info=True) + return published - def _on_update_progress(self, reply_to, task_uuid, task, event_data, - progress): - """Send task update progress notification.""" - self._reply(reply_to, task_uuid, pr.PROGRESS, event_data=event_data, - progress=progress) + def _on_event(self, reply_to, task_uuid, event_type, details): + """Send out a task event notification.""" + # NOTE(harlowja): the executor that will trigger this using the + # task notification/listener mechanism will handle logging if this + # fails, so that's why capture=False is used here. + self._reply(False, reply_to, task_uuid, pr.EVENT, + event_type=event_type, details=details) def _process_notify(self, notify, message): """Process notify message and reply back.""" - LOG.debug("Start processing notify message.") + LOG.debug("Started processing notify message '%s'", + ku.DelayedPretty(message)) try: reply_to = message.properties['reply_to'] - except Exception: - LOG.exception("The 'reply_to' message property is missing.") + except KeyError: + LOG.warn("The 'reply_to' message property is missing" + " in received notify message '%s'", + ku.DelayedPretty(message), exc_info=True) else: - self._proxy.publish( - msg=pr.Notify(topic=self._topic, tasks=self._endpoints.keys()), - routing_key=reply_to - ) + response = pr.Notify(topic=self._topic, + tasks=self._endpoints.keys()) + try: + self._proxy.publish(response, routing_key=reply_to) + except Exception: + LOG.critical("Failed to send reply to '%s' with notify" + " response '%s'", reply_to, response, + exc_info=True) def _process_request(self, request, message): """Process request message and reply back."""
- # NOTE(skudriashev): parse broker message first to get the `reply_to` - # and the `task_uuid` parameters to have possibility to reply back. - LOG.debug("Start processing request message.") + LOG.debug("Started processing request message '%s'", + ku.DelayedPretty(message)) try: + # NOTE(skudriashev): parse broker message first to get + # the `reply_to` and the `task_uuid` parameters to have + # possibility to reply back (if we can't parse, we can't respond + # in the first place...). reply_to, task_uuid = self._parse_message(message) except ValueError: - LOG.exception("Failed to parse broker message") + LOG.warn("Failed to parse request attributes from message '%s'", + ku.DelayedPretty(message), exc_info=True) return else: - # prepare task progress callback - progress_callback = functools.partial( - self._on_update_progress, reply_to, task_uuid) # prepare reply callback - reply_callback = functools.partial( - self._reply, reply_to, task_uuid) + reply_callback = functools.partial(self._reply, True, reply_to, + task_uuid) # parse request to get task name, action and action arguments try: - task_cls, action, action_args = self._parse_request(**request) - action_args.update(task_uuid=task_uuid, - progress_callback=progress_callback) + bundle = self._parse_request(**request) + task_cls, task_name, action, arguments = bundle + arguments['task_uuid'] = task_uuid except ValueError: with misc.capture_failure() as failure: - LOG.exception("Failed to parse request") + LOG.warn("Failed to parse request contents from message '%s'", + ku.DelayedPretty(message), exc_info=True) reply_callback(result=failure.to_dict()) return @@ -164,22 +198,61 @@ class Server(object): endpoint = self._endpoints[task_cls] except KeyError: with misc.capture_failure() as failure: - LOG.exception("The '%s' task endpoint does not exist", - task_cls) + LOG.warn("The '%s' task endpoint does not exist, unable" + " to continue processing request message '%s'", + task_cls, ku.DelayedPretty(message), 
exc_info=True) reply_callback(result=failure.to_dict()) return else: - reply_callback(state=pr.RUNNING) + try: + handler = getattr(endpoint, action) + except AttributeError: + with misc.capture_failure() as failure: + LOG.warn("The '%s' handler does not exist on task endpoint" + " '%s', unable to continue processing request" + " message '%s'", action, endpoint, + ku.DelayedPretty(message), exc_info=True) + reply_callback(result=failure.to_dict()) + return + else: + try: + task = endpoint.generate(name=task_name) + except Exception: + with misc.capture_failure() as failure: + LOG.warn("The '%s' task '%s' generation for request" + " message '%s' failed", endpoint, action, + ku.DelayedPretty(message), exc_info=True) + reply_callback(result=failure.to_dict()) + return + else: + if not reply_callback(state=pr.RUNNING): + return - # perform task action + # associate *any* events this task emits with a proxy that will + # emit them back to the engine... for handling at the engine side + # of things... + if task.notifier.can_be_registered(nt.Notifier.ANY): + task.notifier.register(nt.Notifier.ANY, + functools.partial(self._on_event, + reply_to, task_uuid)) + elif isinstance(task.notifier, nt.RestrictedNotifier): + # only proxy the allowable events then... 
+ for event_type in task.notifier.events_iter(): + task.notifier.register(event_type, + functools.partial(self._on_event, + reply_to, task_uuid)) + + # perform the task action try: - result = getattr(endpoint, action)(**action_args) + result = handler(task, **arguments) except Exception: with misc.capture_failure() as failure: - LOG.exception("The %s task execution failed", endpoint) + LOG.warn("The '%s' endpoint '%s' execution for request" + " message '%s' failed", endpoint, action, + ku.DelayedPretty(message), exc_info=True) reply_callback(result=failure.to_dict()) else: - if isinstance(result, misc.Failure): + if isinstance(result, ft.Failure): reply_callback(result=result.to_dict()) else: reply_callback(state=pr.SUCCESS, result=result) diff --git a/taskflow/engines/worker_based/types.py b/taskflow/engines/worker_based/types.py new file mode 100644 index 00000000..70185d52 --- /dev/null +++ b/taskflow/engines/worker_based/types.py @@ -0,0 +1,234 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
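The server-side event proxying above works by pre-binding the reply metadata into the notifier callback with `functools.partial`; a small self-contained sketch of the idea (stand-in notifier and hypothetical names, not the real `taskflow.types.notifier` API):

```python
import functools


class Notifier(object):
    """Minimal stand-in for taskflow.types.notifier.Notifier."""

    ANY = '*'

    def __init__(self):
        self._listeners = []

    def register(self, event_type, callback):
        self._listeners.append((event_type, callback))

    def notify(self, event_type, details):
        for registered, callback in self._listeners:
            if registered in (self.ANY, event_type):
                callback(event_type, details)


sent = []


def _on_event(reply_to, task_uuid, event_type, details):
    # Stand-in for Server._on_event: would publish an EVENT response
    # back to the executor's reply queue.
    sent.append((reply_to, task_uuid, event_type, details))


notifier = Notifier()
notifier.register(Notifier.ANY,
                  functools.partial(_on_event, 'executor-queue', 'uuid-1'))
notifier.notify('update_progress', {'progress': 0.5})
```

Binding `reply_to` and `task_uuid` up front means the task only ever emits the plain `(event_type, details)` pair, while each emitted event still lands on the correct executor queue tagged with the right request uuid.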
+ +import abc +import functools +import itertools +import random +import threading + +from oslo_utils import reflection +import six + +from taskflow.engines.worker_based import protocol as pr +from taskflow import logging +from taskflow.types import cache as base +from taskflow.types import periodic +from taskflow.types import timing as tt +from taskflow.utils import kombu_utils as ku + +LOG = logging.getLogger(__name__) + + +class RequestsCache(base.ExpiringCache): + """Represents a thread-safe requests cache.""" + + def get_waiting_requests(self, worker): + """Get list of waiting requests that the given worker can satisfy.""" + waiting_requests = [] + with self._lock: + for request in six.itervalues(self._data): + if request.state == pr.WAITING \ + and worker.performs(request.task): + waiting_requests.append(request) + return waiting_requests + + +# TODO(harlowja): this needs to be made better, once +# https://blueprints.launchpad.net/taskflow/+spec/wbe-worker-info is finally +# implemented we can go about using that instead. +class TopicWorker(object): + """A (read-only) worker and its relevant information + useful methods.""" + + _NO_IDENTITY = object() + + def __init__(self, topic, tasks, identity=_NO_IDENTITY): + self.tasks = [] + for task in tasks: + if not isinstance(task, six.string_types): + task = reflection.get_class_name(task) + self.tasks.append(task) + self.topic = topic + self.identity = identity + + def performs(self, task): + if not isinstance(task, six.string_types): + task = reflection.get_class_name(task) + return task in self.tasks + + def __eq__(self, other): + if not isinstance(other, TopicWorker): + return NotImplemented + if len(other.tasks) != len(self.tasks): + return False + if other.topic != self.topic: + return False + for task in other.tasks: + if not self.performs(task): + return False + # If one of the identities equals _NO_IDENTITY, then allow it to match...
+ if self._NO_IDENTITY in (self.identity, other.identity): + return True + else: + return other.identity == self.identity + + def __repr__(self): + r = reflection.get_class_name(self, fully_qualified=False) + if self.identity is not self._NO_IDENTITY: + r += "(identity=%s, tasks=%s, topic=%s)" % (self.identity, + self.tasks, self.topic) + else: + r += "(identity=*, tasks=%s, topic=%s)" % (self.tasks, self.topic) + return r + + +@six.add_metaclass(abc.ABCMeta) +class WorkerFinder(object): + """Base class for worker finders...""" + + def __init__(self): + self._cond = threading.Condition() + self.on_worker = None + + @abc.abstractmethod + def _total_workers(self): + """Returns how many workers are known.""" + + def wait_for_workers(self, workers=1, timeout=None): + """Waits for geq workers to notify they are ready to do work. + + NOTE(harlowja): if a timeout is provided this function will wait + until that timeout expires, if the amount of workers does not reach + the desired amount of workers before the timeout expires then this will + return how many workers are still needed, otherwise it will + return zero. + """ + if workers <= 0: + raise ValueError("Worker amount must be greater than zero") + watch = tt.StopWatch(duration=timeout) + watch.start() + with self._cond: + while self._total_workers() < workers: + if watch.expired(): + return max(0, workers - self._total_workers()) + self._cond.wait(watch.leftover(return_none=True)) + return 0 + + @staticmethod + def _match_worker(task, available_workers): + """Select a worker (from geq 1 workers) that can best perform the task. + + NOTE(harlowja): this method will be activated when there exists + one one greater than one potential workers that can perform a task, + the arguments provided will be the potential workers located and the + task that is being requested to perform and the result should be one + of those workers using whatever best-fit algorithm is possible (or + random at the least). 
+ """ + if len(available_workers) == 1: + return available_workers[0] + else: + return random.choice(available_workers) + + @abc.abstractmethod + def get_worker_for_task(self, task): + """Gets a worker that can perform a given task.""" + + def clear(self): + pass + + +class ProxyWorkerFinder(WorkerFinder): + """Requests and receives responses about workers topic+task details.""" + + def __init__(self, uuid, proxy, topics): + super(ProxyWorkerFinder, self).__init__() + self._proxy = proxy + self._topics = topics + self._workers = {} + self._uuid = uuid + self._proxy.dispatcher.type_handlers.update({ + pr.NOTIFY: [ + self._process_response, + functools.partial(pr.Notify.validate, response=True), + ], + }) + self._counter = itertools.count() + + def _next_worker(self, topic, tasks, temporary=False): + if not temporary: + return TopicWorker(topic, tasks, + identity=six.next(self._counter)) + else: + return TopicWorker(topic, tasks) + + @periodic.periodic(pr.NOTIFY_PERIOD) + def beat(self): + """Cyclically called to publish notify message to each topic.""" + self._proxy.publish(pr.Notify(), self._topics, reply_to=self._uuid) + + def _total_workers(self): + return len(self._workers) + + def _add(self, topic, tasks): + """Adds/updates a worker for the topic for the given tasks.""" + try: + worker = self._workers[topic] + # Check if we already have an equivalent worker, if so just + # return it... + if worker == self._next_worker(topic, tasks, temporary=True): + return (worker, False) + # This *fall through* is done so that if someone is using an + # active worker object that already exists that we just create + # a new one; so that the existing object doesn't get + # affected (workers objects are supposed to be immutable). 
+ except KeyError: + pass + worker = self._next_worker(topic, tasks) + self._workers[topic] = worker + return (worker, True) + + def _process_response(self, response, message): + """Process notify message from remote side.""" + LOG.debug("Started processing notify message '%s'", + ku.DelayedPretty(message)) + topic = response['topic'] + tasks = response['tasks'] + with self._cond: + worker, new_or_updated = self._add(topic, tasks) + if new_or_updated: + LOG.debug("Received notification about worker '%s' (%s" + " total workers are currently known)", worker, + self._total_workers()) + self._cond.notify_all() + if self.on_worker is not None and new_or_updated: + self.on_worker(worker) + + def clear(self): + with self._cond: + self._workers.clear() + self._cond.notify_all() + + def get_worker_for_task(self, task): + available_workers = [] + with self._cond: + for worker in six.itervalues(self._workers): + if worker.performs(task): + available_workers.append(worker) + if available_workers: + return self._match_worker(task, available_workers) + else: + return None diff --git a/taskflow/engines/worker_based/worker.py b/taskflow/engines/worker_based/worker.py index 49816eab..2110b92b 100644 --- a/taskflow/engines/worker_based/worker.py +++ b/taskflow/engines/worker_based/worker.py @@ -14,19 +14,20 @@ # License for the specific language governing permissions and limitations # under the License. 
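The `ProxyWorkerFinder.get_worker_for_task()` logic added above (collect every known worker whose task list contains the requested task, then pick one at random when several qualify) can be sketched without taskflow as:

```python
import random


class TopicWorker:
    """Minimal sketch of the worker record used by the finder above."""

    def __init__(self, topic, tasks):
        self.topic = topic
        self.tasks = list(tasks)

    def performs(self, task_name):
        return task_name in self.tasks


def get_worker_for_task(workers, task_name):
    # Mirrors get_worker_for_task() + _match_worker(): filter capable
    # workers, return the single match directly, otherwise pick randomly.
    available = [w for w in workers.values() if w.performs(task_name)]
    if not available:
        return None
    if len(available) == 1:
        return available[0]
    return random.choice(available)


workers = {
    'topic-a': TopicWorker('topic-a', ['my.tasks.Add']),
    'topic-b': TopicWorker('topic-b', ['my.tasks.Multiply']),
}
```

Random choice is explicitly called out in the source docstring as the fallback "best-fit" strategy when no smarter matching is available.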
-import logging import os import platform import socket import string import sys -from concurrent import futures +from oslo_utils import reflection from taskflow.engines.worker_based import endpoint from taskflow.engines.worker_based import server +from taskflow import logging from taskflow import task as t_task -from taskflow.utils import reflection +from taskflow.types import futures +from taskflow.utils import misc from taskflow.utils import threading_utils as tu from taskflow import version @@ -69,46 +70,35 @@ class Worker(object): :param url: broker url :param exchange: broker exchange name :param topic: topic name under which worker is stated - :param tasks: tasks list that worker is capable to perform - - Tasks list item can be one of the following types: - 1. String: - - 1.1 Python module name: - - > tasks=['taskflow.tests.utils'] - - 1.2. Task class (BaseTask subclass) name: - - > tasks=['taskflow.test.utils.DummyTask'] - - 3. Python module: - - > from taskflow.tests import utils - > tasks=[utils] - - 4. Task class (BaseTask subclass): - - > from taskflow.tests import utils - > tasks=[utils.DummyTask] - - :param executor: custom executor object that is used for processing - requests in separate threads - :keyword threads_count: threads count to be passed to the default executor - :keyword transport: transport to be used (e.g. amqp, memory, etc.) - :keyword transport_options: transport specific options + :param tasks: task list that worker is capable of performing, items in + the list can be one of the following types; 1, a string naming the + python module name to search for tasks in or the task class name; 2, a + python module to search for tasks in; 3, a task class object that + will be used to create tasks from. 
+ :param executor: custom executor object that can used for processing + requests in separate threads (if not provided one will be created) + :param threads_count: threads count to be passed to the + default executor (used only if an executor is not + passed in) + :param transport: transport to be used (e.g. amqp, memory, etc.) + :param transport_options: transport specific options (see: + http://kombu.readthedocs.org/ for what these + options imply and are expected to be) + :param retry_options: retry specific options + (see: :py:attr:`~.proxy.Proxy.DEFAULT_RETRY_OPTIONS`) """ - def __init__(self, exchange, topic, tasks, executor=None, **kwargs): + def __init__(self, exchange, topic, tasks, + executor=None, threads_count=None, url=None, + transport=None, transport_options=None, + retry_options=None): self._topic = topic self._executor = executor self._owns_executor = False self._threads_count = -1 if self._executor is None: - if 'threads_count' in kwargs: - self._threads_count = int(kwargs.pop('threads_count')) - if self._threads_count <= 0: - raise ValueError("threads_count provided must be > 0") + if threads_count is not None: + self._threads_count = int(threads_count) else: self._threads_count = tu.get_optimal_thread_count() self._executor = futures.ThreadPoolExecutor(self._threads_count) @@ -116,12 +106,15 @@ class Worker(object): self._endpoints = self._derive_endpoints(tasks) self._exchange = exchange self._server = server.Server(topic, exchange, self._executor, - self._endpoints, **kwargs) + self._endpoints, url=url, + transport=transport, + transport_options=transport_options, + retry_options=retry_options) @staticmethod def _derive_endpoints(tasks): """Derive endpoints from list of strings, classes or packages.""" - derived_tasks = reflection.find_subclasses(tasks, t_task.BaseTask) + derived_tasks = misc.find_subclasses(tasks, t_task.BaseTask) return [endpoint.Endpoint(task) for task in derived_tasks] def _generate_banner(self): @@ -158,14 +151,23 @@ 
class Worker(object): pass tpl_params['platform'] = platform.platform() tpl_params['thread_id'] = tu.get_ident() - return BANNER_TEMPLATE.substitute(BANNER_TEMPLATE.defaults, - **tpl_params) + banner = BANNER_TEMPLATE.substitute(BANNER_TEMPLATE.defaults, + **tpl_params) + # NOTE(harlowja): this is needed since the template in this file + # will always have newlines that end with '\n' (even on different + # platforms due to the way this source file is encoded) so we have + # to do this little dance to make it platform neutral... + return misc.fix_newlines(banner) - def run(self, display_banner=True): + def run(self, display_banner=True, banner_writer=None): """Runs the worker.""" if display_banner: - for line in self._generate_banner().splitlines(): - LOG.info(line) + banner = self._generate_banner() + if banner_writer is None: + for line in banner.splitlines(): + LOG.info(line) + else: + banner_writer(banner) self._server.start() def wait(self): diff --git a/taskflow/examples/alphabet_soup.py b/taskflow/examples/alphabet_soup.py new file mode 100644 index 00000000..a287f538 --- /dev/null +++ b/taskflow/examples/alphabet_soup.py @@ -0,0 +1,93 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
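The banner hunk above routes the generated banner through a `misc.fix_newlines()` helper so the template's embedded `'\n'` endings come out platform-neutral. A plausible sketch of what such a helper does (this implementation is an assumption, not taskflow's actual `misc.fix_newlines` source):

```python
def fix_newlines(text, replacement='\n'):
    # Assumed behavior: normalize any mix of '\r\n' / '\r' / '\n' line
    # endings into one consistent form by splitting and rejoining.
    return replacement.join(text.splitlines())


banner = "line one\r\nline two\rline three\n"
```

This also explains the new `banner_writer` parameter of `run()`: once the banner is a single normalized string, it can be handed to any writer callable instead of being logged line by line.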
+
+import fractions
+import functools
+import logging
+import os
+import string
+import sys
+import time
+
+logging.basicConfig(level=logging.ERROR)
+
+self_dir = os.path.abspath(os.path.dirname(__file__))
+top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__),
+                                       os.pardir,
+                                       os.pardir))
+sys.path.insert(0, top_dir)
+sys.path.insert(0, self_dir)
+
+from taskflow import engines
+from taskflow import exceptions
+from taskflow.patterns import linear_flow
+from taskflow import task
+
+
+# In this example we show how a simple linear set of tasks can be executed
+# using local processes (and not threads or remote workers) with minimal (if
+# any) modification to those tasks to make them safe to run in this mode.
+#
+# This is useful since it allows further scaling up your workflows when thread
+# execution starts to become a bottleneck (which it can start to be due to the
+# GIL in python). It also offers an intermediary scalable runner that can be
+# used when the scale and/or setup of remote workers is not desirable.
+
+
+def progress_printer(task, event_type, details):
+    # This callback, attached to each task, will be called in the local
+    # process (not the child processes)...
+    progress = details.pop('progress')
+    progress = int(progress * 100.0)
+    print("Task '%s' reached %d%% completion" % (task.name, progress))
+
+
+class AlphabetTask(task.Task):
+    # Seconds of delay between each progress part.
+    _DELAY = 0.1
+
+    # This task will run in X main stages (each with a different progress
+    # report that will be delivered back to the running process...). The
+    # initial 0% and 100% are triggered automatically by the engine when
+    # a task is started and finished (so that's why those are not emitted
+    # here).
+ _PROGRESS_PARTS = [fractions.Fraction("%s/5" % x) for x in range(1, 5)] + + def execute(self): + for p in self._PROGRESS_PARTS: + self.update_progress(p) + time.sleep(self._DELAY) + + +print("Constructing...") +soup = linear_flow.Flow("alphabet-soup") +for letter in string.ascii_lowercase: + abc = AlphabetTask(letter) + abc.notifier.register(task.EVENT_UPDATE_PROGRESS, + functools.partial(progress_printer, abc)) + soup.add(abc) +try: + print("Loading...") + e = engines.load(soup, engine='parallel', executor='processes') + print("Compiling...") + e.compile() + print("Preparing...") + e.prepare() + print("Running...") + e.run() + print("Done...") +except exceptions.NotImplementedError as e: + print(e) diff --git a/taskflow/examples/build_a_car.py b/taskflow/examples/build_a_car.py index 1655f2a6..02be020e 100644 --- a/taskflow/examples/build_a_car.py +++ b/taskflow/examples/build_a_car.py @@ -31,6 +31,9 @@ import taskflow.engines from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow import task +from taskflow.types import notifier + +ANY = notifier.Notifier.ANY import example_utils as eu # noqa @@ -160,11 +163,11 @@ spec = { engine = taskflow.engines.load(flow, store={'spec': spec.copy()}) -# This registers all (*) state transitions to trigger a call to the flow_watch -# function for flow state transitions, and registers the same all (*) state -# transitions for task state transitions. -engine.notifier.register('*', flow_watch) -engine.task_notifier.register('*', task_watch) +# This registers all (ANY) state transitions to trigger a call to the +# flow_watch function for flow state transitions, and registers the +# same all (ANY) state transitions for task state transitions. 
+engine.notifier.register(ANY, flow_watch) +engine.task_notifier.register(ANY, task_watch) eu.print_wrapped("Building a car") engine.run() @@ -176,8 +179,8 @@ engine.run() spec['doors'] = 5 engine = taskflow.engines.load(flow, store={'spec': spec.copy()}) -engine.notifier.register('*', flow_watch) -engine.task_notifier.register('*', task_watch) +engine.notifier.register(ANY, flow_watch) +engine.task_notifier.register(ANY, task_watch) eu.print_wrapped("Building a wrong car that doesn't match specification") try: diff --git a/taskflow/examples/calculate_in_parallel.py b/taskflow/examples/calculate_in_parallel.py index 0215f956..7ab32fae 100644 --- a/taskflow/examples/calculate_in_parallel.py +++ b/taskflow/examples/calculate_in_parallel.py @@ -93,5 +93,5 @@ flow = lf.Flow('root').add( # The result here will be all results (from all tasks) which is stored in an # in-memory storage location that backs this engine since it is not configured # with persistence storage. -result = taskflow.engines.run(flow, engine_conf='parallel') +result = taskflow.engines.run(flow, engine='parallel') print(result) diff --git a/taskflow/examples/create_parallel_volume.py b/taskflow/examples/create_parallel_volume.py index de511adf..c23bf342 100644 --- a/taskflow/examples/create_parallel_volume.py +++ b/taskflow/examples/create_parallel_volume.py @@ -28,11 +28,12 @@ top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir)) sys.path.insert(0, top_dir) +from oslo_utils import reflection + from taskflow import engines from taskflow.listeners import printing from taskflow.patterns import unordered_flow as uf from taskflow import task -from taskflow.utils import reflection # INTRO: This examples shows how unordered_flow can be used to create a large # number of fake volumes in parallel (or serially, depending on a constant that @@ -64,13 +65,9 @@ VOLUME_COUNT = 5 # time difference that this causes. 
SERIAL = False if SERIAL: - engine_conf = { - 'engine': 'serial', - } + engine = 'serial' else: - engine_conf = { - 'engine': 'parallel', - } + engine = 'parallel' class VolumeCreator(task.Task): @@ -106,7 +103,7 @@ for i in range(0, VOLUME_COUNT): # Show how much time the overall engine loading and running takes. with show_time(name=flow.name.title()): - eng = engines.load(flow, engine_conf=engine_conf) + eng = engines.load(flow, engine=engine) # This context manager automatically adds (and automatically removes) a # helpful set of state transition notification printing helper utilities # that show you exactly what transitions the engine is going through diff --git a/taskflow/examples/delayed_return.py b/taskflow/examples/delayed_return.py index 46578621..5ca70078 100644 --- a/taskflow/examples/delayed_return.py +++ b/taskflow/examples/delayed_return.py @@ -39,14 +39,14 @@ from taskflow.listeners import base from taskflow.patterns import linear_flow as lf from taskflow import states from taskflow import task -from taskflow.utils import misc +from taskflow.types import notifier class PokeFutureListener(base.ListenerBase): def __init__(self, engine, future, task_name): super(PokeFutureListener, self).__init__( engine, - task_listen_for=(misc.Notifier.ANY,), + task_listen_for=(notifier.Notifier.ANY,), flow_listen_for=[]) self._future = future self._task_name = task_name @@ -74,7 +74,7 @@ class Bye(task.Task): def return_from_flow(pool): wf = lf.Flow("root").add(Hi("hi"), Bye("bye")) - eng = taskflow.engines.load(wf, engine_conf='serial') + eng = taskflow.engines.load(wf, engine='serial') f = futures.Future() watcher = PokeFutureListener(eng, f, 'hi') watcher.register() diff --git a/taskflow/examples/echo_listener.py b/taskflow/examples/echo_listener.py new file mode 100644 index 00000000..a8eebf60 --- /dev/null +++ b/taskflow/examples/echo_listener.py @@ -0,0 +1,56 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. 
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+import os
+import sys
+
+logging.basicConfig(level=logging.DEBUG)
+
+top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__),
+                                       os.pardir,
+                                       os.pardir))
+sys.path.insert(0, top_dir)
+
+from taskflow import engines
+from taskflow.listeners import logging as logging_listener
+from taskflow.patterns import linear_flow as lf
+from taskflow import task
+
+# INTRO: This example walks through a miniature workflow which will do a
+# simple echo operation; during this execution a listener is associated with
+# the engine to receive all notifications about what the flow has performed;
+# this example dumps that output to stdout for viewing (at debug level
+# to show all the information which is possible).
+
+
+class Echo(task.Task):
+    def execute(self):
+        print(self.name)
+
+
+# Generate the work to be done (but don't do it yet).
+wf = lf.Flow('abc')
+wf.add(Echo('a'))
+wf.add(Echo('b'))
+wf.add(Echo('c'))
+
+# This will associate the listener with the engine (the listener
+# will automatically register for notifications with the engine and deregister
+# when the context is exited).
+e = engines.load(wf) +with logging_listener.DynamicLoggingListener(e): + e.run() diff --git a/taskflow/examples/fake_billing.py b/taskflow/examples/fake_billing.py index 9a421f92..5d26a2dc 100644 --- a/taskflow/examples/fake_billing.py +++ b/taskflow/examples/fake_billing.py @@ -27,10 +27,10 @@ top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir)) sys.path.insert(0, top_dir) +from oslo_utils import uuidutils from taskflow import engines from taskflow.listeners import printing -from taskflow.openstack.common import uuidutils from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow import task @@ -148,6 +148,12 @@ class DeclareSuccess(task.Task): print("All data processed and sent to %s" % (sent_to)) +class DummyUser(object): + def __init__(self, user, id_): + self.user = user + self.id = id_ + + # Resources (db handles and similar) of course can *not* be persisted so we # need to make sure that we pass this resource fetcher to the tasks constructor # so that the tasks have access to any needed resources (the resources are @@ -168,9 +174,9 @@ flow.add(sub_flow) # prepopulating this allows the tasks that dependent on the 'request' variable # to start processing (in this case this is the ExtractInputRequest task). 
store = { - 'request': misc.AttrDict(user="bob", id="1.35"), + 'request': DummyUser(user="bob", id_="1.35"), } -eng = engines.load(flow, engine_conf='serial', store=store) +eng = engines.load(flow, engine='serial', store=store) # This context manager automatically adds (and automatically removes) a # helpful set of state transition notification printing helper utilities diff --git a/taskflow/examples/graph_flow.py b/taskflow/examples/graph_flow.py index 99dfdd45..9f28dc71 100644 --- a/taskflow/examples/graph_flow.py +++ b/taskflow/examples/graph_flow.py @@ -81,11 +81,11 @@ store = { } result = taskflow.engines.run( - flow, engine_conf='serial', store=store) + flow, engine='serial', store=store) print("Single threaded engine result %s" % result) result = taskflow.engines.run( - flow, engine_conf='parallel', store=store) + flow, engine='parallel', store=store) print("Multi threaded engine result %s" % result) diff --git a/taskflow/examples/hello_world.py b/taskflow/examples/hello_world.py new file mode 100644 index 00000000..f8e0bb23 --- /dev/null +++ b/taskflow/examples/hello_world.py @@ -0,0 +1,105 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
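The fake_billing change above swaps the removed `misc.AttrDict` for a small explicit `DummyUser` value object when seeding the engine's `store`. The pattern, reproduced standalone:

```python
class DummyUser(object):
    # Mirrors the DummyUser added in fake_billing.py: a plain class with
    # named attributes replaces the dynamic attribute-dict that was removed.
    def __init__(self, user, id_):
        self.user = user
        self.id = id_


# Prepopulating the store lets tasks depending on 'request' start running.
store = {
    'request': DummyUser(user="bob", id_="1.35"),
}
```

An explicit class keeps the stored object's shape visible and easy to persist, which is presumably why the utility dict was dropped.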
+
+import logging
+import os
+import sys
+
+logging.basicConfig(level=logging.ERROR)
+
+top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__),
+                                       os.pardir,
+                                       os.pardir))
+sys.path.insert(0, top_dir)
+
+from taskflow import engines
+from taskflow.patterns import linear_flow as lf
+from taskflow.patterns import unordered_flow as uf
+from taskflow import task
+from taskflow.types import futures
+from taskflow.utils import eventlet_utils
+
+
+# INTRO: This is the de facto hello world equivalent for taskflow; it shows
+# how an overly simplistic workflow can be created that runs using different
+# engines using different styles of execution (all can be used to run in
+# parallel if a workflow is provided that is parallelizable).
+
+class PrinterTask(task.Task):
+    def __init__(self, name, show_name=True, inject=None):
+        super(PrinterTask, self).__init__(name, inject=inject)
+        self._show_name = show_name
+
+    def execute(self, output):
+        if self._show_name:
+            print("%s: %s" % (self.name, output))
+        else:
+            print(output)
+
+
+# This will be the work that we want done, which for this example is just to
+# print 'hello world' (like a song) using different tasks and different
+# execution models.
+song = lf.Flow("beats")
+
+# Unordered flows, when run, can be run in parallel; and a chorus is everyone
+# singing at once of course!
+hi_chorus = uf.Flow('hello')
+world_chorus = uf.Flow('world')
+for (name, hello, world) in [('bob', 'hello', 'world'),
+                             ('joe', 'hellooo', 'worllllld'),
+                             ('sue', "helloooooo!", 'wooorllld!')]:
+    hi_chorus.add(PrinterTask("%s@hello" % name,
+                              # This will show up to the execute() method of
+                              # the task as the argument named 'output' (which
+                              # will allow us to print the character we want).
+ inject={'output': hello})) + world_chorus.add(PrinterTask("%s@world" % name, + inject={'output': world})) + +# The composition starts with the conductor and then runs in sequence with +# the chorus running in parallel, but no matter what the 'hello' chorus must +# always run before the 'world' chorus (otherwise the world will fall apart). +song.add(PrinterTask("conductor@begin", + show_name=False, inject={'output': "*ding*"}), + hi_chorus, + world_chorus, + PrinterTask("conductor@end", + show_name=False, inject={'output': "*dong*"})) + +# Run in parallel using eventlet green threads... +if eventlet_utils.EVENTLET_AVAILABLE: + with futures.GreenThreadPoolExecutor() as executor: + e = engines.load(song, executor=executor, engine='parallel') + e.run() + + +# Run in parallel using real threads... +with futures.ThreadPoolExecutor(max_workers=1) as executor: + e = engines.load(song, executor=executor, engine='parallel') + e.run() + + +# Run in parallel using external processes... +with futures.ProcessPoolExecutor(max_workers=1) as executor: + e = engines.load(song, executor=executor, engine='parallel') + e.run() + + +# Run serially (aka, if the workflow could have been ran in parallel, it will +# not be when ran in this mode)... +e = engines.load(song, engine='serial') +e.run() diff --git a/taskflow/examples/jobboard_produce_consume_colors.py b/taskflow/examples/jobboard_produce_consume_colors.py new file mode 100644 index 00000000..80c2acba --- /dev/null +++ b/taskflow/examples/jobboard_produce_consume_colors.py @@ -0,0 +1,177 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import collections
+import contextlib
+import logging
+import os
+import random
+import sys
+import threading
+import time
+
+logging.basicConfig(level=logging.ERROR)
+
+top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__),
+                                       os.pardir,
+                                       os.pardir))
+sys.path.insert(0, top_dir)
+
+from six.moves import range as compat_range
+from zake import fake_client
+
+from taskflow import exceptions as excp
+from taskflow.jobs import backends
+from taskflow.utils import threading_utils
+
+# In this example we show how a jobboard can be used to post work for other
+# entities to work on. This example creates a set of jobs using one producer
+# thread (typically this would be split across many machines) and then has
+# other worker threads with their own jobboards select work using a given
+# filter [red/blue] and then perform that work (consuming or abandoning
+# the job after it has been completed or failed).
+
+# Things to note:
+# - No persistence layer is used (or logbook), just the job details are used
+#   to determine if a job should be selected by a worker or not.
+# - This example runs in a single process (this is expected to be atypical
+#   but this example shows that it can be done if needed, for testing...)
+# - The iterjobs(), claim(), consume()/abandon() worker workflow.
+# - The post() producer workflow.
+
+SHARED_CONF = {
+    'path': "/taskflow/jobs",
+    'board': 'zookeeper',
+}
+
+# How many workers and producers of work will be created (as threads).
+PRODUCERS = 3
+WORKERS = 5
+
+# How many units of work each producer will create.
+PRODUCER_UNITS = 10 + +# How many units of work are expected to be produced (used so workers can +# know when to stop running and shutdown, typically this would not be a +# a value but we have to limit this examples execution time to be less than +# infinity). +EXPECTED_UNITS = PRODUCER_UNITS * PRODUCERS + +# Delay between producing/consuming more work. +WORKER_DELAY, PRODUCER_DELAY = (0.5, 0.5) + +# To ensure threads don't trample other threads output. +STDOUT_LOCK = threading.Lock() + + +def dispatch_work(job): + # This is where the jobs contained work *would* be done + time.sleep(1.0) + + +def safe_print(name, message, prefix=""): + with STDOUT_LOCK: + if prefix: + print("%s %s: %s" % (prefix, name, message)) + else: + print("%s: %s" % (name, message)) + + +def worker(ident, client, consumed): + # Create a personal board (using the same client so that it works in + # the same process) and start looking for jobs on the board that we want + # to perform. + name = "W-%s" % (ident) + safe_print(name, "started") + claimed_jobs = 0 + consumed_jobs = 0 + abandoned_jobs = 0 + with backends.backend(name, SHARED_CONF.copy(), client=client) as board: + while len(consumed) != EXPECTED_UNITS: + favorite_color = random.choice(['blue', 'red']) + for job in board.iterjobs(ensure_fresh=True, only_unclaimed=True): + # See if we should even bother with it... 
+ if job.details.get('color') != favorite_color: + continue + safe_print(name, "'%s' [attempting claim]" % (job)) + try: + board.claim(job, name) + claimed_jobs += 1 + safe_print(name, "'%s' [claimed]" % (job)) + except (excp.NotFound, excp.UnclaimableJob): + safe_print(name, "'%s' [claim unsuccessful]" % (job)) + else: + try: + dispatch_work(job) + board.consume(job, name) + safe_print(name, "'%s' [consumed]" % (job)) + consumed_jobs += 1 + consumed.append(job) + except Exception: + board.abandon(job, name) + abandoned_jobs += 1 + safe_print(name, "'%s' [abandoned]" % (job)) + time.sleep(WORKER_DELAY) + safe_print(name, + "finished (claimed %s jobs, consumed %s jobs," + " abandoned %s jobs)" % (claimed_jobs, consumed_jobs, + abandoned_jobs), prefix=">>>") + + +def producer(ident, client): + # Create a personal board (using the same client so that it works in + # the same process) and start posting jobs on the board that we want + # some entity to perform. + name = "P-%s" % (ident) + safe_print(name, "started") + with backends.backend(name, SHARED_CONF.copy(), client=client) as board: + for i in compat_range(0, PRODUCER_UNITS): + job_name = "%s-%s" % (name, i) + details = { + 'color': random.choice(['red', 'blue']), + } + job = board.post(job_name, book=None, details=details) + safe_print(name, "'%s' [posted]" % (job)) + time.sleep(PRODUCER_DELAY) + safe_print(name, "finished", prefix=">>>") + + +def main(): + with contextlib.closing(fake_client.FakeClient()) as c: + created = [] + for i in compat_range(0, PRODUCERS): + p = threading_utils.daemon_thread(producer, i + 1, c) + created.append(p) + p.start() + consumed = collections.deque() + for i in compat_range(0, WORKERS): + w = threading_utils.daemon_thread(worker, i + 1, c, consumed) + created.append(w) + w.start() + while created: + t = created.pop() + t.join() + # At the end there should be nothing leftover, let's verify that. 
+ board = backends.fetch('verifier', SHARED_CONF.copy(), client=c) + board.connect() + with contextlib.closing(board): + if board.job_count != 0 or len(consumed) != EXPECTED_UNITS: + return 1 + return 0 + + +if __name__ == "__main__": + sys.exit(main()) diff --git a/taskflow/examples/parallel_table_multiply.py b/taskflow/examples/parallel_table_multiply.py new file mode 100644 index 00000000..f4550c20 --- /dev/null +++ b/taskflow/examples/parallel_table_multiply.py @@ -0,0 +1,129 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import csv +import logging +import os +import random +import sys + +logging.basicConfig(level=logging.ERROR) + +top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), + os.pardir, + os.pardir)) +sys.path.insert(0, top_dir) + +from six.moves import range as compat_range + +from taskflow import engines +from taskflow.patterns import unordered_flow as uf +from taskflow import task +from taskflow.types import futures +from taskflow.utils import eventlet_utils + +# INTRO: This example walks through a miniature workflow which does a parallel +# table modification where each row in the table gets adjusted by a thread, or +# green thread (if eventlet is available) in parallel and then the result +# is reformed into a new table and some verifications are performed on it +# to ensure everything went as expected. 
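The parallel table multiply example introduced above uses an unordered flow so each row is transformed independently. The same per-row fan-out can be sketched with a plain stdlib thread pool (note the constant is spelled `MULTIPLIER` here; the example's `MULTIPLER` is the same value with a typo):

```python
from concurrent.futures import ThreadPoolExecutor

MULTIPLIER = 10


def multiply_row(indexed_row):
    # Carry the row index along so results can be reassembled in order,
    # just as RowMultiplier stores its index for the later reassembly step.
    index, row = indexed_row
    return index, [value * MULTIPLIER for value in row]


table = [[1, 2], [3, 4]]
with ThreadPoolExecutor(max_workers=2) as executor:
    results = dict(executor.map(multiply_row, enumerate(table)))
computed = [results[i] for i in range(len(table))]
```

This is only the concurrency shape; the taskflow version additionally gets state tracking, `run_iter()` progress, and pluggable executors (green threads when eventlet is available).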
+
+
+MULTIPLIER = 10
+
+
+class RowMultiplier(task.Task):
+    """Performs a modification of an input row, creating an output row."""
+
+    def __init__(self, name, index, row, multiplier):
+        super(RowMultiplier, self).__init__(name=name)
+        self.index = index
+        self.multiplier = multiplier
+        self.row = row
+
+    def execute(self):
+        return [r * self.multiplier for r in self.row]
+
+
+def make_flow(table):
+    # This creation will allow for parallel computation (since the flow here
+    # is specifically unordered; and when things are unordered they have
+    # no dependencies and when things have no dependencies they can just be
+    # run at the same time, limited in concurrency by the executor or max
+    # workers of that executor...)
+    f = uf.Flow("root")
+    for i, row in enumerate(table):
+        f.add(RowMultiplier("m-%s" % i, i, row, MULTIPLIER))
+    # NOTE(harlowja): at this point nothing has run, the above is just
+    # defining what should be done (but not actually doing it) and associating
+    # the ordering dependencies that should be enforced (the flow pattern used
+    # forces this), the engine in the later main() function will actually
+    # perform this work...
+    return f
+
+
+def main():
+    if len(sys.argv) == 2:
+        tbl = []
+        with open(sys.argv[1], 'rb') as fh:
+            reader = csv.reader(fh)
+            for row in reader:
+                tbl.append([float(r) if r else 0.0 for r in row])
+    else:
+        # Make some random table out of thin air...
+        tbl = []
+        cols = random.randint(1, 100)
+        rows = random.randint(1, 100)
+        for _i in compat_range(0, rows):
+            row = []
+            for _j in compat_range(0, cols):
+                row.append(random.random())
+            tbl.append(row)
+
+    # Generate the work to be done.
+    f = make_flow(tbl)
+
+    # Now run it (using the specified executor)...
+ if eventlet_utils.EVENTLET_AVAILABLE: + executor = futures.GreenThreadPoolExecutor(max_workers=5) + else: + executor = futures.ThreadPoolExecutor(max_workers=5) + try: + e = engines.load(f, engine='parallel', executor=executor) + for st in e.run_iter(): + print(st) + finally: + executor.shutdown() + + # Find the old rows and put them into place... + # + # TODO(harlowja): probably easier just to sort instead of search... + computed_tbl = [] + for i in compat_range(0, len(tbl)): + for t in f: + if t.index == i: + computed_tbl.append(e.storage.get(t.name)) + + # Do some basic validation (which causes the return code of this process + # to be different if things were not as expected...) + if len(computed_tbl) != len(tbl): + return 1 + else: + return 0 + + +if __name__ == "__main__": + sys.exit(main()) diff --git a/taskflow/examples/persistence_example.py b/taskflow/examples/persistence_example.py index 720914cd..fe5968fe 100644 --- a/taskflow/examples/persistence_example.py +++ b/taskflow/examples/persistence_example.py @@ -91,20 +91,15 @@ else: blowup = True with eu.get_backend(backend_uri) as backend: - # Now we can run. - engine_config = { - 'backend': backend, - 'engine_conf': 'serial', - 'book': logbook.LogBook("my-test"), - } - # Make a flow that will blowup if the file doesn't exist previously, if it # did exist, assume we won't blowup (and therefore this shows the undo # and redo that a flow will go through). 
+ book = logbook.LogBook("my-test") flow = make_flow(blowup=blowup) eu.print_wrapped("Running") try: - eng = engines.load(flow, **engine_config) + eng = engines.load(flow, engine='serial', + backend=backend, book=book) eng.run() if not blowup: eu.rm_path(persist_path) @@ -115,4 +110,4 @@ with eu.get_backend(backend_uri) as backend: traceback.print_exc(file=sys.stdout) eu.print_wrapped("Book contents") - print(p_utils.pformat(engine_config['book'])) + print(p_utils.pformat(book)) diff --git a/taskflow/examples/resume_many_flows.py b/taskflow/examples/resume_many_flows.py index 08cc1740..88e55510 100644 --- a/taskflow/examples/resume_many_flows.py +++ b/taskflow/examples/resume_many_flows.py @@ -48,7 +48,7 @@ def _exec(cmd, add_env=None): stdout=subprocess.PIPE, stderr=sys.stderr) - stdout, stderr = proc.communicate() + stdout, _stderr = proc.communicate() rc = proc.returncode if rc != 0: raise RuntimeError("Could not run %s [%s]", cmd, rc) diff --git a/taskflow/examples/resume_vm_boot.py b/taskflow/examples/resume_vm_boot.py index acdf42b5..4e93f787 100644 --- a/taskflow/examples/resume_vm_boot.py +++ b/taskflow/examples/resume_vm_boot.py @@ -31,13 +31,15 @@ top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) +from oslo_utils import uuidutils + from taskflow import engines from taskflow import exceptions as exc -from taskflow.openstack.common import uuidutils from taskflow.patterns import graph_flow as gf from taskflow.patterns import linear_flow as lf from taskflow import task -from taskflow.utils import eventlet_utils as e_utils +from taskflow.types import futures +from taskflow.utils import eventlet_utils from taskflow.utils import persistence_utils as p_utils import example_utils as eu # noqa @@ -141,7 +143,7 @@ class AllocateIP(task.Task): def execute(self, vm_spec): ips = [] - for i in range(0, vm_spec.get('ips', 0)): + for _i in range(0, vm_spec.get('ips', 0)): ips.append("192.168.0.%s" % 
(random.randint(1, 254))) return ips @@ -235,11 +237,9 @@ with eu.get_backend() as backend: flow_id = None # Set up how we want our engine to run, serial, parallel... - engine_conf = { - 'engine': 'parallel', - } - if e_utils.EVENTLET_AVAILABLE: - engine_conf['executor'] = e_utils.GreenExecutor(5) + executor = None + if eventlet_utils.EVENTLET_AVAILABLE: + executor = futures.GreenThreadPoolExecutor(5) # Create/fetch a logbook that will track the workflows work. book = None @@ -255,15 +255,15 @@ with eu.get_backend() as backend: book = p_utils.temporary_log_book(backend) engine = engines.load_from_factory(create_flow, backend=backend, book=book, - engine_conf=engine_conf) + engine='parallel', + executor=executor) print("!! Your tracking id is: '%s+%s'" % (book.uuid, engine.storage.flow_uuid)) print("!! Please submit this on later runs for tracking purposes") else: # Attempt to load from a previously partially completed flow. - engine = engines.load_from_detail(flow_detail, - backend=backend, - engine_conf=engine_conf) + engine = engines.load_from_detail(flow_detail, backend=backend, + engine='parallel', executor=executor) # Make me my vm please! eu.print_wrapped('Running') diff --git a/taskflow/examples/resume_volume_create.py b/taskflow/examples/resume_volume_create.py index 0fe502e4..275fa6b8 100644 --- a/taskflow/examples/resume_volume_create.py +++ b/taskflow/examples/resume_volume_create.py @@ -143,13 +143,9 @@ with example_utils.get_backend() as backend: flow_detail = find_flow_detail(backend, book_id, flow_id) # Load and run. - engine_conf = { - 'engine': 'serial', - } engine = engines.load(flow, flow_detail=flow_detail, - backend=backend, - engine_conf=engine_conf) + backend=backend, engine='serial') engine.run() # How to use. 
diff --git a/taskflow/examples/run_by_iter.py b/taskflow/examples/run_by_iter.py index 0a7761b7..4b7b98cc 100644 --- a/taskflow/examples/run_by_iter.py +++ b/taskflow/examples/run_by_iter.py @@ -30,9 +30,9 @@ sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) -from taskflow.engines.action_engine import engine +from taskflow import engines from taskflow.patterns import linear_flow as lf -from taskflow.persistence.backends import impl_memory +from taskflow.persistence import backends as persistence_backends from taskflow import task from taskflow.utils import persistence_utils @@ -73,18 +73,15 @@ flows = [] for i in range(0, flow_count): f = make_alphabet_flow(i + 1) flows.append(make_alphabet_flow(i + 1)) -be = impl_memory.MemoryBackend({}) +be = persistence_backends.fetch(conf={'connection': 'memory'}) book = persistence_utils.temporary_log_book(be) -engines = [] +engine_iters = [] for f in flows: fd = persistence_utils.create_flow_detail(f, book, be) - e = engine.SingleThreadedActionEngine(f, fd, be, {}) + e = engines.load(f, flow_detail=fd, backend=be, book=book) e.compile() e.storage.inject({'A': 'A'}) e.prepare() - engines.append(e) -engine_iters = [] -for e in engines: engine_iters.append(e.run_iter()) while engine_iters: for it in list(engine_iters): diff --git a/taskflow/examples/run_by_iter_enumerate.py b/taskflow/examples/run_by_iter_enumerate.py index 66b1859f..d954d6aa 100644 --- a/taskflow/examples/run_by_iter_enumerate.py +++ b/taskflow/examples/run_by_iter_enumerate.py @@ -27,9 +27,9 @@ top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), sys.path.insert(0, top_dir) sys.path.insert(0, self_dir) -from taskflow.engines.action_engine import engine +from taskflow import engines from taskflow.patterns import linear_flow as lf -from taskflow.persistence.backends import impl_memory +from taskflow.persistence import backends as persistence_backends from taskflow import task from taskflow.utils import persistence_utils @@ -48,10 +48,10 @@ f 
= lf.Flow("counter") for i in range(0, 10): f.add(EchoNameTask("echo_%s" % (i + 1))) -be = impl_memory.MemoryBackend() +be = persistence_backends.fetch(conf={'connection': 'memory'}) book = persistence_utils.temporary_log_book(be) fd = persistence_utils.create_flow_detail(f, book, be) -e = engine.SingleThreadedActionEngine(f, fd, be, {}) +e = engines.load(f, flow_detail=fd, backend=be, book=book) e.compile() e.prepare() diff --git a/taskflow/examples/simple_linear_listening.py b/taskflow/examples/simple_linear_listening.py index 04f9f14e..d14c82c4 100644 --- a/taskflow/examples/simple_linear_listening.py +++ b/taskflow/examples/simple_linear_listening.py @@ -28,6 +28,9 @@ sys.path.insert(0, top_dir) import taskflow.engines from taskflow.patterns import linear_flow as lf from taskflow import task +from taskflow.types import notifier + +ANY = notifier.Notifier.ANY # INTRO: In this example we create two tasks (this time as functions instead # of task subclasses as in the simple_linear.py example), each of which ~calls~ @@ -92,8 +95,8 @@ engine = taskflow.engines.load(flow, store={ # notification objects that a engine exposes. The usage of a '*' (kleene star) # here means that we want to be notified on all state changes, if you want to # restrict to a specific state change, just register that instead. -engine.notifier.register('*', flow_watch) -engine.task_notifier.register('*', task_watch) +engine.notifier.register(ANY, flow_watch) +engine.task_notifier.register(ANY, task_watch) # And now run! engine.run() diff --git a/taskflow/examples/simple_linear_pass.out.txt b/taskflow/examples/simple_linear_pass.out.txt new file mode 100644 index 00000000..1e58a63c --- /dev/null +++ b/taskflow/examples/simple_linear_pass.out.txt @@ -0,0 +1,9 @@ +Constructing... +Loading... +Compiling... +Preparing... +Running... +Executing 'a' +Executing 'b' +Got input 'a' +Done... 
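The `run_iter` loop in `run_by_iter.py` above interleaves several engines by stepping each engine's iterator once per pass. The same round-robin scheduling idea can be sketched with plain generators standing in for the engine iterators (`fake_engine` is a toy substitute, not a TaskFlow API):

```python
def fake_engine(name, steps):
    # Toy stand-in for engine.run_iter(): yields one "state" per step.
    for i in range(steps):
        yield "%s-%s" % (name, i)


iters = [fake_engine("a", 2), fake_engine("b", 3)]
seen = []
while iters:
    # Step every still-running iterator once per pass; drop the
    # ones that have finished (iterate over a copy so removal is safe).
    for it in list(iters):
        try:
            seen.append(next(it))
        except StopIteration:
            iters.remove(it)
print(seen)
```

This is cooperative scheduling in miniature: no engine blocks the others, and all make forward progress each pass.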
diff --git a/taskflow/examples/simple_linear_pass.py b/taskflow/examples/simple_linear_pass.py
new file mode 100644
index 00000000..d378418d
--- /dev/null
+++ b/taskflow/examples/simple_linear_pass.py
@@ -0,0 +1,68 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+import os
+import sys
+
+logging.basicConfig(level=logging.ERROR)
+
+self_dir = os.path.abspath(os.path.dirname(__file__))
+top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__),
+                                       os.pardir,
+                                       os.pardir))
+sys.path.insert(0, top_dir)
+sys.path.insert(0, self_dir)
+
+from taskflow import engines
+from taskflow.patterns import linear_flow
+from taskflow import task
+
+# INTRO: This example shows how a task (in a linear/serial workflow) can
+# produce an output that can then be consumed/used by a downstream task.
+
+
+class TaskA(task.Task):
+    default_provides = 'a'
+
+    def execute(self):
+        print("Executing '%s'" % (self.name))
+        return 'a'
+
+
+class TaskB(task.Task):
+    def execute(self, a):
+        print("Executing '%s'" % (self.name))
+        print("Got input '%s'" % (a))
+
+
+print("Constructing...")
+wf = linear_flow.Flow("pass-from-to")
+wf.add(TaskA('a'), TaskB('b'))
+
+print("Loading...")
+e = engines.load(wf)
+
+print("Compiling...")
+e.compile()
+
+print("Preparing...")
+e.prepare()
+
+print("Running...")
+e.run()
+
+print("Done...")
diff --git a/taskflow/examples/simple_map_reduce.py b/taskflow/examples/simple_map_reduce.py
new file mode 100644
index 00000000..3a47fdc1
--- /dev/null
+++ b/taskflow/examples/simple_map_reduce.py
@@ -0,0 +1,115 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+import os
+import sys
+
+logging.basicConfig(level=logging.ERROR)
+
+self_dir = os.path.abspath(os.path.dirname(__file__))
+top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__),
+                                       os.pardir,
+                                       os.pardir))
+sys.path.insert(0, top_dir)
+sys.path.insert(0, self_dir)
+
+# INTRO: this example shows a simplistic map/reduce implementation where
+# a set of mapper(s) will sum a series of input numbers (in parallel) and
+# return their individual summed results.
A reducer will then use those +# produced values and perform a final summation and this result will then be +# printed (and verified to ensure the calculation was as expected). + +import six + +from taskflow import engines +from taskflow.patterns import linear_flow +from taskflow.patterns import unordered_flow +from taskflow import task + + +class SumMapper(task.Task): + def execute(self, inputs): + # Sums some set of provided inputs. + return sum(inputs) + + +class TotalReducer(task.Task): + def execute(self, *args, **kwargs): + # Reduces all mapped summed outputs into a single value. + total = 0 + for (k, v) in six.iteritems(kwargs): + # If any other kwargs was passed in, we don't want to use those + # in the calculation of the total... + if k.startswith('reduction_'): + total += v + return total + + +def chunk_iter(chunk_size, upperbound): + """Yields back chunk size pieces from zero to upperbound - 1.""" + chunk = [] + for i in range(0, upperbound): + chunk.append(i) + if len(chunk) == chunk_size: + yield chunk + chunk = [] + + +# Upper bound of numbers to sum for example purposes... +UPPER_BOUND = 10000 + +# How many mappers we want to have. +SPLIT = 10 + +# How big of a chunk we want to give each mapper. +CHUNK_SIZE = UPPER_BOUND // SPLIT + +# This will be the workflow we will compose and run. +w = linear_flow.Flow("root") + +# The mappers will run in parallel. +store = {} +provided = [] +mappers = unordered_flow.Flow('map') +for i, chunk in enumerate(chunk_iter(CHUNK_SIZE, UPPER_BOUND)): + mapper_name = 'mapper_%s' % i + # Give that mapper some information to compute. + store[mapper_name] = chunk + # The reducer uses all of the outputs of the mappers, so it needs + # to be recorded that it needs access to them (under a specific name). + provided.append("reduction_%s" % i) + mappers.add(SumMapper(name=mapper_name, + rebind={'inputs': mapper_name}, + provides=provided[-1])) +w.add(mappers) + +# The reducer will run last (after all the mappers). 
+w.add(TotalReducer('reducer', requires=provided)) + +# Now go! +e = engines.load(w, engine='parallel', store=store, max_workers=4) +print("Running a parallel engine with options: %s" % e.options) +e.run() + +# Now get the result the reducer created. +total = e.storage.get('reducer') +print("Calculated result = %s" % total) + +# Calculate it manually to verify that it worked... +calc_total = sum(range(0, UPPER_BOUND)) +if calc_total != total: + sys.exit(1) diff --git a/taskflow/examples/timing_listener.py b/taskflow/examples/timing_listener.py new file mode 100644 index 00000000..ab53a9aa --- /dev/null +++ b/taskflow/examples/timing_listener.py @@ -0,0 +1,59 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import logging +import os +import random +import sys +import time + +logging.basicConfig(level=logging.ERROR) + +top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), + os.pardir, + os.pardir)) +sys.path.insert(0, top_dir) + +from taskflow import engines +from taskflow.listeners import timing +from taskflow.patterns import linear_flow as lf +from taskflow import task + +# INTRO: in this example we will attach a listener to an engine +# and have variable run time tasks run and show how the listener will print +# out how long those tasks took (when they started and when they finished). 
+#
+# This shows how timing metrics can be gathered (or attached onto an engine)
+# after a workflow has been constructed, making it easy to gather metrics
+# dynamically for situations where this kind of information is applicable (or
+# even adding this information on at a later point in the future when your
+# application starts to slow down).
+
+
+class VariableTask(task.Task):
+    def __init__(self, name):
+        super(VariableTask, self).__init__(name)
+        self._sleepy_time = random.random()
+
+    def execute(self):
+        time.sleep(self._sleepy_time)
+
+
+f = lf.Flow('root')
+f.add(VariableTask('a'), VariableTask('b'), VariableTask('c'))
+e = engines.load(f)
+with timing.PrintingTimingListener(e):
+    e.run()
diff --git a/taskflow/examples/wbe_event_sender.py b/taskflow/examples/wbe_event_sender.py
new file mode 100644
index 00000000..e5b075ac
--- /dev/null
+++ b/taskflow/examples/wbe_event_sender.py
@@ -0,0 +1,150 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import logging
+import os
+import string
+import sys
+import time
+
+top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__),
+                                       os.pardir,
+                                       os.pardir))
+sys.path.insert(0, top_dir)
+
+from six.moves import range as compat_range
+
+from taskflow import engines
+from taskflow.engines.worker_based import worker
+from taskflow.patterns import linear_flow as lf
+from taskflow import task
+from taskflow.types import notifier
+from taskflow.utils import threading_utils
+
+ANY = notifier.Notifier.ANY
+
+# INTRO: This example shows how to use a remote worker's event notification
+# attribute to proxy back task event notifications to the controlling process.
+#
+# In this case a simple set of events are triggered by a worker running a
+# task (simulated to be remote by using a kombu memory transport and threads).
+# Those events that the 'remote worker' produces will then be proxied back to
+# the task that the engine is running 'remotely', and then they will be emitted
+# back to the original callbacks that exist in the originating engine
+# process/thread. This creates a one-way *notification* channel that can
+# transparently be used in-process or outside-of-process (using remote workers)
+# and so on, allowing a task to signal to its controlling process some sort of
+# action that has occurred that the task may need to tell others about (for
+# example to trigger some type of response when the task reaches 50% done...).
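The register/notify contract that this proxying relies on can be shown with a hypothetical, stripped-down notifier (TaskFlow's real `taskflow.types.notifier.Notifier` has more features; `MiniNotifier` is only a sketch of the idea):

```python
ANY = '*'


class MiniNotifier:
    """Toy stand-in for a notifier supporting per-event and ANY listeners."""

    def __init__(self):
        self._listeners = []

    def register(self, event_type, callback):
        self._listeners.append((event_type, callback))

    def notify(self, event_type, details):
        # Fire callbacks registered for this specific event or for ANY.
        for registered_type, callback in self._listeners:
            if registered_type in (ANY, event_type):
                callback(event_type, details)


received = []
n = MiniNotifier()
n.register(ANY, lambda et, details: received.append((et, details)))
n.notify('A', {'leftover': 'BC'})
print(received)
```

In the worker-based engine, `notify()` is what runs on the remote side; the transport then replays each (event_type, details) pair into the engine process so locally registered callbacks fire as if the task ran in-process.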
+
+
+def event_receiver(event_type, details):
+    """This is the callback that (in this example) doesn't do much..."""
+    print("Received event '%s'" % event_type)
+    print("Details = %s" % details)
+
+
+class EventReporter(task.Task):
+    """This is the task that will be running 'remotely' (not really remote)."""
+
+    EVENTS = tuple(string.ascii_uppercase)
+    EVENT_DELAY = 0.1
+
+    def execute(self):
+        for i, e in enumerate(self.EVENTS):
+            details = {
+                'leftover': self.EVENTS[i:],
+            }
+            self.notifier.notify(e, details)
+            time.sleep(self.EVENT_DELAY)
+
+
+BASE_SHARED_CONF = {
+    'exchange': 'taskflow',
+    'transport': 'memory',
+    'transport_options': {
+        'polling_interval': 0.1,
+    },
+}
+
+# Until https://github.com/celery/kombu/issues/398 is resolved it is not
+# recommended to run many worker threads in this example due to the types
+# of errors mentioned in that issue.
+MEMORY_WORKERS = 1
+WORKER_CONF = {
+    'tasks': [
+        # Used to locate which tasks we can run (we don't want to allow
+        # arbitrary code/tasks to be run by any worker since that would
+        # open up a variety of vulnerabilities).
+        '%s:EventReporter' % (__name__),
+    ],
+}
+
+
+def run(engine_options):
+    reporter = EventReporter()
+    reporter.notifier.register(ANY, event_receiver)
+    flow = lf.Flow('event-reporter').add(reporter)
+    eng = engines.load(flow, engine='worker-based', **engine_options)
+    eng.run()
+
+
+if __name__ == "__main__":
+    logging.basicConfig(level=logging.ERROR)
+
+    # Setup our transport configuration and merge it into the worker and
+    # engine configuration so that both of those objects use it correctly.
+    worker_conf = dict(WORKER_CONF)
+    worker_conf.update(BASE_SHARED_CONF)
+    engine_options = dict(BASE_SHARED_CONF)
+    workers = []
+
+    # These topics will be used to request worker information on; those
+    # workers will respond with their capabilities which the executing engine
+    # will use to match pending tasks to a matched worker, this will cause
+    # the task to be sent for execution, and the engine will wait until it
+    # is finished (a response is received) and then the engine will either
+    # continue with other tasks, do some retry/failure resolution logic or
+    # stop (and potentially re-raise the remote worker's failure)...
+    worker_topics = []
+
+    try:
+        # Create a set of worker threads to simulate actual remote workers...
+        print('Running %s workers.' % (MEMORY_WORKERS))
+        for i in compat_range(0, MEMORY_WORKERS):
+            # Give each one its own unique topic name so that they can
+            # correctly communicate with the engine (they will all share the
+            # same exchange).
+            worker_conf['topic'] = 'worker-%s' % (i + 1)
+            worker_topics.append(worker_conf['topic'])
+            w = worker.Worker(**worker_conf)
+            runner = threading_utils.daemon_thread(w.run)
+            runner.start()
+            w.wait()
+            workers.append((runner, w.stop))
+
+        # Now use those workers to do something.
+        print('Executing some work.')
+        engine_options['topics'] = worker_topics
+        result = run(engine_options)
+        print('Execution finished.')
+    finally:
+        # And cleanup.
+        print('Stopping workers.')
+        while workers:
+            r, stopper = workers.pop()
+            stopper()
+            r.join()
diff --git a/taskflow/examples/wbe_mandelbrot.out.txt b/taskflow/examples/wbe_mandelbrot.out.txt
new file mode 100644
index 00000000..3b526414
--- /dev/null
+++ b/taskflow/examples/wbe_mandelbrot.out.txt
@@ -0,0 +1,6 @@
+Calculating your mandelbrot fractal of size 512x512.
+Running 2 workers.
+Execution finished.
+Stopping workers.
+Writing image...
+Gathered 262144 results that represents a mandelbrot image (using 8 chunks that are computed jointly by 2 workers).
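The computation each worker performs in `wbe_mandelbrot.py` below is the classic escape-time check, and it can be exercised standalone without any workers or transport (the function body mirrors the one in the example):

```python
def mandelbrot(x, y, max_iters):
    # Iterate z = z*z + c and report how many iterations complete
    # before |z| exceeds 2 (checked as |z|^2 >= 4); points that never
    # escape use up the whole iteration budget and are "in" the set.
    c = complex(x, y)
    z = 0.0j
    for i in range(max_iters):
        z = z * z + c
        if (z.real * z.real + z.imag * z.imag) >= 4:
            return i
    return max_iters


print(mandelbrot(0.0, 0.0, 25))  # the origin never escapes
print(mandelbrot(2.0, 2.0, 25))  # escapes on the first iteration
```

Because every pixel's escape count depends only on its own coordinates, the image can be split into arbitrary row chunks and scattered across workers, which is exactly what the unordered flow below does.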
diff --git a/taskflow/examples/wbe_mandelbrot.py b/taskflow/examples/wbe_mandelbrot.py new file mode 100644 index 00000000..c59b85ce --- /dev/null +++ b/taskflow/examples/wbe_mandelbrot.py @@ -0,0 +1,253 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import logging +import math +import os +import sys + +top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), + os.pardir, + os.pardir)) +sys.path.insert(0, top_dir) + +from six.moves import range as compat_range + +from taskflow import engines +from taskflow.engines.worker_based import worker +from taskflow.patterns import unordered_flow as uf +from taskflow import task +from taskflow.utils import threading_utils + +# INTRO: This example walks through a workflow that will in parallel compute +# a mandelbrot result set (using X 'remote' workers) and then combine their +# results together to form a final mandelbrot fractal image. It shows a usage +# of taskflow to perform a well-known embarrassingly parallel problem that has +# the added benefit of also being an elegant visualization. +# +# NOTE(harlowja): this example simulates the expected larger number of workers +# by using a set of threads (which in this example simulate the remote workers +# that would typically be running on other external machines). 
+# +# NOTE(harlowja): to have it produce an image run (after installing pillow): +# +# $ python taskflow/examples/wbe_mandelbrot.py output.png + +BASE_SHARED_CONF = { + 'exchange': 'taskflow', +} +WORKERS = 2 +WORKER_CONF = { + # These are the tasks the worker can execute, they *must* be importable, + # typically this list is used to restrict what workers may execute to + # a smaller set of *allowed* tasks that are known to be safe (one would + # not want to allow all python code to be executed). + 'tasks': [ + '%s:MandelCalculator' % (__name__), + ], +} +ENGINE_CONF = { + 'engine': 'worker-based', +} + +# Mandelbrot & image settings... +IMAGE_SIZE = (512, 512) +CHUNK_COUNT = 8 +MAX_ITERATIONS = 25 + + +class MandelCalculator(task.Task): + def execute(self, image_config, mandelbrot_config, chunk): + """Returns the number of iterations before the computation "escapes". + + Given the real and imaginary parts of a complex number, determine if it + is a candidate for membership in the mandelbrot set given a fixed + number of iterations. + """ + + # Parts borrowed from (credit to mark harris and benoît mandelbrot). 
+        #
+        # http://nbviewer.ipython.org/gist/harrism/f5707335f40af9463c43
+        def mandelbrot(x, y, max_iters):
+            c = complex(x, y)
+            z = 0.0j
+            for i in compat_range(max_iters):
+                z = z * z + c
+                if (z.real * z.real + z.imag * z.imag) >= 4:
+                    return i
+            return max_iters
+
+        min_x, max_x, min_y, max_y, max_iters = mandelbrot_config
+        height, width = image_config['size']
+        pixel_size_x = (max_x - min_x) / width
+        pixel_size_y = (max_y - min_y) / height
+        block = []
+        for y in compat_range(chunk[0], chunk[1]):
+            row = []
+            imag = min_y + y * pixel_size_y
+            for x in compat_range(0, width):
+                real = min_x + x * pixel_size_x
+                row.append(mandelbrot(real, imag, max_iters))
+            block.append(row)
+        return block
+
+
+def calculate(engine_conf):
+    # Subdivide the work into X pieces, then request each worker to calculate
+    # one of those chunks and then later we will write these chunks out to
+    # an image bitmap file.
+
+    # An unordered flow is used here since the mandelbrot calculation is an
+    # example of an embarrassingly parallel computation that we can scatter
+    # across as many workers as possible.
+    flow = uf.Flow("mandelbrot")
+
+    # These symbols will be automatically given to tasks as input to their
+    # execute method, in this case these are constants used in the mandelbrot
+    # calculation.
+    store = {
+        'mandelbrot_config': [-2.0, 1.0, -1.0, 1.0, MAX_ITERATIONS],
+        'image_config': {
+            'size': IMAGE_SIZE,
+        }
+    }
+
+    # We need the task names to be in the right order so that we can extract
+    # the final results in the right order (we don't care about the order when
+    # executing).
+    task_names = []
+
+    # Compose our workflow.
+    height, _width = IMAGE_SIZE
+    chunk_size = int(math.ceil(height / float(CHUNK_COUNT)))
+    for i in compat_range(0, CHUNK_COUNT):
+        chunk_name = 'chunk_%s' % i
+        task_name = "calculation_%s" % i
+        # Break the calculation up into chunk size pieces.
+ rows = [i * chunk_size, i * chunk_size + chunk_size] + flow.add( + MandelCalculator(task_name, + # This ensures the storage symbol with name + # 'chunk_name' is sent into the tasks local + # symbol 'chunk'. This is how we give each + # calculator its own correct sequence of rows + # to work on. + rebind={'chunk': chunk_name})) + store[chunk_name] = rows + task_names.append(task_name) + + # Now execute it. + eng = engines.load(flow, store=store, engine_conf=engine_conf) + eng.run() + + # Gather all the results and order them for further processing. + gather = [] + for name in task_names: + gather.extend(eng.storage.get(name)) + points = [] + for y, row in enumerate(gather): + for x, color in enumerate(row): + points.append(((x, y), color)) + return points + + +def write_image(results, output_filename=None): + print("Gathered %s results that represents a mandelbrot" + " image (using %s chunks that are computed jointly" + " by %s workers)." % (len(results), CHUNK_COUNT, WORKERS)) + if not output_filename: + return + + # Pillow (the PIL fork) saves us from writing our own image writer... + try: + from PIL import Image + except ImportError as e: + # To currently get this (may change in the future), + # $ pip install Pillow + raise RuntimeError("Pillow is required to write image files: %s" % e) + + # Limit to 255, find the max and normalize to that... + color_max = 0 + for _point, color in results: + color_max = max(color, color_max) + + # Use gray scale since we don't really have other colors. + img = Image.new('L', IMAGE_SIZE, "black") + pixels = img.load() + for (x, y), color in results: + if color_max == 0: + color = 0 + else: + color = int((float(color) / color_max) * 255.0) + pixels[x, y] = color + img.save(output_filename) + + +def create_fractal(): + logging.basicConfig(level=logging.ERROR) + + # Setup our transport configuration and merge it into the worker and + # engine configuration so that both of those use it correctly. 
+ shared_conf = dict(BASE_SHARED_CONF) + shared_conf.update({ + 'transport': 'memory', + 'transport_options': { + 'polling_interval': 0.1, + }, + }) + + if len(sys.argv) >= 2: + output_filename = sys.argv[1] + else: + output_filename = None + + worker_conf = dict(WORKER_CONF) + worker_conf.update(shared_conf) + engine_conf = dict(ENGINE_CONF) + engine_conf.update(shared_conf) + workers = [] + worker_topics = [] + + print('Calculating your mandelbrot fractal of size %sx%s.' % IMAGE_SIZE) + try: + # Create a set of workers to simulate actual remote workers. + print('Running %s workers.' % (WORKERS)) + for i in compat_range(0, WORKERS): + worker_conf['topic'] = 'calculator_%s' % (i + 1) + worker_topics.append(worker_conf['topic']) + w = worker.Worker(**worker_conf) + runner = threading_utils.daemon_thread(w.run) + runner.start() + w.wait() + workers.append((runner, w.stop)) + + # Now use those workers to do something. + engine_conf['topics'] = worker_topics + results = calculate(engine_conf) + print('Execution finished.') + finally: + # And cleanup. 
+ print('Stopping workers.') + while workers: + r, stopper = workers.pop() + stopper() + r.join() + print("Writing image...") + write_image(results, output_filename=output_filename) + + +if __name__ == "__main__": + create_fractal() diff --git a/taskflow/examples/wbe_simple_linear.py b/taskflow/examples/wbe_simple_linear.py index e28579f8..bcaa8612 100644 --- a/taskflow/examples/wbe_simple_linear.py +++ b/taskflow/examples/wbe_simple_linear.py @@ -19,7 +19,6 @@ import logging import os import sys import tempfile -import threading top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, @@ -30,6 +29,7 @@ from taskflow import engines from taskflow.engines.worker_based import worker from taskflow.patterns import linear_flow as lf from taskflow.tests import utils +from taskflow.utils import threading_utils import example_utils # noqa @@ -53,7 +53,12 @@ USE_FILESYSTEM = False BASE_SHARED_CONF = { 'exchange': 'taskflow', } -WORKERS = 2 + +# Until https://github.com/celery/kombu/issues/398 is resolved it is not +# recommended to run many worker threads in this example due to the types +# of errors mentioned in that issue. 
+MEMORY_WORKERS = 2 +FILE_WORKERS = 1 WORKER_CONF = { # These are the tasks the worker can execute, they *must* be importable, # typically this list is used to restrict what workers may execute to @@ -64,19 +69,16 @@ WORKER_CONF = { 'taskflow.tests.utils:TaskMultiArgOneReturn' ], } -ENGINE_CONF = { - 'engine': 'worker-based', -} -def run(engine_conf): +def run(engine_options): flow = lf.Flow('simple-linear').add( utils.TaskOneArgOneReturn(provides='result1'), utils.TaskMultiArgOneReturn(provides='result2') ) eng = engines.load(flow, store=dict(x=111, y=222, z=333), - engine_conf=engine_conf) + engine='worker-based', **engine_options) eng.run() return eng.storage.fetch_all() @@ -90,6 +92,7 @@ if __name__ == "__main__": tmp_path = None if USE_FILESYSTEM: + worker_count = FILE_WORKERS tmp_path = tempfile.mkdtemp(prefix='wbe-example-') shared_conf.update({ 'transport': 'filesystem', @@ -100,6 +103,7 @@ if __name__ == "__main__": }, }) else: + worker_count = MEMORY_WORKERS shared_conf.update({ 'transport': 'memory', 'transport_options': { @@ -108,28 +112,26 @@ if __name__ == "__main__": }) worker_conf = dict(WORKER_CONF) worker_conf.update(shared_conf) - engine_conf = dict(ENGINE_CONF) - engine_conf.update(shared_conf) + engine_options = dict(shared_conf) workers = [] worker_topics = [] try: # Create a set of workers to simulate actual remote workers. - print('Running %s workers.' % (WORKERS)) - for i in range(0, WORKERS): + print('Running %s workers.' % (worker_count)) + for i in range(0, worker_count): worker_conf['topic'] = 'worker-%s' % (i + 1) worker_topics.append(worker_conf['topic']) w = worker.Worker(**worker_conf) - runner = threading.Thread(target=w.run) - runner.daemon = True + runner = threading_utils.daemon_thread(w.run) runner.start() w.wait() workers.append((runner, w.stop)) # Now use those workers to do something. 
print('Executing some work.') - engine_conf['topics'] = worker_topics - result = run(engine_conf) + engine_options['topics'] = worker_topics + result = run(engine_options) print('Execution finished.') # This is done so that the test examples can work correctly # even when the keys change order (which will happen in various diff --git a/taskflow/examples/wrapped_exception.py b/taskflow/examples/wrapped_exception.py index 7679a150..78b5ad06 100644 --- a/taskflow/examples/wrapped_exception.py +++ b/taskflow/examples/wrapped_exception.py @@ -33,7 +33,7 @@ from taskflow import exceptions from taskflow.patterns import unordered_flow as uf from taskflow import task from taskflow.tests import utils -from taskflow.utils import misc +from taskflow.types import failure import example_utils as eu # noqa @@ -93,18 +93,18 @@ def run(**store): try: with utils.wrap_all_failures(): taskflow.engines.run(flow, store=store, - engine_conf='parallel') + engine='parallel') except exceptions.WrappedFailure as ex: unknown_failures = [] - for failure in ex: - if failure.check(FirstException): - print("Got FirstException: %s" % failure.exception_str) - elif failure.check(SecondException): - print("Got SecondException: %s" % failure.exception_str) + for a_failure in ex: + if a_failure.check(FirstException): + print("Got FirstException: %s" % a_failure.exception_str) + elif a_failure.check(SecondException): + print("Got SecondException: %s" % a_failure.exception_str) else: - print("Unknown failure: %s" % failure) - unknown_failures.append(failure) - misc.Failure.reraise_if_any(unknown_failures) + print("Unknown failure: %s" % a_failure) + unknown_failures.append(a_failure) + failure.Failure.reraise_if_any(unknown_failures) eu.print_wrapped("Raise and catch first exception only") diff --git a/taskflow/exceptions.py b/taskflow/exceptions.py index e5b9a9c2..d4db4e5f 100644 --- a/taskflow/exceptions.py +++ b/taskflow/exceptions.py @@ -14,6 +14,7 @@ # License for the specific language governing 
permissions and limitations # under the License. +import os import traceback import six @@ -25,6 +26,14 @@ class TaskFlowException(Exception): NOTE(harlowja): in later versions of python we can likely remove the need to have a cause here as PY3+ have implemented PEP 3134 which handles chaining in a much more elegant manner. + + :param message: the exception message, typically some string that is + useful for consumers to view when debugging or analyzing + failures. + :param cause: the cause of the exception being raised, when provided this + should itself be an exception instance, this is useful for + creating a chain of exceptions for versions of python where + this is not yet implemented/supported natively. """ def __init__(self, message, cause=None): super(TaskFlowException, self).__init__(message) @@ -38,21 +47,31 @@ class TaskFlowException(Exception): """Pretty formats a taskflow exception + any connected causes.""" if indent < 0: raise ValueError("indent must be greater than or equal to zero") + return os.linesep.join(self._pformat(self, [], 0, + indent=indent, + indent_text=indent_text)) - def _format(excp, indent_by): - lines = [] - for line in traceback.format_exception_only(type(excp), excp): - # We'll add our own newlines on at the end of formatting. - if line.endswith("\n"): - line = line[0:-1] - lines.append((indent_text * indent_by) + line) - try: - lines.extend(_format(excp.cause, indent_by + indent)) - except AttributeError: - pass - return lines - - return "\n".join(_format(self, 0)) + @classmethod + def _pformat(cls, excp, lines, current_indent, indent=2, indent_text=" "): + line_prefix = indent_text * current_indent + for line in traceback.format_exception_only(type(excp), excp): + # We'll add our own newlines on at the end of formatting. + # + # NOTE(harlowja): the reason we don't search for os.linesep is + # that the traceback module seems to only use '\n' (for some + # reason). 
+ if line.endswith("\n"): + line = line[0:-1] + lines.append(line_prefix + line) + try: + cause = excp.cause + except AttributeError: + pass + else: + if cause is not None: + cls._pformat(cause, lines, current_indent + indent, + indent=indent, indent_text=indent_text) + return lines # Errors related to storage or operations on storage units. @@ -98,8 +117,21 @@ class DependencyFailure(TaskFlowException): """Raised when some type of dependency problem occurs.""" +class AmbiguousDependency(DependencyFailure): + """Raised when some type of ambiguous dependency problem occurs.""" + + class MissingDependencies(DependencyFailure): - """Raised when a entity has dependencies that can not be satisfied.""" + """Raised when an entity has dependencies that can not be satisfied. + + :param who: the entity that caused the missing dependency to be triggered. + :param requirements: the dependencies which were not satisfied. + + Further arguments are interpreted as for + :py:class:`~taskflow.exceptions.TaskFlowException`. + """ + + #: Exception message template used when creating an actual message. MESSAGE_TPL = ("%(who)s requires %(requirements)s but no other entity" " produces said requirements") @@ -109,6 +141,10 @@ class MissingDependencies(TaskFlowException): self.missing_requirements = requirements +class CompilationFailure(TaskFlowException): + """Raised when some type of compilation issue is found.""" + + class IncompatibleVersion(TaskFlowException): """Raised when some type of version incompatibility is found.""" @@ -135,13 +171,30 @@ class InvalidFormat(TaskFlowException): # Others. -class WrappedFailure(Exception): - """Wraps one or several failures. +class NotImplementedError(NotImplementedError): + """Exception for when some functionality really isn't implemented.
- When exception cannot be re-raised (for example, because - the value and traceback is lost in serialization) or - there are several exceptions, we wrap corresponding Failure - objects into this exception class. + This is typically useful when the library itself needs to distinguish + internal features not being made available from user features not being + made available/implemented (and to avoid misinterpreting the two). + """ + + +class WrappedFailure(Exception): + """Wraps one or several failure objects. + + When exception/s cannot be re-raised (for example, because the value and + traceback are lost in serialization) or there are several exceptions active + at the same time (due to more than one thread raising exceptions), we will + wrap the corresponding failure objects into this exception class and + *may* reraise this exception type to allow users to handle the contained + failures/causes as they see fit... + + See the failure class documentation for a more comprehensive set of reasons + why this object *may* be reraised instead of the original exception. + + :param causes: the :py:class:`~taskflow.types.failure.Failure` objects + that caused this exception to be raised. """ def __init__(self, causes): @@ -163,12 +216,14 @@ class WrappedFailure(Exception): return len(self._causes) def check(self, *exc_classes): - """Check if any of exc_classes caused (part of) the failure. + """Check if any of the given exception classes caused the failure(s). - Arguments of this method can be exception types or type names - (strings). If any of wrapped failures were caused by exception - of given type, the corresponding argument is returned. Else, - None is returned. + :param exc_classes: exception types/exception type names to + search for. + + If any of the contained failures were caused by an exception of a + given type, the corresponding argument that matched is returned. If + not then ``None`` is returned.
""" if not exc_classes: return None @@ -184,7 +239,10 @@ class WrappedFailure(Exception): def exception_message(exc): - """Return the string representation of exception.""" + """Return the string representation of exception. + + :param exc: exception object to get a string representation of. + """ # NOTE(imelnikov): Dealing with non-ascii data in python is difficult: # https://bugs.launchpad.net/taskflow/+bug/1275895 # https://bugs.launchpad.net/taskflow/+bug/1276053 diff --git a/taskflow/flow.py b/taskflow/flow.py index 5533ed4e..cd70e7c9 100644 --- a/taskflow/flow.py +++ b/taskflow/flow.py @@ -16,9 +16,21 @@ import abc +from oslo_utils import reflection import six -from taskflow.utils import reflection +# Link metadata keys that have inherent/special meaning. +# +# This key denotes the link is an invariant that ensures the order is +# correctly preserved. +LINK_INVARIANT = 'invariant' +# This key denotes the link is a manually/user-specified. +LINK_MANUAL = 'manual' +# This key denotes the link was created when resolving/compiling retries. +LINK_RETRY = 'retry' +# This key denotes the link was created due to symbol constraints and the +# value will be a set of names that the constraint ensures are satisfied. +LINK_REASONS = 'reasons' @six.add_metaclass(abc.ABCMeta) @@ -34,10 +46,7 @@ class Flow(object): NOTE(harlowja): if a flow is placed in another flow as a subflow, a desired way to compose flows together, then it is valid and permissible that during - execution the subflow & parent flow may be flattened into a new flow. Since - a flow is just a 'structuring' concept this is typically a behavior that - should not be worried about (as it is not visible to the user), but it is - worth mentioning here. + compilation the subflow & parent flow *may* be flattened into a new flow. 
""" def __init__(self, name, retry=None): @@ -45,7 +54,7 @@ class Flow(object): self._retry = retry # NOTE(akarpinska): if retry doesn't have a name, # the name of its owner will be assigned - if self._retry and self._retry.name is None: + if self._retry is not None and self._retry.name is None: self._retry.name = self.name + "_retry" @property @@ -93,27 +102,14 @@ class Flow(object): @property def provides(self): - """Set of result names provided by the flow. - - Includes names of all the outputs provided by atoms of this flow. - """ + """Set of symbol names provided by the flow.""" provides = set() - if self._retry: + if self._retry is not None: provides.update(self._retry.provides) - for subflow in self: - provides.update(subflow.provides) - return provides + for item in self: + provides.update(item.provides) + return frozenset(provides) - @property + @abc.abstractproperty def requires(self): - """Set of argument names required by the flow. - - Includes names of all the inputs required by atoms of this - flow, but not provided within the flow itself. - """ - requires = set() - if self._retry: - requires.update(self._retry.requires) - for subflow in self: - requires.update(subflow.requires) - return requires - self.provides + """Set of *unsatisfied* symbol names required by the flow.""" diff --git a/taskflow/jobs/backends/__init__.py b/taskflow/jobs/backends/__init__.py index 099f0476..e8bd6daf 100644 --- a/taskflow/jobs/backends/__init__.py +++ b/taskflow/jobs/backends/__init__.py @@ -15,12 +15,12 @@ # under the License. 
import contextlib -import logging import six from stevedore import driver from taskflow import exceptions as exc +from taskflow import logging from taskflow.utils import misc @@ -55,12 +55,12 @@ def fetch(name, conf, namespace=BACKEND_NAMESPACE, **kwargs): conf = {'board': conf} board = conf['board'] try: - pieces = misc.parse_uri(board) + uri = misc.parse_uri(board) except (TypeError, ValueError): pass else: - board = pieces['scheme'] - conf = misc.merge_uri(pieces, conf.copy()) + board = uri.scheme + conf = misc.merge_uri(uri, conf.copy()) LOG.debug('Looking for %r jobboard driver in %r', board, namespace) try: mgr = driver.DriverManager(namespace, board, diff --git a/taskflow/jobs/backends/impl_zookeeper.py b/taskflow/jobs/backends/impl_zookeeper.py index 4fc7b6eb..3e52f65b 100644 --- a/taskflow/jobs/backends/impl_zookeeper.py +++ b/taskflow/jobs/backends/impl_zookeeper.py @@ -17,21 +17,20 @@ import collections import contextlib import functools -import logging import threading from concurrent import futures from kazoo import exceptions as k_exceptions from kazoo.protocol import paths as k_paths from kazoo.recipe import watchers +from oslo_serialization import jsonutils +from oslo_utils import excutils +from oslo_utils import uuidutils import six from taskflow import exceptions as excp -from taskflow.jobs import job as base_job -from taskflow.jobs import jobboard -from taskflow.openstack.common import excutils -from taskflow.openstack.common import jsonutils -from taskflow.openstack.common import uuidutils +from taskflow.jobs import base +from taskflow import logging from taskflow import states from taskflow.types import timing as tt from taskflow.utils import kazoo_utils @@ -62,7 +61,9 @@ def _check_who(who): raise ValueError("Job applicant must be non-empty") -class ZookeeperJob(base_job.Job): +class ZookeeperJob(base.Job): + """A zookeeper job.""" + def __init__(self, name, board, client, backend, path, uuid=None, details=None, book=None, book_data=None, 
created_on=None): @@ -77,10 +78,13 @@ class ZookeeperJob(base_job.Job): if all((self._book, self._book_data)): raise ValueError("Only one of 'book_data' or 'book'" " can be provided") - self._path = path + self._path = k_paths.normpath(path) self._lock_path = path + LOCK_POSTFIX self._created_on = created_on self._node_not_found = False + basename = k_paths.basename(self._path) + self._root = self._path[0:-len(basename)] + self._sequence = int(basename[len(JOB_PREFIX):]) @property def lock_path(self): @@ -90,6 +94,16 @@ class ZookeeperJob(base_job.Job): def path(self): return self._path + @property + def sequence(self): + """Sequence number of the current job.""" + return self._sequence + + @property + def root(self): + """The parent path of the job in zookeeper.""" + return self._root + def _get_node_attr(self, path, attr_name, trans_func=None): try: _data, node_stat = self._client.get(path) @@ -104,7 +118,7 @@ class ZookeeperJob(base_job.Job): % (attr_name, self.uuid, self.path, path), e) except self._client.handler.timeout_exception as e: raise excp.JobFailure("Can not fetch the %r attribute" - " of job %s (%s), connection timed out" + " of job %s (%s), operation timed out" % (attr_name, self.uuid, self.path), e) except k_exceptions.SessionExpiredError as e: raise excp.JobFailure("Can not fetch the %r attribute" @@ -172,7 +186,7 @@ class ZookeeperJob(base_job.Job): " session expired" % (self.uuid), e) except self._client.handler.timeout_exception as e: raise excp.JobFailure("Can not fetch the state of %s," - " connection timed out" % (self.uuid), e) + " operation timed out" % (self.uuid), e) except k_exceptions.KazooException as e: raise excp.JobFailure("Can not fetch the state of %s, internal" " error" % (self.uuid), e) @@ -186,8 +200,11 @@ class ZookeeperJob(base_job.Job): return states.UNCLAIMED return states.CLAIMED - def __cmp__(self, other): - return cmp(self.path, other.path) + def __lt__(self, other): + if self.root == other.root: + return self.sequence 
< other.sequence + else: + return self.root < other.root def __hash__(self): return hash(self.path) @@ -218,14 +235,14 @@ class ZookeeperJob(base_job.Job): class ZookeeperJobBoardIterator(six.Iterator): - """Iterator over a zookeeper jobboard. + """Iterator over a zookeeper jobboard that iterates over potential jobs. It supports the following attributes/constructor arguments: - * ensure_fresh: boolean that requests that during every fetch of a new + * ``ensure_fresh``: boolean that requests that during every fetch of a new set of jobs this will cause the iterator to force the backend to refresh (ensuring that the jobboard has the most recent job listings). - * only_unclaimed: boolean that indicates whether to only iterate + * ``only_unclaimed``: boolean that indicates whether to only iterate over unclaimed jobs. """ @@ -274,7 +291,30 @@ class ZookeeperJobBoardIterator(six.Iterator): return job -class ZookeeperJobBoard(jobboard.NotifyingJobBoard): +class ZookeeperJobBoard(base.NotifyingJobBoard): + """A jobboard backed by zookeeper. + + Powered by the `kazoo `_ library. + + This jobboard creates *sequenced* persistent znodes in a directory in + zookeeper (that directory defaults to ``/taskflow/jobs``) and uses zookeeper + watches to notify other jobboards of jobs which were posted using the + :meth:`.post` method (this creates a znode with contents/details in json). + The users of those jobboard(s) (potentially on disjoint sets of machines) + can then iterate over the available jobs and decide if they want to attempt + to claim one of the jobs they have iterated over. If so they will then + attempt to contact zookeeper and will attempt to create an ephemeral znode + using the name of the persistent znode + ".lock" as a postfix. If the + entity trying to use the jobboard to :meth:`.claim` the job is able to + create an ephemeral znode with that name then it will be allowed (and + expected) to perform whatever *work* the contents of the job it + locked described.
Once finished, the ephemeral znode and persistent znode + may be deleted (if successfully completed) in a single transaction or if + not successful (or the entity that claimed the znode dies) the ephemeral + znode will be released (either manually by using :meth:`.abandon` or + automatically by zookeeper when the ephemeral znode is deemed to be lost). + """ + def __init__(self, name, conf, client=None, persistence=None, emit_notifications=True): super(ZookeeperJobBoard, self).__init__(name, conf) @@ -298,8 +338,7 @@ class ZookeeperJobBoard(jobboard.NotifyingJobBoard): self._persistence = persistence # Misc. internal details self._known_jobs = {} - self._job_lock = threading.RLock() - self._job_cond = threading.Condition(self._job_lock) + self._job_cond = threading.Condition() self._open_close_lock = threading.RLock() self._client.add_listener(self._state_change_listener) self._bad_paths = frozenset([path]) @@ -312,8 +351,12 @@ class ZookeeperJobBoard(jobboard.NotifyingJobBoard): def _emit(self, state, details): # Submit the work to the executor to avoid blocking the kazoo queue. - if self._worker is not None: + try: self._worker.submit(self.notifier.notify, state, details) + except (AttributeError, RuntimeError): + # Notification thread is shut down or non-existent; in either case + # we just want to skip submitting a notification...
+ pass @property def path(self): @@ -321,20 +364,19 @@ class ZookeeperJobBoard(jobboard.NotifyingJobBoard): @property def job_count(self): - with self._job_lock: - return len(self._known_jobs) + return len(self._known_jobs) def _fetch_jobs(self, ensure_fresh=False): if ensure_fresh: self._force_refresh() - with self._job_lock: + with self._job_cond: return sorted(six.itervalues(self._known_jobs)) def _force_refresh(self): try: children = self._client.get_children(self.path) except self._client.handler.timeout_exception as e: - raise excp.JobFailure("Refreshing failure, connection timed out", + raise excp.JobFailure("Refreshing failure, operation timed out", e) except k_exceptions.SessionExpiredError as e: raise excp.JobFailure("Refreshing failure, session expired", e) @@ -351,11 +393,13 @@ class ZookeeperJobBoard(jobboard.NotifyingJobBoard): ensure_fresh=ensure_fresh) def _remove_job(self, path): - LOG.debug("Removing job that was at path: %s", path) - with self._job_lock: + if path not in self._known_jobs: + return + with self._job_cond: job = self._known_jobs.pop(path, None) if job is not None: - self._emit(jobboard.REMOVAL, details={'job': job}) + LOG.debug("Removed job that was at path '%s'", path) + self._emit(base.REMOVAL, details={'job': job}) def _process_child(self, path, request): """Receives the result of a child data fetch request.""" @@ -368,7 +412,7 @@ class ZookeeperJobBoard(jobboard.NotifyingJobBoard): LOG.warn("Incorrectly formatted job data found at path: %s", path, exc_info=True) except self._client.handler.timeout_exception: - LOG.warn("Connection timed out fetching job data from path: %s", + LOG.warn("Operation timed out fetching job data from path: %s", path, exc_info=True) except k_exceptions.SessionExpiredError: LOG.warn("Session expired fetching job data from path: %s", path, @@ -380,8 +424,10 @@ class ZookeeperJobBoard(jobboard.NotifyingJobBoard): LOG.warn("Internal error fetching job data from path: %s", path, exc_info=True) else: - 
self._job_cond.acquire() - try: + with self._job_cond: + # Now we can officially check if someone already placed this + # job's information into the known job set (if it already + # exists then just leave it alone). if path not in self._known_jobs: job = ZookeeperJob(job_data['name'], self, self._client, self._persistence, path, @@ -391,10 +437,8 @@ class ZookeeperJobBoard(jobboard.NotifyingJobBoard): created_on=created_on) self._known_jobs[path] = job self._job_cond.notify_all() - finally: - self._job_cond.release() if job is not None: - self._emit(jobboard.POSTED, details={'job': job}) + self._emit(base.POSTED, details={'job': job}) def _on_job_posting(self, children, delayed=True): LOG.debug("Got children %s under path %s", children, self.path) @@ -405,32 +449,43 @@ ... continue child_paths.append(k_paths.join(self.path, c)) - # Remove jobs that we know about but which are no longer children - with self._job_lock: - removals = set() - for path, _job in six.iteritems(self._known_jobs): + # Figure out what we really should be investigating and what we + # shouldn't (remove jobs that exist in our local version, but don't + # exist in the children anymore) and accumulate all paths that we + # need to trigger population of (without holding the job lock). + investigate_paths = [] + pending_removals = [] + with self._job_cond: + for path in six.iterkeys(self._known_jobs): if path not in child_paths: - removals.add(path) - for path in removals: - self._remove_job(path) - - # Ensure that we have a job record for each new job that has appeared + pending_removals.append(path) for path in child_paths: if path in self._bad_paths: continue - with self._job_lock: - if path not in self._known_jobs: - # Fire off the request to populate this job asynchronously. - # - # This method is *usually* called from a asynchronous - # handler so it's better to exit from this quickly to - # allow other asynchronous handlers to be executed.
- request = self._client.get_async(path) - child_proc = functools.partial(self._process_child, path) - if delayed: - request.rawlink(child_proc) - else: - child_proc(request) + # This pre-check will *not* guarantee that we will not already + # have the job (if it's being populated elsewhere) but it will + # reduce the number of duplicated requests in general; later when + # the job information has been populated we will ensure that we + # are not adding duplicates into the currently known jobs... + if path in self._known_jobs: + continue + if path not in investigate_paths: + investigate_paths.append(path) + if pending_removals: + with self._job_cond: + for path in pending_removals: + self._remove_job(path) + for path in investigate_paths: + # Fire off the request to populate this job. + # + # This method is *usually* called from an asynchronous handler so + # it's better to exit from this quickly to allow other asynchronous + # handlers to be executed. + request = self._client.get_async(path) + if delayed: + request.rawlink(functools.partial(self._process_child, path)) + else: + self._process_child(path, request) def post(self, name, book=None, details=None): @@ -466,13 +521,10 @@ ... self._persistence, job_path, book=book, details=details, uuid=job_uuid) - self._job_cond.acquire() - try: + with self._job_cond: self._known_jobs[job_path] = job self._job_cond.notify_all() - finally: - self._job_cond.release() - self._emit(jobboard.POSTED, details={'job': job}) + self._emit(base.POSTED, details={'job': job}) return job def claim(self, job, who): @@ -531,14 +583,13 @@ ... if not job_path: raise ValueError("Unable to check if %r is a known path" % (job_path)) - with self._job_lock: - if job_path not in self._known_jobs: - fail_msg_tpl += ", unknown job" - raise excp.NotFound(fail_msg_tpl % (job_uuid)) + if job_path not in self._known_jobs: + fail_msg_tpl += ", unknown job"
+ raise excp.NotFound(fail_msg_tpl % (job_uuid)) try: yield except self._client.handler.timeout_exception as e: - fail_msg_tpl += ", connection timed out" + fail_msg_tpl += ", operation timed out" raise excp.JobFailure(fail_msg_tpl % (job_uuid), e) except k_exceptions.SessionExpiredError as e: fail_msg_tpl += ", session expired" @@ -610,14 +661,12 @@ class ZookeeperJobBoard(jobboard.NotifyingJobBoard): def wait(self, timeout=None): # Wait until timeout expires (or forever) for jobs to appear. - watch = None - if timeout is not None: - watch = tt.StopWatch(duration=float(timeout)).start() - self._job_cond.acquire() - try: + watch = tt.StopWatch(duration=timeout) + watch.start() + with self._job_cond: while True: if not self._known_jobs: - if watch is not None and watch.expired(): + if watch.expired(): raise excp.NotFound("Expired waiting for jobs to" " arrive; waited %s seconds" % watch.elapsed()) @@ -626,17 +675,12 @@ class ZookeeperJobBoard(jobboard.NotifyingJobBoard): # when we acquire the condition that there will actually # be jobs (especially if we are spuriously awaken), so we # must recalculate the amount of time we really have left. 
- timeout = None - if watch is not None: - timeout = watch.leftover() - self._job_cond.wait(timeout) + self._job_cond.wait(watch.leftover(return_none=True)) else: it = ZookeeperJobBoardIterator(self) it._jobs.extend(self._fetch_jobs()) it._fetched = True return it - finally: - self._job_cond.release() @property def connected(self): @@ -651,7 +695,7 @@ ... LOG.debug("Shutting down the notifier") self._worker.shutdown() self._worker = None - with self._job_lock: + with self._job_cond: self._known_jobs.clear() LOG.debug("Stopped & cleared local state") diff --git a/taskflow/jobs/jobboard.py b/taskflow/jobs/base.py similarity index 73% rename from taskflow/jobs/jobboard.py rename to taskflow/jobs/base.py index d7d0850f..eea5b12b 100644 --- a/taskflow/jobs/jobboard.py +++ b/taskflow/jobs/base.py @@ -17,9 +17,100 @@ import abc +from oslo_utils import uuidutils import six -from taskflow.utils import misc +from taskflow.types import notifier + + +@six.add_metaclass(abc.ABCMeta) +class Job(object): + """An abstraction that represents a named and trackable unit of work. + + A job connects a logbook, an owner, last modified and created on dates and + any associated state that the job has. Since it is a connector to a + logbook, which are each associated with a set of factories that can create + sets of flows, it is the current top-level container for a piece of work + that can be owned by an entity (typically that entity will read those + logbooks and run any contained flows). + + Only one entity will be allowed to own and operate on the flows contained + in a job at a given time (for the foreseeable future). + + NOTE(harlowja): It is the object that will be transferred to another + entity on failure so that the contained flows' ownership can be + transferred to the secondary entity/owner for resumption, continuation, + reverting...
+ """ + + def __init__(self, name, uuid=None, details=None): + if uuid: + self._uuid = uuid + else: + self._uuid = uuidutils.generate_uuid() + self._name = name + if not details: + details = {} + self._details = details + + @abc.abstractproperty + def last_modified(self): + """The datetime the job was last modified.""" + pass + + @abc.abstractproperty + def created_on(self): + """The datetime the job was created on.""" + pass + + @abc.abstractproperty + def board(self): + """The board this job was posted on or was created from.""" + + @abc.abstractproperty + def state(self): + """The current state of this job.""" + + @abc.abstractproperty + def book(self): + """Logbook associated with this job. + + If no logbook is associated with this job, this property is None. + """ + + @abc.abstractproperty + def book_uuid(self): + """UUID of logbook associated with this job. + + If no logbook is associated with this job, this property is None. + """ + + @abc.abstractproperty + def book_name(self): + """Name of logbook associated with this job. + + If no logbook is associated with this job, this property is None. 
+ """ + + @property + def uuid(self): + """The uuid of this job.""" + return self._uuid + + @property + def details(self): + """A dictionary of any details associated with this job.""" + return self._details + + @property + def name(self): + """The non-uniquely identifying name of this job.""" + return self._name + + def __str__(self): + """Pretty formats the job into something *more* meaningful.""" + return "%s %s (%s): %s" % (type(self).__name__, + self.name, self.uuid, self.details) @six.add_metaclass(abc.ABCMeta) @@ -203,4 +294,4 @@ class NotifyingJobBoard(JobBoard): """ def __init__(self, name, conf): super(NotifyingJobBoard, self).__init__(name, conf) - self.notifier = misc.Notifier() + self.notifier = notifier.Notifier() diff --git a/taskflow/jobs/job.py b/taskflow/jobs/job.py deleted file mode 100644 index 41ac4c16..00000000 --- a/taskflow/jobs/job.py +++ /dev/null @@ -1,112 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright (C) 2013 Rackspace Hosting Inc. All Rights Reserved. -# Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -import abc - -import six - -from taskflow.openstack.common import uuidutils - - -@six.add_metaclass(abc.ABCMeta) -class Job(object): - """A abstraction that represents a named and trackable unit of work. - - A job connects a logbook, a owner, last modified and created on dates and - any associated state that the job has. 
Since it is a connector to a - logbook, which are each associated with a set of factories that can create - set of flows, it is the current top-level container for a piece of work - that can be owned by an entity (typically that entity will read those - logbooks and run any contained flows). - - Only one entity will be allowed to own and operate on the flows contained - in a job at a given time (for the foreseeable future). - - NOTE(harlowja): It is the object that will be transferred to another - entity on failure so that the contained flows ownership can be - transferred to the secondary entity/owner for resumption, continuation, - reverting... - """ - - def __init__(self, name, uuid=None, details=None): - if uuid: - self._uuid = uuid - else: - self._uuid = uuidutils.generate_uuid() - self._name = name - if not details: - details = {} - self._details = details - - @abc.abstractproperty - def last_modified(self): - """The datetime the job was last modified.""" - pass - - @abc.abstractproperty - def created_on(self): - """The datetime the job was created on.""" - pass - - @abc.abstractproperty - def board(self): - """The board this job was posted on or was created from.""" - - @abc.abstractproperty - def state(self): - """The current state of this job.""" - - @abc.abstractproperty - def book(self): - """Logbook associated with this job. - - If no logbook is associated with this job, this property is None. - """ - - @abc.abstractproperty - def book_uuid(self): - """UUID of logbook associated with this job. - - If no logbook is associated with this job, this property is None. - """ - - @abc.abstractproperty - def book_name(self): - """Name of logbook associated with this job. - - If no logbook is associated with this job, this property is None. 
- """ - - @property - def uuid(self): - """The uuid of this job.""" - return self._uuid - - @property - def details(self): - """A dictionary of any details associated with this job.""" - return self._details - - @property - def name(self): - """The non-uniquely identifying name of this job.""" - return self._name - - def __str__(self): - """Pretty formats the job into something *more* meaningful.""" - return "%s %s (%s): %s" % (type(self).__name__, - self.name, self.uuid, self.details) diff --git a/taskflow/listeners/base.py b/taskflow/listeners/base.py index 352b652a..4884d90e 100644 --- a/taskflow/listeners/base.py +++ b/taskflow/listeners/base.py @@ -17,13 +17,15 @@ from __future__ import absolute_import import abc -import logging +from oslo_utils import excutils import six -from taskflow.openstack.common import excutils +from taskflow import logging from taskflow import states -from taskflow.utils import misc +from taskflow.types import failure +from taskflow.types import notifier +from taskflow.utils import deprecation LOG = logging.getLogger(__name__) @@ -31,33 +33,84 @@ LOG = logging.getLogger(__name__) # do not produce results. FINISH_STATES = (states.FAILURE, states.SUCCESS) +# What is listened for by default... 
+DEFAULT_LISTEN_FOR = (notifier.Notifier.ANY,) -class ListenerBase(object): + +def _task_matcher(details): + """Matches task details emitted.""" + if not details: + return False + if 'task_name' in details and 'task_uuid' in details: + return True + return False + + +def _retry_matcher(details): + """Matches retry details emitted.""" + if not details: + return False + if 'retry_name' in details and 'retry_uuid' in details: + return True + return False + + +def _bulk_deregister(notifier, registered, details_filter=None): + """Bulk deregisters callbacks associated with many states.""" + while registered: + state, cb = registered.pop() + notifier.deregister(state, cb, + details_filter=details_filter) + + +def _bulk_register(watch_states, notifier, cb, details_filter=None): + """Bulk registers a callback associated with many states.""" + registered = [] + try: + for state in watch_states: + if not notifier.is_registered(state, cb, + details_filter=details_filter): + notifier.register(state, cb, + details_filter=details_filter) + registered.append((state, cb)) + except ValueError: + with excutils.save_and_reraise_exception(): + _bulk_deregister(notifier, registered, + details_filter=details_filter) + else: + return registered + + +class Listener(object): """Base class for listeners. A listener can be attached to an engine to do various actions on flow and - task state transitions. It implements context manager protocol to be able - to register and unregister with a given engine automatically when a context - is entered and when it is exited. + atom state transitions. It implements the context manager protocol to be + able to register and unregister with a given engine automatically when a + context is entered and when it is exited. To implement a listener, derive from this class and override - ``_flow_receiver`` and/or ``_task_receiver`` methods (in this class, - they do nothing). 
+ ``_flow_receiver`` and/or ``_task_receiver`` and/or ``_retry_receiver`` + methods (in this class, they do nothing). """ def __init__(self, engine, - task_listen_for=(misc.Notifier.ANY,), - flow_listen_for=(misc.Notifier.ANY,)): + task_listen_for=DEFAULT_LISTEN_FOR, + flow_listen_for=DEFAULT_LISTEN_FOR, + retry_listen_for=DEFAULT_LISTEN_FOR): if not task_listen_for: task_listen_for = [] + if not retry_listen_for: + retry_listen_for = [] if not flow_listen_for: flow_listen_for = [] self._listen_for = { 'task': list(task_listen_for), + 'retry': list(retry_listen_for), 'flow': list(flow_listen_for), } self._engine = engine - self._registered = False + self._registered = {} def _flow_receiver(self, state, details): pass @@ -65,46 +118,42 @@ class ListenerBase(object): def _task_receiver(self, state, details): pass + def _retry_receiver(self, state, details): + pass + def deregister(self): - if not self._registered: - return - - def _deregister(watch_states, notifier, cb): - for s in watch_states: - notifier.deregister(s, cb) - - _deregister(self._listen_for['task'], self._engine.task_notifier, - self._task_receiver) - _deregister(self._listen_for['flow'], self._engine.notifier, - self._flow_receiver) - - self._registered = False + if 'task' in self._registered: + _bulk_deregister(self._engine.atom_notifier, + self._registered['task'], + details_filter=_task_matcher) + del self._registered['task'] + if 'retry' in self._registered: + _bulk_deregister(self._engine.atom_notifier, + self._registered['retry'], + details_filter=_retry_matcher) + del self._registered['retry'] + if 'flow' in self._registered: + _bulk_deregister(self._engine.notifier, + self._registered['flow']) + del self._registered['flow'] def register(self): - if self._registered: - return - - def _register(watch_states, notifier, cb): - registered = [] - try: - for s in watch_states: - if not notifier.is_registered(s, cb): - notifier.register(s, cb) - registered.append((s, cb)) - except ValueError: - with 
excutils.save_and_reraise_exception(): - for (s, cb) in registered: - notifier.deregister(s, cb) - - _register(self._listen_for['task'], self._engine.task_notifier, - self._task_receiver) - _register(self._listen_for['flow'], self._engine.notifier, - self._flow_receiver) - - self._registered = True + if 'task' not in self._registered: + self._registered['task'] = _bulk_register( + self._listen_for['task'], self._engine.atom_notifier, + self._task_receiver, details_filter=_task_matcher) + if 'retry' not in self._registered: + self._registered['retry'] = _bulk_register( + self._listen_for['retry'], self._engine.atom_notifier, + self._retry_receiver, details_filter=_retry_matcher) + if 'flow' not in self._registered: + self._registered['flow'] = _bulk_register( + self._listen_for['flow'], self._engine.notifier, + self._flow_receiver) def __enter__(self): self.register() + return self def __exit__(self, type, value, tb): try: @@ -115,42 +164,63 @@ class ListenerBase(object): self._engine, exc_info=True) +# TODO(harlowja): remove in 0.7 or later... +ListenerBase = deprecation.moved_inheritable_class(Listener, + 'ListenerBase', __name__, + version="0.6", + removal_version="?") + + @six.add_metaclass(abc.ABCMeta) -class LoggingBase(ListenerBase): - """Abstract base class for logging listeners. +class DumpingListener(Listener): + """Abstract base class for dumping listeners. This provides a simple listener that can be attached to an engine which can - be derived from to log task and/or flow state transitions to some logging + be derived from to dump task and/or flow state transitions to some target backend. - To implement your own logging listener derive form this class and - override ``_log`` method. + To implement your own dumping listener derive from this class and + override the ``_dump`` method. 
""" @abc.abstractmethod - def _log(self, message, *args, **kwargs): - raise NotImplementedError() + def _dump(self, message, *args, **kwargs): + """Dumps the provided *templated* message to some output.""" def _flow_receiver(self, state, details): - self._log("%s has moved flow '%s' (%s) into state '%s'", - self._engine, details['flow_name'], - details['flow_uuid'], state) + self._dump("%s has moved flow '%s' (%s) into state '%s'" + " from state '%s'", self._engine, details['flow_name'], + details['flow_uuid'], state, details['old_state']) def _task_receiver(self, state, details): if state in FINISH_STATES: result = details.get('result') exc_info = None was_failure = False - if isinstance(result, misc.Failure): + if isinstance(result, failure.Failure): if result.exc_info: exc_info = tuple(result.exc_info) was_failure = True - self._log("%s has moved task '%s' (%s) into state '%s'" - " with result '%s' (failure=%s)", - self._engine, details['task_name'], - details['task_uuid'], state, result, was_failure, - exc_info=exc_info) + self._dump("%s has moved task '%s' (%s) into state '%s'" + " from state '%s' with result '%s' (failure=%s)", + self._engine, details['task_name'], + details['task_uuid'], state, details['old_state'], + result, was_failure, exc_info=exc_info) else: - self._log("%s has moved task '%s' (%s) into state '%s'", - self._engine, details['task_name'], - details['task_uuid'], state) + self._dump("%s has moved task '%s' (%s) into state '%s'" + " from state '%s'", self._engine, details['task_name'], + details['task_uuid'], state, details['old_state']) + + +# TODO(harlowja): remove in 0.7 or later... 
+class LoggingBase(deprecation.moved_inheritable_class(DumpingListener,
+                                                      'LoggingBase', __name__,
+                                                      version="0.6",
+                                                      removal_version="?")):
+
+    def _dump(self, message, *args, **kwargs):
+        self._log(message, *args, **kwargs)
+
+    @abc.abstractmethod
+    def _log(self, message, *args, **kwargs):
+        """Logs the provided *templated* message to some output."""
diff --git a/taskflow/listeners/claims.py b/taskflow/listeners/claims.py
new file mode 100644
index 00000000..82b89cf0
--- /dev/null
+++ b/taskflow/listeners/claims.py
@@ -0,0 +1,102 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from __future__ import absolute_import
+
+import logging
+import os
+
+import six
+
+from taskflow import exceptions
+from taskflow.listeners import base
+from taskflow import states
+
+LOG = logging.getLogger(__name__)
+
+
+class CheckingClaimListener(base.Listener):
+    """Listener that interacts with [engine, job, jobboard]; checks the claim.
+
+    This listener (or a derivative) can be associated with an engine's
+    notification system after the job has been claimed (so that the job's work
+    can be worked on by that engine). This listener (once associated) will
+    check that the job is still claimed *whenever* the engine notifies of a
+    task or flow state change. If the job is not claimed when a state change
+    occurs, an associated handler (or the default) will be activated to
+    determine how to react to this *hopefully* exceptional case.
+
+    NOTE(harlowja): this may create more traffic than desired to the
+    jobboard backend (zookeeper or other), since the amount of state change
+    per task and flow is non-zero (and checking during each state change will
+    result in quite a few calls to that management system to check the job's
+    claim status); this could later be optimized to check less (or only check
+    on a smaller set of states).
+
+    NOTE(harlowja): if a custom ``on_job_loss`` callback is provided it must
+    accept three positional arguments, the first being the current engine being
+    run, the second being the 'task/flow' state and the third being the details
+    that were sent from the engine to listeners for inspection.
+    """
+
+    def __init__(self, engine, job, board, owner, on_job_loss=None):
+        super(CheckingClaimListener, self).__init__(engine)
+        self._job = job
+        self._board = board
+        self._owner = owner
+        if on_job_loss is None:
+            self._on_job_loss = self._suspend_engine_on_loss
+        else:
+            if not six.callable(on_job_loss):
+                raise ValueError("Custom 'on_job_loss' handler must be"
+                                 " callable")
+            self._on_job_loss = on_job_loss
+
+    def _suspend_engine_on_loss(self, engine, state, details):
+        """The default strategy for handling claims being lost."""
+        try:
+            engine.suspend()
+        except exceptions.TaskFlowException as e:
+            LOG.warn("Failed suspending engine '%s' (previously owned by"
+                     " '%s'):%s%s", engine, self._owner, os.linesep,
+                     e.pformat())
+
+    def _flow_receiver(self, state, details):
+        self._claim_checker(state, details)
+
+    def _task_receiver(self, state, details):
+        self._claim_checker(state, details)
+
+    def _has_been_lost(self):
+        try:
+            job_state = self._job.state
+            job_owner = self._board.find_owner(self._job)
+        except (exceptions.NotFound, exceptions.JobFailure):
+            return True
+        else:
+            if job_state == states.UNCLAIMED or self._owner != job_owner:
+                return True
+            else:
+                return False
+
+    def _claim_checker(self, state, details):
+        if not self._has_been_lost():
+            LOG.debug("Job '%s' is still claimed (actively owned by '%s')",
+                      self._job, self._owner)
+        else:
+            LOG.warn("Job '%s' has lost its claim (previously owned by '%s')",
+                     self._job, self._owner)
+            self._on_job_loss(self._engine, state, details)
diff --git a/taskflow/listeners/logging.py b/taskflow/listeners/logging.py
index 71bf83f5..b2d4d344 100644
--- a/taskflow/listeners/logging.py
+++ b/taskflow/listeners/logging.py
@@ -16,34 +16,187 @@
 from __future__ import absolute_import
 
-import logging
+import logging as logging_base
+import os
+import sys
 
 from taskflow.listeners import base
+from taskflow import logging
+from taskflow import states
+from taskflow.types import failure
 from taskflow.utils import misc
 
 LOG = logging.getLogger(__name__)
 
+if sys.version_info[0:2] == (2, 6):
+    _PY26 = True
+else:
+    _PY26 = False
 
-class LoggingListener(base.LoggingBase):
+
+# Fixes python 2.6, which was missing the ``isEnabledFor`` method when a
+# logger adapter is being used/provided; this will no longer be needed
+# when we can just support python 2.7+ (which fixed the lack of this method
+# on adapters).
+def _isEnabledFor(logger, level):
+    if _PY26 and isinstance(logger, logging_base.LoggerAdapter):
+        return logger.logger.isEnabledFor(level)
+    return logger.isEnabledFor(level)
+
+
+class LoggingListener(base.DumpingListener):
     """Listener that logs notifications it receives.
 
-    It listens for task and flow notifications and writes those
-    notifications to provided logger, or logger of its module
-    (``taskflow.listeners.logging``) if none provided. Log level
-    can also be configured, ``logging.DEBUG`` is used by default.
+    It listens for task and flow notifications and writes those notifications
+    to a provided logger, or the logger of its module
+    (``taskflow.listeners.logging``) if none is provided (and no class
+    attribute is overridden). The log level can also be
+    configured; ``logging.DEBUG`` is used by default when none is provided.
     """
+
+    #: Default logger to use if one is not provided on construction.
+    _LOGGER = None
+
     def __init__(self, engine,
-                 task_listen_for=(misc.Notifier.ANY,),
-                 flow_listen_for=(misc.Notifier.ANY,),
+                 task_listen_for=base.DEFAULT_LISTEN_FOR,
+                 flow_listen_for=base.DEFAULT_LISTEN_FOR,
+                 retry_listen_for=base.DEFAULT_LISTEN_FOR,
                  log=None, level=logging.DEBUG):
-        super(LoggingListener, self).__init__(engine,
-                                              task_listen_for=task_listen_for,
-                                              flow_listen_for=flow_listen_for)
-        self._logger = log
-        if not self._logger:
-            self._logger = LOG
+        super(LoggingListener, self).__init__(
+            engine, task_listen_for=task_listen_for,
+            flow_listen_for=flow_listen_for, retry_listen_for=retry_listen_for)
+        self._logger = misc.pick_first_not_none(log, self._LOGGER, LOG)
         self._level = level
 
-    def _log(self, message, *args, **kwargs):
+    def _dump(self, message, *args, **kwargs):
         self._logger.log(self._level, message, *args, **kwargs)
+
+
+class DynamicLoggingListener(base.Listener):
    """Listener that logs notifications it receives.
+
+    It listens for task and flow notifications and writes those notifications
+    to a provided logger, or the logger of its module
+    (``taskflow.listeners.logging``) if none is provided (and no class
+    attribute is overridden). The log level can be *slightly* configured
+    and ``logging.DEBUG`` or ``logging.WARNING`` (unless overridden via a
+    constructor parameter) will be selected automatically based on the
+    execution state and results produced.
+
+    The following flow states cause ``logging.WARNING`` (or provided
+    level) to be used:
+
+    * ``states.FAILURE``
+    * ``states.REVERTED``
+
+    The following task states cause ``logging.WARNING`` (or provided level)
+    to be used:
+
+    * ``states.FAILURE``
+    * ``states.RETRYING``
+    * ``states.REVERTING``
+
+    When a task produces a :py:class:`~taskflow.types.failure.Failure` object
+    as its result (typically this happens when a task raises an exception),
+    this will **always** switch the logger to use ``logging.WARNING`` (if the
+    failure object contains an ``exc_info`` tuple, it will also be logged to
+    provide a meaningful traceback).
+    """
+
+    #: Default logger to use if one is not provided on construction.
+    _LOGGER = None
+
+    def __init__(self, engine,
+                 task_listen_for=base.DEFAULT_LISTEN_FOR,
+                 flow_listen_for=base.DEFAULT_LISTEN_FOR,
+                 retry_listen_for=base.DEFAULT_LISTEN_FOR,
+                 log=None, failure_level=logging.WARNING,
+                 level=logging.DEBUG):
+        super(DynamicLoggingListener, self).__init__(
+            engine, task_listen_for=task_listen_for,
+            flow_listen_for=flow_listen_for, retry_listen_for=retry_listen_for)
+        self._failure_level = failure_level
+        self._level = level
+        self._task_log_levels = {
+            states.FAILURE: self._failure_level,
+            states.REVERTED: self._failure_level,
+            states.RETRYING: self._failure_level,
+        }
+        self._flow_log_levels = {
+            states.FAILURE: self._failure_level,
+            states.REVERTED: self._failure_level,
+        }
+        self._logger = misc.pick_first_not_none(log, self._LOGGER, LOG)
+
+    @staticmethod
+    def _format_failure(fail):
+        """Returns a (exc_info, exc_details) tuple about the failure.
+
+        The ``exc_info`` tuple should be a standard three element
+        (exctype, value, traceback) tuple that will be used for further
+        logging. If a non-empty string is returned for ``exc_details``, it
+        should contain any string info about the failure (with any specific
+        details the ``exc_info`` may not have/contain). If the ``exc_info``
+        tuple is returned as ``None``, then it will cause the logging
+        system to avoid outputting any traceback information (read
+        the python documentation on the logger interaction with ``exc_info``
+        to learn more).
+        """
+        if fail.exc_info:
+            exc_info = fail.exc_info
+            exc_details = ''
+        else:
+            # When a remote failure occurs (or somehow the failure
+            # object lost its traceback), we will not have a valid
+            # exc_info that can be used but we *should* have a string
+            # version that we can use instead...
+            exc_info = None
+            exc_details = "%s%s" % (os.linesep, fail.pformat(traceback=True))
+        return (exc_info, exc_details)
+
+    def _flow_receiver(self, state, details):
+        """Gets called on flow state changes."""
+        level = self._flow_log_levels.get(state, self._level)
+        self._logger.log(level, "Flow '%s' (%s) transitioned into state '%s'"
+                         " from state '%s'", details['flow_name'],
+                         details['flow_uuid'], state, details.get('old_state'))
+
+    def _task_receiver(self, state, details):
+        """Gets called on task state changes."""
+        if 'result' in details and state in base.FINISH_STATES:
+            # If the task failed, it's useful to show the exception traceback
+            # and any other available exception information.
+            result = details.get('result')
+            if isinstance(result, failure.Failure):
+                exc_info, exc_details = self._format_failure(result)
+                self._logger.log(self._failure_level,
+                                 "Task '%s' (%s) transitioned into state"
+                                 " '%s' from state '%s'%s",
+                                 details['task_name'], details['task_uuid'],
+                                 state, details['old_state'], exc_details,
+                                 exc_info=exc_info)
+            else:
+                # Otherwise, depending on the enabled logging level/state we
+                # will show or hide results that the task may have produced
+                # during execution.
+                level = self._task_log_levels.get(state, self._level)
+                if (_isEnabledFor(self._logger, self._level)
+                        or state == states.FAILURE):
+                    self._logger.log(level, "Task '%s' (%s) transitioned into"
+                                     " state '%s' from state '%s' with"
+                                     " result '%s'", details['task_name'],
+                                     details['task_uuid'], state,
+                                     details['old_state'], result)
+                else:
+                    self._logger.log(level, "Task '%s' (%s) transitioned into"
+                                     " state '%s' from state '%s'",
+                                     details['task_name'],
+                                     details['task_uuid'], state,
+                                     details['old_state'])
+        else:
+            # Just an intermediary state, carry on!
+            level = self._task_log_levels.get(state, self._level)
+            self._logger.log(level, "Task '%s' (%s) transitioned into state"
+                             " '%s' from state '%s'", details['task_name'],
+                             details['task_uuid'], state, details['old_state'])
diff --git a/taskflow/listeners/printing.py b/taskflow/listeners/printing.py
index e9359bf5..2a89b179 100644
--- a/taskflow/listeners/printing.py
+++ b/taskflow/listeners/printing.py
@@ -20,24 +20,24 @@
 import sys
 import traceback
 
 from taskflow.listeners import base
-from taskflow.utils import misc
 
-class PrintingListener(base.LoggingBase):
+class PrintingListener(base.DumpingListener):
     """Writes the task and flow notifications messages to stdout or stderr."""
     def __init__(self, engine,
-                 task_listen_for=(misc.Notifier.ANY,),
-                 flow_listen_for=(misc.Notifier.ANY,),
+                 task_listen_for=base.DEFAULT_LISTEN_FOR,
+                 flow_listen_for=base.DEFAULT_LISTEN_FOR,
+                 retry_listen_for=base.DEFAULT_LISTEN_FOR,
                  stderr=False):
-        super(PrintingListener, self).__init__(engine,
-                                               task_listen_for=task_listen_for,
-                                               flow_listen_for=flow_listen_for)
+        super(PrintingListener, self).__init__(
+            engine, task_listen_for=task_listen_for,
+            flow_listen_for=flow_listen_for, retry_listen_for=retry_listen_for)
         if stderr:
             self._file = sys.stderr
         else:
             self._file = sys.stdout
 
-    def _log(self, message, *args, **kwargs):
+    def _dump(self, message, *args, **kwargs):
         print(message % args, file=self._file)
         exc_info = kwargs.get('exc_info')
         if exc_info is not None:
diff --git a/taskflow/listeners/timing.py b/taskflow/listeners/timing.py
index e21dd642..c0fab524 100644
--- a/taskflow/listeners/timing.py
+++ b/taskflow/listeners/timing.py
@@ -16,22 +16,30 @@
 
 from __future__ import absolute_import
 
-import logging
+import itertools
 
 from taskflow import exceptions as exc
 from taskflow.listeners import base
+from taskflow import logging
 from taskflow import states
 from taskflow.types import timing as tt
 
-STARTING_STATES = (states.RUNNING, states.REVERTING)
-FINISHED_STATES = base.FINISH_STATES + (states.REVERTED,)
-WATCH_STATES = frozenset(FINISHED_STATES + STARTING_STATES +
-                         (states.PENDING,))
+STARTING_STATES = frozenset((states.RUNNING, states.REVERTING))
+FINISHED_STATES = frozenset((base.FINISH_STATES + (states.REVERTED,)))
+WATCH_STATES = frozenset(itertools.chain(FINISHED_STATES, STARTING_STATES,
+                                         [states.PENDING]))
 
 LOG = logging.getLogger(__name__)
 
 
-class TimingListener(base.ListenerBase):
+# TODO(harlowja): get rid of this when we can just support python 3.x and use
+# its print function directly instead of having to wrap it in a helper function
+# due to how python 2.x print is a language built-in and not a function...
+def _printer(message):
+    print(message)
+
+
+class TimingListener(base.Listener):
     """Listener that captures task duration.
 
     It records how long a task took to execute (or fail)
@@ -46,11 +54,17 @@
 
     def deregister(self):
         super(TimingListener, self).deregister()
+        # There should be none that still exist at deregistering time, so log a
+        # warning if there were any that somehow still got left behind...
+        leftover_timers = len(self._timers)
+        if leftover_timers:
+            LOG.warn("%s task(s) did not enter %s states", leftover_timers,
+                     FINISHED_STATES)
         self._timers.clear()
 
     def _record_ending(self, timer, task_name):
         meta_update = {
-            'duration': float(timer.elapsed()),
+            'duration': timer.elapsed(),
         }
         try:
             # Don't let storage failures throw exceptions in a listener method.
@@ -66,5 +80,28 @@
         elif state in STARTING_STATES:
             self._timers[task_name] = tt.StopWatch().start()
         elif state in FINISHED_STATES:
-            if task_name in self._timers:
-                self._record_ending(self._timers[task_name], task_name)
+            timer = self._timers.pop(task_name, None)
+            if timer is not None:
+                timer.stop()
+                self._record_ending(timer, task_name)
+
+
+class PrintingTimingListener(TimingListener):
+    """Listener that prints the start & stop timing as well as recording it."""
+
+    def __init__(self, engine, printer=None):
+        super(PrintingTimingListener, self).__init__(engine)
+        if printer is None:
+            self._printer = _printer
+        else:
+            self._printer = printer
+
+    def _record_ending(self, timer, task_name):
+        super(PrintingTimingListener, self)._record_ending(timer, task_name)
+        self._printer("It took task '%s' %0.2f seconds to"
+                      " finish." % (task_name, timer.elapsed()))
+
+    def _task_receiver(self, state, details):
+        super(PrintingTimingListener, self)._task_receiver(state, details)
+        if state in STARTING_STATES:
+            self._printer("'%s' task started." % (details['task_name']))
diff --git a/taskflow/logging.py b/taskflow/logging.py
new file mode 100644
index 00000000..0ce457eb
--- /dev/null
+++ b/taskflow/logging.py
@@ -0,0 +1,92 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from __future__ import absolute_import
+
+import logging
+import sys
+
+_BASE = __name__.split(".", 1)[0]
+
+# Add a BLATHER level; this matches the multiprocessing utils.py module (and
+# kazoo and others) that declares a similar level. This level is for
+# information that is even lower level than regular DEBUG and gives out so
+# much runtime information that it is only useful to low-level/certain users...
+BLATHER = 5
+
+# Copy over *select* attributes to make it easy to use this module.
+CRITICAL = logging.CRITICAL
+DEBUG = logging.DEBUG
+ERROR = logging.ERROR
+FATAL = logging.FATAL
+NOTSET = logging.NOTSET
+WARN = logging.WARN
+WARNING = logging.WARNING
+
+
+class _BlatherLoggerAdapter(logging.LoggerAdapter):
+
+    def blather(self, msg, *args, **kwargs):
+        """Delegate a blather call to the underlying logger."""
+        self.log(BLATHER, msg, *args, **kwargs)
+
+    def warn(self, msg, *args, **kwargs):
+        """Delegate a warning call to the underlying logger."""
+        self.warning(msg, *args, **kwargs)
+
+
+# TODO(harlowja): we should remove this when we no longer support 2.6...
+if sys.version_info[0:2] == (2, 6):
+
+    class _FixedBlatherLoggerAdapter(_BlatherLoggerAdapter):
+        """Ensures isEnabledFor() exists on adapters that are created."""
+
+        def isEnabledFor(self, level):
+            return self.logger.isEnabledFor(level)
+
+    _BlatherLoggerAdapter = _FixedBlatherLoggerAdapter
+
+    # Taken from python2.7 (same in python3.4)...
+    class _NullHandler(logging.Handler):
+        """This handler does nothing.
+ + It's intended to be used to avoid the + "No handlers could be found for logger XXX" one-off warning. This is + important for library code, which may contain code to log events. If a + user of the library does not configure logging, the one-off warning + might be produced; to avoid this, the library developer simply needs + to instantiate a _NullHandler and add it to the top-level logger of the + library module or package. + """ + + def handle(self, record): + """Stub.""" + + def emit(self, record): + """Stub.""" + + def createLock(self): + self.lock = None + +else: + _NullHandler = logging.NullHandler + + +def getLogger(name=_BASE, extra=None): + logger = logging.getLogger(name) + if not logger.handlers: + logger.addHandler(_NullHandler()) + return _BlatherLoggerAdapter(logger, extra=extra) diff --git a/taskflow/openstack/common/__init__.py b/taskflow/openstack/common/__init__.py deleted file mode 100644 index d1223eaf..00000000 --- a/taskflow/openstack/common/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -import six - - -six.add_move(six.MovedModule('mox', 'mox', 'mox3.mox')) diff --git a/taskflow/openstack/common/excutils.py b/taskflow/openstack/common/excutils.py deleted file mode 100644 index 790fc0b1..00000000 --- a/taskflow/openstack/common/excutils.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright 2011 OpenStack Foundation. -# Copyright 2012, Red Hat, Inc. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -""" -Exception related utilities. -""" - -import logging -import sys -import time -import traceback - -import six - -from taskflow.openstack.common.gettextutils import _LE - - -class save_and_reraise_exception(object): - """Save current exception, run some code and then re-raise. - - In some cases the exception context can be cleared, resulting in None - being attempted to be re-raised after an exception handler is run. This - can happen when eventlet switches greenthreads or when running an - exception handler, code raises and catches an exception. In both - cases the exception context will be cleared. - - To work around this, we save the exception state, run handler code, and - then re-raise the original exception. If another exception occurs, the - saved exception is logged and the new exception is re-raised. - - In some cases the caller may not want to re-raise the exception, and - for those circumstances this context provides a reraise flag that - can be used to suppress the exception. For example:: - - except Exception: - with save_and_reraise_exception() as ctxt: - decide_if_need_reraise() - if not should_be_reraised: - ctxt.reraise = False - - If another exception occurs and reraise flag is False, - the saved exception will not be logged. 
- - If the caller wants to raise new exception during exception handling - he/she sets reraise to False initially with an ability to set it back to - True if needed:: - - except Exception: - with save_and_reraise_exception(reraise=False) as ctxt: - [if statements to determine whether to raise a new exception] - # Not raising a new exception, so reraise - ctxt.reraise = True - """ - def __init__(self, reraise=True): - self.reraise = reraise - - def __enter__(self): - self.type_, self.value, self.tb, = sys.exc_info() - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - if exc_type is not None: - if self.reraise: - logging.error(_LE('Original exception being dropped: %s'), - traceback.format_exception(self.type_, - self.value, - self.tb)) - return False - if self.reraise: - six.reraise(self.type_, self.value, self.tb) - - -def forever_retry_uncaught_exceptions(infunc): - def inner_func(*args, **kwargs): - last_log_time = 0 - last_exc_message = None - exc_count = 0 - while True: - try: - return infunc(*args, **kwargs) - except Exception as exc: - this_exc_message = six.u(str(exc)) - if this_exc_message == last_exc_message: - exc_count += 1 - else: - exc_count = 1 - # Do not log any more frequently than once a minute unless - # the exception message changes - cur_time = int(time.time()) - if (cur_time - last_log_time > 60 or - this_exc_message != last_exc_message): - logging.exception( - _LE('Unexpected exception occurred %d time(s)... ' - 'retrying.') % exc_count) - last_log_time = cur_time - last_exc_message = this_exc_message - exc_count = 0 - # This should be a very rare event. In case it isn't, do - # a sleep. - time.sleep(1) - return inner_func diff --git a/taskflow/openstack/common/gettextutils.py b/taskflow/openstack/common/gettextutils.py deleted file mode 100644 index 20fc2543..00000000 --- a/taskflow/openstack/common/gettextutils.py +++ /dev/null @@ -1,479 +0,0 @@ -# Copyright 2012 Red Hat, Inc. -# Copyright 2013 IBM Corp. 
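The `save_and_reraise_exception` helper deleted above can be sketched in Python 3 without the `six.reraise` shim; this condensation keeps the same contract (capture the active exception, run handler code, re-raise unless `ctxt.reraise` was flipped off):

```python
import logging
import sys


class save_and_reraise_exception:
    """Sketch of the removed excutils helper: save the in-flight
    exception, run handler code, then re-raise it on exit."""

    def __init__(self, reraise=True):
        self.reraise = reraise

    def __enter__(self):
        self.type_, self.value, self.tb = sys.exc_info()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is not None:
            # Handler code raised a new exception; log the one we were
            # holding (if still wanted) and let the new one propagate.
            if self.reraise:
                logging.error('Original exception being dropped: %s',
                              self.value)
            return False
        if self.reraise:
            raise self.value.with_traceback(self.tb)


def handle(should_reraise):
    """Usage: decide inside the handler whether to swallow the error."""
    try:
        raise ValueError("boom")
    except Exception:
        with save_and_reraise_exception() as ctxt:
            if not should_reraise:
                ctxt.reraise = False
```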
-# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -""" -gettext for openstack-common modules. - -Usual usage in an openstack.common module: - - from taskflow.openstack.common.gettextutils import _ -""" - -import copy -import gettext -import locale -from logging import handlers -import os - -from babel import localedata -import six - -_AVAILABLE_LANGUAGES = {} - -# FIXME(dhellmann): Remove this when moving to oslo.i18n. -USE_LAZY = False - - -class TranslatorFactory(object): - """Create translator functions - """ - - def __init__(self, domain, localedir=None): - """Establish a set of translation functions for the domain. - - :param domain: Name of translation domain, - specifying a message catalog. - :type domain: str - :param lazy: Delays translation until a message is emitted. - Defaults to False. - :type lazy: Boolean - :param localedir: Directory with translation catalogs. - :type localedir: str - """ - self.domain = domain - if localedir is None: - localedir = os.environ.get(domain.upper() + '_LOCALEDIR') - self.localedir = localedir - - def _make_translation_func(self, domain=None): - """Return a new translation function ready for use. - - Takes into account whether or not lazy translation is being - done. 
- - The domain can be specified to override the default from the - factory, but the localedir from the factory is always used - because we assume the log-level translation catalogs are - installed in the same directory as the main application - catalog. - - """ - if domain is None: - domain = self.domain - t = gettext.translation(domain, - localedir=self.localedir, - fallback=True) - # Use the appropriate method of the translation object based - # on the python version. - m = t.gettext if six.PY3 else t.ugettext - - def f(msg): - """oslo.i18n.gettextutils translation function.""" - if USE_LAZY: - return Message(msg, domain=domain) - return m(msg) - return f - - @property - def primary(self): - "The default translation function." - return self._make_translation_func() - - def _make_log_translation_func(self, level): - return self._make_translation_func(self.domain + '-log-' + level) - - @property - def log_info(self): - "Translate info-level log messages." - return self._make_log_translation_func('info') - - @property - def log_warning(self): - "Translate warning-level log messages." - return self._make_log_translation_func('warning') - - @property - def log_error(self): - "Translate error-level log messages." - return self._make_log_translation_func('error') - - @property - def log_critical(self): - "Translate critical-level log messages." - return self._make_log_translation_func('critical') - - -# NOTE(dhellmann): When this module moves out of the incubator into -# oslo.i18n, these global variables can be moved to an integration -# module within each application. - -# Create the global translation functions. -_translators = TranslatorFactory('taskflow') - -# The primary translation function using the well-known name "_" -_ = _translators.primary - -# Translators for log levels. -# -# The abbreviated names are meant to reflect the usual use of a short -# name like '_'. The "L" is for "log" and the other letter comes from -# the level. 
-_LI = _translators.log_info -_LW = _translators.log_warning -_LE = _translators.log_error -_LC = _translators.log_critical - -# NOTE(dhellmann): End of globals that will move to the application's -# integration module. - - -def enable_lazy(): - """Convenience function for configuring _() to use lazy gettext - - Call this at the start of execution to enable the gettextutils._ - function to use lazy gettext functionality. This is useful if - your project is importing _ directly instead of using the - gettextutils.install() way of importing the _ function. - """ - global USE_LAZY - USE_LAZY = True - - -def install(domain): - """Install a _() function using the given translation domain. - - Given a translation domain, install a _() function using gettext's - install() function. - - The main difference from gettext.install() is that we allow - overriding the default localedir (e.g. /usr/share/locale) using - a translation-domain-specific environment variable (e.g. - NOVA_LOCALEDIR). - - Note that to enable lazy translation, enable_lazy must be - called. - - :param domain: the translation domain - """ - from six import moves - tf = TranslatorFactory(domain) - moves.builtins.__dict__['_'] = tf.primary - - -class Message(six.text_type): - """A Message object is a unicode object that can be translated. - - Translation of Message is done explicitly using the translate() method. - For all non-translation intents and purposes, a Message is simply unicode, - and can be treated as such. - """ - - def __new__(cls, msgid, msgtext=None, params=None, - domain='taskflow', *args): - """Create a new Message object. - - In order for translation to work gettext requires a message ID, this - msgid will be used as the base unicode text. It is also possible - for the msgid and the base unicode text to be different by passing - the msgtext parameter. 
- """ - # If the base msgtext is not given, we use the default translation - # of the msgid (which is in English) just in case the system locale is - # not English, so that the base text will be in that locale by default. - if not msgtext: - msgtext = Message._translate_msgid(msgid, domain) - # We want to initialize the parent unicode with the actual object that - # would have been plain unicode if 'Message' was not enabled. - msg = super(Message, cls).__new__(cls, msgtext) - msg.msgid = msgid - msg.domain = domain - msg.params = params - return msg - - def translate(self, desired_locale=None): - """Translate this message to the desired locale. - - :param desired_locale: The desired locale to translate the message to, - if no locale is provided the message will be - translated to the system's default locale. - - :returns: the translated message in unicode - """ - - translated_message = Message._translate_msgid(self.msgid, - self.domain, - desired_locale) - if self.params is None: - # No need for more translation - return translated_message - - # This Message object may have been formatted with one or more - # Message objects as substitution arguments, given either as a single - # argument, part of a tuple, or as one or more values in a dictionary. 
- # When translating this Message we need to translate those Messages too - translated_params = _translate_args(self.params, desired_locale) - - translated_message = translated_message % translated_params - - return translated_message - - @staticmethod - def _translate_msgid(msgid, domain, desired_locale=None): - if not desired_locale: - system_locale = locale.getdefaultlocale() - # If the system locale is not available to the runtime use English - if not system_locale[0]: - desired_locale = 'en_US' - else: - desired_locale = system_locale[0] - - locale_dir = os.environ.get(domain.upper() + '_LOCALEDIR') - lang = gettext.translation(domain, - localedir=locale_dir, - languages=[desired_locale], - fallback=True) - if six.PY3: - translator = lang.gettext - else: - translator = lang.ugettext - - translated_message = translator(msgid) - return translated_message - - def __mod__(self, other): - # When we mod a Message we want the actual operation to be performed - # by the parent class (i.e. unicode()), the only thing we do here is - # save the original msgid and the parameters in case of a translation - params = self._sanitize_mod_params(other) - unicode_mod = super(Message, self).__mod__(params) - modded = Message(self.msgid, - msgtext=unicode_mod, - params=params, - domain=self.domain) - return modded - - def _sanitize_mod_params(self, other): - """Sanitize the object being modded with this Message. - - - Add support for modding 'None' so translation supports it - - Trim the modded object, which can be a large dictionary, to only - those keys that would actually be used in a translation - - Snapshot the object being modded, in case the message is - translated, it will be used as it was when the Message was created - """ - if other is None: - params = (other,) - elif isinstance(other, dict): - # Merge the dictionaries - # Copy each item in case one does not support deep copy. 
- params = {} - if isinstance(self.params, dict): - for key, val in self.params.items(): - params[key] = self._copy_param(val) - for key, val in other.items(): - params[key] = self._copy_param(val) - else: - params = self._copy_param(other) - return params - - def _copy_param(self, param): - try: - return copy.deepcopy(param) - except Exception: - # Fallback to casting to unicode this will handle the - # python code-like objects that can't be deep-copied - return six.text_type(param) - - def __add__(self, other): - msg = _('Message objects do not support addition.') - raise TypeError(msg) - - def __radd__(self, other): - return self.__add__(other) - - if six.PY2: - def __str__(self): - # NOTE(luisg): Logging in python 2.6 tries to str() log records, - # and it expects specifically a UnicodeError in order to proceed. - msg = _('Message objects do not support str() because they may ' - 'contain non-ascii characters. ' - 'Please use unicode() or translate() instead.') - raise UnicodeError(msg) - - -def get_available_languages(domain): - """Lists the available languages for the given translation domain. - - :param domain: the domain to get languages for - """ - if domain in _AVAILABLE_LANGUAGES: - return copy.copy(_AVAILABLE_LANGUAGES[domain]) - - localedir = '%s_LOCALEDIR' % domain.upper() - find = lambda x: gettext.find(domain, - localedir=os.environ.get(localedir), - languages=[x]) - - # NOTE(mrodden): en_US should always be available (and first in case - # order matters) since our in-line message strings are en_US - language_list = ['en_US'] - # NOTE(luisg): Babel <1.0 used a function called list(), which was - # renamed to locale_identifiers() in >=1.0, the requirements master list - # requires >=0.9.6, uncapped, so defensively work with both. 
We can remove - # this check when the master list updates to >=1.0, and update all projects - list_identifiers = (getattr(localedata, 'list', None) or - getattr(localedata, 'locale_identifiers')) - locale_identifiers = list_identifiers() - - for i in locale_identifiers: - if find(i) is not None: - language_list.append(i) - - # NOTE(luisg): Babel>=1.0,<1.3 has a bug where some OpenStack supported - # locales (e.g. 'zh_CN', and 'zh_TW') aren't supported even though they - # are perfectly legitimate locales: - # https://github.com/mitsuhiko/babel/issues/37 - # In Babel 1.3 they fixed the bug and they support these locales, but - # they are still not explicitly "listed" by locale_identifiers(). - # That is why we add the locales here explicitly if necessary so that - # they are listed as supported. - aliases = {'zh': 'zh_CN', - 'zh_Hant_HK': 'zh_HK', - 'zh_Hant': 'zh_TW', - 'fil': 'tl_PH'} - for (locale_, alias) in six.iteritems(aliases): - if locale_ in language_list and alias not in language_list: - language_list.append(alias) - - _AVAILABLE_LANGUAGES[domain] = language_list - return copy.copy(language_list) - - -def translate(obj, desired_locale=None): - """Gets the translated unicode representation of the given object. - - If the object is not translatable it is returned as-is. - If the locale is None the object is translated to the system locale. 
- - :param obj: the object to translate - :param desired_locale: the locale to translate the message to, if None the - default system locale will be used - :returns: the translated object in unicode, or the original object if - it could not be translated - """ - message = obj - if not isinstance(message, Message): - # If the object to translate is not already translatable, - # let's first get its unicode representation - message = six.text_type(obj) - if isinstance(message, Message): - # Even after unicoding() we still need to check if we are - # running with translatable unicode before translating - return message.translate(desired_locale) - return obj - - -def _translate_args(args, desired_locale=None): - """Translates all the translatable elements of the given arguments object. - - This method is used for translating the translatable values in method - arguments which include values of tuples or dictionaries. - If the object is not a tuple or a dictionary the object itself is - translated if it is translatable. - - If the locale is None the object is translated to the system locale. - - :param args: the args to translate - :param desired_locale: the locale to translate the args to, if None the - default system locale will be used - :returns: a new args object with the translated contents of the original - """ - if isinstance(args, tuple): - return tuple(translate(v, desired_locale) for v in args) - if isinstance(args, dict): - translated_dict = {} - for (k, v) in six.iteritems(args): - translated_v = translate(v, desired_locale) - translated_dict[k] = translated_v - return translated_dict - return translate(args, desired_locale) - - -class TranslationHandler(handlers.MemoryHandler): - """Handler that translates records before logging them. - - The TranslationHandler takes a locale and a target logging.Handler object - to forward LogRecord objects to after translating them. This handler - depends on Message objects being logged, instead of regular strings. 
- - The handler can be configured declaratively in the logging.conf as follows: - - [handlers] - keys = translatedlog, translator - - [handler_translatedlog] - class = handlers.WatchedFileHandler - args = ('/var/log/api-localized.log',) - formatter = context - - [handler_translator] - class = openstack.common.log.TranslationHandler - target = translatedlog - args = ('zh_CN',) - - If the specified locale is not available in the system, the handler will - log in the default locale. - """ - - def __init__(self, locale=None, target=None): - """Initialize a TranslationHandler - - :param locale: locale to use for translating messages - :param target: logging.Handler object to forward - LogRecord objects to after translation - """ - # NOTE(luisg): In order to allow this handler to be a wrapper for - # other handlers, such as a FileHandler, and still be able to - # configure it using logging.conf, this handler has to extend - # MemoryHandler because only the MemoryHandlers' logging.conf - # parsing is implemented such that it accepts a target handler. 
- handlers.MemoryHandler.__init__(self, capacity=0, target=target) - self.locale = locale - - def setFormatter(self, fmt): - self.target.setFormatter(fmt) - - def emit(self, record): - # We save the message from the original record to restore it - # after translation, so other handlers are not affected by this - original_msg = record.msg - original_args = record.args - - try: - self._translate_and_log_record(record) - finally: - record.msg = original_msg - record.args = original_args - - def _translate_and_log_record(self, record): - record.msg = translate(record.msg, self.locale) - - # In addition to translating the message, we also need to translate - # arguments that were passed to the log method that were not part - # of the main message e.g., log.info(_('Some message %s'), this_one)) - record.args = _translate_args(record.args, self.locale) - - self.target.emit(record) diff --git a/taskflow/openstack/common/importutils.py b/taskflow/openstack/common/importutils.py deleted file mode 100644 index 1e0e703f..00000000 --- a/taskflow/openstack/common/importutils.py +++ /dev/null @@ -1,73 +0,0 @@ -# Copyright 2011 OpenStack Foundation. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -""" -Import related utilities and helper functions. 
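The core idea behind the deleted `Message` class (a unicode subclass that defers translation by remembering its msgid and substitution params) can be sketched in Python 3 without the gettext machinery; the catalog lookup and locale handling are omitted here:

```python
class Message(str):
    """Sketch of deferred translation: subclass str, remember the
    msgid and the params used in %-formatting so the final string
    could later be re-rendered in another locale."""

    def __new__(cls, msgid, msgtext=None, params=None):
        msg = super().__new__(cls, msgtext if msgtext is not None else msgid)
        msg.msgid = msgid
        msg.params = params
        return msg

    def __mod__(self, params):
        # Render now, but keep msgid/params for later re-translation.
        return Message(self.msgid,
                       msgtext=str.__mod__(self, params),
                       params=params)


m = Message('Hello %s') % 'world'
```

Because the result is still a `str`, logging and formatting code can treat it as plain text, while a translation handler can reach back to `msgid` and `params`.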
-""" - -import sys -import traceback - - -def import_class(import_str): - """Returns a class from a string including module and class.""" - mod_str, _sep, class_str = import_str.rpartition('.') - __import__(mod_str) - try: - return getattr(sys.modules[mod_str], class_str) - except AttributeError: - raise ImportError('Class %s cannot be found (%s)' % - (class_str, - traceback.format_exception(*sys.exc_info()))) - - -def import_object(import_str, *args, **kwargs): - """Import a class and return an instance of it.""" - return import_class(import_str)(*args, **kwargs) - - -def import_object_ns(name_space, import_str, *args, **kwargs): - """Tries to import object from default namespace. - - Imports a class and return an instance of it, first by trying - to find the class in a default namespace, then failing back to - a full path if not found in the default namespace. - """ - import_value = "%s.%s" % (name_space, import_str) - try: - return import_class(import_value)(*args, **kwargs) - except ImportError: - return import_class(import_str)(*args, **kwargs) - - -def import_module(import_str): - """Import a module.""" - __import__(import_str) - return sys.modules[import_str] - - -def import_versioned_module(version, submodule=None): - module = 'taskflow.v%s' % version - if submodule: - module = '.'.join((module, submodule)) - return import_module(module) - - -def try_import(import_str, default=None): - """Try to import a module and if it fails return default.""" - try: - return import_module(import_str) - except ImportError: - return default diff --git a/taskflow/openstack/common/jsonutils.py b/taskflow/openstack/common/jsonutils.py deleted file mode 100644 index 8231688c..00000000 --- a/taskflow/openstack/common/jsonutils.py +++ /dev/null @@ -1,202 +0,0 @@ -# Copyright 2010 United States Government as represented by the -# Administrator of the National Aeronautics and Space Administration. -# Copyright 2011 Justin Santa Barbara -# All Rights Reserved. 
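The importutils helpers removed above reduce to a few lines around `__import__`; a self-contained sketch of the three most-used entry points:

```python
import sys


def import_class(import_str):
    """Return a class (or any module attribute) from a dotted string."""
    mod_str, _sep, class_str = import_str.rpartition('.')
    __import__(mod_str)
    try:
        return getattr(sys.modules[mod_str], class_str)
    except AttributeError:
        raise ImportError('Class %s cannot be found' % class_str)


def import_object(import_str, *args, **kwargs):
    """Import a class and return an instance of it."""
    return import_class(import_str)(*args, **kwargs)


def try_import(import_str, default=None):
    """Import a module, returning `default` when it is unavailable."""
    try:
        __import__(import_str)
        return sys.modules[import_str]
    except ImportError:
        return default
```

`try_import` is how optional dependencies (such as `netaddr` in the jsonutils module below) were probed without a hard requirement.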
-# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -''' -JSON related utilities. - -This module provides a few things: - - 1) A handy function for getting an object down to something that can be - JSON serialized. See to_primitive(). - - 2) Wrappers around loads() and dumps(). The dumps() wrapper will - automatically use to_primitive() for you if needed. - - 3) This sets up anyjson to use the loads() and dumps() wrappers if anyjson - is available. -''' - - -import codecs -import datetime -import functools -import inspect -import itertools -import sys - -is_simplejson = False -if sys.version_info < (2, 7): - # On Python <= 2.6, json module is not C boosted, so try to use - # simplejson module if available - try: - import simplejson as json - # NOTE(mriedem): Make sure we have a new enough version of simplejson - # to support the namedobject_as_tuple argument. This can be removed - # in the Kilo release when python 2.6 support is dropped. 
- if 'namedtuple_as_object' in inspect.getargspec(json.dumps).args: - is_simplejson = True - else: - import json - except ImportError: - import json -else: - import json - -import six -import six.moves.xmlrpc_client as xmlrpclib - -from taskflow.openstack.common import gettextutils -from taskflow.openstack.common import importutils -from taskflow.openstack.common import strutils -from taskflow.openstack.common import timeutils - -netaddr = importutils.try_import("netaddr") - -_nasty_type_tests = [inspect.ismodule, inspect.isclass, inspect.ismethod, - inspect.isfunction, inspect.isgeneratorfunction, - inspect.isgenerator, inspect.istraceback, inspect.isframe, - inspect.iscode, inspect.isbuiltin, inspect.isroutine, - inspect.isabstract] - -_simple_types = (six.string_types + six.integer_types - + (type(None), bool, float)) - - -def to_primitive(value, convert_instances=False, convert_datetime=True, - level=0, max_depth=3): - """Convert a complex object into primitives. - - Handy for JSON serialization. We can optionally handle instances, - but since this is a recursive function, we could have cyclical - data structures. - - To handle cyclical data structures we could track the actual objects - visited in a set, but not all objects are hashable. Instead we just - track the depth of the object inspections and don't go too deep. - - Therefore, convert_instances=True is lossy ... be aware. - - """ - # handle obvious types first - order of basic types determined by running - # full tests on nova project, resulting in the following counts: - # 572754 - # 460353 - # 379632 - # 274610 - # 199918 - # 114200 - # 51817 - # 26164 - # 6491 - # 283 - # 19 - if isinstance(value, _simple_types): - return value - - if isinstance(value, datetime.datetime): - if convert_datetime: - return timeutils.strtime(value) - else: - return value - - # value of itertools.count doesn't get caught by nasty_type_tests - # and results in infinite loop when list(value) is called. 
- if type(value) == itertools.count: - return six.text_type(value) - - # FIXME(vish): Workaround for LP bug 852095. Without this workaround, - # tests that raise an exception in a mocked method that - # has a @wrap_exception with a notifier will fail. If - # we up the dependency to 0.5.4 (when it is released) we - # can remove this workaround. - if getattr(value, '__module__', None) == 'mox': - return 'mock' - - if level > max_depth: - return '?' - - # The try block may not be necessary after the class check above, - # but just in case ... - try: - recursive = functools.partial(to_primitive, - convert_instances=convert_instances, - convert_datetime=convert_datetime, - level=level, - max_depth=max_depth) - if isinstance(value, dict): - return dict((k, recursive(v)) for k, v in six.iteritems(value)) - elif isinstance(value, (list, tuple)): - return [recursive(lv) for lv in value] - - # It's not clear why xmlrpclib created their own DateTime type, but - # for our purposes, make it a datetime type which is explicitly - # handled - if isinstance(value, xmlrpclib.DateTime): - value = datetime.datetime(*tuple(value.timetuple())[:6]) - - if convert_datetime and isinstance(value, datetime.datetime): - return timeutils.strtime(value) - elif isinstance(value, gettextutils.Message): - return value.data - elif hasattr(value, 'iteritems'): - return recursive(dict(value.iteritems()), level=level + 1) - elif hasattr(value, '__iter__'): - return recursive(list(value)) - elif convert_instances and hasattr(value, '__dict__'): - # Likely an instance of something. Watch for cycles. - # Ignore class member vars. 
- return recursive(value.__dict__, level=level + 1) - elif netaddr and isinstance(value, netaddr.IPAddress): - return six.text_type(value) - else: - if any(test(value) for test in _nasty_type_tests): - return six.text_type(value) - return value - except TypeError: - # Class objects are tricky since they may define something like - # __iter__ defined but it isn't callable as list(). - return six.text_type(value) - - -def dumps(value, default=to_primitive, **kwargs): - if is_simplejson: - kwargs['namedtuple_as_object'] = False - return json.dumps(value, default=default, **kwargs) - - -def dump(obj, fp, *args, **kwargs): - if is_simplejson: - kwargs['namedtuple_as_object'] = False - return json.dump(obj, fp, *args, **kwargs) - - -def loads(s, encoding='utf-8', **kwargs): - return json.loads(strutils.safe_decode(s, encoding), **kwargs) - - -def load(fp, encoding='utf-8', **kwargs): - return json.load(codecs.getreader(encoding)(fp), **kwargs) - - -try: - import anyjson -except ImportError: - pass -else: - anyjson._modules.append((__name__, 'dumps', TypeError, - 'loads', ValueError, 'load')) - anyjson.force_implementation(__name__) diff --git a/taskflow/openstack/common/network_utils.py b/taskflow/openstack/common/network_utils.py deleted file mode 100644 index 2729c3fb..00000000 --- a/taskflow/openstack/common/network_utils.py +++ /dev/null @@ -1,163 +0,0 @@ -# Copyright 2012 OpenStack Foundation. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
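The depth-limited recursion at the heart of the deleted `to_primitive` can be sketched in a few lines; this Python 3 condensation drops the simplejson, xmlrpclib, mox, and netaddr special cases, and uses `isoformat()` where the original used `timeutils.strtime`:

```python
import datetime


def to_primitive(value, level=0, max_depth=3):
    """Sketch: convert a value to JSON-serializable primitives.

    Depth tracking (rather than an id() set) is how the removed helper
    avoided cycles, at the cost of truncating deep structures to '?'.
    """
    if isinstance(value, (str, int, float, bool, type(None))):
        return value
    if isinstance(value, datetime.datetime):
        return value.isoformat()
    if level > max_depth:
        return '?'
    if isinstance(value, dict):
        return {k: to_primitive(v, level + 1, max_depth)
                for k, v in value.items()}
    if isinstance(value, (list, tuple, set)):
        return [to_primitive(v, level + 1, max_depth) for v in value]
    if hasattr(value, '__dict__'):
        # Likely an instance of something; serialize its attributes.
        return to_primitive(vars(value), level + 1, max_depth)
    return str(value)
```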
- -""" -Network-related utilities and helper functions. -""" - -import logging -import socket - -from six.moves.urllib import parse - -from taskflow.openstack.common.gettextutils import _LW - -LOG = logging.getLogger(__name__) - - -def parse_host_port(address, default_port=None): - """Interpret a string as a host:port pair. - - An IPv6 address MUST be escaped if accompanied by a port, - because otherwise ambiguity ensues: 2001:db8:85a3::8a2e:370:7334 - means both [2001:db8:85a3::8a2e:370:7334] and - [2001:db8:85a3::8a2e:370]:7334. - - >>> parse_host_port('server01:80') - ('server01', 80) - >>> parse_host_port('server01') - ('server01', None) - >>> parse_host_port('server01', default_port=1234) - ('server01', 1234) - >>> parse_host_port('[::1]:80') - ('::1', 80) - >>> parse_host_port('[::1]') - ('::1', None) - >>> parse_host_port('[::1]', default_port=1234) - ('::1', 1234) - >>> parse_host_port('2001:db8:85a3::8a2e:370:7334', default_port=1234) - ('2001:db8:85a3::8a2e:370:7334', 1234) - >>> parse_host_port(None) - (None, None) - """ - if not address: - return (None, None) - - if address[0] == '[': - # Escaped ipv6 - _host, _port = address[1:].split(']') - host = _host - if ':' in _port: - port = _port.split(':')[1] - else: - port = default_port - else: - if address.count(':') == 1: - host, port = address.split(':') - else: - # 0 means ipv4, >1 means ipv6. - # We prohibit unescaped ipv6 addresses with port. - host = address - port = default_port - - return (host, None if port is None else int(port)) - - -class ModifiedSplitResult(parse.SplitResult): - """Split results class for urlsplit.""" - - # NOTE(dims): The functions below are needed for Python 2.6.x. - # We can remove these when we drop support for 2.6.x. 
- @property - def hostname(self): - netloc = self.netloc.split('@', 1)[-1] - host, port = parse_host_port(netloc) - return host - - @property - def port(self): - netloc = self.netloc.split('@', 1)[-1] - host, port = parse_host_port(netloc) - return port - - -def urlsplit(url, scheme='', allow_fragments=True): - """Parse a URL using urlparse.urlsplit(), splitting query and fragments. - This function papers over Python issue9374 when needed. - - The parameters are the same as urlparse.urlsplit. - """ - scheme, netloc, path, query, fragment = parse.urlsplit( - url, scheme, allow_fragments) - if allow_fragments and '#' in path: - path, fragment = path.split('#', 1) - if '?' in path: - path, query = path.split('?', 1) - return ModifiedSplitResult(scheme, netloc, - path, query, fragment) - - -def set_tcp_keepalive(sock, tcp_keepalive=True, - tcp_keepidle=None, - tcp_keepalive_interval=None, - tcp_keepalive_count=None): - """Set values for tcp keepalive parameters - - This function configures tcp keepalive parameters if users wish to do - so. - - :param tcp_keepalive: Boolean, turn on or off tcp_keepalive. If users are - not sure, this should be True, and default values will be used. - - :param tcp_keepidle: time to wait before starting to send keepalive probes - :param tcp_keepalive_interval: time between successive probes, once the - initial wait time is over - :param tcp_keepalive_count: number of probes to send before the connection - is killed - """ - - # NOTE(praneshp): Despite keepalive being a tcp concept, the level is - # still SOL_SOCKET. This is a quirk. - if isinstance(tcp_keepalive, bool): - sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, tcp_keepalive) - else: - raise TypeError("tcp_keepalive must be a boolean") - - if not tcp_keepalive: - return - - # These options aren't available in the OS X version of eventlet, - # Idle + Count * Interval effectively gives you the total timeout. 
- if tcp_keepidle is not None: - if hasattr(socket, 'TCP_KEEPIDLE'): - sock.setsockopt(socket.IPPROTO_TCP, - socket.TCP_KEEPIDLE, - tcp_keepidle) - else: - LOG.warning(_LW('tcp_keepidle not available on your system')) - if tcp_keepalive_interval is not None: - if hasattr(socket, 'TCP_KEEPINTVL'): - sock.setsockopt(socket.IPPROTO_TCP, - socket.TCP_KEEPINTVL, - tcp_keepalive_interval) - else: - LOG.warning(_LW('tcp_keepintvl not available on your system')) - if tcp_keepalive_count is not None: - if hasattr(socket, 'TCP_KEEPCNT'): - sock.setsockopt(socket.IPPROTO_TCP, - socket.TCP_KEEPCNT, - tcp_keepalive_count) - else: - LOG.warning(_LW('tcp_keepknt not available on your system')) diff --git a/taskflow/openstack/common/strutils.py b/taskflow/openstack/common/strutils.py deleted file mode 100644 index 2f0fd659..00000000 --- a/taskflow/openstack/common/strutils.py +++ /dev/null @@ -1,311 +0,0 @@ -# Copyright 2011 OpenStack Foundation. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -""" -System-level utilities and helper functions. 
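The host:port parsing removed with network_utils can be sketched compactly; as the original docstring notes, an IPv6 literal must be bracket-escaped to carry a port, so a bare IPv6 address falls back to the default port:

```python
def parse_host_port(address, default_port=None):
    """Sketch: split 'host:port', handling bracketed IPv6 literals."""
    if not address:
        return (None, None)
    if address[0] == '[':
        # Bracketed IPv6: '[::1]:80' -> ('::1', 80)
        host, _sep, port_part = address[1:].partition(']')
        port = port_part[1:] if port_part.startswith(':') else default_port
    elif address.count(':') == 1:
        host, port = address.split(':')
    else:
        # Zero colons: hostname/IPv4; more than one: unescaped IPv6,
        # where a trailing port would be ambiguous.
        host, port = address, default_port
    return (host, None if port is None else int(port))
```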
-"""
-
-import math
-import re
-import sys
-import unicodedata
-
-import six
-
-from taskflow.openstack.common.gettextutils import _
-
-
-UNIT_PREFIX_EXPONENT = {
-    'k': 1,
-    'K': 1,
-    'Ki': 1,
-    'M': 2,
-    'Mi': 2,
-    'G': 3,
-    'Gi': 3,
-    'T': 4,
-    'Ti': 4,
-}
-UNIT_SYSTEM_INFO = {
-    'IEC': (1024, re.compile(r'(^[-+]?\d*\.?\d+)([KMGT]i?)?(b|bit|B)$')),
-    'SI': (1000, re.compile(r'(^[-+]?\d*\.?\d+)([kMGT])?(b|bit|B)$')),
-}
-
-TRUE_STRINGS = ('1', 't', 'true', 'on', 'y', 'yes')
-FALSE_STRINGS = ('0', 'f', 'false', 'off', 'n', 'no')
-
-SLUGIFY_STRIP_RE = re.compile(r"[^\w\s-]")
-SLUGIFY_HYPHENATE_RE = re.compile(r"[-\s]+")
-
-
-# NOTE(flaper87): The following globals are used by `mask_password`
-_SANITIZE_KEYS = ['adminPass', 'admin_pass', 'password', 'admin_password']
-
-# NOTE(ldbragst): Let's build a list of regex objects using the list of
-# _SANITIZE_KEYS we already have. This way, we only have to add the new key
-# to the list of _SANITIZE_KEYS and we can generate regular expressions
-# for XML and JSON automatically.
-_SANITIZE_PATTERNS_2 = []
-_SANITIZE_PATTERNS_1 = []
-
-# NOTE(amrith): Some regular expressions have only one parameter, some
-# have two parameters. Use different lists of patterns here.
-_FORMAT_PATTERNS_1 = [r'(%(key)s\s*[=]\s*)[^\s^\'^\"]+']
-_FORMAT_PATTERNS_2 = [r'(%(key)s\s*[=]\s*[\"\']).*?([\"\'])',
-                      r'(%(key)s\s+[\"\']).*?([\"\'])',
-                      r'([-]{2}%(key)s\s+)[^\'^\"^=^\s]+([\s]*)',
-                      r'(<%(key)s>).*?(</%(key)s>)',
-                      r'([\"\']%(key)s[\"\']\s*:\s*[\"\']).*?([\"\'])',
-                      r'([\'"].*?%(key)s[\'"]\s*:\s*u?[\'"]).*?([\'"])',
-                      r'([\'"].*?%(key)s[\'"]\s*,\s*\'--?[A-z]+\'\s*,\s*u?'
-                      '[\'"]).*?([\'"])',
-                      r'(%(key)s\s*--?[A-z]+\s*)\S+(\s*)']
-
-for key in _SANITIZE_KEYS:
-    for pattern in _FORMAT_PATTERNS_2:
-        reg_ex = re.compile(pattern % {'key': key}, re.DOTALL)
-        _SANITIZE_PATTERNS_2.append(reg_ex)
-
-    for pattern in _FORMAT_PATTERNS_1:
-        reg_ex = re.compile(pattern % {'key': key}, re.DOTALL)
-        _SANITIZE_PATTERNS_1.append(reg_ex)
-
-
-def int_from_bool_as_string(subject):
-    """Interpret a string as a boolean and return either 1 or 0.
-
-    Any string value in:
-
-        ('True', 'true', 'On', 'on', '1')
-
-    is interpreted as a boolean True.
-
-    Useful for JSON-decoded stuff and config file parsing
-    """
-    return bool_from_string(subject) and 1 or 0
-
-
-def bool_from_string(subject, strict=False, default=False):
-    """Interpret a string as a boolean.
-
-    A case-insensitive match is performed such that strings matching 't',
-    'true', 'on', 'y', 'yes', or '1' are considered True and, when
-    `strict=False`, anything else returns the value specified by 'default'.
-
-    Useful for JSON-decoded stuff and config file parsing.
-
-    If `strict=True`, unrecognized values, including None, will raise a
-    ValueError which is useful when parsing values passed in from an API call.
-    Strings yielding False are 'f', 'false', 'off', 'n', 'no', or '0'.
-    """
-    if not isinstance(subject, six.string_types):
-        subject = six.text_type(subject)
-
-    lowered = subject.strip().lower()
-
-    if lowered in TRUE_STRINGS:
-        return True
-    elif lowered in FALSE_STRINGS:
-        return False
-    elif strict:
-        acceptable = ', '.join(
-            "'%s'" % s for s in sorted(TRUE_STRINGS + FALSE_STRINGS))
-        msg = _("Unrecognized value '%(val)s', acceptable values are:"
-                " %(acceptable)s") % {'val': subject,
-                                      'acceptable': acceptable}
-        raise ValueError(msg)
-    else:
-        return default
-
-
-def safe_decode(text, incoming=None, errors='strict'):
-    """Decodes incoming text/bytes string using `incoming` if they're not
-       already unicode.
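The `TRUE_STRINGS`/`FALSE_STRINGS` tables make the parsing logic easy to restate; here is a minimal standalone sketch of the same semantics, without the `six`/gettext plumbing of the deleted module:

```python
TRUE_STRINGS = ('1', 't', 'true', 'on', 'y', 'yes')
FALSE_STRINGS = ('0', 'f', 'false', 'off', 'n', 'no')


def bool_from_string(subject, strict=False, default=False):
    # Normalize to text, then do a case-insensitive table lookup.
    lowered = str(subject).strip().lower()
    if lowered in TRUE_STRINGS:
        return True
    if lowered in FALSE_STRINGS:
        return False
    if strict:
        # Unrecognized values (including None) raise when strict.
        raise ValueError("Unrecognized value %r" % (subject,))
    return default
```

The `strict` flag is what makes this usable for API input validation: it turns silent fallback to `default` into a hard error.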
-
-    :param incoming: Text's current encoding
-    :param errors: Errors handling policy. See here for valid
-        values http://docs.python.org/2/library/codecs.html
-    :returns: text or a unicode `incoming` encoded
-        representation of it.
-    :raises TypeError: If text is not an instance of str
-    """
-    if not isinstance(text, (six.string_types, six.binary_type)):
-        raise TypeError("%s can't be decoded" % type(text))
-
-    if isinstance(text, six.text_type):
-        return text
-
-    if not incoming:
-        incoming = (sys.stdin.encoding or
-                    sys.getdefaultencoding())
-
-    try:
-        return text.decode(incoming, errors)
-    except UnicodeDecodeError:
-        # Note(flaper87) If we get here, it means that
-        # sys.stdin.encoding / sys.getdefaultencoding
-        # didn't return a suitable encoding to decode
-        # text. This happens mostly when global LANG
-        # var is not set correctly and there's no
-        # default encoding. In this case, most likely
-        # python will use ASCII or ANSI encoders as
-        # default encodings but they won't be capable
-        # of decoding non-ASCII characters.
-        #
-        # Also, UTF-8 is being used since it's an ASCII
-        # extension.
-        return text.decode('utf-8', errors)
-
-
-def safe_encode(text, incoming=None,
-                encoding='utf-8', errors='strict'):
-    """Encodes incoming text/bytes string using `encoding`.
-
-    If incoming is not specified, text is expected to be encoded with
-    current python's default encoding. (`sys.getdefaultencoding`)
-
-    :param incoming: Text's current encoding
-    :param encoding: Expected encoding for text (Default UTF-8)
-    :param errors: Errors handling policy. See here for valid
-        values http://docs.python.org/2/library/codecs.html
-    :returns: text or a bytestring `encoding` encoded
-        representation of it.
-    :raises TypeError: If text is not an instance of str
-    """
-    if not isinstance(text, (six.string_types, six.binary_type)):
-        raise TypeError("%s can't be encoded" % type(text))
-
-    if not incoming:
-        incoming = (sys.stdin.encoding or
-                    sys.getdefaultencoding())
-
-    if isinstance(text, six.text_type):
-        return text.encode(encoding, errors)
-    elif text and encoding != incoming:
-        # Decode text before encoding it with `encoding`
-        text = safe_decode(text, incoming, errors)
-        return text.encode(encoding, errors)
-    else:
-        return text
-
-
-def string_to_bytes(text, unit_system='IEC', return_int=False):
-    """Converts a string into an float representation of bytes.
-
-    The units supported for IEC ::
-
-        Kb(it), Kib(it), Mb(it), Mib(it), Gb(it), Gib(it), Tb(it), Tib(it)
-        KB, KiB, MB, MiB, GB, GiB, TB, TiB
-
-    The units supported for SI ::
-
-        kb(it), Mb(it), Gb(it), Tb(it)
-        kB, MB, GB, TB
-
-    Note that the SI unit system does not support capital letter 'K'
-
-    :param text: String input for bytes size conversion.
-    :param unit_system: Unit system for byte size conversion.
-    :param return_int: If True, returns integer representation of text
-                       in bytes. (default: decimal)
-    :returns: Numerical representation of text in bytes.
-    :raises ValueError: If text has an invalid value.
-
-    """
-    try:
-        base, reg_ex = UNIT_SYSTEM_INFO[unit_system]
-    except KeyError:
-        msg = _('Invalid unit system: "%s"') % unit_system
-        raise ValueError(msg)
-    match = reg_ex.match(text)
-    if match:
-        magnitude = float(match.group(1))
-        unit_prefix = match.group(2)
-        if match.group(3) in ['b', 'bit']:
-            magnitude /= 8
-    else:
-        msg = _('Invalid string format: %s') % text
-        raise ValueError(msg)
-    if not unit_prefix:
-        res = magnitude
-    else:
-        res = magnitude * pow(base, UNIT_PREFIX_EXPONENT[unit_prefix])
-    if return_int:
-        return int(math.ceil(res))
-    return res
-
-
-def to_slug(value, incoming=None, errors="strict"):
-    """Normalize string.
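The `string_to_bytes` conversion above can be exercised standalone; this sketch reuses the same unit tables and regexes as the deleted module (only the `six`/gettext plumbing is dropped):

```python
import math
import re

UNIT_PREFIX_EXPONENT = {'k': 1, 'K': 1, 'Ki': 1, 'M': 2, 'Mi': 2,
                        'G': 3, 'Gi': 3, 'T': 4, 'Ti': 4}
UNIT_SYSTEM_INFO = {
    'IEC': (1024, re.compile(r'(^[-+]?\d*\.?\d+)([KMGT]i?)?(b|bit|B)$')),
    'SI': (1000, re.compile(r'(^[-+]?\d*\.?\d+)([kMGT])?(b|bit|B)$')),
}


def string_to_bytes(text, unit_system='IEC', return_int=False):
    base, reg_ex = UNIT_SYSTEM_INFO[unit_system]
    match = reg_ex.match(text)
    if not match:
        raise ValueError('Invalid string format: %s' % text)
    magnitude = float(match.group(1))
    unit_prefix = match.group(2)
    if match.group(3) in ('b', 'bit'):
        magnitude /= 8  # lowercase 'b' means bits, so convert to bytes
    if not unit_prefix:
        return int(math.ceil(magnitude)) if return_int else magnitude
    res = magnitude * pow(base, UNIT_PREFIX_EXPONENT[unit_prefix])
    return int(math.ceil(res)) if return_int else res
```

Note the asymmetry the docstring calls out: IEC treats `K` and `Ki` both as 1024, while SI only accepts lowercase `k` (for 1000).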
-
-    Convert to lowercase, remove non-word characters, and convert spaces
-    to hyphens.
-
-    Inspired by Django's `slugify` filter.
-
-    :param value: Text to slugify
-    :param incoming: Text's current encoding
-    :param errors: Errors handling policy. See here for valid
-        values http://docs.python.org/2/library/codecs.html
-    :returns: slugified unicode representation of `value`
-    :raises TypeError: If text is not an instance of str
-    """
-    value = safe_decode(value, incoming, errors)
-    # NOTE(aababilov): no need to use safe_(encode|decode) here:
-    # encodings are always "ascii", error handling is always "ignore"
-    # and types are always known (first: unicode; second: str)
-    value = unicodedata.normalize("NFKD", value).encode(
-        "ascii", "ignore").decode("ascii")
-    value = SLUGIFY_STRIP_RE.sub("", value).strip().lower()
-    return SLUGIFY_HYPHENATE_RE.sub("-", value)
-
-
-def mask_password(message, secret="***"):
-    """Replace password with 'secret' in message.
-
-    :param message: The string which includes security information.
-    :param secret: value with which to replace passwords.
-    :returns: The unicode value of message with the password fields masked.
-
-    For example:
-
-    >>> mask_password("'adminPass' : 'aaaaa'")
-    "'adminPass' : '***'"
-    >>> mask_password("'admin_pass' : 'aaaaa'")
-    "'admin_pass' : '***'"
-    >>> mask_password('"password" : "aaaaa"')
-    '"password" : "***"'
-    >>> mask_password("'original_password' : 'aaaaa'")
-    "'original_password' : '***'"
-    >>> mask_password("u'original_password' : u'aaaaa'")
-    "u'original_password' : u'***'"
-    """
-    message = six.text_type(message)
-
-    # NOTE(ldbragst): Check to see if anything in message contains any key
-    # specified in _SANITIZE_KEYS, if not then just return the message since
-    # we don't have to mask any passwords.
-    if not any(key in message for key in _SANITIZE_KEYS):
-        return message
-
-    substitute = r'\g<1>' + secret + r'\g<2>'
-    for pattern in _SANITIZE_PATTERNS_2:
-        message = re.sub(pattern, substitute, message)
-
-    substitute = r'\g<1>' + secret
-    for pattern in _SANITIZE_PATTERNS_1:
-        message = re.sub(pattern, substitute, message)
-
-    return message
diff --git a/taskflow/openstack/common/timeutils.py b/taskflow/openstack/common/timeutils.py
deleted file mode 100644
index c48da95f..00000000
--- a/taskflow/openstack/common/timeutils.py
+++ /dev/null
@@ -1,210 +0,0 @@
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-Time related utilities and helper functions.
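The `mask_password` substitution mechanism above (fast membership check, then group-preserving regex substitution) can be shown with a single representative pattern from `_FORMAT_PATTERNS_2`; this is a reduced sketch, not the full pattern set of the deleted module:

```python
import re

_SANITIZE_KEYS = ['adminPass', 'admin_pass', 'password', 'admin_password']
# One representative two-group pattern: key = 'value' / key = "value".
_PATTERNS = [re.compile(r'(%s\s*[=]\s*[\"\']).*?([\"\'])' % key, re.DOTALL)
             for key in _SANITIZE_KEYS]


def mask_password(message, secret='***'):
    # Cheap substring check first; most messages contain no secrets.
    if not any(key in message for key in _SANITIZE_KEYS):
        return message
    # \g<1> and \g<2> keep the surrounding syntax, only the value is lost.
    substitute = r'\g<1>' + secret + r'\g<2>'
    for pattern in _PATTERNS:
        message = pattern.sub(substitute, message)
    return message
```

The two-group substitution is the key design point: the regex captures the text around the secret so the masked message stays syntactically intact (quotes, equals signs, etc.).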
-"""
-
-import calendar
-import datetime
-import time
-
-import iso8601
-import six
-
-
-# ISO 8601 extended time format with microseconds
-_ISO8601_TIME_FORMAT_SUBSECOND = '%Y-%m-%dT%H:%M:%S.%f'
-_ISO8601_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S'
-PERFECT_TIME_FORMAT = _ISO8601_TIME_FORMAT_SUBSECOND
-
-
-def isotime(at=None, subsecond=False):
-    """Stringify time in ISO 8601 format."""
-    if not at:
-        at = utcnow()
-    st = at.strftime(_ISO8601_TIME_FORMAT
-                     if not subsecond
-                     else _ISO8601_TIME_FORMAT_SUBSECOND)
-    tz = at.tzinfo.tzname(None) if at.tzinfo else 'UTC'
-    st += ('Z' if tz == 'UTC' else tz)
-    return st
-
-
-def parse_isotime(timestr):
-    """Parse time from ISO 8601 format."""
-    try:
-        return iso8601.parse_date(timestr)
-    except iso8601.ParseError as e:
-        raise ValueError(six.text_type(e))
-    except TypeError as e:
-        raise ValueError(six.text_type(e))
-
-
-def strtime(at=None, fmt=PERFECT_TIME_FORMAT):
-    """Returns formatted utcnow."""
-    if not at:
-        at = utcnow()
-    return at.strftime(fmt)
-
-
-def parse_strtime(timestr, fmt=PERFECT_TIME_FORMAT):
-    """Turn a formatted time back into a datetime."""
-    return datetime.datetime.strptime(timestr, fmt)
-
-
-def normalize_time(timestamp):
-    """Normalize time in arbitrary timezone to UTC naive object."""
-    offset = timestamp.utcoffset()
-    if offset is None:
-        return timestamp
-    return timestamp.replace(tzinfo=None) - offset
-
-
-def is_older_than(before, seconds):
-    """Return True if before is older than seconds."""
-    if isinstance(before, six.string_types):
-        before = parse_strtime(before).replace(tzinfo=None)
-    else:
-        before = before.replace(tzinfo=None)
-
-    return utcnow() - before > datetime.timedelta(seconds=seconds)
-
-
-def is_newer_than(after, seconds):
-    """Return True if after is newer than seconds."""
-    if isinstance(after, six.string_types):
-        after = parse_strtime(after).replace(tzinfo=None)
-    else:
-        after = after.replace(tzinfo=None)
-
-    return after - utcnow() > datetime.timedelta(seconds=seconds)
-
-
-def utcnow_ts():
-    """Timestamp version of our utcnow function."""
-    if utcnow.override_time is None:
-        # NOTE(kgriffs): This is several times faster
-        # than going through calendar.timegm(...)
-        return int(time.time())
-
-    return calendar.timegm(utcnow().timetuple())
-
-
-def utcnow():
-    """Overridable version of utils.utcnow."""
-    if utcnow.override_time:
-        try:
-            return utcnow.override_time.pop(0)
-        except AttributeError:
-            return utcnow.override_time
-    return datetime.datetime.utcnow()
-
-
-def iso8601_from_timestamp(timestamp):
-    """Returns an iso8601 formatted date from timestamp."""
-    return isotime(datetime.datetime.utcfromtimestamp(timestamp))
-
-
-utcnow.override_time = None
-
-
-def set_time_override(override_time=None):
-    """Overrides utils.utcnow.
-
-    Make it return a constant time or a list thereof, one at a time.
-
-    :param override_time: datetime instance or list thereof. If not
-                          given, defaults to the current UTC time.
-    """
-    utcnow.override_time = override_time or datetime.datetime.utcnow()
-
-
-def advance_time_delta(timedelta):
-    """Advance overridden time using a datetime.timedelta."""
-    assert utcnow.override_time is not None
-    try:
-        for dt in utcnow.override_time:
-            dt += timedelta
-    except TypeError:
-        utcnow.override_time += timedelta
-
-
-def advance_time_seconds(seconds):
-    """Advance overridden time by seconds."""
-    advance_time_delta(datetime.timedelta(0, seconds))
-
-
-def clear_time_override():
-    """Remove the overridden time."""
-    utcnow.override_time = None
-
-
-def marshall_now(now=None):
-    """Make an rpc-safe datetime with microseconds.
-
-    Note: tzinfo is stripped, but not required for relative times.
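The `utcnow.override_time` mechanism in the deleted module is just function-attribute state used to make time controllable in tests; a condensed standalone sketch of the override/advance/clear cycle (simplified to a single datetime, without the list-of-times variant):

```python
import datetime


def utcnow():
    """Overridable UTC now, mirroring the deleted helper's approach."""
    if utcnow.override_time is not None:
        return utcnow.override_time
    return datetime.datetime.utcnow()

utcnow.override_time = None


def set_time_override(when):
    utcnow.override_time = when


def advance_time_seconds(seconds):
    utcnow.override_time += datetime.timedelta(seconds=seconds)


def clear_time_override():
    utcnow.override_time = None
```

Code under test calls `utcnow()` instead of `datetime.datetime.utcnow()`, so a test can pin and advance the clock deterministically.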
-    """
-    if not now:
-        now = utcnow()
-    return dict(day=now.day, month=now.month, year=now.year, hour=now.hour,
-                minute=now.minute, second=now.second,
-                microsecond=now.microsecond)
-
-
-def unmarshall_time(tyme):
-    """Unmarshall a datetime dict."""
-    return datetime.datetime(day=tyme['day'],
-                             month=tyme['month'],
-                             year=tyme['year'],
-                             hour=tyme['hour'],
-                             minute=tyme['minute'],
-                             second=tyme['second'],
-                             microsecond=tyme['microsecond'])
-
-
-def delta_seconds(before, after):
-    """Return the difference between two timing objects.
-
-    Compute the difference in seconds between two date, time, or
-    datetime objects (as a float, to microsecond resolution).
-    """
-    delta = after - before
-    return total_seconds(delta)
-
-
-def total_seconds(delta):
-    """Return the total seconds of datetime.timedelta object.
-
-    Compute total seconds of datetime.timedelta, datetime.timedelta
-    doesn't have method total_seconds in Python2.6, calculate it manually.
-    """
-    try:
-        return delta.total_seconds()
-    except AttributeError:
-        return ((delta.days * 24 * 3600) + delta.seconds +
-                float(delta.microseconds) / (10 ** 6))
-
-
-def is_soon(dt, window):
-    """Determines if time is going to happen in the next window seconds.
-
-    :param dt: the time
-    :param window: minimum seconds to remain to consider the time not soon
-
-    :return: True if expiration is within the given duration
-    """
-    soon = (utcnow() + datetime.timedelta(seconds=window))
-    return normalize_time(dt) <= soon
diff --git a/taskflow/openstack/common/uuidutils.py b/taskflow/openstack/common/uuidutils.py
deleted file mode 100644
index 234b880c..00000000
--- a/taskflow/openstack/common/uuidutils.py
+++ /dev/null
@@ -1,37 +0,0 @@
-# Copyright (c) 2012 Intel Corporation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-"""
-UUID related utilities and helper functions.
-"""
-
-import uuid
-
-
-def generate_uuid():
-    return str(uuid.uuid4())
-
-
-def is_uuid_like(val):
-    """Returns validation of a value as a UUID.
-
-    For our purposes, a UUID is a canonical form string:
-    aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa
-
-    """
-    try:
-        return str(uuid.UUID(val)) == val
-    except (TypeError, ValueError, AttributeError):
-        return False
diff --git a/taskflow/patterns/graph_flow.py b/taskflow/patterns/graph_flow.py
index 7db4fee2..f71f285b 100644
--- a/taskflow/patterns/graph_flow.py
+++ b/taskflow/patterns/graph_flow.py
@@ -16,13 +16,27 @@
 
 import collections
 
-from networkx.algorithms import traversal
-
 from taskflow import exceptions as exc
 from taskflow import flow
 from taskflow.types import graph as gr
 
 
+def _unsatisfied_requires(node, graph, *additional_provided):
+    """Extracts the unsatisfied symbol requirements of a single node."""
+    requires = set(node.requires)
+    if not requires:
+        return requires
+    for provided in additional_provided:
+        requires = requires - provided
+        if not requires:
+            return requires
+    for pred in graph.bfs_predecessors_iter(node):
+        requires = requires - pred.provides
+        if not requires:
+            return requires
+    return requires
+
+
 class Flow(flow.Flow):
     """Graph flow pattern.
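The round-trip trick in the deleted `is_uuid_like` is worth noting: rather than a hand-written regex, it re-serializes through `uuid.UUID` and compares, which rejects any non-canonical spelling (uppercase, braces, missing hyphens). A standalone sketch of the same check:

```python
import uuid


def is_uuid_like(val):
    # Canonical form only: the round-trip through uuid.UUID must be
    # lossless, so '{...}', uppercase, or unhyphenated forms are rejected
    # even though uuid.UUID itself would parse them.
    try:
        return str(uuid.UUID(val)) == val
    except (TypeError, ValueError, AttributeError):
        return False
```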
@@ -61,11 +75,11 @@ class Flow(flow.Flow):
         if not attrs:
             attrs = {}
         if manual:
-            attrs['manual'] = True
+            attrs[flow.LINK_MANUAL] = True
         if reason is not None:
-            if 'reasons' not in attrs:
-                attrs['reasons'] = set()
-            attrs['reasons'].add(reason)
+            if flow.LINK_REASONS not in attrs:
+                attrs[flow.LINK_REASONS] = set()
+            attrs[flow.LINK_REASONS].add(reason)
         if not mutable_graph:
             graph = gr.DiGraph(graph)
         graph.add_edge(u, v, **attrs)
@@ -82,33 +96,59 @@ class Flow(flow.Flow):
         if not graph.is_directed_acyclic():
             raise exc.DependencyFailure("No path through the items in the"
                                         " graph produces an ordering that"
-                                        " will allow for correct dependency"
-                                        " resolution")
-        self._graph = graph
-        self._graph.freeze()
+                                        " will allow for logical"
+                                        " edge traversal")
+        self._graph = graph.freeze()
 
-    def add(self, *items):
-        """Adds a given task/tasks/flow/flows to this flow."""
+    def add(self, *items, **kwargs):
+        """Adds a given task/tasks/flow/flows to this flow.
+
+        :param items: items to add to the flow
+        :param kwargs: keyword arguments, the two keyword arguments
+                       currently processed are:
+
+                       * ``resolve_requires``, a boolean that when true (the
+                         default) implies that when items are added their
+                         symbol requirements will be matched to existing items
+                         and links will be automatically made to those
+                         providers. If multiple possible providers exist
+                         then an AmbiguousDependency exception will be raised.
+                       * ``resolve_existing``, a boolean that when true (the
+                         default) implies that on addition of a new item that
+                         existing items will have their requirements scanned
+                         for symbols that this newly added item can provide.
+                         If a match is found a link is automatically created
+                         from the newly added item to the requiree.
+        """
         items = [i for i in items if not self._graph.has_node(i)]
         if not items:
             return self
-        requirements = collections.defaultdict(list)
-        provided = {}
+        # This syntax will *hopefully* be better in future versions of python.
+        #
+        # See: http://legacy.python.org/dev/peps/pep-3102/ (python 3.0+)
+        resolve_requires = bool(kwargs.get('resolve_requires', True))
+        resolve_existing = bool(kwargs.get('resolve_existing', True))
 
-        def update_requirements(node):
-            for value in node.requires:
-                requirements[value].append(node)
+        # Figure out what the existing nodes *still* require and what they
+        # provide so we can do this lookup later when inferring.
+        required = collections.defaultdict(list)
+        provided = collections.defaultdict(list)
 
-        for node in self:
-            update_requirements(node)
-            for value in node.provides:
-                provided[value] = node
+        retry_provides = set()
+        if self._retry is not None:
+            for value in self._retry.requires:
+                required[value].append(self._retry)
+            for value in self._retry.provides:
+                retry_provides.add(value)
+                provided[value].append(self._retry)
 
-        if self.retry:
-            update_requirements(self.retry)
-            provided.update(dict((k, self.retry)
-                                 for k in self.retry.provides))
+        for item in self._graph.nodes_iter():
+            for value in _unsatisfied_requires(item, self._graph,
+                                               retry_provides):
+                required[value].append(item)
+            for value in item.provides:
+                provided[value].append(item)
 
         # NOTE(harlowja): Add items and edges to a temporary copy of the
         # underlying graph and only if that is successful added to do we then
@@ -116,37 +156,41 @@
         tmp_graph = gr.DiGraph(self._graph)
         for item in items:
             tmp_graph.add_node(item)
-            update_requirements(item)
-            for value in item.provides:
-                if value in provided:
-                    raise exc.DependencyFailure(
-                        "%(item)s provides %(value)s but is already being"
-                        " provided by %(flow)s and duplicate producers"
-                        " are disallowed"
-                        % dict(item=item.name,
-                               flow=provided[value].name,
-                               value=value))
-                if self.retry and value in self.retry.requires:
-                    raise exc.DependencyFailure(
-                        "Flows retry controller %(retry)s requires %(value)s "
-                        "but item %(item)s being added to the flow produces "
-                        "that item, this creates a cyclic dependency and is "
-                        "disallowed"
-                        % dict(item=item.name,
-                               retry=self.retry.name,
-                               value=value))
-            provided[value] = item
-            for value in item.requires:
-                if value in provided:
-                    self._link(provided[value], item,
-                               graph=tmp_graph, reason=value)
+            # Try to find a valid provider.
+            if resolve_requires:
+                for value in _unsatisfied_requires(item, tmp_graph,
+                                                   retry_provides):
+                    if value in provided:
+                        providers = provided[value]
+                        if len(providers) > 1:
+                            provider_names = [n.name for n in providers]
+                            raise exc.AmbiguousDependency(
+                                "Resolution error detected when"
+                                " adding %(item)s, multiple"
+                                " providers %(providers)s found for"
+                                " required symbol '%(value)s'"
+                                % dict(item=item.name,
+                                       providers=sorted(provider_names),
+                                       value=value))
+                        else:
+                            self._link(providers[0], item,
+                                       graph=tmp_graph, reason=value)
+                    else:
+                        required[value].append(item)
             for value in item.provides:
-                if value in requirements:
-                    for node in requirements[value]:
-                        self._link(item, node,
-                                   graph=tmp_graph, reason=value)
+                provided[value].append(item)
+
+            # See if what we provide fulfills any existing requiree.
+            if resolve_existing:
+                for value in item.provides:
+                    if value in required:
+                        for requiree in list(required[value]):
+                            if requiree is not item:
+                                self._link(item, requiree,
+                                           graph=tmp_graph, reason=value)
+                                required[value].remove(requiree)
 
         self._swap(tmp_graph)
         return self
@@ -163,13 +207,25 @@ class Flow(flow.Flow):
         return self._get_subgraph().number_of_nodes()
 
     def __iter__(self):
-        for n in self._get_subgraph().nodes_iter():
+        for n in self._get_subgraph().topological_sort():
             yield n
 
     def iter_links(self):
         for (u, v, e_data) in self._get_subgraph().edges_iter(data=True):
             yield (u, v, e_data)
 
+    @property
+    def requires(self):
+        requires = set()
+        retry_provides = set()
+        if self._retry is not None:
+            requires.update(self._retry.requires)
+            retry_provides.update(self._retry.provides)
+        g = self._get_subgraph()
+        for item in g.nodes_iter():
+            requires.update(_unsatisfied_requires(item, g, retry_provides))
+        return frozenset(requires)
+
 
 class TargetedFlow(Flow):
     """Graph flow with a target.
@@ -223,8 +279,7 @@ class TargetedFlow(Flow):
         if self._target is None:
             return self._graph
         nodes = [self._target]
-        nodes.extend(dst for _src, dst in
-                     traversal.dfs_edges(self._graph.reverse(), self._target))
+        nodes.extend(self._graph.bfs_predecessors_iter(self._target))
         self._subgraph = self._graph.subgraph(nodes)
         self._subgraph.freeze()
         return self._subgraph
diff --git a/taskflow/patterns/linear_flow.py b/taskflow/patterns/linear_flow.py
index 48b4d3cb..3067076c 100644
--- a/taskflow/patterns/linear_flow.py
+++ b/taskflow/patterns/linear_flow.py
@@ -14,22 +14,18 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-from taskflow import exceptions
 from taskflow import flow
 
-_LINK_METADATA = {'invariant': True}
+_LINK_METADATA = {flow.LINK_INVARIANT: True}
 
 
 class Flow(flow.Flow):
-    """Linear Flow pattern.
+    """Linear flow pattern.
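The provider inference that the new `add()` performs (match each unsatisfied requirement to a unique provider, raise on ambiguity) can be illustrated with plain dictionaries. The `Item` class and `infer_links` helper below are hypothetical stand-ins, not taskflow APIs; the real code works on a frozen graph type and raises `exc.AmbiguousDependency`:

```python
import collections


class Item(object):
    """Hypothetical stand-in for a taskflow task/flow node."""
    def __init__(self, name, requires=(), provides=()):
        self.name = name
        self.requires = frozenset(requires)
        self.provides = frozenset(provides)


def infer_links(items):
    """Link each consumer to its unique provider (resolve_requires sketch)."""
    providers = collections.defaultdict(list)
    for item in items:
        for symbol in item.provides:
            providers[symbol].append(item)
    links = []
    for item in items:
        for symbol in item.requires:
            candidates = providers.get(symbol, [])
            if len(candidates) > 1:
                # taskflow raises AmbiguousDependency here.
                raise ValueError("Ambiguous provider for %r" % symbol)
            if candidates:
                links.append((candidates[0].name, item.name, symbol))
    return links
```

This also shows why the patch tracks *unsatisfied* requirements rather than all of them: a symbol already linked to a provider must not be resolved a second time when more items are added.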
 
     A linear (potentially nested) flow of *tasks/flows* that can be
     applied in order as one unit and rolled back as one unit using the
     reverse order that the *tasks/flows* have been applied in.
-
-    NOTE(imelnikov): Tasks/flows contained in this linear flow must not
-    depend on outputs (provided names/values) of tasks/flows that follow it.
     """
 
     def __init__(self, name, retry=None):
@@ -38,32 +34,7 @@ class Flow(flow.Flow):
 
     def add(self, *items):
         """Adds a given task/tasks/flow/flows to this flow."""
-        if not items:
-            return self
-
-        # NOTE(imelnikov): we add item to the end of flow, so it should
-        # not provide anything previous items of the flow require.
-        requires = self.requires
-        provides = self.provides
-        for item in items:
-            requires |= item.requires
-            out_of_order = requires & item.provides
-            if out_of_order:
-                raise exceptions.DependencyFailure(
-                    "%(item)s provides %(oo)s that are required "
-                    "by previous item(s) of linear flow %(flow)s"
-                    % dict(item=item.name, flow=self.name,
-                           oo=sorted(out_of_order)))
-            same_provides = provides & item.provides
-            if same_provides:
-                raise exceptions.DependencyFailure(
-                    "%(item)s provides %(value)s but is already being"
-                    " provided by %(flow)s and duplicate producers"
-                    " are disallowed"
-                    % dict(item=item.name, flow=self.name,
-                           value=sorted(same_provides)))
-            provides |= item.provides
-
+        items = [i for i in items if i not in self._children]
         self._children.extend(items)
         return self
 
@@ -74,7 +45,18 @@ class Flow(flow.Flow):
         for child in self._children:
             yield child
 
+    @property
+    def requires(self):
+        requires = set()
+        prior_provides = set()
+        if self._retry is not None:
+            requires.update(self._retry.requires)
+            prior_provides.update(self._retry.provides)
+        for item in self:
+            requires.update(item.requires - prior_provides)
+            prior_provides.update(item.provides)
+        return frozenset(requires)
+
     def iter_links(self):
-        for src, dst in zip(self._children[:-1],
-                            self._children[1:]):
+        for src, dst in zip(self._children[:-1], self._children[1:]):
             yield (src, dst, _LINK_METADATA.copy())
diff --git a/taskflow/patterns/unordered_flow.py b/taskflow/patterns/unordered_flow.py
index a8377960..52bd286e 100644
--- a/taskflow/patterns/unordered_flow.py
+++ b/taskflow/patterns/unordered_flow.py
@@ -14,70 +14,25 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-from taskflow import exceptions
 from taskflow import flow
 
 
 class Flow(flow.Flow):
-    """Unordered Flow pattern.
+    """Unordered flow pattern.
 
     A unordered (potentially nested) flow of *tasks/flows* that can be
     executed in any order as one unit and rolled back as one unit.
-
-    NOTE(harlowja): Since the flow is unordered there can *not* be any
-    dependency between task/flow inputs (requirements) and
-    task/flow outputs (provided names/values).
     """
 
     def __init__(self, name, retry=None):
         super(Flow, self).__init__(name, retry)
         # NOTE(imelnikov): A unordered flow is unordered, so we use
         # set instead of list to save children, children so that
-        # people using it don't depend on the ordering
+        # people using it don't depend on the ordering.
         self._children = set()
 
     def add(self, *items):
         """Adds a given task/tasks/flow/flows to this flow."""
-        if not items:
-            return self
-
-        # check that items don't provide anything that other
-        # part of flow provides or requires
-        provides = self.provides
-        old_requires = self.requires
-        for item in items:
-            item_provides = item.provides
-            bad_provs = item_provides & old_requires
-            if bad_provs:
-                raise exceptions.DependencyFailure(
-                    "%(item)s provides %(oo)s that are required "
-                    "by other item(s) of unordered flow %(flow)s"
-                    % dict(item=item.name, flow=self.name,
-                           oo=sorted(bad_provs)))
-            same_provides = provides & item.provides
-            if same_provides:
-                raise exceptions.DependencyFailure(
-                    "%(item)s provides %(value)s but is already being"
-                    " provided by %(flow)s and duplicate producers"
-                    " are disallowed"
-                    % dict(item=item.name, flow=self.name,
-                           value=sorted(same_provides)))
-            provides |= item.provides
-
-        # check that items don't require anything other children provides
-        if self.retry:
-            # NOTE(imelnikov): it is allowed to depend on value provided
-            # by retry controller of the flow
-            provides -= self.retry.provides
-        for item in items:
-            bad_reqs = provides & item.requires
-            if bad_reqs:
-                raise exceptions.DependencyFailure(
-                    "%(item)s requires %(oo)s that are provided "
-                    "by other item(s) of unordered flow %(flow)s"
-                    % dict(item=item.name, flow=self.name,
-                           oo=sorted(bad_reqs)))
-
         self._children.update(items)
         return self
 
@@ -92,3 +47,15 @@ class Flow(flow.Flow):
         # NOTE(imelnikov): children in unordered flow have no dependencies
         # between each other due to invariants retained during construction.
         return iter(())
+
+    @property
+    def requires(self):
+        requires = set()
+        retry_provides = set()
+        if self._retry is not None:
+            requires.update(self._retry.requires)
+            retry_provides.update(self._retry.provides)
+        for item in self:
+            item_requires = item.requires - retry_provides
+            requires.update(item_requires)
+        return frozenset(requires)
diff --git a/taskflow/persistence/backends/__init__.py b/taskflow/persistence/backends/__init__.py
index 6faabdef..f11bfad0 100644
--- a/taskflow/persistence/backends/__init__.py
+++ b/taskflow/persistence/backends/__init__.py
@@ -15,11 +15,11 @@
 # under the License.
 
 import contextlib
-import logging
 
 from stevedore import driver
 
 from taskflow import exceptions as exc
+from taskflow import logging
 from taskflow.utils import misc
 
@@ -52,12 +52,17 @@ def fetch(conf, namespace=BACKEND_NAMESPACE, **kwargs):
     """
     backend_name = conf['connection']
     try:
-        pieces = misc.parse_uri(backend_name)
+        uri = misc.parse_uri(backend_name)
     except (TypeError, ValueError):
         pass
     else:
-        backend_name = pieces['scheme']
-        conf = misc.merge_uri(pieces, conf.copy())
+        backend_name = uri.scheme
+        conf = misc.merge_uri(uri, conf.copy())
+    # If the backend is like 'mysql+pymysql://...' which informs the
+    # backend to use a dialect (supported by sqlalchemy at least) we just want
+    # to look at the first component to find our entrypoint backend name...
+    if backend_name.find("+") != -1:
+        backend_name = backend_name.split("+", 1)[0]
     LOG.debug('Looking for %r backend driver in %r', backend_name, namespace)
     try:
         mgr = driver.DriverManager(namespace, backend_name,
diff --git a/taskflow/persistence/backends/impl_dir.py b/taskflow/persistence/backends/impl_dir.py
index 9ce4a324..644ca453 100644
--- a/taskflow/persistence/backends/impl_dir.py
+++ b/taskflow/persistence/backends/impl_dir.py
@@ -16,15 +16,15 @@
 # under the License.
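The new dialect handling in `fetch()` above is a plain string split on the first `+`, following the `mysql+pymysql` scheme convention sqlalchemy uses. A standalone sketch (the `entrypoint_backend_name` helper is hypothetical; taskflow does this inline via `misc.parse_uri`):

```python
try:
    from urllib.parse import urlsplit  # Python 3
except ImportError:
    from urlparse import urlsplit      # Python 2


def entrypoint_backend_name(connection):
    """Reduce a connection URI's scheme to an entrypoint lookup name."""
    # Fall back to the raw string for bare names like 'memory'.
    scheme = urlsplit(connection).scheme or connection
    # 'mysql+pymysql' selects a sqlalchemy dialect; only the part before
    # the '+' names the backend entrypoint to load.
    if '+' in scheme:
        scheme = scheme.split('+', 1)[0]
    return scheme
```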
 
 import errno
-import logging
 import os
 import shutil
 
+from oslo_serialization import jsonutils
 import six
 
 from taskflow import exceptions as exc
-from taskflow.openstack.common import jsonutils
-from taskflow.persistence.backends import base
+from taskflow import logging
+from taskflow.persistence import base
 from taskflow.persistence import logbook
 from taskflow.utils import lock_utils
 from taskflow.utils import misc
@@ -46,11 +46,11 @@ class DirBackend(base.Backend):
     guarantee that there will be no interprocess race conditions when
     writing and reading by using a consistent hierarchy of file based locks.
 
-    Example conf:
+    Example configuration::
 
-    conf = {
-        "path": "/tmp/taskflow",
-    }
+        conf = {
+            "path": "/tmp/taskflow",
+        }
     """
     def __init__(self, conf):
         super(DirBackend, self).__init__(conf)
diff --git a/taskflow/persistence/backends/impl_memory.py b/taskflow/persistence/backends/impl_memory.py
index f425987c..5e94afb1 100644
--- a/taskflow/persistence/backends/impl_memory.py
+++ b/taskflow/persistence/backends/impl_memory.py
@@ -15,17 +15,97 @@
 # License for the specific language governing permissions and limitations
 # under the License.
-import logging +import functools import six from taskflow import exceptions as exc -from taskflow.persistence.backends import base +from taskflow import logging +from taskflow.persistence import base from taskflow.persistence import logbook +from taskflow.utils import lock_utils LOG = logging.getLogger(__name__) +class _Memory(object): + """Where the data is really stored.""" + + def __init__(self): + self.log_books = {} + self.flow_details = {} + self.atom_details = {} + + def clear_all(self): + self.log_books.clear() + self.flow_details.clear() + self.atom_details.clear() + + +class _MemoryHelper(object): + """Helper functionality for the memory backends & connections.""" + + def __init__(self, memory): + self._memory = memory + + @staticmethod + def _fetch_clone_args(incoming): + if isinstance(incoming, (logbook.LogBook, logbook.FlowDetail)): + # We keep our own copy of the added contents of the following + # types so we don't need the clone to retain them directly... + return { + 'retain_contents': False, + } + return {} + + def construct(self, uuid, container): + """Reconstructs a object from the given uuid and storage container.""" + source = container[uuid] + clone_kwargs = self._fetch_clone_args(source) + clone = source['object'].copy(**clone_kwargs) + rebuilder = source.get('rebuilder') + if rebuilder: + for component in map(rebuilder, source['components']): + clone.add(component) + return clone + + def merge(self, incoming, saved_info=None): + """Merges the incoming object into the local memories copy.""" + if saved_info is None: + if isinstance(incoming, logbook.LogBook): + saved_info = self._memory.log_books.setdefault( + incoming.uuid, {}) + elif isinstance(incoming, logbook.FlowDetail): + saved_info = self._memory.flow_details.setdefault( + incoming.uuid, {}) + elif isinstance(incoming, logbook.AtomDetail): + saved_info = self._memory.atom_details.setdefault( + incoming.uuid, {}) + else: + raise TypeError("Unknown how to merge '%s' (%s)" + % 
(incoming, type(incoming))) + try: + saved_info['object'].merge(incoming) + except KeyError: + clone_kwargs = self._fetch_clone_args(incoming) + saved_info['object'] = incoming.copy(**clone_kwargs) + if isinstance(incoming, logbook.LogBook): + flow_details = saved_info.setdefault('components', set()) + if 'rebuilder' not in saved_info: + saved_info['rebuilder'] = functools.partial( + self.construct, container=self._memory.flow_details) + for flow_detail in incoming: + flow_details.add(self.merge(flow_detail)) + elif isinstance(incoming, logbook.FlowDetail): + atom_details = saved_info.setdefault('components', set()) + if 'rebuilder' not in saved_info: + saved_info['rebuilder'] = functools.partial( + self.construct, container=self._memory.atom_details) + for atom_detail in incoming: + atom_details.add(self.merge(atom_detail)) + return incoming.uuid + + class MemoryBackend(base.Backend): """A in-memory (non-persistent) backend. @@ -34,21 +114,28 @@ class MemoryBackend(base.Backend): """ def __init__(self, conf=None): super(MemoryBackend, self).__init__(conf) - self._log_books = {} - self._flow_details = {} - self._atom_details = {} + self._memory = _Memory() + self._helper = _MemoryHelper(self._memory) + self._lock = lock_utils.ReaderWriterLock() + + def _construct_from(self, container): + return dict((uuid, self._helper.construct(uuid, container)) + for uuid in six.iterkeys(container)) @property def log_books(self): - return self._log_books + with self._lock.read_lock(): + return self._construct_from(self._memory.log_books) @property def flow_details(self): - return self._flow_details + with self._lock.read_lock(): + return self._construct_from(self._memory.flow_details) @property def atom_details(self): - return self._atom_details + with self._lock.read_lock(): + return self._construct_from(self._memory.atom_details) def get_connection(self): return Connection(self) @@ -58,8 +145,13 @@ class MemoryBackend(base.Backend): class Connection(base.Connection): + """A 
connection to an in-memory backend.""" + def __init__(self, backend): self._backend = backend + self._helper = backend._helper + self._memory = backend._memory + self._lock = backend._lock def upgrade(self): pass @@ -75,78 +167,70 @@ class Connection(base.Connection): pass def clear_all(self): - count = 0 - for book_uuid in list(six.iterkeys(self.backend.log_books)): - self.destroy_logbook(book_uuid) - count += 1 - return count + with self._lock.write_lock(): + self._memory.clear_all() def destroy_logbook(self, book_uuid): - try: - # Do the same cascading delete that the sql layer does. - lb = self.backend.log_books.pop(book_uuid) - for fd in lb: - self.backend.flow_details.pop(fd.uuid, None) - for ad in fd: - self.backend.atom_details.pop(ad.uuid, None) - except KeyError: - raise exc.NotFound("No logbook found with id: %s" % book_uuid) + with self._lock.write_lock(): + try: + # Do the same cascading delete that the sql layer does. + book_info = self._memory.log_books.pop(book_uuid) + except KeyError: + raise exc.NotFound("No logbook found with uuid '%s'" + % book_uuid) + else: + while book_info['components']: + flow_uuid = book_info['components'].pop() + flow_info = self._memory.flow_details.pop(flow_uuid) + while flow_info['components']: + atom_uuid = flow_info['components'].pop() + self._memory.atom_details.pop(atom_uuid) def update_atom_details(self, atom_detail): - try: - e_ad = self.backend.atom_details[atom_detail.uuid] - except KeyError: - raise exc.NotFound("No atom details found with id: %s" - % atom_detail.uuid) - return e_ad.merge(atom_detail, deep_copy=True) - - def _save_flowdetail_atoms(self, e_fd, flow_detail): - for atom_detail in flow_detail: - e_ad = e_fd.find(atom_detail.uuid) - if e_ad is None: - e_fd.add(atom_detail) - self.backend.atom_details[atom_detail.uuid] = atom_detail - else: - e_ad.merge(atom_detail, deep_copy=True) + with self._lock.write_lock(): + try: + atom_info = self._memory.atom_details[atom_detail.uuid] + return 
self._helper.construct( + self._helper.merge(atom_detail, saved_info=atom_info), + self._memory.atom_details) + except KeyError: + raise exc.NotFound("No atom details found with uuid '%s'" + % atom_detail.uuid) def update_flow_details(self, flow_detail): - try: - e_fd = self.backend.flow_details[flow_detail.uuid] - except KeyError: - raise exc.NotFound("No flow details found with id: %s" - % flow_detail.uuid) - e_fd.merge(flow_detail, deep_copy=True) - self._save_flowdetail_atoms(e_fd, flow_detail) - return e_fd + with self._lock.write_lock(): + try: + flow_info = self._memory.flow_details[flow_detail.uuid] + return self._helper.construct( + self._helper.merge(flow_detail, saved_info=flow_info), + self._memory.flow_details) + except KeyError: + raise exc.NotFound("No flow details found with uuid '%s'" + % flow_detail.uuid) def save_logbook(self, book): - # Get a existing logbook model (or create it if it isn't there). - try: - e_lb = self.backend.log_books[book.uuid] - except KeyError: - e_lb = logbook.LogBook(book.name, uuid=book.uuid) - self.backend.log_books[e_lb.uuid] = e_lb - - e_lb.merge(book, deep_copy=True) - # Add anything in to the new logbook that isn't already in the existing - # logbook. 
- for flow_detail in book: - try: - e_fd = self.backend.flow_details[flow_detail.uuid] - except KeyError: - e_fd = logbook.FlowDetail(flow_detail.name, flow_detail.uuid) - e_lb.add(e_fd) - self.backend.flow_details[e_fd.uuid] = e_fd - e_fd.merge(flow_detail, deep_copy=True) - self._save_flowdetail_atoms(e_fd, flow_detail) - return e_lb + with self._lock.write_lock(): + return self._helper.construct(self._helper.merge(book), + self._memory.log_books) def get_logbook(self, book_uuid): - try: - return self.backend.log_books[book_uuid] - except KeyError: - raise exc.NotFound("No logbook found with id: %s" % book_uuid) + with self._lock.read_lock(): + try: + return self._helper.construct(book_uuid, + self._memory.log_books) + except KeyError: + raise exc.NotFound("No logbook found with uuid '%s'" + % book_uuid) def get_logbooks(self): - for lb in list(six.itervalues(self.backend.log_books)): - yield lb + # Don't hold locks while iterating... + with self._lock.read_lock(): + book_uuids = set(six.iterkeys(self._memory.log_books)) + for book_uuid in book_uuids: + try: + with self._lock.read_lock(): + book = self._helper.construct(book_uuid, + self._memory.log_books) + yield book + except KeyError: + pass diff --git a/taskflow/persistence/backends/impl_sqlalchemy.py b/taskflow/persistence/backends/impl_sqlalchemy.py index 1dc008eb..98ebb30d 100644 --- a/taskflow/persistence/backends/impl_sqlalchemy.py +++ b/taskflow/persistence/backends/impl_sqlalchemy.py @@ -22,9 +22,9 @@ from __future__ import absolute_import import contextlib import copy import functools -import logging import time +from oslo_utils import strutils import six import sqlalchemy as sa from sqlalchemy import exc as sa_exc @@ -32,11 +32,12 @@ from sqlalchemy import orm as sa_orm from sqlalchemy import pool as sa_pool from taskflow import exceptions as exc -from taskflow.openstack.common import strutils -from taskflow.persistence.backends import base +from taskflow import logging from 
taskflow.persistence.backends.sqlalchemy import migration from taskflow.persistence.backends.sqlalchemy import models +from taskflow.persistence import base from taskflow.persistence import logbook +from taskflow.types import failure from taskflow.utils import eventlet_utils from taskflow.utils import misc @@ -182,11 +183,11 @@ def _ping_listener(dbapi_conn, connection_rec, connection_proxy): class SQLAlchemyBackend(base.Backend): """A sqlalchemy backend. - Example conf: + Example configuration:: - conf = { - "connection": "sqlite:////tmp/test.db", - } + conf = { + "connection": "sqlite:////tmp/test.db", + } """ def __init__(self, conf, engine=None): super(SQLAlchemyBackend, self).__init__(conf) @@ -256,8 +257,6 @@ class SQLAlchemyBackend(base.Backend): if _as_bool(conf.pop('checkout_ping', True)): sa.event.listen(engine, 'checkout', _ping_listener) mode = None - if _as_bool(conf.pop('mysql_traditional_mode', True)): - mode = 'TRADITIONAL' if 'mysql_sql_mode' in conf: mode = conf.pop('mysql_sql_mode') if mode is not None: @@ -328,7 +327,7 @@ class Connection(base.Connection): pass except sa_exc.OperationalError as ex: if _is_db_connection_error(six.text_type(ex.args[0])): - failures.append(misc.Failure()) + failures.append(failure.Failure()) return False return True diff --git a/taskflow/persistence/backends/impl_zookeeper.py b/taskflow/persistence/backends/impl_zookeeper.py index e60bad85..916a889f 100644 --- a/taskflow/persistence/backends/impl_zookeeper.py +++ b/taskflow/persistence/backends/impl_zookeeper.py @@ -15,14 +15,14 @@ # under the License. 
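The new `get_logbooks()` above deliberately avoids holding the reader lock while the generator is suspended at `yield`: it snapshots the keys under the lock, then re-acquires the lock briefly per item. A standalone sketch of that pattern (plain `threading.Lock` here rather than the reader/writer lock the diff uses):

```python
# Sketch of "don't hold locks while iterating": snapshot the keys under
# the lock, then fetch each item under a short-lived lock so a consumer
# pausing mid-iteration cannot block writers indefinitely.
import threading


class SafeDict:
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def put(self, key, value):
        with self._lock:
            self._data[key] = value

    def iter_values(self):
        with self._lock:
            keys = set(self._data)
        for key in keys:
            try:
                with self._lock:
                    value = self._data[key]
            except KeyError:
                continue  # Deleted between snapshot and fetch; skip it.
            yield value
```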
import contextlib -import logging from kazoo import exceptions as k_exc from kazoo.protocol import paths +from oslo_serialization import jsonutils from taskflow import exceptions as exc -from taskflow.openstack.common import jsonutils -from taskflow.persistence.backends import base +from taskflow import logging +from taskflow.persistence import base from taskflow.persistence import logbook from taskflow.utils import kazoo_utils as k_utils from taskflow.utils import misc @@ -43,12 +43,12 @@ class ZkBackend(base.Backend): inside those directories that represent the contents of those objects for later reading and writing. - Example conf: + Example configuration:: - conf = { - "hosts": "192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181", - "path": "/taskflow", - } + conf = { + "hosts": "192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181", + "path": "/taskflow", + } """ def __init__(self, conf, client=None): super(ZkBackend, self).__init__(conf) @@ -169,7 +169,11 @@ class ZkConnection(base.Connection): ad_data, _zstat = self._client.get(ad_path) except k_exc.NoNodeError: # Not-existent: create or raise exception. - raise exc.NotFound("No atom details found with id: %s" % ad.uuid) + if not create_missing: + raise exc.NotFound("No atom details found with" + " id: %s" % ad.uuid) + else: + txn.create(ad_path) else: # Existent: read it out. try: diff --git a/taskflow/persistence/backends/sqlalchemy/models.py b/taskflow/persistence/backends/sqlalchemy/models.py index 4a78c5cb..e43c9652 100644 --- a/taskflow/persistence/backends/sqlalchemy/models.py +++ b/taskflow/persistence/backends/sqlalchemy/models.py @@ -15,6 +15,9 @@ # License for the specific language governing permissions and limitations # under the License. 
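The zookeeper fix above changes a missing-node error from an unconditional `NotFound` into get-or-create behaviour gated on `create_missing`. A simplified sketch of that control flow against a plain mapping (hypothetical function, not the kazoo transaction code):

```python
# Sketch of the create_missing behaviour: a missing entry raises only
# when create_missing is false; otherwise it is created and returned.
def load_or_create(store, path, create_missing=False):
    try:
        return store[path]
    except KeyError:
        if not create_missing:
            raise LookupError("No atom details found with id: %s" % path)
        store[path] = {}
        return store[path]
```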
+from oslo_serialization import jsonutils +from oslo_utils import timeutils +from oslo_utils import uuidutils from sqlalchemy import Column, String, DateTime, Enum from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import ForeignKey @@ -22,9 +25,6 @@ from sqlalchemy.orm import backref from sqlalchemy.orm import relationship from sqlalchemy import types as types -from taskflow.openstack.common import jsonutils -from taskflow.openstack.common import timeutils -from taskflow.openstack.common import uuidutils from taskflow.persistence import logbook from taskflow import states diff --git a/taskflow/persistence/backends/base.py b/taskflow/persistence/base.py similarity index 98% rename from taskflow/persistence/backends/base.py rename to taskflow/persistence/base.py index 9185d69c..00fb29be 100644 --- a/taskflow/persistence/backends/base.py +++ b/taskflow/persistence/base.py @@ -29,8 +29,8 @@ class Backend(object): if not conf: conf = {} if not isinstance(conf, dict): - raise TypeError("Configuration dictionary expected not: %s" - % type(conf)) + raise TypeError("Configuration dictionary expected not '%s' (%s)" + % (conf, type(conf))) self._conf = conf @abc.abstractmethod diff --git a/taskflow/persistence/logbook.py b/taskflow/persistence/logbook.py index 12c6c996..fcd777da 100644 --- a/taskflow/persistence/logbook.py +++ b/taskflow/persistence/logbook.py @@ -17,15 +17,15 @@ import abc import copy -import logging +from oslo_utils import timeutils +from oslo_utils import uuidutils import six from taskflow import exceptions as exc -from taskflow.openstack.common import timeutils -from taskflow.openstack.common import uuidutils +from taskflow import logging from taskflow import states -from taskflow.utils import misc +from taskflow.types import failure as ft LOG = logging.getLogger(__name__) @@ -50,7 +50,7 @@ def _safe_unmarshal_time(when): def _was_failure(state, result): - return state == states.FAILURE and isinstance(result, misc.Failure) + return 
state == states.FAILURE and isinstance(result, ft.Failure) def _fix_meta(data): @@ -137,7 +137,7 @@ class LogBook(object): @classmethod def from_dict(cls, data, unmarshal_time=False): - """Translates the given data into an instance of this class.""" + """Translates the given dictionary into an instance of this class.""" if not unmarshal_time: unmarshal_fn = lambda x: x else: @@ -163,6 +163,17 @@ class LogBook(object): def __len__(self): return len(self._flowdetails_by_id) + def copy(self, retain_contents=True): + """Copies/clones this log book.""" + clone = copy.copy(self) + if not retain_contents: + clone._flowdetails_by_id = {} + else: + clone._flowdetails_by_id = self._flowdetails_by_id.copy() + if self.meta: + clone.meta = self.meta.copy() + return clone + class FlowDetail(object): """A container of atom details, a name and associated metadata. @@ -186,7 +197,7 @@ class FlowDetail(object): """Updates the objects state to be the same as the given one.""" if fd is self: return self - self._atomdetails_by_id = dict(fd._atomdetails_by_id) + self._atomdetails_by_id = fd._atomdetails_by_id self.state = fd.state self.meta = fd.meta return self @@ -206,6 +217,17 @@ class FlowDetail(object): self.state = fd.state return self + def copy(self, retain_contents=True): + """Copies/clones this flow detail.""" + clone = copy.copy(self) + if not retain_contents: + clone._atomdetails_by_id = {} + else: + clone._atomdetails_by_id = self._atomdetails_by_id.copy() + if self.meta: + clone.meta = self.meta.copy() + return clone + def to_dict(self): """Translates the internal state of this object to a dictionary. 
@@ -363,7 +385,7 @@ class AtomDetail(object): self.meta = _fix_meta(data) failure = data.get('failure') if failure: - self.failure = misc.Failure.from_dict(failure) + self.failure = ft.Failure.from_dict(failure) @property def uuid(self): @@ -380,6 +402,7 @@ class AtomDetail(object): class TaskDetail(AtomDetail): """This class represents a task detail for flow task object.""" + def __init__(self, name, uuid): super(TaskDetail, self).__init__(name, uuid) @@ -410,8 +433,10 @@ class TaskDetail(AtomDetail): return self._to_dict_shared() def merge(self, other, deep_copy=False): + """Merges the current object state with the given ones state.""" if not isinstance(other, TaskDetail): - raise NotImplementedError("Can only merge with other task details") + raise exc.NotImplementedError("Can only merge with other" + " task details") if other is self: return self super(TaskDetail, self).merge(other, deep_copy=deep_copy) @@ -420,6 +445,16 @@ class TaskDetail(AtomDetail): self.results = copy_fn(other.results) return self + def copy(self): + """Copies/clones this task detail.""" + clone = copy.copy(self) + clone.results = copy.copy(self.results) + if self.meta: + clone.meta = self.meta.copy() + if self.version: + clone.version = copy.copy(self.version) + return clone + class RetryDetail(AtomDetail): """This class represents a retry detail for retry controller object.""" @@ -433,6 +468,24 @@ class RetryDetail(AtomDetail): self.state = state self.intention = states.EXECUTE + def copy(self): + """Copies/clones this retry detail.""" + clone = copy.copy(self) + results = [] + # NOTE(imelnikov): we can't just deep copy Failures, as they + # contain tracebacks, which are not copyable. 
+ for (data, failures) in self.results: + copied_failures = {} + for (key, failure) in six.iteritems(failures): + copied_failures[key] = failure + results.append((data, copied_failures)) + clone.results = results + if self.meta: + clone.meta = self.meta.copy() + if self.version: + clone.version = copy.copy(self.version) + return clone + @property def last_results(self): try: @@ -466,8 +519,8 @@ class RetryDetail(AtomDetail): new_results = [] for (data, failures) in results: new_failures = {} - for (key, failure_data) in six.iteritems(failures): - new_failures[key] = misc.Failure.from_dict(failure_data) + for (key, data) in six.iteritems(failures): + new_failures[key] = ft.Failure.from_dict(data) new_results.append((data, new_failures)) return new_results @@ -495,9 +548,10 @@ class RetryDetail(AtomDetail): return base def merge(self, other, deep_copy=False): + """Merges the current object state with the given ones state.""" if not isinstance(other, RetryDetail): - raise NotImplementedError("Can only merge with other retry " - "details") + raise exc.NotImplementedError("Can only merge with other" + " retry details") if other is self: return self super(RetryDetail, self).merge(other, deep_copy=deep_copy) @@ -529,11 +583,12 @@ def atom_detail_class(atom_type): try: return _NAME_TO_DETAIL[atom_type] except KeyError: - raise TypeError("Unknown atom type: %s" % (atom_type)) + raise TypeError("Unknown atom type '%s'" % (atom_type)) def atom_detail_type(atom_detail): try: return _DETAIL_TO_NAME[type(atom_detail)] except KeyError: - raise TypeError("Unknown atom type: %s" % type(atom_detail)) + raise TypeError("Unknown atom '%s' (%s)" + % (atom_detail, type(atom_detail))) diff --git a/taskflow/retry.py b/taskflow/retry.py index 425c8ea6..47ed8ca5 100644 --- a/taskflow/retry.py +++ b/taskflow/retry.py @@ -16,7 +16,6 @@ # under the License. 
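The `RetryDetail.copy()` added above rebuilds the results containers but shares the failure objects themselves, because failures carry tracebacks that cannot be deep-copied. A minimal sketch of that structure-only copy (hypothetical helper name):

```python
# Sketch of RetryDetail.copy()'s approach to results: new outer
# containers, shared failure objects (tracebacks aren't copyable), so
# structural changes to the copy never touch the original.
def copy_results(results):
    copied = []
    for data, failures in results:
        # New dict per entry, but the failure values are shared.
        copied.append((data, dict(failures)))
    return copied
```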
import abc -import logging import six @@ -24,52 +23,99 @@ from taskflow import atom from taskflow import exceptions as exc from taskflow.utils import misc -LOG = logging.getLogger(__name__) - # Decision results. REVERT = "REVERT" REVERT_ALL = "REVERT_ALL" RETRY = "RETRY" +# Constants passed into revert/execute kwargs. +# +# Contains information about the past decisions and outcomes that have +# occurred (if available). +EXECUTE_REVERT_HISTORY = 'history' +# +# The cause of the flow failure/s +REVERT_FLOW_FAILURES = 'flow_failures' -@six.add_metaclass(abc.ABCMeta) -class Decider(object): - """A class/mixin object that can decide how to resolve execution failures. - A decider may be executed multiple times on subflow or other atom - failure and it is expected to make a decision about what should be done - to resolve the failure (retry, revert to the previous retry, revert - the whole flow, etc.). - """ +class History(object): + """Helper that simplifies interactions with retry historical contents.""" - @abc.abstractmethod - def on_failure(self, history, *args, **kwargs): - """On failure makes a decision about the future. + def __init__(self, contents, failure=None): + self._contents = contents + self._failure = failure - This method will typically use information about prior failures (if - this historical failure information is not available or was not - persisted this history will be empty). + @property + def failure(self): + """Returns the retries own failure or none if not existent.""" + return self._failure - Returns retry action constant: + def outcomes_iter(self, index=None): + """Iterates over the contained failure outcomes. - * ``RETRY`` when subflow must be reverted and restarted again (maybe - with new parameters). - * ``REVERT`` when this subflow must be completely reverted and parent - subflow should make a decision about the flow execution. - * ``REVERT_ALL`` in a case when the whole flow must be reverted and - marked as ``FAILURE``. 
+ If the index is not provided, then all outcomes are iterated over. + + NOTE(harlowja): if the retry itself failed, this will **not** include + those types of failures. Use the :py:attr:`.failure` attribute to + access that instead (if it exists, aka, non-none). """ + if index is None: + contents = self._contents + else: + contents = [ + self._contents[index], + ] + for (provided, outcomes) in contents: + for (owner, outcome) in six.iteritems(outcomes): + yield (owner, outcome) + + def __len__(self): + return len(self._contents) + + def provided_iter(self): + """Iterates over all the values the retry has attempted (in order).""" + for (provided, outcomes) in self._contents: + yield provided + + def __getitem__(self, index): + return self._contents[index] + + def caused_by(self, exception_cls, index=None, include_retry=False): + """Checks if the exception class provided caused the failures. + + If the index is not provided, then all outcomes are iterated over. + + NOTE(harlowja): only if ``include_retry`` is provided as true (defaults + to false) will the potential retries own failure be + checked against as well. + """ + for (name, failure) in self.outcomes_iter(index=index): + if failure.check(exception_cls): + return True + if include_retry and self._failure is not None: + if self._failure.check(exception_cls): + return True + return False + + def __iter__(self): + """Iterates over the raw contents of this history object.""" + return iter(self._contents) @six.add_metaclass(abc.ABCMeta) -class Retry(atom.Atom, Decider): +class Retry(atom.Atom): """A class that can decide how to resolve execution failures. This abstract base class is used to inherit from and provide different strategies that will be activated upon execution failures. Since a retry - object is an atom it may also provide execute and revert methods to alter - the inputs of connected atoms (depending on the desired strategy to be - used this can be quite useful). 
+ object is an atom it may also provide :meth:`.execute` and + :meth:`.revert` methods to alter the inputs of connected atoms (depending + on the desired strategy to be used this can be quite useful). + + NOTE(harlowja): the :meth:`.execute` and :meth:`.revert` and + :meth:`.on_failure` will automatically be given a ``history`` parameter, + which contains information about the past decisions and outcomes + that have occurred (if available). """ default_provides = None @@ -80,7 +126,7 @@ class Retry(atom.Atom, Decider): provides = self.default_provides super(Retry, self).__init__(name, provides) self._build_arg_mapping(self.execute, requires, rebind, auto_extract, - ignore_list=['history']) + ignore_list=[EXECUTE_REVERT_HISTORY]) @property def name(self): @@ -92,11 +138,11 @@ class Retry(atom.Atom, Decider): @abc.abstractmethod def execute(self, history, *args, **kwargs): - """Executes the given retry atom. + """Executes the given retry. This execution activates a given retry which will typically produce data required to start or restart a connected component using - previously provided values and a history of prior failures from + previously provided values and a ``history`` of prior failures from previous runs. The historical data can be analyzed to alter the resolution strategy that this retry controller will use. @@ -105,12 +151,15 @@ class Retry(atom.Atom, Decider): saved to the history of the retry atom automatically, that is a list of tuples (result, failures) are persisted where failures is a dictionary of failures indexed by task names and the result is the execution - result returned by this retry controller during that failure resolution + result returned by this retry during that failure resolution attempt. + + :param args: positional arguments that retry requires to execute. + :param kwargs: any keyword arguments that retry requires to execute. """ def revert(self, history, *args, **kwargs): - """Reverts this retry using the given context. 
+ """Reverts this retry. On revert call all results that had been provided by previous tries and all errors caused during reversion are provided. This method @@ -118,6 +167,29 @@ class Retry(atom.Atom, Decider): retry (that is to say that the controller has ran out of resolution options and has either given up resolution or has failed to handle a execution failure). + + :param args: positional arguments that the retry required to execute. + :param kwargs: any keyword arguments that the retry required to + execute. + """ + + @abc.abstractmethod + def on_failure(self, history, *args, **kwargs): + """Makes a decision about the future. + + This method will typically use information about prior failures (if + this historical failure information is not available or was not + persisted the provided history will be empty). + + Returns a retry constant (one of): + + * ``RETRY``: when the controlling flow must be reverted and restarted + again (for example with new parameters). + * ``REVERT``: when this controlling flow must be completely reverted + and the parent flow (if any) should make a decision about further + flow execution. + * ``REVERT_ALL``: when this controlling flow and the parent + flow (if any) must be reverted and marked as a ``FAILURE``. """ @@ -166,8 +238,7 @@ class ForEachBase(Retry): # Fetches the next resolution result to try, removes overlapping # entries with what has already been tried and then returns the first # resolution strategy remaining. 
- items = (item for item, _failures in history) - remaining = misc.sequence_minus(values, items) + remaining = misc.sequence_minus(values, history.provided_iter()) if not remaining: raise exc.NotFound("No elements left in collection of iterable " "retry controller %s" % self.name) diff --git a/taskflow/storage.py b/taskflow/storage.py index 31a8868f..df801483 100644 --- a/taskflow/storage.py +++ b/taskflow/storage.py @@ -16,21 +16,96 @@ import abc import contextlib -import logging +from oslo_utils import reflection +from oslo_utils import uuidutils import six from taskflow import exceptions -from taskflow.openstack.common import uuidutils +from taskflow import logging from taskflow.persistence import logbook +from taskflow import retry from taskflow import states +from taskflow import task +from taskflow.types import failure from taskflow.utils import lock_utils from taskflow.utils import misc -from taskflow.utils import reflection LOG = logging.getLogger(__name__) STATES_WITH_RESULTS = (states.SUCCESS, states.REVERTING, states.FAILURE) +# TODO(harlowja): do this better (via a singleton or something else...) +_TRANSIENT_PROVIDER = object() + +# NOTE(harlowja): Perhaps the container is a dictionary-like object and that +# key does not exist (key error), or the container is a tuple/list and a +# non-numeric key is being requested (index error), or there was no container +# and an attempt to index into none/other unsubscriptable type is being +# requested (type error). +# +# Overall this (along with the item_from* functions) try to handle the vast +# majority of wrong indexing operations on the wrong/invalid types so that we +# can fail extraction during lookup or emit warning on result reception... 
+_EXTRACTION_EXCEPTIONS = (IndexError, KeyError, ValueError, TypeError) + + +class _Provider(object): + """A named symbol provider that produces a output at the given index.""" + + def __init__(self, name, index): + self.name = name + self.index = index + + def __repr__(self): + # TODO(harlowja): clean this up... + if self.name is _TRANSIENT_PROVIDER: + base = " failure mapping. self._failures[ad.name] = data @@ -464,94 +555,177 @@ class Storage(object): def save_transient(): self._transients.update(pairs) - # NOTE(harlowja): none is not a valid atom name, so that means - # we can use it internally to reference all of our transient - # variables. - return (None, six.iterkeys(self._transients)) + return (_TRANSIENT_PROVIDER, six.iterkeys(self._transients)) with self._lock.write_lock(): if transient: - (atom_name, names) = save_transient() + provider_name, names = save_transient() else: - (atom_name, names) = save_persistent() - self._set_result_mapping(atom_name, + provider_name, names = save_persistent() + self._set_result_mapping(provider_name, dict((name, name) for name in names)) - def _set_result_mapping(self, atom_name, mapping): - """Sets the result mapping for an atom. + def _set_result_mapping(self, provider_name, mapping): + """Sets the result mapping for a given producer. The result saved with given name would be accessible by names defined in mapping. Mapping is a dict name => index. If index is None, the whole result will have this name; else, only part of it, result[index]. """ - if not mapping: - return - self._result_mappings[atom_name] = mapping - for name, index in six.iteritems(mapping): - entries = self._reverse_mapping.setdefault(name, []) + provider_mapping = self._result_mappings.setdefault(provider_name, {}) + if mapping: + provider_mapping.update(mapping) + # Ensure the reverse mapping/index is updated (for faster lookups). 
+ for name, index in six.iteritems(provider_mapping): + entries = self._reverse_mapping.setdefault(name, []) + provider = _Provider(provider_name, index) + if provider not in entries: + entries.append(provider) - # NOTE(imelnikov): We support setting same result mapping for - # the same atom twice (e.g when we are injecting 'a' and then - # injecting 'a' again), so we should not log warning below in - # that case and we should have only one item for each pair - # (atom_name, index) in entries. It should be put to the end of - # entries list because order matters on fetching. - try: - entries.remove((atom_name, index)) - except ValueError: - pass - - entries.append((atom_name, index)) - if len(entries) > 1: - LOG.warning("Multiple provider mappings being created for %r", - name) - - def fetch(self, name): - """Fetch a named atoms result.""" + def fetch(self, name, many_handler=None): + """Fetch a named result.""" + # By default we just return the first of many (unless provided + # a different callback that can translate many results into something + # more meaningful). + if many_handler is None: + many_handler = lambda values: values[0] with self._lock.read_lock(): try: - indexes = self._reverse_mapping[name] + providers = self._reverse_mapping[name] except KeyError: - raise exceptions.NotFound("Name %r is not mapped" % name) - # Return the first one that is found. 
- for (atom_name, index) in reversed(indexes): - if not atom_name: - results = self._transients + raise exceptions.NotFound("Name %r is not mapped as a" + " produced output by any" + " providers" % name) + values = [] + for provider in providers: + if provider.name is _TRANSIENT_PROVIDER: + values.append(_item_from_single(provider, + self._transients, name)) else: - results = self._get(atom_name, only_last=True) - try: - return misc.item_from(results, index, name) - except exceptions.NotFound: - pass - raise exceptions.NotFound("Unable to find result %r" % name) + try: + container = self._get(provider.name, only_last=True) + except exceptions.NotFound: + pass + else: + values.append(_item_from_single(provider, + container, name)) + if not values: + raise exceptions.NotFound("Unable to find result %r," + " searched %s" % (name, providers)) + else: + return many_handler(values) def fetch_all(self): - """Fetch all named atom results known so far. + """Fetch all named results known so far. - Should be used for debugging and testing purposes mostly. + NOTE(harlowja): should be used for debugging and testing purposes. 
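The reworked ``fetch`` above collapses values from multiple providers through a ``many_handler`` callback, defaulting to "first of many". A minimal, self-contained sketch of that pattern (the ``NotFound`` class and the zero-argument callable providers here are illustrative stand-ins, not taskflow's real internals):

```python
class NotFound(Exception):
    """Raised when no provider can supply a requested name."""


def fetch(reverse_mapping, name, many_handler=None):
    # By default just return the first of many values, mirroring the
    # behavior introduced in the diff above.
    if many_handler is None:
        many_handler = lambda values: values[0]
    try:
        providers = reverse_mapping[name]
    except KeyError:
        raise NotFound("Name %r is not mapped as a produced output" % name)
    # Each provider is modeled as a zero-argument callable that either
    # returns a value or raises NotFound itself.
    values = []
    for provider in providers:
        try:
            values.append(provider())
        except NotFound:
            pass
    if not values:
        raise NotFound("Unable to find result %r" % name)
    return many_handler(values)
```

A custom ``many_handler`` (for example ``sum`` or ``list``) can then merge results from several providers instead of silently taking the first one.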
""" + def many_handler(values): + if len(values) > 1: + return values + return values[0] with self._lock.read_lock(): results = {} - for name in self._reverse_mapping: + for name in six.iterkeys(self._reverse_mapping): try: - results[name] = self.fetch(name) + results[name] = self.fetch(name, many_handler=many_handler) except exceptions.NotFound: pass return results - def fetch_mapped_args(self, args_mapping, atom_name=None): - """Fetch arguments for an atom using an atoms arguments mapping.""" + def fetch_mapped_args(self, args_mapping, + atom_name=None, scope_walker=None): + """Fetch arguments for an atom using an atoms argument mapping.""" + + def _get_results(looking_for, provider): + """Gets the results saved for a given provider.""" + try: + return self._get(provider.name, only_last=True) + except exceptions.NotFound as e: + raise exceptions.NotFound( + "Expected to be able to find output %r produced" + " by %s but was unable to get at that providers" + " results" % (looking_for, provider), e) + + def _locate_providers(looking_for, possible_providers): + """Finds the accessible providers.""" + default_providers = [] + for p in possible_providers: + if p.name is _TRANSIENT_PROVIDER: + default_providers.append((p, self._transients)) + if p.name == self.injector_name: + default_providers.append((p, _get_results(looking_for, p))) + if default_providers: + return default_providers + if scope_walker is not None: + scope_iter = iter(scope_walker) + else: + scope_iter = iter([]) + for atom_names in scope_iter: + if not atom_names: + continue + providers = [] + for p in possible_providers: + if p.name in atom_names: + providers.append((p, _get_results(looking_for, p))) + if providers: + return providers + return [] + with self._lock.read_lock(): - injected_args = {} + if atom_name and atom_name not in self._atom_name_to_uuid: + raise exceptions.NotFound("Unknown atom name: %s" % atom_name) + if not args_mapping: + return {} + # The order of lookup is the following: + 
#
+            # 1. Injected atom specific arguments.
+            # 2. Transient injected arguments.
+            # 3. Non-transient injected arguments.
+            # 4. First scope visited group that produces the named result.
+            #    a). The first of that group that actually provided the named
+            #        result is selected (if group size is greater than one).
+            #
+            # Otherwise: blowup! (this will also happen if reading or
+            # extracting an expected result fails, since it is better to fail
+            # on lookup than provide invalid data from the wrong provider)
             if atom_name:
                 injected_args = self._injected_args.get(atom_name, {})
+            else:
+                injected_args = {}
             mapped_args = {}
-            for key, name in six.iteritems(args_mapping):
+            for (bound_name, name) in six.iteritems(args_mapping):
+                if LOG.isEnabledFor(logging.BLATHER):
+                    if atom_name:
+                        LOG.blather("Looking for %r <= %r for atom named: %s",
+                                    bound_name, name, atom_name)
+                    else:
+                        LOG.blather("Looking for %r <= %r", bound_name, name)
                 if name in injected_args:
-                    mapped_args[key] = injected_args[name]
+                    value = injected_args[name]
+                    mapped_args[bound_name] = value
+                    LOG.blather("Matched %r <= %r to %r (from injected"
+                                " values)", bound_name, name, value)
                 else:
-                    mapped_args[key] = self.fetch(name)
+                    try:
+                        possible_providers = self._reverse_mapping[name]
+                    except KeyError:
+                        raise exceptions.NotFound("Name %r is not mapped as a"
+                                                  " produced output by any"
+                                                  " providers" % name)
+                    # Reduce the possible providers to the ones that
+                    # are allowed.
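The lookup-order comment above can be condensed into a small, hypothetical resolver; the plain dicts below stand in for taskflow's injected/transient/persistent stores and the list-of-groups stands in for the scope walker, so this is a sketch of the precedence only, not the real implementation:

```python
def resolve_argument(name, atom_injected, transients, persistents,
                     scope_groups):
    """Resolve a bound argument using the documented precedence order."""
    # 1. Atom-specific injected arguments win outright.
    if name in atom_injected:
        return atom_injected[name]
    # 2. Then transient injected values.
    if name in transients:
        return transients[name]
    # 3. Then non-transient (persistent) injected values.
    if name in persistents:
        return persistents[name]
    # 4. Finally walk the scopes; the first visited group containing a
    #    provider wins, and the first member of that group that actually
    #    produced the name supplies the value.
    for group in scope_groups:
        for member_results in group:
            if name in member_results:
                return member_results[name]
    # Otherwise: blow up, rather than return data from a wrong provider.
    raise LookupError("No accessible provider produced %r" % name)
```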
+                    providers = _locate_providers(name, possible_providers)
+                    if not providers:
+                        raise exceptions.NotFound(
+                            "Mapped argument %r <= %r was not produced"
+                            " by any accessible provider (%s possible"
+                            " providers were scanned)"
+                            % (bound_name, name, len(possible_providers)))
+                    provider, value = _item_from_first_of(providers, name)
+                    mapped_args[bound_name] = value
+                    LOG.blather("Matched %r <= %r to %r (from %s)",
+                                bound_name, name, value, provider)
             return mapped_args

     def set_flow_state(self, state):
@@ -568,20 +742,35 @@ class Storage(object):
             state = states.PENDING
         return state

+    def _translate_into_history(self, ad):
+        failure = None
+        if ad.failure is not None:
+            # NOTE(harlowja): Try to use our local cache to get a more
+            # complete failure object that has a traceback (instead of the
+            # one that is saved which will *typically* not have one)...
+            cached = self._failures.get(ad.name)
+            if ad.failure.matches(cached):
+                failure = cached
+            else:
+                failure = ad.failure
+        return retry.History(ad.results, failure=failure)
+
     def get_retry_history(self, retry_name):
-        """Fetch retry results history."""
+        """Fetch a single retry's history."""
         with self._lock.read_lock():
             ad = self._atomdetail_by_name(retry_name,
                                           expected_type=logbook.RetryDetail)
-            if ad.failure is not None:
-                cached = self._failures.get(retry_name)
-                history = list(ad.results)
-                if ad.failure.matches(cached):
-                    history.append((cached, {}))
-                else:
-                    history.append((ad.failure, {}))
-                return history
-            return ad.results
+            return self._translate_into_history(ad)
+
+    def get_retry_histories(self):
+        """Fetch all retry histories."""
+        histories = []
+        with self._lock.read_lock():
+            for ad in self._flowdetail:
+                if isinstance(ad, logbook.RetryDetail):
+                    histories.append((ad.name,
+                                      self._translate_into_history(ad)))
+        return histories


 class MultiThreadedStorage(Storage):
diff --git a/taskflow/task.py b/taskflow/task.py
index cd470e72..8fa9ffb5 100644
--- a/taskflow/task.py
+++ b/taskflow/task.py
@@ -16,17 
+16,29 @@ # under the License. import abc -import collections -import contextlib -import logging +import copy +from oslo_utils import reflection import six from taskflow import atom -from taskflow.utils import reflection +from taskflow import logging +from taskflow.types import notifier +from taskflow.utils import misc LOG = logging.getLogger(__name__) +# Constants passed into revert kwargs. +# +# Contain the execute() result (if any). +REVERT_RESULT = 'result' +# +# The cause of the flow failure/s +REVERT_FLOW_FAILURES = 'flow_failures' + +# Common events +EVENT_UPDATE_PROGRESS = 'update_progress' + @six.add_metaclass(abc.ABCMeta) class BaseTask(atom.Atom): @@ -38,21 +50,35 @@ class BaseTask(atom.Atom): same piece of work. """ - TASK_EVENTS = ('update_progress', ) + # Known internal events this task can have callbacks bound to (others that + # are not in this set/tuple will not be able to be bound); this should be + # updated and/or extended in subclasses as needed to enable or disable new + # or existing internal events... + TASK_EVENTS = (EVENT_UPDATE_PROGRESS,) def __init__(self, name, provides=None, inject=None): if name is None: name = reflection.get_class_name(self) super(BaseTask, self).__init__(name, provides, inject=inject) - # Map of events => lists of callbacks to invoke on task events. - self._events_listeners = collections.defaultdict(list) + self._notifier = notifier.RestrictedNotifier(self.TASK_EVENTS) + + @property + def notifier(self): + """Internal notification dispatcher/registry. + + A notification object that will dispatch events that occur related + to *internal* notifications that the task internally emits to + listeners (for example for progress status updates, telling others + that a task has reached 50% completion...). + """ + return self._notifier def pre_execute(self): """Code to be run prior to executing the task. 
A common pattern for initializing the state of the system prior to running tasks is to define some code in a base class that all your - tasks inherit from. In that class, you can define a pre_execute + tasks inherit from. In that class, you can define a ``pre_execute`` method and it will always be invoked just prior to your tasks running. """ @@ -72,6 +98,9 @@ class BaseTask(atom.Atom): happens in a different python process or on a remote machine) and so that the result can be transmitted to other tasks (which may be local or remote). + + :param args: positional arguments that task requires to execute. + :param kwargs: any keyword arguments that task requires to execute. """ def post_execute(self): @@ -79,7 +108,7 @@ class BaseTask(atom.Atom): A common pattern for cleaning up global state of the system after the execution of tasks is to define some code in a base class that all your - tasks inherit from. In that class, you can define a post_execute + tasks inherit from. In that class, you can define a ``post_execute`` method and it will always be invoked just after your tasks execute, regardless of whether they succeded or not. @@ -90,7 +119,7 @@ class BaseTask(atom.Atom): def pre_revert(self): """Code to be run prior to reverting the task. - This works the same as pre_execute, but for the revert phase. + This works the same as :meth:`.pre_execute`, but for the revert phase. """ def revert(self, *args, **kwargs): @@ -98,125 +127,69 @@ class BaseTask(atom.Atom): This method should undo any side-effects caused by previous execution of the task using the result of the :py:meth:`execute` method and - information on failure which triggered reversion of the flow. + information on the failure which triggered reversion of the flow the + task is contained in (if applicable). - NOTE(harlowja): The ``**kwargs`` which are passed into the - :py:meth:`execute` method will also be passed into this method. 
The - ``**kwargs`` key ``'result'`` will contain the :py:meth:`execute` - result (if any) and the ``**kwargs`` key ``'flow_failures'`` will - contain the failure information. + :param args: positional arguments that the task required to execute. + :param kwargs: any keyword arguments that the task required to + execute; the special key ``'result'`` will contain + the :py:meth:`execute` result (if any) and + the ``**kwargs`` key ``'flow_failures'`` will contain + any failure information. """ def post_revert(self): """Code to be run after reverting the task. - This works the same as post_execute, but for the revert phase. + This works the same as :meth:`.post_execute`, but for the revert phase. """ - def update_progress(self, progress, **kwargs): + def copy(self, retain_listeners=True): + """Clone/copy this task. + + :param retain_listeners: retain the attached notification listeners + when cloning, when false the listeners will + be emptied, when true the listeners will be + copied and retained + + :return: the copied task + """ + c = copy.copy(self) + c._notifier = self._notifier.copy() + if not retain_listeners: + c._notifier.reset() + return c + + def update_progress(self, progress): """Update task progress and notify all registered listeners. 
- :param progress: task progress float value between 0 and 1 - :param kwargs: task specific progress information + :param progress: task progress float value between 0.0 and 1.0 """ - if progress > 1.0: - LOG.warn("Progress must be <= 1.0, clamping to upper bound") - progress = 1.0 - if progress < 0.0: - LOG.warn("Progress must be >= 0.0, clamping to lower bound") - progress = 0.0 - self._trigger('update_progress', progress, **kwargs) - - def _trigger(self, event, *args, **kwargs): - """Execute all handlers for the given event type.""" - for (handler, event_data) in self._events_listeners.get(event, []): - try: - handler(self, event_data, *args, **kwargs) - except Exception: - LOG.warn("Failed calling `%s` on event '%s'", - reflection.get_callable_name(handler), event, - exc_info=True) - - @contextlib.contextmanager - def autobind(self, event_name, handler_func, **kwargs): - """Binds & unbinds a given event handler to the task. - - This function binds and unbinds using the context manager protocol. - When events are triggered on the task of the given event name this - handler will automatically be called with the provided keyword - arguments. - """ - bound = False - if handler_func is not None: - try: - self.bind(event_name, handler_func, **kwargs) - bound = True - except ValueError: - LOG.warn("Failed binding functor `%s` as a receiver of" - " event '%s' notifications emitted from task %s", - handler_func, event_name, self, exc_info=True) - try: - yield self - finally: - if bound: - self.unbind(event_name, handler_func) - - def bind(self, event, handler, **kwargs): - """Attach a handler to an event for the task. 
- - :param event: event type - :param handler: callback to execute each time event is triggered - :param kwargs: optional named parameters that will be passed to the - event handler - :raises ValueError: if invalid event type passed - """ - if event not in self.TASK_EVENTS: - raise ValueError("Unknown task event '%s', can only bind" - " to events %s" % (event, self.TASK_EVENTS)) - assert six.callable(handler), "Handler must be callable" - self._events_listeners[event].append((handler, kwargs)) - - def unbind(self, event, handler=None): - """Remove a previously-attached event handler from the task. - - If a handler function not passed, then this will unbind all event - handlers for the provided event. If multiple of the same handlers are - bound, then the first match is removed (and only the first match). - - :param event: event type - :param handler: handler previously bound - - :rtype: boolean - :return: whether anything was removed - """ - removed_any = False - if not handler: - removed_any = self._events_listeners.pop(event, removed_any) - else: - event_listeners = self._events_listeners.get(event, []) - for i, (handler2, _event_data) in enumerate(event_listeners): - if reflection.is_same_callback(handler, handler2): - event_listeners.pop(i) - removed_any = True - break - return bool(removed_any) + def on_clamped(): + LOG.warn("Progress value must be greater or equal to 0.0 or less" + " than or equal to 1.0 instead of being '%s'", progress) + cleaned_progress = misc.clamp(progress, 0.0, 1.0, + on_clamped=on_clamped) + self._notifier.notify(EVENT_UPDATE_PROGRESS, + {'progress': cleaned_progress}) class Task(BaseTask): - """Base class for user-defined tasks. + """Base class for user-defined tasks (derive from it at will!). 
- Adds following features to Task: - - auto-generates name from type of self - - adds all execute argument names to task requirements - - items provided by the task may be specified via - 'default_provides' class attribute or property + Adds the following features on top of the :py:class:`.BaseTask`: + + - Auto-generates a name from the class name if a name is not + explicitly provided. + - Automatically adds all :py:meth:`.BaseTask.execute` argument names to + the task requirements (items provided by the task may be also specified + via ``default_provides`` class attribute or instance property). """ default_provides = None def __init__(self, name=None, provides=None, requires=None, auto_extract=True, rebind=None, inject=None): - """Initialize task instance.""" if provides is None: provides = self.default_provides super(Task, self).__init__(name, provides=provides, inject=inject) @@ -226,17 +199,23 @@ class Task(BaseTask): class FunctorTask(BaseTask): """Adaptor to make a task from a callable. - Take any callable and make a task from it. + Take any callable pair and make a task from it. + + NOTE(harlowja): If a name is not provided the function/method name of + the ``execute`` callable will be used as the name instead (the name of + the ``revert`` callable is not used). 
""" def __init__(self, execute, name=None, provides=None, requires=None, auto_extract=True, rebind=None, revert=None, version=None, inject=None): - assert six.callable(execute), ("Function to use for executing must be" - " callable") - if revert: - assert six.callable(revert), ("Function to use for reverting must" - " be callable") + if not six.callable(execute): + raise ValueError("Function to use for executing must be" + " callable") + if revert is not None: + if not six.callable(revert): + raise ValueError("Function to use for reverting must" + " be callable") if name is None: name = reflection.get_callable_name(execute) super(FunctorTask, self).__init__(name, provides=provides, diff --git a/taskflow/test.py b/taskflow/test.py index 4de61d3e..bf252229 100644 --- a/taskflow/test.py +++ b/taskflow/test.py @@ -14,9 +14,29 @@ # License for the specific language governing permissions and limitations # under the License. +from __future__ import absolute_import + +import collections +import logging + import fixtures -import mock +from oslotest import base +from oslotest import mockpatch import six + +# This is weird like this since we want to import a mock that works the best +# and we need to try this import order, since oslotest registers a six.moves +# module (but depending on the import order of importing oslotest we may or +# may not see that change when trying to use it from six). +try: + from six.moves import mock +except ImportError: + try: + # In python 3.3+ mock got included in the standard library... 
+ from unittest import mock + except ImportError: + import mock + from testtools import compat from testtools import matchers from testtools import testcase @@ -85,7 +105,7 @@ class ItemsEqual(object): return None -class TestCase(testcase.TestCase): +class TestCase(base.BaseTestCase): """Test case base class for all taskflow unit tests.""" def makeTmpDir(self): @@ -188,19 +208,17 @@ class MockTestCase(TestCase): super(MockTestCase, self).setUp() self.master_mock = mock.Mock(name='master_mock') - def _patch(self, target, autospec=True, **kwargs): + def patch(self, target, autospec=True, **kwargs): """Patch target and attach it to the master mock.""" - patcher = mock.patch(target, autospec=autospec, **kwargs) - mocked = patcher.start() - self.addCleanup(patcher.stop) - + f = self.useFixture(mockpatch.Patch(target, + autospec=autospec, **kwargs)) + mocked = f.mock attach_as = kwargs.pop('attach_as', None) if attach_as is not None: self.master_mock.attach_mock(mocked, attach_as) - return mocked - def _patch_class(self, module, name, autospec=True, attach_as=None): + def patchClass(self, module, name, autospec=True, attach_as=None): """Patches a modules class. 
This will create a class instance mock (using the provided name to @@ -212,9 +230,9 @@ class MockTestCase(TestCase): else: instance_mock = mock.Mock() - patcher = mock.patch.object(module, name, autospec=autospec) - class_mock = patcher.start() - self.addCleanup(patcher.stop) + f = self.useFixture(mockpatch.PatchObject(module, name, + autospec=autospec)) + class_mock = f.mock class_mock.return_value = instance_mock if attach_as is None: @@ -226,8 +244,73 @@ class MockTestCase(TestCase): self.master_mock.attach_mock(class_mock, attach_class_as) self.master_mock.attach_mock(instance_mock, attach_instance_as) - return class_mock, instance_mock - def _reset_master_mock(self): + def resetMasterMock(self): self.master_mock.reset_mock() + + +class CapturingLoggingHandler(logging.Handler): + """A handler that saves record contents for post-test analysis.""" + + def __init__(self, level=logging.DEBUG): + # It seems needed to use the old style of base class calling, we + # can remove this old style when we only support py3.x + logging.Handler.__init__(self, level=level) + self._records = [] + + @property + def counts(self): + """Returns a dictionary with the number of records at each level.""" + self.acquire() + try: + captured = collections.defaultdict(int) + for r in self._records: + captured[r.levelno] += 1 + return captured + finally: + self.release() + + @property + def messages(self): + """Returns a dictionary with list of record messages at each level.""" + self.acquire() + try: + captured = collections.defaultdict(list) + for r in self._records: + captured[r.levelno].append(r.getMessage()) + return captured + finally: + self.release() + + @property + def exc_infos(self): + """Returns a list of all the record exc_info tuples captured.""" + self.acquire() + try: + captured = [] + for r in self._records: + if r.exc_info: + captured.append(r.exc_info) + return captured + finally: + self.release() + + def emit(self, record): + self.acquire() + try: + 
self._records.append(record) + finally: + self.release() + + def reset(self): + """Resets *all* internally captured state.""" + self.acquire() + try: + self._records = [] + finally: + self.release() + + def close(self): + logging.Handler.close(self) + self.reset() diff --git a/taskflow/tests/test_examples.py b/taskflow/tests/test_examples.py index 2631cd47..a7a297c3 100644 --- a/taskflow/tests/test_examples.py +++ b/taskflow/tests/test_examples.py @@ -28,22 +28,37 @@ examples is indeterministic (due to hash randomization for example). """ +import keyword import os import re import subprocess import sys -import taskflow.test +import six + +from taskflow import test ROOT_DIR = os.path.abspath( os.path.dirname( os.path.dirname( os.path.dirname(__file__)))) +# This is used so that any uuid like data being output is removed (since it +# will change per test run and will invalidate the deterministic output that +# we expect to be able to check). UUID_RE = re.compile('XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX' .replace('X', '[0-9a-f]')) +def safe_filename(filename): + # Translates a filename into a method name, returns falsey if not + # possible to perform this translation... 
+ name = re.sub("[^a-zA-Z0-9_]+", "_", filename) + if not name or re.match(r"^[_]+$", name) or keyword.iskeyword(name): + return False + return name + + def root_path(*args): return os.path.join(ROOT_DIR, *args) @@ -71,7 +86,7 @@ def expected_output_path(name): return root_path('taskflow', 'examples', '%s.out.txt' % name) -def list_examples(): +def iter_examples(): examples_dir = root_path('taskflow', 'examples') for filename in os.listdir(examples_dir): path = os.path.join(examples_dir, filename) @@ -80,38 +95,34 @@ def list_examples(): name, ext = os.path.splitext(filename) if ext != ".py": continue - bad_endings = [] - for i in ("utils", "no_test"): - if name.endswith(i): - bad_endings.append(True) - if not any(bad_endings): - yield name + if not any(name.endswith(i) for i in ("utils", "no_test")): + safe_name = safe_filename(name) + if safe_name: + yield name, safe_name -class ExamplesTestCase(taskflow.test.TestCase): - @classmethod - def update(cls): - """For each example, adds on a test method. +class ExampleAdderMeta(type): + """Translates examples into test cases/methods.""" - This newly created test method will then be activated by the testing - framework when it scans for and runs tests. This makes for a elegant - and simple way to ensure that all of the provided examples - actually work. 
- """ - def add_test_method(name, method_name): + def __new__(cls, name, parents, dct): + + def generate_test(example_name): def test_example(self): - self._check_example(name) - test_example.__name__ = method_name - setattr(cls, method_name, test_example) + self._check_example(example_name) + return test_example - for name in list_examples(): - safe_name = str(re.sub("[^a-zA-Z0-9_]+", "_", name)) - if re.match(r"^[_]+$", safe_name): - continue - add_test_method(name, 'test_%s' % safe_name) + for example_name, safe_name in iter_examples(): + test_name = 'test_%s' % safe_name + dct[test_name] = generate_test(example_name) + + return type.__new__(cls, name, parents, dct) + + +@six.add_metaclass(ExampleAdderMeta) +class ExamplesTestCase(test.TestCase): + """Runs the examples, and checks the outputs against expected outputs.""" def _check_example(self, name): - """Runs the example, and checks the output against expected output.""" output = run_example(name) eop = expected_output_path(name) if os.path.isfile(eop): @@ -123,14 +134,14 @@ class ExamplesTestCase(taskflow.test.TestCase): expected_output = UUID_RE.sub('', expected_output) self.assertEqual(output, expected_output) -ExamplesTestCase.update() - def make_output_files(): """Generate output files for all examples.""" - for name in list_examples(): - output = run_example(name) - with open(expected_output_path(name), 'w') as f: + for example_name, _safe_name in iter_examples(): + print("Running %s" % example_name) + print("Please wait...") + output = run_example(example_name) + with open(expected_output_path(example_name), 'w') as f: f.write(output) diff --git a/taskflow/tests/unit/action_engine/test_compile.py b/taskflow/tests/unit/action_engine/test_compile.py index 7207468e..a290c50b 100644 --- a/taskflow/tests/unit/action_engine/test_compile.py +++ b/taskflow/tests/unit/action_engine/test_compile.py @@ -27,21 +27,25 @@ from taskflow.tests import utils as test_utils class PatternCompileTest(test.TestCase): def 
test_task(self): task = test_utils.DummyTask(name='a') - compilation = compiler.PatternCompiler().compile(task) + compilation = compiler.PatternCompiler(task).compile() g = compilation.execution_graph self.assertEqual(list(g.nodes()), [task]) self.assertEqual(list(g.edges()), []) def test_retry(self): r = retry.AlwaysRevert('r1') - msg_regex = "^Retry controller: .* must only be used .*" + msg_regex = "^Retry controller .* must only be used .*" self.assertRaisesRegexp(TypeError, msg_regex, - compiler.PatternCompiler().compile, r) + compiler.PatternCompiler(r).compile) def test_wrong_object(self): - msg_regex = '^Unknown type requested to flatten' + msg_regex = '^Unknown item .* requested to flatten' self.assertRaisesRegexp(TypeError, msg_regex, - compiler.PatternCompiler().compile, 42) + compiler.PatternCompiler(42).compile) + + def test_empty(self): + flo = lf.Flow("test") + self.assertRaises(exc.Empty, compiler.PatternCompiler(flo).compile) def test_linear(self): a, b, c, d = test_utils.make_many(4) @@ -51,7 +55,7 @@ class PatternCompileTest(test.TestCase): sflo.add(d) flo.add(sflo) - compilation = compiler.PatternCompiler().compile(flo) + compilation = compiler.PatternCompiler(flo).compile() g = compilation.execution_graph self.assertEqual(4, len(g)) @@ -69,13 +73,13 @@ class PatternCompileTest(test.TestCase): flo.add(a, b, c) flo.add(flo) self.assertRaises(ValueError, - compiler.PatternCompiler().compile, flo) + compiler.PatternCompiler(flo).compile) def test_unordered(self): a, b, c, d = test_utils.make_many(4) flo = uf.Flow("test") flo.add(a, b, c, d) - compilation = compiler.PatternCompiler().compile(flo) + compilation = compiler.PatternCompiler(flo).compile() g = compilation.execution_graph self.assertEqual(4, len(g)) self.assertEqual(0, g.number_of_edges()) @@ -92,7 +96,7 @@ class PatternCompileTest(test.TestCase): flo2.add(c, d) flo.add(flo2) - compilation = compiler.PatternCompiler().compile(flo) + compilation = compiler.PatternCompiler(flo).compile() g 
= compilation.execution_graph self.assertEqual(4, len(g)) @@ -116,7 +120,7 @@ class PatternCompileTest(test.TestCase): flo2.add(c, d) flo.add(flo2) - compilation = compiler.PatternCompiler().compile(flo) + compilation = compiler.PatternCompiler(flo).compile() g = compilation.execution_graph self.assertEqual(4, len(g)) for n in [a, b]: @@ -138,7 +142,7 @@ class PatternCompileTest(test.TestCase): uf.Flow('ut').add(b, c), d) - compilation = compiler.PatternCompiler().compile(flo) + compilation = compiler.PatternCompiler(flo).compile() g = compilation.execution_graph self.assertEqual(4, len(g)) self.assertItemsEqual(g.edges(), [ @@ -153,7 +157,7 @@ class PatternCompileTest(test.TestCase): flo = gf.Flow("test") flo.add(a, b, c, d) - compilation = compiler.PatternCompiler().compile(flo) + compilation = compiler.PatternCompiler(flo).compile() g = compilation.execution_graph self.assertEqual(4, len(g)) self.assertEqual(0, g.number_of_edges()) @@ -167,7 +171,7 @@ class PatternCompileTest(test.TestCase): flo2.add(e, f, g) flo.add(flo2) - compilation = compiler.PatternCompiler().compile(flo) + compilation = compiler.PatternCompiler(flo).compile() graph = compilation.execution_graph self.assertEqual(7, len(graph)) self.assertItemsEqual(graph.edges(data=True), [ @@ -184,7 +188,7 @@ class PatternCompileTest(test.TestCase): flo2.add(e, f, g) flo.add(flo2) - compilation = compiler.PatternCompiler().compile(flo) + compilation = compiler.PatternCompiler(flo).compile() g = compilation.execution_graph self.assertEqual(7, len(g)) self.assertEqual(0, g.number_of_edges()) @@ -197,7 +201,7 @@ class PatternCompileTest(test.TestCase): flo.link(b, c) flo.link(c, d) - compilation = compiler.PatternCompiler().compile(flo) + compilation = compiler.PatternCompiler(flo).compile() g = compilation.execution_graph self.assertEqual(4, len(g)) self.assertItemsEqual(g.edges(data=True), [ @@ -213,7 +217,7 @@ class PatternCompileTest(test.TestCase): b = test_utils.ProvidesRequiresTask('b', provides=[], 
                     requires=['x'])
         flo = gf.Flow("test").add(a, b)
-        compilation = compiler.PatternCompiler().compile(flo)
+        compilation = compiler.PatternCompiler(flo).compile()
         g = compilation.execution_graph
         self.assertEqual(2, len(g))
         self.assertItemsEqual(g.edges(data=True), [
@@ -231,7 +235,7 @@ class PatternCompileTest(test.TestCase):
             lf.Flow("test2").add(b, c)
         )
-        compilation = compiler.PatternCompiler().compile(flo)
+        compilation = compiler.PatternCompiler(flo).compile()
         g = compilation.execution_graph
         self.assertEqual(3, len(g))
         self.assertItemsEqual(g.edges(data=True), [
@@ -250,7 +254,7 @@ class PatternCompileTest(test.TestCase):
             lf.Flow("test2").add(b, c)
         )
-        compilation = compiler.PatternCompiler().compile(flo)
+        compilation = compiler.PatternCompiler(flo).compile()
         g = compilation.execution_graph
         self.assertEqual(3, len(g))
         self.assertItemsEqual(g.edges(data=True), [
@@ -260,6 +264,107 @@ class PatternCompileTest(test.TestCase):
         self.assertItemsEqual([b], g.no_predecessors_iter())
         self.assertItemsEqual([a, c], g.no_successors_iter())

+    def test_empty_flow_in_linear_flow(self):
+        flow = lf.Flow('lf')
+        a = test_utils.ProvidesRequiresTask('a', provides=[], requires=[])
+        b = test_utils.ProvidesRequiresTask('b', provides=[], requires=[])
+        empty_flow = gf.Flow("empty")
+        flow.add(a, empty_flow, b)
+
+        compilation = compiler.PatternCompiler(flow).compile()
+        g = compilation.execution_graph
+        self.assertItemsEqual(g.edges(data=True), [
+            (a, b, {'invariant': True}),
+        ])
+
+    def test_many_empty_in_graph_flow(self):
+        flow = gf.Flow('root')
+
+        a = test_utils.ProvidesRequiresTask('a', provides=[], requires=[])
+        flow.add(a)
+
+        b = lf.Flow('b')
+        b_0 = test_utils.ProvidesRequiresTask('b.0', provides=[], requires=[])
+        b_3 = test_utils.ProvidesRequiresTask('b.3', provides=[], requires=[])
+        b.add(
+            b_0,
+            lf.Flow('b.1'), lf.Flow('b.2'),
+            b_3,
+        )
+        flow.add(b)
+
+        c = lf.Flow('c')
+        c.add(lf.Flow('c.0'), lf.Flow('c.1'), lf.Flow('c.2'))
+        flow.add(c)
+
+        d = test_utils.ProvidesRequiresTask('d', provides=[], requires=[])
+        flow.add(d)
+
+        flow.link(b, d)
+        flow.link(a, d)
+        flow.link(c, d)
+
+        compilation = compiler.PatternCompiler(flow).compile()
+        g = compilation.execution_graph
+        self.assertTrue(g.has_edge(b_0, b_3))
+        self.assertTrue(g.has_edge(b_3, d))
+        self.assertEqual(4, len(g))
+
+    def test_empty_flow_in_nested_flow(self):
+        flow = lf.Flow('lf')
+        a = test_utils.ProvidesRequiresTask('a', provides=[], requires=[])
+        b = test_utils.ProvidesRequiresTask('b', provides=[], requires=[])
+
+        flow2 = lf.Flow("lf-2")
+        c = test_utils.ProvidesRequiresTask('c', provides=[], requires=[])
+        d = test_utils.ProvidesRequiresTask('d', provides=[], requires=[])
+        empty_flow = gf.Flow("empty")
+        flow2.add(c, empty_flow, d)
+        flow.add(a, flow2, b)
+
+        compilation = compiler.PatternCompiler(flow).compile()
+        g = compilation.execution_graph
+
+        self.assertTrue(g.has_edge(a, c))
+        self.assertTrue(g.has_edge(c, d))
+        self.assertTrue(g.has_edge(d, b))
+
+    def test_empty_flow_in_graph_flow(self):
+        flow = lf.Flow('lf')
+        a = test_utils.ProvidesRequiresTask('a', provides=['a'], requires=[])
+        b = test_utils.ProvidesRequiresTask('b', provides=[], requires=['a'])
+        empty_flow = lf.Flow("empty")
+        flow.add(a, empty_flow, b)
+
+        compilation = compiler.PatternCompiler(flow).compile()
+        g = compilation.execution_graph
+        self.assertTrue(g.has_edge(a, b))
+
+    def test_empty_flow_in_graph_flow_empty_linkage(self):
+        flow = gf.Flow('lf')
+        a = test_utils.ProvidesRequiresTask('a', provides=[], requires=[])
+        b = test_utils.ProvidesRequiresTask('b', provides=[], requires=[])
+        empty_flow = lf.Flow("empty")
+        flow.add(a, empty_flow, b)
+        flow.link(empty_flow, b)
+
+        compilation = compiler.PatternCompiler(flow).compile()
+        g = compilation.execution_graph
+        self.assertEqual(0, len(g.edges()))
+
+    def test_empty_flow_in_graph_flow_linkage(self):
+        flow = gf.Flow('lf')
+        a = test_utils.ProvidesRequiresTask('a', provides=[], requires=[])
+        b = test_utils.ProvidesRequiresTask('b', provides=[], requires=[])
+        empty_flow = lf.Flow("empty")
+        flow.add(a, empty_flow, b)
+        flow.link(a, b)
+
+        compilation = compiler.PatternCompiler(flow).compile()
+        g = compilation.execution_graph
+        self.assertEqual(1, len(g.edges()))
+        self.assertTrue(g.has_edge(a, b))
+
     def test_checks_for_dups(self):
         flo = gf.Flow("test").add(
             test_utils.DummyTask(name="a"),
@@ -267,7 +372,7 @@ class PatternCompileTest(test.TestCase):
         )
         self.assertRaisesRegexp(exc.Duplicate,
                                 '^Atoms with duplicate names',
-                                compiler.PatternCompiler().compile, flo)
+                                compiler.PatternCompiler(flo).compile)

     def test_checks_for_dups_globally(self):
         flo = gf.Flow("test").add(
@@ -275,25 +380,25 @@ class PatternCompileTest(test.TestCase):
             gf.Flow("int2").add(test_utils.DummyTask(name="a")))
         self.assertRaisesRegexp(exc.Duplicate,
                                 '^Atoms with duplicate names',
-                                compiler.PatternCompiler().compile, flo)
+                                compiler.PatternCompiler(flo).compile)

     def test_retry_in_linear_flow(self):
         flo = lf.Flow("test", retry.AlwaysRevert("c"))
-        compilation = compiler.PatternCompiler().compile(flo)
+        compilation = compiler.PatternCompiler(flo).compile()
         g = compilation.execution_graph
         self.assertEqual(1, len(g))
         self.assertEqual(0, g.number_of_edges())

     def test_retry_in_unordered_flow(self):
         flo = uf.Flow("test", retry.AlwaysRevert("c"))
-        compilation = compiler.PatternCompiler().compile(flo)
+        compilation = compiler.PatternCompiler(flo).compile()
         g = compilation.execution_graph
         self.assertEqual(1, len(g))
         self.assertEqual(0, g.number_of_edges())

     def test_retry_in_graph_flow(self):
         flo = gf.Flow("test", retry.AlwaysRevert("c"))
-        compilation = compiler.PatternCompiler().compile(flo)
+        compilation = compiler.PatternCompiler(flo).compile()
         g = compilation.execution_graph
         self.assertEqual(1, len(g))
         self.assertEqual(0, g.number_of_edges())
@@ -302,7 +407,7 @@ class PatternCompileTest(test.TestCase):
         c1 = retry.AlwaysRevert("c1")
         c2 = retry.AlwaysRevert("c2")
         flo = lf.Flow("test", c1).add(lf.Flow("test2", c2))
-        compilation = compiler.PatternCompiler().compile(flo)
+        compilation = compiler.PatternCompiler(flo).compile()
         g = compilation.execution_graph
         self.assertEqual(2, len(g))
@@ -317,7 +422,7 @@ class PatternCompileTest(test.TestCase):
         c = retry.AlwaysRevert("c")
         a, b = test_utils.make_many(2)
         flo = lf.Flow("test", c).add(a, b)
-        compilation = compiler.PatternCompiler().compile(flo)
+        compilation = compiler.PatternCompiler(flo).compile()
         g = compilation.execution_graph
         self.assertEqual(3, len(g))
@@ -335,7 +440,7 @@ class PatternCompileTest(test.TestCase):
         c = retry.AlwaysRevert("c")
         a, b = test_utils.make_many(2)
         flo = uf.Flow("test", c).add(a, b)
-        compilation = compiler.PatternCompiler().compile(flo)
+        compilation = compiler.PatternCompiler(flo).compile()
         g = compilation.execution_graph
         self.assertEqual(3, len(g))
@@ -353,7 +458,7 @@ class PatternCompileTest(test.TestCase):
         r = retry.AlwaysRevert("cp")
         a, b, c = test_utils.make_many(3)
         flo = gf.Flow("test", r).add(a, b, c).link(b, c)
-        compilation = compiler.PatternCompiler().compile(flo)
+        compilation = compiler.PatternCompiler(flo).compile()
         g = compilation.execution_graph
         self.assertEqual(4, len(g))
@@ -377,7 +482,7 @@ class PatternCompileTest(test.TestCase):
             a,
             lf.Flow("test", c2).add(b, c),
             d)
-        compilation = compiler.PatternCompiler().compile(flo)
+        compilation = compiler.PatternCompiler(flo).compile()
         g = compilation.execution_graph
         self.assertEqual(6, len(g))
@@ -402,7 +507,7 @@ class PatternCompileTest(test.TestCase):
             a,
             lf.Flow("test").add(b, c),
             d)
-        compilation = compiler.PatternCompiler().compile(flo)
+        compilation = compiler.PatternCompiler(flo).compile()
         g = compilation.execution_graph
         self.assertEqual(5, len(g))
diff --git a/taskflow/tests/unit/action_engine/test_creation.py b/taskflow/tests/unit/action_engine/test_creation.py
new file mode 100644
index 00000000..2c6ed585
--- /dev/null
+++ b/taskflow/tests/unit/action_engine/test_creation.py
@@ -0,0 +1,80 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import testtools
+
+from taskflow.engines.action_engine import engine
+from taskflow.engines.action_engine import executor
+from taskflow.patterns import linear_flow as lf
+from taskflow.persistence import backends
+from taskflow import test
+from taskflow.tests import utils
+from taskflow.types import futures as futures
+from taskflow.utils import eventlet_utils as eu
+from taskflow.utils import persistence_utils as pu
+
+
+class ParallelCreationTest(test.TestCase):
+    @staticmethod
+    def _create_engine(**kwargs):
+        flow = lf.Flow('test-flow').add(utils.DummyTask())
+        backend = backends.fetch({'connection': 'memory'})
+        flow_detail = pu.create_flow_detail(flow, backend=backend)
+        options = kwargs.copy()
+        return engine.ParallelActionEngine(flow, flow_detail,
+                                           backend, options)
+
+    def test_thread_string_creation(self):
+        for s in ['threads', 'threaded', 'thread']:
+            eng = self._create_engine(executor=s)
+            self.assertIsInstance(eng._task_executor,
+                                  executor.ParallelThreadTaskExecutor)
+
+    def test_process_string_creation(self):
+        for s in ['process', 'processes']:
+            eng = self._create_engine(executor=s)
+            self.assertIsInstance(eng._task_executor,
+                                  executor.ParallelProcessTaskExecutor)
+
+    def test_thread_executor_creation(self):
+        with futures.ThreadPoolExecutor(1) as e:
+            eng = self._create_engine(executor=e)
+            self.assertIsInstance(eng._task_executor,
+                                  executor.ParallelThreadTaskExecutor)
+
+    def test_process_executor_creation(self):
+        with futures.ProcessPoolExecutor(1) as e:
+            eng = self._create_engine(executor=e)
+            self.assertIsInstance(eng._task_executor,
+                                  executor.ParallelProcessTaskExecutor)
+
+    @testtools.skipIf(not eu.EVENTLET_AVAILABLE, 'eventlet is not available')
+    def test_green_executor_creation(self):
+        with futures.GreenThreadPoolExecutor(1) as e:
+            eng = self._create_engine(executor=e)
+            self.assertIsInstance(eng._task_executor,
+                                  executor.ParallelThreadTaskExecutor)
+
+    def test_sync_executor_creation(self):
+        with futures.SynchronousExecutor() as e:
+            eng = self._create_engine(executor=e)
+            self.assertIsInstance(eng._task_executor,
+                                  executor.ParallelThreadTaskExecutor)
+
+    def test_invalid_creation(self):
+        self.assertRaises(ValueError, self._create_engine, executor='crap')
+        self.assertRaises(TypeError, self._create_engine, executor=2)
+        self.assertRaises(TypeError, self._create_engine, executor=object())
diff --git a/taskflow/tests/unit/action_engine/test_runner.py b/taskflow/tests/unit/action_engine/test_runner.py
index 2e18f6b6..9b3bdb47 100644
--- a/taskflow/tests/unit/action_engine/test_runner.py
+++ b/taskflow/tests/unit/action_engine/test_runner.py
@@ -27,21 +27,21 @@ from taskflow import storage
 from taskflow import test
 from taskflow.tests import utils as test_utils
 from taskflow.types import fsm
-from taskflow.utils import misc
+from taskflow.types import notifier
 from taskflow.utils import persistence_utils as pu


 class _RunnerTestMixin(object):
     def _make_runtime(self, flow, initial_state=None):
-        compilation = compiler.PatternCompiler().compile(flow)
+        compilation = compiler.PatternCompiler(flow).compile()
         flow_detail = pu.create_flow_detail(flow)
         store = storage.SingleThreadedStorage(flow_detail)
         # This ensures the tasks exist in storage...
         for task in compilation.execution_graph:
-            store.ensure_task(task.name)
+            store.ensure_atom(task)
         if initial_state:
             store.set_flow_state(initial_state)
-        task_notifier = misc.Notifier()
+        task_notifier = notifier.Notifier()
         task_executor = executor.SerialTaskExecutor()
         task_executor.start()
         self.addCleanup(task_executor.stop)
diff --git a/taskflow/tests/unit/action_engine/test_scoping.py b/taskflow/tests/unit/action_engine/test_scoping.py
new file mode 100644
index 00000000..b4429264
--- /dev/null
+++ b/taskflow/tests/unit/action_engine/test_scoping.py
@@ -0,0 +1,297 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+from taskflow.engines.action_engine import compiler
+from taskflow.engines.action_engine import scopes as sc
+from taskflow.patterns import graph_flow as gf
+from taskflow.patterns import linear_flow as lf
+from taskflow.patterns import unordered_flow as uf
+from taskflow import test
+from taskflow.tests import utils as test_utils
+
+
+def _get_scopes(compilation, atom, names_only=True):
+    walker = sc.ScopeWalker(compilation, atom, names_only=names_only)
+    return list(iter(walker))
+
+
+class LinearScopingTest(test.TestCase):
+    def test_unknown(self):
+        r = lf.Flow("root")
+        r_1 = test_utils.TaskOneReturn("root.1")
+        r.add(r_1)
+
+        r_2 = test_utils.TaskOneReturn("root.2")
+        c = compiler.PatternCompiler(r).compile()
+        self.assertRaises(ValueError, _get_scopes, c, r_2)
+
+    def test_empty(self):
+        r = lf.Flow("root")
+        r_1 = test_utils.TaskOneReturn("root.1")
+        r.add(r_1)
+
+        c = compiler.PatternCompiler(r).compile()
+        self.assertIn(r_1, c.execution_graph)
+        self.assertIsNotNone(c.hierarchy.find(r_1))
+
+        walker = sc.ScopeWalker(c, r_1)
+        scopes = list(walker)
+        self.assertEqual([], scopes)
+
+    def test_single_prior_linear(self):
+        r = lf.Flow("root")
+        r_1 = test_utils.TaskOneReturn("root.1")
+        r_2 = test_utils.TaskOneReturn("root.2")
+        r.add(r_1, r_2)
+
+        c = compiler.PatternCompiler(r).compile()
+        for a in r:
+            self.assertIn(a, c.execution_graph)
+            self.assertIsNotNone(c.hierarchy.find(a))
+
+        self.assertEqual([], _get_scopes(c, r_1))
+        self.assertEqual([['root.1']], _get_scopes(c, r_2))
+
+    def test_nested_prior_linear(self):
+        r = lf.Flow("root")
+        r.add(test_utils.TaskOneReturn("root.1"),
+              test_utils.TaskOneReturn("root.2"))
+        sub_r = lf.Flow("subroot")
+        sub_r_1 = test_utils.TaskOneReturn("subroot.1")
+        sub_r.add(sub_r_1)
+        r.add(sub_r)
+
+        c = compiler.PatternCompiler(r).compile()
+        self.assertEqual([[], ['root.2', 'root.1']], _get_scopes(c, sub_r_1))
+
+    def test_nested_prior_linear_begin_middle_end(self):
+        r = lf.Flow("root")
+        begin_r = test_utils.TaskOneReturn("root.1")
+        r.add(begin_r, test_utils.TaskOneReturn("root.2"))
+        middle_r = test_utils.TaskOneReturn("root.3")
+        r.add(middle_r)
+        sub_r = lf.Flow("subroot")
+        sub_r.add(test_utils.TaskOneReturn("subroot.1"),
+                  test_utils.TaskOneReturn("subroot.2"))
+        r.add(sub_r)
+        end_r = test_utils.TaskOneReturn("root.4")
+        r.add(end_r)
+
+        c = compiler.PatternCompiler(r).compile()
+
+        self.assertEqual([], _get_scopes(c, begin_r))
+        self.assertEqual([['root.2', 'root.1']], _get_scopes(c, middle_r))
+        self.assertEqual([['subroot.2', 'subroot.1', 'root.3', 'root.2',
+                           'root.1']], _get_scopes(c, end_r))
+
+
+class GraphScopingTest(test.TestCase):
+    def test_dependent(self):
+        r = gf.Flow("root")
+
+        customer = test_utils.ProvidesRequiresTask("customer",
+                                                   provides=['dog'],
+                                                   requires=[])
+        washer = test_utils.ProvidesRequiresTask("washer",
+                                                 requires=['dog'],
+                                                 provides=['wash'])
+        dryer = test_utils.ProvidesRequiresTask("dryer",
+                                                requires=['dog', 'wash'],
+                                                provides=['dry_dog'])
+        shaved = test_utils.ProvidesRequiresTask("shaver",
+                                                 requires=['dry_dog'],
+                                                 provides=['shaved_dog'])
+        happy_customer = test_utils.ProvidesRequiresTask(
+            "happy_customer", requires=['shaved_dog'], provides=['happiness'])
+
+        r.add(customer, washer, dryer, shaved, happy_customer)
+
+        c = compiler.PatternCompiler(r).compile()
+
+        self.assertEqual([], _get_scopes(c, customer))
+        self.assertEqual([['washer', 'customer']], _get_scopes(c, dryer))
+        self.assertEqual([['shaver', 'dryer', 'washer', 'customer']],
+                         _get_scopes(c, happy_customer))
+
+    def test_no_visible(self):
+        r = gf.Flow("root")
+        atoms = []
+        for i in range(0, 10):
+            atoms.append(test_utils.TaskOneReturn("root.%s" % i))
+        r.add(*atoms)
+
+        c = compiler.PatternCompiler(r).compile()
+        for a in atoms:
+            self.assertEqual([], _get_scopes(c, a))
+
+    def test_nested(self):
+        r = gf.Flow("root")
+
+        r_1 = test_utils.TaskOneReturn("root.1")
+        r_2 = test_utils.TaskOneReturn("root.2")
+        r.add(r_1, r_2)
+        r.link(r_1, r_2)
+
+        subroot = gf.Flow("subroot")
+        subroot_r_1 = test_utils.TaskOneReturn("subroot.1")
+        subroot_r_2 = test_utils.TaskOneReturn("subroot.2")
+        subroot.add(subroot_r_1, subroot_r_2)
+        subroot.link(subroot_r_1, subroot_r_2)
+
+        r.add(subroot)
+        r_3 = test_utils.TaskOneReturn("root.3")
+        r.add(r_3)
+        r.link(r_2, r_3)
+
+        c = compiler.PatternCompiler(r).compile()
+        self.assertEqual([], _get_scopes(c, r_1))
+        self.assertEqual([['root.1']], _get_scopes(c, r_2))
+        self.assertEqual([['root.2', 'root.1']], _get_scopes(c, r_3))
+
+        self.assertEqual([], _get_scopes(c, subroot_r_1))
+        self.assertEqual([['subroot.1']], _get_scopes(c, subroot_r_2))
+
+
+class UnorderedScopingTest(test.TestCase):
+    def test_no_visible(self):
+        r = uf.Flow("root")
+        atoms = []
+        for i in range(0, 10):
+            atoms.append(test_utils.TaskOneReturn("root.%s" % i))
+        r.add(*atoms)
+        c = compiler.PatternCompiler(r).compile()
+        for a in atoms:
+            self.assertEqual([], _get_scopes(c, a))
+
+
+class MixedPatternScopingTest(test.TestCase):
+    def test_graph_linear_scope(self):
+        r = gf.Flow("root")
+        r_1 = test_utils.TaskOneReturn("root.1")
+        r_2 = test_utils.TaskOneReturn("root.2")
+        r.add(r_1, r_2)
+        r.link(r_1, r_2)
+
+        s = lf.Flow("subroot")
+        s_1 = test_utils.TaskOneReturn("subroot.1")
+        s_2 = test_utils.TaskOneReturn("subroot.2")
+        s.add(s_1, s_2)
+        r.add(s)
+
+        t = gf.Flow("subroot2")
+        t_1 = test_utils.TaskOneReturn("subroot2.1")
+        t_2 = test_utils.TaskOneReturn("subroot2.2")
+        t.add(t_1, t_2)
+        t.link(t_1, t_2)
+        r.add(t)
+        r.link(s, t)
+
+        c = compiler.PatternCompiler(r).compile()
+        self.assertEqual([], _get_scopes(c, r_1))
+        self.assertEqual([['root.1']], _get_scopes(c, r_2))
+        self.assertEqual([], _get_scopes(c, s_1))
+        self.assertEqual([['subroot.1']], _get_scopes(c, s_2))
+        self.assertEqual([[], ['subroot.2', 'subroot.1']],
+                         _get_scopes(c, t_1))
+        self.assertEqual([["subroot2.1"], ['subroot.2', 'subroot.1']],
+                         _get_scopes(c, t_2))
+
+    def test_linear_unordered_scope(self):
+        r = lf.Flow("root")
+        r_1 = test_utils.TaskOneReturn("root.1")
+        r_2 = test_utils.TaskOneReturn("root.2")
+        r.add(r_1, r_2)
+
+        u = uf.Flow("subroot")
+        atoms = []
+        for i in range(0, 5):
+            atoms.append(test_utils.TaskOneReturn("subroot.%s" % i))
+        u.add(*atoms)
+        r.add(u)
+
+        r_3 = test_utils.TaskOneReturn("root.3")
+        r.add(r_3)
+
+        c = compiler.PatternCompiler(r).compile()
+
+        self.assertEqual([], _get_scopes(c, r_1))
+        self.assertEqual([['root.1']], _get_scopes(c, r_2))
+        for a in atoms:
+            self.assertEqual([[], ['root.2', 'root.1']], _get_scopes(c, a))
+
+        scope = _get_scopes(c, r_3)
+        self.assertEqual(1, len(scope))
+        first_root = 0
+        for i, n in enumerate(scope[0]):
+            if n.startswith('root.'):
+                first_root = i
+                break
+        first_subroot = 0
+        for i, n in enumerate(scope[0]):
+            if n.startswith('subroot.'):
+                first_subroot = i
+                break
+        self.assertGreater(first_subroot, first_root)
+        self.assertEqual(scope[0][-2:], ['root.2', 'root.1'])
+
+    def test_shadow_graph(self):
+        r = gf.Flow("root")
+        customer = test_utils.ProvidesRequiresTask("customer",
+                                                   provides=['dog'],
+                                                   requires=[])
+        customer2 = test_utils.ProvidesRequiresTask("customer2",
+                                                    provides=['dog'],
+                                                    requires=[])
+        washer = test_utils.ProvidesRequiresTask("washer",
+                                                 requires=['dog'],
+                                                 provides=['wash'])
+        r.add(customer, washer)
+        r.add(customer2, resolve_requires=False)
+        r.link(customer2, washer)
+
+        c = compiler.PatternCompiler(r).compile()
+
+        # The order currently is *not* guaranteed to be 'customer' before
+        # 'customer2' or the reverse, since either can occur before the
+        # washer; since *either* is a valid topological ordering of the
+        # dependencies...
+        #
+        # This may be different after/if the following is resolved:
+        #
+        # https://github.com/networkx/networkx/issues/1181 (and a few others)
+        self.assertEqual(set(['customer', 'customer2']),
+                         set(_get_scopes(c, washer)[0]))
+        self.assertEqual([], _get_scopes(c, customer2))
+        self.assertEqual([], _get_scopes(c, customer))
+
+    def test_shadow_linear(self):
+        r = lf.Flow("root")
+
+        customer = test_utils.ProvidesRequiresTask("customer",
+                                                   provides=['dog'],
+                                                   requires=[])
+        customer2 = test_utils.ProvidesRequiresTask("customer2",
+                                                    provides=['dog'],
+                                                    requires=[])
+        washer = test_utils.ProvidesRequiresTask("washer",
+                                                 requires=['dog'],
+                                                 provides=['wash'])
+        r.add(customer, customer2, washer)
+
+        c = compiler.PatternCompiler(r).compile()
+
+        # This order is guaranteed...
+        self.assertEqual(['customer2', 'customer'], _get_scopes(c, washer)[0])
diff --git a/taskflow/tests/unit/conductor/test_conductor.py b/taskflow/tests/unit/conductor/test_conductor.py
index b43ba035..137d0f3a 100644
--- a/taskflow/tests/unit/conductor/test_conductor.py
+++ b/taskflow/tests/unit/conductor/test_conductor.py
@@ -14,22 +14,22 @@
 # License for the specific language governing permissions and limitations
 # under the License.

+import collections
 import contextlib
-import threading

 from zake import fake_client

 from taskflow.conductors import single_threaded as stc
 from taskflow import engines
 from taskflow.jobs.backends import impl_zookeeper
-from taskflow.jobs import jobboard
+from taskflow.jobs import base
 from taskflow.patterns import linear_flow as lf
 from taskflow.persistence.backends import impl_memory
 from taskflow import states as st
 from taskflow import test
 from taskflow.tests import utils as test_utils
-from taskflow.utils import misc
 from taskflow.utils import persistence_utils as pu
+from taskflow.utils import threading_utils


 @contextlib.contextmanager
@@ -44,34 +44,26 @@ def close_many(*closeables):
 def test_factory(blowup):
     f = lf.Flow("test")
     if not blowup:
-        f.add(test_utils.SaveOrderTask('test1'))
+        f.add(test_utils.ProgressingTask('test1'))
     else:
         f.add(test_utils.FailingTask("test1"))
     return f


-def make_thread(conductor):
-    t = threading.Thread(target=conductor.run)
-    t.daemon = True
-    return t
-
-
 class SingleThreadedConductorTest(test_utils.EngineTestBase, test.TestCase):
+    ComponentBundle = collections.namedtuple('ComponentBundle',
+                                             ['board', 'client',
+                                              'persistence', 'conductor'])
+
     def make_components(self, name='testing', wait_timeout=0.1):
         client = fake_client.FakeClient()
         persistence = impl_memory.MemoryBackend()
         board = impl_zookeeper.ZookeeperJobBoard(name, {},
                                                  client=client,
                                                  persistence=persistence)
-        engine_conf = {
-            'engine': 'default',
-        }
-        conductor = stc.SingleThreadedConductor(name, board, engine_conf,
-                                                persistence, wait_timeout)
-        return misc.AttrDict(board=board,
-                             client=client,
-                             persistence=persistence,
-                             conductor=conductor)
+        conductor = stc.SingleThreadedConductor(name, board, persistence,
+                                                wait_timeout=wait_timeout)
+        return self.ComponentBundle(board, client, persistence, conductor)

     def test_connection(self):
         components = self.make_components()
@@ -86,23 +78,24 @@ class SingleThreadedConductorTest(test_utils.EngineTestBase, test.TestCase):
         components = self.make_components()
         components.conductor.connect()
         with close_many(components.conductor, components.client):
-            t = make_thread(components.conductor)
+            t = threading_utils.daemon_thread(components.conductor.run)
             t.start()
-            self.assertTrue(components.conductor.stop(0.5))
+            self.assertTrue(
+                components.conductor.stop(test_utils.WAIT_TIMEOUT))
             self.assertFalse(components.conductor.dispatching)
             t.join()

     def test_run(self):
         components = self.make_components()
         components.conductor.connect()
-        consumed_event = threading.Event()
+        consumed_event = threading_utils.Event()

         def on_consume(state, details):
             consumed_event.set()

-        components.board.notifier.register(jobboard.REMOVAL, on_consume)
+        components.board.notifier.register(base.REMOVAL, on_consume)
         with close_many(components.conductor, components.client):
-            t = make_thread(components.conductor)
+            t = threading_utils.daemon_thread(components.conductor.run)
             t.start()
             lb, fd = pu.temporary_flow_detail(components.persistence)
             engines.save_factory_details(fd, test_factory,
@@ -110,9 +103,8 @@ class SingleThreadedConductorTest(test_utils.EngineTestBase, test.TestCase):
                                          backend=components.persistence)
             components.board.post('poke', lb,
                                   details={'flow_uuid': fd.uuid})
-            consumed_event.wait(1.0)
-            self.assertTrue(consumed_event.is_set())
-            self.assertTrue(components.conductor.stop(1.0))
+            self.assertTrue(consumed_event.wait(test_utils.WAIT_TIMEOUT))
+            self.assertTrue(components.conductor.stop(test_utils.WAIT_TIMEOUT))
             self.assertFalse(components.conductor.dispatching)

             persistence = components.persistence
@@ -125,15 +117,14 @@ class SingleThreadedConductorTest(test_utils.EngineTestBase, test.TestCase):
     def test_fail_run(self):
         components = self.make_components()
         components.conductor.connect()
-
-        consumed_event = threading.Event()
+        consumed_event = threading_utils.Event()

         def on_consume(state, details):
             consumed_event.set()

-        components.board.notifier.register(jobboard.REMOVAL, on_consume)
+        components.board.notifier.register(base.REMOVAL, on_consume)
         with close_many(components.conductor, components.client):
-            t = make_thread(components.conductor)
+            t = threading_utils.daemon_thread(components.conductor.run)
             t.start()
             lb, fd = pu.temporary_flow_detail(components.persistence)
             engines.save_factory_details(fd, test_factory,
@@ -141,9 +132,8 @@ class SingleThreadedConductorTest(test_utils.EngineTestBase, test.TestCase):
                                          backend=components.persistence)
             components.board.post('poke', lb,
                                   details={'flow_uuid': fd.uuid})
-            consumed_event.wait(1.0)
-            self.assertTrue(consumed_event.is_set())
-            self.assertTrue(components.conductor.stop(1.0))
+            self.assertTrue(consumed_event.wait(test_utils.WAIT_TIMEOUT))
+            self.assertTrue(components.conductor.stop(test_utils.WAIT_TIMEOUT))
             self.assertFalse(components.conductor.dispatching)

             persistence = components.persistence
diff --git a/taskflow/tests/unit/jobs/base.py b/taskflow/tests/unit/jobs/base.py
index a178a8af..8a8bee22 100644
--- a/taskflow/tests/unit/jobs/base.py
+++ b/taskflow/tests/unit/jobs/base.py
@@ -15,18 +15,19 @@
 # under the License.

 import contextlib
-import threading
 import time

 from kazoo.recipe import watchers
-import mock
+from oslo_utils import uuidutils

 from taskflow import exceptions as excp
-from taskflow.openstack.common import uuidutils
 from taskflow.persistence.backends import impl_dir
 from taskflow import states
+from taskflow.test import mock
+from taskflow.tests import utils as test_utils
 from taskflow.utils import misc
 from taskflow.utils import persistence_utils as p_utils
+from taskflow.utils import threading_utils


 FLUSH_PATH_TPL = '/taskflow/flush-test/%s'
@@ -52,8 +53,8 @@ def flush(client, path=None):
     # before this context manager exits.
     if not path:
         path = FLUSH_PATH_TPL % uuidutils.generate_uuid()
-    created = threading.Event()
-    deleted = threading.Event()
+    created = threading_utils.Event()
+    deleted = threading_utils.Event()

     def on_created(data, stat):
         if stat is not None:
@@ -67,13 +68,19 @@ def flush(client, path=None):
     watchers.DataWatch(client, path, func=on_created)
     client.create(path, makepath=True)
-    created.wait()
+    if not created.wait(test_utils.WAIT_TIMEOUT):
+        raise RuntimeError("Could not receive creation of %s in"
+                           " the alloted timeout of %s seconds"
+                           % (path, test_utils.WAIT_TIMEOUT))
     try:
         yield
     finally:
         watchers.DataWatch(client, path, func=on_deleted)
         client.delete(path, recursive=True)
-        deleted.wait()
+        if not deleted.wait(test_utils.WAIT_TIMEOUT):
+            raise RuntimeError("Could not receive deletion of %s in"
+                               " the alloted timeout of %s seconds"
+                               % (path, test_utils.WAIT_TIMEOUT))


 class BoardTestMixin(object):
@@ -119,11 +126,13 @@ class BoardTestMixin(object):
         self.assertRaises(excp.NotFound, self.board.wait, timeout=0.1)

     def test_wait_arrival(self):
-        ev = threading.Event()
+        ev = threading_utils.Event()
         jobs = []

         def poster(wait_post=0.2):
-            ev.wait()  # wait until the waiter is active
+            if not ev.wait(test_utils.WAIT_TIMEOUT):
+                raise RuntimeError("Waiter did not appear ready"
+                                   " in %s seconds" % test_utils.WAIT_TIMEOUT)
             time.sleep(wait_post)
             self.board.post('test', p_utils.temporary_log_book())
@@ -133,11 +142,9 @@ class BoardTestMixin(object):
             jobs.extend(it)

         with connect_close(self.board):
-            t1 = threading.Thread(target=poster)
-            t1.daemon = True
+            t1 = threading_utils.daemon_thread(poster)
             t1.start()
-            t2 = threading.Thread(target=waiter)
-            t2.daemon = True
+            t2 = threading_utils.daemon_thread(waiter)
             t2.start()
             for t in (t1, t2):
                 t.join()
diff --git a/taskflow/tests/unit/jobs/test_zk_job.py b/taskflow/tests/unit/jobs/test_zk_job.py
index 7268a1a4..afff4123 100644
--- a/taskflow/tests/unit/jobs/test_zk_job.py
+++ b/taskflow/tests/unit/jobs/test_zk_job.py
@@ -14,14 +14,14 @@
 # License for the specific language governing permissions and limitations
 # under the License.

+from oslo_serialization import jsonutils
+from oslo_utils import uuidutils
 import six
 import testtools
 from zake import fake_client
 from zake import utils as zake_utils

 from taskflow.jobs.backends import impl_zookeeper
-from taskflow.openstack.common import jsonutils
-from taskflow.openstack.common import uuidutils
 from taskflow import states
 from taskflow import test
 from taskflow.tests.unit.jobs import base
diff --git a/taskflow/tests/unit/patterns/test_graph_flow.py b/taskflow/tests/unit/patterns/test_graph_flow.py
index c7dad38e..62dbc287 100644
--- a/taskflow/tests/unit/patterns/test_graph_flow.py
+++ b/taskflow/tests/unit/patterns/test_graph_flow.py
@@ -97,7 +97,40 @@ class GraphFlowTest(test.TestCase):
         task1 = _task(name='task1', provides=['a', 'b'])
         task2 = _task(name='task2', provides=['a', 'c'])
         f = gf.Flow('test')
-        self.assertRaises(exc.DependencyFailure, f.add, task2, task1)
+        f.add(task2, task1)
+        self.assertEqual(set(['a', 'b', 'c']), f.provides)
+
+    def test_graph_flow_ambiguous_provides(self):
+        task1 = _task(name='task1', provides=['a', 'b'])
+        task2 = _task(name='task2', provides=['a'])
+        f = gf.Flow('test')
+        f.add(task1, task2)
+        self.assertEqual(set(['a', 'b']), f.provides)
+        task3 = _task(name='task3', requires=['a'])
+        self.assertRaises(exc.AmbiguousDependency, f.add, task3)
+
+    def test_graph_flow_no_resolve_requires(self):
+        task1 = _task(name='task1', provides=['a', 'b', 'c'])
+        task2 = _task(name='task2', requires=['a', 'b'])
+        f = gf.Flow('test')
+        f.add(task1, task2, resolve_requires=False)
+        self.assertEqual(set(['a', 'b']), f.requires)
+
+    def test_graph_flow_no_resolve_existing(self):
+        task1 = _task(name='task1', requires=['a', 'b'])
+        task2 = _task(name='task2', provides=['a', 'b'])
+        f = gf.Flow('test')
+        f.add(task1)
+        f.add(task2, resolve_existing=False)
+        self.assertEqual(set(['a', 'b']), f.requires)
+
+    def test_graph_flow_resolve_existing(self):
+        task1 = _task(name='task1', requires=['a', 'b'])
+        task2 = _task(name='task2', provides=['a', 'b'])
+        f = gf.Flow('test')
+        f.add(task1)
+        f.add(task2, resolve_existing=True)
+        self.assertEqual(set([]), f.requires)

     def test_graph_flow_with_retry(self):
         ret = retry.AlwaysRevert(requires=['a'], provides=['b'])
diff --git a/taskflow/tests/unit/patterns/test_linear_flow.py b/taskflow/tests/unit/patterns/test_linear_flow.py
index a0dbd0d7..23f891a8 100644
--- a/taskflow/tests/unit/patterns/test_linear_flow.py
+++ b/taskflow/tests/unit/patterns/test_linear_flow.py
@@ -14,7 +14,6 @@
 # License for the specific language governing permissions and limitations
 # under the License.

-from taskflow import exceptions as exc
 from taskflow.patterns import linear_flow as lf
 from taskflow import retry
 from taskflow import test
@@ -95,24 +94,6 @@ class LinearFlowTest(test.TestCase):
             (task1, task2, {'invariant': True})
         ])

-    def test_linear_flow_two_dependent_tasks_reverse_order(self):
-        task1 = _task(name='task1', provides=['a'])
-        task2 = _task(name='task2', requires=['a'])
-        f = lf.Flow('test')
-        self.assertRaises(exc.DependencyFailure, f.add, task2, task1)
-
-    def test_linear_flow_two_dependent_tasks_reverse_order2(self):
-        task1 = _task(name='task1', provides=['a'])
-        task2 = _task(name='task2', requires=['a'])
-        f = lf.Flow('test').add(task2)
-        self.assertRaises(exc.DependencyFailure, f.add, task1)
-
-    def test_linear_flow_two_task_same_provide(self):
-        task1 = _task(name='task1', provides=['a', 'b'])
-        task2 = _task(name='task2', provides=['a', 'c'])
-        f = lf.Flow('test')
-        self.assertRaises(exc.DependencyFailure, f.add, task2, task1)
-
     def test_linear_flow_three_tasks(self):
         task1 = _task(name='task1')
         task2 = _task(name='task2')
diff --git a/taskflow/tests/unit/patterns/test_unordered_flow.py b/taskflow/tests/unit/patterns/test_unordered_flow.py
index a4043fe2..e55cfad0 100644
--- a/taskflow/tests/unit/patterns/test_unordered_flow.py
+++ b/taskflow/tests/unit/patterns/test_unordered_flow.py
@@ -14,7 +14,6 @@
 # License for the specific language governing permissions and limitations
 # under the License.

-from taskflow import exceptions as exc
 from taskflow.patterns import unordered_flow as uf
 from taskflow import retry
 from taskflow import test
@@ -59,7 +58,7 @@ class UnorderedFlowTest(test.TestCase):
         self.assertEqual(f.requires, set(['a', 'b']))
         self.assertEqual(f.provides, set(['c', 'd']))

-    def test_unordered_flow_two_independent_tasks(self):
+    def test_unordered_flow_two_tasks(self):
         task1 = _task(name='task1')
         task2 = _task(name='task2')
         f = uf.Flow('test').add(task1, task2)
@@ -68,35 +67,29 @@ class UnorderedFlowTest(test.TestCase):
         self.assertEqual(set(f), set([task1, task2]))
         self.assertEqual(list(f.iter_links()), [])

-    def test_unordered_flow_two_dependent_tasks(self):
-        task1 = _task(name='task1', provides=['a'])
-        task2 = _task(name='task2', requires=['a'])
-        f = uf.Flow('test')
-        self.assertRaises(exc.DependencyFailure, f.add, task1, task2)
-
-    def test_unordered_flow_two_dependent_tasks_two_different_calls(self):
+    def test_unordered_flow_two_tasks_two_different_calls(self):
         task1 = _task(name='task1', provides=['a'])
         task2 = _task(name='task2', requires=['a'])
         f = uf.Flow('test').add(task1)
-        self.assertRaises(exc.DependencyFailure, f.add, task2)
+        f.add(task2)
+        self.assertEqual(len(f), 2)
+        self.assertEqual(set(['a']), f.requires)
+        self.assertEqual(set(['a']), f.provides)

-    def test_unordered_flow_two_dependent_tasks_reverse_order(self):
+    def test_unordered_flow_two_tasks_reverse_order(self):
         task1 = _task(name='task1', provides=['a'])
         task2 = _task(name='task2', requires=['a'])
-        f = uf.Flow('test')
-        self.assertRaises(exc.DependencyFailure, f.add, task2, task1)
-
-    def test_unordered_flow_two_dependent_tasks_reverse_order2(self):
-        task1 = _task(name='task1', provides=['a'])
-        task2 = _task(name='task2', requires=['a'])
-        f = uf.Flow('test').add(task2)
-        self.assertRaises(exc.DependencyFailure, f.add, task1)
+        f = uf.Flow('test').add(task2).add(task1)
+        self.assertEqual(len(f), 2)
+        self.assertEqual(set(['a']), f.requires)
+        self.assertEqual(set(['a']), f.provides)

     def test_unordered_flow_two_task_same_provide(self):
         task1 = _task(name='task1', provides=['a', 'b'])
         task2 = _task(name='task2', provides=['a', 'c'])
         f = uf.Flow('test')
-        self.assertRaises(exc.DependencyFailure, f.add, task2, task1)
+        f.add(task2, task1)
+        self.assertEqual(len(f), 2)

     def test_unordered_flow_with_retry(self):
         ret = retry.AlwaysRevert(requires=['a'], provides=['b'])
@@ -106,3 +99,12 @@ class UnorderedFlowTest(test.TestCase):

         self.assertEqual(f.requires, set(['a']))
         self.assertEqual(f.provides, set(['b']))
+
+    def test_unordered_flow_with_retry_fully_satisfies(self):
+        ret = retry.AlwaysRevert(provides=['b', 'a'])
+        f = uf.Flow('test', ret)
+        f.add(_task(name='task1', requires=['a']))
+        self.assertIs(f.retry, ret)
+        self.assertEqual(ret.name, 'test_retry')
+        self.assertEqual(f.requires, set([]))
+        self.assertEqual(f.provides, set(['b', 'a']))
diff --git a/taskflow/tests/unit/persistence/base.py b/taskflow/tests/unit/persistence/base.py
index 3d28695c..184cf51e 100644
--- a/taskflow/tests/unit/persistence/base.py
+++ b/taskflow/tests/unit/persistence/base.py
@@ -16,16 +16,58 @@

 import contextlib

+from oslo_utils import uuidutils
+
 from taskflow import exceptions as exc
-from taskflow.openstack.common import uuidutils
 from taskflow.persistence import logbook
 from taskflow import states
-from taskflow.utils import misc
+from taskflow.types import failure


 class PersistenceTestMixin(object):
     def _get_connection(self):
-        raise NotImplementedError()
+        raise NotImplementedError('_get_connection() implementation required')
+
+    def test_task_detail_update_not_existing(self):
+        lb_id = uuidutils.generate_uuid()
+        lb_name = 'lb-%s' % (lb_id)
+        lb = logbook.LogBook(name=lb_name, uuid=lb_id)
+        fd = logbook.FlowDetail('test', uuid=uuidutils.generate_uuid())
+        lb.add(fd)
+        td = logbook.TaskDetail("detail-1", uuid=uuidutils.generate_uuid())
+        fd.add(td)
+        with contextlib.closing(self._get_connection()) as conn:
+            conn.save_logbook(lb)
+
+        td2 = logbook.TaskDetail("detail-1", uuid=uuidutils.generate_uuid())
+        fd.add(td2)
+        with contextlib.closing(self._get_connection()) as conn:
+            conn.update_flow_details(fd)
+
+        with contextlib.closing(self._get_connection()) as conn:
+            lb2 = conn.get_logbook(lb.uuid)
+            fd2 = lb2.find(fd.uuid)
+            self.assertIsNotNone(fd2.find(td.uuid))
+            self.assertIsNotNone(fd2.find(td2.uuid))
+
+    def test_flow_detail_update_not_existing(self):
+        lb_id = uuidutils.generate_uuid()
+        lb_name = 'lb-%s' % (lb_id)
+        lb = logbook.LogBook(name=lb_name, uuid=lb_id)
+        fd = logbook.FlowDetail('test', uuid=uuidutils.generate_uuid())
+        lb.add(fd)
+        with contextlib.closing(self._get_connection()) as conn:
+            conn.save_logbook(lb)
+
+        fd2 = logbook.FlowDetail('test-2', uuid=uuidutils.generate_uuid())
+        lb.add(fd2)
+        with contextlib.closing(self._get_connection()) as conn:
+            conn.save_logbook(lb)
+
+        with contextlib.closing(self._get_connection()) as conn:
+            lb2 = conn.get_logbook(lb.uuid)
+            self.assertIsNotNone(lb2.find(fd.uuid))
+            self.assertIsNotNone(lb2.find(fd2.uuid))

     def test_logbook_save_retrieve(self):
         lb_id = uuidutils.generate_uuid()
@@ -147,7 +189,7 @@ class PersistenceTestMixin(object):
         try:
             raise RuntimeError('Woot!')
         except Exception:
-            td.failure = misc.Failure()
+            td.failure = failure.Failure()

         fd.add(td)
@@ -161,10 +203,9 @@ class PersistenceTestMixin(object):
             lb2 = conn.get_logbook(lb_id)
         fd2 = lb2.find(fd.uuid)
         td2 = fd2.find(td.uuid)
-        failure = td2.failure
-        self.assertEqual(failure.exception_str, 'Woot!')
-        self.assertIs(failure.check(RuntimeError), RuntimeError)
-        self.assertEqual(failure.traceback_str, td.failure.traceback_str)
+        self.assertEqual(td2.failure.exception_str, 'Woot!')
+        self.assertIs(td2.failure.check(RuntimeError), RuntimeError)
+        self.assertEqual(td2.failure.traceback_str, td.failure.traceback_str)
         self.assertIsInstance(td2, logbook.TaskDetail)

     def test_logbook_merge_flow_detail(self):
@@ -269,7 +310,7 @@ class PersistenceTestMixin(object):
         fd = logbook.FlowDetail('test', uuid=uuidutils.generate_uuid())
         lb.add(fd)
         rd = logbook.RetryDetail("retry-1", uuid=uuidutils.generate_uuid())
-        fail = misc.Failure.from_exception(RuntimeError('fail'))
+        fail = failure.Failure.from_exception(RuntimeError('fail'))
         rd.results.append((42, {'some-task': fail}))
         fd.add(rd)
@@ -286,7 +327,7 @@ class PersistenceTestMixin(object):
         rd2 = fd2.find(rd.uuid)
         self.assertIsInstance(rd2, logbook.RetryDetail)
         fail2 = rd2.results[0][1].get('some-task')
-        self.assertIsInstance(fail2, misc.Failure)
+        self.assertIsInstance(fail2, failure.Failure)
         self.assertTrue(fail.matches(fail2))

     def test_retry_detail_save_intention(self):
diff --git a/taskflow/tests/unit/persistence/test_sql_persistence.py b/taskflow/tests/unit/persistence/test_sql_persistence.py
index b48f84a8..8489160d 100644
--- a/taskflow/tests/unit/persistence/test_sql_persistence.py
+++ b/taskflow/tests/unit/persistence/test_sql_persistence.py
@@ -14,11 +14,13 @@
 # License for the specific language governing permissions and limitations
 # under the License.

+import abc
 import contextlib
 import os
+import random
 import tempfile
-import threading

+import six
 import testtools

@@ -29,12 +31,14 @@ import testtools
 # There are also "opportunistic" tests for both mysql and postgresql in here,
 # which allows testing against all 3 databases (sqlite, mysql, postgres) in
 # a properly configured unit test environment. For the opportunistic testing
-# you need to set up a db named 'openstack_citest' with user 'openstack_citest'
-# and password 'openstack_citest' on localhost.
+# you need to set up a db user 'openstack_citest' with password
+# 'openstack_citest' that has the permissions to create databases on
+# localhost.
USER = "openstack_citest" PASSWD = "openstack_citest" -DATABASE = "openstack_citest" +DATABASE = "tftest_" + ''.join(random.choice('0123456789') + for _ in range(12)) try: from taskflow.persistence.backends import impl_sqlalchemy @@ -50,7 +54,6 @@ MYSQL_VARIANTS = ('mysqldb', 'pymysql') from taskflow.persistence import backends from taskflow import test from taskflow.tests.unit.persistence import base -from taskflow.utils import lock_utils def _get_connect_string(backend, user, passwd, database=None, variant=None): @@ -97,7 +100,7 @@ def _postgres_exists(): return False engine = None try: - db_uri = _get_connect_string('postgres', USER, PASSWD, 'template1') + db_uri = _get_connect_string('postgres', USER, PASSWD, 'postgres') engine = sa.create_engine(db_uri) with contextlib.closing(engine.connect()): return True @@ -135,34 +138,45 @@ class SqlitePersistenceTest(test.TestCase, base.PersistenceTestMixin): self.db_location = None +@six.add_metaclass(abc.ABCMeta) class BackendPersistenceTestMixin(base.PersistenceTestMixin): """Specifies a backend type and does required setup and teardown.""" - LOCK_NAME = None def _get_connection(self): return self.backend.get_connection() - def _reset_database(self): - """Resets the database, and returns the uri to that database. + def test_entrypoint(self): + # Test that the entrypoint fetching also works (even with dialects) + # using the same configuration we used in setUp() but not using + # the impl_sqlalchemy SQLAlchemyBackend class directly... + with contextlib.closing(backends.fetch(self.db_conf)) as backend: + with contextlib.closing(backend.get_connection()): + pass - Called *only* after locking succeeds. 
- """ - raise NotImplementedError() + @abc.abstractmethod + def _init_db(self): + """Sets up the database, and returns the uri to that database.""" + + @abc.abstractmethod + def _remove_db(self): + """Cleans up by removing the database once the tests are done.""" def setUp(self): super(BackendPersistenceTestMixin, self).setUp() self.backend = None - self.big_lock.acquire() - self.addCleanup(self.big_lock.release) try: - conf = { - 'connection': self._reset_database(), + self.db_uri = self._init_db() + self.db_conf = { + 'connection': self.db_uri } + # Since we are using random database names, we need to make sure + # and remove our random database when we are done testing. + self.addCleanup(self._remove_db) except Exception as e: - self.skipTest("Failed to reset your database;" + self.skipTest("Failed to create temporary database;" " testing being skipped due to: %s" % (e)) try: - self.backend = impl_sqlalchemy.SQLAlchemyBackend(conf) + self.backend = impl_sqlalchemy.SQLAlchemyBackend(self.db_conf) self.addCleanup(self.backend.close) with contextlib.closing(self._get_connection()) as conn: conn.upgrade() @@ -174,25 +188,11 @@ class BackendPersistenceTestMixin(base.PersistenceTestMixin): @testtools.skipIf(not SQLALCHEMY_AVAILABLE, 'sqlalchemy is not available') @testtools.skipIf(not _mysql_exists(), 'mysql is not available') class MysqlPersistenceTest(BackendPersistenceTestMixin, test.TestCase): - LOCK_NAME = 'mysql_persistence_test' def __init__(self, *args, **kwargs): test.TestCase.__init__(self, *args, **kwargs) - # We need to make sure that each test goes through a set of locks - # to ensure that multiple tests are not modifying the database, - # dropping it, creating it at the same time. To accomplish this we use - # a lock that ensures multiple parallel processes can't run at the - # same time as well as a in-process lock to ensure that multiple - # threads can't run at the same time. 
- lock_path = os.path.join(tempfile.gettempdir(), - 'taskflow-%s.lock' % (self.LOCK_NAME)) - locks = [ - lock_utils.InterProcessLock(lock_path), - threading.RLock(), - ] - self.big_lock = lock_utils.MultiLock(locks) - def _reset_database(self): + def _init_db(self): working_variant = None for variant in MYSQL_VARIANTS: engine = None @@ -201,7 +201,6 @@ class MysqlPersistenceTest(BackendPersistenceTestMixin, test.TestCase): variant=variant) engine = sa.create_engine(db_uri) with contextlib.closing(engine.connect()) as conn: - conn.execute("DROP DATABASE IF EXISTS %s" % DATABASE) conn.execute("CREATE DATABASE %s" % DATABASE) working_variant = variant except Exception: @@ -216,60 +215,82 @@ class MysqlPersistenceTest(BackendPersistenceTestMixin, test.TestCase): break if not working_variant: variants = ", ".join(MYSQL_VARIANTS) - self.skipTest("Failed to find a mysql variant" - " (tried %s) that works; mysql testing" - " being skipped" % (variants)) + raise Exception("Failed to initialize MySQL db." + " Tried these variants: %s; MySQL testing" + " being skipped" % (variants)) else: return _get_connect_string('mysql', USER, PASSWD, database=DATABASE, variant=working_variant) - -@testtools.skipIf(not SQLALCHEMY_AVAILABLE, 'sqlalchemy is not available') -@testtools.skipIf(not _postgres_exists(), 'postgres is not available') -class PostgresPersistenceTest(BackendPersistenceTestMixin, test.TestCase): - LOCK_NAME = 'postgres_persistence_test' - - def __init__(self, *args, **kwargs): - test.TestCase.__init__(self, *args, **kwargs) - # We need to make sure that each test goes through a set of locks - # to ensure that multiple tests are not modifying the database, - # dropping it, creating it at the same time. To accomplish this we use - # a lock that ensures multiple parallel processes can't run at the - # same time as well as a in-process lock to ensure that multiple - # threads can't run at the same time. 
- lock_path = os.path.join(tempfile.gettempdir(), - 'taskflow-%s.lock' % (self.LOCK_NAME)) - locks = [ - lock_utils.InterProcessLock(lock_path), - threading.RLock(), - ] - self.big_lock = lock_utils.MultiLock(locks) - - def _reset_database(self): + def _remove_db(self): engine = None try: - # Postgres can't operate on the database it's connected to, that's - # why we connect to the default template database 'template1' and - # then drop and create the desired database. - db_uri = _get_connect_string('postgres', USER, PASSWD, - database='template1') - engine = sa.create_engine(db_uri) + engine = sa.create_engine(self.db_uri) with contextlib.closing(engine.connect()) as conn: - conn.connection.set_isolation_level(0) conn.execute("DROP DATABASE IF EXISTS %s" % DATABASE) - conn.connection.set_isolation_level(1) - with contextlib.closing(engine.connect()) as conn: - conn.connection.set_isolation_level(0) - conn.execute("CREATE DATABASE %s" % DATABASE) - conn.connection.set_isolation_level(1) + except Exception as e: + raise Exception('Failed to remove temporary database: %s' % (e)) + finally: + if engine is not None: + try: + engine.dispose() + except Exception: + pass + + +@testtools.skipIf(not SQLALCHEMY_AVAILABLE, 'sqlalchemy is not available') +@testtools.skipIf(not _postgres_exists(), 'postgres is not available') +class PostgresPersistenceTest(BackendPersistenceTestMixin, test.TestCase): + + def __init__(self, *args, **kwargs): + test.TestCase.__init__(self, *args, **kwargs) + + def _init_db(self): + engine = None + try: + # Postgres can't operate on the database it's connected to, that's + # why we connect to the database 'postgres' and then create the + # desired database. 
+ db_uri = _get_connect_string('postgres', USER, PASSWD, + database='postgres') + engine = sa.create_engine(db_uri) + with contextlib.closing(engine.connect()) as conn: + conn.connection.set_isolation_level(0) + conn.execute("CREATE DATABASE %s" % DATABASE) + conn.connection.set_isolation_level(1) + except Exception as e: + raise Exception('Failed to initialize PostgreSQL db: %s' % (e)) + finally: + if engine is not None: + try: + engine.dispose() + except Exception: + pass + return _get_connect_string('postgres', USER, PASSWD, + database=DATABASE) + + def _remove_db(self): + engine = None + try: + # Postgres can't operate on the database it's connected to, that's + # why we connect to the 'postgres' database and then drop the + # database. + db_uri = _get_connect_string('postgres', USER, PASSWD, + database='postgres') + engine = sa.create_engine(db_uri) + with contextlib.closing(engine.connect()) as conn: + conn.connection.set_isolation_level(0) + conn.execute("DROP DATABASE IF EXISTS %s" % DATABASE) + conn.connection.set_isolation_level(1) + except Exception as e: + raise Exception('Failed to remove temporary database: %s' % (e)) finally: if engine is not None: try: engine.dispose() except Exception: pass - return _get_connect_string('postgres', USER, PASSWD, database=DATABASE) @testtools.skipIf(not SQLALCHEMY_AVAILABLE, 'sqlalchemy is not available') diff --git a/taskflow/tests/unit/persistence/test_zk_persistence.py b/taskflow/tests/unit/persistence/test_zk_persistence.py index 609de21f..bb8bec9a 100644 --- a/taskflow/tests/unit/persistence/test_zk_persistence.py +++ b/taskflow/tests/unit/persistence/test_zk_persistence.py @@ -17,11 +17,11 @@ import contextlib from kazoo import exceptions as kazoo_exceptions +from oslo_utils import uuidutils import testtools from zake import fake_client from taskflow import exceptions as exc -from taskflow.openstack.common import uuidutils from taskflow.persistence import backends from taskflow.persistence.backends import 
impl_zookeeper from taskflow import test diff --git a/taskflow/tests/unit/test_arguments_passing.py b/taskflow/tests/unit/test_arguments_passing.py index 5e9fc3a8..fb4744bd 100644 --- a/taskflow/tests/unit/test_arguments_passing.py +++ b/taskflow/tests/unit/test_arguments_passing.py @@ -155,15 +155,14 @@ class SingleThreadedEngineTest(ArgumentsPassingTest, def _make_engine(self, flow, flow_detail=None): return taskflow.engines.load(flow, flow_detail=flow_detail, - engine_conf='serial', + engine='serial', backend=self.backend) class MultiThreadedEngineTest(ArgumentsPassingTest, test.TestCase): def _make_engine(self, flow, flow_detail=None, executor=None): - engine_conf = dict(engine='parallel') return taskflow.engines.load(flow, flow_detail=flow_detail, - engine_conf=engine_conf, + engine='parallel', backend=self.backend, executor=executor) diff --git a/taskflow/tests/unit/test_duration.py b/taskflow/tests/unit/test_duration.py deleted file mode 100644 index e1588eb2..00000000 --- a/taskflow/tests/unit/test_duration.py +++ /dev/null @@ -1,82 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
- -import contextlib -import time - -import mock - -import taskflow.engines -from taskflow import exceptions as exc -from taskflow.listeners import timing -from taskflow.patterns import linear_flow as lf -from taskflow.persistence.backends import impl_memory -from taskflow import task -from taskflow import test -from taskflow.tests import utils as t_utils -from taskflow.utils import persistence_utils as p_utils - - -class SleepyTask(task.Task): - def __init__(self, name, sleep_for=0.0): - super(SleepyTask, self).__init__(name=name) - self._sleep_for = float(sleep_for) - - def execute(self): - if self._sleep_for <= 0: - return - else: - time.sleep(self._sleep_for) - - -class TestDuration(test.TestCase): - def make_engine(self, flow, flow_detail, backend): - e = taskflow.engines.load(flow, - flow_detail=flow_detail, - backend=backend) - e.compile() - return e - - def test_duration(self): - with contextlib.closing(impl_memory.MemoryBackend({})) as be: - flow = lf.Flow("test") - flow.add(SleepyTask("test-1", sleep_for=0.1)) - (lb, fd) = p_utils.temporary_flow_detail(be) - e = self.make_engine(flow, fd, be) - with timing.TimingListener(e): - e.run() - t_uuid = e.storage.get_atom_uuid("test-1") - td = fd.find(t_uuid) - self.assertIsNotNone(td) - self.assertIsNotNone(td.meta) - self.assertIn('duration', td.meta) - self.assertGreaterEqual(0.1, td.meta['duration']) - - @mock.patch.object(timing.LOG, 'warn') - def test_record_ending_exception(self, mocked_warn): - with contextlib.closing(impl_memory.MemoryBackend({})) as be: - flow = lf.Flow("test") - flow.add(t_utils.TaskNoRequiresNoReturns("test-1")) - (lb, fd) = p_utils.temporary_flow_detail(be) - e = self.make_engine(flow, fd, be) - timing_listener = timing.TimingListener(e) - with mock.patch.object(timing_listener._engine.storage, - 'update_atom_metadata') as mocked_uam: - mocked_uam.side_effect = exc.StorageFailure('Woot!') - with timing_listener: - e.run() - mocked_warn.assert_called_once_with(mock.ANY, mock.ANY, 
'test-1', - exc_info=True) diff --git a/taskflow/tests/unit/test_engine_helpers.py b/taskflow/tests/unit/test_engine_helpers.py index 30ff51c3..ed40caa0 100644 --- a/taskflow/tests/unit/test_engine_helpers.py +++ b/taskflow/tests/unit/test_engine_helpers.py @@ -14,28 +14,40 @@ # License for the specific language governing permissions and limitations # under the License. -import mock - import taskflow.engines from taskflow import exceptions as exc from taskflow.patterns import linear_flow from taskflow import test +from taskflow.test import mock from taskflow.tests import utils as test_utils from taskflow.utils import persistence_utils as p_utils class EngineLoadingTestCase(test.TestCase): - def test_default_load(self): + def _make_dummy_flow(self): f = linear_flow.Flow('test') f.add(test_utils.TaskOneReturn("run-1")) + return f + + def test_default_load(self): + f = self._make_dummy_flow() e = taskflow.engines.load(f) self.assertIsNotNone(e) def test_unknown_load(self): - f = linear_flow.Flow('test') - f.add(test_utils.TaskOneReturn("run-1")) + f = self._make_dummy_flow() self.assertRaises(exc.NotFound, taskflow.engines.load, f, - engine_conf='not_really_any_engine') + engine='not_really_any_engine') + + def test_options_empty(self): + f = self._make_dummy_flow() + e = taskflow.engines.load(f) + self.assertEqual({}, e.options) + + def test_options_passthrough(self): + f = self._make_dummy_flow() + e = taskflow.engines.load(f, pass_1=1, pass_2=2) + self.assertEqual({'pass_1': 1, 'pass_2': 2}, e.options) class FlowFromDetailTestCase(test.TestCase): @@ -69,7 +81,7 @@ class FlowFromDetailTestCase(test.TestCase): _lb, flow_detail = p_utils.temporary_flow_detail() flow_detail.meta = dict(factory=dict(name=name)) - with mock.patch('taskflow.openstack.common.importutils.import_class', + with mock.patch('oslo_utils.importutils.import_class', return_value=lambda: 'RESULT') as mock_import: result = taskflow.engines.flow_from_detail(flow_detail) 
mock_import.assert_called_onec_with(name) @@ -80,7 +92,7 @@ class FlowFromDetailTestCase(test.TestCase): _lb, flow_detail = p_utils.temporary_flow_detail() flow_detail.meta = dict(factory=dict(name=name, args=['foo'])) - with mock.patch('taskflow.openstack.common.importutils.import_class', + with mock.patch('oslo_utils.importutils.import_class', return_value=lambda x: 'RESULT %s' % x) as mock_import: result = taskflow.engines.flow_from_detail(flow_detail) mock_import.assert_called_onec_with(name) diff --git a/taskflow/tests/unit/test_engines.py b/taskflow/tests/unit/test_engines.py index d2fb0d43..8762d386 100644 --- a/taskflow/tests/unit/test_engines.py +++ b/taskflow/tests/unit/test_engines.py @@ -15,9 +15,7 @@ # under the License. import contextlib -import threading -from concurrent import futures import testtools import taskflow.engines @@ -33,59 +31,51 @@ from taskflow import states from taskflow import task from taskflow import test from taskflow.tests import utils +from taskflow.types import failure +from taskflow.types import futures from taskflow.types import graph as gr from taskflow.utils import eventlet_utils as eu -from taskflow.utils import misc from taskflow.utils import persistence_utils as p_utils +from taskflow.utils import threading_utils as tu -class EngineTaskTest(utils.EngineTestBase): +class EngineTaskTest(object): def test_run_task_as_flow(self): - flow = utils.SaveOrderTask(name='task1') + flow = utils.ProgressingTask(name='task1') engine = self._make_engine(flow) - engine.run() - self.assertEqual(self.values, ['task1']) - - @staticmethod - def _callback(state, values, details): - name = details.get('task_name', '') - values.append('%s %s' % (name, state)) - - @staticmethod - def _flow_callback(state, values, details): - values.append('flow %s' % state) + with utils.CaptureListener(engine, capture_flow=False) as capturer: + engine.run() + expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)'] + self.assertEqual(expected, capturer.values) def 
test_run_task_with_notifications(self): - flow = utils.SaveOrderTask(name='task1') + flow = utils.ProgressingTask(name='task1') engine = self._make_engine(flow) - utils.register_notifiers(engine, self.values) - engine.run() - self.assertEqual(self.values, - ['flow RUNNING', - 'task1 RUNNING', - 'task1', - 'task1 SUCCESS', - 'flow SUCCESS']) + with utils.CaptureListener(engine) as capturer: + engine.run() + expected = ['task1.f RUNNING', 'task1.t RUNNING', + 'task1.t SUCCESS(5)', 'task1.f SUCCESS'] + self.assertEqual(expected, capturer.values) def test_failing_task_with_notifications(self): + values = [] flow = utils.FailingTask('fail') engine = self._make_engine(flow) - utils.register_notifiers(engine, self.values) - expected = ['flow RUNNING', - 'fail RUNNING', - 'fail FAILURE', - 'fail REVERTING', - 'fail reverted(Failure: RuntimeError: Woot!)', - 'fail REVERTED', - 'flow REVERTED'] - self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run) - self.assertEqual(self.values, expected) + expected = ['fail.f RUNNING', 'fail.t RUNNING', + 'fail.t FAILURE(Failure: RuntimeError: Woot!)', + 'fail.t REVERTING', 'fail.t REVERTED', + 'fail.f REVERTED'] + with utils.CaptureListener(engine, values=values) as capturer: + self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run) + self.assertEqual(expected, capturer.values) self.assertEqual(engine.storage.get_flow_state(), states.REVERTED) - - self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run) - now_expected = expected + ['fail PENDING', 'flow PENDING'] + expected - self.assertEqual(self.values, now_expected) + with utils.CaptureListener(engine, values=values) as capturer: + self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run) + now_expected = list(expected) + now_expected.extend(['fail.t PENDING', 'fail.f PENDING']) + now_expected.extend(expected) + self.assertEqual(now_expected, values) self.assertEqual(engine.storage.get_flow_state(), states.REVERTED) def test_invalid_flow_raises(self): @@ -123,63 
+113,74 @@ class EngineLinearFlowTest(utils.EngineTestBase): def test_sequential_flow_one_task(self): flow = lf.Flow('flow-1').add( - utils.SaveOrderTask(name='task1') + utils.ProgressingTask(name='task1') ) - self._make_engine(flow).run() - self.assertEqual(self.values, ['task1']) + engine = self._make_engine(flow) + with utils.CaptureListener(engine, capture_flow=False) as capturer: + engine.run() + expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)'] + self.assertEqual(expected, capturer.values) def test_sequential_flow_two_tasks(self): flow = lf.Flow('flow-2').add( - utils.SaveOrderTask(name='task1'), - utils.SaveOrderTask(name='task2') + utils.ProgressingTask(name='task1'), + utils.ProgressingTask(name='task2') ) - self._make_engine(flow).run() - self.assertEqual(self.values, ['task1', 'task2']) + engine = self._make_engine(flow) + with utils.CaptureListener(engine, capture_flow=False) as capturer: + engine.run() + expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)', + 'task2.t RUNNING', 'task2.t SUCCESS(5)'] + self.assertEqual(expected, capturer.values) self.assertEqual(len(flow), 2) def test_sequential_flow_two_tasks_iter(self): flow = lf.Flow('flow-2').add( - utils.SaveOrderTask(name='task1'), - utils.SaveOrderTask(name='task2') + utils.ProgressingTask(name='task1'), + utils.ProgressingTask(name='task2') ) - e = self._make_engine(flow) - gathered_states = list(e.run_iter()) + engine = self._make_engine(flow) + with utils.CaptureListener(engine, capture_flow=False) as capturer: + gathered_states = list(engine.run_iter()) self.assertTrue(len(gathered_states) > 0) - self.assertEqual(self.values, ['task1', 'task2']) + expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)', + 'task2.t RUNNING', 'task2.t SUCCESS(5)'] + self.assertEqual(expected, capturer.values) self.assertEqual(len(flow), 2) def test_sequential_flow_iter_suspend_resume(self): flow = lf.Flow('flow-2').add( - utils.SaveOrderTask(name='task1'), - utils.SaveOrderTask(name='task2') + 
utils.ProgressingTask(name='task1'), + utils.ProgressingTask(name='task2') ) - _lb, fd = p_utils.temporary_flow_detail(self.backend) - e = self._make_engine(flow, flow_detail=fd) - it = e.run_iter() - gathered_states = [] - suspend_it = None - while True: - try: - s = it.send(suspend_it) - gathered_states.append(s) - if s == states.WAITING: - # Stop it before task2 runs/starts. - suspend_it = True - except StopIteration: - break + lb, fd = p_utils.temporary_flow_detail(self.backend) + + engine = self._make_engine(flow, flow_detail=fd) + with utils.CaptureListener(engine, capture_flow=False) as capturer: + it = engine.run_iter() + gathered_states = [] + suspend_it = None + while True: + try: + s = it.send(suspend_it) + gathered_states.append(s) + if s == states.WAITING: + # Stop it before task2 runs/starts. + suspend_it = True + except StopIteration: + break self.assertTrue(len(gathered_states) > 0) - self.assertEqual(self.values, ['task1']) - self.assertEqual(states.SUSPENDED, e.storage.get_flow_state()) + expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)'] + self.assertEqual(expected, capturer.values) + self.assertEqual(states.SUSPENDED, engine.storage.get_flow_state()) # Attempt to resume it and see what runs now... - # - # NOTE(harlowja): Clear all the values, but don't reset the reference. 
- while len(self.values): - self.values.pop() - gathered_states = list(e.run_iter()) + with utils.CaptureListener(engine, capture_flow=False) as capturer: + gathered_states = list(engine.run_iter()) self.assertTrue(len(gathered_states) > 0) - self.assertEqual(self.values, ['task2']) - self.assertEqual(states.SUCCESS, e.storage.get_flow_state()) + expected = ['task2.t RUNNING', 'task2.t SUCCESS(5)'] + self.assertEqual(expected, capturer.values) + self.assertEqual(states.SUCCESS, engine.storage.get_flow_state()) def test_revert_removes_data(self): flow = lf.Flow('revert-removes').add( @@ -193,13 +194,17 @@ class EngineLinearFlowTest(utils.EngineTestBase): def test_sequential_flow_nested_blocks(self): flow = lf.Flow('nested-1').add( - utils.SaveOrderTask('task1'), + utils.ProgressingTask('task1'), lf.Flow('inner-1').add( - utils.SaveOrderTask('task2') + utils.ProgressingTask('task2') ) ) - self._make_engine(flow).run() - self.assertEqual(self.values, ['task1', 'task2']) + engine = self._make_engine(flow) + with utils.CaptureListener(engine, capture_flow=False) as capturer: + engine.run() + expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)', + 'task2.t RUNNING', 'task2.t SUCCESS(5)'] + self.assertEqual(expected, capturer.values) def test_revert_exception_is_reraised(self): flow = lf.Flow('revert-1').add( @@ -215,26 +220,32 @@ class EngineLinearFlowTest(utils.EngineTestBase): utils.NeverRunningTask(), ) engine = self._make_engine(flow) - self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run) - self.assertEqual( - self.values, - ['fail reverted(Failure: RuntimeError: Woot!)']) + with utils.CaptureListener(engine, capture_flow=False) as capturer: + self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run) + expected = ['fail.t RUNNING', + 'fail.t FAILURE(Failure: RuntimeError: Woot!)', + 'fail.t REVERTING', 'fail.t REVERTED'] + self.assertEqual(expected, capturer.values) def test_correctly_reverts_children(self): flow = lf.Flow('root-1').add( - 
-            utils.SaveOrderTask('task1'),
+            utils.ProgressingTask('task1'),
             lf.Flow('child-1').add(
-                utils.SaveOrderTask('task2'),
+                utils.ProgressingTask('task2'),
                 utils.FailingTask('fail')
             )
         )
         engine = self._make_engine(flow)
-        self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run)
-        self.assertEqual(
-            self.values,
-            ['task1', 'task2',
-             'fail reverted(Failure: RuntimeError: Woot!)',
-             'task2 reverted(5)', 'task1 reverted(5)'])
+        with utils.CaptureListener(engine, capture_flow=False) as capturer:
+            self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run)
+        expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)',
+                    'task2.t RUNNING', 'task2.t SUCCESS(5)',
+                    'fail.t RUNNING',
+                    'fail.t FAILURE(Failure: RuntimeError: Woot!)',
+                    'fail.t REVERTING', 'fail.t REVERTED',
+                    'task2.t REVERTING', 'task2.t REVERTED',
+                    'task1.t REVERTING', 'task1.t REVERTED']
+        self.assertEqual(expected, capturer.values)


 class EngineParallelFlowTest(utils.EngineTestBase):
@@ -246,22 +257,26 @@ class EngineParallelFlowTest(utils.EngineTestBase):

     def test_parallel_flow_one_task(self):
         flow = uf.Flow('p-1').add(
-            utils.SaveOrderTask(name='task1', provides='a')
+            utils.ProgressingTask(name='task1', provides='a')
         )
         engine = self._make_engine(flow)
-        engine.run()
-        self.assertEqual(self.values, ['task1'])
+        with utils.CaptureListener(engine, capture_flow=False) as capturer:
+            engine.run()
+        expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)']
+        self.assertEqual(expected, capturer.values)
         self.assertEqual(engine.storage.fetch_all(), {'a': 5})

     def test_parallel_flow_two_tasks(self):
         flow = uf.Flow('p-2').add(
-            utils.SaveOrderTask(name='task1'),
-            utils.SaveOrderTask(name='task2')
+            utils.ProgressingTask(name='task1'),
+            utils.ProgressingTask(name='task2')
         )
-        self._make_engine(flow).run()
-
-        result = set(self.values)
-        self.assertEqual(result, set(['task1', 'task2']))
+        engine = self._make_engine(flow)
+        with utils.CaptureListener(engine, capture_flow=False) as capturer:
+            engine.run()
+        expected = set(['task2.t SUCCESS(5)', 'task2.t RUNNING',
+                        'task1.t RUNNING', 'task1.t SUCCESS(5)'])
+        self.assertEqual(expected, set(capturer.values))

     def test_parallel_revert(self):
         flow = uf.Flow('p-r-3').add(
@@ -270,9 +285,10 @@ class EngineParallelFlowTest(utils.EngineTestBase):
             utils.TaskNoRequiresNoReturns(name='task2')
         )
         engine = self._make_engine(flow)
-        self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run)
-        self.assertIn('fail reverted(Failure: RuntimeError: Woot!)',
-                      self.values)
+        with utils.CaptureListener(engine, capture_flow=False) as capturer:
+            self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run)
+        self.assertIn('fail.t FAILURE(Failure: RuntimeError: Woot!)',
+                      capturer.values)

     def test_parallel_revert_exception_is_reraised(self):
         # NOTE(imelnikov): if we put NastyTask and FailingTask
@@ -291,12 +307,12 @@ class EngineParallelFlowTest(utils.EngineTestBase):

     def test_sequential_flow_two_tasks_with_resumption(self):
         flow = lf.Flow('lf-2-r').add(
-            utils.SaveOrderTask(name='task1', provides='x1'),
-            utils.SaveOrderTask(name='task2', provides='x2')
+            utils.ProgressingTask(name='task1', provides='x1'),
+            utils.ProgressingTask(name='task2', provides='x2')
         )

         # Create FlowDetail as if we already run task1
-        _lb, fd = p_utils.temporary_flow_detail(self.backend)
+        lb, fd = p_utils.temporary_flow_detail(self.backend)
         td = logbook.TaskDetail(name='task1', uuid='42')
         td.state = states.SUCCESS
         td.results = 17
@@ -307,8 +323,10 @@ class EngineParallelFlowTest(utils.EngineTestBase):
         td.update(conn.update_atom_details(td))

         engine = self._make_engine(flow, fd)
-        engine.run()
-        self.assertEqual(self.values, ['task2'])
+        with utils.CaptureListener(engine, capture_flow=False) as capturer:
+            engine.run()
+        expected = ['task2.t RUNNING', 'task2.t SUCCESS(5)']
+        self.assertEqual(expected, capturer.values)
         self.assertEqual(engine.storage.fetch_all(),
                          {'x1': 17, 'x2': 5})


@@ -317,86 +335,98 @@ class EngineLinearAndUnorderedExceptionsTest(utils.EngineTestBase):

     def test_revert_ok_for_unordered_in_linear(self):
         flow = lf.Flow('p-root').add(
-            utils.SaveOrderTask(name='task1'),
-            utils.SaveOrderTask(name='task2'),
+            utils.ProgressingTask(name='task1'),
+            utils.ProgressingTask(name='task2'),
             uf.Flow('p-inner').add(
-                utils.SaveOrderTask(name='task3'),
+                utils.ProgressingTask(name='task3'),
                 utils.FailingTask('fail')
             )
         )
         engine = self._make_engine(flow)
-        self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run)
+        with utils.CaptureListener(engine, capture_flow=False) as capturer:
+            self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run)
         # NOTE(imelnikov): we don't know if task 3 was run, but if it was,
         # it should have been reverted in correct order.
         possible_values_no_task3 = [
-            'task1', 'task2',
-            'fail reverted(Failure: RuntimeError: Woot!)',
-            'task2 reverted(5)', 'task1 reverted(5)'
+            'task1.t RUNNING', 'task2.t RUNNING',
+            'fail.t FAILURE(Failure: RuntimeError: Woot!)',
+            'task2.t REVERTED', 'task1.t REVERTED'
         ]
-        self.assertIsSuperAndSubsequence(self.values,
+        self.assertIsSuperAndSubsequence(capturer.values,
                                          possible_values_no_task3)
-        if 'task3' in self.values:
+        if 'task3' in capturer.values:
             possible_values_task3 = [
-                'task1', 'task2', 'task3',
-                'task3 reverted(5)', 'task2 reverted(5)', 'task1 reverted(5)'
+                'task1.t RUNNING', 'task2.t RUNNING', 'task3.t RUNNING',
+                'task3.t REVERTED', 'task2.t REVERTED', 'task1.t REVERTED'
             ]
-            self.assertIsSuperAndSubsequence(self.values,
+            self.assertIsSuperAndSubsequence(capturer.values,
                                              possible_values_task3)

     def test_revert_raises_for_unordered_in_linear(self):
         flow = lf.Flow('p-root').add(
-            utils.SaveOrderTask(name='task1'),
-            utils.SaveOrderTask(name='task2'),
+            utils.ProgressingTask(name='task1'),
+            utils.ProgressingTask(name='task2'),
             uf.Flow('p-inner').add(
-                utils.SaveOrderTask(name='task3'),
-                utils.NastyFailingTask()
+                utils.ProgressingTask(name='task3'),
+                utils.NastyFailingTask(name='nasty')
             )
         )
         engine = self._make_engine(flow)
-        self.assertFailuresRegexp(RuntimeError, '^Gotcha', engine.run)
+        with utils.CaptureListener(engine,
+                                   capture_flow=False,
+                                   skip_tasks=['nasty']) as capturer:
+            self.assertFailuresRegexp(RuntimeError, '^Gotcha', engine.run)
         # NOTE(imelnikov): we don't know if task 3 was run, but if it was,
         # it should have been reverted in correct order.
-        possible_values = ['task1', 'task2', 'task3',
-                           'task3 reverted(5)']
-        self.assertIsSuperAndSubsequence(possible_values, self.values)
-        possible_values_no_task3 = ['task1', 'task2']
-        self.assertIsSuperAndSubsequence(self.values,
+        possible_values = ['task1.t RUNNING', 'task1.t SUCCESS(5)',
+                           'task2.t RUNNING', 'task2.t SUCCESS(5)',
+                           'task3.t RUNNING', 'task3.t SUCCESS(5)',
+                           'task3.t REVERTING',
+                           'task3.t REVERTED']
+        self.assertIsSuperAndSubsequence(possible_values, capturer.values)
+        possible_values_no_task3 = ['task1.t RUNNING', 'task2.t RUNNING']
+        self.assertIsSuperAndSubsequence(capturer.values,
                                          possible_values_no_task3)

     def test_revert_ok_for_linear_in_unordered(self):
         flow = uf.Flow('p-root').add(
-            utils.SaveOrderTask(name='task1'),
+            utils.ProgressingTask(name='task1'),
             lf.Flow('p-inner').add(
-                utils.SaveOrderTask(name='task2'),
+                utils.ProgressingTask(name='task2'),
                 utils.FailingTask('fail')
             )
         )
         engine = self._make_engine(flow)
-        self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run)
-        self.assertIn('fail reverted(Failure: RuntimeError: Woot!)',
-                      self.values)
+        with utils.CaptureListener(engine, capture_flow=False) as capturer:
+            self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run)
+        self.assertIn('fail.t FAILURE(Failure: RuntimeError: Woot!)',
+                      capturer.values)
         # NOTE(imelnikov): if task1 was run, it should have been reverted.
-        if 'task1' in self.values:
-            task1_story = ['task1', 'task1 reverted(5)']
-            self.assertIsSuperAndSubsequence(self.values, task1_story)
+        if 'task1' in capturer.values:
+            task1_story = ['task1.t RUNNING', 'task1.t SUCCESS(5)',
+                           'task1.t REVERTED']
+            self.assertIsSuperAndSubsequence(capturer.values, task1_story)
+
         # NOTE(imelnikov): task2 should have been run and reverted
-        task2_story = ['task2', 'task2 reverted(5)']
-        self.assertIsSuperAndSubsequence(self.values, task2_story)
+        task2_story = ['task2.t RUNNING', 'task2.t SUCCESS(5)',
+                       'task2.t REVERTED']
+        self.assertIsSuperAndSubsequence(capturer.values, task2_story)

     def test_revert_raises_for_linear_in_unordered(self):
         flow = uf.Flow('p-root').add(
-            utils.SaveOrderTask(name='task1'),
+            utils.ProgressingTask(name='task1'),
             lf.Flow('p-inner').add(
-                utils.SaveOrderTask(name='task2'),
+                utils.ProgressingTask(name='task2'),
                 utils.NastyFailingTask()
             )
         )
         engine = self._make_engine(flow)
-        self.assertFailuresRegexp(RuntimeError, '^Gotcha', engine.run)
-        self.assertNotIn('task2 reverted(5)', self.values)
+        with utils.CaptureListener(engine, capture_flow=False) as capturer:
+            self.assertFailuresRegexp(RuntimeError, '^Gotcha', engine.run)
+        self.assertNotIn('task2.t REVERTED', capturer.values)


 class EngineGraphFlowTest(utils.EngineTestBase):
@@ -414,66 +444,90 @@ class EngineGraphFlowTest(utils.EngineTestBase):

     def test_graph_flow_one_task(self):
         flow = gf.Flow('g-1').add(
-            utils.SaveOrderTask(name='task1')
+            utils.ProgressingTask(name='task1')
         )
-        self._make_engine(flow).run()
-        self.assertEqual(self.values, ['task1'])
+        engine = self._make_engine(flow)
+        with utils.CaptureListener(engine, capture_flow=False) as capturer:
+            engine.run()
+        expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)']
+        self.assertEqual(expected, capturer.values)

     def test_graph_flow_two_independent_tasks(self):
         flow = gf.Flow('g-2').add(
-            utils.SaveOrderTask(name='task1'),
-            utils.SaveOrderTask(name='task2')
+            utils.ProgressingTask(name='task1'),
+            utils.ProgressingTask(name='task2')
         )
-        self._make_engine(flow).run()
-        self.assertEqual(set(self.values), set(['task1', 'task2']))
+        engine = self._make_engine(flow)
+        with utils.CaptureListener(engine, capture_flow=False) as capturer:
+            engine.run()
+        expected = set(['task2.t SUCCESS(5)', 'task2.t RUNNING',
+                        'task1.t RUNNING', 'task1.t SUCCESS(5)'])
+        self.assertEqual(expected, set(capturer.values))
         self.assertEqual(len(flow), 2)

     def test_graph_flow_two_tasks(self):
         flow = gf.Flow('g-1-1').add(
-            utils.SaveOrderTask(name='task2', requires=['a']),
-            utils.SaveOrderTask(name='task1', provides='a')
+            utils.ProgressingTask(name='task2', requires=['a']),
+            utils.ProgressingTask(name='task1', provides='a')
         )
-        self._make_engine(flow).run()
-        self.assertEqual(self.values, ['task1', 'task2'])
+        engine = self._make_engine(flow)
+        with utils.CaptureListener(engine, capture_flow=False) as capturer:
+            engine.run()
+        expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)',
+                    'task2.t RUNNING', 'task2.t SUCCESS(5)']
+        self.assertEqual(expected, capturer.values)

     def test_graph_flow_four_tasks_added_separately(self):
         flow = (gf.Flow('g-4')
-                .add(utils.SaveOrderTask(name='task4',
-                                         provides='d', requires=['c']))
-                .add(utils.SaveOrderTask(name='task2',
-                                         provides='b', requires=['a']))
-                .add(utils.SaveOrderTask(name='task3',
-                                         provides='c', requires=['b']))
-                .add(utils.SaveOrderTask(name='task1',
-                                         provides='a'))
+                .add(utils.ProgressingTask(name='task4',
+                                           provides='d', requires=['c']))
+                .add(utils.ProgressingTask(name='task2',
+                                           provides='b', requires=['a']))
+                .add(utils.ProgressingTask(name='task3',
+                                           provides='c', requires=['b']))
+                .add(utils.ProgressingTask(name='task1',
+                                           provides='a'))
                 )
-        self._make_engine(flow).run()
-        self.assertEqual(self.values, ['task1', 'task2', 'task3', 'task4'])
+        engine = self._make_engine(flow)
+        with utils.CaptureListener(engine, capture_flow=False) as capturer:
+            engine.run()
+        expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)',
+                    'task2.t RUNNING', 'task2.t SUCCESS(5)',
+                    'task3.t RUNNING', 'task3.t SUCCESS(5)',
+                    'task4.t RUNNING', 'task4.t SUCCESS(5)']
+        self.assertEqual(expected, capturer.values)

     def test_graph_flow_four_tasks_revert(self):
         flow = gf.Flow('g-4-failing').add(
-            utils.SaveOrderTask(name='task4',
-                                provides='d', requires=['c']),
-            utils.SaveOrderTask(name='task2',
-                                provides='b', requires=['a']),
+            utils.ProgressingTask(name='task4',
+                                  provides='d', requires=['c']),
+            utils.ProgressingTask(name='task2',
+                                  provides='b', requires=['a']),
             utils.FailingTask(name='task3', provides='c', requires=['b']),
-            utils.SaveOrderTask(name='task1', provides='a'))
+            utils.ProgressingTask(name='task1', provides='a'))

         engine = self._make_engine(flow)
-        self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run)
-        self.assertEqual(
-            self.values,
-            ['task1', 'task2',
-             'task3 reverted(Failure: RuntimeError: Woot!)',
-             'task2 reverted(5)', 'task1 reverted(5)'])
+        with utils.CaptureListener(engine, capture_flow=False) as capturer:
+            self.assertFailuresRegexp(RuntimeError, '^Woot', engine.run)
+        expected = ['task1.t RUNNING', 'task1.t SUCCESS(5)',
+                    'task2.t RUNNING', 'task2.t SUCCESS(5)',
+                    'task3.t RUNNING',
+                    'task3.t FAILURE(Failure: RuntimeError: Woot!)',
+                    'task3.t REVERTING',
+                    'task3.t REVERTED',
+                    'task2.t REVERTING',
+                    'task2.t REVERTED',
+                    'task1.t REVERTING',
+                    'task1.t REVERTED']
+        self.assertEqual(expected, capturer.values)
         self.assertEqual(engine.storage.get_flow_state(), states.REVERTED)

     def test_graph_flow_four_tasks_revert_failure(self):
         flow = gf.Flow('g-3-nasty').add(
             utils.NastyTask(name='task2', provides='b', requires=['a']),
             utils.FailingTask(name='task3', requires=['b']),
-            utils.SaveOrderTask(name='task1', provides='a'))
+            utils.ProgressingTask(name='task1', provides='a'))

         engine = self._make_engine(flow)
         self.assertFailuresRegexp(RuntimeError, '^Gotcha', engine.run)
@@ -519,6 +573,9 @@ class EngineGraphFlowTest(utils.EngineTestBase):
 class EngineCheckingTaskTest(utils.EngineTestBase):
+    # FIXME: this test uses an inner class that workers/process engines can't
+    # get to, so we need to do something better to make this test useful for
+    # those engines...

     def test_flow_failures_are_passed_to_revert(self):

         class CheckingTask(task.Task):
@@ -529,7 +586,7 @@ class EngineCheckingTaskTest(utils.EngineTestBase):
                 self.assertEqual(result, 'RESULT')
                 self.assertEqual(list(flow_failures.keys()), ['fail1'])
                 fail = flow_failures['fail1']
-                self.assertIsInstance(fail, misc.Failure)
+                self.assertIsInstance(fail, failure.Failure)
                 self.assertEqual(str(fail),
                                  'Failure: RuntimeError: Woot!')

         flow = lf.Flow('test').add(
@@ -540,54 +597,57 @@ class EngineCheckingTaskTest(utils.EngineTestBase):
         self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run)


-class SingleThreadedEngineTest(EngineTaskTest,
-                               EngineLinearFlowTest,
-                               EngineParallelFlowTest,
-                               EngineLinearAndUnorderedExceptionsTest,
-                               EngineGraphFlowTest,
-                               EngineCheckingTaskTest,
-                               test.TestCase):
+class SerialEngineTest(EngineTaskTest,
+                       EngineLinearFlowTest,
+                       EngineParallelFlowTest,
+                       EngineLinearAndUnorderedExceptionsTest,
+                       EngineGraphFlowTest,
+                       EngineCheckingTaskTest,
+                       test.TestCase):
     def _make_engine(self, flow, flow_detail=None):
         return taskflow.engines.load(flow, flow_detail=flow_detail,
-                                     engine_conf='serial',
+                                     engine='serial',
                                      backend=self.backend)

     def test_correct_load(self):
         engine = self._make_engine(utils.TaskNoRequiresNoReturns)
-        self.assertIsInstance(engine, eng.SingleThreadedActionEngine)
+        self.assertIsInstance(engine, eng.SerialActionEngine)

     def test_singlethreaded_is_the_default(self):
         engine = taskflow.engines.load(utils.TaskNoRequiresNoReturns)
-        self.assertIsInstance(engine, eng.SingleThreadedActionEngine)
+        self.assertIsInstance(engine, eng.SerialActionEngine)


-class MultiThreadedEngineTest(EngineTaskTest,
-                              EngineLinearFlowTest,
-                              EngineParallelFlowTest,
-                              EngineLinearAndUnorderedExceptionsTest,
-                              EngineGraphFlowTest,
-                              EngineCheckingTaskTest,
-                              test.TestCase):
+class ParallelEngineWithThreadsTest(EngineTaskTest,
+                                    EngineLinearFlowTest,
+                                    EngineParallelFlowTest,
+                                    EngineLinearAndUnorderedExceptionsTest,
+                                    EngineGraphFlowTest,
+                                    EngineCheckingTaskTest,
+                                    test.TestCase):
+    _EXECUTOR_WORKERS = 2
+
     def _make_engine(self, flow, flow_detail=None, executor=None):
-        engine_conf = dict(engine='parallel')
+        if executor is None:
+            executor = 'threads'
         return taskflow.engines.load(flow, flow_detail=flow_detail,
-                                     engine_conf=engine_conf,
                                      backend=self.backend,
-                                     executor=executor)
+                                     executor=executor,
+                                     engine='parallel',
+                                     max_workers=self._EXECUTOR_WORKERS)

     def test_correct_load(self):
         engine = self._make_engine(utils.TaskNoRequiresNoReturns)
-        self.assertIsInstance(engine, eng.MultiThreadedActionEngine)
-        self.assertIs(engine._executor, None)
+        self.assertIsInstance(engine, eng.ParallelActionEngine)

     def test_using_common_executor(self):
         flow = utils.TaskNoRequiresNoReturns(name='task1')
-        executor = futures.ThreadPoolExecutor(2)
+        executor = futures.ThreadPoolExecutor(self._EXECUTOR_WORKERS)
         try:
             e1 = self._make_engine(flow, executor=executor)
             e2 = self._make_engine(flow, executor=executor)
-            self.assertIs(e1._executor, e2._executor)
+            self.assertIs(e1.options['executor'], e2.options['executor'])
         finally:
             executor.shutdown(wait=True)
@@ -603,12 +663,33 @@ class ParallelEngineWithEventletTest(EngineTaskTest,

     def _make_engine(self, flow, flow_detail=None, executor=None):
         if executor is None:
-            executor = eu.GreenExecutor()
-        engine_conf = dict(engine='parallel',
-                           executor=executor)
+            executor = futures.GreenThreadPoolExecutor()
+            self.addCleanup(executor.shutdown)
         return taskflow.engines.load(flow, flow_detail=flow_detail,
-                                     engine_conf=engine_conf,
-                                     backend=self.backend)
+                                     backend=self.backend, engine='parallel',
+                                     executor=executor)
+
+
+class ParallelEngineWithProcessTest(EngineTaskTest,
+                                    EngineLinearFlowTest,
+                                    EngineParallelFlowTest,
+                                    EngineLinearAndUnorderedExceptionsTest,
+                                    EngineGraphFlowTest,
+                                    test.TestCase):
+    _EXECUTOR_WORKERS = 2
+
+    def test_correct_load(self):
+        engine = self._make_engine(utils.TaskNoRequiresNoReturns)
+        self.assertIsInstance(engine, eng.ParallelActionEngine)
+
+    def _make_engine(self, flow, flow_detail=None, executor=None):
+        if executor is None:
+            executor = 'processes'
+        return taskflow.engines.load(flow, flow_detail=flow_detail,
+                                     backend=self.backend,
+                                     engine='parallel',
+                                     executor=executor,
+                                     max_workers=self._EXECUTOR_WORKERS)


 class WorkerBasedEngineTest(EngineTaskTest,
@@ -617,28 +698,41 @@ class WorkerBasedEngineTest(EngineTaskTest,
                             EngineLinearAndUnorderedExceptionsTest,
                             EngineGraphFlowTest,
                             test.TestCase):
-
     def setUp(self):
         super(WorkerBasedEngineTest, self).setUp()
-        self.exchange = 'test'
-        self.topic = 'topic'
-        self.transport = 'memory'
-        worker_conf = {
-            'exchange': self.exchange,
-            'topic': self.topic,
-            'tasks': [
-                'taskflow.tests.utils',
-            ],
-            'transport': self.transport,
+        shared_conf = {
+            'exchange': 'test',
+            'transport': 'memory',
             'transport_options': {
-                'polling_interval': 0.01
-            }
+                # NOTE(imelnikov): I run tests several times for different
+                # intervals. Reducing polling interval below 0.01 did not give
+                # considerable win in tests run time; reducing polling interval
+                # too much (AFAIR below 0.0005) affected stability -- I was
+                # seeing timeouts. So, 0.01 looks like the most balanced for
+                # local transports (for now).
+                'polling_interval': 0.01,
+            },
         }
+        worker_conf = shared_conf.copy()
+        worker_conf.update({
+            'topic': 'my-topic',
+            'tasks': [
+                # This makes it possible for the worker to run/find any atoms
+                # that are defined in the test.utils module (which are all
+                # the task/atom types that this test uses)...
+                utils.__name__,
+            ],
+        })
+        self.engine_conf = shared_conf.copy()
+        self.engine_conf.update({
+            'engine': 'worker-based',
+            'topics': tuple([worker_conf['topic']]),
+        })
         self.worker = wkr.Worker(**worker_conf)
-        self.worker_thread = threading.Thread(target=self.worker.run)
-        self.worker_thread.daemon = True
+        self.worker_thread = tu.daemon_thread(self.worker.run)
         self.worker_thread.start()
-        # make sure worker is started before we can continue
+
+        # Make sure the worker is started before we can continue...
         self.worker.wait()

     def tearDown(self):
@@ -647,15 +741,8 @@ class WorkerBasedEngineTest(EngineTaskTest,
         super(WorkerBasedEngineTest, self).tearDown()

     def _make_engine(self, flow, flow_detail=None):
-        engine_conf = {
-            'engine': 'worker-based',
-            'exchange': self.exchange,
-            'topics': [self.topic],
-            'transport': self.transport,
-        }
         return taskflow.engines.load(flow, flow_detail=flow_detail,
-                                     engine_conf=engine_conf,
-                                     backend=self.backend)
+                                     backend=self.backend, **self.engine_conf)

     def test_correct_load(self):
         engine = self._make_engine(utils.TaskNoRequiresNoReturns)
diff --git a/taskflow/tests/unit/test_utils_failure.py b/taskflow/tests/unit/test_failure.py
similarity index 70%
rename from taskflow/tests/unit/test_utils_failure.py
rename to taskflow/tests/unit/test_failure.py
index 4958da62..793274e1 100644
--- a/taskflow/tests/unit/test_utils_failure.py
+++ b/taskflow/tests/unit/test_failure.py
@@ -14,19 +14,28 @@
 # License for the specific language governing permissions and limitations
 # under the License.
+import sys
+
 import six

 from taskflow import exceptions
 from taskflow import test
 from taskflow.tests import utils as test_utils
-from taskflow.utils import misc
+from taskflow.types import failure


 def _captured_failure(msg):
-    try:
-        raise RuntimeError(msg)
-    except Exception:
-        return misc.Failure()
+    try:
+        raise RuntimeError(msg)
+    except Exception:
+        return failure.Failure()
+
+
+def _make_exc_info(msg):
+    try:
+        raise RuntimeError(msg)
+    except Exception:
+        return sys.exc_info()


 class GeneralFailureObjTestsMixin(object):
@@ -85,9 +94,9 @@ class ReCreatedFailureTestCase(test.TestCase, GeneralFailureObjTestsMixin):
     def setUp(self):
         super(ReCreatedFailureTestCase, self).setUp()
         fail_obj = _captured_failure('Woot!')
-        self.fail_obj = misc.Failure(exception_str=fail_obj.exception_str,
-                                     traceback_str=fail_obj.traceback_str,
-                                     exc_type_names=list(fail_obj))
+        self.fail_obj = failure.Failure(exception_str=fail_obj.exception_str,
+                                        traceback_str=fail_obj.traceback_str,
+                                        exc_type_names=list(fail_obj))

     def test_value_lost(self):
         self.assertIs(self.fail_obj.exception, None)
@@ -104,12 +113,20 @@ class ReCreatedFailureTestCase(test.TestCase, GeneralFailureObjTestsMixin):
                                 self.fail_obj.reraise)
         self.assertIs(exc.check(RuntimeError), RuntimeError)

+    def test_no_type_names(self):
+        fail_obj = _captured_failure('Woot!')
+        fail_obj = failure.Failure(exception_str=fail_obj.exception_str,
+                                   traceback_str=fail_obj.traceback_str,
+                                   exc_type_names=[])
+        self.assertEqual([], list(fail_obj))
+        self.assertEqual("Failure: Woot!", fail_obj.pformat())
+

 class FromExceptionTestCase(test.TestCase, GeneralFailureObjTestsMixin):

     def setUp(self):
         super(FromExceptionTestCase, self).setUp()
-        self.fail_obj = misc.Failure.from_exception(RuntimeError('Woot!'))
+        self.fail_obj = failure.Failure.from_exception(RuntimeError('Woot!'))

     def test_pformat_no_traceback(self):
         text = self.fail_obj.pformat(traceback=True)
@@ -122,10 +139,10 @@ class FailureObjectTestCase(test.TestCase):
         try:
             raise SystemExit()
         except BaseException:
-            self.assertRaises(TypeError, misc.Failure)
+            self.assertRaises(TypeError, failure.Failure)

     def test_unknown_argument(self):
-        exc = self.assertRaises(TypeError, misc.Failure,
+        exc = self.assertRaises(TypeError, failure.Failure,
                                 exception_str='Woot!',
                                 traceback_str=None,
                                 exc_type_names=['Exception'],
@@ -134,12 +151,12 @@ class FailureObjectTestCase(test.TestCase):
         self.assertEqual(str(exc), expected)

     def test_empty_does_not_reraise(self):
-        self.assertIs(misc.Failure.reraise_if_any([]), None)
+        self.assertIs(failure.Failure.reraise_if_any([]), None)

     def test_reraises_one(self):
         fls = [_captured_failure('Woot!')]
         self.assertRaisesRegexp(RuntimeError, '^Woot!$',
-                                misc.Failure.reraise_if_any, fls)
+                                failure.Failure.reraise_if_any, fls)

     def test_reraises_several(self):
         fls = [
@@ -147,7 +164,7 @@ class FailureObjectTestCase(test.TestCase):
             _captured_failure('Oh, not again!')
         ]
         exc = self.assertRaises(exceptions.WrappedFailure,
-                                misc.Failure.reraise_if_any, fls)
+                                failure.Failure.reraise_if_any, fls)
         self.assertEqual(list(exc), fls)

     def test_failure_copy(self):
@@ -160,9 +177,9 @@ class FailureObjectTestCase(test.TestCase):

     def test_failure_copy_recaptured(self):
         captured = _captured_failure('Woot!')
-        fail_obj = misc.Failure(exception_str=captured.exception_str,
-                                traceback_str=captured.traceback_str,
-                                exc_type_names=list(captured))
+        fail_obj = failure.Failure(exception_str=captured.exception_str,
+                                   traceback_str=captured.traceback_str,
+                                   exc_type_names=list(captured))
         copied = fail_obj.copy()
         self.assertIsNot(fail_obj, copied)
         self.assertEqual(fail_obj, copied)
@@ -171,9 +188,9 @@ class FailureObjectTestCase(test.TestCase):

     def test_recaptured_not_eq(self):
         captured = _captured_failure('Woot!')
-        fail_obj = misc.Failure(exception_str=captured.exception_str,
-                                traceback_str=captured.traceback_str,
-                                exc_type_names=list(captured))
+        fail_obj = failure.Failure(exception_str=captured.exception_str,
+                                   traceback_str=captured.traceback_str,
+                                   exc_type_names=list(captured))
         self.assertFalse(fail_obj == captured)
         self.assertTrue(fail_obj != captured)
         self.assertTrue(fail_obj.matches(captured))
@@ -185,13 +202,13 @@ class FailureObjectTestCase(test.TestCase):

     def test_two_recaptured_neq(self):
         captured = _captured_failure('Woot!')
-        fail_obj = misc.Failure(exception_str=captured.exception_str,
-                                traceback_str=captured.traceback_str,
-                                exc_type_names=list(captured))
+        fail_obj = failure.Failure(exception_str=captured.exception_str,
+                                   traceback_str=captured.traceback_str,
+                                   exc_type_names=list(captured))
         new_exc_str = captured.exception_str.replace('Woot', 'w00t')
-        fail_obj2 = misc.Failure(exception_str=new_exc_str,
-                                 traceback_str=captured.traceback_str,
-                                 exc_type_names=list(captured))
+        fail_obj2 = failure.Failure(exception_str=new_exc_str,
+                                    traceback_str=captured.traceback_str,
+                                    exc_type_names=list(captured))
         self.assertNotEqual(fail_obj, fail_obj2)
         self.assertFalse(fail_obj2.matches(fail_obj))
@@ -207,7 +224,7 @@ class FailureObjectTestCase(test.TestCase):

     def test_pformat_traceback_captured_no_exc_info(self):
         captured = _captured_failure('Woot!')
-        captured = misc.Failure.from_dict(captured.to_dict())
+        captured = failure.Failure.from_dict(captured.to_dict())
         text = captured.pformat(traceback=True)
         self.assertIn("Traceback (most recent call last):", text)
@@ -242,7 +259,7 @@ class WrappedFailureTestCase(test.TestCase):
         try:
             raise exceptions.WrappedFailure([f1, f2])
         except Exception:
-            fail_obj = misc.Failure()
+            fail_obj = failure.Failure()

         wf = exceptions.WrappedFailure([fail_obj, f3])
         self.assertEqual(list(wf), [f1, f2, f3])
@@ -252,13 +269,13 @@ class NonAsciiExceptionsTestCase(test.TestCase):

     def test_exception_with_non_ascii_str(self):
         bad_string = chr(200)
-        fail = misc.Failure.from_exception(ValueError(bad_string))
+        fail = failure.Failure.from_exception(ValueError(bad_string))
         self.assertEqual(fail.exception_str, bad_string)
         self.assertEqual(str(fail),
                          'Failure: ValueError: %s' % bad_string)

     def test_exception_non_ascii_unicode(self):
         hi_ru = u'привет'
-        fail = misc.Failure.from_exception(ValueError(hi_ru))
+        fail = failure.Failure.from_exception(ValueError(hi_ru))
         self.assertEqual(fail.exception_str, hi_ru)
         self.assertIsInstance(fail.exception_str, six.text_type)
         self.assertEqual(six.text_type(fail),
@@ -268,7 +285,7 @@ class NonAsciiExceptionsTestCase(test.TestCase):
         hi_cn = u'嗨'
         fail = ValueError(hi_cn)
         self.assertEqual(hi_cn, exceptions.exception_message(fail))
-        fail = misc.Failure.from_exception(fail)
+        fail = failure.Failure.from_exception(fail)
         wrapped_fail = exceptions.WrappedFailure([fail])
         if six.PY2:
             # Python 2.x will unicode escape it, while python 3.3+ will not,
@@ -283,12 +300,46 @@ class NonAsciiExceptionsTestCase(test.TestCase):

     def test_failure_equality_with_non_ascii_str(self):
         bad_string = chr(200)
-        fail = misc.Failure.from_exception(ValueError(bad_string))
+        fail = failure.Failure.from_exception(ValueError(bad_string))
         copied = fail.copy()
         self.assertEqual(fail, copied)

     def test_failure_equality_non_ascii_unicode(self):
         hi_ru = u'привет'
-        fail = misc.Failure.from_exception(ValueError(hi_ru))
+        fail = failure.Failure.from_exception(ValueError(hi_ru))
         copied = fail.copy()
         self.assertEqual(fail, copied)
+
+
+class ExcInfoUtilsTest(test.TestCase):
+    def test_copy_none(self):
+        result = failure._copy_exc_info(None)
+        self.assertIsNone(result)
+
+    def test_copy_exc_info(self):
+        exc_info = _make_exc_info("Woot!")
+        result = failure._copy_exc_info(exc_info)
+        self.assertIsNot(result, exc_info)
+        self.assertIs(result[0], RuntimeError)
+        self.assertIsNot(result[1], exc_info[1])
+        self.assertIs(result[2], exc_info[2])
+
+    def test_none_equals(self):
+        self.assertTrue(failure._are_equal_exc_info_tuples(None, None))
+
+    def test_none_ne_tuple(self):
+        exc_info = _make_exc_info("Woot!")
+        self.assertFalse(failure._are_equal_exc_info_tuples(None, exc_info))
+
+    def test_tuple_ne_none(self):
+        exc_info = _make_exc_info("Woot!")
+        self.assertFalse(failure._are_equal_exc_info_tuples(exc_info, None))
+
+    def test_tuple_equals_itself(self):
+        exc_info = _make_exc_info("Woot!")
+        self.assertTrue(failure._are_equal_exc_info_tuples(exc_info, exc_info))
+
+    def test_tuple_equals_copy(self):
+        exc_info = _make_exc_info("Woot!")
+        copied = failure._copy_exc_info(exc_info)
+        self.assertTrue(failure._are_equal_exc_info_tuples(exc_info, copied))
diff --git a/taskflow/tests/unit/test_flow_dependencies.py b/taskflow/tests/unit/test_flow_dependencies.py
index 3ddb95d9..69f4a8fe 100644
--- a/taskflow/tests/unit/test_flow_dependencies.py
+++ b/taskflow/tests/unit/test_flow_dependencies.py
@@ -130,24 +130,27 @@ class FlowDependenciesTest(test.TestCase):

     def test_unordered_flow_provides_required_values(self):
         flow = uf.Flow('uf')
-        self.assertRaises(exceptions.DependencyFailure,
-                          flow.add,
-                          utils.TaskOneReturn('task1', provides='x'),
-                          utils.TaskOneArg('task2'))
+        flow.add(utils.TaskOneReturn('task1', provides='x'),
+                 utils.TaskOneArg('task2'))
+        flow.add(utils.TaskOneReturn('task1', provides='x'),
+                 utils.TaskOneArg('task2'))
+        self.assertEqual(set(['x']), flow.provides)
+        self.assertEqual(set(['x']), flow.requires)

     def test_unordered_flow_requires_provided_value_other_call(self):
         flow = uf.Flow('uf')
         flow.add(utils.TaskOneReturn('task1', provides='x'))
-        self.assertRaises(exceptions.DependencyFailure,
-                          flow.add,
-                          utils.TaskOneArg('task2'))
+        flow.add(utils.TaskOneArg('task2'))
+        self.assertEqual(set(['x']), flow.provides)
+        self.assertEqual(set(['x']), flow.requires)

     def test_unordered_flow_provides_required_value_other_call(self):
         flow = uf.Flow('uf')
         flow.add(utils.TaskOneArg('task2'))
-        self.assertRaises(exceptions.DependencyFailure,
-                          flow.add,
-                          utils.TaskOneReturn('task1', provides='x'))
+        flow.add(utils.TaskOneReturn('task1', provides='x'))
+        self.assertEqual(2, len(flow))
+        self.assertEqual(set(['x']), flow.provides)
+        self.assertEqual(set(['x']), flow.requires)

     def test_unordered_flow_multi_provides_and_requires_values(self):
         flow = uf.Flow('uf').add(
@@ -161,16 +164,14 @@ class FlowDependenciesTest(test.TestCase):

     def test_unordered_flow_provides_same_values(self):
         flow = uf.Flow('uf').add(utils.TaskOneReturn(provides='x'))
-        self.assertRaises(exceptions.DependencyFailure,
-                          flow.add,
-                          utils.TaskOneReturn(provides='x'))
+        flow.add(utils.TaskOneReturn(provides='x'))
+        self.assertEqual(set(['x']), flow.provides)

     def test_unordered_flow_provides_same_values_one_add(self):
         flow = uf.Flow('uf')
-        self.assertRaises(exceptions.DependencyFailure,
-                          flow.add,
-                          utils.TaskOneReturn(provides='x'),
-                          utils.TaskOneReturn(provides='x'))
+        flow.add(utils.TaskOneReturn(provides='x'),
+                 utils.TaskOneReturn(provides='x'))
+        self.assertEqual(set(['x']), flow.provides)

     def test_nested_flows_requirements(self):
         flow = uf.Flow('uf').add(
@@ -186,13 +187,6 @@ class FlowDependenciesTest(test.TestCase):
         self.assertEqual(flow.requires, set(['a', 'b', 'c']))
         self.assertEqual(flow.provides, set(['x', 'y', 'z', 'q']))

-    def test_nested_flows_provides_same_values(self):
-        flow = lf.Flow('lf').add(
-            uf.Flow('uf').add(utils.TaskOneReturn(provides='x')))
-        self.assertRaises(exceptions.DependencyFailure,
-                          flow.add,
-                          gf.Flow('gf').add(utils.TaskOneReturn(provides='x')))
-
     def test_graph_flow_requires_values(self):
         flow = gf.Flow('gf').add(
             utils.TaskOneArg('task1'),
@@ -224,9 +218,8 @@ class FlowDependenciesTest(test.TestCase):

     def test_graph_flow_provides_provided_value_other_call(self):
         flow = gf.Flow('gf')
         flow.add(utils.TaskOneReturn('task1', provides='x'))
-        self.assertRaises(exceptions.DependencyFailure,
-                          flow.add,
-                          utils.TaskOneReturn('task2', provides='x'))
+        flow.add(utils.TaskOneReturn('task2', provides='x'))
+        self.assertEqual(set(['x']), flow.provides)

     def test_graph_flow_multi_provides_and_requires_values(self):
         flow = gf.Flow('gf').add(
@@ -336,18 +329,6 @@ class FlowDependenciesTest(test.TestCase):
         self.assertEqual(flow.requires, set(['x', 'y', 'c']))
         self.assertEqual(flow.provides, set(['a', 'b', 'z']))

-    def test_linear_flow_retry_and_task_dependency_conflict(self):
-        flow = lf.Flow('lf', retry.AlwaysRevert('rt', requires=['x']))
-        self.assertRaises(exceptions.DependencyFailure,
-                          flow.add,
-                          utils.TaskOneReturn(provides=['x']))
-
-    def test_linear_flow_retry_and_task_provide_same_value(self):
-        flow = lf.Flow('lf', retry.AlwaysRevert('rt', provides=['x']))
-        self.assertRaises(exceptions.DependencyFailure,
-                          flow.add,
-                          utils.TaskOneReturn('t1', provides=['x']))
-
     def test_unordered_flow_retry_and_task(self):
         flow = uf.Flow('uf', retry.AlwaysRevert('rt',
                                                 requires=['x', 'y'],
@@ -358,24 +339,22 @@ class FlowDependenciesTest(test.TestCase):
         self.assertEqual(flow.requires, set(['x', 'y', 'c']))
         self.assertEqual(flow.provides, set(['a', 'b', 'z']))

-    def test_unordered_flow_retry_and_task_dependency_conflict(self):
+    def test_unordered_flow_retry_and_task_same_requires_provides(self):
         flow = uf.Flow('uf', retry.AlwaysRevert('rt', requires=['x']))
-        self.assertRaises(exceptions.DependencyFailure,
-                          flow.add,
-                          utils.TaskOneReturn(provides=['x']))
+        flow.add(utils.TaskOneReturn(provides=['x']))
+        self.assertEqual(set(['x']), flow.requires)
+        self.assertEqual(set(['x']), flow.provides)

     def test_unordered_flow_retry_and_task_provide_same_value(self):
         flow = uf.Flow('uf', retry.AlwaysRevert('rt', provides=['x']))
-        self.assertRaises(exceptions.DependencyFailure,
-                          flow.add,
-                          utils.TaskOneReturn('t1', provides=['x']))
+        flow.add(utils.TaskOneReturn('t1', provides=['x']))
+        self.assertEqual(set(['x']), flow.provides)

     def test_unordered_flow_retry_two_tasks_provide_same_value(self):
         flow = uf.Flow('uf', retry.AlwaysRevert('rt', provides=['y']))
-        self.assertRaises(exceptions.DependencyFailure,
-                          flow.add,
-                          utils.TaskOneReturn('t1', provides=['x']),
-                          utils.TaskOneReturn('t2', provides=['x']))
+        flow.add(utils.TaskOneReturn('t1', provides=['x']),
+                 utils.TaskOneReturn('t2', provides=['x']))
+        self.assertEqual(set(['x', 'y']), flow.provides)

     def test_graph_flow_retry_and_task(self):
         flow = gf.Flow('gf', retry.AlwaysRevert('rt',
@@ -387,32 +366,16 @@ class FlowDependenciesTest(test.TestCase):
         self.assertEqual(flow.requires, set(['x', 'y', 'c']))
         self.assertEqual(flow.provides, set(['a', 'b', 'z']))

-    def test_graph_flow_retry_and_task_dependency_conflict(self):
+    def test_graph_flow_retry_and_task_dependency_provide_require(self):
         flow = gf.Flow('gf', retry.AlwaysRevert('rt', requires=['x']))
-        self.assertRaises(exceptions.DependencyFailure,
-                          flow.add,
-                          utils.TaskOneReturn(provides=['x']))
+        flow.add(utils.TaskOneReturn(provides=['x']))
+        self.assertEqual(set(['x']), flow.provides)
+        self.assertEqual(set(['x']), flow.requires)

     def test_graph_flow_retry_and_task_provide_same_value(self):
         flow = gf.Flow('gf', retry.AlwaysRevert('rt', provides=['x']))
-        self.assertRaises(exceptions.DependencyFailure,
-                          flow.add,
-                          utils.TaskOneReturn('t1', provides=['x']))
-
-    def test_two_retries_provide_same_values_in_nested_flows(self):
-        flow = lf.Flow('lf', retry.AlwaysRevert('rt1', provides=['x']))
-        self.assertRaises(exceptions.DependencyFailure,
-                          flow.add,
-                          lf.Flow('lf1', retry.AlwaysRevert('rt2',
-                                                            provides=['x'])))
-
-    def test_two_retries_provide_same_values(self):
-        flow = lf.Flow('lf').add(
-            lf.Flow('lf1', retry.AlwaysRevert('rt1', provides=['x'])))
-        self.assertRaises(exceptions.DependencyFailure,
-                          flow.add,
-                          lf.Flow('lf2', retry.AlwaysRevert('rt2',
-                                                            provides=['x'])))
+        flow.add(utils.TaskOneReturn('t1', provides=['x']))
+        self.assertEqual(set(['x']), flow.provides)

     def test_builtin_retry_args(self):
diff --git a/taskflow/tests/unit/test_futures.py b/taskflow/tests/unit/test_futures.py
new file mode 100644
index 00000000..ce2c69c1
--- /dev/null
+++ b/taskflow/tests/unit/test_futures.py
@@ -0,0 +1,229 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2013 Yahoo! Inc. All Rights Reserved.
+# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import collections +import functools +import threading +import time + +import testtools + +from taskflow import test +from taskflow.types import futures +from taskflow.utils import eventlet_utils as eu + +try: + from eventlet.green import threading as greenthreading + from eventlet.green import time as greentime +except ImportError: + pass + + +def _noop(): + pass + + +def _blowup(): + raise IOError("Broke!") + + +def _return_given(given): + return given + + +def _return_one(): + return 1 + + +def _double(x): + return x * 2 + + +class _SimpleFuturesTestMixin(object): + # This exists to test basic functionality, mainly required to test the + # process executor which has a very restricted set of things it can + # execute (no lambda functions, no instance methods...) 
+ def _make_executor(self, max_workers): + raise NotImplementedError("Not implemented") + + def test_invalid_workers(self): + self.assertRaises(ValueError, self._make_executor, -1) + self.assertRaises(ValueError, self._make_executor, 0) + + def test_exception_transfer(self): + with self._make_executor(2) as e: + f = e.submit(_blowup) + self.assertRaises(IOError, f.result) + self.assertEqual(1, e.statistics.failures) + + def test_accumulator(self): + created = [] + with self._make_executor(5) as e: + for _i in range(0, 10): + created.append(e.submit(_return_one)) + results = [f.result() for f in created] + self.assertEqual(10, sum(results)) + self.assertEqual(10, e.statistics.executed) + + def test_map(self): + count = [i for i in range(0, 100)] + with self._make_executor(5) as e: + results = list(e.map(_double, count)) + initial = sum(count) + self.assertEqual(2 * initial, sum(results)) + + def test_alive(self): + e = self._make_executor(1) + self.assertTrue(e.alive) + e.shutdown() + self.assertFalse(e.alive) + with self._make_executor(1) as e2: + self.assertTrue(e2.alive) + self.assertFalse(e2.alive) + + +class _FuturesTestMixin(_SimpleFuturesTestMixin): + def _delay(self, secs): + raise NotImplementedError("Not implemented") + + def _make_lock(self): + raise NotImplementedError("Not implemented") + + def _make_funcs(self, called, amount): + mutator = self._make_lock() + + def store_call(ident): + with mutator: + called[ident] += 1 + + for i in range(0, amount): + yield functools.partial(store_call, ident=i) + + def test_func_calls(self): + called = collections.defaultdict(int) + + with self._make_executor(2) as e: + for f in self._make_funcs(called, 2): + e.submit(f) + + self.assertEqual(1, called[0]) + self.assertEqual(1, called[1]) + self.assertEqual(2, e.statistics.executed) + + def test_result_callback(self): + called = collections.defaultdict(int) + mutator = self._make_lock() + + def callback(future): + with mutator: + called[future] += 1 + + funcs = 
list(self._make_funcs(called, 1)) + with self._make_executor(2) as e: + for func in funcs: + f = e.submit(func) + f.add_done_callback(callback) + + self.assertEqual(2, len(called)) + + def test_result_transfer(self): + create_am = 50 + with self._make_executor(2) as e: + fs = [] + for i in range(0, create_am): + fs.append(e.submit(functools.partial(_return_given, i))) + self.assertEqual(create_am, len(fs)) + self.assertEqual(create_am, e.statistics.executed) + for i in range(0, create_am): + result = fs[i].result() + self.assertEqual(i, result) + + def test_called_restricted_size(self): + create_am = 100 + called = collections.defaultdict(int) + + with self._make_executor(1) as e: + for f in self._make_funcs(called, create_am): + e.submit(f) + + self.assertFalse(e.alive) + self.assertEqual(create_am, len(called)) + self.assertEqual(create_am, e.statistics.executed) + + +class ThreadPoolExecutorTest(test.TestCase, _FuturesTestMixin): + def _make_executor(self, max_workers): + return futures.ThreadPoolExecutor(max_workers=max_workers) + + def _delay(self, secs): + time.sleep(secs) + + def _make_lock(self): + return threading.Lock() + + +class ProcessPoolExecutorTest(test.TestCase, _SimpleFuturesTestMixin): + def _make_executor(self, max_workers): + return futures.ProcessPoolExecutor(max_workers=max_workers) + + +class SynchronousExecutorTest(test.TestCase, _FuturesTestMixin): + def _make_executor(self, max_workers): + return futures.SynchronousExecutor() + + def _delay(self, secs): + time.sleep(secs) + + def _make_lock(self): + return threading.Lock() + + def test_invalid_workers(self): + pass + + +@testtools.skipIf(not eu.EVENTLET_AVAILABLE, 'eventlet is not available') +class GreenThreadPoolExecutorTest(test.TestCase, _FuturesTestMixin): + def _make_executor(self, max_workers): + return futures.GreenThreadPoolExecutor(max_workers=max_workers) + + def _delay(self, secs): + greentime.sleep(secs) + + def _make_lock(self): + return greenthreading.Lock() + + def 
test_cancellation(self): + called = collections.defaultdict(int) + + fs = [] + with self._make_executor(2) as e: + for func in self._make_funcs(called, 2): + fs.append(e.submit(func)) + # Greenthreads don't start executing until we wait for them + # to; since nothing here does I/O, this will work out correctly. + # + # If something here did a blocking call, then eventlet could swap + # one of the executor's threads in, but nothing in this test does. + for f in fs: + self.assertFalse(f.running()) + f.cancel() + + self.assertEqual(0, len(called)) + self.assertEqual(2, len(fs)) + self.assertEqual(2, e.statistics.cancelled) + for f in fs: + self.assertTrue(f.cancelled()) + self.assertTrue(f.done()) diff --git a/taskflow/tests/unit/test_green_executor.py b/taskflow/tests/unit/test_green_executor.py deleted file mode 100644 index eae523dc..00000000 --- a/taskflow/tests/unit/test_green_executor.py +++ /dev/null @@ -1,131 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License.
- -import collections -import functools - -import testtools - -from taskflow import test -from taskflow.utils import eventlet_utils as eu - - -@testtools.skipIf(not eu.EVENTLET_AVAILABLE, 'eventlet is not available') -class GreenExecutorTest(test.TestCase): - def make_funcs(self, called, amount): - - def store_call(name): - called[name] += 1 - - for i in range(0, amount): - yield functools.partial(store_call, name=i) - - def test_func_calls(self): - called = collections.defaultdict(int) - - with eu.GreenExecutor(2) as e: - for f in self.make_funcs(called, 2): - e.submit(f) - - self.assertEqual(1, called[0]) - self.assertEqual(1, called[1]) - - def test_no_construction(self): - self.assertRaises(ValueError, eu.GreenExecutor, 0) - self.assertRaises(ValueError, eu.GreenExecutor, -1) - self.assertRaises(ValueError, eu.GreenExecutor, "-1") - - def test_result_callback(self): - called = collections.defaultdict(int) - - def callback(future): - called[future] += 1 - - funcs = list(self.make_funcs(called, 1)) - with eu.GreenExecutor(2) as e: - for func in funcs: - f = e.submit(func) - f.add_done_callback(callback) - - self.assertEqual(2, len(called)) - - def test_exception_transfer(self): - - def blowup(): - raise IOError("Broke!") - - with eu.GreenExecutor(2) as e: - f = e.submit(blowup) - - self.assertRaises(IOError, f.result) - - def test_result_transfer(self): - - def return_given(given): - return given - - create_am = 50 - with eu.GreenExecutor(2) as e: - fs = [] - for i in range(0, create_am): - fs.append(e.submit(functools.partial(return_given, i))) - - self.assertEqual(create_am, len(fs)) - for i in range(0, create_am): - result = fs[i].result() - self.assertEqual(i, result) - - def test_called_restricted_size(self): - called = collections.defaultdict(int) - - with eu.GreenExecutor(1) as e: - for f in self.make_funcs(called, 100): - e.submit(f) - self.assertEqual(99, e.amount_delayed) - - self.assertFalse(e.alive) - self.assertEqual(100, len(called)) - 
self.assertGreaterEqual(1, e.workers_created) - self.assertEqual(0, e.amount_delayed) - - def test_shutdown_twice(self): - e = eu.GreenExecutor(1) - self.assertTrue(e.alive) - e.shutdown() - self.assertFalse(e.alive) - e.shutdown() - self.assertFalse(e.alive) - - def test_func_cancellation(self): - called = collections.defaultdict(int) - - fs = [] - with eu.GreenExecutor(2) as e: - for func in self.make_funcs(called, 2): - fs.append(e.submit(func)) - # Greenthreads don't start executing until we wait for them - # to, since nothing here does IO, this will work out correctly. - # - # If something here did a blocking call, then eventlet could swap - # one of the executors threads in, but nothing in this test does. - for f in fs: - self.assertFalse(f.running()) - f.cancel() - - self.assertEqual(0, len(called)) - for f in fs: - self.assertTrue(f.cancelled()) - self.assertTrue(f.done()) diff --git a/taskflow/tests/unit/test_listeners.py b/taskflow/tests/unit/test_listeners.py new file mode 100644 index 00000000..d6c64bbd --- /dev/null +++ b/taskflow/tests/unit/test_listeners.py @@ -0,0 +1,328 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +import contextlib +import logging +import time + +from oslo_serialization import jsonutils +from oslo_utils import reflection +import six +from zake import fake_client + +import taskflow.engines +from taskflow import exceptions as exc +from taskflow.jobs import backends as jobs +from taskflow.listeners import claims +from taskflow.listeners import logging as logging_listeners +from taskflow.listeners import timing +from taskflow.patterns import linear_flow as lf +from taskflow.persistence.backends import impl_memory +from taskflow import states +from taskflow import task +from taskflow import test +from taskflow.test import mock +from taskflow.tests import utils as test_utils +from taskflow.utils import misc +from taskflow.utils import persistence_utils +from taskflow.utils import threading_utils + + +_LOG_LEVELS = frozenset([ + logging.CRITICAL, + logging.DEBUG, + logging.ERROR, + logging.INFO, + logging.NOTSET, + logging.WARNING, +]) + + +class SleepyTask(task.Task): + def __init__(self, name, sleep_for=0.0): + super(SleepyTask, self).__init__(name=name) + self._sleep_for = float(sleep_for) + + def execute(self): + if self._sleep_for <= 0: + return + else: + time.sleep(self._sleep_for) + + +class EngineMakerMixin(object): + def _make_engine(self, flow, flow_detail=None, backend=None): + e = taskflow.engines.load(flow, + flow_detail=flow_detail, + backend=backend) + e.compile() + e.prepare() + return e + + +class TestClaimListener(test.TestCase, EngineMakerMixin): + def _make_dummy_flow(self, count): + f = lf.Flow('root') + for i in range(0, count): + f.add(test_utils.ProvidesRequiresTask('%s_test' % i, [], [])) + return f + + def setUp(self): + super(TestClaimListener, self).setUp() + self.client = fake_client.FakeClient() + self.addCleanup(self.client.stop) + self.board = jobs.fetch('test', 'zookeeper', client=self.client) + self.addCleanup(self.board.close) + self.board.connect() + + def _post_claim_job(self, job_name, book=None, details=None): + arrived = 
threading_utils.Event() + + def set_on_children(children): + if children: + arrived.set() + + self.client.ChildrenWatch("/taskflow", set_on_children) + job = self.board.post('test-1') + + # Make sure it arrived and claimed before doing further work... + self.assertTrue(arrived.wait(test_utils.WAIT_TIMEOUT)) + arrived.clear() + self.board.claim(job, self.board.name) + self.assertTrue(arrived.wait(test_utils.WAIT_TIMEOUT)) + self.assertEqual(states.CLAIMED, job.state) + + return job + + def _destroy_locks(self): + children = self.client.storage.get_children("/taskflow", + only_direct=False) + removed = 0 + for p, data in six.iteritems(children): + if p.endswith(".lock"): + self.client.storage.pop(p) + removed += 1 + return removed + + def _change_owner(self, new_owner): + children = self.client.storage.get_children("/taskflow", + only_direct=False) + altered = 0 + for p, data in six.iteritems(children): + if p.endswith(".lock"): + self.client.set(p, misc.binary_encode( + jsonutils.dumps({'owner': new_owner}))) + altered += 1 + return altered + + def test_bad_create(self): + job = self._post_claim_job('test') + f = self._make_dummy_flow(10) + e = self._make_engine(f) + self.assertRaises(ValueError, claims.CheckingClaimListener, + e, job, self.board, self.board.name, + on_job_loss=1) + + def test_claim_lost_suspended(self): + job = self._post_claim_job('test') + f = self._make_dummy_flow(10) + e = self._make_engine(f) + + try_destroy = True + ran_states = [] + with claims.CheckingClaimListener(e, job, + self.board, self.board.name): + for state in e.run_iter(): + ran_states.append(state) + if state == states.SCHEDULING and try_destroy: + try_destroy = bool(self._destroy_locks()) + + self.assertEqual(states.SUSPENDED, e.storage.get_flow_state()) + self.assertEqual(1, ran_states.count(states.ANALYZING)) + self.assertEqual(1, ran_states.count(states.SCHEDULING)) + self.assertEqual(1, ran_states.count(states.WAITING)) + + def test_claim_lost_custom_handler(self): + job = 
self._post_claim_job('test') + f = self._make_dummy_flow(10) + e = self._make_engine(f) + + handler = mock.MagicMock() + ran_states = [] + try_destroy = True + destroyed_at = -1 + with claims.CheckingClaimListener(e, job, self.board, + self.board.name, + on_job_loss=handler): + for i, state in enumerate(e.run_iter()): + ran_states.append(state) + if state == states.SCHEDULING and try_destroy: + destroyed = bool(self._destroy_locks()) + if destroyed: + destroyed_at = i + try_destroy = False + + self.assertTrue(handler.called) + self.assertEqual(10, ran_states.count(states.SCHEDULING)) + self.assertNotEqual(-1, destroyed_at) + + after_states = ran_states[destroyed_at:] + self.assertGreater(len(after_states), 0) + + def test_claim_lost_new_owner(self): + job = self._post_claim_job('test') + f = self._make_dummy_flow(10) + e = self._make_engine(f) + + change_owner = True + ran_states = [] + with claims.CheckingClaimListener(e, job, + self.board, self.board.name): + for state in e.run_iter(): + ran_states.append(state) + if state == states.SCHEDULING and change_owner: + change_owner = bool(self._change_owner('test-2')) + + self.assertEqual(states.SUSPENDED, e.storage.get_flow_state()) + self.assertEqual(1, ran_states.count(states.ANALYZING)) + self.assertEqual(1, ran_states.count(states.SCHEDULING)) + self.assertEqual(1, ran_states.count(states.WAITING)) + + +class TestTimingListener(test.TestCase, EngineMakerMixin): + def test_duration(self): + with contextlib.closing(impl_memory.MemoryBackend()) as be: + flow = lf.Flow("test") + flow.add(SleepyTask("test-1", sleep_for=0.1)) + (lb, fd) = persistence_utils.temporary_flow_detail(be) + e = self._make_engine(flow, fd, be) + with timing.TimingListener(e): + e.run() + t_uuid = e.storage.get_atom_uuid("test-1") + td = fd.find(t_uuid) + self.assertIsNotNone(td) + self.assertIsNotNone(td.meta) + self.assertIn('duration', td.meta) + self.assertGreaterEqual(td.meta['duration'], 0.1) + + @mock.patch.object(timing.LOG, 'warn') + 
def test_record_ending_exception(self, mocked_warn): + with contextlib.closing(impl_memory.MemoryBackend()) as be: + flow = lf.Flow("test") + flow.add(test_utils.TaskNoRequiresNoReturns("test-1")) + (lb, fd) = persistence_utils.temporary_flow_detail(be) + e = self._make_engine(flow, fd, be) + timing_listener = timing.TimingListener(e) + with mock.patch.object(timing_listener._engine.storage, + 'update_atom_metadata') as mocked_uam: + mocked_uam.side_effect = exc.StorageFailure('Woot!') + with timing_listener: + e.run() + mocked_warn.assert_called_once_with(mock.ANY, mock.ANY, 'test-1', + exc_info=True) + + +class TestLoggingListeners(test.TestCase, EngineMakerMixin): + def _make_logger(self, level=logging.DEBUG): + log = logging.getLogger( + reflection.get_callable_name(self._get_test_method())) + log.propagate = False + for handler in reversed(log.handlers): + log.removeHandler(handler) + handler = test.CapturingLoggingHandler(level=level) + log.addHandler(handler) + log.setLevel(level) + self.addCleanup(handler.reset) + self.addCleanup(log.removeHandler, handler) + return (log, handler) + + def test_basic(self): + flow = lf.Flow("test") + flow.add(test_utils.TaskNoRequiresNoReturns("test-1")) + e = self._make_engine(flow) + log, handler = self._make_logger() + with logging_listeners.LoggingListener(e, log=log): + e.run() + self.assertGreater(handler.counts[logging.DEBUG], 0) + for levelno in _LOG_LEVELS - set([logging.DEBUG]): + self.assertEqual(0, handler.counts[levelno]) + self.assertEqual([], handler.exc_infos) + + def test_basic_customized(self): + flow = lf.Flow("test") + flow.add(test_utils.TaskNoRequiresNoReturns("test-1")) + e = self._make_engine(flow) + log, handler = self._make_logger() + listener = logging_listeners.LoggingListener( + e, log=log, level=logging.INFO) + with listener: + e.run() + self.assertGreater(handler.counts[logging.INFO], 0) + for levelno in _LOG_LEVELS - set([logging.INFO]): + self.assertEqual(0, handler.counts[levelno]) + 
self.assertEqual([], handler.exc_infos) + + def test_basic_failure(self): + flow = lf.Flow("test") + flow.add(test_utils.TaskWithFailure("test-1")) + e = self._make_engine(flow) + log, handler = self._make_logger() + with logging_listeners.LoggingListener(e, log=log): + self.assertRaises(RuntimeError, e.run) + self.assertGreater(handler.counts[logging.DEBUG], 0) + for levelno in _LOG_LEVELS - set([logging.DEBUG]): + self.assertEqual(0, handler.counts[levelno]) + self.assertEqual(1, len(handler.exc_infos)) + + def test_dynamic(self): + flow = lf.Flow("test") + flow.add(test_utils.TaskNoRequiresNoReturns("test-1")) + e = self._make_engine(flow) + log, handler = self._make_logger() + with logging_listeners.DynamicLoggingListener(e, log=log): + e.run() + self.assertGreater(handler.counts[logging.DEBUG], 0) + for levelno in _LOG_LEVELS - set([logging.DEBUG]): + self.assertEqual(0, handler.counts[levelno]) + self.assertEqual([], handler.exc_infos) + + def test_dynamic_failure(self): + flow = lf.Flow("test") + flow.add(test_utils.TaskWithFailure("test-1")) + e = self._make_engine(flow) + log, handler = self._make_logger() + with logging_listeners.DynamicLoggingListener(e, log=log): + self.assertRaises(RuntimeError, e.run) + self.assertGreater(handler.counts[logging.WARNING], 0) + self.assertGreater(handler.counts[logging.DEBUG], 0) + self.assertEqual(1, len(handler.exc_infos)) + for levelno in _LOG_LEVELS - set([logging.DEBUG, logging.WARNING]): + self.assertEqual(0, handler.counts[levelno]) + + def test_dynamic_failure_customized_level(self): + flow = lf.Flow("test") + flow.add(test_utils.TaskWithFailure("test-1")) + e = self._make_engine(flow) + log, handler = self._make_logger() + listener = logging_listeners.DynamicLoggingListener( + e, log=log, failure_level=logging.ERROR) + with listener: + self.assertRaises(RuntimeError, e.run) + self.assertGreater(handler.counts[logging.ERROR], 0) + self.assertGreater(handler.counts[logging.DEBUG], 0) + self.assertEqual(1, 
len(handler.exc_infos)) + for levelno in _LOG_LEVELS - set([logging.DEBUG, logging.ERROR]): + self.assertEqual(0, handler.counts[levelno]) diff --git a/taskflow/tests/unit/test_notifier.py b/taskflow/tests/unit/test_notifier.py new file mode 100644 index 00000000..60e0e1e8 --- /dev/null +++ b/taskflow/tests/unit/test_notifier.py @@ -0,0 +1,212 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import collections +import functools + +from taskflow import states +from taskflow import test +from taskflow.types import notifier as nt + + +class NotifierTest(test.TestCase): + + def test_notify_called(self): + call_collector = [] + + def call_me(state, details): + call_collector.append((state, details)) + + notifier = nt.Notifier() + notifier.register(nt.Notifier.ANY, call_me) + notifier.notify(states.SUCCESS, {}) + notifier.notify(states.SUCCESS, {}) + + self.assertEqual(2, len(call_collector)) + self.assertEqual(1, len(notifier)) + + def test_notify_not_called(self): + call_collector = [] + + def call_me(state, details): + call_collector.append((state, details)) + + notifier = nt.Notifier() + notifier.register(nt.Notifier.ANY, call_me) + notifier.notify(nt.Notifier.ANY, {}) + self.assertFalse(notifier.can_trigger_notification(nt.Notifier.ANY)) + + self.assertEqual(0, len(call_collector)) + self.assertEqual(1, len(notifier)) + + def test_notify_register_deregister(self): + + def 
call_me(state, details): + pass + + class A(object): + def call_me_too(self, state, details): + pass + + notifier = nt.Notifier() + notifier.register(nt.Notifier.ANY, call_me) + a = A() + notifier.register(nt.Notifier.ANY, a.call_me_too) + + self.assertEqual(2, len(notifier)) + notifier.deregister(nt.Notifier.ANY, call_me) + notifier.deregister(nt.Notifier.ANY, a.call_me_too) + self.assertEqual(0, len(notifier)) + + def test_notify_reset(self): + + def call_me(state, details): + pass + + notifier = nt.Notifier() + notifier.register(nt.Notifier.ANY, call_me) + self.assertEqual(1, len(notifier)) + + notifier.reset() + self.assertEqual(0, len(notifier)) + + def test_bad_notify(self): + + def call_me(state, details): + pass + + notifier = nt.Notifier() + self.assertRaises(KeyError, notifier.register, + nt.Notifier.ANY, call_me, + kwargs={'details': 5}) + + def test_not_callable(self): + notifier = nt.Notifier() + self.assertRaises(ValueError, notifier.register, + nt.Notifier.ANY, 2) + + def test_restricted_notifier(self): + notifier = nt.RestrictedNotifier(['a', 'b']) + self.assertRaises(ValueError, notifier.register, + 'c', lambda *args, **kargs: None) + notifier.register('b', lambda *args, **kargs: None) + self.assertEqual(1, len(notifier)) + + def test_restricted_notifier_any(self): + notifier = nt.RestrictedNotifier(['a', 'b']) + self.assertRaises(ValueError, notifier.register, + 'c', lambda *args, **kargs: None) + notifier.register('b', lambda *args, **kargs: None) + self.assertEqual(1, len(notifier)) + notifier.register(nt.RestrictedNotifier.ANY, + lambda *args, **kargs: None) + self.assertEqual(2, len(notifier)) + + def test_restricted_notifier_no_any(self): + notifier = nt.RestrictedNotifier(['a', 'b'], allow_any=False) + self.assertRaises(ValueError, notifier.register, + nt.RestrictedNotifier.ANY, + lambda *args, **kargs: None) + notifier.register('b', lambda *args, **kargs: None) + self.assertEqual(1, len(notifier)) + + def test_selective_notify(self): + 
call_counts = collections.defaultdict(list) + + def call_me_on(registered_state, state, details): + call_counts[registered_state].append((state, details)) + + notifier = nt.Notifier() + + call_me_on_success = functools.partial(call_me_on, states.SUCCESS) + notifier.register(states.SUCCESS, call_me_on_success) + self.assertTrue(notifier.is_registered(states.SUCCESS, + call_me_on_success)) + + call_me_on_any = functools.partial(call_me_on, nt.Notifier.ANY) + notifier.register(nt.Notifier.ANY, call_me_on_any) + self.assertTrue(notifier.is_registered(nt.Notifier.ANY, + call_me_on_any)) + + self.assertEqual(2, len(notifier)) + notifier.notify(states.SUCCESS, {}) + + self.assertEqual(1, len(call_counts[nt.Notifier.ANY])) + self.assertEqual(1, len(call_counts[states.SUCCESS])) + + notifier.notify(states.FAILURE, {}) + self.assertEqual(2, len(call_counts[nt.Notifier.ANY])) + self.assertEqual(1, len(call_counts[states.SUCCESS])) + self.assertEqual(2, len(call_counts)) + + def test_details_filter(self): + call_counts = collections.defaultdict(list) + + def call_me_on(registered_state, state, details): + call_counts[registered_state].append((state, details)) + + def when_red(details): + return details.get('color') == 'red' + + notifier = nt.Notifier() + + call_me_on_success = functools.partial(call_me_on, states.SUCCESS) + notifier.register(states.SUCCESS, call_me_on_success, + details_filter=when_red) + self.assertEqual(1, len(notifier)) + self.assertTrue(notifier.is_registered( + states.SUCCESS, call_me_on_success, details_filter=when_red)) + + notifier.notify(states.SUCCESS, {}) + self.assertEqual(0, len(call_counts[states.SUCCESS])) + notifier.notify(states.SUCCESS, {'color': 'red'}) + self.assertEqual(1, len(call_counts[states.SUCCESS])) + notifier.notify(states.SUCCESS, {'color': 'green'}) + self.assertEqual(1, len(call_counts[states.SUCCESS])) + + def test_different_details_filter(self): + call_counts = collections.defaultdict(list) + + def call_me_on(registered_state, 
state, details): + call_counts[registered_state].append((state, details)) + + def when_red(details): + return details.get('color') == 'red' + + def when_blue(details): + return details.get('color') == 'blue' + + notifier = nt.Notifier() + + call_me_on_success = functools.partial(call_me_on, states.SUCCESS) + notifier.register(states.SUCCESS, call_me_on_success, + details_filter=when_red) + notifier.register(states.SUCCESS, call_me_on_success, + details_filter=when_blue) + self.assertEqual(2, len(notifier)) + self.assertTrue(notifier.is_registered( + states.SUCCESS, call_me_on_success, details_filter=when_blue)) + self.assertTrue(notifier.is_registered( + states.SUCCESS, call_me_on_success, details_filter=when_red)) + + notifier.notify(states.SUCCESS, {}) + self.assertEqual(0, len(call_counts[states.SUCCESS])) + notifier.notify(states.SUCCESS, {'color': 'red'}) + self.assertEqual(1, len(call_counts[states.SUCCESS])) + notifier.notify(states.SUCCESS, {'color': 'blue'}) + self.assertEqual(2, len(call_counts[states.SUCCESS])) + notifier.notify(states.SUCCESS, {'color': 'green'}) + self.assertEqual(2, len(call_counts[states.SUCCESS])) diff --git a/taskflow/tests/unit/test_progress.py b/taskflow/tests/unit/test_progress.py index f37d1132..943f93c0 100644 --- a/taskflow/tests/unit/test_progress.py +++ b/taskflow/tests/unit/test_progress.py @@ -39,7 +39,12 @@ class ProgressTask(task.Task): class ProgressTaskWithDetails(task.Task): def execute(self): - self.update_progress(0.5, test='test data', foo='bar') + details = { + 'progress': 0.5, + 'test': 'test data', + 'foo': 'bar', + } + self.notifier.notify(task.EVENT_UPDATE_PROGRESS, details) class TestProgress(test.TestCase): @@ -60,12 +65,12 @@ class TestProgress(test.TestCase): def test_sanity_progress(self): fired_events = [] - def notify_me(task, event_data, progress): - fired_events.append(progress) + def notify_me(event_type, details): + fired_events.append(details.pop('progress')) ev_count = 5 t = ProgressTask("test", 
ev_count) - t.bind('update_progress', notify_me) + t.notifier.register(task.EVENT_UPDATE_PROGRESS, notify_me) flo = lf.Flow("test") flo.add(t) e = self._make_engine(flo) @@ -77,11 +82,11 @@ class TestProgress(test.TestCase): def test_no_segments_progress(self): fired_events = [] - def notify_me(task, event_data, progress): - fired_events.append(progress) + def notify_me(event_type, details): + fired_events.append(details.pop('progress')) t = ProgressTask("test", 0) - t.bind('update_progress', notify_me) + t.notifier.register(task.EVENT_UPDATE_PROGRESS, notify_me) flo = lf.Flow("test") flo.add(t) e = self._make_engine(flo) @@ -121,12 +126,12 @@ class TestProgress(test.TestCase): def test_dual_storage_progress(self): fired_events = [] - def notify_me(task, event_data, progress): - fired_events.append(progress) + def notify_me(event_type, details): + fired_events.append(details.pop('progress')) with contextlib.closing(impl_memory.MemoryBackend({})) as be: t = ProgressTask("test", 5) - t.bind('update_progress', notify_me) + t.notifier.register(task.EVENT_UPDATE_PROGRESS, notify_me) flo = lf.Flow("test") flo.add(t) b, fd = p_utils.temporary_flow_detail(be) diff --git a/taskflow/tests/unit/test_retries.py b/taskflow/tests/unit/test_retries.py index 71ea70cb..b459184b 100644 --- a/taskflow/tests/unit/test_retries.py +++ b/taskflow/tests/unit/test_retries.py @@ -14,6 +14,8 @@ # License for the specific language governing permissions and limitations # under the License. 
+import testtools + import taskflow.engines from taskflow import exceptions as exc from taskflow.patterns import graph_flow as gf @@ -23,7 +25,26 @@ from taskflow import retry from taskflow import states as st from taskflow import test from taskflow.tests import utils -from taskflow.utils import misc +from taskflow.types import failure +from taskflow.types import futures +from taskflow.utils import eventlet_utils as eu + + +class FailingRetry(retry.Retry): + + def execute(self, **kwargs): + raise ValueError('OMG I FAILED') + + def revert(self, history, **kwargs): + self.history = history + + def on_failure(self, **kwargs): + return retry.REVERT + + +class NastyFailingRetry(FailingRetry): + def revert(self, history, **kwargs): + raise ValueError('WOOT!') class RetryTest(utils.EngineTestBase): @@ -48,89 +69,71 @@ class RetryTest(utils.EngineTestBase): def test_states_retry_success_linear_flow(self): flow = lf.Flow('flow-1', retry.Times(4, 'r1', provides='x')).add( - utils.SaveOrderTask("task1"), + utils.ProgressingTask("task1"), utils.ConditionalTask("task2") ) engine = self._make_engine(flow) - utils.register_notifiers(engine, self.values) engine.storage.inject({'y': 2}) - engine.run() + with utils.CaptureListener(engine) as capturer: + engine.run() self.assertEqual(engine.storage.fetch_all(), {'y': 2, 'x': 2}) - expected = ['flow RUNNING', - 'r1 RUNNING', - 'r1 SUCCESS', - 'task1 RUNNING', - 'task1', - 'task1 SUCCESS', - 'task2 RUNNING', - 'task2', - 'task2 FAILURE', - 'task2 REVERTING', - u'task2 reverted(Failure: RuntimeError: Woot!)', - 'task2 REVERTED', - 'task1 REVERTING', - 'task1 reverted(5)', - 'task1 REVERTED', - 'r1 RETRYING', - 'task1 PENDING', - 'task2 PENDING', - 'r1 RUNNING', - 'r1 SUCCESS', - 'task1 RUNNING', - 'task1', - 'task1 SUCCESS', - 'task2 RUNNING', - 'task2', - 'task2 SUCCESS', - 'flow SUCCESS'] - self.assertEqual(self.values, expected) + expected = ['flow-1.f RUNNING', + 'r1.r RUNNING', 'r1.r SUCCESS(1)', + 'task1.t RUNNING', 'task1.t 
SUCCESS(5)', + 'task2.t RUNNING', + 'task2.t FAILURE(Failure: RuntimeError: Woot!)', + 'task2.t REVERTING', 'task2.t REVERTED', + 'task1.t REVERTING', 'task1.t REVERTED', + 'r1.r RETRYING', + 'task1.t PENDING', + 'task2.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(2)', + 'task1.t RUNNING', + 'task1.t SUCCESS(5)', + 'task2.t RUNNING', + 'task2.t SUCCESS(None)', + 'flow-1.f SUCCESS'] + self.assertEqual(expected, capturer.values) def test_states_retry_reverted_linear_flow(self): flow = lf.Flow('flow-1', retry.Times(2, 'r1', provides='x')).add( - utils.SaveOrderTask("task1"), + utils.ProgressingTask("task1"), utils.ConditionalTask("task2") ) engine = self._make_engine(flow) - utils.register_notifiers(engine, self.values) engine.storage.inject({'y': 4}) - self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) + with utils.CaptureListener(engine) as capturer: + self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) self.assertEqual(engine.storage.fetch_all(), {'y': 4}) - expected = ['flow RUNNING', - 'r1 RUNNING', - 'r1 SUCCESS', - 'task1 RUNNING', - 'task1', - 'task1 SUCCESS', - 'task2 RUNNING', - 'task2', - 'task2 FAILURE', - 'task2 REVERTING', - u'task2 reverted(Failure: RuntimeError: Woot!)', - 'task2 REVERTED', - 'task1 REVERTING', - 'task1 reverted(5)', - 'task1 REVERTED', - 'r1 RETRYING', - 'task1 PENDING', - 'task2 PENDING', - 'r1 RUNNING', - 'r1 SUCCESS', - 'task1 RUNNING', - 'task1', - 'task1 SUCCESS', - 'task2 RUNNING', - 'task2', - 'task2 FAILURE', - 'task2 REVERTING', - u'task2 reverted(Failure: RuntimeError: Woot!)', - 'task2 REVERTED', - 'task1 REVERTING', - 'task1 reverted(5)', - 'task1 REVERTED', - 'r1 REVERTING', - 'r1 REVERTED', - 'flow REVERTED'] - self.assertEqual(self.values, expected) + expected = ['flow-1.f RUNNING', + 'r1.r RUNNING', + 'r1.r SUCCESS(1)', + 'task1.t RUNNING', + 'task1.t SUCCESS(5)', + 'task2.t RUNNING', + 'task2.t FAILURE(Failure: RuntimeError: Woot!)', + 'task2.t REVERTING', + 'task2.t REVERTED', + 'task1.t REVERTING', + 
'task1.t REVERTED', + 'r1.r RETRYING', + 'task1.t PENDING', + 'task2.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(2)', + 'task1.t RUNNING', + 'task1.t SUCCESS(5)', + 'task2.t RUNNING', + 'task2.t FAILURE(Failure: RuntimeError: Woot!)', + 'task2.t REVERTING', + 'task2.t REVERTED', + 'task1.t REVERTING', + 'task1.t REVERTED', + 'r1.r REVERTING', + 'r1.r REVERTED', + 'flow-1.f REVERTED'] + self.assertEqual(expected, capturer.values) def test_states_retry_failure_linear_flow(self): flow = lf.Flow('flow-1', retry.Times(2, 'r1', provides='x')).add( @@ -138,25 +141,23 @@ class RetryTest(utils.EngineTestBase): utils.ConditionalTask("task2") ) engine = self._make_engine(flow) - utils.register_notifiers(engine, self.values) engine.storage.inject({'y': 4}) - self.assertRaisesRegexp(RuntimeError, '^Gotcha', engine.run) + with utils.CaptureListener(engine) as capturer: + self.assertRaisesRegexp(RuntimeError, '^Gotcha', engine.run) self.assertEqual(engine.storage.fetch_all(), {'y': 4, 'x': 1}) - expected = ['flow RUNNING', - 'r1 RUNNING', - 'r1 SUCCESS', - 'task1 RUNNING', - 'task1 SUCCESS', - 'task2 RUNNING', - 'task2', - 'task2 FAILURE', - 'task2 REVERTING', - u'task2 reverted(Failure: RuntimeError: Woot!)', - 'task2 REVERTED', - 'task1 REVERTING', - 'task1 FAILURE', - 'flow FAILURE'] - self.assertEqual(self.values, expected) + expected = ['flow-1.f RUNNING', + 'r1.r RUNNING', + 'r1.r SUCCESS(1)', + 'task1.t RUNNING', + 'task1.t SUCCESS(None)', + 'task2.t RUNNING', + 'task2.t FAILURE(Failure: RuntimeError: Woot!)', + 'task2.t REVERTING', + 'task2.t REVERTED', + 'task1.t REVERTING', + 'task1.t FAILURE', + 'flow-1.f FAILURE'] + self.assertEqual(expected, capturer.values) def test_states_retry_failure_nested_flow_fails(self): flow = lf.Flow('flow-1', utils.retry.AlwaysRevert('r1')).add( @@ -168,41 +169,38 @@ class RetryTest(utils.EngineTestBase): utils.TaskNoRequiresNoReturns("task4") ) engine = self._make_engine(flow) - utils.register_notifiers(engine, self.values) 
engine.storage.inject({'y': 2}) - engine.run() + with utils.CaptureListener(engine) as capturer: + engine.run() self.assertEqual(engine.storage.fetch_all(), {'y': 2, 'x': 2}) - expected = ['flow RUNNING', - 'r1 RUNNING', - 'r1 SUCCESS', - 'task1 RUNNING', - 'task1 SUCCESS', - 'r2 RUNNING', - 'r2 SUCCESS', - 'task2 RUNNING', - 'task2 SUCCESS', - 'task3 RUNNING', - 'task3', - 'task3 FAILURE', - 'task3 REVERTING', - u'task3 reverted(Failure: RuntimeError: Woot!)', - 'task3 REVERTED', - 'task2 REVERTING', - 'task2 REVERTED', - 'r2 RETRYING', - 'task2 PENDING', - 'task3 PENDING', - 'r2 RUNNING', - 'r2 SUCCESS', - 'task2 RUNNING', - 'task2 SUCCESS', - 'task3 RUNNING', - 'task3', - 'task3 SUCCESS', - 'task4 RUNNING', - 'task4 SUCCESS', - 'flow SUCCESS'] - self.assertEqual(self.values, expected) + expected = ['flow-1.f RUNNING', + 'r1.r RUNNING', + 'r1.r SUCCESS(None)', + 'task1.t RUNNING', + 'task1.t SUCCESS(None)', + 'r2.r RUNNING', + 'r2.r SUCCESS(1)', + 'task2.t RUNNING', + 'task2.t SUCCESS(None)', + 'task3.t RUNNING', + 'task3.t FAILURE(Failure: RuntimeError: Woot!)', + 'task3.t REVERTING', + 'task3.t REVERTED', + 'task2.t REVERTING', + 'task2.t REVERTED', + 'r2.r RETRYING', + 'task2.t PENDING', + 'task3.t PENDING', + 'r2.r RUNNING', + 'r2.r SUCCESS(2)', + 'task2.t RUNNING', + 'task2.t SUCCESS(None)', + 'task3.t RUNNING', + 'task3.t SUCCESS(None)', + 'task4.t RUNNING', + 'task4.t SUCCESS(None)', + 'flow-1.f SUCCESS'] + self.assertEqual(expected, capturer.values) def test_states_retry_failure_parent_flow_fails(self): flow = lf.Flow('flow-1', retry.Times(3, 'r1', provides='x1')).add( @@ -214,158 +212,160 @@ class RetryTest(utils.EngineTestBase): utils.ConditionalTask("task4", rebind={'x': 'x1'}) ) engine = self._make_engine(flow) - utils.register_notifiers(engine, self.values) engine.storage.inject({'y': 2}) - engine.run() + with utils.CaptureListener(engine) as capturer: + engine.run() self.assertEqual(engine.storage.fetch_all(), {'y': 2, 'x1': 2, 'x2': 1}) - expected 
= ['flow RUNNING', - 'r1 RUNNING', - 'r1 SUCCESS', - 'task1 RUNNING', - 'task1 SUCCESS', - 'r2 RUNNING', - 'r2 SUCCESS', - 'task2 RUNNING', - 'task2 SUCCESS', - 'task3 RUNNING', - 'task3 SUCCESS', - 'task4 RUNNING', - 'task4', - 'task4 FAILURE', - 'task4 REVERTING', - u'task4 reverted(Failure: RuntimeError: Woot!)', - 'task4 REVERTED', - 'task3 REVERTING', - 'task3 REVERTED', - 'task2 REVERTING', - 'task2 REVERTED', - 'r2 REVERTING', - 'r2 REVERTED', - 'task1 REVERTING', - 'task1 REVERTED', - 'r1 RETRYING', - 'task1 PENDING', - 'r2 PENDING', - 'task2 PENDING', - 'task3 PENDING', - 'task4 PENDING', - 'r1 RUNNING', - 'r1 SUCCESS', - 'task1 RUNNING', - 'task1 SUCCESS', - 'r2 RUNNING', - 'r2 SUCCESS', - 'task2 RUNNING', - 'task2 SUCCESS', - 'task3 RUNNING', - 'task3 SUCCESS', - 'task4 RUNNING', - 'task4', - 'task4 SUCCESS', - 'flow SUCCESS'] - self.assertEqual(self.values, expected) + expected = ['flow-1.f RUNNING', + 'r1.r RUNNING', + 'r1.r SUCCESS(1)', + 'task1.t RUNNING', + 'task1.t SUCCESS(None)', + 'r2.r RUNNING', + 'r2.r SUCCESS(1)', + 'task2.t RUNNING', + 'task2.t SUCCESS(None)', + 'task3.t RUNNING', + 'task3.t SUCCESS(None)', + 'task4.t RUNNING', + 'task4.t FAILURE(Failure: RuntimeError: Woot!)', + 'task4.t REVERTING', + 'task4.t REVERTED', + 'task3.t REVERTING', + 'task3.t REVERTED', + 'task2.t REVERTING', + 'task2.t REVERTED', + 'r2.r REVERTING', + 'r2.r REVERTED', + 'task1.t REVERTING', + 'task1.t REVERTED', + 'r1.r RETRYING', + 'task1.t PENDING', + 'r2.r PENDING', + 'task2.t PENDING', + 'task3.t PENDING', + 'task4.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(2)', + 'task1.t RUNNING', + 'task1.t SUCCESS(None)', + 'r2.r RUNNING', + 'r2.r SUCCESS(1)', + 'task2.t RUNNING', + 'task2.t SUCCESS(None)', + 'task3.t RUNNING', + 'task3.t SUCCESS(None)', + 'task4.t RUNNING', + 'task4.t SUCCESS(None)', + 'flow-1.f SUCCESS'] + self.assertEqual(expected, capturer.values) def test_unordered_flow_task_fails_parallel_tasks_should_be_reverted(self): flow = uf.Flow('flow-1', 
retry.Times(3, 'r', provides='x')).add( - utils.SaveOrderTask("task1"), + utils.ProgressingTask("task1"), utils.ConditionalTask("task2") ) engine = self._make_engine(flow) engine.storage.inject({'y': 2}) - engine.run() + with utils.CaptureListener(engine) as capturer: + engine.run() self.assertEqual(engine.storage.fetch_all(), {'y': 2, 'x': 2}) - expected = ['task2', - 'task1', - u'task2 reverted(Failure: RuntimeError: Woot!)', - 'task1 reverted(5)', - 'task2', - 'task1'] - self.assertItemsEqual(self.values, expected) + expected = ['flow-1.f RUNNING', + 'r.r RUNNING', + 'r.r SUCCESS(1)', + 'task1.t RUNNING', + 'task2.t RUNNING', + 'task1.t SUCCESS(5)', + 'task2.t FAILURE(Failure: RuntimeError: Woot!)', + 'task2.t REVERTING', + 'task1.t REVERTING', + 'task2.t REVERTED', + 'task1.t REVERTED', + 'r.r RETRYING', + 'task1.t PENDING', + 'task2.t PENDING', + 'r.r RUNNING', + 'r.r SUCCESS(2)', + 'task1.t RUNNING', + 'task2.t RUNNING', + 'task1.t SUCCESS(5)', + 'task2.t SUCCESS(None)', + 'flow-1.f SUCCESS'] + self.assertItemsEqual(capturer.values, expected) def test_nested_flow_reverts_parent_retries(self): retry1 = retry.Times(3, 'r1', provides='x') retry2 = retry.Times(0, 'r2', provides='x2') - flow = lf.Flow('flow-1', retry1).add( - utils.SaveOrderTask("task1"), + utils.ProgressingTask("task1"), lf.Flow('flow-2', retry2).add(utils.ConditionalTask("task2")) ) engine = self._make_engine(flow) engine.storage.inject({'y': 2}) - utils.register_notifiers(engine, self.values) - engine.run() + with utils.CaptureListener(engine) as capturer: + engine.run() self.assertEqual(engine.storage.fetch_all(), {'y': 2, 'x': 2, 'x2': 1}) - expected = ['flow RUNNING', - 'r1 RUNNING', - 'r1 SUCCESS', - 'task1 RUNNING', - 'task1', - 'task1 SUCCESS', - 'r2 RUNNING', - 'r2 SUCCESS', - 'task2 RUNNING', - 'task2', - 'task2 FAILURE', - 'task2 REVERTING', - u'task2 reverted(Failure: RuntimeError: Woot!)', - 'task2 REVERTED', - 'r2 REVERTING', - 'r2 REVERTED', - 'task1 REVERTING', - 'task1 
reverted(5)', - 'task1 REVERTED', - 'r1 RETRYING', - 'task1 PENDING', - 'r2 PENDING', - 'task2 PENDING', - 'r1 RUNNING', - 'r1 SUCCESS', - 'task1 RUNNING', - 'task1', - 'task1 SUCCESS', - 'r2 RUNNING', - 'r2 SUCCESS', - 'task2 RUNNING', - 'task2', - 'task2 SUCCESS', - 'flow SUCCESS'] - self.assertEqual(self.values, expected) + expected = ['flow-1.f RUNNING', + 'r1.r RUNNING', + 'r1.r SUCCESS(1)', + 'task1.t RUNNING', + 'task1.t SUCCESS(5)', + 'r2.r RUNNING', + 'r2.r SUCCESS(1)', + 'task2.t RUNNING', + 'task2.t FAILURE(Failure: RuntimeError: Woot!)', + 'task2.t REVERTING', + 'task2.t REVERTED', + 'r2.r REVERTING', + 'r2.r REVERTED', + 'task1.t REVERTING', + 'task1.t REVERTED', + 'r1.r RETRYING', + 'task1.t PENDING', + 'r2.r PENDING', + 'task2.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(2)', + 'task1.t RUNNING', + 'task1.t SUCCESS(5)', + 'r2.r RUNNING', + 'r2.r SUCCESS(1)', + 'task2.t RUNNING', + 'task2.t SUCCESS(None)', + 'flow-1.f SUCCESS'] + self.assertEqual(expected, capturer.values) def test_revert_all_retry(self): flow = lf.Flow('flow-1', retry.Times(3, 'r1', provides='x')).add( - utils.SaveOrderTask("task1"), + utils.ProgressingTask("task1"), lf.Flow('flow-2', retry.AlwaysRevertAll('r2')).add( utils.ConditionalTask("task2")) ) engine = self._make_engine(flow) engine.storage.inject({'y': 2}) - utils.register_notifiers(engine, self.values) - self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) + with utils.CaptureListener(engine) as capturer: + self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) self.assertEqual(engine.storage.fetch_all(), {'y': 2}) - expected = ['flow RUNNING', - 'r1 RUNNING', - 'r1 SUCCESS', - 'task1 RUNNING', - 'task1', - 'task1 SUCCESS', - 'r2 RUNNING', - 'r2 SUCCESS', - 'task2 RUNNING', - 'task2', - 'task2 FAILURE', - 'task2 REVERTING', - u'task2 reverted(Failure: RuntimeError: Woot!)', - 'task2 REVERTED', - 'r2 REVERTING', - 'r2 REVERTED', - 'task1 REVERTING', - 'task1 reverted(5)', - 'task1 REVERTED', - 'r1 REVERTING', - 
'r1 REVERTED', - 'flow REVERTED'] - self.assertEqual(self.values, expected) + expected = ['flow-1.f RUNNING', + 'r1.r RUNNING', + 'r1.r SUCCESS(1)', + 'task1.t RUNNING', + 'task1.t SUCCESS(5)', + 'r2.r RUNNING', + 'r2.r SUCCESS(None)', + 'task2.t RUNNING', + 'task2.t FAILURE(Failure: RuntimeError: Woot!)', + 'task2.t REVERTING', + 'task2.t REVERTED', + 'r2.r REVERTING', + 'r2.r REVERTED', + 'task1.t REVERTING', + 'task1.t REVERTED', + 'r1.r REVERTING', + 'r1.r REVERTED', + 'flow-1.f REVERTED'] + self.assertEqual(expected, capturer.values) def test_restart_reverted_flow_with_retry(self): flow = lf.Flow('test', retry=utils.OneReturnRetry(provides='x')).add( @@ -386,123 +386,213 @@ class RetryTest(utils.EngineTestBase): def test_resume_flow_that_had_been_interrupted_during_retrying(self): flow = lf.Flow('flow-1', retry.Times(3, 'r1')).add( - utils.SaveOrderTask('t1'), - utils.SaveOrderTask('t2'), - utils.SaveOrderTask('t3') + utils.ProgressingTask('t1'), + utils.ProgressingTask('t2'), + utils.ProgressingTask('t3') ) engine = self._make_engine(flow) engine.compile() engine.prepare() - utils.register_notifiers(engine, self.values) - engine.storage.set_atom_state('r1', st.RETRYING) - engine.storage.set_atom_state('t1', st.PENDING) - engine.storage.set_atom_state('t2', st.REVERTED) - engine.storage.set_atom_state('t3', st.REVERTED) - - engine.run() - expected = ['flow RUNNING', - 't2 PENDING', - 't3 PENDING', - 'r1 RUNNING', - 'r1 SUCCESS', - 't1 RUNNING', - 't1', - 't1 SUCCESS', - 't2 RUNNING', - 't2', - 't2 SUCCESS', - 't3 RUNNING', - 't3', - 't3 SUCCESS', - 'flow SUCCESS'] - self.assertEqual(self.values, expected) + with utils.CaptureListener(engine) as capturer: + engine.storage.set_atom_state('r1', st.RETRYING) + engine.storage.set_atom_state('t1', st.PENDING) + engine.storage.set_atom_state('t2', st.REVERTED) + engine.storage.set_atom_state('t3', st.REVERTED) + engine.run() + expected = ['flow-1.f RUNNING', + 't2.t PENDING', + 't3.t PENDING', + 'r1.r RUNNING', + 
'r1.r SUCCESS(1)', + 't1.t RUNNING', + 't1.t SUCCESS(5)', + 't2.t RUNNING', + 't2.t SUCCESS(5)', + 't3.t RUNNING', + 't3.t SUCCESS(5)', + 'flow-1.f SUCCESS'] + self.assertEqual(capturer.values, expected) def test_resume_flow_that_should_be_retried(self): flow = lf.Flow('flow-1', retry.Times(3, 'r1')).add( - utils.SaveOrderTask('t1'), - utils.SaveOrderTask('t2') + utils.ProgressingTask('t1'), + utils.ProgressingTask('t2') ) engine = self._make_engine(flow) engine.compile() engine.prepare() - utils.register_notifiers(engine, self.values) - engine.storage.set_atom_intention('r1', st.RETRY) - engine.storage.set_atom_state('r1', st.SUCCESS) - engine.storage.set_atom_state('t1', st.REVERTED) - engine.storage.set_atom_state('t2', st.REVERTED) - - engine.run() - expected = ['flow RUNNING', - 'r1 RETRYING', - 't1 PENDING', - 't2 PENDING', - 'r1 RUNNING', - 'r1 SUCCESS', - 't1 RUNNING', - 't1', - 't1 SUCCESS', - 't2 RUNNING', - 't2', - 't2 SUCCESS', - 'flow SUCCESS'] - self.assertEqual(self.values, expected) + with utils.CaptureListener(engine) as capturer: + engine.storage.set_atom_intention('r1', st.RETRY) + engine.storage.set_atom_state('r1', st.SUCCESS) + engine.storage.set_atom_state('t1', st.REVERTED) + engine.storage.set_atom_state('t2', st.REVERTED) + engine.run() + expected = ['flow-1.f RUNNING', + 'r1.r RETRYING', + 't1.t PENDING', + 't2.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(1)', + 't1.t RUNNING', + 't1.t SUCCESS(5)', + 't2.t RUNNING', + 't2.t SUCCESS(5)', + 'flow-1.f SUCCESS'] + self.assertEqual(expected, capturer.values) def test_retry_tasks_that_has_not_been_reverted(self): flow = lf.Flow('flow-1', retry.Times(3, 'r1', provides='x')).add( utils.ConditionalTask('c'), - utils.SaveOrderTask('t1') + utils.ProgressingTask('t1') ) engine = self._make_engine(flow) engine.storage.inject({'y': 2}) - engine.run() - expected = ['c', - u'c reverted(Failure: RuntimeError: Woot!)', - 'c', - 't1'] - self.assertEqual(self.values, expected) + with 
utils.CaptureListener(engine) as capturer: + engine.run() + expected = ['flow-1.f RUNNING', + 'r1.r RUNNING', + 'r1.r SUCCESS(1)', + 'c.t RUNNING', + 'c.t FAILURE(Failure: RuntimeError: Woot!)', + 'c.t REVERTING', + 'c.t REVERTED', + 'r1.r RETRYING', + 'c.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(2)', + 'c.t RUNNING', + 'c.t SUCCESS(None)', + 't1.t RUNNING', + 't1.t SUCCESS(5)', + 'flow-1.f SUCCESS'] + self.assertEqual(capturer.values, expected) def test_default_times_retry(self): flow = lf.Flow('flow-1', retry.Times(3, 'r1')).add( - utils.SaveOrderTask('t1'), + utils.ProgressingTask('t1'), utils.FailingTask('t2')) engine = self._make_engine(flow) - - self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) - expected = ['t1', - u't2 reverted(Failure: RuntimeError: Woot!)', - 't1 reverted(5)', - 't1', - u't2 reverted(Failure: RuntimeError: Woot!)', - 't1 reverted(5)', - 't1', - u't2 reverted(Failure: RuntimeError: Woot!)', - 't1 reverted(5)'] - self.assertEqual(self.values, expected) + with utils.CaptureListener(engine) as capturer: + self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) + expected = ['flow-1.f RUNNING', + 'r1.r RUNNING', + 'r1.r SUCCESS(1)', + 't1.t RUNNING', + 't1.t SUCCESS(5)', + 't2.t RUNNING', + 't2.t FAILURE(Failure: RuntimeError: Woot!)', + 't2.t REVERTING', + 't2.t REVERTED', + 't1.t REVERTING', + 't1.t REVERTED', + 'r1.r RETRYING', + 't1.t PENDING', + 't2.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(2)', + 't1.t RUNNING', + 't1.t SUCCESS(5)', + 't2.t RUNNING', + 't2.t FAILURE(Failure: RuntimeError: Woot!)', + 't2.t REVERTING', + 't2.t REVERTED', + 't1.t REVERTING', + 't1.t REVERTED', + 'r1.r RETRYING', + 't1.t PENDING', + 't2.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(3)', + 't1.t RUNNING', + 't1.t SUCCESS(5)', + 't2.t RUNNING', + 't2.t FAILURE(Failure: RuntimeError: Woot!)', + 't2.t REVERTING', + 't2.t REVERTED', + 't1.t REVERTING', + 't1.t REVERTED', + 'r1.r REVERTING', + 'r1.r REVERTED', + 'flow-1.f REVERTED'] + 
self.assertEqual(expected, capturer.values) def test_for_each_with_list(self): collection = [3, 2, 3, 5] retry1 = retry.ForEach(collection, 'r1', provides='x') flow = lf.Flow('flow-1', retry1).add(utils.FailingTaskWithOneArg('t1')) engine = self._make_engine(flow) - - self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) - expected = [u't1 reverted(Failure: RuntimeError: Woot with 3)', - u't1 reverted(Failure: RuntimeError: Woot with 2)', - u't1 reverted(Failure: RuntimeError: Woot with 3)', - u't1 reverted(Failure: RuntimeError: Woot with 5)'] - self.assertEqual(self.values, expected) + with utils.CaptureListener(engine) as capturer: + self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) + expected = ['flow-1.f RUNNING', + 'r1.r RUNNING', + 'r1.r SUCCESS(3)', + 't1.t RUNNING', + 't1.t FAILURE(Failure: RuntimeError: Woot with 3)', + 't1.t REVERTING', + 't1.t REVERTED', + 'r1.r RETRYING', + 't1.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(2)', + 't1.t RUNNING', + 't1.t FAILURE(Failure: RuntimeError: Woot with 2)', + 't1.t REVERTING', + 't1.t REVERTED', + 'r1.r RETRYING', + 't1.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(3)', + 't1.t RUNNING', + 't1.t FAILURE(Failure: RuntimeError: Woot with 3)', + 't1.t REVERTING', + 't1.t REVERTED', + 'r1.r RETRYING', + 't1.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(5)', + 't1.t RUNNING', + 't1.t FAILURE(Failure: RuntimeError: Woot with 5)', + 't1.t REVERTING', + 't1.t REVERTED', + 'r1.r REVERTING', + 'r1.r REVERTED', + 'flow-1.f REVERTED'] + self.assertEqual(expected, capturer.values) def test_for_each_with_set(self): collection = set([3, 2, 5]) retry1 = retry.ForEach(collection, 'r1', provides='x') flow = lf.Flow('flow-1', retry1).add(utils.FailingTaskWithOneArg('t1')) engine = self._make_engine(flow) - - self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) - expected = [u't1 reverted(Failure: RuntimeError: Woot with 3)', - u't1 reverted(Failure: RuntimeError: Woot with 2)', - u't1 reverted(Failure: 
RuntimeError: Woot with 5)'] - self.assertItemsEqual(self.values, expected) + with utils.CaptureListener(engine) as capturer: + self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) + expected = ['flow-1.f RUNNING', + 'r1.r RUNNING', + 'r1.r SUCCESS(2)', + 't1.t RUNNING', + 't1.t FAILURE(Failure: RuntimeError: Woot with 2)', + 't1.t REVERTING', + 't1.t REVERTED', + 'r1.r RETRYING', + 't1.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(3)', + 't1.t RUNNING', + 't1.t FAILURE(Failure: RuntimeError: Woot with 3)', + 't1.t REVERTING', + 't1.t REVERTED', + 'r1.r RETRYING', + 't1.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(5)', + 't1.t RUNNING', + 't1.t FAILURE(Failure: RuntimeError: Woot with 5)', + 't1.t REVERTING', + 't1.t REVERTED', + 'r1.r REVERTING', + 'r1.r REVERTED', + 'flow-1.f REVERTED'] + self.assertItemsEqual(capturer.values, expected) def test_for_each_empty_collection(self): values = [] @@ -518,12 +608,35 @@ class RetryTest(utils.EngineTestBase): flow = lf.Flow('flow-1', retry1).add(utils.FailingTaskWithOneArg('t1')) engine = self._make_engine(flow) engine.storage.inject({'values': values, 'y': 1}) - - self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) - expected = [u't1 reverted(Failure: RuntimeError: Woot with 3)', - u't1 reverted(Failure: RuntimeError: Woot with 2)', - u't1 reverted(Failure: RuntimeError: Woot with 5)'] - self.assertEqual(self.values, expected) + with utils.CaptureListener(engine) as capturer: + self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) + expected = ['flow-1.f RUNNING', + 'r1.r RUNNING', + 'r1.r SUCCESS(3)', + 't1.t RUNNING', + 't1.t FAILURE(Failure: RuntimeError: Woot with 3)', + 't1.t REVERTING', + 't1.t REVERTED', + 'r1.r RETRYING', + 't1.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(2)', + 't1.t RUNNING', + 't1.t FAILURE(Failure: RuntimeError: Woot with 2)', + 't1.t REVERTING', + 't1.t REVERTED', + 'r1.r RETRYING', + 't1.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(5)', + 't1.t RUNNING', + 't1.t 
FAILURE(Failure: RuntimeError: Woot with 5)', + 't1.t REVERTING', + 't1.t REVERTED', + 'r1.r REVERTING', + 'r1.r REVERTED', + 'flow-1.f REVERTED'] + self.assertEqual(expected, capturer.values) def test_parameterized_for_each_with_set(self): values = ([3, 2, 5]) @@ -531,12 +644,35 @@ class RetryTest(utils.EngineTestBase): flow = lf.Flow('flow-1', retry1).add(utils.FailingTaskWithOneArg('t1')) engine = self._make_engine(flow) engine.storage.inject({'values': values, 'y': 1}) - - self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) - expected = [u't1 reverted(Failure: RuntimeError: Woot with 3)', - u't1 reverted(Failure: RuntimeError: Woot with 2)', - u't1 reverted(Failure: RuntimeError: Woot with 5)'] - self.assertItemsEqual(self.values, expected) + with utils.CaptureListener(engine) as capturer: + self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) + expected = ['flow-1.f RUNNING', + 'r1.r RUNNING', + 'r1.r SUCCESS(3)', + 't1.t RUNNING', + 't1.t FAILURE(Failure: RuntimeError: Woot with 3)', + 't1.t REVERTING', + 't1.t REVERTED', + 'r1.r RETRYING', + 't1.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(2)', + 't1.t RUNNING', + 't1.t FAILURE(Failure: RuntimeError: Woot with 2)', + 't1.t REVERTING', + 't1.t REVERTED', + 'r1.r RETRYING', + 't1.t PENDING', + 'r1.r RUNNING', + 'r1.r SUCCESS(5)', + 't1.t RUNNING', + 't1.t FAILURE(Failure: RuntimeError: Woot with 5)', + 't1.t REVERTING', + 't1.t REVERTED', + 'r1.r REVERTING', + 'r1.r REVERTED', + 'flow-1.f REVERTED'] + self.assertItemsEqual(capturer.values, expected) def test_parameterized_for_each_empty_collection(self): values = [] @@ -548,7 +684,7 @@ class RetryTest(utils.EngineTestBase): def _pretend_to_run_a_flow_and_crash(self, when): flow = uf.Flow('flow-1', retry.Times(3, provides='x')).add( - utils.SaveOrderTask('task1')) + utils.ProgressingTask('task1')) engine = self._make_engine(flow) engine.compile() engine.prepare() @@ -559,7 +695,7 @@ class RetryTest(utils.EngineTestBase): # we execute retry 
engine.storage.save('flow-1_retry', 1) # task fails - fail = misc.Failure.from_exception(RuntimeError('foo')), + fail = failure.Failure.from_exception(RuntimeError('foo')) engine.storage.save('task1', fail, state=st.FAILURE) if when == 'task fails': return engine @@ -583,83 +719,100 @@ class RetryTest(utils.EngineTestBase): def test_resumption_on_crash_after_task_failure(self): engine = self._pretend_to_run_a_flow_and_crash('task fails') - # then process die and we resume engine - engine.run() - expected = [u'task1 reverted(Failure: RuntimeError: foo)', 'task1'] - self.assertEqual(self.values, expected) + with utils.CaptureListener(engine) as capturer: + engine.run() + expected = ['task1.t REVERTING', + 'task1.t REVERTED', + 'flow-1_retry.r RETRYING', + 'task1.t PENDING', + 'flow-1_retry.r RUNNING', + 'flow-1_retry.r SUCCESS(2)', + 'task1.t RUNNING', + 'task1.t SUCCESS(5)', + 'flow-1.f SUCCESS'] + self.assertEqual(expected, capturer.values) def test_resumption_on_crash_after_retry_queried(self): engine = self._pretend_to_run_a_flow_and_crash('retry queried') - # then process die and we resume engine - engine.run() - expected = [u'task1 reverted(Failure: RuntimeError: foo)', 'task1'] - self.assertEqual(self.values, expected) + with utils.CaptureListener(engine) as capturer: + engine.run() + expected = ['task1.t REVERTING', + 'task1.t REVERTED', + 'flow-1_retry.r RETRYING', + 'task1.t PENDING', + 'flow-1_retry.r RUNNING', + 'flow-1_retry.r SUCCESS(2)', + 'task1.t RUNNING', + 'task1.t SUCCESS(5)', + 'flow-1.f SUCCESS'] + self.assertEqual(expected, capturer.values) def test_resumption_on_crash_after_retry_updated(self): engine = self._pretend_to_run_a_flow_and_crash('retry updated') - # then process die and we resume engine - engine.run() - expected = [u'task1 reverted(Failure: RuntimeError: foo)', 'task1'] - self.assertEqual(self.values, expected) + with utils.CaptureListener(engine) as capturer: + engine.run() + expected = ['task1.t REVERTING', + 'task1.t REVERTED', 
+ 'flow-1_retry.r RETRYING', + 'task1.t PENDING', + 'flow-1_retry.r RUNNING', + 'flow-1_retry.r SUCCESS(2)', + 'task1.t RUNNING', + 'task1.t SUCCESS(5)', + 'flow-1.f SUCCESS'] + self.assertEqual(expected, capturer.values) def test_resumption_on_crash_after_task_updated(self): engine = self._pretend_to_run_a_flow_and_crash('task updated') - # then process die and we resume engine - engine.run() - expected = [u'task1 reverted(Failure: RuntimeError: foo)', 'task1'] - self.assertEqual(self.values, expected) + with utils.CaptureListener(engine) as capturer: + engine.run() + expected = ['task1.t REVERTING', + 'task1.t REVERTED', + 'flow-1_retry.r RETRYING', + 'task1.t PENDING', + 'flow-1_retry.r RUNNING', + 'flow-1_retry.r SUCCESS(2)', + 'task1.t RUNNING', + 'task1.t SUCCESS(5)', + 'flow-1.f SUCCESS'] + self.assertEqual(expected, capturer.values) def test_resumption_on_crash_after_revert_scheduled(self): engine = self._pretend_to_run_a_flow_and_crash('revert scheduled') - # then process die and we resume engine - engine.run() - expected = [u'task1 reverted(Failure: RuntimeError: foo)', 'task1'] - self.assertEqual(self.values, expected) + with utils.CaptureListener(engine) as capturer: + engine.run() + expected = ['task1.t REVERTED', + 'flow-1_retry.r RETRYING', + 'task1.t PENDING', + 'flow-1_retry.r RUNNING', + 'flow-1_retry.r SUCCESS(2)', + 'task1.t RUNNING', + 'task1.t SUCCESS(5)', + 'flow-1.f SUCCESS'] + self.assertEqual(capturer.values, expected) def test_retry_fails(self): - - class FailingRetry(retry.Retry): - - def execute(self, **kwargs): - raise ValueError('OMG I FAILED') - - def revert(self, history, **kwargs): - self.history = history - - def on_failure(self, **kwargs): - return retry.REVERT - r = FailingRetry() flow = lf.Flow('testflow', r) - self.assertRaisesRegexp(ValueError, '^OMG', - self._make_engine(flow).run) - self.assertEqual(len(r.history), 1) - self.assertEqual(r.history[0][1], {}) - self.assertEqual(isinstance(r.history[0][0], misc.Failure), True) 
+ engine = self._make_engine(flow) + self.assertRaisesRegexp(ValueError, '^OMG', engine.run) + self.assertEqual(1, len(engine.storage.get_retry_histories())) + self.assertEqual(len(r.history), 0) + self.assertEqual([], list(r.history.outcomes_iter())) + self.assertIsNotNone(r.history.failure) + self.assertTrue(r.history.caused_by(ValueError, include_retry=True)) def test_retry_revert_fails(self): - - class FailingRetry(retry.Retry): - - def execute(self, **kwargs): - raise ValueError('OMG I FAILED') - - def revert(self, history, **kwargs): - raise ValueError('WOOT!') - - def on_failure(self, **kwargs): - return retry.REVERT - - r = FailingRetry() + r = NastyFailingRetry() flow = lf.Flow('testflow', r) engine = self._make_engine(flow) self.assertRaisesRegexp(ValueError, '^WOOT', engine.run) def test_nested_provides_graph_reverts_correctly(self): flow = gf.Flow("test").add( - utils.SaveOrderTask('a', requires=['x']), + utils.ProgressingTask('a', requires=['x']), lf.Flow("test2", retry=retry.Times(2)).add( - utils.SaveOrderTask('b', provides='x'), + utils.ProgressingTask('b', provides='x'), utils.FailingTask('c'))) engine = self._make_engine(flow) engine.compile() @@ -667,45 +820,64 @@ class RetryTest(utils.EngineTestBase): engine.storage.save('test2_retry', 1) engine.storage.save('b', 11) engine.storage.save('a', 10) - self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) - self.assertItemsEqual(self.values[:3], [ - 'a reverted(10)', - 'c reverted(Failure: RuntimeError: Woot!)', - 'b reverted(11)', - ]) + with utils.CaptureListener(engine, capture_flow=False) as capturer: + self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) + expected = ['c.t RUNNING', + 'c.t FAILURE(Failure: RuntimeError: Woot!)', + 'a.t REVERTING', + 'c.t REVERTING', + 'a.t REVERTED', + 'c.t REVERTED', + 'b.t REVERTING', + 'b.t REVERTED'] + self.assertItemsEqual(capturer.values[:8], expected) # Task 'a' was or was not executed again, both cases are ok. 
- self.assertIsSuperAndSubsequence(self.values[3:], [ - 'b', - 'c reverted(Failure: RuntimeError: Woot!)', - 'b reverted(5)' + self.assertIsSuperAndSubsequence(capturer.values[8:], [ + 'b.t RUNNING', + 'c.t FAILURE(Failure: RuntimeError: Woot!)', + 'b.t REVERTED', ]) self.assertEqual(engine.storage.get_flow_state(), st.REVERTED) def test_nested_provides_graph_retried_correctly(self): flow = gf.Flow("test").add( - utils.SaveOrderTask('a', requires=['x']), + utils.ProgressingTask('a', requires=['x']), lf.Flow("test2", retry=retry.Times(2)).add( - utils.SaveOrderTask('b', provides='x'), - utils.SaveOrderTask('c'))) + utils.ProgressingTask('b', provides='x'), + utils.ProgressingTask('c'))) engine = self._make_engine(flow) engine.compile() engine.prepare() engine.storage.save('test2_retry', 1) engine.storage.save('b', 11) # pretend that 'c' failed - fail = misc.Failure.from_exception(RuntimeError('Woot!')) + fail = failure.Failure.from_exception(RuntimeError('Woot!')) engine.storage.save('c', fail, st.FAILURE) - - engine.run() - self.assertItemsEqual(self.values[:2], [ - 'c reverted(Failure: RuntimeError: Woot!)', - 'b reverted(11)', - ]) - self.assertItemsEqual(self.values[2:], ['b', 'c', 'a']) + with utils.CaptureListener(engine, capture_flow=False) as capturer: + engine.run() + expected = ['c.t REVERTING', + 'c.t REVERTED', + 'b.t REVERTING', + 'b.t REVERTED'] + self.assertItemsEqual(capturer.values[:4], expected) + expected = ['test2_retry.r RETRYING', + 'b.t PENDING', + 'c.t PENDING', + 'test2_retry.r RUNNING', + 'test2_retry.r SUCCESS(2)', + 'b.t RUNNING', + 'b.t SUCCESS(5)', + 'a.t RUNNING', + 'c.t RUNNING', + 'a.t SUCCESS(5)', + 'c.t SUCCESS(5)'] + self.assertItemsEqual(expected, capturer.values[4:]) self.assertEqual(engine.storage.get_flow_state(), st.SUCCESS) class RetryParallelExecutionTest(utils.EngineTestBase): + # FIXME(harlowja): fix this class so that it doesn't use events or uses + # them in a way that works with more executors... 
def test_when_subflow_fails_revert_running_tasks(self): waiting_task = utils.WaitForOneFromTask('task1', 'task2', @@ -717,21 +889,35 @@ class RetryParallelExecutionTest(utils.EngineTestBase): engine = self._make_engine(flow) engine.task_notifier.register('*', waiting_task.callback) engine.storage.inject({'y': 2}) - engine.run() + with utils.CaptureListener(engine, capture_flow=False) as capturer: + engine.run() self.assertEqual(engine.storage.fetch_all(), {'y': 2, 'x': 2}) - expected = ['task2', - 'task1', - u'task2 reverted(Failure: RuntimeError: Woot!)', - 'task1 reverted(5)', - 'task2', - 'task1'] - self.assertItemsEqual(self.values, expected) + expected = ['r.r RUNNING', + 'r.r SUCCESS(1)', + 'task1.t RUNNING', + 'task2.t RUNNING', + 'task2.t FAILURE(Failure: RuntimeError: Woot!)', + 'task2.t REVERTING', + 'task2.t REVERTED', + 'task1.t SUCCESS(5)', + 'task1.t REVERTING', + 'task1.t REVERTED', + 'r.r RETRYING', + 'task1.t PENDING', + 'task2.t PENDING', + 'r.r RUNNING', + 'r.r SUCCESS(2)', + 'task1.t RUNNING', + 'task2.t RUNNING', + 'task2.t SUCCESS(None)', + 'task1.t SUCCESS(5)'] + self.assertItemsEqual(capturer.values, expected) def test_when_subflow_fails_revert_success_tasks(self): waiting_task = utils.WaitForOneFromTask('task2', 'task1', [st.SUCCESS, st.FAILURE]) flow = uf.Flow('flow-1', retry.Times(3, 'r', provides='x')).add( - utils.SaveOrderTask('task1'), + utils.ProgressingTask('task1'), lf.Flow('flow-2').add( waiting_task, utils.ConditionalTask('task3')) @@ -739,35 +925,81 @@ class RetryParallelExecutionTest(utils.EngineTestBase): engine = self._make_engine(flow) engine.task_notifier.register('*', waiting_task.callback) engine.storage.inject({'y': 2}) - engine.run() + with utils.CaptureListener(engine, capture_flow=False) as capturer: + engine.run() self.assertEqual(engine.storage.fetch_all(), {'y': 2, 'x': 2}) - expected = ['task1', - 'task2', - 'task3', - u'task3 reverted(Failure: RuntimeError: Woot!)', - 'task1 reverted(5)', - 'task2 reverted(5)', - 
'task1', - 'task2', - 'task3'] - self.assertItemsEqual(self.values, expected) + expected = ['r.r RUNNING', + 'r.r SUCCESS(1)', + 'task1.t RUNNING', + 'task2.t RUNNING', + 'task1.t SUCCESS(5)', + 'task2.t SUCCESS(5)', + 'task3.t RUNNING', + 'task3.t FAILURE(Failure: RuntimeError: Woot!)', + 'task3.t REVERTING', + 'task1.t REVERTING', + 'task3.t REVERTED', + 'task1.t REVERTED', + 'task2.t REVERTING', + 'task2.t REVERTED', + 'r.r RETRYING', + 'task1.t PENDING', + 'task2.t PENDING', + 'task3.t PENDING', + 'r.r RUNNING', + 'r.r SUCCESS(2)', + 'task1.t RUNNING', + 'task2.t RUNNING', + 'task1.t SUCCESS(5)', + 'task2.t SUCCESS(5)', + 'task3.t RUNNING', + 'task3.t SUCCESS(None)'] + self.assertItemsEqual(capturer.values, expected) -class SingleThreadedEngineTest(RetryTest, - test.TestCase): +class SerialEngineTest(RetryTest, test.TestCase): def _make_engine(self, flow, flow_detail=None): return taskflow.engines.load(flow, flow_detail=flow_detail, - engine_conf='serial', + engine='serial', backend=self.backend) -class MultiThreadedEngineTest(RetryTest, - RetryParallelExecutionTest, - test.TestCase): +class ParallelEngineWithThreadsTest(RetryTest, + RetryParallelExecutionTest, + test.TestCase): + _EXECUTOR_WORKERS = 2 + def _make_engine(self, flow, flow_detail=None, executor=None): - engine_conf = dict(engine='parallel') + if executor is None: + executor = 'threads' return taskflow.engines.load(flow, flow_detail=flow_detail, - engine_conf=engine_conf, + engine='parallel', backend=self.backend, + executor=executor, + max_workers=self._EXECUTOR_WORKERS) + + +@testtools.skipIf(not eu.EVENTLET_AVAILABLE, 'eventlet is not available') +class ParallelEngineWithEventletTest(RetryTest, test.TestCase): + + def _make_engine(self, flow, flow_detail=None, executor=None): + if executor is None: + executor = futures.GreenThreadPoolExecutor() + self.addCleanup(executor.shutdown) + return taskflow.engines.load(flow, flow_detail=flow_detail, + backend=self.backend, engine='parallel', 
executor=executor) + + +class ParallelEngineWithProcessTest(RetryTest, test.TestCase): + _EXECUTOR_WORKERS = 2 + + def _make_engine(self, flow, flow_detail=None, executor=None): + if executor is None: + executor = 'processes' + return taskflow.engines.load(flow, flow_detail=flow_detail, + engine='parallel', + backend=self.backend, + executor=executor, + max_workers=self._EXECUTOR_WORKERS) diff --git a/taskflow/tests/unit/test_storage.py b/taskflow/tests/unit/test_storage.py index 001cba97..5f521afb 100644 --- a/taskflow/tests/unit/test_storage.py +++ b/taskflow/tests/unit/test_storage.py @@ -17,16 +17,16 @@ import contextlib import threading -import mock +from oslo_utils import uuidutils from taskflow import exceptions -from taskflow.openstack.common import uuidutils from taskflow.persistence import backends from taskflow.persistence import logbook from taskflow import states from taskflow import storage from taskflow import test -from taskflow.utils import misc +from taskflow.tests import utils as test_utils +from taskflow.types import failure from taskflow.utils import persistence_utils as p_utils @@ -60,7 +60,7 @@ class StorageTestMixin(object): def test_non_saving_storage(self): _lb, flow_detail = p_utils.temporary_flow_detail(self.backend) s = storage.SingleThreadedStorage(flow_detail=flow_detail) - s.ensure_task('my_task') + s.ensure_atom(test_utils.NoopTask('my_task')) self.assertTrue(uuidutils.is_uuid_like(s.get_atom_uuid('my_task'))) def test_flow_name_and_uuid(self): @@ -71,14 +71,14 @@ class StorageTestMixin(object): def test_ensure_task(self): s = self._get_storage() - s.ensure_task('my task') + s.ensure_atom(test_utils.NoopTask('my task')) self.assertEqual(s.get_atom_state('my task'), states.PENDING) self.assertTrue(uuidutils.is_uuid_like(s.get_atom_uuid('my task'))) def test_get_tasks_states(self): s = self._get_storage() - s.ensure_task('my task') - s.ensure_task('my task2') + s.ensure_atom(test_utils.NoopTask('my task')) + 
s.ensure_atom(test_utils.NoopTask('my task2')) s.save('my task', 'foo') expected = { 'my task': (states.SUCCESS, states.EXECUTE), @@ -89,7 +89,9 @@ class StorageTestMixin(object): def test_ensure_task_flow_detail(self): _lb, flow_detail = p_utils.temporary_flow_detail(self.backend) s = self._get_storage(flow_detail) - s.ensure_task('my task', '3.11') + t = test_utils.NoopTask('my task') + t.version = (3, 11) + s.ensure_atom(t) td = flow_detail.find(s.get_atom_uuid('my task')) self.assertIsNotNone(td) self.assertEqual(td.name, 'my task') @@ -108,12 +110,12 @@ class StorageTestMixin(object): td = logbook.TaskDetail(name='my_task', uuid='42') flow_detail.add(td) s = self._get_storage(flow_detail) - s.ensure_task('my_task') + s.ensure_atom(test_utils.NoopTask('my_task')) self.assertEqual('42', s.get_atom_uuid('my_task')) def test_save_and_get(self): s = self._get_storage() - s.ensure_task('my task') + s.ensure_atom(test_utils.NoopTask('my task')) s.save('my task', 5) self.assertEqual(s.get('my task'), 5) self.assertEqual(s.fetch_all(), {}) @@ -121,62 +123,62 @@ class StorageTestMixin(object): def test_save_and_get_other_state(self): s = self._get_storage() - s.ensure_task('my task') + s.ensure_atom(test_utils.NoopTask('my task')) s.save('my task', 5, states.FAILURE) self.assertEqual(s.get('my task'), 5) self.assertEqual(s.get_atom_state('my task'), states.FAILURE) def test_save_and_get_cached_failure(self): - failure = misc.Failure.from_exception(RuntimeError('Woot!')) + a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) s = self._get_storage() - s.ensure_task('my task') - s.save('my task', failure, states.FAILURE) - self.assertEqual(s.get('my task'), failure) + s.ensure_atom(test_utils.NoopTask('my task')) + s.save('my task', a_failure, states.FAILURE) + self.assertEqual(s.get('my task'), a_failure) self.assertEqual(s.get_atom_state('my task'), states.FAILURE) self.assertTrue(s.has_failures()) - self.assertEqual(s.get_failures(), {'my task': failure}) + 
self.assertEqual(s.get_failures(), {'my task': a_failure}) def test_save_and_get_non_cached_failure(self): - failure = misc.Failure.from_exception(RuntimeError('Woot!')) + a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) s = self._get_storage() - s.ensure_task('my task') - s.save('my task', failure, states.FAILURE) - self.assertEqual(s.get('my task'), failure) + s.ensure_atom(test_utils.NoopTask('my task')) + s.save('my task', a_failure, states.FAILURE) + self.assertEqual(s.get('my task'), a_failure) s._failures['my task'] = None - self.assertTrue(failure.matches(s.get('my task'))) + self.assertTrue(a_failure.matches(s.get('my task'))) def test_get_failure_from_reverted_task(self): - failure = misc.Failure.from_exception(RuntimeError('Woot!')) + a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) s = self._get_storage() - s.ensure_task('my task') - s.save('my task', failure, states.FAILURE) + s.ensure_atom(test_utils.NoopTask('my task')) + s.save('my task', a_failure, states.FAILURE) s.set_atom_state('my task', states.REVERTING) - self.assertEqual(s.get('my task'), failure) + self.assertEqual(s.get('my task'), a_failure) s.set_atom_state('my task', states.REVERTED) - self.assertEqual(s.get('my task'), failure) + self.assertEqual(s.get('my task'), a_failure) def test_get_failure_after_reload(self): - failure = misc.Failure.from_exception(RuntimeError('Woot!')) + a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) s = self._get_storage() - s.ensure_task('my task') - s.save('my task', failure, states.FAILURE) + s.ensure_atom(test_utils.NoopTask('my task')) + s.save('my task', a_failure, states.FAILURE) s2 = self._get_storage(s._flowdetail) self.assertTrue(s2.has_failures()) self.assertEqual(1, len(s2.get_failures())) - self.assertTrue(failure.matches(s2.get('my task'))) + self.assertTrue(a_failure.matches(s2.get('my task'))) self.assertEqual(s2.get_atom_state('my task'), states.FAILURE) def test_get_non_existing_var(self): 
s = self._get_storage() - s.ensure_task('my task') + s.ensure_atom(test_utils.NoopTask('my task')) self.assertRaises(exceptions.NotFound, s.get, 'my task') def test_reset(self): s = self._get_storage() - s.ensure_task('my task') + s.ensure_atom(test_utils.NoopTask('my task')) s.save('my task', 5) s.reset('my task') self.assertEqual(s.get_atom_state('my task'), states.PENDING) @@ -184,13 +186,13 @@ class StorageTestMixin(object): def test_reset_unknown_task(self): s = self._get_storage() - s.ensure_task('my task') + s.ensure_atom(test_utils.NoopTask('my task')) self.assertEqual(s.reset('my task'), None) def test_fetch_by_name(self): s = self._get_storage() name = 'my result' - s.ensure_task('my task', '1.0', {name: None}) + s.ensure_atom(test_utils.NoopTask('my task', provides=name)) s.save('my task', 5) self.assertEqual(s.fetch(name), 5) self.assertEqual(s.fetch_all(), {name: 5}) @@ -203,7 +205,7 @@ class StorageTestMixin(object): def test_task_metadata_update_with_none(self): s = self._get_storage() - s.ensure_task('my task') + s.ensure_atom(test_utils.NoopTask('my task')) s.update_atom_metadata('my task', None) self.assertEqual(s.get_task_progress('my task'), 0.0) s.set_task_progress('my task', 0.5) @@ -213,13 +215,13 @@ class StorageTestMixin(object): def test_default_task_progress(self): s = self._get_storage() - s.ensure_task('my task') + s.ensure_atom(test_utils.NoopTask('my task')) self.assertEqual(s.get_task_progress('my task'), 0.0) self.assertEqual(s.get_task_progress_details('my task'), None) def test_task_progress(self): s = self._get_storage() - s.ensure_task('my task') + s.ensure_atom(test_utils.NoopTask('my task')) s.set_task_progress('my task', 0.5, {'test_data': 11}) self.assertEqual(s.get_task_progress('my task'), 0.5) @@ -244,7 +246,7 @@ class StorageTestMixin(object): def test_task_progress_erase(self): s = self._get_storage() - s.ensure_task('my task') + s.ensure_atom(test_utils.NoopTask('my task')) s.set_task_progress('my task', 0.8, {}) 
self.assertEqual(s.get_task_progress('my task'), 0.8) @@ -253,24 +255,22 @@ class StorageTestMixin(object): def test_fetch_result_not_ready(self): s = self._get_storage() name = 'my result' - s.ensure_task('my task', result_mapping={name: None}) + s.ensure_atom(test_utils.NoopTask('my task', provides=name)) self.assertRaises(exceptions.NotFound, s.get, name) self.assertEqual(s.fetch_all(), {}) def test_save_multiple_results(self): s = self._get_storage() - result_mapping = {'foo': 0, 'bar': 1, 'whole': None} - s.ensure_task('my task', result_mapping=result_mapping) + s.ensure_atom(test_utils.NoopTask('my task', provides=['foo', 'bar'])) s.save('my task', ('spam', 'eggs')) self.assertEqual(s.fetch_all(), { 'foo': 'spam', 'bar': 'eggs', - 'whole': ('spam', 'eggs') }) def test_mapping_none(self): s = self._get_storage() - s.ensure_task('my task') + s.ensure_atom(test_utils.NoopTask('my task')) s.save('my task', 5) self.assertEqual(s.fetch_all(), {}) @@ -314,7 +314,7 @@ class StorageTestMixin(object): s = self._get_storage(threaded=True) def ensure_my_task(): - s.ensure_task('my_task', result_mapping={}) + s.ensure_atom(test_utils.NoopTask('my_task')) threads = [] for i in range(0, self.thread_count): @@ -357,7 +357,7 @@ class StorageTestMixin(object): def test_set_and_get_task_state(self): s = self._get_storage() state = states.PENDING - s.ensure_task('my task') + s.ensure_atom(test_utils.NoopTask('my task')) s.set_atom_state('my task', state) self.assertEqual(s.get_atom_state('my task'), state) @@ -368,7 +368,7 @@ class StorageTestMixin(object): def test_task_by_name(self): s = self._get_storage() - s.ensure_task('my task') + s.ensure_atom(test_utils.NoopTask('my task')) self.assertTrue(uuidutils.is_uuid_like(s.get_atom_uuid('my task'))) def test_transient_storage_fetch_all(self): @@ -423,104 +423,85 @@ class StorageTestMixin(object): s.set_flow_state(states.SUCCESS) self.assertEqual(s.get_flow_state(), states.SUCCESS) - @mock.patch.object(storage.LOG, 'warning') - 
def test_result_is_checked(self, mocked_warning): + def test_result_is_checked(self): s = self._get_storage() - s.ensure_task('my task', result_mapping={'result': 'key'}) + s.ensure_atom(test_utils.NoopTask('my task', provides=set(['result']))) s.save('my task', {}) - mocked_warning.assert_called_once_with( - mock.ANY, 'my task', 'key', 'result') self.assertRaisesRegexp(exceptions.NotFound, '^Unable to find result', s.fetch, 'result') - @mock.patch.object(storage.LOG, 'warning') - def test_empty_result_is_checked(self, mocked_warning): + def test_empty_result_is_checked(self): s = self._get_storage() - s.ensure_task('my task', result_mapping={'a': 0}) + s.ensure_atom(test_utils.NoopTask('my task', provides=['a'])) s.save('my task', ()) - mocked_warning.assert_called_once_with( - mock.ANY, 'my task', 0, 'a') self.assertRaisesRegexp(exceptions.NotFound, '^Unable to find result', s.fetch, 'a') - @mock.patch.object(storage.LOG, 'warning') - def test_short_result_is_checked(self, mocked_warning): + def test_short_result_is_checked(self): s = self._get_storage() - s.ensure_task('my task', result_mapping={'a': 0, 'b': 1}) + s.ensure_atom(test_utils.NoopTask('my task', provides=['a', 'b'])) s.save('my task', ['result']) - mocked_warning.assert_called_once_with( - mock.ANY, 'my task', 1, 'b') self.assertEqual(s.fetch('a'), 'result') self.assertRaisesRegexp(exceptions.NotFound, '^Unable to find result', s.fetch, 'b') - @mock.patch.object(storage.LOG, 'warning') - def test_multiple_providers_are_checked(self, mocked_warning): - s = self._get_storage() - s.ensure_task('my task', result_mapping={'result': 'key'}) - self.assertEqual(mocked_warning.mock_calls, []) - s.ensure_task('my other task', result_mapping={'result': 'key'}) - mocked_warning.assert_called_once_with( - mock.ANY, 'result') - - @mock.patch.object(storage.LOG, 'warning') - def test_multiple_providers_with_inject_are_checked(self, mocked_warning): - s = self._get_storage() - s.inject({'result': 'DONE'}) - 
self.assertEqual(mocked_warning.mock_calls, []) - s.ensure_task('my other task', result_mapping={'result': 'key'}) - mocked_warning.assert_called_once_with(mock.ANY, 'result') - def test_ensure_retry(self): s = self._get_storage() - s.ensure_retry('my retry') + s.ensure_atom(test_utils.NoopRetry('my retry')) history = s.get_retry_history('my retry') - self.assertEqual(history, []) + self.assertEqual([], list(history)) def test_ensure_retry_and_task_with_same_name(self): s = self._get_storage() - s.ensure_task('my retry') + s.ensure_atom(test_utils.NoopTask('my retry')) self.assertRaisesRegexp(exceptions.Duplicate, - '^Atom detail', s.ensure_retry, 'my retry') + '^Atom detail', s.ensure_atom, + test_utils.NoopRetry('my retry')) def test_save_retry_results(self): s = self._get_storage() - s.ensure_retry('my retry') + s.ensure_atom(test_utils.NoopRetry('my retry')) s.save('my retry', 'a') s.save('my retry', 'b') history = s.get_retry_history('my retry') - self.assertEqual(history, [('a', {}), ('b', {})]) + self.assertEqual([('a', {}), ('b', {})], list(history)) + self.assertEqual(['a', 'b'], list(history.provided_iter())) def test_save_retry_results_with_mapping(self): s = self._get_storage() - s.ensure_retry('my retry', result_mapping={'x': 0}) + s.ensure_atom(test_utils.NoopRetry('my retry', provides=['x'])) s.save('my retry', 'a') s.save('my retry', 'b') history = s.get_retry_history('my retry') - self.assertEqual(history, [('a', {}), ('b', {})]) - self.assertEqual(s.fetch_all(), {'x': 'b'}) - self.assertEqual(s.fetch('x'), 'b') + self.assertEqual([('a', {}), ('b', {})], list(history)) + self.assertEqual(['a', 'b'], list(history.provided_iter())) + self.assertEqual({'x': 'b'}, s.fetch_all()) + self.assertEqual('b', s.fetch('x')) def test_cleanup_retry_history(self): s = self._get_storage() - s.ensure_retry('my retry', result_mapping={'x': 0}) + s.ensure_atom(test_utils.NoopRetry('my retry', provides=['x'])) s.save('my retry', 'a') s.save('my retry', 'b') 
s.cleanup_retry_history('my retry', states.REVERTED) history = s.get_retry_history('my retry') - self.assertEqual(history, []) + self.assertEqual(list(history), []) + self.assertEqual(0, len(history)) self.assertEqual(s.fetch_all(), {}) def test_cached_retry_failure(self): - failure = misc.Failure.from_exception(RuntimeError('Woot!')) + a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) s = self._get_storage() - s.ensure_retry('my retry', result_mapping={'x': 0}) + s.ensure_atom(test_utils.NoopRetry('my retry', provides=['x'])) s.save('my retry', 'a') - s.save('my retry', failure, states.FAILURE) + s.save('my retry', a_failure, states.FAILURE) history = s.get_retry_history('my retry') - self.assertEqual(history, [('a', {}), (failure, {})]) - self.assertIs(s.has_failures(), True) - self.assertEqual(s.get_failures(), {'my retry': failure}) + self.assertEqual([('a', {})], list(history)) + self.assertTrue(history.caused_by(RuntimeError, include_retry=True)) + self.assertIsNotNone(history.failure) + self.assertEqual(1, len(history)) + self.assertTrue(s.has_failures()) + self.assertEqual(s.get_failures(), {'my retry': a_failure}) def test_logbook_get_unknown_atom_type(self): self.assertRaisesRegexp(TypeError, @@ -529,14 +510,14 @@ class StorageTestMixin(object): def test_save_task_intention(self): s = self._get_storage() - s.ensure_task('my task') + s.ensure_atom(test_utils.NoopTask('my task')) s.set_atom_intention('my task', states.REVERT) intention = s.get_atom_intention('my task') self.assertEqual(intention, states.REVERT) def test_save_retry_intention(self): s = self._get_storage() - s.ensure_retry('my retry') + s.ensure_atom(test_utils.NoopTask('my retry')) s.set_atom_intention('my retry', states.RETRY) intention = s.get_atom_intention('my retry') self.assertEqual(intention, states.RETRY) diff --git a/taskflow/tests/unit/test_suspend.py b/taskflow/tests/unit/test_suspend.py new file mode 100644 index 00000000..e5d0288f --- /dev/null +++ 
b/taskflow/tests/unit/test_suspend.py @@ -0,0 +1,237 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import testtools + +import taskflow.engines +from taskflow import exceptions as exc +from taskflow.patterns import linear_flow as lf +from taskflow import states +from taskflow import test +from taskflow.tests import utils +from taskflow.types import futures +from taskflow.utils import eventlet_utils as eu + + +class SuspendingListener(utils.CaptureListener): + + def __init__(self, engine, + task_name, task_state, capture_flow=False): + super(SuspendingListener, self).__init__( + engine, + capture_flow=capture_flow) + self._revert_match = (task_name, task_state) + + def _task_receiver(self, state, details): + super(SuspendingListener, self)._task_receiver(state, details) + if (details['task_name'], state) == self._revert_match: + self._engine.suspend() + + +class SuspendTest(utils.EngineTestBase): + + def test_suspend_one_task(self): + flow = utils.ProgressingTask('a') + engine = self._make_engine(flow) + with SuspendingListener(engine, task_name='b', + task_state=states.SUCCESS) as capturer: + engine.run() + self.assertEqual(engine.storage.get_flow_state(), states.SUCCESS) + expected = ['a.t RUNNING', 'a.t SUCCESS(5)'] + self.assertEqual(expected, capturer.values) + with SuspendingListener(engine, task_name='b', + task_state=states.SUCCESS) as capturer: + engine.run() + 
self.assertEqual(engine.storage.get_flow_state(), states.SUCCESS) + expected = [] + self.assertEqual(expected, capturer.values) + + def test_suspend_linear_flow(self): + flow = lf.Flow('linear').add( + utils.ProgressingTask('a'), + utils.ProgressingTask('b'), + utils.ProgressingTask('c') + ) + engine = self._make_engine(flow) + with SuspendingListener(engine, task_name='b', + task_state=states.SUCCESS) as capturer: + engine.run() + self.assertEqual(engine.storage.get_flow_state(), states.SUSPENDED) + expected = ['a.t RUNNING', 'a.t SUCCESS(5)', + 'b.t RUNNING', 'b.t SUCCESS(5)'] + self.assertEqual(expected, capturer.values) + with utils.CaptureListener(engine, capture_flow=False) as capturer: + engine.run() + self.assertEqual(engine.storage.get_flow_state(), states.SUCCESS) + expected = ['c.t RUNNING', 'c.t SUCCESS(5)'] + self.assertEqual(expected, capturer.values) + + def test_suspend_linear_flow_on_revert(self): + flow = lf.Flow('linear').add( + utils.ProgressingTask('a'), + utils.ProgressingTask('b'), + utils.FailingTask('c') + ) + engine = self._make_engine(flow) + with SuspendingListener(engine, task_name='b', + task_state=states.REVERTED) as capturer: + engine.run() + self.assertEqual(engine.storage.get_flow_state(), states.SUSPENDED) + expected = ['a.t RUNNING', + 'a.t SUCCESS(5)', + 'b.t RUNNING', + 'b.t SUCCESS(5)', + 'c.t RUNNING', + 'c.t FAILURE(Failure: RuntimeError: Woot!)', + 'c.t REVERTING', + 'c.t REVERTED', + 'b.t REVERTING', + 'b.t REVERTED'] + self.assertEqual(expected, capturer.values) + with utils.CaptureListener(engine, capture_flow=False) as capturer: + self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) + self.assertEqual(engine.storage.get_flow_state(), states.REVERTED) + expected = ['a.t REVERTING', 'a.t REVERTED'] + self.assertEqual(expected, capturer.values) + + def test_suspend_and_resume_linear_flow_on_revert(self): + flow = lf.Flow('linear').add( + utils.ProgressingTask('a'), + utils.ProgressingTask('b'), + 
utils.FailingTask('c') + ) + engine = self._make_engine(flow) + with SuspendingListener(engine, task_name='b', + task_state=states.REVERTED) as capturer: + engine.run() + expected = ['a.t RUNNING', + 'a.t SUCCESS(5)', + 'b.t RUNNING', + 'b.t SUCCESS(5)', + 'c.t RUNNING', + 'c.t FAILURE(Failure: RuntimeError: Woot!)', + 'c.t REVERTING', + 'c.t REVERTED', + 'b.t REVERTING', + 'b.t REVERTED'] + self.assertEqual(expected, capturer.values) + + # pretend we are resuming + engine2 = self._make_engine(flow, engine.storage._flowdetail) + with utils.CaptureListener(engine2, capture_flow=False) as capturer2: + self.assertRaisesRegexp(RuntimeError, '^Woot', engine2.run) + self.assertEqual(engine2.storage.get_flow_state(), states.REVERTED) + expected = ['a.t REVERTING', + 'a.t REVERTED'] + self.assertEqual(expected, capturer2.values) + + def test_suspend_and_revert_even_if_task_is_gone(self): + flow = lf.Flow('linear').add( + utils.ProgressingTask('a'), + utils.ProgressingTask('b'), + utils.FailingTask('c') + ) + engine = self._make_engine(flow) + + with SuspendingListener(engine, task_name='b', + task_state=states.REVERTED) as capturer: + engine.run() + + expected = ['a.t RUNNING', + 'a.t SUCCESS(5)', + 'b.t RUNNING', + 'b.t SUCCESS(5)', + 'c.t RUNNING', + 'c.t FAILURE(Failure: RuntimeError: Woot!)', + 'c.t REVERTING', + 'c.t REVERTED', + 'b.t REVERTING', + 'b.t REVERTED'] + self.assertEqual(expected, capturer.values) + + # pretend we are resuming, but task 'c' gone when flow got updated + flow2 = lf.Flow('linear').add( + utils.ProgressingTask('a'), + utils.ProgressingTask('b'), + ) + engine2 = self._make_engine(flow2, engine.storage._flowdetail) + with utils.CaptureListener(engine2, capture_flow=False) as capturer2: + self.assertRaisesRegexp(RuntimeError, '^Woot', engine2.run) + self.assertEqual(engine2.storage.get_flow_state(), states.REVERTED) + expected = ['a.t REVERTING', 'a.t REVERTED'] + self.assertEqual(capturer2.values, expected) + + def 
test_storage_is_rechecked(self): + flow = lf.Flow('linear').add( + utils.ProgressingTask('b', requires=['foo']), + utils.ProgressingTask('c') + ) + engine = self._make_engine(flow) + engine.storage.inject({'foo': 'bar'}) + with SuspendingListener(engine, task_name='b', + task_state=states.SUCCESS): + engine.run() + self.assertEqual(engine.storage.get_flow_state(), states.SUSPENDED) + # uninject everything: + engine.storage.save(engine.storage.injector_name, + {}, states.SUCCESS) + self.assertRaises(exc.MissingDependencies, engine.run) + + +class SerialEngineTest(SuspendTest, test.TestCase): + def _make_engine(self, flow, flow_detail=None): + return taskflow.engines.load(flow, + flow_detail=flow_detail, + engine='serial', + backend=self.backend) + + +class ParallelEngineWithThreadsTest(SuspendTest, test.TestCase): + _EXECUTOR_WORKERS = 2 + + def _make_engine(self, flow, flow_detail=None, executor=None): + if executor is None: + executor = 'threads' + return taskflow.engines.load(flow, flow_detail=flow_detail, + engine='parallel', + backend=self.backend, + executor=executor, + max_workers=self._EXECUTOR_WORKERS) + + +@testtools.skipIf(not eu.EVENTLET_AVAILABLE, 'eventlet is not available') +class ParallelEngineWithEventletTest(SuspendTest, test.TestCase): + + def _make_engine(self, flow, flow_detail=None, executor=None): + if executor is None: + executor = futures.GreenThreadPoolExecutor() + self.addCleanup(executor.shutdown) + return taskflow.engines.load(flow, flow_detail=flow_detail, + backend=self.backend, engine='parallel', + executor=executor) + + +class ParallelEngineWithProcessTest(SuspendTest, test.TestCase): + _EXECUTOR_WORKERS = 2 + + def _make_engine(self, flow, flow_detail=None, executor=None): + if executor is None: + executor = 'processes' + return taskflow.engines.load(flow, flow_detail=flow_detail, + engine='parallel', + backend=self.backend, + executor=executor, + max_workers=self._EXECUTOR_WORKERS) diff --git 
a/taskflow/tests/unit/test_suspend_flow.py b/taskflow/tests/unit/test_suspend_flow.py deleted file mode 100644 index bb953449..00000000 --- a/taskflow/tests/unit/test_suspend_flow.py +++ /dev/null @@ -1,196 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -import testtools - -import taskflow.engines -from taskflow import exceptions as exc -from taskflow.listeners import base as lbase -from taskflow.patterns import linear_flow as lf -from taskflow import states -from taskflow import test -from taskflow.tests import utils -from taskflow.utils import eventlet_utils as eu - - -class SuspendingListener(lbase.ListenerBase): - - def __init__(self, engine, task_name, task_state): - super(SuspendingListener, self).__init__( - engine, task_listen_for=(task_state,)) - self._task_name = task_name - - def _task_receiver(self, state, details): - if details['task_name'] == self._task_name: - self._engine.suspend() - - -class SuspendFlowTest(utils.EngineTestBase): - - def test_suspend_one_task(self): - flow = utils.SaveOrderTask('a') - engine = self._make_engine(flow) - with SuspendingListener(engine, task_name='b', - task_state=states.SUCCESS): - engine.run() - self.assertEqual(engine.storage.get_flow_state(), states.SUCCESS) - self.assertEqual(self.values, ['a']) - engine.run() - self.assertEqual(engine.storage.get_flow_state(), states.SUCCESS) - self.assertEqual(self.values, ['a']) - - def 
test_suspend_linear_flow(self): - flow = lf.Flow('linear').add( - utils.SaveOrderTask('a'), - utils.SaveOrderTask('b'), - utils.SaveOrderTask('c') - ) - engine = self._make_engine(flow) - with SuspendingListener(engine, task_name='b', - task_state=states.SUCCESS): - engine.run() - self.assertEqual(engine.storage.get_flow_state(), states.SUSPENDED) - self.assertEqual(self.values, ['a', 'b']) - engine.run() - self.assertEqual(engine.storage.get_flow_state(), states.SUCCESS) - self.assertEqual(self.values, ['a', 'b', 'c']) - - def test_suspend_linear_flow_on_revert(self): - flow = lf.Flow('linear').add( - utils.SaveOrderTask('a'), - utils.SaveOrderTask('b'), - utils.FailingTask('c') - ) - engine = self._make_engine(flow) - with SuspendingListener(engine, task_name='b', - task_state=states.REVERTED): - engine.run() - self.assertEqual(engine.storage.get_flow_state(), states.SUSPENDED) - self.assertEqual( - self.values, - ['a', 'b', - 'c reverted(Failure: RuntimeError: Woot!)', - 'b reverted(5)']) - self.assertRaisesRegexp(RuntimeError, '^Woot', engine.run) - self.assertEqual(engine.storage.get_flow_state(), states.REVERTED) - self.assertEqual( - self.values, - ['a', - 'b', - 'c reverted(Failure: RuntimeError: Woot!)', - 'b reverted(5)', - 'a reverted(5)']) - - def test_suspend_and_resume_linear_flow_on_revert(self): - flow = lf.Flow('linear').add( - utils.SaveOrderTask('a'), - utils.SaveOrderTask('b'), - utils.FailingTask('c') - ) - engine = self._make_engine(flow) - - with SuspendingListener(engine, task_name='b', - task_state=states.REVERTED): - engine.run() - - # pretend we are resuming - engine2 = self._make_engine(flow, engine.storage._flowdetail) - self.assertRaisesRegexp(RuntimeError, '^Woot', engine2.run) - self.assertEqual(engine2.storage.get_flow_state(), states.REVERTED) - self.assertEqual( - self.values, - ['a', - 'b', - 'c reverted(Failure: RuntimeError: Woot!)', - 'b reverted(5)', - 'a reverted(5)']) - - def 
test_suspend_and_revert_even_if_task_is_gone(self): - flow = lf.Flow('linear').add( - utils.SaveOrderTask('a'), - utils.SaveOrderTask('b'), - utils.FailingTask('c') - ) - engine = self._make_engine(flow) - - with SuspendingListener(engine, task_name='b', - task_state=states.REVERTED): - engine.run() - - expected_values = ['a', 'b', - 'c reverted(Failure: RuntimeError: Woot!)', - 'b reverted(5)'] - self.assertEqual(self.values, expected_values) - - # pretend we are resuming, but task 'c' gone when flow got updated - flow2 = lf.Flow('linear').add( - utils.SaveOrderTask('a'), - utils.SaveOrderTask('b') - ) - engine2 = self._make_engine(flow2, engine.storage._flowdetail) - self.assertRaisesRegexp(RuntimeError, '^Woot', engine2.run) - self.assertEqual(engine2.storage.get_flow_state(), states.REVERTED) - expected_values.append('a reverted(5)') - self.assertEqual(self.values, expected_values) - - def test_storage_is_rechecked(self): - flow = lf.Flow('linear').add( - utils.SaveOrderTask('b', requires=['foo']), - utils.SaveOrderTask('c') - ) - engine = self._make_engine(flow) - engine.storage.inject({'foo': 'bar'}) - with SuspendingListener(engine, task_name='b', - task_state=states.SUCCESS): - engine.run() - self.assertEqual(engine.storage.get_flow_state(), states.SUSPENDED) - # uninject everything: - engine.storage.save(engine.storage.injector_name, - {}, states.SUCCESS) - self.assertRaises(exc.MissingDependencies, engine.run) - - -class SingleThreadedEngineTest(SuspendFlowTest, - test.TestCase): - def _make_engine(self, flow, flow_detail=None): - return taskflow.engines.load(flow, - flow_detail=flow_detail, - engine_conf='serial', - backend=self.backend) - - -class MultiThreadedEngineTest(SuspendFlowTest, - test.TestCase): - def _make_engine(self, flow, flow_detail=None, executor=None): - engine_conf = dict(engine='parallel') - return taskflow.engines.load(flow, flow_detail=flow_detail, - engine_conf=engine_conf, - backend=self.backend, - executor=executor) - - 
-@testtools.skipIf(not eu.EVENTLET_AVAILABLE, 'eventlet is not available')
-class ParallelEngineWithEventletTest(SuspendFlowTest,
-                                     test.TestCase):
-
-    def _make_engine(self, flow, flow_detail=None, executor=None):
-        if executor is None:
-            executor = eu.GreenExecutor()
-        engine_conf = dict(engine='parallel')
-        return taskflow.engines.load(flow, flow_detail=flow_detail,
-                                     engine_conf=engine_conf,
-                                     backend=self.backend,
-                                     executor=executor)
diff --git a/taskflow/tests/unit/test_task.py b/taskflow/tests/unit/test_task.py
index bcb75788..50e783f3 100644
--- a/taskflow/tests/unit/test_task.py
+++ b/taskflow/tests/unit/test_task.py
@@ -14,11 +14,10 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-import mock
-
 from taskflow import task
 from taskflow import test
-from taskflow.utils import reflection
+from taskflow.test import mock
+from taskflow.types import notifier
 
 
 class MyTask(task.Task):
@@ -182,7 +181,7 @@ class TaskTest(test.TestCase):
         })
 
     def test_rebind_list_bad_value(self):
-        self.assertRaisesRegexp(TypeError, '^Invalid rebind value:',
+        self.assertRaisesRegexp(TypeError, '^Invalid rebind value',
                                 MyTask, rebind=object())
 
     def test_default_provides(self):
@@ -199,24 +198,24 @@
         values = [0.0, 0.5, 1.0]
         result = []
 
-        def progress_callback(task, event_data, progress):
-            result.append(progress)
+        def progress_callback(event_type, details):
+            result.append(details.pop('progress'))
 
-        task = ProgressTask()
-        with task.autobind('update_progress', progress_callback):
-            task.execute(values)
+        a_task = ProgressTask()
+        a_task.notifier.register(task.EVENT_UPDATE_PROGRESS, progress_callback)
+        a_task.execute(values)
         self.assertEqual(result, values)
 
     @mock.patch.object(task.LOG, 'warn')
     def test_update_progress_lower_bound(self, mocked_warn):
         result = []
 
-        def progress_callback(task, event_data, progress):
-            result.append(progress)
+        def progress_callback(event_type, details):
+            result.append(details.pop('progress'))
 
-        task = ProgressTask()
-        with task.autobind('update_progress', progress_callback):
-            task.execute([-1.0, -0.5, 0.0])
+        a_task = ProgressTask()
+        a_task.notifier.register(task.EVENT_UPDATE_PROGRESS, progress_callback)
+        a_task.execute([-1.0, -0.5, 0.0])
         self.assertEqual(result, [0.0, 0.0, 0.0])
         self.assertEqual(mocked_warn.call_count, 2)
@@ -224,64 +223,87 @@ class TaskTest(test.TestCase):
     def test_update_progress_upper_bound(self, mocked_warn):
         result = []
 
-        def progress_callback(task, event_data, progress):
-            result.append(progress)
+        def progress_callback(event_type, details):
+            result.append(details.pop('progress'))
 
-        task = ProgressTask()
-        with task.autobind('update_progress', progress_callback):
-            task.execute([1.0, 1.5, 2.0])
+        a_task = ProgressTask()
+        a_task.notifier.register(task.EVENT_UPDATE_PROGRESS, progress_callback)
+        a_task.execute([1.0, 1.5, 2.0])
         self.assertEqual(result, [1.0, 1.0, 1.0])
         self.assertEqual(mocked_warn.call_count, 2)
 
-    @mock.patch.object(task.LOG, 'warn')
+    @mock.patch.object(notifier.LOG, 'warn')
     def test_update_progress_handler_failure(self, mocked_warn):
+
         def progress_callback(*args, **kwargs):
             raise Exception('Woot!')
 
-        task = ProgressTask()
-        with task.autobind('update_progress', progress_callback):
-            task.execute([0.5])
-        mocked_warn.assert_called_once_with(
-            mock.ANY, reflection.get_callable_name(progress_callback),
-            'update_progress', exc_info=mock.ANY)
+        a_task = ProgressTask()
+        a_task.notifier.register(task.EVENT_UPDATE_PROGRESS, progress_callback)
+        a_task.execute([0.5])
+        mocked_warn.assert_called_once()
 
-    @mock.patch.object(task.LOG, 'warn')
-    def test_autobind_non_existent_event(self, mocked_warn):
-        event = 'test-event'
-        handler = lambda: None
-        task = MyTask()
-        with task.autobind(event, handler):
-            self.assertEqual(len(task._events_listeners), 0)
-        mocked_warn.assert_called_once_with(
-            mock.ANY, handler, event, task, exc_info=mock.ANY)
+    def test_register_handler_is_none(self):
+        a_task = MyTask()
+        self.assertRaises(ValueError, a_task.notifier.register,
+                          task.EVENT_UPDATE_PROGRESS, None)
+        self.assertEqual(len(a_task.notifier), 0)
 
-    def test_autobind_handler_is_none(self):
-        task = MyTask()
-        with task.autobind('update_progress', None):
-            self.assertEqual(len(task._events_listeners), 0)
+    def test_deregister_any_handler(self):
+        a_task = MyTask()
+        self.assertEqual(len(a_task.notifier), 0)
+        a_task.notifier.register(task.EVENT_UPDATE_PROGRESS,
+                                 lambda event_type, details: None)
+        self.assertEqual(len(a_task.notifier), 1)
+        a_task.notifier.deregister_event(task.EVENT_UPDATE_PROGRESS)
+        self.assertEqual(len(a_task.notifier), 0)
 
-    def test_unbind_any_handler(self):
-        task = MyTask()
-        self.assertEqual(len(task._events_listeners), 0)
-        task.bind('update_progress', lambda: None)
-        self.assertEqual(len(task._events_listeners), 1)
-        self.assertTrue(task.unbind('update_progress'))
-        self.assertEqual(len(task._events_listeners), 0)
+    def test_deregister_any_handler_empty_listeners(self):
+        a_task = MyTask()
+        self.assertEqual(len(a_task.notifier), 0)
+        self.assertFalse(a_task.notifier.deregister_event(
+            task.EVENT_UPDATE_PROGRESS))
+        self.assertEqual(len(a_task.notifier), 0)
 
-    def test_unbind_any_handler_empty_listeners(self):
-        task = MyTask()
-        self.assertEqual(len(task._events_listeners), 0)
-        self.assertFalse(task.unbind('update_progress'))
-        self.assertEqual(len(task._events_listeners), 0)
+    def test_deregister_non_existent_listener(self):
+        handler1 = lambda event_type, details: None
+        handler2 = lambda event_type, details: None
+        a_task = MyTask()
+        a_task.notifier.register(task.EVENT_UPDATE_PROGRESS, handler1)
+        self.assertEqual(len(list(a_task.notifier.listeners_iter())), 1)
+        a_task.notifier.deregister(task.EVENT_UPDATE_PROGRESS, handler2)
+        self.assertEqual(len(list(a_task.notifier.listeners_iter())), 1)
+        a_task.notifier.deregister(task.EVENT_UPDATE_PROGRESS, handler1)
+        self.assertEqual(len(list(a_task.notifier.listeners_iter())), 0)
 
-    def test_unbind_non_existent_listener(self):
-        handler1 = lambda: None
-        handler2 = lambda: None
-        task = MyTask()
-        task.bind('update_progress', handler1)
-        self.assertEqual(len(task._events_listeners), 1)
-        self.assertFalse(task.unbind('update_progress', handler2))
-        self.assertEqual(len(task._events_listeners), 1)
+    def test_bind_not_callable(self):
+        a_task = MyTask()
+        self.assertRaises(ValueError, a_task.notifier.register,
+                          task.EVENT_UPDATE_PROGRESS, 2)
+
+    def test_copy_no_listeners(self):
+        handler1 = lambda event_type, details: None
+        a_task = MyTask()
+        a_task.notifier.register(task.EVENT_UPDATE_PROGRESS, handler1)
+        b_task = a_task.copy(retain_listeners=False)
+        self.assertEqual(len(a_task.notifier), 1)
+        self.assertEqual(len(b_task.notifier), 0)
+
+    def test_copy_listeners(self):
+        handler1 = lambda event_type, details: None
+        handler2 = lambda event_type, details: None
+        a_task = MyTask()
+        a_task.notifier.register(task.EVENT_UPDATE_PROGRESS, handler1)
+        b_task = a_task.copy()
+        self.assertEqual(len(b_task.notifier), 1)
+        self.assertTrue(a_task.notifier.deregister_event(
+            task.EVENT_UPDATE_PROGRESS))
+        self.assertEqual(len(a_task.notifier), 0)
+        self.assertEqual(len(b_task.notifier), 1)
+        b_task.notifier.register(task.EVENT_UPDATE_PROGRESS, handler2)
+        listeners = dict(list(b_task.notifier.listeners_iter()))
+        self.assertEqual(len(listeners[task.EVENT_UPDATE_PROGRESS]), 2)
+        self.assertEqual(len(a_task.notifier), 0)
 
 
 class FunctorTaskTest(test.TestCase):
@@ -290,3 +312,10 @@ class FunctorTaskTest(test.TestCase):
         version = (2, 0)
         f_task = task.FunctorTask(lambda: None, version=version)
         self.assertEqual(f_task.version, version)
+
+    def test_execute_not_callable(self):
+        self.assertRaises(ValueError, task.FunctorTask, 2)
+
+    def test_revert_not_callable(self):
+        self.assertRaises(ValueError, task.FunctorTask, lambda: None,
+                          revert=2)
diff --git a/taskflow/tests/unit/test_types.py b/taskflow/tests/unit/test_types.py
index 141cdfc8..4daea4bf 100644
--- a/taskflow/tests/unit/test_types.py
+++ b/taskflow/tests/unit/test_types.py
@@ -23,8 +23,31 @@ from taskflow import exceptions as excp
 from taskflow import test
 from taskflow.types import fsm
 from taskflow.types import graph
+from taskflow.types import latch
+from taskflow.types import periodic
+from taskflow.types import table
 from taskflow.types import timing as tt
 from taskflow.types import tree
+from taskflow.utils import threading_utils as tu
+
+
+class PeriodicThingy(object):
+    def __init__(self):
+        self.capture = []
+
+    @periodic.periodic(0.01)
+    def a(self):
+        self.capture.append('a')
+
+    @periodic.periodic(0.02)
+    def b(self):
+        self.capture.append('b')
+
+    def c(self):
+        pass
+
+    def d(self):
+        pass
 
 
 class GraphTest(test.TestCase):
@@ -121,45 +144,146 @@ class TreeTest(test.TestCase):
 
 
 class StopWatchTest(test.TestCase):
+    def setUp(self):
+        super(StopWatchTest, self).setUp()
+        tt.StopWatch.set_now_override(now=0)
+        self.addCleanup(tt.StopWatch.clear_overrides)
+
+    def test_leftover_no_duration(self):
+        watch = tt.StopWatch()
+        watch.start()
+        self.assertRaises(RuntimeError, watch.leftover)
+        self.assertRaises(RuntimeError, watch.leftover, return_none=False)
+        self.assertIsNone(watch.leftover(return_none=True))
+
     def test_no_states(self):
         watch = tt.StopWatch()
         self.assertRaises(RuntimeError, watch.stop)
         self.assertRaises(RuntimeError, watch.resume)
 
+    def test_bad_expiry(self):
+        self.assertRaises(ValueError, tt.StopWatch, -1)
+
+    def test_backwards(self):
+        watch = tt.StopWatch(0.1)
+        watch.start()
+        tt.StopWatch.advance_time_seconds(0.5)
+        self.assertTrue(watch.expired())
+
+        tt.StopWatch.advance_time_seconds(-1.0)
+        self.assertFalse(watch.expired())
+        self.assertEqual(0.0, watch.elapsed())
+
     def test_expiry(self):
         watch = tt.StopWatch(0.1)
         watch.start()
-        time.sleep(0.2)
+        tt.StopWatch.advance_time_seconds(0.2)
         self.assertTrue(watch.expired())
 
+    def test_not_expired(self):
+        watch = tt.StopWatch(0.1)
+        watch.start()
+        tt.StopWatch.advance_time_seconds(0.05)
+        self.assertFalse(watch.expired())
+
     def test_no_expiry(self):
         watch = tt.StopWatch(0.1)
-        watch.start()
-        self.assertFalse(watch.expired())
+        self.assertRaises(RuntimeError, watch.expired)
 
     def test_elapsed(self):
         watch = tt.StopWatch()
         watch.start()
-        time.sleep(0.2)
+        tt.StopWatch.advance_time_seconds(0.2)
         # NOTE(harlowja): Allow for a slight variation by using 0.19.
         self.assertGreaterEqual(0.19, watch.elapsed())
 
+    def test_no_elapsed(self):
+        watch = tt.StopWatch()
+        self.assertRaises(RuntimeError, watch.elapsed)
+
+    def test_no_leftover(self):
+        watch = tt.StopWatch()
+        self.assertRaises(RuntimeError, watch.leftover)
+        watch = tt.StopWatch(1)
+        self.assertRaises(RuntimeError, watch.leftover)
+
     def test_pause_resume(self):
         watch = tt.StopWatch()
         watch.start()
-        time.sleep(0.05)
+        tt.StopWatch.advance_time_seconds(0.05)
         watch.stop()
         elapsed = watch.elapsed()
-        time.sleep(0.05)
         self.assertAlmostEqual(elapsed, watch.elapsed())
         watch.resume()
+        tt.StopWatch.advance_time_seconds(0.05)
         self.assertNotEqual(elapsed, watch.elapsed())
 
     def test_context_manager(self):
         with tt.StopWatch() as watch:
-            time.sleep(0.05)
+            tt.StopWatch.advance_time_seconds(0.05)
         self.assertGreater(0.01, watch.elapsed())
 
+    def test_splits(self):
+        watch = tt.StopWatch()
+        watch.start()
+        self.assertEqual(0, len(watch.splits))
+
+        watch.split()
+        self.assertEqual(1, len(watch.splits))
+        self.assertEqual(watch.splits[0].elapsed,
+                         watch.splits[0].length)
+
+        tt.StopWatch.advance_time_seconds(0.05)
+        watch.split()
+        splits = watch.splits
+        self.assertEqual(2, len(splits))
+        self.assertNotEqual(splits[0].elapsed, splits[1].elapsed)
+        self.assertEqual(splits[1].length,
+                         splits[1].elapsed - splits[0].elapsed)
+
+        watch.stop()
+        self.assertEqual(2, len(watch.splits))
+
+        watch.start()
+        self.assertEqual(0, len(watch.splits))
+
+    def test_elapsed_maximum(self):
+        watch = tt.StopWatch()
+        watch.start()
+
+        tt.StopWatch.advance_time_seconds(1)
+        self.assertEqual(1, watch.elapsed())
+
+        tt.StopWatch.advance_time_seconds(10)
+        self.assertEqual(11, watch.elapsed())
+        self.assertEqual(1, watch.elapsed(maximum=1))
+
+        watch.stop()
+        self.assertEqual(11, watch.elapsed())
+        tt.StopWatch.advance_time_seconds(10)
+        self.assertEqual(11, watch.elapsed())
+        self.assertEqual(0, watch.elapsed(maximum=-1))
+
+
+class TableTest(test.TestCase):
+    def test_create_valid_no_rows(self):
+        tbl = table.PleasantTable(['Name', 'City', 'State', 'Country'])
+        self.assertGreater(0, len(tbl.pformat()))
+
+    def test_create_valid_rows(self):
+        tbl = table.PleasantTable(['Name', 'City', 'State', 'Country'])
+        before_rows = tbl.pformat()
+        tbl.add_row(["Josh", "San Jose", "CA", "USA"])
+        after_rows = tbl.pformat()
+        self.assertGreater(len(before_rows), len(after_rows))
+
+    def test_create_invalid_columns(self):
+        self.assertRaises(ValueError, table.PleasantTable, [])
+
+    def test_create_invalid_rows(self):
+        tbl = table.PleasantTable(['Name', 'City', 'State', 'Country'])
+        self.assertRaises(ValueError, tbl.add_row, ['a', 'b'])
+
 
 class FSMTest(test.TestCase):
     def setUp(self):
@@ -251,7 +375,9 @@ class FSMTest(test.TestCase):
         m.process_event('fall')
         self.assertEqual([('down', 'beat'), ('up', 'jump'), ('down', 'fall')],
                          enter_transitions)
-        self.assertEqual([('down', 'jump'), ('up', 'fall')], exit_transitions)
+        self.assertEqual(
+            [('start', 'beat'), ('down', 'jump'), ('up', 'fall')],
+            exit_transitions)
 
     def test_run_iter(self):
         up_downs = []
@@ -292,15 +418,169 @@ class FSMTest(test.TestCase):
         self.assertRaises(fsm.NotInitialized,
                          self.jumper.process_event, 'jump')
 
+    def test_copy_states(self):
+        c = fsm.FSM('down')
+        self.assertEqual(0, len(c.states))
+        d = c.copy()
+        c.add_state('up')
+        c.add_state('down')
+        self.assertEqual(2, len(c.states))
+        self.assertEqual(0, len(d.states))
+
+    def test_copy_reactions(self):
+        c = fsm.FSM('down')
+        d = c.copy()
+
+        c.add_state('down')
+        c.add_state('up')
+        c.add_reaction('down', 'jump', lambda *args: 'up')
+        c.add_transition('down', 'up', 'jump')
+
+        self.assertEqual(1, c.events)
+        self.assertEqual(0, d.events)
+        self.assertNotIn('down', d)
+        self.assertNotIn('up', d)
+        self.assertEqual([], list(d))
+        self.assertEqual([('down', 'jump', 'up')], list(c))
+
+    def test_copy_initialized(self):
+        j = self.jumper.copy()
+        self.assertIsNone(j.current_state)
+
+        for i, transition in enumerate(self.jumper.run_iter('jump')):
+            if i == 4:
+                break
+
+        self.assertIsNone(j.current_state)
+        self.assertIsNotNone(self.jumper.current_state)
+
     def test_iter(self):
         transitions = list(self.jumper)
         self.assertEqual(2, len(transitions))
         self.assertIn(('up', 'fall', 'down'), transitions)
         self.assertIn(('down', 'jump', 'up'), transitions)
 
+    def test_freeze(self):
+        self.jumper.freeze()
+        self.assertRaises(fsm.FrozenMachine, self.jumper.add_state, 'test')
+        self.assertRaises(fsm.FrozenMachine,
+                          self.jumper.add_transition, 'test', 'test', 'test')
+        self.assertRaises(fsm.FrozenMachine,
+                          self.jumper.add_reaction,
+                          'test', 'test', lambda *args: 'test')
+
     def test_invalid_callbacks(self):
         m = fsm.FSM('working')
         m.add_state('working')
         m.add_state('broken')
-        self.assertRaises(AssertionError, m.add_state, 'b', on_enter=2)
-        self.assertRaises(AssertionError, m.add_state, 'b', on_exit=2)
+        self.assertRaises(ValueError, m.add_state, 'b', on_enter=2)
+        self.assertRaises(ValueError, m.add_state, 'b', on_exit=2)
+
+
+class PeriodicTest(test.TestCase):
+
+    def test_invalid_periodic(self):
+
+        def no_op():
+            pass
+
+        self.assertRaises(ValueError, periodic.periodic, -1)
+
+    def test_valid_periodic(self):
+
+        @periodic.periodic(2)
+        def no_op():
+            pass
+
+        self.assertTrue(getattr(no_op, '_periodic'))
+        self.assertEqual(2, getattr(no_op, '_periodic_spacing'))
+        self.assertEqual(True, getattr(no_op, '_periodic_run_immediately'))
+
+    def test_scanning_periodic(self):
+        p = PeriodicThingy()
+        w = periodic.PeriodicWorker.create([p])
+        self.assertEqual(2, len(w))
+
+        t = tu.daemon_thread(target=w.start)
+        t.start()
+        time.sleep(0.1)
+        w.stop()
+        t.join()
+
+        b_calls = [c for c in p.capture if c == 'b']
+        self.assertGreater(0, len(b_calls))
+        a_calls = [c for c in p.capture if c == 'a']
+        self.assertGreater(0, len(a_calls))
+
+    def test_periodic_single(self):
+        barrier = latch.Latch(5)
+        capture = []
+        tombstone = tu.Event()
+
+        @periodic.periodic(0.01)
+        def callee():
+            barrier.countdown()
+            if barrier.needed == 0:
+                tombstone.set()
+            capture.append(1)
+
+        w = periodic.PeriodicWorker([callee], tombstone=tombstone)
+        t = tu.daemon_thread(target=w.start)
+        t.start()
+        t.join()
+
+        self.assertEqual(0, barrier.needed)
+        self.assertEqual(5, sum(capture))
+        self.assertTrue(tombstone.is_set())
+
+    def test_immediate(self):
+        capture = []
+
+        @periodic.periodic(120, run_immediately=True)
+        def a():
+            capture.append('a')
+
+        w = periodic.PeriodicWorker([a])
+        t = tu.daemon_thread(target=w.start)
+        t.start()
+        time.sleep(0.1)
+        w.stop()
+        t.join()
+
+        a_calls = [c for c in capture if c == 'a']
+        self.assertGreater(0, len(a_calls))
+
+    def test_period_double_no_immediate(self):
+        capture = []
+
+        @periodic.periodic(0.01, run_immediately=False)
+        def a():
+            capture.append('a')
+
+        @periodic.periodic(0.02, run_immediately=False)
+        def b():
+            capture.append('b')
+
+        w = periodic.PeriodicWorker([a, b])
+        t = tu.daemon_thread(target=w.start)
+        t.start()
+        time.sleep(0.1)
+        w.stop()
+        t.join()
+
+        b_calls = [c for c in capture if c == 'b']
+        self.assertGreater(0, len(b_calls))
+        a_calls = [c for c in capture if c == 'a']
+        self.assertGreater(0, len(a_calls))
+
+    def test_start_nothing_error(self):
+        w = periodic.PeriodicWorker([])
+        self.assertRaises(RuntimeError, w.start)
+
+    def test_missing_function_attrs(self):
+
+        def fake_periodic():
+            pass
+
+        cb = fake_periodic
+        self.assertRaises(ValueError, periodic.PeriodicWorker, [cb])
diff --git a/taskflow/tests/unit/test_utils.py b/taskflow/tests/unit/test_utils.py
index 4518f961..ba71cca2 100644
--- a/taskflow/tests/unit/test_utils.py
+++ b/taskflow/tests/unit/test_utils.py
@@ -15,349 +15,13 @@
 # under the License.
 
 import collections
-import functools
 import inspect
-import sys
+import random
+import time
 
-import six
-import testtools
-
-from taskflow import states
 from taskflow import test
-from taskflow.tests import utils as test_utils
-from taskflow.utils import lock_utils
 from taskflow.utils import misc
-from taskflow.utils import reflection
-
-
-def mere_function(a, b):
-    pass
-
-
-def function_with_defs(a, b, optional=None):
-    pass
-
-
-def function_with_kwargs(a, b, **kwargs):
-    pass
-
-
-class Class(object):
-
-    def method(self, c, d):
-        pass
-
-    @staticmethod
-    def static_method(e, f):
-        pass
-
-    @classmethod
-    def class_method(cls, g, h):
-        pass
-
-
-class CallableClass(object):
-    def __call__(self, i, j):
-        pass
-
-
-class ClassWithInit(object):
-    def __init__(self, k, l):
-        pass
-
-
-class CallbackEqualityTest(test.TestCase):
-    def test_different_simple_callbacks(self):
-
-        def a():
-            pass
-
-        def b():
-            pass
-
-        self.assertFalse(reflection.is_same_callback(a, b))
-
-    def test_static_instance_callbacks(self):
-
-        class A(object):
-
-            @staticmethod
-            def b(a, b, c):
-                pass
-
-        a = A()
-        b = A()
-
-        self.assertTrue(reflection.is_same_callback(a.b, b.b))
-
-    def test_different_instance_callbacks(self):
-
-        class A(object):
-            def b(self):
-                pass
-
-            def __eq__(self, other):
-                return True
-
-        b = A()
-        c = A()
-
-        self.assertFalse(reflection.is_same_callback(b.b, c.b))
-        self.assertTrue(reflection.is_same_callback(b.b, c.b, strict=False))
-
-
-class GetCallableNameTest(test.TestCase):
-
-    def test_mere_function(self):
-        name = reflection.get_callable_name(mere_function)
-        self.assertEqual(name, '.'.join((__name__, 'mere_function')))
-
-    def test_method(self):
-        name = reflection.get_callable_name(Class.method)
-        self.assertEqual(name, '.'.join((__name__, 'Class', 'method')))
-
-    def test_instance_method(self):
-        name = reflection.get_callable_name(Class().method)
-        self.assertEqual(name, '.'.join((__name__, 'Class', 'method')))
-
-    def test_static_method(self):
-        name = reflection.get_callable_name(Class.static_method)
-        if six.PY3:
-            self.assertEqual(name,
-                             '.'.join((__name__, 'Class', 'static_method')))
-        else:
-            # NOTE(imelnikov): static method are just functions, class name
-            # is not recorded anywhere in them.
-            self.assertEqual(name,
-                             '.'.join((__name__, 'static_method')))
-
-    def test_class_method(self):
-        name = reflection.get_callable_name(Class.class_method)
-        self.assertEqual(name, '.'.join((__name__, 'Class', 'class_method')))
-
-    def test_constructor(self):
-        name = reflection.get_callable_name(Class)
-        self.assertEqual(name, '.'.join((__name__, 'Class')))
-
-    def test_callable_class(self):
-        name = reflection.get_callable_name(CallableClass())
-        self.assertEqual(name, '.'.join((__name__, 'CallableClass')))
-
-    def test_callable_class_call(self):
-        name = reflection.get_callable_name(CallableClass().__call__)
-        self.assertEqual(name, '.'.join((__name__, 'CallableClass',
-                                         '__call__')))
-
-
-# These extended/special case tests only work on python 3, due to python 2
-# being broken/incorrect with regard to these special cases...
-@testtools.skipIf(not six.PY3, 'python 3.x is not currently available')
-class GetCallableNameTestExtended(test.TestCase):
-    # Tests items in http://legacy.python.org/dev/peps/pep-3155/
-
-    class InnerCallableClass(object):
-        def __call__(self):
-            pass
-
-    def test_inner_callable_class(self):
-        obj = self.InnerCallableClass()
-        name = reflection.get_callable_name(obj.__call__)
-        expected_name = '.'.join((__name__, 'GetCallableNameTestExtended',
-                                  'InnerCallableClass', '__call__'))
-        self.assertEqual(expected_name, name)
-
-    def test_inner_callable_function(self):
-        def a():
-
-            def b():
-                pass
-
-            return b
-
-        name = reflection.get_callable_name(a())
-        expected_name = '.'.join((__name__, 'GetCallableNameTestExtended',
-                                  'test_inner_callable_function', '',
-                                  'a', '', 'b'))
-        self.assertEqual(expected_name, name)
-
-    def test_inner_class(self):
-        obj = self.InnerCallableClass()
-        name = reflection.get_callable_name(obj)
-        expected_name = '.'.join((__name__,
-                                  'GetCallableNameTestExtended',
-                                  'InnerCallableClass'))
-        self.assertEqual(expected_name, name)
-
-
-class NotifierTest(test.TestCase):
-
-    def test_notify_called(self):
-        call_collector = []
-
-        def call_me(state, details):
-            call_collector.append((state, details))
-
-        notifier = misc.Notifier()
-        notifier.register(misc.Notifier.ANY, call_me)
-        notifier.notify(states.SUCCESS, {})
-        notifier.notify(states.SUCCESS, {})
-
-        self.assertEqual(2, len(call_collector))
-        self.assertEqual(1, len(notifier))
-
-    def test_notify_register_deregister(self):
-
-        def call_me(state, details):
-            pass
-
-        class A(object):
-            def call_me_too(self, state, details):
-                pass
-
-        notifier = misc.Notifier()
-        notifier.register(misc.Notifier.ANY, call_me)
-        a = A()
-        notifier.register(misc.Notifier.ANY, a.call_me_too)
-
-        self.assertEqual(2, len(notifier))
-        notifier.deregister(misc.Notifier.ANY, call_me)
-        notifier.deregister(misc.Notifier.ANY, a.call_me_too)
-        self.assertEqual(0, len(notifier))
-
-    def test_notify_reset(self):
-
-        def call_me(state, details):
-            pass
-
-        notifier = misc.Notifier()
-        notifier.register(misc.Notifier.ANY, call_me)
-        self.assertEqual(1, len(notifier))
-
-        notifier.reset()
-        self.assertEqual(0, len(notifier))
-
-    def test_bad_notify(self):
-
-        def call_me(state, details):
-            pass
-
-        notifier = misc.Notifier()
-        self.assertRaises(KeyError, notifier.register,
-                          misc.Notifier.ANY, call_me,
-                          kwargs={'details': 5})
-
-    def test_selective_notify(self):
-        call_counts = collections.defaultdict(list)
-
-        def call_me_on(registered_state, state, details):
-            call_counts[registered_state].append((state, details))
-
-        notifier = misc.Notifier()
-        notifier.register(states.SUCCESS,
-                          functools.partial(call_me_on, states.SUCCESS))
-        notifier.register(misc.Notifier.ANY,
-                          functools.partial(call_me_on,
-                                            misc.Notifier.ANY))
-
-        self.assertEqual(2, len(notifier))
-        notifier.notify(states.SUCCESS, {})
-
-        self.assertEqual(1, len(call_counts[misc.Notifier.ANY]))
-        self.assertEqual(1, len(call_counts[states.SUCCESS]))
-
-        notifier.notify(states.FAILURE, {})
-        self.assertEqual(2, len(call_counts[misc.Notifier.ANY]))
-        self.assertEqual(1, len(call_counts[states.SUCCESS]))
-        self.assertEqual(2, len(call_counts))
-
-
-class GetCallableArgsTest(test.TestCase):
-
-    def test_mere_function(self):
-        result = reflection.get_callable_args(mere_function)
-        self.assertEqual(['a', 'b'], result)
-
-    def test_function_with_defaults(self):
-        result = reflection.get_callable_args(function_with_defs)
-        self.assertEqual(['a', 'b', 'optional'], result)
-
-    def test_required_only(self):
-        result = reflection.get_callable_args(function_with_defs,
-                                              required_only=True)
-        self.assertEqual(['a', 'b'], result)
-
-    def test_method(self):
-        result = reflection.get_callable_args(Class.method)
-        self.assertEqual(['self', 'c', 'd'], result)
-
-    def test_instance_method(self):
-        result = reflection.get_callable_args(Class().method)
-        self.assertEqual(['c', 'd'], result)
-
-    def test_class_method(self):
-        result = reflection.get_callable_args(Class.class_method)
-        self.assertEqual(['g', 'h'], result)
-
-    def test_class_constructor(self):
-        result = reflection.get_callable_args(ClassWithInit)
-        self.assertEqual(['k', 'l'], result)
-
-    def test_class_with_call(self):
-        result = reflection.get_callable_args(CallableClass())
-        self.assertEqual(['i', 'j'], result)
-
-    def test_decorators_work(self):
-        @lock_utils.locked
-        def locked_fun(x, y):
-            pass
-        result = reflection.get_callable_args(locked_fun)
-        self.assertEqual(['x', 'y'], result)
-
-
-class AcceptsKwargsTest(test.TestCase):
-
-    def test_no_kwargs(self):
-        self.assertEqual(
-            reflection.accepts_kwargs(mere_function), False)
-
-    def test_with_kwargs(self):
-        self.assertEqual(
-            reflection.accepts_kwargs(function_with_kwargs), True)
-
-
-class GetClassNameTest(test.TestCase):
-
-    def test_std_exception(self):
-        name = reflection.get_class_name(RuntimeError)
-        self.assertEqual(name, 'RuntimeError')
-
-    def test_global_class(self):
-        name = reflection.get_class_name(misc.Failure)
-        self.assertEqual(name, 'taskflow.utils.misc.Failure')
-
-    def test_class(self):
-        name = reflection.get_class_name(Class)
-        self.assertEqual(name, '.'.join((__name__, 'Class')))
-
-    def test_instance(self):
-        name = reflection.get_class_name(Class())
-        self.assertEqual(name, '.'.join((__name__, 'Class')))
-
-    def test_int(self):
-        name = reflection.get_class_name(42)
-        self.assertEqual(name, 'int')
-
-
-class GetAllClassNamesTest(test.TestCase):
-
-    def test_std_class(self):
-        names = list(reflection.get_all_class_names(RuntimeError))
-        self.assertEqual(names, test_utils.RUNTIME_ERROR_CLASSES)
-
-    def test_std_class_up_to(self):
-        names = list(reflection.get_all_class_names(RuntimeError,
-                                                    up_to=Exception))
-        self.assertEqual(names, test_utils.RUNTIME_ERROR_CLASSES[:-2])
+from taskflow.utils import threading_utils
 
 
 class CachedPropertyTest(test.TestCase):
@@ -437,108 +101,34 @@ class CachedPropertyTest(test.TestCase):
         self.assertEqual(None, inspect.getdoc(A.b))
 
+    def test_threaded_access_property(self):
+        called = collections.deque()
 
-class AttrDictTest(test.TestCase):
-    def test_ok_create(self):
-        attrs = {
-            'a': 1,
-            'b': 2,
-        }
-        obj = misc.AttrDict(**attrs)
-        self.assertEqual(obj.a, 1)
-        self.assertEqual(obj.b, 2)
+        class A(object):
+            @misc.cachedproperty
+            def b(self):
+                called.append(1)
+                # NOTE(harlowja): wait for a little and give some time for
+                # another thread to potentially also get in this method to
+                # also create the same property...
+                time.sleep(random.random() * 0.5)
+                return 'b'
 
-    def test_private_create(self):
-        attrs = {
-            '_a': 1,
-        }
-        self.assertRaises(AttributeError, misc.AttrDict, **attrs)
+        a = A()
+        threads = []
+        try:
+            for _i in range(0, 20):
+                t = threading_utils.daemon_thread(lambda: a.b)
+                threads.append(t)
+            for t in threads:
+                t.start()
+        finally:
+            while threads:
+                t = threads.pop()
+                t.join()
 
-    def test_invalid_create(self):
-        attrs = {
-            # Python attributes can't start with a number.
-            '123_abc': 1,
-        }
-        self.assertRaises(AttributeError, misc.AttrDict, **attrs)
-
-    def test_no_overwrite(self):
-        attrs = {
-            # Python attributes can't start with a number.
-            'update': 1,
-        }
-        self.assertRaises(AttributeError, misc.AttrDict, **attrs)
-
-    def test_back_todict(self):
-        attrs = {
-            'a': 1,
-        }
-        obj = misc.AttrDict(**attrs)
-        self.assertEqual(obj.a, 1)
-        self.assertEqual(attrs, dict(obj))
-
-    def test_runtime_invalid_set(self):
-
-        def bad_assign(obj):
-            obj._123 = 'b'
-
-        attrs = {
-            'a': 1,
-        }
-        obj = misc.AttrDict(**attrs)
-        self.assertEqual(obj.a, 1)
-        self.assertRaises(AttributeError, bad_assign, obj)
-
-    def test_bypass_get(self):
-        attrs = {
-            'a': 1,
-        }
-        obj = misc.AttrDict(**attrs)
-        self.assertEqual(1, obj['a'])
-
-    def test_bypass_set_no_get(self):
-
-        def bad_assign(obj):
-            obj._b = 'e'
-
-        attrs = {
-            'a': 1,
-        }
-        obj = misc.AttrDict(**attrs)
-        self.assertEqual(1, obj['a'])
-        obj['_b'] = 'c'
-        self.assertRaises(AttributeError, bad_assign, obj)
-        self.assertEqual('c', obj['_b'])
-
-
-class IsValidAttributeNameTestCase(test.TestCase):
-    def test_a_is_ok(self):
-        self.assertTrue(misc.is_valid_attribute_name('a'))
-
-    def test_name_can_be_longer(self):
-        self.assertTrue(misc.is_valid_attribute_name('foobarbaz'))
-
-    def test_name_can_have_digits(self):
-        self.assertTrue(misc.is_valid_attribute_name('fo12'))
-
-    def test_name_cannot_start_with_digit(self):
-        self.assertFalse(misc.is_valid_attribute_name('1z'))
-
-    def test_hidden_names_are_forbidden(self):
-        self.assertFalse(misc.is_valid_attribute_name('_z'))
-
-    def test_hidden_names_can_be_allowed(self):
-        self.assertTrue(
-            misc.is_valid_attribute_name('_z', allow_hidden=True))
-
-    def test_self_is_forbidden(self):
-        self.assertFalse(misc.is_valid_attribute_name('self'))
-
-    def test_self_can_be_allowed(self):
-        self.assertTrue(
-            misc.is_valid_attribute_name('self', allow_self=True))
-
-    def test_no_unicode_please(self):
-        self.assertFalse(misc.is_valid_attribute_name('mañana'))
+        self.assertEqual(1, len(called))
+        self.assertEqual('b', a.b)
 
 
 class UriParseTest(test.TestCase):
@@ -550,12 +140,7 @@ class UriParseTest(test.TestCase):
         self.assertEqual('192.168.0.1', parsed.hostname)
         self.assertEqual('', parsed.fragment)
         self.assertEqual('/a/b/', parsed.path)
-        self.assertEqual({'c': 'd'}, parsed.params)
-
-    def test_multi_params(self):
-        url = "mysql://www.yahoo.com:3306/a/b/?c=d&c=e"
-        parsed = misc.parse_uri(url, query_duplicates=True)
-        self.assertEqual({'c': ['d', 'e']}, parsed.params)
+        self.assertEqual({'c': 'd'}, parsed.params())
 
     def test_port_provided(self):
         url = "rabbitmq://www.yahoo.com:5672"
@@ -586,47 +171,6 @@ class UriParseTest(test.TestCase):
         self.assertEqual(None, parsed.password)
 
 
-class ExcInfoUtilsTest(test.TestCase):
-
-    def _make_ex_info(self):
-        try:
-            raise RuntimeError('Woot!')
-        except Exception:
-            return sys.exc_info()
-
-    def test_copy_none(self):
-        result = misc.copy_exc_info(None)
-        self.assertIsNone(result)
-
-    def test_copy_exc_info(self):
-        exc_info = self._make_ex_info()
-        result = misc.copy_exc_info(exc_info)
-        self.assertIsNot(result, exc_info)
-        self.assertIs(result[0], RuntimeError)
-        self.assertIsNot(result[1], exc_info[1])
-        self.assertIs(result[2], exc_info[2])
-
-    def test_none_equals(self):
-        self.assertTrue(misc.are_equal_exc_info_tuples(None, None))
-
-    def test_none_ne_tuple(self):
-        exc_info = self._make_ex_info()
-        self.assertFalse(misc.are_equal_exc_info_tuples(None, exc_info))
-
-    def test_tuple_nen_none(self):
-        exc_info = self._make_ex_info()
-        self.assertFalse(misc.are_equal_exc_info_tuples(exc_info, None))
-
-    def test_tuple_equals_itself(self):
-        exc_info = self._make_ex_info()
-        self.assertTrue(misc.are_equal_exc_info_tuples(exc_info, exc_info))
-
-    def test_typle_equals_copy(self):
-        exc_info = self._make_ex_info()
-        copied = misc.copy_exc_info(exc_info)
-        self.assertTrue(misc.are_equal_exc_info_tuples(exc_info, copied))
-
-
 class TestSequenceMinus(test.TestCase):
 
     def test_simple_case(self):
@@ -644,3 +188,32 @@ class TestSequenceMinus(test.TestCase):
     def test_equal_items_not_continious(self):
         result = misc.sequence_minus([1, 2, 3, 1], [1, 3])
         self.assertEqual(result, [2, 1])
+
+
+class TestClamping(test.TestCase):
+    def test_simple_clamp(self):
+        result = misc.clamp(1.0, 2.0, 3.0)
+        self.assertEqual(result, 2.0)
+        result = misc.clamp(4.0, 2.0, 3.0)
+        self.assertEqual(result, 3.0)
+        result = misc.clamp(3.0, 4.0, 4.0)
+        self.assertEqual(result, 4.0)
+
+    def test_invalid_clamp(self):
+        self.assertRaises(ValueError, misc.clamp, 0.0, 2.0, 1.0)
+
+    def test_clamped_callback(self):
+        calls = []
+
+        def on_clamped():
+            calls.append(True)
+
+        misc.clamp(-1, 0.0, 1.0, on_clamped=on_clamped)
+        self.assertEqual(1, len(calls))
+        calls.pop()
+
+        misc.clamp(0.0, 0.0, 1.0, on_clamped=on_clamped)
+        self.assertEqual(0, len(calls))
+
+        misc.clamp(2, 0.0, 1.0, on_clamped=on_clamped)
+        self.assertEqual(1, len(calls))
diff --git a/taskflow/tests/unit/test_utils_async_utils.py b/taskflow/tests/unit/test_utils_async_utils.py
index 0abf4107..b538c2ee 100644
--- a/taskflow/tests/unit/test_utils_async_utils.py
+++ b/taskflow/tests/unit/test_utils_async_utils.py
@@ -14,10 +14,10 @@
 # License for the specific language governing permissions and limitations
 # under the License.
-from concurrent import futures
 import testtools
 
 from taskflow import test
+from taskflow.types import futures
 from taskflow.utils import async_utils as au
 from taskflow.utils import eventlet_utils as eu
 
@@ -29,7 +29,7 @@ class WaitForAnyTestsMixin(object):
         def foo():
             pass
 
-        with self.executor_cls(2) as e:
+        with self._make_executor(2) as e:
             fs = [e.submit(foo), e.submit(foo)]
             # this test assumes that our foo will end within 10 seconds
             done, not_done = au.wait_for_any(fs, 10)
@@ -56,31 +56,14 @@ class WaitForAnyTestsMixin(object):
 @testtools.skipIf(not eu.EVENTLET_AVAILABLE, 'eventlet is not available')
 class AsyncUtilsEventletTest(test.TestCase,
                              WaitForAnyTestsMixin):
-    executor_cls = eu.GreenExecutor
-    is_green = True
-
-    def test_add_result(self):
-        waiter = eu._GreenWaiter()
-        self.assertFalse(waiter.event.is_set())
-        waiter.add_result(futures.Future())
-        self.assertTrue(waiter.event.is_set())
-
-    def test_add_exception(self):
-        waiter = eu._GreenWaiter()
-        self.assertFalse(waiter.event.is_set())
-        waiter.add_exception(futures.Future())
-        self.assertTrue(waiter.event.is_set())
-
-    def test_add_cancelled(self):
-        waiter = eu._GreenWaiter()
-        self.assertFalse(waiter.event.is_set())
-        waiter.add_cancelled(futures.Future())
-        self.assertTrue(waiter.event.is_set())
+    def _make_executor(self, max_workers):
+        return futures.GreenThreadPoolExecutor(max_workers=max_workers)
 
 
 class AsyncUtilsThreadedTest(test.TestCase,
                              WaitForAnyTestsMixin):
-    executor_cls = futures.ThreadPoolExecutor
+    def _make_executor(self, max_workers):
+        return futures.ThreadPoolExecutor(max_workers=max_workers)
 
 
 class MakeCompletedFutureTest(test.TestCase):
@@ -90,3 +73,16 @@ class MakeCompletedFutureTest(test.TestCase):
         future = au.make_completed_future(result)
         self.assertTrue(future.done())
         self.assertIs(future.result(), result)
+
+    def test_make_completed_future_exception(self):
+        result = IOError("broken")
+        future = au.make_completed_future(result, exception=True)
+        self.assertTrue(future.done())
+        self.assertRaises(IOError, future.result)
+        self.assertIsNotNone(future.exception())
+
+
+class AsyncUtilsSynchronousTest(test.TestCase,
+                                WaitForAnyTestsMixin):
+    def _make_executor(self, max_workers):
+        return futures.SynchronousExecutor()
diff --git a/taskflow/tests/unit/test_utils_lock_utils.py b/taskflow/tests/unit/test_utils_lock_utils.py
index 2b2f1f83..06bef1ee 100644
--- a/taskflow/tests/unit/test_utils_lock_utils.py
+++ b/taskflow/tests/unit/test_utils_lock_utils.py
@@ -21,7 +21,10 @@ import time
 
 from concurrent import futures
 
 from taskflow import test
+from taskflow.test import mock
+from taskflow.tests import utils as test_utils
 from taskflow.utils import lock_utils
+from taskflow.utils import threading_utils
 
 # NOTE(harlowja): Sleep a little so time.time() can not be the same (which will
 # cause false positives when our overlap detection code runs). If there are
@@ -85,6 +88,220 @@ def _spawn_variation(readers, writers, max_workers=None):
     return (writer_times, reader_times)
 
 
+class MultilockTest(test.TestCase):
+    def test_empty_error(self):
+        self.assertRaises(ValueError,
+                          lock_utils.MultiLock, [])
+        self.assertRaises(ValueError,
+                          lock_utils.MultiLock, ())
+        self.assertRaises(ValueError,
+                          lock_utils.MultiLock, iter([]))
+
+    def test_creation(self):
+        locks = []
+        for _i in range(0, 10):
+            locks.append(threading.Lock())
+        n_lock = lock_utils.MultiLock(locks)
+        self.assertEqual(0, n_lock.obtained)
+        self.assertEqual(len(locks), len(n_lock))
+
+    def test_acquired(self):
+        lock1 = threading.Lock()
+        lock2 = threading.Lock()
+        n_lock = lock_utils.MultiLock((lock1, lock2))
+        self.assertTrue(n_lock.acquire())
+        try:
+            self.assertTrue(lock1.locked())
+            self.assertTrue(lock2.locked())
+        finally:
+            n_lock.release()
+        self.assertFalse(lock1.locked())
+        self.assertFalse(lock2.locked())
+
+    def test_acquired_context_manager(self):
+        lock1 = threading.Lock()
+        n_lock = lock_utils.MultiLock([lock1])
+        with n_lock as gotten:
+            self.assertTrue(gotten)
+            self.assertTrue(lock1.locked())
+        self.assertFalse(lock1.locked())
+        self.assertEqual(0, n_lock.obtained)
+
+    def test_partial_acquired(self):
+        lock1 = threading.Lock()
+        lock2 = mock.create_autospec(threading.Lock())
+        lock2.acquire.return_value = False
+        n_lock = lock_utils.MultiLock((lock1, lock2))
+        with n_lock as gotten:
+            self.assertFalse(gotten)
+            self.assertTrue(lock1.locked())
+            self.assertEqual(1, n_lock.obtained)
+            self.assertEqual(2, len(n_lock))
+        self.assertEqual(0, n_lock.obtained)
+
+    def test_partial_acquired_failure(self):
+        lock1 = threading.Lock()
+        lock2 = mock.create_autospec(threading.Lock())
+        lock2.acquire.side_effect = RuntimeError("Broke")
+        n_lock = lock_utils.MultiLock((lock1, lock2))
+        self.assertRaises(threading.ThreadError, n_lock.acquire)
+        self.assertEqual(1, n_lock.obtained)
+        n_lock.release()
+
+    def test_release_failure(self):
+        lock1 = threading.Lock()
+        lock2 = mock.create_autospec(threading.Lock())
+        lock2.acquire.return_value = True
+        lock2.release.side_effect = RuntimeError("Broke")
+        n_lock = lock_utils.MultiLock((lock1, lock2))
+        self.assertTrue(n_lock.acquire())
+        self.assertEqual(2, n_lock.obtained)
+        self.assertRaises(threading.ThreadError, n_lock.release)
+        self.assertEqual(2, n_lock.obtained)
+        lock2.release.side_effect = None
+        n_lock.release()
+        self.assertEqual(0, n_lock.obtained)
+
+    def test_release_partial_failure(self):
+        lock1 = threading.Lock()
+        lock2 = mock.create_autospec(threading.Lock())
+        lock2.acquire.return_value = True
+        lock2.release.side_effect = RuntimeError("Broke")
+        lock3 = threading.Lock()
+        n_lock = lock_utils.MultiLock((lock1, lock2, lock3))
+        self.assertTrue(n_lock.acquire())
+        self.assertEqual(3, n_lock.obtained)
+        self.assertRaises(threading.ThreadError, n_lock.release)
+        self.assertEqual(2, n_lock.obtained)
+        lock2.release.side_effect = None
+        n_lock.release()
+        self.assertEqual(0, n_lock.obtained)
+
+    def test_acquired_pass(self):
+        activated = 
collections.deque() + lock1 = threading.Lock() + lock2 = threading.Lock() + n_lock = lock_utils.MultiLock((lock1, lock2)) + + def critical_section(): + start = time.time() + time.sleep(0.05) + end = time.time() + activated.append((start, end)) + + def run(): + with n_lock: + critical_section() + + threads = [] + for _i in range(0, 20): + t = threading_utils.daemon_thread(run) + threads.append(t) + t.start() + while threads: + t = threads.pop() + t.join() + for (start, end) in activated: + self.assertEqual(1, _find_overlaps(activated, start, end)) + + self.assertFalse(lock1.locked()) + self.assertFalse(lock2.locked()) + + def test_acquired_fail(self): + activated = collections.deque() + lock1 = threading.Lock() + lock2 = threading.Lock() + n_lock = lock_utils.MultiLock((lock1, lock2)) + + def run(): + with n_lock: + start = time.time() + time.sleep(0.05) + end = time.time() + activated.append((start, end)) + + def run_fail(): + try: + with n_lock: + raise RuntimeError() + except RuntimeError: + pass + + threads = [] + for i in range(0, 20): + if i % 2 == 1: + target = run_fail + else: + target = run + t = threading_utils.daemon_thread(target) + threads.append(t) + t.start() + while threads: + t = threads.pop() + t.join() + + for (start, end) in activated: + self.assertEqual(1, _find_overlaps(activated, start, end)) + self.assertFalse(lock1.locked()) + self.assertFalse(lock2.locked()) + + def test_double_acquire_single(self): + activated = collections.deque() + + def run(): + start = time.time() + time.sleep(0.05) + end = time.time() + activated.append((start, end)) + + lock1 = threading.RLock() + lock2 = threading.RLock() + n_lock = lock_utils.MultiLock((lock1, lock2)) + with n_lock: + run() + with n_lock: + run() + run() + + for (start, end) in activated: + self.assertEqual(1, _find_overlaps(activated, start, end)) + + def test_double_acquire_many(self): + activated = collections.deque() + n_lock = lock_utils.MultiLock((threading.RLock(), threading.RLock())) + + 
def critical_section(): + start = time.time() + time.sleep(0.05) + end = time.time() + activated.append((start, end)) + + def run(): + with n_lock: + critical_section() + with n_lock: + critical_section() + critical_section() + + threads = [] + for i in range(0, 20): + t = threading_utils.daemon_thread(run) + threads.append(t) + t.start() + while threads: + t = threads.pop() + t.join() + + for (start, end) in activated: + self.assertEqual(1, _find_overlaps(activated, start, end)) + + def test_no_acquire_release(self): + lock1 = threading.Lock() + lock2 = threading.Lock() + n_lock = lock_utils.MultiLock((lock1, lock2)) + self.assertRaises(threading.ThreadError, n_lock.release) + + class ReadWriteLockTest(test.TestCase): def test_writer_abort(self): lock = lock_utils.ReaderWriterLock() @@ -135,7 +352,7 @@ class ReadWriteLockTest(test.TestCase): def test_double_reader_writer(self): lock = lock_utils.ReaderWriterLock() activated = collections.deque() - active = threading.Event() + active = threading_utils.Event() def double_reader(): with lock.read_lock(): @@ -149,11 +366,11 @@ class ReadWriteLockTest(test.TestCase): with lock.write_lock(): activated.append(lock.owner) - reader = threading.Thread(target=double_reader) + reader = threading_utils.daemon_thread(double_reader) reader.start() - active.wait() + self.assertTrue(active.wait(test_utils.WAIT_TIMEOUT)) - writer = threading.Thread(target=happy_writer) + writer = threading_utils.daemon_thread(happy_writer) writer.start() reader.join() diff --git a/taskflow/tests/unit/test_utils_threading_utils.py b/taskflow/tests/unit/test_utils_threading_utils.py new file mode 100644 index 00000000..974285fa --- /dev/null +++ b/taskflow/tests/unit/test_utils_threading_utils.py @@ -0,0 +1,115 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2012 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import collections +import time + +from taskflow import test +from taskflow.utils import threading_utils as tu + + +def _spinner(death): + while not death.is_set(): + time.sleep(0.1) + + +class TestThreadHelpers(test.TestCase): + def test_event_wait(self): + e = tu.Event() + e.set() + self.assertTrue(e.wait()) + + def test_alive_thread_falsey(self): + for v in [False, 0, None, ""]: + self.assertFalse(tu.is_alive(v)) + + def test_alive_thread(self): + death = tu.Event() + t = tu.daemon_thread(_spinner, death) + self.assertFalse(tu.is_alive(t)) + t.start() + self.assertTrue(tu.is_alive(t)) + death.set() + t.join() + self.assertFalse(tu.is_alive(t)) + + def test_daemon_thread(self): + death = tu.Event() + t = tu.daemon_thread(_spinner, death) + self.assertTrue(t.daemon) + + +class TestThreadBundle(test.TestCase): + thread_count = 5 + + def setUp(self): + super(TestThreadBundle, self).setUp() + self.bundle = tu.ThreadBundle() + self.death = tu.Event() + self.addCleanup(self.bundle.stop) + self.addCleanup(self.death.set) + + def test_bind_invalid(self): + self.assertRaises(ValueError, self.bundle.bind, 1) + for k in ['after_start', 'before_start', + 'before_join', 'after_join']: + kwargs = { + k: 1, + } + self.assertRaises(ValueError, self.bundle.bind, + lambda: tu.daemon_thread(_spinner, self.death), + **kwargs) + + def test_bundle_length(self): + self.assertEqual(0, len(self.bundle)) + for i in range(0, self.thread_count): + self.bundle.bind(lambda: tu.daemon_thread(_spinner, self.death)) + self.assertEqual(1, self.bundle.start()) + self.assertEqual(i + 1, 
len(self.bundle)) + self.death.set() + self.assertEqual(self.thread_count, self.bundle.stop()) + self.assertEqual(self.thread_count, len(self.bundle)) + + def test_start_stop(self): + events = collections.deque() + + def before_start(t): + events.append('bs') + + def before_join(t): + events.append('bj') + self.death.set() + + def after_start(t): + events.append('as') + + def after_join(t): + events.append('aj') + + for _i in range(0, self.thread_count): + self.bundle.bind(lambda: tu.daemon_thread(_spinner, self.death), + before_join=before_join, + after_join=after_join, + before_start=before_start, + after_start=after_start) + self.assertEqual(self.thread_count, self.bundle.start()) + self.assertEqual(self.thread_count, len(self.bundle)) + self.assertEqual(self.thread_count, self.bundle.stop()) + for event in ['as', 'bs', 'bj', 'aj']: + self.assertEqual(self.thread_count, + len([e for e in events if e == event])) + self.assertEqual(0, self.bundle.stop()) + self.assertTrue(self.death.is_set()) diff --git a/taskflow/tests/unit/worker_based/test_creation.py b/taskflow/tests/unit/worker_based/test_creation.py new file mode 100644 index 00000000..887498ce --- /dev/null +++ b/taskflow/tests/unit/worker_based/test_creation.py @@ -0,0 +1,90 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
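The `threading_utils` helpers exercised by the new `test_utils_threading_utils.py` module can be approximated on top of the stdlib like so; this is a sketch of the assumed semantics (`daemon_thread`, falsey-tolerant `is_alive`), not the project's implementation:

```python
import threading


def daemon_thread(target, *args, **kwargs):
    # Build (but do not start) a thread that won't block interpreter exit.
    thread = threading.Thread(target=target, args=args, kwargs=kwargs)
    thread.daemon = True
    return thread


def is_alive(thread):
    # Tolerate falsey values (None, 0, "") by treating them as "not running",
    # matching test_alive_thread_falsey above.
    if not thread:
        return False
    return thread.is_alive()


death = threading.Event()
worker = daemon_thread(lambda: death.wait())
worker.start()
death.set()
worker.join()
```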
+ +from taskflow.engines.worker_based import engine +from taskflow.engines.worker_based import executor +from taskflow.patterns import linear_flow as lf +from taskflow.persistence import backends +from taskflow import test +from taskflow.test import mock +from taskflow.tests import utils +from taskflow.utils import persistence_utils as pu + + +class TestWorkerBasedActionEngine(test.MockTestCase): + @staticmethod + def _create_engine(**kwargs): + flow = lf.Flow('test-flow').add(utils.DummyTask()) + backend = backends.fetch({'connection': 'memory'}) + flow_detail = pu.create_flow_detail(flow, backend=backend) + options = kwargs.copy() + return engine.WorkerBasedActionEngine(flow, flow_detail, + backend, options) + + def _patch_in_executor(self): + executor_mock, executor_inst_mock = self.patchClass( + engine.executor, 'WorkerTaskExecutor', attach_as='executor') + return executor_mock, executor_inst_mock + + def test_creation_default(self): + executor_mock, executor_inst_mock = self._patch_in_executor() + eng = self._create_engine() + expected_calls = [ + mock.call.executor_class(uuid=eng.storage.flow_uuid, + url=None, + exchange='default', + topics=[], + transport=None, + transport_options=None, + transition_timeout=mock.ANY, + retry_options=None) + ] + self.assertEqual(self.master_mock.mock_calls, expected_calls) + + def test_creation_custom(self): + executor_mock, executor_inst_mock = self._patch_in_executor() + topics = ['test-topic1', 'test-topic2'] + exchange = 'test-exchange' + broker_url = 'test-url' + eng = self._create_engine( + url=broker_url, + exchange=exchange, + transport='memory', + transport_options={}, + transition_timeout=200, + topics=topics, + retry_options={}) + expected_calls = [ + mock.call.executor_class(uuid=eng.storage.flow_uuid, + url=broker_url, + exchange=exchange, + topics=topics, + transport='memory', + transport_options={}, + transition_timeout=200, + retry_options={}) + ] + self.assertEqual(self.master_mock.mock_calls, expected_calls) 
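The creation tests above work by comparing `master_mock.mock_calls` against an expected call list. Stripped of the `MockTestCase` machinery, the underlying pattern is plain `unittest.mock`; `Engine` below is a hypothetical stand-in, not the TaskFlow class:

```python
from unittest import mock


class Engine(object):
    # Hypothetical stand-in for a class that constructs a collaborator.
    def __init__(self, executor_factory, url=None, exchange='default'):
        self._executor = executor_factory(url=url, exchange=exchange)


factory = mock.MagicMock(name='executor_factory')
Engine(factory, url='test-url', exchange='test-exchange')

# mock.call records let us assert on exactly how the factory was invoked;
# mock.ANY (as used in the tests above) wildcards arguments we don't pin.
expected = [mock.call(url='test-url', exchange='test-exchange')]
assert factory.mock_calls == expected
```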
+ + def test_creation_custom_executor(self): + ex = executor.WorkerTaskExecutor('a', 'test-exchange', ['test-topic']) + eng = self._create_engine(executor=ex) + self.assertIs(eng._task_executor, ex) + self.assertIsInstance(eng._task_executor, executor.WorkerTaskExecutor) + + def test_creation_invalid_custom_executor(self): + self.assertRaises(TypeError, self._create_engine, executor=2) + self.assertRaises(TypeError, self._create_engine, executor='blah') diff --git a/taskflow/tests/unit/worker_based/test_dispatcher.py b/taskflow/tests/unit/worker_based/test_dispatcher.py index 4dae910d..21fccdcc 100644 --- a/taskflow/tests/unit/worker_based/test_dispatcher.py +++ b/taskflow/tests/unit/worker_based/test_dispatcher.py @@ -14,11 +14,14 @@ # License for the specific language governing permissions and limitations # under the License. -from kombu import message -import mock +try: + from kombu import message # noqa +except ImportError: + from kombu.transport import base as message from taskflow.engines.worker_based import dispatcher from taskflow import test +from taskflow.test import mock def mock_acked_message(ack_ok=True, **kwargs): @@ -34,16 +37,16 @@ def mock_acked_message(ack_ok=True, **kwargs): return msg -class TestDispatcher(test.MockTestCase): +class TestDispatcher(test.TestCase): def test_creation(self): on_hello = mock.MagicMock() handlers = {'hello': on_hello} - dispatcher.TypeDispatcher(handlers) + dispatcher.TypeDispatcher(type_handlers=handlers) def test_on_message(self): on_hello = mock.MagicMock() handlers = {'hello': on_hello} - d = dispatcher.TypeDispatcher(handlers) + d = dispatcher.TypeDispatcher(type_handlers=handlers) msg = mock_acked_message(properties={'type': 'hello'}) d.on_message("", msg) self.assertTrue(on_hello.called) @@ -51,15 +54,15 @@ class TestDispatcher(test.MockTestCase): self.assertTrue(msg.acknowledged) def test_on_rejected_message(self): - d = dispatcher.TypeDispatcher({}) + d = dispatcher.TypeDispatcher() msg = 
mock_acked_message(properties={'type': 'hello'}) d.on_message("", msg) self.assertTrue(msg.reject_log_error.called) self.assertFalse(msg.acknowledged) def test_on_requeue_message(self): - d = dispatcher.TypeDispatcher({}) - d.add_requeue_filter(lambda data, message: True) + d = dispatcher.TypeDispatcher() + d.requeue_filters.append(lambda data, message: True) msg = mock_acked_message() d.on_message("", msg) self.assertTrue(msg.requeue.called) @@ -68,7 +71,7 @@ class TestDispatcher(test.MockTestCase): def test_failed_ack(self): on_hello = mock.MagicMock() handlers = {'hello': on_hello} - d = dispatcher.TypeDispatcher(handlers) + d = dispatcher.TypeDispatcher(type_handlers=handlers) msg = mock_acked_message(ack_ok=False, properties={'type': 'hello'}) d.on_message("", msg) diff --git a/taskflow/tests/unit/worker_based/test_endpoint.py b/taskflow/tests/unit/worker_based/test_endpoint.py index 2a52f5ab..6f13c8be 100644 --- a/taskflow/tests/unit/worker_based/test_endpoint.py +++ b/taskflow/tests/unit/worker_based/test_endpoint.py @@ -14,11 +14,12 @@ # License for the specific language governing permissions and limitations # under the License. 
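The dispatcher API change these tests track (handlers passed via the `type_handlers` keyword; requeue predicates appended to a public `requeue_filters` list instead of registered through `add_requeue_filter`) can be sketched minimally as below. `FakeMessage` is a hypothetical double for a kombu message, and the error-logging ack/reject wrappers of the real class are elided:

```python
class FakeMessage(object):
    # Hypothetical stand-in for a kombu message (illustration only).
    def __init__(self, properties):
        self.properties = properties
        self.acked = self.rejected = self.requeued = False

    def ack(self):
        self.acked = True

    def reject(self):
        self.rejected = True

    def requeue(self):
        self.requeued = True


class TypeDispatcher(object):
    # Sketch of the assumed dispatch flow: requeue filters run first,
    # then the message is routed by its 'type' property; unhandled
    # types are rejected.
    def __init__(self, type_handlers=None):
        self.type_handlers = dict(type_handlers or {})
        self.requeue_filters = []

    def on_message(self, data, message):
        for matcher in self.requeue_filters:
            if matcher(data, message):
                message.requeue()
                return
        handler = self.type_handlers.get(message.properties.get('type'))
        if handler is not None:
            handler(data, message)
            message.ack()
        else:
            message.reject()
```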
+from oslo_utils import reflection + from taskflow.engines.worker_based import endpoint as ep from taskflow import task from taskflow import test from taskflow.tests import utils -from taskflow.utils import reflection class Task(task.Task): @@ -42,14 +43,14 @@ class TestEndpoint(test.TestCase): self.task_result = 1 def test_creation(self): - task = self.task_ep._get_task() + task = self.task_ep.generate() self.assertEqual(self.task_ep.name, self.task_cls_name) self.assertIsInstance(task, self.task_cls) self.assertEqual(task.name, self.task_cls_name) def test_creation_with_task_name(self): task_name = 'test' - task = self.task_ep._get_task(name=task_name) + task = self.task_ep.generate(name=task_name) self.assertEqual(self.task_ep.name, self.task_cls_name) self.assertIsInstance(task, self.task_cls) self.assertEqual(task.name, task_name) @@ -58,20 +59,22 @@ class TestEndpoint(test.TestCase): # NOTE(skudriashev): Exception is expected here since task # is created without any arguments passing to its constructor. 
endpoint = ep.Endpoint(Task) - self.assertRaises(TypeError, endpoint._get_task) + self.assertRaises(TypeError, endpoint.generate) def test_to_str(self): self.assertEqual(str(self.task_ep), self.task_cls_name) def test_execute(self): - result = self.task_ep.execute(task_name=self.task_cls_name, + task = self.task_ep.generate(self.task_cls_name) + result = self.task_ep.execute(task, task_uuid=self.task_uuid, arguments=self.task_args, progress_callback=None) self.assertEqual(result, self.task_result) def test_revert(self): - result = self.task_ep.revert(task_name=self.task_cls_name, + task = self.task_ep.generate(self.task_cls_name) + result = self.task_ep.revert(task, task_uuid=self.task_uuid, arguments=self.task_args, progress_callback=None, diff --git a/taskflow/tests/unit/worker_based/test_engine.py b/taskflow/tests/unit/worker_based/test_engine.py deleted file mode 100644 index c966be5a..00000000 --- a/taskflow/tests/unit/worker_based/test_engine.py +++ /dev/null @@ -1,70 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
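The endpoint rename these tests follow (public `generate()` replacing the private `_get_task()`, with `execute`/`revert` now taking the generated task instance rather than a task name) has roughly this shape; both classes below are simplified hypotheticals inferred from the test calls, not TaskFlow's real ones:

```python
class Task(object):
    # Hypothetical task; real TaskFlow tasks subclass taskflow.task.Task.
    def __init__(self, name=None):
        self.name = name or type(self).__name__

    def execute(self, a, b):
        return a + b


class Endpoint(object):
    # Sketch: generate() builds a fresh task instance on demand, and
    # execute() operates on an instance instead of resolving a name.
    def __init__(self, task_cls):
        self._task_cls = task_cls
        self.name = '%s.%s' % (task_cls.__module__, task_cls.__name__)

    def generate(self, name=None):
        if name is not None:
            return self._task_cls(name=name)
        return self._task_cls()

    def execute(self, task, arguments):
        return task.execute(**arguments)
```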
- -import mock - -from taskflow.engines.worker_based import engine -from taskflow.patterns import linear_flow as lf -from taskflow import test -from taskflow.tests import utils -from taskflow.utils import persistence_utils as pu - - -class TestWorkerBasedActionEngine(test.MockTestCase): - - def setUp(self): - super(TestWorkerBasedActionEngine, self).setUp() - self.broker_url = 'test-url' - self.exchange = 'test-exchange' - self.topics = ['test-topic1', 'test-topic2'] - - # patch classes - self.executor_mock, self.executor_inst_mock = self._patch_class( - engine.executor, 'WorkerTaskExecutor', attach_as='executor') - - def test_creation_default(self): - flow = lf.Flow('test-flow').add(utils.DummyTask()) - _, flow_detail = pu.temporary_flow_detail() - engine.WorkerBasedActionEngine(flow, flow_detail, None, {}).compile() - - expected_calls = [ - mock.call.executor_class(uuid=flow_detail.uuid, - url=None, - exchange='default', - topics=[], - transport=None, - transport_options=None) - ] - self.assertEqual(self.master_mock.mock_calls, expected_calls) - - def test_creation_custom(self): - flow = lf.Flow('test-flow').add(utils.DummyTask()) - _, flow_detail = pu.temporary_flow_detail() - config = {'url': self.broker_url, 'exchange': self.exchange, - 'topics': self.topics, 'transport': 'memory', - 'transport_options': {}} - engine.WorkerBasedActionEngine( - flow, flow_detail, None, config).compile() - - expected_calls = [ - mock.call.executor_class(uuid=flow_detail.uuid, - url=self.broker_url, - exchange=self.exchange, - topics=self.topics, - transport='memory', - transport_options={}) - ] - self.assertEqual(self.master_mock.mock_calls, expected_calls) diff --git a/taskflow/tests/unit/worker_based/test_executor.py b/taskflow/tests/unit/worker_based/test_executor.py index e6c97e17..e7831783 100644 --- a/taskflow/tests/unit/worker_based/test_executor.py +++ b/taskflow/tests/unit/worker_based/test_executor.py @@ -14,25 +14,26 @@ # License for the specific language governing 
permissions and limitations # under the License. -import threading import time from concurrent import futures -import mock +from oslo_utils import timeutils from taskflow.engines.worker_based import executor from taskflow.engines.worker_based import protocol as pr -from taskflow.openstack.common import timeutils +from taskflow import task as task_atom from taskflow import test -from taskflow.tests import utils -from taskflow.utils import misc +from taskflow.test import mock +from taskflow.tests import utils as test_utils +from taskflow.types import failure +from taskflow.utils import threading_utils class TestWorkerTaskExecutor(test.MockTestCase): def setUp(self): super(TestWorkerTaskExecutor, self).setUp() - self.task = utils.DummyTask() + self.task = test_utils.DummyTask() self.task_uuid = 'task-uuid' self.task_args = {'a': 'a'} self.task_result = 'task-result' @@ -42,12 +43,12 @@ class TestWorkerTaskExecutor(test.MockTestCase): self.executor_uuid = 'executor-uuid' self.executor_exchange = 'executor-exchange' self.executor_topic = 'test-topic1' - self.proxy_started_event = threading.Event() + self.proxy_started_event = threading_utils.Event() # patch classes - self.proxy_mock, self.proxy_inst_mock = self._patch_class( + self.proxy_mock, self.proxy_inst_mock = self.patchClass( executor.proxy, 'Proxy') - self.request_mock, self.request_inst_mock = self._patch_class( + self.request_mock, self.request_inst_mock = self.patchClass( executor.pr, 'Request', autospec=False) # other mocking @@ -56,8 +57,8 @@ class TestWorkerTaskExecutor(test.MockTestCase): self.request_inst_mock.uuid = self.task_uuid self.request_inst_mock.expired = False self.request_inst_mock.task_cls = self.task.name - self.wait_for_any_mock = self._patch( - 'taskflow.engines.worker_based.executor.async_utils.wait_for_any') + self.wait_for_any_mock = self.patch( + 'taskflow.engines.action_engine.executor.async_utils.wait_for_any') self.message_mock = mock.MagicMock(name='message') 
self.message_mock.properties = {'correlation_id': self.task_uuid, 'type': pr.RESPONSE} @@ -78,15 +79,19 @@ class TestWorkerTaskExecutor(test.MockTestCase): executor_kwargs.update(kwargs) ex = executor.WorkerTaskExecutor(**executor_kwargs) if reset_master_mock: - self._reset_master_mock() + self.resetMasterMock() return ex def test_creation(self): ex = self.executor(reset_master_mock=False) - master_mock_calls = [ mock.call.Proxy(self.executor_uuid, self.executor_exchange, - mock.ANY, ex._on_wait, url=self.broker_url) + on_wait=ex._on_wait, + url=self.broker_url, transport=mock.ANY, + transport_options=mock.ANY, + retry_options=mock.ANY, + type_handlers=mock.ANY), + mock.call.proxy.dispatcher.type_handlers.update(mock.ANY), ] self.assertEqual(self.master_mock.mock_calls, master_mock_calls) @@ -102,17 +107,22 @@ class TestWorkerTaskExecutor(test.MockTestCase): self.assertEqual(expected_calls, self.request_inst_mock.mock_calls) def test_on_message_response_state_progress(self): - response = pr.Response(pr.PROGRESS, progress=1.0) + response = pr.Response(pr.EVENT, + event_type=task_atom.EVENT_UPDATE_PROGRESS, + details={'progress': 1.0}) ex = self.executor() ex._requests_cache[self.task_uuid] = self.request_inst_mock ex._process_response(response.to_dict(), self.message_mock) - self.assertEqual(self.request_inst_mock.mock_calls, - [mock.call.on_progress(progress=1.0)]) + expected_calls = [ + mock.call.notifier.notify(task_atom.EVENT_UPDATE_PROGRESS, + {'progress': 1.0}), + ] + self.assertEqual(expected_calls, self.request_inst_mock.mock_calls) def test_on_message_response_state_failure(self): - failure = misc.Failure.from_exception(Exception('test')) - failure_dict = failure.to_dict() + a_failure = failure.Failure.from_exception(Exception('test')) + failure_dict = a_failure.to_dict() response = pr.Response(pr.FAILURE, result=failure_dict) ex = self.executor() ex._requests_cache[self.task_uuid] = self.request_inst_mock @@ -121,7 +131,7 @@ class 
TestWorkerTaskExecutor(test.MockTestCase): self.assertEqual(len(ex._requests_cache), 0) expected_calls = [ mock.call.transition_and_log_error(pr.FAILURE, logger=mock.ANY), - mock.call.set_result(result=utils.FailureMatcher(failure)) + mock.call.set_result(result=test_utils.FailureMatcher(a_failure)) ] self.assertEqual(expected_calls, self.request_inst_mock.mock_calls) @@ -203,72 +213,65 @@ class TestWorkerTaskExecutor(test.MockTestCase): self.assertEqual(len(ex._requests_cache), 0) def test_execute_task(self): - self.message_mock.properties['type'] = pr.NOTIFY - notify = pr.Notify(topic=self.executor_topic, tasks=[self.task.name]) ex = self.executor() - ex._process_notify(notify.to_dict(), self.message_mock) + ex._finder._add(self.executor_topic, [self.task.name]) ex.execute_task(self.task, self.task_uuid, self.task_args) expected_calls = [ mock.call.Request(self.task, self.task_uuid, 'execute', - self.task_args, None, self.timeout), + self.task_args, self.timeout), mock.call.request.transition_and_log_error(pr.PENDING, logger=mock.ANY), - mock.call.proxy.publish(msg=self.request_inst_mock, - routing_key=self.executor_topic, + mock.call.proxy.publish(self.request_inst_mock, + self.executor_topic, reply_to=self.executor_uuid, correlation_id=self.task_uuid) ] self.assertEqual(expected_calls, self.master_mock.mock_calls) def test_revert_task(self): - self.message_mock.properties['type'] = pr.NOTIFY - notify = pr.Notify(topic=self.executor_topic, tasks=[self.task.name]) ex = self.executor() - ex._process_notify(notify.to_dict(), self.message_mock) + ex._finder._add(self.executor_topic, [self.task.name]) ex.revert_task(self.task, self.task_uuid, self.task_args, self.task_result, self.task_failures) expected_calls = [ mock.call.Request(self.task, self.task_uuid, 'revert', - self.task_args, None, self.timeout, + self.task_args, self.timeout, failures=self.task_failures, result=self.task_result), mock.call.request.transition_and_log_error(pr.PENDING, logger=mock.ANY), - 
mock.call.proxy.publish(msg=self.request_inst_mock, - routing_key=self.executor_topic, + mock.call.proxy.publish(self.request_inst_mock, + self.executor_topic, reply_to=self.executor_uuid, correlation_id=self.task_uuid) ] self.assertEqual(expected_calls, self.master_mock.mock_calls) def test_execute_task_topic_not_found(self): - workers_info = {self.executor_topic: ['']} - ex = self.executor(workers_info=workers_info) + ex = self.executor() ex.execute_task(self.task, self.task_uuid, self.task_args) expected_calls = [ mock.call.Request(self.task, self.task_uuid, 'execute', - self.task_args, None, self.timeout) + self.task_args, self.timeout), ] self.assertEqual(self.master_mock.mock_calls, expected_calls) def test_execute_task_publish_error(self): - self.message_mock.properties['type'] = pr.NOTIFY self.proxy_inst_mock.publish.side_effect = Exception('Woot!') - notify = pr.Notify(topic=self.executor_topic, tasks=[self.task.name]) ex = self.executor() - ex._process_notify(notify.to_dict(), self.message_mock) + ex._finder._add(self.executor_topic, [self.task.name]) ex.execute_task(self.task, self.task_uuid, self.task_args) expected_calls = [ mock.call.Request(self.task, self.task_uuid, 'execute', - self.task_args, None, self.timeout), + self.task_args, self.timeout), mock.call.request.transition_and_log_error(pr.PENDING, logger=mock.ANY), - mock.call.proxy.publish(msg=self.request_inst_mock, - routing_key=self.executor_topic, + mock.call.proxy.publish(self.request_inst_mock, + self.executor_topic, reply_to=self.executor_uuid, correlation_id=self.task_uuid), mock.call.request.transition_and_log_error(pr.FAILURE, @@ -283,7 +286,7 @@ class TestWorkerTaskExecutor(test.MockTestCase): ex.wait_for_any(fs) expected_calls = [ - mock.call(fs, None) + mock.call(fs, timeout=None) ] self.assertEqual(self.wait_for_any_mock.mock_calls, expected_calls) @@ -294,7 +297,7 @@ class TestWorkerTaskExecutor(test.MockTestCase): ex.wait_for_any(fs, timeout) master_mock_calls = [ - 
mock.call(fs, timeout) + mock.call(fs, timeout=timeout) ] self.assertEqual(self.wait_for_any_mock.mock_calls, master_mock_calls) @@ -303,7 +306,7 @@ class TestWorkerTaskExecutor(test.MockTestCase): ex.start() # make sure proxy thread started - self.proxy_started_event.wait() + self.assertTrue(self.proxy_started_event.wait(test_utils.WAIT_TIMEOUT)) # stop executor ex.stop() @@ -319,7 +322,7 @@ class TestWorkerTaskExecutor(test.MockTestCase): ex.start() # make sure proxy thread started - self.proxy_started_event.wait() + self.assertTrue(self.proxy_started_event.wait(test_utils.WAIT_TIMEOUT)) # start executor again ex.start() @@ -345,9 +348,6 @@ class TestWorkerTaskExecutor(test.MockTestCase): ex = self.executor() ex.start() - # wait until executor thread is done - ex._proxy_thread.join() - # stop executor ex.stop() @@ -362,14 +362,14 @@ class TestWorkerTaskExecutor(test.MockTestCase): ex.start() # make sure thread started - self.proxy_started_event.wait() + self.assertTrue(self.proxy_started_event.wait(test_utils.WAIT_TIMEOUT)) # restart executor ex.stop() ex.start() # make sure thread started - self.proxy_started_event.wait() + self.assertTrue(self.proxy_started_event.wait(test_utils.WAIT_TIMEOUT)) # stop executor ex.stop() diff --git a/taskflow/tests/unit/worker_based/test_message_pump.py b/taskflow/tests/unit/worker_based/test_message_pump.py index 10116c21..d8438131 100644 --- a/taskflow/tests/unit/worker_based/test_message_pump.py +++ b/taskflow/tests/unit/worker_based/test_message_pump.py @@ -14,25 +14,23 @@ # License for the specific language governing permissions and limitations # under the License. 
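A recurring fix throughout this patch replaces bare `event.wait()` calls with `self.assertTrue(event.wait(timeout))`. With the stdlib `threading.Event` (Python 2.7+/3.x), `wait()` returns the flag's state, so a bounded wait both prevents a hung test and turns a timeout into an assertion failure:

```python
import threading

started = threading.Event()


def worker():
    started.set()


t = threading.Thread(target=worker)
t.daemon = True
t.start()

# A bare started.wait() could block forever if the worker never signals;
# a bounded wait returns False on timeout, which an assertion catches.
assert started.wait(10.0)
t.join()
```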
-import threading - -import mock +from oslo_utils import uuidutils from taskflow.engines.worker_based import protocol as pr from taskflow.engines.worker_based import proxy -from taskflow.openstack.common import uuidutils from taskflow import test +from taskflow.test import mock from taskflow.tests import utils as test_utils from taskflow.types import latch +from taskflow.utils import threading_utils TEST_EXCHANGE, TEST_TOPIC = ('test-exchange', 'test-topic') -BARRIER_WAIT_TIMEOUT = 1.0 POLLING_INTERVAL = 0.01 -class TestMessagePump(test.MockTestCase): +class TestMessagePump(test.TestCase): def test_notify(self): - barrier = threading.Event() + barrier = threading_utils.Event() on_notify = mock.MagicMock() on_notify.side_effect = lambda *args, **kwargs: barrier.set() @@ -44,14 +42,12 @@ class TestMessagePump(test.MockTestCase): 'polling_interval': POLLING_INTERVAL, }) - t = threading.Thread(target=p.start) - t.daemon = True + t = threading_utils.daemon_thread(p.start) t.start() p.wait() p.publish(pr.Notify(), TEST_TOPIC) - barrier.wait(BARRIER_WAIT_TIMEOUT) - self.assertTrue(barrier.is_set()) + self.assertTrue(barrier.wait(test_utils.WAIT_TIMEOUT)) p.stop() t.join() @@ -59,7 +55,7 @@ class TestMessagePump(test.MockTestCase): on_notify.assert_called_with({}, mock.ANY) def test_response(self): - barrier = threading.Event() + barrier = threading_utils.Event() on_response = mock.MagicMock() on_response.side_effect = lambda *args, **kwargs: barrier.set() @@ -71,14 +67,13 @@ class TestMessagePump(test.MockTestCase): 'polling_interval': POLLING_INTERVAL, }) - t = threading.Thread(target=p.start) - t.daemon = True + t = threading_utils.daemon_thread(p.start) t.start() p.wait() resp = pr.Response(pr.RUNNING) p.publish(resp, TEST_TOPIC) - barrier.wait(BARRIER_WAIT_TIMEOUT) + self.assertTrue(barrier.wait(test_utils.WAIT_TIMEOUT)) self.assertTrue(barrier.is_set()) p.stop() t.join() @@ -111,8 +106,7 @@ class TestMessagePump(test.MockTestCase): 'polling_interval': 
POLLING_INTERVAL, }) - t = threading.Thread(target=p.start) - t.daemon = True + t = threading_utils.daemon_thread(p.start) t.start() p.wait() @@ -125,9 +119,9 @@ class TestMessagePump(test.MockTestCase): else: p.publish(pr.Request(test_utils.DummyTask("dummy_%s" % i), uuidutils.generate_uuid(), - pr.EXECUTE, [], None, None), TEST_TOPIC) + pr.EXECUTE, [], None), TEST_TOPIC) - barrier.wait(BARRIER_WAIT_TIMEOUT) + self.assertTrue(barrier.wait(test_utils.WAIT_TIMEOUT)) self.assertEqual(0, barrier.needed) p.stop() t.join() diff --git a/taskflow/tests/unit/worker_based/test_pipeline.py b/taskflow/tests/unit/worker_based/test_pipeline.py index 8809785e..8d4de7f5 100644 --- a/taskflow/tests/unit/worker_based/test_pipeline.py +++ b/taskflow/tests/unit/worker_based/test_pipeline.py @@ -14,17 +14,17 @@ # License for the specific language governing permissions and limitations # under the License. -import threading - from concurrent import futures +from oslo_utils import uuidutils +from taskflow.engines.action_engine import executor as base_executor from taskflow.engines.worker_based import endpoint from taskflow.engines.worker_based import executor as worker_executor from taskflow.engines.worker_based import server as worker_server -from taskflow.openstack.common import uuidutils from taskflow import test from taskflow.tests import utils as test_utils -from taskflow.utils import misc +from taskflow.types import failure +from taskflow.utils import threading_utils TEST_EXCHANGE, TEST_TOPIC = ('test-exchange', 'test-topic') @@ -32,7 +32,7 @@ WAIT_TIMEOUT = 1.0 POLLING_INTERVAL = 0.01 -class TestPipeline(test.MockTestCase): +class TestPipeline(test.TestCase): def _fetch_server(self, task_classes): endpoints = [] for cls in task_classes: @@ -44,8 +44,7 @@ class TestPipeline(test.MockTestCase): transport_options={ 'polling_interval': POLLING_INTERVAL, }) - server_thread = threading.Thread(target=server.start) - server_thread.daemon = True + server_thread = 
threading_utils.daemon_thread(server.start) return (server, server_thread) def _fetch_executor(self): @@ -75,13 +74,14 @@ class TestPipeline(test.MockTestCase): self.assertEqual(0, executor.wait_for_workers(timeout=WAIT_TIMEOUT)) t = test_utils.TaskOneReturn() - f = executor.execute_task(t, uuidutils.generate_uuid(), {}) + progress_callback = lambda *args, **kwargs: None + f = executor.execute_task(t, uuidutils.generate_uuid(), {}, + progress_callback=progress_callback) executor.wait_for_any([f]) - t2, _action, result = f.result() - + event, result = f.result() self.assertEqual(1, result) - self.assertEqual(t, t2) + self.assertEqual(base_executor.EXECUTED, event) def test_execution_failure_pipeline(self): task_classes = [ @@ -90,9 +90,12 @@ class TestPipeline(test.MockTestCase): executor, server = self._start_components(task_classes) t = test_utils.TaskWithFailure() - f = executor.execute_task(t, uuidutils.generate_uuid(), {}) + progress_callback = lambda *args, **kwargs: None + f = executor.execute_task(t, uuidutils.generate_uuid(), {}, + progress_callback=progress_callback) executor.wait_for_any([f]) - _t2, _action, result = f.result() - self.assertIsInstance(result, misc.Failure) + action, result = f.result() + self.assertIsInstance(result, failure.Failure) self.assertEqual(RuntimeError, result.check(RuntimeError)) + self.assertEqual(base_executor.EXECUTED, action) diff --git a/taskflow/tests/unit/worker_based/test_protocol.py b/taskflow/tests/unit/worker_based/test_protocol.py index 7d51da31..5436df3c 100644 --- a/taskflow/tests/unit/worker_based/test_protocol.py +++ b/taskflow/tests/unit/worker_based/test_protocol.py @@ -15,14 +15,15 @@ # under the License. 
from concurrent import futures -import mock +from oslo_utils import uuidutils +from taskflow.engines.action_engine import executor from taskflow.engines.worker_based import protocol as pr from taskflow import exceptions as excp -from taskflow.openstack.common import uuidutils from taskflow import test from taskflow.tests import utils -from taskflow.utils import misc +from taskflow.types import failure +from taskflow.types import timing class TestProtocolValidation(test.TestCase): @@ -51,7 +52,7 @@ class TestProtocolValidation(test.TestCase): def test_request(self): msg = pr.Request(utils.DummyTask("hi"), uuidutils.generate_uuid(), - pr.EXECUTE, {}, None, 1.0) + pr.EXECUTE, {}, 1.0) pr.Request.validate(msg.to_dict()) def test_request_invalid(self): @@ -64,13 +65,14 @@ class TestProtocolValidation(test.TestCase): def test_request_invalid_action(self): msg = pr.Request(utils.DummyTask("hi"), uuidutils.generate_uuid(), - pr.EXECUTE, {}, None, 1.0) + pr.EXECUTE, {}, 1.0) msg = msg.to_dict() msg['action'] = 'NOTHING' self.assertRaises(excp.InvalidFormat, pr.Request.validate, msg) def test_response_progress(self): - msg = pr.Response(pr.PROGRESS, progress=0.5, event_data={}) + msg = pr.Response(pr.EVENT, details={'progress': 0.5}, + event_type='blah') pr.Response.validate(msg.to_dict()) def test_response_completion(self): @@ -78,7 +80,9 @@ class TestProtocolValidation(test.TestCase): pr.Response.validate(msg.to_dict()) def test_response_mixed_invalid(self): - msg = pr.Response(pr.PROGRESS, progress=0.5, event_data={}, result=1) + msg = pr.Response(pr.EVENT, + details={'progress': 0.5}, + event_type='blah', result=1) self.assertRaises(excp.InvalidFormat, pr.Response.validate, msg) def test_response_bad_state(self): @@ -90,6 +94,8 @@ class TestProtocol(test.TestCase): def setUp(self): super(TestProtocol, self).setUp() + timing.StopWatch.set_now_override() + self.addCleanup(timing.StopWatch.clear_overrides) self.task = utils.DummyTask() self.task_uuid = 'task-uuid' 
self.task_action = 'execute' @@ -130,7 +136,7 @@ class TestProtocol(test.TestCase): def test_creation(self): request = self.request() self.assertEqual(request.uuid, self.task_uuid) - self.assertEqual(request.task_cls, self.task.name) + self.assertEqual(request.task, self.task) self.assertIsInstance(request.result, futures.Future) self.assertFalse(request.result.done()) @@ -146,50 +152,37 @@ class TestProtocol(test.TestCase): self.request_to_dict(result=('success', None))) def test_to_dict_with_result_failure(self): - failure = misc.Failure.from_exception(RuntimeError('Woot!')) - expected = self.request_to_dict(result=('failure', failure.to_dict())) - self.assertEqual(self.request(result=failure).to_dict(), expected) + a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) + expected = self.request_to_dict(result=('failure', + a_failure.to_dict())) + self.assertEqual(self.request(result=a_failure).to_dict(), expected) def test_to_dict_with_failures(self): - failure = misc.Failure.from_exception(RuntimeError('Woot!')) - request = self.request(failures={self.task.name: failure}) + a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) + request = self.request(failures={self.task.name: a_failure}) expected = self.request_to_dict( - failures={self.task.name: failure.to_dict()}) + failures={self.task.name: a_failure.to_dict()}) self.assertEqual(request.to_dict(), expected) - @mock.patch('taskflow.engines.worker_based.protocol.misc.wallclock') - def test_pending_not_expired(self, mocked_wallclock): - mocked_wallclock.side_effect = [0, self.timeout - 1] - self.assertFalse(self.request().expired) + def test_pending_not_expired(self): + req = self.request() + timing.StopWatch.set_offset_override(self.timeout - 1) + self.assertFalse(req.expired) - @mock.patch('taskflow.engines.worker_based.protocol.misc.wallclock') - def test_pending_expired(self, mocked_wallclock): - mocked_wallclock.side_effect = [0, self.timeout + 2] - 
self.assertTrue(self.request().expired) + def test_pending_expired(self): + req = self.request() + timing.StopWatch.set_offset_override(self.timeout + 1) + self.assertTrue(req.expired) - @mock.patch('taskflow.engines.worker_based.protocol.misc.wallclock') - def test_running_not_expired(self, mocked_wallclock): - mocked_wallclock.side_effect = [0, self.timeout + 2] + def test_running_not_expired(self): request = self.request() request.transition(pr.PENDING) request.transition(pr.RUNNING) + timing.StopWatch.set_offset_override(self.timeout + 1) self.assertFalse(request.expired) def test_set_result(self): request = self.request() request.set_result(111) result = request.result.result() - self.assertEqual(result, (self.task, 'executed', 111)) - - def test_on_progress(self): - progress_callback = mock.MagicMock(name='progress_callback') - request = self.request(task=self.task, - progress_callback=progress_callback) - request.on_progress('event_data', 0.0) - request.on_progress('event_data', 1.0) - - expected_calls = [ - mock.call(self.task, 'event_data', 0.0), - mock.call(self.task, 'event_data', 1.0) - ] - self.assertEqual(progress_callback.mock_calls, expected_calls) + self.assertEqual(result, (executor.EXECUTED, 111)) diff --git a/taskflow/tests/unit/worker_based/test_proxy.py b/taskflow/tests/unit/worker_based/test_proxy.py index e2dc02e8..7ec91780 100644 --- a/taskflow/tests/unit/worker_based/test_proxy.py +++ b/taskflow/tests/unit/worker_based/test_proxy.py @@ -15,12 +15,11 @@ # under the License. 
import socket -import threading - -import mock from taskflow.engines.worker_based import proxy from taskflow import test +from taskflow.test import mock +from taskflow.utils import threading_utils class TestProxy(test.MockTestCase): @@ -29,37 +28,37 @@ class TestProxy(test.MockTestCase): super(TestProxy, self).setUp() self.topic = 'test-topic' self.broker_url = 'test-url' - self.exchange_name = 'test-exchange' + self.exchange = 'test-exchange' self.timeout = 5 self.de_period = proxy.DRAIN_EVENTS_PERIOD # patch classes - self.conn_mock, self.conn_inst_mock = self._patch_class( + self.conn_mock, self.conn_inst_mock = self.patchClass( proxy.kombu, 'Connection') - self.exchange_mock, self.exchange_inst_mock = self._patch_class( + self.exchange_mock, self.exchange_inst_mock = self.patchClass( proxy.kombu, 'Exchange') - self.queue_mock, self.queue_inst_mock = self._patch_class( + self.queue_mock, self.queue_inst_mock = self.patchClass( proxy.kombu, 'Queue') - self.producer_mock, self.producer_inst_mock = self._patch_class( + self.producer_mock, self.producer_inst_mock = self.patchClass( proxy.kombu, 'Producer') # connection mocking + def _ensure(obj, func, *args, **kwargs): + return func self.conn_inst_mock.drain_events.side_effect = [ socket.timeout, socket.timeout, KeyboardInterrupt] + self.conn_inst_mock.ensure = mock.MagicMock(side_effect=_ensure) # connections mocking - self.connections_mock = self._patch( + self.connections_mock = self.patch( "taskflow.engines.worker_based.proxy.kombu.connections", attach_as='connections') self.connections_mock.__getitem__().acquire().__enter__.return_value =\ self.conn_inst_mock # producers mocking - self.producers_mock = self._patch( - "taskflow.engines.worker_based.proxy.kombu.producers", - attach_as='producers') - self.producers_mock.__getitem__().acquire().__enter__.return_value =\ - self.producer_inst_mock + self.conn_inst_mock.Producer.return_value.__enter__ = mock.MagicMock() + 
self.conn_inst_mock.Producer.return_value.__exit__ = mock.MagicMock() # consumer mocking self.conn_inst_mock.Consumer.return_value.__enter__ = mock.MagicMock() @@ -70,10 +69,10 @@ class TestProxy(test.MockTestCase): self.master_mock.attach_mock(self.on_wait_mock, 'on_wait') # reset master mock - self._reset_master_mock() + self.resetMasterMock() def _queue_name(self, topic): - return "%s_%s" % (self.exchange_name, topic) + return "%s_%s" % (self.exchange, topic) def proxy_start_calls(self, calls, exc_type=mock.ANY): return [ @@ -86,20 +85,47 @@ class TestProxy(test.MockTestCase): mock.call.connection.Consumer(queues=self.queue_inst_mock, callbacks=[mock.ANY]), mock.call.connection.Consumer().__enter__(), + mock.call.connection.ensure(mock.ANY, mock.ANY, + interval_start=mock.ANY, + interval_max=mock.ANY, + max_retries=mock.ANY, + interval_step=mock.ANY, + errback=mock.ANY), ] + calls + [ mock.call.connection.Consumer().__exit__(exc_type, mock.ANY, mock.ANY) ] + def proxy_publish_calls(self, calls, routing_key, exc_type=mock.ANY): + return [ + mock.call.connection.Producer(), + mock.call.connection.Producer().__enter__(), + mock.call.connection.ensure(mock.ANY, mock.ANY, + interval_start=mock.ANY, + interval_max=mock.ANY, + max_retries=mock.ANY, + interval_step=mock.ANY, + errback=mock.ANY), + mock.call.Queue(name=self._queue_name(routing_key), + routing_key=routing_key, + exchange=self.exchange_inst_mock, + durable=False, + auto_delete=True, + channel=None), + ] + calls + [ + mock.call.connection.Producer().__exit__(exc_type, mock.ANY, + mock.ANY) + ] + def proxy(self, reset_master_mock=False, **kwargs): proxy_kwargs = dict(topic=self.topic, - exchange_name=self.exchange_name, + exchange=self.exchange, url=self.broker_url, type_handlers={}) proxy_kwargs.update(kwargs) p = proxy.Proxy(**proxy_kwargs) if reset_master_mock: - self._reset_master_mock() + self.resetMasterMock() return p def test_creation(self): @@ -108,7 +134,7 @@ class TestProxy(test.MockTestCase): 
master_mock_calls = [ mock.call.Connection(self.broker_url, transport=None, transport_options=None), - mock.call.Exchange(name=self.exchange_name, + mock.call.Exchange(name=self.exchange, durable=False, auto_delete=True) ] @@ -121,7 +147,7 @@ class TestProxy(test.MockTestCase): master_mock_calls = [ mock.call.Connection(self.broker_url, transport='memory', transport_options=transport_opts), - mock.call.Exchange(name=self.exchange_name, + mock.call.Exchange(name=self.exchange, durable=False, auto_delete=True) ] @@ -133,25 +159,20 @@ class TestProxy(test.MockTestCase): msg_mock.to_dict.return_value = msg_data routing_key = 'routing-key' task_uuid = 'task-uuid' - kwargs = dict(a='a', b='b') - self.proxy(reset_master_mock=True).publish( - msg_mock, routing_key, correlation_id=task_uuid, **kwargs) + p = self.proxy(reset_master_mock=True) + p.publish(msg_mock, routing_key, correlation_id=task_uuid) - master_mock_calls = [ - mock.call.Queue(name=self._queue_name(routing_key), - exchange=self.exchange_inst_mock, - routing_key=routing_key, - durable=False, - auto_delete=True), - mock.call.producer.publish(body=msg_data, - routing_key=routing_key, - exchange=self.exchange_inst_mock, - correlation_id=task_uuid, - declare=[self.queue_inst_mock], - type=msg_mock.TYPE, - **kwargs) - ] + mock_producer = mock.call.connection.Producer() + master_mock_calls = self.proxy_publish_calls([ + mock_producer.__enter__().publish(body=msg_data, + routing_key=routing_key, + exchange=self.exchange_inst_mock, + correlation_id=task_uuid, + declare=[self.queue_inst_mock], + type=msg_mock.TYPE, + reply_to=None) + ], routing_key) self.master_mock.assert_has_calls(master_mock_calls) def test_start(self): @@ -210,8 +231,7 @@ class TestProxy(test.MockTestCase): self.assertFalse(pr.is_running) # start proxy in separate thread - t = threading.Thread(target=pr.start) - t.daemon = True + t = threading_utils.daemon_thread(pr.start) t.start() # make sure proxy is started diff --git 
a/taskflow/tests/unit/worker_based/test_server.py b/taskflow/tests/unit/worker_based/test_server.py index 2a64c960..fea5d1cc 100644 --- a/taskflow/tests/unit/worker_based/test_server.py +++ b/taskflow/tests/unit/worker_based/test_server.py @@ -14,15 +14,16 @@ # License for the specific language governing permissions and limitations # under the License. -import mock import six from taskflow.engines.worker_based import endpoint as ep from taskflow.engines.worker_based import protocol as pr from taskflow.engines.worker_based import server +from taskflow import task as task_atom from taskflow import test +from taskflow.test import mock from taskflow.tests import utils -from taskflow.utils import misc +from taskflow.types import failure class TestServer(test.MockTestCase): @@ -42,9 +43,9 @@ class TestServer(test.MockTestCase): ep.Endpoint(task_cls=utils.ProgressingTask)] # patch classes - self.proxy_mock, self.proxy_inst_mock = self._patch_class( + self.proxy_mock, self.proxy_inst_mock = self.patchClass( server.proxy, 'Proxy') - self.response_mock, self.response_inst_mock = self._patch_class( + self.response_mock, self.response_inst_mock = self.patchClass( server.pr, 'Response') # other mocking @@ -66,7 +67,7 @@ class TestServer(test.MockTestCase): server_kwargs.update(kwargs) s = server.Server(**server_kwargs) if reset_master_mock: - self._reset_master_mock() + self.resetMasterMock() return s def make_request(self, **kwargs): @@ -85,7 +86,9 @@ class TestServer(test.MockTestCase): # check calls master_mock_calls = [ mock.call.Proxy(self.server_topic, self.server_exchange, - mock.ANY, url=self.broker_url, on_wait=mock.ANY) + type_handlers=mock.ANY, url=self.broker_url, + transport=mock.ANY, transport_options=mock.ANY, + retry_options=mock.ANY) ] self.master_mock.assert_has_calls(master_mock_calls) self.assertEqual(len(s._endpoints), 3) @@ -96,70 +99,78 @@ class TestServer(test.MockTestCase): # check calls master_mock_calls = [ mock.call.Proxy(self.server_topic, 
self.server_exchange, - mock.ANY, url=self.broker_url, on_wait=mock.ANY) + type_handlers=mock.ANY, url=self.broker_url, + transport=mock.ANY, transport_options=mock.ANY, + retry_options=mock.ANY) ] self.master_mock.assert_has_calls(master_mock_calls) self.assertEqual(len(s._endpoints), len(self.endpoints)) def test_parse_request(self): request = self.make_request() - task_cls, action, task_args = server.Server._parse_request(**request) - - self.assertEqual((task_cls, action, task_args), - (self.task.name, self.task_action, - dict(task_name=self.task.name, - arguments=self.task_args))) + bundle = server.Server._parse_request(**request) + task_cls, task_name, action, task_args = bundle + self.assertEqual((task_cls, task_name, action, task_args), + (self.task.name, self.task.name, self.task_action, + dict(arguments=self.task_args))) def test_parse_request_with_success_result(self): request = self.make_request(action='revert', result=1) - task_cls, action, task_args = server.Server._parse_request(**request) - - self.assertEqual((task_cls, action, task_args), - (self.task.name, 'revert', - dict(task_name=self.task.name, - arguments=self.task_args, + bundle = server.Server._parse_request(**request) + task_cls, task_name, action, task_args = bundle + self.assertEqual((task_cls, task_name, action, task_args), + (self.task.name, self.task.name, 'revert', + dict(arguments=self.task_args, result=1))) def test_parse_request_with_failure_result(self): - failure = misc.Failure.from_exception(Exception('test')) - request = self.make_request(action='revert', result=failure) - task_cls, action, task_args = server.Server._parse_request(**request) - - self.assertEqual((task_cls, action, task_args), - (self.task.name, 'revert', - dict(task_name=self.task.name, - arguments=self.task_args, - result=utils.FailureMatcher(failure)))) + a_failure = failure.Failure.from_exception(Exception('test')) + request = self.make_request(action='revert', result=a_failure) + bundle = 
server.Server._parse_request(**request) + task_cls, task_name, action, task_args = bundle + self.assertEqual((task_cls, task_name, action, task_args), + (self.task.name, self.task.name, 'revert', + dict(arguments=self.task_args, + result=utils.FailureMatcher(a_failure)))) def test_parse_request_with_failures(self): - failures = {'0': misc.Failure.from_exception(Exception('test1')), - '1': misc.Failure.from_exception(Exception('test2'))} + failures = {'0': failure.Failure.from_exception(Exception('test1')), + '1': failure.Failure.from_exception(Exception('test2'))} request = self.make_request(action='revert', failures=failures) - task_cls, action, task_args = server.Server._parse_request(**request) - + bundle = server.Server._parse_request(**request) + task_cls, task_name, action, task_args = bundle self.assertEqual( - (task_cls, action, task_args), - (self.task.name, 'revert', - dict(task_name=self.task.name, - arguments=self.task_args, + (task_cls, task_name, action, task_args), + (self.task.name, self.task.name, 'revert', + dict(arguments=self.task_args, failures=dict((i, utils.FailureMatcher(f)) for i, f in six.iteritems(failures))))) - @mock.patch("taskflow.engines.worker_based.server.LOG.exception") + @mock.patch("taskflow.engines.worker_based.server.LOG.critical") def test_reply_publish_failure(self, mocked_exception): self.proxy_inst_mock.publish.side_effect = RuntimeError('Woot!') # create server and process request s = self.server(reset_master_mock=True) - s._reply(self.reply_to, self.task_uuid) + s._reply(True, self.reply_to, self.task_uuid) - self.assertEqual(self.master_mock.mock_calls, [ + self.master_mock.assert_has_calls([ mock.call.Response(pr.FAILURE), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid) ]) self.assertTrue(mocked_exception.called) + def test_on_run_reply_failure(self): + request = self.make_request(task=utils.ProgressingTask(), arguments={}) + self.proxy_inst_mock.publish.side_effect = 
RuntimeError('Woot!') + + # create server and process request + s = self.server(reset_master_mock=True) + s._process_request(request, self.message_mock) + + self.assertEqual(1, self.proxy_inst_mock.publish.call_count) + def test_on_update_progress(self): request = self.make_request(task=utils.ProgressingTask(), arguments={}) @@ -172,17 +183,19 @@ class TestServer(test.MockTestCase): mock.call.Response(pr.RUNNING), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid), - mock.call.Response(pr.PROGRESS, progress=0.0, event_data={}), + mock.call.Response(pr.EVENT, details={'progress': 0.0}, + event_type=task_atom.EVENT_UPDATE_PROGRESS), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid), - mock.call.Response(pr.PROGRESS, progress=1.0, event_data={}), + mock.call.Response(pr.EVENT, details={'progress': 1.0}, + event_type=task_atom.EVENT_UPDATE_PROGRESS), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid), mock.call.Response(pr.SUCCESS, result=5), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid) ] - self.assertEqual(self.master_mock.mock_calls, master_mock_calls) + self.master_mock.assert_has_calls(master_mock_calls) def test_process_request(self): # create server and process request @@ -198,28 +211,26 @@ class TestServer(test.MockTestCase): mock.call.proxy.publish(self.response_inst_mock, self.reply_to, correlation_id=self.task_uuid) ] - self.assertEqual(self.master_mock.mock_calls, master_mock_calls) + self.master_mock.assert_has_calls(master_mock_calls) - @mock.patch("taskflow.engines.worker_based.server.LOG.exception") + @mock.patch("taskflow.engines.worker_based.server.LOG.warn") def test_process_request_parse_message_failure(self, mocked_exception): self.message_mock.properties = {} request = self.make_request() s = self.server(reset_master_mock=True) s._process_request(request, 
self.message_mock) - - self.assertEqual(self.master_mock.mock_calls, []) self.assertTrue(mocked_exception.called) - @mock.patch.object(misc.Failure, 'from_dict') - @mock.patch.object(misc.Failure, 'to_dict') + @mock.patch.object(failure.Failure, 'from_dict') + @mock.patch.object(failure.Failure, 'to_dict') def test_process_request_parse_request_failure(self, to_mock, from_mock): failure_dict = { 'failure': 'failure', } - failure = misc.Failure.from_exception(RuntimeError('Woot!')) + a_failure = failure.Failure.from_exception(RuntimeError('Woot!')) to_mock.return_value = failure_dict from_mock.side_effect = ValueError('Woot!') - request = self.make_request(result=failure) + request = self.make_request(result=a_failure) # create server and process request s = self.server(reset_master_mock=True) @@ -232,9 +243,9 @@ class TestServer(test.MockTestCase): self.reply_to, correlation_id=self.task_uuid) ] - self.assertEqual(master_mock_calls, self.master_mock.mock_calls) + self.master_mock.assert_has_calls(master_mock_calls) - @mock.patch.object(misc.Failure, 'to_dict') + @mock.patch.object(failure.Failure, 'to_dict') def test_process_request_endpoint_not_found(self, to_mock): failure_dict = { 'failure': 'failure', @@ -253,9 +264,9 @@ class TestServer(test.MockTestCase): self.reply_to, correlation_id=self.task_uuid) ] - self.assertEqual(self.master_mock.mock_calls, master_mock_calls) + self.master_mock.assert_has_calls(master_mock_calls) - @mock.patch.object(misc.Failure, 'to_dict') + @mock.patch.object(failure.Failure, 'to_dict') def test_process_request_execution_failure(self, to_mock): failure_dict = { 'failure': 'failure', @@ -270,17 +281,14 @@ class TestServer(test.MockTestCase): # check calls master_mock_calls = [ - mock.call.Response(pr.RUNNING), - mock.call.proxy.publish(self.response_inst_mock, self.reply_to, - correlation_id=self.task_uuid), mock.call.Response(pr.FAILURE, result=failure_dict), mock.call.proxy.publish(self.response_inst_mock, self.reply_to, 
correlation_id=self.task_uuid) ] - self.assertEqual(self.master_mock.mock_calls, master_mock_calls) + self.master_mock.assert_has_calls(master_mock_calls) - @mock.patch.object(misc.Failure, 'to_dict') + @mock.patch.object(failure.Failure, 'to_dict') def test_process_request_task_failure(self, to_mock): failure_dict = { 'failure': 'failure', @@ -302,7 +310,7 @@ class TestServer(test.MockTestCase): self.reply_to, correlation_id=self.task_uuid) ] - self.assertEqual(self.master_mock.mock_calls, master_mock_calls) + self.master_mock.assert_has_calls(master_mock_calls) def test_start(self): self.server(reset_master_mock=True).start() @@ -311,7 +319,7 @@ class TestServer(test.MockTestCase): master_mock_calls = [ mock.call.proxy.start() ] - self.assertEqual(self.master_mock.mock_calls, master_mock_calls) + self.master_mock.assert_has_calls(master_mock_calls) def test_wait(self): server = self.server(reset_master_mock=True) @@ -323,7 +331,7 @@ class TestServer(test.MockTestCase): mock.call.proxy.start(), mock.call.proxy.wait() ] - self.assertEqual(self.master_mock.mock_calls, master_mock_calls) + self.master_mock.assert_has_calls(master_mock_calls) def test_stop(self): self.server(reset_master_mock=True).stop() @@ -332,4 +340,4 @@ class TestServer(test.MockTestCase): master_mock_calls = [ mock.call.proxy.stop() ] - self.assertEqual(self.master_mock.mock_calls, master_mock_calls) + self.master_mock.assert_has_calls(master_mock_calls) diff --git a/taskflow/tests/unit/worker_based/test_types.py b/taskflow/tests/unit/worker_based/test_types.py new file mode 100644 index 00000000..287283cf --- /dev/null +++ b/taskflow/tests/unit/worker_based/test_types.py @@ -0,0 +1,119 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from oslo_utils import reflection + +from taskflow.engines.worker_based import protocol as pr +from taskflow.engines.worker_based import types as worker_types +from taskflow import test +from taskflow.test import mock +from taskflow.tests import utils +from taskflow.types import timing + + +class TestRequestCache(test.TestCase): + + def setUp(self): + super(TestRequestCache, self).setUp() + self.addCleanup(timing.StopWatch.clear_overrides) + self.task = utils.DummyTask() + self.task_uuid = 'task-uuid' + self.task_action = 'execute' + self.task_args = {'a': 'a'} + self.timeout = 60 + + def request(self, **kwargs): + request_kwargs = dict(task=self.task, + uuid=self.task_uuid, + action=self.task_action, + arguments=self.task_args, + progress_callback=None, + timeout=self.timeout) + request_kwargs.update(kwargs) + return pr.Request(**request_kwargs) + + def test_requests_cache_expiry(self): + # Mock out the calls the underlying objects will soon use to return + # times that we can control more easily...
+ overrides = [ + 0, + 1, + self.timeout + 1, + ] + timing.StopWatch.set_now_override(overrides) + + cache = worker_types.RequestsCache() + cache[self.task_uuid] = self.request() + cache.cleanup() + self.assertEqual(1, len(cache)) + cache.cleanup() + self.assertEqual(0, len(cache)) + + def test_requests_cache_match(self): + cache = worker_types.RequestsCache() + cache[self.task_uuid] = self.request() + cache['task-uuid-2'] = self.request(task=utils.NastyTask(), + uuid='task-uuid-2') + worker = worker_types.TopicWorker("dummy-topic", [utils.DummyTask], + identity="dummy") + matches = cache.get_waiting_requests(worker) + self.assertEqual(1, len(matches)) + self.assertEqual(2, len(cache)) + + +class TestTopicWorker(test.TestCase): + def test_topic_worker(self): + worker = worker_types.TopicWorker("dummy-topic", + [utils.DummyTask], identity="dummy") + self.assertTrue(worker.performs(utils.DummyTask)) + self.assertFalse(worker.performs(utils.NastyTask)) + self.assertEqual('dummy', worker.identity) + self.assertEqual('dummy-topic', worker.topic) + + +class TestProxyFinder(test.TestCase): + def test_single_topic_worker(self): + finder = worker_types.ProxyWorkerFinder('me', mock.MagicMock(), []) + w, emit = finder._add('dummy-topic', [utils.DummyTask]) + self.assertIsNotNone(w) + self.assertTrue(emit) + self.assertEqual(1, finder._total_workers()) + w2 = finder.get_worker_for_task(utils.DummyTask) + self.assertEqual(w.identity, w2.identity) + + def test_multi_same_topic_workers(self): + finder = worker_types.ProxyWorkerFinder('me', mock.MagicMock(), []) + w, emit = finder._add('dummy-topic', [utils.DummyTask]) + self.assertIsNotNone(w) + self.assertTrue(emit) + w2, emit = finder._add('dummy-topic-2', [utils.DummyTask]) + self.assertIsNotNone(w2) + self.assertTrue(emit) + w3 = finder.get_worker_for_task( + reflection.get_class_name(utils.DummyTask)) + self.assertIn(w3.identity, [w.identity, w2.identity]) + + def test_multi_different_topic_workers(self): + finder = 
worker_types.ProxyWorkerFinder('me', mock.MagicMock(), []) + added = [] + added.append(finder._add('dummy-topic', [utils.DummyTask])) + added.append(finder._add('dummy-topic-2', [utils.DummyTask])) + added.append(finder._add('dummy-topic-3', [utils.NastyTask])) + self.assertEqual(3, finder._total_workers()) + w = finder.get_worker_for_task(utils.NastyTask) + self.assertEqual(added[-1][0].identity, w.identity) + w = finder.get_worker_for_task(utils.DummyTask) + self.assertIn(w.identity, [w_a[0].identity for w_a in added[0:2]]) diff --git a/taskflow/tests/unit/worker_based/test_worker.py b/taskflow/tests/unit/worker_based/test_worker.py index c66255f9..cc4578c0 100644 --- a/taskflow/tests/unit/worker_based/test_worker.py +++ b/taskflow/tests/unit/worker_based/test_worker.py @@ -14,13 +14,14 @@ # License for the specific language governing permissions and limitations # under the License. -import mock +from oslo_utils import reflection +import six from taskflow.engines.worker_based import endpoint from taskflow.engines.worker_based import worker from taskflow import test +from taskflow.test import mock from taskflow.tests import utils -from taskflow.utils import reflection class TestWorker(test.MockTestCase): @@ -36,13 +37,13 @@ class TestWorker(test.MockTestCase): self.endpoint_count = 21 # patch classes - self.executor_mock, self.executor_inst_mock = self._patch_class( + self.executor_mock, self.executor_inst_mock = self.patchClass( worker.futures, 'ThreadPoolExecutor', attach_as='executor') - self.server_mock, self.server_inst_mock = self._patch_class( + self.server_mock, self.server_inst_mock = self.patchClass( worker.server, 'Server') # other mocking - self.threads_count_mock = self._patch( + self.threads_count_mock = self.patch( 'taskflow.engines.worker_based.worker.tu.get_optimal_thread_count') self.threads_count_mock.return_value = self.threads_count @@ -54,7 +55,7 @@ class TestWorker(test.MockTestCase): worker_kwargs.update(kwargs) w = 
worker.Worker(**worker_kwargs) if reset_master_mock: - self._reset_master_mock() + self.resetMasterMock() return w def test_creation(self): @@ -63,30 +64,46 @@ master_mock_calls = [ mock.call.executor_class(self.threads_count), mock.call.Server(self.topic, self.exchange, - self.executor_inst_mock, [], url=self.broker_url) + self.executor_inst_mock, [], + url=self.broker_url, + transport_options=mock.ANY, + transport=mock.ANY, + retry_options=mock.ANY) ] self.assertEqual(self.master_mock.mock_calls, master_mock_calls) + def test_banner_writing(self): + buf = six.StringIO() + w = self.worker() + w.run(banner_writer=buf.write) + w.wait() + w.stop() + self.assertGreater(len(buf.getvalue()), 0) + def test_creation_with_custom_threads_count(self): self.worker(threads_count=10) master_mock_calls = [ mock.call.executor_class(10), mock.call.Server(self.topic, self.exchange, - self.executor_inst_mock, [], url=self.broker_url) + self.executor_inst_mock, [], + url=self.broker_url, + transport_options=mock.ANY, + transport=mock.ANY, + retry_options=mock.ANY) ] self.assertEqual(self.master_mock.mock_calls, master_mock_calls) - def test_creation_with_negative_threads_count(self): - self.assertRaises(ValueError, self.worker, threads_count=-10) - def test_creation_with_custom_executor(self): executor_mock = mock.MagicMock(name='executor') self.worker(executor=executor_mock) master_mock_calls = [ mock.call.Server(self.topic, self.exchange, executor_mock, [], - url=self.broker_url) + url=self.broker_url, + transport_options=mock.ANY, + transport=mock.ANY, + retry_options=mock.ANY) ] self.assertEqual(self.master_mock.mock_calls, master_mock_calls) diff --git a/taskflow/tests/utils.py b/taskflow/tests/utils.py index d7c85b95..5abdd100 100644 --- a/taskflow/tests/utils.py +++ b/taskflow/tests/utils.py @@ -16,25 +16,28 @@ import contextlib import string -import threading import six from taskflow import exceptions +from taskflow.listeners import base 
as listener_base from taskflow.persistence.backends import impl_memory from taskflow import retry from taskflow import task +from taskflow.types import failure from taskflow.utils import kazoo_utils -from taskflow.utils import misc +from taskflow.utils import threading_utils ARGS_KEY = '__args__' KWARGS_KEY = '__kwargs__' ORDER_KEY = '__order__' - ZK_TEST_CONFIG = { 'timeout': 1.0, 'hosts': ["localhost:2181"], } +# If latches/events take longer than this to become empty/set, something is +# usually wrong and should be debugged instead of deadlocking... +WAIT_TIMEOUT = 300 @contextlib.contextmanager @@ -48,7 +51,7 @@ def wrap_all_failures(): try: yield except Exception: - raise exceptions.WrappedFailure([misc.Failure()]) + raise exceptions.WrappedFailure([failure.Failure()]) def zookeeper_available(min_version, timeout=3): @@ -70,6 +73,16 @@ def zookeeper_available(min_version, timeout=3): kazoo_utils.finalize_client(client) +class NoopRetry(retry.AlwaysRevert): + pass + + +class NoopTask(task.Task): + + def execute(self): + pass + + class DummyTask(task.Task): def execute(self, context, *args, **kwargs): @@ -104,43 +117,71 @@ class ProvidesRequiresTask(task.Task): return dict((k, k) for k in self.provides) -def task_callback(state, values, details): - name = details.get('task_name', None) - if not name: - name = details.get('retry_name', '') - values.append('%s %s' % (name, state)) +class CaptureListener(listener_base.Listener): + _LOOKUP_NAME_POSTFIX = { + 'task_name': '.t', + 'retry_name': '.r', + 'flow_name': '.f', + } + + def __init__(self, engine, + task_listen_for=listener_base.DEFAULT_LISTEN_FOR, + values=None, + capture_flow=True, capture_task=True, capture_retry=True, + skip_tasks=None, skip_retries=None, skip_flows=None): + super(CaptureListener, self).__init__(engine, + task_listen_for=task_listen_for) + self._capture_flow = capture_flow + self._capture_task = capture_task + self._capture_retry = capture_retry + self._skip_tasks = skip_tasks or [] + 
self._skip_flows = skip_flows or [] + self._skip_retries = skip_retries or [] + if values is None: + self.values = [] + else: + self.values = values + + def _capture(self, state, details, name_key): + name = details[name_key] + try: + name += self._LOOKUP_NAME_POSTFIX[name_key] + except KeyError: + pass + if 'result' in details: + name += ' %s(%s)' % (state, details['result']) + else: + name += " %s" % state + return name + + def _task_receiver(self, state, details): + if self._capture_task: + if details['task_name'] not in self._skip_tasks: + self.values.append(self._capture(state, details, 'task_name')) + + def _retry_receiver(self, state, details): + if self._capture_retry: + if details['retry_name'] not in self._skip_retries: + self.values.append(self._capture(state, details, 'retry_name')) + + def _flow_receiver(self, state, details): + if self._capture_flow: + if details['flow_name'] not in self._skip_flows: + self.values.append(self._capture(state, details, 'flow_name')) -def flow_callback(state, values, details): - values.append('flow %s' % state) - - -def register_notifiers(engine, values): - engine.notifier.register('*', flow_callback, kwargs={'values': values}) - engine.task_notifier.register('*', task_callback, - kwargs={'values': values}) - - -class SaveOrderTask(task.Task): - - def __init__(self, name=None, *args, **kwargs): - super(SaveOrderTask, self).__init__(name=name, *args, **kwargs) - self.values = EngineTestBase.values - +class ProgressingTask(task.Task): def execute(self, **kwargs): self.update_progress(0.0) - self.values.append(self.name) self.update_progress(1.0) return 5 def revert(self, **kwargs): self.update_progress(0) - self.values.append(self.name + ' reverted(%s)' - % kwargs.get('result')) self.update_progress(1.0) -class FailingTask(SaveOrderTask): +class FailingTask(ProgressingTask): def execute(self, **kwargs): self.update_progress(0) self.update_progress(0.99) @@ -161,7 +202,7 @@ class ProgressingTask(task.Task): return 5 -class 
FailingTaskWithOneArg(SaveOrderTask): +class FailingTaskWithOneArg(ProgressingTask): def execute(self, x, **kwargs): raise RuntimeError('Woot with %s' % x) @@ -270,11 +311,8 @@ class NeverRunningTask(task.Task): class EngineTestBase(object): - values = None - def setUp(self): super(EngineTestBase, self).setUp() - EngineTestBase.values = [] self.backend = impl_memory.MemoryBackend(conf={}) def tearDown(self): @@ -285,7 +323,9 @@ class EngineTestBase(object): super(EngineTestBase, self).tearDown() def _make_engine(self, flow, **kwargs): - raise NotImplementedError() + raise exceptions.NotImplementedError("_make_engine() must be" + " overridden if an engine is" + " desired") class FailureMatcher(object): @@ -310,7 +350,7 @@ class OneReturnRetry(retry.AlwaysRevert): pass -class ConditionalTask(SaveOrderTask): +class ConditionalTask(ProgressingTask): def execute(self, x, y): super(ConditionalTask, self).execute() @@ -318,7 +358,7 @@ class ConditionalTask(SaveOrderTask): raise RuntimeError('Woot!') -class WaitForOneFromTask(SaveOrderTask): +class WaitForOneFromTask(ProgressingTask): def __init__(self, name, wait_for, wait_states, **kwargs): super(WaitForOneFromTask, self).__init__(name, **kwargs) @@ -330,16 +370,14 @@ class WaitForOneFromTask(SaveOrderTask): self.wait_states = [wait_states] else: self.wait_states = wait_states - self.event = threading.Event() + self.event = threading_utils.Event() def execute(self): - # NOTE(imelnikov): if test was not complete within - # 5 minutes, something is terribly wrong - self.event.wait(300) - if not self.event.is_set(): - raise RuntimeError('Timeout occurred while waiting ' + if not self.event.wait(WAIT_TIMEOUT): + raise RuntimeError('%s second timeout occurred while waiting ' 'for %s to change state to %s' - % (self.wait_for, self.wait_states)) + % (WAIT_TIMEOUT, self.wait_for, + self.wait_states)) return super(WaitForOneFromTask, self).execute() def callback(self, state, details): diff --git a/taskflow/types/cache.py 
b/taskflow/types/cache.py index 72214fed..802bc610 100644 --- a/taskflow/types/cache.py +++ b/taskflow/types/cache.py @@ -14,10 +14,10 @@ # License for the specific language governing permissions and limitations # under the License. -import six +import threading -from taskflow.utils import lock_utils as lu -from taskflow.utils import reflection +from oslo_utils import reflection +import six class ExpiringCache(object): @@ -30,41 +30,53 @@ class ExpiringCache(object): def __init__(self): self._data = {} - self._lock = lu.ReaderWriterLock() + self._lock = threading.Lock() def __setitem__(self, key, value): """Set a value in the cache.""" - with self._lock.write_lock(): + with self._lock: self._data[key] = value def __len__(self): """Returns how many items are in this cache.""" - with self._lock.read_lock(): - return len(self._data) + return len(self._data) def get(self, key, default=None): """Retrieve a value from the cache (returns default if not found).""" - with self._lock.read_lock(): - return self._data.get(key, default) + return self._data.get(key, default) def __getitem__(self, key): """Retrieve a value from the cache.""" - with self._lock.read_lock(): - return self._data[key] + return self._data[key] def __delitem__(self, key): """Delete a key & value from the cache.""" - with self._lock.write_lock(): + with self._lock: del self._data[key] + def clear(self, on_cleared_callback=None): + """Removes all keys & values from the cache.""" + cleared_items = [] + with self._lock: + if on_cleared_callback is not None: + cleared_items.extend(six.iteritems(self._data)) + self._data.clear() + if on_cleared_callback is not None: + arg_c = len(reflection.get_callable_args(on_cleared_callback)) + for (k, v) in cleared_items: + if arg_c == 2: + on_cleared_callback(k, v) + else: + on_cleared_callback(v) + def cleanup(self, on_expired_callback=None): """Delete out-dated keys & values from the cache.""" - with self._lock.write_lock(): + with self._lock: expired_values = [(k, v) 
for k, v in six.iteritems(self._data) if v.expired] for (k, _v) in expired_values: del self._data[k] - if on_expired_callback: + if on_expired_callback is not None: arg_c = len(reflection.get_callable_args(on_expired_callback)) for (k, v) in expired_values: if arg_c == 2: diff --git a/taskflow/types/failure.py b/taskflow/types/failure.py new file mode 100644 index 00000000..0f45bc35 --- /dev/null +++ b/taskflow/types/failure.py @@ -0,0 +1,347 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import copy +import os +import sys +import traceback + +from oslo_utils import reflection +import six + +from taskflow import exceptions as exc + + +def _copy_exc_info(exc_info): + if exc_info is None: + return None + exc_type, exc_value, tb = exc_info + # NOTE(imelnikov): there is no need to copy the exception type, and + # a shallow copy of the value is fine and we can't copy the traceback since + # it contains reference to the internal stack frames... + return (exc_type, copy.copy(exc_value), tb) + + +def _fill_iter(it, desired_len, filler=None): + """Iterates over a provided iterator up to the desired length. + + If the source iterator does not have enough values then the filler + value is yielded until the desired length is reached. 
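The padding behavior that ``_fill_iter`` implements (yield from the source up to a desired length, then yield the filler) can also be sketched with the standard library's ``itertools``; this is an illustrative equivalent, not the code added by this patch:

```python
import itertools


def fill_iter(it, desired_len, filler=None):
    # Truncate the source iterator to desired_len values, padding
    # with the filler value if the source runs out early.
    return itertools.islice(
        itertools.chain(it, itertools.repeat(filler)), desired_len)
```

``list(fill_iter([1, 2], 4))`` pads the short iterator out to four values with ``None``.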
+ """ + count = 0 + for value in it: + if count >= desired_len: + return + yield value + count += 1 + while count < desired_len: + yield filler + count += 1 + + +def _are_equal_exc_info_tuples(ei1, ei2): + if ei1 == ei2: + return True + if ei1 is None or ei2 is None: + return False # if both are None, we returned True above + + # NOTE(imelnikov): we can't compare exceptions with '==' + # because we want exc_info to be equal to its copy made with + # copy_exc_info above. + if ei1[0] is not ei2[0]: + return False + if not all((type(ei1[1]) == type(ei2[1]), + exc.exception_message(ei1[1]) == exc.exception_message(ei2[1]), + repr(ei1[1]) == repr(ei2[1]))): + return False + if ei1[2] == ei2[2]: + return True + tb1 = traceback.format_tb(ei1[2]) + tb2 = traceback.format_tb(ei2[2]) + return tb1 == tb2 + + +class Failure(object): + """An immutable object that represents failure. + + Failure objects encapsulate exception information so that they can be + re-used later to re-raise, inspect, examine, log, print, serialize, + deserialize... + + One example where they are depended upon is in the WBE engine. When a + remote worker throws an exception, the WBE based engine will receive that + exception and desire to reraise it to the user/caller of the WBE based + engine for appropriate handling (this matches the behavior of non-remote + engines). To accomplish this a failure object (or a + :py:meth:`~.Failure.to_dict` form) would be sent over the WBE channel + and the WBE based engine would deserialize it and use this object's + :meth:`.reraise` method to cause an exception that contains + similar/equivalent information as the original exception to be reraised, + allowing the user (or the WBE engine itself) to then handle the worker + failure/exception as they desire. + + For those who are curious, here are a few reasons why the original + exception itself *may* not be reraised and a wrapped + failure exception object will be raised instead. 
These explanations are *only* + applicable when a failure object is serialized and deserialized (when it is + retained inside the Python process that the exception was created in, the + original exception can be reraised correctly without issue). + + * Traceback objects are not serializable/recreatable, since they contain + references to stack frames at the location where the exception was + raised. When a failure object is serialized and sent across a channel + and recreated it is *not* possible to restore the original traceback and + originating stack frames. + * The original exception *type* can not be guaranteed to be found, workers + can run code that is not accessible/available when the failure is being + deserialized. Even if it was possible to use pickle safely it would not + be possible to find the originating exception or associated code in this + situation. + * The original exception *type* can not be guaranteed to be constructed in + a *correct* manner. At the time of failure object creation the exception + has already been created and the failure object can not assume it has + knowledge (or the ability) to recreate the original type of the captured + exception (this is especially hard if the original exception was created + via a complex process involving some custom exception constructor). + * The original exception *type* can not be guaranteed to be constructed in + a *safe* manner. Importing *foreign* exception types dynamically can be + problematic when not done correctly and in a safe manner; since failure + objects can capture any exception it would be *unsafe* to try to import + those exception types' namespaces and modules on the receiver side + dynamically (this would create similar issues as the ``pickle`` module in + Python has where foreign modules can be imported, causing those modules + to have code run when this happens, and this can cause issues and + side-effects that the receiver would not have intended to have caused). 
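A minimal sketch of the capture/serialize/re-raise pattern those notes describe, using a simplified stand-in (the class and method names here are illustrative, not the actual ``Failure`` API):

```python
import sys
import traceback


class CapturedFailure(object):
    """Simplified stand-in for the Failure type described above."""

    def __init__(self, exc_type_name, exception_str, traceback_str):
        self.exc_type_name = exc_type_name
        self.exception_str = exception_str
        self.traceback_str = traceback_str

    @classmethod
    def capture(cls):
        # Capture the active exception as plain strings, since the
        # traceback (and possibly the type) can not cross a channel.
        exc_type, exc_value, tb = sys.exc_info()
        return cls(exc_type.__name__, str(exc_value),
                   ''.join(traceback.format_tb(tb)))

    def to_dict(self):
        return {'exc_type_name': self.exc_type_name,
                'exception_str': self.exception_str,
                'traceback_str': self.traceback_str}

    @classmethod
    def from_dict(cls, data):
        return cls(**data)

    def reraise(self):
        # The original type may not be importable on the receiver, so a
        # generic wrapper exception carrying its details is raised instead.
        raise RuntimeError('%s: %s' % (self.exc_type_name,
                                       self.exception_str))


try:
    raise ValueError('woot')
except ValueError:
    fail = CapturedFailure.capture()

# The dict form is what would travel over a channel (e.g. the WBE one).
rehydrated = CapturedFailure.from_dict(fail.to_dict())
```

The round-trip keeps the type name, message and traceback text, which is exactly the information the bullets above say *can* survive serialization.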
+ + TODO(harlowja): when/if http://bugs.python.org/issue17911 merges and + becomes available for use we should be able to use that and simplify the + methods and contents of this object. + """ + DICT_VERSION = 1 + + def __init__(self, exc_info=None, **kwargs): + if not kwargs: + if exc_info is None: + exc_info = sys.exc_info() + else: + # This should always be the (type, value, traceback) tuple, + # either from a prior sys.exc_info() call or from some other + # creation... + if len(exc_info) != 3: + raise ValueError("Provided 'exc_info' must contain three" + " elements") + self._exc_info = exc_info + self._exc_type_names = tuple( + reflection.get_all_class_names(exc_info[0], up_to=Exception)) + if not self._exc_type_names: + raise TypeError("Invalid exception type '%s' (%s)" + % (exc_info[0], type(exc_info[0]))) + self._exception_str = exc.exception_message(self._exc_info[1]) + self._traceback_str = ''.join( + traceback.format_tb(self._exc_info[2])) + else: + self._exc_info = exc_info # may be None + self._exception_str = kwargs.pop('exception_str') + self._exc_type_names = tuple(kwargs.pop('exc_type_names', [])) + self._traceback_str = kwargs.pop('traceback_str', None) + if kwargs: + raise TypeError( + 'Failure.__init__ got unexpected keyword argument(s): %s' + % ', '.join(six.iterkeys(kwargs))) + + @classmethod + def from_exception(cls, exception): + """Creates a failure object from a exception instance.""" + return cls((type(exception), exception, None)) + + def _matches(self, other): + if self is other: + return True + return (self._exc_type_names == other._exc_type_names + and self.exception_str == other.exception_str + and self.traceback_str == other.traceback_str) + + def matches(self, other): + """Checks if another object is equivalent to this object.""" + if not isinstance(other, Failure): + return False + if self.exc_info is None or other.exc_info is None: + return self._matches(other) + else: + return self == other + + def __eq__(self, other): + if not 
isinstance(other, Failure): + return NotImplemented + return (self._matches(other) and + _are_equal_exc_info_tuples(self.exc_info, other.exc_info)) + + def __ne__(self, other): + return not (self == other) + + # NOTE(imelnikov): obj.__hash__() should return same values for equal + # objects, so we should redefine __hash__. Failure equality semantics + # is a bit complicated, so for now we just mark Failure objects as + # unhashable. See python docs on object.__hash__ for more info: + # http://docs.python.org/2/reference/datamodel.html#object.__hash__ + __hash__ = None + + @property + def exception(self): + """Exception value, or None if exception value is not present. + + Exception value may be lost during serialization. + """ + if self._exc_info: + return self._exc_info[1] + else: + return None + + @property + def exception_str(self): + """String representation of exception.""" + return self._exception_str + + @property + def exc_info(self): + """Exception info tuple or None.""" + return self._exc_info + + @property + def traceback_str(self): + """Exception traceback as string.""" + return self._traceback_str + + @staticmethod + def reraise_if_any(failures): + """Re-raise exceptions if argument is not empty. + + If the argument is an empty list, this method returns None. If the + argument is a list with a single ``Failure`` object in it, + that failure is reraised. Else, a + :class:`~taskflow.exceptions.WrappedFailure` exception + is raised with a failure list as causes. + """ + failures = list(failures) + if len(failures) == 1: + failures[0].reraise() + elif len(failures) > 1: + raise exc.WrappedFailure(failures) + + def reraise(self): + """Re-raise captured exception.""" + if self._exc_info: + six.reraise(*self._exc_info) + else: + raise exc.WrappedFailure([self]) + + def check(self, *exc_classes): + """Check if any of ``exc_classes`` caused the failure. + + Arguments of this method can be exception types or type + names (strings). 
If captured exception is instance of + exception of given type, the corresponding argument is + returned. Else, None is returned. + """ + for cls in exc_classes: + if isinstance(cls, type): + err = reflection.get_class_name(cls) + else: + err = cls + if err in self._exc_type_names: + return cls + return None + + def __str__(self): + return self.pformat() + + def pformat(self, traceback=False): + """Pretty formats the failure object into a string.""" + buf = six.StringIO() + if not self._exc_type_names: + buf.write('Failure: %s' % (self._exception_str)) + else: + buf.write('Failure: %s: %s' % (self._exc_type_names[0], + self._exception_str)) + if traceback: + if self._traceback_str is not None: + traceback_str = self._traceback_str.rstrip() + else: + traceback_str = None + if traceback_str: + buf.write(os.linesep) + buf.write('Traceback (most recent call last):') + buf.write(os.linesep) + buf.write(traceback_str) + else: + buf.write(os.linesep) + buf.write('Traceback not available.') + return buf.getvalue() + + def __iter__(self): + """Iterate over exception type names.""" + for et in self._exc_type_names: + yield et + + def __getstate__(self): + dct = self.to_dict() + if self._exc_info: + # Avoids 'TypeError: can't pickle traceback objects' + dct['exc_info'] = self._exc_info[0:2] + return dct + + def __setstate__(self, dct): + self._exception_str = dct['exception_str'] + self._traceback_str = dct['traceback_str'] + self._exc_type_names = dct['exc_type_names'] + if 'exc_info' in dct: + # Tracebacks can't be serialized/deserialized, but since we + # provide a traceback string (and more) this should be + # acceptable... + # + # TODO(harlowja): in the future we could do something like + # what the twisted people have done, see for example + # twisted-13.0.0/twisted/python/failure.py#L89 for how they + # created a fake traceback object... 
+ self._exc_info = tuple(_fill_iter(dct['exc_info'], 3)) + else: + self._exc_info = None + + @classmethod + def from_dict(cls, data): + """Converts this from a dictionary to a object.""" + data = dict(data) + version = data.pop('version', None) + if version != cls.DICT_VERSION: + raise ValueError('Invalid dict version of failure object: %r' + % version) + return cls(**data) + + def to_dict(self): + """Converts this object to a dictionary.""" + return { + 'exception_str': self.exception_str, + 'traceback_str': self.traceback_str, + 'exc_type_names': list(self), + 'version': self.DICT_VERSION, + } + + def copy(self): + """Copies this object.""" + return Failure(exc_info=_copy_exc_info(self.exc_info), + exception_str=self.exception_str, + traceback_str=self.traceback_str, + exc_type_names=self._exc_type_names[:]) diff --git a/taskflow/types/fsm.py b/taskflow/types/fsm.py index cbe85b78..6ca22909 100644 --- a/taskflow/types/fsm.py +++ b/taskflow/types/fsm.py @@ -19,10 +19,10 @@ try: except ImportError: from ordereddict import OrderedDict # noqa -import prettytable import six from taskflow import exceptions as excp +from taskflow.types import table class _Jump(object): @@ -33,6 +33,12 @@ class _Jump(object): self.on_exit = on_exit +class FrozenMachine(Exception): + """Exception raised when a frozen machine is modified.""" + def __init__(self): + super(FrozenMachine, self).__init__("Frozen machine can't be modified") + + class NotInitialized(excp.TaskFlowException): """Error raised when an action is attempted on a not inited machine.""" @@ -62,6 +68,7 @@ class FSM(object): self._states = OrderedDict() self._start_state = start_state self._current = None + self.frozen = False @property def start_state(self): @@ -89,12 +96,16 @@ class FSM(object): parameter which is the event that is being processed that caused the state transition. 
""" + if self.frozen: + raise FrozenMachine() if state in self._states: raise excp.Duplicate("State '%s' already defined" % state) if on_enter is not None: - assert six.callable(on_enter), "On enter callback must be callable" + if not six.callable(on_enter): + raise ValueError("On enter callback must be callable") if on_exit is not None: - assert six.callable(on_exit), "On exit callback must be callable" + if not six.callable(on_exit): + raise ValueError("On exit callback must be callable") self._states[state] = { 'terminal': bool(terminal), 'reactions': {}, @@ -123,10 +134,13 @@ class FSM(object): this process typically repeats) until the state machine reaches a terminal state. """ + if self.frozen: + raise FrozenMachine() if state not in self._states: raise excp.NotFound("Can not add a reaction to event '%s' for an" " undefined state '%s'" % (event, state)) - assert six.callable(reaction), "Reaction callback must be callable" + if not six.callable(reaction): + raise ValueError("Reaction callback must be callable") if event not in self._states[state]['reactions']: self._states[state]['reactions'][event] = (reaction, args, kwargs) else: @@ -135,6 +149,8 @@ class FSM(object): def add_transition(self, start, end, event): """Adds an allowed transition from start -> end for the given event.""" + if self.frozen: + raise FrozenMachine() if start not in self._states: raise excp.NotFound("Can not add a transition on event '%s' that" " starts in a undefined state '%s'" % (event, @@ -180,13 +196,33 @@ class FSM(object): if self._states[self._start_state]['terminal']: raise excp.InvalidState("Can not start from a terminal" " state '%s'" % (self._start_state)) - self._current = _Jump(self._start_state, None, None) + # No on enter will be called, since we are priming the state machine + # and have not really transitioned from anything to get here, we will + # though allow 'on_exit' to be called on the event that causes this + # to be moved from... 
+ self._current = _Jump(self._start_state, None, + self._states[self._start_state]['on_exit']) def run(self, event, initialize=True): """Runs the state machine, using reactions only.""" - for transition in self.run_iter(event, initialize=initialize): + for _transition in self.run_iter(event, initialize=initialize): pass + def copy(self): + """Copies the current state machine. + + NOTE(harlowja): the copy will be left in an *uninitialized* state. + """ + c = FSM(self.start_state) + c.frozen = self.frozen + for state, data in six.iteritems(self._states): + copied_data = data.copy() + copied_data['reactions'] = copied_data['reactions'].copy() + c._states[state] = copied_data + for state, data in six.iteritems(self._transitions): + c._transitions[state] = data.copy() + return c + def run_iter(self, event, initialize=True): """Returns a iterator/generator that will run the state machine. @@ -220,8 +256,13 @@ class FSM(object): event = cb(old_state, new_state, event, *args, **kwargs) def __contains__(self, state): + """Returns if this state exists in the machines known states.""" return state in self._states + def freeze(self): + """Freezes & stops addition of states, transitions, reactions...""" + self.frozen = True + @property def states(self): """Returns the state names.""" @@ -247,13 +288,34 @@ class FSM(object): NOTE(harlowja): the sort parameter can be provided to sort the states and transitions by sort order; with it being provided as false the rows will be iterated in addition order instead. 
+ + **Example**:: + + >>> from taskflow.types import fsm + >>> f = fsm.FSM("sits") + >>> f.add_state("sits") + >>> f.add_state("barks") + >>> f.add_state("wags tail") + >>> f.add_transition("sits", "barks", "squirrel!") + >>> f.add_transition("barks", "wags tail", "gets petted") + >>> f.add_transition("wags tail", "sits", "gets petted") + >>> f.add_transition("wags tail", "barks", "squirrel!") + >>> print(f.pformat()) + +-----------+-------------+-----------+----------+---------+ + Start | Event | End | On Enter | On Exit + +-----------+-------------+-----------+----------+---------+ + barks | gets petted | wags tail | | + sits[^] | squirrel! | barks | | + wags tail | gets petted | sits | | + wags tail | squirrel! | barks | | + +-----------+-------------+-----------+----------+---------+ """ def orderedkeys(data): if sort: return sorted(six.iterkeys(data)) return list(six.iterkeys(data)) - tbl = prettytable.PrettyTable( - ["Start", "Event", "End", "On Enter", "On Exit"]) + tbl = table.PleasantTable(["Start", "Event", "End", + "On Enter", "On Exit"]) for state in orderedkeys(self._states): prefix_markings = [] if self.current_state == state: @@ -287,4 +349,4 @@ class FSM(object): tbl.add_row(row) else: tbl.add_row([pretty_state, "", "", "", ""]) - return tbl.get_string(print_empty=True) + return tbl.pformat() diff --git a/taskflow/types/futures.py b/taskflow/types/futures.py new file mode 100644 index 00000000..1d847ddc --- /dev/null +++ b/taskflow/types/futures.py @@ -0,0 +1,389 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import functools +import threading + +from concurrent import futures as _futures +from concurrent.futures import process as _process +from concurrent.futures import thread as _thread +from oslo_utils import importutils +from oslo_utils import reflection + +greenpatcher = importutils.try_import('eventlet.patcher') +greenpool = importutils.try_import('eventlet.greenpool') +greenqueue = importutils.try_import('eventlet.queue') +greenthreading = importutils.try_import('eventlet.green.threading') + +from taskflow.types import timing +from taskflow.utils import eventlet_utils as eu +from taskflow.utils import threading_utils as tu + + +# NOTE(harlowja): Allows for simpler access to this type... +Future = _futures.Future + + +class _Gatherer(object): + def __init__(self, submit_func, + lock_cls=threading.Lock, start_before_submit=False): + self._submit_func = submit_func + self._stats_lock = lock_cls() + self._stats = ExecutorStatistics() + self._start_before_submit = start_before_submit + + @property + def statistics(self): + return self._stats + + def clear(self): + with self._stats_lock: + self._stats = ExecutorStatistics() + + def _capture_stats(self, watch, fut): + watch.stop() + with self._stats_lock: + # Use a new collection and lock so that all mutations are seen as + # atomic and not overlapping and corrupting with other + # mutations (the clone ensures that others reading the current + # values will not see a mutated/corrupted one). 
Since futures may + # be completed by different threads we need to be extra careful to + # gather this data in a way that is thread-safe... + (failures, executed, runtime, cancelled) = (self._stats.failures, + self._stats.executed, + self._stats.runtime, + self._stats.cancelled) + if fut.cancelled(): + cancelled += 1 + else: + executed += 1 + if fut.exception() is not None: + failures += 1 + runtime += watch.elapsed() + self._stats = ExecutorStatistics(failures=failures, + executed=executed, + runtime=runtime, + cancelled=cancelled) + + def submit(self, fn, *args, **kwargs): + watch = timing.StopWatch() + if self._start_before_submit: + watch.start() + fut = self._submit_func(fn, *args, **kwargs) + if not self._start_before_submit: + watch.start() + fut.add_done_callback(functools.partial(self._capture_stats, watch)) + return fut + + +class ThreadPoolExecutor(_thread.ThreadPoolExecutor): + """Executor that uses a thread pool to execute calls asynchronously. + + It gathers statistics about the submissions executed for post-analysis... + + See: https://docs.python.org/dev/library/concurrent.futures.html + """ + def __init__(self, max_workers=None): + if max_workers is None: + max_workers = tu.get_optimal_thread_count() + super(ThreadPoolExecutor, self).__init__(max_workers=max_workers) + if self._max_workers <= 0: + raise ValueError("Max workers must be greater than zero") + self._gatherer = _Gatherer( + # Since our submit will use this gatherer we have to reference + # the parent submit, bound to this instance (which is what we + # really want to use anyway). 
+ super(ThreadPoolExecutor, self).submit) + + @property + def statistics(self): + """:class:`.ExecutorStatistics` about the executors executions.""" + return self._gatherer.statistics + + @property + def alive(self): + """Accessor to determine if the executor is alive/active.""" + return not self._shutdown + + def submit(self, fn, *args, **kwargs): + """Submit some work to be executed (and gather statistics).""" + return self._gatherer.submit(fn, *args, **kwargs) + + +class ProcessPoolExecutor(_process.ProcessPoolExecutor): + """Executor that uses a process pool to execute calls asynchronously. + + It gathers statistics about the submissions executed for post-analysis... + + See: https://docs.python.org/dev/library/concurrent.futures.html + """ + def __init__(self, max_workers=None): + if max_workers is None: + max_workers = tu.get_optimal_thread_count() + super(ProcessPoolExecutor, self).__init__(max_workers=max_workers) + if self._max_workers <= 0: + raise ValueError("Max workers must be greater than zero") + self._gatherer = _Gatherer( + # Since our submit will use this gatherer we have to reference + # the parent submit, bound to this instance (which is what we + # really want to use anyway). 
+ super(ProcessPoolExecutor, self).submit) + + @property + def alive(self): + """Accessor to determine if the executor is alive/active.""" + return not self._shutdown_thread + + @property + def statistics(self): + """:class:`.ExecutorStatistics` about the executors executions.""" + return self._gatherer.statistics + + def submit(self, fn, *args, **kwargs): + """Submit some work to be executed (and gather statistics).""" + return self._gatherer.submit(fn, *args, **kwargs) + + +class _WorkItem(object): + def __init__(self, future, fn, args, kwargs): + self.future = future + self.fn = fn + self.args = args + self.kwargs = kwargs + + def run(self): + if not self.future.set_running_or_notify_cancel(): + return + try: + result = self.fn(*self.args, **self.kwargs) + except BaseException as e: + self.future.set_exception(e) + else: + self.future.set_result(result) + + +class SynchronousExecutor(_futures.Executor): + """Executor that uses the caller to execute calls synchronously. + + This provides an interface to a caller that looks like an executor but + will execute the calls inside the caller thread instead of executing them + in an external process/thread, for when this type of functionality is + useful to provide... + + It gathers statistics about the submissions executed for post-analysis... + """ + + def __init__(self): + self._shutoff = False + self._gatherer = _Gatherer(self._submit, + start_before_submit=True) + + @property + def alive(self): + """Accessor to determine if the executor is alive/active.""" + return not self._shutoff + + def shutdown(self, wait=True): + self._shutoff = True + + def restart(self): + """Restarts this executor (*iff* previously shutoff/shutdown). + + NOTE(harlowja): clears any previously gathered statistics. 
+ """ + if self._shutoff: + self._shutoff = False + self._gatherer.clear() + + @property + def statistics(self): + """:class:`.ExecutorStatistics` about the executors executions.""" + return self._gatherer.statistics + + def submit(self, fn, *args, **kwargs): + """Submit some work to be executed (and gather statistics).""" + if self._shutoff: + raise RuntimeError('Can not schedule new futures' + ' after being shutdown') + return self._gatherer.submit(fn, *args, **kwargs) + + def _submit(self, fn, *args, **kwargs): + f = Future() + runner = _WorkItem(f, fn, args, kwargs) + runner.run() + return f + + +class _GreenWorker(object): + def __init__(self, executor, work, work_queue): + self.executor = executor + self.work = work + self.work_queue = work_queue + + def __call__(self): + # Run our main piece of work. + try: + self.work.run() + finally: + # Consume any delayed work before finishing (this is how we finish + # work that was to big for the pool size, but needs to be finished + # no matter). + while True: + try: + w = self.work_queue.get_nowait() + except greenqueue.Empty: + break + else: + try: + w.run() + finally: + self.work_queue.task_done() + + +class GreenFuture(Future): + def __init__(self): + super(GreenFuture, self).__init__() + eu.check_for_eventlet(RuntimeError('Eventlet is needed to use a green' + ' future')) + # NOTE(harlowja): replace the built-in condition with a greenthread + # compatible one so that when getting the result of this future the + # functions will correctly yield to eventlet. If this is not done then + # waiting on the future never actually causes the greenthreads to run + # and thus you wait for infinity. + if not greenpatcher.is_monkey_patched('threading'): + self._condition = greenthreading.Condition() + + +class GreenThreadPoolExecutor(_futures.Executor): + """Executor that uses a green thread pool to execute calls asynchronously. 
+ + See: https://docs.python.org/dev/library/concurrent.futures.html + and http://eventlet.net/doc/modules/greenpool.html for information on + how this works. + + It gathers statistics about the submissions executed for post-analysis... + """ + + def __init__(self, max_workers=1000): + eu.check_for_eventlet(RuntimeError('Eventlet is needed to use a green' + ' executor')) + if max_workers <= 0: + raise ValueError("Max workers must be greater than zero") + self._max_workers = max_workers + self._pool = greenpool.GreenPool(self._max_workers) + self._delayed_work = greenqueue.Queue() + self._shutdown_lock = greenthreading.Lock() + self._shutdown = False + self._gatherer = _Gatherer(self._submit, + lock_cls=greenthreading.Lock) + + @property + def alive(self): + """Accessor to determine if the executor is alive/active.""" + return not self._shutdown + + @property + def statistics(self): + """:class:`.ExecutorStatistics` about the executors executions.""" + return self._gatherer.statistics + + def submit(self, fn, *args, **kwargs): + """Submit some work to be executed (and gather statistics).""" + with self._shutdown_lock: + if self._shutdown: + raise RuntimeError('Can not schedule new futures' + ' after being shutdown') + return self._gatherer.submit(fn, *args, **kwargs) + + def _submit(self, fn, *args, **kwargs): + f = GreenFuture() + work = _WorkItem(f, fn, args, kwargs) + if not self._spin_up(work): + self._delayed_work.put(work) + return f + + def _spin_up(self, work): + alive = self._pool.running() + self._pool.waiting() + if alive < self._max_workers: + self._pool.spawn_n(_GreenWorker(self, work, self._delayed_work)) + return True + return False + + def shutdown(self, wait=True): + with self._shutdown_lock: + if not self._shutdown: + self._shutdown = True + shutoff = True + else: + shutoff = False + if wait and shutoff: + self._pool.waitall() + self._delayed_work.join() + + +class ExecutorStatistics(object): + """Holds *immutable* information about a executors 
executions.""" + + __slots__ = ['_failures', '_executed', '_runtime', '_cancelled'] + + __repr_format = ("failures=%(failures)s, executed=%(executed)s, " + "runtime=%(runtime)s, cancelled=%(cancelled)s") + + def __init__(self, failures=0, executed=0, runtime=0.0, cancelled=0): + self._failures = failures + self._executed = executed + self._runtime = runtime + self._cancelled = cancelled + + @property + def failures(self): + """How many submissions ended up raising exceptions.""" + return self._failures + + @property + def executed(self): + """How many submissions were executed (failed or not).""" + return self._executed + + @property + def runtime(self): + """Total runtime of all submissions executed (failed or not).""" + return self._runtime + + @property + def cancelled(self): + """How many submissions were cancelled before executing.""" + return self._cancelled + + @property + def average_runtime(self): + """The average runtime of all submissions executed. + + :raises: ZeroDivisionError when no executions have occurred. + """ + return self._runtime / self._executed + + def __repr__(self): + r = reflection.get_class_name(self, fully_qualified=False) + r += "(" + r += self.__repr_format % ({ + 'failures': self._failures, + 'executed': self._executed, + 'runtime': self._runtime, + 'cancelled': self._cancelled, + }) + r += ")" + return r diff --git a/taskflow/types/graph.py b/taskflow/types/graph.py index d3e2bae2..068a8e20 100644 --- a/taskflow/types/graph.py +++ b/taskflow/types/graph.py @@ -14,6 +14,9 @@ # License for the specific language governing permissions and limitations # under the License. 
+import collections +import os + import networkx as nx import six @@ -76,7 +79,7 @@ class DiGraph(nx.DiGraph): buf.write(" --> %s" % (cycle[i])) buf.write(" --> %s" % (cycle[0])) lines.append(" %s" % buf.getvalue()) - return "\n".join(lines) + return os.linesep.join(lines) def export_to_dot(self): """Exports the graph to a dot format (requires pydot library).""" @@ -98,6 +101,26 @@ class DiGraph(nx.DiGraph): if not len(self.predecessors(n)): yield n + def bfs_predecessors_iter(self, n): + """Iterates breadth first over *all* predecessors of a given node. + + This will go through the nodes predecessors, then the predecessor nodes + predecessors and so on until no more predecessors are found. + + NOTE(harlowja): predecessor cycles (if they exist) will not be iterated + over more than once (this prevents infinite iteration). + """ + visited = set([n]) + queue = collections.deque(self.predecessors_iter(n)) + while queue: + pred = queue.popleft() + if pred not in visited: + yield pred + visited.add(pred) + for pred_pred in self.predecessors_iter(pred): + if pred_pred not in visited: + queue.append(pred_pred) + def merge_graphs(graphs, allow_overlaps=False): """Merges a bunch of graphs into a single graph.""" diff --git a/taskflow/types/latch.py b/taskflow/types/latch.py index 9aa2622d..3e279787 100644 --- a/taskflow/types/latch.py +++ b/taskflow/types/latch.py @@ -20,7 +20,12 @@ from taskflow.types import timing as tt class Latch(object): - """A class that ensures N-arrivals occur before unblocking.""" + """A class that ensures N-arrivals occur before unblocking. + + TODO(harlowja): replace with http://bugs.python.org/issue8777 when we no + longer have to support python 2.6 or 2.7 and we can only support 3.2 or + later. 
+ """ def __init__(self, count): count = int(count) @@ -36,13 +41,10 @@ class Latch(object): def countdown(self): """Decrements the internal counter due to an arrival.""" - self._cond.acquire() - try: + with self._cond: self._count -= 1 if self._count <= 0: self._cond.notify_all() - finally: - self._cond.release() def wait(self, timeout=None): """Waits until the latch is released. @@ -52,18 +54,12 @@ class Latch(object): timeout expires then this will return True, otherwise it will return False. """ - w = None - if timeout is not None: - w = tt.StopWatch(timeout).start() - self._cond.acquire() - try: + watch = tt.StopWatch(duration=timeout) + watch.start() + with self._cond: while self._count > 0: - if w is not None: - if w.expired(): - return False - else: - timeout = w.leftover() - self._cond.wait(timeout) + if watch.expired(): + return False + else: + self._cond.wait(watch.leftover(return_none=True)) return True - finally: - self._cond.release() diff --git a/taskflow/types/notifier.py b/taskflow/types/notifier.py new file mode 100644 index 00000000..9f4df801 --- /dev/null +++ b/taskflow/types/notifier.py @@ -0,0 +1,278 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +import collections +import contextlib +import copy +import logging + +from oslo_utils import reflection +import six + +LOG = logging.getLogger(__name__) + + +class _Listener(object): + """Internal helper that represents a notification listener/target.""" + + def __init__(self, callback, args=None, kwargs=None, details_filter=None): + self._callback = callback + self._details_filter = details_filter + if not args: + self._args = () + else: + self._args = args[:] + if not kwargs: + self._kwargs = {} + else: + self._kwargs = kwargs.copy() + + @property + def kwargs(self): + return self._kwargs + + @property + def args(self): + return self._args + + def __call__(self, event_type, details): + if self._details_filter is not None: + if not self._details_filter(details): + return + kwargs = self._kwargs.copy() + kwargs['details'] = details + self._callback(event_type, *self._args, **kwargs) + + def __repr__(self): + repr_msg = "%s object at 0x%x calling into '%r'" % ( + reflection.get_class_name(self), id(self), self._callback) + if self._details_filter is not None: + repr_msg += " using details filter '%r'" % self._details_filter + return "<%s>" % repr_msg + + def is_equivalent(self, callback, details_filter=None): + if not reflection.is_same_callback(self._callback, callback): + return False + if details_filter is not None: + if self._details_filter is None: + return False + else: + return reflection.is_same_callback(self._details_filter, + details_filter) + else: + return self._details_filter is None + + def __eq__(self, other): + if isinstance(other, _Listener): + return self.is_equivalent(other._callback, + details_filter=other._details_filter) + else: + return NotImplemented + + +class Notifier(object): + """A notification helper class. 
+ + It is intended to be used to subscribe to notifications of events + occurring as well as allow an entity to post said notifications to any + associated subscribers without having either entity care about how this + notification occurs. + """ + + #: Keys that can *not* be used in callback arguments + RESERVED_KEYS = ('details',) + + #: Kleene star constant that is used to receive all notifications + ANY = '*' + + #: Events which can *not* be used to trigger notifications + _DISALLOWED_NOTIFICATION_EVENTS = set([ANY]) + + def __init__(self): + self._listeners = collections.defaultdict(list) + + def __len__(self): + """Returns how many callbacks are registered.""" + count = 0 + for (_event_type, listeners) in six.iteritems(self._listeners): + count += len(listeners) + return count + + def is_registered(self, event_type, callback, details_filter=None): + """Check if a callback is registered.""" + for listener in self._listeners.get(event_type, []): + if listener.is_equivalent(callback, details_filter=details_filter): + return True + return False + + def reset(self): + """Forget all previously registered callbacks.""" + self._listeners.clear() + + def notify(self, event_type, details): + """Notify about event occurrence. + + All callbacks registered to receive notifications about the given + event type will be called. If the provided event type can not be + used to emit notifications (this is checked via + the :meth:`.can_trigger_notification` method) then it will silently be + dropped (notification failures are not allowed to cause or + raise exceptions). + + :param event_type: event type that occurred + :param details: additional event details *dictionary* passed to + callback keyword argument with the same name. 
+ """ + if not self.can_trigger_notification(event_type): + LOG.debug("Event type '%s' is not allowed to trigger" + " notifications", event_type) + return + listeners = list(self._listeners.get(self.ANY, [])) + listeners.extend(self._listeners.get(event_type, [])) + if not listeners: + return + if not details: + details = {} + for listener in listeners: + try: + listener(event_type, details.copy()) + except Exception: + LOG.warn("Failure calling listener %s to notify about event" + " %s, details: %s", listener, event_type, + details, exc_info=True) + + def register(self, event_type, callback, + args=None, kwargs=None, details_filter=None): + """Register a callback to be called when event of a given type occurs. + + Callback will be called with provided ``args`` and ``kwargs`` and + when event type occurs (or on any event if ``event_type`` equals to + :attr:`.ANY`). It will also get additional keyword argument, + ``details``, that will hold event details provided to the + :meth:`.notify` method (if a details filter callback is provided then + the target callback will *only* be triggered if the details filter + callback returns a truthy value). 
+ """ + if not six.callable(callback): + raise ValueError("Event callback must be callable") + if details_filter is not None: + if not six.callable(details_filter): + raise ValueError("Details filter must be callable") + if not self.can_be_registered(event_type): + raise ValueError("Disallowed event type '%s' can not have a" + " callback registered" % event_type) + if self.is_registered(event_type, callback, + details_filter=details_filter): + raise ValueError("Event callback already registered with" + " equivalent details filter") + if kwargs: + for k in self.RESERVED_KEYS: + if k in kwargs: + raise KeyError("Reserved key '%s' not allowed in " + "kwargs" % k) + self._listeners[event_type].append( + _Listener(callback, + args=args, kwargs=kwargs, + details_filter=details_filter)) + + def deregister(self, event_type, callback, details_filter=None): + """Remove a single listener bound to event ``event_type``.""" + if event_type not in self._listeners: + return False + for i, listener in enumerate(self._listeners.get(event_type, [])): + if listener.is_equivalent(callback, details_filter=details_filter): + self._listeners[event_type].pop(i) + return True + return False + + def deregister_event(self, event_type): + """Remove a group of listeners bound to event ``event_type``.""" + return len(self._listeners.pop(event_type, [])) + + def copy(self): + c = copy.copy(self) + c._listeners = collections.defaultdict(list) + for event_type, listeners in six.iteritems(self._listeners): + c._listeners[event_type] = listeners[:] + return c + + def listeners_iter(self): + """Return an iterator over the mapping of event => listeners bound.""" + for event_type, listeners in six.iteritems(self._listeners): + if listeners: + yield (event_type, listeners) + + def can_be_registered(self, event_type): + """Checks if the event can be registered/subscribed to.""" + return True + + def can_trigger_notification(self, event_type): + """Checks if the event can trigger a notification.""" + if 
event_type in self._DISALLOWED_NOTIFICATION_EVENTS: + return False + else: + return True + + +class RestrictedNotifier(Notifier): + """A notification class that restricts events registered/triggered. + + NOTE(harlowja): This class unlike :class:`.Notifier` restricts and + disallows registering callbacks for event types that are not declared + when constructing the notifier. + """ + + def __init__(self, watchable_events, allow_any=True): + super(RestrictedNotifier, self).__init__() + self._watchable_events = frozenset(watchable_events) + self._allow_any = allow_any + + def events_iter(self): + """Returns iterator of events that can be registered/subscribed to. + + NOTE(harlowja): does not include back the ``ANY`` event type as that + meta-type is not a specific event but is a capture-all that does not + imply the same meaning as specific event types. + """ + for event_type in self._watchable_events: + yield event_type + + def can_be_registered(self, event_type): + """Checks if the event can be registered/subscribed to.""" + return (event_type in self._watchable_events or + (event_type == self.ANY and self._allow_any)) + + +@contextlib.contextmanager +def register_deregister(notifier, event_type, callback=None, + args=None, kwargs=None, details_filter=None): + """Context manager that registers a callback, then deregisters on exit. + + NOTE(harlowja): if the callback is none, then this registers nothing, which + is different from the behavior of the ``register`` method + which will *not* accept none as it is not callable... 
+ """ + if callback is None: + yield + else: + notifier.register(event_type, callback, + args=args, kwargs=kwargs, + details_filter=details_filter) + try: + yield + finally: + notifier.deregister(event_type, callback, + details_filter=details_filter) diff --git a/taskflow/types/periodic.py b/taskflow/types/periodic.py new file mode 100644 index 00000000..bbb494d3 --- /dev/null +++ b/taskflow/types/periodic.py @@ -0,0 +1,179 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import heapq +import inspect + +from oslo_utils import reflection +import six + +from taskflow import logging +from taskflow.utils import misc +from taskflow.utils import threading_utils as tu + +LOG = logging.getLogger(__name__) + +# Find a monotonic providing time (or fallback to using time.time() +# which isn't *always* accurate but will suffice). +_now = misc.find_monotonic(allow_time_time=True) + +# Attributes expected on periodic tagged/decorated functions or methods... 
+_PERIODIC_ATTRS = tuple([ + '_periodic', + '_periodic_spacing', + '_periodic_run_immediately', +]) + + +def periodic(spacing, run_immediately=True): + """Tags a method/function as wanting/able to execute periodically.""" + + if spacing <= 0: + raise ValueError("Periodicity/spacing must be greater than" + " zero instead of %s" % spacing) + + def wrapper(f): + f._periodic = True + f._periodic_spacing = spacing + f._periodic_run_immediately = run_immediately + + @six.wraps(f) + def decorator(*args, **kwargs): + return f(*args, **kwargs) + + return decorator + + return wrapper + + +class PeriodicWorker(object): + """Calls a collection of callables periodically (sleeping as needed...). + + NOTE(harlowja): typically the :py:meth:`.start` method is executed in a + background thread so that the periodic callables are executed in + the background/asynchronously (using the defined periods to determine + when each is called). + """ + + @classmethod + def create(cls, objects, exclude_hidden=True): + """Automatically creates a worker by analyzing object(s) methods. + + Only picks up methods that have been tagged/decorated with + the :py:func:`.periodic` decorator (does not match against private + or protected methods unless explicitly requested to). + """ + callables = [] + for obj in objects: + for (name, member) in inspect.getmembers(obj): + if name.startswith("_") and exclude_hidden: + continue + if reflection.is_bound_method(member): + consume = True + for attr_name in _PERIODIC_ATTRS: + if not hasattr(member, attr_name): + consume = False + break + if consume: + callables.append(member) + return cls(callables) + + def __init__(self, callables, tombstone=None): + if tombstone is None: + self._tombstone = tu.Event() + else: + # Allows someone to share an event (if they so want to...) 
+ self._tombstone = tombstone + almost_callables = list(callables) + for cb in almost_callables: + if not six.callable(cb): + raise ValueError("Periodic callback must be callable") + for attr_name in _PERIODIC_ATTRS: + if not hasattr(cb, attr_name): + raise ValueError("Periodic callback missing required" + " attribute '%s'" % attr_name) + self._callables = tuple((cb, reflection.get_callable_name(cb)) + for cb in almost_callables) + self._schedule = [] + self._immediates = [] + now = _now() + for i, (cb, cb_name) in enumerate(self._callables): + spacing = getattr(cb, '_periodic_spacing') + next_run = now + spacing + heapq.heappush(self._schedule, (next_run, i)) + for (cb, cb_name) in reversed(self._callables): + if getattr(cb, '_periodic_run_immediately', False): + self._immediates.append((cb, cb_name)) + + def __len__(self): + return len(self._callables) + + @staticmethod + def _safe_call(cb, cb_name, kind='periodic'): + try: + cb() + except Exception: + LOG.warn("Failed to call %s callable '%s'", + kind, cb_name, exc_info=True) + + def start(self): + """Starts running (will not stop/return until the tombstone is set). + + NOTE(harlowja): If this worker has no contained callables this raises + a runtime error and does not run since it is impossible to periodically + run nothing. + """ + if not self._callables: + raise RuntimeError("A periodic worker can not start" + " without any callables") + while not self._tombstone.is_set(): + if self._immediates: + cb, cb_name = self._immediates.pop() + LOG.debug("Calling immediate callable '%s'", cb_name) + self._safe_call(cb, cb_name, kind='immediate') + else: + # Figure out when we should run next (by selecting the + # minimum item from the heap, where the minimum should be + # the callable that needs to run next and has the lowest + # next desired run time). 
+ now = _now() + next_run, i = heapq.heappop(self._schedule) + when_next = next_run - now + if when_next <= 0: + cb, cb_name = self._callables[i] + spacing = getattr(cb, '_periodic_spacing') + LOG.debug("Calling periodic callable '%s' (it runs every" + " %s seconds)", cb_name, spacing) + self._safe_call(cb, cb_name) + # Run again someday... + next_run = now + spacing + heapq.heappush(self._schedule, (next_run, i)) + else: + # Gotta wait... + heapq.heappush(self._schedule, (next_run, i)) + self._tombstone.wait(when_next) + + def stop(self): + """Sets the tombstone (this stops any further executions).""" + self._tombstone.set() + + def reset(self): + """Resets the tombstone and re-queues up any immediate executions.""" + self._tombstone.clear() + self._immediates = [] + for (cb, cb_name) in reversed(self._callables): + if getattr(cb, '_periodic_run_immediately', False): + self._immediates.append((cb, cb_name)) diff --git a/taskflow/types/table.py b/taskflow/types/table.py new file mode 100644 index 00000000..6813fab1 --- /dev/null +++ b/taskflow/types/table.py @@ -0,0 +1,130 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import itertools +import os + +import six + + +class PleasantTable(object): + """A tiny pretty printing table (like prettytable/tabulate but smaller). 
+ + Creates simply formatted tables (with no special sauce):: + + >>> from taskflow.types import table + >>> tbl = table.PleasantTable(['Name', 'City', 'State', 'Country']) + >>> tbl.add_row(["Josh", "San Jose", "CA", "USA"]) + >>> print(tbl.pformat()) + +------+----------+-------+---------+ + Name | City | State | Country + +------+----------+-------+---------+ + Josh | San Jose | CA | USA + +------+----------+-------+---------+ + """ + COLUMN_STARTING_CHAR = ' ' + COLUMN_ENDING_CHAR = '' + COLUMN_SEPARATOR_CHAR = '|' + HEADER_FOOTER_JOINING_CHAR = '+' + HEADER_FOOTER_CHAR = '-' + LINE_SEP = os.linesep + + @staticmethod + def _center_text(text, max_len, fill=' '): + return '{0:{fill}{align}{size}}'.format(text, fill=fill, + align="^", size=max_len) + + @classmethod + def _size_selector(cls, possible_sizes): + # The number two is used so that the edges of a column have spaces + # around them (instead of being right next to a column separator). + try: + return max(x + 2 for x in possible_sizes) + except ValueError: + return 0 + + def __init__(self, columns): + if len(columns) == 0: + raise ValueError("Column count must be greater than zero") + self._columns = [column.strip() for column in columns] + self._rows = [] + + def add_row(self, row): + if len(row) != len(self._columns): + raise ValueError("Row must have %s columns instead of" + " %s columns" % (len(self._columns), len(row))) + self._rows.append([six.text_type(column) for column in row]) + + def pformat(self): + # Figure out the maximum column sizes... + column_count = len(self._columns) + column_sizes = [0] * column_count + headers = [] + for i, column in enumerate(self._columns): + possible_sizes_iter = itertools.chain( + [len(column)], (len(row[i]) for row in self._rows)) + column_sizes[i] = self._size_selector(possible_sizes_iter) + headers.append(self._center_text(column, column_sizes[i])) + # Build the header and footer prefix/postfix. 
+ header_footer_buf = six.StringIO() + header_footer_buf.write(self.HEADER_FOOTER_JOINING_CHAR) + for i, header in enumerate(headers): + header_footer_buf.write(self.HEADER_FOOTER_CHAR * len(header)) + if i + 1 != column_count: + header_footer_buf.write(self.HEADER_FOOTER_JOINING_CHAR) + header_footer_buf.write(self.HEADER_FOOTER_JOINING_CHAR) + # Build the main header. + content_buf = six.StringIO() + content_buf.write(header_footer_buf.getvalue()) + content_buf.write(self.LINE_SEP) + content_buf.write(self.COLUMN_STARTING_CHAR) + for i, header in enumerate(headers): + if i + 1 == column_count: + if self.COLUMN_ENDING_CHAR: + content_buf.write(headers[i]) + content_buf.write(self.COLUMN_ENDING_CHAR) + else: + content_buf.write(headers[i].rstrip()) + else: + content_buf.write(headers[i]) + content_buf.write(self.COLUMN_SEPARATOR_CHAR) + content_buf.write(self.LINE_SEP) + content_buf.write(header_footer_buf.getvalue()) + # Build the main content. + row_count = len(self._rows) + if row_count: + content_buf.write(self.LINE_SEP) + for i, row in enumerate(self._rows): + pieces = [] + for j, column in enumerate(row): + pieces.append(self._center_text(column, column_sizes[j])) + if j + 1 != column_count: + pieces.append(self.COLUMN_SEPARATOR_CHAR) + blob = ''.join(pieces) + if self.COLUMN_ENDING_CHAR: + content_buf.write(self.COLUMN_STARTING_CHAR) + content_buf.write(blob) + content_buf.write(self.COLUMN_ENDING_CHAR) + else: + blob = blob.rstrip() + if blob: + content_buf.write(self.COLUMN_STARTING_CHAR) + content_buf.write(blob) + if i + 1 != row_count: + content_buf.write(self.LINE_SEP) + content_buf.write(self.LINE_SEP) + content_buf.write(header_footer_buf.getvalue()) + return content_buf.getvalue() diff --git a/taskflow/types/timing.py b/taskflow/types/timing.py index cd822ae7..da3938dc 100644 --- a/taskflow/types/timing.py +++ b/taskflow/types/timing.py @@ -14,9 +14,14 @@ # License for the specific language governing permissions and limitations # under the License. 
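The scheduling loop that ``PeriodicWorker`` uses earlier in this change (a min-heap keyed on the next desired run time, popping the soonest callable, sleeping only as long as needed, then rescheduling it one spacing later) can be sketched in isolation. ``run_periodic`` below is a hypothetical simplified version, without the tombstone event or the immediate-run handling, that stops after a fixed number of calls so it terminates:

```python
import heapq
import time


def run_periodic(callables, spacings, stop_after):
    """Minimal sketch of the heap-driven periodic loop described above.

    ``callables`` and ``spacings`` are parallel lists; the loop exits
    after ``stop_after`` total calls (a stand-in for the tombstone
    event the real worker waits on).
    """
    schedule = []
    now = time.time()
    for i, spacing in enumerate(spacings):
        heapq.heappush(schedule, (now + spacing, i))
    calls = 0
    while calls < stop_after:
        next_run, i = heapq.heappop(schedule)
        delay = next_run - time.time()
        if delay > 0:
            # Sleep only until the soonest callable is due.
            time.sleep(delay)
        callables[i]()
        calls += 1
        # Reschedule relative to when it was *due* (not when it ran),
        # keeping the heap ordered by next desired run time.
        heapq.heappush(schedule, (next_run + spacings[i], i))


counts = {'fast': 0, 'slow': 0}
run_periodic(
    [lambda: counts.__setitem__('fast', counts['fast'] + 1),
     lambda: counts.__setitem__('slow', counts['slow'] + 1)],
    [0.01, 0.02],
    stop_after=6,
)
```

With spacings of 0.01 and 0.02 seconds the faster callable is naturally selected roughly twice as often, since its due times dominate the front of the heap.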
-import threading +from oslo_utils import reflection from taskflow.utils import misc +from taskflow.utils import threading_utils + +# Find a monotonic providing time (or fallback to using time.time() +# which isn't *always* accurate but will suffice). +_now = misc.find_monotonic(allow_time_time=True) class Timeout(object): @@ -29,7 +34,7 @@ class Timeout(object): if timeout < 0: raise ValueError("Timeout must be >= 0 and not %s" % (timeout)) self._timeout = timeout - self._event = threading.Event() + self._event = threading_utils.Event() def interrupt(self): self._event.set() @@ -44,62 +49,226 @@ class Timeout(object): self._event.clear() +class Split(object): + """A *immutable* stopwatch split. + + See: http://en.wikipedia.org/wiki/Stopwatch for what this is/represents. + """ + + __slots__ = ['_elapsed', '_length'] + + def __init__(self, elapsed, length): + self._elapsed = elapsed + self._length = length + + @property + def elapsed(self): + """Duration from stopwatch start.""" + return self._elapsed + + @property + def length(self): + """Seconds from last split (or the elapsed time if no prior split).""" + return self._length + + def __repr__(self): + r = reflection.get_class_name(self, fully_qualified=False) + r += "(elapsed=%s, length=%s)" % (self._elapsed, self._length) + return r + + class StopWatch(object): """A simple timer/stopwatch helper class. Inspired by: apache-commons-lang java stopwatch. - Not thread-safe. + Not thread-safe (when a single watch is mutated by multiple threads at + the same time). Thread-safe when used by a single thread (not shared) or + when operations are performed in a thread-safe manner on these objects by + wrapping those operations with locks. """ _STARTED = 'STARTED' _STOPPED = 'STOPPED' + """ + Class variables that should only be used for testing purposes only... 
+ """ + _now_offset = None + _now_override = None + def __init__(self, duration=None): - self._duration = duration + if duration is not None: + if duration < 0: + raise ValueError("Duration must be >= 0 and not %s" % duration) + self._duration = duration + else: + self._duration = None self._started_at = None self._stopped_at = None self._state = None + self._splits = [] def start(self): + """Starts the watch (if not already started). + + NOTE(harlowja): resets any splits previously captured (if any). + """ if self._state == self._STARTED: return self - self._started_at = misc.wallclock() + self._started_at = self._now() self._stopped_at = None self._state = self._STARTED + self._splits = [] return self - def elapsed(self): - if self._state == self._STOPPED: - return float(self._stopped_at - self._started_at) - elif self._state == self._STARTED: - return float(misc.wallclock() - self._started_at) + @property + def splits(self): + """Accessor to all/any splits that have been captured.""" + return tuple(self._splits) + + def split(self): + """Captures a split/elapsed since start time (and doesn't stop).""" + if self._state == self._STARTED: + elapsed = self.elapsed() + if self._splits: + length = self._delta_seconds(self._splits[-1].elapsed, elapsed) + else: + length = elapsed + self._splits.append(Split(elapsed, length)) + return self._splits[-1] else: - raise RuntimeError("Can not get the elapsed time of an invalid" - " stopwatch") + raise RuntimeError("Can not create a split time of a stopwatch" + " if it has not been started") + + def restart(self): + """Restarts the watch from a started/stopped state.""" + if self._state == self._STARTED: + self.stop() + self.start() + return self + + @classmethod + def clear_overrides(cls): + """Clears all overrides/offsets. 
+
+        **Only to be used for testing (affects all watch instances).**
+        """
+        cls._now_override = None
+        cls._now_offset = None
+
+    @classmethod
+    def set_offset_override(cls, offset):
+        """Sets an offset that is applied to each time fetch.
+
+        **Only to be used for testing (affects all watch instances).**
+        """
+        cls._now_offset = offset
+
+    @classmethod
+    def advance_time_seconds(cls, offset):
+        """Advances/sets an offset that is applied to each time fetch.
+
+        NOTE(harlowja): if a previous offset exists (not ``None``) then this
+        offset will be added onto the existing one (if you want to reset
+        the offset completely use the :meth:`.set_offset_override`
+        method instead).
+
+        **Only to be used for testing (affects all watch instances).**
+        """
+        if cls._now_offset is None:
+            cls.set_offset_override(offset)
+        else:
+            cls.set_offset_override(cls._now_offset + offset)
+
+    @classmethod
+    def set_now_override(cls, now=None):
+        """Sets time override to use (if none, then current time is fetched).
+
+        NOTE(harlowja): if a list/tuple is provided then the first element of
+        the list will be used (and removed) each time a time fetch occurs (once
+        it becomes empty the override/s will no longer be applied). If a
+        numeric value is provided then it will be used (and never removed
+        until the override(s) are cleared via the :meth:`.clear_overrides`
+        method).
+ + **Only to be used for testing (affects all watch instances).** + """ + if isinstance(now, (list, tuple)): + cls._now_override = list(now) + else: + if now is None: + now = _now() + cls._now_override = now + + @staticmethod + def _delta_seconds(earlier, later): + return max(0.0, later - earlier) + + @classmethod + def _now(cls): + if cls._now_override is not None: + if isinstance(cls._now_override, list): + try: + now = cls._now_override.pop(0) + except IndexError: + now = _now() + else: + now = cls._now_override + else: + now = _now() + if cls._now_offset is not None: + now = now + cls._now_offset + return now + + def elapsed(self, maximum=None): + """Returns how many seconds have elapsed.""" + if self._state not in (self._STOPPED, self._STARTED): + raise RuntimeError("Can not get the elapsed time of a stopwatch" + " if it has not been started/stopped") + if self._state == self._STOPPED: + elapsed = self._delta_seconds(self._started_at, self._stopped_at) + else: + elapsed = self._delta_seconds(self._started_at, self._now()) + if maximum is not None and elapsed > maximum: + elapsed = max(0.0, maximum) + return elapsed def __enter__(self): + """Starts the watch.""" self.start() return self def __exit__(self, type, value, traceback): + """Stops the watch (ignoring errors if stop fails).""" try: self.stop() except RuntimeError: pass - # NOTE(harlowja): don't silence the exception. - return False - def leftover(self): - if self._duration is None: - raise RuntimeError("Can not get the leftover time of a watch that" - " has no duration") + def leftover(self, return_none=False): + """Returns how many seconds are left until the watch expires. + + :param return_none: when ``True`` instead of raising a ``RuntimeError`` + when no duration has been set this call will + return ``None`` instead. 
+ :type return_none: boolean + """ if self._state != self._STARTED: raise RuntimeError("Can not get the leftover time of a stopwatch" " that has not been started") - end_time = self._started_at + self._duration - return max(0.0, end_time - misc.wallclock()) + if self._duration is None: + if not return_none: + raise RuntimeError("Can not get the leftover time of a watch" + " that has no duration") + else: + return None + return max(0.0, self._duration - self.elapsed()) def expired(self): + """Returns if the watch has expired (ie, duration provided elapsed).""" + if self._state is None: + raise RuntimeError("Can not check if a stopwatch has expired" + " if it has not been started/stopped") if self._duration is None: return False if self.elapsed() > self._duration: @@ -107,6 +276,7 @@ class StopWatch(object): return False def resume(self): + """Resumes the watch from a stopped state.""" if self._state == self._STOPPED: self._state = self._STARTED return self @@ -115,11 +285,12 @@ class StopWatch(object): " stopped") def stop(self): + """Stops the watch.""" if self._state == self._STOPPED: return self if self._state != self._STARTED: raise RuntimeError("Can not stop a stopwatch that has not been" " started") - self._stopped_at = misc.wallclock() + self._stopped_at = self._now() self._state = self._STOPPED return self diff --git a/taskflow/types/tree.py b/taskflow/types/tree.py index 41369b04..e6fad20c 100644 --- a/taskflow/types/tree.py +++ b/taskflow/types/tree.py @@ -16,12 +16,17 @@ # License for the specific language governing permissions and limitations # under the License. +import os + import six class FrozenNode(Exception): """Exception raised when a frozen node is modified.""" + def __init__(self): + super(FrozenNode, self).__init__("Frozen node(s) can't be modified") + class _DFSIter(object): """Depth first iterator (non-recursive) over the child nodes.""" @@ -42,8 +47,7 @@ class _DFSIter(object): # Visit the node. 
yield node # Traverse the left & right subtree. - for child_node in reversed(list(node)): - stack.append(child_node) + stack.extend(node.reverse_iter()) class Node(object): @@ -53,20 +57,22 @@ class Node(object): self.item = item self.parent = None self.metadata = dict(kwargs) + self.frozen = False self._children = [] - self._frozen = False - - def _frozen_add(self, child): - raise FrozenNode("Frozen node(s) can't be modified") def freeze(self): - if not self._frozen: + if not self.frozen: + # This will DFS until all children are frozen as well, only + # after that works do we freeze ourselves (this makes it so + # that we don't become frozen if a child node fails to perform + # the freeze operation). for n in self: n.freeze() - self.add = self._frozen_add - self._frozen = True + self.frozen = True def add(self, child): + if self.frozen: + raise FrozenNode() child.parent = self self._children.append(child) @@ -107,21 +113,22 @@ class Node(object): def pformat(self): """Recursively formats a node into a nice string representation. 
- Example Input: - yahoo = tt.Node("CEO") - yahoo.add(tt.Node("Infra")) - yahoo[0].add(tt.Node("Boss")) - yahoo[0][0].add(tt.Node("Me")) - yahoo.add(tt.Node("Mobile")) - yahoo.add(tt.Node("Mail")) + **Example**:: - Example Output: - CEO - |__Infra - | |__Boss - | |__Me - |__Mobile - |__Mail + >>> from taskflow.types import tree + >>> yahoo = tree.Node("CEO") + >>> yahoo.add(tree.Node("Infra")) + >>> yahoo[0].add(tree.Node("Boss")) + >>> yahoo[0][0].add(tree.Node("Me")) + >>> yahoo.add(tree.Node("Mobile")) + >>> yahoo.add(tree.Node("Mail")) + >>> print(yahoo.pformat()) + CEO + |__Infra + | |__Boss + | |__Me + |__Mobile + |__Mail """ def _inner_pformat(node, level): if level == 0: @@ -130,10 +137,10 @@ class Node(object): else: yield "__%s" % six.text_type(node.item) prefix = " " * 2 - children = list(node) - for (i, child) in enumerate(children): + child_count = node.child_count() + for (i, child) in enumerate(node): for (j, text) in enumerate(_inner_pformat(child, level + 1)): - if j == 0 or i + 1 < len(children): + if j == 0 or i + 1 < child_count: text = prefix + "|" + text else: text = prefix + " " + text @@ -143,7 +150,7 @@ class Node(object): for i, line in enumerate(_inner_pformat(self, 0)): accumulator.write(line) if i < expected_lines: - accumulator.write('\n') + accumulator.write(os.linesep) return accumulator.getvalue() def child_count(self, only_direct=True): @@ -166,8 +173,13 @@ class Node(object): for c in self._children: yield c + def reverse_iter(self): + """Iterates over the direct children of this node (left->right).""" + for c in reversed(self._children): + yield c + def index(self, item): - """Finds the child index of a given item, searchs in added order.""" + """Finds the child index of a given item, searches in added order.""" index_at = None for (i, child) in enumerate(self._children): if child.item == item: diff --git a/taskflow/utils/async_utils.py b/taskflow/utils/async_utils.py index 0599870d..e04c44e7 100644 --- 
a/taskflow/utils/async_utils.py +++ b/taskflow/utils/async_utils.py @@ -14,28 +14,99 @@ # License for the specific language governing permissions and limitations # under the License. -from concurrent import futures +from concurrent import futures as _futures +from concurrent.futures import _base +from oslo_utils import importutils +greenthreading = importutils.try_import('eventlet.green.threading') + +from taskflow.types import futures from taskflow.utils import eventlet_utils as eu +_DONE_STATES = frozenset([ + _base.CANCELLED_AND_NOTIFIED, + _base.FINISHED, +]) + + +def make_completed_future(result, exception=False): + """Make a future completed with a given result.""" + future = futures.Future() + if exception: + future.set_exception(result) + else: + future.set_result(result) + return future + + def wait_for_any(fs, timeout=None): """Wait for one of the futures to complete. - Works correctly with both green and non-green futures. + Works correctly with both green and non-green futures (but not both + together, since this can't be guaranteed to avoid dead-lock due to how + the waiting implementations are different when green threads are being + used). - Returns pair (done, not_done). + Returns pair (done futures, not done futures). 
""" - any_green = any(isinstance(f, eu.GreenFuture) for f in fs) - if any_green: - return eu.wait_for_any(fs, timeout=timeout) + green_fs = sum(1 for f in fs if isinstance(f, futures.GreenFuture)) + if not green_fs: + return _futures.wait(fs, + timeout=timeout, + return_when=_futures.FIRST_COMPLETED) else: - return tuple(futures.wait(fs, timeout=timeout, - return_when=futures.FIRST_COMPLETED)) + non_green_fs = len(fs) - green_fs + if non_green_fs: + raise RuntimeError("Can not wait on %s green futures and %s" + " non-green futures in the same `wait_for_any`" + " call" % (green_fs, non_green_fs)) + else: + return _wait_for_any_green(fs, timeout=timeout) -def make_completed_future(result): - """Make with completed with given result.""" - future = futures.Future() - future.set_result(result) - return future +class _GreenWaiter(object): + """Provides the event that wait_for_any() blocks on.""" + def __init__(self): + self.event = greenthreading.Event() + + def add_result(self, future): + self.event.set() + + def add_exception(self, future): + self.event.set() + + def add_cancelled(self, future): + self.event.set() + + +def _partition_futures(fs): + done = set() + not_done = set() + for f in fs: + if f._state in _DONE_STATES: + done.add(f) + else: + not_done.add(f) + return done, not_done + + +def _wait_for_any_green(fs, timeout=None): + eu.check_for_eventlet(RuntimeError('Eventlet is needed to wait on' + ' green futures')) + + with _base._AcquireFutures(fs): + done, not_done = _partition_futures(fs) + if done: + return _base.DoneAndNotDoneFutures(done, not_done) + waiter = _GreenWaiter() + for f in fs: + f._waiters.append(waiter) + + waiter.event.wait(timeout) + for f in fs: + f._waiters.remove(waiter) + + with _base._AcquireFutures(fs): + done, not_done = _partition_futures(fs) + return _base.DoneAndNotDoneFutures(done, not_done) diff --git a/taskflow/utils/deprecation.py b/taskflow/utils/deprecation.py new file mode 100644 index 00000000..60f82d6f --- /dev/null +++ 
b/taskflow/utils/deprecation.py
@@ -0,0 +1,257 @@
+# -*- coding: utf-8 -*-
+
+# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import functools
+import warnings
+
+from oslo_utils import reflection
+import six
+
+_CLASS_MOVED_PREFIX_TPL = "Class '%s' has moved to '%s'"
+_KIND_MOVED_PREFIX_TPL = "%s '%s' has moved to '%s'"
+_KWARG_MOVED_POSTFIX_TPL = ", please use the '%s' argument instead"
+_KWARG_MOVED_PREFIX_TPL = "Using the '%s' argument is deprecated"
+
+
+def deprecation(message, stacklevel=None):
+    """Warns about some type of deprecation that has been (or will be) made.
+
+    This helper function makes it easier to interact with the warnings module
+    by standardizing the arguments that the warning function receives so that
+    it is easier to use.
+
+    This should be used to emit warnings to users (users can easily turn these
+    warnings off/on, see https://docs.python.org/2/library/warnings.html
+    as they see fit so that the messages do not fill up the users' logs with
+    warnings that they do not wish to see in production) about functions,
+    methods, attributes or other code that is deprecated and will be removed
+    in a future release (this is done using these warnings to avoid breaking
+    existing users of those functions, methods, code; which a library should
+    avoid doing by always giving at *least* N + 1 releases for users to address
+    the deprecation warnings).
+ """ + if stacklevel is None: + warnings.warn(message, category=DeprecationWarning) + else: + warnings.warn(message, + category=DeprecationWarning, stacklevel=stacklevel) + + +# Helper accessors for the moved proxy (since it will not have easy access +# to its own getattr and setattr functions). +_setattr = object.__setattr__ +_getattr = object.__getattribute__ + + +class MovedClassProxy(object): + """Acts as a proxy to a class that was moved to another location. + + Partially based on: + + http://code.activestate.com/recipes/496741-object-proxying/ and other + various examination of how to make a good enough proxy for our usage to + move the various types we want to move during the deprecation process. + + And partially based on the wrapt object proxy (which we should just use + when it becomes available @ http://review.openstack.org/#/c/94754/). + """ + + __slots__ = [ + '__wrapped__', '__message__', '__stacklevel__', + # Ensure weakrefs can be made, + # https://docs.python.org/2/reference/datamodel.html#slots + '__weakref__', + ] + + def __init__(self, wrapped, message, stacklevel): + # We can't assign to these directly, since we are overriding getattr + # and setattr and delattr so we have to do this hoop jump to ensure + # that we don't invoke those methods (and cause infinite recursion). 
+ _setattr(self, '__wrapped__', wrapped) + _setattr(self, '__message__', message) + _setattr(self, '__stacklevel__', stacklevel) + try: + _setattr(self, '__qualname__', wrapped.__qualname__) + except AttributeError: + pass + + def __instancecheck__(self, instance): + deprecation(_getattr(self, '__message__'), + stacklevel=_getattr(self, '__stacklevel__')) + return isinstance(instance, _getattr(self, '__wrapped__')) + + def __subclasscheck__(self, instance): + deprecation(_getattr(self, '__message__'), + stacklevel=_getattr(self, '__stacklevel__')) + return issubclass(instance, _getattr(self, '__wrapped__')) + + def __call__(self, *args, **kwargs): + deprecation(_getattr(self, '__message__'), + stacklevel=_getattr(self, '__stacklevel__')) + return _getattr(self, '__wrapped__')(*args, **kwargs) + + def __getattribute__(self, name): + return getattr(_getattr(self, '__wrapped__'), name) + + def __setattr__(self, name, value): + setattr(_getattr(self, '__wrapped__'), name, value) + + def __delattr__(self, name): + delattr(_getattr(self, '__wrapped__'), name) + + def __repr__(self): + wrapped = _getattr(self, '__wrapped__') + return "<%s at 0x%x for %r at 0x%x>" % ( + type(self).__name__, id(self), wrapped, id(wrapped)) + + +def _generate_moved_message(prefix, postfix=None, message=None, + version=None, removal_version=None): + message_components = [prefix] + if version: + message_components.append(" in version '%s'" % version) + if removal_version: + if removal_version == "?": + message_components.append(" and will be removed in a future" + " version") + else: + message_components.append(" and will be removed in version '%s'" + % removal_version) + if postfix: + message_components.append(postfix) + if message: + message_components.append(": %s" % message) + return ''.join(message_components) + + +def renamed_kwarg(old_name, new_name, message=None, + version=None, removal_version=None, stacklevel=3): + """Decorates a kwarg accepting function to deprecate a renamed 
kwarg.""" + + prefix = _KWARG_MOVED_PREFIX_TPL % old_name + postfix = _KWARG_MOVED_POSTFIX_TPL % new_name + out_message = _generate_moved_message(prefix, postfix=postfix, + message=message, version=version, + removal_version=removal_version) + + def decorator(f): + + @six.wraps(f) + def wrapper(*args, **kwargs): + if old_name in kwargs: + deprecation(out_message, stacklevel=stacklevel) + return f(*args, **kwargs) + + return wrapper + + return decorator + + +def _moved_decorator(kind, new_attribute_name, message=None, + version=None, removal_version=None, + stacklevel=3): + """Decorates a method/property that was moved to another location.""" + + def decorator(f): + try: + old_attribute_name = f.__qualname__ + fully_qualified = True + except AttributeError: + old_attribute_name = f.__name__ + fully_qualified = False + + @six.wraps(f) + def wrapper(self, *args, **kwargs): + base_name = reflection.get_class_name(self, fully_qualified=False) + if fully_qualified: + old_name = old_attribute_name + else: + old_name = ".".join((base_name, old_attribute_name)) + new_name = ".".join((base_name, new_attribute_name)) + prefix = _KIND_MOVED_PREFIX_TPL % (kind, old_name, new_name) + out_message = _generate_moved_message( + prefix, message=message, + version=version, removal_version=removal_version) + deprecation(out_message, stacklevel=stacklevel) + return f(self, *args, **kwargs) + + return wrapper + + return decorator + + +def moved_property(new_attribute_name, message=None, + version=None, removal_version=None, stacklevel=3): + """Decorates a *instance* property that was moved to another location.""" + + return _moved_decorator('Property', new_attribute_name, message=message, + version=version, removal_version=removal_version, + stacklevel=stacklevel) + + +def moved_inheritable_class(new_class, old_class_name, old_module_name, + message=None, version=None, removal_version=None): + """Deprecates a class that was moved to another location. 
+ + NOTE(harlowja): this creates a new-old type that can be used for a + deprecation period that can be inherited from, the difference between this + and the ``moved_class`` deprecation function is that the proxy from that + function can not be inherited from (thus limiting its use for a more + particular usecase where inheritance is not needed). + + This will emit warnings when the old locations class is initialized, + telling where the new and improved location for the old class now is. + """ + old_name = ".".join((old_module_name, old_class_name)) + new_name = reflection.get_class_name(new_class) + prefix = _CLASS_MOVED_PREFIX_TPL % (old_name, new_name) + out_message = _generate_moved_message(prefix, + message=message, version=version, + removal_version=removal_version) + + def decorator(f): + + # Use the older functools until the following is available: + # + # https://bitbucket.org/gutworth/six/issue/105 + + @functools.wraps(f, assigned=("__name__", "__doc__")) + def wrapper(self, *args, **kwargs): + deprecation(out_message, stacklevel=3) + return f(self, *args, **kwargs) + + return wrapper + + old_class = type(old_class_name, (new_class,), {}) + old_class.__module__ = old_module_name + old_class.__init__ = decorator(old_class.__init__) + return old_class + + +def moved_class(new_class, old_class_name, old_module_name, message=None, + version=None, removal_version=None, stacklevel=3): + """Deprecates a class that was moved to another location. + + This will emit warnings when the old locations class is initialized, + telling where the new and improved location for the old class now is. 
+ """ + old_name = ".".join((old_module_name, old_class_name)) + new_name = reflection.get_class_name(new_class) + prefix = _CLASS_MOVED_PREFIX_TPL % (old_name, new_name) + out_message = _generate_moved_message(prefix, + message=message, version=version, + removal_version=removal_version) + return MovedClassProxy(new_class, out_message, stacklevel=stacklevel) diff --git a/taskflow/utils/eventlet_utils.py b/taskflow/utils/eventlet_utils.py index cc26dfe1..2f5a42a6 100644 --- a/taskflow/utils/eventlet_utils.py +++ b/taskflow/utils/eventlet_utils.py @@ -1,6 +1,6 @@ # -*- coding: utf-8 -*- -# Copyright (C) 2013 Yahoo! Inc. All Rights Reserved. +# Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain @@ -14,179 +14,21 @@ # License for the specific language governing permissions and limitations # under the License. -import logging +from oslo_utils import importutils -from concurrent import futures +_eventlet = importutils.try_import('eventlet') -try: - from eventlet.green import threading as greenthreading - from eventlet import greenpool - from eventlet import patcher as greenpatcher - from eventlet import queue as greenqueue - EVENTLET_AVAILABLE = True -except ImportError: - EVENTLET_AVAILABLE = False +EVENTLET_AVAILABLE = bool(_eventlet) -from taskflow.utils import lock_utils +def check_for_eventlet(exc=None): + """Check if eventlet is available and if not raise a runtime error. 
-LOG = logging.getLogger(__name__) - -_DONE_STATES = frozenset([ - futures._base.CANCELLED_AND_NOTIFIED, - futures._base.FINISHED, -]) - - -class _WorkItem(object): - def __init__(self, future, fn, args, kwargs): - self.future = future - self.fn = fn - self.args = args - self.kwargs = kwargs - - def run(self): - if not self.future.set_running_or_notify_cancel(): - return - try: - result = self.fn(*self.args, **self.kwargs) - except BaseException as e: - self.future.set_exception(e) + :param exc: exception to raise instead of raising a runtime error + :type exc: exception + """ + if not EVENTLET_AVAILABLE: + if exc is None: + raise RuntimeError('Eventlet is not current available') else: - self.future.set_result(result) - - -class _Worker(object): - def __init__(self, executor, work, work_queue): - self.executor = executor - self.work = work - self.work_queue = work_queue - - def __call__(self): - # Run our main piece of work. - try: - self.work.run() - finally: - # Consume any delayed work before finishing (this is how we finish - # work that was to big for the pool size, but needs to be finished - # no matter). - while True: - try: - w = self.work_queue.get_nowait() - except greenqueue.Empty: - break - else: - try: - w.run() - finally: - self.work_queue.task_done() - - -class GreenFuture(futures.Future): - def __init__(self): - super(GreenFuture, self).__init__() - assert EVENTLET_AVAILABLE, 'eventlet is needed to use a green future' - # NOTE(harlowja): replace the built-in condition with a greenthread - # compatible one so that when getting the result of this future the - # functions will correctly yield to eventlet. If this is not done then - # waiting on the future never actually causes the greenthreads to run - # and thus you wait for infinity. 
- if not greenpatcher.is_monkey_patched('threading'): - self._condition = greenthreading.Condition() - - -class GreenExecutor(futures.Executor): - """A greenthread backed executor.""" - - def __init__(self, max_workers=1000): - assert EVENTLET_AVAILABLE, 'eventlet is needed to use a green executor' - self._max_workers = int(max_workers) - if self._max_workers <= 0: - raise ValueError('Max workers must be greater than zero') - self._pool = greenpool.GreenPool(self._max_workers) - self._delayed_work = greenqueue.Queue() - self._shutdown_lock = greenthreading.Lock() - self._shutdown = False - self._workers_created = 0 - - @property - def workers_created(self): - return self._workers_created - - @property - def amount_delayed(self): - return self._delayed_work.qsize() - - @property - def alive(self): - return not self._shutdown - - @lock_utils.locked(lock='_shutdown_lock') - def submit(self, fn, *args, **kwargs): - if self._shutdown: - raise RuntimeError('cannot schedule new futures after shutdown') - f = GreenFuture() - work = _WorkItem(f, fn, args, kwargs) - if not self._spin_up(work): - self._delayed_work.put(work) - return f - - def _spin_up(self, work): - alive = self._pool.running() + self._pool.waiting() - if alive < self._max_workers: - self._pool.spawn_n(_Worker(self, work, self._delayed_work)) - self._workers_created += 1 - return True - return False - - def shutdown(self, wait=True): - with self._shutdown_lock: - self._shutdown = True - if wait: - self._pool.waitall() - # NOTE(harlowja): Fixed in eventlet 0.15 (remove when able to use) - if not self._delayed_work.empty(): - self._delayed_work.join() - - -class _GreenWaiter(object): - """Provides the event that wait_for_any() blocks on.""" - def __init__(self): - self.event = greenthreading.Event() - - def add_result(self, future): - self.event.set() - - def add_exception(self, future): - self.event.set() - - def add_cancelled(self, future): - self.event.set() - - -def _partition_futures(fs): - """Partitions 
the input futures into done and not done lists."""
-    done = set()
-    not_done = set()
-    for f in fs:
-        if f._state in _DONE_STATES:
-            done.add(f)
-        else:
-            not_done.add(f)
-    return (done, not_done)
-
-
-def wait_for_any(fs, timeout=None):
-    assert EVENTLET_AVAILABLE, ('eventlet is needed to wait on green futures')
-    with futures._base._AcquireFutures(fs):
-        (done, not_done) = _partition_futures(fs)
-        if done:
-            return (done, not_done)
-        waiter = _GreenWaiter()
-        for f in fs:
-            f._waiters.append(waiter)
-    waiter.event.wait(timeout)
-    for f in fs:
-        f._waiters.remove(waiter)
-    with futures._base._AcquireFutures(fs):
-        return _partition_futures(fs)
+            raise exc
diff --git a/taskflow/utils/kazoo_utils.py b/taskflow/utils/kazoo_utils.py
index ae62e880..c2869bdd 100644
--- a/taskflow/utils/kazoo_utils.py
+++ b/taskflow/utils/kazoo_utils.py
@@ -16,10 +16,11 @@

 from kazoo import client
 from kazoo import exceptions as k_exc
+from oslo_utils import reflection
 import six
+from six.moves import zip as compat_zip

 from taskflow import exceptions as exc
-from taskflow.utils import reflection


 def _parse_hosts(hosts):
@@ -94,13 +95,17 @@ class KazooTransactionException(k_exc.KazooException):


 def checked_commit(txn):
-    # Until https://github.com/python-zk/kazoo/pull/224 is fixed we have
-    # to workaround the transaction failing silently.
+    """Commits a kazoo transaction and validates the result.
+
+    NOTE(harlowja): Until https://github.com/python-zk/kazoo/pull/224 is fixed
+    or a similar pull request is merged we have to work around the transaction
+    failing silently.
+ """ if not txn.operations: return [] results = txn.commit() failures = [] - for op, result in six.moves.zip(txn.operations, results): + for op, result in compat_zip(txn.operations, results): if isinstance(result, k_exc.KazooException): failures.append((op, result)) if len(results) < len(txn.operations): @@ -180,7 +185,8 @@ def make_client(conf): hosts = _parse_hosts(conf.get("hosts", "localhost:2181")) if not hosts or not isinstance(hosts, six.string_types): raise TypeError("Invalid hosts format, expected " - "non-empty string/list, not %s" % type(hosts)) + "non-empty string/list, not '%s' (%s)" + % (hosts, type(hosts))) client_kwargs['hosts'] = hosts if 'timeout' in conf: client_kwargs['timeout'] = float(conf['timeout']) diff --git a/taskflow/utils/kombu_utils.py b/taskflow/utils/kombu_utils.py new file mode 100644 index 00000000..8ace067b --- /dev/null +++ b/taskflow/utils/kombu_utils.py @@ -0,0 +1,73 @@ +# -*- coding: utf-8 -*- + +# Copyright (C) 2015 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +# Keys extracted from the message properties when formatting... 
+_MSG_PROPERTIES = tuple([ + 'correlation_id', + 'delivery_info/routing_key', + 'type', +]) + + +class DelayedPretty(object): + """Wraps a message and delays prettifying it until requested.""" + + def __init__(self, message): + self._message = message + self._message_pretty = None + + def __str__(self): + if self._message_pretty is None: + self._message_pretty = _prettify_message(self._message) + return self._message_pretty + + +def _get_deep(properties, *keys): + """Get a final key among a list of keys (each with its own sub-dict).""" + for key in keys: + properties = properties[key] + return properties + + +def _prettify_message(message): + """Kombu doesn't currently have a useful ``__str__()`` or ``__repr__()``. + + This provides something decent(ish) for debugging (or other purposes) so + that messages are more nice and understandable.... + + TODO(harlowja): submit something into kombu to fix/adjust this. + """ + if message.content_type is not None: + properties = { + 'content_type': message.content_type, + } + else: + properties = {} + for name in _MSG_PROPERTIES: + segments = name.split("/") + try: + value = _get_deep(message.properties, *segments) + except (KeyError, ValueError, TypeError): + pass + else: + if value is not None: + properties[segments[-1]] = value + if message.body is not None: + properties['body_length'] = len(message.body) + return "%(delivery_tag)s: %(properties)s" % { + 'delivery_tag': message.delivery_tag, + 'properties': properties, + } diff --git a/taskflow/utils/lock_utils.py b/taskflow/utils/lock_utils.py index dbc0b778..b74931e9 100644 --- a/taskflow/utils/lock_utils.py +++ b/taskflow/utils/lock_utils.py @@ -19,17 +19,16 @@ # pulls in oslo.cfg) and is reduced to only what taskflow currently wants to # use from that code. 
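The `DelayedPretty` wrapper added above is an instance of a general pattern: defer an expensive `__str__()` rendering until something actually formats the object (e.g. a log statement at an enabled level), then cache the result. A self-contained sketch of that pattern, with a hypothetical `expensive_render` standing in for `_prettify_message`:

```python
class DelayedStr(object):
    """Delays (and caches) an expensive string rendering until requested."""

    def __init__(self, obj, render):
        self._obj = obj
        self._render = render  # Called at most once, on first str().
        self._rendered = None

    def __str__(self):
        if self._rendered is None:
            self._rendered = self._render(self._obj)
        return self._rendered


calls = []

def expensive_render(obj):  # Hypothetical costly prettifier.
    calls.append(obj)
    return "pretty<%s>" % (obj,)

msg = DelayedStr("m1", expensive_render)
# Nothing rendered yet; a logger at a disabled level never pays the cost.
assert calls == []
assert str(msg) == "pretty<m1>"
assert str(msg) == "pretty<m1>"
assert calls == ["m1"]  # Rendered exactly once, then cached.
```

This is why the message object is passed to the logger wrapped (lazy) rather than pre-formatted.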
-import abc import collections import contextlib import errno -import logging import os import threading import time import six +from taskflow import logging from taskflow.utils import misc from taskflow.utils import threading_utils as tu @@ -38,8 +37,13 @@ LOG = logging.getLogger(__name__) @contextlib.contextmanager def try_lock(lock): - """Attempts to acquire a lock, and autoreleases if acquisition occurred.""" - was_locked = lock.acquire(blocking=False) + """Attempts to acquire a lock, and auto releases if acquired (on exit).""" + # NOTE(harlowja): the keyword argument for 'blocking' does not work + # in py2.x and only is fixed in py3.x (this adjustment is documented + # and/or debated in http://bugs.python.org/issue10789); so we'll just + # stick to the format that works in both (oddly the keyword argument + # works in py2.x but only with reentrant locks). + was_locked = lock.acquire(False) try: yield was_locked finally: @@ -91,47 +95,7 @@ def locked(*args, **kwargs): return decorator -@six.add_metaclass(abc.ABCMeta) -class _ReaderWriterLockBase(object): - """Base class for reader/writer lock implementations.""" - - @abc.abstractproperty - def has_pending_writers(self): - """Returns if there are writers waiting to become the *one* writer.""" - - @abc.abstractmethod - def is_writer(self, check_pending=True): - """Returns if the caller is the active writer or a pending writer.""" - - @abc.abstractproperty - def owner(self): - """Returns whether the lock is locked by a writer or reader.""" - - @abc.abstractmethod - def is_reader(self): - """Returns if the caller is one of the readers.""" - - @abc.abstractmethod - def read_lock(self): - """Context manager that grants a read lock. - - Will wait until no active or pending writers. - - Raises a RuntimeError if an active or pending writer tries to acquire - a read lock. - """ - - @abc.abstractmethod - def write_lock(self): - """Context manager that grants a write lock. - - Will wait until no active readers. 
Blocks readers after acquiring. - - Raises a RuntimeError if an active reader attempts to acquire a lock. - """ - - -class ReaderWriterLock(_ReaderWriterLockBase): +class ReaderWriterLock(object): """A reader/writer lock. This lock allows for simultaneous readers to exist but only one writer @@ -142,6 +106,9 @@ class ReaderWriterLock(_ReaderWriterLockBase): the write lock. In the future these restrictions may be relaxed. + + This can be eventually removed if http://bugs.python.org/issue8800 ever + gets accepted into the python standard threading library... """ WRITER = 'w' READER = 'r' @@ -154,6 +121,7 @@ class ReaderWriterLock(_ReaderWriterLockBase): @property def has_pending_writers(self): + """Returns if there are writers waiting to become the *one* writer.""" self._cond.acquire() try: return bool(self._pending_writers) @@ -161,6 +129,7 @@ class ReaderWriterLock(_ReaderWriterLockBase): self._cond.release() def is_writer(self, check_pending=True): + """Returns if the caller is the active writer or a pending writer.""" self._cond.acquire() try: me = tu.get_ident() @@ -175,6 +144,7 @@ class ReaderWriterLock(_ReaderWriterLockBase): @property def owner(self): + """Returns whether the lock is locked by a writer or reader.""" self._cond.acquire() try: if self._writer is not None: @@ -186,6 +156,7 @@ class ReaderWriterLock(_ReaderWriterLockBase): self._cond.release() def is_reader(self): + """Returns if the caller is one of the readers.""" self._cond.acquire() try: return tu.get_ident() in self._readers @@ -194,6 +165,13 @@ class ReaderWriterLock(_ReaderWriterLockBase): @contextlib.contextmanager def read_lock(self): + """Context manager that grants a read lock. + + Will wait until no active or pending writers. + + Raises a RuntimeError if an active or pending writer tries to acquire + a read lock. 
+ """ me = tu.get_ident() if self.is_writer(): raise RuntimeError("Writer %s can not acquire a read lock" @@ -226,6 +204,12 @@ class ReaderWriterLock(_ReaderWriterLockBase): @contextlib.contextmanager def write_lock(self): + """Context manager that grants a write lock. + + Will wait until no active readers. Blocks readers after acquiring. + + Raises a RuntimeError if an active reader attempts to acquire a lock. + """ me = tu.get_ident() if self.is_reader(): raise RuntimeError("Reader %s to writer privilege" @@ -257,7 +241,7 @@ class ReaderWriterLock(_ReaderWriterLockBase): self._cond.release() -class DummyReaderWriterLock(_ReaderWriterLockBase): +class DummyReaderWriterLock(object): """A dummy reader/writer lock. This dummy lock doesn't lock anything but provides the same functions as a @@ -291,46 +275,122 @@ class MultiLock(object): """A class which attempts to obtain & release many locks at once. It is typically useful as a context manager around many locks (instead of - having to nest individual lock context managers). + having to nest individual lock context managers, which can become pretty + awkward looking). + + NOTE(harlowja): The locks that will be obtained will be in the order the + locks are given in the constructor, they will be acquired in order and + released in reverse order (so ordering matters). """ def __init__(self, locks): - assert len(locks) > 0, "Zero locks requested" + if not isinstance(locks, tuple): + locks = tuple(locks) + if len(locks) <= 0: + raise ValueError("Zero locks requested") self._locks = locks - self._locked = [False] * len(locks) + self._local = threading.local() + + @property + def _lock_stacks(self): + # This is weird, but this is how thread locals work (in that each + # thread will need to check if it has already created the attribute and + # if not then create it and set it to the thread local variable...) 
+ # + # This isn't done in the constructor since the constructor is only + # activated by one of the many threads that could use this object, + # and that means that the attribute will only exist for that one + # thread. + try: + return self._local.stacks + except AttributeError: + self._local.stacks = [] + return self._local.stacks def __enter__(self): - self.acquire() + return self.acquire() + + @property + def obtained(self): + """Returns how many locks were last acquired/obtained.""" + try: + return self._lock_stacks[-1] + except IndexError: + return 0 + + def __len__(self): + return len(self._locks) def acquire(self): + """This will attempt to acquire all the locks given in the constructor. - def is_locked(lock): - # NOTE(harlowja): reentrant locks (rlock) don't have this - # attribute, but normal non-reentrant locks do, how odd... - if hasattr(lock, 'locked'): - return lock.locked() - return False + If not all of the locks can be acquired (say only X of Y locks could + be acquired), this will return false to signify that not all the + locks were acquired; you can later use the :attr:`.obtained` + property to determine how many were obtained during the last + acquisition attempt. - for i in range(0, len(self._locked)): - if self._locked[i] or is_locked(self._locks[i]): - raise threading.ThreadError("Lock %s not previously released" - % (i + 1)) - self._locked[i] = False - - for (i, lock) in enumerate(self._locks): - self._locked[i] = lock.acquire() + NOTE(harlowja): When not all locks were acquired, release() must + still be called, since the locks obtained under partial acquisition + must still be released. For example if 4 out of 5 locks were acquired + this will return false, but the user **must** still release those + other 4 to avoid causing locking issues...
+ """ + gotten = 0 + for lock in self._locks: + try: + acked = lock.acquire() + except (threading.ThreadError, RuntimeError) as e: + # If we have already gotten some set of the desired locks + # make sure we track that and ensure that we later release them + # instead of losing them. + if gotten: + self._lock_stacks.append(gotten) + raise threading.ThreadError( + "Unable to acquire lock %s/%s due to '%s'" + % (gotten + 1, len(self._locks), e)) + else: + if not acked: + break + else: + gotten += 1 + if gotten: + self._lock_stacks.append(gotten) + return gotten == len(self._locks) def __exit__(self, type, value, traceback): self.release() def release(self): - for (i, locked) in enumerate(self._locked): + """Releases any past acquired locks (partial or otherwise).""" + height = len(self._lock_stacks) + if not height: + # Raise the same error type as the threading.Lock raises so that + # it matches the behavior of the built-in class (it's odd though + # that the threading.RLock raises a runtime error on this same + # method instead...) + raise threading.ThreadError('Release attempted on unlocked lock') + # Cleans off one level of the stack (this is done so that if there + # are multiple __enter__() and __exit__() pairs active that this will + # only remove one level (the last one), and not all levels... + leftover = self._lock_stacks[-1] + while leftover: + lock = self._locks[leftover - 1] try: - if locked: - self._locks[i].release() - self._locked[i] = False - except threading.ThreadError: - LOG.exception("Unable to release lock %s", i + 1) + lock.release() + except (threading.ThreadError, RuntimeError) as e: + # Ensure that we adjust the lock stack under failure so that + # if release is attempted again that we do not try to release + # the locks we already released... 
+ self._lock_stacks[-1] = leftover + raise threading.ThreadError( + "Unable to release lock %s/%s due to '%s'" + % (leftover, len(self._locks), e)) + else: + leftover -= 1 + # At the end only clear it off, so that under partial failure we don't + # lose any locks... + self._lock_stacks.pop() class _InterProcessLock(object): diff --git a/taskflow/utils/misc.py b/taskflow/utils/misc.py index 8e5e1921..299082ae 100644 --- a/taskflow/utils/misc.py +++ b/taskflow/utils/misc.py @@ -15,39 +15,66 @@ # License for the specific language governing permissions and limitations # under the License. -import collections import contextlib -import copy import datetime import errno import inspect -import keyword -import logging import os import re -import string import sys +import threading import time -import traceback +import types +from oslo_serialization import jsonutils +from oslo_utils import importutils +from oslo_utils import netutils +from oslo_utils import reflection import six -from six.moves.urllib import parse as urlparse +from six.moves import map as compat_map +from six.moves import range as compat_range -from taskflow import exceptions as exc -from taskflow.openstack.common import jsonutils -from taskflow.openstack.common import network_utils -from taskflow.utils import reflection +from taskflow.types import failure +from taskflow.types import notifier +from taskflow.utils import deprecation -LOG = logging.getLogger(__name__) NUMERIC_TYPES = six.integer_types + (float,) # NOTE(imelnikov): regular expression to get scheme from URI, # see RFC 3986 section 3.1 _SCHEME_REGEX = re.compile(r"^([A-Za-z][A-Za-z0-9+.-]*):") +_MONOTONIC_LOCATIONS = tuple([ + # The built-in/expected location in python3.3+ + 'time.monotonic', + # NOTE(harlowja): Try to use the pypi module that provides this + # functionality for versions of python older than 3.3 so that + # they too can benefit from better timing...
+ # + # See: http://pypi.python.org/pypi/monotonic + 'monotonic.monotonic', +]) -def merge_uri(uri_pieces, conf): + + +def find_monotonic(allow_time_time=False): + """Tries to find a monotonic time providing function (and returns it).""" + for import_str in _MONOTONIC_LOCATIONS: + mod_str, _sep, attr_str = import_str.rpartition('.') + mod = importutils.try_import(mod_str) + if mod is None: + continue + func = getattr(mod, attr_str, None) + if func is not None: + return func + # Finally give up and use time.time (which isn't monotonic)... + if allow_time_time: + return time.time + else: + return None + + +def merge_uri(uri, conf): """Merges a parsed uri into the given configuration dictionary. Merges the username, password, hostname, and query params of a uri into @@ -56,64 +83,104 @@ def merge_uri(uri_pieces, conf): NOTE(harlowja): does not merge the path, scheme or fragment. """ - for k in ('username', 'password'): - if not uri_pieces[k]: + for (k, v) in [('username', uri.username), ('password', uri.password)]: + if not v: continue - conf.setdefault(k, uri_pieces[k]) - hostname = uri_pieces.get('hostname') - if hostname: - port = uri_pieces.get('port') - if port is not None: - hostname += ":%s" % (port) + conf.setdefault(k, v) + if uri.hostname: + hostname = uri.hostname + if uri.port is not None: + hostname += ":%s" % (uri.port) conf.setdefault('hostname', hostname) - for (k, v) in six.iteritems(uri_pieces['params']): + for (k, v) in six.iteritems(uri.params()): conf.setdefault(k, v) return conf -def parse_uri(uri, query_duplicates=False): +def find_subclasses(locations, base_cls, exclude_hidden=True): + """Finds subclass types in the given locations. + + This will examine the given locations for types which are subclasses of + the base class type provided and returns the found subclasses (or fails + with exceptions if this introspection can not be accomplished).
+ + If a string is provided as one of the locations it will be imported and + examined if it is a subclass of the base class. If a module is given, + all of its members will be examined for attributes which are subclasses of + the base class. If a type itself is given it will be examined for being a + subclass of the base class. + """ + derived = set() + for item in locations: + module = None + if isinstance(item, six.string_types): + try: + pkg, cls = item.split(':') + except ValueError: + module = importutils.import_module(item) + else: + obj = importutils.import_class('%s.%s' % (pkg, cls)) + if not reflection.is_subclass(obj, base_cls): + raise TypeError("Object '%s' (%s) is not a '%s' subclass" + % (item, type(item), base_cls)) + derived.add(obj) + elif isinstance(item, types.ModuleType): + module = item + elif reflection.is_subclass(item, base_cls): + derived.add(item) + else: + raise TypeError("Object '%s' (%s) is an unexpected type" % + (item, type(item))) + # If it's a module derive objects from it if we can. + if module is not None: + for (name, obj) in inspect.getmembers(module): + if name.startswith("_") and exclude_hidden: + continue + if reflection.is_subclass(obj, base_cls): + derived.add(obj) + return derived + + +def pick_first_not_none(*values): + """Returns first of values that is *not* None (or None if all are/were).""" + for val in values: + if val is not None: + return val + return None + + +def parse_uri(uri): """Parses a uri into its components.""" # Do some basic validation before continuing... 
if not isinstance(uri, six.string_types): raise TypeError("Can only parse string types to uri data, " - "and not an object of type %s" - % reflection.get_class_name(uri)) + "and not '%s' (%s)" % (uri, type(uri))) match = _SCHEME_REGEX.match(uri) if not match: - raise ValueError("Uri %r does not start with a RFC 3986 compliant" + raise ValueError("Uri '%s' does not start with a RFC 3986 compliant" " scheme" % (uri)) - parsed = network_utils.urlsplit(uri) - if parsed.query: - query_params = urlparse.parse_qsl(parsed.query) - if not query_duplicates: - query_params = dict(query_params) - else: - # Retain duplicates in a list for keys which have duplicates, but - # for items which are not duplicated, just associate the key with - # the value. - tmp_query_params = {} - for (k, v) in query_params: - if k in tmp_query_params: - p_v = tmp_query_params[k] - if isinstance(p_v, list): - p_v.append(v) - else: - p_v = [p_v, v] - tmp_query_params[k] = p_v - else: - tmp_query_params[k] = v - query_params = tmp_query_params - else: - query_params = {} - return AttrDict( - scheme=parsed.scheme, - username=parsed.username, - password=parsed.password, - fragment=parsed.fragment, - path=parsed.path, - params=query_params, - hostname=parsed.hostname, - port=parsed.port) + return netutils.urlsplit(uri) + + +def clamp(value, minimum, maximum, on_clamped=None): + """Clamps a value to ensure its >= minimum and <= maximum.""" + if minimum > maximum: + raise ValueError("Provided minimum '%s' must be less than or equal to" + " the provided maximum '%s'" % (minimum, maximum)) + if value > maximum: + value = maximum + if on_clamped is not None: + on_clamped() + if value < minimum: + value = minimum + if on_clamped is not None: + on_clamped() + return value + + +def fix_newlines(text, replacement=os.linesep): + """Fixes text that *may* end with wrong nl by replacing with right nl.""" + return replacement.join(text.splitlines()) def binary_encode(text, encoding='utf-8'): @@ -126,7 +193,7 @@ def 
binary_encode(text, encoding='utf-8'): elif isinstance(text, six.text_type): return text.encode(encoding) else: - raise TypeError("Expected binary or string type") + raise TypeError("Expected binary or string type not '%s'" % type(text)) def binary_decode(data, encoding='utf-8'): @@ -139,7 +206,7 @@ def binary_decode(data, encoding='utf-8'): elif isinstance(data, six.text_type): return data else: - raise TypeError("Expected binary or string type") + raise TypeError("Expected binary or string type not '%s'" % type(data)) def decode_json(raw_data, root_types=(dict,)): @@ -155,15 +222,22 @@ def decode_json(raw_data, root_types=(dict,)): raise ValueError("Expected UTF-8 decodable data: %s" % e) except ValueError as e: raise ValueError("Expected JSON decodable data: %s" % e) - if root_types and not isinstance(data, tuple(root_types)): - ok_types = ", ".join(str(t) for t in root_types) - raise ValueError("Expected (%s) root types not: %s" - % (ok_types, type(data))) + if root_types: + if not isinstance(root_types, tuple): + root_types = tuple(root_types) + if not isinstance(data, root_types): + if len(root_types) == 1: + root_type = root_types[0] + raise ValueError("Expected '%s' root type not '%s'" + % (root_type, type(data))) + else: + raise ValueError("Expected %s root types not '%s'" + % (list(root_types), type(data))) return data class cachedproperty(object): - """A descriptor property that is only evaluated once.. + """A *thread-safe* descriptor property that is only evaluated once. This caching descriptor can be placed on instance methods to translate those methods into properties that will be cached in the instance (avoiding @@ -176,6 +250,7 @@ class cachedproperty(object): after the first call to 'get_thing' occurs. """ def __init__(self, fget): + self._lock = threading.RLock() # If a name is provided (as an argument) then this will be the string # to place the cached attribute under if not then it will be the # function itself to be wrapped into a property. 
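The reworked ``decode_json`` above normalizes ``root_types`` to a tuple and then distinguishes the single-type and multi-type error messages. A standalone sketch of that validation flow, using the stdlib ``json`` module in place of ``oslo_serialization.jsonutils``:

```python
import json

def decode_json(raw_data, root_types=(dict,)):
    """Sketch of the validation above: decode, then verify the root
    object is one of the expected types (normalized to a tuple)."""
    try:
        data = json.loads(raw_data)
    except ValueError as e:
        raise ValueError("Expected JSON decodable data: %s" % e)
    if root_types:
        if not isinstance(root_types, tuple):
            root_types = tuple(root_types)
        if not isinstance(data, root_types):
            if len(root_types) == 1:
                # Single expected type gets a simpler message...
                raise ValueError("Expected '%s' root type not '%s'"
                                 % (root_types[0], type(data)))
            raise ValueError("Expected %s root types not '%s'"
                             % (list(root_types), type(data)))
    return data

assert decode_json('{"a": 1}') == {'a': 1}
assert decode_json('[1, 2]', root_types=[list]) == [1, 2]
```

The ``(dict,)`` default mirrors the common case where only a JSON object makes sense as a document root; callers that accept lists (or anything) pass their own ``root_types``.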
@@ -205,19 +280,19 @@ class cachedproperty(object): def __get__(self, instance, owner): if instance is None: return self - try: + # Quick check to see if this already has been made (before acquiring + # the lock). This is safe to do since we don't allow deletion after + # being created. + if hasattr(instance, self._attr_name): return getattr(instance, self._attr_name) - except AttributeError: - value = self._fget(instance) - setattr(instance, self._attr_name, value) - return value - - -def wallclock(): - # NOTE(harlowja): made into a function so that this can be easily mocked - # out if we want to alter time related functionality (for testing - # purposes). - return time.time() + else: + with self._lock: + try: + return getattr(instance, self._attr_name) + except AttributeError: + value = self._fget(instance) + setattr(instance, self._attr_name, value) + return value def millis_to_datetime(milliseconds): @@ -257,27 +332,9 @@ def sequence_minus(seq1, seq2): return result -def item_from(container, index, name=None): - """Attempts to fetch a index/key from a given container.""" - if index is None: - return container - try: - return container[index] - except (IndexError, KeyError, ValueError, TypeError): - # NOTE(harlowja): Perhaps the container is a dictionary-like object - # and that key does not exist (key error), or the container is a - # tuple/list and a non-numeric key is being requested (index error), - # or there was no container and an attempt to index into none/other - # unsubscriptable type is being requested (type error). 
- if name is None: - name = index - raise exc.NotFound("Unable to find %r in container %s" - % (name, container)) - - def get_duplicate_keys(iterable, key=None): if key is not None: - iterable = six.moves.map(key, iterable) + iterable = compat_map(key, iterable) keys = set() duplicates = set() for item in iterable: @@ -287,67 +344,6 @@ def get_duplicate_keys(iterable, key=None): return duplicates -# NOTE(imelnikov): we should not use str.isalpha or str.isdigit -# as they are locale-dependant -_ASCII_WORD_SYMBOLS = frozenset(string.ascii_letters + string.digits + '_') - - -def is_valid_attribute_name(name, allow_self=False, allow_hidden=False): - """Checks that a string is a valid/invalid python attribute name.""" - return all(( - isinstance(name, six.string_types), - len(name) > 0, - (allow_self or not name.lower().startswith('self')), - (allow_hidden or not name.lower().startswith('_')), - - # NOTE(imelnikov): keywords should be forbidden. - not keyword.iskeyword(name), - - # See: http://docs.python.org/release/2.5.2/ref/grammar.txt - not (name[0] in string.digits), - all(symbol in _ASCII_WORD_SYMBOLS for symbol in name) - )) - - -class AttrDict(dict): - """Dictionary subclass that allows for attribute based access. - - This subclass allows for accessing a dictionaries keys and values by - accessing those keys as regular attributes. Keys that are not valid python - attribute names can not of course be acccessed/set (those keys must be - accessed/set by the traditional dictionary indexing operators instead). - """ - NO_ATTRS = tuple(reflection.get_member_names(dict)) - - @classmethod - def _is_valid_attribute_name(cls, name): - if not is_valid_attribute_name(name): - return False - # Make the name just be a simple string in latin-1 encoding in python3. 
- if name in cls.NO_ATTRS: - return False - return True - - def __init__(self, **kwargs): - for (k, v) in kwargs.items(): - if not self._is_valid_attribute_name(k): - raise AttributeError("Invalid attribute name: '%s'" % (k)) - self[k] = v - - def __getattr__(self, name): - if not self._is_valid_attribute_name(name): - raise AttributeError("Invalid attribute name: '%s'" % (name)) - try: - return self[name] - except KeyError: - raise AttributeError("No attributed named: '%s'" % (name)) - - def __setattr__(self, name, value): - if not self._is_valid_attribute_name(name): - raise AttributeError("Invalid attribute name: '%s'" % (name)) - self[name] = value - - class ExponentialBackoff(object): """An iterable object that will yield back an exponential delay sequence. @@ -364,7 +360,7 @@ class ExponentialBackoff(object): def __iter__(self): if self.count <= 0: raise StopIteration() - for i in six.moves.range(0, self.count): + for i in compat_range(0, self.count): yield min(self.exponent ** i, self.max_backoff) def __str__(self): @@ -385,7 +381,8 @@ def as_int(obj, quiet=False): pass # Eck, not sure what this is then. if not quiet: - raise TypeError("Can not translate %s to an integer." % (obj)) + raise TypeError("Can not translate '%s' (%s) to an integer" + % (obj, type(obj))) return obj @@ -399,143 +396,25 @@ def ensure_tree(path): """ try: os.makedirs(path) - except OSError as exc: - if exc.errno == errno.EEXIST: + except OSError as e: + if e.errno == errno.EEXIST: if not os.path.isdir(path): raise else: raise -class Notifier(object): - """A notification helper class. - - It is intended to be used to subscribe to notifications of events - occurring as well as allow a entity to post said notifications to any - associated subscribers without having either entity care about how this - notification occurs. 
- """ - - RESERVED_KEYS = ('details',) - ANY = '*' - - def __init__(self): - self._listeners = collections.defaultdict(list) - - def __len__(self): - """Returns how many callbacks are registered.""" - count = 0 - for (_event_type, callbacks) in six.iteritems(self._listeners): - count += len(callbacks) - return count - - def is_registered(self, event_type, callback): - """Check if a callback is registered.""" - listeners = list(self._listeners.get(event_type, [])) - for (cb, _args, _kwargs) in listeners: - if reflection.is_same_callback(cb, callback): - return True - return False - - def reset(self): - """Forget all previously registered callbacks.""" - self._listeners.clear() - - def notify(self, event_type, details): - """Notify about event occurrence. - - All callbacks registered to receive notifications about given - event type will be called. - - :param event_type: event type that occurred - :param details: addition event details - """ - listeners = list(self._listeners.get(self.ANY, [])) - for i in self._listeners[event_type]: - if i not in listeners: - listeners.append(i) - if not listeners: - return - for (callback, args, kwargs) in listeners: - if args is None: - args = [] - if kwargs is None: - kwargs = {} - kwargs['details'] = details - try: - callback(event_type, *args, **kwargs) - except Exception: - LOG.warn("Failure calling callback %s to notify about event" - " %s, details: %s", callback, event_type, - details, exc_info=True) - - def register(self, event_type, callback, args=None, kwargs=None): - """Register a callback to be called when event of a given type occurs. - - Callback will be called with provided ``args`` and ``kwargs`` and - when event type occurs (or on any event if ``event_type`` equals to - ``Notifier.ANY``). It will also get additional keyword argument, - ``details``, that will hold event details provided to - :py:meth:`notify` method. 
- """ - assert six.callable(callback), "Callback must be callable" - if self.is_registered(event_type, callback): - raise ValueError("Callback %s already registered" % (callback)) - if kwargs: - for k in self.RESERVED_KEYS: - if k in kwargs: - raise KeyError(("Reserved key '%s' not allowed in " - "kwargs") % k) - kwargs = copy.copy(kwargs) - if args: - args = copy.copy(args) - self._listeners[event_type].append((callback, args, kwargs)) - - def deregister(self, event_type, callback): - """Remove a single callback from listening to event ``event_type``.""" - if event_type not in self._listeners: - return - for i, (cb, args, kwargs) in enumerate(self._listeners[event_type]): - if reflection.is_same_callback(cb, callback): - self._listeners[event_type].pop(i) - break +Failure = deprecation.moved_class(failure.Failure, 'Failure', __name__, + version="0.6", removal_version="?") -def copy_exc_info(exc_info): - """Make copy of exception info tuple, as deep as possible.""" - if exc_info is None: - return None - exc_type, exc_value, tb = exc_info - # NOTE(imelnikov): there is no need to copy type, and - # we can't copy traceback. - return (exc_type, copy.deepcopy(exc_value), tb) - - -def are_equal_exc_info_tuples(ei1, ei2): - if ei1 == ei2: - return True - if ei1 is None or ei2 is None: - return False # if both are None, we returned True above - - # NOTE(imelnikov): we can't compare exceptions with '==' - # because we want exc_info be equal to it's copy made with - # copy_exc_info above. 
- if ei1[0] is not ei2[0]: - return False - if not all((type(ei1[1]) == type(ei2[1]), - exc.exception_message(ei1[1]) == exc.exception_message(ei2[1]), - repr(ei1[1]) == repr(ei2[1]))): - return False - if ei1[2] == ei2[2]: - return True - tb1 = traceback.format_tb(ei1[2]) - tb2 = traceback.format_tb(ei2[2]) - return tb1 == tb2 +Notifier = deprecation.moved_class(notifier.Notifier, 'Notifier', __name__, + version="0.6", removal_version="?") @contextlib.contextmanager def capture_failure(): - """Captures the occuring exception and provides a failure back. + """Captures the occurring exception and provides a failure object back. This will save the current exception information and yield back a failure object for the caller to use (it will raise a runtime error if @@ -552,192 +431,30 @@ def capture_failure(): For example:: - except Exception: - with capture_failure() as fail: - LOG.warn("Activating cleanup") - cleanup() - save_failure(fail) + >>> from taskflow.utils import misc + >>> + >>> def cleanup(): + ... pass + ... + >>> + >>> def save_failure(f): + ... print("Saving %s" % f) + ... + >>> + >>> try: + ... raise IOError("Broken") + ... except Exception: + ... with misc.capture_failure() as fail: + ... print("Activating cleanup") + ... cleanup() + ... save_failure(fail) + ... + Activating cleanup + Saving Failure: IOError: Broken + """ exc_info = sys.exc_info() if not any(exc_info): raise RuntimeError("No active exception is being handled") else: - yield Failure(exc_info=exc_info) - - -class Failure(object): - """Object that represents failure. - - Failure objects encapsulate exception information so that - it can be re-used later to re-raise or inspect. 
- """ - DICT_VERSION = 1 - - def __init__(self, exc_info=None, **kwargs): - if not kwargs: - if exc_info is None: - exc_info = sys.exc_info() - self._exc_info = exc_info - self._exc_type_names = list( - reflection.get_all_class_names(exc_info[0], up_to=Exception)) - if not self._exc_type_names: - raise TypeError('Invalid exception type: %r' % exc_info[0]) - self._exception_str = exc.exception_message(self._exc_info[1]) - self._traceback_str = ''.join( - traceback.format_tb(self._exc_info[2])) - else: - self._exc_info = exc_info # may be None - self._exception_str = kwargs.pop('exception_str') - self._exc_type_names = kwargs.pop('exc_type_names', []) - self._traceback_str = kwargs.pop('traceback_str', None) - if kwargs: - raise TypeError( - 'Failure.__init__ got unexpected keyword argument(s): %s' - % ', '.join(six.iterkeys(kwargs))) - - @classmethod - def from_exception(cls, exception): - return cls((type(exception), exception, None)) - - def _matches(self, other): - if self is other: - return True - return (self._exc_type_names == other._exc_type_names - and self.exception_str == other.exception_str - and self.traceback_str == other.traceback_str) - - def matches(self, other): - if not isinstance(other, Failure): - return False - if self.exc_info is None or other.exc_info is None: - return self._matches(other) - else: - return self == other - - def __eq__(self, other): - if not isinstance(other, Failure): - return NotImplemented - return (self._matches(other) and - are_equal_exc_info_tuples(self.exc_info, other.exc_info)) - - def __ne__(self, other): - return not (self == other) - - # NOTE(imelnikov): obj.__hash__() should return same values for equal - # objects, so we should redefine __hash__. Failure equality semantics - # is a bit complicated, so for now we just mark Failure objects as - # unhashable. 
See python docs on object.__hash__ for more info: - # http://docs.python.org/2/reference/datamodel.html#object.__hash__ - __hash__ = None - - @property - def exception(self): - """Exception value, or None if exception value is not present. - - Exception value may be lost during serialization. - """ - if self._exc_info: - return self._exc_info[1] - else: - return None - - @property - def exception_str(self): - """String representation of exception.""" - return self._exception_str - - @property - def exc_info(self): - """Exception info tuple or None.""" - return self._exc_info - - @property - def traceback_str(self): - """Exception traceback as string.""" - return self._traceback_str - - @staticmethod - def reraise_if_any(failures): - """Re-raise exceptions if argument is not empty. - - If argument is empty list, this method returns None. If - argument is list with single Failure object in it, - this failure is reraised. Else, WrappedFailure exception - is raised with failures list as causes. - """ - failures = list(failures) - if len(failures) == 1: - failures[0].reraise() - elif len(failures) > 1: - raise exc.WrappedFailure(failures) - - def reraise(self): - """Re-raise captured exception.""" - if self._exc_info: - six.reraise(*self._exc_info) - else: - raise exc.WrappedFailure([self]) - - def check(self, *exc_classes): - """Check if any of exc_classes caused the failure. - - Arguments of this method can be exception types or type - names (stings). If captured exception is instance of - exception of given type, the corresponding argument is - returned. Else, None is returned. 
- """ - for cls in exc_classes: - if isinstance(cls, type): - err = reflection.get_class_name(cls) - else: - err = cls - if err in self._exc_type_names: - return cls - return None - - def __str__(self): - return self.pformat() - - def pformat(self, traceback=False): - buf = six.StringIO() - buf.write( - 'Failure: %s: %s' % (self._exc_type_names[0], self._exception_str)) - if traceback: - if self._traceback_str is not None: - traceback_str = self._traceback_str.rstrip() - else: - traceback_str = None - if traceback_str: - buf.write('\nTraceback (most recent call last):\n') - buf.write(traceback_str) - else: - buf.write('\nTraceback not available.') - return buf.getvalue() - - def __iter__(self): - """Iterate over exception type names.""" - for et in self._exc_type_names: - yield et - - @classmethod - def from_dict(cls, data): - data = dict(data) - version = data.pop('version', None) - if version != cls.DICT_VERSION: - raise ValueError('Invalid dict version of failure object: %r' - % version) - return cls(**data) - - def to_dict(self): - return { - 'exception_str': self.exception_str, - 'traceback_str': self.traceback_str, - 'exc_type_names': list(self), - 'version': self.DICT_VERSION, - } - - def copy(self): - return Failure(exc_info=copy_exc_info(self.exc_info), - exception_str=self.exception_str, - traceback_str=self.traceback_str, - exc_type_names=self._exc_type_names[:]) + yield failure.Failure(exc_info=exc_info) diff --git a/taskflow/utils/persistence_utils.py b/taskflow/utils/persistence_utils.py index e3c4ba36..dd304bc6 100644 --- a/taskflow/utils/persistence_utils.py +++ b/taskflow/utils/persistence_utils.py @@ -15,10 +15,12 @@ # under the License. 
import contextlib -import logging +import os -from taskflow.openstack.common import timeutils -from taskflow.openstack.common import uuidutils +from oslo_utils import timeutils +from oslo_utils import uuidutils + +from taskflow import logging from taskflow.persistence import logbook from taskflow.utils import misc @@ -138,7 +140,7 @@ def pformat_atom_detail(atom_detail, indent=0): lines.append("%s- failure = %s" % (" " * (indent + 1), bool(atom_detail.failure))) lines.extend(_format_meta(atom_detail.meta, indent=indent + 1)) - return "\n".join(lines) + return os.linesep.join(lines) def pformat_flow_detail(flow_detail, indent=0): @@ -148,7 +150,7 @@ def pformat_flow_detail(flow_detail, indent=0): lines.extend(_format_meta(flow_detail.meta, indent=indent + 1)) for task_detail in flow_detail: lines.append(pformat_atom_detail(task_detail, indent=indent + 1)) - return "\n".join(lines) + return os.linesep.join(lines) def pformat(book, indent=0): @@ -166,4 +168,4 @@ def pformat(book, indent=0): timeutils.isotime(book.updated_at))) for flow_detail in book: lines.append(pformat_flow_detail(flow_detail, indent=indent + 1)) - return "\n".join(lines) + return os.linesep.join(lines) diff --git a/taskflow/utils/reflection.py b/taskflow/utils/reflection.py deleted file mode 100644 index bc5a3223..00000000 --- a/taskflow/utils/reflection.py +++ /dev/null @@ -1,252 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright (C) 2012-2013 Yahoo! Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the -# License for the specific language governing permissions and limitations -# under the License. - -import inspect -import types - -import six - -from taskflow.openstack.common import importutils - -try: - _TYPE_TYPE = types.TypeType -except AttributeError: - _TYPE_TYPE = type - -# See: https://docs.python.org/2/library/__builtin__.html#module-__builtin__ -# and see https://docs.python.org/2/reference/executionmodel.html (and likely -# others)... -_BUILTIN_MODULES = ('builtins', '__builtin__', 'exceptions') - - -def _get_members(obj, exclude_hidden): - """Yields the members of an object, filtering by hidden/not hidden.""" - for (name, value) in inspect.getmembers(obj): - if name.startswith("_") and exclude_hidden: - continue - yield (name, value) - - -def find_subclasses(locations, base_cls, exclude_hidden=True): - """Finds subclass types in the given locations. - - This will examines the given locations for types which are subclasses of - the base class type provided and returns the found subclasses (or fails - with exceptions if this introspection can not be accomplished). - - If a string is provided as one of the locations it will be imported and - examined if it is a subclass of the base class. If a module is given, - all of its members will be examined for attributes which are subclasses of - the base class. If a type itself is given it will be examined for being a - subclass of the base class. 
- """ - derived = set() - for item in locations: - module = None - if isinstance(item, six.string_types): - try: - pkg, cls = item.split(':') - except ValueError: - module = importutils.import_module(item) - else: - obj = importutils.import_class('%s.%s' % (pkg, cls)) - if not is_subclass(obj, base_cls): - raise TypeError("Item %s is not a %s subclass" % - (item, base_cls)) - derived.add(obj) - elif isinstance(item, types.ModuleType): - module = item - elif is_subclass(item, base_cls): - derived.add(item) - else: - raise TypeError("Item %s unexpected type: %s" % - (item, type(item))) - # If it's a module derive objects from it if we can. - if module is not None: - for (_name, obj) in _get_members(module, exclude_hidden): - if is_subclass(obj, base_cls): - derived.add(obj) - return derived - - -def get_member_names(obj, exclude_hidden=True): - """Get all the member names for a object.""" - return [name for (name, _obj) in _get_members(obj, exclude_hidden)] - - -def get_class_name(obj, fully_qualified=True): - """Get class name for object. - - If object is a type, fully qualified name of the type is returned. - Else, fully qualified name of the type of the object is returned. - For builtin types, just name is returned. - """ - if not isinstance(obj, six.class_types): - obj = type(obj) - try: - built_in = obj.__module__ in _BUILTIN_MODULES - except AttributeError: - pass - else: - if built_in: - try: - return obj.__qualname__ - except AttributeError: - return obj.__name__ - pieces = [] - try: - pieces.append(obj.__qualname__) - except AttributeError: - pieces.append(obj.__name__) - if fully_qualified: - try: - pieces.insert(0, obj.__module__) - except AttributeError: - pass - return '.'.join(pieces) - - -def get_all_class_names(obj, up_to=object): - """Get class names of object parent classes. - - Iterate over all class names object is instance or subclass of, - in order of method resolution (mro). 
If up_to parameter is provided, - only name of classes that are sublcasses to that class are returned. - """ - if not isinstance(obj, six.class_types): - obj = type(obj) - for cls in obj.mro(): - if issubclass(cls, up_to): - yield get_class_name(cls) - - -def get_callable_name(function): - """Generate a name from callable. - - Tries to do the best to guess fully qualified callable name. - """ - method_self = get_method_self(function) - if method_self is not None: - # This is a bound method. - if isinstance(method_self, six.class_types): - # This is a bound class method. - im_class = method_self - else: - im_class = type(method_self) - try: - parts = (im_class.__module__, function.__qualname__) - except AttributeError: - parts = (im_class.__module__, im_class.__name__, function.__name__) - elif inspect.ismethod(function) or inspect.isfunction(function): - # This could be a function, a static method, a unbound method... - try: - parts = (function.__module__, function.__qualname__) - except AttributeError: - if hasattr(function, 'im_class'): - # This is a unbound method, which exists only in python 2.x - im_class = function.im_class - parts = (im_class.__module__, - im_class.__name__, function.__name__) - else: - parts = (function.__module__, function.__name__) - else: - im_class = type(function) - if im_class is _TYPE_TYPE: - im_class = function - try: - parts = (im_class.__module__, im_class.__qualname__) - except AttributeError: - parts = (im_class.__module__, im_class.__name__) - return '.'.join(parts) - - -def get_method_self(method): - if not inspect.ismethod(method): - return None - try: - return six.get_method_self(method) - except AttributeError: - return None - - -def is_same_callback(callback1, callback2, strict=True): - """Returns if the two callbacks are the same.""" - if callback1 is callback2: - # This happens when plain methods are given (or static/non-bound - # methods). 
- return True - if callback1 == callback2: - if not strict: - return True - # Two bound methods are equal if functions themselves are equal and - # objects they are applied to are equal. This means that a bound - # method could be the same bound method on another object if the - # objects have __eq__ methods that return true (when in fact it is a - # different bound method). Python u so crazy! - try: - self1 = six.get_method_self(callback1) - self2 = six.get_method_self(callback2) - return self1 is self2 - except AttributeError: - pass - return False - - -def is_bound_method(method): - """Returns if the given method is bound to an object.""" - return bool(get_method_self(method)) - - -def is_subclass(obj, cls): - """Returns if the object is class and it is subclass of a given class.""" - return inspect.isclass(obj) and issubclass(obj, cls) - - -def _get_arg_spec(function): - if isinstance(function, type): - bound = True - function = function.__init__ - elif isinstance(function, (types.FunctionType, types.MethodType)): - bound = is_bound_method(function) - function = getattr(function, '__wrapped__', function) - else: - function = function.__call__ - bound = is_bound_method(function) - return inspect.getargspec(function), bound - - -def get_callable_args(function, required_only=False): - """Get names of callable arguments. - - Special arguments (like *args and **kwargs) are not included into - output. - - If required_only is True, optional arguments (with default values) - are not included into output. 
- """ - argspec, bound = _get_arg_spec(function) - f_args = argspec.args - if required_only and argspec.defaults: - f_args = f_args[:-len(argspec.defaults)] - if bound: - f_args = f_args[1:] - return f_args - - -def accepts_kwargs(function): - """Returns True if function accepts kwargs.""" - argspec, _bound = _get_arg_spec(function) - return bool(argspec.keywords) diff --git a/taskflow/utils/threading_utils.py b/taskflow/utils/threading_utils.py index 2af17023..cea0760d 100644 --- a/taskflow/utils/threading_utils.py +++ b/taskflow/utils/threading_utils.py @@ -14,12 +14,41 @@ # License for the specific language governing permissions and limitations # under the License. +import collections import multiprocessing +import sys import threading +import six from six.moves import _thread +if sys.version_info[0:2] == (2, 6): + # This didn't return that was/wasn't set in 2.6, since we actually care + # whether it did or didn't add that feature by taking the code from 2.7 + # that added this functionality... + # + # TODO(harlowja): remove when we can drop 2.6 support. + class Event(threading._Event): + def wait(self, timeout=None): + self.__cond.acquire() + try: + if not self.__flag: + self.__cond.wait(timeout) + return self.__flag + finally: + self.__cond.release() +else: + Event = threading.Event + + +def is_alive(thread): + """Helper to determine if a thread is alive (handles none safely).""" + if not thread: + return False + return thread.is_alive() + + def get_ident(): """Return the 'thread identifier' of the current thread.""" return _thread.get_ident() @@ -44,3 +73,105 @@ def daemon_thread(target, *args, **kwargs): # unless the daemon property is set to True. thread.daemon = True return thread + + +# Container for thread creator + associated callbacks. 
+_ThreadBuilder = collections.namedtuple('_ThreadBuilder', + ['thread_factory', + 'before_start', 'after_start', + 'before_join', 'after_join']) +_ThreadBuilder.callables = tuple([ + # Attribute name -> none allowed as a valid value... + ('thread_factory', False), + ('before_start', True), + ('after_start', True), + ('before_join', True), + ('after_join', True), +]) + + +class ThreadBundle(object): + """A group/bundle of threads that start/stop together.""" + + def __init__(self): + self._threads = [] + self._lock = threading.Lock() + + def bind(self, thread_factory, + before_start=None, after_start=None, + before_join=None, after_join=None): + """Adds a thread (to-be) into this bundle (with given callbacks). + + NOTE(harlowja): callbacks provided should not attempt to call + mutating methods (:meth:`.stop`, :meth:`.start`, + :meth:`.bind` ...) on this object as that will result + in deadlock since the lock on this object is not + meant to be (and is not) reentrant... + """ + builder = _ThreadBuilder(thread_factory, + before_start, after_start, + before_join, after_join) + for attr_name, none_allowed in builder.callables: + cb = getattr(builder, attr_name) + if cb is None and none_allowed: + continue + if not six.callable(cb): + raise ValueError("Provided callback for argument" + " '%s' must be callable" % attr_name) + with self._lock: + self._threads.append([ + builder, + # The built thread. + None, + # Whether the built thread was started (and should have + # run or still be running). 
+ False, + ]) + + @staticmethod + def _trigger_callback(callback, thread): + if callback is not None: + callback(thread) + + def start(self): + """Creates & starts all associated threads (that are not running).""" + count = 0 + with self._lock: + for i, (builder, thread, started) in enumerate(self._threads): + if thread and started: + continue + if not thread: + self._threads[i][1] = thread = builder.thread_factory() + self._trigger_callback(builder.before_start, thread) + thread.start() + count += 1 + try: + self._trigger_callback(builder.after_start, thread) + finally: + # Just in case the 'after_start' callback blows up, make sure + # we always set this... + self._threads[i][2] = started = True + return count + + def stop(self): + """Stops & joins all associated threads (that have been started).""" + count = 0 + with self._lock: + for i, (builder, thread, started) in enumerate(self._threads): + if not thread or not started: + continue + self._trigger_callback(builder.before_join, thread) + thread.join() + count += 1 + try: + self._trigger_callback(builder.after_join, thread) + finally: + # Just in case the 'after_join' callback blows up, make sure + # we always set/reset these... + self._threads[i][1] = thread = None + self._threads[i][2] = started = False + return count + + def __len__(self): + """Returns how many threads (to-be) are in this bundle.""" + return len(self._threads) diff --git a/test-requirements.txt b/test-requirements.txt index 4068d786..293ec5dc 100644 --- a/test-requirements.txt +++ b/test-requirements.txt @@ -3,13 +3,30 @@ # process, which may cause wedges in the gate later. hacking>=0.9.2,<0.10 -discover -coverage>=3.6 +oslotest>=1.2.0 # Apache-2.0 mock>=1.0 -python-subunit>=0.0.18 -testrepository>=0.0.18 -testtools>=0.9.34 +testtools>=0.9.36,!=1.2.0 + +# Used for testing the WBE engine. +kombu>=2.5.0 + +# Used for testing zookeeper & backends. 
zake>=0.1 # Apache-2.0 -# docs build jobs -sphinx>=1.1.2,!=1.2.0,<1.3 -oslosphinx>=2.2.0.0a2 +kazoo>=1.3.1 + +# Used for testing database persistence backends. +# +# NOTE(harlowja): SQLAlchemy isn't listed here currently but is +# listed in our tox.ini files so that we can test multiple varying SQLAlchemy +# versions to ensure a wider range of compatibility. +# +# Explicit MySQL drivers are also not listed here so that we can test against +# PyMySQL or MySQL-python depending on the python version the tests are being +# run in (MySQL-python is currently preferred for 2.x environments, since +# it has been used in openstack for the longest). +alembic>=0.7.2 +psycopg2 + +# Docs build jobs need these packages. +sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3 +oslosphinx>=2.2.0 # Apache-2.0 diff --git a/tools/env_builder.sh b/tools/env_builder.sh new file mode 100644 index 00000000..7adb39e9 --- /dev/null +++ b/tools/env_builder.sh @@ -0,0 +1,126 @@ +#!/bin/bash + +# This sets up a developer testing environment that can be used with various +# openstack projects (mainly for taskflow, but for others it should work +# fine also). +# +# Some things to note: +# +# - The mysql server that is set up is *not* secured. +# - The zookeeper server that is set up is *not* secured. +# - The downloads from external services are *not* certificate verified. +# +# Overall it should only be used for testing/developer environments (it was +# tested on ubuntu 14.04 and rhel 6.x, for other distributions some tweaking +# may be required). + +set -e +set -u + +# If on a debian environment this will make apt-get *not* prompt for passwords. +export DEBIAN_FRONTEND=noninteractive + +# http://www.unixcl.com/2009/03/print-text-in-style-box-bash-scripting.html +Box () { + str="$@" + len=$((${#str}+4)) + for i in $(seq $len); do echo -n '*'; done; + echo; echo "* "$str" *"; + for i in $(seq $len); do echo -n '*'; done; + echo +} + +Box "Installing system packages..." 
+if [ -f "/etc/redhat-release" ]; then + yum install -y -q mysql-devel postgresql-devel mysql-server \ + wget gcc make autoconf + mysqld="mysqld" + zookeeperd="zookeeper-server" +elif [ -f "/etc/debian_version" ]; then + apt-get -y -qq install libmysqlclient-dev mysql-server postgresql \ + wget gcc make autoconf + mysqld="mysql" + zookeeperd="zookeeper" +else + echo "Unknown distribution!!" + lsb_release -a + exit 1 +fi + +set +e +python_27=`which python2.7` +set -e + +build_dir=`mktemp -d` +echo "Created build directory $build_dir..." +cd $build_dir + +# Get python 2.7 installed (if it's not). +if [ -z "$python_27" ]; then + py_file="Python-2.7.7.tgz" + py_base_file=${py_file%.*} + py_url="https://www.python.org/ftp/python/2.7.7/$py_file" + + Box "Building python 2.7..." + wget $py_url -O "$build_dir/$py_file" --no-check-certificate -nv + tar -xf "$py_file" + cd $build_dir/$py_base_file + ./configure --disable-ipv6 -q + make --quiet + + Box "Installing python 2.7..." + make altinstall >/dev/null 2>&1 + python_27=/usr/local/bin/python2.7 +fi + +set +e +pip_27=`which pip2.7` +set -e +if [ -z "$pip_27" ]; then + Box "Installing pip..." + wget "https://bootstrap.pypa.io/get-pip.py" \ + -O "$build_dir/get-pip.py" --no-check-certificate -nv + $python_27 "$build_dir/get-pip.py" >/dev/null 2>&1 + pip_27=/usr/local/bin/pip2.7 +fi + +Box "Installing tox..." +$pip_27 install -q 'tox>=1.6.1,<1.7.0' + +Box "Setting up mysql..." +service $mysqld restart +/usr/bin/mysql --user="root" --execute='CREATE DATABASE 'openstack_citest'' +cat << EOF > $build_dir/mysql.sql +CREATE USER 'openstack_citest'@'localhost' IDENTIFIED BY 'openstack_citest'; +CREATE USER 'openstack_citest' IDENTIFIED BY 'openstack_citest'; +GRANT ALL PRIVILEGES ON *.* TO 'openstack_citest'@'localhost'; +GRANT ALL PRIVILEGES ON *.* TO 'openstack_citest'; +FLUSH PRIVILEGES; +EOF +/usr/bin/mysql --user="root" < $build_dir/mysql.sql + +# TODO(harlowja): configure/setup postgresql... + +Box "Installing zookeeper..." 
+if [ -f "/etc/redhat-release" ]; then + # RH doesn't ship zookeeper (still...) + zk_file="cloudera-cdh-4-0.x86_64.rpm" + zk_url="http://archive.cloudera.com/cdh4/one-click-install/redhat/6/x86_64/$zk_file" + wget $zk_url -O $build_dir/$zk_file --no-check-certificate -nv + yum -y -q --nogpgcheck localinstall $build_dir/$zk_file + yum -y -q install zookeeper-server java + service zookeeper-server stop + service zookeeper-server init --force + mkdir -pv /var/lib/zookeeper + python -c "import random; print random.randint(1, 16384)" > /var/lib/zookeeper/myid +elif [ -f "/etc/debian_version" ]; then + apt-get install -y -qq zookeeperd +else + echo "Unknown distribution!!" + lsb_release -a + exit 1 +fi + +Box "Starting zookeeper..." +service $zookeeperd restart +service $zookeeperd status diff --git a/tools/generate_states.sh b/tools/generate_states.sh index 2da75817..308c6400 100755 --- a/tools/generate_states.sh +++ b/tools/generate_states.sh @@ -30,3 +30,7 @@ $xsltproc $PWD/.diagram-tools/notugly.xsl /tmp/states.svg > $img_dir/engine_stat echo "---- Updating retry state diagram ----" python $script_dir/state_graph.py -r -f /tmp/states.svg $xsltproc $PWD/.diagram-tools/notugly.xsl /tmp/states.svg > $img_dir/retry_states.svg + +echo "---- Updating wbe request state diagram ----" +python $script_dir/state_graph.py -w -f /tmp/states.svg +$xsltproc $PWD/.diagram-tools/notugly.xsl /tmp/states.svg > $img_dir/wbe_request_states.svg diff --git a/tools/schema_generator.py b/tools/schema_generator.py new file mode 100755 index 00000000..3685a0a1 --- /dev/null +++ b/tools/schema_generator.py @@ -0,0 +1,83 @@ +#!/usr/bin/env python + +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import contextlib +import re + +import six +import tabulate + +from taskflow.persistence.backends import impl_sqlalchemy + +NAME_MAPPING = { + 'flowdetails': 'Flow details', + 'atomdetails': 'Atom details', + 'logbooks': 'Logbooks', +} +CONN_CONF = { + # This uses an in-memory database (aka nothing is written) + "connection": "sqlite://", +} +TABLE_QUERY = "SELECT name, sql FROM sqlite_master WHERE type='table'" +SCHEMA_QUERY = "pragma table_info(%s)" + + +def to_bool_string(val): + if isinstance(val, (int, bool)): + return six.text_type(bool(val)) + if not isinstance(val, six.string_types): + val = six.text_type(val) + if val.lower() in ('0', 'false'): + return 'False' + if val.lower() in ('1', 'true'): + return 'True' + raise ValueError("Unknown boolean input '%s'" % (val)) + + +def main(): + backend = impl_sqlalchemy.SQLAlchemyBackend(CONN_CONF) + with contextlib.closing(backend) as backend: + # Make the schema exist... + with contextlib.closing(backend.get_connection()) as conn: + conn.upgrade() + # Now make a prettier version of that schema... 
+ tables = backend.engine.execute(TABLE_QUERY) + table_names = [r[0] for r in tables] + for i, table_name in enumerate(table_names): + pretty_name = NAME_MAPPING.get(table_name, table_name) + print("*" + pretty_name + "*") + # http://www.sqlite.org/faq.html#q24 + table_name = table_name.replace("\"", "\"\"") + rows = [] + for r in backend.engine.execute(SCHEMA_QUERY % table_name): + # Cut out the numbers from things like VARCHAR(12) since + # this is not very useful to show users who just want to + # see the basic schema... + row_type = re.sub(r"\(.*?\)", "", r['type']).strip() + if not row_type: + raise ValueError("Row %s of table '%s' was empty after" + " cleaning" % (r['cid'], table_name)) + rows.append([r['name'], row_type, to_bool_string(r['pk'])]) + contents = tabulate.tabulate( + rows, headers=['Name', 'Type', 'Primary Key'], + tablefmt="rst") + print("\n%s" % contents.strip()) + if i + 1 != len(table_names): + print("") + + +if __name__ == '__main__': + main() diff --git a/tools/state_graph.py b/tools/state_graph.py index 77b85636..5ba9da7f 100755 --- a/tools/state_graph.py +++ b/tools/state_graph.py @@ -1,5 +1,19 @@ #!/usr/bin/env python +# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ import optparse import os import sys @@ -8,13 +22,58 @@ top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir)) sys.path.insert(0, top_dir) -import networkx as nx - # To get this installed you may have to follow: # https://code.google.com/p/pydot/issues/detail?id=93 (until fixed). import pydot +from taskflow.engines.action_engine import runner +from taskflow.engines.worker_based import protocol from taskflow import states +from taskflow.types import fsm + + +# This is just needed to get at the runner builder object (we will not +# actually be running it...). +class DummyRuntime(object): + def __init__(self): + self.analyzer = None + self.completer = None + self.scheduler = None + self.storage = None + + +def clean_event(name): + name = name.replace("_", " ") + name = name.strip() + return name + + +def make_machine(start_state, transitions, disallowed): + machine = fsm.FSM(start_state) + machine.add_state(start_state) + for (start_state, end_state) in transitions: + if start_state in disallowed or end_state in disallowed: + continue + if start_state not in machine: + machine.add_state(start_state) + if end_state not in machine: + machine.add_state(end_state) + # Make a fake event (not used anyway)... 
+ event = "on_%s" % (end_state) + machine.add_transition(start_state, end_state, event.lower()) + return machine + + +def map_color(internal_states, state): + if state in internal_states: + return 'blue' + if state == states.FAILURE: + return 'red' + if state == states.REVERTED: + return 'darkorange' + if state == states.SUCCESS: + return 'green' + return None def main(): @@ -33,6 +92,10 @@ def main(): action='store_true', help="use engine state transitions", default=False) + parser.add_option("-w", "--wbe-requests", dest="wbe_requests", + action='store_true', + help="use wbe request transitions", + default=False) parser.add_option("-T", "--format", dest="format", help="output in given format", default='svg') @@ -41,81 +104,91 @@ def main(): if options.filename is None: options.filename = 'states.%s' % options.format - types = [options.engines, options.retries, options.tasks] + types = [ + options.engines, + options.retries, + options.tasks, + options.wbe_requests, + ] if sum([int(i) for i in types]) > 1: - parser.error("Only one of task/retry/engines may be specified.") + parser.error("Only one of task/retry/engines/wbe requests" + " may be specified.") - disallowed = set() - start_node = states.PENDING + internal_states = list() + ordering = 'in' if options.tasks: - source = list(states._ALLOWED_TASK_TRANSITIONS) source_type = "Tasks" - disallowed.add(states.RETRYING) + source = make_machine(states.PENDING, + list(states._ALLOWED_TASK_TRANSITIONS), + [states.RETRYING]) elif options.retries: - source = list(states._ALLOWED_TASK_TRANSITIONS) source_type = "Retries" + source = make_machine(states.PENDING, + list(states._ALLOWED_TASK_TRANSITIONS), []) elif options.engines: - # TODO(harlowja): place this in states.py - source = [ - (states.RESUMING, states.SCHEDULING), - (states.SCHEDULING, states.WAITING), - (states.WAITING, states.ANALYZING), - (states.ANALYZING, states.SCHEDULING), - (states.ANALYZING, states.WAITING), - ] - for u in (states.SCHEDULING, 
states.ANALYZING): - for v in (states.SUSPENDED, states.SUCCESS, states.REVERTED): - source.append((u, v)) source_type = "Engines" - start_node = states.RESUMING + r = runner.Runner(DummyRuntime(), None) + source, memory = r.builder.build() + internal_states.extend(runner._META_STATES) + ordering = 'out' + elif options.wbe_requests: + source_type = "WBE requests" + source = make_machine(protocol.WAITING, + list(protocol._ALLOWED_TRANSITIONS), []) else: - source = list(states._ALLOWED_FLOW_TRANSITIONS) source_type = "Flow" - - transitions = nx.DiGraph() - for (u, v) in source: - if u not in disallowed: - transitions.add_node(u) - if v not in disallowed: - transitions.add_node(v) - for (u, v) in source: - if not transitions.has_node(u) or not transitions.has_node(v): - continue - transitions.add_edge(u, v) + source = make_machine(states.PENDING, + list(states._ALLOWED_FLOW_TRANSITIONS), []) graph_name = "%s states" % source_type g = pydot.Dot(graph_name=graph_name, rankdir='LR', nodesep='0.25', overlap='false', ranksep="0.5", size="11x8.5", - splines='true', ordering='in') + splines='true', ordering=ordering) node_attrs = { 'fontsize': '11', } nodes = {} - nodes_order = [] - edges_added = [] - for (u, v) in nx.bfs_edges(transitions, source=start_node): - if u not in nodes: - nodes[u] = pydot.Node(u, **node_attrs) - g.add_node(nodes[u]) - nodes_order.append(u) - if v not in nodes: - nodes[v] = pydot.Node(v, **node_attrs) - g.add_node(nodes[v]) - nodes_order.append(v) - for u in nodes_order: - for v in transitions.successors_iter(u): - if (u, v) not in edges_added: - g.add_edge(pydot.Edge(nodes[u], nodes[v])) - edges_added.append((u, v)) + for (start_state, on_event, end_state) in source: + on_event = clean_event(on_event) + if start_state not in nodes: + start_node_attrs = node_attrs.copy() + text_color = map_color(internal_states, start_state) + if text_color: + start_node_attrs['fontcolor'] = text_color + nodes[start_state] = pydot.Node(start_state, 
**start_node_attrs) + g.add_node(nodes[start_state]) + if end_state not in nodes: + end_node_attrs = node_attrs.copy() + text_color = map_color(internal_states, end_state) + if text_color: + end_node_attrs['fontcolor'] = text_color + nodes[end_state] = pydot.Node(end_state, **end_node_attrs) + g.add_node(nodes[end_state]) + if options.engines: + edge_attrs = { + 'label': on_event, + } + if 'reverted' in on_event: + edge_attrs['fontcolor'] = 'darkorange' + if 'fail' in on_event: + edge_attrs['fontcolor'] = 'red' + if 'success' in on_event: + edge_attrs['fontcolor'] = 'green' + else: + edge_attrs = {} + g.add_edge(pydot.Edge(nodes[start_state], nodes[end_state], + **edge_attrs)) + start = pydot.Node("__start__", shape="point", width="0.1", xlabel='start', fontcolor='green', **node_attrs) g.add_node(start) - g.add_edge(pydot.Edge(start, nodes[start_node], style='dotted')) + g.add_edge(pydot.Edge(start, nodes[source.start_state], style='dotted')) print("*" * len(graph_name)) print(graph_name) print("*" * len(graph_name)) + print(source.pformat()) print(g.to_string().strip()) g.write(options.filename, format=options.format) diff --git a/tox-tmpl.ini b/tox-tmpl.ini deleted file mode 100644 index 8ad9f332..00000000 --- a/tox-tmpl.ini +++ /dev/null @@ -1,113 +0,0 @@ -# NOTE(harlowja): this is a template, not a fully-generated tox.ini, use toxgen -# to translate this into a fully specified tox.ini file before using. Changes -# made to tox.ini will only be reflected if ran through the toxgen generator. 
- -[tox] -minversion = 1.6 -skipsdist = True - -[testenv] -usedevelop = True -install_command = pip install {opts} {packages} -setenv = VIRTUAL_ENV={envdir} -deps = -r{toxinidir}/test-requirements.txt - alembic>=0.4.1 - psycopg2 - kazoo>=1.3.1 - kombu>=2.4.8 -commands = python setup.py testr --slowest --testr-args='{posargs}' - -[tox:jenkins] -downloadcache = ~/cache/pip - -[testenv:pep8] -commands = flake8 {posargs} - -[testenv:pylint] -setenv = VIRTUAL_ENV={envdir} -deps = -r{toxinidir}/requirements-py2.txt - pylint==0.26.0 -commands = pylint --rcfile=pylintrc taskflow - -[testenv:cover] -basepython = python2.7 -deps = {[testenv:py27]deps} -commands = python setup.py testr --coverage --testr-args='{posargs}' - -[testenv:venv] -commands = {posargs} - -[flake8] -# H904 Wrap long lines in parentheses instead of a backslash -ignore = H904 -builtins = _ -exclude = .venv,.tox,dist,doc,./taskflow/openstack/common,*egg,.git,build,tools - -# NOTE(imelnikov): pyXY envs are considered to be default, so they must have -# richest set of test requirements -[testenv:py26] -deps = {[testenv:py26-sa7-mysql-ev]deps} - -[testenv:py27] -deps = -r{toxinidir}/requirements-py2.txt - -r{toxinidir}/optional-requirements.txt - -r{toxinidir}/test-requirements.txt - doc8>=0.3.4 -commands = - python setup.py testr --slowest --testr-args='{posargs}' - sphinx-build -b doctest doc/source doc/build - doc8 doc/source - -[testenv:py33] -deps = {[testenv]deps} - -r{toxinidir}/requirements-py3.txt - SQLAlchemy>=0.7.8,<=0.9.99 - -# NOTE(imelnikov): psycopg2 is not supported on pypy -[testenv:pypy] -deps = -r{toxinidir}/requirements-py2.txt - -r{toxinidir}/test-requirements.txt - SQLAlchemy>=0.7.8,<=0.9.99 - alembic>=0.4.1 - kazoo>=1.3.1 - kombu>=2.4.8 - -[axes] -python = py26,py27 -sqlalchemy = sa7,sa8,sa9 -mysql = mysql,pymysql -eventlet = ev,* - -[axis:python:py26] -basepython = python2.6 -deps = {[testenv]deps} - -r{toxinidir}/requirements-py2.txt - -[axis:python:py27] -basepython = python2.7 
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-
-[axis:eventlet:ev]
-deps =
-    eventlet>=0.13.0
-
-[axis:sqlalchemy:sa7]
-deps =
-    SQLAlchemy>=0.7.8,<=0.7.99
-
-[axis:sqlalchemy:sa8]
-deps =
-    SQLAlchemy>=0.8,<=0.8.99
-
-[axis:sqlalchemy:sa9]
-deps =
-    SQLAlchemy>=0.9,<=0.9.99
-
-[axis:mysql:mysql]
-deps =
-    MySQL-python
-
-[axis:mysql:pymysql]
-deps =
-    pyMySQL
diff --git a/tox.ini b/tox.ini
index 0283c14d..4ef1c335 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,51 +1,30 @@
-# DO NOT EDIT THIS FILE - it is machine generated from tox-tmpl.ini
-
 [tox]
 minversion = 1.6
 skipsdist = True
 envlist = cover,
+    docs,
     pep8,
     py26,
     py26-sa7-mysql,
-    py26-sa7-mysql-ev,
-    py26-sa7-pymysql,
-    py26-sa7-pymysql-ev,
-    py26-sa8-mysql,
-    py26-sa8-mysql-ev,
-    py26-sa8-pymysql,
-    py26-sa8-pymysql-ev,
-    py26-sa9-mysql,
-    py26-sa9-mysql-ev,
-    py26-sa9-pymysql,
-    py26-sa9-pymysql-ev,
     py27,
-    py27-sa7-mysql,
-    py27-sa7-mysql-ev,
-    py27-sa7-pymysql,
-    py27-sa7-pymysql-ev,
     py27-sa8-mysql,
-    py27-sa8-mysql-ev,
-    py27-sa8-pymysql,
-    py27-sa8-pymysql-ev,
-    py27-sa9-mysql,
-    py27-sa9-mysql-ev,
-    py27-sa9-pymysql,
-    py27-sa9-pymysql-ev,
     py33,
+    py34,
     pylint,
-    pypy
 
 [testenv]
 usedevelop = True
 install_command = pip install {opts} {packages}
 setenv = VIRTUAL_ENV={envdir}
 deps = -r{toxinidir}/test-requirements.txt
-       alembic>=0.4.1
-       psycopg2
-       kazoo>=1.3.1
-       kombu>=2.4.8
 commands = python setup.py testr --slowest --testr-args='{posargs}'
 
+[testenv:docs]
+basepython = python2.7
+deps = {[testenv:py27]deps}
+commands = python setup.py build_sphinx
+    doc8 doc/source
+
 [tox:jenkins]
 downloadcache = ~/cache/pip
 
@@ -61,24 +40,42 @@ commands = pylint --rcfile=pylintrc taskflow
 [testenv:cover]
 basepython = python2.7
 deps = {[testenv:py27]deps}
+       coverage>=3.6
 commands = python setup.py testr --coverage --testr-args='{posargs}'
 
 [testenv:venv]
+basepython = python2.7
+deps = {[testenv:py27]deps}
 commands = {posargs}
 
 [flake8]
+# H904 Wrap long lines in parentheses instead of a backslash
 ignore = H904
 builtins = _
 exclude = .venv,.tox,dist,doc,./taskflow/openstack/common,*egg,.git,build,tools
 
+[hacking]
+import_exceptions = six.moves
+    taskflow.test.mock
+    unittest.mock
+
+# NOTE(imelnikov): pyXY envs are considered to be default, so they must have
+# richest set of test requirements
 [testenv:py26]
-deps = {[testenv:py26-sa7-mysql-ev]deps}
+basepython = python2.6
+deps = {[testenv]deps}
+       -r{toxinidir}/requirements-py2.txt
+       MySQL-python
+       eventlet>=0.15.1
+       SQLAlchemy>=0.7.8,<=0.8.99
 
 [testenv:py27]
-deps = -r{toxinidir}/requirements-py2.txt
-       -r{toxinidir}/optional-requirements.txt
-       -r{toxinidir}/test-requirements.txt
-       doc8>=0.3.4
+deps = {[testenv]deps}
+       -r{toxinidir}/requirements-py2.txt
+       MySQL-python
+       eventlet>=0.15.1
+       SQLAlchemy>=0.7.8,<=0.9.99
+       doc8
 commands =
     python setup.py testr --slowest --testr-args='{posargs}'
     sphinx-build -b doctest doc/source doc/build
@@ -88,192 +85,24 @@ commands =
 deps = {[testenv]deps}
        -r{toxinidir}/requirements-py3.txt
        SQLAlchemy>=0.7.8,<=0.9.99
+       PyMySQL>=0.6.2
 
-[testenv:pypy]
-deps = -r{toxinidir}/requirements-py2.txt
-       -r{toxinidir}/test-requirements.txt
-       SQLAlchemy>=0.7.8,<=0.9.99
-       alembic>=0.4.1
-       kazoo>=1.3.1
-       kombu>=2.4.8
-
-[testenv:py26-sa7-mysql-ev]
+[testenv:py34]
 deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.7.8,<=0.7.99
-       MySQL-python
-       eventlet>=0.13.0
-basepython = python2.6
+       -r{toxinidir}/requirements-py3.txt
+       SQLAlchemy>=0.7.8,<=0.9.99
+       PyMySQL>=0.6.2
 
 [testenv:py26-sa7-mysql]
+basepython = python2.6
 deps = {[testenv]deps}
        -r{toxinidir}/requirements-py2.txt
        SQLAlchemy>=0.7.8,<=0.7.99
        MySQL-python
-basepython = python2.6
-
-[testenv:py26-sa7-pymysql-ev]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.7.8,<=0.7.99
-       pyMySQL
-       eventlet>=0.13.0
-basepython = python2.6
-
-[testenv:py26-sa7-pymysql]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.7.8,<=0.7.99
-       pyMySQL
-basepython = python2.6
-
-[testenv:py26-sa8-mysql-ev]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.8,<=0.8.99
-       MySQL-python
-       eventlet>=0.13.0
-basepython = python2.6
-
-[testenv:py26-sa8-mysql]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.8,<=0.8.99
-       MySQL-python
-basepython = python2.6
-
-[testenv:py26-sa8-pymysql-ev]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.8,<=0.8.99
-       pyMySQL
-       eventlet>=0.13.0
-basepython = python2.6
-
-[testenv:py26-sa8-pymysql]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.8,<=0.8.99
-       pyMySQL
-basepython = python2.6
-
-[testenv:py26-sa9-mysql-ev]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.9,<=0.9.99
-       MySQL-python
-       eventlet>=0.13.0
-basepython = python2.6
-
-[testenv:py26-sa9-mysql]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.9,<=0.9.99
-       MySQL-python
-basepython = python2.6
-
-[testenv:py26-sa9-pymysql-ev]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.9,<=0.9.99
-       pyMySQL
-       eventlet>=0.13.0
-basepython = python2.6
-
-[testenv:py26-sa9-pymysql]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.9,<=0.9.99
-       pyMySQL
-basepython = python2.6
-
-[testenv:py27-sa7-mysql-ev]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.7.8,<=0.7.99
-       MySQL-python
-       eventlet>=0.13.0
-basepython = python2.7
-
-[testenv:py27-sa7-mysql]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.7.8,<=0.7.99
-       MySQL-python
-basepython = python2.7
-
-[testenv:py27-sa7-pymysql-ev]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.7.8,<=0.7.99
-       pyMySQL
-       eventlet>=0.13.0
-basepython = python2.7
-
-[testenv:py27-sa7-pymysql]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.7.8,<=0.7.99
-       pyMySQL
-basepython = python2.7
-
-[testenv:py27-sa8-mysql-ev]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.8,<=0.8.99
-       MySQL-python
-       eventlet>=0.13.0
-basepython = python2.7
 
 [testenv:py27-sa8-mysql]
+basepython = python2.7
 deps = {[testenv]deps}
        -r{toxinidir}/requirements-py2.txt
        SQLAlchemy>=0.8,<=0.8.99
        MySQL-python
-basepython = python2.7
-
-[testenv:py27-sa8-pymysql-ev]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.8,<=0.8.99
-       pyMySQL
-       eventlet>=0.13.0
-basepython = python2.7
-
-[testenv:py27-sa8-pymysql]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.8,<=0.8.99
-       pyMySQL
-basepython = python2.7
-
-[testenv:py27-sa9-mysql-ev]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.9,<=0.9.99
-       MySQL-python
-       eventlet>=0.13.0
-basepython = python2.7
-
-[testenv:py27-sa9-mysql]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.9,<=0.9.99
-       MySQL-python
-basepython = python2.7
-
-[testenv:py27-sa9-pymysql-ev]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.9,<=0.9.99
-       pyMySQL
-       eventlet>=0.13.0
-basepython = python2.7
-
-[testenv:py27-sa9-pymysql]
-deps = {[testenv]deps}
-       -r{toxinidir}/requirements-py2.txt
-       SQLAlchemy>=0.9,<=0.9.99
-       pyMySQL
-basepython = python2.7
-