Fix a couple of spelling and grammar errors

Some things that popped out while reading the
comments/documentation.

Change-Id: I0ccecae3381447ede44bb855d91f997349be1562
Rick van de Loo 2015-03-28 20:13:21 +11:00
parent 179854e3ee
commit 5544d71bc8
29 changed files with 53 additions and 53 deletions


@ -416,7 +416,7 @@ the following history (printed as a list)::
At this point (since the implementation returned ``RETRY``) the
|retry.execute| method will be called again and it will receive the same
history and it can then return a value that subsequent tasks can use to alter
-there behavior.
+their behavior.
If instead the |retry.execute| method itself raises an exception,
the |retry.revert| method of the implementation will be called and


@ -91,7 +91,7 @@ subclasses are provided:
.. note::
They are *similar* to exception handlers but are made to be *more* capable
-due to there ability to *dynamically* choose a reconciliation strategy,
+due to their ability to *dynamically* choose a reconciliation strategy,
which allows for these atoms to influence subsequent execution(s) and the
inputs any associated atoms require.
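A rough feel for what "dynamically choosing a reconciliation strategy" means can be given in plain Python. This is an illustrative sketch only — `decide`, its thresholds and the exception types are invented here, and the strategy names merely echo taskflow's retry decision constants:

```python
# Illustrative sketch (not taskflow's real classes): unlike a static
# exception handler, a retry/reverter atom can inspect the failure
# history it receives and pick a strategy per-failure.
RETRY, REVERT, REVERT_ALL = "RETRY", "REVERT", "REVERT_ALL"

def decide(history):
    """Pick a reconciliation strategy from the failures seen so far."""
    attempts = len(history)
    last_failure = history[-1] if history else None
    if isinstance(last_failure, TimeoutError) and attempts < 3:
        return RETRY        # looks transient: run the subflow again
    if attempts < 5:
        return REVERT       # undo just the failing atom
    return REVERT_ALL       # give up: roll the whole flow back

print(decide([TimeoutError()]))    # RETRY
print(decide([ValueError()] * 6))  # REVERT_ALL
```

Because the decision is computed rather than fixed, subsequent executions can also be fed different inputs — which is the extra capability the note above is pointing at.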


@ -29,7 +29,7 @@ Why they exist
An engine being *the* core component which actually makes your flows progress
is likely a new concept for many programmers so let's describe how it operates
in more depth and some of the reasoning behind why it exists. This will
-hopefully make it more clear on there value add to the TaskFlow library user.
+hopefully make it more clear on their value add to the TaskFlow library user.
First though let us discuss something most are familiar already with; the
difference between `declarative`_ and `imperative`_ programming models. The
@ -57,7 +57,7 @@ declarative model) allows for the following functionality to become possible:
accomplished allows for a *natural* way of resuming by allowing the engine to
track the current state and know at which point a workflow is in and how to
get back into that state when resumption occurs.
-* Enhancing scalability: When a engine is responsible for executing your
+* Enhancing scalability: When an engine is responsible for executing your
desired work it becomes possible to alter the *how* in the future by creating
new types of execution backends (for example the `worker`_ model which does
not execute locally). Without the decoupling of the *what* and the *how* it
@ -203,7 +203,7 @@ For further information, please refer to the the following:
How they run
============
-To provide a peek into the general process that a engine goes through when
+To provide a peek into the general process that an engine goes through when
running let's break it apart a little and describe what one of the engine types
does while executing (for this we will look into the
:py:class:`~taskflow.engines.action_engine.engine.ActionEngine` engine type).
@ -299,7 +299,7 @@ Scheduling
This stage selects which atoms are eligible to run by using a
:py:class:`~taskflow.engines.action_engine.scheduler.Scheduler` implementation
-(the default implementation looks at there intention, checking if predecessor
+(the default implementation looks at their intention, checking if predecessor
atoms have ran and so-on, using a
:py:class:`~taskflow.engines.action_engine.analyzer.Analyzer` helper
object as needed) and submits those atoms to a previously provided compatible
@ -335,7 +335,7 @@ above stages will be restarted and resuming will occur).
If the engine is suspended while the engine is going through the above
stages this will stop any further scheduling stages from occurring and
-all currently executing atoms will be allowed to finish (and there results
+all currently executing atoms will be allowed to finish (and their results
will be saved).
Finishing
@ -366,7 +366,7 @@ be selected?
Default strategy
----------------
-When a engine is executing it internally interacts with the
+When an engine is executing it internally interacts with the
:py:class:`~taskflow.storage.Storage` class
and that class interacts with the
:py:class:`~taskflow.engines.action_engine.scopes.ScopeWalker` instance


@ -43,7 +43,7 @@ Jobboards
jobboards implement the same interface and semantics so that the backend
usage is as transparent as possible. This allows deployers or developers of a
service that uses TaskFlow to select a jobboard implementation that fits
-their setup (and there intended usage) best.
+their setup (and their intended usage) best.
High level architecture
=======================
@ -218,7 +218,7 @@ Dual-engine jobs
----------------
**What:** Since atoms and engines are not currently `preemptable`_ we can not
-force a engine (or the threads/remote workers... it is using to run) to stop
+force an engine (or the threads/remote workers... it is using to run) to stop
working on an atom (it is generally bad behavior to force code to stop without
its consent anyway) if it has already started working on an atom (short of
doing a ``kill -9`` on the running interpreter). This could cause problems


@ -70,7 +70,7 @@ from a previous run) they will begin executing only after any dependent inputs
are ready. This is done by analyzing the execution graph and looking at
predecessor :py:class:`~taskflow.persistence.logbook.AtomDetail` outputs and
states (which may have been persisted in a past run). This will result in
-either using there previous information or by running those predecessors and
+either using their previous information or by running those predecessors and
saving their output to the :py:class:`~taskflow.persistence.logbook.FlowDetail`
and :py:class:`~taskflow.persistence.base.Backend` objects. This
execution, analysis and interaction with the storage objects continues (what is
@ -81,7 +81,7 @@ will have succeeded or failed in its attempt to run the workflow).
**Post-execution:** Typically when an engine is done running the logbook would
be discarded (to avoid creating a stockpile of useless data) and the backend
storage would be told to delete any contents for a given execution. For certain
-use-cases though it may be advantageous to retain logbooks and there contents.
+use-cases though it may be advantageous to retain logbooks and their contents.
A few scenarios come to mind:


@ -121,7 +121,7 @@ or if needed will wait for all of the atoms it depends on to complete.
.. note::
-A engine running a task also transitions the task to the ``PENDING`` state
+An engine running a task also transitions the task to the ``PENDING`` state
after it was reverted and its containing flow was restarted or retried.
**RUNNING** - When an engine running the task starts to execute the task, the
@ -168,10 +168,10 @@ flow that the retry is associated with by consulting its
.. note::
-A engine running a retry also transitions the retry to the ``PENDING`` state
+An engine running a retry also transitions the retry to the ``PENDING`` state
after it was reverted and its associated flow was restarted or retried.
-**RUNNING** - When a engine starts to execute the retry, the engine
+**RUNNING** - When an engine starts to execute the retry, the engine
transitions the retry to the ``RUNNING`` state, and the retry stays in this
state until its :py:meth:`~taskflow.retry.Retry.execute` method returns.
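The transitions described for a retry can be condensed into a small sketch. Plain Python only — `run_retry` and the outcome states are illustrative helpers, not the engine's internals:

```python
# Sketch of the engine-side state walk for a retry (hypothetical helper):
# PENDING after (re)start, RUNNING while execute() is in progress, then
# an outcome state depending on whether execute() returned or raised.
PENDING, RUNNING, SUCCESS, FAILURE = "PENDING", "RUNNING", "SUCCESS", "FAILURE"

def run_retry(execute):
    states = [PENDING]          # waiting to be selected by the engine
    states.append(RUNNING)      # engine invoked execute()
    try:
        execute()
        states.append(SUCCESS)  # execute() returned a value
    except Exception:
        states.append(FAILURE)  # execute() raised
    return states

print(run_retry(lambda: None))  # ['PENDING', 'RUNNING', 'SUCCESS']
```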


@ -409,7 +409,7 @@ Limitations
* Fault detection, currently when a worker acknowledges a task the engine will
wait for the task result indefinitely (a task may take an indeterminate
amount of time to finish). In the future there needs to be a way to limit
-the duration of a remote workers execution (and track there liveness) and
+the duration of a remote workers execution (and track their liveness) and
possibly spawn the task on a secondary worker if a timeout is reached (aka
the first worker has died or has stopped responding).
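The bounded-wait idea this limitation calls for can be sketched with stdlib futures. Everything below is invented for illustration (`run_with_failover` is not a taskflow API); it just shows a deadline plus failover instead of an indefinite wait:

```python
import threading
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutTimeout

def run_with_failover(task, workers, timeout):
    """Try each worker pool in turn, waiting at most `timeout` for a result."""
    for worker in workers:
        future = worker.submit(task)
        try:
            return future.result(timeout=timeout)  # bounded, not indefinite
        except FutTimeout:
            continue            # presume the worker dead; try the next one
    raise RuntimeError("no worker finished in time")

slow = ThreadPoolExecutor(max_workers=1)
fast = ThreadPoolExecutor(max_workers=1)
gate = threading.Event()
slow.submit(gate.wait)          # keep the 'slow' worker stuck, like a hung host
result = run_with_failover(lambda: 42, [slow, fast], timeout=0.5)
gate.set()                      # unstick the slow worker so shutdown is clean
print(result)                   # 42, from the secondary worker
```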


@ -12,7 +12,7 @@ variable-rgx=[a-z_][a-z0-9_]{0,30}$
argument-rgx=[a-z_][a-z0-9_]{1,30}$
# Method names should be at least 3 characters long
-# and be lowecased with underscores
+# and be lowercased with underscores
method-rgx=[a-z_][a-z0-9_]{2,50}$
# Don't require docstrings on tests.


@ -38,7 +38,7 @@ from taskflow import task
# In this example we show how a simple linear set of tasks can be executed
-# using local processes (and not threads or remote workers) with minimial (if
+# using local processes (and not threads or remote workers) with minimal (if
# any) modification to those tasks to make them safe to run in this mode.
#
# This is useful since it allows further scaling up your workflows when thread


@ -38,7 +38,7 @@ ANY = notifier.Notifier.ANY
import example_utils as eu # noqa
-# INTRO: This examples shows how a graph flow and linear flow can be used
+# INTRO: This example shows how a graph flow and linear flow can be used
# together to execute dependent & non-dependent tasks by going through the
# steps required to build a simplistic car (an assembly line if you will). It
# also shows how raw functions can be wrapped into a task object instead of


@ -30,7 +30,7 @@ from taskflow.patterns import linear_flow as lf
from taskflow.patterns import unordered_flow as uf
from taskflow import task
-# INTRO: This examples shows how a linear flow and a unordered flow can be
+# INTRO: These examples show how a linear flow and an unordered flow can be
# used together to execute calculations in parallel and then use the
# result for the next task/s. The adder task is used for all calculations
# and argument bindings are used to set correct parameters for each task.


@ -35,7 +35,7 @@ from taskflow.listeners import printing
from taskflow.patterns import unordered_flow as uf
from taskflow import task
-# INTRO: This examples shows how unordered_flow can be used to create a large
+# INTRO: These examples show how unordered_flow can be used to create a large
# number of fake volumes in parallel (or serially, depending on a constant that
# can be easily changed).


@ -31,8 +31,8 @@ from taskflow.patterns import linear_flow as lf
from taskflow import task
# INTRO: This example walks through a miniature workflow which will do a
-# simple echo operation; during this execution a listener is assocated with
-# the engine to recieve all notifications about what the flow has performed,
+# simple echo operation; during this execution a listener is associated with
+# the engine to receive all notifications about what the flow has performed,
# this example dumps that output to the stdout for viewing (at debug level
# to show all the information which is possible).


@ -36,8 +36,8 @@ from taskflow.patterns import linear_flow as lf
from taskflow import task
from taskflow.utils import misc
-# INTRO: This example walks through a miniature workflow which simulates a
-# the reception of a API request, creation of a database entry, driver
+# INTRO: This example walks through a miniature workflow which simulates
+# the reception of an API request, creation of a database entry, driver
# activation (which invokes a 'fake' webservice) and final completion.
#
# This example also shows how a function/object (in this class the url sending)


@ -34,7 +34,7 @@ from taskflow.utils import eventlet_utils
# INTRO: This is the defacto hello world equivalent for taskflow; it shows how
-# a overly simplistic workflow can be created that runs using different
+# an overly simplistic workflow can be created that runs using different
# engines using different styles of execution (all can be used to run in
# parallel if a workflow is provided that is parallelizable).


@ -40,7 +40,7 @@ from taskflow.utils import threading_utils
# In this example we show how a jobboard can be used to post work for other
# entities to work on. This example creates a set of jobs using one producer
# thread (typically this would be split across many machines) and then having
-# other worker threads with there own jobboards select work using a given
+# other worker threads with their own jobboards select work using a given
# filters [red/blue] and then perform that work (and consuming or abandoning
# the job after it has been completed or failed).
@ -66,7 +66,7 @@ PRODUCER_UNITS = 10
# How many units of work are expected to be produced (used so workers can
# know when to stop running and shutdown, typically this would not be a
-# a value but we have to limit this examples execution time to be less than
+# a value but we have to limit this example's execution time to be less than
# infinity).
EXPECTED_UNITS = PRODUCER_UNITS * PRODUCERS


@ -68,15 +68,15 @@ class ByeTask(task.Task):
print("Bye!")
-# This generates your flow structure (at this stage nothing is ran).
+# This generates your flow structure (at this stage nothing is run).
def make_flow(blowup=False):
flow = lf.Flow("hello-world")
flow.add(HiTask(), ByeTask(blowup))
return flow
-# Persist the flow and task state here, if the file/dir exists already blowup
-# if not don't blowup, this allows a user to see both the modes and to see
+# Persist the flow and task state here, if the file/dir exists already blow up
+# if not don't blow up, this allows a user to see both the modes and to see
# what is stored in each case.
if eu.SQLALCHEMY_AVAILABLE:
persist_path = os.path.join(tempfile.gettempdir(), "persisting.db")
@ -91,8 +91,8 @@ else:
blowup = True
with eu.get_backend(backend_uri) as backend:
-# Make a flow that will blowup if the file doesn't exist previously, if it
-# did exist, assume we won't blowup (and therefore this shows the undo
+# Make a flow that will blow up if the file didn't exist previously, if it
+# did exist, assume we won't blow up (and therefore this shows the undo
# and redo that a flow will go through).
book = logbook.LogBook("my-test")
flow = make_flow(blowup=blowup)


@ -44,7 +44,7 @@ from taskflow.utils import persistence_utils as p_utils
import example_utils as eu # noqa
-# INTRO: This examples shows how a hierarchy of flows can be used to create a
+# INTRO: These examples show how a hierarchy of flows can be used to create a
# vm in a reliable & resumable manner using taskflow + a miniature version of
# what nova does while booting a vm.


@ -39,7 +39,7 @@ from taskflow.utils import persistence_utils as p_utils
import example_utils # noqa
-# INTRO: This examples shows how a hierarchy of flows can be used to create a
+# INTRO: These examples show how a hierarchy of flows can be used to create a
# pseudo-volume in a reliable & resumable manner using taskflow + a miniature
# version of what cinder does while creating a volume (very miniature).


@ -32,7 +32,7 @@ from taskflow import task
# INTRO: In this example we create a retry controller that receives a phone
# directory and tries different phone numbers. The next task tries to call Jim
-# using the given number. If if is not a Jim's number, the tasks raises an
+# using the given number. If it is not a Jim's number, the task raises an
# exception and retry controller takes the next number from the phone
# directory and retries the call.
#


@ -37,7 +37,7 @@ from taskflow import task
from taskflow.utils import persistence_utils
-# INTRO: This examples shows how to run a set of engines at the same time, each
+# INTRO: This example shows how to run a set of engines at the same time, each
# running in different engines using a single thread of control to iterate over
# each engine (which causes that engine to advanced to its next state during
# each iteration).


@ -33,10 +33,10 @@ from taskflow.persistence import backends as persistence_backends
from taskflow import task
from taskflow.utils import persistence_utils
-# INTRO: This examples shows how to run a engine using the engine iteration
+# INTRO: These examples show how to run an engine using the engine iteration
# capability, in between iterations other activities occur (in this case a
# value is output to stdout); but more complicated actions can occur at the
-# boundary when a engine yields its current state back to the caller.
+# boundary when an engine yields its current state back to the caller.
class EchoNameTask(task.Task):


@ -41,8 +41,8 @@ from taskflow import task
# taskflow provides via tasks and flows makes it possible for you to easily at
# a later time hook in a persistence layer (and then gain the functionality
# that offers) when you decide the complexity of adding that layer in
-# is 'worth it' for your applications usage pattern (which certain applications
-# may not need).
+# is 'worth it' for your application's usage pattern (which certain
+# applications may not need).
class CallJim(task.Task):


@ -37,7 +37,7 @@ ANY = notifier.Notifier.ANY
# a given ~phone~ number (provided as a function input) in a linear fashion
# (one after the other).
#
-# For a workflow which is serial this shows a extremely simple way
+# For a workflow which is serial this shows an extremely simple way
# of structuring your tasks (the code that does the work) into a linear
# sequence (the flow) and then passing the work off to an engine, with some
# initial data to be ran in a reliable manner.
@ -92,7 +92,7 @@ engine = taskflow.engines.load(flow, store={
})
# This is where we attach our callback functions to the 2 different
-# notification objects that a engine exposes. The usage of a '*' (kleene star)
+# notification objects that an engine exposes. The usage of a '*' (kleene star)
# here means that we want to be notified on all state changes, if you want to
# restrict to a specific state change, just register that instead.
engine.notifier.register(ANY, flow_watch)
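The registration calls above rely on simple observer semantics; a tiny hypothetical stand-in (not `taskflow.types.notifier` itself) shows what registering under ANY buys you compared to a specific state:

```python
# Toy notifier (illustrative only): callbacks registered under ANY fire
# for every state change, others only for their specific state.
ANY = "*"

class MiniNotifier:
    def __init__(self):
        self._listeners = []

    def register(self, state, callback):
        self._listeners.append((state, callback))

    def notify(self, state, details):
        for wanted, cb in self._listeners:
            if wanted in (ANY, state):
                cb(state, details)

seen = []
n = MiniNotifier()
n.register(ANY, lambda s, d: seen.append(("any", s)))
n.register("SUCCESS", lambda s, d: seen.append(("success-only", s)))
n.notify("RUNNING", {})
n.notify("SUCCESS", {})
print(seen)  # [('any', 'RUNNING'), ('any', 'SUCCESS'), ('success-only', 'SUCCESS')]
```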


@ -31,7 +31,7 @@ from taskflow import engines
from taskflow.patterns import linear_flow
from taskflow import task
-# INTRO: This examples shows how a task (in a linear/serial workflow) can
+# INTRO: This example shows how a task (in a linear/serial workflow) can
# produce an output that can be then consumed/used by a downstream task.


@ -27,9 +27,9 @@ top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__),
sys.path.insert(0, top_dir)
sys.path.insert(0, self_dir)
-# INTRO: this examples shows a simplistic map/reduce implementation where
+# INTRO: These examples show a simplistic map/reduce implementation where
# a set of mapper(s) will sum a series of input numbers (in parallel) and
-# return there individual summed result. A reducer will then use those
+# return their individual summed result. A reducer will then use those
# produced values and perform a final summation and this result will then be
# printed (and verified to ensure the calculation was as expected).
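For comparison, the same map/reduce shape can be sketched with stdlib executors alone (illustrative, not the taskflow version this example file implements):

```python
# Map/reduce with stdlib primitives: mappers sum slices in parallel,
# a reducer then sums the mapper outputs.
from concurrent.futures import ThreadPoolExecutor

def mapper(chunk):
    return sum(chunk)                  # each mapper sums its own slice

def map_reduce(numbers, chunks=3):
    size = max(1, len(numbers) // chunks)
    slices = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(mapper, slices))  # "map" phase, in parallel
    return sum(partials)                           # "reduce" phase

print(map_reduce(list(range(1, 11))))  # 55, same as sum(range(1, 11))
```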


@ -36,7 +36,7 @@ from taskflow import task
# and have variable run time tasks run and show how the listener will print
# out how long those tasks took (when they started and when they finished).
#
-# This shows how timing metrics can be gathered (or attached onto a engine)
+# This shows how timing metrics can be gathered (or attached onto an engine)
# after a workflow has been constructed, making it easy to gather metrics
# dynamically for situations where this kind of information is applicable (or
# even adding this information on at a later point in the future when your


@ -36,10 +36,10 @@ from taskflow.utils import threading_utils
ANY = notifier.Notifier.ANY
-# INTRO: This examples shows how to use a remote workers event notification
+# INTRO: These examples show how to use a remote worker's event notification
# attribute to proxy back task event notifications to the controlling process.
#
-# In this case a simple set of events are triggered by a worker running a
+# In this case a simple set of events is triggered by a worker running a
# task (simulated to be remote by using a kombu memory transport and threads).
# Those events that the 'remote worker' produces will then be proxied back to
# the task that the engine is running 'remotely', and then they will be emitted
@ -113,10 +113,10 @@ if __name__ == "__main__":
workers = []
# These topics will be used to request worker information on; those
-# workers will respond with there capabilities which the executing engine
+# workers will respond with their capabilities which the executing engine
# will use to match pending tasks to a matched worker, this will cause
# the task to be sent for execution, and the engine will wait until it
-# is finished (a response is recieved) and then the engine will either
+# is finished (a response is received) and then the engine will either
# continue with other tasks, do some retry/failure resolution logic or
# stop (and potentially re-raise the remote workers failure)...
worker_topics = []


@ -111,11 +111,11 @@ def calculate(engine_conf):
# an image bitmap file.
# And unordered flow is used here since the mandelbrot calculation is an
-# example of a embarrassingly parallel computation that we can scatter
+# example of an embarrassingly parallel computation that we can scatter
# across as many workers as possible.
flow = uf.Flow("mandelbrot")
-# These symbols will be automatically given to tasks as input to there
+# These symbols will be automatically given to tasks as input to their
# execute method, in this case these are constants used in the mandelbrot
# calculation.
store = {