Adjust doc linking

Instead of linking to other topics (which then uses
the name of those topics as the link name) define a
name that makes more sense for the inline usage and
retain the link to the document using wording that
fits the surrounding text.

Also adjust the futures/executor links to point to the
external links documenting these features.

Change-Id: I5a89e2f747dfec2505947f25c124b157271c07cf
Joshua Harlow
2014-05-03 23:55:25 -07:00
parent f97217c7f7
commit 7d441555ef
7 changed files with 45 additions and 39 deletions

View File

@@ -9,12 +9,12 @@ Atom Arguments and Results
.. |retry.revert| replace:: :py:meth:`~taskflow.retry.Retry.revert`
In taskflow, all flow and task state goes to (potentially persistent) storage.
-That includes all the information that atoms (e.g. tasks) in the flow need when
-they are executed, and all the information task produces (via serializable task
-results). A developer who implements tasks or flows can specify what arguments
-a task accepts and what result it returns in several ways. This document will
-help you understand what those ways are and how to use those ways to accomplish
-your desired TaskFlow usage pattern.
+That includes all the information that :doc:`atoms <atoms>` (e.g. tasks) in the
+flow need when they are executed, and all the information a task produces (via
+serializable task results). A developer who implements tasks or flows can specify
+what arguments a task accepts and what result it returns in several ways. This
+document will help you understand what those ways are and how to use those ways
+to accomplish your desired usage pattern.
.. glossary::

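As a quick illustration of one of those ways (a hedged sketch, not part of this change; ``BuildGreeting`` is an invented example class), a task can take its required arguments from its ``execute()`` signature and name its result via ``default_provides``::

    from taskflow import task


    class BuildGreeting(task.Task):
        # 'default_provides' names the result so later atoms can require it.
        default_provides = 'greeting'

        def execute(self, name):
            # 'name' is a required input, inferred from this signature.
            return "Hello, %s!" % name
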
View File

@@ -5,8 +5,8 @@ Atoms, Tasks and Retries
An atom is the smallest unit in taskflow which acts as the base for other
classes. Atoms have a name and a version (if applicable). An atom is expected
to name desired input values (requirements) and name outputs (provided
-values), see :doc:`arguments_and_results` page for a complete reference
-about these inputs and outputs.
+values), see the :doc:`arguments and results <arguments_and_results>` page for
+a complete reference about these inputs and outputs.
.. automodule:: taskflow.atom

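For instance (an illustrative sketch, not taken from this diff), the requirements can be inferred from ``execute()`` while the provided name is attached when the atom is constructed::

    from taskflow import task


    class Multiply(task.Task):
        def execute(self, x, y):  # 'x' and 'y' become inferred requirements
            return x * y


    # 'provides' names the output value for any downstream atoms to require.
    multiply = Multiply(provides='product')
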
View File

@@ -7,8 +7,8 @@ Overview
Engines are what **really** runs your atoms.
-An *engine* takes a flow structure (described by :doc:`patterns`) and uses it to
-decide which :doc:`atom <atoms>` to run and when.
+An *engine* takes a flow structure (described by :doc:`patterns <patterns>`) and
+uses it to decide which :doc:`atom <atoms>` to run and when.
TaskFlow provides different implementations of engines. Some may be easier to
use (ie, require no additional infrastructure setup) and understand; others
@@ -152,11 +152,11 @@ Parallel engine schedules tasks onto different threads to run them in parallel.
Additional configuration parameters:
-* ``executor``: a class that provides ``concurrent.futures.Executor``-like
+* ``executor``: an object that implements a :pep:`3148` compatible `executor`_
interface; it will be used for scheduling tasks. You can use instances
-of ``concurrent.futures.ThreadPoolExecutor`` or
-``taskflow.utils.eventlet_utils.GreenExecutor`` (which internally uses
-`eventlet <http://eventlet.net/>`_ and greenthread pools).
+of a `thread pool executor`_ or a
+:py:class:`green executor <taskflow.utils.eventlet_utils.GreenExecutor>`
+(which internally uses `eventlet <http://eventlet.net/>`_ and greenthread pools).
.. tip::
@@ -166,8 +166,7 @@ Additional configuration parameters:
.. note::
-Running tasks with ``concurrent.futures.ProcessPoolExecutor`` is not
-supported now.
+Running tasks with a `process pool executor`_ is not currently supported.
Worker-Based
------------
@@ -291,10 +290,6 @@ exceptions). If no failures have occurred then the engine will have finished and
if so desired the :doc:`persistence <persistence>` can be used to cleanup any
details that were saved for this execution.
-.. _future: https://docs.python.org/dev/library/concurrent.futures.html#future-objects
-.. _executor: https://docs.python.org/dev/library/concurrent.futures.html#concurrent.futures.Executor
-.. _networkx: https://networkx.github.io/
Interfaces
==========
@@ -311,3 +306,9 @@ Hierarchy
taskflow.engines.action_engine.engine
taskflow.engines.worker_based.engine
:parts: 1
+.. _future: https://docs.python.org/dev/library/concurrent.futures.html#future-objects
+.. _executor: https://docs.python.org/dev/library/concurrent.futures.html#concurrent.futures.Executor
+.. _networkx: https://networkx.github.io/
+.. _thread pool executor: https://docs.python.org/dev/library/concurrent.futures.html#threadpoolexecutor
+.. _process pool executor: https://docs.python.org/dev/library/concurrent.futures.html#processpoolexecutor

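To make the ``executor`` option concrete, here is a hedged sketch; the ``engine`` and ``executor`` keyword names have shifted between TaskFlow releases, so treat them as assumptions rather than a fixed API::

    import concurrent.futures

    import taskflow.engines
    from taskflow import task
    from taskflow.patterns import unordered_flow


    class Sleepy(task.Task):
        def execute(self):
            print("running %s" % self.name)


    flow = unordered_flow.Flow('demo').add(Sleepy('a'), Sleepy('b'))

    # Any PEP 3148 compatible executor should be usable here.
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        taskflow.engines.run(flow, engine='parallel', executor=executor)
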
View File

@@ -1,9 +1,12 @@
TaskFlow
========
-TaskFlow is a Python library for OpenStack that helps make task execution easy, consistent, and reliable.
+*TaskFlow is a Python library for OpenStack that helps make task execution
+easy, consistent, and reliable.*
-TaskFlow documentation is hosted on wiki: https://wiki.openstack.org/wiki/TaskFlow
+.. note::
+    Additional documentation is also hosted on wiki: https://wiki.openstack.org/wiki/TaskFlow
Contents
========

View File

@@ -4,19 +4,20 @@ Inputs and Outputs
In TaskFlow there are multiple ways to provide inputs for your tasks and flows
and get information from them. This document describes one of them, that
-involves task arguments and results. There are also :doc:`notifications`, which
-allow you to get notified when task or flow changed state. You may also opt to
-use :doc:`persistence` directly.
+involves task arguments and results. There are also
+:doc:`notifications <notifications>`, which allow you to get notified when a task
+or flow changes state. You may also opt to use the :doc:`persistence <persistence>`
+layer itself directly.
-----------------------
Flow Inputs and Outputs
-----------------------
Tasks accept inputs via task arguments and provide outputs via task results
-(see :doc:`arguments_and_results` for more details). This is the standard and
-recommended way to pass data from one task to another. Of course not every task
-argument needs to be provided to some other task of a flow, and not every task
-result should be consumed by every task.
+(see :doc:`arguments and results <arguments_and_results>` for more details). This
+is the standard and recommended way to pass data from one task to another. Of
+course not every task argument needs to be provided to some other task of a
+flow, and not every task result should be consumed by every task.
If some value is required by one or more tasks of a flow, but is not provided
by any task, it is considered to be flow input, and **must** be put into the
@@ -62,8 +63,8 @@ As you can see, this flow does not require b, as it is provided by the first task
Engine and Storage
------------------
-The storage layer is how an engine persists flow and task details. For more
-in-depth design details see :doc:`persistence`.
+The storage layer is how an engine persists flow and task details (for more
+in-depth details see :doc:`persistence <persistence>`).
Inputs
------

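As a small sketch of seeding such flow inputs (the ``store`` keyword is assumed from common TaskFlow usage), values that no task provides are injected into storage before the engine runs::

    import taskflow.engines
    from taskflow import task
    from taskflow.patterns import linear_flow


    class Add(task.Task):
        default_provides = 'total'

        def execute(self, x, y):
            return x + y


    flow = linear_flow.Flow('sum').add(Add())

    # 'x' and 'y' are flow inputs: no task provides them, so they must be
    # placed into storage up front via 'store'.
    results = taskflow.engines.run(flow, store={'x': 2, 'y': 3})
    print(results)  # expected: {'x': 2, 'y': 3, 'total': 5}
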
View File

@@ -21,7 +21,7 @@ To receive these notifications you should register a callback in
Each engine provides two of them: one notifies about flow state changes,
and another notifies about changes of tasks.
-TaskFlow also has a set of predefined :ref:`listeners`, and provides
+TaskFlow also has a set of predefined :ref:`listeners <listeners>`, and provides
means to write your own listeners, which can be more convenient than
using raw callbacks.

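For example (a hedged sketch; the ``notifier`` and ``task_notifier`` attribute names are assumptions that have varied between TaskFlow releases), a raw callback can be registered for every state change::

    import taskflow.engines
    from taskflow import task
    from taskflow.patterns import linear_flow


    class Noop(task.Task):
        def execute(self):
            pass


    def on_change(state, details):
        # Invoked on each state transition with a details dictionary.
        print("%s -> %s" % (details, state))


    engine = taskflow.engines.load(linear_flow.Flow('watched').add(Noop()))
    engine.notifier.register('*', on_change)       # flow state changes
    engine.task_notifier.register('*', on_change)  # task state changes
    engine.run()
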
View File

@@ -303,13 +303,14 @@ Limitations
===========
* Atoms inside a flow must receive and accept parameters only from the ways
-defined in :doc:`persistence`. In other words, the task that is created when
-a workflow is constructed will not be the same task that is executed on a
-remote worker (and any internal state not passed via the
-:doc:`inputs_and_outputs` mechanism can not be transferred). This means
-resource objects (database handles, file descriptors, sockets, ...) can **not**
-be directly sent across to remote workers (instead the configuration that
-defines how to fetch/create these objects must be instead).
+defined in :doc:`persistence <persistence>`. In other words, the task
+that is created when a workflow is constructed will not be the same task that
+is executed on a remote worker (and any internal state not passed via the
+:doc:`input and output <inputs_and_outputs>` mechanism can not be
+transferred). This means resource objects (database handles, file
+descriptors, sockets, ...) can **not** be directly sent across to remote
+workers (instead the configuration that defines how to fetch/create these
+objects must be sent instead; see the sketch after this list).
* Worker-based engines will in the future be able to run lightweight tasks
locally to avoid transport overhead for very simple tasks (currently it will
run even lightweight tasks remotely, which may be non-performant).
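A hedged sketch of the workaround described in the first limitation above (``sqlalchemy`` here is illustrative, not a TaskFlow requirement): ship plain configuration to the worker and let the task rebuild the resource where it actually executes::

    from taskflow import task


    class ProbeDatabase(task.Task):
        # Receives a plain connection string, never a live handle: sockets
        # and database connections cannot be serialized to a remote worker.

        def execute(self, db_url):
            import sqlalchemy  # assumed importable on the worker

            engine = sqlalchemy.create_engine(db_url)
            with engine.connect() as conn:  # created where it runs
                return conn.execute(sqlalchemy.text("SELECT 1")).scalar()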