Fix documentation spelling errors

Change-Id: Ic5a06195cba7c27ff7664fcb8e8b514d7dc31cb7
Stanislav Kudriashev
2014-04-23 23:42:44 +02:00
parent 8732e4772d
commit ef72c4dcde
3 changed files with 15 additions and 15 deletions

View File

@@ -111,7 +111,7 @@ operates.
 .. note::
     This engine is under active development and is experimental but it is
-    useable and does work but is missing some features (please check the
+    usable and does work but is missing some features (please check the
     `blueprint page`_ for known issues and plans) that will make it more
     production ready.

View File

@@ -24,12 +24,12 @@ Features
 - High availability
-- Guarantees workflow forward progress by transfering partially completed work
-  or work that has not been started to entities which can either resume the
-  previously partially completed work or begin initial work to ensure that
-  the workflow as a whole progresses (where progressing implies transitioning
-  through the workflow :doc:`patterns <patterns>` and :doc:`atoms <atoms>`
-  and completing their associated state transitions).
+- Guarantees workflow forward progress by transferring partially complete
+  work or work that has not been started to entities which can either resume
+  the previously partially completed work or begin initial work to ensure
+  that the workflow as a whole progresses (where progressing implies
+  transitioning through the workflow :doc:`patterns <patterns>` and
+  :doc:`atoms <atoms>` and completing their associated state transitions).
 - Atomic transfer and single ownership
@@ -46,7 +46,7 @@ Features
 - Jobs can be created with logbooks that contain a specification of the work
   to be done by a entity (such as an API server). The job then can be
-  completed by a entity that is watching that jobboard (not neccasarily the
+  completed by a entity that is watching that jobboard (not necessarily the
   API server itself). This creates a disconnection between work
   formation and work completion that is useful for scaling out horizontally.
@@ -111,7 +111,7 @@ of a job) might look like:
 Consumption of jobs is similarly achieved by creating a jobboard and using
 the iteration functionality to find and claim jobs (and eventually consume
-them). The typical usage of a joboard for consumption (and work completion)
+them). The typical usage of a jobboard for consumption (and work completion)
 might look like:
 
 .. code-block:: python
@@ -188,9 +188,9 @@ Additional *configuration* parameters:
 * ``timeout``: the timeout used when performing operations with zookeeper;
   only used if a client is not provided.
 * ``handler``: a class that provides ``kazoo.handlers``-like interface; it will
-  be used internally by `kazoo`_ to perform asynchronous operations, useful when
-  your program uses eventlet and you want to instruct kazoo to use an eventlet
-  compatible handler (such as the `eventlet handler`_).
+  be used internally by `kazoo`_ to perform asynchronous operations, useful
+  when your program uses eventlet and you want to instruct kazoo to use an
+  eventlet compatible handler (such as the `eventlet handler`_).
 
 Job Interface
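The jobboard life cycle this file documents (post a job carrying a work specification, then let some other entity iterate, claim, and consume it) can be sketched with a toy in-memory stand-in. This is not taskflow's real API: ``ToyJobBoard``, ``Job``, and their methods are hypothetical names used only to illustrate the separation between work formation and work completion, and the atomic-claim and single-ownership guarantees, that a real (Zookeeper-backed) jobboard provides.

```python
# Toy in-memory stand-in for a jobboard (NOT taskflow's API); names are
# hypothetical and exist only to show the post/claim/consume life cycle.

class Job:
    def __init__(self, name, details):
        self.name = name
        self.details = details  # the "specification of the work" to be done
        self.owner = None       # single ownership: at most one claimant

class ToyJobBoard:
    def __init__(self):
        self._jobs = []

    def post(self, name, details):
        # Work formation: e.g. an API server records what needs doing.
        job = Job(name, details)
        self._jobs.append(job)
        return job

    def iterjobs(self):
        # Consumers iterate over a snapshot to find work to claim.
        return iter(list(self._jobs))

    def claim(self, job, who):
        # Atomic transfer: only an unowned job may be claimed.
        if job.owner is not None:
            raise RuntimeError("job already owned by %s" % job.owner)
        job.owner = who

    def consume(self, job, who):
        # Only the current owner may complete (consume) the job.
        if job.owner != who:
            raise RuntimeError("%s does not own this job" % who)
        self._jobs.remove(job)

board = ToyJobBoard()
board.post("resize-vm", {"vm": "instance-42"})

# Work completion happens elsewhere: any watching entity may claim the job.
for job in board.iterjobs():
    board.claim(job, "worker-1")
    # ... run the workflow described by job.details ...
    board.consume(job, "worker-1")

print(len(list(board.iterjobs())))  # 0 -- the posted job was consumed
```

The point of the sketch is the hand-off: the poster never needs to know which worker finishes the job, which is what makes scaling out horizontally possible.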

View File

@@ -5,10 +5,10 @@ Persistence
 Overview
 ========
-In order to be able to recieve inputs and create outputs from atoms (or other
+In order to be able to receive inputs and create outputs from atoms (or other
 engine processes) in a fault-tolerant way, there is a need to be able to place
 what atoms output in some kind of location where it can be re-used by other
-atoms (or used for other purposes). To accomodate this type of usage taskflow
+atoms (or used for other purposes). To accommodate this type of usage taskflow
 provides an abstraction (provided by pluggable `stevedore`_ backends) that is
 similar in concept to a running programs *memory*.
@@ -39,7 +39,7 @@ How it is used
 ==============
 On :doc:`engine <engines>` construction typically a backend (it can be optional)
-will be provided which satisifies the :py:class:`~taskflow.persistence.backends.base.Backend`
+will be provided which satisfies the :py:class:`~taskflow.persistence.backends.base.Backend`
 abstraction. Along with providing a backend object a :py:class:`~taskflow.persistence.logbook.FlowDetail`
 object will also be created and provided (this object will contain the details about
 the flow to be ran) to the engine constructor (or associated :py:meth:`load() <taskflow.engines.helpers.load>` helper functions).
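The persistence idea described in this file (atom outputs are written to a backend so that later atoms, or a resumed engine in another process, can read them back) can be sketched with a toy stand-in. ``MemoryBackend`` and ``run_atom`` below are hypothetical names, with a plain dict standing in for a real pluggable stevedore backend; they are not taskflow's classes.

```python
# Toy sketch of the persistence abstraction (NOT taskflow's API): atom
# outputs go into a backend so other atoms -- or a resumed engine -- can
# re-use them as inputs.

class MemoryBackend:
    """Stands in for a real pluggable backend (the running program's 'memory')."""
    def __init__(self):
        self._storage = {}

    def save(self, atom_name, result):
        self._storage[atom_name] = result

    def fetch(self, atom_name):
        return self._storage[atom_name]

def run_atom(backend, name, func, *dep_names):
    # Inputs are received from the backend and outputs are placed back into
    # it; because results live outside the process, another entity could
    # resume from here after a failure.
    inputs = [backend.fetch(d) for d in dep_names]
    backend.save(name, func(*inputs))

backend = MemoryBackend()
run_atom(backend, "fetch_image", lambda: "image-123")
run_atom(backend, "boot_vm", lambda img: "vm-using-" + img, "fetch_image")
print(backend.fetch("boot_vm"))  # vm-using-image-123
```

In real taskflow the same flow of data goes through the ``Backend`` abstraction and the ``FlowDetail`` object mentioned above rather than a bare dict, but the fault-tolerance story is the same: what an atom produced survives the process that produced it.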