Fix 20 typos on devref

specifiy    => specify
mulit-part  => multi-part
analagous   => analogous
driver.intialize_connection => driver.initialize_connection
succesfully => successfully
Analagous   => Analogous
responsiblity => responsibility
standaredized => standardized
replciation => replication
speicfied   => specified
desireable  => desirable
occurr      => occur
transfered  => transferred
migraton    => migration
streching   => stretching
Documenation => Documentation

Change-Id: Id531e35457f592cccd963ae8b7d50553c7ffb62d
Atsushi SAKAI 2016-04-22 18:24:27 +09:00
parent cd03ed0c4d
commit efac539b91
5 changed files with 20 additions and 20 deletions


@@ -12,7 +12,7 @@ the API without breaking users who don't specifically ask for it. This
is done with an HTTP header ``OpenStack-API-Version`` which
is a monotonically increasing semantic version number starting from
``3.0``. Each service that uses microversions will share this header, so
-the Volume service will need to specifiy ``volume``:
+the Volume service will need to specify ``volume``:
``OpenStack-API-Version: volume 3.0``
If a user makes a request without specifying a version, they will get
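The header convention this hunk describes can be illustrated with a small parser. This is a hypothetical helper for illustration only, not Cinder's actual implementation:

```python
def parse_api_version(header_value, service="volume"):
    """Parse an ``OpenStack-API-Version`` header value such as ``volume 3.0``.

    Returns the (major, minor) tuple requested for ``service``, or None if
    the header targets a different service.
    """
    parts = header_value.strip().split()
    if len(parts) != 2 or parts[0] != service:
        return None
    major, minor = parts[1].split(".")
    return (int(major), int(minor))

# A client asking the Volume service for microversion 3.0:
print(parse_api_version("volume 3.0"))   # (3, 0)
# A header addressed to another service is ignored by the Volume service:
print(parse_api_version("compute 2.1"))  # None
```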


@@ -21,22 +21,22 @@ trying to work on Cinder. The convention is actually quite simple, although
it may be difficult to decipher from the code.
-Attach/Detach Operations are mulit-part commands
+Attach/Detach Operations are multi-part commands
================================================
There are three things that happen in the workflow for an attach or detach call.
1. Update the status of the volume in the DB (ie attaching/detaching)
- For Attach, this is the cinder.volume.api.reserve method
-  - For Detach, the analagous call is cinder.volume.api.begin_detaching
+  - For Detach, the analogous call is cinder.volume.api.begin_detaching
2. Handle the connection operations that need to be done on the Volume
- For Attach, this is the cinder.volume.api.initialize_connection method
-  - For Detach, the analagous calls is cinder.volume.api.terminate_connection
+  - For Detach, the analogous calls is cinder.volume.api.terminate_connection
3. Finalize the status of the volume and release the resource
- For attach, this is the cinder.volume.api.attach method
-  - For detach, the analagous call is cinder.volume.api.detach
+  - For detach, the analogous call is cinder.volume.api.detach
Attach workflow
===============
@@ -99,7 +99,7 @@ form the response data in the parent request.
We call this infor the model_update and it's used to update vital target
information associated with the volume in the Cinder database.
-driver.intialize_connection
+driver.initialize_connection
***************************
Now that we've actually built a target and persisted the important
@@ -128,7 +128,7 @@ attach(self, context, volume, instance_uuid, host_name, mount_point, mode)
This is the last call that *should* be pretty simple. The intent is that this
is simply used to finalize the attach process. In other words, we simply
update the status on the Volume in the database, and provide a mechanism to
-notify the driver that the attachment has completed succesfully.
+notify the driver that the attachment has completed successfully.
There's some additional information that has been added to this finalize call
over time like instance_uuid, host_name etc. Some of these are only provided
@@ -142,13 +142,13 @@ Detach workflow
begin_detaching(self, context, volume)
--------------------------------------
-Analagous to the Attach workflows ``reserve_volume`` method.
+Analogous to the Attach workflows ``reserve_volume`` method.
Performs a simple conditional update of Volume status to ``detaching``.
terminate_connection(self, context, volume, connector, force=False)
-------------------------------------------------------------------
-Analagous to the Attach workflows ``initialize_connection`` method.
+Analogous to the Attach workflows ``initialize_connection`` method.
Used to send calls down to drivers/target-drivers to do any sort of cleanup
they might require.
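The three-phase workflows covered by this file's hunks can be sketched as status transitions on a volume record. This is an illustrative toy: the function names mirror the `cinder.volume.api` methods named in the devref, but the bodies are stand-ins for what are really conditional DB updates and driver calls:

```python
class Volume:
    """Toy volume record carrying only a status field."""
    def __init__(self):
        self.status = "available"

def reserve(vol):                 # attach step 1: mark volume "attaching" in the DB
    assert vol.status == "available"
    vol.status = "attaching"

def initialize_connection(vol):   # attach step 2: connection setup on the backend
    assert vol.status == "attaching"

def attach(vol):                  # attach step 3: finalize status
    vol.status = "in-use"

def begin_detaching(vol):         # detach step 1: mark volume "detaching"
    assert vol.status == "in-use"
    vol.status = "detaching"

def terminate_connection(vol):    # detach step 2: backend/target cleanup
    assert vol.status == "detaching"

def detach(vol):                  # detach step 3: finalize status
    vol.status = "available"

v = Volume()
reserve(v); initialize_connection(v); attach(v)
print(v.status)  # in-use
begin_detaching(v); terminate_connection(v); detach(v)
print(v.status)  # available
```

The asserts stand in for the conditional updates that make each phase refuse to run out of order.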


@@ -27,7 +27,7 @@ ALL replication configurations are expected to work by using the same
driver. In other words, rather than trying to perform any magic
by changing host entries in the DB for a Volume etc, all replication
targets are considered "unmanged" BUT if a failover is issued, it's
-the drivers responsiblity to access replication volumes on the replicated
+the drivers responsibility to access replication volumes on the replicated
backend device.
This results in no changes for the end-user. For example, He/She can
@@ -53,7 +53,7 @@ like to configure.
*NOTE:*
-There is one standaredized and REQUIRED key in the config
+There is one standardized and REQUIRED key in the config
entry, all others are vendor-unique:
* backend_id:<vendor-identifier-for-rep-target>
@@ -106,7 +106,7 @@ as requested. While the scoping key can be anything, it's strongly recommended
backends utilize the same key (replication) for consistency and to make things easier for
the Cloud Administrator.
-Additionally it's expected that if a backend is configured with 3 replciation
+Additionally it's expected that if a backend is configured with 3 replication
targets, that if a volume of type replication=enabled is issued against that
backend then it will replicate to ALL THREE of the configured targets.
@@ -154,11 +154,11 @@ type should now be unavailable.
NOTE: We do not expect things like create requests to go to the driver and
magically create volumes on the replication target. The concept is that the
backend is lost, and we're just providing a DR mechanism to preserve user data
-for volumes that were speicfied as such via type settings.
+for volumes that were specified as such via type settings.
**freeze_backend**
Puts a backend host/service into a R/O state for the control plane. For
-example if a failover is issued, it is likely desireable that while data access
+example if a failover is issued, it is likely desirable that while data access
to existing volumes is maintained, it likely would not be wise to continue
doing things like creates, deletes, extends etc.
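The ``freeze_backend`` semantics described in this hunk can be sketched as a guard on control-plane operations. This is a hypothetical model, not the real volume-manager code: a frozen backend keeps serving data access to existing volumes but rejects creates, deletes, and extends:

```python
class FrozenBackendError(Exception):
    pass

class Backend:
    """Toy backend that can be put into a control-plane read-only state."""

    def __init__(self):
        self.frozen = False
        self.volumes = {}

    def freeze(self):
        self.frozen = True

    def create_volume(self, name):
        # Control-plane write: refused while frozen.
        if self.frozen:
            raise FrozenBackendError("backend is frozen; creates are disabled")
        self.volumes[name] = "available"

    def read_volume(self, name):
        # Data access to existing volumes is still allowed while frozen.
        return self.volumes[name]

b = Backend()
b.create_volume("vol1")
b.freeze()
print(b.read_volume("vol1"))  # available  (reads survive the freeze)
try:
    b.create_volume("vol2")   # raises FrozenBackendError
except FrozenBackendError as exc:
    print(exc)
```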


@@ -42,7 +42,7 @@ compatibility at the end of the release - we should keep things compatible
through the whole release.
To achieve compatibility, discipline is required from the developers. There are
-several planes on which incompatibility may occurr:
+several planes on which incompatibility may occur:
* **REST API changes** - these are prohibited by definition and this document
will not describe the subject. For further information one may use `API
@@ -55,7 +55,7 @@ several planes on which incompatibility may occurr:
(assuming N has no notion of the column).
* **Database data migrations** - if a migration requires big amount of data to
-  be transfered between columns or tables or converted, it will most likely
+  be transferred between columns or tables or converted, it will most likely
lock the tables. This may cause services to be unresponsive, causing the
downtime.
@@ -85,7 +85,7 @@ Adding a column
This is the simplest case - we don't have any requirements when adding a new
column apart from the fact that it should be added as the last one in the
-table. If that's covered, the DB engine will make sure the migraton won't be
+table. If that's covered, the DB engine will make sure the migration won't be
disruptive.
Dropping a column not referenced in SQLAlchemy code
@@ -127,7 +127,7 @@ create a new column with desired properties and start moving the data (in a
live manner). In worst case old column can be removed in N+2. Whole procedure
is described in more details below.
-In aforementioned case we need to make more complicated steps streching through
+In aforementioned case we need to make more complicated steps stretching through
3 releases - always keeping the backwards compatibility. In short when we want
to start to move data inside the DB, then in N we should:
@@ -211,7 +211,7 @@ service::
cctxt.cast(ctxt, 'create_volume', **msg_args)
As can be seen there's this magic :code:`self.client.can_send_version()` method
-which detects if we're running in a version-heterogenious environment and need
+which detects if we're running in a version-heterogeneous environment and need
to downgrade the message. Detection is based on dynamic RPC version pinning. In
general all the services (managers) report supported RPC API version. RPC API
client gets all the versions from the DB, chooses the lowest one and starts to
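The pinning logic around ``can_send_version`` described in this hunk can be sketched with a stub client. The names are assumptions for illustration; oslo.messaging's real RPC client differs in detail:

```python
class PinnedClient:
    """Stub RPC client pinned to the lowest version reported by services."""

    def __init__(self, pinned_version):
        self.pinned = tuple(int(x) for x in pinned_version.split("."))

    def can_send_version(self, version):
        # A message version is sendable only if no service in the cloud
        # is pinned below it.
        return tuple(int(x) for x in version.split(".")) <= self.pinned

def build_create_volume_args(client, request_spec):
    # Hypothetical example: newer services accept request_spec in the
    # message, so it must be dropped when talking to an older pin.
    msg_args = {"volume_id": "vol-1"}
    if client.can_send_version("2.0"):
        msg_args["request_spec"] = request_spec
    return msg_args

old = PinnedClient("1.8")   # heterogeneous cloud: one service still on 1.8
new = PinnedClient("2.3")   # all services upgraded
print("request_spec" in build_create_volume_args(old, {}))  # False
print("request_spec" in build_create_volume_args(new, {}))  # True
```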


@@ -45,7 +45,7 @@ This script is a wrapper around the testr testrunner and the flake8 checker. Not
there has been talk around deprecating this wrapper and this method of testing, it's currently
available still but it may be good to get used to using tox or even ostestr directly.
-Documenation is left in place for those that still use it.
+Documentation is left in place for those that still use it.
Flags
-----