Various doc formatting cleanups

* Get rid of a bunch of accidental blockquote formatting
* Always declare a lexer to use for ``.. code::`` blocks

Change-Id: I8940e75b094843e542e815dde6b6be4740751813
Tim Burke 2022-08-02 11:51:24 -07:00
parent 7e5c78423f
commit 52a4fe37aa
28 changed files with 784 additions and 757 deletions


@@ -324,14 +324,14 @@ In order to prevent rsync replication to specific drives, firstly
 setup ``rsync_module`` per disk in your ``object-replicator``.
 Set this in ``object-server.conf``:

-.. code::
+.. code:: cfg

     [object-replicator]
     rsync_module = {replication_ip}::object_{device}

 Set the individual drives in ``rsync.conf``. For example:

-.. code::
+.. code:: cfg

     [object_sda]
     max connections = 4
@@ -387,7 +387,7 @@ monitoring solution to achieve this. The following is an example script:
 For the above script to work, ensure ``/etc/rsync.d/`` conf files are
 included, by specifying ``&include`` in your ``rsync.conf`` file:

-.. code::
+.. code:: cfg

     &include /etc/rsync.d
@@ -395,7 +395,7 @@ Use this in conjunction with a cron job to periodically run the script, for exam
 .. highlight:: none

-.. code::
+.. code:: cfg

     # /etc/cron.d/devicecheck
     * * * * * root /some/path/to/disable_rsync.py
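
The disabling script itself is not shown in this hunk, but the mechanism it relies on is simple: per the ``rsyncd.conf`` man page, a negative ``max connections`` value disables a module. A minimal sketch of a drop-in file such a script might write (the filename here is hypothetical):

.. code:: cfg

    # /etc/rsync.d/disable_object_sda.conf -- hypothetical drop-in written by the script
    [object_sda]
    max connections = -1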


@@ -12,13 +12,13 @@ that does not support the ``/info`` request.
 To use the ``/info`` request, send a **GET** request using the ``/info``
 path to the Object Store endpoint as shown in this example:

-.. code::
+.. code:: console

     # curl https://storage.clouddrive.com/info

 This example shows a truncated response body:

-.. code::
+.. code:: console

     {
        "swift":{


@@ -29,9 +29,8 @@ The format of the form **POST** request is:
 **Example 1.14. Form POST format**

-.. code::
+.. code:: xml

-    <![CDATA[
     <form action="SWIFT_URL"
           method="POST"
           enctype="multipart/form-data">
@@ -44,7 +43,6 @@ The format of the form **POST** request is:
       <br/>
       <input type="submit"/>
     </form>
-    ]]>

 **action="SWIFT_URL"**
@@ -53,14 +51,14 @@ Set to full URL where the objects are to be uploaded. The names of
 uploaded files are appended to the specified *SWIFT_URL*. So, you
 can upload directly to the root of a container with a URL like:

-.. code::
+.. code:: none

     https://swift-cluster.example.com/v1/my_account/container/

 Optionally, you can include an object prefix to separate uploads, such
 as:

-.. code::
+.. code:: none

     https://swift-cluster.example.com/v1/my_account/container/OBJECT_PREFIX
@@ -123,7 +121,7 @@ follow the file attributes are ignored.
 Optionally, if you want the uploaded files to be temporary you can set x-delete-at or x-delete-after attributes by adding one of these as a form input:

-.. code::
+.. code:: xml

     <input type="hidden" name="x_delete_at" value="<unix-timestamp>" />
     <input type="hidden" name="x_delete_after" value="<seconds>" />
@@ -169,7 +167,7 @@ The following example code generates a signature for use with form
 **Example 1.15. HMAC-SHA1 signature for form POST**

-.. code::
+.. code:: python

     import hmac
     from hashlib import sha1
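
The hunk cuts Example 1.15 off after the imports; for context, the signed string is the path, redirect, max_file_size, max_file_count, and expires values joined by newlines. A sketch along those lines, with illustrative values:

.. code:: python

    import hmac
    from hashlib import sha1

    path = '/v1/my_account/container/object_prefix'
    redirect = 'https://example.com/done.html'
    max_file_size = 104857600
    max_file_count = 10
    expires = 1503124957
    key = 'MYKEY'
    hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect, max_file_size,
                                        max_file_count, expires)
    # On Python 3, encode key and hmac_body to bytes before calling hmac.new().
    signature = hmac.new(key, hmac_body, sha1).hexdigest()
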
@@ -198,13 +196,13 @@ being uploaded is called ``flower.jpg``.
 This example uses the **swift-form-signature** script to compute the
 ``expires`` and ``signature`` values.

-.. code::
+.. code:: console

     $ bin/swift-form-signature /v1/my_account/container/photos/ https://example.com/done.html 5373952000 1 200 MYKEY
     Expires: 1390825338
     Signature: 35129416ebda2f1a21b3c2b8939850dfc63d8f43

-.. code::
+.. code:: console

     $ curl -i https://swift-cluster.example.com/v1/my_account/container/photos/ -X POST \
          -F max_file_size=5373952000 -F max_file_count=1 -F expires=1390825338 \


@@ -82,7 +82,7 @@ This example shows three segment objects. You can use several containers
 and the object names do not have to conform to a specific pattern, in
 contrast to dynamic large objects.

-.. code::
+.. code:: json

     [
         {
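
The manifest itself is truncated in this hunk; for a sense of its shape, a static large object manifest is a JSON list of segment descriptors, roughly like the following (container, names, and etags illustrative):

.. code:: json

    [
        {
            "path": "/segments_container/segment_1",
            "etag": "etagoftheobjectsegment_1",
            "size_bytes": 1048576
        },
        {
            "path": "/segments_container/segment_2",
            "etag": "etagoftheobjectsegment_2",
            "size_bytes": 1048576
        }
    ]
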
@@ -192,7 +192,7 @@ manifest is still available to download the first set of segments.
 **Example Upload segment of large object request: HTTP**

-.. code::
+.. code:: none

     PUT /{api_version}/{account}/{container}/{object} HTTP/1.1
     Host: storage.clouddrive.com
@@ -214,7 +214,7 @@ uploading the manifest.
 **Example Upload next segment of large object request: HTTP**

-.. code::
+.. code:: none

     PUT /{api_version}/{account}/{container}/{object} HTTP/1.1
     Host: storage.clouddrive.com
@@ -232,7 +232,7 @@ subsequent additional segments.
 **Example Upload manifest request: HTTP**

-.. code::
+.. code:: none

     PUT /{api_version}/{account}/{container}/{object} HTTP/1.1
     Host: storage.clouddrive.com
@@ -244,7 +244,7 @@ subsequent additional segments.
 **Example Upload manifest response: HTTP**

-.. code::
+.. code:: none

     [...]


@@ -97,14 +97,14 @@ interact with the Object Storage API.
 Specifically, the resource path reflects this structure and has this
 format:

-.. code::
+.. code:: none

     /v1/{account}/{container}/{object}

 For example, for the ``flowers/rose.jpg`` object in the ``images``
 container in the ``12345678912345`` account, the resource path is:

-.. code::
+.. code:: none

     /v1/12345678912345/images/flowers/rose.jpg
@@ -133,7 +133,7 @@ parameter ``reverse``, noting that your marker and end_markers should be
 switched when applied to a reverse listing. I.e, for a list of objects
 ``[a, b, c, d, e]`` the non-reversed could be:

-.. code::
+.. code:: none

     /v1/{account}/{container}/?marker=a&end_marker=d
     b
@@ -141,7 +141,7 @@ switched when applied to a reverse listing. I.e, for a list of objects
 However, when reversed marker and end_marker are applied to a reversed list:

-.. code::
+.. code:: none

     /v1/{account}/{container}/?marker=d&end_marker=a&reverse=on
     c
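
Both listings are cut off by the hunk boundaries above. Since ``marker`` and ``end_marker`` are both exclusive, the first request should presumably return ``b`` then ``c``, and the reversed one ``c`` then ``b``, roughly:

.. code:: none

    /v1/{account}/{container}/?marker=a&end_marker=d
    b
    c

    /v1/{account}/{container}/?marker=d&end_marker=a&reverse=on
    c
    b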


@@ -46,37 +46,37 @@ container, overwriting the current version.
 Example Using ``X-Versions-Location``
 -------------------------------------

 #. Create the ``current`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/current -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token" -H "X-Versions-Location: archive"

-   .. code::
+   .. code:: console

       HTTP/1.1 201 Created
       Content-Length: 0
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: txb91810fb717347d09eec8-0052e18997
       X-Openstack-Request-Id: txb91810fb717347d09eec8-0052e18997
       Date: Thu, 23 Jan 2014 21:28:55 GMT

 #. Create the first version of an object in the ``current`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/current/my_object --data-binary 1 -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 201 Created
       Last-Modified: Thu, 23 Jan 2014 21:31:22 GMT
       Content-Length: 0
       Etag: d41d8cd98f00b204e9800998ecf8427e
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: tx5992d536a4bd4fec973aa-0052e18a2a
       X-Openstack-Request-Id: tx5992d536a4bd4fec973aa-0052e18a2a
       Date: Thu, 23 Jan 2014 21:31:22 GMT

    Nothing is written to the non-current version container when you
    initially **PUT** an object in the ``current`` container. However,
@@ -85,9 +85,9 @@ Example Using ``X-Versions-Location``
    These non-current versions are named as follows:

-   .. code::
+   .. code:: none

       <length><object_name>/<timestamp>

    Where ``length`` is the 3-character, zero-padded hexadecimal
    character length of the object, ``<object_name>`` is the object name,
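
For illustration, an archive name following this scheme could be computed with a one-line helper (hypothetical function; ``%03x`` produces the zero-padded hex length, matching the ``009my_object/...`` entries later in this file):

.. code:: python

    def archive_name(object_name, timestamp):
        # len('my_object') == 9 -> '009', giving '009my_object/1390512682.92052'
        return '%03x%s/%s' % (len(object_name), object_name, timestamp)
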
@@ -96,20 +96,20 @@ Example Using ``X-Versions-Location``
 #. Create a second version of the object in the ``current`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/current/my_object --data-binary 2 -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 201 Created
       Last-Modified: Thu, 23 Jan 2014 21:41:32 GMT
       Content-Length: 0
       Etag: d41d8cd98f00b204e9800998ecf8427e
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: tx468287ce4fc94eada96ec-0052e18c8c
       X-Openstack-Request-Id: tx468287ce4fc94eada96ec-0052e18c8c
       Date: Thu, 23 Jan 2014 21:41:32 GMT

 #. Issue a **GET** request to a versioned object to get the current
    version of the object. You do not have to do any request redirects or
@@ -117,24 +117,24 @@ Example Using ``X-Versions-Location``
    List older versions of the object in the ``archive`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/archive?prefix=009my_object -X GET -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 200 OK
       Content-Length: 30
       X-Container-Object-Count: 1
       Accept-Ranges: bytes
       X-Timestamp: 1390513280.79684
       X-Container-Bytes-Used: 0
       Content-Type: text/plain; charset=utf-8
       X-Trans-Id: tx9a441884997542d3a5868-0052e18d8e
       X-Openstack-Request-Id: tx9a441884997542d3a5868-0052e18d8e
       Date: Thu, 23 Jan 2014 21:45:50 GMT

       009my_object/1390512682.92052

    .. note::

       A **POST** request to a versioned object updates only the metadata
@@ -145,38 +145,38 @@ Example Using ``X-Versions-Location``
    current version of the object and replace it with the next-most
    current version in the non-current container.

-   .. code::
+   .. code:: console

       # curl -i $publicURL/current/my_object -X DELETE -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 204 No Content
       Content-Length: 0
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: tx006d944e02494e229b8ee-0052e18edd
       X-Openstack-Request-Id: tx006d944e02494e229b8ee-0052e18edd
       Date: Thu, 23 Jan 2014 21:51:25 GMT

    List objects in the ``archive`` container to show that the archived
    object was moved back to the ``current`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/archive?prefix=009my_object -X GET -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 204 No Content
       Content-Length: 0
       X-Container-Object-Count: 0
       Accept-Ranges: bytes
       X-Timestamp: 1390513280.79684
       X-Container-Bytes-Used: 0
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: tx044f2a05f56f4997af737-0052e18eed
       X-Openstack-Request-Id: tx044f2a05f56f4997af737-0052e18eed
       Date: Thu, 23 Jan 2014 21:51:41 GMT

    This next-most current version carries with it any metadata last set
    on it. If want to completely remove an object and you have five
@@ -185,37 +185,37 @@ Example Using ``X-Versions-Location``
 Example Using ``X-History-Location``
 ------------------------------------

 #. Create the ``current`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/current -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token" -H "X-History-Location: archive"

-   .. code::
+   .. code:: console

       HTTP/1.1 201 Created
       Content-Length: 0
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: txb91810fb717347d09eec8-0052e18997
       X-Openstack-Request-Id: txb91810fb717347d09eec8-0052e18997
       Date: Thu, 23 Jan 2014 21:28:55 GMT

 #. Create the first version of an object in the ``current`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/current/my_object --data-binary 1 -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 201 Created
       Last-Modified: Thu, 23 Jan 2014 21:31:22 GMT
       Content-Length: 0
       Etag: d41d8cd98f00b204e9800998ecf8427e
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: tx5992d536a4bd4fec973aa-0052e18a2a
       X-Openstack-Request-Id: tx5992d536a4bd4fec973aa-0052e18a2a
       Date: Thu, 23 Jan 2014 21:31:22 GMT

    Nothing is written to the non-current version container when you
    initially **PUT** an object in the ``current`` container. However,
@@ -224,9 +224,9 @@ Example Using ``X-History-Location``
    These non-current versions are named as follows:

-   .. code::
+   .. code:: none

       <length><object_name>/<timestamp>

    Where ``length`` is the 3-character, zero-padded hexadecimal
    character length of the object, ``<object_name>`` is the object name,
@@ -235,20 +235,20 @@ Example Using ``X-History-Location``
 #. Create a second version of the object in the ``current`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/current/my_object --data-binary 2 -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 201 Created
       Last-Modified: Thu, 23 Jan 2014 21:41:32 GMT
       Content-Length: 0
       Etag: d41d8cd98f00b204e9800998ecf8427e
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: tx468287ce4fc94eada96ec-0052e18c8c
       X-Openstack-Request-Id: tx468287ce4fc94eada96ec-0052e18c8c
       Date: Thu, 23 Jan 2014 21:41:32 GMT

 #. Issue a **GET** request to a versioned object to get the current
    version of the object. You do not have to do any request redirects or
@@ -256,24 +256,24 @@ Example Using ``X-History-Location``
    List older versions of the object in the ``archive`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/archive?prefix=009my_object -X GET -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 200 OK
       Content-Length: 30
       X-Container-Object-Count: 1
       Accept-Ranges: bytes
       X-Timestamp: 1390513280.79684
       X-Container-Bytes-Used: 0
       Content-Type: text/plain; charset=utf-8
       X-Trans-Id: tx9a441884997542d3a5868-0052e18d8e
       X-Openstack-Request-Id: tx9a441884997542d3a5868-0052e18d8e
       Date: Thu, 23 Jan 2014 21:45:50 GMT

       009my_object/1390512682.92052

    .. note::

       A **POST** request to a versioned object updates only the metadata
@@ -285,41 +285,41 @@ Example Using ``X-History-Location``
    the current container. Subsequent **GET** requests to the object in the
    current container will return ``404 Not Found``.

-   .. code::
+   .. code:: console

       # curl -i $publicURL/current/my_object -X DELETE -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 204 No Content
       Content-Length: 0
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: tx006d944e02494e229b8ee-0052e18edd
       X-Openstack-Request-Id: tx006d944e02494e229b8ee-0052e18edd
       Date: Thu, 23 Jan 2014 21:51:25 GMT

-   List older versions of the object in the ``archive`` container::
+   List older versions of the object in the ``archive`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/archive?prefix=009my_object -X GET -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 200 OK
       Content-Length: 90
       X-Container-Object-Count: 3
       Accept-Ranges: bytes
       X-Timestamp: 1390513280.79684
       X-Container-Bytes-Used: 0
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: tx044f2a05f56f4997af737-0052e18eed
       X-Openstack-Request-Id: tx044f2a05f56f4997af737-0052e18eed
       Date: Thu, 23 Jan 2014 21:51:41 GMT

       009my_object/1390512682.92052
       009my_object/1390512692.23062
       009my_object/1390513885.67732

    In addition to the two previous versions of the object, the archive
    container has a "delete marker" to record when the object was deleted.
@@ -334,18 +334,18 @@ To disable object versioning for the ``current`` container, remove
 its ``X-Versions-Location`` metadata header by sending an empty key
 value.

-.. code::
+.. code:: console

    # curl -i $publicURL/current -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token" -H "X-Versions-Location: "

-.. code::
+.. code:: console

    HTTP/1.1 202 Accepted
    Content-Length: 76
    Content-Type: text/html; charset=UTF-8
    X-Trans-Id: txe2476de217134549996d0-0052e19038
    X-Openstack-Request-Id: txe2476de217134549996d0-0052e19038
    Date: Thu, 23 Jan 2014 21:57:12 GMT

    <html><h1>Accepted</h1><p>The request is accepted for processing.</p></html>


@@ -34,14 +34,14 @@ the requested list of the objects.
 .. code-block:: console

     photos/animals/cats/persian.jpg
     photos/animals/cats/siamese.jpg
     photos/animals/dogs/corgi.jpg
     photos/animals/dogs/poodle.jpg
     photos/animals/dogs/terrier.jpg
     photos/me.jpg
     photos/plants/fern.jpg
     photos/plants/rose.jpg

 Use the delimiter parameter to limit the displayed results. To use
 ``delimiter`` with pseudo-directories, you must use the parameter slash
@@ -63,20 +63,20 @@ For example:
 .. code-block:: JSON

     [
         {
             "subdir": "photos/"
         }
     ]

 .. code-block:: XML

     <?xml version="1.0" encoding="UTF-8"?>
     <container name="backups">
         <subdir name="photos/">
             <name>photos/</name>
         </subdir>
     </container>

 Use the ``prefix`` and ``delimiter`` parameters to view the objects
 inside a pseudo-directory, including further nested pseudo-directories.
@@ -92,46 +92,46 @@ pseudo-directory.
 .. code-block:: console

     photos/animals/
     photos/me.jpg
     photos/plants/

 .. code-block:: JSON

     [
         {
             "subdir": "photos/animals/"
         },
         {
             "hash": "b249a153f8f38b51e92916bbc6ea57ad",
             "last_modified": "2015-12-03T17:31:28.187370",
             "bytes": 2906,
             "name": "photos/me.jpg",
             "content_type": "image/jpeg"
         },
         {
             "subdir": "photos/plants/"
         }
     ]

 .. code-block:: XML

     <?xml version="1.0" encoding="UTF-8"?>
     <container name="backups">
         <subdir name="photos/animals/">
             <name>photos/animals/</name>
         </subdir>
         <object>
             <name>photos/me.jpg</name>
             <hash>b249a153f8f38b51e92916bbc6ea57ad</hash>
             <bytes>2906</bytes>
             <content_type>image/jpeg</content_type>
             <last_modified>2015-12-03T17:31:28.187370</last_modified>
         </object>
         <subdir name="photos/plants/">
             <name>photos/plants/</name>
         </subdir>
     </container>

 You can create an unlimited number of nested pseudo-directories. To
 navigate through them, use a longer ``prefix`` parameter coupled with
@@ -150,6 +150,6 @@ the objects and pseudo-directories within the nested pseudo-directory.
 .. code-block:: console

     photos/animals/dogs/corgi.jpg
     photos/animals/dogs/poodle.jpg
     photos/animals/dogs/terrier.jpg


@@ -56,18 +56,18 @@ format:
 .. code-block:: json

     [
        {
           "count":0,
           "bytes":0,
           "name":"janeausten"
        },
        {
           "count":1,
           "bytes":14,
           "name":"marktwain"
        }
     ]

 Example 2. XML example with Accept header
@@ -100,19 +100,19 @@ format:
 .. code-block:: xml

     <?xml version="1.0" encoding="UTF-8"?>
     <account name="AUTH_73f0aa26640f4971864919d0eb0f0880">
        <container>
           <name>janeausten</name>
           <count>2</count>
           <bytes>33</bytes>
        </container>
        <container>
           <name>marktwain</name>
           <count>1</count>
           <bytes>14</bytes>
        </container>
     </account>

 The remainder of the examples in this guide use standard, non-serialized
 responses. However, all ``GET`` requests that perform list operations


@@ -21,11 +21,10 @@ a common prefix. They are useful for sharing a set of objects.
 Ask your cloud administrator to enable the temporary URL feature. For
 information, see :ref:`tempurl` in the *Source Documentation*.

-Note
-~~~~
+.. note::

    To use **POST** requests to upload objects to specific Object Storage
    locations, use :doc:`form_post_middleware` instead of temporary URL middleware.

 Temporary URL format
 ~~~~~~~~~~~~~~~~~~~~
@@ -35,12 +34,12 @@ parameters:
 **Example Temporary URL format**

-.. code::
+.. code:: none

     https://swift-cluster.example.com/v1/my_account/container/object
     ?temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b
     &temp_url_expires=1323479485
     &filename=My+Test+File.pdf

 The example shows these elements:
@@ -71,12 +70,12 @@ A prefix-based temporary URL is similar but requires the parameter
 ``temp_url_prefix``, which must be equal to the common prefix shared
 by all object names for which the URL is valid.

-.. code::
+.. code:: none

     https://swift-cluster.example.com/v1/my_account/container/my_prefix/object
     ?temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b
     &temp_url_expires=2011-12-10T01:11:25Z
     &temp_url_prefix=my_prefix

 .. _secret_keys:
@@ -109,15 +108,14 @@ The arbitrary values serve as the secret keys.
 For example, use the **swift post** command to set the secret key to
 *``MYKEY``*:

-.. code::
+.. code:: console

     $ swift post -m "Temp-URL-Key:MYKEY"

-Note
-~~~~
+.. note::

    Changing these headers invalidates any previously generated temporary
    URLs within 60 seconds, which is the memcache time for the key.

 HMAC signature for temporary URLs
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -145,33 +143,33 @@ temporary URLs:
 **Example HMAC-SHA256 signature for object-based temporary URLs**

-.. code::
+.. code:: python

     import hmac
     from hashlib import sha256
     from time import time

     method = 'GET'
     duration_in_seconds = 60*60*24
     expires = int(time() + duration_in_seconds)
     path = '/v1/my_account/container/object'
     key = 'MYKEY'
     hmac_body = '%s\n%s\n%s' % (method, expires, path)
     signature = hmac.new(key, hmac_body, sha256).hexdigest()

 **Example HMAC-SHA512 signature for prefix-based temporary URLs**

-.. code::
+.. code:: python

     import hmac
     from hashlib import sha512
     from time import time

     method = 'GET'
     duration_in_seconds = 60*60*24
     expires = int(time() + duration_in_seconds)
     path = 'prefix:/v1/my_account/container/my_prefix'
     key = 'MYKEY'
     hmac_body = '%s\n%s\n%s' % (method, expires, path)
     signature = hmac.new(key, hmac_body, sha512).hexdigest()

 Do not URL-encode the path when you generate the HMAC signature.
 However, when you make the actual HTTP request, you should properly
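
The sentence above is cut off by the hunk boundary, but the point it is making is: sign the raw path, then percent-encode it only when building the final request URL. A sketch, reusing ``signature`` and ``expires`` from the preceding example:

.. code:: python

    from urllib.parse import quote

    # Sign the raw path (as above); quote it only when forming the URL.
    url = ('https://swift-cluster.example.com' + quote(path) +
           '?temp_url_sig=%s&temp_url_expires=%d' % (signature, expires))
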
@@ -186,10 +184,10 @@ Authentication <http://www.ietf.org/rfc/rfc2104.txt>`__.
 If you want to transform a UNIX timestamp into an ISO 8601 UTC timestamp,
 you can use following code snippet:

-.. code::
+.. code:: python

     import time
     time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime(timestamp))

 Using the ``swift`` tool to generate a Temporary URL
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -198,32 +196,32 @@ The ``swift`` tool provides the tempurl_ option that
 auto-generates the *``temp_url_sig``* and *``temp_url_expires``* query
 parameters. For example, you might run this command:

-.. code::
+.. code:: console

     $ swift tempurl GET 3600 /v1/my_account/container/object MYKEY

 .. note::

    The ``swift`` tool is not yet updated and continues to use the
    deprecated cipher SHA1.

 This command returns the path:

-.. code::
+.. code:: none

     /v1/my_account/container/object
     ?temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91
     &temp_url_expires=1374497657

 To create the temporary URL, prefix this path with the Object Storage
 storage host name. For example, prefix the path with
 ``https://swift-cluster.example.com``, as follows:

-.. code::
+.. code:: none

     https://swift-cluster.example.com/v1/my_account/container/object
     ?temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91
     &temp_url_expires=1374497657

 Note that if the above example is copied exactly, and used in a command
 shell, then the ampersand is interpreted as an operator and the URL
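
The sentence is truncated by the hunk, but the fix it is leading up to is simply to quote the URL (or escape the ampersands) when pasting it into a shell, for example:

.. code:: console

    $ curl "https://swift-cluster.example.com/v1/my_account/container/object?temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91&temp_url_expires=1374497657"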


@@ -12,11 +12,11 @@ underlying media type (``Content-Type``) of the file, such as a video.
 This example assigns an attachment type to the ``Content-Encoding``
 header that indicates how the file is downloaded:

-.. code::
+.. code:: none

     PUT /<api version>/<account>/<container>/<object> HTTP/1.1
     Host: storage.clouddrive.com
     X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb
     Content-Type: video/mp4
     Content-Encoding: gzip
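
An equivalent request with ``curl``, following the conventions used elsewhere in these docs (``$publicURL`` and ``$token`` are assumed to be set; the container and filename are illustrative):

.. code:: console

    # curl -i $publicURL/container/video.mp4 -X PUT -H "X-Auth-Token: $token" -H "Content-Type: video/mp4" -H "Content-Encoding: gzip" -T video.mp4.gz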


@@ -14,18 +14,18 @@ This example assigns an attachment type to the ``Content-Disposition``
 header. This attachment type indicates that the file is to be downloaded
 as ``goodbye.txt``:

-.. code::
+.. code:: console

     # curl -i $publicURL/marktwain/goodbye -X POST -H "X-Auth-Token: $token" -H "Content-Length: 14" -H "Content-Type: application/octet-stream" -H "Content-Disposition: attachment; filename=goodbye.txt"

-.. code::
+.. code:: console

     HTTP/1.1 202 Accepted
     Content-Length: 76
     Content-Type: text/html; charset=UTF-8
     X-Trans-Id: txa9b5e57d7f354d7ea9f57-0052e17e13
     X-Openstack-Request-Id: txa9b5e57d7f354d7ea9f57-0052e17e13
     Date: Thu, 23 Jan 2014 20:39:47 GMT

     <html><h1>Accepted</h1><p>The request is accepted for processing.</p></html>


@@ -100,7 +100,7 @@ Test CORS Page
 A sample cross-site test page is located in the project source tree
 ``doc/source/test-cors.html``.

 .. literalinclude:: test-cors.html

 .. _CORS: https://developer.mozilla.org/en-US/docs/HTTP/Access_control_CORS
 .. _preflight request: https://developer.mozilla.org/en-US/docs/HTTP/Access_control_CORS#Preflighted_requests


@@ -37,7 +37,7 @@ will be omitted.
 It is highly recommended that authentication server implementers prefix their
 tokens and Swift storage accounts they create with a configurable reseller
-prefix (`AUTH_` by default with the included TempAuth). This prefix will avoid
+prefix (``AUTH_`` by default with the included TempAuth). This prefix will avoid
 conflicts with other authentication servers that might be using the same
 Swift cluster. Otherwise, the Swift cluster will have to try all the resellers
 until one validates a token or all fail.
@@ -48,18 +48,18 @@ designations as you'll see later).
 Example Authentication with TempAuth:

 * Token AUTH_tkabcd is given to the TempAuth middleware in a request's
   X-Auth-Token header.
 * The TempAuth middleware validates the token AUTH_tkabcd and discovers
   it matches the "tester" user within the "test" account for the storage
   account "AUTH_storage_xyz".
 * The TempAuth middleware sets the REMOTE_USER to
   "test:tester,test,AUTH_storage_xyz"
 * Now this user will have full access (via authorization procedures later)
   to the AUTH_storage_xyz Swift storage account and access to containers in
   other storage accounts, provided the storage account begins with the same
-  `AUTH_` reseller prefix and the container has an ACL specifying at least
+  ``AUTH_`` reseller prefix and the container has an ACL specifying at least
   one of those three groups.

 Authorization is performed through callbacks by the Swift Proxy server to the
 WSGI environment's swift.authorize value, if one is set. The swift.authorize


@@ -12,7 +12,7 @@ from the details of how data is laid out and stored on-disk.
 The APIs are documented in the reference implementations for all three
 servers. For historical reasons, the object server backend reference
-implementation module is named `diskfile`, while the account and container
+implementation module is named ``diskfile``, while the account and container
 server backend reference implementation modules are named appropriately.

 This API is still under development and not yet finalized.


@@ -36,11 +36,11 @@ External monitoring
 We use pingdom.com to monitor the external Swift API. We suggest the
 following:

 - Do a GET on ``/healthcheck``
-- Create a container, make it public (x-container-read:
-  .r*,.rlistings), create a small file in the container; do a GET
+- Create a container, make it public (``x-container-read:
+  .r*,.rlistings``), create a small file in the container; do a GET
   on the object

 Diagnose: General approach
 --------------------------
@@ -82,11 +82,11 @@ if any servers are down. We suggest you run it regularly
 to the last report without having to wait for a long-running command
 to complete.

-Diagnose: Is system responding to /healthcheck?
------------------------------------------------
+Diagnose: Is system responding to ``/healthcheck``?
+---------------------------------------------------

 When you want to establish if a swift endpoint is running, run ``curl -k``
-against https://*[ENDPOINT]*/healthcheck.
+against ``https://$ENDPOINT/healthcheck``.

 .. _swift_logs:
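
For instance, against a hypothetical endpoint (the healthcheck middleware answers ``200 OK`` with a plain ``OK`` body):

.. code:: console

    $ curl -k https://swift.example.com/healthcheck
    OK
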
@@ -209,11 +209,11 @@ Diagnose: Parted reports the backup GPT table is corrupt
 - If a GPT table is broken, a message like the following should be
   observed when the following command is run:

-  .. code::
+  .. code:: console

      $ sudo parted -l

-  .. code::
+  .. code:: console

      Error: The backup GPT table is corrupt, but the primary appears OK,
      so that will be used.
@@ -232,40 +232,40 @@ invalid filesystem label. In such cases proceed as follows:
 #. Verify that the disk labels are correct:

-   .. code::
+   .. code:: console

-      FS=/dev/sd#1
-      sudo parted -l | grep object
+      $ FS=/dev/sd#1
+      $ sudo parted -l | grep object

 #. If partition labels are inconsistent then, resolve the disk label issues
    before proceeding:

-   .. code::
+   .. code:: console

-      sudo parted -s ${FS} name ${PART_NO} ${PART_NAME} #Partition Label
-      #PART_NO is 1 for object disks and 3 for OS disks
-      #PART_NAME follows the convention seen in "sudo parted -l | grep object"
+      $ sudo parted -s ${FS} name ${PART_NO} ${PART_NAME} #Partition Label
+      $ # PART_NO is 1 for object disks and 3 for OS disks
+      $ # PART_NAME follows the convention seen in "sudo parted -l | grep object"

 #. If the Filesystem label is missing then create it with care:

-   .. code::
+   .. code:: console

-      sudo xfs_admin -l ${FS} #Filesystem label (12 Char limit)
-      #Check for the existence of a FS label
-      OBJNO=<3 Length Object No.>
-      #I.E OBJNO for sw-stbaz3-object0007 would be 007
-      DISKNO=<3 Length Disk No.>
-      #I.E DISKNO for /dev/sdb would be 001, /dev/sdc would be 002 etc.
-      sudo xfs_admin -L "obj${OBJNO}dsk${DISKNO}" ${FS}
-      #Create a FS Label
+      $ sudo xfs_admin -l ${FS} #Filesystem label (12 Char limit)
+      $ # Check for the existence of a FS label
+      $ OBJNO=<3 Length Object No.>
+      $ # I.E OBJNO for sw-stbaz3-object0007 would be 007
+      $ DISKNO=<3 Length Disk No.>
+      $ # I.E DISKNO for /dev/sdb would be 001, /dev/sdc would be 002 etc.
+      $ sudo xfs_admin -L "obj${OBJNO}dsk${DISKNO}" ${FS}
+      $ # Create a FS Label

 Diagnose: Failed LUNs
 ---------------------
@@ -293,9 +293,9 @@ Otherwise the lun can be re-enabled as follows:
    LUN. You will come back later and grep this file for more details, but
    just generate it for now.

-   .. code::
+   .. code:: console

-      sudo hpssacli controller all diag file=/tmp/hpacu.diag ris=on xml=off zip=off
+      $ sudo hpssacli controller all diag file=/tmp/hpacu.diag ris=on xml=off zip=off

 Export the following variables using the below instructions before
 proceeding further.
@@ -304,16 +304,16 @@ proceeding further.
    failed drive's number and array value (example output: "array A
    logicaldrive 1..." would be exported as LDRIVE=1):

-   .. code::
+   .. code:: console

-      sudo hpssacli controller slot=1 ld all show
+      $ sudo hpssacli controller slot=1 ld all show

 #. Export the number of the logical drive that was retrieved from the
    previous command into the LDRIVE variable:

-   .. code::
+   .. code:: console

-      export LDRIVE=<LogicalDriveNumber>
+      $ export LDRIVE=<LogicalDriveNumber>

 #. Print the array value and Port:Box:Bay for all drives and take note of
    the Port:Box:Bay for the failed drive (example output: " array A
@@ -324,9 +324,9 @@ proceeding further.
    in the case of "array c"), but we will run a different command to be sure
    we are operating on the correct device.

-   .. code::
+   .. code:: console

-      sudo hpssacli controller slot=1 pd all show
+      $ sudo hpssacli controller slot=1 pd all show

    .. note::
@@ -339,24 +339,24 @@ proceeding further.
 #. Export the Port:Box:Bay for the failed drive into the PBOX variable:

-   .. code::
+   .. code:: console

-      export PBOX=<Port:Box:Bay>
+      $ export PBOX=<Port:Box:Bay>

 #. Print the physical device information and take note of the Disk Name
    (example output: "Disk Name: /dev/sdk" would be exported as
    DEV=/dev/sdk):

-   .. code::
+   .. code:: console

-      sudo hpssacli controller slot=1 ld ${LDRIVE} show detail | grep -i "Disk Name"
+      $ sudo hpssacli controller slot=1 ld ${LDRIVE} show detail | grep -i "Disk Name"

 #. Export the device name variable from the preceding command (example:
    /dev/sdk):

-   .. code::
+   .. code:: console

-      export DEV=<Device>
+      $ export DEV=<Device>

 #. Export the filesystem variable. Disks that are split between the
    operating system and data storage, typically sda and sdb, should only
@@ -367,39 +367,39 @@ proceeding further.
    data filesystem for the device in question as the export. For example:
    /dev/sdk1.

-   .. code::
+   .. code:: console

-      export FS=<Filesystem>
+      $ export FS=<Filesystem>

 #. Verify the LUN is failed, and the device is not:

-   .. code::
+   .. code:: console

-      sudo hpssacli controller slot=1 ld all show
-      sudo hpssacli controller slot=1 pd all show
-      sudo hpssacli controller slot=1 ld ${LDRIVE} show detail
-      sudo hpssacli controller slot=1 pd ${PBOX} show detail
+      $ sudo hpssacli controller slot=1 ld all show
+      $ sudo hpssacli controller slot=1 pd all show
+      $ sudo hpssacli controller slot=1 ld ${LDRIVE} show detail
+      $ sudo hpssacli controller slot=1 pd ${PBOX} show detail

 #. Stop the swift and rsync service:

-   .. code::
+   .. code:: console

-      sudo service rsync stop
-      sudo swift-init shutdown all
+      $ sudo service rsync stop
+      $ sudo swift-init shutdown all

 #. Unmount the problem drive, fix the LUN and the filesystem:

-   .. code::
+   .. code:: console

-      sudo umount ${FS}
+      $ sudo umount ${FS}

 #. If umount fails, you should run lsof search for the mountpoint and
    kill any lingering processes before repeating the umount:

-   .. code::
+   .. code:: console

-      sudo hpacucli controller slot=1 ld ${LDRIVE} modify reenable
-      sudo xfs_repair ${FS}
+      $ sudo hpacucli controller slot=1 ld ${LDRIVE} modify reenable
+      $ sudo xfs_repair ${FS}

 #. If the ``xfs_repair`` complains about possible journal data, use the
    ``xfs_repair -L`` option to zeroise the journal log.
@@ -407,21 +407,21 @@ proceeding further.
 #. Once complete test-mount the filesystem, and tidy up its lost and
    found area.

-   .. code::
+   .. code:: console

-      sudo mount ${FS} /mnt
-      sudo rm -rf /mnt/lost+found/
-      sudo umount /mnt
+      $ sudo mount ${FS} /mnt
+      $ sudo rm -rf /mnt/lost+found/
+      $ sudo umount /mnt

 #. Mount the filesystem and restart swift and rsync.

 #. Run the following to determine if a DC ticket is needed to check the
    cables on the node:

-   .. code::
+   .. code:: console

-      grep -y media.exchanged /tmp/hpacu.diag
-      grep -y hot.plug.count /tmp/hpacu.diag
+      $ grep -y media.exchanged /tmp/hpacu.diag
+      $ grep -y hot.plug.count /tmp/hpacu.diag

 #. If the output reports any non 0x00 values, it suggests that the cables
    should be checked. For example, log a DC ticket to check the sas cables
@@ -440,7 +440,7 @@ If the diagnostics report a message such as ``sda: drive is slow``, you
 should log onto the node and run the following command (remove ``-c 1`` option to continuously monitor
 the data):

-.. code::
+.. code:: console

    $ /usr/bin/collectl -s D -c 1
    waiting for 1 second sample...
@@ -475,7 +475,7 @@ otherwise hardware replacement is needed.
 Another way to look at the data is as follows:

-.. code::
+.. code:: console

    $ /opt/hp/syseng/disk-anal.pl -d
    Disk: sda Wait: 54580 371 65 25 12 6 6 0 1 2 0 46
@@ -524,7 +524,7 @@ historical data. You can look at recent data as follows. It only looks
 at data from 13:15 to 14:15. As you can see, this is a relatively clean
 system (few if any long wait or service times):

-.. code::
+.. code:: console

    $ /opt/hp/syseng/disk-anal.pl -d -t 13:15-14:15
    Disk: sda Wait: 3600 0 0 0 0 0 0 0 0 0 0 0
@@ -582,21 +582,21 @@ Running tests
 #. Prepare the ``target`` node as follows:

-   .. code::
+   .. code:: console

-      sudo iptables -I INPUT -p tcp -j ACCEPT
+      $ sudo iptables -I INPUT -p tcp -j ACCEPT

    Or, do:

-   .. code::
+   .. code:: console

-      sudo ufw allow 12866/tcp
+      $ sudo ufw allow 12866/tcp

 #. On the ``source`` node, run the following command to check
    throughput. Note the double-dash before the -P option.
    The command takes 10 seconds to complete. The ``target`` node is 192.168.245.5.

-   .. code::
+   .. code:: console

       $ netperf -H 192.168.245.5 -- -P 12866
       MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 12866 AF_INET to
@@ -609,7 +609,7 @@ Running tests
 #. On the ``source`` node, run the following command to check latency:

-   .. code::
+   .. code:: console

       $ netperf -H 192.168.245.5 -t TCP_RR -- -P 12866
       MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 12866
@ -644,21 +644,21 @@ Diagnose: Remapping sectors experiencing UREs
#. Set the environment variables SEC, DEV & FS, for example: #. Set the environment variables SEC, DEV & FS, for example:
.. code:: .. code:: console
SEC=2930954256 $ SEC=2930954256
DEV=/dev/sdi $ DEV=/dev/sdi
FS=/dev/sdi1 $ FS=/dev/sdi1
#. Verify that the sector is bad: #. Verify that the sector is bad:
.. code:: .. code:: console
sudo dd if=${DEV} of=/dev/null bs=512 count=1 skip=${SEC} $ sudo dd if=${DEV} of=/dev/null bs=512 count=1 skip=${SEC}
#. If the sector is bad this command will output an input/output error: #. If the sector is bad this command will output an input/output error:
.. code:: .. code:: console
dd: reading `/dev/sdi`: Input/output error dd: reading `/dev/sdi`: Input/output error
0+0 records in 0+0 records in
@ -667,28 +667,28 @@ Diagnose: Remapping sectors experiencing UREs
#. Prevent chef from attempting to re-mount the filesystem while the #. Prevent chef from attempting to re-mount the filesystem while the
repair is in progress: repair is in progress:
.. code:: .. code:: console
sudo mv /etc/chef/client.pem /etc/chef/xx-client.xx-pem $ sudo mv /etc/chef/client.pem /etc/chef/xx-client.xx-pem
#. Stop the swift and rsync service: #. Stop the swift and rsync service:
.. code:: .. code:: console
sudo service rsync stop $ sudo service rsync stop
sudo swift-init shutdown all $ sudo swift-init shutdown all
#. Unmount the problem drive: #. Unmount the problem drive:
.. code:: .. code:: console
sudo umount ${FS} $ sudo umount ${FS}
#. Overwrite/remap the bad sector: #. Overwrite/remap the bad sector:
.. code:: .. code:: console
sudo dd_rescue -d -A -m8b -s ${SEC}b ${DEV} ${DEV} $ sudo dd_rescue -d -A -m8b -s ${SEC}b ${DEV} ${DEV}
#. This command should report an input/output error the first time #. This command should report an input/output error the first time
it is run. Run the command a second time, if it successfully remapped it is run. Run the command a second time, if it successfully remapped
@ -696,9 +696,9 @@ Diagnose: Remapping sectors experiencing UREs
#. Verify the sector is now readable: #. Verify the sector is now readable:
.. code:: .. code:: console
sudo dd if=${DEV} of=/dev/null bs=512 count=1 skip=${SEC} $ sudo dd if=${DEV} of=/dev/null bs=512 count=1 skip=${SEC}
#. If the sector is now readable this command should not report an #. If the sector is now readable this command should not report an
input/output error. input/output error.
@ -706,24 +706,24 @@ Diagnose: Remapping sectors experiencing UREs
#. If more than one problem sector is listed, set the SEC environment #. If more than one problem sector is listed, set the SEC environment
variable to the next sector in the list: variable to the next sector in the list:
.. code:: .. code:: console
SEC=123456789 $ SEC=123456789
#. Repeat from step 8. #. Repeat from step 8.
#. Repair the filesystem: #. Repair the filesystem:
.. code:: .. code:: console
sudo xfs_repair ${FS} $ sudo xfs_repair ${FS}
#. If ``xfs_repair`` reports that the filesystem has valuable filesystem #. If ``xfs_repair`` reports that the filesystem has valuable filesystem
changes: changes:
.. code:: .. code:: console
sudo xfs_repair ${FS} $ sudo xfs_repair ${FS}
Phase 1 - find and verify superblock... Phase 1 - find and verify superblock...
Phase 2 - using internal log Phase 2 - using internal log
- zero log... - zero log...
@ -739,11 +739,11 @@ Diagnose: Remapping sectors experiencing UREs
#. You should attempt to mount the filesystem, and clear the lost+found #. You should attempt to mount the filesystem, and clear the lost+found
area: area:
.. code:: .. code:: console
sudo mount $FS /mnt $ sudo mount $FS /mnt
sudo rm -rf /mnt/lost+found/* $ sudo rm -rf /mnt/lost+found/*
sudo umount /mnt $ sudo umount /mnt
#. If the filesystem fails to mount then you will need to use the #. If the filesystem fails to mount then you will need to use the
``xfs_repair -L`` option to force log zeroing. ``xfs_repair -L`` option to force log zeroing.
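
A sketch of that forced repair (``-L`` zeroes the log, discarding any
metadata updates that were in flight, so treat it as a last resort):

.. code:: console

   $ sudo xfs_repair -L ${FS}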
@ -752,16 +752,16 @@ Diagnose: Remapping sectors experiencing UREs
#. If ``xfs_repair`` reports that an additional input/output error has been #. If ``xfs_repair`` reports that an additional input/output error has been
encountered, get the sector details as follows: encountered, get the sector details as follows:
.. code:: .. code:: console
sudo grep "I/O error" /var/log/kern.log | grep sector | tail -1 $ sudo grep "I/O error" /var/log/kern.log | grep sector | tail -1
#. If a new input/output error is reported then set the SEC environment #. If a new input/output error is reported then set the SEC environment
variable to the problem sector number: variable to the problem sector number:
.. code:: .. code:: console
SEC=234567890 $ SEC=234567890
#. Repeat from step 8 #. Repeat from step 8
@ -806,31 +806,31 @@ errors, it may well indicate a cable, switch, or network issue.
Get an overview of the interface with: Get an overview of the interface with:
.. code:: .. code:: console
sudo ifconfig eth{n} $ sudo ifconfig eth{n}
sudo ethtool eth{n} $ sudo ethtool eth{n}
The ``Link Detected:`` indicator will read ``yes`` if the nic is The ``Link Detected:`` indicator will read ``yes`` if the nic is
cabled. cabled.
Establish the adapter type with: Establish the adapter type with:
.. code:: .. code:: console
sudo ethtool -i eth{n} $ sudo ethtool -i eth{n}
Gather the interface statistics with: Gather the interface statistics with:
.. code:: .. code:: console
sudo ethtool -S eth{n} $ sudo ethtool -S eth{n}
If the nic supports self-test, this can be performed with: If the nic supports self-test, this can be performed with:
.. code:: .. code:: console
sudo ethtool -t eth{n} $ sudo ethtool -t eth{n}
Self tests should read ``PASS`` if the nic is operating correctly. Self tests should read ``PASS`` if the nic is operating correctly.
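
Error counters are another useful signal here; steadily climbing error or
drop counts usually point at a cable or switch port. A minimal sketch,
assuming the interface is ``eth0``:

.. code:: console

   $ ip -s link show eth0    # substitute your interface for eth0
   $ grep -H . /sys/class/net/eth0/statistics/rx_errors /sys/class/net/eth0/statistics/tx_errors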
@ -853,9 +853,9 @@ A replicator reports in its log that remaining time exceeds
making progress. Another useful way to check this is with the making progress. Another useful way to check this is with the
'swift-recon -r' command on a swift proxy server: 'swift-recon -r' command on a swift proxy server:
.. code:: .. code:: console
sudo swift-recon -r $ sudo swift-recon -r
=============================================================================== ===============================================================================
--> Starting reconnaissance on 384 hosts --> Starting reconnaissance on 384 hosts
@ -877,9 +877,9 @@ You can further check if the object replicator is stuck by logging on
the object server and checking the object replicator progress with the object server and checking the object replicator progress with
the following command: the following command:
.. code:: .. code:: console
# sudo grep object-rep /var/log/swift/background.log | grep -e "Starting object replication" -e "Object replication complete" -e "partitions rep" $ sudo grep object-rep /var/log/swift/background.log | grep -e "Starting object replication" -e "Object replication complete" -e "partitions rep"
Jul 16 06:25:46 192.168.245.4 object-replicator 15344/16450 (93.28%) partitions replicated in 69018.48s (0.22/sec, 22h remaining) Jul 16 06:25:46 192.168.245.4 object-replicator 15344/16450 (93.28%) partitions replicated in 69018.48s (0.22/sec, 22h remaining)
Jul 16 06:30:46 192.168.245.4 object-replicator 15344/16450 (93.28%) partitions replicated in 69318.58s (0.22/sec, 22h remaining) Jul 16 06:30:46 192.168.245.4 object-replicator 15344/16450 (93.28%) partitions replicated in 69318.58s (0.22/sec, 22h remaining)
Jul 16 06:35:46 192.168.245.4 object-replicator 15344/16450 (93.28%) partitions replicated in 69618.63s (0.22/sec, 23h remaining) Jul 16 06:35:46 192.168.245.4 object-replicator 15344/16450 (93.28%) partitions replicated in 69618.63s (0.22/sec, 23h remaining)
@ -912,9 +912,9 @@ One of the reasons for the object replicator hanging like this is
filesystem corruption on the drive. The following is a typical log entry filesystem corruption on the drive. The following is a typical log entry
of a corrupted filesystem detected by the object replicator: of a corrupted filesystem detected by the object replicator:
.. code:: .. code:: console
# sudo bzgrep "Remote I/O error" /var/log/swift/background.log* | grep srv | tail -1 $ sudo bzgrep "Remote I/O error" /var/log/swift/background.log* | grep srv | tail -1
Jul 12 03:33:30 192.168.245.4 object-replicator STDOUT: ERROR:root:Error hashing suffix#012Traceback (most recent call last):#012 File Jul 12 03:33:30 192.168.245.4 object-replicator STDOUT: ERROR:root:Error hashing suffix#012Traceback (most recent call last):#012 File
"/usr/lib/python2.7/dist-packages/swift/obj/replicator.py", line 199, in get_hashes#012 hashes[suffix] = hash_suffix(suffix_dir, "/usr/lib/python2.7/dist-packages/swift/obj/replicator.py", line 199, in get_hashes#012 hashes[suffix] = hash_suffix(suffix_dir,
reclaim_age)#012 File "/usr/lib/python2.7/dist-packages/swift/obj/replicator.py", line 84, in hash_suffix#012 path_contents = reclaim_age)#012 File "/usr/lib/python2.7/dist-packages/swift/obj/replicator.py", line 84, in hash_suffix#012 path_contents =
@ -922,9 +922,9 @@ of a corrupted filesystem detected by the object replicator:
An ``ls`` of the problem file or directory usually shows something like the following: An ``ls`` of the problem file or directory usually shows something like the following:
.. code:: .. code:: console
# ls -l /srv/node/disk4/objects/1643763/b51 $ ls -l /srv/node/disk4/objects/1643763/b51
ls: cannot access /srv/node/disk4/objects/1643763/b51: Remote I/O error ls: cannot access /srv/node/disk4/objects/1643763/b51: Remote I/O error
If no entry with ``Remote I/O error`` occurs in the ``background.log`` it is If no entry with ``Remote I/O error`` occurs in the ``background.log`` it is
@ -935,27 +935,27 @@ restart the object-replicator.
#. Stop the object-replicator: #. Stop the object-replicator:
.. code:: .. code:: console
# sudo swift-init object-replicator stop # sudo swift-init object-replicator stop
#. Make sure the object replicator has stopped; if it has hung, the stop #. Make sure the object replicator has stopped; if it has hung, the stop
command will not stop the hung process: command will not stop the hung process:
.. code:: .. code:: console
# ps auxww | grep swift-object-replicator # ps auxww | grep swift-object-replicator
#. If the previous ps shows the object-replicator is still running, kill #. If the previous ps shows the object-replicator is still running, kill
the process: the process:
.. code:: .. code:: console
# kill -9 <pid-of-swift-object-replicator> # kill -9 <pid-of-swift-object-replicator>
#. Start the object-replicator: #. Start the object-replicator:
.. code:: .. code:: console
# sudo swift-init object-replicator start # sudo swift-init object-replicator start
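
To confirm the replicator actually came back, the earlier ``ps`` check can
be reused (the ``[s]`` bracket is a common trick to stop grep matching its
own process):

.. code:: console

   # ps auxww | grep "[s]wift-object-replicator"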
@ -964,14 +964,14 @@ to repair the problem filesystem.
#. Stop swift and rsync: #. Stop swift and rsync:
.. code:: .. code:: console
# sudo swift-init all shutdown # sudo swift-init all shutdown
# sudo service rsync stop # sudo service rsync stop
#. Make sure all swift processes have stopped: #. Make sure all swift processes have stopped:
.. code:: .. code:: console
# ps auxww | grep swift | grep python # ps auxww | grep swift | grep python
@ -979,13 +979,13 @@ to repair the problem filesystem.
#. Unmount the problem filesystem: #. Unmount the problem filesystem:
.. code:: .. code:: console
# sudo umount /srv/node/disk4 # sudo umount /srv/node/disk4
#. Repair the filesystem: #. Repair the filesystem:
.. code:: .. code:: console
# sudo xfs_repair -P /dev/sde1 # sudo xfs_repair -P /dev/sde1
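
Once the repair completes cleanly, remount the filesystem and bring the
services back; a sketch, assuming the mount point from the example above
is listed in ``/etc/fstab``:

.. code:: console

   # sudo mount /srv/node/disk4    # assumes an fstab entry for this mount point
   # sudo service rsync start
   # sudo swift-init all start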
@ -1002,7 +1002,7 @@ The CPU load average on an object server, as shown with the
'uptime' command, is typically under 10 when the server is 'uptime' command, is typically under 10 when the server is
lightly to moderately loaded: lightly to moderately loaded:
.. code:: .. code:: console
$ uptime $ uptime
07:59:26 up 99 days, 5:57, 1 user, load average: 8.59, 8.39, 8.32 07:59:26 up 99 days, 5:57, 1 user, load average: 8.59, 8.39, 8.32
@ -1014,7 +1014,7 @@ However, sometimes the CPU load average can increase significantly. The
following is an example of an object server that has extremely high CPU following is an example of an object server that has extremely high CPU
load: load:
.. code:: .. code:: console
$ uptime $ uptime
07:44:02 up 18:22, 1 user, load average: 407.12, 406.36, 404.59 07:44:02 up 18:22, 1 user, load average: 407.12, 406.36, 404.59
@ -1050,9 +1050,9 @@ Further issues and resolutions
given server. given server.
- Run this command: - Run this command:
.. code:: .. code:: console
sudo swift-init all start $ sudo swift-init all start
Examine messages in the swift log files to see if there are any Examine messages in the swift log files to see if there are any
error messages related to any of the swift processes since the time you error messages related to any of the swift processes since the time you
@ -1080,9 +1080,9 @@ Further issues and resolutions
- Restart the swift processes on the affected node: - Restart the swift processes on the affected node:
.. code:: .. code:: console
% sudo swift-init all reload $ sudo swift-init all reload
Urgency: Urgency:
If known performance problem: Immediate If known performance problem: Immediate
@ -1135,18 +1135,18 @@ Further issues and resolutions
For example, it is running at 100 Mb/s and the NIC is a 1Ge NIC. For example, it is running at 100 Mb/s and the NIC is a 1Ge NIC.
- 1. Try resetting the interface with: - 1. Try resetting the interface with:
.. code:: .. code:: console
sudo ethtool -s eth0 speed 1000 $ sudo ethtool -s eth0 speed 1000
... and then run: ... and then run:
.. code:: .. code:: console
sudo lshw -class network $ sudo lshw -class network
See if the reported ``size`` matches the expected speed. Failing See if the reported ``size`` matches the expected speed. Failing
that, check hardware (NIC cable/switch port). that, check hardware (NIC cable/switch port).
2. If persistent, consider shutting down the server (especially if a proxy) 2. If persistent, consider shutting down the server (especially if a proxy)
until the problem is identified and resolved. If you leave this server until the problem is identified and resolved. If you leave this server
@ -1183,9 +1183,11 @@ Further issues and resolutions
- Urgency: Medium - Urgency: Medium
This may have been triggered by a recent restart of the rsyslog daemon. This may have been triggered by a recent restart of the rsyslog daemon.
Restart the service with: Restart the service with:
.. code::
sudo swift-init <service> reload .. code:: console
$ sudo swift-init <service> reload
* - Object replicator: Reports the remaining time and that time is more than 100 hours. * - Object replicator: Reports the remaining time and that time is more than 100 hours.
- Each replication cycle the object replicator writes a log message to its log - Each replication cycle the object replicator writes a log message to its log
reporting statistics about the current cycle. This includes an estimate for the reporting statistics about the current cycle. This includes an estimate for the
@ -1193,9 +1195,10 @@ Further issues and resolutions
100 hours, there is a problem with the replication process. 100 hours, there is a problem with the replication process.
- Urgency: Medium - Urgency: Medium
Restart the service with: Restart the service with:
.. code::
sudo swift-init object-replicator reload .. code:: console
$ sudo swift-init object-replicator reload
Check that the remaining replication time is going down. Check that the remaining replication time is going down.
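
One way to keep an eye on that trend (assuming the replicator logs to
``/var/log/swift/background.log`` as in the examples above):

.. code:: console

   $ watch -n 300 'grep "partitions rep" /var/log/swift/background.log | tail -1'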
View File
@ -27,9 +27,9 @@ if you wait a while things get better.
For example: For example:
.. code:: .. code:: console
sudo swift-recon -rla $ sudo swift-recon -rla
=============================================================================== ===============================================================================
[2012-03-10 12:57:21] Checking async pendings on 384 hosts... [2012-03-10 12:57:21] Checking async pendings on 384 hosts...
Async stats: low: 0, high: 1, avg: 0, total: 1 Async stats: low: 0, high: 1, avg: 0, total: 1
@ -52,7 +52,7 @@ system. Rules-of-thumb for 'good' recon output are:
- Nodes that respond are up and running Swift. If all nodes respond, - Nodes that respond are up and running Swift. If all nodes respond,
that is a good sign. But some nodes may time out. For example: that is a good sign. But some nodes may time out. For example:
.. code:: .. code:: console
-> [http://<redacted>.29:6200/recon/load:] <urlopen error [Errno 111] ECONNREFUSED> -> [http://<redacted>.29:6200/recon/load:] <urlopen error [Errno 111] ECONNREFUSED>
-> [http://<redacted>.31:6200/recon/load:] <urlopen error timed out> -> [http://<redacted>.31:6200/recon/load:] <urlopen error timed out>
@ -83,7 +83,7 @@ system. Rules-of-thumb for 'good' recon output are:
For comparison here is the recon output for the same system above when For comparison here is the recon output for the same system above when
two entire racks of Swift are down: two entire racks of Swift are down:
.. code:: .. code:: console
[2012-03-10 16:56:33] Checking async pendings on 384 hosts... [2012-03-10 16:56:33] Checking async pendings on 384 hosts...
-> http://<redacted>.22:6200/recon/async: <urlopen error timed out> -> http://<redacted>.22:6200/recon/async: <urlopen error timed out>
@ -152,9 +152,9 @@ Here is an example of noting and tracking down a problem with recon.
Running recon shows some async pendings: Running recon shows some async pendings:
.. code:: .. code:: console
bob@notso:~/swift-1.4.4/swift$ ssh -q <redacted>.132.7 sudo swift-recon -alr $ ssh -q <redacted>.132.7 sudo swift-recon -alr
=============================================================================== ===============================================================================
[2012-03-14 17:25:55] Checking async pendings on 384 hosts... [2012-03-14 17:25:55] Checking async pendings on 384 hosts...
Async stats: low: 0, high: 23, avg: 8, total: 3356 Async stats: low: 0, high: 23, avg: 8, total: 3356
@ -172,9 +172,9 @@ Why? Running recon again with -av swift (not shown here) tells us that
the node with the highest (23) is <redacted>.72.61. Looking at the log the node with the highest (23) is <redacted>.72.61. Looking at the log
files on <redacted>.72.61 we see: files on <redacted>.72.61 we see:
.. code:: .. code:: console
souzab@<redacted>:~$ sudo tail -f /var/log/swift/background.log | grep -i ERROR $ sudo tail -f /var/log/swift/background.log | grep -i ERROR
Mar 14 17:28:06 <redacted> container-replicator ERROR Remote drive not mounted Mar 14 17:28:06 <redacted> container-replicator ERROR Remote drive not mounted
{'zone': 5, 'weight': 1952.0, 'ip': '<redacted>.204.119', 'id': 5481, 'meta': '', 'device': 'disk6', 'port': 6201} {'zone': 5, 'weight': 1952.0, 'ip': '<redacted>.204.119', 'id': 5481, 'meta': '', 'device': 'disk6', 'port': 6201}
Mar 14 17:28:06 <redacted> container-replicator ERROR Remote drive not mounted Mar 14 17:28:06 <redacted> container-replicator ERROR Remote drive not mounted
@ -235,7 +235,7 @@ Procedure
running the ring builder on a proxy node to determine which zones running the ring builder on a proxy node to determine which zones
the storage nodes are in. For example: the storage nodes are in. For example:
.. code:: .. code:: console
% sudo swift-ring-builder /etc/swift/object.builder % sudo swift-ring-builder /etc/swift/object.builder
/etc/swift/object.builder, build version 1467 /etc/swift/object.builder, build version 1467
@ -258,7 +258,7 @@ Procedure
builder again, this time with the ``list_parts`` option and specify builder again, this time with the ``list_parts`` option and specify
the nodes under consideration. For example: the nodes under consideration. For example:
.. code:: .. code:: console
% sudo swift-ring-builder /etc/swift/object.builder list_parts <redacted>.8 <redacted>.15 <redacted>.72.2 % sudo swift-ring-builder /etc/swift/object.builder list_parts <redacted>.8 <redacted>.15 <redacted>.72.2
Partition Matches Partition Matches
@ -283,7 +283,7 @@ Procedure
small, and is proportional to the number of entries that have a 3 in small, and is proportional to the number of entries that have a 3 in
the Matches column. For example: the Matches column. For example:
.. code:: .. code:: console
Partition Matches Partition Matches
26865 3 26865 3
@ -300,7 +300,7 @@ Procedure
#. A quick way to count the number of rows with 3 matches is: #. A quick way to count the number of rows with 3 matches is:
.. code:: .. code:: console
% sudo swift-ring-builder /etc/swift/object.builder list_parts <redacted>.8 <redacted>.15 <redacted>.72.2 | grep "3$" | wc -l % sudo swift-ring-builder /etc/swift/object.builder list_parts <redacted>.8 <redacted>.15 <redacted>.72.2 | grep "3$" | wc -l
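
The same listing can be summarized in one pass; a sketch using ``awk``
(node addresses as in the example above):

.. code:: console

   % sudo swift-ring-builder /etc/swift/object.builder list_parts <redacted>.8 <redacted>.15 <redacted>.72.2 | awk 'NR>1 {t++; if ($2==3) m++} END {printf "%d of %d partitions have 3 matches\n", m, t}'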
View File
@ -10,13 +10,13 @@ Fix broken GPT table (broken disk partition)
- If a GPT table is broken, a message like the following should be - If a GPT table is broken, a message like the following should be
observed when the command... observed when the command...
.. code:: .. code:: console
$ sudo parted -l $ sudo parted -l
- ... is run. - ... is run.
.. code:: .. code:: console
... ...
Error: The backup GPT table is corrupt, but the primary appears OK, so that will Error: The backup GPT table is corrupt, but the primary appears OK, so that will
@ -25,13 +25,13 @@ Fix broken GPT table (broken disk partition)
#. To fix this, first install the ``gdisk`` program: #. To fix this, first install the ``gdisk`` program:
.. code:: .. code:: console
$ sudo aptitude install gdisk $ sudo aptitude install gdisk
#. Run ``gdisk`` for the particular drive with the damaged partition: #. Run ``gdisk`` for the particular drive with the damaged partition:
.. code:: .. code:: console
$ sudo gdisk /dev/sd*a-l* $ sudo gdisk /dev/sd*a-l*
GPT fdisk (gdisk) version 0.6.14 GPT fdisk (gdisk) version 0.6.14
@ -57,7 +57,7 @@ Fix broken GPT table (broken disk partition)
and finally ``w`` (write table to disk and exit). You will also need to and finally ``w`` (write table to disk and exit). You will also need to
enter ``Y`` when prompted in order to confirm actions. enter ``Y`` when prompted in order to confirm actions.
.. code:: .. code:: console
Command (? for help): r Command (? for help): r
@ -92,7 +92,7 @@ Fix broken GPT table (broken disk partition)
#. Running the command: #. Running the command:
.. code:: .. code:: console
$ sudo parted /dev/sd# $ sudo parted /dev/sd#
@ -100,7 +100,7 @@ Fix broken GPT table (broken disk partition)
#. Finally, uninstall ``gdisk`` from the node: #. Finally, uninstall ``gdisk`` from the node:
.. code:: .. code:: console
$ sudo aptitude remove gdisk $ sudo aptitude remove gdisk
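
If you prefer a non-interactive repair, the ``sgdisk`` tool from the same
package can rebuild the backup table in one step; a sketch, assuming the
damaged disk is ``/dev/sd#`` as above:

.. code:: console

   $ sudo sgdisk -e /dev/sd#    # replace /dev/sd# with the affected disk
   $ sudo sgdisk -v /dev/sd#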
@ -112,20 +112,20 @@ Procedure: Fix broken XFS filesystem
#. A filesystem may be corrupt or broken if the following output is #. A filesystem may be corrupt or broken if the following output is
observed when checking its label: observed when checking its label:
.. code:: .. code:: console
$ sudo xfs_admin -l /dev/sd# $ sudo xfs_admin -l /dev/sd#
cache_node_purge: refcount was 1, not zero (node=0x25d5ee0) cache_node_purge: refcount was 1, not zero (node=0x25d5ee0)
xfs_admin: cannot read root inode (117) xfs_admin: cannot read root inode (117)
cache_node_purge: refcount was 1, not zero (node=0x25d92b0) cache_node_purge: refcount was 1, not zero (node=0x25d92b0)
xfs_admin: cannot read realtime bitmap inode (117) xfs_admin: cannot read realtime bitmap inode (117)
bad sb magic # 0 in AG 1 bad sb magic # 0 in AG 1
failed to read label in AG 1 failed to read label in AG 1
#. Run the following commands to remove the broken/corrupt filesystem and replace it. #. Run the following commands to remove the broken/corrupt filesystem and replace it.
(This example uses the filesystem ``/dev/sdb2``.) First, replace the partition: (This example uses the filesystem ``/dev/sdb2``.) First, replace the partition:
.. code:: .. code:: console
$ sudo parted $ sudo parted
GNU Parted 2.3 GNU Parted 2.3
@ -167,7 +167,7 @@ Procedure: Fix broken XFS filesystem
#. The next step is to scrub the filesystem and format it: #. The next step is to scrub the filesystem and format it:
.. code:: .. code:: console
$ sudo dd if=/dev/zero of=/dev/sdb2 bs=$((1024*1024)) count=1 $ sudo dd if=/dev/zero of=/dev/sdb2 bs=$((1024*1024)) count=1
1+0 records in 1+0 records in
@ -175,19 +175,19 @@ Procedure: Fix broken XFS filesystem
1048576 bytes (1.0 MB) copied, 0.00480617 s, 218 MB/s 1048576 bytes (1.0 MB) copied, 0.00480617 s, 218 MB/s
$ sudo /sbin/mkfs.xfs -f -i size=1024 /dev/sdb2 $ sudo /sbin/mkfs.xfs -f -i size=1024 /dev/sdb2
meta-data=/dev/sdb2 isize=1024 agcount=4, agsize=106811524 blks meta-data=/dev/sdb2 isize=1024 agcount=4, agsize=106811524 blks
= sectsz=512 attr=2, projid32bit=0 = sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=427246093, imaxpct=5 data = bsize=4096 blocks=427246093, imaxpct=5
= sunit=0 swidth=0 blks = sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=208616, version=2 log =internal log bsize=4096 blocks=208616, version=2
= sectsz=512 sunit=0 blks, lazy-count=1 = sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0 realtime =none extsz=4096 blocks=0, rtextents=0
#. You should now label and mount your filesystem. #. You should now label and mount your filesystem.
#. You can now check whether the filesystem is mounted using the command: #. You can now check whether the filesystem is mounted using the command:
.. code:: .. code:: console
$ mount $ mount
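
A more targeted check for the rebuilt filesystem (device as in the
example above):

.. code:: console

   $ mount | grep sdb2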
@ -204,7 +204,7 @@ Procedure: Checking if an account is okay
You must know the tenant/project ID. You can check if the account is okay as follows from a proxy. You must know the tenant/project ID. You can check if the account is okay as follows from a proxy.
.. code:: .. code:: console
$ sudo -u swift /opt/hp/swift/bin/swift-direct show AUTH_<project-id> $ sudo -u swift /opt/hp/swift/bin/swift-direct show AUTH_<project-id>
@ -214,7 +214,7 @@ containers, or an error indicating that the resource could not be found.
Alternatively, you can use ``swift-get-nodes`` to find the account database Alternatively, you can use ``swift-get-nodes`` to find the account database
files. Run the following on a proxy: files. Run the following on a proxy:
.. code:: .. code:: console
$ sudo swift-get-nodes /etc/swift/account.ring.gz AUTH_<project-id> $ sudo swift-get-nodes /etc/swift/account.ring.gz AUTH_<project-id>
@ -239,7 +239,7 @@ Log onto one of the swift proxy servers.
Use swift-direct to show this account's usage: Use swift-direct to show this account's usage:
.. code:: .. code:: console
$ sudo -u swift /opt/hp/swift/bin/swift-direct show AUTH_<project-id> $ sudo -u swift /opt/hp/swift/bin/swift-direct show AUTH_<project-id>
Status: 200 Status: 200
@ -288,7 +288,7 @@ re-create the account as follows:
servers). The output has been truncated so we can focus on the important pieces servers). The output has been truncated so we can focus on the important pieces
of data: of data:
.. code:: .. code:: console
$ sudo swift-get-nodes /etc/swift/account.ring.gz AUTH_4ebe3039674d4864a11fe0864ae4d905 $ sudo swift-get-nodes /etc/swift/account.ring.gz AUTH_4ebe3039674d4864a11fe0864ae4d905
... ...
@ -308,7 +308,7 @@ re-create the account as follows:
#. Before proceeding check that the account is really deleted by using curl. Execute the #. Before proceeding check that the account is really deleted by using curl. Execute the
commands printed by ``swift-get-nodes``. For example: commands printed by ``swift-get-nodes``. For example:
.. code:: .. code:: console
$ curl -I -XHEAD "http://192.168.245.5:6202/disk1/3934/AUTH_4ebe3039674d4864a11fe0864ae4d905" $ curl -I -XHEAD "http://192.168.245.5:6202/disk1/3934/AUTH_4ebe3039674d4864a11fe0864ae4d905"
HTTP/1.1 404 Not Found HTTP/1.1 404 Not Found
@ -323,7 +323,7 @@ re-create the account as follows:
#. Use the ssh commands printed by ``swift-get-nodes`` to check if database #. Use the ssh commands printed by ``swift-get-nodes`` to check if database
files exist. For example: files exist. For example:
.. code:: .. code:: console
$ ssh 192.168.245.5 "ls -lah ${DEVICE:-/srv/node*}/disk1/accounts/3934/052/f5ecf8b40de3e1b0adb0dbe576874052" $ ssh 192.168.245.5 "ls -lah ${DEVICE:-/srv/node*}/disk1/accounts/3934/052/f5ecf8b40de3e1b0adb0dbe576874052"
total 20K total 20K
@ -344,7 +344,7 @@ re-create the account as follows:
#. Delete the database files. For example: #. Delete the database files. For example:
.. code:: .. code:: console
$ ssh 192.168.245.5 $ ssh 192.168.245.5
$ cd /srv/node/disk1/accounts/3934/052/f5ecf8b40de3e1b0adb0dbe576874052 $ cd /srv/node/disk1/accounts/3934/052/f5ecf8b40de3e1b0adb0dbe576874052
@ -374,9 +374,9 @@ balancers, customers are not impacted by the misbehaving proxy. balancers, customers are not impacted by the misbehaving proxy.
#. Shut down Swift as follows: #. Shut down Swift as follows:
.. code:: .. code:: console
sudo swift-init proxy shutdown $ sudo swift-init proxy shutdown
.. note:: .. note::
@ -384,15 +384,15 @@ balancers, customers are not impacted by the misbehaving proxy. balancers, customers are not impacted by the misbehaving proxy.
#. Create the ``/etc/swift/disabled-by-file`` file. For example: #. Create the ``/etc/swift/disabled-by-file`` file. For example:
.. code:: .. code:: console
sudo touch /etc/swift/disabled-by-file $ sudo touch /etc/swift/disabled-by-file
#. Optionally, restart Swift: #. Optionally, restart Swift:
.. code:: .. code:: console
sudo swift-init proxy start $ sudo swift-init proxy start
It works because the healthcheck middleware looks for /etc/swift/disabled-by-file. It works because the healthcheck middleware looks for /etc/swift/disabled-by-file.
If it exists, the middleware will return 503/error instead of 200/OK. This means the load balancer If it exists, the middleware will return 503/error instead of 200/OK. This means the load balancer
@ -403,9 +403,9 @@ Procedure: Ad-Hoc disk performance test
You can get an idea of whether a disk drive is performing well as follows: You can get an idea of whether a disk drive is performing well as follows:
.. code:: .. code:: console
sudo dd bs=1M count=256 if=/dev/zero conv=fdatasync of=/srv/node/disk11/remember-to-delete-this-later $ sudo dd bs=1M count=256 if=/dev/zero conv=fdatasync of=/srv/node/disk11/remember-to-delete-this-later
You can expect ~600MB/sec. If you get a low number, repeat many times as You can expect ~600MB/sec. If you get a low number, repeat many times as
Swift itself may also read or write to the disk, hence giving a lower Swift itself may also read or write to the disk, hence giving a lower
View File
@ -16,20 +16,20 @@ transactions from this user. The Linux ``bzgrep`` command can be used to transactions from this user. The Linux ``bzgrep`` command can be used to
search all the proxy log files on a node including the ``.bz2`` compressed search all the proxy log files on a node including the ``.bz2`` compressed
files. For example: files. For example:
.. code:: .. code:: console
$ PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -l <yourusername> -R ssh \ $ PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -l <yourusername> -R ssh \
-w <redacted>.68.[4-11,132-139],<redacted>.132.[4-11,132-139] \ -w <redacted>.68.[4-11,132-139],<redacted>.132.[4-11,132-139] \
'sudo bzgrep -w AUTH_redacted-4962-4692-98fb-52ddda82a5af /var/log/swift/proxy.log*' | dshbak -c 'sudo bzgrep -w AUTH_redacted-4962-4692-98fb-52ddda82a5af /var/log/swift/proxy.log*' | dshbak -c
. .
. .
---------------- ----------------
<redacted>.132.6 <redacted>.132.6
---------------- ----------------
Feb 29 08:51:57 sw-aw2az2-proxy011 proxy-server <redacted>.16.132 Feb 29 08:51:57 sw-aw2az2-proxy011 proxy-server <redacted>.16.132
<redacted>.66.8 29/Feb/2012/08/51/57 GET /v1.0/AUTH_redacted-4962-4692-98fb-52ddda82a5af <redacted>.66.8 29/Feb/2012/08/51/57 GET /v1.0/AUTH_redacted-4962-4692-98fb-52ddda82a5af
/%3Fformat%3Djson HTTP/1.0 404 - - <REDACTED>_4f4d50c5e4b064d88bd7ab82 - - - /%3Fformat%3Djson HTTP/1.0 404 - - <REDACTED>_4f4d50c5e4b064d88bd7ab82 - - -
tx429fc3be354f434ab7f9c6c4206c1dc3 - 0.0130 tx429fc3be354f434ab7f9c6c4206c1dc3 - 0.0130
This shows a ``GET`` operation on the user's account. This shows a ``GET`` operation on the user's account.
@ -40,7 +40,7 @@ This shows a ``GET`` operation on the user's account. This shows a ``GET`` operation on the user's account.
Using the transaction ID, ``tx429fc3be354f434ab7f9c6c4206c1dc3`` you can Using the transaction ID, ``tx429fc3be354f434ab7f9c6c4206c1dc3`` you can
search the swift object servers log files for this transaction ID: search the swift object servers log files for this transaction ID:
.. code:: .. code:: console
$ PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -l <yourusername> -R ssh \ $ PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -l <yourusername> -R ssh \
-w <redacted>.72.[4-67],<redacted>.[4-67],<redacted>.[4-67],<redacted>.204.[4-131] \ -w <redacted>.72.[4-67],<redacted>.[4-67],<redacted>.[4-67],<redacted>.204.[4-131] \
@ -79,7 +79,7 @@ search the swift object servers log files for this transaction ID:
Next, use the ``swift-get-nodes`` command to determine exactly where the Next, use the ``swift-get-nodes`` command to determine exactly where the
user's account data is stored: user's account data is stored:
.. code:: .. code:: console
$ sudo swift-get-nodes /etc/swift/account.ring.gz AUTH_redacted-4962-4692-98fb-52ddda82a5af $ sudo swift-get-nodes /etc/swift/account.ring.gz AUTH_redacted-4962-4692-98fb-52ddda82a5af
Account AUTH_redacted-4962-4692-98fb-52ddda82a5af Account AUTH_redacted-4962-4692-98fb-52ddda82a5af
@ -119,7 +119,7 @@ user's account data is stored:
Check each of the primary servers, <redacted>.31, <redacted>.204.70 and <redacted>.72.16, for Check each of the primary servers, <redacted>.31, <redacted>.204.70 and <redacted>.72.16, for
this user's account. For example on <redacted>.72.16: this user's account. For example on <redacted>.72.16:
.. code:: .. code:: console
$ ls -lah /srv/node/disk9/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/ $ ls -lah /srv/node/disk9/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/
total 1.0M total 1.0M
@ -131,7 +131,7 @@ this users account. For example on <redacted>.72.16:
So this user's account db, an SQLite db, is present. Use sqlite to So this user's account db, an SQLite db, is present. Use sqlite to
check out the account: check out the account:
.. code:: .. code:: console
$ sudo cp /srv/node/disk9/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/1846d99185f8a0edaf65cfbf37439696.db /tmp $ sudo cp /srv/node/disk9/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/1846d99185f8a0edaf65cfbf37439696.db /tmp
$ sudo sqlite3 /tmp/1846d99185f8a0edaf65cfbf37439696.db $ sudo sqlite3 /tmp/1846d99185f8a0edaf65cfbf37439696.db
@ -156,7 +156,7 @@ check out the account: check out the account:
why the GET operations are returning 404, not found. Check the account why the GET operations are returning 404, not found. Check the account
delete date/time: delete date/time:
.. code:: .. code:: console
$ python $ python
@ -167,7 +167,7 @@ checkout the account:
Next, try to find the ``DELETE`` operation for this account in the proxy Next, try to find the ``DELETE`` operation for this account in the proxy
server logs: server logs:
.. code:: .. code:: console
$ PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -l <yourusername> -R ssh \ $ PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -l <yourusername> -R ssh \
-w <redacted>.68.[4-11,132-139],<redacted>.132.[4-11,132-139] \ -w <redacted>.68.[4-11,132-139],<redacted>.132.[4-11,132-139] \
@ -206,7 +206,7 @@ as follows:
Examine the object in question: Examine the object in question:
.. code:: .. code:: console
$ sudo -u swift /opt/hp/swift/bin/swift-direct head 132345678912345 container_name obj_name $ sudo -u swift /opt/hp/swift/bin/swift-direct head 132345678912345 container_name obj_name
@ -219,14 +219,14 @@ name of the objects this means it is a DLO. For example,
if ``X-Object-Manifest`` is ``container2/seg-blah``, list the contents if ``X-Object-Manifest`` is ``container2/seg-blah``, list the contents
of the container container2 as follows: of the container container2 as follows:
.. code:: .. code:: console
$ sudo -u swift /opt/hp/swift/bin/swift-direct show 132345678912345 container2 $ sudo -u swift /opt/hp/swift/bin/swift-direct show 132345678912345 container2
Pick out the objects whose names start with ``seg-blah``. Pick out the objects whose names start with ``seg-blah``.
Delete the segment objects as follows: Delete the segment objects as follows:
.. code:: .. code:: console
$ sudo -u swift /opt/hp/swift/bin/swift-direct delete 132345678912345 container2 seg-blah01 $ sudo -u swift /opt/hp/swift/bin/swift-direct delete 132345678912345 container2 seg-blah01
$ sudo -u swift /opt/hp/swift/bin/swift-direct delete 132345678912345 container2 seg-blah02 $ sudo -u swift /opt/hp/swift/bin/swift-direct delete 132345678912345 container2 seg-blah02
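
Where there are many segments, a small loop is less error-prone; a sketch
assuming the segment names from the listing above were saved to a file
``segments.txt``:

.. code:: console

   $ while read seg; do sudo -u swift /opt/hp/swift/bin/swift-direct delete 132345678912345 container2 "$seg"; done < segments.txt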
View File
@ -22,39 +22,45 @@ to understand and adding a bunch of new devices isn't really required
to implement a usable set of policies. to implement a usable set of policies.
1. To define your policies, add the following to your ``/etc/swift/swift.conf`` 1. To define your policies, add the following to your ``/etc/swift/swift.conf``
file:: file:
[storage-policy:0] .. code:: ini
name = gold
aliases = yellow, orange
default = yes
[storage-policy:1] [storage-policy:0]
name = silver name = gold
aliases = yellow, orange
default = yes
See :doc:`overview_policies` for detailed information on ``swift.conf`` policy [storage-policy:1]
options. name = silver
See :doc:`overview_policies` for detailed information on ``swift.conf`` policy
options.
2. To create the object ring for the silver policy (index 1), add the following 2. To create the object ring for the silver policy (index 1), add the following
to your ``bin/remakerings`` script and re-run it (your script may already have to your ``bin/remakerings`` script and re-run it (your script may already have
these changes):: these changes):
swift-ring-builder object-1.builder create 10 2 1 .. code:: shell
swift-ring-builder object-1.builder add r1z1-127.0.0.1:6210/sdb1 1
swift-ring-builder object-1.builder add r1z2-127.0.0.1:6220/sdb2 1
swift-ring-builder object-1.builder add r1z3-127.0.0.1:6230/sdb3 1
swift-ring-builder object-1.builder add r1z4-127.0.0.1:6240/sdb4 1
swift-ring-builder object-1.builder rebalance
Note that the reduced replication of the silver policy is only a function swift-ring-builder object-1.builder create 10 2 1
of the replication parameter in the ``swift-ring-builder create`` command swift-ring-builder object-1.builder add r1z1-127.0.0.1:6210/sdb1 1
and is not specified in ``/etc/swift/swift.conf``. swift-ring-builder object-1.builder add r1z2-127.0.0.1:6220/sdb2 1
swift-ring-builder object-1.builder add r1z3-127.0.0.1:6230/sdb3 1
swift-ring-builder object-1.builder add r1z4-127.0.0.1:6240/sdb4 1
swift-ring-builder object-1.builder rebalance
Note that the reduced replication of the silver policy is only a function
of the replication parameter in the ``swift-ring-builder create`` command
and is not specified in ``/etc/swift/swift.conf``.
3. Copy ``etc/container-reconciler.conf-sample`` to 3. Copy ``etc/container-reconciler.conf-sample`` to
``/etc/swift/container-reconciler.conf`` and fix the user option:: ``/etc/swift/container-reconciler.conf`` and fix the user option:
cp etc/container-reconciler.conf-sample /etc/swift/container-reconciler.conf .. code:: shell
sed -i "s/# user.*/user = $USER/g" /etc/swift/container-reconciler.conf
cp etc/container-reconciler.conf-sample /etc/swift/container-reconciler.conf
sed -i "s/# user.*/user = $USER/g" /etc/swift/container-reconciler.conf
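
Before moving on, you can sanity-check the new ring; inspecting the
builder file from step 2 shows its replica count and devices:

.. code:: shell

   swift-ring-builder object-1.builder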
------------------ ------------------
Using Policies Using Policies
@ -68,82 +74,104 @@ Storage Policies effect placement of data in Swift.
1. We will be using the list_endpoints middleware to confirm object locations, 1. We will be using the list_endpoints middleware to confirm object locations,
so enable that now in your ``proxy-server.conf`` file by adding it to the pipeline so enable that now in your ``proxy-server.conf`` file by adding it to the pipeline
and including the filter section as shown below (be sure to restart your proxy and including the filter section as shown below (be sure to restart your proxy
after making these changes):: after making these changes):
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk \ .. code:: ini
slo dlo ratelimit crossdomain list-endpoints tempurl tempauth staticweb \
container-quotas account-quotas proxy-logging proxy-server
[filter:list-endpoints] pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk \
use = egg:swift#list_endpoints slo dlo ratelimit crossdomain list-endpoints tempurl tempauth staticweb \
container-quotas account-quotas proxy-logging proxy-server
2. Check to see that your policies are reported via /info:: [filter:list-endpoints]
use = egg:swift#list_endpoints
swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing info 2. Check to see that your policies are reported via /info:
You should see this: (only showing the policy output here):: .. code:: shell
policies: [{'aliases': 'gold, yellow, orange', 'default': True, swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing info
'name': 'gold'}, {'aliases': 'silver', 'name': 'silver'}]
You should see this: (only showing the policy output here):
.. code:: none
policies: [{'aliases': 'gold, yellow, orange', 'default': True,
'name': 'gold'}, {'aliases': 'silver', 'name': 'silver'}]
3. Now create a container without specifying a policy; it will use the 3. Now create a container without specifying a policy; it will use the
default, 'gold' and then put a test object in it (create the file ``file0.txt`` default, 'gold' and then put a test object in it (create the file ``file0.txt``
with your favorite editor with some content):: with your favorite editor with some content):
curl -v -X PUT -H 'X-Auth-Token: <your auth token>' \ .. code:: shell
http://127.0.0.1:8080/v1/AUTH_test/myCont0
curl -X PUT -v -T file0.txt -H 'X-Auth-Token: <your auth token>' \
http://127.0.0.1:8080/v1/AUTH_test/myCont0/file0.txt
4. Now confirm placement of the object with the :ref:`list_endpoints` middleware:: curl -v -X PUT -H 'X-Auth-Token: <your auth token>' \
http://127.0.0.1:8080/v1/AUTH_test/myCont0
curl -X PUT -v -T file0.txt -H 'X-Auth-Token: <your auth token>' \
http://127.0.0.1:8080/v1/AUTH_test/myCont0/file0.txt
curl -X GET -v http://127.0.0.1:8080/endpoints/AUTH_test/myCont0/file0.txt 4. Now confirm placement of the object with the :ref:`list_endpoints` middleware:
You should see this: (note placement on expected devices):: .. code:: shell
["http://127.0.0.1:6230/sdb3/761/AUTH_test/myCont0/file0.txt", curl -X GET -v http://127.0.0.1:8080/endpoints/AUTH_test/myCont0/file0.txt
"http://127.0.0.1:6210/sdb1/761/AUTH_test/myCont0/file0.txt",
"http://127.0.0.1:6220/sdb2/761/AUTH_test/myCont0/file0.txt"]
5. Create a container using policy 'silver' and put a different file in it:: You should see this: (note placement on expected devices):
curl -v -X PUT -H 'X-Auth-Token: <your auth token>' -H \ .. code:: json
"X-Storage-Policy: silver" \
http://127.0.0.1:8080/v1/AUTH_test/myCont1
curl -X PUT -v -T file1.txt -H 'X-Auth-Token: <your auth token>' \
http://127.0.0.1:8080/v1/AUTH_test/myCont1/
6. Confirm placement of the object for policy 'silver':: ["http://127.0.0.1:6230/sdb3/761/AUTH_test/myCont0/file0.txt",
"http://127.0.0.1:6210/sdb1/761/AUTH_test/myCont0/file0.txt",
"http://127.0.0.1:6220/sdb2/761/AUTH_test/myCont0/file0.txt"]
curl -X GET -v http://127.0.0.1:8080/endpoints/AUTH_test/myCont1/file1.txt 5. Create a container using policy 'silver' and put a different file in it:
You should see this: (note placement on expected devices):: .. code:: shell
["http://127.0.0.1:6210/sdb1/32/AUTH_test/myCont1/file1.txt", curl -v -X PUT -H 'X-Auth-Token: <your auth token>' -H \
"http://127.0.0.1:6240/sdb4/32/AUTH_test/myCont1/file1.txt"] "X-Storage-Policy: silver" \
http://127.0.0.1:8080/v1/AUTH_test/myCont1
curl -X PUT -v -T file1.txt -H 'X-Auth-Token: <your auth token>' \
http://127.0.0.1:8080/v1/AUTH_test/myCont1/
6. Confirm placement of the object for policy 'silver':
.. code:: shell
curl -X GET -v http://127.0.0.1:8080/endpoints/AUTH_test/myCont1/file1.txt
You should see this: (note placement on expected devices):
.. code:: json
["http://127.0.0.1:6210/sdb1/32/AUTH_test/myCont1/file1.txt",
"http://127.0.0.1:6240/sdb4/32/AUTH_test/myCont1/file1.txt"]
7. Confirm account information with HEAD; make sure that your container-updater 7. Confirm account information with HEAD; make sure that your container-updater
service is running and has executed once since you performed the PUTs or the service is running and has executed once since you performed the PUTs or the
account database won't be updated yet:: account database won't be updated yet:
curl -i -X HEAD -H 'X-Auth-Token: <your auth token>' \ .. code:: shell
http://127.0.0.1:8080/v1/AUTH_test
You should see something like this (note that total and per policy stats curl -i -X HEAD -H 'X-Auth-Token: <your auth token>' \
object sizes will vary):: http://127.0.0.1:8080/v1/AUTH_test
HTTP/1.1 204 No Content You should see something like this (note that total and per policy stats
Content-Length: 0 object sizes will vary):
X-Account-Object-Count: 2
X-Account-Bytes-Used: 174 .. code:: none
X-Account-Container-Count: 2
X-Account-Storage-Policy-Gold-Object-Count: 1 HTTP/1.1 204 No Content
X-Account-Storage-Policy-Gold-Bytes-Used: 84 Content-Length: 0
X-Account-Storage-Policy-Silver-Object-Count: 1 X-Account-Object-Count: 2
X-Account-Storage-Policy-Silver-Bytes-Used: 90 X-Account-Bytes-Used: 174
X-Timestamp: 1397230339.71525 X-Account-Container-Count: 2
Content-Type: text/plain; charset=utf-8 X-Account-Storage-Policy-Gold-Object-Count: 1
Accept-Ranges: bytes X-Account-Storage-Policy-Gold-Bytes-Used: 84
X-Trans-Id: tx96e7496b19bb44abb55a3-0053482c75 X-Account-Storage-Policy-Silver-Object-Count: 1
X-Openstack-Request-Id: tx96e7496b19bb44abb55a3-0053482c75 X-Account-Storage-Policy-Silver-Bytes-Used: 90
Date: Fri, 11 Apr 2014 17:55:01 GMT X-Timestamp: 1397230339.71525
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes
X-Trans-Id: tx96e7496b19bb44abb55a3-0053482c75
X-Openstack-Request-Id: tx96e7496b19bb44abb55a3-0053482c75
Date: Fri, 11 Apr 2014 17:55:01 GMT
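
As a final check, reading the objects back confirms that both policies
serve data end to end (same token placeholder as above):

.. code:: shell

   curl -H 'X-Auth-Token: <your auth token>' \
       http://127.0.0.1:8080/v1/AUTH_test/myCont0/file0.txt
   curl -H 'X-Auth-Token: <your auth token>' \
       http://127.0.0.1:8080/v1/AUTH_test/myCont1/file1.txt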
View File
@ -92,7 +92,7 @@ class BaseDecrypterContext(CryptoWSGIContext):
the value itself, otherwise return the value unmodified. the value itself, otherwise return the value unmodified.
A value should either be a string that does not contain the ';' A value should either be a string that does not contain the ';'
character or should be of the form: character or should be of the form::
<base64-encoded ciphertext>;swift_meta=<crypto meta> <base64-encoded ciphertext>;swift_meta=<crypto meta>
View File
@ -208,10 +208,10 @@ class BaseKeyMaster(object):
This provides some basic helpers for: This provides some basic helpers for:
- loading from a separate config path, - loading from a separate config path,
- deriving keys based on path, and - deriving keys based on path, and
- installing a ``swift.callback.fetch_crypto_keys`` hook - installing a ``swift.callback.fetch_crypto_keys`` hook
in the request environment. in the request environment.
Subclasses should define ``log_route``, ``keymaster_opts``, and Subclasses should define ``log_route``, ``keymaster_opts``, and
``keymaster_conf_section`` attributes, and implement the ``keymaster_conf_section`` attributes, and implement the
View File
@ -178,7 +178,7 @@ class RingBuilder(object):
@contextmanager @contextmanager
def debug(self): def debug(self):
""" """
Temporarily enables debug logging, useful in tests, e.g. Temporarily enables debug logging, useful in tests, e.g.::
with rb.debug(): with rb.debug():
rb.rebalance() rb.rebalance()
View File
@ -1285,7 +1285,7 @@ class Timestamp(object):
""" """
Get an isoformat string representation of the 'normal' part of the Get an isoformat string representation of the 'normal' part of the
Timestamp with microsecond precision and no trailing timezone, for Timestamp with microsecond precision and no trailing timezone, for
example: example::
1970-01-01T00:00:00.000000 1970-01-01T00:00:00.000000
@ -2518,12 +2518,12 @@ def get_hub():
Another note about epoll: it's hard to use when forking. epoll works Another note about epoll: it's hard to use when forking. epoll works
like so: like so:
* create an epoll instance: efd = epoll_create(...) * create an epoll instance: ``efd = epoll_create(...)``
* register file descriptors of interest with epoll_ctl(efd, * register file descriptors of interest with
EPOLL_CTL_ADD, fd, ...) ``epoll_ctl(efd, EPOLL_CTL_ADD, fd, ...)``
* wait for events with epoll_wait(efd, ...) * wait for events with ``epoll_wait(efd, ...)``
If you fork, you and all your child processes end up using the same If you fork, you and all your child processes end up using the same
epoll instance, and everyone becomes confused. It is possible to use epoll instance, and everyone becomes confused. It is possible to use
@ -6437,7 +6437,7 @@ def make_db_file_path(db_path, epoch):
def get_db_files(db_path): def get_db_files(db_path):
""" """
Given the path to a db file, return a sorted list of all valid db files Given the path to a db file, return a sorted list of all valid db files
that actually exist in that path's dir. A valid db filename has the form: that actually exist in that path's dir. A valid db filename has the form::
<hash>[_<epoch>].db <hash>[_<epoch>].db
View File
@ -322,20 +322,20 @@ class ContainerBroker(DatabaseBroker):
Note that this may involve multiple on-disk DB files if the container Note that this may involve multiple on-disk DB files if the container
becomes sharded: becomes sharded:
* :attr:`_db_file` is the path to the legacy container DB name, i.e. * :attr:`_db_file` is the path to the legacy container DB name, i.e.
``<hash>.db``. This file should exist for an initialised broker that ``<hash>.db``. This file should exist for an initialised broker that
has never been sharded, but will not exist once a container has been has never been sharded, but will not exist once a container has been
sharded. sharded.
* :attr:`db_files` is a list of existing db files for the broker. This * :attr:`db_files` is a list of existing db files for the broker. This
list should have at least one entry for an initialised broker, and list should have at least one entry for an initialised broker, and
should have two entries while a broker is in SHARDING state. should have two entries while a broker is in SHARDING state.
* :attr:`db_file` is the path to whichever db is currently authoritative * :attr:`db_file` is the path to whichever db is currently authoritative
for the container. Depending on the container's state, this may not be for the container. Depending on the container's state, this may not be
the same as the ``db_file`` argument given to :meth:`~__init__`, unless the same as the ``db_file`` argument given to :meth:`~__init__`, unless
``force_db_file`` is True in which case :attr:`db_file` is always equal ``force_db_file`` is True in which case :attr:`db_file` is always equal
to the ``db_file`` argument given to :meth:`~__init__`. to the ``db_file`` argument given to :meth:`~__init__`.
* :attr:`pending_file` is always equal to :attr:`_db_file` extended with * :attr:`pending_file` is always equal to :attr:`_db_file` extended with
``.pending``, i.e. ``<hash>.db.pending``. ``.pending``, i.e. ``<hash>.db.pending``.
""" """
db_type = 'container' db_type = 'container'
db_contains_type = 'object' db_contains_type = 'object'
View File
@ -502,43 +502,43 @@ class CleavingContext(object):
Encapsulates metadata associated with the process of cleaving a retiring Encapsulates metadata associated with the process of cleaving a retiring
DB. This metadata includes: DB. This metadata includes:
* ``ref``: The unique part of the key that is used when persisting a * ``ref``: The unique part of the key that is used when persisting a
serialized ``CleavingContext`` as sysmeta in the DB. The unique part of serialized ``CleavingContext`` as sysmeta in the DB. The unique part of
the key is based off the DB id. This ensures that each context is the key is based off the DB id. This ensures that each context is
associated with a specific DB file. The unique part of the key is associated with a specific DB file. The unique part of the key is
included in the ``CleavingContext`` but should not be modified by any included in the ``CleavingContext`` but should not be modified by any
caller. caller.
* ``cursor``: the upper bound of the last shard range to have been * ``cursor``: the upper bound of the last shard range to have been
cleaved from the retiring DB. cleaved from the retiring DB.
* ``max_row``: the retiring DB's max row; this is updated to the value of * ``max_row``: the retiring DB's max row; this is updated to the value of
the retiring DB's ``max_row`` every time a ``CleavingContext`` is the retiring DB's ``max_row`` every time a ``CleavingContext`` is
loaded for that DB, and may change during the process of cleaving the loaded for that DB, and may change during the process of cleaving the
DB. DB.
* ``cleave_to_row``: the value of ``max_row`` at the moment when cleaving * ``cleave_to_row``: the value of ``max_row`` at the moment when cleaving
starts for the DB. When cleaving completes (i.e. the cleave cursor has starts for the DB. When cleaving completes (i.e. the cleave cursor has
reached the upper bound of the cleaving namespace), ``cleave_to_row`` reached the upper bound of the cleaving namespace), ``cleave_to_row``
is compared to the current ``max_row``: if the two values are not equal is compared to the current ``max_row``: if the two values are not equal
then rows have been added to the DB which may not have been cleaved, in then rows have been added to the DB which may not have been cleaved, in
which case the ``CleavingContext`` is ``reset`` and cleaving is which case the ``CleavingContext`` is ``reset`` and cleaving is
re-started. re-started.
* ``last_cleave_to_row``: the minimum DB row from which cleaving should * ``last_cleave_to_row``: the minimum DB row from which cleaving should
select objects to cleave; this is initially set to None i.e. all rows select objects to cleave; this is initially set to None i.e. all rows
should be cleaved. If the ``CleavingContext`` is ``reset`` then the should be cleaved. If the ``CleavingContext`` is ``reset`` then the
``last_cleave_to_row`` is set to the current value of ``last_cleave_to_row`` is set to the current value of
``cleave_to_row``, which in turn is set to the current value of ``cleave_to_row``, which in turn is set to the current value of
``max_row`` by a subsequent call to ``start``. The repeated cleaving ``max_row`` by a subsequent call to ``start``. The repeated cleaving
therefore only selects objects in rows greater than the therefore only selects objects in rows greater than the
``last_cleave_to_row``, rather than cleaving the whole DB again. ``last_cleave_to_row``, rather than cleaving the whole DB again.
* ``ranges_done``: the number of shard ranges that have been cleaved from * ``ranges_done``: the number of shard ranges that have been cleaved from
the retiring DB. the retiring DB.
* ``ranges_todo``: the number of shard ranges that are yet to be * ``ranges_todo``: the number of shard ranges that are yet to be
cleaved from the retiring DB. cleaved from the retiring DB.
""" """
def __init__(self, ref, cursor='', max_row=None, cleave_to_row=None, def __init__(self, ref, cursor='', max_row=None, cleave_to_row=None,
last_cleave_to_row=None, cleaving_done=False, last_cleave_to_row=None, cleaving_done=False,
View File
@ -96,28 +96,28 @@ class ContainerSync(Daemon):
An example may help. Assume replica count is 3 and perfectly matching
ROWIDs starting at 1.

First sync run, database has 6 rows:

* SyncPoint1 starts as -1.
* SyncPoint2 starts as -1.
* No rows between points, so no "all updates" rows.
* Six rows newer than SyncPoint1, so a third of the rows are sent
  by node 1, another third by node 2, remaining third by node 3.
* SyncPoint1 is set as 6 (the newest ROWID known).
* SyncPoint2 is left as -1 since no "all updates" rows were synced.

Next sync run, database has 12 rows:

* SyncPoint1 starts as 6.
* SyncPoint2 starts as -1.
* The rows between -1 and 6 all trigger updates (most of which
  should short-circuit on the remote end as having already been
  done).
* Six more rows newer than SyncPoint1, so a third of the rows are
  sent by node 1, another third by node 2, remaining third by node
  3.
* SyncPoint1 is set as 12 (the newest ROWID known).
* SyncPoint2 is set as 6 (the newest "all updates" ROWID).

In this way, under normal circumstances each node sends its share of
updates each run and just sends a batch of older updates to ensure nothing
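A runnable sketch of the split in this example may help; the round-robin
assignment below is purely illustrative (the real container-sync code
chooses rows differently), but the arithmetic matches the walkthrough above:

.. code:: python

    # Illustrative only: each of 3 nodes takes a distinct third of the
    # rows newer than SyncPoint1 (real container-sync selects differently).
    def rows_for_node(rows, sync_point1, node_index, replicas=3):
        newer = [row for row in rows if row > sync_point1]
        return newer[node_index::replicas]

    rows = list(range(1, 7))  # first run: ROWIDs 1..6, SyncPoint1 == -1
    for i in range(3):
        print('node', i + 1, 'sends rows', rows_for_node(rows, -1, i))
    # node 1 sends rows [1, 4]
    # node 2 sends rows [2, 5]
    # node 3 sends rows [3, 6]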


@@ -116,17 +116,17 @@ class Receiver(object):
The general process inside an SSYNC request is:

1. Initialize the request: Basic request validation, mount check,
   acquire semaphore lock, etc.
2. Missing check: Sender sends the hashes and timestamps of
   the object information it can send, receiver sends back
   the hashes it wants (doesn't have or has an older
   timestamp).
3. Updates: Sender sends the object information requested.
4. Close down: Release semaphore lock, etc.
"""
def __init__(self, app, request):
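As a rough, hypothetical outline of these four phases (stubbed helpers and
simplified error handling; the real receiver streams each phase over the
replication connection rather than calling tidy functions):

.. code:: python

    # Hypothetical outline of the four SSYNC phases; not the real Receiver.
    import threading

    replication_lock = threading.Semaphore(4)  # e.g. a concurrency limit

    def missing_check(offered):
        # offered: {object_hash: timestamp}; pretend everything local
        # has timestamp 5, so only newer offers are wanted.
        local_timestamp = 5
        return [h for h, ts in offered.items() if ts > local_timestamp]

    def handle_ssync(offered):
        if not replication_lock.acquire(blocking=False):   # 1. initialize
            raise RuntimeError('too many replication requests')
        try:
            wanted = missing_check(offered)                # 2. missing check
            for object_hash in wanted:                     # 3. updates
                print('routing subrequests for', object_hash)
            return wanted
        finally:
            replication_lock.release()                     # 4. close down

    print(handle_ssync({'abc': 7, 'def': 3}))  # -> ['abc']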
@@ -346,29 +346,29 @@ class Receiver(object):
The process is generally:

1. Sender sends ``:MISSING_CHECK: START`` and begins
   sending `hash timestamp` lines.
2. Receiver gets ``:MISSING_CHECK: START`` and begins
   reading the `hash timestamp` lines, collecting the
   hashes of those it desires.
3. Sender sends ``:MISSING_CHECK: END``.
4. Receiver gets ``:MISSING_CHECK: END``, responds with
   ``:MISSING_CHECK: START``, followed by the list of
   <wanted_hash> specifiers it collected as being wanted
   (one per line), ``:MISSING_CHECK: END``, and flushes any
   buffers.

   Each <wanted_hash> specifier has the form <hash>[ <parts>] where
   <parts> is a string containing characters 'd' and/or 'm'
   indicating that only the data or the metadata part of the object,
   respectively, is required to be sync'd.
5. Sender gets ``:MISSING_CHECK: START`` and reads the list
   of hashes desired by the receiver until reading
   ``:MISSING_CHECK: END``.

The collection and then response is so the sender doesn't
have to read while it writes to ensure network buffers don't
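A sketch of what each side emits, with a made-up hash and timestamp and
simplified line framing (the real protocol is chunked over the replication
connection):

.. code:: python

    # Made-up data; line framing simplified for illustration.
    def sender_missing_check(candidates):
        # candidates: iterable of (object_hash, timestamp) offers
        lines = [':MISSING_CHECK: START']
        lines += ['%s %s' % (h, ts) for h, ts in candidates]
        lines.append(':MISSING_CHECK: END')
        return '\n'.join(lines)

    def receiver_wanted(wanted):
        # wanted: iterable of (object_hash, parts), with parts drawn
        # from '', 'd', 'm' or 'dm' as described above
        lines = [':MISSING_CHECK: START']
        lines += [('%s %s' % (h, parts)).rstrip() for h, parts in wanted]
        lines.append(':MISSING_CHECK: END')
        return '\n'.join(lines)

    offer = [('d41d8cd98f00b204e9800998ecf8427e', '1659458373.63520')]
    print(sender_missing_check(offer))
    print(receiver_wanted([('d41d8cd98f00b204e9800998ecf8427e', 'dm')]))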
@@ -413,18 +413,18 @@ class Receiver(object):
The process is generally:

1. Sender sends ``:UPDATES: START`` and begins sending the
   PUT and DELETE subrequests.
2. Receiver gets ``:UPDATES: START`` and begins routing the
   subrequests to the object server.
3. Sender sends ``:UPDATES: END``.
4. Receiver gets ``:UPDATES: END`` and sends ``:UPDATES:
   START`` and ``:UPDATES: END`` (assuming no errors).
5. Sender gets ``:UPDATES: START`` and ``:UPDATES: END``.

If too many subrequests fail, as configured by
replication_failure_threshold and replication_failure_ratio,
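And a similar sketch for the updates phase, with a made-up DELETE
subrequest (real subrequests carry full headers and, for PUTs, object
data):

.. code:: python

    # Made-up subrequest; framing simplified for illustration.
    def sender_updates(subrequests):
        lines = [':UPDATES: START']
        for method, path, headers in subrequests:
            lines.append('%s %s' % (method, path))
            lines += ['%s: %s' % item for item in headers.items()]
            lines.append('')  # blank line terminates the subrequest headers
        lines.append(':UPDATES: END')
        return '\n'.join(lines)

    print(sender_updates([
        ('DELETE', '/a/c/o', {'X-Timestamp': '1659458373.63520'}),
    ]))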


@@ -86,15 +86,15 @@ class BucketizedUpdateSkippingLimiter(object):
The iterator increments stats as follows:

* The `deferrals` stat is incremented for each update that is
  rate-limited. Note that an individual update is rate-limited at most
  once.
* The `skips` stat is incremented for each rate-limited update that is
  not eventually yielded. This includes updates that are evicted from the
  deferral queue and all updates that remain in the deferral queue when
  ``drain_until`` time is reached and the iterator terminates.
* The `drains` stat is incremented for each rate-limited update that is
  eventually yielded.

Consequently, when this iterator terminates, the sum of `skips` and
`drains` is equal to the number of `deferrals`.
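For example, with made-up counts:

.. code:: python

    # Made-up counts illustrating the invariant: every deferred update
    # is eventually either drained (yielded late) or skipped.
    deferrals = 10  # updates that were rate-limited
    drains = 6      # deferred updates eventually yielded
    skips = 4       # deferred updates evicted or left queued at drain_until
    assert skips + drains == deferrals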
@@ -219,12 +219,12 @@ class SweepStats(object):
""" """
Stats bucket for an update sweep Stats bucket for an update sweep
A measure of the rate at which updates are being rate-limited is: A measure of the rate at which updates are being rate-limited is::
deferrals / (deferrals + successes + failures - drains) deferrals / (deferrals + successes + failures - drains)
A measure of the rate at which updates are not being sent during a sweep A measure of the rate at which updates are not being sent during a sweep
is: is::
skips / (skips + successes + failures) skips / (skips + successes + failures)
""" """