Various doc formatting cleanups

* Get rid of a bunch of accidental blockquote formatting
* Always declare a lexer to use for ``.. code::`` blocks

Change-Id: I8940e75b094843e542e815dde6b6be4740751813
parent 7e5c78423f
commit 52a4fe37aa
@@ -324,14 +324,14 @@ In order to prevent rsync replication to specific drives, firstly
 setup ``rsync_module`` per disk in your ``object-replicator``.
 Set this in ``object-server.conf``:

-.. code::
+.. code:: cfg

    [object-replicator]
    rsync_module = {replication_ip}::object_{device}

 Set the individual drives in ``rsync.conf``. For example:

-.. code::
+.. code:: cfg

    [object_sda]
    max connections = 4
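For context on the ``rsync_module`` setting above: it is a template that the object-replicator expands per device. A minimal sketch of that substitution, with illustrative values (the real ones come from the ring and the node's drive list):

.. code:: python

   # Illustrative values; in a real cluster these come from the ring.
   replication_ip = "10.0.0.5"
   device = "sda"
   rsync_module = "{replication_ip}::object_{device}".format(
       replication_ip=replication_ip, device=device)
   assert rsync_module == "10.0.0.5::object_sda"
   # This matches the per-disk [object_sda] section in rsync.conf above.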
@@ -387,7 +387,7 @@ monitoring solution to achieve this. The following is an example script:
 For the above script to work, ensure ``/etc/rsync.d/`` conf files are
 included, by specifying ``&include`` in your ``rsync.conf`` file:

-.. code::
+.. code:: cfg

    &include /etc/rsync.d

@@ -395,7 +395,7 @@ Use this in conjunction with a cron job to periodically run the script, for example:
 .. highlight:: none

-.. code::
+.. code:: cfg

    # /etc/cron.d/devicecheck
    * * * * * root /some/path/to/disable_rsync.py
@@ -12,13 +12,13 @@ that does not support the ``/info`` request.
 To use the ``/info`` request, send a **GET** request using the ``/info``
 path to the Object Store endpoint as shown in this example:

-.. code::
+.. code:: console

    # curl https://storage.clouddrive.com/info

 This example shows a truncated response body:

-.. code::
+.. code:: console

    {
       "swift":{
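The same ``/info`` request can be issued from Python; a minimal sketch using only the standard library (the endpoint is the placeholder host from the example above):

.. code:: python

   import json
   from urllib.request import urlopen

   # Placeholder endpoint from the example above.
   with urlopen("https://storage.clouddrive.com/info") as resp:
       capabilities = json.load(resp)
   # The response is a JSON document keyed by middleware/capability name.
   print(capabilities.get("swift", {}).get("version"))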
@@ -29,9 +29,8 @@ The format of the form **POST** request is:
 **Example 1.14. Form POST format**

-.. code::
+.. code:: xml

    <![CDATA[
    <form action="SWIFT_URL"
          method="POST"
          enctype="multipart/form-data">
@@ -44,7 +43,6 @@ The format of the form **POST** request is:
      <br/>
      <input type="submit"/>
    </form>
    ]]>
-
 **action="SWIFT_URL"**
@@ -53,14 +51,14 @@ Set to full URL where the objects are to be uploaded. The names of
 uploaded files are appended to the specified *SWIFT_URL*. So, you
 can upload directly to the root of a container with a URL like:

-.. code::
+.. code:: none

    https://swift-cluster.example.com/v1/my_account/container/

 Optionally, you can include an object prefix to separate uploads, such
 as:

-.. code::
+.. code:: none

    https://swift-cluster.example.com/v1/my_account/container/OBJECT_PREFIX
@@ -123,7 +121,7 @@ follow the file attributes are ignored.
 Optionally, if you want the uploaded files to be temporary, you can set the x-delete-at or x-delete-after attributes by adding one of these as a form input:

-.. code::
+.. code:: xml

    <input type="hidden" name="x_delete_at" value="<unix-timestamp>" />
    <input type="hidden" name="x_delete_after" value="<seconds>" />
@@ -169,7 +167,7 @@ The following example code generates a signature for use with form
 **Example 1.15. HMAC-SHA1 signature for form POST**

-.. code::
+.. code:: python

    import hmac
    from hashlib import sha1
@@ -198,13 +196,13 @@ being uploaded is called ``flower.jpg``.
 This example uses the **swift-form-signature** script to compute the
 ``expires`` and ``signature`` values.

-.. code::
+.. code:: console

    $ bin/swift-form-signature /v1/my_account/container/photos/ https://example.com/done.html 5373952000 1 200 MYKEY
    Expires: 1390825338
    Signature: 35129416ebda2f1a21b3c2b8939850dfc63d8f43

-.. code::
+.. code:: console

    $ curl -i https://swift-cluster.example.com/v1/my_account/container/photos/ -X POST \
      -F max_file_size=5373952000 -F max_file_count=1 -F expires=1390825338 \
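The helper script's output can also be reproduced directly. A minimal Python 3 sketch under the same inputs as the example above (the five newline-joined fields are what the form POST middleware signs; the literal values are taken from the command shown):

.. code:: python

   import hmac
   from hashlib import sha1

   path = '/v1/my_account/container/photos/'   # upload target
   redirect = 'https://example.com/done.html'  # redirect target from example
   max_file_size = 5373952000
   max_file_count = 1
   expires = 1390825338                        # from the script output above
   key = 'MYKEY'                               # the account Temp-URL key

   # The form POST signature covers these five fields, newline-separated.
   hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect, max_file_size,
                                       max_file_count, expires)
   signature = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
   print(signature)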
@@ -82,7 +82,7 @@ This example shows three segment objects. You can use several containers
 and the object names do not have to conform to a specific pattern, in
 contrast to dynamic large objects.

-.. code::
+.. code:: json

    [
        {
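The manifest body is truncated in the hunk above. For orientation, a static large object manifest is a JSON list of segment descriptors; a rough sketch (the paths, etags, and sizes here are made up):

.. code:: python

   import json

   # Hypothetical segment descriptors: each entry names a segment object,
   # its ETag, and its size in bytes.
   manifest = [
       {"path": "/segments_container/slo/part-001",
        "etag": "ed076287532e86365e841e92bfc50d8c",
        "size_bytes": 1048576},
       {"path": "/other_container/slo/part-002",
        "etag": "d41d8cd98f00b204e9800998ecf8427e",
        "size_bytes": 524288},
   ]
   print(json.dumps(manifest, indent=2))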
@@ -192,7 +192,7 @@ manifest is still available to download the first set of segments.
 **Example Upload segment of large object request: HTTP**

-.. code::
+.. code:: none

    PUT /{api_version}/{account}/{container}/{object} HTTP/1.1
    Host: storage.clouddrive.com
@@ -214,7 +214,7 @@ uploading the manifest.
 **Example Upload next segment of large object request: HTTP**

-.. code::
+.. code:: none

    PUT /{api_version}/{account}/{container}/{object} HTTP/1.1
    Host: storage.clouddrive.com
@@ -232,7 +232,7 @@ subsequent additional segments.
 **Example Upload manifest request: HTTP**

-.. code::
+.. code:: none

    PUT /{api_version}/{account}/{container}/{object} HTTP/1.1
    Host: storage.clouddrive.com
@@ -244,7 +244,7 @@ subsequent additional segments.
 **Example Upload manifest response: HTTP**

-.. code::
+.. code:: none

    [...]
@@ -97,14 +97,14 @@ interact with the Object Storage API.
 Specifically, the resource path reflects this structure and has this
 format:

-.. code::
+.. code:: none

    /v1/{account}/{container}/{object}

 For example, for the ``flowers/rose.jpg`` object in the ``images``
 container in the ``12345678912345`` account, the resource path is:

-.. code::
+.. code:: none

    /v1/12345678912345/images/flowers/rose.jpg
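As a trivial illustration, the resource path can be assembled from its three components (values taken from the example above):

.. code:: python

   account, container, obj = "12345678912345", "images", "flowers/rose.jpg"
   path = "/v1/%s/%s/%s" % (account, container, obj)
   assert path == "/v1/12345678912345/images/flowers/rose.jpg"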
@@ -133,7 +133,7 @@ parameter ``reverse``, noting that your marker and end_markers should be
 switched when applied to a reverse listing. I.e., for a list of objects
 ``[a, b, c, d, e]`` the non-reversed could be:

-.. code::
+.. code:: none

    /v1/{account}/{container}/?marker=a&end_marker=d
    b
@@ -141,7 +141,7 @@ switched when applied to a reverse listing. I.e., for a list of objects
 However, when reversed marker and end_marker are applied to a reversed list:

-.. code::
+.. code:: none

    /v1/{account}/{container}/?marker=d&end_marker=a&reverse=on
    c
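A sketch of building both query strings from Python, using only the standard library (the account and container names are placeholders):

.. code:: python

   from urllib.parse import urlencode

   base = "/v1/AUTH_test/container/"  # placeholder account and container
   # Forward listing: objects after marker 'a' up to end_marker 'd'.
   forward = base + "?" + urlencode({"marker": "a", "end_marker": "d"})
   # Reversed listing: swap marker and end_marker, and add reverse=on.
   reverse = base + "?" + urlencode(
       {"marker": "d", "end_marker": "a", "reverse": "on"})
   print(forward)
   print(reverse)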
@@ -46,37 +46,37 @@ container, overwriting the current version.
 Example Using ``X-Versions-Location``
 -------------------------------------

 #. Create the ``current`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/current -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token" -H "X-Versions-Location: archive"

-   .. code::
+   .. code:: console

       HTTP/1.1 201 Created
       Content-Length: 0
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: txb91810fb717347d09eec8-0052e18997
       X-Openstack-Request-Id: txb91810fb717347d09eec8-0052e18997
       Date: Thu, 23 Jan 2014 21:28:55 GMT

 #. Create the first version of an object in the ``current`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/current/my_object --data-binary 1 -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 201 Created
       Last-Modified: Thu, 23 Jan 2014 21:31:22 GMT
       Content-Length: 0
       Etag: d41d8cd98f00b204e9800998ecf8427e
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: tx5992d536a4bd4fec973aa-0052e18a2a
       X-Openstack-Request-Id: tx5992d536a4bd4fec973aa-0052e18a2a
       Date: Thu, 23 Jan 2014 21:31:22 GMT

    Nothing is written to the non-current version container when you
    initially **PUT** an object in the ``current`` container. However,
@@ -85,9 +85,9 @@ Example Using ``X-Versions-Location``
    These non-current versions are named as follows:

-   .. code::
+   .. code:: none

       <length><object_name>/<timestamp>

    Where ``length`` is the 3-character, zero-padded hexadecimal
    character length of the object, ``<object_name>`` is the object name,
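As a rough illustration of this naming scheme, the archive name for ``my_object`` could be computed like so (a sketch only; Swift builds these names server-side):

.. code:: python

   name = "my_object"
   timestamp = "1390512682.92052"
   # 3-character zero-padded hex length of the name, then the name itself,
   # then a slash and the original object's timestamp.
   archived = "%03x%s/%s" % (len(name), name, timestamp)
   assert archived == "009my_object/1390512682.92052"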
@@ -96,20 +96,20 @@ Example Using ``X-Versions-Location``
 #. Create a second version of the object in the ``current`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/current/my_object --data-binary 2 -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 201 Created
       Last-Modified: Thu, 23 Jan 2014 21:41:32 GMT
       Content-Length: 0
       Etag: d41d8cd98f00b204e9800998ecf8427e
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: tx468287ce4fc94eada96ec-0052e18c8c
       X-Openstack-Request-Id: tx468287ce4fc94eada96ec-0052e18c8c
       Date: Thu, 23 Jan 2014 21:41:32 GMT

 #. Issue a **GET** request to a versioned object to get the current
    version of the object. You do not have to do any request redirects or
@@ -117,24 +117,24 @@ Example Using ``X-Versions-Location``
    List older versions of the object in the ``archive`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/archive?prefix=009my_object -X GET -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 200 OK
       Content-Length: 30
       X-Container-Object-Count: 1
       Accept-Ranges: bytes
       X-Timestamp: 1390513280.79684
       X-Container-Bytes-Used: 0
       Content-Type: text/plain; charset=utf-8
       X-Trans-Id: tx9a441884997542d3a5868-0052e18d8e
       X-Openstack-Request-Id: tx9a441884997542d3a5868-0052e18d8e
       Date: Thu, 23 Jan 2014 21:45:50 GMT

       009my_object/1390512682.92052

 .. note::
    A **POST** request to a versioned object updates only the metadata
@@ -145,38 +145,38 @@ Example Using ``X-Versions-Location``
    current version of the object and replace it with the next-most
    current version in the non-current container.

-   .. code::
+   .. code:: console

       # curl -i $publicURL/current/my_object -X DELETE -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 204 No Content
       Content-Length: 0
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: tx006d944e02494e229b8ee-0052e18edd
       X-Openstack-Request-Id: tx006d944e02494e229b8ee-0052e18edd
       Date: Thu, 23 Jan 2014 21:51:25 GMT

    List objects in the ``archive`` container to show that the archived
    object was moved back to the ``current`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/archive?prefix=009my_object -X GET -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 204 No Content
       Content-Length: 0
       X-Container-Object-Count: 0
       Accept-Ranges: bytes
       X-Timestamp: 1390513280.79684
       X-Container-Bytes-Used: 0
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: tx044f2a05f56f4997af737-0052e18eed
       X-Openstack-Request-Id: tx044f2a05f56f4997af737-0052e18eed
       Date: Thu, 23 Jan 2014 21:51:41 GMT

    This next-most current version carries with it any metadata last set
    on it. If you want to completely remove an object and you have five
@@ -185,37 +185,37 @@ Example Using ``X-Versions-Location``
 Example Using ``X-History-Location``
 ------------------------------------

 #. Create the ``current`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/current -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token" -H "X-History-Location: archive"

-   .. code::
+   .. code:: console

       HTTP/1.1 201 Created
       Content-Length: 0
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: txb91810fb717347d09eec8-0052e18997
       X-Openstack-Request-Id: txb91810fb717347d09eec8-0052e18997
       Date: Thu, 23 Jan 2014 21:28:55 GMT

 #. Create the first version of an object in the ``current`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/current/my_object --data-binary 1 -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 201 Created
       Last-Modified: Thu, 23 Jan 2014 21:31:22 GMT
       Content-Length: 0
       Etag: d41d8cd98f00b204e9800998ecf8427e
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: tx5992d536a4bd4fec973aa-0052e18a2a
       X-Openstack-Request-Id: tx5992d536a4bd4fec973aa-0052e18a2a
       Date: Thu, 23 Jan 2014 21:31:22 GMT

    Nothing is written to the non-current version container when you
    initially **PUT** an object in the ``current`` container. However,
@@ -224,9 +224,9 @@ Example Using ``X-History-Location``
    These non-current versions are named as follows:

-   .. code::
+   .. code:: none

       <length><object_name>/<timestamp>

    Where ``length`` is the 3-character, zero-padded hexadecimal
    character length of the object, ``<object_name>`` is the object name,
@@ -235,20 +235,20 @@ Example Using ``X-History-Location``
 #. Create a second version of the object in the ``current`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/current/my_object --data-binary 2 -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 201 Created
       Last-Modified: Thu, 23 Jan 2014 21:41:32 GMT
       Content-Length: 0
       Etag: d41d8cd98f00b204e9800998ecf8427e
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: tx468287ce4fc94eada96ec-0052e18c8c
       X-Openstack-Request-Id: tx468287ce4fc94eada96ec-0052e18c8c
       Date: Thu, 23 Jan 2014 21:41:32 GMT

 #. Issue a **GET** request to a versioned object to get the current
    version of the object. You do not have to do any request redirects or
@@ -256,24 +256,24 @@ Example Using ``X-History-Location``
    List older versions of the object in the ``archive`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/archive?prefix=009my_object -X GET -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 200 OK
       Content-Length: 30
       X-Container-Object-Count: 1
       Accept-Ranges: bytes
       X-Timestamp: 1390513280.79684
       X-Container-Bytes-Used: 0
       Content-Type: text/plain; charset=utf-8
       X-Trans-Id: tx9a441884997542d3a5868-0052e18d8e
       X-Openstack-Request-Id: tx9a441884997542d3a5868-0052e18d8e
       Date: Thu, 23 Jan 2014 21:45:50 GMT

       009my_object/1390512682.92052

 .. note::
    A **POST** request to a versioned object updates only the metadata
@@ -285,41 +285,41 @@ Example Using ``X-History-Location``
    the current container. Subsequent **GET** requests to the object in the
    current container will return ``404 Not Found``.

-   .. code::
+   .. code:: console

       # curl -i $publicURL/current/my_object -X DELETE -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 204 No Content
       Content-Length: 0
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: tx006d944e02494e229b8ee-0052e18edd
       X-Openstack-Request-Id: tx006d944e02494e229b8ee-0052e18edd
       Date: Thu, 23 Jan 2014 21:51:25 GMT

-   List older versions of the object in the ``archive`` container::
+   List older versions of the object in the ``archive`` container:

-   .. code::
+   .. code:: console

       # curl -i $publicURL/archive?prefix=009my_object -X GET -H "X-Auth-Token: $token"

-   .. code::
+   .. code:: console

       HTTP/1.1 200 OK
       Content-Length: 90
       X-Container-Object-Count: 3
       Accept-Ranges: bytes
       X-Timestamp: 1390513280.79684
       X-Container-Bytes-Used: 0
       Content-Type: text/html; charset=UTF-8
       X-Trans-Id: tx044f2a05f56f4997af737-0052e18eed
       X-Openstack-Request-Id: tx044f2a05f56f4997af737-0052e18eed
       Date: Thu, 23 Jan 2014 21:51:41 GMT

       009my_object/1390512682.92052
       009my_object/1390512692.23062
       009my_object/1390513885.67732

    In addition to the two previous versions of the object, the archive
    container has a "delete marker" to record when the object was deleted.
@@ -334,18 +334,18 @@ To disable object versioning for the ``current`` container, remove
 its ``X-Versions-Location`` metadata header by sending an empty key
 value.

-.. code::
+.. code:: console

    # curl -i $publicURL/current -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token" -H "X-Versions-Location: "

-.. code::
+.. code:: console

    HTTP/1.1 202 Accepted
    Content-Length: 76
    Content-Type: text/html; charset=UTF-8
    X-Trans-Id: txe2476de217134549996d0-0052e19038
    X-Openstack-Request-Id: txe2476de217134549996d0-0052e19038
    Date: Thu, 23 Jan 2014 21:57:12 GMT

    <html><h1>Accepted</h1><p>The request is accepted for processing.</p></html>
@@ -34,14 +34,14 @@ the requested list of the objects.
 .. code-block:: console

    photos/animals/cats/persian.jpg
    photos/animals/cats/siamese.jpg
    photos/animals/dogs/corgi.jpg
    photos/animals/dogs/poodle.jpg
    photos/animals/dogs/terrier.jpg
    photos/me.jpg
    photos/plants/fern.jpg
    photos/plants/rose.jpg

 Use the delimiter parameter to limit the displayed results. To use
 ``delimiter`` with pseudo-directories, you must use the parameter slash
@@ -63,20 +63,20 @@ For example:
 .. code-block:: JSON

    [
        {
            "subdir": "photos/"
        }
    ]

 .. code-block:: XML

    <?xml version="1.0" encoding="UTF-8"?>
    <container name="backups">
        <subdir name="photos/">
            <name>photos/</name>
        </subdir>
    </container>

 Use the ``prefix`` and ``delimiter`` parameters to view the objects
 inside a pseudo-directory, including further nested pseudo-directories.
@@ -92,46 +92,46 @@ pseudo-directory.
 .. code-block:: console

    photos/animals/
    photos/me.jpg
    photos/plants/

 .. code-block:: JSON

    [
        {
            "subdir": "photos/animals/"
        },
        {
            "hash": "b249a153f8f38b51e92916bbc6ea57ad",
            "last_modified": "2015-12-03T17:31:28.187370",
            "bytes": 2906,
            "name": "photos/me.jpg",
            "content_type": "image/jpeg"
        },
        {
            "subdir": "photos/plants/"
        }
    ]

 .. code-block:: XML

    <?xml version="1.0" encoding="UTF-8"?>
    <container name="backups">
        <subdir name="photos/animals/">
            <name>photos/animals/</name>
        </subdir>
        <object>
            <name>photos/me.jpg</name>
            <hash>b249a153f8f38b51e92916bbc6ea57ad</hash>
            <bytes>2906</bytes>
            <content_type>image/jpeg</content_type>
            <last_modified>2015-12-03T17:31:28.187370</last_modified>
        </object>
        <subdir name="photos/plants/">
            <name>photos/plants/</name>
        </subdir>
    </container>

 You can create an unlimited number of nested pseudo-directories. To
 navigate through them, use a longer ``prefix`` parameter coupled with
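A sketch of how such a listing request might be issued from Python, using only the standard library (the endpoint and token are placeholders):

.. code:: python

   from urllib.parse import urlencode
   from urllib.request import Request, urlopen

   base = "https://swift-cluster.example.com/v1/AUTH_test/backups"  # placeholder
   query = urlencode({"prefix": "photos/animals/dogs/", "delimiter": "/",
                      "format": "json"})
   req = Request(base + "?" + query,
                 headers={"X-Auth-Token": "<token>"})  # placeholder token
   with urlopen(req) as resp:
       print(resp.read().decode())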
@@ -150,6 +150,6 @@ the objects and pseudo-directories within the nested pseudo-directory.
 .. code-block:: console

    photos/animals/dogs/corgi.jpg
    photos/animals/dogs/poodle.jpg
    photos/animals/dogs/terrier.jpg
@@ -56,18 +56,18 @@ format:
 .. code-block:: json

    [
        {
            "count":0,
            "bytes":0,
            "name":"janeausten"
        },
        {
            "count":1,
            "bytes":14,
            "name":"marktwain"
        }
    ]

 Example 2. XML example with Accept header
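A sketch of consuming such a listing in Python (the payload is the JSON example above):

.. code:: python

   import json

   payload = '''[
     {"count": 0, "bytes": 0, "name": "janeausten"},
     {"count": 1, "bytes": 14, "name": "marktwain"}
   ]'''
   for container in json.loads(payload):
       print("%s: %d object(s), %d byte(s)"
             % (container["name"], container["count"], container["bytes"]))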
@@ -100,19 +100,19 @@ format:
 .. code-block:: xml

    <?xml version="1.0" encoding="UTF-8"?>
    <account name="AUTH_73f0aa26640f4971864919d0eb0f0880">
        <container>
            <name>janeausten</name>
            <count>2</count>
            <bytes>33</bytes>
        </container>
        <container>
            <name>marktwain</name>
            <count>1</count>
            <bytes>14</bytes>
        </container>
    </account>

 The remainder of the examples in this guide use standard, non-serialized
 responses. However, all ``GET`` requests that perform list operations
@@ -21,11 +21,10 @@ a common prefix. They are useful for sharing a set of objects.
 Ask your cloud administrator to enable the temporary URL feature. For
 information, see :ref:`tempurl` in the *Source Documentation*.

-Note
-~~~~
+.. note::

-To use **POST** requests to upload objects to specific Object Storage
-locations, use :doc:`form_post_middleware` instead of temporary URL middleware.
+   To use **POST** requests to upload objects to specific Object Storage
+   locations, use :doc:`form_post_middleware` instead of temporary URL middleware.

 Temporary URL format
 ~~~~~~~~~~~~~~~~~~~~
@@ -35,12 +34,12 @@ parameters:
 **Example Temporary URL format**

-.. code::
+.. code:: none

    https://swift-cluster.example.com/v1/my_account/container/object
    ?temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b
    &temp_url_expires=1323479485
    &filename=My+Test+File.pdf

 The example shows these elements:
@@ -71,12 +70,12 @@ A prefix-based temporary URL is similar but requires the parameter
 ``temp_url_prefix``, which must be equal to the common prefix shared
 by all object names for which the URL is valid.

-.. code::
+.. code:: none

    https://swift-cluster.example.com/v1/my_account/container/my_prefix/object
    ?temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b
    &temp_url_expires=2011-12-10T01:11:25Z
    &temp_url_prefix=my_prefix

 .. _secret_keys:
@@ -109,15 +108,14 @@ The arbitrary values serve as the secret keys.
 For example, use the **swift post** command to set the secret key to
 *``MYKEY``*:

-.. code::
+.. code:: console

    $ swift post -m "Temp-URL-Key:MYKEY"

-Note
-~~~~
+.. note::

-Changing these headers invalidates any previously generated temporary
-URLs within 60 seconds, which is the memcache time for the key.
+   Changing these headers invalidates any previously generated temporary
+   URLs within 60 seconds, which is the memcache time for the key.

 HMAC signature for temporary URLs
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -145,33 +143,33 @@ temporary URLs:
 **Example HMAC-SHA256 signature for object-based temporary URLs**

-.. code::
+.. code:: python

    import hmac
    from hashlib import sha256
    from time import time
    method = 'GET'
    duration_in_seconds = 60*60*24
    expires = int(time() + duration_in_seconds)
    path = '/v1/my_account/container/object'
    key = 'MYKEY'
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    signature = hmac.new(key, hmac_body, sha256).hexdigest()

 **Example HMAC-SHA512 signature for prefix-based temporary URLs**

-.. code::
+.. code:: python

    import hmac
    from hashlib import sha512
    from time import time
    method = 'GET'
    duration_in_seconds = 60*60*24
    expires = int(time() + duration_in_seconds)
    path = 'prefix:/v1/my_account/container/my_prefix'
    key = 'MYKEY'
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    signature = hmac.new(key, hmac_body, sha512).hexdigest()

 Do not URL-encode the path when you generate the HMAC signature.
 However, when you make the actual HTTP request, you should properly
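The snippets above follow Python 2 conventions; under Python 3, ``hmac.new`` requires ``bytes`` rather than ``str``, so an equivalent sketch would encode the key and message first (same illustrative path and key as above):

.. code:: python

   import hmac
   from hashlib import sha256
   from time import time

   method = 'GET'
   expires = int(time() + 60 * 60 * 24)
   path = '/v1/my_account/container/object'
   key = 'MYKEY'
   hmac_body = '%s\n%s\n%s' % (method, expires, path)
   # Python 3: hmac.new() operates on bytes, not str.
   signature = hmac.new(key.encode('utf-8'), hmac_body.encode('utf-8'),
                        sha256).hexdigest()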
@@ -186,10 +184,10 @@ Authentication <http://www.ietf.org/rfc/rfc2104.txt>`__.
 If you want to transform a UNIX timestamp into an ISO 8601 UTC timestamp,
 you can use the following code snippet:

-.. code::
+.. code:: python

    import time
    time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime(timestamp))

 Using the ``swift`` tool to generate a Temporary URL
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
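For instance, feeding in the expiry value from the earlier temporary URL example (``1323479485``) should produce the ISO form used by the prefix-based example:

.. code:: python

   import time

   timestamp = 1323479485  # temp_url_expires from the earlier example
   print(time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime(timestamp)))
   # -> 2011-12-10T01:11:25Z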
@@ -198,32 +196,32 @@ The ``swift`` tool provides the tempurl_ option that
 auto-generates the *``temp_url_sig``* and *``temp_url_expires``* query
 parameters. For example, you might run this command:

-.. code::
+.. code:: console

    $ swift tempurl GET 3600 /v1/my_account/container/object MYKEY

 .. note::

    The ``swift`` tool is not yet updated and continues to use the
    deprecated cipher SHA1.

 This command returns the path:

-.. code::
+.. code:: none

    /v1/my_account/container/object
    ?temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91
    &temp_url_expires=1374497657

 To create the temporary URL, prefix this path with the Object Storage
 storage host name. For example, prefix the path with
 ``https://swift-cluster.example.com``, as follows:

-.. code::
+.. code:: none

    https://swift-cluster.example.com/v1/my_account/container/object
    ?temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91
    &temp_url_expires=1374497657

 Note that if the above example is copied exactly, and used in a command
 shell, then the ampersand is interpreted as an operator and the URL
@@ -12,11 +12,11 @@ underlying media type (``Content-Type``) of the file, such as a video.
 This example assigns an attachment type to the ``Content-Encoding``
 header that indicates how the file is downloaded:

-.. code::
+.. code:: none

    PUT /<api version>/<account>/<container>/<object> HTTP/1.1
    Host: storage.clouddrive.com
    X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb
    Content-Type: video/mp4
    Content-Encoding: gzip
@@ -14,18 +14,18 @@ This example assigns an attachment type to the ``Content-Disposition``
 header. This attachment type indicates that the file is to be downloaded
 as ``goodbye.txt``:

-.. code::
+.. code:: console

    # curl -i $publicURL/marktwain/goodbye -X POST -H "X-Auth-Token: $token" -H "Content-Length: 14" -H "Content-Type: application/octet-stream" -H "Content-Disposition: attachment; filename=goodbye.txt"

-.. code::
+.. code:: console

    HTTP/1.1 202 Accepted
    Content-Length: 76
    Content-Type: text/html; charset=UTF-8
    X-Trans-Id: txa9b5e57d7f354d7ea9f57-0052e17e13
    X-Openstack-Request-Id: txa9b5e57d7f354d7ea9f57-0052e17e13
    Date: Thu, 23 Jan 2014 20:39:47 GMT

    <html><h1>Accepted</h1><p>The request is accepted for processing.</p></html>
@@ -100,7 +100,7 @@ Test CORS Page
 A sample cross-site test page is located in the project source tree
 ``doc/source/test-cors.html``.

 .. literalinclude:: test-cors.html

 .. _CORS: https://developer.mozilla.org/en-US/docs/HTTP/Access_control_CORS
 .. _preflight request: https://developer.mozilla.org/en-US/docs/HTTP/Access_control_CORS#Preflighted_requests
@@ -37,7 +37,7 @@ will be omitted.
 It is highly recommended that authentication server implementers prefix their
 tokens and Swift storage accounts they create with a configurable reseller
-prefix (`AUTH_` by default with the included TempAuth). This prefix will avoid
+prefix (``AUTH_`` by default with the included TempAuth). This prefix will avoid
 conflicts with other authentication servers that might be using the same
 Swift cluster. Otherwise, the Swift cluster will have to try all the resellers
 until one validates a token or all fail.
@@ -48,18 +48,18 @@ designations as you'll see later).
 Example Authentication with TempAuth:

 * Token AUTH_tkabcd is given to the TempAuth middleware in a request's
   X-Auth-Token header.
 * The TempAuth middleware validates the token AUTH_tkabcd and discovers
   it matches the "tester" user within the "test" account for the storage
   account "AUTH_storage_xyz".
 * The TempAuth middleware sets the REMOTE_USER to
   "test:tester,test,AUTH_storage_xyz"
 * Now this user will have full access (via authorization procedures later)
   to the AUTH_storage_xyz Swift storage account and access to containers in
   other storage accounts, provided the storage account begins with the same
-  `AUTH_` reseller prefix and the container has an ACL specifying at least
+  ``AUTH_`` reseller prefix and the container has an ACL specifying at least
   one of those three groups.

 Authorization is performed through callbacks by the Swift Proxy server to the
 WSGI environment's swift.authorize value, if one is set. The swift.authorize
@@ -12,7 +12,7 @@ from the details of how data is laid out and stored on-disk.
 The APIs are documented in the reference implementations for all three
 servers. For historical reasons, the object server backend reference
-implementation module is named `diskfile`, while the account and container
+implementation module is named ``diskfile``, while the account and container
 server backend reference implementation modules are named appropriately.

 This API is still under development and not yet finalized.
@@ -36,11 +36,11 @@ External monitoring
 We use pingdom.com to monitor the external Swift API. We suggest the
 following:

 - Do a GET on ``/healthcheck``

-- Create a container, make it public (x-container-read:
-  .r*,.rlistings), create a small file in the container; do a GET
-  on the object
+- Create a container, make it public (``x-container-read:
+  .r*,.rlistings``), create a small file in the container; do a GET
+  on the object

 Diagnose: General approach
 --------------------------
@@ -82,11 +82,11 @@ if any servers are down. We suggest you run it regularly
 to the last report without having to wait for a long-running command
 to complete.

-Diagnose: Is system responding to /healthcheck?
------------------------------------------------
+Diagnose: Is system responding to ``/healthcheck``?
+---------------------------------------------------

 When you want to establish if a swift endpoint is running, run ``curl -k``
-against https://*[ENDPOINT]*/healthcheck.
+against ``https://$ENDPOINT/healthcheck``.

 .. _swift_logs:
@@ -209,11 +209,11 @@ Diagnose: Parted reports the backup GPT table is corrupt
 - If a GPT table is broken, a message like the following should be
   observed when the following command is run:

-  .. code::
+  .. code:: console

     $ sudo parted -l

-  .. code::
+  .. code:: console

     Error: The backup GPT table is corrupt, but the primary appears OK,
     so that will be used.
@@ -232,40 +232,40 @@ invalid filesystem label. In such cases proceed as follows:
 #. Verify that the disk labels are correct:

-   .. code::
+   .. code:: console

-      FS=/dev/sd#1
+      $ FS=/dev/sd#1

-      sudo parted -l | grep object
+      $ sudo parted -l | grep object

 #. If partition labels are inconsistent then, resolve the disk label issues
    before proceeding:

-   .. code::
+   .. code:: console

-      sudo parted -s ${FS} name ${PART_NO} ${PART_NAME} #Partition Label
-      #PART_NO is 1 for object disks and 3 for OS disks
-      #PART_NAME follows the convention seen in "sudo parted -l | grep object"
+      $ sudo parted -s ${FS} name ${PART_NO} ${PART_NAME} #Partition Label
+      $ # PART_NO is 1 for object disks and 3 for OS disks
+      $ # PART_NAME follows the convention seen in "sudo parted -l | grep object"

 #. If the Filesystem label is missing then create it with care:

-   .. code::
+   .. code:: console

-      sudo xfs_admin -l ${FS} #Filesystem label (12 Char limit)
+      $ sudo xfs_admin -l ${FS} #Filesystem label (12 Char limit)

-      #Check for the existence of a FS label
+      $ # Check for the existence of a FS label

-      OBJNO=<3 Length Object No.>
+      $ OBJNO=<3 Length Object No.>

-      #I.E OBJNO for sw-stbaz3-object0007 would be 007
+      $ # I.E OBJNO for sw-stbaz3-object0007 would be 007

-      DISKNO=<3 Length Disk No.>
+      $ DISKNO=<3 Length Disk No.>

-      #I.E DISKNO for /dev/sdb would be 001, /dev/sdc would be 002 etc.
+      $ # I.E DISKNO for /dev/sdb would be 001, /dev/sdc would be 002 etc.

-      sudo xfs_admin -L "obj${OBJNO}dsk${DISKNO}" ${FS}
+      $ sudo xfs_admin -L "obj${OBJNO}dsk${DISKNO}" ${FS}

-      #Create a FS Label
+      $ # Create a FS Label

 Diagnose: Failed LUNs
 ---------------------
@@ -293,9 +293,9 @@ Otherwise the lun can be re-enabled as follows:
    LUN. You will come back later and grep this file for more details, but
    just generate it for now.

-   .. code::
+   .. code:: console

-      sudo hpssacli controller all diag file=/tmp/hpacu.diag ris=on xml=off zip=off
+      $ sudo hpssacli controller all diag file=/tmp/hpacu.diag ris=on xml=off zip=off

 Export the following variables using the below instructions before
 proceeding further.
@@ -304,16 +304,16 @@ proceeding further.
    failed drive's number and array value (example output: "array A
    logicaldrive 1..." would be exported as LDRIVE=1):

-   .. code::
+   .. code:: console

-      sudo hpssacli controller slot=1 ld all show
+      $ sudo hpssacli controller slot=1 ld all show

 #. Export the number of the logical drive that was retrieved from the
    previous command into the LDRIVE variable:

-   .. code::
+   .. code:: console

-      export LDRIVE=<LogicalDriveNumber>
+      $ export LDRIVE=<LogicalDriveNumber>

 #. Print the array value and Port:Box:Bay for all drives and take note of
    the Port:Box:Bay for the failed drive (example output: " array A
@@ -324,9 +324,9 @@ proceeding further.
    in the case of "array c"), but we will run a different command to be sure
    we are operating on the correct device.

-   .. code::
+   .. code:: console

-      sudo hpssacli controller slot=1 pd all show
+      $ sudo hpssacli controller slot=1 pd all show

 .. note::
@@ -339,24 +339,24 @@ proceeding further.
 #. Export the Port:Box:Bay for the failed drive into the PBOX variable:

-   .. code::
+   .. code:: console

-      export PBOX=<Port:Box:Bay>
+      $ export PBOX=<Port:Box:Bay>

 #. Print the physical device information and take note of the Disk Name
    (example output: "Disk Name: /dev/sdk" would be exported as
    DEV=/dev/sdk):

-   .. code::
+   .. code:: console

-      sudo hpssacli controller slot=1 ld ${LDRIVE} show detail | grep -i "Disk Name"
+      $ sudo hpssacli controller slot=1 ld ${LDRIVE} show detail | grep -i "Disk Name"

 #. Export the device name variable from the preceding command (example:
    /dev/sdk):

-   .. code::
+   .. code:: console

-      export DEV=<Device>
+      $ export DEV=<Device>

 #. Export the filesystem variable. Disks that are split between the
    operating system and data storage, typically sda and sdb, should only
@@ -367,39 +367,39 @@ proceeding further.
    data filesystem for the device in question as the export. For example:
    /dev/sdk1.

-   .. code::
+   .. code:: console

-      export FS=<Filesystem>
+      $ export FS=<Filesystem>

 #. Verify the LUN is failed, and the device is not:

-   .. code::
+   .. code:: console

-      sudo hpssacli controller slot=1 ld all show
-      sudo hpssacli controller slot=1 pd all show
-      sudo hpssacli controller slot=1 ld ${LDRIVE} show detail
-      sudo hpssacli controller slot=1 pd ${PBOX} show detail
+      $ sudo hpssacli controller slot=1 ld all show
+      $ sudo hpssacli controller slot=1 pd all show
+      $ sudo hpssacli controller slot=1 ld ${LDRIVE} show detail
+      $ sudo hpssacli controller slot=1 pd ${PBOX} show detail

 #. Stop the swift and rsync service:

-   .. code::
+   .. code:: console

-      sudo service rsync stop
-      sudo swift-init shutdown all
+      $ sudo service rsync stop
+      $ sudo swift-init shutdown all

 #. Unmount the problem drive, fix the LUN and the filesystem:

-   .. code::
+   .. code:: console

-      sudo umount ${FS}
+      $ sudo umount ${FS}

 #. If umount fails, you should run an ``lsof`` search for the mountpoint and
    kill any lingering processes before repeating the unmount:

-   .. code::
+   .. code:: console

-      sudo hpacucli controller slot=1 ld ${LDRIVE} modify reenable
-      sudo xfs_repair ${FS}
+      $ sudo hpacucli controller slot=1 ld ${LDRIVE} modify reenable
+      $ sudo xfs_repair ${FS}

 #. If the ``xfs_repair`` complains about possible journal data, use the
    ``xfs_repair -L`` option to zeroise the journal log.
@@ -407,21 +407,21 @@ proceeding further.
 #. Once complete, test-mount the filesystem and tidy up its lost and
    found area.

-   .. code::
+   .. code:: console

-      sudo mount ${FS} /mnt
-      sudo rm -rf /mnt/lost+found/
-      sudo umount /mnt
+      $ sudo mount ${FS} /mnt
+      $ sudo rm -rf /mnt/lost+found/
+      $ sudo umount /mnt

 #. Mount the filesystem and restart swift and rsync.

 #. Run the following to determine if a DC ticket is needed to check the
    cables on the node:

-   .. code::
+   .. code:: console

-      grep -y media.exchanged /tmp/hpacu.diag
-      grep -y hot.plug.count /tmp/hpacu.diag
+      $ grep -y media.exchanged /tmp/hpacu.diag
+      $ grep -y hot.plug.count /tmp/hpacu.diag

 #. If the output reports any non 0x00 values, it suggests that the cables
    should be checked. For example, log a DC ticket to check the sas cables
@@ -440,7 +440,7 @@ If the diagnostics report a message such as ``sda: drive is slow``, you
 should log onto the node and run the following command (remove the ``-c 1`` option to continuously monitor
 the data):

-.. code::
+.. code:: console

    $ /usr/bin/collectl -s D -c 1
    waiting for 1 second sample...
@@ -475,7 +475,7 @@ otherwise hardware replacement is needed.
 Another way to look at the data is as follows:

-.. code::
+.. code:: console

    $ /opt/hp/syseng/disk-anal.pl -d
    Disk: sda Wait: 54580 371 65 25 12 6 6 0 1 2 0 46
@@ -524,7 +524,7 @@ historical data. You can look at recent data as follows. It only looks
 at data from 13:15 to 14:15. As you can see, this is a relatively clean
 system (few if any long wait or service times):

-.. code::
+.. code:: console

    $ /opt/hp/syseng/disk-anal.pl -d -t 13:15-14:15
    Disk: sda Wait: 3600 0 0 0 0 0 0 0 0 0 0 0
@@ -582,21 +582,21 @@ Running tests
 #. Prepare the ``target`` node as follows:

-   .. code::
+   .. code:: console

-      sudo iptables -I INPUT -p tcp -j ACCEPT
+      $ sudo iptables -I INPUT -p tcp -j ACCEPT

    Or, do:

-   .. code::
+   .. code:: console

-      sudo ufw allow 12866/tcp
+      $ sudo ufw allow 12866/tcp

 #. On the ``source`` node, run the following command to check
    throughput. Note the double-dash before the -P option.
    The command takes 10 seconds to complete. The ``target`` node is 192.168.245.5.

-   .. code::
+   .. code:: console

       $ netperf -H 192.168.245.5 -- -P 12866
       MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 12866 AF_INET to
@@ -609,7 +609,7 @@ Running tests
 #. On the ``source`` node, run the following command to check latency:

-   .. code::
+   .. code:: console

       $ netperf -H 192.168.245.5 -t TCP_RR -- -P 12866
       MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 12866
@ -644,21 +644,21 @@ Diagnose: Remapping sectors experiencing UREs
|
||||
|
||||
#. Set the environment variables SEC, DEV & FS, for example:
|
||||
|
||||
.. code::
|
||||
.. code:: console
|
||||
|
||||
SEC=2930954256
|
||||
DEV=/dev/sdi
|
||||
FS=/dev/sdi1
|
||||
$ SEC=2930954256
|
||||
$ DEV=/dev/sdi
|
||||
$ FS=/dev/sdi1
|
||||
|
||||
#. Verify that the sector is bad:
|
||||
|
||||
.. code::
|
||||
.. code:: console
|
||||
|
||||
sudo dd if=${DEV} of=/dev/null bs=512 count=1 skip=${SEC}
|
||||
$ sudo dd if=${DEV} of=/dev/null bs=512 count=1 skip=${SEC}
|
||||
|
||||
#. If the sector is bad this command will output an input/output error:
|
||||
|
||||
.. code::
|
||||
.. code:: console
|
||||
|
||||
dd: reading `/dev/sdi`: Input/output error
|
||||
0+0 records in
|
||||
@ -667,28 +667,28 @@ Diagnose: Remapping sectors experiencing UREs
|
||||
#. Prevent chef from attempting to re-mount the filesystem while the
|
||||
repair is in progress:
|
||||
|
||||
.. code::
|
||||
.. code:: console
|
||||
|
||||
sudo mv /etc/chef/client.pem /etc/chef/xx-client.xx-pem
|
||||
$ sudo mv /etc/chef/client.pem /etc/chef/xx-client.xx-pem
|
||||
|
||||
#. Stop the swift and rsync service:
|
||||
|
||||
.. code::
|
||||
.. code:: console
|
||||
|
||||
sudo service rsync stop
|
||||
sudo swift-init shutdown all
|
||||
$ sudo service rsync stop
|
||||
$ sudo swift-init shutdown all
|
||||
|
||||
#. Unmount the problem drive:
|
||||
|
||||
.. code::
|
||||
.. code:: console
|
||||
|
||||
sudo umount ${FS}
|
||||
$ sudo umount ${FS}
|
||||
|
||||
#. Overwrite/remap the bad sector:
|
||||
|
||||
.. code::
|
||||
.. code:: console
|
||||
|
||||
sudo dd_rescue -d -A -m8b -s ${SEC}b ${DEV} ${DEV}
|
||||
$ sudo dd_rescue -d -A -m8b -s ${SEC}b ${DEV} ${DEV}
|
||||
|
||||
#. This command should report an input/output error the first time
|
||||
it is run. Run the command a second time, if it successfully remapped
|
||||
@ -696,9 +696,9 @@ Diagnose: Remapping sectors experiencing UREs
|
||||
|
||||
#. Verify the sector is now readable:
|
||||
|
||||
.. code::
|
||||
.. code:: console
|
||||
|
||||
sudo dd if=${DEV} of=/dev/null bs=512 count=1 skip=${SEC}
|
||||
$ sudo dd if=${DEV} of=/dev/null bs=512 count=1 skip=${SEC}
|
||||
|
||||
#. If the sector is now readable this command should not report an
|
||||
input/output error.
|
||||
@ -706,24 +706,24 @@ Diagnose: Remapping sectors experiencing UREs
#. If more than one problem sector is listed, set the SEC environment
variable to the next sector in the list:

.. code::
.. code:: console

SEC=123456789
$ SEC=123456789

#. Repeat from step 8.

#. Repair the filesystem:

.. code::
.. code:: console

sudo xfs_repair ${FS}
$ sudo xfs_repair ${FS}

#. If ``xfs_repair`` reports that the filesystem has valuable filesystem
changes:

.. code::
.. code:: console

sudo xfs_repair ${FS}
$ sudo xfs_repair ${FS}
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
@ -739,11 +739,11 @@ Diagnose: Remapping sectors experiencing UREs
#. You should attempt to mount the filesystem, and clear the lost+found
area:

.. code::
.. code:: console

sudo mount $FS /mnt
sudo rm -rf /mnt/lost+found/*
sudo umount /mnt
$ sudo mount $FS /mnt
$ sudo rm -rf /mnt/lost+found/*
$ sudo umount /mnt

#. If the filesystem fails to mount then you will need to use the
``xfs_repair -L`` option to force log zeroing.
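
For example (a sketch reusing the ``FS`` variable set earlier; note
that ``-L`` zeroes the log and discards any uncommitted metadata
updates, so only use it after a normal mount has failed):

.. code:: console

$ sudo xfs_repair -L ${FS}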
@ -752,16 +752,16 @@ Diagnose: Remapping sectors experiencing UREs
#. If ``xfs_repair`` reports that an additional input/output error has been
encountered, get the sector details as follows:

.. code::
.. code:: console

sudo grep "I/O error" /var/log/kern.log | grep sector | tail -1
$ sudo grep "I/O error" /var/log/kern.log | grep sector | tail -1

#. If a new input/output error is reported then set the SEC environment
variable to the problem sector number:

.. code::
.. code:: console

SEC=234567890
$ SEC=234567890

#. Repeat from step 8.

@ -806,31 +806,31 @@ errors, it may well indicate a cable, switch, or network issue.

Get an overview of the interface with:

.. code::
.. code:: console

sudo ifconfig eth{n}
sudo ethtool eth{n}
$ sudo ifconfig eth{n}
$ sudo ethtool eth{n}

The ``Link Detected:`` indicator will read ``yes`` if the NIC is
cabled.
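
For example (illustrative output; the exact fields reported depend on
the driver):

.. code:: console

$ sudo ethtool eth0
Settings for eth0:
...
Speed: 1000Mb/s
...
Link detected: yes
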
Establish the adapter type with:

.. code::
.. code:: console

sudo ethtool -i eth{n}
$ sudo ethtool -i eth{n}

Gather the interface statistics with:

.. code::
.. code:: console

sudo ethtool -S eth{n}
$ sudo ethtool -S eth{n}

If the NIC supports self test, this can be performed with:

.. code::
.. code:: console

sudo ethtool -t eth{n}
$ sudo ethtool -t eth{n}

Self tests should read ``PASS`` if the NIC is operating correctly.
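
For example (illustrative output; the set of tests run varies by
driver):

.. code:: console

$ sudo ethtool -t eth0
The test result is PASS
The test extra info:
nvram test     (online)         0
link test      (online)         0
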
@ -853,9 +853,9 @@ A replicator reports in its log that remaining time exceeds
making progress. Another useful way to check this is with the
'swift-recon -r' command on a swift proxy server:

.. code::
.. code:: console

sudo swift-recon -r
$ sudo swift-recon -r
===============================================================================

--> Starting reconnaissance on 384 hosts
@ -877,9 +877,9 @@ You can further check if the object replicator is stuck by logging on
the object server and checking the object replicator progress with
the following command:

.. code::
.. code:: console

# sudo grep object-rep /var/log/swift/background.log | grep -e "Starting object replication" -e "Object replication complete" -e "partitions rep"
$ sudo grep object-rep /var/log/swift/background.log | grep -e "Starting object replication" -e "Object replication complete" -e "partitions rep"
Jul 16 06:25:46 192.168.245.4 object-replicator 15344/16450 (93.28%) partitions replicated in 69018.48s (0.22/sec, 22h remaining)
Jul 16 06:30:46 192.168.245.4 object-replicator 15344/16450 (93.28%) partitions replicated in 69318.58s (0.22/sec, 22h remaining)
Jul 16 06:35:46 192.168.245.4 object-replicator 15344/16450 (93.28%) partitions replicated in 69618.63s (0.22/sec, 23h remaining)
@ -912,9 +912,9 @@ One of the reasons for the object replicator hanging like this is
filesystem corruption on the drive. The following is a typical log entry
of a corrupted filesystem detected by the object replicator:

.. code::
.. code:: console

# sudo bzgrep "Remote I/O error" /var/log/swift/background.log* | grep srv | tail -1
$ sudo bzgrep "Remote I/O error" /var/log/swift/background.log* | grep srv | tail -1
Jul 12 03:33:30 192.168.245.4 object-replicator STDOUT: ERROR:root:Error hashing suffix#012Traceback (most recent call last):#012 File
"/usr/lib/python2.7/dist-packages/swift/obj/replicator.py", line 199, in get_hashes#012 hashes[suffix] = hash_suffix(suffix_dir,
reclaim_age)#012 File "/usr/lib/python2.7/dist-packages/swift/obj/replicator.py", line 84, in hash_suffix#012 path_contents =
@ -922,9 +922,9 @@ of a corrupted filesystem detected by the object replicator:

An ``ls`` of the problem file or directory usually shows something like the following:

.. code::
.. code:: console

# ls -l /srv/node/disk4/objects/1643763/b51
$ ls -l /srv/node/disk4/objects/1643763/b51
ls: cannot access /srv/node/disk4/objects/1643763/b51: Remote I/O error

If no entry with ``Remote I/O error`` occurs in the ``background.log`` it is
@ -935,27 +935,27 @@ restart the object-replicator.

#. Stop the object-replicator:

.. code::
.. code:: console

# sudo swift-init object-replicator stop

#. Make sure the object replicator has stopped; if it has hung, the stop
command will not stop the hung process:

.. code::
.. code:: console

# ps auxww | grep swift-object-replicator

#. If the previous ps shows the object-replicator is still running, kill
the process:

.. code::
.. code:: console

# kill -9 <pid-of-swift-object-replicator>

#. Start the object-replicator:

.. code::
.. code:: console

# sudo swift-init object-replicator start

@ -964,14 +964,14 @@ to repair the problem filesystem.

#. Stop swift and rsync:

.. code::
.. code:: console

# sudo swift-init all shutdown
# sudo service rsync stop

#. Make sure all swift processes have stopped:

.. code::
.. code:: console

# ps auxww | grep swift | grep python

@ -979,13 +979,13 @@ to repair the problem filesystem.

#. Unmount the problem filesystem:

.. code::
.. code:: console

# sudo umount /srv/node/disk4

#. Repair the filesystem:

.. code::
.. code:: console

# sudo xfs_repair -P /dev/sde1

@ -1002,7 +1002,7 @@ The CPU load average on an object server, as shown with the
'uptime' command, is typically under 10 when the server is
lightly-moderately loaded:

.. code::
.. code:: console

$ uptime
07:59:26 up 99 days, 5:57, 1 user, load average: 8.59, 8.39, 8.32
@ -1014,7 +1014,7 @@ However, sometimes the CPU load average can increase significantly. The
following is an example of an object server that has extremely high CPU
load:

.. code::
.. code:: console

$ uptime
07:44:02 up 18:22, 1 user, load average: 407.12, 406.36, 404.59
@ -1050,9 +1050,9 @@ Further issues and resolutions
given server.
- Run this command:

.. code::
.. code:: console

sudo swift-init all start
$ sudo swift-init all start

Examine messages in the swift log files to see if there are any
error messages related to any of the swift processes since the time you
@ -1080,9 +1080,9 @@ Further issues and resolutions

- Restart the swift processes on the affected node:

.. code::
.. code:: console

% sudo swift-init all reload
$ sudo swift-init all reload

Urgency:
If known performance problem: Immediate
@ -1135,18 +1135,18 @@ Further issues and resolutions
For example, it is running at 100 Mb/s and the NIC is a 1Ge NIC.
- 1. Try resetting the interface with:

.. code::
.. code:: console

sudo ethtool -s eth0 speed 1000
$ sudo ethtool -s eth0 speed 1000

... and then run:
... and then run:

.. code::
.. code:: console

sudo lshw -class
$ sudo lshw -class

See if size goes to the expected speed. Failing
that, check hardware (NIC cable/switch port).
See if size goes to the expected speed. Failing
that, check hardware (NIC cable/switch port).

2. If persistent, consider shutting down the server (especially if a proxy)
until the problem is identified and resolved. If you leave this server
@ -1183,9 +1183,11 @@ Further issues and resolutions
- Urgency: Medium
This may have been triggered by a recent restart of the rsyslog daemon.
Restart the service with:
.. code::

sudo swift-init <service> reload
.. code:: console

$ sudo swift-init <service> reload

* - Object replicator: Reports the remaining time and that time is more than 100 hours.
- Each replication cycle the object replicator writes a log message to its log
reporting statistics about the current cycle. This includes an estimate for the
@ -1193,9 +1195,10 @@ Further issues and resolutions
100 hours, there is a problem with the replication process.
- Urgency: Medium
Restart the service with:
.. code::

sudo swift-init object-replicator reload
.. code:: console

$ sudo swift-init object-replicator reload

Check that the remaining replication time is going down.

@ -27,9 +27,9 @@ if you wait a while things get better.

For example:

.. code::
.. code:: console

sudo swift-recon -rla
$ sudo swift-recon -rla
===============================================================================
[2012-03-10 12:57:21] Checking async pendings on 384 hosts...
Async stats: low: 0, high: 1, avg: 0, total: 1
@ -52,7 +52,7 @@ system. Rules-of-thumb for 'good' recon output are:
- Nodes that respond are up and running Swift. If all nodes respond,
that is a good sign. But some nodes may time out. For example:

.. code::
.. code:: console

-> [http://<redacted>.29:6200/recon/load:] <urlopen error [Errno 111] ECONNREFUSED>
-> [http://<redacted>.31:6200/recon/load:] <urlopen error timed out>
@ -83,7 +83,7 @@ system. Rules-of-thumb for 'good' recon output are:
For comparison here is the recon output for the same system above when
two entire racks of Swift are down:

.. code::
.. code:: console

[2012-03-10 16:56:33] Checking async pendings on 384 hosts...
-> http://<redacted>.22:6200/recon/async: <urlopen error timed out>
@ -152,9 +152,9 @@ Here is an example of noting and tracking down a problem with recon.

Running recon shows some async pendings:

.. code::
.. code:: console

bob@notso:~/swift-1.4.4/swift$ ssh -q <redacted>.132.7 sudo swift-recon -alr
$ ssh -q <redacted>.132.7 sudo swift-recon -alr
===============================================================================
[2012-03-14 17:25:55] Checking async pendings on 384 hosts...
Async stats: low: 0, high: 23, avg: 8, total: 3356
@ -172,9 +172,9 @@ Why? Running recon again with -av swift (not shown here) tells us that
the node with the highest (23) is <redacted>.72.61. Looking at the log
files on <redacted>.72.61 we see:

.. code::
.. code:: console

souzab@<redacted>:~$ sudo tail -f /var/log/swift/background.log | grep -i ERROR
$ sudo tail -f /var/log/swift/background.log | grep -i ERROR
Mar 14 17:28:06 <redacted> container-replicator ERROR Remote drive not mounted
{'zone': 5, 'weight': 1952.0, 'ip': '<redacted>.204.119', 'id': 5481, 'meta': '', 'device': 'disk6', 'port': 6201}
Mar 14 17:28:06 <redacted> container-replicator ERROR Remote drive not mounted
@ -235,7 +235,7 @@ Procedure
running the ring builder on a proxy node to determine which zones
the storage nodes are in. For example:

.. code::
.. code:: console

% sudo swift-ring-builder /etc/swift/object.builder
/etc/swift/object.builder, build version 1467
@ -258,7 +258,7 @@ Procedure
builder again, this time with the ``list_parts`` option and specify
the nodes under consideration. For example:

.. code::
.. code:: console

% sudo swift-ring-builder /etc/swift/object.builder list_parts <redacted>.8 <redacted>.15 <redacted>.72.2
Partition Matches
@ -283,7 +283,7 @@ Procedure
small, and is proportional to the number of entries that have a 3 in
the Matches column. For example:

.. code::
.. code:: console

Partition Matches
26865 3
@ -300,7 +300,7 @@ Procedure

#. A quick way to count the number of rows with 3 matches is:

.. code::
.. code:: console

% sudo swift-ring-builder /etc/swift/object.builder list_parts <redacted>.8 <redacted>.15 <redacted>.72.2 | grep "3$" | wc -l

@ -10,13 +10,13 @@ Fix broken GPT table (broken disk partition)
- If a GPT table is broken, a message like the following should be
observed when the command...

.. code::
.. code:: console

$ sudo parted -l

- ... is run.

.. code::
.. code:: console

...
Error: The backup GPT table is corrupt, but the primary appears OK, so that will
@ -25,13 +25,13 @@ Fix broken GPT table (broken disk partition)

#. To fix this, firstly install the ``gdisk`` program:

.. code::
.. code:: console

$ sudo aptitude install gdisk

#. Run ``gdisk`` for the particular drive with the damaged partition:

.. code::
.. code:: console

$ sudo gdisk /dev/sd*a-l*
GPT fdisk (gdisk) version 0.6.14
@ -57,7 +57,7 @@ Fix broken GPT table (broken disk partition)
and finally ``w`` (write table to disk and exit). You will also need to
enter ``Y`` when prompted in order to confirm actions.

.. code::
.. code:: console

Command (? for help): r

@ -92,7 +92,7 @@ Fix broken GPT table (broken disk partition)

#. Running the command:

.. code::
.. code:: console

$ sudo parted /dev/sd#

@ -100,7 +100,7 @@ Fix broken GPT table (broken disk partition)

#. Finally, uninstall ``gdisk`` from the node:

.. code::
.. code:: console

$ sudo aptitude remove gdisk

@ -112,20 +112,20 @@ Procedure: Fix broken XFS filesystem
#. A filesystem may be corrupt or broken if the following output is
observed when checking its label:

.. code::
.. code:: console

$ sudo xfs_admin -l /dev/sd#
cache_node_purge: refcount was 1, not zero (node=0x25d5ee0)
xfs_admin: cannot read root inode (117)
cache_node_purge: refcount was 1, not zero (node=0x25d92b0)
xfs_admin: cannot read realtime bitmap inode (117)
bad sb magic # 0 in AG 1
failed to read label in AG 1
cache_node_purge: refcount was 1, not zero (node=0x25d5ee0)
xfs_admin: cannot read root inode (117)
cache_node_purge: refcount was 1, not zero (node=0x25d92b0)
xfs_admin: cannot read realtime bitmap inode (117)
bad sb magic # 0 in AG 1
failed to read label in AG 1

#. Run the following commands to remove the broken/corrupt filesystem and replace.
(This example uses the filesystem ``/dev/sdb2``.) Firstly you need to replace the partition:

.. code::
.. code:: console

$ sudo parted
GNU Parted 2.3
@ -167,7 +167,7 @@ Procedure: Fix broken XFS filesystem

#. Next step is to scrub the filesystem and format:

.. code::
.. code:: console

$ sudo dd if=/dev/zero of=/dev/sdb2 bs=$((1024*1024)) count=1
1+0 records in
@ -175,19 +175,19 @@ Procedure: Fix broken XFS filesystem
1048576 bytes (1.0 MB) copied, 0.00480617 s, 218 MB/s
$ sudo /sbin/mkfs.xfs -f -i size=1024 /dev/sdb2
meta-data=/dev/sdb2 isize=1024 agcount=4, agsize=106811524 blks
= sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=427246093, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=208616, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
= sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=427246093, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=208616, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

#. You should now label and mount your filesystem.
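
For example (a sketch; the label and mount point are assumptions
chosen to match the drive used in this example):

.. code:: console

$ sudo xfs_admin -L disk4 /dev/sdb2
$ sudo mount -L disk4 /srv/node/disk4
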
#. You can now check to see if the filesystem is mounted using the command:

.. code::
.. code:: console

$ mount

@ -204,7 +204,7 @@ Procedure: Checking if an account is okay

You must know the tenant/project ID. You can check if the account is okay as follows from a proxy.

.. code::
.. code:: console

$ sudo -u swift /opt/hp/swift/bin/swift-direct show AUTH_<project-id>

@ -214,7 +214,7 @@ containers, or an error indicating that the resource could not be found.
Alternatively, you can use ``swift-get-nodes`` to find the account database
files. Run the following on a proxy:

.. code::
.. code:: console

$ sudo swift-get-nodes /etc/swift/account.ring.gz AUTH_<project-id>

@ -239,7 +239,7 @@ Log onto one of the swift proxy servers.

Use swift-direct to show this account's usage:

.. code::
.. code:: console

$ sudo -u swift /opt/hp/swift/bin/swift-direct show AUTH_<project-id>
Status: 200
@ -288,7 +288,7 @@ re-create the account as follows:
servers). The output has been truncated so we can focus on the important pieces
of data:

.. code::
.. code:: console

$ sudo swift-get-nodes /etc/swift/account.ring.gz AUTH_4ebe3039674d4864a11fe0864ae4d905
...
@ -308,7 +308,7 @@ re-create the account as follows:
#. Before proceeding check that the account is really deleted by using curl. Execute the
commands printed by ``swift-get-nodes``. For example:

.. code::
.. code:: console

$ curl -I -XHEAD "http://192.168.245.5:6202/disk1/3934/AUTH_4ebe3039674d4864a11fe0864ae4d905"
HTTP/1.1 404 Not Found
@ -323,7 +323,7 @@ re-create the account as follows:
#. Use the ssh commands printed by ``swift-get-nodes`` to check if database
files exist. For example:

.. code::
.. code:: console

$ ssh 192.168.245.5 "ls -lah ${DEVICE:-/srv/node*}/disk1/accounts/3934/052/f5ecf8b40de3e1b0adb0dbe576874052"
total 20K
@ -344,7 +344,7 @@ re-create the account as follows:

#. Delete the database files. For example:

.. code::
.. code:: console

$ ssh 192.168.245.5
$ cd /srv/node/disk1/accounts/3934/052/f5ecf8b40de3e1b0adb0dbe576874052
@ -374,9 +374,9 @@ balancers, customer's are not impacted by the misbehaving proxy.

#. Shut down Swift as follows:

.. code::
.. code:: console

sudo swift-init proxy shutdown
$ sudo swift-init proxy shutdown

.. note::

@ -384,15 +384,15 @@ balancers, customer's are not impacted by the misbehaving proxy.

#. Create the ``/etc/swift/disabled-by-file`` file. For example:

.. code::
.. code:: console

sudo touch /etc/swift/disabled-by-file
$ sudo touch /etc/swift/disabled-by-file

#. Optionally, restart Swift:

.. code::
.. code:: console

sudo swift-init proxy start
$ sudo swift-init proxy start

It works because the healthcheck middleware looks for /etc/swift/disabled-by-file.
If it exists, the middleware will return 503/error instead of 200/OK. This means the load balancer
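
You can verify the effect directly against the proxy (an illustrative
check, assuming the healthcheck middleware is in the proxy pipeline):

.. code:: console

$ curl -i http://127.0.0.1:8080/healthcheck
HTTP/1.1 503 Service Unavailable
...
DISABLED BY FILE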
@ -403,9 +403,9 @@ Procedure: Ad-Hoc disk performance test

You can get an idea whether a disk drive is performing as follows:

.. code::
.. code:: console

sudo dd bs=1M count=256 if=/dev/zero conv=fdatasync of=/srv/node/disk11/remember-to-delete-this-later
$ sudo dd bs=1M count=256 if=/dev/zero conv=fdatasync of=/srv/node/disk11/remember-to-delete-this-later
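
Typical output looks like the following (illustrative numbers for a
healthy drive):

.. code:: console

256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 0.43721 s, 614 MB/s
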
You can expect ~600MB/sec. If you get a low number, repeat many times as
Swift itself may also read or write to the disk, hence giving a lower

@ -16,20 +16,20 @@ transactions from this user. The linux ``bzgrep`` command can be used to
search all the proxy log files on a node including the ``.bz2`` compressed
files. For example:

.. code::
.. code:: console

$ PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -l <yourusername> -R ssh \
-w <redacted>.68.[4-11,132-139 4-11,132-139],<redacted>.132.[4-11,132-139] \
'sudo bzgrep -w AUTH_redacted-4962-4692-98fb-52ddda82a5af /var/log/swift/proxy.log*' | dshbak -c
.
.
----------------
<redacted>.132.6
----------------
Feb 29 08:51:57 sw-aw2az2-proxy011 proxy-server <redacted>.16.132
<redacted>.66.8 29/Feb/2012/08/51/57 GET /v1.0/AUTH_redacted-4962-4692-98fb-52ddda82a5af
/%3Fformat%3Djson HTTP/1.0 404 - - <REDACTED>_4f4d50c5e4b064d88bd7ab82 - - -
tx429fc3be354f434ab7f9c6c4206c1dc3 - 0.0130
.
.
----------------
<redacted>.132.6
----------------
Feb 29 08:51:57 sw-aw2az2-proxy011 proxy-server <redacted>.16.132
<redacted>.66.8 29/Feb/2012/08/51/57 GET /v1.0/AUTH_redacted-4962-4692-98fb-52ddda82a5af
/%3Fformat%3Djson HTTP/1.0 404 - - <REDACTED>_4f4d50c5e4b064d88bd7ab82 - - -
tx429fc3be354f434ab7f9c6c4206c1dc3 - 0.0130

This shows a ``GET`` operation on the user's account.

@ -40,7 +40,7 @@ This shows a ``GET`` operation on the users account.
Using the transaction ID, ``tx429fc3be354f434ab7f9c6c4206c1dc3`` you can
search the swift object servers log files for this transaction ID:

.. code::
.. code:: console

$ PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -l <yourusername> -R ssh \
-w <redacted>.72.[4-67|4-67],<redacted>.[4-67|4-67],<redacted>.[4-67|4-67],<redacted>.204.[4-131] \
@ -79,7 +79,7 @@ search the swift object servers log files for this transaction ID:
Next, use the ``swift-get-nodes`` command to determine exactly where the
user's account data is stored:

.. code::
.. code:: console

$ sudo swift-get-nodes /etc/swift/account.ring.gz AUTH_redacted-4962-4692-98fb-52ddda82a5af
Account AUTH_redacted-4962-4692-98fb-52ddda82a5af
@ -119,7 +119,7 @@ user's account data is stored:
Check each of the primary servers, <redacted>.31, <redacted>.204.70 and <redacted>.72.16, for
this user's account. For example on <redacted>.72.16:

.. code::
.. code:: console

$ ls -lah /srv/node/disk9/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/
total 1.0M
@ -131,7 +131,7 @@ this users account. For example on <redacted>.72.16:
So this user's account db, an sqlite db, is present. Use sqlite to
check out the account:

.. code::
.. code:: console

$ sudo cp /srv/node/disk9/accounts/198875/696/1846d99185f8a0edaf65cfbf37439696/1846d99185f8a0edaf65cfbf37439696.db /tmp
$ sudo sqlite3 /tmp/1846d99185f8a0edaf65cfbf37439696.db
@ -156,7 +156,7 @@ checkout the account:
why the GET operations are returning 404, not found. Check the account
delete date/time:

.. code::
.. code:: console

$ python
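
For example, to convert the ``delete_timestamp`` shown by sqlite into a
human-readable date (the timestamp value here is hypothetical):

.. code:: console

$ python
>>> import time
>>> time.ctime(1389469284.31992)
'Sat Jan 11 19:41:24 2014'
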
@ -167,7 +167,7 @@ checkout the account:
Next try and find the ``DELETE`` operation for this account in the proxy
server logs:

.. code::
.. code:: console

$ PDSH_SSH_ARGS_APPEND="-o StrictHostKeyChecking=no" pdsh -l <yourusername> -R ssh \
-w <redacted>.68.[4-11,132-139 4-11,132-139],<redacted>.132.[4-11,132-139|4-11,132-139] \
@ -206,7 +206,7 @@ as follows:

Examine the object in question:

.. code::
.. code:: console

$ sudo -u swift /opt/hp/swift/bin/swift-direct head 132345678912345 container_name obj_name

@ -219,14 +219,14 @@ name of the objects this means it is a DLO. For example,
if ``X-Object-Manifest`` is ``container2/seg-blah``, list the contents
of the container container2 as follows:

.. code::
.. code:: console

$ sudo -u swift /opt/hp/swift/bin/swift-direct show 132345678912345 container2

Pick out the objects whose names start with ``seg-blah``.
Delete the segment objects as follows:

.. code::
.. code:: console

$ sudo -u swift /opt/hp/swift/bin/swift-direct delete 132345678912345 container2 seg-blah01
$ sudo -u swift /opt/hp/swift/bin/swift-direct delete 132345678912345 container2 seg-blah02

@ -22,39 +22,45 @@ to understand and adding a bunch of new devices isn't really required
to implement a usable set of policies.

1. To define your policies, add the following to your ``/etc/swift/swift.conf``
file::
file:

[storage-policy:0]
name = gold
aliases = yellow, orange
default = yes
.. code:: ini

[storage-policy:1]
name = silver
[storage-policy:0]
name = gold
aliases = yellow, orange
default = yes

See :doc:`overview_policies` for detailed information on ``swift.conf`` policy
options.
[storage-policy:1]
name = silver

See :doc:`overview_policies` for detailed information on ``swift.conf`` policy
options.

2. To create the object ring for the silver policy (index 1), add the following
to your ``bin/remakerings`` script and re-run it (your script may already have
these changes)::
these changes):

swift-ring-builder object-1.builder create 10 2 1
swift-ring-builder object-1.builder add r1z1-127.0.0.1:6210/sdb1 1
swift-ring-builder object-1.builder add r1z2-127.0.0.1:6220/sdb2 1
swift-ring-builder object-1.builder add r1z3-127.0.0.1:6230/sdb3 1
swift-ring-builder object-1.builder add r1z4-127.0.0.1:6240/sdb4 1
swift-ring-builder object-1.builder rebalance
.. code:: shell

Note that the reduced replication of the silver policy is only a function
of the replication parameter in the ``swift-ring-builder create`` command
and is not specified in ``/etc/swift/swift.conf``.
swift-ring-builder object-1.builder create 10 2 1
swift-ring-builder object-1.builder add r1z1-127.0.0.1:6210/sdb1 1
swift-ring-builder object-1.builder add r1z2-127.0.0.1:6220/sdb2 1
swift-ring-builder object-1.builder add r1z3-127.0.0.1:6230/sdb3 1
swift-ring-builder object-1.builder add r1z4-127.0.0.1:6240/sdb4 1
swift-ring-builder object-1.builder rebalance

Note that the reduced replication of the silver policy is only a function
of the replication parameter in the ``swift-ring-builder create`` command
and is not specified in ``/etc/swift/swift.conf``.

3. Copy ``etc/container-reconciler.conf-sample`` to
``/etc/swift/container-reconciler.conf`` and fix the user option::
``/etc/swift/container-reconciler.conf`` and fix the user option:

cp etc/container-reconciler.conf-sample /etc/swift/container-reconciler.conf
sed -i "s/# user.*/user = $USER/g" /etc/swift/container-reconciler.conf
.. code:: shell

cp etc/container-reconciler.conf-sample /etc/swift/container-reconciler.conf
sed -i "s/# user.*/user = $USER/g" /etc/swift/container-reconciler.conf

------------------
Using Policies
@ -68,82 +74,104 @@ Storage Policies effect placement of data in Swift.
1. We will be using the list_endpoints middleware to confirm object locations,
so enable that now in your ``proxy-server.conf`` file by adding it to the pipeline
and including the filter section as shown below (be sure to restart your proxy
after making these changes)::
after making these changes):

pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk \
slo dlo ratelimit crossdomain list-endpoints tempurl tempauth staticweb \
container-quotas account-quotas proxy-logging proxy-server
.. code:: ini

[filter:list-endpoints]
use = egg:swift#list_endpoints
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk \
slo dlo ratelimit crossdomain list-endpoints tempurl tempauth staticweb \
container-quotas account-quotas proxy-logging proxy-server

2. Check to see that your policies are reported via /info::
[filter:list-endpoints]
use = egg:swift#list_endpoints

swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing info
2. Check to see that your policies are reported via /info:

You should see this: (only showing the policy output here)::
.. code:: shell

policies: [{'aliases': 'gold, yellow, orange', 'default': True,
'name': 'gold'}, {'aliases': 'silver', 'name': 'silver'}]
swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing info

You should see this: (only showing the policy output here):

.. code:: none

policies: [{'aliases': 'gold, yellow, orange', 'default': True,
'name': 'gold'}, {'aliases': 'silver', 'name': 'silver'}]

3. Now create a container without specifying a policy, it will use the
default, 'gold' and then put a test object in it (create the file ``file0.txt``
with your favorite editor with some content)::
with your favorite editor with some content):

curl -v -X PUT -H 'X-Auth-Token: <your auth token>' \
http://127.0.0.1:8080/v1/AUTH_test/myCont0
curl -X PUT -v -T file0.txt -H 'X-Auth-Token: <your auth token>' \
http://127.0.0.1:8080/v1/AUTH_test/myCont0/file0.txt
.. code:: shell

4. Now confirm placement of the object with the :ref:`list_endpoints` middleware::
curl -v -X PUT -H 'X-Auth-Token: <your auth token>' \
http://127.0.0.1:8080/v1/AUTH_test/myCont0
curl -X PUT -v -T file0.txt -H 'X-Auth-Token: <your auth token>' \
http://127.0.0.1:8080/v1/AUTH_test/myCont0/file0.txt

curl -X GET -v http://127.0.0.1:8080/endpoints/AUTH_test/myCont0/file0.txt
4. Now confirm placement of the object with the :ref:`list_endpoints` middleware:

You should see this: (note placement on expected devices)::
.. code:: shell

["http://127.0.0.1:6230/sdb3/761/AUTH_test/myCont0/file0.txt",
"http://127.0.0.1:6210/sdb1/761/AUTH_test/myCont0/file0.txt",
"http://127.0.0.1:6220/sdb2/761/AUTH_test/myCont0/file0.txt"]
curl -X GET -v http://127.0.0.1:8080/endpoints/AUTH_test/myCont0/file0.txt

5. Create a container using policy 'silver' and put a different file in it::
You should see this: (note placement on expected devices):

curl -v -X PUT -H 'X-Auth-Token: <your auth token>' -H \
"X-Storage-Policy: silver" \
http://127.0.0.1:8080/v1/AUTH_test/myCont1
curl -X PUT -v -T file1.txt -H 'X-Auth-Token: <your auth token>' \
http://127.0.0.1:8080/v1/AUTH_test/myCont1/
.. code:: json

6. Confirm placement of the object for policy 'silver'::
["http://127.0.0.1:6230/sdb3/761/AUTH_test/myCont0/file0.txt",
"http://127.0.0.1:6210/sdb1/761/AUTH_test/myCont0/file0.txt",
"http://127.0.0.1:6220/sdb2/761/AUTH_test/myCont0/file0.txt"]

curl -X GET -v http://127.0.0.1:8080/endpoints/AUTH_test/myCont1/file1.txt
5. Create a container using policy 'silver' and put a different file in it:

You should see this: (note placement on expected devices)::
.. code:: shell

["http://127.0.0.1:6210/sdb1/32/AUTH_test/myCont1/file1.txt",
"http://127.0.0.1:6240/sdb4/32/AUTH_test/myCont1/file1.txt"]
curl -v -X PUT -H 'X-Auth-Token: <your auth token>' -H \
"X-Storage-Policy: silver" \
http://127.0.0.1:8080/v1/AUTH_test/myCont1
curl -X PUT -v -T file1.txt -H 'X-Auth-Token: <your auth token>' \
http://127.0.0.1:8080/v1/AUTH_test/myCont1/

6. Confirm placement of the object for policy 'silver':

.. code:: shell

curl -X GET -v http://127.0.0.1:8080/endpoints/AUTH_test/myCont1/file1.txt

You should see this: (note placement on expected devices):

.. code:: json

["http://127.0.0.1:6210/sdb1/32/AUTH_test/myCont1/file1.txt",
"http://127.0.0.1:6240/sdb4/32/AUTH_test/myCont1/file1.txt"]

7. Confirm account information with HEAD, make sure that your container-updater
service is running and has executed once since you performed the PUTs or the
account database won't be updated yet::
account database won't be updated yet:

curl -i -X HEAD -H 'X-Auth-Token: <your auth token>' \
http://127.0.0.1:8080/v1/AUTH_test
.. code:: shell

You should see something like this (note that total and per policy stats
object sizes will vary)::
curl -i -X HEAD -H 'X-Auth-Token: <your auth token>' \
http://127.0.0.1:8080/v1/AUTH_test

HTTP/1.1 204 No Content
Content-Length: 0
X-Account-Object-Count: 2
X-Account-Bytes-Used: 174
X-Account-Container-Count: 2
X-Account-Storage-Policy-Gold-Object-Count: 1
X-Account-Storage-Policy-Gold-Bytes-Used: 84
X-Account-Storage-Policy-Silver-Object-Count: 1
X-Account-Storage-Policy-Silver-Bytes-Used: 90
X-Timestamp: 1397230339.71525
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes
X-Trans-Id: tx96e7496b19bb44abb55a3-0053482c75
X-Openstack-Request-Id: tx96e7496b19bb44abb55a3-0053482c75
Date: Fri, 11 Apr 2014 17:55:01 GMT
You should see something like this (note that total and per policy stats
object sizes will vary):

.. code:: none

HTTP/1.1 204 No Content
Content-Length: 0
X-Account-Object-Count: 2
X-Account-Bytes-Used: 174
X-Account-Container-Count: 2
X-Account-Storage-Policy-Gold-Object-Count: 1
X-Account-Storage-Policy-Gold-Bytes-Used: 84
X-Account-Storage-Policy-Silver-Object-Count: 1
X-Account-Storage-Policy-Silver-Bytes-Used: 90
X-Timestamp: 1397230339.71525
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes
X-Trans-Id: tx96e7496b19bb44abb55a3-0053482c75
X-Openstack-Request-Id: tx96e7496b19bb44abb55a3-0053482c75
Date: Fri, 11 Apr 2014 17:55:01 GMT

@ -92,7 +92,7 @@ class BaseDecrypterContext(CryptoWSGIContext):
the value itself, otherwise return the value unmodified.

A value should either be a string that does not contain the ';'
character or should be of the form:
character or should be of the form::

<base64-encoded ciphertext>;swift_meta=<crypto meta>

@ -208,10 +208,10 @@ class BaseKeyMaster(object):

This provides some basic helpers for:

- loading from a separate config path,
- deriving keys based on path, and
- installing a ``swift.callback.fetch_crypto_keys`` hook
in the request environment.
- loading from a separate config path,
- deriving keys based on path, and
- installing a ``swift.callback.fetch_crypto_keys`` hook
in the request environment.

Subclasses should define ``log_route``, ``keymaster_opts``, and
``keymaster_conf_section`` attributes, and implement the
@ -178,7 +178,7 @@ class RingBuilder(object):
@contextmanager
def debug(self):
"""
Temporarily enables debug logging, useful in tests, e.g.
Temporarily enables debug logging, useful in tests, e.g.::

with rb.debug():
rb.rebalance()

@ -1285,7 +1285,7 @@ class Timestamp(object):
"""
Get an isoformat string representation of the 'normal' part of the
Timestamp with microsecond precision and no trailing timezone, for
example:
example::

1970-01-01T00:00:00.000000

@ -2518,12 +2518,12 @@ def get_hub():
Another note about epoll: it's hard to use when forking. epoll works
like so:

* create an epoll instance: efd = epoll_create(...)
* create an epoll instance: ``efd = epoll_create(...)``

* register file descriptors of interest with epoll_ctl(efd,
EPOLL_CTL_ADD, fd, ...)
* register file descriptors of interest with
``epoll_ctl(efd, EPOLL_CTL_ADD, fd, ...)``

* wait for events with epoll_wait(efd, ...)
* wait for events with ``epoll_wait(efd, ...)``

If you fork, you and all your child processes end up using the same
epoll instance, and everyone becomes confused. It is possible to use
@ -6437,7 +6437,7 @@ def make_db_file_path(db_path, epoch):
def get_db_files(db_path):
"""
Given the path to a db file, return a sorted list of all valid db files
that actually exist in that path's dir. A valid db filename has the form:
that actually exist in that path's dir. A valid db filename has the form::

<hash>[_<epoch>].db

@ -322,20 +322,20 @@ class ContainerBroker(DatabaseBroker):
Note that this may involve multiple on-disk DB files if the container
becomes sharded:

* :attr:`_db_file` is the path to the legacy container DB name, i.e.
``<hash>.db``. This file should exist for an initialised broker that
has never been sharded, but will not exist once a container has been
sharded.
* :attr:`db_files` is a list of existing db files for the broker. This
list should have at least one entry for an initialised broker, and
should have two entries while a broker is in SHARDING state.
* :attr:`db_file` is the path to whichever db is currently authoritative
for the container. Depending on the container's state, this may not be
the same as the ``db_file`` argument given to :meth:`~__init__`, unless
``force_db_file`` is True in which case :attr:`db_file` is always equal
to the ``db_file`` argument given to :meth:`~__init__`.
* :attr:`pending_file` is always equal to :attr:`_db_file` extended with
``.pending``, i.e. ``<hash>.db.pending``.
* :attr:`_db_file` is the path to the legacy container DB name, i.e.
``<hash>.db``. This file should exist for an initialised broker that
has never been sharded, but will not exist once a container has been
sharded.
* :attr:`db_files` is a list of existing db files for the broker. This
list should have at least one entry for an initialised broker, and
should have two entries while a broker is in SHARDING state.
* :attr:`db_file` is the path to whichever db is currently authoritative
for the container. Depending on the container's state, this may not be
the same as the ``db_file`` argument given to :meth:`~__init__`, unless
``force_db_file`` is True in which case :attr:`db_file` is always equal
to the ``db_file`` argument given to :meth:`~__init__`.
* :attr:`pending_file` is always equal to :attr:`_db_file` extended with
``.pending``, i.e. ``<hash>.db.pending``.
"""
db_type = 'container'
db_contains_type = 'object'

@ -502,43 +502,43 @@ class CleavingContext(object):
Encapsulates metadata associated with the process of cleaving a retiring
DB. This metadata includes:

* ``ref``: The unique part of the key that is used when persisting a
serialized ``CleavingContext`` as sysmeta in the DB. The unique part of
the key is based off the DB id. This ensures that each context is
associated with a specific DB file. The unique part of the key is
included in the ``CleavingContext`` but should not be modified by any
caller.
* ``ref``: The unique part of the key that is used when persisting a
serialized ``CleavingContext`` as sysmeta in the DB. The unique part of
the key is based off the DB id. This ensures that each context is
associated with a specific DB file. The unique part of the key is
included in the ``CleavingContext`` but should not be modified by any
caller.

* ``cursor``: the upper bound of the last shard range to have been
cleaved from the retiring DB.
* ``cursor``: the upper bound of the last shard range to have been
cleaved from the retiring DB.

* ``max_row``: the retiring DB's max row; this is updated to the value of
the retiring DB's ``max_row`` every time a ``CleavingContext`` is
loaded for that DB, and may change during the process of cleaving the
DB.
* ``max_row``: the retiring DB's max row; this is updated to the value of
the retiring DB's ``max_row`` every time a ``CleavingContext`` is
loaded for that DB, and may change during the process of cleaving the
DB.

* ``cleave_to_row``: the value of ``max_row`` at the moment when cleaving
starts for the DB. When cleaving completes (i.e. the cleave cursor has
reached the upper bound of the cleaving namespace), ``cleave_to_row``
is compared to the current ``max_row``: if the two values are not equal
then rows have been added to the DB which may not have been cleaved, in
which case the ``CleavingContext`` is ``reset`` and cleaving is
re-started.
* ``cleave_to_row``: the value of ``max_row`` at the moment when cleaving
starts for the DB. When cleaving completes (i.e. the cleave cursor has
reached the upper bound of the cleaving namespace), ``cleave_to_row``
is compared to the current ``max_row``: if the two values are not equal
then rows have been added to the DB which may not have been cleaved, in
which case the ``CleavingContext`` is ``reset`` and cleaving is
re-started.

* ``last_cleave_to_row``: the minimum DB row from which cleaving should
select objects to cleave; this is initially set to None i.e. all rows
should be cleaved. If the ``CleavingContext`` is ``reset`` then the
``last_cleave_to_row`` is set to the current value of
``cleave_to_row``, which in turn is set to the current value of
``max_row`` by a subsequent call to ``start``. The repeated cleaving
therefore only selects objects in rows greater than the
``last_cleave_to_row``, rather than cleaving the whole DB again.
* ``last_cleave_to_row``: the minimum DB row from which cleaving should
select objects to cleave; this is initially set to None i.e. all rows
should be cleaved. If the ``CleavingContext`` is ``reset`` then the
``last_cleave_to_row`` is set to the current value of
``cleave_to_row``, which in turn is set to the current value of
``max_row`` by a subsequent call to ``start``. The repeated cleaving
therefore only selects objects in rows greater than the
``last_cleave_to_row``, rather than cleaving the whole DB again.

* ``ranges_done``: the number of shard ranges that have been cleaved from
the retiring DB.
* ``ranges_done``: the number of shard ranges that have been cleaved from
the retiring DB.

* ``ranges_todo``: the number of shard ranges that are yet to be
cleaved from the retiring DB.
* ``ranges_todo``: the number of shard ranges that are yet to be
cleaved from the retiring DB.
"""
def __init__(self, ref, cursor='', max_row=None, cleave_to_row=None,
last_cleave_to_row=None, cleaving_done=False,

@ -96,28 +96,28 @@ class ContainerSync(Daemon):
An example may help. Assume replica count is 3 and perfectly matching
ROWIDs starting at 1.

First sync run, database has 6 rows:
First sync run, database has 6 rows:

* SyncPoint1 starts as -1.
* SyncPoint2 starts as -1.
* No rows between points, so no "all updates" rows.
* Six rows newer than SyncPoint1, so a third of the rows are sent
by node 1, another third by node 2, remaining third by node 3.
* SyncPoint1 is set as 6 (the newest ROWID known).
* SyncPoint2 is left as -1 since no "all updates" rows were synced.
* SyncPoint1 starts as -1.
* SyncPoint2 starts as -1.
* No rows between points, so no "all updates" rows.
* Six rows newer than SyncPoint1, so a third of the rows are sent
by node 1, another third by node 2, remaining third by node 3.
* SyncPoint1 is set as 6 (the newest ROWID known).
* SyncPoint2 is left as -1 since no "all updates" rows were synced.

Next sync run, database has 12 rows:
Next sync run, database has 12 rows:

* SyncPoint1 starts as 6.
* SyncPoint2 starts as -1.
* The rows between -1 and 6 all trigger updates (most of which
should short-circuit on the remote end as having already been
done).
* Six more rows newer than SyncPoint1, so a third of the rows are
sent by node 1, another third by node 2, remaining third by node
3.
* SyncPoint1 is set as 12 (the newest ROWID known).
* SyncPoint2 is set as 6 (the newest "all updates" ROWID).
* SyncPoint1 starts as 6.
* SyncPoint2 starts as -1.
* The rows between -1 and 6 all trigger updates (most of which
should short-circuit on the remote end as having already been
done).
* Six more rows newer than SyncPoint1, so a third of the rows are
sent by node 1, another third by node 2, remaining third by node
3.
* SyncPoint1 is set as 12 (the newest ROWID known).
* SyncPoint2 is set as 6 (the newest "all updates" ROWID).

In this way, under normal circumstances each node sends its share of
updates each run and just sends a batch of older updates to ensure nothing

@ -116,17 +116,17 @@ class Receiver(object):

The general process inside an SSYNC request is:

1. Initialize the request: Basic request validation, mount check,
acquire semaphore lock, etc..
1. Initialize the request: Basic request validation, mount check,
acquire semaphore lock, etc..

2. Missing check: Sender sends the hashes and timestamps of
the object information it can send, receiver sends back
the hashes it wants (doesn't have or has an older
timestamp).
2. Missing check: Sender sends the hashes and timestamps of
the object information it can send, receiver sends back
the hashes it wants (doesn't have or has an older
timestamp).

3. Updates: Sender sends the object information requested.
3. Updates: Sender sends the object information requested.

4. Close down: Release semaphore lock, etc.
4. Close down: Release semaphore lock, etc.
"""

def __init__(self, app, request):
@ -346,29 +346,29 @@ class Receiver(object):

The process is generally:

1. Sender sends `:MISSING_CHECK: START` and begins
sending `hash timestamp` lines.
1. Sender sends ``:MISSING_CHECK: START`` and begins
sending `hash timestamp` lines.

2. Receiver gets `:MISSING_CHECK: START` and begins
reading the `hash timestamp` lines, collecting the
hashes of those it desires.
2. Receiver gets ``:MISSING_CHECK: START`` and begins
reading the `hash timestamp` lines, collecting the
hashes of those it desires.

3. Sender sends `:MISSING_CHECK: END`.
3. Sender sends ``:MISSING_CHECK: END``.

4. Receiver gets `:MISSING_CHECK: END`, responds with
`:MISSING_CHECK: START`, followed by the list of
<wanted_hash> specifiers it collected as being wanted
(one per line), `:MISSING_CHECK: END`, and flushes any
buffers.
4. Receiver gets ``:MISSING_CHECK: END``, responds with
``:MISSING_CHECK: START``, followed by the list of
<wanted_hash> specifiers it collected as being wanted
(one per line), ``:MISSING_CHECK: END``, and flushes any
buffers.

Each <wanted_hash> specifier has the form <hash>[ <parts>] where
<parts> is a string containing characters 'd' and/or 'm'
indicating that only data or meta part of object respectively is
required to be sync'd.
Each <wanted_hash> specifier has the form <hash>[ <parts>] where
<parts> is a string containing characters 'd' and/or 'm'
indicating that only data or meta part of object respectively is
required to be sync'd.

5. Sender gets `:MISSING_CHECK: START` and reads the list
of hashes desired by the receiver until reading
`:MISSING_CHECK: END`.
5. Sender gets ``:MISSING_CHECK: START`` and reads the list
of hashes desired by the receiver until reading
``:MISSING_CHECK: END``.

The collection and then response is so the sender doesn't
have to read while it writes to ensure network buffers don't
@ -413,18 +413,18 @@ class Receiver(object):

The process is generally:

1. Sender sends `:UPDATES: START` and begins sending the
PUT and DELETE subrequests.
1. Sender sends ``:UPDATES: START`` and begins sending the
PUT and DELETE subrequests.

2. Receiver gets `:UPDATES: START` and begins routing the
subrequests to the object server.
2. Receiver gets ``:UPDATES: START`` and begins routing the
subrequests to the object server.

3. Sender sends `:UPDATES: END`.
3. Sender sends ``:UPDATES: END``.

4. Receiver gets `:UPDATES: END` and sends `:UPDATES:
START` and `:UPDATES: END` (assuming no errors).
4. Receiver gets ``:UPDATES: END`` and sends ``:UPDATES:
START`` and ``:UPDATES: END`` (assuming no errors).

5. Sender gets `:UPDATES: START` and `:UPDATES: END`.
5. Sender gets ``:UPDATES: START`` and ``:UPDATES: END``.

If too many subrequests fail, as configured by
replication_failure_threshold and replication_failure_ratio,

@ -86,15 +86,15 @@ class BucketizedUpdateSkippingLimiter(object):

The iterator increments stats as follows:

* The `deferrals` stat is incremented for each update that is
rate-limited. Note that an individual update is rate-limited at most
once.
* The `skips` stat is incremented for each rate-limited update that is
not eventually yielded. This includes updates that are evicted from the
deferral queue and all updates that remain in the deferral queue when
``drain_until`` time is reached and the iterator terminates.
* The `drains` stat is incremented for each rate-limited update that is
eventually yielded.
* The `deferrals` stat is incremented for each update that is
rate-limited. Note that an individual update is rate-limited at most
once.
* The `skips` stat is incremented for each rate-limited update that is
not eventually yielded. This includes updates that are evicted from the
deferral queue and all updates that remain in the deferral queue when
``drain_until`` time is reached and the iterator terminates.
* The `drains` stat is incremented for each rate-limited update that is
eventually yielded.

Consequently, when this iterator terminates, the sum of `skips` and
`drains` is equal to the number of `deferrals`.
@ -219,12 +219,12 @@ class SweepStats(object):
"""
Stats bucket for an update sweep

A measure of the rate at which updates are being rate-limited is:
A measure of the rate at which updates are being rate-limited is::

deferrals / (deferrals + successes + failures - drains)

A measure of the rate at which updates are not being sent during a sweep
is:
is::

skips / (skips + successes + failures)
"""