
Fixed grammar and improved docs.

Corrected its/it's mistakes, harmonized line wrapping within some docs
and clarified doc wording in several places.

Change-Id: Ib9ac6d5e859f770a702e1fad6de8d4abe0390b47
changes/22/6422/1
Paul McMillan, 10 years ago
commit 92fbf44d10
1. AUTHORS (1)
2. doc/source/admin_guide.rst (63)
3. doc/source/deployment_guide.rst (4)
4. doc/source/development_auth.rst (9)
5. doc/source/overview_large_objects.rst (22)
6. swift/common/manager.py (2)
7. swift/common/middleware/cname_lookup.py (2)
8. swift/common/middleware/formpost.py (2)
9. swift/common/ring/builder.py (2)
10. test/probe/test_object_handoff.py (6)
11. test/unit/common/test_utils.py (4)

AUTHORS (1)

@@ -34,6 +34,7 @@ Soren Hansen (soren@linux2go.dk)
Derek Higgins (derekh@redhat.com)
Florian Hines (florian.hines@gmail.com)
Paul Jimenez (pj@place.org)
+Paul McMillan (paul.mcmillan@nebula.com)
Brian K. Jones (bkjones@gmail.com)
Morita Kazutaka (morita.kazutaka@gmail.com)
Ed Leafe (ed.leafe@rackspace.com)

doc/source/admin_guide.rst (63)

@@ -6,7 +6,11 @@ Administrator's Guide
Managing the Rings
------------------
-You need to build the storage rings on the proxy server node, and distribute them to all the servers in the cluster. Storage rings contain information about all the Swift storage partitions and how they are distributed between the different nodes and disks. For more information see :doc:`overview_ring`.
+You need to build the storage rings on the proxy server node, and
+distribute them to all the servers in the cluster. Storage rings
+contain information about all the Swift storage partitions and how
+they are distributed between the different nodes and disks. For more
+information see :doc:`overview_ring`.
Removing a device from the ring::
@@ -37,7 +41,8 @@ Scripting Ring Creation
-----------------------
You can create scripts to create the account and container rings and rebalance. Here's an example script for the Account ring. Use similar commands to create a make-container-ring.sh script on the proxy server node.
-1. Create a script file called make-account-ring.sh on the proxy server node with the following content::
+1. Create a script file called make-account-ring.sh on the proxy
+server node with the following content::
#!/bin/bash
cd /etc/swift
@@ -47,14 +52,25 @@ You can create scripts to create the account and container rings and rebalance.
swift-ring-builder account.builder add z2-<account-server-2>:6002/sdb1 1
swift-ring-builder account.builder rebalance
-You need to replace the values of <account-server-1>, <account-server-2>, etc. with the IP addresses of the account servers used in your setup. You can have as many account servers as you need. All account servers are assumed to be listening on port 6002, and have a storage device called "sdb1" (this is a directory name created under /drives when we setup the account server). The "z1", "z2", etc. designate zones, and you can choose whether you put devices in the same or different zones.
+You need to replace the values of <account-server-1>,
+<account-server-2>, etc. with the IP addresses of the account
+servers used in your setup. You can have as many account servers as
+you need. All account servers are assumed to be listening on port
+6002, and have a storage device called "sdb1" (this is a directory
+name created under /drives when we setup the account server). The
+"z1", "z2", etc. designate zones, and you can choose whether you
+put devices in the same or different zones.
2. Make the script file executable and run it to create the account ring file::
chmod +x make-account-ring.sh
sudo ./make-account-ring.sh
-3. Copy the resulting ring file /etc/swift/account.ring.gz to all the account server nodes in your Swift environment, and put them in the /etc/swift directory on these nodes. Make sure that every time you change the account ring configuration, you copy the resulting ring file to all the account nodes.
+3. Copy the resulting ring file /etc/swift/account.ring.gz to all the
+account server nodes in your Swift environment, and put them in the
+/etc/swift directory on these nodes. Make sure that every time you
+change the account ring configuration, you copy the resulting ring
+file to all the account nodes.
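The guide tells readers to mirror the account script with a make-container-ring.sh. A hedged sketch of that companion script follows; port 6001, device "sdb1", and the builder parameters "18 3 1" (part power, replica count, min_part_hours) are assumptions for illustration, not values taken from this commit:

```shell
# Sketch of a make-container-ring.sh analogous to the account example above.
# The <container-server-N> placeholders must be replaced with real IPs.
cat > make-container-ring.sh <<'EOF'
#!/bin/bash
cd /etc/swift
rm -f container.builder container.ring.gz backups/container.builder backups/container.ring.gz
swift-ring-builder container.builder create 18 3 1
swift-ring-builder container.builder add z1-<container-server-1>:6001/sdb1 1
swift-ring-builder container.builder add z2-<container-server-2>:6001/sdb1 1
swift-ring-builder container.builder rebalance
EOF
chmod +x make-container-ring.sh
# Syntax-check only; actually running it requires swift-ring-builder.
bash -n make-container-ring.sh && echo "syntax OK"
```

As with the account ring, the resulting container.ring.gz would then be copied to every node's /etc/swift directory.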
-----------------------
Handling System Updates
@@ -385,11 +401,26 @@ Swift Orphans
Swift Orphans are processes left over after a reload of a Swift server.
-For example, when upgrading a proxy server you would probaby finish with a `swift-init proxy-server reload` or `/etc/init.d/swift-proxy reload`. This kills the parent proxy server process and leaves the child processes running to finish processing whatever requests they might be handling at the time. It then starts up a new parent proxy server process and its children to handle new incoming requests. This allows zero-downtime upgrades with no impact to existing requests.
-The orphaned child processes may take a while to exit, depending on the length of the requests they were handling. However, sometimes an old process can be hung up due to some bug or hardware issue. In these cases, these orphaned processes will hang around forever. `swift-orphans` can be used to find and kill these orphans.
-`swift-orphans` with no arguments will just list the orphans it finds that were started more than 24 hours ago. You shouldn't really check for orphans until 24 hours after you perform a reload, as some requests can take a long time to process. `swift-orphans -k TERM` will send the SIG_TERM signal to the orphans processes, or you can `kill -TERM` the pids yourself if you prefer.
+For example, when upgrading a proxy server you would probaby finish
+with a `swift-init proxy-server reload` or `/etc/init.d/swift-proxy
+reload`. This kills the parent proxy server process and leaves the
+child processes running to finish processing whatever requests they
+might be handling at the time. It then starts up a new parent proxy
+server process and its children to handle new incoming requests. This
+allows zero-downtime upgrades with no impact to existing requests.
+The orphaned child processes may take a while to exit, depending on
+the length of the requests they were handling. However, sometimes an
+old process can be hung up due to some bug or hardware issue. In these
+cases, these orphaned processes will hang around
+forever. `swift-orphans` can be used to find and kill these orphans.
+`swift-orphans` with no arguments will just list the orphans it finds
+that were started more than 24 hours ago. You shouldn't really check
+for orphans until 24 hours after you perform a reload, as some
+requests can take a long time to process. `swift-orphans -k TERM` will
+send the SIG_TERM signal to the orphans processes, or you can `kill
+-TERM` the pids yourself if you prefer.
You can run `swift-orphans --help` for more options.
@@ -398,6 +429,14 @@ You can run `swift-orphans --help` for more options.
Swift Oldies
------------
-Swift Oldies are processes that have just been around for a long time. There's nothing necessarily wrong with this, but it might indicate a hung process if you regularly upgrade and reload/restart services. You might have so many servers that you don't notice when a reload/restart fails, `swift-oldies` can help with this.
-For example, if you upgraded and reloaded/restarted everything 2 days ago, and you've already cleaned up any orphans with `swift-orphans`, you can run `swift-oldies -a 48` to find any Swift processes still around that were started more than 2 days ago and then investigate them accordingly.
+Swift Oldies are processes that have just been around for a long
+time. There's nothing necessarily wrong with this, but it might
+indicate a hung process if you regularly upgrade and reload/restart
+services. You might have so many servers that you don't notice when a
+reload/restart fails, `swift-oldies` can help with this.
+For example, if you upgraded and reloaded/restarted everything 2 days
+ago, and you've already cleaned up any orphans with `swift-orphans`,
+you can run `swift-oldies -a 48` to find any Swift processes still
+around that were started more than 2 days ago and then investigate
+them accordingly.
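As a rough illustration of what `swift-oldies -a 48` is looking for, a ps/awk one-liner can list long-lived processes; this is only an approximation of the idea, not how `swift-oldies` is necessarily implemented:

```shell
# List pid and command of processes whose elapsed time exceeds 48 hours.
# etimes (elapsed seconds) is a procps ps output field.
threshold=$((48 * 3600))   # 48 hours in seconds
ps -eo pid,etimes,comm | awk -v t="$threshold" 'NR > 1 && $2 > t {print $1, $3}'
```

In practice you would further filter the output to Swift processes before investigating them.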

doc/source/deployment_guide.rst (4)

@@ -172,11 +172,11 @@ The resulting configuration that myapp receives is::
So, `name1` got the global value which is fine since it's only in the `DEFAULT`
section anyway.
-`name2` got the global value from `DEFAULT` even though it's seemingly
+`name2` got the global value from `DEFAULT` even though it appears to be
overridden in the `app:myapp` subsection. This is just the unfortunate way
paste.deploy works (at least at the time of this writing.)
-`name3` got the local value from the `app:myapp` subsection because it using
+`name3` got the local value from the `app:myapp` subsection because it is using
the special paste.deploy syntax of ``set option_name = value``. So, if you want
a default value for most app/filters but want to overridde it in one
subsection, this is how you do it.
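The paste.deploy precedence this passage describes can be modeled in a few lines. This is a toy model of the documented behavior only, not paste.deploy's actual code; the function and dict names are invented for illustration:

```python
# Toy model of the lookup order described above: a "set name = value"
# local always wins; otherwise a [DEFAULT] global shadows a plain local.
def effective_option(name, defaults, section):
    if 'set ' + name in section:       # special paste.deploy "set" syntax
        return section['set ' + name]
    if name in defaults:               # global from [DEFAULT] shadows locals
        return defaults[name]
    return section.get(name)

defaults = {'name1': 'global', 'name2': 'global'}
myapp = {'name2': 'local', 'set name3': 'local'}
assert effective_option('name1', defaults, myapp) == 'global'
assert effective_option('name2', defaults, myapp) == 'global'  # DEFAULT wins
assert effective_option('name3', defaults, myapp) == 'local'   # "set" wins
```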

doc/source/development_auth.rst (9)

@@ -218,10 +218,11 @@ only allow GETs after a referrer check and any requests after a group check::
return Authorization(app, conf)
return auth_filter
-The access control strings are set with PUTs and POSTs to containers with the
-X-Container-Read and X-Container-Write headers. Swift allows these strings to
-be set to any value, though it's very useful to validate the strings meet the
-desired format and return a useful error to the user if they don't.
+The access control strings are set with PUTs and POSTs to containers
+with the X-Container-Read and X-Container-Write headers. Swift allows
+these strings to be set to any value, though it's very useful to
+validate that the strings meet the desired format and return a useful
+error to the user if they don't.
To support this validation, the Swift Proxy application will call the WSGI
environment's swift.clean_acl callback whenever one of these headers is to be
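A minimal sketch of the kind of validation such a callback might perform follows. The grammar checked here (comma-separated entries such as ".r:&lt;referrer&gt;" or "account:user") is a simplified assumption for illustration, not Swift's full ACL syntax:

```python
# Sketch of a clean_acl-style validator: normalize whitespace and reject
# obviously malformed entries, returning a useful error message.
def clean_acl(header_name, value):
    cleaned = []
    for entry in value.split(','):
        entry = entry.strip()
        if not entry:
            continue
        if entry.startswith('.r:') and len(entry) == len('.r:'):
            raise ValueError('%s: empty referrer in %r' % (header_name, entry))
        cleaned.append(entry)
    return ','.join(cleaned)

assert clean_acl('x-container-read',
                 ' .r:example.com , account:user') == '.r:example.com,account:user'
```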

doc/source/overview_large_objects.rst (22)

@@ -17,9 +17,9 @@ with the possibility of parallel uploads of the segments.
Using ``swift`` for Segmented Objects
-------------------------------------
-The quickest way to try out this feature is use the included ``swift`` Swift Tool.
-You can use the ``-S`` option to specify the segment size to use when splitting
-a large file. For example::
+The quickest way to try out this feature is use the included ``swift``
+Swift Tool. You can use the ``-S`` option to specify the segment size
+to use when splitting a large file. For example::
swift upload test_container -S 1073741824 large_file
@@ -31,10 +31,10 @@ So now, the following ``swift`` command would download the entire large object::
swift download test_container large_file
-``swift`` uses a strict convention for its segmented object support. In the above
-example it will upload all the segments into a second container named
-test_container_segments. These segments will have names like
-large_file/1290206778.25/21474836480/00000000,
+``swift`` uses a strict convention for its segmented object
+support. In the above example it will upload all the segments into a
+second container named test_container_segments. These segments will
+have names like large_file/1290206778.25/21474836480/00000000,
large_file/1290206778.25/21474836480/00000001, etc.
The main benefit for using a separate container is that the main container
@@ -52,10 +52,10 @@ multiple versions of the same large object available.
Direct API
----------
-You can also work with the segments and manifests directly with HTTP requests
-instead of having ``swift`` do that for you. You can just upload the segments like
-you would any other object and the manifest is just a zero-byte file with an
-extra ``X-Object-Manifest`` header.
+You can also work with the segments and manifests directly with HTTP
+requests instead of having ``swift`` do that for you. You can just
+upload the segments like you would any other object and the manifest
+is just a zero-byte file with an extra ``X-Object-Manifest`` header.
All the object segments need to be in the same container, have a common object
name prefix, and their names sort in the order they should be concatenated.
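The sort-order requirement above is why the zero-padded segment names in the earlier example work. A quick sketch (the prefix mirrors the example names; the 8-digit padding width is taken from them):

```python
# Build manifest-style segment names and check that lexical order
# matches upload order, which is what concatenation relies on.
prefix = 'large_file/1290206778.25/21474836480/'
segments = ['%s%08d' % (prefix, i) for i in range(3)]
assert segments == sorted(segments)  # lexical order == concatenation order
assert segments[0].endswith('00000000')
```

With the manifest object's ``X-Object-Manifest`` header set to that shared prefix, a GET of the manifest returns the segments concatenated in this order.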

swift/common/manager.py (2)

@@ -539,7 +539,7 @@ class Server():
"""
status = 0
for proc in self.procs:
-# wait for process to close it's stdout
+# wait for process to close its stdout
output = proc.stdout.read()
if output:
print output

swift/common/middleware/cname_lookup.py (2)

@@ -24,7 +24,7 @@ from swift.common.utils import cache_from_env, get_logger
def lookup_cname(domain): # pragma: no cover
"""
-Given a domain, returns it's DNS CNAME mapping and DNS ttl.
+Given a domain, returns its DNS CNAME mapping and DNS ttl.
:param domain: domain to query on
:returns: (ttl, result)

swift/common/middleware/formpost.py (2)

@@ -64,7 +64,7 @@ additional ``<input type="file" name="filexx" />`` attributes if
desired.
The expires attribute is the Unix timestamp before which the form
-must be submitted before it's invalidated.
+must be submitted before it is invalidated.
The signature attribute is the HMAC-SHA1 signature of the form. Here is
sample code for computing the signature::
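The sample code itself is elided by the diff context. As a hedged sketch of computing such an HMAC-SHA1 signature (field order follows Swift's formpost documentation; the key, path, and values below are placeholders, not the elided sample):

```python
import hmac
from hashlib import sha1

# HMAC-SHA1 over the newline-joined form fields.
path = '/v1/account/container/object_prefix'   # placeholder
redirect = 'https://myserver.com/some-page'    # placeholder
max_file_size = 104857600
max_file_count = 10
expires = 1292273465
key = 'mykey'                                  # placeholder secret

hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect, max_file_size,
                                    max_file_count, expires)
signature = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
```

The resulting hex digest goes into the form's signature field alongside the expires timestamp.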

swift/common/ring/builder.py (2)

@@ -59,7 +59,7 @@ class RingBuilder(object):
# _last_part_moves is a 2**23 array of unsigned bytes representing the
# number of hours since a given partition was last moved. This is used
# to guarantee we don't move a partition twice within a given number of
-# hours (24 is my usual test). Removing a device or setting it's weight
+# hours (24 is my usual test). Removing a device or setting its weight
# to 0 overrides this behavior as it's assumed those actions are done
# because of device failure.
# _last_part_moves_epoch indicates the time the offsets in
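The bookkeeping this comment describes can be sketched with a small byte array; the names and the tiny size here are illustrative, not RingBuilder's actual attributes (which use a 2**23-entry array):

```python
from array import array

# One unsigned byte per partition: hours since that partition last moved.
MIN_PART_HOURS = 24
last_part_moves = array('B', (0xff,) * 16)  # 0xff: not moved recently

def can_move(part):
    return last_part_moves[part] >= MIN_PART_HOURS

def record_move(part):
    last_part_moves[part] = 0

def age_one_hour():
    for i, hours in enumerate(last_part_moves):
        if hours < 0xff:
            last_part_moves[i] = hours + 1

record_move(3)
assert not can_move(3)             # just moved: locked out
for _ in range(MIN_PART_HOURS):
    age_one_hour()
assert can_move(3)                 # 24 hours later it may move again
```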

test/probe/test_object_handoff.py (6)

@@ -99,7 +99,7 @@ class TestObjectHandoff(unittest.TestCase):
exc = True
if not exc:
raise Exception('Previously downed object server had test object')
-# Run the extra server last so it'll remove it's extra partition
+# Run the extra server last so it'll remove its extra partition
ps = []
for n in onodes:
ps.append(Popen(['swift-object-replicator',
@@ -151,7 +151,7 @@ class TestObjectHandoff(unittest.TestCase):
# if oheaders.get('x-object-meta-probe') == 'value':
# raise Exception('Previously downed object server had the new '
# 'metadata when it should not have it')
-# # Run the extra server last so it'll remove it's extra partition
+# # Run the extra server last so it'll remove its extra partition
# ps = []
# for n in onodes:
# ps.append(Popen(['swift-object-replicator',
@@ -196,7 +196,7 @@ class TestObjectHandoff(unittest.TestCase):
sleep(2)
direct_client.direct_get_object(onode, opart, self.account, container,
obj)
-# Run the extra server last so it'll remove it's extra partition
+# Run the extra server last so it'll remove its extra partition
ps = []
for n in onodes:
ps.append(Popen(['swift-object-replicator',

test/unit/common/test_utils.py (4)

@@ -675,7 +675,7 @@ log_name = %(yarr)s'''
start = time.time()
for i in range(50):
running_time = utils.ratelimit_sleep(running_time, 200)
-# make sure its accurate to 10th of a second
+# make sure it's accurate to 10th of a second
self.assertTrue(abs(25 - (time.time() - start) * 100) < 10)
def test_ratelimit_sleep_with_incr(self):
@@ -720,7 +720,7 @@ log_name = %(yarr)s'''
running_time = utils.ratelimit_sleep(running_time, 40,
rate_buffer=1)
time.sleep(i)
-# make sure its accurate to 10th of a second
+# make sure it's accurate to 10th of a second
self.assertTrue(abs(100 - (time.time() - start) * 100) < 10)
def test_search_tree(self):
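The behavior these tests expect can be sketched with a simplified ratelimit_sleep. Swift's real helper differs (it tracks time in milliseconds and supports a rate buffer); this is only an approximation of the contract the assertions above check:

```python
import time

# Sleep as needed so successive calls average at most max_rate per second;
# returns the next allowed wall-clock time, threaded through by the caller.
def ratelimit_sleep(running_time, max_rate):
    interval = 1.0 / max_rate
    now = time.time()
    if running_time > now:
        time.sleep(running_time - now)
        now = running_time
    return now + interval

running_time = 0.0
start = time.time()
for _ in range(50):
    running_time = ratelimit_sleep(running_time, 200)
elapsed = time.time() - start
# ~0.25s for 50 calls at 200/s, mirroring the test's tolerance
assert abs(25 - elapsed * 100) < 10
```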
