=====================
Administrator's Guide
=====================

-------------------------
Defining Storage Policies
-------------------------

Defining your Storage Policies is very easy to do with Swift. It is important
that the administrator understands the concepts behind Storage Policies
before actually creating and using them in order to get the most benefit out
of the feature and, more importantly, to avoid having to make unnecessary
changes once a set of policies has been deployed to a cluster.

It is highly recommended that the reader fully read and comprehend
:doc:`overview_policies` before proceeding with administration of
policies. Plan carefully and it is suggested that experimentation be
done first on a non-production cluster to be certain that the desired
configuration meets the needs of the users. See :ref:`upgrade-policy`
before planning the upgrade of your existing deployment.

Following is a high level view of the very few steps it takes to configure
policies once you have decided what you want to do:

#. Define your policies in ``/etc/swift/swift.conf``
#. Create the corresponding object rings
#. Communicate the names of the Storage Policies to cluster users

For a specific example that takes you through these steps, please see
:doc:`policies_saio`

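As a brief illustration of step 1, policy definitions in ``/etc/swift/swift.conf``
might look like the following (the policy names here are only examples)::

    [storage-policy:0]
    name = gold
    default = yes

    [storage-policy:1]
    name = silver

Policy 0 uses the existing ``object.ring.gz``; each additional policy with
index N needs its own ``object-N.builder`` and ``object-N.ring.gz``, built
with the same ``swift-ring-builder`` commands described below.
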
------------------
Managing the Rings
------------------

You may build the storage rings on any server with the appropriate
version of Swift installed. Once built or changed (rebalanced), you
must distribute the rings to all the servers in the cluster. Storage
rings contain information about all the Swift storage partitions and
how they are distributed between the different nodes and disks.
Swift 1.6.0 is the last version to use a Python pickle format.
Subsequent versions use a different serialization format. **Rings
generated by Swift versions 1.6.0 and earlier may be read by any
version, but rings generated after 1.6.0 may only be read by Swift
versions greater than 1.6.0.** So when upgrading from version 1.6.0 or
earlier to a version greater than 1.6.0, either upgrade Swift on your
ring building server **last** after all Swift nodes have been successfully
upgraded, or refrain from generating rings until all Swift nodes have
been successfully upgraded.
If you need to downgrade from a version of Swift greater than 1.6.0 to
a version less than or equal to 1.6.0, first downgrade your ring-building
server, generate new rings, push them out, then continue with the rest
of the downgrade.
For more information see :doc:`overview_ring`.

.. highlight:: none

Removing a device from the ring::

    swift-ring-builder <builder-file> remove <ip_address>/<device_name>

Removing a server from the ring::

    swift-ring-builder <builder-file> remove <ip_address>

Adding devices to the ring:

See :ref:`ring-preparing`

See what devices for a server are in the ring::

    swift-ring-builder <builder-file> search <ip_address>

Once you are done with all changes to the ring, the changes need to be
"committed"::

    swift-ring-builder <builder-file> rebalance

Once the new rings are built, they should be pushed out to all the servers
in the cluster.
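How you push the rings out is up to you; as a simple illustration (the host
names and the use of ``scp`` are assumptions, not requirements), something like
the following could copy the rebalanced rings to each storage node::

    for node in storage01 storage02 storage03; do
        scp /etc/swift/*.ring.gz ${node}:/etc/swift/
    done
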
Optionally, if invoked as 'swift-ring-builder-safe' the directory containing
the specified builder file will be locked (via a .lock file in the parent
directory). This provides a basic safeguard that prevents multiple instances
of the swift-ring-builder (or other utilities that observe this lock) from
attempting to write to or read the builder/ring files while operations are in
progress. This can be useful in environments where ring management has been
automated but the operator still needs to interact with the rings manually.
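For example, to rebalance while holding the lock, the invocation is the same
as for ``swift-ring-builder`` (shown here purely as an illustration)::

    swift-ring-builder-safe <builder-file> rebalance
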
If the ring builder is not producing the balances that you are
expecting, you can gain visibility into what it's doing with the
``--debug`` flag::

    swift-ring-builder <builder-file> rebalance --debug

This produces a great deal of output that is mostly useful if you are
either (a) attempting to fix the ring builder, or (b) filing a bug
against the ring builder.

You may notice a 'dispersion' number in the rebalance output. What this
number means is explained in :ref:`ring_dispersion`; in essence it is the
percentage of partitions in the ring that have too many replicas within a
particular failure domain. You can ask 'swift-ring-builder' what the
dispersion is with::

    swift-ring-builder <builder-file> dispersion

This will give you the percentage again; if you want a detailed view of
the dispersion simply add ``--verbose``::

    swift-ring-builder <builder-file> dispersion --verbose

This will not only display the percentage but will also display a dispersion
table that lists partition dispersion by tier. You can use this table to figure
out where you need to add capacity or to help tune an :ref:`ring_overload` value.

Now let's take an example with 1 region, 3 zones and 4 devices. Each device has
the same weight, and the ``dispersion --verbose`` might show the following::

    Dispersion is 16.666667, Balance is 0.000000, Overload is 0.00%
    Required overload is 33.333333%
    Worst tier is 33.333333 (r1z3)
    --------------------------------------------------------------------------
    Tier                  Parts      %    Max   0    1    2    3
    --------------------------------------------------------------------------
    r1                      768   0.00      3   0    0    0  256
    r1z1                    192   0.00      1  64  192    0    0
    r1z1-127.0.0.1          192   0.00      1  64  192    0    0
    r1z1-127.0.0.1/sda      192   0.00      1  64  192    0    0
    r1z2                    192   0.00      1  64  192    0    0
    r1z2-127.0.0.2          192   0.00      1  64  192    0    0
    r1z2-127.0.0.2/sda      192   0.00      1  64  192    0    0
    r1z3                    384  33.33      1   0  128  128    0
    r1z3-127.0.0.3          384  33.33      1   0  128  128    0
    r1z3-127.0.0.3/sda      192   0.00      1  64  192    0    0
    r1z3-127.0.0.3/sdb      192   0.00      1  64  192    0    0

The first line reports that there are 256 partitions with 3 copies in region 1,
which is the expected output in this case (a single region with 3 replicas), as
reported by the "Max" value.

However, there is some imbalance in the cluster, more precisely in zone 3. The
"Max" reports a maximum of 1 copy in this zone; however 50.00% of the partitions
are storing 2 replicas in this zone (which is somewhat expected, because there
are more disks in this zone).

You can now either add more capacity to the other zones, decrease the total
weight in zone 3 or set the overload to a value `greater than` 33.333333% -
only as much overload as needed will be used.
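For instance, if you choose the overload route, something like the following
would set an overload of 0.34 (i.e. 34%) and apply it (the exact value is your
choice)::

    swift-ring-builder <builder-file> set_overload 0.34
    swift-ring-builder <builder-file> rebalance
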
-----------------------
Scripting Ring Creation
-----------------------
You can create scripts to create the account and container rings and rebalance.
Here's an example script for the Account ring. Use similar commands to create a
make-container-ring.sh script on the proxy server node.

1. Create a script file called make-account-ring.sh on the proxy
   server node with the following content::

        #!/bin/bash
        cd /etc/swift
        rm -f account.builder account.ring.gz backups/account.builder backups/account.ring.gz
        swift-ring-builder account.builder create 18 3 1
        swift-ring-builder account.builder add r1z1-<account-server-1>:6202/sdb1 1
        swift-ring-builder account.builder add r1z2-<account-server-2>:6202/sdb1 1
        swift-ring-builder account.builder rebalance

   You need to replace the values of <account-server-1>,
   <account-server-2>, etc. with the IP addresses of the account
   servers used in your setup. You can have as many account servers as
   you need. All account servers are assumed to be listening on port
   6202, and have a storage device called "sdb1" (this is a directory
   name created under /drives when we set up the account server). The
   "z1", "z2", etc. designate zones, and you can choose whether you
   put devices in the same or different zones. The "r1" designates
   the region, with different regions specified as "r1", "r2", etc.

2. Make the script file executable and run it to create the account ring file::

        chmod +x make-account-ring.sh
        sudo ./make-account-ring.sh

3. Copy the resulting ring file /etc/swift/account.ring.gz to all the
   account server nodes in your Swift environment, and put them in the
   /etc/swift directory on these nodes. Make sure that every time you
   change the account ring configuration, you copy the resulting ring
   file to all the account nodes.

-----------------------
Handling System Updates
-----------------------
It is recommended that system updates and reboots are done a zone at a time.
This allows the update to happen while the Swift cluster stays available
and responsive to requests. It is also advisable, when updating a zone, to let
it run for a while before updating the other zones to make sure the update
doesn't have any adverse effects.

----------------------
Handling Drive Failure
----------------------
In the event that a drive has failed, the first step is to make sure the drive
is unmounted. This will make it easier for Swift to work around the failure
2010-07-30 14:57:20 -05:00
until it has been resolved. If the drive is going to be replaced immediately,
then it is just best to replace the drive, format it, remount it, and let
replication fill it up.
After the drive is unmounted, make sure the mount point is owned by root
(root:root 755). This ensures that rsync will not try to replicate into the
root drive once the failed drive is unmounted.
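For example, assuming the failed device was mounted at ``/srv/node/sdb1``
(adjust the path for your layout)::

    umount /srv/node/sdb1
    chown root:root /srv/node/sdb1
    chmod 755 /srv/node/sdb1
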
If the drive can't be replaced immediately, then it is best to leave it
unmounted, and set the device weight to 0. This will allow all the
replicas that were on that drive to be replicated elsewhere until the drive
is replaced. Once the drive is replaced, the device weight can be increased
again. Setting the device weight to 0 instead of removing the drive from the
ring gives Swift the chance to replicate data from the failing disk too (in case
it is still possible to read some of the data).
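As an illustration, zeroing the weight of the failed device and applying the
change might look like this (using the same search-value form shown above)::

    swift-ring-builder object.builder set_weight <ip_address>/<device_name> 0
    swift-ring-builder object.builder rebalance
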
Setting the device weight to 0 (or removing a failed drive from the ring) has
another benefit: all partitions that were stored on the failed drive are
distributed over the remaining disks in the cluster, and each disk only needs to
store a few new partitions. This is much faster compared to replicating all
partitions to a single, new disk. It decreases the time to recover from a
degraded number of replicas significantly, and becomes more and more important
with bigger disks.

-----------------------
Handling Server Failure
-----------------------

If a server is having hardware issues, it is a good idea to make sure the
Swift services are not running. This will allow Swift to work around the
2010-07-30 14:57:20 -05:00
failure while you troubleshoot.
If the server just needs a reboot, or a small amount of work that should
only last a couple of hours, then it is probably best to let Swift work
around the failure and get the machine fixed and back online. When the
machine comes back online, replication will make sure that anything that is
missing during the downtime will get updated.
If the server has more serious issues, then it is probably best to remove
all of the server's devices from the ring. Once the server has been repaired
and is back online, the server's devices can be added back into the ring.
It is important that the devices are reformatted before putting them back
into the ring as they are likely to be responsible for a different set of
partitions than before.

-----------------------
Detecting Failed Drives
-----------------------
It has been our experience that when a drive is about to fail, error messages
will spew into `/var/log/kern.log`. There is a script called
`swift-drive-audit` that can be run via cron to watch for bad drives. If
errors are detected, it will unmount the bad drive, so that Swift can
work around it. The script takes a configuration file with the following
settings:

``[drive-audit]``

================== ============== ===========================================
Option             Default        Description
------------------ -------------- -------------------------------------------
user               swift          Drop privileges to this user for non-root
                                  tasks
log_facility       LOG_LOCAL0     Syslog log facility
log_level          INFO           Log level
device_dir         /srv/node      Directory devices are mounted under
minutes            60             Number of minutes to look back in
                                  `/var/log/kern.log`
error_limit        1              Number of errors to find before a device
                                  is unmounted
log_file_pattern   /var/log/kern* Location of the log file with globbing
                                  pattern to check against device errors
regex_pattern_X    (see below)    Regular expression patterns to be used to
                                  locate device blocks with errors in the
                                  log file
================== ============== ===========================================

The default regex patterns used to locate device blocks with errors are
`\berror\b.*\b(sd[a-z]{1,2}\d?)\b` and `\b(sd[a-z]{1,2}\d?)\b.*\berror\b`.
You can override the defaults by providing new expressions
using the format `regex_pattern_X = regex_expression`, where `X` is a number.
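As an illustrative sketch, a configuration file (for example
``/etc/swift/drive-audit.conf``; the path and values here are assumptions, not
requirements) might contain::

    [drive-audit]
    log_facility = LOG_LOCAL0
    log_level = INFO
    device_dir = /srv/node
    minutes = 60
    error_limit = 2

and a cron entry to run it regularly could look like::

    */10 * * * * root /usr/bin/swift-drive-audit /etc/swift/drive-audit.conf
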
This script has been tested on Ubuntu 10.04 and Ubuntu 12.04, so if you are
using a different distro or OS, some care should be taken before using in
production.

------------------------------
Preventing Disk Full Scenarios
------------------------------
.. highlight:: cfg

Prevent disk full scenarios by ensuring that the ``proxy-server`` blocks PUT
requests and rsync prevents replication to the specific drives.

You can prevent `proxy-server` PUT requests to low space disks by
ensuring ``fallocate_reserve`` is set in ``account-server.conf``,
``container-server.conf``, and ``object-server.conf``. By default,
``fallocate_reserve`` is set to 1%. In the object server, this blocks
PUT requests that would leave the free disk space below 1% of the
disk. In the account and container servers, this blocks operations
that will increase account or container database size once the free
disk space falls below 1%.

Setting ``fallocate_reserve`` is highly recommended to avoid filling
disks to 100%. When Swift's disks are completely full, all requests
involving those disks will fail, including DELETE requests that would
otherwise free up space. This is because object deletion includes the
creation of a zero-byte tombstone (.ts) to record the time of the
deletion for replication purposes; this happens prior to deletion of
the object's data. On a completely-full filesystem, that zero-byte .ts
file cannot be created, so the DELETE request will fail and the disk
will remain completely full. If ``fallocate_reserve`` is set, then the
filesystem will have enough space to create the zero-byte .ts file,
and thus the deletion of the object will succeed and free up some
space.
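For example, to reserve 2% of each disk instead of the default 1% (the value is
illustrative), set this in the ``[DEFAULT]`` section of the relevant server
configs::

    [DEFAULT]
    fallocate_reserve = 2%
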
In order to prevent rsync replication to specific drives, first set up an
``rsync_module`` per disk in your ``object-replicator``.
Set this in ``object-server.conf``:

.. code:: cfg

    [object-replicator]
    rsync_module = {replication_ip}::object_{device}

Set the individual drives in ``rsync.conf``. For example:

.. code:: cfg

    [object_sda]
    max connections = 4
    lock file = /var/lock/object_sda.lock

    [object_sdb]
    max connections = 4
    lock file = /var/lock/object_sdb.lock

Finally, monitor the disk space of each disk and adjust the rsync
``max connections`` per drive to ``-1``. We recommend utilising your existing
monitoring solution to achieve this. The following is an example script:

.. code-block:: python

    #!/usr/bin/env python
    import os
    import errno

    # Disable rsync to a device once its free space drops below RESERVE.
    RESERVE = 500 * 2 ** 20  # 500 MiB

    DEVICES = '/srv/node1'

    path_template = '/etc/rsync.d/disable_%s.conf'
    config_template = '''
    [object_%s]
    max connections = -1
    '''


    def disable_rsync(device):
        with open(path_template % device, 'w') as f:
            f.write(config_template.lstrip() % device)


    def enable_rsync(device):
        try:
            os.unlink(path_template % device)
        except OSError as e:
            # ignore file does not exist
            if e.errno != errno.ENOENT:
                raise


    for device in os.listdir(DEVICES):
        path = os.path.join(DEVICES, device)
        st = os.statvfs(path)
        free = st.f_bavail * st.f_frsize
        if free < RESERVE:
            disable_rsync(device)
        else:
            enable_rsync(device)

For the above script to work, ensure ``/etc/rsync.d/`` conf files are
included, by specifying ``&include`` in your ``rsync.conf`` file:

.. code:: cfg

    &include /etc/rsync.d

Use this in conjunction with a cron job to periodically run the script,
for example:

.. highlight:: none

.. code:: cfg

    # /etc/cron.d/devicecheck
    * * * * * root /some/path/to/disable_rsync.py

.. _dispersion_report:

-----------------
Dispersion Report
-----------------

There is a swift-dispersion-report tool for measuring overall cluster health.
This is accomplished by checking if a set of deliberately distributed
containers and objects are currently in their proper places within the cluster.
For instance, a common deployment has three replicas of each object. The health
of that object can be measured by checking if each replica is in its proper
place. If only 2 of the 3 are in place the object's health can be said to be at
66.66%, where 100% would be perfect.

A single object's health, especially an older object, usually reflects the
health of the entire partition the object is in. If we make enough objects on
a distinct percentage of the partitions in the cluster, we can get a pretty
valid estimate of the overall cluster health. In practice, about 1% partition
coverage seems to balance well between accuracy and the amount of time it takes
to gather results.
The first thing that needs to be done to provide this health value is create a
new account solely for this usage. Next, we need to place the containers and
objects throughout the system so that they are on distinct partitions. The
swift-dispersion-populate tool does this by making up random container and
object names until they fall on distinct partitions. Last, and repeatedly for
the life of the cluster, we need to run the swift-dispersion-report tool to
check the health of each of these containers and objects.
.. highlight:: cfg

These tools need direct access to the entire cluster and to the ring files
(installing them on a proxy server will probably do). Both
swift-dispersion-populate and swift-dispersion-report use the same
configuration file, /etc/swift/dispersion.conf. Example conf file::

    [dispersion]
    auth_url = http://localhost:8080/auth/v1.0
    auth_user = test:tester
    auth_key = testing
    endpoint_type = internalURL

.. highlight:: none

There are also options for the conf file for specifying the dispersion coverage
(defaults to 1%), retries, concurrency, etc. though usually the defaults are
fine. If you want to use keystone v3 for authentication there are options like
auth_version, user_domain_name, project_domain_name and project_name.
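A sketch of a keystone v3 style configuration (every value below is a
placeholder to adapt to your deployment)::

    [dispersion]
    auth_version = 3
    auth_url = http://localhost:5000/v3/
    auth_user = dispersion_user
    auth_key = dispersion_secret
    project_name = dispersion
    project_domain_name = default
    user_domain_name = default
    endpoint_type = internalURL
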
Once the configuration is in place, run `swift-dispersion-populate` to populate
the containers and objects throughout the cluster.
Now that those containers and objects are in place, you can run
`swift-dispersion-report` to get a dispersion report, or the overall health of
the cluster. Here is an example of a cluster in perfect health::

    $ swift-dispersion-report
    Queried 2621 containers for dispersion reporting, 19s, 0 retries
    100.00% of container copies found (7863 of 7863)
    Sample represents 1.00% of the container partition space

    Queried 2619 objects for dispersion reporting, 7s, 0 retries
    100.00% of object copies found (7857 of 7857)
    Sample represents 1.00% of the object partition space

Now I'll deliberately double the weight of a device in the object ring (with
replication turned off) and rerun the dispersion report to show what impact
that has::

    $ swift-ring-builder object.builder set_weight d0 200
    $ swift-ring-builder object.builder rebalance
    ...
    $ swift-dispersion-report
    Queried 2621 containers for dispersion reporting, 8s, 0 retries
    100.00% of container copies found (7863 of 7863)
    Sample represents 1.00% of the container partition space

    Queried 2619 objects for dispersion reporting, 7s, 0 retries
    There were 1763 partitions missing one copy.
    77.56% of object copies found (6094 of 7857)
    Sample represents 1.00% of the object partition space

You can see the health of the objects in the cluster has gone down
significantly. Of course, I only have four devices in this test environment;
in a production environment with many, many devices the impact of one device
change is much less. Next, I'll run the replicators to get everything put back
into place and then rerun the dispersion report::

    ... start object replicators and monitor logs until they're caught up ...
    $ swift-dispersion-report
    Queried 2621 containers for dispersion reporting, 17s, 0 retries
    100.00% of container copies found (7863 of 7863)
    Sample represents 1.00% of the container partition space

    Queried 2619 objects for dispersion reporting, 7s, 0 retries
    100.00% of object copies found (7857 of 7857)
    Sample represents 1.00% of the object partition space

You can also run the report for only containers or objects::

    $ swift-dispersion-report --container-only
    Queried 2621 containers for dispersion reporting, 17s, 0 retries
    100.00% of container copies found (7863 of 7863)
    Sample represents 1.00% of the container partition space

    $ swift-dispersion-report --object-only
    Queried 2619 objects for dispersion reporting, 7s, 0 retries
    100.00% of object copies found (7857 of 7857)
    Sample represents 1.00% of the object partition space

Alternatively, the dispersion report can also be output in JSON format. This
allows it to be more easily consumed by third party utilities::

    $ swift-dispersion-report -j
    {"object": {"retries:": 0, "missing_two": 0, "copies_found": 7863, "missing_one": 0, "copies_expected": 7863, "pct_found": 100.0, "overlapping": 0, "missing_all": 0}, "container": {"retries:": 0, "missing_two": 0, "copies_found": 12534, "missing_one": 0, "copies_expected": 12534, "pct_found": 100.0, "overlapping": 15, "missing_all": 0}}

Note that you may select which storage policy to use by setting the option
'--policy-name silver' or '-P silver' (silver is the example policy name here).
If no policy is specified, the default will be used per the swift.conf file.
When you specify a policy the containers created also include the policy index,
thus even when running a container_only report, you will need to specify the
policy if it is not the default.
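For example, assuming a policy named 'silver' exists (the name is purely
illustrative), the populate and report runs would look like::

    $ swift-dispersion-populate --policy-name silver
    $ swift-dispersion-report -P silver
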
-----------------------------------------------
Geographically Distributed Swift Considerations
-----------------------------------------------
Swift provides two features that may be used to distribute replicas of objects
across multiple geographically distributed data-centers: with
:doc:`overview_global_cluster` object replicas may be dispersed across devices
from different data-centers by using `regions` in ring device descriptors; with
:doc:`overview_container_sync` objects may be copied between independent Swift
clusters in each data-center. The operation and configuration of each are
described in their respective documentation. The following points should be
considered when selecting the feature that is most appropriate for a particular
use case:

#. Global Clusters allows the distribution of object replicas across
   data-centers to be controlled by the cluster operator on a per-policy basis,
   since the distribution is determined by the assignment of devices from
   each data-center in each policy's ring file. With Container Sync the end
   user controls the distribution of objects across clusters on a
   per-container basis.

#. Global Clusters requires an operator to coordinate ring deployments across
   multiple data-centers. Container Sync allows for independent management of
   separate Swift clusters in each data-center, and for existing Swift
   clusters to be used as peers in Container Sync relationships without
   deploying new policies/rings.

#. Global Clusters seamlessly supports features that may rely on
   cross-container operations such as large objects and versioned writes.
   Container Sync requires the end user to ensure that all required
   containers are sync'd for these features to work in all data-centers.

#. Global Clusters makes objects available for GET or HEAD requests in both
   data-centers even if a replica of the object has not yet been
   asynchronously migrated between data-centers, by forwarding requests
   between data-centers. Container Sync is unable to serve requests for an
   object in a particular data-center until the asynchronous sync process has
   copied the object to that data-center.

#. Global Clusters may require less storage capacity than Container Sync to
   achieve equivalent durability of objects in each data-center. Global
   Clusters can restore replicas that are lost or corrupted in one
   data-center using replicas from other data-centers. Container Sync
   requires each data-center to independently manage the durability of
   objects, which may result in each data-center storing more replicas than
   with Global Clusters.

#. Global Clusters execute all account/container metadata updates
   synchronously to account/container replicas in all data-centers, which may
   incur delays when making updates across WANs. Container Sync only copies
   objects between data-centers and all Swift internal traffic is
   confined to each data-center.

#. Global Clusters does not yet guarantee the availability of objects stored
   in Erasure Coded policies when one data-center is offline. With Container
   Sync the availability of objects in each data-center is independent of the
   state of other data-centers once objects have been synced. Container Sync
   also allows objects to be stored using different policy types in different
   data-centers.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Checking handoff partition distribution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can check if handoff partitions are piling up on a server by
comparing the expected number of partitions with the actual number on
your disks. First get the number of partitions that are currently
assigned to a server using the ``dispersion`` command from
``swift-ring-builder``::

    swift-ring-builder sample.builder dispersion --verbose
    Dispersion is 0.000000, Balance is 0.000000, Overload is 0.00%
    Required overload is 0.000000%
    --------------------------------------------------------------------------
    Tier                    Parts      %    Max    0     1     2     3
    --------------------------------------------------------------------------
    r1                       8192   0.00      2    0     0  8192     0
    r1z1                     4096   0.00      1 4096  4096     0     0
    r1z1-172.16.10.1         4096   0.00      1 4096  4096     0     0
    r1z1-172.16.10.1/sda1    4096   0.00      1 4096  4096     0     0
    r1z2                     4096   0.00      1 4096  4096     0     0
    r1z2-172.16.10.2         4096   0.00      1 4096  4096     0     0
    r1z2-172.16.10.2/sda1    4096   0.00      1 4096  4096     0     0
    r1z3                     4096   0.00      1 4096  4096     0     0
    r1z3-172.16.10.3         4096   0.00      1 4096  4096     0     0
    r1z3-172.16.10.3/sda1    4096   0.00      1 4096  4096     0     0
    r1z4                     4096   0.00      1 4096  4096     0     0
    r1z4-172.16.20.4         4096   0.00      1 4096  4096     0     0
    r1z4-172.16.20.4/sda1    4096   0.00      1 4096  4096     0     0
    r2                       8192   0.00      2    0  8192     0     0
    r2z1                     4096   0.00      1 4096  4096     0     0
    r2z1-172.16.20.1         4096   0.00      1 4096  4096     0     0
    r2z1-172.16.20.1/sda1    4096   0.00      1 4096  4096     0     0
    r2z2                     4096   0.00      1 4096  4096     0     0
    r2z2-172.16.20.2         4096   0.00      1 4096  4096     0     0
    r2z2-172.16.20.2/sda1    4096   0.00      1 4096  4096     0     0

As you can see from the output, each server should store 4096 partitions, and
each region should store 8192 partitions. This example used a partition power
of 13 and 3 replicas.
With write_affinity enabled it is expected to have a higher number of
partitions on disk compared to the value reported by the
swift-ring-builder dispersion command. The number of additional (handoff)
partitions in region r1 depends on your cluster size, the amount
of incoming data as well as the replication speed.
Let's use the example from above with 6 nodes in 2 regions, and write_affinity
configured to write to region r1 first. `swift-ring-builder` reported that
each node should store 4096 partitions::

    Expected partitions for region r2: 8192
    Handoffs stored across 4 nodes in region r1: 8192 / 4 = 2048
    Maximum number of partitions on each server in region r1: 2048 + 4096 = 6144

Worst case is that handoff partitions in region 1 are populated with new
object replicas faster than replication is able to move them to region 2.
In that case you will see ~ 6144 partitions per
server in region r1. Your actual number should be lower and
between 4096 and 6144 partitions (preferably on the lower side).
Now count the number of object partitions on a given server in region 1,
for example on 172.16.10.1. Note that the pathnames might be
different; `/srv/node/` is the default mount location, and `objects`
applies only to storage policy 0 (storage policy 1 would use
`objects-1` and so on)::

    find -L /srv/node/ -maxdepth 3 -type d -wholename "*objects/*" | wc -l

If this number is always at the upper end of the expected partition
number range (4096 to 6144) or increasing, you should check your
replication speed and maybe even disable write_affinity.
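For reference, write_affinity lives in the ``[app:proxy-server]`` section of
``proxy-server.conf``; a minimal sketch (the region value is illustrative, and
leaving the option unset or empty disables write affinity) looks like::

    [app:proxy-server]
    write_affinity = r1
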
Please refer to the next section for how to collect metrics from Swift, and
especially to :ref:`swift-recon -r <recon-replication>` for how to check
replication stats.

.. _cluster_telemetry_and_monitoring:

--------------------------------
Cluster Telemetry and Monitoring
--------------------------------
Various metrics and telemetry can be obtained from the account, container, and
object servers using the recon server middleware and the swift-recon cli. To do
so update your account, container, or object server pipelines to include recon
and add the associated filter config.

.. highlight:: cfg

object-server.conf sample::

    [pipeline:main]
    pipeline = recon object-server

    [filter:recon]
    use = egg:swift#recon
    recon_cache_path = /var/cache/swift

container-server.conf sample::

    [pipeline:main]
    pipeline = recon container-server

    [filter:recon]
    use = egg:swift#recon
    recon_cache_path = /var/cache/swift

account-server.conf sample::

    [pipeline:main]
    pipeline = recon account-server

    [filter:recon]
    use = egg:swift#recon
    recon_cache_path = /var/cache/swift

.. highlight:: none

The recon_cache_path simply sets the directory where stats for a few items will
be stored. Depending on the method of deployment you may need to create this
directory manually and ensure that Swift has read/write access.
Finally, if you also wish to track asynchronous pending on your object
servers you will need to set up a cronjob to run the swift-recon-cron script
periodically on your object servers::

    */5 * * * * swift /usr/bin/swift-recon-cron /etc/swift/object-server.conf

Once the recon middleware is enabled, a GET request for
"/recon/<metric>" to the backend object server will return a
JSON-formatted response::

    fhines@ubuntu:~$ curl -i http://localhost:6230/recon/async
    HTTP/1.1 200 OK
    Content-Type: application/json
    Content-Length: 20
    Date: Tue, 18 Oct 2011 21:03:01 GMT

    {"async_pending": 0}

2016-02-01 18:06:54 +00:00
Note that the default port for the object server is 6200, except on a
Swift All-In-One installation, which uses 6210, 6220, 6230, and 6240.
The following metrics and telemetry are currently exposed:

========================= ========================================================================================
Request URI               Description
------------------------- ----------------------------------------------------------------------------------------
/recon/load               returns 1,5, and 15 minute load average
/recon/mem                returns /proc/meminfo
/recon/mounted            returns *ALL* currently mounted filesystems
/recon/unmounted          returns all unmounted drives if mount_check = True
/recon/diskusage          returns disk utilization for storage devices
/recon/driveaudit         returns # of drive audit errors
/recon/ringmd5            returns object/container/account ring md5sums
/recon/swiftconfmd5       returns swift.conf md5sum
/recon/quarantined        returns # of quarantined objects/accounts/containers
/recon/sockstat           returns consumable info from /proc/net/sockstat|6
/recon/devices            returns list of devices and devices dir i.e. /srv/node
/recon/async              returns count of async pending
/recon/replication        returns object replication info (for backward compatibility)
/recon/replication/<type> returns replication info for given type (account, container, object)
/recon/auditor/<type>     returns auditor stats on last reported scan for given type (account, container, object)
/recon/updater/<type>     returns last updater sweep times for given type (container, object)
/recon/expirer/object     returns time elapsed and number of objects deleted during last object expirer sweep
/recon/version            returns Swift version
/recon/time               returns node time
========================= ========================================================================================

Note that 'object_replication_last' and 'object_replication_time' in object
replication info are considered to be transitional and will be removed in
the subsequent releases. Use 'replication_last' and 'replication_time' instead.
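For example, object replication info can be fetched directly from an object
server (the address is a placeholder, the port assumes the default 6200, and
the exact set of keys depends on your Swift version)::

    $ curl http://<storage-node-ip>:6200/recon/replication/object
    {"replication_last": 1416334368.60865,
     "replication_stats": {"attempted": 13346, "failure": 870, "hashmatch": 0,
                           "remove": 0, "rsync": 0, "start": 1416354240.9761429,
                           "success": 1908},
     "replication_time": 2316.5563162644703, ...}
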
This information can also be queried via the swift-recon command line utility::

    fhines@ubuntu:~$ swift-recon -h
    Usage:
        usage: swift-recon <server_type> [-v] [--suppress] [-a] [-r] [-u] [-d]
        [-R] [-l] [-T] [--md5] [--auditor] [--updater] [--expirer] [--sockstat]

        <server_type>  account|container|object
        Defaults to object server.

        ex: swift-recon container -l --auditor

    Options:
      -h, --help            show this help message and exit
      -v, --verbose         Print verbose info
      --suppress            Suppress most connection related errors
      -a, --async           Get async stats
      -r, --replication     Get replication stats
      -R, --reconstruction  Get reconstruction stats
      --auditor             Get auditor stats
      --updater             Get updater stats
      --expirer             Get expirer stats
      -u, --unmounted       Check cluster for unmounted devices
      -d, --diskusage       Get disk usage stats
      -l, --loadstats       Get cluster load average stats
      -q, --quarantined     Get cluster quarantine stats
      --md5                 Get md5sum of servers ring and compare to local copy
      --sockstat            Get cluster socket usage stats
      -T, --time            Check time synchronization
      --all                 Perform all checks. Equal to
                            -arudlqT --md5 --sockstat --auditor --updater
                            --expirer --driveaudit --validate-servers
      -z ZONE, --zone=ZONE  Only query servers in specified zone
      -t SECONDS, --timeout=SECONDS
                            Time to wait for a response from a server
      --swiftdir=SWIFTDIR   Default = /etc/swift

.. _recon-replication:

For example, to obtain container replication info from all hosts in zone "3"::

    fhines@ubuntu:~$ swift-recon container -r --zone 3
    ===============================================================================
    --> Starting reconnaissance on 1 hosts
    ===============================================================================
    [2012-04-02 02:45:48] Checking on replication
    [failure] low: 0.000, high: 0.000, avg: 0.000, reported: 1
    [success] low: 486.000, high: 486.000, avg: 486.000, reported: 1
    [replication_time] low: 20.853, high: 20.853, avg: 20.853, reported: 1
    [attempted] low: 243.000, high: 243.000, avg: 243.000, reported: 1

---------------------------
Reporting Metrics to StatsD
---------------------------
.. highlight:: cfg

If you have a StatsD_ server running, Swift may be configured to send it
real-time operational metrics. To enable this, set the following
configuration entries (see the sample configuration files)::

    log_statsd_host = localhost
    log_statsd_port = 8125
    log_statsd_default_sample_rate = 1.0
    log_statsd_sample_rate_factor = 1.0
    log_statsd_metric_prefix = [empty-string]

If `log_statsd_host` is not set, this feature is disabled. The default values
for the other settings are given above. The `log_statsd_host` can be a
hostname, an IPv4 address, or an IPv6 address (not surrounded with brackets, as
this is unnecessary since the port is specified separately). If a hostname
resolves to an IPv4 address, an IPv4 socket will be used to send StatsD UDP
packets, even if the hostname would also resolve to an IPv6 address.

.. _StatsD: https://codeascraft.com/2011/02/15/measure-anything-measure-everything/
.. _Graphite: http://graphiteapp.org/
.. _Ganglia: http://ganglia.sourceforge.net/
The sample rate is a real number between 0 and 1 which defines the
probability of sending a sample for any given event or timing measurement.
This sample rate is sent with each sample to StatsD and used to
multiply the value. For example, with a sample rate of 0.5, StatsD will
multiply that counter's value by 2 when flushing the metric to an upstream
monitoring system (Graphite_, Ganglia_, etc.).
Some relatively high-frequency metrics have a default sample rate less than
one. If you want to override the default sample rate for all metrics whose
default sample rate is not specified in the Swift source, you may set
`log_statsd_default_sample_rate` to a value less than one. This is NOT
recommended (see next paragraph). A better way to reduce StatsD load is to
adjust `log_statsd_sample_rate_factor` to a value less than one. The
`log_statsd_sample_rate_factor` is multiplied by any sample rate (either the
global default or one specified by the actual metric logging call in the Swift
source) prior to handling. In other words, this one tunable can lower the
frequency of all StatsD logging by a proportional amount.
To get the best data, start with the default `log_statsd_default_sample_rate`
and `log_statsd_sample_rate_factor` values of 1 and only lower
`log_statsd_sample_rate_factor` if needed. The
`log_statsd_default_sample_rate` should not be used and remains for backward
compatibility only.
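
As a rough sketch, the relevant settings live in a server's ``[DEFAULT]``
section; the host and port below are placeholders, and only
`log_statsd_sample_rate_factor` is lowered::

    [DEFAULT]
    # Enable StatsD reporting by pointing at your StatsD server
    log_statsd_host = statsd.example.com
    log_statsd_port = 8125
    # Leave the per-metric defaults as shipped ...
    log_statsd_default_sample_rate = 1.0
    # ... and, only if StatsD load becomes a problem, scale every
    # sample rate down proportionally
    log_statsd_sample_rate_factor = 0.5
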
The metric prefix will be prepended to every metric sent to the StatsD server.
For example, with::

    log_statsd_metric_prefix = proxy01

the metric `proxy-server.errors` would be sent to StatsD as
`proxy01.proxy-server.errors`. This is useful for differentiating different
servers when sending statistics to a central StatsD server. If you run a local
StatsD server per node, you could configure a per-node metric prefix there and
leave `log_statsd_metric_prefix` blank.
Note that metrics reported to StatsD are counters or timing data (which are
sent in units of milliseconds). StatsD usually expands timing data out to min,
max, avg, count, and 90th percentile per timing metric, but the details of
this behavior will depend on the configuration of your StatsD server. Some
important "gauge" metrics may still need to be collected using another method.
For example, the `object-server.async_pendings` StatsD metric counts the generation
of async_pendings in real-time, but will not tell you the current number of
async_pending container updates on disk at any point in time.
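
If you do need that point-in-time number, one rough way to get it is to count
the pending update files directly on a storage node; this sketch assumes the
default ``/srv/node`` mount point and the standard ``async_pending*``
directory layout::

    # Count queued container updates across all devices and storage policies
    find /srv/node/*/async_pending* -type f 2>/dev/null | wc -l
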
Note also that the set of metrics collected, their names, and their semantics
are not locked down and will change over time. For more details, see the
service-specific tables listed below:

.. toctree::

    metrics/account_auditor
    metrics/account_reaper
    metrics/account_server
    metrics/account_replicator
    metrics/container_auditor
    metrics/container_replicator
    metrics/container_server
    metrics/container_sync
    metrics/container_updater
    metrics/object_auditor
    metrics/object_expirer
    metrics/object_reconstructor
    metrics/object_replicator
    metrics/object_server
    metrics/object_updater
    metrics/proxy_server

Or, view :doc:`metrics/all` as one page.
------------------------
Debugging Tips and Tools
------------------------
When a request is made to Swift, it is given a unique transaction id. This
id should be in every log line that has to do with that request. This can
be useful when looking at all the services that are hit by a single request.
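
For example, if a client reports a failure with transaction id ``tx123abc``
(a made-up value here), you can pull every related log line on a node with
something like the following; the log path is an assumption, since many
deployments send Swift logs to syslog instead::

    grep tx123abc /var/log/swift/*.log
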
If you need to know where a specific account, container or object is in the
cluster, `swift-get-nodes` will show the location where each replica should be.
If you are looking at an object on the server and need more info,
`swift-object-info` will display the account, container, replica locations
and metadata of the object.
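
For example (the account, container and object names, and the on-disk path,
are placeholders)::

    # Where should each replica of this object live?
    swift-get-nodes /etc/swift/object.ring.gz AUTH_test my_container my_object

    # Inspect an object's .data file found on a storage node
    swift-object-info /srv/node/<device>/objects/<partition>/<suffix>/<hash>/<timestamp>.data
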
If you are looking at a container on the server and need more info,
`swift-container-info` will display the account, container, replica locations
and metadata of the container.
If you are looking at an account on the server and need more info,
`swift-account-info` will display the account, replica locations
and metadata of the account.
If you want to audit the data for an account, `swift-account-audit` can be
used to crawl the account, checking that all containers and objects can be
found.
-----------------
Managing Services
-----------------
Swift services are generally managed with ``swift-init``. The general usage is
``swift-init <service> <command>``, where service is the Swift service to
manage (for example object, container, account, proxy) and command is one of:

=============== ===============================================
Command         Description
--------------- -----------------------------------------------
start           Start the service
stop            Stop the service
restart         Restart the service
shutdown        Attempt to gracefully shutdown the service
reload          Attempt to gracefully restart the service
reload-seamless Attempt to seamlessly restart the service
=============== ===============================================

A graceful shutdown or reload will allow all server workers to finish any
current requests before exiting. The parent server process exits immediately.
A seamless reload will make new configuration settings active, with no window
where client requests fail due to there being no active listen socket.
The parent server process will re-exec itself, retaining its existing PID.
After the re-exec'ed parent server process binds its listen sockets, the old
listen sockets are closed and old server workers finish any current requests
before exiting.
There is also a special case of ``swift-init all <command>``, which will run
the command for all Swift services.
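
For example, a few common invocations::

    # Restart the object server on this node
    swift-init object restart

    # Gracefully reload the proxy server so in-flight requests can finish
    swift-init proxy reload

    # Start every Swift service configured on this node
    swift-init all start
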
In cases where there are multiple configs for a service, a specific config
can be managed with ``swift-init <service>.<config> <command>``.
For example, when a separate replication network is used, there might be
``/etc/swift/object-server/public.conf`` for the object server and
``/etc/swift/object-server/replication.conf`` for the replication services.
In this case, the replication services could be restarted with
``swift-init object-server.replication restart``.
--------------
Object Auditor
--------------
On system failures, the XFS file system can sometimes truncate files it's
trying to write and produce zero-byte files. The object-auditor will catch
these problems but in the case of a system crash it would be advisable to run
an extra, less rate-limited sweep to check for these specific files. You can
run this command as follows::

    swift-object-auditor /path/to/object-server/config/file.conf once -z 1000

``-z`` means to only check for zero-byte files at 1000 files per second.
At times it is useful to be able to run the object auditor on a specific
device or set of devices. You can run the object-auditor as follows::

    swift-object-auditor /path/to/object-server/config/file.conf once --devices=sda,sdb

This will run the object auditor on only the sda and sdb devices. This param
accepts a comma-separated list of values.
-----------------
Object Replicator
-----------------
At times it is useful to be able to run the object replicator on a specific
device or partition. You can run the object-replicator as follows::

    swift-object-replicator /path/to/object-server/config/file.conf once --devices=sda,sdb

This will run the object replicator on only the sda and sdb devices. You can
likewise run that command with ``--partitions``. Both params accept a
comma-separated list of values. If both are specified they will be ANDed
together. These can only be run in "once" mode.
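
For example, to replicate only two specific partitions on a single device
(the partition numbers here are placeholders)::

    swift-object-replicator /path/to/object-server/config/file.conf once \
        --devices=sda --partitions=12345,67890
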
-------------
Swift Orphans
-------------
Swift Orphans are processes left over after a reload of a Swift server.
For example, when upgrading a proxy server you would probably finish
with a ``swift-init proxy-server reload`` or ``/etc/init.d/swift-proxy
reload``. This kills the parent proxy server process and leaves the
child processes running to finish processing whatever requests they
might be handling at the time. It then starts up a new parent proxy
server process and its children to handle new incoming requests. This
allows zero-downtime upgrades with no impact to existing requests.
The orphaned child processes may take a while to exit, depending on
the length of the requests they were handling. However, sometimes an
old process can be hung up due to some bug or hardware issue. In these
cases, these orphaned processes will hang around
forever. ``swift-orphans`` can be used to find and kill these orphans.
2012-04-10 12:25:01 -07:00
2017-07-12 12:14:45 -07:00
``swift-orphans`` with no arguments will just list the orphans it finds
that were started more than 24 hours ago. You shouldn't really check
for orphans until 24 hours after you perform a reload, as some
requests can take a long time to process. ``swift-orphans -k TERM`` will
send the SIGTERM signal to the orphaned processes, or you can ``kill
-TERM`` the pids yourself if you prefer.
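
A typical cleanup pass the day after a rolling reload might look like::

    # List orphaned Swift processes started more than 24 hours ago
    swift-orphans

    # Once you are happy with the list, send them SIGTERM
    swift-orphans -k TERM
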
You can run ``swift-orphans --help`` for more options.
------------
Swift Oldies
------------
Swift Oldies are processes that have just been around for a long
time. There's nothing necessarily wrong with this, but it might
indicate a hung process if you regularly upgrade and reload/restart
services. You might have so many servers that you don't notice when a
reload/restart fails; ``swift-oldies`` can help with this.
For example, if you upgraded and reloaded/restarted everything 2 days
ago, and you've already cleaned up any orphans with ``swift-orphans``,
you can run ``swift-oldies -a 48`` to find any Swift processes still
around that were started more than 2 days ago and then investigate
them accordingly.
-------------------
Custom Log Handlers
-------------------
Swift supports setting up custom log handlers for services by specifying a
comma-separated list of functions to invoke when logging is set up. It does so
via the ``log_custom_handlers`` configuration option. Logger hooks invoked are
passed the same arguments as Swift's ``get_logger`` function, as well as the
``logging.Logger`` and ``SwiftLogAdapter`` objects:

============== ===============================================
Name           Description
-------------- -----------------------------------------------
conf           Configuration dict to read settings from
name           Name of the logger received
log_to_console (optional) Write log messages to console on stderr
log_route      Route for the logging received
fmt            Override log format received
logger         The logging.getLogger object
adapted_logger The LogAdapter object
============== ===============================================

.. note::

    The instance of ``SwiftLogAdapter`` that wraps the ``logging.Logger``
    object may be replaced with cloned instances during runtime, for example to
    use a different log prefix with the same ``logging.Logger``. Custom log
    handlers should therefore not modify any attributes of the
    ``SwiftLogAdapter`` instance other than those that will be copied if it is
    cloned.

A basic example that sets up a custom logger might look like the
following:

.. code-block:: python

    def my_logger(conf, name, log_to_console, log_route, fmt, logger,
                  adapted_logger):
        my_conf_opt = conf.get('some_custom_setting')
        my_handler = third_party_logstore_handler(my_conf_opt)
        logger.addHandler(my_handler)
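
To enable the hook, point ``log_custom_handlers`` at the function's import
path in the relevant server configuration; here, assuming the function above
lives in a hypothetical module ``my_package.logging_hooks``::

    [DEFAULT]
    # Comma-separated list of hook functions to call during logger setup
    log_custom_handlers = my_package.logging_hooks.my_logger
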
See :ref:`custom-logger-hooks-label` for sample use cases.
------------------------
Securing OpenStack Swift
------------------------
Please refer to the security guide at https://docs.openstack.org/security-guide
and in particular the `Object Storage
<https://docs.openstack.org/security-guide/object-storage.html>`__ section.