The target development platform has changed to Ubuntu 14.04 [1].
This patch makes the suggested SAIO platform the same.
Also, remove the pointer to the wiki page for other-platform install
instructions, which either redirects back to this SAIO doc or leads
to another wiki page and then a dead link.
[1] I0a96bcf692bb240f3ab5aab7fefd294a07735a83
DocImpact
Change-Id: I9f96104b5437c1f1f28f924c048ef83cf03338f4
vagrant-swift-all-in-one is being used and maintained by a number of Swift
developers, and it has an open source license.
The Ansible playbook project serves a similar goal, but it's based on a Fedora
distribution and includes Swift-on-File support.
Drive-by fix for the Swift-on-File link which has migrated to stackforge.
Change-Id: Id7478d58adcead57cf56ac4e1d05c6556c8c9b7b
Updated proxy-server.conf-sample with the correct default. Also
updated the note on the overview-auth doc page.
Change-Id: I5cd62a7a118a28f7b58f47b8d8d4d963f6bc7347
Bring docs in line with changes to auth_token config
defaults made in I7076fa03ab531cbb1114918f75113620b65590dc
Change-Id: Ia21685ebd1f3ed7bdba9de2ebac9fdcce8495949
This change modifies swift-ring-builder and introduces a new format for the
sub-commands (search, list_parts, set_weight, set_info and remove), in
addition to the add sub-command, so that hostnames can be used in place
of an IP address in those sub-commands.
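A minimal sketch of the same idea at the RingBuilder API level (the CLI
sub-commands wrap calls like these); the hostname, port, device and weight
below are made-up example values:

    from swift.common.ring import RingBuilder

    # A hostname is now accepted where an IP address used to be required.
    builder = RingBuilder(part_power=10, replicas=3, min_part_hours=1)
    builder.add_dev({'region': 1, 'zone': 1,
                     'ip': 'storage01.example.com', 'port': 6000,
                     'device': 'sdb1', 'weight': 100.0})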
The account reaper, container synchronizer, and replicators were also
updated so that they still have a way to identify a particular device
as being "local".
Previously this was Change-Id:
Ie471902413002872fc6755bacd36af3b9c613b74
Change-Id: Ieff583ffb932133e3820744a3f8f9f491686b08d
Co-Authored-By: Alex Pecoraro <alex.pecoraro@emc.com>
Implements: blueprint allow-hostnames-for-nodes-in-rings
To make it easier for Swift operators to pinpoint problematic devices,
a policy index is recorded in the log files of proxy and storage servers
for each user request that is related to a storage policy.
This patch simply adds a 'storage_policy_index' field to the log format.
If no policy index is specified, '-' is output in this field.
Extra fix: the doc about the storage node log line now properly reflects
the 'server_pid' field.
DocImpact
Change-Id: I7286ae85bcbcec73b5377dc115cbdb0f57d1b025
Implements: blueprint logging-policy-number
Current behavior:
* If a data/body is present in the manifest file PUT request, the data/body
gets saved to disk, just like for a normal object.
* Generally, the data in the manifest file is never served in a GET response.
However, when the manifest object's path is itself part of the prefix, the GET
response contains the data present in the manifest file as well.
* The query param multipart-manifest=get, meant to retrieve the SLO manifest,
also works in the case of a DLO manifest. Hence a COPY request with the
multipart-manifest=get query param would actually copy the DLO manifest.
How things should have been:
* The DLO manifest object is supposed to have no content and only the
X-Object-Manifest metadata header.
* The query param multipart-manifest=get is SLO-specific and shouldn't play
any role in DLO.
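For context, a minimal sketch of how a DLO manifest is normally created with
python-swiftclient (endpoint, credentials and container names below are made
up): the segments are uploaded under a common prefix, and the manifest itself
is a zero-byte object carrying only the X-Object-Manifest header:

    from swiftclient.client import Connection

    conn = Connection(authurl='http://127.0.0.1:8080/auth/v1.0',
                      user='test:tester', key='testing')
    conn.put_container('segments')
    conn.put_container('downloads')

    # Upload the segments, then the empty manifest pointing at their prefix.
    for i, chunk in enumerate((b'part one ', b'part two')):
        conn.put_object('segments', 'myobj/%08d' % i, contents=chunk)
    conn.put_object('downloads', 'myobj', contents=b'',
                    headers={'X-Object-Manifest': 'segments/myobj/'})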
This change intends only to document the current behaviour, not change it,
assuming there are users who have previously saved some content in a DLO
manifest file and/or have been using multipart-manifest=get to fetch
and/or COPY the DLO manifest file with its content.
Change-Id: I0f6e175ad7752169ecf94df949336e0665928df7
Signed-off-by: Prashanth Pai <ppai@redhat.com>
Whitelisting or blacklisting an account for ratelimiting currently involves a
conf change and a proxy reload, which is a pain. You can now just set one of
these on the account:
X-Account-Sysmeta-Global-Write-Ratelimit: WHITELIST
or
X-Account-Sysmeta-Global-Write-Ratelimit: BLACKLIST
NOTE:
The existing proxy config settings account_whitelist
and account_blacklist will continue to work.
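A minimal sketch of setting the header with Swift's internal client (sysmeta
headers are not settable through the ordinary external API, so an operator
path is assumed here; the conf path and account name are hypothetical):

    from swift.common.internal_client import InternalClient

    swift = InternalClient('/etc/swift/internal-client.conf',
                           'ratelimit-admin', 3)
    swift.set_account_metadata(
        'AUTH_troublesome',
        {'X-Account-Sysmeta-Global-Write-Ratelimit': 'BLACKLIST'})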
Change-Id: I532663f1d2c75d03170c5fdb9b330416822fbc88
Output a dispersion report that shows how many parts have each replica count
at each tier, along with some additional context. The max_dispersion is also a
good canary for what a reasonable overload might be.
Also display a warning on rebalance if the ring's dispersion is sub-optimal.
The primitive form of the dispersion graph is cached on the builder, but the
dispersion command will build it on the fly if you have a ring that was last
rebalanced before the change.
Also add a --force option to rebalance to make it write a ring even if less
than 1% of parts moved.
Try to clarify dispersion and balance a little bit in the ring section of
the architectural overview.
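As a rough illustration of what the report measures (a toy sketch, not
Swift's actual implementation): a partition counts against dispersion when
more than one of its replicas lands on the same server.

    from collections import Counter

    def undispersed_pct(replica2part2dev, dev2server):
        # replica2part2dev[r][p] is the device holding replica r of part p.
        parts = len(replica2part2dev[0])
        bad = 0
        for part in range(parts):
            servers = Counter(dev2server[row[part]]
                              for row in replica2part2dev)
            if any(n > 1 for n in servers.values()):
                bad += 1
        return 100.0 * bad / parts

    # Toy ring: 4 partitions, 3 replicas, 6 devices on 3 servers.
    assignment = [[0, 1, 2, 3],
                  [2, 3, 4, 2],
                  [4, 5, 0, 5]]
    servers = {0: 'A', 1: 'A', 2: 'B', 3: 'B', 4: 'C', 5: 'C'}
    print(undispersed_pct(assignment, servers))  # 25.0 -- partition 3 has
                                                 # two replicas on server B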
Co-Authored-By: Christian Schwede <christian.schwede@enovance.com>
Co-Authored-By: Darrell Bishop <darrell@swiftstack.com>
Change-Id: I7696df25d092fac56588080722e0a4167ed2c824
The ring builder's placement algorithm has two goals: first, to ensure
that each partition has its replicas as far apart as possible, and
second, to ensure that partitions are fairly distributed according to
device weight. In many cases, it succeeds in both, but sometimes those
goals conflict. When that happens, operators may want to relax the
rules a little bit in order to reach a compromise solution.
Imagine a cluster of 3 nodes (A, B, C), each with 20 identical disks,
and using 3 replicas. The ring builder will place 1 replica of each
partition on each node, as you'd expect.
Now imagine that one disk fails in node C and is removed from the
ring. The operator would probably be okay with remaining at 1 replica
per node (unless their disks are really close to full), but to
accomplish that, they have to multiply the weights of the other disks
in node C by 20/19 to make C's total weight stay the same. Otherwise,
the ring builder will move partitions around such that some partitions
have replicas only on nodes A and B.
If 14 more disks failed in node C, the operator would probably be okay
with some data not living on C, as a 4x increase in storage
requirements is likely to fill disks.
This commit introduces the notion of "overload": how much extra
partition space can be placed on each disk *over* what the weight
dictates.
For example, an overload of 0.1 means that a device can take up to 10%
more partitions than its weight would imply in order to make the
replica dispersion better.
Overload only has an effect when replica-dispersion and device weights
come into conflict.
The overload is a single floating-point value for the builder
file. Existing builders get an overload of 0.0, so there will be no
behavior change on existing rings.
In the example above, imagine the operator sets an overload of 0.112
on his rings. If node C loses a drive, each other drive can take on up
to 11.2% more data. Splitting the dead drive's partitions among the
remaining 19 results in a 5.26% increase, so everything that was on
node C stays on node C. If another disk dies, then we're up to an
11.1% increase, and so everything still stays on node C. If a third
disk dies, then we've reached the limits of the overload, so some
partitions will begin to reside solely on nodes A and B.
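The arithmetic behind those percentages, as a quick sketch:

    # Each failed disk's partitions are spread over the node's surviving disks.
    disks_per_node = 20
    for failed in (1, 2, 3):
        survivors = disks_per_node - failed
        print('%d failed: +%.2f%% per surviving disk'
              % (failed, 100.0 * failed / survivors))
    # 1 failed: +5.26%, 2 failed: +11.11%, 3 failed: +17.65% -- the third
    # failure exceeds an overload of 0.112, so some partitions leave node C.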
DocImpact
Change-Id: I3593a1defcd63b6ed8eae9c1c66b9d3428b33864
After the discussion at https://review.openstack.org/#/c/129384/, move this
content to the doc directory in the swift repo.
This lets us eliminate the object-api repo along with all the <service>-
api repos and move content to audience-centric locations.
Change-Id: Ia0d9973847f7409a02dcc1a0e19400a3c3ecdf32
Noticed that the slo and dlo middleware were placed before
tempauth; they should be placed after it.
DocImpact
Change-Id: Ia931e2280125d846f248b23e219aebad14c66210
Signed-off-by: Thiago da Silva <thiago@redhat.com>
Instead of recommending that resetswift be edited to replace "/dev/sdb1" with
"/srv/swift-disk", use an environment variable. This way I can
set SAIO_BLOCK_DEVICE=/srv/swift-disk in my .bashrc, and then when I'm
testing out changes to resetswift, I don't need to remember to edit
the modified script, nor do I end up submitting changes with the wrong
default in there.
The variable defaults to /dev/sdb1, so if you use the script unmodified
and don't set SAIO_BLOCK_DEVICE, nothing changes for you.
Change-Id: I741a8c91c2c54a4f32bc391cd794ef4206402753
Let's use the full project name to avoid confusion with the recently added
Swiftbrowser based on AngularJS.
Change-Id: Ib07338268a1593bc2882908b49c1fb4a130ff43d
Simply replacing a failed disk takes a very long time if the ring is not
changed, because all data will be replicated to a single new disk. This extends
the time to recover from missing replicas, and becomes even more important with
bigger disks.
This patch updates the doc to include a faster alternative: setting the weight
of the failed disk to 0. In this case the partitions from the failed disk are
distributed and replicated to the remaining disks in the cluster, and because
each disk gets only a fraction of the partitions, recovery is also much faster.
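A minimal sketch of the zero-weight approach using the RingBuilder API
directly (the swift-ring-builder CLI wraps the same calls; the builder path
and device id are made-up examples):

    from swift.common.ring import RingBuilder

    builder = RingBuilder.load('/etc/swift/object.builder')
    builder.set_dev_weight(7, 0.0)   # device id 7: the failed disk
    builder.rebalance()
    builder.save('/etc/swift/object.builder')
    # Then write out the ring (builder.get_ring().save(...)) and push it to
    # all nodes so the failed disk's partitions drain to the rest of the
    # cluster.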
Change-Id: I16617756359771ad89ca5d4690b58a014f481d9b
These docs currently say we target Ubuntu 10.04 and an eventlet version
that our requirements file does not allow. Update these versions.
Change-Id: I052b6561f88ec90f865454e426032f1baf4586c0
The multi-node install was so horribly outdated it still
referred to Ubuntu 10.04. docs.openstack.org
has a usable Swift installation guide - link to it until such
time as this page can be fixed.
Change-Id: I29fa334d9ffc9b63c8f31c664e7509b2f2577574
This change allows the user to pass a "--no-overlap" parameter when
running the tool multiple times. It will increase the coverage by
whatever is specified in the dispersion_coverage field of the conf
file, in a manner where existing containers/objects are left in place
and no partition is populated more than once.
Related-Bug: #1233045
Change-Id: I139fed2f4c967ba18d073b7ecd1e946ed4da1271